1,588,501,819,000
I have multiple CentOS machines on a network and I need to be able to push one script at a time to all of the machines at once. I have looked at something like Atera, but that is for Windows, is neither free nor open source, and has far more functionality than I need. Can anyone recommend software I can use for this?
I can recommend Ansible for that. With Ansible, developed by Red Hat, you can send commands to and configure multiple servers from one place. I can also recommend Terraform. Plenty of guides for both tools are available online. You can also check out Puppet, Chef, and Salt.
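Before adopting a full tool, the same one-script push can be sketched with a plain ssh loop (the host names, the root login, and the /tmp path below are placeholder assumptions, not part of the answer):

```shell
# push_script SCRIPT HOST...  -- copy SCRIPT to each host and run it there.
# Assumes key-based ssh access to each machine.
push_script() {
    script=$1; shift
    for host in "$@"; do
        scp "$script" "root@$host:/tmp/$(basename "$script")" &&
        ssh "root@$host" "sh /tmp/$(basename "$script")"
    done
}
# e.g. push_script myscript.sh centos1 centos2 centos3
```

Tools like Ansible add inventory management, parallelism, and error reporting on top of essentially this idea.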
Remote script for multiple machines
1,588,501,819,000
Is there anything (specifically on Linux machines) that can jump from client to server and perform a root-level task, such as stopping a service, in an automated manner (regardless of the service)? Example: a DHCP server running dnsmasq issues an address to a client. The client receives the address and somehow tells the server "ok, we're all good, you can stop dnsmasq now". Can this be done? If so, how? I'm fully aware this would be a colossal security hole, and I'm guessing some kind of authentication would need to be involved in many cases, but I'm mostly curious whether it's even possible. I can't think of an instance where I've seen it done.
A secure way is to do this through ssh. Create a service-specific user ("dhcpkiller") and a private/public key pair so that the client can run this command:

ssh dhcpkiller@dhcpserver pkill dnsmasq

This one-line script gets triggered by your DHCP environment, which can happen in various ways depending on the DHCP suite on the client. Quoting from the dhclient / RHEL manpages:

Immediately after dhclient brings an interface UP with a new IP address, subnet mask, and routes, in the REBOOT/BOUND states, it will check for the existence of an executable /etc/dhcp/dhclient-up-hooks script, and source it if found. This script can handle DHCP options in the environment that are not handled by default.

When dhclient needs to invoke the client configuration script, it defines a set of variables in the environment, and then invokes /sbin/dhclient-script. In all cases, $reason is set to the name of the reason why the script has been invoked. The following reasons are currently defined: MEDIUM, PREINIT, BOUND, RENEW, REBIND, REBOOT, EXPIRE, FAIL, STOP, RELEASE, NBI and TIMEOUT
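A sketch of the client-side hook, assuming the dhclient-up-hooks mechanism quoted in the answer (the dhcpkiller user is the answer's example; $reason is set by dhclient, and $new_dhcp_server_identifier is assumed to hold the server's address, falling back to a placeholder host name):

```shell
# Contents for /etc/dhcp/dhclient-up-hooks (sourced by dhclient).
notify_dhcp_server() {
    case $reason in
        BOUND|REBOOT)
            # Address acquired: tell the server it can stop dnsmasq now.
            ssh "dhcpkiller@${new_dhcp_server_identifier:-dhcpserver}" pkill dnsmasq
            ;;
    esac
}
notify_dhcp_server
```

Locking the dhcpkiller key to a forced command in authorized_keys would narrow the security hole the asker worries about.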
is it possible to have a dhcp client stop the dhcp service of the server after receiving an address?
1,655,792,954,000
I have set up a directory and some files with setfacl.

jobq@workstation:~/Pool$ getfacl /etc/jobq
getfacl: Removing leading '/' from absolute path names
# file: etc/jobq
# owner: root
# group: jobq
user::rwx
user:jobq:rw-
group::r-x
group:jobq:rwx
mask::rwx
other::r-x

jobq@workstation:~/Pool$ sudo getfacl /etc/jobq/log.txt
getfacl: Removing leading '/' from absolute path names
# file: etc/jobq/log.txt
# owner: root
# group: jobq
user::rw-
group::rw-
group:jobq:rwx
mask::rwx
other::r--

jobq@workstation:~/Pool$ groups jobq

However, when I run a command like ls -al /etc/jobq, I'm getting permission errors:

ls: cannot access '/etc/jobq/log.txt': Permission denied
total 0
d????????? ? ? ? ? ? .
d????????? ? ? ? ? ? ..

Since user jobq is in the group jobq, they should have access to the directory. What am I misunderstanding? How can I fix this?
The problem comes from this ACL entry on /etc/jobq:

user:jobq:rw-

The named-user entry takes precedence over the group entries, and it lacks x, so user jobq can't "search" the directory; that is what stops ls from showing its contents. To fix this, you need to add the x permission. See Execute vs Read bit. How do directory permissions in Linux work? for details. See also Restrictive "group" permissions but open "world" permissions? to understand why the group permissions don't help here. Thus another solution would be to drop the named-user ACL entry for jobq and rely on the group permissions instead.
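In concrete terms, either give the named-user entry search permission or drop it so the group entry applies (a sketch against the question's /etc/jobq):

```shell
# Option 1: add execute/search to the named-user entry. Capital X only
# adds x on directories and files that are already executable.
fix_named_user() { setfacl -m "u:$1:rwX" "$2"; }

# Option 2: remove the named-user entry entirely, so the group:jobq:rwx
# entry takes effect instead.
drop_named_user() { setfacl -x "u:$1" "$2"; }

# e.g. fix_named_user jobq /etc/jobq
```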
ls throws errors when trying to access directory guarded with ACL
1,655,792,954,000
CentOS release 5.10.

getfacl --omit-header Shared/
user::rws
user:foo:rw
group::rw
mask:rw

I want to remove user:foo:rw from this entry. setfacl -m user:foo:0 simply removes the permissions:

getfacl --omit-header Shared/
user::rws
user:foo:---
group::rw
mask:rw

Is this possible?
You need setfacl -x user:foo Shared/ instead. -m just modifies the permissions, whereas -x removes the entry for the user entirely.
Remove User from ACLs for a specific Directory
1,655,792,954,000
I have created a backup user (let's just call it jeremy) on an Ubuntu server. Then I've created a backup dir, containing files from several different servers:

/backup
|--server1
|  |--daily_backup
|  |--weekly_backup
|--server2
|  |--daily_backup
|  |--weekly_backup

I've then granted jeremy access to the backup dir with the following command:

setfacl -R -m u:jeremy:rwx backup/

However... if I log in as root and create a new directory, for instance /backup/server2/monthly_backup, then jeremy won't have access to that folder. Is there a way to make it so that both root and jeremy can read, write and execute everything in the /backup directory?
One must also remember to create a default ACL entry, which is inherited by all new filesystem objects:

setfacl -d -m u:jeremy:rwx backup/
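To cover both the files that already exist and anything created later, the access and default forms can be combined (a sketch using the question's names):

```shell
# Access ACL for the existing tree, default ACL for future entries.
# Capital X grants execute only on directories (and already-executable files),
# so plain files don't become executable.
grant_backup_access() {
    setfacl -R -m "u:$1:rwX" "$2"      # existing files and directories
    setfacl -R -d -m "u:$1:rwX" "$2"   # inherited by new ones
}
# e.g. grant_backup_access jeremy /backup
```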
Grant access for user to folder (even files created after permission is granted)
1,655,792,954,000
I have a folder /stuff that is owned by root:stuff with setgid set, so all new folders have their group set to stuff. I want:

New files to have rw-rw---- (user: read and write; group: read and write; other: none)
New folders to have rwxrwx--- (user: read, write, and execute; group: read, write, and execute; other: none)

If I set default ACLs with setfacl, they seem to apply to both files and folders. For me this is fine for Other, since both files and folders get no permissions:

setfacl -d -m o::--- /stuff

But what do I do for User and Group? If I do something like the above, it will be set on all files and folders. And I can't use umask. I have a shared drive. I am trying to make it so folks in stuff can read/write/execute but nobody else (Other) can. And I want to make sure that by default files do not get the execute bit set, regardless of what the account's umask is.
There is no way to differentiate between files and directories using setfacl alone. Instead, you can work around the issue by using inotify-tools to detect newly created files/dirs, then apply the correct ACLs to each one:

1. Install the inotify-tools package.

2. Restore the default ACLs of the /stuff directory:

sudo setfacl -bn /stuff

3. Set the setgid bit:

sudo chmod g+s /stuff

4. Run the following script in the background for testing purposes; for a permanent solution, wrap it within a service.

#!/bin/bash
sudo inotifywait -m -r -e create --format '%w%f' /stuff | while read NEW
do
    # when a new dir is created
    if [ -d "$NEW" ]; then
        sudo setfacl -m u::rwx "$NEW"
        sudo setfacl -m g::rwx "$NEW"
    # when a new file is created
    elif [ -f "$NEW" ]; then
        sudo setfacl -m u::rw "$NEW"
        sudo setfacl -m g::rw "$NEW"
    fi
    # set no permissions for others
    sudo setfacl -m o::--- "$NEW"
done
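For the "wrap it within a service" step, a minimal systemd unit sketch could look like this (the unit name and the script path /usr/local/sbin/stuff-acl-watch.sh are assumptions; the script would be the inotifywait loop from the answer, saved to that path):

```ini
# /etc/systemd/system/stuff-acl-watch.service  (hypothetical name and path)
[Unit]
Description=Apply per-type ACLs to new entries under /stuff
After=local-fs.target

[Service]
ExecStart=/usr/local/sbin/stuff-acl-watch.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now stuff-acl-watch.service.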
How do I set different default permissions for files vs folders using setfacl?
1,655,792,954,000
As the root user, I've executed this command:

setfacl -R -d -m u:MYUSER:rwx /myfolder

When I then change to that user (su MYUSER) and try to remove a file (rm /myfolder/somefile.sql), I get this error:

rm: cannot remove 'somefile.sql': Permission denied

I can't mv it either; then I get this error:

mv: cannot move 'somefile.sql' to 'someotherfile.sql': Permission denied

I've added MYUSER to /etc/sudoers, so when I run sudo rm /myfolder/somefile.sql I'm prompted for MYUSER's password and then it works. But I need it to work without sudo, so I can run it as a crontab job. If I run getfacl /myfolder, I get this output:

# file: /myfolder/
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:user:MYUSER:rwx   <-- That looks right, doesn't it?
default:group::r-x
default:mask::rwx
default:other::r-x

... Why in the name of Zeus can't I remove files in this directory?
MYUSER has a default ACL entry, but no effective (access) entry: the -d flag only sets defaults that are inherited by files created later. You need to run both:

setfacl -R -d -m u:MYUSER:rwx /myfolder
setfacl -R -m u:MYUSER:rwx /myfolder

Note that the second command does not have the default (-d/--default) flag. This should result in getfacl giving:

# file: /myfolder/
# owner: root
# group: root
user::rwx
user:MYUSER:rwx
group::r-x
other::r-x
default:user::rwx
default:user:MYUSER:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
Unable to remove or change files after setfacl rwx-command
1,655,792,954,000
I have a group called webdev and I want only root and the members of the group webdev to have write access on the directory /web. Now, here's the problem:

# chmod -R u=rwX,go=rX /web
# ls -l /web
total 4
-rw-r--r--. 1 root root 165 Mar  8 12:29 index.html
# ls -ld /web
drwxr-xr-x. 2 root root 24 Mar  8 12:34 /web
# setfacl -R -m g:webdev:rwX /web
# ls -ld
drwxrwxr-x+ 2 root root 24 Mar  8 12:34 .
# getfacl /web
getfacl: Removing leading '/' from absolute path names
# file: web
# owner: root
# group: root
user::rwx
group::r-x
group:webdev:rwx
mask::rwx
other::r-x

So, the moment I allow the group webdev write permissions on the folder, ls -ld shows that my folder is now writable for group root. However, this is contradicted by the output of getfacl /web, where the group still has the (correct) permissions r-x. So, what's going on?
ls -ld shows that my folder is now writable for group root.

Wrong. It shows, with the + symbol in that position, that the file has ACLs. Since the file has ACLs, the middle three permission letters displayed by ls are the mask, not the file-group permissions.

Further reading:
- View extended ACL for a file with '+' in ls -l output
- Winfried Trümper (1999-02-28). Summary about Posix.1e
- Portable Applications Standards Committee of the IEEE Computer Society (October 1997). Draft Standard for Information Technology—Portable Operating System Interface (POSIX)—Part 1: System Application Program Interface (API)—Amendment: Protection, Audit and Control Interfaces [C Language]. IEEE 1003.1e, Draft 17.
- Craig Rubin (1989-08-18). Rationale for Selecting Access Control List Features for the Unix System. NCSC-TG-020-A. DIANE Publishing. ISBN 9780788105548.
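To make the mask's effect explicit, getfacl's -e option prints the effective rights of each entry (a small sketch; /web is the question's directory):

```shell
# Print "#effective:" comments showing what the mask leaves of each
# named-user and group entry.
inspect_effective() { getfacl -e "$1"; }
# e.g. inspect_effective /web
```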
How to stop setfacl from making my directory writable for group?
1,655,792,954,000
I have the following code in a bash file: sudo setfacl -m g:jobq:x /usr/local/sbin/jobq_submit sudo setfacl -m g:jobq:x /usr/local/sbin/jobq_server sudo setfacl -m g:jobq:x /usr/local/sbin/jobq_server_stop sudo setfacl -m g:jobq:x /usr/local/sbin/jobq_server_start sudo setfacl -m g:jobq:x /usr/local/sbin/jobq_status sudo setfacl -m g:jobq:x /usr/local/sbin/jobq_stop sudo setfacl -x g:jobq:rw /usr/local/sbin/jobq_submit sudo setfacl -x g:jobq:rw /usr/local/sbin/jobq_server sudo setfacl -x g:jobq:rw /usr/local/sbin/jobq_server_stop sudo setfacl -x g:jobq:rw /usr/local/sbin/jobq_server_start sudo setfacl -x g:jobq:rw /usr/local/sbin/jobq_status sudo setfacl -x g:jobq:rw /usr/local/sbin/jobq_stop sudo setfacl -x g:jobq:rw /usr/local/sbin/jobq_submit The lines with -m do not give an error message, but the lines with -x say setfacl: Option -x: Invalid argument near character 8 What is wrong here?
setfacl -x only takes a reference to the ACL to remove, not the permissions associated with the ACL: sudo setfacl -x g:jobq /usr/local/sbin/jobq_submit
setfacl -m works but setfacl -x does not work
1,655,792,954,000
On Linux, I want a default ACL of --- for others on a directory, and

setfacl -d -m o:--- coldir/

works fine. On FreeBSD 11.0 with UFS2:

setfacl -d -m other:--- coldir/
setfacl: other:---: Invalid argument
setfacl -m m::others:--- coldir/
setfacl: malformed ACL: invalid "tag" field
setfacl: m::others:---: Invalid argument

How do I solve this using setfacl?
Solution found; the syntax was a little different:

sudo setfacl -d -m u::rwx,g::rwx,o::,mask::rwx coldir/

New directories are created as 770, perfect for a collaborative dir.
Freebsd setfacl
1,655,792,954,000
I would like to restrict a user (user sftp-user, group webgroup) to SFTP access for the /var/www/html directory on CentOS 8. They should have read and write permissions so they can make changes to website files. I am able to successfully jail the user to their homedir with ChrootDirectory %h, but I can't quite get it to work when I change it to ChrootDirectory /var/www/html in /etc/ssh/sshd_config. The user gets this error when trying to sftp:

fatal: bad ownership or modes for chroot directory "/var/www/html"

What I did is try to use setfacl to give the group webgroup rw- permissions for /var/www/html (though not recursively; everything inside that folder is owned by sftp-user:webgroup). How do I get it to work? I've seen some solutions suggest using mount; I'm not sure if that's the better approach. Also, the html folder is owned by root:root, and everything inside it is owned by sftp-user:webgroup as I mentioned. Is this the correct ownership? For the sake of completeness, here's the output of getfacl /var/www/html:

# file: html/
# owner: root
# group: root
user::rwx
group::r-x
group:webgroup:rw-
mask::rwx
other::r-x

Thank you.
I got this figured out. It looks like I was conflating different factors. Trying to work with sftp and ACL at the same time caused me to misdiagnose the problem. All credit to Ulrich Schwarz for essentially solving the problem. To make it work, I had to make sure every directory in the path /var/www/html was owned by root as chroot jail apparently requires. The other problem was that the permissions on html were changed to 775 at one point while I was trying to figure this out. I changed them to 755 and I was able to sftp into ChrootDirectory with no problem. On the ACL front, the files in /var/www/html are owned by apache:apache, while the user is sftp-user in group webgroup. I simply gave the user rwX permissions: setfacl -Rm u:sftp-user:rwX html -m is modify, -R is recursive.
How do I jail a user in /var/www/html?
1,655,792,954,000
I have a program on Ubuntu Linux that creates a logs/error.log file with permissions 660 (rw-rw----) or 640 (rw-r-----). But I want the file permissions always to be 666 (rw-rw-rw-), including when the program creates the file. Restrictions: I can't modify the program, so I can't change the mode it uses for new files. The program can recreate the file at any time, so a single manual execution of chmod is not suitable. I need to add permission bits, not subtract them, so umask and setfacl are not suitable.
You can use inotify to watch the file (or the directory where it is created), so that you can update its permissions whenever it is (re)created.
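A minimal sketch of that approach, assuming the inotify-tools package is installed (the directory path is a placeholder; the file name comes from the question, and the loop only ever adds bits, never subtracts):

```shell
# Watch the logs directory; whenever error.log is created or moved in,
# add the world read/write bits on top of whatever mode it was given.
watch_and_fix() {
    dir=$1 file=$2
    inotifywait -m -e create -e moved_to --format '%f' "$dir" |
    while read -r name; do
        [ "$name" = "$file" ] && chmod a+rw "$dir/$name"
    done
}
# e.g. watch_and_fix /path/to/logs error.log &
```

There is a small window between creation and the chmod, so anything reading the file immediately after creation may still hit the old mode.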
How to force set (not subtract) permissions to future files?
1,655,792,954,000
I have a user user1 that has a bunch of folders in their home directory, including /home/user1/data/special/files. I have another user, user2. I want:

user2 to be able to read/write files/folders in /home/user1/data/special/files

user2 to be able to ls:
- / and see /home
- /home and see /home/user1
- /home/user1 and see /home/user1/data but not other files/folders in /home/user1
- /home/user1/data and see /home/user1/data/special but not other files/folders in /home/user1/data
- /home/user1/data/special and see /home/user1/data/special/files but not other files/folders in /home/user1/data/special

I used setfacl to set permissions on /home/user1/data/special/files, but it does not let them browse the path:

sudo setfacl -Rm d:u:user1:rwx,u:user1:rwx /home/user1/data/special/files

I do not want to change the user/group owner of any of these folders because they ultimately should belong to user1. If it helps, here are more specific details: I'm using a web app/service running on this box as user2. In the web app/service, I have to browse/navigate to a folder with the files I want to see. So I have to go to /, then /home, then /home/user1, then /home/user1/data, then /home/user1/data/special, then /home/user1/data/special/files; I cannot just enter /home/user1/data/special/files directly. Is this possible?
It's possible.

chown -R user1:user1 /home/user1

This step makes user1 the owner of /home/user1 and all underlying directories.

chmod 755 /home/user1
chmod 700 /home/user1/data
chmod 700 /home/user1/data/special
chmod 700 /home/user1/data/special/files

And this sets the necessary permissions: 755 means the user has rwx (7) and the others have only r-x (5). This is necessary so that they can traverse the home directory.

setfacl -Rm u:user2:rwx /home/user1/data/special/files
setfacl -Rdm u:user2:rwx /home/user1/data/special/files

With these settings, user2 will be able to read and write files in /home/user1/data/special/files. Additionally, user2 will be able to traverse the directory tree from / to /home/user1/data/special/files and see the relevant directories along the way.
How can I give another user read/write access to a specific sub folder and the ability to ls the tree but only for the folder they have access to?
1,655,792,954,000
I have a NAS with a directory for each member of my house. My user is a member of each member's group, so I have access to all users' folders. I want that, when I place a file inside a user's directory, the file takes its ownership from the parent directory, or gets specific permissions. My wife's directory is "zoe_folder" with ownership zoe:zoe and permissions rwxrwx---. When I run, e.g. as root, the command touch file.txt inside my wife's directory (or its subdirectories), file.txt gets ownership root:root and permissions rw-r--r--. I want ownership zoe:zoe (or, as a workaround, permissions rw-rw-rw-), and of course without running chmod or chown. It is a NAS and the client PCs are Windows; I cannot log in to the NAS console every time to change permissions. Any ideas?
I think what you are looking for is to:

1. Use setgid on each user's directory, so that each new file in that directory will have the same group as the directory; and
2. Set your system's umask to 0002, as it appears to be 0022. umask removes permissions from the defaults, which are 0777 for directories and 0666 for files. With the new setting, the default permissions change from 0644 to 0664 for files, and from 0755 to 0775 for directories. My understanding of the details you've given is that this will apply to your system.

To put a setgid on all the subdirectories, use the find command as follows, but ensure that your starting directory is the one just on top of the users' directories, so that a simple ls will list them all; using the wrong starting directory can cause a bit of pain reversing all that has been done:

find ./ -mindepth 1 -maxdepth 1 -type d -exec chmod --preserve-root g+s '{}' \;

The given options do the following:
- -type d returns everything of filetype 'directory';
- -mindepth 1 prevents the starting directory from being listed, so that its permissions will not change;
- -maxdepth 1 lists the first-level subdirectories, but does not descend into their own subdirectories;
- -exec executes the following command on every item that passes the tests, which is what '{}' stands for; and
- --preserve-root is a protection in chmod to prevent the permission change from accidentally being applied to the root directory (and potentially the whole filesystem).

If you're not sure what will be affected, simply run the find command without the -exec argument, like so: find ./ -mindepth 1 -maxdepth 1 -type d. This will list every item that would be passed to whatever command you use with -exec.

Possible duplicate: with a small search I found this question may have been (partially) answered here: How to set default file permissions for all folders/files in a directory?
The accepted answer there refers to a step-by-step tutorial on setting default permissions for a directory: https://www.linuxquestions.org/questions/linux-desktop-74/applying-default-permissions-for-newly-created-files-within-a-specific-folder-605129/
Change owner to a file when saved in a specific folder
1,655,792,954,000
I want to change the ACL and the default ACL for all directories and files in a base directory. In other answers (such as this one), the -R flag is used. However, I get:

$ setfacl -R -m u::rwx my_dir/
setfacl: unknown option -- R
Try `setfacl --help' for more information.

# this is different from what's done on, e.g., Ubuntu
# setfacl -R -d -m u::rwx mydir/
$ setfacl -R -m d:u::rwx mydir/

How can I recursively set the ACL permissions on Cygwin?
To repeat the command for every file and directory contained in a directory, you can use find and its -exec option:

find my_dir -exec setfacl -m u::rwx {} \;
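Since default ACLs only apply to directories, and you may not want the execute bit set on plain files, the find approach can also split by type (a sketch; whether Cygwin's setfacl accepts the d: prefix in a -m entry is an assumption worth verifying with setfacl --help):

```shell
# Directories: owner rwx plus a matching default entry.
# Plain files: owner rw without execute.
set_tree_acls() {
    find "$1" -type d -exec setfacl -m u::rwx,d:u::rwx {} \;
    find "$1" -type f -exec setfacl -m u::rw- {} \;
}
# e.g. set_tree_acls my_dir
```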
setfacl -R doesn't work on Cygwin
1,655,792,954,000
I ran sudo setfacl -d -m g::rwx /tmp, then went into /tmp, created a file and looked at its permissions:

-rw-rw-rw-. 1 jm wheel 0 Aug 24 10:26 test_file

Then I ran sudo chmod g+rwx test_file and looked at the permissions:

-rwxrwxrwx. 1 jm wheel 0 Aug 24 10:26 test_file

Shouldn't the first command, setfacl -d -m g::rwx /tmp, have given it executable permissions in the first place? Why did I have to run chmod to get them?
why did I have to run chmod to get them?

Because you are doing it wrong! Try:

setfacl -Rdm g:maulinglawns:rwx tmp/
touch tmp/foo
getfacl tmp/foo
# file: tmp/foo
# owner: maulinglawns
# group: maulinglawns
user::rw-
group::r-x                #effective:r--
group:maulinglawns:rwx    #effective:rw-
mask::rw-
other::r--

And here is what ls says (note the + sign at the end; it indicates that we indeed have an ACL set):

ls -l tmp/
totalt 0
-rw-rw-r--+ 1 maulinglawns maulinglawns 0 aug 24 18:55 foo

Please read the man page for setfacl, especially the ACL ENTRIES part.
why doesn't setfacl give executable permission?
1,293,015,404,000
When I want Linux to notice newly created partitions without rebooting, I have several tools available to force a refresh of the kernel "partition cache":

partx -va /dev/sdX
kpartx -va /dev/sdX
hdparm -z /dev/sdX
blockdev --rereadpt /dev/sdX
sfdisk -R /dev/sdX (deprecated)
partprobe /dev/sdX
...

I'm not sure about the difference between these techniques, but I think they don't all use the same ioctl, like BLKRRPART or BLKPG. So, what is the difference between those ioctls?
BLKRRPART tells the kernel to reread the partition table (see man 4 sd). With BLKPG you can add, resize and delete individual partitions as you please (in the kernel's view, not on disk, of course). You have to tell the kernel the offset and size of each partition, which implies that you must have parsed the partition table yourself beforehand. See the Linux kernel's include/uapi/linux/blkpg.h. I personally use partprobe (part of parted), which uses the latter approach, probably to support partition tables not supported by the kernel.
Forced reread of partition table: difference between BLKRRPART and BLKPG ioctl? (Linux)
1,293,015,404,000
I want to capture only the disks from lsblk. As shown here, fd0 also appears even though it's not really a usable disk. In this case we can just do lsblk | grep disk | grep -v fd0, but maybe we'd miss some other devices that need to be filtered out with grep -v. What other devices could appear in lsblk | grep disk that are not really disks?

lsblk | grep disk
fd0 2:0  1   4K 0 disk
sda 8:0  0 100G 0 disk
sdb 8:16 0   2G 0 disk /Kol
sdc 8:32 0   2G 0 disk
sdd 8:48 0   2G 0 disk
sde 8:64 0   2G 0 disk
sdf 8:80 0   2G 0 disk

lsblk
NAME             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0                2:0    1     4K  0 disk
sda                8:0    0   150G  0 disk
├─sda1             8:1    0   500M  0 part /boot
└─sda2             8:2    0 149.5G  0 part
  ├─vg00-yv_root 253:0    0  19.6G  0 lvm  /
  ├─vg00-yv_swap 253:1    0  15.6G  0 lvm  [SWAP]
  └─vg00-yv_var  253:2    0   100G  0 lvm  /var
sdb                8:16   0     2G  0 disk /Kol
sdc                8:32   0     2G  0 disk
sdd                8:48   0     2G  0 disk
sde                8:64   0     2G  0 disk
sdf                8:80   0     2G  0 disk
sr0               11:0    1  1024M  0 rom
If you want only disks identified as SCSI by the device major number 8, without device partitions, you could search on device major rather than the string "disk": lsblk -d | awk '/ 8:/' where the -d (or --no-deps) option indicates to not include device partitions. For reasonably recent linux systems, the simpler lsblk -I 8 -d should suffice, as noted by user Nick.
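lsblk can also do the filtering itself: -e excludes devices by major number (major 2 is the floppy driver), -d suppresses partitions, and matching the TYPE column exactly avoids substring surprises. A sketch:

```shell
# List real disks only: exclude major 2 (floppy), skip partition rows,
# drop headings, and keep rows whose TYPE column is exactly "disk".
list_disks() {
    lsblk -e 2 -d -n -o NAME,TYPE | awk '$2 == "disk" { print $1 }'
}
```

Adding more majors to -e (e.g. -e 2,11 to also drop sr0's major) extends the exclusion without any grep -v chains.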
lsblk + capture only the disks
1,293,015,404,000
I wanted to convert some logical partitions to extended ones, so I was following this accepted answer. However, at the step of backing up my current partition table, I messed up and typed the following instead of what was written:

sfdisk -f /dev/sda > /mnt/parts.txt

Which resulted in this:

Disk /dev/sda: 30401 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 304- 304- 2441214+ 82 Linux swap / Solaris
/dev/sda2 * 304+ 565- 262- 2097152 83 Linux
/dev/sda3 565+ 4486- 3921- 31495168 83 Linux
/dev/sda4 4486+ 30401- 25916- 208163840 5 Extended
/dev/sda5 4486+ 24026- 19540- 156954624 83 Linux

As you can see, the units are cylinders and blocks, which probably means a loss of precision compared to a correct export in sectors. The problem is, I have broken my partition table, and now I have to exploit this incomplete (or is it?) backup. So far, I've tried to rewrite the partition table using sectors as the unit (cylinders don't lead anywhere), knowing that one block is two sectors (I don't know if that's general, but exporting the partition table in sectors tells you that 1 sector = 512 bytes, and 1 block = 1024 bytes...).

root@debian:/home/user# sfdisk -u S /dev/sda
Checking that no-one is using this disk right now ...
OK

Disk /dev/sda: 30401 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System
/dev/sda1 1 4882429 4882429 82 Linux swap / Solaris
/dev/sda2 4882430 9076733 4194304 83 Linux
/dev/sda3 9076734 72067069 62990336 83 Linux
/dev/sda4 72067070 488394749 416327680 83 Linux

Input in the following format; absent fields get a default value.
<start> <size> <type [E,S,L,X,hex]> <bootable [-,*]> <c,h,s> <c,h,s>
Usually you only need to specify <start> and <size> (and perhaps <type>).
/dev/sda1 :1 4882429 S
/dev/sda1 1 4882429 4882429 82 Linux swap / Solaris
/dev/sda2 :4882430 4194304 L *
/dev/sda2 * 4882430 9076733 4194304 83 Linux
/dev/sda3 :9076734 62990336
/dev/sda3 9076734 72067069 62990336 83 Linux
/dev/sda4 :72067070 416327680 E
/dev/sda4 72067070 488394749 416327680 5 Extended
/dev/sda5 :72067071 313909248
/dev/sda5 72067071 385976318 313909248 83 Linux
/dev/sda6 :
/dev/sda6 385976320 488394749 102418430 83 Linux
/dev/sda7 :
No room for more

New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System
/dev/sda1 1 4882429 4882429 82 Linux swap / Solaris
/dev/sda2 * 4882430 9076733 4194304 83 Linux
/dev/sda3 9076734 72067069 62990336 83 Linux
/dev/sda4 72067070 488394749 416327680 5 Extended
/dev/sda5 72067071 385976318 313909248 83 Linux
/dev/sda6 385976320 488394749 102418430 83 Linux

Warning: partition 1 does not end at a cylinder boundary
Warning: partition 2 does not start at a cylinder boundary
Warning: partition 2 does not end at a cylinder boundary
Warning: partition 3 does not start at a cylinder boundary
Warning: partition 3 does not end at a cylinder boundary
Warning: partition 4 does not start at a cylinder boundary
Warning: partition 4 does not end at a cylinder boundary
Warning: partition 5 does not end at a cylinder boundary
Warning: partition [6] does not start at a cylinder boundary
Warning: partition [6] does not end at a cylinder boundary
Warning: partition 6 does not end at a cylinder boundary
Do you want to write this to disk? [ynq] y
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

It says no partition starts/stops on a cylinder boundary. I don't know exactly what this means, but as there are + and - signs in the original export, meaning the numbers were rounded, I assume this is normal.
I also issued the same mistaken command to see if the output was the same:

Disk /dev/sda: 30401 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 303- 304- 2441214+ 82 Linux swap / Solaris
/dev/sda2 * 303+ 565- 262- 2097152 83 Linux
/dev/sda3 565+ 4485- 3921- 31495168 83 Linux
/dev/sda4 4485+ 30401- 25916- 208163840 5 Extended
/dev/sda5 4485+ 24025- 19540- 156954624 83 Linux
/dev/sda6 24025+ 30401- 6376- 51209215 83 Linux

It's close, but it's not exactly the same. Also, gparted doesn't seem to recognize any partition filesystem (everything is "unknown"). And how can /dev/sda1 be 304- cylinders large but end at cylinder 303-? I guess I'm close to the solution, but I can't get the exact numbers that are required, probably because I miscalculated something or I'm doing it wrong. I can't change them one by one to see which combination works (well, I could, but it would require some bash coding and processing time, and I still wouldn't know what was wrong). I have recent backups of the most important data on this disk, but if I could fix it without reinstalling and copying files, that would be nice.
This is going to be tricky to fix by hand. I hope you haven't modified any more data on this disk, apart from the broken partition table you wrote to it.

Using sfdisk, fdisk, etc. to create a backup of the partition table is a good idea (when you don't accidentally type the wrong command :) ). But for extra insurance I like to back up the boot sectors of my drives using dd.

Are you sure that sda1 starts at block 1, or was that a guess? It used to be common that only 1 block was used at the start of a disk, since that's all you need to hold the MBR and (primary) partition table, but in recent years it's been common for partitioning software to reserve more space, e.g. a starting sector of 63 for the first partition is not unusual. I've also seen partitioning software (gparted, IIRC) reserve a megabyte at the start of a drive and then force all subsequent partitions onto megabyte boundaries.

On older systems it was important for partitions to start and stop on cylinder boundaries. In other words, the unpartitioned region at the start of the disk should be a whole number of cylinders, and so should each subsequent primary partition; there will generally also be unallocated space at the end of the disk. That's generally not been an issue for many years, but a lot of partitioning software still mentions it, just in case you're interested. :)

However, partitions do have to start and stop on sector boundaries, and that makes analysis of your block-oriented data in the first listing a lot easier. Thus 2441214+ blocks can only refer to 2441214.5 blocks = 4882429 sectors = 2499803648 bytes.

But rather than trying to fix this by hand, you should seriously consider using a tool like testdisk. You may even have it already installed on your distro; if not, it should be in your repos.
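The dd insurance mentioned above can be sketched like this (device and backup paths are placeholders; the restore variant copies only the 64-byte partition table plus the 2-byte signature starting at offset 446, leaving the boot code alone):

```shell
# Save the first sector (MBR: boot code + partition table + signature).
backup_mbr() { dd if="$1" of="$2" bs=512 count=1; }

# Put back only bytes 446-511 (partition table + 0x55AA signature),
# without truncating or touching the rest of the target.
restore_ptable() { dd if="$1" of="$2" bs=1 skip=446 seek=446 count=66 conv=notrunc; }

# e.g. backup_mbr /dev/sda /root/sda-mbr.bin
```

Note this covers only primary partitions; extended/logical layouts also keep EBRs further into the disk, which a single-sector backup does not capture.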
Approximate partition table backup with sfdisk
1,293,015,404,000
# sfdisk /dev/mmcblk0p1

Welcome to sfdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Checking that no-one is using this disk right now ... FAILED

This disk is currently in use - repartitioning is probably a bad idea.
Umount all file systems, and swapoff all swap partitions on this disk.
Use the --no-reread flag to suppress this check.

But I cannot find where it is used:

# grep mmc /proc/mounts
# grep mmc /proc/swaps
# lsof /dev/mmc*
# fuser /dev/mmc*

sfdisk is from util-linux 2.29.2-1+deb9u1.

# strace -f sfdisk /dev/mmcblk0p1
...
write(1, "\33[0mChanges will remain in memor"..., 115) = 115
write(1, "\n", 1) = 1
fstat64(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(179, 1), ...}) = 0
ioctl(3, BLKRRPART) = -1 EINVAL (Invalid argument)
write(1, "Checking that no-one is using th"..., 62) = 62
ioctl(3, BLKRRPART) = -1 EINVAL (Invalid argument)

Indeed, if the message were accurate, the error code would be EBUSY, not EINVAL. You get "Invalid argument" because you passed /dev/mmcblk0p1, which is a partition. sfdisk edits the table that lists all partitions, so you need to pass the whole device, i.e.

# sfdisk /dev/mmcblk0
sfdisk - "This disk is currently in use" - but nothing seems to be using it?
1,293,015,404,000
I am trying to understand what I did wrong with the following mount command.

Take the following file from here: http://elinux.org/CI20_Distros#Debian_8_2016-02-02_Beta — simply download the img file. Then I verified that the md5sum is correct per the upstream page:

$ md5sum nand_2016_06_02.img
3ad5e53c7ee89322ff8132f800dc5ad3  nand_2016_06_02.img

Here is what file has to say:

$ file nand_2016_06_02.img
nand_2016_06_02.img: x86 boot sector; partition 1: ID=0x83, starthead 68, startsector 4096, 3321856 sectors, extended partition table (last)\011, code offset 0x0

So let's check the start of the first partition of this image:

$ /sbin/fdisk -l nand_2016_06_02.img
Disk nand_2016_06_02.img: 1.6 GiB, 1702887424 bytes, 3325952 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0212268d

Device               Boot Start     End Sectors  Size Id Type
nand_2016_06_02.img1       4096 3325951 3321856  1.6G 83 Linux

In my case the unit size is 512 and Start is 4096, which means the offset is at byte 2097152. In which case the following should just work, but doesn't:

$ mkdir /tmp/img
$ sudo mount -o loop,offset=2097152 nand_2016_06_02.img /tmp/img/
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

And dmesg reveals:

$ dmesg | tail
[ 1632.732163] loop: module loaded
[ 1854.815436] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem
[ 1854.815452] EXT4-fs (loop0): bad geometry: block count 967424 exceeds size of device (415232 blocks)

None of the solutions listed here worked for me: resize2fs or sfdisk. What did I miss?
Some other experiments that I tried:

$ dd bs=2097152 skip=1 if=nand_2016_06_02.img of=trunc.img

which leads to:

$ file trunc.img
trunc.img: Linux rev 1.0 ext2 filesystem data (mounted or unclean), UUID=960b67cf-ee8f-4f0d-b6b0-2ffac7b91c1a (large files)

and the same story again:

$ sudo mount -o loop trunc.img /tmp/img/
mount: wrong fs type, bad option, bad superblock on /dev/loop2,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

I cannot use resize2fs since I am required to run e2fsck first:

$ /sbin/e2fsck -f trunc.img
e2fsck 1.42.9 (28-Dec-2013)
The filesystem size (according to the superblock) is 967424 blocks
The physical size of the device is 415232 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
Once you have extracted the filesystem you are interested in (using dd), simply adapt the file size (967424 * 4096 = 3962568704):

$ truncate -s 3962568704 trunc.img

And then simply:

$ sudo mount -o loop trunc.img /tmp/img/
$ sudo find /tmp/img/
/tmp/img/
/tmp/img/u-boot-spl.bin
/tmp/img/u-boot.img
/tmp/img/root.ubifs.9
/tmp/img/root.ubifs.4
/tmp/img/root.ubifs.5
/tmp/img/root.ubifs.7
/tmp/img/root.ubifs.2
/tmp/img/root.ubifs.6
/tmp/img/lost+found
/tmp/img/root.ubifs.3
/tmp/img/boot.ubifs
/tmp/img/root.ubifs.0
/tmp/img/root.ubifs.1
/tmp/img/root.ubifs.8

Another, simpler solution is to truncate the original img file directly:

$ truncate -s 3964665856 nand_2016_06_02.img
$ sudo mount -o loop,offset=2097152 nand_2016_06_02.img /tmp/img/

where 3962568704 + 2097152 = 3964665856.
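The size arithmetic can be double-checked in the shell, with the numbers taken from the e2fsck and fdisk output above:

```shell
block_count=967424        # filesystem size in 4 KiB blocks, per the superblock
block_size=4096
offset=$(( 4096 * 512 ))  # partition start: sector 4096, 512-byte sectors

echo $(( block_count * block_size ))            # bytes for trunc.img
echo $(( block_count * block_size + offset ))   # bytes for the whole nand image
```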
bad geometry: block count 967424 exceeds size of device (415232 blocks)
1,293,015,404,000
I'm studying partitioning under Linux, starting with sfdisk. If I copy a partition table from one drive to another, it'll copy the device UUID and the PTUUIDs for each partition, but if I'm creating a new device, I can specify a UUID for GPT drives, but not MBR drives. This leads me to think that UUIDs and PTUUIDs are not necessary for MBR drives. What is the situation with that? And if I need a UUID for the drive, plus PTUUIDs for the partitions, how do I do that by hand? I see that sfdisk allows me to specify a UUID for GPT devices, but only a label for MBR devices. How can I create a UUID for an MBR and what can I do to make sure the partition PTUUIDs are created based on that device UUID? I can't find out how to make the primary UUID for the device or how to make PTUUIDs based on it for the partitions.
This leads me to think that UUIDs and PTUUIDs are not necessary for MBR drives.

That's mostly correct. A PTUUID serves no purpose to MBR itself, but it is used by the operating system. MBR doesn't store PARTUUIDs on the partitions at all — just their position in the table.

And if I need a UUID for the drive, plus PTUUIDs for the partitions, how do I do that by hand? I see that sfdisk allows me to specify a UUID for GPT devices, but only a label for MBR devices.

Actually sfdisk does allow you to set the "disk-id". MBR partition UUIDs are just the disk-id followed by the position in the table, so if I set a disk-id to 12345, the first PARTUUID will be 12345-1. See the sfdisk manual:

--disk-id device [id]
    Change the disk identifier. If id is not specified, then print the current identifier. The identifier is UUID for GPT or unsigned integer for MBR.

Background

The naming convention used by Linux erroneously uses the term "UUID", which is very misleading. Universally Unique Identifier is a very specific standardised format of unique identifier, with strictly standardised rules on generation that depend on the version embedded in each ID. To avoid confusion I'll avoid using the acronym "UUID" for anything other than the Linux partitioning use.

The Linux naming convention is:

"PTUUID" is the unique identifier for the partition table itself, embedded in the table. GPT uses Universally Unique Identifiers for this; MBR does not. However MBR does have a short PTUUID, because humans are bad at picking unique names for their drives.

"PARTUUID" is the unique identifier for a partition inside the partition table. MBR does not record any such ID. However it only has a maximum of 4 primary partitions, so conventionally partitions can be referred to by their position in the table (1, 2, 3, 4). To identify an MBR partition on a system you therefore need both the PTUUID of the MBR table and the position in the table.
E.g. on my virtual machine I have a drive ee7273d0 with partition ee7273d0-1. This can be found under /dev/disk/by-partuuid/ee7273d0-1.

GPT does record these, and uses Universally Unique Identifiers for them. Because of the strict rules of Universally Unique Identifiers, the PTUUID is not needed to identify the partition: they will always be unique as long as they are properly generated.

"UUID" is the unique identifier embedded in the file system, LUKS header, LVM physical volume, etc. An unformatted partition will not have one; likewise a LUKS-encrypted drive with a detached header will not have one.

An ext4 file system uses Universally Unique Identifiers for its UUID. FAT and FAT32 use a much shorter ID that could be the same as another drive somewhere. LVM doesn't seem to use Universally Unique Identifiers, but its IDs are so long and randomised that the risk of collision with another drive is negligible.
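The MBR naming scheme is simple enough to spell out in a few lines — a sketch with a hypothetical disk-id; note that blkid usually reports the slot number zero-padded to two digits:

```shell
# MBR PARTUUID = "<disk-id>-<slot>", one per primary partition slot
disk_id=ee7273d0
for slot in 1 2 3 4; do
    printf '%s-%02d\n' "$disk_id" "$slot"   # e.g. ee7273d0-01
done
```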
Are UUIDs and PTUUIDs important for MBR disks? If so, how do I create them on my own?
1,293,015,404,000
Say I have a LUKS-encrypted USB drive (full disk). I am looking for a way to hide the fact that it is a LUKS device.

Which strategy would you use to hide this? My approach would be to alter the LUKS header to make it unrecognizable (and be able to go back easily). What about exchanging a defined portion of the bits of the header, for instance?

PS: Say I don't want to use Truecrypt.
So, without opening the whole topic of deniable encryption (the cryptsetup FAQ has a section dedicated to that topic too), and since you're asking to simply hide the LUKS device (if that's sufficient), I'd use the luksHeaderBackup and luksHeaderRestore options from cryptsetup(8).

Example for an already created LUKS device with an ext4 file system on it:

# file -Ls /dev/vg0/lv1 /dev/mapper/test
/dev/vg0/lv1:     LUKS encrypted file, ver 1 [aes, cbc-plain, sha1] UUID: 0b52a420-742d-4f0f-87f1-29c51d8b2232
/dev/mapper/test: Linux rev 1.0 ext4 filesystem data, UUID=840fc046-df2f-428d-8069-faa239c2f9f3 (extents) (large files) (huge files)

Backup the LUKS header:

# cryptsetup luksHeaderBackup /dev/vg0/lv1 --header-backup-file test.bkp
# ls -go test.bkp
-r-------- 1 1052672 Dec 17 18:41 test.bkp

Now we can overwrite this many bytes of the beginning of the LUKS partition. Our root partition is formatted with ext4 as well, so let's just use that:

# dd if=/dev/sda1 of=/dev/vg0/lv1 bs=1 count=1052672
# file -Ls /dev/sda1 /dev/vg0/lv1
/dev/sda1:    Linux rev 1.0 ext4 filesystem data, UUID=f0cf7fa2-9977-4e1f-938d-1e8f71934cce (needs journal recovery) (extents) (large files) (huge files)
/dev/vg0/lv1: Linux rev 1.0 ext4 filesystem data, UUID=f0cf7fa2-9977-4e1f-938d-1e8f71934cce (needs journal recovery) (extents) (large files) (huge files)

Now our LUKS partition looks like our root partition. Hm — they both have the same header now, even the UUID matches; maybe a bit too similar. Of course we could have filled the LUKS partition with something else, you get the point.

The important part is to back up the LUKS header file (test.bkp), because without it we won't be able to unlock the partition again. Once you feel safe to unlock the LUKS partition again, get the backup file (maybe from your own USB drive) and simply restore the header:

# cryptsetup luksHeaderRestore /dev/vg0/lv1 --header-backup-file test.bkp

...and unlock the partition again.
Can I hide the fact that my USB is LUKS encryted?
1,293,015,404,000
I am not able to successfully redirect STDOUT+STDERR on commands that operate on disks. Standard redirection, which always works elsewhere, is somehow not catching the output. Two practical examples:

Example 1:

# wipefs --all --force /dev/sda >>/var/log/custom.log 2>&1
[   20.169018 ] sda: sda1

Example 2:

# mount --verbose --options defaults --types ext4 /dev/sda1 /path/is/here >>/var/log/custom.log 2>&1
[   30.947410 ] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)

Interestingly, this only happens when touching disks somehow. All other redirects within the script work as expected. Any ideas?
This is not output from the command, this is kernel log messages. Since the messages aren't coming from the command, they aren't affected by redirection. Kernel log messages normally go into log files, and important messages are additionally shown on the console. Exactly what “important” and “the console” mean depend on the logging configuration. If you're using sysklogd, the configuration file is /etc/syslog.conf. If you're using rsyslog, it's /etc/rsyslog.conf and /etc/rsyslog.d/*. If you're using systemd's built-in logging, it's /etc/systemd/journal.conf. The kernel can also print logs to the console directly without going through a logging daemon, which can be configured with dmesg --console-… (but this is usually done indirectly via the configuration of the logging daemon).
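If all you want is to keep kernel messages off the console (rather than reconfigure the logging daemon), the console log level can be inspected and lowered via /proc — a sketch; the field meanings below are the standard kernel ones, and the write requires root:

```shell
# four fields: console_loglevel, default_message_loglevel,
# minimum_console_loglevel, default_console_loglevel
cat /proc/sys/kernel/printk
# echo 1 > /proc/sys/kernel/printk   # root: only emergency messages reach the console
```

The messages still land in the kernel ring buffer and the log files; only console printing is affected.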
Redirecting output from within disk operations does not work
1,293,015,404,000
sfdisk --delete $disk

works on Ubuntu 18.04 or later. What is the equivalent command in Ubuntu 16.04 LTS? (Ubuntu 16.04 LTS: --delete missing; Ubuntu 18.04 LTS: --delete present.)
Solution 1: build and install sfdisk from source (to be able to use a more recent version):

wget https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/v2.35/util-linux-2.35.tar.gz
tar -xvf util-linux-2.35.tar.gz
cd util-linux-2.35
./configure
make
make install
/usr/local/bin/sfdisk --delete $disk

Solution 2: use fdisk:

# list disks and partitions
fdisk -l
# open the target disk with fdisk
fdisk /dev/target-disk
# then use the d command to delete the partition you want to remove
# then use the w command to save the changes
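If neither building util-linux nor interactive fdisk is an option, the 64-byte primary partition table (at offset 446 of the first sector) can also be zeroed directly with dd — a destructive, last-resort sketch, demonstrated here on an image file rather than a real disk:

```shell
truncate -s 1M disk.img
# pretend there are partition entries (placeholder bytes, for the demo only)
printf 'FAKE' | dd of=disk.img bs=1 seek=446 conv=notrunc status=none
# zero the four 16-byte partition slots
dd if=/dev/zero of=disk.img bs=1 seek=446 count=64 conv=notrunc status=none
# verify: the slot area now reads back as zeros
dd if=disk.img bs=1 skip=446 count=4 status=none | od -A n -t x1 | tr -d ' \n'
```

On a real device, back up the first sector first and double-check the device path before writing.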
Missing argument option in sfdisk?
1,293,015,404,000
I'm creating an image in memory (/tmp is a tmpfs): $ dd if=/dev/zero of=/tmp/sdcard.img bs=512 count=3147775 It's supposed to hold a partition table and the first 3 partitions of a device $ losetup /dev/loop0 /tmp/sdcard.img $ dd if=bootloader.img of=/dev/loop0 bs=512 The first 2048 sectors contain a partition table. $ fdisk -l /tmp/sdcard.img Disk /dev/loop0: 1.5 GiB, 1611660800 bytes, 3147775 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x0000de21 Device Boot Start End Sectors Size Id Type /dev/loop0p1 2048 1050623 1048576 512M c W95 FAT32 (LBA) /dev/loop0p2 1050624 2099199 1048576 512M 83 Linux /dev/loop0p3 2099200 3147775 1048576 512M 83 Linux But I have a problem. I want to add a fourth partition /dev/loop0p4, that starts at 3147775 and ends at 37748724. I don't want to physically create the partition, but I want to modify the partition table so that it thinks this drive exists. However, when I use fdisk for this purpose it complains Value out of range. How can I force fdisk to just do it. I don't care that the partition table is invalid. I'm going to be dding this to a larger disk and then formatting it later. I'd like the parition table to be part of what I dd to that larger disk (there are reason for this, would rather not delve into those details). All I want to know is how I can write a partition table with these arbitrary values without pulling out the hex editor.
You can make a file as big or as small as you want - especially on a linux tmpfs. df -h /tmp Filesystem Size Used Avail Use% Mounted on tmpfs 12G 472K 12G 1% /tmp We can just make a sparse file. for cmd in \ 'dd bs=1024k seek=20k of=' \ 'ls -slh ' do eval "$cmd/tmp/file" echo done </dev/null 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.000152051 s, 0.0 kB/s 0 -rw-r--r-- 1 mikeserv mikeserv 20G Dec 24 20:19 /tmp/file See? It's using 0 blocks of disk space, but its apparent size is 20 gigabytes. You can then just fdisk /tmp/file. I just created a partition table on it. Here's fdisk -l: Disk /tmp/file: 20 GiB, 21474836480 bytes, 41943040 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x057d787a Device Boot Start End Sectors Size Id Type /tmp/file1 2048 20973567 20971520 10G 83 Linux /tmp/file2 20973568 31459327 10485760 5G 5 Extended /tmp/file3 31459328 41943039 10483712 5G 83 Linux After the table is written it does use a little bit of space: ls -lsh /tmp/file 8.0K -rw-r--r-- 1 mikeserv mikeserv 20G Dec 24 20:21 /tmp/file You wouldn't know, though. 
df -h /tmp Filesystem Size Used Avail Use% Mounted on tmpfs 12G 480K 12G 1% /tmp And you can sparsely extend a file in the same way: for cmd in \ 'dd bs=1024k seek=30k of=' \ 'ls -slh ' 'fdisk -l ' do eval "$cmd/tmp/file" echo done </dev/null 0+0 records in 0+0 records out 0 bytes (0 B) copied, 9.8239e-05 s, 0.0 kB/s 8.0K -rw-r--r-- 1 mikeserv mikeserv 30G Dec 26 14:24 /tmp/file Disk /tmp/file: 30 GiB, 32212254720 bytes, 62914560 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x057d787a Device Boot Start End Sectors Size Id Type /tmp/file1 2048 20973567 20971520 10G 83 Linux /tmp/file2 20973568 31459327 10485760 5G 5 Extended /tmp/file3 31459328 41943039 10483712 5G 83 Linux
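truncate(1) achieves the same sparse creation and extension as the seek-only dd, if you find it more readable:

```shell
truncate -s 1G file.img    # sparse: apparent size 1 GiB, ~0 blocks allocated
truncate -s 2G file.img    # extend sparsely, again without writing data
stat -c %s file.img        # apparent size in bytes
```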
How can I manipulate a partition table file without fdisk checking the validity of it?
1,293,015,404,000
We can print all disks with the following (on our RHEL machine):

fdisk -lu | grep "Disk /dev"
Disk /dev/sda: 247.0 GB, 246960619520 bytes, 482344960 sectors
Disk /dev/sdb: 4294 MB, 4294967296 bytes, 8388608 sectors
Disk /dev/sdc: 4294 MB, 4294967296 bytes, 8388608 sectors
Disk /dev/sdd: 4294 MB, 4294967296 bytes, 8388608 sectors
Disk /dev/sde: 4294 MB, 4294967296 bytes, 8388608 sectors

As we can see above, disk sizes are given in GB and not in GiB.

Note: GB is the traditional, metric style of measurement, with 1 GB equaling 1000³ bytes. GiB is the binary method, which is the way computers measure data, at 1024³ bytes.

Is there any option in fdisk or sfdisk, or maybe some other manipulation, to print the disk sizes in GiB and not in GB?
From fdisk man page: In the case the size is specified in bytes than the number may be followed by the multiplicative suffixes KiB=1024, MiB=1024*1024, and so on for GiB, TiB, PiB, EiB, ZiB and YiB. The "iB" is optional, e.g., "K" has the same meaning as "KiB". For backward compatibility fdisk also accepts the suffixes KB=1000, MB=1000*1000, and so on for GB, TB, PB, EB, ZB and YB. These 10^N suffixes are deprecated. IOW, you'll need to patch it and it doesn't sound likely that your patch will ever be accepted/merged. Here's the obvious rationale behind it: all modern storage devices operate with either 512 or 4096 bytes sectors - the latter becoming more and more ubiquitous. 1MB is not even divisible by either 512 or 4096.
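Short of patching fdisk, you can always convert the byte counts it prints yourself; for the 247.0 GB disk above (this one happens to divide evenly):

```shell
bytes=246960619520
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"
```

For comparison, lsblk already reports sizes in binary units by default, so it may be a simpler way to get GiB figures.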
fdisk or sfdisk + how to show the disks size in GIB and not in GB
1,293,015,404,000
When I run sfdisk -l on my Ubuntu 14.04 machine, it returns the following: Disk /dev/xvda: 36473 cylinders, 255 heads, 63 sectors/track Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0 Device Boot Start End #cyls #blocks Id System /dev/xvda1 0+ 522 523- 4200966 83 Linux /dev/xvda2 523 784 262 2104515 82 Linux swap / Solaris /dev/xvda3 785 36473- 35689- 286664983+ 8e Linux LVM end: (c,h,s) expected (1023,254,63) found (633,35,42) /dev/xvda4 0 - 0 0 0 Empty ... I am suprised about the /dev/xvda4 entry: There is no entry for it when listing the /dev/ directory, and other commands such as parted -l and lsblk don't reference that device either. What does sfdisk -l show with that /dev/xvda4 entry?
This disk appears to use the traditional PC partitioning type, also known as MBR. In the MBR format, there are exactly four primary partitions, no more, no less: the partition table in the first sector of the disk has four entries. An entry can be marked with type 0, meaning unused. fdisk -l (like many other tools) omits entries in the primary partition table that are unused. sfdisk -l lists those partitions and indicates that they're empty.
sfdisk lists an unknown device /dev/xvda4
1,293,015,404,000
partx fails to read the partition table of /dev/sdb on this system. Why does it return the 'failed to read partition table' error shown below, instead of null/empty output? Will this 'failed' result always mean the device's partition table is damaged?

Note: here sdb works fine as an LVM PV without any partition!

# pvs
  PV         VG     Fmt  Attr PSize    PFree
  /dev/sda3  vgroot lvm2 a--    89.00g   4.00m
  /dev/sda4  vgroot lvm2 a--   746.78g 746.78g
  /dev/sdb   vgdata lvm2 a--  <836.99g      0

# sfdisk -l /dev/sdb
Disk /dev/sdb: 109262 cylinders, 255 heads, 63 sectors/track

# sfdisk -l /dev/sda
Disk /dev/sda: 109262 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1          0+ 109262- 109263- 877647871+  ee  GPT
sfdisk: start: (c,h,s) expected (0,0,2) found (0,0,1)
/dev/sda2          0       -       0          0    0  Empty
/dev/sda3          0       -       0          0    0  Empty
/dev/sda4          0       -       0          0    0  Empty

# partx -s /dev/sdb
partx: /dev/sdb: failed to read partition table

# partx -s /dev/sda
NR     START        END    SECTORS   SIZE NAME                 UUID
 1      2048     411647     409600   200M EFI System Partition 255f05dd-3c30-4eb5-b4ef-e222216eb27e
 2    411648    2508799    2097152     1G                      0eba1772-1106-4a63-bad6-6d20be988dba
 3   2508800  189171711  186662912    89G                      39fab8c9-bd96-47a2-b5db-495e43159055
 4 189171712 1755295710 1566123999 746.8G                      9e3d6237-5c7f-4443-8b60-b258052a8b32

# pvdisplay /dev/sdb
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               vgdata
  PV Size               836.99 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              214269
  Free PE               0
  Allocated PE          214269
  PV UUID               IsOr0G-UBTt-Qn1E-bx6R-dzvY-HqSE-bNiCaq

# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: RedHatEnterpriseServer
Description:    Red Hat Enterprise Linux Server release 7.4 (Maipo)
Release:        7.4
Codename:       Maipo

Another system scenario: the following partx directly outputs null/empty without any partition — why? Note: this sdb also works fine as an LVM PV without any partition!
# # # partx /dev/sdb # # # # pvdisplay /dev/sdb --- Physical volume --- PV Name /dev/sdb VG Name vgdoc PV Size 600.00 GB / not usable 4.00 MB Allocatable yes (but full) PE Size (KByte) 4096 Total PE 153599 Free PE 0 Allocated PE 153599 PV UUID mMPvrE-NBP5-9n3J-77w5-57p0-1R7E-ggFCEj # # # # lsb_release -a LSB Version: :core-3.1-amd64:core-3.1-ia32:core-3.1-noarch:graphics-3.1-amd64:graphics-3.1-ia32:graphics-3.1-noarch Distributor ID: RedHatEnterpriseServer Description: Red Hat Enterprise Linux Server release 5.5 (Tikanga) Release: 5.5 Codename: Tikanga # # 1) My thought is that the partx should not exited with "failed to read ..." if the disk really has no partition. Notes: as you could see, the partx just only output null/empty in another system with the non-partition disk. 2) It is normal that the PV/LVMs can be created and used on the disk partitions, isn't it ? Device Boot Start End #cyls #blocks Id System /dev/sda1 * 0+ 25- 26- 204800 83 Linux /dev/sda2 25+ 36404- 36380- 292215808 8e Linux LVM
partx requires a partition table. The results posted show there is no partition table on sdb. This is different from a disk which has a partition table, but no partitions. In that case, you could not have an LVM PV on that disk. Compare the output of blkid -o export /dev/sda blkid -o export /dev/sda3 blkid -o export /dev/sdb You might also find lsblk useful. sdb cannot be simultaneously formatted as an LVM PV and as a partition table, because they would include conflicting structures in the first sector. To see that they would both include structures in the first sector, compare: wipefs --no-act /dev/sda wipefs --no-act /dev/sda3 wipefs --no-act /dev/sdb The "offset" column of wipefs --no-act is in bytes. You must be careful if you run wipefs. It does what it sounds like. However it is safe if you run wipefs --no-act.
Why partx can't read the partition table of some disks
1,293,015,404,000
I've picked up an HP SimpleSave sd500a backup drive. This is a 2.5", 500GB drive. It has a mysterious CD-like partition, but otherwise seems to contain a WD Scorpio Blue disk. It seems that the CD-like partition is implemented in the enclosure's firmware, but I've no way to be certain of this. I'm repartitioning the drive for the first time. When attempting to open the drive using cfdisk /dev/sdb, it exits with status 4 after outputting this error message: FATAL ERROR: Bad primary partition 0: Partition ends in the final partial cylinder sfdisk -l is able to output info on the drive without errors: Disk /dev/sdb: 60715 cylinders, 255 heads, 63 sectors/track Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0 Device Boot Start End #cyls #blocks Id System /dev/sdb1 0+ 60715- 60716- 487699456 7 HPFS/NTFS /dev/sdb2 0 - 0 0 0 Empty /dev/sdb3 0 - 0 0 0 Empty /dev/sdb4 0 - 0 0 0 Empty Is the error from cfdisk any reason to question the stability of the drive or the compatibility of its firmware?
cfdisk reads the partition table of the device at startup, it will exit if the geometry of a partition is wrong. You can force cfdisk to not read the existing partition table by adding -z: cfdisk -z /dev/sdb This is a cfdisk specific behavior, fdisk will show a similar error but won't exit. The stability of the drive is not affected, it's just a partition issue. Alternatively use a partition tool like fdisk, parted or gparted. I've just checked my own partition and the first one (/boot) also reported this error. I never had any problems with it.
Errors from cfdisk with new external USB backup drive
1,293,015,404,000
I use msdos partition table, so there is no PARTUUID supported (it's only on GPT partition tables) root@xenial:~# fdisk -l Disk /dev/sda: 200 GiB, 214748364800 bytes, 419430400 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0xa9ff83af Device Boot Start End Sectors Size Id Type /dev/sda1 * 2048 314574847 314572800 150G 7 HPFS/NTFS/exFAT /dev/sda2 314574848 315598847 1024000 500M 83 Linux /dev/sda3 315598848 399484927 83886080 40G 83 Linux /dev/sda4 399486974 407875583 8388610 4G 5 Extended /dev/sda5 399486976 407875583 8388608 4G 82 Linux swap / Solaris So what is PARTUUID displayed in blkid ? root@xenial:~# blkid /dev/sda1: LABEL="windows" UUID="3364EC1A72AE6339" TYPE="ntfs" PARTUUID="a9ff83af-01" /dev/sda2: LABEL="/boot" UUID="9de57715-5090-4fe1-bb45-68674e5fd32c" TYPE="ext4" PARTUUID="a9ff83af-02" /dev/sda3: LABEL="/" UUID="553912bf-82f3-450a-b559-89caf1b8a145" TYPE="ext4" PARTUUID="a9ff83af-03" /dev/sda5: LABEL="SWAP-sda5" UUID="12e4fe69-c8c2-4c93-86b6-7d6a86fdcb2b" TYPE="swap" PARTUUID="a9ff83af-05" I need to change it to debug a ubuntu kickstart multiboot installation, where can i set this PARTUUID ?
Looks like the PARTUUID on a MBR-partitioned disk is the Windows Disk Signature from the MBR block (8 hex digits) + a dash + a two-digit partition number. The Windows Disk Signature is stored in locations 0x1B8..0x1BB in the first block of the disk (the MBR block), in little-endian byte order. This command will display the Windows Disk Signature straight out of the MBR: # dd if=/dev/sda bs=1 count=4 skip=440 2>/dev/null | od -t x4 -An
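The same idea can be demonstrated end-to-end on an image file (a hypothetical stand-in for /dev/sda), writing the signature bytes in little-endian order and reading them back; the octal escapes below encode the bytes af 83 ff a9:

```shell
dd if=/dev/zero of=disk.img bs=512 count=1 status=none
# write disk signature 0xa9ff83af at offset 440, little-endian
printf '\257\203\377\251' | dd of=disk.img bs=1 seek=440 conv=notrunc status=none
# read it back as one 32-bit value (on a little-endian machine)
dd if=disk.img bs=1 count=4 skip=440 status=none | od -A n -t x4 | tr -d ' '
```

With this signature, the first primary partition would get PARTUUID a9ff83af-01, matching the blkid output in the question.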
What is PARTUUID from blkid when using msdos partition table?
1,293,015,404,000
I'm testing a Python script intended for use on Raspberry Pi systems for reformatting and copying partition info and partition data. To get the information from the first device (often a USB memory stick), I use: sfdisk -d /dev/sda >sda_data.txt Then, to copy the same table to the destination drive, I use: sfdisk /dev/sdb <sda_data.txt Overall, it works as expected, but what if I use it on a smaller disk? For instance, say sda1 is a msdos format boot partition and sda2 is an extfs partition filling the rest of the drive. Say the total data (other than boot commands and data) is about 10GB on /dev/sda2 and /dev/sda is 64GB. My data is small enough that when I copy the files from the 64GB device to a smaller 32GB device, there will be enough room on the smaller device at /dev/sdb for it to store all the data on a smaller device. So if sda is 64GB and sdb is 32GB, I can still copy all the data I have on sda to sdb. The problem or question lies with the partition of the smaller sdb. When I read in the partition information from sda, it includes partition sizes and the 2nd partition, sda2, will be a lot bigger than it's new counterpart, sdb2. When I use the info dump in sda_data.txt on sdb, my experience is that sfdisk allows for the smaller device size on sdb and doesn't try to make too large a partition - it creates a partition on sdb2 that automatically goes to the end of the smaller device. This is my experience. Is this standard behavior? Will sfdisk always resize that last partition to the smaller device? In other words, can I rely on sfdisk creating that smaller partition on the smaller device? (This is assuming no partition is starting after the end of sdb and would be entirely outside of the space on that device. For the purposes of this discussion, we can assume that the last partition may need to be reduced, but that it will still be able to fit on sdb - it just won't be full size.)
The dump option is more suitable to exactly replicate a given partitioning. I can't tell you how reliable it is to use sfdisk with a recipe where the partitioning does not match the device size. But you can easily create a more lenient sfdisk script either from scratch or based on a dump. For example if I got the following dump from sfdisk label: gpt label-id: 01234567-89AB-CDEF-0123-456789ABCDEF device: /dev/sda unit: sectors first-lba: 34 last-lba: 7814037134 sector-size: 512 /dev/sda1 : start= 34, size= 32734, type=E3C9E316-0B5C-4DB8-817D-F92DF00215AE, uuid=01234567-89AB-CDEF-0123-456789ABCDEF, name="Microsoft reserved partition" /dev/sda2 : start= 32768, size= 629145600, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=01234567-89AB-CDEF-0123-456789ABCDEF, name="something" /dev/sda3 : start= 629178368, size= 1300070400, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=01234567-89AB-CDEF-0123-456789ABCDEF, name="something else" /dev/sda4 : start= 1996365824, size= 5817669632, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, uuid=01234567-89AB-CDEF-0123-456789ABCDEF, name="root" You can create a more flexible partitioning by removing especially the device path, start sector and one of the size arguments. label: gpt size= 32734, type=E3C9E316-0B5C-4DB8-817D-F92DF00215AE, name="Microsoft reserved partition" size= 629145600, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, name="something" size= 1300070400, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, name="something else" type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, name="root" This will create four partitions where sfdisk will automatically calculate the preferred start positions, expand the last partition to fill the remaining space (because it has no required size) and create new UUIDs for the partitions. If you want the partitions to have the same UUIDs as before just keep the uuid argument from the dump. 
Relevant parts from the manpage: The recommended way is not to specify start offsets at all and specify partition size in MiB, GiB (or so). In this case sfdisk aligns all partitions to block-device I/O limits (or when I/O limits are too small then to megabyte boundary to keep disk layout portable). The default value of size indicates "as much as possible"; i.e., until the next partition or end-of-device. In general it is feasible to hand write sfdisk scripts. For your boot + root setup (note: what filesystem a partition will get doesn't really matter for sfdisk) you should be fine with something simple like: label: gpt size=5GiB, type=linux, name="boot", bootable type=linux, name="root"
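Turning a dump into the lenient form can be partly automated — a rough sketch using only grep and sed; it strips the header fields, device paths, start offsets and partition UUIDs, but you still have to delete the size= field from the final partition by hand if it must shrink:

```shell
# part_table: output of `sfdisk -d /dev/sda`
grep -vE '^(device|unit|first-lba|last-lba|sector-size|label-id):' part_table \
  | sed -e 's|^/dev/[^:]*: *||' \
        -e 's/start= *[0-9]*, *//' \
        -e 's/uuid=[^,]*, *//' > lenient_table
# then: sfdisk /dev/sdb < lenient_table
```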
Copying a partition table with sfdisk from a larger device to a smaller one
1,293,015,404,000
I have some disk image, taken with dd if=/dev/somedevice of=filename.img. I was able to shrink them following this tutorial. Now I would like to script all the procedure, and I managed to perform almost everything, apart the fdisk resize part. I'm trying to resize the partition with this command echo " , +7506944K," | sfdisk -N 2 /dev/loop14 But independently from the size I use I get an error: /dev/loop14p2: Failed to resize partition #2. How can i script the redefinition of the end of a partition? Why is my command failing, can I get some more information somehow?
I understood what was wrong:

First, sfdisk accepts the size of the partition, not the increment, so the + sign is wrong. One difference from fdisk is that the second field is the partition's size in sectors, counted from the beginning of the partition — not an end position counted from the beginning of the device. Also, the unit cannot be anything other than sectors.

So in my case, given the sector size of 512 bytes and a requested final size of approximately 7 GB, I had to launch the command as:

sudo sh -c 'echo " ,14596416" | sfdisk -N 2 /dev/loop14'
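The sector count is easy to sanity-check in the shell (512-byte sectors assumed, as in the question):

```shell
size_bytes=7473364992            # the ~7 GB target size
echo $(( size_bytes / 512 ))     # sectors to pass to sfdisk -N
```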
Scripting the partition shrinking
1,293,015,404,000
I have a ssd(256 Gb), when I'm trying to part it: sfdisk /dev/sda << EOF 2048,8388608,S ,104857600,L ,,E ,20971520,L ,20971520,L ,20971520,L ,,L EOF fi the output is: Disk /dev/sda: 238.5GiB, 256060514304 bytes, 500118192 sectors Disklabel type: dos Disk identifier: 0xdedcd8ac Device Boot Start End Sectors Size Id Type /dev/sda1 2048 8390655 8388608 4G 82 Linux swap /dev/sda2 8390656 113248255 104857600 50G 83 Linux /dev/sda3 113248256 500118191 386869936 184.5G 5 Extended /dev/sda5 113250304 134221823 20971520 10G 83 Linux /dev/sda6 134223872 155195391 20971520 10G 83 Linux /dev/sda7 155197440 176168959 20971520 10G 83 Linux /dev/sda8 176171008 500118191 323947184 154.5G 83 Linux How could it be? The total size of parts more than ssd size (423>256).
Partition 3 is an extended partition: it is only a container, and its size is the sum of the logical partitions 5, 6, 7 and 8 plus the small alignment gap before each of them. Those sectors are therefore counted twice when you add all the partition sizes together.
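This is easy to verify from the numbers in the listing above: each logical partition is preceded by a 2048-sector gap that holds its EBR, and the total comes out to exactly the extended partition's sector count:

```shell
# Sector counts of the logical partitions sda5..sda8 from the listing above
total=0
for sectors in 20971520 20971520 20971520 323947184; do
    total=$((total + 2048 + sectors))   # 2048-sector EBR/alignment gap before each logical
done
echo "$total"   # prints: 386869936, exactly the Sectors column of /dev/sda3
```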
sfdisk strange behavior: total size of partitions greater than device size
1,293,015,404,000
I dumped the partition layout of a disk, with: sfdisk -d /dev/sda > part_table cat part_table output label: dos label-id: 0x0004bc49 device: /dev/sda unit: sectors /dev/sda1 : start= 2048, size= 131072, type=83 /dev/sda2 : start= 133120, size= 131072, type=83 /dev/sda3 : start= 264192, size= 131072, type=83 /dev/sda4 : start= 395264, size= 234045440, type=5 /dev/sda5 : start= 397312, size= 131072, type=af /dev/sda6 : start= 530432, size= 131072, type=83 /dev/sda7 : start= 663552, size= 131072, type=83 /dev/sda8 : start= 796672, size= 131072, type=83 /dev/sda9 : start= 929792, size= 131072, type=7 Is there a way to import this partition layout into a ramdisk?
You can use the sfdisk output to create the new partition table: sfdisk /dev/ram0 <part_table If you're really daring (or old-fashioned), you can also copy just the 64-byte MBR partition table (bytes 446 to 509) with dd: dd if=/dev/sda of=/tmp/sda-mbr.bin bs=512 count=1 dd if=/tmp/sda-mbr.bin of=/dev/ram0 bs=1 count=64 skip=446 seek=446
Is there a way to import a layout with multiple partitions into ramdisk?
1,293,015,404,000
We are using a Python script that runs sfdisk to create partitions on our Linux system. The code is as below: stdin,stdout=\ (lambda x:(x.stdin,x.stdout))\ (subprocess.Popen( ["/sbin/sfdisk","-uM","--no-reread",device], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)) #<start>;<size>;<id>;<bootable>; t="""0;95;83; ;160;b;* ;;E; ;0;; ;20;b; ;95;b; ;;b; """ print "Writing\n%s"%(t,) stdin.write(t) stdin.close() """Explanation: / 100MB (hda1) /mnt/system 175MB (hda2) /mnt/configuration 25MB (hda5) /mnt/logs 100MB (hda6) /mnt/user 88MB (hda7) """ After running this command I see that the explanation is correct and, for example, system has 175 MB. What I don't understand is how the size 95 is mapped to 100 megabytes, 160 to 175, 20 to 25 and so on. The other question is: if I want to increase the size of system to 210 megabytes, then what is the correct number to write in the command?
Note that you have a slightly out-of-date sfdisk: since version 2.26 (2015) it no longer accepts -uM, which was used to set the default "unit". The difference you are seeing is due to whether numbers are given in MB i.e. Megabytes (1000*1000) or MiB i.e. Mebibytes (1024*1024). 100MB is approximately 95MiB. If you want to future-proof your code against a newer sfdisk you should remove the -uM and assume sizes are in sectors of 512 bytes. The newer version allows you to give numbers with a suffix like MiB. 210MB can be calculated as: echo '210*1000*1000/1024/1024' | bc which gives about 200MiB.
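The same conversions can be done with plain shell arithmetic (integer division, matching what the bc pipeline produces):

```shell
# 210 MB expressed in MiB
echo $((210 * 1000 * 1000 / 1024 / 1024))   # prints: 200
# and the reverse sanity check: 95 MiB expressed in MB
echo $((95 * 1024 * 1024 / 1000 / 1000))    # prints: 99
```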
how mapping of sfdisk partition size is working
1,293,015,404,000
I just used sfdisk to clone my partition table to a new disk, sudo sfdisk -d /dev/nvme0n1 > /tmp/part.txt sudo sfdisk /dev/nvme1n1 < /tmp/part.txt However, now both drives have the same uuid. How can I fix that and generate a new UUID for the device with the cloned partition table? The numbers that are duplicated can be seen with sudo fdisk -l. You can see the "523436E9-4DA5-474F-87CA-D784E4BF345D" is shared as a common "Disk identifier" Disk /dev/nvme1n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors [...] Disklabel type: gpt Disk identifier: 523436E9-4DA5-474F-87CA-D784E4BF345D [...] Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors [...] Disklabel type: gpt Disk identifier: 523436E9-4DA5-474F-87CA-D784E4BF345D You can also see a shared UUID with: ❯ lsblk -o +uuid NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS UUID nvme1n1 259:0 0 1.8T 0 disk ├─nvme1n1p1 259:2 0 512M 0 part └─nvme1n1p2 259:3 0 1.8T 0 part 7d78ed4b-e4aa-4270-853d-6489ea4d6c54 nvme0n1 259:1 0 1.8T 0 disk ├─nvme0n1p1 259:4 0 512M 0 part /boot/efi 1D40-E385 └─nvme0n1p2 259:5 0 1.8T 0 part / 7d78ed4b-e4aa-4270-853d-6489ea4d6c54 Why is the partition UUID "7d78ed4b-e4aa-4270-853d-6489ea4d6c54" shared as well?
The device identifier UUID I was able to change the UUID of disk with sfdisk, sudo sfdisk --disk-id /dev/nvme1n1 $(uuidgen) Disk identifier changed from 523436E9-4DA5-474F-87CA-D784E4BF345D to E15A552B-CD07-4332-B73C-E67765D11F4E. The partition table has been altered. Calling ioctl() to re-read partition table. Syncing disks. Partition UUIDs In order to give the partitions a new UUID with sudo btrfstune -f -U $(uuidgen) /dev/nvme1n1p2 I had to first take the device offline by removing it from the raid1 array -- which because there were only two disks required first removing raid1, sudo btrfs filesystem balance start -dconvert=single -mconvert=dup Then I was able to remove the device, sudo btrfs device remove /dev/nvme1n1p2 / Then I had to create a btrfs filesystem on the device so I could use btrfstune sudo mkfs.btrfs /dev/nvme1n1p2 Then I could change the partition uuid, sudo btrfstune -f -U $(uuidgen) /dev/nvme1n1p2 But lsblk -o +uuid does not show the partition (nvme1n1p2)'s uuid, so I'm not sure what exactly is going on, lsblk -o +uuid NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS UUID nvme1n1 259:0 0 1.8T 0 disk ├─nvme1n1p1 259:2 0 512M 0 part └─nvme1n1p2 259:3 0 1.8T 0 part nvme0n1 259:1 0 1.8T 0 disk ├─nvme0n1p1 259:4 0 512M 0 part /boot/efi 1D40-E385 └─nvme0n1p2 259:5 0 1.8T 0 part / 7d78ed4b-e4aa-4270-853d-6489ea4d6c54
How can you give a disk and its partitions new UUIDs?
1,293,015,404,000
I have a custom LFS installer which contains sfdisk, and I am trying to add support for NVMe disks to it. When I make partitions with sfdisk on a normal SATA disk, things go as expected, but when I do the exact same on an NVMe disk, it creates the partitions, but when I try to get the size of a partition (with the sfdisk -s /dev/nvme0n1p1 command), it outputs No such device or address while trying to determine filesystem size. lsblk output: NAME MAJ:MIN SIZE TYPE nvme0n1 259:0 1.8T disk |nvme0n1p1 259:1 200G part `nvme0n1p2 259:10 1.6T part sfdisk usage: ,200G,L ,,L /proc/partitions major minor #blocks name 259 0 1953514584 nvme0n1 259 2 209715200 nvme0n1p1 259 3 1743798343 nvme0n1p2 They are also listed under /dev as nvme0n1, nvme0n1p1 and nvme0n1p2. Now if I use sfdisk -s /dev/nvme0n1p1 I get the output: 209715200 and sfdisk -s /dev/nvme0n1p2 gives: No such device or address while trying to determine filesystem size. Now the strange thing is, if I create the partitions again and do sfdisk -s /dev/nvme0n1p1, this now gives: No such device or address while trying to determine filesystem size and sfdisk -s /dev/nvme0n1p2 gives 209715200. And if I do it again over and over, it keeps swapping: one partition is usable, the other not. Things I tried: another SSD (same type), same result; I am using a PCIe adapter for the NVMe disk, tried another adapter, same result; using the adapter in a running openSUSE installation, I can execute these commands with no issues; a normal SATA drive, no issues. [edit] I figured out that after a reboot, without partitioning the drive again, it is possible to execute these commands. Is this required for an NVMe disk but apparently not for a normal SATA drive? I am quite out of ideas now about what to try or what the cause of this could be; any help would be appreciated.
I managed to find a solution, so I am adding the answer here in case it helps others who encounter a similar problem. I used the blockdev --rereadpt /dev/nvme0n1 command. This rereads the partition table, and now I can execute the sfdisk -s /dev/nvme0n1p2 command with no issues and without the need of a reboot. I am still not sure why this is not needed with normal SATA drives, so if someone knows why this is the case, feel free to leave a comment.
Sfdisk NVME issue, No such device or address
1,402,969,883,000
I'm trying to control a python based program (which doesn't detach itself from console) #!/bin/bash user=nobody pid=/var/run/xx.pid name=xx prog=/xx.py case $1 in start) /sbin/start-stop-daemon --start -b --oknodo --user "$user" --name "$name" --pidfile "$pid" --startas "$prog" --chuid nobody -- --daemon ;; stop) /sbin/start-stop-daemon --stop --oknodo --user "$user" --name "$name" --pidfile "$pid" --retry=TERM/5/KILL/1 ;; restart) ;; *) ;; esac The start part works fine. I can see the script up and running, but the stop part doesn't. It simply says No xx found running; none killed. So I guess there's something wrong with the start part?
start-stop-daemon --start --pidfile "$pid" doesn't write to the pid file unless --make-pidfile (-m) is specified. Without --make-pidfile it is up to the program being launched to create it. Also, for --make-pidfile to work, the process being launched can't daemonize itself (via a fork), as then start-stop-daemon won't know what PID it should put in the file. The only thing --pidfile "$pid" does in your usage scenario is that it will result in start-stop-daemon not starting the program if it is already running. If the process still does not stop, all the criteria passed to start-stop-daemon --stop must match. Meaning $pid has to be a running process, the UID of the process has to match $user, and the process name (arg0) has to match $name. You can determine the value of arg0 by doing ps h -p $pid -o comm
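Putting that together, a sketch of a corrected start line (variable names as in the question; -m only works because the program must then not fork away on its own, so its --daemon flag has to be dropped), plus a quick way to see the comm value that --name must match:

```shell
# Corrected start: start-stop-daemon backgrounds the process and writes the pidfile:
#   start-stop-daemon --start -b -m --oknodo --user "$user" --name "$name" \
#       --pidfile "$pid" --chuid nobody --startas "$prog"
# To find the right value for --name, inspect the comm field of a running process:
sleep 60 &
pid=$!
ps -p "$pid" -o comm=   # prints: sleep
kill "$pid"
```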
start-stop-daemon not working as expected, no pid file was written
1,402,969,883,000
I'm learning how to create services with systemd. I get this error: .service: Start request repeated too quickly. I can't start the service any more; it was working yesterday. What am I doing wrong? (root@Kundrum)-(11:03:19)-(~) $nano /lib/systemd/system/swatchWATCH.service 1 [Unit] 2 Description=Monitor Logfiles and send Mail reports 3 After=syslog.target network.target 4 5 [Service] 6 Type=simple 7 ExecStart=/usr/bin/swatch --config-file=/home/kristjan/.swatchrc --input-record-separator="\n \n " --tail-file=/var/log/snort/alert --daemon 8 Restart=on-failure 9 StartLimitInterval=3 10 StartLimitBurst=100 11 12 [Install] 13 WantedBy=multi-user.target StartLimitInterval and StartLimitBurst I added after trying to fix it. My system is Debian 9.8 Stretch all updates.
First, if this is a custom service, it belongs in /etc/systemd/system. /lib/systemd is intended for package-provided files. Second, the service is likely crashing and systemd is attempting to restart it repeatedly, so you need to figure out why it's crashing. Check the service logs with: journalctl -e -u swatchWATCH It's possible there will be some extra detail in the main journal: journalctl -e Finally, check that it runs OK directly on the CLI: /usr/bin/swatch --config-file=/home/kristjan/.swatchrc --input-record-separator="\n \n " --tail-file=/var/log/snort/alert --daemon I see you are using a --daemon option. That's often a mistake with systemd. Systemd daemonizes for you. Try removing this option. If all else fails, review what changed since yesterday when it was working.
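For reference, a minimal native unit built from the question's command (a sketch; the --input-record-separator option is omitted here for brevity, and crucially there is no --daemon flag, since systemd keeps the process in the foreground itself):

```ini
# /etc/systemd/system/swatchWATCH.service
[Unit]
Description=Monitor logfiles and send mail reports
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/swatch --config-file=/home/kristjan/.swatchrc --tail-file=/var/log/snort/alert
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing, run systemctl daemon-reload before restarting the service.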
How to fix ".service: Start request repeated too quickly." on custom service?
1,402,969,883,000
When I type sudo service networking restart, I get the error shown below: edward@computer:~$ sudo service networking restart stop: Job failed while stopping start: Job is already running: networking I got this error when I wanted to restart networking after changing the MAC address and also after setting a static IP in the /etc/network/interfaces file. I get the same error even after reverting those changes and when my computer works fine. While looking through /var/log/syslog I found this: kernel: [ 6448.036144] init: networking post-stop process (28701) terminated with status 100 Is that relevant to the failed stop/start? I am on Ubuntu 14.04
The error (post-stop) in your log seems related to this (in /etc/init/networking.conf line 25ff.): post-stop script if [ -z "$UPSTART_STOP_EVENTS" ]; then echo "Stopping or restarting the networking job is not supported." echo "Use ifdown & ifup to reconfigure desired interface." exit 100 fi You get the exit code, but not the more informative message, if you do sudo service networking restart. There is a lot of detail in this bug report about the issue. This seems to be deprecated behaviour: /etc/init.d/networking stop doesn't work any more, and on Debian Jessie sudo service networking stop doesn't have any effect either. You seem to have to run ifup/ifdown on the individual network interfaces now, so let's hope you don't have too many of them. If using ifup/ifdown is unacceptable, the bug report above describes how to restore the 13.10 behaviour. The final solution for it is: sudo service network-manager restart
unable to restart networking daemon
1,402,969,883,000
I am trying to run Google AppEngine on my Debian machine, I created a file init.d/gae: . /lib/lsb/init-functions # # Initialize variables # name=gae user=$name pid=/var/run/$name.pid prog="python /opt/google_appengine/dev_appserver.py --host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www" case "${1}" in start) echo "Starting...Google App Engine" start-stop-daemon --start --make-pidfile --background --oknodo --user "$user" --name "$name" --pidfile "$pid" --startas "$prog" ;; stop) echo "Stopping...Google App Engine" ;; restart) ${0} stop sleep 1 ${0} start ;; *) echo "Usage: ${0} {start|stop|restart}" exit 1 ;; esac exit 0 # End scriptname I am testing the script by manually invoking, and the script runs but not as a daemon or at least it doesn't detach from terminal. I am expecting/looking for similar functionality to Apache. What switch am I missing? EDIT I should note that no PID file is being written or created despite the switch indicating it should be created
You have two problems I can see: prog=python /opt/google_appengine/dev_appserver.py --host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www Will start /opt/google_appengine/dev_appserver.py with prog=python in the environment. This is before your start block, so start-stop-daemon isn't even getting involved. The quick fix is to quote the entire assignment like this: prog='python /opt/google_appengine/dev_appserver.py --host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www' But a better fix is to use the style from /etc/init.d/skeleton, and do DAEMON='python /opt/google/appengine/dev_appserver.py' DAEMON_ARGS='--host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www' The second problem is that you're wrongly quoting $prog. start-stop-daemon --start --make-pidfile --background --oknodo --user "$user" --name "$name" --pidfile "$pid" --startas "$prog" tells start-stop-daemon to try to start a program called python /opt/google_appengine/dev_appserver.py --host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www. But clearly there is no program called that. You want to start python with arguments. Removing the double quotes there is the quick fix, but a better one, again following /etc/init.d/skeleton, would be start-stop-daemon --start --quiet --chuid $CHUID --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_ARGS
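The first pitfall is easy to reproduce in any POSIX shell: an unquoted VAR=value word1 word2... line is parsed as a command invocation with a temporary environment assignment, not as a variable assignment:

```shell
# 'env' runs as the command here, with prog=python placed only in its environment:
prog=python env | grep '^prog='    # prints: prog=python
# A quoted assignment behaves as intended:
prog='python dev_appserver.py --host=0.0.0.0'
echo "$prog"                       # prints the whole string
```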
start-stop-daemon won't start my Python script as service
1,402,969,883,000
I'm trying to recover from a faulty installation, and want to remove some packages. But I can't. # apt autoremove offending-package dpkg: warning: 'start-stop-daemon' not found in PATH or not executable (My PATH is fine.) According to packages.debian.org, start-stop-daemon should be in /sbin/. It isn't there! What should I do?
Another way to do this is to first create a dummy /usr/local/sbin/start-stop-daemon that does nothing: #!/bin/sh exec true then simply reinstall the dpkg package: aptitude reinstall dpkg then (of course) remove the dummy /usr/local/sbin/start-stop-daemon. Installing the dpkg package does not in fact require start-stop-daemon at any point. It is simply the case that the dpkg command, which is run to reinstall its own package, checks that start-stop-daemon is on the command search path in case a package installation/deinstallation script happens to use it.
Debian Stretch - where did start-stop-daemon go, and how do I get it back?
1,402,969,883,000
I'm writing a daemon to manage my Java app on a headless Ubuntu 16.04 box using jsvc and this (probably pre-systemd) tutorial, and got as far as running update-rc.d mydaemon enable, receiving the error update-rc.d: error: mydaemon Default-Start contains no runlevels, aborting Having Googled around a bit this appears to have something to do with the (fairly?) recent move to systemd, which I have confirmed is running with pidof systemd. How do I achieve the same starting-at-boot behaviour as update-rc.d (and more importantly stopping the service via /etc/init.d/mydaemon stop rather than just killing the process as the Java app needs to clean up). And are systemd and update-rc.d different systems, or does systemd just change how the latter works?
I don't have a Ubuntu 16.04 to test this on, or provide you with many details, but systemd has a compatibility feature to allow older /etc/init.d scripts to continue working. Instead of using update-rc.d to enable your daemon, use the systemd native command equivalent: sudo systemctl enable mydaemon If this still produces the same error, add the missing lines to the starting set of comments in your script: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 between the ### BEGIN INIT INFO and ### END INIT INFO lines, and try again. See the LSB core description for these lines. You can also explicitly start the daemon with sudo systemctl start mydaemon and ask for its status with sudo systemctl status -l mydaemon See man systemd-sysv-generator for the compatibility feature. See this wiki for converting System V or upstart scripts like yours to native systemd Units.
Set daemon to start at boot with systemd
1,402,969,883,000
I run Debian 8 jessie. I have activated a daemon's debugging facility, which causes the daemon to print debugging info to stdout and/or stderr. How can I persuade start-stop-daemon(8), as invoked by /lib/lsb/init-functions, to redirect the daemon's stdout and stderr to my debugging log file /root/log? It appears that >/root/log 2>&1 is ineffective. I suppose that this makes sense, since start-stop-daemon(8) is not a shell. At any rate, how shall I redirect the daemon's stdout and stderr? [The daemon happens to be exim4(8), but this is not relevant to my question as far as I know. LSB evidently delegates the management of the daemon to Systemd; this could be indirectly relevant for all I know.]
Trying to pass magic options through the various layers of shell scripting is entirely the wrong way to go about this on a systemd Linux operating system. systemd already logs the standard outputs/errors of services that are auto-generated by the "sysv" service generator, as this one is. The "sysv" service generator has made an exim4.service (somewhere under /run/systemd) that invokes your /etc/init.d/exim4 as the service. There's no delegation going on. Your rc scripts aren't in charge of the service in the first place. They're simply being run as handy proxies for it. So what you need to do is look at the already captured log output for the exim4.service service. This will either be in the journal or in whatever syslog variant you have arranged to feed off the journal. For the latter do whatever is appropriate for your syslog variant. For the former, observe that systemctl shows you the recent journal entries for the service whenever you run systemctl status exim4.service with appropriate privileges (superuser or membership of the systemd-journal group). You can also view the journal entries for the service since the last bootstrap (that the journal has not already rotated off) with journalctl -u exim4.service -e -b exim under proper service management Ironically, all of that rc script monstrosity can be replaced with some fairly short exim4-queue.service, [email protected]+exim4-smtp-relay.socket, and [email protected]+exim4-smtp-submission.socket service and socket units. Also note that it is a falsehood that exim conflates "foreground" and "debug"/"verbose". Its -bdf option is explicitly the non-"dæmonizing" version of -bd, although to invoke it as a per-connection "socket-activated" dæmon (as per the examples in the further reading) where the service management tools handle the listening socket, one would use -bs rather than -bdf anyway. Further reading https://unix.stackexchange.com/a/233581/5132 Jonathan de Boyne Pollard (2014). 
A side-by-side look at run scripts and service units. Frequently Given Answers. Jonathan de Boyne Pollard (2014). "Don't assume that 'foreground' means 'debug mode'". Mistakes to avoid when designing Unix dæmon programs. Frequently Given Answers.
How to redirect a daemon's stdout and stderr using start-stop-daemon(8)?
1,402,969,883,000
Here is a piece of my SysVinit file. NAME="flask-daemon" PIDFILE="/var/run/"$NAME".pid" DAEMON="/home/ubuntu/flask/run.py" DAEMON_USER=root f_start() { echo -e "\nStarting : $NAME" start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --exec $DAEMON } Does anyone know where the error is? Also, it's terrible that in this situation, it is only writing the PID of ONE process into the pidfile. Therefore, if I do /etc/init.d/flask-daemon stop, it only kills the process whose PID happened to be written into the file. Processes (Why two?): ps aux | grep run.py root 3591 3.0 1.7 132700 17460 ? S 19:27 0:00 /usr/bin/python /home/ubuntu/flask/run.py root 3595 4.5 1.7 213144 18080 ? Sl 19:27 0:00 /usr/bin/python /home/ubuntu/flask/run.py root 3602 0.0 0.0 10460 948 pts/0 S+ 19:27 0:00 grep --color=auto run.py PID file: $ cat /var/run/flask-daemon.pid 3591 Just one process was killed... ps aux | grep run.py root 3595 0.3 1.7 213144 18080 ? Sl 19:27 0:00 /usr/bin/python /home/ubuntu/flask/run.py root 3613 0.0 0.0 10460 948 pts/0 S+ 19:27 0:00 grep --color=auto run.py Observation: I also tried to use --startas, but it spawns two processes as well, and even worse: it records the PID of some other process into /var/run/flask-daemon.pid, never the PIDs of the daemons
Guessing your daemon is running in daemon mode, so it's creating a copy of itself when it starts. (Note that the lowercase l in the Sl STAT column of your ps output actually means the process is multi-threaded, per ps(1), so it isn't by itself proof of forking.) I've been using python-daemon quite a bit recently, and if that's what your script is using, you can tell it whether or not to detach the process in the constructor of your DaemonContext; just tell it not to do that and you should be golden. -OR- don't use start-stop-daemon and just make a systemd service that utilizes the detach_process flag. -OR- Do both and tell your process whether you want to detach the process or not.
Why "start-stop-daemon" spawns two processes?
1,402,969,883,000
On Ubuntu >=12.04, what is the most correct/prettiest way to get rngd to run in multiple instances using an init script? The current one only accepts one random source, so multiple instances are needed. I.e. I would like rngd to be controlled with the "service" command. When I manually start rngd, it works as I hoped and thus gathering randomness twice the speed of just a single source. $ rngd --pidfile=/var/run/rngd0.pid -r /dev/hwrng0 $ rngd --pidfile=/var/run/rngd1.pid -r /dev/hwrng1 Any ideas how to solve this? Edit End version looks like this, thanks @CameronNemo: /etc/init/rng-tools.conf: description "rng-tools daemon" start on runlevel [2345] stop on runlevel [016] env DEVLIST="$(find /dev/hwrng* -follow -type c)" pre-start script for device in $DEVLIST; do start rngd-instance DEVICE=$device || failed="${failed}$device " done test -n "$failed" || { echo "Failed to start instances: $failed"; exit 1; } end script /etc/init/rngd-instance.conf: stop on stopping rng-tools or runlevel [016] description "rngd instance" usage "DEVICE=full path to rng device" instance $DEVICE pre-start script test -c "$DEVICE" || { echo "Not a device: $DEVICE"; exit 1; } mkdir -p /var/run/rngd end script exec rngd --foreground --pidfile=/var/run/rngd/$(basename "$DEVICE") -r $DEVICE /etc/init.d/rng-tools : $ cd /etc/init.d/ $ sudo ln -sf /lib/init/upstart-job rng-tools
You can try to write an Upstart job using instances (the device file would be the instance), then another job that starts all the instances you want at boot. http://upstart.ubuntu.com/cookbook/#instance It would be easier for you if you made the pidfiles based on the device name, so it would be something like "rngd-instance": stop on stopping rng-tools or runlevel [016] instance $DEVICE usage "DEVICE=full path to rng device" pre-start script test -c $DEVICE || { echo "Not a device: $DEVICE"; exit 1; } mkdir -p $(dirname /var/run/rngd/$DEVICE) end script exec rngd --foreground --pidfile=/var/run/rngd/$DEVICE -r $DEVICE then another job, rng-tools, like this: start on runlevel [2345] stop on runlevel [016] env DEVLIST="/dev/hwrng0 /dev/hwrng1" pre-start script for d in $DEVLIST; do initctl start rngd-instance DEVICE=$d || failed="${failed}$d " done test -n "$failed" || { echo "Failed to start instances: $failed"; exit 1; } end script You place these files as /etc/init/rngd-instance.conf and /etc/init/rng-tools.conf.
rngd - multiple instances from init script
1,402,969,883,000
I have a bunch of redis service instances, and I would like to add a label to them in the output of the ps command. Currently I see: $ ps aux | grep redis root <snipped> /usr/local/bin/redis-server *:6381 root <snipped> /usr/local/bin/redis-server *:6380 Is there a way to have an output like this: root <snipped> /usr/local/bin/redis-server *:6381 item cache # <== label root <snipped> /usr/local/bin/redis-server *:6380 page cache # <== label i.e. adding a text label to easily identify what each of those instances is for. Is there a way to do this instead of having to make copies of the binary?
Assuming redis-server does not have built-in support for changing its own command name after startup (some programs, especially daemons, do have such support), there are a few things you can do: Use an alternate command name. Although the first argument in the command line (argv[0]) is normally the name of the binary used to invoke a command (either its full path name or its base name), it doesn't have to be. And if it isn't, then the application itself probably won't notice or care. But shells launch commands with argv[0] set following this convention so you have to launch it in a "special" way. To do this, you would probably want to modify the /etc/init.d script that launches this daemon. Make hard links to the binary and launch those. This is similar to your suggestion of copying the binary, but copies are unnecessary. If you use hard links, the binary will not occupy any additional disk space and the code (text) of the multiple instances will all share memory, which won't happen with copies.
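A third option, hedged since it is bash-specific: exec -a sets argv[0] directly, so each instance can carry a label without touching the binary at all (the label here is arbitrary):

```shell
# Launch with a custom argv[0]; the args column of ps reflects it
bash -c 'exec -a "redis-server-itemcache" sleep 60' &
pid=$!
sleep 1                 # give the exec a moment to happen
ps -p "$pid" -o args=   # prints: redis-server-itemcache 60
kill "$pid"
```

The kernel's comm value (shown by ps -o comm) still reflects the binary name, but ps aux and pgrep -f match on the args column, which now carries the label.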
Adding a label to start-stop-daemon service in process list
1,402,969,883,000
I entered sudo shutdown -h now to shutdown my computer. Then, the "A stop job is running..." message appeared. I wanted to abort my shutdown and go back to appropriately closing the running programs... but I couldn't find a way to cancel my shutdown so all I could do was wait for the shutdown to continue. Most of the guidance I find online are focused on forcing immediate shutdown whereas I want to stop the shutdown. Is it possible to abort the shutdown in this situation? Environment: Debian, XFCE, xfce4-terminal, Bash
The "a stop job is running" message is visible when you're already pretty much at the end of what systemd could shut down so far. At this point, you're no longer able to have an interactive session; you'd need to "boot back" up to the multi-user target. That's technically not possible, and it would also not allow you to save anything: the interactive program that (you assume; it's hard to check!) blocks shutdown has long lost its graphical interface and can't be interacted with anymore, even if you restarted X11/your Wayland compositor. So I'm afraid what you want cannot exist!
How can I cancel (abort) a shutdown when I see "A stop job is running..."?
1,402,969,883,000
I am trying to get an init script running that starts nodejs as a daemon. The problem is that when I execute start-stop-daemon it always returns '0', regardless of what error the nodejs daemon may return. I got as far as figuring out that the issue arises when using start-stop-daemon with the --background switch. With the switch, start-stop-daemon always returns '0', even when the nodejs daemon fails. root# start-stop-daemon --start --chuid $GHOST_USER:$GHOST_GROUP --chdir $GHOST_ROOT --make-pidfile --pidfile $PIDFILE --exec $DAEMON --background -- $DAEMON_ARGS ; echo ---error: $? ---error: 0 Note that the daemon silently failed and is NOT running at this moment! Without the switch, we can actually see the daemon failing to start. root# start-stop-daemon --start --chuid $GHOST_USER:$GHOST_GROUP --chdir $GHOST_ROOT --make-pidfile --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_ARGS ; echo ---error: $? ERROR: Unsupported version of Node Ghost needs Node version ~0.10.0 || ~0.12.0 || ^4.2.0 you are using version 5.10.0 Please see http://support.ghost.org/supported-node-versions/ for more information ---error: 231 Now I am looking to find a solution so I can use the --background switch and still get an error code bigger than '0' when it fails to start the nodejs daemon.
This is documented behavior. The foreground process completes after forking the background process. From the man page: -b, --background Typically used with programs that don't detach on their own. This option will force start-stop-daemon to fork before starting the process, and force it into the background. Warning: start-stop-daemon cannot check the exit status if the process fails to execute for any reason. This is a last resort, and is only meant for programs that either make no sense forking on their own, or where it's not feasible to add the code for them to do this themselves.
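A common workaround is to start in the background, pause briefly, and verify the PID yourself before reporting success. Sketched here with sleep standing in for the real daemon so the pattern is runnable anywhere:

```shell
PIDFILE=/tmp/demo-daemon.pid
# stand-in for: start-stop-daemon --start --background --make-pidfile --pidfile "$PIDFILE" --exec "$DAEMON"
sleep 60 & echo $! > "$PIDFILE"
sleep 1
if [ -s "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "started ok"
else
    echo "failed to start" >&2
fi
kill "$(cat "$PIDFILE")" 2>/dev/null
rm -f "$PIDFILE"
```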
start-stop-daemon always returns '0' although the process fails
1,402,969,883,000
I have set up start-stop-daemon to start my script automatically: case "$1" in start) log_begin_msg "starting foo" start-stop-daemon --start --chuid nobody --user nobody --pidfile \ /tmp/foo.pid --startas /usr/local/bin/foo.sh & log_end_msg $? The problem is that it always returns 0 (success), even if the process was not started. How can I capture the return code of start-stop-daemon properly?
You are not capturing the return code of start-stop-daemon. Your problem is that you are launching it in the background, and that launch itself succeeds: you are capturing the return code of putting something into the background, not the return code of the command itself. Try this: rm /tmp/not_existent_file & echo $? This always prints 0. In order to get the return code of a backgrounded process, you must wait for it to exit with wait. Here is an example: rm /tmp/not_existent_file & wait $! echo $? If you want to start a process that does not fork on its own, try the --background switch and remove the & from the end of the start-stop-daemon line. See the start-stop-daemon manpage.
start-stop-daemon returning always 0 (success)
1,402,969,883,000
I have the init.d script show below. I want to start the daemon with the argument --http_root /tv When I start the application without the daemon the argument is accepted However I cannot seem to figure out how to adjust the script below to start the daemon with that argument. I want the argument to be passed every time sudo service tvheadend start is issued. I do not want to pass an argument to the init.d script. How can I achieve that? #! /bin/sh ### BEGIN INIT INFO # Provides: tvheadend # Required-Start: $local_fs $remote_fs udev # Required-Stop: $local_fs $remote_fs # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 ### END INIT INFO # Author: Andreas Öman # Do NOT "set -e" # PATH should only include /usr/* if it runs after the mountnfs.sh script PATH=/usr/sbin:/usr/bin:/sbin:/bin DESC="Tvheadend" NAME=tvheadend DAEMON=/usr/bin/$NAME PIDFILE=/var/run/$NAME.pid SCRIPTNAME=/etc/init.d/$NAME # Exit if the package is not installed [ -x "$DAEMON" ] || exit 0 # Read configuration variable file if it is present [ -r /etc/default/$NAME ] && . /etc/default/$NAME # Configure command line options [ "$TVH_ENABLED" = "1" ] || exit 0 ARGS="-f" [ -z "$TVH_USER" ] || ARGS="$ARGS -u $TVH_USER" [ -z "$TVH_GROUP" ] || ARGS="$ARGS -g $TVH_GROUP" [ -z "$TVH_CONF_DIR" ] || ARGS="$ARGS -c $TVH_CONF_DIR" [ -z "$TVH_ADAPTERS" ] || ARGS="$ARGS -a $TVH_ADAPTERS" [ "$TVH_IPV6" = "1" ] && ARGS="$ARGS -6" [ -z "$TVH_HTTP_PORT" ] || ARGS="$ARGS --http_port $TVH_HTTP_PORT" [ -z "$TVH_HTTP_ROOT" ] || ARGS="$ARGS --http_root $TVH_HTTP_ROOT" [ -z "$TVH_HTSP_PORT" ] || ARGS="$ARGS --htsp_port $TVH_HTSP_PORT" [ -z "$TVH_ARGS" ] || ARGS="$ARGS $TVH_ARGS" [ "$TVH_DEBUG" = "1" ] && ARGS="$ARGS -s" # Load the VERBOSE setting and other rcS variables [ -f /etc/default/rcS ] && . /etc/default/rcS # Define LSB log_* functions. # Depend on lsb-base (>= 3.0-6) to ensure that this file is present. . 
/lib/lsb/init-functions # # Function that starts the daemon/service # do_start() { # Return # 0 if daemon has been started # 1 if daemon was already running # 2 if daemon could not be started udevadm settle start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \ || return 1 start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \ $ARGS \ || return 2 } # # Function that stops the daemon/service # do_stop() { # Return # 0 if daemon has been stopped # 1 if daemon was already stopped # 2 if daemon could not be stopped # other if a failure occurred start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME RETVAL="$?" [ "$RETVAL" = 2 ] && return 2 # Many daemons don't delete their pidfiles when they exit. rm -f $PIDFILE return "$RETVAL" } case "$1" in start) [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME" do_start case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; stop) [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME" do_stop case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; restart|force-reload) # # If the "reload" option is implemented then remove the # 'force-reload' alias # log_daemon_msg "Restarting $DESC" "$NAME" do_stop case "$?" in 0|1) do_start case "$?" in 0) log_end_msg 0 ;; 1) log_end_msg 1 ;; # Old process is still running *) log_end_msg 1 ;; # Failed to start esac ;; *) # Failed to stop log_end_msg 1 ;; esac ;; *) echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2 exit 3 ;; esac :
If you don't want to modify the init.d script, change (or create) /etc/default/tvheadend as steeldriver pointed out in his comment, and edit (add, extend) the entry: TVH_HTTP_ROOT=/tv This file gets sourced in the init.d script, with this line: [ -r /etc/default/$NAME ] && . /etc/default/$NAME and then [ -z "$TVH_HTTP_ROOT" ] || ARGS="$ARGS --http_root $TVH_HTTP_ROOT" will set the argument to the daemon invocation. Otherwise you can also just change the line ARGS="-f" in the script to ARGS="-f --http_root /tv", but that gets overwritten on a package update.
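The mechanism can be reproduced in isolation; a sketch of the two relevant lines from the init script, with TVH_HTTP_ROOT standing in for the value that would be sourced from /etc/default/tvheadend:

```shell
# What /etc/default/tvheadend would provide:
TVH_HTTP_ROOT=/tv

# The argument-building lines copied from the init script:
ARGS="-f"
[ -z "$TVH_HTTP_ROOT" ] || ARGS="$ARGS --http_root $TVH_HTTP_ROOT"

echo "$ARGS"    # -f --http_root /tv
```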
Passing variable in init.d script
1,402,969,883,000
I have a task to design a service out of a bash script so that it can be called in the way service ... start/stop/restart. The script that is to become a service is an infinite while loop which wakes up every minute and does some checking. I call it like this:

start() {
    echo -n $"Starting $DESC:"
    # get device names out of XML file
    DAEMON_ARGS=$(xmlstarlet sel -T -t -m "/config/input/sensor/device/resource" -v "concat(../../@type, ' ', ../../@dev, ' ', @res)" -n $CONFIGURATION_FILE | extract_devices)
    echo "daemon args $DAEMON_ARGS"
    start-stop-daemon --start --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_ARGS
    echo
}

The pidfile and daemon are defined like this:

PIDFILE="/var/run/detection.pid"
NAME="jblub_control_loop.sh"
DAEMON="/root/test_det/${NAME}"

When I run ./detection start (I also tried copying it into init.d and running it with service detection start), the init script simply doesn't get out of the loop and stays blocked in the start-stop-daemon function. When I comment out the infinite while loop inside my jblub_control_loop.sh it passes, but no pid file is created. My question is how to properly start a script with an infinite loop as a service, and why my pid file is not created.
For a start, use the --background switch so the process is forked, and use the -m switch to create a PID file: start-stop-daemon --start --background -m --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_ARGS For a complete answer, see https://stackoverflow.com/questions/16139940/what-is-start-stop-daemon-in-linux-scripting Enjoy
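What --background and -m do can be emulated in plain shell: fork the looping job yourself and record its PID. A sketch (the pidfile path is arbitrary):

```shell
# Emulate start-stop-daemon --background -m for a non-forking loop:
PIDFILE=/tmp/demo_$$.pid
( while :; do sleep 1; done ) &   # run the infinite loop in the background
echo $! > "$PIDFILE"              # write the PID file ourselves, like -m

# The caller returns immediately; the loop can later be stopped
# through the PID file, as the init script's stop action would:
pid=$(cat "$PIDFILE")
kill "$pid"
wait "$pid" 2>/dev/null || true   # reap the terminated job
rm -f "$PIDFILE"
echo "started and stopped PID $pid"
```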
start-stop-daemon blocks for process with infinite loop
1,402,969,883,000
How am I supposed to do this in systemd?

start unit.A
started unit.A
start unit.B
started unit.B
stop unit.A
stopped unit.A
stop unit.B
stopped unit.B

I know After=/Before= will order units in reverse on start/stop, like AB -> BA, but I need AB -> AB. My guess is that I have to merge unit.A with unit.B, something along the lines of unit.A.service: ExecStartPost=unit.A, and handle stop ordering in ExecStopPost=. EDIT: It seemed that a combination of Upholds= and PropagatesStopTo= would probably give me what I want, or very close, but it turns out those were added in systemd version 249, and I have to get it running on 241/247. I still have an academic interest in whether Upholds= and PropagatesStopTo= would have been the right call, had I access to systemd 249?
I ended up with this: thisunit_ExecStartPost.sh: systemctl start otherunit thisunit_ExecStop.sh: systemctl stop otherunit In my case, otherunit notices if thisunit fails, otherwise stopping otherunit may need to be moved to thisunit_ExecStopPost.sh, which is generally a better place (but not in my project).
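Expressed as a unit file, the arrangement described above looks roughly like this; a sketch only, where the unit name and script paths are placeholders:

```ini
# thisunit.service -- ties otherunit's lifecycle to this unit's
[Unit]
Description=thisunit (placeholder)

[Service]
ExecStart=/usr/local/bin/thisunit
# after thisunit has started, the hook runs: systemctl start otherunit
ExecStartPost=/usr/local/bin/thisunit_ExecStartPost.sh
# when thisunit is stopped, the hook runs: systemctl stop otherunit
ExecStop=/usr/local/bin/thisunit_ExecStop.sh
```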
Start and stop systemd units in the same order
1,402,969,883,000
vim /home/mytest.sh

rm -f /home/mytest/*

I want to write a service to execute the remove action. My expectations:

sudo systemctl stop mytest should delete the files in /home/mytest
sudo systemctl start mytest should do nothing.

Edit my service file: sudo vim /etc/systemd/system/mytest.service

[Unit]
Description=delete file

[Service]
Type=oneshot
ExecStart=/bin/true
ExecStop=/bin/bash /home/mytest.sh

[Install]
WantedBy=multi-user.target

Enable it: sudo systemctl enable mytest

Now I found a strange behaviour of the mytest service: sudo systemctl start mytest deletes the files in /home/mytest, and sudo systemctl stop mytest does nothing. Why? Please explain in detail.
This is explained in detail in the systemd service documentation, but you pretty much need to read all of it to understand what’s going on. The most pertinent part in this case is example 3; from that, the reader can gather that a oneshot service as you’ve declared it never becomes active, so its stop action will be run once its start action completes. To achieve what you’re after, you need a oneshot service which nevertheless becomes active:

[Unit]
Description=delete file

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStop=/bin/bash /home/mytest.sh

[Install]
WantedBy=multi-user.target
Why `systemctl stop service` can't invoke the service?
1,402,969,883,000
I want to turn on / enable the auditd daemon to record system events in Slackware 14.2. I could not see the auditd daemon when I ran the command below: ls /etc/rc*.d | grep "auditd" This means it does not exist. How do I enable auditd in Slackware? I know how to start, stop and restart services, but if the service does not exist, how do I get it up and running?
The distribution does not provide auditd, but one can find a SlackBuild build script for it. (The URLs below may change in the future.)

Create a working directory and acquire the required files:

mkdir /usr/local/src/audit && cd /usr/local/src/audit
wget http://people.redhat.com/sgrubb/audit/audit-2.3.6.tar.gz
wget https://slackbuilds.org/slackbuilds/14.2/system/audit.tar.gz

Unpack the build script, and move the source code into the build directory:

tar xf audit.tar.gz && rm audit.tar.gz
cd /usr/local/src/audit/audit
mv /usr/local/src/audit/audit-2.3.6.tar.gz /usr/local/src/audit/audit/

It's important to read the notes associated with the build:

less README
less README.SLACKWARE

Execute the build script:

/usr/local/src/audit/audit/audit.SlackBuild

After the software has been compiled successfully, the script packs all of the binaries, libraries, and configuration files required for the software to execute into a package file. Install it with installpkg:

installpkg /tmp/audit-2.3.6-x86_64_SBo.tgz

Optionally remove the source, temporary, and package files:

rm -rf /usr/local/src/audit/ /tmp/SBo/ /tmp/audit*

The audit subsystem is not enabled in Slackware kernels, so one must also rebuild the kernel with support for the audit subsystem in order to use auditd as intended.
Slackware 14.2 - Turn on the auditd daemon
1,402,969,883,000
I have a bash script that was written for a Debian-based distribution (System V) and I want to run it under CentOS 7. There is a part of the script that runs a command as a daemon like this:

start-stop-daemon --start --pidfile $PIDFILE \
    --chdir "$DIR" --startas $PROGRAM --name foo --chuid "$USER" -- $ARGS

And stops the daemon like this:

start-stop-daemon --stop --quiet --pidfile $PID \
    --user "$USER" --name foo --retry=TERM/30/KILL/5

My question is how to do something equivalent in CentOS 7. Is the daemon function in /etc/init.d/functions an alternative?
The daemon() shell function from /etc/rc.d/init.d/functions on RHEL/CentOS 6 is not an exact equivalent of Debian's start-stop-daemon. The fact that all of these van Smoorenburg rc tool libraries have subtly different helper command sets is one of the well-known problems with van Smoorenburg rc. You're using CentOS 7. You have systemd. Write a systemd service unit. Further reading https://unix.stackexchange.com/a/202731/5132 https://unix.stackexchange.com/a/247543/5132 status, killproc commands in Ubuntu Jonathan de Boyne Pollard (2015). The known problems with System 5 rc. Frequently Given Answers.
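A rough sketch of what such a unit might look like; every value below is a placeholder to be replaced with the concrete $DIR, $PROGRAM, $USER and $ARGS values from the original script:

```ini
# /etc/systemd/system/foo.service -- sketch of an equivalent to the
# start-stop-daemon invocation in the question
[Unit]
Description=foo daemon (placeholder)

[Service]
# --chuid "$USER"            ->  User=
User=someuser
# --chdir "$DIR"             ->  WorkingDirectory=
WorkingDirectory=/path/to/dir
# --startas $PROGRAM -- $ARGS -> ExecStart= (absolute path required)
ExecStart=/path/to/program --args
# --retry=TERM/30/KILL/5: systemd's stop behaviour is the same idea --
# send SIGTERM, wait, then SIGKILL after the timeout
KillSignal=SIGTERM
TimeoutStopSec=30

[Install]
WantedBy=multi-user.target
```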
Debian start-stop-daemon equivalent in CentOS
1,402,969,883,000
Trying to stop the transmission daemon on debian 11 gives me: start-stop-daemon --stop --chuid debian-transmission --exec /usr/bin/transmission-daemon -- --config-dir /var/lib/transmission-daemon/info No /usr/bin/transmission-daemon found running; none killed. But I'm pretty sure that's not the case: root@91c79f82a860:/var/www/html# ps -ef | grep transmission debian-+ 1347 1 0 19:02 ? 00:00:00 /usr/bin/transmission-daemon --config-dir /var/lib/transmission-daemon/info System information: root@91c79f82a860:/var/www/html# dpkg -s transmission-daemon | grep Version Version: 3.00-1 root@91c79f82a860:/var/www/html# lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 11 (bullseye) Also, I'm doing this inside a docker container, php:8.1.8-apache. I extracted the start-stop-daemon CMD from the /etc/init.d/transmission-daemon.
If you run start-stop-daemon under strace you'll see:

readlink("/proc/3130/exe", 0x7ffc68a5f890, 256) = -1 EACCES (Permission denied)

The numbers can be different, but the point is that reading the exe symlink results in EACCES. The solution is to run the docker container with the --cap-add=SYS_PTRACE or --privileged option.
start-stop-daemon can't stop the daemon "No $daemon found running; none killed."
1,402,969,883,000
I am currently following a tutorial that teaches how to create a queue in php. An infinite loop was created in a php script. I simplified the code in order to focus on the question at hand: while(1) { echo 'no jobs to do - waiting...', PHP_EOL; sleep(10); } I use PuTTy (with an SSH connection) to connect to the linux terminal in my shared hosting account (godaddy). If I run php queuefile.php in the terminal, I know it will run with no problems (already tested the code with a finite for loop instead of the infinite while loop). QUESTION: How could I exit out of the infinite loop once it has started? I have already read online the option of creating code that "checks" if it should continue looping with something like the following code: $bool = TRUE; while ($bool) { if(!file_exists(allow.txt)){$bool = FALSE} //... the rest of the code though I am curious if there might be a command I can type in the terminal, or a set of keys I can push that will cause the script to terminate. If there is any way of terminating the script, or if there is a better way to make the previous "check", I would love your feedback!
Assuming that you didn't install any signal handlers, you can send SIGINT to the process by pressing Ctrl-C if it is in the foreground. If it is not in the foreground, you can use pkill -INT -f '^php queuefile.php$'. By default, php will quit on receiving SIGINT. If, for some reason, there is a signal handler installed, you can also try the TERM signal (the default with pkill), and ultimately, if you can find no reasonable signal to kill the process with, you can use the (violent and forceful) SIGKILL, by passing -KILL to pkill in place of -INT.
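The signal mechanics can be demonstrated with any long-running process; a sketch using sleep in place of the PHP loop (pkill selects the target by name, kill by PID, but the delivered signal is the same):

```shell
# A long-running process stands in for the looping PHP script:
sleep 100 &
pid=$!

# Deliver the TERM signal, which is what pkill sends by default:
kill -TERM "$pid"
wait "$pid"
status=$?
echo "exit status: $status"   # 128 + 15 (SIGTERM) = 143
```

The exit status reported by the shell encodes which signal terminated the process (128 plus the signal number), which is a handy way to confirm the kill actually worked.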
Stopping infinite loop from php script run in linux terminal
1,402,969,883,000
When I start a LSBInitScript as a service I get an SSL error because my script uses an SSL certificate to operate. The certificate lies in the same directory as the script itself. Why do I get the error when starting as service but when called in the console I don't? SSL Error when starting the service: ubuntu@ip-0-0-0-0:/heartbeat/deviceAPI$ sudo service deviceAPIClient.service start * DeviceAPIClient process is not running * Starting the process DeviceAPIClient Traceback (most recent call last): File "/heartbeat/deviceAPI/DeviceAPIClient.py", line 120, in <module> main() File "/heartbeat/deviceAPI/DeviceAPIClient.py", line 90, in main res = register(instanceName) File "/heartbeat/deviceAPI/DeviceAPIClient.py", line 40, in register verify = 'cloud-server-ca-chain.pem' File "/usr/lib/python2.7/dist-packages/requests/api.py", line 88, in post return request('post', url, data=data, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in send r = adapter.send(request, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 385, in send raise SSLError(e) requests.exceptions.SSLError: [Errno 185090050] _ssl.c:344: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib No error when I start the python script in the console: ubuntu@ip-0-0-0-0:/heartbeat/deviceAPI$ /heartbeat/deviceAPI/DeviceAPIClient.py Successful registering at cloud with 02-57-49-9c-d4 Using API endpoint https://mydomain Update API endpoint (not used in Demo) https://mydomain.com/device-api Sending Data to Cloud... Update As suggested by @mrc02_kr I've put the certificate cloud-server-ca-chain.pem into the folder /etc/ssl/certs. 
The error changed to a private key issue (`SSL_CTX_use_PrivateKey_file`): ubuntu@ip-0-0-0-0:/heartbeat/deviceAPI$ sudo service deviceAPIClient.service start * DeviceAPIClient process is not running * Starting the process DeviceAPIClient Traceback (most recent call last): File "/heartbeat/deviceAPI/DeviceAPIClient.py", line 120, in <module> main() File "/heartbeat/deviceAPI/DeviceAPIClient.py", line 90, in main res = register(instanceName) File "/heartbeat/deviceAPI/DeviceAPIClient.py", line 40, in register verify = '/etc/ssl/certs/cloud-server-ca-chain.pem' File "/usr/lib/python2.7/dist-packages/requests/api.py", line 88, in post return request('post', url, data=data, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in send r = adapter.send(request, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 385, in send raise SSLError(e) requests.exceptions.SSLError: [Errno 336265218] _ssl.c:355: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib You need to know that the script uses a private key to identify itself and the certificate of the cloud server to identify the server. Do I need to store the private key in a special folder as well? Update 2: I can install the private key in /etc/ssl/private and adapt the script accordingly.
Probably there is an error during service startup because you provided a relative path to the certificate; there should be an absolute path to the certificate file. When the system starts a service, it doesn't change $PWD to the script's location. You can copy the certificate to /etc/ssl/certs (according to this answer) and change: verify = 'cloud-server-ca-chain.pem' to: verify = '/etc/ssl/certs/cloud-server-ca-chain.pem' in your code (File "/heartbeat/deviceAPI/DeviceAPIClient.py", line 40). You can also modify your init script to change directory to the location of the certificate and then start the Python program.
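The underlying issue is easy to reproduce in plain shell; a sketch with invented file names:

```shell
# A relative path only resolves against the current working directory,
# which for a service is usually NOT the script's directory.
workdir=$(mktemp -d)
cd "$workdir"
mkdir app
echo "certificate data" > app/cert.pem

# From the script's own directory the relative name works:
( cd app && cat cert.pem > /dev/null ) && rel_from_app=ok

# From the service manager's working directory it does not:
cat cert.pem > /dev/null 2>&1 || rel_from_elsewhere=fails

# An absolute path works from anywhere:
cat "$workdir/app/cert.pem" > /dev/null && abs_anywhere=ok

echo "relative from app dir: $rel_from_app"
echo "relative from service CWD: $rel_from_elsewhere"
echo "absolute path: $abs_anywhere"
```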
start-stop-daemon Python script as service using SSL
1,662,120,855,000
Say I have a C program main.c that statically links to libmine.a. Statically linking to a library causes library functions to be embedded into the main executable at compile time. If libmine.a were to feature functions that weren't used by main.c, would the compiler (e.g. GCC) discard these functions? This question is inspired by the "common messaging" that using static libraries makes executables larger, so I'm curious if the compiler at least strips away unused code from an archive file.
By default, linkers handle object files as a whole. In your example, the executable will end up containing the code from main.c (main.o), and any object files from libmine.a (which is an archive of object files) required to provide all the functions used by main.c (transitively). So the linker won’t necessarily include all of libmine.a, but the granularity it can use isn’t functions (by default), it’s object files (strictly speaking, sections). The reason for this is that when a given .c file is compiled to an object file, information from the source code is lost; in particular, the end of a function isn’t stored, only its start, and since multiple functions can be combined, it’s very difficult to determine from an object file what can actually be removed if a function is unused. It is however possible for compilers and linkers to do better than this if they have access to the extra information needed. For example, the LightspeedC programming environment on ’80s Macs could use projects as libraries, and since it had the full source code in such cases, it would only include functions that were actually needed. On more modern systems, the compiler can be told to produce object files which allow the linker to handle functions separately. With GCC, build your .o files with the -ffunction-sections -fdata-sections options enabled, and link the final program with the --gc-sections option. This does have an impact, notably by preventing certain categories of optimisation; see discard unused functions in GCC for details. Another option you can use with modern compilers and linkers is link-time optimisation; enable this with -flto. When optimisation is enabled (e.g. -O2 when compiling the object files), the linker will not include unused functions in the resulting binary. This works even without -ffunction-sections -fdata-sections.
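The effect is observable with a small experiment; a sketch in which the file and function names are invented, assuming a C compiler and binutils are available:

```shell
# Build a two-function static library, link it with and without
# section garbage collection, and compare the surviving symbols.
dir=$(mktemp -d); cd "$dir"

cat > lib.c <<'EOF'
int used(void)   { return 42; }
int unused(void) { return 7;  }
EOF
cat > main.c <<'EOF'
int used(void);
int main(void) { return used() == 42 ? 0 : 1; }
EOF

# One section per function, so the linker can drop them individually:
cc -c -ffunction-sections -fdata-sections lib.c main.c
ar rcs libmine.a lib.o

cc main.o libmine.a -o prog_plain                  # default link
cc -Wl,--gc-sections main.o libmine.a -o prog_gc   # GC unused sections

# The whole lib.o is pulled in by the plain link, so unused() survives
# there; with --gc-sections its section is discarded.
plain=$(nm prog_plain | grep -c ' T unused$' || true)
gc=$(nm prog_gc | grep -c ' T unused$' || true)
echo "plain=$plain gc=$gc"
```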
Do C compilers discard unused functions when statically linking to .a file?
1,662,120,855,000
What is the purpose of bash-static compared to regular bash? Is it good as a rescue shell or what is it good for?
You are probably referring to the .deb package bash-static which is a statically linked version of bash, which means that it is a stand-alone program that does not depend on anything else (especially libraries) on your disk. I have used it on two different occasions: Installed on servers so that I always have a working bash, even in catastrophic situations when ld.so or libc are not functioning (absent, corrupt, ill-configured, ...) Delivered to customers along with my own shell scripts, so that I'm sure they have the proper version of bash, since I was using some advanced features that were introduced with bash v4.3.
What is bash-static used for?
1,662,120,855,000
I installed an application (e.g. fdisk), but it requires libraries for execution. I am looking for a utility/tool which will help me create a static binary from already installed binaries, so that I can use it anywhere. The only reliable tool that I found is ErmineLight from here, but it is shareware. Is there any open-source software available for the same? EDIT: fdisk is just an example. Most of the time I work on LFS, so if I have to use any utility, I need to follow the steps: download the source, configure, make, make test, make install. So, just to save time, I am looking for a solution in which I make a static binary from Debian, Fedora or another distribution, try it on LFS, and if it works fine or as per my requirement, I will go with source-code compilation.
If fdisk is just an example and your goal is really to make static executables from dynamic executables, try Elf statifier. There's even a comparison with Ermine (by the Ermine vendor, so caveat (non-)emptor). Note that if you have many executables, their combined size is likely to be more than the combined size of the dynamically-linked executables plus the necessary dynamic libraries. There are features of GNU libc that may not work in statically-linked executables, such as NSS (databases of user names, host names, etc.) and locale-related features. If your goal is to have a small, portable suite of system tools, you're looking for BusyBox, a suite of core tools intended for embedded systems (including fdisk). You may also be interested in a smaller standard library than Glibc, for example dietlibc or µClibc.
Creating Static Binary
1,662,120,855,000
I'm playing around with chroot environments, and I'd like to have a portable C compiler so that I can easily set up some basic build-tools in each environment without having to move too many libraries around. Gcc seems pretty bloaty for what I want to do, but if it's reasonably easy to build a static gcc I wouldn't mind a few dozen megabytes. I am also looking at the Tiny C Compiler, which is smaller but still looks like it's got an impressive feature set. However, running ./configure --help in the source directory doesn't list any option for building tcc statically, and I'm not sure how it might be done otherwise.
Building a static binary should be as simple as running gcc with -static, or if ld is being called directly use -Bstatic. Try CFLAGS=-static make after running configure. If it fails, the results will be obvious, e.g. rafts of undefined references at link time.
How can I get a static C compiler?
1,662,120,855,000
I have a simple C program. I run: $ gcc Q1.c -Wall -save-temps -o Q1 Then I inspect the executable generated: $ objdump -f Q1 Q1: file format elf32-i386 architecture: i386, flags 0x00000112: EXEC_P, HAS_SYMS, D_PAGED start address 0x080483b0 Then I compile it with static linking: $ gcc Q1.c -Wall -save-temps -static -o Q1 and inspect the file again: $ objdump -f Q1 Q1: file format elf32-i386 architecture: i386, flags 0x00000112: EXEC_P, HAS_SYMS, D_PAGED start address 0x08048e08 What effect does static and dynamic linking have on the start address of the program? The start address is the address of main(), right?
The start address is the address of main(), right? Not really: The start of a program isn't really main(). By default, GCC will produce executables whose start address corresponds to the _start symbol. You can see that by doing a objdump --disassemble Q1. Here's the output on a simple program of mine that only does return 0; in main(): 0000000000400e30 <_start>: 400e30: 31 ed xor %ebp,%ebp 400e32: 49 89 d1 mov %rdx,%r9 400e35: 5e pop %rsi 400e36: 48 89 e2 mov %rsp,%rdx 400e39: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp 400e3d: 50 push %rax 400e3e: 54 push %rsp 400e3f: 49 c7 c0 a0 15 40 00 mov $0x4015a0,%r8 400e46: 48 c7 c1 10 15 40 00 mov $0x401510,%rcx 400e4d: 48 c7 c7 40 0f 40 00 mov $0x400f40,%rdi 400e54: e8 f7 00 00 00 callq 400f50 <__libc_start_main> 400e59: f4 hlt 400e5a: 66 90 xchg %ax,%ax 400e5c: 0f 1f 40 00 nopl 0x0(%rax) As you can see at address 400e54, _start() in turn invokes __libc_start_main, which initializes the necessary stuff (pthreads, atexit,...) and finally calls main() with the appropriate arguments (argc, argv and env). Okay, but what does it have to do with the start address changing? When you ask gcc to link statically, it means that all the initialization that I mentioned above has to be done using functions that are in the executable. And indeed if you look at the sizes of both executables, you'll find that the static version is way larger. On my test, the static version is 800K while the shared version is only 6K. The extra functions happen to be placed before _start(), hence the change in start address. 
Here's the layout of the static executable around start(): 000000000049e960 r translit_from_tbl 0000000000400a76 t _i18n_number_rewrite 0000000000400bc0 t fini 0000000000400bd0 t init_cacheinfo 0000000000400e30 T _start 0000000000400e60 t deregister_tm_clones 0000000000400e90 t register_tm_clones 0000000000400ed0 t __do_global_dtors_aux And here's the layout of the shared executable: 00000000004003c0 T _start 00000000004003f0 t deregister_tm_clones 00000000004004b0 T main 00000000004004c0 T __libc_csu_init 00000000006008a0 B _end 0000000000400370 T _init As a result, I get slightly different start addresses: 0x400e30 in the static case and 0x4003c0 in the shared case.
Effect of static and dynamic linking on start address
1,662,120,855,000
I'm not very knowledgeable on this topic, and therefore can't figure out why the following command does not work: $ gfortran -o dsimpletest -O dsimpletest.o ../lib/libdmumps.a \ ../lib/libmumps_common.a -L/usr -lparmetis -lmetis -L../PORD/lib/ \ -lpord -L/home/eiser/src/scotch_5.1.12_esmumps/lib -lptesmumps -lptscotch \ -lptscotcherr /opt/scalapack/lib/libscalapack.a -L/usr/lib/openmpi/ \ -lmpi -L/opt/scalapack/lib/librefblas.a -lrefblas -lpthread /usr/bin/ld: cannot find -lrefblas collect2: ld returned 1 exit status This happens when compiling the mumps library. The above command is executed by make. I've got the librefblas.a in the correct path: $ ls /opt/scalapack/lib/ -l total 20728 -rw-r--r-- 1 root root 619584 May 3 14:56 librefblas.a -rw-r--r-- 1 root root 9828686 May 3 14:59 libreflapack.a -rw-r--r-- 1 root root 10113810 May 3 15:06 libscalapack.a -rw-r--r-- 1 root root 653924 May 3 14:59 libtmg.a Question 1: I thought the -L switch of ld takes directories, why does it refer to the file directly here? If I remove the librefblas.a from the -L argument, I get a lot of "undefined reference" errors. Question 2: -l should imply looking for .a and then looking for .so, if I recall correctly. Is it a problem that I don't have the .so file? I tried to find out by using gfortran -v ..., but this didn't help me debugging it.
I was able to solve this with the help of the comments, particular credit to @Mat. Since I wanted to compile the openmpi version, it helped to use mpif90 instead of gfortran, which, on my system, is $ mpif90 --showme /usr/bin/gfortran -I/usr/include -pthread -I/usr/lib/openmpi -L/usr/lib/openmpi -lmpi_f90 -lmpi_f77 -lmpi -ldl -lhwloc
Why can't ld find this library?
1,662,120,855,000
I would like to build standalone bash binaries, which would hopefully work on a good portion of the linux distributions out there. Complete coverage is definitely not a goal. How would I approach this? Best-effort suggestions welcome. I understand that each distribution has e.g. unique readline implementations. If it is feasible, I would like to statically link a fixed version and ship it with the standalone bash on e.g. an usb stick. Hope someone can help me with that :) cheers
Download the bash-static package from Debian and extract the executable. ar p bash-static_*.deb data.tar.xz | tar -xJ ./bin/bash-static If you want to see how it's done, look in the sources. The build instructions are in debian/rules. There's a lot of expansion going on, so run it: debian/rules static-build I think all you need is this (but I haven't tried): ./configure --enable-static-link make The question is why you'd want to do that. Virtually all distributions already have bash installed as /bin/bash, and it isn't optional. It would be more useful with zsh which in most distributions is available, but not installed by default. For zsh, you need (again, untried): ./configure --enable-ldflags=-static --disable-dynamic --disable-dynamic-nss make
Build a standalone bash
1,662,120,855,000
I was wondering how prelinking works. If I prelink my whole system and then delete glibc, will the system 'get up' after restart?
Well of course it won't, because you won't have a C library anymore. All prelink does is to try and calculate an optimal load address for each library so that no program will have overlapping libraries, then update the libraries so that they default to loading at that address. Then when a program is run the libraries it uses are unlikely to need to be relocated as they can probably be loaded at their default address.
How does prelink work
1,662,120,855,000
When creating a Windows static library, we simply create a .lib file which should be included in the linker path. When creating a Windows shared library, along with the .dll we also generate a .lib file. This lib file contains the signatures of the API exposed by the library. There are two ways to use this library: Either we can directly refer to the library API in our project and add the path to the .lib file in the linker properties; some people call this a statically linked dynamic library. Or we can explicitly load the dynamic library during runtime; in this case we need not specify the lib file path for the linker. Call it a dynamically linked dynamic library. My question is: do we have something similar for shared libraries on Linux also, or just the static library (.a) and shared library (.so)? I know how to include a static library on Linux using the gcc -l option. Can we use the same option for including a dynamic library (.so) also?
I can't say I understand what a "statically linked dynamic library" is, nor do I know anything about signatures contained in libraries (sounds interesting though: does this mean the linker is able to check for type mismatches in arguments and return types at link time? ELF definitely does not have such a feature.) so this answer will not be from a comparative point of view. Also, as your question is very broad, the answer will be superficial in detail. Yes, you can create either a static library (.a) or a shared library (.so). When the linker looks for libraries requested with -l, it will prefer the shared library if both exist, unless overridden with an option like -static. When building a library from source code, one only needs to build it as a static library (.a) or as a shared library (.so), not both. Still, quite a few packages' build scripts are set up to build both versions (which requires compiling twice, once with position independent code and once without) in order to give consumers of the library the choice of which one to link with. The necessary pieces of a static library are totally incorporated into the binary that is built. There is no need to have the .a file available at run time. In contrast, a shared library that was linked to a binary has to be available at run time, although the run-time dynamic linker will typically search for it under a modified name, its "soname" (usually libsomething.so at link time and libsomething.so.<integer> at run time), which is a feature that allows multiple different versions of a library with slightly different APIs to be installed in the system at the same time. In your question you also mention explicitly loading dynamic libraries at run time. This is often done for modular applications or applications with plugins. In this case, the library in question (often called a "module" or "plugin") is not linked with the application at all and the build-time linker knows nothing of it.
Instead, the application developer must write code to call the run-time dynamic linker and ask it to open a library by filename or full pathname. Sometimes the names of the modules to open are listed in the application's configuration file, or there is some other piece of application logic that decides which modules are or aren't needed.
Types of dynamic linking in Unix/Linux environments
1,662,120,855,000
I'm having trouble trying to build a static binary of ffmpeg - I've got almost the whole build working, with the exception of two libs - libvorbis and libmp3lame. These two libs are failing during ./configure, specifically on undefined functions from the math.h / libm:

libvorbis:

gcc -L/vol/build/lib -static -static-libstdc++ -static-libgcc -Wl,--as-needed -Wl,-z,noexecstack -I/vol/build/include -L/vol/build/lib -o /tmp/ffconf.UKKLGhCv/test /tmp/ffconf.UKKLGhCv/test.o -lvorbis -lm -logg -lstdc++ -lpthread -lexpat -ldl -lm --enable-libopencore-amrnb
/vol/build/lib/libvorbis.a(envelope.o): In function `_ve_envelope_init':
envelope.c:(.text+0x983): undefined reference to `_ZGVbN2v_sin'
envelope.c:(.text+0x9a9): undefined reference to `_ZGVbN2v_sin'
/vol/build/lib/libvorbis.a(lsp.o): In function `vorbis_lsp_to_curve':
lsp.c:(.text+0x650): undefined reference to `_ZGVbN2v_cos'
lsp.c:(.text+0x669): undefined reference to `_ZGVbN2v_cos'

libmp3lame:

gcc -L/vol/build/lib -static -static-libstdc++ -static-libgcc -Wl,--as-needed -Wl,-z,noexecstack -o /tmp/ffconf.dC4w1f5B/test /tmp/ffconf.dC4w1f5B/test.o -lmp3lame -lm -lstdc++ -lpthread -lexpat -ldl -lm --enable-libopencore-amrnb
/vol/build/lib/libmp3lame.a(psymodel.o): In function `init_s3_values':
psymodel.c:(.text+0x14d3): undefined reference to `_ZGVbN2v___exp_finite'
psymodel.c:(.text+0x14fa): undefined reference to `_ZGVbN2v___exp_finite'
/vol/build/lib/libmp3lame.a(psymodel.o): In function `psymodel_init':
psymodel.c:(.text+0xb62d): undefined reference to `_ZGVbN4vv___powf_finite'
psymodel.c:(.text+0xb677): undefined reference to `_ZGVbN4vv___powf_finite'
psymodel.c:(.text+0xb6c4): undefined reference to `_ZGVbN4vv___powf_finite'
psymodel.c:(.text+0xb711): undefined reference to `_ZGVbN4vv___powf_finite'
psymodel.c:(.text+0xb75b): undefined reference to `_ZGVbN4vv___powf_finite'
/vol/build/lib/libmp3lame.a(psymodel.o):psymodel.c:(.text+0xb7a2): more undefined references to `_ZGVbN4vv___powf_finite' follow
/vol/build/lib/libmp3lame.a(util.o): In function `fill_buffer':
util.c:(.text+0x28a6): undefined reference to `_ZGVbN2v_cos'
util.c:(.text+0x28cc): undefined reference to `_ZGVbN2v_cos'
util.c:(.text+0x28fb): undefined reference to `_ZGVbN2v_cos'
util.c:(.text+0x2921): undefined reference to `_ZGVbN2v_cos'
util.c:(.text+0x29cc): undefined reference to `_ZGVbN2v_sin'
util.c:(.text+0x29e8): undefined reference to `_ZGVbN2v_sin'

I can't figure out how to get these to successfully build. From what I understand, passing the -lm option should be enough, but apparently isn't. I checked for the presence of libm.a, which is located at /usr/lib/x86_64-linux-gnu/libm.a, and I also tried to pass this directory in the -L flags, but no difference. The libs build fine when removing the -static flag, but the resulting binary is (duh) linked against libm.so. Just in case, these are the flags I'm using to build the two libraries:

libvorbis:

./configure --prefix=${CMAKE_BINARY_DIR} --disable-shared --disable-oggtest

libmp3lame:

./configure --prefix=${CMAKE_BINARY_DIR} --disable-shared

I'd appreciate any pointers on how to fix or debug this any further.

Edit: after playing around with it some more, it seems like libm is getting linked in - when I remove the -lm flag, I'm getting a ton more undefined references - sin, cos, __pow_finite, etc. When I put it back in, most of these go away and only the mangled symbols, such as _ZGVbN4vv___powf_finite and _ZGVbN2v_cos, remain.
Well, I managed to solve it - googling the mangled symbols such as _ZGVbN2v_cos led me to this patch mentioning vector math, and in combination with ldd's output during dynamic linking mentioning libmvec, I realized that I might have to link that in as well. For libmp3lame, it has to be linked in before libm: gcc -L/vol/build/lib -static -o /tmp/ffconf.dC4w1f5B/test /tmp/ffconf.dC4w1f5B/test.o -lmp3lame -lmvec -lm For libvorbis, the order of -lm and -lmvec doesn't matter, it builds either way.
Error during static build of libvorbis and libmp3lame
1,662,120,855,000
I'm using Arch Linux and have successfully built https://github.com/JosephP91/curlcpp However, I have no idea how to build the example program. I keep getting fatal error: curl_easy.h: No such file or directory Of course, this is because I don't know how to add it to the library/include path. In the <curlcpp root>/build/src/ folder, I have a libcurlcpp.a file, which has all the .o files, and <curlcpp root>/include/ has all the .h files needed. I've tried commands specified in the README, trying -I library/include, and other combinations. Do I need to manually copy the file somewhere or run some command line app to make it system wide? I don't think ldconfig is the right program since that's for dynamic libraries.
What exact command do you use to build the executable of your program? You need to tell g++ about additional directories with project-specific headers and libraries. If you have libcurlcpp.a copied into $proj_home/lib and libcurlcpp.h copied into $proj_home/hdr this will be something like:

$ g++ your_program.cpp -Ihdr -Llib -lcurlcpp -static -o your_executable

-I specifies an additional directory with headers
-L specifies an additional directory with libraries
-l specifies a particular library that you want to link (without the lib prefix and the .a/.so suffix)
-static tells g++ to prefer static libraries (*.a) over dynamic ones (*.so) (the default is the reverse)

Paths for -I and -L are specified with no space between the flag and the path itself; the same goes for -l. Arrange the project Makefile accordingly once you've figured out the particular command that works for you.
how to add curlcpp to the library/include path?
1,662,120,855,000
I have a number of legacy codes that need to be compiled with specific (and often conflicting) libraries. To be specific I have a program which can only be compiled with g77 and another program which can only be compiled with gfortran. Let's call the first program makee and the second program UVES_popler. When compiling (and running) they both need to be linked to the version of pgplot that is compiled with the respective compiler. So, compiling makee with g77 needs to run with pgplot that has been compiled by g77 as well. And the same, respectively, for UVES_popler and gfortran. Let's assume that I can compile pgplot with both g77 and gfortran -- what is the best practice for organizing my bashrc? Should I create a bash function for each program and have the proper links and LD_LIBRARY_PATHs? Something like:

runmakee() {
    export LD_LIBRARY_PATH=/path/to/g77-pgplot/
    makee
}

And (probably relatedly) is there a way of setting flags in Makefiles so that the proper libraries are called when compiling these programs respectively?
Your bash functions should work, but the "usual" way of doing that is to write a wrapper script for each executable, and set anything that needs to be set in there. (You can change the executable name to foo.bin for instance, and call the wrapper script foo to make it easy to call.) For ELF targets (not sure about other object formats), you can also set the -rpath linker option to hard-code a runtime library search path in your executables directly. With gcc (for C code), for the final link stage, it would look like: gcc ... -Wl,-rpath,/your/hardcoded/path ... I assume the Fortran compilers have similar options, or that you can change the linker options directly yourself (in which case the option is -rpath /your/path).
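If you end up with more than two programs, the per-program wrapper idea can also be centralized in one launcher. Here is a hedged sketch in Python (the program names and library paths are taken from the question; the mapping itself is an assumption you would fill in):

```python
import os
import subprocess

# Hypothetical mapping from program name to the pgplot build it needs.
LIBDIRS = {
    "makee": "/path/to/g77-pgplot",
    "UVES_popler": "/path/to/gfortran-pgplot",
}

def env_for(prog):
    """Environment with prog's library directory prepended to LD_LIBRARY_PATH."""
    env = dict(os.environ)
    old = env.get("LD_LIBRARY_PATH", "")
    env["LD_LIBRARY_PATH"] = LIBDIRS[prog] + (":" + old if old else "")
    return env

def run(prog, *args):
    """Launch prog with its matching runtime library path."""
    return subprocess.run([prog, *args], env=env_for(prog))
```

Calling run("makee") would then launch makee against the g77-built pgplot, the same effect as the shell wrapper, without touching your login environment. The -rpath approach remains the cleaner fix, since it needs no wrapper at all.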
How to link different (incompatible) libraries at runtime depending on program?
1,662,120,855,000
I'm having trouble compiling a simple, sample program against glib on Ubuntu. I get these errors. I can get it to compile but not link with the -c flag, which I believe means I have the glib headers installed, but it's not finding the shared object code. See also the make file below.

$ make re
gcc -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -lglib-2.0 re.c -o re
/tmp/ccxas1nI.o: In function `print_uppercase_words':
re.c:(.text+0x21): undefined reference to `g_regex_new'
re.c:(.text+0x41): undefined reference to `g_regex_match'
re.c:(.text+0x54): undefined reference to `g_match_info_fetch'
re.c:(.text+0x6e): undefined reference to `g_print'
re.c:(.text+0x7a): undefined reference to `g_free'
re.c:(.text+0x8b): undefined reference to `g_match_info_next'
re.c:(.text+0x97): undefined reference to `g_match_info_matches'
re.c:(.text+0xa7): undefined reference to `g_match_info_free'
re.c:(.text+0xb3): undefined reference to `g_regex_unref'
collect2: ld returned 1 exit status
make: *** [re] Error 1

Makefile used:

# Need to install libglib2.0-dev, a system-specific install that will
# provide a value for pkg-config
INCLUDES=$(shell pkg-config --libs --cflags glib-2.0)
CC=gcc $(INCLUDES)
PROJECT=re

# Targets
full: clean compile
clean:
	rm $(PROJECT)
compile:
	$(CC) $(PROJECT).c -o $(PROJECT)

.c code being compiled:

#include <glib.h>

void print_upppercase_words(const gchar *string)
{
    /* Print all uppercase-only words. */
    GRegex *regex;
    GMatchInfo *match_info;

    regex = g_regex_new("[A-Z]+", 0, 0, NULL);
    g_regex_match(regex, string, 0, &match_info);
    while (g_match_info_matches(match_info)) {
        gchar *word = g_match_info_fetch(match_info, 0);
        g_print("Found %s\n", word);
        g_free(word);
        g_match_info_next(match_info, NULL);
    }
    g_match_info_free(match_info);
    g_regex_unref(regex);
}

int main()
{
    gchar *string = "My body is a cage. My mind is THE key.";
    print_uppercase_words(string);
}

Strangely, when I run glib-config it doesn't like that command -- though I don't know how to tell bash or make how to just use one over the other when it complains that gdlib-config is in these 2 packages.

$ glib-config
No command 'glib-config' found, did you mean:
 Command 'gdlib-config' from package 'libgd2-xpm-dev' (main)
 Command 'gdlib-config' from package 'libgd2-noxpm-dev' (main)
glib-config: command not found
glib is not your problem. This is:

re.c:(.text+0xd6): undefined reference to `print_uppercase_words'

What it's saying is you're calling a function print_uppercase_words, but it can't find it. And there's a reason. Look very closely. There's a typo:

void print_upppercase_words(const gchar *string)

After you fix that, you might still have a problem because you are specifying the libraries before the modules that require those libraries. In short, your command should be written

gcc -o re re.o -lglib-2.0

so that -lglib-2.0 comes after re.o. So I'd write your Makefile more like this:

re.o: re.c
	$(CC) -I<includes> -o $@ -c $^

re: re.o
	$(CC) $^ -l<libraries> -o $@

In fact, if you set the right variables, make will figure it all out for you automatically.

CFLAGS=$(shell pkg-config --cflags glib-2.0)
LDLIBS=$(shell pkg-config --libs glib-2.0)
CC=gcc

re: re.o
Linker errors when compiling against glib...?
1,662,120,855,000
I don't yet fully understand how segfaults and backtraces work, but I get the impression that if the function at the top of the list references "glib" or "gobject", you have Bad Issues(TM) with libraries that usually shouldn't go wrong. Well, that's what I'm getting here, from two completely different programs. The first is the latest build of irssi, compiled (cleanly, without any glitches or errors) directly from github.com. Program received signal SIGSEGV, Segmentation fault. 0xb7cf77ea in g_ascii_strcasecmp () from /usr/lib/libglib-2.0.so.0 (gdb) bt #0 0xb7cf77ea in g_ascii_strcasecmp () from /usr/lib/libglib-2.0.so.0 #1 0x08103455 in config_node_section_index () #2 0x081036b0 in config_node_traverse () #3 0x080fb674 in settings_get_bool () #4 0x08090bce in command_history_init () #5 0x08093d81 in fe_common_core_init () #6 0x0805a60d in main () The second program I'm having issues with is the NetSurf web browser (which also compiles 100% cleanly) when built against GTK (when not built to use GTK it runs fine): Program received signal SIGSEGV, Segmentation fault. 0xb7c1bace in g_type_check_instance_cast () from /usr/lib/libgobject-2.0.so.0 (gdb) bt #0 0xb7c1bace in g_type_check_instance_cast () from /usr/lib/libgobject-2.0.so.0 #1 0x080cd31c in nsgtk_scaffolding_set_websearch () #2 0x080d05da in nsgtk_new_scaffolding () #3 0x080dafd8 in gui_create_browser_window () #4 0x0809e806 in browser_window_create () #5 0x080c2fa9 in ?? () #6 0x0807c09d in main () I'm 99.99% confident the issues I'm looking at are some kind of glitch-out with glib2. The rest of my system works 100% fine, just these two programs are doing weird things. I'm similarly confident that if I tried to build other programs that used these libraries, they would quite likely fail too. 
Obviously, poking glib and friends - and making even one tiny little mistake - is an instant recipe to make practically every single program in the system catastrophically break horribly (and I speak from experience with another system, long ago :P). Given I have absolutely no idea what I'm doing with this kind of thing and I know it, I am loath to go there; I'd like to keep my current system configuration functional :)

I was thinking of compiling a new version of glib2 (and co.), then statically linking these programs against it. I just have no idea how to do this - what steps do I need to perform?

An alternative idea I had was to ./configure --prefix=/usr; make; make install exactly the same version of glib I have right now "back into" my system, to reinstall it. I see that the associated core libraries all end with "0.3200.4":

-rwxr-xr-x 1 root root 1.4M Aug  9  2012 /usr/lib/libgio-2.0.so.0.3200.4
-rwxr-xr-x 1 root root 1.2M Aug  9  2012 /usr/lib/libglib-2.0.so.0.3200.4
-rwxr-xr-x 1 root root  11K Aug  9  2012 /usr/lib/libgmodule-2.0.so.0.3200.4
-rwxr-xr-x 1 root root 308K Aug  9  2012 /usr/lib/libgobject-2.0.so.0.3200.4
-rwxr-xr-x 1 root root 3.7K Aug  9  2012 /usr/lib/libgthread-2.0.so.0.3200.4

Would that possibly work, or break things horribly? :S If it would possibly work, what version does "0.3200.4" translate to? What other ideas can I try? I'm not necessarily looking for fixes for glib itself that correct whatever fundamental error is going on - it isn't affecting me that badly. I just want to get irssi and NetSurf to run correctly.
I get the impression that if the function at the top of the list references "glib" or "gobject", you have Bad Issues(TM) with libraries that usually shouldn't go wrong. You get the wrong impression, if you mean this indicates the flaw is probably in those libraries. It doesn't mean that; it more likely means that's where an earlier mistake finally blew up. By nature C doesn't have a lot of runtime safeguards in it, so you can easily pass arguments that will compile but aren't validated any further (unless you do it yourself). Simple example: int main (void) { char whoops[3] = { 'a', 'b', 'c' }; if (strcmp(whoops, "abcdef")) puts(whoops); Passes an unterminated string to several different string functions. This will compile no problem, and most likely run okay because the memory violation will be very slight, but it could seg fault in strcmp() or puts(). That doesn't mean the strcmp() implementation is buggy; the mistake is clearly right there in main(). Functions like those can't logically determine if an argument passed is properly terminated (this is what I meant WRT runtime checks and C "by nature" lacking them). There's not much point in stipulating the compiler should check, because most of the time the data won't be hard coded like that. The stuff in the middle of a backtrace doesn't necessarily play a role either, although it could. Generally the place to start looking is the last entry; that's where the problem has been traced back to. But the bug could always be anywhere. Often comparing a backtrace to errors reported by a mem checker like valgrind can help narrow things down. WRT your examples there may be a lot to sift through though; last I checked valgrind and gtk were not happy playmates. I was thinking of compiling a new version of glib2 (and co.), then statically linking these programs against it. You could, although I don't see any reason to believe anything will work any better because of it. It's grasping at straws. 
You can't actually debug the problem yourself, which is understandable, so you consider what you could try out of desperation. Most likely you will just be wasting a lot of time and frustrating yourself.

I'm 99.99% confident the issues I'm looking at are some kind of glitch-out with glib2.

I'm 99% confident you are overconfident there. While again the bug could be anywhere, as a rule of thumb, consider the most widely tested parts the least likely culprits. In this case, glib is pretty ubiquitous, whereas irssi and NetSurf are relatively obscure.

The best thing for you to do is probably file a bug report. Backtraces are usually much appreciated there. Start with irssi and NetSurf; if you go straight to glib they will, reasonably enough, just say there's no reason for them to believe it's their problem unless you can demonstrate it (which all this doesn't). If on the other hand the irssi people determine it is in glib, they'll probably want to pursue that themselves.
Getting segmentation faults from inside glib and gobject - I THINK I want to build/statically link against an independant version of glib2
1,662,120,855,000
I've got reasons for not wanting to rely on a specific build system. I don't mean to dis anybody's favorite, but I really just want to stick to what comes with the compiler. In this case, GCC. Automake has certain compatibility issues, especially with Windows. <3 GNU make is so limited that it often needs to be supplemented with shell scripts. Shell scripts can take many forms, and to make a long story short and probably piss a lot of people off, here is what I want to do -- The main entry point is God. Be it a C or C++ source file, it is the center of the application. Not only do I want the main entry point to be the first thing that is executed, I also want it to be the first thing that is compiled. Let me explain -- There was a time when proprietary and closed-source libraries were common. Thanks to Apple switching to Unix and Microsoft shooting themselves in the foot, that time is over. Any library that needs to be dynamically linked can be included as a supporting file of the application. For that reason, separate build instructions for .SOs (and maybe .DLLs ;]) is all fine and dandy, because they are separate executable files. Any other library should be statically linked. Now, let's talk about static linking -- Static linking is a real bitch. That's what makefiles are for. If the whole project was written in one language (for instance C OR C++), you can #include the libraries as headers. That's just fine. But now, let's consider another scenario -- Let's say you're like me and can't be arsed to figure out C's difficult excuse for strings, so you decide to use C++. But you want to use a C library, like for instance MiniBasic. God help us. If the C library wasn't designed to conform to C++'s syntax, you're screwed. That's when makefiles come in, since you need to compile the C source file with a C compiler and the C++ source file with a C++ compiler. I don't want to use makefiles. 
I would hope that there is a way to exploit GCC's preprocessor macros to tell it something like this: Hi, GCC. How are you doing? In case you forgot, this source file you're looking at right now is written in C++. You should of course compile it with G++. There's another file that this file needs, but it's written in C. It's called "lolcats.c". I want you to compile that one with GCC into an object file and I want you to compile this one with G++ into the main object file, then I want you to link them together into an executable file. How might I write such a thing in preprocessor lingo? Does GCC even do that?
The main entry point is God. Be it a C or C++ source file, it is the center of the application. Only in the same way that nitrogen is the center of a pine tree. It is where everything starts, but there's nothing about C or C++ that makes you put the "center" of your application in main(). A great many C and C++ programs are built on an event loop or an I/O pump. These are the "centers" of such programs. You don't even have to put these loops in the same module as main(). Not only do I want the main entry point to be the first thing that is executed, I also want it to be the first thing that is compiled. It is actually easiest to put main() last in a C or C++ source file. C and C++ are not like some languages, where symbols can be used before they are declared. Putting main() first means you have to forward-declare everything else. There was a time when proprietary and closed-source libraries were common. Thanks to Apple switching to Unix and Microsoft shooting themselves in the foot, that time is over. "Tell 'im 'e's dreamin'!" OS X and iOS are full of proprietary code, and Microsoft isn't going away any time soon. What do Microsoft's current difficulties have to do with your question, anyway? You say you might want to make DLLs, and you mention Automake's inability to cope effectively with Windows. That tells me Microsoft remains relevant in your world, too. Static linking is a real bitch. Really? I've always found it easier than linking to dynamic libraries. It's an older, simpler technology, with fewer things to go wrong. Static linking incorporates the external dependencies into the executable, so that the executable stands alone, self-contained. From the rest of your question, that should appeal to you. you can #include the libraries as headers No... You #include library headers, not libraries. This isn't just pedantry. The terminology matters. It has meaning. If you could #include libraries, #include </usr/lib/libfoo.a> would work. 
In many programming languages, that is the way external module/library references work. That is, you reference the external code directly. C and C++ are not among the languages that work that way.

If the C library wasn't designed to conform to C++'s syntax, you're screwed.

No, you just have to learn to use C++. Specifically here, extern "C".

How might I write such a thing in preprocessor lingo?

It is perfectly legal to #include another C or C++ file:

#include <some/library/main.cpp>
#include <some/other/library/main.c>
#include <some/other/library/secondary_module.c>
#include <iostream>

int main()
{
    call_the_library();
    do_other_stuff();
    return 0;
}

We don't use extern "C" here because this pulls the C and C++ code from those other libraries directly into our C++ file, so the C modules need to be legal C++ as well. There are a number of annoying little differences between C and C++, but if you're going to intermix the two languages, you're going to have to know how to cope with them regardless. Another tricky part of doing this is that the order of the #includes is more sensitive than the order of library references in a linker command. When you bypass the linker in this way, you end up having to do some things manually that the linker would otherwise do for you automatically.

To prove the point, I took MiniBasic (your own example) and converted its script.c driver program to a standalone C++ program that says #include <basic.c> instead of #include <basic.h>. (patch) Just to prove that it's really a C++ program now, I changed all the printf() calls to cout stream insertions. I had to make a few other changes, all of them well within a normal day's work for someone who's going to intermix C and C++:

The MiniBasic code makes use of C's willingness to tolerate automatic conversions from void* to any other pointer type. C++ makes you be explicit.

Newer compilers are no longer tolerating use of C string constants (e.g. "Hello, world!\n") in char* contexts.
The standard says the compiler is allowed to place them into read-only memory, so you need to use const char*.

That's it. Just a few minutes' work, patching GCC complaints. I had to make some similar changes in basic.c to those in the linked script.c patch file. I haven't bothered posting the diffs, since they're just more of the same.

For another way to go about this, study the SQLite Amalgamation, as compared to the SQLite source tree. SQLite doesn't #include all the other files into a single master file; they're actually concatenated together, but that is also all #include does in C or C++.
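As a toy model of what an amalgamation build does — and of what #include itself does — here is a hedged sketch (not SQLite's actual tool) that recursively inlines local #include "..." directives, pulling each file in at most once:

```python
import re
from pathlib import Path

def amalgamate(path, seen=None):
    """Concatenate a source file with its local includes inlined, once each."""
    path = Path(path)
    seen = set() if seen is None else seen
    out = []
    for line in path.read_text().splitlines():
        m = re.match(r'\s*#include\s+"([^"]+)"\s*$', line)
        if m:
            # Inline the referenced file the first time it appears.
            if m.group(1) not in seen:
                seen.add(m.group(1))
                out.append(amalgamate(path.parent / m.group(1), seen))
        else:
            out.append(line)
    return "\n".join(out)
```

System includes in angle brackets are deliberately left alone, mirroring how a real amalgamation still relies on the standard headers at compile time.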
Compiling C/C++ code by way of including preprocessor build instructions in an actual C/C++ source file
1,662,120,855,000
I use Ubuntu 18.04. I install libraries using apt, for example:

sudo apt install freeglut3-dev

Does apt always install dynamic libraries, or can I determine whether a package contains a static or a dynamic library?
By convention: libfoo1 will contain a dynamic library, while libfoo-dev will contain the headers and static library. libfoo1 carries only runtime dependencies, and dynamic libraries are runtime dependencies. libfoo-dev is a build dependency, and static libraries are only used during building/linking.

If you want to know what's in a library, you can use dpkg to look at what's in an installed package:

$ dpkg -L libfoo1
/usr/lib/x86_64-linux-gnu/libfoo.so.1.0.0
/usr/share/doc/libfoo1/changelog.gz
/usr/share/doc/libfoo1/copyright
/usr/lib/x86_64-linux-gnu/libfoo.so.1

If the package is not installed, you can use the apt-file command, but you need to have recently used apt update to get the file list.

$ apt-file list libfoo-dev
libfoo-dev: /usr/include/foo.h
libfoo-dev: /usr/lib/x86_64-linux-gnu/libfoo.a
libfoo-dev: /usr/lib/x86_64-linux-gnu/libfoo.so

libfoo.so (in libfoo-dev) is actually just a symbolic link to libfoo.so.1 (in libfoo1), which itself is a symbolic link to libfoo.so.1.0.0 (also in libfoo1).
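If you would rather inspect a file directly than query the package lists, the two kinds of library are easy to tell apart by their leading magic bytes: a static library is an ar archive starting with `!<arch>`, while a shared library is an ELF object. A small sketch (the file command performs the same kind of check, with many more signatures):

```python
def library_kind(path):
    """Classify a library file by its leading magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(8)
    if magic.startswith(b"!<arch>\n"):
        return "static archive (.a)"
    if magic.startswith(b"\x7fELF"):
        return "ELF object (shared library, .so)"
    return "unknown"
```

Calling library_kind("/usr/lib/x86_64-linux-gnu/libfoo.a") on an installed -dev package would report the static archive, while the .so.1.0.0 file reports as ELF.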
apt packages - static vs dynamic libraries
1,662,120,855,000
Is it possible to build a binary with dependent .so files included so that the binary can be built once and used on machines with the same hardware and OS, without them having the .so files? For example, I am building curl with nghttp2. I do ./configure --with-nghttp2=/usr/local Then I ran make. I got the curl binary. When I copy over this binary on to another machine and try to run it, it says ./curl: error while loading shared libraries: libnghttp2.so.14: cannot open shared object file: No such file or directory I also tried running make as follows: make SHARED=0 CFLAGS='-static' I still get that same error.
Dependent libraries can come in the form of shared objects (.so files) or static archives (.a files). You can rebuild nghttp2 and pass the --disable-shared flag to its configure. Then you can try to reconfigure and rebuild curl as usual. The point is to be sure that you have only the static .a object in /usr/local/lib to link curl with. Do not forget to check that /usr/local/lib does not contain a .so version of nghttp2! (Or you can specify another --prefix= to experiment with. You can even install anything into /tmp or your $HOME and play with that locally created tree.)

Note that this will not eliminate other dependencies from curl, since it is a large project that depends on code from third parties. It can even depend on itself, libcurl; you can pass the --disable-shared flag to it to build only its static version.

At the end, run readelf -d /path/to/your/curl | fgrep NEEDED to see its full dependencies!
Building binary with static objects included
1,662,120,855,000
I have successfully developed a graphical Qt 5.15.2 application using Qt Creator with dynamically linked libraries. For various reasons I have determined static linking would be better for my application. I attempted to switch my development environment to use static libraries instead of dynamic. My application built with no errors, but when I deployed my application to a development board (a BeagleBone Black running Debian 11.5), I got the following error: qt.qpa.plugin: Could not find the Qt platform plugin "linuxfb" in "" This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. I passed in a -platform linuxfb command line argument, I built my own static Qt libraries, I added CONFIG += static to my .pro file, and I am using the static libraries in Qt Creator. Is there something special I need to do to get the linuxfb library/plugin to link? Any insight into how I can resolve this would be greatly appreciated.
The Qt documentation is rather sparse, but it turns out my .pro file needed the line QTPLUGIN += qlinuxfb. This resolved the error I was getting.
Qt statically linked application error: linuxfb plugin not found by the application
1,550,498,510,000
In the following example:

$ ip a | grep scope.global
    inet 147.202.85.48/24 brd 147.202.85.255 scope global dynamic enp0s3

What does the 'brd' mean?
brd is short for broadcast. 147.202.85.255 is the broadcast address for whatever interface that line belongs to.
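For reference, the broadcast address is simply the network address with all host bits set to 1, which you can verify with Python's ipaddress module using the interface line from the question:

```python
import ipaddress

# The address/prefix exactly as ip(8) printed it.
iface = ipaddress.ip_interface("147.202.85.48/24")

print(iface.network.broadcast_address)  # 147.202.85.255
```

With a /24, the last octet holds the host bits, so the broadcast address ends in .255.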
meaning of "brd" in output of IP commands
1,550,498,510,000
Let's say I want to create an internal network with 4 subnets. There is no central router or switch. I have a "management subnet" available to link the gateways on all four subnets (192.168.0.0/24). The general diagram would look like this: 10.0.1.0/24 <-> 10.0.2.0/24 <-> 10.0.3.0/24 <-> 10.0.4.0/24 In words, I configure a single linux box on each subnet with 2 interfaces, a 10.0.x.1 and 192.168.0.x. These function as the gateway devices for each subnet. There will be multiple hosts for each 10.x/24 subnet. Other hosts will only have 1 interface available as a 10.0.x.x. I want each host to be able to ping each other host on any other subnet. My question is first: is this possible. And second, if so, I need some help configuring iptables and/or routes. I've been experimenting with this, but can only come up with a solution that allow for pings in one direction (icmp packets are only an example, I'd ultimately like full network capabilities between hosts e.g. ssh, telnet, ftp, etc).
Ok, so you have five networks 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24, 10.0.4.0/24 and 192.168.0.0/24, and four boxes routing between them. Let's say the routing boxes have addresses 10.0.1.1/192.168.0.1, 10.0.2.1/192.168.0.2, 10.0.3.1/192.168.0.3, and 10.0.4.1/192.168.0.4. You will need to add static routes to the other 10.0.x.0/24 networks on each router box, with commands something like this (EDITED!):

# on the 10.0.1.1 box
ip route add 10.0.2.0/24 via 192.168.0.2
ip route add 10.0.3.0/24 via 192.168.0.3
ip route add 10.0.4.0/24 via 192.168.0.4

and the corresponding routes on the other router boxes. On the non-routing boxes with only one interface, set the default route to point to 10.0.x.1. Of course you will also have to add the static addresses and netmasks on all the interfaces.

Also note that Linux does not function as a router by default; you will need to enable packet forwarding with:

echo 1 > /proc/sys/net/ipv4/ip_forward

The ip commands above do not make the settings persistent; how to do that is dependent on the distribution. As I said, I haven't tested this and may have forgotten something.
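To see why those three routes per box are enough, here is a toy model of the lookup the 10.0.1.1 box performs for an outgoing packet (a sketch of the logic only, not how the kernel actually stores its routing table), using the same static routes:

```python
import ipaddress

# Routing table on the 10.0.1.1 box, as configured above.
# None means the destination network is directly connected.
ROUTES = [
    ("10.0.2.0/24", "192.168.0.2"),
    ("10.0.3.0/24", "192.168.0.3"),
    ("10.0.4.0/24", "192.168.0.4"),
    ("10.0.1.0/24", None),
    ("192.168.0.0/24", None),
]

def next_hop(dst):
    """Return the gateway for dst, or None if it is directly reachable."""
    addr = ipaddress.ip_address(dst)
    for net, gw in ROUTES:
        if addr in ipaddress.ip_network(net):
            return gw
    raise LookupError("no route to host")

print(next_hop("10.0.3.7"))  # 192.168.0.3
```

A ping from a 10.0.1.x host thus travels to its gateway 10.0.1.1, over the 192.168.0.0/24 management network to the right peer gateway, and from there onto the destination subnet; the reply follows the mirror-image routes on the other box.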
Routing Between Multiple Subnets
1,550,498,510,000
I want to know if there's a way to check if some subnets are (or are not) overlapped with a list of IPs. For example, we have this list:

197.26.9.128/25
193.36.81.128/25
194.33.24.0/22
188.115.195.80/28
188.115.195.64/28
185.59.69.96/28
185.59.69.32/27
41.202.219.32/27
41.202.219.128/29
154.70.120.16/28
154.70.120.32/28
154.70.120.0/28
41.202.219.208/28
41.202.219.136/29
197.157.209.0/24

and I want to check if the following IPs are overlapped with the previous list.

197.26.9.0/26
194.33.26.0/26 (IP overlapped with 194.33.24.0/22)
188.115.195.88/29 (IP overlapped with 188.115.195.80/28)
41.202.219.0/24
197.157.209.128/28 (IP overlapped with 197.157.209.0/24)

The output would be the next:

197.26.9.0/26
41.202.219.0/24
Here are a few bits for you. First, a script in Bash, so not very efficient. It doesn't do exactly what you want, because it only checks one pair of subnets and reports the overlap. Below the script a few rough shell commands follow, though the result is not presented in the form you want. So you need to integrate and adjust the whole bunch to your needs, or treat them as a sketch illustrating the logic.

#!/usr/bin/env bash

subnet1="$1"
subnet2="$2"

# calculate min and max of subnet1
# calculate min and max of subnet2
# find the common range (check_overlap)
# print it if there is one

read_range () {
  IFS=/ read ip mask <<<"$1"
  IFS=. read -a octets <<< "$ip";
  set -- "${octets[@]}";
  min_ip=$(($1*256*256*256 + $2*256*256 + $3*256 + $4));
  host=$((32-mask))
  max_ip=$(($min_ip+(2**host)-1))
  printf "%d-%d\n" "$min_ip" "$max_ip"
}

check_overlap () {
  IFS=- read min1 max1 <<<"$1";
  IFS=- read min2 max2 <<<"$2";
  if [ "$max1" -lt "$min2" ] || [ "$max2" -lt "$min1" ]; then return; fi
  [ "$max1" -ge "$max2" ] && max="$max2" || max="$max1"
  [ "$min1" -le "$min2" ] && min="$min2" || min="$min1"
  printf "%s-%s\n" "$(to_octets $min)" "$(to_octets $max)"
}

to_octets () {
  first=$(($1>>24))
  second=$((($1&(256*256*255))>>16))
  third=$((($1&(256*255))>>8))
  fourth=$(($1&255))
  printf "%d.%d.%d.%d\n" "$first" "$second" "$third" "$fourth"
}

range1="$(read_range $subnet1)"
range2="$(read_range $subnet2)"
overlap="$(check_overlap $range1 $range2)"
[ -n "$overlap" ] && echo "Overlap $overlap of $subnet1 and $subnet2"

The usage and result are these:

$ ./overlap.bash 194.33.26.0/26 194.33.24.0/22
Overlap 194.33.26.0-194.33.26.63 of 194.33.26.0/26 and 194.33.24.0/22

Now given your first list of subnets is in the file list and the subnets to check are in the file to_check, you can use the script to find all overlaps.
$ while read l; do list+=("$l"); done < list
$ while read t; do to_check+=("$t"); done < to_check
$ for i in "${list[@]}"; do for j in "${to_check[@]}"; do \
    ./overlap.bash "$i" "$j"; done; done

This is the result:

Overlap 194.33.26.0-194.33.26.63 of 194.33.24.0/22 and 194.33.26.0/26
Overlap 188.115.195.88-188.115.195.95 of 188.115.195.80/28 and 188.115.195.88/29
Overlap 41.202.219.32-41.202.219.63 of 41.202.219.32/27 and 41.202.219.0/24
Overlap 41.202.219.128-41.202.219.135 of 41.202.219.128/29 and 41.202.219.0/24
Overlap 41.202.219.208-41.202.219.223 of 41.202.219.208/28 and 41.202.219.0/24
Overlap 41.202.219.136-41.202.219.143 of 41.202.219.136/29 and 41.202.219.0/24
Overlap 197.157.209.128-197.157.209.143 of 197.157.209.0/24 and 197.157.209.128/28

As you can see, 41.202.219.0/24 has four overlaps, contrary to your expectations in your question.

To get only the subnets with no overlaps with the first list, the script would be much shorter. You don't need the to_octets function, and the check_overlap function can already give a result on this line:

if [ "$max1" -lt "$min2" ] || [ "$max2" -lt "$min1" ]; then return; fi

Also the last two lines can be changed (with the last one removed completely). As for the integration logic, there's room for short-circuiting the checking against the first list, as not all combinations have to be checked. One negative is enough.
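Following that suggestion, here is a minimal yes/no sketch — my own condensed rewrite of the min/max arithmetic used by the script above, not the author's script:

```shell
#!/usr/bin/env bash
# Two CIDR blocks overlap iff neither range ends before the other begins.

to_int () {                       # dotted quad -> 32-bit integer
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

overlaps () {                     # overlaps CIDR1 CIDR2 -> exit 0 on overlap
    local min1 max1 min2 max2 m1 m2
    m1=${1#*/}; m2=${2#*/}
    min1=$(to_int "${1%/*}"); max1=$(( min1 + (1 << (32 - m1)) - 1 ))
    min2=$(to_int "${2%/*}"); max2=$(( min2 + (1 << (32 - m2)) - 1 ))
    (( max1 >= min2 && max2 >= min1 ))
}

overlaps 194.33.26.0/26 194.33.24.0/22 && echo overlap || echo disjoint  # overlap
overlaps 197.26.9.0/26 197.26.9.128/25 && echo overlap || echo disjoint  # disjoint
```

With loops like those above, the exit status of overlaps lets you stop checking a candidate at its first overlap — one hit is enough.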
Check overlapped subnets
1,550,498,510,000
On Ubuntu, I've written host declarations in each subnet with isc-dhcp-server, and the fixed addresses for each network interface are successfully leased. There are two network cards plugged into this DHCP server. But how do I make corrections for this warning?

dhcpd[11328]: WARNING: Host declarations are global. They are not limited to the scope you declared them in.

This post on the same warning message answers that host declarations belong outside subnet definitions. I don't think that applies in my case, where two cards are involved.
Host definitions are always global.

So I have 3 networks on my router; "LAN" 10.0.0.0/24, "guest" 10.100.100.0/24 and "IoT" 10.100.200.0/24

My dhcpd.conf has the following sort of config

subnet 10.0.0.0 netmask 255.255.255.0 {
  authoritative;
  option routers 10.0.0.1;
  blah;
}

subnet 10.100.100.0 netmask 255.255.255.0 {
  authoritative;
  option routers 10.100.100.1;
  blah;
}

subnet 10.100.200.0 netmask 255.255.255.0 {
  authoritative;
  option routers 10.100.200.1;
  blah;
}

host machine1 {
  hardware ethernet xx:xx:xx:xx:xx:xx;
  fixed-address 10.0.0.13;
  option host-name "machine1";
}

host machine2 {
  hardware ethernet yy:yy:yy:yy:yy:yy;
  fixed-address 10.100.200.15;
  option host-name "machine2";
}

dhcpd correctly works out that machine1 is on LAN and machine2 is on the IoT subnet and sends the correct configuration (netmask, default route, DNS server etc etc) relevant to that subnet.

If you have a machine that can connect to multiple interfaces and you want it to get different addresses then you can list the host multiple times. For example, my cellphone:

host s8 {
  hardware ethernet aa:aa:aa:aa:aa:aa;
  fixed-address 10.0.0.34;
  option host-name "s8";
}

host s8-guest {
  hardware ethernet aa:aa:aa:aa:aa:aa;
  fixed-address 10.100.100.9;
  option host-name "s8-guest";
}

Now it'll get a different address, depending on what network it is on. If there's no static entry for that network then it'll get a dynamic address. If there are no free addresses on the subnet then it won't get assigned any address.
How to declare fixed addresses for each subnet in isc-dhcp-server?
1,550,498,510,000
I currently have my Debian VM using NAT and it is on subnet 10.0.2.x where my host is on 192.168.0.x how can I get my guest on the same subnet as my host? Host: Ethernet adapter VirtualBox Host-Only Network #2: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::b0cd:11de:7c85:f11a%16 IPv4 Address. . . . . . . . . . . : 192.168.56.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Ethernet adapter Ethernet 2: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Wireless LAN adapter Wi-Fi: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::f551:dcf4:fbf5:bf9e%3 IPv4 Address. . . . . . . . . . . : 192.168.0.11 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.0.1 Guest: eth0 Link encap:Ethernet HWaddr 08:00:27:03:7c:ec inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe03:7cec/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:625 errors:0 dropped:0 overruns:0 frame:0 TX packets:275 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:572004 (558.5 KiB) TX bytes:36572 (35.7 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:88 errors:0 dropped:0 overruns:0 frame:0 TX packets:88 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:36209 (35.3 KiB) TX bytes:36209 (35.3 KiB)
What you describe is the result of having the virtual machine's network adapter in "NAT" mode; in this mode your host machine is acting as a router for your VM. If you want your VM to be on the same IP subnet as the host machine, the interface must be set in "bridged" mode; this allows network traffic to go seamlessly between the VM and other devices on the physical network.
Virtual Box guest on different subnet than host
1,550,498,510,000
On a RHEL 7 box whose local IP address is 10.0.0.159, the following command prints out the IP 10.0.0.159:

$ echo "$(ifconfig eth0 | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | \
    grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')""(rw,sync)"

What would the command have to change to in order to print out "10.0.0.0/8" instead?
NOTE: ifconfig is a deprecated command; you should be using the commands in the iproute2 package going forward. Below I show how to use ip, the replacement tool, to accomplish what you want.

Rather than do this with ifconfig I'd recommend using the ip command instead.

IP CIDR

This form shows the IP address in CIDR notation:

$ ip addr list eth0 | awk '/inet.*brd/ {print $2}'
10.0.2.15/24

-or-

$ ip a l eth0 | awk '/inet.*brd/ {print $2}'
10.0.2.15/24

Network CIDR

This form shows the network address in CIDR notation:

$ ip route show | awk '/eth0.*scope/ {print $1}'
10.0.2.0/24

ipcalc

You can also use the ipcalc command to manipulate the above addresses to calculate other formats. For example:

$ ipcalc -n $(ip a l eth0 | awk '/inet.*brd/ {print $2}')
NETWORK=10.0.2.0

$ ipcalc -p $(ip a l eth0 | awk '/inet.*brd/ {print $2}')
PREFIX=24

With ipcalc you can more simply form whatever variations you want, rather than having to do a lot of sed & awk parsing that's typically overly complicated.
grep for an IP range?
1,550,498,510,000
I have two machines A and B which are in different subnets, both behind separate firewalls. Machine A can see B, but B cannot see A. I have a user account (non-root) on both machines, I can SSH B from A, and I would like to be able to SSH A from B instead, but that cannot be done directly. I have used tunnelling to hop through intermediate servers with SSH, but what I am asking is different here, and I don't know what it would be called. Is there a way to open a connection from A to B, which could in turn be used "in reverse" from machine B to run commands on A?
The short answer is yes you can, the how is: machine-A$ ssh -R 127.0.0.1:2222:127.0.0.1:22 [ip__or_name_of_B] Then on B you can ssh to A with: machine-B$ ssh -p2222 127.0.0.1 This says the following: On A create a tunnel on the remote side (-R), such that any traffic that goes to localhost (127.0.0.1) on port 2222 should come back through the tunnel and be sent to localhost (127.0.0.1 now on the local side) port 22 The B command simply says, ssh to localhost port 2222 which is the entrance to the tunnel.
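If this is something you do often, the same forwarding can be kept in A's ~/.ssh/config. This is only a sketch — the host alias b-tunnel and the placeholder address are mine:

```
# ~/.ssh/config on machine A
Host b-tunnel
    HostName ip_or_name_of_B
    RemoteForward 127.0.0.1:2222 127.0.0.1:22
```

After that, running ssh b-tunnel on A opens the same reverse tunnel, and B can still reach A with ssh -p2222 127.0.0.1.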
"Reverse" an SSH connection from destination to target [duplicate]
1,550,498,510,000
I want an unmanned WireGuard client to work with redundant WireGuard servers.

Physical: I have a master database server in a VPS of provider A in the USA. I have continuous replication running to a slave server in a VPS of provider B in Europe. I have a backup database server, also running as a replication slave, on a WiFi LAN in my home office.

Network: The master database server in the USA runs a WireGuard server as 10.20.20.1. The slave database server in Europe runs a WireGuard server as 10.20.10.1. The backup database in my home office is successfully configured to interact with either the master or slave remote WireGuard servers individually.

To connect via the USA I need someone at home to do:

sudo wg-quick down wgEUR; sudo wg-quick up wgUSA;

To connect via Europe I need someone at home to do:

sudo wg-quick down wgUSA; sudo wg-quick up wgEUR;

However!! The point is to be able to SSH into the home office machine, from wherever I am in the world, via either one of the WireGuard servers; if one goes down the other is still available.

How can I configure routing in the home office WireGuard client to permit simultaneous access from both remote WireGuard servers' subnets?
Settings

Europe (37.xxx.xxx.139:34567): wg0.conf

[Interface]
Address = 10.20.10.1/24
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 34567
PrivateKey = MNf4xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxiVg=

[Peer]
PublicKey = durAZO/EtWQnqwnbadbadbadzDa9+klqUmqCT6VplWc=
AllowedIPs = 10.20.10.16/32

USA (185.xxx.xxx.36:34567): wg0.conf

[Interface]
Address = 10.20.20.1/24
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 34567
PrivateKey = EGdxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxp2Q=

[Peer]
PublicKey = durAZO/EtWQnbadbadbadMkTzDa9+klqUmqCT6VplWc=
AllowedIPs = 10.20.20.16/32

Client wgEUR.conf:

[Interface]
### PrivateKey_of_the_Client
PrivateKey = EBmxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxaXlE=
### IP VPN for the Client
Address = 10.20.10.16/24
### DNS Server
DNS = 8.8.8.8, 8.8.4.4

[Peer]
### Public key of the WireGuard VPN Server
PublicKey = pTm/tJwOWJ3QRwEcbadbadbadWx/BbCthbFa52M2uVE=
### IP and Port of the WireGuard VPN Server
##### Syntax: IP_of_the_server:Port
Endpoint = 37.xxx.xxx.139:34567
### Allow all traffic
AllowedIPs = 0.0.0.0/0

Client wgUSA.conf:

[Interface]
### PrivateKey_of_the_Client
PrivateKey = EBxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxXlE=
### IP VPN for the Client
Address = 10.20.20.16/24
### DNS Server
DNS = 8.8.8.8, 8.8.4.4

[Peer]
### Public key of the WireGuard VPN Server
PublicKey = f/H+1b/jkkXvbhYPEbadbadbadkKMBMgEW1IvmOeCEE=
### IP and Port of the WireGuard VPN Server
##### Syntax: IP_of_the_server:Port
Endpoint = 185.xxx.xxx.36:34567
### Allow all traffic
AllowedIPs = 0.0.0.0/0
This VPN is to access private resources, not a way to access Internet anonymously. So split tunneling should be used. Just replace on the client side: AllowedIPs = 0.0.0.0/0 by only the needed resource: the server running the database. For client's wgEUR.conf: AllowedIPs = 10.20.10.1/32 For client's wgUSA.conf: AllowedIPs = 10.20.20.1/32 Now the two tunnels can be up at the same time: wg-quick won't hijack routing (which happens with the default Table = auto setting and presented with AllowedIPs having 0.0.0.0/0 or ::/0) and the two tunnels won't conflict with each other. The client's usual Internet access won't go through tunnels, and that's certainly an improvement: why would a database server provide such service?
How to configure a WireGuard client to interact with two distinct servers?
1,550,498,510,000
I'm building a website on my laptop. To see how it renders, I serve it locally on port 80 with lighttpd. I can then open it in my laptop's browser via any IP or URL referring to the laptop: http://localhost or http://192.168.1.47 (IP on the local subnet) or http://coulomb (its hostname). Fine.

Now I want to test its responsive design, so I try to open the laptop address in my phone's browser: http://192.168.1.47 or http://coulomb. Both devices (phone and laptop) are in the 192.168.1.* subnet of my Wifi DSL box.

Strangely to me, the phone's browser (be it Firefox or Chrome) "rephrases" the IP into "localhost". The connection then fails with a "site unreachable"-like error.

lighttpd is not the culprit. To check this I instead served the files of some directory of the laptop with sudo ruby -run -ehttpd . -p80; the behavior is the same.

There is something with the port. If I serve the website on port 3000 (as shown in lighttpd docs) or 8000 or 8080 it works: the phone's browser opens 192.168.1.47:3000 (or :8000 or :8080) and I see the website.

The phone seems not to be the culprit either: I can open the HTTP interface of the DSL box at 192.168.1.1, default port, without problem. (If asked to, I might try and use a computer client instead of the phone, but it's not easy for practical reasons.)

If you wonder why I insist on serving it on port 80: it is built with WordPress, and doesn't work right on a custom port; the plain text is shown but no CSS or images are loaded. I don't want to work around the problem by tweaking WordPress to make it custom-port-compatible, because when the site is ready I'll mirror it to a public server.
You can't define the WordPress site as "localhost", since then, as you've found out, it will insist on referencing itself by that name. Instead, use a name that can be resolved on your LAN (if necessary by using the relevant /etc/hosts files, but ideally by using your DNS), and make sure you're listening on the LAN IP address as well as localhost.
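A sketch of the idea, using a made-up name coulomb.lan: put the record in the DSL box's DNS if it supports it, so the phone resolves it too — an /etc/hosts line only helps the laptop itself:

```
# /etc/hosts on the laptop, or better a record in the LAN's DNS
192.168.1.47   coulomb.lan
```

WordPress's siteurl and home options would then point at http://coulomb.lan rather than http://localhost.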
Website served on port 80 unreachable from my phone in the local subnet
1,550,498,510,000
I read in a book about networks that class A has 16777214 hosts and 127 subnets. My question is: how do they obtain the number 127?
The A class networks space corresponds to all IP addresses with the first bit set to 0, i.e. IPs from 0.0.0.0 to 127.255.255.255, i.e. the class A subnets from 0.0.0.0/8 to 127.0.0.0/8. Since 0.0.0.0/8 is reserved by the protocol (see RFC1122, section 3.2.1.3 for details) you are left with the subnets from 1.0.0.0/8 to 127.0.0.0/8. Anyway, classful routing is interesting only for historical reasons and is not relevant in any way to how modern network devices work.
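Both numbers quoted in the question follow directly from bit arithmetic — a quick shell illustration (added here, not part of the original answer):

```shell
# With the leading bit fixed to 0, 7 bits remain for the network part:
# 2^7 = 128 possible /8 networks, minus the reserved 0.0.0.0/8 -> 127.
echo $(( (1 << 7) - 1 ))        # 127

# Each /8 leaves 24 bits for hosts; the all-zeros (network) and
# all-ones (broadcast) addresses cannot be assigned to hosts.
echo $(( (1 << 24) - 2 ))       # 16777214
```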
Network: how they obtain 127 subnets? [closed]
1,550,498,510,000
I am setting up my dhcpd.conf file for my DHCP server. All documentation says you 'can' define subnetworks, but I have not seen an example without them. Definitions of a subnetwork say it is a network within a network. My network is very simple, with a handful of devices connecting to the DHCP server; I don't need multiple subnets.

For a DHCP server with a single IP range, can I configure it without subnet statements? Or is a network with a single IP range a network with one subnet?
It's just one subnet. Your configuration requires a subnet declaration, even if it's just one network. subnet 10.100.0.0 netmask 255.255.255.0 { option routers 10.100.0.1; option domain-name-servers 10.100.0.1; option domain-name "angelsofclockwork.net"; option subnet-mask 255.255.255.0; range 10.100.0.100 10.100.0.254; filename "/pxelinux.0"; default-lease-time 21600; max-lease-time 43200; next-server 10.100.0.1; }
Is a subnet required for a dhcp.conf?
1,550,498,510,000
I hope someone can help me. This server issue is driving me crazy! So I have the following configuration:

INTERNET
|
+----------------------+
| MODEM/ROUTER |
+-----------------+----------------------+
| | IP: 192.168.2.254/24 |
+----------------------+ +----------------------+
| WIFI HOME-NETWORK |
| +----------------------+
| | WLAN: 192.168.2.*/24 |
| +----------------------+
|
|
+-----------------------+ +------------------------+
| HUAWEI SOLAR INVERTER | | HOME AUTOMATION SERVER |
+-----------------------+ +------------------------+
| MODEL: 6KTL-M0 | | UBUNTU 16.04 |
| IP: 192.168.8.1/24 | | ENP1S0 |
| WLAN: 192.168.8.*/24 | | IP: 192.168.2.49/24 |
| +--------------------------------------------+
| +----------| SOLAR SERVER |-------------+
+----------------------+---------------------+
| WLAN0 | ETH0 |
| IP: 192.168.8.100/24 | IP: 192.168.2.35/24 |
| | SSH listener |
+----------------------+---------------------+

And I have this problem: whatever I try to change in my routes, I get no reply when pinging from 192.168.2.49 (HOME AUTOMATION SERVER) to the IP of the HUAWEI SOLAR INVERTER. However, on this same secondary subnet I can reach the WLAN0 IP of the SOLAR SERVER (RPi).

I've added NAT on the SOLAR SERVER with the following commands:

solar-server:~ $ sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
solar-server:~ $ sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
solar-server:~ $ sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT

I've added these iptables changes to my /etc/network/interfaces so they persist across reboots. Since I'm not a network guru, I'm stuck with this. I tried all the similar cases I found, but none seems to work in my situation. Is there anyone who can give me a clue or some help? Below I've summarized the ping results in this matter.
PING RESULTS:

+--------------+---------------+---------+
| FROM         | TO            | RESULT  |
+--------------+---------------+---------+
| 192.168.2.49 | 192.168.2.35  | SUCCESS |
| 192.168.2.49 | 192.168.8.1   | FAIL    | <-- MAIN ISSUE!
| 192.168.2.49 | 192.168.8.100 | SUCCESS |
| 192.168.2.35 | 192.168.2.49  | SUCCESS |
| 192.168.2.35 | 192.168.2.254 | SUCCESS |
| 192.168.2.35 | 192.168.8.1   | SUCCESS |
| 192.168.2.35 | 192.168.8.100 | SUCCESS |
+--------------+---------------+---------+

And here is the ip route output of both servers:

home-automation-server:~ $ ip route
default via 192.168.2.254 dev enp1s0
192.168.2.0/24 dev enp1s0 proto kernel scope link src 192.168.2.49
192.168.8.0/24 via 192.168.2.35 dev enp1s0 proto static src 192.168.2.49
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

solar-server:~ $ ip route
default via 192.168.2.254 dev eth0 proto dhcp src 192.168.2.35 metric 202
192.168.2.0/24 dev eth0 proto dhcp scope link src 192.168.2.35 metric 202
192.168.8.0/24 dev wlan0 proto dhcp scope link src 192.168.8.100 metric 303 mtu 1500
Fixed the issue by adding 'up route add -net 192.168.8.0/24 gw 192.168.2.35 dev enp1s0' to /etc/network/interfaces. Thanks for your time, your understanding, and for pointing me toward learning how to add a static route. That did the trick!
Can’t reach the whole subnet
1,550,498,510,000
I'm having a hard time understanding IP subnetting for a /16 mask. I went through some tutorials and understood the host part and non-VLSM subnetting, but VLSM and dividing into equal parts is something I'm still not sure about. For the sample below in particular, if someone can help me with the output, I will be able to deduce the explanation.

For this: 10.0.0.0/16, I have been asked to divide it into 5 equal parts. Based on the tutorials I followed, one of them being this: IP Subnetting, I came up with these, but my mentor says it's wrong and I am not sure why.

10.0.0.0/18 --- IP Range: 10.0.0.1 ==> 10.0.63.254 (16384 IP addresses)
10.0.64.0/18 --- IP Range: 10.0.64.1 ==> 10.0.127.254 (16384 IP addresses)
10.0.128.0/18 --- IP Range: 10.0.128.1 ==> 10.0.191.254 (16384 IP addresses)
10.0.192.0/19 --- IP Range: 10.0.192.1 ==> 10.0.223.254 (8192 IP addresses)
10.0.224.0/19 --- IP Range: 10.0.224.1 ==> 10.0.255.254 (8192 IP addresses)

I am assuming it's incorrect because it's not in equal parts. I hope someone can guide me and provide the correct answer. Thanks in advance!
It is impossible to divide 65536 addresses into 5 equal parts, since 65536/5 = 13107.2 and you can't have a "one-fifth address".
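A quick shell check of the arithmetic (added for illustration): the division leaves a remainder, and CIDR blocks can only come in power-of-two sizes anyway:

```shell
echo $(( 65536 / 5 ))   # 13107 -- integer part only
echo $(( 65536 % 5 ))   # 1     -- non-zero remainder: no equal split exists
# A /18 holds 2^14 = 16384 addresses and a /19 holds 2^13 = 8192 --
# every CIDR block size is a power of two, and 13107 is not.
echo $(( 1 << 14 )) $(( 1 << 13 ))
```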
IP Subnetting /16 into 5 equal parts [closed]
1,550,498,510,000
I prepared a dhcpd configuration on Debian 8 and it worked successfully, but then I added a bridge and dhcpd failed.

bridge:

root@remote:/home/s# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.00224dad5ddf       no              eth0
                                                        eth1
root@remote:/home/s# cat /etc/network/interfaces
auto lo br0

iface lo inet loopback

iface ppp0 inet wvdial
    provider orange

iface eth0 inet manual
iface eth1 inet manual

allow-hotplug br0
iface br0 inet static
    bridge_ports eth0 eth1
    address 192.168.0.1
    netmask 255.255.255.0
root@remote:/home/s#

dhcpd:

root@remote:/home/s# systemctl -l status isc-dhcp-server.service
● isc-dhcp-server.service - LSB: DHCP server
   Loaded: loaded (/etc/init.d/isc-dhcp-server)
   Active: failed (Result: exit-code) since Sun 2018-07-01 21:51:43 CEST; 24min ago
  Process: 1037 ExecStart=/etc/init.d/isc-dhcp-server start (code=exited, status=1/FAILURE)

Jul 01 21:51:41 remote dhcpd[1076]: No subnet declaration for eth0 (no IPv4 addresses).
Jul 01 21:51:41 remote dhcpd[1076]: ** Ignoring requests on eth0. If this is not what
Jul 01 21:51:41 remote dhcpd[1076]: you want, please write a subnet declaration
Jul 01 21:51:41 remote dhcpd[1076]: in your dhcpd.conf file for the network segment
Jul 01 21:51:41 remote dhcpd[1076]: to which interface eth0 is attached. **
Jul 01 21:51:43 remote isc-dhcp-server[1037]: Starting ISC DHCP server: dhcpdcheck syslog for diagnostics. ... failed!
Jul 01 21:51:43 remote isc-dhcp-server[1037]: failed!
Jul 01 21:51:43 remote systemd[1]: isc-dhcp-server.service: control process exited, code=exited status=1
Jul 01 21:51:43 remote systemd[1]: Failed to start LSB: DHCP server.
Jul 01 21:51:43 remote systemd[1]: Unit isc-dhcp-server.service entered failed state.
root@remote:/home/s# cat /etc/dhcp/dhcpd.conf
ddns-update-style none;
default-lease-time 600;
max-lease-time 7200;
log-facility local7;

subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.2 192.168.0.6;
    option domain-name-servers 8.8.8.8,8.8.4.4;
    option routers 192.168.0.1;
}
root@remote:/home/s#

How do I add this "subnet declaration for eth0" and fix the dhcpd service?
Your original DHCP server configuration used eth0. You've now replaced that in your network definitions with br0, so you need to update your DHCP server configuration accordingly.
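A sketch of that change, assuming the Debian packaging's /etc/default/isc-dhcp-server file (on Debian 8 the variable is INTERFACES; newer releases split it into INTERFACESv4/INTERFACESv6):

```
# /etc/default/isc-dhcp-server
INTERFACES="br0"
```

Since br0 now carries the 192.168.0.1/24 address, the existing subnet declaration for 192.168.0.0/24 will then match the listening interface.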
dhcpd and bridge - No subnet declaration error
1,550,498,510,000
I have a USR-TCP232-S2 IP-to-serial converter and I would like to access it over Ethernet to set it up. The module comes with a fixed IP address, 192.168.0.7. My PC (Lubuntu 18.04) is however on a different subnet (192.168.1.0/24, IP address 192.168.1.80, gateway 192.168.1.235), so I can't talk to the module directly.

I expected to be able to reach the module if I added a second IP address to my interface:

ip addr add 192.168.0.6/24 dev enp2s0

but that didn't work, I got:

root@lbox0:~# telnet 192.168.0.7 80
Trying 192.168.0.7...
telnet: Unable to connect to remote host: No route to host

I guess I might have to set up a route, using ip route, to get to my module. But I couldn't find anything that involves just an IP address, without using a gateway.

Output of ip addr and ip route:

root@lbox0:~# ip addr show dev enp2s0
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 50:e5:49:84:2b:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.80/24 brd 192.168.1.255 scope global dynamic noprefixroute enp2s0
       valid_lft 686535sec preferred_lft 686535sec
    inet 192.168.0.6/24 scope global enp2s0
       valid_lft forever preferred_lft forever
    inet6 fe80::c553:9525:6f96:5b5b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
root@lbox0:~# ip route
default via 192.168.1.235 dev enp2s0 proto dhcp metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.0.0/24 dev enp2s0 proto kernel scope link src 192.168.0.6
192.168.1.0/24 dev enp2s0 proto kernel scope link src 192.168.1.80 metric 100
It turned out that the module was shipped configured as DHCP instead of Static IP. When I ran nmap -p80 192.168.1.*, it found the module at IP address 192.168.1.11. I was then able to change it to Static IP, with IP address 192.168.0.7. I then couldn't reach the module anymore on IP address 192.168.1.11 and also not on IP address 192.168.0.7. After I entered ip addr add 192.168.0.6/24 dev enp2s0, I could access the module at IP address 192.168.0.7. This proves that it is sufficient to give your NIC an additional IP address in another subnet to enable access to hosts in that subnet.
Access IP address on different subnet without gateway
1,550,498,510,000
Situation: I have 3 devices on an Ethernet network. 1) 172.30.40.100 (Ubuntu 14.04) 2) 172.30.41.101 (other) 3) 192.168.30.102 (other) I would like to setup 1) to be able to send msgs to both of the devices. I can hear each of the devices emitting UDP traffic (ngrep/tcpdump/Wireshark). There is a UDP protocol msg which will tell 2)/3) to change its IP, allowing a proper network setup where all three devices are on the same network. Background: 2)/3) are devices that may reboot and when they do, they come up on a different network(192) than what I currently have set. There are other devices on the network that can only hear UDP msgs on the current network so getting the 2)/3) onto the proper network is important. Currently, I have a program running on 1) that will change its IP address to 192 and send the msg to 3) to change to 172, and then change its own IP back to 172. I am hoping there is some other way to be able to send UDP msgs to each device WITHOUT changing 1)'s IP address. Is this possible?
If you are plugged into a non-managed switch or hub, an Ethernet Alias will fix you up. Not sure how to do it in Network Manager (I always remove it anyway, and use the /etc/network/interfaces file) but if you open a terminal you can do sudo ifconfig eth0:1 192.168.30.105 netmask 255.255.255.0 And you should be able to talk freely between either of the other devices from the Ubuntu machine. In /etc/network/interfaces simply add a second stanza referencing eth0:1 and set an IP and netmask. Don't set a gateway address.
Is there a way to setup a computer to talk to two different devices on different subnets which are physically connected?
1,550,498,510,000
Thanks for taking a look at my issue and thinking along with me toward a solution.

I have a Samba server on a subnet, 172.23.3.55/23 (2.0 --> 3.255), and within that subnet I can access the server no problem. Also the 172.23.4.0/23 subnet that lives on the same core switch can access the server no problem. Even our office subnet 129.228.114.0/23 can access the system through the firewall with no issue.

But when I connect to our VPN network, 172.23.45.0/24, or when I come from a different office with totally different ranges, I cannot access the server. The server responds, and I need to log in, but the login is always rejected.

Here are the [global] and [share] sections of my smb.conf:

workgroup = localdomain.nmc
netbios name = AMS-QTGW02
server string = %h server (Samba %v)
# hosts allow = 172.23.202.0/24 172.23.45.0/24 129.228.114.0/23 129.228.70.0/24 129.228.109.42 129.228.109.83
force user = nobody
force group = nobody
force create mode = 0666
force directory mode = 0777
create mode = 0666
directory mode = 0777
guest account = vimn
security = user
passdb backend = tdbsam
ntlm auth = yes
log file = /var/log/samba/log.%m
log level = 2 passdb:5 auth:5
max log size = 50M

# Performance tuning:
use sendfile = true
kernel oplocks = no
strict locking = no

# Ignore macOS metadata files
veto files = /.DS_Store/.AppleDesktop/.AppleDB/.AppleDouble/.Temporary Items/
delete veto files = yes

printing = cups
printcap name = cups
load printers = no
cups options = raw

[AMS-HATCH]
comment = HATCH Storage Share (AutoCleaned 30 Days)
path = /quantum/AMS-HATCH
browseable = yes
writable = yes
guest ok = yes
force user = nobody
force group = nobody
valid users = @LinuxAdmins, vimn, mll

As you can see, I commented out the "hosts allow" line so that all IPs can access the share; later, when all is working, I would like to limit access through that (or "hosts deny"). The credentials have already been checked multiple times, and they are entered correctly.
I read something about samba-winbind needing to be disabled for non-domain servers, but I did not install it. Is there a setting I don't know about that I missed or should use?

In the log file of this session I have this:

[2018/02/19 11:21:07.724423, 5] ../source3/auth/server_info_sam.c:122(make_server_info_sam)
  make_server_info_sam: made server info for user vimn -> vimn
[2018/02/19 11:21:07.724461, 3] ../source3/auth/auth.c:249(auth_check_ntlm_password)
  check_ntlm_password: sam authentication for user [vimn] succeeded
[2018/02/19 11:21:07.724516, 5] ../source3/auth/auth.c:292(auth_check_ntlm_password)
  check_ntlm_password: PAM Account for user [vimn] succeeded
[2018/02/19 11:21:07.724537, 2] ../source3/auth/auth.c:305(auth_check_ntlm_password)
  check_ntlm_password: authentication for user [vimn] -> [vimn] -> [vimn] succeeded
[2018/02/19 11:21:07.725216, 5] ../source3/passdb/pdb_interface.c:1749(lookup_global_sam_rid)
  lookup_global_sam_rid: looking up RID 513.
[2018/02/19 11:21:07.725264, 5] ../source3/passdb/pdb_tdb.c:658(tdbsam_getsampwrid)
  pdb_getsampwrid (TDB): error looking up RID 513 by key RID_00000201.
[2018/02/19 11:21:07.725300, 5] ../source3/passdb/pdb_interface.c:1825(lookup_global_sam_rid)
  Can't find a unix id for an unmapped group
[2018/02/19 11:21:07.725317, 5] ../source3/passdb/pdb_interface.c:1535(pdb_default_sid_to_id)
  SID S-1-5-21-3363938291-73671434-3978610123-513 belongs to our domain, but there is no corresponding object in the database.

The password is authenticated correctly, but the connection is still cut off.

Thanks a lot, people.

edit: added the log section.
Nobody supplied an answer, but the problem does not persist anymore.
Samba share not accessable from other subnets
1,550,498,510,000
I am trying to set up a PXE server on my laptop on CentOS 7 to connect to a physical test client, following the tutorial at: https://www.linuxtechi.com/configure-pxe-installation-server-centos-7/#comment-35567 All of the configuration files and setup procedures are from this website.

On “Step: 6 Start and enable xinetd, dhcp, and vsftpd service.”, the commands “systemctl start xinetd” and “systemctl enable xinetd” work, but when I run “systemctl start dhcpd.service”, I receive the following error message:

Job for dhcpd.service failed because the control process exited with error code.
See “systemctl status dhcpd.service” and “journalctl -xe” for details.

When I run “systemctl status -l dhcpd.service”, I get:

dhcpd.service - DHCPv4 Server Daemon
   Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2022-07-05 11:18:07 EDT; 1min 12s ago
     Docs: man:dhcpd(8)
           man:dhcpd.conf(5)
  Process: 11655 ExecStart=/usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid (code=exited, status=1/FAILURE)
 Main PID: 11655 (code=exited, status=1/FAILURE)

Jul 05 11:18:07 localhost.localdomain dhcpd[11655]: to which interface virbr0 is attached. **
Jul 05 11:18:07 localhost.localdomain dhcpd[11655]:
Jul 05 11:18:07 localhost.localdomain dhcpd[11655]:
Jul 05 11:18:07 localhost.localdomain dhcpd[11655]: No subnet declaration for enp0s20f0u13 (10.249.6.154).
Jul 05 11:18:07 localhost.localdomain dhcpd[11655]: ** Ignoring requests on enp0s20f0u13. If this is not what
Jul 05 11:18:07 localhost.localdomain dhcpd[11655]: you want, please write a subnet declaration
Jul 05 11:18:07 localhost.localdomain systemd[1]: dhcpd.service: main process exited, code=exited, status=1/FAILURE
Jul 05 11:18:07 localhost.localdomain systemd[1]: Failed to start DHCPv4 Server Daemon.
Jul 05 11:18:07 localhost.localdomain systemd[1]: Unit dhcpd.service entered failed state.
Jul 05 11:18:07 localhost.localdomain systemd[1]: dhcpd.service failed.

Also, here is the dhcpd.conf file:

# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp*/dhcpd.conf.example
#   see dhcpd.conf(5) man page
ddns-update-style interim;
ignore client-updates;
authoritative;
allow booting;
allow bootp;
allow unknown-clients;

# internal subnet for my DHCP Server
subnet 172.168.1.0 netmask 255.255.255.0 {
  range 172.168.1.21 172.168.1.151;
  option domain-name-servers 172.168.1.11;
  option domain-name "pxe.example.com";
  option routers 172.168.1.11;
  option broadcast-address 172.168.1.255;
  default-lease-time 600;
  max-lease-time 7200;
  # IP of PXE Server
  next-server 172.168.1.11;
  filename "pxelinux.0";
}

What do I need to change in my dhcpd.conf file to make “systemctl start dhcpd.service” work, so I can finish going through the PXE server tutorial?
dhcpd is not detecting any network interface that is already configured with an IP address in the 172.168.1.0 subnet, so it cannot figure out which network interface it should use to provide its services. And no, you cannot use the DHCP server you're starting to assign an IP address to the system that is actually running the DHCP server.

Apparently your system has a physical network interface enp0s20f0u13 with IP address 10.249.6.154. That address is part of the "RFC 1918" private IP address ranges, which can be used by anyone. On the other hand, the addresses used in the tutorial you're following are actually in use by Microsoft, so you should not copy the configuration from the tutorial as-is, but instead use one of the private address ranges, or a subsection of one. The other two private IP address ranges are 172.16.0.0 ... 172.31.255.255 and 192.168.0.0 ... 192.168.255.255, so maybe the writer of the tutorial was thinking of using either of these and managed to mix it up? There is actually another set of smaller IP address ranges assigned specifically for use in written documentation, to minimize the chances of anyone copying a configuration directly from a tutorial and accidentally usurping an IP address that is actually in use.

If you are planning to use your PXE server with virtual machines on your laptop, you should begin by configuring the virbr0 virtual bridge interface with an IP address. You might assign something like 192.168.1.1/24 to it (note: the /24 at the end is a shorter way of saying "netmask 255.255.255.0"). Since that interface is going to be the default gateway IP address for your VMs, you should not configure a default gateway address for the virbr0 interface itself: your laptop probably already has a default gateway on its external interface, and one default gateway on a system is enough for most usual cases.

Once you have virbr0 configured, you can configure your DHCP server like this: [...]
# internal subnet for my DHCP Server
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.21 192.168.1.151;
  option domain-name-servers 192.168.1.1;
  option domain-name "pxe.domain.example";
  # this should match the IP address of the virbr0 interface
  option routers 192.168.1.1;
  option broadcast-address 192.168.1.255;
  default-lease-time 600;
  max-lease-time 7200;
  # IP of PXE Server; this should also match the IP address of the virbr0 interface
  next-server 192.168.1.1;
  filename "pxelinux.0";
}

Now, the DHCP server should detect that the subnet 192.168.1.0 netmask 255.255.255.0 declaration matches the IP address and netmask configured for interface virbr0, and automatically start serving on that interface. This should allow you to complete the tutorial, and then you would be ready to start setting up virtual machines on your laptop, booting the installer by PXE.

If you want to PXE boot physical machines, you should do it in a network segment that has no other DHCP servers. If two DHCP servers have not been specifically configured to cooperate with each other, they will compete for clients instead, which could cause your PXE boot attempts to frustratingly work only some of the time. (It is technically possible to set up a second DHCP server for PXE booting without touching the configuration of the first one, but I count that as "dirty tricks you should avoid, unless you know what you are doing and have absolutely no way to do it correctly in the first place.")
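(Not part of the original answer, but a quick sketch of the matching rule dhcpd applies, using Python's standard ipaddress module; the subnet and interface addresses are the ones from the question and the suggested fix above:)

```python
import ipaddress

# dhcpd only serves on interfaces whose configured address falls inside
# one of the declared subnets. With the tutorial's config, no interface
# matches, so dhcpd ignores everything and exits:
declared = ipaddress.ip_network("172.168.1.0/24")   # subnet {} from dhcpd.conf
enp0s20f0u13 = ipaddress.ip_address("10.249.6.154")  # physical NIC from the logs
print(enp0s20f0u13 in declared)                      # False -> "Ignoring requests"

# After the suggested fix: declare 192.168.1.0/24 and give virbr0 192.168.1.1
fixed = ipaddress.ip_network("192.168.1.0/24")
virbr0 = ipaddress.ip_address("192.168.1.1")
print(virbr0 in fixed)                               # True -> dhcpd serves on virbr0
```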
“systemctl start dhcpd.service” command not working for PXE server setup
1,550,498,510,000
I have some CentOS 7 servers hosting VMs. The VMs are connected to a network bridge on their respective hosts that allows them to communicate with each other and with the host (via a dummy adapter on the host). The hosts also each have a physical adapter which allows external communication. The bridges must not be connected to the physical adapters.

This diagram should make the current layout clear. Subnet A connects the hosts to each other. Subnet B exists entirely within host 1. Subnet C exists entirely within host 2. The VMs must not have addresses on subnet A.

I'd like to combine the two bridges into a single virtual subnet so all the VMs share the same address space and broadcast domain. Is there a way to do this? Here's the goal: the VMs have CloudStack "public" IP addresses, which need to belong to their respective subnets. The VMs must not be in the same address space as subnet A. CloudStack public IP ranges are defined at the zone level, so the VMs would have to be all on the same subnet to get them into the same zone, let alone the same pod. The hosts can route traffic between the subnets.

I can't add more subnet A addresses to the hosts, and I also can't use NAT on the hosts. I'm also unable to set up VLANs in subnet A. For most purposes, subnet A is outside the scope of what I can control.
I was able to solve this by creating an overlay network using vxlan. I added a vxlan adapter to each host, and connected the adapters to the bridges on the hosts. One of the hosts serves as a router to connect the vxlan subnet to the rest of the network.
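(Editor's note, not part of the original answer: one practical detail worth checking in a VXLAN overlay like this is MTU. A sketch of the overhead arithmetic, assuming VXLAN over IPv4 with no outer VLAN tag:)

```python
# Each inner Ethernet frame is wrapped in VXLAN/UDP/IP before crossing the
# physical network, so the bridged VM interfaces need a smaller MTU than
# the underlay, unless the underlay supports jumbo frames.
outer_ip, outer_udp, vxlan_hdr, inner_eth = 20, 8, 8, 14
overhead = outer_ip + outer_udp + vxlan_hdr + inner_eth
underlay_mtu = 1500
vm_mtu = underlay_mtu - overhead
print(overhead, vm_mtu)  # 50 1450
```

If the VM interfaces are left at 1500, full-size frames get fragmented or silently dropped by the encapsulation, which typically shows up as small packets (ping, ssh handshake) working while bulk transfers stall.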
How can I create a virtual subnet that spans multiple servers?
1,550,498,510,000
It might be a dumb question but I'm stuck. I have two subnets on my router, 10.0.0.x/24 and 192.168.88.x/24. While ping is working on VMs in the 10.0.0.x/24 subnet and they see each other, telnet is adamant in saying that there is no route:

[root@centos7 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:aa:66:2e brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.20/24 brd 10.0.0.255 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feaa:662e/64 scope link
       valid_lft forever preferred_lft forever
[root@centos7 ~]# ip route
default via 10.0.0.1 dev enp0s3
10.0.0.0/24 dev enp0s3 proto kernel scope link src 10.0.0.20
169.254.0.0/16 dev enp0s3 scope link metric 1002
[root@centos7 ~]# ping 10.0.0.10
PING 10.0.0.10 (10.0.0.10) 56(84) bytes of data.
64 bytes from 10.0.0.10: icmp_seq=1 ttl=64 time=0.627 ms
64 bytes from 10.0.0.10: icmp_seq=2 ttl=64 time=0.667 ms
[root@centos7 ~]# telnet 10.0.0.10 53
Trying 10.0.0.10...
telnet: connect to address 10.0.0.10: No route to host
[Edited for clarity, and addition of information]

I believe that DNS uses UDP by preference (or at least it used to... I'm an old-timer). "No route to host" probably indicates that somewhere along the path (which essentially means "on the nameserver" in this case) the traffic to TCP port 53 is being denied by a firewall, since you obviously have a route to get there.

If you have access, log into 10.0.0.10 as root and check "iptables --list" to see if there's a rule in place to block traffic to that port.
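(Editor's addition, not from the original answer: a quick way to see where the misleading message comes from. "No route to host" is simply the error text for EHOSTUNREACH, which is what connect() gets back when a firewall REJECTs the packet with an ICMP host-prohibited error, the default reject rule on CentOS/OL, even though ordinary routing to the host is fine, which is why ping still works.)

```python
import errno
import os

# "No route to host" is the standard text for EHOSTUNREACH (113 on Linux).
# An iptables rule such as
#   -j REJECT --reject-with icmp-host-prohibited
# makes a TCP connect() fail with this errno, while ICMP echo (ping) is
# typically still allowed -- matching the symptoms in the question.
print(errno.EHOSTUNREACH, os.strerror(errno.EHOSTUNREACH))
```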
OL7: Telnet no route to host
1,550,498,510,000
Right now, whenever I connect to a (commercial) VPN server, a tun interface spins up with an inet address assigned to it as well as a peer one (whatever that means).

root@mininet-vm:/etc/openvpn# ip -4 a show dev tun0
33: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 100
    inet 10.9.10.6 peer 10.9.10.5/32 scope global tun0
       valid_lft forever preferred_lft forever

I have two virtual interfaces that I want to have all their traffic pass through the VPN. I can get this to work by routing traffic from these interfaces to tun0 and doing an SNAT to 10.9.10.6. I have tried to assign IPs like 10.9.10.7 and 10.9.10.8 to them, but I only get response traffic on tun0 when the src is 10.9.10.6 (hence why SNAT worked).

I was wondering if it's possible to turn tun0 into a regular subnet so that I can assign more than one client on that subnet and do this without resorting to NATing. I have found the topology option in the openvpn man page, but this option seems to be pushed by the server:

subnet -- Use a subnet rather than a point-to-point topology by configuring the tun interface with a local IP address and subnet mask, similar to the topology used in --dev tap and ethernet bridging mode.

I have tried to override it with

/usr/sbin/openvpn --topology subnet --pull-filter ignore "topology" ...

but it fails when adding the IP:

Fri Dec 28 01:52:47 2018 /sbin/ip addr add dev tun0 10.66.10.6/-1 broadcast 255.255.255.254
Error: an inet prefix is expected rather than "10.66.10.6/-1".

Is there any other way to get this to work? (Unfortunately I cannot control the server; this is a commercial VPN.)
If you have no control over the server then you can't do this. A TUN interface handles layer-three packets (as opposed to a TAP interface, which handles layer two). The server has static rules for routing, and you won't be able to change this using only client-side configuration.
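(Editor's addition, not from the original answer: the client-side failure itself is plain address validation. Without a netmask pushed by the server, forcing --topology subnet leaves OpenVPN with an undefined prefix length, hence the "/-1" that ip refuses. A small illustration using Python's ipaddress module:)

```python
import ipaddress

# The working point-to-point setup is effectively a host address with a
# /32 peer route -- there is no subnet for additional clients to live in:
p2p = ipaddress.ip_interface("10.9.10.6/32")
print(p2p.network)  # 10.9.10.6/32

# Forcing "topology subnet" with no server-pushed netmask produces the
# invalid prefix seen in the log, which is rejected outright:
try:
    ipaddress.ip_interface("10.66.10.6/-1")
except ValueError as err:
    print("rejected:", err)
```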
OpenVPN configuring client to use subnet topology results in "inet prefix is expected rather than 10.66.10.6/-1"
1,436,514,143,000
I had a tough, misleading error while connecting between AWS VPC subnets. The error occurred in the B->A connection and did not happen for A->B, so at first I thought it was a library bug. It turned out to be caused by the AWS-side "double layer" of routing, plus the NAT instance in the subnet, which redirected packets over the wrong network channel, causing ssh to drop the connection. Below is a copy of my post with the 'case study' that was deleted from the original thread:

As far as I can tell this isn't even attempting to answer the question, so I'm deleting it. If you have a separate question feel free to post it as one |@michael-mrozek

In my case, as @patrick suggested (ssh_exchange_identification: read: Connection reset by peer):

CLIENT (subnetB 172.16.3.76)

ssh 172.16.0.141 -vvv -p23
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 172.16.0.141 [172.16.0.141] port 23.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
ssh_exchange_identification: read: Connection reset by peer

SERVER (subnetA 172.16.0.141)

$(which sshd) -d -p 23
debug1: sshd version OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type RSA
debug1: private host key: #0 type 1 RSA
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type DSA
debug1: private host key: #1 type 2 DSA
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type ECDSA
debug1: private host key: #2 type 3 ECDSA
debug1: could not open key file '/etc/ssh/ssh_host_ed25519_key': No such file or directory
Could not load host key: /etc/ssh/ssh_host_ed25519_key
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-d'
debug1: rexec_argv[2]='-p'
debug1: rexec_argv[3]='23'
Set /proc/self/oom_score_adj from 0 to -1000
debug1: Bind to port 23 on 0.0.0.0.
Server listening on 0.0.0.0 port 23.
debug1: Bind to port 23 on ::.
Server listening on :: port 23.
debug1: Server will not fork when running in debugging mode.
debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8
debug1: inetd sockets after dupping: 3, 3
debug1: getpeername failed: Transport endpoint is not connected
debug1: get_remote_port failed

https://superuser.com/questions/856989/ssh-error-ssh-exchange-identification-read-connection-reset-by-peer

The VPC setup and case description:

I have AWS EC2 instances running in a VPC (172.16.0.0/16). There is a public subnetA (172.16.0.0/24) with NAT-instanceA (172.16.0.200), which has an elastic IP attached. The other instances in subnetA communicate with the internet via instanceA (default via 172.16.0.200 dev eth0). There are instances in subnetB (172.16.3.0/24); the route table is similar to https://stackoverflow.com/questions/10243833/how-to-connect-to-outside-world-from-amazon-vpc

The problem:

Hosts in both subnetA and subnetB can ping/communicate. Hosts in subnetA can ssh to hosts in subnetB. Hosts in subnetB can ssh to instanceA in subnetA. But NONE of the hosts in subnetB can ssh to any OTHER instance in subnetA (other than instanceA); there is the error

ssh_exchange_identification: read: Connection reset by peer

IF_AND_ONLY_IF the instances in subnetA have their default gateway set to NAT-instanceA (example: 'default via 172.16.0.200 dev eth0'). If an instance in subnetA has the unchanged default gateway (example: 'default via 172.16.0.1 dev eth0'), then you can ssh to that instance from subnetB hosts.

Comment: if there were no NAT in subnetA, the instances in subnetA would have no outgoing internet connection. So... the problem is probably caused by the Amazon AWS router and/or NAT configuration.

For the moment, I guess that despite the VPC routing table being set to:

Destination    Target
172.16.0.0/16  local
0.0.0.0/0      igw-nnnnn

the subnetA instances are in 172.16.0.0/24 (edit: source of the problem: the routing table redirects traffic other than 172.16.0.0/24 via the NAT instance, overriding the AWS-side routing for 172.16.0.0/16):

default via 172.16.0.200 dev eth0
172.16.0.0/24 dev eth0 proto kernel scope link src 172.16.0.60

The subnetB instances are in 172.16.3.0/24. When hosts from subnetB connect to instances in subnetA (other than NAT-instanceA), the traffic goes like:

172.16.3.X/24 --> 172.16.3.1 --> 172.16.0.Y
                                     V
              ??? <-- 172.16.3.200 (NAT)

And that is the problem. I would have to tcpdump that to verify; it might be fixable via NAT rules, though it is more complex than it should be. Actually, the rule in the AWS router

Destination    Target
172.16.0.0/16  local

should in theory cover the VPC /16 subnet, but the instance /24 subnet + NAT gateway hide that functionality at the "system level".
On the instances in subnetA (with NAT instance 172.16.0.200), the routing table looks like:

default via 172.16.0.200 dev eth0
172.16.0.0/24 dev eth0 proto kernel scope link src 172.16.0.141

Actually, one addition:

$ ip r a 172.16.3.0/24 via 172.16.0.1

(or ip r a 172.16.0.0/16 via 172.16.0.1) fixes the system routing table:

default via 172.16.0.200 dev eth0
172.16.0.0/24 dev eth0 proto kernel scope link src 172.16.0.141
172.16.3.0/24 via 172.16.0.1 dev eth0

and shifts the routing between VPC subnets back over to the AWS routers:

Destination    Target
172.16.0.0/16  local
0.0.0.0/0      igw-nnnnn
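(Editor's addition, not from the original answer: the fix works because of longest-prefix matching. The kernel prefers the most specific route, so the added /24 via the VPC router wins over the default via the NAT instance for subnetB traffic. A small sketch of that lookup; the route entries are copied from the answer, while the lookup function itself is only an illustration:)

```python
import ipaddress

def lookup(routes, dst):
    """Pick the most-specific (longest-prefix) matching route, as the kernel does."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

net = ipaddress.ip_network
# Routing table on a subnetA instance before the fix:
before = [
    (net("0.0.0.0/0"), "172.16.0.200"),   # default via the NAT instance
    (net("172.16.0.0/24"), "on-link"),
]
# Replies to subnetB hosts go through the NAT instance and get mangled:
print(lookup(before, "172.16.3.76"))   # 172.16.0.200

# After "ip r a 172.16.3.0/24 via 172.16.0.1":
after = before + [(net("172.16.3.0/24"), "172.16.0.1")]
print(lookup(after, "172.16.3.76"))    # 172.16.0.1 -> handled by the VPC router
```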
AWS VPC NAT | ssh_exchange_identification: read: Connection reset by peer [closed]