When I run a command through the strace utility I can see access errors such as

access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)

Now I've read somewhere that what's happening on the above line is that the linker is looking for an optimized version of the command that I'm running but can't find it. How do I solve this problem? What package do I need to install so that I can have that ld.so.nohwcap file on the system? Even if not for optimization purposes, just to get rid of these errors in strace?
You probably don't want to "solve" this problem; according to the Debian glibc manpage for ld.so:

/etc/ld.so.nohwcap — When this file is present the dynamic linker will load the non-optimized version of a library, even if the CPU supports the optimized version.

It's not installed by any package; it can be created by the system administrator to disable loading optimised libraries. Note that this is Debian-specific: the feature is implemented by a patch in the Debian glibc package and isn't available in upstream glibc. The feature's documentation disappeared from the ld.so manpage when the latter was moved from glibc to the man-pages project.
Where to get "/etc/ld.so.nohwcap" file from? [duplicate]
While both are called "linker" and are used to link binaries, I can't really figure out how they differ from each other. Can anyone tell me their differences?
Without getting too technical: both are "linkers", i.e. tools that combine/load a piece of compiled code with/into another piece of compiled code. ld is a static linker, while ld.so is a dynamic linker.

The letters so are, I believe, short for "shared object", and you'll usually see it as a file name suffix of shared libraries, i.e. libraries that may be dynamically linked into programs (one library is "shared" among several programs). In contrast, a static library often has the file name suffix .a, for "archive" (created by the ar utility).

A static linker links a program or library at compile-time, usually as the last step in the compilation process, creating a binary executable or a library. In the case of a binary executable file, it may be a static binary with all libraries loaded into the binary itself, or it may be a dynamically linked binary with only some libraries statically linked.

A dynamic linker loads the libraries that were dynamically linked at compile-time into the process's address space at run-time.

See the manuals for ld and ld.so on your system.
Difference between 'ld' and 'ld.so'?
I have an application that reads a file. Let's call it processname and the file ~/.configuration. When processname runs it always reads ~/.configuration and can't be configured differently. There are also other applications that rely on "~/.configuration", before and after, but not while processname is running. Wrapping processname in a script that replaces the contents of ~/.configuration is an option, but I recently had a power outage (while the contents were swapped out), where I lost the previous contents of said file, so this is not desirable. Is there a way (perhaps using something distantly related to LD_DEBUG=files processname?) for fooling a process into reading different contents when it tries to read a specific file? Searching and replacing the filename in the executable is a bit too invasive, but should work as well. I know it's possible to write a kernel module that takes over the open() call (https://news.ycombinator.com/item?id=2972958), but is there a simpler or cleaner way? EDIT: When searching for ~/.configuration in the processname executable I discovered that it tried to read another filename right before reading ~/.configuration. Problem solved.
In recent versions of Linux, you can unshare the mount namespace. That is, you can start processes that view the virtual file system differently (with file systems mounted differently). That can also be done with chroot, but unshare is more adapted to your case. Like chroot, unsharing the mount namespace requires superuser privileges.

So, say you have ~/.configuration and ~/.configuration-for-that-cmd files. You can start a process for which ~/.configuration is actually a bind-mount of ~/.configuration-for-that-cmd, and execute that-cmd in there, like:

sudo unshare -m sh -c "
  mount --bind '$HOME/.configuration-for-that-cmd' \
               '$HOME/.configuration' &&
  exec that-cmd"

that-cmd and all its descendant processes will see a different ~/.configuration. that-cmd above will run as root; use sudo -u another-user that-cmd if it needs to run as another user.
Making a process read a different file for the same filename
With two files, one compiled and linked with gcc and the other manually with nasm and ld I get ELF 32-bit LSB shared object ... ELF 32-bit LSB executable ... What's the difference between these two things? I can see with readelf -h that one is Type: DYN (Shared object file) Type: EXEC (Executable file) I can see these documented on Wikipedia as ET_DYN and ET_EXEC. What are the practical differences between these two?
It seems this has something to do with Position Independent Executables (PIE). When GCC compiles an executable, by default it makes it PIE, which changes the type field in the ELF header to ET_DYN. You can disable the generation of PIE executables with gcc -no-pie. If you're seeing this, check the default options gcc is configured with using gcc -v; you should see something like --enable-default-pie. Answer inspired by this submission on StackOverflow. I intend to play more with it and explain more here.
What is the difference between "LSB executable" (ET_EXEC) and "LSB shared object" (ET_DYN)?
I have a question about overwriting a running executable, or overwriting a shared library (.so) file that's in use by one or more running programs. Back in the day, for the obvious reasons, overwriting a running executable didn't work. There's even a specific errno value, ETXTBSY, that covers this case. But for quite a while now, I've noticed that when I accidentally try to overwrite a running executable (for example, by firing off a build whose last step is cc -o exefile on an exefile that happens to be running), it works! So my questions are, how does this work, is it documented anywhere, and is it safe to depend on it? It looks like someone may have tweaked ld to unlink its output file and create a new one, just to eliminate errors in this case. I can't quite tell if it's doing this all the time, or only if it needs to (that is, perhaps after it tries to overwrite the existing file, and encounters ETXTBSY). And I don't see any mention of this on ld's man page. (And I wonder why people aren't complaining that ld may now be breaking their hard links, or changing file ownership, and like that.) Addendum: The question wasn't specifically about cc/ld (although that does end up being a big part of the answer); the question was really just "How come I never see ETXTBSY any more? Is it still an error?" And the answer is, yes, it is still an error, just a rare one in practice. (See also the clarifying answer I just posted to my own question.)
It depends on the kernel, and on some kernels it might depend on the type of executable, but I think all modern systems return ETXTBSY ("text file busy") if you try to open a running executable for writing or to execute a file that's open for writing. Documentation suggests that it's always been the case on BSD, but it wasn't the case on early Solaris (later versions did implement this protection), which matches my memory. It's been the case on Linux since forever, or at least since 1.0.

What goes for executables may or may not go for dynamic libraries as well. Overwriting a dynamic library causes exactly the same problem that overwriting an executable does: instructions will suddenly be loaded from the same old address in the new file, which probably contains something completely different. But this is in fact not the case everywhere. In particular, on Linux, programs open a dynamic library under the hood with the open system call, with the same flags as any data file, and Linux happily allows you to rewrite the library file even though a running process might load code from it at any time.

Most kernels allow removing and renaming files while they're being executed, just like they allow removing and renaming files while they're open for reading or writing. Just like an open file, a file that's removed while it's being executed will not actually be removed from the storage medium as long as it is in use, i.e. until the last instance of the executable exits. Linux and *BSD allow it, but Solaris and HP-UX don't.

Removing a file and writing a new file by the same name is perfectly safe: the association between the code to load and the open (or being-executed) file that contains the code goes by the file descriptor, not the file name. It has the additional benefit that it can be done atomically, by writing to a temporary file then moving that file into place (the rename system call atomically replaces an existing destination file by the source file).
It's much better than remove-then-open-write since it doesn't temporarily put an invalid, partially-written executable in place. Whether cc and ld overwrite their output file, or remove it and create a new one, depends on the implementation. GCC (at least modern versions) and Clang remove and recreate, in both cases by calling unlink on the target if it exists and then open to create a new file. (I wonder why they don't do write-to-temp-then-rename.) I don't recommend depending on this behavior except as a safeguard, since it doesn't work on every system (it may work on every modern system for executables, but not for shared libraries), and common toolchains don't do things in the best way. In your build scripts, always generate files under a temporary name, then move them into place, unless you know the underlying tool does this.
Overwriting a running executable or .so
Running example C code is a painful exercise unless it comes with a makefile. I often find myself with a C file containing code that supposedly does something very cool, but for which a first basic attempt at compilation (gcc main.c) fails with

main.c:(.text+0x1f): undefined reference to `XListInputDevices'
clang-3.7: error: linker command failed with exit code 1 (use -v to see invocation)

or similar. I know this means I'm missing the right linker flags, like -lX11, -lXext or -lpthread. But which ones? The way I currently deal with this is to find the library header that a function was included from, use GitHub's search to find some other program that imports that same header, open its makefile, find the linker flags, copy them onto my compilation command, and keep deleting flags until I find a minimal set that still compiles. This is inefficient and boring, and makes me feel like there must be a better way.
The question is how to determine which linker flag to use from inspection of the source file. The example below works for Debian; the header files are the relevant items to note here.

So, suppose one has a C source file containing the header #include <X11/extensions/XInput.h>. We can search for XInput.h using, say, apt-file. (If you know this header file is contained in an installed package, dpkg -S or dlocate will also work.) E.g.:

apt-file search XInput.h
libxi-dev: /usr/include/X11/extensions/XInput.h

That tells you that this header file belongs to the development package for libXi (for C libraries, the development packages, normally of the form libname-dev or libname-devel, contain the header files), and therefore you should use the -lXi linker flag. Similar methods should work for any distribution with a package management system.
How can I find out what linker flags are needed to use a given C library function?
I use LD_PRELOAD to overwrite the read function. For a minimal test application it works fine, but if I test it with a larger application it does not work anymore. Also LD_DEBUG=all does not show anything at all: LD_DEBUG=all LD_PRELOAD=./lib.so ./big_app This just runs ./big_app and LD_PRELOAD has no effect. Is there a way to debug that?
This answer is only valid for a GNU/Linux environment.

From comments, the OP's binary has privilege features added to it: capabilities. This switches ld.so(8) into secure-execution mode, which by default disables most dynamic-linker-related environment variables, including LD_PRELOAD and LD_DEBUG:

Secure-execution mode
For security reasons, if the dynamic linker determines that a binary should be run in secure-execution mode, the effects of some environment variables are voided or modified [...] A binary is executed in secure-execution mode if [...] including:
- The process's real and effective user IDs differ, or the real and effective group IDs differ. [...]
- A process with a non-root user ID executed a binary that conferred capabilities to the process. [...]

However, with root access on the system it's possible to configure it to meet the prerequisites of secure-execution mode for these two variables when run as non-root, still as described in ld.so(8):

LD_PRELOAD [...] In secure-execution mode, preload pathnames containing slashes are ignored. Furthermore, shared objects are preloaded only from the standard search directories and only if they have the set-user-ID mode bit enabled (which is not typical).

What are the standard search directories? They are provided in the output of ld.so --help. For example, on a Debian amd64/x86_64:

$ ld.so --help
[...]
Shared library search path: (libraries located via /etc/ld.so.cache)
  /lib/x86_64-linux-gnu (system search path)
  /usr/lib/x86_64-linux-gnu (system search path)
  /lib (system search path)
  /usr/lib (system search path)
[...]

In the end the shared object file must be in one of these places. From testing, it:
- doesn't have to be owned by root, as long as it has mode u+s set;
- can be a symlink pointing to the actual u+s file in any place, as long as the symlink is in the right place;
- must be referenced in LD_PRELOAD by filename only, with no / anywhere.
If the binary's capabilities don't grant it arbitrary access to read a file, the shared object can have its mode set so that it allows only a single user or group (typically chmod o-rwx should be applied and the correct ownership set, then u+s restored). For setuid-root or CAP_DAC_OVERRIDE / CAP_DAC_READ_SEARCH binaries this probably won't prevent the library from being used by any user executing such a privileged binary.

LD_DEBUG [...] Since glibc 2.3.4, LD_DEBUG is ignored in secure-execution mode, unless the file /etc/suid-debug exists (the content of the file is irrelevant).

I didn't find a way to restrict it to only a user or group. Example (with root access, using sudo):

$ sudo touch /etc/suid-debug
$ sudo cp -aiL ./lib.so /usr/lib/lib.so
$ sudo chmod u+s /usr/lib/lib.so

which now allows running as a normal user with both variables taken into account:

LD_DEBUG=all LD_PRELOAD=lib.so ./big_app
LD_PRELOAD does not work and LD_DEBUG shows nothing
I'm trying to override malloc/free functions for the program, that requires setuid/setgid permissions. I use the LD_PRELOAD variable for this purpose. According to the ld documentation, I need to put my library into one of the standard search directories (I chose /usr/lib) and give it setuid/setgid permissions. I've done that. However, I still can't link to my .so file, getting the error: object 'liballoc.so' from LD_PRELOAD cannot be preloaded: ignored What can be the possible reasons for that? Tested this .so file on programs that don't have setuid/setgid permissions and all works fine. OS: RedHat 7.0
According to the ld documentation, I need to put my library into one of the standard search directories (I chose /usr/lib)

That was the mistake. You should've put it in /usr/lib64 (assuming that your machine is an x86_64). I've just tried the recipe from the manpage on a CentOS 7 VM (which should be ~identical to RHEL 7) and it works.

As root:

cynt# cc -shared -fPIC -xc - <<<'char *setlocale(int c, const char *l){ errx(1, "not today"); }' -o /usr/lib64/liblo.so
cynt# chmod 4755 /usr/lib64/liblo.so

As a regular user with a setuid program:

cynt$ LD_PRELOAD=liblo.so su -
su: not today

Whether it's a good idea to use that feature is a totally different matter (IMHO, it isn't).
LD_PRELOAD for setuid binary
I have Fedora 27. I am building something from source. (It is https://github.com/xmrig/xmrig-nvidia, if that matters.) Make gets to linking and then fails with this message:

/usr/bin/ld: cannot find -lstdc++
collect2: error: ld returned 1 exit status

The packages libstdc++ and libstdc++-devel are installed. Their 32-bit versions, just in case, are now also installed. I still get the message. What can I do to fix this? Thanks!
Okay, I found which file it was looking for using strace; the answer was libstdc++.a, so I fixed it by installing the libstdc++-static package.
Fedora 27 /usr/bin/ld: cannot find -lstdc++
I am wondering if I can keep the entries in /etc/ld.so.conf sorted. My ld.so.conf now looks like this:

/usr/X11R6/lib64/Xaw3d
/usr/X11R6/lib64
/usr/lib64/Xaw3d
/usr/X11R6/lib/Xaw3d
/usr/X11R6/lib
/usr/lib/Xaw3d
/usr/x86_64-suse-linux/lib
/usr/local/lib
/opt/kde3/lib
/usr/local/lib64
/opt/kde3/lib64
/lib64
/lib
/usr/lib64
/usr/lib
/usr/local/cuda-6.5/lib64

When sorted it would look like this; can I safely do it, or are there some dependencies which I would "destroy" with the sort?

/lib
/lib64
/opt/kde3/lib
/opt/kde3/lib64
/usr/X11R6/lib
/usr/X11R6/lib/Xaw3d
/usr/X11R6/lib64
/usr/X11R6/lib64/Xaw3d
/usr/lib
/usr/lib/Xaw3d
/usr/lib64
/usr/lib64/Xaw3d
/usr/local/cuda-6.5/lib64
/usr/local/lib
/usr/local/lib64
/usr/x86_64-suse-linux/lib
include /etc/ld.so.conf.d/*.conf
The entries in /etc/ld.so.conf are searched in order. Therefore, order matters. This only matters if the same library name (precisely speaking, the same SONAME) is present in multiple directories. If there are directories that you are absolutely sure will never contain the same library then you can put them in the order you prefer. In particular this means that directories in /usr/local should come before directories outside /usr/local, since the point of these directories is to have priority over the default system files. Among distribution-managed directories, it probably doesn't matter.
Is it OK to sort /etc/ld.so.conf
The NASM docs on "elf Extensions to the GLOBAL Directive" say, Optionally, you can control the ELF visibility of the symbol. Just add one of the visibility keywords: default, internal, hidden, or protected. The default is default of course. Where are these defined? and how does ld use them? I see access levels mentioned frequently in C++ which include protected, public, and private, but I don't know if this is what ELF is referencing? My use-case is C and Assembly so if you can make this relevant to those two languages and the linker, extra points.
From the NASM source, these appear to correspond to STV_DEFAULT, STV_INTERNAL, STV_HIDDEN, and STV_PROTECTED, documented in Oracle's "Linker and Libraries Guide". Oracle says this:

STV_DEFAULT
The visibility of symbols with the STV_DEFAULT attribute is as specified by the symbol's binding type. That is, global and weak symbols are visible outside of their defining component, the executable file or shared object. Local symbols are hidden. Global and weak symbols can also be preempted, that is, they may be interposed by definitions of the same name in another component.

STV_PROTECTED
A symbol defined in the current component is protected if it is visible in other components but cannot be preempted. Any reference to such a symbol from within the defining component must be resolved to the definition in that component, even if there is a definition in another component that would interpose by the default rules. A symbol with STB_LOCAL binding will not have STV_PROTECTED visibility.

STV_HIDDEN
A symbol defined in the current component is hidden if its name is not visible to other components. Such a symbol is necessarily protected. This attribute is used to control the external interface of a component. An object named by such a symbol may still be referenced from another component if its address is passed outside. A hidden symbol contained in a relocatable object is either removed or converted to STB_LOCAL binding by the link-editor when the relocatable object is included in an executable file or shared object.

STV_INTERNAL
This visibility attribute is currently reserved.

As for the effect on C and assembly, the Oracle docs go on to say:

None of the visibility attributes affects the resolution of symbols within an executable or shared object during link-editing. Such resolution is controlled by the binding type. Once the link-editor has chosen its resolution, these attributes impose two requirements.
Both requirements are based on the fact that references in the code being linked may have been optimized to take advantage of the attributes. First, all of the non-default visibility attributes, when applied to a symbol reference, imply that a definition to satisfy that reference must be provided within the current executable or shared object. If this type of symbol reference has no definition within the component being linked, then the reference must have STB_WEAK binding and is resolved to zero. Second, if any reference to or definition of a name is a symbol with a non-default visibility attribute, the visibility attribute must be propagated to the resolving symbol in the linked object. If different visibility attributes are specified for distinct references to or definitions of a symbol, the most constraining visibility attribute must be propagated to the resolving symbol in the linked object. The attributes, ordered from least to most constraining, are STV_PROTECTED, STV_HIDDEN and STV_INTERNAL.

See also:
- IBM, "What is symbol and symbol visibility"
- Oracle, "Linker and Libraries Guide"
What are difference between the ELF symbol visibility levels?
In the man page for ld.so(8), it says that When resolving library dependencies, the dynamic linker first inspects each dependency string to see if it contains a slash (this can occur if a library pathname containing slashes was specified at link time). If a slash is found, then the dependency string is interpreted as a (relative or absolute) pathname, and the library is loaded using that pathname. How can gcc link against a library with a path with a slash? I have tried with -l but that seems to work only with a library name which it uses to search various paths, not with a path argument itself. One follow-on question: when linking to a relative path in this way, what is the path relative to (e.g. the directory containing the binary or the working directory at runtime)? All of the linking guides I find when searching discuss using RPATH, LD_LIBRARY_PATH, and RUNPATH. RPATH is deprecated and most discussions discourage using LD_LIBRARY_PATH. RUNPATH with a path starting with $ORIGIN allows for a link to a relative path, but it is a little fragile because it can be overridden by LD_LIBRARY_PATH. I wanted to know if a relative path would be more robust (since I can't find anything discussing this I am guessing not, likely because the path is relative to the runtime directory).
If we (for the moment) ignore the gcc or linking portion of the question and instead modify a binary with patchelf on a Linux system:

$ ldd hello
	linux-vdso.so.1 =>  (0x00007ffd35584000)
	libhello.so.1 => not found
	libc.so.6 => /lib64/libc.so.6 (0x00007f02e4f6f000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f02e533c000)
$ patchelf --remove-needed libhello.so.1 hello
$ patchelf --add-needed ./libhello.so.1 hello
$ ldd hello
	linux-vdso.so.1 =>  (0x00007ffdb74fc000)
	./libhello.so.1 => not found
	libc.so.6 => /lib64/libc.so.6 (0x00007f2ad5c28000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f2ad5ff5000)

We now have a binary with a relative-path library. If there exist suitable directories with libhello.so.1 files present in them:

$ cd english/
$ ../hello
hello, world
$ cd ../lojban/
$ ../hello
coi rodo

we find that the path is relative to the working directory of the process, which opens up all sorts of problems, especially security problems. There might be some productive use for this (testing different versions of a library, perhaps). It would likely be simpler to compile two different binaries, or patchelf in the necessary library without the complication of a relative working directory.

Compile steps

libhello only has a helloworld call:

$ cat libhello.c
#include <stdio.h>
void helloworld(void) { printf("coi rodo\n"); }

and was compiled via:

CFLAGS="-fPIC" make libhello.o
gcc -shared -fPIC -Wl,-soname,libhello.so.1 -o libhello.so.1.0.0 libhello.o -lc
ln -s libhello.so.1.0.0 libhello.so.1
ln -s libhello.so.1.0.0 libhello.so

and the hello that makes the helloworld call was compiled via:

$ cat hello.c
int main(void) { helloworld(); return 0; }
$ CFLAGS="-lhello -L`pwd`/english" make hello

Without patchelf

In hindsight, modify the gcc command to use a relative directory path:

$ gcc -shared -fPIC -Wl,-soname,./libhello.so.1 -o libhello.so.1.0.0 libhello.o -lc
$ cd ..
$ rm hello
$ CFLAGS="-lhello -L`pwd`/lojban" make hello
$ ldd hello | grep hello
	./libhello.so.1 => not found
$ cd english
$ ../hello
hello, world

It's probably more sensible to compile the library in a normal fashion and then fiddle around with any binaries as necessary using patchelf.
How to link to a shared library with a relative path?
I am using a third-party .NET Core application (a binary distribution used by a VS Code extension) that unfortunately has diagnostic logging enabled with no apparent way to disable it (I did already report this to the authors). The ideal solution (besides being able to disable it) would be if I could specify to systemd that it should not log anything for that particular program, but I have been unable to find any way to do so. Here is everything I tried so far:

The first thing I tried was to redirect stdout and stderr to /dev/null: dotnet-app > /dev/null 2>&1. This indeed disabled any of the normal output, but the diagnostic logging was still being written to the systemd journal.

I hoped that the application had a command line argument that allowed me to disable the diagnostic logging. It did have a verbosity argument, but after experimenting with it, it only seemed to have an effect on the normal output, not the diagnostic logging.

By using strace and looking for calls to connect, I found out that the application instead wrote the diagnostic logging directly to /dev/log. The path /dev/log is a symlink to /run/systemd/journal/dev-log, so to verify my finding, I changed the symlink to point to /dev/null instead. This indeed stopped the diagnostic logging from showing up in the systemd journal.

I was told about LD_PRELOAD and made a library that replaced the standard connect with my own version that returned an error in the case it tried to connect to /dev/log. This worked correctly in my test program, but failed with the .NET Core application, with connect ENOENT /tmp/CoreFxPipe_1ddf2df2725f40a68990c92cb4d1ff1e. I experimented with my library, but even if all I did was directly pass the arguments to the standard connect function, it would still fail with the same error.

I then tried using Linux namespaces to make it so that /dev/log would point to /dev/null only for the .NET Core application: unshare --map-root-user --mount sh -c "mount --bind /dev/null /dev/log; dotnet-app $@". This too failed with the same error, even though it again worked for my test program. Even just using unshare --map-root-user --mount dotnet-app "$@" would fail with the error.

Next I tried using gdb to close the file descriptor to /dev/log while the application was running. This worked, but the application reopens it after some time has passed. I also tried changing the file descriptor to point to /dev/null, which also worked, but it too was reset to /dev/log after some time.

My last attempt was to write my own UNIX socket that would filter out everything written to it by the .NET Core application. That actually worked, but I learned that the PID is sent along with what is written to UNIX sockets, so everything passed along to the systemd journal would be reported as coming from the PID of the program backing my UNIX socket. For now this solution is acceptable for me, because on my system almost nothing uses /dev/log, but I would welcome a better solution. For example, I read that it is possible to spoof certain things as root for UNIX sockets, but I was unable to find out more about it. Or perhaps someone has insights on why both LD_PRELOAD and unshare might fail for the .NET Core application, while they work fine for a simple C test program that writes to /dev/log?
In short, have your library loaded by LD_PRELOAD override syslog(3) rather than connect(3). The /dev/log Unix socket is used by the syslog(3) glibc function, which connects to it and writes to it. Overriding connect(3) probably doesn't work because the syslog(3) implementation inside glibc will execute the connect(2) system call rather than the library function, so an LD_PRELOAD hook will not trap the call from within syslog(3). There's a disconnect between strace, which shows you syscalls, and LD_PRELOAD, which can override library functions (in this case, functions from glibc.) The fact that there's a connect(3) glibc function and also a connect(2) system call also helps with this confusion. (It's possible that using ltrace would have helped here, showing calls to syslog(3) instead.) You can probably confirm that overriding connect(3) in LD_PRELOAD as you're doing won't work with syslog(3) by having your test program call syslog(3) directly rather than explicitly connecting to /dev/log, which I suspect is how the .NET Core application is behaving. Hooking into syslog(3) is also potentially more useful, because being at a higher level in the stack, you can use that hook to make decisions such as selectively forwarding some of the messages to syslog. (You can load the syslog function from glibc with dlsym(RTLD_NEXT, "syslog"), and then you can use that function pointer to call syslog(3) for the messages you do want to forward from your hook.) The approach of replacing /dev/log with a symlink to /dev/null is flawed in that /dev/null will not accept a connect(2) operation (only file operations such as open(2)), so syslog(3) will try to connect and get an error and somehow try to handle it (or maybe return it to the caller), in any case, this might have side effects. Hopefully using an LD_PRELOAD override of syslog(3) is all you need here.
How to prevent a process from writing to the systemd journal?
When using the ldd command there is an option, -u, to print unused direct dependencies, as stated in the online help. For example:

$ ldd -u /bin/gcc
Unused direct dependencies:
	/lib64/libm.so.6
	/lib64/ld-linux-x86-64.so.2

What are "unused direct dependencies"? Why are they unused? Why are they dependencies?
They are dependencies because the binary lists them as dependencies, as “NEEDED” entries in its dynamic section: readelf -d /usr/bin/gcc will show you the libraries gcc requests. They are unused because gcc doesn’t actually need any of the symbols exported by the libraries in question. In ld-linux-x86-64.so.2’s case, that’s normal, because that’s the interpreter. In libm’s case, that usually results from an unconditional -lm, without corresponding linker options to drop unused libraries. In many cases this results from the limited granularity of build tools; in particular, linking e.g. GNOME libraries tends to result in long lists of libraries, which aren’t always all needed as direct dependencies (but end up in the tree of library dependencies anyway). It’s usually better to try to avoid having unused dependencies, to simplify dependency processing (both by the runtime linker, and by package management tools). It’s safe to ignore libm though since that’s tied to libc anyway.
What does "unused direct dependencies" mean?
1,305,731,391,000
I didn't exactly find something about the following in the man-page. How is the supposed behavior in subprocesses spawned by a process which was itself spawned by stdbuf? E.g.: stdbuf -oL myprog From the code, I get that it sets LD_PRELOAD, and as far as I know, all environment variables are inherited in any subprocesses. I'm interested in both fork(); and fork(); execv(); subprocesses. (Not sure if that would make a difference.) fork(); should not change the behavior at all. execv() would use the same LD_PRELOAD (as well as the stdbuf settings which also stored in env) and thus apply the same behavior (from the example: stdout is line-buffered). Right?
Tracing the execve (with environ) and write system calls with strace can help see what's going on: Here with the stdbuf of GNU coreutils 8.25. I believe FreeBSD's stdbuf works similarly: exec and no fork: $ env -i strace -s200 -vfe execve,write /usr/bin/stdbuf -o0 /usr/bin/env /usr/bin/env > /dev/null execve("/usr/bin/stdbuf", ["/usr/bin/stdbuf", "-o0", "/usr/bin/env", "/usr/bin/env"], []) = 0 execve("/usr/bin/env", ["/usr/bin/env", "/usr/bin/env"], ["_STDBUF_O=0", "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/coreutils/libstdbuf.so"]) = 0 execve("/usr/bin/env", ["/usr/bin/env"], ["_STDBUF_O=0", "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/coreutils/libstdbuf.so"]) = 0 write(1, "_STDBUF_O=0\n", 12) = 12 write(1, "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/coreutils/libstdbuf.so\n", 60) = 60 +++ exited with 0 +++ LD_PRELOAD and the config in _STDBUF_O is passed to both env commands. The two write() system calls, even though the output doesn't go to a terminal, confirm that the output is not buffered. fork and exec: $ env -i strace -s200 -vfe execve,write /usr/bin/stdbuf -o0 /bin/sh -c '/usr/bin/env; :' > /dev/null execve("/usr/bin/stdbuf", ["/usr/bin/stdbuf", "-o0", "/bin/sh", "-c", "/usr/bin/env; :"], []) = 0 execve("/bin/sh", ["/bin/sh", "-c", "/usr/bin/env; :"], ["_STDBUF_O=0", "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/coreutils/libstdbuf.so"]) = 0 Process 16809 attached [pid 16809] execve("/usr/bin/env", ["/usr/bin/env"], ["_STDBUF_O=0", "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/coreutils/libstdbuf.so", "PWD=/home/stephane"]) = 0 [pid 16809] write(1, "_STDBUF_O=0\n", 12) = 12 [pid 16809] write(1, "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/coreutils/libstdbuf.so\n", 60) = 60 [pid 16809] write(1, "PWD=/home/stephane\n", 19) = 19 [pid 16809] +++ exited with 0 +++ --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, Same situation. So yes stdbuf applies to the command it runs and all of its descendants (provided they don't clean their environment like the dynamic linker or libc do with LD_PRELOAD for setuid/setgid applications).
stdbuf supposed behavior for subprocesses
1,305,731,391,000
The dynamic linker can be run either indirectly by running some dynamically linked program or shared object (in which case no command-line options to the dynamic linker can be passed and, in the ELF case, the dynamic linker which is stored in the .interp section of the program is executed) or directly by running: /lib/ld-linux.so.* [OPTIONS] [PROGRAM [ARGUMENTS]] https://jlk.fjfi.cvut.cz/arch/manpages/man/core/man-pages/ld.so.8.en Similar info can be found in Program Library HOWTO. But when I try, $ LD_DEBUG=libs /usr/lib/ld-linux.so.2 ls 23325: find library=ls [0]; searching 23325: search cache=/etc/ld.so.cache 23325: ls: error while loading shared libraries: ls: cannot open shared object file $ LD_DEBUG=libs ls 23503: find library=libcap.so.2 [0]; searching 23503: search cache=/etc/ld.so.cache 23503: trying file=/usr/lib/libcap.so.2 ... What am I doing wrong? Is there a way to use ld-linux.so directly to run a program?
Try using full path for ls: [ctor@dom0 tst]$ /lib64/ld-linux-x86-64.so.2 /usr/bin/ls afile [ctor@dom0 tst]$ /lib64/ld-linux-x86-64.so.2 ls ls: error while loading shared libraries: ls: cannot open shared object file [ctor@dom0 tst]$ /lib64/ld-linux-x86-64.so.2 anyinexistentcommandhere anyinexistentcommandhere: error while loading shared libraries: anyinexistentcommandhere: cannot open shared object file [ctor@dom0 tst]$ ldd ls ldd: ./ls: No such file or directory [ctor@dom0 tst]$ ldd `type -P ls` linux-vdso.so.1 (0x00007fffd636c000) libselinux.so.1 => /lib64/libselinux.so.1 (0x000074b858cc3000) libcap.so.2 => /lib64/libcap.so.2 (0x000074b858abe000) libc.so.6 => /lib64/libc.so.6 (0x000074b8586f8000) libpcre.so.1 => /lib64/libpcre.so.1 (0x000074b858486000) libdl.so.2 => /lib64/libdl.so.2 (0x000074b858282000) /lib64/ld-linux-x86-64.so.2 (0x000074b85910a000) libpthread.so.0 => /lib64/libpthread.so.0 (0x000074b858064000) [ctor@dom0 tst]$ LD_DEBUG=libs /lib64/ld-linux-x86-64.so.2 ls 6380: find library=ls [0]; searching 6380: search cache=/etc/ld.so.cache 6380: ls: error while loading shared libraries: ls: cannot open shared object file [ctor@dom0 tst]$ LD_DEBUG=libs /lib64/ld-linux-x86-64.so.2 inexistentcommand 6415: find library=inexistentcommand [0]; searching 6415: search cache=/etc/ld.so.cache 6415: inexistentcommand: error while loading shared libraries: inexistentcommand: cannot open shared object file [ctor@dom0 tst]$ LD_DEBUG=libs /lib64/ld-linux-x86-64.so.2 /usr/bin/ls 6342: find library=libselinux.so.1 [0]; searching 6342: search cache=/etc/ld.so.cache 6342: trying file=/lib64/libselinux.so.1 6342: 6342: find library=libcap.so.2 [0]; searching 6342: search cache=/etc/ld.so.cache 6342: trying file=/lib64/libcap.so.2 6342: 6342: find library=libc.so.6 [0]; searching 6342: search cache=/etc/ld.so.cache 6342: trying file=/lib64/libc.so.6 6342: 6342: find library=libpcre.so.1 [0]; searching 6342: search cache=/etc/ld.so.cache 6342: trying file=/lib64/libpcre.so.1 
6342: 6342: find library=libdl.so.2 [0]; searching 6342: search cache=/etc/ld.so.cache 6342: trying file=/lib64/libdl.so.2 6342: 6342: find library=libpthread.so.0 [0]; searching 6342: search cache=/etc/ld.so.cache 6342: trying file=/lib64/libpthread.so.0 6342: 6342: 6342: calling init: /lib64/libpthread.so.0 6342: 6342: 6342: calling init: /lib64/libc.so.6 6342: 6342: 6342: calling init: /lib64/libdl.so.2 6342: 6342: 6342: calling init: /lib64/libpcre.so.1 6342: 6342: 6342: calling init: /lib64/libcap.so.2 6342: 6342: 6342: calling init: /lib64/libselinux.so.1 6342: 6342: 6342: initialize program: /usr/bin/ls 6342: 6342: 6342: transferring control: /usr/bin/ls 6342: afile
How to run programs with ld-linux.so?
1,305,731,391,000
I know that ELF executable files need to have a visible _start subroutine where the execution begins. However, from what I can understand, the Kernel actually calls in ld-linux.so (or some other interpreter) and hand over the execution to it. So, my questions are: Who mandates the _start entrypoint? How does the kernel "call into" ld-linux.so? Does it have a stable API? A _start function, so to speak? Bonus Question: It seems from a cursory glance that Glibc, libdl and ld-linux.so are all part of the same codebase and are tightly wound together (using each other's private interfaces). Does this mean that it is impossible to write a custom libdl-equivalent library to implement dlopen, etc.? Is it impossible for a non-C systems language to generate binaries that do not depend on libc and could still load *.so files?
The entry point is conventionally named _start, and is defined in the C runtime assembly routine that is linked into the executable. This short piece of code is responsible for setting up the stack, possibly calling C++ constructors, and finally calling main. The definitive answer to where a program starts execution is the e_entry value in the ELF header in the executable file. This value is set to point to _start by the linker. You can see this by examining an executable program with readelf -a progfile. Dynamic linking complicates matters a bit, since the dynamic linker is loaded and started first, with the responsibility of loading and linking the shared libraries the program needs. The dynamic linker is also specified in the executable file (it is called the "program interpreter".) Lwn.net had an excellent two-part article on How programs get run (part two), which I recommend reading if you really want to get into the details of this topic.
What mandates the _start entrypoint (kernel, ld-linux.so, etc.)?
1,305,731,391,000
I installed ATLAS (with Netlib LAPACK) in a Docker image, and now every time I run ldconfig, I get the following errors: ldconfig: Can't link /usr/local/lib//usr/local/lib/libtatlas.so to libtatlas.so ldconfig: Can't link /usr/local/lib//usr/local/lib/libsatlas.so to libsatlas.so Of course, /usr/local/lib//usr/local/lib/libtatlas.so doesn't exist, but I'm confused why it would try to look for this file, since libtatlas.so isn't a symbolic link: root@cd00953552ab:/usr/local/lib# ls -la | grep atlas -rw-r--r-- 1 root staff 15242054 Apr 27 08:18 libatlas.a -rwxr-xr-x 1 root staff 17590040 Apr 27 08:18 libatlas.so -rwxr-xr-x 1 root staff 17492184 Apr 27 08:18 libsatlas.so -rwxr-xr-x 1 root staff 17590040 Apr 27 08:18 libtatlas.so Why would this be happening, and is there a way to fix it/turn off this error message? Edit: Here's the Readelf output: root@cd00953552ab:/usr/local/lib# eu-readelf -a /usr/local/lib/libatlas.so | grep SONAME SONAME Library soname: [/usr/local/lib/libtatlas.so]
For some reason, probably related to the way the libraries were built (and more specifically, linked), they’ve stored their installation directory in their soname: thus libtatlas.so’s soname is /usr/local/lib/libtatlas.so. ldconfig tries to link libraries to their soname, if it doesn’t exist, in the same directory: it finds /usr/local/lib/libtatlas.so, checks its soname, determines that a link needs to be made from /usr/local/lib//usr/local/lib/libtatlas.so (the directory and soname concatenated) to /usr/local/lib/libtatlas.so, and fails because /usr/local/lib/usr/local/lib doesn’t exist. The appropriate way to fix this is to ensure that the libraries’ sonames are defined correctly. Typically I’d expect libtatlas.so.3 etc. with no directory name (the version would depend on the ABI level of the library being built). You probably need to rebuild the libraries, or find a correctly-built package... Alternatively, you can edit a library’s soname using PatchELF: patchelf --set-soname libtatlas.so /usr/local/lib/libtatlas.so Ideally you should relink the programs you built using this library, since they’ll have the soname embedded too (you can also patch that using PatchELF). In an evolving system, you’d really want to specify a version in the soname, but in a container it probably doesn’t matter — you should be rebuilding the container for upgrades anyway.
ldconfig cannot link to specific files
1,305,731,391,000
I installed zeromq 3.2.5 from source $ wget http://download.zeromq.org/zeromq-3.2.5.tar.gz $ tar xf zeromq-3.2.5.tar.gz $ cd zeromq-3.2.5 $ ./configure && make -j4 $ sudo make install This installs libzmq.so.3 into /usr/local/lib: $ sudo updatedb $ locate libzmq.so.3 /usr/local/lib/libzmq.so.3 /usr/local/lib/libzmq.so.3.0.0 I've confirmed that /usr/local/lib is in the ld search path: $ grep /usr/local/lib /etc/ld.so.conf.d/* /etc/ld.so.conf.d/libc.conf:/usr/local/lib I've confirmed that ld can find the library: $ ldconfig -v 2>/dev/null | egrep -e zmq\|^/ ... /usr/local/lib: libzmq.so.3 -> libzmq.so.3.0.0 ... However, if I run ldd on my app, it cannot find libzmq.so.3 $ ldd test_app ... libzmq.so.3 => not found ... If I set LD_LIBRARY_PATH then it works $ export LD_LIBRARY_PATH=/usr/local/lib $ ldd test_app ... libzmq.so.3 => /usr/local/lib/libzmq.so.3 (0x00007f22418d9000) ... Question: Why can't ld find libzmq.so.3 without LD_LIBRARY_PATH when it's in a standard path? How can I fix this without having to set LD_LIBRARY_PATH? Notes: The RPATH is set on the binary, in case that's important: $ readelf -a test_app | grep RPATH 0x000000000000000f (RPATH) Library rpath: [/home/steve/src/.../bin/gcc-4.9.3/debug] I'm running Ubuntu 14.04 in case that's of any use
When you add new libraries to the system directories you may need to refresh the linker cache with ldconfig. This needs to be run as root. Without this command the runtime linker will have a stale idea of what libraries are available. You similarly need to do this if you decide to add new directories to the system linker path. Setting LD_LIBRARY_PATH causes the runtime linker to look in that directory directly, outside of the cache.
ld can't find .so
1,305,731,391,000
I'm having trouble linking the Intel MKL libraries to use in building Julia with MKL support. I've had this problem with other projects as well, but here I'll focus on Julia. I have MKL installed in /opt/intel. I've tried: Running /opt/intel/bin/compilervars.sh intel64 Running /opt/intel/mkl/bin/mklvars.sh intel64 Adding the library (libmkl_rt.so) to LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/mkl/lib/intel64_lin Adding a file called "mkl.conf" within /etc/ld.so.conf.d with the contents /opt/intel/compilers_and_libraries_2019/linux/mkl/lib/intel64_lin After the last two I ran sudo ldconfig, but there hasn't been any change. How can I get Make to recognize this library?
LD_LIBRARY_PATH and files in /etc/ld.so.conf.d configure the runtime linker, not the linker used during builds. To build Julia with MKL, you should add USE_INTEL_MKL = 1 to Make.user, run source /opt/intel/bin/compilervars.sh intel64, and build Julia from the same shell (so that the variables set by compilervars are taken into account).
ld linker ignores LD_LIBRARY_PATH
1,305,731,391,000
There are at least two standards of Executable and Linkable Format (ELF), one of them System V Application Binary Interface AMD64 Architecture Processor Supplement (With LP64 and ILP32 Programming Models) Version 1.0 Tool Interface Standard (TIS) Executable and Linking Format (ELF) Specification Version 1.2 The older one, the TIS ELF Standard 1.2 is 106 pages while the SysV ABI is 157 pages but covers ELF only on pages 63-86 (23 pages). How do these two standards relate to each other? And which one does Linux and GNU Linker use? What is the Tool Interface Standard?
The TIS/ELF document covers ELF in general, while the System V ABI document is a processor supplement which documents the x86_64 Application Binary Interface. The TIS document contains no x86_64-specific information, since the architecture didn't exist at the time it was written. In practice, Linux and the GNU linker follow the generic ELF format together with the relevant processor supplement (the System V psABI on x86-64).
Different standards of ELF (SysV vs TIS) and Linux?
1,305,731,391,000
My shared library libnew.so uses some symbols form an already built third-party shared library libold.so. I would like to build an executable binary file that should be only linked against libnew.so. But it still needs to be linked against libold.so too. Otherwise, the linker complains about undefined reference to symbol... . I used these commands to build libnew.so and the executable file. For some reason, it is only allowed to use the current directory and I cannot put my files somewhere else. Btw, nm libnew.so shows symbols from libold.so as undefined. ldd libnew.so also does not report any libold.so. If I do not use libold.so for building libnew.so, the size of libnew.so will not be changed! For libnew.so : gcc -Wall -fPIC -c libnew.c gcc -shared -Wl,-soname,libnew.so.1 -o libnew.so.1.0 *.o -L. -lold ln -sf libnew.so.1.0 libnew.so.1 ln -sf libnew.so.1.0 libnew.so For the executable : #this does not work and needs -lold as well. gcc -Wall main.c -L. -lnew -o prog Is there anything missing? Thanks!
What is missing is that your linker command gcc -shared -Wl,-soname,libnew.so.1 -o libnew.so.1.0 *.o -L. -lold does not copy objects from libold.so, but refers to symbols in that file to tell the dynamic loader where it can obtain those symbols. Normally when someone is trying to suppress/hide a given library, they start by recombining the object-files which were used to make the shared library. You might be able to accomplish this via partial linking, but I do not see a solution (since shared libraries and shared objects are not interchangeable). Further reading: Merge multiple .so shared libraries (says it will not work...) What is Partial Linking in GNU Linker? ld - The GNU linker
How can I build my shared library (.so) so that symbols from a different shared library are also included? [closed]
1,305,731,391,000
I have an application that needs a modified LD_PRELOAD. I want to start the application using the originally provided rc script, so I can benefit from an automatically updated rc script on an update of the application. I can't modify the original rc script of course, because any change would be lost on the next update. So, is there maybe some system settings like: If starting application X, use a modified LD_PRELOAD? Or would my best way really be to copy the original rc script, modify it and use the modified rc script?
The best way is probably to create your own rc-script that you will use instead of the "official one". Otherwise, your rc-script probably includes an external "config" file if you check it. The include may look like this: . /etc/default/mydaemon-config So that you can edit /etc/default/mydaemon-config and do something like: export LD_PRELOAD=whateveryouwant But be careful, it may not be what you want, because every process started from the script will have that LD_PRELOAD configuration. Otherwise, the original script may have something like: DAEMON=/usr/bin/mydaemon So you might be able to change it from /etc/default/mydaemon-config with: DAEMON="LD_PRELOAD=whateveryouwant $DAEMON" This depends on your original rc-script, which we don't have, so it's only speculation... Anyway, these are all workarounds, and IMHO, you should rather look for a solution to avoid using LD_PRELOAD in the first place.
Automatically start an application with a modifed LD_PRELOAD?
1,305,731,391,000
I know that snipersim isn't a very typical "project" but this is more a linux/linking problem than anything else, so I think it goes here. I have also contacted the developers, but have yet to receive an answer. First, for quick explanation of what I'm trying to do: For my master thesis I am using the architectural simulator snipersim (http://www.snipersim.org). I downloaded it to my local machine (running Linux Mint 17.2), built it, and started working with it. Everything works perfectly fine. Seeing as I need to make a few hundred simulations, each of which takes hours, I was given access to an university computing cluster using HTCondor, on x86_64 OpenSUSE 13.1 machines. Obviously, I do not have root access to the cluster. Due to them having different distributions, I can't simply copy over the binaries (I ended up trying later, but the code behaves erratically), so I wanted to recompile snipersim. My compilation process In the cluster's access machine (from where you can submit the parallel jobs using condor_submit), I cloned my snipersim fork. SQLite3 I checked whether the necessary libraries were installed, and noticed libsqlite3 was missing. In order to fix this, I downloaded sqlite-autoconf-3090200.tar.gz from the SQLite.org website, configured it to install to ~/sqlite (./configure -prefix=~/sqlite), and did make && make install. I then configured SQLITE_PATH to point to ~/sqlite, and both LIBRARY_PATH and LD_LIBRARY_PATH to point to ~/sqlite/lib So far, so good. (For future reference, yes, SQLite3 was compiled with -fPIC) Sniper With all the libraries out of the way, I set out to compile sniper. Changed to the main directory, and typed make. Everything seems fine, exactly like on my home machine. It goes through all the dependencies, source files, etc, and then gets to the last step, which is the linking of the main executable sniper. 
Here, it suddenly gives an error, and halts: [LD ] lib/sniper /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.4/../../../../x86_64-pc-linux-gnu/bin/ld: ~/sniper/standalone/../standalone/standalone.o: relocation R_X86_64_32S against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC ~/sniper/standalone/../standalone/standalone.o: error adding symbols: Bad value collect2: error: ld returned 1 exit status Makefile:34: recipe for target '~/sniper/standalone/../lib/sniper' failed make: *** [~/sniper/standalone/../lib/sniper] Error 1 This error message has me stumped. Everything is compiled with -fPIC, so the error must have to do with some libraries that get pulled in from outside. The command executed here for the linking (for those of you who never used sniper) (It actually uses full paths, instead of '~', but I replaced those with '~' as they contain a ton of personally identifying information): g++ -L~/sniper/standalone/../lib -L~/sniper/standalone/../sift -L~/sniper/standalone/../pin_kit/extras/xed-intel64/lib -L~/sqlite/lib -L~/sniper/standalone/../pin_kit/extras/xed2-intel64/lib -o ~/sniper/standalone/../lib/sniper ~/sniper/standalone/../standalone/exceptions.o ~/sniper/standalone/../standalone/standalone.o -lcarbon_sim -lpthread -lsift -lxed -L~/sniper/standalone/../python_kit/intel64/lib -lpython2.7 -lrt -lz -lsqlite3 -lxed -O2 -g -std=c++0x
Again, the only thing that comes to mind is that one of the libraries being pulled in was compiled without -fPIC, which is unlikely, I must be overlooking something. Is there any way to have ld print a list of all the libraries it pulls in, as it does so? If I could figure out where the problem lies, I might be able to pull in a manually-compiled library, for example. In addition, I checked relocations on my home machine, and the exact same relocation ld complains about (R_X86_64_32S against .rodata.str1.1) exists in the standalone.o file, but everything works fine there. I thought that maybe it could be due to my custom install of SQLite3 (I had it installed through a package manager in my home machine), as such I tried installing a copy on my home machine through the exact same process as on the cluster. Everything still works, and I confirmed through ldd that it was actually linking against my copy (instead of the system version). I also compared gcc, g++ and ld versions between the two machines, and they match. In addition, one weird thing I have noticed: The file ~/sniper/lib/pin_sim.so (compiled from some sniper code that pulls in the Intel PIN library) is a 64-bit dynamically linked ELF executable (as expected), but running ldd pin_sim.so simply prints not a dynamic executable, while in my home machine it prints all the used shared libraries. I tried copying pin_sim.so from my computer to the cluster, and ldd is also not able to read it. readelf -d pin_sim.so still works on both machines. This is very weird. The only reference I could find to ldd failing when readelf/objdump don't was when calling it on a 32-bit executable under a 64-bit system. In this case, both the executable and the system are 64-bit, so that's not it. I'm completely out of ideas on what to do. I spent about 5 hours today scouring the web for solutions to similar issues and trying them all, to no avail. Hopefully someone here has some ideas? 
Edit 1: Comparison of linker parameters between both machines As suggested by siblynx, I ran g++ on both machines using the -v parameter to try and figure out what the differences were when it calls the linker (collect2). I cleaned them up slightly (only the relevant file names, not the whole path), and removed the library directories (-L): Common Parameters: --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o lib/sniper crti.o crtn.o standalone/exceptions.o standalone/standalone.o -lcarbon_sim -lpthread -lsift -lxed -lpython2.7 -lrt -lz -lsqlite3 -lxed -lstdc++ -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc Parameters unique to my home machine (Linux Mint 17.2, where sniper works): --sysroot=/ --build-id -z relro crt1.o crtbegin.o crtend.o Parameters unique to the cluster machine (where it doesn't work): -pie -z now Scrt1.o crtbeginS.o crtendS.o I'm not very knowledgeable when it comes to linking, but I did notice that my home setup includes crt1.o crtbegin.o crtend.o, while the cluster includes Scrt1.o crtbeginS.o crtendS.o (notice the extra 'S'). What exactly do these files do, and what does the S in the filename mean? (one of "Shared" or "Static", I assume?)
Cause: It seems the '-pie' parameter breaks Sniper compilation. I tried adding it to my home machine and it fails with the exact same error. Removing it from the cluster's link line makes the linker succeed. As the user siblynx mentioned, OpenSUSE (at least the one in the cluster) enforces executables to use PIE when being linked, while Linux Mint doesn't. Solution: Simply adding -fno-pie to the $(TARGET) linker call in the file standalone/Makefile overrides the COLLECT_GCC_OPTIONS -pie and everything seems to work correctly. ldd pin_sim.so still doesn't work, but that's a different issue entirely. I might post a separate question for this, actually.
Error while building snipersim: "relocation R_X86_64_32S against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC"
1,305,731,391,000
I am trying to install plexmediaplayer from source. This involves compiling libmpv.so.1 which I've done and installed under /usr/local/lib When I run plexmediaplayer, I get the following error: $ plexmediaplayer plexmediaplayer: error while loading shared libraries: libmpv.so.1: cannot open shared object file: No such file or directory ldconfig finds the library correctly: $ ldconfig -v | grep libmpv libmpv.so.1 -> libmpv.so.1.24.0 ldd on the plexmiediaplayer binary shows libmpv: $ ldd plexmediaplayer | grep libmpv libmpv.so.1 => /usr/local/lib/libmpv.so.1 (0x00007f2fe4f33000) which is a symlink: ls -l /usr/local/lib/libmpv.so.1 lrwxrwxrwx 1 root root 16 Feb 9 20:37 /usr/local/lib/libmpv.so.1 -> libmpv.so.1.24.0 both the shared object and executable are compiled for x86_64 and readable by the non-root user trying to run plexmediaplayer: $ file /usr/local/lib/libmpv.so.1.24.0 /usr/local/lib/libmpv.so.1.24.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=855d9cbf952c76e3c0c1c1a162c4c94ea5a12b91, not stripped $ file /usr/local/bin/plexmediaplayer /usr/local/bin/plexmediaplayer: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=dc92ac026c5ac7bc3e5554a591321de81a3f4576, not stripped These both match my machine arch: $ uname -a Linux hostname 4.4.0-66-generic #87-Ubuntu SMP Fri Mar 3 15:29:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Running strace on plexmediaplayer gives the following: $ strace -o lotsalogs -ff -e trace=file plexmediaplayer open("/opt/Qt5.8.0/5.8/gcc_64//lib/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/opt/Qt5.8.0/5.8/gcc_64//lib/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/opt/Qt5.8.0/5.8/gcc_64//lib/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 
open("/opt/Qt5.8.0/5.8/gcc_64//lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/usr/local/lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 EACCES (Permission denied) open("/lib/x86_64-linux-gnu/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/usr/lib/x86_64-linux-gnu/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/usr/lib/x86_64-linux-gnu/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/usr/lib/x86_64-linux-gnu/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/usr/lib/x86_64-linux-gnu/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/lib/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/lib/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/lib/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/usr/lib/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/usr/lib/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/usr/lib/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) open("/usr/lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) Which includes: open("/usr/local/lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 EACCES (Permission denied) but the permissions on the file through the symlink are: ls -l 
/usr/local/lib/libmpv.so.1.24.0 -rwxr-xr-x 1 root root 27872856 Mar 22 22:17 /usr/local/lib/libmpv.so.1.24.0 Any ideas why this can't be found by my binary? EDIT: I wiped all libmpv under /usr/local/lib and plexmediaplayer under /usr/local/bin, and removed by source directory, then reinstalled side-by-side in a VM. The build in the VM worked, the one on my host machine did not. I also hashed ld on both machines, and (unsurprisingly) they match.
It turns out that I (badly?) configured apparmor for plexmediaplayer months ago, which caused the problem upon updating and recompiling.
Cannot find shared object file even though it's in library path
1,305,731,391,000
What is the difference between the 386 and 32 bit options in ld -V? elf32_x86_64 elf_i386 i386linux i386pep i386pe And, where can I find the documentation on these "emulation modes"
The “emulation” selects different linker scripts; you’ll find the scripts themselves in /usr/lib/ldscripts on your system. The emulations you’ve listed correspond to elf32_x86_64: ELF for x64-32, aka x32 — 32-bit x86-64 binaries elf_i386: ELF for i386 — 32-bit i386 binaries i386linux: a.out for i386 i386pep: PE+ for x86-64 — Windows-format 64-bit binaries i386pe: PE for i386 — Windows-format 32-bit binaries The linker scripts define the output format and architecture, the search directories (where ld looks for libraries), the sections in the binary, among other things. The linker script format is well documented (see above), but the available scripts aren’t; in most cases GCC will specify the right one, so you don’t need to worry about it, and in other cases you effectively end up needing to read the linker scripts themselves to figure out what they do.
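For example, ld --verbose prints the script in effect for a given emulation; its first declarations give the output format and architecture:

```shell
# Dump the linker script selected by the elf_x86_64 emulation
ld -m elf_x86_64 --verbose | grep -E 'OUTPUT_FORMAT|OUTPUT_ARCH'
```

Piping through less instead of grep shows the full script, including the SEARCH_DIR entries and section layout mentioned above.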
GNU Linker differences between the different 32bit emulation modes?
1,305,731,391,000
I have a secret library built as a package for CentOS 6.5. I can't build the package for CentOS 7.4; make install fails on this line:

$ gcc -static -O3 -Wno-long-long -funroll-loops -Wall -g -DLINUX testlib.c -o test-lib -L. -llsh -lstdc++
/usr/bin/ld: cannot find -lstdc++
/usr/bin/ld: cannot find -lc

I tried to investigate this, for example:

$ ld -lstdc++ --verbose
...
attempt to open /usr/x86_64-redhat-linux/lib64/libstdc++.so failed
attempt to open /usr/x86_64-redhat-linux/lib64/libstdc++.a failed
attempt to open /usr/lib64/libstdc++.so failed
attempt to open /usr/lib64/libstdc++.a failed
attempt to open /usr/local/lib64/libstdc++.so failed
attempt to open /usr/local/lib64/libstdc++.a failed
attempt to open /lib64/libstdc++.so failed
attempt to open /lib64/libstdc++.a failed
attempt to open /usr/x86_64-redhat-linux/lib/libstdc++.so failed
attempt to open /usr/x86_64-redhat-linux/lib/libstdc++.a failed
attempt to open /usr/local/lib/libstdc++.so failed
attempt to open /usr/local/lib/libstdc++.a failed
attempt to open /lib/libstdc++.so failed
attempt to open /lib/libstdc++.a failed
attempt to open /usr/lib/libstdc++.so failed
attempt to open /usr/lib/libstdc++.a failed
ld: cannot find -lstdc++

I took a look at some paths and found this: /usr/lib/libstdc++.so.6

I created a symlink (trying to cheat) but got another error:

$ sudo ln -s /usr/lib/libstdc++.so.6.0.19 /usr/lib/libstdc++.so
$ ld -lstdc++ --verbose
...
attempt to open /usr/x86_64-redhat-linux/lib64/libstdc++.so failed
attempt to open /usr/x86_64-redhat-linux/lib64/libstdc++.a failed
attempt to open /usr/lib64/libstdc++.so failed
attempt to open /usr/lib64/libstdc++.a failed
attempt to open /usr/local/lib64/libstdc++.so failed
attempt to open /usr/local/lib64/libstdc++.a failed
attempt to open /lib64/libstdc++.so failed
attempt to open /lib64/libstdc++.a failed
attempt to open /usr/x86_64-redhat-linux/lib/libstdc++.so failed
attempt to open /usr/x86_64-redhat-linux/lib/libstdc++.a failed
attempt to open /usr/local/lib/libstdc++.so failed
attempt to open /usr/local/lib/libstdc++.a failed
attempt to open /lib/libstdc++.so succeeded
ld: skipping incompatible /lib/libstdc++.so when searching for -lstdc++
attempt to open /lib/libstdc++.a failed
attempt to open /usr/lib/libstdc++.so succeeded
ld: skipping incompatible /usr/lib/libstdc++.so when searching for -lstdc++
attempt to open /usr/lib/libstdc++.a failed
ld: cannot find -lstdc++

CentOS 6.5:

$ gcc --version
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)

CentOS 7.4:

$ gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)

Please help me fix this. I'd also appreciate advice on how to investigate such issues.
Try installing:

libstdc++-static
glibc-static

Starting with Red Hat 7/CentOS 7, the static libraries were moved to separate, optional packages. In CentOS 6 they were part of libstdc++-devel and glibc-devel.
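Before and after installing those packages, you can ask gcc itself whether it can see the static archives (a quick check that works on any distro):

```shell
# ask the toolchain where it would find the static archives; it prints just
# the bare name if nothing is found, and a full path once the package is there
gcc --print-file-name=libstdc++.a
gcc --print-file-name=libc.a
```

On CentOS 7 the packages themselves would come from something like `sudo yum install libstdc++-static glibc-static`.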
How to investigate and fix missing libraries and/or skipping incompatible library?
1,305,731,391,000
Working with Fedora 35: I want to run a few different software packages that share a dependency, seemingly Qt. In the shell, I get this response from Cadence and other software:

ImportError: /lib64/libQt5Core.so.5: version `Qt_5_PRIVATE_API' not found (required by /usr/local/lib/python3.10/site-packages/PyQt5-5.15.6-py3.10-linux-x86_64.egg/PyQt5/QtCore.abi3.so)

ldconfig -p | grep "libQt5Core.so.5" gets me

libQt5Core.so.5 (libc6,x86-64, OS ABI: Linux 3.17.0) => /lib64/libQt5Core.so.5

If I remove /lib64/libQt5Core.so.5 I get

ImportError: libQt5Core.so.5: cannot open shared object file: No such file or directory

Reinstalling python3-pyqt5-sip or other Qt, lib or Python dependencies does not seem to help. So libQt5Core.so.5 is found by the system, but it does not work, though others on Fedora 35 do not have this problem. How can I provide Python with the required Qt_5_PRIVATE_API?
Thanks to the comment by @MarkusMüller, I traced back the issue to another package that had installed PyQt at an unexpected place. The solution was to remove the other package and its dependencies. Then reinstalling Cadence worked and it ran.
ImportError /usr/lib64/libQt5Core.so.5 - in several software packages
1,305,731,391,000
I just wrote some basic functions in asm that I compile into a shared library. For example:

BITS 64
global foo
section .text
foo:
    mov rax, 1
    ret

I compiled with:

nasm -f elf64 foo.S -o foo.o && gcc -shared foo.o -o libfoo.so

I have a test main:

#include <stdio.h>

int foo();

int main()
{
    printf("%d\n", foo());
    return (0);
}

If I link against foo.o directly, everything works well. But if I compile like this:

gcc main.c -L. -lfoo

I get this error:

/usr/.../bin/ld: warning: type and size of dynamic symbol `foo' are not defined

I thought it was because the prototype was not declared, but I recompiled foo.o with a lib.h file containing the prototype, and the same problem occurs. Do I need to fill in another section of the ELF file? Thank you.
You need to specify that the foo symbol corresponds to a function:

[BITS 64]
global foo:function

section .text
foo:
    mov rax, 1
    ret
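To confirm the fix, rebuild and inspect the dynamic symbol table; with `global foo:function`, readelf should now report foo as FUNC instead of NOTYPE (a sketch; requires nasm and binutils):

```shell
nasm -f elf64 foo.S -o foo.o
gcc -shared foo.o -o libfoo.so
# the symbol type column should now read FUNC rather than NOTYPE
readelf --dyn-syms libfoo.so | grep ' foo$'
```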
Compile shared library from asm code with current sources
1,305,731,391,000
I am trying to understand how to properly setup gcc to find stuff in my environmental variables. Currently I compiled some code, SDL and I added it to my .bashrc and sourced that .bashrc as well. Here's a simple hello program. #include "SDL.h" #include "SDL_ttf.h" #include "SDL_image.h" #include "SDL_mixer.h" #include <stdlib.h> #include <stdio.h> SDL_Window* window; SDL_GLContext* main_context; int main(int argc, char** argv) { printf("hello world %d %c \n", argc, argv[0][argc]); if (SDL_Init(SDL_INIT_EVERYTHING) != 0) { SDL_Log("sdl failed to core_engine_init, %s", SDL_GetError()); SDL_Quit(); return -1; } SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3); SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2); SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE); SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1); SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24); window = SDL_CreateWindow( "title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 300, 300, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE); if (NULL == window) { SDL_Log("SDL Failed to create window, %s", SDL_GetError()); SDL_Quit(); return -1; } main_context = SDL_GL_CreateContext(window); if (NULL == main_context) { SDL_Log("SDL failed to create main context, %s", SDL_GetError()); SDL_Quit(); return -1; } return 0; } Trying to compile this with gcc -o main main.c I get these errors: blubee$ gcc -o main main.c /tmp/cc5hRcaO.o: In function `main': main.c:(.text+0x3e): undefined reference to `SDL_Init' main.c:(.text+0x47): undefined reference to `SDL_GetError' main.c:(.text+0x59): undefined reference to `SDL_Log' main.c:(.text+0x5e): undefined reference to `SDL_Quit' main.c:(.text+0x77): undefined reference to `SDL_GL_SetAttribute' main.c:(.text+0x86): undefined reference to `SDL_GL_SetAttribute' main.c:(.text+0x95): undefined reference to `SDL_GL_SetAttribute' main.c:(.text+0xa4): undefined reference to `SDL_GL_SetAttribute' main.c:(.text+0xb3): undefined reference to 
`SDL_GL_SetAttribute' main.c:(.text+0xd8): undefined reference to `SDL_CreateWindow' main.c:(.text+0xf0): undefined reference to `SDL_GetError' main.c:(.text+0x102): undefined reference to `SDL_Log' main.c:(.text+0x107): undefined reference to `SDL_Quit' main.c:(.text+0x11d): undefined reference to `SDL_GL_CreateContext' main.c:(.text+0x135): undefined reference to `SDL_GetError' main.c:(.text+0x147): undefined reference to `SDL_Log' main.c:(.text+0x14c): undefined reference to `SDL_Quit' collect2: error: ld returned 1 exit status adding the SDL2 linker flag this returns an error still: blubee$ gcc -lSDL2 -o main main.c /usr/bin/ld: cannot find -lSDL2 collect2: error: ld returned 1 exit status this command compiles everything fine though blubee$ gcc -I/opt/SDL2/include/SDL2 main.c -o main -L/opt/SDL2/lib -l SDL2 the thing is that I've added these paths to my .bashrc although I might have done it incorrectly. Here is my bashrc export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/SDL2/lib export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/SDL_IMAGE/lib export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/SDL_TTF/lib export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/SDL_MIXER/lib export LD_RUN_PATH=$LD_RUN_PATH:/opt/SDL2/lib export LD_RUN_PATH=$LD_RUN_PATH:/opt/SDL_IMAGE/lib export LD_RUN_PATH=$LD_RUN_PATH:/opt/SDL_TTF/lib export LD_RUN_PATH=$LD_RUN_PATH:/opt/SDL_MIXER/lib export C_INCLUDE_PATH=$C_INCLUDE_PATH:/opt/SDL2/include/SDL2 export C_INCLUDE_PATH=$C_INCLUDE_PATH:/opt/SDL_IMAGE/include/SDL2 export C_INCLUDE_PATH=$C_INCLUDE_PATH:/opt/SDL_TTF/include/SDL2 export C_INCLUDE_PATH=$C_INCLUDE_PATH:/opt/SDL_MIXER/include/SDL2 echoing these environmental variables show that they are there and should be working but it's not. What am I doing wrong with this setup?
You don't need any environment variables, just pass in the right cflags and ldflags that SDL2 wants you to use:

gcc main.c `pkg-config --cflags sdl2` -o main `pkg-config --libs sdl2`

or, equivalently:

gcc main.c `sdl2-config --cflags` -o main `sdl2-config --libs`

Remember: CFLAGS come before LDFLAGS, and LDFLAGS (and library specification with -l) come last.

SDL2 comes with an sdl2-config script preinstalled. You will need to add the directory where it resides to your PATH to call it successfully:

export PATH=/opt/SDL2/bin:$PATH

If you run any of the *-config commands directly, you will see that they just output the right cflags and ldflags for you. Libraries provide these scripts because they are usually too big to describe with a single -I/-L argument, and hard-coding individual -I/-L arguments is not portable, since the number of such arguments can grow in the future.

Also, you should not install every package in its own directory. Install everything into /usr/local, for example; then you will not even need to specify anything (most distros point the toolchain at /usr/local automatically).
Using gcc compile flags
1,305,731,391,000
Typically on Debian when you install things from the repository, they just work. It sets up things just fine and life is good. This is great for things that are up to date in the repository. I am building some tools that I would like to manually update from github or mercurial. using cmake or the configure script to build the code is fine, I also add my own prefix path so that I can easily remove or update the packages if need be. I just build SDL2 from mercurial and installed it into /opt/SDL2 and added that to my path. I had to do that to be able to build SDL_image which gave me this output after finishing it's process. Libraries have been installed in: /opt/SDL_IMAGE/lib If you ever happen to want to link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the `-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the `LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the `LD_RUN_PATH' environment variable during linking - use the `-Wl,-rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to `/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. This output above says a lot and I am not really sure how to parse it. In the past I used a mac which simplified a lot of this stuff but on linux I am having some trouble. My understanding from reading that above code is that I should add something like this to my bashrc file. export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/SDL_IMAGE/lib export LD_RUN_PATH=$LD_RUN_PATH:/opt/SDL_IMAGE/lib to my bashrc, so that when I am linking against sdl image headers it'll find it? I've skimmed the man pages for ld but honestly I don't get it and that's why I am asking. Especially this line: use the `-Wl,-rpath -Wl,LIBDIR' linker flag
Xcode and Fink|Homebrew|MacPorts on Mac OS X have these complications (they just largely hide it from you). There are two aspects to this problem, compiling, and running. Compiling will require a variety of details for any library installed to a custom path. This info for some libraries can be provided by pkg-config, e.g. for a little software depot I maintain under my home directory: $ ls ~/usr/rhel6-x86_64/lib/pkgconfig/ goptfoo.pc jkiss.pc libsodium.pc $ echo $PKG_CONFIG_PATH /homes/jdoe/usr/rhel6-x86_64/lib/pkgconfig $ pkg-config --libs --cflags libsodium -I/homes/jdoe/usr/rhel6-x86_64/include -L/homes/jdoe/usr/rhel6-x86_64/lib -lsodium $ These magic strings must be fed into the compile process for any software that is being built against libraries in your custom install tree. Details will vary depending on whether Makefile or autotools or cmake or so forth. One easy way is to set CFLAGS to contain the pkg-config output, or just include the output on the build line: mkpwhash: mkpwhash.c gcc -std=gnu99 `pkg-config --cflags --libs libsodium` -lcrypt -Werror -Wall -Wextra -Wundef -ftrapv -fstack-protector-all -pedantic -pipe -o mkpwhash mkpwhash.c For autotools or cmake, you'll need to dig around to see how they attach this particular onion to their belt, e.g. study existing configure.ac configurations from packages that use autotools, etc. For running something that has been compiled to use a shared library from the custom path, setting LD_LIBRARY_PATH probably will suffice (or, system-wide, fiddle with ld.so.conf): $ unset LD_LIBRARY_PATH $ ldd ~/usr/rhel6-x86_64/bin/mkpwhash | grep sodium libsodium.so.13 => not found $ exec $SHELL $ echo $LD_LIBRARY_PATH /homes/jdoe/usr/rhel6-x86_64/lib $ ldd ~/usr/rhel6-x86_64/bin/mkpwhash | grep sodium libsodium.so.13 => /homes/jdoe/usr/rhel6-x86_64/lib/libsodium.so.13 (0x00007e5c12ca7000) $ (This being unix, there are several ways to exfoliate the Bos grunniens, hence the "at least one of..." 
advice from your build process output. More complicated software depots will likely use stow or similar, depending on how much rope (and, thus, headaches) you want to give yourself.)
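Regarding the `-Wl,-rpath -Wl,LIBDIR` line from the build output: that flag embeds the library directory into the binary itself at link time, so neither LD_LIBRARY_PATH nor LD_RUN_PATH is needed when running it. A sketch using the SDL_image prefix from above (the source file name is illustrative):

```shell
# bake /opt/SDL_IMAGE/lib into the executable's run-time search path
gcc main.c -I/opt/SDL_IMAGE/include/SDL2 \
    -L/opt/SDL_IMAGE/lib -lSDL2_image \
    -Wl,-rpath,/opt/SDL_IMAGE/lib -o main

# confirm the path was recorded; shows RPATH or RUNPATH depending on
# whether your binutils defaults to --enable-new-dtags
readelf -d main | grep -Ei 'r(un)?path'
```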
building code from source and adding them to your path
1,672,869,171,000
It's observed that when running a program having setuid bit set, it won't receive some environment variables set on the shell (bash etc.). Several environment variables which get removed this way are LD_PRELOAD, LD_LIBRARY_PATH, LD_ORIGIN_PATH, LD_DEBUG_OUTPUT, LD_PROFILE, LD_USE_LOAD_BIAS, GCONV_PATH. As mentioned in here and in this question, this is the intended behavior. Reason for this is to reduce attack vector. Manual page of ld.so (8) also states this. The question is, which component of a Linux OS removes environment variables like this? Is it the shell? Is it a function like fork() or execve() etc. a shell calls internally when executing a command? Is it the ld.so? Note: any answer will be helpful. However, if you can direct me to which resources contain information on this matter, like which manual pages should I read, it would be more helpful.
Most of these variables are intended for the dynamic linker, or other components of the C library, and it’s the dynamic linker which takes care of removing them when starting setuid binaries. This is documented in the “ENVIRONMENT” section of man ld.so (for the GNU C library): For security reasons, if the dynamic linker determines that a binary should be run in secure-execution mode, the effects of some environment variables are voided or modified, and furthermore those environment variables are stripped from the environment, so that the program does not even see the definitions. Some of these environment variables affect the operation of the dynamic linker itself, and are described below. Other environment variables treated in this way include: GCONV_PATH, GETCONF_DIR, HOSTALIASES, LOCALDOMAIN, LOCPATH, MALLOC_TRACE, NIS_PATH, NLSPATH, RESOLV_HOST_CONF, RES_OPTIONS, TMPDIR, and TZDIR. The ld.so-affecting variables are documented individually: This variable is ignored in secure-execution mode. appears in the documentation for each such variable. The full list can also be seen in the GNU C library’s source code, as can the removal code itself, both for dynamically-linked binaries and for dynamic linking in statically-linked binaries. Other C libraries’ dynamic linkers behave in a similar fashion, for variables which they care about; for example, musl documents that This variable is completely ignored in programs invoked setuid, setgid, or with other elevated capabilities. for a number of variables (LD_PRELOAD, LD_LIBRARY_PATH, MUSL_LOCPATH), and that some features of the TZ variable aren’t available in such circumstances.
Which component of Linux removes or filters environment variables when executing a setuid program?
1,672,869,171,000
I'm trying to install KIWI on my Raspberry Pi. When I attempt a pip install kiwi I get a linking failure, with

/usr/lib64/gcc/aarch64-suse-linux/10/../../../../aarch64-suse-linux/bin/ld: cannot find -lpython3.6m
collect2: error: ld returned 1 exit status
error: command 'gcc' failed with exit status 1

So I added the relevant directories to my ld.so.conf and ran sudo ldconfig -v | grep python; the output was:

ldconfig: Can't stat /libilp32: No such file or directory
ldconfig: Path `/usr/lib' given more than once (from <builtin>:0 and /etc/ld.so.conf:4)
ldconfig: Path `/usr/lib64' given more than once (from <builtin>:0 and /etc/ld.so.conf:2)
ldconfig: Can't stat /usr/libilp32: No such file or directory
libpython3.6m.so.1.0 -> libpython3.6m.so.1.0
libpython3.8.so.1.0 -> libpython3.8.so.1.0
libpython3.so -> libpython3.so
libboost_python-py3.so.1.75.0 -> libboost_python3.so
libboost_mpi_python-py3.so.1.75.0 -> libboost_mpi_python-py3.so.1.75.0
libpytalloc-util.cpython-38-aarch64-linux-gnu.so.2 -> libpytalloc-util.cpython-38-aarch64-linux-gnu.so.2.3.1
libpyldb-util.cpython-38-aarch64-linux-gnu.so.2 -> libpyldb-util.cpython-38-aarch64-linux-gnu.so.2.2.0
libpython2.7.so.1.0 -> libpython2.7.so.1.0
/usr/include/python3.8: (from /etc/ld.so.conf:6)

Note that libpython3.6m.so is in that list, which is what ld was complaining it could not find. Why is the pip install of kiwi failing on ld when ld is clearly able to find the library to link?
ldconfig doesn’t configure ld, it configures ld.so, the dynamic linker/loader. ld is failing here because it’s looking for libpython3.6m.so; to provide that, you should install the relevant development package (presumably python3-devel).
'LD' can't find library to link, even though 'ldconfig -v' lists the file
1,672,869,171,000
Let's say I have a C++ file called dummy.cpp, and I need to compile it with g++ in such a way that the source is read from stdin and g++ writes the compiled binary to stdout. If only the stdin part is needed, the following command does the trick:

$ g++ -x c++ -o dummy - < dummy.cpp

Now for the output part: as far as I know we need to use, for example, /dev/stdout (or /proc/self/fd/1) as the output parameter; however, that exits with a linker error:

$ g++ -x c++ -o /dev/stdout - < dummy.cpp
/usr/bin/ld: final link failed: Illegal seek
collect2: error: ld returned 1 exit status

If I redirect stdout to a file from the terminal, g++ -x c++ -o /dev/stdout - < dummy.cpp > dummy, it works correctly. I guess the problem is that stdout is not seekable, and when it is redirected to a file it becomes seekable. But why does ld need the output file to be seekable, and can that be circumvented somehow?
As was mentioned in the comments, this is because the linker writes the file in stages and then fills in entries in the header area with the sizes and offsets. The ELF format has these values in the early part of the file to make finding the appropriate sections easy and efficient, and as a result, it's natural for the linker to work the way it does. Formats which are designed for streaming, like Zip files, tend to have much more complexity due to putting manifest data at the end. While it is theoretically possible to implement streaming output support, doing so would likely require buffering large amounts of data, computing data twice, or various other inefficient practices, and because this scenario is so rare, it probably wasn't considered worth the code complexity and potential inefficiency in the linker. You could probably use a small shell script or even a shell one-liner to implement this using a temporary file with appropriate cleanup.
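The temporary-file wrapper mentioned above could look something like this (a sketch; the script name is illustrative):

```shell
#!/bin/sh
# gxx-stdout: compile C++ from stdin, emit the linked binary on stdout
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT
# let ld seek freely in a regular file, then stream the result out
g++ -x c++ -o "$tmp" - && cat "$tmp"
```

Usage: `./gxx-stdout < dummy.cpp > dummy && chmod +x dummy`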
Linker error by g++ when compiling to stdout
1,672,869,171,000
/lib # ./ld-musl-x86_64.so.1 --list /usr/lib/libEGL.so.1
./ld-musl-x86_64.so.1 (0x7f2b06797000)
libdl.so.2 => ./ld-musl-x86_64.so.1 (0x7f2b06797000)
libm.so.6 => ./ld-musl-x86_64.so.1 (0x7f2b06797000)
libGLdispatch.so.0 => /usr/lib/libGLdispatch.so.0 (0x7f2b06000000)
libc.so.6 => ./ld-musl-x86_64.so.1 (0x7f2b06797000)
Error relocating /usr/lib/libGLdispatch.so.0: __strdup: symbol not found
Error relocating /usr/lib/libEGL.so.1: __strdup: symbol not found

I have set /etc/ld-musl-x86_64.path to include the directory where libc.so.6 is located. I even tried moving libc.so.6 to /lib, but it did not help. It still mislinks to ld-musl-x86_64.so.1 instead of the actual libc.so.6. How can I make it use the actual libc.so.6?

https://wiki.musl-libc.org/faq
In musl, the dynamic loader (ld-musl-x86_64.so.1) is the same binary as the C library. All the libraries which are shipped as part of the musl C library (libc, libpthread, librt, libm, libdl, libutil, libxnet — as you discovered) are resolved using the dynamic loader, since it’s already loaded. Tying the loader to the library in this way makes explicit the reality of other C libraries too: the dynamic loader is part of the C library, whether it’s the same binary or not. The GNU dynamic loader works with the GNU C library, and the musl dynamic loader works with the musl C library; they aren’t interchangeable in general.
musl ld maps libc.so.6 to ld-musl-x86_64.so.1
1,672,869,171,000
As suggested by Zac Anger, I copied this question over here: I have a Yocto recipe in which I copy/install some stuff to an image. After that, I want to add a line to the /etc/ld.so.conf file like this, so that the dynamic loader finds my library files:

do_install(){
    # install some stuff...
    echo /opt/myStuff/lib >> /etc/ld.so.conf
    ldconfig
}

During the build process I get the following error, which aborts the build:

...
| DEBUG: Python function extend_recipe_sysroot finished
| DEBUG: Executing shell function do_install
| /home/debian/Devel/myYocto/build/tmp/work/myTarget/myRecipe/1.0-r0/temp/run.do_install.3176: 203: cannot create /etc/ld.so.conf: Permission denied
| WARNING: exit code 2 from a shell command.
ERROR: Task (/home/debian/Devel/myYocto/poky/meta-myLayer/myRecipe/myRecipe.bb:do_install) failed with exit code '1'

Now to my question: how do I add a custom path to the dynamic loader by adding a line to, or editing, the /etc/ld.so.conf file in a Yocto recipe?
I suppose you want that addition in the /etc/ld.so.conf of your target system, but echo /opt/myStuff/lib >> /etc/ld.so.conf would change that file on your build host. Fortunately, this gives an error. Your target rootfs is $D, so your file would be under $D/etc/ld.so.conf; more generally, the file doesn't have to live literally in /etc, so you would use ${D}${sysconfdir}/ld.so.conf. But then you would run into the problem that you can't do that in do_install(), because different recipes would generate separate ld.so.conf files, leading to conflicts. Thus, it's better to work with ld.so.conf.d:

install -d ${D}${sysconfdir}/ld.so.conf.d/
echo /opt/myStuff/lib >> ${D}${sysconfdir}/ld.so.conf.d/myStuff.conf

Or, even better, put that file in your recipe and do

install -m 0755 ${WORKDIR}/myStuff.conf ${D}${sysconfdir}/ld.so.conf.d/

Also, don't run ldconfig on your host. Some Yocto magic will update your library cache anyhow.
How do I edit '/etc/ld.so.conf' in a yocto recipe?
1,672,869,171,000
I am attempting to install rejoystick, and when I run make, I get this: Making all in src make[1]: Entering directory '/home/chrx/Downloads/joystick/rejoystick-0.8.1/src' make[2]: Entering directory '/home/chrx/Downloads/joystick/rejoystick-0.8.1/src' /bin/bash ../libtool --tag=CC --mode=link gcc -g -O2 -std=iso9899:1990 -Wall -pedantic -I../include -O2 -s -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/SDL -D_GNU_SOURCE=1 -D_REENTRANT -pthread -I/usr/include/gtk-2.0 -I/usr/lib/x86_64-linux-gnu/gtk-2.0/include -I/usr/include/gio-unix-2.0/ -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/libpng12 -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/libpng12 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -o rejoystick assign_button.o backend.o button_axis.o error.o io.o js_axis.o js_button.o list.o main.o sdl_misc.o -lXtst -lgthread-2.0 -pthread -lglib-2.0 -L/usr/lib/x86_64-linux-gnu -lSDL -lgtk-x11-2.0 -lgdk-x11-2.0 -lpangocairo-1.0 -latk-1.0 -lcairo -lgdk_pixbuf-2.0 -lgio-2.0 -lpangoft2-1.0 -lpango-1.0 -lgobject-2.0 -lfontconfig -lfreetype -lglib-2.0 gcc -g -O2 -std=iso9899:1990 -Wall -pedantic -I../include -O2 -s -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/SDL -D_GNU_SOURCE=1 -D_REENTRANT -pthread -I/usr/include/gtk-2.0 -I/usr/lib/x86_64-linux-gnu/gtk-2.0/include -I/usr/include/gio-unix-2.0/ -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/libpng12 -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/libpng12 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -o rejoystick assign_button.o 
backend.o button_axis.o error.o io.o js_axis.o js_button.o list.o main.o sdl_misc.o -pthread -lXtst -lgthread-2.0 -L/usr/lib/x86_64-linux-gnu -lSDL -lgtk-x11-2.0 -lgdk-x11-2.0 -lpangocairo-1.0 -latk-1.0 -lcairo -lgdk_pixbuf-2.0 -lgio-2.0 -lpangoft2-1.0 -lpango-1.0 -lgobject-2.0 -lfontconfig /usr/lib/x86_64-linux-gnu/libfreetype.so -lglib-2.0 -Wl,--rpath -Wl,/usr/lib/x86_64-linux-gnu -Wl,--rpath -Wl,/usr/lib/x86_64-linux-gnu /usr/bin/ld: io.o: undefined reference to symbol 'XKeycodeToKeysym' /usr/lib/x86_64-linux-gnu/libX11.so.6: error adding symbols: DSO missing from command line collect2: error: ld returned 1 exit status Makefile:277: recipe for target 'rejoystick' failed make[2]: *** [rejoystick] Error 1 make[2]: Leaving directory '/home/chrx/Downloads/joystick/rejoystick-0.8.1/src' Makefile:335: recipe for target 'all-recursive' failed make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory '/home/chrx/Downloads/joystick/rejoystick-0.8.1/src' Makefile:248: recipe for target 'all-recursive' failed make: *** [all-recursive] Error 1 What can I do to fix this? (I'm guessing the error is error adding symbols: DSO missing from command line)
The build is missing -lX11; to work around that, run ./configure LIBS=-lX11 && make
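The underlying cause of "DSO missing from command line": modern ld defaults to --no-copy-dt-needed-entries, so any symbol your objects use directly must come from a library named explicitly on the link line, not one pulled in only transitively (here, libX11 was a dependency of libXtst but never named). A self-contained sketch of the same failure and fix (all file names are illustrative):

```shell
# libinner.so provides f(); libouter.so calls f() and so depends on libinner
tmp=$(mktemp -d) && cd "$tmp"
echo 'int f(void){return 42;}' > inner.c
gcc -shared -fPIC inner.c -o libinner.so
echo 'int f(void); int g(void){return f();}' > outer.c
gcc -shared -fPIC outer.c -o libouter.so -L. -linner -Wl,-rpath,"$PWD"

# main calls f() directly but names only libouter on the link line; on
# modern toolchains this fails with "DSO missing from command line"
echo 'int f(void); int g(void); int main(){return f()+g()-84;}' > main.c
gcc main.c -L. -louter -Wl,-rpath,"$PWD" -o main || echo "link failed as expected"

# fix: explicitly name the library that actually provides f()
gcc main.c -L. -louter -linner -Wl,-rpath,"$PWD" -o main && ./main
```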
Make error: DSO missing from command line
1,672,869,171,000
I am testing how dynamic linking works with RUNPATH variable, and trying to run bash in a minimal chroot directory: $ find dir_chroot/ -type f dir_chroot/bin/bash dir_chroot/lib/x86_64-linux-gnu/libc.so.6 dir_chroot/lib/x86_64-linux-gnu/libdl.so.2 dir_chroot/lib/x86_64-linux-gnu/libtinfo.so.5 dir_chroot/lib64/ld-linux-x86-64.so.2 -- these are all dependencies of bash, and they are actual binaries (find -type f), not symbolic links. Also they don't have RUNPATH: $ find dir_chroot/ -type f -exec sh -c "readelf -d {} | grep RUNPATH" \; $ chroot works fine with this directory: $ sudo chroot dir_chroot /bin/bash bash-4.3# exit exit However, if I copy everything and set RUNPATH to $ORIGIN/ in lib64/ld-linux-x86-64.so.2 I get exit code 139 (segfault?) when running chroot: $ cp -R dir_chroot dir_chroot4 $ find dir_chroot4/ -type f -exec sh -c "echo {} `readelf -d {} | grep RUNPATH`" \; dir_chroot4/bin/bash dir_chroot4/lib/x86_64-linux-gnu/libc.so.6 dir_chroot4/lib/x86_64-linux-gnu/libdl.so.2 dir_chroot4/lib/x86_64-linux-gnu/libtinfo.so.5 dir_chroot4/lib64/ld-linux-x86-64.so.2 $ $ patchelf --set-rpath "\$ORIGIN/" dir_chroot4/lib64/ld-linux-x86-64.so.2 $ find dir_chroot4/ -type f -exec sh -c "echo {} `readelf -d {} | grep RUNPATH`" \; dir_chroot4/bin/bash dir_chroot4/lib/x86_64-linux-gnu/libc.so.6 dir_chroot4/lib/x86_64-linux-gnu/libdl.so.2 dir_chroot4/lib/x86_64-linux-gnu/libtinfo.so.5 dir_chroot4/lib64/ld-linux-x86-64.so.2 0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN/] $ $ sudo chroot dir_chroot4 /bin/bash $ $ echo $status 139 -- $status is the status variable in fish shell. It happens only if ld-linux-x86-64.so.2 is patched, other libraries and bash executable work ok with RUNPATH. Why is it so?
Apparently, ld-linux-x86-64.so.2 is statically linked, at least it is on my system: >ldd ld-linux-x86-64.so.2 statically linked unlike libc.so.6,libdl.so.2 and libtinfo.so.5 >ldd libc.so.6 libdl.so.2 libtinfo.so.5 libc.so.6: /lib64/ld-linux-x86-64.so.2 (0x000056469847a000) linux-vdso.so.1 => (0x00007ffe95185000) libdl.so.2: linux-vdso.so.1 => (0x00007fffc4718000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa1df136000) /lib64/ld-linux-x86-64.so.2 (0x0000558334a9c000) libtinfo.so.5: linux-vdso.so.1 => (0x00007ffe1b7bd000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffa990b9000) /lib64/ld-linux-x86-64.so.2 (0x00005590bfced000) which makes the loader go mad and results in a segfault, when you forcibly inject RUNPATH into it.
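You can check for this up front: a file with zero NEEDED entries has no dynamic dependencies (the loader is such a file), so a RUNPATH is meaningless for it, and as seen above, injecting one breaks it. The paths below are typical for an x86-64 glibc system:

```shell
# the loader: no NEEDED entries at all
readelf -d /lib64/ld-linux-x86-64.so.2 | grep -c NEEDED

# an ordinary shared library lists the dependencies a RUNPATH helps resolve
readelf -d /lib/x86_64-linux-gnu/libc.so.6 | grep NEEDED
```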
Cannot chroot bash after setting RUNPATH in ld-linux-x86-64.so.2 with patchelf 0.6 and 0.8
1,672,869,171,000
I am attempting to get madplay installed on my shared host. I've run:

./configure --prefix=$HOME CPPFLAGS="-I /home/dir/include" LDFLAGS="-L /home/dir/lib"

and then make, but that runs into an error I can't understand:

/home/dir/lib: file not recognized: Is a directory
collect2: ld returned 1 exit status
make[2]: *** [madplay] Error 1
make[2]: Leaving directory `/home/dir/madplay-0.15.2b'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/dir/madplay-0.15.2b'
make: *** [all] Error 2

Please point out to me why it is looking for a directory instead of a file. This is preventing me from installing this software.
You should leave out the space between -L and /home/dir/lib in the LDFLAGS setting. As it is, the compiler assumes that -L has no argument and treats /home/dir/lib as an input file. You should probably also remove the space after the -I option, as described in the GCC documentation on directory-search options.
Failed make when installing madplay source
1,672,869,171,000
I try to get a more up to date version of bash from LinuxMint. I have a chroot with Debian Sid in my box. What I try to do in a bash wrapper script, early in my PATH #!/bin/bash LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux-gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/bin/bash "$@" But I get: /home/mevatlave/bin/bash: line 3: 1492488 Segmentation fault (core dumped) LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux-gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/bin/bash "$@" From the chroot: % ldd /bin/bash linux-vdso.so.1 (0x00007fff237fc000) libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f94de839000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f94de658000) /lib64/ld-linux-x86-64.so.2 (0x00007f94de9af000) Is it feasible? EDIT: With LD_LIBRARY_PATH=/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /path/to/chroot/bin/bash "$@" I get /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.36' not found With LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux-gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /path/to/chroot/bin/bash "$@" I get: Segmentation fault (core dumped) LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux- gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib: /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /path/to/chroot/bin/bash "$@" EDIT2: I can run this one: #!/bin/bash LANG=C 
LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux-gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /path/to/chroot/bin/bash "$@" But when I run bash --version, I get: Segmentation fault (core dumped) root@debian-sid_chroot:/# dpkg -l | grep libc6 ii libc6:amd64 2.36-8 amd64 GNU C Library: Shared libraries ii libc6-dev:amd64 2.36-8 amd64 GNU C Library: Development Libraries and Header Files
OK, found a workaround that saves me from compiling bash and having to maintain it myself in the future: from the chroot

# apt install bash-static

Then, my upgrade script on Linux Mint:

#!/bin/bash
if mount | grep -q "/home/sid-chroot"; then
    chroot /home/sid-chroot <<< 'apt-get -yy update; apt-get -yy upgrade'
else
    debian-sid <<< 'apt-get -y update; apt-get -y upgrade'
fi
apt update
apt-get install zsh
apt-get -y upgrade
zsh <<EOF
mv /bin/bash /bin/bash.origin
mv /usr/bin/bash /usr/bin/bash.origin &>/dev/null
cp -a /home/sid-chroot/bin/bash-static /bin/bash
cp -a /home/sid-chroot/bin/bash-static /usr/bin/bash
EOF
Hacking LD_LIBRARY_PATH to use a recent bash from a chroot
1,672,869,171,000
I'm trying to install mopidy-spotify on my freebox delta that allow me to install vm and is arm64 based After many problems, i've manage to get most of the dependencies working and to get rid of most of the errors. But am still struggling on libspotify when trying to compile pyspotify. I've compiled successfully (i think) libspotify on my system using the sources from that link but I'm always getting here are the log output: Obtaining file:///home/jc/pyspotify Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing wheel metadata ... done Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from pyspotify==2.1.3) (45.2.0) Requirement already satisfied: cffi>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from pyspotify==2.1.3) (1.14.0) Requirement already satisfied: pycparser in /usr/local/lib/python3.8/dist-packages (from cffi>=1.0.0->pyspotify==2.1.3) (2.20) Installing collected packages: pyspotify Running setup.py develop for pyspotify ERROR: Command errored out with exit status 1: command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/jc/pyspotify/setup.py'"'"'; __file__='"'"'/home/jc/pyspotify/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps cwd: /home/jc/pyspotify/ Complete output (22 lines): running develop running egg_info writing pyspotify.egg-info/PKG-INFO writing dependency_links to pyspotify.egg-info/dependency_links.txt writing requirements to pyspotify.egg-info/requires.txt writing top-level names to pyspotify.egg-info/top_level.txt reading manifest file 'pyspotify.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' no previously-included directories found matching 'docs/_build' no previously-included directories found matching 'examples/tmp' warning: no 
previously-included files matching '__pycache__/*' found anywhere in distribution writing manifest file 'pyspotify.egg-info/SOURCES.txt' running build_ext generating cffi module 'build/temp.linux-aarch64-3.8/spotify._spotify.c' already up-to-date building 'spotify._spotify' extension aarch64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.8 -c build/temp.linux-aarch64-3.8/spotify._spotify.c -o build/temp.linux-aarch64-3.8/build/temp.linux-aarch64-3.8/spotify._spotify.o aarch64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-aarch64-3.8/build/temp.linux-aarch64-3.8/spotify._spotify.o -lspotify -o build/lib.linux-aarch64-3.8/spotify/_spotify.abi3.so /usr/bin/ld: skipping incompatible /usr/local/lib/libspotify.so when searching for -lspotify /usr/bin/ld: cannot find -lspotify collect2: error: ld returned 1 exit status error: command 'aarch64-linux-gnu-gcc' failed with exit status 1 please be free to ask for more informations Any clues on that?
You're most likely trying to mix 32 bit and 64 bit libraries. 32 bit applications must be linked against 32 bit libraries whereas 64 bit applications must be linked against 64 bit libraries. You can run file /usr/local/lib/libspotify.so to check if your library has been compiled for 32 bit or 64 bit. You can instruct a GCC running on a 64 bit system to compile 32 bit code by setting the following environment variables: CFLAGS=-m32 CXXFLAGS=-m32 make Also see /usr/bin/ld: skipping incompatible foo.so when searching for foo.
pyspotify compilation ld error
1,672,869,171,000
CSAPP says Linux systems provide a simple interface to the dynamic linker that allows application programs to load and link shared libraries at run time. #include <dlfcn.h> void *dlopen(const char *filename, int flag); Returns: pointer to handle if OK, NULL on error Does dlopen() perform dynamic linking by invoking the dynamic linker ld-linux.so? In other words, is ld-linux.so the dynamic linker which dlopen() invokes to perform dynamic linking? Thanks.
dlopen is provided by libdl, but behind the scenes, with the GNU C library implementation at least, the latter relies on symbols provided by ld-linux.so to perform the dynamic linking. If dlopen is called from a dynamically-linked program, ld-linux.so is already loaded, so it uses those symbols directly; if it’s called from a statically-linked program, it tries to load ld-linux.so.
Does `dlopen()` perform dynamic linking by invoking the dynamic linker `ld-linux.so`?
1,672,869,171,000
This question is in continuation of How does compiler lay out code in memory, which is posted at stack-overflow. I have few questions with respect to ld (GNU) utility available in Linux. Whenever a program is run in the shell, say ./a.out, the shell uses ld to load the program represented by a.out. How does the shell know it has to use ld to load a.out. Does it scan the a.out to check if it is in the ELF format and if so, uses ld? It certainly can't use the file name extension, since there is no rule to name executable's in a certain format. Can ld utility load programs represented in any other executable formats other than ELF? Suppose I come up with my own executable format, say "xyz" and I write my own loader abc which handles such executables. Then, is there any shell command to configure: "use loader abc to load program compiled in a particular executable format "xyz"?
The shell doesn’t know, the kernel does. See What types of executable files exist on Linux? and the linked articles for details. The kernel loader loads the binary, and if necessary, its interpreter (which is ld.so for dynamic binaries). Each implementation of ld.so is format-specific. Yes, either by adding a binary loader to the kernel, or by using binfmt_misc. See How is Mono magical? for details.
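For the binfmt_misc route, registration is a single write to a control file. This is a hypothetical sketch — the format name xyz, the magic bytes XYZ, and the loader path /usr/local/bin/abc are all invented for illustration, and the write requires root:

```shell
# field layout: :name:type:offset:magic:mask:interpreter:flags
echo ':xyz:M::XYZ::/usr/local/bin/abc:' > /proc/sys/fs/binfmt_misc/register
# the kernel now hands any file starting with the bytes "XYZ" to the loader;
# the registration can be inspected (or removed) via the same directory:
cat /proc/sys/fs/binfmt_misc/xyz
```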
Is loader for a particular "executable format" configurable in Linux?
1,672,869,171,000
I am running cmake and it is passing a flag to my linker that is unrecognized (-rdynamic), and it's causing an error. I cannot figure out where it is getting this flag from, so I want to just filter it out. I can specify -DCMAKE_LINKER=<linker>, so what I would like to do is set <linker> to a program that reads its command line arguments, filters out the bad one, and then passes the result back to the actual linker. I have been using awk '{gsub("-rdynamic", "");print}', but I don't know to make the input stdin and the output ld.
This bash script loops through its arguments, ignoring those matching the string "-rdynamic", and adding any others to an array. Once it runs out of arguments, it executes ld with the filtered list. #!/bin/bash declare -a finalopts finalopts=() for o in "$@"; do if [ "$o" = "-rdynamic" ] ; then continue fi #add all other options to the list finalopts+=("$o") done exec ld "${finalopts[@]}"
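The filtering logic can be sanity-checked without linking anything by printing the final argument list instead of exec'ing ld — a dry-run sketch of the same loop (bash, since it relies on arrays):

```shell
# same loop as the wrapper above, but print what ld would receive
filter_args() {
    local -a finalopts=()
    for o in "$@"; do
        [ "$o" = "-rdynamic" ] && continue
        finalopts+=("$o")
    done
    printf '%s\n' "${finalopts[@]}"
}

filter_args -o prog -rdynamic main.o   # prints -o, prog, main.o (one per line)
```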
Filter out command line options before passing to a program
1,672,869,171,000
I have an executable but when I run it I get "No such file or directory" $ chmod a+x bin $ file bin bin: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld64.so.1, not stripped $ ./bin bash: ./bin: No such file or directory Notice that this executable is in fact ELF 64-bit, as is the operating system.
This is because I forgot to include the -dynamic-linker options in the call to ld -dynamic-linker /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 Calling it as such, ld -m elf_x86_64 -dynamic-linker /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 -o bin makes it work fine. For more information from a similar problem with 32-bit/64-bit mismatch see this question
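The misleading "No such file or directory" refers to the interpreter path recorded inside the binary (here /lib/ld64.so.1, per the file output in the question, which evidently did not exist on the system), not to the binary itself. The recorded path lives in the ELF program headers and can be inspected directly (shown on /bin/sh; assumes binutils' readelf is installed):

```shell
# PT_INTERP is the dynamic linker the kernel will invoke for this binary:
readelf -l /bin/sh | grep -i interpreter
# readelf -l ./bin | grep -i interpreter   # the binary from the question
```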
Running a custom-compiled executable returns "No such file or directory"
1,672,869,171,000
I'm using Archlinux. After a recent update, I find that the gdbus doesn't work and it presents a symbol lookup error: ➜ tidedra@ZgrArch ~ gdbus gdbus: symbol lookup error: /usr/lib/libgobject-2.0.so.0: undefined symbol: g_string_free_and_steal Then I thought it was probably a problem about the version of library, so I checked the linked library of related files: ➜ tidedra@ZgrArch ~ ldd /usr/bin/gdbus linux-vdso.so.1 (0x00007ffd17dd7000) libgio-2.0.so.0 => /usr/lib/libgio-2.0.so.0 (0x00007f6b395eb000) libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0x00007f6b394a0000) libgobject-2.0.so.0 => /usr/lib/libgobject-2.0.so.0 (0x00007f6b3943f000) libc.so.6 => /usr/lib/libc.so.6 (0x00007f6b39258000) libgmodule-2.0.so.0 => /usr/lib/libgmodule-2.0.so.0 (0x00007f6b39251000) libz.so.1 => /usr/lib/libz.so.1 (0x00007f6b39237000) libmount.so.1 => /usr/lib/libmount.so.1 (0x00007f6b391f1000) libpcre2-8.so.0 => /usr/lib/libpcre2-8.so.0 (0x00007f6b39156000) libffi.so.8 => /usr/lib/libffi.so.8 (0x00007f6b3914b000) /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f6b397f9000) libblkid.so.1 => /usr/lib/libblkid.so.1 (0x00007f6b39113000) ➜ tidedra@ZgrArch ~ ls -il /usr/lib/libgobject-2.0.so* 1195587 lrwxrwxrwx 1 root root 19 3月10日 23:18 /usr/lib/libgobject-2.0.so -> libgobject-2.0.so.0 1195588 lrwxrwxrwx 1 root root 35 3月14日 18:48 /usr/lib/libgobject-2.0.so.0 -> /usr/lib/libgobject-2.0.so.0.7600.0 1195589 -rwxr-xr-x 1 root root 391208 3月10日 23:18 /usr/lib/libgobject-2.0.so.0.7600.0 ➜ tidedra@ZgrArch ~ ls -il /usr/lib/libglib-2.0.so* 1195561 lrwxrwxrwx 1 root root 16 3月10日 23:18 /usr/lib/libglib-2.0.so -> libglib-2.0.so.0 1198392 lrwxrwxrwx 1 root root 32 3月14日 18:44 /usr/lib/libglib-2.0.so.0 -> /usr/lib/libglib-2.0.so.0.7600.0 1195573 -rwxr-xr-x 1 root root 1351064 3月10日 23:18 /usr/lib/libglib-2.0.so.0.7600.0 ➜ tidedra@ZgrArch ~ sudo updatedb ➜ tidedra@ZgrArch ~ locate libgobject /home/tidedra/.conda/pkgs/glib-2.69.1-he621ea3_2/lib/libgobject-2.0.so 
/home/tidedra/.conda/pkgs/glib-2.69.1-he621ea3_2/lib/libgobject-2.0.so.0 /home/tidedra/.conda/pkgs/glib-2.69.1-he621ea3_2/lib/libgobject-2.0.so.0.6901.0 /opt/miniconda3/lib/libgobject-2.0.so /opt/miniconda3/lib/libgobject-2.0.so.0 /opt/miniconda3/lib/libgobject-2.0.so.0.6901.0 /opt/miniconda3/pkgs/glib-2.69.1-h4ff587b_1/lib/libgobject-2.0.so /opt/miniconda3/pkgs/glib-2.69.1-h4ff587b_1/lib/libgobject-2.0.so.0 /opt/miniconda3/pkgs/glib-2.69.1-h4ff587b_1/lib/libgobject-2.0.so.0.6901.0 /opt/miniconda3/pkgs/glib-2.69.1-he621ea3_2/lib/libgobject-2.0.so /opt/miniconda3/pkgs/glib-2.69.1-he621ea3_2/lib/libgobject-2.0.so.0 /opt/miniconda3/pkgs/glib-2.69.1-he621ea3_2/lib/libgobject-2.0.so.0.6901.0 /usr/lib/libgobject-2.0.a /usr/lib/libgobject-2.0.so /usr/lib/libgobject-2.0.so.0 /usr/lib/libgobject-2.0.so.0.7600.0 /usr/lib32/libgobject-2.0.so /usr/lib32/libgobject-2.0.so.0 /usr/lib32/libgobject-2.0.so.0.7600.0 /usr/share/gdb/auto-load/usr/lib/libgobject-2.0.so.0.7600.0-gdb.py ➜ tidedra@ZgrArch ~ locate libglib-2.0 /home/tidedra/.conda/pkgs/glib-2.69.1-he621ea3_2/lib/libglib-2.0.so /home/tidedra/.conda/pkgs/glib-2.69.1-he621ea3_2/lib/libglib-2.0.so.0 /home/tidedra/.conda/pkgs/glib-2.69.1-he621ea3_2/lib/libglib-2.0.so.0.6901.0 /opt/miniconda3/lib/libglib-2.0.so /opt/miniconda3/lib/libglib-2.0.so.0 /opt/miniconda3/lib/libglib-2.0.so.0.6901.0 /opt/miniconda3/pkgs/glib-2.69.1-h4ff587b_1/lib/libglib-2.0.so /opt/miniconda3/pkgs/glib-2.69.1-h4ff587b_1/lib/libglib-2.0.so.0 /opt/miniconda3/pkgs/glib-2.69.1-h4ff587b_1/lib/libglib-2.0.so.0.6901.0 /opt/miniconda3/pkgs/glib-2.69.1-he621ea3_2/lib/libglib-2.0.so /opt/miniconda3/pkgs/glib-2.69.1-he621ea3_2/lib/libglib-2.0.so.0 /opt/miniconda3/pkgs/glib-2.69.1-he621ea3_2/lib/libglib-2.0.so.0.6901.0 /usr/lib/libglib-2.0.a /usr/lib/libglib-2.0.so /usr/lib/libglib-2.0.so.0 /usr/lib/libglib-2.0.so.0.7600.0 /usr/lib32/libglib-2.0.so /usr/lib32/libglib-2.0.so.0 /usr/lib32/libglib-2.0.so.0.7600.0 
/usr/share/gdb/auto-load/usr/lib/libglib-2.0.so.0.7600.0-gdb.py I didn't find any problem, and there is only the 7600 version, except the 6901 version in miniconda, which I think not related to this error. So what's wrong with my gdbus?
I found out what was wrong. I checked my PATH and found that /opt/miniconda/bin came before /usr/bin, which means that when I call gdbus in the terminal, it actually runs /opt/miniconda/bin/gdbus instead of /usr/bin/gdbus, and as you can see, the versions of some libraries linked by the miniconda gdbus are inconsistent with the system ones. So I removed the miniconda path from PATH (alternatively, you can configure miniconda not to activate in the terminal), and the problem was fixed.
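A generic way to catch this class of problem — shown here with sh as a stand-in, since gdbus may not be installed everywhere — is to look at the PATH search order and at which file actually wins:

```shell
# directories in PATH are tried left to right; the first match wins:
echo "$PATH" | tr ':' '\n'
# which file would actually run (substitute gdbus for sh):
command -v sh
# in bash, "type -a gdbus" lists every candidate in search order, and
# "ldd $(command -v gdbus) | grep libgobject" shows the winner's libraries
```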
gdbus symbol look up error
1,672,869,171,000
Suppose /usr/lib/x86_64-linux-gnu/ contains libfoo: libfoo.so.2 -> libfoo.so.2.0.0 (symbolic link) libfoo.so.2.0.0 Notably missing is libfoo.so. Suppose there is a program /usr/local/bin/sillyprog that compiles things using something like gcc somefile.c -lfoo. Every time I try to use sillyprog, it will fail with /usr/bin/ld: cannot find -lfoo because libfoo.so is missing. Assuming that I do not have permission to edit any files in /usr, what workarounds can I use to successfully link libfoo when running sillyprog?
One option would be to create a link with the correct name to the library in a directory you control. Then you can use the LIBRARY_PATH and LD_LIBRARY_PATH environment variables to point to this directory. These variables influence where the linker and loader look for libraries when compiling or running a program respectively. According to the GCC documentation: The value of LIBRARY_PATH is a colon-separated list of directories, much like PATH. When configured as a native compiler, GCC tries the directories thus specified when searching for special linker files, if it cannot find them using GCC_EXEC_PREFIX. Linking using GCC also uses these directories when searching for ordinary libraries for the -l option (but directories specified with -L come first). So, something like: mkdir -p ~/.local/lib ln -s /usr/lib/x86_64-linux-gnu/libfoo.so.2 ~/.local/lib/libfoo.so And then, to run a program that uses libfoo.so for compilation: LIBRARY_PATH=~/.local/lib sillyprog Or to run a program that itself is linked to libfoo.so: LD_LIBRARY_PATH=~/.local/lib sillyprog
How to link using -lfoo when there are versioned names of libfoo but no libfoo.so
1,672,869,171,000
I am trying to build SimGear from the FlightGear project using the download_an_compile.sh script (which uses CMake to build the binaries). The build went fine so far, but when the script tried linking the built object file together to a library, I get tons of //usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2: warning: undefined reference to [email protected]_2 (where ... is a different function name for each message). Now I thought I would just manually instruct CMake to link the lber library to the library being built, by adding -DCMAKE_CXX_STANDARD_LIBRARIES="-llber-2.4" to CMake's arguments. That resulted in /usr/bin/ld: -llber-2.4 could not be found Which is a riddle to me, because it is there: $ ls /usr/lib/x86_64-linux-gnu | grep lber liblber-2.4.so.2 liblber-2.4.so.2.10.8 In fact, I should not be getting the undefined reference errors, because these functions are all there: $ nm /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 $ nm -D /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 | grep ber 0000000000005fe0 T ber_alloc 0000000000005fa0 T ber_alloc_t 0000000000006d50 T ber_bprint 0000000000007ec0 T ber_bvarray_add 0000000000007df0 T ber_bvarray_add_x 0000000000007cd0 T ber_bvarray_dup_x 0000000000007cc0 T ber_bvarray_free 0000000000007c30 T ber_bvarray_free_x 0000000000007830 T ber_bvdup 0000000000007700 T ber_bvecadd 0000000000007650 T ber_bvecadd_x 0000000000007640 T ber_bvecfree 00000000000075c0 T ber_bvecfree_x 00000000000075b0 T ber_bvfree 0000000000007570 T ber_bvfree_x 0000000000007c20 T ber_bvreplace 0000000000007b80 T ber_bvreplace_x 0000000000002c70 T ber_decode_oid 0000000000006fc0 T ber_dump 0000000000006000 T ber_dup 0000000000007820 T ber_dupbv 0000000000007710 T ber_dupbv_x 0000000000004cc0 T ber_encode_oid 0000000000006ab0 T ber_errno_addr 0000000000006a30 T ber_error_print 0000000000003a80 T ber_first_element 0000000000006250 T ber_flatten 0000000000006170 T ber_flatten2 0000000000005f90 T ber_flush 0000000000005db0 T ber_flush2 0000000000005d70 T 
ber_free 0000000000005d10 T ber_free_buf 00000000000038d0 T ber_get_bitstringa 0000000000003a70 T ber_get_boolean 0000000000003150 T ber_get_enum 0000000000003080 T ber_get_int 0000000000006400 T ber_get_next 0000000000003a20 T ber_get_null 0000000000007ed0 T ber_get_option 0000000000003730 T ber_get_stringa 0000000000003810 T ber_get_stringal 00000000000037a0 T ber_get_stringa_null 0000000000003160 T ber_get_stringb 00000000000031f0 T ber_get_stringbv 0000000000003650 T ber_get_stringbv_null 0000000000002e30 T ber_get_tag 0000000000006380 T ber_init 00000000000060c0 T ber_init2 0000000000006160 T ber_init_w_nullc 000000000020d168 B ber_int_errno_fn 000000000020d178 B ber_int_log_proc 000000000020d190 B ber_int_memory_fns 000000000020d1a0 B ber_int_options 0000000000009590 T ber_int_sb_close 0000000000009610 T ber_int_sb_destroy 0000000000009500 T ber_int_sb_init 0000000000009710 T ber_int_sb_read 00000000000099e0 T ber_int_sb_write 00000000000069d0 T ber_len 0000000000006f70 T ber_log_bprint 00000000000070b0 T ber_log_dump 0000000000007120 T ber_log_sos_dump 0000000000007a50 T ber_mem2bv 0000000000007950 T ber_mem2bv_x 0000000000007460 T ber_memalloc 0000000000007400 T ber_memalloc_x 00000000000074d0 T ber_memcalloc 0000000000007470 T ber_memcalloc_x 0000000000007390 T ber_memfree 0000000000007330 T ber_memfree_x 0000000000007560 T ber_memrealloc 00000000000074e0 T ber_memrealloc_x 00000000000073f0 T ber_memvfree 00000000000073a0 T ber_memvfree_x 0000000000003b00 T ber_next_element 0000000000002e80 T ber_peek_element 0000000000002fd0 T ber_peek_tag 0000000000005370 T ber_printf 00000000000069e0 T ber_ptrlen 0000000000005080 T ber_put_berval 0000000000005100 T ber_put_bitstring 0000000000005290 T ber_put_boolean 0000000000004f30 T ber_put_enum 0000000000004f50 T ber_put_int 0000000000005220 T ber_put_null 0000000000004f70 T ber_put_ostring 0000000000005350 T ber_put_seq 0000000000005360 T ber_put_set 00000000000050b0 T ber_put_string 000000000020d170 B 
ber_pvt_err_file 0000000000006ad0 T ber_pvt_log_output 000000000020d008 D ber_pvt_log_print 0000000000006c20 T ber_pvt_log_printf 000000000020d1e0 B ber_pvt_opt_on 0000000000008f00 T ber_pvt_sb_buf_destroy 0000000000008ee0 T ber_pvt_sb_buf_init 0000000000009180 T ber_pvt_sb_copy_out 00000000000093b0 T ber_pvt_sb_do_write 0000000000008fe0 T ber_pvt_sb_grow_buffer 00000000000094c0 T ber_pvt_socket_set_nonblock 0000000000005a20 T ber_read 0000000000005ad0 T ber_realloc 0000000000006a20 T ber_remaining 00000000000062f0 T ber_reset 00000000000069f0 T ber_rewind 0000000000003ba0 T ber_scanf 00000000000080f0 T ber_set_option 00000000000059a0 T ber_skip_data 0000000000002f90 T ber_skip_element 0000000000003020 T ber_skip_tag 0000000000008d30 T ber_sockbuf_add_io 0000000000009560 T ber_sockbuf_alloc 0000000000009800 T ber_sockbuf_ctrl 00000000000096a0 T ber_sockbuf_free 000000000020d060 D ber_sockbuf_io_debug 000000000020d0a0 D ber_sockbuf_io_fd 000000000020d0e0 D ber_sockbuf_io_readahead 000000000020d120 D ber_sockbuf_io_tcp 000000000020d020 D ber_sockbuf_io_udp 0000000000008e20 T ber_sockbuf_remove_io 0000000000007130 T ber_sos_dump 00000000000069c0 T ber_start 0000000000005310 T ber_start_seq 0000000000005330 T ber_start_set 0000000000007940 T ber_str2bv 0000000000007840 T ber_str2bv_x 0000000000007ac0 T ber_strdup 0000000000007a60 T ber_strdup_x 0000000000007b70 T ber_strndup 0000000000007b10 T ber_strndup_x 0000000000007ad0 T ber_strnlen 0000000000005c00 T ber_write ldd also shows that libldap is referencing the right liblber: $ ldd /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2 | grep lber liblber-2.4.so.2 => /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 (0x00007f28c8bdc000) Does anyone have any ideas ? I don't … If I forgot any details, please just let me know, and I'll add them !
At least in Debian (and derivatives thereof), a shared library's development files are split off into a separate binary package: If there are development files associated with a shared library, the source package needs to generate a binary development package named libraryname-dev, or if you need to support multiple development versions at a time, librarynameapiversion-dev. Installing the development package must result in installation of all the development files necessary for compiling programs against that shared library. "Development files" in this context mostly means C/C++ header files, but importantly often includes a symbolic link to the shared library itself The development package should contain a symlink for the associated shared library without a version number. For example, the libgdbm-dev package should include a symlink from /usr/lib/libgdbm.so to libgdbm.so.3.0.0. This symlink is needed by the linker (ld) when compiling packages, as it will only look for libgdbm.so when compiling dynamically. In this case, although you already have the shared libraries liblber-2.4.so.2 liblber-2.4.so.2.10.8 in /usr/lib/x86_64-linux-gnu but do not appear to have the symbolic link /usr/lib/x86_64-linux-gnu/liblber.so, which is provided by the corresponding development package libldap2-dev.
Weird linking issue with libldap using cmake
1,672,869,171,000
After installing the an RPM on centos8 I found that the package manager dnf - inexplicably stopped working with a cryptic error: Traceback (most recent call last): File "/usr/lib64/python3.6/site-packages/libdnf/common_types.py", line 14, in swig_import_helper return importlib.import_module(mname) File "/usr/lib64/python3.6/importlib/init.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 658, in _load_unlocked File "<frozen importlib._bootstrap>", line 571, in module_from_spec File "<frozen importlib._bootstrap_external>", line 922, in create_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed ImportError: /lib64/libdnf.so.2: undefined symbol: sqlite3_expanded_sql During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/bin/dnf", line 57, in <module> from dnf.cli import main File "/usr/lib/python3.6/site-packages/dnf/init.py", line 30, in <module> import dnf.base File "/usr/lib/python3.6/site-packages/dnf/base.py", line 29, in <module> import libdnf.transaction File "/usr/lib64/python3.6/site-packages/libdnf/init.py", line 3, in <module> from . 
import common_types File "/usr/lib64/python3.6/site-packages/libdnf/common_types.py", line 17, in <module> _common_types = swig_import_helper() File "/usr/lib64/python3.6/site-packages/libdnf/common_types.py", line 16, in swig_import_helper return importlib.import_module('_common_types') File "/usr/lib64/python3.6/importlib/init.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named '_common_types' After a lot of head scratching I discovered the problem was because the RPM installed its own copy of libsqlite3.so to a path in /opt//lib and in the post install script added a entry for /opt//lib to ld.so.conf . For some reason dnf picks up this version rather than the system version in /usr/lib64 which it normally uses. So the question I have is why is /usr/lib64 not earlier on the search path than entries in ld.so.conf I cannot find where /usr/lib64 is configured. Is it hard-coded into LD or the kernel? Note that /usr/lib64/libdnf.so.2 is already in /usr/lib64 so why isn't /usr/lib64 searched first? My fix was to add an entry for /usr/lib64 to ld.so.conf. Is this the best approach? I think perhaps it would be better for /opt/ to be using RPATH to find the libraries and not adding anything to ld.so.conf at all. How does this fit in to: Where do executables look for shared objects at runtime?
why is /usr/lib64 not earlier on the search path than entries in ld.so.conf In the absence of any other configuration, the system library paths are the last entries on the search path. I cannot find where /usr/lib64 is configured. Is it hard-coded into LD or the kernel? It’s hard-coded in ld.so, the dynamic linker (/lib64/ld-linux-x86-64.so.2 in your case, assuming you’re on x86_64). See What is the default value of LD_LIBRARY_PATH? for details. Your fix is probably the best you can do without touching the package’s contents. As you say, a better fix would be to set the package’s binaries’ rpath, or to add wrapper shell scripts to set LD_LIBRARY_PATH when invoking the binaries.
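With glibc you can watch this order directly: the dynamic linker narrates its search when LD_DEBUG=libs is set (a glibc-specific facility), making it visible that LD_LIBRARY_PATH and ld.so.cache/ld.so.conf entries are consulted before the built-in default directories:

```shell
# watch ld.so resolve libraries for a trivial dynamically linked binary;
# the path from LD_LIBRARY_PATH is tried before the cache and default dirs:
LD_DEBUG=libs LD_LIBRARY_PATH=/tmp /bin/true 2>&1 | head -n 8
```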
dnf broken by installation - how does /usr/lib64 get on the search path and why isn't it earlier?
1,672,869,171,000
I run Fedora 30 on my laptop. Yesterday I tried to install wine using the following commands: $ sudo dnf config-manager --add-repo https://dl.winehq.org/wine-builds/fedora/30/winehq.repo $ sudo dnf -y install winehq-stable The installation seemed to work, but when I try to launch winecfg $ winecfg /opt/wine-stable/bin/wine: error while loading shared libraries: libwine.so.1: cannot create shared object descriptor: Operation not permitted or any *.exe file $ wine whatever.exe /opt/wine-stable/bin/wine: error while loading shared libraries: libwine.so.1: cannot create shared object descriptor: Operation not permitted I checked the ld libraries for the wine executable in /usr/bin: $ cd /usr/bin $ ldd wine linux-gate.so.1 (0x2a9f2000) libwine.so.1 => /usr/bin/./../lib/libwine.so.1 (0x2a836000) libpthread.so.0 => /usr/bin/./../lib/libpthread.so.0 (0x2a815000) libc.so.6 => /usr/bin/./../lib/libc.so.6 (0x2a66e000) libdl.so.2 => /lib/libdl.so.2 (0x2a63b000) /lib/ld-linux.so.2 (0x2a9f3000) Everything seems ok there. So, why do I get that "cannot create shared object descriptor: Operation not permitted" error? :(
I fixed this by running $ sudo sysctl -w vm.mmap_min_addr=0 I found this solution here: https://wiki.winehq.org/Preloader_Page_Zero_Problem
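Two caveats: sysctl -w only lasts until reboot, and vm.mmap_min_addr=0 weakens a kernel hardening measure (it allows mappings at page zero), so apply it knowingly. A sketch of making the setting persistent via a sysctl drop-in (the file name 99-wine.conf is arbitrary):

```shell
# persist the setting across reboots (requires root):
echo 'vm.mmap_min_addr = 0' | sudo tee /etc/sysctl.d/99-wine.conf
sudo sysctl --system   # re-apply all sysctl configuration now
```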
Operation not permitted - libwine.so.1
1,672,869,171,000
I have f30 installed 3 weeks and I keep seeing this error when I try to compile C++ of maybe fortran code. It is an error connected to ld : error: ld returned 126 exit status I've tried to look into it and so far I have no explanation. What I can share is that ld resides in /usr/bin which is a soft link from /etc/alternatives. [astamato@pcen35240 ~]$ ls -al /usr/bin/ld* lrwxrwxrwx. 1 root root 20 Apr 26 04:27 /usr/bin/ld -> /etc/alternatives/ld -rwxr-xr-x. 1 root root 13536 Aug 11 11:27 /usr/bin/ld.bfd -rwxr-xr-x. 1 root root 5441 Jun 6 13:55 /usr/bin/ldd -rwxr-xr-x. 1 root root 3853632 Mar 6 11:00 /usr/bin/ld.gold When I try to execute ld by it self (so not having it called from another program or installation script), I get the following [astamato@pcen35240 talys]$ /usr/bin/ld bash: /usr/bin/ld: cannot execute binary file: Exec format error [astamato@pcen35240 talys]$ sudo /usr/bin/ld /usr/bin/ld: /usr/bin/ld: cannot execute binary file Then I searched the original /etc/alternatives location, but it's again a soft link [astamato@pcen35240 talys]$ ls -al /etc/alternatives/ld* lrwxrwxrwx. 1 root root 15 Apr 26 04:27 /etc/alternatives/ld -> /usr/bin/ld.bfd Surprisingly enough, the link is to a ld.bfd file which is in /usr/bin. I tried to find the version of ld.bfd but it seems that it cannot be executed [astamato@pcen35240 talys]$ /usr/bin/ld.bfd --version bash: /usr/bin/ld.bfd: cannot execute binary file: Exec format error [astamato@pcen35240 talys]$ sudo /usr/bin/ld.bfd --version /usr/bin/ld.bfd: /usr/bin/ld.bfd: cannot execute binary file Any idea on how to understand what is wrong and solve the issue? EDIT After @steeldriver 's suggestion I report the following outputs $ file -L /usr/bin/ld.bfd /usr/bin/ld.bfd: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=d88173c7f8919542e59738a8c5b626f6ed81d7d8, stripped, too many notes (256) $ uname -m x86_64
I don't really know why it happened, whether it could have been fixed otherwise, or whether it will have side effects elsewhere, but I just reinstalled binutils and that seems to have solved it. So just type sudo yum reinstall binutils and it should be ok.
Cannot execute ld : error 126
1,672,869,171,000
What are these files /lib/x86_64-linux-gnu/ldscripts/elf32_x86_64.xs /lib/x86_64-linux-gnu/ldscripts/elf_x86_64.xs /lib/x86_64-linux-gnu/ldscripts/elf_i386.xs /lib/x86_64-linux-gnu/ldscripts/elf_iamcu.xs /usr/lib/x86_64-linux-gnu/ldscripts/elf32_x86_64.xs /usr/lib/x86_64-linux-gnu/ldscripts/elf_x86_64.xs /usr/lib/x86_64-linux-gnu/ldscripts/elf_i386.xs /usr/lib/x86_64-linux-gnu/ldscripts/elf_iamcu.xs /usr/lib/x86_64-linux-gnu/ldscripts/elf32_x86_64.xs /usr/lib/x86_64-linux-gnu/ldscripts/elf_x86_64.xs /usr/lib/x86_64-linux-gnu/ldscripts/elf_i386.xs /usr/lib/x86_64-linux-gnu/ldscripts/elf_iamcu.xs They're packaged by binutils-x86-64-linux-gnu, but how do they fit into the system? Guessing they're some kind of definition file for the system. What uses them and are they documented?
They're the linker scripts ld uses when generating a shared library (the --shared flag); they are produced from master templates by the binutils build system, as this comment from its source explains: # Generate 5 or 6 script files from a master script template in # ${srcdir}/scripttempl/${SCRIPT_NAME}.sh. Which one of the 5 or 6 # script files is actually used depends on command line options given # to ld. (SCRIPT_NAME was set in the emulparams_file.) # # A .x script file is the default script. # A .xr script is for linking without relocation (-r flag). # A .xu script is like .xr, but *do* create constructors (-Ur flag). # A .xn script is for linking with -n flag (mix text and data on same page). # A .xbn script is for linking with -N flag (mix text and data on same page). # A .xs script is for generating a shared library with the --shared # flag; it is only generated if $GENERATE_SHLIB_SCRIPT is set by the # emulation parameters. # A .xc script is for linking with -z combreloc; it is only generated if # $GENERATE_COMBRELOC_SCRIPT is set by the emulation parameters or # $SCRIPT_NAME is "elf". # A .xsc script is for linking with --shared -z combreloc; it is generated # if $GENERATE_COMBRELOC_SCRIPT is set by the emulation parameters or # $SCRIPT_NAME is "elf" and $GENERATE_SHLIB_SCRIPT is set by the emulation # parameters too. I don't see any more information about them. But feel free to add to this answer if you can find more information.
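The end product of these templates can be inspected without hunting for files: GNU ld prints the linker script it is actually using when passed --verbose (with --shared it would use the script corresponding to the .xs variant):

```shell
# dump the default internal linker script and show where SECTIONS begins;
# ld exits non-zero here only because no input files are given
script=$(ld --verbose 2>/dev/null) || true
printf '%s\n' "$script" | sed -n '/^SECTIONS/,$p' | head -n 5
```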
What is the .xs and .x* files in ldscripts?
1,470,312,180,000
I have installed an SSL certificate from Let's Encrypt with Certbot on my Apache server with Debian 8, following this tutorial from Let's Encrypt's own documentation: https://certbot.eff.org/#debianjessie-apache $ certbot --apache You need to specify the domains you want to install the certificates for, but I only added the example.com domain. Now I want to add www.example.com as well, but cannot find out how to do this.
UPDATE: You can now do this by passing the --expand flag (see docs): --expand tells Certbot to update an existing certificate with a new certificate that contains all of the old domains and one or more additional new domains. See this answer for an example. In short: you can't. The domains you specify during the initial config become integral parts of the final certificate that is then signed by Let's Encrypt. You can't retroactively change it by adding additional domains or even subdomains as this would undermine its validity. Solution: start from scratch! (not really a big deal with certbot)
Certbot add www domain to existing domain certificate
1,470,312,180,000
I have certbot installed and successfully use it to encrypt my homepage. Now I tried to set up an email system for my website using Dovecot and Postfix. I got it mostly running; the only problem is that Thunderbird gives me a warning about the address being fraudulent, because I use the SSL key of mysite.com for imap.mysite.com (same for SMTP). So how can I add imap.mysite.com and smtp.mysite.com to the existing mysite.com certificate using certbot, in order to avoid the warning?
You have to use the --expand option of certbot:

--expand tells Certbot to update an existing certificate with a new certificate that contains all of the old domains and one or more additional new domains. With the --expand option, use the -d option to specify all existing domains and one or more new domains.

Example:

certbot --expand -d mysite.com,imap.mysite.com,smtp.mysite.com

https://certbot.eff.org/docs/using.html#re-creating-and-updating-existing-certificates
How can I add subdomains to letsencrypt using certbots?
1,470,312,180,000
I have an Ubuntu Server 16.04 VPS with Nginx. Currently I'm serving HTTP/1 (without TLS, on port 80), but I want to go "one step forward" and serve HTTP/2 (with TLS, on port 443) for all my (WordPress) websites. Assume I adjusted my environment this way:

1. Firewall

ufw app list # Choose Nginx HTTPS

2. Server blocks

Default server block:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name server_domain_or_IP;
    return 302 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
}

Each site's server block:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    root /var/www/html/example.com;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name example.com www.example.com;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}

Now I need to create OpenSSL certificates, sign them with Let's Encrypt, and associate them with each site directory, respectively. My question: How can the creation of the OpenSSL certs, the Let's Encrypt signing, and the site-directory association be made as automatic as possible from inside the terminal? Of course there is some part in which I need to verify a domain from my email, but beyond that, AFAIU, everything is done from the terminal and thus can be fully automated. Can you share a Bash script example (or a particular utility, maybe GNU make) that helps achieve that?

Notes: I would humbly prefer a dockerless solution (I read here, and besides the fact that it has to do with renewal, it also seems to use Docker, which I have no intention of doing for a small private server of fewer than 10 small sites, by way of minimalism).
I understand that creating, signing, and associating with site dirs requires a different procedure than renewal; I am asking only about creating, signing and associating. Why I even ask this question: I just want to use HTTP/2 on my self-managed, minimal VPS (no kernel/shell customization, no compilations, almost no contrib utilities), and it seems insane to me to manually implement this procedure for many sites, or each time a new site is added.
It's easy to create and update Let's Encrypt certificates with dehydrated (https://github.com/lukas2511/dehydrated). You have to add a /.well-known/acme-challenge/ location for each site, as the Let's Encrypt service will look for challenge responses under this location to verify that you are the owner of the sites you have requested certificates for:

location /.well-known/acme-challenge/ {
    allow all;
    root /st/hosting/hamilton/htdocs;
}

And use the same path in the dehydrated config:

egrep -v "^#|^[[:space:]]*$" config
WELLKNOWN="/st/hosting/hamilton/htdocs/.well-known/acme-challenge"
CONTACT_EMAIL=<you@email>

After that, put all your domains in the domains.txt file: on each line, the first domain will be the CommonName and the other names will be AlternativeNames, for example:

head -n1 domains.txt
hamilton.rinet.ru jenkins.hamilton.rinet.ru munin.hamilton.rinet.ru

After that you should put dehydrated -c in cron and use a script like this one to install the newly generated certificates:

#!/bin/sh

CERTS_DIR=/usr/local/etc/dehydrated/certs
NGINX_SSL=/usr/local/etc/nginx/ssl
DOMAINS=$(awk '{ print $1 }' /usr/local/etc/dehydrated/domains.txt)

for d in $DOMAINS; do
    short_d=${d%%.rinet.ru}
    short_d=${short_d%%.ru}
    # short_d=${short_d##www.}
    cp -v ${CERTS_DIR}/$d/fullchain.pem ${NGINX_SSL}/${short_d}.crt
    cp -v ${CERTS_DIR}/$d/privkey.pem ${NGINX_SSL}/${short_d}.key
done

# Also update certs for Dovecot
cp -v ${CERTS_DIR}/hamilton.rinet.ru/fullchain.pem /usr/local/etc/dovecot/certs/certs/server.crt
cp -v ${CERTS_DIR}/hamilton.rinet.ru/privkey.pem /usr/local/etc/dovecot/certs/private/server.key
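A side note on the ${var%%pattern} suffix-stripping that the install script relies on; a quick sketch (the domain names are just examples):

```shell
# ${var%%pattern} removes the longest suffix matching pattern;
# ${var##pattern} removes the longest matching prefix.
d=jenkins.hamilton.rinet.ru
short_d=${d%%.rinet.ru}     # strips the .rinet.ru suffix
echo "$short_d"
short_d=${short_d##www.}    # no leading "www." here, so unchanged
echo "$short_d"
```

This is plain POSIX parameter expansion, so the script works with /bin/sh and needs no external tools for the name mangling.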
Automating OpenSSL certificates creation, Let'sEncrypt signing, and site dir associating, in an Nginx environment
1,470,312,180,000
I have been fixing bugs all day, mainly in the TLS area, but this question is not specifically about TLS. I have one web server with a few web sites, each with its own SSL certificate. To the point: I managed to install Certbot version 0.19.0 on my Debian 9.2 like this:

Adding backports to the sources:

deb http://ftp.debian.org/debian stretch-backports main

Installing a newer version of Certbot from backports:

apt-get install python-certbot-apache -t stretch-backports

Afterwards, I had to make some major adjustments to the renewal file, so it looks like this:

# renew_before_expiry = 30 days
version = 0.10.2
archive_dir = /etc/letsencrypt/archive/pavelstriz.cz-0001
cert = /etc/letsencrypt/live/pavelstriz.cz-0001/cert.pem
privkey = /etc/letsencrypt/live/pavelstriz.cz-0001/privkey.pem
chain = /etc/letsencrypt/live/pavelstriz.cz-0001/chain.pem
fullchain = /etc/letsencrypt/live/pavelstriz.cz-0001/fullchain.pem

# Options used in the renewal process
[renewalparams]
authenticator = webroot
installer = apache
rsa_key_size = 4096
account = c3f3d026995c1d7370e4d8201c3c11a2
must_staple = True
[[webroot_map]]
pavelstriz.cz = /home/pavelstriz/public_html
www.pavelstriz.cz = /home/pavelstriz/public_html

I have managed to renew the pavelstriz.cz domain after this with:

certbot renew --dry-run

But what worries me is Certbot's daily cron:

# /etc/cron.d/certbot: crontab entries for the certbot package
#
# Upstream recommends attempting renewal twice a day
#
# Eventually, this will be an opportunity to validate certificates
# haven't been revoked, etc.  Renewal will only occur if expiration
# is within 30 days.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew

I can't figure out whether it actually works, or how to run it successfully. If I run:

/usr/bin/certbot -a \!
-d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew

in Bash, it says:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
The requested ! plugin does not appear to be installed

I may have misunderstood those commands.
The actual command run by cron is:

test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew

It starts by testing some files:

test -x /usr/bin/certbot -a \! -d /run/systemd/system

which translates into: does /usr/bin/certbot exist and is it executable (-x /usr/bin/certbot), and not (-a \!) does /run/systemd/system exist and is it a directory (-d /run/systemd/system)?

If the test succeeds, wait for a random number of seconds (perl -e 'sleep int(rand(3600))'), then try to renew the certificate (certbot -q renew).

However, on Debian 9, systemd is installed by default, which means that /run/systemd/system exists and is a directory, so the initial test fails and the renew command is never run. The actual renewal is managed by a systemd timer defined in the file /lib/systemd/system/certbot.timer. As of version 0.27.0-1, its content is:

[Unit]
Description=Run certbot twice daily

[Timer]
OnCalendar=*-*-* 00,12:00:00
RandomizedDelaySec=43200
Persistent=true

[Install]
WantedBy=timers.target

If certbot is properly configured, you should probably find lines like

Nov  2 20:06:14 hostname systemd[1]: Starting Run certbot twice daily.
Nov  2 20:06:14 hostname systemd[1]: Started Run certbot twice daily.

in your syslog.
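The short-circuit logic of that cron line can be reproduced with stand-in paths that exist on any Linux box; this is a sketch, not the real cron entry (/bin/sh stands in for /usr/bin/certbot, and the directory paths are arbitrary):

```shell
# both conditions hold: the executable exists and the directory does not,
# so cron would go on to run the renewal
if test -x /bin/sh -a \! -d /no/such/dir; then
  echo "would renew"
else
  echo "skipped"
fi

# /tmp exists, so the 'and not a directory' clause fails and nothing runs,
# which is exactly what happens with /run/systemd/system on a systemd box
if test -x /bin/sh -a \! -d /tmp; then
  echo "would renew"
else
  echo "skipped: directory exists"
fi
```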
How to validate / fix an error in Certbot renewal cron
1,470,312,180,000
I am trying to install a Let's Encrypt certificate on an Oracle Linux Server 7.6. Since the server does not have a public IP, I had to validate via DNS. I followed the instructions here, https://github.com/joohoi/acme-dns-certbot-joohoi, and the validation worked and I got the certificate. How do I now install the certificate? I followed instructions online, moved the certificate to /etc/ssl/certs and deleted the old certificate. After restarting the machine, however, the website does not work and I get an error "site cannot be reached". I can interact with the server only via SSH.
I believe this should be comparable to CentOS 7.6. The path /etc/ssl/certs is simply a symbolic link to /etc/pki/tls/certs/. The certificate is divided into two parts. The first, which you have already mentioned, is the *.crt file containing the public key; it goes in /etc/pki/tls/certs/ (in my case certificate.crt). The other part is the private key, which goes in /etc/pki/tls/private/ and usually has a *.key extension (in my case private.key).

In case you are using the Apache web server, here is a working example of my redmine.conf; it should be enough to guide you through:

<VirtualHost *:80>
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.com
    ServerAlias 192.0.2.37
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/certificate.crt
    SSLCertificateKeyFile /etc/pki/tls/private/private.key
    SSLCertificateChainFile /etc/pki/tls/certs/ca_bundle.crt
    DocumentRoot /var/www/html/redmine/public
    <Directory /var/www/html/redmine/public>
        Allow from all
        Options -MultiViews
        Require all granted
    </Directory>
</VirtualHost>

One thing I almost forgot to mention (which might solve your problem): you need to make sure that you have firewall rules in place, and permanent ones, as follows:

firewall-cmd --permanent --add-service=http --add-service=https --zone=public
firewall-cmd --reload

Also, make sure you have SELinux disabled in case you have not changed its rules for your web service.
Install Let's Encrypt SSL certificate on Oracle Linux Server
1,470,312,180,000
Is there a way to use certbot and a Let's Encrypt certificate in a multiserver setup without having to manually copy the certificates from one node to another? I have a domain name example.com which resolves to 192.0.2.1 in the Americas and to 192.0.2.2 in Asia. I run certbot from the American server and it successfully generates the certificate. I can't run the same command from the Asian server, as the domain will resolve only to 192.0.2.1 during validation. Therefore, in order to install the certificate on the Asian server, I have to copy it from 192.0.2.1 to 192.0.2.2. Yes, the copy process can be scripted, though it doesn't look like a good idea to me. Is there another way around this?
In the end I used the solution described here. In short:

Use a single node for certificate generation.
Use an nginx proxy to forward /.well-known/ from all frontends to the node from step 1.
Copy the certificates to all frontend servers with scripts.
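A sketch of step 2, assuming nginx on the secondary frontends and reusing 192.0.2.1 (the certificate-generating node) from the question; the exact location block is illustrative:

```nginx
# on every frontend that is NOT the certbot node:
location /.well-known/acme-challenge/ {
    proxy_pass http://192.0.2.1;
    proxy_set_header Host $host;
}
```

That way, whichever frontend Let's Encrypt happens to reach during validation, the challenge is answered by the single node that actually ran certbot.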
Certbot for multiserver configuration
1,470,312,180,000
I would like to create a shell script for getting Let's Encrypt certificates:

#!/bin/bash
sudo docker run -it --rm -p 443:443 -p 80:80 --name certbot \
    -v "/etc/letsencrypt:/etc/letsencrypt" \
    -v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
    quay.io/letsencrypt/letsencrypt:latest certonly

But now I have to provide some information manually:

email address
option 2 (standalone)
the domain

Is it possible to automate these inputs?
Don't reinvent the wheel if you don't have to. Someone else (several people, actually) has already created a script to automate the process of getting and renewing Let's Encrypt certificates using a shell script. Let's Encrypt includes a list of third-party clients here. Because the OP asked about a shell script, this is about one in particular, GetSSL, that I've looked at and experimented with. It's fully open source and licensed under the GNU GPL3 license. There is also a wiki that covers its use very well, and a link for reporting issues. Additionally, the creator is very active in the Let's Encrypt Community forum under the name "serverco", and answers questions there as well.

Since it's a shell script, installation is simple. (All paths used here are samples that work for me, but you can change them to suit your needs.) Download the getssl file from the link above into your bin folder. Other places work, of course, but the bin folder simplifies things. Make it executable, then run it:

$ wget -O - https://raw.githubusercontent.com/srvrco/getssl/master/getssl > ~/bin/getssl
$ chmod 0700 ~/bin/getssl
$ getssl --create yourdomain.name

The --create option creates the default config file (getssl.cfg) in ~/.getssl and ~/.getssl/yourdomain.name. The first one holds information common to all the domains you might choose to register, and the second one holds the information that is unique to the named domain. If you get certs for more than one domain, each will have its own directory in ~/.getssl, and you need to run the --create command above for each one. The default config files are very well commented, and you can probably configure them without even reviewing the online wiki, although reading ahead of time never hurts (RTFM). One of the things that can be slightly confusing deals with keys. For SSL to work, the server has to have a key pair (private/public) for the encryption process. This is called the Server Key.
To use Let's Encrypt you also need a key pair for communications with the cert server. This is called your Account Key, or your Let's Encrypt Key. For the first use of GetSSL, for new accounts (not accounts that are already active), you can make a key very easily with openssl. This command will generate one and store it for you:

openssl genrsa 4096 > ~/.getssl/LE_account.key

If, on the other hand, you have already used Let's Encrypt, somewhere you have the account key already. If so, you must continue to use that same key to renew the certs you have, or to revoke certs if needed. I don't know where other clients keep their copy of the account key, but I have found a resource that explains where the certbot client keeps it. Although written for acme-tiny, this guide explains how to 'extract' the account key from the certbot files. The GetSSL client expects the account key to be in standard PEM format, but the certbot client stores it in some other format, inside a JSON structure, and this has to be extracted and converted. Doing so uses another tool from JonLundy that needs Python, so even though GetSSL doesn't need Python, you will for this. The given process, modified for the sample file structure above, is:

$ wget -O - "https://gist.githubusercontent.com/JonLundy/f25c99ee0770e19dc595/raw/6035c1c8938fae85810de6aad1ecf6e2db663e26/conv.py" > conv.py
$ cp /etc/letsencrypt/accounts/acme-v01.api.letsencrypt.org/directory/<id>/private_key.json private_key.json
$ openssl asn1parse -noout -out private_key.der -genconf <(python conv.py private_key.json)
$ openssl rsa -in private_key.der -inform der > ~/.getssl/LE_account.key
$ rm conv.py private_key.json private_key.der

The final line should probably be replaced with something a little more secure. After all, this is a rather important private key. Maybe something like shred -zun13 private_key.json; shred -zun13 private_key.der would be better.
I am not a security expert, nor a server expert, so I am not able to discuss those aspects of GetSSL, or its implementation. They might be better addressed on Server Fault. Nor can I answer most questions about GetSSL. The place to get answers about GetSSL the fastest seems to be on the Community site for LetsEncrypt, where serverco can often be seen. Many others on that forum also deal with GetSSL questions, providing a good pool of answers.
Is it possible to get letsencrypt certificates with a shell script?
1,470,312,180,000
Assuming I have not yet decided which web server to use, Apache or Nginx, but I still want to get the SSL certification procedure over with: can I install a Let's Encrypt SSL certificate before I configure a virtual host?
Yes, you can get a Let's Encrypt SSL certificate before your webserver is up and running. Let's Encrypt accepts two kinds of domain validation:

Provisioning a DNS record under example.com, or
Provisioning an HTTP resource under a well-known URI on http://example.com/

(Source.) You can use the first challenge. All you need to do is to edit your DNS to put a TXT record under your domain name. The TXT record will need to contain the value of a token specified by Let's Encrypt. (Source.)
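For the DNS route specifically, the provisioned record looks like this in zone-file notation. The _acme-challenge label is the real, fixed name; the token value is a made-up placeholder for the one the CA supplies during each validation:

```
_acme-challenge.example.com.  300  IN  TXT  "gfj9Xq...Rg85nM0"
```

Once validation has succeeded, the record can be removed. With certbot this flow is driven by the --preferred-challenges dns option or one of its DNS plugins.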
Can I install a Let'sEncrypt ssl certificate before I configure a virtual host?
1,470,312,180,000
I started using letsencrypt when there was an "official" client called letsencrypt. I now want to change to acme-client - that is, the C implementation. I think I managed to configure my sites and find the certificates for them, but I get the error:

acme-client: https://acme-v01.api.letsencrypt.org/acme/new-authz: bad HTTP: 403
acme-client: transfer buffer: [{ "type": "urn:acme:error:unauthorized", "detail": "No registration exists matching provided key", "status": 403 }] (120 bytes)

I don't think I got the account key right. Where did letsencrypt store that? I find a directory called /etc/letsencrypt/accounts, but below it there are no .pem files, only .json files with strange content... So my questions are:

Did letsencrypt store the account key in PEM format? If so, where can I find it?
If not, is the key stored anywhere in a way that is transformable to PEM format?
Another solution, much easier, is to re-register the account using

acme-client -DAvv <domain>

after having opened port 80 and configured httpd to answer calls with the additional location:

location "/.well-known/acme-challenge/*" {
    root "/acme"
    root strip 2
}
Switching from letsencrypt (client) to acme-client - where is my account key?
1,470,312,180,000
# cat /etc/letsencrypt/options-ssl-apache.conf
# Baseline setting to Include for SSL sites using Let's Encrypt certificates
SSLEngine on

# Intermediate configuration, tweak to your needs
SSLProtocol -all +TLSv1.1 +TLSv1.2
#SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
SSLHonorCipherOrder on
SSLCompression off
SSLOptions +StrictRequire

# Add vhost name to log entries
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" vhost_combined
LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost_common
CustomLog /var/log/apache2/access.log vhost_combined
LogLevel warn
ErrorLog /var/log/apache2/error.log

# Always ensure Cookies have "Secure" set (JAH 2012/1)
Header edit Set-Cookie (?i)^(.*)(;\s*secure)??((\s*;)?(.*)) "$1; Secure$3$4"

As I have my own global SSL settings set directly in Apache, I don't want Certbot to include the mentioned file with the line:

Include /etc/letsencrypt/options-ssl-apache.conf

The line gets duplicated, by the way; I have found it 3 times in the VirtualHosts... I want Certbot not to include this file at all. How am I supposed to do this?
You will want to use the certonly command:

Authenticators are plugins used with the certonly command to obtain a certificate. The authenticator validates that you control the domain(s) you are requesting a certificate for, obtains a certificate for the specified domain(s), and places the certificate in the /etc/letsencrypt directory on your machine. The authenticator does not install the certificate (it does not edit any of your server’s configuration files to serve the obtained certificate)...

usage: certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...
...
obtain, install, and renew certificates:
    (default) run   Obtain & install a certificate in your current webserver
    certonly        Obtain or renew a certificate, but do not install it

Examples:

certbot certonly --webroot -w /var/www/example -d www.example.com -d example.com -w /var/www/other -d other.example.net -d another.other.example.net
certbot certonly --standalone -d www.example.com -d example.com
How to configure the Certbot not to include options-ssl-apache.conf into my VirtualHosts?
1,470,312,180,000
I've been trying to get a Let's Encrypt certificate, key and chain to work, but I have also done some stuff that I know I didn't need to before I realized what to do. Port 443 wasn't open, so I've also been doing some stuff with "ports.conf", but I've changed that back to how it was, I think. I modified the default-ssl.conf and did an a2ensite, but the normal mywebsite.com.conf was still active. There was an error in the file path to the keys, so I fixed that and tried to reboot apache2, but that's when the real problems started. I tried a2dissite on one and then on both conf files (normal and SSL) but couldn't get Apache to restart. The only other thing was that I did some things I didn't need to, like making a CSR and adding it to the certificate file. Here's what it was saying:

● apache2.service - LSB: Apache2 web server
   Loaded: loaded (/etc/init.d/apache2)
  Drop-In: /lib/systemd/system/apache2.service.d
           └─forking.conf
   Active: failed (Result: exit-code) since Wed 2017-01-18 06:23:59 PST; 3min 31s ago
  Process: 31878 ExecStop=/etc/init.d/apache2 stop (code=exited, status=0/SUCCESS)
  Process: 31861 ExecReload=/etc/init.d/apache2 reload (code=exited, status=1/FAILURE)
  Process: 30542 ExecStart=/etc/init.d/apache2 start (code=exited, status=0/SUCCESS)

Jan 18 06:22:10 archimedes systemd[1]: apache2.service: control process exited, code=exited status=1
Jan 18 06:22:10 archimedes systemd[1]: Reload failed for LSB: Apache2 web server.
Jan 18 06:23:37 archimedes systemd[1]: Reloading LSB: Apache2 web server.
Jan 18 06:23:38 archimedes apache2[31861]: Reloading web server: apache2 failed!
Jan 18 06:23:38 archimedes apache2[31861]: Apache2 is not running ... (warning).
Jan 18 06:23:38 archimedes systemd[1]: apache2.service: control process exited, code=exited status=1
Jan 18 06:23:38 archimedes systemd[1]: Reload failed for LSB: Apache2 web server.
Jan 18 06:23:59 archimedes apache2[31878]: Stopping web server: apache2.
Jan 18 06:23:59 archimedes systemd[1]: Unit apache2.service entered failed state.
Jan 18 06:27:04 archimedes systemd[1]: Unit apache2.service cannot be reloaded because it is inactive.

Then I had a look at another thread from here where someone mentioned sudo reboot, so I tried this and PuTTY froze. But when I re-opened PuTTY I was able to restart apache2, and it's back to normal HTTP now. Any ideas what I did wrong? Is going in and adding the SSL site to sites-enabled again a good thing to do? It seems that port 443 is open now, which is a good thing.

EDIT: apache2.conf...

Mutex file:${APACHE_LOCK_DIR} default
PidFile ${APACHE_PID_FILE}
Timeout 300
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 5
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}
HostnameLookups Off
ErrorLog ${APACHE_LOG_DIR}/error.log
LogLevel warn
IncludeOptional mods-enabled/*.load
IncludeOptional mods-enabled/*.conf
Include ports.conf

<Directory />
    Options FollowSymLinks
    AllowOverride None
    Require all denied
</Directory>

<Directory /usr/share>
    AllowOverride None
    Require all granted
</Directory>

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>

and "mywebsite.com.conf"...

<VirtualHost *:80>
    ServerAdmin [email protected]
    ServerName mywebsite.com
    ServerAlias www.mywebsite.com
    DocumentRoot /var/www/html
    ErrorLog /var/www/logs/error.log
    CustomLog /var/www/logs/access.log combined
    <Directory /var/www/html/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride all
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>
Make sure to allow ports 80 and 443 in your router.

Make sure to forward ports 80 and 443 to your server.

Make sure to have punched holes in your firewall:

sudo iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT

Define a VirtualHost for port 443 (as well as for 80):

<VirtualHost *:80>
    ... your code here ...
</VirtualHost>

<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ... your code here ...
    </VirtualHost>
</IfModule>

Activate mod_rewrite:

sudo a2enmod rewrite

Define a redirect from HTTP to HTTPS, something like:

RewriteCond %{HTTPS} !on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R=301]

And finally restart Apache:

sudo service apache2 restart
Apache problems when trying to set up SSL (Debian)
1,470,312,180,000
Apache webserver on Rocky Linux 9, with SSL certs obtained from Let's Encrypt. This is the config of a specific virtual host "myvhost", but the problem arises for all vhosts on my server:

/etc/httpd/conf.d/myvhost.conf:

<VirtualHost *:80>
    ServerName myvhost.example.org
    DocumentRoot "/var/www/html/myvhost"
    RewriteEngine on
    RewriteCond %{SERVER_NAME} =myvhost.example.org
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

/etc/httpd/conf.d/myvhost-le-ssl.conf (autogenerated by Let's Encrypt):

<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerName myvhost.example.org
        DocumentRoot "/var/www/html/myvhost"
        Include /etc/letsencrypt/options-ssl-apache.conf
        Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
        TraceEnable off
        SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem
    </VirtualHost>
</IfModule>

The command curl -i http://myvhost.example.org returns:

HTTP/1.1 400 Bad Request
Date: Wed, 19 Jun 2024 12:39:10 GMT
Server: Apache
Content-Length: 362
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
Instead use the HTTPS scheme to access this URL, please.<br />
</p>
</body></html>

Why is it doing that? Amongst other things, the HTTP 400 error prevents certbot renew from verifying the domain and renewing the certificate. It is worth noting that the exact same configuration on CentOS Stream 8 did not result in this problem.
EDIT: output of the command for f in $(grep -l -e SSLCertificate -e :80 /etc/httpd/conf.d/*.conf); do printf '\n== %s ==\n' "$f"; grep -hE 'SSLCertificate|VirtualHost|Server(Name|Alias)' "$f" | sed -e 's/#.*//' -e '/^[[:space:]]*$/d'; done | less:

== /etc/httpd/conf.d/main-le-ssl.conf ==
<VirtualHost *:443>
ServerName example.org
SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem
</VirtualHost>

== /etc/httpd/conf.d/main.conf ==
<VirtualHost *:80>
ServerName example.org
</VirtualHost>

== /etc/httpd/conf.d/myvhost-le-ssl.conf ==
<VirtualHost *:443>
ServerName myvhost.example.org
SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem
</VirtualHost>

== /etc/httpd/conf.d/myvhost.conf ==
<VirtualHost *:80>
ServerName myvhost.example.org
</VirtualHost>

== /etc/httpd/conf.d/anothervhost-le-ssl.conf ==
<VirtualHost *:443>
ServerName anothervhost.example.org
SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem
</VirtualHost>

== /etc/httpd/conf.d/anothervhost.conf ==
<VirtualHost *:80>
ServerName anothervhost.example.org
</VirtualHost>

== /etc/httpd/conf.d/ssl.conf ==
SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem
Run this and inspect the output:

for f in $(grep -l -e SSLCertificate -e :80 /etc/httpd/conf.d/*.conf)
do
    printf '\n== %s ==\n' "$f"
    grep -hE 'SSLCertificate|VirtualHost|Server(Name|Alias)' "$f" | sed -e 's/#.*//' -e '/^[[:space:]]*$/d'
done

Debian derivatives will need to change /etc/httpd/conf.d/*.conf to /etc/apache2/sites-enabled/*.conf.

It will show you a (very) cut-down representation of your configuration. You're looking for a Virtual Host (vHost) on port 80 that contains SSL directives, or SSL directives outside a vHost section. On seeing your output, this second situation was indeed the case:

== /etc/httpd/conf.d/ssl.conf ==
SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem

(Presumably with SSLEngine on also present, although the original code in a comment omitted that particular line.)

What happens is that the presence of the SSLEngine on directive enables SSL (reasonably enough). When it's within a specific Virtual Host, it's just that vHost that's enabled for SSL. But because you had the directives globally, outside any vHost, SSL was enabled everywhere. You should then disable SSL for the vHosts that are listening on port 80 with the directive SSLEngine off.
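A sketch of that last fix, reusing the port-80 vhost from the question; the only addition is the SSLEngine off line:

```apache
<VirtualHost *:80>
    ServerName myvhost.example.org
    DocumentRoot "/var/www/html/myvhost"
    SSLEngine off    # override the stray global 'SSLEngine on' for this plain-HTTP vhost
    RewriteEngine on
    RewriteCond %{SERVER_NAME} =myvhost.example.org
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
```

(Moving the stray SSL directives out of ssl.conf and into the individual :443 vhosts is the cleaner long-term fix.)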
Why is my web server serving HTTPS content on port 80?
1,470,312,180,000
I've followed the guidance from other Stack answers (there's a gazillion related to this) in building my openssl verify command to validate my Let's Encrypt certs, shown below:

openssl verify -show_chain /etc/letsencrypt/live/mail.example.com/chain.pem /etc/letsencrypt/live/mail.example.com/cert.pem

But it fails with the error:

CN = mail.example.com
error 20 at 0 depth lookup: unable to get local issuer certificate
error /etc/letsencrypt/live/mail.example.com/cert.pem: verification failed
/etc/letsencrypt/live/mail.example.com/chain.pem: OK
Chain:
depth=0: C = US, O = Let's Encrypt, CN = R3 (untrusted)
depth=1: C = US, O = Internet Security Research Group, CN = ISRG Root X1

Even if I substitute fullchain.pem for chain.pem this nonetheless fails. But these are all the certs Let's Encrypt distributed to me! What am I missing here?
openssl verify -show_chain /etc/letsencrypt/live/mail.example.com/chain.pem /etc/letsencrypt/live/mail.example.com/cert.pem

This command is wrong. It will try to verify all the given certificates independently of each other, i.e. it will not build a trust chain and verify the first one. Instead, the command should have been:

openssl verify -untrusted chain.pem cert.pem

With -untrusted, the intermediate certificate is given. The root certificate ISRG Root X1 will be taken from the trust store on modern systems; otherwise it should be given with -trusted or -CAfile.
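The difference is easy to reproduce with a throwaway three-level chain; every file name and subject below is a disposable illustration:

```shell
cd "$(mktemp -d)"

# self-signed root CA
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Test Root" \
        -keyout root.key -out root.pem -days 1 2>/dev/null

# intermediate, signed by the root (needs CA:TRUE to sign further certs)
printf 'basicConstraints=critical,CA:TRUE\n' > ca.ext
openssl req -newkey rsa:2048 -nodes -subj "/CN=Test Intermediate" \
        -keyout int.key -out int.csr 2>/dev/null
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
        -extfile ca.ext -out int.pem -days 1 2>/dev/null

# leaf, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -subj "/CN=leaf.example" \
        -keyout leaf.key -out leaf.csr 2>/dev/null
openssl x509 -req -in leaf.csr -CA int.pem -CAkey int.key -CAcreateserial \
        -out leaf.pem -days 1 2>/dev/null

# wrong: each file is verified on its own, so leaf.pem fails
openssl verify int.pem leaf.pem || true

# right: the intermediate is the untrusted chain link, the root the anchor
openssl verify -CAfile root.pem -untrusted int.pem leaf.pem
```

The last command is the shape of the fix above, with -CAfile only needed here because the throwaway root is obviously not in the system trust store.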
Let's Encrypt Certs Fail "openssl verify" Verification
1,470,312,180,000
I have some internally available servers (all Debian) that share a Let's Encrypt wildcard certificate (*.local.example.com). One server (Server1) keeps the certificate up to date, and now I'm looking for a process to automatically distribute the .pem files from Server1 to the other servers (e.g. Server2 and Server3). I don't allow root logins via SSH, so I believe I need an intermediary user. I've considered using a cronjob on Server1 to copy the updated .pem files to a user's directory, where an unprivileged user uses scp or rsync (private key authentication) via another cronjob to copy the files to Server2/3. However, to make this a more secure process, I wanted to restrict the user's privileges on Server2/3 to chroot to their home directory and only allow them to use scp or rsync. It seems like this isn't a trivial configuration, and most methods are outdated, flawed or require an extensive setup (rbash, ForceCommand, chroot, ...). I've also considered changing the protocol to SFTP, which should allow me to use the restricted sftp environment via OpenSSH, but I have no experience with it. An alternative idea was to use an API endpoint (e.g. FastAPI, which is already running on Server1) or simply a webserver via HTTPS with custom API secrets or mTLS on Server1 to allow Server2/3 to retrieve the .pem files. At the moment, the API/webserver approach seems most reasonable and least complex, yet feels unnecessarily convoluted. I'd prefer a solution that doesn't require additional software. Server1 has the .pem files (owned by root) and Server2/3 need those files updated regularly (in a root-owned location). What method can I use to distribute those files automatically in a secure manner?
I've settled on an rsync-only user, that can only rsync data to a predefined directory using ssh-keys (https://gist.github.com/jyap808/8700714). I rsync the files with a script that runs after successful letsencrypt deployments. On the receiving servers, I have an inotifywait service running that moves the files to the appropriate locations right after they've synced onto the server.
How to distribute HTTPS certificate/key securely and automatically on internal servers
1,470,312,180,000
So, one of my servers is behind NAT, and since there is already a publicly accessible apache server going on my LAN, I decided to access it from the outside with different ports, and remap them to the standard port of the apache on this new machine I want to get a cert on. I did that with classic port forwarding via my router. Now, if I want to use letsencrypt on said server, it obviously fails because it tries to use the standard port, which will direct to my other server's apache installation (which btw. already has a letsencrypt-cert). Now I guess I need some way to tell letsencrypt to use my self-defined port instead of the standard one to connect from the outside, but I haven't found anything yet. Is that even possible? If it is, how?
It's not possible to use a non-standard port, as a conforming ACME server will still try to contact the default 80 / 443 for the http-01 / tls-sni-01 challenges. E.g. certbot has separate options to listen on a non-standard port, but that still doesn't help to pass the challenge: certonly: Options for modifying how a cert is obtained --tls-sni-01-port TLS_SNI_01_PORT Port used during tls-sni-01 challenge. This only affects the port Certbot listens on. A conforming ACME server will still attempt to connect on port 443. (default: 443) --http-01-port HTTP01_PORT Port used in the http-01 challenge. This only affects the port Certbot listens on. A conforming ACME server will still attempt to connect on port 80. (default: 80) Probably in your case the best way would be to use another verification method -- webroot. In this case you don't need your 80 and 443 to be available to the outside world, but just a specific directory (which might be served through a proxy on the webserver side, I assume). Details are available here
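As a hedged sketch of that webroot approach (domain and webroot path are placeholders): the ACME server still fetches http://<domain>/.well-known/acme-challenge/... over port 80, but the already-public webserver can proxy or serve just that one path on behalf of the internal host, so only a directory needs to be reachable:

```
# Placeholder invocation -- adjust the webroot path and domain to your setup
certbot certonly --webroot -w /var/www/html -d inner.example.com
```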
Changing the port letsencrypt tries to connect on
1,470,312,180,000
I'm currently trying to share and use my wildcard certificate from letsencrypt via NFS, but the servers that are supposed to use it cannot do so. As for my setup: I have 3 VMs (maybe 4 in the future) running. One is a reverse proxy that receives all HTTP and HTTPS traffic and forwards it to my mail server and my Kanboard. My mail server runs iRedMail. My problem is that I fail to deploy the certificate on both the Kanboard and the iRedMail server. Kanboard (Apache2) tells me this: SSLCertificateFile: file '/mnt/letsencrypt/live/domain.com/fullchain.pem' does not exist or is empty and iRedMail (nginx) this: nginx: [emerg] BIO_new_file("/etc/ssl/certs/iRedMail.crt") failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/ssl/certs/iRedMail.crt' Since I don't want this post to drag on too long, I created some pastebins with my configs and the things I have done. Reverse proxy, iRedMail, Kanboard: all will be accessible for 6 months. HTTPS access for domain.com (meaning the reverse proxy) works without a problem. Output of sudo ls -l /etc/letsencrypt/ (live): drwxrwxrwx 3 administrator root 4096 Feb 13 16:25 live All 3 servers run Ubuntu 18.04 Server and the user "administrator" uses the same credentials. If you need any more information, feel free to ask. Edits: Output of namei -lx /path/to/private/key
Because the listed daemons run as root (mail and web listen on ports <1024) and try to read from NFS, they will have problems, since NFS shares are usually exported without the no_root_squash option. The idea is that NFS maps the local (client-side) root user to an anonymous user with an ID other than 0, so the local root user has no access to NFS-shared files and directories whose permissions are restricted to root. The OP can resolve this issue in one of two ways: change the permissions of the files and directories so the world can read them, or add no_root_squash to the NFS share (and restart the NFS server).
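For the second option, a sketch of what the export might look like on the NFS server (the client network range is an assumption):

```
# /etc/exports -- ro is enough for certificates; no_root_squash lets
# client-side root keep uid 0 when reading the share
/etc/letsencrypt  192.168.1.0/24(ro,sync,no_root_squash)
```

After editing, re-export with exportfs -ra (or restart the NFS server, as noted above).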
Letsencrypt Wildcard certificate with NFS [Ubuntu 18.04]
1,470,312,180,000
I've recently changed the log configuration for letsencrypt, because none was provided, and I now have these files: letsencrypt.log letsencrypt.log.1 letsencrypt.log.10 letsencrypt.log.10.gz letsencrypt.log.11 letsencrypt.log.11.gz letsencrypt.log.12 letsencrypt.log.12.gz letsencrypt.log.13 letsencrypt.log.13.gz letsencrypt.log.14 letsencrypt.log.14.gz letsencrypt.log.15 letsencrypt.log.15.gz letsencrypt.log.16 letsencrypt.log.16.gz where even-numbered files have 1409 bytes and odd-numbered files have 0 bytes. The gzipped files, however, have some content (which differs). The logrotate configuration is: /var/log/letsencrypt/*.log { daily rotate 32 compress delaycompress missingok notifempty create 644 root root } How should I change the logrotate configuration to leave only the first two files not gzipped and the rest gzipped, and get rid of the empty files?
OK, so I've managed to set up a proper logrotate config: /var/log/letsencrypt/*.log { weekly rotate 9 compress delaycompress missingok create 644 root root } So the differences are that I've switched from daily to weekly rotation, reduced rotate from 32 to 9, and removed notifempty.
Logs gzipped and not gzipped
1,470,312,180,000
I am trying to install a Let's Encrypt SSL certificate on my website using Securing Apache with Let's Encrypt on CentOS 7. My web server is (include version): Apache (cPanel) My hosting provider is: GoDaddy I followed that guide; STEP-1 and STEP-2 were successful, with the understanding that no firewall has been set up on my VPS: sudo yum install epel-release sudo yum install httpd mod_ssl python-certbot-apache sudo systemctl start httpd systemctl status httpd curl www.example.com (Note: works) sudo certbot --apache -d example.com -d www.example.com This last command generates an error, as follows: sudo: certbot: command not found I tried to install certbot with sudo yum install certbot and it installed successfully Installed: certbot.noarch 0:0.27.1-1.el7 Dependency Installed: audit-libs-python.x86_64 0:2.8.1-3.el7_5.1 checkpolicy.x86_64 0:2.5-6.el7 libcgroup.x86_64 0:0.41-15.el7 libsemanage-python.x86_64 0:2.5-11.el7 policycoreutils-python.x86_64 0:2.5-22.el7 pyOpenSSL.x86_64 0:0.13.1-3.el7 python-IPy.noarch 0:0.75-6.el7 python-cffi.x86_64 0:1.6.0-5.el7 python-enum34.noarch 0:1.0.4-1.el7 python-idna.noarch 0:2.4-1.el7 python-ndg_httpsclient.noarch 0:0.3.2-1.el7 python-ply.noarch 0:3.4-11.el7 python-pycparser.noarch 0:2.14-1.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-requests-toolbelt.noarch 0:0.8.0-1.el7 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python-zope-component.noarch 1:4.1.0-3.el7 python-zope-event.noarch 0:4.0.3-2.el7 python-zope-interface.x86_64 0:4.0.5-4.el7 python2-acme.noarch 0:0.27.1-1.el7 python2-certbot.noarch 0:0.27.1-1.el7 python2-configargparse.noarch 0:0.11.0-1.el7 python2-cryptography.x86_64 0:1.7.2-2.el7 python2-future.noarch 0:0.16.0-6.el7 python2-josepy.noarch 0:1.1.0-1.el7 python2-mock.noarch 0:1.0.1-9.el7 python2-parsedatetime.noarch 0:2.4-5.el7 python2-pyasn1.noarch 0:0.1.9-7.el7 python2-pyrfc3339.noarch 0:1.0-2.el7 python2-requests.noarch 0:2.6.0-0.el7 python2-six.noarch 0:1.9.0-0.el7 pytz.noarch
0:2016.10-2.el7 setools-libs.x86_64 0:3.3.8-2.el7 Complete! I again tried to request an SSL certificate for my domain. sudo certbot --apache -d example.com -d www.example.com this time it's returning Saving debug log to /var/log/letsencrypt/letsencrypt.log The requested apache plugin does not appear to be installed /var/log/letsencrypt/letsencrypt.log 2018-11-02 08:15:55,542:DEBUG:certbot.main:certbot version: 0.27.1 2018-11-02 08:15:55,542:DEBUG:certbot.main:Arguments: ['--apache', '-d', 'example.com', '-d', 'www.example.com'] 2018-11-02 08:15:55,543:DEBUG:certbot.main:Discovered plugins: PluginsRegistry(PluginEntryPoint#manual,PluginEntryPoint#null,PluginEntryPoint#standalone,Plugi$ 2018-11-02 08:15:55,611:DEBUG:certbot.log:Root logging level set at 20 2018-11-02 08:15:55,611:INFO:certbot.log:Saving debug log to /var/log/letsencrypt/letsencrypt.log 2018-11-02 08:15:55,613:DEBUG:certbot.plugins.selection:Requested authenticator apache and installer apache 2018-11-02 08:15:55,613:DEBUG:certbot.plugins.selection:No candidate plugin 2018-11-02 08:15:55,614:DEBUG:certbot.plugins.selection:Selected authenticator None and installer None Note: I replaced example.com with my actual domain UPDATE 1 I tried with sudo yum install python-certbot-apache it is returning --> Finished Dependency Resolution Error: Package: python2-certbot-apache-0.27.1-1.el7.noarch (epel) Requires: mod_ssl You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest
What's the problem with doing this?: sudo yum install mod_ssl then restart Apache. (Note: sudo a2enmod ssl is the Debian/Ubuntu way of enabling the module; it doesn't exist on CentOS, where installing mod_ssl is enough.)
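On CentOS 7 that boils down to something like the following (a sketch; installing mod_ssl drops /etc/httpd/conf.d/ssl.conf in place and a restart loads the module), after which certbot plugins shows whether the apache plugin is now detected:

```
sudo yum install -y mod_ssl
sudo systemctl restart httpd
certbot plugins
```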
Unable to install Let's Encrypt certificate on CentOS 7
1,470,312,180,000
I have an Ubuntu mail server running postfix version 3.6.4. I configured postfix to use ssl by adding the following lines to /etc/postfix/main.cf: # TLS parameters smtpd_tls_cert_file = /etc/letsencrypt/live/host-name.domain.name/fullchain.pem smtpd_tls_key_file = /etc/letsencrypt/live/host-name.domain.name/privkey.pem I am successfully receiving emails from a gmail account. How do I ensure that the emails are ssl encrypted?
If postfix is receiving email with encryption, the mail logs will have a line similar to this... 2022-08-11T19:17:07.707481+01:00 eth6 postfix/smtpd[8401]: Anonymous TLS connection established from mail[1.2.3.4]: TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange ECDHE (P-256) server-signature RSA-PSS (2048 bits) server-digest SHA256 You can also telnet to the local smtp port and type in helo moto then ehlo b.org and it should then tell you what it supports. Look for a line similar to 250-STARTTLS The moto and b.org are just pointless drivel and can be anything. Yet another way is to look at the raw output of a received email message. You should see output similar to: Received: from mail.nowhere.com (mail.nowhere.com [2.3.4.5]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (Client did not present a certificate) by gate.nowhere.com (Postfix) with ESMTPS id 9D3F880E984F for <[email protected]>; Fri, 15 Jul 2022 19:12:02 +0100 (BST) As for sending, just run tcpdump and send an email to some gmail account and look at the ascii output.
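A more direct probe than telnet is openssl itself, which performs the STARTTLS negotiation and prints the server certificate and negotiated cipher (the host name below is a placeholder for your own MX):

```
# Placeholder host -- point this at your own mail server
openssl s_client -starttls smtp -connect mail.example.com:25 </dev/null
```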
How do I check if my postfix email server is using SSL?
1,470,312,180,000
I have a valid letsencrypt certificate that is used by apache server. <IfModule mod_ssl.c> <VirtualHost *:443> ServerName mydomain.com SSLCertificateFile /etc/letsencrypt/live/mydomain.com/fullchain.pem SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.com/privkey.pem Include /etc/letsencrypt/options-ssl-apache.conf </VirtualHost> </IfModule> This certificate is automatically renewed using certbot: sudo ls /etc/letsencrypt/mydomain.com/live/ -al lrwxrwxrwx 1 root root 41 Dec 31 08:46 fullchain.pem -> ../../archive/mydomain.com/fullchain8.pem lrwxrwxrwx 1 root root 39 Dec 31 08:46 privkey.pem -> ../../archive/mydomain.com/privkey8.pem Apache is running under www-data user account. www-data 6452 0.0 0.4 472056 41852 ? S 01:44 0:11 /usr/sbin/apache2 -k start I also have a web socket server that needs to use this certificate. When I run the web socket server with www-data user I get following error: RuntimeException: Connection from tcp://127.0.0.1:46714 failed during TLS handshake: Unable to complete TLS handshake: SSL_R_NO_SHARED_CIPHER: no suitable shared cipher could be used. This could be because the server is missing an SSL certificate (local_cert context option)... Which basically means it cannot access the certificate. If I copy the fullchain8.pem and privkey8.pem files to another location where www-data has access the error changes (but that is another problem :) ). I know it is a bad idea to copy a private key file to a place where www-data has access, and I definitely do not want the web socket server to run with root privileges. So my question is, how can I access the certificate? Apache seems to know its way around.
If you run ps auxw | grep httpd you'll see that while most of the apache webserver processes are owned by www-data, there is one process owned by root. This is the master process which (amongst other things) reads the certificate at startup. Further, if you have a look in the directory /etc/letsencrypt/live you'll find one or more symlinks to where the files actually reside. This complexity is there for a reason. And you really don't want to change things around in there - you don't want to break certbot. While you told us lots about the webserver, you didn't tell us anything about the web socket server. It may have some mechanism to avoid this issue. You can add a deploy-hook script in the certbot config to copy the certificate and key from /etc/letsencrypt/live/mydomain.com/ to another location with different permissions (and restart the websocket server / signal it to reload the certificate). This can be specified on the certbot command line when first creating the certificate or it can be manually added to /etc/letsencrypt/renewal/mydomain.com.conf later. I've not tried this myself, but I believe this would simply be something like.... deploy-hook = /usr/local/sbin/updatewebsocketserver at the end of the file, under the [renewalparams] section. (you get to write the script yourself). Do you really need to run the websocket server as the same uid as the webserver?
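A sketch of what such a deploy-hook script might contain — the user name, target directory and service name are all assumptions to be adapted to the actual websocket server:

```
#!/bin/sh
# Hypothetical /usr/local/sbin/updatewebsocketserver, run by certbot
# after each successful renewal.
set -e
src=/etc/letsencrypt/live/mydomain.com
dst=/var/lib/websocket/tls            # readable only by the websocket user
install -d -m 750 -o wsuser -g wsuser "$dst"
install -m 640 -o wsuser -g wsuser "$src/fullchain.pem" "$src/privkey.pem" "$dst"
systemctl restart websocket.service   # or signal it to reload the files
```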
How to access TLS certificate locally
1,375,515,548,000
I have recently discovered that if we press Ctrl+X Ctrl+E, bash opens the current command in an editor (set in $VISUAL or $EDITOR) and executes it when the editor is closed. But it doesn't seem to be documented in the man pages. Is it documented, and if so where?
I have found it out now. I should have read it more carefully before asking this. The man page says: edit-and-execute-command (C-xC-e) Invoke an editor on the current command line, and execute the result as shell commands. Bash attempts to invoke $VISUAL, $EDITOR, and emacs as the editor, in that order.
Where is the bash feature to open a command in $EDITOR documented?
1,375,515,548,000
Currently, I have the following in my .zshrc: bindkey '^[[A' up-line-or-search bindkey '^[[B' down-line-or-search However, this only seems to match the content of my current input before a space character occurs. For example, sudo ls / will match every line in my history that begins with sudo, while I would like it to only match lines that match my entire input. (i.e. sudo ls /etc would match, but not sudo cat /var/log/messages) What do I need to change in order to gain the desired behavior? Here is my entire .zshrc in case it is relevant: https://gist.github.com/919566
This is the documented behavior: down-line-or-search Move down a line in the buffer, or if already at the bottom line, search forward in the history for a line beginning with the first word in the buffer. There doesn't seem to be an existing widget that does exactly what you want, so you'll have to make your own. Here's how to define a widget that behaves like up-line-or-search, but using the beginning of the line (up to the cursor) rather than the first word as search string. Not really tested, especially not on multi-line input. up-line-or-search-prefix () { local CURSOR_before_search=$CURSOR zle up-line-or-search "$LBUFFER" CURSOR=$CURSOR_before_search } zle -N up-line-or-search-prefix An alternate approach is to use history-beginning-search-backward, but only call it if the cursor is on the first line. Untested. up-line-or-history-beginning-search () { if [[ -n $PREBUFFER ]]; then zle up-line-or-history else zle history-beginning-search-backward fi } zle -N up-line-or-history-beginning-search
ZSH: search history on up and down keys?
1,375,515,548,000
I was just typing something along the lines of: mv foo/bar/poit/zoid/narf.txt Suddenly I realized, damn, I have to type large parts of that parameter again: mv foo/bar/poit/zoid/narf.txt foo/bar/poit/zoid/troz.txt Even with tab completion, that's quite a pain. I know I can copy-paste the parameter by mouse-selecting the text and middleclicking but that is not good enough. I want to keep my hands on the keyboard. Is there a way to copy-paste the current parameter of the line using the keyboard?
If I've planned ahead, I use brace expansion. In this case: mv foo/bar/poit/zoid/{narf,troz}.txt Here is another approach using the default readline keyboard shortcuts: mv foo/bar/poit/zoid/narf.txt: start Ctrl-w: unix-word-rubout to delete foo/bar/poit/zoid/narf.txt Ctrl-y, Space, Ctrl-y: yank, space, yank again to get mv foo/bar/poit/zoid/narf.txt foo/bar/poit/zoid/narf.txt Meta-backspace, Meta-backspace: backward-kill-word twice to delete the last narf.txt troz.txt: type the tail part that is different If you spend any non-trivial amount of time using the bash shell, I'd recommend periodically reading through a list of the default shortcuts and picking out a few that seem useful to learn and incorporate into your routine. Chapter 8 of the bash manual is a good place to start. Knowing the shortcuts can really raise your efficiency.
How to repeat currently typed in parameter on Bash console?
1,375,515,548,000
I was playing around at the bash prompt, and pressed ESC followed by {, after which the shell showed all the files for completion, in a fileglob string. E.g.: if I had typed bash C followed by ESC+{, the shell would show this: bash CHECK{,1,2{,23{336{,66666},6},3{,6}}} auto-completing all the possible files & directories starting with C, showing all the experimental files & directories I had made. What is ESC+{ and where can I learn more about it? I see this on CentOS & Mac OSX with bash.
To find out about a key binding. In bash: $ bind -p | grep -a '{' "\e{": complete-into-braces "{": self-insert $ LESS='+/complete-into-braces' man bash complete-into-braces (M-{) Perform filename completion and insert the list of possible com‐ pletions enclosed within braces so the list is available to the shell (see Brace Expansion above). Or with info: info bash --index-search=complete-into-braces (or info bash and use the index with completion (i key)) However note that the pre-built info page that comes with bash-4.3 sources at least is missing some index entries including that for complete-into-braces, so unless your OS rebuilds the info page from the texinfo sources, the above command won't work. In zsh $ bindkey| grep W "^W" backward-kill-word "^[W" copy-region-as-kill $ info --index-search=copy-region-as-kill zsh copy-region-as-kill (ESC-W ESC-w) (unbound) (unbound) Copy the area from the cursor to the mark to the kill buffer. If called from a ZLE widget function in the form 'zle copy-region-as-kill STRING' then STRING will be taken as the text to copy to the kill buffer. The cursor, the mark and the text on the command line are not used in this case. Or with man assuming the less pager like for bash: LESS='+/copy-region-as-kill' man zshall zsh also has a describe-key-briefly which you can bind on a key or key sequence, like Ctrl+XCtrl+H below: bindkey '^X^H' describe-key-briefly Then you type Ctrl+XCtrl+H followed by the key or key combination to describe. For instance, typing that Ctrl+XCtrl+H twice would display below the prompt: "^X^H" is describe-key-briefly In tcsh That's basically the same as zsh except that tcsh doesn't have an info page. > bindkey | grep -a P "^P" -> up-history "^[P" -> history-search-backward > env LESS=+/history-search-backward man tcsh [...] In fish: > bind | grep -F '\ec' bind \ec capitalize-word > help commands Which should start your preferred web browser. And search for capitalize-word in there.
ESC + { : What is it and where can I learn more about it?
1,375,515,548,000
Some instances of bash change the command history when you re-use and edit a previous command, others apparently don't. I've been searching and searching but can't find anything that says how to prevent commands in the history from being modified when they're reused and edited. There are questions like this one, but that seems to say how to cope with the history being edited. I've only recently come across an instance of bash that does edit the history when you reuse a command - all previous bash shells I've used have (as far as I've noticed) been configured not to change the history when you reuse and edit a command. (Perhaps I've just not been paying proper attention to my shell history for the past 15 years or so...) So that's probably the best question: CAN I tell bash NEVER to modify the history - and if so, how?
Turns out revert-all-at-newline is the answer. I needed to include set revert-all-at-newline on in my ~/.inputrc file, since using the set command at the bash prompt had no effect. (Then, of course, I had to start a new shell.) Also, I found that ~/.inputrc is loaded instead of /etc/inputrc if present, which means that any defaults defined in the latter are no longer active when you create ~/.inputrc. To fix this, start ~/.inputrc with $include /etc/inputrc. Thanks to @StéphaneChazelas for pointing me in the right direction.
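Putting those pieces together, the resulting ~/.inputrc is only two lines (the $include keeps the distribution defaults from /etc/inputrc active):

```
# ~/.inputrc
$include /etc/inputrc
set revert-all-at-newline on
```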
How to stop bash editing the history when I reuse and modify an entry?
1,375,515,548,000
The first two characters are repeated when I use Tab completion. In the screenshot below, cd is repeated. I have tried rxvt-unicode, xterm, and terminator; all these terminal emulators have this issue. Zsh version 5.0.2, config file from oh-my-zsh
If the characters on your command line are sometimes displayed at an offset, this is often because zsh has computed the wrong width for the prompt. The symptoms are that the display looks fine as long as you're adding characters or moving character by character but becomes garbled (with some characters appearing further right than they should) when you use other commands that move the cursor (Home, completion, etc.) or when the command overlaps a second line. Zsh needs to know the width of the prompt in order to know where the characters of the command are placed. It assumes that each character occupies one position unless told otherwise. One possibility is that your prompt contains escape sequences which are not properly delimited. Escape sequences that change the color or other formatting aspects of the text, or that change the window title or other effects, have zero width. They need to be included within a percent-braces construct %{…%}. More generally, an escape sequence like %42{…%} tells zsh to assume that what is inside the braces is 42 characters wide. So check your prompt settings (PS1, PROMPT, or the variables that they reference) and make sure that all escape sequences (such as \e[…m to change text attributes — note that it may be present via some variable like $fg[red]) are inside %{…%}. Since you're using oh-my-zsh, check both your own settings and the definitions that you're using from oh-my-zsh. The same issue arises in bash. There zero-width sequences in a prompt need to be enclosed in \[…\]. Another possibility is that your prompt contains non-ASCII characters and that zsh (or any other application) and your terminal have a different idea of how wide they are. This can happen if there is a mismatch between the encoding of your terminal and the encoding that is declared in the shell, and the two encodings result in different widths for certain byte sequences. 
Typically you might run into this issue when using a non-Unicode terminal but declaring a Unicode locale or vice versa. Applications rely on environment variables to know the locale; the relevant setting is LC_CTYPE, which is determined from the environment variables LANGUAGE, LC_ALL, LC_CTYPE and LANG (the first of these that is set applies). The command locale | grep LC_CTYPE tells you your current setting. Usually the best way to avoid locale issues is to let the terminal emulator set LC_CTYPE, since it knows what encoding it expects; but if that's not working for you, make sure to set LC_CTYPE. The same symptoms can occur when the previous command displayed some output that didn't end in a newline, so that the prompt is displayed in the middle of the line but the shell doesn't realize that. In this case that would only happen after running such a command, not persistently. If a line isn't displayed properly, the command redisplay or clear-screen (bound to Ctrl+L by default) will fix it.
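To illustrate the %{…%} rule with a concrete prompt (the colors and layout are arbitrary examples, not a recommended prompt):

```
# Raw escape sequences must be wrapped in %{...%} so zsh counts them
# as zero width:
PROMPT=$'%{\e[1;31m%}%n@%m%{\e[0m%} %~ %# '
# zsh's own %F/%f color escapes are already known to be zero width:
PROMPT='%F{red}%n@%m%f %~ %# '
```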
First characters of the command repeated in the display when completing
1,375,515,548,000
Putting on Debian 8.3 stty werase '^H' or on Arch Linux 2/2016 stty werase '^?' in .bashrc (for example) makes Ctrl-Backspace delete the last word in the terminal. Still it's not the same behavior as in modern GUI applications (e.g. Firefox): It deletes the last whitespace-separated word, and not the last word separated by whitespace or characters like . : , ; " ' & / ( ). Is it possible to make Ctrl-Backspace behave in the terminal similar to modern GUI applications? Also, is there any way to make Ctrl-Delete delete the word immediately before the cursor?
There are two line editors at play here: the basic line editor provided by the kernel (canonical mode tty line editor), and bash's line editor (implemented via the readline library). Both of these have an erase-to-previous-word command which is bound to Ctrl+W by default. The key can be configured for the canonical mode tty line editor through stty werase; bash imitates the key binding that it finds in the tty setting unless overridden in its own configuration. The werase action in tty line editor cannot be configured. It always erases (ASCII) whitespace-delimited words. It's rare to interact with the tty line editor — it's what you get e.g. when you type cat with no argument. If you want fancy key bindings there, you can run the command under a tool like rlwrap which uses readline. Bash provides two commands to delete the previous word: unix-word-rubout (Ctrl+w or as set through stty werase), and backward-kill-word (M-DEL, i.e. Esc Backspace) which treats a word as a sequence of alphanumeric characters in the current locale and _. If you want Ctrl+Backspace to erase the previous sequence of alphanumeric characters, don't set stty werase, and instead put the following line in your .inputrc: "\C-h": backward-kill-word Note that this assumes that your terminal sends the Ctrl+H character for Ctrl+Backspace. Unfortunately it's one of those keys with no standard binding (and Backspace in particular is a mess for historical reasons). There's also a symmetric command kill-word which is bound to M-d (Alt+D) by default. To bind it to Ctrl+Delete, you first need to figure out what escape sequence your terminal sends, then add a corresponding line in your .inputrc. Type Ctrl+V then Ctrl+Delete; this will insert something like ^[[3;5~ where the initial ^[ is a visual representation of the escape character. 
Then the binding is "\e[3;5~": kill-word If you aren't happy with either definition of a word, you can provide your own in bash: see confusing behavior of emacs-style keybindings in bash
Ctrl-Backspace and Ctrl-Delete in bash
1,375,515,548,000
One of my favorite tricks in Bash is when I open my command prompt in a text editor. I do this (in vi mode) by pressing ESC v. When I do this, whatever is in my command prompt is now displayed in my $EDITOR of choice. I can then edit the command as if it were a document, and when I save and exit everything in that temp file is executed. I'm surprised that none of my friends have heard of this tip, so I've been looking for docs I can share. The problem is that I haven't been able to find anything on it. Also, the search terms related to this tip are very common, so that doesn't help when Googling for the docs. Does anyone know what this technique is called so I can actually look it up?
In bind -p listing, I can see the command is called edit-and-execute-command, and is bound to C-xC-e in the emacs mode.
Opening Your Command Prompt In A Text Editor - What Is This Called?
1,375,515,548,000
I am not sure how to word this, but I often find myself typing commands like this: cp /etc/prog/dir1/myconfig.yml /etc/prog/dir1/myconfig.yml.bak I usually just type out the path twice (with tab completion) or I'll copy and paste the path with the cursor. Is there some bashfoo that makes this easier to type?
There are a number of tricks (there's a duplicate to be found I think), but for this I tend to do cp /etc/prog/dir1/myconfig.yml{,.bak} which gets expanded to your command. This is known as brace expansion. In the form used here, the {} expression specifies a number of strings separated by commas. These "expand" the whole /etc/prog/dir1/myconfig.yml{,.bak} expression, replacing the {} part with each string in turn: the empty string, giving /etc/prog/dir1/myconfig.yml, and then .bak, giving /etc/prog/dir1/myconfig.yml.bak. The result is cp /etc/prog/dir1/myconfig.yml /etc/prog/dir1/myconfig.yml.bak These expressions can be nested: echo a{b,c,d{e,f,g}} produces ab ac ade adf adg There's a variant using numbers to produce sequences: echo {1..10} produces 1 2 3 4 5 6 7 8 9 10 and you can also specify the step: echo {0..10..5} produces 0 5 10
Bash command to copy before cursor and paste after?
1,375,515,548,000
How to configure zsh such that Ctrl+Backspace kills the word before point? How to achieve that Ctrl+Delete kills the word after point? I use urxvt as terminal emulator.
I'll focus on Ctrl+Delete first. The zsh command to delete a whole word forwards is called kill-word. By default it is bound to Alt+D. How to make Ctrl+Delete do it too depends on which terminal emulator you are using. On my system, this works in xterm and Gnome Terminal: bindkey -M emacs '^[[3;5~' kill-word and for urxvt, you should do: bindkey -M emacs '^[[3^' kill-word If that doesn't work, try typing Ctrl+V Ctrl+Delete to see what the value is on your system. You could even add both of those together to your .zshrc, or use the output of tput kDC5 instead of hard-coding the sequence. Ctrl+Backspace seems harder. On my system, pressing that is the same as pressing just Backspace. If yours is the same, I think your best option is to use Alt+Backspace or Ctrl+W instead.
zsh kill Ctrl + Backspace, Ctrl + Delete
1,375,515,548,000
I'd like to run the command foo --bar=baz <16 zeroes> How do I type the 16 zeroes efficiently*? If I hold Alt and press 1 6 0 it will repeat the next thing 160 times, which is not what I want. In emacs I can either use Alt-[number] or Ctrl-u 1 6 Ctrl-u 0, but in bash Ctrl-u kills the currently-being-typed line and the next zero just adds a 0 to the line. If I do foo --bar=baz $(printf '0%.0s' {1..16}) Then history shows exactly the above, and not foo --bar=baz 0000000000000000; i.e. bash doesn't behave the way I want. (Edit: point being, I want to input some number of zeroes without using $(...) command substitution) (*) I guess a technical definition of "efficiently" is "with O(log n) keystrokes", preferably a number of keystrokes equal to the number of digits in 16 (for all values of 16) plus perhaps a constant; the emacs example qualifies as efficient by this definition.
Try echo Alt+1Alt+6Ctrl+V0 That's 6 key strokes (assuming a US/UK QWERTY keyboard at least) to insert those 16 zeros (you can hold Alt for both 1 and 6). You could also use the standard vi mode (set -o vi) and type: echo 0Escx16p (also 6 key strokes). The emacs mode equivalent and that could be used to repeat more than a single character (echo 0Ctrl+WAlt+1Alt+6Ctrl+Y) works in zsh, but not in bash. All those will also work with zsh (and tcsh where that comes from). With zsh, you could also use padding variable expansion flags and expand them with Tab: echo ${(l:16::0:)}Tab (A lot more keystrokes obviously). With bash, you can also have bash expand your $(printf '0%.0s' {1..16}) with Ctrl+Alt+E. Note though that it will expand everything (not globs though) on the line. To play the game of the least number of key strokes, you could bind to some key a widget that expands <some-number>X to X repeated <some-number> times. And have <some-number> in base 36 to even further reduce it. With zsh (bound to F8): repeat-string() { REPLY= repeat $1 REPLY+=$2 } expand-repeat() { emulate -L zsh set -o rematchpcre local match mbegin mend MATCH MBEGIN MEND REPLY if [[ $LBUFFER =~ '^(.*?)([[:alnum:]]+)(.)$' ]]; then repeat-string $((36#$match[2])) $match[3] LBUFFER=$match[1]$REPLY else return 1 fi } zle -N expand-repeat bindkey "$terminfo[kf8]" expand-repeat Then, for 16 zeros, you type: echo g0F8 (3 keystrokes) where g is 16 in base 36. Now we can further reduce it to one key that inserts those 16 zeros, though that would be cheating. We could bind F2 to two 0s (or two $STRING, 0 by default), F3 to 3 0s, F1F6 to 16 0s... up to 19... possibilities are endless when you can define arbitrary widgets. Maybe I should mention that if you press and hold the 0 key, you can insert as many zeros as you want with just one keystroke :-)
How do I input n repetitions of a digit in bash, interactively
1,375,515,548,000
I would like to be able to copy and paste text in the command line in Bash using the same keyboard bindings that Emacs uses by default (i.e. using C-Space for set-mark, M-w to copy text, C-y / M-y to paste it, etc.). The GNU Bash documentation says that Bash comes with some of these key bindings set up by default. For example, yanking (C-y) works by default on my terminal. However, I can't get the set-mark and copy commands to work, and they don't seem to be bound to any keys by default.

Usually, the way a user can define her own key bindings is to add them to .inputrc. So I looked and found the following bash functions in the documentation that I presume can help me define the Emacs-like behavior that I want (i.e. set-mark with C-Space and copy with M-w).

copy-region-as-kill () Copy the text in the region to the kill buffer, so it can be yanked right away. By default, this command is unbound.

and

set-mark (C-@) Set the mark to the point. If a numeric argument is supplied, the mark is set to that position.

If I understand correctly, the above means that copy-region-as-kill is not bound to any keyboard sequence by default, while set-mark is bound to C-@ by default. I tried C-@ on my terminal, but I don't think it runs set-mark because I don't see any text highlighted when I move my cursor.

In any case, I tried adding keyboard bindings (M-w and C-Space) to the functions copy-region-as-kill and set-mark above in my .inputrc and then reloading it with C-x C-r, but this didn't work. I know that my other entries in .inputrc work because I have other user-defined keybindings defined in it. Is there anything I am doing wrong? Am I missing anything?
It doesn't highlight the selection, but otherwise I think it works fine. Try running

    $ bind -p | grep copy-region-as-kill

to make sure that C-x C-r actually worked. It should say:

    "\ew": copy-region-as-kill

After that, it should work fine. Example:

    $ abc<C-Spc><C-a><M-w> def <C-y>

gives me

    $ abc def abc

If you ever want to know where mark is, just do C-x C-x. Example:

    $ <C-Spc>abc<C-x><C-x>

will put the cursor back to where you set mark (the start of the line).

Also, I don't think you need to add the set-mark binding. I didn't.

    $ bind -p | grep set-mark
    "\C-@": set-mark
    "\e ": set-mark
    # vi-set-mark (not bound)

(note that most terminals send C-@ when C-Spc is pressed. I assume yours does too.)

If all this fails:
- does Ctrl+Space work in emacs -nw on the same terminal?
- do other Alt/Meta shortcuts work in bash?
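For reference, a minimal ~/.inputrc fragment matching the behaviour discussed above — a sketch; only the copy binding is strictly required, since set-mark already defaults to C-@ (which most terminals send for C-Space):

```
# hypothetical ~/.inputrc fragment
# M-w: copy region to the kill ring (unbound by default)
"\ew": copy-region-as-kill
```

Reload it with C-x C-r (or start a new shell), then verify with bind -p | grep copy-region-as-kill.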
Copy and set-mark in Bash as in Emacs?
1,375,515,548,000
I use zsh's menu-based tab completion. I press Tab once, and a list of possible completions appears. If I press Tab again, I can navigate this list with the arrow keys. However, is it possible to navigate them with the vi-like H, J, K, L keys instead? I use emacs mode for command-line input, with bindkey -e in ~/.zshrc. I also use zim with zsh. If relevant, the commands that specify the tab-completion system are here.
Yes, you can, by enabling menu select:

    zstyle ':completion:*' menu select
    zmodload zsh/complist
    ...
    # use the vi navigation keys in menu completion
    bindkey -M menuselect 'h' vi-backward-char
    bindkey -M menuselect 'k' vi-up-line-or-history
    bindkey -M menuselect 'l' vi-forward-char
    bindkey -M menuselect 'j' vi-down-line-or-history
Can I navigate zsh's tab-completion menu with vi-like hjkl keys?
1,375,515,548,000
I notice some sample bash for loops are spread out over multiple lines in examples:

    for VARIABLE in file1 file2 file3
    do
      command1 on $VARIABLE
      command2
      commandN
    done

(e.g. here http://www.cyberciti.biz/faq/bash-for-loop/)

How do I enter a newline in the bash terminal (I use PuTTY)? When I press Enter at the end of a line the system executes it.
When you press Enter at the end of:

    for VARIABLE in file1 file2 file3

the shell can't execute anything since that for loop is not finished. So instead, it will print a different prompt, the $PS2 prompt (generally >), until you enter the closing done. However, after > is displayed, you can't go back to edit the first line.

Alternatively, instead of typing Enter, you can type Ctrl-V Ctrl-J. That way, the newline character (aka ^J) is entered without the current buffer being accepted, and you can then go back to editing the first line later on.

In zsh, you can press Alt-Enter or Esc Enter to insert a newline character without accepting the current buffer. To get the same behavior in bash, you can add to your ~/.inputrc:

    "\e\C-m": "\026\n"

(\026 being the ^V character).
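The parsing behaviour is easy to check non-interactively: a command string containing literal newlines is still one compound command, exactly as if each line had been typed at the > prompt. A small sketch:

```shell
# sh -c receives the whole loop as a single argument containing real newlines
out=$(sh -c 'for VARIABLE in file1 file2 file3
do
  echo "command1 on $VARIABLE"
done')
printf '%s\n' "$out"
```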
How to input / start a new line in bash terminal?
1,375,515,548,000
How do I handle the backspaces entered? It shows ^? when I try, and how does read count the characters? As in 12^?3, already 5 characters were complete (though not all of them were actual input), but after 12^?3^? it returned the prompt, which is weird. Please help!

    -bash-3.2$ read -n 5
    12^?3^?-bash-3.2$
When you read a whole line with plain read (or read -r or other options that don't affect this behavior), the kernel-provided line editor recognizes the Backspace key to erase one character, as well as a very few other commands (including Return to finish the input line and send it). The shortcut keys can be configured with the stty utility. The terminal is said to be in cooked mode when its line editor is active. In raw mode, each character typed on the keyboard is transmitted to the application immediately. In cooked mode, the characters are stored in a buffer and only complete lines are transmitted to the application.

In order to stop reading after a fixed number of characters so as to implement read -n, bash has to switch to raw mode. In raw mode, the terminal doesn't do any processing of the Backspace key (by the time you press Backspace, the preceding character has already been sent to bash), and bash doesn't do any processing either (presumably because this gives the greater flexibility of allowing the script to do its own processing).

You can pass the option -e to enable bash's own line editor (readline, which is a proper line editor, not like the kernel's extremely crude one). Since bash is doing the line editing, it can stop reading once it has the requested number of characters.
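That read -n does no Backspace processing can be observed even without a terminal: feeding a DEL byte (\177, what Backspace usually sends) through a pipe shows it counted as an ordinary character. A sketch, assuming bash is installed:

```shell
# send '1', '2', DEL, '3', DEL; read -n 5 accepts all five bytes verbatim
len=$(printf '12\1773\177' | bash -c 'IFS= read -r -n 5 x; printf %s "${#x}"')
echo "$len"
```

The reported length is 5: the DEL bytes were stored in the variable rather than erasing anything.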
How to handle backspace while reading?
1,375,515,548,000
I'm using zsh on Gentoo x64, and when I type sudo vim /path/to/file from my home folder, zsh asks: zsh: correct 'vim' to '.vim' [nyae]? I want to run vim not my .vim folder. How do I fix this? My guess is that setopt autocd is causing this. Odd thing is, if I don't add sudo, zsh doesn't ask to correct anything.
Try alias sudo='nocorrect sudo'.
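For context, it is the CORRECT_ALL option in the question's .zshrc that makes zsh spell-check arguments (here, vim after sudo). Two hedged options for ~/.zshrc:

```zsh
# option 1: exempt anything run through sudo from correction
alias sudo='nocorrect sudo'

# option 2: correct only command names, never their arguments
unsetopt CORRECT_ALL
setopt CORRECT
```

This also explains why plain vim is left alone: without sudo, vim is the command name, not an argument, and command-name correction finds it in $PATH.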
zsh wants to correct vim to .vim
1,375,515,548,000
Zsh has a bit of completion-related automation that's nice most of the time: after pressing Tab, a space is inserted automatically (or some other appropriate character such as , inside braces). I want to keep this feature except in one case: when I type & or | after pressing Tab, I don't want the space to be removed. I prefer the space to be removed on a ;, and I definitely want to suppress the automatically-inserted comma when pressing Tab } in a brace enumeration. This feature works by default both in the “old” (compctl) and the “new” (compadd) completion systems. I'm only interested in the new system. How can I tune the automatic suppression of the automatic suffix inserted by completion?
This feature can be tuned with the ZLE_REMOVE_SUFFIX_CHARS and ZLE_SPACE_SUFFIX_CHARS shell parameters.

If the ZLE_REMOVE_SUFFIX_CHARS variable is set, it should contain the set of characters that, when typed, cause an automatic suffix from the completion to be removed. If ZLE_REMOVE_SUFFIX_CHARS is unset, the default behaviour equates to

    ZLE_REMOVE_SUFFIX_CHARS=$' \t\n;&|'

For characters set in ZLE_SPACE_SUFFIX_CHARS, the suffix is instead replaced with a space; ZLE_SPACE_SUFFIX_CHARS takes precedence over ZLE_REMOVE_SUFFIX_CHARS when a character appears in both.

So in order to get your desired behaviour, it should be sufficient to set

    ZLE_SPACE_SUFFIX_CHARS=$'|&'

It seems that the automatically inserted , in brace enumerations is always removed when typing }. Although zshparam(1) mentions that certain completion systems may override this behaviour, it seems to work just fine with the "new" compsys (you called it compadd).
Keep the space after completion for some characters in zsh
1,375,515,548,000
Under Bash some behavior of Alt+d has been driving me crazy since years and I figured out that maybe it could be fixed with a setting. If I'm at a terminal and issue a command like this: ...$ cat >> ~/notesSuperLongFilename.txt and then if I want, say, to issue : ...$ scp ~/notesSuperLongFilename.txt I'd like to get back the "cat >> ~/notesSuperLongFilename.txt" using Ctrl+p (previous line) and then do Ctrl+a and then Alt+d and Alt+d again so I'd have: ...$ ~/notesSuperLongFilename.txt and then I'd be able to simply enter "scp" and then do a Ctrl+m (or hit Enter / Return). However it doesn't work because after the first Alt+d I get: ...$ >> ~/notesSuperLongFilename.txt (so far so good) but after the second Alt+d I get: ...$ .txt So for some reason Alt+d deletes ">> ~/notesSuperLongFilename" at once instead of just deleting ">> ". This has to be the single biggest time-waster that is driving me crazy with Linux / Bash since literally years. So how can I fix this (arguably broken) behavior of Alt+d? P.S: I don't know who's "responsible" for that Alt+d behavior: I don't know if it's the terminal or if it's the shell (Bash in my case).
I don't know who's "responsible" for that Alt+d behavior: I don't know if it's the terminal or if it's the shell (Bash in my case). It's bash, specifically the default command-line editing setup. Here is a nice page on what commands can be bound, and how to change the default bindings. The default binding for Alt-d is kill-word which is supposed to work like the command of the same name in Emacs. As you've observed, though, it doesn't—Emacs would consider the space between >> and the tilde in your example to be a word break. That bash does not, I would consider a bug. Short of getting the source for bash, changing it, and recompiling it, I don't know what you can do.
The shell's "delete word" shortcut deletes too many characters
1,375,515,548,000
Xubuntu 13.10 Say I paste a command, something like sudo apt-get install abc yxz 123 DEF MMM KKK into the terminal. Then I suddenly had a change in mind and thus I would like to delete the last 3 packages without using backspace. Is there a way to mark them, as in using something like ctrl + shift + left?
Assuming you are using the "usual" bash with emacs bindings, using Ctrl+w should work. To delete three words, either press Ctrl+w three times or precede it with Alt+3 or Esc 3. For more shortcuts have a look at this list.
How can I delete input in the terminal?
1,375,515,548,000
I have the following path:

    $ vim /path/to/some/where

If I press Ctrl+w, it removes the entire text back to the first space. The result would be:

    $ vim

How do I delete just the word after the last slash, and with which key combination?
Try Alt + Backspace. From bash documentation: backward-kill-word (M-DEL) Kill the word behind point. Word boundaries are the same as backward-word.
How to delete a word next of last slash
1,375,515,548,000
If I want to move a file called longfile from /longpath/ to /longpath/morepath/ can I do something like

    mv (/longpath)/(longfile) $1/morepath/$2

i.e. can I let bash know that it should remember a specific part of the input so I can reuse it later in the same input? (In the above example I use an imaginary remembering command that works by enclosing in parentheses, and an imaginary reuse command $ that inserts the content of the groups.)
You could do this:

    mv /longpath/longfile !#:1:h/morepath/

See https://www.gnu.org/software/bash/manual/bashref.html#History-Interaction

- !# is the current command
- :1 is the first argument in this command
- :h is the "head" -- think dirname
- /morepath/ appends that to the head, and since you're moving a file to a directory, it keeps the same basename.

If you want to alter the "longfile" name, say add a ".txt" extension, you could

    mv /longpath/longfile !#:1:h/morepath/!#:1:t.txt

Personally, I would cut and paste with my mouse. In practice I never get much more complicated than !! or !$ or !!:gs/foo/bar/
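In a script, where interactive history expansion is unavailable, the same "keep the head, change the tail" factoring can be written with dirname — a sketch using the question's made-up paths:

```shell
src=/longpath/longfile
dest_dir=$(dirname "$src")/morepath   # the :h ("head") analogue
echo "$dest_dir"
# mv "$src" "$dest_dir"/   # the actual move; paths here are illustrative only
```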
Is it possible to name a part of a command to reuse it in the same command later on?
1,375,515,548,000
Often I fail to find something with reverse-i-search but want to keep what I have already written. For example, typing pdflatex fails to complete to pdflatex mydocument.tex. If I then cancel it with Ctrl+c or Ctrl+g, the pdflatex part is deleted as well. How can I cancel it in a way that keeps my input?
Instead of cancelling, just use Alt+F (or, on Ubuntu, alternatively Ctrl+→) to move the cursor to the end of the first word, and then press Ctrl+K to delete everything to the end of the line. Now you are ready to complete your command.
How to cancel and keep reverse-i-search?
1,375,515,548,000
Thanks to a question on superuser.com, I found out about this utterly convenient rlwrap tool. It satisfies my needs (i.e. add command history to another cmdline tool), but I was wondering how I can use it to add command history to a 'compound' shell command, like the prototypical $> while read line; do echo "i read $line"; done hi i read hi ^D When I put the while loop inside a shell script, and execute it like rlwrap ./whilereadline.sh, it's ok. But I'm wondering how I can do this without the need for an additional file, somewhat like $> rlwrap (while read line; do echo "line: $line"; done) bash: syntax error near unexpected token `while' Any ideas?
Have you tried rlwrap sh -c 'while read line; do echo "i read $line"; done' rlwrap needs a command it can run, which a () syntax-induced subshell is not. sh -c ... is a command however. Replace sh with bash or whatever shell you prefer.
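The sh -c part can be exercised on its own (no rlwrap needed) to confirm the compound command behaves as expected when wrapped this way:

```shell
# pipe two lines into the wrapped while-read loop
out=$(printf 'hi\nbye\n' | sh -c 'while read line; do echo "i read $line"; done')
printf '%s\n' "$out"
```

With rlwrap prepended, the same loop additionally gets readline editing and history on its stdin.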
Is there a convenient way to wrap a bash command list into rlwrap?
1,375,515,548,000
If I unintentionally add a newline in a command, as far as I can tell, the only way to undo it is to press Ctrl+c and type the command again. For example: $ cat 'John's File' > ^C $ cat "John's File" This is annoying if the original command is long. Is there a way to delete the newline and remove the > prompt so that I can go back to the original command?
Is there a way to delete the newline ... ? Actually: No. But there are excellent workarounds. As you have already introduced an Enter, the line has been stored in the list of commands executed. Press ControlC to get out of the command, then, without re-typing, press up-arrow. The command entered appears again, and could be edited.
How can I undo an accidental newline in bash?
1,375,515,548,000
I (ab)use Alt + . to recover the last argument in a previous command (I'm using ZSH): for example, $ convert img.png img.pdf $ llpp (alt + .) # which produces llpp img.pdf but sometimes I review a pdf with llpp $ llpp pdffile.pdf& and then if I try to do something else with pdffile.pdf I run into troubles $ llpp (`Alt` + `.`) # produces llpp & So, is there any way to recover pdffile.pdf using something similar to Alt + .? $ echo $SHELL /usr/bin/zsh $ echo $TERM xterm
ESC-. (insert-last-word) considers any space-separated or space-separable shell token¹ a "word", including punctuation tokens such as &. You can give it a numeric argument to grab a word other than the last one. Positive arguments count from the right: Alt+1 Alt+. is equivalent to Alt+., Alt+2 Alt+. grabs the next-to-last word, etc. Zero and negative arguments count from the left: Alt+0 Alt+. is the first word (the command), and e.g. Alt+- Alt+1 Alt+. is the first argument.

I have copy-earlier-word bound to ESC-,. Where repeated invocations of ESC-. insert the last word of successive commands going back in the history, repeated invocations of ESC-, after ESC-. insert the previous word of the same command. So with the following code in your .zshrc, you can get the next-to-last word of the previous command with Alt+. Alt+,.

    autoload -U copy-earlier-word
    zle -N copy-earlier-word
    bindkey '^[,' copy-earlier-word

¹ There are several reasonable definitions of "token" in this context. In this answer I'm going by the definition "something that insert-last-word considers to be a separate word".
Alt + . (dot) shows &, instead of a previous argument
1,375,515,548,000
I have a couple of questions regarding the emacs-like keyboard bindings in Zsh. As background to all the questions: I have Emacs-like keybinding activated with bindkey -e (activated by default) Copying and region highlighting: In Emacs, if you run C-space (set-mark), select a region and then copy it using M-w, Emacs puts the region in the kill ring and stops selecting text (i.e. if I move the point, no more text is selected). However, I can't get the same behavior in ZLE. Once I copy a region with M-W, the selection mode is still on, and if I move my cursor, the selection keeps changing. Stop selection: In Emacs, if I am selecting a region, and press C-g, the selection stops (the current mark is killed). In Zsh, by default, C-g starts a new line in the shell. So is there a ZLE command that I can bind to (maybe using something different from C-g) to stop an ongoing selection?
To deactivate the selection, run set-mark-command with a negative argument: Esc - Ctrl+Space.

To copy the region and deactivate the selection, write a function that performs the two actions, then declare it as a widget with zle -N and bind that widget to a key.

    copy-region-as-kill-deactivate-mark () {
      zle copy-region-as-kill
      zle set-mark-command -n -1
    }
    zle -N copy-region-as-kill-deactivate-mark
    bindkey '\ew' copy-region-as-kill-deactivate-mark
Adding more Emacs-like bindings to ZSH's line editor (ZLE)
1,375,515,548,000
file1.txt:

    hi
    wonderful
    amazing
    sorry
    superman
    superhumanwith
    loss

file2.txt:

    1
    2
    3
    4
    5
    6
    7

When I try to combine them using

    paste -d" " file1.txt file2.txt > actualout.txt

actualout.txt:

    hi 1
    wonderful 2
    amazing 3
    sorry 4
    superman 5
    superhumanwith 6
    loss 7

But I want my output to look like this desired OUT.txt:

    hi             1
    wonderful      2
    amazing        3
    sorry          4
    superman       5
    superhumanwith 6
    loss           7

Which command can be used to combine 2 files and look like the desired output?

Solaris 5.10, ksh, nawk, sed, paste
    awk 'FNR==1{f+=1;w++;}
         f==1{if(length>w) w=length; next;}
         f==2{printf("%-"w"s",$0); getline<f2; print;}
        ' f2=file2 file1 file1

Note: file1 is quite intentionally read twice; the first time is to find the maximum line length, and the second time is to format each line for the final concatenation with corresponding lines from file2. file2 is read programmatically; its name is provided by awk's variable-as-an-arg feature.

Output:

    hi             1
    wonderful      2
    amazing        3
    sorry          4
    superman       5
    superhumanwith 6
    loss           7

To handle any number of input files, the following works. But note: it does not cope with repeating the same filename, i.e. each filename arg must refer to a different file. It can, however, handle files of different lengths - beyond a file's EOF, spaces are used.

    awk 'BEGIN{
      for(i=1; i<ARGC; i++) {
        while( (getline<ARGV[i])>0) {
          nl[i]++;
          if(length>w[i]) w[i]=length;
        }
        w[i]++;
        close(ARGV[i])
        if(nl[i]>nr) nr=nl[i];
      }
      for(r=1; r<=nr; r++) {
        for(f=1; f<ARGC; f++) {
          if(r<=nl[f]) getline<ARGV[f];
          else $0=""
          printf("%-"w[f]"s",$0);
        }
        print ""
      }
    }' file1 file2 file3 file4

Here is the output with 4 input files:

    hi             1 cat   A
    wonderful      2 hat   B
    amazing        3 mat   C
    sorry          4 moose D
    superman       5       E
    superhumanwith 6       F
    loss           7       G
                           H
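If hard-coding the column width is acceptable (it avoids the two-pass read), a plain shell loop reading both files in parallel gives the same alignment — a sketch; the width 15 is chosen by hand to fit superhumanwith, and the sample files are abridged:

```shell
# abridged stand-ins for the question's file1.txt / file2.txt
printf 'hi\nwonderful\nsuperhumanwith\n' > file1.txt
printf '1\n2\n6\n' > file2.txt

# read file1 on fd 3 and file2 on fd 4, padding column 1 to 15 characters
while IFS= read -r word <&3 && IFS= read -r num <&4; do
  printf '%-15s%s\n' "$word" "$num"
done 3<file1.txt 4<file2.txt
```

The loop stops at the shorter file, so unlike the second awk script above it does not pad past EOF.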
Adjust gap between 2 columns to make them look straight
1,375,515,548,000
The Ctrl+R works for root (well, toor); however, I cannot find why it does not work for my user.

User .zshrc:

    setopt AUTO_CD
    setopt CORRECT_ALL
    setopt EXTENDED_GLOB

    # History
    SAVEHIST=10000
    HISTSIZE=10000
    HISTFILE=~/.zsh/history
    setopt APPEND_HISTORY
    setopt EXTENDED_HISTORY
    setopt INC_APPEND_HISTORY
    setopt HIST_FIND_NO_DUPS
    setopt HIST_IGNORE_ALL_DUPS
    setopt HIST_IGNORE_SPACE
    setopt NO_HIST_BEEP
    setopt SHARE_HISTORY

    # Keys
    autoload zkbd
    [[ ! -d ~/.zkbd ]] && mkdir ~/.zkbd
    [[ ! -f ~/.zkbd/$TERM-${DISPLAY:-$VENDOR-$OSTYPE} ]] && zkbd
    source ~/.zkbd/$TERM-${DISPLAY:-$VENDOR-$OSTYPE}
    [[ -n ${key[Home]} ]] && bindkey "${key[Home]}" beginning-of-line
    [[ -n ${key[End]} ]] && bindkey "${key[End]}" end-of-line
    [[ -n ${key[Insert]} ]] && bindkey "${key[Insert]}" overwrite-mode
    [[ -n ${key[Delete]} ]] && bindkey "${key[Delete]}" delete-char
    [[ -n ${key[Up]} ]] && bindkey "${key[Up]}" up-line-or-history
    [[ -n ${key[Down]} ]] && bindkey "${key[Down]}" down-line-or-history
    [[ -n ${key[Left]} ]] && bindkey "${key[Left]}" backward-char
    [[ -n ${key[Right]} ]] && bindkey "${key[Right]}" forward-char

    # Auto completion
    autoload -U compinit promptinit
    compinit
    promptinit
    prompt clint
    zstyle ':completion::complete:*' use-cache 1
    setopt HASH_LIST_ALL

    # MIME
    autoload -U zsh-mime-setup
    zsh-mime-setup

    # Calc
    autoload -U zcalc

    # Login
    alias su="su - toor"

diff with root's .zshrc:

    --- -       2011-01-06 23:53:54.772440701 +0100
    +++ .zshrc  2011-01-06 23:50:00.000000000 +0100
    @@ -38,9 +38,5 @@
     zsh-mime-setup
     # Calc
     autoload -U zcalc
    -# Editor
    -export EDITOR=vim
    -# Paludis
    -alias background="schedtool -B -e"
    -alias lowprio="nice -n 20 ionice -c 3"
    -alias blowprio="ionice -c 3 schedtool -B -e nice -n 20"
    +# Login
    +alias su="su - toor"

Any ideas? zsh version 2.3.11.
If you have $EDITOR = vi* or VISUAL = vi* when zsh starts up, zsh uses vi insertion mode as the default keymap. Otherwise zsh uses emacs mode. You presumably set EDITOR (or VISUAL) to vim in your init file, but have no such setting when running as root, so you're seeing the vi mode map, in which history search is on ^X r and ^X s. Add bindkey -e to your .zshrc (or learn the vi map). As usual, this is in the documentation (zshzle man page), but you have to know what you're looking for.
zle - I cannot find why Ctrl+R does not work for non-root