1,375,515,548,000
I'm looking for a keyboard shortcut in tcsh to move the cursor back to the previous blank: not ESC+B which takes me back one word (for instance, in a path argument, to the previous path component) - I want to get to previous space or start of current path.
If you mean a keyboard shortcut at the prompt of interactive bash shells, you could bind the shell-backward-word and shell-forward-word widgets to some sequence of characters sent upon some key or key combination. For instance, if pressing Ctrl-Left sends the sequence \e[1;5D on your terminal, as it does in xterm, you could do:

    bind '"\e[1;5D": shell-backward-word'
    bind '"\e[1;5C": shell-forward-word'

Note that this does not jump from blank to blank but honours shell quoting. For instance, in a line like

    echo "foo 'bar baz'" blah/bleh bloh
    ^    ^               ^         ^

it would jump to the locations marked above.

Edit: for tcsh, you have three options:

Use the equivalent of the bash definition above, either in ~/.cshrc or in /etc/csh.cshrc.local to give all users the benefit:

    bindkey '\e[1;5D' backward-word
    bindkey '\e[1;5C' forward-word

Use the vi mode (with bindkey -v) and use the B and W keys in normal mode just like in vi.

In emacs mode (the default, re-enabled with bindkey -e), as for bash, bind the corresponding editor commands (vi-word-back and vi-word-fwd):

    bindkey '\e[1;5C' vi-word-fwd
    bindkey '\e[1;5D' vi-word-back

Note that those are like vi's B and W, so they jump between blank-separated words, not shell tokens (like quoted strings) as in the bash solution above.
tcsh shortcut to move the cursor back to previous space
1,375,515,548,000
Suppose I do something like:

    ln a_file_with_a_long_filename.pdf ~/path/to/a/new/hardlink/a_file_with_a_long_filename_slightly_modified.pdf

Is there a way to refer to and expand a_file_with_a_long_filename.pdf if my cursor is at the end of the string

    ln a_file_with_a_long_filename.pdf ~/path/to/a/new/hardlink/

in zsh? If not, what would you suggest to reduce the typing work?
This sounds like a fun code golf challenge. Here's one option:

Run an innocuous command with the filename; enter enough of the filename to allow TAB-completion:

    : a_file<TAB>

Use !!$ to refer to the last argument of the previous command:

    ln !!$ ~/path/to/a/new/hardlink/!!$

Thanks to zsh's helpful quoting, this is safe even in the face of IFS-containing filenames. You'll notice that as soon as you hit space after the first !!$, zsh expands the filename; ditto if you add a gratuitous space at the end of the command.

The number of characters required is:

3 x 2 = 6 for the two !!$
2 for the :<SPACE>

8 in total, plus enough of the filename for the initial tab completion.
Zsh refer to last element of current argument list and expand it
1,375,515,548,000
If I use PC-BSD with the default shell (Korn) then Ctrl+r doesn't work. Why won't it work? Ctrl-r was introduced to search your history in the late 1970s or early 80s and my BSD still can't do it (while Ubuntu can). Ctrl-r originates with Emacs doesn't it? When? 1975? 1983?
Ctrl+R works with ksh in emacs mode (ksh -o emacs, or set -o emacs within ksh), and ksh was most probably the first shell to support it. Only it's not as interactive as the i-search-back widget of zsh, bash or tcsh. In ksh (both ksh88 and ksh93), you type Ctrl+R, the text, then Return; and Ctrl+R followed by Return to search again with the same text. In vi mode, you can use ? to search backward and n for the next match.

That emacs incremental search feature was added to:

bash/readline: at least since July 1989, as the feature was already mentioned on usenet at that time, but probably not from the start, as the version of readline shipped with zsh-1.0 didn't have it.

zsh: since 2.0 in 1991, after the line editor was rewritten and no longer used readline.

tcsh: in V6.00.03, 10/21/91, but not bound by default (tcsh had another search mechanism on Meta-P for a while before that, though).

ksh: ksh was most probably the first Unix shell to have an emacs editing mode, written in 1982 by Mike Veach (as well as the vi mode by Pat Sullivan, reusing code that those two had already applied independently to the Bourne shell) at AT&T. ksh was first introduced outside AT&T at the 1983 USENIX conference, where those features were described, but was not commercially available until some time after that (1, 2). It's hard to tell whether ^R was already there at the time (in any case, it was already there in 1985 and 1986; see the usr/man/man1/ksh.1 ksh85 man page in the Unix V8 tarball at the Unix Heritage Society), but it's hard to imagine it wasn't, as it's an essential feature, especially for a shell, and I'd expect vi mode's ? to have been there at the time as well.
Why can't Korn Shell do ctrl-r?
1,375,515,548,000
This function can be used to help the user input a modification of some text:

    function change {
      bash -c "read -ei \"$1\" temp && echo \$temp"
    }

What is the idiomatic zsh way to do something similar?
With the vared builtin:

    change () {
      local temp=$1
      vared temp
      print -lr -- $temp
    }

And if you want to use the string entered by the user later in your script, it's just:

    temp='initial value'
    vared temp
Read a line with default input in zsh
1,305,731,391,000
From man file, EXAMPLES:

    $ file file.c file /dev/{wd0a,hda}
    file.c:    C program text
    file:      ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
               dynamically linked (uses shared libs), stripped
    /dev/wd0a: block special (0/0)
    /dev/hda:  block special (3/0)

    $ file -s /dev/wd0{b,d}
    /dev/wd0b: data
    /dev/wd0d: x86 boot sector

    $ file -s /dev/hda{,1,2,3,4,5,6,7,8,9,10}
    /dev/hda:   x86 boot sector
    /dev/hda1:  Linux/i386 ext2 filesystem
    /dev/hda2:  x86 boot sector
    /dev/hda3:  x86 boot sector, extended partition table
    /dev/hda4:  Linux/i386 ext2 filesystem
    /dev/hda5:  Linux/i386 swap file
    /dev/hda6:  Linux/i386 swap file
    /dev/hda7:  Linux/i386 swap file
    /dev/hda8:  Linux/i386 swap file
    /dev/hda9:  empty
    /dev/hda10: empty

    $ file -i file.c file /dev/{wd0a,hda}
    file.c:    text/x-c
    file:      application/x-executable, dynamically linked (uses shared libs),
               not stripped
    /dev/hda:  application/x-not-regular-file
    /dev/wd0a: application/x-not-regular-file

What does executable stripping mean? Why are some of the executables stripped while others are not?
If you compile an executable with gcc's -g flag, it contains debugging information. That means that for each instruction there is information about which line of the source code generated it, the names of the variables in the source code are retained and can be associated with the matching memory at runtime, etc. strip can remove this debugging information, and other data included in the executable which is not necessary for execution, in order to reduce the size of the executable.
What are stripped and not-stripped executables in Unix?
1,305,731,391,000
I have an executable linked like this:

    $ ldd a.out
        libboost_system-mt.so.1.47.0 => /usr/lib64/libboost_system-mt.so.1.47.0 (0x00007f4881f56000)
        libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007f4881cfb000)
        libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x00007f4881965000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f488175d000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f4881540000)
        libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007f4881239000)
        ...

where the libcrypto and libssl libraries are the openssl 1.0.0-fips libs. I want to experiment with the 1.0.1 libraries instead, and so I've built them in my home directory. Is there a way to get a.out to link against my new openssl libraries without a lot of pain? I would like to avoid:

having to relink a.out (because the build tools are massively complicated);
altering any global settings (because other devs work on this machine).

Is it possible to do what I'm hoping here?
You can temporarily substitute a different library for this particular execution. In Linux:

    The environment variable LD_LIBRARY_PATH is a colon-separated set of directories where libraries should be searched for first, before the standard set of directories; this is useful when debugging a new library or using a nonstandard library for special purposes. The environment variable LD_PRELOAD lists shared libraries with functions that override the standard set, just as /etc/ld.so.preload does.
    - Shared Libraries

You can also invoke the loader directly:

    /lib/ld-linux.so.2 --library-path path executable
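As a concrete sketch for the OpenSSL case: the directory below is a hypothetical stand-in for wherever you built the new libraries, and /bin/true stands in for your a.out.

```shell
# Prepend a local library directory for single invocations only;
# anything not found there silently falls back to the system directories,
# and nothing changes for other users of the machine.
LOCAL_LIBS="$HOME/openssl-1.0.1/lib"          # hypothetical build location
LD_LIBRARY_PATH="$LOCAL_LIBS" ldd /bin/true   # inspect what would be loaded
LD_LIBRARY_PATH="$LOCAL_LIBS" /bin/true       # run with the override in place
```

For the real program you would run `LD_LIBRARY_PATH="$LOCAL_LIBS" ./a.out` and check the libssl/libcrypto lines in the ldd output first.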
Changing linked library for a given executable (CentOs 6)
1,305,731,391,000
Has anyone used the gold linker before? To link a fairly large project, I had to use this as opposed to the GNU ld, which threw up a few errors and failed to link. How is the gold linker able to link large projects where ld fails? Is there some kind of memory trickery somewhere?
The gold linker was designed as an ELF-specific linker, with the intention of producing a more maintainable and faster linker than BFD ld (the “traditional” GNU binutils linker). As a side-effect, it is indeed able to link very large programs using less memory than BFD ld, presumably because there are fewer layers of abstraction to deal with, and because the linker’s data structures map more directly to the ELF format. I’m not sure there’s much documentation which specifically addresses the design differences between the two linkers, and their effect on memory use. There is a very interesting series of articles on linkers by Ian Lance Taylor, the author of the various GNU linkers, which explains many of the design decisions leading up to gold. He writes that

    The linker I am now working on, called gold, will be my third. It is exclusively an ELF linker. Once again, the goal is speed, in this case being faster than my second linker. That linker has been significantly slowed down over the years by adding support for ELF and for shared libraries. This support was patched in rather than being designed in.

(The second linker is BFD ld.)
What is the gold linker?
1,305,731,391,000
The man page for ld makes reference to AT&T’s Link Editor Command Language; however, a Google search does not offer a satisfactory explanation of what AT&T’s Link Editor Command Language is or was, other than pointing to said man pages, whereas I would expect a Wikipedia page to come up in the first five results. It seems like there is a vital piece of computing history missing here. Does anyone have some reference or historical documentation on this language? Something we can put in Wikipedia to preserve for posterity?
The Link Editor Command Language appears to be described in the AT&T UNIX™ PC Model 7300 Unix System V Programmers Guide, chapter 17: The Link Editor. I found a copy of the Programmer's Guide (pdf) at http://www.tenox.net/docs/. The relevant section is on page 524 of the linked .pdf.
What is AT&T’s Link Editor Command Language?
1,305,731,391,000
I installed an application (e.g. fdisk), but it requires libraries at run time. I am looking for a utility/tool which will help me create a static binary from already installed binaries, so that I can use it anywhere. The only reliable tool that I found is ErmineLight from here, but it is shareware. Is there any open-source software available for the same purpose?

EDIT: fdisk is just an example. Most of the time I work on LFS, so if I want to use any utility I need to follow these steps:

Download the source
configure
make
make test
make install

So, just to save time, I am looking for a solution in which I make a static binary from Debian, Fedora or another distribution, try it on LFS, and if it works fine, or as per my requirement, I will go with source-code compilation.
If fdisk is just an example and your goal is really to make static executables from dynamic executables, try Elf statifier. There's even a comparison with Ermine (by the Ermine vendor, so caveat (non-)emptor). Note that:

If you have many executables, their combined size is likely to be more than the combined size of the dynamically-linked executables plus the necessary dynamic libraries.
There are features of GNU libc that may not work in a statically-linked executable, such as NSS (databases of user names, host names, etc.) and locale-related features.

If your goal is to have a small, portable suite of system tools, you're looking for BusyBox, a suite of core tools intended for embedded systems (including fdisk). You may also be interested in a smaller standard library than Glibc, for example dietlibc or µClibc.
Creating Static Binary
1,305,731,391,000
I want to remove some of the paths the dynamic loader uses to find .so libraries, for testing purposes. I have found a way to add library paths:

    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/library"

Is there a variable similar to LD_LIBRARY_PATH that I can use to remove library paths such as /usr/local/lib or /usr/lib, which are not in LD_LIBRARY_PATH but are picked up by the loader anyway? I.e. how can I ignore paths that are given in /etc/ld.so.conf.d/?

The reason for this is that I am busy creating a program that, for a given executable, recursively finds its library dependencies. I want to see if I can make a program more portable by finding all its dependencies, copying those dependencies into a local directory, and making a local-run bash script that sets up LD_LIBRARY_PATH and then runs the executable. I want to test whether this local-run executable still works after the previously important library search paths are removed.
You would be interested in removing library paths if a given shared library has embedded paths via the rpath feature. Those are added at the time the library is created by the linker. You can remove (or alter) those paths using chrpath, e.g.,

    chrpath -d mylibraryfile.so

Removing pathnames from the LD_LIBRARY_PATH variable is also a possible area of interest; you can do that by string substitution and re-exporting the variable. However, the question does not seem to be concerned with that. There is no variable which acts to cancel out LD_LIBRARY_PATH.

For seeing library dependencies, the mention of /etc/ld.so.conf.d/ makes it sound as if the platform is only Linux. You can use ldd to list dependencies. Aside from OSX, all of the BSDs also support ldd. Here is one of the scripts which I use for this purpose:

    #!/bin/sh
    # $Id: ldd-path,v 1.1 2007/07/09 19:30:28 tom Exp $
    # Edit the output of ldd for the given parameters, yielding only the
    # absolute pathnames.
    ldd $* | sed \
        -e 's/([^)]*)//g' \
        -e 's/^.*=>//' \
        -e 's/[ ][ ]*//g' \
        -e '/^$/d'

But (addressing a comment), there is no portable mechanism for telling the loader to ignore an existing path. The GNU ld documentation gives a summary of what is sought, and in what order, in the description of the -rpath option. These items conclude the list:

The default directories, normally /lib and /usr/lib.
For a native linker on an ELF system, if the file /etc/ld.so.conf exists, the list of directories found in that file.

Further reading:

Can I change 'rpath' in an already compiled binary?
RPATH, RUNPATH, and dynamic linking
How to change the paths to shared libraries (.so files) for a single terminal instance
1,305,731,391,000
I'd like to try lld from LLVM. The doc on apt can be found here, but I don't know which package contains the lld executable. It seems the purpose of lld is to remove the system dependency, but clang doesn't have lld built in. (Not yet?)

I'm using the following example to test whether lld is used. GNU ld places some constraints on the order in which archive files appear, but lld seems to be more tolerant of this (if I understand it correctly), so this example should build successfully if lld is used. However, it fails on my box.

    # one.c
    extern int two();

    int main(int argc, char *argv[]) {
        two();
        return 0;
    }

    # two.c
    void two(){}

    $ clang -c two.c; ar cr two.a two.o ; clang -c one.c ; clang two.a one.o
    one.o: In function `main':
    one.c:(.text+0x19): undefined reference to `two'
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

If we use -v:

    $ clang -c two.c; ar cr two.a two.o ; clang -c one.c ; clang -v two.a one.o
    Ubuntu clang version 3.4-1ubuntu3 (tags/RELEASE_34/final) (based on LLVM 3.4)
    Target: x86_64-pc-linux-gnu
    Thread model: posix
    Found candidate GCC installation: /usr/bin/../lib/gcc/i686-linux-gnu/4.9
    Found candidate GCC installation: /usr/bin/../lib/gcc/i686-linux-gnu/4.9.0
    Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8
    Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8.2
    Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.9
    Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.9.0
    Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/4.9
    Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/4.9.0
    Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.8
    Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.8.2
    Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.9
    Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.9.0
    Selected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8
     "/usr/bin/ld" -z relro --hash-style=gnu --build-id --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o a.out /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/crt1.o /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/crti.o /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/crtbegin.o -L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.8 -L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu -L/lib/x86_64-linux-gnu -L/lib/../lib64 -L/usr/lib/x86_64-linux-gnu -L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../.. -L/lib -L/usr/lib two.a one.o -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/crtend.o /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/crtn.o
    one.o: In function `main':
    one.c:(.text+0x19): undefined reference to `two'
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

ENV:

    Ubuntu clang version 3.4-1ubuntu3 (tags/RELEASE_34/final) (based on LLVM 3.4)
    Target: x86_64-pc-linux-gnu
    Thread model: posix
Since January 2017, the LLVM apt repository includes lld, as do the snapshot packages available in Debian (starting with 4.0 in unstable, 5.0 in experimental). Since version 5, lld packages are available in Debian (lld-5.0 in stretch-backports, lld-6.0 in stretch-backports and Debian 10, lld-7 in Debian 9 and 10, lld-8 in buster-backports, and later packages in releases currently in preparation). To install the upstream packages on Debian or Ubuntu, follow the instructions for your distribution. Back in February 2015 when this answer was originally written, the LLVM apt repository stated that it included LLVM, Clang, compiler-rt, polly and LLDB. lld wasn't included. Even the latest snapshot packages in Debian (which are maintained by the same team as the LLVM packages) didn't include lld.
what's the name of ubuntu package contains llvm linker lld
1,305,731,391,000
My application loads custom code using dlopen on the fly. For common symbols, the global symbol table is used by default. However, I want to provide the following functionality: if the user has linked their .so with -Bsymbolic-functions, I pass the RTLD_DEEPBIND flag to the dlopen call. Is there a way I can programmatically determine, from C, whether a .so was linked with -Bsymbolic-functions?
You can use the standard ELF dump program:

    dump -Lv libxxx.so | grep SYMBOLIC
Is there a way to check whether a .so has been compiled with -Bsymbolic-functions flag?
1,305,731,391,000
    anisha@linux-y3pi:~/> google-earth
    ./googleearth-bin: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory

    anisha@linux-y3pi:~/> locate libGL
    /opt/google/earth/free/libGLU.so.1
    /usr/lib64/libGL.so
    /usr/lib64/libGL.so.1
    /usr/lib64/libGL.so.1.2
    /usr/lib64/libGLU.so.1
    /usr/lib64/libGLU.so.1.3.070802

    anisha@linux-y3pi:~/> uname -a
    Linux linux-y3pi 2.6.34-12-desktop #1 SMP PREEMPT 2010-06-29 02:39:08 +0200 x86_64 x86_64 x86_64 GNU/Linux

On OpenSUSE I tried zypper in Mesa-32bit to install the 32 bit version of the library:

    linux-y3pi:# zypper in Mesa-32bit
    Retrieving repository 'google-chrome' metadata [\]
    Failed to download /repodata/repomd.xml from http://dl.google.com/linux/chrome/rpm/stable/x86_64
    Abort, retry, ignore? [a/r/i/?] (a): r
    Retrieving repository 'google-chrome' metadata [|]
    Failed to download /repodata/repomd.xml from http://dl.google.com/linux/chrome/rpm/stable/x86_64
    Abort, retry, ignore? [a/r/i/?] (a): i
    Retrieving repository 'google-chrome' metadata [error]
    Repository 'google-chrome' is invalid.
    Can't provide /repodata/repomd.xml : User-requested skipping of a file
    Please check if the URIs defined for this repository are pointing to a valid repository.
    Warning: Disabling repository 'google-chrome' because of the above error.
    Retrieving repository 'google-earth' metadata [/]
    Failed to download /repodata/repomd.xml from http://dl.google.com/linux/earth/rpm/stable/i386
    Abort, retry, ignore? [a/r/i/?] (a): r
    Failed to download /repodata/repomd.xml from http://dl.google.com/linux/earth/rpm/stable/i386
    Abort, retry, ignore? [a/r/i/?] (a):
Like Renan said, this is the result of a 32/64 bit mismatch. On OpenSUSE, try

    zypper in Mesa-32bit

to install the 32 bit version of the library. In general, if you have the 64 bit version, you can use rpm -qf to find the package containing the library:

    % rpm -qf /usr/lib64/libGLU.so.1
    Mesa-7.11-11.4.2.x86_64

On OpenSUSE, the naming convention for 32-bit libraries is to append -32bit to the package name, so strip the version and architecture information and add the suffix to obtain Mesa-32bit.
error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory
1,305,731,391,000
I'm trying to understand how symbol tables relate to the .data section in ELF. First, an assumption that I'm using as ground to start with: a symbol is a human-readable (or "as written in the source file") representation of a function or a variable, mapped to the actual binary value (that the CPU operates on) of it. Here is an example:

    //simple.c
    int var_global_init = 5;

    int main(void)
    {
        return 0;
    }

Let's build it and examine the binary:

    $ gcc simple.c -o simple
    $ objdump -t simple | grep var_global_init
    0000000000201010 g     O .data  0000000000000004   var_global_init

It lists the symbol in the .data section of the ELF file. Page 20 of the ELF documentation defines the .data section as:

    These sections hold initialized data that contribute to the program's memory image.

Ok, that kind of fits. So then I ask myself: does this mean that the symbol table is embedded in the .data section? But that seems to be disproved by the example below:

    $ readelf -s simple

    Symbol table '.symtab' contains 66 entries:
    ....
    50: 0000000000201010     4 OBJECT  GLOBAL DEFAULT   23 var_global_init

readelf shows that there is a dedicated .symtab section in the ELF that holds the symbol. Does the .data section need the actual symbol table? The first example points towards concluding that there is one in the data section, but shouldn't it be able to execute with just the binary values? By checking a hexdump I was able to detect only a single entry, so either I got the concepts wrong or one of the tools is lying. :)
The .data section contains the data itself, i.e. the four bytes which hold the int value 5. The .symtab section contains the symbols, i.e. the names given to various parts of the binary; the var_global_init symbol name points to the four bytes of storage in the .data section. That’s why you only see one entry: there is only one symbol, in the symbol table. But you do need both sections if you want to go from a name to a value: the symbol table tells you where to find the value corresponding to the var_global_init symbol, and the data section contains the storage for the value.
Symbol table in the .data section of ELF
1,305,731,391,000
I am attempting to assemble the assembly source file below using the following NASM command:

    nasm -f elf -o test.o test.asm

This completes without errors and I then try to link an executable with ld:

    ld -m elf_i386 -e main -o test test.o -lc

This also appears to succeed and I then try to run the executable:

    $ ./test
    bash: ./test: No such file or directory

Unfortunately, it doesn't seem to work. I tried running ldd on the executable:

    linux-gate.so.1 =>  (0xf777f000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf7598000)
    /usr/lib/libc.so.1 => /lib/ld-linux.so.2 (0xf7780000)

I installed the lsb-core package and verified that /lib/ld-linux.so.2 exists. How come I still can't run the executable? I'm attempting to do this on a machine running the 64-bit edition of Ubuntu 15.04.

The source code:

    ; This code has been generated by the 7Basic
    ; compiler <http://launchpad.net/7basic>

    extern printf
    extern scanf
    extern read
    extern strlen
    extern strcat
    extern strcpy
    extern strcmp
    extern malloc
    extern free

    ; Initialized data
    SECTION .data
    s_0 db "Hello, World!",0
    printf_i: db "%d",10,0
    printf_s: db "%s",10,0
    printf_f: db "%f",10,0
    scanf_i: db "%d",0
    scanf_f: db "%lf",0

    ; Uninitialized data
    SECTION .bss
    v_12 resb 4
    v_0 resb 4
    v_4 resb 8

    SECTION .text

    ; Code
    global main
    main:
        finit
        push ebp
        mov ebp,esp
        push 0
        pop eax
        mov [v_12], eax
    l_0:
        mov eax, [v_12]
        push eax
        push 5
        pop edx
        pop eax
        cmp eax, edx
        jl l_2
        push 0
        jmp l_3
    l_2:
        push 1
    l_3:
        pop eax
        cmp eax, 0
        je l_1
        push s_0
        push printf_s
        call printf
        add esp, 8
        mov eax, [v_12]
        push eax
        push 1
        pop edx
        pop eax
        add eax, edx
        push eax
        pop eax
        mov [v_12], eax
        jmp l_0
    l_1:
        mov esp,ebp
        pop ebp
        mov eax,0
        ret

Here's the output of strings test:

    /usr/lib/libc.so.1
    libc.so.6
    strcpy
    printf
    strlen
    read
    malloc
    strcat
    scanf
    strcmp
    free
    GLIBC_2.0
    t'hx
    Hello, World!
    .symtab
    .strtab
    .shstrtab
    .interp
    .hash
    .dynsym
    .dynstr
    .gnu.version
    .gnu.version_r
    .rel.plt
    .text
    .eh_frame
    .dynamic
    .got.plt
    .data
    .bss
    test.7b.out
    printf_i
    printf_s
    printf_f
    scanf_i
    scanf_f
    v_12
    _DYNAMIC
    _GLOBAL_OFFSET_TABLE_
    strcmp@@GLIBC_2.0
    read@@GLIBC_2.0
    printf@@GLIBC_2.0
    free@@GLIBC_2.0
    _edata
    strcat@@GLIBC_2.0
    strcpy@@GLIBC_2.0
    malloc@@GLIBC_2.0
    scanf@@GLIBC_2.0
    strlen@@GLIBC_2.0
    _end
    __bss_start
    main
You also need to link startup fragments like crt1.o and others if you want to call libc functions. The linking process can be very complicated, so you'd better use gcc for that. On amd64 Ubuntu, you can:

    sudo apt-get install gcc-multilib
    gcc -m32 -o test test.o

You can see the files and commands used for the link by adding the -v option.
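Incidentally, the confusing "No such file or directory" from bash usually means the kernel could not find the ELF interpreter recorded in the binary (the bogus /usr/lib/libc.so.1 visible at the top of the strings and ldd output), not the binary itself. You can inspect the requested interpreter with readelf; shown here on /bin/sh as a stand-in, and you would run it on ./test to see the bad path:

```shell
# Print the program interpreter an ELF executable requests:
readelf -l /bin/sh | grep -i 'interpreter'
```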
Unable to run an executable built with NASM
1,305,731,391,000
    $ file /lib/ld-linux.so.2
    /lib/ld-linux.so.2: symbolic link to i386-linux-gnu/ld-2.27.so

    $ readlink -f /lib/ld-linux.so.2
    /lib/i386-linux-gnu/ld-2.27.so

    $ file /lib/i386-linux-gnu/ld-2.27.so
    /lib/i386-linux-gnu/ld-2.27.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=7a59ed1836f27b66ffd391d656da6435055f02f8, stripped

So is ld-2.27.so a shared library? It is said to be a dynamic linker/loader and is mentioned in section 8 of the manual. So is it an executable? If yes, why is it named like a shared library, as *.so? If no, how shall I understand that it is like an executable, being a dynamic linker/loader and mentioned in section 8 of the manual? Thanks.
It is both, which is perfectly valid. The ld.so-style naming scheme is largely historical; the first dynamic linker in this style was SunOS 4’s, which was named ld.so (I have its history somewhere, I’ll clarify this once I’ve found it). But there are valid reasons for it to be named like a shared library rather than an executable, including: it exists to serve executables, like shared libraries (it has no purpose without executables to run); it is a shared ELF object, but it doesn’t require an interpreter (it has no .interp entry); this is typical of libraries (shared, or rather dynamically-linked, executables always require an interpreter; otherwise they’re statically-linked). The distinction between executables and libraries is somewhat fluid in ELF; any ELF object with an entry point and/or an interpreter can be an executable, regardless of its other properties.
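You can observe the executable side directly by asking the dynamic linker to run a program (a sketch; the path below is the x86-64 glibc one and differs on other platforms):

```shell
# ld.so runs as a program in its own right:
/lib64/ld-linux-x86-64.so.2 /bin/echo hello
# and, unlike ordinary dynamic executables, it requests no interpreter:
readelf -l /lib64/ld-linux-x86-64.so.2 | grep -i interpreter || echo "no INTERP entry"
```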
Is ld.so an executable?
1,305,731,391,000
I am trying to compile a program of mine that needs C++11 features and a newer version of boost than is installed on the target machine. I therefore compiled and installed gcc 4.9 to a local directory (/secured/local) with an in-tree build of all dependencies and the binutils. I then downloaded boost 1.55 and ran

    ./bootstrap.sh --prefix=/secured/local && ./b2 install

to install boost. Both compilations worked fine, and gcc -std=c++11 also works. My program is built using cmake with the usual FindXX.cmake procedure for finding files. I am running cmake like this:

    cmake ../source/ -DBOOST_ROOT=/secured/local -DCMAKE_EXE_LINKER_FLAGS='-Wl,-rpath,/secured/local/lib'

which successfully finds my new boost installation and the new version of gcc. Compilation and linking both work flawlessly. However, upon execution of my program I am getting the following errors:

    $ ./surface
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ./surface)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by ./surface)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./surface)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.19' not found (required by ./surface)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by /secured/local/lib/libconfig++.so.9)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by /secured/local/lib/libboost_program_options.so.1.55.0)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by /secured/local/lib/libboost_program_options.so.1.55.0)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /secured/local/lib/libboost_program_options.so.1.55.0)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /secured/local/lib/libboost_filesystem.so.1.55.0)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /secured/local/lib/libboost_regex.so.1.55.0)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by /secured/local/lib/libboost_regex.so.1.55.0)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by /secured/local/lib/libboost_regex.so.1.55.0)
    ./surface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by /secured/local/lib/libboost_regex.so.1.55.0)

Running objdump on boost shows:

    $ objdump -x /secured/local/lib/libboost_program_options.so.1.55.0 | grep stdc++
      NEEDED               libstdc++.so.6
    required from libstdc++.so.6:

It appears as if both my program and the boost libs try to use the old /usr/lib64/libstdc++.so.6 instead of the new one in /secured/local/lib. What did I do wrong in my procedure? Note that I am trying to avoid setting LD_LIBRARY_PATH.
Installing gcc puts a libstdc++.so.6 into both $PREFIX/lib and $PREFIX/lib64. Using the latter as RPATH for boost and my program solved the issue. Using only the former results in a fall-back to the system libstdc++.so.6.
how to specify the libstdc++.so.6 to use
1,305,731,391,000
I'm not very knowledgeable on this topic, and therefore can't figure out why the following command does not work:

    $ gfortran -o dsimpletest -O dsimpletest.o ../lib/libdmumps.a \
        ../lib/libmumps_common.a -L/usr -lparmetis -lmetis -L../PORD/lib/ \
        -lpord -L/home/eiser/src/scotch_5.1.12_esmumps/lib -lptesmumps -lptscotch \
        -lptscotcherr /opt/scalapack/lib/libscalapack.a -L/usr/lib/openmpi/ \
        -lmpi -L/opt/scalapack/lib/librefblas.a -lrefblas -lpthread
    /usr/bin/ld: cannot find -lrefblas
    collect2: ld returned 1 exit status

This happens when compiling the mumps library. The above command is executed by make. I've got librefblas.a in the correct path:

    $ ls /opt/scalapack/lib/ -l
    total 20728
    -rw-r--r-- 1 root root   619584 May  3 14:56 librefblas.a
    -rw-r--r-- 1 root root  9828686 May  3 14:59 libreflapack.a
    -rw-r--r-- 1 root root 10113810 May  3 15:06 libscalapack.a
    -rw-r--r-- 1 root root   653924 May  3 14:59 libtmg.a

Question 1: I thought the -L switch of ld takes directories; why does it refer to the file directly here? If I remove the librefblas.a from the -L argument, I get a lot of "undefined reference" errors.

Question 2: -l should imply looking for .a and then looking for .so, if I recall correctly. Is it a problem that I don't have the .so file?

I tried to find out by using gfortran -v ..., but this didn't help me debug it.
I was able to solve this with the help of the comments, particular credit to @Mat. Since I wanted to compile the openmpi version, it helped to use mpif90 instead of gfortran, which, on my system, is $ mpif90 --showme /usr/bin/gfortran -I/usr/include -pthread -I/usr/lib/openmpi -L/usr/lib/openmpi -lmpi_f90 -lmpi_f77 -lmpi -ldl -lhwloc
Why can't ld find this library?
1,305,731,391,000
In the past I have embedded resource files (images) into programs by first converting them to .o files using the GNU linker. For example: ld -r -b binary -o file.o file.svg Starting with FreeBSD 12, the default linker has changed from GNU's to LLVM's. Although the linker appears to understand the command line options, it results in an error. For example: ld -r -b binary -o file.o file.svg ld: error: target emulation unknown: -m or at least one .o file required Also tried using the command line options from the ld.lld(1) manual page: ld --relocatable --format=binary -o file.o file.svg ld: error: target emulation unknown: -m or at least one .o file required Am I using the correct tool? Do I need to specify a value for the -m option?
It seems you need to add -z noexecstack (this was added for ELF binaries as well in LLD 7.0.0). The default is to have an executable stack region which is vulnerable to exploitation via stack memory. Your binary image does not have an executable stack and I believe that is why it fails. The error throws you off as it asks you to tell what target emulation to use for your stack (which you do not have). David Herrmann did all the hard work and found a cross-platform solution which covers: GNU ld, GNU gold, GNU libtool; works with cross-compiling; works with LLVM; and does not require any external non-standard tools. The magic invocation is then: $(LD) -r -o "src/mydata.bin.o" -z noexecstack --format=binary "src/mydata.bin" And most often you want that binary segment to be read-only: $(OBJCOPY) --rename-section .data=.rodata,alloc,load,readonly,data,contents "src/mydata.bin.o" UPDATE: I could not test as my system was: $ uname -r 11.2-STABLE $ ld -V GNU ld 2.17.50 [FreeBSD] 2007-07-03 Supported emulations: elf_x86_64_fbsd elf_i386_fbsd I spun up a VM with FreeBSD 12.0 to test this out and found this: $ uname -r 12.0-RELEASE $ ld -V LLD 6.0.1 (FreeBSD 335540-1200005) (compatible with GNU linkers) The -z noexecstack was only added in 7.0.0 and it is not listed in the man page for 6.0.1. More annoyingly, specifying unsupported values for -z does not trigger an error! I have not upgraded to LLVM 7 to test if that does the trick. @Richard Smith found a proper solution himself by specifying the emulation with -m in another answer. That route would be so much easier if LLD listed supported emulations with -V. If you use the file command on file.o you will see it identifies as SYSV ELF. This might be good enough for you. But if you want the exact same as the system then use elf_amd64_fbsd, which is an alias for elf_x86_64_fbsd. Annoyingly, ld -V does not output supported emulations with LLD as GNU ld does.
$ file /bin/cat /bin/cat: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 12.0 (1200086), FreeBSD-style, stripped $ ld -r -b binary -m elf_amd64 -o data.o data.bin $ file data.o data.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped $ ld -r -b binary -m elf_amd64_fbsd -o data.o data.bin $ file data.o data.o: ELF 64-bit LSB relocatable, x86-64, version 1 (FreeBSD), not stripped elf_amd64_fbsd is an alias for elf_x86_64_fbsd (see D7837 and D24356). Hopefully LLD will add the emulations to the -V output.
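As a sanity check of the embedding itself, here is the same invocation run on a GNU/Linux toolchain, where GNU ld picks the native emulation automatically; under LLD on FreeBSD you would add -m elf_x86_64_fbsd as shown above. The payload file is invented.

```shell
cd "$(mktemp -d)"
printf 'hello payload' > data.bin
# GNU ld infers the native emulation; with LLD you would add
# e.g. -m elf_x86_64_fbsd as discussed above.
ld -r -b binary -o data.o data.bin
# The object exports symbols derived from the input file name:
#   _binary_data_bin_start, _binary_data_bin_end, _binary_data_bin_size
nm data.o
```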
Embedding binary data into an executable using LLVM tools
1,305,731,391,000
I have OpenSSL installed through the Homebrew package manager. I have found the library and header files I need. The headers are: /usr/local/Cellar/openssl/1.0.2h_1/include/openssl /usr/local/Cellar/openssl/1.0.2j/include/openssl /usr/local/Cellar/openssl/1.0.2k/include/openssl The library files are: /usr/lib/libssl.0.9.7.dylib /usr/lib/libssl.0.9.8.dylib /usr/lib/libssl.dylib I've tried several gcc commands to try to link the OpenSSL library, including: gcc md5.c -I/usr/local/Cellar/openssl/1.0.2k/include -lssl gcc md5.c -I/usr/local/Cellar/openssl/1.0.2k/include -llibssl gcc md5.c -I/usr/local/Cellar/openssl/1.0.2k/include -llibssl.dylib gcc md5.c -I/usr/local/Cellar/openssl/1.0.2k/include -llibssl.0.9.8 gcc md5.c -I/usr/local/Cellar/openssl/1.0.2k/include -llibssl.0.9.8.dylib All of them produce either a "File not found" error or a linker error. What's the proper way to do this?
It looks like you are trying to link against the openssl libraries installed with your os, rather than the homebrew libraries. Try to find where homebrew installed the 1.0.2k libraries. find /usr/local/Cellar/ -name "libssl.*" You should find something like /usr/local/Cellar/_path_of some_sort/libssl.a. Try to link against this library rather than the ones in /usr/lib. The /usr/lib libraries are old and not compatible with the header files you are using. gcc md5.c -I/usr/local/Cellar/openssl/1.0.2k/include -L/usr/local/Cellar/path_of_some_sort/ -lssl -lcrypto -o md5
How to link OpenSSL library in macOS using gcc?
1,305,731,391,000
I was curious whether it's possible to build the Linux kernel without the GNU toolchain (gcc+autotools). I found out that it is: after applying patches from llvm.linuxfoundation.org, it was possible to build the Linux kernel with clang. The GNU linker was used. The alternative to ld is gold, which is also part of GNU binutils. The popular musl+clang toolchain ELLCC also uses GNU binutils. There are more alternatives: lld (no stable releases), mclinker (no stable releases). Do alternatives to GNU binutils exist? Probably, building on Mac OS X or FreeBSD doesn't involve GNU tools.
As of 2018 lld seems mature enough to be used in production, not 100% compatible with bfd, but can be used as drop-in replacement in most cases. Update: recently, a new linker appeared, and it is under active development: mold.
What are the alternatives to GNU ld?
1,305,731,391,000
I'm not sure whether the dynamic linker /usr/bin/ld is automatically invoked by the operating system when the ELF file is loaded, or whether it's invoked by code embedded in the ELF file. When I use r2 to debug an ELF file, it stops at the first instruction to be executed, which should be dynamic linker code, but I don't know if this code is part of the ELF file.
The kernel loads the dynamic loader (which isn’t /usr/bin/ld; see what are the executable ELF files respectively for static linker, dynamic linker, loader and dynamic loader?). When you run an ELF binary, the kernel uses its specific ELF binary loader; for dynamically-linked binaries, this looks for the interpreter specified in the ELF headers, loads that and instructs it to run the target binary. The interpreter is the dynamic loader, which loads any required libraries, resolves the undefined symbols, and jumps to the program's start address. (See What types of executable files exist on Linux? for details of how binaries are loaded in the kernel.) LWN has an article which goes into the details, How programs get run: ELF binaries.
Is the dynamic linker automatically invoked by the operating system or by code embedded in the ELF file?
1,305,731,391,000
Is there any relationship between the linking of binaries (as in dynamic or static linking) and symbolic links. Do they interact in any way, or share some history, or are these two completely orthogonal concepts that just happen to be called similarly?
Not at all. One involves redirecting all references to a file name (any kind of file) to a different file instead (symlinks), and the other involves building an executable image by copying code from a library into the executable (static linking) or referencing a dynamic library that contains the required code and loading that dynamic library at runtime (dynamic linking).
Is there any relationship between linking binaries and symbolic links?
1,305,731,391,000
I have built Qt6 in an Alma8-based Docker container, with the Docker host being Fedora 35. Under some circumstances (described below), all Qt libs cannot load libQt6Core.so[.6[.2.4]]. But that file exists and the correct directory is searched for that file. Other Qt libs (e.g., libQt6Dbus.so) are found and loaded. Extensive debugging, re-building and searching the web did not yield any clues as to what the underlying cause is and how I could fix it. Locating the problem I have narrowed down the problem to the following scenario: I created two minimal VMs, one with CentOS 7 and one with Alma8. I installed Docker from the official repos into both of them. I ran the same Docker image in both VMs and installed the same Qt6 package. It breaks when the Docker host is CentOS 7. It works when the Docker host is Alma8. Theory and Question Qt6 was built on Alma8 and links to some system libraries newer than what CentOS 7 provides, so Qt6 cannot run under CentOS 7 (this is totally expected and okay). But it should run anywhere in the Alma8 Docker container. Container images should be able to run anywhere, but in this case "something" from the host OS sneaks into the container and causes the issue – even though both containers use the exact same image! The question is: What is this "something" and how/why does it break the build?
What I tried I inspected libQt6Gui.so to see whether or not it can load libQt6Core.so, and I inspected libQt6Core.so to see if something looks bogus, using: ldd and LD_DEBUG=libs ldd, which indeed showed some differences (see below); libtree, which showed no differences (but a nice tree :)); pyldd (from conda-build); readelf -d. What I also tried: setting LD_LIBRARY_PATH (did not change anything – no surprise, since I know that the correct path is always searched); building Qt6 in an Alma8 container with a CentOS 7 host (build failed with "libQt6Core.so.6: cannot open file", same error as with the built lib); building Qt6 in a CentOS 7 container (build failed due to other problems I could not yet fix). Differences from ldd In the screenshots below, you see the Alma8 Docker container on a CentOS 7 host on the left and the Alma8 Docker container on an Alma8 host on the right. The first two images show the results for ldd /opt/emsconda/lib/libQt6Gui.so. libQt6Core cannot be found on the left but is found on the right. The second screenshot shows that other Qt libs are found and loaded. The ICU libs are also missing on the left – maybe they are only loaded when libQt6Core was also loaded? This screenshot shows the results of LD_DEBUG=libs ldd .... You can see that in both cases, libQt6Core is searched for in the correct location (/opt/emsconda/lib). But it is only loaded in the right container. The left one additionally looks in /opt/emsconda/lib/./ (haha) and then silently walks on to the next lib ... I could not find any error messages. This file is just not opened/loaded. Inspecting libQt6Core.so itself might give us a clue. It links to linux-vdso.so.1. According to this SO question, that file is a virtual lib injected into the userspace by the OS kernel. Since Docker containers do not run their own kernel, I suspect that that file comes from the host OS. Maybe libQt6Core relies on some functionality in linux-vdso.so.1 that the CentOS 7 kernel cannot provide?
I have no idea ... Since nothing I tried so far yields an error message, I have no clue what the actual problem might be or how to proceed with debugging. I'd be grateful for any kind of hints, tips or help.
The question was answered in the Qt forums. Summary: The .so contains an ABI tag that denotes the minimum kernel version required. You can see this via objdump -s -j .note.ABI-tag libQt6Core.so.6.2.4. The result is in the last three blocks (0x03 0x11 0x00 -> 3.17.0 in my case). This information is placed there on purpose, since Qt uses a few system calls that are only available with newer kernels. Glibc reads this information when it loads a shared object and compares it to the current kernel's version. If it doesn't match, the file is not loaded. Since Docker has no kernel of its own, the Docker host's kernel version is used for that comparison. So even if the Docker image is Alma8, the kernel is still the old v3.10.0 from the CentOS 7 host in my case. You can use strip --remove-section=.note.ABI-tag libQt6Core.so.6.2.4. Qt seems to have fallback code, so nothing breaks. Source: https://github.com/Microsoft/WSL/issues/3023
Existing .so file cannot be loaded even though it exists, seems to depend on Docker host OS
1,305,731,391,000
If I run the command foo specifying a different libc to use as follows: LD_LIBRARY_PATH=$PATH_TO_MY_CUSTOM_LIBC foo Is the globally defined libc used to run any of the commands given above? For the sake of context: consider the situation where your libc is physically present and accessible on your machine, but cannot be used for some reason. Given a logged-in shell, in order to execute a specific command, you would need to provide a different libc. Specifying LD_LIBRARY_PATH inline would set it to the location of a working libc without any apparent need to call the globally defined one. Would the globally defined libc be called all the same in order to define the new environment variable locally?
No. Dynamic linking isn't part of the libc in the sense of /lib/libc.so.6; it is the functionality of /lib/ld.so (both of them have had their file names and paths change a little over the years, but the essence is the same). Yes, ld.so, the dynamic linker, is a shared library as well. Loading it is the first thing most Linux binaries do, even before calling their main() function. Although ld.so is a different file from libc.so, it is also part of the GNU libc distribution, in both its source and compiled binary forms. ld.so is linked in via a hardcoded code chunk that gcc places in every Linux ELF binary. Its path is also hardcoded into the binary. You can't change that easily, although it is possible if needed. If you override libc.so.6 with an alternate LD_LIBRARY_PATH setting, this library will supersede the ordinary libc with your own, but it will still be loaded by the normal ld.so. Thus the answer to your question is "yes, but...".
Specifying local libc does call global libc?
1,305,731,391,000
I'm finding a bunch of stuff where working packages contain files where ldd returns "not found" for some libraries. For example... /usr/lib64/thunderbird/libprldap60.so libldap60.so => not found /usr/lib64/libreoffice/program/libofficebean.so libjawt.so => not found We have hundreds of users using Thunderbird and Libre Office, and no one has reported any problems. And these files exist on the system: /usr/lib64/thunderbird/libldap60.so /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.33.x86_64/jre/lib/amd64/libjawt.so /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64/jre/lib/amd64/libjawt.so For this example, both /usr/bin/thunderbird and /usr/bin/libreoffice are wrapper scripts to launch the respective programs, and I expected that those scripts would set up the environment so that those libraries become visible, but it doesn't look like that happens in these scripts. There are hundreds more examples of this throughout our system, I just chose these two as an example. Can anyone explain what's going on?
The direct (perhaps obvious) answer is that the search path for the libraries you are looking at with ldd does not include the directories where the library's own dependencies are located. Normally, unless a library's dependencies are found in system-wide standard locations, the library should have been built with a run path specified (by using the environment variable $LD_RUN_PATH or the appropriate linker option). Otherwise the libraries will not be found later on at run time, as you have found with ldd. So why does Thunderbird work anyway despite this "problem"? There are a few ways that the necessary libraries might be found anyway despite the missing run path: The environment variable $LD_LIBRARY_PATH is set at run time and supplies a list of additional directories to search in. The necessary directory might have gotten included into the search path because it was found in the run path of some other unrelated library that happened to be loaded prior to the current one. By the way, I'm not sure if this works as an accident of implementation or if the standard specifies it. One way or the other, it is fragile because it crucially depends on the exact order in which libraries are loaded. The library might have been loaded manually by the application using the dlopen() function given a full pathname. Thunderbird appears to be using the last of those techniques. I looked at strace output of what it does at startup and it seems to do this: Locate the directory where its own binary comes from. This is always possible because the shell script helper that launches Thunderbird does so with a full pathname. Open the text file dependentlibs.list found in that directory. For each filename in this text file, in order, prepend the same directory to form a full path name, and load that as a library using dlopen(). Now all those dependent libraries like libldap60.so which you mentioned are "preloaded", and other libraries that require them don't need to find them again.
Notice that the order of the files listed in dependentlibs.list is significant. The reason Thunderbird does this is so that the directory where it is located does not have to be hardcoded into either the application or into the run path of any of its internal libraries. I don't know what Java does, but it is no doubt something similar.
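The run-path mechanism discussed above can be reproduced in a few lines; everything here (libdep, libmain, the symbol names) is invented for the demonstration:

```shell
cd "$(mktemp -d)"
cat > dep.c <<'EOF'
int dep_value(void) { return 42; }
EOF
cat > main_lib.c <<'EOF'
int dep_value(void);
int wrapped(void) { return dep_value(); }
EOF
gcc -shared -fPIC dep.c -o libdep.so
# Without a run path the loader has no idea where libdep.so lives,
# so ldd reports it as "not found":
gcc -shared -fPIC main_lib.c -o libnorpath.so -L. -ldep
ldd ./libnorpath.so
# With $ORIGIN in the run path, the library's own directory is searched:
gcc -shared -fPIC main_lib.c -o libmain.so -L. -ldep -Wl,-rpath,'$ORIGIN'
ldd ./libmain.so
```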
Why do some files of working packages return "not found" for some libraries of ldd's output?
1,305,731,391,000
I'm starting a project that requires an external shared library third-party.so. I've placed it in /usr/lib. However, when I run sudo ldconfig -v, it's not listed. ldconfig -p | grep third-party.so proves that it wasn't added to the cache. Does this mean that there is something wrong with the library? Or am I missing some detail? I've run readelf on it, and it didn't detect any surprises. Running file /usr/lib/third-party.so returns: /usr/lib/third-party.so: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, stripped
An older colleague of mine took a look and gave the solution: the .so must have a lib prefix: libthird-party.so
Placed library in /usr/lib, but ldconfig doesn't put it in cache
1,305,731,391,000
I have a game I'm writing which recently required libjpeg. I wrote some code using libjpeg on some-other-machine and it worked as expected. I pulled the code to this machine and tried compiling and running it and have been getting the runtime error out of libjpeg: Wrong JPEG library version: library is 62, caller expects 80 If I use ldd to see what the binary is linked to, I get: ldd Debug/tc | grep jpeg libjpeg.so.62 => /usr/lib/x86_64-linux-gnu/libjpeg.so.62 (0x00007f50f02f2000) My compile flags include -ljpeg. The current jpeg related shared objects in my /usr/lib looks like this: find | grep jpeg | xargs ls -l --color -rwxr-xr-x 1 root root 61256 2011-09-26 15:43 ./gimp/2.0/plug-ins/file-jpeg -rw-r--r-- 1 root root 5912 2011-10-01 06:40 ./grub/i386-pc/jpeg.mod -rw-r--r-- 1 root root 40264 2011-08-24 05:41 ./gstreamer-0.10/libgstjpegformat.so -rw-r--r-- 1 root root 78064 2011-08-24 05:41 ./gstreamer-0.10/libgstjpeg.so -rw-r--r-- 1 root root 17920 2011-09-27 17:30 ./i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders/libpixbufloader-jpeg.so lrwxrwxrwx 1 root root 17 2011-08-10 14:07 ./i386-linux-gnu/libjpeg.so.62 -> libjpeg.so.62.0.0 -rw-r--r-- 1 root root 145068 2011-08-10 14:07 ./i386-linux-gnu/libjpeg.so.62.0.0 -rw-r--r-- 1 root root 30440 2011-09-30 05:25 ./i386-linux-gnu/qt4/plugins/imageformats/libqjpeg.so -rw-r--r-- 1 root root 924 2011-06-15 05:05 ./ImageMagick-6.6.0/modules-Q16/coders/jpeg.la -rw-r--r-- 1 root root 39504 2011-06-15 05:05 ./ImageMagick-6.6.0/modules-Q16/coders/jpeg.so -rw-r--r-- 1 root root 10312 2011-06-03 00:18 ./imlib2/loaders/jpeg.so -rw-r--r-- 1 root root 43072 2011-10-21 19:11 ./jvm/java-6-openjdk/jre/lib/amd64/libjpeg.so -rw-r--r-- 1 root root 23184 2011-10-14 02:46 ./kde4/jpegthumbnail.so -rw-r--r-- 1 root root 132632 2009-04-30 00:24 ./libopenjpeg-2.1.3.0.so lrwxrwxrwx 1 root root 22 2009-04-30 00:24 ./libopenjpeg.so.2 -> libopenjpeg-2.1.3.0.so -rw-r--r-- 1 root root 23224 2011-08-03 04:20 ./libquicktime2/lqt_mjpeg.so -rw-r--r-- 1 root 
root 27208 2011-08-03 04:20 ./libquicktime2/lqt_rtjpeg.so -rw-r--r-- 1 root root 47800 2011-09-24 10:12 ./strigi/strigiea_jpeg.so -rw-r--r-- 1 root root 3091 2011-05-18 05:25 ./syslinux/com32/include/tinyjpeg.h -rw-r--r-- 1 root root 22912 2011-09-27 17:38 ./x86_64-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders/libpixbufloader-jpeg.so -rw-r--r-- 1 root root 226798 2011-08-10 14:06 ./x86_64-linux-gnu/libjpeg.a -rw-r--r-- 1 root root 935 2011-08-10 14:06 ./x86_64-linux-gnu/libjpeg.la lrwxrwxrwx 1 root root 17 2011-08-10 14:06 ./x86_64-linux-gnu/libjpeg.so -> libjpeg.so.62.0.0 lrwxrwxrwx 1 root root 17 2011-10-15 10:50 ./x86_64-linux-gnu/libjpeg.so.62 -> libjpeg.so.62.0.0 -rw-r--r-- 1 root root 150144 2011-08-10 14:06 ./x86_64-linux-gnu/libjpeg.so.62.0.0 lrwxrwxrwx 1 root root 16 2011-10-15 10:50 ./x86_64-linux-gnu/libjpeg.so.8 -> libjpeg.so.8.3.0 lrwxrwxrwx 1 root root 19 2011-11-30 01:25 ./x86_64-linux-gnu/libjpeg.so.8.3.0.bak -> ./libjpeg.so.62.0.0 -rw-r--r-- 1 root root 31488 2011-09-30 05:13 ./x86_64-linux-gnu/qt4/plugins/imageformats/libqjpeg.so The original machine runs Gentoo, 'this' machine runs Ubuntu 11.10. Both are 64-bit. The gentoo box only has libjpeg version 8, it seems. Ultimately, my question is: How can I resolve this? I'd also like to know how I can determine exactly which library the linker has used. EDIT: My game also links to SDL_image, which according to ldd, links to libjpeg version 8. I bet this is where my troubles stem from. How can I tell gcc to link my game to libjpeg version 8? I tried -l/usr/lib/libjpeg.so.whatever and it complained about not finding the specified lib.
Please use LD_LIBRARY_PATH. Refer to these useful links as well: http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html http://linuxmafia.com/faq/Admin/ld-lib-path.html
Linking issues with libjpeg
1,305,731,391,000
I have compiled a library and now I should run ldconfig. However, I would rather not modify /etc/ld.so.conf, nor any other system file. Is it possible to generate the cache somewhere else and then make it visible only while compiling selected programs? Or should I manually set LD_LIBRARY_PATH and LD_RUN_PATH for this purpose?
You can check the option -f of ldconfig: -f conf Use conf instead of /etc/ld.so.conf. If you run: ldconfig -f custom.conf with a user with enough privileges, it will modify /etc/ld.so.cache. ld.so reads /etc/ld.so.cache and I don't think you can make it read from a different file. As you don't want to modify system files, you can do the following: gcc -W -Wall -L/path_to_your_library -lyour_library test.c -o testo to build the testo binary from test.c. Then: export LD_LIBRARY_PATH=/path_to_your_library to run it. Although setting LD_LIBRARY_PATH helps with debugging and with trying out a newer version of a library, its usage in the general development environment setup and deployment is considered bad. You can read more about that here. Another approach could be using rpath: unset LD_LIBRARY_PATH gcc -L/path_to_your_library -Wl,-rpath=/path_to_your_library -W -Wall -o testo test.c -lyour_library With the rpath method, each program gets to list its shared library locations independently. Downsides: shared libraries have to be installed in a fixed location. Also, I've not tested it, but there may be issues if the library resides on an NFS mount.
Using `ldconfig` while not touching system files
1,305,731,391,000
I have had this problem for a very long time, had several discussions with friends, and tried searching for related info online. All efforts were in vain, so I decided to give it a shot here. I have lots of files that I would like to annotate. They are not necessarily pictures or documents, but also audio/video files. Now, I understand that there are ways to annotate a PDF, and there are ways to add metadata to PDF/mp3/mp4..., but those methods are not enough for me. More specifically, when it comes to PDF files, usually I would like to take some notes in my favorite format. The current best way I can think of is to create another file with the same name and put them in the same directory (or tar them together), e.g. Learn-How-to-Learn.pdf Learn-How-to-Learn.pdf.note.md. However, I found this method cumbersome; for instance, it is hard to always keep them linked together and their names synced. When it comes to mp3/mp4 files, I also want to link them to other files that contain my notes. For example, 00:45:37,I would like to listen to this part again,20190610T19:03:56 01:03:55,Donald Knuth made a good point on blah blah,20190610T20:00:03 These examples go on and on... I feel that this is very useful, and there must be some clever solutions out there. But to my surprise, I haven't found any! Please let me know if I should be clearer... I would sincerely like to have a beautiful solution. Thank you in advance!
Based on your question, it would be something like https://github.com/ljmdullaart/a-notate. Yes, it is written (by me) after you asked this question, and it is inspired on your question.
Annotating any files
1,305,731,391,000
I have .c, .h, and .1 files; how can I compile them together into one executable file? Everything is clear with the .c and .h files, but I also have a .1 file; as I can see from the content, it is used for the manual. How can I link it with the program?
I have also .1 as I can see from the content it is used for manual Yes, these are written in groff markup. They aren't compiled, they're interpreted at runtime via man or some other viewer (using groff as a backend). The .1 actually denotes the manual section (see man man). When an executable is installed into an element of the system's executable path (e.g., /usr/bin), the corresponding man page is also usually installed into a subdirectory of, e.g., /usr/share/man. Often they are compressed as well (so foobar.1.gz). man systems maintain a cache and do some indexing of the content (for apropos, etc.), but how this is invoked differs between implementations. Traditionally the update command was makewhatis, but the newer mandb system uses mandb. Distros often set this up to run at regular intervals via cron rather than do it as part of the install since it can be a little time consuming.
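A minimal sketch of the install convention (the page content is invented; whether pages are gzipped and exactly where they live varies by distro):

```shell
cd "$(mktemp -d)"
# A tiny man page in groff_man markup; .TH names the page and section.
cat > hello.1 <<'EOF'
.TH HELLO 1 "June 2019" "hello 1.0" "User Commands"
.SH NAME
hello \- print a friendly greeting
.SH SYNOPSIS
.B hello
EOF
# Typical install layout: section 1 pages go under .../man/man1,
# usually gzip-compressed.
mkdir -p man/man1
gzip -c hello.1 > man/man1/hello.1.gz
ls man/man1
```

After installing into a real manpath you would refresh the index with mandb (or makewhatis on older systems) and view the page with man hello.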
How to compile manual files .1
1,305,731,391,000
I'm trying to set up an environment for kernel module development in Linux. I've built the kernel in my home folder and would like to place the sources and binaries in the correct location so that the includes resolve correctly. The example for building the kernel module has the following includes:
#include <linux/init.h>
#include <linux/module.h>
What are the absolute paths where the compiler looks for these headers?
I generally approach this question like this. I'm on a Fedora 19 system but this will work on any distro that provides locate services. $ locate "linux/init.h" | grep include /usr/src/kernels/3.13.6-100.fc19.x86_64.debug/include/linux/init.h /usr/src/kernels/3.13.7-100.fc19.x86_64.debug/include/linux/init.h /usr/src/kernels/3.13.9-100.fc19.x86_64/include/linux/init.h /usr/src/kernels/3.13.9-100.fc19.x86_64.debug/include/linux/init.h Your paths will be different but the key take away is that you want to ask locate to find what's being included ("linux/init.h") and filter these results looking for the keyword include. There are also distro specific ways to search for these locations using RPM (Redhat) or APT (Debian/Ubuntu). gcc Notice however that the paths within the C/C++ file are relative: #include <linux/init.h> This is so that when you call the compiler, gcc, you can override the location of the include files that you'd like to use. This is controlled through the switch -I <dir>. excerpt from man gcc -I dir Add the directory dir to the list of directories to be searched for header files. Directories named by -I are searched before the standard system include directories. If the directory dir is a standard system include directory, the option is ignored to ensure that the default search order for system directories and the special treatment of system headers are not defeated . If dir begins with "=", then the "=" will be replaced by the sysroot prefix; see --sysroot and -isysroot. External modules There's this article which discusses how one would incorporate the development of their own kernel modules into the "build environment" that's included with the Linux kernel. The article is titled: Driver porting: compiling external modules. The organization of the Kernel's makefile is also covered in this article: makefiles.txt. For Kernel newbies there's also this article: KernelHeaders from the kernelnewbies.org website. 
NOTE: The Kernel uses the KBuild system which is covered here as part of the documentation included with the Kernel. https://www.kernel.org/doc/Documentation/kbuild/ References How to include local header files in linux kernel module
Placement of kernel binary and sources for kernel module building?
1,305,731,391,000
A book I am reading refers to an include file that shows how a stack frame looks on one's UNIX system. In particular: /usr/include/sys/frame.h I am having trouble finding the modern equivalent. Anyone have an idea? I'm on Ubuntu 12.10.
A good answer was provided on Super User. Whether or not the files discussed are precise extensions of the legacy file my author refers to remains unknown. However, one will find most of the relevant knowledge in the ptrace.h file and the calling.h file located in the /.../asm/ directory. This presumes an x86 processor.
Where is the frame.h located in modern Linux implementations? (ubuntu specifically)
1,305,731,391,000
So I have been reading about the preload feature of the dynamic linker and how it can be used to load a user-specified shared library (.so), using the LD_PRELOAD env variable, before all other shared libraries that are linked to an executable are loaded. I was reading about it in the context of privilege escalation, and I'm wondering why there isn't any sort of control over what the application is trying to load. I created and compiled the following code:
#include <stdio.h>
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>

void _init() {
    unsetenv("LD_PRELOAD");
    setgid(0);
    setuid(0);
    system("/bin/bash");
}
gcc -fPIC -shared -nostartfiles -o /tmp/preload.so /home/user/tools/sudo/preload.c
If I then run sudo LD_PRELOAD=/tmp/preload.so /usr/bin/find, a root shell is spawned. I know that I can run find with sudo, as seen in the picture, but I don't understand why the function from my fake shared library is called when that function is not needed within find. Or is it that the linker just loads the library specified in the env variable without checking if the app even needs it? It'd be great if someone could answer to clear up my confusion. Thank you!
is it, that the linker just loads the specified library in the env variable without checking if the app is even needing it ? Yes, that's the point of LD_PRELOAD: the libraries listed there are loaded before the program. LD_PRELOAD is a way to change the behavior of a program. why the function from my fake shared library is called when that function is not needed within find ? _init runs very early in the startup of the program. The dynamic loader calls it. It isn't called explicitly from the source code of find. See for example A General Overview of What Happens Before main() or Linux x86 Program Start Up or How main() is executed on Linux. If you had preloaded a definition of a function that never gets called, your definition would not have mattered. I'm wondering why isn't there any sort of control of what the application is trying to load ? Normally users can run whatever they want. All the code that a user runs runs with that user's privileges. Controls are only necessary where there is an elevation of privileges. Sudo allows elevating privileges. Because of that, it normally forbids features that allow a user to do things that the administrator might not have wanted when the administrator configured the sudo rules. In particular, sudo forbids most environment variables, especially LD_LIBRARY_PATH and LD_PRELOAD. You're working on a system that has a vulnerable sudo configuration. This is for demonstration purposes.
LD_PRELOAD and the dynamic linker
1,305,731,391,000
Code:

//a.c  (I don't use header files as this is just for demo purposes)
extern void function_b(int num);

void function_a(int num)
{
    function_b(num);
}

//b.c
void function_b(int num)
{
    ...
}

//dll.c
#include <dlfcn.h>

int main()
{
    void *handle_a;
    void *handle_b;
    void (*pfunc_a)(int);
    ...
    handle_a = dlopen("./a.so", RTLD_LAZY);
    ...
    pfunc_a = dlsym(handle_a, "function_a");
    ...
    handle_b = dlopen("./b.so", RTLD_GLOBAL);
    ...
    pfunc_a(2020);
    ...
    return 0;
}

We can see that dll.c tries to load shared libraries at run time; module a has a reference to function_b and module b has the definition of function_b. Let's say we have already created the shared libraries a.so and b.so, so those shared libraries exist on disk before the program runs, but when I run the program, it throws a symbol lookup error:

./a.so: undefined symbol: function_b

But for this line of code:

handle_a = dlopen("./a.so", RTLD_LAZY);

since I use RTLD_LAZY here, the runtime linker doesn't try to resolve the symbol function_b, and there's an opportunity for me to call dlopen("b.so", RTLD_GLOBAL) before calling function_a. This way the dynamic linker will modify the reference in a.so with the definition of function_b in b.so.

My questions are:

Is my understanding correct that the dynamic linker is supposed to modify the .got or .got.plt section of a.so so that it can be linked/relocated to the instruction address of function_b in the .text section of b.so?

If my understanding is correct, then why couldn't the dynamic linker still resolve function_b in this case?
The problem isn’t that the dynamic linker can’t resolve function_b, it’s that your second call to dlopen is incorrect: you need to include either RTLD_LAZY or RTLD_NOW; the other flags are complementary to those two. As dlopen(3) puts it, "One of the following two values must be included in flags": RTLD_LAZY or RTLD_NOW. Changing your b.so load to

handle_b = dlopen("./b.so", RTLD_NOW | RTLD_GLOBAL);

produces a working program. Every call to dlopen must choose between RTLD_LAZY and RTLD_NOW; since b.so is the last library loaded, I specified RTLD_NOW above (we don’t gain anything by lazy-loading), but RTLD_LAZY works just as well in this instance. On top of that, other flags can be added; here we need RTLD_GLOBAL because b.so’s symbols must be made available globally, so that function_a can find function_b when it runs. See the examples in dlopen(3) for the error-handling you should do with dlopen and friends; checking the error would have revealed the problem here.
Why couldn't the dynamic linker resolve a reference when a shared library has a dependency on another shared library?
1,305,731,391,000
My intent is to place the text section at a specific location in memory (0x00100000).

SECTIONS
{
    . = 0x00100000;
    .text : { *(.text*) }
}

Although the linker does do this (note the 0x00100000 Addr field):

$ readelf -S file.elf
There are 12 section headers, starting at offset 0x104edc:

Section Headers:
  [Nr] Name       Type      Addr      Off     Size    ES  Flg  Lk  Inf  Al
  [ 0]            NULL      00000000  000000  000000  00       0   0   0
  [ 1] .text      PROGBITS  00100000  100000  000e66  00  AX   0   0   4
  [ 2] .eh_frame  PROGBITS  00100e68  100e68  000628  00  A    0   0   4
...

it also places ~1MB of zeros before the .text section in the ELF file (note the .text section's offset is 1MB). Shown another way:

$ hexdump -C file.elf
00000000  7f 45 4c 46 01 01 01 00  00 00 00 00 00 00 00 00  |.ELF............|
00000010  02 00 03 00 01 00 00 00  0c 00 10 00 34 00 00 00  |............4...|
00000020  dc 4e 10 00 00 00 00 00  34 00 20 00 02 00 28 00  |.N......4. ...(.|
00000030  0c 00 0b 00 01 00 00 00  00 00 00 00 00 00 00 00  |................|
00000040  00 00 00 00 90 14 10 00  96 04 4f 00 07 00 00 00  |..........O.....|
00000050  00 00 20 00 51 e5 74 64  00 00 00 00 00 00 00 00  |.. .Q.td........|
00000060  00 00 00 00 00 00 00 00  00 00 00 00 07 00 00 00  |................|
00000070  10 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000080  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00100000  02 b0 ad 1b 03 00 00 00  fb 4f 52 e4 8b 25 90 04  |.........OR..%..|
00100010  4f 00 50 53 e8 88 00 00  00 fa f4 eb fc 55 89 e5  |O.PS.........U..|
00100020  83 ec 10 c7 45 f8 00 80  0b 00 c7 45 fc 00 00 00  |....E......E....|
00100030  00 eb 24 8b 45 fc 8d 14  00 8b 45 f8 01 d0 8b 4d  |..$.E.....E....M|

How can this be prevented? Am I improperly using the location counter ("dot" notation) syntax?
It turns out that telling the linker to emulate elf_i386 produced the output that I was looking for, though I do not understand why. Namely, invoke the linker with:

$ ld -melf_i386 [...]

Files produced with and without -melf_i386 appear to be mostly similar:

with.elf:    ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, not stripped, with debug_info
without.elf: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, not stripped, with debug_info

Except their sizes are vastly different:

$ ls -l *.elf
-rwxr-xr-x 1 user user   10948 May 24 11:56 with.elf
-rwxr-xr-x 1 user user 1055428 May 24 11:56 without.elf

As far as I can tell, the output files are otherwise exactly the same:

$ readelf -S with.elf
There are 12 section headers, starting at offset 0x28e4:

Section Headers:
  [Nr] Name            Type      Addr      Off     Size    ES  Flg  Lk  Inf  Al
  [ 0]                 NULL      00000000  000000  000000  00       0   0   0
  [ 1] .text           PROGBITS  00100000  001000  000205  00  AX   0   0   4
  [ 2] .eh_frame       PROGBITS  00100208  001208  0000b8  00  A    0   0   4
  [ 3] .bss            NOBITS    001002c0  0012c0  3ef000  00  WA   0   0   4
  [ 4] .debug_info     PROGBITS  00000000  0012c0  0007bf  00       0   0   1
  [ 5] .debug_abbrev   PROGBITS  00000000  001a7f  0002c9  00       0   0   1
  [ 6] .debug_aranges  PROGBITS  00000000  001d48  000060  00       0   0   1
  [ 7] .debug_line     PROGBITS  00000000  001da8  00023c  00       0   0   1
  [ 8] .debug_str      PROGBITS  00000000  001fe4  0004bd  01  MS   0   0   1
  [ 9] .symtab         SYMTAB    00000000  0024a4  000280  10      10  22   4
  [10] .strtab         STRTAB    00000000  002724  00014e  00       0   0   1
  [11] .shstrtab       STRTAB    00000000  002872  000070  00       0   0   1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
  L (link order), O (extra OS processing required), G (group), T (TLS),
  C (compressed), x (unknown), o (OS specific), E (exclude), p (processor specific)

Just note the "offset" field is slightly different:

$ readelf -S without.elf
There are 12 section headers, starting at offset 0x1018e4:

Section Headers:
  [Nr] Name            Type      Addr      Off     Size    ES  Flg  Lk  Inf  Al
  [ 0]                 NULL      00000000  000000  000000  00       0   0   0
  [ 1] .text           PROGBITS  00100000  100000  000205  00  AX   0   0   4
  [ 2] .eh_frame       PROGBITS  00100208  100208  0000b8  00  A    0   0   4
  [ 3] .bss            NOBITS    001002c0  1002c0  3ef000  00  WA   0   0   4
  [ 4] .debug_info     PROGBITS  00000000  1002c0  0007bf  00       0   0   1
  [ 5] .debug_abbrev   PROGBITS  00000000  100a7f  0002c9  00       0   0   1
  [ 6] .debug_aranges  PROGBITS  00000000  100d48  000060  00       0   0   1
  [ 7] .debug_line     PROGBITS  00000000  100da8  00023c  00       0   0   1
  [ 8] .debug_str      PROGBITS  00000000  100fe4  0004bd  01  MS   0   0   1
  [ 9] .symtab         SYMTAB    00000000  1014a4  000280  10      10  22   4
  [10] .strtab         STRTAB    00000000  101724  00014e  00       0   0   1
  [11] .shstrtab       STRTAB    00000000  101872  000070  00       0   0   1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
  L (link order), O (extra OS processing required), G (group), T (TLS),
  C (compressed), x (unknown), o (OS specific), E (exclude), p (processor specific)
GNU linker producing useless spacing between sections in ELF file
1,305,731,391,000
I was brave and tried to compile CUPS in a 32-bit Cygwin environment. I used the standard sources from the tarball. All went fine until linking. http://pastebin.com/QSKvLSmT

Here's the end of the transcript:

Compiling raster.c...
raster.c:1: warning: -fPIC ignored for target (all code is position independent)
Linking libcupsimage.so.2...
../cups/libcups.a(file.o): In function `cupsFileRewind':
/opt/cups/cups-1.4.8/cups/file.c:1465: undefined reference to `_inflateEnd'
../cups/libcups.a(file.o): In function `cups_fill':
/opt/cups/cups-1.4.8/cups/file.c:2096: undefined reference to `_crc32'
/opt/cups/cups-1.4.8/cups/file.c:2098: undefined reference to `_inflateInit2_'
/opt/cups/cups-1.4.8/cups/file.c:2133: undefined reference to `_inflate'
/opt/cups/cups-1.4.8/cups/file.c:2136: undefined reference to `_crc32'
../cups/libcups.a(file.o): In function `cupsFileSeek':
/opt/cups/cups-1.4.8/cups/file.c:1569: undefined reference to `_inflateEnd'
../cups/libcups.a(file.o): In function `cups_compress':
/opt/cups/cups-1.4.8/cups/file.c:1873: undefined reference to `_crc32'
/opt/cups/cups-1.4.8/cups/file.c:1900: undefined reference to `_deflate'
../cups/libcups.a(file.o): In function `cupsFileOpenFd':
/opt/cups/cups-1.4.8/cups/file.c:996: undefined reference to `_deflateInit2_'
/opt/cups/cups-1.4.8/cups/file.c:1002: undefined reference to `_crc32'
../cups/libcups.a(file.o): In function `cupsFileClose':
/opt/cups/cups-1.4.8/cups/file.c:121: undefined reference to `_inflateEnd'
/opt/cups/cups-1.4.8/cups/file.c:150: undefined reference to `_deflate'
/opt/cups/cups-1.4.8/cups/file.c:174: undefined reference to `_deflateEnd'
collect2: ld returned 1 exit status
Makefile:331: recipe for target `libcupsimage.so.2' failed
make[1]: *** [libcupsimage.so.2] Error 1
Makefile:34: recipe for target `all' failed
make: *** [all] Error 1

What to do?
Have a look at the CUPS port in cygwin-ports, they provide version 1.4.6 as of January 30th 2011. It patches quite a lot...
CUPS compilation fails on Cygwin
1,305,731,391,000
I'm having trouble compiling a simple sample program against glib on Ubuntu. I get these errors. I can get it to compile but not link with the -c flag, which I believe means I have the glib headers installed, but it's not finding the shared object code. See also the makefile below.

$> make re
gcc -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -lglib-2.0 re.c -o re
/tmp/ccxas1nI.o: In function `print_uppercase_words':
re.c:(.text+0x21): undefined reference to `g_regex_new'
re.c:(.text+0x41): undefined reference to `g_regex_match'
re.c:(.text+0x54): undefined reference to `g_match_info_fetch'
re.c:(.text+0x6e): undefined reference to `g_print'
re.c:(.text+0x7a): undefined reference to `g_free'
re.c:(.text+0x8b): undefined reference to `g_match_info_next'
re.c:(.text+0x97): undefined reference to `g_match_info_matches'
re.c:(.text+0xa7): undefined reference to `g_match_info_free'
re.c:(.text+0xb3): undefined reference to `g_regex_unref'
collect2: ld returned 1 exit status
make: *** [re] Error 1

Makefile used:

# Need to have installed libglib2.0-dev, some system-specific install that
# will provide a value for pkg-config
INCLUDES=$(shell pkg-config --libs --cflags glib-2.0)
CC=gcc $(INCLUDES)
PROJECT=re

# Targets
full: clean compile

clean:
	rm $(PROJECT)

compile:
	$(CC) $(PROJECT).c -o $(PROJECT)

.c code being compiled:

#include <glib.h>

void print_upppercase_words(const gchar *string)
{
    /* Print all uppercase-only words. */
    GRegex *regex;
    GMatchInfo *match_info;

    regex = g_regex_new("[A-Z]+", 0, 0, NULL);
    g_regex_match(regex, string, 0, &match_info);
    while (g_match_info_matches(match_info)) {
        gchar *word = g_match_info_fetch(match_info, 0);
        g_print("Found %s\n", word);
        g_free(word);
        g_match_info_next(match_info, NULL);
    }
    g_match_info_free(match_info);
    g_regex_unref(regex);
}

int main()
{
    gchar *string = "My body is a cage. My mind is THE key.";
    print_uppercase_words(string);
}

Strangely, when I run glib-config it doesn't like that command -- though I don't know how to tell bash or make how to just use one over the other when it complains that gdlib-config is in these 2 packages.

$> glib-config
No command 'glib-config' found, did you mean:
 Command 'gdlib-config' from package 'libgd2-xpm-dev' (main)
 Command 'gdlib-config' from package 'libgd2-noxpm-dev' (main)
glib-config: command not found
glib is not your problem. This is:

re.c:(.text+0xd6): undefined reference to `print_uppercase_words'

What it's saying is you're calling a function print_uppercase_words, but it can't find it. And there's a reason. Look very closely. There's a typo:

void print_upppercase_words(const gchar *string)

After you fix that, you might still have a problem because you are specifying the libraries before the modules that require those libraries. In short, your command should be written

gcc -o re re.o -lglib-2.0

so that -lglib-2.0 comes after re.o. So I'd write your Makefile more like this:

re.o: re.c
	$(CC) -I<includes> -o $@ -c $^

re: re.o
	$(CC) $^ -l<libraries> -o $@

In fact, if you set the right variables, make will figure it all out for you automatically.

CFLAGS=$(shell pkg-config --cflags glib-2.0)
LDLIBS=$(shell pkg-config --libs glib-2.0)
CC=gcc

re: re.o
Linker errors when compiling against glib...?
1,311,339,680,000
I've got an app which won't link, giving errors:

/usr/lib64/libcroco-0.6.so.3: undefined reference to `xmlGetProp@LIBXML2_2.4.30'
/usr/lib64/libcroco-0.6.so.3: undefined reference to `xmlFree@LIBXML2_2.4.30'
/usr/lib64/libcroco-0.6.so.3: undefined reference to `xmlHasProp@LIBXML2_2.4.30'

I've got libxml installed:

libxml++.x86_64        2.33.2-1.fc15  @koji-override-0/$releasever
libxml++-devel.x86_64  2.33.2-1.fc15  @fedora
libxml2.i686           2.7.8-6.fc15   @fedora
libxml2.x86_64         2.7.8-6.fc15   @koji-override-0/$releasever
libxml2-devel.x86_64   2.7.8-6.fc15   @fedora
libxml2-python.x86_64  2.7.8-6.fc15   @koji-override-0/$releasever

Any ideas? Maybe libcroco was compiled with an older version and I need an older libxml installed?
The only thing I can think of is that the .so files aren't in a directory the linker looks for libraries in. Can you find out where the file libxml2.so resides, and then put that directory on the link command line with a -L ?
libxml linker error
1,311,339,680,000
This is my /etc/ld.so.conf:

/usr/local/lib64
/usr/local/lib
include /etc/ld.so.conf.d/*.conf

The directory /etc/ld.so.conf.d/ contains mysql-x86_64.conf, which contains only this one line:

/usr/lib64/mysql

The /usr/lib64/mysql directory [listed in the .conf file] contains these files:

total 45,961,216
drwxr-xr-x   2 root root      4,096 Apr 11 17:20 ./
drwxr-xr-x 121 root root     81,920 Mar 30 20:01 ../
-rw-r--r--   1 root root 28,951,398 Dec  9 21:40 libmysqlclient.a
lrwxrwxrwx   1 root root         20 Dec  9 21:56 libmysqlclient.so -> libmysqlclient.so.21*
lrwxrwxrwx   1 root root         25 Dec  9 21:56 libmysqlclient.so.21 -> libmysqlclient.so.21.1.19*
-rwxr-xr-x   1 root root 16,869,104 Dec  9 21:40 libmysqlclient.so.21.1.19*
-rw-r--r--   1 root root     44,910 Dec  9 21:36 libmysqlservices.a

Running ldconfig -p | grep mysql returns this:

libmysqlclient.so.21 (libc6,x86-64) => /usr/lib64/mysql/libmysqlclient.so.21
libmysqlclient.so (libc6,x86-64) => /usr/lib64/mysql/libmysqlclient.so

When I try to link a very small MySQL test program I get this error:

/usr/lib64/gcc/x86_64-suse-linux/9/../../../../x86_64-suse-linux/bin/ld: cannot find -lmysqlclient

Adding -L/usr/lib64/mysql to the linker command works. My question: according to this answer and other documentation found on the internet, ldd is considering the content of the /etc/ld.so.conf file -- so why is the content ignored in my case? What am I doing wrong?
ld.so.conf is the configuration file for ld.so, the runtime dynamic linker. ld ignores it, intentionally. It has its own defaults, supplemented by -L. Typically the search path is also determined by the compiler driving it — see gcc -print-search-dirs for GCC. LD_LIBRARY_PATH also only affects ld.so, not ld. See also what are the executable ELF files respectively for static linker, dynamic linker, loader and dynamic loader?
ld ignores ld.so.conf
1,311,339,680,000
GNU C compiler passes the wrong architecture name to the linker. For example gcc helloworld.i throws the error ld: unknown/unsupported architecture name: -arch arm. After some experimenting with LD, it seems armv7 is the architecture I should use. The compiling and assembling operations seem to work fine. It appears that the compiler collection (iphone-gcc) is designed to work with an older version of the linker provided through the open-source Darwin CC Tools, not the newer LD64 I have installed provided as a stand-alone outside the CC tool collection. Is there any way to tell GCC to pass another architecture to the linker? Passing -Wl,-arch,armv7 or -Xlinker -arch -Xlinker armv7 to GCC gives the same error.
You shouldn't be upgrading your toolchain piecemeal. The parts have to work together. The GNU tools allow so much variation that it is essential that the pieces be set up to work together, especially for a cross-compiler. If you need a newer ld for some reason, you should build up a complete toolchain to support it.
GCC: set architecture to pass to linker
1,311,339,680,000
I'm trying to build a Rust program that involves diesel with postgresql on Fedora 31, and the build fails because the linker can't find libpq. As it's reproducible with gcc, I'm using gcc to keep the question shorter.

$ gcc -L /lib64 -lpq
/usr/bin/ld: cannot find -lpq

$ ls /lib64 | grep libpq
libpq.so.5
libpq.so.5.11

$ ldd /lib64/libpq.so.5
    linux-vdso.so.1 (0x00007ffc06ddd000)
    libssl.so.1.1 => /usr/lib64/libssl.so.1.1 (0x00007f4885ea7000)
    libcrypto.so.1.1 => /usr/lib64/libcrypto.so.1.1 (0x00007f4885bc7000)
    libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007f4885b75000)
    libldap_r-2.4.so.2 => /usr/lib64/libldap_r-2.4.so.2 (0x00007f4885b1d000)
    libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x00007f4885afb000)
    libc.so.6 => /usr/lib64/libc.so.6 (0x00007f4885932000)
    libz.so.1 => /usr/lib64/libz.so.1 (0x00007f4885916000)
    libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007f488590f000)
    libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007f488581e000)
    libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007f4885805000)
    libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007f48857fe000)
    libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007f48857ec000)
    libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007f48857e3000)
    libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007f48857ca000)
    liblber-2.4.so.2 => /usr/lib64/liblber-2.4.so.2 (0x00007f48857b9000)
    libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007f4885799000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f4885f8c000)
    libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007f488576c000)
    libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007f488572f000)
    libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007f488569d000)

libpthread, for example, which is also in /lib64, is found:

$ gcc -L /lib64 -lpthread
/usr/bin/ld: /usr/lib/gcc/x86_64-redhat-linux/9/../../../../lib64/crt1.o: in function `_start':

Am I missing something, or should this normally be found right away?
-lpq causes the linker to look for libpq.so, with no soname suffix. To provide this on Fedora, you should install libpq-devel: sudo dnf install libpq-devel
ld cannot find library right in front of it
1,311,339,680,000
I'm trying to compile vapoursynth and have run into a linker issue which I don't understand how to solve. Here is what I have so far: I have compiled zimg from GitHub (buaazp/zimg) and have a binary. I pulled vapoursynth from GitHub (vapoursynth/vapoursynth) and followed the instructions. When I try to run ./configure:

configure: error: Package requirements (zimg) were not met:

No package 'zimg' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables ZIMG_CFLAGS
and ZIMG_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

I tried to fix it with:

export PKG_CONFIG_PATH=/home/test/zimg/bin/zimg

But ./configure still didn't work and had the same error. Then I tried:

export ZIMG_CFLAGS=/home/test/zimg/src/
export ZIMG_LIBS=/home/test/zimg/build

And the check for zimg passed, but it fails to link it. The error is this:

checking for ZIMG... yes
configure: error: failed to link zimg.

What should I try next?
It turns out I used the wrong zimg. The correct zimg is sekrit-twc/zimg.
Unable to compile vapoursynth: failed to link zimg [closed]
1,311,339,680,000
Apologies if this has already been answered; I am having trouble finding an existing post (either on SE or Linux forums) which solves the issue. I need to install the package(s) that enable the -lSM and -lICE linker options for compiling some C/C++ code that uses plotting libraries (see here for an example: C Compiling and Linking). Here's a snippet of the error messages I'm getting:

/usr/bin/ld: cannot find -lSM
/usr/bin/ld: cannot find -lICE
collect2: error: ld returned 1 exit status

I am quite certain that the issue is simply that the package is not installed. What is the name of the package? I am running on CentOS 7/Red Hat.
You are looking for libSM.so and libICE.so, provided by the libSM-devel and libICE-devel packages. Basically, if you are linking with -l<something>, look in /usr/lib64/lib<something>.so. An even faster result is to skip the step of finding the package name and run: yum install /usr/lib64/lib<something>.so
What Centos package contains the libraries for -lSM -lICE linker options?
1,311,339,680,000
I'm trying to make rcssmonitor and I get the following error:

/usr/bin/ld: cannot find -laudio

I'm using Linux Mint 17.2 with gcc 4.8.4.
On Ubuntu/Mint. You should be able to get libaudio using: apt-get install libaudio-dev
Linker error about -laudio
1,311,339,680,000
When I copy a program and a few libraries it needs over to another machine, I get the "no version information available" warning when I run ldd on the program. I know why this is happening; I just want to know if it's a big deal. Can I just ignore it? The program seems to execute and exhibits expected behavior. Could this come back to screw me in the future?
From the glibc sources for ldd:

if (...)
  {
    /* The file has no symbol versioning.  I.e., the dependent object
       was linked against another version of this file.  We only print
       a message if verbose output is requested.  */
    ...
    errstring = make_string ("no version information available ...");
    ...
  }

It means "version mismatch", including a mismatch to null. No more, no less.

Will it come back to screw you? The answer has to be, unfortunately: "possibly". It's possible that without the version it was looking for, it'll be buggy. And of course, it could be buggy even if it said everything was fine.

Should you worry? If this is a production system that large processes are depending on, copying over binaries from other systems is probably not a great idea. If this is just for you, or just to get things moving along enough so you can work on the real problems, onwards and upwards.
Dynamic linker "no version information available"
1,311,339,680,000
I'm adding C++ runtime and exception support to the Linux kernel. For that, I need to provide my own lib/gcc and lib/libstdc++ instead of the standard libraries provided by the compiler. So, I am confused about the flags that are to be passed to the linker. In a normal kernel's top-level Makefile,

LD = $(CROSS_COMPILE)ld

which lets the kernel use the default standard libraries and startup files. For my kernel I'm using

LD = $(CROSS_COMPILE)ld -nostdlib -nodefaultlibs -nostartfiles

as described in the documentation. What I understood from the gcc documentation is that passing -nostdlib to the linker is equivalent to passing both -nodefaultlibs and -nostartfiles. What's actually the difference between these flags?
These flags are defined in GCC’s spec files, so the best way to determine the differences between them is to look there:

gcc -dumpspecs

The relevant part is the link_command definition. This shows that -nostdlib, -nodefaultlibs and -nostartfiles have the following impact:

- %{!nostdlib:%{!nodefaultlibs:%:pass-through-libs(%(link_gcc_c_sequence))}} — this adds libgcc, libpthread, libc, libieee as necessary, using a macro and the lib and libgcc spec strings;

- %{!nostdlib:%{!nostartfiles:%S}} — this adds the startfile spec string, which specifies the object files to add to handle startup (crti.o etc.);

- %{!nostdlib:%{fvtable-verify=std: -lvtv -u_vtable_map_vars_start -u_vtable_map_vars_end} %{fvtable-verify=preinit: -lvtv -u_vtable_map_vars_start -u_vtable_map_vars_end}} — this adds virtual table verification using libvtv;

- %{!nostdlib:%{!nodefaultlibs:%{mmpx:%{fcheck-pointer-bounds: %{static:--whole-archive -lmpx --no-whole-archive %:include(libmpx.spec)%(link_libmpx)} %{!static:%{static-libmpx:-Bstatic --whole-archive} %{!static-libmpx:--push-state --no-as-needed} -lmpx %{!static-libmpx:--pop-state} %{static-libmpx:--no-whole-archive -Bdynamic %:include(libmpx.spec)%(link_libmpx)}}}}%{mmpx:%{fcheck-pointer-bounds:%{!fno-chkp-use-wrappers: %{static:-lmpxwrappers} %{!static:%{static-libmpxwrappers:-Bstatic} -lmpxwrappers %{static-libmpxwrappers: -Bdynamic}}}}}}} — this handles libmpx;

- %{!nostdlib:%{!nodefaultlibs:%{%:sanitize(address): %{static-libasan:%:include(libsanitizer.spec)%(link_libasan)} %{static:%ecannot specify -static with -fsanitize=address}} %{%:sanitize(thread): %{static-libtsan:%:include(libsanitizer.spec)%(link_libtsan)} %{static:%ecannot specify -static with -fsanitize=thread}} %{%:sanitize(undefined):%{static-libubsan:-Bstatic} -lubsan %{static-libubsan:-Bdynamic} %{static-libubsan:%:include(libsanitizer.spec)%(link_libubsan)}} %{%:sanitize(leak): %{static-liblsan:%:include(libsanitizer.spec)%(link_liblsan)}}}} — this handles the various sanitisation options;

- %{!nostdlib:%{!nodefaultlibs:%(link_ssp) %(link_gcc_c_sequence)}} — this adds the stack protection options and repeats the C link sequence (whose libraries were already specified at the start);

- %{!nostdlib:%{!nostartfiles:%E}} — this adds the endfile spec string, which specifies the object files to add to handle left-overs (crtfastmath.o, crtend.o etc.).

As you understood from the documentation, -nostdlib is a superset of -nodefaultlibs and -nostartfiles. It also disables virtual table verification. So -nostdlib is sufficient to disable all the related features; -nodefaultlibs and -nostartfiles don’t add anything to it. (But it doesn’t hurt to mention them too.)
Difference between the linker flags
1,311,339,680,000
I've got reasons for not wanting to rely on a specific build system. I don't mean to dis anybody's favorite, but I really just want to stick to what comes with the compiler. In this case, GCC. Automake has certain compatibility issues, especially with Windows. <3 GNU make is so limited that it often needs to be supplemented with shell scripts. Shell scripts can take many forms, and to make a long story short and probably piss a lot of people off, here is what I want to do -- The main entry point is God. Be it a C or C++ source file, it is the center of the application. Not only do I want the main entry point to be the first thing that is executed, I also want it to be the first thing that is compiled. Let me explain -- There was a time when proprietary and closed-source libraries were common. Thanks to Apple switching to Unix and Microsoft shooting themselves in the foot, that time is over. Any library that needs to be dynamically linked can be included as a supporting file of the application. For that reason, separate build instructions for .SOs (and maybe .DLLs ;]) is all fine and dandy, because they are separate executable files. Any other library should be statically linked. Now, let's talk about static linking -- Static linking is a real bitch. That's what makefiles are for. If the whole project was written in one language (for instance C OR C++), you can #include the libraries as headers. That's just fine. But now, let's consider another scenario -- Let's say you're like me and can't be arsed to figure out C's difficult excuse for strings, so you decide to use C++. But you want to use a C library, like for instance MiniBasic. God help us. If the C library wasn't designed to conform to C++'s syntax, you're screwed. That's when makefiles come in, since you need to compile the C source file with a C compiler and the C++ source file with a C++ compiler. I don't want to use makefiles. 
I would hope that there is a way to exploit GCC's preprocessor macros to tell it something like this: Hi, GCC. How are you doing? In case you forgot, this source file you're looking at right now is written in C++. You should of course compile it with G++. There's another file that this file needs, but it's written in C. It's called "lolcats.c". I want you to compile that one with GCC into an object file and I want you to compile this one with G++ into the main object file, then I want you to link them together into an executable file. How might I write such a thing in preprocessor lingo? Does GCC even do that?
The main entry point is God. Be it a C or C++ source file, it is the center of the application. Only in the same way that nitrogen is the center of a pine tree. It is where everything starts, but there's nothing about C or C++ that makes you put the "center" of your application in main(). A great many C and C++ programs are built on an event loop or an I/O pump. These are the "centers" of such programs. You don't even have to put these loops in the same module as main(). Not only do I want the main entry point to be the first thing that is executed, I also want it to be the first thing that is compiled. It is actually easiest to put main() last in a C or C++ source file. C and C++ are not like some languages, where symbols can be used before they are declared. Putting main() first means you have to forward-declare everything else. There was a time when proprietary and closed-source libraries were common. Thanks to Apple switching to Unix and Microsoft shooting themselves in the foot, that time is over. "Tell 'im 'e's dreamin'!" OS X and iOS are full of proprietary code, and Microsoft isn't going away any time soon. What do Microsoft's current difficulties have to do with your question, anyway? You say you might want to make DLLs, and you mention Automake's inability to cope effectively with Windows. That tells me Microsoft remains relevant in your world, too. Static linking is a real bitch. Really? I've always found it easier than linking to dynamic libraries. It's an older, simpler technology, with fewer things to go wrong. Static linking incorporates the external dependencies into the executable, so that the executable stands alone, self-contained. From the rest of your question, that should appeal to you. you can #include the libraries as headers No... You #include library headers, not libraries. This isn't just pedantry. The terminology matters. It has meaning. If you could #include libraries, #include </usr/lib/libfoo.a> would work. 
In many programming languages, that is the way external module/library references work. That is, you reference the external code directly. C and C++ are not among the languages that work that way. If the C library wasn't designed to conform to C++'s syntax, you're screwed. No, you just have to learn to use C++. Specifically here, extern "C". How might I write such a thing in preprocessor lingo? It is perfectly legal to #include another C or C++ file: #include <some/library/main.cpp> #include <some/other/library/main.c> #include <some/other/library/secondary_module.c> #include <iostream> int main() { call_the_library(); do_other_stuff(); return 0; } We don't use extern "C" here because this pulls the C and C++ code from those other libraries directly into our C++ file, so the C modules need to be legal C++ as well. There are a number of annoying little differences between C and C++, but if you're going to intermix the two languages, you're going to have to know how to cope with them regardless. Another tricky part of doing this is that the order of the #includes is more sensitive than the order of library references if a linker command. When you bypass the linker in this way, you end up having to do some things manually that the linker would otherwise do for you automatically. To prove the point, I took MiniBasic (your own example) and converted its script.c driver program to a standalone C++ program that says #include <basic.c> instead of #include <basic.h>. (patch) Just to prove that it's really a C++ program now, I changed all the printf() calls to cout stream insertions. I had to make a few other changes, all of them well within a normal day's work for someone who's going to intermix C and C++: The MiniBasic code makes use of C's willingness to tolerate automatic conversions from void* to any other pointer type. C++ makes you be explicit. Newer compilers are no longer tolerating use of C string constants (e.g. "Hello, world!\n") in char* contexts. 
The standard says the compiler is allowed to place them into read-only memory, so you need to use const char*.

That's it. Just a few minutes' work, patching GCC complaints. I had to make some similar changes in basic.c to those in the linked script.c patch file. I haven't bothered posting the diffs, since they're just more of the same.

For another way to go about this, study the SQLite Amalgamation, as compared to the SQLite source tree. SQLite doesn't #include all the other files into a single master file; they're actually concatenated together, but that is also all #include does in C or C++.
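The equivalence between #include-ing sources and simply concatenating them is easy to demonstrate with a throwaway example (all file and function names here are made up for illustration; assumes gcc is on the PATH):

```shell
mkdir -p /tmp/unity-demo && cd /tmp/unity-demo

# Two tiny "library" translation units:
printf 'int add(int a, int b) { return a + b; }\n' > add.c
printf 'int mul(int a, int b) { return a * b; }\n' > mul.c

# A driver that pulls the sources in directly, bypassing the linker:
cat > main.c <<'EOF'
#include <stdio.h>
#include "add.c"
#include "mul.c"
int main(void) { printf("%d\n", add(2, mul(3, 4))); return 0; }
EOF

gcc -o demo main.c   # no extra .o files or -l flags needed: one translation unit
./demo               # prints 14

# SQLite-style amalgamation: plain concatenation builds the same unit
cat add.c mul.c > amalgam.c
```

Because everything becomes one translation unit, definitions must appear before their uses, which is exactly the include-order sensitivity the answer warns about.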
Compiling C/C++ code by way of including preprocessor build instructions in an actual C/C++ source file
1,311,339,680,000
I'm reading a textbook which describes how the loader works:

> When the loader runs, it copies chunks of the executable object file into the code and data segments. Next, the loader jumps to the program's entry point, which is always the address of the _start function. The _start function calls the system startup function, __libc_start_main

From the answer of this Stack Overflow question, we have the below pseudo-code about the execution flow:

    _start:
        call __setup_for_c      ; set up C environment
        call __libc_start_main  ; set up standard library
        call _main              ; call your main
        call __libc_stop_main   ; tear down standard library
        call __teardown_for_c   ; tear down C environment
        jmp  __exit             ; return to OS

My questions are:

I used objdump to check the assembly code of the program and I found that _start only calls __libc_start_main, as the picture below shows:

[screenshot of the objdump output]

What about the rest of the functions, like call __setup_for_c, _main, etc.? Especially my program's main function: I can't see how it gets called. So is the pseudo-code about the execution flow correct?

What does "__libc_start_main sets up the standard library" mean? Why does the standard library need to be set up? Doesn't the standard library just need to be linked by the dynamic linker when the program is loaded?
The other function calls described in the linked answer give a synopsis of what needs to happen; the actual implementation details in the GNU C library are different, either using "constructors" (_dl_start_user), or explicitly in __libc_start_main.

__libc_start_main also takes care of calling the user's main, which is why you don't see it called in your disassembly; its address is passed along instead (see the lea just before the callq). __libc_start_main also takes care of the program exit, and never returns; that's the reason for the hlt just after the callq, which will crash the program if the function ever does return.

The library needs quite a lot of setup nowadays:

- some of its own relocation
- thread-local storage setup
- pthread setup
- destructor registration
- vDSO setup (on Linux)
- ctype initialisation
- copying the program name, arguments and environment to various library variables
- etc.

See the x86-64-specific sysdeps/x86_64/start.S and the generic csu/libc-start.c, csu/init-first.c, and misc/init-misc.c among others.
Does _start call my program's main function and other essential setup functions? [closed]
1,311,339,680,000
I'm trying to recompile my software for Debian 8, but I have run into this strange issue of libgssapi refusing to link with anything.

    >~/torque_github$ gcc test.c -lgssapi
    /usr/bin/ld: cannot find -lgssapi
    collect2: error: ld returned 1 exit status

The library is present in the system, as seen here:

    >~/torque_github$ /sbin/ldconfig -p | grep gssapi
        libgssapi_krb5.so.2 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2
        libgssapi.so.3 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libgssapi.so.3
You probably need to install the development package libkrb5-dev or krb5-multidev:

    apt-get install libkrb5-dev

and to pass the correct parameters to gcc (run krb5-config.mit gssrpc --libs to get them):

    gcc test.c -o test $(krb5-config.mit gssrpc --libs)

which expands to (depending on the system):

    gcc test.c -o test -L/usr/lib/x86_64-linux-gnu/mit-krb5 -Wl,-z,relro -lgssrpc -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err
Compiling package for debian 8 - linking issues
1,311,339,680,000
I manually compiled libpcre2 with debug symbols into /usr/local/lib and then deleted the version installed in /lib64. While I can still run commands as my user by first running export LD_LIBRARY_PATH=/usr/local/lib, running sudo still fails with the message

    sudo: error while loading shared libraries: libpcre2-8.so.0: cannot open shared object file: No such file or directory

This happens even when I run it with the -E or -H option. sudo - root gives me "su: Authentication failure". Any ideas?
Why the library is not getting used

The dynamic linker will ignore LD_LIBRARY_PATH if the program to be loaded has the setuid or setgid bit set, for security. Otherwise, there would be the old trick of using LD_LIBRARY_PATH or LD_PRELOAD to override an innocuous system/library call to do something else instead of, or in addition to, what it's supposed to do. If done to a setuid root program that's not protected against this trick, you would be able to control what the program does while having root privileges... and at that point, there would be no limits at all to what you could do.

You'll find a description of this security mechanism in the ld.so(8) man page:

> Secure-execution mode
>
> For security reasons, the effects of some environment variables are voided or modified if the dynamic linker determines that the binary should be run in secure-execution mode. (For details, see the discussion of individual environment variables below.) A binary is executed in secure-execution mode if the AT_SECURE entry in the auxiliary vector (see getauxval(3)) has a nonzero value. This entry may have a nonzero value for various reasons, including:
>
> - The process's real and effective user IDs differ, or the real and effective group IDs differ. This typically occurs as a result of executing a set-user-ID or set-group-ID program.
> - A process with a non-root user ID executed a binary that conferred capabilities to the process.
> - A nonzero value may have been set by a Linux Security Module.

... followed by long-form descriptions of the LD_* environment variables and how each of them is affected by secure-execution mode.

How to fix your problem

You could add /usr/local/lib to /etc/ld.so.conf or make your own /etc/ld.so.conf.d/*.conf file for it, then run ldconfig as root to make sure the new library path gets picked up... if you still have a working way to become root that's independent of sudo, that is!
Otherwise, you might need to boot into single-user/recovery mode and gain access to the system just as if you had lost the root password. But instead of resetting the root password, you'd make the /etc/ld.so.conf[.d] modification as above.

Why the -E option of sudo won't help here

The sudoers(5) man page says:

> Note that the dynamic linker on most operating systems will remove variables that can control dynamic linking from the environment of setuid executables, including sudo. Depending on the operating system this may include _RLD*, DYLD_*, LD_*, LDR_*, LIBPATH, SHLIB_PATH, and others. These type of variables are removed from the environment before sudo even begins execution and, as such, it is not possible for sudo to preserve them.

You said sudo - root gives you su: Authentication failure. The command and the error message don't quite match up. Did you actually run su - root instead? Then you might be running Ubuntu or some other Linux distribution that comes with the root account's password locked by default; those distributions typically rely heavily on sudo for admin access.

If that is the case with your distribution, you are now discovering why deleting a standard system library located in /lib64 was not a great idea.
Can't run sudo after deleting libpcre2
1,311,339,680,000
The following manual describes the dynamic linker/loader libraries:

> The program ld.so handles a.out binaries, a format used long ago; ld-linux.so* handles ELF (/lib/ld-linux.so.1 for libc5, /lib/ld-linux.so.2 for glibc2), which everybody has been using for years now.

I use Ubuntu 15.04 and I don't have ld.so. My system contains a few symbolic links to ld-2.21.so:

    /lib/ld-linux.so.2 -> /lib32/ld-linux.so.2
    /lib32/ld-linux.so.2 -> ld-2.21.so
    /lib64/ld-linux-x86-64.so.2 -> /lib/x86_64-linux-gnu/ld-2.21.so

Does it mean that the system can't handle a.out binaries (because it is not equipped with ld.so)? Moreover, ld-linux.so.2 is a symbolic link, not a lib as described in the manual. How to explain that?
Your system doesn't have /lib/ld.so, so it isn't equipped for dynamically linked a.out executables. It could be equipped for statically linked a.out executables, if your kernel includes support for them; Ubuntu's doesn't (this requires the CONFIG_BINFMT_AOUT kernel configuration option). The a.out format has been obsolescent on Linux for about 20 years and obsolete for about 15, so most systems today have stopped supporting it. /lib/ld-linux.so.1 and /lib/ld-linux.so.2 are two different versions of the GNU/Linux ELF dynamic loader, each with its own ABI. Version 1, corresponding to libc5, has been obsolete for only a few years less than a.out and is not supported on most systems today. Version 2, corresponding to GNU libc6, is current. Each architecture has its own naming convention and version number for the dynamic loader (different processor architectures have de facto different ABIs). /lib/ld-linux.so.2 is the x86_32 name. On x86_64, the usual location is /lib64/ld-linux-x86-64.so.2. On armel, the location is /lib/ld-linux.so.3, on armhf /lib/ld-linux-armhf.so.3, and so on. /lib/ld-linux.so.2 is a library (or more precisely, a dynamically linked shared object — the dynamic loader is usually not called a library). The fact that it's a symlink to a regular file rather than a regular file doesn't change that: what makes it a library is its content.
dynamic linker/loader libs - missing ld.so
1,311,339,680,000
I am trying to build SimGear from the FlightGear project using the download_an_compile.sh script (which uses CMake to build the binaries). The build went fine so far, but when the script tried linking the built object files together into a library, I got tons of

    //usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2: warning: undefined reference to ...@OPENLDAP_2.4_2

(where ... is a different function name for each message). Now I thought I would just manually instruct CMake to link the lber library to the library being built, by adding -DCMAKE_CXX_STANDARD_LIBRARIES="-llber-2.4" to CMake's arguments. That resulted in

    /usr/bin/ld: -llber-2.4 could not be found

which is a riddle to me, because it is there:

    $ ls /usr/lib/x86_64-linux-gnu | grep lber
    liblber-2.4.so.2
    liblber-2.4.so.2.10.8

In fact, I should not be getting the undefined reference errors, because these functions are all there:

    $ nm /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2
    $ nm -D /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 | grep ber
    0000000000005fe0 T ber_alloc
    0000000000005fa0 T ber_alloc_t
    0000000000006d50 T ber_bprint
    0000000000007ec0 T ber_bvarray_add
    0000000000007df0 T ber_bvarray_add_x
    0000000000007cd0 T ber_bvarray_dup_x
    0000000000007cc0 T ber_bvarray_free
    0000000000007c30 T ber_bvarray_free_x
    0000000000007830 T ber_bvdup
    0000000000007700 T ber_bvecadd
    0000000000007650 T ber_bvecadd_x
    0000000000007640 T ber_bvecfree
    00000000000075c0 T ber_bvecfree_x
    00000000000075b0 T ber_bvfree
    0000000000007570 T ber_bvfree_x
    0000000000007c20 T ber_bvreplace
    0000000000007b80 T ber_bvreplace_x
    0000000000002c70 T ber_decode_oid
    0000000000006fc0 T ber_dump
    0000000000006000 T ber_dup
    0000000000007820 T ber_dupbv
    0000000000007710 T ber_dupbv_x
    0000000000004cc0 T ber_encode_oid
    0000000000006ab0 T ber_errno_addr
    0000000000006a30 T ber_error_print
    0000000000003a80 T ber_first_element
    0000000000006250 T ber_flatten
    0000000000006170 T ber_flatten2
    0000000000005f90 T ber_flush
    0000000000005db0 T ber_flush2
    0000000000005d70 T ber_free
    0000000000005d10 T ber_free_buf
    00000000000038d0 T ber_get_bitstringa
    0000000000003a70 T ber_get_boolean
    0000000000003150 T ber_get_enum
    0000000000003080 T ber_get_int
    0000000000006400 T ber_get_next
    0000000000003a20 T ber_get_null
    0000000000007ed0 T ber_get_option
    0000000000003730 T ber_get_stringa
    0000000000003810 T ber_get_stringal
    00000000000037a0 T ber_get_stringa_null
    0000000000003160 T ber_get_stringb
    00000000000031f0 T ber_get_stringbv
    0000000000003650 T ber_get_stringbv_null
    0000000000002e30 T ber_get_tag
    0000000000006380 T ber_init
    00000000000060c0 T ber_init2
    0000000000006160 T ber_init_w_nullc
    000000000020d168 B ber_int_errno_fn
    000000000020d178 B ber_int_log_proc
    000000000020d190 B ber_int_memory_fns
    000000000020d1a0 B ber_int_options
    0000000000009590 T ber_int_sb_close
    0000000000009610 T ber_int_sb_destroy
    0000000000009500 T ber_int_sb_init
    0000000000009710 T ber_int_sb_read
    00000000000099e0 T ber_int_sb_write
    00000000000069d0 T ber_len
    0000000000006f70 T ber_log_bprint
    00000000000070b0 T ber_log_dump
    0000000000007120 T ber_log_sos_dump
    0000000000007a50 T ber_mem2bv
    0000000000007950 T ber_mem2bv_x
    0000000000007460 T ber_memalloc
    0000000000007400 T ber_memalloc_x
    00000000000074d0 T ber_memcalloc
    0000000000007470 T ber_memcalloc_x
    0000000000007390 T ber_memfree
    0000000000007330 T ber_memfree_x
    0000000000007560 T ber_memrealloc
    00000000000074e0 T ber_memrealloc_x
    00000000000073f0 T ber_memvfree
    00000000000073a0 T ber_memvfree_x
    0000000000003b00 T ber_next_element
    0000000000002e80 T ber_peek_element
    0000000000002fd0 T ber_peek_tag
    0000000000005370 T ber_printf
    00000000000069e0 T ber_ptrlen
    0000000000005080 T ber_put_berval
    0000000000005100 T ber_put_bitstring
    0000000000005290 T ber_put_boolean
    0000000000004f30 T ber_put_enum
    0000000000004f50 T ber_put_int
    0000000000005220 T ber_put_null
    0000000000004f70 T ber_put_ostring
    0000000000005350 T ber_put_seq
    0000000000005360 T ber_put_set
    00000000000050b0 T ber_put_string
    000000000020d170 B ber_pvt_err_file
    0000000000006ad0 T ber_pvt_log_output
    000000000020d008 D ber_pvt_log_print
    0000000000006c20 T ber_pvt_log_printf
    000000000020d1e0 B ber_pvt_opt_on
    0000000000008f00 T ber_pvt_sb_buf_destroy
    0000000000008ee0 T ber_pvt_sb_buf_init
    0000000000009180 T ber_pvt_sb_copy_out
    00000000000093b0 T ber_pvt_sb_do_write
    0000000000008fe0 T ber_pvt_sb_grow_buffer
    00000000000094c0 T ber_pvt_socket_set_nonblock
    0000000000005a20 T ber_read
    0000000000005ad0 T ber_realloc
    0000000000006a20 T ber_remaining
    00000000000062f0 T ber_reset
    00000000000069f0 T ber_rewind
    0000000000003ba0 T ber_scanf
    00000000000080f0 T ber_set_option
    00000000000059a0 T ber_skip_data
    0000000000002f90 T ber_skip_element
    0000000000003020 T ber_skip_tag
    0000000000008d30 T ber_sockbuf_add_io
    0000000000009560 T ber_sockbuf_alloc
    0000000000009800 T ber_sockbuf_ctrl
    00000000000096a0 T ber_sockbuf_free
    000000000020d060 D ber_sockbuf_io_debug
    000000000020d0a0 D ber_sockbuf_io_fd
    000000000020d0e0 D ber_sockbuf_io_readahead
    000000000020d120 D ber_sockbuf_io_tcp
    000000000020d020 D ber_sockbuf_io_udp
    0000000000008e20 T ber_sockbuf_remove_io
    0000000000007130 T ber_sos_dump
    00000000000069c0 T ber_start
    0000000000005310 T ber_start_seq
    0000000000005330 T ber_start_set
    0000000000007940 T ber_str2bv
    0000000000007840 T ber_str2bv_x
    0000000000007ac0 T ber_strdup
    0000000000007a60 T ber_strdup_x
    0000000000007b70 T ber_strndup
    0000000000007b10 T ber_strndup_x
    0000000000007ad0 T ber_strnlen
    0000000000005c00 T ber_write

ldd also shows that libldap is referencing the right liblber:

    $ ldd /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2 | grep lber
        liblber-2.4.so.2 => /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 (0x00007f28c8bdc000)

Does anyone have any ideas? I don't… If I forgot any details, please just let me know, and I'll add them!
At least in Debian (and derivatives thereof), a shared library's development files are split off into a separate binary package:

> If there are development files associated with a shared library, the source package needs to generate a binary development package named libraryname-dev, or if you need to support multiple development versions at a time, librarynameapiversion-dev. Installing the development package must result in installation of all the development files necessary for compiling programs against that shared library.

"Development files" in this context mostly means C/C++ header files, but importantly often includes a symbolic link to the shared library itself:

> The development package should contain a symlink for the associated shared library without a version number. For example, the libgdbm-dev package should include a symlink from /usr/lib/libgdbm.so to libgdbm.so.3.0.0. This symlink is needed by the linker (ld) when compiling packages, as it will only look for libgdbm.so when compiling dynamically.

In this case, you already have the shared libraries

    liblber-2.4.so.2
    liblber-2.4.so.2.10.8

in /usr/lib/x86_64-linux-gnu, but do not appear to have the symbolic link /usr/lib/x86_64-linux-gnu/liblber.so, which is provided by the corresponding development package, libldap2-dev.
Weird linking issue with libldap using cmake
1,311,339,680,000
I am trying to compile a program (on Ubuntu 14.04, 64-bit) that requires binutils with multiarch support (recommended version 2.20). I have installed binutils-multiarch 2.24 and the dev package from the distro repository. However, ld fails to find a few functions (print_insn_big_arm, print_insn_big_mips, print_insn_little_arm and print_insn_little_mips). I suppose either there is a version mismatch or the exact .so files are not found correctly. The flag "-L/usr/lib" is passed to g++, and /usr/lib is where the files installed by binutils-multiarch-dev reside, so I'm confused as to what exactly the problem is. Has someone faced such issues when using binutils-multiarch?
So everything was correctly installed. It turns out that the program expected libopcodes.so to be symlinked to the multi-arch version and not the regular version. Correcting the symlinks fixed the issue.
ld cannot find print_insn_big_mips(and few others) despite binutils-multiarch-dev installed
1,311,339,680,000
I have a small application that was tested on Linux and it worked. Now I would like to build the same code on FreeBSD. To build it on FreeBSD I needed to change my Makefile a little. Here is my amended version:

    CXX := gcc
    LDFLAGS += -L/usr/local/lib -R/usr/local/lib -L/usr/lib -R/usr/lib -L/usr/local/include -R/usr/local/include -L/usr/include -R/usr/include
    CXXFLAGS += -pedantic -Wall -Wextra -std=c++17
    LIBS += -lprotobuf -lstdc++
    INCL += -I/usr/local/include

    SRCS := my_app.cpp \
            file1.pb.cc \
            file2.pb.cc

    OBJS := $(SRCS:% = %.o)

    target := my_app

    all:
    	$(CXX) $(OBJS) -o $(target) $(LIBS) $(INCL) $(LDFLAGS)

    %.o:%.cpp
    	$(CXX) $(CXXFLAGS) $(INCL) $(LDFLAGS) -c $^ -o $@

    clean:
    	rm -rf *o $(target)

The problem is that I get a lot of linker errors, all of them related to Google protobuf functions. I am including one of them below:

    /usr/local/bin/ld: /tmp//ccpo2Qek.o: in function `main':
    my_app.cpp:(.text+0x3a4): undefined reference to `google::protobuf::MessageLite::SerializeAsString[abi:cxx11]() const'

To build the application I use gmake. I have installed protobuf on my FreeBSD system using pkg install. I can find some Google protobuf .h files in /usr/local/include and some protobuf .so libraries in /usr/local/lib. I tried to add these locations to LDFLAGS but it still doesn't work. Thank you in advance for any help.
I replaced gcc with c++ and now it works. (Likely because on FreeBSD c++ is the base-system Clang, which uses the same C++ standard library and ABI that the packaged protobuf was built with, whereas gcc links against GNU libstdc++; the gcc-compiled objects then reference std::string symbols with the [abi:cxx11] tag that the installed libprotobuf does not export.)
FreeBSD - problem with linking protobuf
1,418,601,686,000
I'm trying to run a game called "Dofus" in Manjaro Linux. I've installed it with packer, which put it under the /opt/ankama folder. This folder (and every file inside it) is owned by the root user and the games group. As instructed by the installing package, I've added myself (user familia) to the games group (by not doing so, "I would have to input my password every time I tried to run the updater"). However, when running the game, it crashes after inputting my password (which shouldn't be required). Checking the logs, I've got some errors like these:

    [29/08 20:44:07.114]{T001}INFO c/net/NetworkAccessManager.cpp L87 : Starting request GET http://dl.ak.ankama.com/updates/uc1/projects/dofus2/updates/check.9554275D
    [29/08 20:44:07.291]{T001}INFO c/net/NetworkAccessManager.cpp L313 : Request GET http://dl.ak.ankama.com/updates/uc1/projects/dofus2/updates/check.9554275D Finished (status : 200)
    [29/08 20:44:07.292]{T001}ERROR n/src/update/UpdateProcess.cpp L852 : Can not cache script data

So, I suspect Permission Denied errors.

[Screenshot: an error message shown a moment after starting.] It translates to "An error has happened while writing to the disk - verify if you have the sufficient rights and enough disk space".

Then, after some research, I came across auditd, which can log file accesses in a folder. After setting it up and checking which file accesses were unsuccessful, I found that all of those errors actually refer to a single file, /opt/ankama/transition/transition, with a syscall to open. This file's permissions are rwxrwxr-x (775). So I have rwx permissions on it, yet it gives me an exit -13 error, which is EACCES (Permission Denied). I've already tried rebooting the computer and logging out and back in. Neither worked. If I set the folder ownership to familia:games, it runs with no trouble; I don't even need to input my password. However, it doesn't seem right this way.
Any ideas why I get Permission Denied errors even though I have read/write/execute permissions?

Mark has said that I could need +x permissions on all directories of the path prefix. The path itself is /opt/ankama/transition/transition. The permissions for the path prefixes are:

    /opt                   - drwxr-xr-x (755), ownership root:root
    /opt/ankama            - drwxr-xr-x (755), ownership root:games
    /opt/ankama/transition - drwxrwxr-x (775), ownership root:games

However, one thing that I've noticed is that all subfolders of /opt/ankama are 775, even though the folder itself is 755. I don't think this means anything, and changing the permissions to 775 doesn't work. Also, Giel suggested that I could have AppArmor running on my system. However, running

    # cat /sys/module/apparmor/parameters/enabled

gives me N.
First, when you add yourself to a group, the change is not applied immediately. The easiest thing is to log out and log back in.

Then there are the write permissions of the data files (as mentioned already in some of the comments). However, the solutions are not good for security:

1. Add a group for the game. Do not add any user to this group.
2. Make the game readable and executable: chmod -R ugo+rX game-directory
3. Give write permissions to the group only and no-one else: chmod -R ug+w,o-w game-directory
4. Add the game files to the group: chgrp -R game-group game-directory, and make new files inherit it: chmod -R g+s game-directory

Or just:

    addgroup game-group
    chgrp -R game-group game-directory
    chmod -R u=rwX,g=rwXs,o=rX game-directory

If the game needs to change permissions, then you can do the same but for a user instead of the group, i.e.:

    adduser game-owner
    addgroup game-group
    chown -R game-owner:game-group game-directory
    chmod -R u=rwXs,g=rwXs,o=rX game-directory
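A detail worth noting in those chmod invocations is the capital X: unlike lowercase x, it adds execute permission only to directories and to files that already have some execute bit set, so plain data files stay non-executable. A quick sketch in a scratch directory:

```shell
mkdir -p /tmp/perm-demo/dir
touch /tmp/perm-demo/dir/data
chmod 600 /tmp/perm-demo/dir/data
chmod 700 /tmp/perm-demo/dir

# +rX: everyone can read; execute is added to the directory only
chmod -R ugo+rX /tmp/perm-demo/dir
ls -ld /tmp/perm-demo/dir      | cut -c1-10   # drwxr-xr-x
ls -l  /tmp/perm-demo/dir/data | cut -c1-10   # -rw-r--r--
```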
Why do I get "Permission Denied" errors even though I have group permission?
1,418,601,686,000
I am trying to configure a CentOS 7 running in VirtualBox to send its audit logs to the host, which is FreeBSD 10.3. Ideally, I'd like to receive the logs with FreeBSD's auditdistd(8), but for now I'd just like to be able to use netcat for that. My problem is that netcat doesn't get any data.

Details

When I run service auditd status I get the following results:

    Redirecting to /bin/systemctl status auditd.service
    auditd.service - Security Auditing Service
       Loaded: loaded (/usr/lib/systemd/system/auditd.service; enabled; vendor preset: enabled)
       Active: active (running) since Fri 2016-08-19 11:35:42 CEST; 3s ago
      Process: 2216 ExecStartPost=/sbin/augenrules --load (code=exited, status=1/FAILURE)
     Main PID: 2215 (auditd)
       CGroup: /system.slice/auditd.service
               ├─2215 /sbin/auditd -n
               └─2218 /sbin/audispd

    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote was restarted
    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote terminated unexpectedly
    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote was restarted
    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote terminated unexpectedly
    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote was restarted
    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote terminated unexpectedly
    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote was restarted
    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote terminated unexpectedly
    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote has exceeded max_restarts
    Aug 19 11:35:42 hephaistos audispd[2218]: plugin /sbin/audisp-remote was restarted

Setup

Network Setup

CentOS and FreeBSD are connected on a host-only network. I've assigned them the following IPs:

    CentOS:  192.168.56.101
    FreeBSD: 192.168.56.1

FreeBSD Setup

I've got netcat listening on port 60:

    nc -lk 60

The connection works. I can use nc 192.168.56.1 60 on CentOS to send data to FreeBSD.
CentOS Setup

The kernel version is: 4.7.0-1.el7.elrepo.x86_64 #1 SMP Sun Jul 24 18:15:29 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux. The version of the Linux Audit userspace is 2.6.6. auditd is running and actively logging to /var/log/audit.log. The auditing rules in /etc/audit/rules.d/ are well configured.

The configuration of /etc/audisp/audisp-remote.conf looks like this:

    remote-server = 192.168.56.1
    port = 60
    local_port = any
    transport = tcp
    mode = immediate

I've got two default files in /etc/audisp/plugins.d/: syslog.conf and af_unix.conf, and both of them are not active. I've added af-remote.conf and it looks like this:

    # This file controls the audispd data path to the
    # remote event logger. This plugin will send events to
    # a remote machine (Central Logger).

    active = yes
    direction = out
    path = /sbin/audisp-remote
    type = always
    #args =
    format = string

It is a modified example from the official repository (link). Here's the content of /etc/audisp/audispd.conf:

    q_depth = 150
    overflow_action = SYSLOG
    priority_boost = 4
    max_restarts = 10
    name_format = HOSTNAME

I'll be happy to provide more details if needed.
I am not sure if everything here is needed to succeed. Nevertheless, this is a configuration which works, so that I am able to receive Linux Audit logs with a netcat on FreeBSD.

CentOS:/etc/audisp/audisp-remote.conf:

    remote_server = 192.168.56.1
    port = 60
    local_port = 60
    transport = tcp
    mode = immediate
    queue_depth = 200
    format = managed

CentOS:/etc/audisp/plugins.d/au-remote.conf:

    active = yes
    direction = out
    path = /sbin/audisp-remote
    type = always
    args = /etc/audisp/audisp-remote.conf
    format = string

CentOS:/etc/audit/auditd.conf:

    local_events = yes
    log_file = /var/log/audit/audit.log
    # Send logs to the server. Don't save them.
    write_logs = no
    log_format = RAW
    log_group = root
    priority_boost = 8
    num_logs = 5
    disp_qos = lossy
    dispatcher = /sbin/audispd
    name_format = hostname
    max_log_file = 6
    max_log_file_action = ROTATE
    action_mail_acct = root
    space_left = 75
    space_left_action = SYSLOG
    admin_space_left = 50
    admin_space_left_action = SUSPEND
    disk_full_action = SUSPEND
    disk_error_action = SUSPEND
    ##tcp_listen_port =
    tcp_listen_queue = 5
    tcp_max_per_addr = 1
    use_libwrap = yes
    ##tcp_client_ports = 1024-65535
    tcp_client_max_idle = 0
    enable_krb5 = no
    krb5_principal = auditd
    ##krb5_key_file = /etc/audit/audit.key
    distribute_network = no

FreeBSD:/etc/hosts.allow:

    ALL : ALL : allow

I don't know if this one is needed, though, and it might be a bad idea.

That's it. Now you just have to run nc -lk 60 on FreeBSD and service auditd restart on CentOS. In my case, however, netcat seems to be receiving/printing every record at least two times, which seems rather unusual.
How to send audit logs with audisp-remote and receive them with netcat
1,418,601,686,000
We recently implemented some auditd rules in response to an external security audit. My colleague offered some input on them and suggested adding -f 2 to /etc/audit.rules. I can't think of an instance when I would want to induce a kernel panic outside of testing. Can anyone suggest real-world, production situations that would warrant this?
auditctl -f 2 causes a panic, essentially, when the audit mechanism is unable to operate properly. There are high-security environments where proper access controls and full logging are critical, and if any logging fails, the system must be stopped (at that point, the technician on duty has already been paged). Financial transactions tend to be like that. auditctl -f 2 is for such environments.
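For completeness, making that behaviour persistent is a matter of a couple of lines in the audit rules file. This is only an illustrative fragment (the -b backlog value is an arbitrary example, raised so that bursts alone don't trip the failure action):

```
# /etc/audit/rules.d/audit.rules (fragment)
# Generous backlog so bursts alone do not trigger the failure action
-b 8192
# Failure action: 0 = silent, 1 = printk, 2 = panic the kernel
-f 2
```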
Can someone give an example as to why I'd want to induce a kernel panic using auditd?
1,418,601,686,000
I'm aware of how to audit for changes to the /etc/sysconfig/iptables file in CentOS/RHEL 6 and earlier, but how do I audit for changes made only to the running configuration?
The following auditctl rule should suffice:

    [root@vh-app2 audit]# auditctl -a exit,always -F arch=b64 -F a2=64 -S setsockopt -k iptablesChange

Testing the change:

    [root@vh-app2 audit]# iptables -A INPUT -j ACCEPT
    [root@vh-app2 audit]# ausearch -k iptablesChange
    ----
    time->Mon Jun 1 15:46:45 2015
    type=CONFIG_CHANGE msg=audit(1433188005.842:122): auid=90328 ses=3 op="add rule" key="iptablesChange" list=4 res=1
    ----
    time->Mon Jun 1 15:47:22 2015
    type=SYSCALL msg=audit(1433188042.907:123): arch=c000003e syscall=54 success=yes exit=0 a0=3 a1=0 a2=40 a3=7dff50 items=0 ppid=55654 pid=65141 auid=90328 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=3 comm="iptables" exe="/sbin/iptables-multi-1.4.7" key="iptablesChange"
    type=NETFILTER_CFG msg=audit(1433188042.907:123): table=filter family=2 entries=6
    [root@vh-app2 audit]# ps -p 55654
      PID TTY          TIME CMD
    55654 pts/0    00:00:00 bash
    [root@vh-app2 audit]# tty
    /dev/pts/0
    [root@vh-app2 audit]# cat /proc/$$/loginuid
    90328
    [root@vh-app2 audit]#

As you can see from the above output, after auditing for calls to setsockopt when optname (the a2 field) is IPT_SO_SET_REPLACE (which is 64 decimal per the Linux kernel source code), it was able to log changes to the running iptables configuration. I was then able to catch the relevant audit information, such as the user's loginuid (since they would likely have sudo'd to root prior to updating the firewall), as well as the PID of the calling program.
Audit on changes to the running iptables configuration
1,418,601,686,000
auditd is sending logs to /var/log/messages and we want to disable that. How to do that? In /etc/audisp/plugins.d/syslog.conf I changed active to no, but it is still sending logs to syslog.
Edit /etc/audisp/plugins.d/syslog.conf and change args = LOG_INFO to this:

    args = LOG_LOCAL6

Then edit /etc/rsyslog.conf and add local6.none to the "some catch-all log files" block so it's like this:

    *.=info;*.=notice;*.=warn;\
        auth,authpriv.none;\
        cron,daemon.none;\
        mail,news.none;\
        local6.none        -/var/log/messages

This was adapted from this post.
Disable syslog logging for auditd
1,418,601,686,000
I'm tailing audit.log and piping it into ausearch and then aureport, with the aim of getting a simple stream of modified files:

    tail -f /var/log/audit/audit.log | ausearch -k my_key | aureport -f --success -i

While aureport seems to do the job of correlating and combining multiple records, it doesn't seem to merge the two PATH lines that auditd logs for each file - for example, if someone runs commands that specify relative paths (rather than absolute ones), aureport shows something like:

    File Report
    ===============================================
    # date time file syscall success exe auid event
    ===============================================
    1. 13/01/18 21:45:44 myfile open yes /usr/bin/touch user 6229
    2. 13/01/18 21:45:46 myfile open yes /usr/bin/touch user 6230

Is there any way to get aureport to show the full path instead?
You can do ausearch -k my_key --format text or ausearch -k my_key --format csv, without piping to aureport. You can also filter by start and end dates (--start, --end), by uid (--uid 123), and by result (--success yes|no).
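If you'd rather stay with the raw log, an absolute path can be reconstructed by joining the cwd= field of the CWD record with a relative name= from the following PATH record of the same event. A rough awk sketch over a made-up sample record pair (field layout assumed from the standard RAW format; this is not a complete audit parser):

```shell
cat > /tmp/audit-sample.log <<'EOF'
type=CWD msg=audit(1515879944.123:6229): cwd="/home/user/work"
type=PATH msg=audit(1515879944.123:6229): item=0 name="myfile" inode=1234
EOF

awk '
/^type=CWD/  { match($0, /cwd="[^"]*"/);  cwd  = substr($0, RSTART + 5, RLENGTH - 6) }
/^type=PATH/ { match($0, /name="[^"]*"/); name = substr($0, RSTART + 6, RLENGTH - 7)
               if (name !~ /^\//) name = cwd "/" name
               print name }
' /tmp/audit-sample.log    # /home/user/work/myfile
```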
Can aureport show the full path to files?
1,418,601,686,000
I recently installed the auditd package on my Debian machine. I did some testing with auditctl, creating a single rule to watch a directory, proved something, and then removed and purged auditd. Subsequently, I'm still seeing these entries in kern.log. May 1 08:29:55 trinity kernel: [5654985.963656] type=1325 audit(1462087795.379:71): table=filter family=2 entries=58 May 1 08:29:55 trinity kernel: [5654985.963736] type=1300 audit(1462087795.379:71): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bf9a75a0 a2=b7750ff4 a3=2250 items=0 ppid=13411 pid=13412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 1 11:29:33 trinity kernel: [5665764.295688] type=1325 audit(1462098573.714:72): table=filter family=2 entries=57 May 1 11:29:33 trinity kernel: [5665764.295765] type=1300 audit(1462098573.714:72): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfda2ba0 a2=b77adff4 a3=22e4 items=0 ppid=32410 pid=32411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 1 19:48:03 trinity kernel: [5695674.149293] type=1325 audit(1462128483.567:73): table=filter family=2 entries=58 May 1 19:48:03 trinity kernel: [5695674.149370] type=1300 audit(1462128483.567:73): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bffb3910 a2=b76cfff4 a3=2378 items=0 ppid=20765 pid=20766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 1 20:40:53 trinity kernel: [5698844.383281] type=1325 audit(1462131653.801:74): table=filter family=2 entries=59 May 1 20:40:53 trinity kernel: [5698844.383357] type=1300 audit(1462131653.801:74): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfe7d880 a2=b7761ff4 a3=22e4 items=0 ppid=26521 pid=26522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 2 05:53:28 trinity kernel: [5731999.457579] type=1325 audit(1462164808.877:75): table=filter family=2 entries=58 May 2 05:53:28 trinity kernel: [5731999.457657] type=1300 audit(1462164808.877:75): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfc307b0 a2=b77a8ff4 a3=2250 items=0 ppid=20606 pid=20607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 2 08:02:07 trinity kernel: [5739717.728041] type=1325 audit(1462172527.145:76): table=filter family=2 entries=57 May 2 08:02:07 trinity kernel: [5739717.728130] type=1300 audit(1462172527.145:76): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfb655f0 a2=b76f7ff4 a3=21bc items=0 ppid=2530 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 2 09:36:04 trinity kernel: [5745355.212056] type=1325 audit(1462178164.630:77): table=filter family=2 entries=56 May 2 09:36:04 trinity kernel: [5745355.212135] type=1300 audit(1462178164.630:77): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfb26040 a2=b7764ff4 a3=2250 items=0 ppid=12830 pid=12831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 2 10:37:32 trinity kernel: [5749043.125431] type=1325 audit(1462181852.547:78): table=filter family=2 entries=57 May 2 10:37:32 trinity kernel: [5749043.125507] type=1300 audit(1462181852.547:78): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfae3220 a2=b76e7ff4 a3=21bc items=0 ppid=19175 pid=19176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 2 12:14:13 trinity kernel: 
[5754843.852220] type=1325 audit(1462187653.271:79): table=filter family=2 entries=56 May 2 12:14:13 trinity kernel: [5754843.852297] type=1300 audit(1462187653.271:79): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfe58c60 a2=b76ecff4 a3=2128 items=0 ppid=29308 pid=29309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 2 12:41:59 trinity kernel: [5756510.071418] type=1325 audit(1462189319.490:80): table=filter family=2 entries=55 May 2 12:41:59 trinity kernel: [5756510.071496] type=1300 audit(1462189319.490:80): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfe31480 a2=b7722ff4 a3=2094 items=0 ppid=32586 pid=32587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 2 12:58:14 trinity kernel: [5757485.373768] type=1325 audit(1462190294.794:81): table=filter family=2 entries=54 May 2 12:58:14 trinity kernel: [5757485.373846] type=1300 audit(1462190294.794:81): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bf8cb380 a2=b7754ff4 a3=2128 items=0 ppid=1736 pid=1737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 2 14:34:51 trinity kernel: [5763282.057294] type=1325 audit(1462196091.475:82): table=filter family=2 entries=55 May 2 14:34:51 trinity kernel: [5763282.057370] type=1300 audit(1462196091.475:82): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfce29f0 a2=b7736ff4 a3=2094 items=0 ppid=12057 pid=12058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) May 2 15:31:28 trinity kernel: [5766679.552808] type=1325 audit(1462199488.973:83): table=filter family=2 entries=54 May 2 15:31:28 trinity kernel: [5766679.552884] type=1300 
audit(1462199488.973:83): arch=40000003 syscall=102 success=yes exit=0 a0=e a1=bfc402f0 a2=b7718ff4 a3=2128 items=0 ppid=18365 pid=18366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/sbin/xtables-multi" key=(null) This suggests that the iptables command is for some reason generating an audit alert. These did not show up prior to the installation and removal of auditd. Checking in /var/log for that timestamp suggests these relate to fail2ban changing the iptables config to add banned ip addresses. I'm fine with the trigger, but I can't work out how to disable these, given I've removed auditd (and hence, auditctl). Re-installing auditd and running auditctl -l returns no rules. Why is iptables now generating these entries in kern.log, and how do I revert back to the config prior to installing auditd? Debian version is 7.10. Update: So interestingly, during the period where auditd was re-installed, the kernel entries don't turn up, they only turn up when it's removed. So they didn't exist at all, then I installed auditd and they still didn't exist, then I removed auditd and they started showing up. Installing auditd suppresses them again, and uninstalling it results in them showing up. 
From apt's history.log, Start-Date: 2016-04-26 11:47:13 Commandline: apt-get install auditd Install: auditd:i386 (1.7.18-1.1) End-Date: 2016-04-26 11:47:20 Start-Date: 2016-04-26 11:48:39 Commandline: apt-get remove auditd Remove: auditd:i386 (1.7.18-1.1) End-Date: 2016-04-26 11:48:42 Start-Date: 2016-04-26 11:48:46 Commandline: apt-get purge auditd Purge: auditd:i386 () End-Date: 2016-04-26 11:48:47 Start-Date: 2016-05-03 11:17:43 Commandline: apt-get install auditd Install: auditd:i386 (1.7.18-1.1) End-Date: 2016-05-03 11:17:50 Start-Date: 2016-05-03 14:46:14 Commandline: apt-get remove auditd Remove: auditd:i386 (1.7.18-1.1) End-Date: 2016-05-03 14:46:17 Start-Date: 2016-05-03 14:47:24 Commandline: apt-get purge auditd Purge: auditd:i386 () End-Date: 2016-05-03 14:47:25 And then from kern.log, root@trinity:/var/log# cat kern.log* | grep filter | sort Apr 26 13:30:54 trinity kernel: [5241045.164714] type=1325 audit(1461673854.583:9): table=filter family=2 entries=62 Apr 26 13:32:53 trinity kernel: [5241164.339388] type=1325 audit(1461673973.758:10): table=filter family=2 entries=63 Apr 26 22:05:15 trinity kernel: [5271906.481895] type=1325 audit(1461704715.901:11): table=filter family=2 entries=62 Apr 27 02:28:01 trinity kernel: [5287671.603861] type=1325 audit(1461720481.020:12): table=filter family=2 entries=61 Apr 27 08:44:33 trinity kernel: [5310263.791931] type=1325 audit(1461743073.208:13): table=filter family=2 entries=60 Apr 27 11:07:33 trinity kernel: [5318844.230913] type=1325 audit(1461751653.650:14): table=filter family=2 entries=59 Apr 27 11:11:25 trinity kernel: [5319076.553128] type=1325 audit(1461751885.972:15): table=filter family=2 entries=58 Apr 27 12:31:29 trinity kernel: [5323879.969177] type=1325 audit(1461756689.387:16): table=filter family=2 entries=59 Apr 27 16:22:10 trinity kernel: [5337721.409895] type=1325 audit(1461770530.830:17): table=filter family=2 entries=58 Apr 27 17:18:25 trinity kernel: [5341095.909392] type=1325 
audit(1461773905.329:18): table=filter family=2 entries=59 Apr 27 20:25:45 trinity kernel: [5352335.879430] type=1325 audit(1461785145.297:19): table=filter family=2 entries=60 Apr 27 21:19:06 trinity kernel: [5355537.157802] type=1325 audit(1461788346.575:20): table=filter family=2 entries=59 Apr 27 21:23:49 trinity kernel: [5355820.549272] type=1325 audit(1461788629.970:21): table=filter family=2 entries=58 Apr 27 21:53:23 trinity kernel: [5357593.916306] type=1325 audit(1461790403.338:22): table=filter family=2 entries=57 Apr 28 01:32:28 trinity kernel: [5370739.384433] type=1325 audit(1461803548.804:23): table=filter family=2 entries=58 Apr 28 03:35:24 trinity kernel: [5378115.178977] type=1325 audit(1461810924.598:24): table=filter family=2 entries=59 Apr 28 04:44:17 trinity kernel: [5382247.691370] type=1325 audit(1461815057.108:25): table=filter family=2 entries=60 Apr 28 05:47:42 trinity kernel: [5386052.769582] type=1325 audit(1461818862.189:26): table=filter family=2 entries=59 Apr 28 06:49:40 trinity kernel: [5389770.729248] type=1325 audit(1461822580.149:27): table=filter family=2 entries=58 Apr 28 07:03:26 trinity kernel: [5390596.850019] type=1325 audit(1461823406.267:28): table=filter family=2 entries=59 Apr 28 07:54:25 trinity kernel: [5393655.953013] type=1325 audit(1461826465.374:29): table=filter family=2 entries=60 Apr 28 17:19:02 trinity kernel: [5427533.079358] type=1325 audit(1461860342.498:30): table=filter family=2 entries=59 Apr 28 17:40:50 trinity kernel: [5428840.833735] type=1325 audit(1461861650.252:31): table=filter family=2 entries=60 Apr 28 22:11:09 trinity kernel: [5445060.419843] type=1325 audit(1461877869.838:32): table=filter family=2 entries=59 Apr 28 22:20:05 trinity kernel: [5445596.145146] type=1325 audit(1461878405.563:33): table=filter family=2 entries=60 Apr 29 01:34:17 trinity kernel: [5457247.685479] type=1325 audit(1461890057.103:34): table=filter family=2 entries=61 Apr 29 03:08:41 trinity kernel: [5462912.272201] 
type=1325 audit(1461895721.690:35): table=filter family=2 entries=62 Apr 29 04:05:43 trinity kernel: [5466333.873413] type=1325 audit(1461899143.292:36): table=filter family=2 entries=63 Apr 29 05:27:26 trinity kernel: [5471237.463612] type=1325 audit(1461904046.880:37): table=filter family=2 entries=64 Apr 29 05:57:55 trinity kernel: [5473065.931068] type=1325 audit(1461905875.349:38): table=filter family=2 entries=63 Apr 29 07:43:16 trinity kernel: [5479387.398790] type=1325 audit(1461912196.819:39): table=filter family=2 entries=62 Apr 29 07:59:20 trinity kernel: [5480350.703929] type=1325 audit(1461913160.122:40): table=filter family=2 entries=61 Apr 29 09:01:10 trinity kernel: [5484060.685008] type=1325 audit(1461916870.105:41): table=filter family=2 entries=62 Apr 29 09:08:56 trinity kernel: [5484527.328113] type=1325 audit(1461917336.744:42): table=filter family=2 entries=61 Apr 29 09:28:40 trinity kernel: [5485710.910410] type=1325 audit(1461918520.327:43): table=filter family=2 entries=60 Apr 29 09:35:24 trinity kernel: [5486115.462325] type=1325 audit(1461918924.881:44): table=filter family=2 entries=59 Apr 29 11:58:55 trinity kernel: [5494725.939858] type=1325 audit(1461927535.357:45): table=filter family=2 entries=58 Apr 29 12:29:44 trinity kernel: [5496575.471597] type=1325 audit(1461929384.889:46): table=filter family=2 entries=57 Apr 29 14:38:01 trinity kernel: [5504271.706427] type=1325 audit(1461937081.127:47): table=filter family=2 entries=58 Apr 29 17:01:28 trinity kernel: [5512879.168191] type=1325 audit(1461945688.583:48): table=filter family=2 entries=57 Apr 29 19:31:41 trinity kernel: [5521892.127411] type=1325 audit(1461954701.545:49): table=filter family=2 entries=56 Apr 29 19:34:02 trinity kernel: [5522033.333315] type=1325 audit(1461954842.755:50): table=filter family=2 entries=55 Apr 29 20:00:13 trinity kernel: [5523604.428545] type=1325 audit(1461956413.851:51): table=filter family=2 entries=54 Apr 29 20:34:45 trinity kernel: 
[5525676.172737] type=1325 audit(1461958485.593:52): table=filter family=2 entries=53 Apr 29 20:57:39 trinity kernel: [5527050.000970] type=1325 audit(1461959859.421:53): table=filter family=2 entries=54 Apr 29 21:03:22 trinity kernel: [5527393.467046] type=1325 audit(1461960202.886:54): table=filter family=2 entries=53 Apr 29 23:18:37 trinity kernel: [5535508.254569] type=1325 audit(1461968317.673:55): table=filter family=2 entries=52 Apr 30 00:29:58 trinity kernel: [5539788.920100] type=1325 audit(1461972598.339:56): table=filter family=2 entries=53 Apr 30 03:12:14 trinity kernel: [5549524.805118] type=1325 audit(1461982334.225:57): table=filter family=2 entries=54 Apr 30 03:56:03 trinity kernel: [5552154.294060] type=1325 audit(1461984963.713:58): table=filter family=2 entries=55 Apr 30 05:31:18 trinity kernel: [5557868.878686] type=1325 audit(1461990678.296:59): table=filter family=2 entries=54 Apr 30 05:51:28 trinity kernel: [5559079.495954] type=1325 audit(1461991888.912:60): table=filter family=2 entries=55 Apr 30 11:18:56 trinity kernel: [5578727.564823] type=1325 audit(1462011536.983:61): table=filter family=2 entries=56 Apr 30 11:38:34 trinity kernel: [5579905.149630] type=1325 audit(1462012714.569:62): table=filter family=2 entries=57 Apr 30 11:58:54 trinity kernel: [5581124.785297] type=1325 audit(1462013934.204:63): table=filter family=2 entries=56 Apr 30 12:28:32 trinity kernel: [5582903.150044] type=1325 audit(1462015712.567:64): table=filter family=2 entries=55 Apr 30 14:41:21 trinity kernel: [5590871.696820] type=1325 audit(1462023681.116:65): table=filter family=2 entries=54 Apr 30 17:58:37 trinity kernel: [5602708.432415] type=1325 audit(1462035517.855:66): table=filter family=2 entries=55 Apr 30 20:07:46 trinity kernel: [5610456.713610] type=1325 audit(1462043266.133:67): table=filter family=2 entries=56 May 1 00:15:50 trinity kernel: [5625341.571375] type=1325 audit(1462058150.990:68): table=filter family=2 entries=57 May 1 01:56:34 trinity 
kernel: [5631384.621056] type=1325 audit(1462064194.039:69): table=filter family=2 entries=58 May 1 03:47:50 trinity kernel: [5638061.478266] type=1325 audit(1462070870.899:70): table=filter family=2 entries=57 May 1 08:29:55 trinity kernel: [5654985.963656] type=1325 audit(1462087795.379:71): table=filter family=2 entries=58 May 1 11:29:33 trinity kernel: [5665764.295688] type=1325 audit(1462098573.714:72): table=filter family=2 entries=57 May 1 19:48:03 trinity kernel: [5695674.149293] type=1325 audit(1462128483.567:73): table=filter family=2 entries=58 May 1 20:40:53 trinity kernel: [5698844.383281] type=1325 audit(1462131653.801:74): table=filter family=2 entries=59 May 2 05:53:28 trinity kernel: [5731999.457579] type=1325 audit(1462164808.877:75): table=filter family=2 entries=58 May 2 08:02:07 trinity kernel: [5739717.728041] type=1325 audit(1462172527.145:76): table=filter family=2 entries=57 May 2 09:36:04 trinity kernel: [5745355.212056] type=1325 audit(1462178164.630:77): table=filter family=2 entries=56 May 2 10:37:32 trinity kernel: [5749043.125431] type=1325 audit(1462181852.547:78): table=filter family=2 entries=57 May 2 12:14:13 trinity kernel: [5754843.852220] type=1325 audit(1462187653.271:79): table=filter family=2 entries=56 May 2 12:41:59 trinity kernel: [5756510.071418] type=1325 audit(1462189319.490:80): table=filter family=2 entries=55 May 2 12:58:14 trinity kernel: [5757485.373768] type=1325 audit(1462190294.794:81): table=filter family=2 entries=54 May 2 14:34:51 trinity kernel: [5763282.057294] type=1325 audit(1462196091.475:82): table=filter family=2 entries=55 May 2 15:31:28 trinity kernel: [5766679.552808] type=1325 audit(1462199488.973:83): table=filter family=2 entries=54 May 2 15:58:13 trinity kernel: [5768283.694922] type=1325 audit(1462201093.113:84): table=filter family=2 entries=55 May 2 16:42:33 trinity kernel: [5770944.249180] type=1325 audit(1462203753.667:85): table=filter family=2 entries=56 May 2 23:25:56 trinity kernel: 
[5795147.404091] type=1325 audit(1462227956.820:86): table=filter family=2 entries=57 May 3 03:41:43 trinity kernel: [5810493.831850] type=1325 audit(1462243303.249:87): table=filter family=2 entries=58 May 3 04:44:46 trinity kernel: [5814276.874392] type=1325 audit(1462247086.292:88): table=filter family=2 entries=57 May 3 06:57:06 trinity kernel: [5822217.391993] type=1325 audit(1462255026.809:89): table=filter family=2 entries=56 May 3 08:21:19 trinity kernel: [5827270.101048] type=1325 audit(1462260079.522:90): table=filter family=2 entries=55 May 3 11:03:16 trinity kernel: [5836986.964890] type=1325 audit(1462269796.383:91): table=filter family=2 entries=54 May 3 16:19:19 trinity kernel: [5855950.133701] type=1325 audit(1462288759.553:306): table=filter family=2 entries=56 Kernel logs go back to March 14th, and the above shows the first entry for the audit stuff. There's a lot of data but you can see there's a gap between 11:03 and 16:19 today. However, during that time, fail2ban banned 3 IP addresses and made iptables updates. So while auditd was installed, no audit entries were created. 
2016-05-01 08:29:55,374 fail2ban.actions: WARNING [ssh] Unban 113.107.24.247 2016-05-01 11:29:33,708 fail2ban.actions: WARNING [ssh] Ban 52.37.98.155 2016-05-01 19:48:03,560 fail2ban.actions: WARNING [ssh] Ban 185.70.184.135 2016-05-01 20:40:53,795 fail2ban.actions: WARNING [ssh] Unban 185.103.252.142 2016-05-02 05:53:28,816 fail2ban.actions: WARNING [ssh] Unban 185.110.132.54 2016-05-02 08:02:07,030 fail2ban.actions: WARNING [ssh] Unban 202.203.179.129 2016-05-02 09:36:04,623 fail2ban.actions: WARNING [ssh] Ban 42.116.173.198 2016-05-02 10:37:32,536 fail2ban.actions: WARNING [ssh] Unban 125.212.232.159 2016-05-02 12:14:13,263 fail2ban.actions: WARNING [ssh] Unban 146.0.77.32 2016-05-02 12:41:59,482 fail2ban.actions: WARNING [ssh] Unban 112.217.150.112 2016-05-02 12:58:14,786 fail2ban.actions: WARNING [ssh] Ban 210.211.99.15 2016-05-02 14:34:51,468 fail2ban.actions: WARNING [ssh] Unban 179.43.144.43 2016-05-02 15:31:28,963 fail2ban.actions: WARNING [ssh] Ban 37.54.25.239 2016-05-02 15:58:13,105 fail2ban.actions: WARNING [ssh] Ban 125.212.232.63 2016-05-02 16:42:33,660 fail2ban.actions: WARNING [ssh] Ban 146.0.77.32 2016-05-02 23:25:56,812 fail2ban.actions: WARNING [ssh] Ban 193.201.225.31 2016-05-03 03:41:43,242 fail2ban.actions: WARNING [ssh] Unban 42.112.131.91 2016-05-03 04:44:46,285 fail2ban.actions: WARNING [ssh] Unban 173.208.220.131 2016-05-03 06:57:06,803 fail2ban.actions: WARNING [ssh] Unban 193.201.225.29 2016-05-03 08:21:19,512 fail2ban.actions: WARNING [ssh] Unban 185.22.65.27 2016-05-03 11:03:16,375 fail2ban.actions: WARNING [ssh] Ban 173.208.129.210 2016-05-03 13:30:55,106 fail2ban.actions: WARNING [ssh] Unban 58.187.224.226 2016-05-03 14:01:26,542 fail2ban.actions: WARNING [ssh] Ban 221.11.92.253 2016-05-03 14:32:17,009 fail2ban.actions: WARNING [ssh] Ban 82.204.67.66 2016-05-03 16:19:19,543 fail2ban.actions: WARNING [ssh] Ban 169.54.174.138
Audit entries are generated by the kernel regardless of whether any daemon is listening. As far as I can tell, syscall 102 on 32-bit x86 (arch=40000003) is socketcall() -- you can check with ausyscall i386 102 (I am afraid to install auditd again after all this :P) -- which is the multiplexed call iptables uses (via setsockopt) to replace rulesets. The audit message is not invoked by iptables itself, but somewhere in the kernel's netfilter code. You might get rid of it using auditctl -e 0 or by booting with audit=0. But the underlying problem will not be solved by this (the installation of auditd might have left this enabling trigger behind?). Further investigation should probably go with checking whether the latest Debian does the same and then grepping the kernel sources at the version which is shipped with Debian 7.

The kernel parameter is explained as follows:

audit=  [KNL] Enable the audit sub-system
        Format: { "0" | "1" } (0 = disabled, 1 = enabled)
        0 - kernel audit is disabled and can not be enabled
            until the next reboot
        unset - kernel audit is initialized but disabled and
            will be fully enabled by the userspace auditd.
        1 - kernel audit is initialized and partially enabled,
            storing at most audit_backlog_limit messages in
            RAM until it is fully enabled by the userspace
            auditd.
        Default: unset

So audit was not initialized before; the initialization was made by auditd, and after uninstalling, the messages are still generated but now caught by the kernel log. I am still not sure how to disable audit in kernel space other than rebooting with audit=0 set.
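Even without ausearch installed, the arch and syscall fields can be pulled out of the raw kern.log lines with standard tools; a rough sketch against one of the records above:

```shell
# Extract arch and syscall from a raw audit SYSCALL line (type=1300).
line='type=1300 audit(1462087795.379:71): arch=40000003 syscall=102 success=yes exit=0'
arch=$(printf '%s\n' "$line" | grep -o 'arch=[0-9a-f]*' | cut -d= -f2)
sc=$(printf '%s\n' "$line" | grep -o 'syscall=[0-9]*' | cut -d= -f2)
# arch=40000003 is 32-bit x86, where syscall 102 is socketcall
echo "$arch $sc"
```

This makes it easy to grep out which syscalls are tripping the audit subsystem, without reinstalling the audit userspace tools.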
Identifying source of audit messages in kern.log
1,418,601,686,000
I am writing a parser for Linux Audit and I stumbled upon some weird cases which don't seem to comply with the standard. My reference is the Red Hat documentation. A proper audit record should look like this: type=USER_CMD msg=audit(1464013671.517:403): pid=3569 uid=0 auid=1000 ses=7 msg='cwd="/root" cmd=123 terminal=pts/1 res=success' An invalid name=value field in a record Let's look at the following record: type=DAEMON_START msg=audit(1464013652.147:626): auditd start, ver=2.4 format=raw kernel=3.16.0-4-586 auid=4294967295 pid=3557 res=success The documentation says nothing about auditd start, which doesn't fit the name=value format. What is this? Where can I read about it? A comma and a space as a separator Additionally, the documentation says that Each record consists of several name=value pairs separated by a white space or a comma. It is clearly not true since we can see that auditd start, ver=2.4 are separated by a comma and a space. Why is it so? Where is the standard really described? Additional whitespaces in a record Let's look at the following record: type=CWD msg=audit(1464013682.961:409): cwd="/root" It has two spaces between type=CWD msg=audit(1464013682.961:409): and cwd="/root". It doesn't make any sense. In fact, I observed this behaviour only in records with type=CWD and cwd="/root". Why is it so? Note: I've generated those logs on a recent Debian.
So I solved a tiny part of the problem - I found out that auditd start, ver=2.2 is valid. I failed to find any documentation though. The only document I have is an example from the Red Hat manual: Example 7.5. Additional audit.log events The following Audit event records a successful start of the auditd daemon. The ver field shows the version of the Audit daemon that was started. type=DAEMON_START msg=audit(1363713609.192:5426): auditd start, ver=2.2 format=raw kernel=2.6.32-358.2.1.el6.x86_64 auid=500 pid=4979 subj=unconfined_u:system_r:auditd_t:s0 res=success The following Audit event records a failed attempt of user with UID of 500 to log in as the root user. type=USER_AUTH msg=audit(1364475353.159:24270): user pid=3280 uid=500 auid=500 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct="root" exe="/bin/su" hostname=? addr=? terminal=pts/0 res=failed' The sad thing is that these are only examples. I'd love to read the actual documentation of the standard since I cannot find it anywhere. Update I asked those questions on the official mailing list (see the full reply to my question). Here's what I've learnt: An invalid name=value field in a record It isn't clear to me why auditd start exists, but here's Steve Grubb's answer to my question. Where are all the elements like auditd start, user, etc. listed? I cannot find any document which specifies what can occur between the colon (separating the type and the msg=audit(…) from the fields) and the record's fields. There really is none, Libauparse takes care of all of this so that you don't have to. If you are wanting to do translation, you can feed the logs into auparse and then just format the event the way you want. Basically, the answer is hidden somewhere in the auparse library. A comma and a space as a separator Why are some records separated by a comma and a whitespace?
Example: type=DAEMON_START msg=audit(1363713609.192:5426): auditd start, ver=2.2 format=raw kernel=2.6.32-358.2.1.el6.x86_64 auid=500 pid=4979 subj=unconfined_u:system_r:auditd_t:s0 res=success A long time ago the records were meant to be both human readable (don't laugh) and machine consumable. Over time these have been converted name=value pairs. Even the one you mention above has been fixed. Additional whitespaces in a record This one has already been patched by Steve Grubb. The patch: https://www.redhat.com/archives/linux-audit/2016-July/msg00086.html
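For the parser itself, the msg=audit(timestamp:serial) header is the one part of every record with a fixed shape, so it is the safest thing to extract first. A minimal sed sketch:

```shell
# Pull the timestamp and event serial out of the record header.
rec='type=DAEMON_START msg=audit(1363713609.192:5426): auditd start, ver=2.2 res=success'
ts=$(printf '%s\n' "$rec" | sed -n 's/.*msg=audit(\([0-9.]*\):\([0-9]*\)).*/\1/p')
serial=$(printf '%s\n' "$rec" | sed -n 's/.*msg=audit(\([0-9.]*\):\([0-9]*\)).*/\2/p')
echo "$ts $serial"
```

Records sharing the same serial belong to the same event, which is how multi-record events (SYSCALL + CWD + PATH, etc.) are grouped by auparse.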
Undocumented format of Linux Audit log records
1,418,601,686,000
I have configured auditd to track some sensitive files on my system. Now I would like to have a script that is called each time auditd writes a line, with the $1 argument of that script being the line added. From what I read in the manual, auditd has no such option. Is there a way to do this anyway? If I have a cron script running every minute, I will have a problem determining which lines it should work on (which lines are new, if any).
Using the tail command like so:

tail -Fn0 /var/log/audit/audit.log | /sbin/script

where /sbin/script reads each new line from its standard input (rather than from $1):

while IFS= read -r line; do
    # something to do with the "$line" variable when it arrives
done
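The loop can be exercised without generating real audit events by substituting a static printf for tail -F; the read behaviour is identical:

```shell
# Stand-in for `tail -Fn0 /var/log/audit/audit.log`: feed two fake records.
printf 'type=SYSCALL msg=audit(1.0:1): ...\ntype=PATH msg=audit(1.0:1): ...\n' |
while IFS= read -r line; do
    echo "handling: $line"
done
```

Note that -F (rather than -f) matters here: it makes tail reopen the file after auditd rotates its log, so the pipeline keeps running across rotations.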
how to run a script on auditd events?
1,418,601,686,000
So, I have this trio of audit log entries:

type=AVC msg=audit(1488396169.095:2624951): avc: denied { setrlimit } for pid=16804 comm="bash" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:httpd_t:s0 tclass=process
type=SYSCALL msg=audit(1488396169.095:2624951): arch=c000003e syscall=160 success=no exit=-13 a0=1 a1=7ffe06c17350 a2=2 a3=7fea949f3eb0 items=0 ppid=15216 pid=16804 auid=4294967295 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=4294967295 comm="bash" exe="/usr/bin/bash" subj=system_u:system_r:httpd_t:s0 key=(null)
type=EOE msg=audit(1488396169.095:2624951):

On the AVC line, it's easy enough to immediately see that a bash process with the system_u:system_r:httpd_t:s0 context was denied permission to set a resource limit. On the SYSCALL line, a quick google for syscall=160 indicates that it's a setrlimit() call, which jives. What I don't know is which resource it was trying to modify.
So, in this case, we already know that the syscall in question was setrlimit. A search for setrlimit reveals there's a C library function by the same name that wraps the syscall. The function's documentation indicates that the first argument ("a0" in the SYSCALL line from the audit log) indicates the resource in question, but the manual only tells us symbol names, not numeric value. It does, however, tell us that the symbols are defined in the sys/resource.h header file. However, that file doesn't contain the actual values. To get the numeric values, it turns out we look in sysdeps/unix/sysv/linux/bits/resource.h. There, we find the various RLIMIT_ macros defined. Looking at those, we can find which resource was attempted to be modified. In this case, a0=1, and the macro corresponding to 1 turns out to be RLIMIT_FSIZE.
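For quick triage without opening the headers each time, the first few constants can be wrapped in a small lookup. The values here are taken from glibc's bits/resource.h on Linux, so treat them as platform-specific:

```shell
# Map the a0 argument of a setrlimit SYSCALL record to its RLIMIT_* name.
# Values from the Linux/glibc headers: CPU=0, FSIZE=1, DATA=2, STACK=3, CORE=4.
rlimit_name() {
  case "$1" in
    0) echo RLIMIT_CPU ;;
    1) echo RLIMIT_FSIZE ;;
    2) echo RLIMIT_DATA ;;
    3) echo RLIMIT_STACK ;;
    4) echo RLIMIT_CORE ;;
    *) echo "RLIMIT_?($1)" ;;
  esac
}
rlimit_name 1   # a0=1 from the record above -> RLIMIT_FSIZE
```

Extending the table to the remaining RLIMIT_* values is mechanical once the header is open.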
How do I dissect an SELinux SYSCALL message?
1,418,601,686,000
I am creating a parser/converter from the Linux Audit format. As I was studying the format, looking at examples and reading the documentation I stumbled upon a problem. Can I be sure that the field names inside a single record are unique? For example, is a record like this one is legal / appear in real world implementations: type=TYPE msg=audit(1.002:3): msg="the first msg field" msg="the second msg field" The second related question is whether I can there are there will be only one pid in an event? For example, is this event is legal / appear in real world implementations: type=TYPE1 msg=audit(1.002:3): pid=0 msg="texthere" type=TYPE2 msg=audit(1.002:3): pid=0 msg="differenttexthere"
According to Steve Grubb's reply on the official mailing list (link to the email): Steve's answer: Is it possible that there are duplicate fields in a record? Sometimes. I've tried to fix those when it happens. The problem is that not everyone runs their audit code by this mail list so that we can check it to see that its well formed. What I am planning to do is write an audit event validation suite that checks that events are well formed and that expected events are being written when they are supposed to and in the order that they are supoosed to. Cleaning up these events is high on my TODO list. Something like (which doesn't make much sense obviously): type=CWD msg=audit(1464013682.961:409): cwd="/root" cwd="/usr" Something like this will not happen, its more likely around auid and uid. The reason being that the kernel adds somethings automatically because its a trusted source of information. User space can write contradictory information. For example if a daemon is working on behalf of a user but its auid has not been set for the user, then you might see this. tl;dr: It is possible; however, it is uncommon and discouraged.
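A parser can still defend against the uncommon duplicates with a cheap key check. This naive sketch splits on spaces, so it would mis-tokenize quoted values containing spaces; a real tokenizer is needed for those:

```shell
# List any field names that occur more than once in a record.
rec='type=TYPE msg=audit(1.002:3): pid=0 msg="texthere" pid=0'
printf '%s\n' "$rec" | tr ' ' '\n' | grep -o '^[a-z]*=' | sort | uniq -d
# prints the duplicated keys: msg= and pid=
```

An empty result means every field name in the record is unique, which (per the reply above) is the common case.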
Can I be sure that the name of a Linux Audit record's field is unique?
1,418,601,686,000
I'm using a RHEL machine with SELinux enabled. I'd like to change the log file location of auditd to /mydir/log/audit.log. I can apply the security context system_u:object_r:auditd_log_t:s0 to this file. However, what should be the security context of the directory /mydir/log and parent directory /mydir, since they're going to be read/written by other daemons? Or should I just go the least complicated way and do semanage permissive -a auditd_t instead?
You're mostly there, you're using the semanage command. Since you already know that there's a correct context on /var/log/audit, the easiest thing is to set up a local selinux filecontext equivalence. So you'd run something like this: semanage fcontext -a -e /var/log/audit /mydir/log This tells SELinux to add (-a) a file context rule that says that /mydir/log will have all the equivalent (-e) file context as /var/log/audit. Once you've set the rule, you want to run restorecon -r -v /mydir/log to set the selinux attributes on /mydir/log to what the new policy wants.
SELinux security context of parent directories
1,418,601,686,000
I'm trying to figure out how to log/track when a user gets a Permission denied notice after attempting to access a file. I've read that adding a rule to /etc/audit/audit.rules can accomplish this. The only suggestion that I've seen mentioned appears to not work as intended. Or, at least, it does not do what I would like. It very well may work the way it is written. The rule is -a always,exit -F arch=b64 -S open -F success!=0 Actually, the suggestion at the link above does not include the arch option. I had to add that. When tailing /var/log/audit/audit.log I'm seeing everything that says success=yes. This includes when I click a window and change focus or enter key combinations to change between window functions. What I'm not seeing is anything relating to Permission denied to include success=no entries or anything about a specific file that I attempt to open knowing I don't have permissions on it. All I can say definitively is that when I grep for success=no in /var/log/audit/audit.log nothing is returned. What should the rule be? Or better yet, is this even actually possible? Is the solution above incorrect?
I've been tooling around and found that if I use success!=1 then audit.log will display entries that indicate success=no. This seems counter-intuitive to me since a non-zero exit code typically indicates a failure of some sort but !=1 could be anything including other failure exit codes as well as a success (0). Interestingly, though, those don't show up. An additional problem is that it does not indicate which file had the failed access. Instead, it only lists the command that ran when the failed exit code was returned. In my case, I was running cat /etc/shadow. So, instead of seeing type=SYSCALL msg=audit(1438754257.463:11451): arch=c000003e syscall=2 success=no exit=-13 a0=7ffea511f35f a1=0 a2=1ffffffffffe0000 a3=0 items=1 ppid=1650 pid=5489 auid=1000 uid=1000 gid=100 euid=1000 suid=1000 fsuid=1000 egid=100 sgid=100 fsgid=100 tty=pts0 ses=1 comm="cat" exe="/usr/bin/cat" key="access" type=CWD msg=audit(1438754257.463:11451): cwd="/home/msnyder" type=PATH msg=audit(1438754257.463:11451): item=0 name="/etc/shadow" inode=1131047 dev=00:20 mode=0100640 ouid=0 ogid=15 rdev=00:00 nametype=NORMAL I would only see type=SYSCALL msg=audit(1438752096.223:4952): arch=c000003e syscall=2 success=yes exit=3 a0=7f77d575c057 a1=80000 a2=1 a3=22 items=1 ppid=1650 pid=4873 auid=1000 uid=1000 gid=100 euid=1000 suid=1000 fsuid=1000 egid=100 sgid=100 fsgid=100 tty=pts0 ses=1 comm="cat" exe="/usr/bin/cat" key=(null) type=CWD msg=audit(1438752096.223:4952): cwd="/home/msnyder" I then looked at the audit.rules manpage. Eureka! The answer was in there all along: -a always,exit -F arch=b64 -S open,openat -F exit=-EACCES -F key=access -a always,exit -F arch=b64 -S open,openat -F exit=-EPERM -F key=access Those two rules combined solve the problem. Not only will it log the failed file access, but it will also log which file the access was attempted on. This results in the first three log entries above which includes the file name.
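One detail worth decoding when reading these records: `exit=-13` is a negated errno value, and the `-EACCES`/`-EPERM` in the rules name the same constants. A quick lookup (this sketch shells out to python3 purely for its errno table; any errno reference works):

```shell
# exit=-13 in a SYSCALL record means the call failed with errno 13
python3 -c 'import errno, os; print(errno.errorcode[13], "-", os.strerror(13))'
# → EACCES - Permission denied
```

So the two rules above catch exactly the two "permission denied" flavors: EACCES (13) and EPERM (1).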
Using auditd to capture "permission denied" notices
1,418,601,686,000
I'm not able to redirect the output of a command into a file when it's run from a cron job [root@mail /]# crontab -l */1 * * * * /sbin/ausearch -i > /rummy [root@mail /]# cat /rummy It's weird that when I don't give the -i option, I'm able to redirect it very well. [root@mail /]# crontab -l */1 * * * * /sbin/ausearch > /rummy [root@mail /]# cat /rummy usage: ausearch [options] -a,--event <Audit event id> search based on audit event id --arch <CPU> search based on the CPU architecture -c,--comm <Comm name> search based on command line name - - - Is there any syntax error, or am I missing something here? Note - "ausearch -i" gives me the below output on the terminal, and on redirecting the output to a file, it redirects it as-is. [root@server ~]# ausearch -i type=DAEMON_START msg=audit(05/22/2017 11:14:10.391:6858) : auditd start, ver=2.4.5 format=raw kernel=2.6.32-696.el6.x86_64 auid=unset pid=1319 subj=system_u:system_r:auditd_t:s0 res=success ---- type=CONFIG_CHANGE msg=audit(05/22/2017 11:14:10.519:5) : audit_backlog_limit=320 old=64 auid=unset ses=unset subj=system_u:system_r:auditctl_t:s0 res=yes ---- type=USER_ACCT msg=audit(05/22/2017 11:20:01.108:6) : user pid=2073 uid=root auid=unset ses=unset subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:accounting acct=root exe=/usr/sbin/crond hostname=? addr=? terminal=cron res=success' ---- type=CRED_ACQ msg=audit(05/22/2017 11:20:01.108:7) : user pid=2073 uid=root auid=unset ses=unset subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred acct=root exe=/usr/sbin/crond hostname=? addr=? terminal=cron res=success' ---- type=LOGIN msg=audit(05/22/2017 11:20:01.119:8) : pid=2073 uid=root subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old auid=unset new auid=root old ses=unset new ses=1 ----
The command does not produce output, but runs ok. You can see this because the file rummy got created. The ausearch utility seems to expect a "search criteria", and the empty output could be due to you not providing one. See the ausearch manual on your system for further information. After a bit of reading of the ausearch manual, I found the following: --input-logs Use the log file location from auditd.conf as input for searching. This is needed if you are using ausearch from a cron job. Doing some Googling confirms that this indeed may be the issue. One email describes the problem: You need to use the --input-logs option. If ausearch sees stdin as a pipe, it assumes that is where it gets its data from. The input logs option tells it to ignore the fact that stdin is a pipe and process the logs. Aureport has the same problem and option to fix it. This was fixed in the 1.6.7 general release and backported to the 1.6.5 RHEL5 release. There also seem to be users who do not solve this by using --input-logs, but it's not clear what else may be wrong, as there are never any follow-ups from them.
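Applied to the crontab from the question, the fixed entry would look something like this (a sketch only; path and schedule as in the question):

```
*/1 * * * * /sbin/ausearch -i --input-logs > /rummy
```

With --input-logs, ausearch reads the log files named in auditd.conf instead of waiting on the pipe cron attaches to stdin.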
cronjob not redirecting output of command when used with option
1,418,601,686,000
I was looking at Linux audit reports. Here is a log from ausearch. time->Mon Nov 23 12:30:30 2015 type=PROCTITLE msg=audit(1448281830.422:222556): proctitle=6D616E006175736561726368 type=SYSCALL msg=audit(1448281830.422:222556): arch=c000003e syscall=56 success=yes exit=844 a0=1200011 a1=0 a2=0 a3=7f34afa999d0 items=0 ppid=830 pid=838 auid=1001 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=1 comm="nroff" exe="/usr/bin/bash" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null) From my understanding, the comm argument is the name the user used to invoke the exe binary. How come nroff is referring to /usr/bin/bash? Note that this is a general question; I have seen this kind of thing, which I cannot explain, happen many times. In this particular case, here is more data about nroff and bash on my system. [root@localhost ~]# which nroff /bin/nroff [root@localhost ~]# ll -i /bin/nroff 656858 -rwxr-xr-x. 1 root root 3312 Jun 17 10:59 /bin/nroff [root@localhost ~]# ll -i /usr/bin/bash 656465 -rwxr-xr-x. 1 root root 1071992 Aug 18 13:37 /usr/bin/bash
The nroff "executable" provided by groff is a shell script, e.g., #! /bin/sh # Emulate nroff with groff. # # Copyright (C) 1992, 1993, 1994, 1999, 2000, 2001, 2002, 2003, # 2004, 2005, 2007, 2009 # Free Software Foundation, Inc. # # Written by James Clark, maintained by Werner Lemberg. # This file is part of `groff'. Depending on the system you are using, /bin/sh may be a symbolic link to /usr/bin/bash, e.g., on Fedora, which links /bin to /usr/bin.
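The comm/exe split can be reproduced without groff at all: when the kernel executes a script, comm is set from the script's name while exe points at the interpreter binary. A minimal stand-in (demo-nroff is a made-up name for this sketch):

```shell
# build a tiny shell-script "executable", the same shape as groff's nroff
cat > demo-nroff <<'EOF'
#!/bin/sh
echo "pretending to be nroff"
EOF
chmod +x demo-nroff

head -n1 demo-nroff   # the interpreter line the kernel will execve
./demo-nroff
```

Run it with an execve audit rule in place and the SYSCALL record would show comm="demo-nroff" with exe pointing at the shell, just like the nroff record above.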
auditctl comm vs. exe
1,418,601,686,000
I have written a script which is deployed just by putting it in a globally accessible location for all users. I want to log the usage of this script. Is there a way in Linux to find out how many times a file was read/accessed? And, if possible, to determine by whom? Edit: I do not have root privileges, and auditd isn't an option, since it's not already available.
Not with the default ext2/3/4 Linux file systems. I think your only solution is to log the usage of your file into a log file (but people could find that file and modify it), so my advice would be to use a small web service (PHP, Python or even Perl) that increments a value in a DB so people could not change the value easily. Edit 1: Well, it seems some software could accomplish such a task; see the post Script to count number of times a file has been accessed. Edit 2 (as stated by commentators): You can start with a good tutorial on the auditd daemon. And also search Google or DuckDuckGo for auditd, which is the name of the daemon you will need.
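For the log-file route, a wrapper script needing no root is usually enough. This is only a sketch — usage.log and /path/to/real-script are made-up names, and as noted above, anyone who can read the log can also tamper with it:

```shell
# append a timestamped record of who invoked the wrapper
LOG=./usage.log
printf '%s %s\n' "$(date -u +%FT%TZ)" "$(id -un)" >> "$LOG"

# then hand control to the actual script (commented out in this sketch)
# exec /path/to/real-script "$@"
```

Deploy the wrapper under the script's public name and keep the real script elsewhere; counting accesses is then just wc -l on the log.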
Monitor file access count by user
1,418,601,686,000
In RHEL5 and RHEL6, I could add audit=1 to start kernel-level auditing during boot before the boot process got as far as starting auditd. Now, in RHEL7, I can't find any mention of audit=1 as a kernel argument. Has anyone seen a definitive document on kernel/system auditing at boot time? Is just having the audit RPM installed and systemctl enable auditd sufficient on reboot?
The RHEL 7.x documentation on auditing doesn't mention the kernel parameter at all (somehow I thought the RHEL 6.x documentation did mention it but I can't seem to find it now). The manual page for auditd (package audit-2.7.6-3.el7.x86_64) on a RHEL 7.4 system, however, has the following: A boot param of audit=1 should be added to ensure that all processes that run before the audit daemon starts is marked as auditable by the kernel. Not doing that will make a few processes impossible to properly audit. So, although it's not mentioned in the distribution documentation, you do still need the audit=1 kernel parameter.
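Whether a running system was actually booted with the flag can be checked from /proc/cmdline at any time (a quick sketch):

```shell
# the kernel command line the system was booted with is visible at runtime
if grep -qw 'audit=1' /proc/cmdline; then
    echo 'booted with audit=1'
else
    echo 'audit=1 not on the kernel command line'
fi
```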
Kernel / Boot auditing in RHEL 7?
1,418,601,686,000
I can run ausearch based on time: sudo ausearch --start '16:48:07' or date: sudo ausearch --start '05/07/2019' but not both: > sudo ausearch --start '05/07/2019 16:48:07' Invalid start time (05/07/2019 16:48:07). Hour, Minute, and Second are required. The man page clearly implies that you can specify date or time or both but does not have an example with both. How does one run ausearch with both date and time specified?
The date and time should be separate arguments: sudo ausearch --start 05/07/2019 '16:48:07' I found an example online, but a more careful reader could have seen this in the man page: -ts, --start [start-date] [start-time] Search for events with time stamps equal to or after the given start time. The format of start time depends on your locale. If the date is omitted, today is assumed. If the time is omitted, midnight is assumed. Use 24 hour clock time rather than AM or PM to specify time. An example date using the en_US.utf8 locale is 09/03/2009. An example of time is 18:00:00. The date format accepted is influenced by the LC_TIME environmental variable. Notice -ts, --start [start-date] [start-time], clearly there are two optional arguments, not one.
ausearch how to specify both time and date
1,418,601,686,000
I want to monitor access to a file using audit, and hence added the following rule -w /home/test.txt -k monitoring-test I reloaded the rules (sudo service auditd restart) and modified the file /home/test.txt; however, the log does not contain any events with that key: sudo ausearch -k monitoring-test returns only the event of adding the rule: time->Fri May 5 13:32:19 2023 type=CONFIG_CHANGE msg=audit(1682311231.581:1719): auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 op=add_rule key="monitoring-test" list=4 res=1 Strangely, when adding a network monitoring rule like -a always,exit -F arch=b64 -S accept,connect -F key=network-external-access I do get log messages with the corresponding key. I read multiple posts like this tutorial or that redhat post but none of their solutions fixes my problem. Does anyone see why I do not get any logs for editing the file? The kernel has the following flags, obtained by sudo grep CONFIG_AUDIT /boot/config-$(uname -r) CONFIG_AUDIT=y CONFIG_AUDITSYSCALL=y CONFIG_AUDIT_ARCH=y Is it a problem that CONFIG_AUDIT_WATCH=y is missing, which is present in the answer here? For the record, my /etc/audit/audit.rules is (after adding the above file monitor rule permanently): ## This file is automatically generated from /etc/audit/rules.d -D -a task,never -w /home/test.txt -k monitoring-test And sudo auditctl -l returns -a never,task -w /home/test.txt -p rwxa -k monitoring-test My operating system is Fedora, and the audit version is 3.1-2.
From the auditctl man page: DISABLED BY DEFAULT On many systems auditd is configured to install an -a never,task rule by default. This rule causes every new process to skip all audit rule processing. This is usually done to avoid a small performance overhead imposed by syscall auditing. If you want to use auditd, you need to remove that rule by deleting 10-no-audit.rules and adding 10-base-config.rules to the audit rules directory. If you have defined audit rules that are not matching when they should, check auditctl -l to make sure there is no never,task rule there. This is because the event triggers on the first matching rule. Remove first rule -a never,task.
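The short-circuiting rule is easy to spot by searching the rule list; here the two lines shown in the question stand in for live auditctl -l output, since the real command needs root:

```shell
# sample rule list, as printed by `auditctl -l` in the question
printf '%s\n' \
  '-a never,task' \
  '-w /home/test.txt -p rwxa -k monitoring-test' |
  grep -n 'never,task'
# → 1:-a never,task
```

If that grep matches line 1, every new process skips all later rules, including the file watch.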
audit does not record file events (but works for network events) in fedora
1,418,601,686,000
While I was playing a little with the kernel audit system, I made a small C program: #include <stdio.h> #include <stdlib.h> int main(int argc, char** argv){ void *t; while(1){ t = malloc(1); free(t); } return 0; } And applied the following filters to audit: -a always,exit -F arch=b32 -S open,openat -F exit=-EACCES -F key=access -a always,exit -F arch=b64 -S open,openat -F exit=-EACCES -F key=access -a always,exit -F arch=b32 -S brk -a always,exit -F arch=b64 -S brk After compiling and running, I noticed that sys_brk wasn't showing up in the audit log. Furthermore, it also didn't appear in strace, even though malloc was called (checked with ltrace). Lastly, I removed the free and the calls to sys_brk started showing up. What is causing this type of behaviour? Does glibc make some kind of optimization in the malloc and free functions to prevent useless syscalls? TL;DR: free followed by malloc makes neither call the kernel. Why?
Your program starts with an initial heap, and your one byte allocation fits within that heap. When you immediately free the allocated memory, the heap never needs to grow so you never see a corresponding system call. See How quickly/often are process memory measurements updated in the kernel? for a similar experiment.
No system call when malloc after free
1,418,601,686,000
I am writing a converter which takes Linux Audit logs as input. I tried to find the most recent dictionary file where all the valid names of the fields are defined. I've found such a file[1] but the main website[2] says: Specs The specifications have moved to github. The following will be left in place for a while and then removed. I cannot find these information on the GitHub Wiki[3] of the Linux Audit project. Is the file[1] still the most recent and valid source of information? Links: https://people.redhat.com/sgrubb/audit/field-dictionary.txt https://people.redhat.com/sgrubb/audit https://github.com/linux-audit/audit-documentation/wiki
From your github link, follow "Audit Event Parsing Library", which has a link to the dictionary at https://github.com/linux-audit/audit-documentation/blob/master/specs/fields/field-dictionary.csv The raw CSV version is at https://raw.githubusercontent.com/linux-audit/audit-documentation/master/specs/fields/field-dictionary.csv
Where can I find the most recent dictionary of standard Linux Audit event fields?
1,418,601,686,000
I have Amazon Linux 2023 running in a Docker container and I would like to be able to load some custom audit rules into the kernel and ensure they are persisted when the container restarts. I have added the rules to /etc/audit/rules.d/audit.rules and can see them when I cat that file, and I'm trying to use augenrules --load to load the rules. However, when I run this command the output I get is /usr/sbin/augenrules: No change You must be root to run this program. I receive this same response even when running the command with sudo (sudo augenrules --load). I am already logged in as root (whoami returns root). I wondered whether it could be because the auditd service is not started (in which case the output from augenrules is misleading) but I am unable to check the status of this service, as service auditd status (and any other service command, like service auditd start) gives me Redirecting to /bin/systemctl status auditd.service System has not been booted with systemd as init system (PID 1). Can't operate. Failed to connect to bus: Host is down ps -p1 indicates the PID 1 is bash PID TTY TIME CMD 1 pts/0 00:00:00 bash I assume this is because I'm running in a container but don't know if this is why augenrules refuses to run when I am the root user even when using sudo. What is causing this behaviour?
Feeding audit rules to the kernel is the host operating system's job, not the container's. There is no kernel in the container: only the host system has one. Your container does not appear to have any kind of init system either, although other containers could be set up with one. It is a bit misleading to say "I have Amazon Linux 2023 running in a Docker container", because a container is never a complete operating system. It would be more accurate to say that you have a container built out of user-space parts of Amazon Linux 2023, containing only just enough parts to fulfill the purpose for which the container was designed, whatever that purpose is. Within the container, you may have UID 0 (= the classical definition of being root), but the container's UID numbers are mapped to a per-container range of higher values so that they won't overlap with the UIDs assigned within the host system or in other containers. When you're trying to use augenrules --load, the kernel sees the mapped UID and since it's not 0, rejects the request.
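You can see both halves of this from inside the container (a sketch; CapEff in /proc/self/status is the effective capability mask — configuring audit additionally needs CAP_AUDIT_CONTROL, which container runtimes commonly withhold from their default capability set):

```shell
# what the container thinks your UID is
id -u

# the capability sets the kernel actually granted this process
grep Cap /proc/self/status
```

Even with UID 0 reported, a restricted CapEff means the kernel will refuse audit-control requests.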
Why does augenrules refuse to run even when sudo is used?
1,418,601,686,000
I'd like to show that entering passwords via read is insecure. To embed this into a half-way realistic scenario, let's say I use the following command to prompt the user for a password and have 7z¹ create an encrypted archive from it: read -s -p "Enter password: " pass && 7z a test_file.zip test_file -p"$pass"; unset pass My first attempt at revealing the password was by setting up an audit rule: auditctl -a always,exit -F path=/bin/7z -F perm=x Sure enough, when I execute the command involving read and 7z, there's a log entry when running ausearch -f /bin/7z: time->Thu Jan 23 18:37:06 2020 type=PROCTITLE msg=audit(1579801026.734:2688): proctitle=2F62696E2F7368002F7573722F62696E2F377A006100746573745F66696C652E7A697000746573745F66696C65002D7074686973206973207665727920736563726574 type=PATH msg=audit(1579801026.734:2688): item=2 name="/lib64/ld-linux-x86-64.so.2" inode=1969104 dev=08:03 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1579801026.734:2688): item=1 name="/bin/sh" inode=1972625 dev=08:03 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1579801026.734:2688): item=0 name="/usr/bin/7z" inode=1998961 dev=08:03 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=CWD msg=audit(1579801026.734:2688): cwd="/home/mb/experiments" type=EXECVE msg=audit(1579801026.734:2688): argc=6 a0="/bin/sh" a1="/usr/bin/7z" a2="a" a3="test_file.zip" a4="test_file" a5=2D7074686973206973207665727920736563726574 type=SYSCALL msg=audit(1579801026.734:2688): arch=c000003e syscall=59 success=yes exit=0 a0=563aa2479290 a1=563aa247d040 a2=563aa247fe10 a3=8 items=3 ppid=2690563 pid=2690868 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts17 ses=1 comm="7z" exe="/usr/bin/bash" key=(null) This line seemed the most promising: 
type=EXECVE msg=audit(1579801026.734:2688): argc=6 a0="/bin/sh" a1="/usr/bin/7z" a2="a" a3="test_file.zip" a4="test_file" a5=2D7074686973206973207665727920736563726574 But the string 2D7074686973206973207665727920736563726574 is not the password I entered. My question is twofold: Is audit the right tool to get at the password? If so, is there something I have to change about the audit rule? Is there an easier way, apart from audit, to get at the password? ¹ I'm aware that 7z can prompt for passwords by itself.
What's insecure is not read(2) (the system call to read data from a file). It isn't even read(1) (the shell builtin to read a line from standard input). What's insecure is passing the password on the command line. When the user enters something that the shell reads with read, that thing is visible to the terminal and to the shell. It isn't visible to other users. With read -s, it isn't visible to shoulder surfers. The string passed on the command line is visible in the audit logs. (The string may be truncated, I'm not sure about that, but if it is it would be only for much longer strings than a password.) It's just encoded in hexadecimal when it contains characters such as spaces that would make the log ambiguous to parse. $ echo 2D7074686973206973207665727920736563726574 | xxd -r -p; echo -pthis is very secret $ perl -l -e 'print pack "H*", @ARGV' 2D7074686973206973207665727920736563726574 -pthis is very secret That's not the main reason why you shouldn't pass a secret on the command line. After all, only the administrator should be able to see audit logs, and the administrator can see everything if they want. It is worse to have the secret in the logs, though, because they may be accessible to more people later (for example through an improperly secured backup). The main reason why you shouldn't pass a secret on the command line is that on most systems the command line is also visible to other users. (There are hardened systems where this isn't the case, but that's typically not the default.) Anyone running ps, top, cat /proc/*/cmdline or any similar utility at the right time can see the password. The 7z program overwrites the password soon after it starts (as soon as it's been able to make an internal copy), but that only reduces the window of danger, it doesn't remove the vulnerability. Passing a secret in an environment variable is safe. The environment is not visible to other users. But I don't think 7z supports that. 
To pass the password without making it visible through the command line, you need to pass it as input, and 7z reads from the terminal, not from stdin. You can use expect to do that (or pexpect if you prefer Python to TCL, or Expect.pm in Perl, or expect in Ruby, etc.). Untested: read -s -p "Enter password: " pass pass=$pass expect \ -c 'spawn 7z a -p test_file.zip test_file' \ -c 'expect "assword:" {send $::env(pass)}' \ -c 'expect eof' -c 'catch wait result' unset pass
Sniff password entered with read and passed as a command line argument
1,568,274,817,000
I have run the following command on my RHEL 6 system to produce an audit report aureport --login --summary -i that produces the following output Login Summary Report ============================ total auid ============================ Warning - freq is non-zero and incremental flushing not selected. 458 unset 87 root The command is said to generate a summary report of all failed login attempts for each system user according to this RHEL document. However, wouldn't I need to use the --failed option to produce the output for failed login attempts? Also, how is the output of this command to be interpreted? Does it mean 87 failed logins for root, or does 87 mean something else besides the number of failed logins?
From reading the documentation, I think using the "--failed" option would show only failed events for the report you're running. The default behavior is to show both failures and successes. From the man page: --failed Only select failed events for processing in the reports. The default is both success and failed events. I believe that the number is the number of events for that particular report for that particular user. In your case, there are 87 login events (failed and successful) associated with the user "root", and there are 458 login events (again, failed and successful) associated with the user "unset". Here's some additional good reading on aureport: https://www.digitalocean.com/community/tutorials/understanding-the-linux-auditing-system-on-centos-7#generating-audit-reports http://www.golinuxhub.com/2014/05/how-to-track-all-successful-and-failed.html
aureport interpreting report output
1,568,274,817,000
I couldn't find this elusive "unset" user in /etc/passwd and there is no mention of him in man aureport, although he scored the most hits in my audit log: # aureport -u -i --summary --start today User Summary Report =========================== total auid =========================== 888 unset 222 root 55 creepy_user Who is the "unset" user and what does he do for a living?
Based on the definition of auid from this SuSE page, titled: Understanding the Audit Logs and Generating Reports: auid The audit ID. A process is given an audit ID on user login. This ID is then handed down to any child process started by the initial process of the user. Even if the user changes his identity (for example, becomes root), the audit ID stays the same. Thus you can always trace actions to the original user who logged in. I would conclude that there was no login performed by a user for the process that was handed down to the child process that was started. There are ways to run processes on Unix without actually logging in. I believe pam_loginuid is responsible for setting this. You can take a look at the man page for more on it.
Who is user "unset" in aureport?
1,568,274,817,000
I'm trying to parse audit.log with rsyslog by using a bash script in order to transform the hex part of proctitle to ascii. However, I do not get results: the file audit_ascii.log does not have lines with "proctitle" values. I tested the script and it is working fine, so I guess the problem comes from my rsyslog.conf. rsyslog.conf: $InputFileName /var/log/audit/audit.log $InputFileTag tag_auditd: $InputFileStateFile log_audit $InputFileSeverity info $InputFileFacility local6 $InputRunFileMonitor if $msg contains "msg=audit" then { action(type="omprog" binary="/bin/bash /opt/bin/hex2ascii.sh" output="/var/log/audit/audit_ascii.log") hex2ascii #!/bin/bash read log hasHex=$(echo $log | egrep "msg=audit" | egrep "type=PROCTITLE" | egrep -v '"' | wc -c) if [ ${hasHex} -gt 0 ]; then part1=$(echo $log | cut -d"=" -f1-3) part2=$(echo $log | cut -d"=" -f4) part2=$(echo $part2 | xxd -r -p ) echo $part1 >> /var/log/audit/verif.txt #echo "${part1}=${part2}\n" >> /var/log/audit/audit_ascii.log log="${part1}=${part2}\n" #else #echo $log >> /var/log/audit/audit_ascii.log fi
Just stop the flow after the script has redirected the changed log to another file, then take the new file as another input in rsyslog. That's the best solution I found.
Rsyslog - Parsing audit.log / omprog change log value
1,568,274,817,000
OS sles 15, audit service enabled When I issue any command (for example, date or ls), I expect it to be logged in audit.log, something like this: type=SYSCALL msg=audit... type=EXECVE msg=audit(1718094805.867:24632): argc=1 a0="date" ... but these entries are not in audit.log There are other entries there, for example about the start/finish of sessions, but there are no commands called.
https://lowendbox.com/blog/how-to-audit-every-command-run-on-your-linux-system/
basically, do this to put the rules in place (they belong in your /etc/audit/rules.d/audit.rules file):
auditctl -a exit,always -F arch=b32 -S execve -k allcmds
auditctl -a exit,always -F arch=b64 -S execve -k allcmds

be aware the /var/log/audit/audit.log file might grow to gigabytes in a few minutes, and simply fill up whatever disk partition that folder is on.
And I believe that will capture every command on a running system, including all the under-the-hood stuff. If you want every command run by a specific user, then it would be a matter of tailoring the rule to filter on a specific uid, such as
auditctl -a exit,always -F arch=b32 -F uid=1234 -S execve -k allcmds

or
auditctl -a exit,always -F arch=b32 -F uid>=1000 -S execve -k allcmds
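For persistence across reboots, the same two base rules can be written as plain lines in the rules file (a sketch of /etc/audit/rules.d/audit.rules; rules-file syntax drops the auditctl prefix):

```
-a exit,always -F arch=b32 -S execve -k allcmds
-a exit,always -F arch=b64 -S execve -k allcmds
```

Matching events can then be pulled back out by key, e.g. ausearch -k allcmds -i.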
Audit service does not audit commands
1,568,274,817,000
Are chown and chmod “write” -w type of an operation? I'm using auditd to watch folder permissions. There are different options read, write, execute, attributes. I just want to watch chmod or chown changes on the directory. Is chown/chmod a write type of operation on the system?
Those operations are attribute changes, i.e. the a (attribute change) permission in auditd's -p flags — not w (write).
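So, to watch only chmod/chown-style changes on a directory, a rule restricted to the a permission is enough; a sketch (path and key are placeholders):

```
-w /path/to/dir -p a -k attr-watch
```

Read, write and execute accesses then won't generate events for this watch; only attribute changes will.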
auditd / auditctl: Are chown and chmod “write” -w type of an operation?
1,392,108,880,000
In my CPU graph thingy, I have noticed recently that when compiling stuff I can never seem to reach 100% usage, it just keeps bobbing up and down around 60-70% max. Example: In contrast, this graph is completely opaque when done on my work computer. I want to get to the bottom of this and am using the stress utility to simulate CPU usage, and vmstat to observe. I am running stress with cpu core count ranging from 1 to 15 (my CPU has 12 logical cores). Here's the result, with line 1 corresponding to 1 core running 100%, line 2 is 2 cores, etc: procs -----------------------memory---------------------- ---swap-- -----io---- -system-- --------cpu-------- r b swpd free buff cache si so bi bo in cs us sy id wa st 1 0 0 24455180 465548 2862716 0 0 0 38 944 3184 11 1 88 0 0 3 0 0 24508640 465552 2862684 0 0 0 196 1112 2841 18 1 81 0 0 3 0 0 24556876 465564 2865096 0 0 0 63 1880 4569 30 1 70 0 0 4 0 0 24624764 465576 2865044 0 0 0 11 1414 1005 34 0 66 0 0 5 0 0 24625228 465580 2865068 0 0 0 9 1603 1029 42 0 58 0 0 6 0 0 24763772 465600 2864912 0 0 1 159 1973 1032 51 0 49 0 0 8 0 0 24786696 465600 2864844 0 0 0 9 2460 751 56 0 44 0 0 8 0 0 24805572 465600 2864864 0 0 0 78 2619 808 61 0 38 0 0 10 0 0 24811064 465604 2864852 0 0 0 50 2532 761 56 0 44 0 0 14 0 0 24809904 465616 2865180 0 0 0 4 2823 1049 63 0 37 0 0 13 0 0 24868936 465620 2865116 0 0 0 76 2596 709 57 0 43 0 0 19 0 0 24910408 465628 2866136 0 0 0 12 2526 738 56 0 44 0 0 16 0 0 24914768 465636 2865244 0 0 0 36 2757 720 62 0 38 0 0 18 0 0 24914332 465644 2865256 0 0 0 3 2629 862 59 0 41 0 0 19 0 0 24945952 465648 2866224 0 0 0 33 2642 678 59 0 41 0 0 The script I ran: for corecount in $(seq 15); do stress -c $corecount >/dev/null& sleep 1 vmstat -w 4 2 | tail -1 pkill stress sleep 1 done By looking at the us column I see that cpu usage increases linearly as expected up to 6-8 logical cores, but after that it's hitting some other bottleneck. 
The mouse cursor starts lagging at this point, and if I try running while a video player is running, it will also start to stutter at this point. (for comparison, here's the exact same test when done on my work computer: http://pastebin.com/MHPSR4E0 . Here the cpu usage simply climbs linearly up to 99/100 and stays there (the saturation is on line 8 because it's an 8 core cpu)) (here's the cpu graph for the entire test run, with the choke point visible: ) General information: Ubuntu 16.04 LTS, 32 gb memory, i7-5820K 6-core cpu. free -h total used free shared buff/cache available Mem: 31G 4,8G 23G 90M 3,3G 26G Swap: 15G 0B 15G /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 63 model name : Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz stepping : 2 microcode : 0x2d cpu MHz : 1236.339 cache size : 15360 KB physical id : 0 siblings : 12 core id : 0 cpu cores : 6 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 15 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc bugs : bogomips : 6599.39 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management: [...] 
processor : 11 vendor_id : GenuineIntel cpu family : 6 model : 63 model name : Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz stepping : 2 microcode : 0x2d cpu MHz : 1200.246 cache size : 15360 KB physical id : 0 siblings : 12 core id : 5 cpu cores : 6 apicid : 11 initial apicid : 11 fpu : yes fpu_exception : yes cpuid level : 15 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc bugs : bogomips : 6599.39 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management: lspci 00:00.0 Host bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 (rev 02) 00:01.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 (rev 02) 00:01.1 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 (rev 02) 00:02.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02) 00:02.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02) 00:02.3 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02) 00:03.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02) 00:05.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management (rev 02) 00:05.1 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Hot Plug (rev 02) 00:05.2 System peripheral: Intel Corporation 
Xeon E7 v3/Xeon E5 v3/Core i7 RAS, Control Status and Global Errors (rev 02) 00:05.4 PIC: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 I/O APIC (rev 02) 00:11.0 Unassigned class [ff00]: Intel Corporation C610/X99 series chipset SPSR (rev 05) 00:14.0 USB controller: Intel Corporation C610/X99 series chipset USB xHCI Host Controller (rev 05) 00:16.0 Communication controller: Intel Corporation C610/X99 series chipset MEI Controller #1 (rev 05) 00:19.0 Ethernet controller: Intel Corporation Ethernet Connection (2) I218-V (rev 05) 00:1a.0 USB controller: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation C610/X99 series chipset HD Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation C610/X99 series chipset PCI Express Root Port #1 (rev d5) 00:1c.4 PCI bridge: Intel Corporation C610/X99 series chipset PCI Express Root Port #5 (rev d5) 00:1d.0 USB controller: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation C610/X99 series chipset LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation C610/X99 series chipset 6-Port SATA Controller [AHCI mode] (rev 05) 00:1f.3 SMBus: Intel Corporation C610/X99 series chipset SMBus Controller (rev 05) 06:00.0 VGA compatible controller: NVIDIA Corporation GF104 [GeForce GTX 460] (rev a1) 06:00.1 Audio device: NVIDIA Corporation GF104 High Definition Audio Controller (rev a1) 08:00.0 USB controller: ASMedia Technology Inc. 
ASM1142 USB 3.1 Host Controller ff:0b.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring (rev 02) ff:0b.1 Performance counters: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring (rev 02) ff:0b.2 Performance counters: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring (rev 02) ff:0c.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers (rev 02) ff:0c.1 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers (rev 02) ff:0c.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers (rev 02) ff:0c.3 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers (rev 02) ff:0c.4 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers (rev 02) ff:0c.5 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Unicast Registers (rev 02) ff:0f.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Buffered Ring Agent (rev 02) ff:0f.1 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Buffered Ring Agent (rev 02) ff:0f.4 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers (rev 02) ff:0f.5 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers (rev 02) ff:0f.6 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers (rev 02) ff:10.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface (rev 02) ff:10.1 Performance counters: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCIe Ring Interface (rev 02) ff:10.5 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers (rev 02) ff:10.6 Performance counters: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 
Scratchpad & Semaphore Registers (rev 02) ff:10.7 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers (rev 02) ff:12.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 (rev 02) ff:12.1 Performance counters: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Home Agent 0 (rev 02) ff:13.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers (rev 02) ff:13.1 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers (rev 02) ff:13.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder (rev 02) ff:13.3 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder (rev 02) ff:13.4 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder (rev 02) ff:13.5 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder (rev 02) ff:13.6 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Channel 0/1 Broadcast (rev 02) ff:13.7 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Global Broadcast (rev 02) ff:14.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 Thermal Control (rev 02) ff:14.1 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 Thermal Control (rev 02) ff:14.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 ERROR Registers (rev 02) ff:14.3 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 
Channel 1 ERROR Registers (rev 02) ff:14.6 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 (rev 02) ff:14.7 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 (rev 02) ff:15.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 Thermal Control (rev 02) ff:15.1 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 Thermal Control (rev 02) ff:15.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 ERROR Registers (rev 02) ff:15.3 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 ERROR Registers (rev 02) ff:16.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 1 Target Address, Thermal & RAS Registers (rev 02) ff:16.6 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Channel 2/3 Broadcast (rev 02) ff:16.7 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO Global Broadcast (rev 02) ff:17.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Integrated Memory Controller 1 Channel 0 Thermal Control (rev 02) ff:17.4 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 (rev 02) ff:17.5 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 (rev 02) ff:17.6 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 (rev 02) ff:17.7 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 (rev 02) ff:1e.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit (rev 02) ff:1e.1 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit (rev 02) ff:1e.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon 
E5 v3/Core i7 Power Control Unit (rev 02)
ff:1e.3 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit (rev 02)
ff:1e.4 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 Power Control Unit (rev 02)
ff:1f.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 VCU (rev 02)
ff:1f.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 VCU (rev 02)

/proc/meminfo
MemTotal:       32841924 kB
MemFree:        24244428 kB
MemAvailable:   27418672 kB
Buffers:          478012 kB
Cached:          2622028 kB
SwapCached:            0 kB
Active:          6606180 kB
Inactive:        1240728 kB
Active(anon):    4758584 kB
Inactive(anon):    85340 kB
Active(file):    1847596 kB
Inactive(file):  1155388 kB
Unevictable:        8020 kB
Mlocked:            8020 kB
SwapTotal:      16669692 kB
SwapFree:       16669692 kB
Dirty:               284 kB
Writeback:             0 kB
AnonPages:       4754864 kB
Mapped:           780304 kB
Shmem:             93528 kB
Slab:             346300 kB
SReclaimable:     257036 kB
SUnreclaim:        89264 kB
KernelStack:       22112 kB
PageTables:       109808 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    33090652 kB
Committed_AS:   16302224 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      371016 kB
VmallocChunk:   34358945788 kB
HardwareCorrupted:     0 kB
AnonHugePages:    761856 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      537784 kB
DirectMap2M:     5648384 kB
DirectMap1G:    29360128 kB

A more detailed cpu usage screenshot from ksysguard while stress testing with 10 cores:

What is going on? What other parts of the kernel can I observe to see what's happening? Is there some cpu scheduler configuration that has been changed? It has not always been like this; I am absolutely certain I have been able to compile before with 100% cpu utilization and no mouse lag, not really noticing it except for the fans running.
To debug scheduling or application performance problems on Linux, a good start is to run the task under perf stat. It reports statistics about the processor pipeline, stalled cycles, and memory behaviour. Possible problems:

Linux/scheduler bug
Intel HT is not keeping up with your threads
Memory is not able to provide data fast enough for the program

For the sake of completeness: the resolution was a wrong/old kernel (4.2.0) instead of the expected 4.4.0 for Ubuntu 16.04. Updating solved the problem.
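Since the resolution turned out to be a stale kernel, a quick first check before profiling is to compare the kernel that is actually running against the newest one installed. A hedged sketch (the dpkg query assumes a Debian/Ubuntu system):

```shell
# Kernel actually running right now (4.2.0 in the problem case)
uname -r

# Newest kernel image installed on disk (Debian/Ubuntu only; assumes dpkg)
dpkg -l 'linux-image-[0-9]*' 2>/dev/null | awk '/^ii/ {print $2}' | sort -V | tail -n 1
```

If the two disagree, a reboot (or grub cleanup) may be all the tuning that is needed, as it was here.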
Why does my cpu never get past 60-70% cpu usage? Where is the bottleneck?
1,392,108,880,000
I am thinking of implementing a load balancing solution for personal use. What I want to do is maximize data throughput over mobile phone Internet connections. Let me be clear: I have a data plan on my mobile phone and my family members have their respective data plans on their phones, too. If I can connect up to 4 phones to one (desktop) PC (through USB preferably) then I will achieve (theoretically) a faster Internet connection than any one of the 4 phones can achieve on its own. That desktop computer will then act as a router for an intranet. If the above has a sound basis (I could be wrong - I don't know the technologies involved in great detail), I need a how-to to implement it. I have seen that the tool for the job is ipvs (right?) but no how-to. Distro-wise the job can be done in any distro, but I know that connecting an Android phone to Ubuntu works plug and play. So if I can do it in Ubuntu, it will probably be faster than compiling everything from scratch. Is there a relevant how-to? Is there a distro perhaps that does load balancing and identifies USB internet connections on the fly?
To balance outgoing connections all you need is standard iptables and some policy routing. This does get a bit complex with 4 connections as you will need to reconfigure and rebalance the links as connections come and go. The raw iptables setup is:

Create a routing table for each connection

ip rule add fwmark 10 table PHONE0 prio 33000
ip rule add fwmark 11 table PHONE1 prio 33000
ip rule add fwmark 12 table PHONE2 prio 33000
ip rule add fwmark 13 table PHONE3 prio 33000

Add the default gateway for each connection to each table (the gateway IP will vary depending on each phone's provider/setup)

ip route add default via 192.168.1.2 table PHONE0
ip route add default via 192.168.9.1 table PHONE1
ip route add default via 192.168.13.2 table PHONE2
ip route add default via 192.168.7.9 table PHONE3

Randomly mark any unmarked flows, which will route the flow via a specific connection. (OUTPUT is used for local processes. Use PREROUTING if you are forwarding traffic for other clients.)

iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
iptables -t mangle -A OUTPUT -m mark ! --mark 0 -j ACCEPT
iptables -t mangle -A OUTPUT -j MARK --set-mark 10
iptables -t mangle -A OUTPUT -m statistic --mode random --probability 0.25 -j MARK --set-mark 11
iptables -t mangle -A OUTPUT -m statistic --mode random --probability 0.25 -j MARK --set-mark 12
iptables -t mangle -A OUTPUT -m statistic --mode random --probability 0.25 -j MARK --set-mark 13
iptables -t mangle -A OUTPUT -j CONNMARK --save-mark

NAT for each of the connections (the interface will need to be whatever your phone connection appears to the system as)

iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o ppp1 -j MASQUERADE
iptables -t nat -A POSTROUTING -o ppp2 -j MASQUERADE
iptables -t nat -A POSTROUTING -o ppp3 -j MASQUERADE

Note that a single TCP or UDP connection will see no speedup as it will still be going over a single link.
You have to use multiple concurrent connections (at least 4) to make use of the extra bandwidth. Most browsers do this under the hood when requesting multiple objects. Some download managers allow you to use multiple connections for a single file. As garethTheRed suggests, ispunity adds some of the "glue" on top of this iptables setup to loop through a list of connections, check that the gateway is responding, rebalance if something is wrong, etc. Its "sticky session" management looks to be additional per-port setup on top of its base "round robin" load balancing of connections. Another solution is Net-ISP-Balance, a Perl script and library that automates the iptables and routing table configuration, monitors the ISP status, alerts you to problems, and reconfigures the routing in case one or more ISPs become inaccessible. Also note that having requests come from multiple IPs can break some services that rely on consistent IP lookups, and you may need to add additional rules to tie those services to a single connection. You won't see any speedup on single connections, only when you are doing 4 things at once, which most browsers will try to do anyway. ipvs is more for creating virtual service addresses for things you host so the service can be failed over between multiple hosts.
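One subtlety worth noting (my observation, not part of the original answer): the chained -m statistic --probability 0.25 rules do not produce an even 25/25/25/25 split, because each later rule can overwrite the mark already set. A quick sketch of the resulting distribution, assuming the rules are evaluated in the order listed:

```shell
# Every packet first gets mark 10, then may be overwritten by 11, 12, 13 in turn.
# P(13) = 0.25; P(12) = 0.75*0.25; P(11) = 0.75*0.75*0.25; P(10) = the remainder.
awk 'BEGIN {
    p3 = 0.25
    p2 = 0.75 * 0.25
    p1 = 0.75 * 0.75 * 0.25
    p0 = 1 - p1 - p2 - p3
    printf "link0=%.4f link1=%.4f link2=%.4f link3=%.4f\n", p0, p1, p2, p3
}'
# -> link0=0.4219 link1=0.1406 link2=0.1875 link3=0.2500
```

For an even split the probabilities in that rule order would be 0.5, 0.33 and 0.25, or you could use -m statistic --mode nth instead of random.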
Implementing load balancing on any Linux distro
1,392,108,880,000
I'm reading about the differences between a load-balancer implemented at the DNS level vs having a single DNS entry which forwards to a load-balancer. I found this Q&A particularly useful: I'm getting the impression from the top answer that DNS based load-balancing isn't reliable. When I run nslookup on some big sites most of them seem to have multiple IP entries: nslookup google.com - 1 IP nslookup amazon.com - 6 IPs nslookup netflix.com - 8 IPs Do the results from Amazon and Netflix imply that they are using DNS for round-robin load-balancing?
As the top answer in the link says, DNS-based load balancing is not reliable; it is also known for not distributing load evenly. When using DNS load-balancing techniques you are dependent both on caching choices made by intermediate DNS servers and on client decisions. In case of a failure, if using DNS load balancing alone, you might have clients which do not move on to the healthy nodes. You are also assuming server/CDN IP addresses map 1:1 to individual machines, which is not usually the case for the players you are naming; in fact, with anycast technologies the same group of requests to "the same" IP addresses of the service will be directed to different data centers, different machines, or even different technologies based on the geographical location from which the service request is made.
Do multiple entries for nslookup imply load-balancer via DNS?
1,392,108,880,000
I have a gitlab server on my local network and a server that I can ssh to from outside my network. Is there a way I can configure the server that I can SSH into, so that when I use:

ssh [email protected]

it sends that to the Gitlab server on the local network? Kind of like an Nginx reverse proxy, but with ssh.

Edit: I've been looking around and I found something here that looks like what I want.

Access via Load Balancer
If you want to provide a more standard git experience you can manually set up and configure an external load balancer to point to a given GitLab node. This should route traffic from port 22 to port 2222 on the GitLab node. You can then point a DNS record at the load balancer.

This looks like what I am trying to do, but how do I accomplish this?

Edit 2: Here is an image that can hopefully clarify what I am trying to do. (Those red lines should be going through the internet too.)
HTTP servers like nginx are able to proxy based on the hostname because it is sent in the HTTP/1.1 Host header of the request. SSH does not have this concept of virtual hosts; the client does not send the hostname at all. You have three options:

Use port forwarding to make your gitlab server directly available.
Make your gitlab server available through an (additional) IPv4 or IPv6 address.
Create a SSH tunnel into your network and proxy the SSH connection to your git server through this tunnel.

Port forwarding

This is probably the easiest approach and does not interfere with the "public server". Set up your gateway to forward port 2222 to 192.168.2.26:22. Then use ssh -p2222 [email protected] to connect. For git, use URLs like ssh://[email protected]:2222/repo.git. Alternatively, you can just use ssh://[email protected]/repo.git or [email protected]:repo.git if you create a ~/.ssh/config file with:

Host git.example.com
    Port 2222

Additional IPv4 or IPv6 address

If you have a home network, getting an extra IPv4 address is probably impossible, but some business providers do it. If your network supports IPv6 (end-to-end), then you can just use normal routing without nasty proxying or NAT hackery.

SSH tunnel

You can use the ProxyCommand option to specify the command that proxies the SSH connection to git.example.com. In your case, the "public server" is the proxy, so the command should connect to that server. Let's start with the configuration snippet for ~/.ssh/config:

Host git.example.com
    ProxyCommand ssh -W %h:%p [email protected]

In this snippet the -W %h:%p option will be expanded to -W git.example.com:22 and redirects standard input and output to said host (git.example.com). This enables your local SSH client to speak with your gitlab server. You can again use any URL like [email protected]:repo.git; the proxy will be transparent to the git client.
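On OpenSSH 7.3 and newer, the same tunnel can be written more compactly with ProxyJump instead of ProxyCommand. A sketch with placeholder names for the jump host:

```
Host git.example.com
    ProxyJump user@public-server.example.com
```

On the command line, ssh -J user@public-server.example.com git@git.example.com is the one-shot equivalent.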
Reverse SSH Tunnel
1,392,108,880,000
I have two Apache instances behind a load balancer and I transfer requests to them depending on the request type. Now what I want: when I get too many transactions from an IP address, I want to block that IP for a few seconds and send back a response telling the client that it has sent too many requests. So now the question: is there any way to handle this situation on my load balancer rather than passing it on to my instances? How can I handle this on Apache? I am using Apache version 2.2.
I would advise you to set up mod_evasive in Apache. From mod_evasive on Apache:

mod_evasive is an evasive maneuvers module for Apache that provides evasive action in the event of an HTTP DoS attack or brute force attack. It is also designed to be a detection and network management tool, and can be easily configured to talk to ipchains, firewalls, routers, and more. mod_evasive presently reports abuse via email and syslog facilities.

To install it in Debian:

apt-get install libapache2-mod-evasive

Then edit mods-available/evasive.conf. Your values may vary depending on how many vhosts you have on the server.

<IfModule mod_evasive20.c>
    DOSHashTableSize 2048
    DOSPageCount 50          <---- requests to the same page in the interval
    DOSSiteCount 500         <---- requests to the whole site
    DOSPageInterval 2.0      <---- 2 seconds
    DOSSiteInterval 1.0
    DOSBlockingPeriod 600.0  <--- seconds
    DOSLogDir /var/log/apache2/evasive
    DOSWhitelist 127.0.0.1
    DOSWhitelist x.x.x.*
</IfModule>

For the new mod_evasive configuration to take effect, you have to restart Apache. You might also be interested in commercial services like CloudFlare or Amazon CloudFront.
Using a load balancer instead of Apache to throttle transactions from specific IP's
1,392,108,880,000
I am conducting a kind of research in which I schedule multiple parallel applications (e.g., OpenMP/pthreaded applications) and execute them on specific (partitioned) cores on Linux-based multi-processor platforms. We can set CPU affinities for each application by using the sched_setaffinity() system call. But, as you know, Linux manages (all) running programs as well, so the executions of the applications I scheduled are sometimes interrupted by other processes that Linux scheduled. I want to pin all processes and daemons (except for the applications that I scheduled) to CPU 0. My first thought was to set CPU 0 manually by traversing all tasks from the init task in a kernel module, but the result would still be affected by Linux load balancing. We need another way to somehow turn off or manage Linux CPU load balancing. Is there any possible way or system configuration to do this? My target platform is an AMD Opteron server (containing 64 cores) and the Linux version is 3.19.
You should be able to disable the automated load balancing by telling the kernel to only use the first N CPUs, e.g. by adding the following to your boot parameters, which should effectively run the entire system on CPU #0 (as the system will only use a single CPU):

maxcpus=1

Then use taskset or similar to run your process on a different CPU. (Note that CPUs left out by maxcpus= stay offline until you bring them up via /sys/devices/system/cpu/cpuN/online; alternatively, the isolcpus= boot parameter keeps the listed CPUs out of the general scheduler while still allowing taskset/sched_setaffinity() to place tasks on them.)
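To illustrate the affinity half of this, a small sketch that inspects and sets CPU affinity from the shell (taskset ships with util-linux):

```shell
# CPUs the current shell may run on, e.g. "0-63" on the 64-core Opteron
grep Cpus_allowed_list /proc/self/status

# Run a child restricted to CPU 0 and show that its mask really shrank
taskset -c 0 grep Cpus_allowed_list /proc/self/status
```

The second command prints an allowed list of just "0", which is exactly what sched_setaffinity() does programmatically.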
setting (system-wide) CPU affinities for running processes on a Linux platform
1,392,108,880,000
I have a flat network (no routing yet) of 3 servers, each with a service (http, mysqld, doesn't matter) listening on 0.0.0.0 (ip_nonlocal_bind and ip_forward are on) and running keepalived.

virtual_server 10.0.0.80 3306 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 10.0.0.81 3306 {
        weight 10
        TCP_CHECK {
            connect_timeout 1
        }
    }
    real_server 10.0.0.82 3306 {
        weight 10
        TCP_CHECK {
            connect_port 3306
            connect_timeout 1
        }
    }
    real_server 10.0.0.83 3306 {
        weight 10
        TCP_CHECK {
            connect_port 3306
            connect_timeout 1
        }
    }
}

keepalived (as the service) is working and failing over to each box as appropriate. However, the virtual_server is only serving pages (or database queries, or whatever) from the box that currently holds the keepalived IP; requests fail the other 2/3 of the time (weighted evenly). Example: When BOX1 has the keepalived address, requests will WORK, FAIL, FAIL, repeat; it only answers with "box1". When BOX2 has the keepalived address, requests will FAIL, WORK, FAIL, only answering as "box2". I am convinced the boxes without the keepalived IP are refusing to answer the query because they do not own, or do not know they should answer as, the keepalived IP. How do I get the non-keepalived boxes to always answer? This is not my first keepalived setup, but it is my first virtual_server setup. I just need a load balancer; I do not need the high availability provided by HAProxy.
Thanks to the help in a serverfault.com question I was able to solve my issue.

Short answer: I add the virtual IPs to my dummy interface and ensure net.ipv4.conf.default.accept_source_route is 0.

Long answer: The purpose of a dummy interface is to easily disable/enable/fail over keepalived without stopping and starting the service. Just up or down the dummy interface to fail the VIP over to another server. To do this easily, I create a systemd service called /usr/local/src/dummy.service.

[Unit]
Description=Create dummy network interface
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link set dummy0 up
ExecStop=/usr/sbin/ip link set dummy0 down
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Then enable it.

# systemctl enable /usr/local/src/dummy.service

Additionally, I load the dummy module drivers.

File /etc/modules-load.d/dummy.conf:
dummy

File /etc/modprobe.d/dummy.conf:
alias dummy0 dummy
options dummy numdummies=1

I find it easier to shutdown -r now at this time to prove it works, but you can reload modprobe if you desire. You should then see that you have a new interface:

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 12:23:34:45:56:7a brd ff:ff:ff:ff:ff:ff
       valid_lft forever preferred_lft forever

Within keepalived.conf I track my vrrp_instance with:

vrrp_instance mysql {
    ...
    track_interface {
        dummy0
    }
    ...
}

Here's what I needed to change to make it work.

I needed to add the VIP IP address to the dummy0 interface. Modify /usr/local/src/dummy.service and # systemctl daemon-reload. (Systemd does not run ExecStart through a shell, so && will not work there; a Type=oneshot unit can instead carry two ExecStart lines:)

ExecStart=/usr/sbin/ip link set dummy0 up
ExecStart=/usr/sbin/ip addr add 10.0.0.100 dev dummy0

I needed to ensure that source routing was not enabled so that any network device could answer the query, then reboot.
# echo "net.ipv4.conf.default.accept_source_route = 0" > /etc/sysctl.d/10-keepalived.conf

For completeness, my entire /etc/sysctl.d/10-keepalived.conf:

net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.conf.eno16777736.arp_ignore = 1
net.ipv4.conf.eno16777736.arp_announce = 2
net.ipv4.conf.eno16777736.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
keepalived virtual_server - only answering on box keepalived is on
1,392,108,880,000
In our test environment we've identified a strange HAProxy behavior. We're using the standard RHEL 7 provided haproxy-1.5.18-8.el7.x86_64 RPM. According to our understanding, the total number of accepted parallel connections is defined as maxconn*nbproc from the global section of haproxy.cfg. However, if we define:

maxconn 5
nbproc 2

we'd expect the total number of parallel connections to be 10, but we can't get over the defined maxconn of 5. Why is nbproc being ignored? Here is the complete haproxy.cfg:

# Global settings
global
    log 127.0.0.1 local2 warning
    log 10.229.253.86 local2 warning
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 5
    user haproxy
    group haproxy
    daemon
    nbproc 2
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats socket /var/run/haproxy_hamonit.sock uid 2033 gid 2033 mode 600 level admin
    stats timeout 2m

defaults
    mode tcp
    log global
    option tcplog
    option dontlognull
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 30s
    timeout server 30s
    timeout http-keep-alive 10s
    timeout check 10s
    bind-process all

frontend ha01
    bind 10.229.253.89:80
    mode http
    option httplog
    option http-server-close
    option forwardfor except 127.0.0.0/8
    default_backend ha01

backend ha01
    balance roundrobin
    mode http
    option httplog
    option http-server-close
    option forwardfor except 127.0.0.0/8
    server server1 10.230.11.252:4240 check
    server server2 10.230.11.252:4242 check

listen stats 10.229.253.89:1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth admin:foo
I've found the culprit. We were reading statistics from the socket interface. However, in our config there is just one socket interface, which binds to only one process, and therefore we can't get statistics from the other processes. HAProxy unfortunately doesn't support aggregated statistics via the socket interface (if it does, please share how). So when I changed to exact binding:

stats socket /var/run/haproxy.sock mode 600 level admin process 1
stats socket /var/run/haproxy2.sock mode 600 level admin process 2

I can get statistics from both sockets when nbproc=2:

echo "show sess" | socat /var/run/haproxy.sock stdio
echo "show sess" | socat /var/run/haproxy2.sock stdio
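With one socket per process, any totals have to be summed client-side. A sketch of that summing step; in practice each input would come from echo "show info" | socat $SOCK stdio against one of the sockets above, here faked with two inlined sample dumps so the awk part can be shown on its own:

```shell
# Pretend these are "show info" dumps from process 1 and process 2
printf 'CurrConns: 3\n' > /tmp/haproxy_p1.txt
printf 'CurrConns: 4\n' > /tmp/haproxy_p2.txt

# Sum the CurrConns counters across all per-process outputs
awk -F': ' '/^CurrConns/ { sum += $2 } END { print "total:", sum }' \
    /tmp/haproxy_p1.txt /tmp/haproxy_p2.txt
# -> total: 7
```

The same pattern works for any of the counters "show info" reports per process.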
HAProxy ignores nbproc
1,392,108,880,000
I have installed nginx on RHEL and now I need to configure it in /etc/nginx/nginx.conf to forward requests to the actual server. My actual server uses a private IP address; will nginx forward requests to a private IP address?
Here you can find a doc on how to do it: http://nginx.com/resources/admin-guide/reverse-proxy/

Generally you use an HTTP proxy; from the example:

location /some/path/ {
    proxy_pass http://www.example.com/link/;
}

It means that if you go to yourserver.com/some/path/ the request will be forwarded to http://www.example.com/link/. If you have an internal server, e.g. on 192.168.0.1, you can do:

location / {
    proxy_pass http://192.168.0.1;
}

This way everything that comes to / will be forwarded to your internal server.
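When proxying to an internal server it is usually also worth passing the original Host header and client address along, so backend logs and name-based virtual hosts keep working. A hedged sketch extending the location above (the IP is the same placeholder):

```nginx
location / {
    proxy_pass http://192.168.0.1;
    # Optional: preserve the requested hostname and the real client IP
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```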
How nginx receives requests from client and forwards it to the actual server?
1,392,108,880,000
I have a small network with a webserver and an OpenVPN Access Server (with its own web interface). I have only 1 public IP and want to be able to point subdomains to websites on the webserver (e.g. website1.domain.com, website2.domain.com) and point the subdomain vpn.domain.com to the web interface of the OpenVPN Access Server. After some Google actions I think the way to go is to set up a proxy server. NGINX seems to be able to do this with the "proxy_pass" function. I got it working for HTTP backend URLs (websites) but it does not work for the OpenVPN Access Server web interface, as that forces the use of HTTPS. I'm fine with HTTPS and prefer to use it also for the websites hosted on the webserver. By default a self-signed cert is installed and I want to use self-signed certs for the other websites as well. How can I "accept" self-signed certs for the backend servers? I found that I need to generate a cert and define it in the NGINX reverse proxy config, but I do not understand how this works as, for example, my OpenVPN server already has an SSL certificate installed. I'm able to visit the OpenVPN web interface via https://direct.ip.address.here/admin but get a "This site cannot deliver a secure connection" page when I try to access the web interface via Chrome.
My NGINX reverse proxy config:

server {
    listen 443;
    server_name vpn.domain.com;
    ssl_verify_client off;
    location / {
        # app1 reverse proxy follow
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://10.128.20.5:443;
        proxy_ssl_verify off;
    }
    access_log /var/log/nginx/access_log.log;
    error_log /var/log/nginx/access_log.log;
}

server {
    listen 80;
    server_name website1.domain.com;
    location / {
        # app1 reverse proxy follow
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://10.128.11.20:80;
    }
    access_log /var/log/nginx/access_log.log;
    error_log /var/log/nginx/access_log.log;
}

A nearby thought... Maybe NGINX is not the right tool for this at all (now or in the long term)? Let's assume I can fix the cert issue I currently have and we need more backend web servers to handle the traffic; is it possible to scale the NGINX proxy as well, like a cluster or load balancer or something? Should I look for a completely different tool?
Your 443 server block is not configured for SSL requests. You need to add ssl to the listen directive and configure ssl_certificate and ssl_certificate_key. E.g.

server {
    listen 443 ssl;
    ssl_certificate /path/to/ssl/certificate.pem;
    ssl_certificate_key /path/to/ssl/certificate.key;
    # ...

You can find more information on these settings and other TLS/SSL-related settings here: http://nginx.org/en/docs/http/configuring_https_servers.html.

Let's assume I can fix the cert issue I currently have and we need more backend web servers to handle the traffic; is it possible to scale the NGINX proxy as well?

NGINX scales vertically (increasing resources on one server) pretty well. If you want to add more NGINX servers (for horizontal scalability or for high availability) and you only have one public IP address, you will need to manage virtual IPs (VIPs). You can use the keepalived service to manage VIPs. However, with a small network, I don't think you will need this.
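If the plain-HTTP sites should eventually move to HTTPS as well, a common companion block (a sketch; the hostname is a placeholder from the question) simply redirects port 80 to the SSL server block:

```nginx
server {
    listen 80;
    server_name website1.domain.com;
    return 301 https://$host$request_uri;
}
```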
How to NGINX reverse proxy to backend server which has a self signed certificate?
1,392,108,880,000
Grasping at straws for search parameters and terminology. The key attribute of the JVM is the V for virtual (at least within the context of this question). How do you span a JVM across a cluster of machines with load balancing so that the JVM itself is distributed? Sorta kinda like this: https://www.cacheonix.org/articles/Distributed-Java-application-on-multiple-JVMs.gif so that the application only sees a single JVM?
Elaborating on my comment and assuming that you aren't thinking of something very esoteric...

Perhaps you're looking at that diagram and interpreting it as if the users are using an application that can talk to any JVM on the backend at any time during a session because of some kind of coordination between the JVMs. That's not how it typically works... at least not in widely used architectures. The diagram is almost certainly depicting a classic distributed environment with independent JVMs.

If the client (the user application) requires that state be maintained between requests (a session), there are several options, but they all still involve JVMs with no knowledge of other JVMs. The most straightforward and common way to accomplish this is to use a load balancer that supports so-called sticky sessions. Briefly, that means that once the client has established identity, the load balancer will always route the client's requests to the same JVM for the duration of the session (e.g. until the user logs out).

What do I mean by identity? Usually that means the user has logged in and been successfully authenticated, after which a unique ID will be chosen and associated with all of the user's subsequent requests. In the case of web apps (such as those using a RESTful API) this ID is often passed around in an HTTP header. Using a header like this makes it easy for most load balancers to extract the ID.

Alternatively, you can forgo sticky sessions and store session state in a DB, for instance, and require a JVM to look up that state using the session ID each time a request is received. This adds some complexity and overhead but is not unheard of. The JVMs are still independent in this model, though. (You could also have the client application pass session state in its entirety in every request it makes, but there are security issues with this, among other problems.)
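The "same JVM for the duration of the session" behaviour boils down to a deterministic mapping from session ID to backend. A toy shell sketch of that idea follows; the backend addresses are made up, and real load balancers typically track the mapping in a table or a cookie rather than hashing on every request:

```shell
# Toy sticky routing: hash the session ID and pick a backend by modulo.
# Backend addresses are hypothetical.
route() {
  sid="$1"
  h=$(printf '%s' "$sid" | cksum | cut -d' ' -f1)
  i=$(( h % 3 ))
  case $i in
    0) echo 10.0.0.1 ;;
    1) echo 10.0.0.2 ;;
    2) echo 10.0.0.3 ;;
  esac
}

a=$(route "session-abc123")
b=$(route "session-abc123")   # the same session ID always lands on the same backend
echo "$a $b"
```

The important property is only that the mapping is stable for a given session ID; each backend JVM remains completely unaware of the others.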
distribute the JVM across a cluster of machines
1,392,108,880,000
I apologize in advance if this question is in the wrong forum, this is my first question here!

My client has hosting with Aliyun Cloud (Alibaba Cloud in China). I've deployed a microsite to their servers, which has the following structure:

    microsite.com -> CDN1 -> SLB -> 2x ECS -> DB ECS
    oss.microsite.com -> CDN2 -> OSS

The ECS instances under the SLB have sticky sessions and serve only HTML responses. All other files (js, css etc.) are served from the OSS domain. These instances also use the database to store session data (e.g. user IP address, timestamp of last activity etc.)

After 3 weeks, the database instance ran out of its 40GB of storage space. When I looked into it, I saw 23 million session entries. The ECS instances are under a constant 100-150 concurrent connections, day and night, 24/7, although the number of actual users (we use GA for tracking) is maybe 10-15 per day (the campaign hasn't started yet).

I am baffled, as the client's IT says this is "normal" and not an "attack" because an attack would be "much more severe". They have no explanation for where this traffic comes from. I can, however, see in the access log (tail -f access.log) a constant flow of requests. These are always there, day and night, whenever I SSH in. GA is empty, except when I open the microsite or someone from the client side does (as the link hasn't been pushed to media yet).

Does anyone have any advice on what this is? It seems to me like some attempt to run the server out of resources, or some unsuccessful DDoS. But because it stays at 100-200 concurrent connections, no firewall / security rule is activated by Aliyun. I don't have access to the Aliyun console, I can only SSH into the servers. I simply can't believe this is "normal". On Cloudflare I had options for bot protection, JavaScript challenge etc. Aliyun seems to have nothing. Or they simply don't care.

Some technical info:

- All ECS instances are on Ubuntu 20.04.
- Web service is Apache2, with PHP7.4 and PHP7.4-FPM running.
- Database instance has MySQL8.
The database instance only allows connections from the web server instances, and those allow HTTP connections only from the SLB (Server Load Balancer, the equivalent of Elastic Load Balancer on AWS). This means that all traffic still has to come through the SLB to the instances under it. Has anyone experienced anything like this? How can I protect my backend from it if they are unable to do it?
OK, we found out what the issue was, so I am closing the question: there was no DDoS or any attack. The client's IT had set their load balancer to, literally, machine-gun the server instances, and all the traffic I saw in the access log was actually the health check. Now that they have set it to a reasonable 2-3 minutes per check, it's gone. Sorry to trouble you all.
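For anyone chasing a similar mystery stream of requests, grouping the access log by client IP is usually the fastest way to spot a health checker or a single noisy client. A minimal sketch, using a fabricated sample log so it is self-contained (in practice you would point the pipeline at Apache's real access.log):

```shell
# Fabricated sample entries standing in for a combined-format access log.
log=/tmp/sample_access.log
printf '%s\n' \
  '10.128.1.9 - - [01/Jan/2021:00:00:01 +0000] "GET /health HTTP/1.1" 200 2' \
  '10.128.1.9 - - [01/Jan/2021:00:00:02 +0000] "GET /health HTTP/1.1" 200 2' \
  '10.128.1.9 - - [01/Jan/2021:00:00:03 +0000] "GET /health HTTP/1.1" 200 2' \
  '203.0.113.7 - - [01/Jan/2021:00:00:04 +0000] "GET / HTTP/1.1" 200 512' \
  > "$log"

# Requests per client IP, busiest first: a health checker stands out immediately.
top=$(awk '{print $1}' "$log" | sort | uniq -c | sort -rn | head -n 1)
echo "$top"
```

In this case the single internal IP hammering one URL would have pointed straight at the load balancer's health check.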
Constant concurrent connections drain my server storage
1,392,108,880,000
I set up a load balancer in Apache 2.4.6 (CentOS). It works well except for one thing: when a user opens the site through the Apache server and then clicks somewhere on the website, Apache may send the request to the other backend server, which is not good for me. I would like to configure Apache this way: if someone opens the page (and Apache serves it from one backend server), subsequent clicks on the website stay on that same server instead of being redirected to another one. How can I configure Apache to do this?

The current configuration is this:

    <Proxy balancer://mycluster>
        BalancerMember https://server1:443
        BalancerMember https://server2:443
        Require all granted
        ProxySet lbmethod=bytraffic
    </Proxy>

    <Location /balancer-manager>
        SetHandler balancer-manager
        Require host example.org
    </Location>

    ProxyPass /balancer-manager !
    ProxyPass / balancer://mycluster/

I tried this configuration as well, but it still doesn't work as expected:

    ProxyPass "/test" "balancer://mycluster" stickysession=JSESSIONID|jsessionid scolonpathdelim=On

    <Proxy "balancer://mycluster">
        BalancerMember "https://server1:443" route=node1
        BalancerMember "https://server2:443" route=node2
        Require all granted
    </Proxy>

    <Location /balancer-manager>
        SetHandler balancer-manager
        Require host example.org
    </Location>

    ProxyPass /balancer-manager !
    ProxyPass / balancer://mycluster/
That sounds like your backend doesn't set JSESSIONID cookies? The docs suggest starting from the following example if your backend doesn't set cookies itself:

    Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

    <Proxy "balancer://mycluster">
        BalancerMember "http://192.168.1.50:80" route=1
        BalancerMember "http://192.168.1.51:80" route=2
        ProxySet stickysession=ROUTEID
    </Proxy>

    ProxyPass "/test" "balancer://mycluster"
    ProxyPassReverse "/test" "balancer://mycluster"

(Note the explicit Header add Set-Cookie.)
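To verify the stickiness is working, look for the ROUTEID cookie in the response headers, e.g. with curl -sD - http://yoursite/test -o /dev/null. A sketch of extracting the route from such a header, using a simulated header string so it runs standalone:

```shell
# Simulated Set-Cookie header, in the shape the Header directive above emits.
header='Set-Cookie: ROUTEID=.2; path=/'

# Pull out the route id (the number after "ROUTEID=.").
routeid=$(printf '%s\n' "$header" | sed -n 's/.*ROUTEID=\.\([0-9][0-9]*\);.*/\1/p')
echo "$routeid"
```

If two consecutive requests carrying the same cookie show the same route id, the balancer is pinning the session to one backend as intended.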
Apache load balancer always redirect
1,392,108,880,000
We are using mod_proxy to balance load between our backend servers. We have different setups, and the backend servers run on either Tomcat or JBoss. The balancer configuration is as follows:

    BalancerMember http://server1:21080 min=1 max=1000 loadfactor=1 retry=1 timeout=240 route=tc_server1
    BalancerMember http://server2:21080 min=1 max=1000 loadfactor=1 retry=1 timeout=240 route=tc_server2
    BalancerMember http://server3:21080 min=1 max=1000 loadfactor=1 retry=1 timeout=240 route=tc_server3

The issue for us is that once a backend server is in the error state, further requests are still getting forwarded to that server. Is it because retry was set to only 1 second in our configuration? What does retry actually specify? Does it mean that once a host is in the error state, no further requests are sent to that server until the number of seconds set as the retry value has elapsed? If that is the case, setting the retry value to a higher number would be a better option for us. We could set it to a value that gives us enough time to fix the bad node.
Yes, set the retry value to some higher number. From the mod_proxy documentation:

    retry - Connection pool worker retry timeout in seconds. If the connection
    pool worker to the backend server is in the error state, Apache will not
    forward any requests to that server until the timeout expires. This enables
    to shut down the backend server for maintenance and bring it back online
    later. A value of 0 means always retry workers in an error state with no
    timeout.

http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
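To make the semantics concrete, here is a toy model of the retry window; this is an illustration of the behaviour described in the docs, not Apache's actual implementation. A worker marked in error at time 0 with retry=60 only becomes eligible again once 60 seconds have elapsed:

```shell
# Toy model of mod_proxy's retry window (times are in seconds).
retry=60
error_time=0   # the moment the worker entered the error state

eligible() {
  now="$1"
  [ $(( now - error_time )) -ge "$retry" ] && echo yes || echo no
}

at30=$(eligible 30)   # 30s after the failure: still inside the window
at90=$(eligible 90)   # 90s after the failure: window has expired
echo "$at30 $at90"
```

With retry=1, as in your configuration, the window is only one second long, so Apache resumes sending requests to the broken backend almost immediately, which matches the behaviour you are seeing.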
Understanding retry value in Apache Load Balancer Configuration
1,294,207,772,000
On our servers, typing sar shows the system load statistics for today, starting at midnight. Is it possible to show yesterday's statistics?
Usually, sysstat, which provides the sar command, keeps logs in /var/log/sysstat/ or /var/log/sa/ with filenames such as /var/log/sysstat/saDD, where DD is the two-digit day of the month (starting at 01). By default, the file from the current day is used; however, you can change the file that is used with the -f command line switch. Thus for the 3rd of the month you would do something like:

    sar -f /var/log/sysstat/sa03

If you want to restrict the time range, you can use the -s and -e parameters. If you want to routinely get yesterday's file, can never remember the date, and have GNU date, you could try:

    sar -f /var/log/sysstat/sa$(date +%d -d yesterday)

I highly recommend reading the manual page for sar.
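The same date arithmetic extends naturally to a whole week. A sketch that builds the file names for the last 7 days (GNU date assumed; the log directory is distro-dependent, /var/log/sysstat on Debian/Ubuntu versus /var/log/sa on RHEL-like systems):

```shell
# Build the sysstat file names for each of the last 7 days.
logdir=/var/log/sysstat
files=""
for i in 1 2 3 4 5 6 7; do
  files="$files $logdir/sa$(date +%d -d "$i days ago")"
done
echo $files

# For each file that actually exists you could then run, for example:
#   [ -f "$f" ] && sar -u -f "$f" -s 08:00:00 -e 18:00:00
```

Note that sysstat only retains a limited history (the default on many distributions is around a week), so older files may simply not exist.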
How do I get sar to show for the previous day?
1,294,207,772,000
Until recently I thought the load average (as shown for example in top) was a moving average over the n last values of the number of processes in the "runnable" or "running" state, where n would have been defined by the "length" of the moving average: since the algorithm to compute the load average seems to trigger every 5 seconds, n would have been 12 for the 1 min load average, 12x5 for the 5 min load average and 12x15 for the 15 min load average.

But then I read this article: http://www.linuxjournal.com/article/9001. The article is quite old, but the same algorithm is implemented today in the Linux kernel. The load average is not a moving average but an algorithm for which I don't know a name. Anyway, I made a comparison between the Linux kernel algorithm and a moving average for an imaginary periodic load. There is a huge difference.

Finally, my questions are:

- Why was this implementation chosen over a true moving average, which has a real meaning to anyone?
- Why does everybody speak about a "1 min load average" when much more than the last minute is taken into account by the algorithm? (Mathematically, every measurement since boot; in practice, taking round-off error into account, still a lot of measurements.)
This difference dates back to the original Berkeley Unix, and stems from the fact that the kernel can't actually keep a rolling average; it would need to retain a large number of past readings in order to do so, and especially in the old days there just wasn't memory to spare for it. The algorithm used instead has the advantage that all the kernel needs to keep is the result of the previous calculation. Keep in mind the algorithm was a bit closer to the truth back when computer speeds and corresponding clock cycles were measured in tens of MHz instead of GHz; there's a lot more time for discrepancies to creep in these days.
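The calculation can be mimicked in a few lines: each 5-second tick folds the current run-queue length into the old value with a fixed exponential decay factor, so the kernel only has to store one number per average. A sketch in awk (the decay constant below corresponds to the "1 minute" average; the kernel actually does this in fixed-point arithmetic, so the constants there look different):

```shell
# Exponentially-damped average, as used for the "1 minute" load figure.
load1=$(awk 'BEGIN {
  e1 = exp(-5.0 / 60)        # decay factor for a 5-second tick, 1-minute scale
  load = 0
  # Feed a constant run-queue length of 1 for 60 five-second ticks (5 minutes).
  for (t = 1; t <= 60; t++)
    load = load * e1 + 1 * (1 - e1)
  printf "%.2f", load
}')
echo "$load1"
```

Note how, even after five minutes of a constant load of 1, the "1 minute" figure has still not quite reached 1.00; that is exactly the point of the question above: old samples never fully drop out of the average, they just decay.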
Why isn't a straightforward 1/5/15 minute moving average used in Linux load calculation?
1,294,207,772,000
Last Friday I upgraded my Ubuntu server to 11.10, which now runs a 3.0.0-12-server kernel. Since then the overall performance has dropped dramatically. Before the upgrade the system load was about 0.3, but currently it is at 22-30 on an 8-core CPU system with 16GB of RAM (10GB free, no swap used).

I was going to blame the BTRFS file system driver and the underlying MD array, because [md1_raid1] and [btrfs-transacti] consumed a lot of resources. But all the [kworker/*:*] threads consume a lot more. sar has output something similar to this constantly since Friday:

    11:25:01 CPU %user %nice %system %iowait %steal %idle
    11:35:01 all 1,55 0,00 70,98 8,99 0,00 18,48
    11:45:01 all 1,51 0,00 68,29 10,67 0,00 19,53
    11:55:01 all 1,40 0,00 65,52 13,53 0,00 19,55
    12:05:01 all 0,95 0,00 66,23 10,73 0,00 22,10

And iostat confirms a very poor write rate:

    sda 129,26 3059,12 614,31 258226022 51855269
    sdb 98,78 24,28 3495,05 2049471 295023077
    md1 191,96 202,63 611,95 17104003 51656068
    md0 0,01 0,02 0,00 1980 109

The question is: how can I track down why the kworker threads consume so many resources (and which ones)? Or better: is this a known issue with the 3.0 kernel, and can I tweak it with kernel parameters?

Edit: I updated the kernel to the brand-new version 3.1 as recommended by the BTRFS developers. But unfortunately this didn't change anything.
I found this thread on lkml that answers your question a little. (It seems even Linus himself was puzzled as to how to find out the origin of those threads.)

Basically, there are two ways of doing this:

1. Trace the workqueue events:

       $ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
       $ cat /sys/kernel/debug/tracing/trace_pipe > out.txt
       (wait a few secs)

   For this you will need ftrace to be compiled in your kernel, and to enable it with:

       mount -t debugfs nodev /sys/kernel/debug

   More information on the function tracer facilities of Linux is available in the ftrace.txt documentation. This will output what the threads are all doing, and is useful for tracing multiple small jobs.

2. Dump the kernel stack of a single busy kworker:

       cat /proc/THE_OFFENDING_KWORKER/stack

   This will output the stack of a single thread doing a lot of work. It may allow you to find out what caused this specific thread to hog the CPU (for example). THE_OFFENDING_KWORKER is the pid of the kworker in the process list.
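Once you have captured the trace from option 1, the out.txt file can be post-processed to see which work functions dominate. A sketch of that step, using fabricated sample lines shaped like the workqueue_queue_work event output (the real fields in your trace will differ):

```shell
# Fabricated trace_pipe lines standing in for the real out.txt capture.
trace=/tmp/sample_trace.txt
printf '%s\n' \
  'kworker/0:1-123 [000] 100.0: workqueue_queue_work: work struct=... function=btrfs_delalloc_work' \
  'kworker/0:1-123 [000] 100.1: workqueue_queue_work: work struct=... function=btrfs_delalloc_work' \
  'kworker/1:2-456 [001] 100.2: workqueue_queue_work: work struct=... function=md_submit_flush_data' \
  > "$trace"

# Count which work functions are queued most often, busiest first.
top_fn=$(grep -o 'function=[A-Za-z_]*' "$trace" | sort | uniq -c | sort -rn | head -n 1)
echo "$top_fn"
```

In a case like this one, a single function (here a hypothetical BTRFS delalloc worker) dominating the counts would point directly at the subsystem responsible for the kworker load.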
Why is kworker consuming so many resources on Linux 3.0.0-12-server?