I have Linux Mint installed on a VM: Linux jonathan-mint-virtual-machine 3.5.0-43-generic #66-Ubuntu SMP Wed Oct 23 17:33:43 UTC 2013 i686 i686 i686 GNU/Linux. I tried to compile something that uses <signal.h>, but the compiler couldn't find it.
What can I install so that a compiler looking for basic header files will find it?
To determine which package to install, you can use the apt-file tool:
$ apt-file search <file>
Searching for a generically named file such as signal.h is going to be tricky, though. You'll likely need more information than just the name.
Example
Here are the first six occurrences.
$ apt-file search /signal.h | head -6
avr-libc: /usr/lib/avr/include/avr/signal.h
c-cpp-reference: /usr/share/doc/kde/HTML/en/kdevelop/reference/C/MAN/signal.htm
dietlibc-dev: /usr/include/diet/signal.h
dietlibc-dev: /usr/include/diet/sys/signal.h
elks-libc: /usr/lib/bcc/include/bsd/signal.h
elks-libc: /usr/lib/bcc/include/signal.h
You can get the list of unique packages using the -l (list packages only) option.
$ apt-file search -l /signal.h | head -6
avr-libc
c-cpp-reference
dietlibc-dev
elks-libc
fp-docs-2.6.0
frama-c-base
C headers
Since you've specified that you'd like to install the package that includes <signal.h>, you're likely looking for an include file, include/signal.h.
$ apt-file search -l include/signal.h
elks-libc
libc6-dev
libc6-dev-armel-cross
libc6-dev-armhf-cross
libklibc-dev
libnewlib-dev
libroot-core5.34
mingw-w64-i686-dev
mingw-w64-x86-64-dev
mingw32-runtime
msp430-libc
python-pycparser
python3-pycparser
So you're likely looking for this package: libc6-dev.
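As a toy illustration of that narrowing step (the search results below are canned, since a real apt-file query needs the downloaded index), you can anchor the match on the standard include path:

```shell
# Canned apt-file-style output standing in for: apt-file search /signal.h
cat > results.txt <<'EOF'
avr-libc: /usr/lib/avr/include/avr/signal.h
dietlibc-dev: /usr/include/diet/signal.h
libc6-dev: /usr/include/signal.h
EOF
# Keep only the package(s) shipping the header at the standard include path:
grep ': /usr/include/signal.h$' results.txt | cut -d: -f1
```

Only libc6-dev survives the filter, matching the conclusion above.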
What's the right way to install an appropriate package containing <signal.h> on a Linux Mint VM?
So I'm following this tutorial on rolling your own toy Unix.
I'm stuck at compiling the sample source on this page. (The page provides a download link to the sample source code.)
The Makefile looks like this:
SOURCES=boot.o main.o
CC=gcc
CFLAGS=-nostdlib -nostdinc -fno-builtin -fno-stack-protector
LDFLAGS=-Tlink.ld
ASFLAGS=-felf

all: $(SOURCES) link

clean:
	-rm *.o kernel

link:
	ld $(LDFLAGS) -o kernel $(SOURCES)

.s.o:
	nasm $(ASFLAGS) $<
On OS X I couldn't run make because Apple's ld doesn't support the -T switch. I tried compiling on CentOS and got this:
[gideon@centosbox src]$ make
nasm -felf boot.s
cc -nostdlib -nostdinc -fno-builtin -fno-stack-protector -c -o main.o main.c
main.c:4: warning: 'struct multiboot' declared inside parameter list
main.c:4: warning: its scope is only this definition or declaration, which is probably not what you want
ld -Tlink.ld -o kernel boot.o main.o
boot.o: could not read symbols: File in wrong format
make: *** [link] Error 1
My only real problem, of course, is the File in wrong format error.
What does this mean? How do I correct it?
This sounds like you're mixing code built for different architectures. See here: could not read symbols, file in wrong format.
excerpt
You get that error when you change architectures. Are those CHOST/CFLAGS settings new?
Questions
I wonder if the toy Unix OS can only be built on 32-bit x86 and not x86-64. Something to look into.
Run make clean. Did the files you're using on CentOS come from OS X in any way?
I see boot.o being used but don't see it being compiled, unless nasm -felf boot.s builds it, so perhaps it's the OS X version of this file.
The fix
If you take a look at the options for nasm, you'll notice the Makefile invokes it with the -felf switch:
$ nasm -felf boot.s
The elf format is a 32-bit format. On 64-bit systems you need to change this to elf64. You can see the available formats using the nasm -hf option:
$ nasm -hf
...
valid output formats for -f are (`*' denotes default):
* bin flat-form binary files (e.g. DOS .COM, .SYS)
ith Intel hex
srec Motorola S-records
aout Linux a.out object files
aoutb NetBSD/FreeBSD a.out object files
coff COFF (i386) object files (e.g. DJGPP for DOS)
elf32 ELF32 (i386) object files (e.g. Linux)
elf64 ELF64 (x86_64) object files (e.g. Linux)
as86 Linux as86 (bin86 version 0.3) object files
...
So changing this line in the Makefile:
ASFLAGS=-felf64
and re-running make:
$ make
ld -Tlink.ld -o kernel boot.o main.o
$
solves the problem.
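If you want to check which flavor an object file is before linking, the byte at offset 4 of an ELF header (the EI_CLASS field) is 1 for ELF32 and 2 for ELF64; file boot.o reports the same information in words. A sketch using fabricated headers (real objects would of course be produced by nasm):

```shell
# Five-byte stand-ins for what `nasm -felf` vs `nasm -felf64` output starts with:
printf '\177ELF\001' > elf32.o   # EI_CLASS = 1 -> 32-bit object
printf '\177ELF\002' > elf64.o   # EI_CLASS = 2 -> 64-bit object
# Dump the class byte at offset 4 from each file:
od -An -tu1 -j4 -N1 elf32.o
od -An -tu1 -j4 -N1 elf64.o
```

The first command prints 1 and the second prints 2, i.e. the mismatch ld complained about is visible before linking.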
Making a toy Unix OS: issues with ld
Having failed to install a GNOME extension from the project's website, I looked for another way on Google. The guide I found went like this:
sudo apt-get install gnome-common
git clone git://git.gnome.org/gnome-shell-extensions
cd gnome-shell-extensions
./autogen.sh --prefix=$HOME/.local --enable-extensions="dock"
The last command failed with the following error:
./configure: line 4276: GLIB_GSETTINGS: command not found
configure: error: invalid extension drop-down-terminal
Unfortunately I couldn't find anything helpful on Google this time.
How can I resolve this error?
|
If I understand correctly, the GNOME developers introduced an m4 macro, GLIB_GSETTINGS, in recent versions of GLib and distribute it with the GLib sources; it is defined in gsettings.m4 as AC_DEFUN([GLIB_GSETTINGS])....
Your package of interest, gnome-shell-extensions, makes use of this macro at line 19 of configure.ac and tries to find it in the standard system locations. You probably don't have the GLib development files installed, so it can't be found.
In Debian, gsettings.m4 comes in the libglib2.0-dev package and is stored at /usr/share/aclocal/gsettings.m4. Install the dev package and build again.
GLIB_GSETTINGS not found while compiling a GNOME extension
I want to compile and install a piece of software on a new VM. The software was installed successfully on a different VM by a different admin, and I want to duplicate the exact command, with the options, that he used. Is this possible? By the way, the folder from which he ran ./configure is still intact.
If the directory where ./configure was previously run is fully intact, then within it will be a file called config.status. The config.status file is generated when ./configure is run with arguments, and it records those arguments. If you want to do everything exactly the same, and the new system has all the dependencies in place, you have several options.
you can tar/gzip the whole directory, copy the tarball to the new system, unpack it, and run make install to simply re-install the previously made objects. This should work if the system is similar enough (architecture/OS).
you can tar/gzip the whole directory, copy the tarball to the new system, unpack it, and run the ./config.status script to redo all the previous ./configure work, allowing you to run a clean make, make test, and make install.
you can also do a completely clean build using the previous admin's exact arguments, by running cp config.status myconfigure, then make clean and make distclean, and then running ./myconfigure to redo all the configuration work.
The last option would work even if you were moving between different Linux distros, or from Linux to Solaris or FreeBSD, or from 32-bit to 64-bit, provided that all the software's dependencies were met beforehand.
By copying config.status to a new filename like myconfigure, you preserve that file through any make clean, or make distclean commands.
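As a sketch of where those recorded arguments live (the config.status below is a fabricated miniature stand-in; a real one is much longer), Autoconf stores them in an ac_cs_config variable near the top of the script, and recent Autoconf versions can also print them with ./config.status --config:

```shell
# Fabricated miniature config.status showing where the arguments are recorded:
cat > config.status <<'EOF'
#! /bin/sh
# Generated by configure.
ac_cs_config="'--prefix=/opt/myapp' '--enable-shared'"
EOF
# Recover the recorded ./configure arguments:
grep '^ac_cs_config=' config.status
```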
Compiling software with the same options as a previous install
I am trying to install a number of Python 3 modules (e.g. regex, cytoolz, spacy) that require compilation, but they all fail with an error identical to the one below (bottom). I have checked for the presence of "limits.h" using the grep below. I have reinstalled gcc, g++, build-essential, python3-dev etc., but to no avail.
I am on Ubuntu 18.10.
dpkg -s gcc
Package: gcc
Status: install ok installed
Priority: optional
Section: devel
Installed-Size: 50
Maintainer: Ubuntu Developers <[email protected]>
Architecture: amd64
Source: gcc-defaults (1.179ubuntu1)
Version: 4:8.2.0-1ubuntu1
Provides: c-compiler, gcc-x86-64-linux-gnu (= 4:8.2.0-1ubuntu1)
Depends: cpp (= 4:8.2.0-1ubuntu1), gcc-8 (>= 8.2.0-4~)
Recommends: libc6-dev | libc-dev
Suggests: gcc-multilib, make, manpages-dev, autoconf, automake, libtool, flex, bison, gdb, gcc-doc
Conflicts: gcc-doc (<< 1:2.95.3)
Description: GNU C compiler
This is the GNU C compiler, a fairly portable optimizing compiler for C.
.
This is a dependency package providing the default GNU C compiler.
Original-Maintainer: Debian GCC Maintainers <[email protected]>
Check:
x86_64-linux-gnu-gcc -xc -E -v /dev/null
Using built-in specs.
COLLECT_GCC=x86_64-linux-gnu-gcc
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 8.2.0-7ubuntu1' --with-bugurl=file:///usr/share/doc/gcc-8/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --prefix=/usr --with-gcc-major-version-only --program-suffix=-8 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 8.2.0 (Ubuntu 8.2.0-7ubuntu1)
COLLECT_GCC_OPTIONS='-E' '-v' '-mtune=generic' '-march=x86-64'
/usr/lib/gcc/x86_64-linux-gnu/8/cc1 -E -quiet -v -imultiarch x86_64-linux-gnu /dev/null -mtune=generic -march=x86-64 -fstack-protector-strong -Wformat -Wformat-security
ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/8/include-fixed"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/8/../../../../x86_64-linux-gnu/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/lib/gcc/x86_64-linux-gnu/8/include
/usr/local/include
/usr/include/x86_64-linux-gnu
/usr/include
End of search list.
# 1 "/dev/null"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "/dev/null"
COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/8/:/usr/lib/gcc/x86_64-linux-gnu/8/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/8/:/usr/lib/gcc/x86_64-linux-gnu/
LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/8/:/usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/8/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/8/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-E' '-v' '-mtune=generic' '-march=x86-64'
Check:
dpkg -S limits.h | grep linux
linux-headers-4.18.0-15: /usr/src/linux-headers-4.18.0-15/include/linux/dynamic_queue_limits.h
linux-libc-dev:amd64: /usr/include/linux/limits.h
linux-headers-4.19.0-041900rc8: /usr/src/linux-headers-4.19.0-041900rc8/include/uapi/linux/limits.h
linux-headers-4.18.0-14: /usr/src/linux-headers-4.18.0-14/include/linux/drbd_limits.h
linux-headers-4.18.0-15: /usr/src/linux-headers-4.18.0-15/arch/arm/include/asm/limits.h
linux-headers-4.18.0-14: /usr/src/linux-headers-4.18.0-14/include/uapi/linux/limits.h
linux-headers-4.18.0-14: /usr/src/linux-headers-4.18.0-14/include/linux/dynamic_queue_limits.h
libgcc-8-dev:amd64: /usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/limits.h
linux-headers-4.18.0-14: /usr/src/linux-headers-4.18.0-14/arch/arm/include/asm/limits.h
linux-headers-4.18.0-15: /usr/src/linux-headers-4.18.0-15/include/linux/drbd_limits.h
linux-headers-4.19.0-041900rc8: /usr/src/linux-headers-4.19.0-041900rc8/include/linux/drbd_limits.h
linux-headers-4.18.0-15: /usr/src/linux-headers-4.18.0-15/include/uapi/linux/limits.h
linux-headers-4.19.0-041900rc8: /usr/src/linux-headers-4.19.0-041900rc8/include/linux/dynamic_queue_limits.h
linux-headers-4.19.0-041900rc8: /usr/src/linux-headers-4.19.0-041900rc8/arch/arm/include/asm/limits.h
libgcc-8-dev:amd64: /usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/syslimits.h
Error:
sudo pip3 install regex
The directory '/home/mac/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/mac/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting regex
Downloading https://files.pythonhosted.org/packages/9a/6f/8c1479c781bbc94394f9c4e33ad4139068bcc6a1b018c5a5525471262b8a/regex-2019.02.18.tar.gz (643kB)
100% |████████████████████████████████| 645kB 813kB/s
Installing collected packages: regex
Running setup.py install for regex ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-n16bk3y6/regex/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-z1rqj4ab-record/install-record.txt --single-version-externally-managed --compile:
/home/mac/.local/lib/python3.6/site-packages/setuptools/dist.py:475: UserWarning: Normalizing '2019.02.18' to '2019.2.18'
normalized_version,
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
copying regex_3/regex.py -> build/lib.linux-x86_64-3.6
copying regex_3/_regex_core.py -> build/lib.linux-x86_64-3.6
copying regex_3/test_regex.py -> build/lib.linux-x86_64-3.6
running build_ext
building '_regex' extension
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/regex_3
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.6m -c regex_3/_regex.c -o build/temp.linux-x86_64-3.6/regex_3/_regex.o
In file included from /usr/include/python3.6m/Python.h:11,
from regex_3/_regex.c:48:
/usr/include/limits.h:124:26: error: no include path in which to search for limits.h
# include_next <limits.h>
^
In file included from regex_3/_regex.c:48:
/usr/include/python3.6m/Python.h:14:2: error: #error "Something's broken. UCHAR_MAX should be defined in limits.h."
#error "Something's broken. UCHAR_MAX should be defined in limits.h."
^~~~~
/usr/include/python3.6m/Python.h:18:2: error: #error "Python's source code assumes C's unsigned char is an 8-bit type."
#error "Python's source code assumes C's unsigned char is an 8-bit type."
^~~~~
In file included from /usr/include/python3.6m/Python.h:25,
from regex_3/_regex.c:48:
/usr/include/stdio.h:33:10: fatal error: stddef.h: No such file or directory
#include <stddef.h>
^~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-n16bk3y6/regex/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-z1rqj4ab-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-n16bk3y6/regex/
Both linux-libc-dev and libc6-dev are already installed and I have also tried reinstalling both. My PATH is:
$ echo $PATH
/home/mac/.opam/system/bin:/home/mac/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
I also tried booting into an earlier kernel, 4.18.0, but the results were the same.
Analysis (you can skip it, but it may help diagnose similar problems in the future)
Your GCC complains about a missing limits.h file referenced from /usr/include/limits.h (another limits.h file):
/usr/include/limits.h:124:26: error: no include path in which to search for limits.h
Checking with /usr/include/limits.h we can see the following:
$ sed -n 117,125p /usr/include/limits.h
/* Get the compiler's limits.h, which defines almost all the ISO constants.
We put this #include_next outside the double inclusion check because
it should be possible to include this file more than once and still get
the definitions from gcc's header. */
#if defined __GNUC__ && !defined _GCC_LIMITS_H_
/* `_GCC_LIMITS_H_' is what GCC's file defines. */
# include_next <limits.h>
#endif
In other words, libc's limits.h includes another limits.h provided by the compiler itself. Using the apt-file tool and a bit of common sense we can determine that the package you need is libgcc-8-dev:
$ apt-file search /limits.h | grep gcc-8
libgcc-8-dev: /usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/limits.h
...
Your dpkg query lists the package and file as installed:
$ dpkg -S limits.h | grep linux
libgcc-8-dev:amd64: /usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/limits.h
...
However, GCC complains about a missing directory:
$ x86_64-linux-gnu-gcc -xc -E -v /dev/null
...
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/8/include-fixed"
Conclusion and fix
All this likely means that the libgcc-8-dev package somehow got corrupted on your system. To restore it, run:
$ sudo apt-get install --reinstall libgcc-8-dev
(you may need to replace 8 with the appropriate major GCC version you have)
In general, if you don't remember manually deleting limits.h from your system or tinkering with the GCC install in any other way, it may be a good idea to check your file system's consistency and hard drive health.
Installing Python modules fails: "limits.h" missing?
I'm not sure if this is the best place to ask this - please point me in the right direction if there's a better place.
Let's say, hypothetically, that I have two machines - A is a development machine, and B is a production machine. A has software like a compiler that can be used to build software from source, while B does not.
On A, I can easily build software from source by following the usual routine:
./configure
make
Then, I can install the built software on A by running sudo make install. However, what I'd really like to do is install the software that I just built on B. What is the best way to do that?
There are a few options that I have considered:
Use a package manager to install software on B: this isn't an option for me because the software available in the package manager is very out of date.
Install the compiler and other build tools on B: I'd rather not install build tools on the production machine due to various constraints.
Manually copy the binaries from A to B: this is error-prone, and I'd like to make sure that the binaries are installed in a consistent manner across production machines.
Install only make on B, transfer the source directory, and run sudo make install on B: this is the best solution I've found so far, but for some reason (perhaps clock offsets) make attempts to re-build software that has already been built, which fails because the build tools aren't installed on B. Since my machines also happen to have terrible I/O speeds, transferring the source directory takes a very long time.
What would be really nice is if there were a way to make some kind of package containing the built binaries that can be transferred and executed to install the binaries and configuration files. Does any such tool exist?
Using what you have so far, and assuming the Makefile is generated with GNU Autotools, I would set the install path with
./configure --prefix=/somewhere/else/than/the/usual/usr/local
and then run
make && make install
and finally copy the files from the prefix directory to the corresponding location on the other machine. This assumes both machines have the same architecture; if not, use the appropriate cross toolchain.
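If you want a transferable bundle rather than ad-hoc copying, you can stage the install into a scratch root and tar that up. The sketch below fakes the staged tree by hand; with a real autotools project you would populate it with make install DESTDIR=$PWD/stage instead (DESTDIR is the standard GNU staging convention):

```shell
# Fake a staged install tree (stands in for `make install DESTDIR=$PWD/stage`):
mkdir -p stage/usr/local/bin
printf '#!/bin/sh\necho hello\n' > stage/usr/local/bin/hello
chmod +x stage/usr/local/bin/hello
# Pack it up for transfer; on machine B: sudo tar -C / -xzf app-binaries.tar.gz
tar -C stage -czf app-binaries.tar.gz .
tar -tzf app-binaries.tar.gz
```

Unpacking the tarball at / on machine B reproduces exactly the files that make install would have placed there.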
Can binaries built from source be installed on a second machine?
I want to compile Nano from source for my friend.
I have successfully compiled it on all of my computers, and he is running Linux Mint 18.1 too.
I don't know what is missing on his system for UTF-8 support, given this configure message:
*** Insufficient UTF-8 support was detected in your curses and/or C
*** libraries. If you want UTF-8 support, please verify that your slang
*** was built with UTF-8 support or your curses was built with wide
*** character support, and that your C library was built with wide
*** character support.
I tried installing various development packages, which solved several other issues, but I haven't been able to solve this one, since I couldn't find much about it online.
I am quite exhausted, so for now I have installed a Nano build with UTF-8 support disabled on his computer.
Any clues appreciated.
It looks like you need to install libncursesw5-dev and/or libslang2-dev; that’s what’s missing according to the config log.
Compiling Nano with UTF-8 support failed on one computer
I'm going through Linux from Scratch and I'm on the page that discusses the toolchain. Up until this point I've understood everything, but I don't understand the term "toolchain".
From what I've read the toolchain is a set of tools that will be used to compile tools on the new distribution. This is required so that software isn't compiled with the host compiler.
Am I correct in thinking that the host's tools (I believe it's the compiler that's being built at this stage) have to be used to compile Glibc, Binutils, etc.? And that once that is done, the newly compiled compiler is used to build the other tools that make up the OS?
This part is very sketchy to me, and Googling around isn't yielding many useful results. If anyone has any useful resources to share that would help me understand this better, that would be great.
The toolchain is simply the set of tools used to build software: compiler, assembler, linker, libraries, and a few useful utilities.
In this case the important property is that it is host-independent, that is, independent of the tools that came with the host system.
There are several reasons why you might want to rebuild the tools:
It is harder to sneak in backdoors (though not impossible).
Compile parameters can be tweaked to fit your system, instead of using a general-purpose binary.
You get the newest version of the tools.
LFS: What is the toolchain and why is it important?
When running python3 setup.py build it ended with this:
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.4/sklearn/linear_model/sag_fast.o -Lbuild/temp.linux-x86_64-3.4 -o build/lib.linux-x86_64-3.4/sklearn/linear_model/sag_fast.cpython-34m.so
running install_lib
creating /usr/local/lib/python3.4/dist-packages/sklearn
error: could not create '/usr/local/lib/python3.4/dist-packages/sklearn': Permission denied
Of course it could not write to /usr/local/lib/ as no sudo was used. I'm wary of using sudo for this step.
This was the end of sudo python3 setup.py install:
running install_egg_info
Writing /usr/local/lib/python3.4/dist-packages/scikit_learn-0.18.dev0.egg-info
running install_clib
Looks good to me. However, when I try to import sklearn I get this error:
$ python3
Python 3.4.3+ (default, Oct 14 2015, 16:03:50)
[GCC 5.2.1 20151010] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sklearn
Traceback (most recent call last):
File "/home/dotancohen/code/scikit-learn/sklearn/__check_build/__init__.py", line 44, in <module>
from ._check_build import check_build
ImportError: No module named 'sklearn.__check_build._check_build'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dotancohen/code/scikit-learn/sklearn/__init__.py", line 56, in <module>
from . import __check_build
File "/home/dotancohen/code/scikit-learn/sklearn/__check_build/__init__.py", line 46, in <module>
raise_build_error(e)
File "/home/dotancohen/code/scikit-learn/sklearn/__check_build/__init__.py", line 41, in raise_build_error
%s""" % (e, local_dir, ''.join(dir_content).strip(), msg))
ImportError: No module named 'sklearn.__check_build._check_build'
___________________________________________________________________________
Contents of /home/dotancohen/code/scikit-learn/sklearn/__check_build:
_check_build.c setup.pyc __pycache__
_check_build.pyx __init__.py setup.py
___________________________________________________________________________
It seems that scikit-learn has not been built correctly.
If you have installed scikit-learn from source, please do not forget
to build the package before using it: run `python setup.py install` or
`make` in the source directory.
If you have used an installer, please check that it is suited for your
Python version, your operating system and your platform.
>>>
Should I run python3 setup.py build with sudo? This is on Kubuntu Linux 15.10:
$ uname -a
Linux loathe 4.2.0-16-generic #19-Ubuntu SMP Thu Oct 8 15:35:06 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/issue
Ubuntu 15.10 \n \l
Note that the Ubuntu-packaged version of python-scikits-learn is for Python 2 only, and I need Python 3.
I found a post which mentioned configuring which ATLAS (linear algebra package) version to use:
$ sudo update-alternatives --set libblas.so.3 /usr/lib/atlas-base/atlas/libblas.so.3
$ sudo update-alternatives --set liblapack.so.3 /usr/lib/atlas-base/atlas/liblapack.so.3
After that, I was happily surprised to find there was in fact no longer a permissions issue, but I was getting this error on build instead:
sklearn/__check_build/_check_build.c:4:20: fatal error: Python.h: No such file or directory
Therefore I went over the results of aptitude search python | grep dev and decided that the following packages might help:
$ sudo aptitude install python3-numpy-dev python3.5-dev libpython3.4-dev
And with that the package built properly and scikit-learn imports properly:
$ python3
Python 3.4.3+ (default, Oct 14 2015, 16:03:50)
[GCC 5.2.1 20151010] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sklearn
>>>
I'm not sure which of the three packages was the critical one, probably libpython3.4-dev, but the issue is resolved.
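To avoid the trial and error next time, you can ask the interpreter where it expects its C headers and check for Python.h directly; a small probe (the second line's output depends on which dev packages are installed):

```shell
# Where does this python3 look for its C API headers?
inc=$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["include"])')
echo "header dir: $inc"
# Python.h lives there only if the matching -dev package is installed:
if [ -f "$inc/Python.h" ]; then echo "Python.h present"; else echo "Python.h missing"; fi
```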
Building Python packages succeeds, but the package is improperly built
Is there a proper way to build a minimal kernel for FreeBSD? The FreeBSD Handbook lacks information about this. By default the /boot/kernel directory is pretty big, around 450 MB. I want to minimize the kernel footprint and remove all unnecessary kernel modules and options. Should I use the "NO_MODULES" option in /etc/make.conf, or C compilation flags?
There are a number of things you can do to reduce the size and number of files in /boot/kernel.
Possibly the best space saving is to be had by setting WITHOUT_KERNEL_SYMBOLS in /etc/src.conf (if this file doesn't already exist, just create it), and the next time you installkernel, the debug symbol files won't be installed. It's safe to delete them now, if you need the space immediately (rm /boot/kernel/*.symbols)
There are a few make.conf settings that control what modules are built:
NO_MODULES - disable building of modules completely
MODULES_OVERRIDE - specify the modules you want to build
WITHOUT_MODULES - list of modules that should not be built
The NO_MODULES option is probably a bit too heavy-handed, so a judicious combination of the other two is a better choice. If you know exactly which modules you want, you can simply set them in MODULES_OVERRIDE. Note that WITHOUT_MODULES is evaluated after MODULES_OVERRIDE, so any module named in both lists will not be built.
If you really want to suppress building of all modules, you can use NO_MODULES, and ensure that all required drivers and modules are statically compiled into the kernel. Each driver's manpage shows the appropriate lines to add to your kernel config file, so you should be able to figure out what you need.
If you still find that space is a problem, or if you just want to strip down the kernel as much as possible, you can edit your kernel config to remove any devices and subsystems your machine doesn't support, or which you are sure you won't want to use. The build system is pretty sensible, and if you inadvertently remove a module required by one still active in the config, you will get a failed build and an error message explaining what went wrong.
Although it can be extremely tedious, the best approach is to take small steps, removing one or two things at a time and ensuring that the resultant configuration both builds and boots correctly. Whatever you do, though, I highly recommend you make a copy of /usr/src/sys/<arch>/config/GENERIC, and edit the copy. If you ever get so muddled that the only recourse is to start from the default config, you'll be glad you've still got the GENERIC file on your system!
In order to build your custom kernel, you can either pass the name of the config on the command line as make KERNCONF=MYKERNCONF buildkernel, or you can set KERNCONF in /etc/make.conf. Make sure you place the custom config file in /usr/src/sys/<arch>/config and the build system will be able to find it.
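Putting the make.conf knobs above together, a minimal example (module names are purely illustrative; pick the ones your hardware actually needs):

```make
# /etc/make.conf (illustrative)
MODULES_OVERRIDE=zfs opensolaris    # build only the modules listed here
#WITHOUT_MODULES=zfs                # or: build everything except these
KERNCONF=MYKERNCONF                 # kernel config used by buildkernel
```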
How to properly build a minimal FreeBSD kernel?
I'm on CentOS 6.5. Specifically, I'm running this AMI: Adobe Media Server 5 Extended.
I followed these steps:
$ sudo yum groupinstall "Development Tools"
$ sudo yum install glib2-devel fuse-devel libevent-devel \
libxml2-devel openssl-devel
$ wget https://github.com/downloads/libevent/libevent/libevent-2.0.21-stable.tar.gz
$ tar -xzf libevent-2.0.21-stable.tar.gz
$ cd libevent-2.0.21-stable
$ ./configure && make
$ sudo make install
$ sudo echo "/usr/local/lib/" > /etc/ld.so.conf.d/riofs.conf
$ sudo ldconfig
$ export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
Then I run libevent and I get command not found.
What am I doing wrong?
libevent is a library. A library usually doesn't ship any executables, so getting "command not found" for a command with the same name as the library is perfectly normal.
Cannot install libevent on CentOS 6.5
I tried to install DarkIce with LAME. To compile it, I need ALSA and PulseAudio:
checking for lame library at /usr ... found at /usr
checking for vorbis libraries at /usr ... configure: WARNING: not found, building without Ogg Vorbis
checking for opus libraries at /usr ... configure: WARNING: not found, building without Ogg Opus
checking for faac library at /usr ... configure: WARNING: not found, building without faac
checking for aacplus library at /usr ... configure: WARNING: not found, building without aacplus
checking for twolame library at /usr ... configure: WARNING: not found, building without twolame
checking for alsa libraries at /usr/lib/alsa-lib ... configure: WARNING: not found, building without ALSA support
checking for pulseaudio libraries at /usr/lib64/pulseaudio/libpulse ... configure: WARNING: not found, building without PULSEAUDIO support
checking for jack libraries at /usr ... configure: WARNING: not found, building without JACK support
checking for samplerate libraries at /usr ... configure: WARNING: not found, building without libsamplerate support
I can pass a path for these libraries with --with-*-prefix=, but I have no idea where on my system to find them, or what I need to install to compile with them.
I tried /usr/lib and /usr/lib64; neither works.
My question is: where do I get these libraries from?
system: Fedora release 19 (Schrödinger’s Cat) 3.11.6-200.fc19.x86_64
|
OK, it's just that you need to install the *-devel RPMs, and that's it.
For ALSA and PulseAudio they are: alsa-lib-devel and pulseaudio-libs-devel.
| Where are my ALSA, pulseaudio lib |
1,366,812,867,000 |
The gcc compiler uses target triplets for cross-compilation. I see target triplets like "x86_64-pc-linux-gnu" (the most common). I understand what it means, but I don't know how to specify another Unix-like system instead of "linux-gnu". Is there any document for it? Also, the "pc" part seems to be optional (should I care about this?); when I run a config.guess script, it returns "x86_64-unknown-linux-gnu".
|
In order to cross compile, you must have (or build) a cross-compiler; gcc cannot, by default, just build for any target that it could be configured for. There is a list of possibilities in the gcc source package, I believe.
Building a cross compiler toolchain is not a simple undertaking, so if you want to do that, you have to decide what it's for and ask more specific questions.
There's also a list of hosts/targets with notes here. An asterisk indicates that any value can be used in that position (presumably this makes no difference to the compiler, and is simply a user defined label); the pc you are talking about may be such.
| Is there any pattern to specify target triples in GCC? |
1,366,812,867,000 |
I'm trying to build a project, and when I use the command make, I get the following errors:
/bin/sh: line 4: .deps/ipset_bitmap_ipmac.Tpo: Permission denied
make[2]: *** [ipset_bitmap_ipmac.lo] Error 126
This file, .deps/ipset_bitmap_ipmac.Tpo, was created by make during the build with the following permissions: -rw-r--r--, notice that there's no x. But then make wants to execute the file immediately, which fails.
If I go to the file and add executable permissions manually, then the build continues past that point if I re-run make. Except that the make command will crash again once it reaches the next file. The only option I have is to keep chmoding every single new file.
My question is, why is make creating these new files without +x?
Side notes: I'm on CentOS5, umask -S returns: u=rwx,g=rx,o=rx, sudo doesn't help at all.
|
With a name like .deps/ipset_bitmap_ipmac.Tpo, it's pretty likely that the file was not meant to be executable.
What's happening here is that there's a line in the Makefile that looks like
$(SOME_VARIABLE) .deps/ipset_bitmap_ipmac.Tpo
or more likely
$(SOME_VARIABLE) $(ANOTHER_VARIABLE)
where the value of ANOTHER_VARIABLE is .deps/ipset_bitmap_ipmac.Tpo, or some variant on this. Due to a bug in the makefile, or in the program that generated it, or because your computer has an unsupported configuration, the variable SOME_VARIABLE (which should have been the name of the program) wasn't defined.
More help may be forthcoming if you tell us what project you're trying to build and exactly where you got it, how you unpacked it, how you configured it, what build command you ran.
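The mechanism can be illustrated with a made-up makefile (this is not the actual ipset makefile; all names here are invented). When the variable that should hold the compiler name expands to nothing, the first word left on the recipe line is the .Tpo file itself, and the shell tries to execute it:

```shell
mkdir -p /tmp/tpo-demo/.deps
cd /tmp/tpo-demo
touch .deps/foo.Tpo        # created mode 644, no execute bit
# An empty COMPILE variable reproduces the symptom.
printf 'COMPILE =\nall:\n\t$(COMPILE) .deps/foo.Tpo\n' > Makefile
make > out.txt 2>&1 || true
cat out.txt                # fails with "Permission denied"
```

The failing exit status is typically 126, the shell's "found but not executable" code, which matches the Error 126 in the question's output.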
| Files created by 'make' aren't getting executable permissions by default |
1,366,812,867,000 |
From here: http://fedoraproject.org/wiki/Common_kernel_problems#Can.27t_find_root_filesystem_.2F_error_mounting_.2Fdev.2Froot
A lot of these bugs end up being a broken initrd due to bugs in
mkinitrd.
Get the user to attach their initrd for their kernel to the
bz, and also their /etc/modprobe.conf, or have them examine the
contents themselves if they are capable of that.
Picking apart the initrd of a working and failing kernel and doing a diff of the init script can reveal clues. To take apart an initrd, do the following ..
mkdir initrd
cd initrd/
gzip -dc /boot/initrd-2.6.23-0.104.rc3.fc8.img | cpio -id
I wish to understand what exactly is being done here.
What has initrd to do with anything?
Where are we supposed to create the directory initrd?
|
An initrd (short for “initial RAM disk”) is a filesystem that's mounted when the Linux kernel boots, before the “real” root filesystem. This filesystem is loaded into memory by the bootloader, and remains in memory until the real boot. The kernel executes the program /linuxrc on the initrd; its job is to mount the real root, and when /linuxrc terminates the kernel runs /sbin/init.
A bug somewhere in the initrd can explain why the system doesn't boot. So the document you link to recommends that you compare your initrd with an official one if you have trouble booting.
In the provided instructions, initrd is just some temporary directory, you can call it anisha_initrd or fred if you like. The initrd is stored in the file /boot/initrd-SOMETHING.img as a gzipped cpio archive; the instructions unpack that archive in the temporary directory you created. After unpacking, you can compare it with an official initrd (unpack the official initrd and run a command like diff -ru /path/to/official_initrd /path/to/anisha_initrd).
| Kernel Panic - Can't find root filesystem / error mounting /dev/root |
1,366,812,867,000 |
I have a game I'm writing which recently required libjpeg. I wrote some code using libjpeg on some-other-machine and it worked as expected. I pulled the code to this machine and tried compiling and running it and have been getting the runtime error out of libjpeg:
Wrong JPEG library version: library is 62, caller expects 80
If I use ldd to see what the binary is linked to, I get:
ldd Debug/tc | grep jpeg
libjpeg.so.62 => /usr/lib/x86_64-linux-gnu/libjpeg.so.62 (0x00007f50f02f2000)
My compile flags include -ljpeg. The current jpeg related shared objects in my /usr/lib looks like this:
find | grep jpeg | xargs ls -l --color
-rwxr-xr-x 1 root root 61256 2011-09-26 15:43 ./gimp/2.0/plug-ins/file-jpeg
-rw-r--r-- 1 root root 5912 2011-10-01 06:40 ./grub/i386-pc/jpeg.mod
-rw-r--r-- 1 root root 40264 2011-08-24 05:41 ./gstreamer-0.10/libgstjpegformat.so
-rw-r--r-- 1 root root 78064 2011-08-24 05:41 ./gstreamer-0.10/libgstjpeg.so
-rw-r--r-- 1 root root 17920 2011-09-27 17:30 ./i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders/libpixbufloader-jpeg.so
lrwxrwxrwx 1 root root 17 2011-08-10 14:07 ./i386-linux-gnu/libjpeg.so.62 -> libjpeg.so.62.0.0
-rw-r--r-- 1 root root 145068 2011-08-10 14:07 ./i386-linux-gnu/libjpeg.so.62.0.0
-rw-r--r-- 1 root root 30440 2011-09-30 05:25 ./i386-linux-gnu/qt4/plugins/imageformats/libqjpeg.so
-rw-r--r-- 1 root root 924 2011-06-15 05:05 ./ImageMagick-6.6.0/modules-Q16/coders/jpeg.la
-rw-r--r-- 1 root root 39504 2011-06-15 05:05 ./ImageMagick-6.6.0/modules-Q16/coders/jpeg.so
-rw-r--r-- 1 root root 10312 2011-06-03 00:18 ./imlib2/loaders/jpeg.so
-rw-r--r-- 1 root root 43072 2011-10-21 19:11 ./jvm/java-6-openjdk/jre/lib/amd64/libjpeg.so
-rw-r--r-- 1 root root 23184 2011-10-14 02:46 ./kde4/jpegthumbnail.so
-rw-r--r-- 1 root root 132632 2009-04-30 00:24 ./libopenjpeg-2.1.3.0.so
lrwxrwxrwx 1 root root 22 2009-04-30 00:24 ./libopenjpeg.so.2 -> libopenjpeg-2.1.3.0.so
-rw-r--r-- 1 root root 23224 2011-08-03 04:20 ./libquicktime2/lqt_mjpeg.so
-rw-r--r-- 1 root root 27208 2011-08-03 04:20 ./libquicktime2/lqt_rtjpeg.so
-rw-r--r-- 1 root root 47800 2011-09-24 10:12 ./strigi/strigiea_jpeg.so
-rw-r--r-- 1 root root 3091 2011-05-18 05:25 ./syslinux/com32/include/tinyjpeg.h
-rw-r--r-- 1 root root 22912 2011-09-27 17:38 ./x86_64-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders/libpixbufloader-jpeg.so
-rw-r--r-- 1 root root 226798 2011-08-10 14:06 ./x86_64-linux-gnu/libjpeg.a
-rw-r--r-- 1 root root 935 2011-08-10 14:06 ./x86_64-linux-gnu/libjpeg.la
lrwxrwxrwx 1 root root 17 2011-08-10 14:06 ./x86_64-linux-gnu/libjpeg.so -> libjpeg.so.62.0.0
lrwxrwxrwx 1 root root 17 2011-10-15 10:50 ./x86_64-linux-gnu/libjpeg.so.62 -> libjpeg.so.62.0.0
-rw-r--r-- 1 root root 150144 2011-08-10 14:06 ./x86_64-linux-gnu/libjpeg.so.62.0.0
lrwxrwxrwx 1 root root 16 2011-10-15 10:50 ./x86_64-linux-gnu/libjpeg.so.8 -> libjpeg.so.8.3.0
lrwxrwxrwx 1 root root 19 2011-11-30 01:25 ./x86_64-linux-gnu/libjpeg.so.8.3.0.bak -> ./libjpeg.so.62.0.0
-rw-r--r-- 1 root root 31488 2011-09-30 05:13 ./x86_64-linux-gnu/qt4/plugins/imageformats/libqjpeg.so
The original machine runs Gentoo, 'this' machine runs Ubuntu 11.10. Both are 64-bit. The gentoo box only has libjpeg version 8, it seems.
Ultimately, my question is: How can I resolve this? I'd also like to know how I can determine exactly which library the linker has used.
EDIT: My game also links to SDL_image, which according to ldd, links to libjpeg version 8. I bet this is where my troubles stem from. How can I tell gcc to link my game to libjpeg version 8? I tried -l/usr/lib/libjpeg.so.whatever and it complained about not finding the specified lib.
|
Use LD_LIBRARY_PATH to control which directories the dynamic linker searches at run time. To pick a specific version at link time, pass the library file's full path to gcc instead of -ljpeg (the -l flag only accepts library names, which is why -l/usr/lib/libjpeg.so.whatever failed). Refer to these useful links as well:
http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
http://linuxmafia.com/faq/Admin/ld-lib-path.html
| Linking issues with libjpeg |
1,366,812,867,000 |
I tried building mutter module using JHBuild and it fails:
<snip>
make[4]: Entering directory `/home/wena/src/mutter/src'
CC screen.lo
core/screen.c: In function 'reload_monitor_infos':
core/screen.c:445:16: error: variable 'display' set but not used [-Werror=unused-but-set-variable]
core/screen.c: At top level:
core/screen.c:394:1: error: 'find_monitor_with_rect' defined but not used [-Werror=unused-function]
core/screen.c:418:1: error: 'find_main_output_for_crtc' defined but not used [-Werror=unused-function]
cc1: all warnings being treated as errors
make[4]: *** [screen.lo] Error 1
make[4]: Leaving directory `/home/wena/src/mutter/src'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/home/wena/src/mutter/src'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/home/wena/src/mutter/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/wena/src/mutter'
make: *** [all] Error 2
*** Error during phase build of mutter: ########## Error running make *** [1/1]
|
Using the example in the question, put the following inside the ~/.jhbuildrc file (reference):
module_autogenargs = {"mutter": "--disable-Werror"}
| How to stop warnings from being treated as errors in JHBuild |
1,366,812,867,000 |
I am working directly on a dev server and want to build my own vim, for my purposes, not for all system users. The build scenario is:
hg clone https://vim.googlecode.com/hg/ vim
cd vim/src
./configure --enable-rubyinterp --enable-multibyte
make
The result of ./vim --version is:
VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Jun 22 2011 09:35:46)
Included patches: 1-230
Compiled by aeg@dev
Normal version without GUI. Features included (+) or not (-):
-arabic +autocmd -balloon_eval -browse +builtin_terms +byte_offset +cindent
-clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
-conceal +cryptv -cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
-dnd -ebcdic -emacs_tags +eval +ex_extra +extra_search -farsi +file_in_path
+find_in_path +float +folding -footer +fork() +gettext -hangul_input +iconv
+insert_expand +jumplist -keymap -langmap +libcall +linebreak +lispindent
+listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
-mouse_dec -mouse_gpm -mouse_jsbterm -mouse_netterm -mouse_sysmouse
+mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra -perl
+persistent_undo +postscript +printer -profile -python -python3 +quickfix
+reltime -rightleft +ruby +scrollbind +signs +smartindent -sniff +startuptime
+statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white
-tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands
+vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore
+wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard
-xterm_save
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
user exrc file: "$HOME/.exrc"
fall-back for $VIM: "/usr/local/share/vim"
Compilation:
gcc -c -I. -Iproto -DHAVE_CONFIG_H -g -O2 -D_FORTIFY_SOURCE=1
Linking: gcc -L. -rdynamic -Wl,-export-dynamic -L/usr/local/lib -Wl,--as-needed -o vim -lm -lncurses -lnsl -lruby1.8 -lpthread -lrt -ldl -lcrypt -lm -L/usr/lib
When I open the just-built vim I get following:
Error detected while processing /home/aeg/.vimrc:
line 148:
E484: Can't open file /usr/local/share/vim/syntax/syntax.vim
The problem is that there is no /usr/local/share/vim. I have my plugins in ./vim and I want vim to look in this path.
This is Debian; note that the system /usr/bin/vim was built with a different setting: fall-back for $VIM: "/usr/share/vim".
|
It looks like you forgot to run make install.
If you did run make install, but none of your vim files are found under /usr/local/share/vim, then perhaps you have a permissions problem -- that is, you're not allowed to install files there.
If the latter is true, then just build it with an install location set to a place you do control:
$ cd vim/src
$ ./configure --enable-rubyinterp --enable-multibyte --prefix=/home/aeg/myvim
$ make
$ make install
$ export PATH=/home/aeg/myvim/bin:$PATH
$ vim
| Fall-back for $VIM is invalid |
1,366,812,867,000 |
How can I get a list of all installed 32 Bit packages on a Gentoo Linux system?
|
The eix tool comes to help:
eix -I --installed-with-use abi_x86_32
-I selects only installed packages
--installed-with-use selects packages with certain USE flag
In this particular case you could even omit -I, but I included it just as useful option in general. You may also be interested in the option -U, which selects packages which have abi_x86_32, but not necessary were installed with it, and in combination with -I it gives yet another list.
If you don't have eix on the system yet, just install it with emerge app-portage/eix.
| List all 32 Bit packages on a Gentoo System |
1,366,812,867,000 |
I see this on the Internet:
General Setup --->
<*/M> Kernel .config support
[*] Enable access to .config through /proc/config.gz
But I can't understand what that means.
I have an arm-based board(NanoPi-M1 with Allwinner H3 sun8iw7p1 SoC) that has Debian Jessie OS, and I have no config.gz file in /proc directory. I only have config-3.4.39-h3.new file in /boot directory that it is an empty file!
I added modules="configs" in the /etc/modules file and rebooted my system, but it had no effect!
How can I access to kernel configuration?
|
I see this on the Internet:
It specifies the location in Linux's menuconfig from where you can enable /proc/config.gz. You must recompile the Linux kernel to do this. On an ARM-based board this may not be mainline Linux but a different tree specific to the SoC used on the ARM board. Note that adding modules="configs" to /etc/modules only helps when the running kernel was built with this option as a module (M); yours evidently was not, so a rebuild is required.
So, the steps would be:
Figure out which SoC you have on the board
Figure out where to obtain the Linux kernel tree ported to that SoC
Obtain and compile the Linux kernel, enabling the /proc/config.gz option
Install modules, register the newly-compiled kernel with the bootloader, and reboot
| How to enable access to the kernel config file through /proc/config.gz? |
1,366,812,867,000 |
When cross-compiling a package, do you also cross-compile the dependencies, or do you just install the dependencies and then cross-compile the final package for my target embedded Linux device?
|
You need to cross-compile all the dependencies: every piece of code linked to the final binary (whether statically or dynamically) needs to be built for the target platform.
Depending on the platform (and distribution) you're building on, and the target platform, you might find your cross-dependencies are already available in your distribution.
| Do you cross-compile the dependencies of a package or just perform install? |
1,366,812,867,000 |
I tried to replace a line in a Makefile with sed -i -e 's|$(bindir)\/embossupdate|:|' Makefile, but I got sed: can't read Makefile: No such file or directory
FROM ubuntu:16.04
...
# EMBOSS (ftp://emboss.open-bio.org/pub/EMBOSS/)
ENV EMBOSS_VER 6.6.0
RUN apt-get install libhpdf-dev libpng12-dev libgd-dev -y
ADD EMBOSS-${EMBOSS_VER}.tar.gz /usr/local/
WORKDIR /usr/local/EMBOSS-${EMBOSS_VER}
RUN sed -i -e 's|$(bindir)\/embossupdate|:|' Makefile
RUN ./configure --enable-64 --with-thread --without-x
RUN make
RUN ldconfig
RUN make install
What did I do wrong with the sed command?
|
The Makefile is not created until you've run the configure script. Try placing the sed command after the invocation of configure.
I haven't checked if the sed edit is doing what it should or not, but the main issue is probably that the Makefile simply doesn't exist yet at that point in your script.
In general, I would avoid sed -i, as its semantics differ between GNU and BSD sed. It's safer to run sed ... file >tmpfile && mv tmpfile file.
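A reordered sketch of the relevant Dockerfile lines (untested; the sed pattern is carried over from the question, minus the unnecessary backslash and the -i flag):

```dockerfile
WORKDIR /usr/local/EMBOSS-${EMBOSS_VER}
# configure first, so that the Makefile exists before sed edits it
RUN ./configure --enable-64 --with-thread --without-x
# avoid sed -i: write to a temp file and move it into place
RUN sed -e 's|$(bindir)/embossupdate|:|' Makefile > Makefile.tmp && \
    mv Makefile.tmp Makefile
RUN make && make install && ldconfig
```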
| sed: can't read Makefile: No such file or directory |
1,366,812,867,000 |
The Linux kernel documentation page for building external modules (https://www.kernel.org/doc/Documentation/kbuild/modules.txt) says this:
=== 2. How to Build External Modules
To build external modules, you must have a prebuilt kernel available
that contains the configuration and header files used in the build.
Also, the kernel must have been built with modules enabled. If you are
using a distribution kernel, there will be a package for the kernel
you are running provided by your distribution.
An alternative is to use the "make" target "modules_prepare." This
will make sure the kernel contains the information required. The
target exists solely as a simple way to prepare a kernel source tree
for building external modules.
My questions are the following:
To build external modules, you must have a prebuilt kernel available
that contains the configuration and header files used in the build
By "prebuilt kernel", does it mean the compiled binary image (generally named vmlinux/vmlinuz)? Why exactly is the binary image needed? Shouldn't the configuration files, header files and compiler be enough?
To build external modules, you must have a prebuilt kernel available that
contains the configuration and header files used in the build.
If by prebuilt kernel it means the binary image, then what is the meaning of "contains the configuration and header files"? I can understand the source tree needing to "contain the configuration and header files", but in case of binary, these files are just used to generate instructions right? What is the meaning of "contain" then? By "prebuilt kernel" does it mean the entire source tree where the kernel was built?
Also, the kernel must have been built with modules enabled.
Are they referring to the "make modules" step here or is it something different?
If you are
using a distribution kernel, there will be a package for the kernel you
are running provided by your distribution.
I suppose they are referring to the kernel-devel package here, which provides the header and configuration files that were used in the kernel build process. Is that correct?
An alternative is to use the "make" target "modules_prepare." This will
make sure the kernel contains the information required.
What is the meaning of this? Does this mean that we don't need to have a built kernel binary in order to be able to build external modules if we do a "make modules_prepare" in the source directory?
|
ad 1. and 2. The kernel image is called vmlinux, that's right, but that's not what you actually need when you want to build external modules. It's the configuration and header files from this kernel that are needed.
ad 3. To build modules, internal or external, the kernel you want to build the module for must of course support loadable modules, so it has to be configured with modules enabled.
A kernel is configured by one of the configuration programs that help you create a .config file, either in the kernel source tree or in the $KBUILD_OUTPUT path for out-of-tree builds.
ad 4. Where you find such packages, or how they are named, depends on your distribution, but I think it is often called kernel-devel. I don't actually know, because I have used my own kernel tree for years.
ad 5. Yes, you actually don't need the kernel binary to compile an external module, but you left out the note below:
NOTE: "modules_prepare" will not build Module.symvers even if
CONFIG_MODVERSIONS is set; therefore, a full kernel build needs to be
executed to make module versioning work.
Most kernels use CONFIG_MODVERSIONS, I think. You can see this in your .config file with
$ grep MODVERSIONS .config
CONFIG_MODVERSIONS=y
This means your built module will only work for the exact kernel version and configuration you built it against; you can build it anywhere, but you can only load and run it under that kernel.
That's why you can build an external module for a distribution kernel without the full kernel source tree, if you install the kernel configuration and header files, that distribution kernel was built with.
Actually, most times you just want to build an external module for the kernel you run your system with. If you built the kernel yourself, from the kernel source tree, you will already have a kernel configuration and header files, that match that kernel.
If you run a distribution kernel you have to install that files from the distribution.
| Some questions regarding linux kernel external module build process |
1,416,896,969,000 |
First a bit of context for those who don't know gradle. It's basically like make except that you don't have to have gradle installed on your computer. It ships with projects as a file called gradlew. So for instance a gradle project could look like:
.
├── gradlew
└── src
└── main
└── java
└── com
└── foo
└── bar
├── Bar.java
├── Baz.java
├── Foo.java
└── Qux.java
And from the root directory I can run commands like ./gradlew build or ./gradlew test to build/test my code.
Now vim. First I :set autochdir in my .vimrc. Second, my current buffer is Foo.java.
I want to run :make, which would trigger ../../../../../../gradlew. How can I set makeprg such that no matter what my :pwd is, it'll call gradlew? (I guess that could be achieved using dirname in a loop, but I'm not sure that's the most efficient/cleanest way to do it.)
Thanks.
|
For those looking to do the same thing (or similar things), it's doable by creating a new compiler plugin that uses findfile, and reusing errorformat from another compiler with slight modifications. The end result looks like:
let s:gradlew = escape(findfile('gradlew', '.;') . " -b " . findfile('build.gradle', '.;'), ' \')
if exists("current_compiler")
finish
endif
if exists(":CompilerSet") != 2 " older Vim always used :setlocal
command -nargs=* CompilerSet setlocal <args>
endif
let current_compiler = s:gradlew
execute "CompilerSet makeprg=" . s:gradlew
" copied from javac.vim + added the :compileJava bits
CompilerSet errorformat=%E:compileJava%f:%l:\ %m,%E%f:%l:\ %m,%-Z%p^,%-C%.%#,%-G%.%#
| Set makeprg to gradlew |
1,416,896,969,000 |
I'm trying to build omniORB 4.1.6 under Arch Linux. When I type make, here is the message:
../../../../../src/tool/omniidl/cxx/idlpython.cc:188:26: fatal error: python3.3/Python.h: No such file or directory
# include PYTHON_INCLUDE
I'm sure both python3 and python2 were installed, and I remember that the last time I was trying to do the same thing, under Linux Mint, I hit the same problem. That time, I used this command to solve it:
sudo apt-get install python-dev
However, it seems Arch doesn't separate python-dev with python. I checked my /usr and found Python.h under /usr/include/python3.3m, so what should I do now?
|
Normally running
./configure
before running make should set up things correctly, but in this case that seems not to have happened.
Python 3.3.x puts its header files in .../include/python3.3m, whereas 2.7.x uses .../include/python2.7 (without any suffix); maybe omniORB is not (yet) aware of that m suffix.
You can make a link from python3.3m to python3.3 using:
cd /usr/include
ln -s python3.3m python3.3
and retry the build process (this assumes python3.3 was configured using --prefix=/usr; adapt the cd as necessary).
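A portable way to find the header directory, so you can point a build at it explicitly instead of symlinking, is to ask Python itself (a sketch; the sysconfig module exists in both Python 2.7 and 3.x):

```shell
# Prints the interpreter's header directory, something like
# /usr/include/python3.3m on the system in question.
python3 -c 'import sysconfig; print(sysconfig.get_paths()["include"])'
```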
| Python.h: No such file or directory |
1,416,896,969,000 |
I'm trying to compile a Linux Kernel to run light and paravirtualized on XenServer 5.6 fp1.
I'm using the guide given here: http://www.mad-hacking.net/documentation/linux/deployment/xen/pv-guest-basics.xml
But I'm stumped when I reached the option CONFIG_COMPAT_VDSO.
Where is it exactly in make menuconfig? The site indicated that the options is in the Processor type and features group, but I don't see it:
[*] Tickless System (Dynamic Ticks)
[*] High Resolution Timer Support
[*] Symmetric multi-processing support
[ ] Support for extended (non-PC) x86 platforms
[ ] Single-depth WCHAN output
[*] Paravirtualized guest support --->
[*] Disable Bootmem code (NEW)
[ ] Memtest (NEW)
Processor family (Core 2/newer Xeon) --->
(2) Maximum number of CPUs
[ ] SMT (Hyperthreading) scheduler support
[ ] Multi-core scheduler support
Preemption Model (No Forced Preemption (Server)) --->
[ ] Reroute for broken boot IRQs
[ ] Machine Check / overheating reporting
< > Dell laptop support (NEW)
< > /dev/cpu/microcode - microcode support
<M> /dev/cpu/*/msr - Model-specific register support
<M> /dev/cpu/*/cpuid - CPU information support
[ ] Numa Memory Allocation and Scheduler Support
Memory model (Sparse Memory) --->
[*] Sparse Memory virtual memmap (NEW)
[*] Allow for memory hot-add
[*] Allow for memory hot remove
[ ] Allow for memory compaction
[*] Page migration
[*] Enable KSM for page merging
(65536) Low address space to protect from user allocation (NEW)
[ ] Check for low memory corruption
[ ] Reserve low 64K of RAM on AMI/Phoenix BIOSen
-*- MTRR (Memory Type Range Register) support
[ ] MTRR cleanup support
[*] Enable seccomp to safely compute untrusted bytecode (NEW)
[*] Enable -fstack-protector buffer overflow detection (EXPERIMENTAL)
Timer frequency (100 HZ) --->
[ ] kexec system call
[ ] kernel crash dumps
[*] Build a relocatable kernel (NEW)
-*- Support for hot-pluggable CPUs
[ ] Built-in kernel command line (NEW)
FYI, I'm configuring Gentoo's Kernel v2.6.36-hardened-r9
|
As you had already said, it IS under "Processor Types and Features".
You are compiling Gentoo's hardened kernel source, so the code would have undergone many patches.
A quick search in Google returned this: Gentoo kernel VDSO. It looks like Gentoo has it disabled even several versions before.
Why don't you download directly from kernel.org?
| Where is CONFIG_COMPAT_VDSO in make menuconfig? |
1,416,896,969,000 |
For Windows I used to download either 32 bit binaries from ASF site or 64 bit binaries from Apache lounge site (for no particular reason). So - that was the way I new what version I have.
I've switched to Ubuntu (for educational purposes) and got used to compiling from source.
When I compile Apache server from source - what binaries do I get? 32? 64?
|
Compiling is the process of building binaries from source. The configure script and makefile will select what's appropriate for your system. (64-bit executable for a 64-bit system, 32-bit for 32-bit.)
And if you have a 64-bit system, there is probably some potential for improved performance in using 64-bit binaries. For some things it could be a huge boost; I'm guessing that for Apache it's not quite so big a deal, but I'm no expert there. It certainly won't hurt, though, to build the appropriate binaries, and I don't see why you'd ever bother to mess with the build process to get 32-bit binaries on a 64-bit system.
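You can always check what a build actually produced with file(1); shown here on /bin/sh as a stand-in for whatever httpd binary your build installs:

```shell
# file -L follows symlinks and reports the word size of an ELF binary.
file -L /bin/sh
# On a 64-bit system this typically mentions "ELF 64-bit".
```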
| Apache server 32 binaries vs 64 binaries? what's the difference |
1,416,896,969,000 |
The overflow_stack variable is used in the kernel_ventry macro in arch/arm64/kernel/entry.S
/* Switch to the overflow stack */
adr_this_cpu sp, overflow_stack + OVERFLOW_STACK_SIZE, x0
It seems to me to be declared in arch/arm64/include/asm/stacktrace.h
DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
However, this header file is not included in entry.S, or in any other meaningful header that I can find. Is there another way it is being included?
|
No, there is no other way; overflow_stack isn’t declared or defined in any header included by entry.S. But that’s not an error as far as the assembler is concerned; overflow_stack doesn’t have a local prefix, so it ends up as an undefined symbol in arch/arm64/kernel/entry.o, which is resolved when the kernel is linked.
Run
make arch/arm64/kernel/entry.o
(or make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 arch/arm64/kernel/entry.o on an architecture other than arm64); then
objdump -t arch/arm64/kernel/entry.o
will show (among others)
0000000000000000 *UND* 0000000000000000 overflow_stack
The relocation tables include a number of entries for overflow_stack+0x0000000000001000 (overflow_stack + OVERFLOW_STACK_SIZE); run objdump -r arch/arm64/kernel/entry.o to see them.
| How is overflow_stack variable included in entry.S in arm64 architecture? |
1,416,896,969,000 |
Full error message:
arm-linux-gnueabihf-g++: error trying to exec 'cc1plus': execvp: No such file or directory
So I have got this error message while trying to build a C++ project on my machine shortly after a home directory deletion and recovery on Ubuntu 18.04. I'm doubtful that this is because of something in my environment since I built my program not too long ago with the same settings. After researching the error, I found that pretty much everyone says that it's because I have either not installed gcc/g++, incorrectly installed gcc/g++, or have a version mismatch between gcc/g++.
However this appears to not be my problem:
jayz@joshz:/usr$ gcc --version
gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
jayz@joshz:/usr$ g++ --version
g++ (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I have also tried reinstalling both gcc and g++ but still the same error appears.
I have also tried:
sudo apt-get update
sudo apt-get install --reinstall build-essential
I have in fact found the cc1plus file on my system in multiple places:
jayz@joshz:/usr$ locate cc1plus
/home/jayz/raspi/sysroot/usr/lib/gcc/arm-linux-gnueabihf/4.9/cc1plus
/home/jayz/raspi/tools/arm-bcm2708/arm-bcm2708-linux-gnueabi/libexec/gcc/arm-bcm2708-linux-gnueabi/4.7.1/cc1plus
/home/jayz/raspi/tools/arm-bcm2708/arm-bcm2708hardfp-linux-gnueabi/libexec/gcc/arm-bcm2708hardfp-linux-gnueabi/4.7.1/cc1plus
/home/jayz/raspi/tools/arm-bcm2708/arm-rpi-4.9.3-linux-gnueabihf/libexec/gcc/arm-linux-gnueabihf/4.9.3/cc1plus
/home/jayz/raspi/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/libexec/gcc/arm-linux-gnueabihf/4.8.3/cc1plus
/home/jayz/raspi/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/libexec/gcc/arm-linux-gnueabihf/4.8.3/cc1plus
/usr/lib/gcc/x86_64-linux-gnu/7/cc1plus
so perhaps it's a linker issue?
One thing that I have noticed is that I have no /usr/local/libexec or /usr/libexec directories but I am not sure if this is a problem or what it might imply.
|
I fixed the problem!
To fix the cc1plus error:
The first issue was that since I am cross-compiling, I needed to first install OpenSSL on my Raspberry Pi and then copy that library back over to my PC.
Then I had to get a fresh copy of my sysroot folder (which was for some reason corrupted), and place my OpenSSL inside it.
Then additional errors came up saying "cannot find crt1.o, crti.o, crtn.o, and libdl.so.2: No such file or directory":
To fix these, I had to create symbolic links in my sysroot folder from the locations the compiler expected to where these files actually were.
And now my project compiles!
| G++/GCC installed but still: error trying to exec 'cc1plus': execvp: No such file or directory |
1,416,896,969,000 |
I'm trying to build OpenSSH 7.9p1 from source, but I can't find a way to delete (or not include), for instance, ssh-agent, ssh-keygen, scp, sftp, sshd, etc. – of course, assuming none of those are required for the ssh command to work.
Ideally I would only need the client: the ssh command, but again, I'm not sure what other pieces are required. I think most of what's build/installed is used by the server, not by the client.
|
Yes, you can do this with the default build system provided.
If you look at the Makefile that is generated by running the provided ./configure script, you should see that the default (first) target is
TARGETS=ssh$(EXEEXT) sshd$(EXEEXT) ssh-add$(EXEEXT) ssh-keygen$(EXEEXT) ssh-keyscan${EXEEXT} ssh-keysign${EXEEXT} ssh-pkcs11-helper$(EXEEXT) ssh-agent$(EXEEXT) scp$(EXEEXT) sftp-server$(EXEEXT) sftp$(EXEEXT)
(for Unix-like systems, $(EXEEXT) should be empty). Each has its own separate build target / rule so for example you can do:
make ssh
to make only the client.
Ex.
$ make ssh
<snip>
$ find . -type f -executable -newermt yesterday
./config.status
./ssh
$ ./ssh -V
OpenSSH_7.9p1, OpenSSL 1.1.0g 2 Nov 2017
| Build OpenSSH client only |
1,416,896,969,000 |
tl;dr: I would like to generally understand how the world of Linux / embedded Linux works. What do I need to do to take the Linux mainline and compile/deploy it on a board with different processors and peripherals, from scratch?
How I currently See It Working:
Steps to get Linux running on arbitrary board:
Get sources for uBoot (for embedded) or GRUB (desktop/x86 SOM)
Modify uBoot or GRUB for specific system, write code to init specific chip and get required interfaces for memory and console up and running
Modify uBoot/GRUB config.txt to configure code written above
compile these and deploy to board, verify that bootloader console comes up and can interact with it
Get kernel mainline sources
"make config" to select drivers and modules that will be available (At this point these selections will change the source - wherever these settings are stored will no longer match a git clone from the mainline) (Track this .config file in source control for future reference)
Get tools such as Busybox or desktop alternative? Install in source directories
Get ucLibc or other libraries and install in source directories
Compile kernel source using cross compiler toolchain for specific chip
Create device tree files .dtb for board (both embedded/desktop? or desktop does not use?) This connects drivers to physical pins
Use Uboot/GRUB and TFTP/serial console or memory card etc to load compiled kernel image.
Boot up and verify shell access through serial/SSH etc depending on drivers and device tree config
Modify uEnv.txt (embedded) or mysteryfile.txt (desktop) for board specific configurations? This is essentially a script that blocks or adds kernel startup steps? What is the desktop equivalent?
apt-get desired packages and drivers
write drivers and application code and test (manually loading drivers)
Add device tree files to account for hardware and drivers implemented above (these are separate from the intial BSP one created)
To include these in the kernel image, build the kernel and create the file structure with all of these sources and config file mods in the folder structure (additions/mods to the Linux mainline)
Could have a separate folder for the Linux mainline and the mods, copying the mods directly, overwriting/adding files to the mainline in a third staging folder. This would allow all additions and non-mainline mods to be source controlled separately.
If you can get a base system that you can SSH into, and at this point you have drivers for all the common components (Video, USB, mouse etc) then you can pretty much do anything at this point (install X11 server, LXDE, networking etc)? Which drivers need to be handled by the bootloader/bios and which ones are purely in the kernel domain?
There are Kconfig files for configuring the kernel build. This makes sense and the kernel module development docs that I have seen seem to describe this well.
There are also files like uEnv.txt and config.txt that handle the run time configuration and which devices should be loaded. There are also device tree blobs which also determine which devices should be loaded?
How do the magic strings in these files tie into the kernel, are these modifications done to the mainline for a specific board? Something has to read these to determine if HDMI should be enabled or not, and this can't be the exact same code as what is on the desktop version of Linux.
Once drivers are in the mainline are they still developed independently from the mainline? For example I have been using a couple of drivers but there are notes they are now included in the mainline, does this mean that it is no longer possible to download directly on its own? The steps I have followed downloaded the headers for my board, the source and then compiled it and installed it. If it is in the mainline do I need to pull it from there now instead?
Background and Specific Thoughts
I am an EE and have experience with Microcontrollers and Windows development, but do not have much Linux experience. The framing of my question is "If I started off with this arbitrary (with linux compiler available) processor, and these peripherals how do I (and what are my options) for building a linux release"
Bootloader:
I have been able to find RPI2 and BBB (Beaglebone Black) specific documentation and how-to's but when you get into more advanced topics like the bootloader there are only a few crumbs to vaguely describe what is going on. For example the RPI2 has a 3 stage bootloader (of which from reading it does not sound like it is totally uBoot based) and the BBB has a more "traditional" uBoot based bootloader. Now the new BBx15 has jumpers where you can select where you want to boot from.
The desktop systems use GRUB (IIRC) and embedded systems typically use uBoot. I have read that the RPI uses the GPU during boot and reads the first stage bootloader off of a separate ROM. And that is all the information available. If you wanted to spin your own version of the board (for discussions sake, this is not really practical) then in addition to uBoot what is going on? Doesn't uBoot for the BBx15 have extra modifications to allow for the jumper boot selection?
Does Linux know anything about the staging of booting or is it oblivious to this once it is running? The BBB uses uBoot to load the image off the eMMC into RAM, the RPI2 uses the 3 stage bootloader. I am guessing that the BBB uses the ARM processor to do this but the RPI2 uses the GPU. I thought on power up that the ARM processor starts executing, what would they have to modify to stage these load procedure? Does the GPU hold the ARM in reset until it has completed its ROM code? Since the GPU is part of the boot procedure does that mean the code it executes is taken out of the uBoot code, that other systems without this GPU would have to then run in the uBoot code? This whole procedure implies to me that if you modify the second or third stage bootloader that you could run Linux entirely off the GPU alone (if the kernel was compiled with the GPU toolchain)?
Is the third stage bootloader and config.txt actually just uBoot?
Regarding the headers for the board in use. Are these just the headers from the mainline with the drivers that have been overlayed included or is there something more to this. The "headers" are just the mainline headers if that is what you have started running with?
For embedded microcontroller development I am used to having a HAL layer. The HAL has function stubs where you setup the peripherals and then point the drivers to those resources. The board support package typically have these HAL stubs already coded for the board in question. I am sure there has to be some parallels here to Linux development but I can't quite see where these divisions are.
There are packages such as Buildroot and Yocto. Are these just the Linux mainline with an interface to automate selecting the ARM processor and drivers to include?
|
From my small experience with the router hardware I played with, I can say that this is simple, small hardware designed to do just one thing.
At hardware level, it's simple:
U-Boot there is not only a bootloader, but a BIOS in PC terms: it also initializes all the hardware. At start, the CPU executes it directly (from flash, for example) and it decides what to do next, but usually it relocates itself into memory. Then it does what it needs to: it reads its configuration from flash, loads the kernel image at a specified address and transfers control there. Nothing special, but it's important to know.
U-Boot (embedded on router hardware) does not access the root filesystem at all. Instead, there is a dedicated flash region for the whole kernel image (usually compressed). So, at least on routers, there is no /boot/vmlinuz file.
The RPi indeed uses its own, proprietary boot sequence. They have closed-source binaries which the user puts on the SD card. The first-stage init code is hardcoded into the CPU or somewhere else on the board. The ARM core is started after the GPU, and that whole early code runs on the GPU. You may already have found more about that, but if not: https://raspberrypi.stackexchange.com/questions/10489/how-does-raspberry-pi-boot
So, since I have had some fun with routers and rebuilt them into my own small servers completely from source code, I can list my own build sequence:
Obtain and build u-boot for platform
Build Linux kernel
Build userspace (kernel and userspace are usually separate, even on flash)
Write that U-Boot to the flash chip with a programmer
Solder the flash chip onto the board
Connect to board via UART
Boot it, verify u-boot inits all hardware well
tftp kernel, write to flash inside board
tftp rootfs, write to flash inside board
reset, verify all works ok
fine-tune rootfs: set permissions, preload default config via tftp
dump whole image, flash it on many devices
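For steps 8–10, the U-Boot console session looks something like the sketch below; the load address and flash offsets are made-up examples and are entirely board-specific:

```
=> tftpboot 0x81000000 uImage               # fetch kernel image over TFTP into RAM
=> erase 0x9f020000 +0x100000               # erase the kernel partition in flash
=> cp.b 0x81000000 0x9f020000 ${filesize}   # copy the image from RAM into flash
=> bootm 0x9f020000                         # boot the kernel from flash
```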
The Linux kernel then may or may not support your board - please verify that. You will not be able to just take the latest kernel and build it, for example, for your router. The same goes for the RPi: they have their own kernel tree. That happens often in the embedded world; only a few (and usually generic) platforms are supported by the Linux kernel directly. Be prepared for that.
As for userspace, you can select whatever you need, balancing what you want against how much space is left. Usually in embedded, you compress everything, strip unneeded things, or both.
I hope this sheds some light on things. If you have further questions - welcome to the comments! :-)
| Linux/Embedded Linux - Understanding the Kernel and additional BSP specific components [closed] |
1,416,896,969,000 |
I am trying to build Bash 4.2 as an RPM package for use on Enterprise Linux 5 systems, which come by default with 3.2.25. This works successfully, however, I want both versions to co-exist on the system, to avoid conflicts with the system package, and to allow system/other scripts to continue to use bash3 which they are compatible with.
My plan is as follows:
Rename the package 'bash4' and do not conflict with 'bash' or provide 'sh'
Configure bash to build with the binary name 'bash4' and change the path of any docs or support files accordingly
In theory this is simple, and Vim offers a binary prefix/suffix in its configure scripts; however, bash doesn't appear to have this feature. The closest I have found is automake's EXEEXT, which provides support for executable extensions (such as .exe on Windows), but this isn't really designed for what I want to do, nor does it solve the doc problem.
|
Though the bash autoconf version (2.63) is a little old (Sept 2008), it supports the --program-transform-name and --program-suffix features. Sadly the bash build process does not use these features as detailed by the documentation, nor does it use parameters to allow build-time processing of the man pages.
Since the number of files and changes is small, I recommend a semi-manual approach, i.e. write a small script to make the changes pre-installation. You can optionally use installwatch to make sure you catch everything during the install, but bash really is quite minimal.
(FWIW, I had a quick look at the FreeBSD bash ports, and Debian bash patches, no sign of a suitable fix.)
While generally being an interesting way to break builds, you can abuse EXEEXT here:
ac_cv_exeext=42 ./configure [...]
make
./bash42 -c 'echo $BASH_VERSION'
4.2.42(1)-release
since all it saved you was a rename, I really don't recommend it ;-)
There's a little more to be gained from:
./configure [...]
make -e Program=bash42
as that also reflects your change within the generated bashbug script (though it does not rename it).
| Build bash (or alternate linux package) with custom binary/doc name |
1,416,896,969,000 |
I was installing rfc5766-turn-server.
But it fails to launch with an error:
error while loading shared libraries: libevent_core-2.0.so.5: cannot open shared object file: No such file or directory
Here's a copy-paste of how I did the installation:
$ cd /var/tmp;
wget https://github.com/downloads/libevent/libevent/libevent-2.0.21-stable.tar.gz; tar xvfz libevent-2.0.21-stable.tar.gz; cd libevent-2.0.21-stable; ./configure; make; make install;
wget http://rfc5766-turn-server.googlecode.com/files/turnserver-1.8.6.3.tar.gz ; tar xvfz turnserver-1.8.6.3.tar.gz; cd turnserver-1.8.6.3; ./configure; make; make install;
/var/tmp/turnserver-1.8.6.3/bin/turnserver;
I tried this, but it didn't help (same error):
$ ln -s /usr/local/lib/libevent-2.0.so.5 /usr/lib64/libevent-2.0.so.5;
/var/tmp/turnserver-1.8.6.3/bin/turnserver ;
EDIT: without any changes, if I run it as below, it does run; but when I test with a client, it does not show any logs indicating that the TURN server is being hit or reached by the client:
$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib/;
PATH="bin:../bin:../../bin:${PATH}" turnserver -L 176.34.x.x -a -b /usr/local/etc/turnuserdb.conf -f -r 176.34.x.x
1371111272: RFC 5389/5766/5780/6062/6156 STUN/TURN Server, version Citrix-1.8.6.3 'Black Dow'
1371111272: Config file found: /usr/local/etc/turnserver.conf
1371111272: Listener address to use: 176.34.x.x
1371111272: Config file found: /usr/local/etc/turnserver.conf
1371111272: WARNING: cannot find certificate file: turn_server_cert.pem (1)
1371111272: WARNING: cannot start TLS and DTLS listeners because certificate file is not set properly
1371111272: WARNING: cannot find private key file: turn_server_pkey.pem (1)
1371111272: WARNING: cannot start TLS and DTLS listeners because private key file is not set properly
1371111272: Relay address to use: 176.34.x.x
1371111272: IO method (listener thread): epoll
1371111272: WARNING: I cannot start alternative services of RFC 5780 because only one IP address is provided
1371111272: IO method: epoll
1371111272: IPv4. UDP listener opened on : 0.0.0.0:0
1371111272: IPv4. TCP listener opened on : 0.0.0.0:39227
1371111272: IO method (auth thread): epoll
1371111272: IO method (relay thread): epoll
|
First, the obvious question: is that library installed?
Also, is it installed for the right architecture? (E.g. a 32-bit executable requires a 32-bit library, a 64-bit executable requires a 64-bit library.)
If you just added a library to a directory in the system library path, you'll need to run ldconfig as root. There's a cache of installed libraries, and ldconfig rebuilds that cache. If a library is present in a directory but not in the cache, it's not going to be used.
I see you added the library to /usr/local/lib. Most distributions include it in the default library path, but Red Hat doesn't. Add it to /etc/ld.so.conf then run ldconfig.
Run ldd /path/to/executable to see where an executable is finding its libraries. When a library is not found, strace /path/to/executable will tell you where the program is looking for it.
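On Red Hat-style systems the whole fix is two root commands plus a check; the snippet below only runs the harmless verification step (using /bin/sh as a stand-in for your own binary):

```shell
# As root, register /usr/local/lib and rebuild the loader cache:
#   echo /usr/local/lib >> /etc/ld.so.conf
#   ldconfig
# Then check which libraries an executable resolves (any binary works):
ldd /bin/sh
```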
| Error while loading shared libraries after installing a program |
1,416,896,969,000 |
I'm creating a deb-file and enumerating the files and paths I need to have in the package using the install file. It looks like
dir1/* path1
dir2/* path2
...
But in a result deb-file there are no hidden files from dir1 and dir2. It looks like * doesn't match hidden files.
How could I match them apart from specifying each one explicitly?
|
The globs used by dh_install are perl globs, which are modeled after csh globs. These do not match hidden files by default. In order to get all files, including hidden files, you will need to use two globs. Here is an example:
dir1/.* path1
dir1/* path1
Update: It has been pointed out in comments to this answer that .* matches . and ... Since perl's globbing doesn't offer anything to avoid this situation, the dotfiles will need to be added explicitly.
dir1/.htaccess path1
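The . and .. behaviour is easy to demonstrate with plain shell globs, which share the same csh heritage as Perl's:

```shell
mkdir -p /tmp/globdemo && cd /tmp/globdemo
touch .hidden visible
echo *      # -> visible            (hidden files not matched)
echo .*     # -> . .. .hidden       (why a bare .* is unsafe in an install file)
```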
| Creating deb file: hidden files specification by `install` file |
1,416,896,969,000 |
I am trying to do a build of rpm from source. I got through ./configure and ran through a good chunk of make. Unfortunately, I keep getting stopped by undefined references to bzerror, bzwrite, bzflush and others. Looking around online, I see these functions are part of the bzip2 package. I've installed the development libraries, but I am still getting this message. Can anyone assist me in resolving these dependencies?
make[2]: Entering directory `/mnt/fedRoot/rpm-4.6.1/lib'
make all-am
make[3]: Entering directory `/mnt/fedRoot/rpm-4.6.1/lib'
/bin/sh ../libtool --tag=CC --mode=link gcc -std=gnu99 -g -O2 -fPIC -DPIC -D_REENTRANT -Wall -Wpointer-arith -Wmissing-prototypes -Wno-char-subscripts -fno-strict-aliasing -fstack-protector -o rpmdb_archive ../db3/db_archive.o ../db3/util_sig.o librpm.la -lrt -lpthread
gcc -std=gnu99 -g -O2 -fPIC -DPIC -D_REENTRANT -Wall -Wpointer-arith -Wmissing-prototypes -Wno-char-subscripts -fno-strict-aliasing -fstack-protector -o .libs/rpmdb_archive ../db3/db_archive.o ../db3/util_sig.o ./.libs/librpm.so /mnt/fedRoot/rpm-4.6.1/rpmio/.libs/librpmio.so -lmagic -lelf -llua -lm -lnss3 -lpopt -lrt -lpthread -Wl,--rpath -Wl,/usr/local/lib
/mnt/fedRoot/rpm-4.6.1/rpmio/.libs/librpmio.so: undefined reference to `bzerror'
/mnt/fedRoot/rpm-4.6.1/rpmio/.libs/librpmio.so: undefined reference to `bzwrite'
/mnt/fedRoot/rpm-4.6.1/rpmio/.libs/librpmio.so: undefined reference to `bzflush'
/mnt/fedRoot/rpm-4.6.1/rpmio/.libs/librpmio.so: undefined reference to `bzdopen'
/mnt/fedRoot/rpm-4.6.1/rpmio/.libs/librpmio.so: undefined reference to `bzread'
/mnt/fedRoot/rpm-4.6.1/rpmio/.libs/librpmio.so: undefined reference to `bzclose'
/mnt/fedRoot/rpm-4.6.1/rpmio/.libs/librpmio.so: undefined reference to `bzopen'
collect2: ld returned 1 exit status
make[3]: *** [rpmdb_archive] Error 1
make[3]: Leaving directory `/mnt/fedRoot/rpm-4.6.1/lib'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/mnt/fedRoot/rpm-4.6.1/lib'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/mnt/fedRoot/rpm-4.6.1'
make: *** [all] Error 2
|
The libraries would have to be picked up by configure. If you ran configure, then installed the bzip2 development files, then re-ran configure, it may still have picked wrong information from its cache. Run make distclean, then ./configure again.
| unresolved dependencies of bz* files for rpm make from source |
1,416,896,969,000 |
Let's say that I have a Makefile that has two “main” targets: foo.o and clean. The former one has a recipe to create the foo.o file. The latter one removes all the temporary files.
To remove the need to specify the dependencies of foo.o manually, I have a target foo.d, which is a valid makefile specifying the dependencies in the format foo.o foo.d : dep1 dep2 depn. This dependency file is included in the makefile.
The makefile looks like this:
foo.o: foo.c
cc -c -o $@ $<
foo.d: foo.c
sh deps.sh $< > $@
include foo.d
.PHONY: clean
clean:
rm …
When I want to make foo.o, everything works correctly: foo.d gets (re)made, it is included and foo.o gets made. The problem is that when I want to make the clean target, foo.d gets included, or even made.
How can I prevent make including the foo.d when clean target is being made? (Or, how to include that only when foo.o is made?)
The solution can use features of GNU Make.
|
The solution is quite simple, but results in somewhat unreadable Makefile code.
First, we must know that the include directive tries to include the file and fails if it does not exist. There is also -include (or sinclude), which simply does not include the file if it does not exist. But that alone is not what we want, because make still tries to remake the included makefile, if possible.
We can avoid that in two ways: either by changing the include directive's parameter in such a way that make thinks it is not able to make the included file (e.g. relative vs. absolute path, etc.), or by omitting the parameter when the file does not exist. That can be done in multiple ways:
-include $(wildcard foo.d*)
but that has one problem: it matches also other files. So we can write this:
-include $(filter foo.d,$(wildcard foo.d*))
or even this:
-include $(filter foo.d,$(wildcard *))
But this creates another problem: foo.d does not get made. This is resolved either by adding it as another prerequisite of foo.o:
foo.o: foo.c foo.d
or adding it as a command:
foo.o: foo.c
$(MAKE) foo.d
cc …
or directly, without invoking make:
foo.o: foo.c
sh script.sh …
cc …
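Putting the pieces together, one possible version of the whole makefile (a sketch — pick whichever variant of each trick you prefer) is:

```makefile
foo.o: foo.c foo.d
	cc -c -o $@ $<

foo.d: foo.c
	sh deps.sh $< > $@

# Include foo.d only when it already exists; 'make clean' on a fresh
# tree then never triggers its generation.
-include $(filter foo.d,$(wildcard foo.d*))

.PHONY: clean
clean:
	rm -f foo.o foo.d
```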
| Remake included makefile only when needed |
1,416,896,969,000 |
I was building font-manager package from AUR on my Arch system. It is throwing a warning while the process :
==> WARNING: Package contains reference to $srcdir
usr/lib/font-manager/libfontmanager.so.0.7.9
Should I worry about this warning ? Is it harmful to my system anyway ?
|
I have seen that warning on several packages that I have built in Arch. I always ignore it and haven't suffered any negative consequences yet.
| Should I worry about 'WARNING: Package contains reference to $srcdir'? |
1,416,896,969,000 |
I have the file, myapp, that is a .pyc file, and I want to make it executable. Currently, I must manually call Python to execute the program, as follows.
python /usr/bin/myapp "hello world!"
How can I permanently configure the system to execute myapp without manually invoking python, as follows?
myapp "hello world!"
I need to do this because the shebang, #!/usr/bin/env python, does not work in byte-compiled .pyc files without using a separate .sh wrapper script.
|
In Debian-based distributions, the package binfmt-support provides the functionality. Look in the proc filesystem for the formats that were configured when the package was installed.
ls /proc/sys/fs/binfmt_misc
Make sure to also give the .pyc file/s permission to execute.
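As an illustration, an extension-based rule for .pyc files could be registered like this — the rule name and interpreter path are assumptions, and writing to the register file requires root:

```shell
# Rule format is :name:type:offset:magic:mask:interpreter:flags
# 'E' means match on file extension rather than magic bytes.
rule=':pyc:E::pyc::/usr/bin/python:'
printf '%s\n' "$rule"
# As root:  echo "$rule" > /proc/sys/fs/binfmt_misc/register
# Then:     chmod +x /usr/bin/myapp
```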
| How to make linux shell interpret a .pyc file with python in /usr/bin? |
1,416,896,969,000 |
Under Cross-Compiler-Specific Options, it says:
The default value, in case --with-sysroot is not given an argument, is
${gcc_tooldir}/sys-root.
but it appears that gcc_tooldir is not defined. Is this a nickname for
something else, and also where is it "normally"?
|
but it appears that gcc_tooldir is not defined. Is this a nickname for something else, and also where is it "normally"?
gcc_tooldir is a make variable. You should find that within the scope of a GCC build, it has a value that is functionally equivalent to that of the $(tooldir) make variable, but somewhat different in form. You are not meant to set it by hand, though you may of course use the --with-sysroot configure option to choose your own directory for the target tools. Per the GCC build documentation:
When installing cross-compilers, GCC’s executables are not only installed into bindir, that is, exec-prefix/bin, but additionally into exec-prefix/target-alias/bin, if that directory exists. Typically, such tooldirs hold target-specific binutils, including assembler and linker.
(Emphasis in the original.)
The standard tooldir name is thus something of the form '/usr/x86_64-w64-mingw32', to lift one from the Glade example you presented in comments.
| Where is "gcc_tooldir" |
1,416,896,969,000 |
After the upgrade from gcc-5.x to gcc-6.4 all Gentoo Linux users were advised to run
emerge -e @world
which will recompile all packages on a system and takes on my i7 with 16 GB around 30 h in theory.
This will work in some simple situations, but in many cases the task stops after say 80 of 2000 packages due to a problem at some point. The user tries to fix it and starts from zero again. I tried
emerge --resume --skipfirst
and --keep-going, but this does not work if the problem was not caused by the first package.
A second problem is, that all packages, which are listed in packages.provided must be ignored. The packages.provided is important for users, who need a recent TeXlive for example and install via tlmgr.
My idea was to start with a list of packages which were not compiled after 2017-12-01, which is the day, I start to recompile.
genlop -ln --date 1999-01-01 --date 2017-12-01 | perl -ne '/>>> (.*)/ and print " =$1";'
Ideally the system would compile all packages which raise no error. On the next day the user can fix a problem and compiles the fixed package one after the other.
How can I recompile all packages, which were really installed from the tree (excluding the packages.provided) without starting at point zero after each problem?
edit: This is obviously no duplicate of List all packages on a Gentoo system, which were not recompiled since a date, however its results could be help for the solution of this question.
|
Here's one way to do it:
Save your start time before you begin
date +%s >emergestart && emerge -e --keep-going @world
Then when the emerge inevitably stops you can resume with this script (after fixing any problematic builds)
#!/bin/bash
starttime=`cat emergestart`
eix '-I*' --format '<installedversions:DATESORT>' | cut -f1,3 >tmplist
echo $starttime >>tmplist
sort -n tmplist | sed -e/$starttime/q | sed -e'/[0-9]*\t*/s///' | sort | comm -23 - <(sort omitlist) | comm -23 - <(sort /etc/portage/profile/package.provided) >buildlist
rm tmplist
emerge -a `cat buildlist` --keep-going
The script removes all packages listed in packages.provided from the list, as well as other packages you don't want to emerge (either because they are causing problems or they don't need re-emerging), which are read from a file called omitlist
Example omitlist:
sys-devel/gcc:5.4.0
sys-kernel/gentoo-sources:4.13.12
sys-kernel/gentoo-sources:4.14.2
app-cdr/cdrdao
media-gfx/kphotoalbum
virtual/libintl
virtual/libiconv
app-doc/abs-guide
app-doc/autobook
app-doc/jargon
You'll probably need to do several iterations of the resume script
| How to recompile everything efficiently on a Gentoo Linux system? |
1,416,896,969,000 |
Can I rerun my command to install the full KDE after a shutdown, and will it continue? I'm installing KDE and it takes a while now. I'm on my laptop and it's pretty late here. There are many packages to come.
PS the command:
# emerge --ask kde-base/kde-meta
|
If you cancel portage and later rerun the same command, the specified package will compile again. So if you e.g. start to run emerge www-client/firefox, cancel it, and rerun the command, merging www-client/firefox will start from scratch. Note that the dependency list is generated when starting emerge, so dependencies that were already merged in the first run are not reinstalled.
kde-base/kde-meta is not a monolithic ebuild for KDE, but a meta package that contains only dependencies on other ebuilds. So the only progress lost when rerunning portage after a reboot is the progress in the ebuild that was compiling when you canceled portage / shut down.
Most packages used for kde are quite small, so chances are that you are not losing much. Some packages (e.g. kde-base/kdelibs) take a long time though.
One side note: if you want a usable system fast, I recommend not starting with kde-base/kde-meta, but with kde-base/kde-base-meta. Once this is installed you can already use KDE and install the rest later. You do not lose anything when merging kde-base/kde-meta after this.
| Gentoo portage continue after shutdown |
1,416,896,969,000 |
So far my compilations have either succeeded or failed, but this time it just got stuck. I'm compiling gcc on a Linux Synology NAS. However, the compilation process has run for 3 days and I'm starting to think that it will never finish. From ps I get the following output:
27513 root 2536 S /opt/bin/bash -c r=`${PWDCMD-pwd}`; export r; \ s=`cd .; ${PWDCMD-pwd}`; export s; \ if test -f stage1-lean ; then \ echo Skipping rebuild of
27866 root 2468 S /opt/bin/bash -c build/genautomata ../.././gcc/config/rs6000/rs6000.md \ insn-conditions.md > tmp-automata.c
27867 root 432m D build/genautomata ../.././gcc/config/rs6000/rs6000.md insn-conditions.md
31539 root 2924 S grep build
The last write to tmp-automata.c was 2.5 days ago. The NAS has only 64 MB RAM so I expected long compile time but not at this level. The average CPU load for the build process is 5-10%. What could be wrong? How do I troubleshoot?
|
Well, it looks like the compile needs about 500 megs of memory, and since the system only has 64 megs, the system is thrashing: It is using swap, which works but is really, really slow.
Is there a reason you are compiling this on a system with so little memory? If it's an embedded system with a custom CPU, I would cross-compile on another system.
| Long gcc compile time |
1,416,896,969,000 |
I'm trying to set up namecoin, but I get the following error on running makefile.unix:
$ g++ -c -O2 -Wno-invalid-offsetof -Wformat -g -D__WXDEBUG__ -DNOPCH DFOURWAYSSE2 -DUSE_SSL -DUSE_UPNP=0 -o obj/nogui/net.o net.cpp
In file included from net.cpp:10:
/usr/include/miniupnpc/upnpcommands.h:11:30: error: portlistingparse.h: No such file or directory
/usr/include/miniupnpc/upnpcommands.h:13:28: error: miniupnpctypes.h: No such file or directory
In file included from net.cpp:10:
/usr/include/miniupnpc/upnpcommands.h:25: error: ‘UNSIGNED_INTEGER’ does not name a type
/usr/include/miniupnpc/upnpcommands.h:29: error: ‘UNSIGNED_INTEGER’ does not name a type
/usr/include/miniupnpc/upnpcommands.h:33: error: ‘UNSIGNED_INTEGER’ does not name a type
/usr/include/miniupnpc/upnpcommands.h:37: error: ‘UNSIGNED_INTEGER’ does not name a type
/usr/include/miniupnpc/miniupnpc.h: In function ‘void ThreadMapPort2(void*)’:
/usr/include/miniupnpc/miniupnpc.h:53: error: too few arguments to function ‘UPNPDev* upnpDiscover(int, const char*, const char*, int, int, int*)’
net.cpp:906: error: at this point in file
/usr/include/miniupnpc/upnpcommands.h:117: error: too few arguments to function ‘int UPNP_AddPortMapping(const char*, const char*, const char*, const char*, const char*, const char*, const char*, const char*, const char*)’
net.cpp:920: error: at this point in file
make: *** [obj/nogui/net.o] Error 1
I think the issue may be Berkeley DB but I don't know how to check that. Anybody have any idea?
|
Alex is correct in his comment, the latest miniupnpc is broken. However you can use the working version here: http://miniupnp.tuxfamily.org/files/download.php?file=miniupnpc-1.5.tar.gz and then namecoind will compile fine.
| What does this error mean when installing namecoin? |
1,416,896,969,000 |
I'm trying to compile a specific version of GEOS and its PHP bindings in the Travis CI environment; they're using Ubuntu.
Here is my install script:
sudo apt-get update
sudo apt-get remove 'libgeos.*'
sudo apt-get autoremove
wget https://github.com/libgeos/libgeos/archive/$VERSION.tar.gz
tar zxf $VERSION.tar.gz
cd libgeos-$VERSION
./autogen.sh
./configure
make
sudo make install
cd ..
wget https://git.osgeo.org/gogs/geos/php-geos/archive/1.0.0rc1.tar.gz
tar zxf 1.0.0rc1.tar.gz
cd php-geos
./autogen.sh
./configure
make
mv modules/geos.so $(php-config --extension-dir)
cd ..
echo "extension=geos.so" > geos.ini
phpenv config-add geos.ini
Everything seems to compile fine, but when PHP attempts to load the GEOS extension, this message appears:
PHP Startup: Unable to load dynamic library '/home/travis/.phpenv/versions/5.6.28/lib/php/extensions/no-debug-zts-20131226/geos.so' - libgeos_c.so.1: cannot open shared object file: No such file or directory in Unknown on line 0
I've executed this command on the machine:
sudo find / -name 'libgeos_c.so*'
And here is the result:
/usr/local/lib/libgeos_c.so.1.9.0
/usr/local/lib/libgeos_c.so.1
/usr/local/lib/libgeos_c.so
/home/travis/build/brick/geo/libgeos-3.5.0/capi/.libs/libgeos_c.so.1.9.0T
/home/travis/build/brick/geo/libgeos-3.5.0/capi/.libs/libgeos_c.so.1.9.0
/home/travis/build/brick/geo/libgeos-3.5.0/capi/.libs/libgeos_c.so.1
/home/travis/build/brick/geo/libgeos-3.5.0/capi/.libs/libgeos_c.so
So it looks like the freshly built GEOS PHP extension is trying to load the shared object file from a location other than /usr/local/lib.
How can I fix this?
Here is the full log on Travis CI.
|
I did not find a way to make the extension look for the shared libraries in /usr/local/lib, but I did find a way to make libgeos install them to /usr/lib, which is where the extension is looking for them.
Just use --prefix when building libgeos:
./configure --prefix=/usr
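An alternative worth noting (I have not tested it in the Travis environment, so treat it as a sketch): instead of changing the install prefix, you can tell the dynamic linker about /usr/local/lib via an ld.so.conf.d entry. The helper below only edits an ld.so.conf.d-style file; the function name and file path are mine, and on a real system you still need to run ldconfig as root afterwards.

```shell
# add_ld_path FILE DIR — append DIR to an ld.so.conf.d-style file,
# avoiding duplicate entries on repeated runs.
add_ld_path() {
    conf=$1
    dir=$2
    # -x: match the whole line, -F: fixed string, -q: quiet
    grep -qxF -- "$dir" "$conf" 2>/dev/null || printf '%s\n' "$dir" >> "$conf"
}

# Real usage (as root), then rebuild the linker cache:
#   add_ld_path /etc/ld.so.conf.d/usrlocal.conf /usr/local/lib
#   ldconfig
```

After ldconfig, `ldconfig -p | grep libgeos_c` should then list the /usr/local/lib copies.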
| PHP Startup: Unable to load dynamic library : cannot open shared object file: No such file or directory |
1,460,488,047,000 |
I want to find out which compiler/linker options were used to compile the GNU C Library (glibc) when installing Linux. In particular, I want to end up with the same libc.a archive when compiling glibc from source on a different machine (with the same versions of gcc, make, binutils, etc., though).
All I could find out was the used gcc version with:
user@ubuntu:/$ /lib/x86_64-linux-gnu/libc.so.6
GNU C Library (Ubuntu GLIBC 2.21-0ubuntu4) stable release version 2.21,
...
Compiled by GNU CC version 4.9.2.
...
But when compiling glibc from source with no further options, I don't get the same results after running make. The self-compiled libc.a archive is different from the preinstalled one (both in size and in binary content). So I guess there is some optimization going on, maybe because of debug information included when compiling from source.
Build:
user@ubuntu:~/glibc$ sudo apt-get source libc6
user@ubuntu:~/glibc/glibc-build$ sudo ../glibc-2.21/configure --prefix=/home/user/glibc/glibc-install/
...
sudo make
...
In the debian/rules file and in the output, I found that -O2 and -g are used with gcc.
There is an existing question, which I looked at but didn't help me.
I am currently using Ubuntu 15.04, but I need it on other (non debian) systems also. Furthermore it should also work with eglibc on Ubuntu 14.04.
The final goal is to reproduce (compile) every version of glibc used on different systems (I know that's a lot) and make IDA PRO FLIRT signatures out of them. So, in conclusion I need the same binary output at least for libc.a (that's the file the signatures are made from). Further reading on FLIRT Signatures here.
The problem with these signatures is that every different compiler version and every compiler option can change the output of the library archive and lead to a different signature, which will only partially work on the analyzed binary.
I'm new to this, so any help is welcome. I hope I didn't forget to mention anything important.
|
You should find everything you need to rebuild in the source package that matches your .deb package. Look at apt-src(8).
| Find out glibc compilation options |
1,460,488,047,000 |
The Blackberry Playbook has formally reached EOL (April 2014), but I've installed BGShell, BGSSH-SCP-SFTP, and Term48 on it. So it seems I have a ksh-like shell with things like GNU awk 3.1.5, sed 4.1.5, grep and python, along with some other coreutils elements (but no tr), etc. I'm not root. Basically, I can write to the Downloads directory or to the $HOME dir created by the shell apps (/accounts/1000/appdata/com.BGShell..blabla/data, for instance), and while I can't execute everything, I can generally run a script within the aforementioned constraints. The reason I'm interested is that this is QNX on a Cortex-A9:
QNX localhost 6.6.0 2014/03/19-01:28:41EDT OMAP4430_ES2.2_HS_Winchester_Rev:07 armle
So I compiled some binaries by trying to leverage an old project1. Many things needed to be changed for this to work, but it is set up and able to compile most targets (including gcc and coreutils-8.13)2. To summarize, I compiled using many different variations of configure flags and older versions of some dev tools to avoid some errors. I've settled for something like:
CFLAGS="-march=armv7-a -marm -fno-strict-aliasing -mtune=cortex-a9 -O2 -pipe -fomit-frame-pointer -mlittle-endian"
AUTOMAKE=automake-1.11: AUTOCONF=: AUTOHEADER=: AUTORECONF=: ACLOCAL=aclocal-1.11: MAKEINFO=makeinfo-4.13a
with the arm-unknown-nto-qnx8.0.0eabi-gcc cross-compiler linked from the 10.3 SDK which sits inside the momentics IDE. The resulting binaries look like this:
ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), BuildID[md5/uuid]=41442b23fecda2d1d7cc5d2c68432a33, not stripped
But every one of them errors on the tablet with this message:
ldd:FATAL: Unresolved symbol "getopt_long" called from Executable
Since I recompiled many times using different flags and always get that message, I'm thinking I may have mislinked something other than the compiler in the build scripts, as they were geared toward a prior SDK... (this is beyond my expertise, but maybe something about garbage collecting, i.e. collect?). Or is it the exotic configuration on the target platform?
A million things could have gone wrong here because of the setup (and inexperience), but I'm looking for some clues: is there any specific thing I should understand from such an error, and how can I backtrack the problem generally?
1. In summary, it is a bunch of scripts to fetch, patch, compile, and install to some dir, then bundle that dir into one zip file, then spawn a cute ruby webrick server; you use the tablet to download the script and the archive from it (I don't use it for the archive at 250 MB, I just use some web file host and the browser).
2. I've removed the ruby and file targets from the top level build configuration.
|
This is about confusing two different SDKs1. getopt_long exists on QNX 6.6, but the libc version on the Playbook doesn't have it, as it has an earlier version of /proc. The error would never have happened if I had installed the Playbook OS Native SDK v2.1.0 instead of blindly following the link in the project, which installs the Native SDK v10.3 with the 2.1 IDE for BB10. The project's information is clear, but the link doesn't point to the proper SDK. So what I ended up with were most likely binaries made to run on BB10, but running on a device which doesn't have exactly the same infrastructure. Interestingly, something like grep would work but refused to answer --version, i.e. a long option, when compiled with the BB SDK...
But with the proper SDK there is no issue. Most targets have been recompiled2. This is all happening on Arch Linux x86_64, where multilib repositories have been enabled (and many lib32- packages pulled). This is the build.sh configuration block I used for gcc3:
CONFIGURE_CMD="$EXECDIR/gcc/configure
--host=$PBHOSTARCH
--build=$PBBUILDARCH
--target=$PBTARGETARCH
--srcdir=$EXECDIR/gcc
--with-as=ntoarm-as
--with-ld=ntoarm-ld
--with-sysroot=$BBTOOLS/target/qnx6/
--disable-werror
--prefix=$DESTDIR
--exec-prefix=$DESTDIR
--enable-cheaders=c
--enable-languages=c
--enable-threads=posix
--disable-nls
--disable-libssp
--disable-tls
--disable-libstdcxx-pch
--disable-newlib-supplied-syscalls
--enable-libmudflap
--enable-__cxa_atexit
--with-gxx-include-dir=$BBTOOLS/target/qnx6/usr/include
--enable-shared
--disable-subdir-texinfo
--enable-cross-compile
--enable-shared
CC=$PBTARGETARCH-gcc
CFLAGS='-march=armv7-a -marm -fno-strict-aliasing -mtune=cortex-a9 -O2 -pipe -fomit-frame-pointer -mlittle-endian'
LDFLAGS='-Wl,-s '
MAKEOPTS="-j5"
AUTOMAKE=automake-1.11: AUTOCONF=: AUTOHEADER=: AUTORECONF=: ACLOCAL=aclocal-1.11: MAKEINFO=makeinfo-4.13a
Both coreutils and gcc work and I have aliased ls and grep with the --color option:
At last, a functional shell with tr on my Playbook, which offers a rare insight into QNX!
1. Thanks to Ryan Mansfield @Foundry27 for the heads up and to @Emmanuel for the info!
2. Excluded targets: file, man, ruby, findutils. The resulting archive was 80Mib and the ruby webrick server was used to deploy as intended.
3. ...for both gcc and coreutils, actually (adding to the defaults in the latter case). It is not clear that all options are required or even recommended. Every target can be built individually by launching the build.sh script in the appropriate bootstrap/target directory, i.e. bootstrap/gcc/build.sh. The build environment (bbndk-env.sh) must have been sourced once by the global top-level build.sh script using the -b option, i.e. build.sh -b /path/to/bbndk-2.10. Sources are fetched to the work/target directories, where they're built. If a build succeeds and no option is given (see lib.sh in the project root dir), it gets added to the pbhome directory structure, which will be zipped into an archive (which will be deployed to your $HOME dir on the device running BGShell). Also note the reference to Darwin in lib.sh, which I changed to x86_64 in my case. Otherwise nothing has been altered from the original project configuration, i.e. arm-unknown-nto-qnx6.5.0eabi (as opposed to using the 8.0.0 version with the BB10.3 SDK as in the Q.). So it is very easy to compile once we weed out the few problematic targets.
| ldd:FATAL: Unresolved symbol "getopt_long" called from Executable - using compiled binaries for QNX on arm. Why? |
1,460,488,047,000 |
Edit 1:
The problem seems to be related to the MySQL component, because if I remove every SQL directive from the config file, it does work over ftp/ftpes, sftp and ftps.
Edit 2:
If I put in an existing host not hosting a DB, the connection to the ftp daemon will hang and finally time out, while if I put in an incorrect db or a non-responding host, it will try to run unix auth instead of mysql auth.
Edit 3:
In the SQL log, we can see that the line Feb 07 15:44:12 mod_sql/4.3[15139036]: entering mysql cmd_open is only followed by a new log line more than a minute later, at Feb 07 15:45:27:
Feb 07 15:44:11 mod_sql/4.3[15139036]: defaulting to 'mysql' backend
Feb 07 15:44:11 mod_sql/4.3[15139036]: backend module 'mod_sql_mysql/4.0.8'
Feb 07 15:44:11 mod_sql/4.3[15139036]: backend api 'mod_sql_api_v1'
Feb 07 15:44:11 mod_sql/4.3[15139036]: >>> sql_sess_init
Feb 07 15:44:11 mod_sql/4.3[15139036]: entering mysql cmd_defineconnection
Feb 07 15:44:11 mod_sql/4.3[15139036]: name: 'default'
Feb 07 15:44:11 mod_sql/4.3[15139036]: user: 'mysql_poney_user'
Feb 07 15:44:11 mod_sql/4.3[15139036]: host: 'pingableHostWithoutDB.net'
Feb 07 15:44:11 mod_sql/4.3[15139036]: db: 'mysql_poney_user'
Feb 07 15:44:11 mod_sql/4.3[15139036]: port: '15140'
Feb 07 15:44:11 mod_sql/4.3[15139036]: ttl: '2'
Feb 07 15:44:11 mod_sql/4.3[15139036]: exiting mysql cmd_defineconnection
Feb 07 15:44:11 mod_sql/4.3[15139036]: connection 'default' successfully established
Feb 07 15:44:11 mod_sql/4.3[15139036]: mod_sql engine : on
Feb 07 15:44:11 mod_sql/4.3[15139036]: negative_cache : off
Feb 07 15:44:11 mod_sql/4.3[15139036]: authenticate : users
Feb 07 15:44:11 mod_sql/4.3[15139036]: usertable : proftpd_users
Feb 07 15:44:11 mod_sql/4.3[15139036]: userid field : userid
Feb 07 15:44:11 mod_sql/4.3[15139036]: password field : passwd
Feb 07 15:44:11 mod_sql/4.3[15139036]: UID field : uid
Feb 07 15:44:11 mod_sql/4.3[15139036]: GID field : gid
Feb 07 15:44:11 mod_sql/4.3[15139036]: homedir field : homedir
Feb 07 15:44:11 mod_sql/4.3[15139036]: shell field : shell
Feb 07 15:44:11 mod_sql/4.3[15139036]: SQLMinUserUID : 200
Feb 07 15:44:11 mod_sql/4.3[15139036]: SQLMinUserGID : 1
Feb 07 15:44:11 mod_sql/4.3[15139036]: <<< sql_sess_init
Feb 07 15:44:12 mod_sql/4.3[15139036]: >>> sql_escapestr
Feb 07 15:44:12 mod_sql/4.3[15139036]: entering mysql cmd_escapestring
Feb 07 15:44:12 mod_sql/4.3[15139036]: entering mysql cmd_open
Feb 07 15:45:27 mod_sql/4.3[15139036]: exiting mysql cmd_open
Feb 07 15:45:27 mod_sql/4.3[15139036]: exiting mysql cmd_escapestring
Feb 07 15:45:27 mod_sql/4.3[15139036]: unrecoverable backend error
Feb 07 15:45:27 mod_sql/4.3[15139036]: error: '2003'
Feb 07 15:45:27 mod_sql/4.3[15139036]: message: 'Can't connect to MySQL server on 'pingableHostWithoutDB.net' (78)'
Feb 07 15:45:27 mod_sql/4.3[15139036]: entering mysql cmd_exit
Feb 07 15:45:27 mod_sql/4.3[15139036]: exiting mysql cmd_exit
Original question
I have a proftpd config file that has been tested on both proftpd 1.3.4b and proftpd 1.3.4d. Now I want to compile it on a new system, an AIX 6.1.
I'm using IBM XLc compiler.
Here are the libraries I installed:
rpm -qa
apr-1.4.6-1
mkisofs-1.13-4
pci.df1000fa-1-191A5
openldap-2.4.23-0.3
apr-util-ldap-1.5.1-1
openssl-1.0.1e-2
bash-3.0-1
coreutils-5.0-2
grep-2.5.1-1
pci.1069B166.0A-050A008a-1
pci.1069B166.08-0508008a-1
pci.1069B166.10-0510006d-1
pci.df1000fa-1-90X13
pci.df1080f9-1-91x4
ibm.scsi.disk.10k300-RPQR-1
ibm.scsi.disk.73lpx15-c51d-1
ibm.scsi.disk.146z10-s28g-1
ibm.scsi.disk.146lp-C50K-1
ses.0018-0018-01
cdrecord-1.9-7
pci.1069B166.10-0710000b-1
screen-3.9.10-2
expat-2.1.0-1
zlib-1.2.7-2
AIX-rpm-6.1.6.15-5
gettext-0.10.40-8
libiconv-1.14-2
apr-util-1.5.1-1
db4-4.7.25-2
bzip2-1.0.6-1
info-4.13a-2
readline-6.2-4
pcre-8.32-1
openssl-devel-1.0.1e-2
httpd-2.4.3-1
mpfr-3.1.2-1
MySQL-devel-5.1.56-1
libgcc-4.6.1-1
gcc-4.6.1-1
libstdc++-4.6.1-1
libstdc++-devel-4.6.1-1
gmp-5.1.3-1
gmp-devel-5.1.3-1
mpfr-devel-3.1.2-1
libmpc-1.0.1-2
libmpc-devel-1.0.1-2
gcc-cpp-4.6.1-1
zlib-devel-1.2.7-2
Here's the script I use to compile:
export CONFIG_SHELL=/opt/freeware/bin/bash
export CONFIG_ENV_ARGS=/opt/freeware/bin/bash
export CC=cc
export CFLAGS="-qmaxmem=16384 -DSYSV -D_AIX -D_AIX32 -D_AIX41 -D_AIX43 -D_AIX51 -D_AIX52 -D_AIX53 -D_AIX61 -D_ALL_SOURCE -DFUNCPROTO=15 -O -I/opt/freeware/include"
export CXX=xlC
export CXXFLAGS=$CFLAGS
export CPPFLAGS='-U__STR__'
export F77=xlf
export FFLAGS="-O -I/opt/freeware/include"
export LD=ld
export LDFLAGS="-L/opt/freeware/lib -Wl,-blibpath:/opt/freeware/lib:/usr/lib:/lib:/opt/freeware/lib/mysql:/opt/freeware/lib/mysql/mysql"
export PATH=/usr/bin:/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/vac/bin:/usr/vacpp/bin:/usr/ccs/bin:/usr/dt/bin:/usr/opt/perl5/bin:/opt/freeware/bin:/opt/freeware/sbin:/usr/local/bin:/usr/lib/instl
export CFLAGS="-DSYSV -D_AIX -D_AIX32 -D_AIX41 -D_AIX43 -D_AIX51 -D_AIX52 -D_AIX53 -D_AIX61 -D_ALL_SOURCE -DFUNCPROTO=15 -O -I/opt/freeware/include"
make clean
./configure '--with-modules=mod_tls:mod_sql:mod_sql_mysql:mod_sql_passwd:mod_sftp:mod_sftp_sql' '--without-getopt' '--enable-openssl' '--with-includes=/home/poney/libmath_header:/home/poney/include_mysql/mysql/' '--with-libraries=/home/poney/libmath_lib:/opt/freeware/lib/mysql/mysql:/opt/freeware/lib/mysql/mysql/libmysqlclient.a' '--prefix=/usr/local/proftpd'
make
The thing is, it does compile without much further warning; still, I do get a warning during make install:
ld: 0711-224 WARNING: Duplicate symbol: .bcopy
ld: 0711-224 WARNING: Duplicate symbol: .memmove
ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.
My configuration allows FTPS, SFTP and FTP, and if I try to connect with FTPS, it works until I type the password:
openssl s_client -connect 127.0.0.1:210 -starttls ftp
CONNECTED(00000003)
depth=0 /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
verify return:1
---
Certificate chain
0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
---
Server certificate
-----BEGIN CERTIFICATE-----
MIICWDCCAcGgAwIBAg[...]8dqCxa3HS6bgg==
-----END CERTIFICATE-----
subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
---
No client certificate CA names sent
---
SSL handshake has read 1264 bytes and written 341 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 1024 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : DHE-RSA-AES256-SHA
Session-ID: 6F520DFBC97CF172B68A99510AAFA765658324A4478D87ACB481362070A88034
Session-ID-ctx:
Master-Key: [...]
Key-Arg : None
Start Time: 1391443369
Timeout : 300 (sec)
Verify return code: 18 (self signed certificate)
---
220 ProFTPD 1.3.4d Server (ftp daemon) [127.0.0.1]
USER frank
331 Password required for frank
PASS $$$$$
And after that, nothing: it hangs doing nothing. On the proftpd side, the daemon does provide some trace:
see pastebin
I can't read anything useful here.
I'm pretty sure there's something wrong with the libraries, but I really don't know what, or why it does not want to work in the end, as it compiles without problems.
|
And finally the answer is:
It's not a bug it's a feature
If you try to connect to a DB whose host is known in your DNS but whose packets are dropped by a firewall, then you fall into the SQL client timeout (approx. 85 sec), and no other authentication works if you have set AuthOrder with mod_sql.c first.
So my compilation options are correct, and the package versions as well.
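A quick way to tell a filtered (firewalled) port apart from a reachable one, before blaming proftpd or the compilation, is a TCP probe with a short timeout. This sketch uses bash's /dev/tcp redirection and the coreutils timeout command; the host and port are the placeholder values from the SQLConnectInfo log above.

```shell
#!/bin/sh
# Probe the MySQL host/port used by SQLConnectInfo. A firewall that
# DROPs packets shows up here as a timeout instead of a hung FTP login.
host=pingableHostWithoutDB.net   # placeholder from the log above
port=15140

if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "port open"
else
    echo "port closed or filtered"
fi
```

A "port closed or filtered" answer within the 3-second budget points at the network, not at mod_sql.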
| Compile proftpd with MySQL authentication support on AIX |
1,460,488,047,000 |
I'm trying to make my wifi work on my newly-installed Debian Sid (amd64, kernel 3.10.11-1). The relevant line of the lspci output is:
06:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01)
and this Wi-Fi card is not recognized:
# iwconfig
eth0 no wireless extensions.
lo no wireless extensions.
In case you need it, here is the output of lshw -c network (translated from a French locale):
*-network UNCLAIMED
description: Network controller
product: RTL8188CE 802.11b/g/n WiFi Adapter
vendor: Realtek Semiconductor Co., Ltd.
physical id: 0
bus info: pci@0000:06:00.0
version: 01
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress cap_list
configuration: latency=0
resources: ioport:3000(size=256) memory:f1d00000-f1d03fff
So I looked online to find what driver I was supposed to install. I found that I was supposed to go to this page and install the rtl8192ce driver. I downloaded and extracted it, and followed the instructions in the readme file. I changed to super user and tried to compile the driver from the source code with make. Here is the output:
# make
make -C /lib/modules/3.10-3-amd64/build M=/home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013 modules
make[1]: Entering directory '/usr/src/linux-headers-3.10-3-amd64'
CC [M] /home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013/base.o
In file included from /home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013/base.c:39:0:
/home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013/pci.h:247:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘rtl_pci_probe’
int __devinit rtl_pci_probe(struct pci_dev *pdev,
^
/home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013/base.c: In function ‘rtl_action_proc’:
/home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013/base.c:885:32: error: ‘struct ieee80211_conf’ has no member named ‘channel’
rx_status.freq = hw->conf.channel->center_freq;
^
/home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013/base.c:886:32: error: ‘struct ieee80211_conf’ has no member named ‘channel’
rx_status.band = hw->conf.channel->band;
^
/home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013/base.c: In function ‘rtl_send_smps_action’:
/home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013/base.c:1451:24: error: ‘struct ieee80211_conf’ has no member named ‘channel’
info->band = hw->conf.channel->band;
^
make[4]: *** [/home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013/base.o] Error 1
make[3]: *** [_module_/home/damien/Downloads/rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013] Error 2
make[2]: *** [sub-make] Error 2
make[1]: *** [all] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-3.10-3-amd64'
make: *** [all] Error 2
It seems that the error comes from the source code, and not from a missing library or something.
Any idea what I should try next, or how I should try to solve this compilation error?
|
Please enable contrib and non-free in /etc/apt/sources.list, install firmware-realtek and reboot.
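For reference, a sketch of the sources.list edit (the function name is mine; run it against a copy first, since the real file is /etc/apt/sources.list and needs root to change):

```shell
# Append "contrib non-free" to every deb/deb-src line in the given file
# that currently ends with just the "main" component.
enable_nonfree() {
    sed -E 's/^(deb(-src)? +[^ ]+ +[^ ]+ +main) *$/\1 contrib non-free/' "$1"
}

# After updating the real file:
#   apt-get update && apt-get install firmware-realtek
```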
| Compilation of my wifi driver (RTL8192CE) fails |
1,460,488,047,000 |
I am running a quite complicated script which changes directories and runs many other commands. All these commands are run using 'scriptname', which works fine when I execute the main script from my terminal. However, when I ssh into a server and run the main script from there, it fails, as there isn't a ./ before each command.
I'd rather not go through all the scripts and executables and add a ./ to the commands, so is there another way to solve this problem?
|
There are ways to change this behavior, including adding ./ to your PATH environment variable, but this introduces a serious security risk to your environment. The way your scripts are written is really wrong, and the correct solution is to go through all of them and fix the way local scripts are called. This is the only proper fix that will not introduce extra problems down the road and create security issues for you. I know it's not what you wanted to hear, but bite the bullet and do it right.
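If you do go through the call sites, the usual idiom is to resolve sibling scripts relative to the calling script's own location, rather than relying on $PATH or the current directory. A minimal sketch (helper.sh is a placeholder name for one of your sub-scripts):

```shell
#!/bin/sh
# Directory this script lives in, independent of the caller's cwd.
# (CDPATH is cleared so cd prints nothing unexpected.)
script_dir=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)

# Invoke a sibling script by full path instead of a bare name:
if [ -x "$script_dir/helper.sh" ]; then
    "$script_dir/helper.sh" "$@"
fi
```

This keeps working whether the main script is launched locally, over ssh, or from any working directory.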
| Run script without ./ before the name |
1,460,488,047,000 |
I would like to compile less with the latest fixes.
I do this:
git clone https://github.com/gwsw/less
cd less/
autoheader
autoconf
./configure
make
But make says this:
make: *** No rule to make target 'funcs.h', needed by 'main.o'. Stop.
There are no Makefile rules that create funcs.h
So, how to compile less from source?
|
Here's the method I just successfully used on Ubuntu 18.04:
git clone https://github.com/gwsw/less.git
cd less
autoreconf -i # install the autoconf package if you haven't already
make -f Makefile.aut dist
This creates a directory release/less-550 containing less-550.tar.gz and less-550.zip. It also attempts to create a gpg signature for less-550.tar.gz. That hung on my system, so I killed the gpg --detach-sign ... process from another window. You could also just kill the make process.
less-550.tar.gz is a standard buildable source tarball, which you can install as usual:
tar xf less-550.tar.gz
cd less-550
./configure --prefix=some-directory other-options
make
make install
The most interesting options for ./configure are probably:
--with-regex=LIB select regular expression library
(LIB is one of
auto,none,gnu,pcre,pcre2,posix,
regcmp,re_comp,regcomp,regcomp-local) [auto]
--with-editor=PROGRAM use PROGRAM as the default editor [vi]
Run ./configure --help for a full list of options.
| How to compile LESS pager? |
1,460,488,047,000 |
I'm running Debian 8.9 Jessie on a Linux 2.6.32-openvz-042stab120.11-amd64 OpenVZ container.
I'm trying to use curlftpfs 0.9.1, as this version has functionality that was removed in later versions, namely open(read+write) and open(write).
The current version is 0.9.2-9~deb8u1:
apt-cache policy curlftpfs
curlftpfs:
Installed: (none)
Candidate: 0.9.2-9~deb8u1
Version table:
0.9.2-9~deb8u1 0
500 http://ftp.debian.org/debian/ jessie/main amd64 Packages
I was able to find both binaries and sources on Debian Snapshot.
However, if I try to install the .deb binary, I get unmet dependencies:
# dpkg -i ./curlftpfs_0.9.1-3_amd64.deb
Selecting previously unselected package curlftpfs.
(Reading database ... 44948 files and directories currently installed.)
Preparing to unpack ./curlftpfs_0.9.1-3_amd64.deb ...
Unpacking curlftpfs (0.9.1-3) ...
dpkg: dependency problems prevent configuration of curlftpfs:
curlftpfs depends on fuse-utils; however:
Package fuse-utils is not installed.
curlftpfs depends on libgnutls13 (>= 2.0.4-0); however:
Package libgnutls13 is not installed.
curlftpfs depends on libkrb53 (>= 1.6.dfsg.2); however:
Package libkrb53 is not installed.
curlftpfs depends on libldap2 (>= 2.1.17-1); however:
Package libldap2 is not installed.
dpkg: error processing package curlftpfs (--install):
dependency problems - leaving unconfigured
Processing triggers for man-db (2.7.0.2-5) ...
Errors were encountered while processing:
curlftpfs
And apt-get tells me these dependencies are not installable:
#apt-get install
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these.
The following packages have unmet dependencies:
curlftpfs : Depends: fuse-utils but it is not installable
Depends: libgnutls13 (>= 2.0.4-0) but it is not installable
Depends: libkrb53 (>= 1.6.dfsg.2) but it is not installable
Depends: libldap2 (>= 2.1.17-1) but it is not installable
E: Unmet dependencies. Try using -f.
But running apt-get -f install installs the current version of curlftpfs.
Trying gdebi isn't any better:
# gdebi curlftpfs_0.9.1-3_amd64.deb
Reading package lists... Done
Building dependency tree
Reading state information... Done
Building data structures... Done
Building data structures... Done
Este pacote não pode ser desinstalado (This package cannot be uninstalled)
Dependency is not satisfiable: fuse-utils
If I add a debian-snapshot to my sources list, I can get the specific package version I want, but then I get lost in dependency hell:
apt-get install -f curlftpfs=0.9.1-3+b2
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
curlftpfs : Depends: libkrb53 (>= 1.6.dfsg.2) but it is not going to be installed
Depends: fuse-utils but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
root@tunnelserver:~/temp# apt-get install fuse-utils
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
fuse-utils : Depends: libfuse2 (= 2.7.3-4) but 2.9.3-15+deb8u2 is to be installed
E: Unable to correct problems, you have held broken packages.
root@tunnelserver:~/temp# apt-get install libfuse2=2.7.3-4
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
libselinux1-dev libsepol1-dev
Use 'apt-get autoremove' to remove them.
Suggested packages:
fuse-utils
The following packages will be REMOVED:
fuse gvfs-fuse libfuse-dev ntfs-3g sshfs testdisk
The following packages will be DOWNGRADED:
libfuse2
0 upgraded, 0 newly installed, 1 downgraded, 6 to remove and 0 not upgraded.
Need to get 128 kB of archives.
After this operation, 4059 kB disk space will be freed.
Do you want to continue? [Y/n] n
So, I decided to build the binaries. I downloaded the source from Debian Snapshot, applied the diff patch, ran ./configure, and got this error: configure: error: "libcurl not found":
# ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking how to run the C preprocessor... gcc -E
checking for a BSD-compatible install... /usr/bin/install -c
checking whether ln -s works... yes
checking whether make sets $(MAKE)... (cached) yes
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for /usr/bin/ld option to reload object files... -r
checking for BSD-compatible nm... /usr/bin/nm -B
checking how to recognise dependent libraries... pass_all
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking for g++... no
checking for c++... no
checking for gpp... no
checking for aCC... no
checking for CC... no
checking for cxx... no
checking for cc++... no
checking for cl.exe... no
checking for FCC... no
checking for KCC... no
checking for RCC... no
checking for xlC_r... no
checking for xlC... no
checking whether we are using the GNU C++ compiler... no
checking whether g++ accepts -g... no
checking dependency style of g++... none
checking for g77... no
checking for xlf... no
checking for f77... no
checking for frt... no
checking for pgf77... no
checking for cf77... no
checking for fort77... no
checking for fl32... no
checking for af77... no
checking for xlf90... no
checking for f90... no
checking for pgf90... no
checking for pghpf... no
checking for epcf90... no
checking for gfortran... no
checking for g95... no
checking for xlf95... no
checking for f95... no
checking for fort... no
checking for ifort... no
checking for ifc... no
checking for efc... no
checking for pgf95... no
checking for lf95... no
checking for ftn... no
checking whether we are using the GNU Fortran 77 compiler... no
checking whether accepts -g... no
checking the maximum length of command line arguments... 32768
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... ranlib
checking for strip... strip
checking for correct ltmain.sh version... yes
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC
checking if gcc PIC flag -fPIC works... yes
checking if gcc static flag -static works... yes
checking if gcc supports -c -o file.o... yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
configure: creating libtool
appending configuration tag "CXX" to libtool
appending configuration tag "F77" to libtool
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for GLIB... yes
checking for FUSE... yes
checking for gawk... (cached) gawk
checking for curl-config... no
checking whether libcurl is usable... no
configure: error: "libcurl not found"
I can't find a libcurl package that I can install. How can I proceed?
|
I found out that if you install libcurl4-openssl-dev, then make won't complain about the absence of libcurl anymore:
apt-get install libcurl4-openssl-dev
Unfortunately, I'm unable to provide an explanation of why or how this happens (other than that the package installs this elusive libcurl).
But I have tested and confirmed myself, and it does work. So I'm leaving this answer here.
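As a rough sanity check (the package names here are Debian/Ubuntu assumptions, not from the original answer): the configure run above probes for curl-config before deciding whether libcurl is usable, so you can test for that directly:

```shell
# The "checking for curl-config... no" line in the configure output is
# the tell-tale: probe for curl-config the same way configure does.
if command -v curl-config >/dev/null 2>&1; then
    echo "libcurl dev files found: version $(curl-config --version)"
else
    echo "curl-config missing: install libcurl4-openssl-dev or libcurl4-gnutls-dev"
fi
```

If curl-config appears after installing the -dev package, re-running ./configure should get past the libcurl check.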
| debian configure: error: "libcurl not found" |
1,460,488,047,000 |
I've read somewhere that recompiling libc with the -march=native and -mtune=native flags will provide the maximum benefit for programs, where shared libraries are used instead of static libraries. Is this true, and might there be any additional benefit by recompiling other programs?
|
The -march=native and -mtune=native options ensure that the generated binaries make the best use of the available processor feature sets and instruction scheduling. Any gain in performance will depend on how much of the application code can be optimized using those additional feature sets (YMMV). Optimized libraries and binaries should run faster than generic binaries, but how much faster is difficult to quantify without testing. So the short answer is yes, there may be a performance gain from recompiling your applications with CPU optimizations; however, maintaining your own optimized builds and keeping up with security updates, etc. will likely be a nightmare.
More information about GCC 4.4.4 i386 and amd64 architecture options here.
| Source of biggest machine-code optimization |
1,460,488,047,000 |
I have installed the GNU "core" utilites coreutils-8.21 into this location on my UNIX server:
/opt/app/p1sas1c1/apps/GNU
I would now like to ADD the findutils-4.4.2 package. My reading the INSTALL document, I see I can configure using this command:
./configure --prefix=/opt/app/p1sas1c1/apps/GNU
That is the same "prefix" I used to install the core utilities.
My question is: If I do this and follow with a "make install" command, will that over-write the existing files in that target location or just "add" the new elements into the corresponding directories?
I want to check first on the "best practices" for doing things like this. I am not a trained "SA" and do not have "root" access; I'm using an application account to do the install.
|
make install will overwrite existing files by the same name. Other than that, it will not remove existing files. GNU coreutils and GNU findutils are intended to be used and installed alongside each other, so they don't have different files by the same name. Therefore if you install them one after the other, you'll get both.
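A tiny throwaway demonstration of that behaviour (the file names below are made up for illustration), using cp to stand in for two make install runs into the same prefix:

```shell
# Two "installs" into the same prefix: files with the same name are
# overwritten by the later install, everything else accumulates.
prefix=$(mktemp -d); stage=$(mktemp -d)
mkdir -p "$stage/pkgA/bin" "$stage/pkgB/bin"
echo 'from coreutils' > "$stage/pkgA/bin/shared-name"
echo 'from findutils' > "$stage/pkgB/bin/shared-name"
echo 'findutils only' > "$stage/pkgB/bin/find"
cp -R "$stage/pkgA/bin" "$prefix"   # first "make install"
cp -R "$stage/pkgB/bin" "$prefix"   # second "make install"
cat "$prefix/bin/shared-name"       # -> from findutils (overwritten)
ls "$prefix/bin"                    # both shared-name and find are present
```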
| How to add GNU findutils to an existing location
1,460,488,047,000 |
I tend to build binaries from sources.
My usual setup is the following
$HOME/build -> this gets the sources
$HOME/programs -> this is where the build happen, so where the binaries are
Once this is done I put the following in my bashrc
export MYNEWBINDIR = $HOME/programs/...
export PATH=$MYNEWBINDIR:$PATH
My question is: is this the recommended way? For instance, I could just create a local $HOME/bin, symlink all the binaries there and just add it to the path...
|
It is more or less left up to the individual to do as they wish, but I would recommend that you leave directories already used by the base system or by any package manager alone, always. You really do not want to confuse a package manager by overwriting files that it has previously installed! So definitely leave /bin alone.
On most systems, /usr/local should be a safe area for you to install things into, although this is also used by package managers on some systems (FreeBSD and OpenBSD for example, since they regard software installed from ports/packages as local software).
See also the entries "Filesystem Hierarchy Standard" and "Unix filesystem" at Wikipedia.
Some Unices also have a hier(7) manual that you may refer to. (The link goes to OpenBSD's version, but it's also available on at least Ubuntu amongst the Linuxes. In OpenBSD's case it documents the directories that system software is aware of, so it doesn't mention /opt, for example).
Where you build stuff doesn't really matter as it's a directory that you are likely to remove when you're done anyway. It's a good idea to do it away from the actual installation directories though. I've worked on systems that had source trees in /bin and /usr/local, which is just extremely untidy.
I'd also recommend that you do an actual make install for the software, rather than running the executables from within the build directory, unless you're tweaking the build. Collecting the executables in a single location makes your PATH cleaner and easier to set, unless you really want a distinct PATH element for each piece of software, obviously.
Personal opinion below:
For software that I build solely for myself, I tend to use a private hierarchy under $HOME/local for installations.
When compiling programs that use the GNU autotools, and therefore have a configure script, this is really easy. You just say
$ ./configure --prefix="$HOME/local"
in the configuration stage, before make and make install.
When compiling programs that use CMake, one can achieve the same by
$ cmake -DCMAKE_INSTALL_PREFIX="$HOME/local" .
before the make and make install steps. The same effect (by other means) may be had when installing Perl modules etc.
Then you'll obviously have to add $HOME/local/bin to your path...
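For example, a hedged snippet for ~/.bashrc that prepends the directory only when it isn't already on PATH, so re-sourcing the file never creates duplicate entries:

```shell
# Prepend $HOME/local/bin to PATH idempotently: the case pattern checks
# whether the directory already appears between two colons.
case ":$PATH:" in
  *":$HOME/local/bin:"*) ;;               # already present, do nothing
  *) PATH="$HOME/local/bin:$PATH" ;;
esac
export PATH
```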
Personally, I use GNU Stow too, which means I don't really specify $HOME/local as the installation prefix but $HOME/local/stow/thing-2.1 when I configure the software package thing-2.1.
After installing:
$ cd "$HOME/local/stow"
$ stow thing-2.1
The contents of the $HOME/local/stow/thing-2.1/bin directory will show up (using symlinks) in $HOME/local/bin (and similarly for any lib or other directory installed under thing-2.1).
Stow makes it really easy to uninstall software. No need to hunt down every little file that was installed by one make install just to uninstall and remove a piece of software, instead just
$ cd "$HOME/local/stow"
$ stow -D thing-2.1
$ rm -rf thing-2.1
| How to correctly deal with locally built binaries? |
1,460,488,047,000 |
I made a makefile to help compile multiple C++ files, but it is giving me "command not found" errors. I need to fix it.
The errors I get:
Make: line 1: main.out::command not found
g++: error: GradeBook.o: No such file or directory
g++: error: main.o: No such file or directory
g++: fatal error: no input files
compilation terminated.
Make: line 4: main.o:: command not found
Make: line 7: GradeBook.o:: command not found
Make: line 10: clear:: command not found
Here is my makefile:
main.out: GradeBook.o main.o
g++ -Wall -g -o main.out GradeBook.o main.o
main.o: main.cpp GradeBook.h
g++ -Wall -g -c main.cpp
GradeBook.o: GradeBook.cpp GradeBook.h
g++ -Wall -g -c GradeBook.cpp
clean:
rm -f main.out main.o GradeBook.o
|
Here's a list of typical mistakes people make with makefiles.
Issue #1 - using spaces instead of tabs
The command make is notoriously picky about the formatting in a Makefile. You'll want to make sure that the action associated with a given target is prefixed by a tab and not spaces.
That is a single Tab followed by the command you want to run for a given target.
Example
This being your target.
main.out: GradeBook.o main.o
The command that follows should have a single Tab in front of it.
g++ -Wall -g -o main.out GradeBook.o main.o
^^^^--Tab
Here is your Makefile cleaned up
# Here is my makefile:
main.out: GradeBook.o main.o
g++ -Wall -g -o main.out GradeBook.o main.o
main.o: main.cpp GradeBook.h
g++ -Wall -g -c main.cpp
GradeBook.o: GradeBook.cpp GradeBook.h
g++ -Wall -g -c GradeBook.cpp
clean:
rm -f main.out main.o GradeBook.o
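A quick way to see the invisible difference is cat -A (GNU cat), which renders tabs as ^I; the demo files below are made up for illustration:

```shell
# Write one makefile with a tab-indented recipe and one with spaces,
# then use cat -A to make the difference visible: tabs show up as ^I.
printf 'all:\n\techo built with a tab\n' > /tmp/tab-demo.mk
printf 'bad:\n    echo indented with spaces\n' > /tmp/space-demo.mk
cat -A /tmp/tab-demo.mk      # recipe line begins with ^I (a real tab)
cat -A /tmp/space-demo.mk    # recipe line begins with literal spaces
```

If a recipe line starts with spaces instead of ^I, replacing the leading spaces with a real tab (for example with sed -i 's/^    /\t/' Makefile, adjusted for the actual indentation) is one way to fix it.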
Issue #2 - naming it wrong
The tool make is expecting the file to be called Makefile. Anything else, you need to tell make what file you want it to use.
$ make -f makefile
-or-
$ make --file=makefile
-or-
$ make -f smurfy_makefile
NOTE: If you name your file Makefile, then you can get away with just running the command:
$ make
Issue #3 - Running Makefiles
Makefile's are data files to the command make. They aren't executables.
Example
make it executable
$ chmod +x makefile
run it
$ ./makefile
./makefile: line 1: main.out:: command not found
g++: error: GradeBook.o: No such file or directory
g++: error: main.o: No such file or directory
g++: fatal error: no input files
compilation terminated.
./makefile: line 4: main.o:: command not found
g++: error: main.cpp: No such file or directory
g++: fatal error: no input files
compilation terminated.
./makefile: line 7: GradeBook.o:: command not found
g++: error: GradeBook.cpp: No such file or directory
g++: fatal error: no input files
compilation terminated.
./makefile: line 10: clean:: command not found
Other issues
Beyond the above tips I'd also advise you to make heavy use of make's ability to do "dry-runs" or "test mode". The switches:
-n, --just-print, --dry-run, --recon
Print the commands that would be executed, but do not execute them
(except in certain circumstances).
Example
Running the file makefile.
$ make -n -f makefile
g++ -Wall -g -c GradeBook.cpp
g++ -Wall -g -c main.cpp
g++ -Wall -g -o main.out GradeBook.o main.o
But notice that none of the resulting files were actually created when we ran this:
$ ls -l
total 4
-rw-rw-r--. 1 saml saml 0 Dec 22 08:39 GradeBook.cpp
-rw-rw-r--. 1 saml saml 0 Dec 22 08:45 GradeBook.h
-rw-rw-r--. 1 saml saml 0 Dec 22 08:45 main.cpp
-rwxrwxr-x. 1 saml saml 262 Dec 22 08:25 makefile
| My handwritten C++ Makefile gives command not found |
1,460,488,047,000 |
I'm trying to install coreutils on NetBSD 6.1.5 using the pkgsrc system.
This is on the default install of 6.1.5. The only change made has been to install zsh and set it as my default shell for users root and any local users.
As is the pkgsrc way, I change to the directory within the pkgsrc hierarchy containing the package I want to install. In this case it is /usr/pkgsrc/sysutils/coreutils
When I enter this directory as root I type
make
and then get an error:
configure: error: you should not run configure as root (set
FORCE_UNSAFE_CONFIGURE=1 in environment to bypass this check)
See `config.log' for more details
*** Error code 1
This is not typical when using pkgsrc as root, and seems to be specific to gnu packages, as I have not experienced it with any other package in pkgsrc.
When I do make as a normal user in the same directory I don't have permission to write to any directory under /usr/pkgsrc and make fails due to a bunch of permission denied errors. For example:
sh: Cannot create configure.override: permission denied.
Copying the package directory to somewhere a local user has write permission and compiling would not seem to be in line with using pkgsrc.
Does the user have to be part of a special group to use pkgsrc?
|
Try the command indicated in the error message:
export FORCE_UNSAFE_CONFIGURE=1 && make
This being said, it is true the "unsafe configure" requirement seems a bit strange. Double-check the log (config.log) and see if there is something more explicit in there.
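Alternatively, a leading VAR=value assignment applies only to the single command that follows, so the variable never leaks into the rest of the shell session. The sketch below uses sh in place of make so it is self-contained:

```shell
# The prefix assignment is exported to the child process only; the
# parent shell never sees the variable.
FORCE_UNSAFE_CONFIGURE=1 sh -c 'echo "in child: $FORCE_UNSAFE_CONFIGURE"'
echo "in parent: ${FORCE_UNSAFE_CONFIGURE-unset}"
```

So in the package directory, FORCE_UNSAFE_CONFIGURE=1 make achieves the same as the export, without changing the environment for later commands.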
| Permissions for installing coreutils with pkgsrc on NetBSD |
1,460,488,047,000 |
I'm running FreeBSD 10 and I'd like to build dwm. I've installed Xorg using pkg install. Where are the headers located? Maybe I'm just old fashioned but I first looked in /usr/X11R6 ... not there. Anyone has any idea where Xorg will install its headers files in FreeBSD?
|
You can install dwm from port (/usr/ports/x11-wm/dwm). You can use own config.h:
make DWM_CONF=/path/to/dwm/config.h
I think you should use the ports system instead of compiling it yourself, so that it appears in your package list.
| Location of Xorg headers on FreeBSD 10 |
1,460,488,047,000 |
I see that my programs install to usr/local/bin and that I can change that if I do ./configure --prefix=/usr/ at the build.
Where is the default prefix specified? Where can I change the default? Is it possible to change the default installation to /usr/bin/ for my program only instead of changing the default for the user?
|
Installing locally built applications with prefix /usr is a really bad idea, as the files installed may easily overwrite files installed by package managers. This may later give you issues if the package manager gets confused when file checksums no longer match, or when there are mismatches between executables and libraries.
/usr/local is the correct place to install locally compiled software on most systems, although /opt may be safer (most BSD Unices use /usr/local for third-party software).
I would definitely not recommend trying to change the default prefix.
Having said that, it is defined in the file general.m4 in the autoconf distribution as the variable as_default_prefix. On my OpenBSD system, this file resides in /usr/local/share/autoconf-2.69/autoconf. This directory may be located elsewhere if you're on Linux or use another version of autoconf.
This variable would have to be changed in the autoconf distribution and any configure script would have to be re-generated (as this variable is inserted in the configure script by autoconf when it's created).
An easier way would be to create a config.site file as described in the autoconf documentation and set the value of prefix.
Again, changing this would most definitely lead to shooting yourself in the foot further down the line.
See also: Filesystem Hierarchy Standard.
| Where is installation prefix set? |
1,460,488,047,000 |
I am trying to compile my favorite nano command-line text editor with some of the options.
Actually, most of the options in order to enable all features.
First, I go to Downloads directory and download the tarball:
cd Downloads
wget --continue https://www.nano-editor.org/dist/v2.8/nano-2.8.0.tar.xz
Then, I verify its integrity:
wget --continue https://www.nano-editor.org/dist/v2.8/nano-2.8.0.tar.xz.asc
gpg --verify nano-2.8.0.tar.xz.asc
It should say:
gpg: Good signature from "Benno Schulenberg <[email protected]>"
I have tried to run the configuration script as follows:
./configure --enable-nanorc --enable-color --enable-extra --enable-multibuffer --enable-utf8 --enable-libmagic --enable-speller --disable-wrapping-as-root
After compilation, I end up with this; directly executed in the compiled directory:
Compiled options: --disable-libmagic ...
I stress the:
--disable-libmagic
As I specifically configured it with:
--enable-libmagic
After no success:
I delete the folder to start the process over:
rm -rf nano-2.8.0/
I extract again the archive:
tar -xJf nano-2.8.0.tar.xz
I have tried different combinations of options, but no luck.
Is there anything missing in the system or am I just doing something wrong?
Direct execution after the compilation:
user@computer ~/Downloads/nano-2.8.0/src $ ./nano --version
GNU nano, version 2.8.0
(C) 1999..2016 Free Software Foundation, Inc.
(C) 2014..2017 the contributors to nano
Email: [email protected] Web: https://nano-editor.org/
Compiled options: --disable-libmagic --disable-wrapping-as-root --enable-utf8
|
Nano doesn't store the compiled options as provided on the ./configure command-line, it reconstructs them based on detected features and the requested target ("tiny" Nano or normal Nano). For tiny Nano, it reports enabled options, since they add to the default; for normal Nano, it reports disabled options, since they remove from the default (in most cases).
In your case, you're building normal Nano, so for most options it only reports if they're disabled; the exceptions are debug, utf8 and slang. All your --enable options are defaults for normal Nano, so it doesn't report them in the compiled options; you'd get the same result with ./configure and no options. You end up with --disable-magic because you don't have the development files for libmagic (see Thomas Dickey's answer), and with --enable-utf8 because you do have the necessary features for UTF-8 support (and it's enabled by default).
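A hedged way to confirm that diagnosis before re-running ./configure (the header path and the libmagic-dev package name are Debian/Ubuntu assumptions):

```shell
# When magic.h is absent, nano's configure silently falls back to
# --disable-libmagic; check for the header directly.
if [ -e /usr/include/magic.h ]; then
    echo "libmagic headers present"
else
    echo "libmagic headers missing: try installing libmagic-dev, then re-run ./configure"
fi
```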
| Compiling Nano editor with options |
1,460,488,047,000 |
I am using Ubuntu.
When programming in C++, the nullptr keyword is not recognized by the compiler.
It says it's not declared at this scope.
It doesn't work, even though I set the flag -std=c++11.
|
C++11 isn't a compiler, but an ISO standard implemented by a number of popular compilers. The default C++ compiler on Ubuntu is g++ from the GNU Compiler Collection. As you mentioned in your question, the -std=c++11 flag enables C++11 features in g++ as well as Clang, another C++ compiler available on Ubuntu.
The error message you see is shown when C++11 support is either not enabled or not supported by your compiler. GCC 4.6 was the first version to support nullptr, so if you are using an earlier version, you will not be able to use nullptr. Use g++ --version to obtain the version installed.
Assuming you are using at least GCC 4.6, you will need to determine why your build system is not passing the correct flags to the compiler. In CMake, for example, you will need to use:
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
| How to set C++11 as my default compiler? |
1,460,488,047,000 |
I am installing D.J.B.'s daemontools on an ubuntu 10.04 server (64 bit).
(This question is about daemontools, which is a free and open software for managing UNIX services. It is not about 'DAEMON tools', which is a commercial software for disk images, running on windows.)
I first installed the build-essential package and afterwards followed the instructions on http://cr.yp.to/daemontools/install.html 1:1, but it fails:
Script started on Sa 28 Apr 2012 21:41:34 CEST
root@daemontools1:/# mkdir -p /package
root@daemontools1:/# chmod 1755 /package
root@daemontools1:/# cd /package
root@daemontools1:/package# wget http://cr.yp.to/daemontools/daemontools-0.76.tar.gz
--2012-04-28 21:42:10-- http://cr.yp.to/daemontools/daemontools-0.76.tar.gz
Resolving cr.yp.to... 131.193.32.142, 80.101.159.118
Connecting to cr.yp.to|131.193.32.142|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 36975 (36K) [application/x-gzip]
Saving to: `daemontools-0.76.tar.gz'
2012-04-28 21:42:11 (125 KB/s) - `daemontools-0.76.tar.gz' saved [36975/36975]
root@daemontools1:/package# gunzip daemontools-0.76.tar
root@daemontools1:/package# tar -xpf daemontools-0.76.tar
root@daemontools1:/package# rm -f daemontools-0.76.tar
root@daemontools1:/package# cd admin/daemontools-0.76
root@daemontools1:/package/admin/daemontools-0.76#
root@daemontools1:/package/admin/daemontools-0.76#
root@daemontools1:/package/admin/daemontools-0.76# package/install
Linking ./src/* into ./compile...
Compiling everything in ./compile...
sh find-systype.sh > systype
rm -f compile
sh print-cc.sh > compile
chmod 555 compile
./compile byte_chr.c
./compile byte_copy.c
./compile byte_cr.c
./compile byte_diff.c
./compile byte_rchr.c
./compile fmt_uint.c
./compile fmt_uint0.c
./compile fmt_ulong.c
rm -f makelib
sh print-ar.sh > makelib
chmod 555 makelib
./compile scan_ulong.c
./compile str_chr.c
./compile str_diff.c
./compile str_len.c
./compile str_start.c
./makelib byte.a byte_chr.o byte_copy.o byte_cr.o byte_diff.o \
byte_rchr.o fmt_uint.o fmt_uint0.o fmt_ulong.o scan_ulong.o str_chr.o \
str_diff.o str_len.o str_start.o
rm -f choose
cat warn-auto.sh choose.sh \
| sed s}HOME}"`head -1 home`"}g \
> choose
chmod 555 choose
./choose c trydrent direntry.h1 direntry.h2 > direntry.h
./compile envdir.c
rm -f load
sh print-ld.sh > load
chmod 555 load
./compile alloc.c
./compile alloc_re.c
./compile buffer.c
./compile buffer_0.c
./compile buffer_1.c
./compile buffer_2.c
./compile buffer_get.c
./compile buffer_put.c
./compile buffer_read.c
./compile buffer_write.c
./compile coe.c
./compile env.c
./compile error.c
./compile error_str.c
./compile fd_copy.c
./compile fd_move.c
./choose cl trymkffo hasmkffo.h1 hasmkffo.h2 > hasmkffo.h
./compile fifo.c
./choose cl tryflock hasflock.h1 hasflock.h2 > hasflock.h
./compile lock_ex.c
./compile lock_exnb.c
./compile ndelay_off.c
./compile ndelay_on.c
./compile open_append.c
./compile open_read.c
./compile open_trunc.c
./compile open_write.c
./compile openreadclose.c
./compile pathexec_env.c
./compile pathexec_run.c
pathexec_run.c: In function ‘pathexec_run’:
pathexec_run.c:18: warning: implicit declaration of function ‘execve’
./compile chkshsgr.c
chkshsgr.c: In function ‘main’:
chkshsgr.c:10: warning: passing argument 2 of ‘getgroups’ from incompatible pointer type
/usr/include/bits/unistd.h:266: note: expected ‘__gid_t *’ but argument is of type ‘short int *’
chkshsgr.c:10: warning: implicit declaration of function ‘setgroups’
./load chkshsgr
./chkshsgr || ( cat warn-shsgr; exit 1 )
./choose clr tryshsgr hasshsgr.h1 hasshsgr.h2 > hasshsgr.h
./compile prot.c
prot.c: In function ‘prot_gid’:
prot.c:13: warning: implicit declaration of function ‘setgroups’
prot.c:15: warning: implicit declaration of function ‘setgid’
prot.c: In function ‘prot_uid’:
prot.c:20: warning: implicit declaration of function ‘setuid’
./compile readclose.c
./compile seek_set.c
seek_set.c: In function ‘seek_set’:
seek_set.c:9: warning: implicit declaration of function ‘lseek’
./compile sgetopt.c
./compile sig.c
./choose cl trysgprm hassgprm.h1 hassgprm.h2 > hassgprm.h
./compile sig_block.c
./choose cl trysgact hassgact.h1 hassgact.h2 > hassgact.h
./compile sig_catch.c
./compile sig_pause.c
./compile stralloc_cat.c
./compile stralloc_catb.c
./compile stralloc_cats.c
./compile stralloc_eady.c
./compile stralloc_opyb.c
./compile stralloc_opys.c
./compile stralloc_pend.c
./compile strerr_die.c
./compile strerr_sys.c
./compile subgetopt.c
./choose cl trywaitp haswaitp.h1 haswaitp.h2 > haswaitp.h
./compile wait_nohang.c
./compile wait_pid.c
./makelib unix.a alloc.o alloc_re.o buffer.o buffer_0.o buffer_1.o \
buffer_2.o buffer_get.o buffer_put.o buffer_read.o buffer_write.o \
coe.o env.o error.o error_str.o fd_copy.o fd_move.o fifo.o lock_ex.o \
lock_exnb.o ndelay_off.o ndelay_on.o open_append.o open_read.o \
open_trunc.o open_write.o openreadclose.o pathexec_env.o \
pathexec_run.o prot.o readclose.o seek_set.o sgetopt.o sig.o \
sig_block.o sig_catch.o sig_pause.o stralloc_cat.o stralloc_catb.o \
stralloc_cats.o stralloc_eady.o stralloc_opyb.o stralloc_opys.o \
stralloc_pend.o strerr_die.o strerr_sys.o subgetopt.o wait_nohang.o \
wait_pid.o
./load envdir unix.a byte.a
collect2: ld terminated with signal 11 [Segmentation fault]
/usr/bin/ld: make: *** [envdir] Error 1
Copying commands into ./command...
cp: cannot stat `compile/svscan': No such file or directory
I also tried to do it on debian squeeze, the result is similar: http://pastebin.com/VNAWLU57
I know I can install daemontools from the ubuntu and also debian repository, but how do I compile it myself?
|
In the meantime I was able to compile and install on Debian Squeeze, Ubuntu and also CentOS 6.0 and it works. I guess the patch from http://blog.tonycode.com/tech-stuff/setting-up-djbdns-on-linux was fixing it.
1) Install a toolchain
For Debian and Ubuntu: apt-get install build-essential
For CentOS yum groupinstall 'Development Tools'
2) Follow the instructions on http://cr.yp.to/daemontools/install.html but do not yet execute the package/install command
3) Apply the patch from http://blog.tonycode.com/tech-stuff/setting-up-djbdns-on-linux to src/conf-cc
4) Now execute the package/install command.
| How to install daemontools on ubuntu or debian from source |
1,460,488,047,000 |
I am a Linux novice, and I am attempting to compile scientific software called DL_POLY_Classic. I downloaded the archive dl_class_1.6.tar.gz and extracted it using the command tar xvzf dl_class_1.6.tar.gz. It gives me a series of folders containing various files for the program operation, which are described in the program manual. One of these directories, called source, contains many .f files, among other things. The manual says that to compile the code, one needs to take one of the sample/template Makefiles and modify the paths therein to direct the make program to the appropriate directory containing the compiler. (This DL_POLY_Classic program is written in, and must be compiled with, Fortran [Fortran 90, I believe].)
Based on the recommendation of a colleague, I am attempting to compile this code using an Intel Fortran compiler (ifort) that we have on our cluster. I have modified one of the targets in the template Makefile; this target now has the following text in the Makefile:
#========== mpich-c2
mpich-c2: dpp
cp /opt/mpich/ch-p4/include/mpif.h mpif.h
$(MAKE) LD="/opt/mpich_intel/ch-p4/bin/mpif90 -O3 -o" \
LDFLAGS="-L/opt/mpich_intel/ch-p4/lib64 -lmpich" \
TIMER="" \
FC=/opt/intel_fc_80/bin/ifort \
FFLAGS="-c " \
MPICH_F90="/opt/mpich_intel/ch-p4/bin/mpif90" \
CPFLAGS="-D$(STRESS) -DMPI -P -D'pointer=integer'\
-I/opt/mpich_intel/ch-p4/include" \
EX=$(EX) BINROOT=$(BINROOT) $(TYPE)
I place the Makefile in the source directory, as the manual directs. Then while in the source directory, I type the command:
make mpich-c2
However, I get the following error message:
cp /opt/mpich/ch-p4/include/mpif.h mpif.h
make LD="/opt/mpich_intel/ch-p4/bin/mpif90 -O3 -o" \
LDFLAGS="-L/opt/mpich_intel/ch-p4/lib64 -lmpich" \
TIMER="" \
FC=/opt/intel_fc_80/bin/ifort \
FFLAGS="-c " \
MPICH_F90="/opt/mpich_intel/ch-p4/bin/mpif90" \
CPFLAGS="-DSTRESS -DMPI -P -D'pointer=integer'\
-I/opt/mpich_intel/ch-p4/include" \
EX=D.X BINROOT=./ 3pt
make[1]: Entering directory `/export/home/myusername/dlc1_16/dl_class_1.6/source'
make[1]: *** No rule to make target `dl_params.inc', needed by `angfrc.o'. Stop.
make[1]: Leaving directory `/export/home/myusername/dlc1_16/dl_class_1.6/source'
make: *** [mpich-c2] Error 2
So, the make program is telling me that there is No rule to make target dl_params.inc, needed by angfrc.o.
My question is, what is the software telling me? Is it likely saying that it cannot find a file dl_params.inc, or is it likely that it is saying that it cannot find a file angfrc.o? Or, is it saying that I should have some sort of command in my Makefile to create a new file called dl_params.inc?
When I search in my Makefile (which, as I mentioned, is essentially exactly the template provided by the authors, but just with the directories in the mpich-c2 target entries modified for the particular Fortran compiler to which I have access), I see only these mentions of dl_params.inc:
# Declare dependency on parameters file
$(OBJ_ALL): dl_params.inc
$(OBJ_RRR): dl_params.inc
$(OBJ_4PT): dl_params.inc
$(OBJ_RSQ): dl_params.inc
$(OBJ_NEU): dl_params.inc
$(OBJ_RIG): dl_params.inc
$(OBJ_EXT): dl_params.inc
$(OBJ_SPME): dl_params.inc
$(OBJ_HKE): dl_params.inc
I am not sure what this means.
On the other hand, when I search in my Makefile for angfrc.o, I see only one mention, in the first line of the following (after OBJ_ALL):
# Define object files
#=====================================================================
OBJ_ALL = angfrc.o bndfrc.o cfgscan.o corshl.o coul0.o coul4.o \
coul2.o coul3.o conscan.o dblstr.o dcell.o diffsn0.o \
diffsn1.o dlpoly.o duni.o error.o ewald1.o ewald3.o \
exclude.o exclude_atom.o fldscan.o exclude_link.o forces.o\
exitcomms.o extnfld.o fbpfrc.o fcap.o freeze.o gauss.o \
gdsum.o getrec.o gimax.o gisum.o gstate.o images.o initcomms.o \
intlist.o intstr.o invert.o invfrc.o jacobi.o lowcase.o lrcmetal.o \
lrcorrect.o machine.o merge.o merge1.o merge4.o \
npt_b1.o npt_b3.o parset.o npt_h1.o npt_h3.o nve_1.o \
nvt_b1.o nvt_e1.o nvt_h1.o parlst_nsq.o parlink.o parlst.o passcon.o \
passpmf.o pmf_1.o pmf_shake.o primlst.o quench.o rdf0.o rdf1.o \
rdshake_1.o result.o revive.o scdens.o shellsort.o shlfrc.o \
shlmerge.o shlqnch.o shmove.o simdef.o splice.o static.o strip.o \
strucopt.o sysdef.o sysgen.o systemp.o sysbook.o sysinit.o \
tethfrc.o thbfrc.o timchk.o traject.o vertest.o vscaleg.o \
warning.o xscale.o zden0.o zden1.o \
I am not sure what this means, either.
Interestingly, when I use the find command to search by name--for example, find -name angfrc.o, find -name *.o, and find -name dl_params.inc--I can find nothing anywhere in the program directories with these names. When I try find -name *.inc, I do find that a file comms.inc exists in the source directory. This particular file says in its header that it is the "dl_poly include file for MPI, PVM and SHMEM." It is a relatively short file (only 42 lines long), and it appears that it contains both "parameters for message tags" and "MPI tagsizes"--although, I do not really have a clue as to what I should be looking for in there.
If you have time, can you please give me advice as to what I should try next? I am not so sure what the error message No rule to make target dl_params.inc, needed by angfrc.o. is telling me; do you have any ideas? Thank you very much for your time.
|
The error message No rule to make target dl_params.inc, needed by angfrc.o. means that the Makefile specifies the file dl_params.inc as a dependency for building the file angfrc.o. So you have to somehow find or create the dl_params.inc file.
Searching the documentation indicates that this file should contain FORTRAN parameters required by DL_POLY. The user manual in section 7.1.1 explains that there should be a utility sub-folder containing the program PARSET which can be used to generate such a parameter file (called new_params.inc by default).
You probably just have to rename the new_params.inc file to dl_params.inc and place it in the main source-folder.
If the error persists then the Makefile obviously expects the dl_params.inc file somewhere else and you would have to have a closer look at the Makefile.
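For what it's worth, the error is easy to reproduce in miniature (GNU make assumed; the target and prerequisite names mirror the ones from the question):

```shell
# A target depends on a file that neither exists nor has a rule to build
# it: make stops with "No rule to make target". Supplying the file lets
# the build proceed.
dir=$(mktemp -d)
printf 'angfrc.o: dl_params.inc\n\t@echo building angfrc.o\n' > "$dir/Makefile"
make -C "$dir" angfrc.o 2>&1 || true   # "No rule to make target 'dl_params.inc'"
touch "$dir/dl_params.inc"             # provide the missing prerequisite
make -C "$dir" angfrc.o                # now runs the recipe: building angfrc.o
```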
| Why does 'make' complain about a missing rule when I try to build a program from source? |
1,460,488,047,000 |
When I am building a Debian package, many related packages that are bundled together are often built as well, along with the foo-dbgsym-* and foo-doc packages.
For example, even relatively simple package such as make, will build additional packages:
make-dbgsym_4.2.1-1.2_amd64.deb
make-guile-dbgsym_4.2.1-1.2_amd64.deb
make-guile_4.2.1-1.2_amd64.deb
make_4.2.1-1.2_amd64.deb
Can I tell the build system to only build make and not make-guile?
Here is the process that I am using for building the package:
apt-get source make
cd make*
dpkg-buildpackage --build=binary --no-sign
Is there a general process how I can specify which packages I want to build?
Make is a simple example, but larger packages build many package variants which I am not interested in; these need additional libraries installed, and the build process takes longer.
|
dbgsym packages can be disabled using the noautodbgsym build option:
DEB_BUILD_OPTIONS=noautodbgsym dpkg-buildpackage -us -uc
It’s also possible to build only architecture-dependent or architecture-independent packages, by changing the --build option on dpkg-buildpackage.
Other than that, there’s no generalised way of picking and choosing packages to build and dependencies to install. In particular, build dependencies aren’t tied to the binary packages they are relevant for.
Some packages support build profiles; you can determine that by looking for Build-Profiles and/or angle-bracketed dependencies in debian/control. On such packages, dpkg-buildpackage’s -P option selects the appropriate profile(s), sometimes in combination with a build option. For example, on packages with a nocheck profile,
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -Pnocheck
will skip the testing-related build-dependencies (if any) and skip running the tests.
In fact, the latest version of the make package purports to provide a noguile build profile, so it should be possible to skip Guile with
dpkg-buildpackage -Pnoguile -us -uc
except that the profile definition is incomplete.
It is always possible to edit debian/control to remove irrelevant packages, and debian/rules to remove irrelevant build steps.
| building Debian package without associated packages that are bundled together |
1,460,488,047,000 |
I downloaded the kernel source from the official Linux kernel repository (http://www.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.bz2) and recompiled it with some options needed to support Mobile IPv6. When I needed a module to encrypt some data, I didn't find it among the modules already built. The modules that I need are "echainiv" and "authenc".
|
The first step is to determine what configuration options you need to set in order for the module to build. I use
make menuconfig
for that; / followed by the configuration option you want will tell you where to find it and what its dependencies are. For ECHAINIV, you need to enable CRYPTO and then enable ECHAINIV (as a module since that’s what you’re after — in make menuconfig, the entry must show <M>, not <*>).
To build the module, look for the directory containing the corresponding source code:
find . -name echainiv\*
The code lives in crypto, so
make crypto/echainiv.ko
(from the top-level directory) will build the module for you.
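The lookup-then-build step generalises to other modules; a small sketch of deriving the make target from the find output, run against a throwaway fake tree (in real use the tree would be your kernel source directory):

```shell
# Simulate locating a module's source file inside a kernel tree,
# then derive the make target for the .ko from it.
tree=$(mktemp -d)
mkdir -p "$tree/crypto"
touch "$tree/crypto/echainiv.c"
src=$(cd "$tree" && find . -name 'echainiv*' | head -n 1)
# strip the leading ./ and swap .c for .ko to get the make target
target=${src#./}; target=${target%.c}.ko
echo "make $target"
rm -rf "$tree"
# prints: make crypto/echainiv.ko
```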
To install the module, assuming you’re running the target kernel, run
sudo mkdir -p /lib/modules/$(uname -r)/kernel/crypto
sudo cp -i crypto/echainiv.ko /lib/modules/$(uname -r)/kernel/crypto
| How to build a specific kernel module? |
1,538,791,119,000 |
Redshift
packages which are available in most distributions are dated 2016-01-02, which is > 2.5 years ago.
Like on my system - Linux Mint 19 Cinnamon 64-bit - there is only 1.11 version available:
$ apt-cache policy redshift
redshift:
Installed: (none)
Candidate: 1.11-1ubuntu1
Version table:
1.11-1ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
Note, that Linux Mint 19 is based on the latest Ubuntu 18.04.
That might be caused by too few changes having been made in version 1.12.
Either way, I personally find version 1.12 a rather crucial step forward.
Question
Anyway, my question is, how to install the newer version without adding any PPA?
Let me re-phrase. How do I install Redshift 1.12 on Linux Mint 19 Cinnamon from source?
Please do include basic settings and set-up as I am not yet familiar with its settings.
|
Since you’re on a Debian derivative, you can rebuild the packaged sources of version 1.12:
cd ${TMPDIR:-/tmp}
sudo apt install devscripts debian-keyring
dget -x http://deb.debian.org/debian/pool/main/r/redshift/redshift_1.12-2.dsc
cd redshift-1.12
sudo apt build-dep redshift
dpkg-buildpackage -us -uc
sudo dpkg -i ../redshift{,-gtk}_1.12-2_*.deb
There are a number of advantages over installing from source directly:
you don’t need to purge the existing packages;
the updated software is still managed by the package management system;
future upgrades to the package will be applied without needing to rebuild again (or uninstall the manually-installed software and installing the package).
If the configuration needs to be re-visited, see Vlastimil’s answer for details.
| How do I install Redshift 1.12 on Linux Mint 19 Cinnamon from source? |
1,538,791,119,000 |
I'm interested in who builds the Debian main packages for distribution. I'm aware that packages need to be reproducably buildable and I'm not asking about any specific individuals but the process in general (e.g. how "trust" would be involved here and how decentralized it is).
At https://lwn.net/Articles/676799/ it says:
More generally, Mozilla trusts the Debian packagers to use their best
judgment to achieve the same quality as the official Firefox binaries.
At https://wiki.debian.org/Packaging it says:
Debian packages are maintained by a community of Debian Developers and volunteers.
I'm new to Debian so please edit this question if that's needed.
|
It's a little unclear what you really want to know, since you seem to have found good resources, but I'll try to give a short (and not accurate in every detail) description of the process and hope I get the right parts included. (I haven't worked with this in Debian's own repositories, but in different iterations of a setup at work that grew ever bigger and more automated, becoming more and more like, as I understand it, Debian's system.)
Every (maintained) package in Debian has a developer (or a team of developers), who locally (i.e. on their own machines) takes the upstream source code and writes some files that detail how a Debian package should be made. They then collect that into a source package, which they sign with GPG and upload to one of Debian's systems.
If that system can verify that the source package came from a developer (by virtue of having a valid signature), it then sends the source package to a build host for each relevant architecture. The resulting packages, along with any binary packages uploaded directly by the developer, are then uploaded to the relevant repositories and distributed to mirrors, from where you download and install them.
The build hosts also sign the built packages (with some common key; they obviously cannot sign things with individual developers' private keys), and the repository verifies those signatures.
| Who builds the Debian packages? |
1,538,791,119,000 |
I'm trying to write a bash script to automate the install of nginx with pagespeed module.
Part of this requires me to add this: --add-module=$(MODULESDIR)/ngx_pagespeed \ to a section of the /usr/src/nginx/nginx-X.X.6/debian/rules file.
Each section is similar to:
light_configure_flags := \
$(common_configure_flags) \
--with-http_gzip_static_module \
--without-http_browser_module \
--without-http_geo_module \
--without-http_limit_req_module \
--without-http_limit_conn_module \
--without-http_memcached_module \
--without-http_referer_module \
--without-http_scgi_module \
--without-http_split_clients_module \
--without-http_ssi_module \
--without-http_userid_module \
--without-http_uwsgi_module \
--add-module=$(MODULESDIR)/nginx-echo
Full file is here:
https://jsfiddle.net/72hL5pya/1/ (sorry, didn't know where else to put it)
And each section is *_configure_flags
there are a number of these sections for each "version" of nginx to compile (light, full, extras, etc...)
So that last --add-module= line is different in each section.
How can I append the --add-module=$(MODULESDIR)/ngx_pagespeed \ to each?
NOTE:
Looks like as of whatever date, the structure of this rules file has changed.
#!/usr/bin/make -f
#export DH_VERBOSE=1
CFLAGS ?= $(shell dpkg-buildflags --get CFLAGS)
LDFLAGS ?= $(shell dpkg-buildflags --get LDFLAGS)
WITH_HTTP2 := $(shell printf \
"Source: nginx\nBuild-Depends: libssl-dev (>= 1.0.1)\n" | \
dpkg-checkbuilddeps - >/dev/null 2>&1 && \
echo "--with-http_v2_module")
PKGS = nginx nginx-dbg \
nginx-module-xslt nginx-module-geoip nginx-module-image-filter \
nginx-module-perl nginx-module-njs
COMMON_CONFIGURE_ARGS := \
--prefix=/etc/nginx \
--sbin-path=/usr/sbin/nginx \
--modules-path=/usr/lib/nginx/modules \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--http-client-body-temp-path=/var/cache/nginx/client_temp \
--http-proxy-temp-path=/var/cache/nginx/proxy_temp \
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
--http-scgi-temp-path=/var/cache/nginx/scgi_temp \
--user=nginx \
--group=nginx \
--with-http_ssl_module \
--with-http_realip_module \
--with-http_addition_module \
--with-http_sub_module \
--with-http_dav_module \
--with-http_flv_module \
--with-http_mp4_module \
--with-http_gunzip_module \
--with-http_gzip_static_module \
--with-http_random_index_module \
--with-http_secure_link_module \
--with-http_stub_status_module \
--with-http_auth_request_module \
--with-http_xslt_module=dynamic \
--with-http_image_filter_module=dynamic \
--with-http_geoip_module=dynamic \
--with-http_perl_module=dynamic \
--add-dynamic-module=debian/extra/njs-1c50334fbea6/nginx \
--with-threads \
--with-stream \
--with-stream_ssl_module \
--with-http_slice_module \
--with-mail \
--with-mail_ssl_module \
--with-file-aio \
--with-ipv6 \
$(WITH_HTTP2) \
--with-cc-opt="$(CFLAGS)" \
--with-ld-opt="$(LDFLAGS)"
%:
dh $@
override_dh_auto_configure: configure_debug
override_dh_strip:
dh_strip --dbg-package=nginx-dbg
override_dh_auto_build:
dh_auto_build
mv objs/nginx objs/nginx-debug
mv objs/ngx_http_xslt_filter_module.so objs/ngx_http_xslt_filter_module-debug.so
mv objs/ngx_http_image_filter_module.so objs/ngx_http_image_filter_module-debug.so
mv objs/ngx_http_geoip_module.so objs/ngx_http_geoip_module-debug.so
mv objs/ngx_http_perl_module.so objs/ngx_http_perl_module-debug.so
mv objs/src/http/modules/perl/blib/arch/auto/nginx/nginx.so objs/src/http/modules/perl/blib/arch/auto/nginx/nginx-debug.so
mv objs/ngx_http_js_module.so objs/ngx_http_js_module-debug.so
CFLAGS="" ./configure $(COMMON_CONFIGURE_ARGS)
dh_auto_build
configure_debug:
CFLAGS="" ./configure $(COMMON_CONFIGURE_ARGS) \
--with-debug
override_dh_auto_install:
sed -e 's/%%PROVIDES%%/nginx/g' \
-e 's/%%DEFAULTSTART%%/2 3 4 5/g' \
-e 's/%%DEFAULTSTOP%%/0 1 6/g' \
< debian/init.d.in > debian/init.d
dh_auto_install
mkdir -p debian/nginx/etc/init.d debian/nginx/etc/default \
debian/nginx/usr/lib/nginx/modules
sed -e 's/%%PROVIDES%%/nginx-debug/g' \
-e 's/%%DEFAULTSTART%%//g' \
-e 's/%%DEFAULTSTOP%%/0 1 2 3 4 5 6/g' \
< debian/init.d.in > debian/debug.init.d
/usr/bin/install -m 755 debian/debug.init.d \
debian/nginx/etc/init.d/nginx-debug
/usr/bin/install -m 644 debian/nginx-debug.default \
debian/nginx/etc/default/nginx-debug
/usr/bin/install -m 644 debian/nginx.conf debian/nginx/etc/nginx/
/usr/bin/install -m 644 conf/win-utf debian/nginx/etc/nginx/
/usr/bin/install -m 644 conf/koi-utf debian/nginx/etc/nginx/
/usr/bin/install -m 644 conf/koi-win debian/nginx/etc/nginx/
/usr/bin/install -m 644 conf/mime.types debian/nginx/etc/nginx/
/usr/bin/install -m 644 conf/scgi_params debian/nginx/etc/nginx/
/usr/bin/install -m 644 conf/fastcgi_params debian/nginx/etc/nginx/
/usr/bin/install -m 644 conf/uwsgi_params debian/nginx/etc/nginx/
/usr/bin/install -m 644 html/index.html \
debian/nginx/usr/share/nginx/html/
/usr/bin/install -m 644 html/50x.html \
debian/nginx/usr/share/nginx/html/
/usr/bin/install -m 644 debian/nginx.vh.default.conf \
debian/nginx/etc/nginx/conf.d/default.conf
/usr/bin/install -m 755 objs/nginx debian/nginx/usr/sbin/
/usr/bin/install -m 755 objs/nginx-debug debian/nginx/usr/sbin/
cd debian/nginx/etc/nginx && /bin/ln -s \
../../usr/lib/nginx/modules modules && cd -
override_dh_gencontrol:
for p in $(PKGS); do \
if [ -e debian/$$p.version ]; then \
dpkg-gencontrol -p$$p -ldebian/changelog -Tdebian/$$p.substvars -Pdebian/$$p -v`cat debian/$$p.version`~`lsb_release -cs`; \
else \
dpkg-gencontrol -p$$p -ldebian/changelog -Tdebian/$$p.substvars -Pdebian/$$p ; \
fi ; \
done
override_dh_clean:
dh_clean
rm -f debian/*init.d
|
If it doesn't matter where in the list of options the new one goes, you could
sed '/_configure_flags *:=/ a\
--add-module=$(MODULESDIR)/ngx_pagespeed \\
' file
which produces:
light_configure_flags := \
--add-module=$(MODULESDIR)/ngx_pagespeed \
$(common_configure_flags) \
--with-http_gzip_static_module \
...
To add after the "common_configure_flags" line, you could:
sed -r '
# when the line ends with a backslash
# add the new line with a backslash
/\$\(common_configure_flags\)[[:blank:]]*\\$/ a\
--add-module=$(MODULESDIR)/ngx_pagespeed \\
# when the line does not end with a backslash,
# add a backslash, then
# add the new line without a backslash
/\$\(common_configure_flags\)[[:blank:]]*$/ {
s/$/ \\/
a\
--add-module=$(MODULESDIR)/ngx_pagespeed
}
' file
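Before touching the real rules file, the first sed command can be sanity-checked against a tiny sample (the snippet below is a cut-down stand-in for the file):

```shell
# Dry run of the append-after-pattern sed against a sample rules file;
# the new --add-module line should appear right after the := line.
dir=$(mktemp -d)
cat > "$dir/rules" <<'EOF'
light_configure_flags := \
  $(common_configure_flags) \
  --with-http_gzip_static_module
EOF
sed '/_configure_flags *:=/ a\
--add-module=$(MODULESDIR)/ngx_pagespeed \\
' "$dir/rules"
rm -rf "$dir"
```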
| Add Line of Text to Section of Rules File |
1,538,791,119,000 |
I really like C* Music Player (CMUS) and I just installed Fedora 22 because I had issues with Fedora 21. The thing is that I cannot find any package that installs this music player.
I tried with dnf and it didn't work; here's the output:
Last metadata expiration check performed 1:10:46 ago on Sun Jul 26 16:14:36 2015.
No package cmus available.
Error: no package matched: cmus
I find this answer on FedoraProject.Org: https://ask.fedoraproject.org/en/question/68940/where-can-i-find-cmus-program-for-fedora/
It says that if you have rpmfusion installed the use of dnf will be enough. I installed rpmfusion and tried again without any success. Here's how I installed rpmfusion free and non-free:
wget http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-22.noarch.rpm
dnf install rpmfusion-free-release-22.noarch.rpm
wget http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-22.noarch.rpm
dnf install rpmfusion-nonfree-release-22.noarch.rpm
After that, I decided to compile the source code myself and tried that, I installed gcc and tried:
./configure and here's the output of that:
checking for program gcc... /usr/bin/gcc
checking for program gcc... /usr/bin/gcc
checking for CFLAGS -std=gnu99 -pipe -Wall -Wshadow -Wcast-align -Wpointer-arith -Wwrite-strings -Wundef -Wmissing-prototypes -Wredundant-decls -Wextra -Wno-sign-compare -Wformat-security... yes
checking for CFLAGS -Wold-style-definition... yes
checking for CFLAGS -Wno-pointer-sign... yes
checking for CFLAGS -Werror-implicit-function-declaration... yes
checking for CFLAGS -Wno-unused-parameter... yes
checking if CC can generate dependency information... yes
checking byte order... little-endian
checking for DL_LIBS (-ldl -Wl,--export-dynamic)... yes
checking for PTHREAD_LIBS (-lpthread)... yes
checking for realtime scheduling... yes
checking for program pkg-config... /usr/bin/pkg-config
checking for NCURSES_LIBS (pkg-config)... no
checking for NCURSES_LIBS (-lncursesw)... no
checking for NCURSES_LIBS (pkg-config)... no
checking for NCURSES_LIBS (-lncurses)... no
checking for NCURSES_LIBS (pkg-config)... no
checking for NCURSES_LIBS (-lcurses)... no
configure failed.
And I have no idea what I'm missing to install C* Music Player (CMUS), so I would like your help.
Maybe I didn't install the rpmfusion free/non-free repositories the way I should have, or I'm not installing everything I need before compiling the source code (I have no idea what NCURSES_LIBS is). I'll go with any solution you can provide for this. Thank you in advance.
PS. I actually installed ncurses ncurses-devel. And proceed with ./configure. Here's the output:
checking for program gcc... /usr/bin/gcc
checking for program gcc... /usr/bin/gcc
checking for CFLAGS -std=gnu99 -pipe -Wall -Wshadow -Wcast-align -Wpointer-arith -Wwrite-strings -Wundef -Wmissing-prototypes -Wredundant-decls -Wextra -Wno-sign-compare -Wformat-security... yes
checking for CFLAGS -Wold-style-definition... yes
checking for CFLAGS -Wno-pointer-sign... yes
checking for CFLAGS -Werror-implicit-function-declaration... yes
checking for CFLAGS -Wno-unused-parameter... yes
checking if CC can generate dependency information... yes
checking byte order... little-endian
checking for DL_LIBS (-ldl -Wl,--export-dynamic)... yes
checking for PTHREAD_LIBS (-lpthread)... yes
checking for realtime scheduling... yes
checking for program pkg-config... /usr/bin/pkg-config
checking for NCURSES_LIBS (pkg-config)... -lncursesw
checking for NCURSES_CFLAGS (pkg-config)...
checking for working ncurses setup... yes
checking for function resizeterm... yes
checking for function use_default_colors... yes
checking for ICONV_LIBS (-liconv)... no
assuming libc contains iconv
checking for working iconv... yes
checking for header <byteswap.h>... yes
checking for function strdup... yes
checking for function strndup... yes
checking for CDDB_LIBS (pkg-config)... no
checking for CDDB_LIBS (-lcddb)... no
checking for CDIO_LIBS (pkg-config)... no
checking for CDIO_LIBS (-lcdio_cdio -lcdio -lm)... no
checking for FLAC_LIBS (pkg-config)... no
checking for FLAC_LIBS (-lFLAC -lm)... no
checking for MAD_LIBS (pkg-config)... no
checking for MAD_LIBS (-lmad -lm)... no
checking for MODPLUG_LIBS (pkg-config)... no
checking for MODPLUG_LIBS (-lmodplug -lstdc++ -lm)... no
checking for header <mpc/mpcdec.h>... no
checking for header <mpcdec/mpcdec.h>... no
checking for VORBIS_LIBS (pkg-config)... no
checking for VORBIS_LIBS (-lvorbisfile -lvorbis -lm -logg)... no
checking for OPUS_LIBS (pkg-config)... no
*** Package opusfile was not found in the pkg-config search path.
*** Perhaps you should add the directory containing `opusfile.pc'
*** to the PKG_CONFIG_PATH environment variable
*** No package 'opusfile' found
checking for WAVPACK_LIBS (pkg-config)... no
checking for WAVPACK_LIBS (-lwavpack)... no
checking for header <mp4v2/mp4v2.h>... no
checking for header <mp4.h>... no
checking for header <neaacdec.h>... no
checking for FFMPEG_LIBS (pkg-config)... no
*** Package libavformat was not found in the pkg-config search path.
*** Perhaps you should add the directory containing `libavformat.pc'
*** to the PKG_CONFIG_PATH environment variable
*** No package 'libavformat' found
checking for CUE_LIBS (pkg-config)... no
*** Package libcue was not found in the pkg-config search path.
*** Perhaps you should add the directory containing `libcue.pc'
*** to the PKG_CONFIG_PATH environment variable
*** No package 'libcue' found
checking for header <ayemu.h>... no
checking for PULSE_LIBS (pkg-config)... no
*** Package libpulse was not found in the pkg-config search path.
*** Perhaps you should add the directory containing `libpulse.pc'
*** to the PKG_CONFIG_PATH environment variable
*** No package 'libpulse' found
checking for ALSA_LIBS (pkg-config)... no
*** Package alsa was not found in the pkg-config search path.
*** Perhaps you should add the directory containing `alsa.pc'
*** to the PKG_CONFIG_PATH environment variable
*** No package 'alsa' found
checking for JACK_LIBS (pkg-config)... no
*** Package jack was not found in the pkg-config search path.
*** Perhaps you should add the directory containing `jack.pc'
*** to the PKG_CONFIG_PATH environment variable
*** No package 'jack' found
checking for SAMPLERATE_LIBS (pkg-config)... no
*** Package samplerate was not found in the pkg-config search path.
*** Perhaps you should add the directory containing `samplerate.pc'
*** to the PKG_CONFIG_PATH environment variable
*** No package 'samplerate' found
checking for AO_LIBS (pkg-config)... no
checking for AO_LIBS (-lao)... no
checking for program artsc-config... no
checking for header <sys/soundcard.h>... yes
checking for header <sys/audioio.h>... no
checking for ROAR_LIBS (pkg-config)... no
*** Package libroar was not found in the pkg-config search path.
*** Perhaps you should add the directory containing `libroar.pc'
*** to the PKG_CONFIG_PATH environment variable
*** No package 'libroar' found
creating config/cdio.h
creating config/datadir.h
creating config/libdir.h
creating config/debug.h
creating config/tremor.h
creating config/modplug.h
creating config/mpc.h
creating config/mp4.h
creating config/curses.h
creating config/ffmpeg.h
creating config/utils.h
creating config/iconv.h
creating config/samplerate.h
creating config/xmalloc.h
creating config/cue.h
creating config.mk
And after that I used make and make install. It actually installed C* Music Player, but it gives me an error: Error: selecting output plugin '': no such plugin. Sigh. Any other thoughts about this?
|
You are missing a library called ncurses which is used by your application. Just install it with sudo yum install ncurses ncurses-devel
As you are building it from source, you'll need to satisfy the dependencies yourself. That's what RPM packages are meant for: listing dependencies, then resolving and installing them so the requested package will work.
Edit: According to your output, you're now missing some libraries to send sound to your soundcard. Try :
sudo yum install ffmpeg-libs ffmpeg-devel libcue libcue-devel pulseaudio-libs pulseaudio-libs-devel libsamplerate-devel libsamplerate
It may install multiple dependencies, but it should match your configuration. Once installed, re-run ./configure.
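As a sanity check before re-running configure, you can probe for a library's development files the same way configure does, via pkg-config; ncursesw (the wide-character ncurses variant configure looks for first) serves as the example here, and the snippet assumes nothing beyond pkg-config possibly being installed:

```shell
# Probe for the ncursesw development files the way configure does.
if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists ncursesw; then
    echo "ncursesw found"
else
    echo "ncursesw missing"
fi
```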
| Building and installing C* Music Player (CMUS) |
1,538,791,119,000 |
I don't have experience building kernel modules. And worse, I'm trying to do it on ChrUbuntu, so it seems that I cannot follow the existing Ubuntu guides. For example, this command fails:
# apt-get install linux-headers-$(uname -r)
because the ChrUbuntu kernel is version 3.4.0 and there is no Ubuntu repo for that version (afaik).
uname -a
Linux ChrUbuntu 3.4.0 #1 SMP Sun Aug 26 19:17:55 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
Here are some references I have looked at:
Kernel/Compile - Community Ubuntu Documentation
How to: Compile Linux kernel modules
Debian / Ubuntu Linux Install Kernel Headers Package
Hello World Loadable Kernel Module | Mark Loiseau
64 bit - How do I compile a kernel module? - Ask Ubuntu
Compiling Kernel Modules
Setting Up Ubuntu for Building Kernel Modules - Drew Stephens
|
This is all from Redditer michaela_elise. (Thank you!)
There is a script that will get and build the ChromeOS 3.4 kernel on your Ubuntu install. This is great because now we can compile kernel mods.
The apt-get install linux-headers-$(uname -r) does not work because 3.4.0 seems to be a Google specific build and you cannot just get those headers.
I have added the script here. Just run it as sudo and let it go. When it is done, you will have /usr/src/kernel (the source and compiled kernel) and /usr/src/linux-headers-3.4.0; it also installs this version of the kernel.
#!/bin/bash
set -x
#
# Grab verified boot utilities from ChromeOS.
#
mkdir -p /usr/share/vboot
mount -o ro /dev/sda3 /mnt
cp /mnt/usr/bin/vbutil_* /usr/bin
cp /mnt/usr/bin/dump_kernel_config /usr/bin
rsync -avz /mnt/usr/share/vboot/ /usr/share/vboot/
umount /mnt
#
# On the Acer C7, ChromeOS is 32-bit, so the verified boot binaries need a
# few 32-bit shared libraries to run under ChrUbuntu, which is 64-bit.
#
apt-get install libc6:i386 libssl1.0.0:i386
#
# Fetch ChromeOS kernel sources from the Git repo.
#
apt-get install git-core
cd /usr/src
git clone https://git.chromium.org/git/chromiumos/third_party/kernel.git
cd kernel
git checkout origin/chromeos-3.4
#
# Configure the kernel
#
# First we patch ``base.config`` to set ``CONFIG_SECURITY_CHROMIUMOS``
# to ``n`` ...
cp ./chromeos/config/base.config ./chromeos/config/base.config.orig
sed -e \
's/CONFIG_SECURITY_CHROMIUMOS=y/CONFIG_SECURITY_CHROMIUMOS=n/' \
./chromeos/config/base.config.orig > ./chromeos/config/base.config
./chromeos/scripts/prepareconfig chromeos-intel-pineview
#
# ... and then we proceed as per Olaf's instructions
#
yes "" | make oldconfig
#
# Build the Ubuntu kernel packages
#
apt-get install kernel-package
make-kpkg kernel_image kernel_headers
#
# Backup current kernel and kernel modules
#
tstamp=$(date +%Y-%m-%d-%H%M)
dd if=/dev/sda6 of=/kernel-backup-$tstamp
cp -Rp /lib/modules/3.4.0 /lib/modules/3.4.0-backup-$tstamp
#
# Install kernel image and modules from the Ubuntu kernel packages we
# just created.
#
dpkg -i /usr/src/linux-*.deb
#
# Extract old kernel config
#
vbutil_kernel --verify /dev/sda6 --verbose | tail -1 > /config-$tstamp-orig.txt
#
# Add ``disablevmx=off`` to the command line, so that VMX is enabled (for VirtualBox & Co)
#
sed -e 's/$/ disablevmx=off/' \
/config-$tstamp-orig.txt > /config-$tstamp.txt
#
# Wrap the new kernel with the verified block and with the new config.
#
vbutil_kernel --pack /newkernel \
--keyblock /usr/share/vboot/devkeys/kernel.keyblock \
--version 1 \
--signprivate /usr/share/vboot/devkeys/kernel_data_key.vbprivk \
--config=/config-$tstamp.txt \
--vmlinuz /boot/vmlinuz-3.4.0 \
--arch x86_64
#
# Make sure the new kernel verifies OK.
#
vbutil_kernel --verify /newkernel
#
# Copy the new kernel to the KERN-C partition.
#
dd if=/newkernel of=/dev/sda6
Let me know how it works for you. I have compiled and insmod'd kernel modules with this.
Here is how you #include the headers
#include </usr/src/linux-headers-3.4.0/include/linux/module.h>
#include </usr/src/linux-headers-3.4.0/include/linux/kernel.h>
#include </usr/src/linux-headers-3.4.0/include/linux/init.h>
#include </usr/src/linux-headers-3.4.0/include/linux/syscalls.h>
// or whatever you need specifically
And I am guessing you already know this, but in case someone does not: this is the basic Makefile for kernel mods. Once you use the script I linked, you can just run make with this Makefile and all is well. Replace kmod.o with whatever your source.c is called, but keep the .o extension.
obj-m += kmod.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
p.s. I had to modify sysinfo.h because the type __kernel_ulong_t was not defined. I changed it to uint64_t, and this seems to work just fine; my mods have had no problems thus far. If you have to do this, make sure you edit the sysinfo.h in the 3.4.0 headers.
p.p.s. This fixes the issues with VirtualBox and VMware Player! They just install and work!
| I need a step by step guide to build kernel modules in ChrUbuntu |
1,538,791,119,000 |
The -I option sets the header file search path for gcc/g++, and CPLUS_INCLUDE_PATH/CPATH append to that search path list.
Then what about libs? It seems that LD_LIBRARY_PATH is just a path list for run-time library searching. The -L option is necessary to specify any lib path other than /usr/lib and /usr/local/lib.
Is there an environment variable similar to CPATH/CPLUS_INCLUDE_PATH, to do the compile-time job?
|
This Q appears to have been answered in the comments. Per njsg's comment,
LIBRARY_PATH is what you're looking for
"The value of LIBRARY_PATH is a colon-separated list of directories, [...] Linking using GCC also uses these directories when searching for ordinary libraries for the -l option (but directories specified with -L come first)."
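A self-contained way to see LIBRARY_PATH in action (all file names and paths here are temporary inventions; requires gcc and ar): a static library is placed outside the default search paths, and the link succeeds with no -L flag:

```shell
# Build a static library in a temp dir, then link against it via -lanswer
# with only LIBRARY_PATH pointing at the directory.
dir=$(mktemp -d)
printf 'int answer(void) { return 42; }\n' > "$dir/answer.c"
gcc -c "$dir/answer.c" -o "$dir/answer.o"
ar rcs "$dir/libanswer.a" "$dir/answer.o"
printf 'int answer(void);\nint main(void) { return answer() == 42 ? 0 : 1; }\n' > "$dir/main.c"
LIBRARY_PATH="$dir" gcc "$dir/main.c" -lanswer -o "$dir/main"
"$dir/main" && echo "linked and ran OK"
rm -rf "$dir"
# prints: linked and ran OK
```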
| How to set library searching path using environment variable in compile-time |
1,538,791,119,000 |
I've just completed a simple source code modification & rebuild on a Raspberry Pi OS - bullseye machine. Because this is new to me, I'll list the steps I followed in an effort to avoid ambiguity:
$ dhcpcd --version
dhcpcd 8.1.2 # "before" version
$ sudo apt install devscripts # build tools for using `debuild`
$ apt-get source dhcpcd5 # creates source tree ~/dhcpcd5-8.1.2; Debian git repo is far off!
$ cd dhcpcd5-8.1.2 # cd to source dir
$ nano src/dhcp.c # make required changes to the source (one line)
~/dhcpcd5-8.1.2 $ debuild -b -uc -us # successful build
$ cd ..
$ sudo dpkg -i dhcpcd5_8.1.2-1+rpt5_armhf.deb # install .deb file created by debuild
$ dhcpcd --version
dhcpcd 8.1.2 # "after" version
$
All well & good, but the "before" & "after" version numbers are exactly the same, which leaves me without a simple way to know whether I have my corrected code running, or the un-corrected code. I'll install the corrected .deb file to several hosts, I may get requests from others, etc, so I'd like some way to easily distinguish corrected from un-corrected code.
Using dhcpcd --version seems an easy way to do this. I've read that Debian has rules re version numbers, but as I'm not releasing this to "the world" I see no need for formality. Also - I've submitted a pull request/merge request to the Debian repo, and I've advised the RPi organization on the issue. I've gotten no feedback from either party, but this bug is a huge annoyance for me. I don't wish to wait for a new release of dhcpcd with a "proper" version number.
What must I do to cause the corrected version of dhcpcd to report dhcpcd 8.1.2.1 - or something similar?
EDIT for Clarification:
Based on this answer, I edited dhcpcd5-8.1.2/debian/changelog. Following this change, the apt utilities consistently report the version of dhcpcd as 8.1.3:
$ apt-cache policy dhcpcd5
dhcpcd5:
Installed: 1:8.1.3-1+rpt1
Candidate: 1:8.1.3-1+rpt1
Version table:
*** 1:8.1.3-1+rpt1 100
100 /var/lib/dpkg/status
1:8.1.2-1+rpt1 500
500 http://archive.raspberrypi.org/debian buster/main armhf Packages
7.1.0-2 500
500 http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages
$ #
$ dpkg -s dhcpcd5 | grep Version
Version: 1:8.1.3-1+rpt1
$
However, dhcpcd --version still reports 8.1.2. dhcpcd is aliased to dhcpcd5 in /etc/alternatives, so dhcpcd --version is actually dhcpcd5 --version. It appears that the dhcpcd5 executable is getting its --version from a different source than the apt utilities?
EDIT 2:
Turns out the version # that gets reported by dhcpcd --version is defined in defs.h as follows:
#define PACKAGE "dhcpcd"
#define VERSION "8.1.2"
I think dhcpcd is a bit of an outlier. The RPi team apparently decided to forego the upstream version 9 when released (years ago), and have stuck to version 8.1.2 even though there were several upstream releases following ver 8.1.2. Still more confusing is the fact that the .dsc file lists Vcs-Browser: https://salsa.debian.org/smlx-guest/dhcpcd5 as the Git repo - but it's actually stuck at version 7. This doesn't make much sense to me - I guess that's one reason I'm not a package maintainer. :)
|
You can either add the relevant lines at the top of debian/changelog (find here details on the contents of that file).
You can duplicate the current top stanza and change the version number (making an useful log comment is a good idea).
Alternatively you can use the dch tool (from the devscripts package):
dch --local your_package_name
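To illustrate (the package name, maintainer identity, date and exact version suffix here are all hypothetical), dch prepends a stanza along these lines to debian/changelog; the key point is that the new version string sorts higher than the stock one:

```
dhcpcd5 (1:8.1.2-1+rpt5local1) UNRELEASED; urgency=medium

  * Local rebuild with a one-line fix in src/dhcp.c.

 -- Your Name <you@example.com>  Mon, 02 Jan 2023 12:00:00 +0000
```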
Once installed, you can check the installed version of the package with something like this (there are alternatives)
dpkg -l dhcpcd5
Upstream version identifiers cannot be imported automatically because they don't always officially exist (take python3-lzss, say), and when they do, they might not be compatible with the restrictions and sorting rules of the packaging system's versions. For example, an epoch is sometimes needed to migrate from an upstream version to a Debian version.
| How to set a new version number in a .deb package I've built |
1,538,791,119,000 |
Suppose I have a makefile that builds my package, and I only want the package to build if the package file is not present:
package: foo_0.0.0_amd64.deb
cd foo-0.0.0 && debuild -uc -us
So I am new to the debian build process, but I am anticipating that I'll either find a way to build for different architectures, or I'll be on a different architecture natively and that file name will change. So, I set it as a variable:
major=0
minor=0
update=0
release=amd64
package: foo_${major}.${minor}.${update}_${release}.deb
I have a machine where uname -r yields #.##.#-#-amd64. What is the bulletproof way to fetch that amd64 in unix/linux?
|
On a Debian-based system, the bullet-proof way of determining the architecture, as appropriate for use in a package’s file name, is
dpkg --print-architecture
Note that architecture-independent packages use "all" there, and you'd have to know that in advance.
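In a build-script-friendly sketch, the suffix can be derived at build time, falling back to uname -m on systems without dpkg (the file name is the question's example):

```shell
# Compute the architecture suffix for the .deb file name, preferring
# dpkg's answer and falling back to uname -m elsewhere.
if command -v dpkg >/dev/null 2>&1; then
    release=$(dpkg --print-architecture)
else
    release=$(uname -m)
fi
echo "foo_0.0.0_${release}.deb"
```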
| Building packages: command which yields 'amd64' (like uname) |
1,538,791,119,000 |
I'm trying to compile php on Suse 10.2. When I run the configure script with --with-mcrypt I receive this message:
configure: error: mcrypt.h not found. Please reinstall libmcrypt
|
OpenSUSE 10.2 has been EOL since 11-30-2008. I recommend updating to a supported version like 15.1 or 42.3.
If you insist on using what you have then you'll have to build the package from source like you're doing with PHP. You will then prepend its binary and library directories to your PATH and LD_LIBRARY_PATH.
If you use a supported version then you can just use zypper to install php and any other packages because as it stands right now, you're going to have to build everything from source that isn't on that DVD.
| configure: error: mcrypt.h not found. Please reinstall libmcrypt |
1,538,791,119,000 |
I'm trying to create a Makefile to compile my project. However, when I use the 'math.h' library, my make fails. This is the Makefile file :
run: tema1
./tema1
build: tema1.c
gcc tema1.c -o tema1 -lm
clean:
rm *.o tema1
The part of the code where I use the pow() and sqrt() is :
float score = sqrt(k) + pow(1.25, completed_lines);
But, even compiling using '-lm', I still get this error :
/tmp/ccSQVWNy.o: In function `easy_win_score':
tema1.c:(.text+0x1518): undefined reference to `sqrt'
tema1.c:(.text+0x1540): undefined reference to `pow'
collect2: error: ld returned 1 exit status
<builtin>: recipe for target 'tema1' failed
make: *** [tema1] Error 1
Any idea why and how can I fix this ? If I only use this in the terminal :
gcc tema1.c -o tema1 -lm
it works, but in the Makefile, it fails.
|
This happens because your Makefile doesn’t explain how to build tema1 (from Make’s perspective), so it uses its built-in rules:
run depends on tema1;
tema1 doesn’t have a definition, but there’s a C file, so Make tries to compile it using its default rule, which doesn’t specify -lm.
To fix this, say
tema1: tema1.c
gcc tema1.c -o tema1 -lm
instead of build: tema1.c etc.
You can reduce repetition by using automatic variables:
tema1: tema1.c
gcc $^ -o $@ -lm
To keep “named” rules (run, build etc.), make them depend on concrete artifacts (apart from clean, since it doesn’t produce anything), add separate rules for the concrete artifacts, and mark the “named” rules as phony (so Make won’t expect a corresponding on-disk artifact):
build: tema1
tema1: tema1.c
gcc $^ -o $@ -lm
.PHONY: run build clean
It’s also worth changing clean so it doesn’t fail when there’s nothing to clean:
clean:
rm -f *.o tema1
| -lm doesn't work for my Makefile |
1,538,791,119,000 |
I am compiling a C++ program with g++ and each time get a huge number of errors, forcing me to scroll up every time I want to view the first (and most relevant) error. I am wondering if there is an option when I am compiling the program that would allow me to limit the number of error messages displayed in the terminal.
|
Use compiling-command 2>&1 | head --lines 32 to output the first 32 lines from the compiler's output. The 2>&1 redirect matters: compilers write diagnostics to stderr, so a plain pipe would not capture them.
You can also use compiling-command 2>&1 | grep "Text to search" | head --lines 32 to display the first 32 matches of "Text to search".
Disabling the -Wall option with gcc will also reduce the amount of output, although it suppresses warnings rather than hard errors.
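GCC also has a dedicated option for exactly this, -fmax-errors=n, which stops compilation after n errors (Clang's equivalent is -ferror-limit=n). A sketch, using a deliberately broken file and guarded in case g++ isn't installed:

```shell
# A source file with three errors, to demonstrate -fmax-errors:
cat > /tmp/limit-demo.cpp <<'EOF'
int main() { return undeclared_a + undeclared_b + undeclared_c; }
EOF
if command -v g++ >/dev/null 2>&1; then
    # Stop after the very first error instead of printing all three:
    g++ -fmax-errors=1 -c /tmp/limit-demo.cpp -o /tmp/limit-demo.o 2>&1 | head -5
fi
echo demo-done
```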
| How can I limit the number of error messages displayed after compiling a C++ program? |
1,538,791,119,000 |
I'm trying to edit a Makefile that contains:
...
install -d $(DESTDIR)/usr/lib/myApp
install -d $(DESTDIR)/usr/lib/myApp/scripts
install -t $(DESTDIR)/usr/lib/myApp/scripts \
src/scripts/ap.sh \
src/scripts/connect.sh \
src/scripts/devices.sh \
src/scripts/create_ap \
src/scripts/scan.sh
...
After reading this Q/A, I got the idea that I could replace all that with:
install -D src/scripts/* $(DESTDIR)/usr/lib/myApp/scripts
But the above gives me an error saying:
install: target
'/var/lib/jenkins/data/workspace/network-service_build-test@2/build/debian/myApp-service-network/usr/lib/myApp/scripts/network'
is not a directory
Am I misunderstanding the use of the -D flag here? I'm thinking it should move my files to the path specified and create the folders if needed.
|
I believe that you need
install -t "$(DESTDIR)/usr/lib/myApp/scripts/network" -D src/scripts/*
This will create $(DESTDIR)/usr/lib/myApp/scripts/network (including intermediate directories) and copy the files src/scripts/* there.
Testing (with extra verbosity turned on):
$ touch file-{1,2,3,4}
$ install -v -t test/dir -D file-[1-4]
install: creating directory 'test'
install: creating directory 'test/dir'
'file-1' -> 'test/dir/file-1'
'file-2' -> 'test/dir/file-2'
'file-3' -> 'test/dir/file-3'
'file-4' -> 'test/dir/file-4'
This works with GNU install from coreutils 8.25, but fails with coreutils 8.4. For older coreutils implementations, do it in two steps:
install -d "$(DESTDIR)/usr/lib/myApp/scripts/network"
install -t "$(DESTDIR)/usr/lib/myApp/scripts/network" src/scripts/*
... or something similar.
| Using install -D in Makefile |
1,538,791,119,000 |
The AUR is said to be the largest repository out there, but sometimes, when trying to build and install a package and its dependencies, the outcome is not a success.
What can an average user do at that point?
Normally (that is, for an Ubuntu user), the idea would be to build and install from source. That is a daring enough endeavour for me - but how can I try to fix what the automated Pamac/pacman could not?
|
The AUR is an unsupported repository: the quality of the PKGBUILDS varies from the very good through to the abominably bad or outright negligent.
You should always read the PKGBUILD before attempting to install anything and look at the comments on the package page to satisfy yourself that there won't be any unforseen "surprises" when running makepkg.
You should also not get in the habit of relying on an AUR helper to automate the build process for you and thereby blur the distinction between the officially supported repositories and the AUR.
If a particular PKGBUILD does not build successfully, the first step is to try and build it manually: makepkg will provide meaningful error messages that should provide sufficient information to identify the issue.
Arch Linux is not like Ubuntu: users are expected to be able to read PKGBUILDs (basic bash scripts, essentially) and the man page for makepkg and understand the build process sufficiently to responsibly maintain their installations.
If the fault lies with the PKGBUILD, leave a comment to that effect on the package's AUR page to alert the maintainer and anyone else who may want to install the same package. If the issue goes unaddressed, you can always ask to have the package orphaned, then adopt it and fix the PKGBUILD so that it works as expected.
There are guidelines for maintaining packages on the Arch Wiki.
| AUR package cannot be built and installed - what to do? |
1,538,791,119,000 |
I'm rebuilding world and would like every file not created by the rebuild to be deleted afterwards. Is there a mergemaster option for this?
|
You can use the make targets delete-old and delete-old-libs to remove obsolete files. They run interactively, unless you set BATCH_DELETE_OLD_FILES:
# pwd
/usr/src
# make -DBATCH_DELETE_OLD_FILES delete-old
Run them after make installworld.
Have a look at build(7) for more details.
A word of warning - be careful with delete-old-libs - it will delete anything that was not built as part of the current world/kernel so if any of your installed ports rely on older versions of any system libs, you'll need to reinstall the affected ports. I usually run delete-old-libs after a complete port rebuild to avoid this problem.
| Deleting all old files after rebuilding world on FreeBSD |
1,538,791,119,000 |
I'm trying to set up an environment for kernel module development on Linux. I've built the kernel in my home folder and would like to place the sources and binaries in the correct locations so that includes resolve correctly.
The example for building the kernel module has the following includes:
#include <linux/init.h>
#include <linux/module.h>
What are the absolute paths where the compiler looks for these headers?
|
I generally approach this question like this. I'm on a Fedora 19 system but this will work on any distro that provides locate services.
$ locate "linux/init.h" | grep include
/usr/src/kernels/3.13.6-100.fc19.x86_64.debug/include/linux/init.h
/usr/src/kernels/3.13.7-100.fc19.x86_64.debug/include/linux/init.h
/usr/src/kernels/3.13.9-100.fc19.x86_64/include/linux/init.h
/usr/src/kernels/3.13.9-100.fc19.x86_64.debug/include/linux/init.h
Your paths will be different but the key take away is that you want to ask locate to find what's being included ("linux/init.h") and filter these results looking for the keyword include.
There are also distro specific ways to search for these locations using RPM (Redhat) or APT (Debian/Ubuntu).
gcc
Notice however that the paths within the C/C++ file are relative:
#include <linux/init.h>
This is so that when you call the compiler, gcc, you can override the location of the include files that you'd like to use. This is controlled through the switch -I <dir>.
excerpt from man gcc
-I dir
Add the directory dir to the list of directories to be searched for
header files. Directories named by -I are searched before the
standard system include directories. If the directory dir is a
standard system include directory, the option is ignored to ensure
that the default search order for system directories and the special
   treatment of system headers are not defeated. If dir
begins with "=", then the "=" will be replaced by the sysroot
prefix; see --sysroot and -isysroot.
External modules
There's this article which discusses how one would incorporate the development of their own kernel modules into the "build environment" that's included with the Linux kernel. The article is titled: Driver porting: compiling external modules. The organization of the Kernel's makefile is also covered in this article: makefiles.txt.
For Kernel newbies there's also this article: KernelHeaders from the kernelnewbies.org website.
NOTE: The Kernel uses the KBuild system which is covered here as part of the documentation included with the Kernel.
https://www.kernel.org/doc/Documentation/kbuild/
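For external modules specifically, you normally don't pass -I yourself at all: a two-line kbuild Makefile delegates to the kernel's own build system, which supplies all the include paths. A minimal sketch (hello.o and the /tmp path are placeholders; recipe lines must start with a tab):

```shell
# Minimal external-module skeleton; kbuild supplies all the -I paths.
mkdir -p /tmp/hello-mod
cat > /tmp/hello-mod/Makefile <<'EOF'
obj-m += hello.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) clean
EOF
cat /tmp/hello-mod/Makefile
```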
References
How to include local header files in linux kernel module
| Placement of kernel binary and sources for kernel module building? |
1,538,791,119,000 |
I searched but couldn't find any similar questions. I need to recompile OpenSSL with md2 support so that I can compile and install libpki. I can't for the life of me figure out how recompiling OpenSSL should be done. Should I download the current sources and compile then install?
|
You do not want to do this. For example see: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-2409
| On OpenBSD, how do I recompile OpenSSL with md2 support? |
1,538,791,119,000 |
This is a follow up question to Confusion about linking boost library while compilation:
What should I do when I generate a Makefile with qmake and have only a manually installed third-party Boost (I uninstalled all Boost packages from the package manager, because the build always linked against those, which I don't want), and I want the project to compile against, and run against, only this manually installed library?
These are the important parts of a Makefile generated by qmake:
CC = gcc
CXX = g++
DEFINES = -DQT_GUI -DBOOST_THREAD_USE_LIB -DBOOST_SPIRIT_THREADSAFE -DBOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN -D__NO_SYSTEM_INCLUDES -DUSE_UPNP=1 -DSTATICLIB -DUSE_QRCODE -DUSE_DBUS -DHAVE_BUILD_INFO -DLINUX -DQT_NO_DEBUG -DQT_DBUS_LIB -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED
CFLAGS = -m64 -pipe -O2 -Wall -W -D_REENTRANT $(DEFINES)
CXXFLAGS = -m64 -pipe -fstack-protector -O2 -fdiagnostics-show-option -Wall -Wextra -Wformat -Wformat-security -Wno-unused-parameter -D_REENTRANT $(DEFINES)
INCPATH = -I/usr/share/qt4/mkspecs/linux-g++-64 -I/usr/include/qt4/QtCore -I/usr/include/qt4/QtGui -I/usr/include/qt4/QtDBus -I/usr/include/qt4 -Isrc -Isrc/json -Isrc/qt -IC:/deps/ -IC:/deps/boost -Ic:/deps/db/build_unix -Ic:/deps/ssl/include -IC:/deps/libqrencode/ -Ibuild -Ibuild
LINK = g++
LFLAGS = -m64 -fstack-protector -Wl,-O1
LIBS = $(SUBLIBS) -L/usr/lib/x86_64-linux-gnu -LC:/deps/miniupnpc -lminiupnpc -lqrencode -lrt -LC:/deps/boost/stage/lib -Lc:/deps/db/build_unix -Lc:/deps/ssl -LC:/deps/libqrencode/.libs -lssl -lcrypto -ldb_cxx -lboost_system-mgw46-mt-sd-1_54 -lboost_filesystem-mgw46-mt-sd-1_54 -lboost_program_options-mgw46-mt-sd-1_54 -lboost_thread-mgw46-mt-sd-1_54 -lQtDBus -lQtGui -lQtCore -lpthread
This is the path to boost:
/usr/local/lib/boost1.55/lib# ls -1
libboost_atomic.a
libboost_atomic.so
libboost_atomic.so.1.55.0
libboost_chrono.a
libboost_chrono.so
libboost_chrono.so.1.55.0
libboost_context.a
libboost_context.so
libboost_context.so.1.55.0
libboost_coroutine.a
libboost_coroutine.so
libboost_coroutine.so.1.55.0
libboost_date_time.a
libboost_date_time.so
libboost_date_time.so.1.55.0
libboost_exception.a
libboost_filesystem.a
libboost_filesystem.so
libboost_filesystem.so.1.55.0
libboost_graph.a
libboost_graph.so
libboost_graph.so.1.55.0
libboost_locale.a
libboost_locale.so
libboost_locale.so.1.55.0
libboost_log.a
libboost_log_setup.a
libboost_log_setup.so
libboost_log_setup.so.1.55.0
libboost_log.so
libboost_log.so.1.55.0
libboost_math_c99.a
libboost_math_c99f.a
libboost_math_c99f.so
libboost_math_c99f.so.1.55.0
libboost_math_c99l.a
libboost_math_c99l.so
libboost_math_c99l.so.1.55.0
libboost_math_c99.so
libboost_math_c99.so.1.55.0
libboost_math_tr1.a
libboost_math_tr1f.a
libboost_math_tr1f.so
libboost_math_tr1f.so.1.55.0
libboost_math_tr1l.a
libboost_math_tr1l.so
libboost_math_tr1l.so.1.55.0
libboost_math_tr1.so
libboost_math_tr1.so.1.55.0
libboost_prg_exec_monitor.a
libboost_prg_exec_monitor.so
libboost_prg_exec_monitor.so.1.55.0
libboost_program_options.a
libboost_program_options.so
libboost_program_options.so.1.55.0
libboost_random.a
libboost_random.so
libboost_random.so.1.55.0
libboost_regex.a
libboost_regex.so
libboost_regex.so.1.55.0
libboost_serialization.a
libboost_serialization.so
libboost_serialization.so.1.55.0
libboost_signals.a
libboost_signals.so
libboost_signals.so.1.55.0
libboost_system.a
libboost_system.so
libboost_system.so.1.55.0
libboost_test_exec_monitor.a
libboost_thread.a
libboost_thread.so
libboost_thread.so.1.55.0
libboost_timer.a
libboost_timer.so
libboost_timer.so.1.55.0
libboost_unit_test_framework.a
libboost_unit_test_framework.so
libboost_unit_test_framework.so.1.55.0
libboost_wave.a
libboost_wave.so
libboost_wave.so.1.55.0
libboost_wserialization.a
libboost_wserialization.so
libboost_wserialization.so.1.55.0
This is the output of ldconfig -v concerning boost:
# ldconfig -v
/sbin/ldconfig.real: Path `/lib/x86_64-linux-gnu' given more than once
/sbin/ldconfig.real: Path `/usr/lib/x86_64-linux-gnu' given more than once
/usr/local/lib/boost1.55/lib:
libboost_wave.so.1.55.0 -> libboost_wave.so.1.55.0
libboost_thread.so.1.55.0 -> libboost_thread.so.1.55.0
libboost_system.so.1.55.0 -> libboost_system.so.1.55.0
libboost_prg_exec_monitor.so.1.55.0 -> libboost_prg_exec_monitor.so.1.55.0
libboost_context.so.1.55.0 -> libboost_context.so.1.55.0
libboost_atomic.so.1.55.0 -> libboost_atomic.so.1.55.0
libboost_filesystem.so.1.55.0 -> libboost_filesystem.so.1.55.0
libboost_math_c99l.so.1.55.0 -> libboost_math_c99l.so.1.55.0
libboost_math_c99.so.1.55.0 -> libboost_math_c99.so.1.55.0
libboost_timer.so.1.55.0 -> libboost_timer.so.1.55.0
libboost_wserialization.so.1.55.0 -> libboost_wserialization.so.1.55.0
libboost_math_c99f.so.1.55.0 -> libboost_math_c99f.so.1.55.0
libboost_coroutine.so.1.55.0 -> libboost_coroutine.so.1.55.0
libboost_signals.so.1.55.0 -> libboost_signals.so.1.55.0
libboost_random.so.1.55.0 -> libboost_random.so.1.55.0
libboost_chrono.so.1.55.0 -> libboost_chrono.so.1.55.0
libboost_program_options.so.1.55.0 -> libboost_program_options.so.1.55.0
libboost_date_time.so.1.55.0 -> libboost_date_time.so.1.55.0
libboost_locale.so.1.55.0 -> libboost_locale.so.1.55.0
libboost_log.so.1.55.0 -> libboost_log.so.1.55.0
libboost_log_setup.so.1.55.0 -> libboost_log_setup.so.1.55.0
libboost_serialization.so.1.55.0 -> libboost_serialization.so.1.55.0
libboost_math_tr1f.so.1.55.0 -> libboost_math_tr1f.so.1.55.0
libboost_unit_test_framework.so.1.55.0 -> libboost_unit_test_framework.so.1.55.0
libboost_math_tr1l.so.1.55.0 -> libboost_math_tr1l.so.1.55.0
libboost_graph.so.1.55.0 -> libboost_graph.so.1.55.0
libboost_math_tr1.so.1.55.0 -> libboost_math_tr1.so.1.55.0
libboost_regex.so.1.55.0 -> libboost_regex.so.1.55.0
What do I have exactly to do to compile and run the code properly?
I tried combinations of:
-L/usr/local/lib/boost1.55/lib/boost_thread-mgw46-mt-sd-1_54
-L/usr/local/lib/boost1.55/lib/boost_thread
-I/usr/local/lib/boost1.55/
-I/usr/local/lib/boost1.55/lib/
-lboost_system-mgw46-mt-sd-1_54
-lboost_system-mgw46-mt-sd-1_55
-lboost_system
None of this ever works when Boost is not installed via the package manager, yet I don't want it to use the package manager's Boost. That means it doesn't compile. Sometimes I get something like:
/usr/bin/ld: cannot find -lboost_system-mgw46-mt-sd-1_54
or
/usr/bin/ld: cannot find -lboost_system
or
addrman.cpp:(.text.startup+0x23): undefined reference to `boost::system::generic_category()'
...and so on.
I don't get it. What's wrong here?
[UPDATE]
It turns out that there seems to be something wrong with boost lib itself.
After modifying the important parts of the makefile to:
LIBS = $(SUBLIBS) -L/usr/lib/x86_64-linux-gnu -lminiupnpc -lqrencode -lrt -lssl -lcrypto -ldb_cxx -L/usr/local/lib/boost1.55/ -L/usr/local/lib/boost1.55/include/ -L/usr/local/lib/boost1.55/lib/ -lboost_system -lboost_filesystem -lboost_program_options -lpthread -lboost_thread -lQtDBus -lQtGui -lQtCore
make produced another error:
build/json_spirit_reader.o: In function `void boost::call_once<void (*)()>(boost::once_flag&, void (*)())':
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0x14): undefined reference to `boost::detail::get_once_per_thread_epoch()'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0x2c): undefined reference to `boost::detail::once_epoch_mutex'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0x35): undefined reference to `boost::detail::once_epoch_mutex'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0x72): undefined reference to `boost::detail::once_epoch_mutex'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0x77): undefined reference to `boost::detail::once_epoch_cv'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0xa8): undefined reference to `boost::detail::once_epoch_mutex'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0xb0): undefined reference to `boost::detail::once_epoch_mutex'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0xd9): undefined reference to `boost::detail::once_global_epoch'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0xde): undefined reference to `boost::detail::once_epoch_cv'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0xe9): undefined reference to `boost::detail::once_global_epoch'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0x128): undefined reference to `boost::detail::once_global_epoch'
json_spirit_reader.cpp:(.text._ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_[_ZN5boost9call_onceIPFvvEEEvRNS_9once_flagET_]+0x19b): undefined reference to `boost::detail::once_epoch_cv'
collect2: error: ld returned 1 exit status
It seems that there is no such function (in this boost version?):
$ objdump -T /usr/local/lib/boost1.55/lib/libboost_thread.so|c++filt|grep once_epoch
prints nothing as well as
$ for i in /usr/local/lib/boost1.55/lib/libboost_*.so ; do if grep once_epoch_mutex <(objdump -T $i|c++filt) ; then echo $i ; fi ; done
does not.
[UPDATE 2]
After adding
-I/usr/local/lib/boost1.55/include/ -I/usr/local/lib/boost1.55/include/boost/
to INCPATH and recompile the whole application within a fresh workspace, the error is different but now, I don't see any error message:
/usr/local/lib/boost1.55/include/boost/bind/arg.hpp: In constructor ‘boost::arg<I>::arg(const T&)’:
/usr/local/lib/boost1.55/include/boost/bind/arg.hpp:37:22: warning: typedef ‘T_must_be_placeholder’ locally defined but not used [-Wunused-local-typedefs]
typedef char T_must_be_placeholder[ I == is_placeholder<T>::value? 1: -1 ];
^
In file included from /usr/local/lib/boost1.55/include/boost/tuple/tuple.hpp:33:0,
from /usr/local/lib/boost1.55/include/boost/thread/detail/async_func.hpp:37,
from /usr/local/lib/boost1.55/include/boost/thread/future.hpp:22,
from /usr/local/lib/boost1.55/include/boost/thread.hpp:24,
from src/util.h:22,
from src/bignum.h:13,
from src/main.h:9,
from src/wallet.h:9,
from src/wallet.cpp:7:
/usr/local/lib/boost1.55/include/boost/tuple/detail/tuple_basic.hpp: In function ‘typename boost::tuples::access_traits<typename boost::tuples::element<N, boost::tuples::cons<HT, TT> >::type>::const_type boost::tuples::get(const boost::tuples::cons<HT, TT>&)’:
/usr/local/lib/boost1.55/include/boost/tuple/detail/tuple_basic.hpp:228:45: warning: typedef ‘cons_element’ locally defined but not used [-Wunused-local-typedefs]
typedef BOOST_DEDUCED_TYPENAME impl::type cons_element;
^
src/wallet.cpp: In member function ‘bool CWallet::AddToWallet(const CWalletTx&)’:
src/wallet.cpp:402:13: error: ‘replace_all’ is not a member of ‘boost’
boost::replace_all(strCmd, "%s", wtxIn.GetHash().GetHex());
^
In file included from /usr/local/lib/boost1.55/include/boost/system/system_error.hpp:14:0,
from /usr/local/lib/boost1.55/include/boost/thread/exceptions.hpp:22,
from /usr/local/lib/boost1.55/include/boost/thread/pthread/thread_data.hpp:10,
from /usr/local/lib/boost1.55/include/boost/thread/thread_only.hpp:17,
from /usr/local/lib/boost1.55/include/boost/thread/thread.hpp:12,
from /usr/local/lib/boost1.55/include/boost/thread.hpp:13,
from src/util.h:22,
from src/bignum.h:13,
from src/main.h:9,
from src/wallet.h:9,
from src/wallet.cpp:7:
/usr/local/lib/boost1.55/include/boost/system/error_code.hpp: At global scope:
/usr/local/lib/boost1.55/include/boost/system/error_code.hpp:222:36: warning: ‘boost::system::posix_category’ defined but not used [-Wunused-variable]
static const error_category & posix_category = generic_category();
^
/usr/local/lib/boost1.55/include/boost/system/error_code.hpp:223:36: warning: ‘boost::system::errno_ecat’ defined but not used [-Wunused-variable]
static const error_category & errno_ecat = generic_category();
^
/usr/local/lib/boost1.55/include/boost/system/error_code.hpp:224:36: warning: ‘boost::system::native_ecat’ defined but not used [-Wunused-variable]
static const error_category & native_ecat = system_category();
^
make: *** [build/wallet.o] Error 1
|
The correct invocation according to the directory listings you gave would be:
-L/usr/local/lib/boost1.55/lib/ -lboost_system
-L is used to specify the path where libraries are found. -I is for headers, that will not help for linker errors (you'll get compiler errors if you're missing include paths).
As for boost_system versus boost_system-mgw46-mt-sd-1_54 - you don't have anything called "libboost_system-mgw46-mt-sd-1_54.so[.version]" in your library directory, so you can't use that second name (the mgw46 tag in that name indicates a MinGW/Windows build anyway).
(You also have Windows-type paths in your Makefile - try and avoid mixing the two, use conditionals in your Makefiles to separate Windows and Unix paths.)
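Once linking succeeds, the runtime linker also has to find the same directory. Your ldconfig output already lists /usr/local/lib/boost1.55/lib, so that should work, but embedding an rpath makes the binary independent of ldconfig and LD_LIBRARY_PATH. A sketch of the relevant flags (main.o/myapp are placeholders):

```shell
BOOST=/usr/local/lib/boost1.55/lib
# -L: link-time search path; -Wl,-rpath: embed the same directory for
# the runtime linker, so no LD_LIBRARY_PATH or ldconfig entry is needed.
LINKFLAGS="-L$BOOST -Wl,-rpath,$BOOST -lboost_system -lboost_filesystem"
echo "g++ main.o -o myapp $LINKFLAGS"
```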
| How to compile with third party libs properly? |
1,538,791,119,000 |
A Gentoo install still in the livecd stage (unable to boot so far) fails to emerge LVM statically. I need a statically compiled lvm in order to use it in my initrd.
My make.conf:
CFLAGS="-O2 -march=native -pipe"
CXXFLAGS="${CFLAGS}"
CHOST="x86_64-pc-linux-gnu"
USE="bindist mmx sse sse2 static"
The emerge compile error:
/usr/lib/gcc/x86_64-pc-linux-gnu/4.6.3/../../../../lib64/libudev.a(time-util.o): In function `now': (.text.now+0x8): undefined reference to `clock_gettime'
I also note that:
Warning, we no longer overwrite /sbin/lvm and /sbin/dmsetup with
their static versions. If you need the static binaries,
you must append .static to the filename!
What does this mean? How am I supposed to append this ".static" to the filename?
I see that this person had the same issue, but with no answer: http://archives.gentoo.org/gentoo-user/msg_eb40f5d76161fda72d134551cc26d989.xml
I also notice this thread: http://forums.gentoo.org/viewtopic-p-4892618.html?sid=e41b07d9b8554c10430619e1f51d564a
I tried
export LDFLAGS=" -lrt "
However it didn't appear to change anything, still the same error.
|
This works fine for me (on ~amd64 Gentoo); however, as a workaround, try removing the udev USE flag from lvm2, since udev is not important at the initramfs stage. The static binary is called /sbin/lvm.static (it requires the static USE flag to be built). You can check whether a binary is static or not using ldd.
echo sys-fs/lvm2 static -udev >> /etc/portage/package.use
Also check whether you have the static-libs USE flag enabled for the dependencies of the packages you wish to build statically. Usually the ebuilds check those dependencies for you, but it's better to double-check.
| Gentoo how to compile LVM statically linked? |
1,538,791,119,000 |
A book I am reading refers to an include file that shows how a stack frame looks on one's UNIX system.
In particular: /usr/include/sys/frame.h
I am having trouble finding the modern equivalent. Anyone have an idea? I'm on Ubuntu 12.10.
|
A good answer was provided on Super User.
Whether or not the files discussed are precise extensions of the legacy file my author refers to remains unknown. However, one will find most of the relevant knowledge in the ptrace.h file and the calling.h file located in the /.../asm/ directory. This presumes an x86 processor.
| Where is the frame.h located in modern Linux implementations? (ubuntu specifically) |
1,538,791,119,000 |
If a previous kernel (assuming it is not from the stone age) compiles successfully, does it make sense to assume that old config file if copied to the new kernel, will compile successfully too?
What things need to be taken care of?
|
Copy the old .config file and then, to know what needs to be taken care of, use make oldconfig. You will be prompted interactively for needed changes in your config file. It's almost safe to answer with the default option to every question. (Usually you don't care about new drivers, and you want to use new features when those are enabled by default).
If you skip this configuration update step, things might break.
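A typical workflow sketch; make olddefconfig is the non-interactive variant of make oldconfig that silently accepts the default answer for every new option:

```shell
# Run inside the new kernel source tree, after copying in the old config.
# Guarded so it is a no-op anywhere else:
if [ -f Kconfig ] && [ -f init/Kconfig ]; then
    cp "/boot/config-$(uname -r)" .config
    make olddefconfig   # like oldconfig, but silently accepts every default
else
    echo "not inside a kernel source tree"
fi
```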
| Precautions to be taken while make oldconfig |
1,538,791,119,000 |
I have the following linux kernel source repo cloned to a couple different hosts (my local machine and a Github Actions runner)
https://gitlab.conclusive.pl/devices/linux/-/tree/master
I'm using the kstr-sama5d27 defconfig
When building modules using make modules -j4 ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- LOCALVERSION=-kstr-sama5d27 I receive the following error on the GA host:
make[2]: *** No rule to make target '/lib/firmware', needed by
'net/wireless/extra-certs.c'. Stop.
whereas on my local machine building succeeds:
GEN net/wireless/extra-certs.c
CC [M] net/wireless/extra-certs.o
I looked in net/wireless/Makefile but nothing within lead me to any ideas on how to troubleshoot this.
I figure the issue must be caused by differences in build environments.
|
This is caused by CONFIG_CFG80211_EXTRA_REGDB_KEYDIR which is set to /lib/firmware. When this configuration entry is not empty, it causes the build to rely on extra-certs.o, which itself depends on the directory given as the configuration value. So the build succeeds on your own system, which has /lib/firmware, but fails on the GitHub action runner, which doesn’t.
If you don’t need to include extra certificates, you should override this configuration setting so that it ends up empty. That will allow the build to proceed without /lib/firmware.
If you do need extra certificates, place them in a directory next to your clone of the kernel repository (e.g. firmware next to linux) and change CONFIG_CFG80211_EXTRA_REGDB_KEYDIR to point to that (../firmware).
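With the kernel's bundled scripts/config helper, clearing the option non-interactively looks like the following sketch:

```shell
# Empty the keydir so extra-certs.c (and its /lib/firmware dependency)
# is no longer generated; guarded so it only runs in a configured tree:
if [ -x scripts/config ] && [ -f .config ]; then
    scripts/config --set-str CFG80211_EXTRA_REGDB_KEYDIR ""
    make olddefconfig
else
    echo "run this from a configured kernel source tree"
fi
```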
| when crosscompiling this source why do I receive "No rule to make target" error on one host but not the other |
1,538,791,119,000 |
To improve compile times, the Arch wiki states,
Users with multi-core/multi-processor systems can specify the number
of jobs to run simultaneously. This can be accomplished with the use
of nproc to determine the number of available processors, e.g.
MAKEFLAGS="-j$(nproc)".
If I set this in Fish shell via set -Ux MAKEFLAGS "-J$(nproc)", then I receive the error:
fish: $(...) is not supported. In fish, please use '(nproc)'.
set -Ux MAKEFLAGS "-J$(nproc)"
^
I can set this variable in two ways without receiving an error:
set -Ux MAKEFLAGS "-J(nproc)"
set -Ux MAKEFLAGS '-J$(nproc)'
Which of these is the correct method? Or are they both okay?
Thanks
|
Neither. In fish (prior to version 3.4), command substitution cannot be quoted at all.
set arg "-J(nproc)"
set -S arg
$arg: set in global scope, unexported, with 1 elements
$arg[1]: |-J(nproc)|
Use
set -Ux MAKEFLAGS "-J"(nproc)
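For comparison, POSIX shells (bash, dash, ...) do expand $(...) inside double quotes, which is why the Arch wiki writes it that way - the restriction is specific to fish. (Also note that GNU make's parallel-jobs flag is lowercase -j; the -J in your command looks like a typo.)

```shell
# POSIX shells expand $(...) even inside double quotes:
MAKEFLAGS="-j$(nproc 2>/dev/null || echo 1)"
export MAKEFLAGS
echo "$MAKEFLAGS"
```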
| What's the correct format for MAKEFLAGS when using Fish shell? |
1,538,791,119,000 |
I'm just learning a bit about lower-level languages and I've noticed that gcc you can specify -march and -mtune parameters to optimise the software for particular CPU families.
But I've also found people saying that building a program from source won't make it noticeably faster than downloading the binary. Surely being able to have the software optimised for the CPU in your system would provide a notable speed boost, especially in software like ffmpeg which uses fairly microarchitecture-dependent features such as AVX?
What I'm wondering is, are the binaries on package managers somehow optimised for multiple microarchitectures? Does the package manager download binaries specific to my system's microarchitecture?
|
Distribution packages are built with reference to a pre-determined baseline (see Debian’s architecture baselines for example). Thus, in Debian, amd64 packages target generic x86-64 CPUs, with SSE2 but not SSE3 or later; i386 packages target generic i686 CPUs, without MMX or SSE. In general, the compiler defaults are used, so tuning might evolve as the compiler itself evolves.
However packages where CPU-specific optimisations provide significant benefit can be built to take advantage of newer CPUs. This is done by providing multiple implementations rather than relying on compiler optimisations, and choosing between them at runtime: the packaged software detects the running CPU and adjusts the code paths it uses to take advantage of it (see ffmpeg’s libswscale/x86/swscale.c for example). On some architectures, ld.so itself helps with this: it can automatically load an optimised library if it’s available, e.g. on an i386-architecture system running on an SSE-capable CPU.
Most if not all package managers are oblivious to all this; they download a given architecture’s package and install it, without regard for the CPU running the system.
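You can see the raw material such runtime dispatchers work from - the feature flags the CPU advertises - in /proc/cpuinfo (Linux-specific; on non-x86 machines the line is named differently and this prints nothing):

```shell
# List a few SIMD feature flags the running CPU advertises; runtime
# dispatchers read the same information via the cpuid instruction.
grep -m1 '^flags' /proc/cpuinfo 2>/dev/null |
    tr ' ' '\n' | grep -x -e sse2 -e ssse3 -e avx -e avx2 | sort -u
echo "flags checked"
```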
| What microarchitecture are packages on apt/yum typically built/tuned for? |
1,538,791,119,000 |
I need to compile and run binaries on CentOS 7 but I'm having a hard time running my wrapper app due to Python versions and other issues. If I compile and test binaries on Ubuntu (or any other distribution) and then move them to an online CentOS 7 host, would I run into any binary/platform compatibility problems?
PS: Binaries I am running are of Google's cwebp and ImageMagick. My wrapper is a Node function for AWS Lambda.
|
Short answer is that binaries from one system are not guaranteed to run correctly on another system, but they may work. They may also appear to work, but have issues.
Longer answer is that it depends on how those binaries were linked. Statically linked binaries have a better chance of running than dynamically linked binaries. Dynamically linked binaries will have a lot of dependencies that may not be satisfiable by a different distribution.
In your particular case, your best bet is to create a CentOS 7 virtual machine or container and create the binaries there. If you can, generate statically linked binaries, then deploy those to your restricted production system.
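Two quick checks help before (and after) moving a binary between distributions: file tells you whether it is statically or dynamically linked, and ldd lists the shared libraries a dynamic binary will need on the target system. Shown here on /bin/sh as a stand-in for your own binary:

```shell
# file reports static vs dynamic linkage; ldd lists the shared
# libraries a dynamic binary needs.  Both guarded in case absent:
command -v file >/dev/null 2>&1 && file /bin/sh || true
command -v ldd  >/dev/null 2>&1 && ldd /bin/sh | head -5 || true
echo "inspection done"
```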
| Are CentOS 7 binaries the same of Ubuntu or any other GNU distribution? |
1,538,791,119,000 |
I'm trying to build and install ccminer on Ubuntu 16.04 and getting the following error:
scrypt.cpp:(.text+0xa55b): undefined reference to `GOMP_parallel'
scrypt.cpp:(.text+0xa6cd): undefined reference to `GOMP_parallel'
libgomp1 is installed :
Package: libgomp1
Status: install ok installed
Priority: optional
Section: libs
Installed-Size: 156
Maintainer: Ubuntu Core developers <[email protected]>
Architecture: amd64
Multi-Arch: same
Source: gcc-5
Version: 5.4.0-6ubuntu1~16.04.5
Depends: gcc-5-base (= 5.4.0-6ubuntu1~16.04.5), libc6 (>= 2.17)
Breaks: gcc-4.3 (<< 4.3.6-1), gcc-4.4 (<< 4.4.6-4), gcc-4.5 (<< 4.5.3-2)
Description: GCC OpenMP (GOMP) support library
GOMP is an implementation of OpenMP for the C, C++, and Fortran compilers
in the GNU Compiler Collection.
Homepage: http://gcc.gnu.org/
Original-Maintainer: Debian GCC Maintainers <[email protected]>
and the libraries are found here:
locate libgomp
/usr/lib/gcc/x86_64-linux-gnu/5/libgomp.a
/usr/lib/gcc/x86_64-linux-gnu/5/libgomp.so
/usr/lib/gcc/x86_64-linux-gnu/5/libgomp.spec
/usr/lib/x86_64-linux-gnu/libgomp.so.1
/usr/lib/x86_64-linux-gnu/libgomp.so.1.0.0
/usr/share/doc/libgomp1
/var/lib/dpkg/info/libgomp1:amd64.list
/var/lib/dpkg/info/libgomp1:amd64.md5sums
/var/lib/dpkg/info/libgomp1:amd64.shlibs
/var/lib/dpkg/info/libgomp1:amd64.symbols
/var/lib/dpkg/info/libgomp1:amd64.triggers
Is it possible to specify the location of the libraries in a config / makefile somehow?
the makefile contains:
OPENMP_CFLAGS = -fopenmp
In case it is relevant, I have Anaconda installed as I have read that this can interfere with some build processes.
Link to VERBOSE output of build.sh
output gist
gcc and g++ versions:
g++ --version
g++ (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
|
I had the exact same problem and, as predicted by OP, my issue was related to an Anaconda install and it got fixed after removing it.
I noticed that running locate libgomp produced output similar to the OP's, but with Anaconda-related results at the top.
After uninstalling it, the output was the same and I became able to build ccminer with default configs.
This post details simply how to uninstall Anaconda
| How can I resolve this libgomp1 dependency issue? |
1,538,791,119,000 |
I am having issues compiling an application claimed to be Linux-compatible on Linux Mint 18.1. The application in question is called Lightscreen. Everything has gone smoothly except for the make step.
Here is the command process I have done so far:
I had to first install QT 5.7 because it wouldn't work with any other version unless I used an older version of lightscreen, as I would get this result:
Project ERROR: Unknown module(s) in QT: x11extras
So I went ahead and installed QT 5.7, which it claims to support in its most recent update, and this is the output:
nicholas@LinuxNick ~/bin/lightscreen $ /home/nicholas/.Qt/5.7/gcc_64/bin/qmake
Project MESSAGE: This project is using private headers and will therefore be tied to this specific Qt module build version.
Project MESSAGE: Running this project against other versions of the Qt modules may crash at any arbitrary point.
Project MESSAGE: This is not a bug, but a result of using Qt internals. You have been warned!
nicholas@LinuxNick ~/bin/lightscreen $
Suspecting everything was fine, since there were no error messages but only general project messages, I continued. I ran make and hit my first errors.
In file included from tools/screenshot.cpp:45:0:
tools/screenshot.cpp: In member function ‘void Screenshot::save()’:
tools/screenshot.cpp:250:34: error: expected unqualified-id before numeric constant
result = Screenshot::Success;
^
tools/screenshot.cpp:260:79: error: expected unqualified-id before numeric constant
result = (QFile::rename(mUnloadFilename, fileName)) ? Screenshot::Success : S
^
tools/screenshot.cpp:260:79: error: expected ‘:’ before numeric constant
tools/screenshot.cpp:262:34: error: expected unqualified-id before numeric constant
result = Screenshot::Success;
^
Makefile:5959: recipe for target 'screenshot.o' failed
make: *** [screenshot.o] Error 1
Am I doing something wrong? I am kinda new to compiling stuff on Linux, and while I've compiled quite a few programs, I still haven't gotten the hang of how things are supposed to be compiled. A step-by-step guide would be helpful, but it doesn't have to be one, especially if it isn't needed.
Please, help, and thanks in advance.
EDIT: I am now experiencing a different problem.
When I run make, this is the output:
g++ -c -pipe -O2 -std=gnu++11 -Wall -W -D_REENTRANT -fPIC -DQT_DEPRECATED_WARNINGS -DUGLOBALHOTKEY_NOEXPORT -DAPP_VERSION=\"2.5\" -DQT_NO_DEBUG -DQT_WIDGETS_LIB -DQT_MULTIMEDIA_LIB -DQT_X11EXTRAS_LIB -DQT_GUI_LIB -DQT_NETWORK_LIB -DQT_SQL_LIB -DQT_CONCURRENT_LIB -DQT_CORE_LIB -I. -Itools/UGlobalHotkey -Itools/UGlobalHotkey -I../../.Qt/5.7/gcc_64/include -I../../.Qt/5.7/gcc_64/include/QtWidgets -I../../.Qt/5.7/gcc_64/include/QtMultimedia -I../../.Qt/5.7/gcc_64/include/QtGui/5.7.1 -I../../.Qt/5.7/gcc_64/include/QtGui/5.7.1/QtGui -I../../.Qt/5.7/gcc_64/include/QtX11Extras -I../../.Qt/5.7/gcc_64/include/QtGui -I../../.Qt/5.7/gcc_64/include/QtNetwork -I../../.Qt/5.7/gcc_64/include/QtSql -I../../.Qt/5.7/gcc_64/include/QtConcurrent -I../../.Qt/5.7/gcc_64/include/QtCore/5.7.1 -I../../.Qt/5.7/gcc_64/include/QtCore/5.7.1/QtCore -I../../.Qt/5.7/gcc_64/include/QtCore -I. -I. -I../../.Qt/5.7/gcc_64/mkspecs/linux-g++ -o os.o tools/os.cpp
tools/os.cpp: In function ‘QPair<QPixmap, QPoint> os::cursor()’:
tools/os.cpp:131:23: error: could not convert ‘QPoint(0, 0)’ from ‘QPoint’ to ‘QPair<QPixmap, QPoint>’
return QPoint(0, 0);
^
Makefile:5657: recipe for target 'os.o' failed
make: *** [os.o] Error 1
|
The problem comes from <X11/X.h> (only included when compiling to Linux). It defines the following macro:
#define Success 0
This interferes with an enum member of the same name, Screenshot::Result::Success.
To fix this, open tools/screenshot.cpp, and find the following lines:
#ifdef Q_OS_LINUX
#include <QX11Info>
#include <X11/X.h>
#include <X11/Xlib.h>
#endif
After the include for X.h, add an #undef Success. This removes the conflicting macro.
| Compiling LightScreen on Linux Mint 18.1 |
1,538,791,119,000 |
If for example I have compiled a simple C program that uses GTK 3 on a machine running Ubuntu, will I be able to run it on other Linux flavours?
Note: My actual question is "Should I label my compiled program for Linux or just Ubuntu?"
eg. Should I label my downloads page as
Windows
program.exe
Linux
program
Macintosh
program.app
or
Windows
program.exe
Ubuntu < Version 17.04
program
Macintosh
program.app
|
Linux executables are not specific to a Linux distribution. But they are specific to a processor architecture and to a set of library versions.
An executable for any operating system is specific to a processor architecture. Windows and Mac users don't care as much because these operating systems more or less only run on a single architecture. (OSX used to run on multiple processor architectures, and OSX applications were typically distributed as a bundle that contained code for all supported processor architectures, but modern OSX only runs on amd64 processors. Windows runs on both 32-bit and 64-bit Intel processors, so you might find “32-bit” and “64-bit” Windows executables.)
Windows resolves the library dependency problem by forcing programmers to bundle all the libraries they use with their program. On Linux, it's uncommon to do this, with the benefit that programmers don't need to bundle libraries and that users get timely security updates and bug fixes for libraries, but with the cost that programs need to be compiled differently for different releases of distributions.
So you should label your binary as “Linux, 64-bit PC (amd64), compiled for Ubuntu 17.04” (or “32-bit PC (i386)” if this is a 32-bit executable), and give the detail of the required libraries. You can see the libraries used by an executable with the ldd command: run ldd program. The part before the => is what matters, e.g. libgtk-3.so.0 is the main GTK3 library, with version 0 (if there ever was a version 1, it would be incompatible with version 0, that's the reason to change the version number). Some of these libraries are things that everyone would have anyway because they haven't changed in many years; only experience or a comparison by looking at multiple distributions and multiple releases can tell you this. Users of other distributions can run the same binary if they have compatible versions of the libraries.
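As a sketch, with /bin/ls standing in for your program (the file utility may need installing separately on some distributions):

```shell
# What to check before labelling a download for distribution.
file /bin/ls 2>/dev/null  # reports the architecture: "x86-64", "Intel 80386", ...
ldd /bin/ls               # the libraries (left of "=>") users must have
```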
| Compiled Executable |
1,538,791,119,000 |
If I build NGINX from source, how do I update it?
I'm on a Debian machine and used to install and update software on the CLI with apt-get.
|
A package manager installs the binaries and configuration files that were compiled by the package maintainer (and performs many other functions as well). But, if you build from source, then you are responsible for rebuilding from source again - for each and every upgrade instance, possibly including its dependencies as well, if those were compiled from source.
| How do I update software that was built from source? |
1,538,791,119,000 |
I'm trying to compile Lyx 2.2 on my Debian machine from source. As usual I run ./autogen.sh && ./configure && make, but configuration stops here:
configure: error: cannot compile a simple Qt executable. Check you have the right $QTDIR.
So I've installed the qt5-default package, but it didn't solve the problem.
The $QTDIR variable was empty, so I manually set it to /usr/bin/qmake and /usr/bin, but neither worked; same error.
Thank you
|
QTDIR shouldn't really be necessary, but try setting it to /usr/share/qt5.
You could build the Debian source package instead:
sudo apt-get install devscripts dpkg-dev build-essential
sudo apt-get build-dep lyx
dget http://httpredir.debian.org/debian/pool/main/l/lyx/lyx_2.2.0-2.dsc
cd lyx-2.2.0
dpkg-buildpackage -us -uc
The first two commands install the packages necessary to build lyx; then dget downloads and extracts the source package, and dpkg-buildpackage builds it and produces a series of .deb packages you can install manually using dpkg as usual.
| Compiling Lyx 2.2 on Debian |
1,538,791,119,000 |
I am trying to compile and install mesa3D from source.
(ftp://ftp.freedesktop.org/pub/mesa/11.0.0/mesa-11.0.0-rc3.tar.gz)
I am at the configure step
./configure \
CXXFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \
CFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \
--disable-xvmc \
--disable-glx \
--disable-dri \
--with-dri-drivers="" \
--with-gallium-drivers="swrast" \
--enable-texture-float \
--disable-shared-glapi \
--disable-egl \
--with-egl-platforms="" \
--enable-gallium-osmesa \
--enable-gallium-llvm=yes \
--with-llvm-shared-libs \
--prefix=/opt/mesa/11.0.0/llvmpip
I keep getting the error about configure not finding the LIBDRM library
checking for LIBDRM... no
configure: error: shared GLAPI required when building two or more of
the following APIs - opengl, gles1 gles2
Even though the library is known to ldconfig
ldconfig -p | grep drm
libdrm_radeon.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libdrm_radeon.so.1
libdrm_radeon.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libdrm_radeon.so
libdrm_nouveau.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libdrm_nouveau.so.1
libdrm_nouveau.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libdrm_nouveau.so
libdrm_intel.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libdrm_intel.so.1
libdrm_intel.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libdrm_intel.so
libdrm.so.2 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libdrm.so.2
libdrm.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libdrm.so
I tried to use the LDFLAGS env variable but without success
LDFLAGS='-L/usr/lib/x86_64-linux-gnu/' ./configure <my configure parameters here>
or
export LDFLAGS="-L/usr/lib/x86_64-linux-gnu/" && ./configure <my configure parameters here>
Here is the part in the configure script (that I assume is) generating this error
# Check for libdrm
pkg_failed=no
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for LIBDRM" >&5
$as_echo_n "checking for LIBDRM... " >&6; }
if test -n "$LIBDRM_CFLAGS"; then
pkg_cv_LIBDRM_CFLAGS="$LIBDRM_CFLAGS"
elif test -n "$PKG_CONFIG"; then
if test -n "$PKG_CONFIG" && \
{ { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libdrm >= \$LIBDRM_REQUIRED\""; } >&5
($PKG_CONFIG --exists --print-errors "libdrm >= $LIBDRM_REQUIRED") 2>&5
ac_status=$?
$as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
test $ac_status = 0; }; then
pkg_cv_LIBDRM_CFLAGS=`$PKG_CONFIG --cflags "libdrm >= $LIBDRM_REQUIRED" 2>/dev/null`
test "x$?" != "x0" && pkg_failed=yes
else
pkg_failed=yes
fi
else
pkg_failed=untried
fi
Could you provide some suggestions or hints to solve this problem?
Thanks
|
Here is how I managed to install Mesa3D from source on Debian. Thanks to all for the answers and comments.
First, I had to install libdrm-dev package.
# apt-get install libdrm-dev
Then, check where the header and lib files were installed
# dpkg-query -L libdrm-dev
...
/usr/include/libdrm/drm.h
...
/usr/lib/x86_64-linux-gnu/libdrm.a
...
After that, export two env variables needed by configure to link to libdrm (https://stackoverflow.com/questions/24644211/mesa3d-install-cant-find-libdrm)
# export LIBDRM_CFLAGS="-I/usr/include/libdrm/"
# export LIBDRM_LIBS="-L/usr/lib/x86_64-linux-gnu/"
Finally, configure, make and make install
# ./configure <parameters here>
# make -j24 # running on a 24 cores machine
# make -j24 install
Otherwise, the second error I was getting,
configure: error: shared GLAPI required when building two or more of
the following APIs - opengl, gles1 gles2
was not linked to libdrm. It was because of libglapi (the shared GLAPI), which shouldn't be disabled when running configure!
| Install Mesa3D on Debian - LIBDRM not found by configure autoconf |
1,538,791,119,000 |
When creating a Windows static library, we simply create a .lib file which should be included in the linker path.
When creating a Windows shared library, along with the .dll we also generate a .lib file. This lib file contains the signatures of the API exposed by the library.
There are two ways to use this library
Either we can directly reference the library API in our project and add the path to the .lib file in the linker properties. Some people call this a statically linked dynamic library.
Or we can explicitly load the dynamic library at runtime. In this case we need not specify the lib file path for the linker. Call it a dynamically linked dynamic library.
My question is: do we have something similar for shared libraries on Linux, or just the static library (.a) and shared library (.so)?
I know how to include a static library on Linux using gcc's -l option. Can we use the same option for including a dynamic library (.so) as well?
|
I can't say I understand what a "statically linked dynamic library" is, nor do I know anything about signatures contained in libraries (sounds interesting though: does this mean the linker is able to check for type mismatches in arguments and return types at link time? ELF definitely does not have such a feature), so this answer will not be from a comparative point of view. Also, as your question is very broad, the answer will be superficial in detail.
Yes, you can create either a static library (.a) or a shared library (.so). When the linker looks for libraries requested with -l, it will prefer the shared library if both exist, unless overridden with an option like -static.
When building a library from source code, one only needs to build it as a static library (.a) or as a shared library (.so), not both. Still, quite a few packages' build scripts are set up to build both versions (which requires compiling twice, once with position independent code and once without) in order to give consumers of the library the choice of which one to link with.
The necessary pieces of a static library are totally incorporated into the binary that is built. There is no need to have the .a file available at run time. In contrast, a shared library that was linked to a binary has to be available at run time, although the run-time dynamic linker will typically search for it under a modified name, its "soname" (usually libsomething.so at link time and libsomething.so.<integer> at run time), which is a feature that allows multiple different versions of a library with slightly different APIs to be installed in the system at the same time.
In your question you mention also explicitly loading dynamic libraries at run time. This is often done for modular applications or applications with plugins. In this case, the library in question (often called a "module" or "plugin") is not linked with the application at all and the build-time linker knows nothing of it. Instead, the application developer must write code to call the run-time dynamic linker and ask it to open a library by filename or full pathname. Sometimes the names of the modules to open are listed in the application's configuration file, or there is some other piece of application logic that decides which modules are or aren't needed.
| Types of dynamic linking in Unix/Linux environments |
1,538,791,119,000 |
I am new to software development, and over the course of compiling about 20 programs and dependencies from source I have seen a rough pattern, but I don't quite get it. I'm hoping you could shed some light on it.
I am SSHing into an SLC6 machine without root permissions, so I have to install all the software dependencies and, the most difficult part, LINK them to the right place.
For instance: I need to install log4cpp. I download a tarball and unpack it
./autogen.sh (if this one isn't present, just continue to the next step)
./configure
make
So it is built in the folder itself, alongside the source code, just lying there dormant until I can link to it in the right way.
Then there is another program which I need to install, and it requires me to specify the lib and include dirs for some dependencies:
--with-log4cpp-inc=
--with-log4cpp-lib=
For SOME source compilations, the folder has a lib, bin and inc or include dir - Perfect!
For some, the folder has just lib and inc dir.
For some, the folder has just inc dir.
I have no problem when they all have a nice, easy-to-find folder. But I often run into problems, like with log4cpp.
locate log4cpp.so
returns null
(The lib dirs have .so files in them, or do they?)
So I have a problem, in this specific instance: the library dir is missing and I cannot find it. But I want to know how to solve the problem every time, and also to have some background information. However, my googling seems to return nothing when searching for how library, include and bin environment variables work. I have also tried looking up the documentation for the program, but it seems that the questions I have ("Where is the lib dir, where is the include dir, where is the bin dir?") are so trivial that the authors do not even bother to state them.
So:
What is an include dir, what does it do, contain, how do I find it.
What is a library dir, what does it do, contain, how do I find it - every time - useful commands perhaps.
What is a binary dir, what does it do, contain, how do I find it.
|
Library files are usually prefixed with lib; your locate command might have been more successful if you were less specific: locate "*log4cpp*".
With regard to shared libraries (i.e., .so files -- this is usually but not necessarily the case; see "What is a library dir?" below) whereis will usually find the appropriate path but does not support globbing, so you'd have to get the name right, sans suffix (whereis liblog4cpp). ldconfig -p is even better, because you are getting the information straight from the horse's mouth (ldconfig configures the cache used by the linker, which manages shared libraries).
ldconfig -p | grep log4cpp
Note that to build against the library, you also need the relevant include header, which is probably not installed by default by the distro; those come in separate -dev or -devel packages.
What is an include dir?
An include directory contains C and C++ header files that are used this way in source code:
#include <foobar.h>
#include <foo/bar.h>
They are organized into hierarchies, some of which are stipulated by (C/C++) language standards. However, the path to the top of the hierarchy is specific to the system and known to the compiler/preprocessor. E.g., these two files might be found at /usr/include/foobar.h and /usr/include/foo/bar.h.
Linux systems usually have two top level include directories in play, /usr/include and /usr/local/include (the latter takes precedence).
Include files are not necessary to the compiled software, only to creating them, which is why installing libfoobar from a distro package will get you libfoobar.so but not foobar.h. That's in the libfoobar-dev package (nb. naming conventions vary somewhat across distros).
What is a library dir?
A library directory contains libraries of two forms, dynamic (aka. shared) and static. Most of them are the former. They correspond roughly to include directories but there's often more of them (/lib, /lib64, /usr/lib, /usr/local/lib, etc.; some of those may be symlinks to others).
A shared library is one which is used at runtime; if an executable links to a shared library, (parts of) both of them are loaded into memory as necessary in order for the program to run. If something is already using the library, it is already in memory and does not have to be loaded again; the two applications will not interfere with each other since the shared part is read-only to them. Shared libraries by convention use the suffix .so.
A static library is built into an executable at compile time and is not subsequently necessary to run the executable. This is less common because the library cannot be shared with other applications, which potentially wastes a considerable amount of RAM. Static libraries by convention use the suffix .a.
What is a binary dir?
A binary directory contains executable program files, such as ls or firefox. Executables in the *nix world do not use suffixes. These directories are usually in the $PATH variable, otherwise you would have to type /usr/bin/ls all the time. Which executable will be used when you type ls can be determined with the whereis or which command.
If .configure allows you to specify a library or include directory, you usually only have to do so if it is in a non-standard place. Try it without, and if it is not found, then use the --with-inc= option.
| Paths relevant to compiling from source |
1,538,791,119,000 |
When I am running a line like:
./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu {*shortened*} \
--with-imap-ssl=/usr/include/openssl/ --enable-ftp --enable-mbstring --enable-zip
I understand what the "x86_64-redhat-linux-gnu" means descriptively, but I have questions?
1) Is there a list somewhere of all the choices? Either in each configure script or on the internet.
2) Does making the answer more specific or more generic have much of an effect on the outcome?
Thank you.
|
The --build and --host options to configure scripts are standard configure options, and you very rarely need to specify them unless you are doing a cross-build (that is, building a package on one system to run on a different system). The values of these options are called "triples" because they have the form cpu-vendor-os. (Sometimes, as in your case, os is actually kernel-os, but it's still called a triple.)
The base configure script is quite capable of deducing the host triple, and you should let it do that unless you have some really good evidence that the results are incorrect. The script which does that is called config.guess, and you'll find it somewhere in the build bundle (it might be in a build-aux subdirectory). If you're doing a cross-build and you need to know the host triple, the first thing to try is to run config.guess on the host system.
The values supplied (or guessed) for --host and --build are passed through another script called config.sub, which will normalize the values. (According to the autoconf docs, if config.sub is not present, you can assume that the build doesn't care about the host triple.) The developers of a specific software package might customize the config.sub script for the particular needs of their build, and there are a lot of different versions of the standard config.sub script, so you shouldn't expect config.sub from one software package to work on another software package, or even on a different version of the same software package.
Despite all the above, autoconf'ed software packages really should not need to know the names of the host os and vendor, except for identifying default filesystem layout so that they provide the correct default file locations.
You can read through config.sub to get an idea of the range of options which will be recognized, but it is not so easy to figure out how the values are used, or even if the values are used. The first field -- the cpu -- is the most likely to be used.
You can get a list of all the options by typing:
./configure --help
or, better,
./configure --help | less
since there are always a lot of options.
Other than the standard options (--build, --host and --target as above, and the options which override file locations), the specific options allowed by each configure script are different. Since they also tend to change from version to version of the software package, you should always check the configure script itself rather than relying on external documentation.
Unfortunately, the contents of the configure script's help are not always 100% complete, because they rely on the package developers to maintain them. Sometimes unusual or developer-only options are not part of the ./configure --help output, but that is usually an indication that the option should not be used in a normal install.
| Compiling from source: What are the options for config script "build"? |
1,538,791,119,000 |
Is there a system or product that can be used in automating the process of building 32bit and 64bit libraries for multiple platforms (Solaris (Sparc & x86), and windows) used in building C++ based software applications that is replicable by other users and on different machines?
I'm searching to see if there is a product or system that is available to automate the process of compiling libraries as new ones come out for our software builds. We build software products in Solaris and Windows both 32bit and 64bit. We want to build the new libraries and then place them out in SVN or on a server for developers to grab. We need a process that is documentable and able to be done by any of our developers and able to be easily setup and reproduced on any machine.
The two options we are using so far are OSC scripting and bash scripting. I'm looking for more options. I am mostly concerned with building the libraries, not the applications. I want the new libraries available for the developers to use in their builds. This is C++ development.
|
You need a build automation tool, of which many exist. Even if you restrict to decent tools for building C++ programs that work on both Solaris and Windows, there are probably hundreds of choices. The classic language-agnostic build automation tool in the Unix and Windows world alike is make. There is a general consensus that it's possible to do better but no consensus as to who has actually done better.
Build automation tools merely execute a series of instructions to produce the resulting binaries and other products from the source. It sounds like you probably want a continuous integration tool, which could do things like checking out from svn (possibly automatically in a commit hook), calling make, calling test scripts, uploading the resulting product and test result somewhere, send a notice when the build is finished and show some kind of dashboard with the history of builds.
There isn't a de facto standard continuous integration tool, let alone “best”. Check through the Wikipedia list, pick a few likely-looking ones and spend a few minutes looking through their documentation. Then select one and set it up. Unless you have other requirements that you didn't mention, most of these products should be suitable.
| What is the best tool for automating the building 32-64bit libraries for Unix & Windows building C++ software, replicable by users and machines? |
1,538,791,119,000 |
I've compiled git from source after git from the repository was already installed via the package manager.
In that process, the "from source" git took its place as the "main system git".
user@jeanny:~$ git --version
git version 1.8.3.2
Is there a way to set the git from the repo as the "main system git"?
|
You can confirm this by doing the following:
$ /usr/bin/git --version
$ /usr/local/bin/git --version
It's likely that you now have 2 versions of git installed which is completely fine, so long as they're kept in separate directories.
The newly compiled version of git is most likely the one in the directory /usr/local/bin.
You can use the $PATH environment variable to control which git gets used by controlling the order of how things appear in the $PATH.
For example:
system version of git is the default
PATH=/usr/bin:/usr/local/bin
newly compiled version of git is the default
PATH=/usr/local/bin:/usr/bin
What about alternatives?
The OP asked the following follow-up question in the comments:
Where does update-alternatives fit into this picture?
Alternatives is a mechanism that allows your system to incorporate tools that aren't installed in /usr/bin to be accessible through /usr/bin by putting a link in the /usr/bin directory that is then managed by software. An example says it best. On my system, Java is managed as an alternatives app:
$ ls -l /usr/bin/java
lrwxrwxrwx. 1 root root 22 Dec 26 2010 /usr/bin/java -> /etc/alternatives/java
You can tell because of the above link under /usr/bin. Given this is a link managed by alternatives doesn't change the fact that the link is still under the directory /usr/bin. So when we manipulate the $PATH as described above, alternatives is a non-issue.
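You can watch PATH order decide the winner; the sketch below uses ls rather than git so it works even on a machine without git installed:

```shell
# The first directory on PATH that contains the command wins.
PATH=/usr/local/bin:/usr/bin:/bin
command -v ls     # prints the path of the ls that would run
```

Swap the order of the directories and command -v (or `type`) will report the other copy, which is exactly how you choose between /usr/bin/git and /usr/local/bin/git.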
| What happens to the old binary when a new one compiled from source? |
1,372,903,716,000 |
When you build a deb, how do you make it so that arch-independent data, such as plugin files, gets packaged into a separate .deb?
|
In debian packaging, the control file contains the details about the binary packages that the source package will produce. You will need to specify both your arch dependent and arch independent packages in the control file.
Using debhelper, you will want your software's build system to install to debian/tmp; for multi-binary packages, dh_auto_install arranges this by passing DESTDIR=debian/tmp, so the configure prefix should be the final installation prefix. How you do this will depend on the build system of the software. For example, if the software's build system uses GNU autotools, you would use the following debhelper short rules:
override_dh_auto_configure:
	dh_auto_configure -- --prefix=/usr
From there, you want to use dh_install to move those files into the appropriate directories for packaging. To do this, you need a file for each binary package named <package_name>.install. The file should contain filenames or patterns to be included in the package.
Here is the example provided by the dh_install manpage:
EXAMPLE
Suppose your package's upstream Makefile installs a binary, a man page,
and a library into appropriate subdirectories of debian/tmp. You want
to put the library into package libfoo, and the rest into package foo.
Your rules file will run "dh_install --sourcedir=debian/tmp". Make
debian/foo.install contain:
usr/bin
usr/share/man/man1
While debian/libfoo.install contains:
usr/lib/libfoo*.so.*
If you want a libfoo-dev package too, debian/libfoo-dev.install might
contain:
usr/include
usr/lib/libfoo*.so
usr/share/man/man3
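For completeness, here is what the split looks like in a hypothetical debian/control (the foo/foo-data package names are invented): the arch-dependent binaries go in a package with Architecture: any, the plugins/data in one with Architecture: all.

```shell
# Write an example control file and show it; "any" is rebuilt per
# architecture, "all" yields one arch-independent .deb.
cat > /tmp/control.example <<'EOF'
Package: foo
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, foo-data (= ${source:Version})

Package: foo-data
Architecture: all
Depends: ${misc:Depends}
EOF
cat /tmp/control.example
```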
| Building deb: How to put arch independent files into separated .deb package? |