| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,308,324,734,000 |
How can I list the configuration options that ssh/sshd was compiled with?
I'm attempting to troubleshoot a SELinux configuration issue, and want to make sure that SSH was compiled with --with-selinux.
|
I don't think there's a way to list compilation options, but something like SELinux support should be apparent from the libraries that the executable is linked against:
$ ldd /usr/bin/ssh /usr/sbin/sshd | egrep '^/|selinux'
/usr/bin/ssh:
/usr/sbin/sshd:
libselinux.so.1 => /lib/libselinux.so.1 (0x00007fbbfed5f000)
Looks like sshd has SELinux support but ssh doesn't (why would it?) on my system.
Another thing you could check (e.g. in case you had a static binary) is whether the binary references some SELinux functions:
strings /usr/sbin/sshd | grep -i selinux
| Output compiler options from SSH |
1,308,324,734,000 |
When I am building software from source on a GNU+Linux system, during the ./configure stage I frequently see the following line:
checking for suffix of executables...
How do I create such a check in a bash script?
The reason I want to know this is that I want to create a makefile in which it compiles with suffix .exe on Cygwin, but no suffix on true GNU+Linux.
|
The test is done by compiling a small dummy C program and by checking how the compiler names the output file.
The following example is a simplified version of what configure is doing
#!/bin/sh
cat << EOT > dummy.c
int main(int argc, char ** argv) {
return 0;
}
EOT
gcc -o dummy dummy.c
if [ -f dummy.exe ] ; then
    EXEEXT=.exe
else
    EXEEXT=
fi
rm -f dummy dummy.exe dummy.c
echo "executable suffix: '$EXEEXT'"
I would suggest using autoconf to generate a configure script for this purpose.
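For the makefile half of the question, the detected suffix can then be consumed through a variable, conventionally called EXEEXT. A minimal sketch (the target and source names are placeholders; pass the value on the command line or have a configure step substitute it):

```make
# EXEEXT is empty on GNU/Linux and ".exe" on Cygwin,
# e.g. invoke as `make EXEEXT=.exe` or let a configure step set it
EXEEXT ?=

prog$(EXEEXT): prog.c
	$(CC) -o $@ $<
```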
| Find out extension of executable files? |
1,308,324,734,000 |
I am working on a project that includes building an old kernel version of linux. This is fine, but I still need to patch the kernel of all previously found security vulnerabilities based on CVEs. I have the CVEs and have extracted the names of the vulnerable files mentioned in them, along with the kernel versions it affects.
So far, I have found about 150 potential vulnerabilities that could affect my build, but obviously some of them affect files relevant to graphics drivers that I don't use. So far, I have just gone through the list manually, checking if the files are included by using make menuconfig, and cating Kconfig in relevant folders. This has worked alright so far, but these methods don't show the actual file names (e.g. ipc/sem.c) so it takes more work than necessary.
Ideally, I would like to somehow print a list of all the files that will be included in my build, and then just write a script to check if vulnerable files are included.
How can I find the names of every source file (e.g. ipc/sem.c) that will be included in my build?
|
Do the build, then list the .o files. I think every .c or .S file that takes part in the build is compiled into a .o file with a corresponding name. This won't tell you if a security issue required a fix in a header file that's included in the build.
make vmlinux modules
find -name '*.o' -exec sh -c '
for f; do for x in c S; do [ -e "${f%.o}.$x" ] && echo "${f%.o}.$x"; done; done
' _ {} +
A more precise method is to put the sources on a filesystem where access times are stored, and do the build. Files whose access time is not updated by the build were not used in this build.
touch start.stamp
make vmlinux modules
find -type f -anewer start.stamp
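With that access-time listing saved to a file, a small script can cross-check it against the file names extracted from the CVEs. This is only a sketch; the function name and the two input file names are assumptions:

```shell
#!/bin/sh
# check_cve_files CVE_LIST USED_LIST
#   CVE_LIST:  one path per line, e.g. "ipc/sem.c" (extracted from the CVEs)
#   USED_LIST: output of `find -type f -anewer start.stamp` run in the tree
check_cve_files() {
    while read -r vulnfile; do
        if grep -qx "./$vulnfile" "$2"; then
            echo "INCLUDED: $vulnfile"
        else
            echo "not used: $vulnfile"
        fi
    done < "$1"
}
```

Usage would be something like `check_cve_files cve-files.txt build-used.txt`, flagging only the vulnerable files that actually took part in the build.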
| How do I know which files will be included in a linux kernel before I build it? |
1,308,324,734,000 |
I have a few Linux machines lying around and I wanted to make a cluster computer network. There will be one monitor for the controller. The controller would execute a script that would perform a task and split the load across the computers.
Let's say I have 4 computers that are all connected to the controller. I want to compile a program using GCC but I want to split the work 3 ways. How would I do that?
Any help would be appreciated.
|
You could try making a Beowulf cluster. You set up one host as a master and the rest as nodes. It's been done in the past by others, including NASA, as the Wikipedia entry on Beowulf clusters says.
Building your own cluster computer farm might cost more in power than you'd gain in compute resources.
I have not tried this myself, but I've always wanted to give it a shot.
| Remotely execute commands but still have control of the host |
1,308,324,734,000 |
I try to build the Linux kernel with Buildroot using Docker. I've created a simple Docker image:
FROM debian:7
MAINTAINER OrangeTux
RUN apt-get update && \
apt-get install -y \
build-essential \
bash \
bc \
binutils \
build-essential \
bzip2 \
cpio \
g++ \
gcc \
git \
gzip \
make \
libncurses5-dev \
patch \
perl \
python \
rsync \
sed \
tar \
unzip \
wget
WORKDIR /root
RUN git clone git://git.buildroot.net/buildroot
WORKDIR /root/buildroot
CMD ["/bin/bash"]
I want to keep dl/ and output/build/ when the container stops, so I don't have to download and compile all dependencies every time. I also want the build products on my host. Therefore I start the container like this:
$ docker run -ti -v $(pwd)/dl:/root/buildroot/dl -v \
$(pwd)/output/build:/root/buildroot/output/build -v \
$(pwd)/output/images:/root/buildroot/output/images orangetux/buildroot
I'm able to run make menuconfig which builds the configuration for Buildroot. I've made a few modifications to the defaults. Here's the output of make savedefconfig:
BR2_arm=y
BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_DEFCONFIG="at91_dt"
The next step is to build linux-menuconfig. This action failed and I've no clue what is going wrong:
$ make linux-menuconfig
/usr/bin/make -j1 HOSTCC="/usr/bin/gcc" HOSTCXX="/usr/bin/g++" silentoldconfig
make[1]: Entering directory `/root/buildroot'
BR2_DEFCONFIG='' KCONFIG_AUTOCONFIG=/root/buildroot/output/build/buildroot-config/auto.conf KCONFIG_AUTOHEADER=/root/buildroot/output/build/buildroot-config/autoconf.h KCONFIG_TRISTATE=/root/buildroot/output/build/buildroot-config/tristate.config BR2_CONFIG=/root/buildroot/.config BR2_EXTERNAL=support/dummy-external SKIP_LEGACY= /root/buildroot/output/build/buildroot-config/conf --silentoldconfig Config.in
*** Error during update of the configuration.
make[1]: *** [silentoldconfig] Error 1
make[1]: Leaving directory `/root/buildroot'
make: *** [/root/buildroot/output/build/buildroot-config/auto.conf] Error 2
The file /root/buildroot/output/build/buildroot-config/auto.conf doesn't exist.
Why does the file not exist, and how can I run make linux-menuconfig?
|
After extensive debugging I found out that mounting a folder from my host system at /root/buildroot/output/ causes the problem. Remove this mount and make linux-menuconfig works.
Further debugging revealed that specifically mounting a host folder at /root/buildroot/output/build in the container is the problem. I've no clue why.
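A possible workaround, assuming the goal is only to persist state between runs, is to keep output/build in a Docker named volume instead of a host bind mount, and bind-mount only dl/ and output/images/. An untested sketch:

```shell
docker run -ti \
    -v "$(pwd)/dl:/root/buildroot/dl" \
    -v "$(pwd)/output/images:/root/buildroot/output/images" \
    -v buildroot-build:/root/buildroot/output/build \
    orangetux/buildroot
```

The named volume buildroot-build survives container restarts, so the build tree is not rebuilt each time, while sidestepping whatever the bind mount breaks.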
| Build linux-menuconfig results in: "*** Error during update of the configuration." |
1,308,324,734,000 |
I have an old embedded board that supports only Debian Lenny. I need to install OpenSSL-1.0.1e on it. If I download the source code then try to compile the source code, I get this error
ts7500:/home/openssl-1.0.1e# make
making all in crypto...
make[1]: Entering directory `/home/openssl-1.0.1e/crypto'
gcc -I. -I.. -I../include -fPIC -DOPENSSL_PIC -DZLIB_SHARED -DZLIB -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -Wa,--noexecstack -DTERMIO -O3 -Wall -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DAES_ASM -DGHASH_ASM -c -o armcap.o armcap.c
In file included from armcap.c:8:
arm_arch.h:35:5: error: #error "unsupported ARM architecture"
make[1]: *** [armcap.o] Error 1
make[1]: Leaving directory `/home/openssl-1.0.1e/crypto'
make: *** [build_crypto] Error 1
how can I overcome that error?
Note, the output of uname -a:
ts7500:/home/openssl-1.0.1e# uname -a
Linux ts7500 3.4.0 #83 Sun May 26 17:07:14 CEST 2013 armv4l GNU/Linux
ARMV4 is defined on https://github.com/joyent/node/blob/89dcf22/deps/openssl/openssl/crypto/arm_arch.h
EDIT: If I add #define __ARM_ARCH__ 4 at the beginning of the header, it suppresses the error and the code compiles without any problem. I wonder how correct what I did is. I would appreciate a better solution (e.g., one that doesn't modify the library).
|
You haven't passed the right options to Configure. Make sure to pass the target linux-armv4. If you're cross-compiling, in addition to the target, you need to pass the cross-compiler prefix, as well as include and library paths if necessary:
./Configure linux-armv4 --cross-compile-prefix=/opt/gcc-arm/bin/arm-linux-gnueabi- -I/opt/gcc-arm/include -L/opt/gcc-arm/lib
| Backporting OpenSSL-1.0.1e to Debian Lenny (armv4l) |
1,308,324,734,000 |
I'm not very knowledgeable on this topic, and therefore can't figure out why the following command does not work:
$ gfortran -o dsimpletest -O dsimpletest.o ../lib/libdmumps.a \
../lib/libmumps_common.a -L/usr -lparmetis -lmetis -L../PORD/lib/ \
-lpord -L/home/eiser/src/scotch_5.1.12_esmumps/lib -lptesmumps -lptscotch \
-lptscotcherr /opt/scalapack/lib/libscalapack.a -L/usr/lib/openmpi/ \
-lmpi -L/opt/scalapack/lib/librefblas.a -lrefblas -lpthread
/usr/bin/ld: cannot find -lrefblas
collect2: ld returned 1 exit status
This happens when compiling the mumps library. The above command is executed by make. I've got the librefblas.a in the correct path:
$ ls /opt/scalapack/lib/ -l
total 20728
-rw-r--r-- 1 root root 619584 May 3 14:56 librefblas.a
-rw-r--r-- 1 root root 9828686 May 3 14:59 libreflapack.a
-rw-r--r-- 1 root root 10113810 May 3 15:06 libscalapack.a
-rw-r--r-- 1 root root 653924 May 3 14:59 libtmg.a
Question 1: I thought the -L switch of ld takes directories, why does it refer to the file directly here? If I remove the librefblas.a from the -L argument, I get a lot of "undefined reference" errors.
Question 2: -l should imply looking for .a and then looking for .so, if I recall correctly. Is it a problem that I don't have the .so file? I tried to find out by using gfortran -v ..., but this didn't help me debugging it.
|
I was able to solve this with the help of the comments, with particular credit to @Mat.
Since I wanted to compile the openmpi version, it helped to use mpif90 instead of gfortran, which on my system expands to:
$ mpif90 --showme
/usr/bin/gfortran -I/usr/include -pthread -I/usr/lib/openmpi -L/usr/lib/openmpi -lmpi_f90 -lmpi_f77 -lmpi -ldl -lhwloc
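To make the MUMPS build pick up the wrapper, it should be enough to override the Fortran compiler variable when invoking make. The variable names below (FC, FL) are assumptions; check the Makefile.inc of your MUMPS tree:

```shell
# Hypothetical: MUMPS builds typically take the compiler and the link
# driver from FC and FL in Makefile.inc
make FC=mpif90 FL=mpif90
```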
| Why can't ld find this library? |
1,308,324,734,000 |
I just compiled mplayer2 from source (git://git.mplayer2.org/mplayer2-build.git) because the repository (ubuntu 12.04) version didn't work on my system. Since I have old hardware I was just wondering if there are some compiler flags I could use to optimize it for my hardware.
The CPU is an Athlon XP 2200+ (mobile), 1GB RAM, graphics: Nvidia GeForce4 420 Go.
I also want to do the same on another old system with similar specs:
CPU: Athlon 1,2 GHZ, 1GB RAM, graphics: [SiS] 65x/M650/740
|
General recommendation
If the version of mplayer2 from your distribution's repository doesn't work for you, it is a good idea to report what didn't work to the bug tracking system, which benefits both you and others:
It gets fixed for your hardware in this release of your distribution.
It will (with high probability) work when you upgrade to a newer version
of the distribution, without having to worry.
It will benefit others that may have the same problem but, like you, have not taken on the task of reporting the error.
Recompilation on a particular system
Recompiling the program specifically for your machines is likely to have
better results than the "generic" flavor that a distribution releases (this
is, BTW, one of the motivations that the Gentoo folks have when they
recompile things to their own systems).
Of course, you may gain some speed, but you lose the portability of the binaries.
The generic compilation
That being said, the general way of recompiling a program on a current Debian/Ubuntu system is to fetch the source package and its build dependencies with something like:
sudo apt-get build-dep mplayer2
sudo apt-get install fakeroot
apt-get source mplayer2
then edit the file debian/rules inside the directory created by the last
command to change the values that go in the CFLAGS, CPPFLAGS,
CXXFLAGS, and LDFLAGS.
What can you do to tailor the application to your machines? You will have to
experiment (read: "measure/benchmark", see below) with which level of
optimization (like -O2, -Os, or -O3) the program runs faster.
To actually compile the program, you will need to run, inside the directory
created by the apt-get source mplayer2 command:
fakeroot debian/rules binary
sudo dpkg -i ../*.deb
With GCC versions 4.7 or newer, you can even experiment with the -Ofast
compilation level, which for playing videos is not going to cause too much
harm, but which can give you some improvements (enough to not accumulate
frames and get audio and video out of sync).
The system/hardware specific parts of the compilation
To compile the program specifically for the machine where you will be
executing it, it is a good idea to use GCC's -march=native flag. This will
probably make the produced binaries unusable on other systems, but as long
as you care only for your system, that's the way to go.
Just to give you an idea of what options are enabled on my Core i5-2410M
when I use -march=native, see (output reformatted to not destroy the
layout of the site):
gcc -march=native -E -v - < /dev/null 2>&1 | grep cc1
/usr/lib/gcc/i486-linux-gnu/4.7/cc1 -E -quiet -v -imultiarch i386-linux-gnu - \
-march=corei7-avx -mcx16 -msahf -mno-movbe -maes -mpclmul -mpopcnt \
-mno-abm -mno-lwp -mno-fma -mno-fma4 -mno-xop -mno-bmi -mno-bmi2 \
-mno-tbm -mavx -mno-avx2 -msse4.2 -msse4.1 -mno-lzcnt -mno-rdrnd \
-mno-f16c -mno-fsgsbase --param l1-cache-size=32 \
--param l1-cache-line-size=64 \
--param l2-cache-size=3072 -mtune=corei7-avx
From there you can see that GCC detected some "advanced" instructions that
my computer has (AVX) and others that it doesn't have (AVX2).
How to measure the results
As a hint, to benchmark, just play a short video, say, foo.mkv with:
mplayer -benchmark -vo null -nosound foo.mkv
This will "play" the video as fast as your system can and tell you how many
seconds it took to "play" the video in its entirety. Note that I said "play"
in quotes, because we are disabling:
The sound decoding with -nosound. Usually minor time is spent here, in
comparison to the other parts of playing a video.
The time taken to actually display the video (-vo null).
To see if the video card is getting in the way or not, you can omit the -vo
null part from the command above and see if the video you want plays faster
than real-time (or whatever your goal is).
Some final words, part 1: the specific case of mplayer2
That being said, a peculiarity of mplayer2 (and of regular mplayer, when the latter is taken from distributions) is that most of their processing is "offloaded" to libraries. In particular, a lot of the decoding is done by libav or ffmpeg, and those are the packages that should be compiled/optimized in the first place.
In the case of "vanilla" mplayer (not mplayer2) taken from upstream, it uses
embedded copies of many libraries, which means that, if you compile it from
the upstream sources (instead of the method that I gave you above with
apt-get source mplayer2 etc), it will also compile its own libav/ffmpeg
and have the potential of being much faster than the alternatives.
Some final words, part 2: getting some gains without recompiling
It is not always necessary to recompile the mplayer/mplayer2 binaries
provided by your distribution if you change a few configuration parameters.
To avoid all the work above, I would begin by playing the videos with
something like:
mplayer -framedrop -lavdopts fast:skipframe=nonref:skiploopfilter=nonref foo.mkv
Of course, you can play with the options that I just gave you and the
manpage documents the possible values for skipframe and skiploopfilter,
among other things.
And happy watching videos!
| Compiler flags for mplayer2 to optimize it for old hardware |
1,308,324,734,000 |
I was trying to make a linux server become a radius client. So I downloaded pam_radius. By following the steps from this website : openacs.org/doc/install-pam-radius.html and by following these steps :
cd /usr/local/src
wget ftp://ftp.freeradius.org/pub/radius/pam_radius-1.3.16.tar
tar xvf pam_radius-1.3.16
cd pam_radius
make
cp pam_radius_auth.so /lib/security
I thought I could install it but I got stuck at "make" I get this error message:
[root@zabbix pam_radius-1.4.0]# make
cc -Wall -fPIC -c src/pam_radius_auth.c -o pam_radius_auth.o
make: cc: Command not found
make: *** [pam_radius_auth.o] Error 127
I googled this error message and someone said they installed pam-devel. But I get the same message even after installation of pam-devel. What can I do?
|
Your error message is:
make: cc: Command not found
which tells you that you are missing the C compiler. As @GAD3R suggests, installing the Development Tools group will correct this. You probably also need the pam-devel package.
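On CentOS that amounts to something like the following (group and package names as commonly found on CentOS 7):

```shell
sudo yum groupinstall -y "Development Tools"
sudo yum install -y pam-devel
```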
But, that said: there's really no reason to build pam_radius yourself, as it already exists in EPEL ("Extra Packages for Enterprise Linux"). Find instructions for configuring it here, and then just sudo yum install pam_radius.
| "cc: Command not found" when compiling a PAM module on Centos |
1,308,324,734,000 |
Why does a Linux distribution have gcc installed in advance? Is it because most of the applications in linux are written in C?
What would happen if the gcc directory is deleted?
|
Why does a Linux distribution have gcc installed in advance?
A Linux distribution is rather vague. Some install it, most offer to install it (possibly even if you select the defaults during installation). However not all distributions will install it and you usually have a choice.
Is it because most of the applications in Linux are written in C?
No. A C-compiler (any C-compiler, GCC is just an example, it might just as well be clang/lvm, or something else) is just incredibly handy to have. And not just on a Linux system, but also on BSDs or windows installations.
What would happen if the gcc directory is deleted?
Assuming there are no programs installed which depend on any part of GCC (or on a part of it, such as the preprocessor), everything will continue to work just fine. You just cannot compile any new C programs with the GCC version you just deleted. If it was the last C compiler (you can have multiple compilers installed), then you will need to use a binary package to reinstall one if you want to compile any C programs later.
Note that with What would happen if the gcc directory is deleted? I assume you would delete it using the proper package manager. Just randomly deleting directories on any OS is not a safe thing to do.
| Why does Linux have a C compiler by default? |
1,308,324,734,000 |
I'm trying to install soundconverter via git. This is what I did in Terminal:
$ sudo apt-get update && sudo apt-get upgrade && sudo apt-get install git
$ git clone https://github.com/kassoulet/soundconverter.git
$ cd soundconverter
$ ./autogen.sh
And here is the output from running /home/USERNAME/soundconverter/autogen.sh
*** WARNING: I am going to run 'configure' with no arguments.
*** If you wish to pass any to it, please specify them on the
*** './autogen.sh' command line.
configure.ac:22: warning: macro 'AM_GLIB_GNU_GETTEXT' not found in library
aclocal: installing 'm4/intltool.m4' from '/usr/share/aclocal/intltool.m4'
aclocal: installing 'm4/nls.m4' from '/usr/share/aclocal/nls.m4'
./autogen.sh: 27: ./autogen.sh: glib-gettextize: not found
|
Researching missing package
As you kind of picked up on this is your issue:
configure.ac:22: warning: macro 'AM_GLIB_GNU_GETTEXT' not found in library
aclocal: installing 'm4/intltool.m4' from '/usr/share/aclocal/intltool.m4'
aclocal: installing 'm4/nls.m4' from '/usr/share/aclocal/nls.m4'
./autogen.sh: 27: ./autogen.sh: glib-gettextize: not found
This message is telling you that you're missing a package. The internal logical name involved is the autoconf macro AM_GLIB_GNU_GETTEXT.
Searching for that will lead you to many threads like this one:
https://ubuntuforums.org/showthread.php?t=1131769
APT
Before we start looking, let's make sure our apt-file cache is updated:
$ sudo apt-file update
Now let's see what APT knows about this:
$ apt-file search glib-gettextize
libglib2.0-dev: /usr/bin/glib-gettextize
libglib2.0-dev: /usr/share/man/man1/glib-gettextize.1.gz
libglib2.0-doc: /usr/share/doc/libglib2.0-doc/glib/glib-gettextize.html
Good, so the package's name is libglib2.0-dev. This agrees with what our previous Google searches were returning.
We can poke at this package to see if it has the .m4 file that we appear to be missing:
$ apt-file list libglib2.0-dev | grep '.m4$'
libglib2.0-dev: /usr/share/aclocal/glib-2.0.m4
libglib2.0-dev: /usr/share/aclocal/glib-gettext.m4
libglib2.0-dev: /usr/share/aclocal/gsettings.m4
Good, so there's an .m4 macro file matching what configure was looking for.
So let's install it:
$ sudo apt-get install -y libglib2.0-dev
NOTE: Once it's installed you can query installed packages using dpkg:
$ dpkg-query -L libglib2.0-dev | grep m4
/usr/share/aclocal/glib-2.0.m4
/usr/share/aclocal/gsettings.m4
/usr/share/aclocal/glib-gettext.m4
References
How do I get a list of installed files from a package?
autogen.sh fails #27
| Warning: macro 'AM_GLIB_GNU_GETTEXT' not found in library |
1,308,324,734,000 |
I read the post here and I tried to make do with what I understood from the post but here are some questions:
Where is /lib/firmware located: in /usr/src/linux/lib/firmware, /usr/lib/firmware, or elsewhere?
Could I take a pre-built EDID from the address the post gave, tweak it with an editor like Gvim, and pass it to the kernel using the info below? The resolution I am trying to set is 1600x900@60:
1: [H PIXELS RND] : 1600.000000
2: [V LINES RND] : 450.000000
3: [V FIELD RATE RQD] : 120.000000
4: [TOP MARGIN (LINES)] : 8.000000
5: [BOT MARGIN (LINES)] : 8.000000
6: [INTERLACE] : 0.500000
7: [H PERIOD EST] : 16.648841
8: [V SYNC+BP] : 33.000000
9: [V BACK PORCH] : 30.000000
10: [TOTAL V LINES] : 500.500000
11: [V FIELD RATE EST] : 120.008471
12: [H PERIOD] : 16.650017
13: [V FIELD RATE] : 120.000000
14: [V FRAME RATE] : 60.000000
15: [LEFT MARGIN (PIXELS)] : 32.000000
16: [RIGHT MARGIN (PIXELS)] : 32.000000
17: [TOTAL ACTIVE PIXELS] : 1664.000000
18: [IDEAL DUTY CYCLE] : 25.004995
19: [H BLANK (PIXELS)] : 560.000000
20: [TOTAL PIXELS] : 2224.000000
21: [PIXEL FREQ] : 133.573440
22: [H FREQ] : 60.060000
17: [H SYNC (PIXELS)] : 176.000000
18: [H FRONT PORCH (PIXELS)] : 104.000000
36: [V ODD FRONT PORCH(LINES)] : 1.500000
If yes, where could I get an edid.bin file?
Or should I build an EDID file from scratch; if so, how could I make one?
|
Where is /lib/firmware?
The final resting place for your EDID mode firmware should be under /lib/firmware/edid. However, many Linux distributions place the example EDID mode-setting firmware source and the Makefile under the directory for the Linux kernel documentation. For Fedora, this is provided by the kernel-doc package and resides under /usr/share/doc/kernel-doc-3.11.4/Documentation/EDID. After you compile the firmware for your monitor, you can place the EDID binary anywhere that is accessible to grub upon boot, but the convention is /lib/firmware/edid/.
Can I tweak an existing edid.bin file to match my monitor's resolution?
The edid.bin files are in binary format, so tweaking one directly would not be intuitive.
How can I make an EDID file from scratch?
The post you provided links to the official kernel documentation for building your custom edid file. The same instructions are also provided in the HOWTO.txt file in the kernel documentation directory referenced above. Essentially you edit one of the example firmware files, say 1024x768.S, providing the parameters for your monitor. Then compile it with the provided Makefile and configure grub to use the new firmware.
For me, there were two tricky bits to accomplishing this. The first one is where to find the edid source file that needs to be compiled. This was answered for Fedora above.
The second tricky bit is finding the correct values to place in 1024x768.S for your monitor. This is achieved by running cvt to generate your desired modeline and then doing a little arithmetic. For a resolution of 1600x900 with 60 Hz refresh rate and reduced blanking (recommended for LCDs), you would have:
[user@host ~]$ cvt 1600 900 60 -r
# 1600x900 59.82 Hz (CVT 1.44M9-R) hsync: 55.40 kHz; pclk: 97.50 MHz
Modeline "1600x900R" 97.50 1600 1648 1680 1760 900 903 908 926 +hsync -vsync
You can match the last line of this output to the instructions in HOWTO.txt:
Please note that the EDID data structure expects the timing
values in a different way as compared to the standard X11 format.
X11:
HTimings: hdisp hsyncstart hsyncend htotal
VTimings: vdisp vsyncstart vsyncend vtotal
EDID:
#define XPIX hdisp
#define XBLANK htotal-hdisp
#define XOFFSET hsyncstart-hdisp
#define XPULSE hsyncend-hsyncstart
#define YPIX vdisp
#define YBLANK vtotal-vdisp
#define YOFFSET (63+(vsyncstart-vdisp))
#define YPULSE (63+(vsyncend-vsyncstart))
The 2nd - 5th numbers in the last line of the cvt output (1600 1648 1680 1760) are the four "HTimings" parameters (hdisp hsyncstart hsyncend htotal) and the 6th - 9th numbers (900 903 908 926) are the four "VTimings" parameters (vdisp vsyncstart vsyncend vtotal).
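As a worked example of that arithmetic, the mapping from the cvt modeline above to the EDID #defines can be sketched in shell (values taken from the modeline printed above):

```shell
#!/bin/sh
# Map the cvt modeline for 1600x900R to the EDID #defines from HOWTO.txt:
#   Modeline "1600x900R" 97.50 1600 1648 1680 1760 900 903 908 926
hdisp=1600; hsyncstart=1648; hsyncend=1680; htotal=1760
vdisp=900;  vsyncstart=903;  vsyncend=908;  vtotal=926

XPIX=$hdisp
XBLANK=$((htotal - hdisp))              # 160
XOFFSET=$((hsyncstart - hdisp))         # 48
XPULSE=$((hsyncend - hsyncstart))       # 32
YPIX=$vdisp
YBLANK=$((vtotal - vdisp))              # 26
YOFFSET=$((63 + vsyncstart - vdisp))    # 66
YPULSE=$((63 + vsyncend - vsyncstart))  # 68

printf 'XPIX=%s XBLANK=%s XOFFSET=%s XPULSE=%s\n' "$XPIX" "$XBLANK" "$XOFFSET" "$XPULSE"
printf 'YPIX=%s YBLANK=%s YOFFSET=%s YPULSE=%s\n' "$YPIX" "$YBLANK" "$YOFFSET" "$YPULSE"
```

These eight numbers are what go into your copy of 1024x768.S before compiling.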
Lastly, you'll need to compile the firmware a second time in order to set the correct CRC value in the last line (see the HOWTO.txt for details).
| How to make EDID |
1,308,324,734,000 |
I am trying to build using ./configure.
I have
Three include directories
-I/path1/include
-I/path2/include
-I/path3/include
Two link directories
-L/path1/lib
-L/path2/lib
Two -l flag options
-ltensorflow
-lasan
Two compile flags
-O3
-g
How can I put all these flags effectively as options in ./configure?
|
The canonical way to do this is to provide values for various variables in the ./configure invocation:
./configure CPPFLAGS="-I/path1/include -I/path2/include -I/path3/include" \
CFLAGS="-O3 -g" \
LDFLAGS="-L/path1/lib -L/path2/lib" \
LIBS="-ltensorflow -lasan"
If the C++ compiler is used, specify CXXFLAGS instead of (or in addition to) CFLAGS.
These variables can also be set in the environment, but recommended practice is to specify them as command-line arguments so that their values will be stored for re-use. See Forcing overrides when configuring a compile (e.g. CXXFLAGS, etc.) for details.
Note that in most cases it would be unusual to specify that many paths as flags; instead, I would expect to find --with options to tell the configure script where to find various dependencies. For example, --with-tensorflow=/path/to/tensorflow which would then result in the appropriate -I and -L flags being set. Run
./configure --help
to see what options are available.
| How to put multiple -I, -L and -l flags in ./configure? |
1,308,324,734,000 |
I have chosen Gentoo for my Linux distribution. I installed it in VirtualBox for practice purposes only, so I just gave it 10GB of disk space. Yesterday when I tried to emerge the Chromium package, it printed the following:
"There is NOT at least 5GB disk space at /var/tmp/portage/www-client/chromium-42.0.2311.90/temp. Space constrains set in the ebuild were not met"
5GB? Why does it need so much space? If that is true, it means it will be impossible to install Chromium on some machines which have little disk space.
Or is there another way to install it that I don't know about?
|
Gentoo is a Linux distribution that compiles packages from source. Compiling packages requires much more space than installing pre-compiled binaries (that is, binaries that are compiled on the machines of the distribution maintainers). When you install something from source, you also need the sources for all the compilation dependencies.
Almost all the other distributions, instead, download binaries already compiled for that distribution.
To give you an example, Chromium-browser in Linux Mint (based on Ubuntu) takes only 44MB of disk space, instead of the 5GB it takes on the system of the Mint "developer" who maintains this package and compiles the updated binaries for the Mint "users".
If you have disk space restrictions, maybe you could try a different Linux distribution that provides precompiled binaries (CentOS, Ubuntu, Mint, etc.). Also, with another distribution you will install and update packages faster and more easily, as they don't have to be recompiled each time!
| Why does Chromium use so much disk space on Gentoo Linux? |
1,308,324,734,000 |
I need to compile an old Apache version, 1.3, and the compilation process fails because:
mod_auth_dbm.c:77:18: fatal error: ndbm.h: File or directory not found
Where is this ndbm.h file?
|
That file here (Fedora 18) belongs to gdbm-devel, the package containing it for Ubuntu should be named similarly. Check the dependencies for the source, you'll probably need a swath of -devel packages corresponding to each dependency.
Why do you need an outdated Apache, which moreover has known vulnerabilities? Why doesn't the distribution's Apache work? It is probably a much better idea to port whatever requires that Apache forward than to get stuck in prehistory...
| Ubuntu: can't find ndbm.h |
1,308,324,734,000 |
Does optimizing for size with gcc -Os only reduce the binary size of a program, or does it reduce its runtime memory usage as well? I know what exactly the results are depend on the specific code, but in general is the result a lower memory usage?
|
Since the program's code needs to be loaded into memory, -Os will result in lower memory usage for the code itself. But that is essentially the only effect on memory usage it will have; the memory the program allocates at runtime is determined by its logic, not by the optimization level.
| Does optimizing for size reduce runtime memory usage as well as binary size? |
1,403,713,301,000 |
If I'm installing from source, do I need to keep the extracted tarball directory? So if I download the git tarball. I then do:
tar -xvzf git.tar.gz
This will create a git.x.x directory, into which I cd, then run ./configure etc. Once I'm done with this process, and git, or whatever, is installed, do I need to keep the original git.x.x extracted directory, or is that just used for compiling the program?
I'm somewhat confused by all the directories, and folders used for programs.
|
You don't need to keep it. However, you may want to keep the package tarball itself for:
make uninstall
Generally source packages have this as a make target so that you can tidily remove the package from your system if desired. It should not depend on preserving the state of the build, so you can erase the directory and then later unpack the tarball and just do it.
Things from a git repo may be less consistent. You can check whether the target exists with make --dry-run uninstall [1]. If so, tar or otherwise archive the directory yourself and stash it.
If you know you can get the same package in the same version anytime, you don't need to keep the tarball either. And of course, if you know what was installed and it is simple and straightforward (e.g. just an executable and a man page), this is not a big concern.
1. Implying a way to deduce what's installed by make install ;)
| Installing from source - do I need to keep the extracted tarball directory |
1,403,713,301,000 |
For a client I needed to add boost 1.54 to the system. So I downloaded the latest version (1.55) and built it within a special directory: /usr/local/lib/boost1.55/. This works. Then I had to adapt the Makefile in this way.
LIBS = $(SUBLIBS) -L/usr/lib/x86_64-linux-gnu
-LC:/deps/miniupnpc -lminiupnpc -lqrencode -lrt -LC:/deps/boost/stage/lib -Lc:/deps/db/build_unix -Lc:/deps/ssl -LC:/deps/libqrencode/.libs -lssl -lcrypto -ldb_cxx -L/usr/local/lib/boost1.55/boost_system-mgw46-mt-sd-1_54 -L/usr/local/lib/boost1.55/boost_filesystem-mgw46-mt-sd-1_54 -L/usr/local/lib/boost1.55/boost_program_options-mgw46-mt-sd-1_54 -L/usr/local/lib/boost1.55/boost_thread-mgw46-mt-sd-1_54 -lQtDBus -lQtGui -lQtCore -lpthread -lboost_system -lboost_filesystem -lboost_program_options -lboost_thread
In the unmodified Makefile, the boost linkings looked like this:
-lboost_thread-mgw46-mt-sd-1_54
But this didn't work. I wasn't able to compile it because it was not found. So I added (as you can see above)
-L/usr/local/lib/boost1.55/boost_thread-mgw46-mt-sd-1_54
and
-lboost_thread
Otherwise it wouldn't compile either. After successful compilation, I executed ldd on the binary and it shows me:
libboost_system.so.1.53.0 => /usr/lib/x86_64-linux-gnu/libboost_system.so.1.53.0 (0x00007f416c169000)
libboost_filesystem.so.1.53.0 => /usr/lib/x86_64-linux-gnu/libboost_filesystem.so.1.53.0 (0x00007f416bf52000)
libboost_program_options.so.1.53.0 => /usr/lib/x86_64-linux-gnu/libboost_program_options.so.1.53.0 (0x00007f416bce4000)
libboost_thread.so.1.53.0 => /usr/lib/x86_64-linux-gnu/libboost_thread.so.1.53.0 (0x00007f416bace000)
1.53 is the version installed by the package manager. I don't understand why it links to this version. If I hadn't installed 1.55, it would not have compiled, but now it doesn't link to that version. Any explanation for that?
Actually my goal is to avoid dynamically linked libraries altogether, and I haven't figured out how to do that yet, but I still want to know why the above doesn't work as expected.
|
You don't say which Linux distribution this is, but oftentimes there is a directory where you can register paths to dynamically linkable libraries. On Red Hat-family distros such as Fedora, this directory is /etc/ld.so.conf.d/.
LD
You can add a file to this directory with the path to your newly installed library like so:
$ cat /etc/ld.so.conf.d/myboost.conf
/usr/local/lib/boost1.55
Then run this command:
$ ldconfig -v
This will process all the libraries and rebuild a "cache", /etc/ld.so.cache. This cache is what the dynamic loader uses at run time to locate shared libraries such as the one behind -lboost_thread-mgw46-mt-sd-1_54.
Example output
$ ldconfig -v
/usr/lib64/atlas:
libclapack.so.3 -> libclapack.so.3.0
libptcblas.so.3 -> libptcblas.so.3.0
libf77blas.so.3 -> libf77blas.so.3.0
libcblas.so.3 -> libcblas.so.3.0
liblapack.so.3 -> liblapack.so.3.0
libptf77blas.so.3 -> libptf77blas.so.3.0
libatlas.so.3 -> libatlas.so.3.0
/usr/lib64/wxSmithContribItems:
libwxflatnotebook.so.0 -> libwxflatnotebook.so.0.0.1
...
When adding paths to the setup, I like to confirm by going through this output to make sure things are getting picked up the way I expect them to.
LD's cache
You can always print the contents of the .cache file as well using this command:
$ ldconfig -p | head -10
2957 libs found in cache `/etc/ld.so.cache'
lib3ds-1.so.3 (libc6,x86-64) => /usr/lib64/lib3ds-1.so.3
libzvbi.so.0 (libc6,x86-64) => /usr/lib64/libzvbi.so.0
libzvbi-chains.so.0 (libc6,x86-64) => /usr/lib64/libzvbi-chains.so.0
libzrtpcpp-1.4.so.0 (libc6,x86-64) => /usr/lib64/libzrtpcpp-1.4.so.0
libzmq.so.1 (libc6,x86-64) => /usr/lib64/libzmq.so.1
libzmq.so (libc6,x86-64) => /usr/lib64/libzmq.so
libzipios.so.0 (libc6,x86-64) => /usr/lib64/libzipios.so.0
libzipios.so (libc6,x86-64) => /usr/lib64/libzipios.so
libzip.so.1 (libc6,x86-64) => /usr/lib64/libzip.so.1
Why's my ldd output still using 1.53?
This is because your binary is using dynamic libraries. So when the binary was compiled, it was against the 1.55 versions of the libraries. However when you interrogate the binary using ldd, it's within the environment that's using the contents of the .cache file. So the library within the cache that is associated with the symbols used by this binary match those for 1.53, hence you're seeing those libraries.
Your environment knows nothing of the 1.55 libraries, only your build environment, i.e. your Makefile, is aware of this.
Dynamic libraries
Think of the functions within these as symbols. A symbol is essentially a name, and these names often don't change from one version of a library to another. So if you were to look at a library such as boost, you can use the tool readelf to get a list of the symbols within one of these .so files.
Example
$ readelf -Ws /usr/lib64/libboost_date_time-mt.so | head -10
Symbol table '.dynsym' contains 261 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 000000335aa096a8 0 SECTION LOCAL DEFAULT 9
2: 0000000000000000 0 FUNC GLOBAL DEFAULT UND _ZNSt8bad_castD2Ev@GLIBCXX_3.4 (2)
3: 0000000000000000 0 FUNC GLOBAL DEFAULT UND _ZNSt6locale5_ImplD1Ev@GLIBCXX_3.4 (2)
4: 0000000000000000 0 OBJECT GLOBAL DEFAULT UND _ZTINSt6locale5facetE@GLIBCXX_3.4 (2)
5: 0000000000000000 0 FUNC GLOBAL DEFAULT UND wcslen@GLIBC_2.2.5 (3)
6: 0000000000000000 0 OBJECT GLOBAL DEFAULT UND _ZTISt11logic_error@GLIBCXX_3.4 (2)
In the above output you can see some of the FUNC definitions, these are the names that are used to "link" a function in an executable with a function from some .so library.
I'm over simplifying this, and probably explaining things slightly off, but I'm only trying to give you a general idea of how the machinery under the hood works.
References
How do I list the symbols in a .so file
| Confusion about linking boost library while compilation |
1,403,713,301,000 |
I am configuring the Linux kernel version 3.9.4. I am being asked questions about RCU (seen below). Specifically, what are each of these and what are the advantages and disadvantages of enabling or disabling some of these?
Consider userspace as in RCU extended quiescent state (RCU_USER_QS) [N/y/?]
Tree-based hierarchical RCU fanout value (RCU_FANOUT) [64]
Disable tree-based hierarchical RCU auto-balancing (RCU_FANOUT_EXACT) [N/y/?]
Accelerate last non-dyntick-idle CPU's grace periods (RCU_FAST_NO_HZ) [Y/n/?]
Offload RCU callback processing from boot-selected CPUs (RCU_NOCB_CPU) [N/y/?]
|
There are some details about these options over on the LTTng Project site. RCU stands for read-copy-update. These are synchronization mechanisms in the kernel which allow the same data to be replicated across cores in a multi-core CPU and which guarantee that the data will be kept in sync across the copies.
excerpt
liburcu is a LGPLv2.1 userspace RCU (read-copy-update) library. This
data synchronization library provides read-side access which scales
linearly with the number of cores. It does so by allowing multiples
copies of a given data structure to live at the same time, and by
monitoring the data structure accesses to detect grace periods after
which memory reclamation is possible.
Resources
There is a good reference to what RCU's are and how they work over on lwn.net titled: What is RCU, Fundamentally?.
There's also this resource by the same title as lwn.net but it's different content.
There is also the Wikipedia entry on the RCU topic too.
Finally there's the Linux kernel documentation available here: rcu.txt.
So what are these options?
RCU_USER_QS
This option sets hooks on kernel / userspace boundaries and puts RCU
in extended quiescent state when the CPU runs in userspace. It means
that when a CPU runs in userspace, it is excluded from the global RCU
state machine and thus doesn't try to keep the timer tick on for RCU.
Unless you want to hack and help the development of the full dynticks
mode, you shouldn't enable this option. It also adds unnecessary
overhead.
If unsure say N
RCU Fanout
This option controls the fanout of hierarchical implementations of
RCU, allowing RCU to work efficiently on machines with large numbers
of CPUs. This value must be at least the fourth root of NR_CPUS, which
allows NR_CPUS to be insanely large. The default value of RCU_FANOUT
should be used for production systems, but if you are stress-testing
the RCU implementation itself, small RCU_FANOUT values allow you to
test large-system code paths on small(er) systems.
Select a specific number if testing RCU itself. Take the default if
unsure.
RCU_FANOUT_EXACT
This option forces use of the exact RCU_FANOUT value specified,
regardless of imbalances in the hierarchy. This is useful for testing
RCU itself, and might one day be useful on systems with strong NUMA
behavior.
Without RCU_FANOUT_EXACT, the code will balance the hierarchy.
Say N if unsure.
RCU_FAST_NO_HZ
This option permits CPUs to enter dynticks-idle state even if they
have RCU callbacks queued, and prevents RCU from waking these CPUs up
more than roughly once every four jiffies (by default, you can adjust
this using the rcutree.rcu_idle_gp_delay parameter), thus improving
energy efficiency. On the other hand, this option increases the
duration of RCU grace periods, for example, slowing down
synchronize_rcu().
Say Y if energy efficiency is critically important, and you don't care
about increased grace-period durations.
Say N if you are unsure.
RCU_NOCB_CPU
Use this option to reduce OS jitter for aggressive HPC or real-time
workloads. It can also be used to offload RCU callback invocation to
energy-efficient CPUs in battery-powered asymmetric multiprocessors.
This option offloads callback invocation from the set of CPUs
specified at boot time by the rcu_nocbs parameter. For each such CPU,
a kthread ("rcuox/N") will be created to invoke callbacks, where the
"N" is the CPU being offloaded, and where the "x" is "b" for RCU-bh,
"p" for RCU-preempt, and "s" for RCU-sched. Nothing prevents this
kthread from running on the specified CPUs, but (1) the kthreads may
be preempted between each callback, and (2) affinity or cgroups can be
used to force the kthreads to run on whatever set of CPUs is desired.
Say Y here if you want to help to debug reduced OS jitter. Say N here
if you are unsure.
So do you need it?
I would say if you don't know what a particular option does when compiling the kernel then it's probably a safe bet that you can live without it. So I'd say no to those questions.
Also when doing this type of work I usually get the config file for the kernel I'm using with my distro and do a comparison to see if I'm missing any features. This is probably your best resource in terms of learning what all the features are about.
For example in Fedora there are sample configs included that you can refer to. Take a look at this page for more details: Building a custom kernel.
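That comparison can be done with ordinary tools. The sketch below uses two fabricated config fragments in place of a real /boot/config-* file and your .config, and compares only the explicit CONFIG_ lines:

```shell
old=$(mktemp) new=$(mktemp)
printf 'CONFIG_RCU_FAST_NO_HZ=y\nCONFIG_RCU_FANOUT=64\n' > "$old"
printf '# CONFIG_RCU_FAST_NO_HZ is not set\nCONFIG_RCU_FANOUT=32\n' > "$new"
grep '^CONFIG_' "$old" > "$old.opts"   # keep only explicit option lines
grep '^CONFIG_' "$new" > "$new.opts"
diff "$old.opts" "$new.opts" || true   # diff exits non-zero when they differ
```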
| Understanding RCU when Configuring the Linux Kernel |
1,403,713,301,000 |
I am compiling my first Kernel (3.5 rc1) from source through menuconfig.
Certain configuration options are pre-set.
Who / what determines if they are pre-set?
Does the make menuconfig somehow detect my computer and its devices and characteristics and generates them?
Or do the default configs come with the source, pre-determined by someone (who put the source out)?
|
make menuconfig doesn't dynamically determine your environment or try to set the appropriate config; it uses your .config file and the default entries in the Kconfig files.
So yes, the defaults come with the source and are specified in the Kconfig files, which also specify the help text, dependencies and other things. Have a look at a sample Kconfig file like net/Kconfig.
make localmodconfig on the other hand tries to create a custom tailored kernel configuration for your system based on the loaded modules. It takes your current configuration (typically from your distribution) and will only enable the loaded modules.
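For reference, a Kconfig entry looks like the fabricated one below; the default line is where a pre-set value comes from when the loaded .config doesn't already pin the option:

```shell
kc=$(mktemp)
cat > "$kc" <<'EOF'
config DEMO_OPTION
    bool "A demo option"
    default y
    help
      menuconfig shows this option pre-set to y because of the
      'default y' line above, unless the loaded .config overrides it.
EOF
grep -n 'default' "$kc"   # shows where the pre-set value is declared
```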
| How are Linux kernel compile config's determined? |
1,403,713,301,000 |
I am installing gcc version 5.1 on a supercomputer cluster. I configured it and am now running the make command. It has been running for several hours, as it is a long build, but the work day is now over and I have to leave. If I use Ctrl+C to quit the make and re-run make tomorrow morning, will it resume where it left off? Or will it have to start over again? Will interrupting the make cause problems or errors?
|
When you press Ctrl+C, the process (technically, the process group) that is running in your terminal is killed. You can't resurrect it. All you can do is run it again.
Running make involves a lot of steps that each compile a single file, or link some files, or run one test, etc. When you press Ctrl+C, the current step is cancelled, but the data from all the previous steps is still there. The make utility is designed to quickly find out what steps have already been performed and don't need to be performed again. So if you just run make again, it will analyze the situation for a short time (maybe a few seconds for a large projects) and resume where it left off.
If the machine isn't rebooted overnight, you can keep commands running on it, even if you log out for the night. Start a terminal multiplexer such as screen or tmux. For example, from a terminal, run
screen
This opens a new shell in your terminal. Here, switch to the relevant directory and type make. Then detach from the screen session by pressing Ctrl+A d. You're back to your original shell prompt, but the command inside screen is still running. You can log out, log back in, and reattach to the still-running screen session by running
screen -rd
| Can I resume make after a Ctrl+C? |
1,403,713,301,000 |
I'm trying to configure liquidsoap and compile it from source. The ./configure step fails at this point:
checking lo/lo.h usability... no
checking lo/lo.h presence... no
checking for lo/lo.h... no
configure: error: LO headers not found.
Now it's quite difficult to find out which lib or package is needed. I searched the package manager (aptitude on Debian) for lo, but that was quite pointless. I also asked Google for LO headers, but didn't get many results.
What does lo.h belong to?
|
There should be a package called liblo-dev on Debian that should provide this header. Simply install it using apt-get or aptitude:
aptitude install liblo-dev
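More generally, on Debian the apt-file tool maps a missing header to the package that ships it. The real command is shown in the comment; the runnable part below simulates the lookup against a tiny fake index, since it needs no package database:

```shell
# On a Debian system (after: apt-get install apt-file && apt-file update):
#   apt-file search lo/lo.h      # should report liblo-dev
idx=$(mktemp)
printf 'liblo-dev: /usr/include/lo/lo.h\nzlib1g-dev: /usr/include/zlib.h\n' > "$idx"
grep 'lo/lo.h' "$idx"            # -> liblo-dev: /usr/include/lo/lo.h
```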
| Which library are LO headers belonging to? |
1,403,713,301,000 |
On one computer I installed an application from source using configure, make and make install. Now I want to install the same application on another computer that lacks gcc, make and all the other build tools. Both computers are running Kubuntu 12.04. How can I locate all the files to copy to the second computer? What other installation steps are needed? This is a command line app.
I don't want to install build essentials on the 2nd computer. (It's a small laptop.)
|
If your source code already has Debian configuration files, you'll just need to run (in the source directory):
dpkg-buildpackage
Otherwise you can create a deb package with checkinstall
Launch the configure script first, e.g. ./configure --prefix=/usr, then do
checkinstall --install=no
It will ask a few questions; just fill in the fields so you can identify the package later.
If it succeeds, you will see a *.deb package in the source directory.
Copy and install it on the other computer.
| How to copy an installed application to another computer that lacks the build tools? |
1,403,713,301,000 |
I had asked about applying patches here. I tried today using the same procedure on a different source-package and it failed. Sharing -
~/games $ mkdir decopy
~/games/decopy $ apt-get source decopy
Reading package lists... Done
NOTICE: 'decopy' packaging is maintained in the 'Git' version control system at:
https://anonscm.debian.org/git/collab-maint/decopy.git
Please use:
git clone https://anonscm.debian.org/git/collab-maint/decopy.git
to retrieve the latest (possibly unreleased) updates to the package.
Need to get 46.9 kB of source archives.
Get:1 http://debian-mirror.sakura.ne.jp/debian unstable/main decopy
0.2-1 (dsc) [1,943 B]
Get:2 http://debian-mirror.sakura.ne.jp/debian unstable/main decopy
0.2-1 (tar) [43.2 kB]
Get:3 http://debian-mirror.sakura.ne.jp/debian unstable/main decopy
0.2-1 (diff) [1,760 B]
Fetched 46.9 kB in 42s (1,103 B/s)
dpkg-source: info: extracting decopy in decopy-0.2
dpkg-source: info: unpacking decopy_0.2.orig.tar.gz
dpkg-source: info: unpacking decopy_0.2-1.debian.tar.xz
Then listing -
~/games/decopy $ ls
decopy-0.2 decopy_0.2-1.debian.tar.xz decopy_0.2-1.dsc decopy_0.2.orig.tar.gz
Obviously decopy-0.2 is where things are.
~/games/decopy $ wget https://bugs.debian.org/cgi-bin/bugreport.cgi?att=1;bug=854052;filename=use_tqdm_progress.patch;msg=10
~/games/decopy $ ls
decopy-0.2 decopy_0.2-1.debian.tar.xz decopy_0.2-1.dsc
decopy_0.2.orig.tar.gz use_tqdm_progress.patch
~/games/decopy $ cd decopy-0.2
~/games/decopy/decopy-0.2 $ patch -p1 < ../use_tqdm_progress.patch
(Stripping trailing CRs from patch; use --binary to disable.)
patching file decopy/cmdoptions.py
(Stripping trailing CRs from patch; use --binary to disable.)
patching file decopy/tree.py
Hunk #2 succeeded at 190 (offset -6 lines).
Hunk #3 succeeded at 201 (offset -6 lines).
Hunk #4 succeeded at 303 (offset -6 lines).
Hunk #5 succeeded at 364 (offset -6 lines).
Patched it, and now using dch to have another go -
~/games/decopy/decopy-0.2 $ dch -n "Apply patch given in #854052".
~/games/decopy/decopy-0.2 $
Now the directory name didn't change, apparently because this package is not a native package like dpkg is/was.
What are the recommended steps here?
Also, is there a way to know which packages are Debian native packages and which are not? Any tests or something?
|
This is a "3.0 (quilt)" package (see debian/source/format), so you'll need to use quilt to manage the patch. Revert the patch:
patch -R -p1 < ../use_tqdm_progress.patch
then create the appropriate structure:
mkdir -p debian/patches
cp ../use_tqdm_progress.patch debian/patches
echo use_tqdm_progress.patch >> debian/patches/series
You should refresh the patch:
quilt push
quilt refresh
Your dch is fine, as is the fact that the directory name didn't change. You can build the package now:
dpkg-buildpackage -us -uc
As far as native packages go, you can spot a native package by the fact that it doesn't have a hyphen in its version (generally speaking). Here, the version is 0.2-1, so it's not a native package. Inside the package, debian/source/format would be "3.0 (native)" for a native package.
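That check can be scripted; here is a small demonstration against a throwaway tree (the quilt value is written in by the demo itself):

```shell
d=$(mktemp -d)
mkdir -p "$d/debian/source"
echo '3.0 (quilt)' > "$d/debian/source/format"
case "$(cat "$d/debian/source/format")" in
    *quilt*)  echo 'non-native: manage patches with quilt in debian/patches' ;;
    *native*) echo 'native package: changes go directly into the source' ;;
    *)        echo 'older 1.0 source format' ;;
esac
```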
| Applying patches in debian packages - Part 2 |
1,403,713,301,000 |
I'm compiling FFMPEG from source using the guide for Ubuntu which I've used before with success.
I'm compiling on a Vagrant virtual machine in VirtualBox on Ubuntu server 14.04. You can clone the project and vagrant up, and you'll have FFMPEG installed in the box: github.com/rfkrocktk/vagrant-ffmpeg.
I'm using Ansible to automate the compilation, so you can see all of the steps here:
---
- hosts: all
tasks:
# version control
- apt: name=git
- apt: name=mercurial
# build tools
- apt: name=build-essential
- apt: name=autoconf
- apt: name=automake
- apt: name=cmake
- apt: name=pkg-config
- apt: name=texinfo
- apt: name=yasm
# libraries
- apt: name=libass-dev
- apt: name=libfreetype6-dev
- apt: name=libsdl1.2-dev
- apt: name=libtheora-dev
- apt: name=libtool
- apt: name=libva-dev
- apt: name=libvdpau-dev
- apt: name=libvorbis-dev
- apt: name=libxcb1-dev
- apt: name=libxcb-shm0-dev
- apt: name=libxcb-xfixes0-dev
- apt: name=zlib1g-dev
- apt: name=libopus-dev
- apt: name=libmp3lame-dev
- apt: name=libx264-dev
# dependent libraries
# libx265
- name: clone libx265
command: hg clone https://bitbucket.org/multicoreware/x265 /usr/src/x265
args:
creates: /usr/src/x265
- name: configure libx265
command: cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/usr/local -DENABLED_SHARED:bool=off ../../source
args:
chdir: /usr/src/x265/build/linux
creates: /usr/src/x265/build/linux/Makefile
- name: compile libx265
command: make -j2
args:
chdir: /usr/src/x265/build/linux
creates: /usr/src/x265/build/linux/x265
- name: install libx265
command: make install
args:
chdir: /usr/src/x265/build/linux
creates: /usr/local/bin/x265
# libfdk-aac
- name: clone libfdk-aac
command: git clone https://github.com/mstorsjo/fdk-aac.git /usr/src/libfdk-aac
args:
creates: /usr/src/libfdk-aac
- name: autoconf libfdk-aac
command: autoreconf -fiv
args:
chdir: /usr/src/libfdk-aac
creates: /usr/src/libfdk-aac/configure
- name: configure libfdk-aac
command: /usr/src/libfdk-aac/configure --prefix=/usr/local --disable-shared
args:
chdir: /usr/src/libfdk-aac
creates: /usr/src/libfdk-aac/libtool
- name: compile libfdk-aac
command: make -j2
args:
chdir: /usr/src/libfdk-aac
creates: /usr/src/libfdk-aac/libFDK/src/FDK_core.o
- name: install libfdk-aac
command: make install
args:
chdir: /usr/src/libfdk-aac
creates: /usr/local/lib/libfdk-aac.a
# libvpx
- name: download libvpx
shell: wget -O - https://storage.googleapis.com/downloads.webmproject.org/releases/webm/libvpx-1.4.0.tar.bz2 | tar xjvf -
args:
chdir: /usr/src
creates: /usr/src/libvpx-1.4.0
- name: configure libvpx
command: ./configure --prefix=/usr/local --disable-examples --disable-unit-tests
args:
chdir: /usr/src/libvpx-1.4.0
creates: /usr/src/libvpx-1.4.0/Makefile
- name: compile libvpx
command: make -j2
args:
chdir: /usr/src/libvpx-1.4.0
creates: /usr/src/libvpx-1.4.0/libvpx.a
- name: install libvpx
command: make install
args:
chdir: /usr/src/libvpx-1.4.0
creates: /usr/local/lib/libvpx.a
# ffmpeg itself
- name: download ffmpeg
shell: wget -O - "https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2" | tar xjvf -
args:
chdir: /usr/src
creates: /usr/src/ffmpeg
- name: configure ffmpeg
shell: PKG_CONFIG_PATH=/usr/local/lib/pkgconfig /usr/src/ffmpeg/configure \
--prefix=/usr/local \
--pkg-config-flags='--static' \
--bindir=/usr/local/bin \
--enable-gpl --enable-version3 --enable-nonfree \
--enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame \
--enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx \
--enable-libx264 --enable-libx265
args:
chdir: /usr/src/ffmpeg
creates: /usr/src/ffmpeg/config.asm
- name: compile ffmpeg
command: make -j2
args:
chdir: /usr/src/ffmpeg
creates: /usr/src/ffmpeg/ffmpeg
- name: install ffmpeg
command: make install
args:
chdir: /usr/src/ffmpeg
creates: /usr/local/bin/ffmpeg
Hopefully, even if you don't know Ansible, it should be clear what this is doing.
The problem I'm having is that even after all of this running successfully, when I run ffmpeg from within the machine, I get the following error:
ffmpeg: error while loading shared libraries: libx265.so.77: cannot open shared object file: No such file or directory
I can clearly find the file:
$ find /usr -iname libx265.so.77
/usr/local/lib/libx265.so.77
Why is this not being found? Am I missing something in the compilation guide? I'd like my binaries to be as portable as humanly possible.
Edit
output of ldd $(which ffmpeg):
linux-vdso.so.1 => (0x00007fff552ea000)
libva.so.1 => /usr/lib/x86_64-linux-gnu/libva.so.1 (0x00007f2fb3b45000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f2fb3926000)
libxcb-shm.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shm.so.0 (0x00007f2fb3722000)
libxcb-xfixes.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-xfixes.so.0 (0x00007f2fb351b000)
libxcb-shape.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shape.so.0 (0x00007f2fb3317000)
libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f2fb2fe1000)
libasound.so.2 => /usr/lib/x86_64-linux-gnu/libasound.so.2 (0x00007f2fb2cf1000)
libSDL-1.2.so.0 => /usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0 (0x00007f2fb2a5b000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2fb2754000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2fb2536000)
libx265.so.77 => not found
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f2fb232d000)
libx264.so.142 => /usr/lib/x86_64-linux-gnu/libx264.so.142 (0x00007f2fb1f97000)
libvorbisenc.so.2 => /usr/lib/x86_64-linux-gnu/libvorbisenc.so.2 (0x00007f2fb1ac8000)
libvorbis.so.0 => /usr/lib/x86_64-linux-gnu/libvorbis.so.0 (0x00007f2fb189a000)
libtheoraenc.so.1 => /usr/lib/x86_64-linux-gnu/libtheoraenc.so.1 (0x00007f2fb165a000)
libtheoradec.so.1 => /usr/lib/x86_64-linux-gnu/libtheoradec.so.1 (0x00007f2fb1441000)
libopus.so.0 => /usr/lib/x86_64-linux-gnu/libopus.so.0 (0x00007f2fb11f8000)
libmp3lame.so.0 => /usr/lib/x86_64-linux-gnu/libmp3lame.so.0 (0x00007f2fb0f6b000)
libfreetype.so.6 => /usr/lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f2fb0cc8000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2fb0aae000)
libass.so.4 => /usr/lib/x86_64-linux-gnu/libass.so.4 (0x00007f2fb088a000)
libvdpau.so.1 => /usr/lib/x86_64-linux-gnu/libvdpau.so.1 (0x00007f2fb0686000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2fb02bf000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2fb00bb000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f2fafeb7000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f2fafcb0000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2fb3d66000)
libpulse-simple.so.0 => /usr/lib/x86_64-linux-gnu/libpulse-simple.so.0 (0x00007f2fafaac000)
libpulse.so.0 => /usr/lib/x86_64-linux-gnu/libpulse.so.0 (0x00007f2faf862000)
libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f2faf650000)
libcaca.so.0 => /usr/lib/x86_64-linux-gnu/libcaca.so.0 (0x00007f2faf383000)
libogg.so.0 => /usr/lib/x86_64-linux-gnu/libogg.so.0 (0x00007f2faf179000)
libpng12.so.0 => /lib/x86_64-linux-gnu/libpng12.so.0 (0x00007f2faef53000)
libfribidi.so.0 => /usr/lib/x86_64-linux-gnu/libfribidi.so.0 (0x00007f2faed3b000)
libfontconfig.so.1 => /usr/lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f2faeaff000)
libenca.so.0 => /usr/lib/x86_64-linux-gnu/libenca.so.0 (0x00007f2fae8cc000)
libpulsecommon-4.0.so => /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-4.0.so (0x00007f2fae664000)
libjson-c.so.2 => /lib/x86_64-linux-gnu/libjson-c.so.2 (0x00007f2fae45a000)
libdbus-1.so.3 => /lib/x86_64-linux-gnu/libdbus-1.so.3 (0x00007f2fae214000)
libslang.so.2 => /lib/x86_64-linux-gnu/libslang.so.2 (0x00007f2fade84000)
libncursesw.so.5 => /lib/x86_64-linux-gnu/libncursesw.so.5 (0x00007f2fadc50000)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f2fada26000)
libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f2fad7fc000)
libwrap.so.0 => /lib/x86_64-linux-gnu/libwrap.so.0 (0x00007f2fad5f1000)
libsndfile.so.1 => /usr/lib/x86_64-linux-gnu/libsndfile.so.1 (0x00007f2fad389000)
libasyncns.so.0 => /usr/lib/x86_64-linux-gnu/libasyncns.so.0 (0x00007f2fad183000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x00007f2facf68000)
libFLAC.so.8 => /usr/lib/x86_64-linux-gnu/libFLAC.so.8 (0x00007f2facd37000)
libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f2facb1b000)
output of file /usr/local/lib/libx265.so.77:
/usr/local/lib/libx265.so.77: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, BuildID[sha1]=f91388281cc2dba1dfe37797324dc6b3898d8d1b, not stripped
LD_LIBRARY_PATH is undefined in environment variables.
Also this:
$ grep -r local /etc/ld.so.conf.d/
/etc/ld.so.conf.d/libc.conf:/usr/local/lib
My working theory is that because the Ansible command module strips environment variables, everything breaks.
I'm thinking that there's something wrong with the FFMPEG or x265 build. I removed --enable-libx265 from the FFMPEG configure command and I now have a working FFMPEG.
|
My problem lay in the fact that I never ran ldconfig after installing everything.
In order for shared object libraries to be found on Debian, you must run sudo ldconfig to rebuild the shared library cache.
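To verify the fix afterwards, rebuild the cache and then ask the loader's cache about the library (the x265 name comes from this question; the snippet degrades gracefully where ldconfig isn't on the PATH):

```shell
# sudo ldconfig            # rebuild /etc/ld.so.cache (needs root)
if command -v ldconfig >/dev/null 2>&1; then
    status=$(ldconfig -p | grep x265 || echo 'x265 not in the loader cache')
else
    status='ldconfig not on PATH here'
fi
echo "$status"
```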
| Compiling FFMPEG from source: cannot find shared library |
1,403,713,301,000 |
I was curious if it's possible to build Linux kernel without GNU toolchain (gcc+autotools).
I found out that it is possible: after applying patches from llvm.linuxfoundation.org, it was possible to build the Linux kernel with clang. The GNU linker was still used.
The alternative to ld is gold which is also part of GNU binutils. Popular musl+clang toolchain ELLCC also uses GNU binutils.
There are more alternatives: lld (no stable releases), mclinker (no stable releases).
Does an alternative to GNU binutils exist? Presumably, building on Mac OS X or FreeBSD doesn't involve GNU tools.
|
As of 2018, lld seems mature enough to be used in production; it is not 100% compatible with bfd, but can be used as a drop-in replacement in most cases.
Update: recently, a new linker appeared, and it is under active development: mold.
| What are the alternatives to GNU ld? |
1,403,713,301,000 |
I am compiling the Linux kernel for my specific hardware and I only select the drivers/options which I really need. This is in contrast to a typical distribution kernel, where they compile almost everything, to be compatible with as many hardware configurations as possible.
I imagine, for my kernel, I am only using 1% of the total kernel code (order of magnitude estimate).
Is there any way to find out which files from the kernel source I have actually used when I build my kernel?
This is not an academic question. Suppose I have compiled my kernel 3.18.1. Now a security update comes along, and 3.18.2 is released. I have learned in my other question how to find which files have changed between releases. If I knew whether any of the files I am using have changed, I would recompile my kernel to the new version. If, on the other hand, the changes only affect files which I am not using anyway, I can keep my current kernel version.
|
Compiling my comment as answer:
Run the following command in one shell. You can make a shell script of it or daemonize it with the -d option.
inotifywait -m -r -e open --format '%w%f' /kernel_sources/directory/in_use/ -o /home/user/kernel_build.log
In the other shell, execute make.
The log file /home/user/kernel_build.log will have a list of the files that were opened (i.e. read) during the build process.
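Since a build opens the same files many times, it helps to deduplicate the log afterwards; a demo on fabricated log lines:

```shell
log=$(mktemp)
printf 'kernel/rcu/tree.c\nkernel/sched/core.c\nkernel/rcu/tree.c\n' > "$log"
sort -u "$log"   # the unique set of files that were opened
```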
| Find out which kernel source files were used when kernel was compiled |
1,403,713,301,000 |
I'm trying to make a redis RPM using fpm
When I run the following commands, all binaries are installed to /tmp/installdir/usr/local/bin
cd /tmp/redis2-8
PREFIX=/tmp/installdir/usr/local/ make
PREFIX=/tmp/installdir/usr/local/ make install
How could I compile redis so that the redis-server binary is installed to /tmp/installdir/usr/local/sbin and everything else (redis-cli, redis-benchmark, etc.) is installed to /tmp/installdir/usr/local/bin?
|
The fpm dev has this to say on the subject of distribution-specific guidelines:
I want a simple way to create packages without all the bullshit. In my own infrastructure, I have no interest in Debian policy and RedHat packaging guidelines - I have interest in my group's own style culture and have a very strong interest in getting work done.
(This is not to say that you can't create packages with FPM that obey Debian or RedHat policies, you can and should if that is what you desire)
So, the default behavior you are observing, where everything is placed in /usr/local/bin, seems to be the group's preference. And they are not alone; not only Fedora & co but also freedesktop.org agree with merging /bin and /sbin into /usr/bin. While the current draft of the FHS still uses sbin directories, there does seem to be a movement building against them.
In any case, the Fedora people have some very good arguments against sbin which include (emphasis mine):
a) the split between sbin and bin requires psychic powers from
upstream developers:
The simple fact is that moving binaries between these dirs is really hard, and thus developers in theory would need to know at the time they first introduce a binary whether it might ever make sense to invoke it as unprivileged user (because in that case the binary belongs in /bin, not /sbin). History shows that more often than not developers have placed their stuff in the wrong directory, see /sbin/arp, /sbin/ifconfig, /usr/sbin/httpd and then there is no smart way to fix things anymore since changing paths means breaking all hardcoded scripts. And hardcoding paths in scripts is actually something we encourage due to performance reasons. The net effect is that many upstream developers unconditionally place their stuff in bin, and never consider sbin at all which undermines the purpose of sbin entirely (i.e. in systemd we do not stick a single binary in sbin, since we cannot be sure for any of its tools that it will never ever be useful for non-root users. and systemd is as low-level, system-specific it can get).
b) The original definition of /sbin is fully obsolete (i.e. the "s" in
sbin stood originally for "static" binaries)
c) /bin vs. /sbin is not about security, never has been. Doing this
would be security by obscurity, and pretty stupid, too.
d) splitting /bin off /sbin is purely and only relevant for $PATH, and
$PATH is purely and only something to ease access to the tools in it for shell users. The emphasis here is on "ease". It is not a way to make it harder for users to access some tools. Or in other words: if your intention is to hide certain tools from the user in order not to confuse him, then there are much better places for that: the documentation could document them in a separate section or so. I don't think it makes any sense at all trying to educate the user by playing games with what he can see if he presses TAB in the shell. [...]
These points were made about /sbin, but they seem equally applicable to /usr/local/sbin. So, while @slm gave you what I'm sure is the way to do what you're actually asking for, it might be better to just ignore sbin and let fpm do its thing.
| How do you separate /bin and /sbin when making an RPM? |
1,403,713,301,000 |
I'm experimenting with different kernel configuration files and wanted to keep a log on the ones I used.
Here is the situation:
There is a configuration file called my_config which I want to use as a template
I do make menuconfig, load my_config, make NO changes, and save as .config.
When I do diff .config my_config, there are differences in the files
Why would there be differences between the old file and the new file?
|
Why would there be differences
Because you loaded my_config into menuconfig, made changes, then saved it as .config. Of course they are different. If you saved it twice, once with each name, then they would be the same.
If you mean, they are more different than you think they should be, keep in mind there is not a 1:1 correspondence between things you select in menuconfig and changes that appear in the config file.
Also, if my_config was the product of an earlier version of the kernel source, make menuconfig will notice this and convert the file to reflect the newer source version. This means even if you change nothing, just loading it and saving it will result in substantial changes to the text of the file. However, the actual configuration should be essentially the same (generally the changes are the addition of new options with appropriate default values).
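Since most of the churn in a round-tripped .config is comment lines and reordered defaults, filtering out comments before diffing shows only the real option changes. A minimal sketch (throwaway sample files stand in for the real .config and my_config):

```shell
# Sample configs standing in for the real files; substitute your own paths.
printf 'CONFIG_FOO=y\nCONFIG_BAR=m\n# CONFIG_BAZ is not set\n' > /tmp/my_config
printf 'CONFIG_FOO=y\nCONFIG_BAR=y\n# CONFIG_BAZ is not set\n' > /tmp/dot_config
# Strip comment lines so only effective options remain, then diff:
grep -v '^#' /tmp/my_config > /tmp/a
grep -v '^#' /tmp/dot_config > /tmp/b
diff /tmp/a /tmp/b || true   # diff exits non-zero when files differ
```

Note that this hides the "# CONFIG_X is not set" comments, which do carry meaning; it is only a quick way to spot changed values.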
| Saving a kernel config file through menuconfig results with different options? |
1,403,713,301,000 |
I have a hacked up Chromebook on which I'm running Gentoo. When I try to compile anything, CPU usage spikes up to 100%, the temperature increases by ~10 degrees C, battery usage spikes (4.X W -> 10 W), and it's a slow process. But I also have an Arch Linux computer running, and I can connect to it over SSH. They are both x86_64 CPUs. Is there any way I could offload the compilation of stuff (Linux kernel, everyday packages, etc.) onto the Arch Linux machine over SSH? I haven't done anything like this before. Might cross-compilation be necessary?
|
No, you don't have to cross-compile (that would be necessary if you targeted another architecture.) There are two ways that I can think of that you could set up your systems to do this:
Use distcc. The Gentoo and Arch wikis do a good job of describing how to install and configure the program, so I won't copy the entire thing here. Briefly, you need to have the following set up in order for it to work:
Your CFLAGS in /etc/portage/make.conf must not use march=native or mtune=native, because the remote computer will use its idea of "native" CPU, not the local computer's. If you're using "native", find out which flags to use by running:
$ gcc -v -E -x c -march=native -mtune=native - < /dev/null 2>&1 | grep cc1 | perl -pe 's/^.* - //g;'
Both computers need the same compiler and binutils versions.
Both computers need distcc installed, configured and running.
Use a chroot environment on your Arch system with a copy of your Chromebook filesystem (treat this like you're doing an installation of Gentoo, so copy resolv.conf from your Arch installation, and mount the appropriate filesystems inside per the Gentoo installation manual, keeping in mind the warning about /dev/shm if Arch's version is a symlink.) It needs to be as close as possible to your Chromebook environment, or else you'll end up with possibly incorrect binaries; if you do a copy, you'll have to rebuild less packages. Inside of this environment:
Add FEATURES="buildpkg" to /etc/portage/make.conf.
The generated packages will then be in /usr/portage/packages. You can also compile the kernel in this way and simply copy the generated kernel and appropriate /lib/modules directory to the Chromebook. (Remember that these directory locations are relative to the chroot!) The wiki recommends having an NFS mount or other server so that you don't have to copy files manually: this can be set up on the Arch system proper. I like setting up rsyncd for this purpose, but use whatever method you prefer for file access.
On your Chromebook:
Make sure to add FEATURES="getbinpkg" to /etc/portage/make.conf if you want to prevent it from compiling locally.
If you're using remote file access, add PORTAGE_BINHOST="protocol://path/to/your/chroot/usr/portage/packages" to /etc/portage/make.conf.
Refer to the Binary package guide in the Gentoo wiki for more information.
I have done both of these methods in the past, and they both work pretty well. My observations on the two methods:
distcc is finicky to get working, even if you have identical setups on both sides. Keeping gcc and binutils versions the same will be your biggest challenge. Once you get it going, however, it's pretty fast, and if you have extra computers that are fast enough you can add them.
The chroot environment is less finicky, but if you make changes to any part of the portage environment (CFLAGS, USE flags, masks, profiles, etc.) you have to make sure that both sides stay consistent, or else you can end up with packages that have the wrong dependencies. Gentoo is pretty good about making sure the USE flags match, but it doesn't track compiler options in binary packages. One advantage is that you're not limited by the (lack of) disk space and memory on the Chromebook for compilation.
If you're going to use the chroot method, I would make a script to do all the uninteresting work required in setting it up (replace /mnt/gentoo with your chroot location):
cp -L /etc/resolv.conf /mnt/gentoo/etc
mount -t proc proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --make-rslave /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
mount --make-rslave /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash
umount -R /mnt/gentoo/dev
umount -R /mnt/gentoo/sys
umount /mnt/gentoo/proc
| Offload compilation through SSH? |
1,403,713,301,000 |
I want to enable Google BBR on my VPS, but I don't know whether this feature is integrated into the Linux kernel or not. How can I check it?
|
The command below lists the TCP congestion control algorithms that are currently available:
1. cat /proc/sys/net/ipv4/tcp_available_congestion_control
bic reno cubic
2. This command shows which TCP congestion control algorithm is configured on your Linux system:
sysctl net.ipv4.tcp_congestion_control
3. The command below switches to a desired algorithm from the available list:
sysctl -w net.ipv4.tcp_congestion_control=bic
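For BBR in particular, note that the algorithm only appears in the "available" list once the tcp_bbr module has been registered with the kernel (Linux 4.9 or newer). A hedged sketch of the check:

```shell
# If bbr is missing from the list, the module may simply not be loaded yet;
# 'modprobe tcp_bbr' (as root, on a kernel >= 4.9) should register it.
if grep -qw bbr /proc/sys/net/ipv4/tcp_available_congestion_control 2>/dev/null; then
    echo "bbr is available"
else
    echo "bbr is not available (try: modprobe tcp_bbr)"
fi
```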
| How to check what congestion algorithm supported on my linux kernel? |
1,403,713,301,000 |
There is a file version.h in /usr/include/linux. Many header files include this file and use the defines in there for their own ifdefs.
However, when I compile my own kernel, I cannot see how this can possibly be reflected correctly in e.g. version.h.
Actually this holds for all kernel-related header files. AFAICS /usr/include/linux always represents the kernel which came with my distribution, and neither the running kernel nor the kernel I tell make about by means of SYSSRC.
In the past I resorted to creating symlinks to my own kernel sources, but I have a feeling that this is not the correct way.
How is this supposed to work? How do I compile (e.g. a kernel module) against a custom kernel?
|
When configuring a system against your own custom kernel, I would suggest adding a name to the current version in your modified kernel sources.
For instance, in Armbian they create their own kernel packages, and add a -sunxi to kernel.release.
Taking as an example modifying the 4.6.3 kernel version:
root@ruir:/usr/src/linux-headers-4.6.3-sunxi# grep -ri 4.6.3-sunxi *
include/generated/utsrelease.h:#define UTS_RELEASE "4.6.3-sunxi"
include/config/kernel.release:4.6.3-sunxi
and also, for the kernel modules, in /lib/modules/4.6.3-sunxi/build:
include/generated/utsrelease.h:#define UTS_RELEASE "4.6.3-sunxi"
include/config/auto.conf.cmd:ifneq "$(KERNELVERSION)" "4.6.3-sunxi"
include/config/kernel.release:4.6.3-sunxi
(see installing sysdig in ARM / Armbian Jessie - module compiled in wrong kernel version )
As we can see, this can be seen in uname -r:
$uname -r
4.6.3-sunxi
As for the custom kernel packages:
$dpkg -l | grep sunxi
ii linux-dtb-next-sunxi 5.16 armhf Linux DTB, version 4.6.3-sunxi
ii linux-firmware-image-next-sunxi 5.16 armhf Linux kernel firmware, version 4.6.3-sunxi
ii linux-headers-next-sunxi 5.16 armhf Linux kernel headers for 4.6.3-sunxi on armhf
ii linux-image-next-sunxi 5.16 armhf Linux kernel, version 4.6.3-sunxi
As for adding your own headers of your compile kernel, I will refer to KernelHeaders (emphasis as bold is mine); if you are replacing minor kernel versions you may (or may not) get away with only make headers_install.
User space programs
In general, user space programs are built against the header files
provided by the distribution, typically from a package named
glibc-devel, glibc-kernheaders or linux-libc-dev. These header files
are often from an older kernel version, and they cannot safely be
replaced without rebuilding glibc as well. In particular, installing
/usr/include/linux as a symbolic link to /usr/src/linux/include or
/lib/modules/*/build/include/linux is highly discouraged as it
frequently breaks rebuilding applications. For instance, older kernels
had the architecture specific header files in include/asm-${arch}
instead of arch/${arch}/include/asm and had on a symlink to the
architecture specific directory.
The correct way to package the header files for a distribution is to
run 'make headers_install' from the kernel source directory to install
the headers into /usr/include and then rebuild the C library package,
with a dependency on the specific version of the just installed kernel
headers.
If you are distributing a user space program that depends on a
specific version of some kernel headers, e.g. because your program
runs only on patched or very recent kernels, you cannot rely on the
headers in /usr/include. You also cannot use the header files from
/usr/src/linux/include or /lib/modules/*/build/include/ because they
have not been prepared for inclusion in user space. The kernel should
warn you about this if you try it and point you to this Wiki page. The
correct way to address this problem is to isolate the specific
interfaces that you need, e.g. a single header file that is patched in
a new kernel providing the ioctl numbers for a character device used
by your program. In your own program, add a copy of that source file,
with a notice that it should be kept in sync with new kernel versions.
If your program is not licensed under GPLv2, make sure you have
permission from the author of that file to distribute it under the
license of your own program. Since your program now depends on kernel
interfaces that may not be present in a regular kernel, it's a good
idea to add run-time checks that make sure the kernel understands the
interface and give a helpful error message if there is no fallback to
an older interface.
Also, for kernel development, or when compiling a kernel/module for a different server or against a different kernel when multiple kernel versions are installed, SYSSRC may be used to specify an alternate kernel source location.
| How to compile against a custom kernel (Debian)? |
1,403,713,301,000 |
For the first time in my Linux (Debian) career it seems necessary for me to compile a piece of software myself.
The process of compiling is well described on the project-website. However, there is one thing I don't understand, that keeps me from getting started:
I want to keep my package management (APT) tidy. To compile the project, I need to download a lot of packages (most of them *-dev versions), that I probably wouldn't need, once I finished the compilation.
I don't want to keep those *-dev packages afterwards.
Therefore I wonder if there is a practical way to temporarily install those packages for the compilation-process and afterwards delete them all at once (without having to remember each and every package-name).
|
If the package happens to be in a debian repo somewhere, you can use build-dep to install the build dependencies and mark those as 'automatically' installed. You can then use autoremove to cleanup those build deps.
apt-get build-dep -o APT::Get::Build-Dep-Automatic=true WhatImBuilding
apt-get autoremove
If whatever you're building doesn't already have a deb package with build deps somewhere, then this technique doesn't work. There is however a debian feature suggestion to add this type of support: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=745769
I suppose you could define a fake package for whatever you're building in a local repo which you manage, and define the build-deps there, allowing the above method to work. That's a bit contrived, though. You can also hand-edit the /var/lib/apt/extended_states file to mark whatever packages you're installing as 'automatic', thus making them eligible for autoremove, but that is probably dangerous.
| compile packages AND keep apt tidy |
1,403,713,301,000 |
I'm currently trying to install gcc41 using the AUR and I'm experiencing an issue.
Everytime it goes through the compiling process the build fails because it cannot complete compilation of the toplev object because there is a redefinition error.
Here is the error. I don't really know where to go from here.
In file included from ../../gcc/toplev.c:31:0:
../../gcc/gcov-io.h: In function ‘gcov_position’:
../../gcc/system.h:575:55: warning: ISO C does not support ‘__FUNCTION__’ predefined identifier [-Wpedantic]
((void)(!(EXPR) ? fancy_abort (__FILE__, __LINE__, __FUNCTION__), 0 : 0))
^
../../gcc/gcov-io.h:572:3: note: in expansion of macro ‘gcc_assert’
gcc_assert (gcov_var.mode > 0);
^
../../gcc/toplev.c: At top level:
../../gcc/toplev.c:524:1: error: redefinition of ‘floor_log2’
floor_log2 (unsigned HOST_WIDE_INT x)
^
In file included from ../../gcc/toplev.c:59:0:
../../gcc/toplev.h:175:1: note: previous definition of ‘floor_log2’ was here
floor_log2 (unsigned HOST_WIDE_INT x)
^
../../gcc/toplev.c:559:1: error: redefinition of ‘exact_log2’
exact_log2 (unsigned HOST_WIDE_INT x)
^
In file included from ../../gcc/toplev.c:59:0:
../../gcc/toplev.h:181:1: note: previous definition of ‘exact_log2’ was here
exact_log2 (unsigned HOST_WIDE_INT x)
^
Makefile:2064: recipe for target 'toplev.o' failed
make[2]: *** [toplev.o] Error 1
make[2]: Leaving directory '/tmp/yaourt-tmp-michael/aur-gcc41/src/gcc-4.1.2/build/gcc'
Makefile:3907: recipe for target 'all-gcc' failed
make[1]: *** [all-gcc] Error 2
make[1]: Leaving directory '/tmp/yaourt-tmp-michael/aur-gcc41/src/gcc-4.1.2/build'
Makefile:617: recipe for target 'all' failed
make: *** [all] Error 2
|
I ran into something like this before. I think the issue is that you're trying to compile gcc41 from the AUR using GCC 5.2.0-1 (the latest Arch version). GCC adds new errors as versions go on, so the source code of older GCC versions isn't always considered valid under newer versions of GCC. If you can find a way to disable this warning, that might do the trick. Alternatively, if you can use the Arch wayback machine to get a GCC 4.2 binary, you could compile the GCC 4.2 source with that older compiler.
| Arch: Compiling toplev.o fails in GCC install |
1,403,713,301,000 |
Given two computers A and B with the same specifications, both with the same Linux distribution, is it possible to 'make' compile in computer A and copy the directory to computer B and 'make install' without problems?
|
In general it should be possible, if both hosts really have the same specifications (i.e. same processor architecture, same libraries in same versions installed, same kernel installed, same file system structure for referred config files/libraries, ...). But since you can do nasty things in Makefiles there might be situations where this is not possible.
The make command normally just compiles all the sources, links them against the installed libraries and the kernel, and then generates the binary output file.
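One common pattern for this split is to stage the install on computer A with the conventional DESTDIR variable, pack the staged tree into a tarball, and unpack it on computer B. A sketch with a toy Makefile (the project, paths, and file names are all made up for illustration; real projects just need to honor DESTDIR):

```shell
# Toy project standing in for the real source tree:
mkdir -p /tmp/proj /tmp/stage /tmp/rootB
printf '#!/bin/sh\necho hello\n' > /tmp/proj/hello.sh
printf 'install:\n\tmkdir -p $(DESTDIR)/usr/local/bin\n\tcp hello.sh $(DESTDIR)/usr/local/bin/hello\n' > /tmp/proj/Makefile

# "Computer A": stage the install, then pack it up:
make -C /tmp/proj DESTDIR=/tmp/stage install
tar -C /tmp/stage -czf /tmp/pkg.tar.gz .

# "Computer B": unpack (into a sandbox root here; on a real host, into /):
tar -C /tmp/rootB -xzf /tmp/pkg.tar.gz
ls /tmp/rootB/usr/local/bin/hello
```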
| Can I compile in one computer and 'make install' in another? |
1,403,713,301,000 |
I have built pidgin from source on CentOS 7. This is because there is no package available yet. This went well; however, pidgin-otr-4.0.0 cannot find the headers for pidgin and purple.
They reside in /usr/local/include, and I can't work out what the configure script wants with its suggestion:
checking for EXTRA... configure: error: Package requirements (glib-2.0 >= 2.6 gtk+-2.0 >= 2.6 pidgin >= 2.0 purple >= 2.0) were not met:
No package 'pidgin' found
No package 'purple' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables EXTRA_CFLAGS
and EXTRA_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
I tried a variety of PKG_CONFIG_PATH options such as /usr/local and /usr/local/include, as well as EXTRA_LIBS. I am not sure what to do at this point.
I just need to specify somehow that pidgin and purple reside in /usr/local/include.
|
I found the answer after having a second look at the pkg-config manual, and better understanding the purpose of those environment variables. I also noticed I could do a Google search for pidgin pkg-config. I was then able to find the solution.
This allows configure to find the required libraries with pkg-config...
$ PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig ./configure
This allowed it to find pidgin and purple.
| building pidgin-otr-4.0.0 on CentOS 7: can't find pidgin and purple |
1,403,713,301,000 |
When I enable make V=s to read the full log of make, I always see make[number] in the log.
e.g.:
datle@debian:~/workspace/cpx/trunk$ make
rm -rf openwrt/tmp
cp config/defaut.config openwrt/.config
cd openwrt && make
make[1]: Entering directory `/home/datle/workspace/cpx/trunk/openwrt'
make[1]: Leaving directory `/home/datle/workspace/cpx/trunk/openwrt'
make[1]: Entering directory `/home/datle/workspace/cpx/trunk/openwrt'
make[2]: Entering directory `/home/datle/workspace/cpx/trunk/openwrt'
Collecting package info: done
Collecting target info: done
Checking 'working-make'... ok.
Checking 'case-sensitive-fs'... ok.
Checking 'getopt'... ok.
Checking 'fileutils'... ok.
Checking 'working-gcc'... ok.
Checking 'working-g++'... ok.
Checking 'ncurses'... ok.
Checking 'zlib'... ok.
Checking 'gawk'... ok.
Checking 'unzip'... ok.
Checking 'bzip2'... ok.
Checking 'patch'... ok.
Checking 'perl'... ok.
Checking 'python'... ok.
Checking 'wget'... ok.
Checking 'git'... ok.
Checking 'gnutar'... ok.
Checking 'svn'... ok.
Checking 'gnu-find'... ok.
Checking 'getopt-extended'... ok.
Checking 'non-root'... ok.
make[3]: Entering directory `/home/datle/workspace/cpx/trunk/openwrt'
Checking 'openssl'... ok.
make[3]: Leaving directory `/home/datle/workspace/cpx/trunk/openwrt'
make[2]: Leaving directory `/home/datle/workspace/cpx/trunk/openwrt'
WARNING: your configuration is out of sync. Please run make menuconfig, oldconfig or defconfig!
make[2] world
make[3] target/compile
make[4] -C target/linux compile
make[3] package/cleanup
make[3] package/compile
make[4] -C package/toolchain compile
make[4] -C package/wireless-tools compile
I read the make manual but I didn't find any detail about this.
|
Those numbers represent the make level, which tells us how a sub-make relates to the top-level make.
This is the recursive use of make; see more details here.
Digging into the make source code makes this clearer.
In main.c:
/* Value of the MAKELEVEL variable at startup (or 0). */
unsigned int makelevel;
and then:
/* Figure out the level of recursion. */
{
struct variable *v = lookup_variable (STRING_SIZE_TUPLE (MAKELEVEL_NAME));
if (v && v->value[0] != '\0' && v->value[0] != '-')
makelevel = (unsigned int) atoi (v->value);
else
makelevel = 0;
}
In output.c:
/* Use entire sentences to give the translators a fighting chance. */
if (makelevel == 0)
if (starting_directory == 0)
if (entering)
fmt = _("%s: Entering an unknown directory\n");
else
fmt = _("%s: Leaving an unknown directory\n");
else
if (entering)
fmt = _("%s: Entering directory '%s'\n");
else
fmt = _("%s: Leaving directory '%s'\n");
else
And format the output before printing:
if (makelevel == 0)
if (starting_directory == 0)
sprintf (p, fmt , program);
else
sprintf (p, fmt, program, starting_directory);
else if (starting_directory == 0)
sprintf (p, fmt, program, makelevel);
else
sprintf (p, fmt, program, makelevel, starting_directory);
_outputs (NULL, 0, buf);
Note: the excerpts above are from the make source.
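The numbering is easy to reproduce with a tiny two-level build; each sub-make that GNU make spawns reports its level in brackets (a sketch using throwaway paths):

```shell
# Top-level Makefile recurses into sub/, whose Makefile prints its level:
mkdir -p /tmp/mldemo/sub
printf 'all:\n\t$(MAKE) -C sub\n' > /tmp/mldemo/Makefile
printf 'all:\n\t@echo "inner MAKELEVEL=$(MAKELEVEL)"\n' > /tmp/mldemo/sub/Makefile
make -C /tmp/mldemo
# The sub-make's directory messages carry the [1] marker, e.g.
#   make[1]: Entering directory '/tmp/mldemo/sub'
```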
| What does make[number] mean in make V=s? |
1,403,713,301,000 |
I am trying to compile OpenOnload from Solarflare for my nic on a server that I'm building. It is saying something about not having a kernel build.
root@server:/usr/src/openonload-201310-u2# ./scripts/onload_install
onload_install: Building OpenOnload.
mmakebuildtree: No kernel build at '/lib/modules/3.2.0-4-amd64/build'
onload_build: FAILED: mmakebuildtree --driver -d x86_64_linux-3.2.0-4-amd64
onload_install: ERROR: Build failed. Not installing.
What is it talking about when it's saying there is supposed to be kernel build at /lib/modules/3.2.0-4-amd64/build? How would I get that file?
I'm using Debian 7 "Wheezy".
|
It's talking about the kernel development headers which are needed for compiling certain applications. On Debian-based distributions, you can install them with this command:
sudo apt-get install linux-headers-`uname -r`
If make or other basic build tools are missing, you may also require the following:
sudo apt-get install build-essential
That will install tools like make which might not be installed by default, I'm not sure.
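A quick way to check whether the needed files are already in place is to look for the build symlink that the headers package creates (a small sketch; it only reports, it changes nothing):

```shell
# The linux-headers package creates /lib/modules/$(uname -r)/build as a
# symlink into /usr/src; out-of-tree module builds look for it there.
if [ -e "/lib/modules/$(uname -r)/build" ]; then
    echo "kernel build tree found"
else
    echo "no kernel build tree; install linux-headers-$(uname -r)"
fi
```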
| What is the "kernel build", and where do I get it? |
1,403,713,301,000 |
I want to install a piece of software (rtorrent) from source to my home folder. It depends on ncurses, which is not installed. I've installed ncurses to my home folder by using the PREFIX option during the configuration step, but this doesn't seem to work when I try to do the same with rtorrent, as it keeps telling me that it requires ncurses.
The last few lines of output during make:
checking if more special flags are required for pthreads... no
checking for PTHREAD_PRIO_INHERIT... no
checking for NcursesW wide-character library... no
checking for Ncurses library... no
checking for Curses library... no
configure: error: requires either NcursesW or Ncurses library
How can I get this to work?
|
Most configure scripts allow you to specify the location of libraries they use.
./configure ... --with-ncurses=some/path ...
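If the configure script has no such option, pointing the toolchain at the home-prefix install through environment variables usually works as well (a sketch, assuming ncurses was installed with --prefix=$HOME/local; the paths are examples):

```shell
# Tell the preprocessor, linker, and pkg-config about the home prefix:
export CPPFLAGS="-I$HOME/local/include"
export LDFLAGS="-L$HOME/local/lib"
export PKG_CONFIG_PATH="$HOME/local/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
echo "$CPPFLAGS $LDFLAGS"
# then: ./configure --prefix="$HOME/local" && make && make install
```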
| Build binary and dependencies without sudo |
1,403,713,301,000 |
I would like to build standalone bash binaries, which would hopefully work on a good portion of the linux distributions out there. Complete coverage is definitely not a goal. How would I approach this? Best-effort suggestions welcome.
I understand that each distribution has e.g. unique readline implementations. If it is feasible, I would like to statically link a fixed version and ship it with the standalone bash on e.g. an usb stick.
Hope someone can help me with that :) cheers
|
Download the bash-static package from Debian and extract the executable.
ar p bash-static_*.deb data.tar.xz | tar -xJ ./bin/bash-static
If you want to see how it's done, look in the sources. The build instructions are in debian/rules. There's a lot of expansion going on, so run it:
debian/rules static-build
I think all you need is this (but I haven't tried):
./configure --enable-static-link
make
The question is why you'd want to do that. Virtually all distributions already have bash installed as /bin/bash, and it isn't optional. It would be more useful with zsh which in most distributions is available, but not installed by default. For zsh, you need (again, untried):
./configure --enable-ldflags=-static --disable-dynamic --disable-dynamic-nss
make
| Build a standalone bash |
1,403,713,301,000 |
I am trying to test the LD_LIBRARY_PATH environment variable. I have a program test.c as follows:
int main()
{
func("hello world");
}
I have two files func1.c and func2.c:
// func1.c
#include <stdio.h>
void func(const char * str)
{
printf("%s", str);
}
And
// func2.c
#include <stdio.h>
void func(const char * str)
{
printf("No print");
}
I want to do the following somehow:
Convert func1.c and func2.c to .so files - both with the same name func.so (they will be placed in different folders, say dir1 and dir2)
Compile test.c s.t. I only mention that it has a dependency func.so, but I don't tell it where it is (I want the environment variable to be used to find this)
Set the environment variable, in first try to dir1 and in second try to dir2 to observe different output in each run of test program
Is the above doable? If so, how do I do step 2?
I did the following for step 1 (same steps for func2):
$ gcc -fPIC -g -c func1.c
$ gcc -shared -fPIC -o func.so func1.o
|
Use ld -soname:
$ mkdir dir1 dir2
$ gcc -shared -fPIC -o dir1/func.so func1.c -Wl,-soname,func.so
$ gcc -shared -fPIC -o dir2/func.so func2.c -Wl,-soname,func.so
$ gcc test.c dir1/func.so
$ ldd a.out
linux-vdso.so.1 => (0x00007ffda80d7000)
func.so => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f639079e000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6390b68000)
$ LD_LIBRARY_PATH='$ORIGIN/dir1:$ORIGIN/dir2' ./a.out
hello world
$ LD_LIBRARY_PATH='$ORIGIN/dir2:$ORIGIN/dir1' ./a.out
No print
-Wl,-soname,func.so (this means -soname func.so is passed to ld) embeds SONAME attribute of func.so in the output. You can examine it by readelf -d:
$ readelf -d dir1/func.so
Dynamic section at offset 0xe08 contains 25 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x000000000000000e (SONAME) Library soname: [func.so]
...
Linked with this func.so with SONAME, a.out has that in its NEEDED attribute:
$ readelf -d a.out
Dynamic section at offset 0xe18 contains 25 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [func.so]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
...
Without -Wl,-soname,func.so, you'll get the following by readelf -d:
$ readelf -d dir1/func.so
Dynamic section at offset 0xe18 contains 24 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
...
$ readelf -d a.out
Dynamic section at offset 0xe18 contains 25 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [dir1/func.so]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
...
| LD_LIBRARY_PATH environment variable |
1,403,713,301,000 |
I wonder if it is possible to compile individual software packages from the FreeBSD source tree without compiling the whole kernel and world. Say, for example, ex, which is included in the nvi (new vi) source code.
https://svnweb.freebsd.org/base/head/contrib/nvi/
My intention is to compile, if possible, individual software with debug symbols enabled, so I will be able to debug the code/software.
|
Sure, with the standard /usr/src installed it might run something like
# cat /etc/src.conf
CFLAGS=-pipe
DEBUG_FLAGS=-g
# cd /usr/src/usr.bin/vi
# make clean && make obj && make depend && make && make install
# gdb -d /usr/src/contrib/nvi/ex -d /usr/src/contrib/nvi/common -tui ex
| Is it possible to compile individual software from the FreeBSD source tree? |
1,403,713,301,000 |
I'm working on a clean Debian 7.7 install. After the install everything was working fine except the webcam in Iceweasel browser. After reading a lot I found that the best solution is to install FlashCam 1.4.5.
After downloading the sources, I did a make and got an error:
ERROR: Kernel configuration is invalid.
include/generated/autoconf.h or include/config/auto.conf are missing.
Run 'make oldconfig && make prepare' on kernel src to fix it.
WARNING: Symbol version dump /usr/src/linux-headers-3.2.0-4-common/Module.symvers
is missing; modules will have no dependencies and modversions.
Building modules, stage 2.
I read around and found that I must install the kernel sources and prepare them for compiling:
apt-get install linux-source linux-source-3.2
tar jxf /usr/src/linux-source-3.2.tar.bz2
cd linux-source-3.2
cp /boot/config-3.2.0-4-amd64 ./.config
make oldconfig
make prepare
Now there is an autoconf.h file in my local linux-sources folder (linux-sources/include/generated/autoconf.h) but not in /usr/src/linux-headers-3.2.0-4-common/include/, which I assume is the folder where the FlashCam sources are looking. What should I do now? Copying this folder by hand is a bit scary, and I can't find additional instructions to make it work.
|
Up to date instructions for building out-of-tree kernel modules are here. Installing the kernel config headers to the system include directory is not part of the procedure. Rather you invoke make from inside the kernel source tree and point it to the module's source tree with the M= parameter.
FlashCam hasn't been updated in a while so it may not be possible to build it against a recent kernel without some porting effort.
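For reference, the out-of-tree build described there boils down to a one-line Kbuild Makefile plus a single make invocation (a sketch; flashcam.o is a placeholder for the module's real object name, and the build itself needs the kernel headers installed, so it is only shown as a comment):

```shell
# Write the module's Kbuild-style Makefile (flashcam.o is a placeholder):
mkdir -p /tmp/flashcam
cat > /tmp/flashcam/Makefile <<'EOF'
obj-m := flashcam.o
EOF
# Then, with kernel headers installed, build against them via M=:
#   make -C /lib/modules/$(uname -r)/build M=/tmp/flashcam modules
cat /tmp/flashcam/Makefile
```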
| Why autoconf.h is not copied automatically to its place? |
1,403,713,301,000 |
I use Gentoo as my primary home Desktop OS. I have since ~2004. I appreciate the emerge command for managing a source based OS as well as it does. Every now and then I check on other distributions, but I'm particularly fond of Linux From Scratch - For those of you Who Don't Know. Granted, I've never been through the entire book because using Gentoo has spoiled me in that respect. I consider Gentoo to be LFS + a Package Manager. I finally decided I'm going to complete the book, so I stuck XUbuntu on a VM to simulate the newness and ...
I'm following along in the release candidate for Version 3 - Systemd of CLFS, and it hit me at Chapter 6 - Temporary System - Make. If a user needs make to compile a version of Make, the Chicken and Egg Causality Problem appears. This leads me to my next logical questions.
When Stuart Feldman created make in 1976, how did the computing public compile his program if their OS did not contain an OS-dependent make? Am I to assume that the WikiPedia article below is true for every OS?
Did he have to package make to include every OS-dependent version of make to complete 1? (See Below)
If I needed Program A, but it was only available to compile on OS A, did I have to buy OS A, even if I use OS B? (Please Ignore
Windows here if Possible.)
Update
Based on jimmij's comment: did OS-specific compilers exist in the same way that make was OS-dependent?
WikiPedia says:
Before Make's introduction, the Unix build system most commonly
consisted of operating system dependent "make" and "install" shell
scripts accompanying their program's source. Being able to combine the
commands for the different targets into a single file and being able
to abstract out dependency tracking and archive handling was an
important step in the direction of modern build environments.
Also, please note that I am looking for some historical perspective here, as I was born in 1976.
|
Make is simply for convenience. You can build software without it; it is just more difficult.
When you run make to build something, it shows you the commands it is running. You can run those commands manually and get the same effect.
$ echo "int main() {}" > test.c
$ make test
cc test.c -o test
Which created a file called test.
I can get the same result by just doing:
cc test.c -o test
If make didn't exist, you could either instruct users to run this cc command by hand, or you could distribute a shell script, and instruct them to run the script.
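The core of what make automates is a timestamp comparison: rebuild only when the source is newer than the output. Here is a minimal sketch of that logic in plain shell, reusing the file names from the example above; note that `-nt` is supported by common shells such as bash and dash but is not strictly POSIX, and `touch test` stands in for the real compile step.

```shell
#!/bin/sh
# rebuild.sh - rebuild "test" only when "test.c" is newer, the way make would.
# `touch test` is a stand-in for the actual compile (cc test.c -o test).
if [ ! -e test ] || [ test.c -nt test ]; then
    echo "rebuilding"
    touch test
else
    echo "up to date"
fi
```

Run it twice: the first run rebuilds, the second reports the target is up to date, which is exactly the bookkeeping make does for you across many files.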
| What Did Users Do or Use Before the Make Command? |
1,403,713,301,000 |
I am just starting out with embedded Android drivers, so any help would be great. I haven't found a lot of resources online.
At the moment, I am working through a tutorial on porting a driver, and the instructions read:
copy the platform data initialization files, “driver_sources/platform.c" and "driver_sources/platform.h" into “/arch/arm/”
How do I know which machine directory I should choose? I am using the APQ8064 DragonBoard. I don't see an APQ8064 to choose, but maybe it is called something else?
boot
common
configs
include
Kconfig
Kconfig.debug
Kconfig-nommu
kernel
lib
mach-at91
mach-bcmring
mach-clps711x
mach-cns3xxx
mach-davinci
mach-dove
mach-ebsa110
mach-ep93xx
mach-exynos
mach-footbridge
mach-gemini
mach-h720x
mach-highbank
mach-imx
mach-integrator
mach-iop13xx
mach-iop32x
mach-iop33x
mach-ixp2000
mach-ixp23xx
mach-ixp4xx
mach-kirkwood
mach-ks8695
mach-l7200
mach-lpc32xx
mach-mmp
mach-msm
mach-mv78xx0
mach-mxs
mach-netx
mach-nomadik
mach-omap1
mach-omap2
mach-orion5x
mach-picoxcell
mach-pnx4008
mach-prima2
mach-pxa
mach-realview
mach-rpc
mach-s3c2410
mach-s3c2412
mach-s3c2440
mach-s3c24xx
mach-s3c64xx
mach-s5p64x0
mach-s5pc100
mach-s5pv210
mach-sa1100
mach-shark
mach-shmobile
mach-spear3xx
mach-spear6xx
mach-tegra
mach-u300
mach-ux500
mach-versatile
mach-vexpress
mach-vt8500
mach-w90x900
mach-zynq
Makefile
mm
net
nwfpe
oprofile
perfmon
plat-iop
plat-mxc
plat-nomadik
plat-omap
plat-orion
plat-pxa
plat-s3c24xx
plat-s5p
plat-samsung
plat-spear
plat-versatile
tools
vfp
|
According to elinux.org, this should be the mach-msm folder. It is the folder for Qualcomm SoCs.
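One way to confirm this from the kernel tree itself is to grep the machine directories for the SoC name. Board file names vary between kernel trees, so treat this as a search rather than a guarantee:

```shell
# Run from the top of the kernel source tree: list files in the ARM
# machine directories that mention the SoC name (case-insensitive)
grep -ril apq8064 arch/arm/mach-*/ | head
```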
| Embedded Linux: Which machine directory to pick in /arch/arm? |
1,403,713,301,000 |
Some binaries save the command line used to configure them inside the binary (I don't remember any that do, otherwise I'd check the source). Is there a way to obtain the command line used as a macro in configure.ac?
For example, if I compile my code with
./configure --foo bar CXX=g++
I would like to save --foo bar CXX=g++ to a macro in config.h so it can be output by the binary using a flag
./myprogram -V
Version 1.0, compiled using: "./configure --foo bar CXX=g++"
|
configure is essentially a shell script bootstrapped from M4 macros, so you can use $* to grab all the arguments to ./configure. As per the autoconf manual you should do this right after AC_INIT, e.g.:
AC_INIT([My Program], 1.0, ...)
config_flags="$*"
AC_DEFINE_UNQUOTED([CONFIG_FLAGS],["$config_flags"],[Flags passed to configure])
This will #define CONFIG_FLAGS in config.h.
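The shell mechanics behind this are easy to see in isolation. The following is only an illustrative sketch of what the generated configure ends up doing, not autoconf's actual output:

```shell
#!/bin/sh
# configure-sketch.sh - capture this script's own command line and
# bake it into config.h, as AC_DEFINE_UNQUOTED does for real
config_flags="$*"
cat > config.h <<EOF
/* Flags passed to configure */
#define CONFIG_FLAGS "$config_flags"
EOF
```

Running `sh configure-sketch.sh --foo bar CXX=g++` leaves `#define CONFIG_FLAGS "--foo bar CXX=g++"` in config.h, which the program's -V handler can then print.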
| autoconf save ./configure command line to config.h |
1,403,713,301,000 |
After having worked through Linux From Scratch, I get the eerie feeling that in practice, this is not how new distros are built.
How do I search for tools that other distributions are built with? Is Debian really built from scratch? Googling "Linux distro build tools" has not been very fruitful.
The following are some questions that I have not been able to find on either LFS or Google:
What tools are used to build Debian?
What are some popular tools people use to automate the compilation process?
Is it possible for me to simply build the entire system from precompiled binaries?
How do I create a live iso of my working system? What about an automated installer? Are there automation tools for making live iso and installer?
If I wanted to use another distro as a base, where would I start? Are there specialized tools for branching from existing distros?
LFS is cool, but it doesn't answer many practical questions that I have. Where can I find more information? In particular, what key words can I use in my google searches to find information on tools I can use to build a linux distro? Is there a book like LFS that focuses more on branching an existing distro rather than learning the build process?
PS
I have come across SUSE studio, and the like, but those tools require you to be locked in to that particular distribution, and can only offer as much flexibility as the program will allow. How did people branch from SUSE linux before SUSE studio?
|
Debian is built from scratch in the sense that each package maintainer builds his package from the source, so that you don't have to. Most distros work that way (exceptions are, for example, Gentoo or LFS). So the "tools" to build the software depend on each component, and the packaging into a .deb or .rpm is often handled by a distro-specific tool.
To branch an existing distro, you would have to set up a repository, and start filling that with packages. Let the package manager point to your repository, and the one of the base distro. Then you can start one by one replacing the base packages with your patched ones.
| How do I search for Linux distro build tools? |
1,403,713,301,000 |
How/why can a Firefox 64-bit (or 32-bit) package work on different Linux distributions, since each Linux distribution has a different version of gcc, glibc, the Linux kernel, etc.?
|
The application is coded so that it does not use any calls that would limit it to any particular distribution. Having said that, the statement you are making is not entirely true, because unless libstdc++.so.6, libm.so.6 and libc.so.6 are present on the system, Firefox will not work. So your question is predicated on gcc and glibc being at least at particular minimum versions.
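A quick way to check whether those required libraries are present is to run ldd on the binary and filter for unresolved entries. The firefox path below is an assumption; adjust it for your install:

```shell
# Print any shared libraries the binary needs but the system can't find.
# An empty result means all of its dependencies resolve.
ldd /usr/lib/firefox/firefox | awk '/not found/ { print $1 }'
```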
| How/why can Firefox packages work on all Linuxes? |
1,403,713,301,000 |
So I've been at this for a while and have been poking around for an answer for a few days, and figure it's about time to ask for help. I am running Ubuntu 10.10 in VMWare Fusion, and have downloaded a copy of the 3.2 kernel and built it with all default settings. When I try to boot into the new kernel after a call to make install, I get the following message:
[ 1.581916] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 1.582260] Pid: 1, comm: swapper/0 Not tainted 3.2.4 #1
[ 1.582444] Call Trace:
[ 1.582552] [<ffffffff815e7447>] panic+0x91/0x1a7
[ 1.582666] [<ffffffff815e75c5>] ? printk+0x68/0x6b
[ 1.582799] [<ffffffff81ad2152>] mount_block_root+0x1ea/0x29e
[ 1.582929] [<ffffffff81ad225c>] mount_root+0x56/0x5a
[ 1.583047] [<ffffffff81ad23d0>] prepare_namespace+0x170/0x1a9
[ 1.583178] [<ffffffff81ad16f7>] kernel_init+0x144/0x153
[ 1.583304] [<ffffffff815f45f4>] kernel_thread_helper+0x4/0x10
[ 1.583436] [<ffffffff81ad15b3>] ? parse_early_options+0x20/0x20
[ 1.583570] [<ffffffff815f45f0>] ? gs_change+0x13/0x13
This used to appear on every reboot. I found that if I changed the VM's hard drive type, I could at least get GRUB to boot, but the message above comes up if I try to load the newly compiled kernel. The old kernel works as before. I have checked that I compiled in support for ext4, which is the fs my root is running on. I have also tried generating an initrd file with a call to "sudo update-initramfs -c -k 3.2.4", but to no avail.
The compilation, I think, was pretty standard:
make menuconfig
make
make modules_install
make install
update-grub
reboot
Were the general steps. In terms of options, I basically took the default on everything. In case it's pertinent, my fstab looks like this:
proc /proc proc nodev,noexec,nosuid 0 0
#UUID=c75eddd9-f4fa-49be-927b-8c2da7074135 / ext4 errors=remount-ro 0 1
/dev/sda1 / ext4 defaults 0 1
#UUID=5bc6915e-fdfa-479a-885f-ea03cb14f9cd none swap sw 0 0
/dev/sda5 none swap sw 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
Where I've tried it with both UUID's and /dev/sd* notation. Any help or advice would be much appreciated, as it's gotten quite frustrating.
Thank you.
|
You forgot to build your initrd that goes with the kernel. Run update-initramfs -c -k kernelversion and then update-grub to find it and add it to the grub menu.
| Kernel Panic - not syncing: VFS: Unable to mount root fs after new kernel compile |
1,403,713,301,000 |
Is there any patch for the Linux kernel to use different memory allocators, such as the ned allocator or the TLSF allocator?
|
The allocators you mention are userspace allocators, entirely different to kernel allocators. Perhaps some of the underlying concepts could be used in the kernel, but it would have to be implemented from scratch.
The kernel already has 3 allocators, SLAB, SLUB, SLOB, (and there was/is SLQB). SLUB in particular is designed to work well on multi-CPU systems.
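You can check which of these a given kernel was built with by grepping its config. The /boot path below is distro-dependent, and some kernels expose /proc/config.gz instead:

```shell
# Exactly one of CONFIG_SLAB / CONFIG_SLUB / CONFIG_SLOB should be =y
grep -E '^CONFIG_SL[AOU]B=' /boot/config-"$(uname -r)"
```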
As always if you have ideas on how to improve the kernel, your specific suggestions, preferably in the form of patches, are welcome on LKML :-)
| Kernel memory allocator patch |
1,403,713,301,000 |
I have built Qt6 in an Alma8 based Docker container, with the Docker host being Fedora 35.
Under some circumstances (described below), all Qt libs cannot load libQt6Core.so[.6[.2.4]]. But that file exists and the correct directory is searched for that file. Other Qt libs (e.g., libQt6Dbus.so) are found and loaded.
Extensive debugging, rebuilding, and searching the web did not yield any clues as to the underlying cause or how I could fix it.
Locating the problem
I have narrowed down the problem to the following scenario:
I created two minimal VMs, one with centos7 and one with alma8.
I installed Docker from the official repos into both of them.
I ran the same Docker image in both VMs and installed the same qt6 package.
It breaks when the Docker host is centos7.
It works when the Docker host is alma8.
Theory and Question
Qt6 was built on Alma8 and links to some system libraries newer than what Centos7 provides, so Qt6 cannot run under Centos7 (this is totally expected and okay). But it should run anywhere in the Alma8 Docker container.
Container images should be able to run anywhere, but in this case "something" from the host OS sneaks into the container and causes the issue – even though both containers use the exact same image!
The question is: What is this "something" and how/why does it break the build?
What I tried
I inspected libQt6Gui.so to see whether or not it can load libQt6Core.so and I inspected libQt6Core.so to see if something looks bogus using:
ldd and LD_DEBUG=libs ldd which indeed showed some differences (see below)
libtree which showed no differences (but a nice tree :))
pyldd (from conda-build)
readelf -d
What I also tried:
Setting LD_LIBRARY_PATH (did not change anything – no surprise since I know that the correct path is always searched)
Building Qt6 in an alma8 container with a centos7 host (build failed with "libQt6Core.so.6: cannot open file", same error as with the built lib)
Building Qt6 in a centos7 container (build failed due to other problems I could not yet fix)
Differences from ldd
In the screenshots below, you see the Alma8 Docker container on a Centos7 host on the left and the Alma8 Docker container on an Alma8 host on the right.
The first two images show the results for ldd /opt/emsconda/lib/libQt6Gui.so. libQt6Core can not be found on the left but is found on the right.
This second screenshot shows that other Qt libs are found and loaded. The ICU libs are also missing on the left - maybe they are only loaded when libQt6Core was also loaded?
This screenshot shows the results of LD_DEBUG=libs ldd .... You can see that in both cases, libQt6Core is searched for in the correct location (/opt/emsconda/lib). But it is only loaded in the right container. The left one additionally looks in /opt/emsconda/lib/./ (haha) and then silently walks on to the next lib ...
I could not find any error messages. This file is just not opened/loaded.
Inspecting the libQt6Core.so itself might give us a clue. It links to a linux-vdso.so.1.
According to this SO question, that file is a virtual lib injected into the userspace by the OS kernel.
Since Docker containers do not run their own kernel, I suspect that that file comes from the host OS. Maybe libQt6Core relies on some functionality in linux-vdso.so.1 that the centos7 kernel cannot provide? I have no idea ...
Since nothing I tried so far yields an error message, I have no clue what the actual problem might be or how to proceed with debugging. I'd be grateful for any kind of hints, tips or help.
|
The question got answered in the Qt forums.
Summary:
The .so contains an ABI tag that denotes the minimum kernel version required. You can see this via objdump -s -j .note.ABI-tag libQt6Core.so.6.2.4. The result is in the last three blocks (0x03 0x11 0x00 -> 3.17.0 in my case).
This information is placed there on purpose, since Qt uses a few system calls that are only available with newer kernels.
Glibc reads this information when it loads a shared object and compares it to the current kernel's version. If it doesn't match, the file is not loaded.
Since Docker has no own kernel, the Docker host’s kernel version is used for that comparison. So even if the Docker image is Alma8, the kernel is still the old v3.10.0 from the Centos7 host in my case.
You can use strip --remove-section=.note.ABI-tag libQt6Core.so.6.2.4. Qt seems to have fallback code, so nothing breaks.
Source: https://github.com/Microsoft/WSL/issues/3023
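The check glibc performs boils down to a plain version comparison. Here is a sketch of the same test in shell, using GNU `sort -V`; `kernel_at_least` is a hypothetical helper name, not anything glibc exposes:

```shell
# Succeeds when the current kernel is at least the required version.
kernel_at_least() {
    required=$1
    current=${2:-$(uname -r | cut -d- -f1)}
    # version-sort the two strings; if the required one sorts first
    # (or they are equal), the current kernel is new enough
    [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]
}

if ! kernel_at_least 3.17.0 3.10.0; then
    echo "kernel too old: glibc would refuse to load the library"
fi
```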
| Existing .so file cannot be loaded even though it exists, seems to depend on Docker host OS |
1,403,713,301,000 |
I tried to compile the Hello World program in C, inside Eclipse PTP, but it gives me an error related to mpi.h.
I have included /usr/local/include and /usr/local/lib in my paths, and also tried running a search with find / -name mpi.h. I still get a No such file or directory error.
I tried to install mpich2, but still couldn't find mpi.h.
Also:
There is no folder inside the include directory, why is that?
I can find mpicc at /usr/bin/mpicc
The same problem occurs when trying to compile the project as C++ code. What should I do?
|
This Stack Overflow question answers yours.
According to yum, the mpi.h header file is provided by the following packages:
$ yum whatprovides '*/mpi.h'
openmpi-devel-1.8.1-1.el6.x86_64
mpich2-devel-1.2.1-2.3.el6.x86_64
mvapich2-devel-2.0rc1-1.el6.x86_64
mvapich-devel-1.2.0-0.3563.rc1.5.el6.x86_64
mvapich2-psm-devel-2.0rc1-1.el6.x86_64
mpich-devel-3.1-4.el6.x86_64
mvapich-psm-devel-1.2.0-0.3563.rc1.5.el6.x86_64
I've removed most of the output, as well as the i686 versions. Pick the package according to what (variant) you're trying to work with. :)
Note that most of these packages create a subdirectory in /usr/include when installed. For instance, the mpi.h file provided by openmpi-devel is available at /usr/include/openmpi-x86_64/mpi.h, meaning you'd have to either include openmpi-x86_64/mpi.h in your source code, or add the /usr/include/openmpi-x86_64 directory to your include paths.
Also: some of these packages (such as mvapich-devel) don't even use /usr/include at all, and put their headers under /usr/lib64/{package}/include/.
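Two practical follow-ups on such a box (the paths below are the usual Red Hat locations and may differ elsewhere): locate every installed copy of mpi.h, and compile through the mpicc wrapper, which supplies the include and library flags for its own MPI variant:

```shell
# Find all installed copies of mpi.h
find /usr/include /usr/lib64 -name mpi.h 2>/dev/null

# The wrapper adds the right -I/-L options itself
mpicc hello.c -o hello
```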
| mpi.h not found |
1,403,713,301,000 |
I have a simple shared library that is currently compiled on Linux using:
gcc -c -fPIC foo.c -o foo.o
gcc -shared -o foo.so foo.o
I need to relay instructions to a colleague for compiling the same on AIX.
I do not know if my colleague will be using gcc on AIX or a native compiler.
Will these gcc instructions also work for AIX? If not, what modifications are necessary? Linux gcc version is 4.4.7
Can anyone provide instructions for same using native AIX compiler? xlC?
Thank you.
|
On AIX you can have 3 compilers:
GCC
newer XL C/C++ Enterprise Edition
older VisualAge C++ Professional
For GCC since late 2.x, syntax for creating shared libraries is:
gcc -shared -Wl,-soname,your_soname -o library_name file_list library_list
Example:
gcc -fPIC -g -c -Wall a.c
gcc -fPIC -g -c -Wall b.c
gcc -shared -Wl,-soname,libmystuff.so.1 -o libmystuff.so.1.0.1 a.o b.o -lc
For the above AIX native compilers, see this page for detailed instructions:
http://www.ibm.com/developerworks/aix/library/au-gnu.html
(see section Shared libraries on AIX versus System V systems)
| How to compile shared library on AIX |
1,403,713,301,000 |
I want to know if there is any tool that, given the output of configure, cmake, autoconf or whatever library/dependency matcher you have, installs the required packages/sources for you from your distro repo.
In other words, some tool that resolves library dependencies for you, installing the needed packages from your repo, when you want to build a program from source for which you don't know which libraries are required.
I give you a random example:
Some time ago I wanted to use Oprofile. It isn't packaged for Ubuntu 14.04 so I had to build for myself.
In the README file it says I have to do ./configure first.
I did ./configure once; it threw an error saying the "popt" library wasn't found. I looked it up on the Internet, discovered which deb package I had to install for that library, installed it and ran ./configure again.
The second time, it again threw an error saying the liberty library wasn't found. Again, I looked up what library I needed, and found that for Ubuntu it was binutils-dev. But when I tried to install it, it didn't work. Funnily enough, that library was in the binutils-dev package for Ubuntu 12.04, but libiberty-dev for Ubuntu 14.04.
On the third try, I was finally able to compile it and run the program.
|
No, it can't fully automatically infer dependencies.
If it had been packaged, apt-get build-dep oprofile would have helped. If you can find a package elsewhere, you can look up the dependencies there. For example, if the package exists in the next release of your distribution. e.g. here:
http://archive.ubuntu.com/ubuntu/pool/universe/o/oprofile/oprofile_1.0.0-0ubuntu9.dsc
(and if you plan on compiling things yourself, always consider upgrading to the latest version first!)
Other than that it requires a little bit of experience to figure out. configure scripts unfortunately won't tell you the package names, but usually it's quite easy to find. Also use the search functions on the distribution web pages - they can tell you which packages contain a certain file name.
Instead of iterating through configure attempts, it may be more convenient to look at the configure.ac file, from which the script was generated (and which usually is much shorter). You may be able to discover some optional functionality only offered if certain libraries are installed and some flag is given.
LIBERTY_LIBS="-liberty $DL_LIB $INTL_LIB"
BFD_LIBS="-lbfd -liberty $DL_LIB $INTL_LIB $Z_LIB"
POPT_LIBS="-lpopt"
are typical library dependencies.
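A small sketch that turns such -l flags into library names you can then feed to your distro's file search (apt-file search on Debian/Ubuntu, yum whatprovides on Red Hat derivatives):

```shell
# Prints libbfd, libiberty and libpopt, one per line; then run e.g.
# "apt-file search libpopt" on each to find the package that ships it
printf '%s\n' "-lbfd -liberty -lpopt" | tr ' ' '\n' | sed -n 's/^-l/lib/p'
```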
AC_ARG_ENABLE(gui,[ --enable-gui compile with gui component (qt3|qt4|yes|no),
if not given or set to yes, gui defaults to qt3],, enable_gui=qt3)
indicates that you may also want to consider QT dependencies if you want a GUI.
| Is there any automatic tool for installing required libraries to compile a program from source? |
1,420,103,176,000 |
I needed to compile apache2 on ubuntu but I wanted it to use the original configuration and layout. It took me a very long time to find the information to do this, so I thought I would create this question and answer to help others.
Note: I needed the latest version of apache that was not yet supported by my version of ubuntu and I did not want to upgrade my version of ubuntu.
|
Preamble
This worked for me. If there is a simpler way, I am sure you will let me know.
In theory these steps should work for any linux distribution, you just need to find the original configure options that are used to compile apache for your distro. Perhaps you could add comments to this answer on where you found the options.
The important option is:
--enable-layout=Debian
Debian can be changed to any one of the supported layouts in the config.layout file in the apache build directory; the options are defined as:
<Layout x>
...
</Layout>
Where x would be the layout option. Try googling "--enable-layout=x" where x is your distribution to find your options. Try to find the original options used by your distro and not some random suggestions.
EDIT: As mentioned by faker, the problem with this is that when you upgrade using apt and there is a new version of apache2, the compiled version will be overwritten. His suggestion of building a new deb is a good one. Unfortunately, due to various deb dependency issues that are too much work to get around, I have not been able to do so. However I would suggest that you try that route first; this should help you:
http://blog.wpkg.org/2014/06/29/building-apache-2-4-x-deb-packages-for-debian-wheezy-7-x/
I've opted to keep it as is, however I have set a hold on apache2 so it is not upgraded till I am ready to release the hold. Alternatively you could just remove apache from the machine and add it again when you are ready.
To hold:
sudo apt-mark hold apache2
To release hold:
sudo apt-mark unhold apache2
I would also suggest that you make a clone of the server you wish to change, work through the process on the clone and get it working before trying it on a production env. Breaking apache for a day or longer on a prod environment is not a stress you need in your life. This is where virtual machines are great, take a snapshot and create a new instance from the snapshot. Or replicate the environment you wish to change and make the changes there.
Here is how to do it on ubuntu as promised
This assumes that you already have the default apache version installed on your system by having previously run:
sudo apt-get install apache2
and you have run
sudo apt-get upgrade
To upgrade all packages to the latest, including apache.
If there have been major changes in config from your version to the latest version of apache you would need to make those changes yourself. This can take a while, hence the suggestion of trying this on a server clone as mentioned in the preamble.
You need to install the dependencies to do the build
sudo apt-get build-dep apache2
You need to download the apache source and unzip it, this will be referred to as the build directory.
Backup your current configuration: DON'T SKIP THIS STEP
sudo cp -r /etc/apache2 ~/apache2_conf_back
You need to establish what your release code name is:
sudo cat /etc/lsb-release
Mine is trusty
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"
You should then be able to replace the 2 occurrences of the word "trusty" in the following url to get the options used to build your release's version of apache
bazaar.launchpad.net/~ubuntu-branches/ubuntu/trusty/apache2/trusty/view/head:/debian/rules
I used the options defined for the variables "AP2_COMMON_CONFARGS" and "AP2_worker_CONFARGS". Additionally I added the options:
--with-pcre=/usr \
--enable-mpms-shared=all \
--enable-unixd=static \
As I ended up trying a few times to get this right, I created the following file in the apache build directory which I ran each time. Call it myconfig.sh
#!/bin/bash
./configure \
--with-pcre=/usr \
--enable-mpms-shared=all \
--enable-unixd=static \
--enable-layout=Debian --enable-so \
--with-program-name=apache2 \
--with-ldap=yes --with-ldap-include=/usr/include \
--with-ldap-lib=/usr/lib \
--with-suexec-caller=www-data \
--with-suexec-bin=/usr/lib/apache2/suexec \
--with-suexec-docroot=/var/www \
--with-suexec-userdir=public_html \
--with-suexec-logfile=/var/log/apache2/suexec.log \
--with-suexec-uidmin=100 \
--enable-suexec=shared \
--enable-log-config=static --enable-logio=static \
--enable-version=static \
--with-apr=/usr/bin/apr-1-config \
--with-apr-util=/usr/bin/apu-1-config \
--with-pcre=yes \
--enable-pie \
--enable-authn-alias=shared --enable-authnz-ldap=shared \
--enable-disk-cache=shared --enable-cache=shared \
--enable-mem-cache=shared --enable-file-cache=shared \
--enable-cern-meta=shared --enable-dumpio=shared --enable-ext-filter=shared \
--enable-charset-lite=shared --enable-cgi=shared \
--enable-dav-lock=shared --enable-log-forensic=shared \
--enable-ldap=shared --enable-proxy=shared \
--enable-proxy-connect=shared --enable-proxy-ftp=shared \
--enable-proxy-http=shared --enable-proxy-ajp=shared \
--enable-proxy-scgi=shared \
--enable-proxy-balancer=shared --enable-ssl=shared \
--enable-authn-dbm=shared --enable-authn-anon=shared \
--enable-authn-dbd=shared --enable-authn-file=shared \
--enable-authn-default=shared --enable-authz-host=shared \
--enable-authz-groupfile=shared --enable-authz-user=shared \
--enable-authz-dbm=shared --enable-authz-owner=shared \
--enable-authnz-ldap=shared --enable-authz-default=shared \
--enable-auth-basic=shared --enable-auth-digest=shared \
--enable-dbd=shared --enable-deflate=shared \
--enable-include=shared --enable-filter=shared \
--enable-env=shared --enable-mime-magic=shared \
--enable-expires=shared --enable-headers=shared \
--enable-ident=shared --enable-usertrack=shared \
--enable-unique-id=shared --enable-setenvif=shared \
--enable-status=shared \
--enable-autoindex=shared --enable-asis=shared \
--enable-info=shared --enable-cgid=shared \
--enable-dav=shared --enable-dav-fs=shared \
--enable-vhost-alias=shared --enable-negotiation=shared \
--enable-dir=shared --enable-imagemap=shared \
--enable-actions=shared --enable-speling=shared \
--enable-userdir=shared --enable-alias=shared \
--enable-rewrite=shared --enable-mime=shared \
--enable-substitute=shared --enable-reqtimeout=shared;
Stop the current apache
/etc/init.d/apache2 stop
To build and install apache, run the following commands in the build directory
./myconfig.sh
make
make install
Restore your apache config. I HOPE YOU BACKED YOUR CONFIG UP AS DESCRIBED EARLIER
sudo rm -rf /etc/apache2
sudo cp -r ~/apache2_conf_back /etc/apache2
I needed to make the include paths absolute in /etc/apache2/apache2.conf, the following commands do this in vim
:%s/^IncludeOptional /IncludeOptional \/etc\/apache2\//
:%s/^Include /Include \/etc\/apache2\//
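If you'd rather not do this interactively, the same two edits can be made with sed; shown here against a copy so you can review the result before overwriting the real file:

```shell
# Make the Include paths absolute in a copy of the config
cp /etc/apache2/apache2.conf /tmp/apache2.conf.new
sed -i -e 's|^IncludeOptional |IncludeOptional /etc/apache2/|' \
       -e 's|^Include |Include /etc/apache2/|' /tmp/apache2.conf.new
diff /etc/apache2/apache2.conf /tmp/apache2.conf.new   # review, then move into place
```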
I also needed to change the path to the envvars in /usr/sbin/apache2ctl to /etc/apache2/envvars
Then restart apache
/etc/init.d/apache2 start
Hopefully that works for you; if there are any syntax errors please let me know so I can fix them. Many of them were rewritten from memory.
Good Luck!
Special thanks to jrwren for his post that was the missing piece to my puzzle:
how-to-build-configure-options-latest-apache-on-ubuntu
| How do I compile the latest apache2 on ubuntu using the original layout, configuration and configure options |
1,420,103,176,000 |
I have tried to compile a driver for my kernel 3.13.0-40-generic. I have also tried compiling a custom kernel with this driver, but both have failed.
The name of the driver is vizzini, for Linux 2.6.18-3.4.x. I have downloaded it from here.
The error is:
/home/usuario/Descargas/xr21v141x-lnx2.6.18-to-3.4-pak/vizzini.c:137:26: error: ‘usb_serial_probe’ undeclared here (not in a function)
.probe = usb_serial_probe,
^
/home/usuario/Descargas/xr21v141x-lnx2.6.18-to-3.4-pak/vizzini.c: In function ‘vizzini_set_termios’:
/home/usuario/Descargas/xr21v141x-lnx2.6.18-to-3.4-pak/vizzini.c:419:29: error: invalid type argument of ‘->’ (have ‘struct ktermios’)
cflag = tty->termios->c_cflag;
^
In file included from include/linux/printk.h:236:0,
from include/linux/kernel.h:13,
from /home/usuario/Descargas/xr21v141x-lnx2.6.18-to-3.4-pak/vizzini.c:42:
/home/usuario/Descargas/xr21v141x-lnx2.6.18-to-3.4-pak/vizzini.c: In function ‘vizzini_out_callback’:
/home/usuario/Descargas/xr21v141x-lnx2.6.18-to-3.4-pak/vizzini.c:804:72: error: ‘struct usb_serial_port’ has no member named ‘number’
if (debug) dev_dbg(&port->dev, "%s - port %d\n", __func__, port->number);
^
I checked that the function is static and declared in usb-serial.c at line 697.
Can anybody help me?
Thanks and regards!.
|
I was using a driver for kernels < 3.4 while I have 3.13; I had misread the version (I saw "3.1.3" ...). I have downloaded a newer driver.
| Error compiling custom kernel with a new usb serial driver |
1,420,103,176,000 |
Background
I discovered Ms. PacMan some time ago, but you don't see many machines with that game in arcades. Then I found out about MAME (version 0.145), which is available on my Linux (Ubuntu) machine, got it running Ms. PacMan, and that works.
Problem
When I save a game (so I don't have to start from the beginning every time (so to improve more difficult levels) and then restore the game, the game in MAME resets and I have to start from the beginning.
I have read here about games being updated to support save states, and also somewhere that saving was removed from MAME 0.100. I also tried to compile an older MAME version, 0.99 (which should support saving), but it has no Linux support. And AdvanceMAME (1.2) does not compile because of configuration file problems that don't seem easy to solve.
Question
Should I try harder to get an old MAME compiled, or is this saving of Ms. PacMan not going to work on Linux anyway?
|
Compiling and setting up mame or advancemame, with svgalib or newer replacements, is not easy. You should stick with the working version of mame that you have.
If you get the message that the game is properly restored, the first thing to try is restoring from the same position a second time, do this while the system is 'resetting' the game. That often helps (but not always).
What should almost certainly solve the issue is saving a game while you are playing (and not at the beginning of a new level), the latter never worked for me without having problems reloading, the former always worked.
When saving in the middle of a level, make sure Blinky & co are not too close, so you have time to get your hands back in position on the keyboard/joystick after restoring.
| MAME saving but not restoring games |
1,420,103,176,000 |
I'm using Buildroot for compiling embedded Linux. It works well because I have target Makefile configurations, but now I need a driver for my USB devices. I managed to compile Qt applications (C++) on my host Linux for the target Linux by using Buildroot's /output/host/usr/bin/arm-none-linux-gnueabi-c++. Works well.
Now I'm trying to compile c-files for this driver.
I'm calling it like:
/output/host/usr/bin/arm-none-linux-gnueabi-gcc -Wall -D__KERNEL__ -DMODULE -I/home/buildroot-2012.08/output/build/linux-2.6.35.3/include -DMODVERSIONS -include /home/buildroot-2012.08/output/build/linux-2.6.35.3/include/config/modversions.h -I /home/buildroot-2012.08/output/build/linux-2.6.35.3/drivers/usb/serial/ -O -c -o ftdi_sio.o ftdi_sio.c
I'm getting error:
output/build/linux-2.6.35.3/include/linux/linkage.h:5: fatal error: asm/linkage.h: No such file or directory
How should I configure the driver compilation?
Is there some other way to do it for the target Linux? Maybe I'm not doing it the right way.
|
asm/ is a symbolic link to your target architecture's headers; if it doesn't exist, you're probably missing some target in your kernel build directory, such as configure (if not, maybe just module_headers can create it).
It is not clear from your question if you're using the command line, a custom Makefile, or a Buildroot package (and which version of Buildroot you are using).
Your command line is building a C object (.o), not a kernel module (.o was the extension for kernel modules up to version 2.4; from 2.6 it is .ko).
If you are not sure about the flags, increase the verbosity for the build of the kernel modules, build and log, then use the same flags.
The kernel has its own way to build modules and Buildroot has its own way for packages; the best approach is probably to create a new package to build your module (take a look to see if there's already some other package that builds a module).
This example is a bit old but may help.
edit
The ftdi_sio.ko module is generated into the directory /lib/modules/$(uname -r)/kernel/drivers/usb/serial/
But it could be configured as builtin also, so that no .ko is generated, check the symbol USB_SERIAL_FTDI_SIO in your configuration (should be y for builtin m for module).
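A quick way to check that symbol against the kernel build tree Buildroot produced (path taken from the question's command line):

```shell
# y = builtin (no .ko produced), m = module, absent = not enabled
grep '^CONFIG_USB_SERIAL_FTDI_SIO=' \
    /home/buildroot-2012.08/output/build/linux-2.6.35.3/.config
```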
If it is builtin or inserted there should be a /proc interface at runtime on the target called ftdi_sio, find it.
| buildroot compiling driver |
1,420,103,176,000 |
I'm trying to use the IP_TRANSPARENT declaration. I am using debian 6.0.5. IP_TRANSPARENT is only defined in linux/in.h however it conflicts with netinet/in.h.
In centos for example, IP_TRANSPARENT is defined in both linux/in.h and bits/in.h.
When I look at the top of bits/in.h (which I get when I include netinet/in.h), the centos one has
/* Copyright (C) ... 2008, 2010 Free Software Foundation, Inc.
Whereas one in my debian install has
/* Copyright (C) ... 2004, 2008 Free Software Foundation, Inc.
I've tried
apt-get install linux-headers-2.6.32-5-686
But it says it is already the newest version. How do I update the debian linux headers to the latest versions?
Edit:
In centos, IP_TRANSPARENT is defined in bits/in.h, which I get if I include netinet/in.h. It compiles fine under centos.
In debian, IP_TRANSPARENT is not in bits/in.h, so when I include netinet/in.h I get a ‘IP_TRANSPARENT’ undeclared error when compiling.
|
I'm sure you already have the right versions, but linux/in.h is a kernel header that you should not be trying to include directly in a user space program.
You also shouldn't include bits/in.h as that is a header fragment that will be included by other headers when necessary.
The netinet/in.h is what you should be including and that will, in turn, include the bits/in.h header. If that doesn't have a definition for IP_TRANSPARENT then the version of glibc on the system is too old.
If you can't update glibc because you are already on the latest version offered by your distribution then the pragmatic solution, and the one which will make your program portable, is to add the following to your code:
#ifndef IP_TRANSPARENT
#define IP_TRANSPARENT 19
#endif
| IP_TRANSPARENT missing from glibc headers |
1,420,103,176,000 |
I want to run SSH on Qtopia (on my FriendlyARM). My own distribution is Ubuntu, so I cannot copy and paste the ssh binary file onto the device.
If I can copy and paste a binary file, where can I find it? If I must compile SSH, how can I do that on my Ubuntu?
|
Your device has an ARM processor. Your PC has an x86 processor. ARM and x86 are different processor architectures with different instruction sets. An executable program compiled for x86 consists of x86 instructions that an ARM processor cannot execute, and vice versa.
You need an ARM binary. Furthermore, you need an ARM binary that's compatible with the other software you have on your device. Specifically, you need either a statically linked binary (a binary that does not depend on anything else) or a binary linked with the right system libraries.
Check which standard library you have. If you have a file called /lib/ld-uClibc.so, you have uClibc, a small library intended for embedded systems. If you have a file called /lib/ld-linux.so.2, you have GNU libc, the same library that you have on your Ubuntu PC (and any other non-embedded Linux).
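The check above can be scripted; a minimal sketch (the loader paths are the ones mentioned above and vary by architecture and libc, so treat "unknown" as "look further"):

```shell
# Rough detection of the C library from the dynamic loader's file name.
libc=unknown
[ -e /lib/ld-uClibc.so ] && libc=uClibc
[ -e /lib/ld-linux.so.2 ] && libc=glibc
echo "detected C library: $libc"
```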
You have two choices of SSH clients and servers: OpenSSH and Dropbear. Dropbear is smaller, but has fewer features, in particular no SFTP.
If the standard library is Glibc, you can grab a binary from Debian's ARM distribution. Get the armel client or server package. Extract the .deb file by running
dpkg-deb -x openssh-….deb .
Then copy the binary from ./usr/bin or ./usr/sbin to the device.
If the standard library is uClibc, you'll need to grab a binary from a distribution based on uClibc. Dropbear is included in many embedded distributions. Openmoko, which shares some ancestry with Qtopia, includes Dropbear in its default installation. If you're going to want to install several programs, BuildRoot makes it very easy to obtain a cross-compiler and build common programs: you pretty much only need to follow the guide.
| Adding ssh on embedded linux |
1,420,103,176,000 |
I'm working on a Davinci DSP ARM embedded board. The board itself is the Texas Instruments 816X/389X EVM. I'm currently trying to get apache working on the board. The problem is that the SDK for the board is extremely basic and doesn't include 'make' or any update manager like RPM, yum, or apt-get. So I'm having a hard time getting it to work.
I compiled apache on my host machine, which is connected through minicom to the target. I have Sourcery G++ installed, but don't have any experience with it. So, when I took the compiled files to the target, I ended up with the error:
line 1: syntax error: word unexpected (expecting ")")
I'm assuming that I did something wrong during the compile, but I'm not really sure because I'm normally a hardware designer and not a software guy.
|
When you are compiling something for another system, it needs to be cross-compiled to that architecture. Most likely your host is an x86. The TI is an ARM. The instruction set isn't the same. You need to setup a cross toolchain to compile apache with an ARM version of g++. TI should have included cross tools with the EVM so that's the best place to start looking. Otherwise, you can build your own toolchain (http://kegel.com/crosstool/).
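A hedged sketch of what cross-compiling an autoconf-based package like Apache looks like. The "arm-none-linux-gnueabi" triplet is an assumption (typical of Sourcery G++ toolchains), and Apache's configure may need extra cache variables when cross-compiling.

```shell
# Sketch: cross-compile an autoconf package for ARM.
CROSS=arm-none-linux-gnueabi
if command -v "${CROSS}-gcc" >/dev/null 2>&1 && [ -x ./configure ]; then
    ./configure --host="$CROSS" CC="${CROSS}-gcc" --prefix=/opt/apache-arm
    make
else
    echo "need the ${CROSS}-gcc toolchain on PATH and the package source tree here"
fi
```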
| Embedded board with apache |
1,420,103,176,000 |
How can I create an initrd image for a new (experimental) kernel without actually installing it?
(Existing tools create an initrd based on the config and details of an installed kernel.)
Say I compile a new kernel with experimental features turned on, and I have it on another, separate partition. I would like to boot into this kernel; will the old initrd work for that?
If I want to create a new initrd.img for the new kernel without actually installing the kernel, how can I do it?
BTW, can someone clarify what initramfs is? Will it be useful for my scenario?
|
Creating an initrd doesn't have anything to do with installing a kernel. All you do is to create a file structure for the initrd, copy the required files, write the init script and package all of that into a cpio archive. I used the instructions in the Gentoo Wiki to make my initrd. Some distributions make tools to generate initrds, and for that you will have to name your distro. For example, Arch has mkinitcpio.
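The manual process is roughly the following (a sketch only; a real image needs a shell and whatever tools/modules your init script uses, assumed here to come from a static busybox if one is present on the build host):

```shell
# Build a minimal initramfs image from scratch.
root=$(mktemp -d)
mkdir -p "$root/bin" "$root/dev" "$root/proc" "$root/sys"
[ -x /bin/busybox ] && cp /bin/busybox "$root/bin/"  # assumption: static busybox
cat > "$root/init" <<'EOF'
#!/bin/sh
mount -t proc none /proc
mount -t sysfs none /sys
exec /bin/sh
EOF
chmod +x "$root/init"
# Pack the tree in the "newc" cpio format the kernel expects, then compress.
( cd "$root" && find . | cpio -o -H newc ) | gzip > "$root.img"
echo "created $root.img"
```

The resulting image can be passed to the bootloader as the initrd for the experimental kernel.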
initramfs is just another (newer) implementation of the initial ramdisk. I don't know for sure, but I think modern distributions all use initramfs. When you see "initrd", it may be a shorthand for "initial ramdisk", and thus it covers both initrd and initramfs.
| Creating new initrd without installing kernel |
1,420,103,176,000 |
There is a very old game (named "six") that is still available in Fedora as the package six.x86_64, which I want to use in a hobby project of mine. I have more recent source from the original author, but cannot compile it because I don't have tools that old (think 2010 or older -- Qt3 and such).
Fedora seems able to keep it running. Without getting their sources, I don't see how. They list the sources at https://src.fedoraproject.org/rpms/six/tree/rawhide but when I try to "Fork and edit" it either stalls and times out, or once it got a little farther and had me sign up for a Fedora account (which I did) but then failed with a message about a 'method' not being allowed at that URL. I have no idea what that means.
So I have a couple of questions:
How do I get a copy of the source
How does Fedora keep it working and can I do the same
How can I compile it with modern tools
BTW the final version of the software is at the author's GitHub: https://github.com/melisgl/six and I'd encourage Fedora to switch over to that, but don't know how to contact them for that. This is the first time I've used Fedora since around 2002, and then mostly because my school was using it. I was into Gentoo for my own things. Now Xubuntu.
|
How do I get a copy of the source
The easiest way is to download the source RPM with dnf download --source six (or if you are not on Fedora you can download it from Koji). You'll get a .src.rpm archive which contains the upstream source tarball, patches from Fedora and the SPEC file.
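If you'd rather not install anything, you can also unpack the .src.rpm by hand with rpm2cpio. A sketch; the file-name pattern is an assumption about what dnf/Koji downloaded into the current directory:

```shell
# Sketch: extract a source RPM's contents without installing it.
srpm=$(ls six-*.src.rpm 2>/dev/null | head -n 1)
if [ -n "$srpm" ]; then
    mkdir -p six-src
    ( cd six-src && rpm2cpio "../$srpm" | cpio -idmv )  # tarball, patches, .spec
else
    echo "no six-*.src.rpm found in the current directory"
fi
```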
How does Fedora keep it working and can I do the same
There are only two downstream patches in Fedora:
six-fix-DSO.patch which just changes LDFLAGS and
six-gcc43.patch that adds some includes.
So one patch that makes some small adjustment to the build process and one patch that makes the old source work with newer gcc. Nothing special.
How can I compile it with modern tools
If you are asking how you can compile it with newer Qt, the answer is you can't. Qt3 is still available in Fedora so the secret is having Qt3 available.
| How to download and compile source code from Fedora |
1,420,103,176,000 |
I just bought a set-top box (digital receiver - Vu+ Solo2) which runs Linux, and would like to compile some C software on it. It uses OPKG as the package manager. I executed
opkg update
and:
root@vusolo2:~# opkg install gcc
Unknown package 'gcc'.
Collected errors:
opkg_install_cmd: Cannot install package gcc.
root@vusolo2:~#
I figured that this is because I do not have the necessary repositories. The files in /etc/opkg/ only point to feeds/repositories that are owned by the creators of the Linux image that the receiver is running (Black Hole).
As far as I understood, the repositories need to match the CPU architecture. Here is the output of /proc/cpuinfo:
root@vusolo2:~# cat /proc/cpuinfo
system type : BCM7346B2 STB platform
machine : Unknown
processor : 0
cpu model : Broadcom BMIPS5000 V1.1 FPU V0.1
BogoMIPS : 864.25
cpu MHz : 1305.007
wait instruction : yes
microsecond timers : yes
tlb_entries : 64
extra interrupt vector : yes
hardware watchpoint : no
isa : mips1 mips2 mips32r1
ASEs implemented :
shadow register sets : 1
kscratch registers : 0
core : 0
VCED exceptions : not available
VCEI exceptions : not available
processor : 1
cpu model : Broadcom BMIPS5000 V1.1 FPU V0.1
BogoMIPS : 655.36
cpu MHz : 1305.007
wait instruction : yes
microsecond timers : yes
tlb_entries : 64
extra interrupt vector : yes
hardware watchpoint : no
isa : mips1 mips2 mips32r1
ASEs implemented :
shadow register sets : 1
kscratch registers : 0
core : 0
VCED exceptions : not available
VCEI exceptions : not available
Now, which repositories should I use to get the following packages:
gcc
gcc-symlinks
make-dev
binutils-dev
libgcc-dev
?
|
I found myself in this position today and the answer seems to be, simply: you don't want to. Just cross-compile.
The Entware-ng opkg repository project provides a nice toolchain for this purpose, with instructions here. They only support native compilation on ARM|x86* and conclusively state:
There is no gcc for mipsel repo.
See also: https://github.com/Entware-ng/Entware-ng/issues/138
| How to install compiler tools with opkg on MIPS CPU architecture |
1,420,103,176,000 |
I want to analyze code coverage data. I want to create gcov files from OpenSSL (and from other projects), but I can only create them in the project's own directory, and only for the files in the current folder.
I want to create them in a different directory, preserve the original source directory structure, and make the whole process as automatic as possible.
source:
~/mygcovproject/projects/openssl-1.0.0
output:
~/mygcovproject/gcovdata/openssl-1.0.0
Currently I can create the files only in this way:
$ cd ~/mygcovproject/projects/openssl-1.0.0
$ make clean
$ export CC="gcc -fprofile-arcs -ftest-coverage"; ./config
$ make
$ make tests
$ cd test
$ gcov *.c
$ mv *.gcov ~/mygcovproject/gcovdata/openssl-1.0.0/test/
$ cd ..
$ cd apps
$ gcov *.c
$ mv *.gcov ~/mygcovproject/gcovdata/openssl-1.0.0/apps/
$ cd ..
$ cd crypto
... (for all the folders)
But there is 2 big problem with this method:
1) There are many folders and subfolders.
2) I have to move the files manually.
How should I do this? Can you help me please?
Upd:
Thanks Gilles, it helped me a lot, but I still struggle with the last part. I get error messages from gcov:
$ cat dothemagic.sh
#!/bin/bash
shopt -s globstar
gcov_data_dir="../../gcovdata/${PWD##*/}"
mkdir -p "$gcov_data_dir"
#make
#make tests
for x in ./**/*.c; do
gcov "$gcov_data_dir/${x%/*}/$x"
done
exit
$ ./dothemagic.sh
../../gcovdata/openssl-1.0.0/./apps/./apps/app_rand.gcno:cannot open notes file
../../gcovdata/openssl-1.0.0/./apps/./apps/apps.gcno:cannot open notes file
../../gcovdata/openssl-1.0.0/./apps/./apps/asn1pars.gcno:cannot open notes file
../../gcovdata/openssl-1.0.0/./apps/./apps/ca.gcno:cannot open notes file
../../gcovdata/openssl-1.0.0/./apps/./apps/ciphers.gcno:cannot open notes file
../../gcovdata/openssl-1.0.0/./apps/./apps/cms.gcno:cannot open notes file
../../gcovdata/openssl-1.0.0/./apps/./apps/crl2p7.gcno:cannot open notes file
...
I tried this too, but it did not work; I get errors:
for x in ./**/*.c; do
echo $x
gcov $x
done
$ ./run_tests.sh openssl-1.0.0
./apps/app_rand.c
File 'app_rand.c'
Lines executed:37.50% of 40
Creating 'app_rand.c.gcov'
Cannot open source file app_rand.c
./apps/apps.c
File 'apps.c'
Lines executed:33.76% of 939
Creating 'apps.c.gcov'
Cannot open source file apps.c
...
I tried a single command:
$ gcov ./apps/app_rand.c
File 'app_rand.c'
Lines executed:37.50% of 40
Creating 'app_rand.c.gcov'
Cannot open source file app_rand.c
Looks like I can only run gcov on the files in the same folder. How should I solve this? Should I cd into the directories in the loop, then move the files? Or am I doing something wrong?
I tried it in the folders with the -o option, but it did not work:
$ pwd
/home/blackcat/gcov_project/projects/openssl-1.0.0/test
$ ls bftest.*
bftest.c bftest.c.gcov bftest.gcda bftest.gcno bftest.o
$ gcov -o ~/gcov_project/gcov/ bftest.c
/home/blackcat/gcov_project/gcov/bftest.gcno:cannot open notes file
$ gcov bftest.c
File 'bftest.c'
Lines executed:47.52% of 101
Creating 'bftest.c.gcov'
File '/usr/include/x86_64-linux-gnu/bits/stdio2.h'
Lines executed:100.00% of 1
Creating 'stdio2.h.gcov'
File '/usr/include/x86_64-linux-gnu/bits/string3.h'
Lines executed:100.00% of 2
Creating 'string3.h.gcov'
Upd2:
Starting from Gilles's solution I created working code. Thanks.
In the end I put all of the files in the same directory, but I created a prefix from their paths.
### Generate and copy gcov files ###
cd "$TARGET_DIR"
mkdir -p "$OUTPUT_DIR"
CDIR=""
for x in **/*.c; do
if [ "$CDIR" != "$TARGET_DIR/${x%/*}" ]; then
CDIR="$TARGET_DIR/${x%/*}"
cd $CDIR
gcov -p *.c
SUBDIR="${x%/*}"
PREFIX="#${SUBDIR/\//\#}"
for f in *.gcov; do
if [[ $f == \#* ]] ;
then
cp $f "$OUTPUT_DIR/$f"
else
cp $f "$OUTPUT_DIR/$PREFIX#$f"
fi
done
fi
done
|
You have the commands, so put them in a script!
To run a bunch of commands on different data, put the changing data in a variable.
To run gcov and mv on all the files, there are several possible methods, including:
Run gcov on all files, then move them.
Run gcov on one file, then move its output.
Run gcov on the files in a directory, then move them.
The first approach doesn't work because gcov needs to be executed in the directory containing the source files. The third directory-based approach is in fact the most complicated of the three: the simplest method would be to run gcov on one file at a time.
In bash, you can enumerate all the C files in a directory and its subdirectories recursively with the wildcard pattern **/*.c. The ** wildcard needs to be enabled with the globstar option. To iterate over the files, use a for loop.
To change into a directory just to run one command, run cd and that command in a subshell: (cd … && gcov …).
You need one more type of shell construct: a bit of manipulation of file names to extract the directory part. The parameter expansion construct ${x%/*} expands to the value of the variable x with the shortest suffix matching the pattern /* removed. In other words, that's the directory part of the file name stored in x. This wouldn't work if x consisted only of a file name with no directory part (i.e. foo as opposed to bar/foo); it so happens that there's no .c file at the root of the OpenSSL source tree, but a simple way to be safe is to make sure the file name starts with ./, which designates the current directory.
Invoke this script at the root of the OpenSSL source tree, after running ./config with your desired options.
#!/bin/bash
shopt -s globstar
gcov_data_dir="../../gcovdata/${PWD##*/}"
make
make tests
for x in ./**/*.c; do
mkdir -p "$gcov_data_dir/${x%/*}"
(cd "${x%/*}" && gcov "${x##*/}") &&
mv "$x.gcov" "$gcov_data_dir/${x%/*}"
done
To avoid having to move the .gcov files, an alternative approach would be to create a forest of symbolic links to the compilation directory, and run gcov in the gcovdata directory. With GNU coreutils (i.e. on non-embedded Linux or Cygwin), you can do that with cp -al.
cp -al openssl-1.0.0 gcovdata
cd gcovdata
for x in ./**/*.c; do
(cd "${x%/*}" && gcov "${x##*/}")
done
| How to create gcov files for a project in a different dir? |
1,420,103,176,000 |
I am trying to build a 3.12 Linux Kernel and I see some of the options (sub menus) are hidden and their branch (menu entry) is disabled in make menuconfig. [---- instead of being --->]
I know it is because of my system profile or their dependencies but I want to view all of them.
What helped in the previous kernels was a key-shortcut, (like Ctrl+H in file manager) that shows every option and menu. What is that shortcut?
|
Looking through the Wikipedia page for menuconfig I do not see a similar option. There's a bullet within that page that states the following:
The help information is distributed throughout the kernel source tree in the various files called Kconfig.
So one could use grep to search for whatever you want through these files. You can also use ? within menuconfig to summon the help on a given topic. The key you're looking for to show hidden menus and submenus appears to be z.
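A sketch of such a search, run from the root of a kernel source tree (the symbol name below is only an example):

```shell
# Find where a given config symbol is declared across the Kconfig files.
tree_root=.
if [ -f "$tree_root/Kconfig" ]; then
    grep -rn --include='Kconfig*' 'config DEBUG_INFO' "$tree_root" || true
else
    echo "no Kconfig at $tree_root; run this from the kernel source tree root"
fi
```

The matching Kconfig entry also shows the "depends on" lines that explain why a menu entry is disabled.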
There's also a keyboard shortcut table included in the topic.
References
README.Menuconfig
| View all disabled or hidden linux kernel options |
1,420,103,176,000 |
I ran across a weird problem:
According to this:
The kernel and modules must be moved to special locations in order to
be used,
1. make modules_install
2. make install
The first will create the /lib/modules/ directory and place
the modules there. The second make target will,
1. Move the kernel, bzImage, to /boot and rename it vmlinuz-<revision>,
2. Move the System.map to /boot,
3. Create initrd.img-<revision>
4. Copy .config to /boot, renaming it to config-<revision>
5. Modifies the boot loader configuration file /boot/grub/menu.lst
so that the new kernel is listed on the boot menu.
I configured and compiled the latest Linux kernel, 3.15, and ran make install to install the new kernel. Everything seems OK except that the .config file is not copied to /boot.
Why is the .config file under the root directory of the source tree not copied to /boot ?
PS. My running OS is fedora 20.
|
This document appears to be incorrect or long-obsolete. Looking at the source, I see only bzImage and System.map being copied. This was the case at least as far back as 2.6.12. Copying an initrd or the .config file would have to be done by a distribution's scripts.
For some reason this depends on the architecture: arm and x86 don't copy .config, but mips and tile do.
| Why is the .config file not copied to /boot after installing new kernel? |
1,420,103,176,000 |
I have Slackware installed on my computer, and I install a lot of software from source. Now I want to install ffmpeg from source just to recompile it with some more options. But I already have ffmpeg installed on my computer, so what's going to happen?
Is it going to overwrite my old install or is it going to create new files, and if so, how can I differentiate between the two installed versions?
Also, if there is a better way to recompile programs on Slack, let me know, because I'm very interested.
|
If you use the configure, make, make install routine to install software under any Linux distro, then the new version will usually overwrite the previous one. The only caveat is that if the newer version happens to change the install location or names of certain files, then you may end up with the old version or parts of the old version remaining on your computer.
For this reason, it is not advised to install programs in this way on Slackware. The recommended practice is to create a .txz or .tgz package which can be installed with the standard Slackware package installer installpkg. This also means that you can cleanly uninstall the package with removepkg or upgrade to a new version with upgradepkg. Many scripts for compiling and creating packages, including one for ffmpeg, can be found at SlackBuilds. Running the provided script with the sources in the same directory will compile and produce a .txz.
Most Slackware users make heavy use of Slackbuilds to install non-official software.
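A sketch of that workflow, assuming the ffmpeg.SlackBuild script and its source tarball sit in the current directory (the package path under /tmp is the slackbuilds.org default OUTPUT and may differ on your system):

```shell
# Sketch of the SlackBuild build/install/upgrade cycle.
build_script=ffmpeg.SlackBuild
if command -v upgradepkg >/dev/null 2>&1 && [ -f "$build_script" ]; then
    sh "$build_script"                           # compiles and writes /tmp/ffmpeg-*.txz
    upgradepkg --install-new /tmp/ffmpeg-*.txz   # install, or upgrade in place
    # removepkg ffmpeg                           # would cleanly uninstall it later
else
    echo "requires Slackware's pkgtools plus the ffmpeg SlackBuild and sources"
fi
```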
| New source install over existing one |
1,420,103,176,000 |
I want to compile and install a kernel.org kernel on a custom HDD volume, say /dev/sda5, instead of being merged with my current Ubuntu's directories.
I can find information about the configuration and compilation process all over the web, but there's no mention of how to put the kernel on a custom volume (different from the one holding the distro you're booted into at compile time). What I'm asking for is like how we can install 2 different distros on 2 different volumes on 1 HDD; now think of my custom kernel as another distro.
|
You can compile a kernel anywhere you like, including your home directory. The only time directories outside the build tree are modified is when you make one of the install* targets. So, to install a kernel you'd do the obvious:
cd $SOME_DIRECTORY
tar -xjvf linux-$VERSION.tar.bz2
cd linux-$VERSION
make mrproper menuconfig
# Configure the kernel here.
# Now build it using all your CPU threads in parallel.
make -j$(grep -c processor /proc/cpuinfo) bzImage modules
After you configure the kernel, it'll be built. At this point, you'll have a kernel binary (vmlinux) and a bootable kernel image under arch/$YOUR_ARCHITECTURE/boot/bzImage.
If you're building a monolithic kernel, you're done. Copy the uncompressed file (vmlinux) or compressed file (bzImage) to your intended volume, configure the boot manager if you need to, and off you go.
If you need to install modules, and assuming you've mounted your target volume on /mnt, you could say:
INSTALL_MOD_PATH=/mnt \
INSTALL_PATH=/mnt/boot \
make modules_install
This will copy the kernel image to /mnt/boot and the modules to /mnt/lib/modules/$VERSION.
Please note, I'm oversimplifying this. If you need help building the kernel manually, you should read some of the documents in the kernel source tree's Documentation/ subdirectory. The README file also tells you how to build and install it in detail.
Booting the kernel is a different story, though. Most modern distributions use an initial RAMdisk image which contains a ton of drivers for the hardware needed to bring up the rest of the kernel (block devices, filesystems, networking, etc). This process won't make this image. Depending on what you need to do (what do you need to do?), you can use an existing one or make a new one using your distribution's toolchain. You should check the documentation on update-initramfs.
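Note that update-initramfs looks at the running system's /lib/modules, so for a target mounted at /mnt you would run it chrooted into /mnt. A minimal sketch for the simple same-system case on a Debian/Ubuntu host ("3.15.0-custom" is a hypothetical kernel release string; the real command needs root):

```shell
# Sketch: generate an initrd for an already installed module tree.
version=3.15.0-custom
if [ -d "/lib/modules/$version" ] && command -v update-initramfs >/dev/null 2>&1; then
    update-initramfs -c -k "$version"   # writes /boot/initrd.img-$version
else
    echo "modules for $version are not installed here; skipping"
fi
```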
There are other issues too, though. Using the standard toolchain you can't compile a kernel for a different architecture or sub-architecture. Note that in some cases, even kernels compiled on a particular type of x86 box won't work on certain other types of x86 boxes. It all depends on the combination of sub-architectures and the kernel config. Compiling across architectures (e.g. building an ARM kernel on an x86 machine) is altogether out of the question unless you install an appropriate cross-compilation toolchain.
If you're trying to rescue another installation or computer, however, a rescue disk might come in handier than building a custom kernel like that.
One more thing: if you're trying to build a kernel for another computer which boots, is the same architecture as the one you're compiling on, and runs a Debian or Debian-like OS (Ubuntu counts), you could install the kernel-package package (sudo aptitude install kernel-package). Then unpack the kernel, cd to the root of the source tree, and say:
CONCURRENCY_LEVEL=$(grep -c processor /proc/cpuinfo) \
sudo make-kpkg --initrd binary-arch
This will apply necessary patches, configure the kernel, build it and package it as a .deb package (a few packages, actually). All you need to do is install it on your target system and you're done.
| Compiling and installing a kernel.org kernel to a custom volume on disk |
1,420,103,176,000 |
The instructions to build bitcoind are vague enough, but I can't work out what to do on OpenBSD. I've installed boost, and the system has berkeley db 4.6, OpenSSL, etc already in the base install of OpenBSD.
# gmake -f makefile.unix
Building LevelDB ...
gmake[1]: Entering directory `/root/bitcoin/bitcoin-0.8.3/src/leveldb'
g++ -I. -I./include -fno-builtin-memcmp -D_REENTRANT -DOS_OPENBSD -DLEVELDB_PLATFORM_POSIX -O2 -pthread -Wall -Wextra -Wformat -Wformat-security -Wno-unused-parameter -g -DBOOST_SPIRIT_THREADSAFE -D_FILE_OFFSET_BITS=64 -I/root/bitcoin/bitcoin-0.8.3/src -I/root/bitcoin/bitcoin-0.8.3/src/obj -I/usr/local/include/boost -DUSE_UPNP=0 -DUSE_IPV6=1 -I/root/bitcoin/bitcoin-0.8.3/src/leveldb/include -I/root/bitcoin/bitcoin-0.8.3/src/leveldb/helpers -DHAVE_BUILD_INFO -fno-stack-protector -fstack-protector-all -Wstack-protector -D_FORTIFY_SOURCE=2 -c db/builder.cc -o db/builder.o
In file included from ./port/port.h:14,
from ./db/filename.h:14,
from db/builder.cc:7:
./port/port_posix.h:80: error: '__BYTE_ORDER' was not declared in this scope
./port/port_posix.h:80: error: '__LITTLE_ENDIAN' was not declared in this scope
gmake[1]: *** [db/builder.o] Error 1
gmake[1]: Leaving directory `/root/bitcoin/bitcoin-0.8.3/src/leveldb'
gmake: *** [leveldb/libleveldb.a] Error 2
Since writing this, I found I didn't have BDB 4.8, so I got it and compiled it into /usr/local/BerkeleyDB.4.8.
I found some build instructions on github: https://github.com/bitcoin/bitcoin/pull/1815
I modified these as follows to suit where I installed BDB...
BOOST_INCLUDE_PATH=/usr/local/include \
BOOST_LIB_PATH=/usr/local/lib \
BDB_INCLUDE_PATH=/usr/local/BerkeleyDB.4.8/include \
BDB_LIB_PATH=/usr/local/BerkeleyDB.4.8/lib \
BOOST_LIB_SUFFIX=-mt \
gmake -f makefile.unix -j8 USE_UPNP= bitcoind test_bitcoin
Now the build fails; it looks Boost-related. There are hundreds of errors until some final ones...
db.cpp:510: error: 'boost' has not been declared
db.cpp:510: error: expected `;' before 'pathTmp'
db.cpp:511: error: 'pathTmp' was not declared in this scope
db.cpp:527: error: 'pathAddr' was not declared in this scope
db.cpp:527: error: 'RenameOver' cannot be used as a function
db.cpp: In member function 'bool CAddrDB::Read(CAddrMan&)':
db.cpp:536: error: 'pathAddr' was not declared in this scope
gmake: *** [obj/alert.o] Error 1
gmake: *** [obj/db.o] Error 1
gmake: *** [obj/checkpoints.o] Error 1
|
Use the bitcoin port from OpenBSD-WIP: https://github.com/jasperla/openbsd-wip/tree/master/net/bitcoin.
| Compiling bitcoind on OpenBSD |
1,420,103,176,000 |
I am trying to compile User Mode Linux on a 64 bit machine with defconfig and getting the following error.
arch/x86/um/user-offsets.c:1: sorry, unimplemented: code model "large" not supported yet
Any idea what this means?
|
From my shaky understanding: the compilation script is passing the -mcmodel=large option to GCC. This option is only supported since GCC 4.3 (or perhaps 4.4). You seem to have an older version where the option is recognized on the command line but not implemented under the hood.
This option produces an executable running in the large model, which consumes more memory for pointers but doesn't put any constraints on the address and size of code and data sections.
This allows the kernel to run at any virtual address. I think this is necessary for User-mode Linux because it has to coexist with the real kernel while itself pretending to be a kernel to user→kernel ABIs.
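You can probe whether your installed compiler actually implements a given code model with a one-line compile test. A sketch; on a GCC old enough to hit the question's error, the probe reports "unsupported":

```shell
# Probe whether the installed gcc implements -mcmodel=large.
workdir=$(mktemp -d)
echo 'int main(void) { return 0; }' > "$workdir/t.c"
if gcc -mcmodel=large -c "$workdir/t.c" -o "$workdir/t.o" 2>"$workdir/err"; then
    result=supported
else
    # old gcc prints: sorry, unimplemented: code model "large" not supported yet
    result=unsupported
fi
echo "-mcmodel=large: $result"
```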
| User Mode Linux compile fails |
1,420,103,176,000 |
Just out of curiosity, I am interested in compiling the Linux kernel with both the Clang and zapcc compilers, one at a time.
I can't find a guide to follow; only GCC seems to get used to compile the Linux kernel.
How do I compile the Linux kernel with other compilers?
|
The kernel build allows you to specify the tools you want to use; for example, to specify the C compiler, set the CC and HOSTCC variables:
make CC=clang HOSTCC=clang
The build is only expected to succeed with GCC, but there are people interested in using Clang instead, and it is known to work in some circumstances (some Android kernels are built with Clang).
| How do I compile the Linux kernel with Clang? [closed] |
1,420,103,176,000 |
Take a look at the following command line:
gcc -o hello -Wall -D_BSD_SOURCE hello-world.c
Now, is there a way to learn about these options by doing some processing on the 'hello' executable?
|
Sadly, no. But if you think about it before you create a binary, there are some ways. Here's another. With recent gcc, you can use the -frecord-gcc-switches option, which will add a section to the ELF file with the description you are seeking.
$ gcc -frecord-gcc-switches -o hello -Wall -D_BSD_SOURCE hello-world.c
$ readelf -p .GCC.command.line hello
String dump of section '.GCC.command.line':
[ 0] -D _BSD_SOURCE
[ f] hello-world.c
[ 1d] -mtune=generic
[ 2c] -march=x86-64
[ 3a] -Wall
[ 40] -frecord-gcc-switches
As you can see it shows you all used options, not just those you provided explicitly.
| Is there a way to know which options were used at compile time? |
1,420,103,176,000 |
I am connected to a Linux system with SSH at my college. I found that ctorrent is a console alternative to BitTorrent. I have downloaded the tar.gz source, but compiling/installing it needs sudo access.
Is there a way to install the program without sudo access?
I don't know a lot about Linux and cannot make this answer work with ctorrent.
|
You can install it locally in your home directory. Usually this can be done by specifying the --prefix parameter for the configure script.
For example,
./configure --prefix=$HOME
So, when you compile sources configured in this way and then call
make install
the binaries will be installed into your $HOME/bin.
Also, you should adjust the PATH variable.
You can do this in $HOME/.bashrc in the following way:
export PATH=$HOME/bin:$PATH
Anyway, if your sources don't have the usual build system, you can just compile the program, manually put it in $HOME/bin, and adjust the PATH variable (to make it available without specifying the full path to the binary).
| Compile a program without sudo access |
1,420,103,176,000 |
I am interested in using NetBSD as the operating system on my server. I have not used a system where security updates are performed from source, but have read enough in the guide to feel comfortable trying it. However, I do not know how long this operation is likely to take.
Given a fairly modest server of say 1 processor core and 0.5 to 1.0 GB RAM, how long might it be expected to take to build the userland and kernel of an x86_64 system, following the directions of Chapter 33. Updating an existing system from sources in the guide?
Also, how much local disk space does this operation require? I have not seen mention of that in the guide.
|
I would suggest that any decent, even not-so-modern, true x86_64 server should be able to do a full build in a couple of hours or maybe less, including xsrc.
My NetBSD-current build server is a Xen domU with 8GB RAM and 8 VCPUs running on a Dell PE2950 8-core (Xeon E5440 @2.83GHz) with 32GB RAM and with a decently fast set of SAS disks on the integrated PERC 6/i controller (with the build output going to a RAID-0 partition). That machine only cost me about $650[us], used of course. It can do an NetBSD-5/i386 build of everything to final ISOs, with everything static-linked (i.e. requiring a lot more disk IO and linker memory than a dynamic-linked build), from NFS-mounted sources on another domU on the same server, in less than 2 hours (with -j12). A kernel build (amd64 GENERIC) after a reboot (nothing cached) takes under 5min (with -j12).
At the moment my /build partition has 102GB used and contains objects, binaries, and ISOs for three -current builds (amd64, i386, evbarm) and two 5.x builds (amd64 and i386). Keep in mind that's all separately static-linked binaries -- dynamic-linked builds are much smaller. A static-linked full install (i.e. with xsrc and comp and everything else) takes about 6.6 GB.
| How long might it take to build NetBSD userland and kernel? |
1,420,103,176,000 |
I need to compile a package but the ./configure command does not work?
I'm getting the following error:
-bash ./configure : No such file or directory
Where is that script?
I used the locate command but it did not return anything.
|
locate will not work unless you have an up-to-date database.
Try find . -type f -name configure instead, or issue an updatedb command first, then do the locate (make sure the current path isn't excluded)
But first, you should always check the documentation - maybe the way to compile it does not use the configure mechanism in the first place.
| "./configure" command does not work |
1,420,103,176,000 |
I am on Linux Mint 18 Cinnamon 64-bit.
I was about to compile file-roller known as Archive manager for GNOME from source.
But when running:
./autogen.sh
There is a following M4 macro missing:
Checking for required M4 macros...
yelp.m4 not found
***Error***: some autoconf macros required to build Package
were not found in your aclocal path, or some forbidden
macros were found. Perhaps you need to adjust your
ACLOCAL_PATH?
|
You can use apt-file for this, without necessarily knowing where the M4 files go:
apt-file search yelp.m4
will tell you where the particular file should be located even without having the package (yelp-tools) installed.
yelp-tools: /usr/share/aclocal/yelp.m4
This tells you that installing yelp-tools should allow the build to proceed further.
Alternatively, you can check the build-dependencies of file-roller in Debian: that lists yelp-tools too, along with all the other packages you’ll need.
On Linux Mint 18 apt-file isn’t pre-installed, but it’s easy to install:
sudo apt-get install apt-file
After installation you will need to update its database with:
sudo apt-file update
| Checking for required M4 macros... yelp.m4 not found |
1,420,103,176,000 |
g++ -Wall -I/usr/local/include/thrift *.cpp -lthrift -o something
This is from the Apache Thrift website.
Also is the -I/usr supposed to be -I /usr?
|
Here is a breakdown of the command. First the original command, for reference
g++ -Wall -I/usr/local/include/thrift *.cpp -lthrift -o something
Now, for the breakdown.
g++
This is the actual command, g++: the program that is being executed. Here is its description from the man page:
gcc - GNU project C and C++ compiler
This is a compiler for programs written in C++ and C. It takes C or C++ code and turns it into a program, basically.
-Wall
This part makes it display all warnings when compiling. (Warn All)
-I/usr/local/include/thrift
This part tells g++ to use /usr/local/include/thrift as an additional directory to search for header files. As for the question of whether to put a space after the -I: you can do it either way. The way the options are parsed (options are the things in a command after - signs; -Wall and -I are options) allows a space or no space, so it comes down to personal preference.
*.cpp
This part passes every .cpp file in the current directory to the g++ command.
-lthrift
This can also be -l thrift. It tells g++ to search the thrift library when linking.
-o something
This tells it that when everything is compiled to place the executable in the file something.
| What does this Linux command do? |
1,420,103,176,000 |
Historically speaking, I know that when I run the cc command or gcc, my output generally compiles to a.out unless I have a makefile or use a particular flag on the compiler. But why a.out? Why not c.out or c.run or any of a million other possibilities?
|
It is a historical artefact, in other words a legacy throwback. Historically, a.out stands for "assembler output".
a.out is now only the name of the file, but it used to be the executable file format as well.
The a.out executable format is rarely supported nowadays; the ELF format has far wider use, but we still keep the old name for the default output of the C compiler.
| Why do programs always compile to a.out? why not p.out or c.out or g.prog? |
1,420,103,176,000 |
I have to demonstrate a practical where I have to make my own module, add this module to a kernel's source and implement the system call. I'm using 3.16 kernel on Ubuntu but it takes around 2 hours to install the kernel from source.
Is it possible to remove some parts of the kernel (like unnecessary drivers, etc.) from the source to save time, as I'm not going to use this newly installed kernel for regular use? If yes, how?
|
As mentioned in comments, you should be building using something like make -j4. Use a number equal or slightly higher than the number of CPU cores you have.
make localmodconfig
The following instructions apply to building a kernel from upstream. Personally I find that simplest. I don't know how to obtain a tree with the ubuntu patches applied, ready to build like this.
(1) Theoretically, the way to build kernels in a more reasonable
timespan for testing is supposed to be:
cp /boot/config-`uname -r` .config
You don't need to enable anything newer, so accept the defaults for
new options (the only problem is that this breaks if options were renamed):
make oldnoconfig
Now disable all the modules not currently loaded. (Make
sure you have all the USB devices you need plugged in...):
make localmodconfig
It worked for me recently, so it might be useful. It worked less well
the previous time I tried it.
I think I got it from about one hour down to ten minutes. Even after make localmodconfig it's still building crazy amounts of stuff I don't need. OTOH actually finding and disabling that stuff (e.g. in make xconfig) takes a while too (and even longer if you mistakenly disable something you do need).
I guess it's worth knowing it exists, it's just not guaranteed to make you happy.
(2) I don't think it should take two hours to build every modification to your "module". (It actually needs to be a builtin if you're implementing a new system call). make will just recompile your modified files and integrate it into a kernel binary. So in case getting the Kconfig right is too much trouble, then maybe an initial two-hour build is not too bad.
You might be having this problem if you are building with a distribution kernel source package. (You can switch to manual builds, or you might be able to trick the distro source package into using ccache). Or, your modifications might be modifying a header file which is unfortunately included by many many source files.
Even so, it might be useful to make custom Kconfigs, e.g. much smaller Kconfigs, if you want to port to different kernel versions, do git bisect, test different build options, etc.
| Saving time while compiling Kernel |
1,420,103,176,000 |
I am programming a code to simulate a membrane which will run on computation cluster (single node). I want to optimize the code for that machine. I have used the -optimize, -O3 and -march=core2.
How can I tell if I can improve the -march choice, and is there anything else I can do to improve this?
thanks
|
Use -mtune. -march is used to determine the allowed instruction set, whereas -mtune is to be used to tune performance of the code (as always, see man gcc). Depending on the precise CPU type, you might also consider values other than core2. And if you use a recent GCC version (at least 4.4, I think), you might best use native instead.
| Optimizing c++ compiliation for a specific machine |
1,420,103,176,000 |
When trying to install python 3.7 on Ubuntu 18.04
I get error messages like:
zipimport.ZipImportError: can't decompress data; zlib not available
or
ModuleNotFoundError: No module named '_ctypes'
or
~/.pyenv/plugins/python-build/bin/python-build: line 775: make: command not found
or
configure: error: no acceptable C compiler found in $PATH
|
From https://bugs.python.org/issue31652#msg321260
sudo apt-get install build-essential libsqlite3-dev sqlite3 bzip2 libbz2-dev zlib1g-dev libssl-dev openssl libgdbm-dev libgdbm-compat-dev liblzma-dev libreadline-dev libncursesw5-dev libffi-dev uuid-dev
| What libraries are needed to install Python 3.7 on Ubuntu 18.04 |
1,366,812,867,000 |
I am working on building Python 2.7.4 on CentOS 6.4. When running the make test step, the test_gdb step fails, and I would like to get some more info as to why.
Build commands I'm running:
./configure --prefix=/usr/local/python-2.7.4 --enable-ipv6 --enable-unicode=ucs4 --enable-shared
make
make test
Output of make test:
... test test_gdb failed -- multiple errors occurred; run in verbose
mode for details ...
So basically, I'm trying to figure out how to run the test_gdb test separately and in verbose mode. Sounds like I should use regrtest.py, but I seem to get invalid syntax with the various options I've tried. Any ideas?
banjer@somehost:/usr/local/src/Python-2.7.4> python Lib/test/regrtest.py -v test_gdb
File "Lib/test/regrtest.py", line 679
'test_support',
^
SyntaxError: invalid syntax
|
The actual lines around 679 in Lib/test/regrtest.py are:
NOTTESTS = {
'test_support',
'test_future1',
'test_future2',
}
This defines a mutable set and is syntax back-ported from 3.1 to 2.7. This syntax is not available in 2.6 or earlier version of python.
That your test raises a syntax error is probably because your default python is pre-2.7. If you would have executed:
./python Lib/test/regrtest.py -v test_gdb
^-- this is the difference
in that directory, you would have been testing the python executable you just compiled and not the default one provided in your path. Using that executable you are unlikely to get this particular error (but maybe others that are really gdb related).
| Running 'make test' on an individual module for Python 2.7.4 build |
1,366,812,867,000 |
I try to install subversion from source, but on ./configure receive:
configure: error: no suitable apr found
I have download apr source and installed:
libraries have been installed in:
/usr/local/apr/lib
So I returned to configure of subversion but receive the same error.
What I should do?
Thanks.
|
When configuring Subversion, try
./configure --with-apr=/usr/local/apr/
That might not be exactly the right path. Try finding apr-config and giving the path to that. I'm guessing it might be:
./configure --with-apr=/usr/local/apr/bin/apr-config
Try ./configure --help to see what your options are. There are a lot of them.
| How to make apr available for subversion install? |
1,366,812,867,000 |
A few questions about FreeBSD
Where/how should I obtain the source code (ex. through terminal, Download off website)
How (on ubuntu) should I build it?
Before I build it can I customize it (in other words is possible)?
|
You can check the source for FreeBSD out of version control here. The developer's handbook answers a lot of questions about developing FreeBSD. Why aren't you building it from FreeBSD itself? It seems kind of... odd to be building from Ubuntu.
| FreeBSD source and how to build |
1,366,812,867,000 |
Many times while compiling some binary or library, I feel the need for much more verbose output of the build/compilation. Is there a flag or something I can write in the CMakeLists.txt so I get more verbose output? FWIW, I'm using cmake 3.16.3 on Debian testing which will eventually become Debian bullseye. Any help would be appreciated.
|
When running cmake itself, there are a couple of options you can use to generate more detailed output:
cmake --debug-output
and
cmake --trace
(the latter with even more detail than the former).
When running the build, you can ask for a verbose build by running
make VERBOSE=1
or at the cmake stage, define CMAKE_VERBOSE_MAKEFILE:
cmake -DCMAKE_VERBOSE_MAKEFILE=ON
(which is what debhelper does by default when using cmake).
| Is there a way to make cmakelists.txt more verbose for compilation |
1,366,812,867,000 |
I'm having trouble linking the Intel MKL libraries to use in building Julia with MKL support. I've had this problem with other projects as well, but here I'll focus on Julia. I have MKL installed in /opt/intel. I've tried:
Running /opt/intel/bin/compilervars.sh intel64
Running /opt/intel/mkl/bin/mklvars.sh intel64
Adding the library (libmkl_rt.so) to LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/mkl/lib/intel64_lin
Adding a file called "mkl.conf" within /etc/ld.so.conf.d with the contents /opt/intel/compilers_and_libraries_2019/linux/mkl/lib/intel64_lin
After the last two I ran sudo ldconfig, but there hasn't been any change. How can I get Make to recognize this library?
|
LD_LIBRARY_PATH and files in /etc/ld.so.conf.d configure the runtime linker, not the linker used during builds.
To build Julia with MKL, you should
add
USE_INTEL_MKL = 1
to Make.user
run
source /opt/intel/bin/compilervars.sh intel64
and build Julia from the same shell (so that the variables set by compilervars are taken into account).
| ld linker ignores LD_LIBRARY_PATH |
1,366,812,867,000 |
I am compiling a model using make. The model has a Makefile that connects the source code with dependent libraries via flags that look like -L/lib1 -L/lib2. But when I try to run that model, it fails unless I also ensure the environmental variable
export LD_LIBRARY_PATH=/lib1:/lib2
and points to the exact same libraries. This seems redundant to me.
What could be going on under the hood here? Why do I effectively have to specify the location of the libraries before compilation and before execution?
This might be a silly question; I'm not very experienced compiling to machine code, usually just use scripting languages.
|
Although everyone uses "compilation" in the colloquial sense of turning source code into an executable, it's technically a single step in a rather long pipeline:
The input file is run through the preprocessor resulting in a single translation unit.
The output of the preprocessor is compiled to assembly.
The assembler takes that as input and outputs an object file.
The linker stitches together the object files to produce an executable.
[ To be pedantic there's no requirement the steps be separate and modern compilers typically combine them for efficiency. ]
Our concern is the linking step, which combines your code with standard system libraries. The linker copies objects from static libraries directly into the executable. For shared libraries, however, it only provides a reference to the library.
Shared libraries have a lot of advantages. You can update them without recompiling programs and they use less memory because programs can share common code. They also have the obvious drawback that the code isn't in the executable.
The solution to that is the dynamic loader, which is responsible for resolving all shared library references at runtime. The loader is run automatically; instructions for doing so are one thing the linker includes in the executable. Of course this presupposes that the loader can find the libraries.
System libraries are in standard directories, which is straightforward. When that isn't the case the loader will search LD_LIBRARY_PATH. Why doesn't the linker just put the path in the executable? Because then you'd be unable to move or change the library.
In practice you couldn't really move the executable either, since the library is outside the system search path. If it only ran when the library was located in ~luke/lib then you can't give it to joe unless he can read your files. Sucks for joe if you move onto a new job.
Just FYI it would suck in a myriad of other ways as well. It'd make debugging an everlasting nightmare if you could only specify the library location at compile time, amongst other things.
| Why do I have to set LD_LIBRARY_PATH before running a program, even though I already linked the library locations in the compile stage? [duplicate] |
1,366,812,867,000 |
I'm doing some heavy number crunching on one system and I'd like to compile (and fine-tune) a custom GMP 6.1.0 for the user launching the number crunching computation. Previously I had a Debian wheezy (7.6) system on which I installed a custom GMP lib while being root, modifying things left and right in the filesystem (because I didn't know any better). It ended up working: my custom GMP lib was crunching numbers about 15% faster than the stock GMP.
Now I installed a new Debian (Jessie 8.3) on that computer with the "stock" GMP (the one that comes with Debian Jessie):
# gcc --version
gcc (Debian 4.9.2-10) 4.9.2
# apt-get install libgmp10
# apt-get install libgmp-dev
Which is apparently GMP 6.0.0.
I'm compiling my number crunching program doing:
$ gcc crunch.c -o crunch.o -L/gmp_install/lib -lgmp
(I know I could probably gain some by messing with some parameters passed to GCC, but the big problem here is the "slowness" of the non-custom GMP).
I then invoke ./crunch.o and it works but it is 15% slower than my custom build GMP on my old system (using the exact same gcc compilation command pasted above on the exact same computer).
I'd now like to compile a custom GMP 6.1.0 again, but only accessible for the user running the heavy computation.
In other words: I'd now like to install a custom GMP cleanly instead of messing (while being root) with the entire filesystem.
But I don't understand what -L/gmp_install/lib refers to nor what -lgmp does either.
I take it the first steps I need to do are:
go to https://ftp.gnu.org/gnu/gmp/
download gmp-6.1.0.tar.bz2
untar
???
So how can I compile a custom GMP for one (non root) user account and how would I go about then compiling my crunch.c program?
|
You could use the following steps as normal user
tar xvjf gmp-6.1.0.tar.bz2
cd gmp-6.1.0
./configure --prefix=${HOME}/gmp/6.1.0
make
make install
This will install gmp in ~/gmp/6.1.0. Now if you want to use this version to compile software against or use it at runtime, you have to set some environment variables (check whether make install placed the libraries in ~/gmp/6.1.0/lib or ~/gmp/6.1.0/lib64 and adjust the paths below accordingly):
GMP_DIR="${HOME}/gmp/6.1.0"
export LD_LIBRARY_PATH=${GMP_DIR}/lib64:$LD_LIBRARY_PATH
export LIBRARY_PATH=${GMP_DIR}/lib64:$LIBRARY_PATH
export CPATH=${GMP_DIR}/include:$CPATH
You could put that into your ~/.bashrc or in a separate file you source just before you want to use it, or write a wrapper script including your binary stuff. Other people like to use environment-modules for this kind of tasks.
The -lgmp argument tells your linker to link against the shared library libgmp.so, and -L/gmp_install/lib means to search for libraries in /gmp_install/lib as well as in the well-known paths (/lib, /lib64, /usr/lib, /usr/lib64, ...).
The environment variables are used as follows:
LIBRARY_PATH should provide the same as the -L switch
CPATH provides an additional search path for the header files
LD_LIBRARY_PATH is needed for the runtime
| How to install a custom GMP lib for just one user? |
1,366,812,867,000 |
Of course, the first question is: why I'm doing this. Just for fun! I'm learning more about Linux kernels and I have a virtual machine that I can replace in 15 minutes.
Getting to business, I don't know how to do this, so I resorted to editing the makefile (trying to learn). So I started with the makefile in the path ubuntu-raring/Makefile, which is the main make file; it can be found under this link:
http://pastebin.com/ms2WpQi7
And there I changed every gcc to icc, and every g++ to icpc, and every -O2 to -O3. The result is the following:
http://pastebin.com/cSwTYJ9C
I followed the instructions from this site, too:
https://help.ubuntu.com/community/Kernel/Compile
But eventually, I'm getting weird errors that seem to be caused by using gcc/g++ rather than icc/icpc. For example, I got an error in the file ubuntu-raring/include/linux/compiler-gcc.h that some macros are already defined, while this file shouldn't be included in the first place! The macro that includes it is in the file ubuntu-raring/include/linux/compiler.h, and looks like:
#ifdef __GNUC__
#include <linux/compiler-gcc.h>
#endif
/* Intel compiler defines __GNUC__. So we will overwrite implementations
* coming from above header files here
*/
#ifdef __INTEL_COMPILER
# include <linux/compiler-intel.h>
#endif
And while I don't understand the comment written above the Intel header (sounds weird... why would you define implementations then overwrite them? Never done that in C++!), removing the include of the gcc header manually solved the problem, but other problems came up, and I have no idea whether they're related.
So now I'm confused! What did I do wrong? And should changing every gcc and g++ in the Makefile be sufficient to use a different compiler? Or are there other things to be changed that I overlooked?
Thank you for any efforts.
|
First learn to walk, then learn to fly.
If you want to learn, read. Have you read this instruction manual for building the kernel with the Intel C compiler? It's a rhetorical question, because this manual uses a different approach to choosing icc over gcc.
You are doing three things at once:
fiddle with some adopted and patched kernel to fit into the Ubuntu world (which is gcc)
Up the optimization from -O2 to -O3.
change the compiler
Start out with a vanilla Linux kernel from kernel.org. Keep everything standard and figure out how to build a kernel that works for your computer. Build a kernel that has only the drivers your computer needs, nothing more. Once you can compile and boot into your own kernel, you can start changing the build environment.
Going from -O2 to -O3 will probably never work. -O3 is like opening Pandora's box. If enabling -O3 was that easy, it would probably be the default!
| Compile the Ubuntu “Raring” Kernel with the Intel Compiler |
1,366,812,867,000 |
I'm trying to use
pip install mysql-python
inside a virtualenv container and am getting the error
building '_mysql' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i686 -mtune=atom -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i686 -mtune=atom -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Dversion_info=(1,2,4,'final',1) -D__version__=1.2.4 -I/usr/include/mysql -I/usr/include/python2.7 -c _mysql.c -o build/temp.linux-x86_64-2.7/_mysql.o -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv -fPIC -fPIC -g -static-libgcc -fno-omit-frame-pointer -fno-strict-aliasing -DMY_PTHREAD_FASTMUTEX=1
_mysql.c:1:0: error: CPU you selected does not support x86-64 instruction set
error: command 'gcc' failed with exit status 1
Why is gcc trying to use -march=i686 when I'm on a 64-bit system and using a 64-bit version of Python?
|
You can set your architecture manually by setting the CFLAGS environmental variable.
CFLAGS='-march=x86-64' pip install mysql-python
This variable's contents are appended to gcc's argument list.
| Pip install - CPU you selected does not support x86-64 instruction set |
1,366,812,867,000 |
I have installed ncurses package from source, and now I have
$HOME/local/include/ncurses/curses.h
$HOME/local/include/ncurses/ncurses.h
on my filesystem. I have also set up the search paths so that
$ echo $C_INCLUDE_PATH
$HOME/local/include:
$ echo $CPLUS_INCLUDE_PATH
$HOME/local/include:
(I have edited the output of echo to replace the home path with $HOME)
however, when i ./configure another package i get
checking ncurses.h usability... no
checking ncurses.h presence... no
What's the problem? Why can't the system detect the curses installation?
|
configure scripts produce config.log (in the same folder) files which contain all the details on the tests it ran. They're not particularly easy to read, but open it up and search for "checking ncurses.h usability". Look at what went wrong with the small test program it tried to compile.
My guess is, it doesn't care about $C_INCLUDE_PATH and you'll need to pass it to the build system in a different matter. configure options (eg. --includedir=$HOME/local/include) and $CFLAGS + $CXXFLAGS + $CPPFLAGS (adding -I$HOME/local/include) come to mind.
| ncurses.h is not found, even though it is on the search path |
1,366,812,867,000 |
I can't get the pg gem to install. I have tried --with-pg_config and it didn't work.
Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
/usr/local/rvm/rubies/ruby-1.9.3-p194/bin/ruby extconf.rb
checking for pg_config... no
No pg_config... trying anyway. If building fails, please try again with
--with-pg-config=/path/to/pg_config
checking for libpq-fe.h... no
Can't find the 'libpq-fe.h header
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers. Check the mkmf.log file for more
details. You may need configuration options.
|
Can't find the 'libpq-fe.h header
If you haven't done it already, install and initialize apt-file. This tool tells you what package contains a file with the given name.
sudo apt-get install apt-file
apt-file update
Then run apt-file search libpq-fe.h to find out what package contains this file, and install the package in question. (It's libpq-dev.)
You can also search for this file name in some package installation GUIs, or online.
| Can't install pg gem |
1,366,812,867,000 |
I tried compiling a kernel from sources that I got from kernel.org (mainline) with make allyesconfig and make allmodconfig, but both builds resulted in a kernel that won't boot.
I was thinking that by compiling everything, it should work on close to any hardware. What am I doing wrong?
And how do I compile a working kernel?
|
One thing you can do is boot a working kernel, run lsmod, and make sure that all the modules listed are turned on in your config (either built-in or as modules).
It's easiest to start with a working config, and then tweak it. If you're lucky, your distribution ships the config file along with the kernel. For example, in Ubuntu you'll find it in /boot/config-version. Copy that file into your new kernel directory and name it .config. If it's for an older kernel, you can try make oldconfig to be asked only about new options. In general, accept the default answer for everything unless you know what it is.
| How to compile a decent kernel from kernel.org? |
1,366,812,867,000 |
I need/want to have the whole build log when compiling a tool. The tool uses autotools.
I tried the most obvious way, $ make > make.txt, but that just gave the very end of the build to make.txt when I wanted the whole build log. Is there a way to do it?
|
You need to run make as follow:
make 2>&1 | tee make.txt
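The 2>&1 part is what makes this capture everything: without it the pipe only carries stdout, and compiler errors written to stderr never reach the file. A quick demonstration with a command that writes to both streams:

```shell
# Redirect stderr into stdout *before* the pipe so tee sees both.
cd "$(mktemp -d)"
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | tee build.log
grep -c "^to " build.log    # both lines captured: prints 2
```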
| Is there a way to have a build log when running 'make'? |
1,366,812,867,000 |
I'm trying my hand at creating a very minimal custom Busybox/Linux distro, a task that is admittedly above my head, but I figured I'd give it a shot. My issue is that whenever I try to run a C program that is not Busybox or a Busybox utility, ash complains and tells me that the file is not found. I mounted the partition from my Arch system, installed GNU binutils and uClibc; no dice. I also wrote the simplest C program I could think of with no dependencies on any libraries:
int main(int argc, char *argv[])
{
return 0;
}
I compiled, ran on Arch, still gave me "file not found" on my Busybox system, although it is shown when I run ls. To address the obvious, yes, I ran it from the same directory as the program and typed ./ before the file name.
|
My guess is that you don't have the correct dynamic linker on the Busybox system.
On your Arch system do this:
ldd ./simplestprogram
I imagine ldd will give you output similar to this:
linux-vdso.so.1 => (0x00007fff9b34f000)
libc.so.6 => /lib64/libc.so.6 (0x0000003b19e00000
/lib64/ld-linux-x86-64.so.2 (0x0000003b19a00000)
That last line, /lib64/ld-linux-x86-64.so.2 is the dynamic linker. I bet that isn't present on your Busybox system.
I compiled a "hello, world" program on my Arch laptop, used vim in binary mode to change /lib64/ld-linux-x86-64.so.2 to /lib65/ld-linux-x86-64.so.2, saved it, and tried to execute it. I got the same "file not found" message you got.
You may not even have the libc.so file on your Busybox system. It's possible that just copying the libc.so and dynamic linker files from Arch to Busybox systems (preserving directories!) might work, but it might not. I'm just not sure.
One thing to try: install musl on your Arch machine. Compile your simple program with musl-gcc -static -o simple simple.c, move that executable, which has no dynamically-linked anything, and try it on the Busybox system.
| Minimal Busybox/Linux Installation - Won't Run C |
1,366,812,867,000 |
My teacher wants to be able to compile our programs without having to type ./.
For example we would write:
g++ some_program.cpp -o some_program
some_program
He says to type:
cp .bash_profile .bash_profile.ORIG
Then load .bash_profile into text editor
Then go to the end of the file PATH=$Path: and add a period
export
restart
My questions are:
Do I just type cp .bash_profile .bash_profile.ORIG into the terminal right after I open it?
How do I load it into my text editor?
And how do I export it?
|
1) Do I just type cp .bash_profile .bash_profile.ORIG into the
terminal right after I open it?
Yes. You are essentially making a backup copy of your current ~/.bash_profile (assuming there is one).
2) How do I load it into my text editor?
It depends on what text editor you intend to use. I do this:
$ emacs ~/.bash_profile
but you could also do:
$ gedit ~/.bash_profile
There are heaps of text editors, of course, including nano or pico for in-terminal editing. So really just take your pick. If you don't have a favourite editor, nano is a good starter.
Note:
The line should not be PATH=$Path:
It should be:
PATH="$PATH:."
As Theophrastus says in the comment, this is a horrible security practice and generally should not be done. I think it's better practice to designate a directory where you code, and a directory where executables get stashed for testing and that is in $PATH. But if this is a school assignment, I guess you should do as your teacher says.
3) And how do I export it?
Add this line after the $PATH line:
export PATH
Note: your teacher is wrong. You do not need to restart. How would that be possible if you were maintaining a server? You'd kick everyone off and your users would be furious! All you need to do to load in new settings is:
$ source ~/.bash_profile
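What the "." entry changes can be seen in an ordinary shell session (throwaway script, made up for the demo):

```shell
# With "." on PATH the shell finds executables in the current
# directory without the ./ prefix (discouraged for security reasons).
cd "$(mktemp -d)"
printf '#!/bin/sh\necho ran\n' > some_program
chmod +x some_program
export PATH="$PATH:."
some_program    # found via the "." entry
```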
Good luck!
| How can I run my program without having to type ./? |
1,366,812,867,000 |
I tried to compile the program Testing Li’s criterion in Ubuntu. However, when I do gcc demo.c, I get the following output:
demo.c:2:19: fatal error: fmpcb.h: No such file or directory
#include "fmpcb.h"
^
compilation terminated.
How can I compile that program? I think I need some bash script to make the compilation work.
|
The file fmpcb.h no longer exists in the most recent version of Arb. The fmprb_t and fmpcb_t types in Arb 1.x were obsoleted by the (more efficient) arb_t and acb_t types in Arb 2.x. The most recent release removed the legacy fmpcb_t type entirely.
You should be able to get the code from that blog post working by substituting fmprb -> arb and fmpcb -> acb and possibly making other minor adjustments.
However, a better solution is to use the Keiper-Li example program which is included in Arb:
https://github.com/fredrik-johansson/arb/blob/master/examples/keiper_li.c
This is basically a better version of the program in the blog post. It's faster, supports multithreading, allows you to pass arguments on the command line instead of recompiling, and it should be up to date with the current interface.
From the Arb source directory, you can build and run the example program as follows (assuming Arb has already been installed):
cd /home/user/src/arb
make examples
build/examples/keiper_li 100
You can also build the library and run example programs without installing Arb, by telling the linker that it can find libarb.so in the source directory:
cd /home/user/src/arb
make
export LD_LIBRARY_PATH=/home/user/src/arb:$LD_LIBRARY_PATH
make examples
build/examples/keiper_li 100
For documentation of the Arb example programs, see: http://fredrikj.net/arb/examples.html
| How can I link my C program against the Arb library? |
1,366,812,867,000 |
I'm trying to compile a program prog and link it against OpenSSL's 1.0.2 beta, built from source and installed in /usr/local/ssl-1.0.2. On an older system using 0.9.8, this works without too much trouble. On a more recent system with 1.0.1 installed, this requires a bit more work. I'm wondering why.
1) On Ubuntu 10.04, with OpenSSL 0.9.8:
Here are the steps I follow to compile and link against 1.0.2.
$ ./config shared --openssldir=/usr/local/ssl-1.0.2 && make && make install
$ ldconfig
$ ldconfig -p | grep libcrypto
=> Only 0.9.8 files show up, so I add the path to the 1.0.2 files...
$ ldconfig /usr/local/ssl-1.0.2/lib
$ ldconfig -p | grep libcrypto
=>
libcrypto.so.1.0.0 (libc6) => /usr/local/ssl-1.0.2/lib/libcrypto.so.1.0.0
libcrypto.so.0.9.8 (libc6, hwcap: 0x0008000000008000) => /lib/i686/cmov/libcrypto.so.0.9.8
libcrypto.so.0.9.8 (libc6, hwcap: 0x0004000000000000) => /lib/i586/libcrypto.so.0.9.8
libcrypto.so.0.9.8 (libc6, hwcap: 0x0002000000000000) => /lib/i486/libcrypto.so.0.9.8
libcrypto.so.0.9.8 (libc6) => /lib/libcrypto.so.0.9.8
libcrypto.so.0.9.8 (libc6) => /usr/lib/libcrypto.so.0.9.8
libcrypto.so (libc6) => /usr/local/ssl-1.0.2/lib/libcrypto.so
And so I can compile prog...
$ gcc -o prog ... -L/usr/local/ssl-1.0.2/lib -lcrypto
$ ldd prog
=>
libcrypto.so.1.0.0 => /usr/local/ssl-1.0.2/lib/libcrypto.so.1.0.0 (0x0083b000)
... and it is correctly linked against 1.0.2.
2) On Debian Wheezy, with OpenSSL 1.0.1:
Same steps, different result.
$ ./config shared --openssldir=/usr/local/ssl-1.0.2 && make && make install
$ ldconfig
$ ldconfig -p | grep libcrypto
=>
libcrypto.so.1.0.0 (libc6, hwcap: 0x0008000000008000) => /usr/lib/i386-linux-gnu/i686/cmov/libcrypto.so.1.0.0
libcrypto.so.1.0.0 (libc6, hwcap: 0x0004000000000000) => /usr/lib/i386-linux-gnu/i586/libcrypto.so.1.0.0
libcrypto.so.1.0.0 (libc6) => /usr/lib/i386-linux-gnu/libcrypto.so.1.0.0
Likewise, I add the path to 1.0.2...
$ ldconfig /usr/local/ssl-1.0.2/lib
$ ldconfig -p | grep libcrypto
=>
libcrypto.so.1.0.0 (libc6, hwcap: 0x0008000000008000) => /usr/lib/i386-linux-gnu/i686/cmov/libcrypto.so.1.0.0
libcrypto.so.1.0.0 (libc6, hwcap: 0x0004000000000000) => /usr/lib/i386-linux-gnu/i586/libcrypto.so.1.0.0
libcrypto.so.1.0.0 (libc6) => /usr/local/ssl-1.0.2/lib/libcrypto.so.1.0.0
libcrypto.so.1.0.0 (libc6) => /usr/lib/i386-linux-gnu/libcrypto.so.1.0.0
libcrypto.so (libc6) => /usr/local/ssl-1.0.2/lib/libcrypto.so
Then I try to compile...
$ gcc -o prog ... -L/usr/local/ssl-1.0.2/lib -lcrypto
$ ldd prog
=>
libcrypto.so.1.0.0 => /usr/lib/i386-linux-gnu/i686/cmov/libcrypto.so.1.0.0 (0xb7591000)
But here it is not linked against 1.0.2. The compile-time library path is correct (specified with -L, gcc would fail otherwise since some functions used in prog are specific to 1.0.2), but not the run-time one.
3) How to get it working on Wheezy
With or without running ldconfig /usr/local/ssl-1.0.2/lib:
$ gcc -o prog ... -Wl,--rpath=/usr/local/ssl-1.0.2/lib -L/usr/local/ssl-1.0.2/lib -lcrypto
$ ldd prog
=>
libcrypto.so.1.0.0 => /usr/local/ssl-1.0.2/lib/libcrypto.so.1.0.0 (0xb7592000)
Alternatively, run export LD_LIBRARY_PATH=/usr/local/ssl-1.0.2/lib before running gcc.
What I'd like to know
Using LD_DEBUG=libs ./prog as suggested by mr.spuratic, I found that the paths were looked up in /etc/ld.so.cache. I opened that file and found that the order in which .so are looked up corresponds to the output of ldconfig -p.
So the actual question is:
Why does the 1.0.2 file get to the top of ldconfig's list in 1) but not in 2)? Pure randomness? Confusion due to the 1.0.1 and 1.0.2 files having the same suffix ("1.0.0")?
Or, said differently,
Why are the flags added in 3) not necessary in 1) ?
|
There are three things you need to take care of when compiling/linking against a non-default package:
headers (usually CFLAGS)
compile-time library path (usually LDFLAGS)
run-time library path (rpath via LDFLAGS, LD_RUN_PATH, LD_LIBRARY_PATH or ld.so.conf)
You haven't said what prog is, so I can't say how well-behaved its configuration might be (or if it uses autoconf?); I've seen many that only perform the first two steps reliably.
During the link stage the library path order is relevant, assuming you're using the GNU toolchain (gcc & binutils) you can probably see what's going on by setting CFLAGS before configure (or possible in the Makefile directly):
export CFLAGS="-Wl,-t"
This passes the -t trace option to the linker. (You may need to add V=1 or VERBOSE=1 to the make command if you only get terse "CC" and "LD" lines output during the make.)
At run time you can see what ld.so tries by carefully setting LD_DEBUG, e.g.
LD_DEBUG=libs ./myprog
(or try values of files or symbols for more detail)
To specify all three parameters correctly at build time you should be able to do:
export CFLAGS="-I/usr/local/ssl-1.0.2/include"
export LDFLAGS="-L/usr/local/ssl-1.0.2/lib -R/usr/local/ssl-1.0.2/lib"
then reconfigure/recompile.
You're using --openssldir rather than the more conventional --prefix (I recommend the latter, and also using just make install_sw if you don't need the 1000 or so man pages & symlinks a default install gives you). This may be part of the problem. For some reason the .so libraries that you show as known to ld.so do not have a matching version suffix (e.g. .so.1.0.2); a proper "make install" should have set that up for you (via the link-shared target in the main Makefile).
The -R option instructs the linker to embed an RPATH in the executable output for the specific OpenSSL library so that it does not need to rely on the default that the runtime linker (ld.so) would normally provide. You can modify existing binaries with chrpath instead.
This is more or less equivalent to exporting LD_LIBRARY_PATH=/usr/local/ssl-1.0.2/lib. You can read more about RPATH and the related RUNPATH here: http://blog.tremily.us/posts/rpath/
As a last resort you could possibly build OpenSSL without "shared" or with "noshared", this will give you static libraries which won't have this problem (but may well have other problems, e.g. for use within ELF .so, causing PIC/PIE problems)
Based on the updated details, I believe the problem is that 1.0.1 and 1.0.2beta both set the .so version suffix (SONAME) to 1.0.0. On the first system with only 0.9.8 this causes no problems; on the second with 1.0.1 and 1.0.2 both versioned as 1.0.0, it's "first match wins" based on the ld.so.{conf,d} ordering. Remember, ld the compile time linker is a different program to ld.so the run-time linker, and can have different behaviour (usually resulting in symbol errors or worse, as you have seen).
$ cd /usr/local/src/openssl/openssl/1.0.2beta1
$ readelf -a libssl.so | grep SONAME
0x0000000e (SONAME) Library soname: [libssl.so.1.0.0]
$ cat verchk.c
#include <stdio.h>
#include <openssl/crypto.h>

int main(int argc, char *argv[]) {
printf("build: %s\n",OPENSSL_VERSION_TEXT);
printf("run : %s\n",SSLeay_version(SSLEAY_VERSION));
return 0;
}
$ gcc -Wall -I/usr/local/src/openssl/openssl-1.0.2-beta1/include \
-Wl,-rpath,/usr/local/src/openssl/openssl-1.0.2-beta1/ \
-o verchk /usr/local/src/openssl/openssl-1.0.2-beta1/libcrypto.so verchk.c
$ ./verchk
build: OpenSSL 1.0.2-beta1 24 Feb 2014
run : OpenSSL 1.0.2-beta1 24 Feb 2014
$ grep SHLIB_M...R= Makefile
SHLIB_MAJOR=1
SHLIB_MINOR=0.0
Update
OpenSSL 1.1 has made some API-level changes; the above code will fail to build with v1.1 headers and older libraries (undefined reference to `OpenSSL_version').
SSLeay_version() is now deprecated and (depending on OPENSSL_API_COMPAT) may be #define-d to the proper API function OpenSSL_version().
| Get ld to pick the correct library |
1,366,812,867,000 |
In the process of building one library (Webdriver) I got the following error:
Package ibus-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `ibus-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ibus-1.0' found
It seems to be because of the following line in source code of Webdriver:
pkg-config ibus-1.0 --libs
that produces the same output when I run it.
So I installed ibus using installation instructions from its website:
sudo apt-get install ibus ibus-clutter ibus-gtk ibus-gtk3 ibus-qt4
But I still get the same output after invoking pkg-config ibus-1.0 --libs. Should I install ibus 1.0 to build that library? If yes, where can I find it? It doesn't seem to be present in ibus's downloads list?
My OS is Ubuntu 13.04
|
If you need it for a build, then you need the #include headers as well. These, and the pkgconfig files, are not in the normal packages because they serve no purpose outside of compiling. Instead, they are included in separate -dev packages which you can install when you want to build something that must be compiled against the library in question.
It looks to me (on Debian) like the package you want is libibus-1.0-dev.
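The mechanism behind the original error message can be seen with a hand-written .pc file: a -dev package ships exactly this kind of file, and PKG_CONFIG_PATH tells pkg-config where to look. The package name and contents here are made up for illustration:

```shell
# A minimal .pc file of the kind a -dev package installs
mkdir -p fakepc
cat > fakepc/demo-1.0.pc <<'EOF'
Name: demo
Description: illustrative package
Version: 1.0
Libs: -ldemo
Cflags: -I/usr/include/demo-1.0
EOF

# Point pkg-config at it, exactly as the error message suggests
PKG_CONFIG_PATH=$PWD/fakepc pkg-config demo-1.0 --libs
```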
| Should I install ibus-1.0 to build Webdriver? |