1,349,361,022,000 |
Say I have a C program main.c that statically links to libmine.a. Statically linking to a library causes library functions to be embedded into the main executable at compile time.
If libmine.a were to feature functions that weren't used by main.c, would the compiler (e.g. GCC) discard these functions?
This question is inspired by the "common messaging" that using static libraries makes executables larger, so I'm curious whether the compiler at least strips away unused code from an archive file.
|
By default, linkers handle object files as a whole. In your example, the executable will end up containing the code from main.c (main.o), and any object files from libmine.a (which is an archive of object files) required to provide all the functions used by main.c (transitively).
So the linker won’t necessarily include all of libmine.a, but the granularity it can use isn’t functions (by default), it’s object files (strictly speaking, sections). The reason for this is that when a given .c file is compiled to an object file, information from the source code is lost; in particular, the end of a function isn’t stored, only its start, and since multiple functions can be combined, it’s very difficult to determine from an object file what can actually be removed if a function is unused.
It is however possible for compilers and linkers to do better than this if they have access to the extra information needed. For example, the LightspeedC programming environment on ’80s Macs could use projects as libraries, and since it had the full source code in such cases, it would only include functions that were actually needed.
On more modern systems, the compiler can be told to produce object files which allow the linker to handle functions separately. With GCC, build your .o files with the -ffunction-sections -fdata-sections options enabled, and link the final program with the --gc-sections option. This does have an impact, notably by preventing certain categories of optimisation; see discard unused functions in GCC for details.
Another option you can use with modern compilers and linkers is link-time optimisation; enable this with -flto. When optimisation is enabled (e.g. -O2 when compiling the object files), the linker will not include unused functions in the resulting binary. This works even without -ffunction-sections -fdata-sections.
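As a concrete illustration, here is a minimal sketch (assuming gcc and GNU binutils on Linux; the file and function names are made up) showing an unused library function disappearing from the final binary once sections are garbage-collected:

```shell
cat > libmine.c <<'EOF'
int used(void)   { return 1; }
int unused(void) { return 2; }
EOF
cat > main.c <<'EOF'
int used(void);
int main(void) { return used(); }
EOF
gcc -c -ffunction-sections -fdata-sections libmine.c   # one section per function
ar rcs libmine.a libmine.o                             # archive the object file
gcc -c main.c
gcc -Wl,--gc-sections main.o libmine.a -o prog         # drop unreferenced sections
nm prog | grep ' T '   # 'used' survives; 'unused' is gone from the symbol table
```

Without `-ffunction-sections`, the whole of libmine.o would be pulled in, including `unused`.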
| Do C compilers discard unused functions when statically linking to .a file? |
When I install a simple program, it often uses make && make install and often doesn't even have an uninstall target.
If I wish to upgrade a program, is it standard protocol to assume it just rewrites seamlessly over the old program?
How do I keep track of these programs; do most people just 'fire and forget' and if no uninstall target is given do I have to manually delete everything?
|
Install each program in a dedicated directory tree, and use Stow or XStow to make all the programs appear in a common hierarchy. Stow creates symbolic links from the program-specific directory to a common tree.
In more detail, pick a toplevel directory, for example /usr/local/stow. Install each program under /usr/local/stow/PROGRAM_NAME. For example, arrange for its executables to be installed in /usr/local/stow/PROGRAM_NAME/bin, its man pages in /usr/local/stow/man/man1 and so on. If the program uses autoconf, then run ./configure --prefix /usr/local/stow/PROGRAM_NAME. After you've run make install, run stow:
./configure --prefix /usr/local/stow/PROGRAM_NAME
make
sudo make install
cd /usr/local/stow
sudo stow PROGRAM_NAME
And now you'll have symbolic links like these:
/usr/local/bin/foo -> ../stow/PROGRAM_NAME/bin/foo
/usr/local/man/man1/foo.1 -> ../../stow/PROGRAM_NAME/man/man1/foo.1
/usr/local/lib/foo -> ../stow/PROGRAM_NAME/lib/foo
You can easily keep track of what programs you have installed by listing the contents of the stow directory, and you always know what program a file belongs to because it's a symbolic link to a location under that program's directory. Uninstall a program by running stow -D PROGRAM_NAME then deleting the program's directory. You can make a program temporarily unavailable by running stow -D PROGRAM_NAME (run stow PROGRAM_NAME to make it available again).
If you want to be able to quickly switch between different versions of the same program, use /usr/local/stow/PROGRAM_NAME-VERSION as the program directory. To upgrade from version 3 to version 4, install version 4, then run stow -D PROGRAM_NAME-3; stow PROGRAM_NAME-4.
Older versions of Stow don't go very far beyond the basics I've described in this answer. Newer versions, as well as XStow (which hasn't been maintained lately), have more advanced features, like the ability to ignore certain files, better cope with existing symlinks outside the stow directory (such as man -> share/man), handle some conflicts automatically (when two programs provide the same file), etc.
If you don't have or don't want to use root access, you can pick a directory under your home directory, e.g. ~/software/stow. In this case, add ~/software/bin to your PATH. If man doesn't automatically find man pages, add ~/software/man to your MANPATH. Add ~/software/info to your INFOPATH, ~/software/lib/python to your PYTHONPATH, and so on as applicable.
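What Stow does under the hood can be sketched with plain symbolic links (a toy layout with a made-up package name, not the real stow command):

```shell
# a per-package tree, as you'd get from ./configure --prefix .../stow/hello-1.0
mkdir -p stow/hello-1.0/bin
printf '#!/bin/sh\necho hello\n' > stow/hello-1.0/bin/hello
chmod +x stow/hello-1.0/bin/hello
# "stowing" = linking the package's files into the common tree
mkdir -p bin
ln -sf ../stow/hello-1.0/bin/hello bin/hello
./bin/hello
# "unstowing" (stow -D) just removes the links, leaving the package tree intact
```

The real tool automates creating and removing these links for every file in the package, and folds directories where possible.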
| Keeping track of programs |
When I compile my own kernel, basically what I do is the following:
I download the sources from www.kernel.org and uncompress it.
I copy my previous .config to the sources and do a make menuconfig to watch for the new options and modify the configuration according to the new policy of the kernel.
Then, I compile it: make -j 4
Finally, I install it: su -c 'make modules_install && make install'.
After a few tests, I remove the old kernel (from /boot and /lib/modules) and run fully with the new one (this last step saved my life several times! It's a pro-tip!).
The problem is that I always get a /boot/initrd.img-4.x.x which is huge compared to the ones from my distribution. Here is the content of my current /boot/ directory as an example:
# ls -alFh
total 243M
drwxr-xr-x 5 root root 4.0K Mar 16 21:26 ./
drwxr-xr-x 25 root root 4.0K Feb 25 09:28 ../
-rw-r--r-- 1 root root 2.9M Mar 9 07:39 System.map-4.4.0-1-amd64
-rw-r--r-- 1 root root 3.1M Mar 11 22:30 System.map-4.4.5
-rw-r--r-- 1 root root 3.2M Mar 16 21:26 System.map-4.5.0
-rw-r--r-- 1 root root 170K Mar 9 07:39 config-4.4.0-1-amd64
-rw-r--r-- 1 root root 124K Mar 11 22:30 config-4.4.5
-rw-r--r-- 1 root root 126K Mar 16 21:26 config-4.5.0
drwxr-xr-x 5 root root 512 Jan 1 1970 efi/
drwxr-xr-x 5 root root 4.0K Mar 16 21:27 grub/
-rw-r--r-- 1 root root 19M Mar 10 22:01 initrd.img-4.4.0-1-amd64
-rw-r--r-- 1 root root 101M Mar 12 13:59 initrd.img-4.4.5
-rw-r--r-- 1 root root 103M Mar 16 21:26 initrd.img-4.5.0
drwx------ 2 root root 16K Apr 8 2014 lost+found/
-rw-r--r-- 1 root root 3.5M Mar 9 07:30 vmlinuz-4.4.0-1-amd64
-rw-r--r-- 1 root root 4.1M Mar 11 22:30 vmlinuz-4.4.5
-rw-r--r-- 1 root root 4.1M Mar 16 21:26 vmlinuz-4.5.0
As you may have noticed, my initrd.img files are about 10 times bigger than the ones from my distribution.
So, do I do something wrong when compiling my kernel? And, how can I reduce the size of my initrd.img?
|
This is because the kernel modules are not stripped. You need to strip them to bring down the size.
Use these commands:
SHW@SHW:/tmp# cd /lib/modules/<new_kernel>
SHW@SHW:/tmp# find . -name '*.ko' -exec strip --strip-unneeded {} +
(The pattern is quoted so the shell doesn't expand *.ko itself.) This will drastically reduce the size.
After executing the above commands, you can proceed to create the initramfs/initrd.
man strip
--strip-unneeded
Remove all symbols that are not needed for relocation processing.
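To get a feel for what stripping saves, here is a toy sketch (not a real kernel module; assumes gcc and GNU binutils) of an object file built with debug info and then stripped:

```shell
cat > mod.c <<'EOF'
static int helper(int x) { return x * 2; }
int entry(int x) { return helper(x) + 1; }
EOF
gcc -g -c mod.c -o mod.o         # -g mimics a module built with debug info
stat -c %s mod.o                 # size before stripping
strip --strip-unneeded mod.o     # keeps only symbols needed for relocation
stat -c %s mod.o                 # size after: noticeably smaller
```

Most of the saving comes from the debug sections, which `--strip-unneeded` removes along with the unneeded symbols.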
| How to reduce the size of the initrd when compiling your kernel? |
I know that Linux is available and has been ported to many different platforms such as x86, ARM, PowerPC, etc.
However, in terms of porting, what is required exactly?
My understanding is that Linux is software written in C. Therefore, when porting Linux originally from x86 to ARM or others, for example, is it not just a matter of re-compiling the code with the compiler for the specific target architecture?
Putting device drivers for different peripherals aside, what else would need to be done when porting Linux to a new architecture. Does the compiler not take care of everything for us?
|
Even though most of the code in the Linux kernel is written in C, there are still many parts of that code that are very specific to the platform where it's running and need to account for that.
One particular example of this is virtual memory, which works in similar fashion on most architectures (hierarchy of page tables) but has specific details for each architecture (such as the number of levels in each architecture, and this has been increasing even on x86 with introduction of new larger chips.) The Linux kernel code introduces macros to handle traversing these hierarchies that can be elided by the compiler on architectures which have fewer levels of page tables (so that code is written in C, but takes details of the architecture into consideration.)
Many other areas are very specific to each architecture and need to be handled with arch-specific code. Most of these involve code in assembly language though. Examples are:
Context Switching: Context switching involves saving the value of all registers for the process being switched out and restoring the registers from the saved set of the process scheduled into the CPU. Even the number and set of registers is very specific to each architecture. This code is typically implemented in assembly, to allow full access to the registers and also to make sure it runs as fast as possible, since performance of context switching can be critical to the system.
System Calls: The mechanism by which userspace code can trigger a system call is usually specific to the architecture (and sometimes even to the specific CPU model, for instance Intel and AMD introduced different instructions for that, older CPUs might lack those instructions, so details for those will still be unique.)
Interrupt Handlers: Details of how to handle interrupts (hardware interrupts) are usually platform-specific and usually require some assembly-level glue to handle the specific calling conventions in use for the platform. Also, primitives for enabling/disabling interrupts are usually platform-specific and require assembly code as well.
Initialization: Details of how initialization should happen also usually include details that are specific to the platform and often require some assembly code to handle the entry point to the kernel. On platforms that have multiple CPUs (SMP), details on how to bring other CPUs online are usually platform-specific as well.
Locking Primitives: Implementation of locking primitives (such as spinlocks) usually involve platform-specific details as well, since some architectures provide (or prefer) different CPU instructions to efficiently implement those. Some will implement atomic operations, some will provide a cmpxchg that can atomically test/update (but fail if another writer got in first), others will include a "lock" modifier to CPU instructions. These will often involve writing assembly code as well.
There are probably other areas where platform- or architecture-specific code is needed in a kernel (or, specifically, in the Linux kernel.) Looking at the kernel source tree, there are architecture-specific subtrees under arch/ (with architecture-specific headers under arch/*/include) where you can find more examples of this.
Some are actually surprising, for instance you'll see that the number of system calls available on each architecture is distinct and some system calls will exist in some architectures and not others. (Even on x86, the list of syscalls differs between a 32-bit and a 64-bit kernel.)
In short, there are plenty of cases a kernel needs to be aware of that are specific to a platform. The Linux kernel tries to abstract most of those, so higher-level algorithms (such as how memory management and scheduling work) can be implemented in C and work the same (or mostly the same) on all architectures.
| Porting Linux to another platform requirements [closed] |
I was looking for a lightweight X server, but failed to find one. Then I found out about Wayland. It says that it aims to coexist with X, but can run standalone.
When I try to compile it, it needs Mesa, which needs X.
What exactly is Wayland?
|
Wayland is an experimental new display server. It is not an X server, and to run X applications you will need to run an X server with it (see the bottom diagram on Wayland Architecture). Since there are very few Wayland applications so far, this means you really can't use it to replace X yet.
Update: As noted in other answers, Wayland is the protocol, not the server software. Also, the number of Wayland applications has greatly expanded since this answer was first written in 2010.
| What is Wayland? |
Maybe there are some compatibility issues?
I have the impression that for Intel-based systems, the Intel compiler would potentially do a better job than GCC. Perhaps there's already a distro that has attempted this?
I would think this might be quite straightforward using Gentoo.
|
You won't be able to compile everything with icc. Many programs out there use GCC extensions to the C language. However, Intel has made a lot of effort to support most of these extensions; for example, recent versions of icc can compile the Linux kernel.
Gentoo is indeed your best bet if you like recompiling your software in an unusual way. The icc page on the Gentoo wiki describes the main hurdles.
First make a basic Gentoo installation, and emerge icc. Don't remove icc later as long as you have any binary compiled with icc on your system. Note that icc is installed in /opt; if that isn't on your root partition, you'll need to copy the icc libraries to your root partition if any of the programs used at boot time are compiled with icc.
Set up /etc/portage/bashrc and declare your favorite compilation options; see the Gentoo wiki for a more thorough script which supports building different packages with different compilers (this is necessary because icc breaks some packages).
export OCC="icc" CFLAGS="-O2 -gcc"
export OCXX="icpc" CXXFLAGS="$CFLAGS"
export CC_FOR_BUILD="${OCC}"
| Is it possible to compile a full Linux system with Intel's compiler instead of GCC? |
I have already followed this guide to disable middle mouse button paste on my Ubuntu 12.04.
Works like a charm.
Now I am trying to achieve the same on my Linux Mint 17. When I try to
sudo apt-get build-dep libgtk2.0-0
it gives me the following output:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Picking 'gtk+2.0' as source package instead of 'libgtk2.0-0'
E: Unable to find a source package for gtk+2.0
For me it looks like apt-get is somehow "resolving" 'libgtk2.0-0' to 'gtk+2.0', but then does not find any package named like that.
EDIT:
although I am now able to compile the program (see my answer), I still do not know what Picking 'gtk+2.0' as source package instead of 'libgtk2.0-0' is supposed to mean. Any insight on this would be appreciated, thanks!
|
As others have already noted, make sure that for every deb … entry in /etc/apt/sources.list and /etc/apt/sources.list.d/*, you have a matching deb-src … entry. The rest of the line must be identical.
The deb entry is for binary packages (i.e. ready to install), the deb-src is for source packages (i.e. ready to compile). The reason why the two kinds of packages are separated is that they are managed very differently: binary packages have a dependency tracking mechanism and a currently-installed list, whereas source packages are only tracked so that they can be downloaded conveniently.
Note that when discussing package repositories, the word source means two unrelated things: a source as in a location to download packages from, and a source package as opposed to a binary package.
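As a sketch of the deb/deb-src pairing (the repository URL and suite are just examples), the matching deb-src lines can even be generated mechanically, since they differ only in the first word:

```shell
cat > sources.list <<'EOF'
deb http://archive.ubuntu.com/ubuntu trusty main restricted
deb http://archive.ubuntu.com/ubuntu trusty universe
EOF
# every deb line gets a matching deb-src line, identical apart from the first word
sed 's/^deb /deb-src /' sources.list > sources.src
cat sources.src
```

In practice you would append these lines to the same sources.list (or a file under sources.list.d) and run apt-get update.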
libgtk2.0-0 is the name of a binary package. It is built from a source package called gtk+2.0. The reason source and binary package names don't always match is that building a source package can produce multiple binary packages; for example, gtk+2.0 is the source for 14 packages as it is split into two libraries (libgtk2.0, libgail), corresponding packages to build programs using these libraries (…-dev), documentation for developers (…-doc), companion programs (libgtk2.0-bin), etc.
You can see the name of the source package corresponding to a binary package by checking the Source: … line in the output of dpkg -s BINARY_PACKAGE_NAME (if the package is installed) or apt-cache show BINARY_PACKAGE_NAME.
You can list the binary packages produced by a source package with aptitude search '?source-package(^SOURCE_PACKAGE_NAME$)'.
The command apt-get source downloads a source package. If you give it an argument which isn't a known source package, it looks it up in the database of installable binary packages and tries to download the corresponding source package.
The command apt-get build-dep follows the same approach to deduce the name of a source package, then queries the source package database to obtain a list of binary packages (the list in the Build-Depends: field), and installs those binary packages.
The Software Sources GUI has a checkbox “enable repositories with source code” for official repositories, make sure that it's ticked. If you add third-party repositories manually, make sure that you add both deb-src and deb lines.
| apt-get build-dep is unable to find a source package |
After some googling, I found a way to compile BASH scripts to binary executables (using shc).
I know that shell is an interpreted language, but what does this compiler do? Will it improve the performance of my script in any way?
|
To answer the question in your title, compiled shell scripts could be better for performance — if the result of the compilation represented the result of the interpretation, without having to re-interpret the commands in the script over and over. See for instance ksh93's shcomp or zsh's zcompile.
However, shc doesn’t compile scripts in this way. It’s not really a compiler, it’s a script “encryption” tool with various protection techniques of dubious effectiveness. When you compile a script with shc, the result is a binary whose contents aren’t immediately readable; when it runs, it decrypts its contents, and runs the tool the script was intended for with the decrypted script, making the original script easy to retrieve (it’s passed in its entirety on the interpreter’s command line, with extra spacing in an attempt to make it harder to find). So the overall performance will always be worse: on top of the time taken to run the original script, there’s the time taken to set the environment up and decrypt the script.
| Are compiled shell scripts better for performance? |
I sometimes compile apps from source and I've either been using:
./configure
make
sudo make install
But recently, I came across ./autogen.sh which generates the configure and make scripts for me and executes them.
What other methods to streamline C/C++/C#(mono) compilation exist? Make seems a bit old. Are there new tools out there? Given the choice, which one should I use?
|
Autoconf and Automake set out to solve an evolutionary problem of Unix.
As Unix evolved into different directions, developers that wanted portable code tended to write code like this:
#if RUNNING_ON_BSD
Set things up in the BSD way
#elif RUNNING_ON_SYSTEMV
Set things up in the SystemV way
#endif
As Unix was forked into different implementations (BSD, SystemV, many vendor forks, and later Linux and other Unix-like systems) it became important for developers that wanted to write portable code to write code that depended not on a particular brand of operating system, but on features exposed by the operating system. This is important because a Unix version would introduce a new feature, for example the "send" system call, and later other operating systems would adopt it. Instead of having a spaghetti of code that checked for brands and versions, developers started probing by features, so code became:
#if HAVE_SEND
Use Send here
#else
Use something else
#endif
Most README files for compiling source code back in the '90s pointed developers to edit a config.h file, commenting in or out the features available on the system, or would ship standard config.h files for each operating system configuration that had been tested.
This process was both cumbersome and error prone and this is how Autoconf came to be. You should think of Autoconf as a language made up of shell commands with special macros that was able to replace the human editing process of the config.h with a tool that probed the operating system for the functionality.
You would typically write your probing code in the file configure.ac and then run the autoconf command which would compile this file to the executable configure command that you have seen used.
So when you run ./configure && make you were probing for the features available on your system and then building the executable with the configuration that was detected.
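The kind of probe a configure script performs can be hand-rolled in a few lines of shell (a sketch assuming gcc is available; the header being tested is just an example):

```shell
# probe: does this system have <unistd.h>? Try to compile a tiny test program.
cat > conftest.c <<'EOF'
#include <unistd.h>
int main(void) { return 0; }
EOF
if gcc conftest.c -o conftest 2>/dev/null; then
    echo '#define HAVE_UNISTD_H 1' > config.h
else
    echo '/* #undef HAVE_UNISTD_H */' > config.h
fi
cat config.h
```

Autoconf generates hundreds of tests like this one from the macros in configure.ac, writing the results into config.h so the C code can use `#if HAVE_...` checks.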
When open source projects started using source code control systems, it made sense to check in the configure.ac file, but not the result of the compilation (configure). The autogen.sh is merely a small script that invokes the autoconf compiler with the right command arguments for you.
--
Automake grew also out of existing practices in the community. The GNU project standardized a regular set of targets for Makefiles:
make all would build the project
make clean would remove all compiled files from the project
make install would install the software
things like make dist and make distcheck would prepare the source for distribution and verify that the result was a complete source code package
and so on...
Building compliant makefiles became burdensome because there was a lot of boilerplate that was repeated over and over. So Automake was a new compiler that integrated with autoconf and processed "source" Makefiles (named Makefile.am) into the Makefile templates that the generated configure script turns into actual Makefiles.
The automake/autoconf toolchain actually uses a number of other helper tools and they are augmented by other components for other specific tasks. As the complexity of running these commands in order grew, the need for a ready-to-run script was born, and this is where autogen.sh came from.
As far as I know, Gnome was the project that introduced the use of this helper script, autogen.sh.
| Is automake and autoconf the standard way to compile code? |
It's said that compiling GNU tools and Linux kernel with -O3 gcc optimization option will produce weird and funky bugs. Is it true? Has anyone tried it or is it just a hoax?
|
It's used in Gentoo, and I didn't notice anything unusual.
| Compiling GNU/Linux with -O3 optimization |
Yesterday I was trying to compile the ROOT package from source. Since I was compiling it on a 6-core monster machine, I decided to go ahead and build using multiple cores with make -j 6. The compiling went smoothly and really fast at first, but at some point make hung using 100% CPU on just one core.
I did some googling and found this post on the ROOT message boards. Since I built this computer myself, I was worried that I hadn't properly applied the heatsink and the CPU was overheating or something. Unfortunately, I don't have a fridge here at work that I can stick it in. ;-)
I installed the lm-sensors package and ran make -j 6 again, this time monitoring the CPU temperature. Although it got high (close to 60 C), it never went past the high or critical temperature.
I tried running make -j 4 but again make hung sometime during the compile, this time at a different spot.
In the end, I compiled just running make and it worked fine. My question is: Why was it hanging? Due to the fact that it stopped at two different spots, I would guess it was due to some sort of race condition, but I would think make should be clever enough to get everything in the right order since it offers the -j option.
|
I don't have an answer to this precise issue, but I can try to give you a hint of what may be happening: Missing dependencies in Makefiles.
Example:
target: a.bytecode b.bytecode
link a.bytecode b.bytecode -o target
a.bytecode: a.source
compile a.source -o a.bytecode
b.bytecode: b.source
compile b.source a.bytecode -o b.bytecode
If you call make target everything will compile correctly. Compilation of a.source is performed (arbitrarily, but deterministically) first. Then compilation of b.source is performed.
But if you run make -j2 target, both compile commands will be run in parallel. And you'll actually notice that your Makefile's dependencies are broken. The second compile assumes a.bytecode is already compiled, but it does not appear in the dependencies. So an error is likely to happen. The correct dependency line for b.bytecode should be:
b.bytecode: b.source a.bytecode
To come back to your problem: if you are unlucky, it's possible that a command hangs in a 100% CPU loop because of a missing dependency. That's probably what is happening here; the missing dependency couldn't be revealed by a sequential build, but it has been revealed by your parallel build.
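The failure mode described above can be reproduced with a runnable sketch (toy rules with made-up file names; semicolon-style recipes are used so no tab-indented lines are needed):

```shell
cat > Makefile <<'EOF'
target: a.out b.out ; cat a.out b.out > target
a.out: ; sleep 1; echo A > a.out
b.out: ; cp a.out b.out
EOF
rm -f a.out b.out target
# parallel: a.out and b.out start together, and cp can't find a.out yet
make -s -j2 target 2>/dev/null || echo "parallel build failed"
# serial: a.out is (arbitrarily, but deterministically) built first, so it works
make -s target 2>/dev/null && echo "serial build succeeds"
```

Nothing tells make that b.out needs a.out; the fix is to declare the dependency (`b.out: a.out ; cp a.out b.out`), after which -j2 works too.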
| What could be causing make to hang when compiling on multiple cores? |
Examining a buildlog from a failed build, what does the following error mean,
fatal error: ac_nonexistent.h: No such file or directory #include <ac_nonexistent.h>
Here is some context.
configure:6614: $? = 0
configure:6627: result: none needed
configure:6648: checking how to run the C preprocessor
configure:6679: gcc -E -Wdate-time -D_FORTIFY_SOURCE=2 conftest.c
configure:6679: $? = 0
configure:6693: gcc -E -Wdate-time -D_FORTIFY_SOURCE=2 conftest.c
conftest.c:11:28: fatal error: ac_nonexistent.h: No such file or directory
#include <ac_nonexistent.h>
^
compilation terminated.
configure:6693: $? = 1
configure: failed program was:
| /* confdefs.h */
What is ac_nonexistent.h? What should I do when I encounter this error?
|
That’s a sanity check, to ensure that the configuration script is correctly able to determine whether a header file is present or not: it asks the compiler to use a non-existent header, and checks that the compiler (correctly) fails.
Note that your build goes on after that “error”... To figure out the cause of a build failure, you should generally work up from the end of the build log. In this instance the important part of the log is
configure:47489: checking for the Wayland protocols
configure:47492: $PKG_CONFIG --exists --print-errors "wayland-protocols >= 1.4"
Package wayland-protocols was not found in the pkg-config search path.
Perhaps you should add the directory containing `wayland-protocols.pc' to the PKG_CONFIG_PATH environment variable
No package 'wayland-protocols' found
| What is ac_nonexistent.h? |
I want to learn about binary packages and how to run them on Linux. I am running Debian-based (Ubuntu/Linux Mint) Linux OSes.
How do I build a binary package from source? And can I directly download binary packages for applications (like Firefox, etc.) and games (like Bos Wars, etc.)?
I ran some packages which are in the "xyz.linux.run" format. What are these packages? Are they independent of dependencies, or are they pre-built binary packages?
How are they built so that they can be run directly ("xyz.linux.run") on a Linux operating system?
What is the difference between a binary package and a deb package?
|
In a strict sense a binary file is one which is not character encoded as human readable text. More colloquially, a "binary" refers to a file that is compiled, executable code, although the file itself may not be executable (referring not so much to permissions as to the capacity to be run alone; some binary code files such as libraries are compiled, but regardless of permissions, they cannot be executed all by themselves). A binary which runs as a standalone executable is an "executable", although not all executable files are binaries (and this is about permissions: executable text files which invoke an interpreter via a shebang such as #!/bin/sh are executables too).
What is a binary package?
A binary package in a linux context is an application package which contains (pre-built) executables, as opposed to source code.
Note that this does not mean a package file is itself an executable. A package file is an archive (sort of like a .zip) which contains other files, and a "binary" package file is one which specifically contains executables (although again, executables are not necessarily truly binaries, and in fact binary packages may be used for compiled libraries which are binary code, but not executables). However, the package must be unpacked in order for you to access these files.
Usually that is taken care of for you by a package management system (e.g. apt/dpkg) which downloads the package and unpacks and installs the binaries inside for you.
What is the difference between a binary package and a deb package?
There isn't one -- .deb packages are binary packages, although there are .debs which contain source instead; these usually have -src appended to their name.
I ran some packages which are in the "xyz.linux.run" format. What are these packages?
Those are generally self-extracting binary packages; they work by embedding a binary payload into a shell script. "Self-extracting" means you don't have to invoke another application (such as a package manager) in order to unpack and use them. However, since they do not work with a package manager, resolving their dependencies may be more of a crapshoot and hence some such packages use statically linked executables (they have all necessary libraries built into them) which wastes a bit of memory when they are used.
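The self-extraction technique is easy to sketch (a toy example, not how any particular vendor's installer is built): a shell stub reads its own file and unpacks everything after a marker line:

```shell
# build a payload to embed
mkdir -p payload
echo 'echo hello from payload' > payload/setup.sh
tar czf payload.tgz payload
# the stub: a shell script followed by the raw gzipped tarball
cat > installer.run <<'EOF'
#!/bin/sh
# everything after the marker line is a gzipped tarball
skip=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
tail -n +"$skip" "$0" | tar xzf -
sh payload/setup.sh
exit 0
__PAYLOAD__
EOF
cat payload.tgz >> installer.run
chmod +x installer.run
./installer.run
```

Tools like makeself automate exactly this stub-plus-archive layout, adding checksums and command-line options.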
| What is Binary package? How to build them? |
I am using the vim editor on Linux Mint. I want to know if there is any way to compile a C program without leaving the editor.
|
There are several possibilities.
One method is to compile using
:!gcc file.c
But a nicer strategy would be to have a Makefile and compile just using
:make
where the simplest Makefile would look like:
program: file.c
	gcc file.c -o program
Others can explain this a lot better.
| How to compile a c program without leaving the editor? [duplicate] |
I'm building a custom kernel based off 4.11 (for Mintx64, if it matters). I've already compiled and installed it to prove that it works. Now I've made a few small changes to a couple of files (in the driver and net subsystems, this is why I need to compile a custom kernel in the first place!)
Now I want to build the modified kernel. However when I run
fakeroot make -j5 deb-pkg LOCALVERSION=myname KDEB_PKGVERSION=1
The build system appears to start by "clean"-ing a whole load of stuff, so I stopped it quickly. Unfortunately the computer I'm using is not blessed with a good CPU and takes many hours to build from scratch. Therefore I'd rather avoid doing it again if possible!
Is it possible to do just an incremental build without everything being "clean"ed, or is this a requirement of the kernel build system?
The output I got was:
CHK include/config/kernel.release
make clean
CLEAN .
CLEAN arch/x86/lib
...
|
The make clean is only for the deb-pkg target. Take a look at scripts/package/Makefile:
deb-pkg: FORCE
$(MAKE) clean
$(call cmd,src_tar,$(KDEB_SOURCENAME))
$(MAKE) KBUILD_SRC=
+$(call cmd,builddeb)
bindeb-pkg: FORCE
$(MAKE) KBUILD_SRC=
+$(call cmd,builddeb)
If you build the bindeb-pkg instead, it won't do a clean. You probably don't need the source packages anyway.
I suspect it does a clean because it doesn't want to tar up build artifacts in the source tarball.
| Re-building Linux kernel without "clean" |
1,349,361,022,000 |
I have seen the following install command used in multiple yocto recipes
install -d ${D}${libdir}
I am aware of the install command and its purpose, however I am unable to understand the purpose of the ${D} variable as it is often nowhere defined in the recipe. Can somebody explain the purpose of this shell variable?
|
The ${D} variable allows the software being built to be installed in a directory other than its real target. For example, you might configure the software so that libdir is /usr/lib, but that's for the target device; when you run the installation on your build system, you don't want the newly-built files to actually be installed in /usr/lib, you want them placed somewhere isolated so that they can be readily identified and copied across to the target system. So you create a temporary directory and install there:
mkdir /tmp/yocto-target
make install D=/tmp/yocto-target
That way the files end up in /tmp/yocto-target/usr/lib and so on. You can then archive all of /tmp/yocto-target using whatever tool you prefer, dropping the /tmp/yocto-target prefix, copy the archive to the target device and install its contents there.
In other build systems, the DESTDIR variable is used for the same reason.
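The staging idea can be sketched outside any build system (the library file name here is invented for the example):

```shell
# Stage a file under a temporary root instead of installing it into /usr/lib.
D=$(mktemp -d)
libdir=/usr/lib
install -d "${D}${libdir}"                  # creates e.g. /tmp/tmp.XXXX/usr/lib
echo 'dummy library' > libexample.so
install -m 644 libexample.so "${D}${libdir}/"
ls "${D}${libdir}"                          # the file is staged, /usr/lib is untouched
```

The whole tree under ${D} can then be archived and unpacked on the target with the prefix stripped.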
| ${D} variable in install command |
1,349,361,022,000 |
Just now I have began reading the book: Advanced Programming in the UNIX® Environment. I wanted to try running its first code example. I am running Scientific Linux 6.4.
I downloaded the source code and as it says in its README, I ran make in the uncompressed file.
I wrote the first program (a mock ls command)
#include "./include/apue.h"
#include <dirent.h>
int
main(int argc, char *argv[])
{
DIR *dp;
struct dirent *dirp;
if(argc!=2)
err_quit("usage: test directory_name");
if((dp=opendir(argv[1]))==NULL)
err_sys("Can't open %s", argv[1]);
while((dirp=readdir(dp))!=NULL)
printf("%s\n", dirp->d_name);
closedir(dp);
return 0;
}
and put it in the uncompressed directory. As the book had advised, I then ran gcc myls.c, but I get this error:
# gcc myls.c
/tmp/ccWTWS2I.o: In function `main':
test.c:(.text+0x20): undefined reference to `err_quit'
test.c:(.text+0x5b): undefined reference to `err_sys'
collect2: ld returned 1 exit status
I wanted to know how I can fix this problem. I also want to be able to run a code I write in any directory.
|
A short review of how to write and compile the programs in Advanced Programming in the UNIX® Environment, thanks to slm for helping me understand the steps. You can download the source code from here.
I wish this information was included as part of appendix b of the book,
where the header file is explained.
The uncompressed directory contains subdirectories with the names
of the chapters and two others named include and lib.
The ones with the names of the chapters have all the
programs of that chapter in them.
The include directory contains the header file that
is used in most of the programs in the book: apue.h.
The lib directory has the source code of the
implementations for that header.
Let's assume the uncompressed directory is located at
SCADDRESS/; for example it might be:
/home/yourid/Downloads/apue.3e/
Once you uncompress the source code, go in the directory
and run make:
$ cd SCADDRESS
$ make
make will compile all the programs in all the chapters.
But the important thing is that before that, it will make
the library that will contain the implementations of the
functions in apue.h.
To compile an example program that you write from the book, run this GCC command (assuming your program's name is myls.c which is the first in the book):
gcc -o myls myls.c -I SCADDRESS/include/ -L SCADDRESS/lib/ -lapue
-I tells gcc which directory to search for the include file,
-L tells it the location of the library directory, and
-lapue gives the name of the library file to look for
in that directory: -lXXX means to look for a file
in the library directory named libXXX.a or libXXX.so.
| Compiling code from apue |
1,349,361,022,000 |
I use Ubuntu 12.04. Say I have installed package x from the repository (with all its dependencies) at version 1.7 but I need some functionality that is only available in version 1.8, so I download the source tar and compile it:
./configure
make
make install
Does this overwrite the existing 1.7 binaries?
If the existing binaries are overwritten, does the package manager reflect the new version (1.8) and can package x be updated by the package manager in the future?
If package y has a dependency of package x 1.8 - will it be satisfied?
I have been trying to find a good source online that explains this. If you have any recommendations, please let me know.
|
The overwhelming majority of .deb packages, whether or not they are provided by official repositories, install with the prefix /usr.
What that means is that executables intended to be run by the user go in /usr/bin or /usr/sbin (or /usr/games if it's a game), shared libraries go in /usr/lib, platform-independent shared data go in /usr/share, header files go in /usr/include, and source code installed automatically goes in /usr/src.
A small percentage of packages use / as the prefix. For example, the bash package puts the bash executable in /bin, not /usr/bin. This is for packages that provide the bare essentials to run in single-user mode (such as recovery mode) and to start multi-user mode (but remember, that often includes functionality to mount some kinds of network shares...in case /usr is a remote filesystem).
A small percentage of .deb packages, mostly those created with Quickly, create a package-specific folder inside /opt and put all their files there. Other than that, most of the time /opt is the location used by software that is installed from an executable installer that does not use the system's package manager but does not involve compiling from source. (For example, if you install a proprietary program like MATLAB, you'll likely put it in /opt.)
In contrast to all of this, when you download a source archive (or get source code from a revision control system such as Bazaar or git), build it, and install it, it usually installs to the prefix /usr/local (unless you tell it to do otherwise). This means your executables go in /usr/local/bin (or /usr/local/games), your libraries in /usr/local/lib, and so forth.
There are some exceptions to this--some programs, by default, install to the /usr prefix and would thus overwrite installations of the same programs from .deb packages. Typically you can prevent this by running ./configure --prefix=/usr/local instead of ./configure when you build them. I again emphasize that usually this is not necessary.
(It is for this reason that it makes very good sense for you to put source code that you are building and will install for systemwide use in /usr/local/src, which exists for that purpose.)
Assuming the packaged version is installed in /usr and the version you installed from source is in /usr/local:
Files from the installed package will not be overwritten.
Typically the newer version will run when you manually invoke the program from the command-line (assuming /usr/local/bin or wherever the executables are installed is in your PATH environment variable and appears before the corresponding /usr-prefixed directory, such as /usr/bin).
But there may be some problems with what launchers are created and made accessible through menus or searching. Furthermore, if you have installed more than one version of a library in different places, it can become a bit more complicated to determine which will be used by what software.
If you're not actually using both versions of the program or library, then often you should remove the one that you're not using, although in limited situations you might want to keep a package installed to satisfy dependencies.
However, if for any reason files are overwritten (for example, if the source code is installed in /usr rather than /usr/local):
The package manager will not know anything about how the software it installed was changed. It will think the old version is installed. Bad problems may result. You should avoid this. If you have created this situation, you should uninstall the software you installed from source (usually with sudo make uninstall in the /usr/local/src/program-or-library-name directory), and then uninstall the package or packages that provide the files that were overwritten (as they will not be restored by uninstalling the version installed from source). Then reinstall whatever version you want to have.
As for fulfilling dependencies:
If there is a .deb package that depends on the software you installed from source, and requires the version you installed from source (or higher), that package will not successfully install. (Or, to be more precise, you may be able to "install" it but it will not ever be "configured" so you will not be able to use it.) Dependencies are resolved by what versions of packages are installed, not by what software you actually have.
Similarly, software will at least try to install completely even if you have manually deleted the files provided by packages on which the software being installed depends. (You should not generally try to harness that for any purpose. The package manager operating based on false information is almost always a bad thing.)
Therefore, if you cannot find a package that provides the version of the software you need, you may need to create your own .deb package from the software you've compiled, and install from that package. Then the package manager will know what is going on. Creating a package for your own use, which you don't need to work well on other people's computers, is actually not very hard. (But I feel that may be outside the scope of your question, as it is currently worded.)
| Effect of compiling from source on already installed applications |
1,349,361,022,000 |
I have an application foobar for which someone has written a patch to add a feature I like. How can I use the patch?
|
Patches are usually contained in .diff files, because the patches are created using the diff command.
A patch is a series of insertions and deletions into source code. For this reason, in order to use the patch, you must build the application (e.g., "foobar") from source after applying the patch. So, in steps:
1. Get the source package for foobar.
Most Linux distributions (n.b. patching is not unique to Linux) have "source packages" you can use for this purpose, but since these are heterogeneous, I will only refer to the format of the original source here. The original source is not part of the distro and may be hard to find. A good place to start is Wikipedia, which has articles for many popular applications, and the article should contain a link to a homepage with a source download. You can also search the web yourself, obviously. The source package will be called something like foobar.0.1.tar.bz2. Unpack this -- you now have a directory called foobar.0.1.
2. Add the patch.
Sometimes patches are single files and sometimes they are a set of several files. Copy those into foobar.0.1 and cd foobar.0.1. Next, you need to run the patch command. This reads from standard input, so you want to pipe the .diff file in. The tricky part is determining what to use for the -p option (if there are no instructions with the patch). To do that you need to look at the beginning of the patch file. For example:
--- old/comm.c 2003-09-08 14:25:08.000000000 +0000
+++ new/comm.c 2006-07-07 02:39:24.000000000 +0000
In this case, comm.c is the name of the source file that will be altered. However, notice that there is a directory prepended to it. Since these are not the same directory ("old" vs. "new"), this is a big clue that this part of the path is junk (for our purposes). The purpose of the -p switch (see man patch) is to eliminate this prefix. It takes a number: the number of leading slashes (/) to strip, along with everything before them; in this case we would use -p1 to reduce the path to just plain comm.c.
That presumes comm.c is actually in the same directory, which will be another clue as to whether your interpretation is correct. If both those lines were src/comm.c, and comm.c is actually in the src subdirectory of your build tree, then you need to use -p0 -- beware that not using -p at all will remove ALL slashes. If the path is absolute (i.e., begins with /), that's probably what you want. Now apply the patch:
patch -p1 < patch.diff
The source has now been modified. If there are more .diff files, apply those the same way.
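The -p1 behaviour can be tried on a toy example (the file contents and the patch below are fabricated for illustration):

```shell
# The patch headers name old/comm.c and new/comm.c, but the file lives in the
# current directory, so -p1 strips the first path component before matching.
cd "$(mktemp -d)"
printf 'old line\n' > comm.c
cat > fix.diff <<'EOF'
--- old/comm.c
+++ new/comm.c
@@ -1 +1 @@
-old line
+new line
EOF
patch -p1 < fix.diff
cat comm.c
```

With -p0 instead, patch would look for a file literally named old/comm.c and fail.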
3. Build and install.
This is the normal process you would go through to build something from source -- first ./configure, then make, make check, make install. Before you do the last one, if you already have an existing installation of foobar, decide whether you want to remove or overwrite that or how you are going to deal with the naming conflict. You probably want foobar to refer to your new, patched version, and not the old one.
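For completeness, producing such a .diff in the first place is just diff -u over an old and a new tree (toy files and invented names again):

```shell
# Create a unified diff between an "old" and a "new" copy, as a patch author would.
cd "$(mktemp -d)"
mkdir old new
printf 'one\n' > old/comm.c
printf 'one\ntwo\n' > new/comm.c
diff -u old/comm.c new/comm.c > feature.diff || true   # diff exits 1 when files differ
cat feature.diff
```

The resulting headers ("--- old/comm.c", "+++ new/comm.c") are exactly the kind you inspect in step 2 to pick the right -p level.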
| How do I apply software patches? |
1,349,361,022,000 |
I am trying to build an RPM that targets RHEL4 and 5. Right now I call chcon from %post but multiple Google entries say "that's not how you are supposed to do it" with very limited help on the right way. I've also noticed that fixfiles -R mypackage check says the files are wrong when they are right (as expected; the RPM DB doesn't realize what I want)..
I specifically say RHEL4 because it does not have semanage which seems to be one of the proper ways to do it. (Add a new policy and then run restorecon on your directories in %post.)
I also don't need my own context, just httpd_cache_t on a non-standard directory.
I have also seen "let cpio take care of it" - but then I have a new problem that a non-root RPM building user cannot run chcon on the build directories. I cheated and had sudo in the spec file but that didn't seem to matter anyway.
|
The Fedora Packaging Guidelines have a draft document explaining how to handle SELinux in packages, and they use semanage. Without semanage, it looks like supporting RHEL 4 is going to be a hack, and there's no way around that.
According to the rpm 4.9.0 release notes, there has been some support directly in rpm for managing SELinux policies, but it has historically been broken:
Older versions of RPM supported a %policy directive in spec for attaching SELinux policies into the package header, but this was never really usable for anything. Any uses of the %policy directive in specs should be removed as this unused directive prevents building with RPM 4.9.0 and later, while not doing anything for older versions.
Starting with RPM 4.9.0, SELinux policy packaging is supported via new %sepolicy section in the spec. Such packages cannot be built, but are installable on older RPM versions too (but the included policies will not be used in any way).
I see no mention of file contexts there, and I haven't been able to find any mention of direct file context support (like %attr in the %files section). In any case, it looks like RHEL 6 is only on rpm 4.8.0, so (unless I've missed something) the semanage route is as good as we're going to be able to do at least until RHEL 7.
| What is the proper way to set SELinux context in an RPM .spec? |
1,349,361,022,000 |
I'm running Ubuntu 11.10, which came with kernel version 3.0.0-14. I downloaded and built a kernel from the 3.1.0 branch. After installing the new kernel, I see that my /boot/initrd.img-3.1.0 file is HUGE. It's 114MB, while my /boot/initrd.img-3.0.0-14-generic is about 13MB. I want to get rid of the bloat, which is clearly unnecessary.
When building the new kernel, I copied my /boot/config-3.0.0-14-generic to .config in my build directory, as to keep the configuration of my original kernel. I ran make oldconfig, selected the defaults for all the new options, and then built the kernel.
Looking at the file sizes within each of the initrd cpio archives, I see that all of my .ko modules are larger in size in the 3.1.0 ramdisk, than the 3.0.0-14. I assumed there was an unnecessary debug flag checked in my config file, but I don't see anything different that was not already enabled in the 3.0.0-14 config file.
My /boot/config-3.0.0-14-generic is here:
http://pastebin.com/UjH7nEqd
And my /boot/config-3.0.1 is here:
http://pastebin.com/HyT0M2k1
Can anyone explain where all the unnecessary bloat is coming from?
|
When building the kernel and modules using make oldconfig, make, and make modules_install, the resulting module files contain debugging symbols.
Use the INSTALL_MOD_STRIP option for removing debugging symbols:
make INSTALL_MOD_STRIP=1 modules_install
Similarly, for building the deb packages:
make INSTALL_MOD_STRIP=1 deb-pkg
| Why is my initial ramdisk so big? |
1,415,789,761,000 |
I want to build a debian package with git build package.(gbp)
I passed all steps, and at least, when I entered gbp buildpackage, This error appeared.
what does it mean?
and what should I do?
gbp:error: upstream/1.5.13 is not a valid treeish
|
The current tag/branch you are on is not a Debian source tree: it doesn't contain a debian/ directory at its root. This is evident because you are using an "upstream/" branch, a name utilized to upload the pristine source tree to git repositories. Try using the branch stable, testing or unstable, or any branch that starts with debian, or a commit tagged using the Debian versioning scheme.
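One way to see whether the name gbp complains about even exists as a treeish is git rev-parse; here is a toy repository built just for the demonstration (all names invented):

```shell
# A ref is a "valid treeish" if git can resolve it to a tree or commit.
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you
echo hi > file && git add file && git commit -qm init
git tag upstream/1.0
git rev-parse --verify -q upstream/1.0 >/dev/null && echo "valid treeish"
git rev-parse --verify -q upstream/9.9 >/dev/null || echo "not a valid treeish"
```

If the upstream/... tag gbp mentions fails this check, the tag simply doesn't exist in your clone yet.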
| what does " gbp:error: upstream/1.5.13 is not a valid treeish" mean? |
1,415,789,761,000 |
I put up a bug report and have been asked to apply the patch therein and see if it works. I have tried to find documentation about how to go about doing it but is unclear.
The closest I have been able to figure out is http://www.thegeekstuff.com/2014/12/patch-command-examples/ .
I downloaded the latest source via apt-get under a directory named dpkg -
$ sudo apt-get source dpkg
This is how it looks -
[shirish@debian] - [~/games/dpkg] - [5692]
└─[$] pwd
/home/shirish/games/dpkg
That is the path and here it is -
┌─[shirish@debian] - [~/games/dpkg] - [5691]
└─[$] ls
d-m-h-verbose-version-check.patch dpkg-1.18.15 dpkg_1.18.15.dsc dpkg_1.18.15.tar.xz
I would like to make a backup and do a dry run before applying the patch, but need to know what commands to use and what output to expect. Also, I usually use -
$ fakeroot debian/rules build
$ fakeroot debian/rules binary
to build a local deb package. Is this good enough?
Update 1 - That didn't work -
┌─[shirish@debian] - [~/games/dpkg] - [5710]
└─[$] cd dpkg-1.18.15
┌─[shirish@debian] - [~/games/dpkg/dpkg-1.18.15] - [5711]
└─[$] dch -n "Apply d-m-h fix from #844701."
dch: fatal error at line 569:
debian/changelog is not writable!
So do I need to use sudo to get write access, or use chmod to change the permissions? I want to do it the right way.
Update 2 - Redid the whole thing, the right way this time, stuck at the patching stage -
┌─[shirish@debian] - [~/games] - [5750]
└─[$] apt-get source dpkg
Reading package lists... Done
NOTICE: 'dpkg' packaging is maintained in the 'Git' version control system at:
https://anonscm.debian.org/git/dpkg/dpkg.git
Please use:
git clone https://anonscm.debian.org/git/dpkg/dpkg.git
to retrieve the latest (possibly unreleased) updates to the package.
Skipping already downloaded file 'dpkg_1.18.15.dsc'
Skipping already downloaded file 'dpkg_1.18.15.tar.xz'
Need to get 0 B of source archives.
dpkg-source: info: extracting dpkg in dpkg-1.18.15
dpkg-source: info: unpacking dpkg_1.18.15.tar.xz
Then -
┌─[shirish@debian] - [~] - [5755]
└─[$] cp d-m-h-verbose-version-check.patch games/dpkg-1.18.15
Then -
┌─[shirish@debian] - [~/games/dpkg-1.18.15] - [5758]
└─[$] ls
ABOUT-NLS ChangeLog configure debian dpkg-split m4 NEWS run-script t-func
aclocal.m4 ChangeLog.old configure.ac d-m-h-verbose-version-check.patch dselect Makefile.am po scripts THANKS
AUTHORS check.am COPYING doc get-version Makefile.in README src TODO
build-aux config.h.in data dpkg-deb lib man README.l10n t utils
and then -
┌─[shirish@debian] - [~/games/dpkg-1.18.15] - [5757]
└─[$] patch < ./d-m-h-verbose-version-check.patch
(Stripping trailing CRs from patch; use --binary to disable.)
can't find file to patch at input line 5
Perhaps you should have used the -p or --strip option?
The text leading up to this was:
--------------------------
|diff --git i/scripts/dpkg-maintscript-helper.sh w/scripts/dpkg-maintscript-helper.sh
|index f20d82647..8db4a4088 100755
|--- i/scripts/dpkg-maintscript-helper.sh
|+++ w/scripts/dpkg-maintscript-helper.sh
--------------------------
File to patch:
Now I'm confused about what to do.
Update 3 -
Did it with -p1 parameter and did the remaining steps -
Sharing the last 5 odd lines of the build -
dh_md5sums -i
dh_builddeb -i
dpkg-deb: building package 'dpkg-dev' in '../dpkg-dev_1.18.15+nmu1_all.deb'.
dpkg-deb: building package 'libdpkg-perl' in '../libdpkg-perl_1.18.15+nmu1_all.deb'.
dpkg-genchanges >../dpkg_1.18.15+nmu1_amd64.changes
dpkg-genchanges: info: including full source code in upload
dpkg-source --after-build dpkg-1.18.15+nmu1
dpkg-source: info: using options from dpkg-1.18.15+nmu1/debian/source/options: --compression=xz
dpkg-buildpackage: info: full upload; Debian-native package (full source is included)
and have been able to install the newest one -
┌─[shirish@debian] - [~/games] - [5812]
└─[$] sudo dpkg -i dpkg_1.18.15+nmu1_amd64.deb dpkg-dev_1.18.15+nmu1_all.deb dpkg-dbgsym_1.18.15+nmu1_amd64.deb dselect_1.18.15+nmu1_amd64.deb dselect-dbgsym_1.18.15+nmu1_amd64.deb libdpkg-perl_1.18.15+nmu1_all.deb libdpkg-dev_1.18.15+nmu1_amd64.deb
D000001: ensure_diversions: new, (re)loading
D000001: ensure_statoverrides: new, (re)loading
(Reading database ... 1207494 files and directories currently installed.)
Preparing to unpack dpkg_1.18.15+nmu1_amd64.deb ...
D000001: process_archive oldversionstatus=installed
D000001: cmpversions a='0:1.18.15+nmu1' b='0:1.16.1' r=2
D000001: cmpversions a='0:1.18.15+nmu1' b='0:1.16.2' r=2
D000001: ensure_diversions: same, skipping
Unpacking dpkg (1.18.15+nmu1) over (1.18.10) ...
D000001: cmpversions a='0:1.18.15+nmu1' b='0:1.16.2' r=2
D000001: ensure_diversions: same, skipping
D000001: process_archive updating info directory
D000001: generating infodb hashfile
Preparing to unpack dpkg-dev_1.18.15+nmu1_all.deb ...
D000001: process_archive oldversionstatus=unpacked but not configured
D000001: ensure_diversions: same, skipping
Unpacking dpkg-dev (1.18.15+nmu1) over (1.18.15+nmu1) ...
D000001: process_archive updating info directory
D000001: generating infodb hashfile
Preparing to unpack dpkg-dbgsym_1.18.15+nmu1_amd64.deb ...
D000001: process_archive oldversionstatus=unpacked but not configured
Unpacking dpkg-dbgsym (1.18.15+nmu1) over (1.18.15+nmu1) ...
D000001: process_archive updating info directory
D000001: generating infodb hashfile
Preparing to unpack dselect_1.18.15+nmu1_amd64.deb ...
D000001: process_archive oldversionstatus=installed
D000001: ensure_diversions: same, skipping
Unpacking dselect (1.18.15+nmu1) over (1.18.15+nmu1) ...
D000001: process_archive updating info directory
D000001: generating infodb hashfile
Preparing to unpack dselect-dbgsym_1.18.15+nmu1_amd64.deb ...
D000001: process_archive oldversionstatus=installed
Unpacking dselect-dbgsym (1.18.15+nmu1) over (1.18.15+nmu1) ...
D000001: process_archive updating info directory
D000001: generating infodb hashfile
Preparing to unpack libdpkg-perl_1.18.15+nmu1_all.deb ...
D000001: process_archive oldversionstatus=unpacked but not configured
Unpacking libdpkg-perl (1.18.15+nmu1) over (1.18.15+nmu1) ...
D000001: process_archive updating info directory
D000001: generating infodb hashfile
Preparing to unpack libdpkg-dev_1.18.15+nmu1_amd64.deb ...
D000001: process_archive oldversionstatus=installed
Unpacking libdpkg-dev:amd64 (1.18.15+nmu1) over (1.18.15+nmu1) ...
D000001: process_archive updating info directory
D000001: generating infodb hashfile
D000001: process queue pkg dpkg:amd64 queue.len 6 progress 1, try 1
Setting up dpkg (1.18.15+nmu1) ...
D000001: deferred_configure updating conffiles
D000001: ensure_diversions: same, skipping
D000001: process queue pkg dpkg-dev:all queue.len 5 progress 1, try 1
D000001: process queue pkg dpkg-dbgsym:amd64 queue.len 5 progress 2, try 1
Setting up dpkg-dbgsym (1.18.15+nmu1) ...
D000001: deferred_configure updating conffiles
D000001: process queue pkg dselect:amd64 queue.len 4 progress 1, try 1
Setting up dselect (1.18.15+nmu1) ...
D000001: deferred_configure updating conffiles
D000001: process queue pkg dselect-dbgsym:amd64 queue.len 3 progress 1, try 1
Setting up dselect-dbgsym (1.18.15+nmu1) ...
D000001: deferred_configure updating conffiles
D000001: process queue pkg libdpkg-perl:all queue.len 2 progress 1, try 1
Setting up libdpkg-perl (1.18.15+nmu1) ...
D000001: deferred_configure updating conffiles
D000001: process queue pkg libdpkg-dev:amd64 queue.len 1 progress 1, try 1
Setting up libdpkg-dev:amd64 (1.18.15+nmu1) ...
D000001: deferred_configure updating conffiles
D000001: process queue pkg dpkg-dev:all queue.len 0 progress 1, try 1
Setting up dpkg-dev (1.18.15+nmu1) ...
D000001: deferred_configure updating conffiles
Processing triggers for man-db (2.7.5-1) ...
D000001: ensure_diversions: same, skipping
D000001: cmpversions a='0:2016.03.30' b='0:2016.05.24' r=-2
D000001: cmpversions a='0:1.18.15+nmu1' b='0:1.16' r=2
D000001: cmpversions a='0:1.18.15+nmu1' b='0:1.16' r=2
D000001: cmpversions a='0:1.18.15+nmu1' b='0:1.16' r=2
And lastly -
┌─[shirish@debian] - [/usr/share/doc/dpkg] - [5815]
└─[$] zcat changelog.Debian.gz | less
dpkg (1.18.15+nmu1) UNRELEASED; urgency=medium
* Non-maintainer upload.
* Apply d-m-h fix from #844701
-- shirish <shirish@debian> Mon, 21 Nov 2016 01:04:02 +0530
dpkg (1.18.15) unstable; urgency=medium
This means that it got installed correctly.
[$] apt-show-versions dpkg dpkg-dbgsym dpkg-dev libdpkg-perl libdpkg-dev dselect dselect-dbgsym
dpkg:amd64 1.18.15+nmu1 newer than version in archive
dpkg-dbgsym:amd64 1.18.15+nmu1 newer than version in archive
dpkg-dev:all 1.18.15+nmu1 newer than version in archive
dselect:amd64 1.18.15+nmu1 newer than version in archive
dselect-dbgsym:amd64 1.18.15+nmu1 newer than version in archive
libdpkg-dev:amd64 1.18.15+nmu1 newer than version in archive
libdpkg-perl:all 1.18.15+nmu1 newer than version in archive
|
Starting with the situation you have:
cd dpkg-1.18.15
patch -p1 < ../d-m-h-verbose-version-check.patch
will apply the patch. Before building, add a NMU changelog entry (this will avoid having your patched version of dpkg overwritten by apt & co., but will ensure your version is upgraded to the next dpkg release when that's available):
dch -n "Apply d-m-h fix from #844701."
This will rename the current directory (because dpkg is a native package), so you need to change directories again:
cd ../dpkg-1.18.15+nmu1
To build, I tend to use
dpkg-buildpackage -us -uc
That will produce the various .deb files in the parent directory; you can install them using dpkg as usual.
(Calling debian/rules targets explicitly works too; but you shouldn't use fakeroot for debian/rules build, just for debian/rules clean and debian/rules binary.)
Adding a NMU changelog entry also ensures that the source you've downloaded is left untouched, which addresses your backup concerns. It also means that reinstalling version 1.18.15 will restore the Debian version, without your patch.
| how to apply a patch in a debian package? |
1,415,789,761,000 |
Is there any way to have make use multi-threading (6 threads is ideal on my system) system-wide, instead of by just adding -j6 to the command line? So, that if I run make, it acts the same as if I was running make -j6? I want this functionality because I install a lot of packages from the AUR using pacaur (I'm on Arch), so I don't directly run the make command, but I would still like multi-threading to build packages faster.
|
(pacaur uses makepkg, see https://wiki.archlinux.org/index.php/Makepkg )
In /etc/makepkg.conf add
MAKEFLAGS="-j$(expr $(nproc) \+ 1)"
to run #cores + 1 compiling jobs concurrently.
When using bash you can also add
export MAKEFLAGS="-j$(expr $(nproc) \+ 1)"
to your ~/.bashrc to make this default for all make commands, not only those for AUR packages.
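Before committing the line to makepkg.conf or ~/.bashrc, the computed value can be sanity-checked on its own (this assumes GNU coreutils' nproc is available):

```shell
# Compute the job count the same way the answer does: number of cores + 1.
MAKEFLAGS="-j$(expr "$(nproc)" + 1)"
echo "$MAKEFLAGS"      # e.g. -j5 on a 4-core machine
export MAKEFLAGS
```

Note that make itself reads MAKEFLAGS from the environment, which is why exporting it makes every plain make invocation parallel.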
| Use multi-threaded make by default? |
1,415,789,761,000 |
After configuring and building the kernel using make, why don't I have vmlinuz-<version>-default.img and initrd-<version>.img, but only got a huge vmlinux binary (~150MB)?
|
The compressed images are under arch/xxx/boot/, where xxx is the arch. For example, for x86 and amd64, I've got a compressed image at /usr/src/linux/arch/x86/boot/bzImage, along with /usr/src/linux/vmlinux.
If you still don't have the image, check if bzip2 is installed and working (but I guess if that were the problem, you'd get a descriptive error message, such as "bzip2 not found").
Also, the kernel config allows you to choose the compression method, so the actual file name and compression algorithm may differ if you changed that kernel setting.
As others already mentioned, initrds are not generated by the linux compilation process, but by other tools. Note that unless, for some reason, you need external files (e.g. you need modules or udev to identify or mount /), you don't need an initrd to boot.
| vmlinuz and initrd not found after building the kernel? |
1,415,789,761,000 |
Which packages should be rebuilt after upgrading gcc on a gentoo system?
Is it sufficient to run
# emerge -a --oneshot `equery depends gcc |awk '{print " ="$1}'`
as suggested similarly for perl in this FAQ?
|
TL;DR
I have a different take on this as a Gentoo user. While I agree with peterph's approach of "Let the System Decide," I disagree when it comes to an ABI Update. An ABI Update is sometimes a major shift in behavior. In the case of GCC 4.7, the ABI Change was the adoption of the new C++11 Standard, which peterph also pointed out.
Here is why I write this answer. I'm a standards junkie. I started in the web world when there were about 4 different browsers, and a plethora of tags in HTML that were only supported by certain browsers. At the time, all those tags increased confusion, and IMO made work harder. C++ has been standardized for this same reason, in short so that you can compile code that I write, and I can compile code that you write. If we chose not to follow a standard, we lose the freedom to share.
C++98 has been the approved Standard for 13 years. C++11 was ratified by the ISO Committee in 2011, and was completely integrated into GCC 4.7. See the current ISO status, and the new ISO Standard.
Why We Should Feel Privileged as Gentoo Users
As users of a source-based distribution, we have the unique opportunity to shape the future behavior of a package because we compile it before we use it. As such, to prepare for that opportunity, I feel that the following commands should be run, when updating to the new compiler:
emerge -ev system
gcc-config -l && gcc-config *new compiler name*
env-update && source /etc/profile
emerge -1v libtool
emerge -ev system
The first pass through system builds the new compiler, and its dependencies, with the old compiler. The second pass through system rebuilds the new compiler and its dependencies with the new compiler. Specifically, we want to do this so that our build chain takes advantage of the new features of the new compiler, if the build chain packages have been updated also... Some people replace the 2nd pass through system with the world set, although I find this to be overkill, as we don't know which packages already support the new standard, but we do want our build chain to behave sanely.
Doing this to at least the system set prepares us to test every package that we compile against the new standard, because we use a rolling release. In this way, adding -std=c++11 to CXXFLAGS after updating the build chain allows us to test for breakage, and to submit bugs directly either to our bugzilla or upstream to the actual developers, for the simple reason of:
Hey, your package blah blah breaks using the new C++ standard, and
I've attached my build log.
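The CXXFLAGS addition mentioned above would go in /etc/portage/make.conf. A sketch (merge it with whatever flags you already set — and remember this affects every C++ package you build from then on):

```shell
# /etc/portage/make.conf — illustrative; keep your existing flags
CXXFLAGS="${CXXFLAGS} -std=c++11"
```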
I consider this a courtesy to the developers, as they now have time to prepare as the standard becomes more widely adopted, and the old standard is phased out. Imagine the commotion on the developer's part if he received hundreds of bugs, because he or she waited until the standard was phased out...
No other distribution that I know of can use this method as the actual package maintainers exist as middlemen before a patch or update can be used by the respective user community. We do have maintainers, but we also have the ability to use a local portage tree.
Regarding Insightful Thoughts Posted in the Bounty Request
I don't know if the bounty was posted because you all like my insightful, well thought out answers, but in an attempt at the bounty, I'll attempt to answer your insightful, well thought out bounty offering. First off, let me say in response that as a user of a source based distribution, I firmly believe what connects the dots are all the things you've asked for in your bounty request. Someone can be a great coder, but have crappy care for software. In the same way, there are people that are crappy coders that have great care for software.
Before I came here, I was an avid poster, over at the Gentoo Forums. I finally realized when I started coming here that everyone has some degree of some talent they can use. It's what they choose to do with it that makes the contributory difference. Some of us are Great writers (not I), so if you want to contribute to some project, but you don't or can't write code, or fix bugs, remember that great writers can write great documentation, or great Wiki Articles.
The standard is there for another reason: In a Community, certain rules are expected of its members. Follow that statement here too. If I submit a fix, patch, enhancement, etc. and there are no standards, the patch will only work in the situations that I deem important, i.e. if I'm using whizbang compiler 2.0, and the patch is built against whizbang compiler 1.0, it will fail. Since the effort is for a community, the community expects everything to work in most situations, so instead of forcing all users to upgrade to compiler 2, I can stipulate in a standard:
This package chooses to allow Backwards Compatibility with Whizbang Compiler 1.0
In this way, as a developer, crappy coder or not, I know that I must use or at least test against Compiler Version 1.0. As a user on the other hand, I can choose what I want to do. If I'm unhappy, I can request a patch, by submitting a bug, or the other extreme of "This software is a piece of crap!," and do nothing. Regardless, the user, and the developer understand the standard because it's been written.
Bridging the gap takes action of some form on a user's part, and that requires all the things you asked me and others to comment on, and we must rely on the user community and their talents of all forms to bridge that gap. If you choose to be one of the contributing users, I applaud you. For those of you who choose to be inactive, remember that if you want something fixed, the active ones need your input. So I'm telling you: don't be shy about submitting a bug, or telling us we need to update documentation, and if we're rude, tell us, or find someone else, until you find your area of expertise.
Other Interesting Reading Related to This Topic
The Biggest Changes in C++11 (and Why You Should Care)
C++0x/C++11 Support in GCC
News, Status & Discussion about Standard C++
| Packages to rebuild after upgrading gcc on gentoo systems |
1,415,789,761,000 |
I'm building a custom Android kernel based on the Cyanogenmod ROM's kernel source code. I'd like to add folders and files into the root folder of the OS (/). For instance, after having compiled my kernel, I'd like for an extra folder named toto (absolute path = /toto) to be created.
I really have no idea which files have to be edited and how to do the work.
Note: If you're an Android user (not a ROM developer) who wants to add files to your rootfs, please see the relevant Android.SE question instead.
|
On Android, like on many Linux-based systems, the kernel first mounts an initramfs on /. The initramfs is stored in RAM; it is loaded from a CPIO archive which is stored together with the kernel itself (or in some other place where the bootloader can find it).
Most desktop Linux systems have a small initramfs which contains just enough programs and configuration files to mount the real root filesystem, which is then mounted on /, replacing the initramfs. Android, like some embedded Linux systems, keeps the initramfs mounted forever. Android's initramfs contains only /init, adbd and a few configuration files.
For Cyanogenmod, you can find build instructions in the porting guide. You want to copy more files to the ramdisk (the initramfs image, in Android terminology), so you need to add them to the PRODUCT_COPY_FILES list in the device_*.mk makefile for your device.
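As a sketch (the vendor, device, and file names below are hypothetical — adapt them to your device tree), such an entry could look like this. Each entry is source-path-in-tree:destination-path-in-build-output, and destinations under the root/ prefix end up in the ramdisk, i.e. under / at runtime; double-check that convention against your tree:

```make
# device_<codename>.mk — hypothetical example entry
PRODUCT_COPY_FILES += \
    device/vendor/codename/rootdir/toto/readme.txt:root/toto/readme.txt
```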
| How to compile extra files into the root directory of an Android ROM |
1,415,789,761,000 |
I know that I can interrupt a make process anytime without having to recompile the entire source tree again. As I know, make only compiles a target if it's not compiled yet, or the source code is modified after the last compilation.
But if I interrupt make, there will surely be one or more (depending on the concurrency level) half-ready binaries. What does it do with them the next time I run make? Or does it finish the current target when I press Ctrl+C to avoid partly compiled binaries?
|
In simple terms, you can think of make as having a (possibly large) number of steps, where each step takes a number of files as input and creates one file as output.
A step might be "compile file.c to file.o" or "use ld to link main.o and file.o into program". If you interrupt make with Ctrl+C, then the currently executing step will be terminated, which will (or should) remove the output file it was working on. There are usually no "half-ready binaries" left behind.
When you restart make, it will look at the timestamps of all the input and output files and rerun the steps where:
an input file has a newer timestamp than the output file
the output file does not exist
This generally means that if a step takes a long time to run (it's rare on modern computers, but the ld step for large programs could easily take many minutes when make was designed), then stopping and restarting make will start that step over from the beginning.
The reality of your average Makefile is considerably more complicated than the above description, but the fundamentals are the same.
| How does make continue compilation? |
1,415,789,761,000 |
I tried to install the VirtualBox Guest Additions module in a VM guest running CentOS but I get this error message when everything else was okay:
building the main Guest Additions module Failed
Since I'm very new to CentOS and VirtualBox, I have no idea how to solve this and wasn't able to find any solution searching the internet (the only post I found didn't help me).
Here is the log:
/usr/src/vboxguest-4.1.14/vboxguest/build_in_tmp: line 55: make: command not found
Creating user for the Guest additions.
Creating udev rule for the Guest additions kernel module
|
You lack the make command. Make is a utility that is often used to build programs from source; it runs the compiler on every source file in the right order. You need to install the make package, and possibly others: the C compiler, and the kernel headers (files generated during the compilation of the Linux kernel, that are necessary to compile third-party modules).
I hardly ever use CentOS, but I think the right command is:
yum install gcc make kernel-devel
or (will install more than you need)
yum groupinstall "Development Tools"
You may need to install other packages as well.
You need to run this command as root; depending on whether you use su or sudo:
su -c 'yum install …'
sudo yum install …
| How to solve "building the main Guest Additions module Failed" |
1,415,789,761,000 |
I recently bought a Raspberry Pi. I have already configured it, and I installed a cross compiler for ARM on my desktop (amd64). I compiled a simple "hello world" program and then copied it from my desktop to my Pi with scp ./hello [email protected]:~/hello.
After login in my Pi I run ls -l hello and I get a normal response:
-rwxr-xr-x 1 david david 6774 Nov 16 18:08 hello
But when I try to execute it, I get the following:
david@raspberry-pi:~$ ./hello
-bash: ./hello: No such file or directory
david@raspberry-pi:~$ file hello
hello: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0x6a926b4968b3e1a2118eeb6e656db3d21c73cf10, not stripped
david@raspberry-pi:~$ ldd hello
not a dynamic executable
|
If ldd says it is not a dynamic executable, then it was compiled for the wrong target.
Obviously you did cross-compile it, as file says it is a 32-bit ARM executable. However, there's more than one "ARM" architecture, so possibly your toolchain was configured incorrectly.
If you are using crosstool-NG, have a look at the .config for the value of CT_ARCH_ARCH. For the raspberry pi, it should be "armv6j"1 -- or at least, that's what's working for me. There are other specifics, but I think that should be enough. Unfortunately, if it's wrong, you now have to rebuild.
IMO getting a cross-compiler toolchain to work can be tedious and frustrating, but, presuming the host is not a significant factor (it shouldn't be), in this case it can be done. Crosstool-NG uses a menuconfig-style TUI configurator, so if you end up having to try multiple builds, write down your choices each time so you know what worked.
1 I believe armv7 is a much more common arch (lots of phones and such), so if you are just using something you believe is a generic ARM cross-compiler, that's probably the issue. These numbers are confusing as, e.g., the pi's processor is an ARM11, but (as per that page), the ARM11 family of processors uses the ARMv6 architecture -- i.e. ARM11 is an implementation of ARMv6.
| “No such file or directory” when executing a cross-compiled program on a Raspberry Pi |
1,415,789,761,000 |
I wrote a bash script, and I executed it without compiling it first. It worked perfectly. It can work with or without permissions, but when it comes to C programs, we need to compile the source code. Why?
|
Shell scripts aren't compiled, they're interpreted: the shell interprets scripts one command at a time, and figures out every time how to execute each command. That makes sense for shell scripts since they spend most of their time running other programs anyway.
C programs on the other hand are usually compiled: before they can be run, a compiler converts them to machine code in their entirety, once and for all. There have been C interpreters in the past (such as HiSoft's C interpreter on the Atari ST) but they were very unusual. Nowadays C compilers are very fast; TCC is so fast you can use it to create "C scripts", with a #!/usr/bin/tcc -run shebang, so you can create C programs which run in the same way as shell scripts (from the users' perspective).
Some languages commonly have both an interpreter and a compiler: BASIC is one example that springs to mind.
You can also find so-called shell script compilers but the ones I've seen are just obfuscating wrappers: they still use a shell to actually interpret the script. As mtraceur points out though a proper shell script compiler would certainly be possible, just not very interesting.
Another way of thinking about this is to consider that a shell's script interpreting capability is an extension of its command-line handling capability, which naturally leads to an interpreted approach. C on the other hand was designed to produce stand-alone binaries; this leads to a compiled approach. Languages which are usually compiled do tend to sprout interpreters too, or at least command-line-parsers (known as REPLs, read-eval-print loops; a shell is itself a REPL).
| Why does C programming need a compiler and shell scripts don't? |
1,415,789,761,000 |
I'd like to try some shell codes and I want to disable linux protections.
I know I could compile using flags but I know another way exists to disable these protections in general I just can't remember. Can you help me?
|
Stack protection is done by the compiler (add some extra data to the stack and stash some away on call, check sanity on return). Can't disable that without recompiling. It's part of the point, really...
| Disable stack protection on Ubuntu for buffer overflow without C compiler flags |
1,415,789,761,000 |
I was trying to compile the libnetfilter_conntrack source from GitHub, since it is required by iptables, which I was also compiling; neither is available in the Hurd software repository. I ended up with an error while configuring libnetfilter_conntrack:
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
checking whether ln -s works... yes
configure: error: Linux only, dude!
and my kernel is,
$ uname -a
GNU debian 0.9 GNU-Mach 1.8+git20190109-486/Hurd-0.9 i686-AT386 GNU
and my ultimate goal was to compile iproute2.
|
In general, it’s not very different; there are lists of known pitfalls on the Hurd’s site and on the Debian wiki. Many projects build fine, or after a few fixes (the most common issue being the absence of PATH_MAX).
However in your case you’ll find it difficult to get anywhere: netfilter and iptables are specific to the Linux kernel, so you won’t be able to use them on the Hurd. You’ll probably have noticed that the iptables package isn’t available on hurd-i386 — there’s usually a good reason for that... iproute2 is also Linux-specific.
On the Hurd you’d use eth-filter instead, see the networking section of the Debian GNU/Hurd configuration guide for details.
| How different is compiling source code in Debian GNU/Hurd from Debian GNU/Linux? |
1,415,789,761,000 |
When compiling Icecast 2, I get this error when I run autogen.sh:
$ autogen.sh
... stuff ommitted
configure: error: XSLT configuration could not be found
What is the reason for it, and how do I fix it?
|
You're missing the XSLT libraries. Try installing the libxslt development package. On Ubuntu or Debian, use
sudo apt-get install libxslt-dev
On Fedora or such it would be
yum install libxslt-devel
| How do I resolve the following error during "configure: error: XSLT configuration could not be found" |
1,415,789,761,000 |
There are some tools inside the kernel source tree,
<kernel source root directory>/tools
perf is one of them.
In Ubuntu, I think the tools inside this folder are available as the package linux-tools.
How can I compile them from source, install them, and run them?
|
What's wrong with the following?
make -C <kernel source root directory>/tools/perf
| How can I compile, install and run the tools inside kernel/tools? |
1,415,789,761,000 |
OpenCV 2.4.2 took 6 hours to compile on the Raspberry Pi and I'd love to package everything up as a deb but I have never done that before. How can I package the compiled files so that they download or include the necessary other libraries?
|
If by OpenCV you mean the computer vision libraries at http://opencv.willowgarage.com/ then they are already packaged for debian by the Debian Science Team.
Your best bet is to download the debianised source package from your nearest debian mirror, modify the debian/rules and/or Makefile or configure etc as needed to compile correctly on the Raspberry Pi and rebuild the packages.
The packaging work is already done, there's no need to do it again... and again and again every time you want to update them.
There's a whole bunch of binary packages, but libopencv-dev is probably what you want to start with http://packages.debian.org/search?keywords=libopencv-dev
| How can I create a .deb package with my compiled OpenCV build? |
1,415,789,761,000 |
I am trying to install R in our cluster (the cluster is Red Hat Enterprise Linux 6), where I don't have root access. I tried:
$wget http://cran.rstudio.com/src/base/R-3/R-3.1.1.tar.gz
$ tar xvf R-3.1.1.tar.gz
$ cd R-3.1.1
$ ./configure --prefix=/home/Kryo/R-3.1.1
But I am getting an error:
configure: error: --with-x=yes (default) and X11 headers/libs are not available
|
According to this thread, you should just install libXt-devel package and you should be fine.
But perhaps you also should install xorg-x11-server-devel and libX11-devel?
That would be:
yum install xorg-x11-server-devel libX11-devel libXt-devel
| "--with-x=yes (default) and X11 headers/libs are not available" |
1,415,789,761,000 |
One can find several threads on the Internet such as this:
http://www.gossamer-threads.com/lists/linux/kernel/972619
where people complain they cannot build Linux with -O0, and are told that this is not supported; Linux relies on GCC optimizations to auto-inline functions, remove dead code, and otherwise do things that are necessary for the build to succeed.
I've verified this myself for at least some of the 3.x kernels. The ones I've tried exit after a few seconds of build time if compiled with -O0.
Is this generally considered acceptable coding practice? Are compiler optimizations, such as automatic inlining, predictable enough to rely on; at least when dealing with only one compiler? How likely is it that future versions of GCC might break builds of current Linux kernels with default optimizations (i.e. -O2 or -Os)?
And on a more pedantic note: since 3.x kernels cannot compile without optimizations, should they be considered technically incorrect C code?
|
You've combined together several different (but related) questions. A few of them aren't really on-topic here (e.g., coding standards), so I'm going to ignore those.
I'm going to start with if the kernel is "technically incorrect C code". I'm starting here because the answer explains the special position a kernel occupies, which is critical to understanding the rest.
Is the Kernel Technically Incorrect C Code?
The answer is that it's definitely "incorrect".
There are a few ways in which a C program can be said to be incorrect. Let's get a few simple ones out of the way first:
A program which doesn't follow the C syntax (i.e., has a syntax error) is incorrect. The kernel uses various GNU extensions to the C syntax. Those are, as far as the C standard is concerned, syntax errors. (Of course, to GCC, they are not. Try compiling with -std=c99 -pedantic or similar...)
A program which doesn't do what it's designed to do is incorrect. The kernel is a huge program and, as even a quick check of its changelogs will prove, surely does not always do what it's designed to do. Or, as we'd commonly say, it has bugs.
What Optimization means in C
[NOTE: This section contains a very loose restatement of the actual rules; for details, see the standard and search Stack Overflow.]
Now for the one that takes more explanation. The C standard says that certain code must produce certain behavior. It also says certain things which are syntactically valid C have "undefined behavior"; an (unfortunately common!) example is to access beyond the end of an array (e.g., a buffer overflow).
Undefined behavior is far-reaching. If a program contains it, even a tiny bit, the C standard no longer cares what behavior the program exhibits or what output a compiler produces when faced with it.
But even if the program contains only defined behavior, C still allows the compiler a lot of leeway. As a trivial example (note: for my examples, I'm leaving out #include lines, etc., for brevity):
void f() {
int *i = malloc(sizeof(int));
*i = 3;
*i += 2;
printf("%i\n", *i);
free(i);
}
That should, of course, print 5 followed by a newline. That's what's required by the C standard.
If you compile that program and disassemble the output, you'd expect malloc to be called to get some memory, the pointer returned stored somewhere (probably a register), the value 3 stored to that memory, then 2 added to that memory (maybe even requiring a load, add, and store), then the memory copied to the stack and also a pointer to the string "%i\n" put on the stack, then the printf function called. A fair bit of work. But instead, what you might see is as if you'd written:
/* Note this isn't hypothetical; gcc 4.9 at -O1 or higher does this. */
void f() { printf("%i\n", 5); }
and here's the thing: the C standard allows that. The C standard only cares about the results, not the way they are achieved.
That's what optimization in C is about. The compiler comes up with a smarter (generally either smaller or faster, depending on the flags) way to achieve the results required by the C standard. There are a few exceptions, such as GCC's -ffast-math option, but otherwise the optimization level does not change the behavior of technically correct programs (i.e., ones containing only defined behavior).
Can You Write a Kernel Using Only Defined Behavior?
Let's continue to examine our example program. The version we wrote, not what the compiler turned it in to. The first thing we do is call malloc to get some memory. The C standard tells us what malloc does, but not how it does it.
If we look at an implementation of malloc aimed at clarity (as opposed to speed), we'd see that it makes some syscall (such as mmap with MAP_ANONYMOUS) to get a large chunk of memory. It internally keeps some data structures telling it which parts of that chunk are used vs. free. It finds a free chunk at least as large as what you asked for, carves out the amount you asked for, and returns a pointer to it. It's also entirely written in C, and contains only defined behavior. If it's thread-safe, it may contain some pthread calls.
Now, finally, if we look at what mmap does, we see all kinds of interesting stuff. First, it does some checks to see if the system has enough free RAM and/or swap for the mapping. Next, it finds some free address space to put the block in. Then it edits a data structure called the page table, and probably makes a bunch of inline assembly calls along the way. It may actually find some free pages of physical memory (i.e., actual bits in actual DRAM modules)---a process which may require forcing other memory out to swap---as well. If it doesn't do that for the entire requested block, it'll instead set things up so that'll happen when said memory is first accessed. Much of this is accomplished with bits of inline assembly, writing to various magic addresses, etc. Note also it also uses large parts of the kernel, especially if swapping is required.
The inline assembly, writing to magic addresses, etc. is all outside the C specification. This isn't surprising; C runs across many different machine architectures—including a bunch that were barely imaginable in the early 1970s when C was invented. Hiding that machine-specific code is a core part of what a kernel (and to some extent C library) is for.
Of course, if you go back to the example program, it becomes clear printf must be similar. It's pretty clear how to do all the formatting, etc. in standard C; but actually getting it on the monitor? Or piped to another program? Once again, a lot of magic done by the kernel (and possibly X11 or Wayland).
If you think of other things the kernel does, a lot of them are outside C. For example, the kernel reads data from disks (C knows nothing of disks, PCIe buses, or SATA) into physical memory (C knows only of malloc, not of DIMMs, MMUs, etc.), makes it executable (C knows nothing of processor execute bits), and then calls it as functions (not only outside C, very much disallowed).
The Relationship Between a Kernel and its Compiler(s)
If you remember from before, if a program contains undefined behavior, so far as the C standard is concerned, all bets are off. But a kernel really has to contain undefined behavior. So there has to be some relationship between the kernel and its compiler, at least enough that the kernel developers can be confident the kernel will work despite violating the C standard. At least in the case of Linux, this includes the kernel having some knowledge of how GCC works internally.
How likely is it to break?
Future GCC versions will probably break the kernel. I can say this pretty confidently as it's happened several times before. Of course, things like the strict aliasing optimizations in GCC broke plenty of things besides the kernel, too.
Note also that the inlining that the Linux kernel is depending on is not automatic inlining, it's inlining that the kernel developers have manually specified. There are various people who have compiled the kernel with -O0 and report it basically works, after fixing a few minor problems. (One is even in the thread you linked to.) Mostly, it's that the kernel developers see no reason to compile with -O0, requiring optimization as a side effect makes some tricks work, and no one tests with -O0, so it's not supported.
As an example, this compiles and links with -O1 or higher, but not with -O0:
void f();
int main() {
int x = 0, *y;
y = &x;
if (*y)
f();
return 0;
}
With optimization, gcc can figure out that f() will never be called, and omits it. Without optimization, gcc leaves the call in, and the linker fails because there isn't a definition of f(). The kernel developers rely on similar behavior to make the kernel code easier to read/write.
| Linux cannot compile without GCC optimizations; implications? [closed] |
1,415,789,761,000 |
Running example C code is a painful exercise unless it comes with a makefile.
I often find myself with a C file containing code that supposedly does something very cool, but for which a first basic attempt at compilation (gcc main.c) fails with—
main.c:(.text+0x1f): undefined reference to `XListInputDevices'
clang-3.7: error: linker command failed with exit code 1 (use -v to see invocation)
—or similar.
I know this means I'm missing the right linker flags, like -lX11, -lXext or -lpthread.
But which ones?
The way I currently deal with this is to find the library header that a function was included from, use Github's search to find some other program that imports that same header, open its makefile, find the linker flags, copy them onto my compilation command, and keep deleting flags until I find a minimal set that still compiles.
This is inefficient, boring, and makes me feel like there must be a better way.
|
The question is how to determine what linker flag to use from inspection of the source file. The example below will work for Debian. The header files are the relevant items to note here.
So, suppose one has a C source file containing the header
#include <X11/extensions/XInput.h>.
We can do a search for XInput.h using, say, apt-file. If you know this header file is contained in an installed package, dpkg -S or dlocate will also work. E.g.
apt-file search XInput.h
libxi-dev: /usr/include/X11/extensions/XInput.h
That tells you that this header file belongs to the development package for libXi (for C libraries, the development packages (normally of the form libname-dev or libname-devel) contain the header files), and therefore you should use the -lXi linker flag.
Similar methods should work for any distribution with a package management system.
| How can I find out what linker flags are needed to use a given C library function? |
1,415,789,761,000 |
When cross compiling the Linux kernel 3.18.10, the compiler adds a .part.<N> suffix at the end of some symbols (see an example below). The number <N> changes when using different defconfigs. Does anybody know under which conditions the compiler adds the part suffix at the end of a symbol?
$ arm-none-linux-gnueabi-readelf -a vmlinux | grep do_kernel_fault
gives
c03a48f8 116 FUNC LOCAL DEFAULT 2 __do_kernel_fault.part.10
|
The symbol ending in .part is a real function symbol, not some kind of function decoration. More precisely, a function ending in .part is a function generated by GCC from a bigger function.
Sometimes, GCC evaluates that some part of the control flow of a big function could easily be inlined, but that it would not be worthwhile to inline the entire huge function. Therefore, it splits the function: it puts the big part in its own function, which receives as its name the original function name plus .part.<some number>, and inlines the rest in other functions.
This is part of an optimization described in the GCC source code, in gcc/ipa-split.c. In gcc-4.8.3 at least (and probably later versions, I'm not able to check right now), it says:
/* The purpose of this pass is to split function bodies to improve
inlining. I.e. for function of the form:
func (...)
{
if (cheap_test)
something_small
else
something_big
}
Produce:
func.part (...)
{
something_big
}
func (...)
{
if (cheap_test)
something_small
else
func.part (...);
}
When func becomes inlinable and when cheap_test is often true, inlining func,
but not fund.part leads to performance improvement similar as inlining
original func while the code size growth is smaller.
The pass is organized in three stages:
1) Collect local info about basic block into BB_INFO structure and
compute function body estimated size and time.
2) Via DFS walk find all possible basic blocks where we can split
and chose best one.
3) If split point is found, split at the specified BB by creating a clone
and updating function to call it.
The decisions what functions to split are in execute_split_functions
and consider_split.
There are several possible future improvements for this pass including:
1) Splitting to break up large functions
2) Splitting to reduce stack frame usage
3) Allow split part of function to use values computed in the header part.
The values needs to be passed to split function, perhaps via same
interface as for nested functions or as argument.
4) Support for simple rematerialization. I.e. when split part use
value computed in header from function parameter in very cheap way, we
can just recompute it.
5) Support splitting of nested functions.
6) Support non-SSA arguments.
7) There is nothing preventing us from producing multiple parts of single function
when needed or splitting also the parts. */
As you may have guessed, this process is entirely controlled by the compiler. The new symbol name is produced by function clone_function_name in gcc/cgraphclones.c. The number added after .part has no particular meaning, it's used only to prevent name clashes. It's a simple counter which is incremented each time GCC creates a new function from some existing one (what the developers of GCC call a 'clone').
You can use the option -fdisable-ipa-fnsplit to prevent the compiler from applying this optimization, or -fenable-ipa-fnsplit to enable it. By default, it's applied at optimization levels -O2 and -O3 and disabled otherwise.
| Function symbol gets '.part' suffix after compilation |
1,415,789,761,000 |
When compiling, errors are often accompanied by a lengthy series of notes (cyan). Is there a g++ flag to disable this, only showing the error itself?
|
The compiler will not do this for you, but (so far...) the compiler developers are following a longstanding (30+ year) convention adapted from other compilers which gives the essential information on the first line, using error: or warning: to mark the message. If you grep stderr for those, you will see the minimal warning/error information.
grep is a good starting point (and "grep -n" output is useful by itself). These messages follow a pattern of filename, line number, message which is common to several tools. I used that in vi-like-emacs here.
Fairly recently (in 2014) gcc/g++ started adding a "calling-stack" to the messages, which gives the extra information. That relies upon a change to the preprocessor to track the line-numbers which can be turned off with a -P option (noted here), but that appears to be incompletely integrated in a form which would suppress the calling-stack.
Using clang would not help much with this; it can be very verbose as well. gcc/g++ development has added a lot of messages as noted here.
| How do I disable g++ displaying notes for errors? |
1,415,789,761,000 |
I'm using Nix to install packages under my home (so no binary packages) on a shared host with limited resources. I'm trying to install git-annex. When building one of its dependencies, haskell-lens, the unit tests consume so much memory that they get killed and the installation fails.
Is there a way to skip the unit tests to get the package installed? I looked at the Cabal builder and haskell-packages.nix and it seems to me that you could disable the tests by setting enableCheckPhase to false. I tried the following in ~/.nixpkgs/config.nix, but the tests are still run:
{
packageOverrides = pkgs: with pkgs; {
# ...other customizations...
haskellPackages = haskellPackages.override {
extension = self : super : {
self.lens = self.disableTest self.lens;
};
};
};
}
|
I see you are trying to use disableTest, found in haskell-packages.nix, to remove testing from the lens package. I would have to do some testing to tell you exactly why it is not meeting your needs.
I have disabled testing in general by overriding the cabal package in config.nix with cabalNoTest. This overrides the cabal package used by the rest of the Haskell packages, turning off testing.
This is how I normally write it:
{
packageOverrides = pkgs: with pkgs; {
# ...other customizations...
haskellPackages = haskellPackages.override {
extension = self : super : {
cabal = pkgs.haskellPackages.cabalNoTest;
};
};
};
}
| Nix: Skipping unit tests when installing a Haskell package |
1,415,789,761,000 |
On system start up I currently see Linux 4.0.0-rc6yy and 4.0.0-rc6yy.old from the bootloader menu. I'm not certain where they came from. I suspect "yy" is arbitrary but can someone explain the ".old" suffix?
Also can someone explain what CONFIG_LOCALVERSION and CONFIG_LOCALVERSION_AUTO is from .config? I've looked them up but am still unclear about their use. Many thanks.
|
When you install your kernel, the responsible script copies the kernel image and initramfs into your /boot directory.
If a previous kernel image with the same name already exists, it is renamed by appending .old to its name.
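A toy reproduction of that rename, in a temporary directory so nothing in a real /boot is touched; the actual logic lives in the distribution's install scripts (e.g. installkernel), not here.

```shell
# Sketch: a pre-existing image of the same name gets an .old suffix
# before the new image is copied in.
dir=$(mktemp -d)
touch "$dir/vmlinuz-4.0.0-rc6yy"               # image from a previous install
if [ -e "$dir/vmlinuz-4.0.0-rc6yy" ]; then     # same name already present?
    mv "$dir/vmlinuz-4.0.0-rc6yy" "$dir/vmlinuz-4.0.0-rc6yy.old"
fi
touch "$dir/vmlinuz-4.0.0-rc6yy"               # the freshly installed image
ls "$dir"                                      # both names now coexist
```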
CONFIG_LOCALVERSION:
Append an extra string to the end of your kernel version.
This will show up when you type uname, for example.
The string you set here will be appended after the contents of
any files with a filename matching localversion* in your
object and source tree, in that order. Your total string can
be a maximum of 64 characters.
That means you can give a special version number or name to your customized kernel if you want. If you enter "-MyNewKernel", your kernel version will look like: Linux 4.0.0-MyNewKernel.
CONFIG_LOCALVERSION_AUTO:
This will try to automatically determine if the current tree is a
release tree by looking for git tags that belong to the current
top of tree revision.
A string of the format -gxxxxxxxx will be added to the localversion
appended after any matching localversion1 files, and after the value set in CONFIG_LOCALVERSION.
1 (The actual string used here is the first eight characters produced by running the command:
$ git rev-parse --verify HEAD
which is done within the script "scripts/setlocalversion".)
That means if it is enabled, the unique SCM (source control management) tag reported by setlocalversion (or .scmversion) is appended to the kernel version, if it exists. For example, if a git tree is found, the revision number will be appended. The result could look like: Linux 4.0.0-MyNewKernel-ga2cfc42. For more info, check scripts/setlocalversion in your source tree.
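Putting the pieces together, the final version string is just a concatenation. In this illustrative sketch the git hash is hypothetical; in a real build it would come from scripts/setlocalversion.

```shell
# Illustrative assembly of the final kernel version string.
base=4.0.0
localver="-MyNewKernel"     # CONFIG_LOCALVERSION
scm="-ga2cfc42"             # added when CONFIG_LOCALVERSION_AUTO=y (hypothetical hash)
echo "Linux ${base}${localver}${scm}"
```

This prints: Linux 4.0.0-MyNewKernel-ga2cfc42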
| Linux kernel version suffix + CONFIG_LOCALVERSION |
1,415,789,761,000 |
I want to build multiple .deb packages from the same source for different versions and distros.
Even if the source code is exactly the same, some files in the debian folder cannot be shared, because of differing dependencies and distro names.
So, I want to make multiple 'debian' directories, one per version/distro, and specify which one to use when building the package.
Is it possible?
For your information, I'm using debuild command to build .deb package.
|
Using different branches is one approach, and I can suggest edits for @mestia’s answer if it seems appropriate (but read on...).
Another approach is to keep different files side-by-side; see Solaar for an example of this.
But both of these approaches have a significant shortcoming: they’re unsuitable for packages in Debian or Ubuntu (or probably other derivatives). If you intend on getting your package in a distribution some day, you should package it in such a way that the same set of files produces the correct result in the various distributions.
For an example of this, have a look at the Debian packaging for Solaar (full disclosure: I did the packaging).
The general idea is to ask dpkg-vendor what the distribution is; so for Solaar, which has different dependencies in Debian and Ubuntu, debian/rules has
derives_from_ubuntu := $(shell (dpkg-vendor --derives-from Ubuntu && echo "yes") || echo "no")
and further down an override for dh_gencontrol to fill in “substvars” as appropriate:
override_dh_gencontrol:
ifeq ($(derives_from_ubuntu),yes)
dh_gencontrol -- '-Vsolaar:Desktop-Icon-Theme=gnome-icon-theme-full | oxygen-icon-theme-complete' -Vsolaar:Gnome-Icon-Theme=gnome-icon-theme-full
else
dh_gencontrol -- '-Vsolaar:Desktop-Icon-Theme=gnome-icon-theme | oxygen-icon-theme' -Vsolaar:Gnome-Icon-Theme=gnome-icon-theme
endif
This fills in the appropriate variables in debian/control:
Package: solaar
Architecture: all
Depends: ${misc:Depends}, ${debconf:Depends}, udev (>= 175), passwd | adduser,
${python:Depends}, python-pyudev (>= 0.13), python-gi (>= 3.2), gir1.2-gtk-3.0 (>= 3.4),
${solaar:Desktop-Icon-Theme}
and
Package: solaar-gnome3
Architecture: all
Section: gnome
Depends: ${misc:Depends}, solaar (= ${source:Version}),
gir1.2-appindicator3-0.1, gnome-shell (>= 3.4) | unity (>= 5.10),
${solaar:Gnome-Icon-Theme}
You can use the test in debian/rules to control any action you can do in a makefile, which means you can combine this with alternative files and, for example, link the appropriate files just before they’re used in the package build.
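The makefile test above can also be sketched as plain shell: dpkg-vendor's exit status drives the yes/no, and on systems without the tool (outside the Debian family) the fallback is "no".

```shell
# Mirror of the debian/rules logic: ask dpkg-vendor whether this distro
# derives from Ubuntu; stderr is silenced so a missing tool just means "no".
derives_from_ubuntu=$( (dpkg-vendor --derives-from Ubuntu 2>/dev/null && echo yes) || echo no )
echo "derives_from_ubuntu=$derives_from_ubuntu"
```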
| Build the same source package for different Debian based distros |
1,415,789,761,000 |
I know there are similar questions out there, but I haven't found a solution nor this exact case. The binary was built on Arch Linux using its GCC 4.7. The package works fine on the build system. The commands below were executed on:
Linux vbox-ubuntu 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
The file in question is located here. It's a Linux 64-bit to Windows 64-bit cross-compiler. Untarring it to ~/ gives a single ~/mingw64 directory which contains everything needed.
When I try to run ~/mingw64/x86_64-w64-mingw32/bin/as this is what I get:
bash: /home/ruben/mingw64/x86_64-w64-mingw32/bin/as: No such file or directory
Running file ~/mingw64/x86_64-w64-mingw32/bin/as gives me:
/home/ruben/mingw64/x86_64-w64-mingw32/bin/as: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x0b8e50955e7919b76967bac042f49c5876804248, not stripped
Running ldd ~/mingw64/x86_64-w64-mingw32/bin/as gives me:
linux-vdso.so.1 => (0x00007fff3e367000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2ceae7e000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2ceaac1000)
/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f2ceb0a8000)
I am truly at a loss. Any help is much appreciated.
EDIT: Some more details:
The build system is Arch Linux (currently glibc 2.16).
The output of ls -l is:
-rwxr-xr-x 2 ruben users 1506464 11 aug 23:49 /home/ruben/mingw64/bin/x86_64-w64-mingw32-as
The output of objdump -p is:
Version References:
required from libz.so.1:
0x0827e5c0 0x00 05 ZLIB_1.2.0
required from libc.so.6:
0x0d696917 0x00 06 GLIBC_2.7
0x06969194 0x00 04 GLIBC_2.14
0x0d696913 0x00 03 GLIBC_2.3
0x09691a75 0x00 02 GLIBC_2.2.5
The output of ldd -v on Ubuntu 12.04 is:
linux-vdso.so.1 => (0x00007fff225ff000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fd525c71000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd5258b4000)
/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fd525e9b000)
Version information:
/home/ruben/mingw64/x86_64-w64-mingw32/bin/as:
libz.so.1 (ZLIB_1.2.0) => /lib/x86_64-linux-gnu/libz.so.1
libc.so.6 (GLIBC_2.7) => /lib/x86_64-linux-gnu/libc.so.6
libc.so.6 (GLIBC_2.14) => /lib/x86_64-linux-gnu/libc.so.6
libc.so.6 (GLIBC_2.3) => /lib/x86_64-linux-gnu/libc.so.6
libc.so.6 (GLIBC_2.2.5) => /lib/x86_64-linux-gnu/libc.so.6
/lib/x86_64-linux-gnu/libz.so.1:
libc.so.6 (GLIBC_2.3.4) => /lib/x86_64-linux-gnu/libc.so.6
libc.so.6 (GLIBC_2.4) => /lib/x86_64-linux-gnu/libc.so.6
libc.so.6 (GLIBC_2.2.5) => /lib/x86_64-linux-gnu/libc.so.6
/lib/x86_64-linux-gnu/libc.so.6:
ld-linux-x86-64.so.2 (GLIBC_2.3) => /lib64/ld-linux-x86-64.so.2
ld-linux-x86-64.so.2 (GLIBC_PRIVATE) => /lib64/ld-linux-x86-64.so.2
The other OSes tested are Fedora 17 (glibc 2.15) and Ubuntu 12.04 (eglibc 2.15). Both zlib and glibc version requirements are met.
|
If I run ldd -v as on my system, I get:
./as: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by ./as)
linux-vdso.so.1 => (0x00007fff89ab1000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f1e4c81f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1e4c498000)
/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f1e4ca6d000)
So yeah, it looks like these binaries are looking for a GLIBC_2.14 symbol, which you are presumably missing on your system. As svenx pointed out, it looks like it's searching for the memcpy@@GLIBC_2.14 symbol. Some more information on why memcpy was given a new version is described in this bug report.
Installing a new version of glibc on your target system should fix it. If you want to try to rebuild the binary to still work on the old version of glibc, you could try tricks like the one listed here. You could also maybe get by with a shim that just provides the specific version of the memcpy symbol that you need, but that gets to be a bit hacky.
After reading your update: you're right, that wasn't your problem. But I think I've found it: your binary is requesting the interpreter /lib/ld-linux-x86-64.so.2, which doesn't exist on Ubuntu 12.04 systems:
$ readelf -a ./as | grep interpreter
[Requesting program interpreter: /lib/ld-linux-x86-64.so.2]
While ldd knew to find it in /lib64 instead, I suppose the kernel doesn't know that when it tries to run the binary and can't find the file's requested interpreter. You could try just running it through the interpreter manually:
$ pwd
/home/jim/mingw64/x86_64-w64-mingw32/bin
$ ./as --version
-bash: ./as: No such file or directory
$ /lib64/ld-linux-x86-64.so.2 ./as --version
GNU assembler (rubenvb-4.7.1-1-release) 2.23.51.20120808
Copyright 2012 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or later.
This program has absolutely no warranty.
This assembler was configured for a target of `x86_64-w64-mingw32'.
I'm not 100% certain this is working correctly -- on my system, running gcc this way gives a segmentation fault. But that's at least a different problem.
| executing binary file: file not found |
1,415,789,761,000 |
At this page you can download a configuration file that lets you target a particular notebook architecture during the compilation of a new 32-bit Linux kernel.
I need a 64 bit version.
What do I have to do? I have compiled a kernel 2-3 times in my life, but I never touched a config file; I have always used an interactive menu.
|
The recommended answer, as the comment suggests, is to save it as .config in the top-level source directory, and then run make xconfig (GUI, easier) or make menuconfig (TUI) on a 64-bit system.
That said, to simply switch from 32-bit to 64-bit without changing anything else, a little editing at the beginning is all that's needed. Compare:
Original (32-bit)
# CONFIG_64BIT is not set
CONFIG_X86_32=y
# CONFIG_X86_64 is not set
CONFIG_OUTPUT_FORMAT="elf32-i386"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/i386_defconfig"
"Converted" 64-bit
CONFIG_64BIT=y
# CONFIG_X86_32 is not set
CONFIG_X86_64=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
Note that CONFIG_X86=y is not touched.
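As a hedged sketch, the hand edit above can be automated with sed. The five-line sample here stands in for the real .config; an actual conversion would pipe the full file through the same expressions instead.

```shell
# Rewrite the five architecture lines from 32-bit to 64-bit form.
printf '%s\n' \
  '# CONFIG_64BIT is not set' \
  'CONFIG_X86_32=y' \
  '# CONFIG_X86_64 is not set' \
  'CONFIG_OUTPUT_FORMAT="elf32-i386"' \
  'CONFIG_ARCH_DEFCONFIG="arch/x86/configs/i386_defconfig"' \
| sed \
  -e 's/^# CONFIG_64BIT is not set$/CONFIG_64BIT=y/' \
  -e 's/^CONFIG_X86_32=y$/# CONFIG_X86_32 is not set/' \
  -e 's/^# CONFIG_X86_64 is not set$/CONFIG_X86_64=y/' \
  -e 's/elf32-i386/elf64-x86-64/' \
  -e 's/i386_defconfig/x86_64_defconfig/'
```

The output is exactly the "converted" block shown above. Running make oldconfig (or xconfig/menuconfig) afterwards is still a good idea, so the kernel build system can resolve any options that only exist on one architecture.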
| How do I convert a kernel .config file from 32-bit to 64-bit? |
1,415,789,761,000 |
I know that apt-get source <package_name> gives you the source package. It contains a debian folder with a file called rules. If I understand it correctly, this file describes how the source package can be transformed into a .deb package, including which compiler flags should be used.
Two questions:
How do I get the compiler flags that are actually used? Is it necessary to run make -n (if this is even possible) or can I get them somehow by parsing the document(s)?
Given the case of a source package from an official repository: are the compiler flags 100% determined by the rules file, or do they depend on the system that the .deb creation is done on? Do I need to 'mirror' the official build system to get the same flags that were used in the official .deb building process? How can I do this?
I learned here that Debian does not have an official policy on which compiler flags are used for the binaries packed into .deb files.
|
The compiler flags used are a function of
the debian/rules file,
the package's build files (since the upstream author may specify flags there too),
the build system used (dh, cdbs etc.),
the default compiler settings.
To see the flags used you effectively need to at least compile the package:
debian/rules build
Trying things like
debian/rules -n
generally won't take you very far; for instance on a dh-based package it will just say
dh build
or something similar; asking dh to show what that would do (with --no-act) will produce
dh_testdir
dh_auto_configure
dh_auto_build
and so on.
There is no fool-proof, easy-to-explain way to determine the build flags by reading debian/rules; you can get some idea by looking for flags set there, and also (where appropriate) by looking for options for dpkg-buildflags (such as DEB_BUILD_MAINT_OPTIONS) and running that. For many packages the easiest way to see what flags were used is to look at the build logs for the packages shipped in the archives, starting from https://buildd.debian.org. For example the logs for coreutils on i386 show that the flags used were -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector-strong -Wformat -Werror=format-security for compilation, and -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wl,--as-needed -Wl,-z,relro for linking (thanks to Faheem Mitha for pointing out the latter!).
| How to get the compiler flags that are used to build the binaries in a (.deb) package? |
1,415,789,761,000 |
I have a corporate Linux server which does not have an internet connection, and sudo access needs to be approved through many levels of hierarchy. I've tried installing git, but I soon realized it needs a whole bunch of dependencies, like a C compiler, which I do not have.
What I need is a method to get git onto my server, maybe installing it as a standalone, taking care of the dependencies that are required, without internet access and preferably without sudo. It's very straightforward on a Windows box, but I am stuck on Linux.
Problem with this solution is that it still requires sudo.
Help please!
|
I found a solution that works. To reiterate my steps:
1) Download the relevant RPM (or here)
2) Copy it to the Linux server and unpack it using (replace the filename as necessary)
rpm2cpio git-1.7.9.6-1.el6.rfx.x86_64.rpm | cpio -idmv
3) Update $PATH:
PATH=$PATH:<your path to git>/usr/bin
4) Now see it work
git --version
| Install Git offline without sudo |
1,415,789,761,000 |
I've seen that make is useful for large projects, especially with confusing dependencies described in a Makefile, and also helping with workflow.
I haven't heard any advantages for using make for small projects.
Are there any?
|
As opposed to what?
Suppose you have a program that you have split into two files,
which you have imaginatively named file1.c and file2.c.
You can compile the program by running
cc file1.c file2.c -o yourprogram
But this requires recompiling both files every time,
even if only one has changed.
You can decompose the compilation steps into
cc -c file1.c
cc -c file2.c
cc file1.o file2.o -o yourprogram
and then, when you edit one of the files, recompile only that file
(and perform the linking step no matter what you changed).
But what if you edit one file, and then the other,
and you forget that you edited both files,
and accidentally recompile only one?
Also, even for just two files,
you’ve got about 60 characters’ worth of commands there.
That quickly gets tedious to type.
OK, sure, you could put them into a script,
but then you’re back to recompiling every time.
Or you could write a really fancy, complicated script that checks
what file(s) had been modified and does only the necessary compilations.
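The heart of such a script is a timestamp comparison, which is exactly what make's dependency rules automate (plus the final link step). A minimal sketch, where file1.c and file2.c are the hypothetical sources from above and the cc commands are only echoed:

```shell
# Recompile a source file only when its object file is missing or older.
for src in file1.c file2.c; do
    obj=${src%.c}.o
    if [ ! -e "$obj" ] || [ "$src" -nt "$obj" ]; then
        echo "would run: cc -c $src"
    fi
done
```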
Do you see where I’m going with this?
| What are the advantages of using `make` for small projects? [closed] |
1,415,789,761,000 |
By building from source do you gain any benefits? Is the code better optimized to your hardware architecture? Is it optimized better in general?
Why would someone choose to build from source rather than using a package management system like APT/yum? If there is some kind of optimization gain, when does that outweigh the benefit of a package management system?
|
Building from source provides the following options which are not available when using a version from a binary package manager.
Compiling from source allows you to:
use processor-specific optimizations
use the very latest version
learn how compilation & linking work (suggestion from @mattdm)
fix bugs, development work
set compile-time options (e.g. include X features in vim)
| What are the advantages of building tools/libs from source? [duplicate] |
1,415,789,761,000 |
I am trying to compile vim-7.3 with all features enabled. I ran configure with
$ ./configure --with-features=huge --enable-gui --enable-cscope
$ make ; make install
When I check the version, it shows several features are still not installed.
Huge version without GUI. Features
included (+) or not (-):
+arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent
-clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
+conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
-dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path
....
Now according to vimdoc
N +browse
N +clientserver
It says
Thus if a feature is marked with "N", it is included in the normal, big and huge versions of Vim.
features.h also says
+huge all possible features enabled.
According to the two resources mentioned above, huge means all features are enabled. Even if not all, at least +clientserver and +browse have to be enabled in huge compilation mode.
But my experience says otherwise. The huge compilation fails to include the browse and clientserver features.
Why is it so? Is my understanding of the documents incorrect?
How to enable clientserver feature?
How to enable gui?
Is it possible to enable all features simply? I tried huge, as features.h suggested it would enable all possible features, but it didn't work.
Thanks for your time.
Edit: Problem solved!
Thanks to all of you guys for your priceless help.
I checked vim73/src/auto/config.log, and it was clear that lots of dependencies were missing. Gert's post gave an idea of which packages are required. I used:
$ yum -yv install libXt.i686 libXt-devel.i686 \
libXpm.i686 libXpm-devel.i686 \
libX11.i686 libX11-common.noarch libX11-devel.i686 \
ghc-cairo-devel.i686 cairo.i686 \
libgnomeui-devel.i686 \
ncurses.i686 ncurses-devel.i686 ncurses-libs.i686 ncurses-static.i686 \
ghc-gtk-devel.i686 gtk+-devel.i686 \
gtk2.i686 gtk2-devel.i686 \
atk-devel.i686 atk.i686 \
libbonoboui.i686 libbonoboui-devel.i686
Some of the packages were already installed, others were not. After that:
$ ./configure --with-features=huge --enable-cscope --enable-gui=auto
$ make ; make install
Now my vim has all the packages associated with huge.
Huge version with GTK2 GUI. Features included (+) or not (-):
+arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent
+clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
+conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff
+digraphs +dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi
...
Thanks
|
According to this building Vim page, you'll need these dependencies on Ubuntu
sudo apt-get install libncurses5-dev libgnome2-dev libgnomeui-dev \
libgtk2.0-dev libatk1.0-dev libbonoboui2-dev \
libcairo2-dev libx11-dev libxpm-dev libxt-dev
Run configure again.
./configure --with-features=huge --enable-gui=gnome2 --enable-cscope
I've tried and all seemed to be enabled.
| Why does my vim-7.3 compile fail to include clientserver? |
1,415,789,761,000 |
I made a mistake and changed Perl from the non-threaded version to the threaded one: I unmerged it first, changed the USE flags to include ithreads, and emerged perl again. Now most packages depending on perl are broken. How do I rebuild them?
|
One way is to use equery's depends function to get the list of things that depend on a package.
# equery depends perl
If you want to rebuild all of them, try something like:
# emerge -a --oneshot `equery depends perl|awk '{print " ="$1}'`
You'll have issues with that if you have packages installed that were removed from the portage tree, so a sync and world update beforehand is a good idea.
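To show what the awk stage feeds to emerge, here it is run over two canned (hypothetical) package atoms instead of live equery output: each atom gets a leading "=" so emerge rebuilds exactly the installed versions.

```shell
# The awk formatting step from the pipeline above, on sample input.
printf '%s\n' 'dev-perl/XML-Parser-2.41' 'dev-vcs/git-1.8.0' \
    | awk '{print " ="$1}'
```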
For this specific case, you might also want to look at app-admin/perl-cleaner - it has specific features to rebuild perl modules.
| On Gentoo, how do I rebuild all packages depended on some other package? |
1,415,789,761,000 |
I'm debugging a closed-source software installer that seems to have some pre-conceived notions about my distribution. The installation aborts after not finding apt-get. The command it attempts to run is:
apt-get -y -q install linux-headers-3.7.5-1-ARCH
I suppose the "package name" comes from /usr/src, where the sole entry is linux-3.7.5-1-ARCH. Does anyone have any educated guess as to which package I should install with pacman?
The headers are probably going to be used to compile drivers for custom hardware.
Here is some relevant text from the install log:
NOTE: Linux drivers must be built against the kernel sources for the kernel
that your Linux OS is currently running. This script automates this task
for you.
NOTE: You must have the Linux OS kernel header source files installed.
If you plan on running the Jungo Debug Monitor, then you may also
need to install "compat-libstdc++" and "libpng3".
Your Linux is currently running the following kernel version:
3.7.5-1-ARCH
|
You're running Arch Linux. According to pacman -Q -i linux-headers, the package "linux-headers" contains "Header files and scripts for building modules for linux kernel". When the Linux kernel gets built, various constants, which might be numbers or strings or what have you, get defined. Some loadable modules need to know those numbers or strings. The files in "linux-headers" should contain all the build-specific numbers, strings, etc. for the kernel, in your case kernel version 3.7.5-1.
You can see what files package "linux-headers" owns: pacman -Q -l linux-headers
You can install package "linux-headers" as root: pacman -S linux-headers
The "apt-get" part of the script seems to assume you're running Debian or a derivative. Install linux-headers with pacman and see how it goes.
| What package could "linux-headers-3.7.5-1-ARCH" mean? |
1,415,789,761,000 |
When I compile a C (no pluses) program using GCC, there are several levels of messages possible, like warning, error, and note. The note messages are useless and distracting. How do I make them go away using the command line? (I don't use any sort of IDE.)
Example: /home/user/src9/AllBack3.c:129:9: note: each undeclared identifier is reported only once for each function it appears in.
|
Pass the -fcompare-debug-second option to gcc.
gcc's internal API has a diagnostic_inhibit_note() function which turns any "note:" messages off, but that is only serviceable via the unexpected -fcompare-debug-second command line switch, defined here.
Fortunately, turning notes off is its only effect, unless the -fcompare-debug or the -fdump-final-insns options are also used, which afaik are only for debugging the compiler itself.
| Want to turn off "note" level messages in GCC |
1,415,789,761,000 |
I've recently applied a one-line patch to drivers/bluetooth/btusb.c in order to enable compatibility with my Bluetooth device. However, whenever I get a kernel upgrade, the patch will be lost until someone backports it (which isn't likely). Is there a way for me to run a script and patch each new kernel upgrade automatically?
DKMS seems like a good solution, but I'm not sure how to set things up. I don't want to recompile the entire Linux kernel every time I get an update, but I'd like to apply that patch to the btusb module, recompile it, and insert it into my kernel on every update. How can I do this using the source obtained from apt-get source linux-source-3.2.0? What files do I need to copy over? The critical make call is make M=drivers/bluetooth modules, but this depends on other kernel utilities to be built first. How can I assemble a DKMS module for this?
Details on how to apply the patch can be found here on Ask Ubuntu.
|
Yes, you should package up your changes as a DKMS module. Building modules for several installed kernels or automatically rebuilding them on an updated kernel is the main feature of DKMS.
The Ubuntu community documentation has a nice article on this topic here.
| Automatically apply module patch and compile kernel when updated? |
1,550,151,359,000 |
I'm trying to experiment with shared objects and found the below snippet on http://www.gambas-it.org/wiki/index.php?title=Creare_una_Libreria_condivisa_(Shared_Library)_.so
gcc -g -shared -Wl,-soname,libprimo.so.0 -o libprimo.so.0.0 primo.o -lc
I browsed through the manpages and online, but I didn't find what the -lc switch does. Can someone tell me?
|
The option is shown as -llibrary (no space) or -l library (with a space), and c is the library argument,
see https://linux.die.net/man/1/gcc
-lc will link libc (-lfoobar would link libfoobar etc.)
General information about options and arguments
UNIX commands often accept option arguments with or without whitespace. If you have an option o which takes an argument arg you can write -o arg or -oarg. On the other hand you can combine options that don't take an argument, e.g. -a -b -c or -abc.
When you see -lc you can only find out from the documentation (man page) if this is the combination of options -l and -c or option -l with argument c or a single option -lc.
See also https://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html
Note: gcc is an exception to this general concept. You cannot combine options for gcc.
| gcc - unknown switches (absent also from the manpage) |
1,550,151,359,000 |
This is about files straight from the compiler, say g++, and the -o (outfile) flag.
If they are binary, shouldn't they just be a bunch of 0's and 1's?
When you cat them, you get unintelligible output but also intact words.
If you run file on them, you get the answer immediately; there seems to be no computation. Do binary files in fact have headers with this kind of information?
I thought a binary executable was just the program just compiled, only in the form of machine instructions that your CPU can instantly and unambiguously understand. If so, isn't that instruction set just bit patterns? But then, what's all the other stuff in the binaries? How do you display the bits?
Also, if you somehow get hold of the manual of your processor, could you write a binary manually, one machine instruction at a time? That would be terribly ineffective, but very fascinating if you got it to work even for a "Hello World!" demo.
|
This Super User question: Why don't you see binary code when you open a binary file with text editor? addresses your first point quite well.
Binary and text data aren't separated: They are simply data. It depends on the interpretation that makes them one or the other. If you open binary data (such as an image file) in a text editor, much of it won't make sense, because it does not fit your chosen interpretation (as text).
Files are stored as zeros and ones (e.g. voltage/no voltage in memory, magnetization/no magnetization on a hard drive). You don't see zeros and ones when cat-ing the files because raw 0/1 sequences wouldn't be of much use to a human; characters make more sense, and a hexdump is better for most purposes (try hexdump on a file).
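You can see those underlying bytes with od, the classic portable hexdump tool:

```shell
# The same bytes rendered as hex instead of text: 'H' is byte 0x48,
# 'i' is 0x69, and the trailing newline is 0x0a.
printf 'Hi\n' | od -An -tx1
```

This prints: 48 69 0a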
Executable files do have a header that describes parameters such as the architecture for which the program was built, and what sections of the file are code and data. This is what file uses to identify the characteristics of your binary file.
Finally: yes, you can write programs in assembly language using CPU opcodes directly. Take a look at Introduction to UNIX assembly programming and the Intel x86 documentation for a starting point.
| Mystery of binary files |
1,550,151,359,000 |
I'm trying to build btrfs-progs from sources, but when I run ./configure I get the error:
checking for BLKID... no
configure: error: Package requirements (blkid) were not met:
No package 'blkid' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables BLKID_CFLAGS
and BLKID_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
blkid is installed in /sbin so, presumably, all its libraries are in the default locations.
What do I need to do tell pkg-config where blkid is or am I actually missing a package?
FYI: I'm running Debian 8 (sid/unstable) with a 4.1.0 kernel built from github.com/torvalds/linux.git sources about a week ago (commit:g6aaf0da).
|
If there are missing packages, you can use apt-cache:
% apt-cache search blkid
libblkid-dev - block device id library - headers and static libraries
libblkid1 - block device id library
or even:
% apt-cache search blkid | grep '\-dev'
libblkid-dev - block device id library - headers and static libraries
We know that we need the development libraries to compile something, therefore do a...
apt-get install libblkid-dev
...as root user.
| "configure: error: Package requirements (blkid) were not met" |
1,550,151,359,000 |
I am trying to install a Linux kernel (3.8.1) from source in a Fedora distribution.
The kernel is a vanilla one.
I follow the kernel's build instructions closely, that is:
make menuconfig
make
sudo make modules_install install
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Everything in /boot seems fine.
I can see System.map, initramfs, and vmlinuz for the newly compiled kernel.
The vmlinuz link points to vmlinuz-3.8.1.
There are multiple other kernels installed including an Ubuntu one.
grub2 recognises all of them and I can boot to each one of them.
When I reboot I see all kernels as menu entries and choose 3.8.1.
Then I see this message:
early console in decompress_kernel
decompressing Linux... parsing ELF ... done
Booting the kernel.
[1.687084] systemd [1]:failed to mount /dev:no such device
[1.687524] systemd [1]:failed to mount /dev:no such device
Solution:
All three posted responses provide the solution. CONFIG_DEVTMPFS was in fact causing the issue. I copied a working kernel's /boot/config-… into the root of the source tree as .config and executed the standard commands for building the kernel also shown above.
|
Easiest way to get a working kernel configuration is to just copy Fedora's .config over and then do a make oldconfig to configure it. The configuration is found at /boot/config-*
| Self-built kernel: failed to mount /dev: No such device |
1,550,151,359,000 |
I am currently trying to build a version of opencv, featuring cuda, on my arch linux computer. For that, I use opencv-cuda-git as base version. Additionally, I modified the PKGBUILD and added additional flags to further adapt opencv to my system.
However, every time I run the build process (makepkg csri), it fails with the following error message:
[ 16%] Building CXX object modules/hdf/CMakeFiles/example_hdf_create_groups.dir/samples/create_groups.cpp.o
cd /home/tobias/builds/opencv-cuda-git/src/opencv/build/modules/hdf && /usr/bin/cmake -E cmake_link_script CMakeFiles/example_hdf_create_groups.dir/link.txt --verbose=1
/bin/g++-6 -std=c++11 -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -ffunction-sections -fdata-sections -msse -msse2 -fvisibility=hidden -fvisibility-inlines-hidden -Wno-invalid-offsetof -O3 -DNDEBUG -DNDEBUG -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -Wl,--gc-sections -rdynamic CMakeFiles/example_hdf_create_groups.dir/samples/create_groups.cpp.o -o ../../bin/example_hdf_create_groups -L/opt/cuda/lib64 ../../lib/libopencv_hdf.so.3.4.0 ../../lib/libopencv_highgui.so.3.4.0 ../../lib/libopencv_videoio.so.3.4.0 ../../lib/libopencv_imgcodecs.so.3.4.0 ../../lib/libopencv_imgproc.so.3.4.0 ../../lib/libopencv_core.so.3.4.0 ../../lib/libopencv_cudev.so.3.4.0
../../lib/libopencv_core.so.3.4.0: undefined reference to `cblas_zgemm'
../../lib/libopencv_core.so.3.4.0: undefined reference to `cblas_sgemm'
../../lib/libopencv_core.so.3.4.0: undefined reference to `cblas_dgemm'
../../lib/libopencv_core.so.3.4.0: undefined reference to `cblas_cgemm'
make[2]: *** [modules/hdf/CMakeFiles/example_hdf_create_groups.dir/build.make:102: bin/example_hdf_create_groups] Error 1
make[2]: Leaving directory '/home/tobias/builds/opencv-cuda-git/src/opencv/build'
make[1]: *** [CMakeFiles/Makefile2:2523: modules/hdf/CMakeFiles/example_hdf_create_groups.dir/all] Error 2
make[1]: Leaving directory '/home/tobias/builds/opencv-cuda-git/src/opencv/build'
make: *** [Makefile:163: all] Error 2
My previous search suggested that this error might occur due to a linking error with cublas. Therefore I tried to add -L/opt/cuda/lib64 and -lcublas to CMAKE_CXX_FLAGS. That made no difference at all.
Another blog suggested using gcc-6 instead of g++-6. That, however, yields another error:
[ 16%] Linking CXX executable ../../bin/example_hdf_create_groups
cd /home/tobias/builds/opencv-cuda-git/src/opencv/build/modules/hdf && /usr/bin/cmake -E cmake_link_script CMakeFiles/example_hdf_create_groups.dir/link.txt --verbose=1
/bin/gcc-6 -std=c++11 -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -ffunction-sections -fdata-sections -msse -msse2 -fvisibility=hidden -fvisibility-inlines-hidden -Wno-invalid-offsetof -O3 -DNDEBUG -DNDEBUG -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -Wl,--gc-sections -rdynamic CMakeFiles/example_hdf_create_groups.dir/samples/create_groups.cpp.o -o ../../bin/example_hdf_create_groups -L/opt/cuda/lib64 ../../lib/libopencv_hdf.so.3.4.0 ../../lib/libopencv_highgui.so.3.4.0 ../../lib/libopencv_videoio.so.3.4.0 ../../lib/libopencv_imgcodecs.so.3.4.0 ../../lib/libopencv_imgproc.so.3.4.0 ../../lib/libopencv_core.so.3.4.0 ../../lib/libopencv_cudev.so.3.4.0
ld: CMakeFiles/example_hdf_create_groups.dir/samples/create_groups.cpp.o: undefined reference to symbol '_ZNSt8ios_base4InitD1Ev@@GLIBCXX_3.4'
/usr/lib/libstdc++.so.6: error adding symbols: DSO missing from command line
make[2]: *** [modules/hdf/CMakeFiles/example_hdf_create_groups.dir/build.make:102: bin/example_hdf_create_groups] Error 1
make[2]: Leaving directory '/home/tobias/builds/opencv-cuda-git/src/opencv/build'
make[1]: *** [CMakeFiles/Makefile2:2523: modules/hdf/CMakeFiles/example_hdf_create_groups.dir/all] Error 2
make[1]: Leaving directory '/home/tobias/builds/opencv-cuda-git/src/opencv/build'
make: *** [Makefile:163: all] Error 2
The whole output of the build process and the customized PKGBUILD file can be found here
Cuda version 9, output of nvidia-smi:
Sun Jan 14 14:44:13 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 387.34                 Driver Version: 387.34                    |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 780 Ti Off | 00000000:01:00.0 N/A | N/A |
| 32% 27C P8 N/A / N/A | 624MiB / 3017MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
|
Okay, to close the question:
The problem was that opencv needs Lapack with both the normal blas and cblas if you are using the ceres-solver. However, the opencv build only links cublas, which apparently lacks support for some needed cblas functions.
One solution to this problem was to manually link cblas by adding
CMAKE_EXE_LINKER_FLAGS=-lcblas to the cmake call in the PKGBUILD file.
It is probably possible to circumvent this problem altogether by building all dependencies manually with forced cublas support. However, that is tedious and not always possible, since cublas is only a partial port.
Thanks again to Philippos, who helped me narrow the problem down.
| Arch Linux: problems building opencv with cuda; libopencv_core.so.3.4.0: undefined reference to `cblas_dgemm' |
1,550,151,359,000 |
I have compiled openvpn from source, running openvpn --version returns:
OpenVPN 2.4.4 x86_64-unknown-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Nov 19 2017
library versions: OpenSSL 1.0.2g 1 Mar 2016, LZO 2.08
And created a /etc/openvpn/server.conf file with some basic settings. However, when I try to start it with sudo systemctl start openvpn@server it returns
Failed to start [email protected]: Unit [email protected] not found.
And sudo systemctl status openvpn returns:
● openvpn.service
Loaded: masked (/dev/null; bad)
Active: inactive (dead) since Sun 2017-11-19 14:21:06 HKT; 4 days ago
Main PID: 1502 (code=exited, status=0/SUCCESS)
Which makes me think that openvpn service is not even registered.
I have checked /lib/systemd/system/; it doesn't have an openvpn.service file, but /etc/systemd/system/ does. As I understand it, this is because I compiled from source instead of running apt-get install openvpn?
Can anyone suggest how should I add self-compiled openvpn as a service?
First time compiling from source, so any advice/tips much appreciated!
EDIT 1:
I can start openvpn server and connect clients to it with (only service doesn't seem to work):
sudo openvpn /etc/openvpn/server.conf
|
Made it work by manually creating two files in /lib/systemd/system.
The first one is openvpn.service:
[Unit]
Description=OpenVPN service
After=network.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecReload=/bin/true
WorkingDirectory=/etc/openvpn
[Install]
WantedBy=multi-user.target
and second is [email protected]:
[Unit]
Description=OpenVPN connection to %i
PartOf=openvpn.service
ReloadPropagatedFrom=openvpn.service
Before=systemd-user-sessions.service
Documentation=man:openvpn(8)
Documentation=https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage
Documentation=https://community.openvpn.net/openvpn/wiki/HOWTO
[Service]
PrivateTmp=true
KillMode=mixed
Type=forking
ExecStart=/usr/local/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/%i.conf --writepid /run/openvpn/%i.pid
PIDFile=/run/openvpn/%i.pid
ExecReload=/bin/kill -HUP $MAINPID
WorkingDirectory=/etc/openvpn
ProtectSystem=yes
CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SYS_CHROOT CAP_DAC_READ_SEARCH CAP_AUDIT_WRITE
LimitNPROC=10
DeviceAllow=/dev/null rw
DeviceAllow=/dev/net/tun rw
[Install]
WantedBy=multi-user.target
After creating them, do sudo systemctl daemon-reload to reload the new changes.
Generally, the files are the same as if openvpn had been installed from the official repo; the only difference is that ExecStart=/usr/sbin/openvpn should be ExecStart=/usr/local/sbin/openvpn, pointing to the locally compiled OpenVPN.
Edit:
If you use openvpn 2.4+, remove PIDFile=/run/openvpn/%i.pid and --writepid /run/openvpn/%i.pid from the second file, as they prevent the server from starting on boot. Found it here
| Self-compiled OpenVPN won't start from systemd |
1,550,151,359,000 |
I am trying to work with gtk which is located at /usr/include/gtk-3.0/gtk/ .., but all of the header files in the toolkit have #include <gtk/gtk.h>.
Aside from adding /usr/local/gtk-3.0 to PATH or adding gtk-3.0 to all the include preprocessors, what other options does one have with this?
|
Adding the appropriate directory to your include path is exactly what you're supposed to do in this case, only you're supposed to do it by pkg-config. Accessing the files directly using full pathnames is unsupported.
Add something like this to your Makefile:
CFLAGS += `pkg-config --cflags gtk+-3.0`
LIBS += `pkg-config --libs gtk+-3.0`
This will automatically add the correct compiler and linker options for the current system.
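To see what is happening under the hood, here is a small demonstration with a made-up .pc file (the mylib name, its paths, and /tmp/pcdemo are all hypothetical):

```shell
# Create a throwaway .pc file describing a hypothetical library "mylib".
mkdir -p /tmp/pcdemo
cat > /tmp/pcdemo/mylib.pc <<'EOF'
Name: mylib
Description: demo library
Version: 1.0
Cflags: -I/opt/mylib/include
Libs: -L/opt/mylib/lib -lmylib
EOF
# Point pkg-config at it and ask for the flags, just as the Makefile lines do.
PKG_CONFIG_PATH=/tmp/pcdemo pkg-config --cflags --libs mylib
```

On a real system, the gtk+-3.0 query resolves the gtk-3.0 include directory in exactly the same way, via the gtk+-3.0.pc file shipped by the development package.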
| Compiler cannot find header file, do I add the directory to PATH? |
1,550,151,359,000 |
I want to build my Linux kernel on my host and use it in my VWware virtual machine. They both use the same Ubuntu kernel now.
On my Host, I do make and make configure. Then, what files should I copy to the target machine, before I do make modules_install and make install?
What other things do I need to do?
|
The 'best' way to do this, is building it as a package. You can then distribute and install it to any Ubuntu machine running the same (major) version.
For building vanilla kernels from source, there's a tool make-kpkg which can build the kernel as packages. Other major advantages: easy reverting by just removing the package, automatic triggers by the package management such as rebuilding DKMS, etc.
The Ubuntu community wiki on Kernel/Compile Alternate Build Method provides a few steps on how to do that.
Basically, it's just the same as building the kernel from the upstream documentation, but instead of having make blindly install it on your system, have it build in a 'fake root' environment and make a package out of it, using
fakeroot make-kpkg --initrd --append-to-version=-some-string-here \
kernel-image kernel-headers
This should produce binary .deb files, which you will be able to transfer to other machines and install using
dpkg -i mykernelfile-image.deb mykernelfile-headers.deb ...
| Build kernel in one machine, install in another |
1,550,151,359,000 |
/usr/src/linux-3.2.1 # make install
scripts/kconfig/conf --silentoldconfig Kconfig
sh /usr/src/linux-3.2.1/arch/x86/boot/install.sh 3.2.1-12-desktop arch/x86/boot/bzImage \
System.map "/boot"
You may need to create an initial ramdisk now.
--
/boot # mkinitrd initrd-3.2.1-12-desktop.img 3.2.1-12-desktop
Kernel image: /boot/vmlinuz-2.6.34-12-desktop
Initrd image: /boot/initrd-2.6.34-12-desktop
Kernel Modules: <not available>
Could not find map initrd-3.2.1-12-desktop.img/boot/System.map, please specify a correct file with -M.
There was an error generating the initrd (9)
See the error during the mkinitrd command. What am I missing?
What does this mean? Kernel Modules: <not available>
OpenSuse 11.3 64 bit
EDIT1:
I did "make modules".
I copied the System.map file from the /usr/src/linux-3.2.1 directory to /boot, now running initrd command gives the following error:
linux-dopx:/boot # mkinitrd initrd-3.2.1.img 3.2.1-desktop
Kernel image: /boot/vmlinuz-2.6.34-12-desktop
Initrd image: /boot/initrd-2.6.34-12-desktop
Kernel Modules: <not available>
Could not find map initrd-3.2.1.img/boot/System.map, please specify a correct file with -M.
Kernel image: /boot/vmlinuz-3.2.1-12-desktop
Initrd image: /boot/initrd-3.2.1-12-desktop
Kernel Modules: <not available>
Could not find map initrd-3.2.1.img/boot/System.map, please specify a correct file with -M.
Kernel image: /boot/vmlinuz-3.2.1-12-desktop.old
Initrd image: /boot/initrd-3.2.1-12-desktop.old
Kernel Modules: <not available>
Could not find map initrd-3.2.1.img/boot/System.map, please specify a correct file with -M.
There was an error generating the initrd (9)
|
You should be using mkinitramfs, not mkinitrd. The actual initrd format is obsolete and initramfs is used instead these days, even though it is still called an initrd. Better yet, just use update-initramfs. Also you need to run make modules_install to install the modules.
| How to create an initrd image on OpenSuSE linux? |
1,550,151,359,000 |
This is my first question and I'm still pretty new so please forgive me if I've missed or botched something, or if this is an obvious solution.
I'm using CentOS 5.8 (yes, I know it's ancient) and trying to test some squid configurations.
From the Squid wiki:
NP: Squid must be built with the --enable-http-violations configure option before building.
I've done some searching to try to determine where I can find which configuration options were specified at package build, but short of reading through all of the CentOS documentation I can't seem to locate where I can find these configuration options.
I know this question may be similar to this one, but in this case the specific squid package may have been custom built, and I'm not sure I have access to the source without jumping through some hoops.
Is there a way I can list the configuration flags with yum or rpm without extracting the spec file?
|
The question is about using RPM metadata to retrieve information about package specific compile time options. The information you're looking for isn't present in the RPM metadata. Either you need to have more than just an RPM (ideally a package build log or some of the files from the build directory), or you need to use a package specific way.
I don't know the location of build information for CentOS, for Fedora it would be:
http://koji.fedoraproject.org/
For squid, the package specific way is fairly easy:
# squid -v
Squid Cache: Version 3.4.5
configure options: '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' '--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' '--enable-eui' '--enable-follow-x-forwarded-for' '--enable-auth' '--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam' '--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory' '--enable-auth-negotiate=kerberos' '--enable-external-acl-helpers=LDAP_group,time_quota,session,unix_group,wbinfo_group' '--enable-storeid-rewrite-helpers=file' '--enable-cache-digests' '--enable-cachemgr-hostname=localhost' '--enable-delay-pools' '--enable-epoll' '--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' '--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' '--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio' '--with-default-user=squid' '--with-dl' '--with-openssl' '--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fpie' 'LDFLAGS=-Wl,-z,relro -pie -Wl,-z,relro -Wl,-z,now' 'CXXFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fpie' 
'PKG_CONFIG_PATH=%{_PKG_CONFIG_PATH}:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
(the above output has been made using a Fedora rawhide version of squid)
For other packages, there may or may not be a command to show build time configuration. For downloading, extracting and examining the SRPM to guess compiled in features from the .spec file, see the end of the other answer.
| How do I determine which configuration options an rpm package is built with? |
1,550,151,359,000 |
I'm using Linux Mint 13 MATE 32bit, I'm trying to build the kernel (primarily for experience and for fun).
For now, I like to build it with the same configuration as precompiled kernel, so firstly I've installed precompiled kernel 3.16.0-031600rc6 from kernel.ubuntu.com, booted to it successfully.
Then I've downloaded 3.16.rc6 kernel from kernel.org, unpacked it, configured it to use config from existing precompiled kernel:
$ make oldconfig
It didn't ask me anything, so the precompiled kernel contains all the necessary information. Then I built it (it took about 6 hours):
$ make
And then installed:
$ sudo make modules_install install
Then I booted into my manually compiled kernel, and it works, though the boot process is somewhat slower. But then I found out that all the binaries (/boot/initrd.img-3.16.0-rc6 and all the *.ko modules in /lib/modules/3.16.0-rc6/kernel) are about 10 times larger than the precompiled versions! For example, initrd.img-3.16.0-rc6 is 160 658 665 bytes, but the precompiled initrd.img-3.16.0-031600rc6-generic is 16 819 611 bytes. Each *.ko module is similarly larger.
Why is this? I haven't specified any special options for the build (I typed exactly the commands mentioned above). How do I build it "correctly"?
|
Despite what file says, it turns out to be debugging symbols after all. A thread about this on the LKML led me to try:
make INSTALL_MOD_STRIP=1 modules_install
And lo and behold, a comparison from within the /lib/modules/x.x.x directory; before:
> ls -hs kernel/crypto/anubis.ko
112K kernel/crypto/anubis.ko
And after:
> ls -hs kernel/crypto/anubis.ko
16K kernel/crypto/anubis.ko
Moreover, the total size of the directory (using the same .config), as reported by du -h, went from 185 MB to 13 MB.
Keep in mind that beyond the use of disk space, this is not as significant as it may appear. Debugging symbols are not loaded during normal runtime, so the actual size of each module in memory is probably identical regardless of the size of the .ko file. I think the only significant difference it will make is in the size of the initramfs file, and the only difference it will make there is in the time needed to uncompress the fs. I.e., if you use an uncompressed initramfs, it won't matter.
strip --strip-all also works, and file reports them correctly as stripped either way. Why it says not stripped for the distro ones remains a mystery.
| Linux kernel manual build: resulting binary is 10 times larger than precompiled binaries |
1,550,151,359,000 |
I'd like to make a "portable" version of Emacs 24.3. I am using some Debian 7 systems, where I don't have root access. Since Debian 7 is missing Emacs 24, I'd like to build a portable version of it, which I can carry with me on a USB thumb drive.
My specific questions are:
Can I make the install prefix flexible, or is it hardwired by configure --prefix=...?
How can I bundle all neccessary .so-files with the installation?
|
I have a few ways to do this, easy ones first:
Making the install prefix flexible is hard; I would just set the install prefix to your home directory, or somewhere that you can access on any of the machines, and use
make install DESTDIR=/path/to/place/where/binaries/should/be/installed
to install them to somewhere other than the prefix.
I personally have my binaries in $HOME/bin, so my commands would look like this:
./configure --prefix=$HOME
Some programs (I know FFmpeg is one) can be built with all libraries compiled into the program, avoiding shared libraries. In ffmpeg's case (and possibly others) the configure flags are --disable-shared --enable-static.
You can use ldd (name-of-binary-file) to check which shared objects it needs, and copy those to your flash drive.
Edit 1
I have just made a way to get only the names of the libraries being linked to, which is very helpful.
ldd binary-name|sed 's/=>.*//'|sed 's/\t//'|sed 's/\ (0x.*//'
will get a list of all the libraries linked to.
Additionally, this will get you only the files that have hardcoded paths:
ldd binary-name|sed 's/=>.*//'|sed 's/\t//'|sed 's/\ (0x.*//'|grep --color=never /
This works because only libraries with hardcoded paths have slashes in their names usually. It gives you an idea what you should look for when doing the next possibility.
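For a concrete, runnable example (assuming a glibc-based system; the list will differ per machine), the same idea against /bin/sh:

```shell
# Print just the library names /bin/sh is linked against, one per line.
ldd /bin/sh | awk 'NF {print $1}'
```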
Edit 2
You can use LD_PRELOAD and/or LD_LIBRARY_PATH to load symbols from manually specified libraries, thus negating the 'hardcoded paths' problem mentioned below.
If your libraries have paths that are hardcoded in, I've heard of a tool called chrpath that can change the runpaths. I have had (limited) success in simply opening my binaries in a hex editor and changing the paths to the shared libraries as long as they are shorter than the ones that were originally compiled in. They must end with the string terminator character. (In C this is almost always 00). To make sure you will have enough space to change the path, I would (on the system I compile it on) set the prefix to something ridiculously long with symlinks, like this if your libraries are in /usr/lib:
sudo mkdir /OH_THIS_IS_A_VERY_VERY_VERY_VERY_VERY_LONG_DIRECTORY_NAME/
sudo ln -s /usr/lib /OH_THIS_IS_A_VERY_VERY_VERY_VERY_VERY_LONG_DIRECTORY_NAME/lib
mkdir destdir
./configure --prefix=/OH_THIS_IS_A_VERY_VERY_VERY_VERY_VERY_LONG_DIRECTORY_NAME
make
make install DESTDIR=$PWD/DESTDIR
($PWD is the current directory, by the way.)
That will give you plenty of room to change the paths. If you have space left over after the actual path, you can just keep adding 00 until you reach the end of the usual space. I have only had to resort to this with binaries I compiled for an Android phone, which had ncurses's path hardcoded into the binary.
One last thing I just found: you can make the location of ld-linux.so.* not hardcoded by adding this (adapt for your system's locations; run something like locate ld-linux and find one similar to the path below):
-Wl,--dynamic-linker=/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
to your LDFLAGS variable.
| How to make a portable Linux app? |
1,550,151,359,000 |
I am building custom Linux kernel packages in an Ubuntu 13.10 amd64 environment using apt-get source linux-image-$(uname -r) and the Debian way: make-kpkg clean; fakeroot make-kpkg --initrd --append-to-version=-custom kernel_image kernel_headers.
Linux headers is larger than image
The result is two .deb files, where the linux-headers- file is 8.2M in size and the resulting linux-image- is only 6.1M. After having a look at what files the linux-headers- package contains, I see that there are loads of headers for items which are disabled in the .config file.
Linux-headers-... content
unused file systems, like /fs/reiserfs/,
unused security modules, like /security/selinux/,
unused includes, like /include/pcmcia/ or /include/sound/,
unused architectures, like /arch/powerpc/, /arch/s390/, /arch/parisc/, /arch/blackfin/, /arch/cris/, /arch/xtensa/, /arch/alpha/, /arch/ia64/, /arch/h8300/, /arch/arm/, etcetera,
unused drivers, like /drivers/leds/, /drivers/eisa/, /drivers/isdn/, /drivers/net/ppp/, /drivers/net/wireless/, etcetera,
unused networking like /net/bluetooth/, /net/wimax/, /net/decnet/, etcetera
What are the options (and how do I use them) for stripping the unused items out of the linux-headers- package and/or otherwise reducing the file size?
|
The linux-headers package is only needed when you want to compile sources, kernels or build other packages.
Package description from debian:
This package provides the architecture-specific kernel header files for
Linux kernel 2.6.32-5-686, generally used for building out-of-tree
kernel modules. These files are going to be installed into
/usr/src/linux-headers-2.6.32-5-686, and can be used for building
modules that load into the kernel provided by the
linux-image-2.6.32-5-686 package.
kernel-headers are also not part of the system runtime, so strictly speaking there is no use case for stripping unused header files from the package. However, the original description qualifies this by saying generally, and limits its usage to building kernel modules. If you are running a custom kernel which was built with kpkg, then you may also relink your /usr/include/{linux,asm,asm-generic} headers to be able to properly compile other sources.
| How to strip unused architectures, drivers, etc from headers when building a custom linux kernel? |
1,550,151,359,000 |
When installing something from source (say, Ruby 1.9.2), what command can I run to get a complete list of all the dependencies needed to install that application? Is this possible?
|
Short answer: not possible. The difficulty of getting the exact dependencies from a source distribution is the reason why package management is so popular on Linux (okay, one of several reasons). In fact, if you just need to get it done and don't care so much how, the most reliable way to get the dependencies will probably be to grab a distro package (gentoo ebuilds are easy to work with) and pull the list of dependencies from that.
Otherwise, if you're lucky, the maintainers will have created a listing of the dependencies in the README file or similar - that'd be the first place to check. Failing that, if it's a C project and you don't mind getting your hands dirty, you can look inside the configure script (or better yet the configure.ac or whatever it's generated from) and figure out the dependencies from that based on what it checks.
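As an illustration of that last approach: the dependency checks in configure.ac follow well-known macro names, so a grep gives a rough dependency list. The tiny tree below is fabricated for the demo; point the grep at a real source directory instead:

```shell
# Fabricate a minimal configure.ac, standing in for a real source tree.
mkdir -p /tmp/depdemo
cat > /tmp/depdemo/configure.ac <<'EOF'
AC_INIT([demo], [1.0])
AC_CHECK_LIB([z], [inflate])
AC_CHECK_HEADERS([openssl/ssl.h])
PKG_CHECK_MODULES([GLIB], [glib-2.0 >= 2.28])
EOF
# Each matching line names a library, header, or pkg-config module the
# configure script will test for -- i.e. a build dependency.
grep -E 'AC_CHECK_LIB|AC_CHECK_HEADERS|PKG_CHECK_MODULES' /tmp/depdemo/configure.ac
```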
| Get list of required libraries when installing something from source |
1,550,151,359,000 |
I need to generate MIPS specific code on my machine when I run my C program. When I simply run,
gcc -O2 -S -c hello.c
On my system, I get the hello.s which seems to generate some assembly code but it doesn't seem to be MIPS specific code. The contents of hello.s file is as below.
.file "hello.c"
.section .rodata.str1.1,"aMS",@progbits,1
.LC0:
.string "Hello world"
.text
.p2align 4,,15
.globl main
.type main, @function
main:
.LFB11:
.cfi_startproc
movl $.LC0, %edi
xorl %eax, %eax
jmp printf
.cfi_endproc
.LFE11:
.size main, .-main
.ident "GCC: (GNU) 4.4.7 20120313 (Red Hat 4.4.7-4)"
.section .note.GNU-stack,"",@progbits
How can I generate the MIPS specific code on my machine?
My machine details are as below.
arch
x86_64
|
Understanding the Basics
From the wiki entry of MIPS architecture, it is described as,
MIPS (originally an acronym for Microprocessor without Interlocked
Pipeline Stages) is a reduced instruction set computer (RISC)
instruction set (ISA) developed by MIPS Technologies (formerly MIPS
Computer Systems, Inc.).
From the wiki entry of the x86-64, it is described as,
x86-64 (also known as x64, x86_64 and AMD64) is the 64-bit version of
the x86 instruction set.
So, as per the arch output in the question, it is evident that I have an x86_64 machine, yet I am trying to produce MIPS-specific code by running the gcc compiler.
This is similar to trying to run a diesel car on a petrol engine. No matter how hard we try, without tweaking the engine, we cannot run the diesel car on a petrol engine.
To describe it in a technical manner, gcc can produce assembly code for a large number of architectures, include MIPS. But what architecture a given gcc instance targets is decided when gcc itself is compiled. The precompiled binary you will find in an Ubuntu system knows about x86 (possibly both 32-bit and 64-bit modes) but not MIPS.
How to compile a C program to MIPS assembly code
Again quoting from the same answer, compiling gcc with a target architecture distinct from the architecture on which gcc itself will be running is known as preparing a cross-compilation toolchain. Or in layman's terms, this cross compilation toolchain is similar to tweaking the petrol engine to run the diesel car.
However, setting up a cross-compilation toolchain is quite a bit of work, so rather than describe how to set that up, I will describe how to install a native MIPS compiler into a MIPS virtual machine. This involves the additional steps of setting up an emulator for the VM and installing an OS into that environment, but will allow you to use a pre-built native compiler rather than compiling a cross compiler.
We will first install qemu to make our system run a virtualized operating system. Again, there are several approaches, like installing a cross-compiled toolchain as discussed here, or using buildroot as suggested in the answer that I linked earlier.
Download the tar ball of qemu from here.
After downloading the tar ball, run the following commands.
bzip2 -d qe*
tar -xvf qe*
cd qemu-*/
./configure
make
make install
Now, after installing qemu on the machine, I tried several methods of netboot for the Debian OS as suggested here and here. But unfortunately I was not able to perform the Debian OS installation using netboot, because the correct mirrors were not available.
So I got an image for Debian which targets the MIPS architecture from here, downloaded the kernel and qemu image from the above link, and performed the steps below.
I started the qemu as below.
qemu-system-mips -M malta -kernel vmlinux-2.6.32-5-4kc-malta -hda
debian_squeeze_mips_standard.qcow2 -append "root=/dev/sda1 console=tty0"
After the debian system came up, I installed the gcc compiler as below.
apt-get update && apt-get upgrade
apt-get install build-essential
Now, I have a perfectly working native gcc compiler inside the MIPS debian virtual machine on qemu, which compiles my C program to MIPS specific assembly code.
Testing
Inside my debian machine, I just put in a sample C hello world program and saved it as hello.c as below.
#include<stdio.h>
int main()
{
printf("Hello World");
}
To generate MIPS architecture code for my hello.c program, I compiled it with the gcc compiler as follows:
gcc -O2 -S -c hello.c
The above command generated a hello.s file containing my MIPS architecture code.
.file 1 "hello.c"
.section .mdebug.abi32
.previous
.gnu_attribute 4, 1
.abicalls
.section .rodata.str1.4,"aMS",@progbits,1
.align 2
$LC0:
.ascii "Hello World\000"
.text
.align 2
.globl main
.set nomips16
.ent main
.type main, @function
main:
.frame $sp,0,$31 # vars= 0, regs= 0/0, args= 0, gp= 0
.mask 0x00000000,0
.fmask 0x00000000,0
.set noreorder
.set nomacro
lui $28,%hi(__gnu_local_gp)
addiu $28,$28,%lo(__gnu_local_gp)
lui $4,%hi($LC0)
lw $25,%call16(printf)($28)
nop
jr $25
addiu $4,$4,%lo($LC0)
.set macro
.set reorder
.end main
.size main, .-main
.ident "GCC: (Debian 4.4.5-8) 4.4.5"
But how do I know that the generated code really is MIPS assembly?
The arch command reports the machine's architecture. On my Debian guest it prints mips, and I have no binutils or cross-compiler toolchains installed on the machine, so the code can only have come from the native compiler.
So the generated assembly code is MIPS-specific.
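A couple of quick, generic checks make the same point (run these inside the guest; the MIPS guest should print "mips"-flavoured output, while a PC host would print x86_64 or similar — the guards are only there in case a tool is missing):

```shell
# The kernel's idea of the architecture:
uname -m
# Same information via coreutils, if present:
if command -v arch >/dev/null; then arch; fi
# The compiler's target triplet, e.g. mips-linux-gnu on the guest:
if command -v gcc >/dev/null; then gcc -dumpmachine; fi
```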
| Generate MIPS architecture assembly code on a X86 machine |
1,550,151,359,000 |
Why would you do
g++ -Wall -I/usr/local/include/thrift *.cpp -lthrift -o something
instead of:
g++ -Wall -I/usr/local/include/thrift -c Something.cpp -o something.o
g++ -Wall -I/usr/local/include/thrift -c Something_server.cpp -o server.o
g++ -Wall -I/usr/local/include/thrift -c your_thrift_file_constants.cpp -o constants.o
g++ -Wall -I/usr/local/include/thrift -c your_thrift_file_types.cpp -o types.o
and then:
g++ -L/usr/local/lib -lthrift *.o -o Something_server
Am I right that the first step does essentially the same thing as the second sequence?
Also, to make them identical should something be Something_server in the first line?
|
You're right that you'll end up with the same executable at the end (albeit with a different name); in the first case gcc will actually create a bunch of temporary object files that it removes after linking, versus the second case where you're making the object files yourself.
The main reason to do things the second way is to allow for incremental building. After you've compiled your project once, say you change Something.cpp. The only object file affected is something.o -- there's no reason to waste time rebuilding the others. A build system like make would recognize that and only rebuild something.o before linking all the object files together.
| Why would one want to compile multiple .cpp files to the same executable? |
1,550,151,359,000 |
I've been messing around with my NAS which runs on Linux. I have root access, but there is no compiler. I seem to remember something about being able to compile on another system, but I'm not certain.
root@LSB1:~# uname -a
Linux LSB1 2.6.22.18-88f6281 #50 Tue Dec 22 18:06:23 JST 2009 armv5tejl unknown
|
Cross-compiling may be the solution for you. It allows you to compile executables for one architecture on a system of a different architecture. Here's an introduction.
| How do I install GCC on a system with no compiler? |
1,550,151,359,000 |
I recently had a conversation with a friend who is a highly skilled software engineer, and he showed me some articles arguing that libc is much better than glibc.
I wonder if it's possible to use libc instead, and what kind of problems I would come up against if I went this route?
|
Context: assuming from above comments that a BSDish libc is meant.
I think it's been looked into, but libc tends to be tightly tied to a given kernel (glibc has an abstraction layer, which allows it some portability but causes the usual problems that an abstraction layer causes) and making BSD libc work with a Linux kernel would require a near-complete rewrite: key system services are very different between the two systems (one example: BSD libc assumes that there are no pipes/FIFOs, because BSD uses socketpairs instead; conversely, Linux doesn't support pipe-compatible socketpairs).
Going the other direction (Debian has an experimental Linux userspace on a FreeBSD kernel, I think) is possible due to glibc's portability layer.
| Can I build a linux distro with libc instead of glibc |
1,550,151,359,000 |
Whenever I use yaourt -Syua in my Manjaro Linux system, it'll give me
Edit PKGBUILD ? [Y/n] ("A" to abort)
and sometimes
Edit chromium-pepper-flash.install ? [Y/n] ("A" to abort)
Somewhere I read to just say no to editing these files.
The wiki (https://wiki.archlinux.org/index.php/PKGBUILD) says the PKGBUILD is just some switches to alter when installing, so is it alright to just leave it at the default?
I haven't found information about the .install files, what are they?
|
Why don't you thoroughly read the wiki page that you linked:
Packages in Arch Linux are built using the makepkg utility and
information stored in PKGBUILDs. When makepkg is run, it
searches for a PKGBUILD in the current directory and follows the instructions therein to either compile or otherwise acquire the
files to build a package file
Therefore, PKGBUILD is a "recipe" for creating a package (similar to a RPM spec, gentoo ebuild etc). Sometimes, when a package is installed/removed/upgraded, it may require some scripts/programs to be automatically executed before/after the package files are written to/removed from disk so an additional "recipe" is needed, i.e. .install (excerpt from the same link):
install
The name of the .install script to be included in the package. pacman
has the ability to store and execute a package-specific script when it
installs, removes or upgrades a package. The script contains the
following functions which run at different times:
pre_install - The script is run right before files are extracted. One argument is passed: new package version.
post_install - The script is run right after files are extracted. One argument is passed: new package version.
pre_upgrade - The script is run right before files are extracted. Two arguments are passed in the following order: new package version, old package version.
post_upgrade - The script is run after files are extracted. Two arguments are passed in the following order: new package version, old package version.
pre_remove - The script is run right before files are removed. One argument is passed: old package version.
post_remove - The script is run right after files are removed. One argument is passed: old package version.
Usually, you edit PKGBUILD to customize the way the package is built (e.g. add/remove --configure options, change install prefix, patch the source code, exclude files from the package etc). Likewise, you edit .install to add or remove commands that should be automatically executed before/after a package install/upgrade/removal.
I'd say it's good practice to open those files when prompted and read their content just to make sure everything is OK.
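For orientation, a PKGBUILD is just a shell fragment that makepkg sources. A minimal sketch looks roughly like this — every value below (package name, URL, checksum) is a made-up placeholder, not a real package:

```bash
# Hypothetical minimal PKGBUILD -- all values are placeholders.
pkgname=hello
pkgver=1.0
pkgrel=1
arch=('x86_64')
source=("https://example.com/$pkgname-$pkgver.tar.gz")
md5sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  ./configure --prefix=/usr   # the kind of switch you might edit
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

When yaourt asks "Edit PKGBUILD?", it is offering to let you tweak exactly this file before makepkg runs it.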
| What exactly is PKGBUILD and should I edit it when installing packages? |
1,550,151,359,000 |
I would like to build Debian package from source, using dpkg-buildpackage. I have downloaded package source:
apt-get -t wheezy-backports source gnucash
Inside the file gnucash-2.6.9/configure I see, that there are options which can be selected/deselected when building the package.
Debian maintainer has already made the decision for me. But if I want disable some options, how should I do it?
Lets say, I want to compile the package without --enable-aqbanking. This option appears in several configuration files:
$ grep -rl enable-aqbanking gnucash-2.6.9/
gnucash-2.6.9/packaging/gnucash.spec
gnucash-2.6.9/packaging/gnucash.spec.in
gnucash-2.6.9/configure.ac
gnucash-2.6.9/configure
Which of those should I edit?
What is the proper way to do it?
|
OK, take a look at gnucash-2.6.x/debian/rules.
Find the line that says override_dh_auto_configure: (line 23 in my case), and add your overrides below it.
In your case --enable-aqbanking is already there (for wheezy-backports at least), so simply delete it.
More info can be found in the man page.
Update: In addition, sometimes there's a variable in the rules file responsible for passing custom stuff to configure. It's usually at the top of the file and is called DEB_CONFIGURE_EXTRA_FLAGS.
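A sketch of what the relevant part of debian/rules might look like after the edit (a dh-style rules file is assumed; the actual flags in the gnucash rules file will differ, and the flags shown are illustrative):

```make
# debian/rules (excerpt, hypothetical): extra ./configure flags are
# passed through dh; simply omit --enable-aqbanking here to drop it.
# Note: recipe lines must be indented with a tab.
override_dh_auto_configure:
	dh_auto_configure -- \
		--enable-ofx
```

After editing, rebuild with dpkg-buildpackage as usual.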
| building Debian package with non-standard options |
1,550,151,359,000 |
I need to install redis 2.8.x into a specific directory so I can later use fpm to create an rpm.
From my research, it seems that this should be possible by using make PREFIX=
mkdir /tmp/installdir
cd /tmp
wget http://download.redis.io/releases/redis-2.8.6.tar.gz
tar -xvf redis-*.tar.gz
cd redis-2.8.6
make PREFIX=/tmp/installdir
make install
I expect the binaries to be placed in /tmp/installdir, unfortunately that directory remains empty. It seems PREFIX=/tmp/installdir is being ignored.
Normally I would run ./configure --prefix=/tmp/installdir however because the download does not contain source code, there is no configure file.
How can I install software to a non standard directory?
|
I was successful by prefixing
PREFIX=/tmp/installdir make
and
PREFIX=/tmp/installdir make install
to check what happens, use -n
root@wizzard:/tmp/redis-2.8.6# PREFIX=/tmp/installdir make install -n
cd src && make install
make[1]: Entering directory `/tmp/redis-2.8.6/src'
echo ""
echo "Hint: To run 'make test' is a good idea ;)"
echo ""
mkdir -p /tmp/installdir/bin
printf ' %b %b\n' "\033[34;1m"INSTALL"\033[0m" "\033[37;1m"install"\033[0m" 1>&2;install redis-server /tmp/installdir/bin
printf ' %b %b\n' "\033[34;1m"INSTALL"\033[0m" "\033[37;1m"install"\033[0m" 1>&2;install redis-benchmark /tmp/installdir/bin
printf ' %b %b\n' "\033[34;1m"INSTALL"\033[0m" "\033[37;1m"install"\033[0m" 1>&2;install redis-cli /tmp/installdir/bin
printf ' %b %b\n' "\033[34;1m"INSTALL"\033[0m" "\033[37;1m"install"\033[0m" 1>&2;install redis-check-dump /tmp/installdir/bin
printf ' %b %b\n' "\033[34;1m"INSTALL"\033[0m" "\033[37;1m"install"\033[0m" 1>&2;install redis-check-aof /tmp/installdir/bin
make[1]: Leaving directory `/tmp/redis-2.8.6/src'
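Two things are going on here. First, in the question PREFIX was passed to make but not to make install, so the install step likely fell back to the default prefix — which would explain the empty directory. Second, where you put PREFIX=... matters in general: GNU make gives command-line variables (make PREFIX=...) priority over assignments inside the Makefile, while plain environment variables (PREFIX=... make) lose to a Makefile assignment unless the Makefile uses ?= defaults (which, as an assumption about its Makefile, redis appears to do). A self-contained sketch of the precedence rules:

```shell
# Demonstrate GNU make variable precedence with a throwaway Makefile.
printf 'PREFIX = /usr/local\nshow:\n\t@echo $(PREFIX)\n' > /tmp/prefix-demo.mk
# Environment variable: overridden by the Makefile's own assignment.
PREFIX=/tmp/installdir make -f /tmp/prefix-demo.mk show   # prints /usr/local
# Command-line variable: overrides the Makefile.
make -f /tmp/prefix-demo.mk show PREFIX=/tmp/installdir   # prints /tmp/installdir
```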
| specify PREFIX location when running make |
1,550,151,359,000 |
I tried to make && make install package, but I get an error:
libX11.so.6 not found
Where can I get this library?
|
You need to install the libX11 package:
$ rpm -qf /usr/lib/libX11.so.6
libX11-1.3.1-3.fc13.i686
Just go
$ yum -y install libX11
One more thing though: if you don't know how to find and install a library package, care to share why you are trying to compile a piece of software that is officially packaged for Fedora 13 in the most recent version?
$ yum info gpicview
Available Packages
Name : gpicview
Arch : x86_64
Version : 0.2.1
Release : 3.fc13
Size : 93 k
Repo : fedora
Summary : Simple and fast Image Viewer for X
URL : http://lxde.sourceforge.net/gpicview/
License : GPLv2+
Description : Gpicview is an simple and image viewer with a simple and intuitive interface.
: It's extremely lightweight and fast with low memory usage. This makes it
: very suitable as default image viewer of desktop system. Although it is
: developed as the primary image viewer of LXDE, the Lightweight X11 Desktop
: Environment, it only requires GTK+ and can be used in any desktop environment.
| libX11.so.6 Not found |
1,550,151,359,000 |
I have scenario in which
my host is : x86 32 bit processor
my target is : x86 64 bit processor
I have a couple of questions :
I want to know if I can simply compile a program on my host using the available gcc and run it on the target?
Do I need to cross-compile it for an x86 64-bit processor? If yes, how can I specify that while compiling?
Do I need to use a separate toolchain for cross-compiling the program?
|
All amd64 (i.e. 64-bit x86) processors can run 32-bit x86 binaries. Also, on most operating systems, you can run x86 programs on an amd64 OS. So it is often possible to deploy x86 binaries on amd64 processors.
Whether it's desirable to do so is a different matter. 64-bit OSes often come with a restricted set of 32-bit libraries, so if your program uses some uncommon libraries it will be easier to install a 64-bit executable. Depending on your application, there may or may not be a performance advantage to 32-bit or 64-bit binaries.
If you decide you want to deploy 64-bit executables, you'll need a cross-compililation environment for the amd64 (a.k.a. x86_64) architecture running on an x86 architecture. This means both a compiler, and static libraries to link against.
A gcc installation can share frontends and include multiple backends. But not many distributions ship with amd64 development tools on x86 platforms, so you may have to get your own (gcc is fairly straightforward to cross-compile). The same goes for libraries to link against (of course, once you have the compiler, you can recompile them from source).
As an example, Ubuntu 10.04 on x86 comes with a “multilib” version of gcc and an amd64 backend, plus a small set of 64-bit development packages (libc6-dev-amd64 and its dependent packages).
| Do I need to cross-compile my program when my target is 64 bit arch. and host is 32 bit arch from x86 family? |
1,550,151,359,000 |
I need to recompile my kernel on RHEL WS5 with only two changes.
Change stack size from 4k to 8k
Limit usable memory to 4096.
How do I recompile the kernel without changing anything else but these two items?
|
To change only the new values you will need the config the old kernel was build from.
In RHEL you can find this in: /boot/config-$(\uname -r)
Copy this file to the kernel source and change the values you want. Use make menuconfig for a ncurses gui.
For other distributions: If the config option CONFIG_IKCONFIG_PROC was set, your kernel configuration is available under /proc/config.gz
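Putting the answer together as a command sequence (the paths are the usual defaults and may differ on your system; run this inside the kernel source tree):

```sh
# Sketch: seed the build with the running kernel's config, change only
# the two options, then rebuild.
cp /boot/config-"$(uname -r)" .config
make oldconfig      # accept defaults for any options new to this source
make menuconfig     # change the 4K->8K stack size and the memory limit here
make && make modules_install && make install
```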
| Recompile Kernel to Change Stack Size |
1,550,151,359,000 |
I am working with a virtual machine (CentOS 5.3) that has very little storage space on the main drive (which includes /usr, /usr/local, etc). Most of the storage space is available on a separate drive that is mounted to /mnt. Consequently, on this drive I have created a basic installation directory (with subdirectories like bin, include, lib, etc) and installed a library there.
[standage@vm142-46 ~]$ ls -lhp /mnt/lib
total 33M
-rw-r--r-- 1 standage iplant-everyone 21M Dec 21 16:29 libgenometools.a
-rwxr-xr-x 1 standage iplant-everyone 13M Dec 21 16:29 libgenometools.so
I then tried to link to that library with code that I had written, but it gave me the following message.
/usr/bin/ld: cannot find -lgenometools
I realized I had not updated ldconfig with the new installation directory I had created, so I went ahead and added /mnt/lib to /etc/ld.so.conf and ran /sbin/ldconfig. However, when I tried to link my code again I got the same error.
I was eventually able to get the libraries to link by creating symlinks to /usr/local/lib64...
[standage@vm142-46 ~]$ sudo ln -s /mnt/lib/libgenometools.a /usr/local/lib64
[standage@vm142-46 ~]$ sudo ln -s /mnt/lib/libgenometools.so /usr/local/lib64
...but this doesn't really solve my original problem, it's just a duct-tape solution. What did I do wrong originally and how can I link to the library I've installed?
|
/etc/ld.so.conf only influences the dynamic linker, i.e. where libraries are looked for at run time. When you build an executable, what matters is the paths where ld looks for the library. The usual way to specify these is to pass the -L option; most configure scripts have a way to pass additional -L options. There usually isn't a way to change the default search path for ld. You might look into changing the gcc spec file, but that would involve changing a file under /usr, not under /etc.
Given your slightly awkward setup, you may want to look into a union mount of /mnt above /usr. I don't know what union mount possibilities CentOS offers if any (of course, there are third-party options, native or based on FUSE).
| ldconfig issue with non-standard lib directory in CentOS |
1,550,151,359,000 |
I am trying to work my way around with the BuildPrereq flag in the spec files.
I want a few pre-requisites to be included if the OS is of a particular version. something like
if os == fedora 4
BuildPrereq >= apr0.9
endif
if os == feodra 10
BuildPrereq >= apr2.0
endif
Is there any way to achieve the above? I would also like to hear some alternatives. The problem is that I have a section of the code which does not need to be compiled on a few versions of the OS, so I am looking at mixing conditional compilation with the above.
Cheers!
|
To translate what you wrote directly into specfile macros:
%if 0%{?fedora} == 4
BuildPrereq >= apr0.9
%endif
%if 0%{?fedora} == 10
BuildPrereq >= apr2.0
%endif
You could probably change the first %endif to an %else but I wanted to keep my rewrite as similar as possible in case there are other circumstances involved.
If you want to support versions of fedora between fc4 and f10 or later, you can use >= and <= as well. If you care about RHEL, there's a %{rhel} that evaluates as 4 for RHEL4 and 5 for RHEL5.
| How can I specify OS-conditional build requirements in an RPM spec file? |
1,550,151,359,000 |
I cannot find any packages for vdo for Debian, and my own attempts to compile and run the software have failed. Can anyone shed light on how to compile vdo for use with Debian? This is software released by Red Hat after acquiring another company.
My current steps are:
apt-get update -y
apt-get install -y git sudo
sudo apt-get upgrade -y
sudo apt-get install -y build-essential libdevmapper-dev libz-dev uuid-dev
git clone https://github.com/dm-vdo/vdo.git
make
make install
sudo apt install -t stretch-backports linux-headers-$(uname -r)
git clone https://github.com/dm-vdo/kvdo.git
make -C /usr/src/linux-headers-`uname -r` M=`pwd`
cp vdo/kvdo.ko /lib/modules/$(uname -r)
cp uds/uds.ko /lib/modules/$(uname -r)
depmod
modprobe kvdo
modprobe uds
systemctl start vdo
// error with
Starting VDO volume services...
Traceback (most recent call last):
File "/usr/bin/vdo", line 46, in <module>
from vdo.utils import Command
|
OP's question is incomplete: the end of the error message which contains an important clue to solve this is not included. Here it is (on Debian buster. Debian 9 would instead search for python3.5):
# vdo status
Traceback (most recent call last):
File "/usr/local/bin/vdo", line 46, in <module>
from vdo.utils import Command
File "/usr/local/lib/python3.7/dist-packages/vdo/utils/__init__.py", line 27, in <module>
from .YAMLObject import YAMLObject
File "/usr/local/lib/python3.7/dist-packages/vdo/utils/YAMLObject.py", line 33, in <module>
import yaml
ModuleNotFoundError: No module named 'yaml'
So the python code needs a yaml module.
# apt-cache search python3 yaml | grep yaml | head -5
python3-pretty-yaml - module to produce pretty and readable YAML-serialized data (Python 3)
python3-xstatic-js-yaml - JavaScript yaml implementation - XStatic support
python3-xstatic-json2yaml - converts json or simple javascript objects into a yaml - XStatic support
python3-yamlordereddictloader - loader and dump for PyYAML keeping keys order
python3-yaml - YAML parser and emitter for Python3
# apt-get install python3-yaml
[...]
# vdo status
VDO status:
Date: '2019-05-13 19:33:06+02:00'
Node: somenode
Kernel module:
Loaded: true
Name: kvdo
Version information:
kvdo version: 6.2.0.293
Configuration:
File: does not exist
Last modified: not available
VDOs: {}
That's it. Note that without any configuration made, nothing would actually start.
You should follow directions provided by Redhat there: 1.5. Creating a VDO volume.
Here's an example I ran:
# vdo create --name=vdo-data --device=/dev/md0 --vdoLogicalSize=8T
Creating VDO vdo-data
Starting VDO vdo-data
Starting compression on VDO vdo-data
VDO instance 0 volume is ready at /dev/mapper/vdo-data
Even without completely installing it, a peek at vdo.service gives enough informations:
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
So manually:
# vdo start --all --confFile /etc/vdoconf.yml
Starting VDO vdo-data
VDO instance 0 volume is ready at /dev/mapper/vdo-data
# ps -ef|grep vdo
root 11590 2 0 19:53 ? 00:00:00 [kvdo0:dedupeQ]
root 11593 2 0 19:53 ? 00:00:00 [kvdo0:journalQ]
root 11594 2 0 19:53 ? 00:00:00 [kvdo0:packerQ]
root 11595 2 0 19:53 ? 00:00:00 [kvdo0:logQ0]
[...]
# vdo status
VDO status:
Date: '2019-05-13 19:54:46+02:00'
Node: somenode
Kernel module:
Loaded: true
Name: kvdo
Version information:
kvdo version: 6.2.0.293
Configuration:
File: /etc/vdoconf.yml
Last modified: '2019-05-13 19:53:35'
VDOs:
vdo-data:
Acknowledgement threads: 1
Activate: enabled
Bio rotation interval: 64
Bio submission threads: 4
Block map cache size: 128M
Block map period: 16380
Block size: 4096
CPU-work threads: 2
Compression: enabled
Configured write policy: auto
Deduplication: enabled
Device mapper status: 0 17179869184 vdo /dev/md0 normal - online online 1151960 242161600
Emulate 512 byte: disabled
Hash zone threads: 1
Index checkpoint frequency: 0
[...]
Final note: to run it on kernels >= 4.20, which by default require that kernel code contain no variadic functions, changes are needed for kvdo. The simplest approach is to ignore the corresponding warnings until the project itself corrects the affected functions. A tree patched with 2x2 lines is available from another RH employee there.
| Compiling VDO for use on Debian |
1,550,151,359,000 |
My company uses a small out-dated cluster (CentOS 5.4) to do number crunching (finite element calculations to be more specific). They are using a commercial package and have no idea of Linux. They don't want to change anything on the machines as long as they run, which I accept as a time-effective policy. I do not have administrative rights.
I can have them install smaller packages, but not change e.g. the python version from 2.4 to 2.6+, so I decided to compile the current version (./configure --prefix=/home/mysuser/root2) and ran into a few problems with the dependencies (wrong version of e.g. readline, zlib, curses, bz2 ... or packages not found). I also need to update gcc, which complains about missing GMP, MPFR and MPC.
The reason for doing this is I'd like to compile other test software to do run on these machines.
What can I do to effectively install the packages I need in order to compile the software I need to work with? I'm elsewhere using archlinux and would find it quite handy to be able to do something along the lines
pacman --root /home/myuser/root2 -S <package>
But I have no idea if this is possible or clever.
Other related SE questions: gentoo-prefix and pkgsrc seem to be not so easy (I may be wrong, though).
|
Your management is wise in not trying to upgrade a working cluster that is performing an important function based on a proprietary package.
Backporting packages is time-consuming and risky, that is, not always feasible. You might avoid the time penalty if you can find the packages that you want to install in the original CentOS 5.4 repository or in some CentOS 5.4 backport repository. You can have several versions of GCC on one host at the same time (the embedded systems/cross-compile folks do this all the time), but it is not trivial to have more than one glibc in a single run-time environment.
So, you are best advised to work in a separate, newer environment that has the packages that you need and find some way to test the output of the old environment in the new one. In any event, do not risk breaking anything in the old environment or you may need all of the stackexchange.com reputation points that you can get to find your next job ;-)
| What is an effective method for installing up-to-date software on an out-dated production machine? |
1,550,151,359,000 |
Is there a way to build emacs on Linux so it doesn't embed the path where it was built into the binary, and it can be relocated harmlessly to a different path?
That is, if you build with --prefix=/a/b/c then move everything to /d/e/f it won't run, because it depends on the fixed path /a/b/c. I see the string /a/b/c inside the binary itself.
Windows emacs can be installed to any directory and it runs from there just fine, so it makes me think you can tell Linux emacs to run the same way, from "wherever you are sitting now".
We have no options like a fixed name symlink up the directory tree pointing to the variable path below it.
|
Emacs can be relocated mostly harmlessly, even if you don't take any precautions when compiling. If the hardcoded paths don't work, Emacs looks for directories near the executable.
Emacs tries to determine where the executable that invoked it is located. It stores this information in the variable invocation-directory. Let's say that this is /path/to/bin/emacs; Emacs looks for the data files it needs in the hard-coded directories, and falls back to directories in /path/to.
You need to structure your directories in the same way as the Emacs source, more or less, with toplevel directories bin, etc, leim, lib-src, lisp, site-lisp. In particular, at least with Emacs 23.2, the directory lib-src must exist (even if it's empty).
There are a few directories that Emacs doesn't find this way. Set the environment EMACSDATA=/path/to/etc. You may need to set INFOPATH as well.
| Relocatable emacs |
1,550,151,359,000 |
I have Ubuntu 14.04, upgraded from 12.04 via dist-upgrades. I did many manual installations in the past, such as ffmpeg, libglib and so on. I have a nice custom distro now; it works well, but I have problems when trying to compile applications, stemming from conflicts between libraries installed manually from source and the distro's native libraries. A guy advised me to rename /usr/local; that works, but booting failed on the next reboot.
When I look for directories added by pkg-config with
pkg-config --variable pc_path pkg-config
it lists
/usr/local/lib/i386-linux-gnu/pkgconfig:/usr/local/lib/pkgconfig:/usr/local/share/pkgconfig:/usr/lib/i386-linux-gnu/pkgconfig:/usr/lib/pkgconfig:/usr/share/pkgconfig
I don't want it to look for paths in /usr/local/lib...
How can I ban those paths not to let pkg-config look for?
|
Stuff in /usr/local usually supersedes stuff in /usr, so I'm a bit confused as to why you would install libraries there to have a "nice custom distro", but then not want to compile against them. Those are the libraries the system will actually use.
Anyway, man pkg-config claims the base search path:
is libdir/pkgconfig:datadir/pkgconfig where libdir is the libdir for pkg-config and datadir is
the datadir for pkg-config when it was installed.
This implies they are compiled in. I notice it is different on ubuntu than fedora -- the former is long and inclusive, whereas the latter is short and exclusive; on fedora I have to set a $PKG_CONFIG_PATH to include /usr/local.
Since paths in $PKG_CONFIG_PATH are checked first, you could just set:
PKG_CONFIG_PATH=/usr/lib/i386-linux-gnu/pkgconfig:/usr/lib/pkgconfig:/usr/share/pkgconfig
The fact that these are at the end of the built-in paths won't matter; if the check makes it to there without finding anything, there's nothing to be found.
To demonstrate how this works, create a temporary directory /opt/bs/pkg and copy a .pc file from one of the directories in the default path into it -- e.g., alsa.pc. First check;
> pkg-config --libs alsa
-lasound
Now go into /opt/bs/pkg/alsa.pc and change -lasound (it's in the Libs: field) to -foobar. Set $PKG_CONFIG_PATH and try again:
> PKG_CONFIG_PATH=/opt/bs/pkg pkg-config --libs alsa
-foobar
Eureka, $PKG_CONFIG_PATH has overridden the built-in paths...you can delete /opt/bs/pkg, of course.
| How can I exclude some library paths listed in " pkg-config --variable pc_path pkg-config"? |
1,550,151,359,000 |
Is there a way to build and install only a few of the GNU coreutils?
The README in coreutils-8.19.tar.xz lists 100-odd, but the INSTALL doesn't say how to install only a few, and the Makefile is (to me) opaque.
|
./configure
cd ./lib
make
cd ../src
make version.h
make cat
make ls
HTH
===
UPDATE as of February 26, 2015:
The recipe above doesn't work in at least coreutils-8.23. I would not recommend building separate files.
The following shows the complexity of internal dependencies for cat and ls:
./configure
make src/version.h
make lib/configmake.h
make lib/arg-nonnull.h
make lib/warn-on-use.h
make lib/fcntl.h
make lib/sys/stat.h
make lib/selinux/context.h
make lib/selinux/selinux.h
make lib/unitypes.h
make lib/unistr.h
make lib/uniwidth.h
make lib/getopt.h
make src/cat
make src/ls
| Install only a few GNU coreutils? |
1,550,151,359,000 |
I am attempting to assemble the assembly source file below using the following NASM command:
nasm -f elf -o test.o test.asm
This completes without errors and I then try to link an executable with ld:
ld -m elf_i386 -e main -o test test.o -lc
This also appears to succeed and I then try to run the executable:
$ ./test
bash: ./test: No such file or directory
Unfortunately, it doesn't seem to work. I tried running ldd on the executable:
linux-gate.so.1 => (0xf777f000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf7598000)
/usr/lib/libc.so.1 => /lib/ld-linux.so.2 (0xf7780000)
I installed the lsb-core package and verified that /lib/ld-linux.so.2 exists. How come I still can't run the executable?
I'm attempting to do this on a machine running the 64-bit edition of Ubuntu 15.04.
The source code:
; This code has been generated by the 7Basic
; compiler <http://launchpad.net/7basic>
extern printf
extern scanf
extern read
extern strlen
extern strcat
extern strcpy
extern strcmp
extern malloc
extern free
; Initialized data
SECTION .data
s_0 db "Hello, World!",0
printf_i: db "%d",10,0
printf_s: db "%s",10,0
printf_f: db "%f",10,0
scanf_i: db "%d",0
scanf_f: db "%lf",0
; Uninitialized data
SECTION .bss
v_12 resb 4
v_0 resb 4
v_4 resb 8
SECTION .text
; Code
global main
main:
finit
push ebp
mov ebp,esp
push 0
pop eax
mov [v_12], eax
l_0:
mov eax, [v_12]
push eax
push 5
pop edx
pop eax
cmp eax, edx
jl l_2
push 0
jmp l_3
l_2:
push 1
l_3:
pop eax
cmp eax, 0
je l_1
push s_0
push printf_s
call printf
add esp, 8
mov eax, [v_12]
push eax
push 1
pop edx
pop eax
add eax, edx
push eax
pop eax
mov [v_12], eax
jmp l_0
l_1:
mov esp,ebp
pop ebp
mov eax,0
ret
Here's the output of strings test:
/usr/lib/libc.so.1
libc.so.6
strcpy
printf
strlen
read
malloc
strcat
scanf
strcmp
free
GLIBC_2.0
t'hx
Hello, World!
.symtab
.strtab
.shstrtab
.interp
.hash
.dynsym
.dynstr
.gnu.version
.gnu.version_r
.rel.plt
.text
.eh_frame
.dynamic
.got.plt
.data
.bss
test.7b.out
printf_i
printf_s
printf_f
scanf_i
scanf_f
v_12
_DYNAMIC
_GLOBAL_OFFSET_TABLE_
strcmp@@GLIBC_2.0
read@@GLIBC_2.0
printf@@GLIBC_2.0
free@@GLIBC_2.0
_edata
strcat@@GLIBC_2.0
strcpy@@GLIBC_2.0
malloc@@GLIBC_2.0
scanf@@GLIBC_2.0
strlen@@GLIBC_2.0
_end
__bss_start
main
|
You also need to link startup fragments like crt1.o and others if you want to call libc functions. The linking process can be very complicated, so you'd better use gcc for that.
On amd64 Ubuntu, you can:
sudo apt-get install gcc-multilib
gcc -m32 -o test test.o
You can see files and commands for the link by adding -v option.
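As an aside, the confusing "No such file or directory" for a file that plainly exists usually means the ELF interpreter path stored in the binary's PT_INTERP header doesn't exist on disk; the kernel's exec fails with ENOENT and the shell reports it against the binary itself. The ldd output above suggests this binary's interpreter was likely set to the nonexistent /usr/lib/libc.so.1. You can inspect the interpreter directly (shown here on /bin/sh; run it on ./test to see the bogus entry):

```shell
# Print the program interpreter recorded in an ELF binary (guarded in
# case binutils is not installed).
if command -v readelf >/dev/null; then
  readelf -l /bin/sh | grep -i interpreter
fi
```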
| Unable to run an executable built with NASM |
1,308,324,734,000 |
I am doing a build on a Linux machine with Ubuntu 10.04 on it. How can I really speed up my build? I have 4 CPUs and lots of RAM. I already reniced the process group to -20. Is there something else I can do?
|
Most software build processes use make. Make sure you run make with the -j argument and a number usually about twice the number of CPUs you have, so make -j 8 would be appropriate in your case.
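Rather than hard-coding the number, you can derive it from the machine (nproc is part of GNU coreutils; the fallback value here is an arbitrary guess):

```shell
# Launch make with roughly twice as many jobs as there are CPUs.
if command -v nproc >/dev/null; then
  jobs=$(( $(nproc) * 2 ))
else
  jobs=8   # fallback guess if nproc is unavailable
fi
echo "would run: make -j$jobs"
# make -j"$jobs"    # uncomment inside an actual build tree
```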
| How to speed up my build |
1,308,324,734,000 |
I'm using Arch linux and I need GCC 4.7.0 for a class.
I only have GCC 6.2.1 installed on my system currently.
I followed all the install instructions correctly but I still yield this error after running the initial make.
$ make
.
.
In file included from /home/flounder/src/gcc-4.7.0/gcc-4.7.0/gcc/cp/except.c:987:0:
cfns.gperf: At top level:
cfns.gperf:101:1: error: ‘gnu_inline’ attribute present on ‘libc_name_p’
cfns.gperf:26:14: error: but not here
.
.
make[3]: *** [Makefile:1055: cp/except.o] Error 1
make[3]: Leaving directory '/home/flounder/src/gcc_compile/gcc'
make[2]: *** [Makefile:4101: all-stage1-gcc] Error 2
make[2]: Leaving directory '/home/flounder/src/gcc_compile'
make[1]: *** [Makefile:19342: stage1-bubble] Error 2
make[1]: Leaving directory '/home/flounder/src/gcc_compile'
make: *** [Makefile:898: all] Error 2
I've read that this can happen when trying to build old versions of GCC with modern versions because:
GCC adds new errors as versions go on, so the source code of older versions of GCC isn't always considered valid under newer versions of GCC
I read that here, here, and here.
So what can I do to remedy the issue?
Two possible solutions I think could work:
Cross compile GCC 4.7.0 for my computer using the school Linux computers (which also have GCC 4.7.0 but they're 32-bit and I have a 64-bit OS)
First compile GCC 5.4.x on my computer using GCC 6.2.1 then use GCC 5.4.x to compile GCC 4.7.0
The first option seems more bulletproof. Would they both work? Is one better than the other?
Edit:
As @Kenneth B. Jensen mentioned below, I attempted to run the configuration with the --disable-werror flag set and attempted to run the initial make with the -k flag set but I still ran into trouble. The following is the error output:
$ make -k
.
.
.
if [ xinfo = xinfo ]; then \
makeinfo --split-size=5000000 --split-size=5000000 --split-size=5000000 --no-split -I . -I /home/flounder/src/gcc-4.7.0/gcc/doc \
-I /home/flounder/src/gcc-4.7.0/gcc/doc/include -o doc/cppinternals.info /home/flounder/src/gcc-4.7.0/gcc/doc/cppinternals.texi; \
fi
echo timestamp > gcc.pod
perl /home/flounder/src/gcc-4.7.0/gcc/../contrib/texi2pod.pl /home/flounder/src/gcc-4.7.0/gcc/doc/invoke.texi > gcc.pod
Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/^\@strong{ <-- HERE (.*)}$/ at /home/flounder/src/gcc-4.7.0/gcc/../contrib/texi2pod.pl line 319.
echo timestamp > doc/gcc.1
(pod2man --center="GNU" --release="gcc-4.7.0" --date=2012-03-22 --section=1 gcc.pod > doc/gcc.1.T$$ && \
mv -f doc/gcc.1.T$$ doc/gcc.1) || \
(rm -f doc/gcc.1.T$$ && exit 1)
echo timestamp > gpl.pod
perl /home/flounder/src/gcc-4.7.0/gcc/../contrib/texi2pod.pl /home/flounder/src/gcc-4.7.0/gcc/doc/include/gpl_v3.texi > gpl.pod
Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/^\@strong{ <-- HERE (.*)}$/ at /home/flounder/src/gcc-4.7.0/gcc/../contrib/texi2pod.pl line 319.
echo timestamp > doc/gpl.7
(pod2man --center="GNU" --release="gcc-4.7.0" --date=2012-03-22 --section=7 gpl.pod > doc/gpl.7.T$$ && \
mv -f doc/gpl.7.T$$ doc/gpl.7) || \
(rm -f doc/gpl.7.T$$ && exit 1)
cp doc/gcc.1 doc/g++.1
make[3]: Target 'all' not remade because of errors.
rm gcc.pod
make[3]: Leaving directory '/home/flounder/src/gcc_compile/gcc'
make[2]: *** [Makefile:4101: all-stage1-gcc] Error 2
make[2]: Target 'all-stage1' not remade because of errors.
make[2]: Leaving directory '/home/flounder/src/gcc_compile'
make[1]: *** [Makefile:19342: stage1-bubble] Error 2
make[1]: Target 'stage3-bubble' not remade because of errors.
make[1]: Leaving directory '/home/flounder/src/gcc_compile'
make: *** [Makefile:898: all] Error 2
|
You'll probably end up spending an awful lot of time getting GCC 4.7 built on your current system, and in the end you still won't be sure of the result: your school's computers' version of GCC may include distribution patches or even local changes which your version won't have.
I would suggest instead that you run the distribution your school is using in a VM. Your school is using RHEL, and you can too: you can get a no-cost developer subscription from Red Hat Developers; once you've got your subscription, you can download ISOs of any still-supported version of RHEL, so you should be able to install the same version as used on the school computers.
In any case, since this is for grading purposes, you should always check your code on the school computers before submitting it!
| How to handle error compiling GCC 4.7.0 using GCC 6.2.1 |
1,308,324,734,000 |
When installing a rpm package it warns that there is a necessary dependent library missing. In fact I have already installed that library from source, so I guess rpm just doesn't know about that.
Then can I let rpm know the existing library, and how? Maybe add some code in a rpm configure file?
By the way, installing the missing library (again) by rpm may solve the problem (quickly), but sometimes there's no rpm version available.
|
The RPM dependency database cannot tell that you installed a package from source. The RPM database only knows about the metadata present in the RPM packages; a package installed from source does not contain this metadata.
Some configure scripts that build a package from source will produce pkg-config files (.pc files), which are metadata about the installed package. Yet, there is no clear-cut integration between the metadata from pkg-config and RPM metadata (or DEB metadata, or pacman metadata). When packaging a distro, the packagers insert the metadata in a specific format into the packages (e.g. RPM packages), and that metadata is the one used to determine dependencies, not metadata provided in any other form.
On the other hand, you can have different versions of a library on the same system. By default (i.e. according to the GNU coding standards, which most packages follow) a configure script should install what it builds into /usr/local, whilst packages packaged by the distro (e.g. RPM packages) install their content into /usr.
Therefore, if you follow this convention (part of the FHS) and keep packages/libraries installed from source in /usr/local, then installing the same library through RPM will not conflict with your library (since the packagers of the distro do follow the FHS).
When there is no RPM available, you can build it yourself. For that you need to build the package/library from source and install it into a dummy place (a build root). Then provide the metadata needed for the RPM package and package it into an RPM file. TLDP has a dated but very thorough guide on building RPMs.
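As a rough sketch of what such a package looks like (all names, paths, and file lists here are hypothetical placeholders; see the TLDP guide for real-world details):

```spec
# mylib.spec -- minimal sketch of wrapping a source build in an RPM
Name:           mylib
Version:        1.0
Release:        1
Summary:        Example library packaged from source
License:        MIT
Source0:        mylib-1.0.tar.gz

%description
Example of wrapping a source build in an RPM so that the dependency
database knows about the installed library.

%prep
%setup -q

%build
./configure --prefix=/usr
make

%install
make install DESTDIR=%{buildroot}

%files
/usr/lib/libmylib.so*
```

The package is then built with rpmbuild, and installing the resulting .rpm registers the library in the RPM database.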
| RPM says missing dependency but I have already installed that library (from source) |
1,308,324,734,000 |
I'd like to install software from source (e.g., third-party GitHub repos) to my machine. Generally /usr/local/bin and /usr/local/src are for non-distribution-specific software, right?
Taking ownership of /usr/local seems risky: anything running with my privileges could make nefarious changes to executables in /usr/local/bin, or to sources in /usr/local/src.
But the alternative, building and installing as root (sudo), doesn't make sense to me. GitHub warns against running git as root. Even if I copied the sources from a local repo elsewhere, I'd have to run make and make install as sudo, meaning the software I'm installing could hijack the rest of my machine.
I could just put everything in /home, but that seems like a cop-out -- isn't this what /usr/local is for?
|
Don't take ownership of /usr/local. Use sudo to install software. But use your own account to build it.
git clone … # or tar -x or …
cd …
./configure
make
sudo make install
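Optionally, before the sudo step, you can stage the install into a scratch directory to review exactly what would be written (a sketch; this assumes the project's Makefile honors the common DESTDIR convention, which not all do):

```shell
# Stage the install under ./stage instead of / and inspect it
make DESTDIR="$PWD/stage" install
find stage -type f
```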
Why not take ownership of /usr/local? You nailed it. That would allow any program running on your account to write there. Against a malicious program, you've lost anyway — infecting a local account is the big step, escalating to root isn't difficult (e.g. by piggybacking on the next time you run sudo). But against a badly configured program, it's better not to have writable bits in the system-wide directories.
As for the choice between /usr/local and your home directory: your home directory is for things you only want for your account, /usr/local is for things that are installed system-wide.
| Recommended way to install software to /usr/local -- use sudo or chown? |
1,308,324,734,000 |
I would like to compile some C programs for Windows. So I used a search engine and I found that I probably need to install mingw32.
If I run:
sudo apt-get install mingw32
and I got:
E: Unable to locate package mingw32
So, I used a search engine again, and I found this answer on AskUbuntu and this answer on StackOverflow.
I ran:
sudo add-apt-repository universe
and:
sudo apt-get update
But I still get the same error. What can I do to solve it?
|
On modern Debian derivatives, including Mint, mingw32 is no longer available; it has been replaced by mingw-w64:
sudo apt install mingw-w64
should work.
This package provides both 32- and 64-bit Windows compilers. When switching from mingw32 to mingw-w64, you’ll need to adjust the target triplets:
i686-w64-mingw32 for 32-bit Windows;
x86_64-w64-mingw32 for 64-bit Windows.
| E: Unable to locate package mingw32, Linux Mint |
1,308,324,734,000 |
My immediate objective is to compile a small kernel for my laptop without sacrificing usability. I am familiar with the kernel compilation steps (though I don't necessarily understand the process). What are the options I can get rid of in menuconfig for a faster, slimmer kernel? I have been using the trial and error method, i.e. unchecking unused filesystems and drivers, but this is a painfully slow process. Can somebody point me towards things I should not touch, or a better way of going about this process? This little "project" is for recreation only.
System Specs and OS:
i7 580M, Radeon HD5850, 8Gb DDR3, MSI Motherboard
x86_64 Ubuntu 11.10.
|
Unchecking filesystems and drivers isn't going to reduce the size of the kernel at all, because they are compiled as modules and only the modules that correspond to hardware that you have are loaded.
There are a few features of the kernel that can't be compiled as modules and that you might not be using. Start with Ubuntu's .config, then look through the ones that are compiled in the kernel (y, not m). If you don't understand what a feature is for, leave it alone.
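To get a feel for that split, you can count built-in versus modular options in the distribution's config (a sketch; the path shown is the usual Ubuntu location for the running kernel's config):

```shell
# =y entries are compiled into the kernel image, =m entries are modules
config=/boot/config-"$(uname -r)"
echo "built-in: $(grep -c '=y$' "$config")"
echo "modules:  $(grep -c '=m$' "$config")"
```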
Most of the kernel's optional features are optional because you might not want them on an embedded system. Embedded systems have two characteristics: they're small, so not wasting memory on unused code is important, and they have a dedicated purpose, so there are many features that you know you aren't going to need. A PC is a general-purpose device, where you tend to connect lots of third-party hardware and run lots of third-party software. You can't really tell in advance that you're never going to need this or that feature. Mostly, what you'll be able to do without is support for CPU types other than yours and workarounds for bugs in chipsets that you don't have (what few aren't compiled as modules). If you compile a 64-bit kernel, there won't be a lot of those, not nearly as many as a 32-bit x86 kernel where there's quite a bit of historical baggage.
In any case, you are not going to gain anything significant. With 8GB of memory, the memory used by the kernel is negligible.
If you really want to play around with kernels and other stuff, I suggest getting a hobbyist or utility embedded board (BeagleBoard, Gumstix, SheevaPlug, …).
| Stripped down Kernel for a Laptop |
1,308,324,734,000 |
Anyone who has used a Python virtualenv knows that it lets you run scripts in their own environment, installing all the necessary libraries without affecting the main Python installation.
Do we have something similar in the Linux (Debian) world for the make utility?
Case 1: I download sources and I know what dependencies I need. I put the libraries somewhere in my home directory and explicitly tell make where to search for them.
Case 2: I run some kind of virtualenv for make, call apt-get install lib-required-dev inside it so that the downloaded libraries are placed in that virtual environment and don't pollute my OS, and then run make.
|
Case one is relatively easy, at least for some programs. Most source packages include a configure script which checks for the availability of needed libraries. These scripts generally have options to specify search paths (for example --lib-prefix), so you don't even need to modify the Makefile yourself. Whether or not this works will depend on how complex the dependencies are, but it's worth a try.
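As a sketch of case 1 with a typical autoconf script (option and variable names vary per project; ./configure --help lists what a given script accepts, and the $HOME/.local prefix here is just an assumed convention):

```shell
# Point the build at headers and libraries kept under $HOME/.local
./configure --prefix="$HOME/.local" \
    CPPFLAGS="-I$HOME/.local/include" \
    LDFLAGS="-L$HOME/.local/lib"
```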
For option 2, you have the chroot program:
chroot - run command or interactive shell with special root directory
chroot needs certain files and directories to be present. The details will depend on what exactly you need to do (e.g. do you need /dev? Do you need /proc?). You can get a minimal chroot environment like this (as root):
mkdir foo
cp -r /bin /lib /lib64 foo/
chroot foo
The last command will move you into the directory foo and run your default shell, treating foo as /. The procedure I have outlined is a simplification; you don't need everything in /lib, for example, and you will probably need more directories depending on what you want to do. Finally, you can also use bind mounts (mount --bind) to map directories into the chroot environment, but not if you want it to be completely independent of your real OS.
An easy way of creating such a playground is to take a small partition and install a minimal system onto that partition. You can then simply make the chroot like so (always as root):
mount /dev/sda2 foo/
chroot foo/
Obviously, change sda2 to whichever partition you installed your minimal system on. For more information see these links:
https://wiki.archlinux.org/index.php/Change_Root
https://wiki.debian.org/chroot
| Does Debian have any kind of virtual environment for make install? |
1,308,324,734,000 |
I want to install vim via Homebrew, compiling it from source, but even when I run the command with the -s flag, no compile occurs.
brew install -s vim --with-luajit
Or
brew reinstall -s vim --with-luajit
==> Reinstalling vim
==> Downloading https://homebrew.bintray.com/bottles/vim-8.1.0202
Already downloaded: /Users/me/Library/Caches/Homebrew/vim-8.1.0202.high_sierra.bottle.tar.gz
==> Pouring vim-8.1.0202.high_sierra.bottle.tar.gz
🍺 /usr/local/Cellar/vim/8.1.0202: 1,434 files, 23.4MB
As far as I know, the -s or --build-from-source flag should compile it rather than pour a bottle, but it always pours for some reason.
How can I compile vim from scratch?
I use macOS 10.14 Beta and homebrew 1.7.1.
UPDATE
With the -v flag, it first rm'ed several files under /usr/local/. Then reinstalling, downloading, verifying, and pouring showed up in that order. The pouring process showed the following:
tar xf /Users/me/Library/Caches/Homebrew/vim-8.1.0202.high_sierra.bottle.tar.gz -C /var/folders/vk/cbdc97r515b0lv_p1dq_852r0000gn/T/d20180806-64534-6ra3ai
Then it showed "finishing up" process, and then ran several ln -s commands. After that, it showed the following:
/usr/bin/sandbox-exec -f /private/tmp/homebrew20180806-64695-e9syu8.sb nice /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/bin/ruby -W0 -I /Library/Ruby/Gems/2.3.0/gems/did_you_mean-1.0.0/lib:/Library/Ruby/Site/2.3.0:/Library/Ruby/Site/2.3.0/x86_64-darwin18:/Library/Ruby/Site/2.3.0/universal-darwin18:/Library/Ruby/Site:/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby/2.3.0:/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby/2.3.0/x86_64-darwin18:/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby/2.3.0/universal-darwin18:/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby:/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0:/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/x86_64-darwin18:/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/universal-darwin18:/usr/local/Homebrew/Library/Homebrew/cask/lib:/usr/local/Homebrew/Library/Homebrew -- /usr/local/Homebrew/Library/Homebrew/postinstall.rb /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula/vim.rb -v -s --with-luajit --force
brew doctor showed the following (I omitted irrelevant parts such as Python or miniconda):
Warning: You are using macOS 10.14.
We do not provide support for this pre-release version.
You will encounter build failures and other breakages.
Please create pull-requests instead of asking for help on Homebrew's
GitHub, Discourse, Twitter or IRC. As you are running this pre-release version,
you are responsible for resolving any issues you experience.
Warning: The Command Line Tools header package must be installed on Mojave.
The installer is located at:
/Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg
|
The consequence might not be exactly the same, but adding the --HEAD flag to the command worked like a charm.
brew uninstall --force vim
brew install --HEAD -s vim --with-luajit
And now vim is installed with Lua support.
| "brew install -s" does not compile from source |
1,308,324,734,000 |
I am trying to install Zsh without root privileges on a Linux machine. I downloaded the source tarball and run:
./configure --prefix=<my_installation_path>
but then I got:
configure: error: "No terminal handling library was found on your
system. This is probably a library called curses or ncurses. You may
need to install a package called 'curses-devel' or 'ncurses-devel' on
your system"
Installing ncurses:
Since I am not root on this system, I downloaded ncurses and installed it manually (also using ./configure --prefix=<my_installation_path>), which seems to have gone well.
I then updated the following paths:
INSTALLATION_PATH='/path/to/installation'
export PATH=$INSTALLATION_PATH/bin/:$PATH
export LD_LIBRARY_PATH=$INSTALLATION_PATH/lib:$LD_LIBRARY_PATH
export CFLAGS=-I$INSTALLATION_PATH/include
and tried installing Zsh again, but got the same ncurses error. As far as I can tell, the path variables above point to the right locations, and I can check this on the shell. Why is Zsh not recognizing ncurses?
|
Update:
Following Gilles' answer, I updated CPPFLAGS and LDFLAGS and the problem goes away during configure.
However, I now get an error during make:
<INSTALLATION_PATH>/lib/libncurses.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
I also get a "recompile with -fPIC" message. I guess this refers to the compilation of ncurses. I presume this means that I built ncurses as a static library, and I should build it as a dynamic one? How would I do that?
Update 2:
I re-compiled ncurses again. This time, I did:
export CXXFLAGS=" -fPIC"
export CFLAGS=" -fPIC"
prior to make, and then added --enable-shared to ./configure for both ncurses and Zsh. This seems to have fixed the problem!
| Building zsh without admin priv: No terminal handling library found |
1,308,324,734,000 |
My situation. uname -a gives Linux computer2 4.4.0-62-generic #83~14.04.1-Ubuntu SMP Wed Jan 18 18:10:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
I am trying to install HDF5 1.8.18 with GNU make 3.81 invoking gcc 6.3.0. I have successfully installed this gcc 6.3.0 alongside the version 4.8.4 that is shipped with the Ubuntu distribution.
My gcc 6.3.0 lives in /opt/gcc/6_3_0/. I use the following script to configure and pass on the commands, libraries and headers in non-standard directories:
export FC='/opt/gcc/6_3_0/bin/gfortran-6.3.0' # probably unnecessary
export CC='/opt/gcc/6_3_0/bin/gcc-6.3.0'
export CXX='/opt/gcc/6_3_0/bin/g++-6.3.0'
export CPP='/opt/gcc/6_3_0/bin/cpp-6.3.0'
export LDFLAGS='-L/opt/gcc/6_3_0/lib -L/opt/gcc/6_3_0/lib64'
export CPPFLAGS='-I/opt/gcc/6_3_0/include -I/opt/gcc/6_3_0/lib/gcc/x86_64-pc-linux-gnu/6.3.0/include'
./configure \
--prefix=${insdir} \
--with-zlib=${zlibdir}/include,${zlibdir}/lib \
--enable-fortran \
--enable-cxx
where ${insdir} is an installation directory, ${zlibdir} is where zlib lives, and the other switches are standard, as per the installation guidelines.
The configure step goes well. The make step fails with the error:
make[2]: Entering directory `<the source directory>/hdf5-1.8.18/c++/src'
CXX H5Exception.lo
H5Exception.cpp:16:18: fatal error: string: No such file or directory
#include <string>
^
compilation terminated
If I understand it correctly, a rather basic header file is missing.
Where should I get it from?
Is there any flaw in the names and values of the environment variables?
StackExchange contains a host of posts on this error, but they seem to be mostly related to coding exercises. My aim is not to edit codes, rather to compile source codes successfully with my vanilla gcc 6.3.0.
Updated question
In the light of the helpful comments and Thomas Dickey's answer below, it appears that a promising avenue is to install matching versions of libstdc++ and gcc. I have searched around in the GCC website and it appears that one can configure gcc with the following switch
--enable-version-specific-runtime-libs
Specify that runtime libraries should be installed in the compiler specific subdirectory (libdir/gcc) rather than the usual places. In addition, libstdc++'s include files will be installed into libdir unless you overruled it by using --with-gxx-include-dir=dirname. Using this option is particularly useful if you intend to use several versions of GCC in parallel. This is currently supported by libgfortran, libstdc++, and libobjc.
Is this pointing in the right direction?
Where would I be supposed to find the libstdc++'s include files that are distributed alongside the source of gcc, if this is switch is not used?
|
That's looking for a C++ header file, normally part of a development package such as libstdc++ (with a version and "-dev" or "-devel" as part of the package name).
For instance in Debian (where Ubuntu gets most of its packages), I have a "libstdc++6-4.6-dev" on my Debian 7 machine, which has this file:
/usr/include/c++/4.6/string
The C header files have a .h suffix; C++ as a rule does not (though on some systems you may see .hh).
When you configured the add-on compiler, it used settings (see your logs...) which told it where to expect to find libraries. You'll probably have to build your own libstdc++ for compatibility with the newer compiler. Again, you'll have to set the --prefix option when configuring, so the compiler and library work together.
Addressing a followup: if your compiler is looking in /usr/local, then you could work around this by amending your CPPFLAGS variable, adding /usr/include (and possibly /usr/include/c++/4.8, etc.), though there's also the library path in LDFLAGS to consider. To see the pathnames used by your libstdc++ package, use
dpkg -L $(dpkg -l | awk '{print $2}' | grep -E 'libstdc++.*dev')
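As a sketch of that workaround (the 4.8 paths are assumptions; substitute the directories that the dpkg command reports on your system):

```shell
# Make the add-on compiler search the system C++ headers and libraries
export CPPFLAGS="$CPPFLAGS -I/usr/include -I/usr/include/c++/4.8"
export LDFLAGS="$LDFLAGS -L/usr/lib/gcc/x86_64-linux-gnu/4.8"
```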
| gcc compilation terminated with "fatal error: string: No such file or directory #include <string>" |
1,308,324,734,000 |
I'm building a jessie build of Debian. Passwords are saved in /etc/shadow in the build tree, but they are salted obviously so I cannot change it just by editing the file. If this was my installed system, I could call passwd, but here I want to change the password in the file in the build tree.
How do I change the root password before I flash a SD with a new build?
|
At the stage where you have a directory tree containing a file …/etc/shadow (before building the filesystem image), modify that file to inject the password hash(es) that you want to have.
The easiest way to do that is with the -R option of a recent enough version of the chpasswd tool from the Linux shadow utilities suite (Debian wheezy's is recent enough). Sample usage:
chpasswd -R /path/to/build/tree <passwords.txt
with passwords.txt containing lines like
root:swordfish
alibaba:opensesame
If your build environment doesn't support chpasswd -R, you can use a tool that generates a password hash by calling the crypt function and inject that into the shadow file by text manipulation. For example (untested code):
#!/usr/bin/python3
import crypt, os, string, sys

# Read "username:password" lines and compute SHA-512 ($6$) crypt hashes.
# crypt salts use the [a-zA-Z0-9./] alphabet (64 characters).
alphabet = string.ascii_letters + string.digits + "./"
hashes = {}
for line in sys.stdin.readlines():
    (username, password) = line.strip().split(":")
    salt = "$6$" + "".join(alphabet[b % 64] for b in os.urandom(8))
    hashes[username] = crypt.crypt(password, salt)

# Rewrite etc/shadow in the build tree, substituting the hash field
old_shadow = open("etc/shadow")
new_shadow = open("etc/shadow.making", "w")
for line in old_shadow.readlines():
    (username, password, trail) = line.split(":", 2)
    if username in hashes:
        line = username + ":" + hashes[username] + ":" + trail
    new_shadow.write(line)
old_shadow.close()
new_shadow.close()
os.rename("etc/shadow.making", "etc/shadow")
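If Python is not available in the build environment either, a SHA-512 crypt hash can be generated with OpenSSL (1.1.1 or later; an assumption about your toolchain) and pasted into the second field of the shadow entry by hand:

```shell
# Print a SHA-512 crypt ($6$) hash of the password with a random salt
openssl passwd -6 'swordfish'
```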
| Change the root password of a Linux image |
1,308,324,734,000 |
I want to use Meson to build a new C++ project. The first thing I need is a dependency on the Boost library. But although the Boost libs are installed on my Arch system (headers and libs), Meson complains that it doesn't find them.
Here is the meson build file:
project('myproj', 'cpp')
boost_dep = dependency('boost')
executable('myproj', 'main.cpp', dependencies : boost_dep)
The main.cpp source file:
int main()
{
return 0;
}
A partial listing of some Boost files installed on my system:
$ ls /usr/lib/libboost*|head -n5; ls /usr/include/boost/*|head -n5
/usr/lib/libboost_atomic.a
/usr/lib/libboost_atomic.so
/usr/lib/libboost_atomic.so.1.65.1
/usr/lib/libboost_chrono.a
/usr/lib/libboost_chrono.so
/usr/include/boost/aligned_storage.hpp
/usr/include/boost/align.hpp
/usr/include/boost/any.hpp
/usr/include/boost/array.hpp
/usr/include/boost/asio.hpp
Output from ninja command inside my project:
[0/1] Regenerating build files.
The Meson build system
Version: 0.43.0
Source dir: /home/io/prog/myproj/src
Build dir: /home/io/prog/myproj/builddir
Build type: native build
Project name: myproj
Native C++ compiler: c++ (gcc 7.2.0)
Build machine cpu family: x86_64
Build machine cpu: x86_64
Dependency Boost () found: NO
Meson encountered an error in file meson.build, line 2, column 0:
Dependency "boost" not found
[...]
What am I missing?
|
The following issue solved my problem:
Boost not detected on Fedora · Issue #2547
I replaced the meson build file by the following:
project('myproj', 'cpp')
cxx = meson.get_compiler('cpp')
boost_dep = [
cxx.find_library('boost_system'),
cxx.find_library('boost_filesystem'),
]
executable('myproj', 'main.cpp', dependencies : boost_dep)
| Meson doesn't find the Boost libraries |
1,308,324,734,000 |
Following up on this article, I use GPP to extend the Markdown converter pandoc with some macros. Unfortunately, gpp seems to copy all whitespace from the macro definitions into the result.
For example, consider file test.md
% Title
% Raphael
% 2012
\lorem \ipsum
with test.gpp
\define{lorem}{Lorem}
\define{ipsum}{ipsum...}
Now, calling gpp -T --include test.gpp test.md yields
<empty line>
% Title
% Raphael
% 2012
Lorem ipsum...
This breaks the metadata extraction of pandoc. The extra linebreak is indeed the one between the definitions; if I use
\define{lorem}{Lorem}@@@
\define{ipsum}{ipsum...}
with the extra option +c "@@@" "\n", the empty line is gone. But this workaround is not only ugly, it also has two fatal flaws.
First, it treats @@@ as a comment indicator in the source file, too. Since @@@ is not forbidden in Markdown, this can have unintended consequences when @@@ (or any other chosen delimiter) happens to occur in the source file.
Second, it does not cover whitespace at the beginning of lines caused by proper indentation. For example,
\define{lorem}{@@@
\if{a == a}@@@
@@@
\endif@@@
}@@@
will cause all such image tags to be indented by four spaces, causing pandoc to typeset it as code (as specified).
So, short of writing gpp files on one line or introducing ugly line-end comments and forgoing indentation, what can you do to prevent gpp from plastering superfluous whitespace all over the place?
|
Assuming all the junk is in the include file, and therefore before the start of the document, you could just post-process it:
test.gpp:
\define{lorem}{Lorem}
\define{ipsum}{ipsum...}
----- cut here ------
Then do:
gpp -T --include test.gpp test.md | sed '1,/----- cut here ------/d'
(Does gpp output to stdout? Otherwise just run sed on the output file.)
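The sed step can be sanity-checked on its own; everything up to and including the marker line is deleted:

```shell
printf 'definitions junk\n----- cut here ------\n%% Title\n' \
    | sed '1,/----- cut here ------/d'
# prints: % Title
```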
| Generic Preprocessor adds extra whitespace |