| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,371,543,299,000 |
I got the source code of unclutter using apt-get source unclutter, and copied the files to my embedded system.
Now, how can I compile it?
Update:
I've tried this answer:
How to compile and install programs from source, but it doesn't work here: there's no ./configure script, and make was not found.
|
In order to compile things on that system, it needs make, gcc, and a whole lot of other tooling that's not usually found on embedded devices. Typically, you cross-compile on another machine and then put the binary on the embedded system. You may be lucky enough not to have to compile at all: you can get the binary for your architecture and try running it on the system.
Cross compiling is a large topic, and there are lots of tools out there that try to make it easier. Some things to search for: linaro, buildroot, crosstool.
To get the binary, go to packages.debian.org, search for the package that has the binary, download the one appropriate for your architecture (such as arm), open it with an archive manager, and look at the "data" folder, which holds the binaries. It may turn out that the binary needs libraries that are also not installed; in that case, repeat the process: find the package with the library you need, copy it over to the target system, and try again.
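Once you have extracted a binary, it is worth checking that it really targets your architecture before copying it over. A minimal sketch using only coreutils: od reads the low byte of the ELF e_machine field at offset 18. The 19-byte fake header synthesized below is purely for demonstration; on a real system you would point elf_arch at the extracted binary instead.

```shell
# Identify an ELF binary's target architecture from the e_machine field
# (byte 18 of the header, the low byte of a little-endian 16-bit value).
elf_arch() {
    m=$(od -An -tx1 -j18 -N1 "$1" | tr -d ' ')
    case "$m" in
        28) echo "ARM" ;;        # EM_ARM
        3e) echo "x86-64" ;;     # EM_X86_64
        03) echo "i386" ;;       # EM_386
        *)  echo "unknown ($m)" ;;
    esac
}

# Demo on a synthesized 19-byte fake ELF header whose e_machine byte is
# 0x28 (ARM, written here as octal \050); on a real system you would run
# something like: elf_arch extracted/usr/bin/unclutter
{ printf '\177ELF'; printf '%014d' 0; printf '\050'; } > fake_arm_elf
elf_arch fake_arm_elf    # prints: ARM
```

If the reported architecture doesn't match the target board, the binary will not run there no matter what libraries you copy.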
| How can I compile unclutter to my embedded linux? |
1,371,543,299,000 |
Which tools for distributed builds are currently in use in popular Linux distros, like Ubuntu, Debian, RedHat, etc.?
For example, we have a huge number of packages with dependencies and try to compile all of them for a new release of the distro. It takes a lot of time to build them one by one. I have heard about distcc, but it distributes the compilation of a single project across servers for parallel building.
Is there any tool which analyzes dependencies between the packages (creates a graph, for example) and creates a schedule for building with package-level parallelism? For example, we have packages p1, p2, p3, p4, and two servers. One server builds p1, another p2 at the same time if they are not dependent on each other.
I need examples of the projects which are being used in production.
|
Each distribution tends to have its own set of tools for this, there’s not much shared software here.
Debian uses wanna-build, buildd and sbuild, which you’ll all find documented on the Debian site (follow the links too). wanna-build maintains the build queue, buildd picks a package to build, and sbuild builds it. wanna-build tracks packages with missing dependencies using the “dep-wait” state; packages can enter that state directly (if wanna-build itself can identify that dependencies are missing) or after a build fails because of missing dependencies. There’s a tutorial available if you want to set up a local build infrastructure.
Fedora uses Koji, which is extensively documented. It also involves a number of different components, including koji-hub, the centralised database front-end, and kojid, which drives the builds. I'm not as familiar with Koji, though, so I don't know how it all integrates to handle build states.
Other distributions have other build systems, such as Launchpad. They all handle the same concerns you have: a centralised view of all the packages, multiple build systems, and a centralised repository which receives the build results.
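The package-level scheduling described in the question can be sketched with coreutils' tsort, which topologically sorts a dependency graph; real schedulers such as wanna-build or Koji do essentially this before dispatching independent packages to different builders. Using the p1..p4 example, and assuming p3 depends on p1 and p2, and p4 on p3:

```shell
# Each input line is "dependency dependent": p1 and p2 must be built
# before p3, and p3 before p4. tsort prints one valid build order;
# packages with no edge between them (p1 and p2 here) can be handed to
# different build servers in parallel.
printf '%s\n' 'p1 p3' 'p2 p3' 'p3 p4' | tsort
```

This only produces an order, not a schedule; a real system additionally tracks which builds are running and releases a package to a builder as soon as all of its dependencies have finished.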
| Distributed build systems |
1,371,543,299,000 |
I compiled a simple C file (containing only an empty main function) into a.out, and ran it from different locations:
user@host:~$ md5sum /home/work/a.out /tmp/a.out
dcbdb836569b99a7dc83366ba9bb3588 /home/work/a.out
dcbdb836569b99a7dc83366ba9bb3588 /tmp/a.out
user@host:~$
user@host:~$
user@host:~$ ldd /home/work/a.out
linux-vdso.so.1 (0x00007fffe11fa000)
libc.so.6 => /opt/compiler/gcc-4.8.2/lib/libc.so.6 (0x00007f42b8bca000) <--
/opt/compiler/gcc-4.8.2/lib64/ld-linux-x86-64.so.2 (0x00007f42b8f77000)
user@host:~$
user@host:~$ ldd /tmp/a.out
linux-vdso.so.1 (0x00007fff6ba41000)
libc.so.6 => /tmp/../lib64/tls/libc.so.6 (0x0000003f0b000000) <--
/opt/compiler/gcc-4.8.2/lib64/ld-linux-x86-64.so.2 (0x00007f12f537a000)
Why did it load a different libc.so?
Here is more information, thanks to @qubert:
$ readelf -a ./a.out | fgrep ORIGIN
0x000000000000000f (RPATH) Library rpath: [$ORIGIN:$ORIGIN/lib:$ORIGIN/lib64:$ORIGIN/../lib:$ORIGIN/../lib64:/opt/compiler/gcc-4.8.2/lib:/opt/compiler/gcc-4.8.2/lib64]
$ gcc -v -g 1.c 2>&1 | fgrep collect
/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../libexec/gcc/x86_64-xxx-linux-gnu/4.8.2/collect2 -rpath $ORIGIN:$ORIGIN/lib:$ORIGIN/lib64:$ORIGIN/../lib:$ORIGIN/../lib64:/opt/compiler/gcc-4.8.2/lib:/opt/compiler/gcc-4.8.2/lib64 --sysroot=/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../x86_64-xxx-linux-gnu/sys-root --eh-frame-hdr -m elf_x86_64 -dynamic-linker /opt/compiler/gcc-4.8.2/lib64/ld-linux-x86-64.so.2 /home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc/x86_64-xxx-linux-gnu/4.8.2/../../../../lib64/crt1.o /home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc/x86_64-xxx-linux-gnu/4.8.2/../../../../lib64/crti.o /home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc/x86_64-xxx-linux-gnu/4.8.2/crtbegin.o -L/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc/x86_64-xxx-linux-gnu/4.8.2 -L/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc -L/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc/x86_64-xxx-linux-gnu/4.8.2/../../../../lib64 -L/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../x86_64-xxx-linux-gnu/sys-root/lib/../lib64 -L/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../x86_64-xxx-linux-gnu/sys-root/usr/lib/../lib64 -L/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc/x86_64-xxx-linux-gnu/4.8.2/../../../../x86_64-xxx-linux-gnu/lib -L/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc/x86_64-xxx-linux-gnu/4.8.2/../../.. -L/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../x86_64-xxx-linux-gnu/sys-root/lib -L/home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../x86_64-xxx-linux-gnu/sys-root/usr/lib /tmp/ccbKeW7k.o -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc/x86_64-xxx-linux-gnu/4.8.2/crtend.o /home/opt/gcc-4.8.2.xxx-r4/gcc-4.8.2.xxx-r4/sbin/../lib/gcc/x86_64-xxx-linux-gnu/4.8.2/../../../../lib64/crtn.o
|
Your compiler was configured to set DT_RPATH with $ORIGIN by default using its built-in specs.
The purpose of $ORIGIN is to create executables that could be moved elsewhere together with the shared libraries they depend on: if a binary is moved to /alt/opt/bin and has $ORIGIN/../lib in its runpath, the dynamic linker will first look for its libraries in /alt/opt/lib. More details in the ld.so(8) manpage.
The problem with your compiler is that it's using the deprecated DT_RPATH (instead of DT_RUNPATH), which is always searched first and cannot be overridden via LD_LIBRARY_PATH. To avoid that, try using -Wl,--enable-new-dtags to gcc:
gcc -Wl,--enable-new-dtags file.c
That will direct the linker to use DT_RUNPATH instead of DT_RPATH for the -rpath option, whether set on the command line or via specs. DT_RUNPATH is reportedly not supported on very old systems, but as far as I remember that limitation dates back quite a while.
| Why the same executable in different location loaded different libc.so |
1,371,543,299,000 |
I don't want to use the standard gpg version in Raspbian, which is almost 4 years old, so I had to compile all the libraries manually. This worked fine, but then when I compiled gpg it said "libgcrypt too old (need 1.7.0, have 1.6.4)" even though I installed libgcrypt 1.8.1. So I uninstalled gpg and libgcrypt with make uninstall and compiled them again, with no success. I've tried to find a solution for the last two days. There were some posts on the Ubuntu forums, but they were not very helpful.
When I compiled it the last time, it gave this error:
collect2: error: ld returned 1 exit status
Makefile:949: recipe for target 't-stringhelp' failed
make[3]: *** [t-stringhelp] Error 1
make[3]: Leaving directory '/home/pi/gnupg-2.2.1/common'
Makefile:816: recipe for target 'all' failed
make[2]: *** [all] Error 2
make[2]: Leaving directory '/home/pi/gnupg-2.2.1/common'
Makefile:590: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/pi/gnupg-2.2.1'
Makefile:509: recipe for target 'all' failed
make: *** [all] Error 2
|
Converting comment to answer:
From your comments, you are using the oldstable version of Raspbian. You should be aware that oldstable receives less frequent updates and is expected to remain within the Debian security team's purview for about one year after the release of the next stable release.
See Debian's wiki on oldstable
As such, Debian oldstable shouldn't be used unless there is some special reason for it to remain in use. All Raspbian users should change their sources.list to reflect the new stable release of Raspbian, which is currently Stretch.
GNUPG 2.2 series is also the new stable for GNUPG. The 2.1 series and prior end support at the end of Dec 2017.
See GNUPG 2.2.0 Announcement
The GnuPG team is pleased to announce the availability of a new release
of GnuPG: version 2.2.0. See below for a list of new features and bug
fixes. This release marks the start of a new long term support series
to replace the 2.0.x series which will reach end-of-life on 2017-12-31.
And GNUPG 2.2.1 Announcement
We are pleased to announce the availability of a new GnuPG release:
version 2.2.1. This is a maintenance release; see below for a list of
fixed bugs.
As far as the question: "Will upgrading to Stretch break things?"
I would suggest getting a second SD card, installing the new stable version of Raspbian on that new card, and copying over any personal applications and data. This will allow you to test the new stable version while not disturbing your oldstable installation.
Addendum
Of course, this answer doesn't directly answer your question of "How do I build GNUPG?"
For a nice easy to follow answer to this question you can follow the instructions included on GNUPG's Webkey installation page:
GNUPG Webkey with local build of new version GNUPG
GNUPG Says:
The easiest way to install the latest GnuPG version is to use Speedo, which downloads, verifies and builds all dependent packages. To do this first unpack the tarball:
$ tar xjf gnupg-2.1.15.tar.bz2
On non GNU system you may need to use this instead:
$ zcat gnupg-2.1.15.tar.bz2 | tar xf -
Then run:
$ make -f gnupg-2.1.15/build-aux/speedo.mk INSTALL_PREFIX=. \
speedo_pkg_gnupg_configure='--enable-gpg2-is-gpg \
--disable-g13 --enable-wks-tools' native
If you run into errors you are probably missing some development tools; install them and try again. If all succeeds you will notice a bunch of new directories below webkey's home directory:
PLAY bin include lib libexec sbin share swdb.lst swdb.lst.sig
Optionally you may delete what is not anymore required:
$ rm -rf PLAY include lib swdb.*
To make use of your new GnuPG installation you need to run this first (you should add it to webkey's .profile or .bashrc):
PATH="$HOME/bin:$PATH"
LD_LIBRARY_PATH="$(pwd)/lib"
export LD_LIBRARY_PATH
End build instructions
Of course, you will then be running the latest version of GnuPG, which is no longer 2.1.15, so adjust the version in the commands above accordingly.
| I have trouble installing gnupg on raspberry pi |
1,371,543,299,000 |
I am trying to compile ChatScript v7.55 on Ubuntu 16.04. But when I run the make server command, I get this error:
evserver.cpp: In function ‘int settcpnodelay(int)’:
evserver.cpp:263:40: error: ‘TCP_NODELAY’ was not declared in this scope
return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, (void*) &on, sizeof(on));
^
Makefile:110: recipe for target 'evserver.o' failed
make: *** [evserver.o] Error 1
This is the full result of the command:
************ LINUX VERSION ************
g++ -c -std=c++11 -Wall -funsigned-char -Wno-write-strings -Wno-char-subscripts -Wno-strict-aliasing -DLOCKUSERFILE=1 -DEVSERVER=1 -DEVSERVER_FORK=1 -DDISCARDPOSTGRES=1 -DDISCARDMONGO=1 -DDISCARDMYSQL=1 -Ievserver evserver.cpp -o evserver.o
evserver.cpp: In function ‘int settcpnodelay(int)’:
evserver.cpp:263:40: error: ‘TCP_NODELAY’ was not declared in this scope
return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, (void*) &on, sizeof(on));
^
Makefile:110: recipe for target 'evserver.o' failed
make: *** [evserver.o] Error 1
What is the problem and how can I fix it?
|
I solved the problem by adding #include <netinet/tcp.h> at the top of the evserver.cpp file.
| error: ‘TCP_NODELAY’ was not declared in this scope [closed] |
1,371,543,299,000 |
I'm using Ubuntu 17.04, and I manually upgraded my kernel to version 4.12.8 using the tool ukuu (Ubuntu Kernel Update Utility).
When I try to launch VMware (it is already installed), it asks me for the path to gcc-7.1 in order to compile the VMware modules.
I didn't understand why VMware asked me that, because I had installed the VMware modules without any problem on previous kernel versions on the same computer.
After some research, I found out that VMware compiles its modules using the same gcc version that was used to compile the running kernel. As I installed this kernel version manually, I didn't have gcc-7.1 on my computer.
My question is (sorry for the long preamble): how can I force VMware to use another gcc version to compile its modules?
|
Short answer: you should not.
Long answer:
It's not that VMware stubbornly wants a particular GCC version for no reason. It's very unwise to compile a kernel module with a different GCC than the one used for the kernel itself: if there is any ABI change between the two gcc versions, you will probably corrupt and crash your system.
If you ever convinced VMware to compile its modules with your GCC version, the kernel would refuse to load them. You would then have to binary-edit the modules in order to replace the GCC signature with the right one.
But, all in all, is that worth the risk? It would be preferable to either download/compile GCC 7.1, or recompile your kernel with your current GCC version.
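If you want to see the mismatch for yourself, the running kernel records the compiler it was built with:

```shell
# /proc/version embeds the build-compiler string of the running kernel;
# compare it with the output of `gcc --version`.
cat /proc/version
# typically something like (illustrative):
# Linux version 4.12.8-041208-generic (...) (gcc version 7.1.0 ...) ...
```

Whatever compiler string appears there is the one VMware (and any out-of-tree module build) wants to match.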
| Choose gcc version to compile vmware modules |
1,371,543,299,000 |
These are the debian/rules from an app called i-nex. It is a CPU-Z alternative for GNU/Linux and has a debian sub-directory containing the following files:
┌─[shirish@debian] - [~/games/I-Nex] - [4454]
└─[$] ll -r debian
-rw-r--r-- 1 shirish shirish 296 2016-11-13 02:12 i-nex-library.desktop
-rw-r--r-- 1 shirish shirish 93 2016-11-13 02:12 gbp.conf
-rw-r--r-- 1 shirish shirish 16588 2016-11-13 02:12 copyright
-rw-r--r-- 1 shirish shirish 14328 2016-11-13 02:12 changelog
drwxr-xr-x 2 shirish shirish 4096 2016-11-13 02:12 source
-rwxr-xr-x 1 shirish shirish 384 2016-11-13 02:12 rules
-rw-r--r-- 1 shirish shirish 63 2016-11-13 02:12 manpages
-rw-r--r-- 1 shirish shirish 110 2016-11-13 02:12 i-nex.triggers
-rw-r--r-- 1 shirish shirish 6535 2016-11-13 02:12 i-nex.desktop
-rw-r--r-- 1 shirish shirish 1408 2016-11-13 03:16 control
-rw-r--r-- 1 shirish shirish 2 2016-11-13 03:16 compat
-rw-r--r-- 1 shirish shirish 6 2016-11-13 03:17 debhelper-build-stamp
drwxr-xr-x 5 shirish shirish 4096 2016-11-13 03:18 i-nex
-rw-r--r-- 1 shirish shirish 62 2016-11-13 03:19 i-nex.substvars
-rw-r--r-- 1 shirish shirish 91 2016-11-13 03:19 files
-rw-r--r-- 1 shirish shirish 455 2016-11-13 03:19 i-nex.debhelper.log
I run the following two commands, and a Debian package comes out at the end:
$ fakeroot debian/rules build
$ fakeroot debian/rules binary
From the above listing, the timestamps and the build log make it obvious that, behind the scenes, it is debhelper doing the build. This is also confirmed by running
$ fakeroot debian/rules clean
after which the debian sub-directory is cleared of all the debhelper-generated files.
Now, this is the debian/rules file:
┌─[shirish@debian] - [~/games/I-Nex] - [4453]
└─[$] cat debian/rules
#!/usr/bin/make -f
LSB_CS = $(shell lsb_release -cs)
ifeq ($(LSB_CS),lucid)
COMPRESSION = -- -z9 -Zgzip
else
COMPRESSION = -- -z9 -Zxz
endif
override_dh_autoreconf:
cd I-Nex && autoreconf -i
override_dh_auto_configure:
dh_auto_configure --sourcedirectory=I-Nex
override_dh_builddeb:
dh_builddeb $(COMPRESSION)
override_dh_fixperms:
dh_fixperms
%:
dh $@ --with autoreconf
Now according to this answer, it seems the only thing to change is the last line -
dh $@ --with autoreconf
with
dh $@ --parallel --with autoreconf
this is assuming, of course, that there are no missing dependencies when compiling in parallel. Am I missing something?
For reference, there are two RFPs in Debian for the package.
|
That's right, in compatibility level 9,
dh $@ --parallel --with autoreconf
is sufficient to enable parallel builds. Note that "missing dependencies" for parallel builds refers to target dependencies in upstream build rules (Makefile etc.), not package dependencies.
With compatibility level 10, the two options above are enabled by default, so
dh $@
is sufficient to enable parallel builds with autoreconf.
The dh and debhelper manpages have all the details.
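For concreteness, here is a sketch of the minimal compat-10 layout, written with printf so the hard tab that make requires before the dh recipe survives copy and paste:

```shell
mkdir -p debian
echo 10 > debian/compat

# Makefile recipes need a literal tab, hence printf '\t' rather than spaces;
# %% in the format string produces a single literal %.
printf '#!/usr/bin/make -f\n%%:\n\tdh $@\n' > debian/rules
chmod +x debian/rules
cat debian/rules
```

With compat at 10, this three-line rules file gives you parallel builds and autoreconf without any explicit options; the overrides from the original rules file can still be appended below the catch-all target as before.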
| option to parallel build an app |
1,371,543,299,000 |
How can I keep something always compiling on a spare machine?
As it's just for looks, the more complex looking the better. I don't care what it is, just so long as it doesn't require input on my part, and it repeats forever.
I'll be using some flavor of Ubuntu.
Thanks in advance!
|
Are you just looking for something that looks busy, and don't care about any productive output?
Check out hollywood. There is a link here talking about it, and spotting it in the wild.
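If you would rather have a real compile churning away than simulated output, the usual trick is an endless rebuild loop over any large source tree (a kernel tree works well). A sketch with the loop bounded for demonstration; on the spare machine, replace the counter with while true and the echo with the real commands:

```shell
busy_build() {
    # One "pass" = clean then rebuild; looping forever keeps compiler
    # output scrolling with no input needed.
    passes=$1
    i=1
    while [ "$i" -le "$passes" ]; do
        echo "pass $i: make clean && make -j\$(nproc)"   # replace echo with the real build
        i=$((i + 1))
    done
}
busy_build 3
```

Because make clean throws away all the objects each time, every pass does the full amount of work, which is exactly what you want here.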
| How can I keep something, anything, compiling forever? [closed] |
1,371,543,299,000 |
I'm trying to compile vapoursynth and have run into a linker issue which I don't understand how to solve. Here is what I have so far:
I have compiled zimg from github
github: buaazp/zimg
and have a binary. I pulled vapoursynth from here
github: vapoursynth/vapoursynth
and I followed the instructions.
When I try to run ./configure:
configure: error: Package requirements (zimg) were not met:
No package 'zimg' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables ZIMG_CFLAGS
and ZIMG_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
I tried to fix it with:
export PKG_CONFIG_PATH=/home/test/zimg/bin/zimg
But ./configure still didn't work and had the same error. Then I tried:
export ZIMG_CFLAGS=/home/test/zimg/src/
export ZIMG_LIBS=/home/test/zimg/build
And the check for zimg passed, but it fails to link it. The error is this:
checking for ZIMG... yes
configure: error: failed to link zimg.
What should I try next?
|
It turns out I used the wrong zimg. The correct zimg is sekrit-twc/zimg.
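As an aside, the PKG_CONFIG_PATH attempts failed because the variable must name a directory containing a zimg.pc file (which make install of the correct zimg generates), not the source or build tree itself. A sketch of the expected layout; the paths, version, and description below are illustrative stand-ins, not the real library's metadata:

```shell
prefix="$PWD/zimg-install"            # where `make install` would have put things
mkdir -p "$prefix/lib/pkgconfig"

# Hand-written stand-in for the zimg.pc that a real install generates.
cat > "$prefix/lib/pkgconfig/zimg.pc" <<EOF
prefix=$prefix
libdir=\${prefix}/lib
includedir=\${prefix}/include

Name: zimg
Description: scaling/colorspace library (placeholder description)
Version: 2.0
Libs: -L\${libdir} -lzimg
Cflags: -I\${includedir}
EOF

# This directory is what configure's pkg-config check needs to see:
export PKG_CONFIG_PATH="$prefix/lib/pkgconfig"
grep '^Name:' "$prefix/lib/pkgconfig/zimg.pc"
```

With that in place, pkg-config --cflags zimg (and therefore vapoursynth's configure check) can resolve the dependency without the ZIMG_CFLAGS/ZIMG_LIBS workaround.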
| Unable to compile vapoursynth: failed to link zimg [closed] |
1,371,543,299,000 |
I downloaded a fresh kernel that I'm planning on using in a VM. In the instructions of the tutorial I'm using, I'm told
You will also need to build a new instance of the kernel, and ensure
that it will boot in the VM. To do this, move to your source tree,
copy config-3.14.26-yocto-qemu to
$SRC_ROOT/.config (where $SRC_ROOT is the root of your linux tree),
and run make -j4 all.
I did that by
cd linux-yocto-3.14 to go to the root of the linux tree
mkdir .config to make the configuration folder that didn't exist in this brand new kernel
copying config-3.14.26-yocto-qemu from outside the kernel into /.config
While at the root of the kernel, executing make -j4 all
My concern is that after this step, the guide says it'll take about 5 minutes or so to build and that I can simply leave it to do its thing; I thought the point of the config file was to configure the kernel for me. Instead, I get prompted with the typical kernel build setup screen where I have to go through every single option. Did I do something wrong?
|
You should not have done the mkdir .config; you should have just copied the existing config file to a file named .config in the kernel source directory.
e.g.
cd linux-yocto-3.14
cp /path/to/config-3.14.26-yocto-qemu .config
make -j4 all
| Configuration file for kernel in VM environment |
1,371,543,299,000 |
I downloaded the source for rmlint and am trying to compile it on cygwin. When I run scons, it says Checking for glib-2.0 >= 2.32... Error: glib-2.0 >= 2.32 not found.
In the cygwin setup facility, it shows I have the libglib 2.0_0 2.46.2-1 package installed. I re-installed it for good measure, but no luck.
How could I try to find the library on my filesystem, and how do I tell scons where it's located?
|
Hi, I'm one of the rmlint devs.
Unfortunately I don't think you'll be able to get rmlint running under cygwin (although happy to be proven wrong).
Edit: I have been proven wrong. There is now a more-or-less working command-line version of rmlint under cygwin. It requires:
gcc-core
pkg-config
libglib2.0-devel
libtool
and optionally:
libjson-glib 1.0-devel
libblkid-devel
libelf-devel
There seems to be no fiemap support under cygwin, so rmlint can't do its normal optimisation of file order to reduce seek times and thrash.
| scons can't find glib-2.0 >= 2.32 on cygwin |
1,371,543,299,000 |
I have to cross-compile bluez for another machine, but I am not allowed to install anything on the host machine. I have never done this before. How can I get started?
Host machine:
Processor: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz
OS Version: Linux 2.6.32-44 generic
Target machine:
Processor: ARM926EJ-S rev 5 (v5l)
OS Version: Linux 2.6.35.3-571
As you can see, the target machine has a newer version of Linux than the host machine; is it even possible to cross-compile in this situation?
I found this site (German); is it a good tutorial on how to start?
|
The kernel version has no bearing on compiling code for another system.
Unfortunately without being able to install any software on the host system you're going to be out of luck. You need a compiler suite that will generate code for your target platform (ARM in this case) and by default such a compiler suite isn't installed on most systems - if they have a compiler installed, it'll be for the same architecture.
That said, if you can install software in your home directory on the host, you can install a cross compiler. There are any number of guides online (for example, this one). Basically, it involves downloading the source code for a compiler suite and compiling it on your host system so that it can generate binaries for your target architecture.
That said - why do you need to compile BlueZ from source? I don't know what distribution your ARM system is running, but Debian has ARM packages available (although the packages for squeeze are probably more appropriate based on the vintage of your ARM kernel).
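To make the home-directory route concrete, here is a dry-run sketch of the usual flow once a toolchain is installed (crosstool-NG, for instance, installs under $HOME/x-tools without needing root). The triplet and paths are illustrative, and the configure/make commands are only printed rather than executed, since neither the toolchain nor the BlueZ sources are assumed present:

```shell
triplet="arm-unknown-linux-gnueabi"   # illustrative triplet; use your toolchain's
toolchain="$HOME/x-tools/$triplet/bin"

# Putting the toolchain first on PATH lets configure find ${triplet}-gcc.
PATH="$toolchain:$PATH"

# Printed rather than executed in this sketch:
echo "./configure --host=$triplet --prefix=\$HOME/bluez-arm"
echo "make && make install"
```

The key part is --host: it tells the autotools build to compile with the cross tools named by the triplet instead of the native compiler.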
| Cross compile for ARM without installing anything |
1,371,543,299,000 |
I have a buildroot package that I need to build. When I issue the 'make' command, it runs until it errors out with "lutimes undeclared". After lengthy research, it appears that uClibc needs to be patched to include the definition of lutimes before my build will complete.
I found this patch, but being new to Linux, I do not know how to apply it:
ucLibc Mailing List Archive - June 2010, Post 44113
How can I properly apply this patch?
|
Bear in mind that patches are tied to a specific source code revision, and after 4 years of changes in the source code, this patch may be outdated and may need to be recreated from scratch.
The first thing you should do is check the official documentation:
http://buildroot.uclibc.org/downloads/manual/manual.html#_providing_patches
You should pay attention to the following two categories:
17.1.2. Within Buildroot
and
17.2. How patches are applied
Give it a try and let us know if you face any issues
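For the raw mechanics of applying a standalone patch outside buildroot's automatic handling, the flow is always the same: save the patch to a file, change to the right directory, and feed it to patch. A self-contained toy demonstration, with file names standing in for the real uClibc sources:

```shell
# Make an "old" and a "new" version of a file and produce a unified diff,
# standing in for the patch you saved from the mailing list post.
mkdir -p old new
echo 'int have_lutimes = 0;' > old/utimes.c
echo 'int have_lutimes = 1;' > new/utimes.c
diff -u old/utimes.c new/utimes.c > lutimes.patch || true  # diff exits 1 when files differ

# Apply it; for a real patch use -pN to strip the leading path components
# shown in its ---/+++ headers (often -p1 from the package's top directory).
patch old/utimes.c < lutimes.patch
cat old/utimes.c    # now matches the "new" version
```

If patch reports rejected hunks (.rej files), that is the "outdated patch" situation described above, and the change has to be redone by hand against the current source.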
| Applying patch to the uClibc |
1,371,543,299,000 |
I'm using GTK3.4 and I need to update to > GTK3.10 for a theme editor I would like to use.
I've never built something like that from source, and the task looks a bit daunting. I've looked at what's available by typing apt-cache search gtk, but I don't see anything that looks like an update to GTK > 3.10.
I've googled around but only seem to find things that point to tutorials on compiling it from source. If necessary I will follow them, but I'm a bit afraid of breaking something.
NB: I use Debian-based Kali; that's why GTK appears so outdated.
|
If you don't want to compile from sources, the only other alternative is upgrading to the next version of Debian (normally testing/sid) which (normally) has the latest version:
➜ ~ rmadison libgtk-3-bin
debian:
libgtk-3-bin | 3.4.2-7 | wheezy | amd64, armel, armhf, i386, ia64, kfreebsd-amd64, kfreebsd-i386, mips, mipsel, powerpc, s390, s390x, sparc
libgtk-3-bin | 3.14.3-1 | jessie | amd64, arm64, armel, armhf, i386, kfreebsd-amd64, kfreebsd-i386, mips, mipsel, powerpc, ppc64el, s390x
libgtk-3-bin | 3.14.3-1 | sid | sparc
libgtk-3-bin | 3.14.4-1 | sid | amd64, arm64, armel, armhf, hurd-i386, i386, kfreebsd-amd64, kfreebsd-i386, mips, mipsel, powerpc, ppc64el, s390x
new:
Otherwise, you are stuck with that version of GTK.
| Install latest GTK without building myself |
1,371,543,299,000 |
Ubuntu/Fedora (to mention a few) are composed of several applications or programs, each dedicated to a specific task, but I've got a few questions:
How do I get the source code?
If I want to make my own GNU/Linux distro based on Ubuntu (such as Linux Mint), how should I start? Which code can I modify first?
Can you give an example of one of these programs (that compose Ubuntu) and its source code? How do I compile it?
|
A little bit of google-fu would have helped here. Not that you are wrong to ask: it's perfectly fine. But the very first thing you should be able to do is find information yourself, read the docs, and so on.
Get the source code
What you describe is distro-specific. For instance, for Debian and derivatives (such as Ubuntu), apt-get source [package] is what you need (see here for instance, man apt-get, or a search engine). This way, you can edit the sources of the versions of the programs that are used in the distro.
If what you want is to contribute to an open source project directly, you shouldn't use distro specific sources but the upstream sources usually managed by a version control system (quick how-to for git and github, assumes that you already know what we are talking about).
Make your own distrib
There is no single place to start from, and there are no established best practices. But if I were to forge a new distro, I would begin by asking myself why I want to create a new one. Isn't there a distro out there that fits my needs? If not, is there one close enough that I could ask its maintainers whether they are interested in a new contributor?
Code compilation
It depends on a lot of things. But usually for compiled languages, a
./configure --prefix=[install directory]
make
make install
should do it, but again, read the doc. Packages are usually released with INSTALL or README files; read them. And again, your favorite search engine should give you all the details about how to get your system ready and what to do to solve common problems (99.99% of them are common).
General thoughts
Read the doc (man pages, local doc, web doc, etc.).
Extensively use your favorite search engine to answer your questions, and keep Stack Exchange and other websites for clarifications or questions that have really not been answered elsewhere. I got dozens of relevant pages just by typing your questions into Google.
Get used to Linux distros such as ArchLinux and Gentoo once you are comfortable with Fedora/Ubuntu: you should learn a lot from them. Their Wiki/Handbook are also supergreat!
Visit Linux from Scratch website.
Don't be too hasty. Take your time and be sure you are comfortable before trying something new.
| How to modify source code of collection of programs of a GNU/Linux OS? [closed] |
1,371,543,299,000 |
I've a CI build process during which I install a debian package from my local reprepro.
I have a Makefile which calls aptitude to install the package from its own repository, like this:
sudo aptitude -y install foobar >> aptitude.log 2>&1
Now it could happen that aptitude has conflicts which can't be resolved, or that the repository doesn't offer a new version of the package "foobar". In either case aptitude wouldn't install anything.
But
echo $?
after the aptitude call in the Makefile always returns 0.
What way do you propose to check whether aptitude actually installed anything? Grepping the last line of the aptitude output is the only thing I can think of if the exit code is always 0.
|
Try dpkg-query, which prints information about installed packages.
Example:
dpkg-query -W -f='${Status} ${Version}\n' foobar
If foobar is not installed, this will output:
No packages found matching foobar.
Run dpkg-query --help for more information
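Building on that: unlike aptitude's unconditional 0, dpkg-query's own exit status tells you whether the package is installed, so a Makefile recipe can branch on it. A sketch, with foobar standing in for the real package name:

```shell
# dpkg-query exits non-zero when the package is unknown or not installed,
# so this works as a post-install check after the aptitude call.
if dpkg-query -W -f='${Version}\n' foobar 2>/dev/null; then
    echo "foobar is installed"
else
    echo "foobar is NOT installed"
fi
```

In CI you would typically follow the aptitude call with this check and fail the build explicitly when the package (or the expected version) is missing.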
| How to check if aptitude did something? |
1,371,543,299,000 |
So I compiled FFmpeg from this guide as a standard user, and it works fine as the user I compiled it with, but if I do sudo ffmpeg the program can't be found. Is it possible to make it accessible to root, or do I need to rebuild while logged in as root?
|
The issue here is that ffmpeg has not been placed in a directory that is in root's $PATH. The guide you linked to (in the future please include the steps here so we don't need to go looking for them) tells you to run this command:
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin"
This will cause ffmpeg's files to be installed into $HOME/ffmpeg_build and the compiled executable into $HOME/bin (in the make install step). If you can run it as your normal user, that means that you have modified your $PATH and have added that directory to it.
For root to run it, you can either add /home/your_user/bin to root's $PATH or, much better, just call sudo with the path to the executable:
sudo ~/bin/ffmpeg
| Compiled, can't access with sudo |
1,371,543,299,000 |
I've been following this tutorial, and I've found myself needing to compile makedev. However, when I try
ROOT=/mnt make install
in the source directory (makedev-20110824), I get:
gimptool-2.0 --install-script misc/xcf-png.scm
make: gimptool-2.0: Command not found
make: *** [misc/icon.png] Error 127
As I understand it, makedev is a utility that automatically populates the /dev directory, whereas gimptools are used for image manipulation.
Why do I need gimptools to build makedev? Is there a way I can build it without gimptools and still expect to make it through the tutorial without much trouble?
P.S.
Also in case it is relevant, I'm running a reasonably minimal Debian install on VirtualBox.
|
Are you sure you got the right makedev? I found multiple versions of makedev:
The one available at sunsite.unc.edu, which the tutorial mentions: there is makedev-1.6.1, written in C, and makedev-2.2, a shell script.
makedev-20110824 (Google led me to this SourceForge project). It would appear that this one is not maintained much.
As for the gimptool invocation, it looks like the author of makedev-20110824 wanted an "icon" whose original (in GIMP XCF format) is converted to a PNG during the make. I don't understand why the author would include an icon for a system utility like makedev, which is unlikely to be launched from a desktop environment.
Also, I'd like you to check out Linux From Scratch, given your interest in the tutorial. That tutorial is quite old (2000), and you might face many problems putting the pieces together (which might be a good thing if you enjoy it).
| Why do I need gimptools to build makedev? |
1,371,543,299,000 |
I cloned hulahop via git, and I'd like to use it and try to run this example.
I am having trouble figuring out how to build and install it. I also tried to install it with apt-get, but the package isn't available.
[Xubuntu 10.04]
|
(First things first: I have never used nor installed hulahop, what follows is generic, based on glancing over the source tree.)
To get this straight, there are basically two ways to install something in a Debian(-derived) distribution:
the clean way: via a .deb package and some tool like apt-get, aptitude, dpkg, you can build them yourself or semi-self by using tools like checkinstall (for example, there are probably others); this enables you to use your distribution's tools to remove them and you're thus save from cluttering up your system, what is the danger of...
the manual way, i.e., using whatever is provided in the sources to compile and install it yourself. The important part is choosing where to install it to, usually called "prefix". (A prefix of /usr/local or $HOME or $HOME/.local can be used to separate manually installed packages from the distribution's packages.*****)
So, given you chose (2.), you have to look at the cloned sources, where you'll find:
autogen.sh, a three-line shell script that calls autoreconf, part of the GNU build system, which if run successfully (i.e., you have the necessary build tools, e.g. autotools or build-essenstial packages, not sure about these) creates a configure script. The autogen.sh script then calls ./configure "$@", i.e., it creates a Makefile tailored to your system that is used to compile the sources, in the classical ./configure && make && make install way). If you want to change the prefix, pass --prefix=/the/prefix/you/want to configure (or to autogen.sh since it passes the argument to configure, via $@) -- this is the manual way
a debian/ folder, that contains what is needed to create a .deb package -- the clean way!
When you find this in sources, it can be worthwhile to check if someone already built a deb package, since it's strong evidence for this. Googling "hulahop debian" reveals a Debian package and an Ubuntu package sugar-hulahop. You could use these, or if you still prefer to install the newest sources, you could try again what you were told here, and ask a question including the specific error if it fails.
(***** If you chose a prefix, be sure to tell every involved party, i.e., adjust $PATH if you want your shell to know where to find an executable, do whatever is needed for python to know where to import something from, etc.)
| How can I install hulahop |
1,371,543,299,000 |
While trying to compile ADCIRC, a Fortran program with NetCDF support, I hit the following error.
/usr/bin/ld: cannot find -lhdf5_fortran: No such file or directory
collect2: error: ld returned 1 exit status
I am a little confused as to which file it is looking for as it cannot find any file named lhdf5_fortran.
I have the libhdf5-dev and libhdf5-fortran-102 libraries installed.
I am using Ubuntu.
|
As pointed out by @steeldriver, Ubuntu places libhdf5_fortran under serial, and changing the flag to libhdf5_serial_fortran in the cmplrflags file works.
| ADCIRC - Cannot find -lhdf5_fortran: No such file or directory |
1,371,543,299,000 |
I just got the following error when compiling linux-5.14.2.tar.gz and patch-5.14.2-rt21.patch
on CONFIG_DEBUG_INFO_BTF=y:
AS arch/x86/lib/iomap_copy_64.o
arch/x86/lib/iomap_copy_64.S: Assembler messages:
arch/x86/lib/iomap_copy_64.S:13: Warning: found `movsd'; assuming `movsl' was meant
AR arch/x86/lib/built-in.a
GEN .version
CHK include/generated/compile.h
LD vmlinux.o
ld: warning: arch/x86/power/hibernate_asm_64.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
MODPOST vmlinux.symvers
MODINFO modules.builtin.modinfo
GEN modules.builtin
LD .tmp_vmlinux.btf
ld: warning: arch/x86/power/hibernate_asm_64.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
ld: warning: .tmp_vmlinux.btf has a LOAD segment with RWX permissions
BTF .btf.vmlinux.bin.o
LD .tmp_vmlinux.kallsyms1
ld: warning: .btf.vmlinux.bin.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
ld: warning: .tmp_vmlinux.kallsyms1 has a LOAD segment with RWX permissions
KSYMS .tmp_vmlinux.kallsyms1.S
AS .tmp_vmlinux.kallsyms1.S
LD .tmp_vmlinux.kallsyms2
ld: warning: .btf.vmlinux.bin.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
ld: warning: .tmp_vmlinux.kallsyms2 has a LOAD segment with RWX permissions
KSYMS .tmp_vmlinux.kallsyms2.S
AS .tmp_vmlinux.kallsyms2.S
LD vmlinux
ld: warning: .btf.vmlinux.bin.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
ld: warning: vmlinux has a LOAD segment with RWX permissions
BTFIDS vmlinux
failed: load btf from vmlinux: invalid argument
make: *** [Makefile:1176: vmlinux] Error 255
make: *** Deleting file 'vmlinux'
I know that if CONFIG_DEBUG_INFO_BTF is set to n, compilation does not report errors, but I don't want to set CONFIG_DEBUG_INFO_BTF to n.
From this issue, it seems that the virtual memory is too small, but the reporter of that issue was using a virtual machine, while I am on a physical machine running Debian 12. What should I do?
~/kernel/5.14.2/linux-5.14.2$ free -h
total used free shared buff/cache available
Mem:      7.7Gi       508Mi       3.4Gi       1.2Mi       4.1Gi       7.2Gi
Swap:     976Mi          0B       976Mi
|
I resolved this issue by changing the kernel version.
linux-6.4.tar.gz and patch-6.4.6-rt8.patch.
| failed: load btf from vmlinux: invalid argument make on CONFIG_DEBUG_INFO_BTF=y |
1,371,543,299,000 |
No matter how many times I rebuild the plocate db I get:
/var/lib/plocate/plocate.db: has version 4294967295, expected 0 or 1; please rebuild it.
How in the world did I manage this??
/sbin/updatedb.plocate:
linux-vdso.so.1 (0x0000007f1c7d4000)
libzstd.so.1 => /lib/aarch64-linux-gnu/libzstd.so.1 (0x0000007f1c6c0000)
libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000007f1c490000)
libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000007f1c3f0000)
libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000007f1c3c0000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000007f1c210000)
/lib/ld-linux-aarch64.so.1 (0x0000003000000000)
/bin/plocate:
linux-vdso.so.1 (0x00000077e07be000)
liburing.so.2 => /lib/aarch64-linux-gnu/liburing.so.2 (0x00000077e0760000)
libzstd.so.1 => /lib/aarch64-linux-gnu/libzstd.so.1 (0x00000077e0690000)
libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x00000077e0460000)
libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x00000077e0430000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x00000077e0280000)
/lib/ld-linux-aarch64.so.1 (0x0000003000000000)
libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x00000077e01e0000)
plocate 1.1.15
Copyright 2020 Steinar H. Gunderson
License GPLv2+: GNU GPL version 2 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
updatedb (plocate) 1.1.15
Copyright (C) 2007 Red Hat, Inc. All rights reserved.
This software is distributed under the GPL v.2.
This program is provided with NO WARRANTY, to the extent permitted by law.
|
TL;DR this is due to a limitation in Android/termux/proot but is trivially worked around with a code change in plocate, which is in the 1.1.20 release.
4294967295 (i.e. 0xffffffff) is (uint32_t)-1. This is the value plocate's updatedb uses as a sentinel version number to detect an incompletely written database file.
When plocate's updatedb is generating a database, it first writes out a header block using dummy values (including the bad version number above).
After writing the rest of the database, it goes back to overwrite the header, this time using the correct values (including the correct version value, presently 1).
Since the database output stream is writing to an unlinked file (opened with O_TMPFILE), the file must now be linked at its actual path.
The code is roughly:
/* Open database */
fd = open(path.c_str(), O_WRONLY | O_TMPFILE, 0640);
outfp = fdopen(fd, "wb");
/* Write dummy header. */
...
/* Write database. */
...
/* Write real header */
fseek(outfp, 0, SEEK_SET);
fwrite(&hdr, sizeof(hdr), 1, outfp);
/* Link database path */
snprintf(procpath, sizeof(procpath), "/proc/self/fd/%d", fileno(outfp));
linkat(AT_FDCWD, procpath, AT_FDCWD, outfile.c_str(), AT_SYMLINK_FOLLOW);
fclose(outfp);
This is being executed on Android, which does not allow unprivileged hard link creation, so the linkat above would normally fail.
However, it is executing under proot, which is implementing a handler for linkat(..., "/proc/X/fd/Y", ..., AT_SYMLINK_FOLLOW) when, as in this case, the FD is for an unlinked file.
The linkat "works" because proot's handler copies the contents of the unlinked file, at the time of the linkat call. This may be better than nothing, but any further writes to the original file descriptor do not make it in to the file on the filesystem.
In the case of updatedb, there are no further writes -- but neither is there an fflush between the final fwrite and the linkat call. The lack of fflush would normally be fine, as the subsequent fclose flushes the output buffers. However, with proot's linkat implementation this pattern leads to data loss.
There doesn't seem to be a public bug tracker, but I've reported this to the plocate author. Adding an fflush would resolve the issue and be harmless otherwise. Update: resolved in 994819b.
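As a quick sanity check of the sentinel value discussed above, printing (uint32_t)-1 as an unsigned decimal reproduces exactly the bogus version number from the error message:

```shell
# 0xffffffff is the two's-complement bit pattern of (uint32_t)-1;
# printed as an unsigned decimal it matches the bogus version number.
printf '%u\n' 0xffffffff   # prints 4294967295
```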
| /var/lib/plocate/plocate.db: has version 4294967295, expected 0 or 1; please rebuild it |
1,371,543,299,000 |
I'm attempting to build the Linux Kernel (version 5.16). I know that there's a compile-time option to randomize various structure fields (indicated by macros like randomized_struct_fields_start). However, I'm looking through make menuconfig and I can't find the right option.
|
The options you need to enable are in “General architecture-dependent options”, but they depend on GCC plugins. For the latter to work,
$(gcc -print-file-name=plugin)/include/plugin-version.h
must exist; on Debian for example, that means you need to install gcc-10-plugin-dev.
Once that’s done, enable “GCC plugins”, then “Randomize layout of sensitive kernel structures”:
| Build Linux Kernel with randomized struct fields |
1,371,543,299,000 |
I want to download all (recursive) build dependencies so that I can build an apt (Debian) package from source. However, when I run apt-get install path/*.deb with the debs I got via apt build-dep --download-only --assume-yes <package>, apt finds additional packages to install and fails, even with --no-install-recommends --ignore-missing. My specific issue got no answer on SO. Investigating further, I did not see those additional packages in the output of a successful apt build-dep <package>, so I realized build dependencies must be tracked differently. How?
There are Depends/Suggests/Recommends fields in a deb file, but I have not seen additional fields related to source builds. build-dep found ~150 deb files, but while installing them as packages, apt found additional dependencies.
I've tried to read
Packaging/SourcePackage - Debian Wiki
Source packages provide you with all of the necessary files to compile
or otherwise, build the desired piece of software. It consists, in its
simplest form, of three files:
The upstream tarball with .tar.gz ending
A description file with .dsc ending.
apt source cinnamon-settings-daemon
This gave me cinnamon-settings-daemon_5.0.4+uma.tar.xz; a search found no .dsc file inside. Maybe Linux Mint (the OS I use) modified the Debian implementation?
BuildingTutorial - Debian Wiki
apt provides a way of easily installing all the needed dependencies:
Example 1: node-pretty-ms
sudo apt build-dep node-pretty-ms
However, I have not found a description of how the system keeps track of those.
Inside one of downloaded deb files I got with apt build-dep I do not see additional section with dependencies for building/source:
$ apt show /media/ramdrive/debs/cinnamon-settings-daemon/autoconf_2.69-11.1_all.deb
Package: autoconf
Version: 2.69-11.1
Priority: optional
Section: devel
Origin: Ubuntu
Maintainer: Ubuntu Developers <[email protected]>
Original-Maintainer: Ben Pfaff <[email protected]>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 1905 kB
Depends: perl (>> 5.005), m4 (>= 1.4.13), debianutils (>= 1.8)
Recommends: automake | automaken
Suggests: autoconf-archive, gnu-standards, autoconf-doc, libtool, gettext
Breaks: gettext (<< 0.10.39), pkg-config (<< 0.25-1.1)
Homepage: http://www.gnu.org/software/autoconf/
Task: ubuntustudio-video
Download-Size: 321 kB
APT-Sources: http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
Description: automatic configure script builder
The standard for FSF source packages. This is only useful if you
write your own programs or if you extensively modify other people's
programs.
.
For an extensive library of additional Autoconf macros, install the
`autoconf-archive' package.
.
This version of autoconf is not compatible with scripts meant for
Autoconf 2.13 or earlier.
Added 1:
One of two packages still listed as "additional" during apt-get install --no-install-recommends is libpulse0:i386. Doing
~$ apt-cache rdepends --recurse --no-recommends --no-suggests --no-conflicts --no-breaks --no-replaces --no-enhances libpulse0:i386 # got ~ 1000 lines
find /path_to_debs/cinnamon-settings-daemon -name *.deb | xargs apt-cache show | grep Package | awk '{print $2}' # ~ 160 debs
and using VLOOKUP in LibreOffice Calc, I found that it is a reverse dependency of the to-be-installed pulseaudio and pulseaudio-module-bluetooth, e.g. at about line 300 of the rdepends output:
libcanberra-pulse:i386
ReverseDepends:
pulseaudio
Added 2022/01/06:
I understood the cause of the initial issue; if interested, see https://stackoverflow.com/a/70601238/14557599 and https://unix.stackexchange.com/a/684975/446998. I was not able to reproduce my claim in this question (that I had not seen those additional packages in the output of a successfully run apt build-dep <package>); maybe I ran the command on another system without realizing that the differences between the systems mattered.
|
Build dependencies are set by the package maintainer with a Build-Depends: (and sometimes Build-Depends-Indep:) settings in the debian/control file of the source package.
Depends, Recommends, and Suggestions are needed when a package is installed (or about to be installed), so that data is in the Packages file. Build-Depends* are only needed when the package is being built, so is not.
BTW, as you can see from either downloading the source package or using the package tracker (e.g. https://tracker.debian.org/media/packages/a/autoconf/control-2.71-2) the Build-Depends* settings for autoconf are:
Build-Depends-Indep: texinfo (>= 4.6), m4 (>= 1.4.13),
texlive-base, texlive-plain-generic, texlive-latex-base,
texlive-latex-recommended, texlive-fonts-recommended, help2man, cm-super
Build-Depends: debhelper-compat (= 13)
Also BTW, this is a simplification. It's enough for most packages, but some packages also have Build-Conflicts*: settings for packages that can not be installed for the build to be successful.
If you haven't already, I suggest that you read the Debian New Maintainers' Guide - some of this is specific to Debian package maintainers, but most of it is generic "how do I build a .deb package" info.
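To see the Build-Depends of a source package yourself, you can grep the source record; on a real system with deb-src entries enabled in sources.list you would feed apt-cache showsrc into grep. The fragment below stands in for that output (the "demo" package is hypothetical, used only so the grep step is self-contained):

```shell
# On a real system with deb-src entries enabled you could run:
#   apt-cache showsrc autoconf | grep '^Build-Depends'
# The grep step itself, shown on a hypothetical control fragment:
printf 'Source: demo\nBuild-Depends: debhelper-compat (= 13), libglib2.0-dev\n' |
grep '^Build-Depends'
```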
| How does apt keep track of BUILD (source) dependencies? |
1,371,543,299,000 |
I am trying to rebuild Debian package mousepad without D-Bus on Debian 10.
First, I try building the package without any change.
apt-get source mousepad
cd mousepad-0.4.1
dpkg-buildpackage --build=binary --no-sign
That works.
Now I want to build with D-Bus disabled. I see that Mousepad has the --disable-dbus build option, but where exactly should I put it?
The debian/rules file looks like this:
#!/usr/bin/make -f
export DEB_LDFLAGS_MAINT_APPEND=-Wl,--as-needed -Wl,-O1 -Wl,-z,defs
export DEB_BUILD_MAINT_OPTIONS=hardening=+all
override_dh_missing:
dh_missing --fail-missing
override_dh_autoreconf:
mkdir -p m4
dh_autoreconf
%:
dh $@
When I launch Mousepad on Debian 10, I see the following messages in the log.
dbus-daemon: [session uid=1000 pid=10430] Activating service name='ca.desrt.dconf' requested by ':1.0' (uid=1000 pid=10425 comm="mousepad ")
dbus-daemon: [session uid=1000 pid=10430] Successfully activated service 'ca.desrt.dconf'
Therefore, I believe the standard package Mousepad on Debian 10 does use D-Bus and it starts the dbus-launch binary.
|
The Debian 10 mousepad package is already built without D-Bus support; you can verify this by looking at the build logs for version 0.4.1-2 on amd64 and searching for “D-BUS”:
Build Configuration:
* D-BUS support: no
* Debug Support: minimum
* Use keyfile backend: default
* Build with GTK+ 3: yes
To make this explicit, you need to override the automatic configuration; add this to the end of debian/rules:
override_dh_auto_configure:
dh_auto_configure -- --disable-dbus
Make sure that the second line starts with a tab.
The log messages you’ve found come from dconf, not Mousepad itself; to disable those, you can try switching to the keyfile settings backend:
override_dh_auto_configure:
dh_auto_configure -- --disable-dbus --enable-keyfile-settings
While you’re at it, add a changelog entry to make sure that your package won’t get “upgraded” to the same dbus-enabled version from the repositories:
dch --local +400cat 'Rebuild without dbus.'
dch -r ignored
(this uses dch from the devscripts package).
Now build the package:
dpkg-buildpackage -us -uc
and install it.
This will still result in a binary which depends (indirectly) on libdbus-1.so.3, but that’s because it depends on libgtk-3.so.0, which itself depends on libatk-bridge-2.0.so.0, which depends on libdbus-1.so.3.
If you really want to get rid of D-Bus, you’ll have to rebuild at-spi2-atk, and anything else on your system which build-depends on libdbus-1-dev.
| Remove all traces of D-Bus when running Mousepad in Debian |
1,371,543,299,000 |
I am trying to build the ksmbd kernel module. I tried the tag version:
$ wget https://github.com/namjaejeon/ksmbd/archive/refs/tags/3.2.1.tar.gz
$ tar xvfz 3.2.1.tar.gz
$ cd ksmbd-3.2.1
$ make
[...]
CC [M] /tmp/ksmbd-3.2.1/transport_tcp.o
/tmp/ksmbd-3.2.1/transport_tcp.c: In function ‘create_socket’:
/tmp/ksmbd-3.2.1/transport_tcp.c:484:10: error: incompatible type for argument 4 of ‘sock_setsockopt’
(char __user *)iface->name,
^~~~~~~~~~~~~~~~~~~~~~~~~~
As well as the version from git/master:
$ git clone [email protected]:namjaejeon/ksmbd.git
$ cd ksmbd
$ make
make -C /lib/modules/5.10.0-0.bpo.7-amd64/build M=/tmp/ksmbd modules
make[1]: Entering directory '/usr/src/linux-headers-5.10.0-0.bpo.7-amd64'
make[3]: *** No rule to make target '/tmp/ksmbd/ksmbd_spnego_negtokeninit.asn1.c', needed by '/tmp/ksmbd/ksmbd_spnego_negtokeninit.asn1.o'. Stop.
make[2]: *** [/usr/src/linux-headers-5.10.0-0.bpo.7-common/Makefile:1845: /tmp/ksmbd] Error 2
make[1]: *** [/usr/src/linux-headers-5.10.0-0.bpo.7-common/Makefile:185: __sub-make] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-5.10.0-0.bpo.7-amd64'
make: *** [Makefile:47: all] Error 2
What's the trick to generate those *.asn1.c files?
For reference:
$ cat Makefile
[...]
$(obj)/asn1.o: $(obj)/ksmbd_spnego_negtokeninit.asn1.h $(obj)/ksmbd_spnego_negtokentarg.asn1.h
$(obj)/ksmbd_spnego_negtokeninit.asn1.o: $(obj)/ksmbd_spnego_negtokeninit.asn1.c $(obj)/ksmbd_spnego_negtokeninit.asn1.h
$(obj)/ksmbd_spnego_negtokentarg.asn1.o: $(obj)/ksmbd_spnego_negtokentarg.asn1.c $(obj)/ksmbd_spnego_negtokentarg.asn1.h
|
On Fedora and RHEL, the ksmbd external module build works because the kernel-devel packages ship all the relevant tools, in particular asn1_compiler.
There is no equivalent package in Debian, so the only way to build ksmbd is to use the full kernel source, and the simple option is to build it in the kernel tree:
sudo apt install linux-source-5.10
cd $(mktemp -d)
tar xf /usr/src/linux-source-5.10.tar.xz
cd linux-source-5.10/fs
git clone https://github.com/namjaejeon/ksmbd
Make the necessary changes to fs/Kconfig and fs/Makefile, then
cd ..
make allmodconfig
make fs/ksmbd/ksmbd.ko
| Building ksmbd on Debian Buster (+bpo) |
1,371,543,299,000 |
Apologies if this has already been answered; I am having trouble finding an existing post (either on SE or linux forums) which solves the issue.
I need to install the package(s) that enables the -lSM and -lICE linker options for compiling some C/C++ code that uses plotting libraries (see here for an example: C Compiling and Linking).
Here's a snippet of the error messages I'm getting:
/usr/bin/ld: cannot find -lSM
/usr/bin/ld: cannot find -lICE
collect2: error: ld returned 1 exit status
I am quite certain that the issue is simply that a package is not installed. What is the name of the package? I am running CentOS 7/Red Hat.
|
You are looking for libSM.so and libICE.so, provided by the libSM-devel and libICE-devel packages.
Basically, if you are linking with -l<something>, look in /usr/lib64/lib<something>.so. An even faster result is to skip the step of finding the package name and run:
yum install /usr/lib64/lib<something>.so
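The -l<name> to lib<name>.so mapping described above can be sketched with a trivial loop (the paths are the standard CentOS 7 64-bit location; the loop just prints the file the linker will look for):

```shell
# For each -l flag, print the shared-library file name the linker searches for
for flag in SM ICE; do
    echo "-l$flag -> /usr/lib64/lib$flag.so"
done
```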
| What Centos package contains the libraries for -lSM -lICE linker options? |
1,593,247,601,000 |
On a Linux server I have a script that does a curl call and returns output as below:
Script:
/usr/bin/curl -k -s https://example.com:18080/seriessnapshot?substringSearch=OpenFin%20Memory | cut --characters=44-51 | sort --unique | sed -e 's/iessnaps//g' -e '/^$/d'
Output:
AP711671
AP714628
AP715911
AP716960
AP717267
AP717938
AP718017
AP718024
AP721570
AP721875
AP722002
AP722622
I then need to build a URL from the output for each AP number. For example, I would need the output below for each AP number:
http://apRandomNumber.com:1025/
Everything but the AP number is static; the only dynamic part of the URL is the AP number.
Would it be possible to do this from the same script I use to return the AP numbers, and if so, how can I incorporate it into that script?
|
The simple and easy way is to replace your sed command with
sed -n -E 's|^AP([[:digit:]]+)$|http://ap\1.ztb.icb.commerzbank.com:1025/|p'
-n suppresses the printing of lines so we have better control over which lines actually get printed at the end
-E enables extended regular expressions which make the rest easier
^AP([[:digit:]]+)$ matches a whole line starting with AP and followed by digits; it puts the part between the () into \1. If you had a more complex pattern with several () groups they would end up in \2 etc.
the value/content of \1 is then inserted directly into the replacement
p at the end prints the line (so it prints only those lines where the substitution actually took place)
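An equivalent awk version, shown here on hypothetical sample AP numbers and an example domain in place of the real one:

```shell
# sub() strips the leading AP and returns 1 only when a line matched,
# so non-matching lines are silently dropped (like sed -n ... p)
printf 'AP711671\nAP714628\n' |
awk 'sub(/^AP/, "") { printf "http://ap%s.example.com:1025/\n", $0 }'
```

In the real script you would pipe the existing sed output into this awk stage (or replace the sed entirely).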
| Build a URL from output of shell script |
1,593,247,601,000 |
I was trying to compile GCC 9.2 against a custom built GLIBC 2.30. I have installed GLIBC in a non-standard location. Then I have followed these steps to compile GCC:
sfinix@multivac:~$ GLIBCDIR=/home/sfinix/programming/repos/glibc/glibc-install/
sfinix@multivac:~$ export LDFLAGS="-Wl,-q"
sfinix@multivac:~$ CFLAGS="-L "${GLIBCDIR}/lib" -I "${GLIBCDIR}/include" -Wl,--rpath="${GLIBCDIR}/lib" -Wl,--dynamic-linker="${GLIBCDIR}/lib/ld-linux-x86-64.so.2""
sfinix@multivac:~$ cd ${GCC_BUILD_DIR}
sfinix@multivac:~$ make -j 4 CFLAGS="${CFLAGS}" CXXFLAGS="${CFLAGS}"
The compilation was successful but the problem is GCC is still picking up the old library:
sfinix@multivac:~$ ldd programming/repos/gcc/gcc-install/bin/gcc-9.2
linux-vdso.so.1 (0x00007ffc3b7cb000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f177772f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f177733e000)
/lib64/ld-linux-x86-64.so.2 (0x00007f1777acd000)
Output of readelf -d programming/repos/gcc/gcc-install/bin/gcc-9.2:
Dynamic section at offset 0x113dd8 contains 27 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libm.so.6]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x0000000000000001 (NEEDED) Shared library: [ld-linux-x86-64.so.2]
0x000000000000000c (INIT) 0x402a80
0x000000000000000d (FINI) 0x488440
0x0000000000000019 (INIT_ARRAY) 0x712de8
0x000000000000001b (INIT_ARRAYSZ) 48 (bytes)
0x000000000000001a (FINI_ARRAY) 0x712e18
0x000000000000001c (FINI_ARRAYSZ) 8 (bytes)
0x0000000000000004 (HASH) 0x4002b0
0x000000006ffffef5 (GNU_HASH) 0x400728
0x0000000000000005 (STRTAB) 0x4015f0
0x0000000000000006 (SYMTAB) 0x400798
0x000000000000000a (STRSZ) 1373 (bytes)
0x000000000000000b (SYMENT) 24 (bytes)
0x0000000000000015 (DEBUG) 0x0
0x0000000000000003 (PLTGOT) 0x714000
0x0000000000000002 (PLTRELSZ) 3264 (bytes)
0x0000000000000014 (PLTREL) RELA
0x0000000000000017 (JMPREL) 0x401dc0
0x0000000000000007 (RELA) 0x401d00
0x0000000000000008 (RELASZ) 192 (bytes)
0x0000000000000009 (RELAENT) 24 (bytes)
0x000000006ffffffe (VERNEED) 0x401c80
0x000000006fffffff (VERNEEDNUM) 2
0x000000006ffffff0 (VERSYM) 0x401b4e
0x0000000000000000 (NULL) 0x0
Though this approach is working for other programs I am compiling myself for testing:
sfinix@multivac:~$ GLIBDIR=/home/sfinix/programming/repos/glibc/glibc-install/
sfinix@multivac:~$ vim test.c
sfinix@multivac:~$ CFLAGS="-L ${GLIBDIR}/lib -I ${GLIBDIR}/include -Wl,--rpath=${GLIBDIR}/lib -Wl,--dynamic-linker=${GLIBDIR}/lib/ld-linux-x86-64.so.2"
sfinix@multivac:~$ gcc -Wall -g ${CFLAGS} test.c -o run
sfinix@multivac:~$ ldd run
linux-vdso.so.1 (0x00007ffd616d5000)
libc.so.6 => /home/sfinix/programming/repos/glibc/glibc-install//lib/libc.so.6 (0x00007f5fcdc6e000)
/home/sfinix/programming/repos/glibc/glibc-install//lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f5fce22a000)
What am I missing? How can I compile GCC against a custom GLIBC? How can I pass compiler and linker flags?
|
According to the Autoconf manual, in the GNU build system compiler/linker flags and options are passed through the configure script. So in my case, I should configure, compile and install in the following way:
$ GLIBCDIR=/home/sfinix/programming/repos/glibc/glibc-install/
$ LDFLAGS="-Wl,-q"
$ CFLAGS="-L ${GLIBCDIR}/lib -I ${GLIBCDIR}/include -Wl,--rpath=${GLIBCDIR}/lib -Wl,--dynamic-linker=${GLIBCDIR}/lib/ld-linux-x86-64.so.2"
$ mkdir ~/gcc-build
$ cd ~/gcc-build
$ ~/gcc-src/configure --prefix=~/gcc-install CFLAGS=${CFLAGS} CXXFLAGS=${CFLAGS} LDFLAGS=${LDFLAGS}
$ make && make install
In the configure script I have only passed the variables/options that are relevant to the question. You may want to pass more options according to your specific needs. You can see all the options and accepted variables by running ~/gcc-src/configure --help. You can also pass flags through environment variables, but you have to set them before running the configure script.
| Compiling GCC against a custom built GLIBC |
1,593,247,601,000 |
I'm trying to compile fIcy (https://gitlab.com/wavexx/fIcy) for NetBSD/FreeBSD.
When I'm executing the make command nothing happens. Even no error message.
The same source package compiles without problems with Debian 10.
Is the Makefile even compatible with BSD?
https://gitlab.com/wavexx/fIcy/blob/master/Makefile
The commands I used so far on FreeBSD 12:
pkg install gcc
wget https://gitlab.com/wavexx/fIcy/-/archive/master/fIcy-master.tar.gz
tar xfvz fIcy-master.tar.gz
cd fIcy-master
make
type make
make is /usr/bin/make
|
You should use GNU's make as README.rst says:
pkg install gmake
Once you've installed any other dependencies, you should run
gmake all
(Note g is the first letter.)
This works for me, but if you get any error message please post/edit it.
Note: GNU make and FreeBSD make aren't compatible. They can both work as POSIX make but have different extensions.
| How to compile fIcy for BSD? |
1,593,247,601,000 |
I am trying to get QuantLib version 1.13 running on Amazon Linux.
I found some .rpm files at https://pkgs.org/download/QuantLib, although there is an up to date .rpm for Fedora, there isn't one for CentOS (the CentOS files seem to be compatible with Amazon Linux).
I was able to successfully build the library from source, however when I do so it creates a 1.2GB libQuantLib.a file and a 421MB libQuantLib.so.0.0.0 file.
The .rpm files at https://pkgs.org/download/QuantLib are all ~25MB.
Ultimately I am trying to pack QuantLib well enough that I can run it in an AWS Lambda environment. This would require the compressed binaries to be ~50MB and be compatible with the Amazon Linux AMI for Lambda.
My question:
Why is there such a discrepancy between the size of the .rpm file and the libQuantLib.a / libQuantLib.so.0.0.0 files that result when I build from source? Is the .rpm file not a full version of the library? Does the result of my build contain a lot of fluff?
Is it possible to build from source and achieve the ~25MB size or is this effort fruitless?
|
Most likely your hand-built libraries include debugging information; that's why they are so big. You can try strip libQuantLib.so.0.0.0 and see how much smaller it gets.
You can try to rebuild the official RPM for Amazon Linux 2 like this:
Download the source RPM (QuantLib-1.4-7.el7.src.rpm)
Install rpm-build package (or rpmbuild? not quite sure what's the name on AL2)
Run rpmbuild --rebuild QuantLib-1.4-7.el7.src.rpm and if everything goes right you should have the QuantLib-...x86_64.rpm built for Amazon Linux 2 after a while.
There may be some dependency issues. If you're not familiar with building RPMs feel free to follow up here or open another question.
However this should get you started. Good luck with it :)
Update - building without QuantLib-doc package.
As per the comment below building the QuantLib-doc requires a lot of extra dependencies. To rebuild it without doc do the following:
Download source RPM, e.g. to /tmp
In an empty directory run rpm2cpio /tmp/QuantLib-...src.rpm
Edit QuantLib.spec and comment out %package doc, %description doc and %files doc sections
Build the RPM with rpmbuild -ba QuantLib.spec
That should remove the need to install the many dependencies.
| Building QuantLib on Amazon Linux |
1,593,247,601,000 |
I'm trying to compile nginx... configure seems OK, but when I type make I get the following error:
make
make -f objs/Makefile
make[1]: Entering directory '/home/paul/src/ngxbuild/nginx-1.14.0'
cc -o objs/ngx_http_perl_module.so \
objs/src/http/modules/perl/ngx_http_perl_module.o \
objs/ngx_http_perl_module_modules.o \
-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC -Wl,-E -fstack-protector-strong -L/usr/local/lib -L/usr/lib/x86_64-linux-gnu/perl/5.26/CORE -lperl -ldl -lm -lpthread -lc -lcrypt \
-shared
/usr/bin/x86_64-linux-gnu-ld: cannot find -lperl
collect2: error: ld returned 1 exit status
objs/Makefile:1670: recipe for target 'objs/ngx_http_perl_module.so' failed
make[1]: *** [objs/ngx_http_perl_module.so] Error 1
make[1]: Leaving directory '/home/paul/src/ngxbuild/nginx-1.14.0'
Makefile:8: recipe for target 'build' failed
make: *** [build] Error 2
So the error is cannot find -lperl.
The problem seems to be that in /usr/lib/x86_64-linux-gnu/perl/5.26/ there's no CORE directory.
I use Ubuntu Server 18.04, updated and upgraded, and I can't find a way to fix this...
Debian Stretch contains Perl 5.24.1 and the CORE directory exists in /usr/lib/x86_64-linux-gnu/perl/5.24/, so why doesn't Ubuntu install the CORE directory, and how can I install it?
|
The cannot find -lperl error means the linker cannot find libperl.
On Debian Stretch this is provided by libperl-dev
% apt-file list libperl-dev
libperl-dev: /usr/lib/x86_64-linux-gnu/libperl.a
libperl-dev: /usr/lib/x86_64-linux-gnu/libperl.so
libperl-dev: /usr/share/doc/libperl-dev/README.cross
libperl-dev: /usr/share/doc/libperl-dev/changelog.Debian.gz
libperl-dev: /usr/share/doc/libperl-dev/copyright
I would expect a similar package to be needed on Ubuntu.
| I can't find module "core" in perl dir on ubuntu 18.x |
1,593,247,601,000 |
This is my first attempt to build netplan.io on Debian 9, and I get the errors below. Can you tell me what I need to install?
Package yaml-0.1 was not found in the pkg-config search path.
Perhaps you should add the directory containing
`yaml-0.1.pc'
to the PKG_CONFIG_PATH environment variable
No package 'yaml-0.1' found
Package uuid was not found in the pkg-config search path.
Perhaps you should add the directory containing
`uuid.pc'
to the PKG_CONFIG_PATH environment variable
No package 'uuid' found
Neither of the pkgconfig directories under /usr contains these .pc files, and the .pc files do not show up in a wider search.
Please, what do I need to install?
There is also this :
src/generate.c:24:18: fatal error: glib.h: No such file or directory
#include <glib.h>
however there is a glib-2.0.pc in the /usr/lib/x86_64-linux-gnu/pkgconfig/ directory. Is there a misconfiguration?
Thanks
|
After taking
https://github.com/CanonicalLtd/netplan.git
Need to install :
libyaml-dev
libc6-dev
libglib2.0-dev
pandoc
uuid-dev
Thanks to @dsstorefile and @user996142
To run netplan will require :
pip3 install pyyaml
For Debian 10, need also
pip3 install netifaces
| debian netplan install: library missing. what to install for these .pc files |
1,593,247,601,000 |
I need to generate the libssl* and libcrypto* binaries for use on a different system. I wrote a trivial script doing it
#!/bin/bash
set -evx
OPENSSL_VERSION="1.0.2l"
TARGET=openssl-linux64
/bin/rm -fr $TARGET
curl -O -L http://www.openssl.org/source/openssl-$OPENSSL_VERSION.tar.gz
tar -xvzf openssl-$OPENSSL_VERSION.tar.gz
mv openssl-$OPENSSL_VERSION $TARGET
cd $TARGET
./Configure linux-x86_64 -shared
make
Everything seems to work; in the end I get the two libraries. Unfortunately, they're called libssl.so.1.0.0 and libcrypto.so.1.0.0. I'm rather confused...
Is this just chaotic versioning, or what's going on?
How can I find out what exactly was produced? Should I trust it?
In case it matters: My system is "Linux 4.4.0-116-generic #140-Ubuntu SMP Mon x86_64 GNU/Linux".
|
Yes, that is correct, the libraries will have version 1.0.0 even though the software package will have version 1.0.2l. That's because all versions 1.0.x of the software implement the same API (same functions with the same function signatures/prototypes), so the libraries should be versioned the same since users of those libraries can use these versions interchangeably.
The version of the libraries is defined here in the source tree. There's a comment just above that define that explains it a little bit further.
I hope that answers your question.
| Confused by openssl make |
1,593,247,601,000 |
I'm working in a team, which develops a c++ (ROS) project.
For some reason, we don't have good git management.
We have several git branches. To compile the project, I have to git clone the codes from each branch and rearrange the structure of directories.
First, I mkdir -p /home/me/repo, then I git clone the codes from each branch and put all of them into /home/me/repo.
Now I need to rearrange the structure, here is what I've done:
#!/bin/sh
mkdir -p /home/me/project/src
cd /home/me/project/src
catkin_init_workspace # command of ROS to initialize a workspace
cp -r /home/me/repo/robot_dev/control .
cp -r /home/me/repo/robot_dev/control_algo .
cp -r /home/me/repo/sim/third_party .
cp -r /home/me/repo/planning .
cp -r /home/me/repo/robot_dev/cognition/hdmap .
cd ..
catkin_make # command of ROS to compile the project
I created such a script to compile the project and it worked. As you can see, I simply copied and rearranged some directories and compiled.
Now I'm thinking that cp -r is not a good idea because it took too much time. I want to use ln -s to do the same thing. So I wrote another script as below:
#!/bin/sh
mkdir -p /home/me/project2/src
cd /home/me/project2/src
catkin_init_workspace # command of ROS to initialize a workspace
ln -s /home/me/repo/robot_dev/control control
ln -s /home/me/repo/robot_dev/control_algo control_algo
ln -s /home/me/repo/sim/third_party third_party
ln -s /home/me/repo/planning planning
ln -s /home/me/repo/robot_dev/cognition/hdmap hdmap
cd ..
catkin_make # command of ROS to compile the project
However, I got an error:
CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkin_package.cmake:297 (message):
catkin_package() absolute include dir
'/home/me/project2/src/hdmap/../third_party/ubuntu1604/opencv-3.3.1/include'
I've checked cd /home/me/project2/src/hdmap/../third_party/ubuntu1604/opencv-3.3.1/include, it does exist.
Is there some reason why the project compiles with cp -r but not with ln -s?
|
Accessing .. doesn't really work as you expect when symlinks are involved...
And when you try that in bash, bash tries to be "helpful" and fixes it for you, so the problem does not become apparent.
But, in short, when you go to /home/me/project2/src/hdmap/../third_party, the kernel will first resolve the symlink of "hdmap", getting to /home/me/repo/robot_dev/cognition/hdmap, then look up .. which means the parent directory of that hdmap directory, so /home/me/repo/robot_dev/cognition and then it will try to find a third_party in there.
Considering /home/me/repo/robot_dev/cognition/third_party does not exist (or, if it does, it's not the same as /home/me/repo/sim/third_party, which is what you want), you get a file not found error.
Bash keeps a $PWD variable with the path stored as a string, and so it can help "resolve" the .. references in the shell itself before it passes the kernel a path... That way, it will hide these details from you.
You can disable that behavior using set -P (or set -o physical) in bash. (See bash man page for more details.)
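Here is a small reproduction of that behaviour with throwaway directories (the names are made up for illustration):

```shell
# Set up a symlink that mimics the hdmap link from the question:
base=$(mktemp -d)
mkdir -p "$base/repo/cognition/hdmap" "$base/repo/sim/third_party" "$base/work"
ln -s "$base/repo/cognition/hdmap" "$base/work/hdmap"

# The kernel resolves the symlink first, so ".." climbs out of the target:
readlink -f "$base/work/hdmap/.."   # -> $base/repo/cognition, not $base/work

# A plain "cd" hides this, because the shell edits its $PWD string instead:
cd "$base/work/hdmap/.." && pwd     # logical path:  $base/work
pwd -P                              # physical path: $base/repo/cognition
```

cd -P (or set -P) makes the shell agree with what the kernel — and therefore the build tool — actually sees.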
In order to help you with your underlying problem... Perhaps a decent solution is to use cp -rl to copy the trees. The -l option to cp creates hardlinks. That should not be a problem in this case, particularly as I imagine you're not expecting to modify the files. It will still have to traverse the data structure and create each object individually, but it will not have to create any contents...
If you're working on a modern filesystem (such as Btrfs), you can also try cp -r --reflink which creates a CoW (copy-on-write) copy. It's essentially a hardlink, but slightly better since there's no connection between the two names, they're just sharing blocks on the backend, touching one of the files will actually fork them into two separate files. (But I imagine hardlinks are probably good enough for you.)
Perhaps there are some tricks you could do with git, in order to just expose the directory you need in each step... So you can actually clone the parts you need... But that might be harder to accomplish or to maintain... Hopefully cp -rl will be enough for you!
| Why can't a project compile with symbol link |
1,593,247,601,000 |
I am trying to build binutils on platform A, which will be run on platform A and targets platform B.
I have the source of binutils in /home/cedric/source/binutils-2.29/, and I perform the following 2 builds:
cd /home/cedric/source/
mkdir default/ && cd default/
mkdir build/ install/ && cd build/
../../binutils-2.29/configure --prefix=/home/cedric/source/default/install/
make
make install
and
cd /home/cedric/source/
mkdir lfs/ && cd lfs/
mkdir build/ install/ && cd build/
../../binutils-2.29/configure --prefix=/home/cedric/source/lfs/install/ --target=x86_64-lfs-linux-gnu
make
make install
It turns out that there are 2 more directories in default/install than in lfs/install, i.e. lib, which contains bfd and opcodes static libraries, and include, which contains some headers.
My Confusion
Why are these static libraries and headers installed (what are they intended for, since binutils tools like ld and as etc. are already built)? And why do they disappear when building for another platform?
Research that I've done so far didn't eliminate my confusion:
binutils-2.29/configure --help and README in binutils-2.29/, binutils-2.29/binutils/, and binutils-2.29/bfd/
FWIW, I found the following in the configure help, which has something to do with directory it may create and their default value:
Installation directories:
--prefix=PREFIX install architecture-independent files in PREFIX
[/usr/local]
--exec-prefix=EPREFIX install architecture-dependent files in EPREFIX
[PREFIX]
By default, `make install' will install all the files in
`/usr/local/bin', `/usr/local/lib' etc. You can specify
an installation prefix other than `/usr/local' using `--prefix',
for instance `--prefix=$HOME'.
For better control, use the options below.
Fine tuning of the installation directories:
--bindir=DIR user executables [EPREFIX/bin]
--sbindir=DIR system admin executables [EPREFIX/sbin]
--libexecdir=DIR program executables [EPREFIX/libexec]
--sysconfdir=DIR read-only single-machine data [PREFIX/etc]
--sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com]
--localstatedir=DIR modifiable single-machine data [PREFIX/var]
--libdir=DIR object code libraries [EPREFIX/lib]
--includedir=DIR C header files [PREFIX/include]
--oldincludedir=DIR C header files for non-gcc [/usr/include]
--datarootdir=DIR read-only arch.-independent data root [PREFIX/share]
--datadir=DIR read-only architecture-independent data [DATAROOTDIR]
--infodir=DIR info documentation [DATAROOTDIR/info]
--localedir=DIR locale-dependent data [DATAROOTDIR/locale]
--mandir=DIR man documentation [DATAROOTDIR/man]
--docdir=DIR documentation root [DATAROOTDIR/doc/PACKAGE]
--htmldir=DIR html documentation [DOCDIR]
--dvidir=DIR dvi documentation [DOCDIR]
--pdfdir=DIR pdf documentation [DOCDIR]
--psdir=DIR ps documentation [DOCDIR]
|
The short answer is that you’d expect lib to be built (or rather, installed) if there’s a library to install, and include if there are headers to install. Usually the two go together (at least for C and C++ libraries).
In binutils’ case, the libraries and associated headers are libbfd and libopcodes, as you mention. libbfd is a library used to manipulate object files in a variety of formats, libopcodes is a library used to map opcodes to instructions. libbfd is installed by default for same-host compilation, but not for cross-compilation, which is why you’re seeing different behaviour in your scenarios. You can see the conditional default in bfd/acinclude.m4 in the source code. Both libraries should be built in all cases.
You only need libbfd if you want to build GDB. If you do want to install it in a cross-compilation scenario, you can tell ./configure to do so with the --enable-install-libbfd option; when you do this, the libraries and header files will be installed in the appropriate host- and target-specific directory (libbfd is built for the host but contains target-specific code).
| When should "lib" and "include" directories be built when compiling from source? |
1,593,247,601,000 |
I am trying to install an R package (mongolite) on FreeBSD (FreeBSD 11.0-RELEASE-p9 amd64) and I am getting an error while compiling C++ source files.
The error is the following: error: 'SYS_gettid' undeclared.
Any idea for how to go about this problem?
|
The SYS_xxxx defines contain the numbers of system calls on Linux. They're used mostly when making raw system calls through the syscall(2) wrapper instead of the usual glibc wrapper functions. In the case of gettid(), glibc doesn't contain a wrapper for the system call, so it has to be called manually.
gettid() returns the thread ID on Linux, and it appears it doesn't have a direct equivalent on FreeBSD.
So, given the error, it seems the software you're trying to compile has a hard requirement for Linux, and the appropriate course of action would be to file a bug report to get it ported to FreeBSD. They should probably use pthreads or some such.
| FreeBSD: undeclared SYS_gettid |
1,593,247,601,000 |
I just compiled bash 4.4.12 for my arm-based NAS.
The NAS has a custom environment. The prefix is /ffp/ instead of /usr/.
When starting the freshly compiled bash, /ffp/etc/profile is not being read/sourced.
Is this a ./configure flag, or do I need to specify this elsewhere?
My compile config. looks like this:
#!/usr/bin/env bash
OLANG=$LANG
export LANG="C"
[ -x ./autogen.sh ] && ./autogen.sh
if [ -x ./configure ]; then
./configure \
--prefix=/ffp/ \
--bindir=/ffp/bin/ \
--sbindir=/ffp/sbin/ \
--sysconfdir=/ffp/etc \
--localstatedir=/ffp/var \
--libdir=/ffp/lib/ \
--includedir=/ffp/include/ \
--datarootdir=/ffp/share/\
--libexecdir=/ffp/libexec/
fi
make
make strip
make install
export LANG=$OLANG
|
It's hardcoded in pathnames.h.in:
/* The default login shell startup file. */
#define SYS_PROFILE "/etc/profile"
You can just replace it with what you want and rebuild.
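For example, you could patch the define with sed before building. The snippet below demonstrates the edit on a stand-in file reduced to the line of interest; in the real bash source tree you would run the same sed command against pathnames.h.in before configure/make.

```shell
# Stand-in for bash's pathnames.h.in, reduced to the relevant define:
cat > pathnames.h.in <<'EOF'
/* The default login shell startup file. */
#define SYS_PROFILE "/etc/profile"
EOF

# Point the default system profile at the NAS prefix instead:
sed -i 's|#define SYS_PROFILE "/etc/profile"|#define SYS_PROFILE "/ffp/etc/profile"|' pathnames.h.in
grep SYS_PROFILE pathnames.h.in
# -> #define SYS_PROFILE "/ffp/etc/profile"
```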
| Compiling bash from source in a custom env - custom profile is not being read |
1,593,247,601,000 |
Do common compilation tools like make, cmake, premake, gmake, rake, etc. generate buildinfo files by default, or is support for them missing? Or, if they do have support, do they need some flags to make sure that .buildinfo files are generated?
I am looking at three tools at the moment -
a. make
b. cmake
c. premake
|
buildinfo files are generated by dpkg-genbuildinfo on Debian systems, build tools aren't involved. See the manpage for details.
| Can any compilation tools generate .buildinfo info by default or needs some flags? |
1,593,247,601,000 |
I am trying to build GLIBC 2.14 from source. I keep getting the error message that cpuid.h was not found.
What does that file contain and what purpose does it serve ?
I have seen that cpuid.h is placed in two different locations ie.
/usr/lib/gcc/x86_64-redhat-linux/4.3.0/include/cpuid.h
/usr/src/kernels/2.6.25-14.fc9.x86_64/include/config/x86/cpuid.h
Also I have seen that these two files are different running a simple diff tells me that -:
diff --brief /usr/lib/gcc/x86_64-redhat-linux/4.3.0/include/cpuid.h /usr/src/kernels/2.6.25-14.fc9.x86_64/include/config/x86/cpuid.h
Files /usr/lib/gcc/x86_64-redhat-linux/4.3.0/include/cpuid.h and /usr/src/kernels/2.6.25-14.fc9.x86_64/include/config/x86/cpuid.h differ
What is cpuid.h, what does it normally contain and what purpose does it serve ?
|
cpuid.h contains definitions of assembly-language fragments to get low-level info out of certain CPUs, plus names for various numeric constants that a program might use to figure out what kind of CPU it was running on, and what features are available. (For example, if the program wanted to use special matrix-math instructions available on some CPUs, it could check whether the instructions were available before trying to use them. If the program was running on an older CPU, it could use software emulation of those instructions instead.)
The quick-n-dirty way to move ahead in your compilation project is just to copy one of the cpuid.h files you've found to a place where the compiler can find it. (Maybe try cp /usr/lib/gcc/x86_64-redhat-linux/4.3.0/include/cpuid.h /usr/include) It might not be the right file for your CPU, but most library routines don't need the cpuid information anyway, so it may let you make useful progress. However, the library you build might not work in 100% of the places you expect it to, if you ever hit a function that tries to look up CPU info.
| What does the file cpuid.h do? |
1,593,247,601,000 |
I am trying to compile efivar-0.23 for my LFS, but when I untar it and run the following command:
make libdir="/usr/lib/" bindir="/usr/bin/" mandir="/usr/share/man/" includedir=/usr/include/" V=1 -j1
I get an error that NVME_IOCTL_ID is undeclared. I have searched all over the internet for the answer, but the only thing I have found is that I need to patch the file. I have found several patches, but nothing helps (maybe I am installing them incorrectly...). This is the last patch that I have tried: http://patchwork.openembedded.org/patch/117073/.
I entered the untarred efivar directory and executed: patch -Np1 ../efivar.patch, but it didn't do anything. It seemed like it was doing something, but nothing happened.
I have tried patch < ../efivar.patch from the untarred directory, but then the system started asking questions...
System: File to patch:
Me: Makefile
System: patching file Makefile
Hunk #1 FAILED at 12.
1 out of 1 hunk FAILED -- saving rejects to file Makefile.rej
The next patch would delete the file efivar-drop-options-not-supported-by-lower-version-gcc.patch,
which does not exist! Assume -R? [n]
Me: y
System: patching file efivar-drop-options-not-supported-by-lower-version-gcc.patch
The next patch would delete the file efivar_0.21.bb,
which does not exist! Assume -R? [n]
Me: y
patching file efivar_0.21.bb
patching file efivar_0.23.bb
I have tried different combinations of answers and different patches. I have also manually typed the needed changes into the files to be patched, to be sure they contain what is needed (because I am unsure whether this patching works).
So basically I am at the same point, with the undeclared variable and a lot of time wasted, not knowing what to do. Any ideas?
|
efivar version 0.23 needs a patch to work with kernel headers from 4.4 (and later kernels), because the header defining NVME_IOCTL_ID changed (it was renamed from nvme.h to nvme_ioctl.h).
To build efivar on your system, you'll need the "Workaround rename of linux/nvme.h" patch. To apply that, go into the directory containing the efivar source code (with the 0.23 source, and no changes), and run
curl https://github.com/rhinstaller/efivar/commit/3a0ae7189fe96355d64dc2daf91cf85282773c66.patch | patch -p1
Then you should be able to build efivar correctly with kernel 4.4 headers.
Given that you have an nvme.h header file though, you'll probably still have problems with NVME_IOCTL_ID at this point. You can apply another patch which avoids using it altogether, "libefiboot: rework NVME so we get EUI right and don't need kernel headers" (this patch requires the previous one):
curl https://github.com/rhinstaller/efivar/commit/8910f45c27fadba0904f707e7c40ad80bf828f7e.patch | patch -p1
With these two patches you can build efivar regardless of where (and whether) your kernel headers define NVME_IOCTL_ID.
| How to compile efivar? |
1,593,247,601,000 |
A common thing for people who work on machines without root rights is to locally build your own little suite of your favorite tools.
The workflow goes a little something like this:
tar xvzf fav_tool.tar.xvzf
cd fav_tool
./configure --prefix=$HOME
make install
It tends to get a little messy. You get folders like bin/, etc/, include/, lib/ and source/ in your home folder. You have to manage dependencies and $PATH. Is there a smart way to do this? Is there a tool to keep track of your packages locally?
|
It tends to get a little messy. You get folders like bin/, etc/, include, lib/ and source/ in your home folder.
By choice, yes. If that seems untidy, you can use
./configure --prefix=$HOME/mytools
Instead. You will then need to add that to your $PATH, or, if $HOME/bin is already part of it, you could move everything currently there into $HOME/mytools/bin and
rmdir ~/bin
ln -s ~/mytools/bin ~/bin
If your tools put stuff in ~/mytools/lib, you'll also want to set LD_LIBRARY_PATH appropriately somewhere.
If you use whatever initialization file you normally use for setting env variables, this only needs to be done once and takes about a minute to do.
You have to manage dependencies
If by "manage" you mean resolve for the purpose of installation, that is a major purpose of ./configure. If it doesn't do this right, a third party tool is unlikely to do it much better. It might, but it might also do worse.
If you mean something else, there really isn't anything else involved in the concept of "dependency". If you mean you want something that resolves this, downloads the dependency, and installs it for you, that's what normal package managers are for -- but remember you decided you didn't want pre-built binaries. You are building from source.
It is very unfortunate that linux package managers are mostly unfriendly or useless to unprivileged users, but that is a separate issue (and a separate question, to which there are various answers depending on the distro).
Is there tool to keep track of your packages locally?
Yes, the source packages themselves. When you build, unpack in ~/mytools/src. You can leave the build directory there, or just the tarball. When you want to uninstall something, you just go into the relevant directory (unpacking it again if necessary) and make uninstall.
The src directory is never used by the system for anything. It contains only what you put in it, and so as long as you don't delete your downloads, you have a nice tidy list of local installed source built software, with the sources actually used to build it all.
| Organize local builds |
1,593,247,601,000 |
I am trying to install the necessary dependencies for building Chromium on Ubuntu 14.04, and I am facing the following message:
The following packages have unmet dependencies:
g++-4.8-multilib : Depends: gcc-4.8-multilib (= 4.8.2-19ubuntu1) but it is not going to be installed
Depends: lib32stdc++-4.8-dev (= 4.8.2-19ubuntu1) but it is not going to be installed
Depends: libx32stdc++-4.8-dev (= 4.8.2-19ubuntu1) but it is not going to be installed
lib32gcc1 : Depends: gcc-4.9-base (= 4.9-20140406-0ubuntu1) but 4.9.1-0ubuntu1 is to be installed
libbluetooth-dev : Depends: libbluetooth3 (= 4.101-0ubuntu13) but 4.101-0ubuntu13.1 is to be installed
libcairo2-dbg : Depends: libcairo2 (= 1.13.0~20140204-0ubuntu1) but 1.13.0~20140204-0ubuntu1.1 is to be installed
libcairo2-dev : Depends: libcairo2 (= 1.13.0~20140204-0ubuntu1) but 1.13.0~20140204-0ubuntu1.1 is to be installed
Depends: libcairo-gobject2 (= 1.13.0~20140204-0ubuntu1) but 1.13.0~20140204-0ubuntu1.1 is to be installed
Depends: libfontconfig1-dev (>= 2.2.95) but it is not going to be installed
libfontconfig1-dbg : Depends: libfontconfig1 (= 2.11.0-0ubuntu4) but 2.11.0-0ubuntu4.1 is to be installed
libgbm-dev : Depends: libgbm1 (= 10.1.0-4ubuntu5)
libgl1-mesa-glx:i386 : Depends: libglapi-mesa:i386 (= 10.1.0-4ubuntu5)
Recommends: libgl1-mesa-dri:i386 (>= 7.2)
Conflicts: libgl1
libgl1-mesa-glx-lts-utopic : Conflicts: libgl1:i386
Conflicts: libgl1-mesa-glx:i386
libglib2.0-0-dbg : Depends: libglib2.0-0 (= 2.40.0-2) but 2.40.2-0ubuntu1 is to be installed
libglib2.0-dev : Depends: libglib2.0-0 (= 2.40.0-2) but 2.40.2-0ubuntu1 is to be installed
Depends: libglib2.0-bin (= 2.40.0-2)
libgtk2.0-dev : Depends: libpango1.0-dev (>= 1.20) but it is not going to be installed
libpango1.0-0-dbg : Depends: libpango-1.0-0 (= 1.36.3-1ubuntu1) but 1.36.3-1ubuntu1.1 is to be installed or
libpangocairo-1.0-0 (= 1.36.3-1ubuntu1) but 1.36.3-1ubuntu1.1 is to be installed or
libpangoft2-1.0-0 (= 1.36.3-1ubuntu1) but 1.36.3-1ubuntu1.1 is to be installed or
libpangoxft-1.0-0 (= 1.36.3-1ubuntu1) but 1.36.3-1ubuntu1.1 is to be installed
libpulse-dev : Depends: libpulse0 (= 1:4.0-0ubuntu11) but 1:4.0-0ubuntu11.1 is to be installed
Depends: libpulse-mainloop-glib0 (= 1:4.0-0ubuntu11) but 1:4.0-0ubuntu11.1 is to be installed
libstdc++6-4.6-dbg : Depends: libgcc1-dbg but it is not going to be installed
libudev-dev : Depends: libudev1 (= 204-5ubuntu20) but 204-5ubuntu20.11 is to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
You will have to install the above packages yourself.
It says there that I have to install those packages myself, but how do I do this?
When I try sudo apt-get install <some_package>, it tells me (example for gcc-4.8-multilib):
gcc-4.8-multilib : Depends: lib32gcc-4.8-dev (= 4.8.2-19ubuntu1) but it is not going to be installed
Depends: libx32gcc-4.8-dev (= 4.8.2-19ubuntu1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
Can anybody help me resolve all of these packages installation ?
UPDATE 1:
for sudo apt-get install lib32gcc-4.8-dev I get:
lib32gcc-4.8-dev : Depends: lib32gcc1 (>= 1:4.8.2-19ubuntu1) but it is not going to be installed
Depends: libx32gcc1 (>= 1:4.8.2-19ubuntu1) but it is not going to be installed
Depends: lib32asan0 (>= 4.8.2-19ubuntu1) but it is not going to be installed
Depends: libx32asan0 (>= 4.8.2-19ubuntu1) but it is not going to be installed
and for apt-cache policy lib32gcc-4.8-dev I get:
lib32gcc-4.8-dev:
Installed: (none)
Candidate: 4.8.2-19ubuntu1
Version table:
4.8.2-19ubuntu1 0
500 http://ro.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
UPDATE 2:
for sudo apt-get install lib32gcc1 I get:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
lib32gcc1 : Depends: gcc-4.9-base (= 4.9-20140406-0ubuntu1) but 4.9.1-0ubuntu1 is to be installed
E: Unable to correct problems, you have held broken packages.
and for apt-cache policy lib32gcc1 I get:
Installed: (none)
Candidate: 1:4.9-20140406-0ubuntu1
Version table:
1:4.9-20140406-0ubuntu1 0
500 http://ro.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
1:4.6.3-1ubuntu5 0
500 mirror://mirrors.ubuntu.com/mirrors.txt/ precise/main amd64 Packages
For apt-cache policy lib32gcc-4.8-dev lib32gcc1 libx32gcc1 lib32asan0 libx32asan0 I get:
lib32gcc-4.8-dev:
Installed: (none)
Candidate: 4.8.2-19ubuntu1
Version table:
4.8.2-19ubuntu1 0
500 http://ro.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
lib32gcc1:
Installed: (none)
Candidate: 1:4.9-20140406-0ubuntu1
Version table:
1:4.9-20140406-0ubuntu1 0
500 http://ro.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
1:4.6.3-1ubuntu5 0
500 mirror://mirrors.ubuntu.com/mirrors.txt/ precise/main amd64 Packages
libx32gcc1:
Installed: (none)
Candidate: 1:4.9-20140406-0ubuntu1
Version table:
1:4.9-20140406-0ubuntu1 0
500 http://ro.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
lib32asan0:
Installed: (none)
Candidate: 4.8.2-19ubuntu1
Version table:
4.8.2-19ubuntu1 0
500 http://ro.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
libx32asan0:
Installed: (none)
Candidate: 4.8.2-19ubuntu1
Version table:
4.8.2-19ubuntu1 0
500 http://ro.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
UPDATE 3:
For apt-cache policy gcc-4.9-base I get:
gcc-4.9-base:
Installed: 4.9.1-0ubuntu1
Candidate: 4.9.1-0ubuntu1
Version table:
*** 4.9.1-0ubuntu1 0
100 /var/lib/dpkg/status
4.9-20140406-0ubuntu1 0
500 http://ro.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
When I try to do sudo apt-get purge gcc-4.9-base, I am getting:
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
libgcc1 : Depends: gcc-4.9-base (= 4.9.1-0ubuntu1) but it is not going to be installed
libudev1 : Depends: libcgmanager0 but it is not going to be installed
Depends: libnih-dbus1 (>= 1.0.0) but it is not going to be installed
Depends: libnih1 (>= 1.0.0) but it is not going to be installed
libxcb1 : Depends: libxau6 but it is not going to be installed
Depends: libxdmcp6 but it is not going to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
|
open your terminal and type as
sudo apt-get autoclean
sudo apt-get autoremove
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get autoclean clears out the local repository of retrieved
package files in /var/cache/apt/archives. The difference from clean is that it only removes package files that can no longer be downloaded and are largely useless.
sudo apt-get autoremove is used to remove packages that were automatically
installed to satisfy dependencies for other packages and are now no longer needed.
autoclean and autoremove are used to ensure that there are no unneeded packages that may affect your system.
sudo apt-get update updates the sources list and resynchronizes the package index files from their sources.
sudo apt-get dist-upgrade, in addition to performing the function of upgrade, also intelligently handles changing dependencies with new versions of packages.
| Unmet dependencies when trying to build chromium browser on Ubuntu 14.04 |
1,593,247,601,000 |
I am trying to modify the default congestion control algorithm in FreeBSD (NewReno) by creating a copy of the source file (cc_newreno.c, located in /usr/src/sys/netinet/cc) called cc_newreno_mod.c and making changes to it.
Suppose I have made some modifications. How do I test them? Compiling the cc_newreno_mod.c directly (using the built-in C compiler) results in multiple errors, some of which seem strange (for example netinet/cc/cc_module.h file not found, although the file clearly is there).
Should I build a new Kernel? Will the module from the changed file be created automatically? Or am I totally wrong and I should take a different approach?
|
For compiling kernel module you should create Makefile and to include kernel module makefile /usr/src/share/mk/bsd.kmod.mk for example:
# Note: It is important to make sure you include the <bsd.kmod.mk> makefile after declaring the KMOD and SRCS variables.
# Declare Name of kernel module
KMOD = module
# Enumerate Source files for kernel module
SRCS = module.c
# Include kernel module makefile
.include <bsd.kmod.mk>
Finally, you run make to compile it, so you can check that it compiles properly.
And as it is not present among the kernel modules (/boot/kernel/*.ko), but is listed in sys/conf/files, I think you should recompile your kernel to apply the changes. For more info you can see this page. Since your file is a copy of cc_newreno.c, you can rename the original /usr/src/sys/netinet/cc/cc_newreno.c to something else to save it, copy your new one there, and recompile.
| How to test modified FreeBSD source code? |
1,593,247,601,000 |
So I have successfully compiled it to ~/.local by editing the prefix option in the makefile to prefix=~/.local. The program compiles fine, and I did the same with librtmp. When running ldd on the binary I get the following output:
ldd rtmpdump-ksv/rtmpdump
linux-vdso.so.1 => (0x00007ffedb4d2000)
librtmp.so.1 => not found
libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007fc7489a5000)
libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007fc7485ac000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fc748395000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fc748113000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc747d87000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fc747b83000)
/lib64/ld-linux-x86-64.so.2 (0x00007fc748c15000)
And I have tried to copy librtmp.so.1 and librtmp.so to every directory in ~/.local
|
Programs do not search for libraries in the same directory as the executable by default. The traditional directory organization under Unix has executables in directories called …/bin and libraries in directories called …/lib.
if you set prefix=~/.local when compiling software, you'll end up installing the executables in ~/.local/bin and the libraries in ~/.local/lib. To tell the system about these libraries, add the following lines to your ~/.profile, or otherwise arrange to set the environment variables PATH and LD_LIBRARY_PATH.
PATH=$PATH:~/.local/bin
export LD_LIBRARY_PATH=~/.local/lib
On OSX, use DYLD_LIBRARY_PATH instead of LD_LIBRARY_PATH.
| rtmpdump compile without root, librtmp.so.1 => not found |
1,593,247,601,000 |
I have downloaded and compiled the latest version of libssl, the result of which is located at /usr/local/ssl. I want to compile libssh2 using the files in this folder, and to do that I've set the switch --with-libssl-prefix=/usr/local/ssl.
After performing ./configure --with-libssl-prefix=/usr/local/ssl and make, the resulting libssh2.so, according to the output of ldd, depends on the libssl found in the /usr/lib64, which is exactly what I don't want.
What can I do to force libssh2 to be compiled with the libssl I have in /usr/local/ssl?
|
If you compiled and installed libssl into the default /usr/local path, there is a /usr/local/ssl, but the lib is not in there; it's just directories like certs and misc -- stuff that other things would probably put in a share directory (e.g. /usr/local/share/ssl).
The actual library is installed in a normal place, /usr/local/lib. Presuming you've already run ldconfig and that path is in a file in /etc/ld.so.conf.d, you should be able to then do:
ldconfig -p | grep ssl
And all the => paths should point into /usr/local/lib. If so, you can use:
--with-libssl-prefix=/usr/local/
No ssl or lib, etc. It should now be found properly.
| Compiling LibSSH2 with specific LibSSL |
1,593,247,601,000 |
I am trying to build KDE from the source available on https://www.kde.org/info/4.14.3.php, but I am a bit lost about the order of installation of the packages listed on the page, which would give me a hint about the minimum required packages to make the system work. Can anyone point me to a link with this order?
|
The last time I had to build KDE from scratch I used this as a guide. It is from Linux From Scratch (in particular Beyond Linux From Scratch) and should help you through everything.
A quick copy and paste shows this order:
Automoc4-0.9.88
Phonon-4.8.2
Phonon-backend-gstreamer-4.8.0
Phonon-backend-vlc-0.8.1
Akonadi-1.13.0
Attica-0.4.2
QImageblitz-0.0.6
Polkit-Qt-0.112.0
Oxygen-icons-4.14.3
Kdelibs-4.14.3
Kfilemetadata-4.14.3
Kdepimlibs-4.14.3
Baloo-4.14.3
Baloo-widgets-4.14.3
Polkit-kde-agent-0.99.0
Kactivities-4.13.3
Kde-runtime-4.14.3
Kde-baseapps-4.14.3
Kde-base-artwork-4.14.3
Kde-workspace-4.11.14
Good luck!
| Building KDE from source? |
1,593,247,601,000 |
I tried to build a plugin for DNSCrypt, but it keeps telling me that it needs some other files.
I need to know how I can build it, and I've never compiled a package from scratch. I've always been able to use a repository.
I use Ubuntu 14.04 (64 bit) with gcc on CodeAnywhere
Here is the link to the plugin: GeoIP Plugin
Here is the link to dnscrypt: DNSCrypt
Here is what I get when I try to compile:
cabox@box-codeanywhere:~/workspace$ cmake . && make
CMake Error: The source directory "/home/cabox/workspace" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
cabox@box-codeanywhere:~/workspace$ cd plugin
cabox@box-codeanywhere:~/workspace/plugin$ cmake . && make
-- Configuring done
-- Generating done
-- Build files have been written to: /home/cabox/workspace/plugin
[100%] Building C object CMakeFiles/geoip-block.dir/geoip-block.c.o
/home/cabox/workspace/plugin/geoip-block.c:14:29: fatal error: dnscrypt/plugin.h: No such file or directory
#include <dnscrypt/plugin.h>
^
compilation terminated.
make[2]: *** [CMakeFiles/geoip-block.dir/geoip-block.c.o] Error 1
make[1]: *** [CMakeFiles/geoip-block.dir/all] Error 2
make: *** [all] Error 2
If more info is needed, I will add it as soon as possible.
The DNSCrypt build was all fine, but I still get
http://pastebin.com/MeU4Q24W
|
Update
It looks like terdon added and updated some commands over the Thanksgiving holiday. These add extra and/or needed functionality. I want to thank him for adding these.
Task
First, let's start with a clean slate.
cd ~ && rm -Rv workspace
Now, we make sure we have the right tools for Ubuntu:
sudo apt-get update ## run this first to make sure the package lists are current
sudo apt-get install build-essential checkinstall
sudo apt-get install cmake wget software-properties-common python-software-properties autoconf
sudo add-apt-repository ppa:shnatsel/dnscrypt
sudo add-apt-repository ppa:maxmind/ppa
sudo apt-get update
sudo apt-get install libtool openssl libssl-dev
Then, in case you want a package from source control, we need to add more tools. DNSCrypt doesn't need them, but they're handy in case you ever build an item from source again:
sudo apt-get install cvs subversion git-core mercurial
You should be in your home directory, so now we need the actual source tarball for dnscrypt-proxy:
Download the libsodium (if you don't have it installed)
wget https://download.libsodium.org/libsodium/releases/libsodium-1.0.1.tar.gz
tar xzf libsodium-1.0.1.tar.gz && cd libsodium-1.0.1 && ./configure
make && make check && sudo make install
sudo ldconfig && cd ..
Download the geoip api (if you don't have it installed)
wget https://github.com/maxmind/geoip-api-c/archive/v1.6.3.tar.gz
tar xzf v1.6.3.tar.gz && cd geoip-api-c-1.6.3
sh bootstrap && ./configure
make && make check && sudo make install && cd ..
Download the ldns (if you don't have it installed)
wget http://www.nlnetlabs.nl/downloads/ldns/ldns-1.6.17.tar.gz
tar xzf ldns-1.6.17.tar.gz && cd ldns-1.6.17
./configure && make && sudo make install && cd ..
Download the DNSCrypt-Proxy version 1.4.1 tar.bz2 file. For the Ubuntu way, add this DNSCrypt PPA. Please note that this PPA is dated (the most recent version is 1.4.0, for 13.10), therefore we will install from source:
tar -xvjpf dnscrypt-proxy-1.4.1.tar.bz2 && cd dnscrypt-proxy-1.4.1
./configure && make && sudo make install
Since we deleted the plugin directory, we need to re-download the zip file from the GitHub repo. The directory we create will be called master.
sudo apt-get install zip unzip
Tar won't extract zips, so we need new tools. You may already have these.
unzip master.zip && cd master
Once you have unzipped the file, go to the folder, edit CMakeLists.txt, and add these lines:
include_directories(/home/cabox/workspace/dnscrypt-proxy-1.4.1/src/include)
include_directories(/home/cabox/workspace/geoip-api-c-1.6.3/libGeoIP)
include_directories(/home/cabox/workspace/ldns-1.6.17/ldns)
Then, run
cmake . && make
cd .. && cp -v master/nameofplugin.ext /some/dir/where/you/store/plugins
Why Your Error is Occurring
The header file for the DNSCrypt plugin, plugin.h, is only installed in /usr/include/dnscrypt after you successfully compile DNSCrypt itself. You couldn't compile DNSCrypt for two reasons:
You did not have the source tarball.
CMake (short for cross-platform make) is an independent build system that doesn't use the standard Linux build process. Projects that use it include KDE and Poppler.
References
Gentoo user here. We build from source routinely; every update, in fact.
The Ubuntu EasyCompilingHOWTO
| How to build dnscrypt plugins? |
1,593,247,601,000 |
The version of glibc in the repository is 2.13 so I couldn't use apt-get to install it. I downloaded the source code of glibc-2.14, but when I run ./configure command I get this error:
checking for -z nodlopen option... yes
checking for -z initfirst option... yes
checking for -z relro option... no
configure: error: linker with -z relro support required
MySQL workbench relies on glibc-2.14. How can I get workbench installed?
|
"testing" in Debian currently has libc6 2.19. Does that workbench need exactly 2.14, or (more usually) at least 2.14? See packages.debian.org. I recommend updating your system to testing completely, or using pinning to selectively update packages.
Trying to compile glibc yourself is not something undertaken lightly; fighting the dependency hell while upgrading selectively to testing is a lot easier :-)
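A pinning setup means keeping both stable and testing lines in sources.list and ranking them in /etc/apt/preferences; a minimal sketch (the priorities shown are illustrative):

```
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 100
```

With priorities like these, everything stays on stable by default, while apt-get -t testing install <package> can pull selected packages (and their dependencies) from testing.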
| Error while installing glibc-2.14 from source |
1,593,247,601,000 |
I have a checkout of a source code tree (https://github.com/hautreux/slurm-spank-x11, for the curious) that contains a .spec file for building a RPM package. My question is, what's the simplest way of building the binary RPM from that source tree? In the Debian world, I would just run debian/rules binary from inside the source tree. Is there an equivalent that's close to being this easy in the RPM world?
|
The simplest way is to use rpmbuild.
rpmbuild <spec file> is the RPM equivalent of fakeroot debian/rules binary.
Fedora
Before your first build, you need to prepare your build system once by installing the Development Tools group:
# yum install @development-tools
then:
# yum install fedora-packager
As a user (never as root) create your build environment:
$ rpmdev-setuptree
This creates a directory tree:
~/rpmbuild
├── BUILD
├── BUILDROOT
├── RPMS
├── SOURCES
├── SPECS
└── SRPMS
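If rpmdev-setuptree is not available, the same layout can be created by hand; a sketch of the equivalent commands (using a throwaway directory in place of $HOME):

```shell
# Stand-in for $HOME so the sketch does not touch your real home directory
home=$(mktemp -d)

# The six directories of the standard build tree
for d in BUILD BUILDROOT RPMS SOURCES SPECS SRPMS; do
    mkdir -p "$home/rpmbuild/$d"
done

ls "$home/rpmbuild"
```

Note that rpmdev-setuptree also takes care of pointing %_topdir at this tree, so creating the directories by hand may additionally require a ~/.rpmmacros entry.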
To build:
Your spec file goes in the SPECS directory and your sources in SOURCES.
You then change to the SPECS directory and run rpmbuild <spec file>.
Of course, there is much more to it than the above. Details are available on the Fedora Wiki
RedHat or CentOS
The same tool (rpmbuild) is used on these distros, but the process and required packages are slightly different. Details for CentOS are on the CentOS Wiki.
Copr Service
This is a build service provided by Fedora that allows you to upload source RPM (srpm) files and let the service build it for any target (eg RedHat/CentOS or Fedora). You still need to package the source and the spec file, but it saves you installing all the build tools and required dev libraries on your local system.
| What's the simplest way to build a binary RPM from a source code checkout with a .spec file |
1,593,247,601,000 |
I need the latest libpcre3-dev library to compile a software from source, however, the current distribution of the OS (Ubuntu) on my server only has the older version of libpcre3-dev and no backport is available.
I am thinking to compile the binary on a separate server with the latest version of libpcre3-dev and install the binary back to my actual server. I have two questions:
Does this work? My main concern is that the libpcre3 on my server is still the older version; does the binary still need the latest corresponding libpcre3 at runtime even if it is compiled with the latest libpcre3-dev?
What is the best way of installing the binary back to my server? Simply copy the binary or make it into a .deb package and then install using the package manager (if possible)?
|
If the program requires newer features that aren't available on your server, then those features won't be available at runtime and so your program probably won't run.
You can link the library statically. This has the downside that you can't upgrade the library separately from the program. If a security vulnerability is found in that version of the library, you'll need to rebuild the program. Replace -lpcre3 in the linker command line by /usr/lib/libpcre3.a.
You can link dynamically and copy the library to the same directory where you install your software. Start the software through a wrapper script that sets the library load path to include that additional directory.
#!/bin/sh
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/opt/my-software/lib"
/opt/my-software/bin/foo "$@"
| Compile source with later libraries on one server and use the binary on an older server |
1,593,247,601,000 |
I am running GNU/Linux (Centos 6) on kernel 2.6.32-431.17.1.el6.x86_64. I am trying to update the kernel to 3.2.61. I performed the following steps inside the 3.2.61 folder structure:
make menuconfig (took defaults- didn't add anything)
make
make modules
make modules_install
make install
On step 5, I received the following error:
ERROR: modinfo: could not find module lpc_ich
I tried yum install lpc_ich, but that did not exist. This is my first time trying to install a new kernel. I am not really sure if I am doing this correctly.
Could someone please help point me in the right direction?
|
It's important to give the toolchain used to build the kernel the location of the kernel source tree. Otherwise, even if the compilation runs perfectly, the installation may fail with errors about missing modules or parts.
The kernel source tree is specified through the KERNEL_TREE environment variable. It defaults to /usr/src/linux. So either export this variable in the terminal in which you make the kernel:
export KERNEL_TREE=/usr/src/linux-3.2.61
or create a symlink from /usr/src/linux-3.2.61 to /usr/src/linux:
ln -s /usr/src/linux-3.2.61 /usr/src/linux
Of course, replace /usr/src/linux-3.2.61 with the corresponding kernel source directory.
| Error installing kernel on Centos (from source) |
1,593,247,601,000 |
I want to add and compile my custom application code into my kernel.
How should I add my .c and .o files into my kernel 'bin' directory and compile them?
I made a hello.c and a hello.o file, and I want to add them to my kernel in such a way that this hello.o file runs when the kernel starts. Do I need to edit some makefile?
|
I want to add these files into my kernel in such a way that when the kernel starts, this hello.o file executes and runs
What you are trying to achieve shouldn't be made through kernel edition. Executing a program at boot time can be handled in much simpler ways, without need for kernel programming experience. You can:
Execute it when your shell starts:
Write /path/to/hello/executable at the end of /etc/profile, ~/.profile or ~/.bashrc. Those files are sourced every time you start a shell (more specifically, bash). This also means that your executable will run every time you open a terminal.
Use the init.d system: Compile your code and add an init script to the init system. You'll find all the information you may need on this page, or here for Ubuntu specifics.
Here a quick example of an init script for your hello executable:
#!/bin/sh
### BEGIN INIT INFO
# Provides: hello
# Default-Start: 1 2 3 4 5
# Short-Description: Prints Hello World somewhere.
# Description: Some more description...
### END INIT INFO
case "$1" in
start)
/path/to/hello/executable
;;
stop|restart|force-reload)
;;
*)
;;
esac
exit 0
Now you can execute hello using :
service hello start
Assuming that the init script is named hello, and is executable. To make it run at boot time, use:
update-rc.d hello defaults
Note that the kernel itself relies on modules and libraries rather than executables. You may also want to have a look at The Kernel Module Development Guide, which can introduce you to the basics of kernel modules. Again, this is massive overkill for what you're trying to do.
| How do I compile source into the kernel? |
1,593,247,601,000 |
I am trying to compile Simple Screen Recorder. I had a linker error due to a wrong library path relating to ffmpeg.
When I checked installation paths with whereis ffmpeg, I get:
ffmpeg: /usr/bin/ffmpeg /usr/bin/X11/ffmpeg /usr/local/bin/ffmpeg /opt/ffmpeg/bin/ffmpeg
I installed ffmpeg many times without uninstalling the old ones, and now I face a linker error.
How can I exclude older ffmpeg installations and fix the library path problem?
|
You need to verify the $PATH and $LD_LIBRARY_PATH environment variables. Make sure they point to the correct version of ffmpeg and do not include the older versions.
If LD_LIBRARY_PATH is not already set up, then try this:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib ffmpeg
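To see which copy wins, remember that the shell runs the first match found in $PATH order; a small demonstration with fabricated stand-ins for the duplicate installations:

```shell
old=$(mktemp -d); new=$(mktemp -d)
printf '#!/bin/sh\necho old build\n' > "$old/ffdemo"
printf '#!/bin/sh\necho new build\n' > "$new/ffdemo"
chmod +x "$old/ffdemo" "$new/ffdemo"

# The directory listed first takes precedence
PATH="$new:$old:$PATH"
ffdemo              # prints: new build
command -v ffdemo   # shows which file actually runs
```

So putting the bin directory of the ffmpeg you want (for example /opt/ffmpeg/bin) at the front of $PATH, and its lib directory at the front of $LD_LIBRARY_PATH, effectively hides the older installations.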
| multiple ffmpeg library paths, how can I exclude older ffmpeg installations? |
1,593,247,601,000 |
I have been a Windows kernel developer for many years. Now I start to develop Linux kernel modules.
To begin with, I installed kernel-devel under /usr/src/kernels/$(uname -r). However, after checking the installation folder, I am confused because there seem to be a lot of seemingly useless folders and files inside the directory. Many folders are empty except for two files: Kconfig and Makefile.
Under Windows, to develop kernel device drivers, I just need an include folder containing all necessary header files, and an lib folder containing necessary libraries to link.
Under Linux, I can't understand why there are so many seemingly useless folders.
Any explanations?
|
The kernel-devel package in Fedora and other Red Hat derivatives does not contain the full kernel source, just headers for public interfaces and makefiles needed for driver development. Most headers can be found under /usr/src/kernels/$(uname -r)/include/ and some architecture specific headers, e.g. for x86 under /usr/src/kernels/$(uname -r)/arch/x86/include/.
The directories with Kconfig and Makefile are not useless, you only don't see the complete picture because you don't have the entire kernel source (which you typically wouldn't need for driver development anyway).
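Those Kconfig and Makefile pairs are the per-directory build descriptions that the top-level kernel build recursively includes; an illustrative (made-up) pair looks like:

```
# Kconfig: declares a configuration option
config SOME_DRIVER
        tristate "Some example driver"
        depends on PCI

# Makefile: tells kbuild what to compile when the option is set
obj-$(CONFIG_SOME_DRIVER) += some_driver.o
```

The option names and dependencies here are invented; the point is only that a directory containing nothing but these two files still contributes to the configuration and build system.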
| Why does kernel-devel contain so many "empty" directories? |
1,593,247,601,000 |
I cloned the git repository of xbindkeys using:
git clone git://git.sv.gnu.org/xbindkeys.git
I want to compile it. How can I do this? Where can I find the compile instructions?
What are the dependencies?
|
After downloading it, when I ran the ./configure command, it complained about 2 missing libraries:
checking for XCreateWindow in -lX11... no
configure: WARNING: Xbindkeys depends on the X11 libraries!
checking for guile... no
configure: error: guile required but not found
I had to install these 2 packages:
$ sudo apt-get install guile-1.8-dev tk-dev
Afterwards a typical ./configure and make worked fine.
| How to compile xbindkeys |
1,593,247,601,000 |
I have been trying to build CUPS 1.7.1 on RHEL 5.6 using
$ sudo rpmbuild -ta ./cups-1.7.1-source.tar.bz2
and was getting:
error: Failed build dependencies:
libusbx-devel is needed by cups-1.7.1-1.x86_64
I've since found this CUPS STR 4336 (https://www.cups.org/str.php?L4336) that described the same issue, with a comment on it suggesting the --without-libusb1 build option. So I tried running:
sudo rpmbuild -ta ./cups-1.7.1-source.tar.bz2 --without-libusb1
but now getting:
--without-libusb1: unknown option
Any ideas how to get it to build and/or what I'm doing wrong here?
|
The switch you're trying to pass to rpmbuild corresponds to an option you'd typically provide in a configure step if you were building this package from source by hand. The details of how you want your package to build are contained in a .spec file, which is likely inside your .tar.bz2 file.
You could unpack the tarball and confirm its contents, looking for a .spec file but I suspect your issue can be resolved more simply by installing the missing library, libusbx-devel. So I suggest installing that first and trying your hand at running your rpmbuild command again.
$ sudo yum install libusbx-devel
Or perhaps it's known as this:
$ sudo yum install libusb-devel
OK that didn't work, now what?
So if you've attempted the above and your only course of action really is to include the missing configure switch, here's how I would proceed.
$ mkdir somedir && cd somedir
$ tar jxvf /path/to/cups-1.7.1-source.tar.bz2
A peek inside the unpacked directory shows the .spec file we're looking for:
$ find . | grep '\.spec$'
./cups-1.7.1/packaging/cups.spec
If you view that file (e.g. with more) you'll notice this section at the top:
# Conditional build options (--with name/--without name):
#
# dbus - Enable/disable DBUS support (default = enable)
# dnssd - Enable/disable DNS-SD support (default = enable)
# libusb1 - Enable/disable LIBUSB 1.0 support (default = enable)
# static - Enable/disable static libraries (default = enable)
So you can use these with the --without X switch to disable them when using rpmbuild, like so:
$ rpmbuild -ta cups-1.7.1-source.tar.bz2 --without libusb1
If you need to disable others just add additional --without X switches:
$ rpmbuild -ta cups-1.7.1-source.tar.bz2 --without libusb1 --without dbus
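Under the hood, such switches are usually declared with %bcond macros in the spec; a generic sketch (illustrative, not the literal contents of cups.spec):

```
# Declares a build conditional that is ON by default and can be
# disabled on the rpmbuild command line with "--without libusb1"
%bcond_without libusb1

%if %{with libusb1}
BuildRequires:  libusbx-devel
%endif
```

This is why disabling the conditional also removes the libusbx-devel build dependency that caused the original error.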
References
Passing conditional parameters into a rpm build
| CUPS libusbx-devel is needed error when trying to build cups-1.7.1 from source |
1,593,247,601,000 |
I am trying to install libxcb from source, but I get an error and I do not understand why. This is the error:
configure: error: Package requirements (pthread-stubs xau >= 0.99.2) were not met:
No package 'xau' found
I have installed pthread-stubs and xcb-proto from source.
This is to install qtile. At the moment I use Debian wheezy but I will soon use jessie.
|
You are missing the headers from the pthreads and xau packages. You can get these by installing the -dev versions of the respective packages.
The easiest thing to do, however, is to make sure you have a deb-src line in your sources.list and run:
apt-get build-dep libxcb
which will install all of the necessary packages to build xcb.
| Install libxcb from source |
1,370,473,401,000 |
I would like to make a few changes to the fdupes code. I know I can grab the source code from the website - but is there a better way on Ubuntu / Debian?
After getting the source this way where is it stored?
If one wanted to make changes to the code and recompile / install - how is that done?
Is there a good way to manage changes when updates come out?
|
There sure is:
apt-get source fdupes
Have a look here
| How can I build fdupes from source on Ubuntu? |
1,370,473,401,000 |
I have a Lenovo IdeaPad Yoga 13. WLAN won't work out of the box with Fedora 18. So I googled around and found these 2 links:
https://askubuntu.com/questions/139632/wireless-card-realtek-rtl8723ae-bt-is-not-recognized
https://ask.fedoraproject.org/question/9633/i-can-not-get-my-realtek-8723-chip-to-work/
So I downloaded the source and installed gcc, kernel-headers, kernel-devel and patch.
I commented out line 320 in base.c, but I still get an error.
make -C /lib/modules/3.8.9-200.fc18.x86_64/build M=/home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012 modules
make[1]: Entering directory `/usr/src/kernels/3.8.9-200.fc18.x86_64'
CC [M] /home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012/base.o
In file included from /home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012/base.c:39:0:
/home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012/pci.h:245:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘rtl_pci_probe’
/home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012/base.c: In function ‘rtl_action_proc’:
/home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012/base.c:870:25: error: ‘RX_FLAG_MACTIME_MPDU’ undeclared (first use in this function)
/home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012/base.c:870:25: note: each undeclared identifier is reported only once for each function it appears in
/home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012/base.c: In function ‘rtl_send_smps_action’:
/home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012/base.c:1432:16: error: ‘struct <anonymous>’ has no member named ‘sta’
make[2]: *** [/home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012/base.o] Error 1
make[1]: *** [_module_/home/l33tname/rtl_92ce_92se_92de_8723ae_linux_mac80211_0006.0514.2012] Error 2
make[1]: Leaving directory `/usr/src/kernels/3.8.9-200.fc18.x86_64'
make: *** [all] Error 2
The line 245 on pci.h is this:
int __devinit rtl_pci_probe(struct pci_dev *pdev,const struct pci_device_id *id);
And yeah I try it as normal user and as root.
My question is: how can I compile this, or what must be fixed?
|
So the solution is really simple: take the latest source from:
https://github.com/lwfinger
make && make install
So this works well for realtek-8723.
And there is a small blog post I wrote about it -> http://l33tsource.com/blog/2013/05/08/Yoga-with-WLAN.html
| I can not get my Realtek 8723 driver source compiled |
1,370,473,401,000 |
I am compiling PHP 5.3.13 on my server. I want to create a self-contained php5 folder.
So prefix is:
/usr/local/php5
In this folder I have a lib folder, where I put all the libraries needed for PHP to run, such as:
libk5crypto.so.3
libxml2.so.2
libjpeg.so.62 ....
Even if I compile with --with-jpeg-dir=/usr/local/php5/lib/, the php binary still looks for the file in /usr/lib64.
The only solution I found so far is to manually export LD_LIBRARY_PATH=/usr/local/php5/lib
I would like the same thing to happen automatically at compile time. Is that possible?
|
There are two distinct linker paths, the compile time, and the run time.
I find autoconf (configure) is rarely set up to do the correct thing with alternate library locations; using --with-something= usually does not generate the correct linker flags (-R or -Wl,-rpath). If you only had .a libraries it would work, but for .so libraries what you need to specify is the RPATH:
export PHP_RPATHS=/usr/local/php5/lib
./configure [options as required]
(In many cases just appending LDFLAGS to the configure command is used, but PHP's build process is slightly different.)
This effectively adds extra linker search paths to each binary, as if those paths were specified in LD_LIBRARY_PATH or your default linker config (/etc/ld.so.conf).
This also takes care of adding -L/usr/local/php5/lib to LDFLAGS so that the compile-time and run-time use libraries are from the same directory (there's the potential for problems with mismatched versions in different locations, but you don't need to worry here).
Once built, you can check with:
$ objdump -j dynamic -x ./sapi/cli/php | grep RPATH
RPATH /usr/local/php5/lib
$ objdump -j dynamic -x ./libs/libphp5.so | fgrep RPATH
RPATH /usr/local/php5/lib
Running ldd will also confirm which libraries are loaded from where.
What --with-jpeg-dir should be really be used for is to point at /usr/local/ or some top-level directory, the directories include/, lib/, and possibly others are appended depending on what the compiler/linker needs.
You only need --with-jpeg-dir if configure cannot find the installation; configure will automatically find it in /usr/local and other (possibly platform-specific) "standard" places. In your case I think configure is finding libjpeg in a standard place, and silently disregarding the directive.
(Also, PHP 5.3.13 is no longer current, I suggest 5.3.21, the current version at this time.)
| PHP compilation - link to library |
1,370,473,401,000 |
I would like to have tmux 1.7 on my machine with CentOS 5.8 (64 bit).
It requires libevent at version at least 1.4.14b or 2.0.20, and the latest version in the yum packages for CentOS 5.8 is 1.4.13.
I know that I need the libevent-devel package as well to build tmux but I cannot get it anywhere.
Can anyone give me hints on how to do this?
How can I get (build) the devel package?
|
You can use the following steps to compile tmux 1.7 on CentOS 5.8:
Install developer tools
yum groupinstall "Development Libraries"
yum groupinstall "Development Tools"
yum install rpm-build gcc
Setup .rpmmacros file
$ cat > /home/<myusername>/.rpmmacros << EOF
%packager Your Name
%vendor Your Orgnazation
%_topdir /home/<myusername>/rpmbuild
%_signature gpg
%_gpg_name Your Packaging Dept
%_gpg_path /home/mockbuild/.gnupg
%dist build_id
%buildroot
EOF
NOTE: Make sure you substitute your username for <myusername> in the paths above.
Setup rpmbuild area
mkdir -p $HOME/rpmbuild/{BUILD,RPMS/i386,SOURCES,SPECS,SRPMS}
Build libevent 2.x RPM
# d/l package
wget http://sourceforge.net/projects/levent/files/libevent/libevent-2.0/libevent-2.0.10-stable.tar.gz/download
mv libevent-2.0.10-stable.tar.gz rpmbuild/SOURCES/
# download .spec file
wget http://geekery.altervista.org/specs/libevent2010.spec
mv libevent2010.spec rpmbuild/SPECS
# build RPM
rpmbuild -bb rpmbuild/SPECS/libevent2010.spec
Install libevent packages
cd $HOME/rpmbuild/RPMS/x86_64
rpm -ivh libevent-devel-2.0.10-1build_id.x86_64.rpm libevent-2.0.10-1build_id.x86_64.rpm
Download tmux SRPM
For this we're going to download the SRPM for Fedora, then extract its contents and reuse its .spec file to build tmux for CentOS 5.x.
cd $HOME/rpmbuild
wget ftp://ftp.muug.mb.ca/mirror/fedora/linux/development/19/source/SRPMS/t/tmux-1.7-2.fc19.src.rpm
mkdir -p temp && cd temp
rpm2cpio ../tmux-1.7-2.fc19.src.rpm | cpio -idmv
mv tmux.spec ../SPECS/ && mv tmux-1.7.tar.gz ../SOURCES/
cd ../SPECS/ && rmdir ../temp/
Edit tmux.spec
vim tmux.spec
I ran into several issues with this tmux.spec file. I'm not sure if it was my setup or not, so I made these changes, but you may not require them.
# Added these lines after the BuildRequires
BuildRoot: %{buildroot}
Prefix: /usr
# added DESTDIR=%{buildroot}
make %{?_smp_mflags} LDFLAGS="%{optflags}" DESTDIR=%{buildroot}
# changed this line
%{_bindir}/bin/tmux
# to this line
/usr/bin/tmux
Save this file.
Building tmux RPM
cd $HOME/rpmbuild
rpmbuild -ba SPECS/tmux.spec
rpm -ivh RPMS/x86_64/tmux-1.7-2.x86_64.rpm
Prebuilt RPMs
Given the number of steps to do this I'm going to do you a favor and provide these RPMs in my yum repository.
libevent-2.0.10-1.x86_64.rpm
libevent-devel-2.0.10-1.x86_64.rpm
tmux-1.7-2.x86_64.rpm
References
build libevent and transmission on RHEL/CentOS 5.x
Packaging software with RPM, Part 1: Building and distributing packages
How To Extract an RPM Package Without Installing It (rpm extract command)
| How to compile tmux 1.7 on CentOS 5.8? |
1,370,473,401,000 |
From here: http://www.xenomai.org/index.php/FAQs#Which_kernel_settings_should_be_avoided.3F
Which kernel settings should be avoided?
Note that Xenomai will warn you about known invalid combinations during kernel configuration.
- CONFIG_CPU_FREQ
- CONFIG_APM
- CONFIG_ACPI_PROCESSOR
Now, when I look in the .config, I do find these options clearly, but I don't know their dependencies.
So, is it wise to simply put an n next to these options in the .config file?
Will the make procedure take care of the dependencies?
The make menuconfig window does not present these options explicitly.
|
make menuconfig does present this option. If you are in the menu press / and search for CPU_FREQ. This will show all CONFIG parameters containing CPU_FREQ. It does also show how you can access it through the menu, e.g:
│ Symbol: CPU_FREQ [=y]
│ Type : boolean
│ Prompt: CPU Frequency scaling
│ Defined at drivers/cpufreq/Kconfig:3
│ Location:
│ -> Power management and ACPI options
│ -> CPU Frequency scaling
This means you find it under Power management and ACPI options -> CPU Frequency scaling, and the name of the entry is CPU Frequency scaling.
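If you prefer editing .config by hand, note that disabled options are stored as comments, and that running make (or make oldconfig) afterwards re-runs the dependency checks and prompts about any affected options. An illustrative fragment:

```
# CONFIG_CPU_FREQ is not set
# CONFIG_APM is not set
# CONFIG_ACPI_PROCESSOR is not set
```

So simply writing n by hand is not quite enough: let the config tooling process the edited file so that dependent options are resolved consistently.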
| Edit the .config file when en/disabling a particular option like CONFIG_CPU_FREQ? |
1,370,473,401,000 |
Can I build packages for Debian 5.0 using Debian 6.0 or Debian Wheezy?
I assume that I could do a complete chroot'ed installation of Debian 5.0 and do my builds there, but it might be nice to have something lighter-weight.
For bonus points - can I build packages for Debian 5.0 using the version of g++ from Debian 6.0 or Debian Wheezy? I'm responsible for developing some Debian 5.0 software, but I'd like to start using C++11 features that aren't available in Debian 5.0's g++.
|
You might find the schroot or pbuilder packages convenient for this, since they are designed to maintain a build environment for multiple versions of Debian.
That said, a basic chroot is a couple of hundred MB in size; you could have thousands of them on most modern systems without really noticing. debootstrap is a great tool for getting those working swiftly.
Building with a newer version of G++ is traditionally exciting, because the support libraries also need to change - and so the older glibc release may not support your newer binary code well.
You should be able to backport the appropriate version of G++ and have it work, but directly using the newer version might cause problems. The release notes should help you understand that.
| Compiling for old versions of Debian |
1,370,473,401,000 |
I rebooted into a compiled kernel 3.1.0, and these are the errors that I am getting:
linux-dopx:/usr/src/linux-3.1.0-1.2 # make install
sh /usr/src/linux-3.1.0-1.2/arch/x86/boot/install.sh 3.1.0 arch/x86/boot/bzImage \
System.map "/boot"
Kernel image: /boot/vmlinuz-3.1.0
Initrd image: /boot/initrd-3.1.0
Root device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part2 (/dev/sda2) (mounted on / as ext4)
Resume device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part1 (/dev/sda1)
find: `/lib/modules/3.1.0/kernel/drivers/ata': No such file or directory
modprobe: Module ata_generic not found.
WARNING: no dependencies for kernel module 'ata_generic' found.
modprobe: Module ext4 not found.
WARNING: no dependencies for kernel module 'ext4' found.
Features: block usb resume.userspace resume.kernel
Bootsplash: openSUSE (1280x1024)
41713 blocks
Rebooting says: Could not load /lib/modules/3.1.0/modules.dep
EDIT1:
Here's what I did:
linux-dopx:/usr/src/linux-3.1.0-1.2 # make bzImage
CHK include/linux/version.h
CHK include/generated/utsrelease.h
CALL scripts/checksyscalls.sh
CHK include/generated/compile.h
Kernel: arch/x86/boot/bzImage is ready (#1)
linux-dopx:/usr/src/linux-3.1.0-1.2 # make modules
CHK include/linux/version.h
CHK include/generated/utsrelease.h
CALL scripts/checksyscalls.sh
Building modules, stage 2.
MODPOST 3 modules
linux-dopx:/usr/src/linux-3.1.0-1.2 # make modules install
CHK include/linux/version.h
CHK include/generated/utsrelease.h
CALL scripts/checksyscalls.sh
CHK include/generated/compile.h
Building modules, stage 2.
MODPOST 3 modules
sh /usr/src/linux-3.1.0-1.2/arch/x86/boot/install.sh 3.1.0 arch/x86/boot/bzImage \
System.map "/boot"
Kernel image: /boot/vmlinuz-3.1.0
Initrd image: /boot/initrd-3.1.0
Root device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part2 (/dev/sda2) (mounted on / as ext4)
Resume device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part1 (/dev/sda1)
find: `/lib/modules/3.1.0/kernel/drivers/ata': No such file or directory
modprobe: Module ata_generic not found.
WARNING: no dependencies for kernel module 'ata_generic' found.
modprobe: Module ext4 not found.
WARNING: no dependencies for kernel module 'ext4' found.
Features: block usb resume.userspace resume.kernel
Bootsplash: openSUSE (1280x1024)
41713 blocks
linux-dopx:/usr/src/linux-3.1.0-1.2 # make install
sh /usr/src/linux-3.1.0-1.2/arch/x86/boot/install.sh 3.1.0 arch/x86/boot/bzImage \
System.map "/boot"
Kernel image: /boot/vmlinuz-3.1.0
Initrd image: /boot/initrd-3.1.0
Root device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part2 (/dev/sda2) (mounted on / as ext4)
Resume device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part1 (/dev/sda1)
find: `/lib/modules/3.1.0/kernel/drivers/ata': No such file or directory
modprobe: Module ata_generic not found.
WARNING: no dependencies for kernel module 'ata_generic' found.
modprobe: Module ext4 not found.
WARNING: no dependencies for kernel module 'ext4' found.
Features: block usb resume.userspace resume.kernel
Bootsplash: openSUSE (1280x1024)
41713 blocks
EDIT2:
linux-dopx:/usr/src/linux-3.1.0-1.2 # make modules_install install
INSTALL arch/x86/kernel/test_nx.ko
INSTALL drivers/scsi/scsi_wait_scan.ko
INSTALL net/netfilter/xt_mark.ko
DEPMOD 3.1.0
sh /usr/src/linux-3.1.0-1.2/arch/x86/boot/install.sh 3.1.0 arch/x86/boot/bzImage \
System.map "/boot"
Kernel image: /boot/vmlinuz-3.1.0
Initrd image: /boot/initrd-3.1.0
Root device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part2 (/dev/sda2) (mounted on / as ext4)
Resume device: /dev/disk/by-id/ata-ST3250310AS_6RYNQEXY-part1 (/dev/sda1)
find: `/lib/modules/3.1.0/kernel/drivers/ata': No such file or directory
modprobe: Module ata_generic not found.
WARNING: no dependencies for kernel module 'ata_generic' found.
modprobe: Module ext4 not found.
WARNING: no dependencies for kernel module 'ext4' found.
Features: block usb resume.userspace resume.kernel
Bootsplash: openSUSE (1280x1024)
41713 blocks
EDIT 3:
This message is still getting shown after make install: /lib/modules/2.6.35.13/kernel/drivers/ata': No such file or directory
I set "Generic ATA support" under "Serial ATA and Parallel ATA driver" to '[*]', but that was of no avail.
The kernel version is different this time, but the problem is same.
EDIT 4:
linux-dopx:~ # lspci -vvv
00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10)
Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
Latency: 0
Capabilities: [e0] Vendor Specific Information: Len=0b <?>
Kernel driver in use: agpgart-intel
00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) (prog-if 00 [VGA controller])
Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 18
Region 0: Memory at fea80000 (32-bit, non-prefetchable) [size=512K]
Region 1: I/O ports at dc00 [size=8]
Region 2: Memory at e0000000 (32-bit, prefetchable) [size=256M]
Region 3: Memory at fe900000 (32-bit, non-prefetchable) [size=1M]
Expansion ROM at <unassigned> [disabled]
Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
Address: fee0100c Data: 4149
Capabilities: [d0] Power Management version 2
Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Kernel driver in use: i915
00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
Subsystem: ASUSTeK Computer Inc. Device 83a1
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 20
Region 0: Memory at fea78000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [50] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=55mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
Address: 00000000fee0100c Data: 4159
Capabilities: [70] Express (v1) Root Complex Integrated Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
ExtTag- RBE- FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
LnkCap: Port #0, Speed unknown, Width x0, ASPM unknown, Latency L0 <64ns, L1 <1us
ClockPM- Surprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; Disabled- Retrain- CommClk-
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed unknown, Width x0, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
Capabilities: [100 v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed- WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01
Status: NegoPending- InProgress-
VC1: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable- ID=0 ArbSelect=Fixed TC/VC=00
Status: NegoPending- InProgress-
Capabilities: [130 v1] Root Complex Link
Desc: PortNumber=0f ComponentID=00 EltType=Config
Link0: Desc: TargetPort=00 TargetComponent=00 AssocRCRB- LinkType=MemMapped LinkValid+
Addr: 00000000fed1c000
Kernel driver in use: snd_hda_intel
00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
I/O behind bridge: 00001000-00001fff
Memory behind bridge: 7f900000-7fafffff
Prefetchable memory behind bridge: 000000007fb00000-000000007fcfffff
Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [40] Express (v1) Root Port (Slot+), MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
ExtTag- RBE- FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
LnkCap: Port #1, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <1us, L1 <4us
ClockPM- Surprise- LLActRep+ BwNot-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk-
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x0, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+
Slot #4, PowerLimit 25.000W; Interlock- NoCompl-
SltCtl: Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
SltSta: Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock-
Changed: MRL- PresDet- LinkState-
RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna- CRSVisible-
RootCap: CRSVisible-
RootSta: PME ReqID 0000, PMEStatus- PMEPending-
Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit-
Address: fee0100c Data: 4129
Capabilities: [90] Subsystem: ASUSTeK Computer Inc. Device 8179
Capabilities: [a0] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [100 v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed+ WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed+ WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01
Status: NegoPending- InProgress-
VC1: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed+ WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable- ID=0 ArbSelect=Fixed TC/VC=00
Status: NegoPending- InProgress-
Capabilities: [180 v1] Root Complex Link
Desc: PortNumber=01 ComponentID=00 EltType=Config
Link0: Desc: TargetPort=00 TargetComponent=00 AssocRCRB- LinkType=MemMapped LinkValid+
Addr: 00000000fed1c001
Kernel driver in use: pcieport
00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
I/O behind bridge: 0000e000-0000efff
Memory behind bridge: feb00000-febfffff
Prefetchable memory behind bridge: 000000007f700000-000000007f8fffff
Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [40] Express (v1) Root Port (Slot+), MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
ExtTag- RBE- FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
LnkCap: Port #2, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <1us, L1 <4us
ClockPM- Surprise- LLActRep+ BwNot-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk-
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt-
SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+
Slot #0, PowerLimit 0.000W; Interlock- NoCompl-
SltCtl: Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
SltSta: Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
Changed: MRL- PresDet+ LinkState+
RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna- CRSVisible-
RootCap: CRSVisible-
RootSta: PME ReqID 0000, PMEStatus- PMEPending-
Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit-
Address: fee0100c Data: 4141
Capabilities: [90] Subsystem: ASUSTeK Computer Inc. Device 8179
Capabilities: [a0] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [100 v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed+ WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed+ WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01
Status: NegoPending- InProgress-
VC1: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed+ WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable- ID=0 ArbSelect=Fixed TC/VC=00
Status: NegoPending- InProgress-
Capabilities: [180 v1] Root Complex Link
Desc: PortNumber=02 ComponentID=00 EltType=Config
Link0: Desc: TargetPort=00 TargetComponent=00 AssocRCRB- LinkType=MemMapped LinkValid+
Addr: 00000000fed1c001
Kernel driver in use: pcieport
00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI])
Subsystem: ASUSTeK Computer Inc. P5KPL-VM,P5LD2-VM Mainboard
Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 5
Region 4: I/O ports at d400 [size=32]
Kernel driver in use: uhci_hcd
00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI])
Subsystem: ASUSTeK Computer Inc. P5KPL-VM,P5LD2-VM Mainboard
Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin B routed to IRQ 7
Region 4: I/O ports at d480 [size=32]
Kernel driver in use: uhci_hcd
00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI])
Subsystem: ASUSTeK Computer Inc. P5KPL-VM,P5LD2-VM Mainboard
Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin C routed to IRQ 3
Region 4: I/O ports at d800 [size=32]
Kernel driver in use: uhci_hcd
00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI])
Subsystem: ASUSTeK Computer Inc. P5KPL-VM,P5LD2-VM Mainboard
Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin D routed to IRQ 10
Region 4: I/O ports at d880 [size=32]
Kernel driver in use: uhci_hcd
00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI])
Subsystem: ASUSTeK Computer Inc. P5KPL-VM,P5LD2-VM Mainboard
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 5
Region 0: Memory at fea77c00 (32-bit, non-prefetchable) [size=1K]
Capabilities: [50] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [58] Debug port: BAR=1 offset=00a0
Kernel driver in use: ehci_hcd
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode])
Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Bus: primary=00, secondary=03, subordinate=03, sec-latency=32
I/O behind bridge: 0000f000-00000fff
Memory behind bridge: fff00000-000fffff
Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff
Secondary status: 66MHz- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [50] Subsystem: ASUSTeK Computer Inc. Device 8179
00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Capabilities: [e0] Vendor Specific Information: Len=0c <?>
00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01) (prog-if 8a [Master SecP PriP])
Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard
Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
Latency: 0
Interrupt: pin A routed to IRQ 3
Region 0: I/O ports at 01f0 [size=8]
Region 1: I/O ports at 03f4 [size=1]
Region 2: I/O ports at 0170 [size=8]
Region 3: I/O ports at 0374 [size=1]
Region 4: I/O ports at ffa0 [size=16]
Kernel driver in use: ata_piix
00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01) (prog-if 8f [Master SecP SecO PriP PriO])
Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard
Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin B routed to IRQ 7
Region 0: I/O ports at d080 [size=8]
Region 1: I/O ports at d000 [size=4]
Region 2: I/O ports at cc00 [size=8]
Region 3: I/O ports at c880 [size=4]
Region 4: I/O ports at c800 [size=16]
Capabilities: [70] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold-)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Kernel driver in use: ata_piix
00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01)
Subsystem: ASUSTeK Computer Inc. P5KPL-VM Motherboard
Control: I/O+ Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Interrupt: pin B routed to IRQ 7
Region 4: I/O ports at 0400 [size=32]
Kernel driver in use: i801_smbus
01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 01)
Subsystem: ASUSTeK Computer Inc. Device 8136
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 19
Region 0: I/O ports at e800 [size=256]
Region 2: Memory at febff000 (64-bit, non-prefetchable) [size=4K]
Expansion ROM at febc0000 [disabled] [size=128K]
Capabilities: [40] Power Management version 2
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0-,D1+,D2+,D3hot+,D3cold+)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [48] Vital Product Data
Unknown small resource type 05, will not decode more.
Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+
Address: 00000000fee0100c Data: 4151
Capabilities: [60] Express (v1) Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <128ns, L1 unlimited
ExtTag+ AttnBtn+ AttnInd+ PwrInd+ RBE- FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr+ FatalErr- UnsuppReq+ AuxPwr+ TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s, Latency L0 unlimited, L1 unlimited
ClockPM- Surprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk-
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
Capabilities: [84] Vendor Specific Information: Len=4c <?>
Capabilities: [100 v1] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
AERCap: First Error Pointer: 14, GenCap- CGenEn- ChkCap- ChkEn-
Capabilities: [12c v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed- WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01
Status: NegoPending- InProgress-
Capabilities: [148 v1] Device Serial Number 01-00-00-00-36-4c-e0-00
Capabilities: [154 v1] Power Budgeting <?>
Kernel driver in use: r8169
linux-dopx:~ #
|
You must enable the ata_generic driver and the ext4 filesystem in menuconfig. The options are:
CONFIG_ATA_GENERIC=y: http://cateee.net/lkddb/web-lkddb/ATA_GENERIC.html
CONFIG_EXT4_FS=y: http://cateee.net/lkddb/web-lkddb/EXT4_FS.html
Also note that if you build them as modules (=m), they only appear under /lib/modules/3.1.0 after you run make modules_install (with an underscore). Your first attempt ran make modules install, which invokes the separate modules and install targets and never copies the modules into place.
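In the generated .config these choices appear as plain text; a sketch (CONFIG_ATA=y is the submenu option the generic driver depends on, and =y builds the drivers into the kernel image rather than as modules):

```
CONFIG_ATA=y
CONFIG_ATA_GENERIC=y
CONFIG_EXT4_FS=y
```

With the disk driver and the root filesystem built in (=y), booting no longer depends on finding ata_generic or ext4 under /lib/modules.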
| modprobe: Module ext4 not found. WARNING: no dependencies for kernel module 'ext4' found |
1,370,473,401,000 |
I got this error following LFS 6.8, chapter 5, Tcl-8.5.10 to install Tcl:
lfs@sam:/mnt/lfs/sources/tcl8.5.9/tools$ ./configure --prefix=/tools
./configure: line 1208: cd: ../../tcl8.5/unix: No such file or directory
configure: error: There's no tclConfig.sh in /mnt/lfs/sources/tcl8.5.9/tools; perhaps you didn't specify the Tcl *build* directory (not the toplevel Tcl directory) or you forgot to configure Tcl?
lfs@sam:/mnt/lfs/sources/tcl8.5.9/tools$
How can I fix it?
|
According to the doc you link, you should be in the unix subdirectory of tcl8.5.9 to run the configure script, not in the tools subdirectory.
| How to configure Tcl on Linux From Scratch? |
1,370,473,401,000 |
I'm getting strange errors when I try to compile Python 3.2 on NetBSD 5.1:
python ./Objects/typeslots.py < ./Include/typeslots.h > ./Objects/typeslots.inc
python: not found
*** Error code 127
What am I doing wrong?
I'm trying to compile Python in the usual fashion:
./configure
make
su
make install
|
For some reason, you have to touch some files during the make process. When make quits with this Error 127, run:
touch ./Include/typeslots.h
touch ./Objects/type
touch ./Objects/typeslots.py
make
inside the Python source directory.
It will complain a second time:
./Python/makeopcodetargets.py ./Python/opcode_targets.h
env: python: No such file or directory
Again, just touch the offending files and run make again.
touch ./Python/makeopcodetargets.py
touch ./Python/opcode_targets.h
make
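Why touching helps: make decides whether to re-run a rule by comparing timestamps, and re-generating these particular files is what invokes the missing python. The snippet below is only a sketch of that timestamp logic, with illustrative file names rather than Python's real build graph:

```shell
workdir=$(mktemp -d)
cd "$workdir"
touch typeslots.inc              # stand-in for a generated file
sleep 1
touch typeslots.h                # its prerequisite is now newer
if [ typeslots.h -nt typeslots.inc ]; then
    echo "stale: make would re-run the generating rule"
fi
sleep 1
touch typeslots.inc              # touch the target so it is newest again
if ! [ typeslots.h -nt typeslots.inc ]; then
    echo "up to date: make skips the rule entirely"
fi
```

Whether a given touch marks a file as a stale prerequisite or a fresh target depends on Python's actual Makefile rules, which is why the order of the touches matters.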
| How do I compile Python 3.2 on NetBSD? Error code 127 |
1,370,473,401,000 |
I'm trying to compile some software (FocusWriter) on openSUSE 11.3 (Linux 2.6.34.7-0.5-desktop). (I can't find an actual download link to the alleged openSUSE RPM, just lots of metadata about the RPMs.) So I unpacked the source from git and, following the instructions, ran qmake. I get this:
Package ao was not found in the pkg-config search path.
Perhaps you should add the directory containing `ao.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ao' found
Package hunspell was not found in the pkg-config search path.
Perhaps you should add the directory containing `hunspell.pc'
to the PKG_CONFIG_PATH environment variable
No package 'hunspell' found
Package libzip was not found in the pkg-config search path.
Perhaps you should add the directory containing `libzip.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libzip' found
Package ao was not found in the pkg-config search path.
Perhaps you should add the directory containing `ao.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ao' found
Package hunspell was not found in the pkg-config search path.
Perhaps you should add the directory containing `hunspell.pc'
to the PKG_CONFIG_PATH environment variable
No package 'hunspell' found
Package libzip was not found in the pkg-config search path.
Perhaps you should add the directory containing `libzip.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libzip' found
I know that all those packages are in fact installed, according to both YaST and zypper. /usr/lib64/ contains files such as libao.20.2, libzip.so.1, and libzip.so.1.0.0 -- but nowhere on the hard drive can I find anything called ao.pc, hunspell.pc, or libzip.pc.
Any suggestions what I'm missing here?
Thanks.
|
You have the user libraries installed, but you also need to install the developer libraries and header files.
Taking ao as an example:
The normal user package includes files like:
/usr/lib/libao.so.4.0.0
/usr/lib/libao.so.4
whereas the developer package include files like:
/usr/include/ao/ao.h
/usr/include/ao/os_types.h
/usr/include/ao/plugin.h
/usr/lib/pkgconfig/ao.pc
And it's the second set of files you're missing.
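For reference, ao.pc itself is only a short text file shipped by the devel package; its contents look roughly like this (values illustrative):

```
prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: ao
Description: audio output library
Version: 1.1.0
Libs: -L${libdir} -lao
Cflags: -I${includedir}
```

The Cflags and Libs lines are what pkg-config hands back to qmake, which is why the build cannot proceed without the file.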
I'm not familiar with SUSE's YaST2, but the commands should look something like
yast2 --install libao-devel.
And the same for the other packages of course.
One way to double check the name of the RPM to install is to go to rpmfind.net and paste one of the missing file names in, e.g. /usr/lib/pkgconfig/ao.pc. It will give you a list of RPMs: look for the OpenSUSE 11.3 one and use that name when running yast2 --install.
UPDATE
According to Using zypper to determine what package contains a certain file, you can use zypper rather than needing to use rpmfind.net.
Try this:
zypper wp ao.pc
(untested)
Also, on an RPM-based system, you might find it better to try searching for an RPM .spec file, and build using that.
I found a focuswriter spec file on the OpenSUSE web site.
Then if you build using rpmbuild, it should give you an error telling you which packages you still need to install so you can build it.
This also has the advantage of giving you an RPM you can easily install, upgrade, and uninstall, which uses the SUSE recommended build options.
| qmake looking for lib files named *.pc |
1,370,473,401,000 |
I'm trying to compile a program that uses Vulkan-Hpp, and it uses a macro defined in a file named vulkan_driver.h like this:
// Evaluate f and if result is not a success throw proper vk exception.
#define CHECK_VK_RESULT(x) do { \
vk::Result res = vk::Result(x); \
int tmp = 0; \
vk::createResultValue(res, tmp, __FILE__ ":" TOSTRING(__LINE__)); \
} while (0)
When I compile, I get the following error:
[ 199s] In file included from /home/abuild/rpmbuild/BUILD/decaf-emu-20220508T212126/src/libgpu/src/vulkan/vulkan_driver.cpp:2:
[ 199s] /home/abuild/rpmbuild/BUILD/decaf-emu-20220508T212126/src/libgpu/src/vulkan/vulkan_driver.cpp: In member function 'void vulkan::Driver::initialise(vk::Instance, vk::PhysicalDevice, vk::Device, vk::Queue, uint32_t)':
[ 199s] /home/abuild/rpmbuild/BUILD/decaf-emu-20220508T212126/src/libgpu/src/vulkan/vulkan_driver.h:36:8: error: 'createResultValue' is not a member of 'vk'; did you mean 'createResultValueType'?
[ 199s] 36 | vk::createResultValue(res, tmp, __FILE__ ":" TOSTRING(__LINE__)); \
[ 199s] | ^~~~~~~~~~~~~~~~~
[ 199s] /home/abuild/rpmbuild/BUILD/decaf-emu-20220508T212126/src/libgpu/src/vulkan/vulkan_driver.cpp:89:4: note: in expansion of macro 'CHECK_VK_RESULT'
[ 199s] 89 | CHECK_VK_RESULT(vmaCreateAllocator(&allocatorCreateInfo, &mAllocator));
[ 199s] | ^~~~~~~~~~~~~~~
[ 199s] make[2]: *** [src/libgpu/CMakeFiles/libgpu.dir/build.make:647: src/libgpu/CMakeFiles/libgpu.dir/src/vulkan/vulkan_driver.cpp.o] Error 1
How can I fix this error?
|
I have come across the same error while trying to build https://github.com/jherico/Vulkan (on Ubuntu 22.04 and Windows 10).
For example, this line in glfw.cpp leads to the same compile error that Ahmed Moselhi posted.
I found that both the name AND the signature of this function have been changed in vulkan.hpp
(see this commit in the Github Vulkan-Headers repo)
To get the compile working again, I needed to change the usages of the function from:
vk::createResultValue(result, rawSurface, "vk::CommandBuffer::begin");
to:
vk::createResultValueType(result, rawSurface);
In your case @Ahmed Moselhi, you should change the code to the following:
vk::createResultValueType(res, tmp);
| vulkan build error : 'createResultValue' is not a member of 'vk' |
1,370,473,401,000 |
I was trying to install a library called Openslide which failed during the ./configure step because it could not find a dependency (libjpeg).
I thought I would proceed to build libjpeg and then manually provide the library location to ./configure to make it work. After building libjpeg at ~/libjpeg, I thought I could just add ~/libjpeg/lib to LD_LIBRARY_PATH by putting the following in my bashrc and re-sourcing it: LD_LIBRARY_PATH=~/libjpeg/lib:$LD_LIBRARY_PATH.
This didn't work and libjpeg still couldn't be found by the ./configure script in Openslide. I started hunting down answers online, one suggestion was to try ./configure --with-libjpeg=~/libjpeg/lib which also failed.
I eventually gave up and just did a sudo apt install, but I am still curious as to why I couldn't manually provide the location of the library. Is there a correct way to do this?
|
OpenSlide uses pkg-config to find its dependencies, so you need to tell pkg-config where to find your library. (LD_LIBRARY_PATH didn't help because it is only consulted by the dynamic loader at run time; configure never uses it to locate headers or link flags.)
PKG_CONFIG_PATH=~/libjpeg/pkg-config ./configure …
replacing ~/libjpeg/pkg-config with the path to the directory containing libjpeg.pc.
Unfortunately the libjpeg implementation you used is very old and doesn’t provide a .pc file; you might want to use libjpeg-turbo instead (that’s what libjpeg-dev pulls in on current Debian and derivatives).
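To see what the variable does, here is a small self-contained illustration of the lookup pkg-config performs. The directory layout and .pc contents are hypothetical, and the loop only mimics the search step; pkg-config itself also parses the file:

```shell
root=$(mktemp -d)
mkdir -p "$root/libjpeg/lib/pkgconfig"
# a .pc file records where the headers and libraries were installed:
cat > "$root/libjpeg/lib/pkgconfig/libjpeg.pc" <<'EOF'
Name: libjpeg
Description: JPEG image codec
Version: 9e
Libs: -L/home/user/libjpeg/lib -ljpeg
Cflags: -I/home/user/libjpeg/include
EOF
# pkg-config checks each colon-separated PKG_CONFIG_PATH entry for <module>.pc:
PKG_CONFIG_PATH="$root/libjpeg/lib/pkgconfig:/usr/lib/pkgconfig"
IFS=':'
for dir in $PKG_CONFIG_PATH; do
    [ -e "$dir/libjpeg.pc" ] && echo "found: $dir/libjpeg.pc"
done
unset IFS
```

Once a matching .pc file is found, its Cflags and Libs lines are what configure's checks actually consume.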
| How to provide library path to ./configure script |
1,370,473,401,000 |
I'm using this toolchain provided by a manufacturer of control boards.
I followed the instructions step by step, but when I tried to compile the example code, the compilation process got stuck at the "$basename can't execute" branch of the if clause.
I'm not exactly a wizard of bash scripts, so I have no idea what I'm looking at.
#!/bin/bash
# uclibc-toolchain-wrapper
basename=$0
if [ -d $basename ]
then
echo "This can't be a directory."
exit 1;
fi
tool_name=${basename##*/}
if [[ $tool_name =~ "mips-linux-uclibc-gnu" ]]
then
prefix=${basename%-uclibc-*}
postfix=${basename##*mips-linux-uclibc}
$prefix$postfix "-muclibc" $@
else
echo "$basename can't execute."
exit 1;
fi
So what should I do to get this script rolling?
The user manual told me to modify environment variables in order to "install" the toolchain, which basically consists of adding a designated path to the PATH variable in .bashrc. Of course I've placed the entire toolchain inside the designated folder.
When I type the "make" command in the source folder, the toolchain does appear to be called upon, but the execution stops at this script with an error printout: "uclibc-toolchain-wrapper can't execute", where "uclibc-toolchain-wrapper" is the filename of this script.
I've tried this on Lubuntu 13, Ubuntu 22, and Debian 5, and all met the same result.
Please help! Thanks in advance!
|
In the beginning of the script, $basename is set to be equal to $0 which means "the name (possibly including a directory path) this script was executed as".
The apparent purpose of this script is to be linked/copied to several different names, like cc1-mips-linux-uclibc-gnu for the first phase of the compiler. When executed using that name, it rearranges the command to cc1-mips-linux-gnu -muclibc <script parameters> and attempts to execute that.
From this, it would seem that the toolchain package might contain several executable tools named like:
<tool name>-mips-linux-gnu<possible suffix>
This wrapper would then presumably be linked as:
<tool name>-mips-linux-uclibc-gnu<possible suffix>
for each of them, to allow the use of long-form tool names that include the -uclibc- part.
If you run ls -lF in a directory that contains the toolchain's executables, you might/should see symbolic links like:
cc1-mips-linux-uclibc-gnu -> uclibc-toolchain-wrapper
<some tool>-mips-linux-uclibc-gnu -> uclibc-toolchain-wrapper
<another tool>-mips-linux-uclibc-gnu -> uclibc-toolchain-wrapper
...
and either in the same directory, or elsewhere within the toolchain package, there would be executables named like:
cc1-mips-linux-gnu
<some tool>-mips-linux-gnu
<another tool>-mips-linux-gnu
...
The <some tool>, <another tool> etc. might be well-known compiler/linker component names like cpp, ld etc., or other tools.
But if the literal error message you're getting when you run make is
uclibc-toolchain-wrapper can't execute
then the script is detecting that it's being run as just uclibc-toolchain-wrapper, which is not correct: this script expects to be called using a name that includes the string mips-linux-uclibc-gnu. That is literally the test that causes the "$basename can't execute" error message when it fails.
Have you extracted the toolchain package to a non-Unix-like filesystem (like NTFS) and are you trying to run it from there? On such filesystems, symbolic link semantics might not work the same as on Unix-like filesystems, which could explain why the script is not able to get the original name of the symbolic link used to run it, and gets the real name of the script (uclibc-toolchain-wrapper) instead. In this case, the fix would be to re-extract the toolchain package to a real Unix-like filesystem and use it from there.
Or if the Makefile literally calls the script using the name uclibc-toolchain-wrapper, then that Makefile is being silly and/or somehow misconfigured.
| Why this script stuck at "can't execute" branch? |
1,370,473,401,000 |
I'm having trouble to compile kernel modules with KBUILD_CFLAGS_MODULE with new kernel. The compiler shows me a weird error. Such builds used to work with my older kernel (5.5) but does not work anymore with my 5.16 kernel.
Here is a minimal reproducible example:
Dummy module:
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
MODULE_LICENSE("GPL");
static int __init lkm_example_init(void) {return 0;}
static void __exit lkm_example_exit(void){}
module_init(lkm_example_init);
module_exit(lkm_example_exit);
Makefile:
TARGET ?= test
obj-m += ${TARGET}.o
KBUILD_CFLAGS_MODULE := "-O1" "-mcmodel=medium" # Examples
.PHONY: all
all:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
If I comment out the KBUILD_CFLAGS_MODULE line, my build works.
However, if uncommented my compilation fails with this error message:
make TARGET=test
make -C /lib/modules/5.16.0/build M=/home/user/test modules
make[1]: Entering directory '/home/user/linux'
CC [M] /home/user/test/test.o
In file included from ./include/linux/module.h:22,
from /home/user/test/test.c:2:
./include/linux/module.h:183:39: error: expected ',' or ';' before 'KBUILD_MODFILE'
183 | #define MODULE_FILE MODULE_INFO(file, KBUILD_MODFILE);
| ^~~~~~~~~~~~~~
./include/linux/moduleparam.h:26:47: note: in definition of macro '__MODULE_INFO'
26 | = __MODULE_INFO_PREFIX __stringify(tag) "=" info
| ^~~~
./include/linux/module.h:183:21: note: in expansion of macro 'MODULE_INFO'
183 | #define MODULE_FILE MODULE_INFO(file, KBUILD_MODFILE);
| ^~~~~~~~~~~
./include/linux/module.h:230:34: note: in expansion of macro 'MODULE_FILE'
230 | #define MODULE_LICENSE(_license) MODULE_FILE MODULE_INFO(license, _license)
| ^~~~~~~~~~~
/home/user/test/test.c:4:1: note: in expansion of macro 'MODULE_LICENSE'
4 | MODULE_LICENSE("GPL");
| ^~~~~~~~~~~~~~
make[2]: *** [scripts/Makefile.build:287: /home/user/test/test.o] Error 1
make[1]: *** [Makefile:1846: /home/user/test] Error 2
make[1]: Leaving directory '/home/user/linux'
make: *** [Makefile:6: all] Error 2
Do you know what could be the root cause of this issue?
|
It’s not obvious from the documentation, but you’re supposed to add to KBUILD_CFLAGS_MODULE. Change your declaration to
KBUILD_CFLAGS_MODULE += "-O1" "-mcmodel=medium" # Examples
and the build will work.
The root cause of the build failure is that KBUILD_CFLAGS_MODULE lost its initial -DMODULE contents, which messed up the MODULE_FILE declaration.
| Can't compile Kernel module with KBUILD_CFLAGS_MODULE |
1,370,473,401,000 |
I wanted to uninstall conky that I built from source on my arch linux and this thread suggested installing checkinstall. But I am new to all this and makepkg -sic resulted in the following error -
makepkg -si
==> Making package: checkinstall 1.6.2-5 (Thursday 22 April 2021 12:28:02 AM)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
-> Found checkinstall-1.6.2.tar.gz
-> Found 0001-Felipe-Sateler-Tag-only-regular-files-as-conffiles.patch
-> Found 0002-Backtick-patch-from-Andrey-Schmachev-Copyright-year-.patch
-> Found 0003-Fixed-bug-3-Removed-extra-okfail-and-fixed-spanish-t.patch
-> Found 0004-Fixed-bug-1-Source-directory-package-name-with-space.patch
-> Found 0005-Applied-patch-from-Ladislav-Hagara-for-compiling-ins.patch
-> Found 0006-Added-Norwegian-translation-update-from-Andreas-Note.patch
-> Found 0007-Added-summary-command-line-option.patch
-> Found 0008-Fixed-glibc-minor-version-handling.patch
-> Found 0009-Fixed-warning-about-uninitialized-variable-in-fopen-.patch
-> Found 0010-Support-for-the-Makefile-PREFIX-variable.patch
-> Found 0011-We-now-create-Slackware-packages-in-TMP_DIR.patch
-> Found 0012-Fixed-bug-110.-create-localdecls-correctly-identifie.patch
-> Found 0013-Fixed-bug-23.-We-remove-empty-fields-from-the-Debian.patch
-> Found 0014-Fixed-typo-in-create-localdecls.patch
-> Found 0015-Fixed-bug-30.-Newlines-are-converted-to-underscores-.patch
-> Found 0016-Fixed-bug-38.-.spec-file-macro-processing.patch
-> Found 0017-Fixed-bug-112-make-install-fails-on-Fedora-21.patch
-> Found 0018-Fixed-bug-137-Missing-in-copy_dir_hierarchy.patch
-> Found 0019-Fixed-bug-35-Directories-in-etc-are-incorrectly-incl.patch
-> Found 0020-add-support-for-recommends-and-suggests-AKA-weak-dep.patch
-> Found 0021-Load-checkinstallrc-from-etc.patch
-> Found 0022-Drop-cases-for-glibc-2.4.patch
-> Found 0023-fix-usr-sbin-merge-to-usr-bin-in-Arch.patch
-> Found 0024-using-custom-cflag-and-ldflag.patch
-> Found 0025-fix-installwatch-path-usr-local.patch
==> Validating source files with b2sums...
checkinstall-1.6.2.tar.gz ... Passed
0001-Felipe-Sateler-Tag-only-regular-files-as-conffiles.patch ... Passed
0002-Backtick-patch-from-Andrey-Schmachev-Copyright-year-.patch ... Passed
0003-Fixed-bug-3-Removed-extra-okfail-and-fixed-spanish-t.patch ... Passed
0004-Fixed-bug-1-Source-directory-package-name-with-space.patch ... Passed
0005-Applied-patch-from-Ladislav-Hagara-for-compiling-ins.patch ... Passed
0006-Added-Norwegian-translation-update-from-Andreas-Note.patch ... Passed
0007-Added-summary-command-line-option.patch ... Passed
0008-Fixed-glibc-minor-version-handling.patch ... Passed
0009-Fixed-warning-about-uninitialized-variable-in-fopen-.patch ... Passed
0010-Support-for-the-Makefile-PREFIX-variable.patch ... Passed
0011-We-now-create-Slackware-packages-in-TMP_DIR.patch ... Passed
0012-Fixed-bug-110.-create-localdecls-correctly-identifie.patch ... Passed
0013-Fixed-bug-23.-We-remove-empty-fields-from-the-Debian.patch ... Passed
0014-Fixed-typo-in-create-localdecls.patch ... Passed
0015-Fixed-bug-30.-Newlines-are-converted-to-underscores-.patch ... Passed
0016-Fixed-bug-38.-.spec-file-macro-processing.patch ... Passed
0017-Fixed-bug-112-make-install-fails-on-Fedora-21.patch ... Passed
0018-Fixed-bug-137-Missing-in-copy_dir_hierarchy.patch ... Passed
0019-Fixed-bug-35-Directories-in-etc-are-incorrectly-incl.patch ... Passed
0020-add-support-for-recommends-and-suggests-AKA-weak-dep.patch ... Passed
0021-Load-checkinstallrc-from-etc.patch ... Passed
0022-Drop-cases-for-glibc-2.4.patch ... Passed
0023-fix-usr-sbin-merge-to-usr-bin-in-Arch.patch ... Passed
0024-using-custom-cflag-and-ldflag.patch ... Passed
0025-fix-installwatch-path-usr-local.patch ... Passed
==> Extracting sources...
-> Extracting checkinstall-1.6.2.tar.gz with bsdtar
==> Starting prepare()...
patching file INSTALL
Reversed (or previously applied) patch detected! Assuming -R.
patching file checkinstall
patching file description-pak
Reversed (or previously applied) patch detected! Assuming -R.
patching file installwatch/create-localdecls
The next patch would delete the file installwatch/description-pak,
which does not exist! Assuming -R.
patching file installwatch/description-pak
patching file installwatch/installwatch.c
patching file checkinstall
patching file installwatch/installwatch
patching file checkinstall
patching file locale/checkinstall-es.po
patching file checkinstall
patching file installwatch/create-localdecls
patching file checkinstall-man.sgml
patching file installwatch-man.sgml
patching file locale/checkinstall-no.po
patching file checkinstall
patching file installwatch/create-localdecls
patching file installwatch/installwatch.c
patching file Makefile
patching file checkinstall.in (renamed from checkinstall)
patching file checkinstall.in
patching file installwatch/Makefile
patching file installwatch/create-localdecls
patching file installwatch/installwatch.c
patching file installwatch/libcfiletest.c
patching file installwatch/libctest.c
patching file checkinstall.in
patching file installwatch/create-localdecls
patching file checkinstall.in
patching file checkinstall.in
patching file Makefile
patching file checkinstall.in
patching file checkinstall.in
patching file checkinstall.in
patching file checkinstall.in
patching file installwatch/installwatch.c
patching file Makefile
patching file checkinstall.in
patching file checkinstallrc-dist
patching file installwatch/Makefile
patching file checkinstallrc-dist
==> Removing existing $pkgdir/ directory...
==> Starting build()...
for file in locale/checkinstall-*.po ; do \
case ${file} in \
locale/checkinstall-template.po) ;; \
*) \
out=`echo $file | sed -s 's/po/mo/'` ; \
msgfmt -o ${out} ${file} ; \
if [ $? != 0 ] ; then \
exit 1 ; \
fi ; \
;; \
esac ; \
done
sed 's%MAKEFILE_PREFIX%/usr/local%g' checkinstall.in > checkinstall
make -C installwatch
make[1]: Entering directory '/home/privileged/applications/checkinstall/src/checkinstall-1.6.2/installwatch'
./create-localdecls
Checking truncate argument type... off_t
Checking readlinkat result type... ssize_t
Checking which libc we are using... libc.so.6
Checking libc version... 2.33
glibc >= 2 found
Checking glibc subversion... 33
gcc -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2,-D_GLIBCXX_ASSERTIONS -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -Wall -c -g -D_GNU_SOURCE -DPIC -fPIC -D_REENTRANT -DVERSION=\"0.7.0beta7\" installwatch.c
installwatch.c: In function ‘true_stat’:
installwatch.c:161:20: error: ‘_STAT_VER’ undeclared (first use in this function)
161 | return true_xstat(_STAT_VER,pathname,info);
| ^~~~~~~~~
installwatch.c:161:20: note: each undeclared identifier is reported only once for each function it appears in
installwatch.c: In function ‘true_mknod’:
installwatch.c:165:21: error: ‘_MKNOD_VER’ undeclared (first use in this function)
165 | return true_xmknod(_MKNOD_VER,pathname,mode,&dev);
| ^~~~~~~~~~
installwatch.c: In function ‘true_lstat’:
installwatch.c:169:21: error: ‘_STAT_VER’ undeclared (first use in this function)
169 | return true_lxstat(_STAT_VER,pathname,info);
| ^~~~~~~~~
installwatch.c: In function ‘true_fstatat’:
installwatch.c:173:23: error: ‘_STAT_VER’ undeclared (first use in this function)
173 | return true_fxstatat(_STAT_VER, dirfd, pathname, info, flags);
| ^~~~~~~~~
installwatch.c: In function ‘true_fstatat64’:
installwatch.c:177:25: error: ‘_STAT_VER’ undeclared (first use in this function)
177 | return true_fxstatat64(_STAT_VER, dirfd, pathname, info, flags);
| ^~~~~~~~~
installwatch.c: In function ‘true_mknodat’:
installwatch.c:181:23: error: ‘_MKNOD_VER’ undeclared (first use in this function)
181 | return true_xmknodat(_MKNOD_VER, dirfd, pathname, mode, &dev);
| ^~~~~~~~~~
installwatch.c: In function ‘instw_init’:
installwatch.c:1209:3: warning: ignoring return value of ‘realpath’ declared with attribute ‘warn_unused_result’ [-Wunused-result]
1209 | realpath(proot,wrkpath);
| ^~~~~~~~~~~~~~~~~~~~~~~
installwatch.c:1328:3: warning: ignoring return value of ‘realpath’ declared with attribute ‘warn_unused_result’ [-Wunused-result]
1328 | realpath(__instw.root,wrkpath);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
installwatch.c:1346:4: warning: ignoring return value of ‘realpath’ declared with attribute ‘warn_unused_result’ [-Wunused-result]
1346 | realpath(pexclude,wrkpath);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
installwatch.c: In function ‘true_stat’:
installwatch.c:162:1: warning: control reaches end of non-void function [-Wreturn-type]
162 | }
| ^
installwatch.c: In function ‘copy_path’:
installwatch.c:755:5: warning: ignoring return value of ‘write’ declared with attribute ‘warn_unused_result’ [-Wunused-result]
755 | write(translfd,buffer,bytes);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
installwatch.c: In function ‘true_lstat’:
installwatch.c:170:1: warning: control reaches end of non-void function [-Wreturn-type]
170 | }
| ^
installwatch.c: In function ‘true_mknod’:
installwatch.c:166:1: warning: control reaches end of non-void function [-Wreturn-type]
166 | }
| ^
make[1]: *** [Makefile:22: installwatch.o] Error 1
make[1]: Leaving directory '/home/privileged/applications/checkinstall/src/checkinstall-1.6.2/installwatch'
make: *** [Makefile:12: all] Error 2
==> ERROR: A failure occurred in build().
Aborting...
I am not sure what this means and searching online did not result in any useful hits either.
Anybody well versed with this mind helping me out perhaps?
|
It looks like you're not the only person to have seen this -- it seems that the package you're using has not been updated for glibc-2.33. There's a patch at the end of this thread which is supposed to fix that (though, be warned, it was only posted earlier today).
| ERROR: A failure occurred in build(). while installing "checkinstall" help the newbie out |
1,370,473,401,000 |
I am trying to compile the demo for the Acontis EtherCAT master stack, but G++ is reporting a number of undefined references when trying to compile, without giving any clue as to which headers or libraries need to be included to correct the problem. Since G++ is not reporting any missing headers, how can I figure out which files are required to satisfy the undefined references?
If it matters, I can create the object files from source, the errors occur during linking. Acontis did not provide a makefile.
The documentation provided by Acontis for linux is as follows:
I have tried using g++ and hunting down the header locations (Format simplified to make it more readable and <Install_Location> inserted so that each argument fits on a single line):
g++
-I <Install_Location>/Examples/EcMasterDemo/
-I <Install_Location>/SDK/INC/
-I <Install_Location>/SDK/INC/Linux
-I <Install_Location>/Examples/Common/Linux
-I <Install_Location>/Examples/Common/
-I <Install_Location>/Sources/Common
-o test
EcDemoApp.cpp
<Install_Location>/Examples/Common/Linux/EcDemoMain.cpp
<Install_Location>/Sources/Common/EcTimer.cpp
<Install_Location>/SDK/LIB/Linux/x64/libAtemRasSrv.a
<Install_Location>/SDK/LIB/Linux/x64/libEcMaster.a
-pthread
This is a short snippet of the output:
I am running Ubuntu 20.04 with kernel 4.14.213-rt103 #1 SMP PREEMPT RT. g++ is version 9.3.0.
Update after fixing the -l arguments (thank you steeldriver)
command executed:
/ClassB/Examples/EcMasterDemo$ gcc
<Install_Dir>/ClassB/Examples/Common/Linux/EcDemoMain.cpp
<Install_Dir>/ClassB/Examples/EcMasterDemo/EcDemoApp.cpp
<Install_Dir>/ClassB/Sources/Common/EcTimer.cpp
-o test
-I <Install_Dir>/ClassB/Examples/EcMasterDemo
-I <Install_Dir>/ClassB/SDK/INC/Linux
-I <Install_Dir>/ClassB/SDK/INC
-I <Install_Dir>/ClassB/Sources/Common
-I <Install_Dir>/ClassB/Examples/Common
-I <Install_Dir>/ClassB/Examples/Common/Linux
-L <Install_Dir>/ClassB/SDK/LIB/Linux/x64
-lAtemRasSrv -lEcMaster -pthread -ldl -lrt
Which seemed to have fixed a few undefined references, but a lot still exist.
|
There were two issues preventing the compiling of the program.
First, as answered by steeldriver, the library path was not correctly included and the libs were not correctly referenced in GCC.
Second, several cpp source files were missing, either accidentally deleted or not successfully decompressed from the archive the first time.
Once these issues were corrected, the program built correctly in GCC according to the demo source file list provided by the programmers earlier in the documentation.
For reference and since Acontis does not provide compiler examples, these are the G++ arguments that allowed the Acontis etherCAT master demo to build on ubuntu linux 20.04:
g++
<Install_Dir>/ClassB/Examples/Common/Linux/EcDemoMain.cpp
<Install_Dir>/ClassB/Examples/EcMasterDemo/EcDemoApp.cpp
<Install_Dir>/ClassB/Examples/Common/EcDemoParms.cpp
<Install_Dir>/ClassB/Examples/Common/EcSelectLinkLayer.cpp
<Install_Dir>/ClassB/Examples/Common/EcNotification.cpp
<Install_Dir>/ClassB/Examples/Common/EcSdoServices.cpp
<Install_Dir>/ClassB/Examples/Common/EcSlaveInfo.cpp
<Install_Dir>/ClassB/Examples/Common/EcLogging.cpp
<Install_Dir>/ClassB/Sources/Common/EcTimer.cpp
-o test
-I <Install_Dir>/ClassB/Examples/EcMasterDemo
-I <Install_Dir>/ClassB/SDK/INC/Linux
-I <Install_Dir>/ClassB/SDK/INC
-I <Install_Dir>/ClassB/Sources/Common
-I <Install_Dir>/ClassB/Examples/Common
-I <Install_Dir>/ClassB/Examples/Common/Linux
-L <Install_Dir>/ClassB/SDK/LIB/Linux/x64
-lAtemRasSrv -lEcMaster -pthread -ldl -lrt -Wall
| What is the best way to determine missing dependencies when using G++ to compile a program with headers and static libraries? |
1,370,473,401,000 |
I'm currently developing a Linux Security Module which is stored in the security directory of the kernel source tree. When I compile and install the kernel using the following commands, the module is loaded and everything is working fine:
fakeroot make -j9 -f debian/rules.gen binary-arch_amd64_none_amd64
apt remove linux-image-4.19.0-9-amd64-unsigned
dpkg -i linux-image-4.19.0-9-amd64-unsigned_4.19.118-2_amd64.deb
If I make changes to the module and rebuild the kernel using the commands above, however, they won't be included in the new image unless I delete all build output and recompile the whole kernel.
Is there a way to only rebuild a specific part of the kernel i.e. only the security directory?
|
I found it out thanks to the help of a university professor.
You have to delete the file debian/stamps/build_amd64_none_amd64.
# The next line make sure only the required parts are rebuild
rm debian/stamps/build_amd64_none_amd64
# Rebuild the kernel
fakeroot debian/rules source
fakeroot make -j9 -f debian/rules.gen binary-arch_amd64_none_amd64
| How can I recompile only a specific part of the Linux kernel on Debian Buster? |
1,370,473,401,000 |
I am attempting to install rejoystick, and when I run make, I get this:
Making all in src
make[1]: Entering directory '/home/chrx/Downloads/joystick/rejoystick-0.8.1/src'
make[2]: Entering directory '/home/chrx/Downloads/joystick/rejoystick-0.8.1/src'
/bin/bash ../libtool --tag=CC --mode=link gcc -g -O2 -std=iso9899:1990 -Wall -pedantic -I../include -O2 -s -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/SDL -D_GNU_SOURCE=1 -D_REENTRANT -pthread -I/usr/include/gtk-2.0 -I/usr/lib/x86_64-linux-gnu/gtk-2.0/include -I/usr/include/gio-unix-2.0/ -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/libpng12 -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/libpng12 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -o rejoystick assign_button.o backend.o button_axis.o error.o io.o js_axis.o js_button.o list.o main.o sdl_misc.o -lXtst -lgthread-2.0 -pthread -lglib-2.0 -L/usr/lib/x86_64-linux-gnu -lSDL -lgtk-x11-2.0 -lgdk-x11-2.0 -lpangocairo-1.0 -latk-1.0 -lcairo -lgdk_pixbuf-2.0 -lgio-2.0 -lpangoft2-1.0 -lpango-1.0 -lgobject-2.0 -lfontconfig -lfreetype -lglib-2.0
gcc -g -O2 -std=iso9899:1990 -Wall -pedantic -I../include -O2 -s -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/SDL -D_GNU_SOURCE=1 -D_REENTRANT -pthread -I/usr/include/gtk-2.0 -I/usr/lib/x86_64-linux-gnu/gtk-2.0/include -I/usr/include/gio-unix-2.0/ -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/libpng12 -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/libpng12 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -o rejoystick assign_button.o backend.o button_axis.o error.o io.o js_axis.o js_button.o list.o main.o sdl_misc.o -pthread -lXtst -lgthread-2.0 -L/usr/lib/x86_64-linux-gnu -lSDL -lgtk-x11-2.0 -lgdk-x11-2.0 -lpangocairo-1.0 -latk-1.0 -lcairo -lgdk_pixbuf-2.0 -lgio-2.0 -lpangoft2-1.0 -lpango-1.0 -lgobject-2.0 -lfontconfig /usr/lib/x86_64-linux-gnu/libfreetype.so -lglib-2.0 -Wl,--rpath -Wl,/usr/lib/x86_64-linux-gnu -Wl,--rpath -Wl,/usr/lib/x86_64-linux-gnu
/usr/bin/ld: io.o: undefined reference to symbol 'XKeycodeToKeysym'
/usr/lib/x86_64-linux-gnu/libX11.so.6: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
Makefile:277: recipe for target 'rejoystick' failed
make[2]: *** [rejoystick] Error 1
make[2]: Leaving directory '/home/chrx/Downloads/joystick/rejoystick-0.8.1/src'
Makefile:335: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/chrx/Downloads/joystick/rejoystick-0.8.1/src'
Makefile:248: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1
What can I do to fix this? (I'm guessing the error is error adding symbols: DSO missing from command line)
|
The build is missing -lX11: the undefined symbol XKeycodeToKeysym is defined in libX11, which is only pulled in indirectly through other shared libraries, and the linker no longer resolves symbols against such indirect dependencies (that is what "DSO missing from command line" means). To work around that, run
./configure LIBS=-lX11 && make
| Make error: DSO missing from command line |
1,370,473,401,000 |
I'm interested in modifying and re-compiling one of the wireless drivers in a Linux environment. I know exactly which line in which file I need to modify; however, how can I re-compile the source code from .c to .ko? Correct me if I'm wrong, but the .ko file is how I am able to specify my modified wireless driver, I think.
[8/7/2018] - Edited for more information.
I have edited the brcmfmac driver to transmit static data and I am trying to re-compile it. Therefore I would like to know how I can compile it into a .ko so that I can put this new driver onto my OpenWrt device. I hope that clears things up. I am still in the midst of attempting to compile it.
|
You can recompile a kernel module by running the make -C /lib/modules/$(uname -r)/build M=$(pwd) modules command in the module's source directory.
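That command expects a small Makefile with an obj-m line next to the source. A minimal sketch (mymodule.o is a placeholder for your object name; a driver like brcmfmac is built from several source files, so check its in-tree Makefile for the real object list — and for an OpenWrt target you would point -C at the OpenWrt build tree's kernel directory and cross-compile instead of using uname -r):

```make
# Minimal out-of-tree kernel module Makefile (sketch)
obj-m += mymodule.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```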
| Modifying and Re-compiling Linux Drivers [closed] |
1,370,473,401,000 |
I am trying to compile i3 version 4.14.1 under Cygwin 2.884 (Windows 7). I have installed needed libiconv library via Cygwin setup but while running ./configure I get this error:
configure: error: in `/home/msamec/Downloads/i3-4.14.1/x86_64-unknown cygwin':
configure: error: cannot find the required iconv_open() function despite trying
to link with -liconv
See `config.log' for more details
Any clue what can I do to help it find the library?
I have tried to compile the library libiconv-1.13.1 manually but I have encountered some errors I don't know how to resolve:
libtool: link: /bin/gcc -shared .libs/localcharset.o .libs/relocatable.o -o .libs/cygcharset-1.dll -Wl,--enable-auto-image-base -Xlinker --out-implib -Xlinker .libs/libcharset.dll.a
.libs/relocatable.o: In function `DllMain':
/home/msamec/Downloads/libiconv-1.13.1/libcharset/lib/./relocatable.c:324: undefined reference to `cygwin_conv_to_posix_path'
/home/msamec/Downloads/libiconv-1.13.1/libcharset/lib/./relocatable.c:324:(.text+0x113): relocation truncated to fit: R_X86_64_PC32 against undefined symbol `cygwin_conv_to_posix_path'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:59: libcharset.la] Error 1
make[2]: Leaving directory '/home/msamec/Downloads/libiconv-1.13.1/libcharset/lib'
make[1]: *** [Makefile:34: all] Error 2
make[1]: Leaving directory '/home/msamec/Downloads/libiconv-1.13.1/libcharset'
make: *** [Makefile:42: lib/localcharset.h] Error 2
I have grepped the iconv_open() function name and found it in the cygwin folder
/usr/i686-pc-cygwin/sys-root/usr/include/iconv.h
and also in the libiconv folder
/usr/include/iconv.h
But for some reason the configure script is not able to find it.
Here is my config.log
Here is my iconv.h
https://gist.github.com/anonymous/0b117d1680954d591f989256b508bfc5
I have checked where this library file iconv.h is located on Ubuntu. Unlike in Cygwin, where it is in /usr/include/, on Ubuntu it is in /lib/. Tried copying the library to that location but that did not help either. I was able to reproduce the issue on my home Windows 10 machine as well.
EDIT: Here is the configure file that I am using: enter link description here
|
The test is failing as
| char iconv_open ();
| int
| main ()
| {
| return iconv_open ();
| ;
| return 0;
| }
configure:6391: /bin/gcc -o conftest.exe conftest.c -liconv -lev >&5
/tmp/ccz9hxNr.o:conftest.c:(.text+0xe): undefined reference to `iconv_open'
/tmp/ccz9hxNr.o:conftest.c:(.text+0xe): relocation truncated to fit: R_X86_64_PC32 against undefined symbol `iconv_open'
is wrongly looking for a plain iconv_open symbol in the libiconv library.
The test code should use the provided /usr/include/iconv.h
where there is a
#define iconv_open libiconv_open
and the cygwin library libiconv exports:
$ objdump -x /usr/lib/libiconv.dll.a | grep iconv_open
[ 5](sec 1)(fl 0x00)(ty 0)(scl 2) (nx 0) 0x0000000000000000 libiconv_open_into
[ 6](sec 3)(fl 0x00)(ty 0)(scl 2) (nx 0) 0x0000000000000000 __imp_libiconv_open_into
[ 5](sec 1)(fl 0x00)(ty 0)(scl 2) (nx 0) 0x0000000000000000 libiconv_open
[ 6](sec 3)(fl 0x00)(ty 0)(scl 2) (nx 0) 0x0000000000000000 __imp_libiconv_open
the symbol libiconv_open.
You need to correct the test to use iconv.h.
The test is defined in configure.ac
AC_SEARCH_LIBS([iconv_open], [iconv], , [AC_MSG_FAILURE([cannot find the required iconv_open() function despite trying to link with -liconv])])
a possible workaround is to change it into something that will test both options.
AC_SEARCH_LIBS([iconv_open],[iconv],,
AC_SEARCH_LIBS([libiconv_open],[iconv],,[AC_MSG_FAILURE([cannot find the required iconv_open() function despite trying to link with -liconv])]))
Disclaimer: not tested and you need to run autoreconf to rebuild configure
| compiling i3 under cygwin - cannot find libiconv library |
1,370,473,401,000 |
I'm running Manjaro Linux and trying to install Discord app. Since Discord doesn't have an official build for Arch-based systems, I've tried to use yaourt and the install gives me this error:
==> Verifying source file signatures with gpg...
llvm-6.0.0.src.tar.xz ... FAILED (unknown public key 0FC3042E345AD05D)
libcxx-6.0.0.src.tar.xz ... FAILED (unknown public key 0FC3042E345AD05D)
libcxxabi-6.0.0.src.tar.xz ... FAILED (unknown public key 0FC3042E345AD05D)
==> ERROR: One or more PGP signatures could not be verified!
==> ERROR: Makepkg was unable to build libc++.
==> Restart building libc++ ? [y/N]
So even if I type "Y" to restart the build, it doesn't work because it stops at the same error again.
Is there a way to get those three public keys and manually point to them? Or another way to install the package?
|
When installing Discord, during the install the system will try to validate the PGP signatures for libc++. The signatures should be added by the user, as seen on the package instructions in the AUR (here).
During the install the system will ask if you want to edit the PKGBUILD, and you should input "yes". Search for the keys there, in the validpgpkeys array.
Copy those two keys and run in a separate window the command:
gpg --recv-keys <KEY_A> <KEY_B>
Replace KEY_A and KEY_B with the signatures found on the PKGBUILD file.
After importing those keys you should see something like this:
gpg: key 0FC3042E345AD05D: 3 signatures not checked due to missing keys
gpg: key 0FC3042E345AD05D: public key "Hans Wennborg <[email protected]>" imported
gpg: key 8F0871F202119294: 3 signatures not checked due to missing keys
gpg: key 8F0871F202119294: public key "Tom Stellard <[email protected]>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 2
gpg: imported: 2
And then you can proceed with the libc++ installation.
| Unknown public key when installing dependency for package in Manjaro Linux? |
1,370,473,401,000 |
I am a pretty new FreeBSD user.
While trying to install GNU m4 1.4.18, I get a failure in the eval test section:
Checking ./189.eval
@ ../doc/m4.texi:6405: Origin of test
./189.eval: stdout mismatch
--- m4-tmp.2536/m4-xout 2017-12-18 22:11:42.931036000 +0000
+++ m4-tmp.2536/m4-out 2017-12-18 22:11:42.928582000 +0000
@@ -2,8 +2,8 @@
1
1
-overflow occurred
--2147483648
+
+2147483648
0
-2
-2
this is what inside 189.eval:
dnl @ ../doc/m4.texi:6405: Origin of test
dnl @ expected status: 0
dnl @ extra options:
dnl @ Copyright (C) 2006, 2007, 2008, 2009 Free Software
dnl @ Foundation, Inc.
dnl @ This file is free software; the Free Software Foundation
dnl @ gives unlimited permission to copy and/or distribute it
dnl @ with or without modifications, as long as this notice
dnl @ is preserved.
define(`max_int', eval(`0x7fffffff'))
dnl @result{}
define(`min_int', incr(max_int))
dnl @result{}
eval(min_int` < 0')
dnl @result{}1
eval(max_int` > 0')
dnl @result{}1
ifelse(eval(min_int` / -1'), min_int, `overflow occurred')
dnl @result{}overflow occurred
min_int
dnl @result{}-2147483648
eval(`0x80000000 % -1')
dnl @result{}0
eval(`-4 >> 1')
dnl @result{}-2
eval(`-4 >> 33')
dnl @result{}-2
Just to let you know, it's a fresh OS installation and this is one of the first pieces of software I am installing.
|
You are having problems installing software but you do not show us what you are doing. You are just showing the output of some command and we are left guessing.
If you are new to FreeBSD but have previously been used to working on a GNU system (Linux), there are some subtle but important differences.
A typical stumbling block when compiling your own programs is make. BSD has a nice make but it is not the same as GNU make. If you want to use GNU make then you will install it. But when using it make is still BSD make but now you have a gmake as well. This can be confusing.
It is the same thing with m4 as FreeBSD has it in the base system.
$ which m4
/usr/bin/m4
But writing that you are installing gnu m4-1.4.18 is not helpful as many roads lead to Rome. Are you installing the package/port or from source?
FreeBSD Package
The most simple way of installing software on FreeBSD is to install the package. Packages are precompiled binary distributions of the ports.
pkg install m4
You are probably not doing this. But this is the easy route.
FreeBSD Port
A FreeBSD port is a collection of patches and whatnot needed to get an application running on FreeBSD. If you have the ports tree installed you would change the directory to devel/m4 and make (compile) the application.
The ports tree targets BSD make, hence it is important to use BSD make and not GNU make. The fun part is that m4 depends on autoconf, which in turn depends on GNU make.
But for our purpose we will use BSD make:
$ make
$ sudo make install
An advantage of using ports is that you can change the compile time settings using make config. But in most cases with GNU autotools and friends the defaults will usually suffice and the binary package is all you need.
Source install
My guess is that you are trying to install from source. In that case it is important to know the differences between the GNU and BSD tools, which are often named the same. The GNU build instructions tend to expect you to be using GNU tools. And if you have a vanilla FreeBSD install, then you already have make and m4, but the BSD variants.
So when the GNU instructions say make you should make sure you have GNU make installed and are typing gmake at the command line.
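A quick way to check which make a command will actually run (a sketch; it relies only on the fact that GNU make answers --version while BSD make does not):

```shell
# Distinguish GNU make from BSD make (or no make at all).
if make --version 2>/dev/null | grep -q 'GNU Make'; then
    echo "make is GNU make"
else
    echo "make is not GNU make; type gmake for GNU-style builds"
fi
```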
Unless you want to learn these intricacies, I recommend that you stick to packages. If you want to continue down this route, you need to be more verbose in your questions and show us what you are doing. Without this information we are left guessing.
Update
From reading the comments it seems that the root cause is trying to install Apache APR. This is available in FreeBSD ports as well. At the time of writing the latest port version of APR is 1.6.3, which is in sync with what Apache considers the latest stable version.
On new vanilla FreeBSD systems it will then be as simple as typing:
pkg install apr1
If the binary package servers have not caught up yet you can choose to build it yourself. In that case you may change the defaults as well. You do that using the ports tree. Use the portsnap tool to make sure the tree is up-to-date.
If you have no ports tree then:
# portsnap fetch
# portsnap extract
If you just need to update:
# portsnap fetch update
Then:
# cd /usr/ports/devel/apr1
# make config
# make
# make install
| Freebsd 11.1 issues with gnu m4 eval test fail |
1,370,473,401,000 |
How do I compile a pfSense port for ARM?
Do I need to be running FreeBSD to do it? How do I then transfer it to a USB drive, SD Card, or ISO so I can boot it?
I tried the usual compiling in Ubuntu after I cloned the repo that I found listed in this question, but I get an error....
First off, there's no ./configure file, and when I run make anyway (there is indeed a Makefile) I get the following error:
Makefile:69: *** missing separator. Stop.
Is the repository I cloned more like a subproject to something else?
|
I think you can use FreeBSD's package (see here). FreeBSD has had packages for ARM (check http://pkg.freebsd.org/) since 11.0.
But if you really want to build your packages from ports, please read Using the Ports Collection. You can build ARM packages on your (non-ARM) machine using qemu (see a short description here).
| How to compile a pfSense port for ARM? |
1,370,473,401,000 |
I am trying to install Fifth Browser (website) (github link) on Xubuntu 16.04.2 LTS.
I was able to get all of the dependencies via the official distro repositories, using Synaptic for installation. One of them as listed on fifth's homepage is called liburlmatch (github link). It appears to be a simple library that lets you block URLs while using wildcards.
I have installed urlmatch via:
git clone https://github.com/clbr/urlmatch.git and then
sudo checkinstall in a separate folder. This seemed to work flawlessly.
When I do ./configure in the fifth folder the last few lines look like this:
checking for fltk-config13... no
checking for fltk-config... fltk-config
checking for url_init in -lurlmatch... no
configure: error: liburlmatch not found
You can find the part of the configure file pertaining to urlmatch in the following pastebin for your convenience: codeblock from configure for liburlmatch.
What am I doing wrong?
Why doesn't the configure script recognise the urlmatcher library?
Please consider in your answer that this is one of my first attempts to compile a programme like this, thanks.
|
It looks like the issue is actually to do with how the configure script for fifth-5.0 constructs and runs the conftest for the urlmatch library.
First, the error
checking for url_init in -lurlmatch... no
configure: error: liburlmatch not found
turns out to be somewhat misleading: if we look at the config.log we see that the conftest is actually failing to build because of an undefined reference to the uncompress function:
configure:5511: checking for url_init in -lurlmatch
configure:5546: g++ -o conftest -g -O2 -pthread -isystem /usr/include/cairo -isystem /usr/include/glib-2.0 -isystem /usr/lib/x86_64-linux-gnu/glib-2.0/include -isystem /usr/include/pixman-1 -isystem /usr/include/freetype2 -isystem /usr/include/libpng12 -isystem /usr/include/freetype2 -isystem /usr/include/cairo -isystem /usr/include/glib-2.0 -isystem /usr/lib/x86_64-linux-gnu/glib-2.0/include -isystem /usr/include/pixman-1 -isystem /usr/include/freetype2 -isystem /usr/include/libpng12 -fvisibility-inlines-hidden -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT -lz conftest.cpp -lurlmatch -Wl,-Bsymbolic-functions -lfltk_images -lfltk -lX11 >&5
//usr/local/lib/liburlmatch.a(opti_init.o): In function `initbin':
opti_init.c:(.text+0xd6): undefined reference to `uncompress'
collect2: error: ld returned 1 exit status
configure:5552: $? = 1
That's because uncompress is in libz - which is being linked before liburlmatch:
. . . -lz conftest.cpp -lurlmatch -Wl,-Bsymbolic-functions -lfltk_images -lfltk -lX11 >&5
failing to respect the necessary link order [1] for the two libraries. We can trace that back further to the configure.ac file from which the configure script would have been generated:
# Checks for libraries.
OLD_LDFLAGS=[$LDFLAGS]
LDFLAGS=["$LDFLAGS -lz"]
AC_CHECK_LIB([urlmatch], [url_init], [], AC_MSG_ERROR([liburlmatch not found]))
LDFLAGS=[$OLD_LDFLAGS]
i.e. rather than being added to the list of LIBS, -lz is added to the LDFLAGS (which is more typically used to specify additional library paths ahead of the LIBS).
A quick'n'dirty workaround is to call ./configure with an explicit LIBS argument:
./configure "LIBS=-lz"
This causes an extra -lz to be placed on the g++ command line after the urlmatch library (at the head of the other LIBS):
. . . -lz conftest.cpp -lurlmatch -lz -Wl,-Bsymbolic-functions -lfltk_images -lfltk -lX11 >&5
A more permanent solution might be to modify the configure.ac file to add -lz to LIBS instead of LDFLAGS, and then re-generate configure using autoconf (or autoreconf if necessary).
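One possible shape for that configure.ac change (a sketch, untested): add -lz to LIBS instead of LDFLAGS and skip the save/restore, so that when AC_CHECK_LIB prepends -lurlmatch on success, the final link line keeps the required -lurlmatch ... -lz order:

```m4
# Checks for libraries.
LIBS="-lz $LIBS"
AC_CHECK_LIB([urlmatch], [url_init], [], AC_MSG_ERROR([liburlmatch not found]))
```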
Refs.:
Why does the order of '-l' option in gcc matter?
| Fifth Browser - how do I get the configure script to recognise a dependency? |
1,370,473,401,000 |
I use linux-socfpga from Altera's github repository (the master branch which is recently updated) with my DE2-115 FPGA. The output from jtag configuration is:
$ jtagconfig
1) USB-Blaster [2-2]
020F70DD EP3C120/EP4CE115
I wonder why it cannot find a USB memory stick that I attached: when I run lsusb nothing appears. Maybe it is the FPGA design that is wrong?
# Linux version 4.11.0-rc7-00113-g94836ec (developer@1604) (gcc version 6.2.0 (Sourcery CodeBench Lite 2016.11-32) ) #24 Sun Apr 23 05:44:19 CEST 2017
bootconsole [early0] enabled
early_console initialized at 0xe8001400
ERROR: Nios II DIV different for kernel and DTS
Warning: icache size configuration mismatch (0x8000 vs 0x1000) of CONFIG_NIOS2_ICACHE_SIZE vs device tree icache-size
Warning: dcache size configuration mismatch (0x8000 vs 0x800) of CONFIG_NIOS2_DCACHE_SIZE vs device tree dcache-size
On node 0 totalpages: 32768
free_area_init_node: node 0, pgdat c0e8a31c, node_mem_map c0eaab80
Normal zone: 256 pages used for memmap
Normal zone: 0 pages reserved
Normal zone: 32768 pages, LIFO batch:7
pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
pcpu-alloc: [0] 0
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 32512
Kernel command line: console=ttyAL0,115200
PID hash table entries: 512 (order: -1, 2048 bytes)
Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
Sorting __ex_table...
Memory: 114896K/131072K available (4366K kernel code, 114K rwdata, 1272K rodata, 9132K init, 113K bss, 16176K reserved, 0K cma-reserved)
NR_IRQS:64 nr_irqs:64 0
clocksource: nios2-clksrc: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 38225208935 ns
Calibrating delay loop (skipped), value calculated using timer frequency.. 100.00 BogoMIPS (lpj=50000)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 1024 (order: 0, 4096 bytes)
Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes)
devtmpfs: initialized
cpu cpu0: Error -2 creating of_node link
clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
futex hash table entries: 256 (order: -1, 3072 bytes)
NET: Registered protocol family 16
random: fast init done
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
FPGA manager framework
clocksource: Switched to clocksource nios2-clksrc
NET: Registered protocol family 2
TCP established hash table entries: 1024 (order: 0, 4096 bytes)
TCP bind hash table entries: 1024 (order: 0, 4096 bytes)
TCP: Hash tables configured (established 1024 bind 1024)
UDP hash table entries: 256 (order: 0, 4096 bytes)
UDP-Lite hash table entries: 256 (order: 0, 4096 bytes)
NET: Registered protocol family 1
RPC: Registered named UNIX socket transport module.
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
random: crng init done
workingset: timestamp_bits=30 max_order=15 bucket_order=0
jffs2: version 2.2. (NAND) © 2001-2006 Red Hat, Inc.
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered
io scheduler deadline registered
io scheduler cfq registered (default)
io scheduler mq-deadline registered
8001400.serial: ttyAL0 at MMIO 0x8001400 (irq = 3, base_baud = 3125000) is a Altera UART
console [ttyAL0] enabled
console [ttyAL0] enabled
bootconsole [early0] disabled
bootconsole [early0] disabled
8001440.serial: ttyJ0 at MMIO 0x8001440 (irq = 2, base_baud = 0) is a Altera JTAG UART
loop: module loaded
libphy: Fixed MDIO Bus: probed
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
ehci-platform: EHCI generic platform driver
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
ohci-platform: OHCI generic platform driver
fotg210_hcd: FOTG210 Host Controller (EHCI) Driver
Warning! fotg210_hcd should always be loaded before uhci_hcd and ohci_hcd, not after
usbcore: registered new interface driver usbtmc
usbcore: registered new interface driver mdc800
mdc800: v0.7.5 (30/10/2000):USB Driver for Mustek MDC800 Digital Camera
usbcore: registered new interface driver usbserial
usbcore: registered new interface driver adutux
usbcore: registered new interface driver appledisplay
usbcore: registered new interface driver cypress_cy7c63
usbcore: registered new interface driver cytherm
usbcore: registered new interface driver emi26 - firmware loader
usbcore: registered new interface driver emi62 - firmware loader
ftdi_elan: driver ftdi-elan
usbcore: registered new interface driver ftdi-elan
usbcore: registered new interface driver idmouse
usbcore: registered new interface driver iowarrior
usbcore: registered new interface driver isight_firmware
usbcore: registered new interface driver usblcd
usbcore: registered new interface driver ldusb
usbcore: registered new interface driver legousbtower
usbcore: registered new interface driver rio500
usbcore: registered new interface driver usbtest
usbcore: registered new interface driver usb_ehset_test
usbcore: registered new interface driver trancevibrator
usbcore: registered new interface driver usbsevseg
usbcore: registered new interface driver yurex
usbcore: registered new interface driver sisusb
usbcore: registered new interface driver lvs
usbip_core: usbip_core_init:766: USB/IP Core v1.0.0
usbcore: registered new device driver usbip-host
usbip_host: usbip_host_init:302: USB/IP Host Driver v1.0.0
sdhci: Secure Digital Host Controller Interface driver
sdhci: Copyright(c) Pierre Ossman
usbcore: registered new interface driver usbhid
usbhid: USB HID core driver
NET: Registered protocol family 17
Freeing unused kernel memory: 9132K
This architecture does not have kernel memory protection.
INIT: version 2.88 booting
INIT: Entering runlevel: 3
Starting logging: OK
Initializing random number generator... done.
Starting system message bus: done
Starting network: OK
The output from dmesg is essentially identical to the boot log above.
The FPGA design.
In menuconfig I have the following settings.
|
Partial answer:
The User Manual shows on page 9 that two USB ports are connected to the Philips ISP 1362 Host/Device/OTG chip, and one port is connected to an FTDI FT245.
The boot log shows you are loading the generic (Intel-compatible) EHCI and OHCI host drivers, and a fotg210_hcd (in the wrong order). So this cannot work, you don't have any driver to access your USB ports.
The USB device tree bindings have no obvious files for these chips, however, and googling a bit says the ISP 1362 may be deprecated.
So (1) find out what actual USB chips are on your board by looking at the board, and (2) find compatible drivers for them, if necessary by looking through or grepping the source code of the Linux tree you have installed somewhere.
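For step (2), grepping the kernel tree for the chip name is usually enough to locate a candidate driver. The sketch below mocks a tiny tree so the commands run anywhere; on the real system point KSRC at your linux-socfpga checkout instead:

```shell
# Search a (mock) kernel source tree for files mentioning the ISP1362.
KSRC=$(mktemp -d)
mkdir -p "$KSRC/drivers/usb/host"
echo '/* ISP1362 host controller driver (mock file) */' \
    > "$KSRC/drivers/usb/host/isp1362-hcd.c"
grep -ril 'isp1362' "$KSRC/drivers/usb"   # prints the matching file path
rm -rf "$KSRC"
```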
| How to enable USB with linux-socfpga? |
1,370,473,401,000 |
On a rolling release distribution like openSUSE Tumbleweed, if one wanted to build some software from source, how often would these programs need to be rebuilt considering that dependencies installed from the distribution repositories might be upgrading frequently.
For example, if one wanted to build Apache httpd and Exim from source, both of which could depend upon PCRE and GnuTLS, among other things, would Apache httpd and Exim need to be rebuilt each time PCRE or GnuTLS or another dependency was upgraded?
Is there a certain type of dependency that would require rebuilding the dependent software from source each time the dependency was upgraded?
Or would rebuilding dependent software only be necessary if the structure of a dependency significantly changed?
There are probably many individual unique cases, but are there any general guidelines?
|
As far as I know, the only "painful" scenario in terms of recompiling things is a kernel update. Then you need to compile the kernel itself together with all kernel modules.
As for the other, relatively high-level packages, you probably won't need to recompile them most of the time when a dependency is updated. There are only a few occasions when dependencies for a particular package change so drastically that you need to recompile the dependent package.
Most of the time, when these 'high level' packages are updated, the only indicator that you need to recompile the dependent packages is that they suddenly stop working.
Also, reading change logs for packages you update is a good habit, as they usually warn you about big changes in their architecture, and thus you have the option to stick with your current version in order to not recompile everything.
Actually, Slackware has slackpkgs that automate the process of recompiling some common packages and their dependencies. Also, nobody stops you from using a package manager of some kind (apt etc.) to make your life easier.
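Instead of waiting for a package to "suddenly stop working", you can check proactively which shared libraries a binary was linked against; as long as the soname reported there (e.g. libc.so.6) is unchanged after an upgrade, no rebuild is needed. Illustrated here on /bin/sh; substitute the binaries you care about:

```shell
# List the shared-library sonames a dynamically linked binary depends on.
ldd /bin/sh | awk 'NF { print $1 }'
```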
Here are a couple of refs from Slackware and Gentoo documentation:
Slackware docs
Gentoo docs
| Building software from source on a rolling release distribution [closed] |
1,370,473,401,000 |
I'm trying to build ffmpeg with NVENC support so I can then build obs-studio with NVENC support, using this as a guide. I've sorted out every dependency after a bit of headache, and am now to the point where I should be able to compile ffmpeg with the edits to its rules file just fine. However, my trusty terminal spits the following out at me:
http://pastebin.com/888pa3kW
Here is my ffmpeg/debian/rules file:
#!/usr/bin/make -f
export V=1
# sets DEBIAN_VERSION variable
include /usr/share/dpkg/pkg-info.mk
# Get the Debian version revision:
DEB_REVISION := $(word 2, $(subst -, ,$(DEB_VERSION)))
# sets DEB_HOST_* variables
include /usr/share/dpkg/architecture.mk
# Ubuntu ld adds -Bsymbolic-functions by default, but that prevents FFmpeg from building.
export DEB_LDFLAGS_MAINT_STRIP=-Wl,-Bsymbolic-functions
# Package name for the extra flavor.
EXTRA_PKGS := $(shell sed -nr 's/^Package:[[:space:]]*(.*extra[0-9]+)[[:space:]]*$$/\1/p' debian/control)
FLAVORS = standard extra static
# Enable as many features as possible, as long as the result is still GPLv2+ (a GPLv3+ variant is built as libavcodec-extra/libavfilter-extra flavor).
# The following flags (and build-dependencies) are not added, because they would require a libavformat-extra flavor:
# --enable-libsmbclient (libsmbclient-dev [!hurd-i386 !m68k !sparc64])
# The following flags are not added, because the necessary libraries are not in Debian:
# --enable-decklink
# --enable-libcelt (see #676592: removed from Debian as abandoned upstream, replaced by opus)
# --enable-libdcadec
# --enable-libilbc (see #675959 for the RFP bug)
# --enable-libkvazaar
# --enable-libmfx
# --enable-libnut
# --enable-libopenh264
# --enable-libopenmpt
# --enable-libschroedinger (see #845037: removal due to security issues)
# --enable-libutvideo
# --enable-libvidstab (see #709193 for the RFP bug)
# --enable-libxavs
# --enable-libzimg
# The following flags are not added for various reasons:
# * --enable-librtmp: ffmpeg has better built-in RTMP support with listen mode.
# * --enable-libv4l2 [!hurd-any]: This is only needed for very old devices and may cause problems for others.
# Should anyone need it, using LD_PRELOAD pointing on libv4l2 has the same effect.
# * --enable-opencl [!hurd-any]: This is considered an experimental API.
CONFIG := --prefix=/usr \
--extra-version="$(DEB_REVISION)" \
--toolchain=hardened \
--libdir=/usr/lib/$(DEB_HOST_MULTIARCH) \
--incdir=/usr/include/$(DEB_HOST_MULTIARCH) \
--enable-gpl \
--disable-stripping \
--enable-avresample \
--enable-avisynth \
--enable-gnutls \
--enable-ladspa \
--enable-libass \
--enable-libbluray \
--enable-libbs2b \
--enable-libcaca \
--enable-libcdio \
--enable-libebur128 \
--enable-libflite \
--enable-libfontconfig \
--enable-libfreetype \
--enable-libfribidi \
--enable-libgme \
--enable-libgsm \
--enable-libmodplug \
--enable-libmp3lame \
--enable-libopenjpeg \
--enable-libopus \
--enable-libpulse \
--enable-librubberband \
--enable-libshine \
--enable-libsnappy \
--enable-libsoxr \
--enable-libspeex \
--enable-libssh \
--enable-libtheora \
--enable-libtwolame \
--enable-libvorbis \
--enable-libvpx \
--enable-libwavpack \
--enable-libwebp \
--enable-libx265 \
--enable-libxvid \
--enable-libzmq \
--enable-libzvbi \
--enable-omx \
--enable-openal \
--enable-opengl \
--enable-sdl2 \
--enable-nonfree \
--enable-nvenc
# The standard configuration only uses the shared CONFIG.
CONFIG_standard = --enable-shared
# With these enabled, resulting binaries are effectively licensed as GPLv3+.
CONFIG_extra = --enable-shared \
--enable-version3 \
--disable-doc \
--disable-programs \
--enable-libopencore_amrnb \
--enable-libopencore_amrwb \
--enable-libtesseract \
--enable-libvo_amrwbenc
# The static libraries should not be built with PIC.
CONFIG_static = --disable-pic \
--disable-doc \
--disable-programs
# Disable optimizations if requested.
ifneq (,$(filter $(DEB_BUILD_OPTIONS),noopt))
CONFIG += --disable-optimizations
endif
# Respect CC/CXX from the environment, if they differ from the default.
# Don't set them if they equal the default, because that disables autodetection needed for cross-building.
ifneq ($(CC),cc)
CONFIG += --cc=$(CC)
endif
ifneq ($(CXX),g++)
CONFIG += --cxx=$(CXX)
endif
# Some libraries are built only on linux.
ifeq ($(DEB_HOST_ARCH_OS),linux)
CONFIG += --enable-libdc1394 \
--enable-libiec61883
endif
# Some build-dependencies are not installable on some architectures.
ifeq (,$(filter $(DEB_HOST_ARCH),powerpcspe))
CONFIG_extra += --enable-netcdf
endif
# ffmpeg is involed in build-dependency cycles with opencv, x264 and chromaprint, so disable them in stage one.
# Also disable frei0r, which build-depends on opencv.
ifneq ($(filter stage1,$(DEB_BUILD_PROFILES)),)
CONFIG += --disable-frei0r \
--disable-chromaprint \
--disable-libopencv \
--disable-libx264
else
CONFIG += --enable-libopencv \
--enable-frei0r
ifeq (,$(filter $(DEB_HOST_ARCH),powerpcspe))
CONFIG += --enable-libx264
endif
ifeq (,$(filter $(DEB_HOST_ARCH),sh4))
CONFIG += --enable-chromaprint
endif
endif
# Disable altivec optimizations on powerpc, because they are not always available on this architecture.
ifeq ($(DEB_HOST_ARCH),powerpc)
CONFIG += --disable-altivec
# Build an altivec flavor of the libraries on powerpc.
# This works around the problem that runtime cpu detection on powerpc currently does not work,
# because, if altivec is enabled, all files are build with '-maltivec' so that the compiler inserts altivec instructions, wherever it likes.
CONFIG_altivec = --enable-shared \
--enable-altivec \
--disable-doc \
--disable-programs
CONFIG_altivec-extra = $(CONFIG_altivec) $(CONFIG_extra)
FLAVORS += altivec altivec-extra
endif
# Disable assembly optimizations on x32, because they don't work (yet).
ifneq (,$(filter $(DEB_HOST_ARCH),x32))
CONFIG += --disable-asm
endif
# Disable optimizations on mips(el) and some on mips64(el), because they are not always available on these architectures.
ifneq (,$(filter $(DEB_HOST_ARCH),mips mipsel mips64 mips64el))
CONFIG += --disable-mipsdsp \
--disable-mipsdspr2 \
--disable-loongson3 \
--disable-mips32r6 \
--disable-mips64r6
endif
ifneq (,$(filter $(DEB_HOST_ARCH),mips mipsel))
CONFIG += --disable-mipsfpu
endif
# Set cross-build prefix for compiler, pkg-config...
# Cross-building also requires to manually set architecture/OS.
ifneq ($(DEB_BUILD_GNU_TYPE),$(DEB_HOST_GNU_TYPE))
CONFIG += --cross-prefix=$(DEB_HOST_GNU_TYPE)- \
--arch=$(DEB_HOST_ARCH) \
--target-os=$(DEB_HOST_ARCH_OS)
endif
# Use the default debhelper scripts, where possible.
%:
dh $@
# Add configuration options:
override_dh_auto_configure:
$(foreach flavor,$(FLAVORS),mkdir -p debian/$(flavor);)
$(foreach flavor,$(FLAVORS),set -e; echo " *** $(flavor) ***"; cd debian/$(flavor); ../../configure $(CONFIG) $(CONFIG_$(flavor)) || (cat config.log && exit 1); cd ../.. ;)
touch override_dh_auto_configure
# Remove the subdirectories generated for the flavors.
override_dh_auto_clean:
$(foreach flavor,$(FLAVORS),[ ! -d debian/$(flavor) ] || rm -r debian/$(flavor);)
# Create doxygen documentation:
override_dh_auto_build-indep:
dh_auto_build -i --sourcedirectory=debian/standard -- apidoc
# Create the minified CSS files.
lessc debian/missing-sources/ffmpeg-web/src/less/style.less | cleancss > debian/standard/doc/style.min.css
rm override_dh_auto_configure
override_dh_auto_build-arch:
# Copy built object files to avoid building them again for the extra flavor.
# Build qt-faststart here, to make it possible to build with 'nocheck'.
set -e && for flavor in $(FLAVORS); do \
echo " *** $$flavor ***"; \
if echo "$$flavor" | grep -q "extra"; then \
subdir=`[ "$$flavor" = "extra" ] && echo "debian/standard/" || echo "debian/altivec/"`; \
for dir in `cd ./$$subdir; find libavcodec libavdevice libavfilter libavformat libavresample libavutil libpostproc libswscale libswresample -type d`; do \
mkdir -p debian/"$$flavor"/"$$dir"; \
echo "$$subdir$$dir"/*.o | grep -q '*' || cp "$$subdir$$dir"/*.o debian/"$$flavor"/"$$dir"; \
done; \
rm debian/"$$flavor"/libavcodec/allcodecs.o; \
rm debian/"$$flavor"/libavfilter/allfilters.o; \
fi; \
if [ "$$flavor" = "standard" ]; then \
$(MAKE) -C debian/standard tools/qt-faststart; \
fi; \
dh_auto_build -a --sourcedirectory=debian/"$$flavor" || (cat debian/"$$flavor"/config.log && exit 1); \
done
# Set the library path for the dynamic linker, because the tests otherwise don't find the libraries.
override_dh_auto_test-arch:
export LD_LIBRARY_PATH="libavcodec:libavdevice:libavfilter:libavformat:libavresample:libavutil:libpostproc:libswresample:libswscale"; \
dh_auto_test -a --sourcedirectory=debian/standard -- -k
# No tests for indep build.
override_dh_auto_test-indep:
override_dh_auto_install-arch:
dh_auto_install -a --sourcedirectory=debian/standard
ifeq ($(DEB_HOST_ARCH),powerpc)
install -d debian/tmp/usr/lib/$(DEB_HOST_MULTIARCH)/altivec
install -m 644 debian/altivec/*/*.so.* debian/tmp/usr/lib/$(DEB_HOST_MULTIARCH)/altivec
endif
dh_auto_install -a --sourcedirectory=debian/extra --destdir=debian/tmp/extra
ifeq ($(DEB_HOST_ARCH),powerpc)
install -d debian/tmp/extra/usr/lib/$(DEB_HOST_MULTIARCH)/altivec
install -m 644 debian/altivec-extra/*/*.so.* debian/tmp/extra/usr/lib/$(DEB_HOST_MULTIARCH)/altivec
endif
# Use the static libraries from the --disable-pic build
install -m 644 debian/static/*/lib*.a debian/tmp/usr/lib/$(DEB_HOST_MULTIARCH)
override_dh_auto_install-indep:
dh_auto_install -i --sourcedirectory=debian/standard
override_dh_install:
dh_install $(addprefix -p,$(EXTRA_PKGS)) --sourcedir=debian/tmp/extra
dh_install --remaining-packages
override_dh_makeshlibs:
set -e && for pkg in $(shell dh_listpackages -a) ; do \
case $$pkg in \
ffmpeg|*-dev) \
continue \
;; \
*avcodec*) \
soversion=$$(echo $$pkg | sed -nr 's/^[^0-9]*([0-9]+)$$/\1/p'); \
dh_makeshlibs -p $$pkg -V"libavcodec$$soversion (>= ${DEB_VERSION_EPOCH_UPSTREAM}) | libavcodec-extra$$soversion (>= ${DEB_VERSION_EPOCH_UPSTREAM})" \
;; \
*avfilter*) \
soversion=$$(echo $$pkg | sed -nr 's/^[^0-9]*([0-9]+)$$/\1/p'); \
dh_makeshlibs -p $$pkg -V"libavfilter$$soversion (>= ${DEB_VERSION_EPOCH_UPSTREAM}) | libavfilter-extra$$soversion (>= ${DEB_VERSION_EPOCH_UPSTREAM})" \
;; \
*) \
dh_makeshlibs -p $$pkg -V \
;; \
esac \
done
# Don't compress the example source code files.
override_dh_compress:
dh_compress -Xexamples
I am running Linux Lite 3.2 64bit, which is based on Ubuntu 16.04. I'm essentially completely new to this, so I'll need some hand holding.
|
I could build it without any issue in an LXC Ubuntu 16.04 container.
Edit /etc/apt/sources.list (e.g. with vi) to add the source & backports repositories:
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb-src http://security.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
deb-src http://security.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
Install dependencies and build tools, including debhelper & dh-autoreconf from the backports repo; use apt-cache policy ... to check their current versions.
apt update
apt build-dep ffmpeg
apt install git openssl ca-certificates devscripts dh-autoreconf=12~ubuntu16.04.1 debhelper=10.2.2ubuntu1~ubuntu16.04.1 libchromaprint-dev libebur128-dev libleptonica-dev libnetcdf-dev libomxil-bellagio-dev libopenjp2-7-dev librubberband-dev libsdl2-dev libtesseract-dev nasm
Download source
git clone https://anonscm.debian.org/git/pkg-multimedia/ffmpeg.git
Modify the rules file to add --enable-nonfree and --enable-nvenc
cd ffmpeg/
echo 'libva 1 libva1' > debian/shlibs.local
vi debian/rules:
CONFIG :=...
--enable-sdl2 \
--enable-nonfree \
--enable-nvenc
Build it
debuild -us -uc -b
Here is the list of the resulting Debian packages.
Reply to the OP, regarding the new error messages:
lintian is a QC tool for Debian packages. It just verifies the resulting packages; it has no effect on the build process.
Now running lintian...
E: ffmpeg changes: bad-distribution-in-changes-file unstable
W: libavdevice57: virtual-package-depends-without-real-package-depends depends: libgl1
N: 9 tags overridden (8 warnings, 1 info)
Finished running lintian.
However, if you want to correct that error message: it is raised because we copied source prepared for the unstable Debian release, whereas in our case it should be the xenial Ubuntu release. Run inside the ffmpeg/ folder:
dch
to add a new entry to debian/changelog and set the distribution to xenial, for example:
ffmpeg (7:3.2.2-1ubuntu1) xenial; urgency=medium
* backport to xenial
-- root <root@ci2> Wed, 28 Dec 2016 11:24:08 +0000
libavfilter-extra* is an alternative to libavfilter*, and they can't be installed together on the same system. You have to choose depending on your needs (if you don't know, install the extra variant):
dpkg: regarding libavfilter-extra6_3.2.2-1_amd64.deb containing libavfilter-extra6:amd64:
libavfilter-extra6 conflicts with libavfilter6
Other missing dependencies that are available in the repo, like:
ffmpeg-doc depends on libjs-bootstrap; however:
Package libjs-bootstrap is not installed.
can be installed using:
sudo apt -f install
| Help compiling ffmpeg with NVENC support under Linux |
1,370,473,401,000 |
How do I compile transmission-gtk torrent client from source on Linux Mint 18 or generally Ubuntu 16.04 based systems?
Supposing I want to:
Remove the original packaged version.
Replace it, while retaining the original settings, desktop item, etc.
|
To be clear, this compilation procedure is written for the current version 2.92 and for Ubuntu 16.04 based systems such as Linux Mint 18. The steps may differ slightly on later versions of the system and/or Transmission.
Go to the official page over a secure protocol; currently the official page does not redirect to HTTPS on its own, so you may use the link below to get to the web page:
https://transmissionbt.com/download/
Navigate to Source Code section and download the current one; it uses GitHub repository; if you are in CLI, you may use this direct method:
wget --continue https://github.com/transmission/transmission-releases/raw/master/transmission-2.92.tar.xz
Check the SHA-256 hash matches; it is written on the official download page; for version 2.92 the following applies:
sha256sum transmission-2.92.tar.xz
3a8d045c306ad9acb7bf81126939b9594553a388482efa0ec1bfb67b22acd35f
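This check can also be automated so a download script aborts before unpacking a corrupted or tampered archive; sha256sum -c exits non-zero on a mismatch. A sketch of the idea, demonstrated on a throwaway file since the real tarball may not be present:

```shell
cd "$(mktemp -d)"
printf 'demo payload\n' > sample.tar.xz          # stand-in for the tarball
expected=$(sha256sum sample.tar.xz | cut -d' ' -f1)

# Two spaces between hash and filename is the checksum-file format:
echo "$expected  sample.tar.xz" | sha256sum -c -
```

For the real download, substitute transmission-2.92.tar.xz and the hash published on the download page for `expected`.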
Extract the archive:
tar -xJvf transmission-2.92.tar.xz
Go to the extraction directory:
cd transmission-2.92/
Now we need to install the build dependencies for transmission-gtk:
sudo apt-get build-dep transmission-gtk
Let's make sure all of the prerequisites are installed, according to this GitHub page:
sudo apt-get install build-essential automake autoconf libtool pkg-config intltool libcurl4-openssl-dev libglib2.0-dev libevent-dev libminiupnpc-dev libappindicator-dev
Note that I had to remove libminiupnpc5, as libminiupnpc-dev replaces it.
Run the configuration script:
./configure
The following optional arguments may be passed to the configuration script (copy-pasted from the configuration script):
Optional Features:
--disable-option-checking ignore unrecognized --enable/--with options
--disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no)
--enable-FEATURE[=ARG] include FEATURE [ARG=yes]
--enable-silent-rules less verbose build output (undo: "make V=1")
--disable-silent-rules verbose build output (undo: "make V=0")
--enable-shared[=PKGS] build shared libraries [default=yes]
--enable-static[=PKGS] build static libraries [default=yes]
--enable-fast-install[=PKGS]
optimize for fast installation [default=yes]
--enable-dependency-tracking
do not reject slow dependency extractors
--disable-dependency-tracking
speeds up one-time build
--disable-libtool-lock avoid locking (might break parallel builds)
--disable-largefile omit support for large files
--enable-external-dht Use system external-dht
--enable-external-b64 Use system libb64
--enable-utp build µTP support
--enable-external-natpmp
Use system external-natpmp
--enable-nls enable native language support
--disable-nls do not use Native Language Support
--enable-lightweight optimize libtransmission for low-resource systems:
smaller cache size, prefer unencrypted peer
connections, etc.
--enable-cli build command-line client
--enable-mac build Mac client
--enable-daemon build daemon
Optional Packages:
--with-PACKAGE[=ARG] use PACKAGE [ARG=yes]
--without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no)
--with-pic[=PKGS] try to use only PIC/non-PIC objects [default=use
both]
--with-aix-soname=aix|svr4|both
shared library versioning (aka "SONAME") variant to
provide on AIX, [default=aix].
--with-gnu-ld assume the C compiler uses GNU ld [default=no]
--with-sysroot[=DIR] Search for dependent libraries within DIR (or the
compiler's sysroot if not specified).
--with-crypto=PKG Use specified crypto library: auto (default),
openssl, cyassl, polarssl
--with-inotify Enable inotify support (default=auto)
--with-kqueue Enable kqueue support (default=auto)
--with-systemd-daemon Add support for systemd startup notification
(default is autodetected)
--with-gtk with Gtk
Check if the output of the configuration script matches the following (if that is what you want):
Configuration:
Source code location: .
Compiler: g++
Build libtransmission: yes
* optimized for low-resource systems: no
* µTP enabled: yes
* crypto library: openssl
Build Command-Line client: no
Build GTK+ client: yes
* libappindicator for an Ubuntu-style tray: yes
Build Daemon: yes
Build Mac client: no
If there is nothing wrong, you may proceed, otherwise you would need to troubleshoot the problem.
Compile the program, this may take a while:
make
If the compilation is successful, you may proceed, otherwise you would need to troubleshoot the problem.
Before you install it, you will probably want to remove the rather old stable version you may have installed from the repository, but there is a catch: you will probably want to keep your settings, and if so, locate the settings file:
locate transmission/settings.json
Let's suppose it is in your personal ~/.config/ directory. Make a backup somewhere, e.g. into your home directory:
cp ~/.config/transmission/settings.json ~/
Now remove the original packaged version:
sudo apt-get purge transmission-gtk transmission-common
Install your compiled transmission-gtk client:
sudo make install
With the transmission-gtk client not running, you may move your settings file into place, or better, first examine the differences and then decide whether just overwriting it would be OK:
mv ~/settings.json ~/.config/transmission/settings.json
Finally, supposing you want a desktop item, copy it and mark it as executable:
cp ~/Downloads/transmission-2.92/gtk/transmission-gtk.desktop ~/Desktop/
chmod a+x ~/Desktop/transmission-gtk.desktop
Similarly, you may create a menu item; you just need to add sudo and need not bother with the execution bit:
sudo cp ~/Downloads/transmission-2.92/gtk/transmission-gtk.desktop /usr/share/applications/
| Compiling Transmission-GTK torrent client on Linux Mint 18 |
1,370,473,401,000 |
I have found a piece of C code which would be very useful for what I want to do, under this link: All possible combinations of characters and numbers
#include <stdio.h>
//global variables and magic numbers are the basis of good programming
const char* charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
char buffer[50];
void permute(int level) {
const char* charset_ptr = charset;
if(level == -1){
puts(buffer);
}else {
while(buffer[level]=*charset_ptr++) {
permute(level - 1);
}
}
}
int main(int argc, char **argv)
{
int length;
sscanf(argv[1], "%d", &length);
//Must provide length (integer < sizeof(buffer)==50) as first arg;
//It will crash and burn otherwise
buffer[length]='\0';
permute(length - 1);
return 0;
}
However, when I try to compile it as suggested, I get the following errors. Can anyone please help me to correct it?
$ make CFLAGS=-O3 permute && time ./permute 5 >/dev/null
make: Nothing to be done for 'permute'.
./permute: line 3: //global: No such file or directory
./permute: line 4: const: command not found
./permute: line 5: char: command not found
./permute: line 7: syntax error near unexpected token `('
./permute: line 7: `void permute(int level) {'
Also, when I try to use gcc, I get a Segmentation fault error:
$ mv permute permute.c
$ gcc permute.c -o permute.bin
$ chmod 755 permute.bin
$ ./permute.bin
Segmentation fault (core dumped)
|
It appears that you initially named the C file permute; when the make failed, you tried to execute it with your shell, which resulted in all of those syntax errors (as the shell does not know how to execute C code).
In the second case, you hit the comment:
//Must provide length (integer < sizeof(buffer)==50) as first arg;
//It will crash and burn otherwise
because you did not provide a first (or any) arguments to the program. Try ./permute.bin 10.
| Errors while compiling C code [closed] |
1,370,473,401,000 |
I'm trying to compile Pagespeed with Nginx in Ubuntu 14.04, following Google's instructions, and got some errors I don't quite understand. By default, it won't find my OpenSSL location, so I manually input it. However, it still fails.
What am I doing wrong?
Here's the whole process:
alain@a3:~$ bash <(curl -f -L -sS https://ngxpagespeed.com/install) \
> --nginx-version latest
Detected debian-based distro.
Operating system dependencies are all set.
Downloading ngx_pagespeed...
--2016-12-08 10:43:19-- https://github.com/pagespeed/ngx_pagespeed/archive/latest-stable.zip
Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112
Connecting to github.com (github.com)|192.30.253.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/pagespeed/ngx_pagespeed/zip/latest-stable [following]
--2016-12-08 10:43:19-- https://codeload.github.com/pagespeed/ngx_pagespeed/zip/latest-stable
Resolving codeload.github.com (codeload.github.com)... 192.30.253.120, 192.30.253.121
Connecting to codeload.github.com (codeload.github.com)|192.30.253.120|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 135997 (133K) [application/zip]
Saving to: ‘/tmp/tmp.AUd44ktCew/ngx_pagespeed-latest-stable.zip’
/tmp/tmp.AUd44ktCew/ngx_pagespeed-latest-stable 100%[======================================================================================================>] 132.81K 682KB/s in 0.2s
2016-12-08 10:43:20 (682 KB/s) - ‘/tmp/tmp.AUd44ktCew/ngx_pagespeed-latest-stable.zip’ saved [135997/135997]
OK to delete /home/alain/ngx_pagespeed-latest-stable? [Y/n]
Extracting ngx_pagespeed...
Downloading PSOL binary...
--2016-12-08 10:43:22-- https://dl.google.com/dl/page-speed/psol/1.11.33.4.tar.gz
Resolving dl.google.com (dl.google.com)... 216.58.217.238, 2607:f8b0:4002:805::200e
Connecting to dl.google.com (dl.google.com)|216.58.217.238|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 132774363 (127M) [application/x-tar]
Saving to: ‘1.11.33.4.tar.gz’
1.11.33.4.tar.gz 100%[======================================================================================================>] 126.62M 31.7MB/s in 4.4s
2016-12-08 10:43:26 (29.0 MB/s) - ‘1.11.33.4.tar.gz’ saved [132774363/132774363]
Extracting PSOL...
Downloading nginx...
--2016-12-08 10:43:33-- http://nginx.org/download/nginx-1.11.6.tar.gz
Resolving nginx.org (nginx.org)... 95.211.80.227, 206.251.255.63, 2606:7100:1:69::3f, ...
Connecting to nginx.org (nginx.org)|95.211.80.227|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 960331 (938K) [application/octet-stream]
Saving to: ‘/tmp/tmp.AUd44ktCew/nginx-1.11.6.tar.gz’
/tmp/tmp.AUd44ktCew/nginx-1.11.6.tar.gz 100%[======================================================================================================>] 937.82K 62.9KB/s in 7.5s
2016-12-08 10:43:41 (125 KB/s) - ‘/tmp/tmp.AUd44ktCew/nginx-1.11.6.tar.gz’ saved [960331/960331]
OK to delete /home/alain/nginx-1.11.6/? [Y/n]
Extracting nginx...
About to build nginx. Do you have any additional ./configure
arguments you would like to set? For example, if you would like
to build nginx with https support give --with-http_ssl_module
If you don't have any, just press enter.
> --with-http_ssl_module --with-openssl=/usr/bin/openssl
About to configure nginx with:
./configure --add-module=/home/alain/ngx_pagespeed-latest-stable --with-http_ssl_module --with-openssl=/usr/bin/openssl
Does this look right? [Y/n]
checking for OS
+ Linux 4.4.0-53-generic x86_64
checking for C compiler ... found
+ using GNU C compiler
+ gcc version: 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)
checking for gcc -pipe switch ... found
checking for -Wl,-E switch ... found
checking for gcc builtin atomic operations ... found
checking for C99 variadic macros ... found
checking for gcc variadic macros ... found
checking for gcc builtin 64 bit byteswap ... found
checking for unistd.h ... found
checking for inttypes.h ... found
checking for limits.h ... found
checking for sys/filio.h ... not found
checking for sys/param.h ... found
checking for sys/mount.h ... found
checking for sys/statvfs.h ... found
checking for crypt.h ... found
checking for Linux specific features
checking for epoll ... found
checking for EPOLLRDHUP ... found
checking for EPOLLEXCLUSIVE ... not found
checking for O_PATH ... found
checking for sendfile() ... found
checking for sendfile64() ... found
checking for sys/prctl.h ... found
checking for prctl(PR_SET_DUMPABLE) ... found
checking for sched_setaffinity() ... found
checking for crypt_r() ... found
checking for sys/vfs.h ... found
checking for nobody group ... not found
checking for nogroup group ... found
checking for poll() ... found
checking for /dev/poll ... not found
checking for kqueue ... not found
checking for crypt() ... not found
checking for crypt() in libcrypt ... found
checking for F_READAHEAD ... not found
checking for posix_fadvise() ... found
checking for O_DIRECT ... found
checking for F_NOCACHE ... not found
checking for directio() ... not found
checking for statfs() ... found
checking for statvfs() ... found
checking for dlopen() ... not found
checking for dlopen() in libdl ... found
checking for sched_yield() ... found
checking for SO_SETFIB ... not found
checking for SO_REUSEPORT ... found
checking for SO_ACCEPTFILTER ... not found
checking for SO_BINDANY ... not found
checking for IP_BIND_ADDRESS_NO_PORT ... found
checking for IP_TRANSPARENT ... found
checking for IP_BINDANY ... not found
checking for IP_RECVDSTADDR ... not found
checking for IP_PKTINFO ... found
checking for IPV6_RECVPKTINFO ... found
checking for TCP_DEFER_ACCEPT ... found
checking for TCP_KEEPIDLE ... found
checking for TCP_FASTOPEN ... found
checking for TCP_INFO ... found
checking for accept4() ... found
checking for eventfd() ... found
checking for int size ... 4 bytes
checking for long size ... 8 bytes
checking for long long size ... 8 bytes
checking for void * size ... 8 bytes
checking for uint32_t ... found
checking for uint64_t ... found
checking for sig_atomic_t ... found
checking for sig_atomic_t size ... 4 bytes
checking for socklen_t ... found
checking for in_addr_t ... found
checking for in_port_t ... found
checking for rlim_t ... found
checking for uintptr_t ... uintptr_t found
checking for system byte ordering ... little endian
checking for size_t size ... 8 bytes
checking for off_t size ... 8 bytes
checking for time_t size ... 8 bytes
checking for AF_INET6 ... found
checking for setproctitle() ... not found
checking for pread() ... found
checking for pwrite() ... found
checking for pwritev() ... found
checking for sys_nerr ... found
checking for localtime_r() ... found
checking for posix_memalign() ... found
checking for memalign() ... found
checking for mmap(MAP_ANON|MAP_SHARED) ... found
checking for mmap("/dev/zero", MAP_SHARED) ... found
checking for System V shared memory ... found
checking for POSIX semaphores ... not found
checking for POSIX semaphores in libpthread ... found
checking for struct msghdr.msg_control ... found
checking for ioctl(FIONBIO) ... found
checking for struct tm.tm_gmtoff ... found
checking for struct dirent.d_namlen ... not found
checking for struct dirent.d_type ... found
checking for sysconf(_SC_NPROCESSORS_ONLN) ... found
checking for openat(), fstatat() ... found
checking for getaddrinfo() ... found
configuring additional modules
adding module in /home/alain/ngx_pagespeed-latest-stable
mod_pagespeed_dir=/home/alain/ngx_pagespeed-latest-stable/psol/include
build_from_source=false
checking for psol ... found
List of modules (in reverse order of applicability): ngx_http_write_filter_module ngx_http_header_filter_module ngx_http_chunked_filter_module ngx_http_range_header_filter_module ngx_pagespeed_etag_filter ngx_http_gzip_filter_module ngx_pagespeed ngx_http_postpone_filter_module ngx_http_ssi_filter_module ngx_http_charset_filter_module ngx_http_userid_filter_module ngx_http_headers_filter_module
checking for psol-compiler-compat ... found
+ ngx_pagespeed was configured
checking for PCRE library ... found
checking for PCRE JIT support ... found
checking for zlib library ... found
creating objs/Makefile
Configuration summary
+ using system PCRE library
+ using OpenSSL library: /usr/bin/openssl
+ using system zlib library
nginx path prefix: "/usr/local/nginx"
nginx binary file: "/usr/local/nginx/sbin/nginx"
nginx modules path: "/usr/local/nginx/modules"
nginx configuration prefix: "/usr/local/nginx/conf"
nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
nginx pid file: "/usr/local/nginx/logs/nginx.pid"
nginx error log file: "/usr/local/nginx/logs/error.log"
nginx http access log file: "/usr/local/nginx/logs/access.log"
nginx http client request body temporary files: "client_body_temp"
nginx http proxy temporary files: "proxy_temp"
nginx http fastcgi temporary files: "fastcgi_temp"
nginx http uwsgi temporary files: "uwsgi_temp"
nginx http scgi temporary files: "scgi_temp"
Build nginx? [Y/n]
make -f objs/Makefile
make[1]: Entering directory '/home/alain/nginx-1.11.6'
cd /usr/bin/openssl \
&& if [ -f Makefile ]; then make clean; fi \
&& ./config --prefix=/usr/bin/openssl/.openssl no-shared \
&& make \
&& make install_sw LIBDIR=lib
/bin/sh: 1: cd: can't cd to /usr/bin/openssl
objs/Makefile:1339: recipe for target '/usr/bin/openssl/.openssl/include/openssl/ssl.h' failed
make[1]: *** [/usr/bin/openssl/.openssl/include/openssl/ssl.h] Error 2
make[1]: Leaving directory '/home/alain/nginx-1.11.6'
Makefile:8: recipe for target 'build' failed
make: *** [build] Error 2
Error: Failure running 'make', exiting.
|
I was trying to manually force it to look at /usr/bin/openssl because that is where which openssl pointed, but nothing worked and the follow-up installation kept failing.
After searching, I was told to install libssl-dev, and that resolved my problem.
With that the installation will run smoothly.
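A quick check captures the underlying issue: nginx's --with-openssl flag expects an OpenSSL source tree, while what was actually missing were the OpenSSL development headers, which the openssl binary package does not ship but libssl-dev does. A small sketch (the header path is the usual Debian/Ubuntu location):

```shell
# --with-http_ssl_module needs ssl.h at build time; check for it:
if [ -e /usr/include/openssl/ssl.h ]; then
    msg="OpenSSL headers found"
else
    msg="OpenSSL headers missing: sudo apt-get install libssl-dev"
fi
echo "$msg"
```

With the headers installed, rerun the install script and pass only --with-http_ssl_module at the configure prompt, letting configure locate OpenSSL itself.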
| Pagespeed + Nginx installation from source fails |
1,472,033,810,000 |
I want to compile guile on shared hosting but when I run ./configure I've got error:
configure: error: GNU MP 4.1 or greater not found, see README
so I've downloaded GMP and tried to install it locally (found in answer to this question on Stack Overflow install library in home directory)
mkdir /home/jcubic/lib
./configure --prefix=/home/jcubic/
make
make install
it created these files in /home/jcubic/lib
libgmp.a
libgmp.la
libgmp.so
libgmp.so.10
libgmp.so.10.3.1
then I ran configure from the guile directory (I found the option by reading the configure script):
./configure --with-libgmp-prefix=/home/jcubic
but the error remains. How can I use the locally installed GNU MP while running guile's ./configure and make?
|
As a sum up of the comments. One has to add the environment variables as follows.
LD_LIBRARY_PATH="/home/<user>/lib" LIBRARY_PATH="/home/<user>/lib" CPATH="/home/<user>/include"
| How to use local shared library while compiling the FOSS project? |
1,472,033,810,000 |
I'm trying to compile Linux kernel 3.14 on Ubuntu 14.04. Before anyone points out, I know newer stable versions of the kernel are available, but I have been asked to install 3.14 itself. So, I wrote a script which unpacks the source tar and starts building the kernel. But it stops midway without generating any errors. I've tried to fiddle with the code and it still gives the same error every time.
Snippet of the script:
# Prepare for compilation
make -j1 mrproper
# Set default configuration
make -j1 defconfig
# Compile the kernel image and modules
make -j1
# Install the modules
make -j1 modules_install
# Install the firmware
make -j1 firmware_install
# Install the kernel
cp -v arch/x86_64/boot/bzImage /boot/vm_linuz-3-14-systemd
# Install the map file
cp -v System.map /boot/system-map-3-14-systemd
# Backup kernel configuration file
cp -v .config /boot/config-backup-3-14
Last few lines of the log:
LD [M] net/ipv4/netfilter/iptable_nat.ko
LD [M] net/ipv4/netfilter/nf_nat_ipv4.ko
LD [M] net/netfilter/nf_nat.ko
LD [M] net/netfilter/nf_nat_ftp.ko
LD [M] net/netfilter/nf_nat_irc.ko
LD [M] net/netfilter/nf_nat_sip.ko
LD [M] net/netfilter/xt_LOG.ko
LD [M] net/netfilter/xt_mark.ko
LD [M] net/netfilter/xt_nat.ko
HOSTCC arch/x86/boot/tools/build
CPUSTR arch/x86/boot/cpustr.h
CC arch/x86/boot/cpu.o
MKPIGGY arch/x86/boot/compressed/piggy.S
AS arch/x86/boot/compressed/piggy.o
LD arch/x86/boot/compressed/vmlinux
ZOFFSET arch/x86/boot/zoffset.h
OBJCOPY arch/x86/boot/vmlinux.bin
AS arch/x86/boot/header.o
LD arch/x86/boot/setup.elf
OBJCOPY arch/x86/boot/setup.bin
BUILD arch/x86/boot/bzImage
Setup is 15232 bytes (padded to 15360 bytes).
System is 5433 kB
CRC 62b609cb
Kernel: arch/x86/boot/bzImage is ready (#1)
Building modules, stage 2.
MODPOST 11 modules
CC drivers/thermal/x86_pkg_temp_thermal.mod.o
LD [M] drivers/thermal/x86_pkg_temp_thermal.ko
CC net/ipv4/netfilter/ipt_MASQUERADE.mod.o
LD [M] net/ipv4/netfilter/ipt_MASQUERADE.ko
CC net/ipv4/netfilter/iptable_nat.mod.o
LD [M] net/ipv4/netfilter/iptable_nat.ko
CC net/ipv4/netfilter/nf_nat_ipv4.mod.o
LD [M] net/ipv4/netfilter/nf_nat_ipv4.ko
CC net/netfilter/nf_nat.mod.o
LD [M] net/netfilter/nf_nat.ko
CC net/netfilter/nf_nat_ftp.mod.o
LD [M] net/netfilter/nf_nat_ftp.ko
CC net/netfilter/nf_nat_irc.mod.o
LD [M] net/netfilter/nf_nat_irc.ko
CC net/netfilter/nf_nat_sip.mod.o
LD [M] net/netfilter/nf_nat_sip.ko
CC net/netfilter/xt_LOG.mod.o
LD [M] net/netfilter/xt_LOG.ko
CC net/netfilter/xt_mark.mod.o
LD [M] net/netfilter/xt_mark.ko
CC net/netfilter/xt_nat.mod.o
LD [M] net/netfilter/xt_nat.ko
sh /finalize-system/linux-kernel/linux-3.14/arch/x86/boot/install.sh 3.14.21 arch/x86/boot/bzImage \
System.map "/boot"
Cannot find LILO.
It is showing an error Cannot find LILO. But I have installed Grub 2 on my system. Then why is it asking for LILO?
|
After adding the ARCH=x86_64 flag to all the make commands, the Linux kernel compiled successfully.
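For reference, a sketch of the corrected script fragment from the question with ARCH pinned on every make invocation; it is written to a file and only syntax-checked here, since actually running it requires a real kernel source tree:

```shell
cd "$(mktemp -d)"
cat > build-kernel.sh <<'EOF'
#!/bin/sh
set -e
ARCH=x86_64
make ARCH="$ARCH" mrproper
make ARCH="$ARCH" defconfig
make ARCH="$ARCH" -j1
make ARCH="$ARCH" modules_install
make ARCH="$ARCH" firmware_install
cp -v arch/x86_64/boot/bzImage /boot/vm_linuz-3-14-systemd
cp -v System.map /boot/system-map-3-14-systemd
cp -v .config /boot/config-backup-3-14
EOF
sh -n build-kernel.sh && echo "script syntax OK"
```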
| Linux kernel 3.14: Cannot find LILO |
1,472,033,810,000 |
I compiled coreutils with --sysconfdir=/test/etc instead of default /etc, moved /etc/group to /test/etc/group, and chgrp failed with
chgrp: invalid group: $groupname.
How can I fix that and make chgrp work with new sysconfdir?
|
Recompiling coreutils to look for /etc/group and other files in a different place won't change the fact that most of the system still expects to find those files in the standard places. In your case, you are noticing that the part of libc responsible for looking up groups and other objects in system database, which is nss_files, continues to look for groups in the standard place.
If you want to change the location where /etc/group and a lot of other very basic configuration files live, you'll have to recompile libc6 and probably a lot of other things. Almost certainly, many parts of the system (init scripts come to mind) are hardcoded to use /etc, and none of this has been tested so you are likely to find bugs even if you succeed in this ambitious task.
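One quick way to see this in action: getent resolves groups through the same libc NSS machinery (nss_files) that chgrp uses, so it reflects what the whole system sees, regardless of where the relocated coreutils was told to look:

```shell
# Lookup goes through libc's NSS layer, not through any path compiled
# into coreutils; on a stock system this still reads /etc/group, not
# /test/etc/group:
getent group root
```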
| custom sysconfdir for coreutils |
1,472,033,810,000 |
I am trying to install tty0tty, a null-modem emulator, as described in the linked installation guide, but I have a problem at "3. Build the kernel module from provided source":
user@linux-bmne:/run/media/.../Downloads/tty0tty-1.2/module> make
make -C /lib/modules/3.16.7-29-desktop/build M=/run/media/.../Downloads/tty0tty-1.2/module modules
make[1]: Entering directory '/lib/modules/3.16.7-29-desktop/build'
make[1]: *** No rule to make target 'modules'. Stop.
make[1]: Leaving directory '/lib/modules/3.16.7-29-desktop/build'
Makefile:26: recipe for target 'default' failed
make: *** [default] Error 2
Yes, the makefile is in the module folder. Also, /lib/modules/3.16.7-29-desktop/build exists (after I ran mkdir build in 3.16.7-29-desktop). You can have a look at the folder structure of tty0tty here (it is very simple). I also tried sudo make, but it made no difference.
The "No rule to make target" problem seems to be common, but I did not find a matching solution for this case. I do not know if this is helpful, but my system is openSUSE 13.2 x86_64.
I would be thankful for your help.
|
To build a kernel module, you need some header files which are generated during the build of the main kernel image. The makefile expects those headers to be available under /lib/modules/3.16.7-29-desktop/build where the 3.16.7-29-desktop is determined from your running kernel. Together with the header files, there's a makefile which can be used to build third-party modules. The makefile in module calls that makefile, but it isn't present on your system.
You need to install the kernel headers for your system. On OpenSUSE, that's the kernel-devel package. On most distributions, /lib/modules/VERSION/build is a symbolic link to where the kernel header tree is located. I don't know if OpenSUSE does this; if it doesn't, then either create the symbolic link or pass the actual location of the headers (the directory containing files Makefile and Module.symvers and subdirectories include and arch) as an argument to make
make KERNELDIR=/path/to/kernel-headers
The latter method is what you'll need to use if you want to build the module for a kernel version that isn't the one that's currently running.
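A quick sanity check before running make (paths as on most distributions): the build directory for the running kernel must actually contain the kbuild tree, not just an empty directory created by hand.

```shell
d=/lib/modules/$(uname -r)/build
if [ -f "$d/Makefile" ] && [ -f "$d/Module.symvers" ]; then
    echo "kernel headers present in $d"
else
    echo "kernel headers missing in $d; install the kernel-devel package"
fi
```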
| "make" stops during installation of tty0tty (null-modem emulator) |
1,472,033,810,000 |
I have installed Linux Mint 17 on my 32-bit laptop and bought a FriendlyARM Mini2440 development board to do some basic programming and learn concepts of Linux.
However, I couldn't find any documentation on how to install the cross-compiler and toolchain for FriendlyARM Mini2440 on Linux Mint (I found it for Ubuntu, though). I am using this tutorial to start my system and have followed all the steps.
My problem is that while I am able to install and execute the cross-compiler and toolchain correctly for the first time, after restarting, when I issue the command arm-none-linux-gnueabi-cc -v, it gives me an error.
How can I get FriendlyARM Mini2440 to work on Linux Mint?
|
In the link https://alselectro.wordpress.com/category/friendly-arm-mini2440/
they suggest pasting the following line into /root/.bashrc:
export PATH=$PATH:/opt/FriendlyArm/toolschain/4.4.3/bin
However, I didn't have the /root/.bashrc file in my Linux Mint, so I was getting an error for arm-none-linux-gnueabi-cc -v.
After much searching, I found that I can paste the line into /root/.profile instead.
After doing this, the arm compiler is initialized on start-up and seems to be working fine now.
Although the procedures in the link are for FriendlyARM2440 + Ubuntu, I have tested all of them for Linux Mint 17, and except for this small change, all seem to work fine.
| Linux Mint + FriendlyARM 2440: Cannot install and execute cross-compiler and toolschain (arm-linux-gcc-4.4.3.tar.gz) |
1,472,033,810,000 |
I am using a device that is running Yocto Linux on a 32-bit architecture. It does not provide a package manager or a compiler, thus not allowing me to compile on it.
That is what I did, for example, for Python 3.4; it was as easy as:
./configure
make
On the Ubuntu Virtual Machine and
make install
on the device running Yocto.
But now I am facing a minor problem with unixODBC. ./configure and make complete without any problems. What I did not know is that I also need a gcc compiler for make install, which makes it impossible to run on the device. Naively I thought I could just copy the fully compiled libraries from Ubuntu to the device, but this didn't work: none of the configuration is done.
My question is: is there an argument for make, or some other option, to build everything I need on Ubuntu, copy it to the device, and start the configuration routine that is built into unixODBC's installation?
I took a look through the Makefile hoping to find a clue, but without luck. It is also hard to grasp what kind of configuration is done after compilation succeeds.
|
When installing programs from the source you can often use the DESTDIR flag to achieve this.
Create a new directory: mkdir /tmp/uodbcinst
Install: make DESTDIR=/tmp/uodbcinst/ install
Archive the result: tar -cpzf ~/uodbcinst.tar.gz -C /tmp/uodbcinst .
Extract the result on the other system: tar -C / -xf uodbcinst.tar.gz
Disclaimer: I haven't tested these commands, please double check the syntax. Extracting a tarball into the root directory is very dangerous.
You may also want to consider adjusting the --sysconfdir option when running ./configure so that the resulting program looks in the /etc/ folder for configuration files instead of /usr/local/etc.
This method is similar to how distribution maintainers create the payloads for package managers. See the Arch Linux example.
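A minimal demonstration of what DESTDIR does, with a toy Makefile standing in for unixODBC's (assumes make is installed): the configured install paths stay the same, but the whole tree is staged under the given directory at install time.

```shell
cd "$(mktemp -d)"
# Toy Makefile; recipe lines need tab characters, hence printf:
printf 'PREFIX = /usr/local\ninstall:\n\tmkdir -p $(DESTDIR)$(PREFIX)/bin\n\ttouch $(DESTDIR)$(PREFIX)/bin/demo\n' > Makefile

make DESTDIR="$PWD/stage" install
find stage -type f    # the staged tree mirrors the final /usr/local layout
```

This is why the archive extracted on the target device lands in the same places the program was configured to look at.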
| Configuration of unixODBC after compiling from source |
1,472,033,810,000 |
I have a toolchain generated through Buildroot with which I am trying to compile something statically, but the build fails with gcc saying something about "multiple definition". Now looking at the command line in question, gcc is executed with a link option that occurs twice, i.e. gcc -lpthread -lpthread.
I mentioned this on the Buildroot IRC, but I was told that linking the same library twice shouldn't cause any problems. Is this true? Is GCC indeed smart enough, and might something else be to blame for the compilation failure?
|
As noted in the comments below my question, the answer is: no, linkers are smart enough nowadays that a repeated library does no harm; the problem has to be something else.
| GCC: library linked several times can cause "multiple definition" errors? |
1,472,033,810,000 |
I am just wondering whether it is a good idea [better optimization, fewer bugs] to recompile the gcc compiler with the newly built version.
I compiled gcc 4.9.3 with the current system gcc 4.3.4. Once gcc 4.9.3 has been compiled, should I compile it again, along with all its dependencies [gmp, isl, mpc, mpfr], using the built version 4.9.3?
|
gcc does this as part of its three-phase build: see the discussion of phase-2 and phase-3 compilers in Installing GCC: Building
| Recompile gcc with the built version |
1,472,033,810,000 |
I have this repository cloned on my Sabayon machine, what I would like to do is write a script that will change into each directory of this repo (just the top-level directories, not directories inside these directories) and run ./autogen.sh --prefix=/usr && make && sudo make install. I was thinking that maybe this script will do what I want:
for i in `find . -type d`
do
pushd $i
./autogen.sh --prefix=/usr && make && sudo make install
popd
done
The only problem is that find . -type d lists every directory within this repo, including nested ones (e.g. it shows tclock/images, that is, the images directory inside the tclock directory), when I want only the top-level directories (just tclock in the previous example).
|
I have found that this works:
for i in `find . -maxdepth 1 -type d -exec basename {} \;`
do
pushd $i
./autogen.sh --prefix=/usr && make && sudo make install
popd
done
although some odd error messages pop up from this, so if anyone has a better answer I will be more than willing to accept it.
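For what it's worth, a possible refinement (a sketch only, the directory names below are made up, and it has not been run against the actual repo): -mindepth 1 stops find from emitting . itself, and filtering out dot-names skips .git, which are two likely sources of those odd messages:

```shell
# Simulate a repo layout, then loop over only the top-level, non-hidden directories.
repo=$(mktemp -d)
mkdir -p "$repo/tclock/images" "$repo/wlan" "$repo/.git"

for i in $(find "$repo" -mindepth 1 -maxdepth 1 -type d ! -name '.*'); do
    echo "would build in: $(basename "$i")"
    # the real script would do:
    # pushd "$i" && ./autogen.sh --prefix=/usr && make && sudo make install; popd
done
```

With this filter neither . nor .git reaches the pushd/autogen.sh step, so those directories no longer produce errors.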
| How do I write a script to automatically compile and install all Moksha modules? |
1,472,033,810,000 |
I am trying to compile GNU Screen in my home folder on a machine where I don't have superuser rights. I am using the GNU Screen version referenced by Linux From Scratch.
tar xvzf screen-4.3.1.tar.gz
cd screen-4.3.1
./configure --prefix=$HOME
It's all good up to that point, and the Makefile is generated. Then the make command exits with:
utmp.c:99:1: warning: "pututline" redefined
In file included from screen.h:30,
from utmp.c:34:
os.h:262:1: warning: this is the location of the previous definition
utmp.c: In function 'makedead':
utmp.c:602: error: 'struct __exit_status' has no member named 'e_termination'
utmp.c:603: error: 'struct __exit_status' has no member named 'e_exit'
make: *** [utmp.o] Error 1
After compiling a few files successfully.
Any ideas?
The code it refers to looks like this:
static void
makedead(u)
struct utmp *u;
{
u->ut_type = DEAD_PROCESS;
#if (!defined(linux) || defined(EMPTY)) && !defined(__CYGWIN__)
u->ut_exit.e_termination = 0; // Line 602
u->ut_exit.e_exit = 0; // Line 603
#endif
#if !defined(sun) || !defined(SVR4)
u->ut_user[0] = 0; /* for Digital UNIX, [email protected] */
#endif
}
I am on a Linux machine though:
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
|
It looks like you're missing a few dependencies. That would be a bug in the configure script; you might want to file a bug report with the screen maintainers.
| Trying to compile GNU Screen |
1,472,033,810,000 |
I want to build my own FreeBSD installation medium (an .iso file to burn to a DVD) with selected ports. What I mean is: I use the DVD to install a new system, and after installing it I can (for example) use git and gcc49. Is that possible? I know it's possible when compiling source code for ARM boards (like the Raspberry Pi or BeagleBone), but is it possible for the i386/amd64 version? Googling gave me nothing, and so did posting on the official FreeBSD forum. Thank you in advance for your answer. Greetings
|
I was about to answer this question with a link to the release man page and go into great detail about how to build packages and include them in releases, etc.
But then I realized that your question conflates building the installation media with which packages are available after the install. Packages are no longer included in the ISO; you can install them from the network post-install by running pkg install packagename.
| How do I include a port into FreeBSD distro? |
1,472,033,810,000 |
I have created code for a new module and want to compile it for my kernel, but I am not sure about one thing: should I copy it to some designated directory before I start the compilation, or can I just compile it wherever I want?
Thank you very much!
|
There isn't anywhere special the source code needs to be. Normally it'd be wherever your repository is.
If you want to leave it somewhere for the next admin to find, the most obvious place would be a company VCS server. /usr/src would also be a reasonable place to look, as well as $HOME.
Eventually, if you decide to submit the module for inclusion in the kernel, you'll have to put it in the proper spot in your linux git checkout.
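To make the "anywhere works" point concrete: the kernel's kbuild system supports out-of-tree builds from any directory. A minimal Makefile next to your source file looks roughly like this (a sketch only; hello is a hypothetical module name, and it assumes the headers for the running kernel are installed):

```makefile
# Minimal out-of-tree module Makefile; 'hello' stands in for your module.
obj-m := hello.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) clean
```

Running make in that directory, wherever it lives, hands control to the kernel build tree (-C) and tells it to compile the module in place (M=$(CURDIR)), which is why no designated source location is required.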
| Where (the directory) should I put my newly created module in the kernel? |