I am trying to compile a project whose source files I got from GitHub, but when I launch make, the build complains that the dependency on wayland-server is not met. However, I have Wayland installed; on my system, libwayland-server.so.0 is located in /usr/lib/x86_64-linux-gnu. Looking at the project's Makefile, I see that the variable LIBDIR is set to /usr/lib, so I suspect the compiler is not searching for the wayland-server library inside /usr/lib/x86_64-linux-gnu. What is the right way to fix this compilation problem? Should I modify the Makefile, and if so, how?
Use the command export LIBDIR=/usr/lib/x86_64-linux-gnu:$LIBDIR and try again. If it works, put this line in your ~/.bashrc
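A minimal sketch of that approach in shell form. Whether the Makefile actually honors a LIBDIR value from the environment depends on the project; overriding the variable on the make command line is a safe alternative that needs no ~/.bashrc change:

```shell
# Path comes from the question; adjust it to wherever
# libwayland-server.so actually lives on your system.
export LIBDIR=/usr/lib/x86_64-linux-gnu:$LIBDIR

# Alternatively, override the Makefile variable for a single build:
#   make LIBDIR=/usr/lib/x86_64-linux-gnu
echo "$LIBDIR"
```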
How to tell the compiler to search some libs inside /usr/lib/x86_64-linux-gnu
I get the following error when I try to compile my application. I have installed all dependencies using Homebrew, looked at output from "brew doctor", nothing clear on how to solve this problem. $ gcc `pkg-config --cflags gtk+-3.0` -o gui gui.c `pkg-config --libs gtk+-3.0` In file included from /usr/local/Cellar/glib/2.42.0/include/glib-2.0/glib/gtypes.h:32:0, from /usr/local/Cellar/glib/2.42.0/include/glib-2.0/glib/galloca.h:32, from /usr/local/Cellar/glib/2.42.0/include/glib-2.0/glib.h:30, from /usr/local/Cellar/gtk+3/3.14.4/include/gtk-3.0/gdk/gdkconfig.h:13, from /usr/local/Cellar/gtk+3/3.14.4/include/gtk-3.0/gdk/gdk.h:30, from /usr/local/Cellar/gtk+3/3.14.4/include/gtk-3.0/gtk/gtk.h:30, from gui.c:5: /usr/local/Cellar/glib/2.42.0/lib/glib-2.0/include/glibconfig.h:12:19: fatal error: float.h: No such file or directory With pkg-config --cflags gtk+-3.0 -D_REENTRANT -I/usr/local/Cellar/gtk+3/3.14.4/include/gtk-3.0 -I/usr/local/Cellar/at-spi2-atk/2.14.1/include/at-spi2-atk/2.0 -I/usr/local/Cellar/at-spi2-core/2.14.0/include/at-spi-2.0 -I/usr/local/Cellar/d-bus/1.8.8/include/dbus-1.0 -I/usr/local/Cellar/d-bus/1.8.8/lib/dbus-1.0/include -I/usr/local/Cellar/gtk+3/3.14.4/include/gtk-3.0 -I/usr/local/Cellar/glib/2.42.0/include/gio-unix-2.0/ -I/usr/local/Cellar/cairo/1.14.0/include/cairo -I/usr/local/Cellar/pango/1.36.8/include/pango-1.0 -I/usr/local/Cellar/harfbuzz/0.9.35_1/include/harfbuzz -I/usr/local/Cellar/pango/1.36.8/include/pango-1.0 -I/usr/local/Cellar/atk/2.14.0/include/atk-1.0 -I/usr/local/Cellar/cairo/1.14.0/include/cairo -I/usr/local/Cellar/pixman/0.32.6/include/pixman-1 -I/usr/local/Cellar/fontconfig/2.11.1/include -I/usr/local/Cellar/freetype/2.5.3_1/include/freetype2 -I/usr/local/Cellar/libpng/1.6.13/include/libpng16 -I/usr/local/Cellar/gdk-pixbuf/2.30.8/include/gdk-pixbuf-2.0 -I/usr/local/Cellar/libpng/1.6.13/include/libpng16 -I/usr/local/Cellar/glib/2.42.0/include/glib-2.0 -I/usr/local/Cellar/glib/2.42.0/lib/glib-2.0/include 
-I/usr/local/opt/gettext/include -I/opt/X11/include With pkg-config --libs gtk+-3.0 -L/usr/local/Cellar/gtk+3/3.14.4/lib -L/usr/local/Cellar/pango/1.36.8/lib -L/usr/local/Cellar/atk/2.14.0/lib -L/usr/local/Cellar/cairo/1.14.0/lib -L/usr/local/Cellar/gdk-pixbuf/2.30.8/lib -L/usr/local/Cellar/glib/2.42.0/lib -L/usr/local/opt/gettext/lib -lgtk-3 -lgdk-3 -lpangocairo-1.0 -lpango-1.0 -latk-1.0 -lcairo-gobject -lcairo -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -lintl
Solved the problem: I wasn't specifying the vte library. I used the following to compile my app: clang -Wall -g gui.c -o gui `pkg-config --cflags --libs gtk+-3.0 vte`
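Command substitution (backticks or $(...)) is what splices pkg-config's output into the compile line; if it is omitted, the compiler is handed the literal word pkg-config as a file name. A small illustration of the mechanism with a stand-in function, since pkg-config and the GTK/vte packages may not be installed (the flags below are made up):

```shell
# Stand-in for `pkg-config --cflags --libs gtk+-3.0 vte`; flags are illustrative.
fake_pkg_config() { printf '%s' '-I/usr/include/gtk-3.0 -lgtk-3 -lvte'; }

# $(...) is the modern spelling of backtick command substitution.
cmd="clang -Wall -g gui.c -o gui $(fake_pkg_config)"
echo "$cmd"
```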
float.h library not found when compiling gtk+3/vte app with Homebrew
I have 2 systems that both run Gentoo. I want to use one to build binary packages for the other and have been following this wiki article. One problem I have is that I have different use flags for my 2 systems. As an example, I have vim installed on both my package server and package host. My package server has the USE flag gpm, but my build host has -gpm. If I use quickpkg and move vim from my package server to my package host, I get the error ./vim: error while loading shared libraries: libgpm.so.1: cannot open shared object file: No such file or directory, which means that the host is missing gpm support. I'm not too familiar with Gentoo yet, so I'm not sure how I can resolve this. I've tried Googling everything I can think of, but I haven't found anything that works.
For my situation, I found the solution to be distcc, which eyoung100 suggested in the comments.
Different USE flags for a binary package server and host
To save space and time I copied a large project tree on a network drive as hard links, i.e. cp -a -r --link proj proj_B (background: it's huge, needs to be rebuilt from two incompatible environments, and doesn't have good support for specifying intermediate and product locations. So this was a quick hack to get a rebuild in environment "B": after copying clean and rebuild from "proj_B/obj". Both environments are under LinuxMint 16) The problem with this approach is that edits won't be (reliably) shared between these trees, e.g. saving an edit to "proj/foo.cpp" will leave it pointing to a new inode and "proj_B/foo.cpp" will still point to the old one (maybe from the loss-avoidance pattern of "save temp; mv orig temp2; mv temp orig; rm temp2"). For sharing source I guess I need symbolic links for the source directories (but not simply a symlink of the project root, since the binary directories need to be kept apart), e.g. something like: cp -a -r --symbolic-link proj proj_B followed by unlinking the binary directories (except that recursive symlink copying fails with "can make relative symbolic links only in current directory". But something similar could be done with "find -exec", or just capitulating and writing a script) But before doing that I wanted a sanity check: is there a better tool for this all along (e.g. some warlock-grade combination of rsync flags)? Or is this sharing approach doomed to end in tears and lost data and I should resign myself to using two copies (and lots of cursing when I find I forgot to push/pull latest changes between them)?
I wouldn't use hard links. Some editors break hard links when they save files, others don't, and some can be configured. However, preserving hard links when saving a file implies that the file is written in place, which means that if the system crashes during the write, you will be left with an incomplete file. This is why save-to-new-file-and-move-into-place is preferable, but it breaks hard links. In particular, most version control software breaks hard links. So hard links are out. A forest of symbolic links doesn't have this problem. You need to ensure that you point your editor to the master copy or that the editor follows symlinks. You can create the symlink forest with cp -as. However, if you create new files, cp -as is inconvenient for creating the corresponding symlinks (it'll do the job, but drown you in complaints that the target already exists). You can use a simple shell loop instead:
for env in environment1 environment2 environment3; do
  cd "../$env"
  find ../src \
    -type d -exec mkdir {} + \
    -o -exec sh -c 'ln -s "$@" .' _ {} +
done
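For the initial copy, cp -as does build the whole forest in one pass, provided the source is given as an absolute path, which is exactly what the "can make relative symbolic links only in current directory" error is complaining about. A sketch with throwaway directory names:

```shell
# Toy source tree standing in for the real project.
mkdir -p proj/src
echo 'int main(void){return 0;}' > proj/src/foo.cpp

# cp -as needs an ABSOLUTE source path; directories are recreated,
# regular files become symlinks back to the master copy.
cp -as "$PWD/proj" proj_B

ls -l proj_B/src/foo.cpp   # a symlink, not a copy
```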
Sharing a project tree between environments
I built kernel 3.11.3 following the instructions here. There were no issues in the build. The steps I followed automatically made entries in grub and copied the image to /boot too. At boot time, when I choose the new kernel, booting gets stuck with the following message [ 1.563345] MODSIGN: Problem loading in-kernel X.509 certificate (-129) [1.734622] ata3: softreset failed (device not ready) [1.735638[ ata1: softreset failed (device not ready) But I get the same message when booting with my original kernel (kernel-3.9.5-301.fc19.x86_64) and it immediately disappears and boots normally. /boot/grub2/grub.cfg looks like this # # DO NOT EDIT THIS FILE # # It is automatically generated by grub2-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then load_env fi if [ "${next_entry}" ] ; then set default="${saved_entry}" set next_entry= save_env next_entry set boot_once=true else set default="${saved_entry}" fi if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } terminal_output console set timeout=5 ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Fedora (3.11.3Mephisto) 19 (Schrödinger’s Cat)' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.9.5-301.fc19.x86_64-advanced-ebc305eb-2826-4aa0-91ae-f74f3e12b496' { load_video set gfxpayload=keep insmod gzio insmod part_msdos 
insmod ext2 set root='hd0,msdos7' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos7 --hint-efi=hd0,msdos7 --hint-baremetal=ahci0,msdos7 --hint='hd0,msdos7' b6603ac8-e004-4cd6-b141-9bc95409e32a else search --no-floppy --fs-uuid --set=root b6603ac8-e004-4cd6-b141-9bc95409e32a fi linux /vmlinuz-3.11.3Mephisto root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/swap rd.md=0 rd.dm=0 vconsole.keymap=guj rd.luks=0 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=fedora/root rhgb quiet LANG=en_US.UTF-8 initrd /initramfs-3.11.3Mephisto.img } menuentry 'Fedora, with Linux 3.9.5-301.fc19.x86_64' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.9.5-301.fc19.x86_64-advanced-ebc305eb-2826-4aa0-91ae-f74f3e12b496' { load_video set gfxpayload=keep insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos7' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos7 --hint-efi=hd0,msdos7 --hint-baremetal=ahci0,msdos7 --hint='hd0,msdos7' b6603ac8-e004-4cd6-b141-9bc95409e32a else search --no-floppy --fs-uuid --set=root b6603ac8-e004-4cd6-b141-9bc95409e32a fi linux /vmlinuz-3.9.5-301.fc19.x86_64 root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/swap rd.md=0 rd.dm=0 vconsole.keymap=guj rd.luks=0 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=fedora/root rhgb quiet initrd /initramfs-3.9.5-301.fc19.x86_64.img } menuentry 'Fedora, with Linux 0-rescue-7725dfc225d14958a625ddaaaea5962b' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-7725dfc225d14958a625ddaaaea5962b-advanced-ebc305eb-2826-4aa0-91ae-f74f3e12b496' { load_video insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos7' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos7 --hint-efi=hd0,msdos7 --hint-baremetal=ahci0,msdos7 --hint='hd0,msdos7' b6603ac8-e004-4cd6-b141-9bc95409e32a 
else search --no-floppy --fs-uuid --set=root b6603ac8-e004-4cd6-b141-9bc95409e32a fi linux /vmlinuz-0-rescue-7725dfc225d14958a625ddaaaea5962b root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/swap rd.md=0 rd.dm=0 vconsole.keymap=guj rd.luks=0 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=fedora/root rhgb quiet initrd /initramfs-0-rescue-7725dfc225d14958a625ddaaaea5962b.img } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_ppc_terminfo ### ### END /etc/grub.d/20_ppc_terminfo ### ### BEGIN /etc/grub.d/30_os-prober ### ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### /etc/fstab looks like this # # /etc/fstab # Created by anaconda on Tue Jan 1 11:58:46 2002 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/fedora-root / ext4 defaults 1 1 UUID=b6603ac8-e004-4cd6-b141-9bc95409e32a /boot ext4 defaults 1 2 /dev/mapper/fedora-home /home ext4 defaults 1 2 /dev/mapper/fedora-swap swap swap defaults 0 0 /dev/sda1 /mnt/media ntfs-3g gid=admin,umask=0007 0 0 /dev/sda5 /mnt/setups ntfs-3g gid=admin,umask=0007 0 0 /dev/sda6 /mnt/documents ntfs-3g gid=admin,umask=0007 0 0 I'm using Fedora 19 on a x86_64 architecture
cd to the source directory, run 'make clean', then 'make localmodconfig', then build the kernel as you did before. When you run 'make install', grub.cfg will be regenerated automatically.
Unable to boot using self-built kernel
In the past when I have compiled applications from source I have extracted the source code to ~/src and compiled from there. I realize now that there may be no need for me to create the ~/src directory, as Linux probably already has an established location for source code for applications such as this. Is this the case? What is the directory in Linux that is established as the place for source code from third party applications that I want to compile?
There's no predetermined, or even globally preferred, location. The closest analogue I know of is the /usr/src tree in Red Hat Enterprise Linux and derivatives, but most applications that you compile are designed to be unpacked into their own directories, compiled as a non-privileged user, and only then installed with root privileges.
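The unprivileged workflow described above, sketched with a hypothetical tarball name (hello-1.0 is made up); ~/src is as good a choice as any, and only the final step needs root:

```shell
# Keep third-party sources under your own home directory.
mkdir -p "$HOME/src"

# Illustrative autotools-style build (tarball name is hypothetical):
#   tar -C "$HOME/src" -xzf hello-1.0.tar.gz
#   cd "$HOME/src/hello-1.0"
#   ./configure --prefix=/usr/local
#   make                 # build as a normal user
#   sudo make install    # root privileges only for the install step
ls -d "$HOME/src"
```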
Where to place source code for applications compiled from source?
We are working with an older software product that has some limited programming capabilities; specifically, it has no bit-manipulation functions. This has created a significant problem, as we need to implement an HMAC-MD5 hash to interface with an industry-standard software interface. The older software does have the ability to call a C program/DLL, pass it information, and obtain a returned value. We have no experience with the C language or with setting up the required shared libraries. So specifically: How should we set up and install a simple C compiler environment? Where can we find an existing open-source C implementation of the HMAC-MD5 algorithm? If one does not exist, where can we find a resource to help implement the algorithm? Our environment is CentOS 4.9 with Apache 1.3.42.
The compiler should be gcc. If you don't have it, the package has the same name; install it however you normally install CentOS packages (e.g., yum install gcc). There are a lot of open-source implementations of HMAC-MD5: any crypto library will have it, and various other projects carry one as well; Google or a code search will quickly turn up thousands. In fact, HMAC is defined in RFC 2104, which includes example C code in its appendix (you'll need to grab the MD5 example code from RFC 1321 as well).
C-library HMAC_MD5 questions
When installing ZoneMinder 1.25.0 in CentOS 6.4 (64-bit) the following error pops up when executing make: zm_ffmpeg_camera.cpp:105:44: error: missing binary operator before token "(" Full log: zm_ffmpeg_camera.cpp:105:44: error: missing binary operator before token "(" In file included from zm_ffmpeg_camera.cpp:24: zm_ffmpeg_camera.h:39: error: ISO C++ forbids declaration of ‘AVFormatContext’ with no type zm_ffmpeg_camera.h:39: error: expected ‘;’ before ‘*’ token zm_ffmpeg_camera.h:41: error: ISO C++ forbids declaration of ‘AVCodecContext’ with no type zm_ffmpeg_camera.h:41: error: expected ‘;’ before ‘*’ token zm_ffmpeg_camera.h:42: error: ISO C++ forbids declaration of ‘AVCodec’ with no type zm_ffmpeg_camera.h:42: error: expected ‘;’ before ‘*’ token zm_ffmpeg_camera.h:44: error: ISO C++ forbids declaration of ‘AVFrame’ with no type zm_ffmpeg_camera.h:44: error: expected ‘;’ before ‘*’ token zm_ffmpeg_camera.h:45: error: ISO C++ forbids declaration of ‘AVFrame’ with no type zm_ffmpeg_camera.h:45: error: expected ‘;’ before ‘*’ token zm_ffmpeg_camera.cpp: In constructor ‘FfmpegCamera::FfmpegCamera(int, const std::string&, int, int, int, int, int, int, int, bool)’: zm_ffmpeg_camera.cpp:35: error: ‘mFormatContext’ was not declared in this scope zm_ffmpeg_camera.cpp:37: error: ‘mCodecContext’ was not declared in this scope zm_ffmpeg_camera.cpp:38: error: ‘mCodec’ was not declared in this scope zm_ffmpeg_camera.cpp:40: error: ‘mRawFrame’ was not declared in this scope zm_ffmpeg_camera.cpp:41: error: ‘mFrame’ was not declared in this scope zm_ffmpeg_camera.cpp: In destructor ‘virtual FfmpegCamera::~FfmpegCamera()’: zm_ffmpeg_camera.cpp:46: error: ‘mFrame’ was not declared in this scope zm_ffmpeg_camera.cpp:46: error: ‘av_freep’ was not declared in this scope zm_ffmpeg_camera.cpp:47: error: ‘mRawFrame’ was not declared in this scope zm_ffmpeg_camera.cpp:51: error: ‘sws_freeContext’ was not declared in this scope zm_ffmpeg_camera.cpp:54: error: ‘mCodecContext’ was not 
declared in this scope zm_ffmpeg_camera.cpp:56: error: ‘avcodec_close’ was not declared in this scope zm_ffmpeg_camera.cpp:59: error: ‘mFormatContext’ was not declared in this scope zm_ffmpeg_camera.cpp:61: error: ‘av_close_input_file’ was not declared in this scope zm_ffmpeg_camera.cpp: In member function ‘void FfmpegCamera::Initialise()’: zm_ffmpeg_camera.cpp:78: error: ‘AV_LOG_DEBUG’ was not declared in this scope zm_ffmpeg_camera.cpp:78: error: ‘av_log_set_level’ was not declared in this scope zm_ffmpeg_camera.cpp:80: error: ‘AV_LOG_QUIET’ was not declared in this scope zm_ffmpeg_camera.cpp:80: error: ‘av_log_set_level’ was not declared in this scope zm_ffmpeg_camera.cpp:82: error: ‘av_register_all’ was not declared in this scope zm_ffmpeg_camera.cpp: In member function ‘virtual int FfmpegCamera::PrimeCapture()’: zm_ffmpeg_camera.cpp:94: error: ‘mFormatContext’ was not declared in this scope zm_ffmpeg_camera.cpp:94: error: ‘av_open_input_file’ was not declared in this scope zm_ffmpeg_camera.cpp:95: error: ‘errno’ was not declared in this scope zm_ffmpeg_camera.cpp:98: error: ‘mFormatContext’ was not declared in this scope zm_ffmpeg_camera.cpp:98: error: ‘av_find_stream_info’ was not declared in this scope zm_ffmpeg_camera.cpp:99: error: ‘errno’ was not declared in this scope zm_ffmpeg_camera.cpp:103: error: ‘mFormatContext’ was not declared in this scope zm_ffmpeg_camera.cpp:108: error: ‘CODEC_TYPE_VIDEO’ was not declared in this scope zm_ffmpeg_camera.cpp:118: error: ‘mCodecContext’ was not declared in this scope zm_ffmpeg_camera.cpp:118: error: ‘mFormatContext’ was not declared in this scope zm_ffmpeg_camera.cpp:121: error: ‘mCodec’ was not declared in this scope zm_ffmpeg_camera.cpp:121: error: ‘avcodec_find_decoder’ was not declared in this scope zm_ffmpeg_camera.cpp:125: error: ‘mCodec’ was not declared in this scope zm_ffmpeg_camera.cpp:125: error: ‘avcodec_open’ was not declared in this scope zm_ffmpeg_camera.cpp:129: error: ‘mRawFrame’ was not declared 
in this scope zm_ffmpeg_camera.cpp:129: error: ‘avcodec_alloc_frame’ was not declared in this scope zm_ffmpeg_camera.cpp:132: error: ‘mFrame’ was not declared in this scope zm_ffmpeg_camera.cpp:135: error: ‘PIX_FMT_RGB24’ was not declared in this scope zm_ffmpeg_camera.cpp:135: error: ‘avpicture_get_size’ was not declared in this scope zm_ffmpeg_camera.cpp:138: error: ‘AVPicture’ was not declared in this scope zm_ffmpeg_camera.cpp:138: error: expected primary-expression before ‘)’ token zm_ffmpeg_camera.cpp:138: error: ‘avpicture_fill’ was not declared in this scope zm_ffmpeg_camera.cpp:141: error: ‘SWS_BICUBIC’ was not declared in this scope zm_ffmpeg_camera.cpp:141: error: ‘sws_getCachedContext’ was not declared in this scope zm_ffmpeg_camera.cpp: In member function ‘virtual int FfmpegCamera::Capture(Image&)’: zm_ffmpeg_camera.cpp:159: error: ‘AVPacket’ was not declared in this scope zm_ffmpeg_camera.cpp:159: error: expected ‘;’ before ‘packet’ zm_ffmpeg_camera.cpp:163: error: ‘mFormatContext’ was not declared in this scope zm_ffmpeg_camera.cpp:163: error: ‘packet’ was not declared in this scope zm_ffmpeg_camera.cpp:163: error: ‘av_read_frame’ was not declared in this scope zm_ffmpeg_camera.cpp:172: error: ‘mCodecContext’ was not declared in this scope zm_ffmpeg_camera.cpp:172: error: ‘mRawFrame’ was not declared in this scope zm_ffmpeg_camera.cpp:172: error: ‘avcodec_decode_video2’ was not declared in this scope zm_ffmpeg_camera.cpp:182: error: ‘mRawFrame’ was not declared in this scope zm_ffmpeg_camera.cpp:182: error: ‘mCodecContext’ was not declared in this scope zm_ffmpeg_camera.cpp:182: error: ‘mFrame’ was not declared in this scope zm_ffmpeg_camera.cpp:182: error: ‘sws_scale’ was not declared in this scope zm_ffmpeg_camera.cpp:188: error: ‘mCodecContext’ was not declared in this scope zm_ffmpeg_camera.cpp:188: error: ‘mFrame’ was not declared in this scope zm_ffmpeg_camera.cpp:193: error: ‘av_free_packet’ was not declared in this scope make[2]: *** 
[zm_ffmpeg_camera.o] Error 1 make[2]: Leaving directory `/root/cam/ZoneMinder-1.25.0/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/root/cam/ZoneMinder-1.25.0' make: *** [all] Error 2
It turns out the latest stable release of ffmpeg (1.2.2) is not compatible with ZoneMinder 1.25.0. Installing version 0.9 of ffmpeg solved this issue:
wget http://www.ffmpeg.org/releases/ffmpeg-0.9.tar.gz
tar -xzvf ffmpeg-0.9.tar.gz
cd ffmpeg-0.9
./configure --enable-gpl --enable-shared --enable-pthreads
make
make install
make install-libs
ZoneMinder compiling error: "missing binary operator before token "(""
I git-cloned the latest version of ldc2, but I don't know how to compile it on my CentOS 5 machine:
git clone --recursive git://github.com/ldc-developers/ldc
cd ldc
git submodule update --init
cmake doesn't seem to do much. Neither does cmake Unix Makefiles. Any ideas? There is no INSTALL file and the README file doesn't mention any installation instructions.
The README states the following: If you have a working C++ build environment, CMake, a current LLVM and libconfig++ (http://hyperrealm.com/libconfig/libconfig.html) available, there should be no big surprises, though.
Do you have the package cmake installed? Additionally I'd install the package group "Development Tools". On Red Hat based distros you can install groups of packages; this facility provides groups of packages that are related to a particular type of task (Development, Educational Software, Editors, ...). You can see a complete list with this command:
$ yum grouplist | less
Loaded plugins: langpacks, presto, refresh-packagekit
Adding en_US to language list
Setting up Group Process
Installed Groups:
  Administration Tools
  Arabic Support
  Armenian Support
  Assamese Support
  Authoring and Publishing
  Base
  Bengali Support
  Bhutanese Support
  Chinese Support
  Development Libraries
  Development Tools
  Dial-up Networking Support
  ...
To get all the packages typically needed to build software you'll usually need to install compilers and libraries. These can be installed with this command:
$ sudo yum groupinstall "Development Tools"
Incidentally, you can see what packages are in a group using this command:
$ yum groupinfo "Development Tools"
Loaded plugins: langpacks, presto, refresh-packagekit
Adding en_US to language list
Setting up Group Process
Group: Development Tools
 Description: These tools include core development tools such as automake, gcc, perl, python, and debuggers.
 Mandatory Packages:
   autoconf
   automake
   binutils
   bison
   flex
   gcc
   gcc-c++
   ...
cmake is part of this group:
$ yum groupinfo "Development Tools" | grep cmake
cmake
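Once cmake and the toolchain are present, a typical out-of-source CMake build of a checkout like this one looks as follows; the configure and compile steps are commented out here since they require the ldc sources and LLVM to be present:

```shell
# cd ldc                      # the checkout from the question
mkdir -p build && cd build    # keep generated files out of the source tree
# cmake ..                    # generates Unix Makefiles by default on Linux
# make -j"$(nproc)"
pwd
```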
compiling ldc2 on a centOS 5 system with no root access
When I run make for flac, I get this:
gcc: error: @LIBICONV@: No such file or directory
make[3]: *** [flac] Error 1
make[3]: Leaving directory `/home/ubuntu/flac/src/flac'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/ubuntu/flac/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/ubuntu/flac'
make: *** [all] Error 2
While aclocal.sh ran successfully (exit code 0), it produced these warnings:
configure.ac:308: warning: macro `AM_ICONV' not found in library
configure.ac:309: warning: macro `AM_LANGINFO_CODESET' not found in library
configure.ac:308: warning: macro `AM_ICONV' not found in library
configure.ac:309: warning: macro `AM_LANGINFO_CODESET' not found in library
I tried looking for an iconv.h, or for an iconv-dev package in my distro, and I couldn't find one. How do I resolve this problem?
This bug is documented here; however, none of the suggested fixes worked for me. It's not a header file you need, it's a macro file: namely iconv.m4. On Ubuntu you can see what provides this file:
$ apt-file search iconv.m4
gettext: /usr/share/aclocal/iconv.m4
gnulib: /usr/share/gnulib/m4/iconv.m4
The .m4 that worked for me was the one in gettext. That's the only one I tried, because there were other obvious indicators that aclocal was being used in the build process. Simply run:
$ sudo apt-get install gettext
When compiling I get an error, `@LIBICONV@: No such file or directory`?
I have a 3rd party device driver which I am trying to cross-compile. When I build the driver everything goes smooth but I don't see any driver.ko file, however driver.o file is generated fine and I don't see any error during the build process. I have also tried with the option V=1 and I see following error echo; echo " ERROR: Kernel configuration is invalid."; echo " include/generated/autoconf.h or include/config/auto.conf are missing."; echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; echo; But my kernel configuration is correct and I have tried a simple hello world module with this configuration, in that case I can build my module but still see this error message. Also I can see both the files include/generated/autoconf.h and include/config/auto.conf in the kernel sources. Still why I am unable to build my driver module. I am cross-compiling the driver for ARM platform so /lib/modules will not be good in this environment. Secondly here is the output of the build. LD [M] /home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/os/linux/ar6000.o Building modules, stage 2. MODPOST 0 modules make[2]: Leaving directory `/home/farshad/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linux-2.6.38-imx6' As you can see above ar6000.o is built properly without any error, but why ar6000.ko is not being built otherwise it should report "MODPOST 1 modules". Since ar6000.ko is not being built at the end of the complete build process I also get the following error cp: cannot stat `/home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/os/linux/ar6000.ko': No such file or directory 2404 make[1]: *** [install] Error 1 Which is obvious. My problem is why I am not getting a ar6000.ko in the first place. 
Searching over google someone also faced this issue and mentioned that running make with sudo resolved it but it brought no luck for me! I am wandering is there any problem in my kernel configuration (i.e the driver is looking for some configuration setting which I haven't enabled in my kernel, but in that case it should give compiler error looking for required #define), the other point can be that there is a problem with the driver build system, which I am trying to figure out. My cross-compile environment is good as I am seeing exactly the same issue while building the same driver for my (Ubuntu x86) machine! Update # 1 Its a 3rd party driver package which also build other utilities along with the driver module. Here is the output of the driver module build process make CT_BUILD_TYPE=MX6Q_ARM CT_OS_TYPE=linux CT_OS_SUB_TYPE= CT_LINUXPATH=~/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linu x-2.6.38-imx6 CT_BUILD_TYPE=MX6Q_ARM CT_CROSS_COM PILE_TYPE=~/bin/mgc/CodeSourcery/Sourcery_CodeBench_for_ARM_GNU_Linux/bin/arm-none-linux- gnueabi- CT_ARCH_CPU_TYPE=arm CT_HC_DRIVERS=pci_std/ CT_MAKE_INCLUDE_OVERRIDE= CT_BUILD_OUTPUT_OVERRIDE=/home/far shad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553 /imx6build/host/.output/MX6Q_ARM-SDIO/image -C /home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux _release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/sdiostack/src default make[3]: Entering directory `/home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olc a3.1RC_553/imx6build/host/sdiostack/src' make -C ~/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linux-2.6.38-imx6 SUBDIRS=/home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca 3.1RC_553/imx6build/host/sdiostack/src ARCH=arm CROSS_COMPILE=~/bin/mgc/CodeSourcery/Sourcery_CodeBench_for_ARM_GNU_Linux/bin/arm-none-linux-gnueabi- 
EXTRA_CFLAGS="-DLINUX -I/home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/sdiostack/src/include -DDEBUG" modules
make[4]: Entering directory `/home/farshad/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linux-2.6.38-imx6'
  Building modules, stage 2.
  MODPOST 0 modules
make[4]: Leaving directory `/home/farshad/Work/CSP/projects/phase_1/farshad/cspbox/platform/imx6/mel5/fs/workspace/linux-2.6.38-imx6'

Here is the Makefile of the driver module.

ifdef CT_MAKE_INCLUDE_OVERRIDE
-include $(CT_MAKE_INCLUDE_OVERRIDE)
else
-include localmake.$(CT_OS_TYPE).inc
-include localmake.$(CT_OS_TYPE).private.inc
endif

export CT_OS_TYPE
export CT_OS_SUB_TYPE
export CT_OS_TOP_LEVEL_RULE
export CT_PASS_CFLAGS
export CT_SRC_BASE
export CT_BUILD_SUB_PROJ

# this makefile can only be invoked from the /EMSDIO/src base
CT_SRC_BASE :=$(shell pwd)

# export flags for which HCDs to build. Set the hcd driver name in hcd/ in your localmake.*.inc file.
export CT_HC_DRIVERS
export PDK_BUILD
export HDK_BUILD
export ALL_BUILD
export ATHRAW_FD_BUILD
export BUS_BUILD

# For Linux
ifeq ($(CT_OS_TYPE),linux)

#make a copy for linux 2.4
EXTRA_CFLAGS += -DLINUX -I$(CT_SRC_BASE)/include
ifneq ($(CT_RELEASE),1)
EXTRA_CFLAGS += -DDEBUG
endif
export EXTRA_CFLAGS

CT_SRC_OUTPUT :=$(CT_SRC_BASE)/../output

ifdef CT_BUILD_OUTPUT_OVERRIDE
_CT_COMPILED_OBJECTS_PATH :=$(CT_BUILD_OUTPUT_OVERRIDE)
_MAKE_OUTPUT_DIR :=
_CLEAN_OUTPUT_DIR :=
else
_CT_COMPILED_OBJECTS_PATH := $(CT_SRC_OUTPUT)/$(CT_BUILD_TYPE)
_MAKE_OUTPUT_DIR := mkdir --parents $(_CT_COMPILED_OBJECTS_PATH)
_CLEAN_OUTPUT_DIR := rm -R -f $(CT_SRC_OUTPUT)
endif

ifeq ($(CT_OS_SUB_TYPE),linux_2_4)
CT_PASS_CFLAGS := $(EXTRA_CFLAGS)
_CT_MOD_EXTENSION :=o
ifeq ($(ALL_BUILD),1)
subdir-m += busdriver/ lib/ hcd/ function/
else
ifeq ($(BUS_BUILD),1)
subdir-m += busdriver/ lib/ hcd/
else
ifeq ($(PDK_BUILD),1)
subdir-m += function/
else
ifeq ($(HDK_BUILD),1)
subdir-m += hcd/ function/
endif
endif
endif
endif
# add in rules to make modules
CT_OS_TOP_LEVEL_RULE :=$(CT_LINUXPATH)/Rules.make
include $(CT_OS_TOP_LEVEL_RULE)
else
#2.6+
_CT_MOD_EXTENSION :=ko
ifeq ($(ALL_BUILD),1)
obj-m += busdriver/ lib/ hcd/ function/
else
ifeq ($(BUS_BUILD),1)
obj-m += busdriver/ lib/ hcd/
else
ifeq ($(PDK_BUILD),1)
obj-m += function/
else
ifeq ($(HDK_BUILD),1)
obj-m += hcd/ function/
endif
endif
endif
endif
endif

ifdef CT_BUILD_SUB_PROJ
_CT_SUBDIRS=$(CT_BUILD_SUB_PROJ)
else
_CT_SUBDIRS=$(CT_SRC_BASE)
endif

ifdef CT_CROSS_COMPILE_TYPE
CT_MAKE_COMMAND_LINE=$(CT_OUTPUT_FLAGS) -C $(CT_LINUXPATH) SUBDIRS=$(_CT_SUBDIRS) ARCH=$(CT_ARCH_CPU_TYPE) CROSS_COMPILE=$(CT_CROSS_COMPILE_TYPE)
else
CT_MAKE_COMMAND_LINE=$(CT_OUTPUT_FLAGS) -C $(CT_LINUXPATH) SUBDIRS=$(_CT_SUBDIRS)
endif

makeoutputdirs:
	$(_MAKE_OUTPUT_DIR)

default: makeoutputdirs
	echo " ************ BUILDING MODULE ************** "
	$(MAKE) $(CT_MAKE_COMMAND_LINE) EXTRA_CFLAGS="$(EXTRA_CFLAGS)" modules
	echo " *** MODULE EXTENSION = $(_CT_MOD_EXTENSION)"
	$(CT_SRC_BASE)/../scripts/getobjects.scr $(CT_SRC_BASE) $(_CT_COMPILED_OBJECTS_PATH) $(_CT_MOD_EXTENSION)

ifeq ($(CT_OS_SUB_TYPE),linux_2_4)
# on 2.4 we can't invoke the linux clean with SUBDIRS, it will just clean out the kernel
clean:
	find $(_CT_SUBDIRS) \( -name '*.[oas]' -o -name core -o -name '.*.flags' -o -name '.ko' -o -name '.*.cmd' \) -type f -print \
		| grep -v lxdialog/ | xargs rm -f
	$(_CLEAN_OUTPUT_DIR)
else
clean:
	$(MAKE) $(CT_MAKE_COMMAND_LINE) clean
	find $(_CT_SUBDIRS) \( -name '*.[oas]' -o -name core -o -name '.*.flags' \) -type f -print \
		| grep -v lxdialog/ | xargs rm -f
	$(_CLEAN_OUTPUT_DIR)
endif

endif

# For QNX
ifeq ($(CT_OS_TYPE),qnx)
LIST=VARIANT
EARLY_DIRS=lib
##ifndef QRECURSE
QRECURSE=./recurse.mk
##ifdef QCONFIG
###QRDIR=$(dir $(QCONFIG))
##endif
##endif
include $(QRDIR)$(QRECURSE)
endif
Ok, I have figured out the problem. I had a square-bracket character "[" in the module source directory:

LD [M] /home/farshad/Work/CSP/boards/imx6q/ar6k3/ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553/imx6build/host/os/linux/ar6000.o

Removing this character from the path worked well and I got my kernel module object files. I renamed ar6003_3.1_RC_Linux_release_[posted_2011_8_19_olca3.1RC_553 to ar6003, and also tested with ar6003_3.1_RC_Linux_release_posted_2011_8_19_olca3.1RC_553; both worked fine. I was building on Ubuntu 10.04. A colleague of mine built from the same sources, with the "[" character in his path, on Ubuntu 11.04, and the kernel module object file built nicely. This also suggests changed behavior among different versions of a utility such as grep, find or awk that the kernel build system uses, resulting in this issue. Regards, Farrukh Arshad.
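The failure mode can be reproduced in isolation: "[" starts a bracket expression in the patterns that grep (and find) interpret, so build scripts that feed the source path into such a pattern break. A small hedged demonstration (the path fragment below is made up, not the real build script):

```shell
# An unmatched '[' makes the path an invalid grep pattern; GNU grep rejects
# it outright, so scripts grepping for files under the source tree fail:
echo 'release_[posted/host/os/linux/ar6000.o' | grep 'release_[posted' 2>/dev/null \
    || echo "grep rejected the pattern"

# Matching the path as a literal string (-F), or renaming the directory,
# sidesteps the problem entirely:
echo 'release_[posted/host/os/linux/ar6000.o' | grep -F 'release_[posted' \
    && echo "literal match works"
```

This is consistent with a grep/find version difference explaining why the same path built fine on Ubuntu 11.04.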
Building kernel module
1,472,033,810,000
I am trying to compile the last development version of YAP Prolog on OSX (Mountain Lion). The first time I tried, I saw this message:

##################################################################
# ERROR: Could not find library archive (-larchive). Dropped
# library(archive). Library archive is available from
# http://code.google.com/p/libarchive/
#
# Most Unix/Linux distributions are shipped with binaries. Make
# sure to have the development library installed. E.g.
#
# Debian/Ubuntu/Mint: aptitude install libarchive-dev
# Fedora/...          yum install libarchive-devel
# MacOS (Macports):   port install libarchive
##################################################################

So I installed libarchive using MacPorts as suggested, with sudo port install libarchive. The installation was successful. However, after compiling again it keeps saying that libarchive is missing. I tried to find a libarchive file on my system and I found an alias /opt/local/lib/libarchive.dylib pointing to /opt/local/lib/libarchive.2.dylib. Just in case, I set the environment variable DYLD_LIBRARY_PATH to /opt/local/lib, but the problem is still there. Does someone have a clue how I can solve this?
The -devel packages usually contain header files, pkgconfig data and similar - anything one would need to link an application against the library in question. I'm not sure how ports work, but check /opt/local (or /opt/local/include) for archive.h and archive_entry.h. Without these files you won't be able to compile the application.

Since the path sounds rather non-standard (/opt/local/...), you will likely need to tell the build system that it should look for the libraries and headers in that particular directory.

The basic generic layout of files on unix-like systems these days is governed by the Filesystem Hierarchy Standard. The most important parts are as follows:

PREFIX
|-- bin
|-- etc
|-- include
|-- lib
|-- sbin
`-- share

bin and sbin hold binaries (the programs you run) - this is why these directories are usually mentioned in the $PATH shell variable. The s in sbin used to stand for static as in statically linked binary, which doesn't need any dynamic linking and can basically be run "as-is". lib (and/or lib64 or even lib32) holds the shared (and possibly also static) libraries. include contains header files enabling linking your code against libraries (basically API definitions). etc and share are for configuration and additional data files.

PREFIX is usually /usr, /usr/local, /opt or /opt/<something>, but you can as well create such a structure in your home directory, for example.

How to tell the build system where to look for the libraries depends on which build system the code uses. Usually this very kind of information is placed in either the README or INSTALL file that accompanies the source. For example for GNU autotools, it is usually in the form of --with-name=PREFIX or --with-name-lib=PREFIX/lib --with-name-include=PREFIX/include arguments passed to the configure script.
If this isn't available, you might want to explicitly export variables used by the compiler and linker:

$ export CFLAGS="-IPREFIX/include $CFLAGS"
$ export LDFLAGS="-LPREFIX/lib $LDFLAGS"

In your case this would be -I/opt/local/include and -L/opt/local/lib respectively.
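Filled in for the question's MacPorts prefix, the generic pattern looks like this (only /opt/local comes from the question; the rest is the usual configure-time idiom):

```shell
# Prepend the MacPorts prefix to the compiler and linker search paths
# before re-running configure; ${VAR:-} keeps this safe when unset.
PREFIX=/opt/local
export CFLAGS="-I$PREFIX/include ${CFLAGS:-}"
export LDFLAGS="-L$PREFIX/lib ${LDFLAGS:-}"
echo "CFLAGS=$CFLAGS"
echo "LDFLAGS=$LDFLAGS"
# ./configure && make   # re-run the build with these in the environment
```

The flags must be in the environment of the configure run itself, since that is where the failing -larchive link test happens.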
cannot find libarchive when compiling YAP prolog in OSX
1,472,033,810,000
I installed Catalyst, so I needed to downgrade from xorg-server-1.12 to xorg-server-1.11. Now I use the [xorg111] repo, and I understand that because udev was built to work with the new xorg-server, I have to recompile it so that it works with the old one. I don't really know how to do that. Q: How do I do the manual compilation? Here is the thread in the Arch forum, if this helps.
Solution A: use ARM

Find and download the proper packages (including dependencies) here, and use pacman -U XX.xz to roll back: http://arm.konnichi.com/search/index.php?a=32&q=xorg-server&core=1&extra=1&community=1

Solution B: build from source

Clone this repository: git://pkgbuild.com/aur-mirror.git and find the old version of the package you need, then use makepkg to build the Arch package and install it with pacman -U XX.xz.

Get ready for damaging your system ;-P
How to recompile my xorg-server in ArchLinux
1,472,033,810,000
My Xen dom0 is a Gentoo x64 pvops. I boot my guest Gentoo system in PV mode with the same kernel my dom0 uses. When I emerge in the guest system, building a C++ package, I am experiencing low CPU utilization. From the System Monitor tool on dom0, I see the CPU utilization is about 12% for both cores. But in the guest, the system is almost hung. Building a package takes forever.
You can start by setting vcpus in the guest config:

vcpus = <number of virtual cpu cores>

You could also consider pinning some vcpus to the guest:

vcpu-set domain-id vcpu-count
Enables the vcpu-count virtual CPUs for the domain in question. Like mem-set, this command can only allocate up to the maximum virtual CPU count configured at boot for the domain. If the vcpu-count is smaller than the current number of active VCPUs, the highest number VCPUs will be hotplug removed. This may be important for pinning purposes. Attempting to set the VCPUs to a number larger than the initially configured VCPU count is an error. Trying to set VCPUs to < 1 will be quietly ignored. Some guests may need to actually bring the newly added CPU online after vcpu-set; go to the SEE ALSO section for information.

vcpu-list [domain-id]
Lists VCPU information for a specific domain. If no domain is specified, VCPU information for all domains will be provided.

vcpu-pin domain-id vcpu cpus
Pins the VCPU to only run on the specific CPUs. The keyword all can be used to apply the cpus list to all VCPUs in the domain. Normally VCPUs can float between available CPUs whenever Xen deems a different run state is appropriate. Pinning can be used to restrict this, by ensuring certain VCPUs can only run on certain physical CPUs.

http://xenbits.xen.org/docs/unstable/man/xl.1.html#domain_subcommands
http://xenbits.xen.org/docs/unstable/man/xl.1.html#cpupools_commands
http://wiki.xen.org/wiki/Credit_Scheduler

Finally, there have been several articles on the Xen blog recently regarding scheduling, NUMA, and cpupools:
http://blog.xen.org/index.php/2012/04/26/numa-and-xen-part-1-introduction/
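As a sketch, the relevant lines in a guest config could look like this; the file name and the two-core values are assumptions based on the dual-core host in the question:

```
# /etc/xen/gentoo-guest.cfg  (file name and values are examples)
vcpus    = 2        # let the guest schedule work on both virtual CPUs
maxvcpus = 2        # upper bound for later "xl vcpu-set"
cpus     = "0-1"    # optional: pin the guest's vcpus to physical CPUs 0 and 1
```

At runtime, xl vcpu-set gentoo-guest 2 and xl vcpu-pin gentoo-guest all 0-1 make the same adjustments to an already-running domain.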
How to increase Xen guest CPU utilization?
1,472,033,810,000
I have a bug in a driver (iwlwifi/iwlagn), which I have reported, and the developers are asking me to "build the driver with debug options enabled." More specifically: Debugging output is enabled when compiling the driver with CONFIG_IWLWIFI_DEBUG set to "y". I do have the source. How do I put that option in when compiling?
This debugging option, CONFIG_IWLWIFI_DEBUG, enables debugging output from your WiFi card's driver. You have to enable it by adding a line to the .config file in the /usr/src/linux-headers-(kernel-version) directory:

CONFIG_IWLWIFI_DEBUG=y

This .config file contains all the kernel options you want to use while compiling the kernel.
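The edit itself is mechanical; the sed step below mimics what the kernel's scripts/config helper does, on a throwaway copy of .config rather than the real one (in an actual tree, scripts/config --enable IWLWIFI_DEBUG followed by make oldconfig and a module rebuild would do the same):

```shell
# Flip the option in a demo copy of .config: the "is not set" comment line
# becomes an explicit =y assignment, exactly as scripts/config would do.
cp /dev/null /tmp/config.demo
echo '# CONFIG_IWLWIFI_DEBUG is not set' >> /tmp/config.demo
sed -i 's/^# CONFIG_IWLWIFI_DEBUG is not set$/CONFIG_IWLWIFI_DEBUG=y/' /tmp/config.demo
grep '^CONFIG_IWLWIFI_DEBUG' /tmp/config.demo
```

After the real .config is changed, rebuilding the iwlwifi module against it is what actually produces the debug-enabled driver.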
Compiling a kernel module with some options
1,472,033,810,000
When I cloned hulahop I had a few possibilities to install it. I have chosen the path with autogen.sh. I used commands:

$ sh autogen.sh   # -> OK
$ ./configure     # -> OK

But make fails:

$ sudo make 2> errors.txt
$ cat errors.txt
hulahop.cpp:28:29: error: pyxpcom/PyXPCOM.h: No such file or directory
In file included from /usr/include/python2.6/Python.h:8,
                 from hulahop-web-view.h:23,
                 from hulahop.h:23,
                 from hulahop.cpp:31:
/usr/include/python2.6/pyconfig.h:395:1: warning: "HAVE_LONG_LONG" redefined
In file included from /usr/include/xulrunner-1.9.2.24/nspr/prtypes.h:58,
                 from /usr/include/xulrunner-1.9.2.24/nscore.h:51,
                 from /usr/include/xulrunner-1.9.2.24/nsDebug.h:42,
                 from /usr/include/xulrunner-1.9.2.24/nsCOMPtr.h:59,
                 from hulahop.cpp:20:
/usr/include/xulrunner-1.9.2.24/nspr/prcpucfg.h:807:1: warning: this is the location of the previous definition
In file included from /usr/include/python2.6/Python.h:8,
                 from hulahop-web-view.h:23,
                 from hulahop.h:23,
                 from hulahop.cpp:31:
/usr/include/python2.6/pyconfig.h:1031:1: warning: "_POSIX_C_SOURCE" redefined
In file included from /usr/include/sys/types.h:27,
                 from /usr/include/xulrunner-1.9.2.24/nspr/obsolete/protypes.h:79,
                 from /usr/include/xulrunner-1.9.2.24/nspr/prtypes.h:517,
                 from /usr/include/xulrunner-1.9.2.24/nscore.h:51,
                 from /usr/include/xulrunner-1.9.2.24/nsDebug.h:42,
                 from /usr/include/xulrunner-1.9.2.24/nsCOMPtr.h:59,
                 from hulahop.cpp:20:
/usr/include/features.h:158:1: warning: this is the location of the previous definition
In file included from /usr/include/python2.6/Python.h:8,
                 from hulahop-web-view.h:23,
                 from hulahop.h:23,
                 from hulahop.cpp:31:
/usr/include/python2.6/pyconfig.h:1040:1: warning: "_XOPEN_SOURCE" redefined
In file included from /usr/include/sys/types.h:27,
                 from /usr/include/xulrunner-1.9.2.24/nspr/obsolete/protypes.h:79,
                 from /usr/include/xulrunner-1.9.2.24/nspr/prtypes.h:517,
                 from /usr/include/xulrunner-1.9.2.24/nscore.h:51,
                 from /usr/include/xulrunner-1.9.2.24/nsDebug.h:42,
                 from /usr/include/xulrunner-1.9.2.24/nsCOMPtr.h:59,
                 from hulahop.cpp:20:
/usr/include/features.h:160:1: warning: this is the location of the previous definition
hulahop.cpp: In function ‘HulahopWebView* hulahop_get_view_for_window(PyObject*)’:
hulahop.cpp:101: error: ‘Py_nsISupports’ has not been declared
hulahop.cpp:112: error: ‘do_GetService’ was not declared in this scope
make[1]: *** [hulahop.lo] Error 1
make: *** [all-recursive] Error 1

Could you help me with compilation?
error: pyxpcom/PyXPCOM.h: No such file or directory You need PyXPCOM. It's not currently in Ubuntu. There are a couple of old PyXPCOM PPAs, you could try them, but both haven't been updated since maverick so they might not work. Otherwise, build PyXPCOM from source. But first, check if PyXPCOMext (which you can get in binary form) is sufficient for your purposes.
Make errors when compiling hulahop
1,306,699,713,000
Basically I need this specific version of the Tanuki Java Service Wrapper to run a specific Java application. I downloaded the source from the Tanuki website and I'm trying to compile it from source. This is on a Debian Linux system with armv5tel architecture. It uses Ant, and there is a build.sh script that invokes a copy of Ant provided in the source. However, the compilation fails with this message:

home/build/wrapper_3.1.2_src/build.xml:263: Error starting javah: java.lang.NoSuchMethodException: com.sun.tools.javah.Main.<init>([Ljava.lang.String;)
The solution was to use a later version of ant (the one I installed via Debian) instead of the copy provided with the source package.
Issues compiling Tanuki Java Service wrapper version 3.1.2 under armv5tel architecture
1,306,699,713,000
I have little experience with Linux. I am using Debian. It has the glibc library, which comes with several useful programs. iconv is the program I want to use to do several charset conversions. However, I want to create my own charset to use with a really old (1978) heavy-duty dot-matrix printer, which has a custom character set for my language: capital Latin/Greek only, 7-bit. So I should write my own module for iconv. I found out how to configure and make it, but I don't know how to do the compilation from the source .c file to a .so file. If there is no solution, I will try to build the libc library from source and copy the .so file that it should generate (I think so).
A quick search led me to this HOWTO. I have no experience with this, but your command will look similar to this:

gcc -shared -Wl,-soname,your_soname -o library_name file_list library_list
How to build-compile a .c file
1,306,699,713,000
I am trying to compile the mainline Linux kernel with a custom config. This one! Running on a 64-bit system. At the last step, when linking the kernel, it fails because it goes OOM (error 137).

[...]
  DESCEND objtool
  INSTALL libsubcmd_headers
  CALL    scripts/checksyscalls.sh
  LD      vmlinux.o
Killed
make[2]: *** [scripts/Makefile.vmlinux_o:61: vmlinux.o] Error 137
make[2]: *** Deleting file 'vmlinux.o'
[...]

ulimit -a says that per-process memory is unlimited. I have tried make, make -j1, and make -j4, no difference whatsoever. Same results with gcc as compiler instead of clang. Does anyone have a freaking clue on why the compilation eats up so much RAM? It's getting unaffordable to develop Linux :\
It's getting unaffordable to develop Linux

I am afraid it has always been. 32GB RAM is common on kernel devs' desktops. And yet some of them started encountering OOMs when building their allyesconfig-ed kernel. Lucky you… who are apparently not allyesconfig-ing… you should not need more than 32G… ;-)

On a side note, reading CONFIG_HAVE_OBJTOOL=y as part of your .config file, you might take some benefit from the patches submitted as part of the discussion linked hereabove.

Does anyone have a freaking clue on why the compilation eats up so much RAM?

You are probably the only one who could precisely tell (after considering the size of the miscellaneous *.o files you should be able to find in each top-level directory of the kernel source distribution, since compilation was achieved successfully). From the information you provide (the kernel .config file) I can only venture a priori:

A/ Every component of your kernel will be statically linked (I notice that all your selected options are marked "=y"). There is nothing wrong with this per se, since there can be many good reasons for building everything in-kernel, but it will definitely and significantly increase the RAM needed when linking all of it together. => You probably should consider building kernel parts as modules wherever possible.

B/ A good number of CONFIG_DEBUG options appear set. Once again there is nothing wrong with that per se, however it is likely to increase significantly the RAM needed to link the different parts, not to say even more since it implies CONFIG_KALLSYMS_*=y. On a side note, considering the debugging features selected, in addition to CONFIG_HZ_100=y, I would assume that you are not after the best possible latencies/performance. => I would then consider the opportunity to prefer CONFIG_CC_OPTIMIZE_FOR_SIZE.
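One quick way to act on the *.o-size suggestion is to tally object bytes per top-level directory; the big contributors are the parts worth turning into modules or stripping of debug info. The loop below demonstrates the idea on a fake tree under /tmp (in a real checkout you would run the same loop from the kernel source root):

```shell
# Build a tiny fake source tree with two "subsystems" of known size.
mkdir -p /tmp/ktree/drivers /tmp/ktree/fs
dd if=/dev/zero of=/tmp/ktree/drivers/big.o bs=1024 count=64 2>/dev/null
dd if=/dev/zero of=/tmp/ktree/fs/small.o bs=1024 count=4 2>/dev/null

# Sum *.o bytes per top-level directory.
cd /tmp/ktree
for d in */; do
  total=$(find "$d" -name '*.o' -exec cat {} + | wc -c)
  echo "$d $total bytes"
done
```

On the demo tree this reports drivers/ at 65536 bytes and fs/ at 4096 bytes, making the dominant directory obvious at a glance.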
Linux build with custom config using all RAM (8GB)?
1,306,699,713,000
I am trying to compile coremarks to benchmark one of the CPU cores I generated (from here: https://gitlab.com/incoresemi/core-generators/benchmarks/-/tree/master). I get the following error:

In file included from common/syscalls.c:3:
/usr/lib/gcc/riscv64-unknown-elf/10.2.0/include/stdint.h:9:16: fatal error: stdint.h: No such file or directory
    9 | # include_next <stdint.h>
      |                ^~~~~~~~~~
compilation terminated.
make: *** [Makefile:58: coremarks] Error 1

I already have the RISC-V toolchain installed. I also tried installing libc6-dev and avr-libc, as I read this in a few answers online. How do I resolve this? Thanks for your time in advance.
One way to fix this error is to restrict gcc to using its own stdint-gcc.h. This can be done by adding the C compiler flag -ffreestanding to gcc. For more info on what freestanding means, visit here; on the implied -fno-builtin, visit here.
stdint.h: no such file or directory
1,306,699,713,000
> install.packages("stringi")
[...]
* installing *source* package ‘stringi’ ...
** package ‘stringi’ successfully unpacked and MD5 sums checked
** using staged installation
checking for R_HOME... /usr/lib64/R
checking for R... /usr/lib64/R/bin/R
checking for endianness... little
checking for R >= 3.1.0 for C++11 use... yes
checking for R < 3.4.0 for CXX1X flag use... no
checking for cat... /usr/bin/cat
checking for local ICUDT_DIR... icu61/data
checking for gcc... gcc -m64
checking whether the C compiler works... no
configure: error: in `/tmp/Rtmpy22IYZ/R.INSTALL22bb1549772a2/stringi':
configure: error: C compiler cannot create executables
See `config.log' for more details
ERROR: configuration failed for package ‘stringi’
* removing ‘/home/[USER]/R/x86_64-redhat-linux-gnu-library/4.0/stringi’
Warning in install.packages :
  installation of package ‘stringi’ had non-zero exit status

I'm noticing this issue after having upgraded Fedora from 32 to 33; not sure if it's related. gcc is installed. I could not find the config.log referenced in the output. Maybe it was deleted when the package was removed due to failure?

EDIT: The problem indeed seems related to the Fedora upgrade, and specifically to the new /usr/lib64/R/etc/Makeconf file. The stringi package compiled and installed properly when using Makeconf.rpmsave instead.
$ diff Makeconf.rpmsave Makeconf
7c7
< # configure '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--with-system-tre' '--with-system-valgrind-headers' '--with-lapack' '--with-blas' '--with-tcl-config=/usr/lib64/tclConfig.sh' '--with-tk-config=/usr/lib64/tkConfig.sh' '--enable-R-shlib' '--enable-prebuilt-html' '--enable-R-profiling' '--enable-memory-profiling' 'MAKEINFO=texi2any' 'rdocdir=/usr/share/doc/R' 'rincludedir=/usr/include/R' 'rsharedir=/usr/share/R' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'R_PRINTCMD=lpr' 'R_BROWSER=/usr/bin/xdg-open' 'R_PDFVIEWER=/usr/bin/xdg-open' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig' 'CC=gcc -m64' 'CFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' 'LDFLAGS=-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld' 'FC=gfortran -m64' 'FCFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection --no-optimize-sibling-calls' 'CXX=g++ -m64' 'CXXFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' 'LT_SYS_LIBRARY_PATH=/usr/lib64:'
---
> # configure '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--with-system-tre' '--with-system-valgrind-headers' '--with-lapack' '--with-blas=flexiblas' '--with-tcl-config=/usr/lib64/tclConfig.sh' '--with-tk-config=/usr/lib64/tkConfig.sh' '--enable-R-shlib' '--enable-prebuilt-html' '--enable-R-profiling' '--enable-memory-profiling' 'MAKEINFO=texi2any' 'rdocdir=/usr/share/doc/R' 'rincludedir=/usr/include/R' 'rsharedir=/usr/share/R' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'R_PRINTCMD=lpr' 'R_BROWSER=/usr/bin/xdg-open' 'R_PDFVIEWER=/usr/bin/xdg-open' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig' 'CC=gcc -m64' 'CFLAGS=-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection' 'LDFLAGS=-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' 'FC=gfortran -m64' 'FCFLAGS=-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection --no-optimize-sibling-calls' 'CXX=g++ -m64' 'CXXFLAGS=-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection' 'LT_SYS_LIBRARY_PATH=/usr/lib64:'
13c13
< BLAS_LIBS = -lopenblas
---
> BLAS_LIBS = -lflexiblas
16c16
< CFLAGS = -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection $(LTO)
---
> CFLAGS = -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection $(LTO)
23c23
< CXXFLAGS = -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection $(LTO)
---
> CXXFLAGS = -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection $(LTO)
26c26
< CXX11FLAGS = -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection $(LTO)
---
> CXX11FLAGS = -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection $(LTO)
30c30
< CXX14FLAGS = -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection $(LTO)
---
> CXX14FLAGS = -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection $(LTO)
34c34
< CXX17FLAGS = -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection $(LTO)
---
> CXX17FLAGS = -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection $(LTO)
38c38
< CXX20FLAGS = -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection $(LTO)
---
> CXX20FLAGS = -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection $(LTO)
54c54
< FCFLAGS = -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection --no-optimize-sibling-calls $(LTO_FC)
---
> FCFLAGS = -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection --no-optimize-sibling-calls $(LTO_FC)
57c57
< FFLAGS = -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection --no-optimize-sibling-calls $(LTO_FC)
---
> FFLAGS = -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection --no-optimize-sibling-calls $(LTO_FC)
65c65
< JAVAH = /bin/javah
---
> JAVAH =
70c70
< JAVA_LIBS = -L$(JAVA_HOME)/lib/amd64/server -ljvm
---
> JAVA_LIBS = -L$(JAVA_HOME)/lib/server -ljvm
73c73
< LDFLAGS = -Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld
---
> LDFLAGS = -Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld 
100c100
< SAFE_FFLAGS = -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection --no-optimize-sibling-calls -msse2 -mfpmath=sse
---
> SAFE_FFLAGS = -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection --no-optimize-sibling-calls -msse2 -mfpmath=sse
Solved by changing MAKE=${MAKE-'make -j16'} to MAKE=${MAKE-'make'} in /usr/lib64/R/etc/Renviron. I had added -j16 years ago to make system-wide use of multi-threading, and it had been fine through multiple Fedora and R upgrades until now. Multi-threading is still enabled at the user level via ~/.R/Makevars with contents MAKEFLAGS = -j16.
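For reference, the working split between the two files described above looks like this (Fedora's paths; -j16 matches the setting in question and should be adjusted to your core count):

```
# /usr/lib64/R/etc/Renviron  (system-wide; keep the make command serial)
MAKE=${MAKE-'make'}

# ~/.R/Makevars  (per-user; parallelism applies only to package compilation)
MAKEFLAGS = -j16
```

Keeping the parallelism in the per-user Makevars rather than in the system-wide MAKE command avoids passing -j into configure-time compiler checks, which is what broke here.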
Cannot install 'stringi' R package -- C compiler problem?
1,306,699,713,000
I'm trying to install mopidy-spotify on my freebox delta that allow me to install vm and is arm64 based After many problems, i've manage to get most of the dependencies working and to get rid of most of the errors. But am still struggling on libspotify when trying to compile pyspotify. I've compiled successfully (i think) libspotify on my system using the sources from that link but I'm always getting here are the log output: Obtaining file:///home/jc/pyspotify Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing wheel metadata ... done Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from pyspotify==2.1.3) (45.2.0) Requirement already satisfied: cffi>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from pyspotify==2.1.3) (1.14.0) Requirement already satisfied: pycparser in /usr/local/lib/python3.8/dist-packages (from cffi>=1.0.0->pyspotify==2.1.3) (2.20) Installing collected packages: pyspotify Running setup.py develop for pyspotify ERROR: Command errored out with exit status 1: command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/jc/pyspotify/setup.py'"'"'; __file__='"'"'/home/jc/pyspotify/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps cwd: /home/jc/pyspotify/ Complete output (22 lines): running develop running egg_info writing pyspotify.egg-info/PKG-INFO writing dependency_links to pyspotify.egg-info/dependency_links.txt writing requirements to pyspotify.egg-info/requires.txt writing top-level names to pyspotify.egg-info/top_level.txt reading manifest file 'pyspotify.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' no previously-included directories found matching 'docs/_build' no previously-included directories found matching 'examples/tmp' warning: no 
previously-included files matching '__pycache__/*' found anywhere in distribution
writing manifest file 'pyspotify.egg-info/SOURCES.txt'
running build_ext
generating cffi module 'build/temp.linux-aarch64-3.8/spotify._spotify.c'
already up-to-date
building 'spotify._spotify' extension
aarch64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.8 -c build/temp.linux-aarch64-3.8/spotify._spotify.c -o build/temp.linux-aarch64-3.8/build/temp.linux-aarch64-3.8/spotify._spotify.o
aarch64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-aarch64-3.8/build/temp.linux-aarch64-3.8/spotify._spotify.o -lspotify -o build/lib.linux-aarch64-3.8/spotify/_spotify.abi3.so
/usr/bin/ld: skipping incompatible /usr/local/lib/libspotify.so when searching for -lspotify
/usr/bin/ld: cannot find -lspotify
collect2: error: ld returned 1 exit status
error: command 'aarch64-linux-gnu-gcc' failed with exit status 1

Please feel free to ask for more information. Any clues on that?
You're most likely trying to mix 32 bit and 64 bit libraries. 32 bit applications must be linked against 32 bit libraries whereas 64 bit applications must be linked against 64 bit libraries. You can run file /usr/local/lib/libspotify.so to check if your library has been compiled for 32 bit or 64 bit. You can instruct a GCC running on a 64 bit system to compile 32 bit code by setting the following environment variables: CFLAGS=-m32 CXXFLAGS=-m32 make Also see /usr/bin/ld: skipping incompatible foo.so when searching for foo.
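Besides reading file's output, the ELF header itself records the word size: the 5th byte of any ELF object (offset 4) is 1 for 32-bit and 2 for 64-bit. A minimal sketch using /bin/ls purely as an example binary; point it at libspotify.so instead:

```shell
# EI_CLASS is the 5th byte of the ELF header: 1 = 32-bit ELF, 2 = 64-bit ELF.
cls=$(od -An -tu1 -j4 -N1 /bin/ls | tr -d ' ')
if [ "$cls" = "1" ]; then
  echo "32-bit"
else
  echo "64-bit"
fi
```

If the library's class doesn't match what your compiler emits, the linker prints exactly the "skipping incompatible ... when searching for -lspotify" message shown in the question.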
pyspotify compilation ld error
1,306,699,713,000
I have been having problems compiling a router's kernel for QEMU. I have the router working in QEMU using an OpenWRT kernel, but networking does not work. This is why I want to compile the original kernel. The below command is the problematic command that the (main) Makefile indirectly executes. I say indirectly because it doesn't even explicitly choose to execute the configure script, it just chooses to do so because it is in the directory of downloaded packages that are needed to compile the kernel. PATH=/home/debian/build-new/host/bin:/home/debian/build-new/host/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games AR="/usr/bin/ar" AS="/usr/bin/as" LD="/usr/bin/ld" NM="/usr/bin/nm" CC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" GCC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" CXX="/home/debian/build-new/host/usr/bin/ccache /usr/bin/g++" CPP="/usr/bin/cpp" CPPFLAGS="-I/home/debian/build-new/host/usr/include" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 PKG_CONFIG_ALLOW_SYSTEM_LIBS=1 PKG_CONFIG="/home/debian/build-new/host/usr/bin/pkg-config" PKG_CONFIG_SYSROOT_DIR="/" PKG_CONFIG_LIBDIR="/home/debian/build-new/host/usr/lib/pkgconfig:/home/debian/build-new/host/usr/share/pkgconfig" PERLLIB="/home/debian/build-new/host/usr/lib/perl" LD_LIBRARY_PATH="/home/debian/build-new/host/usr/lib:" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" CC="/usr/bin/gcc" ./configure --prefix="/home/debian/build-new/host/usr" --sysconfdir="/home/debian/build-new/host/etc" --enable-shared --disable-static --disable-gtk-doc --disable-doc --disable-docs --disable-documentation --with-xmlto=no 
--with-fop=no ccache_cv_zlib_1_2_3=no The flag that breaks the command is LDFLAGS. LD="/usr/bin/ld" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" The output of running the command is: debian@debian-i686:~/build-new/build/host-ccache-3.1.8$ PATH=/home/debian/build-new/host/bin:/home/debian/build-new/host/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games AR="/usr/bin/ar" AS="/usr/bin/as" LD="/usr/bin/ld" NM="/usr/bin/nm" CC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" GCC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" CXX="/home/debian/build-new/host/usr/bin/ccache /usr/bin/g++" CPP="/usr/bin/cpp" CPPFLAGS="-I/home/debian/build-new/host/usr/include" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 PKG_CONFIG_ALLOW_SYSTEM_LIBS=1 PKG_CONFIG="/home/debian/build-new/host/usr/bin/pkg-config" PKG_CONFIG_SYSROOT_DIR="/" PKG_CONFIG_LIBDIR="/home/debian/build-new/host/usr/lib/pkgconfig:/home/debian/build-new/host/usr/share/pkgconfig" PERLLIB="/home/debian/build-new/host/usr/lib/perl" LD_LIBRARY_PATH="/home/debian/build-new/host/usr/lib:" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" CC="/usr/bin/gcc" ./configure --prefix="/home/debian/build-new/host/usr" --sysconfdir="/home/debian/build-new/host/etc" --enable-shared --disable-static --disable-gtk-doc --disable-doc --disable-docs --disable-documentation --with-xmlto=no --with-fop=no ccache_cv_zlib_1_2_3=no configure: 
WARNING: unrecognized options: --enable-shared, --disable-static, --disable-gtk-doc, --disable-doc, --disable-docs, --disable-documentation, --with-xmlto, --with-fop configure: Configuring ccache checking for gcc... /usr/bin/gcc checking whether the C compiler works... no configure: error: in `/home/debian/build-new/build/host-ccache-3.1.8': configure: error: C compiler cannot create executables See `config.log' for more details Removing only the LDFLAGS fixes the particular error, but then I have another error later. debian@debian-i686:~/build-new/build/host-ccache-3.1.8$ PATH=/home/debian/build-new/host/bin:/home/debian/build-new/host/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games AR="/usr/bin/ar" AS="/usr/bin/as" LD="/usr/bin/ld" NM="/usr/bin/nm" CC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" GCC="/home/debian/build-new/host/usr/bin/ccache /usr/bin/gcc" CXX="/home/debian/build-new/host/usr/bin/ccache /usr/bin/g++" CPP="/usr/bin/cpp" CPPFLAGS="-I/home/debian/build-new/host/usr/include" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 PKG_CONFIG_ALLOW_SYSTEM_LIBS=1 PKG_CONFIG="/home/debian/build-new/host/usr/bin/pkg-config" PKG_CONFIG_SYSROOT_DIR="/" PKG_CONFIG_LIBDIR="/home/debian/build-new/host/usr/lib/pkgconfig:/home/debian/build-new/host/usr/share/pkgconfig" PERLLIB="/home/debian/build-new/host/usr/lib/perl" LD_LIBRARY_PATH="/home/debian/build-new/host/usr/lib:" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" CC="/usr/bin/gcc" ./configure --prefix="/home/debian/build-new/host/usr" --sysconfdir="/home/debian/build-new/host/etc" --enable-shared --disable-static --disable-gtk-doc --disable-doc --disable-docs --disable-documentation --with-xmlto=no --with-fop=no ccache_cv_zlib_1_2_3=no configure: WARNING: unrecognized options: --enable-shared, --disable-static, --disable-gtk-doc, --disable-doc, --disable-docs, 
--disable-documentation, --with-xmlto, --with-fop
configure: Configuring ccache
checking for gcc... /usr/bin/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... configure: error: in `/home/debian/build-new/build/host-ccache-3.1.8':
configure: error: cannot run C compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details

Removing all flags (to get the command below) allows the configure script to function perfectly.

./configure --prefix="/home/debian/build-new/host/usr" --sysconfdir="/home/debian/build-new/host/etc" --enable-shared --disable-static --disable-gtk-doc --disable-doc --disable-docs --disable-documentation --with-xmlto=no --with-fop=no ccache_cv_zlib_1_2_3=no

What the configure script is crashing on is that it is trying to find the files path/to/lib/libc.so.0 and path/to/usr/lib/uclibc_nonshared.a. The problem is that the script is looking for these libraries in /lib/ and /usr/lib/ even though the Makefile explicitly sets where it is supposed to get the libraries from. Manually symlinking the libraries from where LDFLAGS points into /lib/ and /usr/lib/ only results in the message:

/usr/bin/ld: skipping incompatible /lib/libc.so.0 when searching for /lib/libc.so.0
/usr/bin/ld: cannot find /lib/libc.so.0
/usr/bin/ld: skipping incompatible /usr/lib/uclibc_nonshared.a when searching for /usr/lib/uclibc_nonshared.a
/usr/bin/ld: cannot find /usr/lib/uclibc_nonshared.a

Also, setting the LD flag to LD="/home/debian/build-new/host/usr/bin/mips-linux-ld" does not fix the problem. How do I get the Makefile to compile properly? I left some logs and configs on GitHub's gist service.
Edit: Using @filbranden's tip I have now reached the point of getting results such as the below output: /home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/xgcc -B/home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/bin/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/lib/ -isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/include -isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/sys-include -g -Os -O2 -g -Os -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -fPIC -I. -I. -I../.././gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/. -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../include -DHAVE_CC_TLS -o _fractHADI_s.o -MT _fractHADI_s.o -MD -MP -MF _fractHADI_s.dep -DSHARED -DL_fract -DFROM_HA -DTO_DI -c /home/debian/build-new/toolchain/gcc-4.7.3/libgcc/fixed-bit.c /home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/xgcc -B/home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/bin/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/lib/ -isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/include -isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/sys-include -g -Os -O2 -g -Os -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -fPIC -I. -I. 
-I../.././gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/. -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../include -DHAVE_CC_TLS -o _fractHATI_s.o -MT _fractHATI_s.o -MD -MP -MF _fractHATI_s.dep -DSHARED -DL_fract -DFROM_HA -DTO_TI -c /home/debian/build-new/toolchain/gcc-4.7.3/libgcc/fixed-bit.c /home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/xgcc -B/home/debian/build-new/toolchain/gcc-4.7.3-intermediate/./gcc/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/bin/ -B/home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/lib/ -isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/include -isystem /home/debian/build-new/host/usr/mips-buildroot-linux-uclibc/sys-include -g -Os -O2 -g -Os -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -fPIC -I. -I. -I../.././gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/. -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../gcc -I/home/debian/build-new/toolchain/gcc-4.7.3/libgcc/../include -DHAVE_CC_TLS -o _fractHASF_s.o -MT _fractHASF_s.o -MD -MP -MF _fractHASF_s.dep -DSHARED -DL_fract -DFROM_HA -DTO_SF -c /home/debian/build-new/toolchain/gcc-4.7.3/libgcc/fixed-bit.c This compilation has been running for the past 17-18 hours now (and has not crashed or done anything else to indicate that an error may have occurred). It does seem a bit weird that it is still working on fixed-bit.c, but maybe that's normal?
Using @filbranden's comment, I was able to compile the kernel for my router (there are more errors that need to be solved, but that isn't the scope of this question). I left a log and my config of what I was doing to compile the kernel on GitHub gist (new logs and config). The config is broken in ways that won't be apparent until the stage of actually compiling Linux, but the solutions are simple. I ran the below commands to compile the kernel:

make menuconfig O=~/build-new/ RANLIB="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ranlib" READELF="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-readelf" OBJDUMP="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-objdump" AR="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ar" AS="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-as" LD="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-gcc" NM="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-nm" CC="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-gcc" GCC="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/gcc" CXX="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/g++" CPP="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-cpp" CPPFLAGS="-I/home/debian/build-new/host/usr/include" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib"

make autoconf O=~/build-new/
RANLIB="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ranlib" READELF="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-readelf" OBJDUMP="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-objdump" AR="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ar" AS="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-as" LD="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-gcc" NM="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-nm" CC="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-gcc" GCC="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/gcc" CXX="/home/debian/bin-new/ccache-3.1.8/ccache /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/g++" CPP="/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-cpp" CPPFLAGS="-I/home/debian/build-new/host/usr/include" CFLAGS="-O2 -I/home/debian/build-new/host/usr/include" CXXFLAGS="-O2 -I/home/debian/build-new/host/usr/include" LDFLAGS="-L/home/debian/build-new/host/lib -L/home/debian/build-new/host/usr/lib -Wl,-rpath,/home/debian/build-new/host/usr/lib" make O=~/build-new/ Given the below error, I had to still symlink some binaries before running the make command: Checking for C compiler ... none found ERROR: no C compiler found Checking for linker ... '/home/debian/build-new/host/usr/bin/mips-buildroot-linux-uclibc-gcc' not found (user) ERROR: no linker found Checking for ar ... /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ar () Checking for ranlib ... 
/home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ranlib ()
Checking for readelf ... none found
ERROR: no readelf found
Checking for objdump ... /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-objdump ()

The binaries I symlinked are below. I put a pound sign in front of the ones I don't think matter if symlinked (because the path was never able to be set by prefixing it to the make command).

mkdir -p /home/debian/build-new/host/usr/bin/
cp -r /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/* /home/debian/build-new/host/usr/bin/
cd /home/debian/build-new/host/usr/bin/
ln -s mips-linux-uclibc-gcc mips-buildroot-linux-uclibc-gcc
#ln -s /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-ranlib mips-buildroot-linux-uclibc-ranlib
#ln -s /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-readelf mips-buildroot-linux-uclibc-readelf
#ln -s /home/debian/new-kernel/sagem/uclibc-crosstools-gcc-4.2.3-3/usr/bin/mips-linux-uclibc-objdump mips-buildroot-linux-uclibc-objdump

The config and commands here are still broken when it comes to building the kernel itself, but they function well enough to get past the stage of this error. I managed to successfully compile the kernel this morning (it took more than 24 hours of pure compile time), but after booting it in QEMU and mounting the filesystem I copied from my router, I came to realize I chose the wrong byte order (I chose LSB instead of MSB, i.e. little endian instead of big endian). Otherwise, I have successfully compiled the kernel using @filbranden's help.
Why does my Makefile not compile and how can I fix it?
1,549,357,740,000
System: Linux Mint 19 Cinnamon 64-bit, based on Ubuntu 18.04.
Pidgin: built from source, version 2.13.0.
Purple Facebook: I'd like to build it from source, version 0.9.5, but I get a missing-package error and cannot locate the package.

$ ./configure
...
checking for json-glib-1.0 >= 0.14.0... no
configure: error: Package requirements (json-glib-1.0 >= 0.14.0) were not met:

No package 'json-glib-1.0' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables JSON_CFLAGS
and JSON_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

Upon searching for the package, I get similarly named results:

$ apt-cache policy json-glib-1.0
libjson-glib-1.0-0:
  Installed: 1.4.2-3
  Candidate: 1.4.2-3
  Version table:
 *** 1.4.2-3 500
        500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
        100 /var/lib/dpkg/status
libjson-glib-1.0-common:
  Installed: 1.4.2-3
  Candidate: 1.4.2-3
  Version table:
 *** 1.4.2-3 500
        500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
        500 http://archive.ubuntu.com/ubuntu bionic/main i386 Packages
        100 /var/lib/dpkg/status
I was missing a single development package:

sudo apt-get install libjson-glib-dev
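Configure's check goes through pkg-config, which looks for a file named json-glib-1.0.pc; the runtime packages (libjson-glib-1.0-0) ship only the shared library, while the -dev package ships the headers and that .pc file. A minimal sketch of what pkg-config actually reads; the .pc file below is a fabricated stand-in written to a temp directory, not the real one installed by libjson-glib-dev:

```shell
# Write a fake json-glib-1.0.pc (illustration only) and pull out the Version
# field that configure's ">= 0.14.0" requirement is compared against.
dir=$(mktemp -d)
cat > "$dir/json-glib-1.0.pc" <<'EOF'
Name: JSON-GLib
Description: JSON Parser for GLib
Version: 1.4.2
Libs: -ljson-glib-1.0
Cflags: -I/usr/include/json-glib-1.0
EOF
ver=$(sed -n 's/^Version: //p' "$dir/json-glib-1.0.pc")
echo "$ver"
```

With the -dev package installed, pkg-config --modversion json-glib-1.0 reports the same field from the real file.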
Mint 19 - Pidgin IM Purple plugin - Error while configuring: No package 'json-glib-1.0' found
1,549,357,740,000
I downloaded the xfstk source and built it. I installed dependencies such as boost, libusb-devel, etc., but although boost is installed, I get error messages, such as the one below, reporting that it is not installed.

...some output code here
[ 0%] Built target docs
[ 1%] Built target xfstk-command-line
[ 2%] Automatic MOC for target XfstkFactory
[ 2%] Built target XfstkFactory_autogen
[ 39%] Built target XfstkFactory
[ 40%] Automatic MOC for target xfstk-dldr-api
[ 40%] Built target xfstk-dldr-api_autogen
[ 40%] Linking CXX shared library libxfstk-dldr-api.so
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/8/../../../libboost_program_options.so when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/8/../../../libboost_program_options.a when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible //lib/libboost_program_options.so when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible //lib/libboost_program_options.a when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible //usr/lib/libboost_program_options.so when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible //usr/lib/libboost_program_options.a when searching for -lboost_program_options
/usr/bin/ld: cannot find -lboost_program_options
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/8/../../../libboost_program_options.so when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/8/../../../libboost_program_options.a when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible //lib/libboost_program_options.so when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible //lib/libboost_program_options.a when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible //usr/lib/libboost_program_options.so when searching for -lboost_program_options
/usr/bin/ld: skipping incompatible //usr/lib/libboost_program_options.a when searching for -lboost_program_options
/usr/bin/ld: cannot find -lboost_program_options
collect2: error: ld returned 1 exit status
make[2]: *** [ancillary/configure/api/downloader-api/CMakeFiles/xfstk-dldr-api.dir/build.make:137: ancillary/configure/api/downloader-api/libxfstk-dldr-api.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:366: ancillary/configure/api/downloader-api/CMakeFiles/xfstk-dldr-api.dir/all] Error 2
make: *** [Makefile:152: all] Error 2
[frogwine@leopardpro build]$
You typically have two paths to take when you're trying to build software on Linux distros.

Options

1. Rely on the package manager of the Linux distro to do the heavy lifting for you.
2. Incorporate your self-compiled libraries into the linker's path so that build/config tools are aware of them.

Option 1

For number 1, you can install boost using your distro's package manager. I'm more familiar with Red Hat distros, and for these you'd do this:

$ sudo yum search boost | grep ^boost | head -10
boost-atomic.i686 : Run-Time component of boost atomic library
boost-atomic.x86_64 : Run-Time component of boost atomic library
boost-chrono.i686 : Run-Time component of boost chrono library
boost-chrono.x86_64 : Run-Time component of boost chrono library
boost-context.i686 : Run-Time component of boost context switching library
boost-context.x86_64 : Run-Time component of boost context switching library
boost-date-time.i686 : Run-Time component of boost date-time library
boost-date-time.x86_64 : Run-Time component of boost date-time library
boost-devel.i686 : The Boost C++ headers and shared development libraries
boost-devel.x86_64 : The Boost C++ headers and shared development libraries

And then install whatever you need from this output:

$ sudo yum install -y boost boost-devel
....

Option 2

For number 2, I've covered this already in this U&L Q&A titled: Confusion about linking boost library while compilation.
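For option 2, the general shape is to export the self-compiled prefix before configuring. A sketch, with /opt/boost standing in for wherever your own Boost build landed (the path is an assumption):

```shell
# Link time: -L tells ld where to look; -Wl,-rpath bakes the same path into
# the resulting binary so the library is also found at run time.
export LDFLAGS="-L/opt/boost/lib -Wl,-rpath,/opt/boost/lib"
# Compile time: headers for the self-compiled Boost.
export CPPFLAGS="-I/opt/boost/include"
# Run-time fallback for anything not built with the rpath.
export LD_LIBRARY_PATH="/opt/boost/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LDFLAGS"
```

Run ./configure (or cmake) in the same shell so the build picks these up.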
Building XFSTK error on Fedora 28 : /usr/bin/ld: cannot find -lboost_program_options
1,549,357,740,000
I've stumbled across the discontinued bzip-0.21 on the FTP of vim.org, which is respectable, so I think it's fine. However, the code from 1996 won't compile anymore with today's GCC. Is there a chance of getting it running on a modern device that requires the binary to be compiled for armhf? I'm merely an enthusiast for old compression algorithms. The Windows binary that was in the .tar.gz from the vim archive actually yielded smaller files than bzip2 when I ran a test, which is pretty impressive for such old code.

greetings gavery

Edit: the error was undefined reference to 'minUInt32': references inside the C file itself were not recognized by gcc in my case.
Solved by cross-compiling from a desktop PC.

Get the requirements:

sudo apt-get install build-essential g++-arm-linux-gnueabihf gdb-multiarch -y

Build the object code and compile:

ran@compilestation:~$ arm-linux-gnueabihf-gcc -O3 -g3 -Wall -fPIC -c -o bzip1.o bzip.c
ran@compilestation:~$ arm-linux-gnueabihf-gcc -o "bzip1" bzip1.o

Verify your work:

ran@compilestation:~$ file bzip1
bzip1: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=4cf52bccf817e031556bab923696f4677b00d29b, not stripped

(thanks to Stephen for mental support)
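Besides file, the target architecture can be read straight from the ELF header: e_machine is the 16-bit value at offset 18 (40 = ARM, 62 = x86-64, 8 = MIPS, 183 = AArch64). A sketch using /bin/ls as a stand-in for the cross-compiled bzip1; it assumes a little-endian ELF read on a little-endian host, which holds for armhf binaries inspected on x86:

```shell
# Read e_machine (2 bytes at offset 18) from an ELF file; a successful armhf
# cross build reports 40 here.
machine=$(od -An -tu2 -j18 -N2 /bin/ls | tr -d ' ')
echo "e_machine=$machine"
```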
Compiling bzip1 on a raspberry pi 2 with debian server?
1,549,357,740,000
After reading over the great Arch Linux guide for the Dell XPS 13 9560, I am curious about using the patch mentioned for enabling low-power modes on NVMe SSDs. To quote the repo that contains the patch's source:

To manually compile the Archlinux kernels from here, follow steps:
(1) git clone: https://github.com/damige/linux-nvme.git
(2) go into /src/[kernel you want]
(3) type "makepkg" -wait until compilation completes-
(4) pacman -U linux-nvme-*
(5) Adjust your bootloader to boot linux-nvme

I was wondering if it would be possible to compile this for Elementary OS (Ubuntu-based). Would I need to reinstall from scratch? Any advice on implementing this in other distros would be greatly appreciated.
So, jasonwryan's comment was basically the answer I was looking for here. The patch is applied to a Linux kernel, which can then be compiled and used on any distro. He linked me to a couple of great articles, including the same Arch Linux guide on kernels. When I asked this question I didn't have enough knowledge about the kernel to understand how this patch is applied, but thanks to him for putting me on the right track.
Using Andy Lutomirski's Arch Linux NVMe patch with another distro
1,464,367,064,000
My system has an old glibc version. I compiled the new version [2.23] into /FaF/glibc. Because the new glibc version is not compatible with SLES 11 SP2 & SP3, I have to use the linker switch [--rpath=/FaF/glibc/lib] so that the new version is used in my programs. This works perfectly! I cannot set the path to the new glibc version in the ld.so.conf file, because then all system programs try to load the new glibc version and the system crashes horribly. My question: is there a way to compile Apache with the new glibc version using the --rpath switch? Should I modify the build configuration, and in which way?
In the end the answer was in the Apache build documentation: Environment variables. In my case the very serious issue is the fact that my system [SLES 11 SP2] crashes with glibc version 2.23. This means I cannot set the path to the new glibc libraries in the ld.so.conf file. The only solution for me is to set $LDFLAGS while running configure and make with the following values, so that the system can load Apache and all of the new libraries in the correct order, and also to set the loader explicitly:

export LDFLAGS="-L/FaF/lib64 -L/FaF/glibc/lib -L/FaF/openssl-curl/lib -Wl,--rpath=/FaF/glibc/lib -Wl,--rpath=/FaF/lib64 -Wl,--rpath=/FaF/lib -Wl,--rpath=/FaF/openssl-curl/lib -Wl,--rpath=/usr/local/lib64/ -Wl,--rpath=/usr/lib64 -Wl,--rpath=/lib64/ -Wl,--dynamic-linker=/FaF/glibc/lib/ld-linux-x86-64.so.2"

All programs using the new glibc version are collected in /FaF.
Compiling Apache with another glibc version
1,464,367,064,000
I'm trying to compile VASP 5.4 on Ubuntu 14.04 and running into issues. I believe I'm most of the way there, but when I run the make command it seems to bail out without a specific error. I am unable to run make install:

make: *** No rule to make target `install'. Stop.

This is very new to me, so I am struggling to really get anywhere at this stage.

makefile.include

# Precompiler options
CPP_OPTIONS= -DMPI -DHOST=\"IFC91_ompi\" -DIFC \
             -DCACHE_SIZE=4000 -Davoidalloc \
             -DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective \
             -DnoAugXCmeta -Duse_bse_te \
             -Duse_shmem -Dtbdyn

CPP        = gcc -E -P -C $*$(FUFFIX) >$*$(SUFFIX) $(CPP_OPTIONS)

FC         = mpif90
FCL        = $(FC)

FREE       = -ffree-form -ffree-line-length-none

FFLAGS     =
OFLAG      = -O2
OFLAG_IN   = $(OFLAG)
DEBUG      = -O0

LIBDIR     = /usr/lib
BLAS       = -L$(LIBDIR) -lblas
LAPACK     = -L$(LIBDIR) -llapack
BLACS      = -L$(LIBDIR) -lblacs-openmpi -lblacsCinit-openmpi
SCALAPACK  = -L$(LIBDIR) -lscalapack-openmpi $(BLACS)

FFTW       = -L/usr/lib/x86_64-linux-gnu -lfftw3f_mpi

OBJECTS    = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o

INCS       =-I/usr/local/include

LLIBS      = $(SCALAPACK) $(LAPACK) $(BLAS) $(FFTW)

OBJECTS_O1 += fft3dfurth.o fftw3d.o fftmpi.o fftmpiw.o chi.o
OBJECTS_O2 += fft3dlib.o

# For what used to be vasp.5.lib
CPP_LIB    = $(CPP)
FC_LIB     = $(FC)
CC_LIB     = gcc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB   = $(FREE)

OBJECTS_LIB= linpack_double.o getshmem.o

# Normally no need to change this
SRCDIR     = ../../src
BINDIR     = ../../bin

End of the output from the make command:

cp makefile.include lib
make -C lib
make[3]: Entering directory `/home/scienceadmin/Documents/VASP/src/vasp.5.4.1/build/ncl/lib'
make libdmy.a
make[4]: Entering directory `/home/scienceadmin/Documents/VASP/src/vasp.5.4.1/build/ncl/lib'
make[4]: `libdmy.a' is up to date.
make[4]: Leaving directory `/home/scienceadmin/Documents/VASP/src/vasp.5.4.1/build/ncl/lib' make[3]: Leaving directory `/home/scienceadmin/Documents/VASP/src/vasp.5.4.1/build/ncl/lib' gcc -E -P -C main.F >main.f90 -DMPI -DHOST=\"IFC91_ompi\" -DIFC -DCACHE_SIZE=4000 -Davoidalloc -DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective -DnoAugXCmeta -Duse_bse_te -Duse_shmem -Dtbdyn mpif90 -ffree-form -ffree-line-length-none -O0 -I/usr/local/include -c main.f90 rm -f vasp mpif90 -o vasp base.o mpi.o smart_allocate.o xml.o constant.o jacobi.o main_mpi.o scala.o asa.o lattice.o poscar.o ini.o mgrid.o xclib.o vdw_nl.o xclib_grad.o radial.o pseudo.o gridq.o ebs.o mkpoints.o wave.o wave_mpi.o wave_high.o spinsym.o symmetry.o symlib.o lattlib.o random.o nonl.o nonlr.o nonl_high.o dfast.o choleski2.o mix.o hamil.o xcgrad.o xcspin.o potex1.o potex2.o constrmag.o cl_shift.o relativistic.o LDApU.o paw_base.o metagga.o egrad.o pawsym.o pawfock.o pawlhf.o rhfatm.o hyperfine.o paw.o mkpoints_full.o charge.o Lebedev-Laikov.o stockholder.o dipol.o solvation.o pot.o dos.o elf.o tet.o tetweight.o hamil_rot.o chain.o dyna.o k-proj.o sphpro.o us.o core_rel.o aedens.o wavpre.o wavpre_noio.o broyden.o dynbr.o reader.o writer.o tutor.o xml_writer.o brent.o stufak.o fileio.o opergrid.o stepver.o chgloc.o fast_aug.o fock_multipole.o fock.o mkpoints_change.o subrot_cluster.o sym_grad.o mymath.o npt_dynamics.o subdftd3.o internals.o dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o hamil_high.o nmr.o pead.o subrot.o subrot_scf.o paircorrection.o force.o pwlhf.o gw_model.o optreal.o steep.o rmm-diis.o davidson.o david_inner.o electron.o rot.o electron_all.o shm.o pardens.o optics.o constr_cell_relax.o stm.o finite_diff.o elpol.o hamil_lr.o rmm-diis_lr.o subrot_lr.o lr_helper.o hamil_lrf.o elinear_response.o ilinear_response.o linear_optics.o setlocalpp.o wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o gauss_quad.o m_unirnk.o varpro.o minimax.o mlwf.o ratpol.o screened_2e.o wave_cacher.o 
chi_base.o wpot.o local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o bse_te.o bse.o acfdt.o chi.o sydmat.o rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o linear_response.o lcao_bare.o wnpr.o dmft.o auger.o dmatrix.o fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o main.o -Llib -ldmy -L/usr/lib -lscalapack-openmpi -L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi -L/usr/lib -llapack -L/usr/lib -lblas -L/usr/lib/x86_64-linux-gnu -lfftw3f_mpi make[2]: Leaving directory `/home/scienceadmin/Documents/VASP/src/vasp.5.4.1/build/ncl' make[1]: Leaving directory `/home/scienceadmin/Documents/VASP/src/vasp.5.4.1/build/ncl'
You have already compiled it. The make step is what compiles; make install simply copies the compiled executables and associated files to their target directories on your system.

From what you describe, it sounds like what you are compiling simply doesn't have an install script. I haven't compiled this particular package myself, but such third-party tools often don't have an install script, since they're more of a specialized geek tool and the authors expect you'll know how to use them.

Look for a bin directory or, alternatively, for executable files inside the src directory, and then execute those files. For example:

    ./bin/vasp
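Whether `make install` exists at all can be checked before running it. A minimal sketch of such a check; the sample Makefile written here is hypothetical, purely to demonstrate the grep:

```shell
# Succeeds if the given Makefile defines an "install" target.
has_install_target() {
    grep -q '^install[[:space:]]*:' "$1"
}

# Hypothetical Makefile without an install target, as in the VASP case:
tmpdir=$(mktemp -d)
printf 'all: vasp\nvasp:\n\tgcc -o vasp main.c\n' > "$tmpdir/Makefile"

if has_install_target "$tmpdir/Makefile"; then
    echo "install target found"
else
    echo "no install target -- copy the binary by hand, e.g. cp bin/vasp /usr/local/bin/"
fi
```

If the check fails, copying the built executable somewhere on your $PATH is all `make install` would have done anyway.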
Problems compiling VASP
1,464,367,064,000
I'm using Fedora 23 64-bit here. I suppose I have all the dev dependencies installed; however, this last one I don't know how to solve:

    [sombriks@sephiroth planner]$ sh autogen.sh /usr/bin/gnome-autogen.sh
    ***Warning*** USE_COMMON_DOC_BUILD is deprecated, you may remove it from autogen.sh
    ***Warning*** USE_GNOME2_MACROS is deprecated, you may remove it from autogen.sh
    ***Warning*** PKG_NAME is deprecated, you may remove it from autogen.sh
    checking for automake >= 1.9... testing automake... found 1.15
    checking for autoreconf >= 2.53... testing autoreconf... found 2.69
    checking for intltool >= 0.25... testing intltoolize... found 0.51.0
    checking for pkg-config >= 0.14.0... testing pkg-config... found 0.28
    checking for gtk-doc >= 1.0... testing gtkdocize... found 1.24
    Checking for required M4 macros...
    **Warning**: I am going to run `configure' with no arguments.
    If you wish to pass any to it, please specify them on the `autogen.sh' command line.
    Processing ./configure.ac
    Running gtkdocize...
    Running intltoolize...
    Running autoreconf...
    autoreconf: Entering directory `.'
    autoreconf: configure.ac: not using Gettext
    autoreconf: running: aclocal --force --warnings=no-portability
    autoreconf: configure.ac: tracing
    autoreconf: running: libtoolize --copy --force
    libtoolize: putting auxiliary files in '.'.
    libtoolize: copying file './ltmain.sh'
    libtoolize: Consider adding 'AC_CONFIG_MACRO_DIRS([m4])' to configure.ac,
    libtoolize: and rerunning libtoolize and aclocal.
    libtoolize: Consider adding '-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
    autoreconf: running: /usr/bin/autoconf --force --warnings=no-portability
    autoreconf: running: /usr/bin/autoheader --force --warnings=no-portability
    autoreconf: running: automake --add-missing --copy --force-missing --warnings=no-portability
    configure.ac:6: warning: AM_INIT_AUTOMAKE: two- and three-arguments forms are deprecated.
    For more info, see:
    configure.ac:6: http://www.gnu.org/software/automake/manual/automake.html#Modernize-AM_005fINIT_005fAUTOMAKE-invocation
    configure.ac:10: installing './compile'
    configure.ac:6: installing './missing'
    automake: error: cannot open < xmldocs.make: No such file or directory
    autoreconf: automake failed with exit status: 1
    [sombriks@sephiroth planner]$

Any help is welcome.
While hitting the same problem, I found out that the following helped:

    touch xmldocs.make
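The same pattern applies whenever automake stops on a `cannot open < somefile` error: the Makefile.am includes a file that isn't in the tree, and an empty one is usually enough to get past it. A minimal sketch of the general fix:

```shell
# Create the missing include file only if it doesn't already exist,
# so a real xmldocs.make shipped later isn't clobbered.
missing=xmldocs.make
[ -e "$missing" ] || touch "$missing"
```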
unable to compile GNOME planner from source
1,464,367,064,000
I want to add Python 2.7 to my Unix. I downloaded the sources to the VirtualBox on which the Unix is installed and ran

    ./configure --prefix=/usr \
                --enable-shared \
                --with-system-expat \

without problems. However, when I try to run make, it fails on:

    cc -Kpthread -Wl,-Bexport -o python Modules/python.o libpython2.7.a -lsocket -lnsl -lpthread -ldl -lm
    Undefined                       first referenced
     symbol                             in file
    _PyInt_FromDev                      libpython2.7.a(posixmodule.o)
    UX:ld: ERROR: Symbol referencing errors. No output written to python
    *** Error code 1 (bu21)
    UX:make: ERROR: fatal error.

I tried to find a solution searching on Google, without much success. Can you suggest directions towards a solution?

Environment:

    OS     UnixWare 7.1.4
    Python 2.7.10
make is successful when #endif is moved from line 526 of Modules/posixmodule.c to line 513.
Compiling Python 2.7.10 Error
1,464,367,064,000
I am trying to install Kali on my Lenovo Yoga13, but after formatting the disk the setup failed to install grub because of no internet access (no Ethernet, need driver to get Wifi to work). So, I decided to compile the Wifi driver to complete the setup just to realize I am missing kernel headers. I cannot apt-get install because I do not have net access. Is there a way to manually install kernel headers to compile the driver?
If it is not on the install media, then you need to get the deb into /var/cache/apt/archives/. If it is there then the download part of apt-get will be skipped.
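In practice that means fetching the .deb on a machine that does have network access (`apt-get download linux-headers-$(uname -r)` on a matching Debian/Kali box), carrying it over on a USB stick, and installing it locally. A hedged sketch of the two equivalent install routes; the helper and the package filename below are hypothetical, just to print the commands you would run:

```shell
# Print the two ways to install a locally copied .deb without network access:
# either seed the apt cache so apt-get skips the download, or use dpkg directly.
offline_install_cmds() {
    deb="$1"
    # ${deb%%_*} strips the version/arch suffix, leaving the package name.
    printf 'cp %s /var/cache/apt/archives/ && apt-get install %s\n' "$deb" "${deb%%_*}"
    printf 'dpkg -i %s\n' "$deb"
}

# Hypothetical filename for a Kali headers package:
offline_install_cmds linux-headers-3.12-kali1-amd64_3.12.6-2kali1_amd64.deb
```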
Missing kernel headers, but need them to install the Wifi driver
1,464,367,064,000
I am currently trying to upgrade Apache from 2.2.8 to 2.2.29 and am running into some trouble. I configured the makefile like so:

    ./configure --enable-mods-shared --enable-ssl --enable-rewrite --enable-proxy-ftp --enable-proxy-http --enable-proxy-connect --enable-proxy --enable-cache --enable-mem-cache --enable-expires --enable-headers --enable-deflate --enable-unique-id

When running the make command I get the following error:

    /usr/local/apache2/build/libtool --silent --mode=compile gcc -g -O2 -pthread -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -D_LARGEFILE64_SOURCE -I/opt/vignette/software/apache/srclib/pcre -I. -I/opt/vignette/software/apache/os/unix -I/opt/vignette/software/apache/server/mpm/prefork -I/opt/vignette/software/apache/modules/http -I/opt/vignette/software/apache/modules/filters -I/opt/vignette/software/apache/modules/proxy -I/opt/vignette/software/apache/include -I/opt/vignette/software/apache/modules/generators -I/opt/vignette/software/apache/modules/mappers -I/opt/vignette/software/apache/modules/database -I/usr/local/apache2/include -I/opt/vignette/software/apache/modules/proxy/../generators -I/usr/kerberos/include -I/opt/vignette/software/apache/modules/ssl -I/opt/vignette/software/apache/modules/dav/main -prefer-non-pic -static -c mod_deflate.c && touch mod_deflate.lo
    mod_deflate.c: In function `deflate_out_filter':
    mod_deflate.c:790: error: `APR_INT32_MAX' undeclared (first use in this function)
    mod_deflate.c:790: error: (Each undeclared identifier is reported only once
    mod_deflate.c:790: error: for each function it appears in.)
    mod_deflate.c: In function `deflate_in_filter':
    mod_deflate.c:1165: error: `APR_INT32_MAX' undeclared (first use in this function)
    mod_deflate.c: In function `inflate_out_filter':
    mod_deflate.c:1550: error: `APR_INT32_MAX' undeclared (first use in this function)
    make[3]: *** [mod_deflate.lo] Error 1
    make[3]: Leaving directory `/opt/vignette/software/httpd-2.2.29/modules/filters'
    make[2]: *** [all-recursive] Error 1
    make[2]: Leaving directory `/opt/vignette/software/httpd-2.2.29/modules/filters'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/opt/vignette/software/httpd-2.2.29/modules'
    make: *** [all-recursive] Error 1

Now when I go to line 790 of mod_deflate.c it has this:

    if (len > APR_INT32_MAX) {
        apr_bucket_split(e, APR_INT32_MAX);
        apr_bucket_read(e, &data, &len, APR_BLOCK_READ);
    }

For some reason I don't think this macro is getting defined. I will note that I am running RHEL4 (I know it's bad) and that I just recently installed APR (Apache Portable Runtime) and APU (APR-util) and have them configured in:

    /usr/local/apr/bin/apr-1-config
    /usr/local/apr/bin/apu-1-config

I am not sure if these are related or if it's causing a problem, since I installed APR independently and apache comes with its own. The reason I have them installed separately is because another program I have installed depends on a different version. I will say that when I configure the makefile without deflate, the binary compiles successfully.
After a few days of trying to figure this out, here is the solution. There seemed to be an old APR library (< v1.3.0) installed on the system that was in conflict with the version needed by Apache.

What I did was compile (and run) with the APR embedded in the httpd-2.2.29 archive, using:

    ./configure --with-included-apr
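Before reconfiguring, it can be worth checking whether the system APR really is below the 1.3.0 threshold the answer identifies. A hedged sketch of such a version check (`sort -V` is GNU coreutils; on a real system you'd feed the function the output of `apr-1-config --version`):

```shell
# Succeeds if the given APR version string is >= 1.3.0.
apr_new_enough() {
    lowest=$(printf '%s\n' 1.3.0 "$1" | sort -V | head -n 1)
    [ "$lowest" = "1.3.0" ]
}

# Example with a hypothetical stale version, as on the RHEL4 box above:
if apr_new_enough "1.2.12"; then
    echo "system APR is fine"
else
    echo "old APR -- build with ./configure --with-included-apr"
fi
```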
Trouble updating Apache - mod_deflate APR_INT32_MAX undeclared
1,464,367,064,000
I recently lost my dual-booted HDD (with both Linux and Windows). It did some weird stuff and started changing all files to read-only on every start-up. So, I bought a new HDD and put Fedora 20 on it.

Now, the only thing I'm looking to replace is Finale (I have found acceptable substitutes or Linux versions of everything else). I have found MuseScore, NoteEdit, etc., but all of these programs need to be compiled (I can't find a suitable .rpm), and typing 'make' gives me errors. First I need cmake, then qmake, then a whole ton of things I don't recognize, some of which I don't understand. Honestly, I've gotten to the point of saying it isn't worth it unless I can find an RPM of something, because compiling things is ridiculously difficult (I don't NEED a music writing program, just want one).

Does anyone know of a suitable replacement, already compiled and ready for distribution, for Fedora 20?

EDIT #1

For those who are unfamiliar with Finale, it is music-composition software, similar to Linux's NoteEdit or MuseScore. It allows exportation of audio files, note annotations, and printing pages of musical scores. It also supports transposition and human playback features. Some of these things are not that important to me, and I can do some by hand (like transposition), but I would really like the audio file export and printing features.

Here is the about page for Finale: http://www.finalemusic.com/products/finale/
Method #1 - Using Wine + Finale

If you have Fedora 20 set up and a copy of Finale already, you can apparently run it under Wine, at least according to this thread titled: Finale 2014 without Windows or Mac.

excerpt

    Hey all Heads up: Finale 2014 is working perfectly on Linux. You can use PlayOnLinux (or basic WINE) and performance is incredible. Of course, if you insist on using VST third-party playback and whatnot, you'll be sad but it's not surprising. For strictly notation and basic playback, Finale 2014 is golden on Linux. If all you want to do is input and edit scores, then you will be very happy. Finale 2014 on Linux does everything as the other OS can do, minus comprehensive VST support. Even though Finale is only "supported" on Mac and PC, I figured I would share my findings with the forum in case any other users like myself prefer to use Linux whenever possible.

What's Wine? Wine is a compatibility layer that allows Windows executables to run under Linux. It's not virtualization and it isn't full emulation; it's somewhere in between both of these approaches. But it really doesn't matter how it works, just that it does. You can read more of the technology underpinnings on the project's website.

The software application Finale is listed in Wine's AppDB. It's listed as being Gold or Bronze level for versions 2011 & 2012. This is a pretty good indication that the application should run reasonably well through this approach.

Method #2 - MuseScore

I did find the package already pre-built for Fedora 20. It's called mscore and looks to be in the standard repositories.
http://pkgs.org/fedora-20/fedora-i386/mscore-1.3-2.fc20.i686.rpm.html

To install it:

    $ sudo yum install mscore*

On Fedora 19 (which I'm currently using) the packages are definitely already available:

    $ yum search mscore
    Loaded plugins: auto-update-debuginfo, changelog, langpacks, refresh-packagekit
    ========================== N/S matched: mscore ==========================
    mscore-debuginfo.x86_64 : Debug information for package mscore
    mscore.i686 : Music Composition & Notation Software
    mscore.x86_64 : Music Composition & Notation Software
    mscore-doc.noarch : MuseScore documentation
    mscore-fonts.noarch : MuseScore fonts

Method #3 - NoteEdit

There is a distro that builds on top of Fedora called CCRMA (pronounced "karma") which is geared towards doing music pre/post production along with video editing.

    Planet CCRMA at Home (CCRMA is pronounced ``karma'') is a collection of free, open source rpm packages (RPM stands for RPM Package Manager) that you can add to a computer running Fedora 18, 19 or 20, or CentOS 5 (not all applications are built on the 64 bit version) to transform it into an audio workstation with a low-latency kernel, current audio drivers and a nice set of music, midi and audio applications

This distro offers NoteEdit as a pre-built RPM:

    $ sudo yum install noteedit

You should be able to add the packages from the CCRMA website directly to any stock Fedora 20 system. I'd probably add the CCRMA YUM repositories to the system so that any dependencies can also be automatically downloaded and installed as well. Details for doing this are covered here:

http://ccrma.stanford.edu/planetccrma/software/installplanettwenty.html
Pre-compiled Finale substitute for Fedora 20?
1,464,367,064,000
If we install Red Hat, can we compile other Linux distributions on the Red Hat server system? If so, please point me to tutorials and links.
First, if by that you mean creating a custom Ubuntu or other distro, yes you can, and this action is not entirely distro-specific (i.e. in your case Red Hat). For that you could use two different approaches: either use automation tools such as linuxcoe and similar tools, or go native and start with LFS (Linux From Scratch). Either way you have three things to consider:

- Your kernel
- Your GUI
- Your distribution system

Build your own distro: for the third option you could use other distros' tools as well (or write your own).

Or you meant to compile code for other platforms, which is something else and has more to do with your make and build environment and coding skills (writing config files, for example). You could do it in a jail environment or a virtual machine if you like.

Or perhaps you meant to access other distros inside a running Red Hat. In this case what you should consider the most is the architecture of the host and target systems. This can be done by chrooting. Consider that you have the live image of some distro in /mnt/distro. First you should mount /proc, /dev and /sys, then chroot to /mnt/distro:

    mount -t proc proc /mnt/distro/proc
    mount --rbind /sys /mnt/distro/sys
    mount --rbind /dev /mnt/distro/dev

Then for the chrooting part you need to specify your environment completely to avoid problems later:

    chroot /mnt/distro /bin/env -i TERM=$TERM /bin/bash

Note: this is the basic idea; the paths and environment will differ in your case.
Do I compile Ubuntu Source code on Red Hat System?
1,464,367,064,000
I'm trying to figure out how to run the make command. I'm trying to make use of fmem (a tool found on the internet), and it is said that "make" must be run from a terminal in the folder. However I get this:

    root@bakie:/home/tux/Documents/fmem/fmem_1.6-0# make
    rm -f *.o *.ko *.mod.c Module.symvers Module.markers modules.order \.*.o.cmd \.*.ko.cmd \.*.o.d
    rm -rf \.tmp_versions
    make -C /lib/modules/`uname -r`/build SUBDIRS=`pwd` modules
    make: *** /lib/modules/3.10-3-686-pae/build: No such file or directory. Stop.
    make: *** [fmem] Error 2

Thanks for making me see clearer.
You're missing the Linux kernel's headers. You can install them like so:

    $ sudo apt-get install linux-headers-$(uname -r)

Any time you're attempting to compile C/C++ software you often need libraries and header files. The header files contain function prototypes and sometimes implementations of said functions. The libraries are binary files that contain compiled code that the compiler can incorporate when told to do so by your source code. The big clue to this stuff missing is this line in your output:

    make: *** /lib/modules/3.10-3-686-pae/build: No such file or directory. Stop.

This is telling you that the kernel build tree is missing, and any time you see something under /lib/modules missing, that's a good indicator that it has to do with the Linux kernel headers.
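A quick sanity check after installing the headers: the module Makefile relies on the per-kernel build directory (usually a symlink into /usr/src/linux-headers-*) existing for the running kernel. A minimal sketch:

```shell
# Does the kernel build tree the out-of-tree module build needs exist?
builddir="/lib/modules/$(uname -r)/build"
if [ -d "$builddir" ]; then
    echo "kernel build tree present: $builddir"
else
    echo "kernel build tree missing -- install linux-headers-$(uname -r)"
fi
```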
fmem compile error with make
1,464,367,064,000
I'm trying to cross-compile libSDL version 1.2 for a custom-made, Debian-based Linux system. The toolchain I'm using is already configured properly, so that I just run gcc/g++ on the desired code and the resulting output is compatible with the target machine.

When I run ./configure --help in the libSDL source directory, I see that I can basically just set some environment variables to point to my cross-compiler. However, I also see the following options:

    System types:
      --build=BUILD    configure for building on BUILD [guessed]
      --host=HOST      cross-compile to build programs to run on HOST [BUILD]

I looked into the configure.in, build-scripts/config.sub, and build-scripts/config.guess files but couldn't really figure out how it works. Are these options required? If not, is it a good idea to use them?
The answer is 'yes', I should specify both.

The --build flag will be the architecture of the machine doing the compiling. The scripts can do a good job of guessing what this will be, but it's better to be safe than sorry and actually specify it yourself.

The --host flag is the necessary part in making this cross-compilation actually work. When my --build and --host flags weren't the same values (for example, i686-generic versus x86_64-linux), the configure scripts would realize that I'm trying to cross-compile. Instead of simply resorting to the local compiler (on the build machine, found somewhere in $PATH), the configure scripts would find a compiler suite (compiler, linker, etc.) either specified as flags or as environment variables (e.g. $CPPFLAGS, $CC, $AR, etc.) and use those.
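Putting that together, a typical invocation looks like the sketch below. The ARM triplet and tool prefixes are assumptions (substitute your own toolchain's prefix), and the build triplet is a rough stand-in for what the shipped `build-scripts/config.guess` computes properly:

```shell
# Build triplet: the machine doing the compiling (config.guess does this right).
build="$(uname -m)-pc-linux-gnu"

# Host triplet: the machine the binaries will run on (hypothetical example).
host="arm-linux-gnueabi"

# configure switches into cross mode when build != host, and then looks for
# ${host}-gcc, ${host}-ar, etc., or honors explicit CC/AR overrides.
echo ./configure --build="$build" --host="$host" CC="${host}-gcc" AR="${host}-ar"
```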
Cross compiling libSDL
1,367,501,163,000
I bought a Godex barcode printer and have to install its drivers. I did this on Ubuntu; however, I have some problems doing it here. For example, the README file says:

    $ sudo aptitude install libcupsimage2-dev

But the shell tells me:

    sudo: aptitude: command not found

Moreover, when I write sudo apt-get install aptitude it says:

    sudo: apt-get: command not found

And then, when I run ./configure, the shell says the following:

    ./configure
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for a thread-safe mkdir -p... /bin/mkdir -p
    checking for gawk... gawk
    checking whether make sets $(MAKE)... no
    checking for gcc... no
    checking for cc... no
    checking for cl.exe... no
    configure: error: no acceptable C compiler found in $PATH
    See `config.log' for more details.

How can I fix it? How can I install this package? Thanks in advance for your help.
Each Linux distribution (or family of distributions) has its own package management system. Ubuntu uses .deb packages and has the APT tool family, including Aptitude. Pardus has its own package manager called PiSi.

Your immediate problem is "no acceptable C compiler": you need to install a C compiler, and probably other development packages. See "Compiling Software and Packaging" in the Pardus Wiki; start by installing the basic development tools (the equivalent of Ubuntu's build-essential):

    sudo pisi install -c system.devel
How to install rastertoezpl on Pardus?
1,367,501,163,000
I compiled a CMake project statically on FreeBSD 12. When I try ldd executable, it returns "not a dynamic executable". I tried the executable on the same computer and it works fine. Then I ported it to my VPS, which runs FreeBSD 12, and it works as expected.

But when I port it to FreeBSD 8, some commands work, like executable --help, which prints the help. But when I try some functions which involve networking (the network is configured, and I tried various programs like curl and php), the process exits silently, with no segmentation fault or anything, and after running the executable a file named executable.core is added to the same directory.

This is my first time compiling a FreeBSD build, so I don't know what I'm missing.
FreeBSD 12 had a serious ABI change called "ino64". IIRC, libc can handle that when linked dynamically, so I'd try that first. If that doesn't work, your only option is to compile on FreeBSD 8.

Generally, you shouldn't expect a binary compiled on major release X to work on release X-1. But it works the other way around, by installing the misc/compatXX packages.
Static executable weird behaviour when running it in older FreeBSD
1,367,501,163,000
I'm trying to build tools from source code. I start with make and get the following error:

    configure: error: no acceptable C compiler found in $PATH

So it means that I need to install gcc. But as you may guess, building gcc requires the make tool. The gcc-9.3.0 installation instructions say:

    GNU make version 3.80 (or later)
    You must have GNU make installed to build GCC.

It seems to me a chicken-and-egg problem. Am I missing something?

P.S.: I know it's easy to install one of them from the distro's package repositories. But I want to build them from source code.
    But I want to build them from source code.

You really do need a C compiler to prime the pump. Even in the old days of getting GNU going on systems such as SunOS, you'd start by building the tools using the native C compiler (or later, by using GCC binaries built by someone else).

Make is only one of the tools you need to build GCC; once you've got past the Make hurdle, you'll find you need GCC to build GCC...

I recommend Linux From Scratch if you want to explore building a system from source; it will explain all the various steps in detail.
How to build gcc without make?
1,367,501,163,000
Forgive my lack of expertise, but I'm wondering whether the source code of an open-source Windows program can be compiled and its executable used on Linux (without Wine). In my case it's Ant Renamer, which I currently use with Wine (I was unsatisfied with the Linux alternatives, which lack some advanced functionality or a GUI).
The short answer is no. A longer answer is usually not. It depends on what libraries the program relies on.

Depending on the language that the program is written in, the standard library shipped with the compiler or interpreter may include more or less functionality. C, a ubiquitous language in the sense that only extremely unusual platforms lack a C compiler, has a standard library that's pretty small. For example, it has a concept of file I/O, but not directories, and its user interface is limited to standard input, standard output and standard error. Anything else requires making assumptions about the operating system or using a third-party library.

Some languages choose to stick to what C can do in its standard library, so that their standard library can run everywhere C runs. Other programming languages provide more features in their standard library, and may have de facto standard third-party libraries that are portable at least between Unix and Windows. Portable not in the sense that the same library code runs on both, but in the sense that there's a Windows implementation and a Unix implementation, and a program that uses the library doesn't need to care which one is being used. For example, Python's standard library has interfaces to manage files and directories and comes with a few basic GUI libraries (not nearly as nice-looking or sophisticated as what can be done, but they're portable and relatively simple to use); in addition there are some more advanced portable third-party libraries.

Both Unix and Windows have many features that aren't available on the other system, or that work very differently. Graphical user interfaces work very differently. Some Unix libraries have been ported to Windows, but Windows's interfaces have not been ported to Unix.
The Windows GUI code is more integrated into the operating system (Unix's GUI was designed in separate layers from the start), and it is proprietary code (the libraries that come from the Unix world are mostly open source), so it would be very hard to port to Unix: you'd basically have to run Windows in a virtual machine. Wine attempts to provide an independent implementation of the Windows libraries, but it's a very difficult job.

Ant Renamer is written in a Pascal dialect for which the tools were developed for Windows, namely Delphi. There was a Linux port at some point, but I don't know how successful it was. The GUI libraries, in particular, would have to be rewritten from the Windows interfaces to some Unix ones. The practical answer is that if Ant Renamer can be made to work on Linux, it would require significant effort, possibly more than writing it from scratch based on a precise description of its behavior.

For the sake of people who found this question in a search but had a different problem: you can use a Unix machine to compile source from a Windows program to run on a Windows machine, with a cross compiler such as MinGW.
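That reverse direction can be sketched in a few lines with MinGW-w64. The compiler name `x86_64-w64-mingw32-gcc` is the usual one on Debian and Fedora, but package names vary by distro; the sketch is guarded so it degrades gracefully when MinGW isn't installed:

```shell
# Write a trivial C program, then cross-compile it into a Windows PE executable.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello from a Windows build\n"); return 0; }
EOF

if command -v x86_64-w64-mingw32-gcc >/dev/null 2>&1; then
    x86_64-w64-mingw32-gcc -o hello.exe hello.c
    file hello.exe    # a PE32+ executable, runnable on Windows (or under Wine)
else
    echo "MinGW-w64 not installed; skipping the compile step"
fi
```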
can we compile source code of windows programs?
1,367,501,163,000
Suppose I want to:

- compile and install my own custom application, which requires
- downloading, compiling and installing the source for the newest version of libthrift, which in turn requires
- downloading, compiling and installing the latest version of libboost.

Here, I'm installing these libraries into my system, where they may interact with other packages -- very many packages depend on libthrift and libboost -- and installing them may break existing packages installed with apt-get/yum. Also, if I later run apt-get or yum, my custom libthrift and libboost will be overwritten, breaking my custom application that depends on the custom versions of these libraries.

So, what's the solution here? I don't want to install into /home (I'd like the packages to be available to shared users in a code regression build cluster). I've also read that /opt is not really for this purpose (the packages installed there should be self-contained, which these are not). The references I've found don't seem to cover this case in enough detail.
Generally, the solution is "don't try to install from source into directories managed by your packaging system".

You can install your custom-compiled code into /usr/local, for example, and have anything that depends on it look to /usr/local for libraries and include files, using appropriate invocations of your build system (e.g., setting CPPFLAGS/CFLAGS/LDFLAGS for a typical Makefile). You could even install everything into an application-specific directory (e.g., /usr/local/myapp, or /opt/myapp).

This is also a great use case for something like Docker, which makes it very easy to set up development/runtime environments that are isolated from your host.
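Concretely, the flag-setting described above looks like this for an autoconf-style build; the `/usr/local/myapp` prefix is just the example name from the answer, and the configure line is echoed rather than executed:

```shell
# Install the custom libthrift/libboost stack under its own prefix, then point
# dependent builds at it.  The rpath lets the binary find the private libraries
# at run time without touching the system-wide linker configuration.
PREFIX=/usr/local/myapp
CPPFLAGS="-I$PREFIX/include"
LDFLAGS="-L$PREFIX/lib -Wl,-rpath,$PREFIX/lib"

# Dry-run style: show the configure invocation instead of running it here.
echo ./configure --prefix="$PREFIX" CPPFLAGS="$CPPFLAGS" LDFLAGS="$LDFLAGS"
```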
How can I get self-compiled packages to play nice with packages managers (e.g. apt-get, yum) [duplicate]
1,367,501,163,000
I have read that Python will compile a .py source file by itself to produce a .pyc, but this doesn't happen in my case. I have the source file inside the /opt/osqa folder, where I always have to use sudo privileges. How can I compile this source file manually? I am using Arch Linux. Do I need any specific package?
The .pyc files are created when files are imported. Usually running a script by itself will not create a compiled file. For instance:

    % cat tmp.py
    print 'in tmp.py'

When I run the file normally:

    % python tmp.py
    in tmp.py

there is no .pyc file created:

    % ls tmp.py*
    tmp.py

However, if I import tmp from a live Python interpreter:

    % python
    Python 2.7.6 (default, Nov 14 2013, 09:55:56)
    [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tmp
    in tmp.py
    >>>

then the compiled file is created:

    % ls tmp.py*
    tmp.py  tmp.pyc

So, it may be normal behaviour depending on how you are running your script.
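Importing isn't the only trigger: the stdlib py_compile and compileall modules byte-compile explicitly, so for the /opt/osqa case something like `sudo python -m compileall /opt/osqa` would do the whole tree. A small demonstration (python3 shown; Python 2 writes tmp.pyc next to the source instead of using a __pycache__ directory):

```shell
# Byte-compile a file explicitly with the stdlib py_compile module.
workdir=$(mktemp -d)
printf 'print("in tmp.py")\n' > "$workdir/tmp.py"
python3 -m py_compile "$workdir/tmp.py"
ls "$workdir/__pycache__"    # under Python 3 the .pyc lands in here
```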
Compiling python .py to .pyc doesn't happen
1,367,501,163,000
    A4: main.o testA4.o helper.o miscFunctions.o queueFunctions.o headerA4.h
        gcc -Wall -std=c99 main.o testA4.o helper.o miscFunctions.o queueFunctions.o

    main.o: main.c headerA4.h
        gcc -Wall -std=c99 -c main.c -o main.o

    testA4.o: testA4.c headerA4.h
        gcc -Wall -std=c99 -c testA4.c -o testA4.o

    helper.o: helper.c headerA4.h
        gcc -Wall -std=c99 -c helper.c -o helper.o

    miscFunctions.o: miscFunctions.c headerA4.h
        gcc -Wall -std=c99 -c miscFunctions.c -o miscFunctions.o

    queueFunctions.o: queueFunctions.c headerA4.h
        gcc -Wall -std=c99 -c queueFunctions.c -o queueFunctions.o

    clean:
        rm *.o

This is my makefile, but when I run make, this happens:

    zali05@ginny:~/A4$ make
    gcc -Wall -std=c99 main.o testA4.o helper.o miscFunctions.o queueFunctions.o
    zali05@ginny:~/A4$ A4
    bash: A4: command not found
    zali05@ginny:~/A4$ A4:
    bash: A4:: command not found
    zali05@ginny:~/A4$ ./A4
    bash: ./A4: No such file or directory
    zali05@ginny:~/A4$ ./a.out
    Begining A4 Program
    Testing...
    Creating Initial List...
    Enter a username:

It works with ./a.out.
The link/load command is missing the -o A4 option. You might want to write it as -o $@. Similarly, you might write the object list in the command as $^, causing GNU make to copy the dependency list. (Alas, not all makes have this feature.) Make will also provide a pattern rule for the compiles: you could set CFLAGS, and optionally CC, and omit all the compile commands.

Also, you misformatted the first line of your makefile when posting here.

Just to provide a complete result, if I were writing this makefile (and not using automake or makedepend (use one!), and targeting to not be GNU-make-specific), I would probably write:

    A4_OBJS = main.o testA4.o helper.o miscFunctions.o queueFunctions.o

    CC = gcc
    CFLAGS = -Wall -std=c99

    A4 : ${A4_OBJS}
        ${CC} ${CFLAGS} ${LDFLAGS} -o $@ ${A4_OBJS}

    main.o : main.c headerA4.h
    testA4.o : testA4.c headerA4.h
    helper.o : helper.c headerA4.h
    miscFunctions.o : miscFunctions.c headerA4.h
    queueFunctions.o : queueFunctions.c headerA4.h

    clean :
        rm *.o A4
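The effect of `-o $@` is easy to see with a dry run: `make -n` prints the command it would execute without needing any real sources. The Makefile written below is a stripped-down sketch, and the demo is guarded in case make isn't available:

```shell
# Demonstrate $@ expansion in a link rule via a dry run.
demo=$(mktemp -d)
# printf keeps the TAB (\t) that make requires in front of the recipe line.
printf 'A4: main.o\n\tgcc -o $@ main.o\n' > "$demo/Makefile"
touch "$demo/main.o"
if command -v make >/dev/null 2>&1; then
    ( cd "$demo" && make -n A4 )    # prints the expanded recipe: gcc -o A4 main.o
fi
```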
Make file not working c
1,367,501,163,000
I do a lot of software testing, and that requires me to build many different projects from source. I recently upgraded from Fedora 31 to Fedora 33, and essentially I have a group of issues that involve either not being able to run software that I had previously built due to "missing" dependencies, or getting errors when building (mostly after running ./configure or make) that packages or libraries are missing, when many of them really aren't, and certainly were not knowingly removed by me. Other times, an update to a package (such as Python 3.6 -> 3.9) causes issues with some scripts, etc.

The reason this confuses me (with the exception of that Python issue) is that I never actually removed many of these libraries; for whatever reason they just don't get recognized anymore, so I have to manually show the build system where they are, or reinstall the dependencies, and so on.

Is there any general advice for clearing up these types of issues after upgrading a version of Fedora or another Linux distribution? I imagine this is somewhat common, and while I can certainly troubleshoot every individual issue on a case-by-case basis, I consider this one collective issue, and as I continue my Linux journey I would like to become more adept at addressing it when I upgrade in the future.
Compiling any non-trivial piece of software from source can be a complex task, although nowadays there's quite a bit of automation to hide much of the details. But when something goes wrong in the process, you'll need to be able to understand the process to effectively troubleshoot it.

Unfortunately, it seems to me that the knowledge and skills to troubleshoot a failing compilation process are increasingly understood as "something only the actual software developers have", while they are actually also very necessary for testers, and sometimes highly useful for system administrators too. In fact, the underlying idea of open-source software assumes that everyone should be able to recompile a software package if they feel like doing it.

When ./configure complains about a missing library, it's often really looking for the -devel package for that library. ./configure is actually a shell script that runs a number of tests to find and verify the prerequisites for compiling a software package. It is generated by the autoconf toolkit, which is used to ease the job of making a source code package compilable on several hardware architectures and Unix-like OSs.

Sometimes a library development project or a Linux distribution might make changes in the way the development headers (*.h files) are laid out under /usr/include/. For example, an old version might have them directly under /usr/include/, and a newer one might have them under a /usr/include/library_name/ sub-directory. If a library developer makes a change that is backwards-incompatible at the source level, it might be necessary to include the library version number, so that distributions may support an old version of the library as /usr/include/library_name/ and a new version as /usr/include/library_name2, or something similar. If such a change is newer than the version of autoconf used to create the ./configure script for a software package, it might not be able to auto-detect the new location.
autoconf is not perfect, and there are also other mechanisms to supplement or replace it. Another common one is pkg-config: each library package that supports it will include in its -devel package (or equivalent) a *.pc file that documents important compiler options, dependencies and other information that is important when building software to use the library in question.

The make step will usually need both the actual library package and its -devel package to be present. When a software package is built using make, often several parts of it are compiled from source code to binary object files in isolation, and then those files are linked together to form the executable(s). The compilation sub-step might require only the *.h files provided by the -devel package, but the linking sub-step requires that the actual library package is present.

If you need some piece of software compiled for OS major version X to run on OS major version (X+n), you might encounter problems with shared libraries:

- a library has undergone a backwards-incompatible change
- a library has been packaged differently to work around a name conflict or some other issue
- an old library has been completely dropped from the distribution and the functionality is now provided with a different library, or in another way altogether.

To work around problems like this, you might have to find the old libraries the software actually wants, place them into some directory that is not searched for libraries normally, and start the old program using a wrapper script that uses environment variables like LD_LIBRARY_PATH to specify a custom search path for libraries for that application only. See the ld.so(8) man page and particularly its ENVIRONMENT chapter for more details on things you can do to change the way an application looks for shared libraries at run-time.

In this way, I can still run a Linux version of Sid Meier's Alpha Centauri (copyright 1999) on a current Debian 10 system. Not too shabby, I'd say.
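The wrapper-script idea can be sketched like this. The directory below is a freshly created stand-in for wherever the legacy .so files were stashed, and in place of the last command a real wrapper would exec the old program:

```shell
# Give one old program a private library search path, leaving the rest of
# the system untouched. The directory here is a freshly created stand-in
# for wherever the legacy .so files were stashed (e.g. ~/oldlibs/smac).
legacy_libs=$(mktemp -d)

# Prepend the legacy directory for this one process tree only; a real
# wrapper script would end with something like:
#   exec "$HOME/games/smac/smac" "$@"
LD_LIBRARY_PATH="$legacy_libs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" \
    sh -c 'echo "linker search path: $LD_LIBRARY_PATH"'
```

Because the variable is set only for that one command, the rest of the session keeps its normal library search path.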
What types of things break regarding building of native software when upgrading versions of Fedora, and what are some solutions? [closed]
1,367,501,163,000
I'm currently working my way through the Linux From Scratch (LFS) book, but I'm a bit confused about when to apply patches. In Section 5.7 the goal is to build glibc. The section has a pretty straightforward build instruction, but I noticed in a previous chapter, that I downloaded the patch Glibc FHS Patch and there is no mention of this patch in the instructions. Should I patch glibc in this section or just follow the instructions? NB! Current version of LFS stable is 9.0
The patch is not needed in Section 5.7, it's needed in Section 6.9. There is a note in the general compilation instructions which says: Several of the packages are patched before compilation, but only when the patch is needed to circumvent a problem. A patch is often needed in both this and the next chapter, but sometimes in only one or the other. Therefore, do not be concerned if instructions for a downloaded patch seem to be missing.
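Section 6.9 applies the patch with LFS's usual patch -Np1 invocation. Here is a throwaway sketch of what that convention does, using a made-up one-file package and diff rather than glibc itself:

```shell
# What the book's usual `patch -Np1 -i ../<name>.patch` convention does.
# The a/ and b/ prefixes in the diff are why -p1 (strip one leading path
# component) is needed; -N refuses to apply an already-applied patch twice.
srcdir=$(mktemp -d); cd "$srcdir"
mkdir -p pkg-1.0
printf 'old line\n' > pkg-1.0/file.txt

cat > fix.patch <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-old line
+new line
EOF

cd pkg-1.0                  # patches are applied from inside the source tree
patch -Np1 -i ../fix.patch
cat file.txt                # now reads: new line
```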
LFS - Should Glibc be patched in Section 5.7?
1,367,501,163,000
I have a bash script that uses source to make the script more modular. Here's how it would look copied into a user's bin directory:

/bin
    modules/
        script-1
        script-2
        script-3
        script-4
        script-5
    main-app

However, this doesn't work if you want to execute main-app from a directory other than your bin directory. Is there a way to use source ./modules/script-x in main-app so that I can source these files properly? Or should I be converting main-app into one file? If I do need to, should I do this manually or is there some kinda "compiler" I should use?
Use the $0 positional parameter and dirname:

#!/bin/bash
echo running "$(dirname "$0")/$(basename "$0")"
source "$(dirname "$0")/modules/script-1"
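A throwaway demonstration (using temporary paths as a stand-in for ~/bin) that resolving paths via dirname "$0" makes the sourcing work from any current directory:

```shell
# Build a disposable copy of the bin/modules layout and show that
# main-app finds its module regardless of the caller's cwd.
bindir=$(mktemp -d)
mkdir -p "$bindir/modules"

printf 'greet() { echo "hello from module"; }\n' > "$bindir/modules/script-1"

cat > "$bindir/main-app" <<'EOF'
#!/bin/bash
source "$(dirname "$0")/modules/script-1"
greet
EOF
chmod +x "$bindir/main-app"

cd /                  # run from a completely different directory
"$bindir/main-app"    # prints: hello from module
```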
How to add a modular bash script to `bin`?
1,367,501,163,000
I want install mdbtools from source and I get the following error fatal error: sql.h: No such file or directory so I read the following solutions, but I don't really understand them https://forums.freebsd.org/threads/46299/ https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=189382 I did the following: $ sudo apt install libtool automake autoconf glib2.0 byacc unixodbc $ cd ~/Downloads && git clone https://github.com/brianb/mdbtools $ autoreconf -i -f $ export DOCBOOK_DSL=/usr/share/sgml/docbook/stylesheet/dsssl/modular/html/docbook.dsl $ ./configure --with-unixodbc=/usr/local $ make $ make install This is the full output from the make install command Thanks for the help --->> sudo make install Making install in src make[1]: Entering directory '/home/fabrizio/Downloads/mdbtools/src' Making install in libmdb make[2]: Entering directory '/home/fabrizio/Downloads/mdbtools/src/libmdb' make[3]: Entering directory '/home/fabrizio/Downloads/mdbtools/src/libmdb' /bin/mkdir -p '/usr/local/lib' /bin/bash ../../libtool --mode=install /usr/bin/install -c libmdb.la '/usr/local/lib' libtool: install: /usr/bin/install -c .libs/libmdb.so.2.0.1 /usr/local/lib/libmdb.so.2.0.1 libtool: install: (cd /usr/local/lib && { ln -s -f libmdb.so.2.0.1 libmdb.so.2 || { rm -f libmdb.so.2 && ln -s libmdb.so.2.0.1 libmdb.so.2; }; }) libtool: install: (cd /usr/local/lib && { ln -s -f libmdb.so.2.0.1 libmdb.so || { rm -f libmdb.so && ln -s libmdb.so.2.0.1 libmdb.so; }; }) libtool: install: /usr/bin/install -c .libs/libmdb.lai /usr/local/lib/libmdb.la libtool: install: /usr/bin/install -c .libs/libmdb.a /usr/local/lib/libmdb.a libtool: install: chmod 644 /usr/local/lib/libmdb.a libtool: install: ranlib /usr/local/lib/libmdb.a libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/sbin" ldconfig -n /usr/local/lib ---------------------------------------------------------------------- Libraries have been installed in: /usr/local/lib If you ever happen to want to 
link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the '-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the 'LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the 'LD_RUN_PATH' environment variable during linking - use the '-Wl,-rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to '/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. ---------------------------------------------------------------------- make[3]: Nothing to be done for 'install-data-am'. make[3]: Leaving directory '/home/fabrizio/Downloads/mdbtools/src/libmdb' make[2]: Leaving directory '/home/fabrizio/Downloads/mdbtools/src/libmdb' Making install in extras make[2]: Entering directory '/home/fabrizio/Downloads/mdbtools/src/extras' make[3]: Entering directory '/home/fabrizio/Downloads/mdbtools/src/extras' /bin/mkdir -p '/usr/local/bin' /bin/bash ../../libtool --mode=install /usr/bin/install -c mdb-hexdump '/usr/local/bin' libtool: install: /usr/bin/install -c .libs/mdb-hexdump /usr/local/bin/mdb-hexdump make[3]: Nothing to be done for 'install-data-am'. 
make[3]: Leaving directory '/home/fabrizio/Downloads/mdbtools/src/extras' make[2]: Leaving directory '/home/fabrizio/Downloads/mdbtools/src/extras' Making install in sql make[2]: Entering directory '/home/fabrizio/Downloads/mdbtools/src/sql' make install-am make[3]: Entering directory '/home/fabrizio/Downloads/mdbtools/src/sql' make[4]: Entering directory '/home/fabrizio/Downloads/mdbtools/src/sql' /bin/mkdir -p '/usr/local/lib' /bin/bash ../../libtool --mode=install /usr/bin/install -c libmdbsql.la '/usr/local/lib' libtool: warning: relinking 'libmdbsql.la' libtool: install: (cd /home/fabrizio/Downloads/mdbtools/src/sql; /bin/bash "/home/fabrizio/Downloads/mdbtools/libtool" --silent --tag CC --mode=relink gcc -I../../include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -g -O2 -DSQL -Wall -version-info 2:0:0 -export-symbols-regex "^mdb_sql_" -Wl,--as-needed -o libmdbsql.la -rpath /usr/local/lib mdbsql.lo parser.lo lexer.lo ../libmdb/libmdb.la -lglib-2.0 ) libtool: install: /usr/bin/install -c .libs/libmdbsql.so.2.0.0T /usr/local/lib/libmdbsql.so.2.0.0 libtool: install: (cd /usr/local/lib && { ln -s -f libmdbsql.so.2.0.0 libmdbsql.so.2 || { rm -f libmdbsql.so.2 && ln -s libmdbsql.so.2.0.0 libmdbsql.so.2; }; }) libtool: install: (cd /usr/local/lib && { ln -s -f libmdbsql.so.2.0.0 libmdbsql.so || { rm -f libmdbsql.so && ln -s libmdbsql.so.2.0.0 libmdbsql.so; }; }) libtool: install: /usr/bin/install -c .libs/libmdbsql.lai /usr/local/lib/libmdbsql.la libtool: install: /usr/bin/install -c .libs/libmdbsql.a /usr/local/lib/libmdbsql.a libtool: install: chmod 644 /usr/local/lib/libmdbsql.a libtool: install: ranlib /usr/local/lib/libmdbsql.a libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/sbin" ldconfig -n /usr/local/lib ---------------------------------------------------------------------- Libraries have been installed in: /usr/local/lib If you ever happen to want to link against installed libraries in a 
given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the '-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the 'LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the 'LD_RUN_PATH' environment variable during linking - use the '-Wl,-rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to '/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. ---------------------------------------------------------------------- make[4]: Nothing to be done for 'install-data-am'. make[4]: Leaving directory '/home/fabrizio/Downloads/mdbtools/src/sql' make[3]: Leaving directory '/home/fabrizio/Downloads/mdbtools/src/sql' make[2]: Leaving directory '/home/fabrizio/Downloads/mdbtools/src/sql' Making install in odbc make[2]: Entering directory '/home/fabrizio/Downloads/mdbtools/src/odbc' CC odbc.lo odbc.c:24:17: fatal error: sql.h: No such file or directory compilation terminated. Makefile:494: recipe for target 'odbc.lo' failed make[2]: *** [odbc.lo] Error 1 make[2]: Leaving directory '/home/fabrizio/Downloads/mdbtools/src/odbc' Makefile:375: recipe for target 'install-recursive' failed make[1]: *** [install-recursive] Error 1 make[1]: Leaving directory '/home/fabrizio/Downloads/mdbtools/src' Makefile:474: recipe for target 'install-recursive' failed make: *** [install-recursive] Error 1
Seeing how you're on Ubuntu, most likely you need unixodbc-dev: sudo apt-get install unixodbc-dev. Usually, on Debian-based systems, when the build asks for a header file (.h or .hpp) you need the corresponding -dev package.
mdbtools fatal error: sql.h when installing
1,367,501,163,000
I am using a RHEL 5.5 shared server; my user has complete access to the /opt folder. No root access, can't write to /etc, /usr etc. So, I downloaded httpd-2.4.6 and httpd-2.4.6-deps onto /opt (i.e. /opt/httpd-2.4.6). I installed Apache in /opt/httpd by using ./configure --prefix=/opt/httpd --with-included-apr. It installed and worked without any issues. Then, I wanted to set up this Apache with mod_dav_svn, so I downloaded Subversion 1.6.23 (I prefer SVN 1.6) from the Apache site. But when I compile Subversion with

./configure --prefix=/opt/svn --with-apr=/opt/httpd/bin/apr-1-config --with-apr-util=/opt/httpd/bin/apu-1-config --with-ssl --with-apxs=/opt/httpd/bin/apxs

I got this error:

checking whether Apache version is compatible with APR version... no
configure: error: Apache version incompatible with APR version

I googled the error, which suggested I need to use the latest version of APR, but the APR I used was from httpd-2.4.6-deps.tar.bz2. I checked the version in /opt/httpd-2.4.6/srclib/apr/CHANGES; it was 1.4.8. Isn't that the latest? Can anyone tell me what's the source of the issue?
I found that the issue was a mistake in the handling of double quotes in the configure script of the Subversion source. I compared the line which gave the mismatch error with the configure script in Subversion 1.7.14, and changed

$EGREP "[apache_minor_version= *"$apache_minor_version_wanted_regex"]" >/dev/null 2>&1; then

to

$EGREP "apache_minor_version= *\"$apache_minor_version_wanted_regex\"" >/dev/null 2>&1; then
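The reason the broken line misbehaved can be demonstrated with plain grep: the unescaped inner quotes turn the pattern into a bracket expression, which matches any single character from the listed set, so nearly every line "matches". The sample text below is made up; the real script greps Apache's version information:

```shell
# Made-up stand-in for the line the configure script is looking for.
sample='apache_minor_version= "4"'

# Broken form: [...] is a bracket expression (a character class), so it
# matches any line containing any one of the listed characters:
printf 'totally unrelated line\n' | grep -qE '[apache_minor_version= *"4"]' \
    && echo 'broken pattern: false positive'

# Fixed form, with the quotes escaped into the pattern itself:
printf '%s\n' "$sample" | grep -qE 'apache_minor_version= *"4"' \
    && echo 'fixed pattern: real match'
printf 'totally unrelated line\n' | grep -qE 'apache_minor_version= *"4"' \
    || echo 'fixed pattern: no false positive'
```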
Error while Compiling Subversion with a custom-compiled Apache on a shared server
1,367,501,163,000
I want to download aptitude to build it from source on my 64bit machine. It is a Thinkpad with a normal x86_64 CPU, nothing special. The download page only has: alpha amd64 armel armhf hppa hurd-i386 i386 ia64 kfreebsd-amd64 kfreebsd-i386 m68k mips mipsel powerpc powerpcspe ppc64 s390 s390x sh4 sparc sparc64 x32 I find it hard to believe x86_64 is not available. Am I missing something? Note: I believe ia64 is for Itanium, so not for me.
What you're looking for is called "amd64". "ia64" is Itanium; "i386" is 32-bit Intel 386. The 64-bit architecture was originally developed by AMD, and then adopted by Intel.
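The naming mismatch between what uname -m reports and what Debian calls the architecture can be sketched with a small lookup (only a few common cases here; Debian's real table is longer):

```shell
# Rough mapping from `uname -m` machine names to Debian architecture names.
deb_arch() {
    case "$1" in
        x86_64)  echo amd64 ;;
        i?86)    echo i386 ;;
        aarch64) echo arm64 ;;
        ia64)    echo ia64 ;;
        *)       echo "unknown: $1" ;;
    esac
}

deb_arch "$(uname -m)"   # on the asker's ThinkPad this would print amd64
deb_arch x86_64          # prints: amd64
```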
Debian package available for ia64 but not for x86_64?
1,367,501,163,000
I just downloaded libevent-2.0.21-stable, which I am hoping to compile so that I can use tmux. However, when I run: ./configure --prefix=/path/to/libevent-2.0.21-stable/ make make install everything seems to work well until the last few lines, where I get the following error: ... make[3]: Leaving directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable' make[2]: Leaving directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable' Making install in include make[2]: Entering directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable/include' make[3]: Entering directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable/include' make[3]: Nothing to be done for `install-exec-am'. /bin/mkdir -p '/d4m/vid/bin/utils/libevent-2.0.21-stable/include' /bin/mkdir -p '/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2' /usr/bin/install -c -m 644 event2/buffer.h event2/buffer_compat.h event2/bufferevent.h event2/bufferevent_compat.h event2/bufferevent_ssl.h event2/buffereve nt_struct.h event2/dns.h event2/dns_compat.h event2/dns_struct.h event2/event.h event2/event_compat.h event2/event_struct.h event2/http.h event2/http_compat.h event2/http_struct.h event2/keyvalq_struct.h event2/listener.h event2/rpc.h event2/rpc_compat.h event2/rpc_struct.h event2/tag.h event2/tag_compat.h event2/t hread.h event2/util.h '/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2' /usr/bin/install: `event2/buffer.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/buffer.h' are the same file /usr/bin/install: `event2/buffer_compat.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/buffer_compat.h' are the same file /usr/bin/install: `event2/bufferevent.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/bufferevent.h' are the same file /usr/bin/install: `event2/bufferevent_compat.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/bufferevent_compat.h' are the same file /usr/bin/install: `event2/bufferevent_ssl.h' and 
`/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/bufferevent_ssl.h' are the same file /usr/bin/install: `event2/bufferevent_struct.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/bufferevent_struct.h' are the same file /usr/bin/install: `event2/dns.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/dns.h' are the same file /usr/bin/install: `event2/dns_compat.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/dns_compat.h' are the same file /usr/bin/install: `event2/dns_struct.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/dns_struct.h' are the same file /usr/bin/install: `event2/event.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/event.h' are the same file /usr/bin/install: `event2/event_compat.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/event_compat.h' are the same file /usr/bin/install: `event2/event_struct.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/event_struct.h' are the same file /usr/bin/install: `event2/http.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/http.h' are the same file /usr/bin/install: `event2/http_compat.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/http_compat.h' are the same file /usr/bin/install: `event2/http_struct.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/http_struct.h' are the same file /usr/bin/install: `event2/keyvalq_struct.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/keyvalq_struct.h' are the same file /usr/bin/install: `event2/listener.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/listener.h' are the same file /usr/bin/install: `event2/rpc.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/rpc.h' are the same file /usr/bin/install: `event2/rpc_compat.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/rpc_compat.h' are the same file /usr/bin/install: `event2/rpc_struct.h' and 
`/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/rpc_struct.h' are the same file /usr/bin/install: `event2/tag.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/tag.h' are the same file /usr/bin/install: `event2/tag_compat.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/tag_compat.h' are the same file /usr/bin/install: `event2/thread.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/thread.h' are the same file /usr/bin/install: `event2/util.h' and `/d4m/vid/bin/utils/libevent-2.0.21-stable/include/event2/util.h' are the same file make[3]: *** [install-nobase_includeHEADERS] Error 1 make[3]: Leaving directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable/include' make[2]: *** [install-am] Error 2 make[2]: Leaving directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable/include' make[1]: *** [install-recursive] Error 1 make[1]: Leaving directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable' make: *** [install] Error 2 make install 6.15s user 8.22s system 86% cpu 16.622 total Running make verify also returns the following error: EPOLL test-eof: OKAY test-weof: OKAY test-time: OKAY test-changelist: OKAY regress: OKAY EPOLL (changelist) test-eof: OKAY test-weof: OKAY test-time: OKAY test-changelist: OKAY regress: FAIL regress.c:717: assert(abs(timeval_msec_diff(((&start)), ((&res.tvs[2]))) - (500)) <= 50): 145 vs 50main/persistent_active_timeout: [persistent_active_timeout FAILED] 1/179 TESTS FAILED. 
(0 skipped) FAILED DEVPOLL Skipping test POLL test-eof: OKAY test-weof: OKAY test-time: OKAY test-changelist: OKAY regress: OKAY SELECT test-eof: OKAY test-weof: OKAY test-time: OKAY test-changelist: OKAY regress: OKAY WIN32 Skipping test FAIL: ../test/test.sh ================== 1 of 1 test failed ================== make[4]: *** [check-TESTS] Error 1 make[4]: Leaving directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable/test' make[3]: *** [check-am] Error 2 make[3]: Leaving directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable/test' make[2]: *** [check] Error 2 make[2]: Leaving directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable/test' make[1]: *** [check-recursive] Error 1 make[1]: Leaving directory `/nfs/titan7/u11/vid/bin/utils/libevent-2.0.21-stable' make: *** [check] Error 2 make verify 22.71s user 24.82s system 16% cpu 4:41.78 total I am stuck at this point. Why do the compilation and verification fail?
It seems that you want to install the files to the same place where you extracted the tarball. Extract the tarball to a different place or try a different prefix and it should work (worse option: use make install -i to ignore all error messages).
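The root cause can be reproduced in miniature: install (like cp) refuses to copy a file onto itself, which is exactly what happens when --prefix points back into the extracted source tree:

```shell
# A header "installed" into the directory it already lives in.
tmp=$(mktemp -d)
cd "$tmp"
echo 'int x;' > buffer.h

if install -m 644 buffer.h "$tmp" 2>&1; then
    echo 'unexpected success'
else
    # GNU install reports something like "'buffer.h' and '.../buffer.h'
    # are the same file" and exits non-zero, just as in the failing log.
    echo 'install refused: source and destination are the same file'
fi
```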
Unable to install libevent without admin privileges
1,367,501,163,000
Issue

This question, and answers containing various approaches to solving it, comes up almost daily on our exchange here. If a search result brought you here: Welcome! If a comment I left in your question linked here, you can safely assume your question could be added to the list below, as it fits in the same category:

- Disagreement between glib and gcc after gcc downgrade
- How can I downgrade to a specific gcc version on Fedora 39?
- How I can install some gcc compilers in arch linux?
- Debian Bullseye: Install gcc-11.4.0 and dependencies

Those are just four of the many questions tagged under [gcc] with the search term downgrade. Those readers knowledgeable enough to come to the exchange first almost always ask "How can I...". Those unlucky readers that tried something that failed almost always use the phrase "It broke" or "I think I broke something". In this question and answer, I'll attempt to explain why mixing compilers breaks all Linuxes, and in the answer I'll provide the simplest way that I know to fix the issue, one that doesn't cause breakage but costs a bit of overhead in terms of time and space to set up. Since this is going to be a community-contributed Question and Answer, I'm kindly going to ask you to upvote this if you like my approach.

Issue Explained

TLDR: Skip down to "Why Does Downgrading Compilers Cause Issues?"

I've used the picture below two or three times here across various answers, and feel it provides us all with a good place to start. I realize it's big, and as such, I urge readers to open it in a separate tab and zoom in if you like. Find your distribution in that list, and then continue reading. You'll notice that your distribution has a parent distribution. OK, OK, I know the readers using the parent distributions (the ones on the far left of the timeline) are asking: I'm using one of the ones on the far left, so where are its parents? And now we get to the meat of this question.
For example's sake, let's fill in $X and $Y from my title (patience, readers, $Z is coming up):

$X = Fedora 39
$Y = 13.2.1-6.fc39

Definitions

Distribution (quoting Wikipedia): A Linux distribution (often abbreviated as distro) is an operating system made from a software collection that includes the Linux kernel and often a package management system.

Package Management System (quoting again): A package manager or package-management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs for a computer in a consistent manner.

Knowing these two items helps us answer the parent-distribution question, but unfortunately with another question: How do the distribution maintainers create a distribution? The answer: All Linux software minus the kernel is stored at, and can be obtained from, the GNU Software FTP Site in source code archives. All Linux kernels are available for download in source code archives at The Linux Kernel Archives. In short, all distributions begin from the same source code, including the parent distributions.

Why Does Downgrading Compilers Cause Issues?

From the definition earlier, a distribution is a group or set of related software. This relation can be seen in your distribution's repository (or whatever else your distribution has chosen to name it). At the time a new version (in our example, $X = Fedora 39) is made available, the repository for that version is locked, specifically version locked, meaning that every package in that repository is now frozen in time. Once frozen, it isn't altered. It can only live as long as the version is supported, or die when the version is upgraded. The tools required to build GCC are also in the now-frozen repository. If a user attempts to upgrade or downgrade the GCC that was shipped in the frozen repository, the version locking would be broken if you were to succeed.
To prevent that breakage from happening, your OS's package manager blocks the change. Read my answer to find out how I overcame $Z
My Solution

First off, I guess I would be considered a purist, in that if a package isn't in your repository, or can't be added from a third-party repository, you shouldn't attempt to compile it from scratch or "force it to work", because doing so short-circuits your package manager. With that in mind, I present a rock-solid solution.

Virtualization Is Our Friend

We need some tools and a bit of hard drive space for this approach. I'll leave making space available to you, but aside from that, follow these steps:

- Install VirtualBox with your package manager
- Install Vagrant with your package manager

Note: Vagrant will work with Hyper-V and docker out of the box I believe (see Stephen Kitt's answer for help with docker), but I chose VirtualBox because that's what I learned first, and if needed I can install a desktop in the VM to configure the VM further.

With our virtualization helpers installed, we can continue. I'm continuing by:

Gathering Requirements - I came up as a web programmer, so let's gather requirements from our customer. For our purposes, I'm going to fill in $Z now. Using example question 2 from the Issue heading above, we have the following values:

$X = Fedora
$Y = 13.2.1-6.fc39
$Z = 12.x.x

Always use the following sentence as the search term in your favorite engine: Which release of $X contained GCC $Z. So using our example: Which release of Fedora contained GCC 12. Now quoting Google:

Update the Fedora 36 GNU Toolchain to gcc 12 and glibc 2.35. The gcc 12 is currently under development and will be included in Fedora 36 upon release. Feb 16, 2022

Let's check to see if the above release - 36 - is still supported. In our search engine: $X release history, which equates to Fedora release history in our example, and gives us Fedora Linux release history. Well, what do we do now? It looks like Fedora 36 is no longer supported. We added Vagrant for this very reason.
If we visit The Vagrant Cloud, we can search for Fedora 36, which yields these results, and using a Vagrantfile, we can customize a full VM and develop our application or website using the VM while saving our project files on our host system.

Reason Why I Chose This Approach

Using virtualization separates your development environment from the system you use day-to-day, and keeps you from making an irreparable mistake. I realize it will cost a bit of extra setup time, but Vagrant only needs to be installed once. A Vagrantfile only needs to be created each time your project requirements change. The small learning curve for Vagrant and the virtualization is worth the tradeoff of reinstalling an OS if a mistake is made trying to figure out how to install multiple compilers.
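A minimal Vagrantfile for such a Fedora 36 build VM might look like the sketch below; the box name and the provisioned package list are assumptions, so substitute whichever Fedora 36 box your Vagrant Cloud search turned up:

```ruby
# Minimal Vagrantfile sketch for a Fedora 36 build VM.
# "generic/fedora36" is an assumed box name; verify it on Vagrant Cloud.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/fedora36"

  # Share the project from the host so the files outlive `vagrant destroy`:
  config.vm.synced_folder ".", "/vagrant"

  # Install the old toolchain inside the VM on first `vagrant up`:
  config.vm.provision "shell", inline: <<-SHELL
    dnf -y install gcc gcc-c++ make
  SHELL
end
```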
I'm Using Distribution $X That was shipped With Compiler $Y, but I Need Compiler Version $Z
1,367,501,163,000
As the title suggests it is the output file it cannot find, not sure why this is an issue. Here is my c_cpp_properties.json:

{
    "configurations": [
        {
            "name": "Linux",
            "includePath": [
                "${workspaceFolder}/**",
                "~/edu/doa/code_base/datastructures-v1.0.13.0/include"
            ],
            "defines": [],
            "compilerPath": "/usr/bin/gcc",
            "cStandard": "c99",
            "cppStandard": "gnu++17",
            "intelliSenseMode": "linux-gcc-x64",
            "compilerArgs": [
                "-Wall"
            ]
        }
    ],
    "version": 4
}

here is my tasks.json:

{
    "tasks": [
        {
            "type": "cppbuild",
            "label": "C/C++: gcc build int_array_1d_mwe with DoA code base options",
            "command": "/usr/bin/gcc",
            "args": [
                "-fdiagnostics-color=always",
                "-std=c99",
                "-Wall",
                "-I",
                "~/edu/doa/code_base/datastructures-v1.0.13.0/include/", //Declarations
                "~/../../usr/include/",
                "-g",
                //"${workspaceFolder}/tabletest-1.9.c",
                "${workspaceFolder}/graph2.c", //Your_file.c
                "~/edu/doa/code_base/datastructures-v1.0.13.0/src/dlist/dlist.c", //Definitions
                "/home/manfred/edu/doa/code_base/datastructures-v1.0.13.0/src/array_1d/array_1d.c",
                "-o",
                "${workspaceFolder}/outputfile" //Output
            ],
            "options": {
                "cwd": "${fileDirname}"
            },
            "problemMatcher": [
                "$gcc"
            ],
            "group": {
                "kind": "build",
                "isDefault": true
            },
            "detail": "Customized for int_array_1d_mwe and DoA code base 1.0.13.0."
        }
    ],
    "version": "2.0.0"
}

Here is the output when I run the program (Debug anyway): (screenshot not included)
Here is the error page when I click Show Errors after failed compilation: (screenshot not included)
Here is the 'Debug anyway' popup I get: (screenshot not included)
In your task.json file, you have -I followed by a directory path. This seems ok, but then you have another directory path, ~/../../usr/include/, before the -g debug option. The compiler would see the second directory path and, since it's not an argument to any option, would try to use it in the compilation as if it was a source file. A diagnostic message mentions this directory path in your debug output. What's missing is another -I option before the second directory path. Alternatively, since ~/../../usr/include/ is probably the same as /usr/include (unless your home directory is lower down in the file hierarchy than usual on a Linux system), which would likely be searched by default, you may instead want to remove the mentioning of that path in the JSON task specification. I'm not a VSCode user and have never seen these JSON files before. I'm just looking at the error messages and inferring what must have happened and why.
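Concretely, the fix would make the relevant part of the question's args array look something like this sketch (only the second -I entry is new; everything elided is unchanged):

```
"args": [
    "-fdiagnostics-color=always",
    "-std=c99",
    "-Wall",
    "-I", "~/edu/doa/code_base/datastructures-v1.0.13.0/include/",
    "-I", "~/../../usr/include/",
    "-g",
    ...
],
```

Dropping the "~/../../usr/include/" entry entirely is equally valid, since /usr/include is normally searched by default.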
gcc doesn't find output file and therefore cannot compile my c program (vscode)
1,367,501,163,000
Wanted to try out the st terminal. In the Requirements section in its README:

Requirements
------------
In order to build st you need the Xlib header files.

What are 'Xlib' & 'Xlib header files'? What packages should I install? Using Debian stable.
Xlib is the X11 client library, and the headers are files needed to build programs using it. On Debian you need to install libx11-dev.
What are the Xlib header files and how can I install them?
1,367,501,163,000
I wonder how one could force, to take a real example:

CFLAGS='-O2 -march=native' CXXFLAGS='-O2 -march=native' CC='gcc-10' CPP='gcc-10 -E' CXX='g++-10'

when running the configure script, in my case for the Transmission 3.00 BitTorrent client? Editing the configure file seems a bit tricky and, more importantly, not universally usable.
The documented way to override variables when running configure is to specify their values as arguments to configure, as explained by ./configure --help:

`configure' configures transmission 3.00 to adapt to many kinds of systems.

Usage: ./configure [OPTION]... [VAR=VALUE]...

To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. See below for descriptions of some of the useful variables.

In your case,

./configure --disable-cli --disable-mac --disable-daemon --enable-utp --with-gtk --with-crypto=openssl CFLAGS='-O2 -march=native' CXXFLAGS='-O2 -march=native' CC=gcc-10 CPP='gcc-10 -E' CXX=g++-10

configure takes environment variables into account by default, which is why setting them also works. In both cases, the values set are preserved in config.status (if the variables are marked as "precious") and taken into account with config.status --recheck. The Autoconf documentation recommends specifying variables as arguments rather than relying on the environment.
Forcing overrides when configuring a compile (e.g. CXXFLAGS, etc.)
1,367,501,163,000
I've been writing a script that can compile a Linux Distro, which you can find here. Essentially, it creates /mnt/semcos, and throws up a linux-based system there. At the moment, I'm stuck at compiling busybox-1.31.1 - I get the following error:

date.c(.text.rdate_main+0xe4): undefined reference to `stime'
collect2: error: ld returned 1 exit status

Why am I getting this error?
The error you reference is a problem finding the symbol stime(). Looking at man 2 stime I see:

NOTES
    Starting with glibc 2.31, this function is no longer available to newly linked applications and is no longer declared in <time.h>.

My guess is that you have glibc 2.31 or greater. Note that the calls to stime() were removed from BusyBox in version 1.32. If you update your script to use that version, that should resolve your problem.
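To confirm the guess, something along these lines can check which glibc the toolchain links against and whether it is new enough (>= 2.31) to have dropped stime(). Note that getconf GNU_LIBC_VERSION is glibc-specific and will simply fail on other C libraries:

```shell
# Report the glibc version and whether stime() has been removed from it.
glibc_ver=$(getconf GNU_LIBC_VERSION 2>/dev/null | awk '{print $2}')
echo "glibc version: $glibc_ver"

# Version comparison via sort -V (GNU "version sort"):
version_ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

if version_ge "$glibc_ver" 2.31; then
    echo "stime() is gone; a BusyBox >= 1.32 is needed"
fi
```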
Failed compiling Busybox: very long and confusing error
1,367,501,163,000
I try to compile a program by typing (let's assume here I'm cd'd into the directory) clang++ file_name.cpp. It compiles, and I expected it to run automatically, but it doesn't; it just finishes and returns me to the terminal prompt. I see a file named 'a.out' in the directory. If I type clang++ file_name.cpp again, nothing happens, and I am back in the terminal. If I type clang++ a.out.cpp, the compiler says it can't find the file and exits. I checked that clang is properly installed, and it is. I don't know what's wrong.
A compiler like clang++ only does compiling of the source code. In your case, it creates the executable file a.out (since you didn't explicitly tell it to use some other output filename using the -o option). The compiler will not automatically run the resulting executable. These things also hold true for g++ (the GNU C++ compiler) as well as for both the clang and gcc C compilers (and most other compilers of languages requiring compilation). To run the executable, issue the command ./a.out at the shell's command prompt. To give the executable another name than the traditional default of a.out, use something like clang++ -o myprog file_name.cpp to create myprog from the sources in file_name.cpp. Given the sources in the single file file_name.cpp, make can also be used to compile the sources into the executable file_name using the command make file_name while in the same directory as the source code file (but only if the source code has been updated since file_name was last compiled). This requires no Makefile to be present but will instead use implicit rules built into make for compiling C++ sources. Use CXX=clang++ make file_name to explicitly use the clang++ compiler. For more on this, see the GNU make documentation about implicit rules (other implementations of make, e.g. on BSD systems, use similar implicit rules).
Clang++ Compiles, but doesn't run
1,367,501,163,000
Ok, maybe this is not the right thread, if so, please point me to the right one. Some background: I've seen other Open Source projects derived from FreeBSD (FreeNAS, PFSense, etc) that called themselves "an OS". They do give (marginal) credit to FreeBSD, but apparently they have little or no modification to the OS itself, and instead they appear to be just a "FreeBSD base OS" plus a collection of other open source packages, custom configurations, a suite of middleware scripts and a web frontend. Because of that, I would say they are not an actual "OS" but more like a 'suite of apps' built on top of FreeBSD. A good example would be FreeNAS, which relies heavily on Python and Angular to provide a user-friendly way to do the same things FreeBSD alone can do from the shell. They do present themselves as an "OS" (their website states "FreeNAS is an operating system that can......") Now, my question: To what extent can someone use a FreeBSD base OS, modify a couple of things here and there, add some apps to it and legally call it a "BlahBlah OS" (with a different name, branding etc.)? This is a technical question and a legal one as well. I'm not well versed on the practical applications of the FreeBSD license, but I'm sure some of you are and can put it in plain language.
The FreeBSD license has a copyright notice, one line with two conditions, and a warranty disclaimer. After having to read some other contracts and license agreements, this one is quite easy to get through. This is known as the 2-clause BSD license which is a derivative of the 3-clause BSD license. IANAL, but the BSD family of licenses are known for being very permissive, as in they allow any redistribution of source or binary code as long as they continue to include the copyright notice and BSD license. It even explicitly mentions permitting distribution of modified versions, which likely includes changing the name of the OS. As long as they release the possibly modified FreeBSD code/software with the original copyright notice and the BSD license, they seem to have those rights granted by that license.
Why some projects are a 'repacked-rebranded' FreeBSD and call themselves a 'different' OS? [closed]
1,367,501,163,000
I just compiled xdebug as instructed on the official site. But where can I find the resulting .so file? I found one in the folder ./libs/xdebug.so and one in modules/xdebug.so. Questions Which one do I need, there is no info on the official site? Why is there not simply a folder called result or similar?
The last part of the build, as run by make, tells you explicitly where the library is installed: ---------------------------------------------------------------------- Libraries have been installed in: /tmp/user/1000/xdebug/modules If you ever happen to want to link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the '-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the 'LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the 'LD_RUN_PATH' environment variable during linking - use the '-Wl,-rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to '/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. ---------------------------------------------------------------------- So the answer is that you want the .so which is modules (which is identical to the one in .libs, but the latter is a libtool implementation detail). make install copies that .so into the target directory, which is the PHP API directory as determined by phpize (/usr/lib/php/20151012 on the system I tested this on). PHP should be able to pick it up there automatically. So really, if you follow the upstream instructions all the way to the end, you don’t need to care about the answer: make install does the right thing, making the module available to your PHP installation.
Where to find the resulting .so file of a compilation?
1,367,501,163,000
I installed Intel Parallel Studio on Ubuntu 18.04, but when I try to use ICC (/opt/intel/bin/cc), I receive the error: /opt/intel/composer_xe_2015.3.187/compiler/include/math.h(1214): error: identifier "_LIB_VERSION_TYPE" is undefined According to the Intel forum, the error is because Ubuntu 18.04 is an unsupported OS. However, the latest supported version is 14.04. The same for other Linux distributions, at least 4 years old. Many programmers should use Intel compiler on latest versions of Linux distributions (including Ubuntu), and therefore, this error should have a solution. Any suggestion?
In this particular case the solution really has to come from Intel, and they do appear to be working on it; quoting the reply given to the forum post you linked: Currently our latest version doesn't support Ubuntu 18.4LTS. We will let you know when it's available. 2018 release 2 supports Ubuntu 17.10, and 2019 beta supports 18.04 so you’ll get support when that’s released (the beta is already available for download).
How to use Intel Compiler in Linux?
1,367,501,163,000
As a user of OpenSUSE I am used to type: gcc -lz myfile.c I was surprised that on Ubuntu this command would fail with something like: myfile.c:(.text+0x5): undefined reference to `zlibVersion' collect2: error: ld returned 1 exit status With gcc -v I found that the collect2 command generated by the GCC C compiler on Ubuntu starts with --as-needed while on OpenSUSE this option is not there. I.e. the command line on Ubuntu looks like: /usr/lib/gcc/x86_64-linux-gnu/5/collect2 --build-id --eh-frame-hdr\ -m elf_x86_64 --hash-style=gnu --as-needed -dynamic-linker \ ....[a lot of stuff removed].....\ -lz /tmp/cc7kz9Nz.o ....[yet more stuff removed].....\ /usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crtn.o While on OpenSUSE it looks quite similar but for the --as-needed option. /usr/lib64/gcc/x86_64-suse-linux/4.8/collect2 --build-id --eh-frame-hdr\ -m elf_x86_64 -dynamic-linker \ ....[a lot of stuff removed].....\ -lz /tmp/cccpZlmL.o ....[yet more stuff removed].....\ /usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../lib64/crtn.o Where did this difference come from? Has it been discussed somewhere? Should I never put library names before the source file name?
This is documented on the Ubuntu wiki. It’s set by default to reduce the number of dependencies in packages, but as you discovered it means the order of libraries is significant: you need to ensure that objects (of any type) appear before the libraries they use. You can disable this by passing -Wl,--no-as-needed to gcc (or --no-as-needed directly to the linker).
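The same left-to-right rule can be reproduced without zlib, e.g. with a static archive (the file names below are fabricated): the linker only extracts symbols that are already undefined when it reaches the library, so the library must come after the object that needs it — exactly why gcc -lz myfile.c fails while gcc myfile.c -lz works.

```shell
cat > helper.c <<'EOF'
int helper(void) { return 0; }
EOF
cat > use.c <<'EOF'
int helper(void);
int main(void) { return helper(); }
EOF
cc -c helper.c
ar rcs libhelper.a helper.o
cc use.c -L. -lhelper -o use   # works: library after the object that uses it
# cc -L. -lhelper use.c -o use # would fail: undefined reference to `helper'
./use
```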
Why should I put the source file name before library names on compiler command line?
1,367,501,163,000
I have Linux with a kernel that was compiled with the real-time patch, but the config option CONFIG_PREEMPT_RT_FULL was not enabled (it says in /proc/config that it is not set). Do you know if there is any way to turn this on, without having to recompile the kernel? I guess it's not possible but maybe there's some way?
No, it’s a solely compile-time configuration option, there’s no runtime equivalent. You’ll need to rebuild your kernel.
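Since the option only exists at build time, all you can do at runtime is check how the kernel was configured, the way the asker did via /proc/config. A tiny helper, run here against a fabricated config fragment rather than a real kernel config:

```shell
# succeeds if OPTION=y in a kernel .config-style file
kconfig_enabled() {
  grep -q "^$2=y\$" "$1"
}

cat > sample.config <<'EOF'
CONFIG_PREEMPT=y
# CONFIG_PREEMPT_RT_FULL is not set
EOF

kconfig_enabled sample.config CONFIG_PREEMPT         && echo "PREEMPT: y"
kconfig_enabled sample.config CONFIG_PREEMPT_RT_FULL || echo "RT_FULL: not set"
```

On a real system you would point it at /boot/config-$(uname -r) or a decompressed /proc/config.gz.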
Enable CONFIG_PREEMPT_RT_FULL after the kernel compilation
1,518,444,908,000
I'm compiling PHP for an application with make. The problem is that when I do ldd php I get something like this: libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007f5b4e661000) But libk5crypto.so.3 is a symlink that points to libk5crypto.so.3.1. I would like my php to point directly to libk5crypto.so.3.1. Is it possible? EDIT: I have a web application with a PHP server that I compile myself. I don't want to install it in /etc, I just want it to be inside my application. Inside my application I have a folder named server where I store php, fop, mapserver etc... Inside my php folder, I have a lib folder into which I put all the dependencies (from ldd bin/php). When I install my application, I modify the file /etc/ld.so.conf to add the lib dir from my PHP server, then I run ldconfig. Sometimes the libs already exist in /usr/lib/x86_64-linux-gnu and PHP takes those libs instead of the ones in its own folder. It's usually not a problem, but sometimes a lib inside /usr/lib... has the same major version but a lower minor version. PHP tries to get it from /usr/lib and throws me an error, because PHP was compiled against the newest dependencies. It's for that reason that I want to point to libk5crypto.so.3.1 directly. When I update my application, I delete my php and put a newer one in with all the new libs. Another thing: I tried to tell PHP to look for the libs in a given directory, but my problem is that I don't know where it will be at compile time. EDIT for JigglyNaga: I compile PHP, then I compile imap and other extensions. The problem is with PHP and the extensions. The compilation output is shorter for imap, so I'm giving you all of it. root@ubuntu16:~/compilPHP/php-7.2.2/ext/imap# make /bin/bash /root/compilPHP/php-7.2.2/ext/imap/libtool --mode=compile cc -I.
-I/root/compilPHP/php-7.2.2/ext/imap -DPHP_ATOM_INC -I/root/compilPHP/php-7.2.2/ext/imap/include -I/root/compilPHP/php-7.2.2/ext/imap/main -I/root/compilPHP/php-7.2.2/ext/imap -I/php/include/php -I/php/include/php/main -I/php/include/php/TSRM -I/php/include/php/Zend -I/php/include/php/ext -I/php/include/php/ext/date/lib -I/usr/include/c-client -DHAVE_CONFIG_H -g -O2 -c /root/compilPHP/php-7.2.2/ext/imap/php_imap.c -o php_imap.lo mkdir .libs cc -I. -I/root/compilPHP/php-7.2.2/ext/imap -DPHP_ATOM_INC -I/root/compilPHP/php-7.2.2/ext/imap/include -I/root/compilPHP/php-7.2.2/ext/imap/main -I/root/compilPHP/php-7.2.2/ext/imap -I/php/include/php -I/php/include/php/main -I/php/include/php/TSRM -I/php/include/php/Zend -I/php/include/php/ext -I/php/include/php/ext/date/lib -I/usr/include/c-client -DHAVE_CONFIG_H -g -O2 -c /root/compilPHP/php-7.2.2/ext/imap/php_imap.c -fPIC -DPIC -o .libs/php_imap.o /bin/bash /root/compilPHP/php-7.2.2/ext/imap/libtool --mode=link cc -DPHP_ATOM_INC -I/root/compilPHP/php-7.2.2/ext/imap/include -I/root/compilPHP/php-7.2.2/ext/imap/main -I/root/compilPHP/php-7.2.2/ext/imap -I/php/include/php -I/php/include/php/main -I/php/include/php/TSRM -I/php/include/php/Zend -I/php/include/php/ext -I/php/include/php/ext/date/lib -I/usr/include/c-client -DHAVE_CONFIG_H -g -O2 -o imap.la -export-dynamic -avoid-version -prefer-pic -module -rpath /root/compilPHP/php-7.2.2/ext/imap/modules php_imap.lo -Wl,-rpath,/usr/lib/x86_64-linux-gnu/mit-krb5 -L/usr/lib/x86_64-linux-gnu/mit-krb5 -lc-client -lcrypt -lpam -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lssl -lcrypto cc -shared .libs/php_imap.o -L/usr/lib/x86_64-linux-gnu/mit-krb5 -lc-client -lcrypt -lpam -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lssl -lcrypto -Wl,-rpath -Wl,/usr/lib/x86_64-linux-gnu/mit-krb5 -Wl,-soname -Wl,imap.so -o .libs/imap.so creating imap.la (cd .libs && rm -f imap.la && ln -s ../imap.la imap.la) /bin/bash /root/compilPHP/php-7.2.2/ext/imap/libtool --mode=install cp ./imap.la 
/root/compilPHP/php-7.2.2/ext/imap/modules cp ./.libs/imap.so /root/compilPHP/php-7.2.2/ext/imap/modules/imap.so cp ./.libs/imap.lai /root/compilPHP/php-7.2.2/ext/imap/modules/imap.la PATH="$PATH:/sbin" ldconfig -n /root/compilPHP/php-7.2.2/ext/imap/modules FINAL EDIT: It worked; I set the rpath before running make: export LDFLAGS='-Wl,-rpath,\$${ORIGIN}/../lib' Thanks a lot for all your answers.
Adding this as another answer because the other one can still stand on its own, but your problem (after you clarified) is different. If this is about libraries that are shipped with your application and also by the system, then the situation is not comparable. You're almost at what you need to do to fix it, but not quite :-) There are several solutions to your problem: LD_LIBRARY_PATH, rpath, shipping changed libraries, and compiling multiple times. Each has its advantages and issues, so let me explain:

LD_LIBRARY_PATH

If you go down this route, then you set an environment variable before you run your program. It probably requires that you use a wrapper script around your binary (i.e., rather than directly running /path/to/my/php, you run a shell script which first sets LD_LIBRARY_PATH and then runs /path/to/my/php). The downside of this method is that it's a bit fragile: LD_LIBRARY_PATH is prepended to the library search path, but it does not replace it. That means that if the libraries in question are installed system-wide but for some reason the ones you shipped can't be loaded, the dynamic linker will fall back on the system-provided ones. The requirement to call a shell script means you have an extra fork/exec call, which may make things go wrong. This can be mitigated somewhat by using the exec command in your shell script (so that the script gets replaced by the php binary, and so that your php program is a proper child of the parent), but it's still messy. On the other hand, it allows you some flexibility with where you store your libraries (i.e., if the system-provided one is good enough in some cases, it's fine if you remove it from your LD_LIBRARY_PATH directory).

rpath

Here, the idea is that you tell the compiler (by way of gcc -Wl,-rpath,'/path/to/library') exactly where to look for the library. This gets hardcoded into the program at compile time, and the dynamic linker will then absolutely ignore versions of the library that are outside your provided rpath, including system-provided versions. This avoids the messiness of LD_LIBRARY_PATH above, but the downside is that it makes things less flexible; if you need to move things around in the filesystem, you need to recompile everything. It also means that if your users want to see things installed in a different way, they're out of luck. They may not like that. Both methods are documented in the ld.so man page.

Shipping changed libraries

Here, the idea is that rather than trying to link to libk5crypto.so.3, you link to libmycorp-k5crypto.so.3. There will be zero chance of the dynamic linker picking up the system-provided libk5crypto.so.3 in that case. The advantage here is that it's fairly easy and elegant once installed; the downside is that people might start wondering whether you changed libk5crypto at all (and ask you for patches), and you'll also have to dive deep into the libk5crypto.so build system to make it actually emit a libmycorp-k5crypto.so library. It might also look bad in the long run, so be careful before you go down this route.

Compiling multiple times

Rather than shipping libraries that are also provided system-wide, you can compile your application on every supported distribution and ship a package for each distribution, rather than shipping your libraries. Things like packagecloud.io make that easier to do. Since there is only one library with a given name on the system, there is only one library to pick and no chance of picking the wrong one. The downside is that you have a larger range of things to test, so you have more work at release time (and you had better have a good test suite). The advantage is that this method ensures you ship less than you otherwise would (so you have less to support), and you can tell users who are on a distribution you no longer support (one shipping an older libk5crypto.so) that they should update that first.
Compilation with make: link to library
1,518,444,908,000
I'm attempting to install automake-1.13.4 on my system. First, I do ./configure which creates a Makefile compatible with my system. However, when I execute make, it runs for a bit, but then returns the following error message: /bin/sh: -c: line 5: syntax error near unexpected token || /bin/sh: -c: line 5: ` { || exec 5>&2 >$tmp 2>&1; } \' make: *** [doc/amhello-1.0.tar.gz] Error 1 I can't seem to figure out why this is occurring. Any help would be greatly appreciated.
I've figured it out. This error is related to the amhello-1.0.tar.gz file. The originally provided file was not configured properly for my system. Therefore, if I rebuild the file myself and replace the original amhello-1.0.tar.gz, then I can run make with no errors. To see how to rebuild amhello-1.0.tar.gz so that it's configured properly for your system, see the link below: https://www.gnu.org/software/automake/manual/html_node/Creating-amhello.html By the way, it is also important to run autoreconf -vfi before compiling the package.
Error in bin/sh when installing automake-1.13
1,518,444,908,000
OS: Linux Mint 18.2 Cinnamon 64-bit. I would like to compile the following: p7zip_16.02_src_all.tar.bz2 with SHA256: 5eb20ac0e2944f6cb9c2d51dd6c4518941c185347d4089ea89087ffdd6e2341f I extracted it as follows: tar -xjf p7zip_16.02_src_all.tar.bz2 I read the README file, specifically that I need to replace the makefile with my machine's equivalent: According to your OS, copy makefile.linux, makefile.freebsd, makefile.cygwin, ... over makefile.machine So I did: cp makefile.linux_amd64 makefile.machine It also says it is possible to build in parallel, in my case with 8 cores: If you want to make a parallel build on a 4 cpu machine : make -j 4 TARGET So I did: make -j 8 all_test With the result: Everything is Ok Now, I would like to proceed further, but: make -j 8 depend This throws errors: fatal error: wx/wxprec.h: No such file or directory So, I searched for a package that contains the header file: apt-file search wxprec.h which says: wx3.0-headers: /usr/include/wx-3.0/wx/wxprec.h So, I installed that package: sudo apt-get install wx3.0-headers but it still throws the same error.
You need to install the build dependencies before running make: sudo apt-get build-dep p7zip This will install the missing dependencies.
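One common snag with build-dep: it needs deb-src entries in the sources list, and on Mint/Ubuntu those are often commented out, which makes it fail with "E: You must put some 'source' URIs in your sources.list". The fix is demonstrated here on a throwaway copy rather than the real /etc/apt/sources.list (xenial matches Mint 18.2's Ubuntu base):

```shell
cat > sources.list <<'EOF'
deb http://archive.ubuntu.com/ubuntu xenial main universe
# deb-src http://archive.ubuntu.com/ubuntu xenial main universe
EOF
# uncomment the deb-src lines; on a real system you would edit
# /etc/apt/sources.list and then run:
#   sudo apt-get update && sudo apt-get build-dep p7zip
sed -i 's/^# *deb-src/deb-src/' sources.list
grep '^deb-src' sources.list
```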
wx/wxprec.h: No such file or directory
1,518,444,908,000
I thought my compilation was okay because no errors were printed out, yet when I try to run the executable, the shell tells me it can't be found... coppan12@b048-08:~$ gcc -Wall prog.c -o prog coppan12@b048-08:~$ prog La commande « prog » est introuvable ("The command 'prog' could not be found") Any hint?
Try ./prog to run prog in the current working directory, as . is typically not (nor should be) in PATH. Also, a Makefile is perhaps much more sensible, as then you can simply type make test and have the program built (if necessary) and tested: prog: prog.c test: prog echo blah de blah | ./prog A Makefile can also integrate with emacs or vim based testing, among other advantages... (disadvantage: Makefiles use tabs, so ensure any rules are tabbed in, not with spaces, sigh.)
Is my compilation false?
1,518,444,908,000
I'm trying to compile Caribou 0.4.18.1 on Xubuntu 14.04. In the INSTALL document it says, that I should run ./configure && make && make install. But the ./configure step ended with: checking for python platform... linux2 checking for python script directory... ${prefix}/lib/python2.7/dist-packages checking for python extension module directory... ${exec_prefix}/lib/python2.7/dist-packages checking for CARIBOU... no configure: error: Package requirements ( pygobject-3.0 >= 2.90.3, gtk+-3.0 >= 3.0.0, clutter-1.0 >= 1.5.11, gdk-3.0 >= 3.0.0, x11, atspi-2 ) were not met: No package 'pygobject-3.0' found No package 'gtk+-3.0' found No package 'clutter-1.0' found No package 'gdk-3.0' found No package 'atspi-2' found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables CARIBOU_CFLAGS and CARIBOU_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. Trying to install any of these packages fails, because they're not in the package index. I've seen this with some other programs I wanted to compile, so it seems to be a problem with my machine. Could someone explain to me, what the error is actually saying and what I can do against it?
The dependencies are expressed not as package names, but as pkg-config dependencies. I think that on RPM-based systems you can search for these directly, but on Debian-based systems you need to search for the corresponding files. To do that, the easiest approach is to install apt-file, update its indices with sudo apt-file update then you can use apt-file search with the dependencies. In your case: apt-file search pygobject-3.0.pc apt-file search gtk+-3.0.pc and so on. (.pc files contain the information necessary for pkg-config.) This will tell you that the packages you need to install are respectively python-gi-dev for pygobject-3.0, and libgtk-3-dev for gtk+-3.0; I'll let you figure out the rest. You can perform the same search online using https://packages.debian.org (the results will generally work on Xubuntu as well). With a little more work you can use apt-cache search too; apt-cache search gtk+-3.0 | grep -- -dev should produce the appropriate package name (amongst others possibly). As pointed out by K1773L, since caribou is packaged in Xubuntu, you can run apt-get build-dep caribou to get the build-dependencies, but that will give the dependencies of the version that was packaged; in general if yours is different then you may need different dependencies.
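To see what those configure checks are actually doing, you can point pkg-config at a hand-written .pc file (demo.pc below is fabricated for illustration); the CARIBOU check is just a batch of queries like these, and PKG_CONFIG_PATH is exactly the variable the error message suggests adjusting:

```shell
mkdir -p pc
cat > pc/demo.pc <<'EOF'
Name: demo
Description: demo module
Version: 1.2.3
Cflags: -I/opt/demo/include
Libs: -L/opt/demo/lib -ldemo
EOF
PKG_CONFIG_PATH=$PWD/pc pkg-config --modversion demo        # prints: 1.2.3
PKG_CONFIG_PATH=$PWD/pc pkg-config --exists 'demo >= 1.0' && echo ok
```

Installing a -dev package is what drops the real .pc file into a directory pkg-config already searches, which is why it makes configure succeed.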
Why does ./configure give me unmet packages, that do not exist?
1,518,444,908,000
I am using the release version of OpenBSD 5.6 and have to apply a patch called 004_kernexec.patch.sig (URL: http://ftp.openbsd.org/pub/OpenBSD/patches/5.6/common/004_kernexec.patch.sig ) An excerpt of the said patch is as follows: OpenBSD 5.6 errata 4, Oct 20, 2014: Executable headers with an unaligned address will trigger a kernel panic. Apply patch using: signify -Vep /etc/signify/openbsd-56-base.pub -x 004_kernexec.patch.sig \ -m - | (cd /usr/src && patch -p0) Then build and install a new kernel. I'm now at the section titled 5.3.4 - Building the kernel (URL: http://www.openbsd.org/faq/faq5.html#Why). According to it, I need to issue the following command first: cd /usr/src/sys/arch/`machine`/conf followed by config GENERIC Is it compulsory to use the name GENERIC? Can I call it something else such as bsd? I remember that towards the end of the installation process of the OS, there was this line that stated bsd.mp would replace bsd.rd as my machine was a multi-processor system.
The OpenBSD FAQ is your friend in this case. They have extensive documentation on how to build your own kernel. In particular you want section 5.3.4 but before you do that make sure and read all of section 5.3 to get a feel for the bigger picture. I'd also recommend taking a look at Absolute OpenBSD by Michael Lucas. He's got a pretty good walk through on how to build your own kernel. Good luck.
When building a new kernel in OpenBSD 5.6, can the name of `config` be something else?
1,518,444,908,000
Is there a python crypto library that does not rely on anything besides python? Or pre-compiled bundles of PyCrypto for Linux?
Cryptography involves intensive numerical computations that are significantly faster when implemented in a low-level language such as C and compiled to machine code, than when implemented in a high-level language such as Python and executed as interpreted bytecode. This is why you should expect any library that provides basic cryptographic primitives to be at least partly written in C (or even assembler). However, Python does include some cryptographic primitives: the hashlib and hmac modules provide the most common digest algorithms (MD5, SHA-1, SHA-2) and the corresponding HMAC algorithms. Furthermore, Python has built-in arbitrary-precision integer arithmetic, which allows efficiently implementing some asymmetric algorithms in pure Python (at least signing and verification, not necessarily key generation). Python-RSA (rsa) is a pure Python implementation of RSA PKCS#1 1.5. Python does not ship with any encryption algorithms (in particular AES), because the distribution of encryption code is legally restricted in many parts of the world. The officially recommended external library is PyCrypto. You should install PyCrypto. It would be in the standard library if it wasn't for legal restrictions. Most distributions include a Pycrypto package. If yours doesn't (I'm unfamiliar with Levinux), you will need to install gcc. If your distribution lacks gcc (or a cross-compilation system to easily compile packages on a more complete system), it isn't suitable for serious work.
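The standard-library pieces mentioned above need no compiler at all (python3 is assumed to be on the PATH here):

```shell
# hashlib and hmac ship with Python itself — no C toolchain, no PyCrypto
digest=$(python3 -c 'import hashlib; print(hashlib.sha256(b"message").hexdigest())')
mac=$(python3 -c 'import hashlib, hmac; print(hmac.new(b"key", b"message", hashlib.sha256).hexdigest())')
echo "sha256: $digest"
echo "hmac:   $mac"
```

It's only when you need symmetric encryption such as AES that you have to reach for PyCrypto and hence a C compiler.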
Why does PyCrypto require a C compiler?
1,518,444,908,000
I upgraded the kernel on my CentOS 5.8 from 2.6.18 to 3.5.3 and now it is unable to mount the root filesystem. I could not find any explanation through Google. Can you point me in the right direction? I use Grub 0.97. I tried to point to the root device in the grub.conf by label, by /dev/hda and by UUID and nothing changed. I compared the init scripts located in the old and new initrd images and they are mostly the same - dm-mem-cache.ko, dm-message.ko and dm-raid45.ko modules are not loaded into the new kernel. The drivers installed with the new kernel are the same as those with the old one.
According to this website (which cites this forum thread), you need to enable a kernel option. First, get into the kernel's menuconfig: # cd /usr/src/linux # make clean && make mrproper # cp /boot/config-`uname -r` /usr/src/linux/.config # make menuconfig Then go into the "General settings" section, and include "enable deprecated sysfs features to support old userspace tools" in the kernel. Hit escape a few times until it asks you to save, and say yes. Then build the kernel and install it (the actual path might be different on your system): # make rpm # rpm -ivh /usr/src/redhat/RPMS/i386/kernel-2.6.35.10local0-1.i386.rpm
Kernel upgrade 2.6 to 3.5.3 on CentOS 5.8 -> switchroot: mount failed: No such file or directory
1,518,444,908,000
Working on rebuilding some AWS linux servers. I see my company's server has Apache Portable Runtime (APR) downloaded within our Apache webserver instance as well as Tomcat. What are the implications of configuring to install into multiple different locations/softwares, such as APR into Tomcat and Apache webserver? If I run ./configure --prefix=/opt/tomcat/ and then another ./configure --prefix=/opt/apache2/ I see it downloaded the packages in those places. Is this okay to do? Are there issues with doing this? I see this done on the old servers where openSSL is also downloaded within apache and tomcat, and I'm assuming it was done with this same approach. But just want to get a good explanation as to the why this either okay or bad practice.
You can do that, yes. That's kind of the point of the prefix! However, when then running these servers, you need to make sure to adjust PATH, LD_LIBRARY_PATH, and other environment variables accordingly, so that the libraries and executables are actually taken from the different prefixes. Also, installing the same binaries in two places makes little sense (aside from storage and administrative headaches, it doesn't cost anything): you could just run the one server with different configurations. How that works depends on the individual service. But it's honestly a bit strange to compile such standard software yourself, unless you know why you need to do that. Especially in the server/cloud world, you'd usually just have a single service, installed as plainly as possible, and if you need two instances of the same service for some reason, you'd put both of them in containers, only sharing the relevant data directories with the host system. So, this question feels like you should figure out the following things: What are these services actually doing? Why are they built from source, and why do you need two separate installations? What resources would both instances necessarily share – for example, two webservers can't both answer to TCP port 80 on the same IP address, so that wouldn't work. Two different Tomcat applications could still want to share one database; that'd be fine. But if they both try to write to the same log file, things would go bad pretty quickly. In the context of anything on AWS, why aren't the two service instances isolated from each other in a container each?
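What "adjusting PATH accordingly" means in practice can be shown with two fabricated prefix trees standing in for /opt/tomcat and /opt/apache2 (the directory layout and the tool script below are made up for illustration):

```shell
mkdir -p demo/tomcat/bin demo/apache2/bin
printf '#!/bin/sh\necho tomcat-copy\n'  > demo/tomcat/bin/tool
printf '#!/bin/sh\necho apache2-copy\n' > demo/apache2/bin/tool
chmod +x demo/tomcat/bin/tool demo/apache2/bin/tool

# which installed copy runs depends entirely on which prefix
# comes first in PATH
first=$(PATH=$PWD/demo/tomcat/bin:$PATH tool)
second=$(PATH=$PWD/demo/apache2/bin:$PATH tool)
echo "$first / $second"
```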
Using same unpacked tarball to configure to install software into multiple different spots on server
1,518,444,908,000
I want to simulate the Tiago robot in Gazebo, using the available ROS package. I simulated it without problems before, but right now I cannot. I am using ROS Melodic and Ubuntu 18.04 on a KVM virtual machine. When I use the catkin_make command to build the workspace, the error below happens: [ 1%] Built target _tiago_pick_demo_generate_messages_check_deps_PickUpPoseGoal [ 1%] Generating dynamic reconfigure files from cfg/SphericalGrasp.cfg: /home/pouyan/tiago_ws/devel/include/tiago_pick_demo/SphericalGraspConfig.h /home/pouyan/tiago_ws/devel/lib/python2.7/dist-packages/tiago_pick_demo/cfg/SphericalGraspConfig.py [ 1%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libposition_controllers.so [ 1%] Built target tiago_pcl_tutorial_gencfg Scanning dependencies of target run_traj_control [ 1%] Built target _tiago_pick_demo_generate_messages_check_deps_PickUpPoseResult Scanning dependencies of target transmission_interface_parser [ 1%] Built target _tiago_pick_demo_generate_messages_check_deps_PickUpPoseActionResult Generating reconfiguration files for SphericalGrasp in spherical_grasps_server [ 1%] Building CXX object tiago_tutorials/tiago_trajectory_controller/CMakeFiles/run_traj_control.dir/src/run_traj_control.cpp.o Wrote header file in /home/pouyan/tiago_ws/devel/include/tiago_pick_demo/SphericalGraspConfig.h [ 1%] Building CXX object ros_control/transmission_interface/CMakeFiles/transmission_interface_parser.dir/src/transmission_parser.cpp.o Scanning dependencies of target gazebo_ros_block_laser Scanning dependencies of target effort_controllers [ 1%] Built target force_torque_sensor_controller [ 1%] Built target actuator_state_controller [ 1%] Built target tiago_pick_demo_gencfg Scanning dependencies of target gazebo_ros_laser Scanning dependencies of target polled_camera_generate_messages_cpp [ 1%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_effort_controller.cpp.o [ 1%] Linking CXX shared library
/home/pouyan/tiago_ws/devel/lib/libimu_sensor_controller.so Scanning dependencies of target polled_camera_generate_messages_eus [ 1%] Built target polled_camera_generate_messages_cpp [ 1%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_velocity_controller.cpp.o [ 1%] Built target polled_camera_generate_messages_eus [ 1%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_position_controller.cpp.o [ 1%] Built target position_controllers [ 1%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_group_effort_controller.cpp.o [ 2%] Linking CXX executable /home/pouyan/tiago_ws/devel/lib/pal_gazebo_worlds/increase_real_time_factor [ 2%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/src/gazebo_ros_block_laser.cpp.o [ 3%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/src/gazebo_ros_laser.cpp.o [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libjoint_state_controller.so [ 3%] Built target imu_sensor_controller Scanning dependencies of target diagnostic_msgs_generate_messages_lisp [ 3%] Built target diagnostic_msgs_generate_messages_lisp [ 3%] Building CXX object ros_controllers/effort_controllers/CMakeFiles/effort_controllers.dir/src/joint_group_position_controller.cpp.o [ 3%] Built target increase_real_time_factor Scanning dependencies of target diagnostic_msgs_generate_messages_py [ 3%] Built target diagnostic_msgs_generate_messages_py Scanning dependencies of target polled_camera_generate_messages_nodejs [ 3%] Built target polled_camera_generate_messages_nodejs [ 3%] Built target joint_state_controller Scanning dependencies of target polled_camera_generate_messages_lisp Scanning dependencies of target diagnostic_msgs_generate_messages_eus [ 3%] Built target diagnostic_msgs_generate_messages_eus [ 3%] Built target 
polled_camera_generate_messages_lisp Scanning dependencies of target diagnostic_msgs_generate_messages_nodejs Scanning dependencies of target polled_camera_generate_messages_py [ 3%] Built target diagnostic_msgs_generate_messages_nodejs [ 3%] Built target polled_camera_generate_messages_py Scanning dependencies of target diagnostic_msgs_generate_messages_cpp Scanning dependencies of target MultiCameraPlugin [ 3%] Built target diagnostic_msgs_generate_messages_cpp Scanning dependencies of target gazebo_ros_projector [ 3%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/MultiCameraPlugin.dir/src/MultiCameraPlugin.cpp.o [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libjoint_torque_sensor_state_controller.so [ 3%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_projector.dir/src/gazebo_ros_projector.cpp.o [ 3%] Built target joint_torque_sensor_state_controller Scanning dependencies of target gazebo_ros_hand_of_god [ 3%] Building CXX object gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_hand_of_god.dir/src/gazebo_ros_hand_of_god.cpp.o c++: fatal error: Killed signal terminated program cc1plus compilation terminated. tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/build.make:62: recipe for target 'tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/src/look_to_point.cpp.o' failed make[2]: *** [tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/src/look_to_point.cpp.o] Error 1 CMakeFiles/Makefile2:28217: recipe for target 'tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/all' failed make[1]: *** [tiago_tutorials/look_to_point/CMakeFiles/look_to_point.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... c++: fatal error: Killed signal terminated program cc1plus compilation terminated. 
tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/build.make:62: recipe for target 'tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/src/nodes/segment_table.cpp.o' failed make[2]: *** [tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/src/nodes/segment_table.cpp.o] Error 1 CMakeFiles/Makefile2:41209: recipe for target 'tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/all' failed make[1]: *** [tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/segment_table.dir/all] Error 2 c++: fatal error: Killed signal terminated program cc1plus compilation terminated. tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/build.make:62: recipe for target 'tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/src/pcl_filters.cpp.o' failed make[2]: *** [tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/src/pcl_filters.cpp.o] Error 1 CMakeFiles/Makefile2:41315: recipe for target 'tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/all' failed make[1]: *** [tiago_tutorials/tiago_pcl_tutorial/CMakeFiles/tiago_pcl_tutorial.dir/all] Error 2 [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libtransmission_interface_parser.so [ 3%] Built target transmission_interface_parser [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libaruco_ros_utils.so c++: fatal error: Killed signal terminated program cc1plus compilation terminated. 
gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/build.make:62: recipe for target 'gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/src/gazebo_ros_block_laser.cpp.o' failed make[2]: *** [gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/src/gazebo_ros_block_laser.cpp.o] Error 1 CMakeFiles/Makefile2:44726: recipe for target 'gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/all' failed make[1]: *** [gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_block_laser.dir/all] Error 2 [ 3%] Built target aruco_ros_utils c++: fatal error: Killed signal terminated program cc1plus compilation terminated. gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/build.make:62: recipe for target 'gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/src/gazebo_ros_laser.cpp.o' failed make[2]: *** [gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/src/gazebo_ros_laser.cpp.o] Error 1 CMakeFiles/Makefile2:44827: recipe for target 'gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/all' failed make[1]: *** [gazebo_ros_pkgs/gazebo_plugins/CMakeFiles/gazebo_ros_laser.dir/all] Error 2 [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libeffort_controllers.so [ 3%] Built target effort_controllers [ 3%] Linking CXX executable /home/pouyan/tiago_ws/devel/lib/tiago_trajectory_controller/run_traj_control [ 3%] Built target run_traj_control [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libgazebo_ros_hand_of_god.so [ 3%] Built target gazebo_ros_hand_of_god [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libgazebo_ros_projector.so [ 3%] Linking CXX shared library /home/pouyan/tiago_ws/devel/lib/libMultiCameraPlugin.so [ 3%] Built target MultiCameraPlugin [ 3%] Built target gazebo_ros_projector Makefile:140: recipe for target 'all' failed make: *** [all] Error 2 Invoking "make -j16 -l16" failed I searched alot but I dont know 
how to solve it. Thanks. Solution: According to Marcus's answer, I doubled the memory size of my KVM virtual machine. For editing the memory size of a KVM guest, this YouTube video worked for me: https://www.youtube.com/watch?v=LwLHwXWoYjk
You're running out of RAM, so badly that your operating system kills your compiler. So, reduce the parallelism, or assign more RAM to the building VM.
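One way to pick a safer job count, as a rough sketch (the 2 GB-per-job rule of thumb and the mem_gb value are assumptions to adjust for your VM):

```shell
# Derive the parallelism level from available RAM instead of core count.
# Heavy C++ translation units (PCL, Gazebo plugins) can easily need ~2 GB each.
mem_gb=8                         # replace with your VM's RAM in GB
jobs=$((mem_gb / 2))             # rough rule of thumb: ~2 GB per compile job
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "catkin_make -j${jobs} -l${jobs}"
```

Running the printed command instead of the default `-j16 -l16` keeps peak memory within what the VM actually has.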
Invoking "make -j16 -l16" failed
1,518,444,908,000
When compiling the Linux kernel, if I do make_runner.sh && echo "hello" it prints hello even if some of the kernel compilation fails. Is there a way for it to only print if all of the compilation targets built correctly? make_runner.sh is the following: #!/usr/bin/env bash set -xe make O=out ARCH=arm64 CC=clang CLANG_TRIPLE=aarch64-linux-gnu- vendor/citrus-perf_defconfig make O=out ARCH=arm64 CC=clang CLANG_TRIPLE=aarch64-linux-gnu- -j$(nproc --all) 2>&1 | tee kernel.log
Because of the pipe to tee, the second make’s exit status is ignored. To get the behaviour you want, you need to enable pipefail: change the set -xe line to set -xe -o pipefail See Debugging scripts, what is the difference between -x to set -euxo pipefail? for details.
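The effect is easy to reproduce with a pipeline that always fails, where false stands in for the failing make:

```shell
# Without pipefail the pipeline's status is tee's (the last command), so the
# failure of the left-hand command is hidden:
false | tee /dev/null
echo "default: exit status $?"            # prints: default: exit status 0

# With pipefail the failing command's status wins:
set -o pipefail
false | tee /dev/null || echo "pipefail: exit status $?"   # prints: pipefail: exit status 1
```

With set -e -o pipefail in the script, the second make aborting would make make_runner.sh itself exit non-zero, so the && echo "hello" would be skipped.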
make && echo "hello" only print hello when make succeeds (kernel)
1,518,444,908,000
I am trying to compile the Linux kernel (version 4.4) with KASAN support on a 32-bit machine, but I can't enable it. It seems that it depends on 64-bit architectures only. So my question is about the possibility of using KASAN on a 32-bit architecture (CONFIG_X86). Is there a relation between KASAN and 64-bit architectures?
As the documentation says: Currently KASAN is supported only for the x86_64 and arm64 architectures.
KASAN config in 32 bits architecture?
1,518,444,908,000
When I run gcc -c *.c, it runs: gcc -c file1.c -o file1.o gcc -c file2.c -o file2.o gcc -c file3.c -o file3.o ... but as *.s runs: as file1.s -o a.out as file2.s -o a.out as file3.s -o a.out ... By default, gcc replaces only the file extension when compiling into an object file, but GNU as sets the default output file to a.out. How do I make GNU as replace .s with .o when assembling?
I don’t think you can get as to do that on its own, but you can use gcc to drive as: gcc -c *.s This will produce file1.o, file2.o etc. If you need to provide as-specific options, you can add them with -Xassembler, e.g. gcc -c -Xassembler -mindex-reg *.s
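If you'd rather not involve gcc at all, a small shell loop gives the same naming behaviour. The filenames below are illustrative, and the echo is there so you can preview the commands — drop it to actually assemble:

```shell
# For each .s file, strip the .s suffix with ${f%.s} and append .o,
# mirroring what gcc -c does with the output name.
for f in file1.s file2.s file3.s; do
  echo as "$f" -o "${f%.s}.o"     # remove 'echo' to run as for real
done
```

In a real tree you would write `for f in *.s` so the shell expands the glob itself.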
as *.s does not work like gcc *.c
1,518,444,908,000
I want to run a set of commands from a bash script. However, I don't know how to put the quotation marks in a bash script. The following is the bash script I want to run; however, in the cmake -DCMAKE_C_FLAGS I want to add another flag, -gcc-name=/path/bin/gcc. I want to do it through a shell script and eventually run that shell script, which is going to give me the installation. Please kindly suggest a way to do this. mkdir /g/g92/bhowmik1/installTF/ROSS; cmake -DCMAKE_INSTALL_PREFIX=/g/g92/bhowmik1/installTF/ROSS -DCMAKE_C_FLAGS=-O3 -DCMAKE_C_COMPILER=mpicc -DARCH=x86_64 -DROSS_BUILD_MODELS=ON ..; make; make install;
Quote the whole value: cmake -DCMAKE_INSTALL_PREFIX=/g/g92/bhowmik1/installTF/ROSS \ -DCMAKE_C_FLAGS='-O3 -gcc-name=/path/bin/gcc' \ -DCMAKE_C_COMPILER=mpicc -DARCH=x86_64 -DROSS_BUILD_MODELS=ON ..
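You can see why the quotes matter by letting the shell show how it splits the arguments — printf prints one line per argument it receives:

```shell
# Unquoted, the space splits the value into two separate arguments,
# so cmake would see '-gcc-name=...' as an unrelated option:
printf '<%s>\n' -DCMAKE_C_FLAGS=-O3 -gcc-name=/path/bin/gcc

# Quoted, cmake receives the whole flags string as one argument:
printf '<%s>\n' '-DCMAKE_C_FLAGS=-O3 -gcc-name=/path/bin/gcc'
```

The first printf emits two lines, the second a single line containing the complete value.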
Add multiple options in cmake flag in a shell script and run the shell script
1,518,444,908,000
I have a related question, but was asked to open a new one. I would like to recompile the Debian package wpasupplicant with IPv6 disabled. I know the basics of Debian package compilation, i.e.: apt-get source wpasupplicant dpkg-buildpackage --build=binary --no-sign What do I have to change to disable IPv6 completely? Also, this particular package seems to want to compile some Qt versions of wpasupplicant, because the compilation dependencies ask for qtbase5-dev. Can I compile only the pure/command-line version of wpasupplicant, without any GUI versions? I don't want to install additional unnecessary dependencies. I am using Debian 10.
Here is an example of how to compile wpasupplicant, posted on linuxfromscratch. To disable IPv6 support you need to remove CONFIG_IPV6=y from wpa_supplicant's build configuration file (.config). You need to install some dependencies: sudo apt install -t buster-backports checkinstall sudo apt install desktop-file-utils libxml++2.6-dev qt5-default libssl-dev build-essential \ libdbus-1-dev libdbus-glib-1-2 libdbus-glib-1-dev libreadline-dev pkg-config dbus \ libncurses5-dev libnl-genl-3-dev libnl-3-dev libreadline-dev Download the tarball from here: cd /tmp wget https://w1.fi/releases/wpa_supplicant-2.9.tar.gz tar xvf wpa_supplicant-2.9.tar.gz cd wpa_supplicant-2.9/wpa_supplicant Edit your wpa_supplicant .config file to remove CONFIG_IPV6=y, then run: make sudo checkinstall Install the .deb using gdebi or apt.
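A non-interactive way to do the edit is a one-line sed. The two-line .config written below is only a stand-in so the example is self-contained — against the real tree you would run just the sed line inside wpa_supplicant-2.9/wpa_supplicant:

```shell
# Stand-in .config with one unrelated option plus the line to remove:
printf 'CONFIG_CTRL_IFACE=y\nCONFIG_IPV6=y\n' > .config

# Delete the IPv6 line in place:
sed -i '/^CONFIG_IPV6=y/d' .config

cat .config    # only CONFIG_CTRL_IFACE=y remains
```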
recompile wpasupplicant Debian package with IPv6 disabled
1,518,444,908,000
After inspecting this question and the unaccepted answer, I tried to test it. First I tested the recommended command with: fswatch -o report.tex | xargs -n1 -I{} pdflatex report.tex This led to an infinite compilation loop, which I consider slightly inefficient, and furthermore, it does not create a long enough window of opportunity in which the PDF is accessible. So I tried to debug why fswatch yields an infinite compilation loop, using a ctrl+s command in TexMaker on the report.tex file: fswatch -o report.tex | xargs -n1 -I{} echo "hello world" hello world hello world There I observed that the ctrl+s/saving of report.tex triggers two detected changes by fswatch. Furthermore, I tested it on an f6 command in TexMaker, as well as on the pdflatex report.tex command, and observed fswatch detected 3 changes for both attempts: fswatch -o report.tex | xargs -n1 -I{} echo "hello world" hello world hello world hello world The latter seems to be sufficient to cause the infinite compilation loop. Hence, I would like to ask: how can I ensure report.tex is compiled once when a change to it is saved from TexMaker? I think the subquestions that may lead to a working answer are as listed below, yet I did not find a satisfying answer to these: Does fswatch, for example, have an argument to ignore changes for n milliseconds or for the next n changes? It appears to me that section 3.2.3 Numeric Event Flags of the fswatch documentation only allows using the n-th event flag. Yet I have not successfully implemented passing such an argument to inhibit compilation. Or are there arguments I could pass to pdflatex to reduce the three detected changes upon compiling report.tex to zero changes? Does TexMaker have an option to reduce the number of changes to report.tex from two to one upon saving with ctrl+s?
fswatch uses various backends depending on the OS. Let's assume the overall behavior is the same. On my Linux system, attempts with fswatch (and option -x) give PlatformSpecific events, which doesn't help to distinguish between a read event, a write event, an open event and a close event among many other possible events (note that Linux's inotify even distinguishes between two types of close events, one after no change and one after changes, which would help further here). That's also why you get multiple events. Reading from the source will trigger a compilation attempt, which will among other things read this source again. Loop created. That's the cause of all your problems. You should trigger only on write events, and also only once the file stops being written to, but fswatch considers all these events to be PlatformSpecific. You need (a) tool(s) that can distinguish between the events, or else do periodic polls that compare dates or contents (I believe the latter is what the accepted answer from the OP's link is doing). You could use fswatch in --one-event mode without xargs, combined with the OP's linked accepted answer, so fswatch doesn't get triggered again indefinitely but waits for pdflatex to finish. Of course you then have to deal with race conditions, in case you miss an event in between. As a conclusion: don't try to adapt everything to fswatch; replace fswatch with a more adequate tool. Example of an alternate method on Linux (as this is OS-specific and the OP didn't state an OS, I'm going with this example): inotifywait has the right options to monitor writes, and actually only end-of-write (so not the MODIFY event, but only the CLOSE_WRITE event). There might be other things to check and adapt to when the file is actually moved, deleted or replaced, but I'm not attempting to figure out all the details. So: inotifywait -m -e CLOSE_WRITE report.tex | xargs -n1 -I{} pdflatex report.tex would be a good start.
To feed an event loop: inotifywait -m -r -e CLOSE_WRITE somedir | while read -r dir events filename; do if [ "$filename" != "${filename%.tex}" ]; then pdflatex "$dir/$filename" fi done could be a good start. Except... Alas there are certainly caveats, starting with filenames including spaces or special characters. They could be further addressed for example with --format which appears to accept \0 in the output (I don't think --csv is enough), but then the shell can't cope with this in a read loop. Sooner or later a non-shell tool using the inotify facility but not the inotifywait command would be better suited (eg: https://pypi.org/project/inotify/).
How prevent to auto-compilation loop with fswatch upon changes in Texmaker?
1,518,444,908,000
I have a build that fails; it complains about the lack of the following header files: /usr/include/Availability.h /usr/include/AvailabilityInternal.h /usr/include/_types.h I know for sure that my environment must have, e.g., stdio.h or cmath (and find / -iname stdio.h gives me the expected answer)... but how can I know whether the above files should be here?
The easiest way to search for files (and what packages they belong to) is the apt-file command. For example, searching for stdio.h: $ apt-file search /usr/include/stdio.h libc6-dev: /usr/include/stdio.h Now, I tried searching for your missing header files (on Debian 10.6) and all came up empty. However, when I removed the path and just searched for the filename, I got a few hits (I removed the html hits from the output): $ apt-file search Availability.h libclang-6.0-dev: /usr/lib/llvm-6.0/include/clang/AST/Availability.h libclang-7-dev: /usr/lib/llvm-7/include/clang/AST/Availability.h libclang-8-dev: /usr/lib/llvm-8/include/clang/AST/Availability.h libjavascriptcoregtk-4.0-dev: /usr/include/webkitgtk-4.0/JavaScriptCore/WebKitAvailability.h Since these packages are all non-standard libraries, I'd have to assume that Availability.h is not supposed to be there, at least out-of-the-box.
How can I know whether a bunch of header files are part of a "standard" C++ toolchain on Debian
1,518,444,908,000
Initially, my approach was to re-compile the whole kernel from scratch, but all hopes went down the moment I found out that it requires a large amount of disk space. Now, I'm trying to figure out how to compile only part of it. The file I modified is net/ipv4/tcp_ipv4.c I then followed the following steps but got lost in the middle: Downloaded kernel source files to /home/linux ran cp /boot/config-$(uname -r) .config make oldconfig make scripts prepare modules_prepare apt-get install linux-headers-$(uname -r) make -C . M=net/ipv4 make net/ipv4/tcp_ipv4.c I don't know what to do next, as the answers found on Stack Overflow describe building custom-made modules. An answer found on Stack Overflow (LINK) says that it can't be done, and that it's only possible to compile the whole kernel: "It can't be done. Just compile the whole kernel. After the first compilation, make will ensure only changed files are recompiled so future builds will be fast." I had thought about using Google Drive as the HDD for a full compile, but it looks like there is a lack of options for doing that as well. My last option would be resizing the whole server. Edit: I entered the net/ipv4 dir via cd and tried make -C /lib/modules/$(uname -r)/build M=$(pwd) modules_install and it outputs Building modules, stage 2. MODPOST 61 modules FATAL: parse error in symbol dump file make[1]: *** [scripts/Makefile.modpost:94: __modpost] Error 1 make: *** [Makefile:1632: modules] Error 2 make: Leaving directory '/usr/src/linux-headers-5.4.0-26-generic'
If you look at net/ipv4/Makefile, you’ll see that tcp_ipv4.o is part of obj-y, which means it can only be built as part of the kernel, it can’t be built as a module. If you want your changes to be taken into account, you’ll have to rebuild the complete kernel. Since you’re short of disk space, you can build only the kernel, install that, then clean the build tree and build the modules, and install them; that will require a little less disk space.
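For reference, the relevant assignment in net/ipv4/Makefile looks roughly like this (abridged here; the exact object list varies between kernel versions):

```makefile
# tcp_ipv4.o appears in obj-y (linked into vmlinux), not obj-m (loadable
# module), which is why it cannot be produced as a standalone .ko.
obj-y := route.o inetpeer.o protocol.o \
         tcp.o tcp_input.o tcp_output.o tcp_ipv4.o
```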
How to compile only net/ipv4 of Linux Kernel?
1,518,444,908,000
I'm using FreeBSD 13.0-CURRENT (for more information see here), and I want to use GNU m4. Downloaded this version of it, configured, and tried to make. But it gives error as stated in the title: error: don't know how to make ../../build-aux/snippet/c++defs.h. For the whole log, see here. Any idea? Thanks to all suggestions and answers. EDIT1: indeed, my m4 version is 1.4.17.
Just unpacked m4-1.4.17 here (Fedora 31; your latest might be another version... but it seems to date from 2013). The offending file is there (in build-aux/snippet). There is a script called bootstrap included, but that presumably is only needed for sources straight from version control. The traditional ./configure; make dance goes fine, but fails after compiling a bunch of stuff with: freadahead.c: In function 'freadahead': freadahead.c:91:3: error: #error "Please port gnulib freadahead.c to your platform! Look at the definition of fflush, fread, ungetc on your system, then report this to bug-gnulib." A simple search for "GNU m4 FreeBSD" leads to FreshPorts, and FreeBSD's m4 manual talks about a -g option to activate GNU m4 compatibility. Why aren't those enough? Presumably whatever patched version FreshPorts carries is a better starting point (if they are civilized and carry original sources and separate patches, porting the patches to another version is less work than debugging this mess yourself).
Building m4 from source on FreeBSD error: don't know how to make ../../build-aux/snippet/c++defs.h
1,518,444,908,000
What I've tried: gcc -L/path/to/lib/ -llib ... gcc -l/path/to/lib/lib.so.x.x.x ... Update ldconfig Added path to LD_LIBRARY_PATH file shows correct build version and link to the correct file No matter what, I still get /usr/bin/ld: cannot find -lavfilter Any ideas?
ld looks for shared libraries or linker scripts named libsomething.so, or static libraries named libsomething.a, where something matches the -lsomething parameter given to ld. Libraries named libsomething.so.x.y.z, where x.y.z is the library’s version, are used at runtime, not for building, and ld won’t use them. You therefore need to install the development packages for libraries you want to link to, such as libavfilter-dev in your case (assuming Debian or a derivative).
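Installing the -dev package is the proper fix, but if that's impossible, a common stopgap is to create the unversioned name ld searches for yourself. The version number and directory below are illustrative:

```shell
# Stand-in for an installed runtime library (normally in /usr/lib/...):
mkdir -p libdemo
touch libdemo/libavfilter.so.7.110.100

# ld only looks for libavfilter.so when given -lavfilter, so point the
# unversioned name at the versioned file:
ln -sf libavfilter.so.7.110.100 libdemo/libavfilter.so
ls -l libdemo/libavfilter.so
```

Note this only papers over the missing development package; the library's headers may still be absent.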
LD cannot find lib even with specified path
1,518,444,908,000
For a project I intend to compile various *NIX OSs and their respective packages with different CPUs in order to maximize optimization and performance for specific systems. In order to save time and money I'd like to know if there really is a difference between compiling code on similar architectures. For example: if I compile Debian GNU/Linux plus all the packages of the default repos with an Intel Core i7-8700 and use the OS on a system with an Intel Core i7-8650U, will it perform the same as if I had compiled everything with the i7-8650U, or would I lose some performance? (I don't care about the percentage; I'd like to know even if it were 1%.) To put it shortly: is there an amount of worth greater than zero (0) in compiling code on different CPU models of the same generation? Because if that weren't the case I'd just take one CPU from every generation from every manufacturer and call it a day for all the other models.
If you are truly intending to maximize optimizations, then you'll be using target-specific optimizations, including those that rely on knowing the cache sizes of the processor. This means you'll almost certainly want one compile per processor, though you can compile on the fastest available processor for any of the others with appropriate -march and -mtune settings.
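In practice that just means parameterizing the build flags per target; a minimal sketch, where skylake is an illustrative -march value rather than a recommendation for the CPUs named in the question:

```shell
# Build the same tree once per target CPU, varying only the tuning flags.
# GCC accepts -march=<cpu> (instruction set) and -mtune=<cpu> (scheduling),
# so the build host's own CPU doesn't matter.
target_cpu=skylake
CFLAGS="-O2 -march=${target_cpu} -mtune=${target_cpu}"
echo "building with: $CFLAGS"
```

Looping this over a list of target CPUs lets the single fastest machine produce every per-model build.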
Actual compilation differences between CPUs
1,518,444,908,000
Do I need the latest gcc? Can I use the gcc binaries included with my distro? Does it matter?
The kernel build requirements are quite conservative: for kernel 4.18, GCC 3.2 and binutils 2.20 are sufficient. Thus your distribution’s compiler should work fine. In practice pretty much any version of GCC will do, although one can sometimes run into problems with versions of GCC which are too new. I’m currently using GCC 7 with no issues, but I haven’t tried GCC 8 yet to build the kernel.
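A quick sanity check against those minimums can be scripted with sort -V; the 7.3.0 fallback below is only an illustrative value used when gcc isn't installed:

```shell
# Compare the installed GCC version against the kernel's stated minimum.
have=$(gcc -dumpversion 2>/dev/null || echo 7.3.0)   # fallback for illustration
need=3.2
lowest=$(printf '%s\n' "$have" "$need" | sort -V | head -n1)
if [ "$lowest" = "$need" ]; then
  echo "GCC $have meets the minimum ($need)"
fi
```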
What version of gcc should I use to compile the latest stable Linux kernel?
1,518,444,908,000
I am trying to install pgbouncer using source on RHEL 7.4. I am following this tutorial. While running ./configure --prefix=/usr/local/ --with-libevent=/usr/lib64, I am getting below error: checking for libevent... configure: error: not found, cannot proceed. However, running rpm -qa|grep libevent gives libevent-2.0.21-4.el7.x86_64 and ll /usr/lib64 |grep libevent gives lrwxrwxrwx. 1 root root 21 Aug 28 2017 libevent-2.0.so.5 -> libevent-2.0.so.5.1.9 -rwxr-xr-x. 1 root root 297816 Jan 26 2014 libevent-2.0.so.5.1.9 lrwxrwxrwx. 1 root root 26 Aug 28 2017 libevent_core-2.0.so.5 -> libevent_core-2.0.so.5.1.9 -rwxr-xr-x. 1 root root 179800 Jan 26 2014 libevent_core-2.0.so.5.1.9 lrwxrwxrwx. 1 root root 27 Aug 28 2017 libevent_extra-2.0.so.5 -> libevent_extra-2.0.so.5.1.9 -rwxr-xr-x. 1 root root 133864 Jan 26 2014 libevent_extra-2.0.so.5.1.9 lrwxrwxrwx. 1 root root 29 Aug 28 2017 libevent_openssl-2.0.so.5 -> libevent_openssl-2.0.so.5.1.9 -rwxr-xr-x. 1 root root 24464 Jan 26 2014 libevent_openssl-2.0.so.5.1.9 lrwxrwxrwx. 1 root root 30 Aug 28 2017 libevent_pthreads-2.0.so.5 -> libevent_pthreads-2.0.so.5.1.9 -rwxr-xr-x. 1 root root 11200 Jan 26 2014 libevent_pthreads-2.0.so.5.1.9 Unable to find out what is the problem.
checking for libevent... configure: error: not found — the installed libevent-2.0.21-4.el7.x86_64 package contains only the runtime files. You are missing the files required for compiling an application with libevent: # yum install libevent-devel This provides the unversioned shared-library links under /usr/lib64 and the headers under /usr/include (including the event/ and event2/ subdirectories).
Unable to install pgbouncer
1,518,444,908,000
I want to install PHP 5.6 in Debian Jessie and I'm following the procedures listed in this page (the server is using ISPConfig and I want to add this PHP version to the list of PHP versions available). When I run: ./configure --prefix=/opt/php-5.6 --with-pdo-pgsql --with-zlib-dir --with-freetype-dir --enable-mbstring --with-libxml-dir=/usr --enable-soap --enable-calendar --with-curl --with-mcrypt --with-zlib --with-pgsql --disable-rpath --enable-inline-optimization --with-bz2 --with-zlib --enable-sockets --enable-sysvsem --enable-sysvshm --enable-pcntl --enable-mbregex --enable-exif --enable-bcmath --with-mhash --enable-zip --with-pcre-regex --with-pdo-mysql --with-mysqli --with-mysql-sock=/var/run/mysqld/mysqld.sock --with-jpeg-dir=/usr --with-png-dir=/usr --enable-gd-native-ttf --with-openssl=/opt/openssl --with-fpm-user=www-data --with-fpm-group=www-data --with-libdir=/lib/x86_64-linux-gnu --enable-ftp --with-kerberos --with-gettext --with-xmlrpc --with-xsl --enable-opcache --enable-fpm I get the following error: checking for GNU gettext support... yes checking for bindtextdomain in -lintl... no checking for bindtextdomain in -lc... no configure: error: Unable to find required gettext library The thing is I have gettext installed and I do not know how to go on with this. Any feedback would be much appreciated.
Please make sure you've installed the following packages: # apt-get install libxml2-dev libz-dev libbz2-dev libcurl4-openssl-dev libmcrypt-dev libpq-dev libxslt-dev I've tried the configuration command within a Docker container [1] and the command finished successfully. Mind the change in ./configure command: --with-openssl=/opt/openssl was removed the absence of gettext package [1] Dockerfile for configuring PHP 5.6 within Debian Jessie (directives are split to emphasize the order for each required package but a condensed form [2] would have work the same) FROM debian:jessie RUN apt-get update RUN apt-get install -y wget RUN wget http://de2.php.net/get/php-5.6.33.tar.bz2/from/this/mirror -O php-5.6.33.tar.bz2 RUN apt-get install -y bzip2 RUN tar jxf ./php-5.6.33.tar.bz2 RUN apt-get install -y gcc RUN apt-get install -y libxml2-dev RUN apt-get install -y libz-dev RUN apt-get install -y libbz2-dev RUN apt-get install -y libcurl4-openssl-dev RUN apt-get install -y libmcrypt-dev RUN apt-get install -y libpq-dev RUN apt-get install -y libxslt-dev RUN cd php-5.6.33 && ./configure --prefix=/opt/php-5.6 --with-pdo-pgsql --with-zlib-dir --with-freetype-dir --enable-mbstring --with-libxml-dir=/usr --enable-soap --enable-calendar --with-curl --with-mcrypt --with-zlib --with-pgsql --disable-rpath --enable-inline-optimization --with-bz2 --with-zlib --enable-sockets --enable-sysvsem --enable-sysvshm --enable-pcntl --enable-mbregex --enable-exif --enable-bcmath --with-mhash --enable-zip --with-pcre-regex --with-pdo-mysql --with-mysqli --with-mysql-sock=/var/run/mysqld/mysqld.sock --with-jpeg-dir=/usr --with-png-dir=/usr --enable-gd-native-ttf --with-fpm-user=www-data --with-fpm-group=www-data --with-libdir=/lib/x86_64-linux-gnu --enable-ftp --with-kerberos --with-gettext --with-xmlrpc --with-xsl --enable-opcache --enable-fpm [2] Condensed Dockerfile for configuring PHP 5.6 within Debian Jessie FROM debian:jessie RUN apt-get update && \ apt-get install -y wget bzip2 gcc 
libxml2-dev libz-dev libbz2-dev libcurl4-openssl-dev libmcrypt-dev libpq-dev libxslt-dev && \ wget http://de2.php.net/get/php-5.6.33.tar.bz2/from/this/mirror -O php-5.6.33.tar.bz2 && \ tar jxf ./php-5.6.33.tar.bz2 && \ cd php-5.6.33 && ./configure --prefix=/opt/php-5.6 --with-pdo-pgsql --with-zlib-dir --with-freetype-dir --enable-mbstring --with-libxml-dir=/usr --enable-soap --enable-calendar --with-curl --with-mcrypt --with-zlib --with-pgsql --disable-rpath --enable-inline-optimization --with-bz2 --with-zlib --enable-sockets --enable-sysvsem --enable-sysvshm --enable-pcntl --enable-mbregex --enable-exif --enable-bcmath --with-mhash --enable-zip --with-pcre-regex --with-pdo-mysql --with-mysqli --with-mysql-sock=/var/run/mysqld/mysqld.sock --with-jpeg-dir=/usr --with-png-dir=/usr --enable-gd-native-ttf --with-fpm-user=www-data --with-fpm-group=www-data --with-libdir=/lib/x86_64-linux-gnu --enable-ftp --with-kerberos --with-gettext --with-xmlrpc --with-xsl --enable-opcache --enable-fpm
Can't compile PHP 5.6 in Debian 8
1,518,444,908,000
I've been trying to compile the GnuTLS package from source, as part of a BLFS (Linux From Scratch) system. Here is the LFS page for it. I have installed all of the required and recommended packages that are listed on that page; however, when I ran ./configure at the top of the source tree for GnuTLS, according to the script output, it didn't seem to find several of those packages, for example valgrind, libunistring, libtasn1. So, I am just wondering what is the best way to troubleshoot this, if a configure script doesn't seem to work correctly? I had a look at config.log, but that didn't seem very helpful (at least in the case of valgrind). I also tried to have a look through the configure script itself, but it's a 40,000+ line monster. Ok, I think I've been a bit silly and misunderstood the configure script. The configure summary said this: configure: summary of build options: version: 3.5.14 shared 44:6:14 Host/Target system: x86_64-pc-linux-gnu Build system: x86_64-pc-linux-gnu Install prefix: /usr Compiler: gcc Valgrind: no CFlags: -g -O2 Library types: Shared=yes, Static=no Local libopts: yes Local libtasn1: no Local unistring: no Use nettle-mini: no Documentation: yes (manpages: yes) Which I took to mean that it hadn't found those packages (I interpreted 'Local' as meaning 'on my computer'). However, searching in more detail through the output, I found these: checking for LIBTASN1... yes checking whether to use the included minitasn1... no checking for libunistring... yes checking how to link with libunistring... /usr/lib/libunistring.so It seems that it did actually find those packages, and 'Local' in the summary must have been referring to GnuTLS' own built-in version of those libraries. It was a bit confusing, but it makes sense now. For valgrind, I see this: checking for valgrind... valgrind checking whether self tests are run under valgrind... 
no So, again it seems to have found it, although it doesn't seem to want to use it for the self tests, for some reason. Anyway, I'll go ahead and build it and see if it tests ok.
config.log should contain the exact reason why configure failed, but it can be hard to find it. To do so, you should start from the end of config.log; there you’ll see a dump of the full configure state at the point where it stopped, which is daunting, but if you skip past that, you should find the error which broke configure. Look for Running config.status, and scroll up... In the case of an Autoconf-generated setup, there’s not much point in reading configure itself; it’s much more useful to look at its source code, configure.ac (or configure.in if it’s an old piece of software), along with any .m4 file which is pulled in.
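To make the search concrete, here is a hedged sketch (the config.log content below is fabricated) of pulling the last `error:` line out of the log, which is usually the check that aborted configure:

```shell
# Sketch: find the real failure in a config.log (sample content is made up).
log=$(mktemp)
cat > "$log" <<'EOF'
configure:5120: checking for mcrypt.h
configure:5127: error: mcrypt.h not found. Please reinstall libmcrypt.
## ---------------- ##
## Cache variables. ##
EOF
# The last "error:" line is normally the one that stopped configure:
grep 'error:' "$log" | tail -n 1
rm -f "$log"
```

On a real log this points you at the failing check; the lines just above it in config.log then show the exact compiler invocation and its error output.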
How best to troubleshoot a source package configure script?
1,518,444,908,000
I have downloaded mandoc from mandoc | UNIX manpage compiler. I executed these commands on my Ubuntu 14.04 to install it: tar -xzvf mandoc.tar.gz cd mandoc-1.14.3 ./configure Then sudo make install fails as follows: read.c:34:18: fatal error: zlib.h: No such file or directory #include <zlib.h> ^ compilation terminated. make: *** [read.o] Error 1 I don't know how to solve this.
The program obviously makes use of zlib, a compression library, probably to be able to decompress compressed manual sources. Depending on your Unix, you will need to install the zlib development files (headers etc.). On Debian based Linux distributions, and Ubuntu, these come packaged in the zlib1g-dev package, for example. Also, if your Unix already have mandoc available as a pre-compiled package, then use that rather than compiling it yourself. See the list of Unices here for example (list may be incomplete), and note that mandoc is sometimes known as mdocml. On Ubuntu (Zesty or later, but not Trusty which the user asking the question is running): apt-get install mandoc
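Before re-running make, it can save a cycle to confirm the development files are actually visible to the compiler. A minimal probe, assuming a `cc` compiler driver is installed (the probe program is made up for this check):

```shell
# Probe whether the zlib header and library are usable by the compiler:
if printf '#include <zlib.h>\nint main(void){return 0;}\n' | cc -x c - -o /dev/null -lz 2>/dev/null; then
  echo "zlib dev files found"
else
  echo "missing: install zlib1g-dev (deb) or zlib-devel (rpm)"
fi
```

If the probe fails, installing the distribution's zlib development package as described above should make both the probe and the mandoc build succeed.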
mandoc installation problem
1,518,444,908,000
I don't find an answer on the web. '-' $ cmake .. -- The C compiler identification is GNU 4.9.2 -- The CXX compiler identification is GNU 4.9.2 -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Found PkgConfig: /usr/bin/pkg-config (found version "0.28") -- checking for modules 'gtk+-3.0;gee-0.8;gio-unix-2.0;libgnome-menu 3.0' -- found gtk+-3.0, version 3.14.5 -- found gee-0.8, version 0.16.1 -- found gio-unix-2.0, version 2.42.1 -- found libgnome-menu-3.0, version 3.13.3 -- checking for module 'gthread-2.0 >= 2.14.0' -- found gthread-2.0 , version 2.42.1 -- Found Vala: /usr/bin/valac -- checking for a minimum Vala version of 0.14.0 -- found Vala, version 0.26.1 -- Configuring done -- Generating done -- Build files have been written to: /usr/local/slingswarm/build $ make [ 11%] Generating slingshot.c, frontend/widgets/AppItem.c, frontend/widgets/CompositedWindow.c, frontend/widgets/Indicators.c, frontend/widgets/Searchbar.c, frontend/Utilities.c, frontend/Color.c, backend/GMenuEntries.c /usr/local/slingswarm/slingshot.vala:167.17-167.45: error: 2 missing arguments for `void Gtk.Grid.attach (Gtk.Widget child, int left, int top, int width, int height)' this.grid.attach (item, c, r); ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Compilation failed: 1 error(s), 0 warning(s) CMakeFiles/slingswarm-launcher.dir/build.make:60: recipe for target 'slingshot.c' failed make[2]: *** [slingshot.c] Error 1 CMakeFiles/Makefile2:60: recipe for target 'CMakeFiles/slingswarm-launcher.dir/all' failed make[1]: *** [CMakeFiles/slingswarm-launcher.dir/all] Error 2 Makefile:117: recipe for target 'all' failed make: *** [all] Error 2 https://github.com/echo-devim/slingswarm
Slingswarm expects newer versions of valac than 0.26.1; later versions define default values for the missing parameters to Gtk.Grid.attach. To build the package on Debian 8, change the erroneous line in slingshot.vala to this.grid.attach (item, c, r, 1, 1); and run make again. (So really this is a bug in Slingswarm.)
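The one-line edit can also be scripted. This sketch applies the same fix with sed to a scratch file standing in for slingshot.vala (GNU sed assumed, as on Debian):

```shell
# Apply the Gtk.Grid.attach fix non-interactively on a stand-in file:
f=$(mktemp)
echo 'this.grid.attach (item, c, r);' > "$f"   # stand-in for line 167 of slingshot.vala
sed -i 's/attach (item, c, r);/attach (item, c, r, 1, 1);/' "$f"
cat "$f"
rm -f "$f"
```

Run against the real slingshot.vala, the same sed expression makes the edit in place before re-running make.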
I cannot compile the Slingswarm package from GitHub on Debian 8
1,518,444,908,000
There is a flag called CPU_FREQ_STAT, which exports CPU frequency statistics information through sysfs file system. More info available here: cpufreq driver kconfig Then, one can guess constant I/O operation due to the statistic information exporting can cause memory overhead or any decrease of the performance and battery life. Would this affirmation be correct? If not, why?
sysfs is a virtual filesystem. It doesn't really exist on disk, so there is no (disk) I/O. Nor is there any I/O at all, even virtual, except when something reads the file. It's just a kernel API that's being exposed to userspace through open/read/write/close instead of, e.g., adding another syscall. There is probably a tiny overhead. It surely takes a trivial bit of memory to hold the counters, a trivial amount of CPU time to update them, and increases the kernel image size by a trivial amount. OTOH, if frequency scaling is used on your machine, turning that off will greatly reduce your ability to investigate its behavior—and lowering CPU frequency at the right time typically has a major effect on both performance and battery life.
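If the option is enabled, the exported counters can be read directly. A guarded sketch (the sysfs path follows the cpufreq stats layout; whether it exists depends on the kernel config and hardware):

```shell
# Read the per-frequency residency counters, if the kernel exposes them:
f=/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state
if [ -r "$f" ]; then
  head -n 5 "$f"    # columns: frequency in kHz, time spent at it (10ms units)
else
  echo "cpufreq stats not exposed on this machine"
fi
```

Nothing is written to disk by reading this file; the kernel generates the content on each read, which is exactly why the I/O-overhead worry does not apply.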
Could disabling kernel cpu frequency monitoring improve performance and/or battery life?
1,518,444,908,000
I tried to compile awusb.ko on my Linux Mint with make make -v GNU Make 3.81 Copyright (C) 2006 Free Software Foundation, Inc. This is free software; see the sources for copying conditions. The makefile: obj-m := awusb.o KDIR := /lib/modules/$(shell uname -r)/build PWD := $(shell pwd) default: $(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules clean: $(MAKE) -C $(KDIR) SUBDIRS=$(PWD) clean rm -rf Module.markers module.order module.sysvers make stops with: make -C /lib/modules/3.19.0-32-generic/build SUBDIRS=/home/ger/progentp/Flash droid/sunxi-livesuite-master/awusb modules make[1]: Entering directory '/usr/src/linux-headers-3.19.0-32-generic' Makefile:669: Cannot use CONFIG_CC_STACKPROTECTOR_REGULAR: -fstack-protector not supported by compiler make[1]: *** No rule to make target 'droid/sunxi-livesuite-master/awusb'. Stop. make[1]: Leaving directory '/usr/src/linux-headers-3.19.0-32-generic' make: *** [default] Error 2 In my opinion, with obj-m := awusb.o there is a rule for building the awusb.ko module. Any help please?
Your problem is the space in /home/ger/progentp/Flash droid/ Remove the space from the folder name, or move your git clone to another location without spaces.
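The breakage comes from unquoted word splitting: the recipe's SUBDIRS=$(PWD) hands the sub-make two words once the path contains a space, which is why the error names the bogus target droid/sunxi-livesuite-master/awusb. A small sketch with the (hypothetical) path shows the effect:

```shell
# Demonstrate how an unquoted path with a space splits into two words:
pwd_with_space='/home/ger/progentp/Flash droid/awusb'   # hypothetical build path
set -- $pwd_with_space     # unquoted expansion splits on whitespace, as in the make recipe
echo "the sub-make sees $# arguments: $1 and $2"
```

Renaming the directory (e.g. Flash_droid) removes the split, which is why the fix works.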
Kernel module compilation fails: No rule to make target 'droid/sunxi-livesuite-master/awusb' [closed]
1,518,444,908,000
Just starting out with Xubuntu 14.04 on a refurbished machine (2GB mem, 2GHz dual core), and have a cursory/reading knowledge of c, but am not a c programmer. I'm trying to compile code I found here to create a visual notification for the action of moving between workspaces. The code: // wschanged.c #include <libwnck/libwnck.h> #include <stdlib.h> static void on_active_workspace_changed (WnckScreen *screen, WnckWorkspace *space, gpointer data) { // Executes a script on workspace change system ("~/.workspace-changed"); } int main(int argc, char ** argv) { GMainLoop *loop; WnckScreen *screen; glib:gdk_init (&argc, &argv); loop = g_main_loop_new (NULL, FALSE); screen = wnck_screen_get_default(); g_signal_connect (screen, "active-workspace-changed", G_CALLBACK (on_active_workspace_changed), NULL); g_main_loop_run (loop); g_main_loop_unref (loop); return 0; } The compile command: gcc -O2 -DWNCK_I_KNOW_THIS_IS_UNSTABLE -o wschanged `pkg-config --cflags --libs libwnck-3.0` wschanged.c The errors I'm receiving: wschanged.c: In function ‘on_active_workspace_changed’: wschanged.c:12:12: warning: ignoring return value of ‘system’, declared with attribute warn_unused_result [-Wunused-result] system ("~/.workspace-changed"); ^ /tmp/ccR60OkB.o: In function `main': wschanged.c:(.text.startup+0x16): undefined reference to `gdk_init' wschanged.c:(.text.startup+0x1f): undefined reference to `g_main_loop_new' wschanged.c:(.text.startup+0x27): undefined reference to `wnck_screen_get_default' wschanged.c:(.text.startup+0x41): undefined reference to `g_signal_connect_data' wschanged.c:(.text.startup+0x49): undefined reference to `g_main_loop_run' wschanged.c:(.text.startup+0x51): undefined reference to `g_main_loop_unref' collect2: error: ld returned 1 exit status I have the latest version of libwnck, and I've also added: #include <glib.h> to see if this would fix the errors seeming to stem from undefined references to objects in the glib package, but this has not changed any of the 
error output. Any suggestions would be greatly appreciated!
The order of arguments to gcc is significant, so you need to split the --cflags and --libs variants of the pkg-config invocations: gcc -O2 -DWNCK_I_KNOW_THIS_IS_UNSTABLE -o wschanged `pkg-config --cflags libwnck-3.0` wschanged.c `pkg-config --libs libwnck-3.0`
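The underlying rule is that the linker scans libraries left to right and only pulls in symbols it already knows are needed, so a library must come after the objects that use it. A self-contained demo with a tiny static archive (file names are made up; assumes cc and ar are installed):

```shell
# Show that link order matters: the library must follow the source that uses it.
d=$(mktemp -d) && cd "$d"
printf 'int helper(void){return 1;}\n' > lib.c
printf 'int helper(void); int main(void){return helper()-1;}\n' > main.c
cc -c lib.c && ar rcs libhelper.a lib.o
cc main.c -L. -lhelper -o ok && echo "library after source: link ok"
cc -L. -lhelper main.c -o bad 2>/dev/null || echo "library before source: undefined reference"
```

That is exactly why the single backquoted `pkg-config --cflags --libs` invocation fails: it puts the `-l` flags before wschanged.c.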
Undefined reference error to glib components even with glib included
1,518,444,908,000
I'm trying to build Firefox 51 from source on Mint 17. I already ran the boostrap script and it installed rust, but ./mach build apparently can't find it? The error is: 0:03.58 checking for rustc... not found 0:03.58 checking for cargo... not found 0:03.58 ERROR: Rust compiler not found. 0:03.58 To compile rust language sources, you must have 'rustc' in your path. 0:03.58 See https//www.rust-lang.org/ for more information. 0:03.58 0:03.58 You can install rust by running './mach bootstrap' 0:03.58 or by directly running the installer from https://rustup.rs/ 0:03.58 But I already ran ./mach bootstrap and it installed rust! Now when I run ./mach bootstrap again, it says: Could not find a Rust compiler. You have some rust files in /home/user/.cargo/bin but they're not part of this shell's PATH. To add these to the PATH, edit your shell initialization script, which may be called ~/.bashrc or ~/.bash_profile or ~/.profile, and add the following line: source /home/user/.cargo/env Then restart your shell and run the bootstrap script again. So I did that. ~/.profile now has source /home/user/.cargo/env at the bottom. And I restarted the terminal. And ./mach bootstrap and ./mach build still can't find rust. How do I fix this?
Installing Rust manually seemed to fix the issue: curl https://sh.rustup.rs -sSf | sh
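For the PATH route to work, the running shell actually has to pick up the change. A quick sanity check, assuming rustup's default install location of ~/.cargo/bin:

```shell
# Verify the cargo bin directory really is on PATH in this shell:
export PATH="$HOME/.cargo/bin:$PATH"
case ":$PATH:" in
  *":$HOME/.cargo/bin:"*) echo "cargo bin is on PATH" ;;
  *)                      echo "cargo bin is NOT on PATH" ;;
esac
command -v rustc 2>/dev/null || echo "rustc still not found - check the install"
```

Note that editing ~/.profile only affects login shells; a terminal emulator that starts non-login shells reads ~/.bashrc instead, which is one common reason the bootstrap script keeps failing after the edit.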
Firefox build "you must have 'rustc' in your path"?
1,518,444,908,000
I'm trying to compile and install the kernel-4.9.8 sources from https://kernel.org on Debian 8 (jessie). I'm following this procedure: make defconfig make menuconfig make I managed to compile the sources successfully, but I can't install the kernel. I've tried both sudo make install and sudo dkms autoinstall -k 4.9.8, but they seem to require linux-headers-4.9.8 and I can't find it in the Debian repositories.
Try using make-kpkg instead. When run from a kernel source tree it'll compile a kernel and build a full set of Debian packages using that source and config -- linux-image, linux-headers, linux-doc, all matching the version you specified. It's part of the kernel-package package, so what you want to do is: sudo apt-get install kernel-package Edit /etc/kernel-img.conf and /etc/kernel-pkg.conf to match your preferences fakeroot make-kpkg --initrd linux-image Sit back, get some tea. The above process will take a while. It will generate a linux-image-(version) deb package one level up, which you can then install with dpkg; that will handle things like calling your bootloader's update to add the new kernel automatically. This will significantly ease your difficulties. At the end of this process, you will have a Linux kernel that has the exact capabilities you told it to have, and none of the capabilities that you didn't tell it to have. Consider that last sentence a polite warning.
Compile and install pure Kernel on Debian
1,486,153,211,000
I am using a webserver that is administered by someone else. The version of mysqldump on this server is from 2011: $ mysqldump --help mysqldump Ver 10.13 Distrib 5.1.63, for debian-linux-gnu (x86_64) Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved. As a result, when I try to do a database dump, I get this output: mysqldump: Couldn't execute 'SET OPTION SQL_QUOTE_SHOW_CREATE=1': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_QUOTE_SHOW_CREATE=1' at line 1 (1064) According to this link I need to upgrade mysqldump. I asked my sysadmin to do it, and got generic non-helpful responses. I doubt I will be able to convince them to do the upgrade. Moving to a new server is not an option at the moment. Is there some way I can download and compile a local version in my home directory (without having to compile a gazillion dependencies)? If so, would you mind pointing me in the right direction? Thanks!
Your link also contains the quickest solution: Andre Couture • 3 years ago Hi, I had the same issue trying to mysqldump a 5.6 database in order to downgrade. I took a slightly different approach as I did not had the luxury to install mysql-client5.6 I simply did a copy of /usr/bin/mysqldump, vi -b mysqldump look for SET OPTION ( use the '/' command) replace the first character by # in order to read "#ET OPTION" (use the 'r' command) save and ran my dump ( ESC :x ) Now you have a version of mysqldump that can connect and dump data from mysql5.6 Worked like a charm for me
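The same-length replacement described in the quote can also be done non-interactively. This sketch patches a scratch file standing in for the copied mysqldump binary (GNU sed assumed):

```shell
# Comment out the unsupported statement without changing the file's byte length:
cp=$(mktemp)
printf 'SET OPTION SQL_QUOTE_SHOW_CREATE=1' > "$cp"   # stand-in for the string embedded in the binary
# "SET" becomes "#ET": one byte replaced, so no offsets inside the binary move.
sed -i 's/SET OPTION SQL_QUOTE_SHOW_CREATE/#ET OPTION SQL_QUOTE_SHOW_CREATE/' "$cp"
cat "$cp"; echo
rm -f "$cp"
```

Keeping the replacement the same length is the crucial part: the file is an executable, and shifting any bytes would corrupt it.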
Download / Compile local version of mysqldump
1,486,153,211,000
Now that the vc4 DRM/GBM driver has been making progress, is it possible to build Linux (not just the kernel but the entire operating system eg. Linux From Scratch) without libraries and programs from /opt/vc? Would it work? Right now having /opt/vc means the normal BLFS (Beyond Linux From Scratch) setup for X11 does not work with many programs because they link at compile-time to libraries in /opt/vc and at runtime search /usr/lib.
The only special things you should need for building Raspberry Pi with graphics support, compared to other ARMs currently: linux-next kernel (to get the devicetree for vc4, until 4.7 is released) raspberrypi/firmware on a vfat partition However, building everything from scratch will be an exercise in frustration, and I would recommend starting from an existing distribution with ARM support.
Build Linux for a Raspberry Pi without /opt/vc?
1,486,153,211,000
I'm trying to build opencv for my x86_64 Centos 6 operating system. I think the problem is make is trying to use the 32 bit version of the bz2 library instead of the 64 bit version. I get this error from make: [ 17%] Building CXX object modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_mjpeg_decoder.cpp.o [ 17%] Building CXX object modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_dc1394_v2.cpp.o [ 17%] Building CXX object modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_gstreamer.cpp.o [ 19%] Building CXX object modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_v4l.cpp.o [ 19%] Building CXX object modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_ffmpeg.cpp.o Linking CXX shared library ../../lib/libopencv_videoio.so /lib/libbz2.so.1: could not read symbols: File in wrong format collect2: ld returned 1 exit status make[2]: *** [lib/libopencv_videoio.so.3.1.0] Error 1 make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2 make: *** [all] Error 2 And I run cmake like this: cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_LIBRARY_PATH=/usr/lib64 CMAKE_INSTALL_PREFIX=/usr/local .. I have the library installed: [root@localhost build]# ldconfig -p | grep libbz2 libbz2.so.1 (libc6,x86-64) => /lib64/libbz2.so.1 libbz2.so.1 (libc6) => /lib/libbz2.so.1 What can I do to fix this problem? Thanks! EDIT: I also have the directories /lib and /lib64 and in /lib64 I have libbz2.so.1 and libbz2.so.1.0.4 EDIT: And I'm following these instructions http://docs.opencv.org/2.4/doc/tutorials/introduction/linux_install/linux_install.html
After browsing several other Q&A sites, I came up with the problem being that cmake is using the 32 bit library as if it were the 64 bit one. I solved this on Fedora 22 by doing this: remove your CMakeCache.txt file run cmake to regenerate it cmake -D blah blah flags and values Edit your CMakeCache.txt file and change this line //Path to a library. BZIP2_LIBRARIES:FILEPATH=/lib/libbz2.so.1 to this //Path to a library. BZIP2_LIBRARIES:FILEPATH=/lib64/libbz2.so.1 Run cmake again (not sure if needed but just in case) make It then finished without errors
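The cache edit can likewise be scripted. A sketch against a scratch copy of the relevant CMakeCache.txt line (GNU sed assumed):

```shell
# Point the cached bzip2 path at the 64-bit library on a stand-in cache file:
c=$(mktemp)
echo 'BZIP2_LIBRARIES:FILEPATH=/lib/libbz2.so.1' > "$c"   # the offending cache entry
sed -i 's|FILEPATH=/lib/libbz2|FILEPATH=/lib64/libbz2|' "$c"
cat "$c"
rm -f "$c"
```

The same sed expression run on the real build/CMakeCache.txt saves re-editing by hand if the cache ever has to be regenerated again.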
Having problems with make and opencv
1,486,153,211,000
I am compiling the Red Hat stable Linux kernel on Red Hat Linux, and the error I am facing is as below
First install XZ yum -y install xz then tar -xvf yourfile.tar.xz
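A quick round trip shows the command in action, assuming the xz package above is installed so tar can use it (file names are made up):

```shell
# Create a sample .tar.xz and extract it again:
d=$(mktemp -d) && cd "$d"
echo 'kernel sources' > file.txt
tar -cJf sources.tar.xz file.txt   # -J selects xz compression
rm file.txt
tar -xvf sources.tar.xz            # modern GNU tar detects the xz format by itself
cat file.txt
```

On very old tar versions without xz auto-detection, `tar -xJvf` (or `xz -d` followed by `tar -xvf`) does the same job.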
how to untar file in linux red hat?
1,486,153,211,000
Position Independent Code means that the generated machine code is not dependent on being located at a specific address in order to work, and the jumps are relative. So is it OK to declare -fPIC system-wide in a Linux distro, especially on normal Intel PC machines?
First, PIC is a compiler issue and not a Linux distro issue. PIC should be set as a per-build compiler flag instead of being hardcoded globally. Not all machine architectures support PIC. If your builds are static (non-shared), you do not need PIC, and it can be inefficient. Some architectures/compilers might have a different equivalent flag; for example, IBM xl compilers have the -qpic flag. And while you ask about Intel PC machines, if you happen to create build files for some packages, hardcoding the flag might limit their portability.
-fPIC Flag System-wide?
1,486,153,211,000
I'm trying to build php-5.3 on arch linux using phpenv + php-build. The problem is bison is too new: $ phpenv install 5.3.29 [Info]: Loaded apc Plugin. [Info]: Loaded pyrus Plugin. [Info]: Loaded xdebug Plugin. [Info]: Loaded xhprof Plugin. [Info]: php.ini-production gets used as php.ini [Info]: Building 5.3.29 into /home/yuri/.phpenv/versions/5.3.29 [Downloading]: http://php.net/distributions/php-5.3.29.tar.bz2 [Preparing]: /tmp/php-build/source/5.3.29 ----------------- | BUILD ERROR | ----------------- Here are the last 10 lines from the log: ----------------------------------------- configure: warning: bison versions supported for regeneration of the Zend/PHP parsers: 1.28 1.35 1.75 1.875 2.0 2.1 2.2 2.3 2.4 2.4.1 2.4.2 2.4.3 2.5 2.5.1 2.6 2.6.1 2.6.2 2.6.4 (found: 3.0.2). configure: warning: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. configure: error: mcrypt.h not found. Please reinstall libmcrypt. ----------------------------------------- The full Log is available at '/tmp/php-build.5.3.29.20141005234955.log'. [Warn]: Aborting build. So I was thinking about building bison from sources and feeding it to php-build. Or building php manually if that helps. Is there a way? UPD Well, I'm now stuck with adding openssl support. For now, I'm trying to make plain configure + make work, and here's what they say: $ ./configure --with-openssl && make ... Configuring extensions ... checking for OpenSSL support... yes checking for Kerberos support... no checking for DSA_get_default_method in -lssl... (cached) no checking for X509_free in -lcrypto... (cached) yes checking for pkg-config... (cached) /usr/bin/pkg-config checking for PCRE library to use... bundled ...
/bin/sh /home/yuri/_/php-5.3.29/libtool --silent --preserve-dup-deps --mode=compile gcc -Iext/standard/ -I/home/yuri/_/php-5.3.29/ext/standard/ -DPHP_ATOM_INC -I/home/yuri/_/php-5.3.29/include -I/home/yuri/_/php-5.3.29/main -I/home/yuri/_/php-5.3.29 -I/home/yuri/_/php-5.3.29/ext/date/lib -I/home/yuri/_/php-5.3.29/ext/ereg/regex -I/usr/include/libxml2 -I/home/yuri/_/php-5.3.29/ext/sqlite3/libsqlite -I/home/yuri/_/php-5.3.29/TSRM -I/home/yuri/_/php-5.3.29/Zend -I/usr/include -g -O2 -fvisibility=hidden -c /home/yuri/_/php-5.3.29/ext/standard/info.c -o ext/standard/info.lo /bin/sh /home/yuri/_/php-5.3.29/libtool --silent --preserve-dup-deps --mode=compile gcc -Imain/ -I/home/yuri/_/php-5.3.29/main/ -DPHP_ATOM_INC -I/home/yuri/_/php-5.3.29/include -I/home/yuri/_/php-5.3.29/main -I/home/yuri/_/php-5.3.29 -I/home/yuri/_/php-5.3.29/ext/date/lib -I/home/yuri/_/php-5.3.29/ext/ereg/regex -I/usr/include/libxml2 -I/home/yuri/_/php-5.3.29/ext/sqlite3/libsqlite -I/home/yuri/_/php-5.3.29/TSRM -I/home/yuri/_/php-5.3.29/Zend -I/usr/include -g -O2 -fvisibility=hidden -c main/internal_functions.c -o main/internal_functions.lo /bin/sh /home/yuri/_/php-5.3.29/libtool --silent --preserve-dup-deps --mode=link gcc -export-dynamic -I/usr/include -g -O2 -fvisibility=hidden ext/date/php_date.lo ext/date/lib/astro.lo ext/date/lib/dow.lo ext/date/lib/parse_date.lo ext/date/lib/parse_tz.lo ext/date/lib/timelib.lo ext/date/lib/tm2unixtime.lo ext/date/lib/unixtime2tm.lo ext/date/lib/parse_iso_intervals.lo ext/date/lib/interval.lo ext/ereg/ereg.lo ext/ereg/regex/regcomp.lo ext/ereg/regex/regexec.lo ext/ereg/regex/regerror.lo ext/ereg/regex/regfree.lo ext/libxml/libxml.lo ext/openssl/openssl.lo ext/openssl/xp_ssl.lo ext/pcre/pcrelib/pcre_chartables.lo ext/pcre/pcrelib/pcre_ucd.lo ext/pcre/pcrelib/pcre_compile.lo ext/pcre/pcrelib/pcre_config.lo ext/pcre/pcrelib/pcre_exec.lo ext/pcre/pcrelib/pcre_fullinfo.lo ext/pcre/pcrelib/pcre_get.lo ext/pcre/pcrelib/pcre_globals.lo 
ext/pcre/pcrelib/pcre_maketables.lo ext/pcre/pcrelib/pcre_newline.lo ext/pcre/pcrelib/pcre_ord2utf8.lo ext/pcre/pcrelib/pcre_refcount.lo ext/pcre/pcrelib/pcre_study.lo ext/pcre/pcrelib/pcre_tables.lo ext/pcre/pcrelib/pcre_valid_utf8.lo ext/pcre/pcrelib/pcre_version.lo ext/pcre/pcrelib/pcre_xclass.lo ext/pcre/php_pcre.lo ext/sqlite3/sqlite3.lo ext/sqlite3/libsqlite/sqlite3.lo ext/ctype/ctype.lo ext/dom/php_dom.lo ext/dom/attr.lo ext/dom/document.lo ext/dom/domerrorhandler.lo ext/dom/domstringlist.lo ext/dom/domexception.lo ext/dom/namelist.lo ext/dom/processinginstruction.lo ext/dom/cdatasection.lo ext/dom/documentfragment.lo ext/dom/domimplementation.lo ext/dom/element.lo ext/dom/node.lo ext/dom/string_extend.lo ext/dom/characterdata.lo ext/dom/documenttype.lo ext/dom/domimplementationlist.lo ext/dom/entity.lo ext/dom/nodelist.lo ext/dom/text.lo ext/dom/comment.lo ext/dom/domconfiguration.lo ext/dom/domimplementationsource.lo ext/dom/entityreference.lo ext/dom/notation.lo ext/dom/xpath.lo ext/dom/dom_iterators.lo ext/dom/typeinfo.lo ext/dom/domerror.lo ext/dom/domlocator.lo ext/dom/namednodemap.lo ext/dom/userdatahandler.lo ext/fileinfo/fileinfo.lo ext/fileinfo/libmagic/apprentice.lo ext/fileinfo/libmagic/apptype.lo ext/fileinfo/libmagic/ascmagic.lo ext/fileinfo/libmagic/cdf.lo ext/fileinfo/libmagic/cdf_time.lo ext/fileinfo/libmagic/compress.lo ext/fileinfo/libmagic/encoding.lo ext/fileinfo/libmagic/fsmagic.lo ext/fileinfo/libmagic/funcs.lo ext/fileinfo/libmagic/is_tar.lo ext/fileinfo/libmagic/magic.lo ext/fileinfo/libmagic/print.lo ext/fileinfo/libmagic/readcdf.lo ext/fileinfo/libmagic/readelf.lo ext/fileinfo/libmagic/softmagic.lo ext/filter/filter.lo ext/filter/sanitizing_filters.lo ext/filter/logical_filters.lo ext/filter/callback_filter.lo ext/hash/hash.lo ext/hash/hash_md.lo ext/hash/hash_sha.lo ext/hash/hash_ripemd.lo ext/hash/hash_haval.lo ext/hash/hash_tiger.lo ext/hash/hash_gost.lo ext/hash/hash_snefru.lo ext/hash/hash_whirlpool.lo ext/hash/hash_adler32.lo 
ext/hash/hash_crc32.lo ext/hash/hash_salsa.lo ext/iconv/iconv.lo ext/json/json.lo ext/json/utf8_to_utf16.lo ext/json/utf8_decode.lo ext/json/JSON_parser.lo ext/pdo/pdo.lo ext/pdo/pdo_dbh.lo ext/pdo/pdo_stmt.lo ext/pdo/pdo_sql_parser.lo ext/pdo/pdo_sqlstate.lo ext/pdo_sqlite/pdo_sqlite.lo ext/pdo_sqlite/sqlite_driver.lo ext/pdo_sqlite/sqlite_statement.lo ext/phar/util.lo ext/phar/tar.lo ext/phar/zip.lo ext/phar/stream.lo ext/phar/func_interceptors.lo ext/phar/dirstream.lo ext/phar/phar.lo ext/phar/phar_object.lo ext/phar/phar_path_check.lo ext/posix/posix.lo ext/reflection/php_reflection.lo ext/session/session.lo ext/session/mod_files.lo ext/session/mod_mm.lo ext/session/mod_user.lo ext/simplexml/simplexml.lo ext/simplexml/sxe.lo ext/spl/php_spl.lo ext/spl/spl_functions.lo ext/spl/spl_engine.lo ext/spl/spl_iterators.lo ext/spl/spl_array.lo ext/spl/spl_directory.lo ext/spl/spl_exceptions.lo ext/spl/spl_observer.lo ext/spl/spl_dllist.lo ext/spl/spl_heap.lo ext/spl/spl_fixedarray.lo ext/sqlite/sqlite.lo ext/sqlite/sess_sqlite.lo ext/sqlite/pdo_sqlite2.lo ext/sqlite/libsqlite/src/opcodes.lo ext/sqlite/libsqlite/src/parse.lo ext/sqlite/libsqlite/src/encode.lo ext/sqlite/libsqlite/src/auth.lo ext/sqlite/libsqlite/src/btree.lo ext/sqlite/libsqlite/src/build.lo ext/sqlite/libsqlite/src/delete.lo ext/sqlite/libsqlite/src/expr.lo ext/sqlite/libsqlite/src/func.lo ext/sqlite/libsqlite/src/hash.lo ext/sqlite/libsqlite/src/insert.lo ext/sqlite/libsqlite/src/main.lo ext/sqlite/libsqlite/src/os.lo ext/sqlite/libsqlite/src/pager.lo ext/sqlite/libsqlite/src/printf.lo ext/sqlite/libsqlite/src/random.lo ext/sqlite/libsqlite/src/select.lo ext/sqlite/libsqlite/src/table.lo ext/sqlite/libsqlite/src/tokenize.lo ext/sqlite/libsqlite/src/update.lo ext/sqlite/libsqlite/src/util.lo ext/sqlite/libsqlite/src/vdbe.lo ext/sqlite/libsqlite/src/attach.lo ext/sqlite/libsqlite/src/btree_rb.lo ext/sqlite/libsqlite/src/pragma.lo ext/sqlite/libsqlite/src/vacuum.lo ext/sqlite/libsqlite/src/copy.lo 
ext/sqlite/libsqlite/src/vdbeaux.lo ext/sqlite/libsqlite/src/date.lo ext/sqlite/libsqlite/src/where.lo ext/sqlite/libsqlite/src/trigger.lo ext/standard/crypt_freesec.lo ext/standard/crypt_blowfish.lo ext/standard/crypt_sha512.lo ext/standard/crypt_sha256.lo ext/standard/php_crypt_r.lo ext/standard/array.lo ext/standard/base64.lo ext/standard/basic_functions.lo ext/standard/browscap.lo ext/standard/crc32.lo ext/standard/crypt.lo ext/standard/cyr_convert.lo ext/standard/datetime.lo ext/standard/dir.lo ext/standard/dl.lo ext/standard/dns.lo ext/standard/exec.lo ext/standard/file.lo ext/standard/filestat.lo ext/standard/flock_compat.lo ext/standard/formatted_print.lo ext/standard/fsock.lo ext/standard/head.lo ext/standard/html.lo ext/standard/image.lo ext/standard/info.lo ext/standard/iptc.lo ext/standard/lcg.lo ext/standard/link.lo ext/standard/mail.lo ext/standard/math.lo ext/standard/md5.lo ext/standard/metaphone.lo ext/standard/microtime.lo ext/standard/pack.lo ext/standard/pageinfo.lo ext/standard/quot_print.lo ext/standard/rand.lo ext/standard/soundex.lo ext/standard/string.lo ext/standard/scanf.lo ext/standard/syslog.lo ext/standard/type.lo ext/standard/uniqid.lo ext/standard/url.lo ext/standard/var.lo ext/standard/versioning.lo ext/standard/assert.lo ext/standard/strnatcmp.lo ext/standard/levenshtein.lo ext/standard/incomplete_class.lo ext/standard/url_scanner_ex.lo ext/standard/ftp_fopen_wrapper.lo ext/standard/http_fopen_wrapper.lo ext/standard/php_fopen_wrapper.lo ext/standard/credits.lo ext/standard/css.lo ext/standard/var_unserializer.lo ext/standard/ftok.lo ext/standard/sha1.lo ext/standard/user_filters.lo ext/standard/uuencode.lo ext/standard/filters.lo ext/standard/proc_open.lo ext/standard/streamsfuncs.lo ext/standard/http.lo ext/tokenizer/tokenizer.lo ext/tokenizer/tokenizer_data.lo ext/xml/xml.lo ext/xml/compat.lo ext/xmlreader/php_xmlreader.lo ext/xmlwriter/php_xmlwriter.lo TSRM/TSRM.lo TSRM/tsrm_strtok_r.lo TSRM/tsrm_virtual_cwd.lo main/main.lo 
main/snprintf.lo main/spprintf.lo main/php_sprintf.lo main/safe_mode.lo main/fopen_wrappers.lo main/alloca.lo main/php_scandir.lo main/php_ini.lo main/SAPI.lo main/rfc1867.lo main/php_content_types.lo main/strlcpy.lo main/strlcat.lo main/mergesort.lo main/reentrancy.lo main/php_variables.lo main/php_ticks.lo main/network.lo main/php_open_temporary_file.lo main/php_logos.lo main/output.lo main/getopt.lo main/streams/streams.lo main/streams/cast.lo main/streams/memory.lo main/streams/filter.lo main/streams/plain_wrapper.lo main/streams/userspace.lo main/streams/transports.lo main/streams/xp_socket.lo main/streams/mmap.lo main/streams/glob_wrapper.lo Zend/zend_language_parser.lo Zend/zend_language_scanner.lo Zend/zend_ini_parser.lo Zend/zend_ini_scanner.lo Zend/zend_alloc.lo Zend/zend_compile.lo Zend/zend_constants.lo Zend/zend_dynamic_array.lo Zend/zend_execute_API.lo Zend/zend_highlight.lo Zend/zend_llist.lo Zend/zend_opcode.lo Zend/zend_operators.lo Zend/zend_ptr_stack.lo Zend/zend_stack.lo Zend/zend_variables.lo Zend/zend.lo Zend/zend_API.lo Zend/zend_extensions.lo Zend/zend_hash.lo Zend/zend_list.lo Zend/zend_indent.lo Zend/zend_builtin_functions.lo Zend/zend_sprintf.lo Zend/zend_ini.lo Zend/zend_qsort.lo Zend/zend_multibyte.lo Zend/zend_ts_hash.lo Zend/zend_stream.lo Zend/zend_iterators.lo Zend/zend_interfaces.lo Zend/zend_exceptions.lo Zend/zend_strtod.lo Zend/zend_gc.lo Zend/zend_closures.lo Zend/zend_float.lo Zend/zend_objects.lo Zend/zend_object_handlers.lo Zend/zend_objects_API.lo Zend/zend_default_classes.lo Zend/zend_execute.lo sapi/cgi/cgi_main.lo sapi/cgi/fastcgi.lo main/internal_functions.lo -lcrypt -lresolv -lcrypt -lrt -lrt -lm -ldl -lnsl -lxml2 -lz -lm -ldl -lxml2 -lz -lm -ldl -lxml2 -lz -lm -ldl -lcrypt -lxml2 -lz -lm -ldl -lxml2 -lz -lm -ldl -lxml2 -lz -lm -ldl -lcrypt -o sapi/cgi/php-cgi ext/openssl/openssl.o: In function `zm_startup_openssl': /home/yuri/_/php-5.3.29/ext/openssl/openssl.c:992: undefined reference to `SSL_library_init' 
/home/yuri/_/php-5.3.29/ext/openssl/openssl.c:993: undefined reference to `OpenSSL_add_all_ciphers' /home/yuri/_/php-5.3.29/ext/openssl/openssl.c:994: undefined reference to `OpenSSL_add_all_digests' /home/yuri/_/php-5.3.29/ext/openssl/openssl.c:995: undefined reference to `OPENSSL_add_all_algorithms_noconf' /home/yuri/_/php-5.3.29/ext/openssl/openssl.c:997: undefined reference to `ERR_load_ERR_strings' ... /home/yuri/_/php-5.3.29/ext/openssl/xp_ssl.c:559: undefined reference to `X509_dup' /home/yuri/_/php-5.3.29/ext/openssl/xp_ssl.c:558: undefined reference to `sk_num' /home/yuri/_/php-5.3.29/ext/openssl/xp_ssl.c:519: undefined reference to `SSL_shutdown' ext/openssl/xp_ssl.o: In function `php_openssl_setup_crypto': /home/yuri/_/php-5.3.29/ext/openssl/xp_ssl.c:397: undefined reference to `SSL_CTX_free' collect2: error: ld returned 1 exit status Makefile:243: recipe for target 'sapi/cgi/php-cgi' failed make: *** [sapi/cgi/php-cgi] Error 1 Supposedly the problem is with this line: checking for DSA_get_default_method in -lssl... (cached) no But I don't really know how to remedy it. I have openssl 1.0.1.i-1 package installed, if anything.
The trick was to install the latest openssl-0.9.x version; apparently php-5.3 doesn't work with the openssl-1.x series. Then: PHP_BUILD_CONFIGURE_OPTS=--with-openssl=/usr/local/ssl phpenv install 5.3.29 Also, it might be necessary to specify mysqli.default_socket = /run/mysqld/mysqld.sock in ~/.phpenv/versions/5.3.29/etc/php.ini. Otherwise, php says: PHP Warning: mysqli_connect(): [2002] No such file or directory (trying to connect via unix:///tmp/mysql.sock) in /home/yuri/_/1.php on line 2 PHP Stack trace: PHP 1. {main}() /home/yuri/_/1.php:0 PHP 2. mysql_connect() /home/yuri/_/1.php:2 PHP Warning: mysqli_connect(): (HY000/2002): No such file or directory in /home/yuri/_/1.php on line 2 PHP Stack trace: PHP 1. {main}() /home/yuri/_/1.php:0 PHP 2. mysqli_connect() /home/yuri/_/1.php:2 bool(false) The script: <?php $r = mysqli_connect('localhost', '<USER>', '<PASS>'); var_dump($r); UPD The other way to fix this is to pass one more option to configure: PHP_BUILD_CONFIGURE_OPTS='--with-openssl=/usr/local/ssl --with-mysql-sock' phpenv install 5.3.29
building php-5.3 on arch linux