AFAIK make was the beginning, or grandfather, of build tools. It needs (or needed) ./configure to be run correctly in order to produce a binary, package, etc. I just commented on Node. I do remember using one tool which had a build progress bar on the CLI and performed perfectly, but I don't remember its name. Can somebody please help me remember? I am looking for a modern build tool which has a minimal set of dependencies (in Debian) and has the above features as well.
A number of build tools offer progress information. CMake in particular produces makefiles which, by default, print out their progress:

Scanning dependencies of target mikmod-static
[  1%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_AF.c.o
[  1%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_aiff.c.o
[  2%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_ahi.c.o
[  3%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_aix.c.o
[  3%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_alsa.c.o
[  4%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_dart.c.o
[  5%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_ds.c.o
[  5%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_esd.c.o
[  6%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_gp32.c.o
[  7%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_hp.c.o
[  7%] Building C object CMakeFiles/mikmod-static.dir/drivers/drv_mac.c.o

The Meson build system, or rather the Ninja build tool commonly used with it, also prints progress information, in a much more compact form (it shows a single line during the build, unless something goes wrong).
What is the build-tool alternative to 'make' that has a progress bar?
I am trying to compile the Linux kernel 5.15.64 but it fails. I have the config and use make -j4 && sudo make modules_install -j4, but this is the error I get:

make[1]: *** [kernel/Makefile:160: kernel/kheaders_data.tar.xz] Error 127
make: *** [Makefile:1896: kernel] Error 2

What is going wrong in the process?
There is a bug report (https://bugs.gentoo.org/701678) with the same messages. It was caused by having CONFIG_IKHEADERS=m in the configuration while not having cpio available. Error 127 means a command could not be found, and here that command is cpio, which the kheaders build step needs.
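Since make reports Error 127 ("command not found"), a quick sanity check before rebuilding might look like the following sketch. The helper name is mine, not from the bug report:

```shell
# Check whether cpio (required when CONFIG_IKHEADERS is enabled) is available,
# and say what to do if it is not.
check_cpio() {
    if command -v cpio >/dev/null 2>&1; then
        echo "cpio present"
    else
        echo "cpio missing: install it (e.g. apt install cpio) or disable CONFIG_IKHEADERS"
    fi
}
check_cpio
```

Alternatively, running scripts/config --disable IKHEADERS inside the kernel tree turns the option off so cpio is not needed at all.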
Error when trying to compile kernel 5.15
I'm struggling to compile mkclean on Ubuntu 16.04. I download the file and extract it, but when I run ./configure I get this:

./configure: 2: ./configure: %%BEGIN: not found
./configure: 3: ./configure: SCRIPT: not found
./configure: 4: ./configure: %%END: not found
make: *** corec/tools/coremake: No such file or directory.  Stop.
mv: cannot stat 'corec/tools/coremake/coremake': No such file or directory
./configure: 1: ./configure: corec/tools/coremake/system_output.sh: not found
Running ./coremake
./configure: 11: ./configure: ./coremake: not found
Now you can run make -C %(PROJECT_NAME) or gmake -C %(PROJECT_NAME)

Any help? I can only find old results, and they don't work.
mkclean now uses CMake:

tar xf mkclean-0.9.0.tar.bz2
cd mkclean-0.9.0
mkdir build
cd build
cmake ..
make

In older versions of mkclean, using autoconf, the configure script needs to be processed before it can be used. You should run ./mkclean/configure.compiled from the parent directory instead, after converting its end-of-line characters (using fromdos from the tofrodos package):

fromdos mkclean/configure.compiled

The full build sequence, starting with the downloaded source code, is:

tar xf mkclean-0.8.10.tar.bz2
cd mkclean-0.8.10
fromdos mkclean/configure.compiled
./mkclean/configure.compiled
make -C mkclean

This gives me a release/gcc_linux_x64/mkclean binary.
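If fromdos isn't available, the same end-of-line conversion can be sketched with tr (assuming the only problem with the file is its CRLF line endings):

```shell
# Strip carriage returns (CRLF -> LF): a minimal stand-in for fromdos
strip_crlf() {
    tr -d '\r'
}
# e.g.: strip_crlf < mkclean/configure.compiled > configure.unix
```

Note that tr -d '\r' removes every carriage return, including any in the middle of a line, which is fine for a text-only script like this.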
Compile mkclean on ubuntu 16.04
Where do C# executables, running on an Ubuntu Linux 16.04 Lenovo ThinkStation desktop, whose source code DLLImports shared objects (.so files), look for those shared objects at runtime? Even if the shared object libxyz.so resides in the same subdirectory as the C# executable, I found that it is necessary to export LD_LIBRARY_PATH to ensure correct program behavior. Why is this the case? We noticed that during the installation of many third-party Linux software products, the installation programs or scripts manage to find libc.so.6 in the directory /usr/lib/x86_64-linux-gnu without requiring the customer to specify an LD_LIBRARY_PATH containing that directory. Why is that the case? Also, if we wish to run the C# executable as a point-and-click mono-service, how do we globally specify LD_LIBRARY_PATH, until the computer reboots, without resorting to opening a terminal? Is there a more elegant way than passing LD_LIBRARY_PATH as an envp argument to execle?
I'll try to answer all three parts of this question for you.

"[Why is it] necessary to export LD_LIBRARY_PATH to ensure correct C# executable program behavior" / "installation programs or scripts manage to find libc.so.6 ... without requiring the customer to specify an LD_LIBRARY_PATH"

Linked libraries are resolved from a known set of locations. Typically these are system directories, so that privileged code can use them safely (they can't be overwritten by users). Once you understand this, you realise that the set of known locations cannot include . (the current directory). You can see the set of known locations by examining the text file /etc/ld.so.conf; if you edit it you must run ldconfig to update its corresponding binary cache. The set of known locations can be extended per application with LD_LIBRARY_PATH, which takes a colon-separated list of directories to search. The dynamic loader ignores it for privileged (setuid/setgid) programs, though - so you can't use it to cheat passwd or sudo, for example.

"how do we globally specify LD_LIBRARY_PATH, until the computer reboots [...] Is there a more elegant way than passing LD_LIBRARY_PATH as an envp argument to execle?"

Setting it globally would be a really bad idea: every program you run would then search those directories first, and the resulting library conflicts can be very hard to debug. I don't see why you can't set LD_LIBRARY_PATH in a shell script per application, though. You don't need to start it as a "terminal program", as it doesn't write anything significant to a terminal:

#!/bin/bash
#
APP_DIR=/path/to/application
APP_DIR_LIB="$APP_DIR/lib"
APP_DIR_EXE="$APP_DIR/someprogram.exe"

export LD_LIBRARY_PATH="$APP_DIR_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$APP_DIR_EXE" "$@"

echo "Ooops" >&2
exit 1

I've used "$@" so that any arguments passed to the script are applied to the executable itself. I don't know how you would start or stop a mono service, so I can't help you with the specifics of that.
If you update your question I'll see if there's anything I can add here.
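The trickiest part of such a wrapper is the quoting when prepending to an existing (possibly empty) LD_LIBRARY_PATH. The colon-joining idiom can be isolated as a small helper (a sketch; the function name is mine, not part of the answer above):

```shell
# Join a new directory onto an existing colon-separated list,
# adding the colon only when the existing list is non-empty.
join_path() {
    # $1 = new directory, $2 = existing list (may be empty)
    printf '%s%s' "$1" "${2:+:$2}"
}
# e.g.: export LD_LIBRARY_PATH=$(join_path "$APP_DIR_LIB" "$LD_LIBRARY_PATH")
```

The ${2:+:$2} expansion is what prevents a stray leading or trailing colon, which the dynamic loader would interpret as "also search the current directory".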
Where do C# executables running on an Ubuntu Linux 16.04 desktop which use source code that DLLImport's shared objects look for them at runtime? [closed]
I've been trying to cross-compile Squid 3.5.7 for an ARM Cortex-A8 (Linux). I downloaded it from http://www.squid-cache.org/Versions/v3/3.5/ and I have arm-linux-gnueabi-gcc and arm-linux-gnueabi-g++.

tar -zxvf squid-3.5.7.tar.gz
cd squid-3.5.7
./configure --prefix=/usr/local/squid

After ./configure --prefix=/usr/local/squid I have this Makefile: http://wklej.se/makefile

make all
make install

Next I copy the folders /usr/local/squid and ~/squid-3.5.7 to an SD card. When I try to run ./squid -z from the SD card on the ARM board, I have a problem:

root@am335x:/# ls
bin    etc       lib      mnt   srv  usr
boot   findHelp  linuxrc  proc  sys  var
dev    home      media    sbin  tmp
root@am335x:/media/mmcblk0/squid/sbin# ls
squid
root@am335x:/media/mmcblk0/squid/sbin# ./squid -z
./squid: line 20: syntax error: ")" unexpected
root@am335x:/media/mmcblk0/squid/sbin# ./squid
./squid: line 20: syntax error: ")" unexpected
root@am335x:/media/mmcblk0/squid/sbin#

I don't know what to do :/
You're not actually cross-compiling; to cross-compile you need to tell ./configure about your target architecture:

./configure --prefix=/usr/local --host=arm-linux-gnueabi

You should then get Makefiles which use arm-linux-gnueabi-gcc, and a resulting squid binary which is appropriate for your ARM device. (Assuming you have all the necessary libraries, of course.) The "syntax error" you saw appears because the board cannot execute the incompatible x86 binary, so its shell falls back to interpreting the file as a script.
Squid cross compile
I'm trying to build the libesedb package so I can read .edb files. I went to http://code.google.com/p/libesedb/wiki/Building and found the following:

tar xfv libesedb-alpha-<version>.tar.gz
cd libesedb-<version>
./configure
make

So I extracted the files into the folder libesedb-20120102, cd'd into it and ran ./configure, which returned the following:

si@siMint ~/Desktop/libesedb-20120102 $ ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... no
configure: error: in `/home/si/Desktop/libesedb-20120102':
configure: error: C compiler cannot create executables
See `config.log' for more details.

I ran make anyway, and got this:

si@siMint ~/Desktop/libesedb-20120102 $ make
make: *** No targets specified and no makefile found.  Stop.

I'm using Linux Mint (16, Cinnamon). Can anyone tell me what's going wrong?
The ./configure script reports an error from your C compiler: one of its compiler tests fails with your gcc. I tried the same on my machine (Debian 6) with gcc 4.4.5, and the ./configure script succeeded without an error. So check your gcc version with gcc --version and install another version if necessary. The official repositories always carry several versions of gcc; list them with:

apt-cache search gcc | grep ^gcc

I recommend version 4.4, because it worked for me. After installing it, change the gcc symlink:

ls -la /usr/bin/gcc
lrwxrwxrwx 1 root root 7 Dec 29 18:19 /usr/bin/gcc -> gcc-4.4

and check again with gcc --version. ./configure and make errors often come down to an unsuitable gcc version.
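Whatever the root cause turns out to be, the exact failing compiler invocation is recorded in config.log, so inspecting it narrows things down faster than guessing at versions. A generic sketch (the helper name is mine):

```shell
# Show the context around configure's "cannot create executables" failure.
# config.log is written by ./configure in the build directory.
show_configure_failure() {
    grep -n -E -B 2 -A 4 'cannot create executables|error:' "$1"
}
# e.g.: show_configure_failure config.log
```

The lines just before the first error: in that output are the actual gcc command line and diagnostic that configure swallowed.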
How to install libesedb? Build error
I have double-checked that I have every dependency to build X and that they are all at the latest version. I'm not even sure where to begin with these errors, so I was hoping someone here could help me. I'm compiling on WSL2, if it matters, using the build.sh method. Here's the command I'm using:

./util/modular/build.sh --clone $HOME/Xbuild

Here's the error:

/home/mason/lx-os/cross-tools/lib/gcc/x86_64-unknown-linux-gnu/9.3.0/../../../../x86_64-unknown-linux-gnu/bin/ld: warning: libXau.so.6, needed by /home/mason/Xbuild/lib/libxcb.so, not found (try using -rpath or -rpath-link)
/home/mason/lx-os/cross-tools/lib/gcc/x86_64-unknown-linux-gnu/9.3.0/../../../../x86_64-unknown-linux-gnu/bin/ld: warning: libXdmcp.so.6, needed by /home/mason/Xbuild/lib/libxcb.so, not found (try using -rpath or -rpath-link)
/home/mason/lx-os/cross-tools/lib/gcc/x86_64-unknown-linux-gnu/9.3.0/../../../../x86_64-unknown-linux-gnu/bin/ld: warning: libxcb-shm.so.0, needed by ../image/.libs/libxcb-image.so, not found (try using -rpath or -rpath-link)
/home/mason/lx-os/cross-tools/lib/gcc/x86_64-unknown-linux-gnu/9.3.0/../../../../x86_64-unknown-linux-gnu/bin/ld: /home/mason/Xbuild/lib/libxcb.so: undefined reference to `XauGetBestAuthByAddr'
/home/mason/lx-os/cross-tools/lib/gcc/x86_64-unknown-linux-gnu/9.3.0/../../../../x86_64-unknown-linux-gnu/bin/ld: ../image/.libs/libxcb-image.so: undefined reference to `xcb_shm_put_image'
/home/mason/lx-os/cross-tools/lib/gcc/x86_64-unknown-linux-gnu/9.3.0/../../../../x86_64-unknown-linux-gnu/bin/ld: /home/mason/Xbuild/lib/libxcb.so: undefined reference to `XauDisposeAuth'
/home/mason/lx-os/cross-tools/lib/gcc/x86_64-unknown-linux-gnu/9.3.0/../../../../x86_64-unknown-linux-gnu/bin/ld: ../image/.libs/libxcb-image.so: undefined reference to `xcb_shm_get_image'
/home/mason/lx-os/cross-tools/lib/gcc/x86_64-unknown-linux-gnu/9.3.0/../../../../x86_64-unknown-linux-gnu/bin/ld: /home/mason/Xbuild/lib/libxcb.so: undefined reference to `XdmcpWrap'
/home/mason/lx-os/cross-tools/lib/gcc/x86_64-unknown-linux-gnu/9.3.0/../../../../x86_64-unknown-linux-gnu/bin/ld: ../image/.libs/libxcb-image.so: undefined reference to `xcb_shm_get_image_reply'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:648: test_xcb_image] Error 1

I thought that running sudo ldconfig would fix it, but it did not.
Running the build script as root solved it.
Trying to compile the X Window System and getting errors I don't understand
I am trying to compile, on FreeBSD 12.1, the game and database source of the game Metin2. I am using gcc 6.5 and GNU gmake with the -j20 option. All the files compile, but at the last second I get this error:

linking ../game
ld: error: undefined symbol: ERR_free_strings
>>> referenced by vio.c
>>>               vio.c.o:(vio_end) in archive ../../../extern/mysql/lib/libmysqlclient.a
ld: error: undefined symbol: EVP_CIPHER_CTX_cleanup
>>> referenced by my_aes_openssl.cc
>>>               my_aes_openssl.cc.o:(my_aes_decrypt) in archive ../../../extern/mysql/lib/libmysqlclient.a
c++: error: linker command failed with exit code 1 (use -v to see invocation)
gmake: *** [Makefile:228: ../game] Error 1

This is my Makefile:

CC = c++

INCDIR =
LIBDIR =
BINDIR = ..
OBJDIR = .obj
$(shell if [ ! -d $(OBJDIR) ]; then mkdir $(OBJDIR); fi)

### CFLAGS
CFLAGS = -w -O3 -ggdb -g -gdwarf -std=c++14 -pipe -mtune=i386 -fstack-protector -m32 -static -D_THREAD_SAFE
### END

### LIBS FROM ../EXTERN and LOCAL/INCLUDE
# boost (install boost over the shell, e.g. pkg install boost-libs)
INCDIR += -I/usr/local/include
# GSL: Guideline Support Library
INCDIR += -I../../../extern/gsl/include
# cryptopp (if cryptopp doesn't work, download the cryptopp 5.6.5 source from
# https://github.com/weidai11/cryptopp/releases/tag/CRYPTOPP_5_6_5 and recompile it with your actual compiler (g++/c++))
INCDIR += -I../../../extern/cryptopp
LIBDIR += -L../../../extern/cryptopp/lib
LIBS += -lcryptopp
# devil (install devIL over the shell, e.g. pkg install devil)
INCDIR += -I../../../local/include
LIBDIR += -L/usr/local/lib
LIBS += -lil -lpng -ltiff -lmng -llcms -ljpeg -ljbig -llzma
# minilzo
INCDIR += -I../../../extern/minilzo
LIBDIR += -L../../../extern/minilzo/lib
LIBS += -lminilzo -lmd
# mysql
INCDIR += -I/usr/local/include/mysql
LIBDIR += -L../../../extern/mysql/lib
LIBS += -lmysqlclient -lz -pthread -lm -lssl -lcrypto
### END

### LIBS FROM ../SOURCE/LIB
# libgame
INCDIR += -I../../lib/libgame
LIBDIR += -L../../lib/libgame/lib
LIBS += -lgame
# libpoly
INCDIR += -I../../lib/libpoly
LIBDIR += -L../../lib/libpoly/lib
LIBS += -lpoly
# libsql
INCDIR += -I../../lib/libsql
LIBDIR += -L../../lib/libsql/lib
LIBS += -lsql
# libthecore
INCDIR += -I../../lib/libthecore
LIBDIR += -L../../lib/libthecore/lib
LIBS += -lthecore
# lua
INCDIR += -I../../../extern/lua/lua
LIBDIR += -L../../../extern/lua/lib
LIBS += -llua
### END

CPP = abuse.cpp activity.cpp affect.cpp ani.cpp arena.cpp banword.cpp \
	battle.cpp BattleArena.cpp blend_item.cpp BlueDragon.cpp BlueDragon_Binder.cpp \
	buff_on_attributes.cpp buffer_manager.cpp building.cpp char.cpp char_affect.cpp \
	char_battle.cpp char_change_empire.cpp char_dragonsoul.cpp char_gaya.cpp char_horse.cpp \
	char_item.cpp char_manager.cpp char_quickslot.cpp char_resist.cpp char_skill.cpp \
	char_state.cpp cmd.cpp cmd_emotion.cpp cmd_general.cpp cmd_gm.cpp \
	cmd_oxevent.cpp config.cpp constants.cpp crc32.cpp cube.cpp \
	damage_top.cpp db.cpp desc.cpp desc_client.cpp desc_manager.cpp \
	desc_p2p.cpp dragon_soul_table.cpp DragonLair.cpp DragonSoul.cpp dungeon.cpp \
	empire_text_convert.cpp entity.cpp entity_view.cpp event.cpp event_queue.cpp \
	exchange.cpp file_loader.cpp fishing.cpp FSM.cpp gm.cpp \
	group_text_parse_tree.cpp guild.cpp guild_manager.cpp guild_war.cpp horse_rider.cpp \
	horsename_manager.cpp input.cpp input_api.cpp input_auth.cpp input_db.cpp \
	input_login.cpp input_main.cpp input_p2p.cpp inventory.cpp item.cpp \
	item_addon.cpp item_attribute.cpp item_manager.cpp item_manager_idrange.cpp item_manager_read_tables.cpp \
	locale.cpp locale_service.cpp log.cpp login_data.cpp lzo_manager.cpp \
	main.cpp map_location.cpp MarkConvert.cpp MarkImage.cpp MarkManager.cpp \
	marriage.cpp MeleyLair.cpp messenger_manager.cpp mining.cpp mob_manager.cpp \
	motion.cpp MountSystem.cpp nearby_scanner.cpp New_PetSystem.cpp OXEvent.cpp \
	p2p.cpp packet_info.cpp party.cpp PetSystem.cpp polymorph.cpp \
	priv_manager.cpp pvp.cpp questevent.cpp questlua.cpp questlua_affect.cpp \
	questlua_arena.cpp questlua_battleArena.cpp questlua_building.cpp questlua_danceevent.cpp questlua_dragonlair.cpp \
	questlua_dragonsoul.cpp questlua_dungeon.cpp questlua_game.cpp questlua_global.cpp questlua_guild.cpp \
	questlua_horse.cpp questlua_item.cpp questlua_marriage.cpp questlua_MeleyLair.cpp questlua_npc.cpp \
	questlua_oxevent.cpp questlua_party.cpp questlua_pc.cpp questlua_pet.cpp questlua_petnew.cpp \
	questlua_quest.cpp questlua_support.cpp questlua_target.cpp questlua_TempleOchao.cpp questmanager.cpp \
	questnpc.cpp questpc.cpp reborn.cpp refine.cpp regen.cpp \
	safebox.cpp sectree.cpp sectree_manager.cpp shop.cpp shop_manager.cpp \
	shopEx.cpp skill.cpp skill_power.cpp snow_flake_flr.cpp start_position.cpp \
	SupportSystem.cpp target.cpp TempleOchao.cpp text_file_loader.cpp trigger.cpp \
	utils.cpp vector.cpp war_map.cpp wedding.cpp whisper_admin.cpp \
	cipher.cpp

CPPOBJS = $(CPP:%.cpp=$(OBJDIR)/%.o)

GAME_TARGET = $(BINDIR)/game

default: $(GAME_TARGET)

$(OBJDIR)/%.o: %.cpp
	@echo -e "\033[0;32m [OK] \033[0m" $<
	@$(CC) $(CFLAGS) $(INCDIR) -c $< -o $@

$(GAME_TARGET): $(CPPOBJS)
	@echo linking $(GAME_TARGET)
	@$(CC) $(CFLAGS) $(LIBDIR) $(CPPOBJS) $(LIBS) -o $(GAME_TARGET)

clean:
	@rm -f $(CPPOBJS)
	@rm -f $(BINDIR)/game* $(BINDIR)/conv

tag:
	ctags *.cpp *.h
Your OpenSSL version is too new for your code: these functions were removed in OpenSSL 1.1.0. According to the changelog:

EVP_MD_CTX_cleanup(), EVP_CIPHER_CTX_cleanup() and HMAC_CTX_cleanup() were removed. HMAC_CTX_reset() and EVP_MD_CTX_reset() should be called instead to reinitialise an already created structure.

Either ask your upstream for a new version of the program, or try to change the code yourself.
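If you do patch code yourself, the rename is mechanical. A hedged sketch (the file name is illustrative; in this particular build the unresolved references actually come from the prebuilt libmysqlclient.a, so relinking against a MySQL client library built for your OpenSSL may be needed instead of source edits):

```shell
# Map a removed OpenSSL 1.0.x name to its 1.1.x replacement in a source file.
# EVP_CIPHER_CTX_cleanup() was replaced by EVP_CIPHER_CTX_reset();
# ERR_free_strings() became unnecessary in 1.1.0 and can usually be deleted.
port_openssl_names() {
    sed -e 's/EVP_CIPHER_CTX_cleanup/EVP_CIPHER_CTX_reset/g' "$1"
}
# e.g.: port_openssl_names my_aes_openssl.cc > my_aes_openssl.ported.cc
```

This only works when you control the source of the object that references the symbol; it cannot fix a precompiled archive.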
gmake: *** [Makefile:228: ../game] Error 1
In trying to compile an old version of libvirt (to see if I can get some old patches up to date; see https://www.redhat.com/archives/libvir-list/2014-March/msg00106.html), I get the error:

getopt.h:85:29: fatal error: getopt-pfx-core.h: No such file or directory

when I run make (after running ./autogen.sh). This is at libvirt commit aa50a5c. Searching for this error ("fatal error: getopt-pfx-core.h: No such file or directory") turns up very few results, but it does not seem specific to libvirt, which makes sense since the error appears to come from the bundled getopt replacement (gnulib) rather than from libvirt itself.
Make sure that you are building in a clean tree. In this case, I had files left over from a build of a much newer commit. Running:

git reset --hard HEAD
git clean -fdx
git clean -fdX

(warning: this deletes anything in the working tree that is not in git!) and then doing the build again worked.
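Because git clean -fdx is destructive, a dry run first is cheap insurance. A sketch (the wrapper name is mine; -n makes git only print what it would remove):

```shell
# Preview what `git clean -fdx` would delete, without deleting anything.
preview_clean() {
    git clean -ndx
}
# Run the real `git clean -fdx` only once the preview looks right.
```

Each line of the preview starts with "Would remove", so anything precious shows up before it is gone.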
Getting the error "getopt.h:85:29: fatal error: getopt-pfx-core.h: No such file or directory" while trying to compile libvirt [closed]
I want to monitor a folder on Ubuntu and, when a .cpp file arrives inside it, automatically compile the program. Can anyone help me solve this problem, please?
For such a simple case, real monitoring (inotify) is probably overkill.

#! /bin/bash

file_path='/path/to/file.cpp'

while true; do
    if [ -f "$file_path" ]; then
        : do something
        exit 0
    else
        sleep 1
    fi
done
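If you do want event-driven monitoring instead of polling, inotifywait from the inotify-tools package (assumed to be installed) can drive the same loop. A sketch; the filename filter is the only logic worth isolating:

```shell
# Decide whether an arriving file should trigger a compile.
is_cpp() {
    case "$1" in
        *.cpp) return 0 ;;
        *)     return 1 ;;
    esac
}

# Event-driven variant (requires inotify-tools; directory path is illustrative):
#   inotifywait -m -e close_write --format '%f' /path/to/dir |
#   while read -r f; do
#       is_cpp "$f" && g++ "/path/to/dir/$f" -o "/path/to/dir/${f%.cpp}"
#   done
```

Watching close_write rather than create avoids compiling a file that is still being copied into the directory.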
Monitoring folder and compile [closed]
I don't actually know where to start or what keywords to use for searching, but can we simply write a program in C++ that runs on Linux as well as Windows? I guess we are talking about binary files. Does it change the style of programming, or do I just have to compile it in a different way? If I open Visual Studio on my Windows machine and create a simple C++ application which shows the string "Hi. You just executed me.", does it run on both machines? Is there something I have to know about?
I often use wxWidgets. It places a layer in the code that you work with, which at compile time is replaced by references to Windows, GTK, and so on. It does change the way you code: instead of using a GTK class or windows.h directly, you use wxWidgets classes, which get replaced with the appropriate native ones at compile time. This gives the "real feel", as the end result is an application that is 100% native, using native classes and a native GUI. Other options are things like Mono, Qt, Java... These all aim for standardization through an additional layer carried over to the host OS: basically they provide a platform that runs in any environment, and you run your application on that platform. Going back to wxWidgets and code style: you will find there are some things wxWidgets doesn't have a wrapper class for, for example serial COM ports, which Linux and Windows handle very differently. For those you'd have to write the code twice (one version for Windows and one for Linux).
C++ Apps ready for Linux as well as Windows? [closed]
I've successfully been able to configure the latest Firefox (source) without errors. All the required dependencies are in place (i.e. GCC 4.9.2 via devtoolset-3, Python 2.7, Yasm, libffi 3.2.1, and so on). When I run ./mach build it also successfully configures and starts making the binaries... then after about 24 minutes it chokes on:

24:40.15 /home/osboxes/firefox-50.0b7/gfx/thebes/gfxFontconfigFonts.cpp: In member function ‘virtual already_AddRefed<gfxFont> gfxPangoFontGroup::FindFontForChar(uint32_t, uint32_t, uint32_t, gfxFontGroup::Script, gfxFont*, uint8_t*)’:
24:40.15 /home/osboxes/firefox-50.0b7/gfx/thebes/gfxFontconfigFonts.cpp:1628:66: error: ‘g_unicode_script_from_iso15924’ was not declared in this scope
24:40.15          (const PangoScript)g_unicode_script_from_iso15924(scriptTag);
24:40.15                                                                      ^

The pertinent part being: ‘g_unicode_script_from_iso15924’ was not declared in this scope. I searched online for this error first, and the only reference to it is a fixed bug in v52 (ref), which isn't even in the source repo at this time, so this isn't a bug. How do I compile Firefox 50 for a system using glibc 2.12?

Solved: I discovered that g_unicode_script_from_iso15924 is a new symbol in GLib 2.30 (ref). GLib needs to be updated to at least version 2.30.
That's not a symbol in glibc, it's a symbol in GLib. If you build and install GLib 2.30 or later, you should be able to build Firefox 50.
Compiling Firefox 50 under GLibc 2.12
The error message I am getting when running ntp-keygen (or ntpd) on my ARM device is:

./ntp-keygen: relocation error: ./ntp-keygen: symbol DSA_generate_parameters_ex, version OPENSSL_1_1_0 not defined in file libcrypto.so.1.1 with link time reference

The script to configure, build and install OpenSSL is as follows:

#!/bin/bash

# Build dependencies, if any.
depends() {
    cd $buildDir
    if [ "x${PERL}" = "x" ]; then
        export PERL=`which perl`
    fi
    return $?
}

# Configure software package.
configure() {
    depends
    cd $packageDir
    # Available ciphers:
    #   DES, AES, CAMELLIA, MD2, MDC2, MD4, MD5, HMAC, SHA, RIPEMD, WHIRLPOOL,
    #   RC4, RC5, RC2, IDEA, SEED, BF(blowfish), CAST, RSA, DSA, ECDSA, ECDH
    # We use:
    #   DES, AES, MD4, MD5, HMAC, SHA, RSA, ECDSA, ECDH
    ./Configure shared threads --prefix=$PWD/install-arm linux-armv4
    return $?
}

# Build the software package.
compile() {
    local mmx_machine_type=`echo $MMX_MACHINE_TYPE | tr '[:upper:]' '[:lower:]'`
    depends
    cd $packageDir
    export CFLAGS="$CFLAGS -DCONFIG_MACHINE_${MMX_MACHINE_TYPE}"
    configure
    if [ "$?" -ne "0" ]; then return 1; fi
    make CC=$CC AR=$AR NM=$NM RANLIB=$RANLIB
    if [ "$?" -ne "0" ]; then return 1; fi
    make CC=$CC AR=$AR NM=$NM RANLIB=$RANLIB install
    if [ "$?" -ne "0" ]; then return 1; fi
    return 0
}

# Clean-up.
clean() {
    depends
    cd $packageDir
    rm -rf install-arm/*
    rm -rf install-i386/*
    make clean
}

# Install to rootfs the necessary pieces (e.g. directories, links...)
install() {
    targetPath=${buildDir}/.tmp.rootfs/rootfs
    local openssl="bin/openssl"
    local libssl="lib/libssl.so.1.1"
    local libcrypto="lib/libcrypto.so.1.1"
    cd ${packageDir}/install-arm
    if [ -f ${openssl} -a -f ${libssl} -a -f ${libcrypto} ]
    then
        cp ${openssl} ${targetPath}/sbin/
        cp ${libssl} ${targetPath}/lib/
        cp ${libcrypto} ${targetPath}/lib/
    else
        printf "  $package not built.\n"
    fi
    return 0
}

For the ntp package, the only reference to OpenSSL is in the configure options. Below is the full ntp package configuration:

#!/bin/sh

# Configure software package.
configure() {
    cd $packageDir
    ./bootstrap
    ./configure --host=arm-linux --with-yielding-select=yes --with-crypto=openssl \
        --with-openssl-incdir=$OPENSSL_DIR/install-arm/include/ \
        --with-openssl-libdir=$OPENSSL_DIR/install-arm/lib/
    return $?
}

# Build the software package.
compile() {
    cd $packageDir
    # make PROJECT_NAME=$project
    make
    if [ "$?" -ne "0" ]; then return 1; fi
    return 0
}

# Clean-up.
clean() {
    cd $packageDir
    make clean
}

# Install to rootfs the necessary pieces (e.g. directories, links...)
install() {
    cd $packageDir
    sourcePath=.
    targetPath=$buildDir/.tmp.rootfs/rootfs
    if [[ -f $sourcePath/ntpclient ]]; then
        cp $sourcePath/$package $targetPath/sbin/
    else
        printf "  $package not built.\n"
        return 1
    fi
    return 0
}
Based on the error message you're getting, it looks like the ntp-keygen command is trying to use a function named DSA_generate_parameters_ex from the OpenSSL library, but this function is not defined in the version of the library that you have installed. This could be because you are using an older version of OpenSSL that does not include this function, or because there is an issue with the way the library is being linked to ntp-keygen. To fix this error, you could try updating to a newer version of OpenSSL that includes the DSA_generate_parameters_ex function, or you could try re-building the OpenSSL library and ntp-keygen to ensure that they are linked correctly. You can also try using the ldd command to check the dependencies of the ntp-keygen binary and make sure that it is linked to the correct version of the OpenSSL library.
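Two checks usually settle this kind of relocation error. The parsing half can be sketched, and the library half is a one-liner with nm (the binary and library paths below are illustrative, taken from the error message):

```shell
# Extract the undefined symbol name from a loader relocation error line.
missing_symbol() {
    sed -n 's/.*symbol \([A-Za-z0-9_]*\),.*/\1/p'
}

# Then, on the device, check which library the binary resolves and whether
# that library actually exports the symbol:
#   ldd ./ntp-keygen | grep libcrypto
#   nm -D /lib/libcrypto.so.1.1 | grep DSA_generate_parameters_ex
```

If nm -D shows nothing, the libcrypto.so.1.1 on the device is not the one you built; an older copy earlier in the search path is shadowing it.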
Relocation error when running NTPv4 (cross-compile) on ARM device
I want to know whether there are ways on Linux to compile C, C++ and Python code such that it cannot be reverse engineered. I have heard there are some ways to do this on Windows, but I am working on Linux. I want to compile my code securely, as a released or final version. UPDATE: At the very least I want to make it hard for ordinary users to disassemble. I am using GCC for C and C++; I would also be thankful if you could suggest the best compiler for Python.
Preamble: you probably don't want to invest time into preventing people from disassembling your code. Instead, focus on making your project better, so that once your competitors have figured out how you did feature X, your software already has feature Y. The reasoning is simple: if you have a dull project, then nobody will care to disassemble it, and you will have invested all that time for nought. On the other hand, if your product is cool, an armada of hackers will spend time figuring out how you did it. There is little you can do about that (it happens to major players, like Microsoft, as well), but those hackers will always be one step behind: reconstructing a program from assembler is not trivial. So make sure that you keep moving, and they will stay behind.

That said, make sure that your code does not contain debugging symbols. With gcc this basically means turning off the -g flag (most likely this is exactly what Visual Studio's "Release" builds do for MSVC). You might also think about statically linking external libraries, in order to keep code injection via the dynamic linker minimal. Finally, do not trust any vendor claiming that providing a "Release" build will protect your binary in any way.
How to compile c, c++ and python code as "Released/Final" version? [closed]
So I'm compiling the 6.0.3 kernel on Debian 11, and I've been given the task of producing the smallest kernel possible that boots and has an Internet connection. I'm at a point where I've compiled the kernel 89 times in total, and my kernel has 599 static modules and 0 loadable modules. I'm using make nconfig, and I've searched high and low for the section that disables the GUI, but I can't find it. My OS still boots with a GUI, and I want to disable that, because I'm sure I can remove a lot of modules that way and make my kernel even smaller. Does somebody know which section of the menu has this option?

EDIT: The task is finished and I've ended up with 533 static modules + 0 dynamic modules. I literally can't remove any more modules, the GUI is still working, and there is no section in the menu to disable it. You were all right, thanks!
To build a minimal kernel you should start from make tinyconfig instead of make nconfig. As for the GUI: it is userspace software, not part of the kernel, which is why there is no kernel-config section for it. To disable the graphical interface, use:

sudo systemctl set-default multi-user.target

and to revert:

sudo systemctl set-default graphical.target

But this doesn't make the kernel itself any smaller.
How to disable GUI when compiling the linux kernel?
I don't understand why we Linux users have to compile source code in order to install apps. Why can't it be like the Windows platform, where everything is ready to use as binary packages? Please don't misunderstand: I'm not complaining about having to compile to run. In fact I love it; it helps me a lot in practicing with the command line and understanding the programming process more deeply. But my friends and my family don't think so; their point is that they aren't programmers. It's true that we have distros such as Ubuntu and its family where installing apps is easy, but not every app found on the internet can be installed that way, even on Ubuntu or Mint. So, why do we have to compile source code to install an app? (A bonus answer on "What is the benefit of compiling source code rather than using a binary package?" would be very kind.)
As @ivanivan answered, very few distributions require you to compile from source (and Gentoo automates the process). Traditionally, the main reason source was preferred was the extreme differences in Unix flavour and architecture between systems (System V vs. BSD, or MIPS vs. Intel, for example), but the biggest reason I have had to compile from scratch is that a program I want to install is not included in the repository, or there is a patch to fix something or add a specific feature. Even when a program isn't in a repository, there are often .deb or .rpm packages that you can download and install on most distributions: here's an example for Java. The distribution will then do the hard work of tracking down dependencies (and updating them if necessary). The main difference between installing these packages on Linux and on Windows is that the installer is generally separate from the package on Linux, and functionally self-contained on Windows (even when they use the Windows installation system; MSI installers use Windows' installer directly).
Why users have to compile source code to install app on Linux? [closed]
According to installation guides, having executed on my Debian 12: sudo apt-get install python3 python3-pip python3-setuptools python3-wheel ninja-build sudo apt install build-essential meson should be installed on my system. And if I do a sudo updatedb + locate meson | xargs -I {} dirname {} | sort | uniq I have a lot of outputs: /data/docker/overlay2/a7ad1e9d584e8675ba05ab724dbf033bd1d83d907756e429f0c2d40f1a919cec/diff/usr/share/mime/text /data/sauvegardes_par_rsync/home/lebihan/dev/apprentissageDev/python/python_pour_les_mathématiques/ch01/venv/lib/python3.9/site-packages/scipy /home/lebihan/Bureau/anaconda3/pkgs/fribidi-1.0.10-h7b6447c_0/info/recipe /home/lebihan/Bureau/anaconda3/pkgs/glib-2.68.1-h36276a3_0/info/recipe/patches /home/lebihan/dev/apprentissageDev/python/python_pour_les_mathématiques/ch01/venv/lib/python3.9/site-packages/scipy /home/lebihan/dev/Java/opensource/openapi-generator/samples/server/petstore/cpp-pistache/build/PISTACHE-prefix/src/PISTACHE /home/lebihan/.local/share/JetBrains/Toolbox/apps/clion/plugins /home/lebihan/.local/share/JetBrains/Toolbox/apps/clion/plugins/clion-meson-plugin /home/lebihan/.local/share/JetBrains/Toolbox/apps/clion/plugins/clion-meson-plugin/lib /home/lebihan/.mozilla/firefox/eolge1mk.default-esr/storage/default /home/lebihan/.mozilla/firefox/eolge1mk.default-esr/storage/default/https+++mesonbuild.com /home/lebihan/.steam/debian-installation/ubuntu12_64/steam-runtime-sniper/sniper_platform_0.20240307.80401/files/share/mime/text /snap/gnome-3-28-1804/194/usr/share/gtksourceview-3.0/language-specs /snap/gnome-3-28-1804/194/usr/share/mime/text /snap/gnome-3-28-1804/198/usr/share/gtksourceview-3.0/language-specs /snap/gnome-3-28-1804/198/usr/share/mime/text /snap/gnome-3-38-2004/140/usr/share/gtksourceview-3.0/language-specs /snap/gnome-3-38-2004/140/usr/share/gtksourceview-4/language-specs /snap/gnome-3-38-2004/140/usr/share/mime/text /snap/gnome-3-38-2004/143/usr/share/gtksourceview-3.0/language-specs 
/snap/gnome-3-38-2004/143/usr/share/gtksourceview-4/language-specs /snap/gnome-3-38-2004/143/usr/share/mime/text /usr/lib/python3/dist-packages/pygments/lexers /usr/lib/python3/dist-packages/pygments/lexers/__pycache__ /usr/pgadmin4/venv/lib/python3.11/site-packages/pygments/lexers /usr/share/doc/libdav1d-dev/examples /usr/share/gtksourceview-4/language-specs /usr/share/gtksourceview-5/language-specs /usr/share/mime/text /usr/share/vim/vim90/ftplugin /usr/share/vim/vim90/indent /usr/share/vim/vim90/syntax /usr/src/linux-headers-6.1.0-18-common/arch/arm/include/debug /usr/src/linux-headers-6.1.0-18-common/include/dt-bindings/clock /usr/src/linux-headers-6.1.0-18-common/include/dt-bindings/gpio /usr/src/linux-headers-6.1.0-18-common/include/dt-bindings/power /usr/src/linux-headers-6.1.0-18-common/include/dt-bindings/reset /usr/src/linux-headers-6.1.0-18-common/include/dt-bindings/sound /usr/src/linux-headers-6.1.0-18-common/include/linux/firmware /usr/src/linux-headers-6.1.0-18-common/include/linux/firmware/meson /usr/src/linux-headers-6.1.0-18-common/include/linux/soc/amlogic /usr/src/linux-headers-6.1.0-20-common/arch/arm/include/debug /usr/src/linux-headers-6.1.0-20-common/include/dt-bindings/clock /usr/src/linux-headers-6.1.0-20-common/include/dt-bindings/gpio /usr/src/linux-headers-6.1.0-20-common/include/dt-bindings/power /usr/src/linux-headers-6.1.0-20-common/include/dt-bindings/reset /usr/src/linux-headers-6.1.0-20-common/include/dt-bindings/sound /usr/src/linux-headers-6.1.0-20-common/include/linux/firmware /usr/src/linux-headers-6.1.0-20-common/include/linux/firmware/meson /usr/src/linux-headers-6.1.0-20-common/include/linux/soc/amlogic I have a path that looks normal: echo $PATH 
/home/lebihan/anaconda3/bin:/home/lebihan/.local/bin:/home/lebihan/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/snap/bin:/opt/apache-maven-3.9.4/bin:/opt/zeppelin-0.10.0-bin-all/bin:/opt/spark-3.3.0-bin-hadoop3/bin:/opt/gradle/gradle-6.9.1/bin:/opt/kafka_2.12-3.4.0/bin:/home/lebihan/.local/share/coursier//bin:/home/lebihan/.local/bin/bin:/usr/local/go/bin:/home/lebihan/.local/share/JetBrains/Toolbox/scripts:/home/lebihan/.local/share/coursier/bin However, my meson command is found nowhere... $meson bash: meson : commande introuvable ("command not found") Where could it be?
You need to install it: sudo apt install meson The packages you installed don’t depend on meson, so it wouldn’t have been installed as a dependency. For future reference, you can use apt-file to find packages providing a given command; in this instance: sudo apt install apt-file sudo apt update apt-file search bin/meson (The first two commands are only required if apt-file isn’t already installed.)
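As a quick sanity check after installing, `command -v` tells you whether (and where) a command is on `$PATH`. A small sketch — the install hint in the else branch assumes a Debian/Ubuntu system:

```shell
# report where meson lives, or how to get it if it is missing
if command -v meson >/dev/null 2>&1; then
    echo "meson found: $(command -v meson)"
else
    echo "meson not on PATH - install it with: sudo apt install meson"
fi
```

Unlike `locate`, `command -v` only consults `$PATH`, so it matches what the shell will actually run.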
Where is meson? build-essential or python3, python3-pip, python3-setuptools, python3-wheel, ninja-build packages are installed but it isn't here
1,450,588,996,000
I want to find exact source code of compiled program in Linux/Unix systems. for illustration: computer:/ username$ whereis ping /sbin/ping And the task is to find the source code of /sbin/ping.
The source code of a compiled binary may not be available on your system. On OpenBSD (which is not Linux), the source code for the complete base system (including the kernel and utilities like ping) is available over CVS. For a web-browsable OpenBSD repository, see https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/ The ping source is located in src/sbin/ping. The NetBSD project (again, not Linux) has a browsable CVS repository at http://cvsweb.netbsd.org/bsdweb.cgi/src/ The source for ping is located in src/sbin/ping in that tree, as for OpenBSD. The FreeBSD project (which is also not Linux) has a GitHub repository at https://github.com/freebsd/freebsd The source code for ping is located at sbin/ping in that tree. On these BSD systems, the source of the base utilities and kernel will only be available on the system if a user has checked out the respective repositories. (The Makefiles with the build instructions for) third-party tool packages/ports are kept in a separate repository for all three of these operating systems, and the source code is usually fetched from the main distribution site of the tool in question if one decides to compile the tool oneself and not use a ready-made binary package/port. See the documentation provided by the relevant Unix for how to go about using their package/port system. For Linux utility source code, you would have to first figure out what package the utility comes from, and then (if possible) use the package manager software to fetch the source code for the package. Alternatively, find where the source code is fetched from by the package maintainers when they create a binary package. This would be different depending on what Linux and package manager software you are using.
Where are stored sources of compiled programs? [closed]
1,450,588,996,000
I am requesting opinions on the expected and desired outcome of prog initialization, specifically loading of shared libraries for a program that I do not have source code for. All code delivered via RPMs. Suspect prog exhibits constant Recv-Q buffering on two TCP conns. Other end of TCP conns looks good. Suspect prog buffers 1000-10000 bytes almost constantly, rarely goes to zero. Host of suspect prog shows tcpActiveOpens.0 50,000 with tcpAttemptFails.0 at 47,000, with both incrementing continuously. Many other probable TCP issues. When ldd is run on suspect prog it returns a total of 42 libraries, with 24 "not found"; the other 18 resolve with a DIR and hexaddr. Put an strace wrapper around suspect prog and noted the many "-1 ENOENT" on every library, not just the ones noted "not found" by ldd. All libraries are eventually found and loaded into suspect prog's memory. Some have as many as 42 ENOENT before success. Contacted dev with findings and have been assured that when I ran ldd I needed to source their environment config file, which is supposed to run at prog launch and set all library paths. No comment on the ENOENT. Questions: When you have finished your code and compiled, do you validate with tools such as ldd? Should ldd always return 0, or are some or a lot of "not founds" not always an issue? And what about the ENOENTs? It seems to me that there should be zero errors if the code is compiled and running correctly.
ldd should always return OK, otherwise the program can't run. On the other hand, there are ways to control where the dynamic linker searches for shared libraries. Based on the reply you got from the program supplier, I assume they have delivered some kind of startup script that sets up the program environment so that the search paths are satisfied. If you don't run the program like it is intended to be run, ldd will very likely report errors. The ENOENT errors are perfectly normal when the library search path contains several directories. The dynamic linker tries to find what it is looking for by opening the requested file, and if it is not found ("No such file or directory"), it proceeds to the next directory in the search path. This applies not only to shared libraries, but also other types of files. For example, if a configuration file is optional, the program just ignores the error message and continues execution. Ps. I found the newspaper personals ad style of your question amusing.
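The probing behaviour is easy to reproduce by hand. This sketch mimics what the dynamic linker does for each library: try every directory in the search path in order, treating ENOENT simply as "move on to the next one" (the library name and directories here are made up for illustration):

```shell
# probe a hypothetical libfoo.so.1 the way ld.so walks its search path
for dir in /opt/app/lib /usr/local/lib /usr/lib; do
    if [ -e "$dir/libfoo.so.1" ]; then
        echo "loaded from $dir"
        break
    fi
    echo "$dir/libfoo.so.1: ENOENT, trying next directory"
done
```

With a long search path, one ENOENT per non-matching directory quickly adds up to the hundreds of failed opens you saw in strace, even though every library is ultimately found.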
Linux Prog has 24 Libs Fails LDD, and strace shows 692 "-1 ENOENT" during prog library reads
1,450,588,996,000
I try to compile the newest boost library (1.62.0) on a Linux system with kernel version 2.6.18-92.1.13.el5 (from uname -r), and 2016 intel c++ compiler using gcc 6.1.0. The new intel c++ compiler and gcc are installed at a sub directory of my home directory. I am using an old glibc probably as ancient as the kernel (ldd --version gives ldd (GNU libc) 2.5). I get the following error for the thread module: In file included from libs/log/src/event.cpp(31): /usr/include/linux/futex.h(96): error: identifier "u32" is undefined long do_futex(u32 *uaddr, int op, u32 val, unsigned long timeout, ^ That is the only error in the whole compilation. I cannot install new kernel on the computer because I don't have root access. Is it a good idea to install the newest Linux header? Will that allow me to install boost 1.62.0 without getting the error?
I found the following in the first reference: CentOS 5.2 ships with Boost 1.33.1 If you insist on this hackish approach you need to visit the Boost Archived Versions, and look for version 1.33.1. But notice the release date: Version 1.33.1 December 5th, 2006 12:00 GMT After downloading the version that was supposed to be installed via yum, build it in your home directory. Do not upgrade your GCC version. Due to the age of your system, the tools you're trying to install are constrained by the age of your archaic toolchain. In any system that has a package manager, the package manager should NEVER BE DISABLED. You should consider an OS upgrade. See the accepted answer here. You should not upgrade your header files until you upgrade your kernel, and you can't upgrade the kernel until yum is restored. References RPM spec for Boost (libboost) RPM on CentOS 5.2
Compile new boost library on linux with ancient kernel
1,450,588,996,000
I'm migrating from Windows to Linux and I've heard a lot about Linux and how you can do some really good programming under it. But I have no idea how to compile code written in C on Linux.
Compile your source file prog.c with: $ gcc prog.c this will generate an executable named a.out. Execute it with: $ ./a.out To specify the name of the executable during compilation: $ gcc prog.c -o prog execute with: $ ./prog gcc is also the C++ compiler. There are plenty command line options available, so it's worth getting to know the man page. Here is the manual for the most recent version of the compiler.
How to compile & execute C programs under Linux?
1,450,588,996,000
In terminal, I have compiled a source file named "file.C". I did this by typing g++ file.C -o file. Now, I'd like to know how can I use Unix commands to name the output as "helloworld.out"? Thank you in advance!
Given that $ g++ file.C -o file will create an executable called file, one would hopefully have been able to extrapolate that $ g++ file.C -o helloworld.out will create an executable called helloworld.out.
How can I name the output of a compiled source file in a certain way?
1,286,899,912,000
I found this question, but I'm sorry I don't quite understand the settings on the two variables ServerAliveInterval and ClientAliveInterval mentioned in the accepted response. If my local server is timing out, should I set this value to zero? Will it then never time out? Should I instead set it to 300 seconds or something? My question is simply, some of my connections time out when I suspend & then unsuspend my laptop with the response Write failed: Broken pipe and some don't. How can I correctly configure a local sshd so that they don't fail with a broken pipe?
ServerAliveInterval: number of seconds that the client will wait before sending a null packet to the server (to keep the connection alive). ClientAliveInterval: number of seconds that the server will wait before sending a null packet to the client (to keep the connection alive). Setting a value of 0 (the default) will disable these features so your connection could drop if it is idle for too long. ServerAliveInterval seems to be the most common strategy to keep a connection alive. To prevent the broken pipe problem, here is the ssh config I use in my .ssh/config file: Host myhostshortcut HostName myhost.com User barthelemy ServerAliveInterval 60 ServerAliveCountMax 10 The above setting will work in the following way, The client will wait idle for 60 seconds (ServerAliveInterval time) and, send a "no-op null packet" to the server and expect a response. If no response comes, then it will keep trying the above process till 10 (ServerAliveCountMax) times (600 seconds). If the server still doesn't respond, then the client disconnects the ssh connection. ClientAliveCountMax on the server side might also help. This is the limit of how long a client are allowed to stay unresponsive before being disconnected. The default value is 3, as in three ClientAliveInterval.
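If you want to try the settings on a single connection before committing them to your config file, they can also be passed with -o on the command line; the total silent window before disconnect is simply the product of the two values. A small sketch (the hostname is hypothetical):

```shell
# per-connection equivalent of the config above:
#   ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=10 myhostshortcut
# the effective timeout is ServerAliveInterval * ServerAliveCountMax:
interval=60
countmax=10
echo "client gives up after $((interval * countmax)) seconds without a server reply"
```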
What do options `ServerAliveInterval` and `ClientAliveInterval` in sshd_config do exactly?
1,286,899,912,000
I'm using OpenBox window manager without any desktop environment. xdg-open behaves strangely. It opens everything with firefox. $ xdg-settings --list Known properties: default-web-browser Default web browser I'm looking for a simple program; something like reading every *.desktop file in /usr/share/applications/ folder and automatically setting xdg settings.
You can install and use perl-file-mimeinfo in the extra repository to manage mimetypes. Example to open all .pdf files in apvlv: /usr/bin/vendor_perl/mimeopen -d $file.pdf or on other Linux distributions where mimeopen is NOT in /usr/bin/vendor_perl/ but is in one of the $PATH directories : mimeopen -d $file.pdf and then, at the prompt, enter the application: apvlv.
How to properly and easily configure `xdg-open` without any environment?
1,286,899,912,000
I'm using tmux 2.1 and tried to turn on mouse mode with set -g mouse on And it works fine, I can switch across tmux window splits by clicking the appropriate window. But the downside of this is that I cannot select text with the mouse. Here is how it looks: As you can see, the selection just becomes red while I keep pressing the mouse button and disappears when I release the button. Without mouse mode enabled, "selection with mouse" works completely fine. Is there some workaround to turn mouse mode on and still have the ability to select text?
If you press Shift while doing things with the mouse, that overrides the mouse protocol and lets you select/paste. It's documented in the xterm manual for instance, and most terminal emulators copy that behavior. Notes for OS X: In iTerm, use Option instead of Shift.  In Terminal.app, use Fn.
Tmux mouse-mode on does not allow to select text with mouse
1,286,899,912,000
The header looks like this: #!/bin/sh -e # # rc.local - executed at the end of each multiuser runlevel # # Make sure that the script will "exit 0" on success or any other # value on error. What is the reason for this file (it does not contain much), and what commands do you usually put in it? What is a "multiuser runlevel"? (I guess rc is "run commands"?)
A runlevel is a state of the system, indicating whether it is in the process of booting or rebooting or shutting down, or in single-user mode, or running normally. The traditional init program handles these actions by switching to the corresponding runlevel. Under Linux, the runlevels are by convention: S while booting, 0 while shutting down, 6 while rebooting, 1 in single-user mode and 2 through 5 in normal operation. Runlevels 2 through 5 are known as multiuser runlevels since they allow multiple users to log in, unlike runlevel 1 which is intended for only the system administrator. When the runlevel changes, init runs rc scripts (on systems with a traditional init — there are alternatives, such as Upstart and Systemd). These rc scripts typically start and stop system services, and are provided by the distribution. The script /etc/rc.local is for use by the system administrator. It is traditionally executed after all the normal system services are started, at the end of the process of switching to a multiuser runlevel. You might use it to start a custom service, for example a server that's installed in /usr/local. Most installations don't need /etc/rc.local, it's provided for the minority of cases where it's needed.
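For example, a custom /etc/rc.local might restore firewall rules and launch a locally installed server. The sketch below writes such a file to /tmp so it can be syntax-checked before installing it as /etc/rc.local — the daemon path and rules file are hypothetical, adapt them to your system:

```shell
# draft a custom rc.local in /tmp and check it before installing
cat > /tmp/rc.local <<'EOF'
#!/bin/sh -e
# restore saved firewall rules, then start a site-local service
iptables-restore < /etc/iptables.rules
/usr/local/bin/mydaemon &
exit 0
EOF
sh -n /tmp/rc.local && echo "syntax OK"
chmod +x /tmp/rc.local   # rc.local only runs at boot if it is executable
```

Because the script runs with -e, any failing command aborts it with a non-zero status, which is why the comment in the stock file insists on ending with exit 0 on success.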
Purpose and typical usage of /etc/rc.local
1,286,899,912,000
Occasionally, I need to check resources on several machines throughout our data centres for consolidation recommendations. I prefer htop primarily because of the interactive feel and the display. Is there a way to customise some settings to my setup for htop? For example, one thing I'd always like to have shown is the average CPU load. Important note: Setting this on specific boxes isn't feasible - I'm looking for a way to set this dynamically every time I SSH into the box. Is this possible at all?
htop has a setup screen, accessed via F2, that allows you to customize the top part of the display, including adding or removing a "Load average" field and setting its style (text, bar, etc.). These seem to be auto-saved in $HOME/.config/htop/htoprc, which warns: # Beware! This file is rewritten by htop when settings are changed in the interface. # The parser is also very primitive, and not human-friendly. I.e., edit that at your own risk. However, you should be able to transfer it from one system to another (version differences might occasionally cause a bit of an issue). You could also set up a configuration, quit, and then copy the file, so that you could maintain a set of different configurations by swapping/symlinking whichever one with htoprc.
How can I set customise settings for htop?
1,286,899,912,000
I thought Moving tmux pane to window was the same question but it doesn't seem to be. Coming from using GNU screen regularly, I'm looking for tmux to do the same things. One of the things I do regularly is have a couple of different windows open, one with some code open in vim, and a couple of terminal windows open to test the code, and sometimes a window or two for various other things. I split the screen vertically and will often have the vim window in the top pane and then one of the other windows in the bottom pane. The main commands I then use are Ctrl-a,Tab to rotate among the panes and Ctrl-a,n to rotate between the windows within a pane. For instance, with vim in the top pane, I switch to the bottom pane and then rotate through the other terminals, doing whatever I need. The screen stays split the whole time. The problem is I can't find comparable capability to screen's Ctrl-a,n in tmux. Switching windows seems to not work inside a pane, but jumps entirely. If the screen is split, the only two options seem to be to jump to some window that isn't split and then split it, or to do a sub-split of a pane. Neither is what I was looking for. Suggestions (besides just sticking with screen)?
tmux and screen have different models so there is no exact equivalent. In screen terms, a split lets you display multiple windows at the same time. next (C-a n) rotates windows through the active part of the split; this lets you rotate “hidden” windows through the active region of the split. In tmux terms, a split divides a window into one or more panes. Each part of a split window is an individual pane, panes are never hidden (if a window is selected (visible) all its panes are, too), and a pane can only be used in a single split of one window (a pane can not be in multiple windows, and it can not be in multiple splits of the same window). There are commands to move panes around in (or between) windows, but not in an identical way to next in screen. You could use a binding like the following to arrange a similar effect: bind-key C-n swap-pane -s :+.top \; rotate-window -Ut :+ You will probably want to put this in your ~/.tmux.conf file, but you can just type/paste it after Prefix : to bind it in your current server instance. To use the binding, pick your “main window”, split it, create a “pane container” window immediately after the “main window”, then use the binding to rotate any pane in the “main window” among the group in the “pane container” window. Here is how you might create the setup: Pick a window to use as your “main window”. Start (e.g.) Vim in it. Split your “main window” into two panes. E.g. Prefix " (:split-window) You can use this pane as your testing window (or log viewer, or whatever). Create a new window (the “pane container”) immediately after your main window. E.g. Prefix c (:new-window) It is important that no other window gets between the indexes of the “main window” and the “pane container” window (+ in the window specifiers used in the bound commands means “the next higher numbered window”). Split this window into a number of panes. 
To rotate through three panes, split this window into two panes (the third pane is the one in the “main window”). Maybe you need a shell for git, and a shell for running a database interface. Put each in a separate pane in this “pane container” window. Switch back to your “main window”. Select the pane that you want to “rotate out”. You can use Prefix Up/Down/Left/Right to move among the panes. Invoke the binding to swap the current pane with the first pane in “pane container” window and (“behind the scenes”) rotate the panes inside the “pane container” window (so that the next time you run the binding, the first command swaps with the “next” pane in the sequence). Prefix Control-n (the binding use C-n, but you could change this to whatever you like). To scroll backwards through the panes, you can use the below: bind-key C-p swap-pane -s :+.bottom \; rotate-window -Dt :+
How do I cycle through panes inside a window in tmux like in screen?
1,286,899,912,000
The OpenSSH daemon has many "default" values for its settings. So looking at the sshd_config might not give someone the complete set of active settings. How to display the full sshd configuration (for OpenSSH)?
The OpenSSH sshd command has an extended test switch which can be used to "Check the validity of the configuration file, output the effective configuration to stdout and then exit." (source sshd man page). So the answer is (as root, or prefixing it with sudo): sshd -T (the settings are not in alphabetical order, one can use sshd -T | sort to quickly look for one setting, or just grep the setting) :-) PS: my first reflex was looking for this answer online, but the search engines were not helpful. And then only looked at the man page and found the answers. For those lazy like me which turns too quickly to the internet, there is now the answer posted online! Lesson learned today: check the man page first, ask a search engine later if needed.
Display full settings of sshd
1,286,899,912,000
When I run: gvim -p *.xyz I find that not all files are opened in tabs. It feels like a kind of tab limit. But when I try to open an unopened file with :tabnew, it is opened next to the previous tabs - it works! How do I make gvim -p ... open all files, without needing to open those above the limit manually with :tabnew? Btw, is this limit documented anywhere? Can it be configured?
Put this in your .vimrc (usually located at ~/.vimrc): set tabpagemax=100 The 'tabpagemax' option defaults to 10, which is why only the first ten files are opened in tabs; see :help tabpagemax.
gvim -p limit of opened tabs?
1,286,899,912,000
I'd like to have a file, e.g. f, with only zsh aliases (pureness reasons). Then I'd like to include the file f in my .zshrc file, so that the aliases defined in f are visible in .zshrc. Is it possible? If it is, I could create a script, e.g. my_alias ($my_alias ll 'ls -l'), which appends an alias to the file f. Of course I could do $echo {alias command} >> ~/.zshrc but this makes .zshrc one big mess. Additionally, what does this look like in bash? UPDATE If someone shares my idea, this is the solution, thanks to phunehehe: # source aliases ALIASFILE=~/.aliasesrc source $ALIASFILE function add_alias() { if [[ -z $1 || -z $2 || $# -gt 2 ]]; then echo usage: echo "\t\$$0 ll 'ls -l'" else echo "alias $1='$2'" >> $ALIASFILE echo "alias ADDED to $ALIASFILE" fi }
.zshrc and .bashrc are script files, not config files, so you "source" the alias file. In Zsh (.zshrc) and Bash (.bashrc) alike: . my_alias will run my_alias and leave its effects in the same environment as the RC files, effectively giving you the aliases in the shell. Of course, you are not limited to aliases either. I use a .shrc that is sourced by both .bashrc and .zshrc for common exports, functions and aliases. For more on sourcing see Different ways to execute a shell script.
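As a concrete sketch of the pattern (using a file under /tmp here instead of ~/.aliasesrc so it can be tried without touching your real config):

```shell
# a standalone alias file, kept separate from .zshrc / .bashrc
cat > /tmp/aliases.sh <<'EOF'
alias ll='ls -l'
alias la='ls -A'
EOF
# in .zshrc or .bashrc you would have the line:  . ~/.aliasesrc
. /tmp/aliases.sh
alias ll    # prints the definition, confirming it was loaded
```

Because the file is sourced rather than executed, the aliases land in the current shell instead of a throwaway subshell.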
Is it possible to include file in config file of zsh? How?
1,286,899,912,000
There are a number of hidden configuration files in my home directory: some of them are in ~/ (e.g. ~/.cinnamon) some of them are in ~/.config/ (e.g. ~/.config/cinnamon-session) some of them are in ~/.local/share/ (e.g. ~/.local/share/cinnamon-session) What is the logic as to where home configuration files live? a) What is the difference between hidden files in these three places? b) What exactly does "local" mean in this context, vs config, vs home? c) In the home directory, are there also other important common configuration directories used by multiple applications? Debian 8.6 Cinnamon 2.2.16
There's a long history here when it comes to the general case of "dot files", but the $HOME/.config and $HOME/.local directories that you specifically mention have an origin in the XDG Base Directory Specification. $HOME/.config is where per-user configuration files go if there is no $XDG_CONFIG_HOME. $HOME/.cache is where per-user cache files go if there is no $XDG_CACHE_HOME. $HOME/.local/share is where per-user data files go if there is no $XDG_DATA_HOME. Windows users may recognize this as a parallel of what Microsoft has had in Windows NT since version 4 (albeit that the names changed in version 6.0): %USERPROFILE%/AppData/Local/ a.k.a. %LOCALAPPDATA% — where per-user data files for this machine go %USERPROFILE%/AppData/Roaming/ a.k.a. %APPDATA% — where per-user data files that a roaming user can access from multiple machines go %USERPROFILE%/AppData/Local/Temp/ a.k.a. %TEMP% — where per-user temporary files go The idea is that per-user files can be (amongst quite a lot of other things) application data files (machine-specific or roaming), application configuration files, cached files, and temporary files, and applications place them in subtrees rooted at these particular directories. (MacOS has a similar system where users get individual per-user "user local" subtrees under /var/folders with C and T subdirectories for cache and temporary files.) As the Arch people note, there are some "dot" files and directories that have become commonly used by several applications and are unlikely to agree with XDG in the foreseeable future, such as $HOME/.ssh and $HOME/.netrc . Further reading Waldo Bastian, Ryan Lortie, and Lennart Poettering (2010). XDG Base Directory Specification. Freedesktop.org. Chris Jackson (2008-02-05). Where Should I Write Program Data Instead of Program Files?. Original Recipe Awesomsauce. Microsoft. Managing Roaming User Data Deployment Guide. Windows Vista Technical Library. Microsoft TechNet. 
https://askubuntu.com/questions/102046/ https://unix.stackexchange.com/a/555214/5132 https://wiki.archlinux.org/index.php/XDG_Base_Directory_support Lionel Drico (2009-03-11). Modify your application to use XDG folders.
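The spec-mandated fallbacks are easy to resolve in shell with default parameter expansion — a sketch for the three directories discussed above:

```shell
# resolve the XDG base directories, falling back to the spec defaults
config_home=${XDG_CONFIG_HOME:-$HOME/.config}
cache_home=${XDG_CACHE_HOME:-$HOME/.cache}
data_home=${XDG_DATA_HOME:-$HOME/.local/share}
printf '%s\n' "$config_home" "$cache_home" "$data_home"
```

This is exactly what XDG-aware applications do internally: honour the environment variable if it is set and non-empty, otherwise use the hard-coded default under $HOME.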
Understanding home configuration file locations: ~/, ~/.config/ and ~/.local/share/
1,286,899,912,000
I usually use grep when developing, and there are some extensions that I always want to exclude (like *.pyc). Is it possible to create a ~/.egreprc or something like that, and add filtering to exclude pyc files from all results? Is this possible, or will I have to create an alias for using grep in this manner, and call the alias instead of grep?
No, there's no rc file for grep. GNU grep 2.4 through 2.21 applied options from the environment variable GREP_OPTIONS, but more recent versions no longer honor it. For interactive use, define an alias in your shell initialization file (.bashrc or .zshrc). I use a variant of the following: alias regrep='grep -Er --exclude=*~ --exclude=*.pyc --exclude-dir=.bzr --exclude-dir=.git --exclude-dir=.svn' If you call the alias grep, and you occasionally want to call grep without the options, type \grep. The backslash bypasses the alias.
Is there a 'rc' configuration file for grep/egrep? (~/.egreprc?)
1,286,899,912,000
On my Ubuntu machine, in /etc/sysctl.conf file, I've got reverse path filtering options commented out by default like this: #net.ipv4.conf.default.rp_filter=1 #net.ipv4.conf.all.rp_filter=1 but in /etc/sysctl.d/10-network-security.conf they are (again, by default) not commented out: net.ipv4.conf.default.rp_filter=1 net.ipv4.conf.all.rp_filter=1 So is reverse path filtering enabled or not? Which of the configuration locations takes priority? How do I check the current values of these and other kernel options?
Checking the value of a sysctl variable is as easy as sysctl <variable name> and, by the way, setting a sysctl variable is as straightforward as sudo sysctl -w <variable name>=<value> but changes made this way will probably hold only till the next reboot. As to which of the config locations, /etc/sysctl.conf or /etc/sysctl.d/, takes precedence, here is what /etc/sysctl.d/README file says: End-users can use 60-*.conf and above, or use /etc/sysctl.conf directly, which overrides anything in this directory. After editing the config in any of the two locations, the changes can be applied with sudo sysctl -p
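Each sysctl variable is also exposed as a file under /proc/sys, with the dots in the name becoming path separators, so reading that file is another way to inspect the current value. A sketch of the name-to-path mapping:

```shell
# sysctl names map to /proc/sys paths: dots become slashes
name=net.ipv4.conf.all.rp_filter
path=/proc/sys/$(printf '%s' "$name" | tr . /)
echo "$path"
# on a Linux box,  cat "$path"  shows the same value as  sysctl "$name"
```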
Finding out the values of kernel options related to sysctl.conf and sysctl.d
1,286,899,912,000
I've been using the default configuration of vim for a while and want to make a few changes. However, if I edit ~/.vimrc it seems to overwrite all other configuration settings of /etc/vimrc and such, e.g. now there is no syntax highlighting. Here is what vim loads: :scriptnames /etc/vimrc /usr/share/vim/vimfiles/archlinux.vim ~/.vimrc /usr/share/vim/vim80/plugin/... <there are a few> In other words I want to keep whatever there is configured in vim, but simply make minor adjustments for my shell user. What do I need to do to somehow weave ~/.vimrc into the existing configuration or what do I need to put into ~/.vimrc so it loads the default configuration? EDIT: My intended content of ~/.vimrc: set expandtab set shiftwidth=2 set softtabstop=2
You can source the global Vim configuration file into your local ~/.vimrc: unlet! skip_defaults_vim source $VIMRUNTIME/defaults.vim set mouse-=a See :help defaults.vim and :help defaults.vim-explained for details.
Extend Default Configuration of vim
1,286,899,912,000
I'm renaming network interfaces by modifying the files in /etc/sysconfig/network-scripts. eth0 -> nic0 eth1 -> nic1 The content of the network scripts looks like this, after modification: # cat /etc/sysconfig/network-scripts/ifcfg-nic0 DEVICE=nic0 BOOTPROTO=static ONBOOT=yes HWADDR=xx:xx:xx:xx:xx:xx USERCTL=no IPV6INIT=no MASTER=bond0 SLAVE=yes A reboot activates the new config. But how do I activate this configuration without rebooting? A systemctl restart network doesn't do the trick. I can shut down one interface by its old name (ifdown eth0) but ifup results in below message no matter if the old or new name was provided: ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device nic0 does not seem to be present, delaying initialization. /etc/init.d/network status shows this output: Configured devices: lo bond0 nic0 nic1 Currently active devices: lo eth0 eth1 bond0 Both, ifconfig and ip a show the old interface names.
You can rename the device using the ip command: /sbin/ip link set eth1 down /sbin/ip link set eth1 name eth123 /sbin/ip link set eth123 up Edit: I am leaving the below for the sake of completeness and posterity (and for informational purposes,) but I have confirmed swill's comment and Marco Macuzzo's answer that simply changing the name and device of the interface /etc/sysconfig/network-scripts/ifcfg-eth0 (and renaming the file) will cause the device to be named correctly as long as the hwaddr= field is included in the configuration file. I recommend using this method instead after the referenced update. You may also want to make sure that you configure a udev rule, so that this will work on the next reboot too. The path for udev moved in CentOS 7 to /usr/lib/udev/rules.d/60-net.rules but you are still able to manage it the same way. If you added "net.ifnames=0 biosdevname=0" to your kernel boot string to return to the old naming scheme for your nics, you can remove ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{type}=="1", PROGRAM="/lib/udev/rename_device", RESULT=="?*", NAME="$result" And replace it with ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:50:56:8e:3f:a7", NAME="eth123" You need one entry per nic. Be sure to use the correct MAC address and update the NAME field. If you did not use "net.ifnames=0 biosdevname=0", be careful as there could be unintended consequences.
CentOS 7 - Rename network interface without rebooting
1,286,899,912,000
I want to add a permanent iptables rule to my new VPS, and after a brief Google search I was surprised to find there are two places this rule can be added that seem identical: /etc/rc.local and /etc/init.d/rc.local. Does anyone know why there are two places for simple startup code? Is it Linux-flavor specific (but Ubuntu has both!)? Or is one of them deprecated?
/etc/init.d is maintained on Ubuntu for backward compatibility with sysvinit stuff. If you actually look at /etc/init.d/rc.local you'll see (also from a 12.04 LTS Server):

#! /bin/sh
### BEGIN INIT INFO
# Provides:          rc.local
# Required-Start:    $remote_fs $syslog $all
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Run /etc/rc.local if it exist
### END INIT INFO

And "Run /etc/rc.local" is exactly what it does. The entirety of /etc/rc.local is:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exit 0

I would guess the purpose in doing this is to provide a dead simple place to put shell commands you want run at boot, without having to deal with the stop|start service stuff, which is in /etc/init.d/rc.local. So it is in fact a service, and can be run as such. I added an echo line to /etc/rc.local and:

»service rc.local start
hello world

However, I do not believe it is referenced by anything in upstart's /etc/init (not init.d!) directory:

»initctl start rc.local
initctl: Unknown job: rc.local

There are a few "rc" services in upstart:

»initctl list | grep rc
rc stop/waiting
rcS stop/waiting
rc-sysinit stop/waiting

But none of those seem to have anything to do with rc.local.
What's the difference between /etc/rc.local and /etc/init.d/rc.local?
I've found tons of sites that explain how to have git warn you when you're changing line endings, or miscellaneous other techniques to prevent you from messing up an entire file. Assume it's too late for that -- the tree already has commits that toggle the line endings of files, so git diff shows the subtraction of the old file followed by the addition of a new file with the same content I'm looking for a git configuration option or command-line flag that tells diff to just ignore those -- if two lines differ only by whitespace, pretend they're the same. I need this config option/flag to work for anything that relies on file differences -- diff, blame, even merge/rebase ideally -- I want git to completely ignore trailing whitespace, particularly line endings. How can I do that?
For diff, there's git diff --ignore-space-at-eol, which should be good enough. For diff and blame, you can ignore all whitespace changes with -w: git diff -w, git blame -w. For git apply and git rebase, the documentation mentions --ignore-whitespace. For merge, it looks like you need to use an external merge tool. You can use this wrapper script (untested), where favorite-mergetool is your favorite merge tool; run git -c mergetool.nocr.cmd=/path/to/wrapper/script merge. The result of the merge will be in unix format; if you prefer another format, convert everything to that different format, or convert $MERGED after the merge.

#!/bin/sh
set -e
# Strip carriage returns from each input before merging.
TEMP=$(mktemp)
tr -d '\r' <"$BASE" >"$TEMP"
mv -f "$TEMP" "$BASE"
TEMP=$(mktemp)
tr -d '\r' <"$LOCAL" >"$TEMP"
mv -f "$TEMP" "$LOCAL"
TEMP=$(mktemp)
tr -d '\r' <"$REMOTE" >"$TEMP"
mv -f "$TEMP" "$REMOTE"
favorite-mergetool "$@"

To minimize trouble with mixed line endings, make sure text files are declared as such. See also Is it possible for git-merge to ignore line-ending differences? on Stack Overflow.
Ignore whitespaces changes in all git commands
Ok, so I've been searching the web for solutions to this problem with no answers seeming to work for me. Hopefully someone can help me. I'm only trying to configure the OpenVPN client. I'm running CrunchBang Linux 3.2.0-4-amd64 Debian 3.2.60-1+deb7u1 x86_64 GNU/Linux and I just switched over to using systemd. The changeover went smoothly enough, but now I can't get my OpenVPN client to come up using systemd. I've tried following these configuration tutorials, but nothing works:

http://fedoraproject.org/wiki/Openvpn
http://d.stavrovski.net/blog/how-to-install-and-set-up-openvpn-in-debian-7-wheezy

And looked at a bunch of other different guides. I can bring up the tunnel from the command line with openvpn /etc/openvpn/vpn.conf, so I know the config file is good; it was working with sysvinit just fine so I'm not surprised. I then attempt to just do a status with systemctl status [email protected], resulting in:

$ sudo systemctl status [email protected]
[email protected]
   Loaded: error (Reason: No such file or directory)
   Active: inactive (dead)

I realized that I need to do some setup for services. I want to be prompted for a password, so I followed this guide to create an [email protected] in /etc/systemd/system/. But restarting the OpenVPN service still doesn't prompt for a password.

$ sudo service openvpn restart
[ ok ] Restarting openvpn (via systemctl): openvpn.service.

The Fedora tutorials go through the steps of creating symbolic links, but don't create any of the .service files in the walk-throughs. What piece am I missing? Do I need to create an [email protected]? If so, where exactly do I place it? I feel like it shouldn't be this difficult, but I can't seem to find any solution that works for me. I'm happy to provide any more information that's needed.
Solution

/lib/systemd/system/[email protected]:

-rw-r--r-- 1 root root 319 Aug  7 10:42 [email protected]

[Unit]
Description=OpenVPN connection to %i
After=network.target

[Service]
Type=forking
ExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --config /etc/openvpn/%i.conf
ExecReload=/bin/kill -HUP $MAINPID
WorkingDirectory=/etc/openvpn

[Install]
WantedBy=multi-user.target

Symlink:

lrwxrwxrwx 1 root root 36 Aug  7 10:47 [email protected] -> /lib/systemd/system/[email protected]

Prompt For Password

Everything is working now, except for being prompted for a password to connect. I've attempted this solution. I tweaked the file from above just a bit, and added an Expect script like in the example. Working like a charm! My files are below.

Modified lines from the above /lib/systemd/system/[email protected]:

ExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --management localhost 5559 --management-query-passwords --management-forget-disconnect --config /etc/openvpn/%i.conf
ExecStartPost=/usr/bin/expect /lib/systemd/system/openvpn_pw.exp

Expect script /lib/systemd/system/openvpn_pw.exp. Make sure to do the following:

chmod +x on the script.
Have telnet installed.

Code of the expect script:

#!/usr/bin/expect
set pass [exec /bin/systemd-ask-password "Please insert Private Key password: "]
spawn telnet 127.0.0.1 5559
expect "Enter Private Key Password:"
send "password 'Private Key' $pass\r"
expect "SUCCESS: 'Private Key' password entered, but not yet verified"
send "exit\r"
expect eof

It should be noted that the above solution does log your password entered in plaintext in the following logs: /var/log/syslog and /var/log/daemon.log
I think the Debian OpenVPN setup with systemd is currently a tad bit broken. To get it to work on my machines I had to:

Create /etc/systemd/system/[email protected] (the directory), and place in it a new file with this:

[Unit]
Requires=networking.service
After=networking.service

I called my file local-after-ifup.conf. It needs to end with .conf. (This is the bit that's currently a tad bit broken.)

Create a file in /etc/tmpfiles.d (I called mine local-openvpn.conf) with the contents:

# Type Path         Mode UID  GID  Age Argument
d      /run/openvpn 0755 root root -   -

This is Debian bug 741938 (fixed in 2.3.3-1).

Create a symlink into multi-user.target.wants (easiest way is systemctl enable openvpn@CONF_NAME.service). E.g., if you have /etc/openvpn/foo.conf, you'd use [email protected].

If you also have the SysV init script showing up in systemd, disable it. This is Debian bug 700888 (fixed in 2.3.3-1).

NOTE: 2.3.3-1 or later is not yet in testing, though it is in unstable.
Using OpenVPN with systemd
Is there any way to automate Linux server configuration? I'm working on setting up a couple of new build servers, as well as an FTP server, and would like to automate as much of the process as possible. The reason for this is that the setup and configuration of these servers needs to be done in an easily repeatable way. We figured that automating as much of this process as possible would make it easiest to repeat as needed in the future. Essentially, all the servers need is to install the OS, as well as a handful of packages. There's nothing overly complicated about the setups. So, is there a way to automate this process (or at least some amount of it)? EDIT: Also, say I use Kickstart, is there a way to remove the default Ubuntu repositories, and just install the packages from a collection of .deb files we have locally (preferably through apt, rather than dpkg)?
Yes! This is a big deal, and incredibly common. And there are two basic approaches. One way is simply with scripted installs, as for example used in Fedora, RHEL, or CentOS's kickstart. Check this out in the Fedora install guide: Kickstart Installations. For your simple case, this may be sufficient. (Take this as an example; there are similar systems for other distros, but since I work on Fedora that's what I'm familiar with.) The other approach is to use configuration management. This is a big topic, but look into Puppet, Chef, Ansible, cfengine, Salt, and others. In this case, you might use a very basic generic kickstart to provision a minimal machine, and the config management tool to bring it into its proper role. As your needs and infrastructure grow, this becomes incredibly important. Using config management for all your changes means that you can recreate not just the initial install, but the evolved state of the system as you introduce the inevitable tweaks and fixes caused by interacting with the real world. We figured that automating as much of this process as possible would make it easiest to repeat as needed in the future. You are absolutely on the right track — this is the bedrock principle of professional systems administration. We even have a meme image for it: It's often moderately harder to set up initially, and there can be a big learning curve for some of the more advanced systems, but it pays for itself forever. Even if you have only a handful of systems, think about how much you want to work at recreating them in the event of catastrophe in the middle of the night, or when you're on vacation.
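Even before adopting a full configuration-management tool, the "repeatable" idea can be captured as a small idempotent shell step: check whether something is already in place, and only act when it isn't. This is just a sketch; the tool names and the echo placeholder are illustrative, and you would replace the echo with your package manager call (e.g. apt-get install -y):

```shell
#!/bin/sh
# Idempotent provisioning sketch: only act when a tool is missing,
# so running the script twice is harmless.
installed() { command -v "$1" >/dev/null 2>&1; }

for tool in sh this-tool-does-not-exist; do
    if installed "$tool"; then
        echo "ok: $tool already present"
    else
        echo "would install: $tool"   # placeholder for the real install command
    fi
done
```

Idempotence is the property that makes both kickstart-style scripts and config-management runs safe to repeat, which is exactly what the question asks for.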
How to automate Linux server configuration?
Is there a way to dynamically assign environment variables in a systemd service unit file? We have a machine that has 4 GPUs, and we want to spin up multiple instances of a certain service per GPU. E.g.: gpu_service@1:1.service gpu_service@2:1.service gpu_service@3:1.service gpu_service@4:1.service gpu_service@1:2.service gpu_service@2:2.service gpu_service@3:2.service gpu_service@4:2.service ad nauseam So the 1:1, 2:1, etc. are effectively the %i in the service unit file. In order for the service to bind to a particular GPU, the service executable checks a certain environment variable, e.g.: USE_GPU=4 Is there a way I can take %i inside the service unit file and run it through some (shell) function to derive the GPU number, and then I can set the USE_GPU environment variable accordingly? Most importantly, I don't want the hassle of writing multiple /etc/systemd/system/gpu_service@x:y.service/local.conf files just so I can spin up more instances.
If you are careful you can incorporate a small bash script sequence as your exec command in the instance service file. Eg

ExecStart=/bin/bash -c 'v=%i; USE_GPU=$${v%%:*} exec /bin/mycommand'

The $$ in the string will become a single $ in the result passed to bash, but more importantly will stop ${...} from being interpolated by systemd. Likewise, the literal % inside the parameter expansion must be doubled to %% so systemd doesn't treat it as a specifier. (Earlier versions of systemd did not document the use of $$, so I don't know if it was supported then).
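For an instance name like 3:2, the parameter expansion in that ExecStart keeps everything before the first colon. Run through plain bash (outside a unit file, so no $$/%% escaping is needed) it looks like this:

```shell
v="3:2"            # what systemd substitutes for %i
USE_GPU=${v%:*}    # strip the shortest suffix matching ':*'
echo "$USE_GPU"    # prints: 3
```

So gpu_service@4:2.service would end up with USE_GPU=4 in its environment.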
Dynamic variables in systemd service unit files
A lot of unix config files in xx.d folders are prefixed by a number, like : $ ls /etc/grub.d/ 00_header 10_linux 30_os-prober 40_custom 05_debian_theme 20_linux_xen 30_uefi-firmware 41_custom Is there any convention on this number? What does it mean ? Might just be to avoid name clashing but I'm curious if there's anything more.
It's a convention used both to keep filenames unique, and to control the order in which scripts get executed. In general, the xx.d directories are scanned by something doing the moral equivalent of for file in /etc/grub.d/*; do ... and the numeric prefixes give this an ordering other than alphabetical. There may be application-specific standards for what's a 4x_foo vs a 9x_foo but nothing consistent across all the xx.d directories.
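You can see the effect of the numeric prefixes with a throwaway directory: the shell's * glob expands in sorted order, so the prefix decides processing order no matter when the files were created.

```shell
# Create files out of numeric order, then iterate the way a loader would.
dir=$(mktemp -d)
touch "$dir/30_third" "$dir/05_first" "$dir/10_second"
for f in "$dir"/*; do
    basename "$f"     # prints 05_first, 10_second, 30_third
done
rm -r "$dir"
```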
What is the number prefix in config files from .d directory
On Ubuntu 10.4 I have edited the /etc/bash.bashrc file to set some variables like the command history size (HISTSIZE=5000), however if I create a new users Ubuntu by default gives them a .bashrc file in their home directory with this set as HISTSIZE=1000 which is overriding mine. How can I change the default .bashrc file that is created?
You may put default configurations in /etc/skel so that useradd(8) can copy files in /etc/skel whenever it creates new user's directory by '-m' option. Note that this is used only for the new-user. Existing user accounts are not affected.
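What useradd -m does with /etc/skel is essentially a recursive copy into the new home directory. The simulation below uses temporary directories as stand-ins so nothing system-wide is touched:

```shell
skel=$(mktemp -d)    # stand-in for /etc/skel
home=$(mktemp -d)    # stand-in for the new user's home directory
printf 'HISTSIZE=5000\n' > "$skel/.bashrc"

cp -a "$skel/." "$home/"          # roughly what useradd -m performs
grep HISTSIZE "$home/.bashrc"     # prints: HISTSIZE=5000
```

In real use you would simply edit /etc/skel/.bashrc, and every subsequently created user would start with your HISTSIZE=5000.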
How do I set a user's default .bashrc file?
There are so many tutorials out there explaining how to set up a dhcpd server, in relation to providing NTP suggestions to DHCP clients, that I had always thought NTP configuration was carried out automatically. Recently I started seeing clock drift in my local network, so I assume this was a wrong assumption. So I set out to see how one can minimize the NTP client configuration, provided one has carried out the effort to set up ntp-server suggestions through dhcpd.

I have not been able to find much apart from this Ubuntu-specific help tutorial: https://help.ubuntu.com/community/UbuntuTime . Even here (see paragraph under "Troubleshooting -> Which configuration file is it using?") the information is scarce, but it says that if an /etc/ntp.conf.dhcp file is found it will be used instead. First of all, the actual location that the writer meant here is /var/lib/ntp/ntp.conf.dhcp, as observed in /etc/init.d/ntp, but regardless of that the presence of this file does not guarantee that ntp will request servers from dhclient. As a result, I have to explicitly add the server clause in ntp.conf.dhcp for my local NTP server. But in that case, why do I even set up NTP settings on the dhcpd server? This seems to go against intuition, i.e. set up NTP settings once (on the server) and let the dhcpd server delegate the information to the clients.

How can I minimize (if not avoid altogether) client configuration for NTP? Alternatively, how can I get NTP information through dhclient? Is there a CLI solution that fits all Linux distros? I assume every client should have the executables of ntpd, but I do not know how to proceed from there. Thank you.

EDIT: Ubuntu client verbose output when running dhclient manually:

sudo dhclient -1 -d -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
Internet Systems Consortium DHCP Client 4.2.4
Copyright 2004-2012 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/eth0/20:cf:30:0e:6c:12
Sending on   LPF/eth0/20:cf:30:0e:6c:12
Sending on   Socket/fallback
DHCPREQUEST of 192.168.112.150 on eth0 to 255.255.255.255 port 67 (xid=0x2e844b8f)
DHCPACK of 192.168.112.150 from 192.168.112.112
reload: Unknown instance:
invoke-rc.d: initscript smbd, action "reload" failed.
RTNETLINK answers: File exists
 * Stopping NTP server ntpd ...done.
 * Starting NTP server ntpd ...done.
bound to 192.168.112.150 -- renewal in 41963 seconds.

The ntpd service is restarted, yet running ntpq -cpe -cas afterwards I still do not see my local NTP server in the list of NTP servers. Of course my dhcpd server does have option ntp-servers:

subnet 192.168.112.0 netmask 255.255.255.0 {
    max-lease-time 604800;
    default-lease-time 86400;
    authoritative;
    ignore client-updates;
    option ntp-servers 192.168.112.112; # self
    ... (many other options)
}
If the DHCP server you are using is configured to provide the ntp-servers option, you can configure your dhclient to request ntp-servers by adding ntp-servers to the default request line in dhclient.conf, as shown at the end of this example from Ubuntu Linux (as of 19.04, but present since at least 12.04):

request subnet-mask, broadcast-address, time-offset, routers,
        domain-name, domain-name-servers, domain-search, host-name,
        dhcp6.name-servers, dhcp6.domain-search, dhcp6.fqdn, dhcp6.sntp-servers,
        netbios-name-servers, netbios-scope, interface-mtu,
        rfc3442-classless-static-routes, ntp-servers;

/etc/ntp.conf and the information from DHCP will be used to create /etc/ntp.conf.dhcp. Your ntpd must be told to use /etc/ntp.conf.dhcp if it exists. On the version of Ubuntu that I'm using, this is done via /etc/dhcp/dhclient-exit-hooks.d/ntp: that is the file that tells ntpd to use /etc/ntp.conf.dhcp if it exists, and to just use /etc/ntp.conf if it doesn't.
how do you set up a linux client to use ntp information provided through dhcp?
How can I establish a reverse ssh tunnel with my ./ssh/config file? I'm trying to reproduce this command ssh -R 55555:localhost:22 user@host in my .ssh/config file so that when I type ssh host I'll ssh to the host as user and with a reverse tunnel. The commands accepted by the config file are more verbose counterparts to the command line flags. Based on the ssh manpage and the manpage for ssh_config, it seems like the corresponding setting is BindAddress. In my .ssh/config file I have: Host host Hostname host User user BindAddress 55555:localhost:22 This, and slight variations of, result in a connection refused when I try ssh localhost -p 55555 once logged in on the host. The same works fine if I explicitly give the command at the top when first sshing to the host. My config file does work without the reverse tunnel command; ssh host logs me into host as user.
BindAddress is not the option you're after. From man ssh_config: BindAddress Use the specified address on the local machine as the source address of the connection. Only useful on systems with more than one address. The configuration file equivalent of -R is RemoteForward: RemoteForward Specifies that a TCP port on the remote machine be forwarded over the secure channel to the specified host and port from the local machine. The first argument must be [bind_address:]port and the second argument must be host:hostport. [...] With this information the command line ssh -R 55555:localhost:22 user@host translates into the following .ssh/config syntax: Host host HostName host User user RemoteForward 55555 localhost:22
Reverse ssh tunnel in config
I am thinking of a system where /etc is tracked in a remote git repository. I am thinking of a git workflow where every host machine has its own branch. Every previous version on every machine could be easily tracked, compared, and merged. If an /etc modification had to be committed on many machines, it could easily be done by some merging script. In case of an "unwanted" /etc change, this would be clearly visible (alarm scripts could even be tuned to watch for that). Has anybody already used such a configuration? Are there any security problems with it?
The program etckeeper does manage /etc in git, you just need to change the default vcs backend from bzr to git in /etc/etckeeper/etckeeper.conf. It is installed by default in Ubuntu Linux, and handles the common cases of when to commit automatically. It commits before installing packages in case there are uncomitted manual changes, and after installing.
Using git to manage /etc?
I'm trying to apply the same sshd settings to multiple users. According to the manual, it seems Match User acts like an AND: Introduces a conditional block. If all of the criteria on the Match line are satisfied, the keywords on the following lines override those set in the global section of the config file How do I state "for any of these users...", so in this example bob, joe, and phil are allowed to use SSH as a proxy, but not allowed to log in: Match User bob, User joe, User phil PasswordAuthentication yes AllowTCPForwarding yes ForceCommand /bin/echo 'We talked about this guys. No SSH for you!'
Not having done this myself, I can only go on what the manuals say: From the sshd_config manual: The match patterns may consist of single entries or comma-separated lists and may use the wildcard and negation operators described in the PATTERNS section of ssh_config(5). This means that you ought to be able to say Match User bob,joe,phil PasswordAuthentication yes AllowTCPForwarding yes ForceCommand /bin/echo 'We talked about this guys. No SSH for you!' Note that "comma-separated" means no additional spaces should be inserted between the names. See also this answer on the Information Security forum: Creating user specific authentication methods in SSH.
Match multiple users in 'sshd_config'
I want to set up a plugin for the Geany editor on a Debian system. It's a theme changing plugin, so I am following this manual. It says: The simplest way to do this is to copy the contents of the archive into the ~/.config/geany/filedefs/ folder. I don't understand this. What do they mean by ~/.config? Is that the default directory where Geany is installed? I have its files at /usr/lib/geany but that doesn't seem to be location they are talking about.
~ is your home directory, usually /home/username. A file or folder name starting with a . is the Linux version of a hidden file/folder. So ~/.config is a hidden folder within your home directory. Open up your file browser to your home folder, then find the option to show hidden files and folders. If you don't see .config, you'll have to create it. Then navigate into it, find or create the geany folder, go into that, then find or create a folder named filedefs. You can then put the relevant files into there. .config is a convention, defined by XDG Base Directory Specification see also https://stackoverflow.com/questions/1024114/location-of-ini-config-files-in-linux-unix
What ~/.config refers to and how to put files there?
So SSH has these files that configure settings for a specific user. ~/.ssh/authorized_keys ~/.ssh/config ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ~/.ssh/known_hosts I'd like to globalise some of these files, like config and known_hosts. So that other users ( including root ) could share the configured hosts. What would be the best way to do this?
For ~/.ssh/config you can place relevant system-wide settings in /etc/ssh/ssh_config according to the man page: ssh(1) obtains configuration data from the following sources in the following order: command-line options user's configuration file (~/.ssh/config) system-wide configuration file (/etc/ssh/ssh_config) For each parameter, the first obtained value will be used. The configuration files contain sections separated by “Host” specifications, and that section is only applied for hosts that match one of the patterns given in the specification. Note that only the first value will be used, which means that the user can always override the system-wide configuration options locally. For ~/.ssh/known_hosts you can use /etc/ssh/ssh_known_hosts or another file specified by the GlobalKnownHostsFile configuration option: GlobalKnownHostsFile Specifies a file to use for the global host key database instead of /etc/ssh/ssh_known_hosts. I'm unsure if it is possible for the other files, but I imagine you could work something out with symlinks if you really wanted to share private keys among users as well.
making ssh hosts global to all the users on the computer
I was wondering if there's a way to find out the default shell of the current user within a shell script.

Use case: I am working on a script that sets an alias for a command, and this alias is set within a shell script:

#!/bin/bash
alias somealias='some command to set the alias'

There's logic in the script that tries to find the default shell of the user executing the script and adds this alias to the respective ~/.bashrc or ~/.zshrc file. But as I am adding a shebang at the front of the script and explicitly asking it to use bash, the answers posted here always return bash as expected, although I am executing this script from a zsh terminal. Is there a way to get the shell type where the script is executed, regardless of the shebang set? I am looking for a solution that works on both Mac and all Linux-based distros.
$ finger $USER|grep -oP 'Shell: \K.*' /bin/mksh
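If finger isn't installed, a hedged alternative is to read the login shell straight from the account database: on Linux it is the seventh colon-separated field of the passwd entry, and on macOS (which lacks getent for this) dscl can report it. The dscl fallback below is a sketch based on macOS's Directory Service tooling:

```shell
user=${USER:-$(id -un)}

# Linux / anything with getent: 7th field of the passwd entry
shell=$(getent passwd "$user" | cut -d: -f7)

# macOS fallback via Directory Service (no-op where dscl is absent)
[ -n "$shell" ] || shell=$(dscl . -read "/Users/$user" UserShell 2>/dev/null | awk '{print $2}')

echo "$shell"
```

Unlike $SHELL, this reflects the account's configured login shell rather than whatever shell the script happens to be running under.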
Finding out the default shell of a user within a shell script
I'm mounting a NFS filesystem on my machine. How do I figure out what version of the NFS protocol the server uses? I don't have access to the NFS server machine, but I do have root on my client machine. Is there anything I can run on my client machine to identify what version of the NFS protocol is being used by the server, or what versions it supports? I wasn't able to find any useful information in /var/log/messages or kernel debugging output (dmesg). I have tried running nfsstat, but I'm not sure if it is giving me any useful information. However, when I run nfsstat -s to request information about the server, I don't see anything useful: # nfsstat -s Server rpc stats: calls badcalls badfmt badauth badclnt 0 0 0 0 0 When I run nfsstat -c to request information about the client, I do see some information about Client nfs v3, but I'm not sure how to interpret this. Does this tell me anything about the protocol being used between my client machine and the NFS server? Does it mean I am currently using v3 of the NFS protocol? Does it tell me anything about what versions of the NFS protocol the server supports, e.g., NFS v4?
The nfsstat -c program will show you the NFS version actually being used. If you run rpcinfo -p {server} you will see all the versions of all the RPC programs that the server supports. On my system I get this output: $ rpcinfo -p localhost program vers proto port 100000 2 tcp 111 portmapper 100000 2 udp 111 portmapper ... 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 4 udp 2049 nfs ... This shows me that my NFS server (localhost in this example) offers versions 2, 3, and 4 of the NFS protocol all over UDP and TCP.
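If you just want the list of supported NFS versions, the rpcinfo output is easy to post-process. The sample below is canned output so the pipeline can be shown self-contained; against a real server you would pipe rpcinfo -p {server} into the same awk command:

```shell
# Canned `rpcinfo -p` lines: program, version, proto, port, service name.
sample='100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs'

# Keep only nfs rows, print the version column, de-duplicate.
printf '%s\n' "$sample" | awk '$5 == "nfs" { print $2 }' | sort -u
# prints: 2, 3 and 4, one per line
```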
Which version of NFS is my NFS server using?
When zsh shows you a menu of possible completions, I'd like it to let me use shift-tab to access previous completion entries--is there an option that controls what key is used to access previous completion entries? And if so, what would I need to add to my ~/.zshrc file in order to set it up. If it makes any difference, I currently can navigate through a completion-menu using the arrow-keys, but I dislike doing this since the arrow-keys feel out of place and awkward to use.
you want to bind the editor command reverse-menu-complete to the menuselect keymap. bindkey -M menuselect '^[[Z' reverse-menu-complete I am not sure how portable the escape sequence ^[[Z is, so you may want to check terminfo(5) to see if there is a way of using the $terminfo array to correctly bind it. Note that the menuselect keymap is available after you load the zsh/complist module. If you've configured the completion system with compinstall, that module is generally automatically loaded the first time you complete something. To be able to add that binding to your ~/.zshrc, you need to load the module manually there first with: zmodload zsh/complist
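Regarding the portability of that escape sequence: zsh exposes the terminal's "back-tab" capability as $terminfo[kcbt], so a ~/.zshrc snippet along these lines (a sketch, untested across terminals) avoids hard-coding ^[[Z:

```shell
# ~/.zshrc: bind Shift-Tab via terminfo instead of a raw escape sequence
zmodload zsh/complist
if [[ -n ${terminfo[kcbt]} ]]; then
    bindkey -M menuselect "${terminfo[kcbt]}" reverse-menu-complete
fi
```

On terminals whose terminfo entry lacks kcbt, the condition fails and you would fall back to the raw ^[[Z binding above.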
Zsh completion, enabling shift-tab
Due to work I have recently started using OS X and have set it up using homebrew in order to get a similar experience as with Linux. However, there are quite a few differences in their settings. Some only need to be in place on one system. As my dotfiles live in a git repository, I was wondering what kind of switch I could set in place, so that some configs are only read for Linux system and other for OS X. As to dotfiles, I am referring, among other, to .bash_profiles or .bash_alias.
Keep the dotfiles as portable as possible and avoid OS dependent settings or switches that require a particular version of a tool, e.g. avoid GNU syntax if you don't use GNU software on all systems. You'll probably run into situations where it's desirable to use system specific settings. In that case use a switch statement with the individual settings: case $(uname) in 'Linux') LS_OPTIONS='--color=auto --group-directories-first' ;; 'FreeBSD') LS_OPTIONS='-Gh -D "%F %H:%M"' ;; 'Darwin') LS_OPTIONS='-h' ;; esac In case the configuration files of arbitrary applications require different options, you can check if the application provides compatibility switches or other mechanisms. For vim, for instance, you can check the version and patchlevel to support features older versions, or versions compiled with a different feature set, don't have. Example snippet from .vimrc: if v:version >= 703 if has("patch769") set matchpairs+=“:” endif endif
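Another pattern that keeps the shared file clean is to end ~/.bashrc by sourcing an optional per-OS overlay file. The naming convention here (~/.bashrc.Linux, ~/.bashrc.Darwin, keyed on uname) is just an assumption of this sketch, not a standard:

```shell
# End of a shared ~/.bashrc: pull in a per-OS overlay if one exists.
overlay="$HOME/.bashrc.$(uname)"    # e.g. ~/.bashrc.Linux or ~/.bashrc.Darwin
if [ -r "$overlay" ]; then
    . "$overlay"
fi
echo "overlay file would be: ${overlay##*/}"
```

The shared file stays identical on every machine, and only the small overlay files differ per system, which keeps the git history tidy.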
How to keep dotfiles system-agnostic?
I want to make "echo 1 > /sys/kernel/mm/ksm/run" persistent between boots. I know that I can edit /etc/sysctl.conf to make /proc filesystem changes persist, but this doesn't seem to work for /sys. How would I make this change survive reboots?
Most distros have some sort of an rc.local script that you could use. Check your distro as names and path may vary. Normally expect to look under /etc.
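On systemd-based systems there is also a declarative option: a tmpfiles.d entry with the w (write) type, which writes a value into the given file at boot. This is a sketch assuming your distro ships systemd-tmpfiles; the file name ksm.conf is arbitrary:

```shell
# /etc/tmpfiles.d/ksm.conf -- the "w" type writes the argument to the path at boot:
#   w /sys/kernel/mm/ksm/run - - - - 1

# Apply immediately without rebooting:
systemd-tmpfiles --create /etc/tmpfiles.d/ksm.conf
```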
Make changes to /sys persistent between boots
I usually turn the alert sound (by default a water drop sound) off by going to control-center→Sound→Sound Effects and muting the Alert volume. This is in Gnome. I wanted to turn it off in a custom live build of Debian by default, but I can't figure where this setting is stored. I tried dconf and looked around config directories extensively without success. I tried find ~ -mmin -1 also gio monitor and inotifywatch without success. The only output by find ~ -mmin -1 was .config/dconf/ and .config/dconf/user which get edited all the time the control center is opened anyway. I replaced this user file in a vm to test and all dconf settings were updated except the one I need (the alert sound). I also tried dconf watch / which gave no output when I tried editing the alert sound setting I'd like someone to tell me how to mute this setting from command line and possibly tell me where it is stored.
This can be achieved by this command dconf write /org/gnome/desktop/sound/event-sounds "false" However, this doesn't turn off the sound volume slider effect. To completely turn off the sound effects the closest way I've found was to live boot into a clean iso of the distro and open the System settings > Sound > Sound effects and turn these sounds off as preferred, then copy the file ~/.config/pulse/*-stream-volumes.tdb and save it. Then, to turn off the "sound effects" on an installed environment or while building a custom version of the distro do cp saved-pulse-volumes.tdb ~/.config/pulse/*-stream-volumes.tdb
How to turn off alert sounds/sound effects on Gnome from terminal?
1,286,899,912,000
Specifically the AllowUsers parameter: e.g. convert this

AllowUsers user1 user2 user3 user4

to this

AllowUsers user1 user2
           user3 user4
No, but it's not useful in this case. You can have multiple AcceptEnv, AllowGroups, AllowUsers, DenyGroups, DenyUsers, HostKey, PermitOpen, Port and Subsystem lines, and each line adds one or more (or sometimes zero) elements to the list. Nonetheless, if you can't easily fit your AllowUsers directive on one line, I suggest creating a ssh_allowed group and using AllowGroups ssh_allowed in sshd_config.
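A sketch of that group-based setup (the group name and user list are just examples; the sshd_config path varies, often /etc/ssh/sshd_config):

```shell
# as root: create the group and add the permitted users
groupadd ssh_allowed
usermod -aG ssh_allowed user1
usermod -aG ssh_allowed user2

# then replace the AllowUsers line in sshd_config with:
#   AllowGroups ssh_allowed
# and reload the daemon, e.g. "service ssh reload" or "systemctl reload sshd"
```
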
Is it possible to break long lines in sshd_config?
1,286,899,912,000
A few months ago, meld started behaving oddly. Common lines are almost unreadable, and shown as dark grey text on a black background. Oddly enough, running it as root is fine (with kdesudo meld), although the theme is less pretty. How can I specify the text's colour options for meld? I'm using: Arch Linux KDE 4.14.3 (also seen in 4.14.2) meld 3.12.2 (also seen in 3.12.1) gtk3 3.14.6 (also seen in 3.14.5) Troubleshooting KDE system settings meld uses GTK3, so I fiddled with System Settings > Common Appearance and Behaviour > Application Appearance > GTK > Select a GTK3 Theme. This change was reflected in meld, but none of the three options I selected changed the text. (The available options were Default, Emacs, and oxygen-gtk; the latter is used in the screenshot above.) Manually modifying config files I looked in ~ for files with gtk in their name. ~/.gtkrc-2.0 ~/.gtkrc-2.0-kde4 ~/.config/gtk-2.0 ~/.config/gtk-3.0 ~/.kde4/share/config/gtkrc ~/.kde4/share/config/gtkrc-2.0 Interestingly, there is nothing with gtk in its name in /root. Hence, I tried deleting some of the ~ files, to see if I could get the same effect for my user. I presume all the gtkrc-2.0 files are irrelevant to meld. Firstly, I deleted ~/.config/gtk-3.0, but this had no effect, and was recreated when I opened meld. The only other option appeared to be ~/.kde4/share/config/gtkrc, so deleted this and started meld, which was unaffected. However, the file was not recreated, and it contains some possibly pertinent lines (e.g. text[ACTIVE] = { 1.000, 1.000, 1.000 }). I'm unsure if the (missing) file was loaded at all. I tried kbuildsycoca4 ; kquitapp plasma-desktop ; sleep 2 ; kstart plasma-desktop, but this had no effect. Do I need to manually reload the gtkrc? And why is this file not being affected/rewritten by the system settings? (Also, FWIW, I removed ~/.gtkrc-2.0-kde4, which was actually a symlink to ~/.gtkrc-2.0, and I also removed the target itself, but that didn't help. 
Again, I didn't reload gtk (I'm not sure if this is necessary, or possible), and the files weren't re-created when I tried running meld again.) Possibly pertinent environment variables $ export | grep -i gtk declare -x GTK2_RC_FILES="/etc/gtk-2.0/gtkrc:/home/sparhawk/.gtkrc-2.0:/home/sparhawk/.kde4/share/config/gtkrc-2.0" declare -x GTK_IM_MODULE="xim" declare -x GTK_MODULES="canberra-gtk-module" declare -x GTK_RC_FILES="/etc/gtk/gtkrc:/home/sparhawk/.gtkrc:/home/sparhawk/.kde4/share/config/gtkrc" (Disclosure: I've previously asked this question on the KDE forums, but didn't come to a solution.)
It looks like it was a regression introduced in Meld 3.12.1. I downloaded previous versions from the meld website. Meld 3.12.0 works fine. Meld 3.12.1 does not. I contacted the devs and they told me that it was indeed a regression introduced in the gtk+ 3 port. They suggested trying the just-released 3.12.3, which now works. (However, it still doesn't fully explain why meld in a new account would work.)
How can I make text in meld readable?
1,286,899,912,000
When I boot, PulseAudio defaults to sending output to Headphones. I'd like it to default to sending output to Line Out. How do I do that? I can manually change where the output is current sent as follows: launch the Pulseaudio Volume Control application, go to the Output Devices tab, and next to Port, select the Line Out option instead of Headphones. However, I have to do this after each time I boot the machine -- after a reboot, Pulseaudio resets itself back to Headphones. That's a bit annoying. How do I make my selection stick and persist across reboots? Here's a screenshot of how the Volume Control application looks after a reboot, with Headphones selected: If I click on the chooser next to Port, I get the following two options: Selecting Line Out makes sound work. (Notice that both Headphones and Line Out are marked as "unplugged", but actually I do have something plugged into the Line Out port.) Comments: I'm not looking for a way to change the default output device. I have only one sound card. pacmd list-sinks shows only one sink. Therefore, pacmd set-default-sink is not helpful. (This doesn't help either.) Here what I need to set is the "Port", not the output device. If it's relevant, I'm using Fedora 20 and pulseaudio-5.0-25.fc21.x86_64.
I had the same problem (for at least a year now), and the following seemed to work: Taken from: https://bbs.archlinux.org/viewtopic.php?id=164868 Use pavucontrol to change the port to your desired one. Then find the internal name of the port with this command: $ pacmd list | grep "active port" active port: <hdmi-output-0> active port: <analog-output-lineout> active port: <analog-input-linein> Using this information about the internal name of the port, we can change it with the command: pacmd set-sink-port 0 analog-output-lineout If you (or someone else with the problem) has multiple cards, try changing the 0 to a 1. If this works, you can put: set-sink-port 0 analog-output-lineout in your /etc/pulse/default.pa file to have it across reboots.
Change default port for PulseAudio (line out, not headphones)
1,286,899,912,000
I get that I can use mount to set up directories and that I can use /etc/fstab to remount them on reboot. Testing the fstab file is also fun with mount -faV. When I'm looking at the fstab file, the number of spaces is disconcerting. I would have expected one space (like a separator between command parameters) or four spaces (like a tab). I'm seeing seven spaces at a time, almost as a convention. My question is: What are all the spaces in the /etc/fstab for? (Perhaps also - Will it matter if I get the wrong number?)
The number of spaces is a way to cosmetically separate the columns/fields. It has no meaning other than that. I.e. no, the amount of white space between columns does not matter. The space between columns is comprised of white space (including tabs), and the columns themselves, e.g. comma-separated options, mustn't contain unquoted white space. From the fstab(5) man page: [...] fields on each line are separated by tabs or spaces. and If the name of the mount point contains spaces these can be escaped as `\040'. Example With the following lines alignment using solely a single tab becomes hard to achieve. In the end the fstab without white space looks messier than what you consider disconcerting now:

/dev/md3 /data/vm btrfs defaults 0 0
/var/spool/cron/crontabs /etc/crontabs bind defaults,bind
//bkpsrv/backup /mnt/backup-server cifs iocharset=utf8,rw,credentials=/etc/credentials.txt,file_mode=0660,dir_mode=0770,_netdev

Can you still see the "columns"?
What are all the spaces in the /etc/fstab for?
1,286,899,912,000
When I turned on my Ubuntu 18.04 yesterday and wanted to start GitKraken, it did not work. After I click its icon I can see the process trying to start in the upper left corner (next to "Activities"), but after a few seconds the process seems to die and nothing happens. Trying to launch GitKraken from the console fails too, with the following two messages:

/snap/gitkraken/58/bin/desktop-launch: line 23: $HOME/.config/user-dirs.dirs: Permission denied
ln: failed to create symbolic link '$HOME/snap/gitkraken/58/.config/gtk-2.0/gtkfilechooser.ini': File exists

Unfortunately, my Linux skills are too limited to solve this. The only thing I've tried is chmod 777 $HOME/.config/user-dirs.dirs because of the "Permission denied", but that did not help. EDIT: as terdon suggested in his comment, I ran ls -ld ~/.config/user-dirs.dirs and this is its output:

-rwxrwxrwx 1 myusername myusername 633 Mai 6 10:30 /home/myusername/.config/user-dirs.dirs

Then I ran the mv ~/snap/gitkraken/58/.config/gtk-2.0/gtkfilechooser.ini gtkfilechooser.ini.bak command and tried to start GitKraken afterwards. It did not start, showing again:

/snap/gitkraken/58/bin/desktop-launch: line 23: /home/myusername/.config/user-dirs.dirs: Permission denied

The ln: failed to create symbolic link ... error from my initial post did not appear. Executing ll in the directory ~/snap/gitkraken/58/.config/gtk-2.0 gives me the following output:

drwxrwxr-x 2 myusername myusername 4096 Jun 3 16:44 ./
drwxrwxr-x 8 myusername myusername 4096 Mai 21 12:28 ../
lrwxrwxrwx 1 myusername myusername 47 Jun 3 15:45 gtkfilechooser.ini -> /home/myusername/.config/gtk-2.0/gtkfilechooser.ini
-rw-r--r-- 1 myusername myusername 198 Jun 3 16:44 gtkfilechooser.ini.bak

gtkfilechooser.ini -> /home/myusername/.config/gtk-2.0/gtkfilechooser.ini is red since the file does not exist anymore. Executing the chmod command afterwards did not change anything. GitKraken does not start and outputs the same errors.
SOLVED: Had to install libgnome-keyring: sudo apt install libgnome-keyring0 The UI now comes up and works for me. Still get the following warnings, but it's working: Gtk-Message: 11:19:31.343: Failed to load module "overlay-scrollbar" Gtk-Message: 11:19:31.349: Failed to load module "canberra-gtk-module" Node started time: 1528391971495 state: update-not-available EVENT: Main process loaded at 441 ms state: checking-for-update state: update-not-available state: checking-for-update state: update-not-available EVENT: Starting initial render of foreground window at 5331 ms EVENT: Startup triggers started at 5446 ms
GitKraken does not start anymore on Ubuntu 18.04
1,286,899,912,000
When I view a message in the pager mutt displays the time in the Date header in UTC rather than my local time zone. The index view displays the local time correctly. I found this old mailing list post that describes how to get the local time to display in the status bar at the bottom of the screen, but this still doesn't "fix" the time in the Date header at the top of the screen. Is there any way to get the pager to convert the Date header time to local time?
In your .muttrc add the following line: set display_filter="exec sed -r \"s/^Date:\\s*(([F-Wa-u]{3},\\s*)?[[:digit:]]{1,2}\\s+[A-Sa-y]{3}\\s+[[:digit:]]{4}\\s+[[:digit:]]{1,2}:[[:digit:]]{1,2}(:[[:digit:]]{1,2})?\\s+[+-][[:digit:]]{4})/date +'Date: %a, %d %b %Y %H:%M:%S %z' -d '\\1'/e\"" This will change the Date: header in the message (for display only) to your local timezone if the header contained a valid RFC formatted date. If the provided date format was incorrect (we are dealing with untrusted user input after all) it will be preserved. To combat a possible attempt to inject the shell code through the header the sed pattern implements a whitelist based on RFC 5322 (this RFC defines the format of the Date: field). Note that mutt limits the command line to be no more than 255 character long, hence I optimised the original sed command that had stricter whitelist to fit into 255 bytes. If you plan to do other things with the message, then the full sed command you can put in a script is: sed -r "s/^Date:\s*(((Mon|Tue|Wed|Thu|Fri|Sat|Sun),\s*)?[[:digit:]]{1,2}\s+(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s+[[:digit:]]{4}\s+[[:digit:]]{1,2}:[[:digit:]]{1,2}(:[[:digit:]]{1,2})?\s+[+-][[:digit:]]{4})/date +'Date: %a, %d %b %Y %H:%M:%S %z' -d '\1'/e"
How do I configure mutt to display the date header in my local time zone in the pager?
1,286,899,912,000
I currently run Angstrom Linux 2.6.32. I intend to upgrade the Linux kernel from 2.6.32 to 3.0.7. For this reason, I had to configure kernel 3.0.7 by running make menuconfig. Now I want to compare the new kernel configuration with the previous one, but I can't find the kernel 3.0.7 configuration file. Any ideas?
Your new one is .config at the top level of your kernel source tree. It may also get installed to /boot/config-3.0.7 or similar, depending.
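To compare the old and new configurations, something along these lines should work (paths are examples; /proc/config.gz only exists if the old kernel was built with CONFIG_IKCONFIG_PROC, and scripts/diffconfig may not be present in every tree):

```shell
# grab the old (running 2.6.32) configuration
zcat /proc/config.gz > /tmp/old.config        # if the running kernel exposes it
# or: cp /boot/config-2.6.32 /tmp/old.config  # where many distros install it

# the new configuration is .config at the top of the 3.0.7 source tree
cd /path/to/linux-3.0.7
diff /tmp/old.config .config | less

# if your tree ships it, this prints only the options that changed
scripts/diffconfig /tmp/old.config .config
```
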
Where kernel configuration file is stored?
1,286,899,912,000
I'm running a small server for our flat share. It's mostly a file server with some additional services. The clients are Linux machines (mostly Ubuntu, but some other distros too) and some Mac(-Book)s in between (but they're not important for the question). The server is running Ubuntu 11.10 (Oneiric Ocelot) 'Server Edition'; the system from which I do my setup and testing runs the 11.10 'Desktop Edition'. We were running our shares with Samba (which we are more familiar with) for quite some time but then migrated to NFS (because we don't have any Windows users in the LAN and wanted to try it out) and so far everything works fine. Now I want to set up auto-mounting with autofs to smooth things out (up to now everyone mounts the shares manually when needed). The auto-mounting seems to work too. The problem is that our "server" doesn't run 24/7, to save energy (if someone needs stuff from the server s/he powers it on and shuts it down afterwards, so it only runs a couple of hours each day). But since the autofs setup, the clients hang up quite often when the server isn't running. I can start all clients just fine, even when the server isn't running. But when I want to display a directory (in terminal or nautilus) that contains symbolic links to a share under /nfs while the server isn't running, it hangs for at least two minutes (because autofs can't connect to the server but keeps trying, I assume). Is there a way to avoid that? So that the mounting would be delayed until I change into the directory or until content of that directory is accessed? Not when "looking" at a link to a share under /nfs? I think not, but maybe it is possible not to try to access it for so long? And just give me an empty directory or a "can't find / connect to that dir" or something like that. When the server is running, everything works fine.
But when the server gets shut down before a share got unmounted, tools (like df or ll) hang (presumably because they think the share is still mounted, but the server won't respond anymore). Is there a way to unmount shares automatically when the connection gets lost? Also the clients won't shut down or restart when the server is down and they still have shares mounted. They hang (infinitely, as it seems) in "killing remaining processes" and nothing seems to happen. I think it all comes down to neat timeout values for mounting and unmounting. And maybe to removing all shares when the connection to the server gets lost. So my question is: How to handle this? And as a bonus: is there a good way to link inside /nfs without the need to mount the real shares (an autofs option or maybe using a pseudo FS for /nfs which gets replaced when the mount happens or something like that)? My Setup The NFS setting is pretty basic but has served us well so far (using NFSv4):

/etc/default/nfs-common
NEED_STATD=
STATDOPTS=
NEED_IDMAPD=YES
NEED_GSSD=

/etc/idmapd.conf
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup

/etc/exports
/srv/ 192.168.0.0/24(rw,no_root_squash,no_subtree_check,crossmnt,fsid=0)

Under the export root /srv we've got two directories with bind:

/etc/fstab (Server)
...
/shared/shared/ /srv/shared/ none bind 0 0
/home/Upload/ /srv/upload/ none bind 0 0

The 1st one is mostly read only (but I enforce that through file attributes and ownership instead of NFS settings) and the 2nd is rw for all. Note: They have no extra entries in /etc/exports; mounting them separately works though. On the client side they get set up in /etc/fstab and mounted manually as needed (morton is the name of the server and it resolves fine).
/etc/fstab (Client)
morton:/shared /nfs/shared nfs4 noauto,users,noatime,soft,intr,rsize=8192,wsize=8192 0 0
morton:/upload /nfs/upload nfs4 noauto,users,noatime,soft,intr,rsize=8192,wsize=8192 0 0

For the autofs setup I removed the entries from /etc/fstab on the clients and set the rest up like this:

/etc/auto.master
/nfs /etc/auto.nfs

First I tried the supplied executable /etc/auto.net (you can take a look at it here) but it won't automatically mount anything for me. Then I wrote an /etc/auto.nfs based on some HowTos I found online:

/etc/auto.nfs
shared -fstype=nfs4 morton:/shared
upload -fstype=nfs4 morton:/upload

And it kinda works... Or would work if the server were running 24/7. So we get the hangups when a client boots without the server running or when the server goes down while shares were still connected.
Using any mount system, you want to avoid situations where Nautilus lists the directory containing a mount that may or not be mounted. So, with autofs, don't create mounts in, for instance, /nfs. If you do, when you use Nautilus to list the 'File System' it will try to create whatever mounts should exist in /nfs, and if those mount attempts fail it takes minutes to give up. So what I did was change auto.master to create the mounts in /nfs/mnt. This fixed the problem for me. I only get a long delay if I try to list the contents of /nfs/mnt, which I can easily avoid.
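Concretely, that change is just a matter of moving the autofs mount point one level down in auto.master. A sketch based on the question's files (/nfs/mnt is an arbitrary example, and the timeout/soft/intr options are optional additions):

```
# /etc/auto.master
/nfs/mnt /etc/auto.nfs --timeout=30

# /etc/auto.nfs
shared -fstype=nfs4,soft,intr morton:/shared
upload -fstype=nfs4,soft,intr morton:/upload
```

--timeout makes autofs unmount idle shares after 30 seconds, and soft makes NFS operations return an error instead of blocking forever when the server disappears, which may also help with the shutdown hangs described in the question.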
automount nfs: autofs timeout settings for unreliable servers - how to avoid hangup?
1,283,688,724,000
I have a few different linux machines and a lot of config files (and folders) on each. For example: ~/.ssh/config ~/.config/openbox/rc.xml ~/.config/openbox/autostart.sh ~/.scripts/ ( folder ) ~/.bashrc ...etc Is there a simple and elegant method to keep these files synced between my machines ( one has no internet access )? Also, some files will need a more advanced syncing process, as they will have to differ slightly... for example: My desktop keyboard has a range of hotkeys, where my laptop has almost none. I use XF86Mail to open thunderbird on my desktop, but Meta+M on my laptop. My Home Desktop and Work Desktop are both more "multiple user" orientated, where my Laptop is just for me. So on my laptop, I tend to keep the 'rc.xml' file for openbox at /etc/xdg/openbox/rc.xml but on the desktops at ~/.config/openbox/rc.xml
Keep the files under version control. This has multiple benefits, including facilitating keeping files synchronized (commit on one machine, update on the others) and keeping a history of changes (so you can easily find out what broke a program that worked last month). I use CVS and synchronize the repositories with Unison or sneakernet, but that's because I've been doing this since a time before widely-available distributed version control. Anyone starting now should use a proper distributed version control tool, such as bazaar, darcs, git, mercurial, ... Managing files that need to differ between machines is always a bit of a pain. If the configuration language allows conditionals, use them. Otherwise, if there is an include mechanism, use it to split the configuration file into a machine-dependent part and a shared part. Keep all the machine-dependent parts in a separate directory (something like ~/.local/NAME/) which is always referred to through a symbolic link (~/.here -> local/NAME on each machine). I have a few files that are generated by a script in the shared part from parameters kept in the machine-specific part; this precludes modifying these files indirectly through a GUI configuration interface. Avoid configuring things in /etc, it's harder to synchronize between machines.
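As an illustration with git (one of several possible workflows; the repo path and function name are arbitrary): a bare repository whose work tree is $HOME lets you version dotfiles in place, without a visible .git directory in your home. The demo below runs in a throwaway HOME so it's safe to try; drop the first two lines for real use.

```shell
# demo in a throwaway HOME; drop these two lines for real use
export HOME=$(mktemp -d)
touch "$HOME/.bashrc"

# one-time setup: bare repo, work tree = $HOME
git init --bare "$HOME/.dotfiles"
dots() { git --git-dir="$HOME/.dotfiles" --work-tree="$HOME" "$@"; }
dots config status.showUntrackedFiles no   # don't list every file in $HOME
dots config user.email "you@example.com"   # identity for this repo only
dots config user.name  "Your Name"

# track config files explicitly and commit
dots add "$HOME/.bashrc"
dots commit -m "Track bashrc"
```

On another machine you would clone the repository with git clone --bare and then run dots checkout to populate $HOME.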
Keeping config files synced across multiple pc's
1,283,688,724,000
What are the differences in dependencies between select and depends on in the kernel's Kconfig files?

config FB_CIRRUS
	tristate "Cirrus Logic support"
	depends on FB && (ZORRO || PCI)
	select FB_CFB_FILLRECT
	select FB_CFB_COPYAREA
	select FB_CFB_IMAGEBLIT
	---help---
	  This enables support for Cirrus Logic GD542x/543x based boards on Amiga: SD64, Piccolo, Picasso II/II+, Picasso IV, or EGS Spectrum.

In the example above, how is FB_CIRRUS differently related to FB && (ZORRO || PCI) than it is to FB_CFB_FILLRECT, FB_CFB_COPYAREA and FB_CFB_IMAGEBLIT? Update I've noticed that depends on doesn't really do much in terms of compilation order. For example: a successful build of AppB depends on a statically linked LibB being built first. Setting depends on LibB in the Kconfig for AppB will not force LibB to be built first. Setting select LibB will.
depends on A indicates the symbol(s) A must already be positively selected (=y) in order for this option to be configured. For example, depends on FB && (ZORRO || PCI) means FB must have been selected, and (&&) either ZORRO or (||) PCI. For things like make menuconfig, this determines whether or not an option will be presented. select positively sets a symbol. For example, select FB_CFB_FILLRECT will mean FB_CFB_FILLRECT=y. This fulfills a potential dependency of some other config option(s). Note that the kernel docs discourage the use of this for "visible" symbols (which can be selected/deselected by the user) or for symbols that themselves have dependencies, since those will not be checked. Reference: https://www.kernel.org/doc/Documentation/kbuild/kconfig-language.txt
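A minimal illustration with made-up symbols (not from the real kernel tree):

```
config FOO
        tristate "Foo driver"
        depends on BAR      # FOO is only offered if BAR is already enabled
        select BAZ          # enabling FOO forces BAZ on, ignoring BAZ's deps

config BAZ
        bool                # no prompt: users can't toggle it, only select can
```

Here BAZ follows the recommended pattern for select targets: an invisible helper symbol with no dependencies of its own.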
What is the difference between "select" vs "depends" in the Linux kernel Kconfig?
1,283,688,724,000
I'm trying to set up automatic SSH hopping through a server which doesn't have nc. This works from the command line: ssh -A gateway ssh steve@target (I have added my public key to the SSH agent). However, adding it to ~/.ssh/config doesn't: Host target User steveb ProxyCommand ssh -A gateway ssh steve@targetip $ ssh target Pseudo-terminal will not be allocated because stdin is not a terminal. ^CKilled by signal 2. Attempting to force the issue with -t is amusing but unhelpful. ProxyCommand ssh -A -t gateway ssh steve@targetip $ ssh target Pseudo-terminal will not be allocated because stdin is not a terminal. Pseudo-terminal will not be allocated because stdin is not a terminal. ^CKilled by signal 2. More -t's? No good. ProxyCommand ssh -A -t -t gateway ssh steve@targetip $ ssh target tcgetattr: Inappropriate ioctl for device ^CKilled by signal 2. Is this possible? Most tutorials (eg http://www.arrfab.net/blog/?p=246 ) suggest using nc.
SSH ProxyCommand without netcat The ProxyCommand is very useful when hosts are only indirectly accessible. With netcat it is relatively straightforward:

ProxyCommand ssh {gw} netcat -w 1 {host} 22

Here {gw} and {host} are placeholders for the gateway and the host. But it is also possible when netcat is not installed on the gateway:

ProxyCommand ssh {gw} 'exec 3<>/dev/tcp/{host}/22; cat <&3 & cat >&3;kill $!'

The /dev/tcp is a built-in feature of standard bash. The files don't exist. To check whether bash has this feature built in, run:

cat < /dev/tcp/google.com/80

...on the gateway. To make sure that bash is used, use:

ProxyCommand ssh {gw} "/bin/bash -c 'exec 3<>/dev/tcp/{host}/22; cat <&3 & cat >&3;kill $!'"

And it even works together with ControlMaster. (Updated on Oct 22 to include kill to clean up background cat) (Updated on Mar 3 2011 to make placeholders more clear and explain /dev/tcp) 100% credit to roland schulz. Here's the source: http://www.rschulz.eu/2008/09/ssh-proxycommand-without-netcat.html see more useful info in the comments there. There is also more here: http://www.linuxjournal.com/content/tech-tip-tcpip-access-using-bash http://securityreliks.securegossip.com/2010/08/enabling-devtcp-on-backtrack-4r1ubuntu/ UPDATE: here's something new from Marco In reference to a ProxyCommand in ~/.ssh/config where one has a line like this:

ProxyCommand ssh gateway nc localhost %p

Marco says: You don't need netcat if you use a recent version of OpenSSH. You can replace nc localhost %p with -W localhost:%p. The result would look like this:

ProxyCommand ssh gateway -W localhost:%p
Pseudo-terminal will not be allocated because stdin is not a terminal
1,283,688,724,000
Subtitle files come in a variety of formats, from .srt to .sub to .ass and so on and so forth. Is there a way to tell mpv to search for subtitle files along with the media files and, if it finds any, to start playing the file with them automatically? Currently I have to do something like this, which can be pretty long depending on the filename -

[$] mpv --list-options | grep sub-file
(null) requires an argument
--sub-file String list (default: ) [file]

Look forward to answers. Update 1 - A typical movie which has .srt (SubRip) subtitles:

[$] mpv Winter.Sleep.\(Kis.Uykusu\).2014.720p.BrRip.2CH.x265.HEVC.Megablast.mkv
(null) requires an argument
Playing: Winter.Sleep.(Kis.Uykusu).2014.720p.BrRip.2CH.x265.HEVC.Megablast.mkv
(+) Video --vid=1 (*) (hevc)
(+) Audio --aid=1 (aac)
(+) Subs --sid=1 'Winter.Sleep.(Kis.Uykusu).2014.720p.BrRip.2CH.x265.HEVC.Megablast.srt' (subrip) (external)
[vo/opengl] Could not create EGL context!
[sub] Using subtitle charset: UTF-8-BROKEN
AO: [alsa] 48000Hz stereo 2ch float
VO: [opengl] 1280x536 yuv420p
AV: 00:02:14 / 03:16:45 (1%) A-V: 0.000

The most interesting line is this:

(+) Subs --sid=1 'Winter.Sleep.(Kis.Uykusu).2014.720p.BrRip.2CH.x265.HEVC.Megablast.srt' (subrip) (external)

Now if the file were a .ass or .sub with the same filename, it wouldn't work. I have tried it with many media files which have those extensions, and each time mpv loads the video and audio and the protocols but not the external subtitle files. Update 2 - The .ass script part is listed as a bug at mpv's bts - https://github.com/mpv-player/mpv/issues/2846 Update 3 - Have been trying to debug with help of upstream, filed https://github.com/mpv-player/mpv/issues/3091 for that. It seems though that it's not mpv which is responsible but ffmpeg (and libavformat) which is supposed to decode the subtitles. Hence have added ffmpeg to it too.
As seen in man mpv: --sub-auto=<no|exact|fuzzy|all>, --no-sub-auto Load additional subtitle files matching the video filename. The parameter specifies how external subtitle files are matched. exact is enabled by default. no Don't automatically load external subtitle files. exact Load the media filename with subtitle file extension (default). fuzzy Load all subs containing media filename. all Load all subs in the current and --sub-paths directories. exact would seem like the appropriate choice, but since it's the default and it doesn't load files like [video name minus extension].srt, fuzzy is the next best bet and it works on my system. So just echo "sub-auto=fuzzy" >> ~/.config/mpv/mpv.conf.
Play subtitles automatically with mpv
1,283,688,724,000
So, I have a bunch of machines I manage, where I've aliased each of them for ease of access. This looks like this for each of them in the ssh client config: Host MACHINE-1075 M1075 m1075 1075 User service HostName 10.0.100.75 And also: Match User service IdentityFile ~/.ssh/service_user Which allows me to simply type ssh 1075 to get into that machine, with the correct identity file and user automatically. This works just fine for normal accesses. Sometimes, however, I may need to log in as root for certain tasks. I can accomplish this by explicitly specifying the identity file, e.g. ssh root@1075 -i ~/.ssh/root_user. This is okay, but what I'd really like to do is to configure SSH to figure out the required identity file from the combination of user and host, allowing me to type ssh root@1075 and do the right thing. I know I can match all uses of the root user and link it up to an identity file with: Match User root IdentityFile ~/.ssh/root_user This doesn't work for my case, however, because there are several groups of machines which may require different credentials for root access, so not all of them should match. Ideally, what I'd like to do is something like: Match Host 10.0.100.75 && Match User root IdentityFile ~/.ssh/root_user But this doesn't seem to work. As a temporary solution, I've simply aliased the machines with root- as a prefix, so I can do ssh root-1075, which isn't too bad, but it's not quite what I want. This is on Ubuntu 21.10 running OpenSSH 8.4.
It's IMHO not entirely clear in man ssh_config, however the syntax for matching multiple conditions appears to be Match keyword [pattern|pattern-list] keyword [pattern|pattern-list] where patterns in pattern-list are comma-separated, but keyword pattern pairs are separated from one another by simple whitespace: Match Host 10.0.100.75 User root No explicit logical operator like && is supported because all criteria must be satisfied - logical AND is understood.
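Applied to the question's setup, the stanza would look something like this (keep it above any broader Host block, since for most options ssh uses the first value it finds):

```
# ~/.ssh/config
Match Host 10.0.100.75 User root
    IdentityFile ~/.ssh/root_user
    IdentitiesOnly yes
```

IdentitiesOnly is optional; it stops ssh from offering unrelated agent keys before the one named here.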
Matching both user and host simultaneously in SSH config
1,283,688,724,000
I have been using w3m for a couple of weeks and am convinced that it is my preferred text browser - with one exception. Is there any way to yank URLs to the clipboard without using the mouse? I have looked through the manual and, using the default keybindings, there doesn't appear to be any documented way to do this. Has anyone developed a script to work around this?
Set the “External Browser” option to sh -c 'printf %s "$0" | xsel' You can use xsel -b to use the clipboard instead of the primary selection. An alternative to xsel is xclip (xclip for the primary selection, xclip -selection CLIPBOARD for the clipboard). In ~/.w3m/config, that's the extbrowser setting. Then press M to copy (yank) the URL of the current page, or ESC M to copy the URL of the link under the cursor. You can use the second or third external browser for that instead; then the key combination is 2 M or 3 M or 2 ESC M or 3 ESC M.
Yanking URLs in w3m
1,283,688,724,000
I was using the mouse copy-paste extensively, until recently, when some OpenSuSE upgrade reconfigured this on all my machines. Now the scroll button is the one that pastes (which I hate, since it's hard to click without scrolling, and I also click it sometimes accidentally). Where is this configured? Ideally I would love something that I can add to session start (for both Gnome and KDE).
It is configured in /etc/X11/xorg.conf. You'll see a section that looks like Section "InputDevice" Identifier "Configured Mouse" Driver "mouse" Option "CorePointer" Option "Device" "/dev/input/mice" Option "Protocol" "ImPS/2" Option "Emulate3Buttons" "true" EndSection Here is a random vaguely relevant link from SU. https://superuser.com/questions/258649/multi-button-mouse-on-x11-how-can-i-configure-several-buttons-to-act-as-the-midd
Configuring mouse for right+left button simulating middle click (for copy/paste)
1,283,688,724,000
I would like to set up the default sound volume once and for all, for all ALSA devices that will ever be connected. Of course, I could do amixer ... or even alsamixer to modify the volume of currently available soundcards. But I really want to modify the default volume even for future soundcards that will be added later. In which config file should I set this default sound volume? I've seen /var/lib/alsa/asound.state but the content is specific to currently connected soundcards. What I want is a solution that will apply to any soundcard that will be connected. Context: why do I want this? I'm providing a ready-to-use Debian image for my project SamplerBox. User #1 might use the computer's built-in soundcard, User #2 might have a USB DAC, User #3 might have another soundcard... I would like to provide a default -3dB volume that will work for any ALSA soundcard people could have... Note: I reinstalled a fresh new system and it seems that, by default, the volume is -20dB for all devices :
There are some generic and driver-specific config files in /usr/share/alsa/init/, where you can specify settings like ENV{ppercent}:="75%" and ENV{pvolume}:="-20dB" (pvolume = playback volume, cvolume = capture volume, etc.). /usr/share/alsa/init/default should already contain those settings, so you can use it as an example. You can force ALSA to re-initialize all devices with alsactl init and can also override the default configuration files for that with alsactl -i /usr/share/alsa/init/foo init. For some reason, ALSA seems to ignore the ppercent and pvolume settings on my system, but from your comments it seems like they worked for you. If anyone can enlighten me on why the configuration might be ignored, I'd be glad to amend this answer.
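For example, to get the -3dB default from the question, one approach is a sketch like the following (it assumes the stock default file contains a pvolume line, and note that a distro update may overwrite this file):

```shell
# set the default playback volume used when a card is initialized
sudo sed -i 's/ENV{pvolume}:=".*"/ENV{pvolume}:="-3dB"/' /usr/share/alsa/init/default

# re-run initialization so currently connected cards pick it up too
sudo alsactl init
```
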
Default sound volume for all ALSA devices
1,283,688,724,000
I don't have a desktop manager installed (and I don't want to). After logging in through the terminal I use startx to start the GUI. I have entries in ~/.xinitrc for my GUI sessions. Right now I have xmonad in there, but sometimes I want to run a GNOME session, and sometimes a KDE session. I used to edit ~/.xinitrc for that purpose, but I think there should be a more elegant way (something like using alternate configurations). However, I can't find anything in man startx or man xinit. I plan to have several configuration files (one for each GUI session), and then tell startx to load them when I want. How can I do that?
According to the xinit man page that I read, xinit (and thereby startx) looks in its command line parameters for a client program to run. If it doesn't find one, it runs ~/.xinitrc instead. So you should be able to write startx path/to/my_alternate_xinitrc and it will do what you want. You will need to provide a path, though, and not just a filename. In my testing, startx ./my_xinitrc worked but startx my_xinitrc did not.
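Building on that, you could keep one xinitrc per session and pick between them with a tiny wrapper. A minimal sketch — the function name and the ~/.xinitrc-&lt;session&gt; naming scheme are my own convention, not anything startx mandates:

```shell
# Hypothetical helper for ~/.bashrc or ~/.zshrc: "xsession gnome",
# "xsession kde", etc. pick the matching per-session xinitrc file.
xsession() {
    local rc="$HOME/.xinitrc-${1:-xmonad}"   # default session if none given
    startx "$rc"                             # xinit needs a path, not a bare name
}
```

With files like ~/.xinitrc-gnome containing exec gnome-session, running xsession gnome would start a GNOME session.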
How to make startx use alternate xinitrc?
1,283,688,724,000
I run Gnome, which has pretty good support for my HiDPI screen. However, when I run QT apps I can't seem to find a way to scale the fonts. Is there a way to do this without installing a full version of KDE?
Updated: Since Qt 5.6, Qt 5 applications can be instructed to honor screen DPI by setting the QT_AUTO_SCREEN_SCALE_FACTOR environment variable. If automatic detection of DPI does not produce the desired effect, scaling can be set manually per-screen (QT_SCREEN_SCALE_FACTORS) or globally (QT_SCALE_FACTOR). You can also use QT_FONT_DPI to adjust the scaling of text.

Original: You can try this recipe from the Arch wiki: Qt 5 applications can often be run at higher DPI by setting the QT_DEVICE_PIXEL_RATIO environment variable. Note that the variable has to be set to a whole integer, so setting it to 1.5 will not work. This can for instance be enabled by creating a file /etc/profile.d/qt-hidpi.sh:

export QT_DEVICE_PIXEL_RATIO=2

and setting the executable bit on it.
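For Qt ≥ 5.6, the analogous profile snippet would use the newer variables. A sketch — the file path mirrors the /etc/profile.d convention already mentioned and is just one place it could live:

```shell
# /etc/profile.d/qt-hidpi.sh — let Qt 5.6+ apps detect screen DPI themselves
export QT_AUTO_SCREEN_SCALE_FACTOR=1
# If auto-detection misjudges, comment the line above and use one of these instead:
# export QT_SCALE_FACTOR=1.5          # global scaling, fractional values allowed
# export QT_SCREEN_SCALE_FACTORS=2    # per-screen scaling
```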
How can I set the default font size for all Qt5 apps?
1,283,688,724,000
I have a system without X display and I want to use nmcli to configure my cell modem to connect to a certain apn. I can get it going with this modem just fine on Ubuntu (with X) and I would like to achieve the same now on the command line. How can I setup the connection? so far I get this: # nmcli dev status ** (process:2379): WARNING **: Could not initialize NMClient /org/freedesktop/NetworkManager: Permissions request failed: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any .service files DEVICE TYPE STATE ttyUSB1 gsm disconnected eth0 802-3-ethernet connected
A bit late to the party, but I was stuck at the same hurdle. Since I have worked it out, I thought I'd share my findings, as every other post on the topic is about as clear as mud.

Although NetworkManager can see the device, it doesn't know of any connections that are supported by the device. Unlike WiFi, we can't just do a scan to make a list of available connections. We need to add one ourselves.

Before creating the connection, ensure NetworkManager does not prevent the device from being managed. This happens by default on Ubuntu Server to prevent NetworkManager from taking over an existing legitimate legacy connection (see explanation from Ubuntu developer here). You can verify that the device is unmanaged when nmcli device shows unmanaged status for your device, the opposite being disconnected — in that case, skip to the next paragraph. To make NetworkManager on Ubuntu Server handle the connection, copy the file /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf to /etc/NetworkManager/conf.d/10-globally-managed-devices.conf, then edit it: modify the line beginning with unmanaged-devices by adding the gsm type to the list of exceptions:

unmanaged-devices=*,except:type:wifi,except:type:wwan,except:type:gsm

Don't forget to check for updates to the original /usr/lib file when upgrading NetworkManager.

Creating a connection

To start off with, we create a new connection, named as you wish, with the (appropriately named /s) edit command, e.g.:

sudo nmcli connection edit type gsm con-name "My GPRS Connection"

Use sudo if you don't want to be disappointed when you try to save the connection. Of course, if you aren't using GSM, you can change the type parameter to a different protocol supported by NetworkManager. Now you will enter edit mode. Most of the settings you need are automatically filled in for you.
You can see all the current settings with the print command: nmcli> print =============================================================================== Connection profile details (My GPRS Connection) =============================================================================== connection.id: My GPRS Connection connection.uuid: 27b012ca-453f-482f-bc0e-c81bbab07310 connection.interface-name: -- connection.type: gsm connection.autoconnect: yes connection.timestamp: 0 connection.read-only: no connection.permissions: connection.zone: -- connection.master: -- connection.slave-type: -- connection.secondaries: connection.gateway-ping-timeout: 0 ------------------------------------------------------------------------------- ipv4.method: auto ipv4.dns: ipv4.dns-search: ipv4.addresses: ipv4.routes: ipv4.ignore-auto-routes: no ipv4.ignore-auto-dns: no ipv4.dhcp-client-id: -- ipv4.dhcp-send-hostname: yes ipv4.dhcp-hostname: -- ipv4.never-default: no ipv4.may-fail: yes ------------------------------------------------------------------------------- ipv6.method: auto ipv6.dns: ipv6.dns-search: ipv6.addresses: ipv6.routes: ipv6.ignore-auto-routes: no ipv6.ignore-auto-dns: no ipv6.never-default: no ipv6.may-fail: yes ipv6.ip6-privacy: -1 (unknown) ipv6.dhcp-hostname: -- ------------------------------------------------------------------------------- gsm.number: *99# gsm.username: -- gsm.password: -- gsm.password-flags: 0 (none) gsm.apn: -- gsm.network-id: -- gsm.network-type: -1 gsm.allowed-bands: 1 (any) gsm.pin: -- gsm.pin-flags: 0 (none) gsm.home-only: no ------------------------------------------------------------------------------- Type help to see a full list of commands. The only thing you are likely to need to edit is the APN of your network. This can be set with set gsm.apn <APN> where APN would be something like epc.t-mobile.com, wholesale or vzwinternet for verizon. You can also restrict the connection to a particular interface. 
This is not recommended especially for serial-based connections where the device name can change readily. If you wanted to though, you could do set connection.interface-name ttyS4 for example. Provided you're running as root, you'll now be able to save your connection nmcli> save That's it. If you need to go back to edit the connection, use nmcli c edit "My GPRS Connection", or directly edit the config file. On Debian-based systems you'll find it in /etc/NetworkManager/system-connections/, on Redhat it'll be in /etc/sysconfig/network-scripts/. These files seem to be transferable from system to system - the UUID is basically random. Connecting to our new connection Now you should be able to connect with sudo nmcli device connect <interface name> If all goes well, NetworkManager will select "My GPRS Connection" automatically. If not, sudo nmcli connection up "My GPRS Connection" ifname <interface name> This is where it falls down for me right now. It times out during the connection but I think I'm out of signal range. Hopefully it works better for you. Please comment if you need any more information.
configure gsm connection using nmcli
1,283,688,724,000
I'd like a tcsh'ism that I haven't been able to find: On a blank line with no content, I want to press the tab key and see the equivalent of an ls. That is to say I want $ <tab> to do something other then giving me a \t. I've found fantastic resources for command completion, but not for this base case. Any help on this would be great! Thanks.
# expand-or-complete-or-list-files
function expand-or-complete-or-list-files() {
    if [[ $#BUFFER == 0 ]]; then
        BUFFER="ls "
        CURSOR=3
        zle list-choices
        zle backward-kill-word
    else
        zle expand-or-complete
    fi
}
zle -N expand-or-complete-or-list-files

# bind to tab
bindkey '^I' expand-or-complete-or-list-files
zsh tab completion on empty line
1,283,688,724,000
I want to create a script that runs when a Zsh instance starts, but only if the instance is: Non-login. Interactive I think I'm right to say .zshrc runs for all interactive shell instances, .zprofile and .zlogin run for all login shells, and .zshenv runs in all cases. The reason I want to do this is to check if there is an existing ssh-agent running, and make use of it in the newly opened shell if there is. I imagine any tests carried out would be best placed in .zshrc (as this guarantees an interactive shell) and the designated "non-login event" script called from there. I probably first want to check if the new shell is already running as part of an existing remote SSH session before testing for the ssh-agent, but I have found this SE recipe for this purpose. I pick Zsh as it is the shell I favor, but I imagine any correct technique to do this would apply similarly to other shells.
if [[ -o login ]]; then
    echo "I'm a login shell"
fi

if [[ -o interactive ]]; then
    echo "I'm interactive"
fi

[[ -o the-option ]] returns true if the-option is set. You can also get the values of options with the $options special associative array, or by running set -o. To check if there's an ssh-agent:

if [[ -w $SSH_AUTH_SOCK ]]; then
    echo "there's one"
fi

In ksh (and zsh):

case $- in (*i*) echo interactive; esac
case $- in (*l*) echo login; esac

In bash, it's a mess, you need:

case $- in *i*) echo interactive; esac   # that should work in any Bourne/POSIX shell
case :$BASHOPTS: in (*:login_shell:*) echo login; esac

And $SHELLOPTS contains some more options. Some options you can set with set -<x>, some with set -o option, some with shopt -s option.
How would I detect a non-login shell? (In Zsh)
1,283,688,724,000
A lot of Linux programs state that the config file(s) location is distribution dependent. I was wondering how the different distributions do this. Do they actually modify the source code? Is there build parameters that sets these locations? I have searched for this but cannot find any information. I know it's out there, I just can't seem to find it. What is the "Linux way" in regards to this?
It depends on the distribution and the original ('upstream') source. With most autoconf- and automake-using packages, it is possible to specify the directory where the configuration files will be looked for using the --sysconfdir parameter. Other build systems (e.g., CMake) have similar options. If the source package uses one of those build systems, then the packager can easily specify the right parameters, and no patches are required. Even if they don't (e.g., because the upstream source uses some home-grown build system), it's often still possible to specify some build configuration to move the config files to a particular location without having to patch the upstream source. It that isn't the case, then often the distribution will indeed have to add patches to the source to make it move files in what they consider to be the 'right' location. In most cases, distribution packagers will then write a patch which will allow the source to be configured in the above sense, so that they can send the patch to the upstream maintainers, and don't have to keep maintaining/updating it. This is the case for configuration file locations, but also for other things, like the bin/sbin executables (the interpretation of what is a system administrator's command differs between distributions), location where to write documentation, and so on. Side note: if you maintain some free software, please make it easy for packagers to talk to you. Otherwise we have to maintain such patches for no particularly good reason...
How do different distributions modify the locations of config files for programs?
1,283,688,724,000
How can I exclude files by default with rsync? Here is how my normal rsync syntax starts out: rsync --exclude ".ht*" --exclude "error_log" --exclude ".DS*" --exclude "old" ... I've seen a lot of mention of configuring the /etc/rsyncd.conf file, but maybe that's more for the daemon than the rsync command. Is it possible to have some default excludes for rsync when called from the command line like in my default syntax above?
Add your excludes to a file, then use --exclude-from=/path/to/exclude_file, e.g.:

# cat rsync.excludes
.ht*
error_log
.DS*
old
...

# rsync --exclude-from=rsync.excludes
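If you want those excludes applied by default on every invocation, one option is a shell wrapper in your ~/.bashrc. A sketch — the ~/.rsync-excludes path is just a convention, not anything rsync looks for itself:

```shell
# Hypothetical wrapper: every plain "rsync ..." picks up the excludes file.
# "command" bypasses the function lookup so the wrapper doesn't recurse.
rsync() {
    command rsync --exclude-from="$HOME/.rsync-excludes" "$@"
}
```

Call command rsync directly whenever you want a run without the default excludes.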
How can I exclude files by default with rsync?
1,283,688,724,000
I want to disable VSync (it's called "Sync to VBlank" in nvidia-settings) for my nvidia graphics card. But the configuration only takes effect if I start the nvidia-settings tool. After rebooting the system VSync is enabled again and I have to start the program again. I tried exporting the xorg.conf and putting it in /etc/X11/ but with no success. So my question is how can I make changes in the nvidia-settings tool persistent?
Looking into the readme indeed helps sometimes :) This behaviour is intentional, to give different users the chance to have their own settings. In short, the nvidia-settings config file is stored in ~/.nvidia-settings-rc and can be applied by running nvidia-settings --load-config-only at startup. For more details, here's the relevant part of the readme:

4) Loading Settings Automatically

The NVIDIA X driver does not preserve values set with nvidia-settings between runs of the X server (or even between logging in and logging out of X, with xdm, gdm, or kdm). This is intentional, because different users may have different preferences, thus these settings are stored on a per-user basis in a configuration file stored in the user's home directory.

The configuration file is named "~/.nvidia-settings-rc". You can specify a different configuration file name with the "--config" commandline option.

After you have run nvidia-settings once and have generated a configuration file, you can then run:

nvidia-settings --load-config-only

at any time in the future to upload these settings to the X server again. For example, you might place the above command in your ~/.xinitrc file so that your settings are applied automatically when you log in to X.

Your .xinitrc file, which controls what X applications should be started when you log into X (or startx), might look something like this:

nvidia-settings --load-config-only &
xterm &
evilwm

or:

nvidia-settings --load-config-only &
gnome-session

If you do not already have an ~/.xinitrc file, then chances are that xinit is using a system-wide xinitrc file. This system-wide file is typically here:

/etc/X11/xinit/xinitrc

To use it, but also have nvidia-settings upload your settings, you could create an ~/.xinitrc with the contents:

nvidia-settings --load-config-only &
. /etc/X11/xinit/xinitrc

System administrators may choose to place the nvidia-settings load command directly in the system xinitrc script.
Please see the xinit(1) manpage for further details of configuring your ~/.xinitrc file.
How to make changes in nvidia-settings tool persistent
1,283,688,724,000
Mozilla just released a new tool to check your website configuration. observatory.mozilla.org But the scan is complaining about Cookies (-10 points): Session cookie set without the Secure flag ... Unfortunately the service running behind my nginx can only set the secure header if the SSL terminates there directly and not when SSL terminates on the nginx. Thus the "Secure" flag is not set on the cookies. Is it possible to append the "secure" flag to the cookies somehow using nginx? Modifing the location/path seems to be possible. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path
I know two ways to sorta do this, neither of them great. The first is to just abuse proxy_cookie_path like this:

proxy_cookie_path / "/; secure";

The second is to use the more_set_headers directive from the Headers More module like this:

more_set_headers 'Set-Cookie: $sent_http_set_cookie; secure';

Both of these can introduce problems because they blindly add the items. For example, if the upstream sets the secure flag you will wind up sending the client a duplicate like this:

Set-Cookie: foo=bar; secure; secure;

and in the second case, if the upstream app does not set a cookie, nginx will send this to the browser:

Set-Cookie; secure;

This is doubleplusungood, of course. I think this problem needs to be fixed, as many people have asked about it. In my opinion a directive is needed, something like this:

proxy_cookie_set_flags * HttpOnly;
proxy_cookie_set_flags authentication secure HttpOnly;

but alas, this does not currently exist :(
Nginx Add Secure Flag to Cookies from proxied server
1,283,688,724,000
I believe that if there is any output from a cronjob it is mailed to the user who the job belongs to. I think you can also add something like [email protected] at the top of the cron file to change where the output is sent to. Can I set an option so that cron jobs system-wide will be emailed to root instead of to the user who runs them? (i.e. so that I don't have to set this in each user's cron file)
Check the /etc/crontab file and set MAILTO=root in there. The crond daemon also seems to accept a MAILTO environment variable — I'm not completely sure, but it's worth a try to change the environment for crond before it is started, e.g. in /etc/sysconfig/crond or in the /etc/rc.d/init.d/crond script which sources that file. Example:

[centos@centos scripts]$ strings /usr/sbin/crond | grep -i mail
ValidateMailRcpts
MailCmd
cron_default_mail_charset
usage: %s [-n] [-p] [-m <mail command>] [-x [
CRON_VALIDATE_MAILRCPTS
mailed %d byte%s of output but got status 0x%04x
[%ld] no more grandchildren--mail written?
MAILTO
/usr/sbin/sendmail
mailcmd too long
[%ld] closing pipe to mail
MAIL
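Putting it together, the top of /etc/crontab might look like this sketch (the SHELL, PATH, and run-parts lines are the usual distribution boilerplate, shown only for context):

```
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root

# m  h  dom mon dow user  command
17 *   *   *   *   root  cd / && run-parts /etc/cron.hourly
```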
Can I change the default mail recipient on cron jobs?
1,283,688,724,000
I've just moved to Awesome WM from OpenBox. I like that it's very extensible, customizable and I have huge control about window layout. I like structured and organized things and I'd like to separate that huge .config/awesome/rc.lua configuration into multiple files.
You can simply place code in a separate file and include it with dofile("somefile.lua") Note: The working directory is $HOME. To specify a file relative to rc.lua you can use dofile(awful.util.getdir("config") .. "/" .. "somefile.lua") If it's more than just some code and it might be used by others as well, it might make sense to create a lua module which can be included with somemodule = require("somemodule")
How to separate Awesome's `rc.lua` configuration into multiple files?
1,283,688,724,000
How to change the applications associated with certain file-types for gnome-open, exo-open, xdg-open, gvfs-open and kde-open? Is there a way by editing config files or by a command-line command? Is there a way to do this using a GUI? For both questions: How to do it per user basis, how to do it system-wide?
It's all done with MIME types in various databases. xdg-mime can be used to query and set user values.
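As a concrete sketch — the desktop-file name here is an assumption (look in /usr/share/applications or ~/.local/share/applications for the ones on your system):

```shell
# Hypothetical example: make Evince the per-user default for PDFs,
# then print the new default back as a check.
set_default_pdf_viewer() {
    xdg-mime default org.gnome.Evince.desktop application/pdf
    xdg-mime query default application/pdf
}
```

xdg-mime query filetype somefile tells you which MIME type a file resolves to. For a system-wide default, the usual place is /etc/xdg/mimeapps.list (check your distribution — older systems used /usr/share/applications/defaults.list).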
Change default applications used by gnome-open, exo-open, xdg-open, gvfs-open and kde-open
1,283,688,724,000
I want to tweak smb.conf without causing network hiccups for folks who rely on our samba4 fileshare. I made an obvious path change that should only affect my own private share, and ran smbcontrol smbd reload-config. It didn't complain, but didn't affect my share, either. I also tried smbcontrol samba4 reload-config, which returned Can't find pid for destination 'samba4', so I tried without the '4', and it had the same no-change behaviour as smbd. Is there a way to reload the conf in Samba 4 without closing open files and the like?
You can try with sending SIGHUP signal to smbd process killall -HUP smbd nmbd NOTE: Be careful using killall on Unix. Running killall on Solaris on root would send kill signals to all processes! [en.wikipedia.org/wiki/Killall]
Reloading Samba4's smb.conf without restarting the service
1,283,688,724,000
What is the file format expected by iptables-restore? Does a description of the format exist?
I found my answer, sort-of... As best as I can tell, there is no document. However, in reading the source code I've uncovered how it works.

Lines starting with # are comments and are not parsed.
Blank lines are ignored.
* marks the table name.
: marks the chain, followed by the default policy and optionally the packet and byte counters.
Byte counters can precede a rule.
Rules are exactly as given on the command line, less the table name.
Each table section must end with COMMIT.

The good news is that the syntax for the actual rules is just as it says in man iptables.

# iptables-restore format
*<table>
:<chain> <policy> [<packets_count>:<bytes_count>]
<optional_counter><rule>
... more rules ...
COMMIT

# iptables-restore example
*filter
:INPUT DROP [0:0]
-A INPUT -s 127.0.0.1 -p tcp -m tcp --dport 9000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 9000 -j REJECT --reject-with icmp-port-unreachable
COMMIT
netfilter - iptables-restore file format documentation
1,283,688,724,000
I configured the service calc_mem.service as follows:

Restart=on-failure
RestartSec=5
StartLimitInterval=400
StartLimitBurst=3

From my understanding, this configuration means the service gets 3 retries when it exits with an error, and before each restart it waits 5 seconds. I also found that Restart can be set to Restart=always. I understand the need to restart the service on failure, but what is the meaning of Restart=always? In which cases do we need to set Restart=always?
The systemd.service man page has a description of the values Restart= takes, and a table of what options cause a restart when. Always pretty much does what it says on the lid:

If set to always, the service will be restarted regardless of whether it exited cleanly or not, got terminated abnormally by a signal, or hit a timeout.

I don't know for sure what situation they had in mind for that feature, but we might hypothesise e.g. a service configured to only run for a fixed period of time or to serve a fixed number of requests and to then stop to avoid any possible resource leaks. Having systemd do the restarting makes for a cleaner implementation of the service itself. In some sense, we might also ask why not include that option in systemd. Since it is capable of restarting services on failure, they might as well include the option of restarting the service always, just in case someone needs it. To provide tools, not policy. Note also that a "successful exit" here is defined rather broadly:

If set to on-success, it will be restarted only when the service process exits cleanly. In this context, a clean exit means an exit code of 0, or one of the signals SIGHUP, SIGINT, SIGTERM or SIGPIPE, [...]

SIGHUP is a common way of asking a process to restart, but if unhandled, it terminates the process. So having Restart=always (or Restart=on-success) allows using SIGHUP for restarting, even without the service itself supporting that. Also, as far as I can read the man page, always doesn't mean it would override the limits set by StartLimitInterval and StartLimitBurst:

Note that service restart is subject to unit start rate limiting configured with StartLimitIntervalSec= and StartLimitBurst=, see systemd.unit(5) for details. A restarted service enters the failed state only after the start limits are reached.
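For reference, a sketch of how these options fit together in a unit file. The service name and paths are made up; note that on current systemd versions the rate-limiting settings are spelled StartLimitIntervalSec=/StartLimitBurst= and live in [Unit] (older versions accepted StartLimitInterval= in [Service]):

```ini
# /etc/systemd/system/calc_mem.service — hypothetical layout
[Unit]
Description=Example service restarted on any exit
StartLimitIntervalSec=400
StartLimitBurst=3

[Service]
ExecStart=/usr/local/bin/calc_mem
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```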
systemctl + what is the meaning of Restart=always
1,283,688,724,000
I use BIND as my DNS server at home. For my Start Of Authority (SOA record) I always use a serial in the recommended format YYYYMMDD## where ## is the counter for changes on that day. Unfortunately I changed the serial and added 1 more digit by mistake. After updating the name-daemon, I couldn't revert this anymore. Is there a possible way to reset the serial / counter inside BIND's internal libraries?
"BIND's internal libraries" don't care what the serial number is. It's only agreement between the master server and slave servers that matters. In other words, BIND will happily let you decrease the serial number in a zone file without complaint.. It's just that the slaves would no longer receive updates. Zone file serial numbers are unsigned 32-bit integers and they wrap around the largest possible 32-bit unsigned integer. So there is a way to decrease the serial number by incrementing it repeatedly until it rolls over and becomes closer to zero. There is a maximum amount by which you can increment it at a time, so you have to do this iteratively in multiple steps: Increase the serial number by a large increment but no more than 2147483647 Wait for all of the slave servers to catch up and be up to date with the current SOA. Repeat You can always pick an increment such that you don't need to iterate more than twice. Follow this HOWTO.
How can I reset or lower the serial used in BIND DNS server's SOA record?
1,283,688,724,000
Specifically I'm trying to give a notification after some command was completed. So, for example, if I reload my configuration file, I'd like to have some confirmation that it worked, which might be done something like this: bind R source-file "$HOME/.tmux.conf" && display-message "Configuration reloaded." That, however, doesn't work. Nor do any other things I tried as ways of stringing commands together.
You could use the run-shell option, but the critical thing is to separate the commands with \; In this case, something like: bind R source-file ~/.tmux.conf \; run-shell "echo 'Reload'" run-shell shell-command (alias: run) Execute shell-command in the background without creating a window. After it finishes, any output to stdout is displayed in copy mode. If the command doesn't return success, the exit status is also displayed.
How can I bind multiple tmux commands to one keystroke?
1,283,688,724,000
I would like to set an environment variable so that it is set when I launch a specific Flatpak application, and only set for this application. How do I go about doing this in a permanent manner?
You can do this via the flatpak override command. To set only one environment variable you can use this syntax: flatpak override --env=VARIABLE_NAME=VARIABLE_VALUE full.application.Name To set multiple environment variables you can use this syntax: flatpak override --env=VARIABLE_NAME_ONE=VARIABLE_VALUE_ONE --env=VARIABLE_NAME_TWO=VARIABLE_VALUE_TWO full.application.Name This will set it globally and therefore requires you to run the command as root. If you want to do this for your current user, you can add the --user parameter to the command, like so: flatpak override --user --env=VARIABLE_NAME=VARIABLE_VALUE full.application.Name Source and further reading: http://docs.flatpak.org/en/latest/flatpak-command-reference.html#flatpak-override
How do I permanently set an environment variable for a specific Flatpak application?
1,283,688,724,000
OS: Funtoo. I have bound NGINX to port 81 (I want to run it alongside my Apache server for a short time for ease of transition), and it listens at the port (If I point at another port, using wget I get "Connection refused", but using port 81 I get "connected") but it never serves an HTML response of any kind! When running a wget on the port, from the localhost, I get: # wget localhost:81 -2014-04-16 23:56:45- http://localhost:81/ Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:81... connected. HTTP request sent, awaiting response... On another computer... $ wget 192.168.18.42:81 -2014-04-16 23:57:19- http://192.168.18.42:81/ Connecting to 192.168.18.42:81... connected. HTTP request sent, awaiting response... Nothing ever happens after that. The documents exist, it's the normal Funtoo nginx.conf. UPDATE: I can make it listen to port 80, but it still rattles me that I can't get it to work on any port.... netstat -aWn | grep 81 | grep LISTEN tcp 60 0 0.0.0.0:81 0.0.0.0:* LISTEN Edit: Configuration files: user nginx nginx; worker_rlimit_nofile 6400; error_log /var/log/nginx/error_log info; events { worker_connections 1024; use epoll; } http { include /etc/nginx/mime.types; # This causes files with an unknown MIME type to trigger a download action in the browser: default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $bytes_sent ' '"$http_referer" "$http_user_agent" ' '"$gzip_ratio"'; client_max_body_size 64m; # Don't follow symlink if the symlink's owner is not the target owner. 
disable_symlinks if_not_owner; server_tokens off; ignore_invalid_headers on; gzip off; gzip_vary on; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js image/x-icon image/bmp; sendfile on; tcp_nopush on; tcp_nodelay on; index index.html; include /etc/nginx/sites-enabled/*; } Server block: server { listen *:81; root /usr/share/nginx/html; location / { index index.html; } }
Turns out the big problem? Nginx had set worker_processes to 0. I added a line setting it to auto in the top of my nginx.conf, and all was well with the world! Thank you all for your time and patience.
Nginx listens at a port, only responds if set to port 80
1,283,688,724,000
I frequently ssh into openstack instances. All of the instances are on a dedicated vlan and subnet (10.2.x.x). All of the instances have the same username (bob) I connect like so: ssh [email protected] or sometimes like this ssh 10.2.x.x -l bob Is it possible to configure my laptop to automatically use the name bob when I ssh into any vm on the 10.2.x.x subnet? I don't want to automatically use bob when sshing into a machine on any other subnet. It looks like ssh config doesn't support wildcards. (Correct me if I am wrong). I'm thinking maybe an alias could do this, but I'm not sure what the syntax would be.
The ssh_config man page has a section PATTERNS which details how you can do that; you can use the wildcards * and ?. In my ~/.ssh/config:

Host 172.16.*.*
    User drav

and then on issuing ssh -vvv 172.16.13.1:

debug1: Reading configuration data /home/drav/.ssh/config
debug1: /home/drav/.ssh/config line 4: Applying options for 172.16.*.*
debug1: /home/drav/.ssh/config line 46: Applying options for *

Note that these patterns are matched against the hostname exactly as you typed it, so if "fred.mynetwork.com" in DNS is 172.16.13.1, issuing ssh fred.mynetwork.com will not match the Host 172.16.*.* entry. You could always, however, add an additional Host *.mynetwork.com entry to apply the same options when a DNS name is used instead.
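Applied to the 10.2.x.x subnet from the question, the stanza would be (a sketch; again, the pattern matches the string you type on the command line, not the resolved address):

```
# ~/.ssh/config
Host 10.2.*
    User bob
```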
ssh config wildcards to avoid typing username
1,283,688,724,000
From the output of lspci how do I interpret the BUSID for xorg.conf.d ? Example: 00:02.0 VGA compatible controller: Intel Corporation Skylake GT2 [HD Graphics 520] (rev 07) 01:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Sun XT [Radeon HD 8670A/8670M/8690M / R5 M330 / M430 / Radeon 520 Mobile] (rev 83) How do I write the BUSID for the AMD card ? Is this correct ? BUSID PCI 0@1:00:0
In your lspci output, 01:00.0 means bus 1, device 0, function 0, which maps to a BusID specifier of PCI:1:0:0 (without specifying the domain): BusID "PCI:1:0:0" See the xorg.conf documentation for details.
Setting BUSID in xorg.conf
1,283,688,724,000
Recently GNOME added a new Night Light feature. It is really helpful, but it feels too time-consuming to go to Settings (and then Display) and activate it each time. Is there a terminal command that simply turns the Night Light feature on? The manual and sunrise-to-sunset schedule options aren't really helpful, as even I don't know when I will need the feature turned on or off. I tried to google but found nothing related to a command. It might not matter, but just in case: I am using Kali Linux. This is the feature I am talking about.
Yes, you can turn it on with

    gsettings set org.gnome.settings-daemon.plugins.color night-light-enabled true

or

    dconf write /org/gnome/settings-daemon/plugins/color/night-light-enabled true

The same commands with false instead of true will turn it off. If you list the keys under the org.gnome.settings-daemon.plugins.color schema you'll see that you can also configure the schedule (auto: on/off, manual: from/to) as well as the night light temperature. A very convenient way to set the latter is via the night light slider extension.
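Building on those commands, a small toggle script can flip the setting either way with one invocation. This is a sketch: it assumes a GNOME session with gsettings on the PATH, and the `toggle` helper name is made up:

```shell
#!/bin/bash
# Toggle GNOME Night Light: read the current value, write its opposite.
# toggle() is pure string logic; the gsettings calls use the schema/key
# from the answer above.
toggle() { [ "$1" = "true" ] && echo false || echo true; }

if command -v gsettings >/dev/null 2>&1; then
    schema=org.gnome.settings-daemon.plugins.color
    current=$(gsettings get "$schema" night-light-enabled)
    gsettings set "$schema" night-light-enabled "$(toggle "$current")"
fi
```

Saved as an executable script, it can be bound to a keyboard shortcut so the feature is one keypress away.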
Activate Night Light option from terminal
1,283,688,724,000
The main Apache config file on my CentOS system is /etc/httpd/conf/httpd.conf, and in there is a line:

    Include conf.d/*.conf

Inside conf.d are mostly files that do something like this:

    LoadModule auth_kerb_module modules/mod_auth_kerb.so

But there are also sites that are set up in there too, with their own config files. Was this not well thought out, or am I missing something?
Separating configuration files is a way to manage them. By putting configuration lines specific to a module into their own file, it becomes much easier to enable and disable modules. It also helps with managing them, because you then only have a small configuration file to edit. (Imagine opening up a 500-line httpd.conf and looking for an incorrect option.) Different systems seem to have different ways to separate Apache configuration files. For example, on my Gentoo there are modules.d/ and vhosts.d/, while on my Ubuntu there are conf.d/, mods-available/, mods-enabled/, sites-available/ and sites-enabled/. You can guess what they do by the name, or look inside httpd.conf for Include lines.
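As an illustration (the file name and directives below are hypothetical, not from any shipped package), a typical drop-in bundles everything one module needs in a single file. And because httpd.conf only includes files matching *.conf, renaming the file to e.g. ssl.conf.disabled switches the module off without editing httpd.conf at all:

```
# /etc/httpd/conf.d/ssl.conf -- everything mod_ssl needs, in one place
LoadModule ssl_module modules/mod_ssl.so
Listen 443
SSLCipherSuite HIGH:!aNULL
```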
Why put some config info in conf/httpd.conf and some in files in the conf.d folder?
1,283,688,724,000
I saw a kernel option today in menuconfig that used braces for its checkbox:

    {*} Button

This isn't listed in the legend at the top of the screen:

    [*] built-in  [ ] excluded  <M> module  < > module capable

What do the braces signify?
It represents an option that has been implied to a specific value by another option. This Gentoo wiki page has a clear explanation and lists all the available types that menuconfig can display; for example, the hyphen is also listed there.
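As a sketch of one way this arises (a hypothetical Kconfig fragment, not from the real tree): when one symbol selects another, the selected tristate is forced on whenever the first is enabled, so menuconfig shows it as {M} or {*}; you may still raise it from module to built-in, but you can no longer disable it outright:

```
config FOO
	tristate "Foo driver"
	select BAR

config BAR
	tristate "Bar helper library"
```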
What do the kernel options in braces mean?
1,283,688,724,000
I want to enable reverse path filtering to prevent source IP spoofing on my server. I noticed that I currently have the following settings:

    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.conf.lo.rp_filter = 0
    net.ipv4.conf.p4p1.rp_filter = 1
    net.ipv4.conf.eth0.rp_filter = 1

The setting in all and the one in default are not the same, and there are no explicit settings in my /etc/sysctl.conf file. I would like to know what the impact on the rest of the configuration is between setting

    net.ipv4.conf.all.rp_filter = 1

and

    net.ipv4.conf.default.rp_filter = 1

Do I have to set both, or just one of them?
According to this post, titled "all vs. default in /proc/sys/net/ipv4/conf [message #3139]":

    When you change variables in the /proc/sys/net/ipv4/conf/all directory, the variable for all interfaces and default will be changed as well.

    When you change variables in /proc/sys/net/ipv4/conf/default, all future interfaces will have the value you specify. This should only affect machines that can add interfaces at run time, such as laptops with PCMCIA cards, or machines that create new interfaces via VPNs or PPP, for example.

References:
- Linux Firewall-related /proc Entries
- /proc/sys/net/ipv4/* Variables
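To make the setting persistent for both current and future interfaces, a drop-in like the following covers both cases (the path and file name are just a common convention; note that for rp_filter specifically, the kernel documentation says the effective value is the maximum of the all and per-interface settings, so all=1 is enough to enforce strict filtering on existing interfaces):

```
# /etc/sysctl.d/50-rp_filter.conf -- sketch
# "all" raises the effective value for every existing interface,
# since the kernel uses max(all, interface) for rp_filter;
# "default" makes interfaces created later start at 1 as well.
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
```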
What is the difference between all and default in kernel setting? [duplicate]
1,283,688,724,000
I have a Raspberry Pi B+ with an Arch Linux installation. uname reports this version:

    [computer@computer001 ~]$ uname -a
    Linux computer001 3.18.3-3-ARCH #1 PREEMPT Mon Jan 26 20:10:28 MST 2015 armv6l GNU/Linux

I installed the FTP server via pacman -S vsftpd and the installation passed without any errors. Then I tried to configure it, which resulted in the following vsftpd.conf:

    anonymous_enable=NO
    local_enable=YES
    write_enable=YES
    #local_umask=022
    anon_upload_enable=NO
    anon_mkdir_write_enable=NO
    dirmessage_enable=YES
    xferlog_enable=YES
    connect_from_port_20=YES
    chown_uploads=YES
    chown_username=computer
    #xferlog_file=/var/log/vsftpd.log
    #xferlog_std_format=YES
    #idle_session_timeout=600
    #data_connection_timeout=120
    #nopriv_user=ftpsecure
    #async_abor_enable=YES
    #ascii_upload_enable=YES
    #ascii_download_enable=YES
    ftpd_banner=Welcome to personal ftp service.
    #deny_email_enable=YES
    #banned_email_file=/etc/vsftpd.banned_emails
    #chroot_local_user=YES
    #chroot_list_enable=YES
    #chroot_list_file=/etc/vsftpd.chroot_list
    ls_recurse_enable=YES
    listen=YES
    #listen_ipv6=YES

Now, when I try to restart vsftpd, I get:

    [computer@computer001 etc]$ sudo systemctl restart vsftpd.service && systemctl status -l vsftpd.service
    * vsftpd.service - vsftpd daemon
       Loaded: loaded (/usr/lib/systemd/system/vsftpd.service; enabled; vendor preset: disabled)
       Active: failed (Result: exit-code) since Thu 1970-01-01 06:32:24 UTC; 112ms ago
      Process: 350 ExecStart=/usr/bin/vsftpd (code=exited, status=2)
     Main PID: 350 (code=exited, status=2)

Here is also the output of sudo journalctl | grep -i vsftp:

    Jan 01 06:32:24 computer001001 sudo[347]: computer001 : TTY=pts/0 ; PWD=/etc ; USER=root ; COMMAND=/usr/bin/systemctl restart vsftpd.service
    Jan 01 06:32:24 computer001001 systemd[1]: Starting vsftpd daemon...
    Jan 01 06:32:24 computer001001 systemd[1]: Started vsftpd daemon.
    Jan 01 06:32:24 computer001001 systemd[1]: vsftpd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
    Jan 01 06:32:24 computer001001 systemd[1]: Unit vsftpd.service entered failed state.
    Jan 01 06:32:24 computer001001 systemd[1]: vsftpd.service failed.

Here is the unit file /usr/lib/systemd/system/vsftpd.service:

    [Unit]
    Description=vsftpd daemon
    After=network.target

    [Service]
    ExecStart=/usr/bin/vsftpd
    ExecReload=/bin/kill -HUP $MAINPID
    KillMode=process

    [Install]
    WantedBy=multi-user.target

If I run sudo /usr/bin/vsftpd, I get the following error:

    500 OOPS: config file not owned by correct user, or not a file

I corrected the file permissions for /etc/vsftpd.conf via sudo chown root:root /etc/vsftpd.conf, and now the server starts when run manually. I am also aware the date/time is not correct; I haven't set it up yet. What am I missing?
I reset the permissions for /etc/vsftpd.conf to root:root via

    sudo chown root:root /etc/vsftpd.conf

and now the vsftpd server starts both via sudo systemctl restart vsftpd.service and when run manually via sudo /usr/bin/vsftpd.
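The 500 OOPS message names the two conditions vsftpd enforces on its config file: it must be a regular file, and it must be owned by root. A sketch of the same checks as a shell helper (`check_conf` is a made-up name; on a live system you would point it at /etc/vsftpd.conf):

```shell
# Reproduce vsftpd's sanity check on its config file by hand:
# the path must be a regular file owned by root (uid 0).
check_conf() {
    f=$1
    [ -f "$f" ] || { echo "not a regular file: $f"; return 1; }
    [ "$(stat -c '%u' "$f")" = "0" ] || { echo "not owned by root: $f"; return 1; }
    echo "ok: $f"
}
```

Running `check_conf /etc/vsftpd.conf` before starting the daemon catches this class of failure without waiting for systemd to report status=2.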
vsftpd won't start: "systemd[1]: vsftpd.service: main process exited, code=exited, status=2/INVALIDARGUMENT"
1,283,688,724,000
I want to disable the bold font in my urxvt config (.Xresources). xterm has the option allowBoldFonts. Is there a similar option for urxvt? I can't find anything similar.
You can effectively disable bold fonts by applying the same font string to both urxvt's regular and bold fonts in .Xresources, for example:

    URxvt.font:xft:droid sans mono slashed:size=10.5
    URxvt.boldFont:xft:droid sans mono slashed:size=10.5
Disable bold font in urxvt
1,283,688,724,000
If I look at my home directory there are a large number of dot files. If I am creating a new program that needs a user configuration file, is there any guidance on where to put it? I could imagine creating a new dot directory, ~/.myProgramName, or maybe I should add it to ~/.config or ~/.local.
The .config directory is a newish development courtesy of XDG that seems, deservedly, to have won favour. Personally, I don't mind a dot directory of your own. A bunch of separate dot files (à la bash and various old-school tools) in the top level of $HOME is a bit silly.

Choosing a single dot file is a bad idea, because if in the future you realize maybe there are a couple more files that would be good to have, you have a possible backward compatibility issue, etc. So don't bother starting out that way. Use a directory, even if you are only going to have one file in it.

A better place for that directory is still in ~/.config, unless you are very lazy, because of course you must first check to make sure it actually exists and create it if necessary (which is fine). Note you don't need a dot prefix if your directory is in the .config directory.

So to summarize:

- use a directory, not a standalone file
- put that directory in $HOME/.config
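In practice, honoring the XDG convention means respecting $XDG_CONFIG_HOME and only falling back to ~/.config when it is unset. A minimal sketch ("myprogram" is a placeholder for your program's name):

```shell
# Resolve the per-user configuration directory the XDG way:
# use $XDG_CONFIG_HOME if set, otherwise fall back to ~/.config,
# then create the program's own subdirectory inside it.
confdir="${XDG_CONFIG_HOME:-$HOME/.config}/myprogram"
mkdir -p "$confdir"
echo "config lives in: $confdir"
```

The `${VAR:-default}` expansion is what makes the fallback a one-liner; the same pattern applies to $XDG_DATA_HOME and ~/.local/share for non-configuration data.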
Where should user configuration files go? [duplicate]
1,283,688,724,000
Various Debian packages, including logrotate and rsyslog, put their own log rotation definitions in /etc/logrotate.d/. What is the correct way to override those definitions? If I modify the files, I get warnings at every system update, and I risk losing the changes if I (or someone else) give the wrong answer, or risk not getting new upstream definitions for new log files if I (or someone else) fail to merge the files by hand. Both things have happened regularly in the past few years. I tried overriding the definitions in 00_* or zz_* files, but I get a duplicate error:

    error: zz_mail:1 duplicate log entry for /var/log/mail.log
    error: found error in /var/log/mail.log , skipping

Is there any clean solution? Should I write a cron script to re-apply my changes to the definition files every day?

Edit: to be more clear, ideally I would like to keep 99% of rsyslog's log rotation definitions in place, and automatically updated with APT, except for a single definition, that of /var/log/mail.log, for which I need to apply a different rotation policy. If logrotate allowed duplicate definitions and only used the first or the last one for each file, my problem would be solved. If it had an override option, to flag a definition as overriding a previous one on purpose, that would also solve it. But alas, it seems I need to override the entire /etc/logrotate.d/rsyslog (and nginx, and others) with my own versions.
First of all, I recommend using a tool such as etckeeper to keep track of changes to files in /etc; that avoids data loss during upgrades (among other benefits).

The "correct" way to override the definitions is to edit the configuration files directly; that's why dpkg knows how to handle configuration files and prompts you when upgrades introduce changes. Unfortunately that's not ideal, as you discovered.

To actually address your specific configuration issue, in a Debian-friendly way, I would suggest logging your mail messages to a different log file, and setting that up in logrotate:

- add a new log configuration file in /etc/rsyslog.d, directing mail.* to a new log file, e.g. /var/log/ourmail.log (assuming you're using rsyslog; change as appropriate);
- configure /var/log/ourmail.log in a new logrotate configuration file.

Since this only involves adding new configuration files, there's no upgrade issue. The existing log files will still be generated and rotated using the default configuration, but your log files will follow your configuration.
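Concretely, the two new files could look like this. The file names, the log path, and every rotation parameter are assumptions to adapt; the postrotate command follows the helper Debian's rsyslog package ships, but verify the path on your system:

```
# /etc/rsyslog.d/30-ourmail.conf -- send mail facility messages to our file
mail.*    -/var/log/ourmail.log

# /etc/logrotate.d/ourmail -- rotate that file on our own policy
/var/log/ourmail.log {
        daily
        rotate 30
        compress
        delaycompress
        missingok
        notifempty
        postrotate
                /usr/lib/rsyslog/rsyslog-rotate
        endscript
}
```

Because neither file name collides with a packaged one, apt upgrades never prompt about them, and the packaged /etc/logrotate.d/rsyslog keeps handling every other log untouched.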
How can I properly override logrotate policies?