1,681,080,673,000
I am using ranger as my terminal file manager, and now that I understand the basics, I wanted to get a bit deeper into customizing the rc file for my purposes. One thing I'd like to do is map a keybinding to a command that copies a template into the current folder and then starts the rename_append command (usually bound to the key a) on the file. Here is an example: map NS shell cp ~/.templates/bash.sh . ; rename_append The problem is that, no matter how I call the command, it does copy the template into my current directory, but then doesn't start the rename process. When I quit ranger, the error message says that rename_append wasn't found, but in the ranger config, the same command is used to rename files before the file extension. My theory is that, since I am using shell, the command tries to find rename_append among my programs. I don't know how to chain commands in my ranger config where one is a shell command and the next one isn't. Currently I have to use a second keybinding after the first, and I would like to use only one and automatically go into renaming mode after copying the file. I hope my problem is understandable.
You need the chain command to execute multiple commands. chain <command1>; <command2>; ... This part is probably not relevant anymore but for the copy and rename task I came up with this solution: map NS chain shell cp ~/.templates/bash.sh .; console shell mv bash.sh%space This binding can be used by pressing NS, typing the new file name followed by pressing enter. This solution does not use rename_append because it executes the command on the current selection and I couldn't get the selection to change.
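For reference, the resulting rc.conf lines might look like this (the template path and key are taken from the question; the comment syntax is ranger's own):

```
# ~/.config/ranger/rc.conf
# Copy a template, then pre-fill ranger's console with a rename command;
# %space leaves the cursor after "mv bash.sh " ready for the new name.
map NS chain shell cp ~/.templates/bash.sh .; console shell mv bash.sh%space
```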
ranger chain commands in config
1,681,080,673,000
After some reading and searching through the net, I still have the following problem. I am using Pop!_OS on a Lenovo X230. After some time with the GNOME DE, I decided to see what the Xfce DE looks like. While still getting to know it, I have encountered the following issue. In GNOME, when I close the lid and then open it again, I am required to enter the login password, but that is not the case in Xfce. How do I change that? The power manager has no such option (after really checking different combinations of things). I also tried to edit the logind.conf file: GNU nano 5.2 /var/tmp/logindXXgwJbx5.conf # This file is part of systemd. # # systemd is free software; you can redistribute it and/or modify it # under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation; either version 2.1 of the License, or # (at your option) any later version. # # Entries in this file show the compile time defaults. # You can change settings by editing this file. # Defaults can be restored by simply deleting this file. # # See logind.conf(5) for details. [Login] #NAutoVTs=6 #ReserveVT=6 #KillUserProcesses=no #KillOnlyUsers= #KillExcludeUsers=root #InhibitDelayMaxSec=5 #HandlePowerKey=poweroff HandleSuspendKey=lock #HandleHibernateKey=hibernate HandleLidSwitch=lock #HandleLidSwitchExternalPower=suspend #HandleLidSwitchDocked=ignore #PowerKeyIgnoreInhibited=no #SuspendKeyIgnoreInhibited=no #HibernateKeyIgnoreInhibited=no #LidSwitchIgnoreInhibited=yes #HoldoffTimeoutSec=30s In it, I uncommented and changed HandleLidSwitch=ignore to HandleLidSwitch=lock. This (after rebooting) didn't have any visible effect. How can I resolve this issue?
In the following reddit thread one can find a solution that worked for me. Basically, there you can find in detail instruction to switch from Light-locker to XScreenSaver. User ID: u/EMH_Mark_I https://www.reddit.com/r/xubuntu/comments/9tlnnu/how_to_setup_xscreensaver_to_lock_on_suspend_for/
Not logging off in Xfce when closing the lid
1,681,080,673,000
I have configured a CentOS VM, with NetworkManager disabled. Question: when I start the VM I have access to the internet, but after a while I lose it. How can I fix that?
In step one you say: "When I start VM I have access to the internet, but after a while I lose it." Test this on the console: dhclient -v You will notice it binds an IP address, as shown below: [root@localhost network-scripts]# dhclient -v Internet Systems Consortium DHCP Client 4.2.5 Copyright 2004-2013 Internet Systems Consortium. All rights reserved. For info, please visit https://www.isc.org/software/dhcp/ Listening on LPF/ens32/00:0c:29:68:22:e2 Sending on LPF/ens32/00:0c:29:68:22:e2 Sending on Socket/fallback DHCPDISCOVER on ens32 to 255.255.255.255 port 67 interval 4 (xid=0x433a9e33) DHCPREQUEST on ens32 to 255.255.255.255 port 67 (xid=0x433a9e33) DHCPOFFER from 172.16.179.254 DHCPACK from 172.16.179.254 (xid=0x433a9e33) bound to 172.16.179.136 -- renewal in 822 seconds. Then run one of these commands to show your IP: ip addr or ifconfig Then, to make this run automatically at startup: go to the directory with cd /etc/init.d, create a net-autostart file with sudo vim, and save it: #!/bin/bash # Solution for "Centos Connection from VMware" # ### BEGIN INIT INFO # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 ### END INIT INFO dhclient -v Run the commands: chmod 755 net-autostart chkconfig --add net-autostart then restart the VM.
CentOS loses internet connection
1,539,973,562,000
I am trying to understand what I am doing wrong here. I was under the impression that make savedefconfig would be the way to go to reduce one config to the (equivalent) bare minimum. So here are my steps; take a config file from the Debian package directly: $ dpkg -S /boot/config-4.14.0-3-powerpc linux-image-4.14.0-3-powerpc: /boot/config-4.14.0-3-powerpc $ apt-cache policy linux-image-4.14.0-3-powerpc linux-image-4.14.0-3-powerpc: Installed: 4.14.13-1 Candidate: 4.14.13-1 Version table: *** 4.14.13-1 500 500 http://ftp.fr.debian.org/debian sid/main powerpc Packages 100 /var/lib/dpkg/status Copy it over to my main machine: $ scp macminig4:/boot/config-4.14.0-3-powerpc ./arch/powerpc/configs/my_defconfig Verify the option I want to play with is still there: $ grep CONFIG_SSB_B43_PCI_BRIDGE ./arch/powerpc/configs/my_defconfig CONFIG_SSB_B43_PCI_BRIDGE=y Now let's update it since it is not in perfect sync with git v4.14: $ git checkout v4.14 $ make ARCH=powerpc my_defconfig $ diff -u .config ./arch/powerpc/configs/my_defconfig | diffstat my_defconfig | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) So some minor change occurred, but nothing bad; at least I can still see my option: $ grep CONFIG_SSB_B43_PCI_BRIDGE .config CONFIG_SSB_B43_PCI_BRIDGE=y Now let's try a savedefconfig: $ make ARCH=powerpc savedefconfig HOSTCC scripts/basic/fixdep HOSTCC scripts/basic/bin2c HOSTCC scripts/kconfig/conf.o SHIPPED scripts/kconfig/zconf.tab.c SHIPPED scripts/kconfig/zconf.lex.c HOSTCC scripts/kconfig/zconf.tab.o HOSTLD scripts/kconfig/conf scripts/kconfig/conf --savedefconfig=defconfig Kconfig If I now check, my option is lost: $ grep CONFIG_SSB_B43_PCI_BRIDGE defconfig -> nothing ! Why are some options disappearing? Is there a way to control savedefconfig to preserve some options?
Turns out that this was simply bad timing. The git/master (actually 4.15) is affected by: https://patchwork.kernel.org/patch/10185397/ After upgrading an old laptop to 4.15-rc9, I found that the eth0 and wlan0 interfaces had disappeared. It turns out that the b43 and b44 drivers require SSB_PCIHOST_POSSIBLE which depends on PCI_DRIVERS_LEGACY, a config option that only exists on Mips.
Does `make savedefconfig` lose configuration options?
1,539,973,562,000
I'm trying to write a build script for SLES/RHEL that modifies config files to conform to our company standards. Some configs are formatted as setting = value and some are just setting value. I'm using sed currently and feel I'm almost there but it's not working when there is no equals sign. Current code: (Sorry I'm not very good at formatting these properly yet...) $ cat test.conf setting1 value1 setting2= value2 setting3 = value3 setting4 =value4 Current (test) function: #!/bin/bash replace () { file=$1 var=$2 new_value=$3 sed -e "s/^$var=.*/$var = $new_value/" -e "s/^$var =.*/$var = $new_value/" -e "s/^$var.*/$var = $new_value/" -e "s/^$var /$var $new_value/" "$file"|grep $var } replace test.conf setting1 value1new replace test.conf setting2 value2new replace test.conf setting3 value3new replace test.conf setting4 value4new However I get this as a result: $ ./functiontest setting1 value1new= value1new setting2 value2new= value2new setting3 value3new= value3new setting4 value4new= value4new It works if I take out the last sed portion, but only where there is an equals sign. Any ideas what I'm doing wrong?
You can simplify the set of "similar" sed substitutions to the following: cat replace.sh #!/bin/bash replace () { var=$2 new_val=$3 sed -e "s/^$var *= *.*/$var = $new_val/; s/^$var [^=]*$/$var $new_val/" "$1" | grep $var } replace test.conf setting1 value1new replace test.conf setting2 value2new replace test.conf setting3 value3new replace test.conf setting4 value4new Usage: bash replace.sh The output: setting1 value1new setting2 = value2new setting3 = value3new setting4 = value4new
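To see the combined sed expression in action, here is a self-contained run against a throwaway copy of the sample file (a sketch; it uses mktemp instead of the real test.conf):

```shell
# Recreate the four formats from the question in a temporary file.
conf=$(mktemp)
printf '%s\n' 'setting1 value1' 'setting2= value2' 'setting3 = value3' 'setting4 =value4' > "$conf"

replace () {
    var=$2
    new_val=$3
    # First expression: any "var=value" form, with or without spaces around '='.
    # Second expression: the bare "var value" form (no '=' anywhere on the line).
    sed -e "s/^$var *= *.*/$var = $new_val/; s/^$var [^=]*$/$var $new_val/" "$1" | grep "$var"
}

replace "$conf" setting1 value1new   # setting1 value1new
replace "$conf" setting4 value4new   # setting4 = value4new
```

The `[^=]*$` in the second expression is what keeps it from re-matching lines that already contain an equals sign, which was the cause of the doubled output in the question.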
sed function to replace any config file entry
1,539,973,562,000
How do I configure a CentOS7 server to connect to the internet and be known as a specific IPv4 address in the form of: aa.aa.aaa.aa2? HERE ARE THE DETAILS (Updated 2/27/2017): Cable Modem A Cisco DPC3941B (see link) router from an internet access provider has a Gateway IP aa.aa.aaa.aa6, and has several IP addresses allocated to it in the form of aa.aa.aaa.aa1, aa.aa.aaa.aa2, aa.aa.aaa.aa3, aa.aa.aaa.aa4, and aa.aa.aaa.aa5. It also has subnet mask 255.255.255.248 and DNS bb.bb.bb.bb, bb.bb.cc.cc. CentOS 7 Config nmcli con show gives results including eno1 uuid 802-3-ethernet eno1. nmcli con show eno1 gives a lot of output, including: IPV4.ADDRESS[1]: aa.aa.aaa.aa2/29 IPV4.GATEWAY: aa.aa.aaa.aa6 IPV4.DNS[1]: bb.bb.bb.bb IPV4.DNS[2]: bb.bb.cc.cc But yet when I ping google.com from the same terminal, the response is connect: Network is unreachable. And when I try to Putty to aa.aa.aaa.aa2 from another computer, the connection times out without connecting. Similarly, typing ping aa.aa.aaa.aa2 from another computer also times out with 0% packet return. In case this is a firewall issue, I typed firewall-cmd --zone=public --list-all and got: public (default, active) interfaces:eno1 sources: services: dhcpv6-client ssh ports: masquerade: no forward-ports: icmp-blocks rich rules: How A Windows Machine Is Configured to Do The Same Thing Successfully To rule out the possibility that the problem might be caused by the cable modem, I connected a Windows laptop to the modem with the following steps outlined below, and am able to connect to the internet and be seen as aa.aa.aaa.aa1 when using the Windows laptop through a different ethernet cable connected to the same modem. Here are the steps that get the Windows laptop to connect to the internet as aa.aa.aaa.aa1 through the same cable modem: 1. Control Panel > Network and Internet > Network and Sharing Center 2. Click “Change Adapter Settings” 3. Right click on “Ethernet 2” connection and click on “Properties” 4. 
Select “Internet Protocol Version 4 (TCP/IPv4)” 5. Then click on “Properties” Button to open the target dialog box: a. In the default state, the “Obtain IP address automatically” option is checked b. To claim a specific IP instead, click “Use The Following IP Address” and enter the following information: i. IP Address: aa.aa.aaa.aa1 ii. Subnet Mask: 255.255.255.248 iii. Default Gateway: aa.aa.aaa.aa6 iv. Preferred DNS Server: bb.bb.bb.bb v. Alternate DNS Server: bb.bb.cc.cc vi. Check the “Validate Settings on Exit” option. vii. Click OK 6. Click on any other open dialog boxes to return computer to normal state Pinging The Cable Modem's Local IP The Windows command prompt is able to successfully ping the Cable modem by typing ping 10.x.x.x, which is the local IP for the cable modem. But when I type the identical ping 10.x.x.x from the CentOS 7 server's terminal, the response is connect: Network is unreachable. Ethernet cable lit on both ends The Ethernet jack on the server is lit up indicating that it is connected, and the Ethernet jack on the other end of the server cable attached to the cable modem is also lit up indicating that it is connected. So there is an electrical connection between the CentOS 7 server and the cable modem. The problem seems isolated to the CentOS 7 config. Setting Up A Route To The Cable Modem From CentOS The Internet Service Provider gave this link containing information about how to set up a connection between a generic machine and the local modem. The link is for a different kind of connection, but the ISP said it could be adapted.
Long story short, I needed to configure a static IP address on CentOS 7, which is covered in the FAQ. Once this was done, everything worked.
Claiming IP address from CentOS 7 server
1,453,926,487,000
I am trying to open a website that I was told to use, and I was given some tasks to do from there. I have always been able to use it from my work network, but not from other networks. And the guy who created and used to administer the page has gone MIA, so I can't ask him. It gives me a: 403 Forbidden You don't have permission to access / on this server. I know some of the causes for a 403 status code, but I don't know why it would only allow me to access it from a particular network. Is there a chance that the website was configured to be used this way?
Certain websites are configured this way for security reasons. If I'm not mistaken, they use subnet filtering to make sure that the network you are trying to access the page from is on the same network as the hosting server. Think of it like this: if you can't access the network from outside, it'd be more difficult to compromise. Considering that the link you included was a .edu site (it also had "admin" in it, but I'm not sure if that's significant or if it's just an administration page), it would make sense for it to block incoming connections from other networks.
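For illustration, with Apache 2.4 this kind of network restriction is typically a directive like the following in the server config (the subnet and docroot here are placeholders, not taken from the question):

```
# Apache 2.4: only clients from this subnet may access the docroot;
# requests from any other address receive 403 Forbidden.
<Directory "/var/www/html">
    Require ip 192.0.2.0/24
</Directory>
```

Other servers have equivalents (e.g. nginx's allow/deny), and the same effect can also be achieved at the firewall, so the 403 alone doesn't tell you which layer is doing the filtering.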
Can't access website from other networks
1,453,926,487,000
I try to set up saslauthd for the XMPP server prosody but got stuck somewhere. I used the following documentation: http://blogs.mafia-server.net/nur-bahnhof/2013/12/prosody-authentification-ldapactivedirectory/ http://prosody.im/doc/cyrus_sasl https://wiki.debian.org/InstallingProsody My problem is that I can't get connected. The XMPP client always gets stuck somewhere while exchanging authentication information. Test using testsaslauthd was successful: testsaslauthd -u theuser -p "$pw" 0: OK "Success." I assume this means that the /etc/saslauthd.conf file is correct in this case. Test using sasl-sample-server/sasl-sample-client (called in different terminals and copy-pasting the S: and C: lines): root@xmpp:~# sasl-sample-server -s "xmpp" -m plain Forcing use of mechanism plain Sending list of 1 mechanism(s) S: cGxhaW4= Waiting for client mechanism... C: U......................= got 'PLAIN' sasl-sample-server: SASL Other: Password verification failed sasl-sample-server: Starting SASL negotiation: user not found (user not found) <terminates> root@xmpp:~# sasl-sample-client -s xmpp -a theuser service=xmpp Waiting for mechanism list from server... S: cGxhaW4= recieved 5 byte message Choosing best mechanism from: plain returning OK: theuser Password: Using mechanism PLAIN Preparing initial. Sending initial response... C: U......................= Negotiation complete Username: theuser SSF: 0 Waiting for encoded message... I don't understand why testsaslauthd succeeds while the other tool combo can't find the user. After running /usr/sbin/saslauthd -d I found the following block in /var/log/auth.log. Maybe that's the problem. 
But whatever I tried, I can't find out what's supplying the invalid parameter: Dec 2 15:42:14 xmpp sasl-sample-server: auxpropfunc error invalid parameter supplied Dec 2 15:42:14 xmpp sasl-sample-server: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: ldapdb Dec 2 15:42:14 xmpp sasl-sample-server: ldapdb_canonuser_plug_init() failed in sasl_canonuser_add_plugin(): invalid parameter supplied Dec 2 15:42:14 xmpp sasl-sample-server: _sasl_plugin_load failed on sasl_canonuser_init for plugin: ldapdb Dec 2 15:42:20 xmpp sasl-sample-client: ldapdb_canonuser_plug_init() failed in sasl_canonuser_add_plugin(): invalid parameter supplied Dec 2 15:42:20 xmpp sasl-sample-client: _sasl_plugin_load failed on sasl_canonuser_init for plugin: ldapdb Dec 2 15:42:34 xmpp sasl-sample-server: DIGEST-MD5 common mech free Also, I found that sasl-sample-server and sasl-sample-client use a list of several methods when used without the -m option, but in the file /usr/lib/sasl2/xmpp.conf I explicitly select the PLAIN method: pwcheck_method: saslauthd mech_list: PLAIN Possibly I got the wrong path, so I also copied the file to /etc/sasl/xmpp.conf and /etc/sasl2/xmpp.conf just in case. Unfortunately, I can't find any piece of documentation which states the paths explicitly for Debian 8. Also, testsaslauthd doesn't seem to care about the service: root@xmpp:~# testsaslauthd -s xmpp -u theuser -p "$pw" 0: OK "Success." root@xmpp:~# testsaslauthd -s nonexistingservice -u theuser -p "$pw" 0: OK "Success." Any idea what else I can do to find the reason? Update: Obviously, sasl-sample-server accesses the file /etc/sasldb2, which should not happen in LDAP mode, I think. Is it possible that this tool doesn't care about the configuration and that it doesn't support LDAP?
Output from strace: stat("/etc/sasldb2", {st_mode=S_IFREG|0640, st_size=12288, ...}) = 0 open("/etc/sasldb2", O_RDONLY) = 3 fcntl(3, F_GETFD) = 0 fcntl(3, F_SETFD, FD_CLOEXEC) = 0 read(3, "\0\0\0\0\1\0\0\0\0\0\0\0a\25\6\0\t\0\0\0\0\20\0\0\0\10\0\0\0\0\0\0"..., 512) = 512 close(3) = 0
I found the solution. First of all: I was on the wrong track, because sasl-sample-server doesn't seem to support LDAP in any way. It seems it doesn't talk to saslauthd but instead has its own implementation. Because of this, it ignores the -a ldap option of /usr/sbin/saslauthd. So I went on to test with prosody directly. After setting logging to debug, I discovered that all the paths for the config file xmpp.conf were wrong. Actually, it's not the path but the service name that changed. I found the following hint in http://prosody.im/doc/cyrus_sasl: Setting up Prosody to authenticate against LDAP (blog post) This post uses xmpp.conf, but the name is now prosody.conf (see cyrus_application_name above) So the final path was /usr/lib/sasl2/prosody.conf (still Debian 8) and its content is still the same: pwcheck_method: saslauthd mech_list: PLAIN With this change I was finally able to log in using XMPP.
testsaslauthd succeeds but sasl-sample-server/client fail
1,453,926,487,000
I use CentOS 7 and I need to change the file that provides the SSH configuration from /etc/ssh/sshd_config to /etc/ssh/sshd_config_other. This requirement is due to a security automation that always overwrites sshd_config, which I am not allowed to change. I tested changing /etc/systemd/system/multi-user.target.wants/sshd.service, adding a variable as ExecStart=/usr/sbin/sshd -D $OPTIONS, but it doesn't work. Does anyone know a way to change the SSH configuration file to /etc/ssh/sshd_config_other?
Note that /etc/systemd/system/multi-user.target.wants/sshd.service is just a link to /usr/lib/systemd/system/sshd.service, and you should change this file by first copying it to /etc/systemd/system/ and then editing the copy. However, if your sshd.service file has the line EnvironmentFile=/etc/sysconfig/sshd then you can simply add to the file /etc/sysconfig/sshd a line OPTIONS='-f /etc/ssh/sshd_config_other' If you change systemd unit files you need to run sudo systemctl daemon-reload, but I don't suppose you need to for this change.
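If the EnvironmentFile route applies, the whole change is one line; a sketch (the alternate config path is the one from the question):

```
# /etc/sysconfig/sshd -- read by sshd.service via EnvironmentFile=
# Extra options passed to sshd; -f selects an alternate configuration file.
OPTIONS='-f /etc/ssh/sshd_config_other'
```

After editing, restart the service with systemctl restart sshd; no daemon-reload is needed in this case, since no unit file changed.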
CentOS - SSH - Changing the sshd Configuration File Path
1,453,926,487,000
I have configured freeradius on RHEL 6.5 server for MAC based authentications and for this, I've followed this guide. According to the mentioned guide, I have created authorized_macs file for the valid MAC addresses as below: xx-xx-xx-xx-xx-xx Reply-Message = "Device with MAC Address %{Calling-Station-Id} authorized for network access" yy-yy-yy-yy-yy-yy Reply-Message = "Device with MAC Address %{Calling-Station-Id} authorized for network access" I've tried to make certain changes in authorize section of /etc/raddb/sites-available/default file, in order to set Reply-Message for failed authentications, as below: authorize { preprocess # if cleaning up the Calling-Station-Id... rewrite.calling_station_id # now check against the authorized_macs file authorized_macs if (!ok) { update control { Reply-Message := "Login Failed. MAC Address %{Calling-Station-ID} is NOT valid." } reject } else { # accept users update control { Auth-Type := Accept } } } When configuration is tested using radclient, Successful authentication: > echo "Calling-Station-Id=xx-xx-xx-xx-xx-xx" | radclient -s localhost:1812 auth testing123 Received response ID 55, code 2, length = 93 Reply-Message = "Device with MAC Address xx-xx-xx-xx-xx- authorized for network access" Total approved auths: 1 Total denied auths: 0 Total lost auths: 0 Failed Authentication: > echo "Calling-Station-Id=zz-zz-zz-zz-zz-zz" | radclient -s localhost:1812 auth testing123 Received response ID 220, code 3, length = 20 Total approved auths: 0 Total denied auths: 1 Total lost auths: 0 In case of unsuccessful authentication, no Reply-Message is displayed. What should I do if I need to enable messages for Access-Reject responses?
Set the Reply-Message in an update reply block, rather than update control. Using your example: update reply { Reply-Message := "Login Failed. MAC Address %{Calling-Station-ID} is NOT valid." }
Freeradius: No reply message for Failed Authentications
1,453,926,487,000
I'm building my own Embedded Linux distro using bitbake. I added udev to the list of dependencies (RDEPENDS). I noticed that the output of: udevadm info --query=property --path=/sys/block/sda is just: DEVNAME=/dev/sda DEVPATH=/devices/pci0000:00/0000:00:13.0/ata1/host0/target0:0:0/0:0:0:0/block/sda DEVTYPE=disk MAJOR=8 MINOR=0 SUBSYSTEM=block whereas I expect something like this (the output on my Ubuntu): DEVLINKS=/dev/disk/by-id/ata-WDC_WD10EALX-009BA0_WD-WMATR1360774 /dev/disk/by-id/wwn-0x50014ee2072ca983 DEVNAME=/dev/sda DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda DEVTYPE=disk ID_ATA=1 ID_ATA_DOWNLOAD_MICROCODE=1 ID_ATA_FEATURE_SET_HPA=1 ID_ATA_FEATURE_SET_HPA_ENABLED=1 ID_ATA_FEATURE_SET_PM=1 ID_ATA_FEATURE_SET_PM_ENABLED=1 ID_ATA_FEATURE_SET_PUIS=1 ID_ATA_FEATURE_SET_PUIS_ENABLED=0 ID_ATA_FEATURE_SET_SECURITY=1 ID_ATA_FEATURE_SET_SECURITY_ENABLED=0 ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=174 ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=174 ID_ATA_FEATURE_SET_SECURITY_FROZEN=1 ID_ATA_FEATURE_SET_SMART=1 ID_ATA_FEATURE_SET_SMART_ENABLED=1 ID_ATA_SATA=1 ID_ATA_SATA_SIGNAL_RATE_GEN1=1 ID_ATA_SATA_SIGNAL_RATE_GEN2=1 ID_ATA_WRITE_CACHE=1 ID_ATA_WRITE_CACHE_ENABLED=1 ID_BUS=ata ID_MODEL=WDC_WD10EALX-009BA0 ID_MODEL_ENC=WDC\x20WD10EALX-009BA0\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20 ID_PART_TABLE_TYPE=dos ID_REVISION=15.01H15 ID_SERIAL=WDC_WD10EALX-009BA0_WD-WMATR1360774 ID_SERIAL_SHORT=WD-WMATR1360774 ID_TYPE=disk ID_WWN=0x50014ee2072ca983 ID_WWN_WITH_EXTENSION=0x50014ee2072ca983 MAJOR=8 MINOR=0 SUBSYSTEM=block I want to grep for ID_BUS to check whether a device is USB or not, but it seems that the ID_* lines are missing. Do you know what I am missing here? My first guess is a missing package that I am not aware of. Thank you.
The problem was that udev was not started: /etc/init.d/udev start Conclusion: if you experience any issue related to udev, first make sure it has been started.
udevadm does not show expected information
1,453,926,487,000
I have a CentOS box running my Apache website and Postfix, but whenever I use PHP's mail() function to send to my @aevidi email address, the message is delivered to my local mailbox in ~/mail but is not visible in Zoho's webmail. When I send an email from my Gmail account to my @aevidi email, I see it in Zoho but not in my local mailbox (the expected result). Here is my setup: Zoho as my email host for my custom domain CentOS box with my site and mail server (Postfix) I followed this tutorial for my Zoho/SES/Postfix high-level setup (honestly not sure where SES comes into play) I followed this tutorial to set up Postfix (and Dovecot, before I realized what it did, but I have removed it since). /var/log/maillog after executing mail() Aug 3 20:05:37 aevidi postfix/pickup[7702]: 31ADD40BDD: uid=48 from=<[email protected]> Aug 3 20:05:37 aevidi postfix/cleanup[7707]: 31ADD40BDD: message-id=<20140804000537.31ADD40BDD@aevidi> Aug 3 20:05:37 aevidi postfix/qmgr[7703]: 31ADD40BDD: from=<[email protected]>, size=344, nrcpt=1 (queue active) Aug 3 20:05:37 aevidi postfix/local[7709]: 31ADD40BDD: to=<[email protected]>, relay=local, delay=0.03, delays=0.02/0.01/0/0, dsn=2.0.0, status=sent (delivered to maildir) Aug 3 20:05:37 aevidi postfix/qmgr[7703]: 31ADD40BDD: removed /etc/postfix/main.cf myhostname = aevidi #This is my FQDN when I type 'hostname -f' mydomain = aevidi.com myorigin = $mydomain home_mailbox = mail/ mynetworks = 127.0.0.0/8 inet_interfaces = all mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain smtpd_sasl_auth_enable = yes smtpd_sasl_type = cyrus smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes smtpd_sasl_authenticated_header = yes smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination smtpd_tls_auth_only = no smtp_use_tls = yes smtpd_use_tls = yes smtp_tls_note_starttls_offer = yes smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt smtpd_tls_CAfile =
/etc/postfix/ssl/cacert.pem smtpd_tls_received_header = yes smtpd_tls_session_cache_timeout = 3600s tls_random_source = dev:/dev/urandom relayhost = email-smtp.us-east-1.amazonaws.com:25 smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_tls_security_level = encrypt smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt DNS configuration Any help is appreciated!
If you want Postfix to deliver mail for aevidi.com by doing an MX lookup (i.e. to Zoho's mail servers, in your case), then you should remove $mydomain, i.e. aevidi.com, from mydestination.
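Concretely, with the main.cf from the question, mydestination would become something like the following (a sketch; the rest of the file is unchanged):

```
# /etc/postfix/main.cf -- $mydomain (aevidi.com) removed from the list,
# so mail for aevidi.com is routed via MX lookup (to Zoho) instead of
# being delivered to the local maildir.
mydestination = $myhostname, localhost.$mydomain, localhost
```

Run postfix reload (or systemctl reload postfix) afterwards for the change to take effect.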
Postfix is sending and receiving mail locally, but not to external mailbox
1,453,926,487,000
First off, I've installed Arch before, but managed not to encounter any of the problems I'm having at the moment (not sure how). Now, though, I'm well and truly stuck. First, my network interface is called enp3s0 rather than eth0, so every time I start Arch, I need to run ip link set enp3s0 up and then dhcpcd enp3s0. How do I configure this so it happens automatically? My second issue seems more peculiar: after booting into Arch, I installed the Enlightenment WM with pacman and tried to run it, but apparently I did not have a couple of Xorg packages, namely xorg-xinit and another which I forget. However, after installing these, editing the .xinitrc file, and running startx, I just got 3 white bash boxes on a black screen. Though if I run enlightenment_start in one of those boxes, Enlightenment starts fine (albeit with 3 terminal boxes open; 2 I can close fine, but if the third is closed, Enlightenment exits). I am certain this is not normal behaviour, and any help as to what I'm doing wrong here would be much appreciated.
Your first issue (the networking) can be automated with a systemd service file. You would use the dhcpcd@.service template unit; in your case: systemctl {enable,start} dhcpcd@enp3s0.service. The other issue is that, when starting X, you are still starting twm. See the Xorg page on the Arch Wiki for why this is happening, but essentially, X is being configured by /etc/X11/xinit/xinitrc. Create a correct ~/.xinitrc, log out and log back in as that user, and it should start Enlightenment.
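For the X part, a minimal ~/.xinitrc sketch (the three xterms and twm come from the system default /etc/X11/xinit/xinitrc, which is used when this per-user file is absent):

```
#!/bin/sh
# ~/.xinitrc -- consulted by startx/xinit instead of the system default.
# exec replaces the shell, so the X session ends when Enlightenment exits.
exec enlightenment_start
```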
Newly installed arch linux problems
1,453,926,487,000
AFAIK, FreeBSD has a strict separation between the base system and the userland. In theory, it seems possible to transfer the whole userland to another machine by simply copying it, if the base systems are equal (same version). Can I actually copy the userland by simple copying? If it's possible, what directories should I copy? If it is not, what is the major reason that prevents copying the userland configuration? Update The term userland seems to be wrong in the text. My intention was everything in the full OS except the base system; I don't know the correct wording for this.
Simple copying is insufficient because cp doesn't preserve uids/gids, flags, hard/symlinks, etc. But you can easily copy the userland with tar. If you mean packages/ports when you say "userland", you should try the pkg create -a command, which creates bundles in the current directory for all the software installed from ports/packages. Those bundles can be easily copied to the other machine(s) and installed there with pkg add *. Of course, you still have to copy all configuration files manually.
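The tar-pipe idiom referred to above can be demonstrated on a scratch directory (on a real system you would run it as root over the trees you want to move, e.g. /usr/local and /etc; the paths below are throwaway):

```shell
# Copy a tree while preserving permissions and symlinks, which plain cp loses.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/app"
echo "data" > "$src/app/file"
chmod 640 "$src/app/file"
ln -s file "$src/app/link"

# -p preserves permissions on extraction; the pipe avoids writing a
# temporary archive to disk.
(cd "$src" && tar -cf - .) | (cd "$dst" && tar -xpf -)

ls -l "$dst/app"
```

Run as root, tar also preserves ownership; between two machines the same pipe works over ssh, e.g. tar -cf - . | ssh otherhost 'cd /dest && tar -xpf -'.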
Is it possible to copy whole userland into other machine in FreeBSD?
1,453,926,487,000
I'm setting up a proFTPd server so I can upload files to my webserver, but I've never tried this before. I've installed proftpd and added a user with the home folder /home/FTP-shared and /bin/false as its shell. But what do I do now, configuration-wise, in proftpd to be able to log in with this user and upload, download, delete and so on? Also, my idea was to symlink to the Apache www folder from the ftp user's directory. Will that work?
For your first question, you can read it here. For your second question, I'm currently using mount --bind.
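A sketch of the bind-mount approach (paths are illustrative); unlike a symlink, a bind mount remains visible even if proftpd chroots the user into their home directory with DefaultRoot:

```
# One-off, as root: make the Apache docroot appear inside the FTP home.
mount --bind /var/www /home/FTP-shared/www

# Persistent equivalent, as a line in /etc/fstab:
# /var/www  /home/FTP-shared/www  none  bind  0  0
```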
Creating a proFTPd user
1,453,926,487,000
I have a MacBook Pro 6,1 that dual boots OS X and now Fedora 15. I'm working on getting the audio working. In System Settings > Sound > Output there are two options for audio devices: High Definition Audio Control Digital Stereo (HDMI) Internal Audio Analog Stereo There is no sound when the first option (HD Audio Control Digital Stereo) is selected. However, if the second (Internal Audio Analog Stereo) is selected, the headphone jack works, but the onboard speakers do not. I can't even tell what the actual audio hardware on the laptop is. The Apple website (http://support.apple.com/kb/SP621) just says, Stereo speakers with subwoofers, omnidirectional microphone, audio line in minijack (digital/analog), audio line out/headphone minijack (digital/analog) How do I get the speakers working?
I was able to get this working. The problem was that the channels were muted. See the instructions here: http://linsek.com/?q=node/12
Fedora 15 - Audio Configuration
1,453,926,487,000
In a Linux application I don't want to write my own configuration parser, but rather use one that is (or should be) already available, simply to keep the maintenance of the application config simple and to avoid adding extra libraries.
There really isn't a 'standard' configuration parser library. If you peruse through /etc, you will find some combination of:

- XML config
- Windows INI style config
- Basic KEY=VALUE config
- JSON (mostly seen with web applications)
- YAML (mostly seen with newer stuff, especially if written in Python)
- Things that look like JSON or YAML or XML, but aren't (see for example the configuration for Nginx (looks like JSON, but isn't), Apache (looks like XML, but isn't), and Unbound (looks like YAML, but isn't))
- Various shell languages
- Various things that look like shell languages but technically aren't
- Source snippets in various scripting languages
- Possibly other things I haven't thought of

As far as what to use with your app:

- Please, for the love of all things holy, avoid XML. It's unnecessarily verbose, very complicated to parse (and thus takes a long time and lots of memory), and brings a number of security issues. It's also non-trivial to get the proper balance between elements and attributes, and you will usually end up regretting choices made regarding that at some point later. The only real advantage here is that you're pretty much guaranteed to have a working XML parser on any system you come across.
- Windows style INI files are generally a safe bet, though they limit the complexity of your config structures. Lots of libraries exist for this, and your system probably already has at least one. They aren't as prevalent on Linux (classic config files are traditionally KEY=VALUE pairs without section headers), but they're still widely used, and they're easy to understand.
- Basic KEY=VALUE pairs (one per line, ideally) are so trivial to parse that you don't even need a library for it, but they are very limited in what they can do.
- JSON is safe and easy to parse, is widely supported (pretty much every major language has at least one parser these days), and supports arbitrary nesting of config structures. However, it doesn't support comments (some parsers might, but the results won't be interoperable), which is not great for files designed to be edited with a text editor.
- YAML is my personal favorite: it's reasonably safe and easy to parse, looks very natural to most people, supports comments, and has very minimal overhead. The only big thing here is that indentation really matters, as it accounts for about 80% of the syntax, which, combined with the fact that YAML requires spaces for indentation (no tabs), can make it a bit of a hassle to work with if you don't have a good editor.
- If you're using a scripting language, you might consider using source snippets for config, but be very careful doing this. Unless you're very careful about how you parse them, you're pretty much letting users do arbitrary things to your internal logic if they want to, which is a customer support nightmare (you will eventually get people complaining that you broke their config, which happened to include stuff poking at core program internals that you changed).
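The "basic KEY=VALUE" case from the list really is trivial: a hedged sketch in plain shell (file name and keys are invented), using no library at all:

```shell
# Create a throwaway config file to parse (name and keys are invented).
cat > app.conf <<'EOF'
# comment lines and blanks are skipped

host=example.org
port=8080
EOF

# Split each line on the first '='; any further '=' stays in the value.
while IFS='=' read -r key value; do
    case $key in ''|\#*) continue ;; esac   # skip blanks and comments
    printf '%s -> %s\n' "$key" "$value"
done < app.conf
```

This prints one `key -> value` pair per non-comment line.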
Which is the "standard" configuration parser library used in Linux?
1,453,926,487,000
When editing configuration files, such as /etc/sysctl.conf for example, it is often useful to do the update in an idempotent way, meaning that if the script is executed multiple times, you don't end up with multiple entries for the configuration change you made. As a real-world instance where I encountered this, I need to edit the above file in multi-stage ansible playbook. But the problem is that if the later stage fails, the playbook will need to be run again, which means that the update command may run multiple times, causing duplication if the update is not idempotent. So the question is how can you update such configuration files in an idempotent way? An example of a non-idempotent update would be: echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf or in ansible: - name: Set swappiness setting shell: echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf Ideally if the value of the variable is initially set, it should be replaced with new value.
Instead of running shell commands, use the sysctl module. If possible, avoid using shell. - ansible.posix.sysctl: name: vm.swappiness value: '10' state: present
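The same idempotency can be had outside Ansible too; a hedged plain-shell sketch (function and file names are made up, and keys are passed unescaped into the sed pattern, which is fine for a sketch): replace the key's line if present, append otherwise, so repeated runs converge to a single entry:

```shell
# set_kv FILE KEY VALUE: replace an existing "KEY=..." line or append one.
set_kv() {
    file=$1 key=$2 value=$3
    if grep -q "^$key=" "$file"; then
        sed -i "s/^$key=.*/$key=$value/" "$file"
    else
        printf '%s=%s\n' "$key" "$value" >> "$file"
    fi
}

: > demo.conf                     # throwaway stand-in for sysctl.conf
set_kv demo.conf vm.swappiness 10
set_kv demo.conf vm.swappiness 10 # running again does not duplicate
cat demo.conf                     # prints: vm.swappiness=10
```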
Edit configuration files idempotently
1,453,926,487,000
When I start fish, it prints: Welcome to fish, the friendly interactive shell Type `help` for instructions on how to use fish And then the prompt. I've actually used fish for a while so I don't need this welcome message. How can I disable it?
This has already been answered before, but tl;dr: you'll want to use the command

set -U fish_greeting ""

You can customise the welcome prompt too by typing what you want in the double quotes, e.g.

set -U fish_greeting "🐟"
Fish shell: How to disable help message?
1,453,926,487,000
I've been using GNU/Linux for over a year now. And there's this question to which I need an answer from you, Linux gurus: What language(s) do config files like .bashrc, .vimrc, .i3status.conf, .conkyrc, .xinitrc, etc. use?
There's no global standard; they can be (and are) all different syntaxes. For example:

- the bashrc is simply a bash script;
- the vimrc is a vimscript script;
- i3 uses its own syntax that's pretty close to a scripting language (they claim it isn't a programming language, but I think they're lying there: the conditional screen placement thing looks so much like one that you could probably build a Turing machine out of it);
- xinitrc is just an arbitrary script (which will be run by the interpreter specified in the #! line at the beginning of the file, so it could just as well be, say, Python, bash, zsh, tcl, perl, …);
- Conky uses JSON or YAML, I think.

Essentially, there is no standard, and you always need to read the documentation.
What language do config files use?
1,590,078,237,000
I have a script like this:

#!/bin/bash
. config
./sub-script

There are a lot of variables in config and I don't want to pass them to the sub-script like ./sub-script arg1 arg2 ... arg100500. Also, I don't like the idea of sourcing config from the sub-script, because in general it may not know where the config file resides. Is there any other way to make the config available in the sub-script?
You can export your variables: VAR=foo export VAR or: export VAR=foo However, these variables will be visible in the environment of all subprocesses.
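A quick sketch of the difference (sh -c stands in for ./sub-script here):

```shell
# Only exported variables reach child processes.
EXPORTED=visible
export EXPORTED
UNEXPORTED=hidden      # set, but not exported

# The child shell sees EXPORTED but not UNEXPORTED.
sh -c 'echo "${EXPORTED:-unset} ${UNEXPORTED:-unset}"'   # prints: visible unset
```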
BASH: pass arguments to sub-script
1,590,078,237,000
The Linux kernel has a large set of parameters that enable users to adjust the kernel behavior without recompiling it. There doesn't seem to be a parameter to adjust the preemption model: -*- Preemption Model No Forced Preemption (Server) (PREEMPT_NONE) --> Voluntary Kernel Preemption (Desktop) (PREEMPT_VOLUNTARY) Preemptible Kernel (Low-Latency Desktop) (PREEMPT) Why didn't the kernel developers add a switch to choose between them?
Preemption is implemented using a (large) number of explicit preemption points (look for might_sleep in the kernel), many of which are in somewhat hot paths. Full preemption (CONFIG_PREEMPT) has an even greater impact; for instance spinlocks check the preemption count (at least, on non-SMP kernels), as do IRQs. Currently, the chosen preemption model is applied at compile-time; as a result, with no forced preemption, none of the preemption code survives in the kernel, and you get maximum throughput. Likewise, a voluntary preemption kernel doesn’t have any preemption checks in IRQ and kernel entry points. Changing this so that preemption could be changed at runtime would mean that all configurations would have to check the preemption setting, at least at boot, and suffer some cost even in the best case (e.g., even if preemption was a boot-time setting, and the “no preemption” setting could patch out the relevant call sites, you’d still end up with do-nothing code taking up precious space in code caches).
Why can't the Linux preemption model be changed by kernel parameter? [closed]
1,590,078,237,000
I have multiple errors while trying to start hostapd. I will post the error output which I got after I tried to start it and looked at the status:

root@l0calh0st:~# service hostapd status
● hostapd.service - Advanced IEEE 802.11 AP and IEEE 802.1X/WPA/WPA2/EAP Authenticator
   Loaded: loaded (/lib/systemd/system/hostapd.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2018-01-07 16:42:38 CET; 4s ago
  Process: 1682 ExecStart=/usr/sbin/hostapd -P /run/hostapd.pid -B $DAEMON_OPTS ${DAEMON_CONF} (code=exited, sta

Jan 07 16:42:38 l0calh0st systemd[1]: Starting Advanced IEEE 802.11 AP and IEEE 802.1X/WPA/WPA2/EAP Authenticato
Jan 07 16:42:38 l0calh0st hostapd[1682]: Configuration file:
Jan 07 16:42:38 l0calh0st hostapd[1682]: Could not open configuration file '' for reading.
Jan 07 16:42:38 l0calh0st hostapd[1682]: Failed to set up interface with
Jan 07 16:42:38 l0calh0st hostapd[1682]: Failed to initialize interface
Jan 07 16:42:38 l0calh0st systemd[1]: hostapd.service: Control process exited, code=exited status=1
Jan 07 16:42:38 l0calh0st systemd[1]: hostapd.service: Failed with result 'exit-code'.
Jan 07 16:42:38 l0calh0st systemd[1]: Failed to start Advanced IEEE 802.11 AP and IEEE 802.1X/WPA/WPA2/EAP Authe

I don't know why it doesn't find the configuration file; it seems to search for '', i.e. nothing! I didn't edit anything! Here is what happens if I try to start it normally:

root@l0calh0st:~# service hostapd start
Job for hostapd.service failed because the control process exited with error code.
See "systemctl status hostapd.service" and "journalctl -xe" for details.

If I use the command journalctl -xe I get:

root@l0calh0st:~# journalctl -xe
-- Unit systemd-tmpfiles-clean.service has finished starting up.
--
-- The start-up result is RESULT.
Jan 07 16:57:44 l0calh0st systemd[1]: Starting Advanced IEEE 802.11 AP and IEEE 802.1X/WPA/WPA2/EAP Authenticator...
-- Subject: Unit hostapd.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit hostapd.service has begun starting up.
Jan 07 16:57:44 l0calh0st hostapd[1865]: Configuration file:
Jan 07 16:57:44 l0calh0st hostapd[1865]: Could not open configuration file '' for reading.
Jan 07 16:57:44 l0calh0st hostapd[1865]: Failed to set up interface with
Jan 07 16:57:44 l0calh0st hostapd[1865]: Failed to initialize interface
Jan 07 16:57:44 l0calh0st systemd[1]: hostapd.service: Control process exited, code=exited status=1
Jan 07 16:57:44 l0calh0st systemd[1]: hostapd.service: Failed with result 'exit-code'.
Jan 07 16:57:44 l0calh0st systemd[1]: Failed to start Advanced IEEE 802.11 AP and IEEE 802.1X/WPA/WPA2/EAP Authenticator.
-- Subject: Unit hostapd.service has failed
-- Defined-By: systemd
--
-- Unit hostapd.service has failed.
--
-- The result is RESULT.
lines 1521-1543/1543 (END)

Edit: OK, I found the file hostapd.service, but I don't see anything commented out (#?):

[Unit]
Description=Advanced IEEE 802.11 AP and IEEE 802.1X/WPA/WPA2/EAP Authenticator
After=network.target

[Service]
Type=forking
PIDFile=/run/hostapd.pid
EnvironmentFile=/etc/default/hostapd
ExecStart=/usr/sbin/hostapd -P /run/hostapd.pid -B $DAEMON_OPTS ${DAEMON_CONF}

[Install]
WantedBy=multi-user.target

This is the /etc/default/hostapd file:

# Defaults for hostapd initscript
#
# See /usr/share/doc/hostapd/README.Debian for information about alternative
# methods of managing hostapd.
#
# Uncomment and set DAEMON_CONF to the absolute path of a hostapd configuration
# file and hostapd will be started during system boot. An example configuration
# file can be found at /usr/share/doc/hostapd/examples/hostapd.conf.gz
#
#DAEMON_CONF=""

# Additional daemon options to be appended to hostapd command:-
#   -d   show more debug messages (-dd for even more)
#   -K   include key data in debug messages
#   -t   include timestamps in some debug messages
#
# Note that -B (daemon mode) and -P (pidfile) options are automatically
# configured by the init.d script and must not be added to DAEMON_OPTS.
#
#DAEMON_OPTS=""
Your system is using systemd. In some distributions, I've noticed that using the legacy service wrapper can hide some error messages that would be visible using the systemd-native systemctl command. But it looks like there is enough information here.

In the /lib/systemd/system/hostapd.service file, the line that determines the actual command used to start hostapd is apparently this:

ExecStart=/usr/sbin/hostapd -P /run/hostapd.pid -B $DAEMON_OPTS ${DAEMON_CONF}

Since it includes environment variables that are outside systemd's default set (see man systemd.exec for details), the hostapd.service file should have an option like Environment=, EnvironmentFile= or PassEnvironment=. Probably something like:

EnvironmentFile=/etc/default/hostapd

If such a file exists, it probably has some commented-out defaults you'll need to edit to match your system configuration and then uncomment before you can start hostapd. Usually such files are prepared by your distribution maintainers and have helpful comments describing what you'll need to do. If not, there might be some distribution-specific information in the /usr/share/doc/hostapd-*/ directory, which you should read first.
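Given the unit shown in the question, /etc/default/hostapd is that file, and ${DAEMON_CONF} stays empty until its commented-out default is activated. A hedged sketch that works on a local copy (the target path /etc/hostapd/hostapd.conf is an assumption; use wherever your config actually lives):

```shell
# Work on a local stand-in so the sketch is safe to run anywhere;
# on the real system you would edit /etc/default/hostapd itself.
printf '#DAEMON_CONF=""\n' > hostapd.default

# Uncomment the line (if needed) and point it at the real config file.
sed -i 's|^#\?DAEMON_CONF=.*|DAEMON_CONF="/etc/hostapd/hostapd.conf"|' hostapd.default
grep '^DAEMON_CONF' hostapd.default   # prints: DAEMON_CONF="/etc/hostapd/hostapd.conf"
```

After editing the real file, run systemctl restart hostapd again.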
Can't start hostapd because of multiple errors
1,590,078,237,000
I need a kernel compiled featuring the qcserial module, to have support for the Huawei EM 680 model (Gobi 3000). I got kernel 3.11.6 and can find the appropriate source file in ./drivers/usb/serial/qcserial.c, but how can I make sure it gets compiled and loaded statically? I can't find it in the kernel config dialog... any ideas? I'm cross-compiling this kernel for an ARM AT91 CPU and I need support for the above cell modem... "As of linux-3.1.1-1 the device is detected by the qcserial module" is what I found on https://wiki.archlinux.org/index.php/Gobi_Broadband_Modems

Edit (modem-manager): after installing modem-manager, I tried to launch it, but I don't really get anything; see the screen output below:

# modem-manager
modem-manager[2417]: <info> ModemManager (version 0.5.2.0) starting...
modem-manager[2417]: <info> Loaded plugin Novatel
modem-manager[2417]: <info> Loaded plugin ZTE
modem-manager[2417]: <info> Loaded plugin Option High-Speed
modem-manager[2417]: <info> Loaded plugin Longcheer
modem-manager[2417]: <info> Loaded plugin Ericsson MBM
modem-manager[2417]: <info> Loaded plugin Samsung
modem-manager[2417]: <info> Loaded plugin Nokia
modem-manager[2417]: <info> Loaded plugin SimTech
modem-manager[2417]: <info> Loaded plugin Huawei
modem-manager[2417]: <info> Loaded plugin MotoC
modem-manager[2417]: <info> Loaded plugin X22X
modem-manager[2417]: <info> Loaded plugin Generic
modem-manager[2417]: <info> Loaded plugin Sierra
modem-manager[2417]: <info> Loaded plugin Option
modem-manager[2417]: <info> Loaded plugin Wavecom
modem-manager[2417]: <info> Loaded plugin Linktop
modem-manager[2417]: <info> Loaded plugin Gobi
modem-manager[2417]: <info> Loaded plugin AnyData
modem-manager[2417]: <info> (ttyUSB0) opening serial port...
modem-manager[2417]: <info> (ttyUSB1) opening serial port...
modem-manager[2417]: <info> (ttyUSB2) opening serial port...
modem-manager[2417]: <info> (ttyS1) opening serial port...
modem-manager[2417]: <info> (ttyS2) opening serial port...
modem-manager[2417]: <info> (ttyS3) opening serial port... modem-manager[2417]: <info> (ttyS4) opening serial port... modem-manager[2417]: <info> (ttyS0) opening serial port... modem-manager[2417]: <info> (ttyUSB0) closing serial port... modem-manager[2417]: <info> (ttyUSB0) serial port closed modem-manager[2417]: <info> (ttyUSB0) opening serial port... modem-manager[2417]: <info> (ttyUSB1) closing serial port... modem-manager[2417]: <info> (ttyUSB1) serial port closed modem-manager[2417]: <info> (ttyUSB1) opening serial port... modem-manager[2417]: <info> (ttyUSB2) closing serial port... modem-manager[2417]: <info> (ttyUSB2) serial port closed modem-manager[2417]: <info> (ttyUSB2) opening serial port... modem-manager[2417]: <info> (ttyS1) closing serial port... modem-manager[2417]: <info> (ttyS1) serial port closed modem-manager[2417]: <info> (ttyS1) opening serial port... modem-manager[2417]: <info> (ttyS2) closing serial port... modem-manager[2417]: <info> (ttyS2) serial port closed modem-manager[2417]: <info> (ttyS2) opening serial port... modem-manager[2417]: <info> (ttyS3) closing serial port... modem-manager[2417]: <info> (ttyS3) serial port closed modem-manager[2417]: <info> (ttyS3) opening serial port... modem-manager[2417]: <info> (ttyS4) closing serial port... modem-manager[2417]: <info> (ttyS4) serial port closed modem-manager[2417]: <info> (ttyS4) opening serial port... modem-manager[2417]: <info> (ttyS0) closing serial port... modem-manager[2417]: <info> (ttyS0) serial port closed modem-manager[2417]: <info> (ttyS0) opening serial port... modem-manager[2417]: <info> (ttyUSB0) closing serial port... modem-manager[2417]: <info> (ttyUSB0) serial port closed modem-manager[2417]: <info> (ttyUSB1) closing serial port... modem-manager[2417]: <info> (ttyUSB1) serial port closed modem-manager[2417]: <info> (ttyUSB2) closing serial port... modem-manager[2417]: <info> (ttyUSB2) serial port closed modem-manager[2417]: <info> (ttyS1) closing serial port... 
modem-manager[2417]: <info> (ttyS1) serial port closed
modem-manager[2417]: <info> (ttyS2) closing serial port...
modem-manager[2417]: <info> (ttyS2) serial port closed
modem-manager[2417]: <info> (ttyS3) closing serial port...
modem-manager[2417]: <info> (ttyS3) serial port closed
modem-manager[2417]: <info> (ttyS4) closing serial port...
modem-manager[2417]: <info> (ttyS4) serial port closed
modem-manager[2417]: <info> (ttyS0) closing serial port...
modem-manager[2417]: <info> (ttyS0) serial port closed

Which is kind of odd, as when I plug in the modem, in dmesg I now get:

usb 1-1: New USB device found, idVendor=12d1, idProduct=14f1
usb 1-1: New USB device strings: Mfr=3, Product=2, SerialNumber=0
usb 1-1: Product: Huawei EM680 w/Gobi Technology
usb 1-1: Manufacturer: HUAWEI Incorporated
qcserial 1-1:1.1: Qualcomm USB modem converter detected
usb 1-1: Qualcomm USB modem converter now attached to ttyUSB0
qcserial 1-1:1.2: Qualcomm USB modem converter detected
usb 1-1: Qualcomm USB modem converter now attached to ttyUSB1
qcserial 1-1:1.3: Qualcomm USB modem converter detected
usb 1-1: Qualcomm USB modem converter now attached to ttyUSB2
If you look into drivers/usb/serial/Makefile, you'll see that CONFIG_USB_SERIAL_QUALCOMM is responsible for this driver. Execute make menuconfig and go to "Device Drivers" -> "USB support" -> "USB Serial Converter support" -> "USB Qualcomm Serial modem". Select it as built-in (<*>) rather than as a module (<M>) to have it compiled statically into the kernel.
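Non-interactively, the same choice is a one-line edit of .config: =m builds qcserial as a module, =y links it in statically. A hedged sketch on a throwaway file (on a real tree, the kernel's scripts/config helper performs the equivalent edit):

```shell
# A two-line stand-in for a real kernel .config.
printf 'CONFIG_USB_SERIAL=y\nCONFIG_USB_SERIAL_QUALCOMM=m\n' > config.demo

# Flip the option from module (=m) to built-in (=y).
sed -i 's/^CONFIG_USB_SERIAL_QUALCOMM=m$/CONFIG_USB_SERIAL_QUALCOMM=y/' config.demo
grep QUALCOMM config.demo   # prints: CONFIG_USB_SERIAL_QUALCOMM=y
```

After changing .config, run make olddefconfig (or just make) so dependent options are resolved before building.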
how do I include the qcserial module in the kernel?
1,590,078,237,000
Installed a fresh Debian Wheezy to enjoy Gnome 3, but it starts in fallback mode. I suppose that's because the loaded drivers do not support 3D acceleration. Installed packages I know are relevant: xserver-xorg-video-ati, libgl1-mesa-dri. Gnome 3 was working fine with Ubuntu 12.04, and I believe it was using the FOSS drivers. Interestingly, there is no /etc/X11/xorg.conf, and when I try to generate it with Xorg -configure I get:

X.Org X Server 1.12.1
Release Date: 2012-04-13
X Protocol Version 11, Revision 0
Build Operating System: Linux 3.2.0-2-amd64 x86_64 Debian
Current Operating System: Linux blackwhisper 3.2.0-2-amd64 #1 SMP Mon Apr 30 05:20:23 UTC 2012 x86_64
Kernel command line: BOOT_IMAGE=/vmlinuz-3.2.0-2-amd64 root=UUID=e6f57a36-19aa-4dfc-9b61-32d5e08abcc6 ro quiet
Build Date: 07 May 2012 12:15:23AM
xorg-server 2:1.12.1-2 (Cyril Brulebois <[email protected]>)
Current version of pixman: 0.24.4
Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Sat May 19 20:15:31 2012
List of video drivers:
    mga
    ...MANYMORE
    radeon
    ...MANYMORE
    ati
    ...MANYMORE
    vesa
(++) Using config file: "/root/xorg.conf.new"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
(II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting.
Number of created screens does not match number of detected devices.
Configuration failed.
Server terminated with error (2). Closing log file.

ADDITION: I now found this in the boot messages:

[    8.121829] [drm] Loading RS780 Microcode
[    8.156063] r600_cp: Failed to load firmware "radeon/RS780_pfp.bin"
[    8.156092] [drm:r600_startup] *ERROR* Failed to load firmware!
The firmware for your graphics card is missing. You have to explicitly install firmware-linux-nonfree from the non-free repository:

1. Add the non-free repository to /etc/apt/sources.list (or /etc/apt/sources.list.d/)
2. Run apt-get update as root
3. Install the firmware with apt-get install firmware-linux-nonfree

You probably have to reboot after this step, or reload your device driver.

Just some additional background information: most current devices require some kind of firmware blob to run. Debian decided to move these kinds of blobs into a non-free package (you can't alter them, you don't know what they are doing, and sometimes they are not even distributable).
How to configure FOSS ATI drivers on Debian Wheezy and ATI RS880 [Radeon HD 4250]?
1,590,078,237,000
I'm trying to set up the HashiCorp Vault agent as a systemd service. I can manually run the agent as the user vault. Note, perhaps it's important: here's the /etc/passwd entry for that user:

vault:x:994:989::/home/vault:/bin/false

So I need to do sudo su -s /bin/bash vault to get a vault session. With that in mind, I can run vault agent -config=<pathToConfig> and it works. Now here is the /usr/lib/systemd/system/vault-agent.service I've set up:

[Unit]
Description="HashiCorp Vault - A tool for managing secrets"
Documentation=https://www.vaultproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/vault.d/vault.hcl

[Service]
User=vault
Group=vault
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/bin/vault agent -non-interactive -config=/etc/vault.d/agent-config-prod.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=no
RestartSec=5
TimeoutStopSec=30
StartLimitIntervalSec=60
StartLimitBurst=3
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

This is a service configuration I've found in multiple places. But I always get the same issue:

Error storing PID: could not open pid file: open ./pidfile: permission denied

I tried replacing the ExecStart= with /bin/whoami, just to be sure: yes, it's indeed vault. Permissions and location of that ./pidfile (default install location): /etc/vault.d/pidfile

drwxr-xr-x. 108 root  root  8192 May 15 16:32 etc
drwxr-xr-x    3 vault vault  113 May 15 17:43 vault.d
-rwxrwxrwx    1 vault vault    0 May 15 17:48 pidfile   # not the default permissions, but I am desperate

I am really suspicious about the sudo su -s /bin/bash vault command, which perhaps grants the vault user more privileges. If so, how do I incorporate that into my service? I ran systemctl daemon-reload every time, and SELinux is disabled.
PS: if someone has a great link about how to set up a systemd service for the Vault agent (not as root), I'll take it.

EDIT: about sudo -s /bin/bash vault:

$ sudo -s /bin/bash vault
/bin/vault: cannot execute binary file
$ su -s /bin/bash vault
Password: (and I have no password, or I don't know it)

So that's why I'm using the full sudo su -s /bin/bash vault command.
The option ProtectSystem=full literally mounts /etc as read-only for the process defined in the service:

Takes a boolean argument or the special values "full" or "strict". If true, mounts the /usr/ and the boot loader directories (/boot and /efi) read-only for processes invoked by this unit. If set to "full", the /etc/ directory is mounted read-only, too.

You should either move the pidfile to a location writable by that process, or remove the ProtectSystem=full option from the service file. You should also look into all of the other systemd service options you are using and make sure you understand what they do: there are a number of other restrictions in there that may cause problems with your setup.
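A hedged alternative to dropping the protection is to keep ProtectSystem=full and move the pid file to /run, which remains writable. The agent reads its pid-file path from its HCL config, so both sides must agree; the directory and file names below are assumptions:

```ini
# In agent-config-prod.hcl (the Vault agent's own pid-file setting):
pid_file = "/run/vault/pidfile"

# In the [Service] section of vault-agent.service:
RuntimeDirectory=vault        ; systemd creates /run/vault owned by User=/Group=
PIDFile=/run/vault/pidfile
```

With RuntimeDirectory=, systemd creates and cleans up /run/vault for you, so the service never needs write access under /etc.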
systemd / service user has not the same rights / permission as same user in a shell
1,590,078,237,000
I need to edit /etc/lightdm/lightdm.conf using sed inside specific section, uncomment and set value. This section is [Seat:*] and line is #autologin-user= I expect this change: Before: [LightDM] . . . [Seat:*] . . . #autologin-user= . . . After: [LightDM] . . . [Seat:*] . . . autologin-user=pi . . . I've tried this command: sed -i.bak '/^\[Seat:*]/{s/#autologin-user/autologin-user=pi/}' /etc/lightdm/lightdm.conf But without success. PS: There are more occurrences of #autologin-user, so selecting section [Seat:*] is really important.
Your original command applies the substitution only to lines that match /^\[Seat:*]/, i.e. the section header itself, so the #autologin-user line is never touched; you need an address range that spans from the header to the start of the next section. Try this, given an altered input file sample:

[LightDM]
[Seat:*]
#autologin-user=
[Foo:*]
#autologin-user=
[Bar:*]
#autologin-user=

The command:

$ sed '/^\[Seat:\*\]$/,/\[/s/^#autologin-user=$/autologin-user=pi/' foo.txt
[LightDM]
[Seat:*]
autologin-user=pi
[Foo:*]
#autologin-user=
[Bar:*]
#autologin-user=
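The same command works in place with -i; a quick sanity check on a throwaway copy (file name made up):

```shell
# Build a small stand-in for lightdm.conf with the key repeated in two sections.
cat > lightdm.demo <<'EOF'
[LightDM]
[Seat:*]
#autologin-user=
[Foo:*]
#autologin-user=
EOF

# Restrict the substitution to the lines between [Seat:*] and the next header.
sed -i '/^\[Seat:\*\]$/,/^\[/s/^#autologin-user=$/autologin-user=pi/' lightdm.demo
grep -n 'autologin-user' lightdm.demo
```

Only the occurrence inside [Seat:*] is uncommented; the one under [Foo:*] stays untouched.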
Enable lightdm autologin using sed
1,590,078,237,000
I'm trying to connect to a gateway via SSH. In order to connect to the gateway I write ssh root@ip_GW, where ip_GW is the IP of the gateway. So, in order to not always have to write the IP address of the GW, I made an alias in the .ssh/config file like this (I also made a key to connect passwordless):

Host GW2
    #IdentitiesOnly=yes
    HostName ip_GW
    IdentityFile ~/.ssh/id_rsa_GW2
    User root

Now I can connect passwordless to the gateway with ssh GW2, and it works just fine. The problem is that if I again write the specific IP address of the GW, like ssh root@ip_GW, it gives the error:

Too many authentication failures

I really need to be able to connect to the gateway using the specific IP address as well. What should I do?
Add the IP address to the list of the host name patterns that should match the configuration section. Here, both GW2 and 203.0.113.1 will match:

Host GW2 203.0.113.1
    #IdentitiesOnly=yes
    HostName 203.0.113.1
    IdentityFile ~/.ssh/id_rsa_GW2
    User root

You can find this documented (briefly) with man ssh_config:

Host
    Restricts the following declarations (up to the next Host or Match keyword) to be only for those hosts that match one of the patterns given after the keyword. If more than one pattern is provided, they should be separated by whitespace. A single * as a pattern can be used to provide global defaults for all hosts. The host is usually the hostname argument given on the command line (see the CanonicalizeHostname keyword for exceptions).
~/.ssh/config Host entry not honored when connecting via IP address
1,590,078,237,000
We can put a 3rd-party app's new global PATH in /etc/profile, appending to the original $PATH, OK. But I can see that the /etc/profile file is provided by a package: aaa_base. What would happen if someone upgraded aaa_base? The Q: how can we ensure that the $PATH addition lives somewhere an upgrade wouldn't modify?
Since you have an existing /etc/profile.d directory (and presumably the corresponding /etc/profile or /etc/${SHELL}rc files that source files in that directory), I'd recommend placing an /etc/profile.d/3rd-party-app.sh and/or /etc/profile.d/3rd-party-app.csh file with the required code.

If you are the packager of the 3rd-party app, you could include those files in the packaging so that they are installed, updated, and removed by the package manager. Otherwise, as a user of the software, placing those files there will make them unmanaged, and so unaffected by OS package updates.

UPDATE from OP: https://www.suse.com/documentation/sles11/book_sle_admin/data/sec_adm_whatistheshell.html

/etc/profile        Do not modify this file, otherwise your modifications can be destroyed during your next update!
/etc/profile.local  Use this file if you extend /etc/profile
/etc/profile.d/     Contains system-wide configuration files for specific programs
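A hedged sketch of such a drop-in file, e.g. /etc/profile.d/3rd-party-app.sh (the install path is made up); the guard keeps re-sourcing from accumulating duplicate PATH entries:

```shell
# Append the app's bin directory to PATH only if it is not already there.
APP_BIN=/opt/3rd-party-app/bin        # assumed install location
case ":$PATH:" in
    *:"$APP_BIN":*) ;;                # already present: do nothing
    *) PATH="$PATH:$APP_BIN" ;;
esac
export PATH
```

Because the file lives in /etc/profile.d/ rather than /etc/profile itself, a package update of aaa_base leaves it alone.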
Update $PATH variable that survives updates?
1,590,078,237,000
Running ifconfig returns the following:

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 192.168.1.39  netmask 255.255.255.0  broadcast 192.168.1.255
    inet6 fe80::a00:27ff:fefa:258e  prefixlen 64  scopeid 0x20<link>
    ether 08:00:27:fa:25:8e  txqueuelen 1000  (Ethernet)
    RX packets 0  bytes 0 (0.0 B)
    RX errors 0  dropped 0  overruns 0  frame 0
    TX packets 20  bytes 1368 (1.3 KiB)
    TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

My /etc/network/interfaces looks like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*

auto eth0
iface eth0 inet static
    address 192.168.1.39
    netmask 255.255.255.0
    gateway 192.168.1.1

route -n returns the following:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

I am using the following DNS:

domain **
search **
nameserver 172.139.62.5
nameserver 8.8.8.8

(Stars hide my local DNS, which I'd rather not make public.) Using a non-static IP does work. I am running Linux in a VM.
It appears that the gateway address you are using, 192.168.1.1, does not match the address of your router. If you are using static IP addressing, the details must match your network or it cannot work. Start with the IP address of your router. It might be 192.168.1.254. It might be 10.11.12.13. It could be something else entirely. You then need the netmask, again from your router configuration, which will either be something like 24 or something like 255.255.255.0. For every 255 in the netmask you need to copy through the corresponding numbers from your router's IP address. So three lots of 255 would mean you copy through the first three groups of numbers. (If you have only a single-number netmask such as 24, divide it by 8, and the result gives you the count of the numbers you must copy through.) Finally, you need to assign unused values so that you end up with four groups of numbers. Assume your router is 10.1.1.254 and your netmask is 255.255.0.0. Then you'd copy through 10.1 and come up with the remaining two numbers in the range 1-254. The result is your IP address, e.g. 10.1.44.66, but do not use a number group that's already in use!
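The arithmetic above, sketched for the common 255.255.255.0 (/24) case:

```shell
router=192.168.1.1
prefix_bits=24                          # netmask 255.255.255.0
octets_to_copy=$((prefix_bits / 8))     # 24 / 8 = 3 octets to copy through
network=$(echo "$router" | cut -d. -f1-"$octets_to_copy")
echo "$network"                         # prints: 192.168.1
echo "$network.39"                      # one possible free host address
```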
Internet not working when using Static IP. No current solutions working [closed]
1,590,078,237,000
I have configured Apache 2 in Debian 8 and created a website. I created a new .conf file in the /etc/apache2/sites-available directory and edited it to point to the directory which holds the .html file. My question is: when I enter my URL, I get a page which has my .html file as a link, and I have to click on it to view the page. The URL after I click the link is: www.mysite.com/file1.html. How can I go directly to file1.html when I enter mysite.com, with no link? Thanks
httpd has a list of files to try to display when you give a URL that points to a directory, which can be configured with the DirectoryIndex directive. (Its default value is just index.html, so if you renamed your file1.html to index.html, it would also show immediately.) The listing that you see is generated by mod_autoindex; but if you disable it and don't have an index.html (or similar), you'd just get a permission denied error page instead.
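A hedged sketch for the site's .conf (the directory path is an assumption; alternatively, just rename file1.html to index.html):

```apacheconf
<Directory /var/www/mysite>
    # Serve file1.html (or index.html, if present) when the URL names the directory.
    DirectoryIndex file1.html index.html
</Directory>
```

Reload Apache afterwards (e.g. systemctl reload apache2) so the new directive takes effect.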
How to create website in Apache 2 Debian 8?
1,590,078,237,000
I'm trying to make use of Ansible for configuration management and centralized administration. All the machines I'm interested about are actually containers on the host which is going to run Ansible. Currently I am writing a dynamic inventory script that groups the different hosts and makes certain hostvars available per group and also per host. How can I use the inventory information to run local tasks? Example: I have a container named foo and the dynamic inventory defines certain items like IP address, cgroup limits and so on for it. How can I reuse that information before the guest container is even up, in order to generate (using the usual Jinja2 templates) the container configuration on the host?
If I understand correctly, you need to access some Ansible variables defined for a generic host. You can access all hosts' variables through the hostvars dictionary, which has the hostname as its primary key. For your example:

{{ hostvars['foo']['ipv4']['address'] }}

Credit goes to:

https://docs.ansible.com/playbooks_variables.html#magic-variables-and-how-to-access-information-about-other-hosts
https://serverfault.com/questions/638507/how-to-access-host-variable-of-a-different-host-with-ansible
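A hedged playbook sketch along those lines: the play targets the host itself, while the template pulls variables for the not-yet-running container foo from the inventory (paths, the template name, and the ansible_host variable are assumptions about your dynamic inventory):

```yaml
# Render a container config on the host before the container exists.
- hosts: localhost
  connection: local
  tasks:
    - name: Render config for container foo from its inventory variables
      template:
        src: templates/container.conf.j2
        dest: /etc/containers/foo.conf
      vars:
        container_ip: "{{ hostvars['foo']['ansible_host'] }}"
```

hostvars is populated from the inventory alone, so foo only needs to be defined there, not reachable.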
How can I reuse the Ansible inventory for local tasks?
1,590,078,237,000
Is there a way to set default options for cryptsetup? For example, let's say I want to make sure that I only open cryptsetup devices with the -r option. I would like to add it to a config file so that I don't have to type it every time (and potentially forget it). Reading man cryptsetup did not reveal any information.
AFAIK there is no configuration file for cryptsetup. You can of course define an alias and put that somewhere where it gets read in at login: alias cryptsetup='cryptsetup --readonly'
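If scripts (not just interactive shells) should get the default too, a shell function works where an alias would not. The sketch below substitutes a stub for the real binary purely to make the dispatch visible; in a real ~/.bashrc you would call command cryptsetup instead:

```shell
# Stand-in for the real /sbin/cryptsetup: just echoes its arguments.
cryptsetup_real() { echo "argv: $*"; }

# Wrapper that injects --readonly only for "open" actions.
cryptsetup() {
    case $1 in
        open|luksOpen) cryptsetup_real --readonly "$@" ;;
        *)             cryptsetup_real "$@" ;;
    esac
}

cryptsetup open /dev/sdb1 backup   # prints: argv: --readonly open /dev/sdb1 backup
cryptsetup status backup           # prints: argv: status backup
```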
setting default options for cryptsetup
1,590,078,237,000
When searching for the string /var/log in the default configuration files of Apache, it cannot be found anywhere. But log files are defined, as such:

# part of httpd.conf
ErrorLog logs/error_log

I cannot understand how Apache decides where to place its error log file. Why does Apache not define exactly where the log files will be sent? Or, if you prefer, where is the root directory for log files defined in Apache?
If you don't use an absolute path, Apache assumes the path is relative to the ServerRoot directive. According to the Apache docs:

The ErrorLog directive sets the name of the file to which the server will log any errors it encounters. If the file-path is not absolute then it is assumed to be relative to the ServerRoot.

In most Apache versions, ServerRoot defaults to /usr/local/apache.
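So the two directives combine like this (using the compile-time default mentioned above):

```apacheconf
ServerRoot "/usr/local/apache"
ErrorLog logs/error_log
# errors therefore land in /usr/local/apache/logs/error_log
```

Give ErrorLog an absolute path (e.g. /var/log/apache2/error_log) if you want the location to be independent of ServerRoot.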
Why does apache not define where to log by default?
1,590,078,237,000
I'm using zsh 5.0.2 (x86_64-apple-darwin12.3.0) on the latest MacOSX. If it makes any difference, I have also enabled oh-my-zsh. The shell seems to be missing the .zshrc file when I want to source it. The result of the execution of the following commands should expose my problem clearly. (~ » is my prompt). The file exists ~ » ls -l .zshrc -rw-r--r-- 1 user staff 1882 May 16 10:43 .zshrc ~ » [[ -r .zshrc ]] && echo "Exists" Exists The file isn't read by "." ~ » . .zshrc .: no such file or directory: .zshrc I don't know what causes this behavior, especially that this works ~ » . "$(pwd)"/.zshrc Success! My question is: Why won't . .zshrc work?
The . command searches for the file in your $path, it does not by default search in the current directory. That is why it works when you give the absolute path ("$(pwd)"/.zshrc). From the zsh manual about the . command: . file [ arg ... ] Read commands from file and execute them in the current shell environment. If file does not contain a slash, or if PATH_DIRS is set, the shell looks in the components of $path to find the directory containing file. Files in the current directory are not read unless ‘.’ appears somewhere in $path. ... Compare to the source command: source file [ arg ... ] Same as ‘.’, except that the current directory is always searched and is always searched first, before directories in $path.
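A portable workaround is to make the argument contain a slash, which forces a plain path lookup in zsh and bash alike:

```shell
cd "$(mktemp -d)"
echo 'greeting=hello' > .zshrc
. ./.zshrc          # the slash skips the $path search entirely
echo "$greeting"    # → hello
```

So `. ./.zshrc` (or `source .zshrc`) behaves the same everywhere, while a bare `. .zshrc` depends on the shell's search rules.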
Weird behavior when sourcing .zshrc
1,590,078,237,000
I install slime through ELPA. Here is what my .emacs file looks like: (setq inferior-lisp-program "sbcl --noinform") (add-to-list 'load-path "~/slime/") (require 'slime) (slime-setup) (setq package-archives '(("gnu" . "http://elpa.gnu.org/packages/") ("marmalade" . "http://marmalade-repo.org/packages/") ("melpa" . "http://melpa.milkbox.net/packages/"))) I get the following error when I start emacs: Warning (initialization): An error occurred while loading `/home/name/.emacs': File error: Cannot open load file, slime To ensure normal operation, you should investigate and remove the cause of the error in your initialization file. Start Emacs with the `--debug-init' option to view a complete error backtrace. When I debug: Debugger entered--Lisp error: (file-error "Cannot open load file" "slime") require(slime) eval-buffer(#<buffer *load*> nil "/home/name/.emacs" nil t) ; Reading at buffer position 91 load-with-code-conversion("/home/name/.emacs" "/home/name/.emacs" t t) load("~/.emacs" t t) #[0 "\205\262 When I visit ~/.emacs.d/elpa/slime-20130308.1112, slime.el is clearly there. Other people online seem to be having issues too. If I cannot get it to work with emacs24, how can I setup a slime environment for common lisp?
I installed emacs 24 using the Debian amd64 package from Damien Cassou's ppa. I had some problems with slime (I don't recall if they were the same as those above). I fixed it by reinstalling quicklisp (http://www.quicklisp.org/), then using it to install slime: (ql:quickload :quicklisp-slime-helper) It works fine with sbcl for me (on two machines). I just looked at my .emacs; I have nothing added to my load-path, just (load (expand-file-name "~/quicklisp/slime-helper.el")) (setq inferior-lisp-program "sbcl") (require 'slime) (slime-setup '(slime-fancy))
problems using slime on emacs24
1,590,078,237,000
How can I install another PHP version? I have php 5.4.5 for now, but I need the version 5.3.15 running too. How can I perform this on Fedora 17? Without using a Virtual Machine?
One of the standard ways of doing this is to download the php source code from http://php.net/downloads.php and compile it with ./configure --prefix=/opt/php/5.3.15 or something to that effect. Then, your new php will not conflict with any system packages. Note that on Fedora, you will need to install a bunch of -devel packages to build php with the functionality you most likely want.
Install another PHP version. Fedora 17
1,590,078,237,000
So I have a KVM machine, where I run an Ubuntu 12.04 (server edition). I want to compile a minimalistic kernel for the KVM guest, but I only found the howto for hosts, and guests with paravirtualization. I want to compile a kernel for a guest. Nothing more. With only the things it needs.
If you don't care about paravirtualization, then compiling a minimalistic kernel for a KVM guest is the same as compiling a minimalistic kernel for hardware. There is lots of documentation available on the latter topic: https://www.google.com/search?q=linux+build+minimal+kernel
How to compile kernel for KVM guest (without paravirtualization)?
1,590,078,237,000
Is there a command to set settings in sshd_config, instead of manually editing the file? I would prefer it to do this way, because it's easier to automate. Otherwise I'd have to grep config with my script.
Although there is no standard tool to change settings in sshd_config, Ubuntu's post-installation script has some functions for modifying the configuration. It supports: enabling and disabling options renaming options reading the value of settings, e.g. it retrieves 22 from Port 22 setting the value of settings, e.g. it can set 22 for Port: Port 22 There are limitations: it does not support contexts (like Match) it's not aware of the values a key expects, e.g. AllowUsers user@host more@multiple values It can be found on http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/oneiric/openssh/oneiric/view/head:/debian/openssh-server.postinst An alternative to such functions is using sed. For example, to change a Port setting: sed "s/^ *Port .*/Port 22/i" -i sshd_config Obviously, this only works if Port was defined before. As an alternative, you can remove existing settings and append the new setting: sed "/^ *Port/Id" -i sshd_config echo Port 22 >> sshd_config
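The remove-then-append idea can be wrapped into a small idempotent helper; a sketch exercised against a throwaway file (GNU sed assumed, for -i and the case-insensitive I modifier):

```shell
# Hypothetical helper: delete any existing occurrence of KEY, then append KEY VALUE.
set_sshd_option() {   # usage: set_sshd_option FILE KEY VALUE
    sed -i "/^[[:space:]]*$2[[:space:]]/Id" "$1"   # drop old settings (GNU sed)
    printf '%s %s\n' "$2" "$3" >> "$1"             # append the new one
}

cfg=$(mktemp)
printf 'Port 22\nPermitRootLogin yes\n' > "$cfg"
set_sshd_option "$cfg" Port 2222
grep '^Port' "$cfg"   # → Port 2222
```

Running it twice leaves exactly one Port line, which is what makes it safe for automation.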
wrapper command for sshd_config?
1,590,078,237,000
I am running syslog-ng on debian. How do I check which conf file was loaded upon startup? Neither systemctl status syslog-ng nor systemctl show syslog-ng tell me.
By default, syslog-ng loads the configuration from a hard-coded default configuration path (you can check that path with the syslog-ng --help command; it's next to the --cfgfile option). This can be changed via the command line with the mentioned option. If you want to see all the configuration files loaded recursively (@include), you can run syslog-ng in debug mode: $ syslog-ng -Fed Starting to read include file; filename='/usr/share/syslog-ng/include/scl/sudo/sudo.conf', depth='2' ... If you want to see the full preprocessed configuration of a running syslog-ng instance, you can query it with the sbin/syslog-ng-ctl config --preprocessed command. If you want to ensure that the correct version of the configuration is running in syslog-ng (there might be a newer config on the disk that hasn't been applied yet), you can use the following command: sbin/syslog-ng-ctl config --verify Configuration file matches active configuration You can also get a hash or identifier for similar purposes: sbin/syslog-ng-ctl config --id
How do I check which conf file was loaded by syslog-ng when starting?
1,590,078,237,000
I'm about to upgrade from Debian 10 to 11. During this step, and like when I do common apt-get upgrade or dist-upgrade, I'm expecting to receive a lot of questions: "Do you want to replace or to keep your configurations files?" And having little knowledge (or absolutely no, sometimes) about the goals and the effects of most packages that are asking for this, a diff won't help me. What is the default response you recommend to answer, when you "know nothing"? Y : replace configurations file or N : keep them or press Enter key, to use the default answer suggested?
Most of the time, keeping your current configuration is the best option - you generally want an upgrade to be providing the same functionality in pretty much the same way, and you generally don't want to lose any hand-crafted configuration changes. That's why it's the default. It's extremely rare for an upgraded program to have a completely incompatible config file, which is about the only time that keeping the old config will prevent an upgraded service from running....and even in that case, you generally don't want it to run until you've gone over the new config and made sure it's going to do what you want it to - especially, if what you want differs from the default. You can always change your mind later, anyway. Debian packages don't just replace listed conffiles. If you choose to overwrite your current config, it keeps the old config file renamed to end with .dpkg-old. If you choose to keep your current config, it renames the suggested new config file to end with .dpkg-dist. You can manually copy one of those over the config file at any time. NOTE: a conffile isn't necessarily a configuration file. Nor is every configuration file a conffile (although most are). In Debian, conffiles are those files which are listed in the package as being a conffile. conffiles are basically a way of telling the package manager (dpkg) "If this file has been edited since it was installed, don't replace it without asking". dpkg uses md5sums to detect whether the current on-disk version of a conffile differs from the packaged file.
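A handy follow-up after an upgrade is listing any leftover .dpkg-old/.dpkg-dist files for review. The pattern is demonstrated in a throwaway directory so it is self-contained; on a real system you would point find at /etc instead:

```shell
d=$(mktemp -d)
touch "$d/sshd_config" "$d/sshd_config.dpkg-dist" "$d/ntp.conf.dpkg-old"

# Both suffixes in one pass; find's implicit -print applies to the whole -o expression.
found=$(find "$d" -name '*.dpkg-old' -o -name '*.dpkg-dist' | sort)
echo "$found"
```

Only the two renamed conffile copies are listed; the live sshd_config is untouched by the search.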
During any upgrade, and even upgrading my Linux major version, should I reply: "replace configuration files" or "keep them" when I'm asked for?
1,590,078,237,000
About a week ago (maybe after an update) the starting directory of xfce4-terminal changed from ~ to ~/Documents when launched from the panel. I'm pretty sure that wasn't because of anything I have done: there are no cd commands in ~/.bashrc (and that should not be necessary) and the launcher did not contain anything in the field 'Working Directory' (I put $HOME there just to try, but that does not work either). $ grep cd ~/.bashrc $ grep Desktop .config/xfce4/terminal/terminalrc $ I'm running Ubuntu 20.04, has anyone had this same thing happen in Ubuntu?
Looks like the xfce4-terminal defaults to opening in whatever directory it was launched from. I just installed it on my Arch system, and confirmed the behavior. So I looked at its Preferences section (Edit => Preferences) and saw: So, just set that field to /home/yourUser and it should work. It should, but at least on my system it does not! I tried this and the setting seems to be ignored which makes me think this is a bug in the program. You should let the developers know by filing a bug report, or you can wait until it is corrected. In the meantime, as a workaround for your launcher, you can change the launcher so that it executes: xfce4-terminal --default-working-directory=/home/yourUser That should make new terminals open as expected.
xfce4-terminal starts in ~/Desktop instead of ~
1,590,078,237,000
What I'm looking to do is modify ~/.fvwm/config, adding a few lines that would allow me adjust the audio volume by hitting non-standard keys on my keyboard (volume adjustment buttons). I can't see how to detect it though. Is there any way in FVWM to detect when volume and mute buttons are pressed on a keyboard? I'm using Kubuntu, and would be happily using KDE if it wasn't continuously crashing (presumably an issue with the video card).
You'll need to identify the media keys in question so you can tell fvwm to bind them. I suggest using xev(1) for this. Then you'll need to use an appropriate tool to change the volume. Here's an example: Key XF86VolUp A A Exec exec volume_increase +5 Here. "XF86VolUp" is the name of the key which xev told me about, and "volume_increase" is some fictional program which might raise the volume.
Is there a way in fvwm config to listen for non-standard keyboard buttons?
1,590,078,237,000
I've been setting up archlinuxarm installs on my headless raspberry pis and I recently ran into an interesting problem. All of the characters []\{}| are being displayed as different characters in my terminal when connecting over SSH. It doesn't prevent anything from working it is just strange to see and annoying that I don't know how to fix it. Here is what my bash prompt looks like: Æuser@host ~Å$ instead of [user@host ~]$ And if I were to show the contents of a file containing the character [ it would instead be displayed as Æ. The problem is not with my terminal as it was working fine before (and still works with other raspberry pis). I've tried to regenerate the locales, reset the keyboard-layout, but I cannot seem to fix this. I don't even know where to begin searching. I think the change occurred when I accidentally cat a large binary file but I don't know what could have happened. I have rebooted several times and compared configs to my other archlinuxarm install and cannot seem to see a difference. Has anyone experienced this or can anyone offer any advice on what might have caused this? Thanks! EDIT I was going to call it an evening so I started closing terminal. Decided to check one last time by SSHing from my other alarmpi to the weird-character-alarmpi and that apparently did the trick? Not really sure what is going on but it is fixed.
Your binary data file probably included some sequences that messed up your terminal. Odd that only those characters changed. Sometimes your whole terminal may be gobbledygook. You can issue the reset command to clean it up to the initial state. Or you can open a new terminal window.
Characters are being displayed incorrectly in bash
1,590,078,237,000
I need to re-instate an RAID 1 (mirror) array. I have both drives and mdadm tells me the following about them: $ sudo mdadm --examine /dev/sdb1 [sudo] password for pi: /dev/sdb1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : c3178bbd:a7547105:dca0fc2a:4c137310 Name : raspi:0 Creation Time : Sun Feb 16 09:29:07 2020 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 240218112 (114.54 GiB 122.99 GB) Array Size : 120109056 (114.54 GiB 122.99 GB) Data Offset : 133120 sectors Super Offset : 8 sectors Unused Space : before=133040 sectors, after=0 sectors State : clean Device UUID : 012f89b4:c8b76c0e:8ae8fb78:52cc8175 Internal Bitmap : 8 sectors from superblock Update Time : Fri Jul 17 22:20:33 2020 Bad Block Log : 512 entries available at offset 16 sectors Checksum : a3b8e3db - correct Events : 20895 Device Role : Active device 1 Array State : AA ('A' == active, '.' == missing, 'R' == replacing) [pi@alarm ~]$ sudo mdadm --examine /dev/sda1 /dev/sda1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : c3178bbd:a7547105:dca0fc2a:4c137310 Name : raspi:0 Creation Time : Sun Feb 16 09:29:07 2020 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 240218112 (114.54 GiB 122.99 GB) Array Size : 120109056 (114.54 GiB 122.99 GB) Data Offset : 133120 sectors Super Offset : 8 sectors Unused Space : before=133040 sectors, after=0 sectors State : clean Device UUID : baef96f5:8750ba2b:892f40a7:3ecc2b38 Internal Bitmap : 8 sectors from superblock Update Time : Fri Jul 17 22:20:33 2020 Bad Block Log : 512 entries available at offset 16 sectors Checksum : f302843f - correct Events : 20895 Device Role : Active device 0 Array State : AA ('A' == active, '.' == missing, 'R' == replacing) How do I recreate the array to mount the /dev/md0 (?) and get access to all the files on the drives? 
The Array state says AA even though there is currently no array: $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 1 119.3G 0 disk `-sda1 8:1 1 114.6G 0 part sdb 8:16 1 114.6G 0 disk `-sdb1 8:17 1 114.6G 0 part mmcblk0 179:0 0 29.8G 0 disk |-mmcblk0p1 179:1 0 121M 0 part /boot `-mmcblk0p2 179:2 0 29.7G 0 part / Oh and: $ cat /proc/mdstat Personalities : unused devices: <none> Which is active device 0 vs 1 when calling mdadm --create? UPDATE sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 gave me: $ sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 [sudo] password for pi: mdadm: /dev/sda1 is busy - skipping mdadm: /dev/sdb1 is busy - skipping and it looked like nothing happened but this shows that something did happen: $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 1 119.3G 0 disk `-sda1 8:1 1 114.6G 0 part `-md0 9:0 0 114.6G 0 raid1 sdb 8:16 1 114.6G 0 disk `-sdb1 8:17 1 114.6G 0 part `-md0 9:0 0 114.6G 0 raid1 mmcblk0 179:0 0 29.8G 0 disk |-mmcblk0p1 179:1 0 121M 0 part /boot `-mmcblk0p2 179:2 0 29.7G 0 part / $ cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb1[1] sda1[2] 120109056 blocks super 1.2 [2/2] [UU] bitmap: 0/1 pages [0KB], 65536KB chunk unused devices: <none> Upon which I was able to successfully mount my array /dev/md0 and I executed sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf in order to store the current array configuration to my mdadm.conf file
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 Devices get their indices based on the order of creation; it's purely ordinal.
How do I reassemble mdadm array with both drives available?
1,547,329,167,000
XPROP returns WM_CLASS, and WM_NAME how do I target the different values returned by XPROP in i3?
i3's names are a little different. For all of these, if the same key is available as _NET_ it takes precedence over the non-_NET_ variant. For reference WM_NAME can be matched with title. WM_CLASS is a two-part, comma-separated and quoted field, The first part is instance The second part is class WM_WINDOW_ROLE is window_role WM_WINDOW_TYPE is window_type These can all be found in the docs on i3 configuration, though indexed in the opposite fashion, by i3 name. Here is a simple script xprop2i3 which behaves like xprop except that it outputs i3 labels, and only the fields which i3 selectors use.
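Putting the mapping to use, a couple of illustrative criteria lines for the i3 config (the class/instance/title values here are made up):

```
# WM_CLASS = "Navigator", "Firefox"  ->  instance, class
for_window [class="Firefox" instance="Navigator"] border pixel 2
# WM_NAME  ->  title
for_window [title="Downloads"] floating enable
```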
Is there a list of names indexed by xprop to i3 config?
1,547,329,167,000
Is it possible to customize Dconf's file storing path? I'd like to store the Dconf configuration file in one of my synced directories, like Google Drive/Dropbox or any other, for constant syncing. Let's say I have a Dropbox auto-synced directory at ~/Dropbox I'd like to store the Dconf config file at: ~/Dropbox/dconf so that it is automatically synchronised (by the Dropbox client) and I have my settings backed up.
man 7 dconf explains that the user configuration is saved by default in file $XDG_CONFIG_HOME/dconf/user. Depending on your system, this often means file ~/.config/dconf/user when XDG_CONFIG_HOME is not defined. You should be able to move this directory to the wanted place, and replace it with a symbolic link. Eg mv ~/.config/dconf ~/Dropbox/ ln -s ~/Dropbox/dconf ~/.config/dconf Alternatively, you can make a bind mount which makes the same directory appear in 2 different places: mkdir ~/Dropbox/dconf sudo mount -o bind ~/.config/dconf ~/Dropbox/dconf To undo the binding use sudo umount ~/Dropbox/dconf.
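The move-and-symlink dance can be sketched with throwaway directories standing in for ~/.config and ~/Dropbox, so the steps are safe to try:

```shell
config=$(mktemp -d)    # stands in for ~/.config
dropbox=$(mktemp -d)   # stands in for ~/Dropbox
mkdir "$config/dconf"
echo 'settings' > "$config/dconf/user"

# Move the real directory into the synced location, leave a symlink behind.
mv "$config/dconf" "$dropbox/"
ln -s "$dropbox/dconf" "$config/dconf"

cat "$config/dconf/user"   # → settings (read through the symlink)
```

Writes through $config/dconf now land in the synced copy, which is exactly what the Dropbox client will pick up.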
Is it possible to customize Dconf's file storing path?
1,547,329,167,000
Does anyone know an easy practical way to remove configuration blocks. I have a file in the format: lease { interface "eth0"; ... } lease { interface "wlan3"; fixed-address 192.168.0.108; option subnet-mask 255.255.255.0; ... } I want to remove the configuration block for interface "wlan3";. I started trying to write a custom grep function but it was getting complex quickly. This seems like something that might be a common problem. Does anyone have a convenient solution for dealing with configuration files of this format?
With awk: awk -v RS='}' 'NF && ! /interface "wlan3";/{print $0"}"}' infile The output would be: lease { interface "eth0"; ... }
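Since awk can't edit in place, a complete round trip writes to a temporary file and moves it back; a self-contained run:

```shell
cd "$(mktemp -d)"
cat > leases.conf <<'EOF'
lease {
  interface "eth0";
}
lease {
  interface "wlan3";
}
EOF

# Records are }-terminated blocks; print every non-empty one that isn't wlan3.
awk -v RS='}' 'NF && ! /interface "wlan3";/{print $0"}"}' leases.conf > leases.conf.new
mv leases.conf.new leases.conf
cat leases.conf
```

Only the eth0 block survives; the wlan3 block (and any empty trailing record) is dropped.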
Removing blocks from configuration files?
1,547,329,167,000
I am trying to use augtool to auto-edit my /etc/hosts, as I wish to add an alias for localhost (so that I can test my webserver with a different host-name, locally). I have been looking everywhere to find good documentation. I wish to find the node with ipaddr of 127.0.0.1 and add an alias to it. I would also like to find some good documentation for Augeas.
While adding an alias to a host is not really hard, what's usually more interesting is to ensure a host entry has an alias, i.e. make the operation idempotent. Here is how you can do that with Augeas: set /files/etc/hosts/*[ipaddr="127.0.0.1"]/alias[.="mycouchdb"] "mycouchdb" which will only add the alias if it doesn't exist yet. Explanation: alias[.="mycouchdb"] refers to the alias with value mycouchdb (since . refers to the current node). When there is no alias yet with value mycouchdb, alias[.="mycouchdb"] will not match anything and Augeas will create a node with label alias and value mycouchdb. The rule when the node doesn't exist is to use the path label without filters, in this case alias, so it creates a new label node and assigns it the value mycouchdb When there is an alias already, the expression will match and the set command will replace the value with mycouchdb, which will do nothing.
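In an interactive augtool session that would look roughly like this (the save step is what writes the change back to /etc/hosts):

```
$ augtool
augtool> set /files/etc/hosts/*[ipaddr="127.0.0.1"]/alias[.="mycouchdb"] "mycouchdb"
augtool> save
Saved 1 file(s)
augtool> quit
```

Running the set a second time reports nothing to save, since the expression already matches the existing alias.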
add alias for localhost in /etc/hosts using Augeas
1,547,329,167,000
I just broke a machine running Linux Mint with KDE 4. Thankfully I managed to back up /home/, so I have all my data. Now I've installed Kubuntu with KDE 5, and I'm trying to configure my Shell colorscheme to match my previous setup. I usually created a variant on "Linux Colors" in "Shell Profiles". Where can I find this data?
You probably mean the KDE Konsole shell profiles and their color schemes. These are usually saved in ~/.kde/share/apps/konsole/. Custom shell profiles use the extension .profile, custom color schemes use .colorscheme.
Where are the Shell Profile Settings Stored?
1,547,329,167,000
I have been browsing some examples of avahi services files (XML format), and some of them had a <!--*-nxml-*--> comment following the document type definition. For example: <?xml version="1.0" standalone='no'?><!--*-nxml-*--> <!DOCTYPE service-group SYSTEM "avahi-service.dtd"> <service-group> <name replace-wildcards="yes">%h</name> <service> <type>_device-info._tcp</type> <port>0</port> <txt-record>model=RackMac</txt-record> </service> </service-group> After searching, I figured this tag's purpose was to be interpreted by Emacs for its nXML mode. But since I'm not an Emacs user myself, this tag seems completely useless as it is. Is there any convention that says this should be the default for XML files (maybe only configuration files), or does it have another hidden purpose I can't figure out?
The XML comment <!--*-nxml-*--> will be ignored by any XML parser, so it has no function whatsoever in terms of the actual meaning or function of the XML from that point of view. It is only there to allow emacs to do correct highlighting etc. In vim, this is called a modeline, and would look like <!-- vim: set filetype=nxml.xml : --> In Emacs, this is called specifying file local variables, and the slightly longer (compared to what's in your file) format is <!-- -*- mode: modename; var: value; … -*- --> This is useful if the editor can't figure out the file's proper type in any other way (from the filename or by using some heuristic matching of the contents). Apart from that, it serves no purpose. There are no standards, no best practices and no recommendations that say this type of editor-specific comment needs to be added.
Is it considered good practice to use a nxml tag in xml conf files?
1,547,329,167,000
[root@localhost ~]# hostname hello [root@localhost ~]# hostname hello [root@localhost ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 [root@localhost ~]# The command line output is as above. Why is the hostname output different from what is found in /etc/hosts?
The system hostname is not set using the file /etc/hosts. The hostname is set using the system configuration management system. Where the hostname is stored persistently depends on the distribution. For instance, on my kali machine the hostname is stored in the file /etc/hostname from where it is read during startup; on my gentoo machine it is stored in /etc/conf.d/hostname.
Why does the hostname command's output differ from /etc/hosts?
1,547,329,167,000
I am not sure whether I should ask this question here or in some other Stack Exchange community, but I am trying to run some simple commands on a list of servers, using Python Fabric. The code for the command: from fabric.api import run def host_type(): run('uname -s') This will run the uname -s command on all Linux servers by invoking: $fab -H < ......Comma, separated , Servers , List , Here.....> host_type Now the problem is: how can I run/configure it such that it returns the results without asking for a user/root password during execution? There might be some Linux command line trick but I don't remember. Edit1: Ok, there is a -p option with the fab command, but each server has a different password, so this option may not work for me.
There are two ways that you can do that. Fabric uses SSH (via the paramiko library) on the backend, so if you have already configured passwordless key-based authentication then you will not need to set anything and it will work. The other way is simple as well. You only need to set up the env variables. from fabric.state import env env.user = "user" env.password = "password" env.colorize_errors = True env.connection_attempts = 3 env.disable_known_hosts = True env.skip_bad_hosts = True env.parallel = False env.linewise = True This should do the trick. I usually put it in a separate file and import it. PS: I personally find paramiko, which is the library that Fabric uses, easier to use for simpler tasks.
Running a command on all servers using Fabric
1,547,329,167,000
I've been using Postfix on my CentOS 6 Server with its default install settings but I would like to configure it to use the mail server from my existing email. I altered the /etc/postfix/main.cf file by adding relayhost = [mail.mywebsite.com]:993 Then at the bottom I added smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = noanonymous Then in /etc/postfix/sasl_passwd I added [mail.mywebsite.com]:993 my_uname:my_pword I gave that file 600 permissions, then ran # postmap /etc/postfix/sasl_passwd and finally I restarted postfix. After these changes it no longer works as when I try to send an email I am getting the following error -Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient------- 2FF29F0F 355 Thu Jun 4 11:56:24 [email protected] (lost connection with mail.mywebsite.com[192.185.2.93] while receiving the initial server greeting) [email protected] -- 0 Kbytes in 1 Request Is there something that I did wrong in setting this up?
Postfix is an SMTP server, but port 993 is used for IMAPS. Try using 25, 465 or 587 instead, depending on what mail.mywebsite.com supports.
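Assuming mail.mywebsite.com accepts authenticated submission on port 587 (check with the provider), the edits would look roughly like this — note the host:port in sasl_passwd must match relayhost exactly:

```
# /etc/postfix/main.cf
relayhost = [mail.mywebsite.com]:587
smtp_tls_security_level = encrypt    # STARTTLS; submission ports usually require it

# /etc/postfix/sasl_passwd
[mail.mywebsite.com]:587 my_uname:my_pword
```

After editing, re-run postmap /etc/postfix/sasl_passwd and restart Postfix so the new map and settings take effect.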
CentOS Postfix changing mail server
1,547,329,167,000
I used to use gconftool-2 to edit keys in this way (here I change the cursor shape in gnome-terminal): gconftool --type string --set /apps/gnome-terminal/profiles/Profile0/cursor_shape ibeam But it doesn't work anymore, and I feel like there is a problem with the DBus daemon, even though I can't explain why. This command does change the key in the ~/.gconf/.../Profile0/%gconf.xml, where I can now read: <entry name="cursor_shape" mtime="1419267709" type="string"> <stringvalue>ibeam</stringvalue> </entry> But it has no effect on my cursor shape anymore: it is still a block. Now, here is an interesting fact: if I use gconf-editor and navigate to this key, I find it set to block. And if I now edit this key with the gui, it does change my cursor shape. Everything behaves as if the keys stored in memory and the keys stored in the .xml files were not updated together by the gconftool-2 command. I also noticed that gconftool-2 --ping doesn't return anything. I have tried reinstalling gconf2 gconf2-common gconf-service gconf-default-service with no success. I also tried erasing the whole ~/.gconf folder, but the same thing keeps happening. I have had a look at gsettings but my gnome-terminal doesn't seem to be supported with it since the schema org.gnome.terminal doesn't exist and since I can't find any folder gnome-terminal nor gnome/terminal under dconf-editor. This is driving me mad, did it happen to anyone? How is the gconftool-2 supposed to refresh and get instant changes in the running apps?
Got it! Credits to this answer. I added the following lines to my .zshrc or .bashrc: sessionfile=`find "${HOME}/.dbus/session-bus/" -type f` export `grep "DBUS_SESSION_BUS_ADDRESS" "${sessionfile}" | sed '/^#/d'` And the settings are now refreshed as soon as I use gconftool-2.
gconftool-2 doesn't refresh with the dbus anymore?
1,547,329,167,000
I have a very basic question here. This thing is puzzling me a little bit. I have two machines, one is my local desktop running Windows and I have cygwin installed in it and second machine I have is in staging domain in our company which is running Ubuntu 12.04. I started Netflix Exhibitor like this in my desktop through CYGWIN - david@desktop /cygdrive/c/ApacheExhibitor/Exhibitor-1.5.1/target $ java -jar exhibitor-1.5.1-jar-with-dependencies.jar -c file v1.5.1 INFO com.netflix.exhibitor.core.activity.ActivityLog Exhibitor started [main] INFO org.mortbay.log Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog [main] INFO org.mortbay.log jetty-1.5.1 [main] Dec 18, 2013 6:07:37 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9.1 09/14/2011 02:36 PM' INFO org.mortbay.log Started [email protected]:8080 [main] And then I went to chrome browser and I opened this URL - http://localhost:8080/exhibitor/v1/ui/index.html after that, I can see Exhibitor console up and showing me everything which it should be showing in my desktop. Now I did the same thing in my Ubuntu machine which is in staging domain in our company. 
With the below command I started Exhibitor - cronusapp@phx5qa01c:/zook$ java -jar ./exhibitor-1.5.1/lib/exhibitor-1.5.1-jar-with-dependencies.jar -c file v1.5.1 INFO com.netflix.exhibitor.core.activity.ActivityLog Exhibitor started [main] INFO org.mortbay.log Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog [main] INFO org.mortbay.log jetty-1.5.1 [main] Dec 18, 2013 7:10:35 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9.1 09/14/2011 02:36 PM' INFO org.mortbay.log Started [email protected]:8080 [main] And then I went to Chrome and I opened the URL like this with the machine hostname - http://phx5qa01c.stratus.phx.qa.host.com:8080/exhibitor/v1/ui/index.html And this URL is showing me a blank white page on the screen. Now I am not sure why this is happening. Is there any file which I am supposed to modify in my Ubuntu box to recognize the hostname so that it can show me the Exhibitor console page, as I cannot use localhost now in Chrome? I am pretty much sure I am missing a very minor thing here.. UPDATE:- HOSTNAME of my ubuntu machine - cronusapp@phx5qa01c:/zook$ hostname -f phx5qa01c.stratus.phx.qa.host.com
Based on the clarifying comments above, your DNS resolver is apparently unaware of the name phx5qa01c.stratus.phx.qa.host.com. You have two general choices: Talk to your DNS administrator and see whether they can make that name available for you to use, or suggest a different name that would work Add phx5qa01c.stratus.phx.qa.host.com to your local /etc/hosts file Your local /etc/hosts file is consulted for name lookups in addition to DNS (subject to the rules in /etc/nsswitch.conf). You can add a line in /etc/hosts for your phx5qa01c.stratus.phx.qa.host.com host that translates the name to that server's IP address. If you are using Cygwin as you suggest, then the above instructions may not apply and you'll have to ask somebody else how to do the same thing for Cygwin.
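For option 2, the entry is a single line mapping the name to the server's address (the IP here is illustrative — use the real one from the server):

```
# /etc/hosts on the machine running the browser
10.20.30.40   phx5qa01c.stratus.phx.qa.host.com   phx5qa01c
```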
How to recognize hostname on the port number 8080 in Ubuntu machine?
1,547,329,167,000
I upgraded to Fedora 40 and was introduced to its new feature of MAC address randomization (for Wi-Fi interfaces at least). How do I disable it? Links I found didn't work for me: https://discussion.fedoraproject.org/t/f40-change-proposal-wifi-mac-randomization-system-wide/99856/5 https://fedoraproject.org/wiki/Changes/StableSSIDMACAddress I tried to follow the advice from link 2 and create 2 files in /etc/NetworkManager/conf.d/: 22-wifi-mac-addr.conf: [connection.22-wifi-mac-addr] match-device=type:wifi wifi.cloned-mac-address=stable-ssid [.config] enable=nm-version-min:1.45 and 90-wifi-mac-addr.conf: [connection-90-wifi-mac-addr-conf] wifi.cloned-mac-address=permanent I followed instructions literally: e.g. the 22* file has a section named [connection.22-wifi-mac-addr] while the 90* file has one named [connection-90-wifi-mac-addr-conf] (i.e. the dot and minus after the word 'connection', present and absent '-conf' suffix etc.). Also I tried to make it uniform, sort of to fix all of the typos. Nothing; after I restarted the NetworkManager.service: # systemctl restart NetworkManager.service the MAC address kept changing on every Wi-Fi enable/disable toggle. Could someone help me please?
I tried to follow the advice from link 2 and created two files I don't think that is what link 2 suggests. The Fedora change introduces a new file /usr/lib/NetworkManager/conf.d/22-wifi-mac-addr.conf. That's all. You can prevent that file from being loaded at all by creating a file /etc/NetworkManager/conf.d/22-wifi-mac-addr.conf. That file can be empty, or contain additional configuration. Of course, you can drop in any other configuration snippets (preferably sorted after "22*") to overwrite that configuration. Configuration snippets are loaded in a documented order, where later files overwrite earlier ones. See man NetworkManager.conf. Also, it's possible that the best choice is not to change the default back. Instead, modify the few profiles that should use a certain MAC address. For example with nmcli connection modify "$PROFILE" wifi.cloned-mac-address permanent. You probably should do that instead. the MAC address kept changing on every Wi-Fi enable/disable toggle. Are you sure about that? Note that while NetworkManager is not connected, the MAC address also gets randomized. That is nothing new. Did you check that the MAC address also changes while being connected? You could disable the randomization during scanning via wifi.scan-rand-mac-address. See man NetworkManager.conf. But there should be no need to do that.
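As a sketch of the two approaches described above (run as root; "$PROFILE" stands for the name of your connection profile):

```shell
# Option 1: mask the packaged default so it is not loaded at all;
# an empty file in /etc is enough to shadow the one in /usr/lib
touch /etc/NetworkManager/conf.d/22-wifi-mac-addr.conf
systemctl restart NetworkManager

# Option 2 (usually better): keep the default, but pin the MAC
# only for the specific profiles that need it
nmcli connection modify "$PROFILE" wifi.cloned-mac-address permanent
```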
How to disable MAC address randomization in Fedora 40?
I have a Linux device that utilizes USB gadget mode for RNDIS support. The goal was to be able to connect any computer to this device without having to mess with IP settings. I've set a static IP address on my RNDIS device. As far as communication goes, everything works. What does not work: my host PC seems to add my RNDIS device as a gateway, thus losing its internet connection. I can remove the gateway route each time I plug in my device, but this hurts the user experience. How do I modify my RNDIS configuration so that my host PC does not add a gateway?
The RNDIS device may have a static IP address, but where and how does the host PC get the IP address settings for connecting to the RNDIS device? If the RNDIS device provides the settings for the host, using DHCP, PPPoE or some other mechanism, then the RNDIS device should not provide a default gateway setting if it is not prepared to act as an Internet gateway. In the terms of pppd options, that would mean removing any defaultroute options and adding nodefaultroute instead. In generic DHCP server terms, that would mean not providing the DHCP option #3 at all - if using the ISC dhcpd for example, you should remove any option routers ... line from the dhcpd.conf file.
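For instance, with ISC dhcpd running on the gadget, a subnet declaration that deliberately omits option routers might look like this (the addresses are placeholders):

```
# dhcpd.conf sketch — no "option routers" line, so the
# host PC gets an address but no default gateway
subnet 192.0.2.0 netmask 255.255.255.0 {
    range 192.0.2.10 192.0.2.20;
    option subnet-mask 255.255.255.0;
}
```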
RNDIS interface gets a gateway
First of all, I know what the option does. According to the Arch Linux Wiki, it gives the command-line wpa_cli utility permission to rewrite wpa_supplicant.conf: Warning: Setting update_config to 1 allows wpa_supplicant to overwrite the configuration file. When overwriting, wpa_supplicant will reset file permissions according to your default umask. It might accidentally make the file readable to everyone thus exposing your passwords, if your system is multiuser. But the Arch Linux wiki is the closest I could get to an authoritative description of this feature. It doesn't appear in any of the man pages for wpa_supplicant, including wpa_cli(8), wpa_supplicant(8) and wpa_supplicant.conf(5). Where is the official documentation for this feature? My fear is that, if it's indeed an "undocumented" feature that appears only in the source code, then any developer could simply edit it out or disable it in a future release.
The option appears to be documented in the example configuration file on the project's web site:

# Whether to allow wpa_supplicant to update (overwrite) configuration
#
# This option can be used to allow wpa_supplicant to overwrite configuration
# file whenever configuration is changed (e.g., new network block is added with
# wpa_cli or wpa_gui, or a password is changed). This is required for
# wpa_cli/wpa_gui to be able to store the configuration changes permanently.
# Please note that overwriting configuration file will remove the comments from
# it.
#update_config=1

As it's documented on the project's web site, I'd assume that the option isn't going anywhere for a while.
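For context, this is the sort of wpa_cli session that depends on the option — save_config is refused unless update_config=1 is set (the SSID and passphrase here are made up):

```
$ wpa_cli add_network                     # prints the new network id, e.g. 0
$ wpa_cli set_network 0 ssid '"MyNet"'
$ wpa_cli set_network 0 psk '"secretpw1"'
$ wpa_cli enable_network 0
$ wpa_cli save_config                     # only succeeds with update_config=1
```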
Where is the wpa_supplicant.conf option "update_config=1" documented?
I am asking this question about several different Linux systems:

Ubuntu
Raspbian
Arch Linux

Each of these systems offers a GUI with a list of Wi-Fi networks from which you pick the one you wish to connect to. I would like to know, for each of these, where this choice is persisted, so that it's available the next time you connect.
For systems using NetworkManager (which includes all of the above distributions, at least in common configurations), the location is typically /etc/NetworkManager/system-connections; e.g.:

$ ls /etc/NetworkManager/system-connections
Starbucks.nmconnection  Company Wifi.nmconnection

Where the files look something like:

[connection]
id=Company WIFI
uuid=12345678-5e72-456a-a46d-7cb239f2d5de
type=wifi
interface-name=wlan0

[wifi]
mode=infrastructure
ssid=Company WIFI

[wifi-security]
auth-alg=open
key-mgmt=wpa-psk
psk=secretpassword

[ipv4]
method=auto

[ipv6]
addr-gen-mode=default
method=auto

[proxy]

NetworkManager will iterate through available Wifi connection profiles until it finds one that works.
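The same stored profiles can also be listed via NetworkManager itself rather than reading the directory directly (output shape approximate):

```
$ nmcli connection show
NAME          UUID                                  TYPE  DEVICE
Company WIFI  12345678-5e72-456a-a46d-7cb239f2d5de  wifi  wlan0
Starbucks     11111111-2222-4333-8444-555555555555  wifi  --
```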
In what file is the WiFi connection chosen in the GUI persisted?
I'm making a file manager system for managing my projects. My package is named filesystem. I need a config file that stores the path to the root of my file system. I think I need to create a file /etc/filesystem/root.conf and add it to DEBIAN/conffiles. If the file doesn't exist, I want to prompt the installer for a root directory; otherwise I want to keep the root directory the same (for example when upgrading). I think the prompt should be in postinst. Should I add the file to conffiles? Should I add the prompt in postinst? How do I add an option to apt install that forces a prompt? (I have read about --force-confnew.) I need the file to be deleted on apt purge.
To prompt the user for a root directory, you should use debconf. This will allow you to prompt the user, while still allowing pre-configuration, and explicit re-configuration using dpkg-reconfigure. See Configuration management with debconf and man debconf-devel for details, in particular Config file handling in the latter. Purging a configuration file handled using debconf should be done in postrm; see the relevant chapter in Debian Policy. Files which are manipulated in maintainer scripts as a result of debconf prompts or stored values shouldn’t be listed in conffiles, otherwise your users will end up with confusing questions during upgrades.
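A minimal sketch of how this fits into the maintainer scripts — the template name filesystem/root is hypothetical, and the real protocol is described in man debconf-devel:

```shell
#!/bin/sh
# postinst sketch — assumes a debconf template "filesystem/root" exists
set -e
. /usr/share/debconf/confmodule

db_input high filesystem/root || true   # prompt; skipped if already answered
db_go
db_get filesystem/root                  # the answer is returned in $RET
mkdir -p /etc/filesystem
printf 'root=%s\n' "$RET" > /etc/filesystem/root.conf
```

and the matching purge handling in postrm:

```shell
#!/bin/sh
# postrm sketch — on purge, remove both the file and the stored debconf answers
set -e
if [ "$1" = "purge" ]; then
    rm -f /etc/filesystem/root.conf
    if [ -e /usr/share/debconf/confmodule ]; then
        . /usr/share/debconf/confmodule
        db_purge
    fi
fi
```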
Common practice for config files
I have a custom ssh config in a directory. When the jump parameter -J is passed, ssh ignores the custom config; instead it looks at the home config at ~/.ssh/config. Is this how ProxyJump works or is it a bug? This doesn't work:

ssh -vvv -F config -J jump@192.168.56.11 service2@10.0.3.120

OpenSSH_9.3p1, OpenSSL 3.0.8 7 Feb 2023 debug1: Reading configuration data config debug1: config line 1: Applying options for * debug2: resolve_canonicalize: hostname 10.0.3.120 is address debug1: Setting implicit ProxyCommand from ProxyJump: ssh -l jump -F config -vvv -W '[%h]:%p' 192.168.56.11 debug1: Executing proxy command: exec ssh -l jump -F config -vvv -W '[10.0.3.120]:22' 192.168.56.11 debug1: identity file /home/strboul/.ssh/id_rsa type 0 debug1: identity file /home/strboul/.ssh/id_rsa-cert type -1 debug1: identity file /home/strboul/.ssh/id_ecdsa type -1 debug1: identity file /home/strboul/.ssh/id_ecdsa-cert type -1 debug1: identity file /home/strboul/.ssh/id_ecdsa_sk type -1 debug1: identity file /home/strboul/.ssh/id_ecdsa_sk-cert type -1 debug1: identity file /home/strboul/.ssh/id_ed25519 type -1 debug1: identity file /home/strboul/.ssh/id_ed25519-cert type -1 debug1: identity file /home/strboul/.ssh/id_ed25519_sk type -1 debug1: identity file /home/strboul/.ssh/id_ed25519_sk-cert type -1 debug1: identity file /home/strboul/.ssh/id_xmss type -1 debug1: identity file /home/strboul/.ssh/id_xmss-cert type -1 debug1: identity file /home/strboul/.ssh/id_dsa type -1 debug1: identity file /home/strboul/.ssh/id_dsa-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_9.3 OpenSSH_9.3p1, OpenSSL 3.0.8 7 Feb 2023 debug1: Reading configuration data config debug1: config line 1: Applying options for * debug1: config line 9: Applying options for 192.168.56.11 debug2: resolve_canonicalize: hostname 192.168.56.11 is address debug3: ssh_connect_direct: entering debug1: Connecting to 192.168.56.11 [192.168.56.11] port 22. 
debug3: set_sock_tos: set socket 3 IP_TOS 0x48 debug1: Connection established. debug1: identity file id_ed25519-client_test type 3 debug1: identity file id_ed25519-client_test-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_9.3 debug1: Remote protocol version 2.0, remote software version OpenSSH_8.2p1 Ubuntu-4ubuntu0.7 debug1: compat_banner: match: OpenSSH_8.2p1 Ubuntu-4ubuntu0.7 pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug1: Authenticating to 192.168.56.11:22 as 'jump' debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug3: order_hostkeyalgs: no algorithms matched; accept original debug3: send packet: type 20 debug1: SSH2_MSG_KEXINIT sent debug3: receive packet: type 20 debug1: SSH2_MSG_KEXINIT received debug2: local client KEXINIT proposal debug2: KEX algorithms: [email protected],curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c debug2: host key algorithms: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],rsa-sha2-512,rsa-sha2-256 debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email 
protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,[email protected],zlib debug2: compression stoc: none,[email protected],zlib debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposal debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256 debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519 debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,[email protected] debug2: compression stoc: none,[email protected] debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: curve25519-sha256 debug1: kex: host key algorithm: ssh-ed25519 debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none debug3: send packet: type 30 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug3: receive packet: type 31 debug1: SSH2_MSG_KEX_ECDH_REPLY received debug1: Server host key: ssh-ed25519 SHA256:wUJgKzsnvmIfNKpldnCZEasb1RAvQV0B97Rr2zjRwvQ debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory 
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory Warning: Permanently added '192.168.56.11' (ED25519) to the list of known hosts. debug3: send packet: type 21 debug2: ssh_set_newkeys: mode 1 debug1: rekey out after 134217728 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug3: receive packet: type 21 debug1: SSH2_MSG_NEWKEYS received debug2: ssh_set_newkeys: mode 0 debug1: rekey in after 134217728 blocks debug3: ssh_get_authentication_socket_path: path '/tmp/ssh-XXXXXXLePZAR/agent.15888' debug1: get_agent_identities: bound agent to hostkey debug1: get_agent_identities: ssh_fetch_identitylist: agent contains no identities debug1: Will attempt key: id_ed25519-client_test ED25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI explicit debug2: pubkey_prepare: done debug3: send packet: type 5 debug3: receive packet: type 7 debug1: SSH2_MSG_EXT_INFO received debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,[email protected],ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected]> debug3: receive packet: type 6 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug3: send packet: type 50 debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred publickey,keyboard-interactive debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: id_ed25519-client_test ED25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI explicit debug3: send packet: type 50 debug2: we sent a publickey packet, wait for reply debug3: receive packet: type 60 debug1: Server accepts key: id_ed25519-client_test ED25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI explicit debug3: 
sign_and_send_pubkey: using publickey with ED25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI debug3: sign_and_send_pubkey: signing using ssh-ed25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI debug3: send packet: type 50 debug3: receive packet: type 52 Authenticated to 192.168.56.11 ([192.168.56.11]:22) using "publickey". debug3: ssh_init_stdio_forwarding: 10.0.3.120:22 debug1: channel_connect_stdio_fwd: 10.0.3.120:22 debug2: fd 4 setting O_NONBLOCK debug2: fd 5 setting O_NONBLOCK debug1: channel 0: new stdio-forward [stdio-forward] (inactive timeout: 0) debug3: fd 4 is O_NONBLOCK debug3: fd 5 is O_NONBLOCK debug1: getpeername failed: Bad file descriptor debug3: send packet: type 90 debug2: fd 3 setting TCP_NODELAY debug3: set_sock_tos: set socket 3 IP_TOS 0x48 debug1: Requesting [email protected] debug3: send packet: type 80 debug1: Entering interactive session. debug1: pledge: network debug3: client_repledge: enter debug1: pledge: fork debug3: receive packet: type 80 debug1: client_input_global_request: rtype [email protected] want_reply 0 debug3: receive packet: type 4 debug1: Remote: /etc/ssh/sshd_config.d/authorized_keys:1: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug3: receive packet: type 4 debug1: Remote: /etc/ssh/sshd_config.d/authorized_keys:1: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug3: receive packet: type 91 debug2: channel_input_open_confirmation: channel 0: callback start debug2: channel_input_open_confirmation: channel 0: callback done debug2: channel 0: open confirm rwindow 2097152 rmax 32768 debug1: Remote protocol version 2.0, remote software version OpenSSH_8.2p1 Ubuntu-4ubuntu0.7 debug1: compat_banner: match: OpenSSH_8.2p1 Ubuntu-4ubuntu0.7 pat OpenSSH* compat 0x04000000 debug2: fd 5 setting O_NONBLOCK debug2: fd 4 setting O_NONBLOCK debug1: Authenticating to 10.0.3.120:22 as 'service2' debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or 
directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug3: order_hostkeyalgs: no algorithms matched; accept original debug3: send packet: type 20 debug1: SSH2_MSG_KEXINIT sent debug3: receive packet: type 20 debug1: SSH2_MSG_KEXINIT received debug2: local client KEXINIT proposal debug2: KEX algorithms: [email protected],curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c debug2: host key algorithms: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],rsa-sha2-512,rsa-sha2-256 debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,[email protected],zlib debug2: compression stoc: none,[email protected],zlib debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposal debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256 debug2: host key algorithms: 
rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519 debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,[email protected] debug2: compression stoc: none,[email protected] debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: curve25519-sha256 debug1: kex: host key algorithm: ssh-ed25519 debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none debug3: send packet: type 30 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug3: receive packet: type 31 debug1: SSH2_MSG_KEX_ECDH_REPLY received debug1: Server host key: ssh-ed25519 SHA256:AcrBUTeKSU89ylX1sz1NatDPp1i4HrLsIv5aXDCS0mE debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory Warning: Permanently added '10.0.3.120' (ED25519) to the list of known hosts. 
debug3: send packet: type 21 debug2: ssh_set_newkeys: mode 1 debug1: rekey out after 134217728 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug3: receive packet: type 21 debug1: SSH2_MSG_NEWKEYS received debug2: ssh_set_newkeys: mode 0 debug1: rekey in after 134217728 blocks debug3: ssh_get_authentication_socket_path: path '/tmp/ssh-XXXXXXLePZAR/agent.15888' debug1: get_agent_identities: bound agent to hostkey debug1: get_agent_identities: ssh_fetch_identitylist: agent contains no identities debug1: Will attempt key: /home/strboul/.ssh/id_rsa RSA SHA256:HaEGWnCwC3qIn7zKYwZ79E1Ts3d0zMLKWM9lhyW1pl8 debug1: Will attempt key: /home/strboul/.ssh/id_ecdsa debug1: Will attempt key: /home/strboul/.ssh/id_ecdsa_sk debug1: Will attempt key: /home/strboul/.ssh/id_ed25519 debug1: Will attempt key: /home/strboul/.ssh/id_ed25519_sk debug1: Will attempt key: /home/strboul/.ssh/id_xmss debug1: Will attempt key: /home/strboul/.ssh/id_dsa debug2: pubkey_prepare: done debug3: send packet: type 5 debug3: receive packet: type 7 debug1: SSH2_MSG_EXT_INFO received debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,[email protected],ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected]> debug3: receive packet: type 6 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug3: send packet: type 50 debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred publickey,keyboard-interactive debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: /home/strboul/.ssh/id_rsa RSA SHA256:HaEGWnCwC3qIn7zKYwZ79E1Ts3d0zMLKWM9lhyW1pl8 debug3: send packet: type 50 debug2: we sent a publickey packet, wait for reply debug3: 
receive packet: type 51 debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/strboul/.ssh/id_ecdsa debug3: no such identity: /home/strboul/.ssh/id_ecdsa: No such file or directory debug1: Trying private key: /home/strboul/.ssh/id_ecdsa_sk debug3: no such identity: /home/strboul/.ssh/id_ecdsa_sk: No such file or directory debug1: Trying private key: /home/strboul/.ssh/id_ed25519 debug3: no such identity: /home/strboul/.ssh/id_ed25519: No such file or directory debug1: Trying private key: /home/strboul/.ssh/id_ed25519_sk debug3: no such identity: /home/strboul/.ssh/id_ed25519_sk: No such file or directory debug1: Trying private key: /home/strboul/.ssh/id_xmss debug3: no such identity: /home/strboul/.ssh/id_xmss: No such file or directory debug1: Trying private key: /home/strboul/.ssh/id_dsa debug3: no such identity: /home/strboul/.ssh/id_dsa: No such file or directory debug2: we did not send a packet, disable method debug1: No more authentication methods to try. service2@10.0.3.120: Permission denied (publickey,password). debug3: send packet: type 1 debug1: channel 0: free: direct-tcpip: listening port 0 for 10.0.3.120 port 22, connect from 127.0.0.1 port 65535 to UNKNOWN port 65536, nchannels 1 debug3: channel 0: status: The following connections are open: #0 direct-tcpip: listening port 0 for 10.0.3.120 port 22, connect from 127.0.0.1 port 65535 to UNKNOWN port 65536 (t4 [stdio-forward] r0 i0/0 o0/0 e[closed]/0 fd 4/5/-1 sock -1 cc -1 io 0x01/0x00) Killed by signal 1.

This works:

ssh -vvv -F config -tt jump@192.168.56.11 ssh service2@10.0.3.120

OpenSSH_9.3p1, OpenSSL 3.0.8 7 Feb 2023 debug1: Reading configuration data config debug1: config line 1: Applying options for * debug1: config line 9: Applying options for 192.168.56.11 debug2: resolve_canonicalize: hostname 192.168.56.11 is address debug3: ssh_connect_direct: entering debug1: Connecting to 192.168.56.11 [192.168.56.11] port 22. 
debug3: set_sock_tos: set socket 3 IP_TOS 0x48 debug1: Connection established. debug1: identity file id_ed25519-client_test type 3 debug1: identity file id_ed25519-client_test-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_9.3 debug1: Remote protocol version 2.0, remote software version OpenSSH_8.2p1 Ubuntu-4ubuntu0.7 debug1: compat_banner: match: OpenSSH_8.2p1 Ubuntu-4ubuntu0.7 pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug1: Authenticating to 192.168.56.11:22 as 'jump' debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug3: order_hostkeyalgs: no algorithms matched; accept original debug3: send packet: type 20 debug1: SSH2_MSG_KEXINIT sent debug3: receive packet: type 20 debug1: SSH2_MSG_KEXINIT received debug2: local client KEXINIT proposal debug2: KEX algorithms: [email protected],curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c debug2: host key algorithms: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],rsa-sha2-512,rsa-sha2-256 debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email 
protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,[email protected],zlib debug2: compression stoc: none,[email protected],zlib debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposal debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256 debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519 debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,[email protected] debug2: compression stoc: none,[email protected] debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: curve25519-sha256 debug1: kex: host key algorithm: ssh-ed25519 debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none debug3: send packet: type 30 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug3: receive packet: type 31 debug1: SSH2_MSG_KEX_ECDH_REPLY received debug1: Server host key: ssh-ed25519 SHA256:wUJgKzsnvmIfNKpldnCZEasb1RAvQV0B97Rr2zjRwvQ debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory 
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory Warning: Permanently added '192.168.56.11' (ED25519) to the list of known hosts. debug3: send packet: type 21 debug2: ssh_set_newkeys: mode 1 debug1: rekey out after 134217728 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug3: receive packet: type 21 debug1: SSH2_MSG_NEWKEYS received debug2: ssh_set_newkeys: mode 0 debug1: rekey in after 134217728 blocks debug3: ssh_get_authentication_socket_path: path '/tmp/ssh-XXXXXXLePZAR/agent.15888' debug1: get_agent_identities: bound agent to hostkey debug1: get_agent_identities: ssh_fetch_identitylist: agent contains no identities debug1: Will attempt key: id_ed25519-client_test ED25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI explicit debug2: pubkey_prepare: done debug3: send packet: type 5 debug3: receive packet: type 7 debug1: SSH2_MSG_EXT_INFO received debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,[email protected],ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected]> debug3: receive packet: type 6 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug3: send packet: type 50 debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred publickey,keyboard-interactive debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: id_ed25519-client_test ED25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI explicit debug3: send packet: type 50 debug2: we sent a publickey packet, wait for reply debug3: receive packet: type 60 debug1: Server accepts key: id_ed25519-client_test ED25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI explicit debug3: 
sign_and_send_pubkey: using publickey with ED25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI debug3: sign_and_send_pubkey: signing using ssh-ed25519 SHA256:1zzOdFQiBRu5EFHKIq1V0TfvYhHLGWNbaTyDn1m7giI debug3: send packet: type 50 debug3: receive packet: type 52 Authenticated to 192.168.56.11 ([192.168.56.11]:22) using "publickey". debug1: channel 0: new session [client-session] (inactive timeout: 0) debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug3: send packet: type 90 debug1: Requesting [email protected] debug3: send packet: type 80 debug1: Entering interactive session. debug1: pledge: network debug3: client_repledge: enter debug3: receive packet: type 80 debug1: client_input_global_request: rtype [email protected] want_reply 0 debug3: receive packet: type 4 debug1: Remote: /etc/ssh/sshd_config.d/authorized_keys:1: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug3: receive packet: type 4 debug1: Remote: /etc/ssh/sshd_config.d/authorized_keys:1: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug3: receive packet: type 91 debug2: channel_input_open_confirmation: channel 0: callback start debug2: fd 3 setting TCP_NODELAY debug3: set_sock_tos: set socket 3 IP_TOS 0x48 debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug3: send packet: type 98 debug1: Sending command: ssh [email protected] debug2: channel 0: request exec confirm 1 debug3: send packet: type 98 debug3: client_repledge: enter debug1: pledge: fork debug2: channel_input_open_confirmation: channel 0: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug3: receive packet: type 99 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug3: receive packet: type 99 debug2: channel_input_status_confirm: type 99 id 0 debug2: exec request accepted on channel 0 Welcome to Ubuntu 
20.04.6 LTS (GNU/Linux 5.4.0-149-generic x86_64) * Documentation: https://help.ubuntu.com * Management: https://landscape.canonical.com * Support: https://ubuntu.com/advantage Last login: Tue May 30 11:09:04 2023 from 10.0.3.100 To run a command as administrator (user "root"), use "sudo <command>". See "man sudo_root" for details. service2@service2:~$ Directory contents: config id_ed25519-client_test id_ed25519-client_test.pub Config file: Host * PasswordAuthentication no StrictHostKeyChecking no UserKnownHostsFile /dev/null Host 192.168.56.11 IdentityFile id_ed25519-client_test
The two methods are not equivalent and I think your interpretation is wrong. With ssh … [email protected] ssh [email protected] you connect from the local computer to 192.168.56.11 and then from 192.168.56.11 to 10.0.3.120 (by running ssh [email protected] on 192.168.56.11). The second connection uses credentials and config available to jump user on 192.168.56.11 (your local config is irrelevant for this connection). Both connections succeed. With ssh … -J [email protected] [email protected] you connect from the local computer to 192.168.56.11 and then from your local computer to 10.0.3.120, using packets forwarded via 192.168.56.11. The second connection uses credentials and config available to your local user. Config and keys on the jump host are irrelevant, no ssh process is run on the jump host. Each connection starts in your local computer and involves your local configuration. It's the second connection that fails. Connection from your local computer to 192.168.56.11 works in each case. I see no evidence your custom config is ignored. The difference is in what happens later. It looks that: (without -J) [email protected] is able to log into [email protected] using its configuration and credentials; (with -J) your local account is not able to log into [email protected] using the local configuration and local credentials. This is the difference. The config file you posted (it's the local config, right?) contains an entry for 192.168.56.11, but no entry for 10.0.3.120. If you want to use the same identity file while connecting from local to 10.0.3.120 then you need to add Host 10.0.3.120 IdentityFile id_ed25519-client_test to the local config. Side note (see man 5 ssh_config): for each parameter the first obtained value will be used. For this reason you generally should place host-specific declarations near the beginning of the file, and general defaults at the end. You have Host * before Host 192.168.56.11. 
The current config is not problematic yet, but by customizing it further you may reach a point where some line under Host * conflicts with some line under Host 192.168.56.11; the line under Host * will then "win" (example). Consider moving the Host * section to the end.
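Putting it together, a sketch of the local config with the host-specific block first (the filename is from the question; listing both hosts on one Host line is one way to share the key, assuming that is what's wanted):

```
Host 192.168.56.11 10.0.3.120
    IdentityFile id_ed25519-client_test

Host *
    PasswordAuthentication no
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```

With this, the -J variant should offer the same key to both the jump host and the target, since each first-matching Host block now precedes the catch-all.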
Custom ssh config is ignored with jump host
1,547,329,167,000
We have an Ubuntu PC that has one user, dev, which multiple people ssh into simultaneously. How can each user that's sshed into dev use their own .conf files? There are numerous scripts that only work for dev, so it's not practical for us to have separate users for every person. We could run something like su - dev -c "./script" from our own accounts, but I'm wondering if there's a better way.
You're trying to reinvent user accounts. Don't do that; instead, use the tools you've already got. "There are numerous scripts that only work for dev, so it's not practical for us to have separate users for every person." Fixing these would be the better investment of time. This way you'll have a standard, maintainable system running custom development tools. Changing your Linux-based system to handle multiple "user accounts" under the single dev login may require additional effort each time you upgrade and will, regardless, result in a non-standard system that needs special care to administer.
Multiple users ssh onto the same account simultaneously. How can everybody use their own .conf files?
1,547,329,167,000
How to ignore the last cursor position of the editor XED from the console? It is known that something similar is possible in the following way:

XED, wipe the search history:
gsettings reset org.x.editor.state.history-entry history-search-for

XED, wipe the replace history:
gsettings reset org.x.editor.state.history-entry history-replace-with

Clarification, to avoid misunderstandings: what we are looking for, for the terminal, is a way to ignore the last cursor position, but not to disable it permanently.
You can disable the restore-cursor-position with

gsettings set org.x.editor.preferences.editor restore-cursor-position false

This is permanent. If you want a temporary solution you can invoke xed with +1 as an option and the filename in question, e.g.

xed +1 infile

which will open infile and position the cursor at the beginning of the file. If you want to use it with a desktop launcher then something like this should work:

Exec=xed +1 %u

OK, another way would be to use an alternate dconf profile. You set the key to false, save the profile to another directory, and then set the key back to true. When you want to ignore the cursor position you load the saved profile; otherwise you run xed normally. Run:

mkdir -p ~/.alt_xed/dconf/
gsettings reset org.x.editor.state.history-entry history-search-for
gsettings reset org.x.editor.state.history-entry history-replace-with
gsettings set org.x.editor.preferences.editor restore-cursor-position false
cp ~/.config/dconf/user ~/.alt_xed/dconf
gsettings set org.x.editor.preferences.editor restore-cursor-position true

If you then start your editor with the following command it will open any/all files with the cursor on the first line of the file:

XDG_CONFIG_HOME=~/.alt_xed xed
How to temporarily ignore/disable the last cursor position of the editor XED?
1,547,329,167,000
On Fedora Server 36, MySQL 8 Community was installed through the .rpm from MySQL Community Downloads. Now, according to the Editing Conf. Files section, the /etc/my.cnf.d/community-mysql-server.cnf file should be used, but it does not exist because the /etc/my.cnf.d/ directory is empty; the file truly in effect is /etc/my.cnf, which currently contains:

[mysql]
#
# many comments
#
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/run/mysqld/mysqld.pid

Well, if port=3307 is added, as in

[mysql]
#
# many comments
#
port=3307
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/run/mysqld/mysqld.pid

and the file is saved and the following commands are executed:

sudo systemctl stop mysqld
sudo systemctl start mysqld

then the second command fails with the following message:

Job for mysqld.service failed because the control process exited with error code. See "systemctl status mysqld.service" and "journalctl -xeu mysqld.service" for details

The systemctl status mysqld.service and journalctl -xeu mysqld.service commands show, among other things, Error 13: (Permission denied). The sudo cat /var/log/mysqld.log command shows:

Can't start server: Bind on TCP/IP port: Permission denied
Do you already have another mysqld server running on port: 3307?

The output of sudo lsof -i -P was also checked. What is missing, or what should be done? Note: I have this situation even with port 3308. Of course, if 3306 is declared explicitly, everything works fine.
The problem is related to SELinux (Troubleshooting problems related to SELinux) because, as @Artem S. Tashkinov said, this allows mysql only to listen on port 3306. So if you want to be able to use another port (e.g. 3307), you will need to run this command:

sudo /usr/sbin/semanage port -a -t mysqld_port_t -p tcp 3307
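A sketch of the full round trip (3307 is just the example port; semanage typically ships in the policycoreutils-python-utils package on Fedora):

```
sudo semanage port -a -t mysqld_port_t -p tcp 3307   # allow the port in SELinux policy
sudo semanage port -l | grep mysqld_port_t           # verify 3307 is now listed
sudo systemctl restart mysqld                        # restart with port=3307 in my.cnf
```

The grep step is the quick sanity check: the mysqld_port_t line should now include 3307 alongside 3306.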
Is it not possible to change MySQL's port on Fedora Server 36?
1,547,329,167,000
Please consider the prior discussion as background to this new question. I have modified my script and applied the same filesystem options to my USB drive's ext4 partitions using tune2fs, and mount options specified in the fstab. Those options are all the same as for the previous discussion. I have applied those changes and performed a reboot, but the mount command is not reporting what I would have expected, namely that it would show mount options similar to those reported for the internal hard drive partitions. What is being reported is the following: /dev/sdc3 on /site/DB005_F1 type ext4 (rw,relatime) /dev/sdc4 on /site/DB005_F2 type ext4 (rw,relatime) /dev/sdc5 on /site/DB005_F3 type ext4 (rw,relatime) /dev/sdc6 on /site/DB005_F4 type ext4 (rw,relatime) /dev/sdc7 on /site/DB005_F5 type ext4 (rw,relatime) /dev/sdc8 on /site/DB005_F6 type ext4 (rw,relatime) /dev/sdc9 on /site/DB005_F7 type ext4 (rw,relatime) /dev/sdc10 on /site/DB005_F8 type ext4 (rw,relatime) /dev/sdc11 on /site/DB006_F1 type ext4 (rw,relatime) /dev/sdc12 on /site/DB006_F2 type ext4 (rw,relatime) /dev/sdc13 on /site/DB006_F3 type ext4 (rw,relatime) /dev/sdc14 on /site/DB006_F4 type ext4 (rw,relatime) /dev/sdc15 on /site/DB006_F5 type ext4 (rw,relatime) /dev/sdc16 on /site/DB006_F6 type ext4 (rw,relatime) /dev/sdc17 on /site/DB006_F7 type ext4 (rw,relatime) /dev/sdc18 on /site/DB006_F8 type ext4 (rw,relatime) These are all reporting the same, but only reporting "rw,relatime", when I expected much more. 
The full dumpe2fs report for the first USB partition (same as for all others) is as follows: root@OasisMega1:/DB001_F2/Oasis/bin# more tuneFS.previous.DB005_F1.20220907-210437.dumpe2fs dumpe2fs 1.45.5 (07-Jan-2020) Filesystem volume name: DB005_F1 Last mounted on: <not available> Filesystem UUID: 11c8fbcc-c1e1-424d-9ffe-ad0ccf480128 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_fi le dir_nlink extra_isize metadata_csum Filesystem flags: signed_directory_hash Default mount options: journal_data user_xattr acl block_validity nodelalloc Filesystem state: clean Errors behavior: Remount read-only Filesystem OS type: Linux Inode count: 6553600 Block count: 26214400 Reserved block count: 1310720 Free blocks: 25656747 Free inodes: 6553589 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 1017 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Sat Nov 7 09:57:44 2020 Last mount time: Wed Sep 7 18:18:32 2022 Last write time: Wed Sep 7 20:55:33 2022 Mount count: 211 Maximum mount count: 10 Last checked: Sun Nov 22 13:50:57 2020 Check interval: 1209600 (2 weeks) Next check after: Sun Dec 6 13:50:57 2020 Lifetime writes: 1607 MB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 32 Desired extra isize: 32 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 802d4ef6-daf4-4f68-b889-435a5ce467c3 Journal backup: inode blocks Checksum type: crc32c Checksum: 0x21a24a19 Journal features: journal_checksum_v3 Journal size: 512M Journal length: 131072 Journal sequence: 0x000000bd Journal start: 0 Journal checksum type: crc32c Journal checksum: 0xf0a385eb Does anyone know why this is happening? 
Can something be done to have both internal and USB hard disk report same options? In my /etc/default/grub file, I currently use the following definition involving a quirk: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash scsi_mod.use_blk_mq=1 usb-storage.quirks=1058:25ee:u ipv6.disable=1" Do I need to specify another quirk for the journalling and mount options to take effect as desired? Or is this again an "everything is OK" situation, the same as for the other post? Modified script: #!/bin/sh #################################################################################### ### ### $Id: tuneFS.sh,v 1.3 2022/09/08 03:31:12 root Exp $ ### ### Script to set consistent (local/site) preferences for filesystem treatment at boot-time or mounting ### #################################################################################### TIMESTAMP=`date '+%Y%m%d-%H%M%S' ` BASE=`basename "$0" ".sh" ` ### ### These variables will document hard-coded 'mount' preferences for filesystems ### count=1 BOOT_MAX_INTERVAL="-c 20" ### max number of boots before fsck [20 boots] TIME_MAX_INTERVAL="-i 2w" ### max calendar time between boots before fsck [2 weeks] ERROR_ACTION="-e remount-ro" ### what to do if error encountered #-m reserved-blocks-percentage ### ### This OPTIONS string should be updated manually to document ### the preferred and expected settings to be applied to ext4 filesystems ### OPTIONS="-o journal_data,block_validity,nodelalloc" ASSIGN=0 REPORT=0 VERB=0 SINGLE=0 USB=0 while [ $# -gt 0 ] do case ${1} in --default ) REPORT=0 ; ASSIGN=0 ; shift ;; --report ) REPORT=1 ; ASSIGN=0 ; shift ;; --force ) REPORT=0 ; ASSIGN=1 ; shift ;; --verbose ) VERB=1 ; shift ;; --single ) SINGLE=1 ; shift ;; --usb ) USB=1 ; shift ;; * ) echo "\n\t Invalid parameter used on the command line. 
Valid options: [ --default | --report | --force | --single | --usb | --verbose ] \n Bye!\n" ; exit 1 ;; esac done workHorse() { reference=`ls -t1 "${PREF}."*".dumpe2fs" 2>/dev/null | tail -1 ` if [ -n "${reference}" -a -s "${reference}" ] then if [ ! -f "${PREF}.dumpe2fs.REFERENCE" ] then mv -v ${reference} ${PREF}.dumpe2fs.REFERENCE fi fi reference=`ls -t1 "${PREF}."*".verify" 2>/dev/null | tail -1 ` if [ -n "${reference}" -a -s "${reference}" ] then if [ ! -f "${PREF}.verify.REFERENCE" ] then mv -v ${reference} ${PREF}.verify.REFERENCE fi fi BACKUP="${BASE}.previous.${PARTITION}.${TIMESTAMP}" BACKUP="${BASE}.previous.${PARTITION}.${TIMESTAMP}" rm -f ${PREF}.*.tune2fs rm -f ${PREF}.*.dumpe2fs ### reporting by 'tune2fs -l' is a subset of that from 'dumpe2fs -h' if [ ${REPORT} -eq 1 ] then ### No need to generate report from tune2fs for this mode. ( dumpe2fs -h ${DEVICE} 2>&1 ) | awk '{ if( NR == 1 ){ print $0 } ; if( index($0,"revision") != 0 ){ print $0 } ; if( index($0,"mount options") != 0 ){ print $0 } ; if( index($0,"features") != 0 ){ print $0 } ; if( index($0,"Filesystem flags") != 0 ){ print $0 } ; if( index($0,"directory hash") != 0 ){ print $0 } ; }'>${BACKUP}.dumpe2fs echo "\n dumpe2fs REPORT [$PARTITION]:" cat ${BACKUP}.dumpe2fs else ### Generate report from tune2fs for this mode but only as sanity check. tune2fs -l ${DEVICE} 2>&1 >${BACKUP}.tune2fs ( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.dumpe2fs if [ ${VERB} -eq 1 ] ; then echo "\n tune2fs REPORT:" cat ${BACKUP}.tune2fs echo "\n dumpe2fs REPORT:" cat ${BACKUP}.dumpe2fs fi if [ ${ASSIGN} -eq 1 ] then echo " COMMAND: tune2fs ${COUNTER_SET} ${BOOT_MAX_INTERVAL} ${TIME_MAX_INTERVAL} ${ERROR_ACTION} ${OPTIONS} ${DEVICE} ..." 
tune2fs ${COUNTER_SET} ${BOOT_MAX_INTERVAL} ${TIME_MAX_INTERVAL} ${ERROR_ACTION} ${OPTIONS} ${DEVICE} rm -f ${PREF}.*.verify ( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.verify if [ ${VERB} -eq 1 ] ; then echo "\n Changes:" diff ${BACKUP}.dumpe2fs ${BACKUP}.verify fi else if [ ${VERB} -eq 1 ] ; then echo "\n Differences:" diff ${BACKUP}.tune2fs ${BACKUP}.dumpe2fs fi rm -f ${BACKUP}.verify fi fi } workPartitions() { case ${PARTITION} in 1 ) case ${DISK_ID} in 1 ) DEVICE="/dev/sda3" ; OPTIONS="" ;; 5 ) DEVICE="/dev/sdc3" ;; 6 ) DEVICE="/dev/sdc11" ;; esac ;; 2 ) case ${DISK_ID} in 1 ) DEVICE="/dev/sda7" ;; 5 ) DEVICE="/dev/sdc4" ;; 6 ) DEVICE="/dev/sdc12" ;; esac ;; 3 ) case ${DISK_ID} in 1 ) DEVICE="/dev/sda8" ;; 5 ) DEVICE="/dev/sdc5" ;; 6 ) DEVICE="/dev/sdc13" ;; esac ;; 4 ) case ${DISK_ID} in 1 ) DEVICE="/dev/sda9" ;; 5 ) DEVICE="/dev/sdc6" ;; 6 ) DEVICE="/dev/sdc14" ;; esac ;; 5 ) case ${DISK_ID} in 1 ) DEVICE="/dev/sda12" ;; 5 ) DEVICE="/dev/sdc7" ;; 6 ) DEVICE="/dev/sdc15" ;; esac ;; 6 ) case ${DISK_ID} in 1 ) DEVICE="/dev/sda13" ;; 5 ) DEVICE="/dev/sdc8" ;; 6 ) DEVICE="/dev/sdc16" ;; esac ;; 7 ) case ${DISK_ID} in 1 ) DEVICE="/dev/sda14" ;; 5 ) DEVICE="/dev/sdc9" ;; 6 ) DEVICE="/dev/sdc17" ;; esac ;; 8 ) case ${DISK_ID} in 1 ) DEVICE="/dev/sda4" ;; 5 ) DEVICE="/dev/sdc10" ;; 6 ) DEVICE="/dev/sdc18" ;; esac ;; esac PARTITION="DB00${DISK_ID}_F${PARTITION}" PREF="${BASE}.previous.${PARTITION}" echo "\n\t\t PARTITION = ${PARTITION}" echo "\t\t DEVICE = ${DEVICE}" count=`expr ${count} + 1 ` COUNTER_SET="-C ${count}" workHorse } workPartitionGroups() { if [ ${SINGLE} -eq 1 ] then for PARTITION in `echo ${ID_SET} ` do echo "\n\t Actions only for DB00${DISK_ID}_F${PARTITION} ? [y|N] => \c" ; read sel if [ -z "${sel}" ] ; then sel="N" ; fi case ${sel} in y* | Y* ) DOIT=1 ; break ;; * ) DOIT=0 ;; esac done if [ ${DOIT} -eq 1 ] then #echo "\t\t PARTITION ID == ${PARTITION} ..." 
workPartitions exit fi else for PARTITION in `echo ${ID_SET} ` do #echo "\t\t PARTITION ID == ${PARTITION} ..." workPartitions done fi } if [ ${USB} -eq 1 ] then for DISK_ID in 5 6 do echo "\n\n DISK ID == ${DISK_ID} ..." ID_SET="1 2 3 4 5 6 7 8" workPartitionGroups done else DISK_ID="1" echo "\n\n DISK ID == ${DISK_ID} ..." ID_SET="2 3 4 5 6 7 8" workPartitionGroups fi exit 0 exit 0 exit 0
Some ext4 filesystem options may not take effect if specified in /etc/fstab as they require changes to filesystem structures. Some of those can be simply applied with tune2fs while the filesystem is unmounted, but there are some options that may require running a full filesystem check after tune2fs to take effect properly. As far as I know, there is no mechanism that would affect filesystem options based on whether the disk is connected by USB or not.
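As a sketch of "apply with tune2fs while unmounted, then run a full check" for one of the partitions from the question (the options are copied from the script's OPTIONS string; needs root, and the filesystem must actually be unmounted):

```
umount /site/DB005_F1
tune2fs -o journal_data,block_validity,nodelalloc /dev/sdc3
e2fsck -f /dev/sdc3      # full check so structural changes settle
mount /site/DB005_F1     # assumes the fstab entry from the question
```

Note that tune2fs -o sets the default mount options stored in the superblock; whether mount later reports them in its output is a separate matter, as discussed above.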
EXT4 on USB - how to specify journalling behaviour to be same as for root disk partitions
1,547,329,167,000
The idea is to have a Docker image, or something similar, where I configure a whole bunch of hardware-unrelated things, and then deploy that to an actual device like a Raspberry Pi or a plain x64 server. I'm assuming it is not possible to do this in such a straightforward way, since Docker has some virtualized hardware that will mismatch the target? And I don't want to run a Docker image in a container on a fresh Linux install; I need everything to run natively. Can someone point me in the right direction on how to achieve this, and what kind of dev-ops software I should think of when starting this? Other than making a few install scripts and copying over the configs (which is basically what it is, really). Where I want to get to is a restore image, which I preferably configure in a virtual machine, and then have a starting point where I install that image on (virtually) any hardware and it will boot... Or do I have misconceptions about how Linux handles changing hardware? (I have bad experience with GPUs changing.) (Looking around, it seems like Docker is the best and most sensible solution for this, using docker export, but I'm curious if there are even simpler solutions, perhaps a program that can export the specific applications/configs that I specify, instead of manually writing a script; not out of laziness, but because of human error.)
I assume that what you are looking for is a way to "clone" an image of the installed system (with applications, configuration, etc.) to a different machine (actual bare-metal hardware, not a virtual machine). So you should also start off with real hardware as the source, not a virtual machine or Docker container. Install everything you need on that machine, but try to avoid installing device-specific third-party drivers. If you can stick to the drivers that are automatically provided by the kernel, there are quite big chances your image will run on another machine. Of course it needs to be the same architecture, so if you want to build an image for x86, you need an x86 machine, and if you want to build an image for Pi, you need a Pi. Then you can use Clonezilla to make a disk image and then restore that image on another machine. However, Clonezilla is available for x86 machines only, not for Pi. Another method is to dd the whole disk (as a raw device) to a file on an external drive/USB stick and restore that image file on another machine (writing to the raw disk device again), but there is a limitation: you need to have identical-size disks in both machines, otherwise I don't expect the cloned system to boot at all. Of course you need to do this after the machine (either source or target) is booted from a live medium, so that the OS on disk is not active (in the case of the target machine it's not present at all, so the only way to boot is from a live medium). The most "hardcore" method, requiring quite a lot of work (but you can use this method to clone your system to a machine with a much bigger disk), is to tar the whole filesystem(s) on disk (also after booting from a live medium) and unpack them to the appropriate partition(s) on the target machine (after manually creating the partition(s), of course). After this, you need to adjust the /etc/fstab file (as the partition UUID(s) will usually be different) and install the bootloader on the target machine.
I have "cloned" working systems several times using this method, when we replaced servers with more powerful hardware. Warning: If the machine has a statically configured IP address(es), the "cloned" target machine will have the same address(es) - you have to change them manually. In case of some systems, you have also to delete some files from /etc/udev/rules.d (the ones that have "persistent" in their name) to make udev "forget" the devices from the old machine and detect them again on next boot - otherwise, you could have problems for example with network interfaces not present (as in this example)
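The tar method from the last paragraph, roughly sketched (device names, mount points and the bootloader are examples; both steps run from a live medium with the relevant filesystems mounted):

```
# on the source, with its root filesystem mounted at /mnt/src
tar -C /mnt/src -cpf /media/usb/rootfs.tar .

# on the target, after partitioning and mounting the new root at /mnt/dst
tar -C /mnt/dst -xpf /media/usb/rootfs.tar

# then, still on the target:
#  - fix the UUIDs in /mnt/dst/etc/fstab (blkid shows the new ones)
#  - chroot into /mnt/dst and reinstall the bootloader (e.g. grub-install)
```

The -p flag preserves permissions; run both tar invocations as root so ownership survives the copy.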
Linux config & applications deploy on fresh hardware
1,547,329,167,000
I recently came across the Linux password change method via the GRUB by entering single user mode. After some digging around I found some articles on how to secure it with a sha512 hashed password.While this sounds like a good option to secure GRUB I also read you can simply change it using a Linux installation. So how can you go about securing the password in a way that cannot be modified or removed through the use of a Linux installation?
So how can you go about securing the password in a way that cannot be modified or removed through the use of a Linux installation? This is not possible. If someone has physical access to your device, they can do everything. You have two options: Encrypt all the partitions and boot from e.g. a USB stick which only you have access to. With new GRUB releases if you have secure EFI [boot] enabled, you can encrypt all the partitions and leave only the EFI boot partition unecrypted with no GRUB password at all. Obviously you'll have to set the password to access BIOS cause otherwise the attacker may disable secure boot or install their own MAC key and tamper with the boot loader and sniff your passwords. Lastly if someone has physical access to your PC they may tamper with your keyboard and install a hardware keylogger - then all your protections are worthless. Protecting your device from physical attacks is a very complicated topic. Maybe you should settle on recently released Apple devices or Android phones which do it perfectly. A run of the mill x86 PC/laptop is wide open to all sorts of physical undetectable attacks.
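For the GRUB-password part itself, a sketch (the "admin" user name is an example, and the hash placeholder must be replaced with real output of grub-mkpasswd-pbkdf2; remember this protects menu editing only, not against the physical attacks described above):

```
# appended to /etc/grub.d/40_custom, followed by running update-grub
set superusers="admin"
password_pbkdf2 admin <paste the grub.pbkdf2.sha512... hash from grub-mkpasswd-pbkdf2>
```

After regenerating the config, editing boot entries (the single-user-mode trick) requires the superuser's password.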
Securing GRUB password from linux installation
1,547,329,167,000
I find myself preferring the interactive mode that can be accessed by adding the -i flag when running Alpine's apk over the default non-interactive mode. However, it is rather tedious to constantly write e.g. # apk add -i over # apk add. Is there any way I can make the interactive mode the default mode of operation for Alpine's apk?
The source code of apk has this function that might be of interest:

static void setup_automatic_flags(void)
{
	[...]
	if (!(apk_flags & APK_SIMULATE) &&
	    access("/etc/apk/interactive", F_OK) == 0)
		apk_flags |= APK_INTERACTIVE;
}

APK_INTERACTIVE is the flag enabled by the -i option:

#define GLOBAL_OPTIONS(OPT) \
	...
	OPT(OPT_GLOBAL_interactive, APK_OPT_SH("i") "interactive") \
	...

And:

static int option_parse_global(void *ctx, struct apk_db_options *dbopts, int opt, const char *optarg)
{
	...
	case OPT_GLOBAL_interactive:
		apk_flags |= APK_INTERACTIVE;
		break;

I think this means the existence of the /etc/apk/interactive file automatically enables the -i option's behaviour. I couldn't find any mention of it in the manpage though. This was added in apk 2.3:

apk: /etc/apk/interactive enables interactive mode for tty sessions

In case someone prefers extra quesions while running apk in a terminal. The file is always from the real root; not from --root so that we will not accidentally enable interactive mode when in initramfs bootstrap.
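If that reading is right, toggling the default is just a matter of creating or removing that file (untested sketch; needs root on the Alpine system):

```
touch /etc/apk/interactive   # apk now acts as if -i were always given
rm /etc/apk/interactive      # back to the non-interactive default
```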
Can you configure Alpine's apk to be interactive by default?
1,547,329,167,000
I'm trying to use the postconf(1) command to add a new entry to the master.cf file like so: $ sudo postconf -e -M 'submission/inet=private=n unpriv=- chroot=y wakeup=- maxproc=- command=smtpd -o smtpd_enforce_tls=yes -o smtpd_sasl_auth_enable=yes -o syslog_name=postfix/submission' Note: broken up on multiple lines for nicer display here. This gives me an error as follow: postconf: fatal: invalid type field "unpriv=-" in "private=n unpriv=- chroot=y wakeup=- maxproc=- command=smtpd -o smtpd_enforce_tls=yes -o smtpd_sasl_auth_enable=yes -o syslog_name=postfix/submission" I also tried without the field names: $ sudo postconf -M 'submission/inet=n - y - - smtpd -o smtpd_enforce_tls=yes -o smtpd_sasl_auth_enable=yes -o syslog_name=postfix/submission' But that didn't help either: postconf: fatal: invalid type field "-" in "n - y - - smtpd -o smtpd_enforce_tls=yes -o smtpd_sasl_auth_enable=yes -o syslog_name=postfix/submission" Also the postconf -F ... fails saying that there are no submission inet entries in the file. Just in case, I tried to also include the -e option, but that made no difference (-e -M or -Me and just -M are all equivalent according to the manual page). Someone knows what the correct syntax of the -M option is?
Yup, annoying, isn't it, the way it is so badly documented. Try: sudo postconf -M submission/inet="submission inet n - y - - smtpd -o smtpd_enforce_tls=yes -o smtpd_sasl_auth_enable=yes -o syslog_name=postfix/submission" And a postconf -M | grep submission will hopefully confirm your requirements.
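To confirm the entry landed, the same -M option can read it back (a sketch; requires Postfix installed):

```
postconf -M submission/inet    # prints just the submission entry from master.cf
```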
How do you use the `postconf -Me ...` option?
1,547,329,167,000
How to set an alternative path for the zsh history file instead of the default ~/.zsh_history?
In ~/.zshrc:

HISTFILE="your/custom/history/file/path"
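A slightly fuller sketch for ~/.zshrc (the path under ~/.local/state is just an example; note that zsh will not create the parent directory for you):

```shell
# keep the history file out of the home directory root (example path)
export HISTFILE="$HOME/.local/state/zsh/history"
mkdir -p "${HISTFILE%/*}"   # create the directory on first use
export HISTSIZE=10000       # lines kept in memory
export SAVEHIST=10000       # lines actually written to $HISTFILE
```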
Set custom shell history file [duplicate]
1,547,329,167,000
Operating System: Debian GNU/Linux 10 (buster) Kernel: Linux 4.19.0-17-amd64 Architecture: x86-64 I am studying behavior of crontab -e. Are following assumptions correct? crontab -e edits the crontab file in "default editor" Such "default editor" is selected by sudo update-alternatives --config editor If printenv EDITOR returns blank, the above "default editor" is used But if $EDITOR is defined, it takes precedence over the "default editor" Also, after selecting "default editor", where is that selection stored? Many online resource explains how to select "default editor", but I couldn't find answer to location of configuration file.
man crontab answers most of your questions; if you’re using Vixie Cron: The -e option is used to edit the current crontab using the editor specified by the VISUAL or EDITOR environment variables. After you exit from the editor, the modified crontab will be installed automatically. If neither of the environment variables is defined, then the default editor /usr/bin/editor is used. So the editor is determined by the VISUAL variable, or if it’s not set, the EDITOR variable, and if that’s not set, /usr/bin/editor. The latter is an alternative, i.e. a symlink to /etc/alternatives/editor, which is itself a symlink to the chosen editor, configurable as you say by running update-alternatives. That’s how the chosen alternative is stored: the corresponding symlink is updated. Information about alternatives is also stored in /var/lib/dpkg/alternatives. See man update-alternatives for details.
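The lookup order can be mimicked in plain shell; a sketch of what crontab -e effectively does to choose the editor:

```shell
# VISUAL wins over EDITOR; /usr/bin/editor is the final fallback
chosen="${VISUAL:-${EDITOR:-/usr/bin/editor}}"
echo "crontab -e would use: $chosen"
```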
Behaviour of crontab -e and environmental variables or configuration of default editor ( Debian )
1,547,329,167,000
When I open xpdf, I'm seeing a bunch of errors printed to my console:

Config Error: No display font for 'Courier'
Config Error: No display font for 'Courier-Bold'
Config Error: No display font for 'Courier-BoldOblique'
Config Error: No display font for 'Courier-Oblique'
Config Error: No display font for 'Helvetica'
Config Error: No display font for 'Helvetica-Bold'
Config Error: No display font for 'Helvetica-BoldOblique'
Config Error: No display font for 'Helvetica-Oblique'
Config Error: No display font for 'Symbol'
Config Error: No display font for 'Times-Bold'
Config Error: No display font for 'Times-BoldItalic'
Config Error: No display font for 'Times-Italic'
Config Error: No display font for 'Times-Roman'
Config Error: No display font for 'ZapfDingbats'

The strange thing is: this only started happening after I created the config file ~/.xpdfrc (even if it is empty). Am I missing something? Is there a way to suppress/fix the errors?
It turned out on Arch Linux specifically (not sure about other distributions), the file /etc/xpdfrc is not all comments. Possibly the Ghostscript fonts are not installed in the "default" location (as explained in /etc/xpdfrc), which is why the font mappings (fontFile) are required. The solution for me was to: cp /etc/xpdfrc ~/.xpdfrc And then add custom options to the end of ~/.xpdfrc instead of creating a new config file in my home directory.
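For reference, the kind of mappings that file carries looks like this (the font file paths below are invented examples, not real locations; use the ones from your copy of /etc/xpdfrc):

```
# excerpt-style sketch of fontFile mappings in ~/.xpdfrc
fontFile Times-Roman  /usr/share/fonts/example/NimbusRoman-Regular.otf
fontFile Helvetica    /usr/share/fonts/example/NimbusSans-Regular.otf
fontFile Courier      /usr/share/fonts/example/NimbusMonoPS-Regular.otf
```

One such line per base-14 font is what silences the "No display font for" complaints.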
xpdf: Config Error: No display font for
1,547,329,167,000
I'm new to QEMU and currently playing around with the QEMU config file to understand how it works. I tried to put all the options of my simple QEMU command line

sudo qemu-system-x86_64 -cpu host -enable-kvm -m 8192 -nic user,host=192.168.0.2,net=192.168.0.2 -nic bridge,br=virbr0 ubuntu.img

in the config file qemu.cfg, and here is how it looks:

[nic "user"]
host=192.168.0.2
net=192.168.0.2

[nic "bridge"]
br=virbr0

cpu = host
enable-kvm
m = 8192
ubuntu.img

Running QEMU with the readconfig file specified, I got the following error:

$ sudo qemu-system-x86_64 -readconfig qemu.cfg
qemu-system-x86_64:qemu.cfg:1: Invalid parameter 'host=192.168.0.2'

How to fix the configuration file?
After hours of googling I didn't find any comprehensive manual on the syntax of the configuration file. So I looked at the source code, and here is the resulting config file:

[nic]
type = "user"
host = "192.168.0.2"
net = "192.168.0.2"

[nic]
type = "bridge"
br = "virbr0"

[memory]
size = "8192"

There are some mistakes made in the config presented in the question:

Spaces matter. It is required to insert a space before and after the "=" sign:

[nic]
type = "user"    #correct

[nic]
type="user"      #wrong

Find the correct name of a config group. In case of an incorrectly named config group, qemu prints an error message of the form:

There is no option group

It's easy to find it in the qemu source code. After that we just need to find all the config groups added into

static QemuOptsList *vm_config_groups[48];

The function void qemu_add_opts(QemuOptsList *list) is responsible for that. The parameters for the nic group can be found in the declaration:

QemuOptsList qemu_nic_opts = {
    .name = "nic",
    .implied_opt_name = "type",
    .head = QTAILQ_HEAD_INITIALIZER(qemu_nic_opts.head),
    .desc = {
        /*
         * no elements => accept any params
         * validation will happen later
         */
        { /* end of list */ }
    },
};

which implies the syntax I specified in the answer.

Unanswered question: I don't know if there's a way to specify the rest of the options (-cpu host -enable-kvm ubuntu.img) via such a configuration file. Looking through the valid groups didn't yield any useful results.

UPD: It turned out that -enable-kvm is configured through the accel option group as follows:

[accel]
accel = "kvm"
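Until config-file groups for the remaining flags turn up, a workable pattern (a sketch, not something from the QEMU docs) is to keep the hardware-independent settings in the file and pass the leftovers on the command line; with -enable-kvm covered by the accel group, only the CPU model and disk image remain:

```
qemu-system-x86_64 -readconfig qemu.cfg -cpu host ubuntu.img
```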
Qemu config file error "Invalid parameter host"
1,617,612,428,000
According to the man-page, it's possible to rebind keys in zathura, but I can't seem to get it to work. N is "search next" and n is "search previous". I would like it to be the other way around. However, when adding this my file, looking at the documentation : unmap n unmap N map N search previous map n search next But now both n and N searches forward instead. I've tried experimenting with the argument up, down, right and left, that is supplied in the documentation, but they don't do anything. In fact, I suspect that it does not recognize the argument "previous" at all, but defaults to the default search argument, which is probably "next". I also tried --previous and -previous, just because, but of course that didn't work either. Can you see what I'm doing wrong here? I'm on a lubuntu 20.04 machine, with zathura 0.4.5 girara 0.3.4 (runtime: 0.3.4) (plugin) pdf-poppler (0.3.0) (/usr/lib/x86_64-linux-gnu/zathura/libpdf-poppler.so)
Not sure if it still matters, but you just forgot to add [normal] after map. I have this in my config and it works:

map [normal] n search forward
map [normal] p search backward

How to unbind \ as search and replace it with something else is something I am still trying to figure out.
zathura config keybinding not working as supposed to in the documentation?
1,617,612,428,000
Can someone please share examples of when and/or where the rpmrc files under the path (/etc/rpmrc and ~/.rpmrc) are used by a system administrator ? Evident from the question but stating it for clarity - I am noob s/w eng and have no experience with Linux, much lesser with CentOS. I have scoured the internet for when, where and why the rpmrc is used but did not find much. (perhaps I am not looking for the right thing!) I am looking to understand what kind of system wide customizations might an administrator put under /etc/rpmrc and what customizations might an administrator put under ~/.rpmrc.
The file /etc/rpmrc is old and obsolete. Modern systems use /etc/rpm/macros.* or even /usr/lib/rpm/macros.d/. You should not touch it. Not even as a user. Not even as a packager or DevOps guy. The people creating the distribution put the distribution defaults there. Nearly all macros do not affect runtime. They are read and interpreted during the build of the package and cannot be changed later. Therefore there should be no need to alter it. You may want to change some macros on the machine where you build your custom packages. Even then you should not touch it. For such a case you can - and should - use ~/.rpmmacros. You should not define new macros there. When you use such a macro in your SPEC file, the package cannot be built on a different machine. When you need to define a new macro, you should do that at the top of the SPEC file. And some good example? It is good to re-define an existing macro. E.g., _smp_mflags. If you check how it is defined (rpm --showrc is your friend), you will find that it passes make a -l option based on your CPU core count. But the upper limit is 16 cores. Now, imagine that you are building some scientific package which needs a lot of parallelism. And you have a machine with 128 cores. You are not using the full power you have! You improve the performance of building your rpm package when you put _smp_mflags -l128 in the ~/.rpmmacros.
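i.e., on the hypothetical 128-core build machine, the override from the example goes into ~/.rpmmacros like this (note the leading % in the file's syntax):

```
# ~/.rpmmacros -- per-user rpmbuild overrides on the build machine
%_smp_mflags -l128
```

You can confirm what the macro now expands to with rpm --eval '%{_smp_mflags}'.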
Example of customisations with rpmrc?
1,617,612,428,000
A mysterious '~' directory recently appeared inside my home directory (as in ~/~). Inside I found a single hidden directory .confit which contained a single directory chromium. Inside that ~/~/.confit/chromium directory were several of the same contents as ~/.config/chromium. I have been using Chromium for years and this is the first time I have seen this behavior. Has anyone else encountered this? I can delete the '~' directory but it comes back every time I launch Chromium. I wish I could identify exactly when it started, but I have already deleted it several times and I don't remember exactly when it first appeared. What determines where Chromium saves its config files? server-side settings? source code? I don't even know where to look. I'm running Arch Linux in case that matters. Edit: Is it possible that the error could be in my configs for an Electron app rather than Chromium itself? Edit: $CHROME_CONFIG_HOME is not set and I set $XDG_CONFIG_HOME to /home/(my username)/.config when I was first troubleshooting this issue. It didn't change anything.
I found it. User error sure enough. My i3 config file was the culprit. I had keybindings set to launch chromium with flag --user-data-dir=~/.confit/chromium/... Really obvious in hindsight. Changed it to --user-data-dir=$HOME/.config/chromium/... and it works correctly now. Thanks for all the help!
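One plausible mechanism (an assumption — it depends on which shell i3 hands the command to): a tilde inside quotes, or glued onto an option, can be passed through literally instead of being expanded, so the browser dutifully creates a directory literally named ~. $HOME expands reliably:

```shell
# Tilde inside quotes is NOT expanded -- the literal string survives:
dir="~/.config/chromium"
echo "$dir"    # -> ~/.config/chromium

# $HOME expands even inside double quotes:
dir="$HOME/.config/chromium"
echo "$dir"    # an absolute path under your home directory
```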
Chromium suddenly started creating redundant '~' directory inside home directory. How to get rid of it?
1,617,612,428,000
Is it possible to define a configuration file for zsh scripts? I am running macOS 10.15 and the default shell is zsh. I have defined, i.e., alias abc="echo yes" in ~/.zshrc, ~/.zsh_profile, ~/.profile, and ~/.zsh_login, every configuration file I can think of. However, a shell script t.sh calling abc will complain command not found: abc: ⇒ cat t.sh abc ⇒ zsh t.sh t.sh:1: command not found: abc ⇒ sh t.sh t.sh: line 1: abc: command not found I understand that adding a line source ~/.zshrc or source ~/.zsh_login on the top of t.sh will work, but I wonder is it possible to set a configuration file that runs by default for any zsh script executed on my machine (by me)? I'm asking this because I want to set an alias that works when I envoke a command inside vim via :!, e.g. :!abc, as well as the ] command in the program nnn. Also please let me know if this is irrelevant to what I asked above. I'm confused of the login shell and non-login shell, or other types of shell. FYI, in vim, :!echo $0 will return /bin/zsh and :!whoami will return ***(my name), so it seems like it's a login shell.
For any zsh script you run, ~/.zshenv will be sourced: => cat ~/.zshenv alias abc="echo yes" => cat t.sh abc => zsh t.sh yes A simplified hierarchy of dotfiles for zsh: ~/.zshenv - sourced for all zsh scripts. ~/.zshrc - sourced for all interactive zsh sessions (those attached to a terminal). This is usually the best place to put aliases. ~/.zprofile - sourced for a login zsh session, i.e. the first interactive session in a set. For your usage, it may be preferable to configure MacVim to launch an interactive shell so it'll pick up aliases from ~/.zshrc. To do that, add this to your ~/.gvimrc (note the g) file: let &shell='/bin/zsh -i' For other vim flavors, you may have to do something a bit more complicated - see the answers here: https://vi.stackexchange.com/questions/16186/how-to-run-zsh-aliased-command-from-vim-command-mode
Configuration file for shell script?
1,617,612,428,000
I've noticed that the PASS_MIN_LEN is missing from the password aging controls section inside /etc/login.defs (Ubuntu 20.04 LTS). At the end of the file there is a note that it's obsolete, alongside other options, and someone needs to edit the appropriate file in /etc/pam.d directory. ################# OBSOLETED BY PAM ############## # # # These options are now handled by PAM. Please # # edit the appropriate file in /etc/pam.d/ to # # enable the equivelants of them. # ############### But even if I cat /etc/pam.d/* | grep "PASS_MIN_LEN" I still cannot find this option! What's up with that?
Pluggable Authentication Modules (PAM) is a (not really) newer generic API for authentication originally proposed by Sun Microsystems in 1995 and adapted a year later on the Linux ecosystem. With an API change, don't expect things to keep the same name (nor to always map exactly the same). from man pam_unix: OPTIONS [...] minlen=n Set a minimum password length of n characters. The default value is 6. The maximum for DES crypt-based passwords is 8 characters. Implementation can vary from distribution to distribution. On Ubuntu (like on Debian), /etc/pam.d/passwd will source /etc/pam.d/common-password where the first password occurence includes the useful options. It can be altered and have an minlen=X parameter added. This later file is autoregenerated by pam-auth-update typically on upgrades, so check it works after an upgrade of PAM-related packages. Normally it should: The script makes every effort to respect local changes to /etc/pam.d/common-*. Local modifications to the list of module options will be preserved, and additions of modules within the managed portion of the stack will cause pam-auth-update to treat the config files as locally modified and not make further changes to the config files unless given the --force option. Other options might still affect the result. Eg, if you set minlen=1, the obscure option will still reject a one-letter password as a palindrome, or a 2-letters password as too simple etc. Be careful when altering PAM: you might lock yourself out of your system in case of wrong settings.
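For instance, on Ubuntu you would edit the first pam_unix line in /etc/pam.d/common-password (the exact module options vary between releases; minlen=12 here is just an example value):

```
# /etc/pam.d/common-password (excerpt)
password  [success=1 default=ignore]  pam_unix.so obscure sha512 minlen=12
```

Keep a root shell open while testing such a change, so a mistake cannot lock you out.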
PASS_MIN_LEN missing from login.defs
1,617,612,428,000
I keep asking myself how to get familiar with the software (mostly a daemon) and its configuration. When running man <program name> I am very overwhelmed by the options available. How do you or other professional system administrators learn what they need from the manual? Which steps are made by you/them to get a clear overview of the tasks to do to reach their goals? I hope there is a step-by-step guide or guideline that I can adopt or follow. For example, I have a VPS and want to configure a proxy. I choose squid and the manual told me to go to https://wiki.squid-cache.org. I did that and get shoot by the information overflow. And squid is not the only daemon I am fighting with. I am getting confused by many services. SSH is at least under control :D Hope you can help me with that and my concern is not worth mentioning! Thank you!
Unix/Linux systems are complex beasts. Welcome to a wonderful world. Check out the site of your favorite distribution, they should have some "introduction to system administration" pages. Look for the various "introduction to Unix/Linux command line" tutorials. The various GNU tools (workalikes of much of the traditional Unix tools) often come with extensive documentation in info form (can also be built as books to be printed). The classic "The Unix programming environment" by Kernighan and Pike is a must read. Falls far short for today's much more complex environments, but it provides a solid foundation for understanding many of the more mystifying whys in Unix. Look for texts on the systems that most interest you (O'Reilly has series of books on salient tools). Ask pointed questions. General, shopping list kind of questions, find little sympathy here.
How to handle information overflow of man and the documentation?
1,617,612,428,000
I have a device DEV1 which should communicate with device DEV3, however in the middle there is DEV2. My understanding is that I need to use IP Forwarding in DEV2 and edit route tables on DEV1 and DEV3. For DEV2 I have enabled IP Forwarding: -> sysctl net.ipv4.ip_forward net.ipv4.ip_forward = 1 I can’t set up rest of the things. What should I do to get this to work?
Given that the two routers allow all connections to pass from either side, the simplest thing is to add new IP addresses to the two interfaces of DEV2. We do this so that DEV2 can easily distinguish between packets meant for it, and packets meant to go through:

ip addr add 192.168.2.3/24 dev INTERFACE2
ip addr add 10.12.0.218/24 dev INTERFACE3

(substitute the real interface names for INTERFACE2/3, and make sure that these addresses are not taken; to do so just ping -c 1 192.168.2.3 for instance and see whether you get any reply. Also, I guessed the two masks are /24; if not, please adjust accordingly). Now anything for 192.168.2.3 and 10.12.0.218 is for DEV1/3, while anything for 192.168.2.1-10.12.0.217 is for DEV2. Now we forward anything arriving on the two new addresses:

iptables -A FORWARD -j ACCEPT
iptables -A PREROUTING -t nat -d 192.168.2.3 -j DNAT --to 10.10.3.154
iptables -A PREROUTING -t nat -d 10.12.0.218 -j DNAT --to 192.168.2.2
iptables -t nat -A POSTROUTING -j MASQUERADE

The first rule allows packets to migrate from one interface to the other (the setting net.ipv4.ip_forward = 1 is necessary but not sufficient); the last rule rewrites all packet headers as if coming from the outgoing interface so that replies are again routed through DEV2; the two rules in between rewrite the packet headers so that packets are sent from DEV1 to DEV3 (rule n.2) and from DEV3 to DEV1 (rule n.3). The advantage of this setup is that it is clean: all protocols and all ports are routed simultaneously, without any need to add unnecessary complications.

CAVEAT: interface1 on DEV1 and interface2 on DEV2 belong to the same subnet, which is strange since you say that the two are separated by a router: by definition, a router joins two distinct subnets. So, either router1 is not a router, or, if it is, there is an error in its configuration since it is surrounded by the same networks on both sides. I have assumed the former, not the latter.
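To make the forwarding sysctl on DEV2 survive a reboot, drop it into a sysctl fragment (the filename is arbitrary):

```
# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1
```

Reload with sysctl --system. The iptables rules need their own persistence mechanism (e.g. the iptables-persistent package or your distribution's equivalent).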
How to configure IP Forwarding
1,617,612,428,000
I want a specific matched User to have its 'ssh/scp' activities logged in an alternative facility. The current default is AUTH, but I'd like to log stuff for user 'dummy' to syslog Facility 'LOCAL2'. I've tried this in /etc/ssh/sshd_config, without success: Match User dummy SyslogFacility LOCAL2 I just get the following error message /etc/ssh/sshd_config line 125: Directive 'SyslogFacility' is not allowed within a Match block Any suggestions here?
The current version of OpenSSH's sshd, which is typically ahead of the OpenSSH version provided in AIX, does not support the SyslogFacility directive in a Match block, just as it says. The sshd documentation says, for the Match directive: Only a subset of keywords may be used on the lines following a Match keyword. Available keywords are AcceptEnv, AllowAgentForwarding, AllowGroups, AllowStreamLocalForwarding, AllowTcpForwarding, AllowUsers, AuthenticationMethods, AuthorizedKeysCommand, AuthorizedKeysCommandUser, AuthorizedKeysFile, AuthorizedPrincipalsCommand, AuthorizedPrincipalsCommandUser, AuthorizedPrincipalsFile, Banner, ChrootDirectory, ClientAliveCountMax, ClientAliveInterval, DenyGroups, DenyUsers, ForceCommand, GatewayPorts, GSSAPIAuthentication, HostbasedAcceptedKeyTypes, HostbasedAuthentication, HostbasedUsesNameFromPacketOnly, IPQoS, KbdInteractiveAuthentication, KerberosAuthentication, LogLevel, MaxAuthTries, MaxSessions, PasswordAuthentication, PermitEmptyPasswords, PermitListen, PermitOpen, PermitRootLogin, PermitTTY, PermitTunnel, PermitUserRC, PubkeyAcceptedKeyTypes, PubkeyAuthentication, RekeyLimit, RevokedKeys, RDomain, SetEnv, StreamLocalBindMask, StreamLocalBindUnlink, TrustedUserCAKeys, X11DisplayOffset, X11Forwarding and X11UseLocalhost. Not among them is SyslogFacility.
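One workaround, since SyslogFacility is only valid at the top level, is to run a second sshd instance just for that user with its own config file (port, path and facility below are illustrative):

```
# /etc/ssh/sshd_config_dummy -- minimal second instance
Port 2222
SyslogFacility LOCAL2
AllowUsers dummy
```

Started with /usr/sbin/sshd -f /etc/ssh/sshd_config_dummy, user dummy then connects to port 2222 and all of that instance's logging goes to LOCAL2.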
OpenSSH Server: Directive 'SyslogFacility' is not allowed within a Match block
1,617,612,428,000
I am sending mail message logs and amp logs from the same server 10.10.10.10 via syslog to my RHEL7 server. I am running rsyslog and have the following config file: mail_logs.conf $template NetworkLog, "/var/log/mail_logs/mail_logs.log" :fromhost-ip, isequal, "10.10.10.10" -?NetworkLog & ~ And my mail_logs.log looks like: Oct 16 10:58:01 server.com mail_mess_logs: Info: Begin Logfile Oct 16 10:58:01 server.com mail_mess_logs: Info: Version: 0.0.0 SN:... Oct 16 10:58:01 server.com mail_mess_logs: Info: Time offset from UTC: -14400 seconds Oct 16 10:58:02 server.com amp_logs: Info: Begin Logfile Oct 16 10:58:02 server.com amp_logs: Info: Version: 0.0.0 SN:... Oct 16 10:58:02 server.com amp_logs: Info: Time offset from UTC: -14400 seconds I would like to break these up by mail_mess_logs and amp_logs so I would have 2 files like: mail_mess_logs.log Oct 16 10:58:01 server.com mail_mess_logs: Info: Begin Logfile Oct 16 10:58:01 server.com mail_mess_logs: Info: Version: 0.0.0 SN:... Oct 16 10:58:01 server.com mail_mess_logs: Info: Time offset from UTC: -14400 seconds amp_logs.log Oct 16 10:58:02 server.com amp_logs: Info: Begin Logfile Oct 16 10:58:02 server.com amp_logs: Info: Version: 0.0.0 SN:... Oct 16 10:58:02 server.com amp_logs: Info: Time offset from UTC: -14400 seconds How can I accomplish this?
You are using one of the older, more basic filter styles supported by rsyslog. A slightly less old style allows you to use expressions including the and operator. The property programname should hold the "mail_mess_logs" string. So you can do if $fromhost-ip=="10.10.10.10" and $programname=="mail_mess_logs" then -/var/log/mail_logs/mail_mess_logs.log if $fromhost-ip=="10.10.10.10" and $programname=="amp_logs" then -/var/log/mail_logs/amp_logs.log Alternatively, there is a more sophisticated style called RainerScript.
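In the newer RainerScript style, the same thing would look roughly like this (a sketch, untested):

```
if $fromhost-ip == "10.10.10.10" then {
    if $programname == "mail_mess_logs" then {
        action(type="omfile" file="/var/log/mail_logs/mail_mess_logs.log")
    }
    if $programname == "amp_logs" then {
        action(type="omfile" file="/var/log/mail_logs/amp_logs.log")
    }
    stop
}
```

The stop at the end keeps the already-handled messages from also matching later rules.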
How to edit my config file to create two log files while ingesting syslog events?
1,617,612,428,000
I am trying to get a samba share setup so that users have both read and write permissions. I thought that I would be able to do this by editing /etc/samba/smb.conf to add my share like so: [CLOUD] path = /cloud writable = yes security = user valid users = neon, win write list = neon, win and then running: $ sudo systemctl restart smb.service $ sudo systemctl restart nmb.service I can access my share from the two accounts, but neither can write. When I run testparam, some of the parameters are missing(security and writable, but there is no explicit error. rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Processing section "[homes]" Processing section "[printers]" Processing section "[CLOUD]" Global parameter security found in service section! Loaded services file OK. Server role: ROLE_STANDALONE Press enter to see a dump of your service definitions # Global parameters [global] dns proxy = No log file = /usr/local/samba/var/log.%m max log size = 50 server role = standalone server server string = Samba Server workgroup = MYGROUP idmap config * : backend = tdb [homes] browseable = No comment = Home Directories read only = No [printers] browseable = No comment = All Printers path = /usr/spool/samba printable = Yes [CLOUD] path = /cloud read only = No valid users = neon win write list = neon win I take this to mean that the writable = yes parameter is not being recognized, but since there are no errors I have no idea why. Any suggestions as to what might be going on here? I am using Arch.
Try this: [CLOUD] writeable = yes path = /cloud valid users = neon,win Restart smbd Check also read/write permissions directly in your Linux system when you have logged in as user neon or win. I often had to sign out/in again in Windows
Samba write access problem: parameter does not show in testparm even though it is in smb.conf
1,617,612,428,000
Is there a LAMP-MySQL (or equivalent) agnostic way to create a DB stack that includes: DB-user and a relative password DB with the same name as of the user Setting a host for the DB (say, localhost) Give the user all privileges The reason I need such a way or approach is to have easier life when working with different LAMP-MySQL (or equivalent) DB programs as part of LAMP. For example, not all LAMP stacks has particularly MySQL or MariaDB and the SQL syntax or DB-CLUI (Bash) extension syntax might be a tiny bit different for each DB SQL variant, hence I seek standardization in a LAMP-RDBMS agnostic fashion.
I am not sure whether anything already exists (i.e. in Galaxy), but Ansible certainly provides all the building blocks to do this. A reason this may well not have been done, or at least released publically, is that there is no one way to configure a database, much less an agreed method that any DB technology should be generally configured. So I would guess you will need to roll this yourself. As for how to go about doing that, an approach I would likely investigate: Write a general DB role, that expects a common series of parameters to be passed to it (i.e. DB technology, default users, default DB's) Write a role for each DB technology type, that can be passed only the common parameters. This role would then run the DB technology specific tasks to provision your instance Add role dependencies to your general DB role, conditional on the DB technology type A pseudo example: playbook.yml --- # Apply the 'db' role to DB hosts - hosts: db_hosts roles: - db roles/db/defaults/main.yml --- # Provide a default tech so one does not have to be passed at run time db_technology: postgresql roles/db/meta/main.yml --- # Include the specific DB tech role, based on the value of 'db_technology' dependencies: - { role: postgresql, db_params: "{{ db_params }}", when: db_technology == 'postgresql' } - { role: mysql, db_params: "{{ db_params }}", when: db_technology == 'mysql' } roles/db/tasks/main.yml --- # Do tasks that are common to all DB types here roles/postgresql/tasks/main.yml --- # Do postgresql specific tasks here roles/mysql/tasks/main.yml --- # Do mysql specific tasks here Finally, this could then be run with: ansible-playbook -e "db_technology=mysql, db_params={'users': {'someuser': 'somepassword'}, 'dbs': ['some_db_name', 'another_db_name']" playbook.yml Please do be aware, this is a very general, very incomplete example to give you an idea of the structure you could use to solve this problem.
LAMP DB-agnostic way to create a DB stack
1,617,612,428,000
I know that some of the advantages of Ansible over many other CMs are these: Ansible's scripts being written in YAML, a simple serialization language. The fact that one doesn't have to install it on the machines you deploy its commands/playbooks. Ansible's strong user base and community (for example, galaxy-roles) I know there is another bold different, using the "push" method" instead of some other CMs using the "pull" method. What is the difference here? Maybe it reflects difference 2?
In Ansible push mode, a centralized server connects to other target servers and runs a series of commands to set the target servers into a desired state. Because the centralized server can potentially serve hundreds or thousands of target systems, this can put quite a bit of load on the centralized system. In pull mode, each system acts as its own server, allowing for greater scalability since no single server is forced to take on a high load of serving many target systems. Ansible Pull Documentaion
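In pull mode each node typically runs ansible-pull on a schedule; a cron entry like this (the repository URL and playbook name are hypothetical) checks out the repo and applies it locally every 30 minutes:

```
# /etc/cron.d/ansible-pull (hypothetical)
*/30 * * * * root ansible-pull -U https://git.example.com/config.git -d /var/lib/ansible-local local.yml
```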
What's the difference between CMs "push" method (Ansible) to "pull" method (Chef/Puppet)?
1,617,612,428,000
I'd like to be able to host users who have different native languages (e.g. English, French, Spanish, German and so on). Is there a way I can configure a system so that man pages are available in each language (for both the base install and any packages added via the OS package manager)? If so can this be configured to keep them in language specific locations (e.g. /usr/share/man/en, /usr/share/man/es and so on)?
Most distributions (and perhaps even all general-purpose distributions) are already set up like this. You’ll see manpages in various languages under language-specific directories in /usr/share/man; for example, /usr/share/man/de, /usr/share/man/fr... These manpages are used automatically, based on the language specified by the LC_MESSAGES or LANG environment variable. Try LANG=fr_FR man man to see an example. The main issue you’ll run into is that few manpages are translated.
Linux - How to setup a multi lingual system?
1,617,612,428,000
Below is the current configuration for Syslog-NG logging, locally, source s_network { udp( flags(syslog_protocol) keep_hostname(yes) keep_timestamp(yes) use_dns(no) use_fqdn(no) ); }; destination d_all_logs { file("/app/syslog-ng/custom/output/all_devices.log"); }; log { source(s_network); destination(d_all_logs); }; To forward certain messages... below is the configuration to be added. filter message_filter_string_1{ match("01CONFIGURATION\/6\/hwCfgChgNotify\(t\)", value("MESSAGE")); } filter message_filter_string_2{ match("01SHELL\/5\/CMDRECORD", value("MESSAGE")); } filter message_filter_string_3{ match("10SHELL", value("MESSAGE")); } filter message_filter_string_4{ match("ACE-1-111008:", value("MESSAGE")); } destination remote_log_server { udp("192.168.0.20" port(25214)); }; log { source(s_network); filter(message_filter_string_1); destination(remote_log_server); }; log { source(s_network); filter(message_filter_string_2); destination(remote_log_server); }; log { source(s_network); filter(message_filter_string_3); destination(remote_log_server); }; log { source(s_network); filter(message_filter_string_4); destination(remote_log_server); }; Actually there are more than 80 such filters Does Syslog-NG config allow writing a syntax with single filter statement having match of regex1 or regex2 or regex3? (or) Does Syslog-NG config allow writing a syntax with single log statement having multiple filter?
If you want to combine multiple match statements, use or: filter send_remote { match("01CONFIGURATION\/6\/hwCfgChgNotify\(t\)", value("MESSAGE")) or match("01SHELL\/5\/CMDRECORD", value("MESSAGE")) or match("10SHELL", value("MESSAGE")) or match("ACE-1-111008:", value("MESSAGE")); } ... and then use that filter name once: log { source(s_network); filter(send_remote); destination(remote_log_server); };
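With ~80 patterns it may be easier to generate the combined filter than to hand-write it. A small POSIX-shell helper (hypothetical, not part of syslog-ng) that reads one pattern per line and emits the filter block:

```shell
# Emit 'filter NAME { match(...) or match(...) ...; };' from stdin patterns.
make_filter() {
    name=$1
    expr=
    nl='
'
    while IFS= read -r pat; do
        [ -n "$pat" ] || continue
        m="match(\"$pat\", value(\"MESSAGE\"))"
        if [ -z "$expr" ]; then
            expr=$m
        else
            expr="$expr$nl    or $m"
        fi
    done
    printf 'filter %s {\n    %s;\n};\n' "$name" "$expr"
}

printf '%s\n' '01SHELL\/5\/CMDRECORD' '10SHELL' | make_filter send_remote
# filter send_remote {
#     match("01SHELL\/5\/CMDRECORD", value("MESSAGE"))
#     or match("10SHELL", value("MESSAGE"));
# };
```

Keep the full pattern list in a plain text file and paste the generated block into your syslog-ng config.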
SyslogNG-How to optimise filter and log statements? [closed]
1,617,612,428,000
What's wrong with my config? I had to --force the logrotate a couple times to see changes, but the numbering is all wonky. ----@----------:/var/log/upstart# ls -Anh total 4.0G -rw-r----- 1 0 0 56K Aug 21 08:41 graylog-server.log -rw-r----- 1 0 0 1.1G Aug 21 08:36 graylog-server.log.1.1.gz -rw-r----- 1 0 0 727M Aug 21 08:35 graylog-server.log.1.gz.1.gz -rw-r----- 1 0 0 0 Aug 20 11:22 graylog-server.log.2.gz -rw-r----- 1 0 0 28K Aug 20 10:40 graylog-server.log.3.gz.1.gz -rw-r----- 1 0 0 1.2G Aug 20 10:29 graylog-server.log.4.gz.1 -rw-r----- 1 0 0 861M Aug 21 08:40 graylog-server.log.4.gz.1.gz -rw-r----- 1 0 0 212M Aug 20 10:25 graylog-server.log.5.gz -rw-r----- 1 0 0 5.3M Aug 20 06:25 graylog-server.log.6.gz Config: ----@----------:/var/log/upstart# vim /etc/logrotate.d/upstart /var/log/upstart/*-server.log.* { size 3G missingok rotate 5 compress notifempty nocreate } Using logrotate --force /etc/logrotate.d/upstart to rotate.
It seems that you are log-rotating already log-rotated logs. In your config you use /var/log/upstart/*-server.log.* to select the files to rotate. This expression matches graylog-server.log.1 but not graylog-server.log. So you are rotating the old rotated log files but not the current log file. Probably you want to use /var/log/upstart/*-server.log instead.
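With the corrected glob, /etc/logrotate.d/upstart would read:

```
/var/log/upstart/*-server.log {
        size 3G
        missingok
        rotate 5
        compress
        notifempty
        nocreate
}
```

The stale *.gz.1.gz leftovers from the earlier runs are no longer managed by logrotate and can simply be deleted by hand.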
Upstart Logrotate?
1,617,612,428,000
I started with the default localhost.cfg found in /usr/local/nagios/etc/objects. I simply added a service definition at the top of the appropriate section. I used the existing "ping" service section as a template ...

###############################################################################
#
# SERVICE DEFINITIONS
#
###############################################################################

# Define a custom service
define service {
    use                    local-service  ; Name of service template to use
    host_name              localhost
    service_description    docker_testconn
    check_command          check_testconn_xxx22
}

define service {
    use                    local-service  ; Name of service template to use
    host_name              localhost
    service_description    PING
    check_command          check_ping!100.0,20%!500.0,60%
}

The check_testconn_xxx22.sh lives in /usr/local/nagios/libexec and (for testing) simply returns a positive message ....

#!/bin/bash
countWarnings=2
if (($countWarnings<=5)); then
    echo "OK - $countWarnings services in Warning state"
    exit 0
elif ((6<=$countWarnings && $countWarnings<=30)); then
    # This case makes no sense because it only adds one warning.
    # It is just to make an example on all possible exits.
    echo "WARNING - $countWarnings services in Warning state"
    exit 1
elif ((30<=$countWarnings)); then
    echo "CRITICAL - $countWarnings services in Warning state"
    exit 2
else
    echo "UNKNOWN - $countWarnings"
    exit 3
fi

...

# ls -la check_testconn_xxx22.sh
-rwxr-xr-x 1 root root 663 Jul 20 12:07 check_testconn_xxx22.sh
# ./check_testconn_xxx22.sh
OK - 2 services in Warning state
# echo $?
0
# service nagios restart
Job for nagios.service failed. See 'systemctl status nagios.service' and 'journalctl -xn' for details.
# journalctl -xn
-- Logs begin at Thu 2018-07-19 16:28:44 CEST, end at Fri 2018-07-20 12:08:21 CEST. --
Jul 20 12:08:21 docker-server-1 nagios[2872]: ***> One or more problems was encountered while running the pre-flight check...
Jul 20 12:08:21 docker-server-1 nagios[2872]: Check your configuration file(s) to ensure that they contain valid
Jul 20 12:08:21 docker-server-1 nagios[2872]: directives and data definitions. If you are upgrading from a previous
Jul 20 12:08:21 docker-server-1 nagios[2872]: version of Nagios, you should be aware that some variables/definitions
Jul 20 12:08:21 docker-server-1 nagios[2872]: may have been removed or modified in this version. Make sure to read
Jul 20 12:08:21 docker-server-1 nagios[2872]: the HTML documentation regarding the config files, as well as the
Jul 20 12:08:21 docker-server-1 nagios[2872]: 'Whats New' section to find out what has changed.
Jul 20 12:08:21 docker-server-1 systemd[1]: nagios.service: control process exited, code=exited status=1
Jul 20 12:08:21 docker-server-1 systemd[1]: Failed to start Nagios Core 4.4.1.
-- Subject: Unit nagios.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit nagios.service has failed.
--
-- The result is failed.

I can't figure out why nagios is unhappy with that section.
Check the commands.cfg config file (by default /usr/local/nagios/etc/objects/commands.cfg) that sets up all the commands referenced in the localhost.cfg file. A check_command may only reference a command defined there; check_testconn_xxx22 is not, so the pre-flight check fails.
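Assuming the plugin lives in the standard libexec directory (so $USER1$ points at it), the missing definition would look like:

```
# in commands.cfg
define command {
        command_name    check_testconn_xxx22
        command_line    $USER1$/check_testconn_xxx22.sh
}
```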
Nagios failing restart with new service directive in localhost.cfg [closed]
1,617,612,428,000
When I install Postfix as a mail server to transfer emails from my Content Management System's contact-forms, into my Gmail account, I choose the option internet-site. After I choose that option Postfix asks me to insert the domain of my site. But what I have 2 or more sites instead just one? What domain should I insert then? I don't assume the creators of Postfix expect users to enter a long list of domains separated by commas. Thanks,
The list of choices is not a standard Postfix feature; it's probably the result of the package management system of your Linux distribution. Perhaps Ubuntu, or some other Debian-related distribution? If I'm correct, you should treat the option dialog as an "easy first installation wizard" only. For a non-trivial configuration, the "training wheels" will have to come off: you'd use the postconf command to further adjust the configuration, or perhaps stop Postfix and then replace the default configuration files with your own customized ones.
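Concretely, the packaging wizard only seeds mydestination; after installation you can list every domain the machine should accept mail for yourself (the domains below are placeholders):

```
# /etc/postfix/main.cf (excerpt)
mydestination = $myhostname, localhost.$mydomain, localhost,
        example.com, example.org
```

Or equivalently postconf -e 'mydestination = ...' followed by postfix reload.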
Why does Postfix “limit” me to use only one domain under “internet-site”?
1,617,612,428,000
Running Yum (version 3.4.3) on a Centos 7.3.1611 system. I've tried to enable colors on yum for a while now, with no luck. Here's what I've tried: Using the --color=always option on a command (e.g. yum list installed --color=always) Adding color=always to /etc/yum.conf In addition to adding that to yum.conf, specifying that as the configuration file when running a command (e.g. yum list installed --config=/etc/yum.conf) Adding all specific color options to /etc/yum.conf (see here for the list: https://gist.github.com/jakebathman/aa1e4f6d5803ce6361dfd78e3d945e0c) Other elements are able to be colored in the prompt, including grep foo --color=always and the prompt itself through ~/.bashrc. I can't figure out what's going wrong. Any additional thoughts or things to try?
It has to be color=on instead of color=always in /etc/yum.conf.
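i.e. the [main] section of /etc/yum.conf should contain:

```
[main]
color=on
```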
Yum color=always option isn't working
1,617,612,428,000
In all instructions I found to setup vhosts on Ubuntu LAMP you need to create a new vhost conf file for each vhost. Can I declare all my vhosts in one single file and what I need to do for it?
You can do one big file, and if you only have a host name or two to worry about it will work fine. Even with LOTS of hosts one file will technically work. BUT ... when you have lots of hosts (more than 2 or 3 for me), having each one in its own file, named after the FQDN w/ a SSL indicator (ie ssl-webapp3.example.com.conf or webapps.example.com.conf or www.example.com.conf) will make your life MUCH easier and be "more maintainable" in the long term, and it makes enabling/disabling individual hosts/sites easier as well.
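A single file with several VirtualHost blocks is perfectly valid; for example (hostnames and paths are placeholders):

```
# /etc/apache2/sites-available/all-vhosts.conf
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/www.example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName blog.example.com
    DocumentRoot /var/www/blog.example.com
</VirtualHost>
```

Enable it once with a2ensite all-vhosts and reload Apache; the trade-off is that you can no longer a2dissite a single host on its own.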
Ubuntu all of Apache's vhosts in one file
1,617,612,428,000
Can I use systemd-nspawn to setup a "chroot" install(in that case debian using debootstrap) before booting it? I.e. unattended install and setup. I need to set the keymap, hostname, maybe timezone and locale. And I'd like to use systemd tools such as hostnamectl, localectl, timedatectl,... Is it possible and the way to do it or should I use config files? Are they even recognised by systemd? For example if I set a hostname in /etc/hostname is it recognised properly? Can I also set the keymap like that?
Is it possible? Yes. I've run Debian in nspawn. It works great with minimal installs. On the other extreme, if you do this with a desktop install, you should expect to find one or two little issues to work around. should I use config files? Are they even recognised by systemd? For example if I set a hostname in /etc/hostname is it recognised properly? /etc/hostname definitely works: it's supported by systemd, and it's exactly what hostnamectl would edit. In general, it's hard to see what you're worried about here. Debian Jessie defaults to systemd, so any documentation written for Debian Jessie about how to configure things - e.g. in the debootstrap appendix of the install guide - is supposed to work for systemd. (Although that appendix is more of a sketch, and doesn't show how you would make it run unattended). From what you've said, you might also be interested in systemd-firstboot. I haven't tried to use it and my understanding is it's limited in some ways, but it could be informative. In any case I think you would need to use systemd-nspawn --boot, having added a service file similar to the one used by systemd-firstboot. If you don't boot the system, tools like hostnamectl won't work. Once your script has finished one way or another, it would also need to shut down the system. One notable issue is that if you have any network services installed (including avahi, cups, ...), you probably want to run nspawn with --network-veth or equivalent, to avoid conflicts with the host network services. To get network access at this point (e.g. to install more packages), set up a DHCP client on the interface host0.
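A rough sketch of the bootstrap-then-configure flow described above (the target path, suite and hostname are assumptions, and this needs root); the point is that /etc/hostname inside the tree is a plain file you can write before the first boot:

```shell
# Build a minimal Debian tree, pre-seed /etc/hostname (the same file
# hostnamectl would edit), then boot it as a container.
debootstrap jessie /var/lib/machines/deb
echo devbox > /var/lib/machines/deb/etc/hostname
systemd-nspawn -D /var/lib/machines/deb --boot --network-veth
```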
System setup using systemd-nspawn
1,617,612,428,000
I'm writing a bash script to install a certain development environment on a computer. In order to do this I need to enable the CONFIG_USB_ACM module in the kernel. I am doing this through: cd ~/l4t-kernel-surgery-kernel/kernel-4.4 zcat /proc/config.gz > .config # PART I NEED IDEAS vim .config # change the line that says CONFIG_USB_ACM=n to CONFIG_USB_ACM=m make clean make prepare make modules_prepare make M=drivers/usb/class ... Would it work for me to append the =m line to the end of the config file? Will that overwrite the previous setting of =n? Is there a better way to edit the file in this way from bash?
zcat /proc/config.gz | sed 's/CONFIG_USB_ACM=n/CONFIG_USB_ACM=m/' > .config
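One caveat worth knowing: in generated kernel configs a disabled option usually appears as "# CONFIG_USB_ACM is not set" rather than "CONFIG_USB_ACM=n", so a sketch covering both spellings (an assumption about what the input contains) would be:

```shell
# Rewrite either form of a disabled CONFIG_USB_ACM to =m while copying
# the running kernel's config into .config.
zcat /proc/config.gz \
  | sed -e 's/^CONFIG_USB_ACM=n$/CONFIG_USB_ACM=m/' \
        -e 's/^# CONFIG_USB_ACM is not set$/CONFIG_USB_ACM=m/' \
  > .config
```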
Kernel Surgery in a Script
1,617,612,428,000
Once an openbox session is running, ps ax | grep openbox shows that it has been started from ~/.config/openbox/lubuntu-rc.xml. Now, openbox --reconfigure will re-read this file and apply any changes made in it. Can it be reconfigured with a different configuration file stored somewhere else? If not, would the option --reset do the job?
Have you looked into openbox's manual? Or try running: openbox --help Note that there is the option --config-file. Although I personally have no need for multiple configurations, I can think of many situations where it would be much appreciated. So the next time you need a different configuration just replace your normal openbox --replace with openbox --config-file <path to some other config file> --replace If you want that other configuration to load with your Lubuntu session I think you should add a delayed autostart command in $HOME/.config/openbox/autostart. Something like: sleep 5 ; openbox --config-file <path to some other config file> --replace & But honestly if you need only one configuration file, just make a back-up of your current one ($HOME/.config/openbox/lubuntu-rc.xml) and edit it to whatever your heart desires.
How to reconfigure openbox from a different configuration file
1,617,612,428,000
After installing Debian Jessie on a System76 Gazelle laptop I was presented with the classic "black screen with blinking cursor". I figured this was some problem with X so I tried startx - it gave me this error: (EE) Fatal server error: (EE) no screens found (EE) I ran Xorg -configure as root and then X -config /root/xorg.conf.new, only to get the same error. The new conf file has the following section for Screen: Section "Screen" Identifier "Screen0" Device "Card0" Monitor "Monitor0" SubSection "Display" Viewport 0 0 Depth 1 EndSubSection SubSection "Display" Viewport 0 0 Depth 4 EndSubSection SubSection "Display" Viewport 0 0 Depth 8 EndSubSection SubSection "Display" Viewport 0 0 Depth 15 EndSubSection SubSection "Display" Viewport 0 0 Depth 16 EndSubSection SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection So it appears that X cannot understand the laptop's screen - it has only the default screen, I am not sure what resolution. I have access to a full backup of this laptop before the fresh install and there is no 10-monitor.conf in /usr/share/X11/xorg.conf.d, so I do not know if it previously needed to be worked around. How can I fix X to properly recognize this laptop monitor?
In my case it was not a question of missing firmware, unsupported hardware, etc. I ran a postinstall script from d-i preseed that grabbed the latest kernel from backports, but for some reason did not install the latest X as it usually does. Running apt install xserver-xorg-video-intel -t jessie-backports fetched the correct version of X for this kernel and it worked perfectly after a reboot.
Xorg does not recognize laptop screen?
1,617,612,428,000
So here's the deal; I was reading the man page for xorg.conf when I decided to edit /etc/X11/xorg.conf to try some of the flags listed in the 'SERVERFLAGS' section. When I opened the file in vim, I found it to be empty (or nonexistent), so I added one line to the file. I believe it was Option "DontZoom" "True" I don't think the syntax was correct because after rebooting X wouldn't load, giving me error messages about not being able to find a screen and Option "DontZoom" "True" not being valid. (I'm sorry I can't give the exact error messages) I tried generating a new xorg.conf with the X -configure command and replacing the old with the new, but that didn't work either. What actually fixed the problem was removing xorg.conf entirely, so I'm wondering why just having a configuration file seems to prevent X from working. I am running Linux Mint 17.3
When you supply an xorg.conf file, this is the complete configuration. You can't just add one option, you have to supply all mandatory parts, including declaring the input and output peripherals. It's common for programs to start with sensible options and allow them to be overridden by a configuration file, but in the case of Xorg, supplying a configuration file erases the default options. Saving the output of X -configure and using this as a configuration file should be equivalent to running X with no configuration file. Since you didn't provide the content of the configuration or the logs, I can't help you as to why this didn't work for you.
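For reference, the bare Option line was itself a syntax error: options must live inside a Section block. A sketch of the syntax that was probably intended (whether a file containing only this section is accepted on its own depends on the server version):

```
Section "ServerFlags"
    Option "DontZoom" "True"
EndSection
```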
Why did xorg.conf break X?
1,617,612,428,000
I wrote a simple module for the Linux Kernel and it has a stack buffer overflow vulnerability. I want to exploit the module, but I have to turn off the stack protector in the kernel first. How could I do this quickly and simply? Is it required to compile the kernel every time? Is there any other way to turn off stack protection in a module of the Linux Kernel (without compiling the kernel)?
Those protections work by passing options to the compiler, so the most straightforward way is to recompile the kernel. However, for a reproducible and module-specific way, kbuild allows you to set custom CFLAGS on a per-module basis. https://www.kernel.org/doc/Documentation/kbuild/makefiles.txt You particularly want to set -fno-stack-protector for the modules you want to exploit. DKMS additionally allows you to set up automatic rebuilds of out-of-tree drivers against arbitrary kernel versions.
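A sketch of what that looks like for an out-of-tree module (the module name vulnmod is made up; CFLAGS_<object>.o is kbuild's per-object flag convention from the makefiles.txt document linked above):

```makefile
# Makefile for the vulnerable test module; built with the usual
#   make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
obj-m := vulnmod.o
CFLAGS_vulnmod.o += -fno-stack-protector
```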
How to turn off stack protector in linux kernel easily? [duplicate]
1,463,327,505,000
A router runs firmware containing BusyBox and in addition to flash memory the device has secondary memory storage. That USB stick is mounted at both /media/Main and /opt: # mount | grep sda /dev/sda1 on /media/Main type ext4 (rw,noatime,data=ordered) /dev/sda1 on /opt type ext4 (rw,noatime,data=ordered) Duplicates in locate database The issue is that updatedb indexes both /media and /opt. I wish to permanently remove these duplicates from /opt/var/locatedb without changing drive mounting. I do wish to use the updatedb command without adding options to that command from both cron and shell. An alias might be an option. Though my first search for "locate database exclude" did return a blog post that suggests using an “/etc/updatedb.conf” for Arch Linux. updatedb.conf First try was to create a file /opt/etc/updatedb.conf containing: # directories to exclude from the locate database PRUNEPATHS="/media /mnt /tmp /var/tmp /var/cache /var/lock /var/run /var/spool" export PRUNEPATHS # filesystems to exclude from the locate database: PRUNEFS="afs auto autofs binfmt_misc cifs coda configfs cramfs debugfs devpts devtmpfs ftpfs iso9660 mqueue ncpfs nfs nfs4 proc ramfs securityfs shfs smbfs sshfs sysfs tmpfs udf usbfs vboxsf" export PRUNEFS That is not enough to let updatedb use the desired configuration. Next was reading the GNU locate documentation. The GNU updatedb documentation states: Typically, operating systems have a shell script that “exports” configurations for variable definitions and uses another shell script that “sources” the configuration file into the environment and then executes updatedb in the environment. Does my embedded Linux export and source configuration variables? This embedded Linux operating system might have the GNU suggested shell scripts that export configuration variables and also source them back into the environment. How can I verify that this OS exports and sources? And when the OS doesn't, how do I correctly export and source configuration variables here? Environment GNU locate was installed via opkg on this external storage medium BusyBox v1.24.1 according to /bin/sh --version locate (GNU findutils) 4.6.0 shell is -sh according to echo $0 /opt/home/admin/.ash_history exists $ cat /opt/etc/profile #!/bin/sh export PATH='/opt/usr/sbin:/opt/sbin:/opt/bin:/usr/local/sbin:/usr/sbin:/usr/bin:/sbin:/bin' export TMP='/opt/tmp' export TEMP='/opt/tmp' # This is for interactive sessions only if [ "$PS1" ] ; then export TERM=xterm [ -d /opt/share/terminfo ] && export TERMINFO='/opt/share/terminfo' export LANG='en_US.UTF-8' export LC_ALL='en_US.UTF-8' fi export TERMINFO=/opt/share/terminfo
Where to export After having read https://bitbucket.org/padavan/rt-n56u/wiki/EN/UsingCron, a good way to export configuration variables for both crontab and shell usage is to insert the /opt related variables into /opt/etc/profile. Where and how to source To use ("source") the variables in cron it is suggested to: create a shell-wrapper script; source /etc/profile in that wrapper script (note: /etc/profile will also source /opt/etc/profile); call that wrapper script by prepending the crontab configuration content with the line: SHELL=/etc/storage/cron/shell-wrapper.sh
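A sketch of such a wrapper (the path comes from the wiki page cited above; the body is an assumption based on cron invoking $SHELL -c 'command'):

```shell
# Create the wrapper that cron will use as its SHELL.
cat > /etc/storage/cron/shell-wrapper.sh <<'EOF'
#!/bin/sh
# Pull in the environment (/etc/profile also sources /opt/etc/profile),
# then behave like a normal shell so "wrapper -c 'command'" works.
[ -r /etc/profile ] && . /etc/profile >/dev/null 2>&1
exec /bin/sh "$@"
EOF
chmod +x /etc/storage/cron/shell-wrapper.sh
```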
How to “export” configuration variables and "source" them on embedded Linux?