There is a lot of stuff that is configured using directories in /etc with a .d suffix, which stands for "directory"; even though Unix doesn't require such a suffix, it is used to avoid name clashes. It's hard to google for. I wanted to disable one of the scripts in /etc/update-motd.d/ and could not do it. How do I disable a script in such a .d directory?
It depends a lot on the directory and distro in question. For example: update-motd.d scripts in Ubuntu have to be executable, as the update-motd manpage says: Executable scripts in /etc/update-motd.d/* are executed by pam_motd(8) Files in profile.d in Ubuntu should have a .sh extension, since /etc/profile contains: if [ -d /etc/profile.d ]; then for i in /etc/profile.d/*.sh; do if [ -r $i ]; then . $i Files in sudoers.d shouldn't have extensions, or end with ~: ... For example, given: #includedir /etc/sudoers.d sudo will read each file in /etc/sudoers.d, skipping file names that end in ‘~’ or contain a ‘.’ And so on. All three points also probably apply to Debian.
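Pulling those cases together: for the original update-motd.d example, the two least invasive ways to disable a script are to clear its execute bit or to rename it so the directory's scan skips it. A minimal sketch, demonstrated on a scratch directory rather than the real /etc/update-motd.d (the script name 50-foo is hypothetical):

```shell
# Work on a scratch copy; on a real system this would be
# /etc/update-motd.d and would need root.
d=$(mktemp -d)
printf '#!/bin/sh\necho hello\n' > "$d/50-foo"
chmod +x "$d/50-foo"

# Option 1: clear the execute bit -- pam_motd only runs executable files.
chmod -x "$d/50-foo"

# Option 2: rename it; a '.' in the name excludes the file from many
# run-parts-style .d directories (and from sudoers.d, as quoted above).
mv "$d/50-foo" "$d/50-foo.disabled"
ls -l "$d"
```

Which trick applies depends, as noted above, on how the particular .d directory is scanned; check the consumer (pam_motd, /etc/profile, sudo, ...) first.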
How to disable script in "dot-d" (with suffix ".d") directory in /etc without deleting it?
For almost 3 weeks now, in my downtime, I have been trying to find out where the files cron.allow & cron.deny are located in the Debian 7 distro. No luck; it seems that by default they are not on the system. 'Just' for hardening purposes, I would like to have those files available on my system. My question is whether I can just touch them and use them without having to make other configurations: root@asw-deb:~# touch /etc/cron.allow root@asw-deb:~# touch /etc/cron.deny Or whether I may have to 'map' those files, maybe by editing some cron configuration files, 'telling' cron where to find the two files I created. Sorry if I sound a little nooby.
From the manual man 1 crontab: If the /etc/cron.allow file exists, then you must be listed (one user per line) therein in order to be allowed to use this command. If the /etc/cron.allow file does not exist but the /etc/cron.deny file does exist, then you must not be listed in the /etc/cron.deny file in order to use this command. If neither of these files exists, then depending on site-dependent configuration parameters, only the super user will be allowed to use this command, or all users will be able to use this command. If both files exist then /etc/cron.allow takes precedence. Which means that /etc/cron.deny is not considered and your user must be listed in /etc/cron.allow in order to be able to use the crontab. Regardless of the existence of any of these files, the root administrative user is always allowed to set up a crontab. On standard Debian systems, all users may use this command. I gave it a try on Debian 7, and it works exactly this way.
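The precedence the manual describes can be sketched as a small shell function, using scratch files in place of the real /etc/cron.allow and /etc/cron.deny (the user names alice and bob are hypothetical):

```shell
dir=$(mktemp -d)
allow="$dir/cron.allow" deny="$dir/cron.deny"

# Mirrors crontab's check: the allow file wins if it exists; otherwise
# the deny file is consulted; otherwise (Debian default) everyone may.
may_use_crontab() {
    if [ -f "$allow" ]; then
        grep -qx "$1" "$allow"
    elif [ -f "$deny" ]; then
        ! grep -qx "$1" "$deny"
    else
        true
    fi
}

printf 'alice\n' > "$allow"
may_use_crontab alice && echo 'alice: allowed'
may_use_crontab bob   || echo 'bob: denied'
```

On a real Debian system you would indeed simply touch /etc/cron.allow and add one user name per line, exactly as the asker guessed; no other "mapping" configuration is needed.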
debian7 cron.allow & cron.deny files
After a long struggle I finally seem to have installed the non-free wireless firmware for my wireless NIC. I'm trying to set up a file server, so I want to configure the network to be static. Would one of you guys mind helping me? For example I don't know what my /etc/network/interfaces file should look like, currently it looks like this: auto lo iface lo inet loopback allow-hotplug wlan1 iface wlan1 inet static address 192.168.10.111 netmask 255.255.255.0 network 192.168.10.0 broadcast 192.168.10.255 gateway 192.168.10.1 # wireless-* options are implemented by the wireless-tools package wireless-mode managed wireless-essid Optimus Pwn wpa-psk s:roonwolf # I changed this from wiresless-key1 or something like that dns-* options are implemented by the resolvconf package, if installed dns-nameservers 192.168.10.1 dns-search localdomain My ifconfig command looks like this: lo Link encap: Local Loopback inet addr: 127.0.0.1 Mask: 255.0.0.0 inet6 addr: ::1/128 Scope: Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets: 95 errors: 0 dropped: 0 overruns: 0 frame: 0 TX packets: 95 errors: 0 dropped: 0 overruns: 0 carrier: 0 collisions: 0 txqueuelen: 0 RX bytes: 10376 (10.1KiB) TX bytes: 10376 (10.1 KiB) wlan1 Link encap: Ethernet HWaddr 00:18:f3:85:99:07 inet addr:192.168.10.111 Bcast:192.168.10.255 Mask:255.255.255.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier: 0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Here's what I get when I iwlist my ssid: wlan1 Scan completed : Cell 01 - Address: 00:14:D1:A4:0A:36 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=70/70 Signal level=-17 dBm Encryption key:on ESSID:"Optimus Pwn" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=00000003ff6381c1 Extra: Last beacon: 100ms ago IE: Unknown: 000B4F7074696D75732050776E IE: 
Unknown: 010882848B960C121824 IE: Unknown: 030106 IE: Unknown: 0706555320010B1B IE: Unknown: 200100 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (1) : TKIP Authentication Suites (1) : PSK IE: Unknown: 2A0100 IE: Unknown: 32043048606C IE: Unknown: DD180050F2020101070003A4000027A4000042435E0062322F00 IE: Unknown: DD1E00904C334C101BFFFF000000000000000000000000000000000000000000 IE: Unknown: 2D1A4C101BFFFF000000000000000000000000000000000000000000 IE: Unknown: DD1A00904C3406001900000000000000000000000000000000000000 IE: Unknown: 3D1606001900000000000000000000000000000000000000 IE: Unknown: DD0900037F01010000FF7F When I ping 192.168.10.101(My primary desktop) I get PING 192.168.10.101 (192.168.10.101) 56(84) bytes of data. From 192.168.10.111 icmp_seq=2 Destination Host Unreachable When I ping google.com I get(after a lengthy pause): ping: unknown host google.com What exactly am I doing wrong here? Should I restart the network?
Have you tried Network Manager? It's easy to set up static IPs for wireless networks using the GUI. Once you get things working there, if you want the connection available all the time even when you're not logged in (e.g. for a file server), just select the "Connect Automatically" and "Available to all users" checkboxes. If you're allergic to GUIs, you can configure the connection by creating a file in the /etc/NetworkManager/system-connections/ directory, as described on this page.
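For the non-GUI route, a NetworkManager keyfile for this network might look roughly like the following. This is a sketch only: the ssid, psk, and addresses are taken from the question, the file name is arbitrary, and key names should be checked against your NetworkManager version; the file must be owned by root with mode 600.

```ini
# /etc/NetworkManager/system-connections/optimus-pwn
[connection]
id=Optimus Pwn
type=wifi
autoconnect=true

[wifi]
ssid=Optimus Pwn
mode=infrastructure

[wifi-security]
key-mgmt=wpa-psk
psk=roonwolf

[ipv4]
method=manual
address1=192.168.10.111/24,192.168.10.1
dns=192.168.10.1;
```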
Configuring Wireless Network
1,318,598,595,000
Recently I bought this disk and I cannot get rid of a funny clicking -- funny because a disk that hasn't been configured normally makes a dry click, while this one makes a click like a sweet chirp. Besides that, the click occurs every several seconds, and it is annoying. I tried the usual: hdparm -B 254 but it changed nothing. So does anyone know how to disable that clicking?
This is a known issue and the fix is here: http://forums.seagate.com/t5/Barracuda-XT-Barracuda-Barracuda/ANNOUNCEMENT-New-firmware-update-for-Barracuda-1TB-platter/td-p/162362
How to disable clicking in Seagate ST3000DM001?
I set up a reverse SSH tunnel to access a node, node1, behind a NAT. I have set up an EC2 instance, myEC2, to act as the intermediary. From my laptop, when I want to access node1, I have to SSH into the EC2 in order to then SSH into the node. The workflow is like this: In node1, make sure to run: ssh -i key.pem -R 3000:localhost:22 ubuntu@myEC2. This is always running in a service. From my laptop, SSH into the EC2: ssh ubuntu@myEC2 Once inside the EC2: ssh xavier@localhost -p 3000 I'm in node1! What I'm looking for is a way of expressing that workflow in a SSH config that I can use to login directly into node1 from my laptop. This will help me access node1 via Visual Studio Code's Remote SSH extension. I tried something like this: Host node1 Hostname myEC2 User ubuntu Port 3000 IdentityFile key.pem But that does not work, I assume it is because Port should be 22 rather than 3000. I just really don't know how to express the workflow. I have looked into ProxyJump but I'm not sure if that is what I'm looking for and to be honest I haven't had success with that either. Any suggestions are welcomed! =D Edit #1: After following Stéphane's suggestions I ended up with an ssh_config file that looks like this: Host myEC2 Hostname <myEC2_IP> User ubuntu Port 22 IdentityFile ec2_key.pem Host node1 Hostname localhost User xavier Port 3000 IdentityFile /path/to/node1-id_rsa ProxyJump ubuntu@myEC2 While I can SSH into myEC2 with no issues, I can't go into node1. My understanding is that this is supposed to be equivalent to ssh -p 3000 -J ubuntu@myEC2 xavier@localhost. Any help is greatly appreciated! This is what I get by adding the -v flag to SSH. xaviermerino@Xaviers-MBP .ssh % ssh doc debug1: Executing proxy command: exec ssh -l ubuntu -W '[localhost]:3000' myEC2 debug1: identity file node1-id_rsa type -1 debug1: identity file node1-id_rsa-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_8.1 debug1: Connecting to myEC2 [myEC2_IP_ADDRESS] port 22. 
debug1: Connection established. debug1: identity file ec2_key.pem type -1 debug1: identity file ec2_key.pem-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_8.1 debug1: Remote protocol version 2.0, remote software version OpenSSH_8.2p1 Ubuntu-4ubuntu0.2 debug1: match: OpenSSH_8.2p1 Ubuntu-4ubuntu0.2 pat OpenSSH* compat 0x04000000 debug1: Authenticating to myEC2_IP_ADDRESS:22 as 'ubuntu' debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: algorithm: curve25519-sha256 debug1: kex: host key algorithm: ecdsa-sha2-nistp256 debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ecdsa-sha2-nistp256 SHA256:/U4HE+zUBFNZJgxDM6lWDW7FX8GSHXWYc/fMEyOvMlw debug1: Host 'myEC2_IP_ADDRESS' is known and matches the ECDSA host key. debug1: Found key in /Users/xaviermerino/.ssh/known_hosts:226 debug1: rekey out after 134217728 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: rekey in after 134217728 blocks debug1: Will attempt key: ec2_key.pem explicit debug1: SSH2_MSG_EXT_INFO received debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,[email protected],ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected]> debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: ec2_key.pem debug1: Authentication succeeded (publickey). Authenticated to myEC2 ([IP_Address_Goes_Here]:22). debug1: channel_connect_stdio_fwd localhost:3000 debug1: channel 0: new [stdio-forward] debug1: getpeername failed: Bad file descriptor debug1: Requesting [email protected] debug1: Entering interactive session. 
debug1: pledge: network debug1: client_input_global_request: rtype [email protected] want_reply 0 debug1: Remote: /home/ubuntu/.ssh/authorized_keys:1: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding channel 0: open failed: connect failed: Connection refused stdio forwarding failed kex_exchange_identification: Connection closed by remote host I'm not sure what this means. Does it have to do with the settings in sshd_config on the EC2? This is what I have in there: #AllowAgentForwarding yes #AllowTcpForwarding yes GatewayPorts yes X11Forwarding yes #X11DisplayOffset 10 #X11UseLocalhost yes #PermitTTY yes PrintMotd no #PrintLastLog yes #TCPKeepAlive yes #PermitUserEnvironment no #Compression delayed #ClientAliveInterval 0 #ClientAliveCountMax 3 #UseDNS no #PidFile /var/run/sshd.pid #MaxStartups 10:30:100 #PermitTunnel no #ChrootDirectory none #VersionAddendum none Edit #2: Someone had turned off the computers. It now works! To summarize for whoever is looking into this, this is what I needed: Host myEC2 Hostname <myEC2_IP> User ubuntu Port 22 IdentityFile ec2_key.pem Host node1 Hostname localhost User xavier Port 3000 IdentityFile /path/to/node1-id_rsa ProxyJump ubuntu@myEC2 And that was it! Thanks @StephaneChazelas
You're actually using myEC2 as a jump host. You could ssh to node1 from your laptop with: ssh -p 3000 -J ubuntu@myEC2 xavier@localhost The corresponding ssh_config entries would look like: Host node1 Hostname localhost User xavier Port 3000 IdentityFile key.pem ProxyJump ubuntu@myEC2 Note that the IdentityFile there is the one used for authenticating to node1. To specify one for myEC2, you'd use another Host entry for myEC2.
SSH config for connecting to host via reverse SSH tunnel
I have an Ubuntu 19.10-based distro with LightDM installed. I changed the username recently, but lightdm keeps displaying the old username. Is there a way I can fix this? I have tried fiddling with /etc/lightdm/lightdm.conf with no success. Attached are pictures demonstrating what I am talking about. Is this a lightdm issue? Or some other configuration that hasn't been modified? Is there a way to fix this? Thanks!
It looks like the name batcastle is the username, while live is the so-called finger name (the full-name field). Both are stored in the file /etc/passwd. The username (that is, the login name) is somewhat more complicated to change, because most likely you want the home directory to be called /home/$(whoami). To change the username, use usermod: usermod -l newusername -d /home/newusername -m oldusername You may then want to change the ownership of files and directories accordingly. This should not be necessary, because on the filesystem the owner is stored only as the numerical user ID, which does not change here; but if needed: chown -R newusername /home/newusername To change the finger name, use chfn: chfn -f 'John Doe' username For more details, please refer to the man pages of these commands.
Lightdm displaying wrong username
I am bringing in log files via rsyslog and my config looks like the following: root@rhel:/etc/rsyslog.d# head mail_prod_logs.conf if $fromhost-ip=="10.10.10.10" and $programname=="AMP_Logs" then -/var/log/mail_logs/amp.log My logs are all stored in the file /var/log/mail_logs/amp.log: Oct 18 13:29:28 server.com AMP_Logs: Info: Begin Logfile Oct 18 14:29:28 server.com AMP_Logs: Info: Version: 12.1.0-000 SN: ..... Oct 18 14:29:28 server.com AMP_Logs: Info: Time offset from UTC: -14400 seconds Oct 18 15:29:23 server.com AMP_Logs: Info: Response received for..... Oct 18 15:29:23 server.com AMP_Logs: Info: File reputation query..... Oct 19 13:29:23 server.com AMP_Logs: Info: Response received for fil.... Oct 19 13:29:58 server.com AMP_Logs: Info: File reputation query .... Oct 19 13:29:58 server.com AMP_Logs: Info: File reputation query .... I would like to use the datetime portion of the log to put these in hourly folders inside daily folders inside monthly folders while the data is coming in, by editing mail_prod_logs.conf. So it would look like: /var/log/mail_logs/Sep/30/23.log /var/log/mail_logs/Oct/01/00.log /var/log/mail_logs/Oct/01/01.log /var/log/mail_logs/Oct/01/02.log ... How can I do this?
You can do this with a dynamic file template. Use a property replacer to select parts of the %timestamp% property, in particular the options date-day and date-hour and characters 1 to 3 of date-rfc3164 (which is a string like "Oct 9 09:47:08"). Typically, in examples, the template is called DynFile: $template DynFile,"/var/log/mail_logs/%timestamp:1:3:date-rfc3164%/%timestamp:::date-day%/%timestamp:::date-hour%.log" To use the template, replace the ...then -/var/log/mail_logs/amp.log by ...then -?DynFile Should you consider replacing the 3-letter month (Jan, Feb, ...) by the 2-digit month for easier handling, use instead $template DynFile,"/var/log/mail_logs/%timestamp:::date-month%/%timestamp:::date-day%/%timestamp:::date-hour%.log"
How to split logs into monthly, daily and hourly folders when bringing in syslog events?
I'm planning on setting up a new personal OpenBSD-current system by installing the latest OpenBSD snapshot, and I would like it to look more or less similar to my existing OpenBSD-current system in terms of local changes made to the files under /etc, without actually copying the /etc directory over to the new system (it also contains configuration that the new system should not use, and other things that should be different). What is the easiest way to compare my old setup with that of a pristine system so that I may manually go through and set up the new machine? I would like to compile a list of individual base system services etc. that I need to configure after installing the new OpenBSD system, just as a way of not accidentally leaving things out, like setting up a mail alias for root.
After upgrading my (old) system to the latest snapshot, a copy of the pristine /etc hierarchy (and also of other variable files under /root and /var) is available in /var/sysmerge/etc.tgz. There is also a separate archive called /var/sysmerge/xetc.tgz holding the base system's default X11 configuration. Note that etc.tgz does not contain files not intended to be modified, like /etc/services or /etc/daily, and that it does not contain configuration files for things that are installed without a default configuration file (but that may have one for local tweaking of the defaults), such as /etc/doas.conf. To see what files I have modified on my old system, I may extract these archives to some directory and simply do a diff. The extraction has to be done as root as the /var/sysmerge directory is protected. mkdir /tmp/tmpdir doas tar -xz -f /var/sysmerge/etc.tgz -C /tmp/tmpdir doas tar -xz -f /var/sysmerge/xetc.tgz -C /tmp/tmpdir Looking at the diff is made easier by first deleting old unused configuration files belonging to packages no longer installed. These may be found by using the sysclean tool from ports. This tool will also identify old and unused versions of libraries, and it may be a good idea to run it after any system upgrade or after upgrading ports. Producing a diff for the /etc files: doas diff -ru /tmp/tmpdir/etc /etc This produces a recursive unified diff of the files I'm likely to be interested in. On my private system, this is enough to give me a hint of what I need to set up when I later get to configuring my new machine. For example, I had forgotten that I had disabled root logins and password authentication in my /etc/ssh/sshd_config file, and that I at some point had set up an additional user for testing (!). When done, I delete the unpacked archives: doas rm -rf /tmp/tmpdir
OpenBSD: Check what files under /etc have changed in comparison to a pristine base system
I want to assign multiple IP4 addresses to a USB->Ethernet adapter in an Ubuntu 18.04 LTS system. I have removed netplan, since I find the yaml-based configuration even more obscure than the traditional way of configuring the network. Since I want the extra addresses to be permanent, I put them into /etc/network/interfaces, as described here as "Legacy method". Adding extra IP4 addresses to a "fixed" ethernet interface works, but the same doesn't work with the USB-to-Ethernet dongle. I'm puzzled as to what the difference is. EDIT: I was asked to share my interfaces file. Here it is: auto lo iface lo inet loopback auto eno1 iface eno1 inet static address 192.168.2.6 netmask 255.255.255.0 broadcast 192.168.2.255 offload-gro off offload-gso off offload-tso off auto enx000ec6fe56fb iface enx000ec6fe56fb inet static address 192.168.31.6 netmask 255.255.255.0 broadcast 192.168.31.255 gateway 192.168.31.1 offload-gro off offload-gso off offload-tso off auto enx000ec6fe56fb:0 iface enx000ec6fe56fb:0 inet static address 192.168.31.4 netmask 255.255.255.0 auto eno1:0 iface eno1:0 inet static address 192.168.2.4 netmask 255.255.255.0 As you can see, I introduce a virtual IP interface for each of the real interfaces. eno1 is a plain Ethernet interface on the mainboard, while enx000ec6fe56fb is a USB-to-Ethernet dongle. The virtual interface for eno1 works, the other doesn't.
Because ifupdown has been deprecated since the Ubuntu 17.10 release (the /etc/network/interfaces file is used by ifupdown), you should reinstall netplan on your system and remove the ifupdown package. Here is how to configure multiple IP addresses for a network interface, using the following example from the official website: Multiple addresses on an interface. sudo nano /etc/netplan/your-config-file.yaml : network: version: 2 renderer: NetworkManager ethernets: enp3s0: addresses: - 10.100.1.38/24 - 10.100.1.39/24 gateway4: 10.100.1.1 Test and apply the new configuration: sudo netplan generate sudo netplan try sudo netplan apply See: MigratingToNetplan Deprecate ifupdown in Ubuntu for the 17.10 release.
Can't permanently assign additional IP addresses to USB ethernet adapter via /etc/network/interfaces. Why?
Problem I'm transitioning the configuration of my multihead monitors from using some rather ugly scripts to /etc/X11/xorg.conf.d/10-monitor.conf. My layout has two monitors of 1920x1200, one rotated left. The scripts were able to configure this just fine using the following command: xrandr \ --output "DP-1" \ --mode 1920x1200 \ --pos 1200x360 \ --rotate normal \ --primary \ --output "DP-2" \ --mode 1920x1200 \ --pos 0x0 \ --rotate left I've tried to translate this to configuration: Section "Monitor" Identifier "DP-1" Option "Primary" "true" Option "Position" "1200 360" EndSection Section "Monitor" Identifier "DP-2" Option "Rotate" "left" EndSection This unfortunately has the side effect of setting the resolution of the rotated screen to 1600×1200, even though the preferred mode is still 1920×1200: $ xrandr […] DP-2 connected 1200x1600+0+0 left (normal left inverted right x axis y axis) 518mm x 324mm 1920x1200 59.95 + 1920x1080 60.00 1600x1200 60.00* […] How can I write configuration which will use the rotated monitor's preferred resolution of 1920x1200? Non-solutions Explicitly setting the screen size to fit both monitors: Section "Screen" Driver "radeon" SubSection "Display" Virtual 3120 1920 EndSubSection EndSection Explicitly setting the preferred mode for DP-2 (Option "PreferredMode" "1920x1200") caused the other screen to be reduced to 1600×1200, so that's probably a clue. Workaround Force the resolution by using xrandr --output DP-2 --mode 1920x1200.
What worked in the end was to explicitly set the virtual screen size and the preferred mode for both of the screens: Section "Monitor" Identifier "DP-1" Option "Primary" "true" Option "Position" "1200 360" Option "PreferredMode" "1920x1200" EndSection Section "Monitor" Identifier "DP-2" Option "Rotate" "left" Option "PreferredMode" "1920x1200" EndSection Section "Screen" Driver "radeon" SubSection "Display" Virtual 3120 1920 EndSubSection EndSection
X11 ignores preferred mode
I've installed a minimal installation of CentOS 7, meaning no GUI, on my Dell XPS 15 9560 laptop.  uname -r returns 3.10.0-862.11.6.el7.x86_64.  The laptop does not have an Ethernet card, but it does have a Wi-Fi card.  During the installation I configured a Wi-Fi connection and I could confirm that I received an IP address. When booting into the OS, however, I don't have an active connection. I've tried to find out how to activate the Wi-Fi and establish a connection with the tools already installed (as I can't install any new ones), but to no avail. I'm not sure exactly what is of interest but this is what I know: ip addr shows that the interface (is that the correct term?) wls2s0 is DOWN. running nmtui (after systemctl enable NetworkManager and service NetworkManager start) shows the connection I created and it seems correct. After all I successfully connected during the installation. The "activate a connection" menu is empty, though. nmcli d shows a row like so: wlp2s0 wifi unmanaged -- nmcli connection show lists my connection but the "device"-field is empty (--). nmcli connection up <connection name> gives me the following error: Error: Connection activation failed: No suitable device found for this connection. I suspect my Wi-Fi card is not active, but I'm not sure how to activate it. I've tried the Fn+PrtScr combination, which usually activates it, but no luck. Running lshw gave me some additional info. The Wi-Fi card is listed under pci devices as: *-network DISABLED description: Wireless interface product: QCA6174 802.11ac Wireless Network Adapter vendor: Qualcomm Atheros ... logical name: wlp2s0 ... configuration: broadcast=yes driver=ath10k_pci driverversion=3.10.0-862.11.6.el7.x86_64 firmware=WLAN.RM.4.4.1-00051-QCARMSWP-1 latency=0 link=no multicast=yes wireless=IEEE 802.11 resources: irq:140 memory:ed200000-ed3fffff so the driver seems to be ath10k_pci.  
Running lsmod | grep "ath10k" gives me the following: ath10k_pci 47418 0 ath10k_core 325711 1 ath10k_pci ath 29446 1 ath10k_core mac80211 714741 1 ath10k_core cfg80211 623433 3 ath,mac80211,ath10k_core I'm not sure if the above means that the ath10k_pci driver is being loaded, though. Neither lsusb nor lspci is present on the system. Any suggestions, where do I go from here?
See https://bugs.launchpad.net/ubuntu/+source/linux-firmware/+bug/1520343 for instructions on the Ubuntu approach to fixing the issue. Here are what I think are the relevant extracts, but note that I have not tested this as I don't have your hardware: If you have kernel 4.5.0 sudo mkdir -p /lib/firmware/ath10k/QCA6174/hw3.0/ sudo rm /lib/firmware/ath10k/QCA6174/hw3.0/* 2> /dev/null sudo wget -O /lib/firmware/ath10k/QCA6174/hw3.0/board.bin https://github.com/kvalo/ath10k-firmware/blob/master/QCA6174/hw3.0/board.bin?raw=true sudo wget -O /lib/firmware/ath10k/QCA6174/hw3.0/board-2.bin https://github.com/kvalo/ath10k-firmware/blob/master/QCA6174/hw3.0/board-2.bin?raw=true sudo wget -O /lib/firmware/ath10k/QCA6174/hw3.0/firmware-4.bin https://github.com/kvalo/ath10k-firmware/blob/master/QCA6174/hw3.0/firmware-4.bin_WLAN.RM.2.0-00180-QCARMSWPZ-1?raw=true Reboot or reload the ath10k_pci module and you should be able to connect. Otherwise sudo mkdir -p /lib/firmware/ath10k/QCA6174/hw3.0/ sudo rm /lib/firmware/ath10k/QCA6174/hw3.0/* 2> /dev/null sudo wget -O /lib/firmware/ath10k/QCA6174/hw3.0/board.bin https://github.com/FireWalkerX/ath10k-firmware/blob/7e56cbb94182a2fdab110cf5bfeded8fd1d44d30/QCA6174/hw3.0/board-2.bin?raw=true sudo wget -O /lib/firmware/ath10k/QCA6174/hw3.0/firmware-4.bin https://github.com/FireWalkerX/ath10k-firmware/blob/7e56cbb94182a2fdab110cf5bfeded8fd1d44d30/QCA6174/hw3.0/firmware-4.bin_WLAN.RM.2.0-00180-QCARMSWPZ-1?raw=true sudo chmod +x /lib/firmware/ath10k/QCA6174/hw3.0/* Reboot or reload the ath10k_pci module and you should be able to connect. Caveats A number of comments on the original link say that these fixes do not work straight off, and tweaks are supplied. I would strongly recommend you work your way through the entire thread. Read it twice - once to see what's going on, and the once (at least) to work out what needs applying in your situation. This isn't going to be easy.
How to configure/connect to Wi-Fi with minimal CentOS installation? [closed]
In /etc/sysconfig/* scripts one can have ordinary name=value assignments, of course. But these files are interpreted by . from a shell, aren't they? Are there any restrictions upon the shell language that can legitimately (i.e. in accordance with whatever rules there are for the operating system) and portably (i.e. across multiple operating systems) be used in them? Or can one use arbitrary (POSIX) shell language? I am thinking of things like: complex assignments: abc=/var/${logdir} sourcing common "library" or "helper" scripts This RedHat documentation and this OpenMandriva documentation go into great depth on the various individual options that can be set, but are mute on the actual fundamental format of these files. (The equivalent to /etc/sysconfig on Debian is /etc/default, which does have rules about the format. Debian's Policy Manual requires that files in /etc/default "must contain only variable settings and comments in POSIX.1-2017 sh format". This question is about RedHat's /etc/sysconfig though.)
In short: it depends. At greater length: Red Hat and Mageia/Mandriva use a symbolic link for /bin/sh that points to bash, so when those files are sourced as scripts, you'll get bash acting like sh, or bash itself, depending on whether the sourcing script uses bash or "sh". There are cases of both on both systems. Not all of the files in /etc/sysconfig are scripts; some (such as /etc/sysconfig/partmon on my Mageia 6 machine, or systat.ioconf on my CentOS 7) are just data. Something reads those, but not by sourcing them as scripts.
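To see why arbitrary shell syntax usually works when the consumer is a script, here is a sketch of the typical pattern an init script uses, with a scratch file standing in for a real /etc/sysconfig entry (the variable names logdir and abc are made up):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
# Plain name=value works everywhere...
logdir=log
# ...and so does any other shell syntax, as long as every consumer
# that sources this file is the shell you wrote it for.
abc=/var/${logdir}
EOF

# The consumer side: source the file if it exists, as init scripts do.
[ -f "$conf" ] && . "$conf"
echo "abc=$abc"
rm -f "$conf"
```

But, as noted above, not every file in /etc/sysconfig is sourced by a shell; for files read as plain data (e.g. systat.ioconf), stick to simple name=value lines.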
Is general (POSIX) shell language allowed in /etc/sysconfig/* scripts? Or are there restrictions?
Since browsers on Xubuntu started using GTK3 instead of GTK2, when saving a file the dialog for entering a new folder name is white on white and hence unreadable. I gather this might be changed in ~/.config/gtk-3.0/gtk.css, but I am unsure under what element ID name that would be. Furthermore, is there a system-wide gtk.css file that solves this issue for all users on this system?
I found the culprit. gtk-theme-config serves both GTK2 and GTK3 applications. Resetting Custom menu colours to its defaults resolves the issue. I opened a bug report against gtk-theme-config. Xubuntu LTS 16.04 comes with version 1.2.1-0ubuntu1. The bug persists in the latest version 1.2.2-1.
GTK3 Folder Name dialog is white on white
How can I configure ~/.emacs so that I indent how nano does by default? That is: it uses a tab character instead of 5 spaces, and I can add as many tabs to a line as I please.
I added the following to ~/.emacs: (setq-default indent-tabs-mode t) (setq backward-delete-char-untabify-method nil) (setq indent-tabs-mode t) (defun my-insert-tab-char () "Insert a tab char. (ASCII 9, \t)" (interactive) (insert "\t")) (global-set-key (kbd "TAB") 'my-insert-tab-char) ; same as Ctrl+i
How to make 'emacs' indent with tabs exactly how 'nano' does...?
I have a request to increase the POST limit of a particular vhost in Apache to 20MB. I wish neither to increase the limit for all the vhosts on the server, nor to disable ModSecurity for that particular vhost. Is it possible to raise the POST limit of only one vhost in ModSecurity?
Indeed you can increase the POST limit of a particular vhost. However, that is configured directly in the vhost definition and not in the global ModSecurity configuration. For that, add this line to your vhost: SecRequestBodyLimit 20971520 where 20971520 is 20MB, as this directive expects its argument in bytes. From the ModSecurity Handbook: SecRequestBodyLimit Sets the maximum request body size ModSecurity will accept The SecRuleEngine directive is context-sensitive (i.e., it works with Apache's container tags <VirtualHost>, <Location>, and so on), which means that you are able to control exactly where ModSecurity runs. [Rui: and define parameters]
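In context, a minimal vhost fragment might look like this (the ServerName and port are placeholders; this assumes ModSecurity is already enabled server-wide):

```apache
<VirtualHost *:80>
    ServerName uploads.example.com
    # Raise the ModSecurity request body limit for this vhost only:
    # 20 * 1024 * 1024 = 20971520 bytes.
    SecRequestBodyLimit 20971520
</VirtualHost>
```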
ModSecurity+Apache: Bigger POST limit for a vhost?
Is there a desktop environment in which I can modify all settings completely through config file(s)? And then load them on a new system? I'm currently using Manjaro with KDE, and I really like my setup. I use a dark theme and I've defined a number of custom keybindings. I use several different computers, and I'm also constantly installing new OS's on various machines. Getting my desktop setup to my liking via the point-and-click settings/preferences repeatedly is borderline unbearable to me, but I like consistency. What I would like to do is version control the desktop environment config file(s) with git and then simply push/pull changes. I do this with my .rc files and I like it. If I break something I can simply revert back to a previous version. I realize that this might not be possible since there are multiple moving parts involved (e.g. a window manager, display manager, others? etc.), and it might not be possible with every desktop environment. If possible, I'd like this to work with KDE, but beggars can't be choosers.
What I would like to do is version control the desktop environment config file(s) with git and then simply push/pull changes. All Linux desktop environments (DEs) that I know of store their configurations in files, so choice of DE is not critical for this. On balance though, this ain't such a great idea, because a lot of default settings are going to end up in there. Thus the files will get polluted and difficult to edit/merge by hand, largely defeating the purpose of using version control. You might as well just tar up the configuration directory. KDE, for instance, stores settings in ~/.config, and configuration is managed through many different files like plasmarc, plasmashellrc, kglobalshortcutsrc (just to name a few). As you install programs, some configurations, such as those of chromium and syncthing, will be stored in ~/.config as well, and these settings will update automatically though you will likely not want to save these. You could use a .gitignore but this will become onerously tedious as you will constantly have to add new files/directories to this, and it will be difficult to tell what has been modified by you vs. automatically. The usual approach to this kind of thing is to find out what command-line tool is used to set settings in the DE (most of them should support this). Then you write a script that invokes this command to set all the settings, and version control that script. This is what I do (with the xfce4 desktop environment, though a lot of others support command-line configuration as well; I don't know about KDE specifically, but GNOME does for sure).
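As a sketch of that script-based approach (assuming xfce4; the channel and property values below are examples from a typical xfce4 setup, and the KDE kwriteconfig5 line is an untested equivalent):

```shell
#!/bin/sh
# desktop-setup.sh -- version-controlled desktop settings.
# xfconf-query is xfce4's command-line configuration tool.
xfconf-query -c xsettings -p /Net/ThemeName -s "Adwaita-dark"
xfconf-query -c xfwm4     -p /general/theme -s "Default"

# KDE ships a similar tool; something along these lines:
# kwriteconfig5 --file kdeglobals --group KDE --key SingleClick false
```

You commit this one script to git and run it on each new machine, instead of tracking the sprawling config directories themselves.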
A desktop environment completely editable through config files?
1,633,501,064,000
I want to change login failure message in Debian 8 x64 login: root password: login incorrect - /this/ login: Can you tell me how to do that? P.S.: and for more - how to set number of login attempts? Something like: 3 failed attempts and then 15min cooldown.
Here: https://github.com/shadow-maint/shadow/blob/master/src/login.c it is hardcoded at line #834: (void) puts (""); (void) puts (_("Login incorrect")); So you have to modify the source then compile login.c for your system. ps: one question per post
Change login failure message
1,633,501,064,000
It is my understanding that on a desktop/laptop for normal, personal use (not servers or for other specialized tasks), one of the primary benefits of having /home on its own partition is keeping the user's files and quite a lot of application configuration files between re-installs (as novice Linux users are wont to do when trying different distributions or simply screwing up badly). Apart from files the user deliberately puts in /home, I have unfortunately not been able to dig up a lot of information on what exactly is stored (or not stored) there. It seems to me that some kinds of configuration files are kept in /etc, but I don't know if these are desirable to keep when reinstalling. I have two questions. I realize no definite answer can be given, but there might exist some de facto design choices in the Linux development world which could yield somewhat concrete answers not unfit for this site. Is it reasonable to assume that applications store configuration files (of the kind desirable to keep between re-installs) only in /home? Is it reasonable to assume that in most cases, applications (re-)installed after a system reinstall will recognize existing configuration files and continue functioning more or less as before? If the answer to any of the above questions are "no", how can headache be minimized when reinstalling and wishing applications to function as before?
As a rule of thumb, applications you run as a non-root user will put their configuration under /home. System-wide configuration resides under /etc (and to a lesser extent under /var/lib and other locations), but applications not running as root don't have write access to these locations. As for your 2nd question, it depends. If your new system contains the same version of the applications concerned as the old one, the configuration will almost always be recognized (some details may be wrong if the new system is otherwise very different from the old one; for a trivial example, if the system-wide wallpaper you used in the old system isn't available in the new one). Many applications (especially console applications like mutt, alpine, irssi etc.) will happily work with configuration files written by/for older versions of the same application, and in most cases even when you use an older version of the program than what the configuration is for. GUI applications tend to be more finicky (it's anyone's guess whether an older version of Chromium will work with the profile directory of a newer version). In many cases, even the location of the configuration files changes between versions. And there are cases where newer versions of the "same" thing deliberately ignore the configuration of the older version; for example, KDE5 ignores KDE4 settings.
Are configuration files which are desirable to keep when reinstalling normally kept in /home?
1,633,501,064,000
So far this is what I've done: $ less /etc/nginx/hhvm.conf location ~ \.(hh|php)$ { fastcgi_pass unix:/var/run/hhvm/sock; include fastcgi_params; } $ less /etc/hhvm/server.ini ; php options pid = /var/run/hhvm/pid ; hhvm specific hhvm.server.file_socket = /var/run/hhvm/sock hhvm.server.type = fastcgi hhvm.server.default_document = index.php hhvm.log.use_log_file = true hhvm.log.file = /var/log/hhvm/error.log hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc It worked perfectly well with the proper TCP port configuration, but replacing it with the UNIX socket configuration results in the same nginx error as a port misconfiguration.
You should check the file permissions. nginx must be able to write to the php5-fpm or hhvm Unix socket. You probably can find a line like this one inside the nginx error log /var/log/nginx/error.log, confirming that this is the problem: 2015/10/28 16:32:24 [crit] 14845#0: *1 connect() to unix:/var/run/php5-fpm.sock failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "HEAD /test.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "localhost" Solution: Add the nginx user to the group of the user owning the socket (usually www-data). The socket file should be writable by the group, so you would be good to go with the following command: # usermod -a -G www-data nginx
How do I configure UNIX socket in nginx/HHVM?
1,633,501,064,000
The following /etc/network/interfaces file brings up the dummy0 interface automatically at startup (or using the ifup command), but without multicast. What is the proper way to enable multicast in this file? # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback auto dummy0 iface dummy0 inet static address 10.10.0.1 netmask 255.255.255.0 multicast 1 source-directory interfaces.d
Try: auto dummy0 iface dummy0 inet static address 10.10.0.1 netmask 255.255.255.0 post-up ifconfig dummy0 multicast
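If you prefer iproute2 over the legacy ifconfig, the same flag can be toggled with ip link. This is an alternative sketch, not what the answer above uses:

```
auto dummy0
iface dummy0 inet static
    address 10.10.0.1
    netmask 255.255.255.0
    post-up ip link set dev dummy0 multicast on
    pre-down ip link set dev dummy0 multicast off
```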
How to set multicast in /etc/network/interfaces?
1,633,501,064,000
I was trying to configure LightDM. It appears that lightdm.conf is sensitive to trailing spaces. I found that I get different behavior with greeter-hide-users=true and greeter-hide-users=true where the second has a trailing space. Without the space, the greeter hides the list of users as I expected. With the space, the greeter displays the list of users as if the greeter-hide-users parameter is not set to true. I am thinking of reporting this as a bug, but I want to make sure this type of sensitivity to trailing spaces is not typical in configuration files.
It depends on the configuration file. /etc/passwd, for example, can be whitespace sensitive: if you've set a user's shell to "/bin/tcsh " with a trailing space, that user cannot log in, because no program at the path "/bin/tcsh " (space included) exists. This can also be difficult to debug; logging should ideally quote or bracket things so the logs have '/bin/tcsh ' or [username ] in them, and looking at the data with a hex viewer (hexdump, xxd) can be handy. Eliminating trailing whitespace by default should be sensible and safe. (Um, except for the trailing newline, which it is not sensible to remove from the end of a file.)
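To see why such bugs are hard to spot, here is a small demonstration (the file path is hypothetical) of exposing a trailing space with sed's l command and then stripping it:

```shell
# Write a config line with an invisible trailing space
printf 'greeter-hide-users=true \n' > /tmp/lightdm-demo.conf

# sed's 'l' command prints lines unambiguously, marking the end with '$'
sed -n 'l' /tmp/lightdm-demo.conf     # shows: greeter-hide-users=true $

# Strip trailing whitespace from every line, in place (GNU sed assumed)
sed -i 's/[[:space:]]*$//' /tmp/lightdm-demo.conf
sed -n 'l' /tmp/lightdm-demo.conf     # shows: greeter-hide-users=true$
```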
Are configuration files usually sensitive to trailing whitespace
1,633,501,064,000
I'm using vim, tmux and zsh. I love those tools. I spent a lot of time configuring them, particularly Vim. In my day-to-day job, I have to access a lot of remote machines. I use ssh to connect to remote servers, Raspberry Pis or virtual machines. For now, every time I access a new machine, I have to reconfigure all my favourite tools. Most of the time, I just don't use them except on important machines. I cannot simply copy my dotfiles; I also have to install some prerequisites. Is there a way to simplify this process?
Assuming you have root access to the machine, you could write an installation routine. I made a few assumptions, such as the availability of apt-get #script, saved locally, executed on the ssh server: #install tools: apt-get -y install tool1 tool2 #be careful with the -y option, though #new zsh tools: #load standard .zshrc file echo . /home/user/.zshrc > /home/user/.additional_zsh_rc #add alias (note the append with >>, so the previous line is not overwritten) echo alias faster=\'than this long command\' >> /home/user/.additional_zsh_rc #new vimrc echo "vim settings" > /home/user/.vimrc #make sure to change ownerships as we are running this script as root chown user:user /home/user/.additional_zsh_rc chown user:user /home/user/.vimrc Now we run the script as root to install: ssh root@server "/bin/zsh -s" <install_script THEN we can log in as user (after script execution, you are automatically logged out as root). Note that we will have to change our standard rcfile to the new one with --rcs (feel free to change what to source or to change the whole .zshrc file; in my example the new rcfile also sources the standard .zshrc) ssh user@host "/bin/zsh --rcs /home/user/.additional_zsh_rc" One could now also put both ssh commands into a script, so you will not need to manually execute them every time. For a hassle-free run, use passwordless ssh-login, but be aware of the dangers when someone has password-less root access to remote machines after stealing your PC.
A way to keep dotfiles and configuration with ssh
1,633,501,064,000
I had a server restart and since then I am no longer able to run any commands under SSH. Any command will just return something like: -bash: ls: command not found I realize my $PATH must have been changed somehow, doing /bin/ls seems to work fine. An echo $PATH returns: /usr/local/sbin:/usr/sbin:/sbin:$PATH:/opt/jdk1.8.0_45/bin:/opt/jdk1.8.0_45/jre/bin:/root/bin I would assume Java is the culprit here, but how do I go around resetting my $PATH variable?
You can execute the following command to add /bin, or whichever directory you need, to PATH. export PATH="$PATH:/bin" You can then add that line to .profile or .bashrc (if you use bash) to make sure that directory is included in your path each time you log in. Note that the literal $PATH in your echo output suggests some startup file sets PATH inside single quotes, which prevents variable expansion; check /etc/profile, /etc/profile.d/ and ~/.bash_profile for such an assignment and fix it.
How to reset $PATH on CentOS 6.5
1,633,501,064,000
I'm building a cross compiled 3.2.15 kernel for a Marvell Armada 370 system. The vendor's default config file for this is armada_370_v7up_defconfig. So when I perform a make armada_370_v7up_defconfig step, shouldn't that result in a .config file that matches the armada_370_v7up_defconfig file? Instead, I'm seeing a lot of differences (can include if needed). Or am I misunderstanding how make defconfig works?
Defconfig generates a new kernel configuration with the default answer being used for all options. The default values are taken from the file arch/$ARCH/configs/armada_370_v7up_defconfig. These default configurations are not designed to exactly fit your target but are rather meant to be a superset, so you only have to modify them a bit. Note also that a defconfig file normally stores only the options that differ from the kernel's built-in defaults, while the generated .config records every option explicitly; that is why the two files look so different. make armada_370_v7up_defconfig creates your initial .config, which you can now edit through make menuconfig to make your changes. After that, you can run make, which will then compile the kernel using your settings.
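For a cross-compiled ARMv7 target like this one, the same sequence is normally run with ARCH and CROSS_COMPILE set. The toolchain prefix below is an assumption; substitute whatever your vendor ships:

```shell
make ARCH=arm CROSS_COMPILE=arm-marvell-linux-gnueabi- armada_370_v7up_defconfig
make ARCH=arm CROSS_COMPILE=arm-marvell-linux-gnueabi- menuconfig   # optional tweaks
make ARCH=arm CROSS_COMPILE=arm-marvell-linux-gnueabi- -j"$(nproc)"
```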
Linux kernel build : shouldn't make <manufacturername>defconfig yield the same .config file?
1,633,501,064,000
I have no problem adding config files to a given script, but automatically updating them is another matter. In this instance I'm providing a few variables to a script, and I'd like the script to be able to change the values provided by the config file. Is there a standard way to change the value of a variable in a running script, and then update the config file accordingly? My current approach is sed-ing for the line containing the variable and changing its contents, but I feel like there must be a better way. For example, imagine a program that generates some arbitrary text file: Line in config file, defining a variable available to the program: EXIT_ON_GEN="true" //exit on successful generation of text file if true The user selects a menu option that indicates they would like to stay in the program instead, because they like generating arbitrary text so much. It would be easy to simply set it to false for the current instance of the program, but what's the best way to change the config file itself, so the user doesn't have to go back and change the settings each time they run it?
Using sed Starting with this file: $ cat >file EXIT_ON_GEN="true" //exit on successful generation of text file if true One method for changing true to false (and displaying the new version on the terminal) would use sed: $ sed 's/^EXIT_ON_GEN=[^[:space:]]*/EXIT_ON_GEN="false"/' file EXIT_ON_GEN="false" //exit on successful generation of text file if true To instead make the change to the file in place, use the -i option: $ sed -i 's/^EXIT_ON_GEN=[^[:space:]]*/EXIT_ON_GEN="false"/' file Using awk $ awk -F'"' -v OFS='"' '/^EXIT_ON_GEN/{$2="false"} 1' file EXIT_ON_GEN="false" //exit on successful generation of text file if true
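The same idea can be wrapped in a small helper that the running script calls whenever a setting changes. The file name, key and value here are hypothetical, and values containing '/' or other sed metacharacters would need extra escaping:

```shell
CONFIG_FILE=/tmp/gen-demo.conf

# set_config KEY VALUE: rewrite KEY="VALUE" in place, or append it if missing
set_config() {
    key=$1; value=$2
    if grep -q "^${key}=" "$CONFIG_FILE"; then
        # naive: breaks if $value contains sed metacharacters such as '/'
        sed -i "s/^${key}=.*/${key}=\"${value}\"/" "$CONFIG_FILE"
    else
        printf '%s="%s"\n' "$key" "$value" >> "$CONFIG_FILE"
    fi
}

printf 'EXIT_ON_GEN="true" //exit on successful generation\n' > "$CONFIG_FILE"
set_config EXIT_ON_GEN false
cat "$CONFIG_FILE"    # EXIT_ON_GEN="false"
```

Note that the sed replacement rewrites the whole line, so any trailing comment on that line is lost; preserving it would need a more careful pattern.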
Updating config files within the script that references them?
1,633,501,064,000
I'm using OSX Mavericks. Using Vim 7.3, I can't seem to get the "hybrid" line numbers to work. I was reading a tutorial stating that if both set number and set relativenumber were included in the .vimrc file, you could get relative numbers on all lines but the actual line number on the current line you're editing. I've searched forums and have followed what others have done in terms of my .vimrc file and cannot get it to function. It just presents as if I only had set relativenumber active and not the absolute number for the current line. I'm still relatively fresh to Vim but I have exhausted my knowledge. The answer was to update Vim to 7.4 To do this required the following: Updating Homebrew Installing python (I read this was a dependency, but not sure) Installing mercurial (I read this was a dependency, but not sure) Installing Vim 7.4(+) (through Homebrew) Configuring the path to use Homebrew's Vim over the OSX default Vim I found the answer on this thread very helpful: installing vim via homebrew
In Vim 7.3, the combined relativenumber-number setting you describe is not supported. See :help relativenumber, which states that: When setting this option, 'number' is reset. Vim 7.4 supports using both together, and the same section says that: The number in front of the cursor line also depends on the value of 'number', see number_relativenumber for all combinations of the two options. OSX includes only Vim 7.3 by default, but Version 7.4 is included in Homebrew, and MacVim is also based on version 7.4, so installing one of those will help you get where you want to be.
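For reference, once you're on Vim 7.4 or newer, the hybrid mode is just both options together in ~/.vimrc:

```vim
" Absolute number on the cursor line, relative numbers elsewhere (Vim 7.4+)
set number
set relativenumber
```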
on osx vim 7.3 set number set relative number not working
1,633,501,064,000
Let's say I didn't know what ~/.bashrc is for. Is there a command that would tell me what the file is/does? Other than googling for an answer. The man pages have a files section, maybe there is a way to tell man: give me the man page for the command that has this file in its man page FILES section. Does something like this exist?
If you have a package manager, you can query which package owns a given file. On Arch Linux, you can use pacman -Qo FILENAME On Ubuntu, Debian and other distributions with apt, you can use dpkg -S FILENAME for installed packages, or apt-file search FILENAME (after installing apt-file) for any packaged file. Many systems also support man -K PATTERN for a brute-force full-text search of all manual pages. To search the man files by hand, you can use zgrep: cd /usr/share/man && find . -name '*.gz' | \ # List all *.gz files while read -r line; do # For each file: zgrep bashrc "$line" && # call zgrep with pattern and filename echo "--- $line ---"; # print filename if zgrep found something (&&) done
Find command associated with configuration file
1,633,501,064,000
I am looking for an open-source voice chat application, like Skype, but within an isolated intranet (no Internet is available). Is there any application which can run on Scientific Linux or CentOS? Client-server will be OK, but our need is client-based. Update: As per Anonymous' answer I have installed Ekiga on two Scientific Linux machines. These machines are on one network (same subnet), with addresses 192.168.3.51 and 192.168.3.56. When I open Ekiga, both users are visible online to each other in the neighbours section. But when I try to make a call, a message appears saying the user is not available. And when I try to send a message, this error appears in the message box after sending: NOTICE: Could not send message While configuring I chose: I do not want to sign up for the ekiga.net free service I do not want to sign up for the ekiga Call Out service because Internet is NOT available to any of my Linux boxes. Is there any configuration missing?
SIP VoIP? Empathy and Ekiga can do it.
(IP Telephony, VoIP and Video Conferencing) Ekiga configuration for LAN with same subnet
1,633,501,064,000
Using the Emacs editor I can set it to enable OpenVMS EDT editor keypad mode with this command: M-x edt-emulation-on How can I set it so that whenever I invoke Emacs, EDT emulation is the default on a Linux system?
From the emacs edt documentation: (add-hook 'term-setup-hook 'edt-emulation-on) For term-setup-hook have a look at the emacs documentation about terminal initialization.
Set EDT emulation by default for Emacs
1,633,501,064,000
I've bought some new speakers, and they play much louder than my old ones. It's so loud that I keep them at 1% to 3% in alsamixer most of the time, which is -103dB to -84dB. This obviously leaves little room for detail in configuration: 3 options in total. What I would like is to be able to set them at 1.6% or 2.3%. Or even better: to tweak the function mapping percentages to dB, so I could make the loud "area" take up less "room" in the percentage scale. Do you by any chance know if ALSA supports this level of configuration?
You can use amixer to better control the volume. But it really depends on the channel/card. For example, my card only has 255 levels, so even if I issue amixer set PCM '0.1dB-', the volume is reduced by a full 0.2 dB. Btw, it's a command line program, not graphical control. See man amixer or amixer -h.
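A few illustrative invocations (the 'Master' control name is an assumption; run amixer with no arguments to list your card's controls):

```shell
amixer get Master          # show the control's raw range and current dB value
amixer set Master 3%       # absolute percentage
amixer set Master 1dB-     # step down by 1 dB (rounded to the card's step size)
amixer -M set Master 5%    # -M maps percentages to a more natural volume scale
```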
Getting more details in ALSA volume
1,702,549,088,000
I have an issue with an internal networking utility server that I am building, specifically around Basic Authentication. Now I have managed to get it to work with the following configuration:- root@P-UIDMON-02:/# /usr/local/scripts/LinuxVersion.sh ### Linux Standard Base Information No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 23.04 Release: 23.04 Codename: lunar ### Kernel Release 6.2.0-1015-azure ### Version of Apache2 Server version: Apache/2.4.55 (Ubuntu) Server built: 2023-10-26T13:37:01 root@P-UIDMON-02:/# /usr/local/scripts/ApacheConfig.sh ### /etc/apache2/apache2.conf Config File ServerName p-uidmon-02.acas.example.org DefaultRuntimeDir ${APACHE_RUN_DIR} PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn IncludeOptional mods-enabled/*.load IncludeOptional mods-enabled/*.conf Include ports.conf <Directory /> Options FollowSymLinks AllowOverride None Require all denied </Directory> <Directory /usr/share> AllowOverride None Require all granted </Directory> <Directory /var/www/> Options Indexes FollowSymLinks AllowOverride All Require all granted </Directory> AccessFileName .htaccess <FilesMatch "^\.ht"> Require all denied </FilesMatch> LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent IncludeOptional conf-enabled/*.conf IncludeOptional sites-enabled/*.conf ### p-uidmon-02.conf Site Specific Config File <Directory /var/www> SSLRequireSSL </Directory> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> <VirtualHost *:80> RewriteEngine On RewriteCond %{HTTPS} off RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} Redirect 
"/" "https://p-uidmon-02.acas.example.org/" ServerName acas.example.org ServerAlias p-uidmon-02.acas.example.org ServerAdmin webmaster@localhost DocumentRoot /var/www/html <Directory /var/www/html> Options -Indexes +FollowSymLinks DirectoryIndex index.html AllowOverride All SSLRequireSSL AuthType Basic AuthName "Restricted Content" AuthUserFile /etc/apache2/auth/.htpasswd Require valid-user </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> <VirtualHost *:443> <FilesMatch "\.(?:cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> ServerName acas.example.org ServerAlias p-uidmon-02.acas.example.org ServerAdmin webmaster@localhost DocumentRoot /var/www/html ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined SSLEngine on SSLCertificateFile /etc/ssl/certs/apache-selfsigned.crt SSLCertificateKeyFile /etc/ssl/private/apache-selfsigned.key </VirtualHost> ### /var/www/html/.htaccess File AuthType Basic AuthName "Restricted Content" AuthUserFile /etc/apache2/auth/.htpasswd Require valid-user My problem (that I don't understand) is that I want to be able to use the Basic Authentication details against the Virtual Host and NOT have to put a .htaccess file in the specific directory, but the only way I can get this to work is by having the .htaccess file in the directory. It is like it is ignoring the Virtual Host <Directory>XYZ</Directory> statements. If I remove the .htaccess file, it doesn't challenge me; if I have it in there, then I get the challenge! Although it's working, it is driving me insane trying to understand why it doesn't work if I take the .htaccess away. Can anyone of you explain to me what I am doing wrong here?
It seems to me that you've got three major issues: You have put the htaccess configuration statements in the Virtual Host managing port 80 rather than the one managing port 443 You have put your SSL configuration statements into the non-SSL port 80 host instead of the SSL port 443 host The Virtual Host on port 80 quite reasonably does an immediate redirect to the Virtual Host on port 443 so any configuration statements there will be (mostly) ignored
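Putting those three points together, a corrected sketch (using the names and paths from the question; untested) keeps port 80 as a bare redirect and moves the SSL and authentication directives into the :443 host:

```apache
<VirtualHost *:80>
    ServerName acas.example.org
    ServerAlias p-uidmon-02.acas.example.org
    # Nothing but the redirect belongs here
    Redirect permanent "/" "https://p-uidmon-02.acas.example.org/"
</VirtualHost>

<VirtualHost *:443>
    ServerName acas.example.org
    ServerAlias p-uidmon-02.acas.example.org
    DocumentRoot /var/www/html

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/apache-selfsigned.crt
    SSLCertificateKeyFile /etc/ssl/private/apache-selfsigned.key

    <Directory /var/www/html>
        Options -Indexes +FollowSymLinks
        AllowOverride None
        AuthType Basic
        AuthName "Restricted Content"
        AuthUserFile /etc/apache2/auth/.htpasswd
        Require valid-user
    </Directory>
</VirtualHost>
```

With the auth block in the :443 <Directory> section and AllowOverride None, no .htaccess file is needed at all.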
Ubuntu Apache2 Basic Authentication Issue
1,702,549,088,000
I would like to configure Mutt so that the default save folder changes depending on the current folder: When reading messages in the folder =account1/Unsorted or =account1/Important or any other sub-folder under =account1/, I'd like the default save folder (the one suggested when pressing s in the message index) to be =account1/INBOX. Likewise, for sub-folders of =account2/, I'd want the default save folder to be =account2/INBOX, etc. How may I configure Mutt to do this? It appears save-hook would have been useful, if it had a way of matching against the folder name (I don't think it has). It appears folder-hook would have been useful, if it had a way of setting the save folder, but there is no save folder setting to set with the hook (there's record, but that's for outgoing messages).
Update: Reportedly, the following two lines were used folder-hook +account1 unhook save-hook folder-hook +account1 save-hook . +account1/INBOX repeated for each account. Mutt, as you know, is configured by the .muttrc file. Look at man muttrc; the pattern syntax, among everything else, is documented there. I also do not see a folder pattern there, but searching for save I found elsewhere: Note that these expandos are supported in "save-hook", "fcc-hook" and "fcc-save-hook", too. and above that %b filename of the original message folder (think mailbox) %B the list to which the letter was sent, or else the folder name (%b). %O original save folder where mutt would formerly have stashed the message: list name or recipient name if not sent to a list Some of them could potentially be used, but I do not see in patterns any way to match against a fixed string rather than against the message. You can make procmail (or whatever is messing with your incoming mail) add a custom header line and use ~h to test for it. I think your solution will be using folder-hook [!]regexp command When mutt enters a folder which matches regexp (or, when regexp is preceded by an exclamation mark, does not match regexp), the given command is executed. When several folder-hooks match a given mail folder, they are executed in the order given in the configuration file. Maybe something like folder-hook account1 unhook * folder-hook account1 save-hook ~A =account1/INBOX or folder-hook account1 save-hook ".*" =account1/INBOX folder-hook ^=account1.* save-hook ........... ??? UPDATE: Reportedly, the following two lines were used as a solution (and seem to work) folder-hook +account1 unhook save-hook folder-hook +account1 save-hook .
+account1/INBOX # repeated for each separate account # Note: We know the documentation for folder-hook says "regexp", but using +account (for some given account) seems to work too for whatever reason It is unlikely that the most recent save-hook takes precedence, so I added unhook (change it to unhook save-hook or whatever you test to work) unhook [ * | hook-type ] This command will remove all hooks of a given type, or all hooks when "*" is used as an argument. hook-type can be any of the -hook commands documented above. The following info might be useful for other users who have simpler needs and somehow came to this page: save_address Type: boolean Default: no If set, mutt will take the sender's full address when choosing a default folder for saving a mail. If $save_name or $force_name is set too, the selection of the Fcc folder will be changed as well. save_name Type: boolean Default: no This variable controls how copies of outgoing messages are saved. When set, a check is made to see if a mailbox specified by the recipient address exists (this is done by searching for a mailbox in the $folder directory with the username part of the recipient address). If the mailbox exists, the outgoing message will be saved to that mailbox, otherwise the message is saved to the $record mailbox. Also see the $force_name variable. An example: set save_address=yes Please tell me and others how you solved the problem.
Set Mutt's default save folder depending on the current folder
1,702,549,088,000
On Linux, each piece of software can decide which configuration format it wishes to use. Some use TOML, INI, JSON, XML, CSV, YAML, JS, CSS, scripts, and so on. However, some configuration files use non-standard, INI-like text formats, e.g.: a text file in which each line is composed of a key and a value separated by one or more whitespace characters, lines beginning with "#" are comments, and sometimes there is some kind of "blocks" (e.g. in SSH): Include /etc/ssh/ssh_config.d/*.conf Host * # ForwardAgent no SendEnv LANG LC_* A variant is used (e.g. in nginx) in which "blocks" are delimited by {}: server { listen 127.0.0.1:80; } Is there a name or family of terms used to designate this type of format?
"Text-based configuration format, with weak structural hierarchy" is what I'd call the common denominator of both examples. An nginx config file is syntactically as different to an SSH config file as it is to JSON – it might look similar on a cursory glance, but the things both parsers can do with the content of the file are sufficiently different that I wouldn't even want to throw them in the same class of syntaxes.
Is this type of Linux configuration files format has a name or a way to designate them?
1,702,549,088,000
I want to be able to share a folder in Dolphin. Right click on a folder, then go to Share tab. Unfortunately I see a message there: "You appear to not have sufficient permissions to manage Samba user shares": I am on Arch Linux, if that matters. I have installed samba package and I am able to manually configure a share. But I would like to be able to do it fast from gui, just like on Windows. Also, I have seen that in other distros using KDE, this Share tab actually works. How to fix that problem?
You should follow https://wiki.archlinux.org/title/Samba#Enable_Usershares. It explains the steps. But here they are. Run: sudo mkdir /var/lib/samba/usershares sudo groupadd -r sambashare sudo chown root:sambashare /var/lib/samba/usershares sudo chmod 1770 /var/lib/samba/usershares In /etc/samba/smb.conf add: [global] usershare path = /var/lib/samba/usershares usershare max shares = 100 usershare allow guests = yes usershare owner only = yes Run: sudo gpasswd sambashare -a your_username sudo systemctl restart smb.service sudo systemctl restart nmb.service Log out and log back in. Now in Share tab you can configure permissions level for your new share. Also you can press "Show Samba status monitor" monitor and see the list of all your user shares and reconfigure them if required:
How to share a folder in KDE Dolphin?
1,702,549,088,000
I know this question has been asked before but none of the answers worked out for me. I use Arch Linux and KDE with the linux-zen kernel and I have set my locale to en_US.utf-8. Whenever I open Konsole I get: bash: warning: setlocale: LC_ALL: cannot change locale (en_US.utf-8) How can I fix this? Here is my /etc/environment: # # This file is parsed by pam_env module # # Syntax: simple "KEY=VAL" pairs on separate lines # LANG=en_US.utf-8 LC_ALL=C I have also set LC_ALL to C in my .bashrc.
The file /etc/environment has nothing to do with the system locale, and LC_ALL=C is from back in the day; you are using a rolling release. If you want a collation override, set LC_COLLATE=C.UTF-8 rather than LC_ALL. When using sysvinit or openrc or something similar ... your locales are created with locale-gen and /etc/locale.gen, then passed to /etc/locale.conf and /etc/env.d/02locale Here is an example ... [~] cat /etc/locale.gen C.UTF8 UTF-8 en_US ISO-8859-1 en_US.UTF-8 UTF-8 de_DE ISO-8859-1 de_DE@euro ISO-8859-15 de_DE.UTF-8 UTF-8 [~] cat /etc/locale.conf # Configuration file for eselect LANG="de_DE.utf8" LC_COLLATE="C.UTF-8" [~] cat /etc/env.d/02locale # Configuration file for eselect LANG="de_DE.utf8" LC_COLLATE="C.UTF-8" [~] ls -l /etc/env.d/02locale lrwxrwxrwx 1 root root 14 1. Aug 20:10 /etc/env.d/02locale -> ../locale.conf Verify that the selected locales are available by running locale -a and, after availability is verified, you can run source /etc/profile to update your system on the fly. Note: locales will be saved to /usr/lib/locale/locale-archive and can be checked via localedef --list-archive. When using systemd ... Get a list of available locales with localectl list-locales. Set the desired locale via localectl set-locale LANG=de_DE.utf8 and localectl set-locale LC_COLLATE=C.UTF-8 if needed. Check the result with localectl.
LC_ALL: cannot change locale (en_US.utf-8)
1,702,549,088,000
I am trying to determine whether rules put in place using tc persist beyond a reboot (I do not believe they do by default), and whether there is any way to cause them to persist, or if the best you can do is to re-execute the commands at boot in order to put them in place again. Also: how/where do these rules get persisted? (assuming there is in fact a way to do so)
Consolidating comments into an answer. Based on comments from @dirkt and @berndbausch, it seems like the bottom line is: There is no tc-specific way of persisting rules that are put in place using tc. The specifics of how to do so the Right Way will vary depending on your distro, but it will come down to re-running the tc commands as part of some file at boot time (for example, /etc/network/interfaces).
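As a sketch of the "re-run at boot" approach on a Debian-style system (the interface name and the tbf parameters below are placeholders, not values from the question):

```
# /etc/network/interfaces fragment: re-apply the qdisc every time eth0 comes up
auto eth0
iface eth0 inet dhcp
    post-up tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
    pre-down tc qdisc del dev eth0 root
```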
Can TC rules persist beyond a reboot? Where?
1,702,549,088,000
I've been reading through documentation and sample configurations of the Lynx text-based browser to learn how to map a key to a command in Lynx. I learned that the space bar key can be used to page down, which is similar to the behavior in most major browsers such as Chrome. See the following link. http://web.mit.edu/cygwin/cygwin_v1.3.2/usr/share/lynx_help/keystrokes/keystroke_help.html I would like to map the "shift + space bar" keystroke to copy the page up behavior in Lynx. I found the syntax to map a key in a CFG file is: KEYMAP:<KEYSTROKE>:<LYNX FUNCTION>. See the following link. https://lynx.invisible-island.net/lynx2.8.3/breakout/lynx.cfg I also learned that the caret symbol "^" represents the Control key. For example, KEYMAP:^A:HOME maps Ctrl-A to the Home command in Lynx, which moves the cursor to the top of the page. However, I don't see any examples of mapping the shift key. How do you map the "shift + space bar" keystroke command in the Lynx browser?
Generally speaking, you don't (because unless you're able to (re)configure the keyboard, terminals won't send a distinct set of characters for Shift+Space). Beyond that, lynx doesn't have a special feature for mapping key modifiers, since that's too terminal-specific to have been standardized. Lynx's KEYMAP feature uses a subset of the standard terminfo capabilities.
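That said, if you settle for a keystroke the terminal can actually deliver, such as a Ctrl chord, the mapping itself is one line in lynx.cfg. The keystroke here is a stand-in; PREV_PAGE is lynx's page-up function per its default keymap:

```
# lynx.cfg -- bind Ctrl-B to page up
KEYMAP:^B:PREV_PAGE
```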
How do you map the "shift + space bar" keystroke command in the Lynx browser?
1,702,549,088,000
I know KConfig serves to tune the C preprocessor at the start of the Linux kernel compilation, and that the device tree is used to give a compiled kernel a description of the hardware at runtime. How do these two configurability features overlap? Both give information about intrinsic CPU details and drivers for external peripherals, for example. I imagine that any peripheral mentioned in the device tree must have gotten its driver previously declared in the .config file. I can guess too that if a driver has been compiled as built-in, it will not be loaded as a module again... but what finer details are there?
A compile-time kernel configuration can specify whether or not include each of the standard drivers included in the kernel source tree, how those drivers will be included (as built-in or as loadable modules), and a number of other parameters related to e.g. what kind of optimizations and other choices will be used in compiling the kernel (e.g. optimize to specific CPU models, or to be as generic as possible, or whether or not enable some features like Spectre/Meltdown security mitigations by default or not). If a compile-time kernel configuration is set generic enough, the same kernel can be used with a large number of different systems within the same processor architecture. On the other hand, the device tree is about the actual hardware the kernel is currently running on. For embedded systems and other systems with no autoprobeable technologies like ACPI or PCI(e), the device tree will specify the exact I/O or memory addresses specific hardware components will be found at, so that the drivers will be able to find and use those hardware components. Even if the device tree describes a particular hardware component exists and how it can be accessed, if the necessary driver for it is disabled at compile-time, then the kernel will be unable to use it unless the driver module is added separately later. Or if the kernel is compiled with a completely monolithic configuration with no loadable module support, then that kernel won't be able to use a particular device at all unless the driver for it is included in the kernel compilation. If a driver for a hardware component is included in kernel configuration (either built-in or as a loadable module) but there is no information for it in the device tree (and the hardware architecture does not include standard detection mechanisms), then the kernel will be unaware of the existence of the hardware component. For example, if the device tree incorrectly specifies the display controller's framebuffer area as regular usable RAM, then you might get a random display of whatever bytes happen to get stored in the display controller's default framebuffer area, as a jumble of pixel "noise". Or, if the display controller needs a specific initialization to start working, you might get no output at all. In other words, the purpose of the device tree is to tell the kernel where the various hardware features of the system are located in the processor's address space(s), both to enable the kernel to point the correct drivers to correct physical hardware, and to tell the kernel which parts of address space it should not attempt to use as regular RAM.
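As a concrete illustration, a device tree source node for a memory-mapped UART might look like the sketch below; the addresses and interrupt number are made up for illustration, while "arm,pl011" is a real compatible string that points the kernel at the PL011 serial driver:

```dts
uart0: serial@101f1000 {
    compatible = "arm,pl011";
    /* where in the address space the driver should find the registers */
    reg = <0x101f1000 0x1000>;
    interrupts = <12>;
};
```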
How does kbuild compare to the device tree?
1,702,549,088,000
I changed the user for supervisor from root to a non-root user called dev. All is good; supervisor is running as dev: me@server$ ps aux | grep supervisor dev 25230 0.2 1.0 60404 21392 ? Ss 21:42 0:00 /usr/bin/python /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf ...but the logs show this message: Nov 15 21:45:00 server supervisord[25473]: 2019-11-15 21:45:00,880 CRIT Set uid to user 1003 This is the uid for user dev: me@server$ id dev uid=1003(dev) gid=1000(ww) groups=1000(ww) What does it mean? How should I change the uid when the user with this uid is already running supervisor?
According to the docs, you have to start supervisord as root and let it drop privileges. The current version logs the user change like Set uid to user dev succeeded Probably you are using some older version; if you upgrade to the current one, this misleading log message will disappear. For now, you can safely ignore it. Here is the github commit for the mentioned change/fix
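For reference, the relevant stanza is the user option in supervisord's main section; a minimal supervisord.conf sketch (other options omitted):

```ini
[supervisord]
; when started as root, supervisord setuid()s to this account --
; this is exactly what the "Set uid to user 1003" log line reports
user=dev
```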
Changing user for supervisor - error CRIT Set uid to user
1,702,549,088,000
I have a virtualhost which I configured to redirect any hit to a different site, so: <VirtualHost *:80> Redirect 301 / http://other.site/ </VirtualHost> Now I would like to reconfigure it so that it redirects any hit except those to a specific virtual directory. Intuitively, I would try something like: <VirtualHost *:80> <Location ! "/subdir"> Redirect 301 / http://other.site/ </Location> ...configuration for /subdir... </VirtualHost> Is this possible in Apache? As I understand its config, it is not very strong with negative rules.
Yes, it is possible. You can use RedirectMatch with the mod_alias Apache module, like this: <VirtualHost *:80> ServerName _default_ RedirectMatch 301 ^/(?!subdir...)(.*) http://other.site/ </VirtualHost> Or you can use Apache's mod_rewrite module and do this: <VirtualHost *:80> ServerName _default_ RewriteCond %{REQUEST_URI} !^/subdir... RewriteRule (.*) http://other.site/ [L,R=301] </VirtualHost>
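The RedirectMatch form leans on a PCRE negative lookahead. You can sanity-check such a pattern outside Apache with GNU grep's -P option; the /subdir name below is a stand-in for the real directory:

```shell
# Paths under /subdir must NOT match the redirect pattern; everything else must.
pattern='^/(?!subdir)'
printf '%s\n' /foo /subdir/page /bar | grep -P "$pattern"
```

Only /foo and /bar survive the filter, so only those requests would be redirected.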
How to make negative rules in Apache?
1,702,549,088,000
I want to start keepassX in floating mode in i3wm. My .config/i3/config contains the line for_window [class="keepassx"] floating enable and the xprop xprop _NET_WM_USER_TIME(CARDINAL) = 7578932 WM_STATE(WM_STATE): window state: Normal icon window: 0x0 _NET_WM_SYNC_REQUEST_COUNTER(CARDINAL) = 29360143 _NET_WM_ICON(CARDINAL) = Icon (64 x 64): XdndAware(ATOM) = BITMAP _MOTIF_DRAG_RECEIVER_INFO(_MOTIF_DRAG_RECEIVER_INFO) = 0x6c, 0x0, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x10, 0x0, 0x0, 0x0 _NET_WM_NAME(UTF8_STRING) = "myKeys.kdbx - KeePassX" WM_CLIENT_LEADER(WINDOW): window id # 0x1c00005 _NET_WM_PID(CARDINAL) = 26787 _NET_WM_WINDOW_TYPE(ATOM) = _NET_WM_WINDOW_TYPE_NORMAL _MOTIF_WM_HINTS(_MOTIF_WM_HINTS) = 0x3, 0x3e, 0x7e, 0x0, 0x0 WM_PROTOCOLS(ATOM): protocols WM_DELETE_WINDOW, WM_TAKE_FOCUS, _NET_WM_PING, _NET_WM_SYNC_REQUEST WM_NAME(STRING) = "Keys.kdbx - KeePassX" WM_LOCALE_NAME(STRING) = "en_US.UTF-8" WM_CLASS(STRING) = "keepassx", "Keepassx" WM_HINTS(WM_HINTS): Client accepts input or input focus: True Initial state is Normal State. bitmap id # to use for icon: 0x1c0000b window id # of group leader: 0x1c00005 WM_NORMAL_HINTS(WM_SIZE_HINTS): user specified location: 960, 22 program specified location: 960, 22 user specified size: 956 by 1033 program specified size: 956 by 1033 program specified minimum size: 640 by 517 window gravity: NorthWest WM_CLIENT_MACHINE(STRING) = "nautilus" WM_COMMAND(STRING) = { "keepassx" } I also tried the command for_window [instance="keepassx"] floating enable How can I make KeePassX always start in floating mode?
As Adaephon said, you just looked at the wrong string. Everything else should be fine. You want to distinguish by class, so let's look at your xprop: WM_CLASS(STRING) = "keepassx", "Keepassx" This line is defined like: WM_CLASS(STRING) = instance, class As you see, you wanted to float keepassx, but the class is Keepassx There are two solutions for you: Use for_window [class="Keepassx"] floating enable, as this refers to the right class name. Use for_window [class="(?i)keepassx"] floating enable, which makes the matched string case-insensitive. Bear in mind that you can also use for_window with other attributes, like name, instance, etc. EDIT: I've read his comment again and yes, he should be right: look again at your config to rule out that a later line disables floating mode for specific windows or for all of them.
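If you ever need to script this check, the class i3 matches on is simply the second quoted field of the WM_CLASS line, i.e. the fourth field when splitting on double quotes. A quick awk sketch over the sample output above:

```shell
# Extract the class (second quoted value) from an xprop WM_CLASS line
line='WM_CLASS(STRING) = "keepassx", "Keepassx"'
printf '%s\n' "$line" | awk -F'"' '{print $4}'   # prints: Keepassx
```

In a live session you would pipe the real output in, e.g. xprop WM_CLASS | awk -F'"' '{print $4}', and click the window.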
Make KeepassX float in i3wm
1,702,549,088,000
I am using a lenovo thinkpad w520 running fedora 22 and gnome 3. This particular laptop has both an integrated (Intel HD Graphics 3000) and discrete graphics (NVIDIA Quadro 1000M) card. When I'm in Linux, I only ever use the integrated graphics card because support for the NVIDIA card isn't awesome without installing proprietary drivers. For the most part, this works well, except that the screen is always on 100% brightness. Turning down the brightness with either the keyboard controls or in the Gnome pane has no effect. I did some poking and discovered that in the /sys/class/backlight directory there are two entries: intel_backlight and nv_backlight. I tried changing the brightness in the Gnome pane and watched the intel_backlight/brightness and nv_backlight/brightness files. The former does not change but the latter does. If I manually change the former, the true brightness actually does update. So I guess I need to figure out how to tell gnome to use the intel_backlight device rather than the nv_backlight device. How would I go about doing this? OS info: 4.2.8-200.fc22.x86_64 Using grub2.
Create the following config for X11, following the Arch Linux wiki, to force Optimus hardware to use intel_backlight instead of nv_backlight: # cat /etc/X11/xorg.conf.d/20-backlight.conf Section "Device" Identifier "Intel Graphics" Driver "intel" Option "Backlight" "intel_backlight" EndSection Note that on my Fedora 28 this is the only X11 config snippet; the rest is likely detected automatically.
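As a stopgap before the X config fix, you could mirror the value GNOME writes into nv_backlight over to intel_backlight, rescaled to the target device's range (the sysfs paths are the ones from the question; the helper name is made up for this sketch):

```shell
# Rescale a brightness value from one backlight device's range to another's
scale_brightness() {
  # $1 = current value, $2 = source max_brightness, $3 = target max_brightness
  echo $(( $1 * $3 / $2 ))
}

# e.g. as root (untested sketch):
#   nv=/sys/class/backlight/nv_backlight
#   intel=/sys/class/backlight/intel_backlight
#   scale_brightness "$(cat $nv/brightness)" "$(cat $nv/max_brightness)" \
#       "$(cat $intel/max_brightness)" > "$intel/brightness"
```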
Change which brightness device gnome uses?
1,702,549,088,000
I want to filter the netflow records by engine_id. I have tried nfdump -r <FILE> engine_id 10 and nfdump -r <FILE> -s engine_id 10 But it is not working. What am I doing wrong? Here you can find the manual for nfdump.
I don't know whether we can filter by engine id in netflow records. I found this LINK. My goal is to differentiate the input OVS port on different servers. From that article, we can differentiate the input OVS ports with a combination of "add_to_interface=true” and “engine_id=10”. From that article: "There is another use case for Engine ID. As I already explained that OVS uses OpenFlow port number as an In/Out interface number in NetFlow flow record. Because OpenFlow port number is a per bridge unique number, there is a chance for these numbers to collide across bridges. To get around this problem, you can set “add_to_interface” to true." “When this parameter is set to true, the 7 most significant bits of In/Out interface number is replaced with the 7 least significant bits of Engine ID. This will help interface number collision happen less likely.”
How to use option from manual for netflow?
1,702,549,088,000
I have an application which modifies environment variables. The intent is to launch a bash shell with a modified context-specific environment. When my application modifies the environment, and exec()s "bash --noprofile --norc" with the modified environment then I almost get the behavior I want, except that aliases are dropped. If I log in and open a shell directly from the OS, I get the "normal" aliases, but if my application launches a bash, then I don't get any aliases because the initialization files are skipped. Is there any way to have bash initialize from a dynamic source? In other words, it would be helpful if I could have my application launch "bash" with all its various user/facility settings (including aliases) and then at the end of that, source the differences that my application needs to apply. Ideally, this would leave open a shell prepped and ready to go for my users. I'm not finding this (or perhaps understanding it) from the man page. In an ideal world, we could refactor the user/factory settings to be more reentrant (aware of the application, and skip reinitialization steps that don't need to happen again); but in practice this is turning out to be a little bit of a hassle. Any suggestions? Thanks.
bash --noprofile --norc <<ALIAS $(alias) exec </dev/tty ALIAS That should do it if you're running the bash --noprofile... bit from a bash that knows your aliases. Else you could do as @WilliamEverett suggests (which, as I believe, is ultimately the better way to go). One way to facilitate this is: alias >~/.aliasrc { cp ~/.bashrc ~/.bashrc.old grep -v '^ *alias' <~/.bashrc.old echo '. ~/.aliasrc' } >~/.bashrc You'll want to do a little comparison between the rc file that generates and the .old one it saves in case some alias definitions were multi-lined. After you've sorted that though you can: bash --noprofile --rcfile ~/.aliasrc
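The reason both tricks work is that the output of the alias builtin is itself valid shell that recreates the definitions. A self-contained bash demonstration (the /tmp/aliasrc path is arbitrary):

```shell
# `alias` prints definitions in a re-sourceable form (bash)
alias ll='ls -l'
alias > /tmp/aliasrc   # dump every currently defined alias
unalias ll
. /tmp/aliasrc         # source the dump back in
alias ll               # defined again
```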
bash: dynamic environment control
1,702,549,088,000
I'm currently working on updating our CentOS server, however whenever I'm trying to use yum I get this error. Config Error: Plugin "replace" requires API 2.6. Supported API is 2.5. I'm new to CentOS and have previously only worked with Debian/Ubuntu servers and I can't seem to get rid of this error; Google also doesn't help much, hence I'm using this as a last resort before bothering my server company.
If you don't use the yum replace <package-name> command, you can simply remove the yum-plugin-replace package: rpm -e yum-plugin-replace
CentOS replace config error
1,702,549,088,000
I have used Webmin on multiple distros, both on the Red Hat-based side and the Debian-based side, and have had various issues with it on both. My issues have included things like configuration failing when restarting a service from within Webmin, or settings not sticking when I enter them via Webmin and apply changes. A couple of examples are with bind9 and samba. I tried to set up a new samba user but it failed to do so, and I can't even tell exactly where, I just know that restarting the server failed. Same with bind9, after configuring a zone and adding A and PTR records, the bind server failed to start and didn't give any helpful reason why, so I ended up merely using the configuration file editor to get it done. My frustration with it has led me to try other configuration tools. It also has brought up the question -- is there a particular distro that Webmin was developed and tested on? I know Webmin is available for a variety of distros but it doesn't seem to work really well for any of them. Maybe I just haven't used it on the right one?
Webmin should work with most of the major distros such as CentOS, Debian, and Ubuntu. As you move away from those I would expect it to start to fall off. I found this thread titled: Any better distro than CentOS for Webmin/Virtualmin?, which has several people's perspectives on using Webmin on different OSes. One thing you'll likely run into is that Webmin might have issues with particular services rather than with a particular distro. So for example it might work great on Ubuntu, except for maybe the Samba service on Ubuntu. So overall it works just fine with that particular distro, except with that one service.
Is Webmin oriented toward a particular Linux distro?
1,702,549,088,000
I've seen similar questions many times on AskUbuntu, but most answers were about unity-helpers or gconf ...canonical... etc, so that doesn't actually seem to work. The problem is that I decided to move to lightdm from gdm. Yep, it works, but I can't set a background image for it: I always get a black background color instead of the picture. My configs: tempos@parmasse ~ $ cat /etc/lightdm/lightdm-gtk-greeter.conf # # logo = Logo file to use, either an image absolute path, or a path relative to the greeter data directory # background = Background file to use, either an image path or a color (e.g. #772953) # theme-name = GTK+ theme to use # icon-theme-name = Icon theme to use # font-name = Font to use # xft-antialias = Whether to antialias Xft fonts (true or false) # xft-dpi = Resolution for Xft in dots per inch (e.g. 96) # xft-hintstyle = What degree of hinting to use (hintnone, hintslight, hintmedium, or hintfull) # xft-rgba = Type of subpixel antialiasing (none, rgb, bgr, vrgb or vbgr) # show-language-selector (true or false) # [greeter] #logo= background=/usr/share/backgrounds/lightdm.jpg #background=#772953 #theme-name=Adwaita #icon-theme-name=gnome #font-name= #xft-antialias= #xft-dpi= #xft-hintstyle= #xft-rgba= show-language-selector=true The file itself: tempos@parmasse ~ $ ls -la /usr/share/backgrounds/lightdm.jpg -rwxrwxrwx 1 root root 1362684 авг 14 12:36 /usr/share/backgrounds/lightdm.jpg
Thanks all. It seems that it was some bug, in lightdm itself (meaning the package or some of its libraries) or, possibly, it was simply installed with some errors/bugs. I'm currently trying to install many different things, like compiz, awesome, enlightenment, lightdm and others, so I can't be sure. The fact is that today both lightdm and lightdm-gtk-greeter received updates, and this fixed the background problems even with the original images and config.
Change lightdm background
1,702,549,088,000
I am trying to configure a wireless network on a newly installed Arch Linux. The command iw dev wlp3s0 scan gives information about all the found networks. I only need the information about my SSID.
I don't have wifi around here to check, but I believe iw dev IFACE scan starts each section with a non-indented line and indents all subsequent lines. So you can treat a non-indented line as a section break. This is not very easy to parse with the usual commands, so you can do it in two steps. First insert an empty line between sections. Then use awk's paragraph mode. iw dev wlp3s0 scan | sed 's/^[^ \t]/\n&/' | awk -v RS= '/^[ \t]*SSID: myssid$/' Beware to quote any special characters in the SSID properly. If you're passing it as a variable and need to handle special characters safely, it's a little more work. iw dev wlp3s0 scan | sed 's/^[^ \t]/\n&/' | awk -v RS= -v target="$ssid" '{ ssid = substr($0, index($0, "\tSSID:")); ssid = substr(ssid, 1, index(ssid, "\n")); if (ssid == target) print; }'
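To see the two-step transformation in isolation, here is the same pipeline run over a fabricated two-network scan dump (GNU sed assumed for the \n and \t escapes; the BSS addresses and SSIDs are made up):

```shell
# Fabricated iw-style output: section headers unindented, details tab-indented.
# sed inserts a blank line before each header; awk's paragraph mode (RS=)
# then treats each section as one record and prints only the matching one.
printf 'BSS 00:11:22(on wlp3s0)\n\tSSID: home\nBSS 33:44:55(on wlp3s0)\n\tSSID: work\n' \
  | sed 's/^[^ \t]/\n&/' \
  | awk -v RS= '/SSID: work$/'
```

Only the second section (the one containing SSID: work) is printed.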
How to restrict output of iw dev wlp3s0 scan?
1,702,549,088,000
I've worked with Fontconfig previously, and understand how to do most of the common configurations. All my aliases work as expected, and I'm almost completely finished. I'm trying to set the default unavailable font selection, and couldn't find anything helpful in any documentation. Countless Google searches returned unrelated results. To clarify what I'm attempting to configure, how can I specify which font is selected in this circumstance? fc-match 'nonexistent font'
As mentioned in the fonts.conf man page, you can set the FC_DEBUG environment variable to turn on a wide variety of additional messages from Fontconfig. For example, I tried FC_DEBUG=$((1 + 2 + 4 + 4096)) fc-match "nonexistent font" and I got a whole lot of messages that looked helpful. Of course there are even more flags you can add.
Fontconfig default unavailable font selection, How is it defined in the XML configs? Couldn't locate in documentation
1,702,549,088,000
I think I seriously messed things up on my EC2 instance which I'm currently hacking on. I tried to install some rpmfusion repository from which to install FFMPEG, but it broke things and I wasn't able to do any updates or install anything. So, I ran a pretty straightforward rm command: rm /etc/yum.repos.d/rpmfusion-*. I think this really messed things up, though, as I can't seem to find rpmbuild which I need to install FFMPEG. Can anyone help me recover from this? I don't have access to the EC2 control panel, otherwise I'd just up another instance and start over. Can anyone instruct me on how to simply install FFMPEG on a CentOS-like OS?
You can re-install the repo RPMs from here: http://rpmfusion.org/Configuration You probably want to find the version that matches what you have installed and do: yum reinstall packagename
I wiped my /etc/yum.repos.d
1,702,549,088,000
I have a print server using CUPS on a CentOS 5.3 box. On my PC, I set up a remote printer with the URI http://$PRINT_SERVER:631/printers/$PRINTER_NAME, and have successfully been able to print files to it. There is another system, which my team does not have control over, that sends all of its print requests using LPD on port 515. I need to handle this somehow. I installed the cups-lpd package and edited the /etc/xinetd.d/cups-lpd file to enable it (or so I thought): ~$ cat /etc/xinetd.d/cups-lpd service printer { socket_type = stream protocol = tcp port = 515 wait = no user = lp group = sys passenv = server = /usr/libexec/cups/daemon/cups-lpd server_args = -o document-format=application/octet-stream disable = no } But as far as the other computers on the network are concerned, port 515 is closed: Starting Nmap 5.51 ( http://nmap.org ) at 2011-09-02 16:41 Central Daylight Time Nmap scan report for [IP address] Host is up (0.028s latency). Not shown: 995 closed ports PORT STATE SERVICE 514/tcp open shell 631/tcp open ipp 1066/tcp open fpo-fns 1067/tcp open instl_boots 6000/tcp open X11 Nmap done: 1 IP address (1 host up) scanned in 0.52 seconds Is there something else I need to change in the xinetd configuration to enable the LPD port?
I ran xinetd with the -d (debug) flag, and got the following helpful error messages: 11/9/6@15:32:33: ERROR: 2767 {server_parser} Server /usr/libexec/cups/daemon/cups-lpd is not executable [file=/etc/xinetd.d/cups-lpd] [line=10] 11/9/6@15:32:33: ERROR: 2767 {identify_attribute} Error parsing attribute server - DISABLING SERVICE [file=/etc/xinetd.d/cups-lpd] [line=10] 11/9/6@15:32:33: ERROR: 2767 {fix_server_argv} Must specify a server in printer There was no /usr/libexec/cups/daemon/cups-lpd file, but there was a /usr/lib/cups/daemon/cups-lpd. That's what I get for copying sample code from the internet. Edited this line, and the printer is working now.
How to enable cups-lpd / port 515?
1,702,549,088,000
I have the Jack Audio Connection Kit (JACK) installed, but cannot seem to get jack_control start to start the service. I'm using Slackware64-current, which recently updated its /etc/dbus-1/system.conf to have a more restrictive configuration: <!-- ... --> <policy context="default"> <!-- All users can connect to system bus --> <allow user="*"/> <!-- Holes must be punched in service configuration files for name ownership and sending method calls --> <deny own="*"/> <deny send_type="method_call"/> <!-- Signals and reply messages (method returns, errors) are allowed by default --> <allow send_type="signal"/> <allow send_requested_reply="true" send_type="method_return"/> <allow send_requested_reply="true" send_type="error"/> <!-- All messages may be received by default --> <allow receive_type="method_call"/> <allow receive_type="method_return"/> <allow receive_type="error"/> <allow receive_type="signal"/> <!-- Allow anyone to talk to the message bus --> <allow send_destination="org.freedesktop.DBus"/> <!-- But disallow some specific bus services --> <deny send_destination="org.freedesktop.DBus" send_interface="org.freedesktop.DBus" send_member="UpdateActivationEnvironment"/> </policy> Ever since the update, running jack_control start as a regular user produces the following error: --- start DBus exception: org.jackaudio.Error.Generic: failed to activate dbusapi jack client. error is -1 It did not do this before. The new configuration file says I'm supposed to punch a hole for it in the service configuration files. I'm not even quite sure what DBUS has to do with JACK. Extra information: JACK2 SVN revision 4120 (2011-02-09) DBUS version 1.4.1 DBUS-Python version 0.83.1
I figured this out a while ago. It turns out it was a CAS-ARMv7 patch to JACK that broke the DBUS functionality, and I managed to fix it using this patch. The issues were resolved some time ago in the JACK subversion repository and it works fine now.
Configuring DBUS to start JACK
1,640,274,829,000
When modifying Linux configuration files, it is often recommended to place local changes under a .d directory, e.g., /etc/sudoers.d/ or /etc/apt/sources.list.d. In my understanding this is to avoid system updates overwriting the local changes if they were directly placed in files such as /etc/sudoers or /etc/apt/sources.list. Is there a similar way to apply local sshd_config settings? Currently, I am directly modifying the /etc/ssh/sshd_config file, but I am worried that I may lose all the changes after some update replacing that file.
No, not by default. However, there are many alternative solutions. You can make sshd look for the configuration elsewhere, or keep a backup of the configuration. You can also 'chattr +i' your configuration as root to prevent editing/removal by any user for as long as the immutable flag is set.
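Since there is no sshd_config.d mechanism by default, one stopgap along the "keep a backup" line is to snapshot the file and diff it after updates. The helper names and the .local-backup suffix below are arbitrary choices for this sketch:

```shell
# Keep a pristine snapshot of an edited config file and detect clobbering later.
backup_config() {
  cp -- "$1" "$1.local-backup"
}

config_changed() {
  # succeeds (exit 0) only if the file now differs from its snapshot
  ! diff -u -- "$1.local-backup" "$1" >/dev/null
}
```

Usage would be backup_config /etc/ssh/sshd_config right after editing, then config_changed /etc/ssh/sshd_config after a package upgrade to see whether local edits survived.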
Persisting sshd_config settings
1,640,274,829,000
I'm running Ubuntu 18.04 on an MSI GE63 Stealth 8RE, with an NVIDIA GTX 1060. There's a good amount of screen tearing when watching videos, and I found several sources online telling me that creating a file in /etc/modprobe.d/ with options nvidia_drm modeset=1 would resolve the issue. Lo and behold, it did! No more screen tearing! It fixed the Prime Synchronization issues. However, for some reason, I was no longer able to connect to my HDMI monitor. The output of xrandr --query is as follows: Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767 eDP-1-1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 194mm 1920x1080 60.02*+ 60.01 59.97 59.96 59.93 1680x1050 59.95 59.88 1600x1024 60.17 1400x1050 59.98 1600x900 59.99 59.94 59.95 59.82 1280x1024 60.02 1440x900 59.89 1400x900 59.96 59.88 1280x960 60.00 1440x810 60.00 59.97 1368x768 59.88 59.85 1360x768 59.80 59.96 1280x800 59.99 59.97 59.81 59.91 1152x864 60.00 1280x720 60.00 59.99 59.86 59.74 1024x768 60.04 60.00 960x720 60.00 928x696 60.05 896x672 60.01 1024x576 59.95 59.96 59.90 59.82 960x600 59.93 60.00 960x540 59.96 59.99 59.63 59.82 800x600 60.00 60.32 56.25 840x525 60.01 59.88 864x486 59.92 59.57 800x512 60.17 700x525 59.98 800x450 59.95 59.82 640x512 60.02 720x450 59.89 700x450 59.96 59.88 640x480 60.00 59.94 720x405 59.51 58.99 684x384 59.88 59.85 680x384 59.80 59.96 640x400 59.88 59.98 576x432 60.06 640x360 59.86 59.83 59.84 59.32 512x384 60.00 512x288 60.00 59.92 480x270 59.63 59.82 400x300 60.32 56.34 432x243 59.92 59.57 320x240 60.05 360x202 59.51 59.13 320x180 59.84 59.32 DP-1-1 disconnected (normal left inverted right x axis y axis) HDMI-1-1 disconnected (normal left inverted right x axis y axis) I'd like to not have screen tearing, but I'd also like to be able to use my HDMI port. Does anyone have a suggestion as to what I can do to resolve this issue?
I fixed the issue!! I switched from using GDM3 to LightDM, rebooted, and I no longer have the issue of not being able to connect to external monitors. I tested on both DisplayPort and HDMI external monitors. It also happened to fix a problem I'd been having with external monitors not being recognized as viable audio sinks, so now I also get audio out of my external monitors, with no screen tearing :) To switch from GDM3 (which is the default display manager since Ubuntu 17.10) to LightDM, I just ran sudo apt install lightdm because it wasn't already installed, and it prompted me with the option to choose which display manager to have as my default. If you already have it installed, running sudo dpkg-reconfigure gdm3 will show you the same prompt. I hope this helps anyone else who runs into this issue :)
HDMI not showing in xrandr after nvidia modeset=1
1,640,274,829,000
I have created /etc/X11/xorg.conf. Section "ServerFlags" Option "NoTrapSignals" "true" EndSection It successfully affects my GNOME session if I start a GNOME session which uses X and not Wayland. I have checked this by killing the X server with SIGABRT, and verifying that it does not try to print its own backtrace by catching the signal. However the config file doesn't have the effect I really wanted, which is to achieve the same behaviour for the Xwayland instance, which GNOME starts when I start a normal GNOME session with Wayland. I can't even find the log messages from Xwayland, to see if it mentions anything about where it reads configuration from! I notice Xorg has a man page, but Xwayland does not. None of the options Xwayland is running with (-rootless -terminate -core -listen 4 -listen 5 -displayfd 6) are documented in man Xorg, though to be fair GNOME also passes -displayfd to Xorg when running a native X session. Does anyone know how to do this? Environment Fedora 27 GNOME gnome-session-3.26.1-1.fc27.x86_64 xorg-x11-server-Xwayland-1.19.6-5.fc27.x86_64 Context I have an annoying XWayland crash. I'm having difficulty understanding it from the core dump my system saves. I desperately want to disable the built-in X backtrace generator. It's just getting in the way, the backtrace generator itself is vulnerable to crashes, and most importantly by catching the error signal, I believe it stops Linux from logging the exact cause of the SIGBUS error in the kernel log. I say this is the correct value for NoTrapSignals, because it's an inherently fragile feature, and AFAICT it's pointless in an unprivileged Xwayland server. It's not like the bad old days of user mode setting, where the kernel couldn't reset the display to text mode, so you desperately hoped the X server would still be able to do so if it crashed.
I say this is the correct value for NoTrapSignals, because it's an inherently fragile feature, and AFAICT it's pointless in an unprivileged Xwayland server. It's not like the bad old days of user mode setting, where the kernel couldn't reset the display to text mode, so you desperately hoped the X server would still be able to do so if it crashed. If that's right, the answer is that this is a bug in Xwayland, and Xwayland should be fixed to behave correctly without needing some config file. Comparing attempts to run Xorg and Xwayland under strace, suggests that Xwayland does not look for any configuration file, only XKB data files. Both /usr/libexec/Xorg and /usr/bin/Xwayland will print usage advice, if you pass --help as an option. Xorg includes a section "Device Dependent Usage" with options to set the logfile or config file. Xwayland mentions none of these. So Xwayland does not appear to be configurable like Xorg is. Technically Xwayland seems to get run with -core "generate core dump on fatal error". Xorg does not, though it claims to support the same option. From the evidence so far it feels like this is irrelevant though, particularly since NoTrapSignals does not affect whether or not a core dump is generated. Option "NoTrapSignals" "boolean" This prevents the Xorg server from trapping a range of unexpected fatal signals and exiting cleanly. Instead, the Xorg server will die and drop core where the fault occurred. The default behaviour is for the Xorg server to exit cleanly, but still drop a core file. In general you never want to use this option unless you are debugging an Xorg server problem and know how to deal with the consequences.
How can I configure Xwayland (to set NoTrapSignals to the correct value)
1,640,274,829,000
I have the following virtual host in /etc/apache2/vhosts.d/ip-based_vhosts.conf:

<VirtualHost test.local:80>
    ServerAdmin [email protected]
    ServerName test.local
    DocumentRoot /home/web/test.net/html
    ErrorLog /var/log/apache2/test-error.log
    CustomLog /var/log/apache2/test-access.log combined
    HostnameLookups Off
    UseCanonicalName Off
    ServerSignature On
    <Directory "/home/web/test.net/html">
        Options Indexes FollowSymLinks
        AllowOverride All
        <IfModule !mod_access_compat.c>
            Require all granted
        </IfModule>
        <IfModule mod_access_compat.c>
            Order allow,deny
            Allow from all
        </IfModule>
        DirectoryIndex index.php
    </Directory>
    <IfModule proxy_fcgi_module>
        ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/home/web/test.net/html/$1
    </IfModule>
</VirtualHost>

And in /home/web/test.net/html I have:

.htaccess

<IfModule mod_rewrite.c>
    RewriteEngine on
    RewriteRule ^(.*)$ index.php [QSA,L]
</IfModule>

index.php

<?php echo $_SERVER['REQUEST_URI']; ?>

When I visit http://test.local I get correctly "/" (without quotes). But when I visit anything else, e.g. http://test.local/abc I get a 404 page instead of "/abc". How can I solve this to work properly?
You should check:

- whether the .htaccess file has proper permissions
- whether mod_rewrite is enabled
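For those two checks, a quick way on Debian/Ubuntu-style layouts (the commands assume the usual apachectl/a2enmod tooling is present; adjust paths and the reload command to your setup):

```
apachectl -M | grep rewrite               # is mod_rewrite actually loaded?
sudo a2enmod rewrite                      # enable it if it is not...
sudo systemctl reload apache2             # ...and reload the server
ls -l /home/web/test.net/html/.htaccess   # readable by the Apache user?
```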
.htaccess rewrite not working?
1,640,274,829,000
Why is one of the Unix paradigms to save both the configuration name (aka attribute name) and the configuration value in configuration files? An alternative is to save the attribute name in the file name, and only the configuration value inside the file, following the KISS (keep it simple and stupid) principle. Sorting configuration files can still be done with folders. As far as I know, something similar is already implemented with some files inside /proc on linux, although I have never seen it anywhere else. As far as I can remember, Unix philosophy says: "Everything is a file". Why not here?
A couple of thoughts here on why configuration files are not exploded, option by option, into a format such as:

config
    default
        key
        key
        ...
        key
    other
        key
        key
        ...

where the first sub directory is a config type and each file inside ('key's here) are all separate config options, each with a value which is the contents of the file. All of these points are just things I have picked up along the way, and there are certainly loads of exceptions.

By and large, most UNIX / Linux utilities operate on a single file at a time, line by line, and not multiple files. Why? Lots of reasons, but this seems to be the most simple way to handle input. Most files when viewed from a programming interface are just a series of lines. Lines (some text followed by a \n, ASCII line feed) are really the most basic unit of data in most applications, not files.

ASIDE: Consider that your terminal was designed to be printed on paper, line by line as the user worked. Honestly not much has changed in the modern tty as it is still treated like a serial interface.

Consider a situation where there are lots and lots of configuration options for a complex program like a socket server or other daemon. For the program to have to open hundreds of files to search for possible deviations from defaults incurs two penalties, an open file descriptor limit (easy to get around) and a couple of syscalls per file. The syscalls can add up because a program has to ask the kernel for assistance for both reading and opening files.

Updating and editing a file is generally much more simple to do either with traditional editors (of which ed is the clear winner :D ), or utilities like sed and awk. To use traditional editing tools on a host of file-based config options would really be inconvenient. Also searching over lots of options would become an exercise for a user's shell and not really possible for an editor.
If most config options were just simple values, echo and some shell IO redirection would be the editor of choice (which if I understand your preferences, might not be that bad of a thing).

Comments. Config files can be more comment than option, and I really like comments. If config files were exchanged for config directories, folks had better get used to making README files, and then all of the sudden we have a nasty line-based file in there ;)

Overall I would say that file based configuration is just in the bones of UNIX / Linux systems. Could it be done a different way? Sure. Is the current paradigm the best way? It might be for UNIX / Linux systems, but not for another environment where directories replace files as the point of interface with input.
configuration: why not conf option = file name and conf value = file content [closed]
1,640,274,829,000
I have 2x stunnels linux based, 1 server, 1 client. What I am trying to do is to use a stunnel client and with verify 3 it authenticates the user based on the certificate. Here are the config files of each:

Client:

cert = /stunnel/client_Access_stunnel.pem
key = /stunnel/client_Access_stunnel.pem
CAfile = /stunnel/client_Access_stunnel.pem
CApath = /stunnel/cacerts/
flips=no
pid = /var/run/stunnel-tcap.pid
; Socket parameters tuning
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
socket = l:SO_KEEPALIVE=1
socket = r:SO_KEEPALIVE=1
output = /stunnel/stunnel.log
client = yes
;verify = 3
debug = 5
[tcap]
accept = 0.0.0.0:3701
connect = 192.168.1.4:3700

Server:

pid = /var/run/stunnel/server.pid
cert = /opt/quasar/cert/certs/stunnels/server.pem
key = /opt/quasar/cert/certs/stunnels/server.pem
CApath = /opt/certs/stunnels/cacerts/
; Socket parameters tuning
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
socket = l:SO_KEEPALIVE=1
socket = r:SO_KEEPALIVE=1
; Security level
verify = 2
; Uncomment for troubleshooting purposes
debug = 7
; Log file path
output = /opt/stunnels/stunnel.log
[stunnel1]
accept = 0.0.0.0:3700
connect = 127.0.0.1:3701

The error is:

Client:

2016.11.16 12:55:10 LOG7[77]: Remote descriptor (FD=11) initialized
2016.11.16 12:55:10 LOG6[77]: SNI: sending servername: 192.168.104.74
2016.11.16 12:55:10 LOG7[77]: SSL state (connect): before/connect initialization
2016.11.16 12:55:10 LOG7[77]: SSL state (connect): SSLv2/v3 write client hello A
2016.11.16 12:55:10 LOG6[78]: Certificate verification disabled
2016.11.16 12:55:10 LOG6[78]: Certificate verification disabled
2016.11.16 12:55:10 LOG6[78]: Certificate verification disabled
2016.11.16 12:55:10 LOG6[77]: Certificate verification disabled
2016.11.16 12:55:10 LOG6[77]: Certificate verification disabled
2016.11.16 12:55:10 LOG6[77]: Certificate verification disabled
2016.11.16 12:55:10 LOG7[77]: SSL alert (read): fatal: unknown CA
2016.11.16 12:55:10 LOG3[77]: SSL_connect: 14094418: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca
2016.11.16 12:55:10 LOG5[77]: Connection reset: 0 byte(s) sent to SSL, 0 byte(s) sent to socket
2016.11.16 12:55:10 LOG7[77]: Deallocating application specific data for addr index

Server:

2016.11.16 11:55:17 LOG7[36384:140097622492928]: SSL state (accept): before/accept initialization
2016.11.16 11:55:17 LOG7[36384:140097622492928]: SSL state (accept): SSLv3 read client hello A
2016.11.16 11:55:17 LOG7[36384:140097622492928]: SSL state (accept): SSLv3 write server hello A
2016.11.16 11:55:17 LOG7[36384:140097622492928]: SSL state (accept): SSLv3 write certificate A
2016.11.16 11:55:17 LOG7[36384:140097622492928]: SSL state (accept): SSLv3 write certificate request A
2016.11.16 11:55:17 LOG7[36384:140097622492928]: SSL state (accept): SSLv3 flush data
2016.11.16 11:55:17 LOG4[36384:140097622492928]: VERIFY ERROR: depth=0, error=unable to get local issuer certificate: /C=UK/ST=London/L=London/O=org/OU=OP/CN=client/[email protected]
2016.11.16 11:55:17 LOG7[36384:140097622492928]: SSL alert (write): fatal: unknown CA
2016.11.16 11:55:17 LOG3[36384:140097622492928]: SSL_accept: 140890B2: error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned
2016.11.16 11:55:17 LOG5[36384:140097622492928]: Connection reset: 0 bytes sent to SSL, 0 bytes sent to socket

Please ignore the time-stamps; same error, different times. I have added the CA certificate to the client_Access_stunnel.pem, unchanged. I have added all CA's certs into CApath. The cert was signed by a locally managed xca.
CApath is used with the verifyChain or verifyPeer options, I don't see either of those options set anywhere. Also note "the certificates in this directory should be named XXXXXXXX.0 where XXXXXXXX is the hash value of the DER encoded subject of the cert." (taken from stunnel manual) What happens when you test the certificate with the following: openssl verify -CApath /opt/certs/stunnels/cacerts/ server-certificate-file
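To illustrate the HASH.0 naming, here is a self-contained sketch with a throwaway self-signed CA in a temporary directory (paths are examples; on a real system you would create the link inside your actual CApath instead):

```shell
# Create a throwaway self-signed CA certificate.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 1 2>/dev/null

# Compute the subject hash and create the HASH.0 name that
# stunnel/OpenSSL look up inside CApath.
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -s "$tmp/ca.pem" "$tmp/$hash.0"

# Verification now succeeds against the directory.
openssl verify -CApath "$tmp" "$tmp/ca.pem"
```

The c_rehash helper shipped with OpenSSL automates the same renaming for every certificate in a directory.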
stunnel No certificate returned unknown CA
1,640,274,829,000
I would like to monitor every syscall being called on my FreeBSD using auditd. I know it is possible on Linux but I cannot find any information on how I should configure FreeBSD. Is it even possible to monitor every system call in FreeBSD?

Details

My /etc/security/audit_control looks like this at the moment:

#
# $FreeBSD: releng/10.3/contrib/openbsm/etc/audit_control 293161 2016-01-04 16:32:21Z brueffer $
#
dir:/var/audit
dist:off
flags:lo,aa
minfree:5
naflags:lo,aa
policy:cnt,argv
filesz:2M
expire-after:10M

Flags are set to audit everything and the policy is set to record command line to execve(2) (see audit_control(5)).
It looks like there was a typo in my /etc/security/audit_control:

#
# $FreeBSD: releng/10.3/contrib/openbsm/etc/audit_control 293161 2016-01-04 16:32:21Z brueffer $
#
dir:/var/audit
dist:off
flags:all
minfree:5
naflags:all
policy:cnt,argv,arge,seq,
filesz:2M
expire-after:10M

This configuration produces an insane amount of audit trails. In Linux Audit, the syscall arguments are present in the a0, a1, a2 and a3 fields explicitly, while in the OpenBSM format they are stored in the argument tokens (see audit.log(5)). For example:

header,108,11,close(2),0,Mon Aug 15 01:47:53 2016, + 865 msec
argument,1,0x6,fd
attribute,644,root,wheel,88,3148396,6394391
subject,-1,root,wheel,root,wheel,1721,0,0,0.0.0.0
return,success,0
trailer,108
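For reading the resulting trails, the stock OpenBSM tools apply (a sketch; /var/audit/current is the usual symlink to the active trail, and see auditreduce(1) for the full filter syntax):

```
praudit /var/audit/current | less                       # render BSM records as text
auditreduce -m AUE_EXECVE /var/audit/current | praudit  # only execve(2) records
```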
How to monitor syscalls being called by a user on FreeBSD using auditing?
1,427,075,345,000
I want to configure two screens side by side and want the configuration to persist a system restart. (I'm using xfce on Xubuntu 12.) I've inspected old questions and the answers mentioned arandr and xrandr, and so I did create the (working) shell script that calls xrandr with appropriate options and arguments to fit my needs. Now I can place that script in some profile so that it will become active with every login. My question is: is it possible to configure twin-screens in some system config file so that I need not have an xrandr based script executed each time? As far as my investigation goes, the config file could be /etc/X11/xorg.conf, and the file contains sensible information for my twin-screen setup. But that configuration seems to be ignored.
Here's a solution that solved the issue for me (for Xubuntu 12): In directory /etc/X11/Xsession.d/ create a file 45-custom_xrandr-settings (with Xubuntu 13 its name would have to be 45x11-custom_xrandr-settings). The content of the file is (for my case; adjust the definitions as necessary):

# The IDs of the screens
INTERNAL_OUTPUT="DVI-1"
EXTERNAL_OUTPUT="DVI-0"
# EXTERNAL_LOCATION, which can be one of: left, right, above, below
EXTERNAL_LOCATION="left"

case "$EXTERNAL_LOCATION" in
    left|LEFT) EXTERNAL_LOCATION="--left-of $INTERNAL_OUTPUT" ;;
    right|RIGHT) EXTERNAL_LOCATION="--right-of $INTERNAL_OUTPUT" ;;
    top|TOP|above|ABOVE) EXTERNAL_LOCATION="--above $INTERNAL_OUTPUT" ;;
    bottom|BOTTOM|below|BELOW) EXTERNAL_LOCATION="--below $INTERNAL_OUTPUT" ;;
    *) EXTERNAL_LOCATION="--left-of $INTERNAL_OUTPUT" ;;
esac

xrandr | grep $EXTERNAL_OUTPUT | grep " connected "
if [ $? -eq 0 ]; then
    xrandr --output $INTERNAL_OUTPUT --auto --output $EXTERNAL_OUTPUT --auto $EXTERNAL_LOCATION
else
    xrandr --output $INTERNAL_OUTPUT --auto --output $EXTERNAL_OUTPUT --off
fi

Installed in the above specified directory this configuration file will then be executed automatically when the X session is started.
Persistent configuration of two screens
1,427,075,345,000
How can I update my Buildroot without losing my configuration, packages, etc.? And how can I update the Linux kernel that is configured? Is it just a matter of changing the URL of the git repository in menuconfig? If someone helps me I will be grateful.
Yes, you can update your Buildroot and keep your .config. Buildroot has a mechanism for handling legacy configurations, which will warn you if certain options have disappeared or been renamed. You can also keep your packages, even though some changes might be needed as the package infrastructure evolves from time to time. However, we generally try to also have some logic to warn the user when the package uses some old/deprecated mechanisms. Regarding your packages, I would however recommend to: 1/ submit to the official Buildroot all your packages for open-source components or generally publicly available software components, and 2/ use the BR2_EXTERNAL mechanism to separate your own private packages from the core of Buildroot. Regarding the Linux kernel, it's entirely up to you in the Buildroot configuration to define which version you want to build. It can be a stable version downloaded as a tarball from kernel.org, a custom tarball location, or a custom Git tree.
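As a sketch of the BR2_EXTERNAL workflow mentioned above (the tree path is an example): you pass the variable to make once, Buildroot remembers it for subsequent builds.

```
make BR2_EXTERNAL=/home/me/br2-external menuconfig   # your private packages show up in the menus
make
```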
Doubts about the Buildroot configuration
1,427,075,345,000
I can't find a configuration file for memcached (1.4.21-1) on archlinux. I have looked in /etc/ and /etc/conf.d/ . Is there a config file? And where can I find it?
memcached hasn't had a configuration file on Arch since May 2013.
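Since that change, memcached's options live on the daemon's command line instead. With systemd (the assumption here; the flags and binary path are only examples - see memcached -h), you would override the unit:

```
# Opens an editor for a drop-in override:
sudo systemctl edit memcached

# Drop-in contents, e.g.:
[Service]
ExecStart=
ExecStart=/usr/bin/memcached -m 128 -l 127.0.0.1
```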
Where is the memcached configuration file in archlinux?
1,427,075,345,000
While I have asked quite a few questions about how to run subtitles in movies, this time though it's the opposite, how do I tell mpv not to load subtitle file while playing a media file. The media file is structured something like this - Format : Matroska Format version : Version 4 / Version 2 File size : 3.67 GiB Duration : 2 h 40 min Overall bit rate : 3 270 kb/s Movie name : Encoded date : UTC 2016-02-29 16:33:21 Writing application : mkvmerge v8.9.0 ('Father Daughter') 64bit Writing library : libebml v1.3.3 + libmatroska v1.4.4 Video ID : 1 Format : AVC Format/Info : Advanced Video Codec Format profile : [email protected] Format settings : CABAC / 5 Ref Frames Format settings, CABAC : Yes Format settings, ReFrames : 5 frames Codec ID : V_MPEG4/ISO/AVC Duration : 2 h 40 min Bit rate : 2 500 kb/s Width : 1 920 pixels Height : 816 pixels Display aspect ratio : 2.35:1 Frame rate mode : Constant Frame rate : 24.000 FPS Color space : YUV Chroma subsampling : 4:2:0 Bit depth : 8 bits Scan type : Progressive Bits/(Pixel*Frame) : 0.066 Stream size : 2.80 GiB (76%) Title : Writing library : x264 core 146 r2538 121396c Encoding settings : cabac=1 / ref=5 / deblock=1:-1:-1 / analyse=0x3:0x113 / me=umh / subme=10 / psy=1 / psy_rd=1.00:0.07 / mixed_ref=1 / me_range=24 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=0 / chroma_qp_offset=-3 / threads=12 / lookahead_threads=2 / sliced_threads=0 / nr=1000 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=8 / b_pyramid=2 / b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=2 / keyint=240 / keyint_min=24 / scenecut=40 / intra_refresh=0 / rc_lookahead=60 / rc=2pass / mbtree=1 / bitrate=2500 / ratetol=1.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / cplxblur=20.0 / qblur=0.5 / vbv_maxrate=62500 / vbv_bufsize=78125 / nal_hrd=none / filler=0 / ip_ratio=1.40 / aq=1:1.00 Language : Hindi Default : Yes Forced : No Color range : Limited Color primaries : BT.709 
Transfer characteristics : BT.709 Matrix coefficients : BT.709 Audio ID : 2 Format : DTS Format/Info : Digital Theater Systems Codec ID : A_DTS Duration : 2 h 40 min Bit rate mode : Constant Bit rate : 768 kb/s Channel(s) : 6 channels Channel positions : Front: L C R, Side: L R, LFE Sampling rate : 48.0 kHz Frame rate : 93.750 FPS (512 SPF) Bit depth : 24 bits Compression mode : Lossy Stream size : 882 MiB (23%) Title : Language : Hindi Default : Yes Forced : No Text ID : 3 Format : UTF-8 Codec ID : S_TEXT/UTF8 Codec ID/Info : UTF-8 Plain Text Duration : 2 h 30 min Bit rate : 42 b/s Count of elements : 1825 Stream size : 47.1 KiB (0%) Title : Language : English Default : Yes Forced : No Now I want to be able to play the media file but NOT to play the subtitles at all.
From the mpv(1) man page:

--sid=<ID|auto|no>
    Display the subtitle stream specified by <ID>. auto selects the default, no disables subtitles.
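So either form below works - the command-line switch for a one-off run, or the equivalent line in mpv's config file to make it permanent (the filename is an example):

```
# One-off:
mpv --sid=no movie.mkv

# Permanent, in ~/.config/mpv/mpv.conf:
sid=no
```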
how to tell mpv to NOT play subtitles which may be embedded in a media file.
1,427,075,345,000
Is there any keyboard shortcut to switch to either or: Switch to desktop on the right. i.e. [ctrl + ->] Switch to desktop on the left. i.e. [ctrl + <-] Switch to 3rd desktop. i.e. [ctrl + 3] I've been trying to find it but the topic is so polluted with so many other searches related that I have not been able to find anything
Yes! They are:

- Win/Super + Right Arrow Key
- Win/Super + Left Arrow Key
- Win/Super + 3

If you press F1 while on the Desktop, it shows a little manual where you can find more details. In the keyboard section of the settings you will find a list and an option to configure new shortcuts.
Deepin: Switch to next desktop
1,427,075,345,000
I have access to a machine running linux. Apache2 is installed on it. I need to add 3 virtual hosts. In /etc/apache2/sites-available, there is a set of about 15 files displayed with ls -la. I was told Apache2 would read these files one-by-one in alphabetic order to create virtual hosts, is this correct? How does Linux give preference in case of conflict? Last read file wins? First read file wins? If I want to configure my 3 virtual hosts (which are not in conflict with existing virtual hosts), is it just a matter of creating an extra file with them in this directory? P.S.: I have mixed-up available with enabled. One should read /etc/apache2/enabled. Sorry.
You may want to read the Apache2 documentation.

I was told Apache2 would read these files one-by-one in alphabetic order to create virtual hosts, is this correct?

Virtual hosts are not read in /etc/apache2/sites-available but in /etc/apache2/sites-enabled. That said, apache2 uses the libc, and according to gnu.org, The order in which files appear in a directory tends to be fairly random. edit: You may want to read the answer from @nwildner too, which is more accurate than mine.

How does Linux give preference in case of conflict? Last read file wins? First read file wins?

Really not sure, but if there is a conflict, generally it displays a warning message and your httpd service won't (re)load.

If I want to configure my 3 virtual hosts (which are not in conflict with existing virtual hosts), is it just a matter of creating an extra file with them in this directory?

Yes, and once you've added your extra files, run these commands:

sudo a2ensite my_site1.conf my_site2.conf my_site3.conf
sudo service apache2 reload

It will search in /etc/apache2/sites-available for my_site1.conf, my_site2.conf and my_site3.conf.
Configuring Apache virtual hosts in extra file
1,427,075,345,000
I am trying to create Apache virtualhost that is closed for all IP addresses with exception of one IP address and two URLs that should be publicly accessible. <IfModule mod_ssl.c> <VirtualHost *:443> ServerName example.com ServerAdmin [email protected] DocumentRoot "/var/www/example.com/app/webroot" <Directory "/var/www/example.com/app/webroot"> Options FollowSymLinks Includes ExecCGI AllowOverride All ErrorDocument 403 "https://example.com/403.php" DirectoryIndex index.php </Directory> <Location "/foo/bar/"> Require all granted </Location> <Location "/.well-known/"> Require all granted </Location> <Location "/"> Require ip 1.2.3.4 </Location> SSLProxyEngine on ProxyPreserveHost On ProxyRequests Off ProxyPass /api/ https://host.docker.internal:8443/api/ connectiontimeout=5 timeout=300 ProxyPassReverse /api/ https://host.docker.internal:8443/api/ SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem Include /etc/letsencrypt/options-ssl-apache.conf </VirtualHost> </IfModule> I've tried to change order of Location tags, but all requests are redirected to ErrorDocument directive value. URL /foo/bar/ is rewrited by .htaccess located in DocumentRoot (for testing purposes I tried to remove .htaccess with no effect). Apache version: Server version: Apache/2.4.59 (Debian) Server built: 2024-04-05T12:02:26 Apache runs in a Docker container, but it probably has no meaning for the problem. Q: What is wrong with the configuration?
There were two problems in my configuration:

1. As @sotirov mentioned, <Location> tags are processed in order of appearance - the correct order is: first restrict all access to certain IP addresses, then allow access to the specified locations in the following <Location> tags.

2. Rewritten URLs - my URL /foo/bar/ is rewritten in .htaccess to index.php, and in this situation the URL is not matched by the <Location "/foo/bar/"> section. When I use a real physical file, the <Location "/foo.bar"> tag works as expected.
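Put together, a sketch of the order that follows from the first point (later <Location> sections merge over earlier ones, per Apache's section-merging rules):

```
# Broad restriction first...
<Location "/">
    Require ip 1.2.3.4
</Location>

# ...then the public exceptions, which win because they come later:
<Location "/.well-known/">
    Require all granted
</Location>
<Location "/foo/bar/">
    Require all granted
</Location>
```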
Not working `<Location/>` tag in Apache Virtual host
1,427,075,345,000
I saw the following in /etc/shells - % cat /etc/shells # /etc/shells: valid login shells /bin/sh /bin/dash /bin/bash /bin/rbash /bin/zsh /usr/bin/zsh I want to know if there is a difference betweem /usr/bin/zsh and /bin/zsh ? I did chose /usr/bin/zsh as it has to be interactive login and CTE skills.
One of them is probably a link to the other... Traditionally, shells (like bash, csh and zsh) are located in /bin - because a shell is needed even in single user mode or other times when /usr may be unmounted (/usr is often on a separate partition and may even be mounted through the network - thus not readily available in single user mode). On the other hand, additional shells (other than the default one/ones) aren't strictly needed in single user mode (unless root happens to use one of them), so it's natural to put such shells in /usr/bin instead of /bin. When you do place it in /usr/bin though, it's common to provide a symbolic link to it from /bin, as users tend to expect their shells to be directly under /bin (not that a link would help if /usr wasn't mounted). So when compiling the list of available shells to choose from (/etc/shells), both the real executable and the link have been listed. You can use ls -l to check which is the link and which is the executable.

+++

Both /bin/zsh and /usr/bin/zsh are explicitly added together (same if-fi block) in the postinst (post-install) script for the zsh package, using the add-shell command:

From zsh_5.1.1-1ubuntu2_amd64.deb:/DEBIAN/postinst

#!/bin/sh
...
case "$1" in
    (configure)
#       if test -z "$2"; then
            add-shell /bin/zsh
            add-shell /usr/bin/zsh
#       fi
...
what's the difference between /bin/zsh and /usr/bin/zsh? [duplicate]
1,427,075,345,000
In one of my config files [~/.tmux.conf] there are these lines:

set -g default-terminal "screen-256color"
set -s escape-time 10

I think it sets global variables of my system, but I am not sure. I searched for it on the web, and even the man pages don't give me information about the -g option. When in bash I did help set, there is no information about the -g and -s options either. How do I find out what these commands do by reading man pages?
set is an alias of tmux set-option. From the tmux man page:

Commands which set options are as follows:

set-option [-agsuw] [-t target-session | target-window] option value
    (alias: set)
    Set a window option with -w (equivalent to the set-window-option command), a server option with -s, otherwise a session option. If -g is specified, the global session or window option is set. With -a, and if the option expects a string, value is appended to the existing setting. The -u flag unsets an option, so a session inherits the option from the global options. It is not possible to unset a global option.
what does set -g mean in config files?
1,427,075,345,000
I had a function inside .zshrc that I removed. Now, when I try to source it, it indeed sources it, but doesn't remove the function that once was inside .zshrc from memory. Is there a way to remove the function (now I believe in memory, in zsh namespace or something like that) without restarting my machine?
In zsh, you can remove a function with unhash -f functionname or unfunction functionname. That doesn't automatically clear functions you've removed from a given startup file, though, because of course the shell doesn't remember where it got a function from in the first place, nor does it attribute any special meaning to re-sourcing the same file. So you'll have to know what you want to forget. Since you mention .bashrc in the subject: the bash equivalent is unset -f functionname
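A throwaway demonstration of the bash variant (the function name is made up; in zsh the removal step would be unfunction greet):

```shell
# Define a function, call it, remove it, confirm it is gone.
greet() { echo "hello"; }
greet                                     # prints: hello
unset -f greet
type greet >/dev/null 2>&1 || echo "greet is gone"
```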
Is it possible to source again .bashrc and .zshrc AND remove functions once inside them without restarting?
1,427,075,345,000
I want to make a bash file that reads a parameter from a config file. In Debian based linux it works fine by:

my.config:

MYVARIABLE=12345

my.sh:

#!/bin/bash
source my.config
echo $MYVARIABLE

But I am not able to achieve this in FreeBSD. Do you have any idea?
You do not supply any errors or what you have done. This then leaves us guessing:

Bash is not installed by default. You might need to add it yourself:

pkg install bash

As with Debian you need to make the script executable:

chmod +x my.sh

The first line of your script should point to the location of bash. It is not located in /bin by default. You can make it work using a link but I would prefer to change the first line of your script to:

#!/usr/bin/env bash
source my.config
echo $MYVARIABLE

The above will make your script OS agnostic. Or you can use the explicit path:

#!/usr/local/bin/bash
source my.config
echo $MYVARIABLE

Why? See Why is it better to use "#!/usr/bin/env NAME" instead of "#!/path/to/NAME" as my shebang?
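For reference, the whole round trip can be checked in a scratch directory - the demo below (temporary paths) behaves the same on FreeBSD once bash is installed, because env finds bash wherever the package put it:

```shell
# Recreate the two files and run the script.
tmp=$(mktemp -d) && cd "$tmp"
printf 'MYVARIABLE=12345\n' > my.config
printf '#!/usr/bin/env bash\nsource my.config\necho "$MYVARIABLE"\n' > my.sh
chmod +x my.sh
./my.sh    # prints: 12345
```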
Getting a parameter from a config file in FreeBSD
1,427,075,345,000
I installed org.mozilla.firefox from Flathub, and upon investigating with xeyes, I found that it seems to be running via XWayland as the eyes are able to trace my cursor when hovering over Firefox. I'm using GNOME via Wayland, and I would like to run Firefox as a native Wayland client instead of running it via XWayland. How might I do this with the org.mozilla.firefox package from Flathub? I am not interested in using my distribution's package instead.
You can do this by setting the environment variable MOZ_ENABLE_WAYLAND to 1 and allowing org.mozilla.firefox to access the Wayland socket via flatpak override. Something like this:

$ flatpak override --env=MOZ_ENABLE_WAYLAND=1 --socket=wayland org.mozilla.firefox --user

You can omit --user if you want to do this for all users. If you prefer to do this graphically, you can use Flatseal: https://flathub.org/apps/details/com.github.tchx84.Flatseal
How do I run org.mozilla.firefox from Flathub as a native Wayland client?
1,427,075,345,000
Which programs / services are parsing contents of /etc/securetty configuration file?
The securetty manpage lists two users of /etc/securetty: some versions of login, and pam_securetty. The intention in both cases is to limit the terminals on which root can log in.
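For illustration, pam_securetty is typically wired up near the top of the login service's PAM stack (a Debian-style /etc/pam.d/login excerpt; exact layout varies by distribution):

```
# Disallow root logins on terminals not listed in /etc/securetty:
auth    requisite   pam_securetty.so
```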
Which services read the /etc/securetty configuration file?
1,427,075,345,000
I want all users of nano to have tabsize 4 instead of the default 8. What is the best way to achieve this? I would prefer a file that overrides /etc/nanorc at the system level so I don't have to maintain separate user nanorc's for this purpose. In the simple case, my override would only need to contain:

set tabsize 4

Here's another way to state my question: Does nano recognize /etc/nanorc.d/ and config files placed therein? If so, what is the required naming and/or content of config files placed there?

What I tried so far was to create /etc/nanorc.d/ and place a file named tabsize.conf in that directory and put only the following contents in the file:

set tabsize 4

My naive attempt did not work, but I am hoping there is a way to use this config.d/ pattern with nano.

I will make my question even more specific. I am using Arch Linux. I have to do these steps when the package has a new nanorc:

mv /etc/nanorc.pacnew /etc/nanorc

Then edit /etc/nanorc, search for tabsize, uncomment the line, change the value from 8 to 4 and save the file. My goal is to only have to do this step:

mv /etc/nanorc.pacnew /etc/nanorc

And to have a file similar to /etc/nanorc.d/tabsize.conf that contains my desired tab size. It's a small savings of time, but multiplied across a number of computers it adds up. This year it seems like I have gotten new /etc/nanorc.pacnew files about six times. It is very inefficient to keep editing tabsize over and over.
So /etc/nanorc.pacnew is the new rc file that came with the new distribution upgrade? How about

sed '/tabsize/ {s/^# *//; s/[0-9]*$/4/}' /etc/nanorc.pacnew > /etc/nanorc

, then? Another possible trick might be to have a symbolic link ~/.nanorc in every user's home dir pointing to a central file with the relevant commands.

On demand:

sed '/tabsize/          # if the line matches "tabsize"
  {s/^# *//;            # remove "#" and trailing spaces from begin-of-line (BOL)
   s/[0-9]*$/4/         # substitute any sequence of digits at EOL by "4"
  }' /etc/nanorc.pacnew # input file
  > /etc/nanorc         # redirection to target file
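The substitution can be sanity-checked against a sample line before touching the real files (throwaway path below):

```shell
# A commented-out default, as it would appear in a shipped nanorc:
printf '# set tabsize 8\n' > /tmp/nanorc.demo
# The rule uncomments the line and swaps the value:
sed '/tabsize/ {s/^# *//; s/[0-9]*$/4/}' /tmp/nanorc.demo
```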
How to override /etc/nanorc systemwide?
1,427,075,345,000
Ubuntu 16.04
Bash version 4.4.0
nginx version: nginx/1.14.0

How can I test the Nginx configuration files in a Bash script? At the moment I use -t when I'm in a shell:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

But I would like to do this in a script.
Use the exit status. From the nginx manpage:

Exit status is 0 on success, or 1 if the command fails.

and from http://www.tldp.org/LDP/abs/html/exit-status.html:

$? reads the exit status of the last command executed.

An example:

[root@d ~]# /usr/local/nginx/sbin/nginx -t;echo $?
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
0
[root@d ~]# echo whatever > /usr/local/nginx/nonsense.conf
[root@d ~]# /usr/local/nginx/sbin/nginx -t -c nonsense.conf;echo $?
nginx: [emerg] unexpected end of file, expecting ";" or "}" in /usr/local/nginx/nonsense.conf:2
nginx: configuration file /usr/local/nginx/nonsense.conf test failed
1

A scripted example:

#!/bin/bash
/usr/local/nginx/sbin/nginx -t 2>/dev/null > /dev/null
if [[ $? == 0 ]]; then
    echo "success"
    # do things on success
else
    echo "fail"
    # do whatever on fail
fi
How can I test if nginx configuration files are valid within a Bash script?
1,427,075,345,000
I have a headless box based on Debian. It is intended to be accessed via its network interface. The network is configured via /etc/network/interfaces. I'm trying to validate if the file is really valid. My idea is to check if the file has any errors, and in that case, fall back to a default file. My question is about the first part of the problem: detecting an error in the interfaces file. I found a lot of questions about using ifup --no-act commands, but what I see in practice is that this method is not robust enough for my case. Take the following example:

root@arm:~# ifup --no-act --interfaces=/etc/network/interfaces eth0
run-parts /etc/network/if-pre-up.d
ip addr add 1s23.123.123.123/255.255.255.192 broadcast + dev eth0 label eth0
ip link set dev eth0 up
ip route add default via 123.123.123.122 dev eth0
run-parts /etc/network/if-up.d

Note the invalid IP address set (extra 's' in first octet). The command gives no error (that I can see, anyway) and if I reboot the machine with the file in this state, I would lose the networking capability. A valid interfaces file gives me a very similar result (except for the broadcast + part):

root@arm:~# ifup --no-act --interfaces=/etc/network/interfaces eth0
run-parts /etc/network/if-pre-up.d
ip addr add 123.123.123.123/255.255.255.192 broadcast 123.123.123.127 dev eth0 label eth0
ip link set dev eth0 up
ip route add default via 123.123.123.122 dev eth0
run-parts /etc/network/if-up.d

This is mainly the problem I'm trying to avoid. My question is how can I detect invalid configuration in the /etc/network/interfaces file? Does the plus sign indicate there's an error? What does it mean? What is the best approach to solve this?
Short answer: Call ifup --no-act with the network config to test and check the return code. Long answer: I've been doing something very similar for an embedded appliance; here's the relevant part of the init script I created for this purpose, which is executed before networking is started, so it is ensured the system has a working ethernet setup. ETH0_CUSTOM=/var/myproject/etc/interfaces.eth0 result=1 if [ -f "${ETH0_CUSTOM}" ]; then ifup -n -i "${ETH0_CUSTOM}" eth0 if [ $? -eq 0 ]; then ln -sf "${ETH0_CUSTOM}" /etc/network/interfaces.eth0 result=0 else echo "Validating ${ETH0_CUSTOM} failed" fi fi if [ ${result} -ne 0 ]; then ln -sf /etc/network/interfaces.eth0.default /etc/network/interfaces.eth0 fi In the original /etc/network/interfaces the valid settings are included with source /etc/network/interfaces.eth0
How to make sure /etc/network/interfaces will bring the interface up?
1,427,075,345,000
I found a lot of ways to deal with the whitespace problem in Git, but all of them require some sort of action on the user side: configure Git to ignore ws changes, setup a pre-commit hook with warning, or just configure the editor to remove them automatically. But is there a way to simply ignore whitespace changes on the server side or even better to strip all whitespace silently? Background: we have a large repository with a number of developers whom we do not want to chase and convince to reconfigure their tools, but we do not want to have any trailing whitespace in our repository. For me the solution is obvious: strip all whitespace once, and ignore any ws changes from that point on. Ideally, also strip ws in all new commits. But how can I do this?
You can't really do this sanely. You can't modify the content of a commit; you can only create new commits. Even if this were possible, it would break git: the commits on the server would be unrelated to the history stored in local developer repositories, and nothing would ever work. You really need to perform this sort of filtering before files are added to the repository, which is why you do it client side in pre-commit hooks, etc. A better solution is to implement an enforcement mechanism on the server: reject changes that don't meet your standards.
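To implement that server-side enforcement, a pre-receive hook can reuse git's own whitespace checker. A minimal sketch (hook path hooks/pre-receive in the bare repository; a production hook must also handle branch creation and deletion, where one of the SHAs is all zeros):

```shell
#!/bin/sh
# Sketch of hooks/pre-receive on the server: reject any push whose diff
# introduces whitespace errors, using git's built-in checker.
while read oldrev newrev refname; do
    # NOTE: a real hook must special-case branch creation/deletion,
    # where $oldrev or $newrev is the all-zeros SHA.
    if ! git diff --check "$oldrev" "$newrev"; then
        echo "rejected $refname: whitespace errors introduced" >&2
        exit 1
    fi
done
```

`git diff --check` exits non-zero whenever the added lines of the diff contain trailing whitespace or space-before-tab problems, so the hook needs no custom parsing.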
Git Server: ignore whitespace changes in new commits
1,427,075,345,000
Both php-fpm and supervisord use a config format that looks like this: [category] settings = value some_other_setting = value Is there a name for this format?
It's called an ini file or initialisation file. The part in square brackets in called a section: ; comment [section] key=value
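Because the format is line-oriented, such files are easy to query with standard tools. A hedged sketch in awk (the file name, section and key are made up for illustration):

```shell
# Hypothetical demo.ini for illustration:
cat > demo.ini <<'EOF'
[db]
host = localhost
[web]
host = 0.0.0.0
EOF
# Print the value of "host" in the [web] section: remember the last-seen
# section header, then match key lines while inside the wanted section.
awk -F ' *= *' '/^\[/ { sec = $0 }
                sec == "[web]" && $1 == "host" { print $2 }' demo.ini
```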
What do you call the supervisord and php-fpm et al. config format?
1,427,075,345,000
I am configuring a Linux kernel before I compile the code. However, after I was thirty minutes into answering the many questions concerning various kernel settings, I accidentally pressed Ctrl+C. I do not want to start over, so is it possible to make the configuration tool resume where I left off?
You probably want to adjust the workflow a little bit, since configuring the Linux kernel from scratch is not for everyone (I'd almost say it's not for anybody). First, you are much better off with menu-based configurators like make nconfig, make menuconfig or the GUI based ones, as they usually allow you to save the configuration whenever you deem that a good idea. Second, unless you are really savvy about kernel things, you want to use some basic configuration as a starting point - either make defconfig or use your distribution's config as a base for your endeavours. BTW, make help will tell you all the build targets (including the configuration ones, of which there are about a dozen and a half) and the applicable options.
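A sketch of that workflow (assuming you are in the kernel source tree and your distribution ships the running kernel's config in /boot):

```shell
# Start from a known-good configuration instead of `make config` from scratch:
cp "/boot/config-$(uname -r)" .config   # or: zcat /proc/config.gz > .config
make olddefconfig                       # silently accept defaults for new options
make menuconfig                         # interactive tweaks; saving writes .config
```

With olddefconfig you are only prompted for nothing at all, and menuconfig lets you save at any point, so an accidental Ctrl+C costs you at most the changes since the last save.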
Resuming the Linux Kernel Configuration
1,427,075,345,000
I just recently got my hands on a Linux (Fedora) VPS and I would like to ask if there are special configurations that I have to be wary of. Do I still configure it like I would configure a normal virtual machine? Are there more things to take care of, or be cautious about, because that machine is available online 24/7 and can fall prey to those (hackers/crackers) seeking a machine to test their knowledge on? I would appreciate your input.
In addition to Tim's suggestion, configure your user account to use ssh keys for authentication, then configure SSH to only accept key based auth (no passwords). Also make sure that root logins are disabled. Here's a summary of the options that do this: ChallengeResponseAuthentication no HostbasedAuthentication no PasswordAuthentication no PermitEmptyPasswords no PermitRootLogin no PubkeyAuthentication yes RSAAuthentication yes RhostsRSAAuthentication no UsePAM yes UsePrivilegeSeparation yes Remember, you must have key based login working before you disable password logins. Otherwise you will be locked out permanently.
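Before applying such a configuration, it is worth validating the file and reloading rather than restarting, so a typo can't lock you out. A sketch (the service name varies: ssh on Debian/Ubuntu, sshd on RHEL-like systems):

```shell
sudo sshd -t -f /etc/ssh/sshd_config   # sanity-check the config; non-zero exit on errors
sudo systemctl reload ssh              # apply; keep your current session open as a fallback
```

Test a fresh key-based login from a second terminal before closing the session you used to make the change.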
Linux VPS Configuration
1,427,075,345,000
I have a home file server on which I have recently reinstalled the OS. I replaced Ubuntu Server 10.04.2 32-bit with 10.04.3 64-bit due to hardware upgrades. I've copied my previous Samba configuration over, recreated the share user, and made sure the permissions for the shared directories, on another disk, were still intact. I have an XP and a Win7 machine. Both can see the file server, but neither can access the shares. If I go to \\Server on a Windows machine, it prompts for a user/pass and appears to accept the connection. If I go to \\Server\Share next, it asks for a user/pass again, and will not authenticate. No network settings have changed on the Windows machines. Is there some other configuration I might be missing for the server? What else could be wrong? Troubleshooting: I found the logs, as suggested. smbd and nmbd are both running. In the logs for the windows machines, I get a lot of lines like this when trying to connect. [2011/11/07 07:23:53, 1] smbd/service.c:676(make_connection_snum) create_connection_server_info failed: NT_STATUS_ACCESS_DENIED This is clearly the problem, but I don't know why it's happening. The user and pass I'm using are good, and it was working before the upgrade. I did find this in log.winbindd-idmap: [2011/11/07 07:14:12, 1] winbindd/idmap.c:321(idmap_init_domain) idmap initialization returned NT_STATUS_UNSUCCESSFUL [2011/11/07 07:23:40, 0] winbindd/idmap.c:201(smb_register_idmap_alloc) idmap_alloc module tdb already registered! [2011/11/07 07:23:40, 0] winbindd/idmap.c:149(smb_register_idmap) Idmap module passdb already registered! [2011/11/07 07:23:40, 0] winbindd/idmap.c:149(smb_register_idmap) Idmap module nss already registered! 
[2011/11/07 07:23:40, 1] winbindd/idmap_tdb.c:214(idmap_tdb_load_ranges) idmap uid missing [2011/11/07 07:23:40, 0] winbindd/idmap_tdb.c:287(idmap_tdb_open_db) Upgrade of IDMAP_VERSION from -1 to 2 is not possible with incomplete configuration [2011/11/07 07:23:40, 1] winbindd/idmap.c:321(idmap_init_domain) idmap initialization returned NT_STATUS_UNSUCCESSFUL log.smbd [2011/11/06 20:01:29, 0] smbd/server.c:1069(main) smbd version 3.4.7 started. Copyright Andrew Tridgell and the Samba Team 1992-2009 [2011/11/06 20:01:29, 0] printing/print_cups.c:103(cups_connect) Unable to connect to CUPS server localhost:631 - Connection refused [2011/11/06 20:01:29, 0] printing/print_cups.c:103(cups_connect) Unable to connect to CUPS server localhost:631 - Connection refused [2011/11/06 20:01:29, 0] smbd/server.c:1115(main) standard input is not a socket, assuming -D option log.nmbd [2011/11/06 13:40:55, 0] nmbd/nmbd.c:854(main) nmbd version 3.4.7 started. Copyright Andrew Tridgell and the Samba Team 1992-2009 smb.conf, most of which is stock [global] workgroup = MyGroup # edited server string = %h server (Samba, Ubuntu) dns proxy = no use sendfile = yes # edited log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 panic action = /usr/share/samba/panic-action %d security = user # edited encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes map to guest = bad user guest account = myshareuser # edited usershare allow guests = yes [printers] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = yes create mask = 0700 [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = no # added [share] path = /mnt/storage/share force user = myshareuser force group = myshareuser read only = No create mask = 0777 directory mask = 0777 guest only = No guest ok = No [backup] path = /mnt/storage/backup force user = myshareuser force group = myshareuser read only = No create mask = 0777 directory mask = 0777 guest only = No guest ok = No
Could be a corrupt passdb.tdb file. If you remove it and restart Samba can you add users?
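A hedged sketch of that procedure (the database path varies by build; on Debian/Ubuntu it is typically under /var/lib/samba, possibly in a private/ subdirectory):

```shell
sudo systemctl stop smbd nmbd          # service names vary by distro/version
sudo mv /var/lib/samba/passdb.tdb /var/lib/samba/passdb.tdb.bad   # keep a backup
sudo systemctl start smbd nmbd
sudo smbpasswd -a myshareuser          # re-add the share user to the fresh database
sudo pdbedit -L                        # list the users Samba now knows about
```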
Cannot connect to Samba shares after reinstall
1,427,075,345,000
Once I tried to make a virtual host, and the problem was that if I make another file in /etc/apache2/sites-available/ that says ... DocumentRoot /var/www/newsite ServerName newsite ... then save it, restart Apache, and append to /etc/hosts like this: 127.0.0.1 newsite I get the page that corresponds to the DocumentRoot from /etc/apache2/sites-available/default. That time I left it unsolved. Now I want to do it once again and I get the same problem, despite the fact that I use a different version of the Linux distro. I feel like I'm doomed.
did you enable the site using the a2ensite command? The /etc/apache2/sites-available directory lists those that you have setup but it needs to be in /etc/apache2/sites-enabled to be picked up when you next reload the apache2 configuration.
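On Debian/Ubuntu the sequence looks like this (sketch; newsite is whatever you named the file under sites-available):

```shell
sudo a2ensite newsite            # symlinks sites-available/newsite into sites-enabled/
sudo apache2ctl configtest       # catch syntax errors before reloading
sudo systemctl reload apache2    # or: sudo service apache2 reload
```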
Problem making a virtual host with Apache
1,427,075,345,000
I upgraded from Fedora 38 to Fedora 40 more or less smoothly. Yet, when new kernels are installed, the grub configuration is not updated. The command grep vmlinuz /boot/grub2/grub.cfg shows linux /boot/vmlinuz-6.8.8-300.fc40.x86_64 root=UUID=0e08d465-d601-478f-be17-a2663626588c ro linux /boot/vmlinuz-6.8.8-100.fc38.x86_64 root=UUID=0e08d465-d601-478f-be17-a2663626588c ro linux /boot/vmlinuz-6.3.12-200.fc38.x86_64 root=UUID=0e08d465-d601-478f-be17-a2663626588c ro but I do not have 6.3.12 installed, and ls /boot/loader/entries gives me vmlinuz-6.8.8-100.fc38.x86_64 vmlinuz-6.8.8-300.fc40.x86_64 vmlinuz-6.8.9-300.fc40.x86_64 agreeing with rpm -q kernel as well: kernel-6.8.8-100.fc38.x86_64 kernel-6.8.8-300.fc40.x86_64 kernel-6.8.9-300.fc40.x86_64 Yes, grub2-mkconfig -o /boot/grub2/grub.cfg restores the config, with the command grep vmlinuz /boot/grub2/grub.cfg now giving linux /boot/vmlinuz-6.8.9-300.fc40.x86_64 root=UUID=0e08d465-d601-478f-be17-a2663626588c ro linux /boot/vmlinuz-6.8.8-300.fc40.x86_64 root=UUID=0e08d465-d601-478f-be17-a2663626588c ro linux /boot/vmlinuz-6.8.8-100.fc38.x86_64 root=UUID=0e08d465-d601-478f-be17-a2663626588c ro but grub2-mkconfig is not supposed to be run each time a kernel is installed. What is going on?
With the /boot/loader/entries in existence and up to date, there should be no mention of individual kernels in /boot/grub2/grub.cfg: instead, the configuration should be invoking the GRUB command blscfg which will cause GRUB to read /boot/loader/entries and use the information within. In other words, grep blscfg /boot/grub2/grub.cfg should return something like: # The blscfg command parses the BootLoaderSpec files stored in /boot/loader/entries and insmod blscfg blscfg If your grub2-mkconfig updates the list of kernels into /boot/grub2/grub.cfg, it means you must have an old version of the script /etc/grub.d/10_linux still active, or you must have added GRUB_ENABLE_BLSCFG=false to /etc/default/grub. However, since RHEL 8 (and corresponding Fedora versions) the default kernel postinstall scripts now assume that blscfg is in use, and so the grub2-mkconfig doesn't need to be invoked after installing a new kernel, unless GRUB_ENABLE_BLSCFG=false is used in /etc/default/grub. In RHEL/Fedora, kernel package post-transaction scripts will invoke /bin/kernel-install add <kernel version> ... on installation and /bin/kernel-install remove <kernel-version> ... on removal. The /bin/kernel-install will in turn invoke any scripts it finds in directories /etc/kernel/install.d/ and /usr/lib/kernel/install.d (with files in the former directory overriding any files with the same names in the latter) with similar add/remove and kernel version arguments. The last script to execute on both installation and removal should be /usr/lib/kernel/install.d/99-grub-mkconfig.install: it runs grub2-mkconfig, but only if /etc/default/grub has GRUB_ENABLE_BLSCFG set to the exact string false. Any deviation will cause the script to assume the bootloader is capable of using /boot/loader/entries, and so the script will exit without doing anything. If all this seems fine, look into your /etc/grub.d/ directory. Do you perhaps have a 10_linux.rpmnew or similar in there? 
If you have such a file, backup your old (possibly customized) 10_linux file and replace it with the 10_linux.rpmnew (or similar) file, then run grub2-mkconfig -o /boot/grub2/grub.cfg one final time. Apparently the /etc/grub.d/10_linux file might come from the grub2-tools package, so a dnf reinstall grub2-tools might be necessary if there is no 10_linux.rpmnew or similar file present.
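A quick sketch to check which of these situations applies on your system:

```shell
grep -n 'blscfg' /boot/grub2/grub.cfg            # a BLS-aware config invokes blscfg
grep -n 'GRUB_ENABLE_BLSCFG' /etc/default/grub   # the exact string 'false' disables BLS
ls /etc/grub.d/*.rpmnew /etc/grub.d/*.rpmsave 2>/dev/null   # leftover packaged scripts?
```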
Why are new kernel versions not appearing in Fedora 40 Grub?
1,427,075,345,000
I uninstalled samba this way (my OS is Debian 11): sudo rm -f /etc/samba/smb.conf sudo apt purge samba sudo apt install samba Now I check for samba's default configuration file: sudo ls /etc/samba/smb.conf ls: cannot access '/etc/samba/smb.conf': No such file or directory
The package which “owns” smb.conf is samba-common; you need to purge that, and re-install samba (since it will be removed when removing samba-common): sudo apt purge samba-common sudo apt install samba
Why can't get the default configuration file after reinstalling samba?
1,427,075,345,000
After overriding some configurations of /usr/share/containers/containers.conf in /etc/containers/containers.conf - e. g. log_size_max = 10485760, what is the official method to apply this new configuration with the least impact on the system? There is enough documentation on how to configure things, but not on how to apply the configuration. It surely works after a reboot of the whole system, but I assume that there is a less impacting way to do that.
Podman runs no daemon (unlike docker/Moby); so, there is no need to reload any daemon. The configuration that is present when podman is executed apply. In other words, as soon as you change something, it applies to all podman runs thereafter, immediately. It cannot apply to currently running pods - podman reads its configuration when it's started! So, you'll have to restart these, if the configuration changes apply to their runtime behavior.
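In practice that looks like this (sketch; the container and unit names are placeholders):

```shell
podman run --rm alpine true       # a *new* container already sees the updated config
podman restart mycontainer        # restart existing containers whose runtime behaviour
                                  # the changed settings should affect
systemctl --user restart container-mycontainer.service   # if managed via systemd units
```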
How can I apply a new `podman` config with the least impact on the system?
1,427,075,345,000
Upon running sudo apt upgrade, I was notified that the package maintainer for Samba has provided a new config file, the difference I noticed was the new commented out wins support and obviously didn't include my specified settings. I reverted back to the old one because my settings were necessary, but I can add them to the new file if it's important, such as security purposes. In general (not just this situation so I'm prepared next time) why would the maintainer push out a new config file and in what situations should I use the new one?
Every version of the Samba package will contain the configuration file, because the package needs to be useful for initial installations too, not just for upgrades. But dpkg, the low-level package management tool for apt, is smart enough to record the hashes of any packaged configuration files. It knows what the hash of the original installed version of the configuration file was, so it knows if you have made changes to it after installing the package or not. If the current configuration file is unchanged from the old packaged version, then dpkg will automatically replace it with a version from a newer package whenever the package is updated, with no questions asked. If the hash of the new packaged version is the same as of the old packaged version, then dpkg knows the packaged configuration file has not been changed between package versions and your customized version should still work with the new version of the package, no questions asked. But if you have made changes to the configuration file and the configuration file in the new package has a different hash than the one recorded from the originally installed configuration file, then dpkg will prompt you what to do about the file, so your customizations won't be removed without your knowledge and authorization. Note that the prompt includes an option to show the differences between the versions: if you are uncertain as to what to do, you should first do that to see what has been changed. If everything you see are just your own customizations and some new commented-out settings, then you can keep using the old file, just without the minor benefit of having the commented-out new settings available to you as examples. But if you see major changes to the structure of the file, you should go read the change log of the package, or the release notes of the new OS release if you are doing a major upgrade, and be aware that you might need to redo your customizations in a way that is compatible with the new version. 
Obviously, this should not happen within a single release of the operating system, unless the old software version had such fatal bugs that the distribution had no option to backport the fixes, but had to make a major update to that software package in mid-release. Fortunately, such events are quite rare. With Samba, the most relevant long-term change is the accelerated deprecation of the SMB protocol version 1 and NetBIOS following the WannaCry worm and the related exploits. Since the security experts' attitude to SMBv1 is now "kill it with fire", it is expected that new versions of Samba will make SMBv1 and NetBIOS not enabled unless very deliberately configured, or even outright remove support for them. As a side effect of the removal of the old NetBIOS services, Samba servers in non-Active Directory environments will become non-browseable: you will still be able to contact Samba shares if you know the name of the computer and share you wish to connect to, but you won't be able to find them in a list of computers on a network. This can be remedied by adding an alternative browsing solution based on the WS-Discovery protocol: as far as I know, this is not integrated into Samba yet, but implementations are available with names like wsdd or wsdd2.
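You can inspect dpkg's conffile bookkeeping yourself. A sketch, using samba-common as the example package:

```shell
# List the conffiles dpkg tracks for the package, with their recorded MD5 hashes:
dpkg-query --showformat='${Conffiles}\n' --show samba-common
# Compare against your current file to see whether you have local changes:
md5sum /etc/samba/smb.conf
```

If the two hashes match, the file is unmodified and dpkg will replace it silently on upgrade; if they differ, you will get the prompt described above.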
Should I accept the package maintainer's new config file? (Samba)
1,427,075,345,000
Usually Vim tutorials on remapping keys will only show how to remap and assume you already know how to call a certain key within Vim, such as the well-known <esc> or <F2>. But what if I need first to know how my key is called (and even if it is available for remapping)? In my specific case, I'm trying to remap KEY_SCROLLLOCK to esc, but I don't know how Vim represents it, nor where I could find a list of all available keys in Vim.
In vim, see :help key-notation (I don't see KEY_SCROLLLOCK in there though)
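Since there appears to be no <ScrollLock>-style notation, a common workaround is to remap the key before it ever reaches vim, at the X keyboard level. A sketch (keysym names can be checked with xmodmap -pke; xev shows what any key currently sends):

```shell
xmodmap -e 'keysym Scroll_Lock = Escape'   # make Scroll Lock send Escape, X-session-wide
# To discover a key's current keysym, run xev and press the key,
# or dump the whole table with: xmodmap -pke | grep -i scroll
```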
How to get a key's name in Vim?
1,427,075,345,000
I'm on Lubuntu 20.04, with no PulseAudio installed. I'm having some trouble editing my ALSA setting, as any change I make interferes with my microphone. In particular, if I use the following basic configuration file: pcm.!default { type hw card 2 } ctl.!default { type hw card 2 } Then I am unable to run OBS and Discord in parallel, as the first tries to open the microphone in stereo mode, while the latter in mono. The last to try always fails to open the device. However, with just the lines defaults.pcm.card 2 defaults.ctl.card 2 Everything works correctly. This hints to me that the default device that ALSA provides is more flexible than a simple type hw plugged to the correct device. I tried to look into somehow making ALSA print its defaults, but could not find anything about it. How can I replicate the default ALSA device in my configuration file, so that I can make and test my changes as diffs to what ALSA already does for me?
The default definition of the default device can be found in /usr/share/alsa/pcm/default.conf. If it does not redirect to a driver-specific default, it is defined like this: pcm.!default { type plug slave.pcm { type hw card 2 } } The plug plugin implements automatic sample rate/format conversion. Most drivers do have their own default definition. In particular, most motherboard devices are handled by /usr/share/alsa/cards/HDA-Intel.conf, which defines something like this to allow multiple clients: pcm.!default { type asym playback.pcm { type plug slave.pcm "dmix:2" } capture.pcm { type plug slave.pcm "dsnoop:2" } }
What exactly is the default pcm ALSA device?
1,427,075,345,000
I tried plugin vim-indentguides and did not work well so I decided to remove it using Plug. After uninstall is nvim writing ^I instead of tabs. How can I fix it ? Here is my init.vim and demonstration of the problem: " PLUGINS call plug#begin() ^IPlug 'vim-airline/vim-airline' ^IPlug 'vim-airline/vim-airline-themes' ^IPlug 'SirVer/ultisnips' ^IPlug 'honza/vim-snippets' call plug#end() " TABS set tabstop=4 shiftwidth=4^I" set tab size to 4 spaces set autoindent smartindent ^I" make intending a bit more clever filetype plugin indent on " UI set cursorline ^I^I^I" highlight the current line set nu ^I^I^I^I^I^I" line numbering " set signcolumn=numbers^I^I" merge number and signs in to one column set scrolloff=4^I^I^I^I" show at least 4 lines above or below set list!^I^I^I^I^I" visualize tabs "set listchars=tab:>-^I^I" set tabs to >--- a set listchars=trail:~" ^I^I" trail spaces to ~ " CODING syntax on^I^I^I^I^I" highlight syntax set colorcolumn=88 ^I^I" ruler python convention " EDITOR BEHAVIOR "set nowrap ^I^I^I" disable word wrapping set linebreak ^I^I^I" break on words only if wrap is enabled set breakindent ^I^I^I" indent broken lines only if wrap is enabled nnoremap j gj nnoremap k gk " PERFORMANCE AND SECURITY set lazyredraw ^I^I^I" redraw only when it is necessary set ttimeoutlen=0 ^I^I^I" set delay after exiting visual mode to 0 set autoread ^I^I^I^I" automatically read files when reloaded outside Nvim set autochdir ^I^I^I^I" automatically change directory to current file set undofile^I^I^I^I" undo backup " COMMANDS set showcmd ^I^I^I^I" shows last issued command set wildmenu ^I^I^I^I" show command suggestions " SPELL CHECKING command Sc :set spell spelllang=cz,en_us command Sccz :set spell spelllang=cz command Scen :set spell spelllang=en_us command Scno :set nospell Thank you for your help
From your configuration file: set list!^I^I^I^I^I" visualize tabs If you don't want to visualize tabs, don't set the list option. To turn off the visualization of tabs in the current session, use :set nolist interactively.
neovim: writes ^I instead of tab
1,592,501,948,000
OS: Linux Mint 18.3 I'm currently trying to install the latest stable release of cryptsetup. It's installed, but as usual the Synaptic version is very old (1.6.6 compared to 2.3.2). Running ./configure as per the "INSTALL" document, I found some problems which were solved with this answer. ./configure then failed again with: checking for json-c... no configure: error: Package requirements (json-c) were not met: No package 'json-c' found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables JSON_C_CFLAGS and JSON_C_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. This then led me to this page, where I tried following the "Build instructions". I ran the git clone instruction from the directory where you find the "configure" file for cryptsetup. The cmake command seemed to finish OK, but running ./configure again (for cryptsetup) I get the same error. I don't understand this business of a "non-standard prefix". Can someone say how I install this json-c package to the "standard prefix"?
You need to install the libjson-c-dev package: apt-get install libjson-c-dev That will provide the development headers and libraries that are needed to build cryptsetup. If you continue to receive error messages like that, then it means that you need to install the package that's specified.
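Once the -dev package is installed you can confirm that configure will find it. A sketch:

```shell
sudo apt-get install libjson-c-dev pkg-config
pkg-config --modversion json-c    # prints a version once the json-c.pc file is visible
# For a library installed under a non-standard prefix, point pkg-config at it instead:
# export PKG_CONFIG_PATH=/opt/json-c/lib/pkgconfig
```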
How do I install a package to a "standard prefix"?
1,592,501,948,000
Examples: .bashrc .config/fish/config.fish I would like to know which is more common and what pros and cons they each have. I imagine a dotfile would be easier to change, since it is right in the home directory, but it seems .config would be easier to carry around, since it is one directory with everything in it. Do applications usually support just one, or both? Would it be a good idea to pick one, then symlink for each application? For example, if I wanted a dotfile, I could use ln .config/fish/config.fish .fish and just edit .fish, right?
Dotfiles are the older form, and I believe avoiding them completely will be difficult unless you use a distribution that insists on patching every software included to use the .config directory tree instead of plain dotfiles. Many old applications will have a long history of using a particular dotfile; some may have their own dot directory. Others may actually have both: for example, vim supports .vimrc, but also a .vim/ directory with multiple files and a specific sub-directory structure. The .config directory structure is based on the XDG Base Directory Specification. It was initially taken up by Desktop Environments like GNOME and KDE, as they both originally had a lot of per-user configuration files and had both already independently chosen somewhat similar sub-directory solutions. For GUI file managers, the concept of hidden files can be problematic: if you choose to not display file and directory names beginning with a dot by default, following the classic unix-style behavior, the existence and function of dot files will not be easily discoverable by a GUI user. And if you choose to not hide the dot files and directories, you get a lot of clutter in your home directory, which is in some sense the absolute top level of your personal workspace. Both ways will make someone unhappy. Pushing the per-user configuration files to a dedicated sub-directory can be an attractive solution, as having just one sub-directory instead of a number of dot files and/or dot directories will reduce clutter when "hidden" files are displayed in GUI, and the difference in ease of access is not too big. But it flies in the face of long-standing user expectations: (some) dotfiles "have always been here and named like this". This is going to be a very opinion-based issue. If the dotfiles are not related to login access or some other privileged access control, you can use symlinks to bridge from one convention to another, whichever way you prefer.
But if you really edit a specific configuration file so often that ease of access is important, perhaps you might want to create a shell alias or desktop icon/menu item that opens the actual configuration file in your favorite editor immediately (using an absolute pathname) instead? It could be even more convenient. Some dotfiles and directories are accessed by privileged processes (e.g. as part of authentication and access control) like ~/.ssh, ~/.login_conf etc. and they cannot normally be replaced by symbolic links, as these processes want the actual file instead of a symbolic link in the designated location in order to disallow various kinds of trickery and exploits. If you want to relocate these files, it must be done by modifying the configuration of the appropriate process, usually system-wide.
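For what it's worth, the fallback rule that XDG-following applications implement is a one-liner in shell:

```shell
# The directory a spec-following application would use, with the standard
# fallback to ~/.config when XDG_CONFIG_HOME is unset or empty:
config_dir=${XDG_CONFIG_HOME:-$HOME/.config}
echo "$config_dir/fish/config.fish"
```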
What is the difference between dotfile and dot config? [duplicate]
1,592,501,948,000
I want to "update" old users with new /etc/skel content on Debian and Ubuntu installations. Scripting this is possible... find /home -maxdepth 1 -mindepth 1 -type d | while read homedir; do user="$(stat -c%U $homedir)" su -c 'tar -cf- -C /etc/skel . | tar -vxf- -C $HOME' $user done ...but I'm wondering if anyone knows a better way.
You could update the /etc/skel files in users' directories with a script like this. #!/bin/sh # getent passwd | while IFS=: read -r username x uid gid gecos home shell do if [ ! -d "$home" ] || [ "$username" = 'root' ] ## || [ "$uid" -lt 1000 ] then continue fi tar -cf - -C /etc/skel . | sudo -Hu "$username" tar --skip-old-files -xf - done Notes Intentionally, it will not update files that already exist, but it cannot identify files a user has deleted that you want to put back again so those will be recreated It will not update root's files at all Remove the ## from the if ...conditions... should you want to exclude system accounts with UID<1000 too If you had the original files available, an alternative approach could be to update users' files if and only if they were unchanged, and otherwise to install them alongside (much like Debian's *.dpkg-dist approach). But that would need a different approach than using tar.
How can I `usermod` old users with new `/etc/skel` files?
1,592,501,948,000
Simple question. I'm trying to find the config file of pm2's logrotate module to edit it manually. Unfortunately this information is not provided in the Github repo's README. So where is this file? Backstory: I accidentally added a config with the incorrect key using pm2 set pm2-logrotate:wrong-key. I don't want it to confuse me when I come back to it later. Since there's no way to remove the config line in console (that I am aware of), I would like to get rid of it manually.
Found it. ~/.pm2/module_conf.json I believe this file stores configuration for all modules.
Where on disk is the config file of pm2-logrotate module?
1,592,501,948,000
I am looking for a command line tool that checks tcp_wrapper configuration file syntax to make sure daemon names are set right and things like that, check for spelling or syntax errors etc.
According to ftp://ftp.porcupine.org/pub/security/hints-and-tips.html: If tcpd access rules do not work as expected, run tcpdchk -v and see if its output matches your expectation. If that does not clear things up, please use the tcpdmatch command, report what it says, and also report what result you expected to get. Both commands come with the tcp wrapper source code.
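Example invocations (sketch; the daemon and client names are placeholders):

```shell
tcpdchk -v                     # syntax-check /etc/hosts.allow and /etc/hosts.deny
tcpdmatch sshd 192.168.1.10    # predict which rule matches this daemon/client pair
```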
Is there a tool to check /etc/hosts.{allow,deny} syntax?
1,592,501,948,000
I would like to know if there is a tool that allows me to check if I made any syntax errors in wpa_supplicant.conf(5). I am looking for a utility for wpa_supplicant(8) that would serve the same purpose as the --check flag of visudo(8) from the sudo(8) suite. The only solution I've come up with so far is running wpa_supplicant -c wpa_supplicant.conf -iNonexistentInterface but it is less than ideal. Partially because the return code is always 255 due to the invalid interface name. Ideally, I'd like the utility to run on FreeBSD.
I understand your wish but know of no such thing. It would be a nice feature. But I think you are already close. There is however another utility named wpa_cli which might be helpful if you are willing to consider a slightly different approach. Or maybe you know it and have already discarded the idea. It is available along with wpa_supplicant.conf in the base system: $ uname -r 11.1-RELEASE $ wpa_cli -v wpa_cli v2.5 Copyright (c) 2004-2015, Jouni Malinen <[email protected]> and contributors Version 2.7 is available as a port. The man page is unfortunately not kept quite up-to-date, and neither is the readme. wpa_cli -help lists all current options. And of those these might be interesting: reconfigure set dump save_config If you can live with the fact that you are changing the live settings - then set allows you to adapt the config and get errors for each setting. When things are to your liking you can then use save_config. Another - probably obvious - idea would be to add -dd to your wpa_supplicant command line. But still not ideal. But your general idea is actually workable. Though it seems that it always flakes out with exit code 255 no matter what the error. The textual output is easy to parse. If you have a parse failure you can always look for: Failed to read or parse configuration '{}'. All parse failures are prepended with Line {}: But a suggestion upstream to allow for -t for test in place of -i and more granular exit status might be a good idea.
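Since the exit status is useless, a hedged sketch that classifies the textual output using the marker strings quoted above (note the 'Line ' match could in principle false-positive on unrelated output):

```shell
# Return 0 (i.e. "parse failure detected") if the given output text
# contains one of wpa_supplicant's failure markers; 1 otherwise.
wpa_parse_failed() {
    case $1 in
        *'Failed to read or parse configuration'*) return 0 ;;
        *'Line '*) return 0 ;;
        *) return 1 ;;
    esac
}

# Intended use (needs a real wpa_supplicant and root; not run here):
# out=$(wpa_supplicant -c wpa_supplicant.conf -i nosuchif0 2>&1)
# if wpa_parse_failed "$out"; then echo 'configuration has errors' >&2; fi
```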
How to check if wpa_supplicant.conf has any syntax errors?
1,592,501,948,000
Are there any configs for OpenSSH server to disallow weak (e.g. <2048-bit) RSA keys? I'm aware of PubkeyAcceptedAlgorithms, which can disallow specific key types, including the rsa-sha2 ones, as a whole.
The RequiredRSASize option was added in OpenSSH 9.1, released on October 4th, 2022: ssh(1), sshd(8): add a RequiredRSASize directive to set a minimum RSA key length. Keys below this length will be ignored for user authentication and for host authentication in sshd(8). ssh(1) will terminate a connection if the server offers an RSA key that falls below this limit, as the SSH protocol does not include the ability to retry a failed key exchange.
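On a new enough server the directive goes straight into sshd_config. This fragment is a sketch; the 3072-bit floor is an example policy choice, not a default:

```
# /etc/ssh/sshd_config -- requires OpenSSH 9.1 or later.
# RSA user keys below this length are ignored for authentication,
# and RSA host keys below it cause the connection to be rejected.
RequiredRSASize 3072
```

You can validate the edited file with sshd -t before reloading the service.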
OpenSSH: how to disallow weak (<2048 bits) RSA keys
1,592,501,948,000
For some testing I use Raspberry Pis (Stretch) working as WiFi access points. Because I want to work with one global config file on many RPis, I split hostapd.conf into 2 parts:

hostapd.conf.global - describes almost all parameters of my WiFi access point.
hostapd.conf.local - where the SSID for the particular RPi is saved.

Is it possible to join these 2 config files and send them as one to hostapd? What I've tried is to send:

cat hostapd.conf.global hostapd.conf.local | hostapd -dd

(a pipeline command), but as a result I get the help text for hostapd. I've also tried

cat hostapd.conf.global hostapd.conf.local > hostapd.conf && sudo hostapd -dd hostapd.conf

and this works, but isn't the pipeline command equivalent to this? Is my understanding of pipelining totally wrong?
The | operator connects the STDOUT of one command to the STDIN of the next command. This only works with downstream commands that read from STDIN. hostapd doesn't appear to have a parameter for this (many commands allow you to provide - as a filename to indicate that you want them to read from STDIN), so you can't pipe configuration to it: it doesn't know how to consume that. You could try:

cat hostapd.conf.global hostapd.conf.local | hostapd -dd /dev/stdin

and see if that works. This tells hostapd to use /dev/stdin as its configuration file; since you've piped your config files to STDIN, this should have the desired effect.
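The /dev/stdin trick can be sanity-checked with any file-reading command standing in for hostapd. In this sketch grep plays that role; the file names and contents are just examples matching the question:

```shell
# Two example fragments like the ones in the question.
printf 'interface=wlan0\nhw_mode=g\n' > hostapd.conf.global
printf 'ssid=MyTestAP\n' > hostapd.conf.local

# grep stands in for hostapd here: it is told to read the "file"
# /dev/stdin, which the shell has connected to the pipe, so it sees
# the concatenation of both fragments. Counting the key=value lines
# prints 3, proving both files arrived through the pipe.
cat hostapd.conf.global hostapd.conf.local | grep -c '=' /dev/stdin
```

If this works but hostapd -dd /dev/stdin does not, the daemon is probably seeking or re-reading its config file, which a pipe cannot support; in that case the temporary-file approach from the question is the reliable one.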
Is it possible to pipe config file to hostapd?
1,592,501,948,000
I have a bash script that I've written that takes variables from a config file. I pass them from the command line like this:

./my_script.sh ./config1.conf

As I've continued to make more configs that need to be run, I now have to run a lot of commands to get through all of them. I'm wondering if there is a way to have the script run through all the configs like rsyslog does, by numbering them 01-first_config.conf, 10-config_ten.conf, 20-config_twenty.conf, etc. I've tried the following, but it only runs the first file:

./my_script.sh ./*.conf

I could also just put all the variables in one file with separate sections, but I'm unsure how to do that, since each section would essentially have a complete list of all variables required by the script, and I don't know how to pass each section through the script after the previous section finishes.
You have to decide whether you want this handled within the script (like rsyslog) or outside it. If you want ./my_script.sh ./*.conf to work, then you have to adapt the script so that it accepts several parameters. Something like

for config_file; do
    . "$config_file"
done

Or you hard-code, or somehow pass, a directory with these files:

for config_file in /path/to/configs/*.conf; do
    . "$config_file"
done
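Putting the first variant together, here is a minimal sketch of what my_script.sh could look like when it accepts any number of config files. run_one and the ssid variable are hypothetical stand-ins for whatever the real script does per config:

```shell
#!/bin/sh
# Sketch: source each config file given on the command line, then do
# the per-config work. run_one and $ssid are illustrative placeholders.
run_one() {
    # shellcheck disable=SC1090
    . "$1"
    printf 'running with ssid=%s\n' "$ssid"
}

for config_file; do   # "for config_file" with no list iterates over "$@"
    run_one "$config_file"
done
```

Invoked as ./my_script.sh ./*.conf, the shell expands the glob before the script runs, so the loop sees every matching file in lexical order, which is exactly why the 01-, 10-, 20- numbering scheme works.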
How to run multiple config files through a script?
1,592,501,948,000
Trying to make a cheap CNC machine work, I have to connect through a parallel port. Unfortunately, I have not managed to make the parallel port work. The PCI parallel port card seems to be detected, but I cannot transmit or connect anything through it. How do I make the parallel port work? How do I make it work with normal user privileges?

EDITED The port seems to work only under root privileges. That is probably the issue. But how do I make the parallel port work for normal users?

Note: My machine runs Debian Linux with RT kernel 4.9. What I have tried:

The PCI parallel card is plugged into my computer. Running $ lsmod | grep ppdev returns what seems a correct result:

ppdev 20480 2
parport 49152 3 lp,parport_pc,ppdev

Running $ lspci -v returns information that I don't fully understand:

03:01.0 Parallel controller: MosChip Semiconductor Technology Ltd. PCI 9865 Multi-I/O Controller (prog-if 03 [IEEE1284])
Subsystem: Device a000:2000
Flags: bus master, medium devsel, latency 32, IRQ 22
I/O ports at dc00 [size=8]
I/O ports at d880 [size=8]
Memory at fcfff000 (32-bit, non-prefetchable) [size=4K]
Memory at fcffe000 (32-bit, non-prefetchable) [size=4K]
Capabilities: <access denied>
Kernel driver in use: parport_pc

And more data from $ dmesg | grep parport (note: I have one single parallel port card):

[ 11.791907] parport_pc 00:02: reported by Plug and Play ACPI
[ 11.791998] parport0: PC-style at 0x378 (0x778), irq 5 [PCSPP,TRISTATE,EPP]
[ 11.888153] lp0: using parport0 (interrupt-driven).
[ 11.888949] parport1: PC-style at 0xdc00, irq 22 [PCSPP,TRISTATE,EPP]
[ 11.984195] lp1: using parport1 (interrupt-driven).

I downloaded a test application from here, which I run from the command line WITHOUT root permissions. It shows all the out-pins in red and all the in-pins in green. When I press an out-pin, it switches to green, but I suspect that does not mean anything.
Finally, the ultimate test: I connected to the parallel port a LED between GND and PIN_02 (with a 1k ohm resistor). If I connect it to BUSY (on by default), the LED turns on, but while connected to PIN_01 it never lights, even while pressing the button in the test application. From all those tests, I suspect the card is correctly installed but, due to some permission problem or other misconfiguration, it does not work. I tried to run the PortTest with root privileges, but it seems unhappy with that.
The issue is with the privileges of the parallel port: by default, it is accessible only by users in the group lp. The root user is obviously allowed to use it, but normal users are not. Adding the user to the lp group makes the parallel port accessible without sudo:

adduser <user-name> lp

After that, the parallel port works and I could continue the configuration.
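A quick way to verify the change took effect (after logging in again, since group membership is read at login). The in_group helper is just a throwaway name for this sketch:

```shell
# Check whether a user is in a given group; in_group is an
# illustrative helper, not a standard command.
in_group() {
    id -nG "$1" | grep -qw "$2"   # -w avoids matching e.g. "lp" in "lpadmin"
}

# After re-logging in, this should succeed for the user you added:
# in_group "$USER" lp && echo "parallel port access OK"
```

If the check fails in an existing session but succeeds after a fresh login, the adduser step worked; the old session simply kept its pre-change group list.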
How to make the parallel port work?
1,592,501,948,000
I installed fail2ban from EPEL using yum install, and then proceeded to screw up the configuration after forgetting to back up /etc/fail2ban. Now I want the original configuration back. First I tried yum reinstall fail2ban, but that was silly because yum install doesn't overwrite existing configuration files. Then I mved /etc/fail2ban somewhere else and tried yum reinstall fail2ban again, which according to some old blog post would give me the original configuration back. No such luck. I tried uninstalling with rpm -e and reinstalling. No such luck. I got frustrated and rm -rfed my /etc/fail2ban.backup directory, thinking maybe there was some kind of weird system discovery going on. Still nothing after reinstalling. Finally I downloaded and unpacked the RPM source and rsynced the config directory to /etc/fail2ban, which got me most of the way there. But there are still a few differences in how the log files are set up and in how it integrates with systemd. Instead of Frankensteining something together, I really just want the original configuration back. Is there a way to force a fresh install of an RPM package, including config and log files, either with YUM or some other tool? I'm using the standard Linode CentOS 7 image, if that matters at all.
On my system, fail2ban is actually spread across several packages:

fail2ban
fail2ban-firewalld
fail2ban-systemd
fail2ban-sendmail
fail2ban-server
systemd-python

Evidently, the configuration files don't get generated unless some or all of the above are installed. yum autoremove got rid of them, and then yum install fail2ban restored the original config files.
Restore original fail2ban configuration on CentOS 7
1,592,501,948,000
I haven't had need to mess with X for a long time (I'd guess the last time was back when XFree86 still ruled). A few weeks ago, however, I was trying to get a laptop to use an external display and I managed to cripple the setup (annoying flashing screen). Since the laptop was already due an upgrade (and I had a backup of /home), I just upgraded to F24 and moved on, but it got me thinking that I should probably have a backup of a working configuration. What files should I back up in case I need to mess with X again? Or should I just back up everything in /etc and not worry about specifics?
Just back up /etc. (And don't put configuration files elsewhere, obviously; e.g. never modify files under /usr.) Rather than rolling your own, install etckeeper, which automatically commits modifications to your chosen version control system on package upgrades. When you make changes, commit them (git commit, hg commit, bzr commit, or darcs record) with an explicit log message to remember why you did it.
What to back up before messing with X