1,592,501,948,000
I want to connect my Linux laptop (Debian 8) with my Windows laptop (Windows 10) with Ethernet over Bluetooth. (This is a must-have setup.) On the Linux side I have BlueZ 5. I found out that there is an org.bluez.NetworkServer1 method to register a server (network-api). There I chose "NAP". But I don't know what to write as the bridge. I tried to set up a bridge connection on the Linux laptop, but this doesn't work. Can anybody give me some steps or a good tutorial? All I could find was outdated (BlueZ 4) or for Linux-to-Linux connections. Note: sharing internet is not necessary. Thank you.
It seems that there is no need for a NAP for a point-to-point connection. In this case it is enough if both devices are in "PANU" mode. Just execute the bluez-test script "test-network" with the MAC of the device you want to connect to as the argument (after pairing). Then everything works fine without any need for interaction.
Setup linux as bluetooth NAP and connect to windows over bluetooth ethernet (Bluez5)
1,592,501,948,000
I looked at the usual culprit /etc/apt/apt.conf, but there is no configuration file for apt therein, so it has to be somewhere else. Running reportbug against apt does give the dump, but it doesn't tell where the configuration file resides. I have used apt-config dump, but it doesn't tell where it is sourcing the values from. Is there a way to figure it out? I am sure a grep should give up the secret, but what needs to be grepped I am not sure; I am not familiar with Perl. I am on testing, running apt 1.3~rc4.
The configuration is split into multiple files in /etc/apt/apt.conf.d/. For me it is:

apt.conf.d
├── 00aptproxy
├── 00CDMountPoint
├── 00trustcdrom
├── 01autoremove
├── 01autoremove-kernels
├── 05etckeeper
├── 20apt-show-versions
├── 20listchanges
├── 20packagekit
├── 50apt-file.conf
├── 70debconf
└── 99synaptic
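To answer the "what needs to be grepped" part: a recursive grep over /etc/apt locates the file that defines a given option. Here is a minimal sketch, run against a throwaway directory standing in for /etc/apt/apt.conf.d (the file names and the proxy option below are invented for the demo):

```shell
#!/bin/sh
# Mock up an apt.conf.d directory, then find which file sets a given option.
tmp=$(mktemp -d)
printf 'APT::Periodic::Update-Package-Lists "1";\n' > "$tmp/20auto-upgrades"
printf 'Acquire::http::Proxy "http://proxy.example:3128";\n' > "$tmp/00aptproxy"

# On a real system you would grep /etc/apt/apt.conf.d/ instead of "$tmp".
grep -rl 'Acquire::http::Proxy' "$tmp"

rm -rf "$tmp"
```

The same one-liner against the real directory, grep -rl 'Acquire::http::Proxy' /etc/apt/apt.conf.d/, prints the file(s) that apt-config dump is sourcing that option from.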
how do I find out where the configuration file for apt is located?
1,592,501,948,000
I have a Dell XPS 13 whose Philips UltraWide monitor is connected via a Thunderbolt 3/USB-C connection. As no monitors other than Apple monitors support this newfangled connection, I have an external converter to HDMI (I've also tried DVI and Mini HDMI). However, my monitor is not detected by Fedora 23. There's nothing useful in /etc/X11/xorg.conf.d, only a keyboard conf. Also nothing useful in /usr/share/X11/xorg.conf.d. I've tried restarting, restarting with it plugged in, not plugged in, restarting in terminal mode (plugged in and not) and running startx. Any ideas why, or is there anything I can try? There is always the possibility, of course, that it's not supported. The WiFi on this thing isn't supported by Linux yet.

Linux kernel 4.1 & 4.2 bug: There is an existing bug with Thunderbolt on Linux kernels 4.1, 4.2 and 4.3, but I've downloaded Fedora 22 Live and booted from that (which uses kernel 4.0) and I have the same problem.

xrandr -q
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 1920 x 1080, current 1920 x 1080, maximum 1920 x 1080
default connected primary 1920x1080+0+0 0mm x 0mm
   1920x1080      77.00*

lspci -v
00:02.0 VGA compatible controller: Intel Corporation Sky Lake Integrated Graphics (rev 07) (prog-if 00 [VGA controller])
        DeviceName: Onboard IGD
        Subsystem: Dell Device 0704
        Flags: bus master, fast devsel, latency 0, IRQ 11
        Memory at db000000 (64-bit, non-prefetchable) [size=16M]
        Memory at 90000000 (64-bit, prefetchable) [size=256M]
        I/O ports at f000 [size=64]
        Expansion ROM at <unassigned> [disabled]
        Capabilities: <access denied>
        Kernel modules: i915

and my kernel is up to date:

uname -a
Linux localhost.localdomain 4.2.6-300.fc23.x86_64 #1 SMP Tue Nov 10 19:32:21 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

find /dev -group video
/dev/video0
/dev/fb0

glxinfo | grep -i vendor
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: VMware, Inc.
cat /var/log/Xorg.0.log | grep "(EE)"
[  1838.502] (EE)
[  1838.503] (EE) Cannot establish any listening sockets - Make sure an X server isn't already running
[  1838.503] (EE)
[  1838.503] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
[  1838.503] (EE)
[  1838.503] (EE) Server terminated with error (1). Closing log file.

Xorg.0.log - other parts that may be relevant:

[    11.762] (==) No Layout section. Using the first Screen section.
[    11.762] (==) No screen section available. Using defaults.
[    11.762] (**) |-->Screen "Default Screen Section" (0)
[    11.762] (**) |   |-->Monitor "<default monitor>"
[    11.763] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration.
...
[    11.772] (==) Matched intel as autoconfigured driver 0
[    11.772] (==) Matched modesetting as autoconfigured driver 1
[    11.772] (==) Matched fbdev as autoconfigured driver 2
[    11.772] (==) Matched vesa as autoconfigured driver 3
[    11.772] (==) Assigned the driver to the xf86ConfigLayout
[    11.772] (II) LoadModule: "intel"
[    11.772] (II) Loading /usr/lib64/xorg/modules/drivers/intel_drv.so
[    11.772] (II) Module intel: vendor="X.Org Foundation"
[    11.772]    compiled for 1.17.99.901, module version = 2.99.917
[    11.772]    Module class: X.Org Video Driver
[    11.772]    ABI class: X.Org Video Driver, version 20.0
[    11.772] (II) LoadModule: "modesetting"
[    11.772] (II) Loading /usr/lib64/xorg/modules/drivers/modesetting_drv.so
[    11.773] (II) Module modesetting: vendor="X.Org Foundation"
[    11.773]    compiled for 1.18.0, module version = 1.18.0
[    11.773]    Module class: X.Org Video Driver
[    11.773]    ABI class: X.Org Video Driver, version 20.0
[    11.773] (II) LoadModule: "fbdev"
[    11.773] (II) Loading /usr/lib64/xorg/modules/drivers/fbdev_drv.so
[    11.773] (II) Module fbdev: vendor="X.Org Foundation"
[    11.773]    compiled for 1.17.99.901, module version = 0.4.3
[    11.773]    Module class: X.Org Video Driver
[    11.773]    ABI class: X.Org Video Driver, version 20.0
[    11.773] (II) LoadModule: "vesa"
[    11.773] (II) Loading /usr/lib64/xorg/modules/drivers/vesa_drv.so
[    11.773] (II) Module vesa: vendor="X.Org Foundation"
[    11.773]    compiled for 1.17.99.901, module version = 2.3.2
[    11.773]    Module class: X.Org Video Driver
[    11.773]    ABI class: X.Org Video Driver, version 20.0
[    11.773] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets:
        i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G,
        915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM, Pineview G,
        965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45,
        4 Series, G45/G43, Q45/Q43, G41, B43
[    11.773] (II) intel: Driver for Intel(R) HD Graphics: 2000-6000
[    11.773] (II) intel: Driver for Intel(R) Iris(TM) Graphics: 5100, 6100
[    11.773] (II) intel: Driver for Intel(R) Iris(TM) Pro Graphics: 5200, 6200, P6300
[    11.773] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
There is a bug where Thunderbolt connections aren't recognized in Linux kernels 4.1, 4.2 and 4.3 but are in 4.0. This has been fixed in kernel 4.4, so installing an updated kernel fixes it. Fedora didn't ship kernel 4.4 until Fedora 24 (which actually came with 4.6), so if you are using an older version, it can be done manually as follows.

Add the kernel vanilla repo:

curl -s https://repos.fedorapeople.org/repos/thl/kernel-vanilla.repo | sudo tee /etc/yum.repos.d/kernel-vanilla.repo

Install the stable release (or dev [kernel-vanilla-mainline] if you're brave):

sudo dnf --enablerepo=kernel-vanilla-stable update

Then restart; kernel 4.4 will be an option on startup. I've no idea why Fedora with kernel 4.0 didn't work, though.
Why is my (thunderbolt connected) monitor not detected in Fedora 23
1,592,501,948,000
I notice I spend a lot of time using tutorials to manually configure a CentOS 7 server. How can I convert the manual steps from a tutorial into an automated shell script that can be used to configure multiple CentOS 7 servers with the same settings? Let's use this tutorial as an example, but the answer should be generalizable to other config shell scripts, in addition to simply providing a working shell script to automate this tutorial. Here is my first attempt at the shell script:

#!/bin/bash
yum update
yum install yum-utils bzip2 bzip2-devel wget curl tar
sudo yum install gcc-c++
cd /opt
wget http://nodejs.org/dist/v0.12.7/node-v0.12.7.tar.gz
tar zxf node-v0.12.7.tar.gz
cd node-v0.12.7
./configure --prefix=/usr/local
make
make install
npm install bower -g
npm install gulp -g

How do I correctly write the shell script above? And how do I check, for example, that each step is done correctly? I have to be root to run the above. If I run it as a sudoer, how do I handle the periodic requests for passwords? Or does the fact that it is in a shell script mean that you only have to give the sudoer password when you first call the script? I am brand new to shell scripts, so please be patient and explain in language that others who are new to shell scripting can also understand. Also, note this is specific to CentOS 7, with things like yum install, etc.
You have several pieces to your question, and I can't claim familiarity with all of the commands you run, but here's my take:

Either run the whole script as root (directly) or via sudo. That way, you won't need to run sudo in the script itself. If you require sudo for a particular step, then you'll either need to set your ID up with a NOPASSWD flag in sudoers or accept an interactive prompt for your password during the script's execution.

To answer your other general question about each step executing correctly, the way to do that is to check the return code from the step and hope that the exiting program set the return code appropriately. For example, short-hand notation:

yum update && \
yum install yum-utils bzip2 bzip2-devel wget curl tar

Longer-hand:

yum update
RC=$?
if [ $RC -ne 0 ]
then
    echo "Some error message here"
    exit 1    ## or some other identifying error code
fi
yum install yum-utils bzip2 bzip2-devel wget curl tar
RC=$?
if [ $RC -ne 0 ]
then
    echo "Some other error message here"
    exit 2    ## or some other identifying error code
fi
...
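The same return-code checking can be wrapped in a small helper so that each step isn't followed by its own if-block. A sketch (the wrapped commands and messages are just placeholders):

```shell
#!/bin/bash
# Abort the whole script with a message as soon as any wrapped step fails.
run() {
    "$@" || { echo "FAILED: $*" >&2; exit 1; }
}

run echo "step 1 done"
run true                # a real script would wrap e.g.: run yum update
echo "all steps succeeded"
```

Alternatively, set -e near the top of the script makes bash exit on the first failing command, at the cost of less specific error messages.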
automating CentOS 7 configuration using shell scripts
1,592,501,948,000
Is there a tool that handles generic migration of config? For example if I have httpd, postfix, MySQL and users and groups data, is there a tool that can extract the config data for each service so that I can apply it on another system. Generally speaking is there a tool (or strategy) that handles this for all services?
One of the popular accepted solutions to this problem is using a configuration management system. Some examples are Puppet, Chef, and SaltStack. These systems allow you to define exactly what a server (or in some cases an application stack) looks like. Using these tools you define a server's state, including its configuration. Here is an example of a very basic Apache configuration using Puppet with the puppetlabs/apache module:

class { 'apache': }

apache::vhost { 'first.example.com':
  port    => '80',
  docroot => '/var/www/first',
}

This simple bit of Puppet code ensures the following:

- Apache is installed on the server
- The webserver is running and listening on port 80
- A vhost exists with the docroot /var/www/first

You can then apply this manifest to many servers in a cluster. There are many reasons for the movement towards this type of configuration instead of manually copying configuration files:

- It treats your server configuration and infrastructure in a very similar manner to how you treat code. The configs for these systems are often stored in version control, which allows you to easily view changes, roll back, etc.
- Your server states can be unit tested and acceptance tested.
- Shared modules work like code libraries: you do not need to reinvent the wheel.
- Your servers are provisioned in a way which is repeatable (and more reliable).

Many consider use of these systems a big part of the devops movement.
Tool for migration of service configuration
1,592,501,948,000
This is a followup to Somehow managed to mute mplayer and can't figure out how to restore sound. I've noticed that the sound level settings in mplayer are saved on exit. However, I can't figure out where they are saved. There are config files in the .mplayer directory, but none of them are being written to. This is on Debian wheezy with mplayer version 2:1.0~rc4.dfsg1+svn3454.
PulseAudio stores stream state for each app independently. This lets you (for example) set your music player to a lower volume than your instant message alert tone, so you hear the IM alert over the music. If you have PulseAudio's module-stream-restore loaded (and the default config loads it), then these settings will be saved when you exit the program and loaded back when you start it again. The settings are saved in ~/.pulse/…stream-volumes.tdb. The easiest way to change them is to start mplayer again and then use one of the many PulseAudio UIs (e.g., the command-line pactl, the GUI pavucontrol, etc.) to change it. If mplayer is using the PulseAudio mixer (likely), then you can also try m, 9, and 0 (mplayer's default mute and volume keys).
Where are the sound level settings for mplayer saved?
1,592,501,948,000
I've downloaded a kernel binary which I am using now. In order to use the watchdog on my system I must recompile the kernel with watchdog support. Is it possible to obtain the current kernel configuration of the binary? The binary is obtained from this page. I've used version R5.
If the kernel config is not distributed in /boot/config-* or available at /proc/config.gz, it is nearly impossible to recover it. As Alex wrote, they could also have patched the kernel and included proprietary drivers. But because the kernel is under the GPLv2, the owner of the site where you download the binaries has to give you the corresponding configuration, including the source code they used to compile it. In case you run into problems, contact gpl-violations.org.
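A quick probe of the two places mentioned above, as a sketch (neither file is guaranteed to exist, which is exactly what the answer warns about):

```shell
#!/bin/sh
# Check the usual locations where a running kernel exposes its build config.
for src in /proc/config.gz "/boot/config-$(uname -r)"; do
    if [ -r "$src" ]; then
        echo "available: $src"
    else
        echo "not available: $src"
    fi
done
# If /proc/config.gz is present, you could then check watchdog support with,
# e.g.: zcat /proc/config.gz | grep WATCHDOG
```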
Find or create kernel configuration of kernel binary
1,592,501,948,000
This is part of my xmonad config in ~/.xmonad/xmonad.hs:

myWorkspaces :: [String]
myWorkspaces = clickable . (map dzenEscape) $ ["web","doc","ssh","devel","chat","temp"]
  where clickable l = [ "^ca(1,xdotool key super+" ++ show (n) ++ ")" ++ ws ++ "^ca()" |
                        (i,ws) <- zip [1..] l, let n = i ]

myManageHook = composeAll
    [ className =? "MPlayer"     --> doFloat
    , className =? "Vlc"         --> doFloat
    , className =? "Gimp"        --> doFloat
    , className =? "skype"       --> doF (W.shift (myWorkspaces !! 4))
    , className =? "Mail"        --> doF (W.shift (myWorkspaces !! 4))
--  , className =? "XCalc"       --> doFloat
    , className =? "Firefox"     --> doF (W.shift (myWorkspaces !! 0)) -- send to ws 0
--  , className =? "Nautilus"    --> doF (W.shift (myWorkspaces !! 5)) -- send to ws 5
    , className =? "gvim"        --> doF (W.shift (myWorkspaces !! 1)) -- send to ws 1
--  , className =? "Terminal"    --> doF (W.shift (myWorkspaces !! 3)) -- send to ws 3
    , className =? "Gimp"        --> doF (W.shift (myWorkspaces !! 1)) -- send to ws 1
    , className =? "Codeblocks"  --> doF (W.shift (myWorkspaces !! 3)) -- send to ws 3
    , className =? "stalonetray" --> doIgnore
    ]

The thing is that Firefox or Codeblocks start at the workspace I want, but Skype and mail (Thunderbird) don't respect these settings and always start in the active workspace.
Make sure that Skype is capitalized. I use className =? "Skype" --> doShift "8" and that works, but if I leave Skype in lowercase it doesn't. I don't use Thunderbird, but perhaps it is also a class name issue. It looks like you should be using "Thunderbird-bin". http://ubuntuforums.org/archive/index.php/t-863092.html
Xmonad: some apps do not start in workspace which I defined in config
1,592,501,948,000
I found myself enabling one of my computers on the LAN to connect to a server through port 1236. A check of the list of services shows:

bvcontrol       1236/tcp        rmtcfg  # Daniel J. Walsh, Gracilis Packeten remote config server
bvcontrol       1236/udp                # Daniel J. Walsh

I really can't recall why I opened up this particular port. I would appreciate it if someone could explain what bvcontrol is and what a remote config server does, so that I can figure out whether to keep this port open or close it.
The complete bit in my /etc/services is:

# /etc/services:
# $Id: services,v 1.53 2011/06/13 15:00:06 ovasik Exp $
#
# Network services, Internet style
# IANA services version: last updated 2011-06-10
[...]
# Port 1236 is registered as `bvcontrol', but is also used by the
# Gracilis Packeten remote config server. The official name is listed as
# the primary name, with the unregistered name as an alias.
bvcontrol       1236/tcp        rmtcfg  # Daniel J. Walsh, Gracilis Packeten remote config server
bvcontrol       1236/udp                # Daniel J. Walsh

According to this:

    bv-Control for UNIX v9.0 is a security and systems management tool for system administrators and security auditors. The tool's implementation adopts the powerful querying and reporting features of RMS Console and Information Server. The RMS Console along with bv-Control for UNIX is a powerful tool designed to help you manage your server environment. For more information about the RMS Console and the Information Server see the RMS Console and Information Server Getting Started Guide.

Since this is a commercial software product, you would probably know if you were using it. As for the "Gracilis Packeten remote config server", here's a clue for you: http://manpages.ubuntu.com/manpages/gutsy/man1/p10cfgd.1.html. I believe "Packeten" is German for packets and "gracilis" Latin for slender, and would guess the Gracilis Packeten is an obscure, probably obsolete piece of hardware. In other words, if you want to use that port for something, you are fine doing so. It may (or may not) occasionally get scanned by something expecting "bvcontrol", but that should not matter.
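If you'd rather query the mapping than read /etc/services by eye, a small awk lookup does it. Sketched here against an inline copy of the two lines above (on a real system you would point it at /etc/services instead):

```shell
#!/bin/sh
# Look up which service name a services(5)-style file assigns to 1236/tcp.
svc=$(mktemp)
printf 'bvcontrol\t1236/tcp\trmtcfg\t# Daniel J. Walsh, Gracilis Packeten remote config server\n' > "$svc"
printf 'bvcontrol\t1236/udp\t# Daniel J. Walsh\n' >> "$svc"
awk '$2 == "1236/tcp" { print $1, $3 }' "$svc"   # prints: bvcontrol rmtcfg
rm -f "$svc"
```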
What is a remote config server?
1,592,501,948,000
From here: http://www.xenomai.org/documentation/xenomai-2.6/TROUBLESHOOTING

    Q: Which CONFIG_* items are latency killers, and should be avoided?
    ...
    APM: The APM model assigns power management control to the BIOS, and BIOS code is never written with RT latency in mind. If configured, APM routines are invoked with SMI priority, which breaks the rule that adeos-ipipe must be in charge of such things. DISABLE_SMI doesn't help here (more later).

The problem is that I am not able to find this APM item anywhere. "ACPI (Advanced Configuration and Power Interface) Support" results in the following menu:

--- ACPI (Advanced Configuration and Power Interface) Support
[*]   Deprecated /proc/acpi files
[*]   Deprecated power /proc/acpi directories
<M>   ACPI 4.0 power meter
< >   EC read/write access through /sys/kernel/debug/ec (NEW)
[*]   Deprecated /proc/acpi/event support
<M>   AC Adapter
<M>   Battery
{M}   Button
{M}   Video
<M>   Fan
[*]   Dock
<M>   Processor
< >   IPMI (NEW)
<M>   Processor Aggregator
<M>   Thermal Zone
-*-   NUMA support
()    Custom DSDT Table file to include
[*]   Debug Statements
[ ]   Additionally enable ACPI function tracing
<M>   PCI slot detection driver
{M}   Container and Module Devices (EXPERIMENTAL)
<M>   Memory Hotplug
<M>   Smart Battery System
< >   Hardware Error Device (NEW)
[ ]   ACPI Platform Error Interface (APEI) (NEW)

Please help.
You can find this option yourself: press / in the menuconfig interface and enter CONFIG_APM there; if you find anything, it's supported. I can only give you output from the 3.3.7 kernel version. But anyway, you could edit the .config file yourself, append CONFIG_APM=y, and then redo make menuconfig.
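When you have a .config file at hand, a plain grep answers the question directly; a minimal demo on a mock file (the two config lines below are invented for illustration):

```shell
#!/bin/sh
# Search a kernel .config for APM symbols; a disabled option still shows up
# as a "# CONFIG_APM is not set" comment line, so grep finds both states.
cfg=$(mktemp)
printf 'CONFIG_ACPI=y\n# CONFIG_APM is not set\n' > "$cfg"
grep 'CONFIG_APM' "$cfg"    # prints: # CONFIG_APM is not set
rm -f "$cfg"
```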
Where is CONFIG_APM in kernel - 2.6.38.8
1,592,501,948,000
Are there any drawbacks to having Cygwin and Windows share the same $HOME directory, in this case the Windows profile directory?
Merging them will work fine. Cygwin proper doesn't store anything in your HOME directory. On first running Cygwin with a fresh home directory, default versions of .bash_profile and such get put there, but again, there is no conflict with things that already get put there. I, too, find it frequently convenient to be able to use Cygwin on things that live under your Windows profile directory. However, I don't want the two to be the same[*], so I just make a symlink to it in my home directory. I'm never farther from my Windows profile directory than a cd ~/WinHome. [*] So many programs feel privileged to throw random junk in the Windows profile directory that it would annoy me to see it every time I say ls in my home directory. I prefer to keep that mess at arm's length. I feel my home directory should be mine. I'm happy to let ~/WinHome be a midden.
setting Cygwin's $HOME to Windows profile directory
1,592,501,948,000
I'm looking for a script that allows complete configuration of a CentOS 5 system via a TUI (no GUI, X, etc.). I found system-config-network-tui, but apart from the fact that it looks unprofessional (typos, bugs, etc.), I would like to find a script which supports the configuration of many other aspects (such as date/time, timezone, routing, etc.). Is there anything available?
Did you try setuptool? Install it with:

yum install setuptool
Looking for a complete TUI script for configuring date and networking of CentOS 5
1,592,501,948,000
When I reboot the machine with docker.service disabled, I do not have br1 in my ip a output. However, after starting Docker, br1 appears. But sudo docker network inspect does not show any related network:

Error response from daemon: cannot create network 33618cb1603a773a11d97750182fde9d8feb98c03a9882bb0c4539c3ea3fbe1d (br1): conflicts with network aed743fd32b35084f5ccee11e4cfb631b31dccb2fec79cba60cd5f4252b03106 (br1): networks have same bridge name

sudo docker network inspect aed743fd32b35084f5ccee11e4cfb631b31dccb2fec79cba60cd5f4252b03106
[]
Error response from daemon: network aed743fd32b35084f5ccee11e4cfb631b31dccb2fec79cba60cd5f4252b03106 not found

So where is this network, and how can I find it? I did sudo docker network prune; now only 3 entries are left, but br1 is still created.
This happens when removing a Docker network fails and leaves the bridge record in the database. The database location is /var/lib/docker/network/files/local-kv.db (AlmaLinux). Although the usual suggestion is to stop Docker, delete this database and then rebuild the containers, this isn't the best solution for larger infrastructure. In this case, a better solution is removing the leftover bridge record from the database. This can be done with the boltcli tool (https://github.com/spacewander/boltcli).

Assuming that Go is installed, install boltcli:

go install github.com/spacewander/boltcli@latest

Make a copy of local-kv.db and open it with boltcli:

boltcli /path_to_db_copy/local-kv.db

List buckets to get the one to work with (most probably, the bucket is "libnetwork"):

buckets *

Find the key based on the id (or part of it) of the conflicting network (aed743fd32b35084f5ccee11e4cfb631b31dccb2fec79cba60cd5f4252b03106 from the question):

keys libnetwork *aed743fd32b35084f5ccee11e4cfb631b31dccb2fec79cba60cd5f4252b03106*

Use the returned key (probably "docker/network/v1.0/bridge/aed743fd32b35084f5ccee11e4cfb631b31dccb2fec79cba60cd5f4252b03106/") to remove the record:

del libnetwork docker/network/v1.0/bridge/aed743fd32b35084f5ccee11e4cfb631b31dccb2fec79cba60cd5f4252b03106/

Exit boltcli (Ctrl+C), stop Docker, and overwrite the original db file with the altered one (making a backup of the original first is recommended). Then start Docker.
docker complain I have a bridge name conflict with a invisible network
1,592,501,948,000
I'm currently writing a network configuration role used by Ansible to customize the fresh new virtual machines that come from our Debian 11 template. I get a weird issue when I try to set up and configure 2 physical network interfaces. When I deploy a new VM from my template, it has 2 separate vmnics; from a Debian perspective this means it has only ens3 and ens4 (I don't use any bond or subinterfaces at all). Here's the simple interfaces configuration file I set up:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto ens3
iface ens3 inet static
    address 10.0.0.1/24
    gateway 10.0.0.254
    dns-nameservers 10.230.100.1
    dns-search mydomain.net

auto ens4
iface ens4 inet static
    address 192.168.0.1/24
    gateway 192.168.0.254
    dns-nameservers 10.230.100.1
    dns-search mydomain.net

Then, when I restart networking.service through systemctl, or better, when I reboot the machine, the configuration looks well set from an ip a perspective, but there are errors from a journalctl perspective:

févr. 16 14:04:53 MY-HOST systemd[1]: Starting Raise network interfaces...
févr. 16 14:04:53 MY-HOST ifup[1100]: RTNETLINK answers: File exists
févr. 16 14:04:53 MY-HOST ifup[1078]: ifup: failed to bring up ens4
févr. 16 14:04:53 MY-HOST systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
févr. 16 14:04:53 MY-HOST systemd[1]: networking.service: Failed with result 'exit-code'.
févr. 16 14:04:53 MY-HOST systemd[1]: Failed to start Raise network interfaces.
Once I reboot the server, I still see a lot of these errors, but the configuration seems well set up:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 50:6b:8d:d0:c0:3d brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.0.1/24 brd 10.0.0.255 scope global dynamic ens3
       valid_lft 2147483506sec preferred_lft 2147483506sec
3: ens4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 50:6b:8d:8a:24:94 brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    inet 192.168.0.1/24 brd 192.168.0.255 scope global dynamic ens4

Moreover, if I manually run sudo ifdown ens4 and then sudo ifup ens4, I get the following error:

ifdown: interface ens4 not configured
RTNETLINK answers: File exists
ifup: failed to bring up ens4

I figured out that if I comment out auto ens4 in my interfaces file, I do not get any error, but then when I reboot, ens4 isn't up, so that's not a solution for me... My question is: how can I fix it? Did I miss something in my interfaces configuration, or is there a mistake I didn't see? Thanks a lot!
Can you try this:

auto ens3
allow-hotplug ens3
iface ens3 inet static
    address 10.0.0.1/24

auto ens4
allow-hotplug ens4
iface ens4 inet static
    address 192.168.0.1/24

Restart or reboot, then try to ping 10.0.0.1 and 192.168.0.1. If that works, configure your DNS, nameserver, gateway, etc. (edit /etc/resolv.conf for the nameserver). These posts may also help:

- Debian 11 dhcp assigning ip to more than one interface network devices/interfaces informations
- Debian NetworkConfiguration
- Adding two default gateways in Debian interfaces file
- how to configure 2 network interfaces with different gateways
Can't correctly bring up a second network interface on Debian 11
1,592,501,948,000
Good morning, I am trying to send Zeek logs to another host on my local network with rsyslog. So far I have a configuration file in /etc/rsyslog.d which looks like this:

module(load="imfile")

#### Templates ####
template (name="zeek_Logs" type="string"
          string="<%PRI%>%PROTOCOL-VERSION% %TIMESTAMP:::date-rfc3339% %HOSTNAME% %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %$!msg%\n"
)

#### RULES for where to send Log Files ####
# Send messages over TCP using the zeek_Logs template
ruleset(name="send_zeek_logs") {
    if $msg startswith not "#" then {
        set $!msg = replace($msg, "|", "%7C");   # Handle existing pipe char
        set $!msg = replace($!msg, "\t", "|");
        action (
            type="omfwd"
            protocol="tcp"
            target="192.168.1.140"
            port="7000"
            template="zeek_Logs"
        )
    }
}

#### Inputs ####
input (
    type="imfile"
    File="/opt/zeek/logs/current/weird.log"
    Tag="zeek_weird"
    Facility="local7"
    Severity="info"
    RuleSet="send_zeek_logs"
)

input (
    type="imfile"
    File="/opt/zeek/logs/current/modbus_detailed.log"
    Tag="zeek_detailed"
    Facility="local7"
    Severity="info"
    RuleSet="send_zeek_logs"
)

But when launching rsyslog, I get this error:

nov. 22 13:00:53 zeek rsyslogd[1442]: imfile: on startup file '/opt/zeek/logs/current/weird.log' does not exist but is configured in static file monitor - this may indicate a misconfiguration. If the file appears at a later time, it will automatically be processed. Reason: Permission denied [v8.2001.0]
nov. 22 13:00:53 zeek rsyslogd[1442]: imfile: on startup file '/opt/zeek/logs/current/modbus_detailed.log' does not exist but is configured in static file monitor - this may indicate a misconfiguration. If the file appears at a later time, it will automatically be processed. Reason: Permission denied [v8.2001.0]
nov. 22 13:00:53 zeek rsyslogd[1442]: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="1442" x-info="https://www.rsyslog.com"] start

I tried to give read permission on the /opt/zeek/logs directory and I also disabled AppArmor temporarily, but nothing works.
What else am I missing? Thank you for your help.
Probably the user syslog lacks read permission for the directory; you can test it with:

sudo -u syslog ls /opt/zeek/logs/current

The permission failure may be because of a directory higher up the tree, of course. A crude bash example of how to find where:

TESTDIR=/opt/zeek/logs/current
while [[ ${#TESTDIR} -gt 1 ]]; do
    sudo -u syslog ls "$TESTDIR" >/dev/null 2>&1 && \
        echo "syslog can read contents of $TESTDIR" || \
        echo "syslog cannot read contents of $TESTDIR"
    TESTDIR=$(dirname "$TESTDIR")
done
Access denied for rsyslog
1,592,501,948,000
I modified the interface configuration file as follows:

vi /etc/sysconfig/network-scripts/ifcfg-ens160

I changed the IP and gateway. Then I ran the following command:

nmcli connection down ens160 && nmcli connection up ens160

However, the IP address does not change when I run ifconfig. I have to reboot the server for the change to take effect. What other commands can I run so I won't have to reboot the server? I am running AlmaLinux 8.6.
After editing the interface configuration file, you should run: nmcli connection reload Alternatively, you should make your modifications to the interface configuration with nmcli connection modify ens160 ... or nmcli connection edit ens160 or nmtui or with any other NetworkManager front-end. If you make your changes in one of these ways, the "RedHat-like" NetworkManager configuration backend will automatically update the /etc/sysconfig/network-scripts/ifcfg-* files appropriately, as that backend is read/write. (On Debian and related distributions, the NetworkManager backend that reads Debian's classic /etc/network/interfaces is read-only, and any configuration updates get stored to /etc/NetworkManager/system-connections/ or to a per-user location which may be specific to distribution and/or desktop-environment used.)
Server requires reboot after modifying the interface configuration file
1,592,501,948,000
How can I change the actual editor theme in .nanorc? I am not speaking about syntax highlighting, but about editor elements such as the title bar or the line numbers' color/background color. For instance, I would like to set the title bar and line numbers background to black/transparent, and the font color to white.
Edit the nanorc file and add the following lines:

set titlecolor COLOR_1,COLOR_2
set numbercolor COLOR_1,COLOR_2

In each case, COLOR_1 is the text color and COLOR_2 is the background. Supported colors are white, black, blue, green, red, cyan, yellow and magenta.
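For the specific colors asked about in the question (white text on a black background), the corresponding lines would be, as a sketch (whether black renders as transparent depends on your terminal):

```
set titlecolor white,black
set numbercolor white,black
```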
nano change line numbers color
1,592,501,948,000
I installed new daughter cards for a parallel port and a telephone modem on the PCI bus of my computer. The lspci command reveals that the system sees the cards, but I don't know which config file I need to edit to use these new cards. I've searched the web but have not yet found anything that helps.

root@CLM1001-Ubuntu:~# lspci | grep 04:
04:05.0 Parallel controller: Device 1c00:2170 (rev 0f)
04:06.0 Multiport serial controller: PCTel Inc HSP MicroModem 56 (rev 02)

This is an old computer with a video card that does not support a newer Linux kernel, so I am stuck running Ubuntu 14.04 LTS.
The PCI vendor:product ID of the parallel port card is 1c00:2170. The fact that the ID number is displayed without using lspci -n or lspci -nn indicates that the vendor is not included in the system's PCI ID database. That's not a good sign. This webpage mentions the vendor ID: 1c00 is not a listed PCI vendor ID. 1C00 is the Vendor ID used by WCH (not assigned by pcisig). WCH seems to be a Chinese vendor of various adapter cards. The fact that they seem to have just grabbed a vendor ID without officially registering it with the PCI-SIG is not a good sign, either. Even the newest stable kernel (5.17.1 at the time of this writing) only supports two product IDs with this vendor ID: those would be 3050 and 3250. The product ID 2170 is completely unknown. And even those two product IDs were added to the kernel in 2018, so the original kernel of Ubuntu 14.04 LTS probably would not have even those. If the card came with a Windows driver (or a working download link for one), then reading the *.INF file of the Windows driver might provide some clues about the card. You might also see if there are any visible markings on the main chips on the card, and Google them if you find any; if it turns out the card uses a chip that is already known to Linux, WCH might be using a copy of an existing card design. If it turns out that your card is a copy of a PCI parallel port card that is already supported by Linux, creating a kernel patch to add support for it could be a fairly simple matter of basically copying the relevant lines defining the details of the supported card to make a new entry in <Linux kernel source root>/drivers/parport/parport_serial.c and changing the PCI IDs of the new entry to match your card. Then you would have to compile your own kernel and test your changes.
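If a Windows driver for the card turns up, the PCI IDs it claims to support can be pulled straight out of its .INF file. A sketch using a made-up INF excerpt (real INF files list supported devices with the same PCI\VEN_xxxx&DEV_xxxx pattern):

```shell
# Hypothetical .INF excerpt; the device-description line is invented,
# but the VEN_/DEV_ notation is what real Windows driver INFs use
cat > /tmp/wch-parallel.inf <<'EOF'
%WCH.DeviceDesc% = WCH_Install, PCI\VEN_1C00&DEV_2170
EOF

# Pull out every vendor/device pair the driver claims to support
grep -oE 'VEN_[0-9A-Fa-f]{4}&DEV_[0-9A-Fa-f]{4}' /tmp/wch-parallel.inf
# → VEN_1C00&DEV_2170
```

Matching IDs found this way against the tables in drivers/parport/parport_serial.c is one way to discover whether the card is a clone of an already-supported design.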
What config file do I need to edit to use my newly-installed parallel port PCI card
1,592,501,948,000
FreeBSD and other BSD systems require certain network information at installation time: whether I will use a DHCP server, or else the IP address, subnet mask, IP address of the default gateway, domain name of the network, IP addresses of the network's DNS servers, etc. When I install Ubuntu, I do not need to provide this information: in some way, Ubuntu collects that data and uses it, giving the impression that the system "just works". I understand that under the hood Ubuntu just automates something that the BSD developers thought would be enough to simply ask the user for. But because I do not know this information beforehand, I would like to know where Ubuntu stores it, so I can use it to install FreeBSD on the same machine that now runs Ubuntu. What I want to do is move to FreeBSD (currently, I'm using it inside VirtualBox on Ubuntu), but I hit that wall. I hope someone can point me in the right direction.
By default Ubuntu uses NetworkManager. In FreeBSD you configure the network manually. If you "do not need to provide this information when you install Ubuntu", that means default DHCP works for you. See section 5, "Network Configuration", of the FreeBSD Quickstart Guide for Linux® Users for how to configure Ethernet with DHCP in FreeBSD. If you need more details from NetworkManager, use nmcli. Read 32.3. Wireless Networking for how to configure WiFi in FreeBSD. Generally, in FreeBSD you'll have to understand more advanced details (e.g. Chapter 32. Advanced Networking) to configure these things.
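As a sketch: if DHCP is what Ubuntu was using, the FreeBSD side usually needs just one line in /etc/rc.conf (the interface name em0 is an assumption; check your own ifconfig output for the real name):

```
ifconfig_em0="DHCP"
```

Something like service netif restart em0 (or a reboot) should then bring the interface up.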
How to recollect network information from Ubuntu to use it on Freebsd?
1,592,501,948,000
Suddenly, i3 stopped moving Inkscape to the workspace I assigned to it. I really cannot understand why, because it was working fine just two days ago. Here is the code I wrote in i3/config to move Inkscape to the ninth workspace, and to move me there as well:

for_window [class="Inkscape"] move to workspace $ws9
workspace number $ws9

Moreover, this is the output of xprop used on the Inkscape window:

WM_CLASS(STRING) = "org.inkscape.Inkscape", "Inkscape"

If you need anything else, let me know. (I checked that the name of the workspace is actually $ws9.) One thing I noticed is that it fails to move only when I open it in workspaces where I have other windows; if it is opened in an empty workspace, it will be moved to the ninth one.
I had the same problem, and changing a few settings on Inkscape solved it for me. Open Inkscape, go to Edit > Preferences. On the Preferences window, go to Interface > Windows. Then set the "Default window size" to Default, and "Saving window geometry (size and position)" to Don't save window geometry.
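If the preferences change alone doesn't do it, it may also be worth making the i3 criterion tolerant of both parts of the WM_CLASS pair that xprop reports. A sketch (the case-insensitive partial-match pattern is an assumption):

```
for_window [class="(?i)inkscape"] move to workspace $ws9
```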
i3wm doesn't move Inkscape to workspace assigned
1,592,501,948,000
I have these aliases in my ~/.bashrc:

alias grep='grep --color=auto -H'
alias fgrep='fgrep --color=auto -H'
alias egrep='egrep --color=auto -H'

but they have no effect when I run find ... -exec grep ..., and I always have to provide those options manually. Is there a way to tell find to rely on aliases in the -exec option's arguments? I'm thinking of configuration files, rather than other aliases. Would it be unsafe in some way?
You can't use aliases like that. Aliases are expanded only when they appear as the first word of a command; the shell simply substitutes the alias's replacement text at that point before executing the command line. When you enter a command, the shell looks the first word up as an alias, then as a function, then as a builtin, and so on; none of that lookup applies to the arguments you hand to -exec. Furthermore, find's -exec action always spawns a separate process that executes the binary directly, so it never sees an alias or a shell function; that behaviour is hard-coded.
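In practice that means spelling the options out in the -exec arguments themselves. A self-contained sketch (paths and contents are illustrative):

```shell
mkdir -p /tmp/grepdemo
printf 'hello\n' > /tmp/grepdemo/f.txt

# find executes the real grep binary, so the flags from the alias
# must be repeated explicitly here:
find /tmp/grepdemo -type f -exec grep --color=auto -H 'hello' {} +
# → /tmp/grepdemo/f.txt:hello
```

Note that --color=auto emits no color codes when the output is not a terminal, which is exactly why it is safe to hard-code it here.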
Aliasing grep in find's -exec option
1,592,501,948,000
I want to switch on CONFIG_CONTEXT_TRACKING, I am able to find this config with a search in menuconfig but not able to turn it on. I am also having difficulty in understanding the config options that CONTEXT_TRACKING depends on. Can someone tell me step by step how to switch on this config?
You need to compile your own Linux kernel. CONTEXT_TRACKING is an automatic setting, which is enabled if VIRT_CPU_ACCOUNTING_GEN is selected. VIRT_CPU_ACCOUNTING_GEN is available under “General setup”, “CPU/Task time and stats accounting”, “Cputime accounting”, “Full dynticks CPU time accounting”. You can find this out by typing / and searching for VIRT_CPU_ACCOUNTING_GEN in make menuconfig. Selecting this option, which is only possible on architectures with support for CONTEXT_TRACKING, will automatically enable CONTEXT_TRACKING.

The availability of VIRT_CPU_ACCOUNTING_GEN depends on all of the following:

HAVE_CONTEXT_TRACKING (automatically set on ARM, ARM64, MIPS, 64-bit PowerPC, 64-bit SPARC, 64-bit x86)

HAVE_VIRT_CPU_ACCOUNTING_GEN (indicates support for 64-bit cputime_t; automatically set on 64-bit architectures and architectures where the appropriate locking has been implemented, i.e. ARM and non-SMP MIPS)

GENERIC_CLOCKEVENTS (automatically set on architectures supporting generic clock events, i.e. everything but Itanium)
How do I switch on CONFIG_CONTEXT_TRACKING in Linux?
1,592,501,948,000
How do I query the compile-time options of bash on a given system? The system rc path for bash differs across systems. Sometimes it is /etc/bash.bashrc and sometimes it is /etc/bashrc. How can I detect this programmatically? I know I can list options in a shell with: set -o or shopt
As far as I can tell, Bash source code doesn't differentiate between SYS_BASHRC and other included rc files after compilation. In addition, SYS_BASHRC could be undefined, and the resulting binary wouldn't use a system rc at all. All the files used by a process can be found out by strace, however. Bash includes rc files only if it is run interactively, so:

echo | strace -e openat -o tmp.log bash -i 2>/dev/null

The resulting file tmp.log will contain the information wanted:

openat(AT_FDCWD, "/etc/bash.bashrc", O_RDONLY) = 3

Unfortunately, it will also contain a large number of lines, e.g. for libraries (and for the redirection to /dev/null). I'm not sure how to select the correct line in every case, but in practice I think it will most probably be the first non-library file in /etc:

grep -v O_CLOEXEC tmp.log | grep \"/etc | head -n 1 | sed -e 's/.*"\(.*\)".*/\1/'
How can I query the system rc path set at compile time with -DSYS_BASHRC=?
1,550,440,851,000
I want to change the text color of notifications from green to black, for example. I'm using Xenlism Minimalism, which is a great shell theme for me, but the only issue is that the text color is a light green that is really hard to see. How can I change this?
Go to your theme, usually installed in /usr/share/themes/<your_theme_folder>/gnome-shell. Find the .message-title and .message-content selectors and add a color rule (or whatever other styling you want) to them.
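For the black-text example from the question, the edit might look like this (the file is usually gnome-shell.css inside that folder; the selector names can vary between theme versions, so treat this as a sketch):

```css
/* assumed path: /usr/share/themes/<theme>/gnome-shell/gnome-shell.css */
.message-title   { color: #000000; }
.message-content { color: #000000; }
```

Restarting the shell (Alt+F2, then r, on X11) is typically needed for the change to take effect.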
How can I change the text color in gnome top bar?
1,550,440,851,000
I currently have a .conf in my /etc/nginx/sites-available with a bunch of location entries. Some of those location entries are set up as reverse proxies to specific ports. However, I'm having trouble adding a location entry that just points at a directory.

server {
    listen 443 ssl;
    server_name sub.domain.com www.sub.domain.com;

    root /var/www/html;
    charset utf-8;

    access_log /var/log/nginx/sub.domain.com-access.log combined;
    error_log /var/log/nginx/sub.domain.com.log error;

    ssl_certificate /etc/letsencrypt/live/sub.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sub.domain.com/privkey.pem;

    location /site1 {
        proxy_pass http://127.0.0.1:7777;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /site2 {
        proxy_pass http://127.0.0.1:8888;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /site3 {
        root /opt/site3;
        index index.html;
        allow all;
    }
}

Right now, I'm having trouble getting sub.domain.com/site3 to serve the content of /opt/site3. Any help on how to correctly use location {} entries side by side with reverse proxies would be greatly appreciated! Thanks.
Attempting to use root within a sub-location means nginx will try $root$uri, which in your case becomes /opt/site3/site3. You could keep using root by pointing it at the parent of the directory you are trying to serve, but you don't need to do this. Try using alias /opt/site3; instead; this should access the correct location, provided you set the index field and, if necessary, have a try_files in that location block as well.
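A sketch of what the working location could look like (note the matching trailing slashes, which matter for how alias maps the request path):

```nginx
location /site3/ {
    alias /opt/site3/;
    index index.html;
}
```

With alias, the /site3/ prefix is stripped before the path is mapped, so /site3/index.html serves /opt/site3/index.html.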
NGINX - location {} slug with different root domain
1,550,440,851,000
What is the meaning of postscreen_dnsbl_reply_map in postscreen (Postfix)? I've read in the documentation:

if your DNSBL queries have a "secret" in the domain name, you must censor this information from the postscreen(8) SMTP replies (1)

And in the manual:

A mapping from the actual DNSBL domain name which includes a secret password, to the DNSBL domain name that postscreen will reply with when it rejects mail. When no mapping is found, the actual DNSBL domain will be used. (2)

I don't understand what "a secret password" means here: how can a DNS domain name include a password? Could you explain?
Some non-free DNSBLs give customers a secret DNS label to insert between the base domain and the query target (i.e. the octet-reversed IP or domain name) as a form of authentication. Obviously this "secret" isn't well protected from snooping by actors who can sniff the DNS traffic, but as a practical matter it is safe enough for most DNSBLs' needs. This is what the postscreen_dnsbl_reply_map feature is for: it hides the "password" part of a DNSBL domain name. When postscreen rejects mail (usually spam), its SMTP reply contains the DNSBL domain name, so this Postfix parameter maps the real, secret-bearing domain to a sanitized name before it is shown to the client. As a side effect, it can also hide which DNSBLs you use from senders whose mail is rejected.
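As a sketch of how the pieces fit together (the secretkey label and the example domains are made up for illustration; texthash is one of the supported Postfix map types):

```
# main.cf
postscreen_dnsbl_sites = secretkey.dnsbl.example.net*2
postscreen_dnsbl_reply_map = texthash:/etc/postfix/dnsbl_reply
```

```
# /etc/postfix/dnsbl_reply: real (secret-bearing) name on the left,
# name shown in SMTP rejection replies on the right
secretkey.dnsbl.example.net   dnsbl.example.net
```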
What is postscreen_dnsbl_reply_map use for?
1,550,440,851,000
I have a file from which I produce two new files, replacing certain columns in the process. It works fine right now. I wanted to move some of the variables into a configuration file, kind of like below: the "X" (the character I replace a column with), the "\036" separator, and the selected columns such as $13, $6, $10, $5. I am not sure how best to accomplish this with an awk statement.

Config:

export FIELD_DELIMITER="\036"
export MASK_COLUMN="$13"
export PRINT_COLUMN="$13, $6, $10, $5"

Code:

awk 'BEGIN{FS=OFS="\036"} {gsub(/./, "X", $13)} 1' $1 > $file_directory'/'$mask_filename$seperation$temp$DATA_SUFFIX
awk 'BEGIN{FS=OFS="\036"} {print$6,$10,$5,$13}' $1 > $file_directory'/'$mask_filename$seperation$temp1$DATA_SUFFIX
You can pass awk variables from outside by using the -v argument. You can use it repeatedly to pass multiple variables. For example, for the first line of your script:

awk -v FIELD_DELIMITER='\036' -v WIPEOUT_CHARACTER='X' 'BEGIN{FS=OFS=FIELD_DELIMITER} {gsub(/./, WIPEOUT_CHARACTER, $13)} 1' $1 > $file_directory'/'$mask_filename$seperation$temp$DATA_SUFFIX

I checked GNU Awk 4.1.4 and it's happy to take the "\036" and interpret it as ASCII RS (record separator), so I'd expect that to work for you too.
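Combining -v with a sourced configuration file might look like this sketch (file and variable names are illustrative; note that awk itself expands the \036 escape in -v values):

```shell
# Illustrative config file holding the tunables
cat > mask.conf <<'EOF'
FIELD_DELIMITER='\036'
MASK_COLUMN=2
EOF

. ./mask.conf

# Mask field $MASK_COLUMN of each \036-separated record with X characters
printf 'a\036b\036c\n' |
  awk -v FS="$FIELD_DELIMITER" -v OFS="$FIELD_DELIMITER" \
      -v col="$MASK_COLUMN" '{gsub(/./, "X", $col)} 1'
```

This prints a, X and c separated by the 0x1E control character; the same pattern extends to the print-columns case by passing each column number as its own -v variable.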
Using variables in awk statement
1,550,440,851,000
In my RHEL 7 minimal installation with Virtualization profile I found that interfaces configured with network manager(s) contain BROWSER_ONLY=no option, which is always set to no by default. Server is currently offline, so I won't post entire config, but this is from "regular", physical WAN facing interface enp5s0 (eth0). When i configured another physical NIC using nmtui, the same option appeared. I've seen it in many config files posted online. Now, if I don't know it, then probably I don't need it. Of course that's not the case, this is purely out of curiosity. Does the name literally mean "this interface is only for web browser-related traffic"? I can see how that would make sense with dedicated physical card, possibly VPN/IPSEC, meant only for safe browsing, maybe online banking, stock exchange(sic.), finance. Research: all duckduckgo searches return content of ifcfg-* files that contain this option by default set to "no". I've tried searching for a phrase "BROWSER_ONLY=yes" to no avail neither of IFCFG(8), IP(8), NETWORKS(5) man pages reveal that information /usr/share/doc/initscripts-*/sysconfig.txt describes most of the options that go into ifcfg-* files for different types of connection, however there's no mention of BROWSER_ONLY briefly browsed through http://linux-ip.net/ that contains IP Command reference ip-cref.ps, to which man ifcfg refers and which I didn't find on my system and of course here, on Stack Exchange grep -R BROWSER /usr/share/doc/*, info --apropos=BROWSER_ONLY - guess what... haven't tried changing it to yes yet, however I doubt to see any difference, unless the interface will stop working altogether Does it mean absolutely no one is using it? Is it some hidden or reserved for future use option? I'm torn between posting this question in Unix&Linux and Server Fault. Not sure where it fits better.
I found NetworkManager docs suggesting that this setting is only used if you also configure proxy settings, i.e. you have PROXY_METHOD=auto present; it then determines whether the proxy settings apply to everything or just to web browsers, insofar as those can be distinguished. The default is already no.
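Putting that together, a hypothetical ifcfg excerpt in which BROWSER_ONLY would actually matter might look like this (the key names follow the ifcfg-rh backend conventions; the PAC URL is made up):

```
# ifcfg-enp5s0 excerpt (hypothetical): BROWSER_ONLY is only consulted
# when a proxy is configured via PROXY_METHOD=auto
PROXY_METHOD=auto
BROWSER_ONLY=yes
PAC_URL=http://proxy.example.com/wpad.dat
```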
What is BROWSER_ONLY option in ifcfg-* network configuration files?
1,550,440,851,000
The title doesn't really mean that I expect GRUB to recognize my desktop environments. I just want to have separate Debian 9 installations with different environments and to be able to recognize them in the GRUB menu. I tried to change /etc/default/grub, but this is used only by the current system (let's say Debian 9.2 Xfce), and so the other system (let's say Debian 9.2 LXDE) sees just "Debian GNU/Linux 9 (stretch)". I can't figure out which file I have to change so that GRUB from every OS will give the appropriate entry name (with the desktop environment). I looked at similar topics that discussed changing 40_custom or 30_os_prober, but didn't manage to find an answer.
Finally the previous settings (before this edit) can not work after some updates of my debian system. So I solved the problem like this: In the file of /etc/grub.d/10_linux of every distro, I added a word that shows the DE used in the distro like this (see "MATE"): if [ "x${GRUB_DISTRIBUTOR}" = "x" ] ; then OS=GNU/Linux else case ${GRUB_DISTRIBUTOR} in Ubuntu|Kubuntu) OS="${GRUB_DISTRIBUTOR}" ;; *) OS="${GRUB_DISTRIBUTOR} MATE" ;; esac CLASS="--class $(echo ${GRUB_DISTRIBUTOR} | tr 'A-Z' 'a-z' | cut -d' ' -f1|LC$ fi then I edited the file /etc/grub.d/30_os-prober and changed some things. My final file is: #! /bin/sh set -e # grub-mkconfig helper script. # Copyright (C) 2006,2007,2008,2009 Free Software Foundation, Inc. # # GRUB is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # GRUB is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with GRUB. If not, see <http://www.gnu.org/licenses/>. prefix="/usr" exec_prefix="/usr" datarootdir="/usr/share" quick_boot="0" export TEXTDOMAIN=grub export TEXTDOMAINDIR="${datarootdir}/locale" . 
"$pkgdatadir/grub-mkconfig_lib" found_other_os= adjust_timeout () { if [ "$quick_boot" = 1 ] && [ "x${found_other_os}" != "x" ]; then cat << EOF set timeout_style=menu if [ "\${timeout}" = 0 ]; then set timeout=10 fi EOF fi } if [ "x${GRUB_DISABLE_OS_PROBER}" = "xtrue" ]; then exit 0 fi if [ -z "`which os-prober 2> /dev/null`" ] || [ -z "`which linux-boot-prober 2> /dev/null`" ] ; then # missing os-prober and/or linux-boot-prober exit 0 fi OSPROBED="`os-prober | tr ' ' '^' | paste -s -d ' '`" if [ -z "${OSPROBED}" ] ; then # empty os-prober output, nothing doing exit 0 fi osx_entry() { found_other_os=1 if [ x$2 = x32 ]; then # TRANSLATORS: it refers to kernel architecture (32-bit) bitstr="$(gettext "(32-bit)")" else # TRANSLATORS: it refers to kernel architecture (64-bit) bitstr="$(gettext "(64-bit)")" fi # TRANSLATORS: it refers on the OS residing on device %s onstr="$(gettext_printf "(on %s)" "${DEVICE}")" cat << EOF menuentry '$(echo "${LONGNAME} $bitstr $onstr" | grub_quote)' --class osx --class darwin --class os \$menuentry_id_option 'osprober-xnu-$2-$(grub_get_device_id "${DEVICE}")' { EOF save_default_entry | grub_add_tab prepare_grub_to_access_device ${DEVICE} | grub_add_tab cat << EOF load_video set do_resume=0 if [ /var/vm/sleepimage -nt10 / ]; then if xnu_resume /var/vm/sleepimage; then set do_resume=1 fi fi if [ \$do_resume = 0 ]; then xnu_uuid ${OSXUUID} uuid if [ -f /Extra/DSDT.aml ]; then acpi -e /Extra/DSDT.aml fi if [ /kernelcache -nt /System/Library/Extensions ]; then $1 /kernelcache boot-uuid=\${uuid} rd=*uuid elif [ -f /System/Library/Kernels/kernel ]; then $1 /System/Library/Kernels/kernel boot-uuid=\${uuid} rd=*uuid xnu_kextdir /System/Library/Extensions else $1 /mach_kernel boot-uuid=\${uuid} rd=*uuid if [ /System/Library/Extensions.mkext -nt /System/Library/Extensions ]; then xnu_mkext /System/Library/Extensions.mkext else xnu_kextdir /System/Library/Extensions fi fi if [ -f /Extra/Extensions.mkext ]; then xnu_mkext /Extra/Extensions.mkext 
fi if [ -d /Extra/Extensions ]; then xnu_kextdir /Extra/Extensions fi if [ -f /Extra/devprop.bin ]; then xnu_devprop_load /Extra/devprop.bin fi if [ -f /Extra/splash.jpg ]; then insmod jpeg xnu_splash /Extra/splash.jpg fi if [ -f /Extra/splash.png ]; then insmod png xnu_splash /Extra/splash.png fi if [ -f /Extra/splash.tga ]; then insmod tga xnu_splash /Extra/splash.tga fi fi } EOF } used_osprober_linux_ids= wubi= for OS in ${OSPROBED} ; do DEVICE="`echo ${OS} | cut -d ':' -f 1`" LONGNAME="`echo ${OS} | cut -d ':' -f 2 | tr '^' ' '`" LABEL="`echo ${OS} | cut -d ':' -f 3 | tr '^' ' '`" BOOT="`echo ${OS} | cut -d ':' -f 4`" PTLABEL="`echo $(blkid -po udev ${DEVICE} |grep LABEL_ENC| sed 's/^.*=//')`" if UUID="`${grub_probe} --target=fs_uuid --device ${DEVICE%@*}`"; then EXPUUID="$UUID" if [ x"${DEVICE#*@}" != x ] ; then EXPUUID="${EXPUUID}@${DEVICE#*@}" fi if [ "x${GRUB_OS_PROBER_SKIP_LIST}" != "x" ] && [ "x`echo ${GRUB_OS_PROBER_SKIP_LIST} | grep -i -e '\b'${EXPUUID}'\b'`" != "x" ] ; then echo "Skipped ${LONGNAME} on ${DEVICE} by user request." 
>&2 continue fi fi BTRFS="`echo ${OS} | cut -d ':' -f 5`" if [ "x$BTRFS" = "xbtrfs" ]; then BTRFSuuid="`echo ${OS} | cut -d ':' -f 6`" BTRFSsubvol="`echo ${OS} | cut -d ':' -f 7`" fi if [ -z "${LONGNAME}" ] ; then LONGNAME="${LABEL}" fi # os-prober returns text string followed by optional counter CLASS="--class $(echo "${LABEL}" | LC_ALL=C sed 's,[[:digit:]]*$,,' | cut -d' ' -f1 | tr 'A-Z' 'a-z' | LC_ALL=C sed 's,[^[:alnum:]_],_,g')" gettext_printf "Found %s on %s labeled %s\n" "${LONGNAME}" "${DEVICE}" "${PTLABEL}" >&2 case ${BOOT} in chain) case ${LONGNAME} in Windows*) if [ -z "$wubi" ]; then if [ -x /usr/share/lupin-support/grub-mkimage ] && \ /usr/share/lupin-support/grub-mkimage --test; then wubi=yes else wubi=no fi fi if [ "$wubi" = yes ]; then echo "Skipping ${LONGNAME} on Wubi system" >&2 continue fi ;; esac found_other_os=1 onstr="$(gettext_printf "(on %s)" "${DEVICE}")" cat << EOF menuentry '$(echo "${LONGNAME} $onstr" | grub_quote)' $CLASS --class os \$menuentry_id_option 'osprober-chain-$(grub_get_device_id "${DEVICE}")' { EOF save_default_entry | grub_add_tab prepare_grub_to_access_device ${DEVICE} | grub_add_tab if [ x"`${grub_probe} --device ${DEVICE} --target=partmap`" = xmsdos ]; then cat << EOF parttool \${root} hidden- EOF fi case ${LONGNAME} in Windows\ Vista*|Windows\ 7*|Windows\ Server\ 2008*) ;; *) cat << EOF drivemap -s (hd0) \${root} EOF ;; esac cat <<EOF chainloader +1 } EOF ;; efi) found_other_os=1 EFIPATH=${DEVICE#*@} DEVICE=${DEVICE%@*} onstr="$(gettext_printf "(on %s)" "${DEVICE}")" cat << EOF menuentry '$(echo "${LONGNAME} $onstr" | grub_quote)' $CLASS --class os \$menuentry_id_option 'osprober-efi-$(grub_get_device_id "${DEVICE}")' { EOF save_default_entry | sed -e "s/^/\t/" prepare_grub_to_access_device ${DEVICE} | sed -e "s/^/\t/" cat <<EOF chainloader ${EFIPATH} } EOF ;; linux) if [ "x$BTRFS" = "xbtrfs" ]; then LINUXPROBED="`linux-boot-prober btrfs ${BTRFSuuid} ${BTRFSsubvol} 2> /dev/null | tr ' ' '^' | paste -s -d ' '`" else 
LINUXPROBED="`linux-boot-prober ${DEVICE} 2> /dev/null | tr ' ' '^' | paste -s -d ' '`" fi prepare_boot_cache= boot_device_id= is_top_level=true title_correction_code= OS="${LONGNAME}" for LINUX in ${LINUXPROBED} ; do LROOT="`echo ${LINUX} | cut -d ':' -f 1`" LBOOT="`echo ${LINUX} | cut -d ':' -f 2`" LLABEL="`echo ${LINUX} | cut -d ':' -f 3 | tr '^' ' '`" LKERNEL="`echo ${LINUX} | cut -d ':' -f 4`" LINITRD="`echo ${LINUX} | cut -d ':' -f 5`" LPARAMS="`echo ${LINUX} | cut -d ':' -f 6- | tr '^' ' '`" if [ -z "${LLABEL}" ] ; then LLABEL="${LONGNAME}" fi if [ "${LROOT}" != "${LBOOT}" ]; then LKERNEL="${LKERNEL#/boot}" LINITRD="${LINITRD#/boot}" fi if [ -z "${prepare_boot_cache}" ]; then prepare_boot_cache="$(prepare_grub_to_access_device ${LBOOT} | grub_add_tab)" [ "${prepare_boot_cache}" ] || continue fi found_other_os=1 onstr="$(gettext_printf "(on %s)" "${DEVICE}")" recovery_params="$(echo "${LPARAMS}" | grep 'single\|recovery')" || true counter=1 while echo "$used_osprober_linux_ids" | grep 'osprober-gnulinux-$LKERNEL-${recovery_params}-$counter-$boot_device_id' > /dev/null; do counter=$((counter+1)); done if [ -z "$boot_device_id" ]; then boot_device_id="$(grub_get_device_id "${DEVICE}")" fi used_osprober_linux_ids="$used_osprober_linux_ids 'osprober-gnulinux-$LKERNEL-${recovery_params}-$counter-$boot_device_id'" if [ "x$is_top_level" = xtrue ] && [ "x${GRUB_DISABLE_SUBMENU}" != xy ]; then cat << EOF menuentry '$(echo "$OS $PTLABEL $onstr" | grub_quote)' $CLASS --class gnu-linux --class gnu --class os \$menuentry_id_option 'osprober-gnulinux-simple-$boot_device_id' { EOF save_default_entry | grub_add_tab printf '%s\n' "${prepare_boot_cache}" cat << EOF linux ${LKERNEL} ${LPARAMS} EOF if [ -n "${LINITRD}" ] ; then cat << EOF initrd ${LINITRD} EOF fi cat << EOF } EOF echo "submenu '$(gettext_printf "Advanced options for %s" "${OS} ${PTLABEL} $onstr" | grub_quote)' \$menuentry_id_option 'osprober-gnulinux-advanced-$boot_device_id' {" is_top_level=false fi 
title="${LLABEL} ${PTLABEL} $onstr" cat << EOF menuentry '$(echo "$title" | grub_quote)' --class gnu-linux --class gnu --class os \$menuentry_id_option 'osprober-gnulinux-$LKERNEL-${recovery_params}-$boot_device_id' { EOF save_default_entry | sed -e "s/^/$grub_tab$grub_tab/" printf '%s\n' "${prepare_boot_cache}" | grub_add_tab cat << EOF linux ${LKERNEL} ${LPARAMS} EOF if [ -n "${LINITRD}" ] ; then cat << EOF initrd ${LINITRD} EOF fi cat << EOF } EOF if [ x"$title" = x"$GRUB_ACTUAL_DEFAULT" ] || [ x"Previous Linux versions>$title" = x"$GRUB_ACTUAL_DEFAULT" ]; then replacement_title="$(echo "Advanced options for ${OS} $onstr" | sed 's,>,>>,g')>$(echo "$title" | sed 's,>,>>,g')" quoted="$(echo "$GRUB_ACTUAL_DEFAULT" | grub_quote)" title_correction_code="${title_correction_code}if [ \"x\$default\" = '$quoted' ]; then default='$(echo "$replacement_title" | grub_quote)'; fi;" grub_warn "$(gettext_printf "Please don't use old title \`%s' for GRUB_DEFAULT, use \`%s' (for versions before 2.00) or \`%s' (for 2.00 or later)" "$GRUB_ACTUAL_DEFAULT" "$replacement_title" "gnulinux-advanced-$boot_device_id>gnulinux-$version-$type-$boot_device_id")" fi done if [ x"$is_top_level" != xtrue ]; then echo '}' fi echo "$title_correction_code" ;; macosx) if [ "${UUID}" ]; then OSXUUID="${UUID}" osx_entry xnu_kernel 32 osx_entry xnu_kernel64 64 fi ;; hurd) found_other_os=1 onstr="$(gettext_printf "(on %s)" "${DEVICE}")" cat << EOF menuentry '$(echo "${LONGNAME} $onstr" | grub_quote)' --class hurd --class gnu --class os \$menuentry_id_option 'osprober-gnuhurd-/boot/gnumach.gz-false-$(grub_get_device_id "${DEVICE}")' { EOF save_default_entry | grub_add_tab prepare_grub_to_access_device ${DEVICE} | grub_add_tab grub_device="`${grub_probe} --device ${DEVICE} --target=drive`" mach_device="`echo "${grub_device}" | sed -e 's/(\(hd.*\),msdos\(.*\))/\1s\2/'`" grub_fs="`${grub_probe} --device ${DEVICE} --target=fs`" case "${grub_fs}" in *fs) hurd_fs="${grub_fs}" ;; *) hurd_fs="${grub_fs}fs" ;; 
	esac
	cat << EOF
	multiboot /boot/gnumach.gz root=device:${mach_device}
	module /hurd/${hurd_fs}.static ${hurd_fs} --readonly \\
	            --multiboot-command-line='\${kernel-command-line}' \\
	            --host-priv-port='\${host-port}' \\
	            --device-master-port='\${device-port}' \\
	            --exec-server-task='\${exec-task}' -T typed '\${root}' \\
	            '\$(task-create)' '\$(task-resume)'
	module /lib/ld.so.1 exec /hurd/exec '\$(exec-task=task-create)'
}
EOF
    ;;
    minix)
	cat << EOF
menuentry "${LONGNAME} (on ${DEVICE}, Multiboot)" {
EOF
	save_default_entry | sed -e "s/^/\t/"
	prepare_grub_to_access_device ${DEVICE} | sed -e "s/^/\t/"
	cat << EOF
	multiboot /boot/image_latest
}
EOF
    ;;
    *)
	# TRANSLATORS: %s is replaced by OS name.
	gettext_printf "%s is not yet supported by grub-mkconfig.\n" "${LONGNAME}" >&2
    ;;
  esac
done

adjust_timeout

The changes are:

1) Created the variable:

PTLABEL="`echo $(blkid -po udev ${DEVICE} |grep LABEL_ENC| sed 's/^.*=//')`"

2) This variable holds the partition label, and I am labeling my partitions (at least the root partition of every operating system). The label is saved in this variable and then added to the menu entries and submenu entries, independently of which system generates them. Search for PTLABEL in the code above to see where I used it. You can probably also use it for other kinds of OS, but I used it just for Linux distros.

3) Added this variable to the "Found DISTRONAME" message that appears when updating GRUB, so that I can check which of my systems were found. This way I use the labels of my disks to name my GRUB entries.
GRUB configuration to recognize different desktop environments (installations) of same Linux distro
1,550,440,851,000
I am trying to use the following at the top of my authorize file to test a new RADIUS installation on default configs.

head /etc/raddb/mods-config/files/authorize

bob	Cleartext-Password := "hello"
	Reply-Message := "Hello, %{User-Name}"

test	Cleartext-Password := "test"
	Reply-Message := "Hello, %{User-Name}
#
# Configuration file for the rlm_files module.
# Please see rlm_files(5) manpage for more information.

This fails to load at start-up, with the last few lines of the logs looking like this:

/sbin/radiusd -f -X -x
.....
Wed Aug 16 16:37:38 2017 : Debug: reference = "Accounting-Request.%{%{Acct-Status-Type}:-unknown}"
Wed Aug 16 16:37:38 2017 : Debug: }
Wed Aug 16 16:37:38 2017 : Debug: (Loaded rlm_files, checking if it's valid)
Wed Aug 16 16:37:38 2017 : Debug: # Loaded module rlm_files
Wed Aug 16 16:37:38 2017 : Debug: # Instantiating module "files" from file /etc/raddb/mods-enabled/files
Wed Aug 16 16:37:38 2017 : Debug: files {
Wed Aug 16 16:37:38 2017 : Debug: filename = "/etc/raddb/mods-config/files/authorize"
Wed Aug 16 16:37:38 2017 : Debug: usersfile = "/etc/raddb/mods-config/files/authorize"
Wed Aug 16 16:37:38 2017 : Debug: acctusersfile = "/etc/raddb/mods-config/files/accounting"
Wed Aug 16 16:37:38 2017 : Debug: preproxy_usersfile = "/etc/raddb/mods-config/files/pre-proxy"
Wed Aug 16 16:37:38 2017 : Debug: compat = "cistron"
Wed Aug 16 16:37:38 2017 : Debug: }
Wed Aug 16 16:37:38 2017 : Debug: reading pairlist file /etc/raddb/mods-config/files/authorize
Wed Aug 16 16:37:38 2017 : Error: /etc/raddb/mods-config/files/authorize[5]: Parse error (reply) for entry test: Expected end of line or comma
Wed Aug 16 16:37:38 2017 : Error: Failed reading /etc/raddb/mods-config/files/authorize
Wed Aug 16 16:37:38 2017 : Error: /etc/raddb/mods-enabled/files[9]: Instantiation failed for module "files"
After many hours and a lot of googling, I fixed this by taking a harder look at the lines in my authorize file:

bob	Cleartext-Password := "hello"
	Reply-Message := "Hello, %{User-Name}"

test	Cleartext-Password := "test"
	Reply-Message := "Hello, %{User-Name}

The problem was that the trailing " was missing on my test user. Googling for the error did not get me to any useful answers.

Error: /etc/raddb/mods-config/files/authorize[5]: Parse error (reply) for entry test: Expected end of line or comma

I just added the missing " after %{User-Name} and everything worked.

test	Cleartext-Password := "test"
	Reply-Message := "Hello, %{User-Name}"

I hope this saves somebody some time in the future.

$ radtest "test" test 127.0.0.1 1812 testing123
Sent Access-Request Id 25 from 0.0.0.0:59986 to 127.0.0.1:1812 length 74
	User-Name = "test"
	User-Password = "test"
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 1812
	Message-Authenticator = 0x00
	Cleartext-Password = "test"
Received Access-Accept Id 25 from 127.0.0.1:1812 to 0.0.0.0:0 length 33
	Reply-Message = "Hello, test"
freeradius test user fails Parse error (reply) for entry test: Expected end of line or comma
1,550,440,851,000
I have many programs and all of them have some identical values in their config files (most, if not all, of which are in /etc). Let's say it is hostname, which is stored in the config files of Apache, Postfix, SQL, clamAV, whatever... Sometimes I need to change those values. What I do now is to edit all those files and find&replace the previous value with the new one. I would like to change it in one place, and set all those files properly. I thought about bash's export variable, source command or something similar; however, since the config files are not executable, I don't think it will work. What would be the recommended method?
Obviously, you must identify all the parameters that you want to manage, and all locations where they appear.  (Duh.)  You knew that already. Here’s an approach that might get you started on the right track: Choose a character string that will never, ever, appear in one of the configuration files.  (That makes it sound like you must get it right on the first try.  That’s not really true; if you choose a string (for example, @@) and you later need to use that string in one of the files, you can fix it.  You’ll just have to redo a lot of this setup.) For example, a long time ago, Unix had a version control system called the Source Code Control System (SCCS); it used the string @(#) as a string that would never appear naturally in a file.  As far as I know, SCCS isn’t in use any more (at least, not much), so it should be safe to use @(#).  Or you could use something like !user2461440?, or whatever your real name is.  You could include control character(s); e.g., Ctrl+A or Ctrl+G. Choose a naming convention for parameter placeholders.  This could be something simple and straightforward like @(#){HOSTNAME}, @(#){IP}, @(#){GATEWAY}, etc. Create template versions of all your configuration files, like apache.template, etc.  Edit those templates to replace all occurrences of the parameters you want to manipulate with their corresponding parameter placeholders (from the previous paragraph).  You should put these (and the following) in a safe, out-of-the-way place, like a subdirectory of /root. 
Write a script like this: HOST=Zanzibar IP=10.11.12.42 ︙ LOG=/var/log/lumber ︙ fullpath[apache]=/etc/apache.conf fullpath[postfix]=/etc/postfix/configuration ︙ for file in apache postfix … do path=${fullpath[$file]} sed -e "s/@(#){HOSTNAME}/$HOST/g" \ -e "s/@(#){IP}/$IP/g" \ ︙ -e "s|@(#){LOG}|$LOG|g" \ ︙ "$file.template" > "$path.new" && mv "$path" "$path.bak" && mv "$path.new" "$path" done Observe that the subcommand that replaces @(#){LOG} with $LOG uses a different delimiter (|), because the $LOG value contains /s.  Note that, therefore, the @(#) string must not contain this delimiter (|).  (And, of course, it must not contain the standard (/) delimiter.) Arrays (e.g., fullpath[apache]) don’t work in all shells.  If you don’t have bash or another shell that supports arrays, the script will need to be adapted to simulate or work around them. You might need to add chown and chmod commands to the script to set the system attributes of the files correctly.  Or, if you’re really really sure that you’ve gotten the script working correctly, you can modify it to overwrite the files in place, as in sed … > "$path" thus retaining the inode and its attributes, and not use the mv command or the .new and .bak files. When you want to change one of the parameters that you’ve chosen to automate, edit the corresponding assignment statement (e.g., HOST=Wonderland) at the beginning of the script.  If you don’t want to have to edit the script, break the script into two files: one that contains the parameter values (HOST=…, IP=…, etc…) and one that does all the handling of the configuration files.  The second script would source the first one to get the parameter values.  That way, when a parameter value changes, you need to edit only the (script) file that contains the values, and not the main script. Be sure not to manually edit the files in place, as those changes will be overwritten the next time you run the parameterization script.  
Instead, edit the corresponding template file and re-run the script.  You might want to put comments in the files to remind you of this.  (If you don’t like the idea of regenerating all of the configuration files for a change that affects only one of them, you can modify the script so it has the capability to regenerate only selected file(s).)
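The two-file split mentioned above might look like the sketch below. The parameter names come from the answer's script; `demo.template` is a made-up stand-in for the real template files.

```shell
cd "$(mktemp -d)"    # work in a scratch directory for the demo

# params.sh holds nothing but the values that change.
cat > params.sh <<'EOF'
HOST=Zanzibar
IP=10.11.12.42
EOF

# The regeneration script sources the values, then does the sed work.
. ./params.sh
printf 'ServerName @(#){HOSTNAME} on @(#){IP}\n' > demo.template
sed -e "s/@(#){HOSTNAME}/$HOST/g" -e "s/@(#){IP}/$IP/g" \
    demo.template > demo.conf
cat demo.conf    # ServerName Zanzibar on 10.11.12.42
```

When a value changes, you touch only params.sh and re-run the regeneration script.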
How to populate many config files with same value(s) [closed]
1,550,440,851,000
I am trying to get Wifi working on my Arch Linux installation so I have installed broadcom-wl-dkms, but it still does not seem to work. I noticed that on every startup I got this message:

Support for cores revisions 0x17 and 0x18 disabled by module param allhwsupport=0. Try b43.allhwsupport=1

So I enabled them as it said, but still the Wifi doesn't work. During the broadcom-wl-dkms installation I was told to run the following commands or reboot (neither worked):

rmmod b43 b43legacy ssb bcm43xx brcm80211 brcmfmac brcmsmac bcma wl
modprobe wl

Upon running the first one I got this output:

rmmod: ERROR: Module b43legacy is not currently loaded
rmmod: ERROR: Module bcm43xx is not currently loaded
rmmod: ERROR: Module brcm80211 is not currently loaded
rmmod: ERROR: Module brcmfmac is not currently loaded
rmmod: ERROR: Module wl is not currently loaded

And the second gave this output:

modprobe: FATAL: Module wl not found in directory /lib/modules/4.11.0-1-hardened

I have also noticed that on kernel updates I get messages like this:

==> dkms remove broadcom-wl/6.30.223.271 -k 4.11.0-1-hardened
Error! There is no instance of broadcom-wl 6.30.223.271 for kernel 4.11.0-1-hardened (x86_64) located in the DKMS tree.

And this:

==> dkms install broadcom-wl/6.30.223.271 -k 4.11.0-2-hardened
Error! Bad return status for module build on kernel: 4.11.0-2-hardened (x86_64)
Consult /var/lib/dkms/broadcom-wl/6.30.223.271/build/make.log for more information.

So I assume that something has gone wrong. What has gone wrong? And how can I fix this and get the Wifi working? This is a Lenovo B590 laptop.
OP has a Broadcom BCM4313 chipset, which is not supported by the b43 driver, so enabling the core revisions listed in the warning will have no effect. Further, this particular chipset is not fully supported by the brcmsmac driver, leaving only Broadcom's own (restrictively-licensed) broadcom-wl driver, specifically the broadcom-wl-dkms variant. However, at the time the Q was posted, the broadcom-wl driver (at least in the Arch repositories) was not yet updated to support kernels 4.11-rc1 or later. These newer kernels changed a bit of the interface to network devices, including removing the last_rx field from struct net_device. As of 10 May 2017, version 6.30.223.271-12 of the broadcom-wl-dkms driver was made available through these repositories, allowing compilation against the 4.11 series kernels.
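Once broadcom-wl-dkms builds against the running kernel, it usually also helps to make sure none of the conflicting in-kernel drivers grab the chip at boot. A sketch of a modprobe blacklist follows; note the Arch package may already install an equivalent file, so check before adding your own:

```
# /etc/modprobe.d/broadcom-wl.conf -- keep the in-kernel Broadcom drivers
# from claiming the device before wl can
blacklist b43
blacklist b43legacy
blacklist ssb
blacklist bcma
blacklist brcmsmac
```

After adding the file, regenerate the initramfs (mkinitcpio -P on Arch) so the blacklist also applies to early boot.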
Unable to get Broadcom wireless drivers working on Arch Linux
1,550,440,851,000
As a standard desktop I use Mate on all my computers with different GNU/Linux distributions and FreeBSD. I have recently upgraded a laptop running Manjaro from Mate-1.16.1 to Mate-1.18.0. With Mate-1.16.1 my desktop looked like this: Notice that the background of the selected workspace on the bottom bar is a solid colour. In the bottom bar, the current active terminal window is shown as a rectangle with a darker background, which is also a solid colour. The same darker background colour is used to highlight menu items in the top menu. With Mate-1.18.0 my desktop looks like this: Now both the background of the chosen workspace and the background of the active window in the bottom bar use a gradient with a slightly darker colour. The highlighted items in the top menu (not shown in the picture) also use a gradient. This difference in colour seems related to the Mate version and not to the desktop theme. I have the first style (solid background) in all my systems using an older Mate version (FreeBSD, Debian 7). I have the second style (darker gradient background) in all the systems using the newer Mate version (Manjaro, Arch, Void). I have tried to switch the style back after the upgrade to the new Mate version but I cannot find any options related to the backgrounds of the elements I have indicated above (selected workspace, selected window, selected menu item). So is there such an option that allows to change the style or is the new style hard-coded in Mate?
MATE is now using GTK+ 3, according to the release notes of MATE 1.18, which is likely the reason why the appearance of the MATE desktop has changed regardless of the theme.

The entire MATE Desktop suite of applications and components is now GTK3+ only! Requires GTK+ >= 3.14. All GTK2+ code has been dropped [...]

Direct answers

This difference in colour seems related to the Mate version and not to the desktop theme.

The recent version of MATE uses GTK+ 3, which means the desktop theme now uses its GTK+ 3 variant and not GTK+ 2 anymore. There are no changes to the theme's background colour itself (#accd8a for Menta), so the colour gradient is one of the differences between the GTK+ 2 and GTK+ 3 variants of the particular theme.

I cannot find any options related to the backgrounds of the elements I have indicated above (selected workspace, selected window, selected menu item).

There is no such option by default, regardless of desktop environment. Those detailed configurations are specified in the theme files.

So is there such an option that allows to change the style...

No, or at least none that I have heard of to date.

...or is the new style hard-coded in Mate?

No, the theme is not hard-coded in MATE. The GTK+ 3 theme files can be found in the /usr/share/themes/THEME/gtk-3.0 directory of THEME. In newer versions of the theme, configuration for MATE desktop components is specified in the ../mate-applications.css file.

Extended answer

When looking into the mate-applications.css file, the relevant parts can be found by using gradient as the keyword. Open the file in a text editor and search for that keyword. For example, the workspace switcher part in the Menta theme:

/* selected WnckPager */
PanelApplet.wnck-applet .wnck-pager:selected {
    background-image: linear-gradient(to bottom, @theme_selected_bg_color, shade (@theme_selected_bg_color, 0.36));
}

That part can simply be modified to drop the gradient.
To begin with, remove the linear-gradient(...) part, leave only the shade() call, and replace background-image with background-color (more details in GTK+ CSS: GTK+ 3 Reference Manual). Then it will look like this:

/* selected WnckPager */
PanelApplet.wnck-applet .wnck-pager:selected {
    background-color: shade (@theme_selected_bg_color, 0.36);
}

To see the changes, open the Appearance settings in MATE, select any other theme, then select the last used theme (i.e. Menta) again. There is no need to log out or restart; just reselect the theme. Do the same for the other desktop components, i.e. panel menu bar, panel applet, etc.

To prevent losing the modified theme, the user should create a copy of the existing theme with a new name, e.g. Menta-custom, and put it in /usr/share/themes. This makes the theme independent, so it persists between system upgrades.

Disclaimer: I do not use the MATE desktop, and I had no time to set up MATE 1.18 for testing; however, theme customization is similarly applicable to other GTK+ environments such as Xfce.

TL;DR The only way to customize the theme to meet user preference, such as removing the colour gradient, is to manually edit the files provided by the theme.
Mate workspace switcher and menu background configuration
1,550,440,851,000
I am running RHEL 6.7 with the following configuration for sshd # $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. #Port 22 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # Disable legacy (protocol version 1) support in the server for new # installations. In future the default will change to require explicit # activation of protocol 1 Protocol 2 AUTOCREATE_SERVER_KEYS=RSAONLY # HostKey for protocol version 1 #HostKey /etc/ssh/ssh_host_key # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key #HostKey /etc/ssh/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 1024 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH SyslogFacility AUTHPRIV LogLevel INFO # Authentication: AllowGroups sshusers LoginGraceTime 2m PermitRootLogin no StrictModes yes MaxAuthTries 3 MaxSessions 3 #RSAAuthentication yes #PubkeyAuthentication yes #AuthorizedKeysFile .ssh/authorized_keys #AuthorizedKeysCommand none #AuthorizedKeysCommandRunAs nobody # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! 
#PasswordAuthentication yes PermitEmptyPasswords no PasswordAuthentication yes # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes ChallengeResponseAuthentication no # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no #KerberosUseKuserok yes # GSSAPI options #GSSAPIAuthentication no GSSAPIAuthentication no #GSSAPICleanupCredentials yes GSSAPICleanupCredentials yes #GSSAPIStrictAcceptorCheck yes #GSSAPIKeyExchange no # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. 
#UsePAM no UsePAM yes # Accept locale-related environment variables AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE AcceptEnv XMODIFIERS #AllowAgentForwarding yes #AllowTcpForwarding yes GatewayPorts no X11Forwarding no #X11Forwarding yes #X11DisplayOffset 10 #X11UseLocalhost yes #PrintMotd yes PrintLastLog yes #TCPKeepAlive yes #UseLogin no UsePrivilegeSeparation yes PermitUserEnvironment no #Compression delayed ClientAliveInterval 900 ClientAliveCountMax 0 #ShowPatchLevel no #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path Banner /etc/issue # override default of no subsystems Subsystem sftp /usr/libexec/openssh/sftp-server # Example of overriding settings on a per-user basis #Match User anoncvs # X11Forwarding no # AllowTcpForwarding no # ForceCommand cvs server # Added for DISA GEN005538 RhostsRSAAuthentication no # Added for DISA GEN005539 Compression delayed # Added for DISA GEN005526 KerberosAuthentication no # FIPS 140-2 Encryption (Highest Level using Counter Mode) Ciphers aes128-ctr,aes192-ctr,aes256-ctr MACs hmac-sha1 When I start up sshd and the startup gets to line 23 AUTOCREATE_SERVER_KEYS=RSAONLY of the configuration file, I get the following error Starting sshd: /etc/ssh/sshd_config: line 23: Bad configuration option: AUTOCREATE_SERVER_KEYS /etc/ssh/sshd_config: terminating, 1 bad configuration options I have looked at the RHEL documentation and configuration options which say this is a valid option syntax for the sshd configuration file, why does mine keep failing if this is the case?
Because AUTOCREATE_SERVER_KEYS is not an sshd_config option at all: it is valid in /etc/sysconfig/sshd, not in /etc/ssh/sshd_config. On RHEL, the init script sources /etc/sysconfig/sshd as a shell environment file before starting the daemon, whereas sshd itself parses sshd_config and rejects any option it does not recognise. Move the line to /etc/sysconfig/sshd and remove it from sshd_config.
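In other words, the line belongs in the service's environment file, not in sshd's own config. Something like this (on RHEL 6, /etc/rc.d/init.d/sshd sources this file at startup):

```
# /etc/sysconfig/sshd
AUTOCREATE_SERVER_KEYS=RSAONLY
```

With that line removed from /etc/ssh/sshd_config, `service sshd start` should no longer abort with the "Bad configuration option" error.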
Bad configuration option: AUTOCREATE_SERVER_KEY [closed]
1,550,440,851,000
How can I establish a reverse ssh tunnel with my ./ssh/config file? I'm trying to reproduce this command ssh [email protected] -L 4444:restricedserver1.org:4420 -L 4445:restricedserver2:4430
Yes. The option is called RemoteForward and has a slightly different syntax. But in your example you are using LocalForward, which would look like this in ssh_config:

Host dmx.com
    User admin
    LocalForward 4444 restricedserver1.org:4420
    LocalForward 4445 restricedserver2:4430
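For an actual reverse tunnel (ports opened on the remote side and forwarded back to you), the equivalent entry uses RemoteForward. A sketch — the localhost targets here are made-up stand-ins for whatever you want reachable from the remote end:

```
Host dmx.com
    User admin
    RemoteForward 4444 localhost:4420
    RemoteForward 4445 localhost:4430
```

With this in ~/.ssh/config, a plain `ssh dmx.com` sets up the forwards, just like passing -R on the command line.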
Reverse ssh tunnel in with .ssh/config
1,550,440,851,000
I would like Alsa to output everything at 44.1kHz (by default it looks like it's using 48kHz). I understand that the correct option would be something like: defaults.pcm.dmix.rate 44100 But where should this be included: in .asoundrc or .asoundrc.asoundconf? Does .asoundrc override settings in .asoundrc.asoundconf?
Possible locations for the configuration file are /etc/asound.conf for all users, or ~/.asoundrc for a single user. The file ~/.asoundrc.asoundconf is a file created by the asoundconf tool, and should not be edited by hand.
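So for a single user, the setting from the question would go into ~/.asoundrc (or into /etc/asound.conf for all users):

```
# ~/.asoundrc -- force the dmix plugin to resample everything to 44.1 kHz
defaults.pcm.dmix.rate 44100
```

Applications using the default ALSA device pick this up the next time they open the PCM; no daemon needs restarting.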
Setting Alsa to output 44.1kHz
1,550,440,851,000
I poke around in make menuconfig, selecting this and deselecting that, then rebuild the Linux kernel and boot it. How can I confirm that the selections I made via menuconfig exist after booting? lsmod?
Depending on your distribution and kernel version the configuration of the currently running kernel can be in one of the following locations: /proc/config.gz /boot/config /boot/config-$(uname -r) The first one provides the proc filesystem and must be configured in the kernel config: General Setup ---> <*> Kernel .config support [*] Enable access to .config through /proc/config.gz
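With one of those files in hand, checking an individual option is a one-line grep. A small helper sketch (option name and config path are whatever you need):

```shell
# check_kconfig OPTION FILE -- print the line for one option from a kernel
# config file, whether it is enabled (=y/=m) or explicitly "is not set".
check_kconfig() {
    grep -E "^(# )?${1}[= ]" "$2"
}

# check_kconfig CONFIG_CPU_FREQ "/boot/config-$(uname -r)"
# (for /proc/config.gz, decompress first: zcat /proc/config.gz | grep ...)
```

Note that lsmod only shows loaded modules; built-in (=y) options never appear there, which is why grepping the config is the reliable check.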
How to confirm CONFIG setting made with make menuconfig?
1,550,440,851,000
(This might be off-topic and/or not answerable, but I want to ask anyway.) Recently I have been managing a lot of Linux VPS servers for my personal and professional projects. However, I am kind of tired of the repetitive tasks. Let's say I have to do the following after installation of a VPS:

add some users, add them to sudoers
install the basic needed packages from apt-get
find out that package X is not in the basic repository, so I add some repositories
do some basic configuration, both as root and as a user, copy-pasting some stuff from the internet into some files and seeing what sticks
finally start coding

Is there any way to automate the whole process? Basically to "seal" the whole configuration, so I can then do all this in a somehow simpler way.
This task is an example of "configuration management". As you might expect, many other people have also had the same questions as you. The general class of software that performs this function is called configuration management software. Some popular examples are: Chef Puppet SaltStack
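If a full configuration management tool feels heavy at first, the core primitive they all provide — idempotent changes, safe to re-run — can be sketched in plain shell. A toy "ensure this line exists" helper (hypothetical; tools like the ones above do this properly, with templating, reporting and rollback):

```shell
# ensure_line FILE LINE -- append LINE to FILE only if it is not already
# there, so re-running the whole provisioning script is harmless.
ensure_line() {
    grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

# ensure_line /etc/ssh/sshd_config 'PermitRootLogin no'
```

Running the script twice leaves the file unchanged the second time, which is exactly the property that makes repeated provisioning of many VPSes tractable.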
How to replicate basic configuration tasks? [duplicate]
1,550,440,851,000
I'm trying to follow the directions here under "sudo and multiple users". I believe I managed to modify /etc/sudoers correctly enough (by adding Defaults :me env_keep += "HGRCPATH" at the beginning of the defaults section, but later cutting :me because sudo was giving me parse errors) because I get this: [me /]$ su Password: [root /]$ echo $HGRCPATH /home/me/.hgrc However, when I try to actually use hg, I run into trouble: [me /]$ sudo hg commit -m "Initial check-in." abort: no username supplied (see "hg help config") Indeed: [me /]$ sudo hg debugconfig --debug | grep read read config from: /usr/etc/mercurial/hgrc read config from: /etc/mercurial/hgrc read config from: /etc/mercurial/hgrc.d/mergetools.rc read config from: /root/.hgrc Why does hg appear to be ignoring $HGRCPATH and looking in /root/.hgrc rather than /home/me/.hgrc? UPDATE Here are the non-commented lines of /etc/sudoers: $ sudo cat /etc/sudoers | grep '^[^#]' Defaults env_keep += "HGRCPATH" Defaults requiretty Defaults !visiblepw Defaults always_set_home Defaults env_reset Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS" Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE" Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES" Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE" Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY" Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin root ALL=(ALL) ALL me ALL=(ALL) ALL
Defaults env_keep += "HGRCPATH" Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS" That second line resets env_keep. Either stick to += or move the = line before any += line.
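With the question's lines reordered, the relevant part of /etc/sudoers would read like this — the `+=` addition must come after the `=` line that (re)initialises the list:

```
Defaults env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
# (remaining env_keep += lines from the question here)
Defaults env_keep += "HGRCPATH"
```

Always edit with visudo, which syntax-checks the file before saving, and verify with `sudo sudo -V | grep HGRCPATH` (or simply `sudo env | grep HGRCPATH`).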
HGRCPATH kept in /etc/sudoers, yet ignored by hg?
1,550,440,851,000
Xmonad uses

1) Mod+2 for switching to workspace 2
2) Mod+Shift+2 for moving a window to workspace 2

How would you remap 1) to Mod+k and 2) to Mod+Shift+k in ~/.xmonad/xmonad.hs?
You can do it, but it's not particularly pleasant (and don't forget that in the default configuration, M-k and MS-k are already used to cycle between windows and move windows around in the stack order - you probably don't want to mask those functions). What follows is based on a brief look at the source in XMonad/Config.hs. You will need to import XMonad.StackSet: import qualified XMonad.StackSet as W and in your keybindings, you want a couple of lines like this: , ((0 .|. modMask, xK_k), windows $ W.greedyView "2") , ((shiftMask .|. modMask, xK_k), windows $ W.shift "2") Note that unless you explicitly remove the bindings for (or rebind) M-2 and MS-2, they'll still behave as before.
Remapping keys for workspaces in Xmonad
1,550,440,851,000
If I move settings.xml (or any other file) out of the .purple folder to another place and create a symlink to the file instead, it gets replaced by a regular file after a restart of Pidgin. I want to put some configuration files into a git repository and use symlinks to it. That worked for all other programs, but Pidgin seems to delete the symlinks.

cd .purple
mv settings.xml ../
ln -s ../settings.xml
ls -l settings.xml
settings.xml -> ../settings.xml

restart pidgin

ls -l settings.xml
settings.xml

Why is that happening, and what can I do to prevent this behaviour?
Pidgin saves its settings to settings.xml every time, and does it in the easiest and safest way: it writes everything into a new temporary file and then renames it to settings.xml. To stop this behaviour, you would need to modify libpurple (bundled with Pidgin). The relevant code is probably in libpurple/util.c.
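You can reproduce the effect in a shell to see why the symlink disappears: rename(2) replaces the directory entry itself, so the new regular file takes the symlink's place and the old target is never touched.

```shell
# Simulate the write-then-rename save pattern in a scratch directory.
cd "$(mktemp -d)"
echo old > real.xml
ln -s real.xml settings.xml        # settings.xml -> real.xml

echo new > settings.xml.tmp        # write the new contents elsewhere
mv settings.xml.tmp settings.xml   # atomic replace of the directory entry

ls -l settings.xml                 # now a regular file, not a symlink
cat real.xml                       # still "old" -- never updated
```

A common workaround for version-controlling such files is to symlink the whole ~/.purple directory into the repository instead of individual files: renames inside the directory then keep working, and the repo still sees every change.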
Pidgin replace my symlinks
1,550,440,851,000
I am compiling PHP 5.3.13 on my server. I want to create a self-contained php5 folder, so the prefix is /usr/local/php5. In this folder I have a lib folder where I put all the libraries PHP needs to run, such as: libk5crypto.so.3 libxml2.so.2 libjpeg.so.62 .... Even if I compile with --with-jpeg-dir=/usr/local/php5/lib/, the php binary still looks for the library in /usr/lib64. The only solution I have found so far is to manually export LD_LIBRARY_PATH=/usr/local/php5/lib. I would like the same to happen automatically at compile time. Is that possible?
There are two distinct linker paths, the compile time, and the run time. I find autoconf (configure) is rarely set up to do the correct thing with alternate library locations, using --with-something= usually does not generate the correct linker flags (-R or -Wl,-rpath). If you only had .a libraries it would work, but for .so libraries what you need to specify is the RPATH: export PHP_RPATHS=/usr/local/php5/lib ./configure [options as required] (In many cases just appending LDFLAGS to the configure command is used, but PHP's build process is slightly different.) This effectively adds extra linker search paths to each binary, as if those paths were specified in LD_LIBRARY_PATH or your default linker config (/etc/ld.so.conf). This also takes care of adding -L/usr/local/php5/lib to LDFLAGS so that the compile-time and run-time use libraries are from the same directory (there's the potential for problems with mismatched versions in different locations, but you don't need to worry here). Once built, you can check with: $ objdump -j dynamic -x ./sapi/cli/php | grep RPATH RPATH /usr/local/php5/lib $ objdump -j dynamic -x ./libs/libphp5.so | fgrep RPATH RPATH /usr/local/php5/lib Running ldd will also confirm which libraries are loaded from where. What --with-jpeg-dir should be really be used for is to point at /usr/local/ or some top-level directory, the directories include/, lib/, and possibly others are appended depending on what the compiler/linker needs. You only need --with-jpeg-dir if configure cannot find the installation, configure will automatically find it in /usr/local and other (possibly platform specific) "standard" places. In your case I think configure is finding libjpeg in a standard place, and silently disregarding the directive. (Also, PHP 5.3.13 is no longer current, I suggest 5.3.21, the current version at this time.)
PHP compilation - link to library
1,550,440,851,000
From here: http://www.xenomai.org/index.php/FAQs#Which_kernel_settings_should_be_avoided.3F

Which kernel settings should be avoided? Note that Xenomai will warn you about known invalid combinations during kernel configuration.
- CONFIG_CPU_FREQ
- CONFIG_APM
- CONFIG_ACPI_PROCESSOR

Now, when I look in the .config, I do find these options clearly, but I don't know their dependencies. So, is it wise to simply put an n next to these options in the .config file? Will the make procedure take care of the dependencies? The make menuconfig window does not present these options explicitly.
make menuconfig does present these options. If you are in the menu, press / and search for e.g. CPU_FREQ. This will show all CONFIG parameters containing CPU_FREQ. It also shows how you can reach each one through the menus, e.g.:

│ Symbol: CPU_FREQ [=y]
│ Type : boolean
│ Prompt: CPU Frequency scaling
│ Defined at drivers/cpufreq/Kconfig:3
│ Location:
│ -> Power management and ACPI options
│ -> CPU Frequency scaling

This means you find it under Power management and ACPI options -> CPU Frequency scaling, and the name of the entry is CPU Frequency scaling.
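If you prefer scripting to the menu, recent kernel source trees also ship a helper for exactly this (./scripts/config --disable CPU_FREQ, and so on). The flip itself is just a text substitution; a minimal sketch:

```shell
# disable_opt NAME FILE -- turn CONFIG_NAME off in a .config file.
# Run `make olddefconfig` (or `make oldconfig`) afterwards so Kconfig
# re-evaluates everything that depended on the option; editing the
# file alone does not resolve dependencies.
disable_opt() {
    sed -i "s/^CONFIG_$1=.*/# CONFIG_$1 is not set/" "$2"
}

# disable_opt CPU_FREQ .config
# disable_opt APM .config
# disable_opt ACPI_PROCESSOR .config
```

So the answer to "will make take care of the dependencies?" is: only if you let Kconfig re-run (make olddefconfig / oldconfig / menuconfig) after the edit — the build itself does not fix an inconsistent .config.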
Edit the .config file when en/disabling a particular option like CONFIG_CPU_FREQ?
1,550,440,851,000
On the Sawfish Wikia, there's a beautiful image: Does anyone know how to configure sawfish to look like this? I can't find any docs regarding this picture/desktop.
His sawfish.rc is on github here: https://github.com/ZaneA/Dotfiles/blob/master/rc and there are links to the GTK theme and sawfish theme in that screenshot on his deviantart page here: http://hashbox.deviantart.com/art/Arch-170211-197724511
Searching for a sawfish config
1,550,440,851,000
I run an Ubuntu-based server on a raspi (6.5.0-1015-raspi #18-Ubuntu). On this system I have knxd running, exposing a KNX bus to my server, and then Home Assistant in a docker container. knxd and docker are configured as systemd services. knxd is configured to create a UNIX domain socket at /tmp/eib, with the command line argument -u /tmp/eib. This works when I start the service, e.g. using systemctl start, once the system is running. However, after reboot there is a directory at /tmp/eib, owned by root:root, which blocks knxd from creating its domain socket. knxd then (understandably) crashes on start. When I manually remove the directory with sudo rm -rf /tmp/eib and then systemctl restart knxd, knxd manages to create the correct socket and starts up successfully.

# after reboot. With this in place, knxd crashes on startup.
$ ll -d /tmp/eib
drwxr-xr-x 2 root root 4096 Apr 27 09:33 /tmp/eib/

# If I manually remove the file...
$ sudo rm -rf /tmp/eib
$ sudo systemctl restart knxd
# ... wait a bit ...
# ... then knxd comes up successfully and creates the correct file+permissions
$ ll -d /tmp/eib
srwxr-xr-x 1 knxd knxd 0 Apr 27 09:39 /tmp/eib=

# now knxd and everything depending on it works fine

How do I debug who's creating this directory? How do I set things up so that knxd comes up successfully after a reboot?
knxd github repository at https://github.com/knxd/knxd mentions where the socket is created: "If you use Debian Jessie or another systemd-based distribution, /lib/systemd/system/knxd.socket is used to open the "standard" sockets on which knxd listens to clients. You no longer need your old -i or -u options." It also advises using /run/ instead of /tmp: "knxd's Unix socket should never have been located in /tmp; the default is now /run/knx. You can add a -u /tmp/eib (or whatever) option if necessary, but it's better to fix the clients."
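So rather than fighting over /tmp/eib, move the socket to /run as the README recommends. One way is a systemd drop-in; the binary path and trailing options below are placeholders — keep whatever your existing unit uses and change only the -u argument:

```
# /etc/systemd/system/knxd.service.d/socket-path.conf
[Service]
ExecStart=
ExecStart=/usr/bin/knxd -u /run/knx <your other options>
```

After `systemctl daemon-reload` and a restart, point the Home Assistant KNX integration at the new socket path instead of /tmp/eib.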
Unix socket in /tmp turns into directory on reboot
1,550,440,851,000
I have a multi-node (3 nodes) OpenStack cluster deployed by kolla-ansible. Two nodes (the 2nd and 3rd) are working well; one node (the 1st) has some containers that keep restarting with error logs, e.g. the kolla_toolbox container:

+ sudo -E kolla_set_configs
sudo: unknown uid 42401: who are you?

I checked the kolla_toolbox container's /etc/passwd file; it has the same md5sum as on the other two normal nodes, and it contains the line ansible:x:42401:42401::/var/lib/ansible:/usr/sbin/nologin. The result of id 42401 and id ansible in all containers of the three nodes is:

uid=42401(ansible) gid=42401(ansible) groups=42401(ansible),42400(kolla)

and on the three hypervisor nodes it is:

no such user

I ran docker image rm kolla_toolbox, pulled the image again and redeployed on the 1st node, but the issue still exists, while it works on the other two nodes. What is wrong with docker or the containers on the 1st node? How can I fix it? kolla_set_configs is a Python file at /usr/local/bin/kolla_set_configs, found only inside the container, and I can't figure out which part of it produces the error.
Apparently the UID of the ansible user is the same across all three nodes for the kolla_toolbox container, but maybe there is some other reference, condition or dependency involving other containers where the UID is different. I had a similar issue in a bare-metal installation of OpenStack where we had to reinstall a control node, and the uid/gid of the critical users (cinder, nova) came out different. We use a mounted CephFS to be able to live-migrate and for cinder conversion. I don't see any other way to fix this than to redeploy all containers (on the first node) with the exact same version so the UID/GID are the same.
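To audit this across nodes, comparing UIDs straight out of each passwd file is enough. A small helper sketch — feed it /etc/passwd copies pulled from each node or container (e.g. with `docker cp container:/etc/passwd passwd.node1`):

```shell
# uid_of USER FILE -- print the numeric UID for USER from a passwd-format
# file; passwd fields are colon-separated, UID is field 3.
uid_of() {
    awk -F: -v u="$1" '$1 == u { print $3 }' "$2"
}

# uid_of ansible passwd.node1   # must print the same number on every node
```

If the numbers differ between the hosts and the containers that share volumes or state, that mismatch is the thing to fix (typically by redeploying with matching image versions, as above).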
docker logs err:"+ sudo -E kolla_set_configs sudo: unknown uid 42401: who are you?" in openstack container
1,550,440,851,000
Well, one picture for thousand words. 3 private subnets: +-----+ +-----+ | PC2 | | PC3 | Linux | .2 | | .3 | __ +----------+ +-----+ +-----+ i \ | .1| | | n ) +-----+ | ------|-----+----------+----- t ( | | 192.168.0.0/24 |.1/ / |eth1 192.168.1.0/24 e > ----|FW .2|------------------| < LR X | r ( | | eth0| \ \ |eth2 192.168.2.0/24 n ) +-----+ | ------|-----+----------+----- e__/ | .1| | | +----------+ +-----+ +-----+ Router | .4 | | .5 | | PC4 | | PC5 | +-----+ +-----+ Linux Router: ifcace eth0 inet static address 192.168.0.1/24 gateway 192.168.0.2 dns-nameservers 8.8.8.8 ifcace eth1 inet static address 192.168.1.1/24 ifcace eth2 inet static address 192.168.2.1/24 PCx (..1.x): ifcace eth0 inet static address 192.168.1.x/24 gateway 192.168.1.1 dns-nameservers 8.8.8.8 PCx (..2.x): ifcace eth0 inet static address 192.168.2.x/24 gateway 192.168.2.1 dns-nameservers 8.8.8.8 LR # echo "1" > /proc/sys/net/ipv4/ip_forward # ip route list default via 192.168.0.2 dev eth0 onlink 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.1 192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.1 192.168.2.0/24 dev eth2 proto kernel scope link src 192.168.2.1 Linux Router can easily ping to FW and comunicate with public Internet. LR can also ping all PCs. PCx can ping up to 192.168.0.1 address but cannot ping to FW 192.168.0.2 (Host Unreachable) It is not intended to route between 192.168.1.0/24 and 192.168.2.0/24, but it is highly expected to reach the public Internet through FW. I know, that it is possible to do something with iptables NAT, what means to config two firewalls tandem, but that is not what we need. The simple static route is prefered. I googled a note, that there could be helpful to set "ip rules" but did not understand how. Please, can you let me know, what the damned config can set the expected routing? ip route could be powerfull tool, but some clear tutorial with examples should be very usefull.
Network Address Translation (NAT) doesn't particularly care what the local LAN IPs are. It just knows that any traffic forwarded out the external interface to the internet has to carry a "From" address in the IP packet that matches the public IP on the external interface of FW. So there's no reason your diagram as shown won't work for outgoing traffic. But when traffic comes back in, again carrying the "To" IP address of FW's external interface, NAT will dutifully convert the "To" address to match the original 192.168.X.0/24 address recorded in the NAT table. Here's where the problem begins. If the original address was 192.168.0.x/24, no problem, because FW has an interface on that network, and thus knows how to reach those hosts directly. But if the original IP was 192.168.1.x or 192.168.2.x, then FW has no clue where those IPs are. It doesn't have an interface on either of those networks, so it can't reach them directly; and FW doesn't know (unless you tell it) that those IPs should be routed back to LR so that LR can forward them on. The solution to that problem would be to set up static routes on FW that tell it where to route traffic for networks 192.168.1.0/24 and 192.168.2.0/24, namely to route both networks to 192.168.0.1. Given your comment that says you can use the NAT functionality on FW but you cannot change its routing table, the simplest configuration that will work would be to change LR from functioning as a router, to function as a bridge. Recall that a bridge connects multiple hosts on the same network broadcast domain (by learning the MAC addresses present on each of the bridged interfaces), much the same way a network switch does. 
If you re-configure router LR as a bridge LB (bridging eth0, eth1 and eth2), use 192.168.0.x/24 IPs throughout, and set 192.168.0.2 as the default gateway for all hosts, then FW will not only forward NAT traffic out, but when traffic comes back in, it will be able to deliver it back to the 192.168.0.X network without issue.
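A sketch of that bridge setup in Debian-style /etc/network/interfaces syntax (interface names taken from the diagram; assumes the bridge-utils package is installed — adjust to your distribution):

```
# If FW's routing table were editable, these two static routes on FW would
# be enough instead:
#   ip route add 192.168.1.0/24 via 192.168.0.1
#   ip route add 192.168.2.0/24 via 192.168.0.1
# Since it is not, turn LR's three interfaces into one bridge:
auto br0
iface br0 inet static
    bridge_ports eth0 eth1 eth2
    address 192.168.0.1
    netmask 255.255.255.0
    gateway 192.168.0.2
```

The address on br0 is only for managing LR itself; the bridged hosts all sit directly on 192.168.0.0/24 and use 192.168.0.2 as their gateway.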
How to config static route from two subnets to firewall
1,638,714,919,000
My /etc/network/interfaces file contains the line: iface default inet dhcp I vaguely remember putting it in there years ago, but I don't remember why I did so. If I am not mistaken, iface precedes a network interface configuration, inet stipulates ipv4 address specs, and dhcp means: get your address, mask, and gateway info from a DHCP server. But what does default mean in this case? Does it refer to some default interface? If so, where would such a thing be specified? If not, does it refer to a default configuration that applies to all existing interfaces not otherwise configured? Generally, what is the purpose, if any, of such a line? When I remove the line in question my computer seems to keep on connecting to networks just fine. I looked at man interfaces and googled quite a bit, but I have been unable, so far, to find any official explanation for the use of default in this case. Any information would be greatly appreciated.
The name "default" is just a placeholder and can be used to specify how an interface should be loaded:

#auto eth0=foo
iface foo inet dhcp

iface bar inet static
    address 192.168.178.2
    gateway 192.168.178.1
    dns-nameserver 192.168.178.1
    ...

Then you could bring the interface up like this:

ifup eth0=bar

There are many configurations possible this way. Have a look at man interfaces and man ifup.
Meaning of "default" in "iface default inet dhcp" in interfaces file
1,638,714,919,000
Whisker Menu has a great but quite underrated feature called "Search Actions" that can easily trigger a predefined command to search/open/run various folders/files/programs very quickly, by assigning a "pattern" in the form of one or more characters. It has some default ones, like "run in terminal" by typing ! and then the desired command. Other, more interesting ones can be added, like running a search for files or folders through a search tool like Catfish, starting CD/DVD playback, opening specific files or folders, and many others. Any others can be added as well, such as starting any application, logging out, restarting, shutting down, upgrading, and what not. In this sense the name "Search Actions" can be misleading, because they can do, and even by default do, more than simply "searching". I was interested in a rather marginal problem (Can the Whisker-menu "Search Actions" feature use a custom icon?), but one that could be approached more closely by accessing the files that store these "search actions". That way they could be saved and maybe fine-tuned to serve more specific needs. Where are these settings stored?
The configuration file for the Whisker Menu is saved in your xfce4 panel directory:

~/.config/xfce4/panel/whiskermenu-1.rc

The actions defined at the bottom of the file contain the same properties as in the "Search Actions" dialog, i.e. name, pattern, command and a boolean regex flag.

$ tail -18 ~/.config/xfce4/panel/whiskermenu-1.rc
[action2]
name=Wikipedia
pattern=!w
command=exo-open --launch WebBrowser https://en.wikipedia.org/wiki/%u
regex=false

[action3]
name=Run in Terminal
pattern=!
command=exo-open --launch TerminalEmulator %s
regex=false

[action4]
name=Open URI
pattern=^(file|http|https):\\/\\/(.*)$
command=exo-open \\0
regex=true
In what files/form are the "Search actions" of Whisker Menu saved?
1,638,714,919,000
I am trying to install and run a MongoDB server on a CentOS 7 machine. The CentOS 7 machine is in my university campus and I am accessing it from my home over ssh through VPN. I have followed every step given in the link: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/

Here is the output of sudo systemctl start mongod:

Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details.

Here is the output of systemctl status mongod.service:

● mongod.service - MongoDB Database Server
   Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2020-12-30 00:23:07 IST; 1min 41s ago
     Docs: https://docs.mongodb.org/manual
  Process: 61587 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=14)
  Process: 61584 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 61581 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 61578 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)

Dec 30 00:23:07 smart systemd[1]: Starting MongoDB Database Server...
Dec 30 00:23:07 smart mongod[61587]: about to fork child process, waiting until server is ready for connections.
Dec 30 00:23:07 smart mongod[61587]: forked process: 61589
Dec 30 00:23:07 smart mongod[61587]: ERROR: child process failed, exited with 14
Dec 30 00:23:07 smart mongod[61587]: To see additional information in this output, start without the "--for...tion.
Dec 30 00:23:07 smart systemd[1]: mongod.service: control process exited, code=exited status=14
Dec 30 00:23:07 smart systemd[1]: Failed to start MongoDB Database Server.
Dec 30 00:23:07 smart systemd[1]: Unit mongod.service entered failed state.
Dec 30 00:23:07 smart systemd[1]: mongod.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
Here is the output of journalctl -xe:

Dec 30 00:23:07 smart polkitd[1826]: Unregistered Authentication Agent for unix-process:61557:106467206 (system bus
Dec 30 00:23:08 smart dbus[1879]: [system] Successfully activated service 'org.fedoraproject.Setroubleshootd'
Dec 30 00:23:08 smart setroubleshoot[61594]: failed to retrieve rpm info for /proc/sys/net/ipv4/tcp_fastopen
Dec 30 00:23:08 smart setroubleshoot[61594]: SELinux is preventing /usr/bin/mongod from open access on the file /pr
Dec 30 00:23:08 smart python[61594]: SELinux is preventing /usr/bin/mongod from open access on the file /proc/sys/n

***** Plugin catchall (100. confidence) suggests **************************

If you believe that mongod should be allowed open access on the tcp_fastopen f
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# ausearch -c 'mongod' --raw | audit2allow -M my-mongod
# semodule -i my-mongod.pp

Dec 30 00:23:11 smart setroubleshoot[61594]: SELinux is preventing /usr/bin/mongod from unlink access on the sock_f
Dec 30 00:23:11 smart python[61594]: SELinux is preventing /usr/bin/mongod from unlink access on the sock_file mong

***** Plugin catchall (100. confidence) suggests **************************

If you believe that mongod should be allowed unlink access on the mongodb-2701
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# ausearch -c 'mongod' --raw | audit2allow -M my-mongod
# semodule -i my-mongod.pp

Dec 30 00:24:52 smart chronyd[2023]: Source 162.159.200.123 replaced with 5.189.141.35

I don't know where I went wrong or whether I missed some fundamental configuration step. I have tried many online blogs/sites like:

https://unix.stackexchange.com/a/568238/372656
https://stackoverflow.com/a/64818226

but they didn't help. Can anyone please explain how I can resolve this? Thanks in advance.
I suppose I should have put my comment as a solution: the journalctl output says that SELinux is preventing mongod from open access on some file, which is stopping MongoDB from working. You need to put SELinux in permissive mode, or tell SELinux to allow mongod to run. See this link for more details, and this link for a longer explanation; or just set enforcing=0 as per link 1, or edit /etc/selinux/config and set it to permissive.
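For reference, the permanent switch is a one-line change in the SELinux config file (it takes effect at the next reboot; sudo setenforce 0 switches the running system until then):

```
# /etc/selinux/config
SELINUX=permissive
SELINUXTYPE=targeted
```

The cleaner alternative is to keep enforcing mode and load the local policy module that the setroubleshoot messages in the question already suggest (the ausearch | audit2allow pipeline).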
How to install and run MongoDB on CentOS 7?
1,638,714,919,000
Sorry if this is a naive question, but while using the kconfig system for the kernel and buildroot, when I hit / for search, the results always have shortcut numbers on the left to navigate to them quickly. Currently I am using Yocto, and when I enter the busybox menuconfig by issuing bitbake -c menuconfig busybox, I don't see those numbers. Is there any option to make them show up?
Busybox’s version of Kconfig is very old: it was copied from the kernel in 2006, and subsequent changes to the kernel’s Kconfig haven’t been imported. Support for jump keys in search results was added to the kernel in 2012. Busybox’s Kconfig doesn’t support jump keys in search results, and there’s no configuration option to make them show up.
No Numbers on busybox menuconfig search results
1,638,714,919,000
I recently set up a fresh install of CentOS 8 to use the MinGW compiler for C++ (I believe it's removed from CentOS 7). Everything was installed as follows:

yum -y groupinstall "Development Tools"
yum --enablerepo=PowerTools install mingw32-gcc
yum --enablerepo=PowerTools install mingw64-gcc

which did give me the commands I wanted, both i686-w64-mingw32-gcc and x86_64-w64-mingw32-gcc (specifically for targeting Windows builds). I am unable to use them though, because calling either on a simple cpp file gives the error:

x86_64-w64-mingw32-gcc: error trying to exec 'cc1plus': execvp: No such file or directory

I can still compile for Linux with the g++ command without any issue, but what am I missing to be able to use the MinGW compilers?

UPDATE: By the way, this CentOS 8 is running in Docker; I don't know if that makes a difference.
You’re compiling C++ code, so the frontend is looking for the C++ compiler. mingw{32,64}-gcc only provides the C compiler, you need to install the C++ compiler too: dnf --enablerepo=PowerTools install mingw{32,64}-gcc-c++
CentOS 8 Mingw Compile Error with cc1plus
1,638,714,919,000
I'm trying to use certbot to get a certificate for my http server running nextcloud (archarm on a raspi). When I run sudo certbot --apache, I get:

$ sudo certbot --apache
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
No names were found in your configuration files. Please enter in your domain
name(s) (comma and/or space separated)  (Enter 'c' to cancel):

I then enter my domain, example.duckdns.org, upon which I get:

Obtaining a new certificate
Performing the following challenges:
http-01 challenge for example.duckdns.org
Cleaning up challenges
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.

While I do have example_duckdns.conf in /etc/httpd/conf/extra which looks like:

<Directory /var/www/html/nextcloud>
    Require all granted
</Directory>

<VirtualHost *:80>
    DocumentRoot "/var/www/html/nextcloud"
    ServerName example.duckdns.org
    ServerAlias example.duckdns.org
    ServerAdmin [email protected]
    ErrorLog "/var/log/httpd/error_log_example_duckdns_org"
    CustomLog "/var/log/httpd/access_log_example_duckdns_org" combined
</VirtualHost>

And I get:

$ apachectl configtest
Syntax OK

What do I have wrong? Using: Apache/2.4.43
Sure, your /etc/httpd/conf/extra looks correct, but does your main Apache configuration file (or any file included by it) have anything like Include /etc/httpd/conf/extra or IncludeOptional /etc/httpd/conf/* in it anywhere?
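For reference, the relevant lines would look something like this (the main config path /etc/httpd/conf/httpd.conf is an assumption — adjust to where your distribution keeps it); without one of them, files under conf/extra are never read:

```
# either an explicit include of the one file:
Include conf/extra/example_duckdns.conf
# or a wildcard that picks up every .conf file in the directory:
IncludeOptional conf/extra/*.conf
```

Run apachectl configtest again after adding the line; a syntax-OK result with no Include would explain why the VirtualHost is invisible to certbot.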
certbot does not recognize the added VirtualHost
1,638,714,919,000
I'd like to alternately switch the microphone and the speakers off/on with a simple click, to avoid looping the voice back into the speakers during a conversation. Is there such a possibility in an easy way? I thought maybe about a script, but in that case what are the bash commands to switch the microphone and the speakers off/on, and how do I test which are on? Ubuntu 18.04 with ALSA. Thank you.
There is a command-line tool amixer which should allow you to perform the necessary tasks. First, run amixer controls to get a list of control options. You will likely get output like:

numid=XX,iface=MIXER,name='Master Playback Switch'
...
numid=YY,iface=MIXER,name='Capture Switch'

You can get the status of the control option with:

$ amixer cget name='Master Playback Switch'
numid=XX,iface=MIXER,name='Master Playback Switch'
  ; type=BOOLEAN,access=rw------,values=1
  : values=off

To set, use:

$ amixer cset name='Master Playback Switch' 'on'
numid=XX,iface=MIXER,name='Master Playback Switch'
  ; type=BOOLEAN,access=rw------,values=1
  : values=on

So, to switch to "speak" mode, you could use:

amixer cset name='Master Playback Switch' 'off'; amixer cset name='Capture Switch' 'on'

and to switch to "listen" mode:

amixer cset name='Capture Switch' 'off'; amixer cset name='Master Playback Switch' 'on'
Alternate mike / speakers
1,638,714,919,000
Does dash have a non-interactive non-login rc file? I've read the man page, which recommends .profile for login shells and the $ENV environment variable for interactive shells. Is there anything that runs for non-interactive non-login shells, such as zsh's zshenv files, or bash's $BASH_ENV environment variable? Is there an equivalent file for the Bourne shell too? The best I can come up with so far (although not ideal at all, as it requires me editing every single script) is to edit the shebang as follows:

#!/bin/sh
/path/to/script
f

where /path/to/script contains

f() { echo "/path/to/script"; }
A typical shell does not have an rc file that is read for non-interactive shells. .profile is read for a login shell, which is identified by an argv[0] that starts with a -. $ENV is read by an interactive POSIX shell, and if it is not set already, the shell uses its own default: .kshrc for ksh, .bashrc for bash and .shrc for newer versions of the Bourne Shell. Dash however does not define a default $ENV and thus typically does not read it, even when in interactive mode.
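Since nothing is read automatically for non-interactive shells, the usual workaround is to source a shared library explicitly at the top of each script — a sketch with hypothetical paths:

```shell
# a library of shared functions...
cat > /tmp/shlib.sh <<'EOF'
f() { echo "hello from the library"; }
EOF

# ...sourced explicitly (with `.`) by every script that needs it
cat > /tmp/myscript.sh <<'EOF'
#!/bin/sh
. /tmp/shlib.sh
f
EOF

sh /tmp/myscript.sh
```

Unlike BASH_ENV or zshenv, the `. /tmp/shlib.sh` line has to be repeated in each script, which is exactly the drawback the question points out.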
Dash non-interactive non-login rc file
1,638,714,919,000
Trying to configure PHP to perform a core dump, I executed the following:

[root@myserver ~]# echo '/tmp/core-%e.%p' > /proc/sys/kernel/core_pattern
[root@myserver ~]# echo 0 > /proc/sys/kernel/core_uses_pid
[root@myserver ~]# ulimit -c unlimited

I do not know what the original ulimit values were, but they are now as follows:

[michael@myserver ~]$ ulimit
unlimited
[michael@myserver ~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7867
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[michael@myserver ~]$

I also made some changes to the php-fpm config files, but already changed them back to their default settings. I am running CentOS 7. What should I set the ulimit values back to? Also, should I do anything to reverse the two echo commands?
That looks like the default values to me. If you changed them using the CLI, the changes are not permanent. You can restart the session and they will revert to the original values. Similarly, proc changes are not permanent; a reboot will reset them.
Recommended ulimit values for Centos7
1,638,714,919,000
I have a .conf file in key/value format, but there may be some non-unique keys. The distinction between them looks like this:

###
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###

[meta]
# Where the metadata/raft database is stored
dir = "/var/lib/influxdb/meta"

# Automatically create a default retention policy when creating a database.
# retention-autocreate = true

# If log messages are printed for the meta service
# logging-enabled = true

###
### [data]
###
### Controls where the actual shard data for InfluxDB lives and how it is
### flushed from the WAL. "dir" may need to be changed to a suitable place
### for your system, but the WAL settings are an advanced configuration. The
### defaults should work for most systems.
###

[data]
# The directory where the TSM storage engine stores TSM files.
dir = "/var/lib/influxdb/data"

# The directory where the TSM storage engine stores WAL files.
wal-dir = "/var/lib/influxdb/wal"

What I want to achieve is to write a script in Fedora to change the value of the dir key under the data block. I saw a similar script for unique keys here (https://stackoverflow.com/questions/2464760/modify-config-file-using-bash-script), but unfortunately this does not work for me. How can I do this?
Assuming your file is named foo.conf and you want to change the dir value to "/dev/sdh", the code below will replace the dir key only in the data section:

sed -re '/^\[data\]$/,/^\[/ s/^(\s+)*(dir = .*)$/\1dir = "\/dev\/sdh"/' foo.conf

The /^\[data\]$/,/^\[/ part makes sed work only on the "data" section. You can replace "data" with any other keyword to make it work for any section.
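To see the command in action, here is the same sed run against a minimal copy of the config (the new value /new/path is just an example):

```shell
# minimal reproduction of the config layout
cat > foo.conf <<'EOF'
[meta]
dir = "/var/lib/influxdb/meta"

[data]
dir = "/var/lib/influxdb/data"
wal-dir = "/var/lib/influxdb/wal"
EOF

# only the dir key inside [data] is rewritten; [meta]'s dir and wal-dir
# are left untouched (wal-dir does not match "dir = " after the anchor)
sed -re '/^\[data\]$/,/^\[/ s/^(\s+)*(dir = .*)$/\1dir = "\/new\/path"/' foo.conf
```

Add -i to sed once the output looks right, to edit the file in place.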
How to change value of a key from terminal among non-unique keys in a .conf file?
1,638,714,919,000
I've just changed the /etc/ssh/sshd_config file on a server, where I want to deny all users except for one in a specific group. This is because I am in the disallowed group myself, but as the server's maintainer I want to be able to access it through ssh. So my problem is that, as it says in the man page, the order of processing rules is "DenyUsers, AllowUsers, DenyGroups, and finally AllowGroups", which (if I'm understanding this correctly) makes it impossible to do something like:

DenyGroups student
AllowUsers myself

because the DenyGroups is more important than the AllowUsers. I have also tried to exclude myself from the DenyGroups by adding a specific Match User myself:

DenyGroups student

Match User myself
    AllowUsers *

But that did not have any effect either, despite the manual saying "the keywords on the following lines override those set in the global section of the config file". How would I go about disallowing the whole group of students except for myself?
As @muru has pointed out, when the manual says "the keywords on the following lines override those set in the global section of the config file", it means you have to use the same keyword, so you can do the following:

DenyGroups student

Match User myself
    DenyGroups none

This worked for me.
How to allow only one user in a group in sshd_config
1,638,714,919,000
While working on a Web API project with Slim, I was using .htaccess for the API folder /v1 under the web root. My OS is Ubuntu 16.04, with Apache/2.4.18. I wanted to apply the .htaccess only to the /v1 folder. The .htaccess file looks like this:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [QSA,L]

When I try to access a file in the /v1 folder I get a 404 response. For example, if I try to access http://localhost/project/v1/loadSomething the response will be 404:

Not Found
The requested URL project/v1/loadSomething was not found on this server.
Apache/2.4.18 (Ubuntu) Server at localhost Port 80

I have tried to make some changes like this:

<Directory "/var/www/html">
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

But the response in this case is 500, Internal Server Error:

The server encountered an internal error or misconfiguration and was unable to complete your request.

The logs look like:

[Sat Aug 10 21:16:11.356667 2019] [core:alert] [pid 4699] [client 127.0.0.1:33852] /var/www/html/project/v1/.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration
[Sat Aug 10 21:20:21.783996 2019] [core:alert] [pid 4700] [client 127.0.0.1:34374] /var/www/html/project/v1/.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration
[Sat Aug 10 21:20:40.368584 2019] [core:alert] [pid 4701] [client 127.0.0.1:34376] /var/www/html/project/v1/.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration

Can anyone help me please?
Since Apache 2.4, the directives Order, Allow and Deny are deprecated and were replaced by the new Require syntax. Replace

Order allow,deny
Allow from all

with

Require all granted

in your config. See https://httpd.apache.org/docs/current/upgrading.html

Also, it seems mod_rewrite is not enabled on your server. Enable the module with the a2enmod command (which creates a symbolic link /etc/apache2/mods-enabled/rewrite.load pointing to ../mods-available/rewrite.load), then restart the server:

sudo a2enmod rewrite
sudo service apache2 restart

To list all enabled modules you can use the a2query command with the -m flag:

a2query -m
Apache/2.4.18 (Ubuntu) Server is not working with RewriteEngine mode by .htaccess for specific folder
1,638,714,919,000
Using the Linux distribution NixOS, I have 2 similar problems:

I have to add my custom_syntax_color_scheme.vim file to the existing /share/vim/vim80/colors folder in the nix store, from the existing package nixos.vim
I have to add a custom-tex-template.tex file to the existing /share/ghc-8.2.2/x86_64-linux-ghc-8.2.2/pandoc-2.0.6/data/templates/ folder in the nix store, from the existing package pandoc (I suppose; otherwise it's nixos.texlive.combined.scheme-full)

I have skimmed through the Nix Pills, but I cannot work out how to solve this particular problem: adding a configuration file to an existing derivation. What is the Nix way of doing it? I suppose I have to create a new derivation that includes the file I want, but I don't know how, or how the existing package will manage to include it. My problem feels similar to "How to add a file to /etc in NixOS?", which now has an answer, but it cannot be applied here.
In both cases, there is no need to alter the files installed by the package:

Put the custom colorscheme files into the folder ~/.vim/colors. This folder needs to be created.
The option passed to pandoc --template should contain either the template file name with the extension, or the path to the template file. (I was following the README file blindly, and it gave command examples with the template name without the file extension.)
How to add an additional configuration file to an existing nix derivation?
1,638,714,919,000
From "View man pages in Vim", I learnt how to open man pages in vim by adding the following lines to ~/.vimrc:

" Enable viewing man page in vim by ":Man ..."
runtime ftplugin/man.vim
" Set keyword 'K' to use ":Man ..." to view man pages in vim
set keywordprg=:Man

However, this can only work for one Man per tab. What I want is to open different Man pages in different splits.
If there is only one manual window, all :Man commands affect that window. However if you split the window, any :Man command will affect the current or last used manual window.
How to open multiple man pages in split in vim?
1,638,714,919,000
I have a secondary network adapter on my VM (VMnet 10). I am running CentOS 7. Now, I can't detect my second NIC. Here are my configurations:

[screenshots of the VM's network adapter settings]

I know that my secondary adapter is ens37, based on the MAC address of network adapter 2. I would like to configure it via the terminal and via the GUI. When I run the command:

ls -la /etc/sysconfig/network-scripts

I see the first NIC, ens33, but I can't see ens37. I can't understand why it was not detected! Is there anything I can do to fix this problem? It is important to note that I added this NIC after the initial OS setup/install, and I can't assign a static IP via the GUI.
You should be able to create the ifcfg file by hand. I think this may be similar to the problem described here: https://serverfault.com/questions/715369/centos-virtualbox-no-icfg-eth1-when-adding-secondary-network-interface
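A minimal hand-written file for the second NIC might look like this (the IP values are placeholders — substitute your own):

```
# /etc/sysconfig/network-scripts/ifcfg-ens37
TYPE=Ethernet
NAME=ens37
DEVICE=ens37
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.10
PREFIX=24
GATEWAY=192.168.10.1
```

After saving it, restart networking (e.g. systemctl restart network) or reboot, then check ip addr show ens37.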
NIC can not detect CentOS VMware
1,638,714,919,000
By default, rsyslog's config file is in /etc/rsyslog.conf. You can set the config file path on startup with the -f /path/to/file option. My question is: how can you find the config file if it's been set somewhere other than the default?
This will depend on the software and how it has been configured; rsyslogd, at least the version present on CentOS 7, does close the configuration file after reading it, so a tool like lsof will not reveal that file once the daemon is up and running:

% sudo lsof -p `pidof rsyslogd` | perl -nle 'print for grep -f, split'
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.
/usr/sbin/rsyslogd
...

The filename does however appear in the process table, which can then be searched for under /etc as that's usually where such configuration is hidden:

% < /proc/`pidof rsyslogd`/cmdline tr '\0' ' ' ; echo
/usr/sbin/rsyslogd -n -f /nunca/adivinarás/esto
% sudo grep -r '/nunca/adivinar' /etc
/etc/sysconfig/rsyslog:SYSLOGD_OPTIONS="-f /nunca/adivinarás/esto"

On a completely unknown system you may need to use something like SystemTap or sysdig—a kernel tracing facility, in other words—and report on what files the daemon uses:

% sudo sysdig -p '%fd.name' 'proc.name = rsyslogd' | tee files-used
...

And then restart the daemon.
How can I find the configuration file for rsyslog if it's not the default?
1,638,714,919,000
Seeking to make a netctl profile for a tap device. Here is the info I was given about the connection:

GATEWAY=192.168.117.1
DNS=192.168.117.1
BROADCAST=255.255.255.255 **or** 192.168.117.255 (*I was given both of these different values*)
PREFIX=31
STATIC IP ADDRESS=192.168.117.2/24
TYPE=TAP

Netctl includes some examples. I used the one I found in examples/tuntap:

Description='Example tuntap connection'
Interface=tun0
Connection=tuntap
Mode='tun'
User='nobody'
Group='nobody'

## Example IP configuration
#IP=static
#Address='10.10.1.2/16'

Here is the profile I came up with:

Description='My tap connection'
Interface=tap0
Connection=tuntap
Mode='tap'
User='nobody'
Group='nobody'
IP=static
Address='192.168.117.2/24'
UsePeerDNS=true
DefaultRoute=true
SkipDAD=yes
DHCPReleaseOnStop=yes

Questions:
Do I need to specify the broadcast address or gateway?
Is a prefix needed (and what is prefix 31)?
Is there anything else I have overlooked?
Do I need to specify the broadcast address or gateway?

From the looks of this article/thread titled "[SOLVED] Static IP wired connection doesn't work with netctl", the broadcast address can be incorporated into the static IP's definition. For example, they provided you with this:

BROADCAST=255.255.255.255 or 192.168.117.255 (I was given both of these different values)

I'd assume that the 2nd one, 192.168.117.255, is in fact correct, which would be a /24 mask, hence your Address= already has it:

Address='192.168.117.2/24'

Is a prefix needed (and what is prefix 31)?

Prefixes, or prefix lengths, are described in these two articles titled "How do prefix-lists work?" and "Working with IP Addresses - The Internet Protocol Journal - Volume 9, Number 1":

excerpt: The prefix length is just a shorthand way of expressing the subnet mask. The prefix length is the number of bits set in the subnet mask; for instance, if the subnet mask is 255.255.255.0, there are 24 bits set.

[table from the article mapping prefix lengths to subnet masks]
How to make a netctl profile for a TAP device?
1,638,714,919,000
I need to add the gateway from the server dashboard page to the routing list, to make IPv6 reachable from the internet. I try to do this using:

ip -6 route add default via <gateway ipv6>

but I get this error:

RTNETLINK answers: No route to host
The system is trying to tell you: "I cannot reach that gateway address through any IPv6 networks I'm connected to." Is the gateway IPv6 address really within the address range of one of the IPv6 networks you're connected to? Ideally, an IPv6 router should be announcing itself using ICMPv6 router advisory messages, so that it could be discovered automatically and you should not have to configure it manually at all.
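If the gateway is on-link, a Debian-style static IPv6 stanza ties it together (the addresses below are documentation placeholders — substitute the ones from your provider's dashboard; the gateway must fall inside the prefix assigned to the interface, or be a link-local fe80:: address reached via that device):

```
# /etc/network/interfaces
iface eth0 inet6 static
    address 2001:db8:1::2/64
    gateway 2001:db8:1::1
```

If the provider only gives a link-local gateway, the equivalent manual command needs the device spelled out, e.g. ip -6 route add default via fe80::1 dev eth0.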
Debian IPv6 routing
1,638,714,919,000
I'm trying to create my own website using JSP, Tomcat, etc. I'm trying to install Tomcat on Ubuntu (on my cloud instance) but I can't access my website. I'm installing Tomcat 9 and I'm using Ubuntu 17.10 x64. I used this tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-apache-tomcat-8-on-ubuntu-16-04 But at Step 8, when I access my website at http://xxx.xxx.xxx.xxx:8080, nothing happens. I'm using Chrome to access the website, and it responds "This page is not working". So I started to investigate using this command:

sudo systemctl status tomcat

[screenshot of the command's output]

But to me this message looks OK. So I tried to create a Node.js server to see if the problem was the firewall or port 8080, and it works; the page loads correctly.
Have you taken a look at the logs? Hopefully you can find them with sudo locate catalina. You could also try deploying the sample application to the webapps directory.
I can't access my website using tomcat
1,638,714,919,000
This is somewhat related to "Play subtitles automatically with mpv". I am running mpv 0.26.0-3 and trying to get the media file to load subtitles, but it is failing, although mediainfo shows that there is an en/utf-8 text track of about 80 KB. The media file is in mkv format:

Format : Matroska
Format version : Version 4 / Version 2
File size : 699 MiB
Duration : 2 h 15 min
Overall bit rate mode : Variable
Overall bit rate : 723 kb/s
Movie name : TamilRockers.com
Encoded date : UTC 2017-10-11 11:55:33
Writing application : mkvmerge v7.8.0 ('River Man') 64bit built on Mar 27 2015 16:31:37
Writing library : libebml v1.3.1 + libmatroska v1.4.2

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : [email protected]
Format settings : CABAC / 4 Ref Frames
Format settings, CABAC : Yes
Format settings, RefFrames : 4 frames
Codec ID : V_MPEG4/ISO/AVC
Duration : 2 h 15 min
Bit rate mode : Variable
Bit rate : 627 kb/s
Maximum bit rate : 40.0 Mb/s
Width : 640 pixels
Height : 272 pixels
Display aspect ratio : 2.35:1
Frame rate mode : Constant
Frame rate : 23.976 (24000/1001) FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.150
Stream size : 606 MiB (87%)
Writing library : x264 core 142 r2431 ac76440
Encoding settings : cabac=1 / ref=5 / deblock=1:0:0 / analyse=0x3:0x113 / me=umh / subme=8 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=6 / lookahead_threads=1 / sliced_threads=0 / slices=4 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=1 / constrained_intra=0 / bframes=3 / b_pyramid=1 / b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=1 / weightp=1 / keyint=24 / keyint_min=1 / scenecut=40 / intra_refresh=0 / rc_lookahead=24 / rc=2pass / mbtree=1 / bitrate=627 / ratetol=1.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / cplxblur=20.0 / qblur=0.5 / vbv_maxrate=40000 / vbv_bufsize=30000 / nal_hrd=vbr / filler=0 / ip_ratio=1.40 / aq=1:1.00
Default : Yes
Forced : No

Audio
ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : HE-AAC / LC
Format settings : Explicit
Codec ID : A_AAC-2
Duration : 2 h 15 min
Bit rate : 93.8 kb/s
Channel(s) : 6 channels
Channel positions : Front: L C R, Side: L R, LFE
Sampling rate : 48.0 kHz / 24.0 kHz
Frame rate : 23.438 FPS (1024 SPF)
Compression mode : Lossy
Delay relative to video : 31 ms
Stream size : 90.7 MiB (13%)
Default : Yes
Forced : No

Text
ID : 3
Format : UTF-8
Codec ID : S_TEXT/UTF8
Codec ID/Info : UTF-8 Plain Text
Duration : 2 h 9 min
Bit rate : 78 b/s
Count of elements : 2491
Stream size : 74.3 KiB (0%)
Default : No
Forced : No

Menu
00:00:00.000 : en:Chapter 01
00:02:05.267 : en:Chapter 02
00:05:09.367 : en:Chapter 03
00:10:03.400 : en:Chapter 04
00:22:55.734 : en:Chapter 05
00:34:40.668 : en:Chapter 06
00:44:35.035 : en:Chapter 07
00:58:14.802 : en:Chapter 08
01:10:22.502 : en:Chapter 09
01:14:09.669 : en:Chapter 10
01:22:36.236 : en:Chapter 11
01:30:17.736 : en:Chapter 12
01:35:45.570 : en:Chapter 13
01:41:16.837 : en:Chapter 14
01:56:03.705 : en:Chapter 15
01:59:13.306 : en:Chapter 16
02:11:47.606 : en:Chapter 17

I have not shared the hash or the filename for privacy reasons, but as can be seen there is this:

Text
ID : 3
Format : UTF-8
Codec ID : S_TEXT/UTF8
Codec ID/Info : UTF-8 Plain Text
Duration : 2 h 9 min
Bit rate : 78 b/s
Count of elements : 2491
Stream size : 74.3 KiB (0%)
Default : No
Forced : No

This is how my ~/.mpv/config is set up:

┌─[shirish@debian] - [~/.mpv] - [10033]
└─[$] cat config
# Write your default config options here!
alang=eng,en,english,hin,hindi
slang=en,eng,english
sub-scale=1.25

I tried to toggle v while the media was playing, but with no success. There are no subs. Toggling v says:
a. Subtitles hidden
b. Subtitles visible (but no subtitles selected)

How do I get out of this quagmire?
The answer is to add either --sid=1 or --sid=2, depending on whether the file carries one or more internal subtitle tracks. The sid flag is also convenient when you have both an internal subtitle and an external subtitle and want to choose between the two.
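Applied to the ~/.mpv/config from the question, the addition is a single line (sid=1 here is illustrative — check mpv's startup track listing to see which id the embedded S_TEXT/UTF8 track actually has):

```
# ~/.mpv/config
alang=eng,en,english,hin,hindi
slang=en,eng,english
sub-scale=1.25
sid=1          # force-select subtitle track 1; use sid=2 etc. for other tracks
```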
unable to get mpv to play embedded subtitles even with config file setting on
1,638,714,919,000
For example, I have a function in my .bashrc file: function open_bashrc() { gedit ~/.bashrc source ~/.bashrc } So anywhere I am, if I type open_bashrc, then it will open the .bashrc file. I can open it and change it, but after I save and click close, it doesn't do the second step source .bashrc. Rather I have to type source ~/.bashrc myself. Why? What's wrong with the function?
I have this in my aliases file and it works: alias bashrc='vim ~/.bashrc && source ~/.bashrc'
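For completeness, the original function approach also works, provided the editor actually blocks until you close the file. If a gedit instance is already running, plain "gedit file" hands the file off to it and returns immediately, so the source line fires before you have saved; newer gedit versions have a --wait (-w) flag for exactly this (a sketch, assuming your gedit supports it):

```shell
# A function variant that only sources after the editor really exits.
# --wait makes gedit block until the tab is closed (newer gedit only);
# without it, a running gedit instance returns control immediately.
open_bashrc() {
    gedit --wait ~/.bashrc
    source ~/.bashrc
}
```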
How to source .bashrc file directly after I close and save?
1,638,714,919,000
I have noticed on my Arch Linux (with GNOME 3.24.2 and GDM) installation that my ~ is filled with files like this and they keep increasing: -rw-r--r-- 1 root root 0 May 8 00:01 wget-log -rw-r--r-- 1 root root 0 May 8 00:01 wget-log.1 -rw-r--r-- 1 root root 0 May 8 00:01 wget-log.2 -rw-r--r-- 1 root root 0 May 8 00:01 wget-log.3 -rw-r--r-- 1 root root 0 May 8 00:01 wget-log.4 -rw-r--r-- 1 root root 0 May 8 20:04 wget-log.5 -rw-r--r-- 1 root root 0 May 8 20:04 wget-log.6 -rw-r--r-- 1 root root 0 May 8 20:04 wget-log.7 -rw-r--r-- 1 root root 0 May 8 20:04 wget-log.8 -rw-r--r-- 1 root root 0 May 8 20:04 wget-log.9 In fact, there would be more if I didn't delete them every day. I have noticed these files appearing after running sudo pacman -Syu, but I have also observed them not appearing after doing so so perhaps it was just coincidence? But I would really like to track down the cause of these empty log files appearing in ~ as they are actually quite annoying and seem to serve no real purpose. So what are they caused by and is there any way I get either stop them from appearing or have them do so in a different location?
This looks like a bug, a regression from wget 1.18 to wget 1.19.1 which is used by Arch Linux. I have opened a bug report here: https://savannah.gnu.org/bugs/?51181 This bug is fixed in Wget 1.19.3, released on 19 January 2018.
Why do I keep getting wget-log file in ~ on Arch Linux?
1,638,714,919,000
I have Arch Linux with the latest grsec-hardened 4.9.x Linux kernel with paxd installed. But because of this when I try to run Java I get the following error: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000035ea1000000, 2555904, 1) failed; error='Operation not permitted' (errno=1) # # There is insufficient memory for the Java Runtime Environment to continue. # Native memory allocation (mmap) failed to map 2555904 bytes for committing reserved memory. # An error report file with more information is saved as: # /home/[username]/hs_err_pid2813.log Now, I got this error in the past and I managed to tell it to allow Java to do this, however I cannot remember nor find the resources of how to do it. I have looked at this SO answer, but alas, my system tells me that is cannot find the command paxctl even though I have all the grsec related utilities installed mentioned on the Arch wiki. So how do I make it allow Java?
paxctl should work for you, root #paxctl -h PaX control v0.7 Copyright 2004,2005,2006,2007,2009,2010,2011,2012 PaX Team <[email protected]> usage: paxctl <options> <files> options: -p: disable PAGEEXEC -P: enable PAGEEXEC -e: disable EMUTRAMP -E: enable EMUTRAMP -m: disable MPROTECT -M: enable MPROTECT -r: disable RANDMMAP -R: enable RANDMMAP -x: disable RANDEXEC -X: enable RANDEXEC -s: disable SEGMEXEC -S: enable SEGMEXEC -v: view flags -z: restore default flags -q: suppress error messages -Q: report flags in short format -c: convert PT_GNU_STACK into PT_PAX_FLAGS (see manpage!) -C: create PT_PAX_FLAGS (see manpage!) Usually I would disable all restrictions like this, paxctl -pemrxs `which java` Though you can also set the flags more directly without needing paxctl. So for instance if you wanted to disable the mr ones you would do: sudo setfattr -n user.pax.flags -v "mr" `which java`
How can I run Java on a grsec-hardend Arch Linux kernel with paxd?
1,638,714,919,000
I've install both vi & vim in RedHat 6.7. Both of them are 7.4 but with different features turn on or off. I've setup a line in ~/.vimrc set mouse=a And the color scheme molokai is imported by plugin manager dein. These features are workable when I type vim, but it shows error message when I use view which is a link to /bin/vi. The error messages are Error detected while processing /home/myname/.vimrc: line 21: E538: No mouse support: mouse=a line 263: E185: Cannot find color scheme 'molokai' I'm wonder how could I write a workable .vimrc for both vi/view or vim? Here is the features about mouse on the different versions: $ /bin/vi --version|grep mouse +acl -farsi -mouse_sgr -tag_old_static -arabic -file_in_path -mouse_sysmouse -tag_any_white -autocmd -find_in_path -mouse_urxvt -tcl -balloon_eval -float -mouse_xterm +terminfo -ebcdic -mouse -startuptime -xterm_clipboard -emacs_tags -mouse_dec -statusline -xterm_save -eval -mouse_gpm -sun_workshop -xpm -ex_extra -mouse_jsbterm -syntax -extra_search -mouse_netterm -tag_binary $ /usr/bin/vim --version|grep mouse +acl +farsi +mouse_netterm +syntax +arabic +file_in_path +mouse_sgr +tag_binary +autocmd +find_in_path -mouse_sysmouse +tag_old_static -balloon_eval +float +mouse_urxvt -tag_any_white -browse +folding +mouse_xterm -tcl -ebcdic +mouse +smartindent -xim +emacs_tags -mouseshape -sniff -xsmp +eval +mouse_dec +startuptime -xterm_clipboard +ex_extra +mouse_gpm +statusline -xterm_save +extra_search -mouse_jsbterm -sun_workshop -xpm
For those features that are listed in :version output, you can use if has('mouse') conditionals. Another built-in function that can be used for many tests is :help exists(). The sledgehammer method: just prepend :silent! in front of the command; it will silence any errors. If vi is a different binary, you can also check the :help v:progpath variable.
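Putting those suggestions together, a shared ~/.vimrc can guard each optional feature like this (a sketch — keep your own settings, these lines just show the three guard patterns):

```vim
" Works in both the minimal vi build and the full vim build.
if has('mouse')
  set mouse=a
endif

" :silent! swallows E185 when the colour scheme is not installed.
silent! colorscheme molokai

" exists() can test options, variables and functions before using them.
if exists('+undofile')
  set undofile
endif
```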
How to write a workable .vimrc for both vim & vi in Red Hat 6?
1,638,714,919,000
I would like to know what is the order followed by kbuild when configuring the kernel and what is the order that it's more convenient to use when writing CONFIG_ options in the .config file . I have read the docs about kbuild but so far no specs on the order of the operations .
You should strive to not have order dependencies! The system starts at the first line of the top level Kconfig file, and processes each line in turn. When it sees a 'source' line, it suspends reading the current file, processes the specified file. When it gets to the end of a file it resumes where it was in the previous file.
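A sketch of the traversal: given a top-level Kconfig like the hypothetical fragment below, entries are processed in exactly this textual order, with each 'source' fully expanded in place before reading resumes:

```
# Kconfig (top level) -- hypothetical fragment
mainmenu "Example configuration"

config FOO
        bool "First option"        # processed first

source "drivers/Kconfig"           # reading suspends here; drivers/Kconfig is
                                   # processed fully, then reading resumes below

config BAR
        bool "Seen after everything in drivers/Kconfig"
```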
Scanning order of the kbuild / kconfig kernel build system?
1,638,714,919,000
After editing some configurations on the dhclient configuration file (/etc/dhcp/dhclient.conf) the changes doesn't seem to have any effect until I reboot the machine. Since dhclient is running on the background, I believe the process needs to restart to actually read the configuration file again and apply said changes. What would be the best way to accomplish it? On the dhclient man page I found the following option: -r Release the current lease and stop the running DHCP client as previously recorded in the PID file. When shutdown via this method dhclient-script will be executed with the specific reason for calling the script set. The client normally doesn't release the current lease as this is not required by the DHCP protocol but some cable ISPs require their clients to notify the server if they wish to release an assigned IP address. If I understood it right, this option would kill the dhclient and thus make it release the lease and read the configuration file again when it started (which I'm not sure if would be as simply as calling dhclient &). Checking the processes tree, I also noticed dhclient is a child process of network-manager. Would run sudo service network-manager restart be a cleaner way to make dhclient start again with the new configurations?
Re-activate the connection. For example via nmcli connection up $NAME or any other client of NetworkManager. You usually wouldn't restart NetworkManager.
dhclient - Applying configuration changes with no reboot
1,469,094,356,000
I was reading a thread on the bug-bash mailing list and saw: Configuration Information [Automatically generated, do not change]: Machine: x86_64 OS: linux-gnu ... Since this "Configuration Information" header appears in other threads I assume there is some kind of tool to get it automatically. However, I tried with dmidecode, lscpu and cat /proc/cpuinfo or cat /proc/meminfo and none of them matches this content. How does this Configuration Information get created?
The tool that creates that report is called bashbug and it's part of bash package. See man bashbug for more details.
How to create a "Configuration Information [Automatically generated, do not change]"?
1,469,094,356,000
In my mac os x computer, node.js, you can use 'require()' to load your configuration files, but I am tired of having to do that on all my programs. Is there a way to make node.js automatically load config files? I tried to put this in my .bash_profile: alias node='node var config = require("./config")' When I enter the node command, it does load the file, but it exits the node shell. Is there a way to do this without making it exiting the node shell?
Use the -r switch instead, e.g. node -r ./config.js. It will preload that module and keep the shell open for you.
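One caveat worth sketching: -r evaluates the module before the REPL starts, but it does not create a config variable in the REPL's scope. Attaching the object to global inside the module makes it visible everywhere (config.js and its contents here are hypothetical):

```javascript
// config.js -- hypothetical preload module for `node -r ./config.js`
// Attaching to `global` makes the object reachable from the REPL
// without any further require() call.
global.config = {
  port: 8080,
  debug: false,
};

console.log(global.config.port); // prints 8080 when the module loads
```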
How can I make node.js automatically load config files?
1,469,094,356,000
Having not had to muck about with the X configuration in quite a while I've recently found that many Linux current distros no longer use an xorg.conf file (unless it is manually created for whatever reason) instead configuring X on the fly at boot. So where is this configuration stored now such that it can be perused? Somewhere in memory, I imagine.
Xorg logs to /var/log/Xorg.n.log where n is the server log file for display n. In newer implementations1, the log file may be found at $HOME/.local/share/xorg/Xorg.n.log. The log will contain all of the currently loaded values for the running display, including any configuration options loaded through conf files in /etc/X11/xorg.conf.d/. 1. From Xorg 1.16, X can be run rootless via systemd-logind.
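To pull just the effective settings out of the log, the markers Xorg itself writes are handy: (==) flags a built-in default and (**) a value taken from a config file. A quick sketch (log locations as above; adjust the 0 for your display number):

```shell
# Show defaulted (==) and explicitly configured (**) values for display :0.
# Falls through to the rootless per-user location when /var/log has no log.
log=/var/log/Xorg.0.log
[ -r "$log" ] || log="$HOME/.local/share/xorg/Xorg.0.log"
grep -E '\(==\)|\(\*\*\)' "$log" 2>/dev/null || true
```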
X configuration location on systems that configure X on the fly at startup
1,469,094,356,000
I've had the same issue as described in this question, and the solution of replacing $document_root with it's absolute path worked until redirects added the absolute path to the URL, as in: EXPECTED http://domain/pma/ REALITY http://domain/var/www/pma/ I'm about to pull my hair out of frustration here. Please help me not become bald. (Although keeping the hair in line is a hassle in itself, but you get the idea) $ less /etc/nginx/conf.d/default.conf server { listen 80; server_name localhost; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm index.php; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } include hhvm.conf; # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } $ less /etc/nginx/hhvm.conf location ~ \.(hh|php)$ { fastcgi_keep_conn on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; }
The root directive is defined inside location, and that is what causes your problem. Defining root directly at the server level sets the proper value of $document_root, which also becomes available inside hhvm.conf. server { listen 80; server_name localhost; root /var/www; location / { index index.php index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } include hhvm.conf; } Then no modification is needed in your hhvm.conf, although you may do some cleanup there too.
nginx/HHVM 404, fixed replacing $document_root with absolute path, but redirect adds path to URL
1,469,094,356,000
I have Redmine/Git/nginx/fcgiwrap running in a jail on FreeBSD 9.3 for (potentially) authenticated Git commits over HTTP/S. Everything works until I restart the jail. In order for a commit to work I need to manually change /var/run/fcgiwrap/fcgiwrap.sock from srwxr-xr-x root:wheel to srwxrwxr-x root:www. It seems like there should be a better way to do this so that its persistent over a reboot. My feeling is that there should be some way of telling fcgiwrap who to run as but I can't work out where this is specified on FreeBSD. The man page says: Most probably you will want to launch fcgiwrap by spawn-fcgi using a configuration like this: FCGI_SOCKET=/var/run/fcgiwrap.sock FCGI_PROGRAM=/usr/sbin/fcgiwrap FCGI_USER=nginx FCGI_GROUP=www FCGI_EXTRA_OPTIONS="-M 0700" ALLOWED_ENV="PATH" Based on this question I have looked in /usr/local/etc/rc.d for spawn-fcgi but its not there which I assume means its not installed. It also seems overkill to install spawn-fcgi just to manage who fcgiwrap runs as. I've found in /usr/local/etc/rc.d/fcgiwrap it says: # fcgiwrap rc.d script supports multiple profiles (a-la rc.d/nginx) # When profiles are specified, the non-profile specific parameters become defaults. # You need to make sure that no two profiles have the same socket parameter. What is a profile and how would I go about creating one for this rc.d script? Or am I going about this all the wrong way?
OK. Never mind. I was closer to the solution than I thought. Reading through Practical rc.d scripting in BSD I just needed to add fcgiwrap_user="www" to /etc/rc.conf.
How to configure which user fcgiwrap runs as on FreeBSD?
1,469,094,356,000
I'm trying to configure the following restrictions in my sshd_config: Users with local IP addresses face no restrictions Users with non-local IP addresses, who are in the sftp group, are allowed to use sftp in a chroot jail Users with non-local IP addresses, who are not in the sftp group, are not allowed to do anything. Here's what I came up with: Match Address 10.0.0.0/24,172.16.0.0/20,192.168.0.0/16 X11Forwarding yes Match Address *,!10.0.0.0/24,!172.16.0.0/20,!192.168.0.0/16 Group sftp X11Forwarding no AllowTcpForwarding no ChrootDirectory %h ForceCommand internal-sftp Match Address *,!10.0.0.0/24,!172.16.0.0/20,!192.168.0.0/16 Group *,!sftp X11Forwarding no AllowTcpForwarding no ForceCommand /sbin/nologin The problem: when I try to login from an internal address, as a non-sftp user, I get rejected; the third Match triggered. According to the sshd_config manpage, a Match is only satisfied if all of its clauses are satisfied, but in my case, the first clause is not satisfied (I am coming in from a machine with a 172.16.0.0/20 IP address), only the second one is (I am not in the sftp group). Is the sshd_config manpage wrong? Is it possible to do what I am trying to do? UPDATE: at @steve's suggestion, I ran sshd in debug mode, and got this: debug1: userauth-request for user root service ssh-connection method none debug1: attempt 0 failures 0 debug1: connection from 172.19.187.49 matched 'Address *,!10.0.0.0/24,!172.16.0.0/20,!192.168.0.0/16' at line 144 debug1: user root does not match group list sftp at line 144 debug1: connection from 172.19.187.49 matched 'Address *,!10.0.0.0/24,!172.16.0.0/20,!192.168.0.0/16' at line 150 debug1: user root matched group list *,!sftp at line 150 debug1: PAM: initializing for "root" The no-internal-adress clauses are matching; also, the first Match is not mentioned in the debug log, which seems odd. UPDATE: As @Gilles pointed out, my address specifications as shown above are incorrect. 
What I should have used was 10.0.0.0/8, not 10.0.0.0/24, and 172.16.0.0/12, not 172.16.0.0/20. I was counting the mask bits down from 32 instead of up from 0. Yikes. With the corrected addresses, the configuration works. Thanks Gilles and Uriel! (I also changed the /sbin/nologin in the last line to /bin/false; the former leads to a strange error message from sftp: Received message too long 1416128883.)
Your netmasks are incorrect. If you want to include the private networks, you should use: 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 This is probably why you do not match, rather than an ssh bug.
Unexpected Match behavior in sshd_config
1,469,094,356,000
I hosted one application UI in apache-tomcat server. I had set environment variables in bin/setenv.sh file as: CATALINA_OPTS="-server -Xms256m -Xmx512m -XX:PermSize=512m -XX:MaxPermSize=1024m Every time I perform a huge operation,(ex: Generating report with all the data in the application), 'JavaHeapSpace' exception is coming and then on-wards application is not working. Again I have to restart the server. My questions are: What are -Xms and -Xmx? What values should i put for them in-order to avoid JavaHeapSpace exception? What could be the maximum and minimum values for them? If it depends on the server configuration, I have shared my server configuration below. Please make suggestions. I really want to avoid this problem. My Server configuration: RAM : 8GB, Processor: octa core, HardDisk: 500GB Let me know, if you guys need any information on the same.
What are -Xms and -Xmx? Xms256m ---> selects the initial JVM heap size for an application; Xms specifies the amount of memory, in megabytes, allocated when the server starts. Xmx512m ---> selects the maximum JVM heap size permissible for an application; Xmx specifies the maximum amount of memory, in megabytes, that will be dedicated to running the server. What values should I put for them in order to avoid the JavaHeapSpace exception? This depends not on your system's configuration but on the kind of application you are running, so the hardware details you shared matter less than your workload. What you should care about is whether your JVM is 32-bit or 64-bit. For example, I use this JVM configuration for minimal local applications ---> -J-Xms100m -J-Xmx200m -J-XX:PermSize=100m. What could be the maximum and minimum values for them? This is limited by your system's and the JVM's nature. The maximum theoretical heap limit for the 32-bit JVM is 4G. Due to various additional constraints such as available swap, kernel address space usage, memory fragmentation, and VM overhead, in practice the limit can be much lower. On most modern 32-bit Windows systems the maximum heap size will range from 1.4G to 1.6G. On 32-bit Solaris kernels the address space is limited to 2G. On 64-bit operating systems running the 32-bit VM, the max heap size can be higher, approaching 4G on many Solaris systems. If your application requires a very large heap you should use a 64-bit VM on a version of the operating system that supports 64-bit applications. Check this link to know more about it
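As a concrete illustration for the server in the question, a bin/setenv.sh along these lines is a common starting point — the numbers are illustrative, not prescriptive, and should be tuned against the actual report-generation workload:

```shell
# bin/setenv.sh -- illustrative heap sizing for an 8 GB host
# Setting Xms equal to Xmx avoids heap-resize pauses; leave memory
# free for the OS, other processes, and the JVM's own non-heap needs.
CATALINA_OPTS="-server -Xms2g -Xmx2g -XX:PermSize=256m -XX:MaxPermSize=512m"
export CATALINA_OPTS
```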
What is the perfect combination of Environment Variables for Tomcat? [closed]
1,469,094,356,000
I get a lot of config folders in my home folder. I assume that is, because $XDG_CONFIG_HOME is not set and some software (e.g. fontconfig, umlet) does not follow the standards. $XDG_CONFIG_HOME defines the base directory relative to which user specific configuration files should be stored. If $XDG_CONFIG_HOME is either not set or empty, a default equal to $HOME/.config should be used. Now since my distribution does not set it, I want to do it. Most of the internet comments recommend to write it in the .bashrc. But I am the system administrator and want to do this for all users including future users, without introducing future maintenance work. This means I want to set it system-wide (globally) and since every new users should inherit it, it should be dynamically, using e.g. $HOME/.config. How can I solve this general problem in a reasonable way?
Instead of .bashrc, put the setting line in /etc/profile. This file is loaded on every user login, just like .bashrc is for a specific user. The bonus is that this works for other shells and sessions as well.
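A minimal sketch of the line itself, placed in a drop-in file under /etc/profile.d/ (sourced by /etc/profile on most distributions) so it survives package updates and applies to future users automatically:

```shell
# /etc/profile.d/xdg.sh -- system-wide, per-user XDG base directory
# $HOME expands at login for each user, so new accounts inherit it too.
if [ -z "$XDG_CONFIG_HOME" ]; then
    export XDG_CONFIG_HOME="$HOME/.config"
fi
```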
How to correctly set the XDG base dirs gobally and dynamically
1,469,094,356,000
I'm trying to set my color syntax highlighting in nano, but it doesn't work as expected. One system everything works. This is an Fedora 21 laptop. Two systems everything I've tried except man something works. This is an Fedora 21 desktop and an Fedora 21 vm in VirtualBox. One system only one file I've tried works(opening nanorc itself gives highlighting). This is an Debian Wheezy desktop. If I do man emacs it only works as expected on one system. I also have syntax highlighting for many other types of files, I thought the only thing I needed to set this up was to have .nanorc located in the users home directory so nano could find it. This is very confusing. I've tried to look for differences in bash_profile, /etc/profile, bashrc but nothing stands out and maybe that's irrelevant. I've looked at the permissions. I've started an new terminal and restarted the system. Here is a piece from my .nanorc file: ##################################################################### ## Manpages ##include "/usr/share/nano/man.nanorc" ## Here is an example for manpages. ## syntax "man" "\.[1-9]x?$" color green "\.(S|T)H.*$" color brightgreen "\.(S|T)H" "\.TP" color brightred "\.(BR?|I[PR]?).*$" color brightblue "\.(BR?|I[PR]?|PP)" color brightwhite "\\f[BIPR]" color yellow "\.(br|DS|RS|RE|PD)" ##################################################################### Questions: Why is the same .nanorc file not working the same on four Linux systems(Fedora 21 is working, two Fedora 21 not working and Debian Wheezy not working at all). What am I missing? What are the steps to set a custom .nanorc file to be used by nano and be sure it's not in some kind of conflict or something? -------------------------------------------- Here is the full nanorc file on pastebin.com.
I since found that there is a bug in nano < 2.7.4-1 nano: /etc/nanorc is ignored, if ~/.nanorc exists Latest from the bug report: I just made the dist-upgrade to Debian 9.0, which included an update of package nano to version 2.7.4-1 and the problem vanished, the bug is solved in 2.7.4-1. The bug report: bug
Color syntax highlighting working on one system but not the others. Same nanorc file
1,469,094,356,000
I am sorry to post this type of question here but due to the great experience and skillsets here, I hope for your understanding. I am using (unfortunately) httpd package (v 2.4.6). In various guides, I see that many modules defined in httpd.conf are loaded as follows (ending with ".c" <IfModule mod_headers.c> Other modules are to be loaded without that ".c" in the end, such as: <IfModule security2_module> Call me stupid, but I had a hard time to find related, official, documentation about that and so I am just guessing: Adding ".c" will override the default definitions for that module with the definitions provided. Am I right with that assumption? If not: can someone please be so kind and point me to the official documentation about that?
You have to check this page: The module argument can be either the module identifier or the file name of the module, at the time it was compiled. For example, rewrite_module is the identifier and mod_rewrite.c is the file name. If a module consists of several source files, use the name of the file containing the string STANDARD20_MODULE_STUFF. In short this means: the module mod_rewrite.c is compiled into mod_rewrite.so. The source file mod_rewrite.c contains the line module AP_MODULE_DECLARE_DATA rewrite_module;, which declares a module called rewrite_module. So the module can be referred to either by its identifier, rewrite_module, or by mod_rewrite.c, the source file it is compiled from. The LoadModule directive tells Apache to load the module with identifier rewrite_module from the compiled object mod_rewrite.so. Once that is configured, <IfModule> accepts either the identifier or the source file name — so no, the .c form does not override any defaults; both spellings are simply two names for the same module test.
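A short sketch of the two equivalent guard spellings for the same module (the Header directives are placeholders):

```apache
# Guard by compiled source-file name:
<IfModule mod_headers.c>
    Header set X-Example "on"
</IfModule>

# Guard by module identifier -- same test, different spelling:
<IfModule headers_module>
    Header set X-Example "on"
</IfModule>
```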
httpd 2.4.6 on CentOS 7 - question related to module config in httpd.conf
1,469,094,356,000
I added a new keybinding to my rc.xml and now I get this message every time I log into Openbox. Also, the right-click menu does not work anymore either. Now I would have fixed the error myself, but unfortunately my rc.xml does not have a line 749. It ends at 748. And I am a bit puzzled at how to "see stdout". This is what I added, copied from the Arch Linux Wiki. <!-- Keybindings for screenshots --> <keybind key="Print"> <action name="Execute"> <startupnotify> <command>sh -c "import -window root ~/Pictures/$(date '+%Y%m%d-%H%M%S').png"</command> </action> </keybind> And I changed the quote at the top, which usually says that the document should be copied or else it will be overwritten. My OS is Fedora 21, I used raw Openbox, right now I am in Gnome.
You need to close the <startupnotify> tag with a corresponding </startupnotify>. See the official documentation.
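One well-formed version of the keybinding from the question (in Openbox's rc.xml the command element is a sibling of startupnotify inside the action, not a child of it):

```xml
<keybind key="Print">
  <action name="Execute">
    <startupnotify>
      <enabled>yes</enabled>
    </startupnotify>
    <command>sh -c "import -window root ~/Pictures/$(date '+%Y%m%d-%H%M%S').png"</command>
  </action>
</keybind>
```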
Openbox Syntax Error in ~/.config/openbox/rc.xml [closed]
1,469,094,356,000
The kdebase-workspace package on Arch Linux only preservers changes made to /usr/share/config/kdm/kdmrc when the package is updated. I need to edit /usr/share/config/kdm/Xsetup to get my monitors to rotate correctly, but the changes get lost every time kdebase-workspace gets updated. The Arch Wiki recommends copying /usr/share/config/kdm/Xsession to /usr/share/config/kdm/Xsession.custom. I could do this with /usr/share/config/kdm/Xsetup, but I thought files in /usr/share/ are supposed to be managed by the package manager. It seems like this might be a bug in the package (i.e., should it be saving all the configuration files) or should I be making a change in /usr/share/config/kdm/kdmrc to tell it to look some place else and if so where?
Files under /usr are meant to be under the control of the package manager (except for files under /usr/local). Configuration files that the system administrator may modify live in /etc. This is part of the traditional unix directory structure and codified for Linux in the Filesystem Hierarchy Standard. The recommendation in the Arch Wiki to edit files under /usr is a bad idea; the fact that your changes are overwritten by an upgrade is expected. Arch Linux manages files in a somewhat nonstandard way. You can mark the file as not to be changed on upgrade (this is documented on the wiki) by declaring it in /etc/pacman.conf: NoUpgrade = usr/share/config/kdm/Xsetup You may want to replace /usr/share/config/kdm/Xsetup by a symbolic link to a file under /etc (e.g. /etc/kdm/Xsetup), to make it easier to keep track of the customizations that you've made.
Why do upgrades to KDM/KDE not preserve changes to configuration files?
1,469,094,356,000
I'm sure that I'm just missing something really simple here, but I have the following sudoers file where I'm intending certain users to be allowed to run a couple of commands as the repomgr user without a passphrase: lambda@host:~$ sudo cat /etc/sudoers.d/repo Cmnd_Alias REPO_LOAD_PASSPHRASE = /bin/bash -c /home/repomgr/preset-passphrase.sh Cmnd_Alias REPO_PULL = /bin/bash -c /usr/bin/reprepro -b /var/packages/devel pull %qa ALL = (repomgr) NOPASSWD: REPO_LOAD_PASSPHRASE, \ (repomgr) NOPASSWD: REPO_PULL Now, the REPO_LOAD_PASSPHRASE command works fine; I run that and it doesn't prompt me for a password: lambda@host:~$ sudo -K lambda@host:~$ sudo -u repomgr -i /home/repomgr/preset-passphrase.sh lambda@host:~$ However, the second command, the REPO_PULL command, continues to prompt me for a password despite the NOPASSWD setting: lambda@host:~$ sudo -K lambda@host:~$ sudo -u repomgr -i reprepro -b /var/packages/devel pull [sudo] password for lambda: If I check how sudo interprets it, indeed everything but the NOPASSWD is present for the second command: lambda@host:~$ sudo -l Matching Defaults entries for lambda on this host: env_reset, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin User lambda may run the following commands on this host: (ALL : ALL) ALL (repomgr) NOPASSWD: /bin/bash -c /home/repomgr/preset-passphrase.sh, (repomgr) /bin/bash -c /usr/bin/reprepro -b /var/packages/devel pull Why isn't this working? What would cause the NOPASSWD declaration to just be dropped from the second command?
Well, yep, it sure was something obvious as to why it wasn't working. When I had fixed the bug that I needed to add /bin/bash -c to allow the use of -i, I hadn't changed the full path for the command, /usr/bin/reprepro, to what I was actually passing in, reprepro. Changing it to use the full path as below, or likewise changing the rule to only include the command, works fine. lambda@host:~$ sudo -K lambda@host:~$ sudo -u repomgr -i /usr/bin/reprepro -b /var/packages/devel pull That still leaves the puzzle of why the NOPASSWD isn't showing up in the sudo -l query, but I've solved the actual problem.
NOPASSWD option not applying to second command
1,469,094,356,000
I want to get rid of the xscreensaver config file in my home .xscreensaver. I read in man xscreensaver, that: Options to xscreensaver are stored in one of two places: in a .xscreensaver file in your home directory; or in the X resource database. If the .xscreensaver file exists, it overrides any settings in the resource database but I don't understand what that means. What is this X resource database? Where should I put the contents of my $HOME/.xscreensaver, so that this file can be deleted from my home?
The X resource database is a kind of configuration abstraction (somewhat analogous to the MS-Windows registry). You create/manage one or more text configuration files (system wide ones, and ~/.Xdefaults); these are loaded into the X server during the startup process, and applications can query the relevant settings instead of (though often as well as) custom configuration files. You need to keep reading that xscreensaver man page, the Configuration section tells you exactly what to do: The syntax of the .xscreensaver file is similar to that of the .Xdefaults file; for example, to set the timeout parameter in the .xscreensaver file, you would write the following: timeout: 5 whereas, in the .Xdefaults file, you would write xscreensaver.timeout: 5 If you change a setting in your X resource database, or if you want xscreensaver to notice your changes immediately instead of the next time it wakes up, then you will need to reload your .Xdefaults file, and then tell the running xscreensaver process to restart itself, like so: xrdb < ~/.Xdefaults xscreensaver-command -restart Don't forget the xrdb step; changes to resource files need to be imported. You don't need to enter every setting into your .Xdefaults, only the changes relative to those set in the (system dependent) app-defaults. xrdb -all -query | grep xscreensaver will help. Trading one configuration file for another isn't a great leap, but X resource files let you keep any and all resource-aware application settings together, and also offer dynamic configuration by way of pre-processing (e.g. dependent on host and client settings).
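As a concrete sketch, a minimal ~/.Xdefaults fragment carrying a couple of xscreensaver settings could look like this (resource names as documented in the man page; the exact set you need depends on what your ~/.xscreensaver currently overrides):

```
! xscreensaver settings moved out of ~/.xscreensaver
xscreensaver.timeout:     5
xscreensaver.lock:        True
xscreensaver.lockTimeout: 10
```

Then load it and restart the daemon: xrdb -merge ~/.Xdefaults && xscreensaver-command -restart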
configure xscreensaver through X resource database
On the site (mongodb) it lists ... Create a /etc/yum.repos.d/mongodb.repo file to hold the following configuration information for the MongoDB repository: then... [mongodb] name=MongoDB Repository baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/ gpgcheck=0 enabled=1 I am confused as to what the name of the second set of directions should be, i.e., what file name?
I will try to do my best to help you, using the information you gave us. Assuming you have a 64 bit CPU (uname -p to check this, x86_64=64 bit), you just have to add the repository as you stated, then issue these commands: yum clean all this will clean your yum cache and you should be able to see the new mongodb repository in the list yum install mongo-10gen mongo-10gen-server with this command you will install mongodb and all the dependencies, if needed. service mongod start and finally you can start your new service. If you have a 32 bit CPU, use instead this repository: [mongodb] name=MongoDB Repository baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/i686/ gpgcheck=0 enabled=1 and just run the same commands above.
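To make the file-name part explicit, here is a hedged shell sketch of creating that repo file. The real target is /etc/yum.repos.d/mongodb.repo and writing it needs root; the snippet below writes to the current directory instead so it can be tried safely.

```shell
# Write the repository definition. On a real system, change the target to
# /etc/yum.repos.d/mongodb.repo and run with root privileges.
cat > mongodb.repo <<'EOF'
[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
gpgcheck=0
enabled=1
EOF

# Show what was written
cat mongodb.repo
```

After copying it into place, yum clean all followed by yum install mongo-10gen mongo-10gen-server should pick it up.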
How do you Configure Package Management System (YUM) for MongoDB
I added the below 2 lines at the end of the configuration file /etc/vsftpd.conf to deny a local user named 'tentenths' from logging in to the FTP server. I restarted the vsftpd service after the change. But still the user was permitted to log in. Where am I mistaken? userlist_deny=YES userlist_file=/etc/vsftpd.denied_users The content of the above file is: ravbholua@ravi:~$ cat /etc/vsftpd.denied_users tentenths ravbholua@ravi:~$ I am referring to this link for the same. Have a look at the whole conf. file: # Example config file /etc/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # # Run standalone? vsftpd can run either from an inetd or as a standalone # daemon started from an initscript. listen=YES # # Run standalone with IPv6? # Like the listen parameter, except vsftpd will listen on an IPv6 socket # instead of an IPv4 one. This parameter and the listen parameter are mutually # exclusive. #listen_ipv6=YES # # Allow anonymous FTP? (Disabled by default) anonymous_enable=YES # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) #local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. 
#anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # If enabled, vsftpd will display directory listings with the time # in your local time zone. The default is to display GMT. The # times returned by the MDTM FTP command are also affected by this # option. use_localtime=YES # # Activate logging of uploads/downloads. xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # You may override where the log file goes if you like. The default is shown # below. #xferlog_file=/var/log/vsftpd.log # # If you want, you can have your log file in standard ftpd xferlog format. # Note that the default log file location is /var/log/xferlog in this case. #xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. 
vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd.banned_emails # # You may restrict local users to their home directories. See the FAQ for # the possible risks in this before using chroot_local_user or # chroot_list_enable below. chroot_local_user=NO # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). # (Warning! chroot'ing can be very dangerous. If using chroot, make sure that # the user does not have write access to the top level directory within the # chroot) #chroot_local_user=YES chroot_list_enable=YES # (default follows) chroot_list_file=/etc/vsftpd.chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # Customization # # Some of vsftpd's settings don't fit the filesystem layout by # default. # # This option should be the name of a directory which is empty. Also, the # directory should not be writable by the ftp user. This directory is used # as a secure chroot() jail at times vsftpd does not require filesystem # access. secure_chroot_dir=/var/run/vsftpd/empty # # This string is the name of the PAM service vsftpd will use. 
pam_service_name=vsftpd # # This option specifies the location of the RSA certificate to use for SSL # encrypted connections. rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem # This option specifies the location of the RSA key to use for SSL # encrypted connections. rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key userlist_deny=YES userlist_file=/etc/vsftpd.denied_users
You also need to add this configuration option: userlist_enable=YES Details userlist_deny — When used in conjunction with the userlist_enable directive and set to NO, all local users are denied access unless the username is listed in the file specified by the userlist_file directive. Because access is denied before the client is asked for a password, setting this directive to NO prevents local users from submitting unencrypted passwords over the network. The default value is YES. userlist_enable — When enabled, the users listed in the file specified by the userlist_file directive are denied access. Because access is denied before the client is asked for a password, users are prevented from submitting unencrypted passwords over the network. The default value is NO, however under Red Hat Enterprise Linux the value is set to YES. userlist_file — Specifies the file referenced by vsftpd when the userlist_enable directive is enabled.
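Putting it together, the relevant block at the end of /etc/vsftpd.conf would then read (file path as in the question):

```
userlist_enable=YES
userlist_deny=YES
userlist_file=/etc/vsftpd.denied_users
```

Restart vsftpd afterwards so the change takes effect.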
Not able to deny selected local user to login ftp server
I have a few different Ubuntu installs and this is the first time I've experienced this. I have emacs-nox installed, because I hate graphical versions of emacs. Normally when I type emacs test.txt it will load test.txt if it exists and load a blank file if it does not exist. Instead, on this machine, it says: File exists: /home/myusername/.emacs.d/ And loads a blank file, even if it exists. Then when I try to save it, it makes me specify a full path, like I had not told it where I wanted it to save when I called it from the command line. Any thoughts on what might be causing this? I didn't customize my emacs install in any way, and I've tried removing and re-installing the package to no avail. Edit to add requested details user@comp:~$ ls -l ~/.emacs.d ls: cannot open directory /home/user/.emacs.d: Permission denied user@comp:~$ sudo ls -l ~/.emacs.d total 0
Following the comment by @EvanTeitelman, I looked at the output in the question's edit, deleted the empty but locked-down directory ~/.emacs.d, and now it works. sudo rm -r ~/.emacs.d This appears to be a common thing, at least on Xubuntu. I noticed another installation doing it today. So hopefully this will help someone.
emacs-nox doesn't load the files I ask it to at command line
I want to map the key Ñ (Shift+ñ) to : in the normal mode of vim. I've searched vim configs for Ntilde but found nothing. Any ideas?
Marco suggested nmap Ñ : and it works perfectly.
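For a permanent setting in ~/.vimrc, a slight variation using nnoremap avoids any further remapping of : being picked up:

```
" Normal-mode mapping only; noremap keeps it from chaining into other maps
nnoremap Ñ :
```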
map "Ñ" to ":" in vim
Here is what I currently have online; as you can see, there is no information about my Debian server. (While installing, I tried to follow the instructions below.) What I have changed in the default gmond.conf: cluster { name = "dspproc" owner = "unspecified" latlong = "unspecified" url = "dspproc" } udp_send_channel { mcast_join = 127.0.0.1 port = 8649 ttl = 1 } udp_recv_channel { mcast_join = 127.0.0.1 port = 8649 bind = 127.0.0.1 } And this is what I changed in gmetad.conf: data_source "dspproc" 10 127.0.0.1 authority "http://195.19.243.13/ganglia/" trusted_hosts 127.0.0.1 195.19.243.13 case_sensitive_hostnames 0 My question is: what am I doing wrong, and how do I make ganglia show info about the machine it's installed on? Update Following this answer I changed to: udp_send_channel { host = 127.0.0.1 port = 8649 ttl = 1 } /* You can specify as many udp_recv_channels as you like as well. */ udp_recv_channel { host = 127.0.0.1 /* line 41 */ port = 8649 bind = 127.0.0.1 } got this on restart: Starting Ganglia Monitor Daemon: /etc/ganglia/gmond.conf:41: no such option 'host' and still Hosts up: 0 in web ui. Update 2: So... when I read the answer again and followed the link, I made the following changes to the configuration and everything worked out!) Thank you noffle! Now that block of gmond.conf looks like udp_send_channel { host = 127.0.0.1 port = 8649 ttl = 1 } udp_recv_channel { port = 8649 family = inet4 } udp_recv_channel { port = 8649 family = inet6 } and all seems to work...
I seem to remember having a similar problem when setting up Ganglia many moons ago. This may not be the same issue, but for me it was that my box/network didn't like Ganglia's multicasting. Once I set it up to use unicasting, all was well. From the Ganglia docs: If only a host and port are specified then gmond will send unicast UDP messages to the hosts specified. Perhaps try replacing the mcast_join = 127.0.0.1 with host = 127.0.0.1.
How to configure ganglia-monitor on a single Debian machine?
I have a recent problem with my sound configuration. Basically, it's way too loud until I set the volume below 10%. And then it's very quickly too silent. Using the alsa mixer, I can set the headphone volume and PCM volume to about 50% and then obtain a reasonable range on the master. But any application using pulse will reset all the non-master channels to max and kill my ears instantly. Is there a way to force pulse to NOT change the other channels? I tried to look for information, and it seems that I need to change the channels from "mixin" to "ignore" in the configuration file, but there are so many configuration files, and I haven't found which ones are actually used by my system. So in the end, I am not even sure that what I think is correct. Can someone tell me: how to find the exact configuration files I need to change, or how to override the global configuration with some local one? and what I need to actually change? Thanks.
So in the end, I figured out that my profile was called analog-output-headphones. And the relevant configuration file is there: /usr/share/pulseaudio/alsa-mixer/paths/analog-output-headphones.conf For some reason, the configuration of my alsa card is such that the master volume doesn't do anything and I haven't found how to change that. But I can "ignore" the master and only act on the headphones ... This is not ideal, but currently works.
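For anyone hitting the same thing, the kind of edit involved is telling the path file to leave an element's volume alone instead of merging it. A hedged sketch (the element name here is an example; check which [Element ...] sections exist in your copy of analog-output-headphones.conf):

```
[Element Master]
switch = mute
volume = ignore
```

After editing, restart PulseAudio (pulseaudio -k) so the path file is re-read.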
How to change mixing of channels by pulse audio / alsa
Where can I find configuration file for uw-imapd on debian? Is there even one?
From what I can tell, there is no configuration file for uw-imapd. It is known for needing very little configuration. But according to this link, you should be able to change some settings by modifying xinetd.d configs.
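If uw-imapd is started via xinetd on your system, the per-service file would look roughly like the sketch below. This is a hedged example, not a verified Debian default: the service name (imap2) and the server path may differ on your install.

```
service imap2
{
        disable     = no
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = root
        server      = /usr/sbin/imapd
}
```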
Where can I find configuration file for uw-imapd on debian?
It seems my configuration for root doesn't have access to DNS even though my lower-privileged user does. $ ping google.com PING google.com (142.251.32.110) 56(84) bytes of data. 64 bytes from 142.251.32.110 (142.251.32.110): icmp_seq=1 ttl=64 time=1.35 ms 64 bytes from 142.251.32.110 (142.251.32.110): icmp_seq=2 ttl=64 time=2.71 ms ^C --- google.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 1.350/2.029/2.708/0.679 ms $ sudo -E ping google.com ping: google.com: Name or service not known This is wreaking havoc with apt-get. How can I troubleshoot this? I am running WSL Ubuntu. Edit 1 It looks like root is trying to use localhost for DNS. socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 5 setsockopt(5, SOL_IP, IP_RECVERR, [1], 4) = 0 connect(5, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("127.0.0.1")}, 16) = 0 The equivalent line for unprivileged me is this. I think that 172.* address is some WSL thing. socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 5 setsockopt(5, SOL_IP, IP_RECVERR, [1], 4) = 0 connect(5, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("172.21.160.1")}, 16) = 0 This is what /etc/resolv.conf looks like. # This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf: # [network] # generateResolvConf = false nameserver 172.21.160.1 Edit 2 Root is not able to read /etc/resolv.conf newfstatat(AT_FDCWD, "/etc/resolv.conf", 0x7ffc8a2d25f0, 0) = -1 EACCES (Permission denied)
The problem was that I updated the automount settings in my wsl.conf. [automount] root=/home/myuser Once I reverted this change root was once again able to access the internet. I believe this is due to a boot order problem because /home is not available soon enough to configure the network for root.
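In other words, reverting means restoring the stock automount root in /etc/wsl.conf (or just deleting the custom root= line, which has the same effect, since /mnt/ is the documented default):

```
[automount]
root = /mnt/
```

A wsl --shutdown from the Windows side is needed before the change is picked up.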
No DNS for root?
Good evening! I'm trying to create a streaming setup for me and my buddies to hang out via Skype and I'm really struggling with the audio part. There are two problems : The microphone only transmits on the left channel. I would like for my microphone to be transmitted to both audio channels. For this, according to the documentation, ( https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Virtual-Devices ) I need to create a mono-sink where I pour in the microphone then transmit it to my friend sink. This blogpost ( https://blogshit.baka.fi/2021/07/pipewire-microphone/ ) seems to cover this use case but I have no media-session.d file. I would like to pour in some applications and other audio sources such as my guitar in the friend sink and I would like to hear what's in it, except for my microphone (maybe only hear my microphone as a one time test). How do I achieve this? How do I know how to name the configuration file since they seem to have specific names in the documentation? How do I pour my application audio? Do I need a separate sink for myself? How do I find the device names for pipewire? 
Here is my pactl info output shaddox@pop-os:/usr/share/pipewire$ pactl info Server String: /run/user/1000/pulse/native Library Protocol Version: 35 Server Protocol Version: 35 Is Local: yes Client Index: 660 Tile Size: 65472 User Name: shaddox Host Name: pop-os Server Name: PulseAudio (on PipeWire 0.3.79) Server Version: 15.0.0 Default Sample Specification: float32le 2ch 48000Hz Default Channel Map: front-left,front-right Default Sink: alsa_output.usb-Grace_Design_SDAC-00.iec958-stereo Default Source: alsa_input.usb-Yamaha_Corporation_Steinberg_UR22mkII-00.analog-stereo Cookie: 3fff:d574 If it helps, here is my arecord -l output shaddox@pop-os:/usr/share/pipewire$ arecord -l **** List of CAPTURE Hardware Devices **** card 0: PCH [HDA Intel PCH], device 0: ALC887-VD Analog [ALC887-VD Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: PCH [HDA Intel PCH], device 2: ALC887-VD Alt Analog [ALC887-VD Alt Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 3: Webcam [C922 Pro Stream Webcam], device 0: USB Audio [USB Audio] Subdevices: 1/1 Subdevice #0: subdevice #0 card 4: UR22mkII [Steinberg UR22mkII], device 0: USB Audio [USB Audio] Subdevices: 0/1 Subdevice #0: subdevice #0 And here is my aplay -l output: shaddox@pop-os:/usr/share/pipewire$ aplay -l **** List of PLAYBACK Hardware Devices **** card 0: PCH [HDA Intel PCH], device 0: ALC887-VD Analog [ALC887-VD Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: PCH [HDA Intel PCH], device 1: ALC887-VD Digital [ALC887-VD Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 card 1: SDAC [SDAC], device 0: USB Audio [USB Audio] Subdevices: 0/1 Subdevice #0: subdevice #0 card 2: NVidia [HDA NVidia], device 3: HDMI 0 [22M35] Subdevices: 1/1 Subdevice #0: subdevice #0 card 2: NVidia [HDA NVidia], device 7: HDMI 1 [HDMI 1] Subdevices: 1/1 Subdevice #0: subdevice #0 card 2: NVidia [HDA NVidia], device 8: HDMI 2 [HDMI 2] Subdevices: 1/1 Subdevice #0: subdevice #0 card 2: NVidia [HDA NVidia], device 9: HDMI 3 
[HDMI 3] Subdevices: 1/1 Subdevice #0: subdevice #0 card 4: UR22mkII [Steinberg UR22mkII], device 0: USB Audio [USB Audio] Subdevices: 1/1 Subdevice #0: subdevice #0 I only use the Steinberg UR22mkII for guitar recording and the microphone while the SDAC one is where I listen to.
Of course, once you formulate your own question, it sort of gives you an idea of what to look out for. First of all, I needed to create two virtual devices that will serve as a base of operations. This file will be named ~/.config/pipewire/pipewire.conf.d/10-coupled-skype-stream.conf. How did I come to that conclusion, you may ask? The source of truth finds itself in /usr/share/pipewire! Now, onto the file contents: context.modules = [ { name = libpipewire-module-loopback args = { node.description = "Steinberg Front Left" capture.props = { node.name = "capture.mono-microphone" audio.position = [ FL ] target.object = "alsa_input.usb-Yamaha_Corporation_Steinberg_UR22mkII-00.analog-stereo" stream.dont-remix = true node.passive = true } playback.props = { media.class = "Audio/Source" node.name = "mono-microphone" audio.position = [ MONO ] } } } { name = libpipewire-module-loopback args = { audio.position = [ FL FR ] capture.props = { media.class = Audio/Sink node.name = skype_sink node.description = "Virtual Skype sink" } } } ] I found the names of the devices with the help of a simple program called Simple Wireplumber GUI. What the first module does is create a tether from the left channel of the source to a new, virtual, MONO one. The second part of the configuration file creates something called a sink, which is just a place where you dump the audio stuff you want everyone to hear. Now, you use a piece of software called qpwgraph or Helvum to tether what you want everyone to hear to the sink. Left channel goes to the left channel, right channel goes to the right. For the MONO source, it goes to both channels. Once you've had enough of hearing your own voice for testing, look around the aforementioned program for a loopback tether and sever it. Hope this will be of help to others.
Chaining pipewire sinks for a streaming setup
I have two users on my ssh-server machine, user_A and user_B. user_B is permitted to log in with private key only for security reasons, because he needs to log in from remote. All this works. My problem: How do I prevent user_A from likewise logging in from remote with username/password, since he only needs to log in from the local network? According to the man page of sshd, CIDR notation is allowed. What I have done: #605433 suggests AllowUsers [email protected], so I adapted this to AllowUsers user_A@192.168.10.0/24 #740700 suggests: Match 192.168.0.10/24 AllowGroups PrivateSubnetSshUsers My version looks like Match 192.168.10.0/24 AllowUsers user_A Against my expectations, user_A can still log in from 192.168.1.220 in both cases. I had done some systemctl restart sshd before retrying. What am I overlooking here?
First, there is an error in your Match statement: it should be Match address 192.168.10.0/24 and not just Match 192.168.10.0/24. Your version gives an error message when trying to restart sshd. Didn't you get one? Second, you should Match on user name, not on address. The following should work: Match user user_A AllowUsers user_A@192.168.10.0/24 BTW. From my tests, AllowUsers user_A@192.168.10.0/24 alone (i.e. not in a Match block) also works (so you must have done something wrong when testing it), but it restricts all other users except user_A from logging in at all (and user_A can log in only from the specified network). If it is within a Match block as above, then user_A can log in only from the specified network, and all other users can log in from any IP address.
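A handy way to verify such rules without repeated login attempts is sshd's extended test mode, which prints the effective configuration for a simulated connection (run as root on the server; the client address and hostname below are placeholders):

```
# Which AllowUsers lines apply for user_A connecting from 192.168.1.220?
sshd -T -C user=user_A,addr=192.168.1.220,host=client.example | grep -i allowusers
```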
How to restrict user login for specific IP-address (private address)?
1,681,080,673,000
I'm on a Fedora Server 37 iso, so no DM/DE is pre-installed. Install wayland, lightdm, and lightdm-gtk-greeter. Edit lightdm's config to use lightdm-gtk-greeter (Line 102) change greeter-session=example-gtk-gnome to greeter-session=lightdm-gtk-greeter (Line 107) change user-session=default to user-session=qtile Try to start graphical.target, and it fails. From sudo tail -n100 /var/log/lightdm/lightdm.log, here's a chunk of the relevant part. It tries to start an xserver, even though I don't have one installed and want to use wayland instead. I can't find a config option to tell it to use wayland either. Did I miss it, or is there another way to do this? [+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager [+0.00s] DEBUG: _g_io_module_get_default: Found default implementation local (GLocalVfs) for ‘gio-vfs’ [+0.00s] DEBUG: Using cross-namespace EXTERNAL authentication (this will deadlock if server is GDBus < 2.73.3) [+0.00s] DEBUG: Monitoring logind for seats [+0.00s] DEBUG: New seat added from logind: seat0 [+0.00s] DEBUG: Seat seat0: Loading properties from config section Seat:* [+0.00s] DEBUG: Seat seat0 has property CanMultiSession=no [+0.00s] DEBUG: Seat seat0: Starting [+0.00s] DEBUG: Seat seat0: Creating greeter session [+0.00s] DEBUG: Seat seat0: Creating display server of type x [+0.00s] DEBUG: Using VT 1 [+0.00s] DEBUG: Seat seat0: Starting local X display on VT 1 [+0.00s] DEBUG: XServer 0: Logging to /var/log/lightdm/x-0.log [+0.00s] DEBUG: XServer 0: Can't launch X server X -core -noreset, not found in path [+0.00s] DEBUG: XServer 0: X server stopped [+0.00s] DEBUG: Releasing VT 1 [+0.00s] DEBUG: Seat seat0: Display server stopped [+0.00s] DEBUG: Seat seat0: Can't create display server for greeter [+0.00s] DEBUG: Seat seat0: Session stopped [+0.00s] DEBUG: Seat seat0: Stopping display server, no sessions require it [+0.00s] DEBUG: Seat seat0: Stopping [+0.00s] DEBUG: Seat seat0: Stopped [+0.00s] DEBUG: Failed to start seat: seat0 Edit: This 
github issue is someone's log that gets a lot farther than I do. In my log above, you see Seat seat0: Creating display server of type x, but theirs is type wayland. This is the main thing I'd like to figure out. However, later in their log, /etc/lightdm/Xsession ... is still called too. Is there a way to install a tiny subset of x11 to work, or do I need an entire x server package along with wayland to get LightDM running?
or do I need an entire x server package along with wayland to get LightDM running? LightDM requires the full-featured Xorg server. You may want to install GDM, the only display manager which works in pure Wayland mode as of April 2023. Addendum: Fedora 38 contains a Git version of SDDM which can work in Wayland mode as well. There's also an sddm-git Wayland-enabled package in Arch.
How to switch lightdm-gtk-greeter to use wayland only? (x11 not installed)
Is it possible to check the different states of a video output: on, off, in error, no signal? For example: monitor off gives some state, monitor with no signal gives some "not connected" state, and so on?
Try this: grep . /sys/class/drm/card0-*/{status,enabled,dpms}. If this doesn't satisfy your needs, just leave a comment below.
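If you want to act on the states from a script rather than eyeball them, the same files can be read in a loop. The sketch below builds a tiny mock tree so it can run anywhere; on a real system, set DRM_ROOT=/sys/class/drm and drop the mock setup. Typical status values are connected, disconnected, and unknown.

```shell
# Mock sysfs tree for demonstration only; use DRM_ROOT=/sys/class/drm for real.
DRM_ROOT=$(mktemp -d)
mkdir -p "$DRM_ROOT/card0-HDMI-A-1" "$DRM_ROOT/card0-DP-1"
echo connected    > "$DRM_ROOT/card0-HDMI-A-1/status"
echo enabled      > "$DRM_ROOT/card0-HDMI-A-1/enabled"
echo disconnected > "$DRM_ROOT/card0-DP-1/status"
echo disabled     > "$DRM_ROOT/card0-DP-1/enabled"

# Report each connector's status and enabled flag.
: > drm_report.txt
for dir in "$DRM_ROOT"/card*-*; do
    printf '%s: status=%s enabled=%s\n' \
        "$(basename "$dir")" "$(cat "$dir/status")" "$(cat "$dir/enabled")" \
        >> drm_report.txt
done
cat drm_report.txt
```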
How to detect display driver info
I have this configuration with Samba 4.14.12: [global] netbios name = MyRouter interfaces = br-lan eth0 server string = MyRouter unix charset = UTF-8 workgroup = WORKGROUP bind interfaces only = yes #server min protocol = SMB2 passdb backend = smbpasswd dns proxy = no socket options = IPTOS_LOWDELAY TCP_NODELAY use sendfile = yes map to guest = Bad User load printers = no printcap name = /dev/null disable spoolss = yes printing = bsd client signing = mandatory ## disable core dumps enable core files = no #smb encrypt = desired security = user mdns name = mdns #delete veto files = yes ######### Dynamic written config options ######### disable netbios = yes smb ports = 445 aio read size = 0 aio write size = 0 [HDDSoft] path = /media/HDDSoft/HDD_DATI/+PC create mask = 0666 directory mask = 0777 read only = yes guest ok = no guest only = yes [hdd] path = /media/HDDSoft/HDD_DATI valid users = root create mask = 0666 directory mask = 0777 browseable = no read only = no guest ok = no Why doesn't it show in network File Explorer?
I solved the problem: the wsdd service (WSD/LLMNR Discovery/Name Service Daemon) was missing. From the wsdd2 package on GitHub: With Microsoft turning off SMB1 feature completely on Windows 10, any Samba shares on the local network become invisible to Windows computers (start from windows 7). That's due to the fact that SMB1 is required for Computer Browser service to function. Newer Windows systems can use WSD (Web Services for Devices) to discover shares hosted on other computers The primary purpose of this project is to enable WSD on Samba servers so that network shares hosted on a Unix box can appear in Windows File Explorer / Network.
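For completeness, on OpenWrt the install boils down to something like the following (a hedged sketch: it assumes the wsdd2 package is available in your configured feeds and a stock init script):

```
opkg update
opkg install wsdd2
/etc/init.d/wsdd2 enable
/etc/init.d/wsdd2 start
```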
Samba (OpenWrt) share not showing in File Explorer (Windows 7)