| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,587,666,937,000 |
I configured XFCE on my desktop at home and wanted to set up my work system with the same configuration (I figured it'd just be copying a few files, or even a script). In the past I used to just mark the changes on paper and repeat them by hand with whatever desktop environment I used.
Is there an easier way to replicate personalization changes from one system to the next?
|
In my experience, the simplest way to transfer environment settings is to copy the user configuration directories wholesale, renaming the existing directories first. In the case of XFCE, that would be ~/.config/xfce4. There may also be necessary files in ~/.local. Be sure to install any requisite software before copying the configuration.
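A minimal sketch of that wholesale copy, using throwaway directories as stand-ins for the two home directories (between real machines you would use scp or rsync instead of cp):

```shell
# stand-in "old" and "new" home directories for demonstration only
OLD=$(mktemp -d); NEW=$(mktemp -d)
mkdir -p "$OLD/.config/xfce4" "$NEW/.config/xfce4"
echo "settings" > "$OLD/.config/xfce4/xfconf"

mv "$NEW/.config/xfce4" "$NEW/.config/xfce4.bak"   # rename the existing directory first
cp -r "$OLD/.config/xfce4" "$NEW/.config/"          # then copy the configuration wholesale
cat "$NEW/.config/xfce4/xfconf"
```

Keeping the renamed `.bak` directory around makes it easy to roll back if the copied configuration misbehaves.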
| Transferring XFCE customization from one system to another? |
1,587,666,937,000 |
Why does Apache2 have multiple configuration files? What are their roles? I found some information about older Apache versions, but it's usually deprecated, and the official Apache documentation doesn't explain the logic of splitting up the configuration files and what they're for.
|
Hmm. From one point of view, you can dump all the configuration into one httpd.conf file, but this would be... hard to read.
Most distros will divide up the configuration by having httpd.conf include subdirectories. You may want to look at distro-specific documentation, for example:
https://help.ubuntu.com/12.04/serverguide/httpd.html
For Ubuntu, the Apache configuration directory is /etc/apache2. The primary subdirectories for your organizational convenience are conf.d, mods-available, mods-enabled, sites-available and sites-enabled. You would keep your module configuration in the mods-available directory, and your virtualhost configurations in the sites-available directory. Note that the *-enabled directories contain symlinks to the corresponding *-available directories, so you can keep a bunch of things floating around in *-available, but only activate them by symlinking from the *-enabled directory. The master httpd.conf file will do an include of what's in the *-enabled directories.
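The symlink mechanism can be sketched with throwaway directories (the real ones live under /etc/apache2, and Debian/Ubuntu ship helper scripts such as a2ensite that create these links for you; the site name here is made up):

```shell
# demo of the *-available / *-enabled pattern with stand-in directories
base=$(mktemp -d)
mkdir -p "$base/sites-available" "$base/sites-enabled"
echo '<VirtualHost *:80></VirtualHost>' > "$base/sites-available/myvhost.conf"

# "enabling" the site is just creating the symlink (a2ensite does this for you)
ln -s ../sites-available/myvhost.conf "$base/sites-enabled/myvhost.conf"
readlink "$base/sites-enabled/myvhost.conf"
```

Disabling the site is just removing the symlink; the file in sites-available stays put.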
RHEL/CentOS doesn't work that way, and leaves it somewhat more up to your discretion how to set up the /etc/httpd base directory. You can dump everything into /etc/httpd.conf; you can create a directory structure similar to Ubuntu's (and modify httpd.conf to include the *-available directories you've made); or some combination thereof.
So, you may want to check your distro's documentation first. As you'll see with the Ubuntu one, they provide links to other resources.
| What is the purpose of Apache2's configuration files? |
1,587,666,937,000 |
I am using a filter to remove comments before committing config files in git:
$ cat .git/config
[filter "stripcomments"]
clean = "stripcomments"
$ cat .git/info/attributes
/etc/* filter=stripcomments
I also have a shell prompt which changes color if there are any uncommitted files. This is the relevant part of my .zshrc:
[[ -n "$(git status --porcelain 2> /dev/null | tail -n1)" ]] && git_c=220 || git_c=34
The problem I am having is that when I change a file by adding a comment, my filter strips the comment, so for git's purposes the file should be unchanged. I can verify that with git diff, which shows no difference. But git status shows the file has changed, and consequently my shell shows a yellow prompt, indicating uncommitted changes.
What do I have to use to solve this problem?
EDIT:
What is even worse: when I do git status, it shows me modified files. When I do git diff, it shows no changes. And when I do git commit, it tells me there is nothing to commit.
So in other words, git status shows a changed file but I am unable to commit.
|
git status works as expected if you change comments without changing file size e.g. change one character, or have two one-line comments and switch them, anything that doesn't change the total length of the file nor the filtered result.
For modified (mtime) files of the same size, git status reads their contents, runs the filter, and compares. As long as the comment changes get filtered out, there is no change.
However, if the file size is different, git status does not run the filter at all, and does not even look at file contents. A change in file size alone is enough to make git status consider the file as changed.
Comparing file sizes instead of file contents is a common optimization. Which, in this case, works against you. And there doesn't seem to be an option to turn it off.
There is core.checkStat = minimal which disables almost everything except the filesize check. So that's a dead end. If there is any other option related to this issue, I couldn't find it.
So I don't have a proper solution to change git status behavior.
You might have to switch to a different command altogether (run git diff? or git commit --dry-run so it doesn't actually commit?). These tools run the filter (and then do nothing), since they actually have to look at the file contents to diff/commit the changes. Otherwise, run git add to update the cached filesize in git (no commit involved).
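As a hypothetical tweak along those lines (not tested against the stripcomments setup itself), the prompt check could be based on git diff, which does run the filter, plus an explicit untracked-files check; sketched here in a throwaway repository:

```shell
# demo in a throwaway repo; in .zshrc you would keep only the if/else below
repo=$(mktemp -d); cd "$repo"
git init -q && git config user.email you@example.com && git config user.name you
echo hi > f && git add f && git commit -qm init

# dirty if tracked files differ after filtering, or if untracked files exist
if ! git diff --quiet 2>/dev/null || [ -n "$(git ls-files --others --exclude-standard)" ]; then
  git_c=220   # yellow: uncommitted changes
else
  git_c=34    # green: clean
fi
echo "$git_c"
```

Note this is slower than git status, since the diff has to read and filter file contents.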
Another (weird) option would be to force the filesize issue. For text files you can just append comments, then truncate the files to always be of the same size. So the filesize never changes and git status does have to check the file contents for you.
That would make git status work for your use case but it's hardly practical.
A more radical approach would be to patch git itself.
diff --git a/read-cache.c b/read-cache.c
index 35e5657877..4e087ca3f5 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -213,7 +213,7 @@ int match_stat_data(const struct stat_data *sd, struct stat *st)
#endif
if (sd->sd_size != (unsigned int) st->st_size)
- changed |= DATA_CHANGED;
+ changed |= MTIME_CHANGED;
return changed;
}
This would pretend a size change is merely a time change (and then kick off the file contents comparison later on). However, this would only work for you locally and this is very risky. After all, this function is responsible for the integrity of your git repository and I can't judge if such a change is safe in any way at all.
If you do anything like this you should call it frankengit and make sure it can't modify your repository at all (at minimum, add --no-optional-locks to the status command).
Same question, asked 9 years ago:
Why does 'git status' ignore the .gitattributes clean filter?
Git mailing list discussion:
git filter bug
Nothing came of it apparently but if you care about this feature, you should take it up with the mailing list one more time regardless. Otherwise the same issue will still be around another 9 years later. ;-)
| git: inconsistent behavior when using filter to strip comments |
1,587,666,937,000 |
When installing an upgrade with sudo apt-get upgrade it displays:
Configuration file '/etc/grub.d/30_os-prober'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** defaults.list (Y/I/N/O/D/Z) [default=N] ?
However, I changed /etc/grub.d/30_os-prober when I set a GRUB password, so now I'm not sure what to do. Most likely the best thing to do would be to get the package maintainer's config file and then apply the same change again (merge the files).
I didn't get asked about this on another machine, maybe because unattended-upgrades did the upgrading, or because I used Apper (a GUI used for updating) instead of the console (why doesn't Apper prompt you about it? Is there a question or an issue about that?). It must have chosen the default, and I guess both keeping and replacing config files can sometimes break things.
How to display all config files that differ from the package maintainers' versions? The respective package that includes this config file should be displayed too. I think reinstalling that package should show the same prompt about the config file again if there is no command to retrieve the config file only.
I'd prefer a way by which one can then easily merge/change/compare these config files with a merge-GUI like Diffuse Merge or Meld (the diff view in the console when pressing D is barely usable).
I'm using Debian11/KDE.
|
You can find most changed configuration files with the following command:
debsums -ac
You can go a little further and query the dpkg database to find which packages those files belong to, as suggested in the manpage for debsums(1):
dpkg -S $(debsums -ac)
I don’t know any convenient way to compare the content of the “current” configuration file with the content of the “original” file, especially considering that dpkg does not keep a copy of the configuration files, and that, by default, recent versions of apt clean .deb files for successfully installed packages.
Note that this only works for configuration files that are contained in their packages.
A few packages would rather generate default contents for their configuration files within maintainer scripts and use ucf to update them while preserving local changes. You can find those that were modified with the following command:
md5sum --quiet --check /var/lib/ucf/hashfile
You can check which package created each ucf-managed configuration file in /var/lib/ucf/registry, and you can find the “original” files in /var/lib/ucf/cache/. You may also use ucfq to query the ucf database.
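The md5sum mechanism above can be illustrated with a self-contained sketch, using throwaway paths standing in for /var/lib/ucf/hashfile and a managed config file:

```shell
# stand-in paths; ucf's real hashfile lives in /var/lib/ucf/hashfile
d=$(mktemp -d)
printf 'hello\n' > "$d/demo.conf"
md5sum "$d/demo.conf" > "$d/hashfile"

md5sum --quiet --check "$d/hashfile" && echo "unmodified"   # hash matches: no output from md5sum

printf 'changed\n' > "$d/demo.conf"                          # simulate a local edit
md5sum --quiet --check "$d/hashfile" >/dev/null 2>&1 || echo "modified"
```

With --quiet, md5sum only reports files whose checksums fail, which is exactly what makes it usable as a "what did I change?" query.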
| How to see all config files that differ from package maintainers' versions in Debian? |
1,587,666,937,000 |
How can the Date/Time format that Nagios uses everywhere be changed to
YYYY-MM-DD HH:MM:SS?
|
A date/time format of "YYYY-MM-DD HH:MM:SS" is configurable in the main configuration file with the date_format option set to iso8601:
This option allows you to specify what kind of date/time format Nagios should use in the web interface and date/time macros. Possible options (along with example output) include:
| Option | Output Format | Sample Output |
|---|---|---|
| us | MM/DD/YYYY HH:MM:SS | 06/30/2002 03:15:00 |
| euro | DD/MM/YYYY HH:MM:SS | 30/06/2002 03:15:00 |
| iso8601 | YYYY-MM-DD HH:MM:SS | 2002-06-30 03:15:00 |
| strict-iso8601 | YYYY-MM-DDTHH:MM:SS | 2002-06-30T03:15:00 |
You will need to restart Nagios for the change to take effect.
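For example (the path to the main configuration file varies by install; /usr/local/nagios/etc/nagios.cfg is a common default):

```ini
# in nagios.cfg
date_format=iso8601
```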
| Nagios Core 3.5 change Date format |
1,587,666,937,000 |
This is related to a question I asked about 3 years ago. Since then things have changed a bit, apparently. One of the obvious changes is that aptitude itself has moved from whatever version it was to aptitude 0.8.12, or to be more precise:
$ aptitude --version
aptitude 0.8.12
Compiler: g++ 9.2.1 20190821
Compiled against:
apt version 5.0.2
NCurses version 6.1
libsigc++ version: 2.10.1
Gtk+ support disabled.
Qt support disabled.
Current library versions:
NCurses version: ncurses 6.1.20191019
cwidget version: 0.5.18
Apt version: 5.0.2
The other thing which has changed is aptitude has its documentation in aptitude-doc-en and the point/documentation I am interested is located in -
file:///usr/share/doc/aptitude/html/en/ch02s05s05.html
where it says -
Option: Aptitude::CmdLine::Verbose
Default: 0
Description: This controls how verbose the command-line mode of aptitude is. Every occurrence of the -v command-line option adds 1 to
this value.
Now the configuration file is supposed to be in one of three places; I chose one and created a few lines at
$ cat ~/.aptitude/config
Aptitude "";
Aptitude::CmdLine "";
Aptitude::CmdLine::Verbose "2";
Now I don't know if this is good enough or not. I tried the following commands -
$ sudo apt update
and
$ sudo aptitude update
But neither gave me any more output. Am I doing something wrong?
|
Regarding the verbosity setting, your configuration is correct (but you only need the last line). However, apt update doesn’t use Aptitude’s settings, so you won’t see a difference there. The difference with aptitude update is minor, it adds a status line at the end, showing the number of upgradable packages etc.
To check verbosity settings, the best command is aptitude moo: it shows a different message for each verbosity setting up to 6.
| Verbosity in aptitude command-line mode via its configuration file |
1,587,666,937,000 |
I only want echo $(date) to return the date not the backticked version.
echo $(date) # should return Wed Mar 6 09:50:41 EST 2019
echo `date` # should return `date`
|
Wrap the backticks in strong quotes to divest them of their subshelly powers:
$ echo '`echo`'
`echo`
Beware, though, the contraction wrapped in strong quotes:
$ echo 'I can't process this.'
> Oh whoops that ">" means we're still in a strong quote.
I cant process this.
Oh whoops that ">" means were still in a strong quote.
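To sidestep that pitfall, backslash-escaping the backticks also keeps them literal, and apostrophes inside the rest of the string are then unaffected:

```shell
# escaping the backticks with backslashes keeps them literal too
echo \`date\`
```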
| How do I configure ZSH command substitution to not use backticks (`)? |
1,587,666,937,000 |
I use Bash 4.3.48(1) in Ubuntu 16.04 (xenial) with a LEMP stack.
I am trying to create a php.ini overrides file in a version-agnostic way with printf.
1) The version agnostic operation fails:
printf "[PHP]\n post_max_size = 200M\n upload_max_filesize = 200M\n cgi.fix_pathinfo = 0" > /etc/php/*/fpm/zz_overrides.ini
The following error is given:
bash: /etc/php/*/zz_overrides.ini: No such file or directory
2) The version gnostic operation succeeds:
printf "[PHP]\n post_max_size = 200M\n upload_max_filesize = 200M\n cgi.fix_pathinfo = 0" > /etc/php/7.0/fpm/zz_overrides.ini
As you can see, both are basically identical besides * vs 7.0.
I didn't find a clue about this (regex?) problem in man printf.
I searched in Google and found nothing about "allowing regex in printf".
Why does the version-agnostic operation fail, and is there any workaround?
Edit: If possible, it is most important for me to use a one-line operation.
|
The behavior of a pattern match in a redirection appears to differ between shells. Of the ones on my system, dash and ksh93 don't expand the pattern, so you get a file name with a literal *. Bash expands it(1), but only if the pattern matches one file. It complains if there are more filenames that match. Zsh works as if you gave multiple redirections, it redirects the output to all matching files.
(1) except when it's non-interactive and in POSIX mode
If you want the output to go to all matching files, you can use tee:
echo ... | tee /path/*/file > /dev/null
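Applied to the question's php.ini case, sketched here with throwaway directories standing in for /etc/php/*/fpm (with the real paths you would pipe into sudo tee). Note the target files must already exist for the glob to expand, per the caveat below about non-matching patterns:

```shell
# stand-in version directories for demonstration
base=$(mktemp -d)
mkdir -p "$base/7.0/fpm" "$base/7.4/fpm"
touch "$base/7.0/fpm/zz_overrides.ini" "$base/7.4/fpm/zz_overrides.ini"  # glob only matches existing paths

printf '[PHP]\npost_max_size = 200M\n' | tee "$base"/*/fpm/zz_overrides.ini > /dev/null
cat "$base/7.4/fpm/zz_overrides.ini"
```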
If you want it to go to only one file, the problem is to decide which one to use. If you want to check that there's only one file that matches the pattern, expand the whole list and count them.
In bash/ksh:
names=(/path/*/file)
if [ "${#names[@]}" -gt 1 ] ; then
echo "there's more than one"
else
echo "there's only one: ${names[0]}"
fi
In standard shell, set the positional parameters and use $# for the count.
Of course, if the pattern doesn't match any file, it's left as-is, and since the glob is in the middle, the result points to a nonexisting directory. It's the same as trying to create /path/to/file, before /path/to exists, it's just here we have /path/* instead, literally, with the asterisk.
To deal with that, you'd have to expand the directory name(s) without the filename, and then append the file name to all the directories. This is a bit ugly...
dirs=(/path/*)
files=( "${dirs[@]/%/\/file}" )
and then we can use that array:
echo ... | tee "${files[@]}" > /dev/null
Or we could take the easy way out and loop over the filename pattern. It's a bit unsatisfactory in the more general case, since it requires running the main command once for each output file, or using a temporary file to hold the output.
for dir in /path/* ; do
echo ... > "$dir/file"
done
| Redirection to a globbed file name fails |
1,587,666,937,000 |
With SMB 1.0/CIFS being removed from Windows 10 in the Redstone 3 update due to its vulnerabilities, this will conk out a lot of systems relying on older network hard drive enclosures.
I have a Linux-based device (Raspberry Pi) that I could connect up to the drive with USB, but I'm not sure on this point:
Is there a way to restrict Samba on the Pi to using only SMB 3.0?
|
Use server min protocol option in smb.conf:
This setting controls the minimum protocol version that the server will allow the client to use.
Possible values are listed in documentation for server max protocol option.
The documentation matching the samba version installed on your system should be available with man smb.conf.
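For example, in the [global] section of smb.conf (SMB3 is one of the documented protocol values; check the man page for your Samba version):

```ini
[global]
    server min protocol = SMB3
```

Restart the Samba services afterwards for the change to take effect.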
| How to force Samba to use SMB 3.0? |
1,587,666,937,000 |
I'm on Fedora 25 and am just moving from gnome to i3wm. When on i3wm, my touchpad assumes some default configuration I suppose that is quite different from my gnome setup. Is there a way to copy the gnome touchpad configuration across to i3wm?
A few points:
I believe I'm using the default gnome touchpad configuration when in gnome but might have done some customisations long ago and forgot about it. I'd like to have the touchpad behave in the exact same way as it does in gnome if possible
Three finger as middle click doesn't work on i3wm
one finger tap doesn't work on i3wm
|
No answers on one of the top Google results, that's terrible.
xinput is going to be your new friend. Open a terminal as your user and run xinput you should see something similar to this:
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=18 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=17 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Sleep Button id=8 [slave keyboard (3)]
↳ Integrated Camera id=15 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=16 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=19 [slave keyboard (3)]
(That's on a ThinkPad x260)
Now you can find out what options are available for your touchpad with the list-props argument.
$ xinput list-props "SynPS/2 Synaptics TouchPad"
Device 'SynPS/2 Synaptics TouchPad':
Device Enabled (139): 0
Coordinate Transformation Matrix (141): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
libinput Tapping Enabled (292): 0
libinput Tapping Enabled Default (293): 0
libinput Tapping Drag Enabled (294): 1
libinput Tapping Drag Enabled Default (295): 1
libinput Tapping Drag Lock Enabled (296): 0
libinput Tapping Drag Lock Enabled Default (297): 0
libinput Tapping Button Mapping Enabled (298): 1, 0
libinput Tapping Button Mapping Default (299): 1, 0
libinput Accel Speed (278): 0.000000
libinput Accel Speed Default (279): 0.000000
libinput Natural Scrolling Enabled (274): 0
libinput Natural Scrolling Enabled Default (275): 0
libinput Send Events Modes Available (259): 1, 1
libinput Send Events Mode Enabled (260): 0, 0
libinput Send Events Mode Enabled Default (261): 0, 0
libinput Left Handed Enabled (283): 0
libinput Left Handed Enabled Default (284): 0
libinput Scroll Methods Available (285): 1, 1, 0
libinput Scroll Method Enabled (286): 1, 0, 0
libinput Scroll Method Enabled Default (287): 1, 0, 0
libinput Click Methods Available (300): 1, 1
libinput Click Method Enabled (301): 1, 0
libinput Click Method Enabled Default (302): 1, 0
libinput Middle Emulation Enabled (290): 0
libinput Middle Emulation Enabled Default (291): 0
libinput Disable While Typing Enabled (303): 1
libinput Disable While Typing Enabled Default (304): 1
Device Node (262): "/dev/input/event5"
Device Product ID (263): 2, 7
libinput Drag Lock Buttons (276): <no items>
libinput Horizontal Scroll Enabled (277): 1
I had to google a few, but most are self-explanatory; you can now change these to find your ideal config. (For me it's disabling the trackpad, but that's just me.)
xinput set-prop "SynPS/2 Synaptics TouchPad" "Device Enabled" 0
Lastly, to make it stick (and not break stuff in Gnome), I use the i3 config to run the xinput commands when I log in.
exec --no-startup-id /usr/bin/xinput set-prop ....
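As a concrete (hypothetical) example addressing the tap-to-click and middle-click points from the question, the lines in ~/.config/i3/config might look like this; the device and property names come from the listing above and will differ per machine:

```
exec --no-startup-id xinput set-prop "SynPS/2 Synaptics TouchPad" "libinput Tapping Enabled" 1
exec --no-startup-id xinput set-prop "SynPS/2 Synaptics TouchPad" "libinput Natural Scrolling Enabled" 1
```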
| How to copy gnome touchpad configuration to i3wm |
1,587,666,937,000 |
I use the venerable Awesome WM to manage tiled window layouts across a couple of screens. My configuration has a few goodies, but in general it follows a familiar pattern with a bar across the top with information about my keyboard layout, the tag situation, active windows, a system tray, and a clock.
For a while now I've had in mind that vertical screen real estate is too valuable to waste on this usage. It's not too bad on my desktop where I have multiple monitors and the bar is only on one of them (and I can rotate the displays!), but every time I use my 11" laptop I find myself wishing those 23 pixels were taken out of the width rather than the height of my screen.
What I would like to do is simply rotate the entire layout, text direction and all, and place it along the right edge sort of like this:
When I've experimented with this in the past, all I've been able to achieve is a bar on that edge with items stacked in it vertically, but each item's orientation was still horizontal, and obviously things like the task bar section did not play very nicely that way. I think I'm okay with the rotated text (and even tray icons) if I could just spin the whole thing.
Is this possible? If so, how?
|
A vertical wibox is possible; I used one with 3.4 for years and had to recreate the setup with 3.5. Based on this mailing list discussion, here is a short example with widgets re-ordered for my own needs, including margins to introduce spacing between widgets:
-- Create the wibox
mywibox[s] = awful.wibox({ position="left",orientation="north", screen = s })
-- Widgets that are aligned to the bottom
local bottom_layout = wibox.layout.fixed.horizontal()
bottom_layout:add(wibox.layout.margin(mytextclock,0,5))
if s == 1 then bottom_layout:add(wibox.widget.systray()) end
bottom_layout:add(mypromptbox[s])
-- Now bring it all together (with the tasklist in the middle)
local layout = wibox.layout.align.horizontal()
layout:set_first(bottom_layout)
layout:set_second(wibox.layout.margin(mytasklist[s],5,5))
layout:set_third(mytaglist[s])
-- Rotate
-- http://comments.gmane.org/gmane.comp.window-managers.awesome/9676
local rotate = wibox.layout.rotate()
rotate:set_direction("east")
rotate:set_widget(layout)
-- Widgets from top to bottom
local wibox_layout = wibox.layout.fixed.vertical()
wibox_layout:add(mylauncher)
wibox_layout:add(wibox.layout.margin(mylayoutbox[s],0,0,5,5))
wibox_layout:add(rotate)
mywibox[s]:set_widget(wibox_layout)
When adjusting widget placements, reload configuration with Mod+Ctrl+r
To rotate the systray, this code may do the trick (I did not test it):
if s == 1 then
local systray = wibox.widget.systray()
systray:set_horizontal(false)
systray:set_base_size(100)
right_layout:add(systray)
end
You can find a base configuration for Awesome 3.5 at https://github.com/ymartin59/awesome-vertical-wibox
| Can a wibox in Awesome-WM be setup vertically? |
1,587,666,937,000 |
I want to set up some watch folders that will then move the completed torrents to a specific directory for further processing. What I would like to know before I proceed, based on this rather old guide from here, is: what are the values 11,10 and 12,10 and 13,10? I have done a few searches for this, but haven't been lucky. Maybe it's my choice of keywords.
# Schedules to watch folders
schedule = watch_directory_1,11,10,"load_start=~/torrents/misc/*.torrent,d.set_custom1=~/Downloads/"
schedule = watch_directory_2,12,10,"load_start=~/torrents/tv/*.torrent,d.set_custom1=~/Downloads/TV/"
schedule = watch_directory_3,13,10,"load_start=~/torrents/movie/*.torrent,d.set_custom1=~/Downloads/Movies/"
# Move completed downloads to preset target
system.method.set_key = event.download.finished,move_complete,"d.set_directory=$d.get_custom1=;execute=mv,-u,$d.get_base_path=,$d.get_custom1="
|
The first value is the start offset, the second value the interval at which the command is executed. The one-second differences in the start offsets just make sure the 3 watch_directory invocations are spread out over time rather than all firing at once.
Instead of seconds (as in your example), that can also be time values (HH:MM:SS). The following is scheduled for 1AM every 24 hours:
schedule = throttle_1,01:00:00,24:00:00,download_rate=0
schedule = throttle_2,01:00:00,24:00:00,upload_rate=300
| Need explanation of rtorrent.rc schedule values |
1,587,666,937,000 |
Where the filename of the file once was, there is now a black stripe, preventing me from seeing what I'm editing.
To clarify a bit, it's that black stripe between "F1" and "All", when I did the screenshot I was editing my .emacs file.
I'm running GNU Emacs 23.2.1 (with the -nw flags) on Ubuntu 11.04.
I've tried executing emacs -q, the graphical interface pops up and the file name is correctly readable.
Here is my .emacs:
(defconst user-init-dir '~/Dropbox/emacs)
(add-to-list 'load-path "~/Dropbox/emacs")
(add-to-list 'load-path "~/Dropbox/clojure/clojure-mode")
(require 'clojure-mode)
(eval-after-load "slime"
'(progn (slime-setup '(slime-repl))))
(add-to-list 'load-path "~/Dropbox/emacs/slime")
(require 'slime)
(slime-setup)
;;line numbers
(global-linum-mode)
;;parens highlight
(show-paren-mode 1)
(require 'package)
(add-to-list 'package-archives
'("marmalade" . "http://marmalade-repo.org/packages/"))
;;steve yegge's js mode http://code.google.com/p/js2-mode/wiki/InstallationInstructions
(setq load-path (append (list (expand-file-name "~/Dropbox/emacs/js2")) load-path))
(autoload 'js2-mode "js2" nil t)
(add-to-list 'auto-mode-alist '("\\.js$" . js2-mode))
;;save how the session was when i exited http://www.gnu.org/s/libtool/manual/emacs/Saving-Emacs-Sessions.html
(desktop-save-mode 1)
EDIT: Sadly this seems to be bigger than I thought; it appears to be a wider color configuration that I somehow changed. I'm almost certain of this because now man does not show the letters of switches in its pages, and completions in emacs are not shown.
Anyway, this just for the sake of completeness, maybe I'll open a new question on this after I've searched a bit more...
|
The buffer name is in the mode-line-buffer-id face, applied above the mode-line face. By default, on a dark background in a terminal, mode-line is in black on white and mode-line-buffer-id is bold; maybe you accidentally gave it a black foreground.
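A possible (untested) fix to try in ~/.emacs, restoring the default readable colors for that face; the exact colors here are a guess:

```elisp
;; make the buffer name readable again on the mode line
(set-face-foreground 'mode-line-buffer-id "black")
(set-face-background 'mode-line-buffer-id "white")
```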
| Emacs not showing filename |
1,587,666,937,000 |
Files like ~/.config/vlc/vlcrc are 99% junk if you want to version control only the configuration options. I've got a script to remove the comments, but there's a ton of empty configuration sections left over. My sed- and awk-fu is not up to speed, so how can I remove the empty configuration sections?
The first line of a configuration section matches ^\[.*\]$, and it is empty if the first line is followed by any number of lines consisting only of whitespace, then followed by another line matching ^\[.*\]$ or EOF.
|
Recall section headers as you see them, but don't print them until you see a setting line in that section. You could do it in sed by storing the section header in the hold space, but it's clearer in awk.
awk '
/^ *\[/ {section=$0} # recall latest section header
/^ *[^[#]/ { # setting line
if (section != "") {print section; section="";} # print section header if not yet done
print
}
' ~/.vlc/vlcrc >~/.config/vlc/vlcrc
| Remove empty configuration section |
1,587,666,937,000 |
I'm doing unattended / non-interactive package installations via
DEBIAN_FRONTEND=noninteractive apt-get install -y my_package
This works as intended in most cases, but still gives me an interactive prompt if there is a config file conflict, e.g. something like this:
Configuration file '/etc/foo'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
I know that I can choose the answer to this by passing a suitable dpkg option to apt-get via -o, e.g.
DEBIAN_FRONTEND=noninteractive apt-get -o DPkg::Options::=--force-confdef install -y my_package
However, the corresponding options offered by dpkg seem not to include a way to abort the installation upon a conflict, which is what I would need.
How can I non-interactively install a package via apt-get and fail if a config conflict is encountered?
The following would also be acceptable to me:
Non-interactively check before calling apt-get whether there will be a conflict
Keep the versions of the config files on disk (like --confold) but then exit with a non-zero exit code or having another way of detecting this afterwards.
|
I haven’t checked this in your scenario, but dpkg should abort if it needs to ask for information and can’t read from its standard input; so
DEBIAN_FRONTEND=noninteractive apt-get install -y my_package < /dev/null
should abort with an error if there’s a configuration file conflict.
If that doesn’t work, you can always look for leftovers from conflicts: depending on your --conf options, dpkg will either leave the old version with a .dpkg-old suffix, or the new version with a .dpkg-new suffix. You can therefore look for new .dpkg-* files in /etc after an installation attempt to determine whether there were any conflicts.
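That leftover check can be sketched as follows, using a throwaway directory standing in for /etc (on a real system you would scan /etc itself, after the apt-get run):

```shell
# stand-in for /etc; pretend dpkg left the old version aside during an upgrade
etc=$(mktemp -d)
touch "$etc/foo.conf.dpkg-old"

if find "$etc" \( -name '*.dpkg-new' -o -name '*.dpkg-old' \) | grep -q . ; then
  echo "config conflict leftovers found"
fi
```

In a provisioning script you would exit non-zero inside that if-branch to fail the run.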
| Force non-interactive `apt-get install` to fail on config file conflict |
1,587,666,937,000 |
I edited the /etc/apt/apt.conf.d/50unattended-upgrades file (on Ubuntu 22.04.3) and intentionally introduced an error:
Unattended-Upgrade::Automatic-Reboot-WithUsers "Falsed"; // instead of "false"
When I restart the service it shows as running:
systemctl restart unattended-upgrades.service
systemctl status unattended-upgrades.service # shows "Active: active (running)"
If I run unattended-upgrades --dry-run it exits without error.
How do I have confidence that the configuration file is correct and will be used? With nginx there is the nginx -t command. Is there anything similar for unattended-upgrades?
|
The APT configuration format is essentially a hierarchical tree-value store, and is lenient on both the keys and values. It has to be as far as keys are concerned, since there's no central registry of allowed keys. As far as values are concerned, in the current implementation for boolean values, the strings “no”, “false”, “without”, “off”, and “disable” are interpreted as false, and the strings “yes”, “true”, “with”, “on”, and “enable” are interpreted as true. Any other value found when the configuration parser is asked for a boolean results in the default value being returned (not an error). For the setting evoked in your question, that default is true.
As a result of the above, as long as all the configuration files involved in APT’s configuration are syntactically valid, errors in the same vein as your example will go undetected.
It is possible to see how the current configuration files are interpreted:
apt-config dump
reads all the configuration files and dumps all the keys found there and the associated values.
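The lenient boolean behaviour described above can be mimicked with a rough shell sketch (this is an illustration, not APT's actual code):

```shell
# apt_bool VALUE DEFAULT: rough sketch of the lenient boolean parsing
apt_bool() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    yes|true|with|on|enable)      echo true ;;
    no|false|without|off|disable) echo false ;;
    *)                            echo "$2" ;;   # unrecognized value: fall back to default
  esac
}

apt_bool "false" true    # prints false
apt_bool "Falsed" true   # prints true: the typo silently falls back to the default
```

This is exactly why "Falsed" in the question went undetected: it simply isn't in either list.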
| Possible to validate unattended-upgrades configuration? |
1,587,666,937,000 |
When we boot a GNU/Linux system it shows lots of messages on stdout. And then, immediately before the prompt, it shows something like this:
Linux raspberrypi 4.19.66-v7+ #1253 SMP Thu Aug 15 11:49:46 BST 2019 armv7l
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
I would like to hide that message. Which file do I need to edit to accomplish that?
|
The last two paragraphs are stored in /etc/motd; you just rename or delete it to get rid of them:
$ cd /etc
$ sudo mv motd motd.old
The first line is specified in /etc/update-motd.d/10-uname; you can also delete it, or move it to another directory (making it hidden doesn't work).
| How to hide/disable preprompt messages? |
1,587,666,937,000 |
I'm currently in the process of preseeding a Debian installation with custom setup scripts running after the actual installation, to create a simple installer that will create everything that I need.
Now I found how to install additional packages and added the NetworkManager package, to simplify networking stuff. However the device has multiple ethernet interfaces and installing NetworkManager during the Debian installation creates the file /etc/NetworkManager/system-connections/Wired connection 1. However that file is configured incorrectly for the actual system. So if I remove it while NetworkManager is off and reboot, everything is working just fine. But having the file makes NetworkManager label all interfaces as "Wired connection 1" and only one interface can be active, etc. All in all, that file needs to go.
Now I first tried just removing the file in the script I invoke with preseed/late_command (the script runs and removes the file, I checked that). But upon booting into the system after the installation, the file is back. Next I tried stopping the NetworkManager service before removing the file with in-target systemctl stop NetworkManager, but that just gives me the lovely log line in-target: Running in chroot, ignoring request. And naturally that also doesn't work.
How can I install NetworkManager during preseeding with a blank "system-connections" configuration?
In summary the relevant (and working) lines from my preseed.cfg are:
d-i pkgsel/include string ... network-manager ...
d-i preseed/late_command string sh /.../postinstall.sh
and in my postinstall.sh I tried
in-target rm /etc/NetworkManager/system-connections/*
(which actually removes the file in that moment) and
in-target systemctl stop NetworkManager
in-target rm /etc/NetworkManager/system-connections/*
Update:
As suggested I tried removing the connection with nmcli directly.
This is my script:
in-target nmcli con delete $(in-target nmcli -g uuid con)
And this is the result:
May 6 09:16:43 log-output: + in-target
May 6 09:16:43 log-output: nmcli -g uuid con
May 6 09:16:43 log-output: dpkg-divert: warning: diverting file '/sbin/start-stop-daemon' from an Essential package with rename is dangerous, use --no-rename
May 6 09:16:43 in-target: Error: Could not create NMClient object: Could not connect: No such file or directory.
May 6 09:16:44 log-output: + in-target nmcli con delete
May 6 09:16:44 log-output: dpkg-divert: warning: diverting file '/sbin/start-stop-daemon' from an Essential package with rename is dangerous, use --no-rename
May 6 09:16:44 in-target: Error: Could not create NMClient object: Could not connect: No such file or directory.
|
Working with wired connections
By default, NetworkManager generates a connection profile for each wired ethernet connection it finds. At the point when generating the connection, it does not know whether there will be more ethernet adapters available. Hence, it calls the first wired connection "Wired connection 1". You can avoid generating this connection, by configuring no-auto-default (see man NetworkManager.conf), or by simply deleting it. Then NetworkManager will remember not to generate a connection for this interface again.
You can also edit the connection (and persist it to disk) or delete it. NetworkManager will not re-generate a new connection. Then you can change the name to whatever you want. You can use something like nm-connection-editor for this task.
So, you can create NetworkManager.conf before installing NetworkManager and set it up according to your hardware, with the no-auto-default option if needed. (Also check that the config file is not overwritten after the install; it should not be the case.)
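For example, a NetworkManager.conf along these lines should suppress the auto-generated profile for every interface (a sketch; check man NetworkManager.conf for the exact syntax supported by your version):

```ini
[main]
# never generate "Wired connection N" profiles for any interface
no-auto-default=*
```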
Another alternative could be locking write access to the problematic file with chmod u-w or chattr +i, but this is not recommended because it's not intended to work that way and may introduce other issues.
Source: arch-wiki
| Issues with installing NetworkManager during Debian installation with preseeding |
1,587,666,937,000 |
As make menuconfig and make nconfig offer a nice hierarchical way to configure the kernel options, is there any way to get this hierarchical structure in order to print it?
Something similar to the output of the tree command.
|
Thanks to the reply of @jeff-schaller I made a contribution to the Kconfiglib project, and now there is a new example script for this task. These are the steps to use it:
Inside the directory with the linux source, clone the repo:
root@23e196045c6f:/usr/src/linux-source-4.9# git clone git://github.com/ulfalizer/Kconfiglib.git
Cloning into 'Kconfiglib'...
remote: Counting objects: 3367, done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 3367 (delta 64), reused 89 (delta 50), pack-reused 3259
Receiving objects: 100% (3367/3367), 1.25 MiB | 1.79 MiB/s, done.
Resolving deltas: 100% (2184/2184), done.
Patch the makefile:
root@23e196045c6f:/usr/src/linux-source-4.9# patch -p1 < Kconfiglib/makefile.patch
patching file scripts/kconfig/Makefile
Configure as needed, basically for get a .config file:
root@23e196045c6f:/usr/src/linux-source-4.9# make menuconfig
Run the script with the config file:
root@23e196045c6f:/usr/src/linux-source-4.9# make scriptconfig SCRIPT=Kconfiglib/examples/print_config_tree.py SCRIPT_ARG=.config
======== Linux/x86 4.9.65 Kernel Configuration ========
[*] 64-bit kernel (64BIT)
General setup
() Cross-compiler tool prefix (CROSS_COMPILE)
[ ] Compile also drivers which will not load (COMPILE_TEST)
() Local version - append to kernel release (LOCALVERSION)
[ ] Automatically append version information to the version string (LOCALVERSION_AUTO)
-*- Kernel compression mode
--> Gzip (KERNEL_GZIP)
Bzip2 (KERNEL_BZIP2)
LZMA (KERNEL_LZMA)
...
But the nice thing is that it is possible to pass different kernel configurations and compare the changes easily:
root@23e196045c6f:/usr/src/linux-source-4.9# make scriptconfig SCRIPT=Kconfiglib/examples/print_config_tree.py SCRIPT_ARG=/tmp/config1 > config1-list.txt
root@23e196045c6f:/usr/src/linux-source-4.9# make scriptconfig SCRIPT=Kconfiglib/examples/print_config_tree.py SCRIPT_ARG=/tmp/config2 > config2-list.txt
And finally, compare the two listings with a diff tool:
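For example, with plain diff. The listings below are stand-ins so the snippet is self-contained; in practice the two files come from the scriptconfig runs above:

```shell
# stand-in listings (the real ones are generated by make scriptconfig)
printf '[*] 64-bit kernel (64BIT)\n[ ] Compile also drivers which will not load (COMPILE_TEST)\n' > config1-list.txt
printf '[*] 64-bit kernel (64BIT)\n[*] Compile also drivers which will not load (COMPILE_TEST)\n' > config2-list.txt
diff -u config1-list.txt config2-list.txt || true   # diff exits 1 when the files differ
```

Only the options that changed between the two configurations show up in the diff output.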
| Formatted print of linux kernel config |
1,587,666,937,000 |
I enabled issue_discards in the lvm.conf file on a machine with an SSD and I'd like to perform blkdiscard on one of the logical volumes. Can I do that without rebooting the machine? I'm able to unmount the physical volume along with all logical volumes stored on this particular SSD, but I'd prefer to avoid a system reboot.
|
According to the default comments in my lvm.conf, the issue_discards option only controls what happens to the freed space when you run lvreduce or lvremove, nothing else:
# Configuration option devices/issue_discards.
# Issue discards to PVs that are no longer used by an LV.
# Discards are sent to an LV's underlying physical volumes when the LV
# is no longer using the physical volumes' space, e.g. lvremove,
# lvreduce.
It is confirmed by this message on linux-lvm mailing list by RedHat's Mike Snitzer:
lvm.conf's issue_discards doesn't have any affect on the kernel (or
underlying device's) discard capabilities. It only controls whether
discards are issued by lvm for certain lvm operations (like when an LV
is removed).
So, if the underlying SSD supports TRIM or other method of discarding data, you should be able to use blkdiscard on it or any LVs placed on it just fine.
In other words, if you enable issue_discards, you can achieve the discarding of an LV's contents in two ways:
run blkdiscard on the LV. Example:
# lvcreate -L 1g vg00
Logical volume "lvol6" created.
# blkdiscard -v /dev/vg00/lvol6
/dev/vg00/lvol6: Discarded 1073741824 bytes from the offset 0
just use lvremove and LVM does the discarding for you. You don't have to do anything special to make the setting take effect.
[issue_discards initially disabled]
# lvremove /dev/vg00/lvol6
Do you really want to remove active logical volume vg00/lvol6? [y/n]: y
Logical volume "lvol6" successfully removed
# vi /etc/lvm/lvm.conf
[set issue_discards to enabled]
# lvcreate -L 1g vg00
Logical volume "lvol6" created.
# lvremove /dev/vg00/lvol6
Do you really want to remove and DISCARD active logical volume vg00/lvol6? [y/n]: y
Logical volume "lvol6" successfully removed
Note the added ... and DISCARD ... in the message of the lvremove command.
| Is it possible to reload lvm.conf without reboot? |
1,498,477,205,000 |
I have web Apache servers with Jessie or Stretch that have been upgraded successively from older versions of Debian (from Squeeze onwards, depending on the servers).
In all of them, I have Apache with security.conf configured with the directives ServerTokens Prod and ServerSignature off, according to this question: The Hosting History OS is unknow in the www.netcraft.com.
I have also checked that security.conf is present in the conf-enabled directory in all of them (it is).
Now comes the most interesting part: some of them do not honour that configuration, and some of them do. The only pattern I was able to establish is that recently installed servers with Apache configured from scratch do not exhibit that behaviour (i.e. they always honour the configuration).
What could be happening?
|
Interestingly enough, to use this, and other configurations, the /etc/apache2/conf-enabled directory has to be included in /etc/apache2/apache2.conf as is done by default in later Apache versions with the line:
IncludeOptional conf-enabled/*.conf
What happened is that on some servers, while upgrading, the old configuration file apache2.conf was kept without that directive being added. To add to the confusion, at least one server had already been spotted and corrected, which skewed establishing the pattern.
Thus, apparently, while it appeared those security directives were configured, they were not being used by Apache. Apache was assuming the default values for ServerTokens and ServerSignature instead, which are Full for the former and On for the latter.
I ended up adding to the end of /etc/apache2/apache2.conf
IncludeOptional conf-enabled/*.conf
After restarting the Apache service, the situation was corrected, and Apache no longer reports extra configuration data.
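For reference, the directives in question live in /etc/apache2/conf-available/security.conf (symlinked from conf-enabled) and are only honoured once that include is in place; per the question, they were set as:

```apache
ServerTokens Prod
ServerSignature Off
```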
| Apache not hiding Server Tokens/Signature |
1,498,477,205,000 |
I see this on the Internet:
General Setup --->
<*/M> Kernel .config support
[*] Enable access to .config through /proc/config.gz
But can't understand what's that mean?
I have an ARM-based board (NanoPi-M1 with an Allwinner H3 sun8iw7p1 SoC) that runs Debian Jessie, and I have no config.gz file in the /proc directory. I only have a config-3.4.39-h3.new file in the /boot directory, which is an empty file!
I added modules="configs" to the /etc/modules file and rebooted my system, but it had no effect!
How can I access the kernel configuration?
|
I see this on the Internet:
It specifies the location in Linux's menuconfig from where you can enable /proc/config.gz. You must recompile the Linux kernel to do this. On an ARM-based board this may not be mainline Linux but a different tree specific to the SoC used on the ARM board.
So, the steps would be:
Figure out which SoC you have on the board
Figure out where to obtain the Linux kernel tree ported to that SoC
Obtain and compile the Linux kernel, enabling the /proc/config.gz option
Install modules, register the newly-compiled kernel with the bootloader, and reboot
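Once a kernel with those options is running, /proc/config.gz is just a gzip-compressed text file. A self-contained sketch using a stand-in file (the real path, /proc/config.gz, only exists when the kernel was built with CONFIG_IKCONFIG_PROC=y):

```shell
# build a stand-in gzip'd config; on a real system skip this and read /proc/config.gz
printf 'CONFIG_IKCONFIG=y\nCONFIG_IKCONFIG_PROC=y\n' | gzip > /tmp/config.gz
zcat /tmp/config.gz | grep IKCONFIG_PROC
```

On a live system with the option enabled, zcat /proc/config.gz prints the full configuration of the running kernel.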
| How to enable access to the kernel config file through /proc/config.gz? |
1,498,477,205,000 |
I have default Debian 8.5 Jessie /etc/logrotate.conf contents:
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp, or btmp -- we'll rotate them here
/var/log/wtmp {
missingok
monthly
create 0664 root utmp
rotate 1
}
/var/log/btmp {
missingok
monthly
create 0660 root utmp
rotate 1
}
# system-specific logs may be configured here
With this settings logrotate does its jobs well. However, if I change:
rotate 4
to something different, for example to:
rotate 5
logrotate never does its job, consuming all the CPU power so I have to kill its process eventually.
Why is that? Should I change something when tuning rotate?
|
Try looking for some command to parse/debug the logrotate config without actually applying it.
from man logrotate
-d, --debug
Turns on debug mode and implies -v. In debug mode, no changes
will be made to the logs or to the logrotate state file.
To use it, you would run:
logrotate -d /etc/logrotate.conf
| logrotate Uses All CPU Power |
1,498,477,205,000 |
I’m having a problem with my sshd config. I want to limit all users of the group www-user to sftp use. All of them but the user yorunokoe.
I saw that related question: How to exclude from a "Match Group" in SSHD? and my config ends like this:
Match Group www-user User !yorunokoe
ChrootDirectory %h
ForceCommand internal-sftp
AllowTcpForwarding no
PermitTunnel no
X11Forwarding no
But that doesn’t work. I tested different variations and it seems that every time I use the exclamation mark, the whole directive evaluates as false and no subsequent config is applied. With the above config, all users still have SSH access; they’re not chrooted and they’re not limited to sftp.
I’m running with OpenSSH_6.7p1 Debian-5+deb8u2, OpenSSL 1.0.1k
What am I doing wrong?
|
So, thanks to https://unix.stackexchange.com/users/28235/n-st, the correct config is:
Match Group www-user User *,!yorunokoe
ChrootDirectory %h
ForceCommand internal-sftp
AllowTcpForwarding no
PermitTunnel no
X11Forwarding no
That limits all users in www-user, except user yorunokoe, to sftp.
| Can not succeed in excluding user in match directive in SSHD config |
1,498,477,205,000 |
I've recently started to use the i3 window manager.
I configured i3 and xterm to fit my needs, but whenever I start xterm and want use the menu (ctrl+mouse_button), the menu gets displayed as a small window with no border and no title bar. I can't use the menu, because the options are not visible.
The black border is the one surrounding the menu. As you can see, it displays "VT" ...
How can I fix this? Is it something with i3 or Xresources?
~/.Xdefaults
xterm*dynamicColors: true
xterm*background: grey13
xterm*foreground: yellow
xterm*utf8: 1
xterm*eightBitInput: true
xterm*saveLines: 32767
xterm*scrollTtyKeypress: true
xterm*scrollTtyOutput: false
xterm*scrollBar: true
xterm*loginShell: true
xterm*font: 7x13
xterm*jumpScroll: true
xterm*multiScroll: true
xterm*toolBar: true
xterm*geometry: 90x30
xterm*cursorBlink: true
|
The problem is the resource setting for geometry:
xterm*geometry: 90x30
That applies to the VT100 window and the menus — using different units of measure. For the VT100, it is characters, but for the menu it is pixels. Pixels are a lot smaller than (most) characters.
You probably meant this:
xterm*VT100.geometry: 90x30
Further reading: Why are the menus tiny? (xterm FAQ)
| i3 xterm menu (ctrl+mouse) too small |
1,498,477,205,000 |
Different Linux distributions store the same configuration parameters in different files. For example IPv4 address in Debian Wheezy is stored in /etc/network/interfaces file while in Fedora 20 it is stored in /etc/sysconfig/network-scripts/ifcfg-eth0 file. In addition, syntax of those configuration files is different. How do distributions parse those files? In addition, is the content of configuration files parsed in a way that userspace utilities like ip and ifconfig are called or are there some lower-level system calls which actually configure the kernel?
|
The kernel doesn't read any configuration file. As a rule, kernels avoid accessing the filesystem; there are a few exceptions and variations, but mainly, the kernel launches a program at the location /sbin/init when it boots, and then only accesses filesystems on behalf of userland processes.
Network configuration files (like other kinds of configuration files) are read by applications. For the network configuration files, these applications are suites of scripts and accompanying binaries that read the file and apply the configuration by making system calls. For network configuration on Linux, the system calls are mostly ioctl calls on sockets (you don't need to understand this).
Over the history of Linux (and Unix before it), some configurations have become standard, because everybody agreed on how to do it. This is largely the case (though not universal) for users (/etc/passwd, NSSwitch) and filesystems (fstab). On the other hand, network configuration has remained pretty diverse. Distributions derived from Red Hat keep it under /etc/sysconfig/network; distributions derived from Debian keep it in /etc/network/interfaces.
Going with Debian as an example, /etc/network/interfaces is parsed by the ifup program, which is invoked from /etc/init.d/networking, the init script in charge of setting up networking, itself invoked by init.
| How is the content of configuration files parsed on different Linux distributions? |
1,498,477,205,000 |
I operate a Linux system where I give out free Linux shell accounts to people for educational purposes. Unfortunately, while doing so you can expect to meet abusive users who will keep sending spam emails to other servers such as Google or Zoho, and hence will get the IP of the server blocked.
What I would like to do is allow the users on the system to send messages within localhost only. This means that when a user tries to send out an email to an external domain name, GMail for example, the request will be refused. However, if the user tries to send an email to another user on localhost (example: giovanni@localhost), the message will be sent. I don't mind receiving emails from other servers, but I don't want my server to send emails to other servers. How can I do so?
I'm running CentOS 6.5 with Postfix installed. How can I configure this? Any suggestion will be hugely appreciated!
|
Use a transport map:
Find or add the following line in your main.cf (alter the file location to fit your CentOS setup):
transport_maps = hash:/etc/postfix/transport
Edit the transport map file above to:
localhost :
<your FQDN> :
* error: Outgoing mail from this system has been disabled.
localhost and your FQDN will use local delivery. Anything else will be bounced with a message.
Update the database with:
# postmap /etc/postfix/transport
Reload the config:
# service postfix restart
| Allowing outgoing emails that will be delivered to localhost only |
1,498,477,205,000 |
Is it possible to provide a custom location for multitail.conf, or are my only options /etc/multiltail.conf or ~/multitail.conf?
I'd like to provide a specific config file that has regexes specific to our app defined, but I can't put this in the account's home directory, or /etc.
I care most about the colors, so if it's possible to reference those from a separate file, then that would work well too.
|
Those aren't your only options: you can tell multitail where to source the configuration file from by using the --config switch.
--config filename
Load the configuration from given filename.
See the man page for more info.
| multitail - custom config (multitail.conf) location |
1,498,477,205,000 |
When I execute a command in Ubuntu which results in a listing, I get results without the field names. Examples are ls -l or ps l.
I am not very experienced and always need to go digging through man pages and online documentation. And the names are quite cryptic already.
Is there a way to turn on field name listing globally, i.e. for all commands?
Note: actually ps l shows field names, while ls -l does not. It is true that the second case is very trivial. However, the question stands: is there a way to override this behaviour?
|
As @StephaneChazelas stated, this isn't possible. Your only other options are to modify the source (don't do this) and/or develop some wrapper scripts and aliases for yourself to assist.
There is this technique for preserving the header line of ps in output that you're going to pipe to sort:
Sort but keep header line at the top?
I would take this as an opportunity to hone your alias/scripting skills by putting together the pieces that you need. Much of using Unix/Linux is in tricking out your environment so that things are more accessible to your work habits and style.
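As a starting point for such a wrapper, here is a hypothetical shell function (the name and header text are made up for illustration, not something ls itself defines) that prints column names before the ls -l output:

```shell
# hypothetical wrapper: print a header line, then the usual ls -l listing
lsl() {
    printf '%s\n' 'PERMS LINKS OWNER GROUP SIZE DATE NAME'
    ls -l "$@"
}

lsl /tmp | head -n 1   # the first output line is now the header
```

Dropping a function like this into ~/.bashrc makes it available in every interactive shell.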
| How to set what field names are displayed in listings? |
1,498,477,205,000 |
I have some rules in my local.cfg file with some addresses defined as blacklist_to. However, I keep getting spam for these addresses -- or rather any email whatsoever.
How can I test spamassassin to see why it keeps accepting said emails?
|
You can feed an arbitrary message to spamassassin by piping it to spamc -R. You'll get a spamassassin report on your message that looks like
1.5/5.0
1.5 : -0.0 NO_RELAYS Informational: message was not relayed via SMTP
0.1 MISSING_MID Missing Message-Id: header
-0.0 NO_RECEIVED Informational: message has no Received headers
1.4 MISSING_DATE Missing Date: header
The first line is the score of the message and the threshold score for messages to be considered spam.
| Spamassassin blacklist_to testing |
1,498,477,205,000 |
I need to set a default mic and a default speaker output in the asound.conf config file. But I don't know exactly how I can find my external sound card or microphone device's name, so that on reboot or unplug/replug I don't have to reconfigure it again.
I tried to find them by using:
sun@sun-To-be-filled-by-O-E-M:/tmp$ pacmd dump | grep alsa_input
set-source-volume alsa_input.pci-0000_00_1b.0.analog-stereo 0xddb
set-source-mute alsa_input.pci-0000_00_1b.0.analog-stereo no
suspend-source alsa_input.pci-0000_00_1b.0.analog-stereo yes
set-source-volume alsa_input.usb-0d8c_C-Media_USB_Audio_Device-00-Device.analog-mono 0x9091
set-source-mute alsa_input.usb-0d8c_C-Media_USB_Audio_Device-00-Device.analog-mono no
suspend-source alsa_input.usb-0d8c_C-Media_USB_Audio_Device-00-Device.analog-mono yes
set-source-volume alsa_input.usb-046d_HD_Pro_Webcam_C920_8E9E4FCF-02-C920.analog-stereo 0xfffe
set-source-mute alsa_input.usb-046d_HD_Pro_Webcam_C920_8E9E4FCF-02-C920.analog-stereo no
suspend-source alsa_input.usb-046d_HD_Pro_Webcam_C920_8E9E4FCF-02-C920.analog-stereo yes
set-default-source alsa_input.usb-046d_HD_Pro_Webcam_C920_8E9E4FCF-02-C920.analog-stereo
or:
sun@sun-To-be-filled-by-O-E-M:/tmp$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 7: HDMI 1 [HDMI 1]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 2: Device [C-Media USB Audio Device], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0
But it confused me. Which one is the name that I have to use when doing sudo vim /etc/asound.conf?
From the information given above this device is my microphone:
card 2: Device [C-Media USB Audio Device], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0
And from the above information this device is my audio output:
card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
How can I tell this in my /etc/asound.conf? I tried the following but it does not work:
pcm.usb
{
type hw
card C-Media USB Audio Device
}
pcm.!default
{
type asym
playback.pcm
{
type plug
slave.pcm "dmix"
}
capture.pcm
{
type plug
slave.pcm "usb"
}
}
|
If I understand correctly, you want playback on your built-in soundcard and capture (microphone) from the external USB device.
Your external device is listed as card 2, device 0 and your built-in soundcard as card 0, device 0.
I think your asound.conf should look something like this:
pcm.!default
{
    type asym
    playback.pcm
    {
        type hw
        card 0
        device 0
    }
    capture.pcm
    {
        type hw
        card 2
        device 0
    }
}
| How can I find the correct name for my microphone and sound output using aplay or pacmd or something else, to apply in asound.conf? |
1,498,477,205,000 |
I've removed nagios3 with apt-get remove nagios3 and then removed the files in by using the command sudo rm -R /etc/nagios*
Now when I run apt-get install nagios3 the config files (/etc/nagios3/*) are not present. How can I regenerate them?
This box is on Ubuntu 10.04.
|
Purge nagios3. Then reinstall. That will probably work.
apt-get purge nagios3
apt-get install nagios3
The purge will get rid of the config files, which the system didn't delete initially, and so thought were still installed. If purging nagios3 is not an option, then it will be a little more complicated. If that is the case, leave a comment.
| How can I regenerate the default '/etc/' config files? |
1,498,477,205,000 |
I have to program a realtime application on Linux, but don't know whether the standard installation of Ubuntu has CONFIG_HIGH_RES_TIMERS enabled. How can I check this?
I'm using Ubuntu 11.04 64bit.
|
Ubuntu ships the kernel configuration in /boot/config-$version (in the same package as the kernel image /boot/vmlinuz-$version). You can check this file on a live system, or you can download it from the Ubuntu website. There are several images to choose from; the default under amd64 is -generic, and you can download the binary package and extract the file /boot/config-*. The simplest way to open a Debian package if you're not running a dpkg-based distribution is to convert it with alien.
By the way, the answer is yes under 10.04/-generic/amd64 which I happened to have available while writing this answer.
Several other distributions ship a /boot/config-* file. Others make the kernel configuration available in /proc/config or something similar, so that it's easy to see on a live system but doesn't appear in the binary package. In that case, if you don't have a live system, you need to check the source package.
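The check itself is a one-line grep; here is a small sketch (the helper name is made up, and the demo uses a stand-in file since the real one is /boot/config-$(uname -r)):

```shell
# look up a config symbol in a kernel config file
check_config() {
    grep "^$2=" "$1" || echo "$2 is not set"
}

# stand-in config file for the demo; use /boot/config-$(uname -r) on a live system
printf 'CONFIG_HIGH_RES_TIMERS=y\nCONFIG_NO_HZ=y\n' > /tmp/sample-config
check_config /tmp/sample-config CONFIG_HIGH_RES_TIMERS
```

A value of y means built in, m means built as a module, and an absent symbol means the feature is disabled.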
| How to check if CONFIG_HIGH_RES_TIMERS enable? |
1,498,477,205,000 |
I'm trying to compile a Linux Kernel to run light and paravirtualized on XenServer 5.6 fp1.
I'm using the guide given here: http://www.mad-hacking.net/documentation/linux/deployment/xen/pv-guest-basics.xml
But I'm stumped when I reached the option CONFIG_COMPAT_VDSO.
Where exactly is it in make menuconfig? The site indicated that the option is in the Processor type and features group, but I don't see it:
[*] Tickless System (Dynamic Ticks)
[*] High Resolution Timer Support
[*] Symmetric multi-processing support
[ ] Support for extended (non-PC) x86 platforms
[ ] Single-depth WCHAN output
[*] Paravirtualized guest support --->
[*] Disable Bootmem code (NEW)
[ ] Memtest (NEW)
Processor family (Core 2/newer Xeon) --->
(2) Maximum number of CPUs
[ ] SMT (Hyperthreading) scheduler support
[ ] Multi-core scheduler support
Preemption Model (No Forced Preemption (Server)) --->
[ ] Reroute for broken boot IRQs
[ ] Machine Check / overheating reporting
< > Dell laptop support (NEW)
< > /dev/cpu/microcode - microcode support
<M> /dev/cpu/*/msr - Model-specific register support
<M> /dev/cpu/*/cpuid - CPU information support
[ ] Numa Memory Allocation and Scheduler Support
Memory model (Sparse Memory) --->
[*] Sparse Memory virtual memmap (NEW)
[*] Allow for memory hot-add
[*] Allow for memory hot remove
[ ] Allow for memory compaction
[*] Page migration
[*] Enable KSM for page merging
(65536) Low address space to protect from user allocation (NEW)
[ ] Check for low memory corruption
[ ] Reserve low 64K of RAM on AMI/Phoenix BIOSen
-*- MTRR (Memory Type Range Register) support
[ ] MTRR cleanup support
[*] Enable seccomp to safely compute untrusted bytecode (NEW)
[*] Enable -fstack-protector buffer overflow detection (EXPERIMENTAL)
Timer frequency (100 HZ) --->
[ ] kexec system call
[ ] kernel crash dumps
[*] Build a relocatable kernel (NEW)
-*- Support for hot-pluggable CPUs
[ ] Built-in kernel command line (NEW)
FYI, I'm configuring Gentoo's Kernel v2.6.36-hardened-r9
|
As you had already said, it IS under "Processor Types and Features".
You are compiling Gentoo's hardened kernel source, so the code has undergone many patches.
A quick search on Google returned this: Gentoo kernel VDSO. It looks like Gentoo has had it disabled for several versions already.
Why don't you download directly from kernel.org?
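Two ways to locate the symbol regardless of which tree you use: inside make menuconfig, press / and search for COMPAT_VDSO; or grep the Kconfig files of the source tree. A self-contained sketch of the latter using a stand-in tree (run the grep from the real kernel source root instead):

```shell
# stand-in kernel tree for the demo (a real tree has the symbol in arch/x86/Kconfig)
mkdir -p /tmp/ktree/arch/x86
printf 'config COMPAT_VDSO\n\tbool "Compat VDSO support"\n' > /tmp/ktree/arch/x86/Kconfig
# find which Kconfig file defines the symbol
grep -rl 'config COMPAT_VDSO' /tmp/ktree
```

If the grep finds nothing in your tree, the option was patched out and there is nothing to enable in menuconfig.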
| Where is CONFIG_COMPAT_VDSO in make menuconfig? |
1,498,477,205,000 |
Yesterday I configured OpenVPN on an Ubuntu 18.04 server, which seems to work. I can connect without problems and systemctl status openvpn gives me green. However, my syslog is being riddled with errors which seem to relate to a different service than openvpn.service. I am kind of unsettled by this, since the server goes into use tomorrow and the only way to get access then is via OpenVPN.
Here is the syslog:
Jun 22 15:30:41 localhost systemd[1]: openvpn@multi-user.service: Main process exited, code=exited, status=1/FAILURE
Jun 22 15:30:41 localhost systemd[1]: openvpn@multi-user.service: Failed with result 'exit-code'.
Jun 22 15:30:41 localhost systemd[1]: Failed to start OpenVPN connection to multi-user.
Jun 22 15:30:47 localhost systemd[1]: openvpn@multi-user.service: Service hold-off time over, scheduling restart.
Jun 22 15:30:47 localhost systemd[1]: openvpn@multi-user.service: Scheduled restart job, restart counter is at 146.
Jun 22 15:30:47 localhost systemd[1]: Stopped OpenVPN connection to multi-user.
Jun 22 15:30:47 localhost systemd[1]: Starting OpenVPN connection to multi-user...
Jun 22 15:30:47 localhost ovpn-multi-user[3046]: Options error: In [CMD-LINE]:1:Error opening configuration file: /etc/openvpn/multi-user.conf
Jun 22 15:30:47 localhost ovpn-multi-user[3046]: Use --help for more information.
Jun 22 15:30:47 localhost systemd[1]: openvpn@multi-user.service: Main process exited, code=exited, status=1/FAILURE
Jun 22 15:30:47 localhost systemd[1]: openvpn@multi-user.service: Failed with result 'exit-code'.
Jun 22 15:30:47 localhost systemd[1]: Failed to start OpenVPN connection to multi-user.
Jun 22 15:30:52 localhost systemd[1]: openvpn@multi-user.service: Service hold-off time over, scheduling restart.
Jun 22 15:30:52 localhost systemd[1]: openvpn@multi-user.service: Scheduled restart job, restart counter is at 147.
Jun 22 15:30:52 localhost systemd[1]: Stopped OpenVPN connection to multi-user.
Jun 22 15:30:52 localhost systemd[1]: Starting OpenVPN connection to multi-user...
|
I initially worked around the issue by putting the following line in my rc.local:
systemctl stop openvpn@multi-user.service
I finally solved it by disabling the deprecated openvpn@multi-user.service, removing all configuration files from the OpenVPN root directory and moving them to the server directory, as well as activating the respective openvpn-server@multi-user.service.
| OpenVPN riddling syslog with errors, but otherwise seems to work flawlessly |
1,498,477,205,000 |
Description of the problem
I have a UPS, an Orvaldi KC2000 (capacity: 2000VA/1400W), and I want to set up the configuration of my Debian 10 (Buster, which is currently testing) to:
get GUI (preferably GNOME) notifications when there is power failure and my computer is running on UPS battery (to know that I have little time to save the work and turn off the system),
automatically turn off the computer when the battery of my UPS is critically low.
I hoped that this would be easy with NUT (Network UPS Tools), but it turned out that there is no obvious way to meet the first requirement, which is getting GUI (preferably GNOME) notifications.
What I've done to solve the problem
I've installed NUT (provided by nut package which installs [among others] nut-server and nut-client) and I configured it by editing files residing in /etc/nut directory.
root@host:~# ls /etc/nut
nut.conf ups.conf upsd.conf upsd.users upsmon.conf upssched.conf
specifically:
/etc/nut/upsd.users:
[upsmon] # name of my UPS
password = my_UPS_password
actions = SET
instcmds = ALL
upsmon master
/etc/nut/nut.conf:
MODE=standalone
/etc/nut/ups.conf:
maxretry = 3
[myups]
driver = blazer_usb
port = auto
/etc/nut/upsmon.conf:
MONITOR myups@localhost 1 upsmon my_UPS_password master
MINSUPPLIES 1
SHUTDOWNCMD "/sbin/shutdown -h +0"
POLLFREQ 5
POLLFREQALERT 5
HOSTSYNC 15
DEADTIME 15
POWERDOWNFLAG /etc/killpower
RBWARNTIME 43200
NOCOMMWARNTIME 300
FINALDELAY 5
rest of the files (/etc/nut/upsd.conf, /etc/nut/upssched.conf) have default content – /etc/nut/upsd.conf is empty and /etc/nut/upssched.conf has single line: CMDSCRIPT /bin/upssched-cmd.
(I skipped the comments sections in the above listings.)
After editing the above configuration files I needed to run systemctl restart nut-*. The * may be overkill, but I don't remember which of the services need to be restarted – nut-client.service, nut-driver.service, nut-monitor.service or nut-server.service.
I also installed nut-monitor, which (quote) provides nut-monitor, a GUI application to monitor UPS status. I hoped that nut-monitor has some functionality that allows popping up a warning window if the UPS battery is low, or that there is some way to configure GNOME to display the status of the UPS, but unfortunately I didn't find any way to do that.
To simulate power failure I use 2 commands: upsdrvctl -t shutdown and upsmon -c fsd (which shuts down the computer).
Recently I've found that there is nut-hal-drivers package that provides GUI notifications, but:
I can't find this package in Debian repository.
nut-hal-drivers package apparently doesn't work with upsmon and upsd provided by the nut package.
The question
My question: how can I set up my system to pop up some kind of warning (preferably a popup message native to the given desktop environment) if there is a power failure and my computer is running on the UPS battery?
|
This probably a partial duplicate of: Run various shell commands when NUT reports a low UPS battery
The tricky part is to display the notification on the desktop, googling a bit, I found http://rogerprice.org/NUT/ConfigExamples.A5.pdf, page 71 it describes some scripts how to do that.
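The general shape of such a setup, following the scripts referenced above, is to have upsmon run a NOTIFYCMD on power events and forward the message to the desktop with notify-send. The sketch below is untested; the uid (1000) and user name are assumptions you must adjust, and notify-send must be able to reach the logged-in user's session bus:

```shell
#!/bin/sh
# /usr/local/bin/ups-notify -- called by upsmon; the event message arrives in $1.
# Wire it up in /etc/nut/upsmon.conf with:
#   NOTIFYCMD /usr/local/bin/ups-notify
#   NOTIFYFLAG ONBATT  SYSLOG+EXEC
#   NOTIFYFLAG ONLINE  SYSLOG+EXEC
#   NOTIFYFLAG LOWBATT SYSLOG+EXEC
export DISPLAY=:0
export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
exec sudo -u desktopuser notify-send -u critical "UPS event" "${1:-power event}"
```

Remember to make the script executable (chmod +x) and that upsmon runs it as its own unprivileged user, hence the sudo hop to the desktop user.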
| Setup NUT power failure notifications when running computer on UPS battery |
1,498,477,205,000 |
I have a modern PC running Fedora 24 with a real-time patch (CCRMA audio tools) with an ASUS Essence STX II stereo sound card installed on PCIe. With it we run a playback/capture application. Also, we need to integrate CAN (Controller Area Network) and BLE (Bluetooth Low Energy) into the system and have a PCIe card for each of these functions. The CAN PCIe card is from PEAK (PCAN-PCI Express 2-ch) and the BLE card is an Intel 8260 M2 card that HP has put on a PCIe card (AFAIK).
With only the audio card installed it works fine (using ALSA as API). When the CAN and BLE is installed the following is observed:
Playback works as before.
One capture channel only returns zero (0) or minus one (-1) in all samples.
The other capture channel returns values in the range -2..2 (expected -100..100) and when applying our application signal processing low quality, but detectable, expected results are presented.
The ALSA API report no problems in setup and configuration.
CAN and BLE functions as expected.
Without any deeper PCIe experience I suspect that CAN and/or BLE PCIe cards jumble the mapping of the sound card functions. I see before me some hands on setup scripting to untangle the cards but have no (!) idea where to begin.
Can someone:
Tell me if my hunch is in the ballpark?
Inform me on where I might go for information on how to rectify the problem?
...or, share your solution to a similar problem?
Thanks!
UPDATE
arecord -l gives the same report for all card combinations (audio card only, BLE + audio and CAN + BLE + audio).
dmesg does not shout odd, but I'm not qualified to tell.
From lspci -vv, with all three card installed, I have
Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
Bus: primary=00, secondary=03, subordinate=03, sec-latency=0
Bus: primary=00, secondary=04, subordinate=05, sec-latency=0
Bus: primary=04, secondary=05, subordinate=05, sec-latency=0
from all entries that claim to be PCI bridges. I interpret that to be a structure where the main (00) bus have four sub-busses (01-04) and that sub-bus 04 have another sub-bus attached (bus 05).
The audio card has BDF 05:04.0 and use IRQ 16, that propagates through the 00-04-05 bridge, 04:00.0. Now, there is a "SMBus" device at 00:1f.4 also using IRQ 16. That device is also present with only the audio card installed (when the audio works as expected), then also using IRQ 16. The fourth (!) user of IRQ 16 is the PEAK CAN network controller, at 01:00.0. All other devices listed has unique IRQ numbers.
I'm learning by the minute but cannot decide if non-unique IRQs are a problem. Is it a problem? Are there better/other information in lspci that I should look at?
|
We've solved it! It seems that inserting the other PCI cards affected the mixer in the ALSA driver, diverting the capture to the motherboard's built-in sound function's front mic. On top of that, the volume for that mic was set to 0. That is consistent with the diagnostics we've seen (single-channel capture with very low signal levels). We had neglected to properly set up the ALSA configuration, allowing it to be controlled by other processes. One possible culprit could have been the pulseaudio process, which allows remote audio control.
The mixer settings, as well as a most comprehensive set of information on all current audio settings can be found using alsa-info.sh where we found the discrepancies affecting our application.
We have now made sure we do an explicit setup of the ALSA driver's configuration of all audio functions and verified functionality on all PCI cards. This link shows how to save mixer settings, and we also set up the audio drivers explicitly via an /etc/modprobe.d/alsa.conf configuration file.
Whooho!
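For reference, the mixer state can be captured and pinned down with alsactl once the controls are set correctly (a sketch of the commands involved; run them as root or via sudo):

```shell
# Save the current mixer state of every card to /var/lib/alsa/asound.state
sudo alsactl store
# Restore it later (most distros also run this at boot via an alsa-restore unit)
sudo alsactl restore
# Collect a full report of cards, drivers and mixer settings for diagnosis
alsa-info.sh --stdout | less
```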
| PCIe cards interfere with each others function |
1,498,477,205,000 |
I'm running a Ubuntu 64-bit system in a VM. I wanted to fuzz the VLC media player, so I grabbed the tar file, and built the dependencies and tried configuring it using this line:
./configure CC="afl-gcc" CXX="afl-g++" --disable-shared; make
However, this runs into an error:
requested libavcodec >= 57.37.100 but version of libavcodec is 56.60.100
Is there a workaround for this, other than building a new one from contrib?
|
You have a number of options:
rebuild a recent ffmpeg source package to get libavcodec57 & co.;
upgrade to Ubuntu 16.10 which has libavcodec57;
follow the VLC package approach, which is to embed the appropriate version of ffmpeg and use that instead.
The latter approach is the one I'd recommend; to get started:
sudo apt-get install devscripts
dget http://httpredir.debian.org/debian/pool/main/v/vlc/vlc_2.2.4-8.dsc
cd vlc-2.2.4
CC=afl-gcc CXX=afl-g++ dpkg-buildpackage -us -uc
This will tell you which other packages you need to install (if any). If you don't want to use dpkg-buildpackage, see at least debian/rules for the relevant configuration options.
| How to find libavcodec to build VLC (with AFL fuzz)? |
1,498,477,205,000 |
I'm trying to install Apache 2.4 and PHP 5.6 on CentOS which I've done correctly and confirmed using ./apachectl configtest and php -v. The problem is that Apache is not recognizing PHP scripts because the Apache install did not include any of the PHP modules that Apache requires.
The only thing the PHP documentation tells me to do is to add the modules to httpd.conf by adding:
# Extra Modules
AddModule mod_php.c
AddModule mod_perl.c
# Extra Modules
LoadModule php_module modules/mod_php.so
LoadModule php5_module modules/libphp5.so
LoadModule perl_module modules/libperl.so
AddType application/x-httpd-php .php
which, when trying to start the server, outputs:
httpd: Syntax error on line 156 of /usr/local/apache2/conf/httpd.conf: Cannot load modules/mod_php.so into server: /usr/
local/apache2/modules/mod_php.so: cannot open shared object file: No such file or directory
I've compiled both Apache and PHP correctly without any errors.
I compiled with this for Apache:
./configure --with-included-apr=/usr
make
make install
And this for PHP:
./configure
make
make install
Any help is appreciated.
|
To not leave it unanswered.
Apache cannot build mod_php.so as part of its build because it does not know how to parse or run PHP. On the other hand PHP needs to know how Apache has been compiled to produce mod_php.so. You need to specify:
--enable-so to the configure script of Apache, to allow for building of extra modules as shared libraries. And create apxs, the extension tool to build the shared libraries.
And --with-apxs2=/path/to/apache to the configure script of PHP, for it to be able to build against Apache headers, find apxs, and generate mod_php.so.
Typically on a *nix system the Apache --prefix will default to /usr/local and apxs will end at /usr/local/bin/apxs. Therefore the compilation should run as follows.
First Apache (httpd) with:
./configure --enable-so
make
make install
And then PHP with:
./configure --with-apxs2=/usr/local/apache2/bin/apxs
make
make install
References:
PHP INSTALL file
| No PHP modules after compiling Apache |
1,498,477,205,000 |
I have a Debian Linux server with nginx configured for several sites (/etc/nginx/sites-enabled), every site showing on its own domain.
Now when I remove a site from /etc/nginx/sites-enabled, querying the domain of the removed site does not display something like "This domain is not configured", but instead serves a completely different site (one configured for a totally different domain).
I want to remove a site from my server, but instead of proper removal I see it replaced with another site.
Here is my config for one of my sites, for an example:
# cat /etc/nginx/sites-available/homepage | grep -vE '^\s*#'
server {
listen 80;
listen [::]:80;
root /var/www/homepage/web/;
index index.html index.htm index.nginx-debian.html;
server_name portonvictor.org;
location / {
try_files $uri $uri/ =404;
}
}
|
Unless you explicitly define a default server, nginx will use the first server with a matching port (for any request where there is no explicit server_name match). See this document for details.
You should create a catch all server block, for example:
server {
listen 80 default_server;
...
}
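A slightly fuller catch-all might look like this (a sketch; return 444 is an nginx-specific status that closes the connection without sending a response, and default_server must appear on exactly one server block per listen port):

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;    # matches nothing real; requests land here only as fallback
    return 444;       # drop the connection without a response
}
```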
| Wrong site appears in Nginx |
1,498,477,205,000 |
I'm running OpenSuSE 12.1 with the KDE desktop on a test machine. The "logout" menu entry in the KDE menu does not seem to do anything. I am not aware of changing any KDE settings that might be relevant.
How can I find out the cause of this behaviour, and how can I remedy it?
|
Apparently, this is a bug KDE knows about, and it seems to be quite common. It appears to be caused by KDE trying to play the logout sound bite and hanging there.
The only solution I could find was to disable the audio (through the configuration menu) and try again.
| Impossible to log out from OpenSuSE's KDE menu |
1,498,477,205,000 |
I have just installed archlinux on a virtual machine and I managed to install lightdm by following the instructions given in
https://wiki.archlinux.org/index.php/LightDM
https://wiki.archlinux.org/index.php/Display_Manager
but lightdm looks like this
But I want it to look like the default one in ubuntu
How can that be done ?
Ps: I am running xfce4 as the desktop Environment
|
[The ArchWiki looks dead currently, so I don't know what is contained in the instructions you linked to.]
To change the looks of LightDM, you need to install a theme and configure it. This page suggests that the relevant Arch packages might be lightdm-unity-greeter or lightdm-webkit-greeter.
| ubuntu like lightdm in arch linux |
1,498,477,205,000 |
Through the magic of piezoelectric phenomena, I experience "coil whine" when moving the mouse.
Turns out said coil is energized by the CPU, and that the Intel driver enabling Turbo Boost makes it process my mouse movements extremely quickly, resulting in audible power consumption spikes.
When I disable it with the following command, I get back my sanity:
echo "1" | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
But unfortunately, it only lasts until the next reboot.
Is there a way to persistently disable Turbo Boost? Perhaps via some incantation involving x86_energy_perf_policy or cpuinfo?
In case it's relevant, my particular CPU model is i9-10900.
|
Add this command to rc.local or create a systemd unit - whatever you like. Instead of disabling Turbo you might want to limit the maximum operating frequency of your CPU. There's a gulf between base and turbo frequencies, so disabling Turbo feels like an overkill. I have a script for that as well. With the intel-pstate driver you're free to set any maximum CPU operating frequency.
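A minimal systemd unit for this could look as follows (a sketch; the sysfs path is the intel_pstate knob from the question, and the unit name is arbitrary):

```ini
# /etc/systemd/system/disable-turbo.service
[Unit]
Description=Disable Intel Turbo Boost

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo'

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now disable-turbo.service. To cap the frequency instead of disabling Turbo entirely, the scaling_max_freq files under /sys/devices/system/cpu/cpu*/cpufreq/ can be written in the same way.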
| Persistently disable Intel Turbo Boost |
1,498,477,205,000 |
I am trying to make parts of my .tmux.conf file conditional depending on the kind of system I'm on. For a start, I want one part to only be processed on MacOS.
The man page says that I can use the #(COMMAND) syntax to substitute the output of COMMAND in formats, so in particular in conditions. So I'm trying this:
%if "#{==:#(/usr/bin/uname -s),Darwin}"
CONF-COMMANDS
%endif
But no luck, CONF-COMMANDS are ignored on Mac. I have verified, of course, that "Darwin" is indeed the output of uname -s. As you can see, I'm using the absolute file name to eliminate any PATH problems. I have also verified that the trivial condition %if "#{==:Darwin,Darwin}" in fact works. So I must be doing something wrong regarding the syntax. What is it?
|
I have a feeling you are running into a situation described in the man page:
When constructing formats, tmux does not wait for #() commands to finish; instead, the previous result from running the same command is used, or a placeholder if the command has not been run before.
You can demonstrate this from within a Tmux session with:
tmux display-message -p "#(uname -s)"
Which should return <'uname -s' not ready>.
Consider using if-shell as an alternative to %if for this:
tmux if-shell '[ "$(uname -s)" = "Darwin" ]' "CONF-COMMAND; CONF-COMMAND2; ..."
Under Tmux 3.2a, the syntax is somewhat cleaner:
tmux if-shell '[ "$(uname -s)" = "Darwin" ]' {
CONF-COMMAND1
CONF-COMMAND2
}
Not tested on Mac, but confirmed to work with WSL/Ubuntu/Bash. I think it should work as-is with Mac/Zsh.
| tmux configuration: command output substitution doesn't |
1,498,477,205,000 |
I'm working on a platform with Wayland & Weston and I so far only have the Wayland-Terminal application installed. I can start it but it's unusable because I cannot type a single letter, it constantly gets repeated many times. It appears as if the keyboard repeat delay is set way too low. According to http://manpages.ubuntu.com/manpages/bionic/man5/weston.ini.5.html#keyboard%20section I have added a [keyboard] section to /etc/xdg/weston/weston.ini and it now looks like:
[core]
idle-time=0
require-input=false
repaint-window=17
[keyboard]
repeat-rate=50
repeat-delay=500
but after a reboot, the keyboard remains unusable, there does not seem to be any change at all. Anyone that can assist in this matter?
Thank you!
|
I was able to come to a usable keyboard configuration with the following values (which I know are far from optimal but are good enough for me for now):
[core]
idle-time=0
require-input=false
repaint-window=17
[keyboard]
repeat-rate=0
repeat-delay=500
| How to set keyboard repeat delay in Weston |
1,498,477,205,000 |
I tried to create a custom mimetype (text/graphml+xml) by creating the file ~/.local/share/mime/packages/graphml+xml-mime.xml with this content:
<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns='http://www.freedesktop.org/standards/shared-mime-info'>
<mime-type type="text/x-graphml+xml">
<comment>GraphML file</comment>
<acronym>GraphML</acronym>
<expanded-acronym>Graph Modelling Language</expanded-acronym>
<sub-class-of type="text/xml"/>
<glob pattern="*.graphml"/>
</mime-type>
</mime-info>
And installed an appropriate icon with:
xdg-icon-resource install --context mimetype --novendor --size ${size} --mode user text-x-graphml+xml.png
Then updated the database with
update-mime-database ~/.local/share/mime
But the icon for a my.graphml file is not displayed in nautilus (it's a debian minimal gnome system).
The icons in ~/.local/share/icon/hicolor/${size}x${size}/mimetype/text-x-graphml+xml.png does exist.
gio info my.graphml says:
...
standard::icon: text-x-graphml+xml, text-x-generic, text-x-graphml+xml-symbolic, text-x-generic-symbolic
standard::content-type: text/x-graphml+xml
standard::fast-content-type: text/x-graphml+xml
...
I can double click it and the file is opened with yed (as I want - did create the ~/.local/share/applicatons/yed.desktop file)
But no icon! :-(
|
The fix
Use this XML file instead:
<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns='http://www.freedesktop.org/standards/shared-mime-info'>
<mime-type type="application/x-graphml+xml">
<comment>GraphML file</comment>
<acronym>GraphML</acronym>
<expanded-acronym>Graph Modelling Language</expanded-acronym>
<glob pattern="*.graphml"/>
<icon name="x-graphml+xml"/>
</mime-type>
</mime-info>
and make sure you run xdg-icon-resource
with
--context mimetypes
not
--context mimetype
otherwise they'll go in the wrong folder.
For example, if the icon is 48x48, the installation commands will look like this:
xdg-mime install --mode user graphml+xml-mime.xml
xdg-icon-resource install --context mimetypes --size 48 text-x-graphml+xml.png x-graphml+xml
update-mime-database ~/.local/share/mime
update-icon-caches ~/.local/share/icons
Attempt at an explanation
This is a strange one.
It appears the difficulty is that when the mimetype is
text/x-graphml+xml
instead of
application/x-graphml+xml
it defaults to the generic text icon.
This seems to depend on the file manager and desktop, though.
Is this a bug? You decide!
| Assign an icon to a custom mimetype |
1,498,477,205,000 |
I have an old P3 laptop with an 800x600 screen on which I've installed WattOS 7.5. I know it is now on 10 but 7.5 is the latest one that will fit on a CD. The later ones will only fit on DVD. This is an old machine - it only has a CD reader, it will not boot off USB and it doesn't like microWatt (which still fits on a CD). There is something wrong with the microWatt drivers - the entire screen appears as a 0.25" bar. Anyway, it has WattOS 7.5.
My login screen is very reddish. If I run it from my desktop it is greenish - which is the correct colour. Also, for some reason, the system thinks it has a 1024x768 screen. After logging in, all the colours are OK.
The obvious answer would have been to ask the WattOS site but it is a chicken and egg situation. It wants me to sign in and in order to sign in, I need an invite key. I've got no idea what one of those is. To find out, I'd have to login and ask but I can't login because I haven't got an invite key.
Before committing the settings in stone, I decided to try them out. First, I set the screen size
xrandr -s 800x600
The screen size changes and I can now see the taskbar but the brightness has also changed to half. When I type xrandr -q, I get
xrandr: Failed to get size of gamma for output default
Followed by the resolutions
I suspect it is assuming I have 24 or 32-bit colour but this is an old system so at most 16-bit colour, which possibly explains why it has all gone dim and why my login screen seems to have lost its blue/green component.
The questions
I must be looking up the wrong keywords - I can't find how to set the number of colours to 65536 or to tell the system that it has 16-bit colour. All the hits I am getting are how to set console window colours.
Another lot of searches says gedit /etc/X11/xorg.conf This file doesn't exist on my system. Again, I think I'm looking up the wrong keywords. Almost all the hits tell me where it is, none of them tell me what has replaced it.
How do I make these settings permanent so that my login screen appears in the correct colour and the system knows what size my screen is.
Edit I've found https://ubuntuforums.org/showthread.php?t=1493835 Apparently xorg.conf will be used if created. I'll give that a try later today. The laptop can only stray from a power supply for 5 minutes: after that it shuts down.
Edit This question is now purely academic - the machine just died.
|
Finally got to the bottom of this. WattOS has an on screen utility called arandr. I've never been successful in pasting pictures so I'll do a sort of diagram. On some systems, it is called the Screen layout editor. On WattOS, it is called arandr. I used it because I thought it had something to do with xrandr.
________________
|____menu________|
|____icons_______|
| | |
| default | |
|____________|___|
What I was doing was going through all the menu options and icons and not finding anything to do with resolution. The help doesn't tell you how to use it - all it has is the about screen.
What I should have done is right click on the area marked Default, select Resolution and then pick the resolution.
It is as simple as that: it has only taken me 7 years to figure out this one.
| Screen dims after xrandr |
1,376,662,415,000 |
There was such a tool, but I cannot remember its name. I needed to configure the precedence of addresses via /etc/gai.conf. I finally managed to find the error, but for the future: what's the name of the tool which displays the addresses of a hostname exactly as getaddrinfo(3) returns them?
|
I know there is a tool resolveip for this that comes with MySQL. It should also be dead-simple to write something with e.g. Python or Perl...
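As a quick sketch of the Python route: on CPython under Linux, socket.getaddrinfo() is a thin wrapper over getaddrinfo(3), so the addresses come back in exactly the order the libc (and therefore /etc/gai.conf) produces them. The resolve helper below is hypothetical, not part of any library:

```python
import socket
import sys

def resolve(host, port=None):
    """Return the textual addresses for host in getaddrinfo(3) order."""
    addrs = []
    # Restrict to TCP so each address appears once rather than per socket type.
    for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
            host, port, proto=socket.IPPROTO_TCP):
        addrs.append(sockaddr[0])  # first element of sockaddr is the address
    return addrs

if __name__ == "__main__":
    for addr in resolve(sys.argv[1] if len(sys.argv) > 1 else "localhost"):
        print(addr)
```

Run it as `python3 resolve.py example.com` and compare the ordering before and after editing /etc/gai.conf.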
| Testing precedence of resolving addresses from commandline |
1,376,662,415,000 |
I have an NPI iMX6ULL ARM based single-board computer running Debian Buster. It has 2 network ports listed by ifconfig as eth0 and eth1
It seems to be ignoring my network configuration in /etc/network/interfaces
auto lo eth0 eth1
iface lo inet loopback
iface eth0 inet dhcp
iface eth1 inet static
address 192.168.1.254
netmask 255.255.255.0
iface usb0 inet static
address 192.168.7.2
netmask 255.255.255.252
network 192.168.7.0
gateway 192.168.7.1
After booting the above configuration with eth0 connected to a dhcp server, ifconfig reports:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.4.10.131 netmask 255.255.192.0 broadcast 10.4.63.255
inet6 fe80::d489:7cff:feec:e09e prefixlen 64 scopeid 0x20<link>
ether d6:89:7c:ec:e0:9e txqueuelen 1000 (Ethernet)
RX packets 478 bytes 42931 (41.9 KiB)
RX errors 0 dropped 29 overruns 0 frame 0
TX packets 30 bytes 2883 (2.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=-28669<UP,BROADCAST,MULTICAST,DYNAMIC> mtu 1500
ether d6:89:7c:ec:e0:9d txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Similarly, after booting the above configuration with eth1 connected to a dhcp server, ifconfig reports:
eth0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether d6:89:7c:ec:e0:9e txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=-28605<UP,BROADCAST,RUNNING,MULTICAST,DYNAMIC> mtu 1500
inet 10.4.11.126 netmask 255.255.192.0 broadcast 10.4.63.255
inet6 fe80::d489:7cff:feec:e09d prefixlen 64 scopeid 0x20<link>
ether d6:89:7c:ec:e0:9d txqueuelen 1000 (Ethernet)
RX packets 1234 bytes 118390 (115.6 KiB)
RX errors 0 dropped 58 overruns 0 frame 0
TX packets 38 bytes 3547 (3.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
If I do an sudo ifdown eth1 it reports:
ifdown: interface eth1 not configured
and sudo ifup eth1 it comes up:
debian@npi:~$ sudo ifup eth1
debian@npi:~$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.4.10.131 netmask 255.255.192.0 broadcast 10.4.63.255
inet6 fe80::d489:7cff:feec:e09e prefixlen 64 scopeid 0x20<link>
ether d6:89:7c:ec:e0:9e txqueuelen 1000 (Ethernet)
RX packets 16846 bytes 1401257 (1.3 MiB)
RX errors 0 dropped 856 overruns 0 frame 0
TX packets 65 bytes 4551 (4.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=-28669<UP,BROADCAST,MULTICAST,DYNAMIC> mtu 1500
inet 192.168.1.254 netmask 255.255.255.0 broadcast 192.168.1.255
ether d6:89:7c:ec:e0:9d txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
What is controlling my network configuration on boot, and how do I find out?
I did find a couple of Google hits that talked about issues with my MAC address but this happens to both interfaces. I tried changing my MAC address as well, but nothing changed.
|
Thank you to Seamus for pointing me in the right direction. The key was that the image I was working with was based on the BeagleBone images.
The issue was connman taking control of the network ports before the ifupdown scripts could. On one unit, I was able to remove connman with apt. After that, my configuration in /etc/network/interfaces was properly applied.
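The removal step, for reference (do this from the console, not over SSH, since the network connman was managing will drop until the interfaces come back up):

```shell
# Remove connman so the classic networking stack regains the interfaces
sudo apt purge connman
sudo systemctl restart networking    # or simply reboot
```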
My research indicated that connman is the way of the future so I figured out how to configure that too. This page gave lots of great info. connmanctl can be used to set a static IP directly.
root@npi:~# connmanctl
connmanctl> config ethernet_00142d259a48_cable --ipv4 manual 192.168.10.2 255.255.255.0 192.168.10.1
connmanctl> config ethernet_00142d259a48_cable --nameservers 8.8.8.8
connmanctl> exit
The hex string in the middle of the device name is the MAC address of the device. manual specifies static IP and the numbers are the ipaddress, network mask and gateway (gateway is optional).
This page also explains it well. Most documentation unfortunately is aimed at configuring wifi. Most sites only mention static addressing in passing.
Hope this helps the next guy.
| Debian Buster Can not set static IP |
1,376,662,415,000 |
Say I want to automatically update my /etc/ntp.conf configuration using sed. Format of ntp.conf allows to define lists by usage same keywords for lines occurred all over the file. For example:
# first block occurrences
server 1.1.1.1
server 2.2.2.2
driftfile /var/lib/ntp/drift
# second block of occurrences
server 3.3.3.3
server 4.4.4.4
Now, I've got updated list of ntp servers, say 5.5.5.5, 6.6.6.6 and 7.7.7.7. As a result I want to get:
# first block occurrences
server 5.5.5.5
server 6.6.6.6
server 7.7.7.7
driftfile /var/lib/ntp/drift
# second block of occurrences
Can I do it with sed? Is it right tool for this problem or should I use something else?
P.S.: commenting out second block (or both) of occurrences could also be an option.
|
If you have the new list of servers in a file named list.txt in cwd:
sed '/^server/{x;//!r list.txt
d}' /etc/ntp.conf
or, if you don't want to use a file but rather hardcode the new server names:
sed '/^server/{x;//!c\
server 1\
........\
server n-1\
server n
d}' /etc/ntp.conf
This assumes that there is at least one non-commented server line in your /etc/ntp.conf (also, it will not remove any server lines that are commented out - you can change the regex to include those too). If you wanted to insert those lines even if there were no server entries in original file (and in that case, add the new servers at the end of file) you could do something like:
sed '/^server/{x;//!r list.txt
d}
${x;//!r list.txt
x}' /etc/ntp.conf
or use the same conditions if you prefer using c\ - I'll leave that as an exercise. Keep in mind that when changing lines with c\ all backslashes and embedded newlines have to be escaped with a backslash (as in my example).
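A self-contained demonstration of the first command (assuming GNU sed; the temp files stand in for /etc/ntp.conf and the new server list):

```shell
# Build a sample ntp.conf and a replacement list in a temp directory.
tmp=$(mktemp -d)
printf '%s\n' 'server 1.1.1.1' 'server 2.2.2.2' \
    'driftfile /var/lib/ntp/drift' 'server 3.3.3.3' > "$tmp/ntp.conf"
printf '%s\n' 'server 5.5.5.5' 'server 6.6.6.6' > "$tmp/list.txt"
# r reads a path relative to the cwd, hence the subshell cd.
result=$(cd "$tmp" && sed '/^server/{x;//!r list.txt
d}' ntp.conf)
printf '%s\n' "$result"
# prints:
#   server 5.5.5.5
#   server 6.6.6.6
#   driftfile /var/lib/ntp/drift
rm -rf "$tmp"
```

The x/d dance works because the hold space only matches ^server once a previous server line has been seen, so list.txt is read exactly at the first match and every later server line is simply deleted.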
| sed: remove all matches in the file and insert some lines where the first match was |
1,376,662,415,000 |
I just joined a new company, and this company is using rPath in its system deployment process.
In case you did not know, rPath, as a company and a solution, was closed and discontinued when it was acquired by SAS.
Is there any similar solution like rPath, or a straight clone for that solution?
rPath uses the model of a "package version snapshot": for a given day, you can get a snapshot of operating system packages that should work correctly together. On top of that base snapshot, you can create other groups (e.g. web server, mail server, VNC, etc.) to build a fully working environment. With that version defined, you can go to any machine that subscribes to the snapshot and the groups, and do a "migration" that will automatically resolve and install the dependencies. If it has errors, you can roll back the installation and get back the last working version.
|
To answer my own question: there is no visible rPath clone out there.
If you are using rPath for package management and using Red Hat in your environment and would like to transition away, the logical tool for the task is Red Hat Satellite.
If you are adventurous, you can try Spacewalk, which is the Open Source upstream version of Red Hat Satellite.
| rPath clone or similar |
1,376,662,415,000 |
When searching the web for information about how to configure the default resolution and color depth for RealVNC sessions, I always come across stuff that talks about passing commandline parameters to vncserver, such as vncserver -geometry 1024x768 or something. However, I have my system configured to start the RealVNC server on boot (for runlevels 2-5; I'm using Debian) via the /etc/init.d/vncserver-x11-serviced script that RealVNC installs; I'm not using the vncserver command. How do I configure this to have a particular default resolution and color depth? Is there a config file somewhere I can use?
|
NOTE: I'm aware that this answer applies to the Virtual Mode of RealVNC rather than the Service Mode (vncserver-x11-serviced), but I think the Virtual Mode is generally more useful anyway, and it's the only one whose resolution can be changed dynamically. It's probably a pretty similar technique to change resolution for RealVNC when it's running in Service Mode.
After installing RealVNC (at the time of writing, version 5.2.1), applying a free license to it using vnclicense -add ..., and running the VNC server in its "Virtual Mode" (there are also Service and User modes which I won't go into here as it's complex enough already), I discovered how deep this rabbithole goes. :-) The documentation for this is far from comprehensive or obvious, and the only KB articles on RealVNC's site are somewhat dated (talking about the vncserver command instead of the apparent current recommendation, vncserver-virtual), or about changing the resolution dynamically with RandR during a session, not choosing what resolution should be used when the VNC server is started.
Firstly, the word "geometry" tends to be used in the context of The X Window System instead of the word "resolution". The default resolution given to you by the VNC server (or maybe X itself?) seems to be 800x600.
Now, you can pass parameters directly to the X Server when starting RealVNC's server manually from the commandline (or obviously in an automated way on boot from something like rc.local), and the param to pass to set the resolution is geometry. So, this will start a virtual RealVNC server instance with 1280x1024 resolution:
> vncserver-virtual -geometry 1280x1024
I usually run as root when I'm testing stuff like this, so I switch to the user whose desktop I want to VNC into first:
> sudo -u [someuser] vncserver-virtual -geometry 1280x1024
That requires passing a geometry parameter on the commandline, of course. RealVNC also supports configuration of the VNC service through various configuration files. However, confusingly, config for "Xvnc" (the underlying RealVNC server used on UNIX systems that communicates with X, and which needs to take the geometry config parameter) has to go in a different location from the "normal" RealVNC config files; either /etc/vnc/config.custom for server-wide settings, or ~/.vnc/config for user-specific settings. These are wholly separate from the other RealVNC config files and finding this out was pretty tough. As I said, the docs on this are terrible. So, in order to not have to specify resolution at the commandline, create the file /etc/vnc/config.custom and give it the following contents:
-geometry 1280x1024
Then when you run:
> sudo -u [someuser] vncserver-virtual
... the resolution for that VNC server instance will default to 1280x1024.
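The per-user equivalent is a ~/.vnc/config file with the same one-option-per-line format. Since the question title also asks about colour depth: Xvnc accepts a -depth option in the same way (the values below are examples, not recommendations):

```
# ~/.vnc/config -- per-user Xvnc options, one per line
-geometry 1280x1024
-depth 24
```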
I assume this config file will also be used by the daemon vncserver-virtuald as well, for each of the vncserver-virtual instances it spawns (in fact it may be the only way to set the default resolution for the daemon), but as the daemon requires an enterprise license to use and I'm only using the free license, I couldn't test it.
Whew!
By the way, I mentioned RandR earlier. That's another way of getting a different resolution/geometry when connecting to the RealVNC server on UNIX systems. It only works when the RealVNC server is running in "Virtual Mode". It also allows dynamic changing of resolutions during the VNC session, and is somewhat better documented than the -geometry parameter stuff I mentioned above. :-) When this config setting is set (and it goes in the "normal" set of RealVNC config files, not those separate ones for Xvnc... go figure), you should be able to use the xrandr command from within the VNC session to change the resolution dynamically to one of the resolutions specified in the RandR setting.
| Configuring default resolution and color depth for RealVNC's vncserver-x11-serviced? |
1,376,662,415,000 |
I want to use avahi on a system with a read-only rootfs where /etc is not writable.
I can start avahi-daemon with the -f option to specify a non-standard location for the avahi-daemon.conf file (default location is /etc/avahi/avahi-daemon.conf). However I can't find any way to specify a non-standard location for the service definitions (default location is /etc/avahi/services). Is there any option for this ?
|
It seems that Avahi does not provide any configuration option for searching for service definitions in a non-standard location (other than rebuilding with a custom --prefix, but that obviously has other implications). In case someone else needs this, these are the options I've found:
Symlink /etc/avahi/services to a different directory. For this to work, the avahi daemon must be started with the --no-chroot option; otherwise it will not be able to reach out of the chroot jail.
Bind mount a different directory on /etc/avahi/services. This does not require that --no-chroot is used.
Both options work fine.
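A sketch of both options (paths are hypothetical, and everything is done against a scratch directory so it can be tried without touching the real /etc; the bind mount is left as a comment since it needs root):

```shell
# Scratch layout standing in for the real filesystem.
root=$(mktemp -d)
mkdir -p "$root/etc/avahi" "$root/data/avahi-services"

# Option 1: make /etc/avahi/services a symlink to the writable location.
# On the real system, start the daemon with --no-chroot for this to work.
ln -s "$root/data/avahi-services" "$root/etc/avahi/services"

# A service file dropped in the alternate directory is now visible
# at the standard path:
echo '<service-group/>' > "$root/data/avahi-services/test.service"
cat "$root/etc/avahi/services/test.service"

# Option 2 (needs root, works with the default chroot):
#   mount --bind /data/avahi-services /etc/avahi/services
```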
| Using a non-standard location for avahi services |
1,376,662,415,000 |
Since at least 2.6 kernels, Kconfig offers the option CONFIG_X86_RESERVE_LOW, described as the "Amount of low memory, in kilobytes, to reserve for the BIOS".
(Starting from physical address 0 as I understand it and ranging from 4K to 640K)
Booting on my system, my logs inform me close to the beginning of the boot process :
BIOS-provided physical RAM map:
BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
From which I infer that the BIOS is telling the kernel that the very first 0x9ec00 (~640K) bytes of ram are usable. (not reserved)
A couple of lines further, I can read :
e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
That I understand as a consequence of my setting : CONFIG_X86_RESERVE_LOW = 4K
But, considering the BIOS itself asserting that the 0-0x9ebff range is usable, what is the point for the kernel to "reserve" whatever amount of low memory < ~ 640K for the BIOS ?
|
You should see a longer help text for this config option. It offers two reasons.
config X86_RESERVE_LOW
int "Amount of low memory, in kilobytes, to reserve for the BIOS"
default 64
range 4 640
help
Specify the amount of low memory to reserve for the BIOS.
The first page contains BIOS data structures that the kernel
must not use, so that page must always be reserved.
[snip]
There is a similar comment in the code:
* A special case is the first 4Kb of memory;
* This is a BIOS owned area, not kernel ram, but generally
* not listed as such in the E820 table.
The traditional BIOS would use the first 1280 bytes (0x500). Linux allocates RAM in units of the MMU page size (4096 bytes). OSDev points out -
After all the BIOS functions have been called, and your kernel is loaded into memory somewhere, the bootloader or kernel may exit Real Mode forever (often by going into 32bit Protected Mode). If the kernel never uses Real Mode again, then the first 0x500 bytes of memory in the PC may be reused and overwritten.
Linux is not generally able to call into the BIOS. However it may do so in a few terrifying moments: early boot, shutdown, and resume from sleep mode. If your system was booted using UEFI, then as far as Linux can tell there is no BIOS it can call.
Also, reserving the first page means that successful physical memory allocations never return the value 0. C programming traditionally reserves the address 0 to represent a "NULL pointer". We can see this reflected in memblock_phys_alloc_range(). At this point, changing it seems unlikely to repay the effort (and the risks :-).
* Return: physical address of the allocated memory block on success,
* %0 on failure.
*/
phys_addr_t __init memblock_phys_alloc_range(
Here is the second reason:
By default we reserve the first 64K of physical RAM, as a
number of BIOSes are known to corrupt that memory range
during events such as suspend/resume or monitor cable
insertion, so it must not be used by the kernel.
You can set this to 4 if you are absolutely sure that you
trust the BIOS to get all its memory reservations and usages
right. If you know your BIOS have problems beyond the
default 64K area, you can set this to 640 to avoid using the
entire low memory range.
If you have doubts about the BIOS (e.g. suspend/resume does
not work or there's kernel crashes after certain hardware
hotplug events) then you might want to enable
X86_CHECK_BIOS_CORRUPTION=y to allow the kernel to check
typical corruption patterns.
Leave this to the default value of 64 if you are unsure.
The safest assumption is that this could apply to UEFI firmware as well, just as it did to BIOS :-).
Since v3.9, the extra low reserve is
not shown in the kernel log messages. It is also not shown in /proc/iomem. The kernel only shows the first 4k reserved, even though the rest of the memory should still be reserved. It is just not added in the e820 map. It is added to a different list instead. The patch for this change is here: x86, mm: Move reserving low memory later in initialization.
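For reference, this is roughly what the top of /proc/iomem looks like on such a system (sample output; addresses vary from machine to machine, and only the first page shows as reserved even with the default 64K setting):

```shell
# Sample of the first /proc/iomem lines
# (on a live system: sudo head -n 3 /proc/iomem)
iomem_sample='00000000-00000fff : Reserved
00001000-0009ebff : System RAM
0009ec00-0009ffff : Reserved'

# The extra CONFIG_X86_RESERVE_LOW pages are absent from this map;
# since v3.9 they live on a separate memblock list instead.
printf '%s\n' "$iomem_sample" | head -n 1
```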
If you want to find out more about the extra reservation, and the tale of woe that required it, here are the patch messages:
x86: add DMI quirk for AMI BIOS which corrupts address 0xc000 during resume
Alan Jenkins and Andy Wettstein reported a suspend/resume memory
corruption bug and extensively documented it here:
http://bugzilla.kernel.org/show_bug.cgi?id=11237
The bug is that the BIOS overwrites 1K of memory at 0xc000 physical,
without registering it in e820 as reserved or giving the kernel any
idea about this.
Detect AMI BIOSen and reserve that 1K.
We paint this bug around with a very broad brush (reserving that 1K
on all AMI BIOS systems), as the bug was extremely hard to find and
needed several weeks and lots of debugging and patching.
The bug was found via the CONFIG_X86_CHECK_BIOS_CORRUPTION=y debug
feature, if similar bugs are suspected then this feature can be
enabled on other systems as well to scan low memory for corrupted
memory.
x86: add X86_RESERVE_LOW_64K
This bugzilla:
http://bugzilla.kernel.org/show_bug.cgi?id=11237
Documents a wide range of systems where the BIOS utilizes the first
64K of physical memory during suspend/resume and other hardware
events.
Currently we reserve this memory on all AMI and Phoenix BIOS systems.
Life is too short to hunt subtle memory corruption problems like this,
so we try to be robust by default.
Still, allow this to be overriden: allow users who want that first 64K
of memory to be available to the kernel disable the quirk, via
CONFIG_X86_RESERVE_LOW_64K=n.
x86, bios: By default, reserve the low 64K for all BIOSes
The laundry list of BIOSes that need the low 64K reserved is getting
very long, so make it the default across all BIOSes. This also allows
the code to be simplified and unified with the reservation code for
the first 4K.
This resolves kernel bugzilla 16661 and who knows what else...
Bug 16661 - Corrupted low memory
[...] It means we should add his BIOS (dmidecode info please) to the blacklist bad_bios_dmi_table in arch/x86/kernel/setup.c. However, the bottom line is that 64K is such a small amount of memory and the list by now covers such a vast number of existing BIOSes, that we should just make it unconditional.
As far as I know, Windows 7 actually reserves all memory below 1 MiB to avoid BIOS bugs.
| What is the point of the kernel reserving CONFIG_X86_RESERVE_LOW memory for the BIOS? |
1,376,662,415,000 |
(On Raspberry Pi Zero W, kernel 4.14.y) It seems the wireless adapter chip isn't a device in the /dev fs, but is the name of something that 'ifconfig' knows about. I understand that this is an artifact from Berkeley Sockets.
It is hardware, I assume it must be mentioned in the device tree -- to cause some driver to be loaded, but it must not create an entry in /dev (devfs).
Where/how does Sockets find this device that is not a device?
|
In Linux, network interfaces don't have a device node in /dev at all.
If you need the list of usable network interfaces e.g. in a script, look into /sys/class/net/ directory; you'll see one symbolic link per interface. Each network interface that has a driver loaded will be listed.
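For example, a short shell loop over that directory enumerates every registered interface (each entry is a symlink into sysfs; note there is still no /dev node involved):

```shell
# Print one name per registered network interface, e.g. lo, eth0, wlan0.
for iface in /sys/class/net/*; do
    basename "$iface"
done
```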
Programmatically, you can use the if_nameindex() syscall: see this answer on Stack Overflow.
Also, note that /dev is the device filesystem.
The device-tree has a specific different meaning: it is a machine-readable description of a system's hardware composition. It is used on systems that don't have Plug-and-Play capable hardware buses, or otherwise have hardware that cannot be automatically discovered. As an example, Linux on ARM SoCs like Raspberry Pi uses a device tree.
The boot sequence of a RasPi is quite interesting: see this question on RasPi.SE.
In short, at boot time, under control of the /boot/start.elf file, the GPU loads the appropriate /boot/*.dtb and /boot/overlays/*.dtbo files before the main ARM CPU is started. The *.dtb file is the compiled device tree in binary format. It describes the hardware that can be found on each specific RasPi model, and is produced from a device tree source file (.dts) which is just text, formatted in a specific way.
The kernel's live image of the device-tree can be seen in: /sys/firmware/devicetree/base Per Ciro Santilli, it can be displayed in .dts format by:
sudo apt-get install device-tree-compiler
dtc -I fs -O dts /sys/firmware/devicetree/base
You can find the specification of the device tree file format here. The specification is intended to be OS-independent. You may also need the Device Tree Reference as clarification to some details.
So, the answer to your original question is like this:
the Berkeley Sockets API gets the network interface from the kernel
the kernel gets the essential hardware information from the device tree file
the device tree file is loaded by the GPU with /boot/start.elf according to configuration in /boot/config.txt
the device tree file was originally created according to the hardware specifications of each RasPi model and compiled to the appropriate binary format.
The device tree scanning code is mostly concerned about finding a valid driver for each piece of hardware. It won't much care about each device's purpose: that's the driver's job.
The driver uses the appropriate *_register_driver() kernel function to document its own existence, takes the appropriate part of the device tree information to find the actual hardware, and then uses other functions to register that hardware as being under its control. Once the driver has initialized the hardware, it uses the kernel's register_netdev() function to register itself as a new network interface, which, among other things, will make the Sockets API (which is just another interface of the kernel, not an independent entity as such) aware that the network interface exists.
The driver is likely to register itself for other things too: it will list a number of ethtool operations it supports for link status monitoring, traffic statistics and other low-level functions, and a driver for a wireless NIC will also use register_wiphy() to declare itself as a wireless network interface with specific Wi-Fi capabilities.
The Linux TCP/IP stack has many interfaces: the Berkeley Sockets API is the side of it that will be the most familiar to application programmers. The netdev API is essentially the other, driver-facing side of the same coin.
| How does Linux find/configure something like 'wlan0' when it is not a device that appears in /dev? |
1,376,662,415,000 |
Say, I have custom kernel from my distribution, how could I get list of all options the kernel was build with?
It's possible to get them by reading the config file from the kernel package in the vendor's repo, but is there any other way? I mean ways to get that information from the kernel itself, maybe from procfs?
|
In addition to what @Stephen Kitt said, at least on my Debian system you can find the information in:
/boot/config-<version>
Where version, in my case, is:
3.16.0-4-686-pae
So, issuing:
less /boot/config-3.16.0-4-686-pae
Spits out the kernel configs in a long list!
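Putting the two sources together, here is a small sketch (the helper name print_kernel_config is made up) that prefers the on-disk copy and falls back to /proc/config.gz, which only exists when the kernel was built with CONFIG_IKCONFIG_PROC:

```shell
# Hypothetical helper: print the build config for kernel version $1.
# $2 optionally points at a different boot directory (handy for testing).
print_kernel_config() {
    bootdir=${2:-/boot}
    if [ -r "$bootdir/config-$1" ]; then
        cat "$bootdir/config-$1"
    elif [ -r /proc/config.gz ]; then
        gunzip -c /proc/config.gz
    else
        echo "no config found for kernel $1" >&2
        return 1
    fi
}

# e.g.: print_kernel_config "$(uname -r)" | grep '^CONFIG_NO_HZ'
```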
| How to determine the options Linux kernel was build with? [duplicate] |
1,376,662,415,000 |
Where do I enable the CONFIG_NO_HZ_FULL kernel configuration? Is it something I can set in conf files or is it something that has to be enabled when I build the kernel?
I am using CentOS with an upgraded kernel 3.10.
|
If you would like to check if an option is configured into your current kernel, you can probably get the config via gunzip -c /proc/config.gz > somefile. So to check this one:
gunzip -c /proc/config.gz | grep HZ_FULL
You can search for options when configuring the kernel with make menuconfig via / -- the other TUI config apps should have a similar feature, and I presume the GUI ones have something in a menu. The forward slash is a standard *nix-ish hotkey for searching, used in (e.g.) the standard pager, less.
Anyway, a quick check of this for 3.13.3 turned up five different options, the first of which is NO_HZ_FULL, and that option is set in the menuconfig hierarchy (the other config interface apps use the same one, I believe) by General setup->Timers subsystem->Timer tick handling.
Note that some options have prerequisites and will not appear unless those are satisfied. You can untangle these by looking at the output from the search, which indicates how your prereq values are currently set (in square brackets, e.g. [=y]) and uses logical operator notation:
! indicates the option must not be set (so you would want [=n])
&& indicates the preceding requisite and the next one must be set.
|| indicates the preceding requisite or the next one must be set.
Conversely, some options are required by other ones which have been selected and clues about that will be in the search output too.
| How I can enable Linux full tickless mode using CONFIG_NO_HZ_FULL? |
1,376,662,415,000 |
I'm having an issue with the exclusion of users from the match group statement of my sshd configuration.
I already have a user exclusion declared like this
Match Group sftpusergroup User *,!"sftp_user"
This only started to work after I've put the user inside "", I suspect it is because of the _ on the username.
Now I need to add a second user to this exclusion but when I do it all the users get the following error while trying to connect to the server
16:23:56 Error: Network error: Connection refused
16:23:56 Error: Could not connect to server
I've added the second exclusion using
Match Group sftpusergroup User *,!"sftp_user",!"sftp_user_2"
Also tried to make it with
Match Group sftpusergroup User *,!"sftp_user",!sftp_user_2
But always with the same result. Has anyone experienced this behavior before?
|
I haven't seen this behaviour before, but looking at the man ssh_config I see:
A pattern-list is a comma-separated list of patterns. Patterns within
pattern-lists may be negated by preceding them with an exclamation
mark ('!'). For example, to allow a key to be used from anywhere
within an organisation except from the ''dialup'' pool, the following
entry (in authorized_keys) could be used:
from="!.dialup.example.com,.example.com"
Perhaps this will work:
Match Group sftpusergroup User "*,!sftp_user,!sftp_user_2"
by putting the entire pattern-list in quotes. Note that sshd refuses to start (or reload) when it cannot parse a Match line, which is why every user suddenly got "Connection refused"; running sshd -t after editing the file lets you catch such syntax errors before restarting the service.
| Excluding users from Match Group on SSH |
1,376,662,415,000 |
I would like to configure bash so that when I execute command (preferably from a list of commands, not any command) without an argument, it takes the argument of previous command.
So, for example I type emacs ~/.bashrc, then when I enter source, bash executes source ~/.bashrc. I know it's possible but I don't know where to look for such options.
|
You can press Space then Meta+. before pressing Enter. This has the advantage that you can use it even with commands that make sense when applied to no argument. For source, use . to type less.
If you're old-school, you can use !^ instead to recall the first argument from the previous command, or !$ to recall the last argument, or !* to recall all of them (except the command name).
You can get exactly the behavior you describe by writing functions that wrap around each command. The last argument from the previous command is available in the special parameter $_.
make_wrapper_to_recall_last_argument () {
  for f do
    eval 'function '"$f"' {
      if [ $# -eq 0 ]; then set -- "$_"; fi
      command '"$f"' "$@"
    }'
  done
}
make_wrapper_to_recall_last_argument source .
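The $_ parameter on its own can be demonstrated without the wrappers; this is bash-specific but works in scripts as well as interactively (the directory name is just an example):

```shell
# $_ expands to the last argument of the previous command,
# so cd receives the directory mkdir just created.
mkdir -p "${TMPDIR:-/tmp}/lastarg_demo"
cd "$_"
pwd
```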
| Configure bash to execute command with last argument when no argument was provided |
1,376,662,415,000 |
So I made a dedicated Samba file server on my Debian(3.2) machine. I have had great success accessing it from both Windows and Unix. I can SSH into it on the local network.
When I try to SSH into it via the public IP address, it says connection refused.
I would like to be able to ssh into it remotely, directing into the Samba share. How would I go about doing this? I hear I might have to port forward? Do I need to change anything in the smb.conf file?
Here's my sshd_config file:
# Package generated configuration file
# See the sshd_config(5) manpage for details
# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes
# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 768
# Logging
SyslogFacility AUTH
LogLevel INFO
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys
# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes
# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no
#MaxStartups 10:30:60
#Banner /etc/issue.net
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes
|
Scriptonaut, your problem probably has nothing to do with Samba; it has to do with port forwarding/NAT. If your Samba-serving Debian computer sits in a LAN behind a router, you need the router configured to forward requests on some of its ports to your Samba machine:
First, I'll explain how outgoing connections work through a router. When two machines speak TCP/IP, each machine (source and destination) is addressed with an IP/port pair, so the connection is determined by two pairs: source IP/port and destination IP/port.
When you open a tab in Mozilla and access Google on your 192.168.1.2 machine, it sends IP packets to the router with its own source address (IP=192.168.1.2) and an arbitrary outgoing TCP port number allocated for that browser tab (like 43694), asking the router to transfer those packets to Google's machine at a certain IP on port 80, because 80 is the standard port for incoming HTTP connections (you can see the list of standard TCP ports in the /etc/services file on Linux). The router allocates a port of its own at random (e.g. 12345), replaces the source IP/port pair in those packets with its own WAN IP (74.25.14.86) and port 12345, and remembers that if it gets a response on port 12345 from Google, it should automatically transfer that response back to 192.168.1.2, port 43694.
Now, what happens when an outer machine wants to access your server?
When you try to access your SAMBA server from the outer machine, it sends IP packets to your WAN IP=74.25.14.86, port 22 (because 22 is the standard TCP port for listening to SSH connections; again, see the /etc/services file on Linux). Your router receives those packets. By default, firewalls on routers are configured to block all incoming connections to any port unless an outgoing connection was bound to that port (so, when you were accessing Google in the previous case, the router didn't block Google's response to port 12345 of itself, because it remembered that your 192.168.1.2 initiated the connection to Google and the response from Google should come to port 12345). But it will block attempts to initiate connections from the outside world to port 22 of it, because no forwarding rule maps port 22 to a machine on the LAN.
So, what you need to do is configure your router to forward all outside connections to its port 22 on to port 22 of your 192.168.1.2. This can be done in the web interface of hardware routers; the menu option you need is usually called "Port forwarding" or "NAT (network address translation)".
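To tell "connection refused" (nothing listening or forwarded) apart from a working port, bash's /dev/tcp can serve as a quick probe when no other tools are installed (probe is a made-up helper; substitute your real WAN address and port when testing from outside):

```shell
# Bash-only TCP port probe: prints "open" or "closed".
probe() {
    if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# From an outside machine you would run, e.g.:  probe 74.25.14.86 22
probe 127.0.0.1 1    # port 1 is almost certainly unused
```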
| How do I SSH into my Samba file server? |
1,376,662,415,000 |
I noticed the error this morning, but I don't think I have changed anything last night, so I am very confused right now. Perhaps I updated some utilities on my system and it somehow broke the back compatibility. Basically I got a math: Error: Missing operator error when using tab completion.
Say I type fish, and hit Tab to get the suggestions like fish_config and fish_add_path (here is an asciinema screencast in case you want to see it in action: https://asciinema.org/a/L3xr32eVMGHuCY0Gjr19gFzCu)
[I] ~ $ fishmath: Error: Missing operator
'Wed Dec 31 18:00:00 CST 1969 - 1655913830'
^
[I] ~ $ fish_config
fish (command)
fish_add_path
fish_breakpoint_prompt
fish_clipboard_copy
…and 29 more rows
The tab completion does work, but the error looks very annoying. Looks like I am trying to evaluate a data string or something. How do I diagnose the bug?
I am on macOS Monterey. Here is my ~/.config/fish/config.fish.
set -px PATH /opt/local/bin /opt/local/sbin
set -px PATH $HOME/.local/bin
set -px PATH $HOME/bin
set -px PATH /Applications/MacPorts/Alacritty.app/Contents/MacOS
set -px PATH $HOME/Foreign/drawterm
set -px PATH $HOME/google-cloud-sdk/bin
set -x XDG_CONFIG_HOME $HOME/.config
set -x PIPENV_VENV_IN_PROJECT 1
set -x PLAN9 /usr/local/plan9
set -px PATH $PLAN9/bin
if test -e $HOME/.config/fish/sensitive.fish
source $HOME/.config/fish/sensitive.fish
end
if status is-interactive
# Commands to run in interactive sessions can go here
alias vi='/opt/local/bin/nvim'
set -gx EDITOR /opt/local/bin/nvim
source /opt/local/share/fzf/shell/key-bindings.fish
end
set -g fish_key_bindings fish_hybrid_key_bindings
alias matlab='/Applications/MATLAB_R2021b.app/bin/matlab -nodisplay'
zoxide init fish | source
direnv hook fish | source
# The next line updates PATH for the Google Cloud SDK.
if [ -f '/Users/qys/google-cloud-sdk/path.fish.inc' ]; . '/Users/qys/google-cloud-sdk/path.fish.inc'; end
|
The error goes away after I remove the line set -px PATH $PLAN9/bin. I guess it was because I accidentally shadowed some system utilities with their counterparts from Plan 9 from User Space.
Another workaround is to use set -ax PATH $PLAN9/bin instead. By using -a, the directory $PLAN9/bin is appended to $PATH (as opposed to prepended when using -p), so that the commands already present in $PATH take precedence over the Plan 9 ones.
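To see what a PATH change like that shadows, you can list every match in lookup order; the first hit is what actually runs (shown with bash here, though fish has its own type -a with similar output):

```shell
# With $PLAN9/bin prepended, Plan 9's date would appear first in this
# list -- and its output is what confused fish's math builtin.
type -a date
```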
| Fish shell reports "math: Error: Missing operator" on tab completion |
1,376,662,415,000 |
I have come across a peculiar issue while using ansible. The problem is very strange and dangerous. I have a written a code to insert data in a particular section of a file i.e to add line after [database] in say /etc/cinder/cinder.conf.
The problem is I have noticed sometimes it adds the content properly after the tag [database] , but sometimes it gets confused by seeing a line like # put ur infore here for [database] in the file and adds our required line below it instead of where it should actually put it.
- name: Adding Entries in "/etc/cinder/cinder.conf"
lineinfile:
dest: "/etc/cinder/cinder.conf"
insertafter: "{{ item.inserts }}"
state: present
line: "{{ item.lines }}"
with_items:
- { inserts: '\[database\]', lines: 'rpc_backend = rabbit' }
This situation is quite dangerous in a production environment! How can I add the data correctly?
|
To avoid matching in a comment, anchor your regexp to the beginning of the line:
- { inserts: '^\[database\]', lines: 'rpc_backend = rabbit' }
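Since cinder.conf is INI-style, another option worth mentioning (a sketch, not part of the original answer) is Ansible's ini_file module, which addresses the [database] section natively and avoids regex anchoring altogether:

```yaml
- name: Set rpc_backend under [database]
  ini_file:
    path: /etc/cinder/cinder.conf
    section: database
    option: rpc_backend
    value: rabbit
```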
| inserting data into a particular section of a configuration file with ansible |
1,376,662,415,000 |
In various blogs explaining the terminal multiplexer tmux and Git repositories containing a configuration file tmux.conf I find the following two lines (possibly with a varying prefix key):
set -g prefix2 C-a
bind C-a send-prefix -2
But what I couldn't find an answer for is why the second line is needed. As to my understanding the first line already binds the given key as the secondary prefix. I also tried to configure tmux with only the first line present and it seems to work. So why should it be bound a second time to explicitly send the secondary prefix key again?
|
The second line means Ctrl-A-Ctrl-A sends a literal Ctrl-A input to the program (send-prefix). If you don't have a use for that, you don't need it, though it also doesn't hurt. It isn't binding the key a second time.
One use case for this is running tmux or screen inside tmux.
| Binding prefix key in tmux twice really needed? |
1,376,662,415,000 |
I've noticed that some Linux configuration files (e.g. /etc/samba/smb.conf) expect you to enter the actual settings (key value pairs) in a particular "section" of the file such as [global].
I'm looking for a terminal tool/command which allows you to append lines to a specific section of a specific configuration file.
Example:
configadd FILE SECTION LINE
configadd /etc/samba/smb.conf '[global]' "my new line"
|
You can do the task by sed directly, for example:
sed '/^\[global\]/a\my new line' /etc/samba/smb.conf
NOTE: This is not a complete solution, because the line may already be present in the config. So you should first test whether the line is there.
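Building on the sed one-liner above, here is a sketch of an idempotent version (the configadd name comes from the question; note the presence check is file-wide rather than per-section, and sed -i is the GNU form):

```shell
configadd() {
    # $1 = file, $2 = section name without brackets, $3 = line to append
    grep -qxF "$3" "$1" || sed -i "/^\[$2\]/a $3" "$1"
}

# Demo on a scratch file:
conf=$(mktemp)
printf '[global]\nworkgroup = WORKGROUP\n' > "$conf"
configadd "$conf" global "my new line"
configadd "$conf" global "my new line"   # second call is a no-op
cat "$conf"    # "my new line" appears once, right after [global]
```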
| Appending a line to a [section] of a config file |
1,376,662,415,000 |
I can't find a tutorial on how to set up a shared hosting server.
The part that I'm missing is the way privileges are set for the webmasters so that they don't see each other's directories.
Previous post:
How OVH can configure their SSH server to do this?
I'm trying to set up a multi-users web server and for that I'd like that each user can use both connect with SSH and SFTP but, most important, only sees their own directory. OVH managed to do that, but after 6 hours searching and trying (creating a chroot jail), I don't see how they did it. Maybe it's trivial, but I simply don't see it.
Here is what I can do when I log into my OVH account:
pwd gives me my home dir (/homez.52/creak)
/homez.52/creak is actually a symlink to /home/creak
I can cd into all the common Linux directories (/bin, /usr, /home, ..) but each time ls gives me this error: ls: cannot open directory .: Permission denied
I can browse all my files in both /homez.52/creak and /home/creak
How did they managed to do that? chroot? ACL?
Thanks
|
In a shared web-hosting environment, there are a couple issues that you need to address right off the bat.
Regarding directory permissions and only being able to access your files: what you want to do is set home directory permissions such that the "others" group has no permission whatsoever. Remember that eXecute permission is needed to cd into directories, but that by itself won't allow you to read their contents. Therefore, /home should be owned by root and have rwxr-x--x, so users can only "blindly" go to their home folder but not have a peek around and know how many users are in your system. It would look something like this (date, size etc omitted for clarity):
# ls -la /home
drwxr-x--x root root .
drwxr-xr-x root root ..
drwxr-x--- usr1 usr1 usr1
drwxr-x--- usr2 usr2 usr2
...
If you really don't want users to be able to read the contents of directories like /bin, etc. simply remove the "read" permission bit for the "others" group. It will not affect their ability to use the binaries contained within, provided they know the full route beforehand.
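The same layout can be rehearsed on a scratch directory (on the real system the paths would be /home and the per-user directories, and you would also chown them as shown in the listing above):

```shell
home=$(mktemp -d)
mkdir "$home/usr1" "$home/usr2"
chmod 751 "$home"          # rwxr-x--x: others may traverse, not list
chmod 750 "$home"/usr*     # rwxr-x---: others get no access at all
stat -c '%a %n' "$home" "$home/usr1"
```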
For SSH and FTP access, if you configure your filesystem permissions correctly, then any decent SSH or FTP implementation will already be secure. I recommend vsftpd for FTP, of course OpenSSH for SSH, but by no means I imply they're the only correct options available. Remember to tweak configuration options for those services (in particular, disallow root login through SSH, probably disallow password login for anyone sudo-capable, etc.)
The tricky part is configuring your web server correctly, especially if you have to run CGI scripts for dynamic websites. Everyone and their grandmother want PHP these days, and you really can't have /home/dumbuser/public_html/php_shell.php running as the same user that spawned your Apache/Nginx, right?
A possible solution here, if you're running the Apache web server, is to use the suexec module, which will have your CGI script run as the user that owns the executable file (think setuid bit). To allow the HTTP server access to the actual files, consider adding the user the server runs as (typ. www-data) to every user group on the system ("every user group" meaning every user that's using your shared environment, not every user account on the system).
Note that this is barely scratching the surface of all that must be done to properly configure and harden a shared server. The configuration files for each service running must be completely understood, and you will probably have to modify them to suit your needs. In particular, you'll probably have to spend a good week reading configuration options for your web server and trying it out on a development/testing environment.
| Is there a tutorial to set up a shared hosting server? |
1,376,662,415,000 |
It seems like the general pool of logging, changelog, readme, and config files in Linux is very inconsistently named. It always makes me wonder, why didn't *nix devs decide on a common file name schema long ago? I feel that it's unnecessarily annoying having to remember exactly how a file is named in regards to, let's say, configs.
For example, here is the configuration naming convention: We find this, this.cnf, this.conf, this.config, or neither, dropping into the underscore this_config realms, or even occasionally the Windowsy this.cfg style. Considering the diversity above, what is the most accepted approach for naming configs? The same can be said/asked for logs, changelogs, readmes, etc.
Most of the scripts and binaries seem well laid out with consistent schema, so what made configs take the hard road? Did something happen back in the day that divided the dev crowd into naming convention camps, or was "tertiary" consistency like this just never worth pursuing due to the open nature and free spirit approach of Linux?
|
Considering the diversity above, what is the most accepted approach for naming configs?
Whatever you want to call them. File extensions don't matter much beyond letting an admin know what the file probably is. A human is probably going to know that *.cfg and *.conf are both probably config files.
The *.cnf I've only ever seen with MySQL which is a one-off deviation that you'd have to ask the MySQL/MariaDB developers about.
Did something happen back in the day that divided the dev crowd into naming convention camps, or was "tertiary" consistency like this just never worth pursuing due to the open nature and free spirit approach of Linux?
It's probably not something most people would consider important. Most people go with *.conf nowadays (nginx, udev, apache, rsyslog/syslog-ng, etc), but it's possible *.cfg was preferred back when file paths could only have a few characters. It probably never changed for the same reason /etc/fstab was never renamed, most people who cared already know what the file in question does.
| Why are certain naming conventions so inconsistent in Linux? [closed] |
1,376,662,415,000 |
I want to edit the startup list of Linux Mint 17 without logging in. Where does the Startup Applications program keep its list of applications?
|
You can easily find the location of such configuration info by making a small change to one of the application settings (make sure you can undo it if necessary) and then do:
find ~ -type f -mmin -1
This finds all files under your home directory that changed in the last minute.
You will find that the files are under ~/.config/autostart/ (for each user)
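Each entry in that directory is a standard .desktop file, so you can edit the startup list by adding or deleting files there directly — no GUI (and no login as that user) required. A minimal hypothetical entry:

```
[Desktop Entry]
Type=Application
Name=My App
Exec=myapp --minimized
X-GNOME-Autostart-enabled=true
```

The Name and Exec values above are placeholders; Startup Applications itself just reads and writes these files.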
| Startup Applications config path |
1,376,662,415,000 |
I recently rented a Linux box and I intend to provision it using Ansible. I'd like to use Ansible as early as possible in the process. At this stage I only have a root account and the corresponding password.
Questions
Given the fact that my target box as well as OpenSSH-Server is running, would it be possible to copy my public key to the server and configure it with Ansible at this early stage?
Do I have to manually copy the pub key to the server and configure ssh for Ansible to be able to communicate with the server in the first place?
EDIT 1:
I should've mentioned, that after reading the docs I used the following syntax:
ansible <nameoftargetbox> -m ping -u root -k
The command basically means use user root ( -u ) and ask for password ( -k ). I am correctly prompted for the password, but I keep getting this response:
<nameoftargetbox>| FAILED => to use -c ssh with passwords, you must install the sshpass program
Needless to say that a common ssh root@<targetbox> works flawlessly.
After researching some more I found a solution in the Ansible github issue tracker. I'll post it as an answer to this question.
|
Appending -c paramiko to ansible <nameoftargetbox> -m ping -u root -k forces Ansible to use the Paramiko Python library internally, which apparently does not require sshpass to be installed. Please consult this closed issue on the Ansible github issue tracker.
EDIT1:
To answer the original question, yes it is possible to administer a Linux box using Ansible with the root account and the password. One could use the -c paramiko switch in the first place to copy the pubkey to the target and doing some more bootstrapping before switching to using full blown Ansible Playbooks.
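For context, "copying the pubkey" (whether done manually, via ssh-copy-id, or with Ansible's authorized_key module) boils down to a few filesystem operations on the target. A sketch, simulated here in a throwaway directory with fake key material:

```shell
# simulate what installing a public key does on the target host
target_home=$(mktemp -d)
mkdir -p "$target_home/.ssh"
chmod 700 "$target_home/.ssh"                 # sshd insists on tight permissions
printf '%s\n' 'ssh-ed25519 AAAAC3Nza...FAKE user@laptop' \
    >> "$target_home/.ssh/authorized_keys"
chmod 600 "$target_home/.ssh/authorized_keys"
ls -ld "$target_home/.ssh" "$target_home/.ssh/authorized_keys"
```

Note that sshd is picky here: a group- or world-writable .ssh directory or authorized_keys file will usually make it silently ignore the key.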
| Is it possible to start administering a linux box via Ansible with just the root account and its password? |
1,376,662,415,000 |
I keep getting errors on startup in Arch relating to my rc.conf:
failed to load module "wl"
failed to load module "lib80211"
etc. It lists all the modules in my /etc/rc.conf
This is my full rc.conf:
#
# /etc/rc.conf - configuration file for initscripts
#
DAEMONS=(syslog-ng dbus networkmanager crond .. etc. ..)
MODULES=(... wl... lib80211... nvidia-bl... openntpd... slim... acpid... pommed)
# Storage
#
# USEDMRAID="no"
# USELVM="no"
# Network
#
# interface=
# address=
# netmask=
# gateway=
HARDWARECLOCK="UTC-5"
TIMEZONE="US/Central"
I noticed this when pommed failed to start in X when I added sudo pommed & in my xinitrc.
Is the config file formated properly?
|
Your rc.conf is not properly configured; the ellipses (...) in the wiki are illustrative only. The rc.conf file is a shell script, and arrays shouldn't contain those dots.
Using that method is the deprecated way of loading modules. If you wish to continue to list them in this file, then you should use this format:
MODULES=(wl lib80211 nvidia-bl)
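Because rc.conf is sourced as a shell script, each literal ... copied from the wiki becomes an array element of its own, which the init scripts then dutifully try to modprobe — hence the "failed to load module" messages. A quick bash illustration:

```shell
# the dots are not ignored — they are array elements in their own right
MODULES=(... wl ... lib80211)
echo "number of 'modules': ${#MODULES[@]}"      # 4, not 2
for m in "${MODULES[@]}"; do
  echo "initscripts would run: modprobe $m"
done
```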
The correct way to load these modules is outlined on the Arch Wiki, by creating files under /etc/modprobe.d/. In the case of wl, as an example, you would create /etc/modprobe.d/wl and the file would contain:
# load broadcom module at boot
wl
Further, openntpd is a daemon, and should be listed in that array, not in the modules one—as should slim and acpid. I don't know what pommed is but I would check where that should be placed as well.
| Arch modules not loading on start? |
1,376,662,415,000 |
Is there a way to check varnish configuration syntax without actually using the new version?
I'm searching for a native varnish equivalent of apache2ctl configtest
|
You can ask Varnish to compile your VCL file to a temporary file. This is part of our script that loads a new configuration into our varnish servers:
tmpfile=$(mktemp)
trap 'rm -f $tmpfile' 0
varnishd -C -f /srv/web/fe/varnish/default.vcl > $tmpfile
echo
if [ ! -s $tmpfile ]; then
echo "ERROR: There are errors in the varnish configuration." >&2
exit 1
fi
This works because varnishd -C will not generate any output on stdout if there are errors in the VCL.
| varnish configtest |
1,318,598,595,000 |
I got an embedded system trying to create a ppp connection using a GSM modem. However the connection is never established and all I get is this error message in syslog:
Oct 12 08:38:48 pppd[451]: pppd 2.4.4 started by root, uid 0
Oct 12 08:38:59 pppd[451]: Connect script failed
Oct 12 08:39:00 pppd[451]: Exit.
I now need some hints how to proceed finding the cause of this problem. Where should I start looking?
|
Somewhere in your ppp setup (probably either in /etc/ppp/options or at the command line), you have an option called connect followed by a command used to setup the modem for a connection. It's usually a chat script. You need to find out why that command is failing. If it is a chat script, you can make it verbose by changing it from chat blah blah... to chat -v blah blah.
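For reference — not taken from the asker's setup — a minimal GSM connect chat script usually looks something like this (the APN "internet" and the dial string are placeholders that depend on the carrier):

```
ABORT "BUSY"
ABORT "NO CARRIER"
ABORT "ERROR"
"" AT
OK AT+CGDCONT=1,"IP","internet"
OK ATD*99#
CONNECT ""
```

It would be referenced from the ppp options with something like connect '/usr/sbin/chat -v -f /etc/ppp/chat-gsm'; with -v, each expect/send step lands in syslog, which pinpoints exactly where the script fails.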
Also for convenience, I like to add either the updetach or nodetach option to ppp so I don't have to keep checking the log.
| How to proceed solving ppp connection problems? |
1,318,598,595,000 |
I have recently started configuring screen; my .screenrc is included below. The problem is that if windows 0 & 1 (containing bash) are idle for about 10 minutes, they close, leaving only window 2 containing irssi. Have I done something wrong? Is there something I can do to stop this from happening? I have tried searching for similar problems or solutions, but I am finding it difficult to find anything relevant.
startup_message off
autodetach on
shell /bin/bash
defutf8 on
altscreen on
hardstatus alwayslastline
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{=kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B}%Y-%m-%d %{W}%c %{g}]'
defscrollback 30000
# Default screens
screen -t bash-0 0
screen -t bash-1 1
screen -t irssi 2 irssi
select 0
|
Is the TMOUT environment variable set (nothing to do with screen)? If it's set to 600, then bash will close the session after 600 seconds (10 minutes).
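To check, and to neutralize it for the current session (a sketch; a profile script will typically set it again at the next login, so the permanent fix is to remove or override that assignment):

```shell
echo "TMOUT=${TMOUT:-<unset>}"   # 600 would explain a 10-minute timeout
unset TMOUT                      # disable the auto-logout in this shell
echo "TMOUT=${TMOUT:-<unset>}"   # <unset>
```

If the administrator marked TMOUT readonly, unset will fail, and you would have to start bash with an overriding environment instead.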
| Configuring screen, window problems |
1,318,598,595,000 |
I'm running Linux Mint 9 gnome on a Toshiba Satellite A105-S4211
With this,
inxi -G
Graphics:  Card: Intel Mobile 945GM/GMS 943/940GML Express Integrated Graphics Controller
           X.Org: 1.7.6  Res: [email protected]
           GLX Renderer: Mesa DRI Intel 945GM GEM 20091221 2009Q4 x86/MMX/SSE2  GLX Version: 1.4 Mesa 7.7
And I want to know how to configure the monitor/graphic/video/driver/gamma settings.
What I really want to do is change the gamma settings on my computer. I was told that I can do this using the video card driver.
I want to know how to do that.
|
You should be able to adjust the gamma using the xgamma command.
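For example (the values are illustrative; the setting lasts only for the current X session, so put the command in a startup file such as ~/.xprofile to make it stick):

```
xgamma                                        # print the current gamma values
xgamma -gamma 0.8                             # set all three channels at once
xgamma -rgamma 1.0 -ggamma 0.9 -bgamma 0.9    # per-channel correction
```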
| How do I configure my graphics drivers? |
1,318,598,595,000 |
Like most of us, I have several machines: at home, at work, for travelling... etc. I mainly write papers or books while I code. But I'm tired of svn'ing, rsync'ing and so on, so I've decided to carry a pendrive with me, with my Ubuntu customizations (bash, emacs, ...) and at the end of the day, do a rsnapshot. My question is: how do I minimally run my home directory from a pendrive? What should I put in there?
Thanks for any input.
|
The answer is pretty simple :-)
You should put in there your documents you are working on and the dotfiles of the applications you use.
There's no such thing as a minimal set of files you need. If an application is missing its configuration file, it will usually create a fresh one, just as it does on first start.
Which files you will need depends on the applications you use, so you are the only one who can answer this.
If you are unable to trace some config files, keep an eye on the subdirectories of ~/.gnome2 or ~/.kde.
To tell the system where your new home directory is, you can either automount your pendrive to /home/username or simply change the location of your user's home directory in /etc/passwd to your pendrive's mountpoint.
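For the mount approach, an /etc/fstab entry along these lines would do (the device name is an example — a LABEL= or UUID= entry is more robust for removable media, and nofail keeps boot from complaining when the pendrive is absent):

```
/dev/sdb1   /home/username   ext4   user,nofail   0   0
```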
If this doesn't fit your question, please be more specific. :-)
| Carry-on Ubuntu Customization |
1,318,598,595,000 |
This is what I have in ~/.zshrc (actually a file sourced from my ~/.zshrc, see below).
#CHANGING DIRECTORIES
setopt CD_SILENT
setopt CDABLE_VARS
setopt AUTO_CD
#COMPLETION
setopt recexact autolist listambiguous menucomplete hashlistall globcomplete completeinword completealiases autoparamslash
#EXPANSION AND GLOBBING
setopt nomatch badpattern globstarshort rcexpandparam extendedglob nocaseglob numericglobsort markdirs
#HISTORY
SAVEHIST=8192
HISTSIZE=$(( 1.2 * SAVEHIST ))
HISTFILE="$ZDOTDIR/.zhistory"
setopt nohistbeep extendedhistory histverify sharehistory histallowclobber histreduceblanks histfcntllock histignoredups histignorealldups histsavenodups
#SCRIPTS AND FUNCTIONS
setopt multios aliasfuncdef localoptions localloops localtraps cbases cprecedences
#MISC
setopt transientrprompt nocheckjobs ignoreeof nobeep nolistbeep nonotify noclobber interactivecomments
Nothing works except the HISTFILE setting.
% setopt
interactive
monitor
shinstdin
zle
% ls .zhistory
.zhistory
However, if I use setopt autocd inside the terminal it works for that session. Here is an example:
% setopt
interactive
monitor
shinstdin
zle
% setopt autocd
% setopt
autocd
interactive
monitor
shinstdin
zle
If I add echo I AM BEING LOADED, EVEN THEN I FAIL TO LOAD ANY SETOPT COMMANDS to that .zshrc, upon startup, I do see:
I AM BEING LOADED, EVEN THEN I FAIL TO LOAD ANY SETOPT COMMANDS
% setopt
interactive
monitor
shinstdin
zle
That's with zsh 5.9 on GNU/Linux amd64.
The file is actually sourced with:
# FUNCTION TO SOURCE READABLE FILES
function src {
[[ -r "$1" ]] && source "$1"
}
if [[ -d ~/.config/.zshell/zload.d/ ]]; then
for FileToLoad in ~/.config/.zshell/zload.d/*.zsh(n); do
src "$FileToLoad"
done
unset FileToLoad
fi
unset -f src
|
Sorry guys I misunderstood the use of setopt localoptions.
Please note that I am learning so mistakes can happen.
That is the reason I was having the issue. That was a dumb mistake lol
Nothing was working and it was making me mad. So, I decided to revisit the basics at https://zsh.sourceforge.io/Doc/Release/Options.html
To debug the solution that I had concocted for myself, I turned on setopt xtrace to see whether those lines were being read. The trace showed that nothing was wrong with the shell and that every line was being read; from there, trial and error can lead you to the solution.
Once I realized that commenting out localoptions solved all my issues, I went and read what it does again and realized that my comprehension of it was wrong. To know what the option does, you can visit the above shared link.
Here, as setopt localoptions was done from within the src function, all option settings were made local to that function.
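A minimal zsh illustration of the trap (not the asker's exact files — a sketch):

```
src() {
  setopt localoptions   # from here on, option changes are local to src()
  setopt autocd         # ...so this evaporates when src() returns
}
src
setopt                  # autocd is not listed: it did not survive the function
```

The same applies to any file sourced from inside such a function: every setopt it performs is undone when the function returns.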
After the change, the prompt behaved as expected and the option settings showed up in the setopt output.
P.S-> I did not figure it out by myself actually, someone in the zsh forum helped me get there.
My zsh version is 5.9, as noted above.
| setopt does not work in .zshrc. Can someone tell me why? |
1,318,598,595,000 |
In Zsh, the following fails:
$ echo hi > /tmp/this/path/does/not/exist/out.txt
zsh: no such file or directory: /tmp/this/path/does/not/exist/out.txt
Obviously the problem is that > cannot create missing parent directories. I find this behavior very annoying, it should just create the dirs. How can I accomplish this?
With the likes of cp and mkdir it is possible to just alias the --parents option. However > cannot be aliased, as it is not a command. What can I do?
Ideally I would like to accomplish this in zsh, but I will accept "use a different shell" as an answer.
|
You can always create a:
create() { mkdir -p -- $1:h && cat > $1; }
And use:
echo something | create path/to/some/file
Or even:
create() {
local dest
for dest do
mkdir -p -- $dest:h || return
done
cat > "$@"
}
To be able to do:
echo hi | create some/file some/other/file
Another approach:
makeparents() {
mkdir -p -- $1:h
print -r -- $1
}
And:
echo hi > "$(makeparents path/to/some/file)"
(won't work for file names that end in newline characters)
You could also do it using zsh's dynamic named directory expansion:
redir-parent() {
[[ $1 = n ]] && [[ $2 = p:* ]] || return
local file=${2#p:}
mkdir -p -- $file:h
reply=($file)
}
zsh_directory_name_functions+=(redir-parent)
And then:
echo something > ~[p:path/to/some/file]
Where path/to/some would be created as part of the expansion.
In any case, in all of those, when path directory components are created, they will have default permissions as affected by the current umask.
In the cmd | create path/to/file version, cmd will be run even if /path/to can't be created or path/to/file can't be opened (and could end up being killed with a SIGPIPE).
In the other ones, the failed redirection will cancel the running of cmd. They also have the advantage of preserving cmd's exit status.
If not all directory components can be created or if the file can't be opened for writing, you could end up with some directories having been created even if the redirection itself failed. You may then want to do some manual cleanup.
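For comparison, a portable POSIX sh version of the simple create function above can use dirname in place of zsh's :h modifier — a sketch:

```shell
# portable sh equivalent: dirname instead of zsh's $1:h
create() { mkdir -p -- "$(dirname -- "$1")" && cat > "$1"; }

dir=$(mktemp -d)
echo hi | create "$dir/this/path/does/not/exist/out.txt"
cat "$dir/this/path/does/not/exist/out.txt"    # hi
```

Like the zsh versions, this inherits the current umask for any directories it creates.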
| Create parent directories when using shell redirect |
1,318,598,595,000 |
When I ssh into this one remote system I'm unable to modify PS1. However, while I'm ssh'd in, if I start a non-login Bash, then I'm able to modify PS1. Here's my console output:
dev ~ ❯ bash --login
dev ~ ❯ echo $PS1
dev \W ❯
dev ~ ❯ PS1="foobar: "
dev ~ ❯ echo $PS1
dev \W ❯
dev ~ ❯ bash
dev ~ ❯ PS1="foobar: "
foobar: echo $PS1
foobar:
foobar:
Here's the same output but with echo statements at the beginning and end of ~/.bash_profile, ~/.bash_login, ~/.profile, and ~/.bashrc:
dev ~ ❯ bash --login
bash_profile
bash_login
profile
bashrc
bashrc end
profile end
bash_login end
bash_profile end
dev ~ ❯ echo $PS1
dev \W ❯
dev ~ ❯ PS1="foobar: "
dev ~ ❯ echo $PS1
dev \W ❯
dev ~ ❯ bash
bashrc
bashrc end
dev ~ ❯ PS1="foobar: "
foobar: echo $PS1
foobar:
foobar:
On the system, the default PS1 appears to be getting set inside of /etc/bash.bashrc:
PS1='${ENV:-${ENVIRONMENT:-$(basename HOSTNAME)}} \W ❯ '
That file seems to be getting sourced from /etc/profile.
# If PS1 is not set, load bashrc || zshenv or set the prompt
# shellcheck disable=SC1091
if [ "${PS1-}" ]; then
if [ "${BASH-}" ] && [ "${BASH}" != '/bin/sh' ]; then
[ -f /etc/bash.bashrc ] && . /etc/bash.bashrc
# elif [ "${ZSH-}" ] && [ "${ZSH}" != '/bin/sh' ]; then
# [ -f /etc/zshenv ] && . /etc/zshenv
else
# lightning symbol \342\232\241
"${IS_ROOT}" && PS1='\[\342\232\241\] ' || PS1='❯ '
fi
fi
Note: In the end I'd like to be able to set PS1 inside of ~/.bashrc.
|
fra-san mentioned this in a comment above before I added this answer -- credit goes to him.
It's possible that something in ${PROMPT_COMMAND} is setting the prompt. I can reproduce your issue with:
function set_ps1() {
PS1="hi> "
}
$ PROMPT_COMMAND="set_ps1"
hi> PS1="hello "
hi>
In this case, when I try to set PS1 to "hello", the assignment does take effect, but then PROMPT_COMMAND runs and that function changes PS1 back before the prompt is displayed.
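On the affected system, inspecting and temporarily clearing the variable confirms the diagnosis (set_ps1 here just stands in for whatever the remote profile actually assigns):

```shell
PROMPT_COMMAND="set_ps1"                       # stand-in for the remote setup
echo "PROMPT_COMMAND is: $PROMPT_COMMAND"      # set_ps1
unset PROMPT_COMMAND                           # stop it for this session
echo "now: ${PROMPT_COMMAND:-empty}"           # empty
```

With PROMPT_COMMAND cleared, any PS1 you assign should finally stick; the permanent fix is to remove the assignment from whichever startup file sets it.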
| Why can't I change my PS1 in my bash login shell? |
1,318,598,595,000 |
I'm writing some integration tests, that test SSH connections between servers.
For the time being the tests are run from people's laptops. In order not to muck around in the user's (the user running the tests) ~/.ssh/config I create a temporary directory with a bespoke ./tmp/.ssh/config file just for the tests. Then I export HOME=/path/to/tmp. Unfortunately, I've found that openssh doesn't use $HOME to search for an ssh config or identity files.
This is ok if I'm ssh-ing directly to a host, because I can just explicitly set my config using the -F flag. However, if I'm ssh-ing through a bastion and I have a proxycommand, ssh does not pass that configuration file down to my proxycommand. So, if my bespoke ssh config uses a different default username (for example), that configuration won't be used for the proxycommand.
I "could" modify the proxycommand as well (to take an ssh config file as an argument), however, I'd like to know if it's possible to get openssh to look for the config/identity files in a different location just by use of environment variables (without having to pass the configuration file down to each subsequent downstream command). I can change my ssh-agent using SSH_AUTH_SOCK so I was hoping to be able to change the config file directory as well.
|
According to the source code, ssh gets the home directory from the password file and then, if it does not succeed, from the HOME environment variable.
What you can do is add an Include to every user's ~/.ssh/config, say ~/tmp/user/.ssh/config.
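For example, at the top of each user's ~/.ssh/config (the Include directive requires OpenSSH 7.3 or later; the path is just a suggestion):

```
Include ~/tmp/user/.ssh/config
```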
If the file to be included does not exist, ssh will not complain. But if it exists and is readable, it will include it. That should allow you to do the tests without messing too much with their files.
Notice that it poses a security risk. Anybody knowing those paths will be able to inject local configurations for other users if you don't secure them well.
| OpenSSH not respecting $HOME when searching for ssh config files |
1,318,598,595,000 |
I would want to use the common ntpd daemon to test whether time is kept by a virtual machine or not, without letting it actually adjust the clock.
I'm running Solaris 11.4 (Oracle's standard image for Intel) inside VirtualBox on a macOS system, and I can't get the time to synchronise properly. It struck me that the VM might well be using VirtualBox Guest Additions for doing this already (I don't know how this works) and that I might be upsetting the time keeping by running ntpd in the guest.
To test this, I thought I'd set up ntpd in in the Solaris VM to monitor a few public time servers, but somehow keep it from modifying the local clock. That way I could look at how the loopstats and peerstats logs looked over time to see whether the local clock actually kept good time.
The issue is that I can't find any hints about how to stop ntpd from adjusting the local clock.
I have wanted to do this in the past too on systems where I use openntpd (from OpenBSD) for actual time keeping. The ntpd daemon could then just sit in the background and monitor, without interfering. But I could not find any way to achieve this then either.
|
ntpd
Understanding that you must use ntpd, the only options AFAIK are:
disable ntp
As seen on the ntp.conf manual page there is the possibility of disabling the ntp feedback loop, or, in layman terms: remove the ability of calculating time corrections between time servers and the local clock. The ntp.conf line needed to activate such option is:
disable ntp
Note: when using this option the time that ntpd may give to other systems asking for a time reference could be wrong/off. Seems reasonable to use a line of deny to deny all queries for time from other systems unless you want to monitor time drift from an external system (use deny and allow the IP of the external system).
Note: It is not completely clear to me that the system clock is actually left to "run free" by ntpd. However, it is a documented option, so if ntpd fails to comply with what is documented it is a bug.
minsane
minsane minsane
This is the minimum number of candidates available to the clock selection algorithm in order to produce one or more truechimers for the clustering algorithm. If fewer than this number are available, the clock is undisciplined and allowed to run free.
This is done by setting a line (in ntp.conf) like:
tos minsane 100
Or some other high number (bigger than the servers available or used).
Note: it is not clear to me that the kernel drift value is reset to 0 to avoid that the clock slowly shifts in value. May be reasonable to additionally set the disable kernel to disable kernel discipline functions.
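Putting the pieces together, a monitor-only ntp.conf might look something like this (a sketch — the server names and the stats directory are examples):

```
disable ntp
disable kernel
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
statsdir /var/ntp/ntpstats/
statistics loopstats peerstats
```

With ntpd free-running, the offsets recorded in peerstats then show how the local clock drifts against the servers.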
Related
ntpq -pn
When an ntpd server is running, ntpq -pn reports how well the ntpd server is doing its job of keeping the system clock in sync. That is an alternative way to log the time difference.
ntpdate -q
The package ntpdate (which is tagged as deprecated) may be used to check the time difference with:
ntpdate -q 'pool.ntp.org' # marked as deprecated.
Use ntpdate -qu 'pool.ntp.org' so the command doesn't need root privileges to run (-u means "use unprivileged network ports", still, the executable has to be accessible to the user).
sntp
There is a simple program to query (not change if no -s or -S option is used):
sntp pool.ntp.org
rdate
The program rdate is able to show remote time (and local time):
rdate -np pool.ntp.org; date
Where -n means: use SNTP (RFC 2030) instead of the (default) RFC 868 time protocol; and -p means: only print the result without making any actual changes.
However, this program is limited to a resolution of whole seconds (not fractions), and it has no options on Solaris.
chrony
The replacement package of ntp (chrony) is able to execute a test of time difference without setting the system clock:
chronyd -Q 'pool pool.ntp.org iburst'
I believe that those are all methods to detect (without changing) the time difference between internet ntp time and system time.
| Using ntpd for monitoring time drift, but not for adjusting |
1,318,598,595,000 |
What does Load "fb" in Section "Module" of /etc/xorg.conf actually do?
Tried to RTFM and searching first.
|
Load "fb" is telling X to load the framebuffer module.
(II) Loading sub module "fb"
(II) LoadModule: "fb"
(II) Loading /usr/lib/xorg/modules//libfb.so
(II) Module fb: vendor="X.Org Foundation"
compiled for 1.4.2, module version = 1.0.0
ABI class: X.Org ANSI C Emulation, version 0.3
from the freedesktop.org xorg archives
The fb library is what is responsible for almost all of the software
rendering that your X Server might do.
You're probably mostly spending your time in fbComposite() and its
children, which is the Render extension software implementation. Some
drivers have hardware implementations of this, and we're working on
making this be the case for more hardware.
--
Eric Anholt anholt at FreeBSD.org
eric at anholt.net eric.anholt at intel.com
On most distros you can locate files like this:
$ locate libfb.so
/usr/lib64/xorg/modules/libfb.so
$ rpm -qf /usr/lib64/xorg/modules/libfb.so
xorg-x11-server-Xorg-1.19.5-5.el7.x86_64
And inquire into the package itself about the contents/purpose.
What's a framebuffer
So the next question might be, what's a framebuffer. For that look to wikipedia: Framebuffer:
A framebuffer (frame buffer, or sometimes framestore) is a portion of RAM containing a bitmap that drives a video display. It is a memory buffer containing a complete frame of data. Modern video cards contain framebuffer circuitry in their cores. This circuitry converts an in-memory bitmap into a video signal that can be displayed on a computer monitor.
In computers, buffers such as this, are used to directly map a region of memory to a display/screen, which has a driver that's monitoring the region. Anything placed into this location is picked up and rendered on the display/screen itself.
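A classic way to see this in action is to write arbitrary bytes straight into the framebuffer device from a text console (harmless, but it fills the screen with noise until the console redraws — run with care):

```
cat /dev/urandom > /dev/fb0
```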
For more on frame buffers, please refer to this U&L Q&A titled: What is a framebuffer device and is it required to obtain a higher resolution?.
Reference
What is libfb.so responsible for?
| What is `Load "fb"` in xorg.conf |
1,318,598,595,000 |
I'm using zsh version 5.3.1:
% zsh --version
zsh 5.3.1 (x86_64-pc-linux-gnu)
I'm trying to define a key binding, using the key sequence C-x r, to reload the configuration of zsh. Thanks to @Gilles, I included this code in my ~/.zshrc:
reread_zshrc () {
. ~/.zshrc
}
zle -N reread_zshrc
bindkey '^Xr' reread_zshrc
It works, except that when I hit C-x r, zsh complains with the errors:
stty: 'standard input': Bad file descriptor
stty: 'standard input': Bad file descriptor
dircolors: /home/user/.dircolors: Bad file descriptor
I can reproduce these errors with the following minimal zshrc:
stty -ixon
stty quit undef
eval "$(dircolors -b ~/.dircolors)"
reread_zshrc () {
. ~/.zshrc
}
zle -N reread_zshrc
bindkey '^Xr' reread_zshrc
Here are the reasons why I included the 2 stty commands, and the dircolors command in my zshrc.
stty -ixon prevents the terminal driver from interpreting C-s and C-q as terminal flow controls: by default, C-s freezes the terminal and C-q unfreeze it. It allows to use C-s and C-q in key bindings for the shell or the text editor, without freezing the terminal.
stty quit undef prevents the terminal driver from sending the SIGQUIT signal when C-\ is pressed. Again, it allows to use this key in a key binding without the foreground process to quit.
eval "$(dircolors -b ~/.dircolors)" asks the ls command to read its configuration from ~/.dircolors. It allows to customize the colors from the output of a ls command.
I suppose I need to protect these 3 lines from being re-sourced by zsh if they've already been executed in the current shell. But I don't know which condition to write:
if <stty and dircolors haven't been executed already>; then
stty -ixon
stty quit undef
eval "$(dircolors -b ~/.dircolors)"
if
Besides, I would like to better understand these error messages, because if I execute them in an interactive zsh shell, they don't cause any issue. Why do they only raise errors from this key binding.
|
In zle widgets, it seems zsh closes stdin. I suppose zsh wants to avoid commands in those widgets interfering directly with user input, but it would be more sensible to redirect stdin from /dev/null instead (that will be fixed in the next release).
When stdin (file descriptor 0) is closed, that means the first file a command opens becomes its stdin (as file descriptors are allocated from the first free one).
In dircolors, that triggers a bug. dircolors opens your ~/.dircolors, and then tries to make it its stdin without noticing it was already its stdin (because that's the fd open() returns). So, the dup2(0,0) (dup stdin onto itself) fails with a EBADF error which dircolors reports.
stty sets the settings of the terminal open on its stdin. Here, stdin is closed, so stty returns with an error.
Here, you can change your widget so it restores stdin to the terminal:
reread_zshrc () . ~/.zshrc < $TTY
But note that changing the tty settings from within a zle widget (though I don't know what your stty command does) is a bad idea as zle sets the tty in a special mode for line editing which you don't want to mess up with (and at the end of editing, the normal tty settings will be restored anyway, so the changes you're making will be lost).
So maybe instead you should make stdin /dev/null (as you don't really want to be doing things with the terminal there), but stty would still complain (as /dev/null is not a tty device), so you may also want to redirect stderr to /dev/null to hide those error messages (though it would hide all error messages):
reread_zshrc() . ~/.zshrc < /dev/null 2> /dev/null
| How to test if the current zsh shell has already executed `dircolors` and `stty` commands? |
1,318,598,595,000 |
I often find myself in trouble when I try to edit configuration files from the ~/.config/ folder. I expect any change I make to them to be effective, at least after restarting the application or having logged out/in again.
But they sometimes don't. Here for example, I try to edit ~/.config/nautilus/accels, changing the line:
; (gtk_accel_path "<Actions>/DirViewActions/Trash" "<Primary>Delete")
to:
; (gtk_accel_path "<Actions>/DirViewActions/Trash" "Delete")
After I close Nautilus, then restart it, or log out-then-in, the "Delete" key still doesn't do anything. More disturbing, the output of head ~/.config/nautilus/accels is:
; nautilus GtkAccelMap rc-file -*- scheme -*-
; this file is an automated accelerator map dump
;
; (gtk_accel_path "<Actions>/DirViewActions/Start Volume" "")
; (gtk_accel_path "<Actions>/DirViewActions/Trash" "<Primary>Delete")
; (gtk_accel_path "<Actions>/DirViewActions/Save Search" "")
; (gtk_accel_path "<Actions>/DirViewActions/Location Poll" "")
; (gtk_accel_path "<Actions>/DirViewActions/Set As Wallpaper" "")
; (gtk_accel_path "<Actions>/DirViewActions/New Folder with Selection" "")
; (gtk_accel_path "<Actions>/ShellActions/Tab9" "<Alt>
just like I hadn't done anything! This means to me that some information is stored elsewhere in some way. What should I do, after having edited a file in ~/.config/, to make the changes effective?
|
; starts a comment. So a line starting with ; is ignored.
And probably nautilus overwrites the config file at close. So you should stop nautilus, delete the ; and start nautilus again.
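So with nautilus stopped (nautilus -q asks it to quit), the Trash line should end up uncommented, i.e.:

```
(gtk_accel_path "<Actions>/DirViewActions/Trash" "Delete")
```

Nautilus will then read — and keep — that binding the next time it starts.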
| Syntax of GTK applications' configuration files in ~/.config |
1,318,598,595,000 |
I'm trying to configure IP settings on my Raspberry Pi running Rasperian.
I edited /etc/network/interfaces to be:
auto lo
iface lo inet loopback
iface eth0 inet static
address 110.76.71.106
netmask 255.255.255.0
network 110.76.71.0
broadcast 110.76.71.255
gateway 110.76.71.1
dns-nameserver 143.248.1.177
allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp
After that, I came back to bash and typed 'ifconfig', and the result was like this:
eth0 Link encap:Ethernet HWaddr b8:27:eb:e0:70:ca
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
as far as I know, beneath the 'Link encap:Ethernet HWaddr b8:27:eb:e0:70:ca' line, there should be something like 'inet addr:110.76.71.106 Bcast:110.76.71.255 ...blahblah'.
What could I have done wrong?
P.S. When I'm doing this, I have not yet plugged the LAN cable into the Raspberry Pi. Could this be a reason why the expected ifconfig result doesn't show up?
|
As steeldriver notes in a comment, there is a typo. If that's not just a typo in your question, you need to fix that.
iface etho0 inet static
^
extra "o"
Also, for readability, traditionally they are indented and you don't actually need to specify the network and broadcast when the defaults are OK:
iface eth0 inet static
address 110.76.71.106
netmask 255.255.255.0
gateway 110.76.71.1
dns-nameserver 143.248.1.177
Once you've fixed that (or if that's not an error in the actual file) then you need to either reboot or do ifdown eth0; ifup eth0 to actually apply the network config. Also, you need an allow-hotplug eth0 or auto eth0 line to make it come up on boot.
| ifconfig not showing changed ip address |
1,318,598,595,000 |
I'm running tightvnc on a raspberry pi and looking at the log
/root/.vnc/hostname:5900.log
I see this line:
30/04/14 09:23:18 Listening for VNC connections on TCP port 11800
How can I change this so it listens on port 5900?
|
Tehehe, you got the scheme behind the command line wrong. The argument you give to tightvnc is the display number, not the port number. Display numbers correspond to port numbers in the way that
port = display + 5900
So display 0 would result in port 5900, and display 5900 in port 11800. Took me a while, too. ;)
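So to get TCP port 5900 you have to ask for display 0 (e.g. tightvncserver :0, assuming nothing else already owns display :0). The mapping is trivial:

```shell
# VNC convention: TCP port = display number + 5900
display=0
echo $((display + 5900))     # → 5900
display=5900
echo $((display + 5900))     # → 11800
```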
| How to make tightvnc listen for incoming connections on port 5900? |
1,318,598,595,000 |
I was playing with the apache configuration files after a system restore when I noticed something I have never really thought about too much. Here are the first lines of the default /etc/apache2/sites-available/default:
DocumentRoot /var/www

<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>

<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
    Order allow,deny
    allow from all
</Directory>
Since the root of a web server is usually /var/www, why is there the need to add a <Directory /> in here?
|
It could be that somewhere in your configuration you define a Directory outside of DocumentRoot (e.g. I store my static pages under DocumentRoot but have web applications in a separate directory outside it). By having <Directory /> in your configuration you define a reasonable default that is valid for every directory not covered by its own <Directory> stanza.
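As an illustration of why that default matters, a common hardening sketch (Apache 2.2 syntax, matching the file above) denies everything at the filesystem root and then selectively re-opens the document root:

```apache
# Sketch: lock down the filesystem root...
<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

# ...then re-open only the docroot
<Directory /var/www/>
    Options Indexes FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
```

With this pattern, a stray Alias or symlink pointing outside /var/www cannot accidentally expose arbitrary files.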
| What's the difference between <Directory /> and <Directory /var/www/> in apache? |
1,318,598,595,000 |
In ~/.kde/share/config/kdeglobals, the value for the default web browser starts with an exclamation mark. What is the purpose of the exclamation mark?
[General]
BrowserApplication[$e]=!sensible-browser
|
Let's just ask the source code. If you're not interested in the details, just skip to the end to see the result:
There is a KCM for setting the default applications. Let's look up its name:
$ kcmshell4 --list | ack -i default
componentchooser - Choose the default components for various services
NOTE: The following 5 steps are Gentoo specific, but could be applied to any other distribution or could be replaced by browsing through KDE's source repositories manually!
Let's search the filesystem for files belonging to the componentchooser:
$ find /usr -name "*componentchooser*"
/usr/lib64/kde4/kcm_componentchooser.so
/usr/share/doc/HTML/en/kcontrol/componentchooser
/usr/share/doc/HTML/de/kcontrol/componentchooser
/usr/share/kde4/services/componentchooser.desktop
/usr/share/apps/kcm_componentchooser
/usr/share/locale/de/LC_MESSAGES/kcmcomponentchooser.mo
Now we'll query the package manager (in our case Gentoo's Portage) and ask for the package which contains any of these files:
$ find /usr -name "*componentchooser*" | xargs qfile
kde-base/kdelibs (/usr/share/apps/kcm_componentchooser)
kde-base/kcontrol (/usr/share/apps/kcm_componentchooser)
kde-base/kcontrol (/usr/share/kde4/services/componentchooser.desktop)
kde-base/kcontrol (/usr/share/doc/HTML/en/kcontrol/componentchooser)
kde-base/kcontrol (/usr/lib64/kde4/kcm_componentchooser.so)
kde-base/kde-l10n (/usr/share/locale/de/LC_MESSAGES/kcmcomponentchooser.mo)
kde-base/kde-l10n (/usr/share/doc/HTML/de/kcontrol/componentchooser)
As we're looking for the source code which actually writes the value for the default-browser, we should look into the .so file which contains actual code, while the other files just provide documentation (/usr/share/doc/[…]), meta-information ([…].desktop) and translation strings (/usr/share/locale/[…]).
This means, we'll have to take a look at the package providing the shared-object (.so) file, which is kde-base/kcontrol on Gentoo.
First, we make sure, the source tarball is present on our filesystem, by asking Portage to download it for this package (--nodeps ensures, only the sources for this package are downloaded, but not for any dependencies):
$ emerge --fetchonly --nodeps kde-base/kcontrol
>>> Fetching (1 of 1) kde-base/kcontrol-4.11.4
* kde-runtime-4.11.4.tar.xz SHA256 SHA512 WHIRLPOOL size ;-) ... [ ok ]
In this case, the file was already present and just its checksums were verified.
Now we're going to unpack this file into a temporary location for examining it further:
$ cd /tmp
$ tar xf /usr/portage/distfiles/kde-runtime-4.11.4.tar.xz
The result is the directory kde-runtime-4.11.4 which we're going to change into now:
$ cd kde-runtime-4.11.4
This directory contains now a lot of components belonging to the kde-runtime package of KDE SC. We're interested in the kcontrol component:
$ cd kcontrol
Now we need to identify the file which contains the source code to write the default browser to kdeglobalsrc. There are different ways to do this:
Browse through the directory structure and try to find the file by its name.
Look for a file whose name contains something like componentchooser and examine its source code
Scan the source code and find directly the file which writes the value BrowserApplication.
The shortest path to our goal is option '3', so that's what we're going to do:
$ ack BrowserApplication
componentchooser/componentchooserbrowser.cpp
50: QString exec = config.readPathEntry( QLatin1String("BrowserApplication"), QString("") );
92: config.writePathEntry( QLatin1String("BrowserApplication"), exec); // KConfig::Normal|KConfig::Global
So obviously in line '92' of the file componentchooser/componentchooserbrowser.cpp, that's were this value is being written, so let's have a closer look at it:
80 void CfgBrowser::save(KConfig *)
81 {
82 KConfigGroup config(KSharedConfig::openConfig("kdeglobals"), QLatin1String("General") );
83 QString exec;
84 if (radioExec->isChecked())
85 {
86 exec = lineExec->text();
87 if (m_browserService && (exec == m_browserExec))
88 exec = m_browserService->storageId(); // Use service
89 else if (!exec.isEmpty())
90 exec = '!' + exec; // Literal command
91 }
92 config.writePathEntry( QLatin1String("BrowserApplication"), exec); // KConfig::Normal|KConfig::Global
93 config.sync();
94
95 KGlobalSettings::self()->emitChange(KGlobalSettings::SettingsChanged);
96
97 emit changed(false);
98 }
In line '92', the key BrowserApplication is written and its value is in the variable exec. The exclamation mark is added to the command string in line '90', but there is no elaborate comment in the code at this line which would explain why this is done, so let's have a look instead at the code logic which leads to adding an ! in front of the BrowserApplication value:
Line '86' sets exec to the string which is provided by the input field
Line '87' checks, whether the member variable m_browserService is true and whether the content of the variable exec is the same as the member variable m_browserExec.
m_browserService is set (0 or 1) by the method CfgBrowser::selectBrowser when the default browser is selected by browsing the application tree instead of entering the executable name directly as a string. In case the browser is selected via the application tree, the content of the input field is the name of the application's *.desktop file.
m_browserExec is the name of the *.desktop file when selecting the browser via the application tree.
In case both statements evaluate to TRUE, exec is set to the result of storageId (the name of the *.desktop entry).
Otherwise, the name of the executable file is set, but it is prepended by an !.
To make it short:
The exclamation mark for the BrowserApplication entry in kdeglobalsrc is used to distinguish between an actual binary name to be executed for launching the browser or the name of a browser's *.desktop file.
| What does an exclamation mark at the beginning of a value in KDE configuration files do? |
1,318,598,595,000 |
I'm a KDE user, and when I switch Linux distributions, I don't want to copy my entire home folder, since most of the configuration files there will be created automatically when I install/run programs on the new installation.
There are, however, some applications that I've put a lot of work into their configuration, and I like to hand-pick their configuration files that I want to migrate to the new installation.
Now, I'm having trouble doing that with the KDE configuration itself - I can't find my way around the .kde and .kde4 directories. I don't want to migrate the entire folders - but I need some specific settings from there.
So, the question is - what do I need to do to migrate the following KDE settings:
File associations
Activities
That's it. I need a way to migrate those - be it copying specific files, copying parts of files, or using a tool.
Thanks in advance!
|
All the file associations are stored in
~/.local/share/applications/mimeapps.list
For the KDE activities, have a look at these files:
activitymanagerrc
plasma-desktop-appletsrc
| Migrating the KDE configuration files |
1,318,598,595,000 |
I have configured /etc/network/interface like this:
auto lo eth0
iface lo inet loopback
iface eth0 inet dhcp
But when a lease cannot be obtained, the booting is not completed. Is it possible to leave DHCP enabled but, in case a DHCP lease is not achieved, still complete the boot (so that a new network configuration can be performed)?
|
There is an undocumented parameter to udhcpc which sends it to the background and allows the boot to complete:
udhcpc -b
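On systems using ifupdown instead (as in the /etc/network/interfaces shown above), a related trick is to mark the interface allow-hotplug rather than auto, so it is brought up by hotplug events and the boot does not wait for it (sketch, assuming eth0):

```text
# /etc/network/interfaces (sketch)
allow-hotplug eth0
iface eth0 inet dhcp
```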
| How to configure DHCP so that boot is completed even without a DHCP server? |
1,318,598,595,000 |
We have a systemd service unit that starts a third-party agent; call it "service c". The service unit functions correctly -- at least, as far as I can tell!
After a patching cycle, systemd starts this service unit (as expected) but then it turns around and stops the service unit about two seconds after it successfully started it. I have every reason to believe that the service started successfully the first time. When I log in after the reboot, I can see that the service is indeed not running; at that point, I can start the service unit manually (systemctl start service-c) and it starts the service as expected.
I would like to find out why systemd thinks it should be stopping the service unit. What can I configure or enable to determine why systemd took the "stop" action?
I am aware of the systemd LogLevel option and have already set it to "debug", up from the default of "info".
A similar idea is to set Environment=SYSTEMD_LOG_LEVEL=debug in the service unit file, but I don't particularly need the service debugged, but rather systemd itself.
The service unit configuration is:
# /etc/systemd/system/service-c.service
[Unit]
Description=service c
After=network-online.target local-fs.target
[Service]
Type=forking
ExecStart=/local-path/start.service-c
ExecStop=/local-path/stop.service-c
Restart=on-failure
[Install]
WantedBy=multi-user.target
... and the evidence is:
$ systemctl status service-c
● service-c.service - service c
Loaded: loaded (/etc/systemd/system/service-c.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Wed 2021-04-07 17:49:30 EDT; 14h ago
Process: 3162 ExecStop=/local-path/stop.service-c (code=exited, status=0/SUCCESS)
Process: 1319 ExecStart=/local-path/start.service-c (code=exited, status=0/SUCCESS)
Main PID: 1478 (code=exited, status=0/SUCCESS)
/local-path is an obfuscated version of a local directory on the system.
Since this has been an ongoing issue, after the last reboot I instrumented the "stop" wrapper script to log the process parent tree (using pstree -a -A -l -p -s $$); that log file shows:
04/07/2021 17:49:19 stop.service-c:
systemd,1 --switched-root --system --deserialize 22
`-stop.service-c,3162 /local-path/stop.service-c
`-pstree,3178 -a -A -l -p -s 3162
... where PID 3162 corresponds to systemd's invocation of the stop script. This looks to me like systemd is calling the ExecStop for the service.
systemd stops this service about two seconds after it has finished starting; the agent's log file has these timestamps:
04/07/2021 17:49:12 start.service-c: Starting agent
04/07/2021 17:49:17 start.service-c: startup success
04/07/2021 17:49:19 stop.service-c: Executing from /agent/home as user
... ending in ...
04/07/2021 17:49:30 stop.service-c: Finished with RC=0
... which corresponds to systemd's 17:49:30 timestamp for being "dead".
The "Restart=on-failure" directive would restart the service, but systemd tells me that the service started successfully:
Apr 07 17:49:10 hostname systemd[1]: Starting service c...
Apr 07 17:49:17 hostname systemd[1]: Started service c.
Since the service started cleanly, and since there's no attempt made by systemd to restart the service, I don't think the Restart parameter is coming into play.
Perhaps interestingly, there's no corresponding "Stopping service c..." log from journalctl (as there is when I manually stop the service), yet evidence points to systemd calling the ExecStop.
I am currently running systemd 219.
|
I would like to find out why systemd thinks it should be stopping the service unit. What can I configure or
enable to determine why systemd took the "stop" action?
In order to see the live-state of a service you can:
Use a systemd-cgls -l <service-cgroup-path> command: there you will see all the services's processes as they are at that moment. The service's cgroup path can be retrieved with systemctl show -p ControlGroup <service-name> command. In more recent versions of systemd (not in v219) you may also use the convenient -u <service-name> option of systemd-cgls in place of the service's cgroup path
For detailed insight you may use the very verbose systemctl show <service-name> command: this will give loads of info regarding the service state as known by systemd, and from that info you can try to infer in more detail what is happening
To investigate the "suspect stop" case, it is correct to add those commands as ExecStop commands. You can simply add them at the beginning of your own stop.service-c script (if it is indeed a script).
Or alternatively you can add them as additional ExecStop commands on their own before your stop.service-c command, as in:
[Service]
Type=forking
ExecStart=/local-path/start.service-c
ExecStop=-/bin/sh -c 'systemd-cgls -l -u %n && systemctl show %n'
ExecStop=/local-path/stop.service-c
Restart=on-failure
Note that the %n specifier is correctly handled by systemd also when it occurs within quoted strings.
Alternatively you can also:
[Service]
Type=forking
ExecStart=/local-path/start.service-c
ExecStop=-/usr/bin/systemd-cgls -l -u %n
ExecStop=-/bin/systemctl show %n
ExecStop=/local-path/stop.service-c
Restart=on-failure
Note also the - prefixing the commands so as to ignore their exit status, just in case they failed for unfathomable reasons.
Naturally you might also use them as ExecStartPost commands, so as to ponder the live-state immediately after the service is considered "successfully started" by systemd (again, make sure to ignore their exit status, or systemd will tear down the entire service if they fail).
With regard to systemd-cgls's output run as ExecStop command, you should note whether the MainPID process still shows up at that time: if it does show up, then it proves that ExecStop really has been performed autonomously by systemd as you suggest. Else (if the MainPID process is not present in systemd-cgls's output at "stop" time) it means that ExecStop has been run as a result of the MainPID process exiting on its own accord. (See further below for additional reasoning). You might also want to pay attention to the PID numbers of the service's processes together with the PID numbers of the (now dead) ExecStart command to try and infer what fork(2)-ing has been going on since the service start and all along, because that is very relevant for a type=forking service in order to assess whether it is well-behaved. (See further below for additional reasoning).
With regard to systemctl show's output run as ExecStop command , I would say that the most relevant properties to pay attention to in your particular case are:
MainPID: reads 0 if the service's main process has exited on its own accord, else reads the service's main process's PID if it is still alive and thus is indeed being stopped by systemd
ExecMainExitTimestamp: reads the exit time in date format if the service's main process has exited on its own accord, else does not read at all if the process is still alive and thus is indeed being stopped by systemd
ExecMainExitTimestampMonotonic: as above but reads in Linux's monotonic clock and reads 0 if the process is still alive
ExecMainCode: this corresponds to the code= string in systemctl status1, only it reports the decimal value of the CLD_* symbols instead of their translations into english words: according to Linux's current values for CLD_* symbols (which is an enum starting from 1), the ExecMainCode field reads 0 if the process is still alive and thus is indeed about to be stopped by systemd, else reads 1 if the process has already _exit(2)-ed on its own accord, 2 if it has been kill(2)-ed (in this use case clearly not by systemd), and so on
Note however that the above fields do not correspond to the service's current state if systemd was unable to detect the service's main process at the time of service start. (see below for explanations). They would rather correspond to the most recent run for which systemd was able to accomplish the detection fully.
Further insights
In your reasoning I can see two key points that worth extra clarification:
type=forking services
type=forking services are particularly tricky for systemd, especially when using GuessMainPID=yes (the default, therefore what you're currently using for your agent). For these service types the ExecStart command is expected to fork(2) itself once and then exit while its forked process is expected to live long and prosper as the MainPID of the service. Else:
If such forked process rather forks again and then exits as well, delegating to its own "secondly" forked process(es) the responsibility to function as the actual service, GuessMainPID simply loses track and systemd simply believes the service has finished regularly and spontaneously, and thus accomplishes the duty of cleaning everything (i.e. run ExecStop etc.) but without logging the Stopping service... message because, as far as systemd is concerned, it only reacted to the service's deliberate exit
If instead the ExecStart original process fork(2)s twice (or more) before exiting, then GuessMainPID surrenders and systemd refrains from tearing everything down when the ExecStart original process finally exits. This is a better case because the service's actual processes survive, but it is not yet ideal because systemd won't keep full track of events either, leading to e.g. inconsistent/incomplete journal logs at the least.
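As a minimal sketch of the well-behaved case (sleep 60 is a stand-in for the real daemon binary, and /tmp/service-c.pid is a hypothetical path): a wrapper that forks exactly once and records the PID, which, combined with a PIDFile= directive in the unit, spares systemd the guessing entirely:

```shell
# Fork exactly once: the single backgrounded process becomes the MainPID
start_service() {
    sleep 60 &                     # stand-in for the real daemon
    echo $! > /tmp/service-c.pid   # referenced by PIDFile= in the unit
}
start_service
cat /tmp/service-c.pid             # the daemon's PID
```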
ExecStop execution
The ExecStop commands are run also when the MainPID process exits successfully on its own accord, as long as the main process had also started successfully (which is your case at hand). I understand that it seems counterintuitive but that is simply normal behavior by systemd: it simply deems a service's ExecStop command the preferred way to clean after that service, prior to resort to (by default, see systemd.kill(5)) sending a SIGTERM first and possibly a SIGKILL afterwards.
It doesn't say it so explicitly anywhere in the systemd.service(5) manpage, but it can be inferred by a few bits and pieces of documentation, especially those regarding the environment variables available to the Exec* commands. See the $SERVICE_RESULT, $EXIT_CODE and $EXIT_STATUS variables as to what values they can take, what semantic significance they have, and the fact that they are made available precisely to the ExecStop as well as the ExecStopPost commands.
Apart from non-explicit (or personal interpretation of the) documentation, let's look at the sources that perform that behavior. Taken from v219, here is service_sigchld_event() invoking service_enter_running() on an event referring to a child that is known to be in "running" state, and then the latter function invoking the service_enter_stop() "stopping" action in all cases except for when RemainAfterExit=yes or type=dbus or the service's main process has not been detected (see type=forking explanation above) or the control-group is not healthy.
As to why the systemd people decided to do so I wouldn't know, as I am not a systemd developer, but I can see the usefulness of this behavior: it gives all still-alive-yet-"unknown" processes of a service their chance to be notified about their imminent termination in the nicest possible way, before getting a harsh SIGTERM and SIGKILL as per systemd's last resort as it proceeds to close the whole control-group. This measure is particularly useful precisely for type=forking services, because those are the most difficult for systemd to track down correctly, as explained in the type= paragraph of systemd.service(5), and because systemd tries to clean up after legacy/lazy/poorly-implemented services that don't close gracefully before exiting.
HTH
1. code= followed by a word representing the "exit reason" of the process: whether it exited or has been killed or trapped or even dumped; in practice: literally a word translating the various CLD_* values valid for the siginfo_t.si_code field as described in sigaction(2)
| What can I configure or enable to determine why systemd took a "stop" action on a service? |
1,318,598,595,000 |
I’ve scanned the ? commands inside ranger as well as some cheat sheets people made online, but to no avail.
How do I change ranger to only use 2 panes (or fewer)? I don't mind getting rid of the entirety of the file preview window, if that's what it takes (so, turn it into deer, as in the ranger plugin for Emacs). Or, sacrifice one panel of the parent dir.
|
You can configure this with
set column_ratios 3,4
in your ~/.config/ranger/rc.conf file.
You can read 3,4 as: make one pane 3 units wide and the other 4 units wide. The left pane will show your files, with the preview on the right.
It's actually documented in the man page (viewable inside ranger), but you have to search for "column", not "pane".
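If you would rather drop the preview column entirely (as the question allows), a sketch combining a few other documented rc.conf options:

```text
set column_ratios 1,3
set preview_files false
set preview_directories false
set collapse_preview true
```

This keeps only the parent and current directory panes, with the preview column collapsed.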
| ranger file explorer: how to change into 2 panes at most (for vertically narrow console) |
1,318,598,595,000 |
I'm using menuconfig to set up a Linux kernel for debugging, but why does it seem that DEBUG_STACKOVERFLOW only works for 32-bit systems?
As you can see in the screenshot, enabling HAVE_DEBUG_STACKOVERFLOW requires the system to be 32-bit. Is this because it's enabled by default when compiling for 64-bit systems? Google is not leading me to any answers.
|
x86_64 used to have stack overflow checks, but they were removed once guard pages were added to all the stack types. Guard pages provide reliable overflow protection, without needing extra checks, so the stack overflow checks were redundant.
| How come you can't enable "DEBUG_STACKOVERFLOW" when configuring a 64 bit kernel? |
1,318,598,595,000 |
Occasionally, NixOS changes config options in a way that is not entirely backwards compatible. For example, nixos 19.09 did not have a programs.gnupg.agent.pinentryFlavor option, but in nixos unstable (soon 20.03) I need to set it to a non-default value in order to get the right pinentry variant.
I share my configuration across machines, some of which run nixos-19.09 and some nixos-unstable, so I want my configuration to be compatible with both. (even without multiple machines, it would be nice to be able to switch nixos channels without breakage)
Setting programs.gnupg.agent.pinentryFlavor = "gtk2"; as needed for nixos-unstable causes nixos-rebuild to fail on nixos-19.09:
error: The option `programs.gnupg.agent.pinentryFlavor' defined in `[...]/desktop.nix' does not exist.
(use '--show-trace' to show detailed location information)
Is there a way to check if an option is valid?
Essentially, I'm looking for what to write in place of ???(pinentryFlavor)) here, so as to not set a nonexistent option:
programs.gnupg.agent = { enable = true;} // (
if ???(pinentryFlavor)
then { pinentryFlavor = "gtk2"; }
else {});
|
The configuration function does receive an options attr, so it is possible to check if a given option is defined using builtins.hasAttr before setting it in the configuration.
Most NixOS configurations don't extract options, so you may need to add it first. For example:
{ config, pkgs, options, ... }:
{
programs.gnupg.agent =
{ enable = true; } //
# False on NixOS 19.09
(if builtins.hasAttr "pinentryFlavor" options.programs.gnupg.agent
then { pinentryFlavor = "gtk2"; }
else {});
}
Similarly, the same approach can be used to set options used by nixos-rebuild build-vm, which would normally not be available.
Instead of needing to set options via environment variables when running the VM like
QEMU_OPTS='-m 4096 -smp 4 -soundhw ac97' ./result/bin/run-*-vm
the equivalent options can be set in configuration.nix:
# The default 384MB RAM is not enough to run Firefox in a VM
virtualisation =
lib.optionalAttrs (builtins.hasAttr "qemu" options.virtualisation) {
memorySize = 4096;
cores = 4;
qemu.options = [ "-soundhw ac97" ];
};
| Set NixOS config option only when it is valid, for backwards compatibility |
1,318,598,595,000 |
In the context of automating the installation of a machine, I would like to configure Firefox, specifically the proxy settings, from the command line, either by executing commands or by editing configuration files, for example.
Is this possible, and if yes how?
Edit: I forgot to mention that I would like to configure the proxy for all users.
|
You have basically two choices (that I can think of):
Launch firefox, and update your profile with the correct settings (proxy ones for example).
Then close Firefox and retrieve your configuration from ~myusername/.mozilla/firefox/xxxxxxx.default/prefs.js (xxxxxxx is a dynamic string). You can then use these user preferences for your deployment.
Directly update that file, after you've deployed / installed a machine, with the proxy settings.
When you will launch firefox with that user, the settings will be directly applied.
According to the comment of @Sparhawk, the second option would fit better. In that case we keep the original prefs.js as intact as possible, just changing the proxy settings:
user_pref("network.proxy.http", "IPADDRESS OR URL");
user_pref("network.proxy.http_port", 8080);
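A sketch of automating option 2 (demonstrated on a scratch directory; the real target is prefs.js in the profile directory, Firefox must not be running, and proxy.example.com is a placeholder; network.proxy.type 1 selects manual proxy configuration):

```shell
profile=$(mktemp -d)   # stands in for ~/.mozilla/firefox/xxxxxxx.default

# Append the proxy prefs to the profile's prefs.js
cat >> "$profile/prefs.js" <<'EOF'
user_pref("network.proxy.type", 1);
user_pref("network.proxy.http", "proxy.example.com");
user_pref("network.proxy.http_port", 8080);
EOF
cat "$profile/prefs.js"
```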
| Configure firefox without using the gui |
1,318,598,595,000 |
I usually work with Debian/Ubuntu as a basis for Apache and recently learned that commands like a2enmod MOD or a2dismod MOD are Debian-specific.
Is there a global / distro-agnostic way (maybe with some CM like Ansible or another) to enable or disable modules?
All I personally enable is http2, deflate and expires (and I use them all-default).
|
The platform-independent way would in general be adding the corresponding LoadModule directives to your Apache configuration. What a2enmod does is link files containing such directives (.load files, plus additional .conf files if needed) from /etc/apache2/mods-available to /etc/apache2/mods-enabled.
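For the modules mentioned in the question, the portable form is explicit LoadModule lines in the main server configuration (module paths vary by distro and build, so these paths are illustrative):

```apache
LoadModule deflate_module modules/mod_deflate.so
LoadModule expires_module modules/mod_expires.so
LoadModule http2_module   modules/mod_http2.so
```

A configuration-management tool can then write these lines with a generic file/template task; Ansible's apache2_module module, by contrast, shells out to a2enmod and is therefore Debian-specific.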
| How to enable or disable Apache modules in a distro-agnostic way? |
1,318,598,595,000 |
I am configuring the battery module for polybar on Arch Linux.
I can make the battery icon red with
ramp-capacity-0 =
ramp-capacity-1 =
ramp-capacity-2 =
ramp-capacity-3 =
ramp-capacity-4 =
ramp-capacity-0-foreground = #ff0000
This will make the icon #ff0000, but the 10% label is still white. Is there any way to change the label colour so that when it is using ramp-capacity-0, the icon and percentage are both red?
|
Setting ramp-capacity-0-foreground will only change the color of the ramp-capacity-0 text, as you have already noticed.
I assume you want to have the percentage in different colors depending on the battery charge, so you want to set it in the ramp. The only way you can currently do this is by setting
ramp-capacity-0 = %{F#ff0000}
However, this only works if the percentage appears directly after the ramp and neither ramp-capacity-0-foreground nor label-discharging-foreground is set.
This works because of how polybar handles setting text color. %{F...} is the foreground formatting tag. If the -foreground property of a label or ramp (or anything else) is set, all its text will be wrapped in %{F#...}TEXT%{F-}, where #... is whatever the foreground was set to and %{F-} is the formatting tag that resets the foreground for the following text to the default defined in the bar section. This is the reason that neither ramp-capacity-0-foreground nor label-discharging-foreground can be set: if they were set, the formatting tag you added in ramp-capacity-0 would not have any effect.
Example:
If you set ramp-capacity-0 = %{F#ff0000} depending on the rest of your configuration, polybar will generate the following:
With neither ramp-capacity-0-foreground nor label-discharging-foreground set:
%{F#ff0000} 10%
With label-discharging-foreground = #ffffff:
%{F#ff0000} %{F#ffffff}10%{F-}
With ramp-capacity-0-foreground = #ff0000
%{F#ff0000}%{F#ff0000}%{F-} 10
You can see, only if both are not set, can the formatting tag you have inserted manually "bleed over" into the discharging label.
References:
Formatting Wiki Page
| Linux Polybar battery capacity 0 label color |
1,318,598,595,000 |
I don't know how to properly debug the kernel configuration process when an option that should be on (because it doesn't really depend on anything and doesn't conflict with anything that I can think of) can't survive beyond make olddefconfig.
Among other things I'm using
CONFIG_SYS_SUPPORTS_ZBOOT=y
with a 4.8.6 kernel tree, and I have found no way to write a .config file that will retain this specific flag after using make; according to the available documentation and scripts in arch/mips, this shouldn't happen.
Since this option is required in order to generate a vmlinuz for MIPS targets, I have enabled lzma for the kernel for both compression and decompression, but so far... nothing: CONFIG_SYS_SUPPORTS_ZBOOT=y keeps "going off".
Do you have any idea how I can literally force a CONFIG flag to stay on, or how to debug why make and kbuild think that this flag can't be on?
|
Manually changing the .config file without Kconfig is discouraged as it might lead to unexpected behavior. In your case the best solution would be to run
make menuconfig
and selecting the configuration option from the menuconfig (the parameter you are looking for should be under arch/mips).
| How to force a CONFIG_ option to stay on? |
1,318,598,595,000 |
In the usual message index, mutt uses one line for displaying
q:Quit d:Del u:Undel s:Save m:Mail r:Reply g:Group ?:Help
Similar key bindings are displayed in other menus, e.g in the attachment menu, and so on.
This is helpful when starting out with mutt, but after a while this line isn't all that useful anymore.
Is there a way to turn this line off and use the screen real estate for an extra message, an extra attachment, etc.?
|
This is the help line. It can be toggled on or off with :set help.
You can have mutt start with it off with a setting in your muttrc:
unset help
If you would like to have the option to toggle it back on from time to time (rather than using the command), you can bind it to a key:
macro index,pager <F2> ":toggle help<enter>:set ?help<enter>" "toggle help status line"
Now, hitting F2 will toggle it on and off as required.
| Don't show first line with common key bindings in mutt's menus |
1,318,598,595,000 |
In what directory does etckeeper store its metadata and permissions in?
I want to know for purposes of testing the restoration of a configuration.
|
It stores its metadata in the /etc/.etckeeper file, which is also tracked in the repository used to store /etc.
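As a sketch of what restoration looks like (all paths and entries below are illustrative): /etc/.etckeeper is a generated shell script of mkdir/chmod/chown commands that etckeeper replays to restore ownership and permissions that the VCS itself doesn't track. The real file defines a small helper similar to the maybe function here; the demo simulates it in a scratch directory.

```shell
# Simulate an .etckeeper metadata script in a scratch directory:
demo=/tmp/etckeeper_demo
mkdir -p "$demo" && cd "$demo"
touch shadow
cat > .etckeeper <<'EOF'
maybe () { "$@" ; }
maybe chmod 0600 'shadow'
EOF
# Replaying the script restores the recorded permissions:
sh .etckeeper
stat -c '%a' shadow    # prints: 600
```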
| What directory does etckeeper store its permissions / metadata in? |
1,318,598,595,000 |
I take a simple example: systemd-timesyncd.service
This service is responsible for keeping the time accurate (like a regular NTP daemon, but it acts only as a lightweight client and synchronizes with only one server at a time).
The default configuration file /etc/systemd/timesyncd.conf is empty so the compile-time parameters are used.
If I download the source code, I can see what are the default parameters.
If I look at my distribution's (Debian's) patches, I can see the custom default parameters (if any).
For example, the NTP servers used by default are time{1,2,3,4}.google.com,
and Debian replaces them with {0,1,2,3}.debian.pool.ntp.org at compile time.
I can see which server is currently used: systemctl status systemd-timesyncd.service
This gives an idea of the default configuration, but it is far from complete, even if the remaining servers are often easy to guess.
Question: is there a standard way to display the default parameters for a systemd service?
|
Question: is there a standard way to display the default parameters for a systemd service?
There is no standard way to display the default parameters for a systemd service.
Many services expose some parameters on the bus.
For example:
busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.DBus.Properties GetAll "s" ""
shows properties of the manager itself.
Output contains RuntimeWatchdogSec and ShutdownWatchdogSec (exposed as RuntimeWatchdogUSec and ShutdownWatchdogUSec), LogLevel, DefaultStandardOutput, DefaultStandardError, etc.
I take a simple example: systemd-timesyncd.service
See: https://github.com/systemd/systemd/issues/1589
| systemd: how to print a service's default configuration? |
1,318,598,595,000 |
I like adapting the style of my terminal depending on what I am doing inside. Using tilda, I am therefore looking for a way to dynamically change the cursor shape, say, with a command line.
I know that this option can be changed without having to restart tilda since I can do this from the gui config editor. However, running
sed "s/^cursor_shape = 0/cursor_shape = 1/" -i ~/.config/tilda/config_0
does not work, even though it changes the file in the desired way. Moreover, the change is undone if I quit tilda and then restart it, which tells me the information must also be stored somewhere else.
Is there a way I can make this change immediately effective? (like a function I would call to make tilda read the config file again?)
|
Thanks to Lanoxx, who is currently developing tilda, I can now answer this question.
tilda saves its configuration to the config files on exit, so editing them while it is running has no effect. Changing the configuration from the command line is not supported yet; it would require implementing a D-Bus interface for tilda, which is quite a job and will probably not be done soon. tilda is still a great terminal emulator anyway :)
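A workaround sketch that follows from this: since tilda overwrites its config on exit, the edit only sticks if it happens while tilda is not running. The demo below works on a scratch copy of the file so it is self-contained; the real-system commands are shown as comments and are illustrative.

```shell
# Edit a scratch copy of the config (key name taken from the question):
cfg=/tmp/tilda_demo_config
printf 'cursor_shape = 0\n' > "$cfg"
sed -i 's/^cursor_shape = 0/cursor_shape = 1/' "$cfg"
cat "$cfg"    # prints: cursor_shape = 1

# On a real system (illustrative), quit tilda first so it cannot
# overwrite the change on exit, then relaunch it:
#   pkill tilda
#   sed -i 's/^cursor_shape = 0/cursor_shape = 1/' ~/.config/tilda/config_0
#   tilda &
```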
| Edit tilda config files while running tilda |