date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,401,581,406,000 |
I'm thinking about restarting a client's Apache server, but I'm reluctant to do so because I know he's currently running HTTPS and I don't want to get stuck with the server prompting me for the SSL Passphrase (which I don't have and he's not sure if there is one or not).
Is there a quick/easy way to check whether Apache will require the SSL pass phrase before restarting it?
|
You could check whether the private key is password protected by running
$ openssl rsa -in /path/to/private.key -check -noout
If this prompts you for the password, the key is obviously password protected.
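If even that prompt is a concern, the key file itself usually tells you: encrypted PEM keys carry an "ENCRYPTED" marker in their header ("Proc-Type: 4,ENCRYPTED" for traditional keys, "BEGIN ENCRYPTED PRIVATE KEY" for PKCS#8). A sketch of a prompt-free check (`is_encrypted` is a made-up helper name):

```shell
# is_encrypted: report whether a PEM private key appears to be
# passphrase-protected, without ever invoking an interactive prompt.
is_encrypted() {
    if grep -q 'ENCRYPTED' "$1"; then
        echo "passphrase required"
    else
        echo "no passphrase"
    fi
}
```

This only inspects the header, so unlike `openssl rsa -check` it cannot hang waiting for input.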
| Checking if Apache requires SSL pass-phrase |
1,401,581,406,000 |
In XFCE4, there is a list of items to be started when an XFCE4-session is started
(XFCE4 Settings, xfce4-settings-manager → "Application Autostart" tab).
I'm wondering where this list is stored.
In ~/.config/autostart I have three .desktop files, which are available in the aforementioned list, but there are many more items in that list than those three files.
I was wondering if those items are stored somewhere in a human readable file, or perhaps directory structure.
Although I'm not exactly planning to edit those items via scripting, it would help if it were possible to modify that list while a session is not active, for instance if I wanted to edit those items over SSH while no one is logged in.
|
The entries you see are populated from:
~/.config/autostart (user-specific)
and
/etc/xdg/autostart/ (system-wide)
To disable an entry from the second, system-wide location, create a .desktop file with the same name in your user autostart directory with this content:
[Desktop Entry]
Hidden=true
E.g. I have /etc/xdg/autostart/blueman.desktop - to disable it you create:
~/.config/autostart/blueman.desktop with the above content.
Redefining an entry is a tad tedious and over-complicated: you first have to disable the system-wide one as above, then create your own desired entry.
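Since these are plain files, the same override can be done non-interactively, e.g. over SSH while no session is running (using the blueman.desktop example from above):

```shell
# Hide a system-wide autostart entry by shadowing it with a user-level
# .desktop file that contains only Hidden=true.
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/blueman.desktop <<'EOF'
[Desktop Entry]
Hidden=true
EOF
```

The change takes effect the next time an XFCE session starts.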
| XFCE4 - Session and Startup: where are autostart items saved? |
1,401,581,406,000 |
I'm talking about files like ~/.foo found in a user's home directory. I'm working on a program that reads from such a file, and I'd also like to clean up the root of my home directory if I can.
Is there a POSIX-specified variable, such as ~/$(conf) where config like .emacs can be found?
|
POSIX
Searching through the specification for the strings "user config" or "configuration files" turned up zero hits, so I would say no, it doesn't specify this in any way.
http://pubs.opengroup.org/onlinepubs/9699919799/
FHS
Looking at the FHS - Filesystem Hierarchy Standard it had this bit:
User specific configuration files for applications are stored in the user's home directory in a file that starts with the '.' character (a "dot file"). If an application needs to create more than one dot file then they should be placed in a subdirectory with a name starting with a '.' character (a "dot directory"). In this case the configuration files should not start with the '.' character.
sysconf/getconf
The list of POSIX configuration constants present in <limits.h> is the only other place I can think of where something like this would be defined. Running the command getconf <var> will return these types of results.
For example:
$ getconf _POSIX_CHILD_MAX
1024
But looking through the list of definitions I don't see any pertaining to a user's home directory.
limits.h - implementation-defined constants
unistd.h - standard symbolic constants and types
sysconf - get configurable system variables
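Worth noting: while POSIX is silent, the freedesktop.org XDG Base Directory specification is the closest thing to a convention today. Applications that follow it look under $XDG_CONFIG_HOME, falling back to ~/.config when it is unset or empty ("myapp" below is a placeholder name):

```shell
# XDG Base Directory convention: per-user config lives in
# $XDG_CONFIG_HOME, defaulting to ~/.config when unset or empty.
conf_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
echo "$conf_dir"
```

This is a freedesktop.org specification, not a POSIX one, so it answers the practical question rather than the standards question.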
| Is there a standards-specified location for user configuration files? |
1,401,581,406,000 |
I am deploying systems that must be configured using the Red Hat 6 (v1r2) Security Technical Implementation Guide(STIG) published by the Defense Information Systems Agency (DISA).
Link to site.
I've started developing a Kickstart file to automate many of these settings based on other KS files I've found via Google.
Does anyone have any advice, additional tools, or other resources that will help?
I do not need to use Kickstart, it just seemed like the easiest way to get started. I'm looking for any resources: playbooks for Ansible, basic shell scripts, etc.
|
There are some scripts, probably still "beta", in a project on GitHub by the RedHatGov organization (Red Hat, Inc. government-sector employees).
https://github.com/RedHatGov
While their project is not complete, nor universally applicable, it is a great start, and I plan on forking it, contributing, and submitting pull requests.
Frank Caviggia of Red Hat has also made this script publicly available (by forking code from other projects such as Aqueduct); it modifies a RHEL 6.4 .iso with many settings and requirements for DISA STIG compliance, creating a new .iso you can burn and use to install a system with many compliant options from the get-go.
http://people.redhat.com/fcaviggi/stig-fix/
I have tested this script, with minor modifications, on CentOS 6.5 and it has been somewhat successful. Someday when I get it cleaned up I will document and share my findings/changes.
| Automate DISA STIG controls for RHEL/CentOS? |
1,401,581,406,000 |
I have a VPS running Ubuntu 13.10 on Digital Ocean.
I'd like to install postfix on the server, but I want it to be able to send e-mails only to my e-mail address whenever a system message wants to be sent to root.
Now I have my postfix installed as local only. This sends the messages to /var/mail/root.
Instead, I'd like my messages to go to my real e-mail ([email protected]), but I don't want to allow other users/sites to send e-mail (like for example from PHP's mail()).
Is this possible?
|
There are several ways to do this.
Using SSMTP:
You can find a detailed article here. (Please consider Zulakis' comment below regarding security: I'm leaving the ssmtp solution here for reference, but I prefer the Postfix solution.)
Install ssmtp
sudo aptitude install ssmtp
Edit the configuration file:
sudo vim /etc/ssmtp/ssmtp.conf
And configure it with your gmail account:
[email protected]
mailhub=smtp.gmail.com:587
[email protected]
UseSTARTTLS=YES
AuthUser=username
AuthPass=password
FromLineOverride=yes
Using Postfix
If you want to use your postfix install, you can configure it to work with your gmail account. You can find a detailed article here.
Check that you have all the needed dependencies
mailutils libsasl2-2 ca-certificates libsasl2-modules
Edit the configuration of postfix:
sudo vim /etc/postfix/main.cf
And configure it with your gmail account:
relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_use_tls = yes
Create the file with your password:
vim /etc/postfix/sasl_passwd
And add the following lines
[smtp.gmail.com]:587 [email protected]:PASSWORD
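After creating sasl_passwd, two follow-up steps are usually needed (standard Postfix practice, not shown in the excerpts above): the hash: map must be compiled with postmap, and the plaintext file should not be world-readable.

```shell
sudo chmod 600 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd   # produces /etc/postfix/sasl_passwd.db
sudo service postfix restart            # pick up the new configuration
```

Without the postmap step, the hash:/etc/postfix/sasl_passwd lookup in main.cf has nothing to read.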
Allowing root only
I'm not exactly sure what you mean by
I don't want to allow other users/sites to send e-mail (like for example from PHP's mail())
But for blocking mail access per user or per domain, you can edit the following file:
vim /etc/mail/access
And add rules such as:
To:[email protected] REJECT # Reject a1 user from receiving mail
From:[email protected] REJECT # Reject a1 user from sending mail
I hope this helps.
| How to install postfix for sending mails to admin only? |
1,401,581,406,000 |
What would be the best way to set Vim up to always put the cursor in insert mode at the end of the first line (to account for commit message templates) when running git commit? This would basically do something identical to pressing ggA every time. Ideally this should be a Vim configuration (presumably in ~/.vim/after/ftplugin/gitcommit.vim), because I rely on $VISUAL rather than configuring editors for everything separately.
This almost works:
call feedkeys('ggA', 'int')
However, when running echo 'some text' >/tmp/COMMIT_EDITMSG && vim -Nu NONE --cmd 'filetype plugin on' /tmp/COMMIT_EDITMSG the cursor stays on the status line until I press something.
1 | startinsert! works for echo 'some text' >/tmp/COMMIT_EDITMSG && vim -Nu NONE --cmd 'filetype plugin on' /tmp/COMMIT_EDITMSG, but when running git commit -t /tmp/COMMIT_EDITMSG it breaks completely: the commit message is not shown and the commit template appears below the status line.
After pressing the right arrow, the commit message and cursor show up and the editor is in insert mode, but the cursor is at the second character rather than at the end of the line.
Do I need to add something to the configuration to tell Vim to show the actual contents of the buffer?
|
Adapting one of the autocommands given in the Vim Wikia, this seems to work fine with git commit -t /tmp/COMMIT_EDITMSG for me:
" ~/.vim/ftplugin/gitcommit.vim
au! VimEnter COMMIT_EDITMSG exec 'norm gg' | startinsert!
I used exec 'norm gg' | instead of 1 | because :1 | is equivalent to :1p | and there's a small delay as the line is printed.
| How to start Vim in insert mode at end of first line when committing Git changes? |
1,401,581,406,000 |
In previous versions of gtk it was easy to launch one instance of an application with default gtk settings:
GTK2_RC_FILES= epdfview
But for version 3, it won't work anymore:
GTK3_RC_FILES= evince
has no effect. Is this convenience definitely gone?
|
OK, the implementation has changed; details can be found in the documentation for the GtkCssProvider class. To use plain default settings, one can now use the following:
GTK_DATA_PREFIX= gtk3-app
| Launch a GTK3 application without customized gtk settings? |
1,401,581,406,000 |
I'm configuring a CentOS 7 host (through Ansible) using authconfig. Now I need to add/configure the pam_exec module in the setup, but it seems it is not supported by authconfig (cf. man authconfig and /etc/sysconfig/authconfig).
I'm afraid (as mentioned in some /etc/pam.d/*conf headers) that subsequent authconfig calls will overwrite my changes.
How do I integrate a specific pam config to RedHat authconfig framework?
|
authconfig will only change the PAM configuration in the /etc/pam.d/*-ac files. Those files are not included directly into the configuration of individual services, but via a symbolic link. For example, /etc/pam.d/system-auth-ac is by default linked as /etc/pam.d/system-auth, and the include lines in files like /etc/pam.d/sshd or /etc/pam.d/login will always use the name system-auth, never system-auth-ac.
man authconfig says that if the symbolic links are modified, authconfig won't re-link them. So this is one place where system administrators can inject their own settings.
You have two options:
Add your pam_exec to the PAM configuration files of the individual services, either before or after the appropriate include lines. This is the recommended option if you want your pam_exec to apply only to specific services. (Do you really need your pam_exec to run when a user runs chfn to update their own name/office/phone number information?)
Or replace the appropriate include link with an actual file that contains your pam_exec together with include lines referring to the corresponding *-ac file.
For example, if you want your pam_exec to run with all services that use password-auth, you could replace the /etc/pam.d/password-auth symlink (that points to password-auth-ac which is modified by authconfig) with a file like this:
auth include password-auth-ac
account include password-auth-ac
password include password-auth-ac
session include password-auth-ac
session required pam_exec.so <your parameters>
... assuming that you want your pam_exec in the end of the session phase. If you want to place it into a different phase, edit to suit your needs.
| How to make changes to pam config such that further execution of authconfig will not overwrite them? |
1,401,581,406,000 |
Is it possible to automatically run "source .bashrc" every time when I edit the bashrc file and save it?
|
One way, as another answer points out, would be to make a function that replaces your editor call to .bashrc with a two-step process that
opens your editor on .bashrc
sources .bashrc
such as:
vibashrc() { vi "$HOME/.bashrc"; source "$HOME/.bashrc"; }
This has some shortcomings:
it would require you to remember to type vibashrc every time you wanted the sourcing to happen
it would only happen in your current bash window
it would attempt to source .bashrc regardless of whether you made any changes to it
Another option would be to hook into bash's PROMPT_COMMAND functionality to source .bashrc in any/all bash shells whenever it sees that the .bashrc file has been updated (and just before the next prompt is displayed).
You would add the following code to your .bashrc file (or extend any existing PROMPT_COMMAND functionality with it):
prompt_command() {
# initialize the timestamp, if it isn't already
_bashrc_timestamp=${_bashrc_timestamp:-$(stat -c %Y "$HOME/.bashrc")}
# if it's been modified, test and load it
if [[ $(stat -c %Y "$HOME/.bashrc") -gt $_bashrc_timestamp ]]
then
# only load it if `-n` succeeds ...
if $BASH -n "$HOME/.bashrc" >& /dev/null
then
source "$HOME/.bashrc"
else
printf "Error in $HOME/.bashrc; not sourcing it\n" >&2
fi
# ... but update the timestamp regardless
_bashrc_timestamp=$(stat -c %Y "$HOME/.bashrc")
fi
}
PROMPT_COMMAND='prompt_command'
Then, the next time you log in, bash will load this function and prompt hook, and each time it is about to display a prompt, it will check whether $HOME/.bashrc has been updated. If it has, it will run a quick check for syntax errors (the -n option), and if the file is clean, source it.
It updates the internal timestamp variable regardless of the syntax check, so that it doesn't attempt to load it until the file has been saved/updated again.
| How to run "source .bashrc" automatically after I edit and save it? |
1,401,581,406,000 |
I want to use different dns server (not tor, any dns server I set) for certain terminal command.
Say it would look like
$ DNS_SERVER=8.8.8.8 dnsify ping example.com
and it uses Google dns.
I know there's socksify, torify, and other tools like that. I'm looking for any tool, hack, or other way to set it explicitly for my command, or at least restricted to a terminal session, so that I use a different DNS server for my command or terminal session and the main DNS server for all other software.
I tried proxychains but can't force it to use a non-system proxy.
So, is there anything for dns proxifying?
|
I'm not aware of any method to override the system resolvers simply by using environment variables. You can override resolv.conf options using RES* environment variables but those can't be used to override the nameserver definitions (see the resolv.conf manual page for more information).
The best option would be to use the LD_PRELOAD mechanism of the dynamic linker to preload a library that allows you to override the various resolver calls to use your own DNS server rather than the system ones.
One that I've found is resolvconf-override. From the README:
resolvconf override provides a shared library to be used as an LD_PRELOAD to override the nameservers listed in /etc/resolv.conf on glibc-based systems (eg. most Linux distributions).
...
To use the Google DNS in place of the ones mentioned in
/etc/resolv.conf you would run:
LD_PRELOAD=/usr/lib64/libresolvconf-override.so NAMESERVER1=8.8.8.8 \
NAMESERVER2=8.8.4.4 myapplication
You will need to compile it from source, but it looks to do exactly what you need.
Note: You didn't specify an operating system, but I'm assuming Linux.
| Proxify dns queries at the command-line |
1,401,581,406,000 |
Some Emacs keyboard shortcuts are intercepted by KDE. I know that KDE lets you configure keyboard shortcuts via GUI, but I am tired of sifting every time all the menus. I would like to open the files where KDE stores its shortcuts, both at the system level and at the user level, and change all the shortcuts that interfere with Emacs once for all.
I am using KDE 4.14.2.
|
The global ones are in ~/.kde4/share/config/kglobalshortcutsrc.
Different apps/services may have specific ones in their own config files - many in the same dir.
Note: the ~/.kde4 path is seen on OpenSUSE, on other distributions the path may exist under ~/.kde instead.
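To locate a conflicting binding without sifting through the GUI, you can simply grep those files. A small helper sketch (`find_shortcut` is a made-up name; adjust ~/.kde4 to ~/.kde on distributions that use it):

```shell
# find_shortcut: list every KDE rc file and line that mentions a given
# key combination (grep uses BRE here, so '+' is matched literally).
find_shortcut() {
    grep -rn -- "$1" ~/.kde4/share/config/ 2>/dev/null
}
```

Then `find_shortcut 'Ctrl+Alt+K'` prints every rc file and line that mentions that combination, so you know exactly which file to edit.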
| Where does KDE 4 store its keyboard shortcuts? |
1,401,581,406,000 |
This question is about best practices. I know that logging in over secure shell, switching users with su, and using su -l have different effects. Also, in the event you make a typo in the configuration, you still want to be able to log in. Where are the ideal places to store color definitions? At the moment I have them in .bash_profile. Is it OK to store them in .bashrc?
Configuration Locations:
According to the ArchWiki
/etc/profile Sources application settings in /etc/profile.d/*.sh and /etc/bash.bashrc.
~/.bash_profile Per-user, after /etc/profile.
~/.bash_login (if .bash_profile not found)
~/.profile (if .bash_profile not found)
/etc/skel/.bash_profile also sources ~/.bashrc.
~/.bash_logout
/etc/bash.bashrc Depends on the -DSYS_BASHRC="/etc/bash.bashrc" compilation flag. Sources /usr/share/bash-completion/bash_completion
~/.bashrc Per-user, after /etc/bash.bashrc.
Let's say I have two color definitions, one for the command prompt and one for the ls command.
set_prompt () {
Last_Command=$? # Must come first!
Blue='\[\e[01;34m\]'
White='\[\e[01;37m\]'
Redbold='\[\e[01;31m\]'
Greenbold='\[\e[01;32m\]'
Greenlight='\[\e[00;32m\]'
Blueintense='\[\033[00;96m\]'
Purplelight='\[\e[00;35m\]'
Yellowbold='\[\e[01;33m\]'
Graydark='\[\e[01;90m\]'
Reset='\[\e[00m\]'
FancyX='\342\234\227'
Checkmark='\342\234\223'
PS1="${Graydark}\t "
if [[ $Last_Command == 0 ]]; then
PS1+="$Greenlight$Checkmark "
else
PS1+="$Redbold$FancyX "
fi
if [[ $EUID == 0 ]]; then
PS1+="\\u@$Redbold\\h "
else
PS1+="$Greenlight\\u$White@$Redbold\\h "
fi
PS1+="$Graydark\\W $Redbold\\\$$Reset "
}
PROMPT_COMMAND='set_prompt'
set_ls () {
Default='0;0'
White='97'
Yellowbold='01;33'
Greenlight='00;32'
Purplelight='00;35'
Purplebold='01;35'
Whitelight='00;37'
Yellowlight='00;33'
Graydark='00;90'
# Highlight
Highlightpurpledark='45'
Highlightgraydark='100'
LS_COLORS="fi=$Greenlight:di=$White;$Highlightgraydark:*.tex=$Purplebold"
export LS_COLORS
}
set_ls
|
I would put environment variables in .bash_login or .bash_profile, since (when exported) they are inherited by subshells and don't need to be reset for every shell invocation. Not that resetting them would cost practically anything, but I may want to set an envvar to something else for the duration of a subshell, and that's hard to do if .bashrc overrides the setting.
For everything else (including functions), you want to put them in .bashrc, since .bash_login and friends won't be read by subshells. .bashrc usually will be, through one of the profile/login scripts.
Of course your use of PS1 is a bit different, since you want a function that changes it.
(Bash is a bit funny with its initialization files. Login shells read bash_profile and friends, but not bashrc. Non-login shells work exactly the opposite way. So there's no file that will be read by all shell invocations, unless bashrc is sourced by the profile scripts; see https://www.gnu.org/software/bash/manual/bashref.html#Bash-Startup-Files.)
Choosing between .profile, .bash_profile and .bash_login is completely up to you, and choosing between global configuration and per-user configuration of course depends on if you want to change the behaviour for all users, or only one.
As for typos, keep a shell open and test run the scripts after changing them. :) Not that a simple typo would matter, at worst it will stop reading the init script and/or mess up the rest of the settings. Unless you have an "exit" in your .bashrc for some reason.
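One common arrangement that follows from this (an assumption about your preferences, not a requirement) is to keep everything in ~/.bashrc and make ~/.bash_profile a thin wrapper that sources it, so login and non-login interactive shells end up with the same settings:

```shell
# --- ~/.bash_profile (sketch) ---
# Keep all real settings in ~/.bashrc and pull it in from the login shell,
# so login and non-login interactive shells behave the same.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```

With this in place, the "which file do I edit?" question mostly disappears: everything goes in .bashrc.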
| Where should I save color codes for the PS1 command line / terminal? [closed] |
1,401,581,406,000 |
How do I enable promiscuous mode on a network adapter? I have tried adding PROMISC=yes in /etc/sysconfig/network-scripts/ifcfg-ensxxx,
but there is no effect even after a network restart or rebooting the system.
|
CentOS 7 /usr/share/doc/initscripts-9.49.24/sysconfig.txt says:
No longer supported:
PROMISC=yes|no (enable or disable promiscuous mode)
ALLMULTI=yes|no (enable or disable all-multicast mode)
So for enabling you have to run:
ip link set ethX promisc on
Or, if you want it to happen at boot, you can use the rc-local systemd service.
Put the above line in /etc/rc.d/rc.local (don't forget to change ethX with your proper device), then:
chmod u+x /etc/rc.d/rc.local
systemctl enable rc-local
systemctl start rc-local
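If you would rather avoid rc-local, a templated systemd unit is another way to run the same command at boot. This is only a sketch (the path to ip may differ on your system); save it as /etc/systemd/system/promisc@.service:

```ini
[Unit]
Description=Enable promiscuous mode on %i
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link set dev %i promisc on

[Install]
WantedBy=multi-user.target
```

Enable it per interface with systemctl enable promisc@ethX.service; systemd expands %i to the interface name after the @.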
| Enable Promiscuous mode in CentOS 7 |
1,401,581,406,000 |
Since linux 2.6.30, filesystems are mounted with "relatime" by default. In this discussion, Ingo Molnar says he has added the CONFIG_DEFAULT_RELATIME kernel option, which:
makes 'norelatime' the default for all mounts without an extra kernel
boot option.
I don't really get it: does that mean that without CONFIG_DEFAULT_RELATIME in .config, a kernel will not use relatime as a default mount option?
How can one enable or disable CONFIG_DEFAULT_RELATIME in make menuconfig? (I don't find anything related to relatime.)
And finally, I can't even find CONFIG_DEFAULT_RELATIME in the kernel sources.
Can someone enlighten me?
|
Ingo Molnar proposed a patch, but this patch wasn't accepted into the kernel tree. Linus Torvalds made relatime the default setting in 2.6.30, unconditionally, and this is still true in 3.0. If you want relatime to default off in the kernel, you need to apply Ingo Molnar's patch in your copy of the source.
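Independently of the compiled-in default, you can pin the atime behaviour per filesystem with an explicit mount option, e.g. in /etc/fstab (the UUID below is a placeholder for your root filesystem):

```
# /etc/fstab - an explicit option (noatime/relatime/strictatime)
# overrides whatever the kernel default is
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime  0  1
```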
| How to configure CONFIG_DEFAULT_RELATIME to disable relatime |
1,401,581,406,000 |
Many configuration files are based on the format Key value or Key=value with one line for each of them.
Many packages provide a default configuration file where these available configuration keys are already written with their default value and/or are commented out.
I'm wondering if there is a tool that allows changing that kind of file without the need to open an interactive editor, more high-level than sed (possibly built on top of it).
That would be something as simple as :
$ conftool file key value
It would find the key in the file, remove the comment sign(s) if any, change the value and save the result.
|
As far as I know there is no generic config-line-changer tool. I imagine it would be hard to create such a tool because there are so many different config file syntaxes.
If you want to change a specific value in a specific config file, you can write a specialized tool for that specific task.
Here are two examples using sed and awk to help you get started.
A simple sed command to replace the value of a key, for a simple key value syntax:
$ sed 's/^key2 value2$/key2 newvalue2/' config
Example:
$ cat config
key1 value1
key2 value2
key3 value3
$ sed 's/^key2 value2$/key2 newvalue2/' config
key1 value1
key2 newvalue2
key3 value3
But beware: if there are more key2 value2 lines (possibly in other sections of the config file), all of them will be replaced. This is hard (though possible) to prevent in sed and easier in awk; see below for an awk script that respects sections.
Explanation:
This sed command does roughly the following:
for every line:
if line is "key2 value2":
print "key2 newvalue2"
This sed command s/pattern/replace/ means: in every line, search for pattern and, if found, replace it with replace. pattern can be a normal string or a regex (regular expression).
The ^ and $ in the regex are called anchors and mean beginning of line and end of line, respectively. Without the anchors, the pattern key2 value2 would also match the line xkey2 value2x, and the result would be xkey2 newvalue2x.
Here are some examples of how the pattern changes the behaviour.
It also works with key=value syntax:
$ sed 's/^key2=value2$/key2=newvalue2/' config
To match just the key, regardless of the old value:
$ sed 's/^key2=.*/key2=newvalue2/' config
To also remove a possible comment sign:
$ sed 's/^#\?key2 value2$/key2 newvalue2/' config
To see that something was changed when you redirect the output, you can also print to stderr:
$ sed 's/^#\?key2 value2$/key2 newvalue2/ w /dev/stderr' config > newconfig
You can do a lot more with the correct regex, but that would be another answer for another question.
Here is an awk script that can also handle config sections:
/^\[section2\]$/ {
print
insection2=1
next
}
insection2 && /^#?key2=value2$/ {
print "key2=newvalue2"
next
}
/^\[.*\]$/ {
insection2=0
}
1
Use it like this:
$ awk -f configer.awk config
Example:
$ cat config
[section1]
key1=value1
key2=value2
[section2]
key1=value1
key2=value2
[section3]
key1=value1
key2=value2
$ awk -f configer.awk config
[section1]
key1=value1
key2=value2
[section2]
key1=value1
key2=newvalue2
[section3]
key1=value1
key2=value2
You can also add verbose output to stderr so you can see what has changed if you redirect the output:
insection2 && /^#?key2=value2$/ {
print "key2=newvalue2"
print "changed line "NR > "/dev/stderr"
next
}
A short explanation of the awk script:
The first rule looks for the [section2] section header and sets the insection2 flag to true.
The second rule looks for the key2=value2 line, but only while the insection2 flag is true; it prints the line with the new value.
The third rule looks for any other section header and resets the insection2 flag to false.
The last rule (the lone 1) is the "default rule": it just prints the line unchanged.
In pseudocode:
for every line:
if line is [section2]:
note that we are in section2
else if we are in section2 and line is key2=value2:
print modified line
else if line is any other section header:
note that we are no longer in section2
else
print line unchanged
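Finally, to approximate the conftool file key value interface from the question, the sed approach can be wrapped in a small function. This is only a sketch for the simple key value / key=value syntaxes (the name conftool comes from the question):

```shell
# conftool FILE KEY VALUE - uncomment the key if needed and replace
# everything after the separator (space or '=').
# Caveats: the key must not contain regex metacharacters, the value must
# not contain '|' (the s||| delimiter), and -i as used here assumes GNU sed.
conftool() {
    file=$1 key=$2 value=$3
    sed -i -E "s|^#?[[:space:]]*(${key})([ =]).*|\1\2${value}|" "$file"
}
```

It inherits the same limitation discussed above: every matching line is replaced, so it is unsuitable for sectioned files.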
| Command line for editing a configuration file value without an interactive editor |
1,401,581,406,000 |
On NixOS, I'm using a FHS environment to supply libraries (unixODBC and sqlite-odbc) to libreoffice.
{ pkgs ? import <nixpkgs> {} }:
( pkgs.buildFHSUserEnv {
name = "odbc-sqlite-libreoffice";
targetPkgs = pkgs: with pkgs; [libreoffice unixODBC unixODBCDrivers.sqlite];
}).env
However, this works in conjunction with the configuration file /etc/odbcinst.ini, which is generated from the environment.unixODBCDrivers option, but I can't figure out how to pass it to the chroot's filesystem.
I tried using the extraBuildCommands option:
extraBuildCommands = "ln -s /host/etc/odbcinst.ini /etc/odbcinst.ini";
but it doesn't seem to be the right way, and it results in an error: ln: failed to create symbolic link '/etc/odbcinst.ini': Permission denied
How would I go about placing the config file? I imagine there should be a way to create an environment based on a particular system configuration/generation.
If there are other ways to configure ODBC and SQLite on NixOS, they are also very welcome.
|
So I decided to look into the source, since the documentation is pretty terrible.
Apparently, if you add to buildTargets a derivation outputting files in /etc or /var folders, buildFHSUserEnv will automatically copy them to their respective places in the FHS environment.
For my situation, I wrote a simple derivation to place a config file to $out/etc/odbcinst.ini, and added it to buildTargets:
odbcinst = pkgs.stdenv.mkDerivation {
name = "odbcinst";
buildCommand = ''
mkdir -p $out/etc
cp $odbcinst $out/etc/odbcinst.ini
'';
odbcinst = pkgs.writeTextFile {
name = "odbcinst-ini";
text = ''
[SQLite]
Description = ODBC driver for SQLite
Driver = /lib/libsqlite3odbc.so
'';
};
}
And lo and behold:
[...]$ nix-shell odbc.nix
odbc-chrootenv:[...]$ ls /etc
asound.conf hosts mtab pam.d resolv.conf sudoers
default localtime nsswitch.conf passwd shadow sudoers.d
fonts login.defs odbcinst.ini profile ssl zoneinfo
group machine-id os-release profile.d static
Libreoffice recognized the file, but then it gave me some inscrutable error about not being able to read the sqlite library. So, I'm giving up and running it in an Ubuntu VM.
| NixOS: Modifying config files on a buildFHSUserEnv environment |
1,401,581,406,000 |
Similar to this question, I have some applications (Calibre, texdoc) that open PDFs with Mendeley. Opening PDFs from Thunar, Thunderbird, Firefox etc. opens evince, the expected default.
It seems that those applications use xdg-open since:
$ xdg-mime query default application/pdf
mendeleydesktop.desktop
I tried to find where this comes from but was unsuccessful; I fixed it with
xdg-mime default evince.desktop application/pdf
The question remains: where did xdg-open get the idea that Mendeley should be the default PDF viewer from?
I'm using Ubuntu 16.04 with i3 4.11. xdg-open is at version 1.1.0 rc3.
|
The question remains: where did xdg-open get the idea that Mendeley should
be the default PDF viewer from?
This is an eminently reasonable question.
Here's a somewhat long answer in three parts.
Option 1: read the documentation
For example, the FreeDesktop standard
on mimetype associations has this to say:
Association between MIME types and applications
Users, system administrators, application vendors and distributions can
change associations between applications and mimetypes by writing into a
file called mimeapps.list.
The lookup order for this file is as follows:
$XDG_CONFIG_HOME/$desktop-mimeapps.list user overrides, desktop-specific (for advanced users)
$XDG_CONFIG_HOME/mimeapps.list user overrides (recommended location for user configuration GUIs)
$XDG_CONFIG_DIRS/$desktop-mimeapps.list sysadmin and ISV overrides, desktop-specific
$XDG_CONFIG_DIRS/mimeapps.list sysadmin and ISV overrides
$XDG_DATA_HOME/applications/$desktop-mimeapps.list for completeness, deprecated, desktop-specific
$XDG_DATA_HOME/applications/mimeapps.list for compatibility, deprecated
$XDG_DATA_DIRS/applications/$desktop-mimeapps.list distribution-provided defaults, desktop-specific
$XDG_DATA_DIRS/applications/mimeapps.list distribution-provided defaults
In this table, $desktop is one of the names of the current desktop,
lowercase (for instance, kde, gnome, xfce, etc.)
Note that if the environment variables such as XDG_CONFIG_HOME and XDG_DATA_HOME are not set, they will revert to their default values.
$XDG_DATA_HOME defines the base directory relative to which user specific data files should be stored. If $XDG_DATA_HOME is either not set or empty, a default equal to $HOME/.local/share should be used.
$XDG_CONFIG_HOME defines the base directory relative to which user specific configuration files should be stored. If $XDG_CONFIG_HOME is either not set or empty, a default equal to $HOME/.config should be used.
This illustrates one of the trickiest aspects of mimetype associations:
they can be set in many different locations,
and those settings might be overridden in a different location.
However, ~/.config/mimeapps.list is the one that we should use to set our own associations.
This also matches the documentation for the GNOME desktop.
To override the system defaults for individual users, you need to create a
~/.config/mimeapps.list file with a list of MIME types for which you want
to override the default registered application.
There's also this helpful tidbit:
You can use the gio mime command to verify that the default registered
application has been set correctly:
$ gio mime text/html
Default application for “text/html”: myapplication1.desktop
Registered applications:
myapplication1.desktop
epiphany.desktop
Recommended applications:
myapplication1.desktop
epiphany.desktop
The cross-platform command to check mimetype associations is:
xdg-mime query default application/pdf
For GNOME, the command is:
gio mime application/pdf
For KDE Plasma the command is:
ktraderclient5 --mimetype application/pdf
When I look at my ~/.config/mimeapps.list file,
it looks something like this:
[Added Associations]
application/epub+zip=calibre-ebook-viewer.desktop;org.gnome.FileRoller.desktop;
<snip>
application/pdf=evince.desktop;qpdfview.desktop;okularApplication_pdf.desktop;<snip>
<snip>
[Default Applications]
application/epub+zip=calibre-ebook-viewer.desktop
<snip>
application/pdf=evince.desktop;
You can see there is only one entry for application/pdf under [Default Applications];
so evince.desktop is the default handler for PDF files.
I don't have Mendeley installed, but one way to make it the default PDF handler
is to put its desktop file here instead of evince.desktop.
Notice we're trusting the documentation here that ~/.config/mimeapps.list
is the correct file; we don't actually know that for sure.
We'll come back to this in part 3.
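As a quick sanity check on that assumption, the [Default Applications] lookup is easy to reproduce with standard tools. This sketch runs against a fabricated copy of the file so it is self-contained; on a real system you would point it at ~/.config/mimeapps.list:

```shell
# Create a made-up mimeapps.list so the example is self-contained.
cat > /tmp/mimeapps.list.sample <<'EOF'
[Added Associations]
application/pdf=evince.desktop;qpdfview.desktop;

[Default Applications]
application/pdf=evince.desktop;
text/html=firefox.desktop
EOF

# Print the default handler for application/pdf, reading only the
# [Default Applications] section.
awk -F= '
  /^\[/ { in_defaults = ($0 == "[Default Applications]") }
  in_defaults && $1 == "application/pdf" { print $2 }
' /tmp/mimeapps.list.sample
```

For the sample above this prints evince.desktop;; whatever it prints on your machine is the desktop file the default-application machinery should favour.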
Option 2: read the source code.
xdg-open is a shell script that behaves differently
depending on the value of $XDG_CURRENT_DESKTOP.
You can see how this works here:
if [ -n "${XDG_CURRENT_DESKTOP}" ]; then
case "${XDG_CURRENT_DESKTOP}" in
# only recently added to menu-spec, pre-spec X- still in use
Cinnamon|X-Cinnamon)
DE=cinnamon;
;;
ENLIGHTENMENT)
DE=enlightenment;
;;
# GNOME, GNOME-Classic:GNOME, or GNOME-Flashback:GNOME
GNOME*)
DE=gnome;
;;
KDE)
DE=kde;
;;
Since you are using i3,
the DE variable will be set to generic and the script will call
its open_generic() function,
which in turn will call either run-mailcap or mimeopen
depending on what is installed.
Note that you can get some extra information
by setting the XDG_UTILS_DEBUG_LEVEL, e.g.
XDG_UTILS_DEBUG_LEVEL=4 xdg-open ~/path/to/example.pdf
However, the debug information is not that informative for our purposes.
Option 3: trace the opened files.
From the previous investigations,
we know that mimetype associations are stored in files somewhere on the hard drive,
not e.g. as environment variables or dconf settings.
This means we don't have to rely on documentation,
we can use strace to determine what files the xdg-open command actually opens.
For the application/pdf mimetype, we can use this:
strace -f -e trace=open,openat,creat -o strace_log.txt xdg-open /path/to/example.pdf
The -f is to trace child processes since xdg-open doesn't do everything by itself.
The -e trace=open,openat,creat is to trace just the syscalls open, openat, and creat.
These syscall names come from the man page (man 2 open), also available online.
The -o strace_log.txt is to save to a log file to inspect later.
The output is somewhat voluminous,
but we can ignore the lines that say ENOENT (No such file or directory)
since these files do not exist.
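For instance, once the log exists, the successful opens can be boiled down with grep and cut. The log lines below are fabricated so the example is self-contained:

```shell
# Fabricated strace output standing in for strace_log.txt.
cat > /tmp/strace_log.sample <<'EOF'
12345 openat(AT_FDCWD, "/home/user/.config/mimeapps.list", O_RDONLY) = 3
12345 openat(AT_FDCWD, "/usr/share/applications/defaults.list", O_RDONLY) = -1 ENOENT (No such file or directory)
EOF

# Drop the failed opens, keep just the path of each file actually read.
grep -v ENOENT /tmp/strace_log.sample | cut -d'"' -f2
```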
You can also use other commands such as xdg-mime or gio mime.
I found that gio mime read these files in my home directory:
~/.local/share//mime/mime.cache
~/.config/mimeapps.list
~/.local/share/applications
~/.local/share/applications/mimeapps.list
~/.local/share/applications/defaults.list
~/.local/share/applications/mimeinfo.cache
It also read these system-level files:
/usr/share/mime/mime.cache
/usr/share/applications/defaults.list
/usr/share/applications/mimeinfo.cache
/var/lib/snapd/desktop/applications
/var/lib/snapd/desktop/applications/mimeinfo.cache
To look for application/pdf associations, this should do the trick:
grep 'application/pdf' ~/.local/share//mime/mime.cache ~/.config/mimeapps.list ~/.local/share/applications ~/.local/share/applications/mimeapps.list ~/.local/share/applications/defaults.list ~/.local/share/applications/mimeinfo.cache /usr/share/mime/mime.cache /usr/share/applications/defaults.list /usr/share/applications/mimeinfo.cache /var/lib/snapd/desktop/applications /var/lib/snapd/desktop/applications/mimeinfo.cache | less
From here you can see where Mendeley's desktop file is getting added.
I have some applications (Calibre, texdoc) open PDFs with Mendeley. Opening
PDFs from Thunar, Thunderbird, Firefox etc. opens evince, the expected
default.
Firefox and Thunderbird have their own default application settings.
I believe texdoc relies on xdg-open.
I'm not sure about Thunar,
but I doubt it is relying on xdg-open.
So ultimately this is probably due to:
xdg-open having different fallbacks than other applications on i3; and
Mendeley's installer adding mimetype associations in some files but not others.
Addendum: xdg-open should not use the mimeinfo.cache file on i3,
but if you need to regenerate it, this is the command to use:
update-desktop-database ~/.local/share/applications
and here is the documentation:
Caching MIME Types
To make parsing of all the desktop files less costly, a
update-desktop-database program is provided that will generate a cache
file. The concept is identical to that of the 'update-mime-database' program
in that it lets applications avoid reading in (potentially) hundreds of
files. It will need to be run after every desktop file is installed. One
cache file is created for every directory in $XDG_DATA_DIRS/applications/,
and will create a file called $XDG_DATA_DIRS/applications/mimeinfo.cache.
https://specifications.freedesktop.org/desktop-entry-spec/0.9.5/ar01s07.html
Related:
https://askubuntu.com/questions/939027/pdf-book-opens-in-mendeley-when-openned-from-calibre
https://askubuntu.com/questions/992582/how-do-mimeinfo-cache-files-relate-to-mimeapps-list
How to make xdg-open follow mailcap settings in Debian
xdg-open opens a different application to the one specified by xdg-mime query
| Why does xdg-open use Mendeley as default for PDFs? |
1,401,581,406,000 |
I want to do a backup of my cups configuration to transfer it to a new system. But where does cups actually store the printer settings? I've watched /etc/cups/ but the only thing that happened when adding a new printer was that the PPD was added to /etc/cups/ppd/.
Edit: A remark, the configuration actually is in /etc/cups/printers.conf but it was written delayed after actually adding the printer in the web interface. That was originally the reason I couldn't find it. So make sure everything was written, before doing a backup.
|
In Debian Jessie, the whole configuration is in /etc/cups:
classes.conf
interfaces
raw.convs
subscriptions.conf
cups-browsed.conf
ppd
raw.types
subscriptions.conf
cupsd.conf
printers.conf
snmp.conf
cups-files.conf
ssl
Is your system "non-linux"?
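Since printers.conf is plain text, you can inspect it directly. As an illustration (the sample content below is made up; the real file lives in /etc/cups/), this lists the configured printer names:

```shell
# Made-up printers.conf standing in for /etc/cups/printers.conf.
cat > /tmp/printers.conf.sample <<'EOF'
<Printer office_laser>
Info Office laser printer
DeviceURI ipp://192.168.1.50/ipp/print
</Printer>
<DefaultPrinter kitchen_inkjet>
Info Kitchen inkjet
DeviceURI usb://ACME/Inkjet
</DefaultPrinter>
EOF

# Each printer is declared by a <Printer name> or <DefaultPrinter name> tag.
awk -F'[<> ]' '/^<(Default)?Printer /{print $3}' /tmp/printers.conf.sample
```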
| Where does cups store its configuration? |
1,401,581,406,000 |
I would like to protect my laptop from thieves, so I want to configure Prey, but I can't find any tutorial for Debian/Ubuntu users. I tried to go to the project's control panel, but whenever I try to add a device, the server redirects me to the download page. I have no idea how to proceed. I have installed Prey from the Debian Wheezy repository and found the configuration file, but I have no idea how to configure it.
|
Go to https://panel.preyproject.com/login and register for an account.
A free account will allow you to track three devices.
After registering and logging in, you will find an api key on your account
page. Add the api key to the config file /etc/prey/config.
Then just be patient.
The default install on Debian (jessie in my case)
runs every 20 minutes. You can change this in /etc/cron.d/prey if you wish.
But if you just wait, you will find the device announced on your Prey page.
The device key will be filled in automatically in the config file.
| Prey anti-theft configuration on Debian |
1,401,581,406,000 |
I'm trying to send mail from shell(GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)) using;
mail [email protected]
After I complete the command, the mail doesn't show up in the mailbox. What could be wrong, and how can I check whether my configuration is correct? Thanks.
|
Like every unix program that occasionally has cause to send email notifications, mail assumes that there is a functioning MTA on localhost that is 1) capable of accepting mail and 2) knows how to pass it on.
To find out what mail server you're running, try telnet localhost 25 and look at the identifier string.
The command mailq, if it exists for you, will show you what messages are currently in the local mail server's queue, possibly with an explanation as to why it hasn't been passed on to its destination yet.
In addition, most distributions by default configure MTAs and syslog to report mail log messages to either /var/log/mail.log or similar. Look in /var/log/ for any file that looks viable, and grep it for 'bar.com'
Without more information as to what's going on it's hard to offer better advice than this, sorry.
| How to send mail? |
1,401,581,406,000 |
As a rule in the Debian 10 hardening guide, and various other audit guides of the Center for Internet Security (CIS), setting the use_pty sudoers option is recommended for the following rationale:
Attackers can run a malicious program using sudo which would fork a background process that remains even when the main program has finished executing.
In the sudoers man page, it is described that running a background process that retains access to the user's terminal after the main process has finished executing is no longer possible when the commands are run in a separate pseudo-terminal.
I don't really grasp the nuance here.
What does it mean to run the sudo command in a separate pseudo-terminal, and why is the background process attack no longer possible when this flag is set?
What other ramifications does setting use_pty have?
Thank you!
|
When you're using the command-line through a serial port or other character-oriented device, you're doing so through a terminal (tty). Any programs connected to you through that terminal have a connection to that terminal via one or more open file descriptors (typically, their stdin, stdout, and stderr "files" at fd numbers 0, 1, and 2, respectively).
Terminals are actually very complex things that do far more than provide input characters to programs and receive output characters from them. The terminal also accepts commands from any program connected to it (see tcsetattr()). For example, the terminal's default input processing mode is to echo your keystrokes back to you and buffer up your input, allowing you edit the line before you hit ENTER to pass the results to the program. This mode can be changed by any connected program. For example, echoing can be turned on or off and line editing features can be disabled or enabled. ...There are a lot of options. Programs interacting with you through a terminal can also ask for the terminal window dimensions, allowing programs to properly format their output in your window.
A pseudoterminal (pty) is a sort of fake terminal that looks exactly like a terminal from the point of view of the program connected to it. But, instead of the other side being connected to a hardware device, it is connected to another program via a "pty master".
When sudo runs a child process in a separate pty, it creates a pty and pty master and connects the child process to the new pty instead of connecting it directly to the terminal that sudo is itself connected to. While the child process is still alive, sudo relays input, output, and control commands between the child process and sudo's own terminal via the master pty. But, when the child process terminates, sudo stops relaying and closes the pty master.
If the child process forks itself, those grandchildren inherit their parent's file descriptors, including connections to the tty or pty. If sudo had connected its child to your terminal directly, then the grandchildren would also be connected to your terminal, and they could continue to access your terminal even though their parent had terminated and sudo had exited too. But because sudo used a pty instead, and because the master pty was closed, any grandchildren will be unable to interact with your terminal after sudo terminates.
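A toy illustration of the underlying mechanism, file-descriptor inheritance (this is not sudo itself, just a demonstration that a backgrounded child keeps its parent's output connection):

```shell
# A backgrounded "grandchild" inherits the parent's stdout and can still
# write to it after the parent has finished its own output. With use_pty,
# that inherited descriptor would point at a pty whose master sudo closes,
# so the late write would go nowhere. (The wait is only here so the
# example captures all output; a real attacker's parent would simply exit.)
{
  ( sleep 1; echo "grandchild still holds the terminal" ) &
  echo "parent finished"
  wait
} > /tmp/pty_demo.out

cat /tmp/pty_demo.out
```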
| How does the use_pty sudoers option prevent a persistence attack? |
1,401,581,406,000 |
In order to make the statically assigned IP work, I have already modified the file /etc/sysconfig/network-scripts/ifcfg-eth0 as follows:
DEVICE=eth0
IPADDR=10.33.17.143
NETMASK=255.255.255.0
BOOTPROTO=static
ONBOOT=yes
Any other configuration files that I might need to take care of before the static IP could be used? I am trying to change the network settings from the default DHCP to static IP.
|
You may also need to set a default route (often known as your default gateway) in /etc/sysconfig/network-scripts/route-eth0 as follows:
default via 1.2.3.4
Just make sure you substitute the correct Default Gateway for 1.2.3.4 otherwise Bad Things Will Happen... ;)
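Alternatively, on RHEL-family systems the gateway can be given with a GATEWAY= line, either per-interface in the ifcfg file or system-wide in /etc/sysconfig/network (the address below is an assumed example):

```
# In /etc/sysconfig/network-scripts/ifcfg-eth0 (per interface),
# or in /etc/sysconfig/network (system-wide default):
GATEWAY=10.33.17.1
```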
| Changing Redhat Network Settings From DHCP to STATIC IP Via Configuration Files |
1,330,601,237,000 |
I am working on an audio stream between two virtual servers. For that, I have set up a dummy soundcard (as the vservers don't have a hw card) using modprobe snd-dummy.
That seems to work fine - I am able to tweak volume levels using alsamixer. Unfortunately, I am not able to record any playback. I used arecord -r 48000 -c 1 to see what's going on, and the output is quite creepy. It contains paths:
Aufnahme: WAVE 'stdin' : Unsigned 8 bit, Rate: 48000 Hz, mono
RIFF$WAVEfmt »data8¶5m8¶5m {D {DXCCC± C²5m C@P {D°}DxC@C0C! C°~DPÀ}DІCÐ{DÐ{DÀ{DQ`C C `D(²5m@0re/alsa/bluetooth.confáЂC²5m 0°|Dre/alsa/pulse.confЂC²5mbluetooth.confaC CiceAà|D|D |Dr/.asoun P°CCX|DÐ{DÀ{DÑ CCÿÿÿÿÿÿÿÿèC`CPC`C C |D¸CCCA CC CP@Cà~D~DC8¶5m8¶5m {D {DXCCC± C²5m C@P {D°}DxC@C0C! C°~DPÀ}DІCÐ{DÐ{DÀ{DQ`C C `D(²5m@0re/alsa/bluetooth.confáЂC²5m 0°|Dre/alsa/pulse.confЂC²5mbluetooth.confaC CiceAà|D|D |Dr/.asoun P°CCX|DÐ{DÀ{DÑ CCÿÿÿÿÿÿÿÿèC`CPC`C C |D¸CCCA CC CP@Cà~D~DC8¶5m8¶5m {D {DXCCC± C²5m C@P {D°}DxC@C0C! C°~DPÀ}DІCÐ{DÐ{
And so on...
There mentions of [..]/alsa/pulse.conf, bluetooth.conf and .asoun[drc?] are really really strange.
Does anyone have a clue what is going on here? Have I configured the soundcard the wrong way, or is there anything I missed?
|
arecord will record from your sound card, which is a dummy, so it's not surprising that it contains rubbish. You want to record from the network. There are lots of ways to do that, but snd-dummy won't help. You could try PulseAudio - it has good support for network sound - or you could use JACK audio: a bit harder to set up, but less confusing and low latency.
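If the goal is specifically to capture what other programs play, ALSA's loopback module is also worth a look; unlike snd-dummy, it is designed for exactly that. (The device names below follow the usual hw:Loopback convention, but check aplay -l on your system.)

```
# Load the loopback card: playback on one subdevice appears as capture
# on the mirrored one.
modprobe snd-aloop

# Play into one side...
aplay -D hw:Loopback,0,0 test.wav &

# ...and record it from the other.
arecord -D hw:Loopback,1,0 -f cd capture.wav
```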
| ALSA dummy device - how to configure? |
1,330,601,237,000 |
I fear I may have to revert to system defaults if I can't get this sorted out.
I'm trying to set various system configurations for more robust ext4 for a single-user desktop environment. Trying to assign desired configuration settings where they will take effect properly.
I understand that some of these should be included in the file mke2fs.conf so that the filesystems are initially created with those proper settings. But I will address that later, keeping the distro default file for the following.
I understand that the EXT4 options I wanted could be set in /etc/fstab. This following entry shows what I would typically want:
UUID=00000000-0000-0000-0000-000000000000 /DB001_F2 ext4 defaults,nofail,data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard 0 0
where each DB001_F{p} is a partition on the root disk ( p = [2-8] ).
I repeat those options here, in the same sequence as a list, in case that makes it more easy to assimilate:
defaults
nofail
data=journal
journal_checksum
journal_async_commit
commit=15
errors=remount-ro
journal_ioprio=2
block_validity
nodelalloc
data_err=ignore
nodiscard
Mounting during boot, the below syslog shows all as reporting what I believe to be acknowledged acceptable settings:
64017 Sep 4 21:04:35 OasisMega1 kernel: [ 21.622599] EXT4-fs (sda7): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64018 Sep 4 21:04:35 OasisMega1 kernel: [ 21.720338] EXT4-fs (sda4): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64019 Sep 4 21:04:35 OasisMega1 kernel: [ 21.785653] EXT4-fs (sda8): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64021 Sep 4 21:04:35 OasisMega1 kernel: [ 22.890168] EXT4-fs (sda12): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64022 Sep 4 21:04:35 OasisMega1 kernel: [ 23.214507] EXT4-fs (sda9): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64023 Sep 4 21:04:35 OasisMega1 kernel: [ 23.308922] EXT4-fs (sda13): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64024 Sep 4 21:04:35 OasisMega1 kernel: [ 23.513804] EXT4-fs (sda14): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
But mount shows that some drives are not reporting as expected, even after reboot, and this is inconsistent as seen below:
/dev/sda7 on /DB001_F2 type ext4 (rw,relatime,nodelalloc,journal_checksum,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda8 on /DB001_F3 type ext4 (rw,relatime,nodelalloc,journal_checksum,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda9 on /DB001_F4 type ext4 (rw,relatime,nodelalloc,journal_checksum,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda12 on /DB001_F5 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda13 on /DB001_F6 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda14 on /DB001_F7 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda4 on /DB001_F8 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal)
I read somewhere about a limitation regarding the length of the option string in fstab, so I used tune2fs to pre-set some parameters at a lower level. Those applied via tune2fs are:
journal_data,block_validity,nodelalloc
which is confirmed when using tune2fs -l:
Default mount options: journal_data user_xattr acl block_validity nodelalloc
With that in place, I modified the fstab for entries to show as
UUID=00000000-0000-0000-0000-000000000000 /DB001_F2 ext4 defaults,nofail,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,data_err=ignore,nodiscard 0 0
I did a umount for all my DB001_F? (/dev/sda*), then I did a mount -av, which reported the following:
/ : ignored
/DB001_F2 : successfully mounted
/DB001_F3 : successfully mounted
/DB001_F4 : successfully mounted
/DB001_F5 : successfully mounted
/DB001_F6 : successfully mounted
/DB001_F7 : successfully mounted
/DB001_F8 : successfully mounted
No errors reported for the options string for each of the drives.
I tried using journal_checksum_v3, but with that setting mount -av failed for all of the filesystems. I used the mount command to see what was reported.
I also did a reboot and repeated that mount again for these reduced settings, and mount shows again that the drives are not reporting as expected, and this is still inconsistent as seen here:
/dev/sda7 on /DB001_F2 type ext4 (rw,relatime,journal_checksum,journal_async_commit,commit=15)
/dev/sda8 on /DB001_F3 type ext4 (rw,relatime,journal_checksum,journal_async_commit,commit=15)
/dev/sda9 on /DB001_F4 type ext4 (rw,relatime,journal_checksum,journal_async_commit,commit=15)
/dev/sda12 on /DB001_F5 type ext4 (rw,relatime,journal_async_commit,commit=15)
/dev/sda13 on /DB001_F6 type ext4 (rw,relatime,journal_async_commit,commit=15)
/dev/sda14 on /DB001_F7 type ext4 (rw,relatime,journal_async_commit,commit=15)
/dev/sda4 on /DB001_F8 type ext4 (rw,relatime,journal_async_commit,commit=15)
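As a cross-check, the kernel's own record of each mount's active options can be read from /proc/mounts, independent of what mount(8) chooses to print. For example (using / here so the command works anywhere; substitute /DB001_F2 etc.):

```shell
# /proc/mounts is maintained by the kernel and lists the options in
# effect for every mount.
grep ' / ' /proc/mounts
```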
Since these are all ext4 filesystems, all on the same physical drive, I don't understand why journal_checksum is not uniformly actioned! I also find it interesting that there is a dividing line in terms of the two classes of behaviour, since the order listed above is the order specified in the fstab (according to /DB001_F?), which presumably is the mounting order ... so what "glitch" is causing the "downgrading" of the remaining mount actions?
My thinking (possibly baseless) is that some properties might be better set at filesystem-creation time, and that this would make them more "persistent/effective" than otherwise. But when I tried to shift some of the property settings by pre-defining them in mke2fs.conf, mkfs.ext4 failed again, I suspect because the option string is restricted to a limited length (64 characters?). So ... I have backed away from making any changes to mke2fs.conf.
Ignoring the mke2fs.conf issue for now, and focusing on the fstab and tune2fs functionality, can someone please explain to me what I am doing wrong that is preventing mount from correctly reporting what is the full range of settings currently in effect?
At this point, I don't know what I can rely on to provide the actual real state of the ext4 behaviour and am considering simply reverting to distro defaults, which leaves me wanting.
Is it possible that all is well and that the system is simply not reporting correctly? I am not sure that I could comfortably accept that viewpoint. It is counter-intuitive.
Can someone please assist?
Environment
UbuntuMATE 20.04 LTS
Linux OasisMega1 5.4.0-124-generic #140-Ubuntu SMP Thu Aug 4 02:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
RAM = 4GB
DSK = 2TB (internal, 8 data partitions, 3 1GB swap partitions) [ROOT]
DSK = 500GB (internal, 2 data partitions, 1 1GB swap partitions)
DSK = 4TB (external USB, 16 data partitions) [BACKUP drive]
This is what is being reported by debugfs:
Filesystem features:
has_journal
ext_attr
resize_inode
dir_index
filetype
needs_recovery
extent
flex_bg
sparse_super
large_file
huge_file
dir_nlink
extra_isize
metadata_csum
Not very useful for additional insights into the problem.
debugfs shows following supported features:
debugfs 1.45.5 (07-Jan-2020)
Supported features: (...snip...) journal_checksum_v2 journal_checksum_v3
Noteworthy is that debugfs is showing either journal_checksum_v2 or journal_checksum_v3 available but not the journal_checksum which is referenced in the manual pages.
Does that mean that I should be using v2 or v3, instead of journal_checksum?
|
Given the discussion that has transpired as comments on my original post, I am prepared to conclude that the many changes to the Kernel over the 2+ years since my original install of the UbuntuMATE 20.04 LTS distro are the source of the differences in behaviour observed by the set of 8 ext4 filesystems that were created at different times, notwithstanding the fact that they reside on the same physical device.
Consequently, the only way to ensure that all filesystems of a given fstype (i.e. ext4) react identically to mounting options, tune2fs options and behave/report identically by debuge2fs or mount commands, is to ensure that they are created with the same frozen version of an OS Kernel and the various filesystem utilities that are used to create and tune those filesystems.
So, to answer my original question, there is no problem with the filesystems reporting differently because they are reporting correctly, each for their own historical context leading to their current state.
Looking forward to my pending upgrade to UbuntuMATE 22.04 LTS (why I was digging into all this to begin with), to avoid the discrepencies, because the install disk is not the latest for the Kernel or utilities, my defined process must be to:
upgrade to newer OS,
reboot,
apply all updates,
create backup image of the upgraded+updated OS now residing on the root partition,
re-create root partition with latest Kernel and utilities (using a duplicate fully-updated OS residing on secondary internal disk, which is the reason for existence of my 500 GB drive, namely testing, proving, confirming final desired install before rolling over into "production"),
recover the primary fully-updated OS from backup image to its proper ROOT partition,
reboot, then
backup all other partitions on the primary disk, recreate those partitions, then restore the data for each of those partitions.
Only in this manner can all the partitions be created as "equals" with the latest and best offered at the one snapshot in time. Otherwise, the root partition is out of step with all other partitions that are created post-updates following the distro installation.
Also, having a script similar to the one I created ensures the required actions will be applied uniformly, avoiding any possible errors that might slip in from the tedium when performing it manually many times.
For those who want to be able to manage and review these options in a consistent fashion with a script, here is the script I created for myself:
#!/bin/sh
####################################################################################
###
### $Id: tuneFS.sh,v 1.2 2022/09/07 01:43:18 root Exp $
###
### Script to set consistent (local/site) preferences for filesystem treatment at boot-time or mounting
###
####################################################################################
TIMESTAMP=`date '+%Y%m%d-%H%M%S' `
BASE=`basename "$0" ".sh" `
###
### These variables will document hard-coded 'mount' preferences for filesystems
###
BOOT_MAX_INTERVAL="-c 10" ### max number of boots before fsck [10 boots]
TIME_MAX_INTERVAL="-i 2w" ### max calendar time between boots before fsck [2 weeks]
ERROR_ACTION="-e remount-ro" ### what to do if error encountered
#-m reserved-blocks-percentage
###
### This OPTIONS string should be updated manually to document
### the preferred and expected settings to be applied to ext4 filesystems
###
OPTIONS="-o journal_data,block_validity,nodelalloc"
ASSIGN=0
REPORT=0
VERB=0
SINGLE=0
while [ $# -gt 0 ]
do
case ${1} in
--default ) REPORT=0 ; ASSIGN=0 ; shift ;;
--report ) REPORT=1 ; ASSIGN=0 ; shift ;;
--force ) REPORT=0 ; ASSIGN=1 ; shift ;;
--verbose ) VERB=1 ; shift ;;
--single ) SINGLE=1 ; shift ;;
* ) echo "\n\t Invalid parameter used on the command line. Valid options: [ --default | --report | --force | --single | --verbose ] \n Bye!\n" ; exit 1 ;;
esac
done
workhorse()
{
case ${PARTITION} in
1 )
DEVICE="/dev/sda3"
OPTIONS=""
;;
2 )
DEVICE="/dev/sda7"
;;
3 )
DEVICE="/dev/sda8"
;;
4 )
DEVICE="/dev/sda9"
;;
5 )
DEVICE="/dev/sda12"
;;
6 )
#UUID="0d416936-e091-49a7-9133-b8137d327ce0"
#DEVICE="UUID=${UUID}"
DEVICE="/dev/sda13"
;;
7 )
DEVICE="/dev/sda14"
;;
8 )
DEVICE="/dev/sda4"
;;
esac
PARTITION="DB001_F${PARTITION}"
PREF="${BASE}.previous.${PARTITION}"
reference=`ls -t1 "${PREF}."*".dumpe2fs" 2>/dev/null | grep -v 'ERR.dumpe2fs'| tail -1 `
if [ ! -s "${PREF}.dumpe2fs.REFERENCE" ]
then
mv -v ${reference} ${PREF}.dumpe2fs.REFERENCE
fi
reference=`ls -t1 "${PREF}."*".verify" 2>/dev/null | grep -v 'ERR.verify'| tail -1 `
if [ ! -s "${PREF}.verify.REFERENCE" ]
then
mv -v ${reference} ${PREF}.verify.REFERENCE
fi
BACKUP="${BASE}.previous.${PARTITION}.${TIMESTAMP}"
rm -f ${PREF}.*.tune2fs
rm -f ${PREF}.*.dumpe2fs
### reporting by 'tune2fs -l' is a subset of that from 'dumpe2fs -h'
if [ ${REPORT} -eq 1 ]
then
### No need to generate report from tune2fs for this mode.
( dumpe2fs -h ${DEVICE} 2>&1 ) | awk '{
if( NR == 1 ){ print $0 } ;
if( index($0,"revision") != 0 ){ print $0 } ;
if( index($0,"mount options") != 0 ){ print $0 } ;
if( index($0,"features") != 0 ){ print $0 } ;
if( index($0,"Filesystem flags") != 0 ){ print $0 } ;
if( index($0,"directory hash") != 0 ){ print $0 } ;
}'>${BACKUP}.dumpe2fs
echo "\n dumpe2fs REPORT [$PARTITION]:"
cat ${BACKUP}.dumpe2fs
else
### Generate report from tune2fs for this mode but only as sanity check.
tune2fs -l ${DEVICE} 2>&1 >${BACKUP}.tune2fs
( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.dumpe2fs
if [ ${VERB} -eq 1 ] ; then
echo "\n tune2fs REPORT:"
cat ${BACKUP}.tune2fs
echo "\n dumpe2fs REPORT:"
cat ${BACKUP}.dumpe2fs
fi
if [ ${ASSIGN} -eq 1 ]
then
tune2fs ${BOOT_MAX_INTERVAL} ${TIME_MAX_INTERVAL} ${ERROR_ACTION} ${OPTIONS} ${DEVICE}
rm -f ${PREF}.*.verify
( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.verify
if [ ${VERB} -eq 1 ] ; then
echo "\n Changes:"
diff ${BACKUP}.dumpe2fs ${BACKUP}.verify
fi
else
if [ ${VERB} -eq 1 ] ; then
echo "\n Differences:"
diff ${BACKUP}.tune2fs ${BACKUP}.dumpe2fs
fi
rm -f ${BACKUP}.verify
fi
fi
}
if [ ${SINGLE} -eq 1 ]
then
for PARTITION in 2 3 4 5 6 7 8
do
echo "\n\t Actions only for DB001_F${PARTITION} ? [y|N] => \c" ; read sel
if [ -z "${sel}" ] ; then sel="N" ; fi
case ${sel} in
y* | Y* ) DOIT=1 ; break ;;
* ) DOIT=0 ;;
esac
done
if [ ${DOIT} -eq 1 ]
then
workhorse
fi
else
for PARTITION in 2 3 4 5 6 7 8
do
workhorse
done
fi
exit 0
For those who are interested, there is a modified/expanded script in a follow-on posting.
Thank you all for your input and feedback.
| OS seems to apply ext4 filesystem options in arbitrary fashion |
1,330,601,237,000 |
I am using RHEL6, but I could not find the Ethernet interface configuration file ifcfg-eth0 under /etc/sysconfig/network-scripts. Issuing ifconfig -a shows that eth0 does exist. Could it be that the configuration file is hidden or saved in some other directory? If not, could I create such a file in the default directory(i.e., /etc/sysconfig/network-scripts) so that it would serve as the default configuration file?
|
You could find an example in /usr/share/doc*/initscript*, can't remember the exact name, but I'll provide a comprehensive example here:
All fields are fairly easy to understand
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.0.1.0
NETMASK=255.255.255.0
IPADDR=10.0.1.27
Docs here
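For a static configuration you will usually also want a gateway and DNS entries; a slightly fuller sketch (the addresses are assumed examples):

```
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.1.27
NETMASK=255.255.255.0
GATEWAY=10.0.1.1
DNS1=10.0.1.2
```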
| Where could I find the ifcfg-eth0 file? |
1,330,601,237,000 |
We have the following line in a file on Linux:
discovery.uri=http://master.navada.com:8800
we want to add the word koko before master, so I just do:
sed -i 's/master/kokomaster/' file
which gives:
discovery.uri=http://kokomaster.navada.com:8800
but what we want is to add koko only if koko isn't already present before master.
For example, the next time that we run
sed -i 's/master/kokomaster/' file
the line will be:
discovery.uri=http://kokokokomaster.navada.com:8800
|
You can replace a bit more to avoid this problem:
sed -i sX/masterX/kokomasterX file
This replaces “/master”, so the next time you run it, nothing will be replaced, since “/kokomaster” doesn’t match.
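Another way to make the substitution idempotent is a negative address guard, skipping any line that already contains "kokomaster" (note this guards per line, which is fine here since the value sits on its own line):

```shell
printf 'discovery.uri=http://master.navada.com:8800\n' > /tmp/file.sample

sed -i '/kokomaster/!s/master/kokomaster/' /tmp/file.sample
sed -i '/kokomaster/!s/master/kokomaster/' /tmp/file.sample  # second run is a no-op

cat /tmp/file.sample
```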
| sed + add word before string only if not exists |
1,330,601,237,000 |
Ok this may not be a very concrete question, and is perhaps subject to taste, yet I'm struggling to get this right so here it goes.
I have a computer.
This computer has linux on it (thank god). Arch Linux to be specific (with awesome wm).
I am the single user on this computer.
As to good practice I've set up two users: the root user and the everyday use romeovs user. This way I only use permissions when needed (using sudo for example).
Over the years I have been pimping out my software suite, adding a bunch of applications to this computer. Notably: vim, git, mpc, mutt, calcurse, ufw, ...
Now here is the rub: which of these applications' config files do I use? All of them supply an /etc-based global configuration file that affects all users, as well as local ~/.config (or, sadly, ~/) config options.
I've always worked using the local configuration setups, because this felt more natural. But as I grow more familiar with my computer, I feel this somehow lacks elegance. The contra's to this approach are:
discrepancy when switching to the root user, even with sudo (e.g. when using vim)
will not always work, e.g. daemons loaded from the Arch Linux DAEMONS
array are run by the root user and thus don't pick up local user configs.
major $HOME directory clutter. Sadly there are very few apps that adhere to the $XDG_CONFIG_HOME philosophy.
Benefits are:
stuff is local, which feels more in the lines of the permissions splitting between root and romeovs.
quick and easy access to the files; no need to sudo to edit them.
easier for git tracking of the config files.
somehow feels safer: a user can screw stuff up without messing with the machine's global settings.
it is more "a-package-update-may-overwrite-my-config"-proof
Let's get concrete:
What is the de-facto standard to split configuration on a single user machine, especially for the system maintainer (single-user)?
|
One day you're going to change your computer, or to give someone else (a family member, for example) an account on your computer.
If you want to keep a setting on your next computer, put it in your home directory.
If the other person might want a different setting, put it in your home directory.
If the setting is computer-dependent and not user-dependent, put it in /etc.
Your arguments against putting configuration files in the home directory don't really hold water:
sudo keeps the HOME environment variable (unless you've told it not to). So your programs will keep reading their settings from your home directory.
Daemons are not supposed to read your personal settings. Daemons are normally configured through files in /etc, not through environment variables or through files in your home directory.
$HOME is supposed to have a lot of dot files. That's why ls doesn't show them.
| configuration ethics (esthetics): /etc vs $HOME |
1,330,601,237,000 |
After moving the hard drive of a makeshift server to another compatible hardware (64-bit, same processor "generation", laptop->desktop) configuration, networking fails to initiate.
Specifically:
ifconfig only shows lo
sudo service networking restart shows:
stop: unknown instance:
networking stop/waiting
quite obviously something in the system and/or kernel is misconfigured for the new hardware setup.
How to detect what exactly is wrong and enable eth0?
The system in question is an Ubuntu 14.04 Server distro, but I suspect the problem is general.
|
One of the things to look out for when cloning Linux systems is udev's persistent network device naming rules.
udev may create and update the file /etc/udev/rules.d/70-persistent-net.rules to map MAC addresses to interface names. It does this with the script /lib/udev/write_net_rules. Each MAC address (with some exceptions; see /lib/udev/rules.d/75-persistent-net-generator.rules) is mapped to an interface named (by default) ethn, where n starts at 0 and goes up. An example:
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.
# PCI device 0x8086:0x100f (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:de:ad:be:ef",ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
Entries can be edited if you want to change the mapping, and are not automatically removed from this file. So interface names are stable even when you add additional NICs or remove unneeded NICs.
The flip side is, as you discovered, if you copy this file to another system via cloning, the new hardware's interfaces will be added to this file, using the first available interface name, such as eth1, eth2, etc., and eth0 will be referencing a MAC address that does not exist on the new system.
In your case, in which you transplanted the disks, you can comment out the lines containing your old hardware's interfaces, and edit the erroneous entries added due to the new hardware to have the desired interface names (or just remove them), and reboot. I initially recommended commenting them out so that when you move the disks back to the old hardware it's easy to restore, but @Guido van Steen provided a simpler solution: mv the 70-persistent-net.rules file to something else (but be careful about the new name if it's in the same directory!) and reboot.
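Commenting out the stale entries can itself be scripted; here is a self-contained sketch that operates on a stand-in copy of the file (the MAC addresses are made up):

```shell
# Build a tiny stand-in for /etc/udev/rules.d/70-persistent-net.rules.
cat > 70-persistent-net.rules <<'EOF'
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:0c:de:ad:be:ef", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:0c:ca:fe:ba:be", NAME="eth1"
EOF
# Comment out the rule belonging to the old hardware's NIC, freeing "eth0".
sed -i '/00:0c:de:ad:be:ef/s/^/# /' 70-persistent-net.rules
grep '^# ' 70-persistent-net.rules
```

On the real system you would run the sed command against /etc/udev/rules.d/70-persistent-net.rules as root and then reboot.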
| No eth0 after HD transplant |
1,330,601,237,000 |
I'd like to reset the timezone through editing /etc/timezone. However, after I was done editing and saved the file, the system time did not change accordingly.
On the other hand, when I take advantage of the command dpkg-reconfigure tzdata to change the timezone, the time will change immediately. Plus, the /etc/timezone file is modified accordingly.
What steps am I missing after editing and saving the config file in order for the new time to take effect?
|
Take a look at /var/lib/dpkg/info/tzdata.postinst, which I think is what is being run when dpkg-reconfigure tzdata is called.
Note in particular the following command, which runs after /etc/timezone has been updated.
cp -f /usr/share/zoneinfo/$AREA/$ZONE /etc/localtime.dpkg-new && \
mv -f /etc/localtime.dpkg-new /etc/localtime
So, the file /etc/localtime needs to be updated. I haven't tried it, but my guess is that is an important step in making the timezone change. It is unclear if the tzdata maintainer expects you to make this change manually if you have edited /etc/timezone yourself.
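For illustration, here are the equivalent manual steps run against a scratch directory instead of /etc (Europe/Paris is an arbitrary example; on a real system you would target /etc and need root):

```shell
# Mimic tzdata.postinst: copy the zoneinfo file, then atomically move it
# into place; a scratch directory stands in for /etc here.
AREA=Europe ZONE=Paris DEST=./tzdemo
mkdir -p "$DEST"
cp -f "/usr/share/zoneinfo/$AREA/$ZONE" "$DEST/localtime.dpkg-new" &&
  mv -f "$DEST/localtime.dpkg-new" "$DEST/localtime"
echo "$AREA/$ZONE" > "$DEST/timezone"
cat "$DEST/timezone"
```

The atomic copy-then-move dance matters on a live system: processes reading /etc/localtime never see a half-written file.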
| How to make an Ubuntu time zone change stick? |
1,330,601,237,000 |
I am configuring the Linux kernel version 3.9.4. I am being asked questions about RCU (seen below). Specifically, what are each of these and what are the advantages and disadvantages of enabling or disabling some of these?
Consider userspace as in RCU extended quiescent state (RCU_USER_QS) [N/y/?]
Tree-based hierarchical RCU fanout value (RCU_FANOUT) [64]
Disable tree-based hierarchical RCU auto-balancing (RCU_FANOUT_EXACT) [N/y/?]
Accelerate last non-dyntick-idle CPU's grace periods (RCU_FAST_NO_HZ) [Y/n/?]
Offload RCU callback processing from boot-selected CPUs (RCU_NOCB_CPU) [N/y/?]
|
There are some details about these options over on the LTTng Project site. RCU stands for read-copy-update. These are data structures in the kernel which allow the same data to be replicated across cores in a multi-core CPU, and they guarantee that the data will be kept in sync across the copies.
excerpt
liburcu is a LGPLv2.1 userspace RCU (read-copy-update) library. This
data synchronization library provides read-side access which scales
linearly with the number of cores. It does so by allowing multiples
copies of a given data structure to live at the same time, and by
monitoring the data structure accesses to detect grace periods after
which memory reclamation is possible.
Resources
There is a good reference to what RCU's are and how they work over on lwn.net titled: What is RCU, Fundamentally?.
There's also this resource by the same title as lwn.net but it's different content.
There is also the Wikipedia entry on the RCU topic too.
Finally there's the Linux kernel documentation available here: rcu.txt.
So what are these options?
RCU_USER_QS
This option sets hooks on kernel / userspace boundaries and puts RCU
in extended quiescent state when the CPU runs in userspace. It means
that when a CPU runs in userspace, it is excluded from the global RCU
state machine and thus doesn't try to keep the timer tick on for RCU.
Unless you want to hack and help the development of the full dynticks
mode, you shouldn't enable this option. It also adds unnecessary
overhead.
If unsure say N
RCU Fanout
This option controls the fanout of hierarchical implementations of
RCU, allowing RCU to work efficiently on machines with large numbers
of CPUs. This value must be at least the fourth root of NR_CPUS, which
allows NR_CPUS to be insanely large. The default value of RCU_FANOUT
should be used for production systems, but if you are stress-testing
the RCU implementation itself, small RCU_FANOUT values allow you to
test large-system code paths on small(er) systems.
Select a specific number if testing RCU itself. Take the default if
unsure.
RCU_FANOUT_EXACT
This option forces use of the exact RCU_FANOUT value specified,
regardless of imbalances in the hierarchy. This is useful for testing
RCU itself, and might one day be useful on systems with strong NUMA
behavior.
Without RCU_FANOUT_EXACT, the code will balance the hierarchy.
Say N if unsure.
RCU_FAST_NO_HZ
This option permits CPUs to enter dynticks-idle state even if they
have RCU callbacks queued, and prevents RCU from waking these CPUs up
more than roughly once every four jiffies (by default, you can adjust
this using the rcutree.rcu_idle_gp_delay parameter), thus improving
energy efficiency. On the other hand, this option increases the
duration of RCU grace periods, for example, slowing down
synchronize_rcu().
Say Y if energy efficiency is critically important, and you don't care
about increased grace-period durations.
Say N if you are unsure.
RCU_NOCB_CPU
Use this option to reduce OS jitter for aggressive HPC or real-time
workloads. It can also be used to offload RCU callback invocation to
energy-efficient CPUs in battery-powered asymmetric multiprocessors.
This option offloads callback invocation from the set of CPUs
specified at boot time by the rcu_nocbs parameter. For each such CPU,
a kthread ("rcuox/N") will be created to invoke callbacks, where the
"N" is the CPU being offloaded, and where the "x" is "b" for RCU-bh,
"p" for RCU-preempt, and "s" for RCU-sched. Nothing prevents this
kthread from running on the specified CPUs, but (1) the kthreads may
be preempted between each callback, and (2) affinity or cgroups can be
used to force the kthreads to run on whatever set of CPUs is desired.
Say Y here if you want to help to debug reduced OS jitter. Say N here
if you are unsure.
So do you need it?
I would say if you don't know what a particular option does when compiling the kernel then it's probably a safe bet that you can live without it. So I'd say no to those questions.
Also when doing this type of work I usually get the config file for the kernel I'm using with my distro and do a comparison to see if I'm missing any features. This is probably your best resource in terms of learning what all the features are about.
For example in Fedora there are sample configs included that you can refer to. Take a look at this page for more details: Building a custom kernel.
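Since a kernel config is plain text, comparing two of them is just a diff; for example (both files here are tiny stand-ins for real configs from /boot or the source tree):

```shell
# Compare the RCU-related options of two kernel configurations.
printf 'CONFIG_RCU_FANOUT=64\nCONFIG_RCU_FAST_NO_HZ=y\n' > config-distro
printf 'CONFIG_RCU_FANOUT=32\nCONFIG_RCU_FAST_NO_HZ=y\n' > config-custom
diff config-distro config-custom || true   # diff exits non-zero when files differ
```

On a real Fedora system you would diff something like /boot/config-$(uname -r) against your own .config, possibly filtered through `grep '^CONFIG_RCU'`.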
| Understanding RCU when Configuring the Linux Kernel |
1,330,601,237,000 |
So there is a directory full of torrent files:
debian.iso.torrent
fedora.iso.torrent
I can start downloading them with a:
rtorrent *.torrent
command, when the working directory is the one where the torrents are.
But every time I start rtorrent in this way it calculates all the hashes; it takes a long time to do that and it's a CPU-intensive thing.
Are there any methods to avoid this? (other console-based torrent client? or a feature to add a single torrent when already downloading a torrent without calculating all the torrent's hashes?)
|
You can set up a "session directory" so that some data is stored and, when you exit rtorrent cleanly, you can open it without going through the hashing.
According to the manpage, this can be done using the -s path option, so -s ~/torrentdir would use that as session directory. But you probably want to set this through ~/.rtorrent.rc so that you don't have to specify it all the time.
(Sorry for the lack of a working example, I don't have a computer with rtorrent set up near me right now.)
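For what it's worth, here is a sketch of that setup, using relative paths as placeholders (note: newer rtorrent versions spell the option session.path.set rather than the old session alias):

```shell
# Create a session directory and point the rc file at it, so rtorrent can
# persist resume/hash data across clean exits instead of re-hashing.
mkdir -p ./rtorrent-session
echo 'session = ./rtorrent-session' > ./rtorrent.rc
cat ./rtorrent.rc
```

In practice the rc file lives at ~/.rtorrent.rc and the session path should be an absolute path in your home directory.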
| How to add a torrent to a running rtorrent download? |
1,330,601,237,000 |
I want to uninstall tmux on my machine. My tmux version is:
tmux next-3.4
and which tmux gives me:
/usr/local/bin/tmux
I tried to uninstall it with sudo yum remove tmux, and I get:
Updating Subscription Management repositories.
No match for argument: tmux
No packages marked for removal.
Dependencies resolved.
Nothing to do.
Complete!
My machine info is as follows:
NAME="Red Hat Enterprise Linux"
VERSION="8.6 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.6"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.6 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.6"
Red Hat Enterprise Linux release 8.6 (Ootpa)
Red Hat Enterprise Linux release 8.6 (Ootpa)
|
You didn't install a tmux package, you compiled and installed tmux yourself - so the package management tools, e.g. yum, know nothing about your self-installed version of tmux.
In short: you installed it manually, you'll have to uninstall it manually.
Some possibilities that might make this easier:
Some programs come with both an install and an uninstall target in their Makefile. I don't know if tmux does this, but it's worth checking. Either run make uninstall or examine tmux's Makefile to see if it has that target. Note that uninstall makefile targets aren't always reliable and generally aren't tested anywhere near as well as build or install targets. Caveat emptor. YMMV. Good luck!
Run make -n install and make note of all files which would be installed. Then delete them manually. Hint: it would help to redirect make -n's output to a file, especially if there's a lot of output.
BTW, in case it's not obvious,make's -n option is a dry-run - from man make:
-n, --just-print, --dry-run, --recon
Print the commands that would be executed, but do not execute them
(except in certain circumstances).
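A toy demonstration of the dry-run approach (the Makefile below is a stand-in, not tmux's real one):

```shell
# Fake Makefile with a single install target, standing in for tmux's.
printf 'install:\n\tcp tmux /usr/local/bin/tmux\n' > Makefile
# -n prints the commands without running them; save them as a manifest
# of files you would later have to delete by hand.
make -n install > install-manifest.txt
cat install-manifest.txt
```

In the real tmux source tree you would run `make -n install > manifest.txt` and then remove each listed destination file manually.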
Recommendations for the future:
Either stick to packaged software or use programs like GNU Stow or CheckInstall when compiling and installing software. They provide some (but not all) of the useful functionality of packages, and make it easier to upgrade and/or uninstall self-compiled software.
Note that 99+% of the time, there's little or no benefit to compiling software yourself. "It's shinier and newer" and "it has a bigger version number" are almost never good enough reasons. Especially so if you don't know how to uninstall a program that you've self-compiled, or the difference between packaged and self-compiled software.
If there's a specific new feature that you absolutely must have, or a bug that affects you and you KNOW for a fact that it's fixed in the latest upstream version AND you can't wait a few days or weeks for the package to be updated then it might be worth the hassle...but even then it usually isn't, you're just trading one problem (a bug, or lack of a feature) for another (unpackaged software). It's almost always better to just wait.
Or, rather than just download the source and run make install, learn enough about your distro's packaging system to build your own package of the latest version. Quite often this can be as simple as downloading the package source files and applying upstream patches to it and then re-building the package, or porting the packaging changes (e.g. debian/ directory on debian/ubuntu/etc, or spec file etc on RPM-based distros) to the newest upstream source.
| How do you uninstall a package that you manually cloned from git |
1,330,601,237,000 |
I made a short script which can export various KDE settings from user home directory to use as a basis for a quick setup of the desktop environment on a different machine.
I was successful with all the settings that were of interest to me, but only one is elusive: I can't seem to find where the chosen keyboard layouts are stored. Basically, I would like to get to all the configuration that can be manipulated from the KDE settings application under the System Settings > Hardware > Input Devices > Keyboard > Layout tab (particularly the layouts themselves and the keyboard shortcut to switch between them). Does anyone have any idea? Maybe these settings are not specific to KDE and manipulate different configuration files? Thanks for any tips.
|
After some time of searching and playing with grep, I was able to locate the configuration file: ~/.config/kxkbrc.
| Where does KDE 5 store user-specific keyboard layout choices? |
1,330,601,237,000 |
I have a computer (in fact, a Banana Pi Pro) with a touchscreen which I have configured to emulate the right click via xorg.conf:
Section "InputClass"
Identifier "Touchscreen"
Option "EmulateThirdButton" "1"
Option "EmulateThirdButtonTimeout" "750"
Option "EmulateThirdButtonThreshold" "30"
EndSection
This works really well. But sometimes, when I want to use a real mouse, these settings become quite annoying, because long left mouse clicks are converted to right mouse clicks. Also, drag selection becomes imprecise because of the 30-pixel threshold.
I wonder if it's possible to disable the right click emulation when the mouse is used:
Is it possible to modify Xorg configuration at runtime to alter the "InputClass" section?
If not, is it possible to apply this section only to one particular input device (the touchscreen)?
If the only way is to update xorg.conf and restart the server, what would be the least painful way to do it? Ideally it would be nice to preserve the applicatons which are already running, but I doubt it's possible.
Is there a program which does what I want without changing xorg.conf? Like in this question, where xrandr is used to dynamically configure parameters which are static when configured via xorg.conf.
|
xinput controls input settings. It has the same role for input that xrandr has for the display.
Run xinput list to list devices. Each device has a name and a numerical ID. You can use either this name or this ID to list properties of the corresponding device. Device IDs can depend on the order in which the devices are detected, so to target a specific device, use its name. For example, I have a mouse as device 8; here's an excerpt of its properties:
$ xinput list-props 8
…
Evdev Third Button Emulation (280): 0
Evdev Third Button Emulation Timeout (281): 1000
Evdev Third Button Emulation Button (282): 3
Evdev Third Button Emulation Threshold (283): 20
…
So I can use either of the following commands to turn on third button emulation for this device:
xinput set-prop 8 280 1
xinput set-prop 8 'Evdev Third Button Emulation' 1
There is a hierarchy of devices, which xinput list represents graphically. Applying a property to a device also applies it to its children. For example, you can apply a property to all pointing devices by applying it to the root pointer Virtual core pointer.
| How to disable Xorg right click emulation at runtime |
1,330,601,237,000 |
Here is an example that will explain better:
I have selected the audio driver from the picture and I would like to browse through its source. How do I get to the path of the source files from here?
|
You have to use grep -r CONFIG_SND_SOC_MXS_SGTL5000.
Each of these config options just represents a #define macro. Many of them don't belong to a single file but instead are checked in multiple source files. CONFIG_64BIT for example appears in around 1k source code files.
| How to relate a kernel config setting to the source files? |
1,330,601,237,000 |
I've got setuptool set up and have read through the list of what's in /etc/setuptool.d
I'm interested in installing timeconfig as well, but I only need the TUI version (no GUI). Which RPMs do I need? (I'll be including these RPMs in a CentOS ISO installation)
|
You need the package system-config-date.
In CentOS it is not installed by default but you can install it by typing
sudo yum install system-config-date
and that should get you the tui tool you need which you can run by typing
sudo system-config-date
| What RPMs do I need for timeconfig? |
1,330,601,237,000 |
I am using Arch Linux with a custom kernel stored as /boot/vmlinuz-linux1. Some features I would like to have don't work in it, but there is also a /boot/vmlinuz-linux kernel where those features work. How can I retrieve the .config kernel configuration file from the second vmlinuz file in order to compare it with the configuration of the first kernel in a text editor?
|
As far as I'm aware, extracting the .config configuration file from a kernel is possible only if you've compiled it with the configuration option CONFIG_IKCONFIG (available in the configuration menu as entry General setup > Kernel .config support). Here is the documentation of that configuration option:
CONFIG_IKCONFIG:
This option enables the complete Linux kernel ".config" file
contents to be saved in the kernel. It provides documentation
of which kernel options are used in a running kernel or in an
on-disk kernel. This information can be extracted from the kernel
image file with the script scripts/extract-ikconfig and used as
input to rebuild the current kernel or to build another kernel.
It can also be extracted from a running kernel by reading
/proc/config.gz if enabled (below).
The last sentence refers to an additional configuration option CONFIG_IKCONFIG_PROC which gives you access to the configuration of a running kernel through a file in the proc pseudo-filesystem.
If your kernel has not been compiled with CONFIG_IKCONFIG, I don't think you can retrieve its configuration easily. Otherwise, it's as simple as
zcat /proc/config.gz > .config
if CONFIG_IKCONFIG_PROC has been selected and you're currently running your /boot/vmlinuz-linux kernel, or
scripts/extract-ikconfig /boot/vmlinuz-linux
The script extract-ikconfig is the one available along with the kernel sources, in folder scripts.
| Can I get .config file from vmlinuz file? |
1,330,601,237,000 |
I was successfully using a small (20,000 entries) zone file with bind9 server, but today my data provider sent an update which caused the zone file to become 300,000+ entries large (30Mb+).
The problem is the server would not start with this zone file. The named-checkconf would not report any errors. No log messages are available (or I could not log them properly).
I would like to know if bind9 is capable of handling large configuration files and if yes how do I fix it. If no I would like to know if there are any workarounds for this issue. Maybe it's possible to store entries in a database?
The zone file I'm trying to use can be downloaded from here.
Update:
service bind9 status showed some information which may be relevant:
adjusted limit on open files from 4096 to 1048576
found 1 CPU, using 1 worker thread
using 1 UDP listener per interface
using up to 4096 sockets
loading configuration from '/etc/bind/named.conf'
I'm not quite sure how to interpret or use this information... Any ideas?
Also I was not able to find where bind9 logs are located: /var/log/ has no bind9 entries. Can anybody tell me where they are located on Debian Jessie?
|
I have seen your zone file: it appears to be a list of more than 350k domains at the moment, in which the local BIND server is defined as the master. The domains are in the following format:
zone "xxxx.com" { type master; notify no; file "null.zone.file"; };
As per memory requirements, I would say as a ballpark figure you might need around 40MB-80MB of free RAM for that as domain tables are loaded in memory. (albeit I would feel more comfortable with 200MB at least)
Unless the server is severely constrained in RAM, it seems a bit improbable, but it could happen.
I also have noticed there are underscores ("_") in the name of several domains. Having underscores in DNS RR breaks a couple of RFCs (RFC 952 and RFC 1123), and you need to add to the BIND options section the directive:
check-names master ignore;
As for the format and method being used for blacklisting domains: from version 9.8 onwards, BIND supports what is known as Response Policy Zones (RPZ), which were created specifically for blacklisting domains.
Several (commercial) blacklist providers follow nowadays that format. (I myself use RPZ both at work and at home).
Using RPZ should make more sense and also means a lighter load, and as such, if you are paying the service, I would advise you to contact your supplier to know how to use it. The RPZ format also supports to some extent wildcards, which would mean a much smaller blacklist file.
An alternative is also to process the file with a script to alter it to RPZ format.
I will leave here relevant links about RPZ and official RPZ providers:
https://dnsrpz.info
and a tutorial how to configure RPZ:
http://www.zytrax.com/books/dns/ch9/rpz.html
As you may have noted, with the current configuration, you will also have a lot of open files; hence I recommend again using RPZ.
For dealing with more open files in large email, DNS or HTTP servers, the limits often have to be raised.
The situation is not so bad as it used to be with older kernels, but nonetheless I do recommend raising the limits.
Edit /etc/sysctl.conf and modify/add the directive fs.file-max for the global limit of open files:
fs.file-max=500000
For applying the new file limit without rebooting, you need to run:
sudo sysctl -p
And for the files limits per process, edit, /etc/security/limits.conf:
* - nofile 400000
To apply the per-process file limits, either log out and log in again, or run in a root shell (ulimit is a shell builtin, so prefixing it with sudo will not work):
ulimit -n 400000
After raising these two limits, you need to restart BIND:
sudo service bind9 restart
To convert your file to RPZ format, you run:
tr -d '"' < bind | awk '{ print $2" CNAME ." }' > /etc/bind/rpz.db
The script will convert the entries to the following format:
zeus.developershed.com CNAME .
zeusclicks.com CNAME .
zintext.com CNAME .
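The conversion can be checked on a small sample first (domains taken from the example output above):

```shell
# Convert zone-declaration lines into RPZ "CNAME ." records.
cat > bind.sample <<'EOF'
zone "zeusclicks.com" { type master; notify no; file "null.zone.file"; };
zone "zintext.com" { type master; notify no; file "null.zone.file"; };
EOF
# Strip the quotes, then keep only the second field (the domain name).
tr -d '"' < bind.sample | awk '{ print $2" CNAME ." }' > rpz.sample
cat rpz.sample
```

Each input line becomes one `domain CNAME .` record, which in RPZ semantics returns NXDOMAIN for that name.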
Add in the options section of named:
response-policy { zone "rpz"; };
Create the declaration of the RPZ zone:
zone "rpz" {
type master;
file "/etc/bind/rpz.db";
};
Add to the top of /etc/bind/rpz.db file:
$TTL 604800
@ IN SOA localhost. root.localhost. (
2 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
@ IN NS your_dns_fqdn.
Remove your old zone-list file from the named configuration and restart your BIND server. Evidently the RPZ file can be optimised with wildcards and made much shorter; however, even without that optimisation you now won't need so many open files.
As for consulting BIND/DNS logs, they are together with the system logs in /var/log/syslog with the tag named. You can use the command:
sudo grep named /var/log/syslog
| Large zone file for bind9 : ad-blocking |
1,330,601,237,000 |
I am trying to get a list of wireless networks nearby while the adapter is acting as an access point but iwlist returns the following error:
$ sudo iwlist wlan0 scan
wlan0 Interface doesn't support scanning : Operation not supported
Is there another way of getting this list, perhaps with another utility? My Tomato powered WRT54 seems to be able to achieve this (listing nearby APs while the device itself is set up as an AP), so I'm curious how I could replicate that behaviour.
Thanks.
|
iwlist is seriously deprecated. Remove it from your system and never use it again. Do the same with iwconfig and iwspy. Those tools are ancient and were designed in an era when 802.11n didn't exist. Kernel developers maintain an ugly compatibility layer to still support wireless-tools, and this compatibility layer often lies.
Now install iw if not already done. The iw command you are looking for is
iw dev wlan0 scan ap-force.
This is a fairly recent addition. Not all drivers support this, but most should do.
| Getting a list of WiFi networks nearby when the adapter is in AP mode |
1,330,601,237,000 |
I'm currently exploring creating a startup script in the form of a service script located in /etc/init.d/ on my Fedora 14 Linux installation. It sounds like the following two lines are the bare minimum requirements?
#!/bin/bash
# chkconfig: 345 85 15 (however on this one I've seen different combos)
What's the purpose of these lines? Is there a good resource that would help me understand how to better create these and other header lines for such a file?
|
Look at the docs file /usr/share/doc/initscripts-*/sysvinitfiles (On current F14, /usr/share/doc/initscripts-9.12.1/sysvinitfiles). There's further documentation here: http://fedoraproject.org/wiki/Packaging/SysVInitScript.
The chkconfig line defines which runlevels the service will start in by default (if any), and where in the startup process they'll be ordered.
# chkconfig: <startlevellist> <startpriority> <endpriority>
Required. <startlevellist> is a list of levels in which
the service should be started by default. <startpriority>
and <endpriority> are priority numbers. For example:
# chkconfig: 2345 20 80
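Putting the header together with a start/stop dispatcher gives a minimal working script (the service name and levels are examples):

```shell
# Minimal SysV init script: the chkconfig header plus a start/stop case.
cat > myservice <<'EOF'
#!/bin/bash
# chkconfig: 345 85 15
# description: Example service
case "$1" in
  start) echo "starting" ;;
  stop)  echo "stopping" ;;
  *)     echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
esac
EOF
chmod +x myservice
./myservice start
```

On a real system the script would go in /etc/init.d/ and be registered with `chkconfig --add myservice`, which reads the header line to decide runlevels and priorities.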
And, note that this all becomes obsolete with Fedora 15 and systemd.
| Are certain parts of startup scripts necessary or just good practice? |
1,330,601,237,000 |
I'm having some trouble understanding how profile.d works. As far as I know, the scripts get executed whenever a user logs in. Currently, I'm running CentOS 6.10 on my server and see the following weird behavior:
In /etc/profile.d I have a script called logchk.sh which is meant to send an email to the admin email address via /bin/mail. If someone logs in via ssh user@serveraddress, this script is properly executed and the email is sent. However, whether the script is executed depends on the login method. What works is the following:
ssh user@serveraddress, regardless of the host system, regardless of the user
git pull user@repoaddress does trigger the e-mail script, but only for some users, regardless of the host system
What doesn't work is the following
git pull user@repoaddress for some users
connecting via filezilla using ssh as protocol
So depending on who is connecting to the server, git pull or FileZilla does not trigger the script, while for other users the script is triggered. All users use the bash shell, and the behavior is the same regardless of whether the user has root rights or not.
So in summary I don't understand why the script is triggered for some users and for others it isn't since it's a global configuration. If anyone could provide me with some detail about when exactly the scripts in /etc/profile.d get triggered, I would be happy.
|
All about bash
Let's start with this bit from the bash man page:
When bash is invoked as an interactive login shell, or as a non-
interactive shell with the --login option, it first reads and
executes commands from the file /etc/profile, if that file
exists. After reading that file, it looks for ~/.bash_profile,
~/.bash_login, and ~/.profile, in that order, and reads and
executes commands from the first one that exists and is readable.
The --noprofile option may be used when the shell is started to
inhibit this behavior.
[...]
When an interactive shell that is not a login shell is started,
bash reads and executes commands from ~/.bashrc, if that file
exists. This may be inhibited by using the --norc option. The
--rcfile file option will force bash to read and execute commands
from file instead of ~/.bashrc.
And also:
Bash attempts to determine when it is being run with its standard
input connected to a network connection, as when executed by the
historical remote shell daemon, usually rshd, or the secure shell
daemon sshd. If bash determines it is being run non-
interactively in this fashion, it reads and executes commands
from ~/.bashrc, if that file exists and is readable. It will not
All about ssh
When you ssh into a remote server without specifying a command...
ssh user@host
...this starts a login shell, so you get /etc/profile services.
On the other hand, when you run a command -- either explicitly, as in ssh user@host somecommand, or implicitly, by using git or rsync or some other tool that operates over an ssh connection -- you start a non-interactive shell, so you don't get /etc/profile services. As we see from the third man page excerpt above, bash will still read your ~/.bashrc file, despite it being a non-interactive shell.
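You can observe the interactive/non-interactive distinction directly, assuming bash is available: the special parameter $- contains i only in interactive shells.

```shell
# "$-" lists the shell's option flags; "i" is present only when interactive.
probe='case "$-" in *i*) echo interactive;; *) echo non-interactive;; esac'
bash -c "$probe"                        # as run by ssh/git: prints "non-interactive"
bash --norc -ic "$probe" 2>/dev/null    # forced interactive (--norc keeps the demo clean)
```

The same probe dropped into a script in /etc/profile.d can help debug which connection types actually reach it.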
All about dotfiles
The specific configuration of dotfiles I'm referring to in this section is specific to Fedora (and probably most RHEL derivatives), but may also hold true for other distributions.
When you start a login shell, bash reads /etc/profile. This script typically contains code to source the scripts from /etc/profile.d. So when someone runs ssh user@serveraddress, this starts an interactive login shell, which we know causes bash to read /etc/profile, hence it runs your logchk.sh script.
When you start an interactive non-login shell or when you start a non-interactive shell via a network connection like ssh, bash reads your ~/.bashrc file. The default .bashrc file created for user accounts includes:
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
And /etc/bashrc also sources script from /etc/profile.d.
So if someone has the default .bashrc file, then when they run a command over ssh (e.g., by running git pull user@repoaddress), this will also execute your logchk.sh script. However, if they have replaced or modified the default ~/.bashrc file in their account so that it no longer sources /etc/bashrc, then they won't run scripts from /etc/profile.d (unless they have explicitly decided to do so).
This would explain why you see different behavior for some of your users.
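A small diagnostic (mine, not part of the original answer) can classify the current bash session the same way the man page excerpts above do, which helps when debugging which startup files a given user actually gets:

```shell
# Classify the current shell session as login/non-login and
# interactive/non-interactive. `shopt -q login_shell` succeeds only in a
# bash login shell; $- contains "i" only when the shell is interactive.
shell_kind() {
    if shopt -q login_shell 2>/dev/null; then
        printf 'login'
    else
        printf 'non-login'
    fi
    case $- in
        *i*) printf ', interactive\n' ;;
        *)   printf ', non-interactive\n' ;;
    esac
}
shell_kind
```

Run from a plain script this prints "non-login, non-interactive"; run it at an ssh login prompt and you should see "login, interactive" instead.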
| When exactly do the scripts in /etc/profile.d get executed? |
1,330,601,237,000 |
I accidentally overwrote my .zshrc file after a misexecuted command, which contains several hundred lines of configs. However, I still have 5 terminals that had zsh open before this incident, and as a result, they are unaffected. However, any new shell I open loses the entire zsh config, and I have no backup for it.
I could simply continue using these 5 terminals, but I think there must be some form of way to extract the zshrc from memory, as ostensibly zsh loads the file into memory when run and stores it there until it's killed. I've tried this:
sudo dd if=/dev/mem bs=1M count=256|hexdump -C > ramfile
But all I've gotten is data unrelated to my zshrc.
Any solutions would be much appreciated.
|
I would have suggested using /proc/PID/fd/ directory, but zsh closes the file descriptor pointing to its configuration after parsing it. From that, my best guess is that your file in its original form is gone.
However, there are ways to dump zsh's current configuration, which may help you rebuild it. This other question's answer comes to mind:
All key bindings:
for m ($keymaps) bindkey -LM $m
All ZLE user widgets
zle -lL
All zstyles:
zstyle -L
Loaded modules:
zmodload -L
All variables:
typeset -p +H -m '*'
With the zsh/parameters module loaded, that will also include
aliases, options, functions...
| Recover overwritten .zshrc with still-running zsh |
1,330,601,237,000 |
I'm experimenting with different kernel configuration files and wanted to keep a log on the ones I used.
Here is the situation:
There is a configuration file called my_config which I want to use as a template.
I do make menuconfig, load my_config, make NO changes, and save as .config.
When I do diff .config my_config, there are differences in the files.
Why would there be differences between the old file and the new file?
|
Why would there be differences
Because you loaded my_config into menuconfig, made changes, then saved it as .config. Of course they are different. If you saved it twice, once with each name, then they would be the same.
If you mean, they are more different than you think they should be, keep in mind there is not a 1:1 correspondence between things you select in menuconfig and changes that appear in the config file.
Also, if my_config was the product of an earlier version of the kernel source, make menuconfig will notice this and convert the file to reflect the newer source version. This means even if you change nothing, just loading it and saving it will result in substantial changes to the text of the file. However, the actual configuration should be essentially the same (generally the changes are the addition of new options with appropriate default values).
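As a side note, the kernel tree ships scripts/diffconfig for comparing configs option by option. A portable sketch with standard tools (the helper name is mine, not from the answer) that ignores reordering and banner-comment churn:

```shell
# Keep real option lines and the meaningful "# CONFIG_X is not set" markers,
# drop banner comments and blank lines, then sort -- so diff reports only
# actual option changes, not layout differences.
cfg_normalize() {
    grep -e '^CONFIG_' -e 'is not set$' "$1" | sort
}
# Usage against the files from the question:
#   cfg_normalize my_config > /tmp/a; cfg_normalize .config > /tmp/b
#   diff /tmp/a /tmp/b
```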
| Saving a kernel config file through menuconfig results with different options? |
1,330,601,237,000 |
I am using Fedora 19, there is no ~/.gconf/networking/NetworkManager directory, and system-wide /etc/NetworkManager has no files required.
I want to export my user's NetworkManager configurations, that is Wireless, VPN credentials (ip, username, password) to a file, or at least find the directory where they are stored in Fedora 19. How to do that?
|
In Fedora, it seems that /etc/NetworkManager doesn't work; it's always empty. Take a look at /etc/sysconfig/network-scripts, I found mine there.
| Exporting NetworkManager configurations |
1,330,601,237,000 |
Here's the thing. I'm running Mint 19, relatively fresh install. I heard a lot of hype about Dwarf Fortress, and installed it once; then, needing to leave in a hurry, closed it with Control-C. Ever since then, every time I attempt to run it, I get the output:
/tmp/dwarf-fortresss7j3cousrun/df: 6: /tmp/dwarf-fortresss7j3cousrun/df: ./libs/Dwarf_Fortress: not found
Traceback (most recent call last):
File "/usr/games/dwarf-fortress", line 93, in <module>
main()
File "/usr/games/dwarf-fortress", line 90, in main
run_df_in_unionfs_with_cleanup(user_run_dir, data_dirs, sys.argv)
File "/usr/games/dwarf-fortress", line 60, in run_df_in_unionfs_with_cleanup
run_df_in_unionfs(user_run_dir, data_dirs, args)
File "/usr/games/dwarf-fortress", line 54, in run_df_in_unionfs
run_df(tmp_dir, args)
File "/usr/games/dwarf-fortress", line 46, in run_df
subprocess.run(cmd).check_returncode()
File "/usr/lib/python3.6/subprocess.py", line 369, in check_returncode self.stderr)
subprocess.CalledProcessError: Command '['/tmp/dwarf-fortresss7j3cousrun/df', '/usr/games/dwarf-fortress']'
returned non-zero exit status 127.
Followed by immediate program termination. I have attempted to remove, even purge, and reinstall, dwarf-fortress, only to get the same result. It ran perfectly fine exactly once, and as much as I have been staring at this error, I cannot make sense of it.
It isn't business critical or anything, I'm not technically even a player; but I would really like to know why the program is now failing, and in what manner it was broken by me. It's just too much of a mystery to leave unchecked. Thanks for your time.
|
TL;DR: Use the following script to clean left-overs out of $XDG_DATA_HOME (or $HOME) and unmount stale unionfs mounts:
#!/bin/sh
set -eu
echo "Killing currently running Dwarf Fortress instances"
killall -q -9 Dwarf_Fortress || true
echo "Removing old Dwarf Fortress unionfs mounts and mount points"
find /tmp/ -maxdepth 1 -name "dwarf-fortress*" \
-printf " Found %f\n" \
\( -exec fusermount -u {} \; -o -true \) \
-exec rmdir {} \;
UNIONFSDIR="${XDG_DATA_HOME:-"${HOME:?}/.local/share/"}dwarf-fortress/run/.unionfs-fuse"
if [ -d "$UNIONFSDIR" ]; then
echo "Removing old .unionfs-fuse directory"
rm -r -- "$UNIONFSDIR"
fi
echo "Done. Run dwarf-fortress and praise Armok!"
Why does this happen?
The dwarf-fortress package provided by Ubuntu uses a Python wrapper /usr/games/dwarf-fortress. That wrapper creates a secondary data hierarchy in $XDG_DATA_HOME/dwarf-fortress/run/.unionfs-fuse (~/.local/share/dwarf-fortress/run/.unionfs-fuse by default), which gets mounted as a unionfs(8) together with some other directories.
This enables you to place your mods in your $XDG_DATA_HOME/dwarf-fortress directory, so you don't need to change the contents of /usr/share/games/dwarf-fortress, which is great! However, a unionfs must be handled with care and cleaned up correctly. The Python script failed to do so when you used C-c to get out of the game.
Therefore, the unionfs is probably still mounted, but in a bad state.
How do I fix it?
So first of all, make sure that the game is completely closed:
killall -s KILL Dwarf_Fortress
Then make sure that there is no leftover unionfs DF mount:
mount | grep -a -e dwarf -e unionfs
unionfs on /tmp/dwarf-fortresswvlaptrarun type fuse.unionfs (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
unionfs on /tmp/dwarf-fortress4ylv2t19run type fuse.unionfs (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
As you can see, there are currently two on my system. Since both are broken, lets get rid of them with fusermount -u:
fusermount -u /tmp/dwarf-fortresswvlaptrarun
fusermount -u /tmp/dwarf-fortress4ylv2t19run
And last, but most important, remove the .unionfs-fuse in $XDG_DATA_HOME/dwarf-fortress/run/. I don't have XDG_DATA_HOME set, so I have to use $HOME. Check env before you accidentally delete the wrong directory!
rm -r $HOME/.local/share/dwarf-fortress/run/.unionfs-fuse
That's it. Note that the df_linux version from Bay 12 Games doesn't run into these problems, as it neither uses a Python wrapper nor a unionfs.
That being said, the issue was fixed in 2019 and the fix is shipped in bullseye. Unfortunately, it is not in 0.44.12-1, so buster might still be affected.
How did you come up with this solution?
First of all, I had a look at file $(which dwarf-fortress), which told me that it's a Python script:
$ file $(which dwarf-fortress)
/usr/games/dwarf-fortress: Python script, ASCII text executable
I then checked the script with any editor and found
def get_user_run_dir():
old_run_dir = xdg.BaseDirectory.save_data_path('dwarf-fortress')
new_run_dir = os.path.join(old_run_dir, 'run')
...
def run_df_in_unionfs(user_run_dir, data_dirs, args):
mnt_dirs = user_run_dir + "=rw:" + ':'.join(data_dirs)
with tempfile.TemporaryDirectory(suffix='run', prefix='dwarf-fortress') as tmp_dir:
cmd = ['unionfs', '-o', 'cow,relaxed_permissions', mnt_dirs, tmp_dir]
subprocess.run(cmd).check_returncode()
try:
run_df(tmp_dir, args)
finally:
subprocess.run(['fusermount', '-u', tmp_dir]).check_returncode()
which showed that there was at least something to be found in xdg.BaseDirectory, which is $HOME/.local/share (unless set otherwise). At the same time, fusermount -u shows that there is an unmount pending after we quit Dwarf Fortress, and mount | grep unionfs confirmed the active mounts in /tmp. I got rid of the whole $XDG_DATA_HOME/dwarf-fortress directory, and it worked again. With strace -ff -e trace=execve dwarf-fortress, I was able to confirm the unionfs mounts and found the .unionfs-fuse directory.
| Weird problem with Dwarf Fortress install |
1,330,601,237,000 |
I have a problem similar to this:
Unable to get Broadcom wireless drivers working on Arch Linux
But in my case, loading the broadcom-wl-dkms driver did not work. I am new to this, so maybe the solution is quite simple (hopefully).
What I did so far:
I installed various drivers with yaourt and pacman, ending up with the broadcom-wl-dkms driver.
When I list the available internet devices with ip link I still only get two results, the lo and the chipset of my motherboard (where the LAN´s plugged in and works just fine).
With lsmod I thought I would get a list of all active drivers, but the broadcom-wl-dkms is not shown there.
What do I have to do in order to get the drivers all set up and running?
Ah, running wifi-menu returns, in bright red, INVALID INTERFACE SPECIFICATIONS, but I'm guessing that's just because it can't see any wireless networking devices.
I read quite a lot of posts but nothing really helped so far (And yes, I checked the Arch Wiki beforehand).
Does it have something to do with the driver being restrictively licensed?
Output of lspci -knn | grep net -A2 :
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet
Connection (2) I219-V [8086:15b8] (rev 31) Subsystem: Micro-Star
International Co., Ltd. [MSI] Ethernet Connection (2) I219-V
[1462:7a12] Kernel driver in use: e1000e Kernel modules: e1000e
Fascinating: output of lspci -knn|grep Net -A2:
07:00.0 Network controller: Broadcom Limited BCM4360 802.11ac Wireless
Network Adapter (rev 03)
Running lsmod | grep wl produces no result. How do I load the driver?
|
I finally got it working.
My working environment: 4.16.5-1-ARCH [uname -r]
My Desktop: GNOME
My Network-Env.: network-manager-applet 1.8.11dev+12+ga37483c1-1
My Wlan-Card: Broadcom Limited BCM4360 802.11ac Wireless Network Adapter [14e4:43a0] [lspci -vnn -d 14e4:]
What I did:
I looked at https://wireless.wiki.kernel.org/en/users/Drivers/b43#Supported_devices to find compatible drivers, which in my case turned out to be none but the wl package (install with pacman -S broadcom-wl). Make sure everything is up-to-date with sudo pacman -Sy; after that, make sure your filesystem is all good with sudo pacman -S filesystem linux (which it wasn't in my case ;)). Check your system's version with uname -r && pacman -Q linux. Reboot. This already solved it, as the new kernel update brought some changes to:
b43-firmware broadcom-wl nvidia
rmmod b43 b43legacy bcm43xx bcma brcm80211 brcmfmac brcmsmac ssb wl
modprobe wl
did change nothing for me, but you can try it (as mentioned in the wiki: https://wiki.archlinux.org/index.php/Broadcom_wireless)
You might have to restart Network Manager: systemctl restart NetworkManager.service
What might have helped was installing linux headers: sudo pacman -S linux-headers
I don't know for sure what fixed it; I guess it was the kernel update.
| Broadcom Wireless PCI Card BMC4360 14e4:43a0 cannot get drivers working |
1,330,601,237,000 |
Is it a bad practice to run a command which requires sudo in ~/.profile?
If I really want to do that, how can I make the command run when Ubuntu reboots?
Make the command run with sudo under my user account without requiring a password, by editing /etc/sudoers?
Provide my password to the sudo command in ~/.profile, via echo <passwd> | sudo -S <mycommand>?
I haven't verified if the first way works, because I am still learning how to do it.
The second way seems to raise serious security concerns, and is probably the last way I want to go.
Thanks.
|
If you put the command in your ~/.profile, it will run every time you launch a login shell. Some terminal emulators allow you to use a login shell for each terminal window. Do you want your command running that often?
If you want to be allowed to use sudo for that command without entering a password, use the visudo command with sudo visudo (or, to use your favourite editor, use sudo -E visudo).
DO NOT EDIT /etc/sudoers DIRECTLY.
Add a line like this:
tim ALL=(ALL) NOPASSWD: /path/to/my/command
The order is important in the sudoers file, so add it below this line: root ALL=(ALL:ALL) ALL
However, if you only want it to run when your system starts up, add it to /etc/rc.local and you don't have to worry about sudo.
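If the command really must live in ~/.profile, one way to avoid it firing for every login shell is a once-per-boot guard. This is my own sketch, not from the answer; the helper name and marker path are arbitrary (it relies on /tmp being cleared at reboot, which is the case on most systems):

```shell
# Run a given command at most once per boot. The first argument is a tag
# used to name the marker file; the rest is the command to run.
run_once_per_boot() {
    marker="/tmp/.run-once-$1"
    shift
    [ -e "$marker" ] && return 0     # already ran this boot
    "$@" && touch "$marker"          # only mark as done if the command succeeded
}
# In ~/.profile you might then call:
#   run_once_per_boot mycmd sudo /path/to/my/command
```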
| How do you run a command with sudo in `~/.profile`? |
1,330,601,237,000 |
How does the Linux kernel Makefile understand .config? Does it have a parser for the defconfig file? It has to produce a lot of #defines for each enabled option from the defconfig and also drive a lot of minor Makefiles whose objects are compiled or not, based on directives in the .config file.
|
The syntax of the .config file is compatible with make; for example a line like CONFIG_CRC16=m sets the make variable CONFIG_CRC16 to the value m. It's parsed by make and included indirectly in the toplevel Makefile:
Makefile contains -include include/config/auto.conf
include/config/auto.conf is built by recursively calling the toplevel Makefile on the silentoldconfig target.
Conditional compilation of files is done mostly by playing with target names: makefiles include rules like
obj-$(CONFIG_CRC16) += crc16.o
The target obj-y thus builds all objects that are enabled as built-ins by a configuration option, and obj-m builds all objects that are enabled as modules. There are also conditional directives in the makefiles for more complex cases.
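A toy Makefile (my own illustration, not from the kernel tree; it assumes GNU make is installed) shows the obj-$(CONFIG_...) mechanism in isolation: with the option set to m, the object lands in the obj-m list rather than obj-y.

```shell
# Write a two-line kbuild-style makefile and evaluate it.
# obj-$(CONFIG_FOO) expands to obj-m because CONFIG_FOO is "m".
printf 'CONFIG_FOO := m\nobj-$(CONFIG_FOO) += foo.o\nall:\n\t@echo "obj-y=[$(obj-y)] obj-m=[$(obj-m)]"\n' > /tmp/kbuild-demo.mk
make -s -f /tmp/kbuild-demo.mk
```

Setting CONFIG_FOO := y instead would move foo.o into the obj-y list, and setting it to n (or leaving it unset) would send it to the never-built obj-n / obj- lists.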
For conditional compilation in the C language, C source files include include/generated/autoconf.h which contains lines like #define CONFIG_CRC16_MODULE 1. This file is generated from include/config/auto.conf by the programs invoked by the xxxconfig targets (scripts/kconfig/conf for batch targets like oldconfig, scripts/kconfig/qconf for xconfig, etc.); the source code for that is scripts/kconfig/confdata.c which does some very simple text processing.
| How does the Linux kernel Makefile understand .config? |
1,330,601,237,000 |
We just updated our server from CentOS 6 to RHEL 7 and after setting up our HP LaserJet 600 from its PPD, I'm noticing that all print jobs now have about a 2" margin at the top of the page.
Is it possible to define margins in a configuration file? This reply suggests that margins can be set with some arguments to lpr, but I'd rather store them in a conf file.
using lp:
-o page-bottom=N
-o page-left=N
-o page-right=N
-o page-top=N
Sets the page margins when printing text files.
The values are in points - there are 72 points to the inch.
|
Standard options can be set with the lpoptions command.
If run as a normal user the file $HOME/.cups/lpoptions is set.
If run as the root user then the system defaults /etc/cups/lpoptions is set.
This can be used to change various settings (eg double sided printing) and page-top.
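For example, running lpoptions -p LaserJet600 -o page-top=36 -o page-bottom=36 as root (the queue name and point values here are illustrative assumptions, not from the answer) would store a line like this in /etc/cups/lpoptions:

```
Dest LaserJet600 page-top=36 page-bottom=36
```

Subsequent text print jobs to that queue then pick those margins up without any extra lpr arguments.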
| Set default margins in cups? |
1,330,601,237,000 |
I need to configure xfreerdp to use 15 bpp every time it is launched, without having to type
# xfreerdp --sec rdp -a 15 --no-bmp-cache srvaddr
Opening xfreerdp's config.txt shows me the IP of the server, and if I add /bpp:15 or -a 15, the program won't launch.
What is the correct syntax for this config file?
|
Sad to say, xfreerdp has no option for storing settings in a config file.
What you could do instead is use a scripting language and wrap xfreerdp to add this functionality.
For new cli versions of xfreerdp:
xfreerdp /bpp:15 ...
For deprecated cli versions:
xfreerdp -a 15 ...
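A minimal wrapper sketch (the function name and the XFREERDP override are mine, not part of xfreerdp) that bakes in the question's options, using the deprecated-CLI flags from the question:

```shell
# Wrap xfreerdp so the fixed options are always prepended; the server
# address (and any extra flags) are passed through as "$@".
# XFREERDP defaults to the real binary but can be overridden for testing.
rdp15() {
    ${XFREERDP:-xfreerdp} --sec rdp -a 15 --no-bmp-cache "$@"
}
```

Put the function in your shell rc file (or save an equivalent two-line script in ~/bin) and connect with rdp15 srvaddr.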
| Configure xfreerdp to always pass some options |
1,330,601,237,000 |
In setting up fail2ban there are what appear to be variables at the top of the jail.conf that look like this:
mytime=300
.
.
.
[ssh]
bantime=%(mytime)s
Or in this more complicated form like this:
action_ = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"].
Questions
How do these work and what's going on with them?
Specifically what's the deal with the %(...string...)s?
|
If you take a look at the rules that are included with fail2ban you'll notice that they use these variables to make things neater and more parameterized. For example in the included jail.conf they've used them to make general action rules that they can then use when defining the various jails.
Example
Here are some basic variables at the top.
# Destination email address used solely for the interpolations in
# jail.{conf,local,d/*} configuration files.
destemail = root@localhost
# Sender email address used solely for some actions
sender = root@localhost
# Default protocol
protocol = tcp
# Ports to be banned
# Usually should be overridden in a particular jail
port = 0:65535
These variables are then used in other variables to construct some basic actions.
# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overridden globally or per
# section within jail.local file
banaction = iptables-multiport
# The simplest action to take: ban only
action_ = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
# ban & send an e-mail with whois report to the destemail.
action_mw = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
%(mta)s-whois[name=%(__name__)s, dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"]
Notice here that they're constructing a general-purpose action called action_, which is made using other variables, such as %(banaction)s, %(port)s, %(protocol)s, etc.
From the man jail.conf man page:
Using Python "string interpolation" mechanisms, other definitions are allowed and can later
be used within other definitions as %(name)s. For example.
baduseragents = IE|wget
failregex = useragent=%(baduseragents)s
So the %(...)s are part of the Python language. If you search for them you'll eventually find this page from the Python language's specification, specifically this section titled: 5.6.2. String Formatting Operations. There is an example on this page:
>>> print '%(language)s has %(number)03d quote types.' % \
... {"language": "Python", "number": 2}
Python has 002 quote types.
The %(...string...)s is called a string formatting or interpolation operator in Python. The s at the end of the %(...string...) is a conversion type, specifying that any Python object that may be passed to it gets converted to a string. The page referenced above includes a table of all the allowed conversion types and flags.
The % specifies where you want the specifier to begin, and the (...string...) names which Python variable we want to have expanded there.
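If you want to experiment with the same interpolation fail2ban performs without touching its config, a one-liner from the shell works (this assumes python3 is on your PATH; the values are illustrative):

```shell
# Substitute named values into a %(name)s template, exactly as fail2ban
# does when it expands action_ definitions.
python3 -c 'print("%(banaction)s[name=%(name)s]" % {"banaction": "iptables-multiport", "name": "ssh"})'
```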
| What are these %(...)s strings in fail2ban's jail.conf file, and how do they work? |
1,330,601,237,000 |
When I work with the terminal and use su or sudo to execute a command with the root user's permissions, is it possible to apply the configuration of my "non-root user" (from which I am invoking su or sudo) stored in this user's home directory?
For instance, consider that (being logged on as a non-root user) I would like to edit the configuration file /etc/some_file using vim and that my vim configuration file is located at /home/myuser/.vimrc. Firing up the command line and typing sudo vim /etc/some_file, I would want "my" beautiful and well-configured vim to show up. But what I get is an ugly vim with the default configuration, no plugins etc.
Can I make su or sudo use my user's configuration files instead of the root user's files located at /root?
|
Use sudo -E to preserve your environment:
$ export FOO=1
$ sudo -E env | grep FOO
FOO=1
That will preserve $HOME and any other environment variables you had, so the same configuration files you started with will be accessed by the programs running as root.
You can update sudoers to disable the env_reset setting, which clears out all environment variables and is generally enabled by default. You may have to enable the ability to use sudo -E at all in there as well. There are a few other sudoers settings that might be relevant: env_keep, which lets you specify specific variables to keep by default, and env_remove, which declares variables to delete always. You can use sudo sudo -V to see which variables are/are not preserved.
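As an illustration of the env_keep route (a sketch, not from the original answer; always edit sudoers via visudo), keeping just $HOME would look like:

```
Defaults env_keep += "HOME"
```

This avoids passing the whole environment through with -E while still letting programs run under sudo find your dotfiles.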
An alternative, if you can't modify sudoers, is to provide your environment explicitly:
sudo env HOME=$HOME command here
You can make a shell alias to do that automatically so you don't have to type it in.
Note that doing this (either way) can have potentially unwanted side effects: if the program you run tries to make files in your home directory, for example, those files will be created as root and your ordinary user won't be able to write to them.
For the specific case of vim, you could also put your .vimrc as the system-wide /etc/vimrc if you're the only user of this system.
| Use non-root user configuration for root account |
1,330,601,237,000 |
I have the following rules
Host *
Compression yes
Host sop
HostName 192.168.56.101
if i ssh sop will the compression flag also be added?
|
Yes, all matching blocks are applied.
If you say ssh -v sop it will show you exactly which lines of the config are applied in this case.
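Alternatively, ssh -G (available since OpenSSH 6.8) prints the fully resolved options for a host, which makes the cascading directly visible. In this sketch, -F points ssh at a copy of the question's config so nothing in your real ~/.ssh/config interferes:

```shell
# Recreate the question's config and ask ssh for the final options for "sop":
# both the Host * and Host sop blocks should show up in the result.
printf 'Host *\n  Compression yes\nHost sop\n  HostName 192.168.56.101\n' > /tmp/sshconf_demo
ssh -G -F /tmp/sshconf_demo sop | grep -E '^(compression|hostname) '
```

The output contains both "compression yes" (from Host *) and "hostname 192.168.56.101" (from Host sop).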
| Do ssh_config rules cascade? |
1,330,601,237,000 |
What happens if the last line of fstab is not terminated by a newline?
Why do people get a warning when that last line is not terminated by a newline?
|
A line is a sequence of characters terminated by a newline character. The characters that appear after the last newline of a file are not part of a line.
Such a file that has characters after the last newline is not a text file as per the POSIX definition of a text file, and the behaviour of text utilities is unspecified in that case and in practice, behaviour varies:
Some ignore those characters completely (skip that non-terminated line)
Some consider it as a line and preserve the absence of newline (like GNU sed)
Some consider it as a line and add back the missing newline (like GNU cut)
Some utilities' behaviour changes: read, for example, returns a non-zero exit status when reading a non-terminated line.
So even if the mount, swapon, fsck... utilities (those which typically read /etc/fstab) understand non-terminated lines, some script that process that file may still fail. You should always make sure text files are terminated by a newline character (unless they're empty). Text editors should do that by default. You generally need to go through hoops to remove that last newline characters.
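That read behaviour is easy to demonstrate (this example is mine, not from the answer): a while-read loop silently drops a final line that lacks the terminating newline, which is exactly how a script processing fstab can miss the last entry.

```shell
# Create a file whose last line has no trailing newline, then read it
# line by line: only the first, properly terminated line is reported.
printf 'entry1\nentry2' > /tmp/noeol-demo
while IFS= read -r line; do echo "got: $line"; done < /tmp/noeol-demo
```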
| Is the last new line on fstab important? |
1,330,601,237,000 |
I have full disk encryption on my Arch Linux laptop. When I power on the machine, it prompts me for my disk password. My system is encrypted by following the LVM on LUKS ArchWiki page.
The prompt says something like "a password is required for the cryptlvm volume". I would like to change this to feature some information about the system, like the owner and an address to return it to if lost. So far I have just tried to look at the Arch wiki and search to see if anyone else had asked anything similar, but I cannot seem to find anything.
|
I found out that you can make a custom initramfs module with mkinitcpio that prints out such information. Ensure you follow this correctly, otherwise your kernel will panic. To do so, you can create files under:
/usr/lib/initcpio/hooks/MODULENAME
/usr/lib/initcpio/install/MODULENAME
/usr/lib/initcpio/install/MODULENAME
This is a bash script that helps build the module when you regenerate initramfs with mkinitcpio. It must have build() and help() functions. The build function calls an add_runscript command which adds our runtime bash file of the same name under: /usr/lib/initcpio/hooks/MODULENAME.
build() {
add_runscript
}
/usr/lib/initcpio/hooks/MODULENAME
This is a bash script that is run when initramfs is loaded.
Any commands you would like to be run must be in a function called run_hook()
run_hook() {
# note this environment is limited as our drive is encrypted
# only core system commands will be available
# it is possible to add more commands to the initramfs environment
echo "hello!!"
}
Add hook to mkinitcpio.conf
Now we add the hook to the array in our mkinitcpio configuration file located at /etc/mkinitcpio.conf
# we put in the custom hook
# we put it before our encrypt hook!!
# so it shows before our password prompt
HOOKS=(base udev autodetect modconf kms keyboard MODULENAME encrypt lvm2 keymap consolefont block filesystems fsck)
Regenerate mkinitcpio
Finally, we can regenerate our initramfs so that this module can be loaded on next boot.
$ sudo mkinitcpio -p linux
Check the output for any errors before rebooting -- and pray for no kernel panic!
| custom prompt for system encryption password entry on startup |
1,330,601,237,000 |
If I check under
tools > options > Language Settings > Languages > Language of > User interface:
it only contains "Default - English (USA)" and "English (USA)", however I would like to change to "German (Germany)" or "Deutsch (Deutschland)"
So I would like to install the german language pack of LibreOffice on Fedora (Linux).
|
Check with dnf list libreoffice-langpack-de if it is installed/installable.
run: sudo dnf install libreoffice-langpack-de
This answer is based on this post.
| LibreOffice install language package from official dnf-repositories |
1,330,601,237,000 |
I have recently moved to fish from bash. I'm immediately in love with the out-of-the-box functionality, but I'm a little bit at a loss when it comes to configuring the shell.
I've read through the documentation, particularly the section about initialization, and it states:
On startup, Fish evaluates a number of configuration files...
Configuration snippets in files ending in .fish, in the directories:
$__fish_config_dir/conf.d (by default, ~/.config/fish/conf.d/)
...
User initialization, usually in ~/.config/fish/config.fish ...
So far, this is clear to understand. Coming from bash, I took my .bash_globals and .bash_aliases files, rewrote them a little bit according to fish's syntax, placed them into the ~/.config/fish/conf.d/ and they are loaded as expected.
However, when I looked over the contents of the config.fish file, I couldn't figure out anything that would need to be put there. To my understanding, fish is designed to work already out of the box, so the usual bash config like setting HISTCONTROL isn't necessary. Nor are the conf.d/ files called from some main script (like the .bash_aliases, etc., would be in .bashrc) - they're loaded automatically.
Is there some particular use case where config.fish is preferred - or even required - over conf.d/ files? So far, I would say individual files are cleaner to read, maintain and move between hosts. Is there some convention that's recommended to follow? Was there a specific motivation behind allowing so many levels of config, aside from giving users more freedom?
|
As far as purpose, I'd say there are several good reasons to support both.
First, and probably most importantly, as @Zanchey points out in the comments, conf.d support came about in release 2.3.0, so getting rid of config.fish at that point would have been a breaking change.
Second, as you said, freedom for the users to choose the way they would like to handle startup behavior.
Next, it's also somewhat "path of least resistance". I definitely share your preference for the modularity of conf.d/ files, and I love not having a config.fish myself. But some (perhaps even most) users who are moving over to fish for the first time default to the familiarity of a single .bash_profile-like place to put their config. I can imagine that the paradigm shift of not having a single-file config might be off-putting to some. In other words, config.fish helps provide a smooth migration for new users.
Further, config.fish is easier to explain to a new user, since it maps to something they already know in their previous shell. Even the fish FAQ defaults to telling users that config.fish is the equivalent of their previous startup scripts. Although I do wish they'd go on to explain the conf.d/ alternative as well there.
And config.fish does have one other small advantage, in that the execution order is more explicit (i.e. it's executed start to end). conf.d/ files are read just like any other conf.d/, in alphabetical order (or glob-result order, most likely). In my experience, this means that you have to resort to 00_dependency.fish type of naming to ensure that one file is run before others. That said, it should rare that anyone would have to resort to that.
As for "convention", well, I know many distributions set up their configuration files (e.g. the Apache2 httpd.conf) with a default "single-file" that goes on to process a conf.d/ structure. In fish's case, they just did away with the boilerplate for conffile in ~/.config/fish/conf.d/*.fish; source $conffile; end that would otherwise be required in config.fish.
| What is the purpose of (and possibly the convention of using) multiple locations for user configuration in fish shell |
1,330,601,237,000 |
I have a NUC 5i3RYH and I wanted to set up a customized xorg.conf file, because using a mini DisplayPort to HDMI adapter (cheaper than mini HDMI to HDMI adapter) overscans (does not fit the screen).
Xorg Configuration
We want to set the resolution and transform it a bit as we would with xrandr -display :0 --output HDMI2 --mode 1920x1080 --transform 1.05,0,-35,0,1.05,-19,0,0,1. To set this boy up, you need to configure what Xorg calls a "Screen". It has two important dependencies: "Device" (link to physical graphics card) and "Monitor" (link to the output port).
I needed to find the video driver (link to graphics device): lspci -nnk | grep -i vga -A3 | grep 'in use' yielded Kernel driver in use: i915, so naturally I figured that I needed to put Driver "i915" into my "Device" section. It turned out that this should be "intel". Why, and how would I come to this conclusion (assuming I do not have access to Google, haha)? What, in my understanding, is missing?
/etc/X11/xorg.conf.d/10-monitor.conf
Section "Device"
Identifier "Intel HD Graphics 5500" #Unique Ref for Screen Section
Driver "intel" #Driver used for physical device
Option "DPMS" "false"
EndSection
Section "Monitor"
Identifier "monitor-DisplayPort-HDMI2" #Unique Ref for Screen Section
# I have no idea how this gets linked to my output port
EndSection
Section "Screen"
Identifier "Screen0" #Join Monitor and Device Section Params
Device "Intel HD Graphics 5500" #Mandatory link to Device Section
Monitor "monitor-DisplayPort-HDMI2" #Mandatory link to Monitor Section
DefaultDepth 16 #Choose the depth (16||24)
SubSection "Display"
Depth 16
Modes "1920x1080_60.00" #Choose the resolution
Option "TransformationMatrix" "1.05,0,-35,0,1.05,-19,0,0,1" #Not working
EndSubSection
EndSection
Notes
Running Arch Linux:
4.9.11-1-ARCH #1 SMP PREEMPT Sun Feb 19 13:45:52 UTC 2017 x86_64 GNU/Linux
I am not sure where to put transform in an Xorg config
|
It seems like based on don's input, I need to look in the Xorg log. The problem is that with Xorg, you need to know the driver group in advance or install all drivers as Patrick Mevzek suggested.
Only then can you identify the "intel" driver specifically.
Searching for the words "Module" and "driver" and then reading the surrounding lines seems to do the trick (including the full log). My strategy was to search for "Module class" and look for: "X.Org Video Driver"
grep -B4 -A4 'Module class' /var/log/Xorg.0.log
Relevant Lines
See LoadModule: "intel"
[ 1065.037] (II) LoadModule: "intel"
[ 1065.037] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so
[ 1065.037] (II) Module intel: vendor="X.Org Foundation"
[ 1065.037] compiled for 1.19.0, module version = 2.99.917
[ 1065.037] Module class: X.Org Video Driver
| How am I supposed to arrive at the conclusion that my video driver is called "intel"? |
1,474,879,267,000 |
Is there a way to start instances of xterm using different configuration files? Ex: xterm -load .Xresources-1, xterm -load .Xresources-2
Using xrdb -load ~/.Xdefaults changes the configs globally, which I try to avoid.
|
That's usually done by changing the instance name, which by default is the name of the program which is run, but can be overridden using the -name option. (If you make a symbolic link to a program and run that link, that's a quick way of renaming a program as well).
If you have a resource file with settings like
xterm*font: fixed
the instance is the xterm at the beginning of the line.
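For instance (the names small and big below are made up for this example), a single resource file can hold several variants keyed by instance name:

```
! ~/.Xresources -- 'small' and 'big' are hypothetical instance names
small*font: fixed
big*font: 10x20
```

After loading it with xrdb -merge ~/.Xresources, running xterm -name big picks up the big* lines, while xterm -name small gets the small* ones.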
You can also change the class name (which you commonly see as XTerm, also at the beginning of the resource lines). The uxterm script uses the -class option to override this to change settings to make xterm work consistently in UTF-8 mode.
If you have different class names, then you can use the app-defaults search mechanism to support different resource files. I set the environment variable XAPPLRESDIR to my own directory, and have locally-customized resource files (each named for a class). That is documented in X(7):
application-specific files
Directories named by the environment variable XUSERFILESEARCHPATH or the environment variable XAPPLRESDIR (which names a single directory and should end with a '/' on POSIX systems), plus directories in a standard place (usually under /tmp/Xorg-KEM/lib/X11/, but this can be overridden with the XFILESEARCHPATH environment variable) are searched for application-specific resources. For example, application default resources are usually kept in /tmp/Xorg-KEM/lib/X11/app-defaults/. See the X Toolkit Intrinsics - C Language Interface manual for details.
| Start xterm instance with different configurations |
1,474,879,267,000 |
I am doing a restoration test of backup files on a computer. Without putting much thought into it, I did an rsync to replace all files in /etc folder with the ones in the backup. Then, I realize I shouldn't really be doing it as the passwords and user names for the two computers are not the same.
After a reboot, the computer is in a state where it cannot start, and I would possibly have to reinstall it from scratch. Now my question is, given the same OS, what are the files in /etc folder that are unique for each computer. This would allow me to fine-tune rsync to exclude those files in the future when doing a restoration from backup.
|
There are very few files that absolutely must be different between two machines, and need to be regenerated when cloning:
The host name /etc/hostname.
The SSH host keys: /etc/ssh_host_*_key* or /etc/ssh/ssh_host_*_key* or similar location.
The random seed: /var/lib/urandom/random-seed or /var/lib/random-seed or similar location. (/var/lib/systemd/random-seed on systems using systemd)
Anything else could be identical if you have a bunch of identical machines.
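To turn this into the rsync fine-tuning the question asked about, here is a minimal sketch that skips the host-specific files; the paths are assumptions based on common locations, so verify them on your distribution first:

```shell
# Host-specific files to keep out of a blanket /etc restore
# (locations are assumptions -- check where your distro keeps them).
EXCLUDES="hostname ssh/ssh_host_* fstab crypttab mdadm.conf"

args=""
for f in $EXCLUDES; do
    args="$args --exclude=$f"
done

# Echoed rather than executed, so the command can be inspected first:
echo rsync -a $args /backup/etc/ /etc/
```

Drop the echo (and add --dry-run for a first pass) once the exclude list matches your system.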
A few files are typically different on machines with different hardware:
/etc/fstab, /etc/crypttab, /etc/mdadm.conf, and bootloader configuration files (if located in /etc — some distributions put them in /boot) if disks are partitioned differently.
/etc/X11/xorg.conf, if present, if the machines have different graphics cards.
Modules to load or blacklist in /etc/modules, /etc/modprobe.conf, /etc/modprobe.d/ and /etc/modutils/.
In addition, some network configuration may need to change, in particular:
If you have static IP addresses, they need to be diversified per machine. The location of IP configuration varies between distribution (e.g. /etc/network/interfaces on Debian, /etc/sysconfig/network on Red Hat).
/etc/hosts often contains the host name.
Mail configuration often contains the host name: check /etc/mailname.
There's no general answer to “what are the files in /etc folder (…) are unique for each computer” because the whole purpose of /etc is to store files that can be customized on each computer. For example, if you have different accounts on different machines, then obviously you can't share the account database — and if you want to be able to share the account database, then you'll end up with the same accounts.
Generally speaking, don't try to share /etc by default unless you have a set of machines with the same software configuration — same installed software, same accounts, etc. If you do share /etc, you'll need to blacklist a few files from sharing as indicated above.
If you have machines with different configurations, then whitelist what you synchronize. Treat files in /etc as distinct on different machines, like files in /var. Synchronize only the ones that you've decided should apply everywhere.
One possible way to manage synchronization is to keep machine-specific files in a different directory, e.g. /local/etc, and make symbolic links like /etc/fstab -> ../local/etc/fstab. This still requires a largely homogeneous set of machines in terms of software as different distributions put files in different locations. Or, conversely, keep only the machine-specific files in /etc and all generic files elsewhere — but typical distributions don't accommodate this well.
You obviously can't do a live test of the restoration of the system configuration of one system on a different system. To test the restoration of your backups, fire up a virtual machine that emulates the hardware configuration sufficiently well (in particular, with a similar disk layout).
| Which config files in /etc folder must be unique for each computer? |
1,474,879,267,000 |
Just rebooted my system to this warning
:: Starting Syslog-NG [BUSY]
WARNING: Configuration file format is too old, please update it to use the 3.2 format as some constructs might operate inefficiently;
WARNING: the expected message format is being changed for unix-domain transports to improve syslogd compatibility with syslog-ng 3.2. If you are using custom applications which bypass the syslog() API, you might need the 'expect-hostname' flag to get the old behaviour back;
Anyone know of any good resources on converting formats? my syslog-ng.conf is primarily from the Gentoo Security Handbook and thus simply using the the .pacnew file won't work
here's my current conf file
@version: 3.0
#
# /etc/syslog-ng.conf
#
options {
stats_freq (0);
flush_lines (0);
time_reopen (10);
log_fifo_size (1000);
long_hostnames(off);
use_dns (no);
use_fqdn (no);
create_dirs (no);
keep_hostname (yes);
perm(0640);
group("log");
};
source src {
unix-stream("/dev/log");
internal();
file("/proc/kmsg");
};
destination d_authlog { file("/var/log/auth.log"); };
destination d_syslog { file("/var/log/syslog.log"); };
destination d_cron { file("/var/log/crond.log"); };
destination d_daemon { file("/var/log/daemon.log"); };
destination d_kernel { file("/var/log/kernel.log"); };
destination d_lpr { file("/var/log/lpr.log"); };
destination d_user { file("/var/log/user.log"); };
destination d_uucp { file("/var/log/uucp.log"); };
destination d_mail { file("/var/log/mail.log"); };
destination d_news { file("/var/log/news.log"); };
destination d_ppp { file("/var/log/ppp.log"); };
destination d_debug { file("/var/log/debug.log"); };
destination d_messages { file("/var/log/messages.log"); };
destination d_errors { file("/var/log/errors.log"); };
destination d_everything { file("/var/log/everything.log"); };
destination d_iptables { file("/var/log/iptables.log"); };
destination d_acpid { file("/var/log/acpid.log"); };
destination d_console { usertty("root"); };
# Log everything to tty12
destination console_all { file("/dev/tty12"); };
#destination knotifier { program('/usr/local/bin/knotifier'); };
filter f_auth { facility(auth); };
filter f_authpriv { facility(auth, authpriv); };
filter f_syslog { program(syslog-ng); };
filter f_cron { facility(cron); };
filter f_daemon { facility(daemon); };
filter f_kernel { facility(kern) and not filter(f_iptables); };
filter f_lpr { facility(lpr); };
filter f_mail { facility(mail); };
filter f_news { facility(news); };
filter f_user { facility(user); };
filter f_uucp { facility(cron); };
filter f_news { facility(news); };
filter f_ppp { facility(local2); };
filter f_debug { not facility(auth, authpriv, news, mail); };
filter f_messages { level(info..warn) and not facility(auth, authpriv, mail, news, cron) and not program(syslog-ng) and not filter(f_iptables); };
filter f_everything { level(debug..emerg) and not facility(auth, authpriv); };
filter f_emergency { level(emerg); };
filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_crit { level(crit); };
filter f_err { level(err); };
filter f_iptables { match("IN=" value("MESSAGE")) and match("OUT=" value("MESSAGE")); };
filter f_acpid { program("acpid"); };
log { source(src); filter(f_acpid); destination(d_acpid); };
log { source(src); filter(f_authpriv); destination(d_authlog); };
log { source(src); filter(f_syslog); destination(d_syslog); };
log { source(src); filter(f_cron); destination(d_cron); };
log { source(src); filter(f_daemon); destination(d_daemon); };
log { source(src); filter(f_kernel); destination(d_kernel); };
log { source(src); filter(f_lpr); destination(d_lpr); };
log { source(src); filter(f_mail); destination(d_mail); };
log { source(src); filter(f_news); destination(d_news); };
log { source(src); filter(f_ppp); destination(d_ppp); };
log { source(src); filter(f_user); destination(d_user); };
log { source(src); filter(f_uucp); destination(d_uucp); };
#log { source(src); filter(f_debug); destination(d_debug); };
log { source(src); filter(f_messages); destination(d_messages); };
log { source(src); filter(f_err); destination(d_errors); };
log { source(src); filter(f_emergency); destination(d_console); };
log { source(src); filter(f_everything); destination(d_everything); };
log { source(src); filter(f_iptables); destination(d_iptables); };
#log { source(src); filter(f_messages); destination(knotifier); };
# Log everything to tty12
log { source(src); destination(console_all); };
|
It's probably related to this change in 3.2:
syslog-ng traditionally expected an optional hostname field even
when a syslog message is received on a local transport (e.g.
/dev/log). However no UNIX version is known to include this
field. This caused problems when the application creating the log
message has a space in its program name field. This behaviour has
been changed for the unix-stream/unix-dgram/pipe drivers if the
config version is 3.2 and can be restored by using an explicit
'expect-hostname' flag for the specific source.
You receive the warning because you use the unix-stream("/dev/log"); in your source. If you don't experience any problems with your local logs, there is nothing else to do except changing the first line to @version: 3.2
If your distro adds the hostname to log messages coming from /dev/log (which they rarely do), then include flags(expect-hostname) in the source.
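Putting that together, the only spots that would change in the posted configuration are the version header and (only in the rare hostname-adding case just described) the /dev/log source driver; everything else stays as it is:

```
@version: 3.2

source src {
    unix-stream("/dev/log" flags(expect-hostname));
    internal();
    file("/proc/kmsg");
};
```

If your local logs already look right, leave out the flags(expect-hostname) part and just bump the version line.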
Regards,
Robert Fekete
syslog-ng documentation maintainer
| Converting syslog-ng 3.0? format to 3.2 format |
1,474,879,267,000 |
I'm just wondering: why does less store its configuration in a binary file .less, which you have to generate with lesskey from the text-file .lesskey?
What could be the benefits of this behavior? Speed? But parsing a tiny human-readable configuration file can't take that long.
|
The source code for lesskey says:
* Copyright (C) 1984-2015 Mark Nudelman
which gives a hint that performance might have been a factor in deciding to use a compiled configuration file. Machines were a little smaller and slower 32 years ago.
| Why does less store its configuration in a binary file? |
1,474,879,267,000 |
I would like to edit the keyboard shortcuts used by the Enlightenment window manager and I would like to do so textually by editing some text/XML file. Sort of like how I edit ~/.config/openbox/lxde-rc.xml to change the keybindings used by openbox. Is this possible? I know of the GUI editor of keyboard shortcuts but I would prefer a textual method, if possible.
|
Enlightenment's configuration is stored in ".cfg" files using the Eet library. This is not human-readable, but you can use the vieet command to edit a textual representation of a file. For the key bindings, I believe these are stored in ~/.e/e/config/standard/e_bindings.cfg by default. Vieet also needs a 'section' to edit; use 'config' here.
The complete command would be vieet ~/.e/e/config/standard/e_bindings.cfg config.
| Is there any file I can edit to change the keybindings of Enlightenment 19? |
1,474,879,267,000 |
In the past, I've used bash consistently, because it's everywhere.
But recently I started to try zsh. I don't want to give up updating my .bashrc file, which is rsync'ed to all my servers.
So, in my .zshrc, I sourced my old .bashrc using the command source ~/.bashrc.
Everything goes well, except every time I open a new terminal window with zsh.
A bunch of information is printed to the screen. It looks like this:
pms () {
if [ -n "$1" ]
then
user="$1"
else
user="zen"
fi
python /Users/zen1/zen/pythonstudy/creation/project_manager/project_manager.py $user show "$2"
}
pmcki () {
if [ -n "$1" ]
then
user="$1"
else
user="zen"
fi
python /Users/zen1/zen/pythonstudy/creation/project_manager/project_manager.py $user check_in "$2"
}
zen1@bogon:~|⇒
These are function definitions in my .bashrc. They're triggered by source ~/.bashrc in my .zshrc file.
What I want is for .zshrc to source my .bashrc quietly, with all stderr and stdout output hidden.
Is it possible to do that? How?
|
emulate -R ksh -c 'source ~/.bashrc'
This tells zsh to emulate ksh while it's loading .bashrc, so it'll by and large apply ksh parsing rules. Zsh doesn't have a bash emulation mode, ksh is as close as it gets. Furthermore when a function defined in .bashrc is executed, ksh emulation mode will be enabled during the evaluation of the function as well.
Hopefully this will solve the errors that you're getting when zsh reads your .bashrc. If it doesn't, it should be easy to tweak your .bashrc so that it works well under both shells for the most part. Make a few parts conditional, such as prompt settings and key bindings which are radically different.
if [[ -z $ZSH_VERSION ]]; then
bind …
PS1=…
fi
If you really want to hide all output, you can redirect it to /dev/null (source ~/.bashrc >/dev/null 2>&1), but I don't recommend it: you're just hiding errors that indicate that something isn't working, that doesn't make that thing work.
| Source .bashrc in zsh without printing any output |
1,474,879,267,000 |
tmux ignores the configuration files: both /etc/tmux.conf and ~/.tmux.conf. Even I pass the path to the configuration file, using tmux -f path/to/tmux.conf it still doesn't load it.
The configuration file contains:
set -g default-terminal "screen-256color"
set -g status-bg "#105C8D"
set -g status-fg white
set-window-option -g xterm-keys on
set -sg escape-time 0
I see that the status bar background is lightblue even the configuration sets it to dark blue (#105C8D). Also, 256 colors are not supported. That's why I guess the file is not loaded.
How can I fix the issue?
Running tmux version 1.9a but had the same issue with 1.8, on Ubuntu 14.04.
I already saw:
Tmux not sourcing my .tmux.conf
https://superuser.com/q/188491
http://blog.sanctum.geek.nz/reloading-tmux-config/
https://stackoverflow.com/q/12069477
I don't get any errors regarding the config syntax.
|
Found a solution... which is probably not the best, but it's working.
Open tmux with tmux -2, which forces tmux to assume the terminal supports 256 colours.
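If you'd rather not type the flag every time, one possible follow-up (my addition, not part of the fix itself) is to alias it in your shell startup file:

```shell
# in ~/.bashrc or equivalent: always assume 256-colour support
alias tmux='tmux -2'
```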
| tmux ignores the configuration files |
1,474,879,267,000 |
Short form:
You can limit the bandwidth the scp uses with the -l switch, you pass a number that's in kbits/sec.
I'd rather set this in my .ssh/config file for certain names machines.
What's the equivalent named setting for -l? I haven't been able to find it.
Followup question:
Generally, not sure how to map back and forth between ssh command line options and config names, short of doing Google searches or manually comparing man pages on a case by case basis. Is there a table that directly equates the two?
Longer form of first question, with context:
I've started using ssh config quite a bit, especially now that I need to go through a proxy and do lots of port mappings. I even define the same machine more than once depending on what type of tunneling I need.
However, when uploading a large file, it's difficult to do anything else on my machine. Even though I have more download bandwidth than up, I think that scp saturates the link so even my small requests can't reach the Internet.
There's a fix for this, using the -l bandwidth command line switch for scp.
scp -l 1000 bigfile.zip titan:
I'd like to use this in my config instead, so I'd create an additional named entry called "titan-upload" and I'd use that as the target whenever I upload.
So instead of:
scp bigfile.zip titan:
I'd say:
scp bigfile.zip titan-upload:
Or even set different caps depending on where I am:
scp bigfile.zip titan-upload-from-work:
vs.
scp bigfile.zip titan-upload-from-home:
I'm generally on Mac and Linux.
|
Alas, as was mentioned, there doesn't seem to be a config option to limit bandwidth. (I checked the source code!)
Some possible solutions are to use an alias for scp, or perhaps a function. Bash is typically the default shell on both mac & linux, so this could work:
alias scp='scp -l 1000 '
-or-
alias scp-throttle='scp -l 1000 '
(note trailing space inside quotes!1) This would cause EVERY scp command you use to throttle bandwidth. Considering your situation, perhaps the best solution overall.
The second might be a good choice, since you could use scp for 'normal' transfers, and scp-throttle for slower transfers.
Or a function, with a bit more brains:
function scp { if [[ "$*" =~ "-upload" ]]; then command scp -l 1000 "$@"; else command scp "$@"; fi; }
Basically, if we find '-upload' anywhere in the arguments, we perform the transfer with the bw limit, otherwise, a normal transfer occurs.
This would allow you to continue using your multiple names/aliases to denote actions.
scp aaa titan: - would upload normally
scp aaa titan-upload: - would throttle
scp titan:aaa . - normal
scp titan-upload-from-home:aaa . - throttled
scp a-file-to-upload titan: - oops, throttled, not intentional!
EDIT:
1 - The trailing space INSIDE the alias allows further alias expansion after the aliased command. VERY helpful/useful. Bash Man Page, __ALIASES__ section
| Equivalent of scp -l bandwidth_cap for .ssh/config? |
1,474,879,267,000 |
This works:
nmap <silent> <S-t> :call InventTab()<CR>
function InventTab()
set expandtab!
if &expandtab
retab
echo 'spaces'
else
retab!
echo 'tabs'
endif
endfunction
I've tried to change it to a one-liner:
nmap <silent> <S-t> :set expandtab!<CR>:if &expandtab<CR>:retab<CR>:echo 'spaces'<CR>:else<CR>:retab!<CR>:echo 'tabs'<CR>:endif<CR>
The problem now is that it it insists on printing "Press ENTER or type command to continue" afterwards. If I add another <CR> it doesn't do that anymore, but then the echo output is cleared.
How should I write this to make sure I see the output but no extra stuff?
Result (see the accepted answer for details):
nmap <silent> <S-t> :set expandtab! ^V| if &expandtab ^V| retab ^V| echo 'spaces' ^V| else ^V| retab! ^V| echo 'tabs' ^V| endif<CR>
|
Does replacing the <CR>'s you have between commands with ^V| (where ^V is a literal ^V inserted by typing Ctrl-vCtrl-v) work?
| Rewrite a Vim function to a one-line map |
1,474,879,267,000 |
I'm using the Linux kernel's configuration tool Kconfig to manage the configuration of my own project.
(Please could someone with sufficient rep add "Kconfig" tag or whatever tag would be more appropriate). I didn't tag as "linux" or as "kernel" since my actual project is not the Linux kernel.
Given the following configuration:
mainmenu "Select/choice interaction test"
# Selectable menu granting access to multiple potentially independent config vars
menuconfig MULTICHOICE
bool "Multichoice"
config MULTICHOICE_A
bool "A"
depends on MULTICHOICE
config MULTICHOICE_B
bool "B"
depends on MULTICHOICE
config MULTICHOICE_C
bool "C"
depends on MULTICHOICE
# Choose exactly one item
choice CHOICE
prompt "Choice"
config CHOICE_A
bool "A"
config CHOICE_B
bool "B"
config CHOICE_C
bool "C"
endchoice
# Booleans which restrict/select other options from the previous sections
config SET_A
bool "Select A"
select CHOICE_A
select MULTICHOICE
select MULTICHOICE_A
config SET_B
bool "Select B"
select CHOICE_B
select MULTICHOICE
select MULTICHOICE_B
config SET_C
bool "Select C"
select CHOICE_C
select MULTICHOICE
select MULTICHOICE_C
Selecting items in a menuconfig works as expected. But setting the value of the choice does not work.
I can understand a potential problem (conflict) here - what if multiple options from the choice were selected implicitly by other configuration variables?
But in the sane case of only one choice option being implicitly selected by others, the value of the choice does not change.
For example, open that configuration file above with nconfig/menuconfig/gconfig/xconfig then select exactly one of SET_A/SET_B/SET_C. The value of CHOICE does not change at all.
Is there some other way of ensuring that only one option from a set is selected, but also forcing it to a certain value if other configuration variables are set a particular way?
|
Since I can't reply there, I'll note that Ciro Santilli's answer is not exactly right.
To quote the answer from there:
It is not possible to use select for non booleans according to > kernel docs v4.15
https://github.com/torvalds/linux/blob/v4.15/Documentation/kbuild/kconfig-language.txt#L104 says:
- reverse dependencies: "select" <symbol> ["if" <expr>]
[...]
Reverse dependencies can only be used with boolean or tristate symbols.
However, this question is actually about booleans, so in that light it should be possible. Sadly, I found this question as I was looking for the answer as well.
| Kconfig - "select" a choice |
1,474,879,267,000 |
I want to run a distro of my choice on my desktop, my laptop, and my work desktop. I'm about to switch from Ubuntu to Fedora, and I'm posting this thread is because I would like some advice on how to make this kind of labor a bit easier.
Installing all the stuff I need / customizing GNOME (or possibly Cinnamon) the way I like it gets tedious when I have to do it on three separate machines. I'm looking for ways to make this simpler; a way to set it up once, and replicate across all machines. What's the best approach?
I also want to avoid having to implement a change (e.g. a new hotkey) across three different computers. I read a neat trick somewhere about using Git on ~/, instructing it to ignore all files except the ones I want to easily update by fetching and resetting from GitHub. Does anyone have an alternative strategy that might be better?
|
So there are actually a couple of things you need to worry about:
1) Installing same software on all machines. This is fairly easy to do either by cloning via clonezilla or just getting a list of installed packages and having the package manager install a matching set on the other machines. Package names that are architecture dependent may cause issues if you are using different CPU architectures.
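A minimal sketch of the list-based approach from step 1, shown with hypothetical plain-text lists so the idea works regardless of package manager (on Fedora you'd generate the real lists with something like rpm -qa --qf '%{NAME}\n' | sort):

```shell
# Hypothetical package lists, one per machine:
printf 'vim\ngit\nhtop\n' | sort > /tmp/machineA.list
printf 'vim\ngit\ntmux\n' | sort > /tmp/machineB.list

# Packages installed on A but missing on B -- feed this file to the
# package manager on B to bring it in line:
comm -23 /tmp/machineA.list /tmp/machineB.list > /tmp/missing-on-B.list
cat /tmp/missing-on-B.list   # prints: htop
```

comm needs sorted input, hence the sort on both lists; swap -23 for -13 to see what B has that A lacks.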
2) Any system level configurations that you've changed, created, etc. Apache host configuration, etc. For me, I use the joe editor and it creates an auto backup file named filename~ when I edit filename. So I simply find all files ending in ~, remove the tilde, and make an archive, then extract on the other machine. Works fine, as long as you've got step 1 squared away.
3) Your home directory and whatever customization you make there. Desktop wall paper, auto launch apps, menu items, desktop icons, etc.
The good news is that there are multiple ways of dealing with this. There are configuration management tools, and creative ways of using other tools like git to check in/check out your configuration files, etc. Or you could do it yourself with a script or two.
| Same customized installation on multiple computers |
1,474,879,267,000 |
I haven't really got prior *nix systems experience to draw on, although I tried to set up a GUI based Linux some years back; this time I'm jumping directly to BSD because I like its philosophy and tight attention to quality and its resulting slightly conservative approach to release development cycles, and because I've got more experience with it than other *nix.
One area I'm really not confident about is tracking the config changes I set up. Everything config is a case of "edit /path/rc.things" or "add these lines to pkg-x.conf", or "set these variables in the environment or pkg". I'm comfortable with such directions; what bothers me is maintenance and reuse - how I track exactly what I changed as I was setting it up and as I use it, and keeping it easy for myself if I want to make those changes again in future on another system without 3 months of DIFFing every file in sight and reviewing line by line. I expect at first I'll have a lot of this, because I'll be learning from experience.
My aim is that if I want to set up a 2nd server with much of the system/pkg config similar or the same settings (as is common on a single LAN), or to backup, wipe and reinstall as part of my learning curve, or just to update the system, I don't really want a txt-file list I've kept, that lists 400 files with fragmented config on them, all to be laboriously manually re-created or to have multiple lines edited in console. I also won't want to just restore all files unthinkingly, or to find there are files which did contain config or other history/activity information that I didn't know about, or missing environment data, and spend forever trying to work out where stuff might be hidden that needs to be copied or edited in on the new system.
I'm not sure there is ever a truly easy solution to this (on any major OS) - even Windows with a GUI, I'd have to consider files and entries stored in all kinds of directories, registry branches, and so on. But at least I have many years trial-and-error with Windows so I have a good idea what to grab depending how the machine has been used and I can do it fairly quickly package by package. I know which packages I use, I can just reinstate its config directory or registry settings, and which I need to do via its GUI or some other way, and where there are useful system tweaks to add.
I also know in which cases it's safe to just copy a folder to reinstate my previous config or state, and in which cases it's best not to do that as other stuff needs to be set as well, and let the system/pkg do it the long way.
I don't have that know-fu on BSD yet.
On a positive note, I feel I have enough knowledge and experience to work out roughly what I need to do, as I do it, and to get a sense of enough of the security and good practices to not make a complete disaster of it - or I hope I do anyway. So using BSD hands-on is the only real way for me to learn, at this point.
How can I approach this area of BSD use/management, and what solutions do people who know BSD tend to ultimately end up using?
Put another way, how can I track, simplify, and reinstate settings in a less laborious or time-consuming manner (with a view to selective reinstatement, without days of console file editing and diff-ing!) as I begin to move into using BSD properly? And what knowledge or pkgs will help, if it's a matter of workflow and process?
|
A lot of the work can be isolated. There are a lot of the commonly edited files in /etc which can be mirrored in /usr/local/etc. Put your local changes in those, and generally they'll be picked up.
rc.conf is a bit messy, but you can put one line in there to grab stuff from elsewhere. periodic.conf works in much the same way.
rc.d files (if any) can go in /usr/local/etc/rc.d, separating them from the system ones.
You don't have to edit syslog.conf or newsyslog.conf because you can use small files in /usr/local/etc/{newsyslog,syslog}.conf.d to do what you want. Copying over these directories is much easier than editing the original single files. There are various other directories that end in .d where you can put small files that are all executed as part of the original single file.
Beware of syslog.conf.d. You have to end all the filenames in there with .conf or they are ignored!
There are also (for example, in /etc and /boot) files ending in .local. These include /boot/loader.conf.local. These are not nicely detached from the main system directories, but the fact that they are named like that makes them easier to notice and maintain.
If you have kernel configuration files, keep them in (say) /root/config. Then, before a buildkernel, make symbolic links to them in /sys/i386/conf (or wherever). Otherwise, an update to /usr/src will wipe them out - it's easier to recreate the symlink than it is to recreate (or even restore) the kernel config file.
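The link step in concrete terms, demonstrated under /tmp so it's safe to try anywhere; on a real FreeBSD system the two directories would be /root/config and /sys/amd64/conf (or your architecture's conf directory), and MYKERNEL is a made-up config name:

```shell
# Stand-ins for /root/config and /sys/<arch>/conf:
mkdir -p /tmp/demo-keep /tmp/demo-conf

# Your edited kernel config lives in the safe location...
echo 'ident MYKERNEL' > /tmp/demo-keep/MYKERNEL

# ...and only a symlink sits where buildkernel expects it, so an
# update to /usr/src costs you nothing but re-running this one line:
ln -sf /tmp/demo-keep/MYKERNEL /tmp/demo-conf/MYKERNEL

readlink /tmp/demo-conf/MYKERNEL   # prints: /tmp/demo-keep/MYKERNEL
```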
Bear in mind that not all of these useful secondary files exist by default. That's why you have to look at manual pages for the 'main' files to see what alternatives are available.
In summary
For each of the files you find yourself editing, read its man page carefully. In most cases, you can edit or create a local file or put files in a local directory. This centralises nearly all of it under /usr/local/etc.
| Saving config/pkg dirs on existing BSD install for reuse on clean install, and safe ways to do it? |
1,474,879,267,000 |
I am about to start working on an embedded system which runs kernel v2.6.x.
It is configured to use its serial line as a TTY (accessible via e.g. minicom, stty), but I want to run IP over the serial line so that I can run multiple multiplexed sessions over the link (e.g. via UDP/TCP or SSH).
I don't have much more information about the boards yet (will post more when the documentation arrives), but assuming that the kernel provides reasonable abstraction over the hardware - what would be the process to configure it to run PPP or (C)SLIP over the serial link in place of TTY?
|
You would first disable getty running on your serial port device /dev/ttyS0 (or whatever it is named for your hardware) to free it (for example, by editing /etc/inittab and running telinit q - if you managed to steer away from systemd) and then you would run pppd(8) on it (either manually with appropriate parameters or via additional tools like wvdial)
| How to configure embedded kernel to use serial line for PPP instead of TTY |
1,474,879,267,000 |
The kernel configuration contains an NLS_UTF8 option. It can be found under File systems → Native language support. What does it do?
Its description maintains that it is needed for using a FAT or JOLIET CD-ROM filesystem. Is it necessary for an ext[234] filesystem?
|
FAT filenames (not file contents) are encoded in a country-specific manner; DOS called these encodings "codepages". They need to be present in the kernel so your console can correctly display the characters.
This also applies to the UTF-8 encoding of the Unicode character set.
This doesn't apply to ext filesystems though; read up here.
| What does the CONFIG_NLS_UTF8 kernel option do? |
1,474,879,267,000 |
What are the valid values (and what are they used for) for the static option in /etc/dhcpcd.conf file?
I'm configuring a network interface of a Raspberry (running raspbian stretch) by editing the /etc/dhcpcd.conf file.
Altough I was able to set up it correctly, I am curious about all the configuration options provided through this file, specifically for static configuration.
I read the man page of dhcpcd.conf and didn't find any explanation of the values the static option accepts. I wasn't able to find anything on google neither.
The man page of dhcpcd.conf just says this:
static value
Configures a static value. If you set ip_address then dhcpcd
will not attempt to obtain a lease and will just use the value
for the address with an infinite lease time. If you set
ip6_address, dhcpcd will continue auto-configuation as normal.
Here is an example which configures two static address,
overriding the default IPv4 broadcast address, an IPv4 router,
DNS and disables IPv6 auto-configuration. You could also use the
inform6 command here if you wished to obtain more information via
DHCPv6. For IPv4, you should use the inform ipaddress option
instead of setting a static address.
interface eth0
noipv6rs
static ip_address=192.168.0.10/24
static broadcast_address=192.168.0.63
static ip6_address=fd51:42f8:caae:d92e::ff/64
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
fd51:42f8:caae:d92e::1
Here is an example for PPP which gives the destination a default
route. It uses the special destination keyword to insert the
destination address into the value.
interface ppp0
static ip_address=
destination routers
After reading some tutorials all valid options I know are these:
ip_address
routers
domain_name_servers
domain_search
domain_name
JFYI my /etc/dhcpcd.conf configuration file looks like this:
# Inform the DHCP server of our hostname for DDNS.
hostname
# Use the hardware address of the interface for the Client ID.
clientid
# Persist interface configuration when dhcpcd exits.
persistent
# Rapid commit support.
# Safe to enable by default because it requires the equivalent option set
# on the server to actually work.
option rapid_commit
# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
# Most distributions have NTP support.
option ntp_servers
# A ServerID is required by RFC2131.
require dhcp_server_identifier
# Generate Stable Private IPv6 Addresses instead of hardware based ones
slaac private
# A hook script is provided to lookup the hostname if not set by the DHCP
# server, but it should not be run by default.
nohook lookup-hostname
# Static IP configuration for eth0.
interface eth0
static ip_address=192.168.12.234/24
static routers=192.168.12.1
static domain_name_servers=192.168.12.1
nogateway
|
I was wondering the same thing and also couldn't find any definitive answers out there, so I went digging.
I don't know if this is an exhaustive list, but here is a list of valid values for the static option that I have been able to glean from looking at the source code (available here):
ip_address
subnet_mask
broadcast_address
routes
static_routes
classless_static_routes
ms_classless_static_routes
routers
interface_mtu
mtu
ip6_address
These parameters are directly handled in the if-options.c file.
Here is where I am less certain about the list being exhaustive, and where I am speculating a bit about what is going on. As you have no doubt noticed, the list doesn't include domain_name_servers, etc. After parsing the config file and directly dealing with any of the above parameters, there can still be some parameters that have not been handled in if-options.c. I think these remaining parameters are dealt with by the default hook scripts, specifically the 20-resolv.conf hook script (in /usr/lib/dhcpcd/dhcpcd-hooks), which I believe covers only the following options:
domain_name
domain_name_servers
domain_search
As I said, I'm a bit unsure about the last bit as I didn't want to spend crazy amounts of time going through the source code. So any corrections would be very welcome.
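To tie the two lists together, here is a hypothetical static stanza using several of the values above (the interface name and all addresses are made up for illustration):

```
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static broadcast_address=192.168.1.255
static domain_name_servers=192.168.1.1 192.168.1.2
static interface_mtu=1400
```

If my reading of the source is right, the first four (plus interface_mtu) are handled directly in if-options.c, while domain_name_servers would be picked up by the 20-resolv.conf hook.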
| Configure static values in /etc/dhcpcd.conf |
1,474,879,267,000 |
I'm setting Arch Linux and I'd like to modify the standard dock bar, which is the file that I should edit?
EDIT: like this
I don't want to use docky
|
Panel settings are saved in ~/.config/xfce4/panel, but you could also try xfce4-settings-manager.
And to learn how it works you can just download many nice looking examples from here http://xfce-look.org/ and look into the config files that come with these themes.
| How to modify dock bar in XFCE4? |
1,474,879,267,000 |
I have two x86_64 kernels compiled on the same machine against the same code (4.15.0 in Linus' source tree).
The config files were produced by running make localmodconfig against that source, using different, larger original config files coming from different distros: Arch and Slackware respectively. I'll nickname them
arch config
and
slk config
for that reason.
The issue: running cat /proc/meminfo consistently reports about 55-60 MB more in the MemTotal field for arch than for slk:
MemTotal: 32600808 kB for arch
vs
MemTotal: 32544992 kB for slk
I say 'consistently' because I've tried the experiment with the same config files against earlier versions of the source (a bunch of the 4.15-rc kernels, 4.14 before that, etc., rolling over from one source to the next with make oldconfig).
This is reflected in the figures reported by htop, with slk reporting ~60MB less usage on bootup than arch. This is consistent with the htop dev's explanation of how htop's used memory figures are based on MemTotal.
My question is: any suggestions for which config options I should look at that would make the difference?
I of course don't mind the 60MB (the machine the kernels run on has 32 GB..), but it's an interesting puzzle to me and I'd like to use it as a learning opportunity.
Memory reporting on Linux is discussed heavily on these forums and outside in general, but searching for this specific type of issue (different kernels / same machine => different outcome in memory reporting) has not produced anything I found relevant.
Edit
As per the suggestions in the post linked by @ErikF, I had a look at the output of
journalctl --boot=#
where # stands for 0 or -1, for the current and previous boots respectively (corresponding to the two kernels). These lines do seem to reflect the difference, so it is now a little clearer to me where it stems from:
arch (the one reporting larger MemTotal):
Memory: 32587752K/33472072K available (10252K kernel code, 1157K rwdata, 2760K rodata, 1364K init, 988K bss, 884320K reserved, 0K cma-reserved)
slk (the one reporting smaller MemTotal):
Memory: 32533996K/33472072K available (14348K kernel code, 1674K rwdata, 3976K rodata, 1616K init, 784K bss, 938076K reserved, 0K cma-reserved)
That's a difference of ~55 MB, as expected!
I know the slk kernel is larger, as verified by comparing the sizes of the two vmlinuz files in my /boot/ folder, but the brunt of the difference seems to come from how much memory the two respective kernels reserve.
I'd like to better understand what in the config files affects that to the extent that it does, but this certainly sheds some light.
Second edit
Answering the questions in the comment by @Tim Kennedy.
Do you have a dedicated GPU, or use shared video memory
No dedicated GPU; it's a laptop with on-board Intel graphics.
and do both kernels load the same graphics driver?
Yes, i915.
Also, compare the output of dmesg | grep BIOS-e820 | grep reserved
As you expected, does not change. In all cases it's 12 lines, identical in every respect (memory addresses and all).
(Final?) edit
I believe it may just be as simple as this: the kernel reporting less MemTotal has much more of the driver suite built in; I just hadn't realized it would make such a noticeable difference..
I compared:
du -sh /lib/modules/<smaller-kernel>/modules.builtin
returns 4K, while
du -sh /lib/modules/<larger-kernel>/modules.builtin
returns 16K.
So in the end, I believe I was barking up the wrong tree: it won't be a single config option (or a handful), but rather the accumulated effect of many more built-in drivers.
|
I believe the picture is as in my last edit above; I just did not know enough about the process of reserving memory to realize when I asked the question: it all boiled down to the larger kernel having more of the drivers built in rather than modularized.
Incidentally, I did want to put this to some (remotely) practical use: I wanted to have as slim a kernel as I could, but still boot it without an initrd. I knew that
the smaller kernel (moniker arch above) is very lightweight and fast to boot, but not without an initramfs;
the larger kernel (slk) will boot without an initramfs, but takes longer.
So I made a compromise that I think I'll stick with for now. I took the two config files, call them large and small, and made sure that every unset config option in small is reflected in large.
First,
awk '/^# CO/ {print}' small > staging
grabs all lines in the small config file that are of the form
# CONFIG_BLAH is not set
and dumps them in a staging text file. Then,
for i in $(awk '{print $2}' staging) ; do sed -i "s/^$i=.*/# $i is not set/" large ; done
takes all lines from the config file large that set (via =y or =m) an option contained in staging and unsets them.
I then ran make oldconfig on the resulting file large in the kernel source directory and compiled. It all went through all right:
the new kernel boots some 3 seconds faster
without an initramfs
is much smaller in the sense that its /lib/modules/<kernel>/modules.builtin got cut down in half, from 16K to 8K.
Those are not much to write home about, but as I said this was puzzling me.. I think I'm all set on the issue now.
Presumably the more straightforward thing to do would have been to figure out once and for all precisely which drivers I need on this machine in order to boot without an initramfs, but I'll leave that for some other day. Besides, playing off one config against the other was a fun exercise in its own right.
| different kernels report different amounts of total memory on the same machine |
1,474,879,267,000 |
I have a number of versions of GNOME installed on a number of different hosts. All users have network mounted home directories. In some cases GNOME works poorly when reading configuration from the .gnome2 directory. I would like to read config files from version specific directories. Is there any way to specify this when starting GNOME? Environment variables perhaps? I know how to move the .gconf directories but this is not sufficient. I need to read the .gnome2 from a different path.
|
Looking at the developer documentation, it doesn't look as if it is possible to use directories other than the default. You could keep version-dependent directories and link the appropriate one to ~/.gnome2 at login; however, this breaks as soon as a user is logged in at two different hosts at the same time.
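For the single-host case, the login-time linking could be as simple as the following sketch (the version string and directory layout here are my own assumptions, not GNOME conventions):

```shell
# Sketch: at login, point ~/.gnome2 at a per-version directory.
# "2.32" is an assumed version string; derive the real one on each host
# (e.g. from gnome-session --version). Assumes ~/.gnome2 is absent or
# already a symlink -- ln -sfn cannot replace a real directory.
ver=2.32
mkdir -p "$HOME/.gnome2-$ver"
ln -sfn "$HOME/.gnome2-$ver" "$HOME/.gnome2"
```

This would go in a per-host login script; the two-hosts-at-once problem mentioned above remains.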
| How do I read alternate GNOME configuration files |
1,474,879,267,000 |
I am wondering how I can run a daemon, in this case NTP, with custom parameters.
For example in my Ubuntu PC I observe that I've got ntpd running this way:
$ ps aux | grep ntpd
ntp 5936 ... 0:00 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 119:127
You may notice that -g parameter.
But in my Gentoo PC I run the same command and I can observe that the ntp daemon is not running with that -g parameter and I want to add it!
Is this a distribution specific issue? How can I handle this?
|
Guessing from the Gentoo Wiki, editing NTPD_OPTS in /etc/conf.d/ntpd should do the trick (leaving aside the question of whether -g is advisable).
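A sketch of what that might look like in /etc/conf.d/ntpd (the -u value is an assumption on my part; check your existing file for the distribution default):

```
# /etc/conf.d/ntpd
# extra options passed to ntpd; -g permits one large initial clock step
NTPD_OPTS="-g -u ntp:ntp"
```

Restart the service afterwards (rc-service ntpd restart on OpenRC) and verify with ps that -g now appears on the command line.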
| How can I run a daemon with custom parameters |
1,474,879,267,000 |
I'm trying to setup nginx and cgit on FreeBSD but nginx can't access /var/run/fcgiwrap/fcgiwrap.sock.
In my /etc/rc.conf I already set fcgiwrap_user="www", and www is also the user nginx runs as.
When I make fcgiwrap.sock owned by www by performing chown www /var/run/fcgiwrap/fcgiwrap.sock, everything works the way I want.
However this is of course not the proper way to do this, and it will only last until reboot.
I was under the assumption that setting fcgiwrap_user="www" would also determine this.
Am I missing something?
Update:
I noticed that when I use service fcgiwrap start or restart, the message Starting fcgiwrap is followed by chmod: /var/run/fcgiwrap/fcgiwrap.sock: No such file or directory. However /var/run/fcgiwrap/fcgiwrap.sock does exist afterwards.
|
The RC script is located at /usr/local/etc/rc.d/fcgiwrap.
Looking at the code, fcgiwrap_user sets the owner of the process running the daemon (default root).
You need to set fcgiwrap_socket_owner="www" to set the owner of the socket.
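Putting it together, the relevant part of /etc/rc.conf would look something like this (variable names as used by the rc script; double-check them against /usr/local/etc/rc.d/fcgiwrap on your system):

```
fcgiwrap_enable="YES"
fcgiwrap_user="www"
fcgiwrap_socket_owner="www"
```

After service fcgiwrap restart, the socket should come up owned by www without any manual chown.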
| Nginx on FreeBSD: fcgiwrap.sock permission denied |
1,474,879,267,000 |
I am using Red Hat Enterprise Linux Server release 5.9 (Tikanga).
For installation of any application, it is very important to know the system configuration, is it 32 bit or 64 bit system, Installed OS is 32 bit or 64 bit etc...
Is there any command which provides me information about my system configuration just like Windows provide when go to Control Panel\System and Security\System.
Please suggest...
|
Use uname:
uname -i
For more information, see
man uname
If you get i386 (or a similar 32-bit identifier), you have a 32-bit Linux OS; if you get x86_64, you have a 64-bit Linux.
| Redhat Linux: How to know my system configuration? |
1,474,879,267,000 |
With regard to this question: How to script make menuconfig, what is the difference between running make and make oldconfig?
I assume that if you run make, it uses the old .config anyway.
|
make oldconfig is used to apply your old .config file to a newer kernel.
For example, suppose you have the .config file of your current kernel and you have downloaded a new kernel you want to build. Since the new kernel will very likely have some new configuration options, you will need to update your config. The easiest way to do this is to run make oldconfig, which will prompt you with questions about the new configuration options (that is, the ones your current .config file doesn't have). If you want to accept the defaults for all new options non-interactively, make olddefconfig does that.
| What is the difference between 'make' and 'make oldconfig' |
1,474,879,267,000 |
# Automatically generated file; DO NOT EDIT
Is at the header of the kernel configuration file: /usr/src/linux/.config
My question is why shouldn't you edit this file? If I know exactly what I need, or what I want to remove, then what is the problem with editing this file directly?
|
It's considered unsafe to edit .config because there are CONFIG options which have dependencies on other options (needing some to be set, requiring others to be turned off, etc.). Other options aren't meant to be set by the user at all, but are set automatically by make config (more precisely, by Kconfig) depending on architecture details, e.g. the availability of some hardware dependent on the architecture variant, like an MMU.
Changing .config without using Kconfig has a high chance of missing some dependency, which will either result in a non-functioning kernel, build failures, or unexpected behaviour (i.e. the change being ignored, which usually is very confusing).
| Why should you not edit the .config kernel configuration file? |
1,587,666,937,000 |
I am trying to compile the 5.4 kernel with the latest stable PREEMPT_RT patch (5.4.28-rt19) but for some reason can't select the Fully Preemptible Kernel (RT) option inside make nconfig/menconfig.
I've compiled the 4.19 rt patch before, and it was as simple as copying the current config (/boot/config-4.18-xxx) to the new .config, and the option would show. Now I only see:
No Forced Preemption (Server)
Voluntary Kernel Preemption (Desktop)
Preemptible Kernel (Low-Latency Desktop)
And if I press F4 to "ShowAll", I do see the option:
XXX Fully Preemptible Kernel (Real-Time)
But cannot select it. I've tried manually setting it in .config with various PREEMPT options like:
CONFIG_PREEMPT=y
CONFIG_PREEMPT_RT_BASE=y
CONFIG_PREEMPT_RT_FULL=y
But it never shows. I just went ahead and compiled it with CONFIG_PREEMPT_RT_FULL=y (which is overwritten before when saving the make nconfig), but it seems it's still not the fully preemptive kernel that is installed.
With 4.19, uname -a would show something like:
Linux 4.19.106-rt45 #2 SMP PREEMPT RT <date>
or something like that, but now it will just say:
Linux 5.4.28-rt19 #2 <date>
Anyone know what I'm missing here?
OS: CentOS 8.1.1911
Kernel: 4.18.0-147.8.1 -> 5.4.28-rt19
|
Please enable EXPERT mode after launching make nconfig/menuconfig. Then you'll be able to select Fully Preemptible Kernel (RT) option.
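For reference, the resulting .config should then contain lines along these lines (option names as in the 5.4 RT series; verify against your own tree):

```
CONFIG_EXPERT=y
CONFIG_PREEMPT_RT=y
```

As far as I can tell, the 5.4-era patch renamed CONFIG_PREEMPT_RT_FULL to CONFIG_PREEMPT_RT, which would explain why the 4.19-style option names in the question no longer take effect.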
| Trouble selecting "Fully Preemptible Kernel (Real-Time)" when configuring/compiling from source |
1,587,666,937,000 |
I’m using Amazon Linux,
[davea@mymachine ~]$ uname -a
Linux mymachine.mydomein.org 4.4.35-45.83.amzn1.x86_64 #1 SMP Wed Jul 27 22:37:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
In my /etc/ssh/sshd_config, I have no value set for “ServerAliveInterval”. In this answer — What options `ServerAliveInterval` and `ClientAliveInterval` in sshd_config exactly do? , they say “Setting a value of 0 (the default) will disable these features so your connection could drop if it is idle for too long”. What does that mean? In precise terms, what is the default amount of time the connection will stay alive if this parameter is omitted? Assume that I have set a value of “1000000000000” on the client side.
|
The value for ServerAliveInterval means that "if no data has been received from the server within this time then send a NULL message to the server".
Similarly, ClientAliveInterval means that "if no data has been received from the client within this time then send a NULL message to the client".
The default values are typically 0 which means these functions are disabled.
The main use of this is to prevent intermediate routers and firewalls from thinking a session is idle, and dropping it. It has no real impact on an ssh server, itself.
For example, many home NATting routers will drop idle sessions after a period of time (the exact time depends on your router; I've seen values from 1 hour to over 21 days). By setting the ServerAliveInterval you fake out this idle timeout in the router by making sure there's always some traffic within the router interval.
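For illustration, a client-side keepalive can be set in ~/.ssh/config (the interval values here are arbitrary):

```
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
```

With these settings the client sends a probe after 60 seconds of silence and disconnects only after 3 consecutive unanswered probes. Note that a huge value like 1000000000000 means probes are essentially never sent, which behaves like the disabled default rather than keeping the connection alive.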
| What is default value of ServerAliveInterval? |
1,587,666,937,000 |
What's the best way to edit the /etc/nsswitch.conf file other than just using sed -i to edit it in place or overwrite it in total.
For our build we need to make changes to this file without destroying it if future changes occur in later packages.
I was hoping there was a tool to help interact with it, but that doesn't seem to exist. This is on Red Hat.
|
Take a backup, as: cp /etc/nsswitch.conf /etc/nsswitch.conf.original
Now you can use sed -i or open /etc/nsswitch.conf with some editor like vim and do the changes.
If error occurs, you can revert back to the original version,
cp /etc/nsswitch.conf.original /etc/nsswitch.conf
I also got a tool suggestion from Ulrich in the chat: Augeas, which is made for editing configuration files. From their home page, I see:
Augeas is:
An API provided by a C library
A command line tool to manipulate configuration from the shell (and shell scripts)
Language bindings to do the same from your favorite scripting language
Canonical tree representations of common configuration files
A domain-specific language to describe configuration file formats
| Editing nsswitch.conf file safely |
1,587,666,937,000 |
When configuring the Linux kernel, what are the advantages and disadvantages of enabling UTS namespaces? Would the new system be harmed if UTS namespaces were disabled?
|
UTS namespaces are one of several kinds of per-process namespaces, which allow processes to have different views of different system resources. A process can, for example, have its own namespace for each of the following:
mountpoints
PID numbers
network stack state
IPC - inter-process communications
The UTS namespace specifically isolates the hostname and NIS domain name, i.e. the fields of the utsname structure returned by uname(2).
NOTE: creating namespaces without privileges was limited to root up until version 3.8+ of the Linux Kernel.
unshare
You can use the command unshare to disassociate a parent's namespace from a child process.
$ unshare --help
Usage: unshare [options] <program> [args...]
Run program with some namespaces unshared from parent
-h, --help usage information (this)
-m, --mount unshare mounts namespace
-u, --uts unshare UTS namespace (hostname etc)
-i, --ipc unshare System V IPC namespace
-n, --net unshare network namespace
For more information see unshare(1).
compiler option
CONFIG_UTS_NS
Support uts namespaces. This allows containers, i.e. vservers, to use
uts namespaces to provide different uts info for different servers. If
unsure, say N.
| Enabling UTS Namespaces in the Linux Kernel |
1,587,666,937,000 |
I've used several unix tools to display PDFs, like xpdf, evince, epdfview…
What I'm looking for is not very complicated.
I'd like to display a full page inside the application window with reduced margins (no margins or very small margins) and be able to go to the next/previous page by simply pressing one button.
I don't know any PDF viewer which can be configured to do that. Does anyone know how it could be done?
|
I much enjoy using mupdf. There is no visible UI, pages are rendered with essentially no margins, and the default keybindings already do what you describe: the arrow keys (or . and ,) move to the next/previous page.
| Looking for an efficient way to display PDFs |
1,587,666,937,000 |
I compiled a package for Solaris 11 Express that has some library dependencies, which I also compiled from source and installed in the usual /usr/local. (And Solaris doesn't even have /usr/local pre-created!) So, my program runs correctly, but I have to run it with
LD_LIBRARY_PATH=/usr/local/lib ./myprogram
or it complains that it couldn't find libsomething.so.
How do I include /usr/local/lib in the library search path, system-wide? Linux has /etc/ld.so.conf -- Solaris doesn't.
|
Check out the section about setting up the linker: http://bwachter.lart.info/solaris/solfaq.html
You want the crle command. For example, crle -u -l /usr/local/lib appends /usr/local/lib to the default library search path (running crle with no arguments prints the current configuration).
| Does Solaris have an equivalent to /etc/ld.so.conf? |
1,587,666,937,000 |
I have been trying to run a command to clean up some of my temporary files every time I exit a shell.
I initially thought that this would be the job of .zlogout but it doesn't seem to be executed if I, for example, have multiple shells open in my terminal emulator (kitty).
From what I have found in the doc, .zlogin and .zlogout apply to login shells only, which, correct me if I am wrong, is not the case when you simply open different tabs or windows in your terminal emulator.
What is the equivalent of .zlogout for non-login shells and alternatively what would be the recommended way to achieve a similar effect in non-login shells?
|
There's an example in this zsh guide which sources ~/.zlogout for non-login shells
using the TRAPEXIT function. This seems to be exactly what you want.
TRAPEXIT() {
# commands to run here, e.g. if you
# always want to run .zlogout:
if [[ ! -o login ]]; then
# don't do this in a login shell
# because it happens anyway
. ~/.zlogout
fi
}
Add this function to your ~/.zshrc.
| How to run a command every time that I exit a zsh shell (including non-login shells) |
1,587,666,937,000 |
Inside the xorg.conf.d/ for example, we have three files:
00-keyboard.conf 10-monitor.conf 30-touchpad.conf
I know that the 2-digit number determine the precedence that each file is read so 00-keyboard.conf is read before 10-monitor.conf.
But I noticed that documentation on different sites all seems to use the same convention, e.g., using 10-monitor.conf for monitor configurations.
So, what I want to know is if there are numbers mapped to certain devices or if is just a convention that everyone stick with and I can use whatever 2-digit number that I want (according to precedences, of course). And if they are mapped, where can I find them?
I have searched about it but everything I found just mention what I have just said and doesn't mention if I can use other numbers or not. Even the xorg.conf[5] man page does not mention anything.
|
There is no mapping to devices or anything like that. The numbering is only used to enforce an order, and you don’t even have to name your configuration files with a number at the start — it’s just easier to reason about order with numbers.
So you can use any scheme you want.
| What is the numbering convention in .conf files(inside a conf.d)? |
1,587,666,937,000 |
I am using vi with
set ts=4
set number
configuration, and I am tired of setting these each time I open the vi editor on the command line. So I want to configure vi with a configuration file in which I can embed the settings listed above; however, I could not find the config file for vi in Ubuntu. What is the exact location of the vi config file in Ubuntu, or in which way can I configure vi? Any ideas?
Note: This question is only specific to vi, not vim.
|
Open a new file in your home directory called .exrc, and put your configurations therein.
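With the settings from the question, ~/.exrc would contain exactly:

```
set ts=4
set number
```

Classic vi reads ~/.exrc at startup (and vim also falls back to it when no ~/.vimrc exists), so these take effect in every session.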
| How to configure vi in Ubuntu |
1,587,666,937,000 |
I am trying to change the font of dmenu. I am running the i3 window manager.
$ dmenu_run -v
/bin/bash: line 1: dmenu-4.5,: command not found
$ dmenu_run -fn "-xos4-terminus-medium-r-*-*-14-*"
cannot load font '-xos4-terminus-medium-r-*-*-14-*'
I want to use the following font - font pango:DejaVu Sans Mono 12 because it is the same font I am using inside my i3 config. However, no matter what font I try to use, dmenu reports it cannot load the font.
How do I get dmenu to allow me to load the above font?
|
I was able to resolve my issue.
In my .i3/config file I am using this line -
# start dmenu (a program launcher)
# bindsym $mod+d exec dmenu_run
# There also is the (new) i3-dmenu-desktop which only displays applications
# shipping a .desktop file. It is a wrapper around dmenu, so you need that
# installed.
bindsym $mod+d exec --no-startup-id i3-dmenu-desktop --dmenu="dmenu -fn 'DejaVu Sans Mono-15'"
I then opened my 'source' folder that I created under my home directory -
$ cd /home/me/Applications
$ git clone http://git.suckless.org/dmenu
$ cd dmenu
I then ran these commands
$ make (to make sure it compiles)
$ sudo make install clean (to install it)
I then had the latest version of dmenu which I was able to confirm by doing -
$ dmenu_run -v (it reports version 4.6)
This version supports Xft font rendering. If the original make fails, be sure to install the compiler and build tools; searching for the exact error message will usually point to the missing package.
Thanks @wieland.
| Custom font with dmenu_run in i3 |
1,587,666,937,000 |
Sendmail works through sending to a smarthost, but can't find local users.
# sendmail -bv [email protected]
[email protected]... User unknown
# grep LocalUser /var/log/maillog
Sep 8 03:48:30 myhost sendmail[6678]: r887mUs3006678: [email protected]... User unknown
but ...
# ls /home|grep LocalUser
/LocalUser
and ...
# grep LocalUser /etc/passwd
LocalUser:x:1001:1001:LocalUser:/home/LocalUser:/bin/bash
How can I configure sendmail to find localusers? How should I diagnose this?
|
Sendmail and local users with uppercase letters
Diagnose
Sendmail's default configuration converts local user/mailbox names to all-lowercase before the delivery attempt. In your case, email to [email protected] is delivered by sendmail to the nonexistent localuser instead of the existing LocalUser.
Possible fixes
Do not use usernames with uppercase letters
OR
Specify the one correct uppercase/lowercase mix (for a given lowercase-only string). This requires modifications in the sendmail.mc and aliases files.
http://www.sendmail.org/faq/section4.html#4.17
Subject: Q4.17 -- How do I handle user names with upper-case characters?
sendmail.mc file (requires recompilation into sendmail.cf file):
MODIFY_MAILER_FLAGS(`LOCAL', `+u')dnl
aliases file (requires recompilation with newaliases command):
# lowercase version to real for accounts with uppercase letters
localuser: LocalUser
| Sendmail can't find local users (with uppercase letters) |
1,587,666,937,000 |
My .muttrc file is starting to get quite large.
What I would prefer is to create a ~/.mutt/config directory to store various
config files for the account, status bar, composition,... etc and then source them all into my main .muttrc file.
How do you do this?
|
Create a folder ~/.mutt
Split up your configuration in your folder
source all the config files in your folder, i.e. (from my ~/.mutt/muttrc):
source ~/.mutt/rc
source ~/.mutt/hooks
source ~/.mutt/macros
source ~/.mutt/ml
source ~/.mutt/gnupg
| Separate .muttrc into parts |
1,587,666,937,000 |
I need to deploy my Firefox configured and with extensions in NixOS. I want to do that declaratively (in configuration.nix) and I do not want to use home-manager.
Via user profile:
Configuring possible via preference files
Loading extensions is not possible, support has ended with Firefox 74
Via installation directory:
Configuration + extensions managable via policies.json
Firefox looks for the config in /nix/store/<hash>-firefox-unwrapped-74.0.1/lib/firefox/distribution/policies.json (verified with strace).
Hence the question: How do I add this file to the Firefox nixpkgs package? (Bonus question: How do I get a file from my Github Repo there?)
I'm rather new to NixOS. I consulted the manuals about overriding, overlays, wrapping and more, but I couldn't manage to pull it off. I tried firefox, firefox-bin and firefox-unwrapped.
|
I took a glance at the firefox Nix expressions and didn't see a way to provide a policies.json.
If you modify the package such that you can provide the policy as an input to the derivation, it would work, but then users would be burdened with compiling Firefox themselves, because the file would be a build input.
If Firefox provides a way to specify where to find the policies file at runtime, that might be an ideal solution. Otherwise, you can add a patch to the Firefox package which modifies the source code to look for the policies file at say... /etc/firefox/policies.json. With that change in place, you can use the environment NixOS module in /etc/nixos/configuration.nix to create the policies. Something like this:
environment.etc."firefox/policies.json".text = "INSERT POLICY HERE";
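A slightly fuller sketch of that approach, assuming the patched lookup path /etc/firefox/policies.json and using builtins.toJSON so the policy can be written as a Nix attribute set (the policy names are illustrative entries from Firefox's policy templates):

```nix
{
  environment.etc."firefox/policies.json".text = builtins.toJSON {
    policies = {
      DisableTelemetry = true;
      DisablePocket = true;
    };
  };
}
```

For the bonus question, the same etc entry could point at a fetched file instead, e.g. environment.etc."firefox/policies.json".source = builtins.fetchurl { url = "<your raw GitHub URL>"; sha256 = "<hash>"; }; (URL and hash are placeholders you would fill in).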
| How to use Firefox Policies in NixOS / How to add config files to a Nix package? |
1,587,666,937,000 |
Currently I am using the following command for executing authentication request to obtain the server certificate (FINGERPRINT) and OpenConnect-Cookie:
openconnect --authenticate --user=<username> "VPN host"
Hereby I always have to enter my password in a later appearing user prompt.
Is there an option available to pass-over the password to OpenConnect already in the upper command?
For example, by extending the command like...
openconnect --authenticate --user=<username> password=<password> "VPN host"
... ?
The challenge is:
The user RuiFRibeiro had the idea just to echo the password within the command. Unfortunately this does not work in our case, because the server provides one more user prompt before reaching the second prompt (= password prompt).
It will happen like that:
First user prompt, server saying:
"Please choose if you want to tunnel all traffic or only specific traffic. Type in Tunnel all or Tunnel company."
Second user prompt, server saying:
"Please enter your password."
As you can see, a simple echo would give the wrong answer to the wrong question. :-)
For a possible expect-script the real (exact) server request before inserting text is like followed:
First prompt: GROUP: [tunnel MyCompany|tunnel all]:, answer-insertion should be tunnel MyCompany
Second prompt: Password:, answer-insertion should be 123456789
|
Usually, VPN software does not accept the user's password as a command-line argument, because it is considered a security risk.
A possible solution is feeding the password via a pipe as in:
echo -e "Tunnel all\nYourPassword" | openconnect --authenticate --user=<username> "VPN host"
If we are talking about you being interested in this method to write a script:
be sure to understand the security implications of having your password in a file, and restrict the read rights of that file only to the user running the openconnect command.
PS Replace YourPassword with your real password
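Given the two prompts described in the question, the answers can be fed on stdin in the order the prompts appear. A minimal sketch: the group name and password are the example values from the question, and the actual openconnect invocation is left commented out since it needs a live VPN host.

```shell
# Answers for the two server prompts, in order:
# 1) the GROUP selection, 2) the password.
vpn_answers() {
  printf '%s\n' 'tunnel MyCompany' '123456789'
}

# Pipe them into openconnect (run as root), e.g.:
# vpn_answers | openconnect --authenticate --user=<username> "VPN host"
vpn_answers
```

The same security caveat applies: the password ends up in the script, so restrict its read permissions.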
| OpenConnect: Passing-over user password when executing authentication request? |
1,587,666,937,000 |
I'm currently running Fedora on an old Thinkpad T530 and just configuring it to my needs. I'm using GDM/Gnome Desktop (on Xorg, not Wayland). I've installed and configured the proprietary NVIDIA driver.
Now I want to extend my battery's lifetime by forcing the driver to run in the lowest performance mode while on battery (the current situation), but I want the adaptive mode back when on AC. I wasn't able to figure it out on my own and can't find any reference for NVIDIA's driver options.
My current xorg.conf contains:
Section "Screen"
Identifier "nvidia"
Device "nvidia"
Option "AllowEmptyInitialConfiguration"
Option "ConstrainCursor" "no"
Option "RegistryDwords" "PowerMizerEnable=0x1 PerfLevelSrc=0x2222; PowerMizerDefault=0x3; PowerMizerDefaultAC=0x3"
EndSection
I've found these options on the NVIDIA Developer Forum.
Can someone please help me (or point me to a reference) so I can run a) the battery in the lowest (forced) performance mode and b) AC in adaptive mode?
Thank you in advance.
|
I got it! After searching the web again, I've found this site on reddit.
Basically the change is:
PerfLevelSrc to 0x2233, which means: fixed for battery (22) and dynamic for AC (33)
PowerMizerDefaultAC to 0x2, which means: set AC to dynamic/adaptive behaviour.
For clarification, setting PowerMizerDefault (same applies for PowerMizerDefaultAC) means:
PowerMizerDefault=0x1: maximum performance
PowerMizerDefault=0x2: dynamic performance
PowerMizerDefault=0x3: minimum performance
My whole line is now:
Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2233; PowerMizerDefault=0x3; PowerMizerDefaultAC=0x2"
I'll just leave this here for others seeking guidance.
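For completeness, putting that line back into the question's xorg.conf, the whole Screen section would read as follows (identifiers and the other options are copied from the question; untested as a whole):

```
Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration"
    Option "ConstrainCursor" "no"
    Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2233; PowerMizerDefault=0x3; PowerMizerDefaultAC=0x2"
EndSection
```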
| Xorg Conf: NVIDIA performance settings for AC/battery? |
1,587,666,937,000 |
I would like to create a key binding, using the key sequence C-x r, to reload the configuration of bash, stored in ~/.bashrc, and the one of the readline library stored in ~/.inputrc.
To reload the configuration of readline, I think I could use the re-read-init-file function which is described in man 3 readline:
re-read-init-file (C-x C-r)
Read in the contents of the inputrc file, and incorporate any bindings or variable assignments found there.
To reload the configuration of bash, I could use the source or . command. However, I'm not sure what's the best way to combine a shell command with a readline function. So, I came up with the combination of 2 key bindings:
bind '"\C-xr": ". ~/.bashrc \C-x\C-z1\C-m"'
bind '"\C-x\C-z1": re-read-init-file'
When I hit C-x r in bash, here's what happens:
`. ~/.bashrc`: the command is inserted on the command line
`C-x C-z 1`: this sequence is typed, which is bound to `re-read-init-file`
`C-m`: this is hit, which executes the current command line
It seems to work because, inside tmux, if I have 2 panes, one to edit ~/.inputrc or ~/.bashrc, the other with a shell, and I change a configuration file, after hitting C-x r in the shell, I can see the change taking effect (be it a new alias or a new key binding), without the need to close the pane to reopen a new shell.
But, is there a better way of achieving the same result? In particular, is it possible to execute the commands without leaving an entry in the history? Because if I hit C-p to recall the last executed command, I get . ~/.bashrc, while I would prefer to directly get the command which was executed before I re-sourced the shell configuration.
I have the same issue with zsh:
bindkey -s '^Xr' '. ~/.zshrc^M'
Again, after hitting C-x r, the command . ~/.zshrc is logged in the history. Is there a better way to re-source the config of zsh?
|
Don't inject a command into the command line to run it! That's very brittle — what you're trying assumes that there's nothing typed at the current prompt yet. Instead, bind the key to a shell command, rather than binding it to a line-editing command.
In bash, use bind -x.
bind -x '"\C-xr": . ~/.bashrc'
If you also want to re-read the readline configuration, there's no non-kludgy way to mix readline commands and bash commands in a key binding. A kludgy way is to bind the key to a readline macro that contains two key sequences, one bound to the readline command you want to execute and one bound to the bash command.
bind '"\e[99i~": re-read-init-file'
bind -x '"\e[99b~": . ~/.bashrc'
bind '"\C-xr": "\e[99i~\e[99b~"'
In zsh, use zle -N to declare a function as a widget, then bindkey to bind that widget to a key.
reread_zshrc () {
. ~/.zshrc
}
zle -N reread_zshrc
bindkey '^Xr' reread_zshrc
| How to create a key binding re-sourcing the shell configuration without a new command being saved in the history? |
1,587,666,937,000 |
To authenticate in a corporate network I have to run the following command:
$ sudo wpa_supplicant -i eth0 -D wired -c /etc/wpa_supplicant/mywired.conf -B
The configuration script loaded thereby looks like this:
# global configuration
ctrl_interface=/var/run/wpa_supplicant
#ctrl_interface_group=wheel
ap_scan=0
# 802.1x wired configuration
# eap-ttls
network={
key_mgmt=IEEE8021X
eap=TTLS
identity="[email protected]"
anonymous_identity="[email protected]"
password="password"
ca_cert="/home/user/deutsche-telekom-root-ca-2.pem"
phase2="auth=PAP"
eapol_flags=0
priority=5
}
# eap-peap
network={
key_mgmt=IEEE8021X
eap=PEAP
identity="[email protected]"
anonymous_identity="[email protected]"
password="password"
ca_cert="/home/user/deutsche-telekom-root-ca-2.pem"
phase2="auth=MSCHAPV2"
eapol_flags=0
priority=10
}
Without the configuration I do not get an IP address assigned via DHCP.
How can I automatically apply this configuration at startup? I am running Ubuntu 14.10.
|
Just put your command in /etc/rc.local (before any final exit 0 line). Make sure it's on a single line. Since rc.local runs as root, the sudo prefix is not needed:
wpa_supplicant -i eth0 -D wired -c /etc/wpa_supplicant/mywired.conf -B
I assume that your connection is stable and not dropping. Do comment if your connection drops. I'll make a script. Have to sleep now.
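On a systemd-based system, an alternative to rc.local would be a small service unit. The following is an untested sketch: the unit name, the binary path and the WantedBy target are assumptions and may need adjusting for your distribution.

```ini
# /etc/systemd/system/wpa_supplicant-wired.service
[Unit]
Description=wpa_supplicant for wired 802.1X on eth0
After=network.target

[Service]
# -B backgrounds the daemon, hence Type=forking
Type=forking
ExecStart=/sbin/wpa_supplicant -i eth0 -D wired -c /etc/wpa_supplicant/mywired.conf -B

[Install]
WantedBy=multi-user.target
```

It would then be enabled once with systemctl enable wpa_supplicant-wired.service.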
| How to automatically apply wpa_supplicant configuration? |
1,587,666,937,000 |
I've installed jack2 as a substitute for jack from the official repositories (I'm on Arch Linux):
# pacman -S jack2
I need to use jack2 because it provides jackd (it's needed for another application), while jack2_dbus does not provide it.
According to this manual, in order to configure such parameters as sampling rate, one should use jack_control, but it is available only for jack2_dbus (which I cannot use).
I also have read this article, but unfortunately, I can't follow it (it was written for jack, apparently jack2 does not include jackstart anymore):
[mark@arch ~]$ jackstart -R -d alsa -d hw:1U -p 512 -r 48000 -z s
bash: jackstart: command not found
I would like to somehow set the default audio card, because when an application uses jack on my system, it uses the card with index 0, and this is not what I want (I want, say, the audio card with index 2).
Here is my ~/.asoundrc:
#
# ALSA Configuration File
#
defaults.ctl.card 2
defaults.pcm.card 2
defaults.dmix.rate 44100
defaults.dmix.channels 2
Is there a configuration file that controls which audio card will be used when an application invokes jackd? Any other means to set this parameter (and others)?
|
You only choose the audio card once, when starting jackd. You can list the cards available to ALSA with aplay -l (aplay is part of alsa-utils). Then you can start the jack daemon and pick the card to use with jackd -d alsa -d hw:<card>,<device>.
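As a concrete sketch, here is a tiny helper that assembles (but does not run) the jackd command line for a given card index and sample rate. The -r sample-rate option of the alsa backend and the card index 2 are example values from the question, untested against real hardware.

```shell
# Assemble a jackd invocation; hw:<card>,0 selects the card's first device.
build_jackd_cmd() {
  card="$1"; rate="$2"
  printf 'jackd -d alsa -d hw:%s,0 -r %s\n' "$card" "$rate"
}

build_jackd_cmd 2 48000   # prints: jackd -d alsa -d hw:2,0 -r 48000
```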
| How to configure which sound card jack2 will use |
1,587,666,937,000 |
I'm trying to dial up with my Huawei EM680 modem using wvdial.
My modem is found properly on /dev/ttyUSB1 but when I execute wvdial I get this:
# wvdial
--> WvDial: Internet dialer version 1.61
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
ATZ
OK
--> Sending: ATQ0 V1 E1 +FCLASS=0
ATQ0 V1 E1 +FCLASS=0
OK
--> Sending: AT+CGDCONT=1,"IP","m2mstatic.apn"
AT+CGDCONT=1,"IP","m2mstatic.apn"
ERROR
--> Bad init string.
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
ATZ
OK
--> Sending: ATQ0 V1 E1 +FCLASS=0
ATQ0 V1 E1 +FCLASS=0
OK
--> Sending: AT+CGDCONT=1,"IP","m2mstatic.apn"
AT+CGDCONT=1,"IP","m2mstatic.apn"
ERROR
--> Bad init string.
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
ATZ
OK
--> Sending: ATQ0 V1 E1 +FCLASS=0
ATQ0 V1 E1 +FCLASS=0
OK
--> Sending: AT+CGDCONT=1,"IP","m2mstatic.apn"
AT+CGDCONT=1,"IP","m2mstatic.apn"
ERROR
--> Bad init string.
#
Why does it say "Bad init string."?
My /etc/wvdial.conf looks like this:
[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 +FCLASS=0
Init3 = AT+CGDCONT=1,"IP","m2mstatic.apn"
Stupid Mode = yes
Modem Type = Analog Modem
ISDN = 0
New PPPD = yes
Phone = *99#
Modem = /dev/ttyUSB1
Username = ;
Password = ;
Baud = 9600
|
The wvdial.conf below worked on a ZTE 3G modem:
[Dialer Defaults]
Modem = /dev/ttyUSB0
Init1 = ATZ
Init3 = AT+CGDCONT=1,"IP","apnname"
Phone = *99***1#
Username = user
Password = user
New PPPD = yes
Stupid Mode = 1
You can try it with my wvdial.conf.
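Adapted to the Huawei setup from the question, it might look like this (the device, APN and phone number are copied from the question, and the empty username/password are kept as-is; this adaptation is untested):

```
[Dialer Defaults]
Modem = /dev/ttyUSB1
Init1 = ATZ
Init3 = AT+CGDCONT=1,"IP","m2mstatic.apn"
Phone = *99#
Username = ;
Password = ;
New PPPD = yes
Stupid Mode = 1
```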
| getting "bad init string" when dialing with wvdial |