Albert Einstein quote: "Insanity: doing the same thing over and over again and expecting different results." Oftentimes, Linux drives me mad because I'm doing the same thing over and over again and getting different results from box to box. (See my previous question.) For me, the biggest area of confusion is taking over a machine that someone else has installed (as is the case when signing up with a web hosting company). You just don't know what you're dealing with. Is there some kind of clever diff tool that I can run on an installation of Linux (Ubuntu) to give me a heads-up on how that machine has veered from the default installation? I.e., something that can show me a list of the commands that are going to behave surprisingly, thus avoiding a trial-and-error approach.
Whenever I have a good reference system and a misbehaving one, I try to compare them with vimdiff. What I compare varies with the problem, e.g.:

1) When comparing servers at the package level, I create sorted lists of packages on each server and send the results to files. On server1:

    dpkg --get-selections | sort > server1_packages

On server2:

    dpkg --get-selections | sort > server2_packages

Copy both files to the same machine and diff (or vimdiff) them.

2) Make a list of running services as in example 1:

    sysv-rc-conf --list | sort > server1_services
    sysv-rc-conf --list | sort > server2_services

...etc., and vimdiff those.

3) If you are troubleshooting inconsistent configurations with Apache, for example, make copies of the config files and vimdiff those, etc.
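A minimal sketch of the package-level comparison from step 1, using comm instead of diff (the package lists here are simulated with small files; in practice they come from dpkg --get-selections as above):

```shell
# Simulate the sorted package lists from two servers (illustrative data).
printf 'bash\ncoreutils\nvim\n'  > /tmp/server1_packages
printf 'bash\ncoreutils\nnano\n' > /tmp/server2_packages

# comm needs sorted input; with -3 the common column is suppressed, leaving
# column 1 = only on server1, column 2 = only on server2.
comm -3 /tmp/server1_packages /tmp/server2_packages
```

This prints the packages unique to each server, which is often all you need before reaching for a full vimdiff.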
tool or technique to get a diff of two different linux installations
I am learning the top command; I know how to change colors and column modes, and how to switch from one mode to another. After closing top's window and running it again, everything comes back in the default configuration - the 4 default column and color modes. Is there any way to save the changes before closing top's window?
Once you have your configuration set the way you want, type W (that is a capital W) and your configuration will be saved. From the top manpage:

    'W' :Write_the_Configuration_File
        This will save all of your options and toggles plus the current
        display mode and delay time. By issuing this command just before
        quitting top, you will be able to restart later in exactly that
        same state.
linux command top: saving configuration
It seems like, by default, the tmux status bar clock granularity is set to 2s, however it would be nice to be able to bring that up to a one-second granularity. Is there any way to set the granularity in a .tmux.conf? I haven't been able to find anything about this under man tmux.
There is a status-interval session option, which by default is set to 15 seconds. This determines how frequently the status line is redrawn. With

    set-option -g status-interval 1

in your .tmux.conf file, this would be changed to 1 second. (Note that status-interval is a session option, so it is set with -g for all sessions; -s is for server options.)
Is there any way to adjust the clock granularity under tmux?
I've been trying to write a simple bash script which I'll be using to install an application and update its config file. I'm having a hard time getting its config file modified:

    # DBHost=localhost
    DBName=test
    # DBPassword=

Any suggestions how I can get the above modified as below?

    DBHost=localhost
    DBName=database
    DBPassword=password
The best way depends on whether you expect the file to also be modified by humans, how complex the file is, and whether you want your script to take precedence if it looks like someone else wants a different value. That is, if the file already contains DBPassword=swordfish, do you want to keep that, or replace it with DBPassword=password?

A common way of dealing with this is to have a section of the file delimited by "magic comments", and only edit the part between these comments. Here's a way to do this with awk. If the magic comments are not present, the new section is added at the end of the file. Warning: untested code.

    begin_marker='# BEGIN AUTOMATICALLY EDITED PART, DO NOT EDIT'
    end_marker='# END AUTOMATICALLY EDITED PART'
    new_section='DBHost=localhost
    DBName=database
    DBPassword=password'
    awk <file.conf >file.conf.new \
        -v begin_marker="$begin_marker" -v end_marker="$end_marker" \
        -v new_section="$new_section" '
      1 {print}
      $0 == begin_marker && !changed {
          do {getline} while ($0 != end_marker);  # discard old section content
          print new_section; print; changed = 1;
      }
      END {if (!changed) {print begin_marker; print new_section; print end_marker;}}
    '
    ln -f file.conf file.conf.old
    mv -f file.conf.new file.conf

This approach doesn't work well if the program that reads the configuration file doesn't support multiple lines setting the same configuration item. In that case, you'll really need to remove the old ones. Here I would advise leaving the comments untouched and adding your own settings at the end:

    grep -vE '^[[:blank:]]*(DBHost|DBName|DBPassword)=' <file.conf >file.conf.new
    cat <<EOF >>file.conf.new
    DBHost=localhost
    DBName=database
    DBPassword=password
    EOF
    ln -f file.conf file.conf.old
    mv -f file.conf.new file.conf
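If you fully control the file and clobbering existing values is acceptable, a much blunter sed-based sketch also works (the path /tmp/db_example.conf is illustrative, and sed -i as used here is GNU sed):

```shell
# Start from the question's file contents (illustrative path).
cat > /tmp/db_example.conf <<'EOF'
# DBHost=localhost
DBName=test
# DBPassword=
EOF

# Uncomment-and-set each key, clobbering whatever value was there (GNU sed -i).
sed -i -e 's/^#* *DBHost=.*/DBHost=localhost/' \
       -e 's/^DBName=.*/DBName=database/' \
       -e 's/^#* *DBPassword=.*/DBPassword=password/' /tmp/db_example.conf
```

The trade-off is exactly the one discussed above: sed will happily overwrite a value a human set on purpose.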
Editing config file via a bash script
Is there any developed automatic Linux kernel configuration tool? I have found the make localmodconfig method, but it is certainly very limited. I have searched the web but unfortunately have not come to any acceptable result. Although I am quite conversant in kernel configuration issues, I would like to reduce the time wasted on configuring every new system with particular hardware, since it is technical rather than creative work.
Now that we've talked about this a bit in the comments, the answer for you is: no, there isn't. The main reason for that conclusion is that I think you are not looking for a tool to configure a kernel, but to automatically tune the kernel for your specific (and yet unstated) use case. As stated in the comments, you can skip unneeded drivers and compile the wanted drivers statically into the kernel. That saves you some time during the boot process, but not after that, because the important code is the same whether built in or a module.

Kernel tuning

The kernel offers some alternatives; you mentioned the scheduler yourself. Which scheduler works best for you depends on your use case, the applications you use, and the kind of load you put on your system. No install-and-run program will determine the best scheduler for you, if there even is such a thing. The same holds for buffers and buffer sizes. Also, a lot of (most?) settings are, or at least can be, set at runtime, not compile time.

Optimal build options

Even without automation, you can optimize the build options when compiling the kernel if you have a very specialized CPU. I know of the Buildroot environment, which gives you a nice framework for that. This may also help you if you are looking to create the same OS for many platforms. While this helps you with building, it will not automate kernel tuning. That's why I and others tell you to use a generic kernel. Without a specific problem to solve, building your own kernel is not worthwhile. Maybe you can get more help by identifying/stating the problem you are trying to solve.
Automatic kernel configuration tool
Configuration of KDE desktop applets, like the launcher ("Kickoff") or the clock, is held in ~/.config/plasma-org.kde.plasma.desktop-appletsrc (at least for KDE 5). I'd like to configure the applets on a fresh system to my liking using Ansible, but I can't find a robust way to do that. I know I can use kwriteconfig5 to change the values there, like so:

    kwriteconfig5 --file ~/.config/plasma-org.kde.plasma.desktop-appletsrc \
        --group Containments --group 3 --group Applets --group 9 \
        --group Configuration --group Appearance \
        --key dateFormat isoDate

which would work if the number of the containment (3) and the applet (9) happened to match the clock applet, like so:

    [Containments][3][Applets][9]
    immutability=1
    plugin=org.kde.plasma.digitalclock

which isn't guaranteed between installations, from what I've seen. Is there some elegant way available to set the values for specific applets (plugins, in the config file)? Or is it necessary to write a script that will dig up the numbers for a specific applet and then use the clunky kwriteconfig5 command?
A simplified solution in bash:

    config="plasma-org.kde.plasma.desktop-appletsrc"
    grp=""
    while IFS= read -r line
    do
        [[ $line == *Applets* ]] && grp="$line"
        [[ $line == *org.kde.plasma.digitalclock* ]] && break
    done < "$HOME/.config/$config"
    ContGrp=$(echo "$grp" | awk -F\] '{print $2}' | awk -F\[ '{print $2}')
    ApplGrp=$(echo "$grp" | awk -F\] '{print $4}' | awk -F\[ '{print $2}')
    kwriteconfig5 --file "$config" \
        --group Containments --group "$ContGrp" --group Applets --group "$ApplGrp" \
        --group Configuration --group General \
        --key dateFormat isoDate
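The two awk pipelines can also be collapsed into a single awk pass over the file; a sketch against sample data (the file contents and the /tmp path are illustrative):

```shell
# Sample appletsrc-style contents (illustrative).
cat > /tmp/appletsrc <<'EOF'
[Containments][3][Applets][9]
immutability=1
plugin=org.kde.plasma.digitalclock
EOF

# Split on '[' and ']': in "[Containments][3][Applets][9]" the two numbers land
# in fields 4 and 8. Remember the last group header seen, and print its numbers
# when the clock plugin line appears.
awk -F'[][]' '/^\[Containments\]/ {c=$4; a=$8}
              /org\.kde\.plasma\.digitalclock/ {print c, a; exit}' /tmp/appletsrc
# prints: 3 9
```

The printed pair can then be fed to kwriteconfig5 as the --group arguments.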
Robust command line (CLI) configuration of Plasma (KDE) applets
Right now pkg-config looks only in /usr/lib/pkgconfig. I can adjust it for a user by exporting the PKG_CONFIG_PATH environment variable, but once again I forgot to do it for root and wasted time wondering why my plugin was not installed properly (the makefile used pkg-config). So how can I set it system-wide, so it would always look into both /usr/lib and /usr/local/lib?
The traditional place to define an environment variable system-wide is /etc/profile. This file is read by Bourne-style shells (including bash, ksh, ash) when you log in for a text-mode session, either locally (on a text-mode console) or remotely (over ssh). If you log in to a graphical environment, /etc/profile may or may not be read, depending on your login manager, desktop environment and operating system distribution.

A better method, if available on your system, is to define the environment variable in /etc/environment. This file is read by PAM, specifically by the pam_env module. These variables are available in all sessions started by a login method that uses PAM and has the pam_env module referenced in /etc/pam.conf or /etc/pam.d/$method.
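For this particular case, a sketch of the /etc/environment entry (pam_env does not do shell expansion, so both directories are listed explicitly; adjust the paths for your system):

```
PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/local/lib/pkgconfig
```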
Tell pkg-config to look for *.pc files also in /usr/local/lib/pkgconfig, system-wide
I work in a relatively heterogeneous environment where I may be running different versions of Bash on different HPC nodes, VMs, or my personal workstation. Because I put my login scripts in a Git repo, I would like to use the same(ish) .bashrc across the board, without a lot of "if this host, then..."-type messiness.

I like the default behavior of Bash ≤ 4.1 that expands cd $SOMEPATH into cd /the/actual/path when pressing the Tab key. In Bash 4.2 and above, you need to shopt -s direxpand to re-enable this behavior, and that didn't become available until 4.2.29. This is just one example, though; another, possibly related shopt option, complete_fullquote (though I don't know exactly what it does), may have also changed default behavior at v4.2. However, direxpand is not recognized by earlier versions of Bash, and if I try to shopt -s direxpand in my .bashrc, that results in an error message being printed to the console every time I log in to a node with an older Bash:

    -bash: shopt: direxpand: invalid shell option name

What I'd like to do is wrap a conditional around shopt -s direxpand to enable that option on Bash > 4.1 in a robust way, without chafing the older versions of Bash (i.e., not just redirecting the error output to /dev/null).
Check if direxpand is present in the output of shopt and enable it if it is:

    shopt | grep -q '^direxpand\b' && shopt -s direxpand
How can I prevent unsupported 'shopt' options from causing errors in my .bashrc?
I know that in ~/.bashrc one must not put spaces around = signs in an assignment:

    $ tail -n2 ~/.bashrc
    alias a="echo 'You hit a!'"
    alias b = "echo 'You hit b!'"
    $ a
    You hit a!
    $ b
    b: command not found

I'm reviewing the MySQL config file /etc/my.cnf and I've found this:

    tmpdir=/mnt/ramdisk
    key_buffer_size = 1024M
    innodb_buffer_pool_size = 512M
    query_cache_size=16M

How might I verify that the spaces around the = signs are not a problem? Note that this question is not specific to the /etc/my.cnf file, but rather to *NIX config files in general. My first inclination is to RTFM, but in fact man mysql makes no mention of the issue, and if I need to go hunting online for each case, I'll never get anywhere. Is there any convention or easy way to check? As can be seen, multiple people have edited this file (different conventions for = signs) and I can neither force them all to use no spaces, nor can I go crazy checking everything that may have been configured and may or may not be correct.

EDIT: My intention is to ensure that currently-configured files are done properly. When configuring files myself, I go with the convention of whatever the package maintainer put in there.
I'll answer that in a more general way, looking a bit at the whole "Unix learning experience". In your example you use two tools and see that the language is similar; it's just unclear when to use what exactly. Of course you expect there to be a clear structure, so you ask us to explain it. The case of the space around = is only an example - there are lots of similar-but-not-quite cases. There has to be a logic to it, right?!

The rules for how to write code for some tool, shell, database, etc. depend only on what that particular tool requires. That means that the tools are completely independent, technically. The logical relation that I think you expect simply does not exist. The obvious similarity of the languages you are seeing is not part of the program implementation. The similarity exists because developers agreed on how to do things when they wrote a particular program. But humans can agree only partially. The relation you are seeing is a cultural thing - it's neither part of the implementation, nor part of the definition of the language.

So, now that we have handled the theory, what to do in practice? A big step is to accept that the consistency you expected does not exist - which is much easier when you understand the reasons; I hope the theory part helps with this. If you have two tools that do not use the same configuration language (e.g. both using bash scripting), knowing the details of the syntax of one does not help much with understanding the other; so, indeed, you will have to look up details independently. Make sure you know where to find the reference documentation for each.

On the positive side, there is some consistency where you did not expect it: in the context of a single tool (or different tools using the same language), you can be fairly sure the syntax is consistent. In your mysql example, that means you can assume that all lines follow the same rule. So the rule is "space before and after = is not relevant".
There are wide differences in how hard it is to learn or use the configuration or scripting language of a tool. It can be something like "List foo values in cmd-foo.conf, one per line." It can be a full scripting language that is used elsewhere too; then you have a powerful tool for writing configuration - in some cases that's just nice, in others you will really need it. Complex tools, or large families of related tools, sometimes use very complex special configuration file syntax (some famous examples are sendmail and vim). Others use a general scripting language as a base and extend that language to support the special needs, sometimes in complex ways, as the language allows. That would be a very specific case of a domain-specific language (DSL).
When are spaces around the = sign forbidden?
There are many packages which have grub in their names and are part of GRUB (the Grand Unified Boot Loader). The ones installed on my system are:

    grub-common grub-emu grub-pc grub-pc-bin grub-theme-starfield grub2 grub2-common grub2-splashimages

I first looked at the file to see whether it is a symlink or a regular file:

    [$] ll -h /etc/default/grub
    -rw-r--r-- 1 root root 1.2K 2017-01-22 14:16 /etc/default/grub

I had a look but couldn't find anything which would tell me where this file comes from:

    [$] dpkg -S /etc/default/grub
    dpkg-query: no path found matching pattern /etc/default/grub

or

    [$] dpkg-query -W /etc/default/grub
    dpkg-query: no packages found matching /etc/default/grub
In such cases you can find the relevant package by looking through the post-installation scripts:

    grep /etc/default/grub /var/lib/dpkg/info/*.postinst

This reveals that the file is created by grub-pc.
In Debian, which package is responsible for creation of /etc/default/grub?
I'm pretty sure that all Red Hat and Debian based distributions follow the convention of shipping the kernel configuration in /boot/config-*, but what of other distributions? Or, if this convention is extremely common, which distributions don't follow it?
Debian and derivatives (Ubuntu, Linux Mint, …)

The configuration for the kernel /boot/vmlinuz-VERSION is stored in /boot/config-VERSION. The two files ship in the same package, linux-VERSION or kernel-VERSION.

Arch Linux, Gentoo (if enabled)

The configuration for the running kernel is stored in the kernel binary and can be retrieved with:

    zcat /proc/config.gz

This file exists when the CONFIG_IKCONFIG_PROC option (which depends on CONFIG_IKCONFIG) was set when compiling the kernel - and so can be true (or not) regardless of distribution, though the default kernel configuration for the two distributions named does enable it. Incidentally, Arch Linux's default configuration does not name the kernel (or its initramfs image) by version, even in /boot - the files there are named only for their corresponding packages. For example, a typical Arch Linux boot kernel is named /boot/vmlinuz-linux, where linux is the package one installs for the default kernel.
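A small sketch that automates the lookup across these two conventions (the helper name find_kernel_config is made up for illustration):

```shell
# Return the first readable candidate among the given paths, if any.
find_kernel_config() {
    for f in "$@"; do
        if [ -r "$f" ]; then
            printf '%s\n' "$f"
            return 0
        fi
    done
    return 1
}

# Typical call (exact paths depend on the distribution):
find_kernel_config "/boot/config-$(uname -r)" /proc/config.gz || echo 'no config found'
```

If the result is /proc/config.gz, pipe it through zcat; a /boot/config-* file can be read directly.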
Where can I find the kernel configuration on each Linux distribution?
How can I do this in a single line?

    tcp dport 53 counter accept comment "accept DNS"
    udp dport 53 counter accept comment "accept DNS"
With a recent enough nftables, you can just write:

    meta l4proto {tcp, udp} th dport 53 counter accept comment "accept DNS"

Actually, you can do even better:

    set okports {
        type inet_proto . inet_service
        counter
        elements = {
            tcp . 22,  # SSH
            tcp . 53,  # DNS (TCP)
            udp . 53   # DNS (UDP)
        }
    }

And then:

    meta l4proto . th dport @okports accept

You can also write domain instead of 53 if you prefer using port/service names (from /etc/services).
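Put together as a loadable ruleset fragment, this might look like the following sketch (the table and chain names are illustrative, not from the question):

```
table inet filter {
    set okports {
        type inet_proto . inet_service
        elements = { tcp . 53, udp . 53 }
    }
    chain input {
        type filter hook input priority 0; policy drop;
        meta l4proto . th dport @okports counter accept comment "accept DNS"
    }
}
```

Such a file would typically be loaded with nft -f; verify against your nftables version first, since concatenation support in sets is relatively recent.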
How to match both UDP and TCP for given ports in one line with nftables
Is there a command that can be used to figure out which packages are installed system-wide in NixOS? For instance, I can list packages installed for my current user with nix-env -q. I don't know of any way to list packages installed on the whole system from /etc/nixos/configuration.nix. There are two separate instances where I would want to use this:

Let's say I add a package to /etc/nixos/configuration.nix in environment.systemPackages, but I forget whether I have run nixos-rebuild switch yet. It would be nice if there were a command I could run to check whether the package is in the system environment.

I have programs.bash.enableCompletion set to true in /etc/nixos/configuration.nix. Without looking at the option in nixpkgs, I would guess that this option sets the bash-completion package to be installed. It would be nice if there were a command that I could run to check whether the bash-completion package actually is in the system environment.
There's no specific tool for this. You may like the system.copySystemConfiguration option (see the docs for caveats). You'll get relatively close with:

    nix-store -q --references /run/current-system/sw

This is the list of nix store paths directly contained in systemPackages, but note that various NixOS options may add packages in there.
how to find which packages are installed system-wide in NixOS?
I am chasing an error trying to apply a new tune to postgres. The exact error is:

    2018-11-07 22:14:49 EST [7099]: [1-1] FATAL: could not map anonymous shared memory: Cannot allocate memory
    2018-11-07 22:14:49 EST [7099]: [2-1] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 35301089280 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.

I am familiar with this error. Tuning various instances of postgres is a monthly task for the engineers I work with. The solutions are to either pull back our postgres tune or manage settings like shmall and ulimit. In this case we are tuning a postgres installation that was created by someone else and has some cruft from a few years of runtime and upgrades. This installation started on a CentOS 5 install and is now on CentOS 7. The old SysV install on CentOS 5 applied several controls on memory limits, including:

- /etc/sysconfig/postgresql.d/ulimit.sh
- /etc/sysconfig/postgresql.d/memory-cap
- extremely conservative settings for shmmax and shmall
- scripts from another vendor or sysadmin which intentionally force certain values by altering config files
- /etc/sysctl.conf

Since the upgrade from CentOS 5 to CentOS 7, there now appear to be additional controls on memory limits which were applied when changing it from SysV to systemd.
For example, systemctl cat postgresql.service shows:

    # /usr/lib/systemd/system/postgresql.service
    [Unit]
    Description=PostgreSQL database server
    After=network.target

    [Service]
    Type=forking
    User=postgres
    Group=postgres
    Environment=PGPORT=5432
    Environment=PGDATA=/opt/pgsql/data
    OOMScoreAdjust=-1000
    LimitSTACK=16384
    ExecStart=/opt/pgsql/bin/pg_ctl start -D ${PGDATA} -s -o "-p ${PGPORT}" -w -l ${PGDATA}/serverlog
    ExecStop=/opt/pgsql/bin/pg_ctl stop -D ${PGDATA} -s -m fast
    ExecReload=/opt/pgsql/bin/pg_ctl reload -D ${PGDATA} -s
    TimeoutSec=300

    [Install]
    WantedBy=multi-user.target

    # /etc/systemd/system/postgresql.service.d/memory-cap.conf
    #
    # THIS FILE IS AUTO-GENERATED by /opt/pgsql/bin/tune.sh
    # DO NOT MODIFY, it will be overwritten on next postgres startup.
    # If you need to make a change, then disable the tuner:
    #
    #   ln -s /dev/null /etc/systemd/system/postgresql.service.d/tune.conf
    #
    [Service]
    LimitAS=12884901888

    # /etc/systemd/system/postgresql.service.d/tune.conf

    # /usr/lib/systemd/system/postgresql.service.d/use-system-timezone.conf
    # Disable automatically setting the timezone by masking this drop-in file:
    #   ln -s /dev/null /etc/systemd/system/postgresql.service.d/use-system-timezone.conf
    # Then you need to:
    #   systemctl daemon-reload
    [Service]
    ExecStartPre=/opt/pgsql/bin/use-system-timezone.sh

Now coming around to my actual question: there are clearly several layers of kernel settings, per-user limits, and service configurations which can each impose limits on shmmax, shmall, ulimit, and related settings. How do I determine, either from configuration or at runtime, what limits a systemd service actually has applied when it is started? If I can identify what the limits are at runtime, I can then start grepping config files and scripts to find where those are set. Once I find those, I can set the values as they need to be. I'm hoping there is a flag I can set to get systemd or my postgres process to log its apparent settings when it starts as a service.
I am comfortable with what these values should be set to; there are just too many layers which might be forcing or overriding them. I want to learn which configuration locations I need to touch. My perception is that I can have situations where a systemd LimitFOO setting has a different value than sysctl -w kernel.shmfoo, which in turn differs from /etc/someconfig/serviceuser/limit.foo. I need to determine what limits are actually being used or applied, so that I can correctly change and set those limits to tune the service I am running.
As you point out in your question, there are several limits in play:

- the System V IPC ones, such as shmall, shmmax, etc.
- the RLIMIT ones (which are often set and inspected by the ulimit command in the shell, so you might know them by that name)
- the cgroup limits (particularly the memory cgroup, in your case), which are a newer way to apply limits to groups of processes in modern kernels

systemd manages the latter two, in particular using cgroups as the main mechanism for limiting and accounting. It does have some small, limited support for System V IPC, but not really for limits. Let's break down these three separate concepts and look into how to inspect and tune the limits on each, as related to systemd.

System V IPC

systemd has some small support for System V IPC (for example, cleaning up IPCs when a service stops, running a service in its own IPC namespace, or mounting a private tmpfs (backed by shm) on /tmp for a single service), but for the most part it doesn't further manage System V IPC limits and doesn't do any accounting on them. So limits of System V IPC are exclusively managed by sysctl, and you can inspect them with something like:

    $ sysctl kernel.shmmax kernel.shmall kernel.shmmni
    kernel.shmmax = 18446744073692774399
    kernel.shmall = 18446744073692774399
    kernel.shmmni = 4096

And tune them with sysctl -w. systemd only gets involved in setting these limits in that it includes systemd-sysctl.service, which is responsible for setting them from /etc/sysctl.conf and /etc/sysctl.d/*.conf. But other than that, it's all sysctl, which also gives you the kernel's information on these limits directly.

RLIMITs (ulimit)

These limits are set per-process and inherited by subprocesses (so typically they are the same through a process tree, but not necessarily). systemd allows setting them per service, so that the limits are set as configured when the service starts. These are configured by directives such as LimitSTACK=, LimitAS=, etc., which you already mention in your question.
You can see the full list of RLIMITs in the man page for systemd, where it also correlates them to the familiar ulimit commands. You can inspect the current limits of a running unit by using the systemctl show command, which dumps the internal state of the unit from systemd. For example:

    $ systemctl show postgresql.service | grep ^Limit
    LimitSTACK=16384
    LimitSTACKSoft=16384
    LimitAS=12884901888
    LimitASSoft=12884901888
    ... (other RLIMITs omitted for terseness) ...

You can also inspect what the kernel thinks the limits are by looking at /proc/$pid/limits (remember, these are per-process, so you need to look at individual PIDs). For example:

    $ cat /proc/12345/limits
    Limit                     Soft Limit           Hard Limit           Units
    Max stack size            16384                16384                bytes
    Max address space         12884901888          12884901888          bytes
    ... (other RLIMITs omitted for terseness) ...

cgroups (memory cgroup)

Finally, cgroups are the main mechanism by which systemd manages services, providing limits and accounting. There are many cgroup controllers available and supported by systemd (CPU, Memory, IO, Tasks, etc.), but for this discussion, let's focus on the memory cgroup (since these are the limits involved in your issue, and we looked at the corresponding memory limits for SysV IPC and RLIMITs too).

Same as with the RLIMITs, you can also use systemctl show to look at the memory accounting provided by systemd through cgroups:

    $ systemctl show postgresql.service | grep ^Memory
    MemoryCurrent=631328768
    MemoryAccounting=yes
    MemoryLow=0
    MemoryHigh=infinity
    MemoryMax=infinity
    MemorySwapMax=infinity
    MemoryLimit=infinity
    MemoryDenyWriteExecute=yes

You'll see that memory accounting is enabled (MemoryAccounting=yes) but none of the limits are set (all set to infinity). The list of limits may vary depending on your version of systemd and kernel; this is systemd 239 on kernel 4.20-rc0, which has "low", "high", "max", "limit" and a separate limit specifically for swap.
One more point you may find interesting: you can tell how much memory the service is using through the MemoryCurrent= value. That is taken from the kernel cgroup information; it's a fresh measurement of memory usage by that service. You can also see that information when you use systemctl status on the service:

    $ systemctl status postgresql.service
    ● postgresql.service - PostgreSQL database server
       Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
     Main PID: 12345 (postgresql)
        Tasks: 10 (limit: 4321)
       Memory: 602M
       CGroup: /system.slice/postgresql.service
               └─12345 /usr/lib/postgresql/postgresql

As you can see, systemd is reporting memory usage (Memory: 602M), which comes from the cgroup information. You can also see that Tasks accounting is enabled (through the corresponding cgroup), and it's reporting 10 tasks currently in use out of a limit of 4321 max tasks for that service.

The status output also includes information about the underlying cgroup, named after the service (every service runs in its own cgroup), which you can then use to inspect the cgroup limits and accounting information directly from the kernel. For example:

    $ cd /sys/fs/cgroup/memory/system.slice/postgresql.service/
    $ cat memory.limit_in_bytes
    9223372036854771712
    $ cat memory.usage_in_bytes
    631328768

(The number 9223372036854771712 is 2^63 - 4096, which in this case represents infinity within a 64-bit counter.)

You can look at the kernel documentation for the memory cgroup for more details on these limits and counters. There are two versions of cgroup in the kernel (cgroup-v1 and cgroup-v2), so you might find some significant differences on your system if it's using cgroup-v2 instead. systemd supports both (and a hybrid model where both are used), so querying the limits and counters using systemctl should give you a consistent view regardless of which version of cgroups is enabled in the kernel.
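To cross-check the RLIMIT layer, /proc/$pid/limits can be read from any process. A sketch using the current shell's own PID (for a real service you'd substitute the Main PID reported by systemctl status):

```shell
# Show the kernel's effective address-space and stack limits for this shell.
# On a service, replace $$ with the service's main PID.
grep -E '^Max (address space|stack size)' "/proc/$$/limits"
```

If the values printed here disagree with the Limit*= values from systemctl show, something between systemd and the process (a wrapper script, su/runuser, a PAM limits file) is changing them.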
How do I identify all of the configured memory limits for a service started using systemd?
I've searched for an answer about the differences between, and uses of, these two configuration parameters in the openssl config file:

    certs = ...             # Where the issued certs are

and

    new_certs_dir = ...     # default place for new certs

The Network Security with OpenSSL O'Reilly book also shows these two parameters in the default openssl config file, but certs is never used and never described. In my tests with openssl, all certificates are stored in the folder defined by new_certs_dir. What is the difference between these two parameters? And is the parameter certs used anywhere?
As shown in the documentation (https://www.openssl.org/docs/man1.1.0/apps/ca.html), new_certs_dir is used by the CA to output newly generated certs. certs is not used there. However, it's referenced in the demoCA: "./demoCA/certs - certificate output file". certs is ALSO not used for certificate chains, as shown here: https://www.openssl.org/docs/man1.1.0/apps/pkcs12.html or https://www.openssl.org/docs/man1.1.0/apps/verify.html

Note that /etc/ssl/certs is the default location for issued certs, but the certs variable is $dir/certs, so it would be ./demoCA/certs. I think we all agree it's for issued certs specific to the CA. This makes sense because the CA might be signing certs that are chained to certs not yet issued by any public cert authority. But where is the documentation for this? I believe it's an artifact of the configuration file. It used to be used for options like certificate, which would hold the ca.pem within certs, so certificate=$certs/ca.pem. I vaguely recall having this exact same question until I realized it was used later in the config file - but now it's not.

Edit: It gets weirder. The current version of ca.c (https://github.com/openssl/openssl/blob/master/apps/ca.c) does not reference certs. But much older versions, such as this one (https://github.com/openssl/openssl/blob/d02b48c63a58ea4367a0e905979f140b7d090f86/apps/ca.c), reference it but do nothing with it.
OpenSSL, basic configuration, new_certs_dir, certs
1,305,944,454,000
For a long time I've been trying to fix my .conkyrc configuration file in order to get real transparency. There are many posts out there about it, but none of them helped in my case; it seems the solution depends on many factors (window manager, desktop environment, conky version and probably others). It seems that my environment supports real transparency, since it works for my terminal (see screenshot), but conky is using fake transparency (files on the desktop are covered/overridden). As you can see, I use Metacity as window manager and MATE as desktop environment. I installed conky 1.9:

conky -version
Conky 1.9.0 compiled Wed Feb 19 18:44:57 UTC 2014 for Linux 3.2.0-37-generic (x86_64)

And my distro is Mint 17.2 Rafaela:

lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 17.2 Rafaela
Release: 17.2
Codename: rafaela

My .conkyrc currently is as follows:

background yes
use_xft yes
xftfont Roboto:size=9
xftalpha 0.8
update_interval 1
total_run_times 0
own_window yes
own_window_transparent yes
##############################################
# Compositing tips:
# Conky can play strangely when used with
# different compositors. I have found the
# following to work well, but your mileage
# may vary. Comment/uncomment to suit.
############################################## ## no compositor #own_window_type conky #own_window_argb_visual no ## xcompmgr #own_window_type conky #own_window_argb_visual yes ## cairo-compmgr own_window_type desktop own_window_argb_visual no ############################################## own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager double_buffer yes draw_shades no draw_outline no draw_borders no draw_graph_borders no stippled_borders 0 #border_margin 5 #commento non è supportato border_width 1 default_color EDEBEB default_shade_color 000000 default_outline_color 000000 alignment top_right minimum_size 600 600 maximum_width 900 gap_x 835 gap_y 77 alignment top_right no_buffers yes uppercase no cpu_avg_samples 2 net_avg_samples 2 short_units yes text_buffer_size 2048 use_spacer none override_utf8_locale yes color1 212021 color2 E8E1E6 color3 E82A2A own_window_argb_value 0 own_window_colour 000000 TEXT ${goto 245}${voffset 25}${font GeosansLight:size=25} Today ${goto 124}${voffset -}${font GeosansLight:light:size=70}${time %I:%M}${image .conky/line.png -p 350,27 -s 3x189} ${offset 150}${voffset -55}${font GeosansLight:size=17}${time %A, %d %B} ${offset 380}${voffset -177}${font GeosansLight:size=25}Systems${font GeosansLight:size=22} ${offset 400}${voffset 5}${font GeosansLight:size=15}$acpitemp'C ${offset 400}${voffset 10}${cpu cpu0}% / 100% ${offset 400}${voffset 4}$memfree / $memmax${font GeosansLight:size=15} ${offset 400}${voffset 5}${if_up wlan0}${upspeed wlan0} kb/s / ${totalup wlan0}${endif}${if_up eth0}${upspeed eth0} kb/s / ${totalup eth0}${endif}${if_up ppp0}${upspeed ppp0} kb/s / ${totalup ppp0}${endif} ${offset 400}${voffset 5}${if_up wlan0}${downspeed wlan0} kb/s / ${totaldown wlan0}${endif}${if_up eth0}${downspeed eth0} kb/s / ${totaldown eth0}${endif}${if_up ppp0}${downspeed ppp0} kb/s / ${totaldown ppp0}${endif} ${goto 373}${voffset -162}${font Dingytwo:size=17}M$font ${goto 373}${voffset 7}${font Dingytwo:size=17}7$font ${goto 
373}${voffset 1}${font Dingytwo:size=17}O$font ${goto 373}${voffset 1}${font Dingytwo:size=17}5$font ${goto 373}${voffset 1}${font Dingytwo:size=17}4$font

I've tried many values for the own_window_type parameter, but none fixed the issue. Does somebody know how to achieve this, or what other environment factors affect how the .conkyrc parameters must be set?
You just define:

own_window yes
own_window_transparent yes
own_window_type conky
own_window_argb_visual yes
own_window_class override

...and you get real transparency on the desktop.
.conkyrc - how to set real transparency
1,305,944,454,000
I would like to display Chinese characters in dwm's status bar. More specifically I would like the symbols to represent the different tags in dwm. Using an online converter, I found that the unicode representation for the symbols I want is: 憤怒 unicode: &#24996;&#24594; Putting the unicode characters directly into my config.h doesn't work, they don't even show up in vim. My locale is set to ISO-8859-1 and I'm using the Liberation Mono font for dwm. What can I do to get those symbols up there? EDIT Following Mat's instructions and patching dwm, the patch command hangs. Running strace: [max@prometheus dwm-6.0]$ strace patch -Np1 ../dwm-pango/dwm-pango/dwm-6.0-pango.patch execve("/usr/bin/patch", ["patch", "-Np1", "../dwm-pango/dwm-pango/dwm-6.0-p"...], [/* 30 vars */]) = 0 brk(0) = 0x1d52000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc4713000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=92801, ...}) = 0 mmap(NULL, 92801, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9dc46fc000 close(3) = 0 open("/lib/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\25\2\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1983446, ...}) = 0 mmap(NULL, 3804112, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9dc4152000 mprotect(0x7f9dc42e9000, 2097152, PROT_NONE) = 0 mmap(0x7f9dc44e9000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x197000) = 0x7f9dc44e9000 mmap(0x7f9dc44ef000, 15312, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f9dc44ef000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc46fb000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc46fa000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc46f9000 
arch_prctl(ARCH_SET_FS, 0x7f9dc46fa700) = 0 mprotect(0x7f9dc44e9000, 16384, PROT_READ) = 0 mprotect(0x61a000, 4096, PROT_READ) = 0 mprotect(0x7f9dc4714000, 4096, PROT_READ) = 0 munmap(0x7f9dc46fc000, 92801) = 0 brk(0) = 0x1d52000 brk(0x1d75000) = 0x1d75000 getpid() = 10412 lstat("/tmp/po8GP02f", 0x7fffdc075210) = -1 ENOENT (No such file or directory) lstat("/tmp/pikSWXEs", 0x7fffdc075210) = -1 ENOENT (No such file or directory) lstat("/tmp/prB1wVgF", 0x7fffdc075210) = -1 ENOENT (No such file or directory) lstat("/tmp/pp27ATSR", 0x7fffdc075210) = -1 ENOENT (No such file or directory) rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9dc4186cb0}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGHUP, NULL, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGHUP, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0 rt_sigaction(SIGPIPE, NULL, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGPIPE, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0 rt_sigaction(SIGTERM, NULL, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGTERM, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0 rt_sigaction(SIGXCPU, NULL, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGXCPU, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0 rt_sigaction(SIGXFSZ, NULL, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGXFSZ, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0 rt_sigaction(SIGINT, NULL, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGINT, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0 fstat(0, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 2), ...}) = 0 open("/tmp/pp27ATSR", O_RDWR|O_CREAT|O_EXCL|O_TRUNC, 0600) = 3 fcntl(3, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE) fstat(3, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc4712000 lseek(3, 0, SEEK_CUR) = 0 fstat(0, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 2), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc4711000 
read(0, Could I be missing something?
I don't think you'll get Unicode support from dwm without patching it (and adding additional dependencies, notably pango). If that's an option for you, the pango patch from the official list of patches seems to work; just run the patch command in the dwm folder, passing the patch file to standard input:

$ tar xzf dwm-6.0.tar.gz
$ cd dwm-6.0
$ patch -Np1 < ../dwm-6.0-pango.patch

After that, you can edit your config file and put Unicode literals (\u followed by the Unicode codepoint in hex) in the tags strings, for example:

/* tagging */
static const char *tags[] = { "\u00c0", "\u61a4\u6012", "\u10e5\u10d0\u10e0", "4", "5", "6", "7", "8", "9" };

The first item is À, the second is your two symbols, the third is some Georgian script ('cos I think it looks cool). With a large font, this results in:
Unicode characters in uxterm and dwm statusbar
1,305,944,454,000
Is there a way to use Emacs to sync with Google Calendar and Google Contacts, ideally keeping a local copy so I can access them offline?
Unfortunately, I am unable to give a complete answer. All I have is advice about some possible paths to wander down. The easiest route would be if the emacs-g-client that Gilles mentioned in the SU version of this question works. If that doesn't work, I would look into the following: At the very least you should be able to get some calendar functionality by accessing your google calendar using ical. The function icalendar-import-file can import an ical file to a emacs diary file (icalendar-import-file documentation). Thus, in your .emacs file you could have a bit of emacs lisp to get the google calendar ical file and import it into your diary. If you do end up using org-mode there are a number of ways to integrate org-mode with diary-mode. I think that the ultimate goal would be to make use of the gdata api. I don't think that there is an easy way to get access to Google contacts outside of this api. There is a command line utility that supports a wide range of functionality using this api called Google CL, which could theoretically be used inside some emacs lisp functions to provide full access to your contacts, calendar, and many other Google-hosted services. This however, would likely be much more difficult than just a few lines thrown into your .emacs.
Emacs sync w/ Google Calendar and Contacts?
1,305,944,454,000
One can select testing packages on a stable Gentoo system by adding lines with the following syntax to the keywords list:

cat /etc/portage/package.keywords
=dev-python/ipython-0.13.2 ~amd64
# and many lines later
=dev-python/ipython-0.14.1 ~amd64
# and many lines later
>=dev-python/ipython-0.13.4 ~amd64

This file will grow over time, and sooner or later one cannot remember which lines are obsolete. How can I tidy up the list with a script from time to time? A line should be deleted if:

the testing version is already stabilized
>= was used for the same package
= was used for the same package with a smaller version number
There is an official package now for this task called app-portage/portpeek. It can find obsolete USE flags and obsolete KEYWORDS, and it cleans the files if -f (fix) is added as a parameter.
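If you only want a quick report of which atoms occur on more than one line, a naive shell sketch is enough. This is NOT portpeek itself — it does no version comparison or stabilization check (portpeek/eix do that properly), and it breaks on package names that themselves contain "-&lt;digit&gt;" — but it flags candidates for manual cleanup:

```shell
# find_dupes: list package atoms that appear more than once in a
# keywords-style file, by stripping the operator and version suffix
# and reporting duplicated names.
find_dupes() {
    grep -v '^#' "$1" |                  # drop comment lines
    sed -e 's/^[<>=~]*//' \
        -e 's/-[0-9][0-9.]*.*$//' |     # strip operator and version
    sort | uniq -d                       # atoms appearing more than once
}
```

For the example file above, `find_dupes /etc/portage/package.keywords` would print `dev-python/ipython` once, since all three lines reduce to the same atom.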
How to tidy up the .keywords file on a gentoo system?
1,305,944,454,000
I've been trying to use udev to make a Debian system run a bash script when a wireless card is connected. So far I created this file /etc/udev/rules.d/wifi-detect.rules: ACTION=="add", ATTRS{idVendor}=="0cf3", ATTRS{idProduct}=="9271", RUN+="/root/test.sh" And for now, I'm trying to make test.sh with this contents work: #!/bin/bash /bin/echo "test!" > /test.txt But for some reason, nothing seems to happen when I connect the wireless card, no test.txt file is created. My lsusb on the card: Bus 001 Device 015: ID 0cf3:9271 Atheros Communications, Inc. AR9271 802.11n Running udevadm monitor –env this is what happens when I connect the card: KERNEL[1017.642278] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3 (usb) KERNEL[1017.644676] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0 (usb) KERNEL[1017.645035] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/firmware/1-1.3 (firmware) KERNEL[1017.708056] remove /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/firmware/1-1.3 (firmware) UDEV [1017.714772] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3 (usb) UDEV [1017.733002] remove /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/firmware/1-1.3 (firmware) UDEV [1017.772669] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/firmware/1-1.3 (firmware) UDEV [1017.798707] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0 (usb) KERNEL[1018.456804] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0/ieee80211/phy8 (ieee80211) KERNEL[1018.465994] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0/net/wlan0 (net) KERNEL[1018.479878] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0/leds/ath9k_htc-phy8 (leds) KERNEL[1018.483074] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/usb_device/usbdev1.20 (usb_device) UDEV [1018.600456] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0/leds/ath9k_htc-phy8 (leds) UDEV [1018.604376] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0/ieee80211/phy8 (ieee80211) UDEV [1018.626243] add 
/devices/platform/bcm2708_usb/usb1/1-1/1-1.3/usb_device/usbdev1.20 (usb_device) KERNEL[1018.659318] move /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0/net/wlan1 (net) UDEV [1018.758843] add /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0/net/wlan1 (net) UDEV [1018.932207] move /devices/platform/bcm2708_usb/usb1/1-1/1-1.3/1-1.3:1.0/net/wlan1 (net) I've tried a lot of examples around but I can't make it work. I hope someone can help me out with this one ;) Thank you! EDIT: To simplify thing, I changed my rule to: ACTION=="add", ATTRS{idVendor}=="0cf3", ATTRS{idProduct}=="9271", RUN+="/bin/echo 'test' > /test.txt" I managed to set udevadm control --log-priority=info as @user1146332 suggested and I got this interesting log: Sep 9 16:27:53 iklive-rpi1 udevd[1537]: RUN '/bin/echo 'test' > /test.txt' /etc/udev/rules.d/wifi-detect.rules:1 Sep 9 16:27:53 iklive-rpi1 udevd[1544]: starting 'firmware.agent' Sep 9 16:27:53 iklive-rpi1 udevd[126]: seq 663 queued, 'remove' 'firmware' Sep 9 16:27:53 iklive-rpi1 udevd[126]: seq 663 forked new worker [1547] Sep 9 16:27:53 iklive-rpi1 udevd[1537]: 'firmware.agent' [1544] exit with return code 0 Sep 9 16:27:53 iklive-rpi1 udevd[1548]: starting '/bin/echo 'test' > /test.txt' Sep 9 16:27:53 iklive-rpi1 udevd[1547]: seq 663 running Sep 9 16:27:53 iklive-rpi1 udevd[1547]: no db file to read /run/udev/data/+firmware:1-1.3.4: No such file or directory Sep 9 16:27:53 iklive-rpi1 udevd[1547]: passed -1 bytes to netlink monitor 0x1af5ee0 Sep 9 16:27:53 iklive-rpi1 udevd[126]: seq 663 done with 0 Sep 9 16:27:53 iklive-rpi1 udevd[1547]: seq 663 processed with 0 Sep 9 16:27:53 iklive-rpi1 udevd[1537]: '/bin/echo 'test' > /test.txt'(out) 'test > /test.txt' Sep 9 16:27:53 iklive-rpi1 udevd[1537]: '/bin/echo 'test' > /test.txt' [1548] exit with return code 0 So... Isn't the return code 0 the exit code for successful completion? If so why I don't get any file on the system? EDIT 2: I managed to get this working using the tip by @htor. 
My current rule:

ACTION=="add", ATTRS{idVendor}=="0cf3", ATTRS{idProduct}=="9271", RUN+="/bin/sh -c '/bin/echo test >> /test.txt'"

But for some reason the command is executed about 8 times; is there a way to avoid this? I think this is happening because when the wireless card's drivers are loaded, they need to virtually unmount and remount the card. Tips?
I had a similar problem a while ago, and the solution was to change the RUN+= part to RUN+="sh -c '/root/test.sh'". Now, I don't know if you need that in this case, as the rule is calling a script, not a command. Another observation: try removing the ! from the "test!" string, or replace the double quotes with single quotes. The bang ! is probably making trouble because of its special meaning in the shell, and the double quotes preserve that meaning.
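Combining the sh -c fix with tighter match conditions may also address the follow-up problem of the rule firing several times. This is an untested sketch: matching on the net subsystem and interface name means the rule should only fire when the network interface itself appears, rather than for every sub-device the adapter creates (ATTRS matches walk up to the parent USB device):

```
# /etc/udev/rules.d/wifi-detect.rules (sketch)
ACTION=="add", SUBSYSTEM=="net", KERNEL=="wlan*", ATTRS{idVendor}=="0cf3", ATTRS{idProduct}=="9271", RUN+="/bin/sh -c '/root/test.sh'"
```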
Auto-run script when Wifi card is plugged in (udev)
1,305,944,454,000
I'm using an xrandr script to set screen size and rotation. In this case one screen is in landscape mode and the other is rotated. How can I detect this rotation in the Awesome WM configuration? The goal is to set the tag layout so that the windows are divided along the short axis of the screen. That is, a tag which uses awful.layout.suit.tile in landscape mode would use awful.layout.suit.tile.bottom in portrait mode. That is, rather than this: I want this:
Today this is rather easy. Assuming you have the following layouts defined in your rc.lua: awful.layout.layouts = { awful.layout.suit.tile, awful.layout.suit.tile.bottom, } With awful.screen.connect_for_each_screen(func) you can call a function for each existing and created-in-the-future screen. It is very likely you have such a call in your rc.lua already (for example to set the wallpaper or create tags). Depending on your configuration you need something like this: awful.screen.connect_for_each_screen(function(s) if s.geometry.width >= s.geometry.height then awful.tag({ "1", "2", "3", "4", "5", "6", "7", "8", "9", "0" }, s, awful.layout.layouts[1]) else awful.tag({ "1", "2", "3", "4", "5", "6", "7", "8", "9", "0" }, s, awful.layout.layouts[2]) end end)
How to use screen rotation in Awesome WM configuration?
1,305,944,454,000
All I want to do is pass mailto: links to urxvt -e mutt -F ~/path/to/muttrc with the rest of the mailto: URL appended. I've tried every script I can find online that purports to do this, from simple:

#!/bin/sh
exec "urxvt -e mutt -F /path/to/muttrc \"$@\""

to complex, and the most they do is open a terminal window for a split second before it automatically vanishes again (and there is no evidence of a running mutt process). Any suggestions?
Remove the quotes, or the shell will try to execute the full string as a command (which obviously does not exist). #!/bin/sh exec urxvt -e mutt -F /path/to/muttrc "$@" Not tested, but the presence of quotes is the explanation for the vanishing of the terminal.
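With the quoting fixed, a fuller wrapper might also strip the mailto: scheme for older mutt versions that don't parse mailto: URLs themselves. A sketch — the script name, helper name and muttrc path are made up for illustration:

```shell
#!/bin/sh
# mailto-handler.sh (sketch): Firefox passes the full "mailto:..." URL
# as $1; recent mutt versions accept that directly, but stripping the
# scheme and any ?query part also works for older ones.
url_to_addr() {
    printf '%s\n' "$1" | sed -e 's/^mailto://' -e 's/?.*$//'
}
# exec urxvt -e mutt -F "$HOME/path/to/muttrc" "$(url_to_addr "$1")"
```

So `url_to_addr 'mailto:[email protected]?subject=Hi'` yields just `[email protected]`.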
How to make Firefox open mailto: links with mutt in terminal
1,305,944,454,000
I tried to install ISPConfig 3 on Debian Jessie 8.1, and it couldn't connect to MySQL (MariaDB 10.1). So I pressed CTRL+C to kill the installer and tried to manually log in to MySQL, but failed. It was complaining about the socket. So I purged and removed MariaDB and MySQL:

service mysql stop
apt-get --purge remove "mysql*"
mv /etc/mysql/ /tmp/mysql_configs/
apt-get remove --purge mysql*
apt-get autoremove
apt-get autoclean
service apache2 restart
apt-get update

Inside sources.list I have (I added the last two lines) (nano /etc/apt/sources.list):

deb http://debian.mirror.constant.com/ jessie main contrib non-free
deb-src http://debian.mirror.constant.com/ jessie main contrib non-free
deb http://security.debian.org/ jessie/updates main contrib non-free
deb-src http://security.debian.org/ jessie/updates main contrib non-free
deb [arch=amd64,i386] http://ftp.utexas.edu/mariadb/repo/10.0/debian jessie main
deb-src http://ftp.utexas.edu/mariadb/repo/10.0/debian jessie main

Then I followed the commands given by MariaDB:

sudo apt-get install software-properties-common
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db
sudo apt-get update
sudo apt-get install mariadb-server

I get the following error:

Setting up mariadb-server-10.1 (10.1.9+maria-1~jessie) ...
2015-12-15 11:26:57 140472422967232 [Note] /usr/sbin/mysqld (mysqld 10.1.9-MariaDB-1~jessie) starting as process 12018 ...
2015-12-15 11:26:57 140472422967232 [Note] Using unique option prefix 'myisam_recover' is error-prone and can break in the future. Please use the full name 'myisam-recover-options' instead.
2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Using mutexes to ref count buffer pool pages 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: The InnoDB memory heap is disabled 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Memory barrier is not used 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Compressed tables use zlib 1.2.8 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Using Linux native AIO 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Using CPU crc32 instructions 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Initializing buffer pool, size = 256.0M 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Completed initialization of buffer pool 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Highest supported file format is Barracuda. 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: 128 rollback segment(s) are active. 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Waiting for purge to start 2015-12-15 11:26:57 140472422967232 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.26-74.0 started; log sequence number 19615081045 2015-12-15 11:26:57 140471636559616 [Note] InnoDB: Dumping buffer pool(s) not yet started 2015-12-15 11:26:58 140472422967232 [Note] Plugin 'FEEDBACK' is disabled. Job for mariadb.service failed. See 'systemctl status mariadb.service' and 'journalctl -xn' for details. invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing package mariadb-server-10.1 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mariadb-server: mariadb-server depends on mariadb-server-10.1 (= 10.1.9+maria-1~jessie); however: Package mariadb-server-10.1 is not configured yet. 
dpkg: error processing package mariadb-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mariadb-server-10.1 mariadb-server E: Sub-process /usr/bin/dpkg returned an error code (1) How can I fix it?
Try the following:

apt-get remove --purge mysql*
apt-get remove --purge mysql
apt-get remove --purge mariadb
apt-get remove --purge mariadb*
apt-get --purge remove mariadb-server
apt-get --purge remove python-software-properties

Note: when prompted whether you want to dump your current databases, say no. But you can deconfigure the phpMyAdmin database easily.

Install everything fresh. Add the following to your /etc/apt/sources.list file:

deb [arch=amd64,i386] http://ftp.utexas.edu/mariadb/repo/10.1/debian jessie main
deb-src http://ftp.utexas.edu/mariadb/repo/10.1/debian jessie main

Then:

apt-get install python-software-properties
apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db
apt-get install software-properties-common
apt-get install mariadb-server mariadb-client

Once you are done, you should be able to run mysql -V and see something like:

mysql Ver 15.1 Distrib 10.1.9-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
MariaDB - dependency problems - leaving unconfigured
1,305,944,454,000
What is this file anyway? Documentation makes no mention of it. And it's not supposed to be run automatically (version 4.3, 2 February 2014): Invoked as an interactive login shell, or with --login When Bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior. When a login shell exits, Bash reads and executes commands from the file ~/.bash_logout, if it exists. Invoked as an interactive non-login shell When an interactive shell that is not a login shell is started, Bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force Bash to read and execute commands from file instead of ~/.bashrc. So, typically, your ~/.bash_profile contains the line if [ -f ~/.bashrc ]; then . ~/.bashrc; fi after (or before) any login-specific initializations. Invoked non-interactively When Bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed: if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi but the value of the PATH variable is not used to search for the filename. As noted above, if a non-interactive shell is invoked with the --login option, Bash attempts to read and execute commands from the login shell startup files.
From Debian's bash README: What is /etc/bash.bashrc? It doesn't seem to be documented. The Debian version of bash is compiled with a special option (-DSYS_BASHRC) that makes bash read /etc/bash.bashrc before ~/.bashrc for interactive non-login shells. So, on Debian systems, /etc/bash.bashrc is to ~/.bashrc as /etc/profile is to ~/.bash_profile.
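As a practical consequence, Debian's stock /etc/bash.bashrc begins with a guard so that it does nothing for non-interactive shells (the shipped file checks $PS1; the sketch below tests the shell's option flags instead, which amounts to the same thing):

```shell
# Mimics the guard at the top of Debian's /etc/bash.bashrc: bash puts
# an "i" in the $- flag string only when running interactively, so
# scripts, cron jobs and ssh one-off commands skip the file's contents.
is_interactive() {
    case $- in
        *i*) return 0 ;;   # interactive shell
        *)   return 1 ;;   # non-interactive
    esac
}
```

Running this inside a script reports non-interactive, while typing it at a prompt reports interactive — exactly the distinction the startup-file rules above turn on.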
When is /etc/bash.bashrc invoked?
1,305,944,454,000
Say I have a window in kitty and press ctrl+shift+enter to open a new window. The new window always uses ~/ as current working directory. I'd like the new window to use the same working directory that the last window used. Is this possible?
In your kitty.conf, instead of using map ctrl+shift+enter new_window, use map ctrl+shift+enter new_window_with_cwd. Couldn't find this in the documentation but the author mentions it in this issue.
Make kitty terminal emulator to use the current working directory for new windows
1,305,944,454,000
I have openssh-server installed on a Debian Jessie host and am trying to find the original version of the sshd_config file. But that was apparently not installed by openssh-server: root@apu ~$ dpkg -S /etc/ssh/sshd_config dpkg-query: no path found matching pattern /etc/ssh/sshd_config What am I missing? Are there config files in Debian that are not managed by dpkg?
There are quite a few configuration files which aren't managed by dpkg; they're managed by maintainer scripts instead. In this case, in Debian 9 the original file is available as /usr/share/openssh/sshd_config; that's copied to /etc/ssh/sshd_config by openssh-server.postinst. In Debian 8 the original contents are stored in openssh-server.postinst directly.
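Once you know where the pristine copy lives, a plain diff shows the local changes. The sketch below uses stand-in files so it can run anywhere; on a real Debian 9 host you would compare /usr/share/openssh/sshd_config (pristine) against /etc/ssh/sshd_config (live), and the example option values are made up:

```shell
# Stand-in files demonstrating the comparison; swap in the real paths
# on an actual Debian 9 system.
printf '%s\n' 'PermitRootLogin prohibit-password' 'X11Forwarding yes' > /tmp/sshd_default
printf '%s\n' 'PermitRootLogin no'                'X11Forwarding yes' > /tmp/sshd_current
diff -u /tmp/sshd_default /tmp/sshd_current || true   # diff exits 1 when files differ
```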
Where is my /etc/ssh/sshd_config coming from?
1,305,944,454,000
On NixOS, I face a Postgres error, psql: FATAL: Peer authentication failed for user "postgres" — a similar error to this question — and would like to edit the authentication settings to resolve the issue as described in an answer there: edit pg_hba.conf to use md5 password authentication instead of peer authentication for Unix sockets (local connection type), so Pg accepts password authentication. I have resolved this same error previously on Ubuntu by editing the authorization configuration in that pg_hba.conf file. But my issue now is that NixOS does not appear to have such a pg_hba.conf to edit. How do I make the corresponding Postgres authorization configuration change on NixOS? I noticed this postgres.nix file on GitHub, which appears to do something with pg_hba.conf, or at least contains the string, but I do not understand how to change my authentication settings from that. Also, I have only ever used the one main configuration file, /etc/nixos/configuration.nix, and this appears to be a separate module, at nixos/modules/services/databases/postgresql.nix.
Following this example configuration, I set the NixOS option services.postgresql.authentication. I managed to get past the 'peer authentication failed' error when the postgres section of my /etc/nixos/configuration.nix had been set to # postgres services.postgresql.enable = true; services.postgresql.package = pkgs.postgresql94; services.postgresql.authentication = lib.mkForce '' # Generated file; do not edit! # TYPE DATABASE USER ADDRESS METHOD local all all trust host all all 127.0.0.1/32 trust host all all ::1/128 trust '';
How do I configure postgres's authorization settings in nixos?
1,305,944,454,000
At this page you can download a configuration file that lets you target a particular notebook architecture when compiling a new 32-bit Linux kernel. I need a 64-bit version. What do I have to do? I have compiled a kernel 2-3 times in my life, but I have never touched a config file directly; I have always used an interactive menu.
The recommended answer, as the comment suggests, is to save it as .config in the top-level source directory, and then run make xconfig (GUI, easier) or make menuconfig (TUI) on a 64-bit system. That said, to simply switch from 32-bit to 64-bit without changing anything else, a little editing at the beginning is all that's needed. Compare: Original (32-bit) # CONFIG_64BIT is not set CONFIG_X86_32=y # CONFIG_X86_64 is not set CONFIG_OUTPUT_FORMAT="elf32-i386" CONFIG_ARCH_DEFCONFIG="arch/x86/configs/i386_defconfig" "Converted" 64-bit CONFIG_64BIT=y # CONFIG_X86_32 is not set CONFIG_X86_64=y CONFIG_OUTPUT_FORMAT="elf64-x86-64" CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig" Note that CONFIG_X86=y is not touched.
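The handful of line edits above can be scripted. A sketch — the substitutions are taken verbatim from the comparison above, and after converting you should still run make olddefconfig (or make menuconfig) so that dependent options get recalculated:

```shell
# convert_config: apply the 32-bit -> 64-bit substitutions shown above
# to a saved .config, printing the converted file on stdout.
convert_config() {
    sed -e 's/^# CONFIG_64BIT is not set$/CONFIG_64BIT=y/' \
        -e 's/^CONFIG_X86_32=y$/# CONFIG_X86_32 is not set/' \
        -e 's/^# CONFIG_X86_64 is not set$/CONFIG_X86_64=y/' \
        -e 's/elf32-i386/elf64-x86-64/' \
        -e 's/i386_defconfig/x86_64_defconfig/' \
        "$1"
}
# Usage: convert_config old-32bit.config > .config && make olddefconfig
```

Untouched options such as CONFIG_X86=y pass through unchanged, matching the note above.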
How do I convert a kernel .config file from 32-bit to 64-bit?
1,305,944,454,000
The only way I can install the majority of packages without rejection from the signature database is to put SigLevel = Never in pacman.conf. It's not supposed to be the right way, but I don't seem to be able to get pacman working with any other SigLevel options. Is what I'm doing right? And is this a frequent and common security threat that I should worry about, every second, day and night? Thanks.
As of the end of this month, March 2012, all of the packages in the main databases (Core, Extra, Community and Multilib) and their Testing variants are signed. This means that you are able to, and should consider if you are interested in securing your machine, use Required in your SigLevel. Once you have checked and signed the master keys, it does not take long to authorize the other keys in the day-to-day updating of your machine. It would be overstating it greatly to describe this as an inconvenience and it is more than offset by the peace of mind that you will enjoy over the much longer term if you set this up properly now.
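A sketch of what the secured setup looks like. The pacman-key commands are from memory, so double-check them against the Arch wiki before running; the "Required DatabaseOptional" value is the combination that later became Arch's default:

```
# /etc/pacman.conf
[options]
SigLevel = Required DatabaseOptional

# One-time keyring initialization (run as root; --populate needs the
# archlinux-keyring package installed):
#   pacman-key --init
#   pacman-key --populate archlinux
```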
Arch: Is "SigLevel = Never" the only convenient way?
1,305,944,454,000
I have been setting up Linux desktops for a non-profit radio observatory. For me, this was the first time I had to think about "deploying" several identical machines, centralizing login, home directories and so on. It quickly became clear to me that, perhaps contrary to intuition, the "everything is textual" philosophy does not necessarily make that an easy task, and I wondered what seasoned admins do about this. In my case, I was installing Ubuntu 10.04 LTS on each machine. After installation, I ran a custom script that alters config files, removes and installs software and copies some files, like background images or browser bookmarks, from the server. I think, however, that my questions are distro-independent. Problems I was mainly encountering two problems: Firstly, inconsistent tools and config files, both across distributions and across versions, and secondly some crucial software not exposing settings to config files in an easy and intuitive manner. Let me give two short examples for what I mean: The ifconfig tool is being replaced by ip. All scripts relying on the presence of the former will break if, for example, run on a current ArchLinux box. So, I would need to check which tools in which versions are present on a machine I run a script on... this somehow feels like reinventing autoconf on a small scale. For the second problem, consider that I wanted to give the desktops some sort of "common identity". 
In my post-install config script, I use the following lines to achieve this:

scp user@server:/export/admin/*.jpg /usr/share/backgrounds/
scp user@server:/export/admin/ubuntu-wallpapers.xml /usr/share/gnome-background-properties/
sed 's/warty-final-ubuntu\.png/MyBackground\.jpg/' -i /usr/share/gconf/defaults/10_libgnome2-common
sed 's/warty\-final\-ubuntu\.png/MyBackground\.jpg/' -i /usr/share/gconf/defaults/16_ubuntu-wallpapers
sed 's/ubuntu-mono-dark/ubuntu-mono-light/' -i /usr/share/gconf/defaults/16_ubuntu-artwork
sed 's/Ambiance/Clearlooks/' -i /usr/share/gconf/defaults/16_ubuntu-artwork

I suppose that creating a CI is a common task for organizational admins. So how come there is no central config facility, perhaps even cross-desktop? Having to set two (identical!) undocumented values in two distinct config files strikes me as odd.

Questions

In an organizational environment, how do you handle central, unified configuration across multiple clients? Do systems like Debian's FAI offer significant advantages (aside from not having to change CDs) over my method of "install first, run script afterwards"? What are good practices for the transition between major versions of your distribution? And, apart from the technical stuff: Is there a desktop environment that promises long-term stability as far as the user experience is concerned? I don't think I can migrate my users to KDE 4 or GNOME 3, but XFCE still has some functional drawbacks... Is there a *nix system that addresses this type of configuration issue? For example, I'd assume there are systems that ask you for some imagery of your organization (logos, background images, colour and font sets etc.) and apply them to the login manager, users' desktops, web apps (!) and so on. Note: In our case, I have to work with fat clients, so a purely thin-client solution won't help.
Using Puppet, CFEngine or Chef is the right solution for your problem. Of course it will take some time and a trial-and-error approach to write a Puppet script that just works. These tools are widely used for automating complex installations in the cloud and have simplified the lives of admins like us. :)
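For a flavour of what that looks like, here is a minimal, illustrative Puppet manifest sketch; the package name, module path and file name are invented for this example, not taken from the question:

```puppet
# Illustrative sketch only: resource titles and the file source are made up.
# Ensure the desktop environment package is present.
package { 'xfce4':
  ensure => installed,
}

# Distribute the corporate wallpaper from the Puppet master.
file { '/usr/share/backgrounds/MyBackground.jpg':
  ensure => file,
  source => 'puppet:///modules/desktop/MyBackground.jpg',
}
```

Declaring the desired state like this, instead of scripting the steps, is what makes re-runs on already-configured machines safe.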
What are your best practices and future plans for deploying unixoid desktops? [closed]
1,305,944,454,000
I want to replicate the Debian installation choices made for my system's current configuration in the installation of a new system. Debian can be pre-configured through a "pre-configuration" (aka "preseed") file, which basically contains the answers to the questions the installer will ask. The documentation states that one way to create a preconfiguration file from an existing installation of Debian is to:

...use the debconf-get-selections from the debconf-utils package to dump both the debconf database and the installer's cdebconf database to a single file:

$ debconf-get-selections --installer > file
$ debconf-get-selections >> file

But it then immediately adds:

However, a file generated in this manner will have some items that should not be preseeded...

The documentation does not elaborate on what those items-that-should-not-be-preseeded would be. Could someone elaborate?

By way of illustration, below I include the second field of the output I get from the two commands above, where I've kept only the lines that begin with d-i, along with the comments, sometimes truncated for brevity. (The reason for keeping only the configuration lines that begin with d-i is that in the example pre-configuration file provided by Debian, only such lines appear.)

# Check the integrity of another CD-ROM? cdrom-checker/nextcd # Web server started, but network not running save-logs/no_network # for internal use only debian-installer/consoledisplay debian-installer/shell-plugin # Country, territory or area: # Choices: Antigua and Barbuda, Australia, Botswana, Canada, ... localechooser/shortlist # for internal use; can be preseeded preseed/include_command # Country of origin for the keyboard: # Choices: keyboard-configuration/layout # Choices: Canada, Mexico, Saint Pierre and Miquelon, United ...
localechooser/countrylist/North_America # Choices: Greece, Cyprus, other localechooser/shortlist/el # Keyboard layout: # Choices: keyboard-configuration/variant # Choices: Algeria, Angola, Benin, Botswana, Burkina Faso, Bu... localechooser/countrylist/Africa # Choices: Finland, Sweden, other localechooser/shortlist/sv # Keep default keyboard options ()? keyboard-configuration/unsupported_options # Choices: Cyprus, Turkey, other localechooser/shortlist/tr # Interactive shell di-utils-shell/do-shell # for internal use only # Choices: stable, testing, unstable cdrom/suite # Choose an installation step: # Choices: debian-installer/missing-provide # Check CD-ROM integrity? cdrom-checker/start # Failed to retrieve the preconfiguration file preseed/retrieve_error # Directory in which to save debug logs: save-logs/directory # for internal use only debconf/showold # Failed to open checksum file cdrom-checker/md5file_failed # Choices: Andorra, Spain, France, Italy, other localechooser/shortlist/ca # Write the changes to the storage devices and configure RAID... partman-md/confirm_nooverwrite # PCMCIA resource range options: hw-detect/pcmcia_resources # Failed to mount the floppy save-logs/floppy_mount_failed # for internal use only debconf/language # Choices: China, Singapore, Taiwan, Hong Kong, other localechooser/shortlist/zh_TW # Dummy template for preseeding unavailable questions debian-installer/dummy # Additional parameters for module : hw-detect/retry_params # Incorrect CD-ROM detected cdrom-detect/wrong-cd # for internal use; can be preseeded cdrom-detect/eject # Choices: Argentina, Bolivia, Chile, Colombia, Costa Rica, E... localechooser/shortlist/es # for internal use; can be preseeded preseed/run # Write the changes to disks and configure LVM? partman-lvm/confirm_nooverwrite # Cannot save logs save-logs/bad_directory # Choices: Belgium, Canada, France, Luxembourg, Switzerland, ... 
localechooser/shortlist/fr # Insufficient memory lowmem/insufficient # for internal use keyboard-configuration/optionscode # Choices: China, Taiwan, Singapore, Hong Kong, other localechooser/shortlist/zh_CN # Load missing firmware from removable media? hw-detect/load_firmware # Choices: Italy, Switzerland, other localechooser/shortlist/it # Choices: Antarctica localechooser/countrylist/Antarctica # Choose the next step in the install process: # Choices: Choose language, Configure the speech synthesizer ... debian-installer/main-menu # Failed to load installer component anna/install_failed # Choices: Russian Federation, Ukraine, other localechooser/shortlist/ru # for internal use keyboard-configuration/modelcode # Entering low memory mode lowmem/low # Choices: Jordan, United Arab Emirates, Bahrain, Algeria, Sy... localechooser/shortlist/ar # Keep current keyboard options in the configuration file? keyboard-configuration/unsupported_config_options # Choices: Antigua and Barbuda, Australia, Botswana, Canada, ... localechooser/shortlist/en # Method for toggling between national and Latin mode: # Choices: Caps Lock, Right Alt (AltGr), Right Control, Right... keyboard-configuration/toggle # for internal use only anna/retriever # Choices: Curaçao localechooser/countrylist/other # Choices: Albania, Andorra, Armenia, Austria, Azerbaijan, Be... localechooser/countrylist/Europe # locale localechooser/help/locale # Load CD-ROM drivers from removable media? cdrom-detect/load_media # for internal use; can be preseeded debian-installer/framebuffer # for internal use espeakup/voice # for internal use; can be preseeded preseed/include # Error reading Release file cdrom-detect/no-release # Ignore questions with a priority less than: # Choices: critical, high, medium, low debconf/priority # Key to function as AltGr: # Choices: The default for the keyboard layout, No AltGr key,... 
keyboard-configuration/altgr # CD-ROM detected cdrom-detect/success # Choices: Bouvet Island, Falkland Islands (Malvinas), Saint ... localechooser/countrylist/Atlantic_Ocean # Continue the install without loading kernel modules? anna/no_kernel_modules # for internal use; can be preseeded debian-installer/exit/poweroff # Choices: Bangladesh, India, other localechooser/shortlist/bn # for internal use; can be preseeded preseed/include/checksum # Integrity test failed cdrom-checker/mismatch # Load missing drivers from removable media? hw-detect/load_media # Keep default keyboard layout ()? keyboard-configuration/unsupported_layout # Start PC card services? hw-detect/start_pcmcia # for internal use; can be preseeded debian-installer/add-kernel-opts # for internal use; can be preseeded mouse/protocol # for internal use; can be preseeded mouse/left # for internal use keyboard-configuration/layoutcode # for internal use keyboard-configuration/store_defaults_in_debconf_db # Choices: Brazil, Portugal, other localechooser/shortlist/pt # for internal use; can be preseeded preseed/early_command # for internal use only debian-installer/exit/always_halt # Choices: Africa, Antarctica, Asia, Atlantic Ocean, Caribbea... localechooser/continentlist # Insert Debian boot CD-ROM cdrom-checker/firstcd # How should the debug logs be saved or transferred? # Choices: floppy, web, mounted file system save-logs/menu # for internal use; can be preseeded rescue/enable # for internal use only cdrom-detect/cdrom_fs # Insert formatted floppy in drive save-logs/insert_floppy # Translations temporarily not available localechooser/translation/none-yet # Keymap to use: # Choices: American English, Albanian, Arabic, Asturian, Bang... 
keyboard-configuration/xkb-keymap # for internal use; can be preseeded mouse/device # for internal use only cdrom-detect/hybrid # for internal use only debconf/translations-dropped # Country to base default locale settings on: # Choices: Antigua and Barbuda${!TAB}-${!TAB}en_AG, Australia... localechooser/preferred-locale # Choices: Spain, France, other localechooser/shortlist/eu # Choices: Argentina, Bolivia, Brazil, Chile, Colombia, Ecuad... localechooser/countrylist/South_America # Failed to mount CD-ROM cdrom-checker/mntfailed # Retry mounting the CD-ROM? cdrom-detect/retry # Choices: Serbia, Montenegro, other localechooser/shortlist/sr # Module needed for accessing the CD-ROM: # Choices: cdrom-detect/cdrom_module # for internal use; can be preseeded preseed/file # for internal use; can be preseeded hw-detect/load-ide # for internal use; can be preseeded preseed/interactive # Installation step failed debian-installer/main-menu/item-failure # Error while running '' hw-detect/modprobe_error # Choices: Pakistan, India, other localechooser/shortlist/pa # Use Control+Alt+Backspace to terminate the X server? keyboard-configuration/ctrl_alt_bksp # Choices: China, India, other localechooser/shortlist/bo # Language: # Choices: C${!TAB}-${!TAB}No localization, Albanian${!TAB}-$... localechooser/languagelist # Installer components to load: # Choices: anna/choose_modules_lowmem # for internal use only debian-installer/language # for internal use keyboard-configuration/variantcode # Choices: Anguilla, Antigua and Barbuda, Aruba, Bahamas, Bar... localechooser/countrylist/Caribbean # Language selection no longer possible localechooser/translation/no-select # Failed to copy file from CD-ROM. Retry? retriever/cdrom/error # Choices: Afghanistan, Bahrain, Bangladesh, Bhutan, Brunei D... localechooser/countrylist/Asia # Write the changes to disk and configure encrypted volumes? 
partman-crypto/confirm_nooverwrite # for internal use; can be preseeded debian-installer/country # No valid Debian CD-ROM cdrom-checker/wrongcd # Choices: Belgium, Germany, Liechtenstein, Luxembourg, Austr... localechooser/shortlist/de # for internal use; can be preseeded anna/standard_modules # Failed to process the preconfiguration file preseed/load_error # for internal use; can be preseeded preseed/file/checksum # Device file for accessing the CD-ROM: cdrom-detect/cdrom_device # for internal use; can be preseeded directfb/hw-accel # for internal use; can be preseeded debian-installer/allow_unauthenticated # Continue the installation in the selected language? localechooser/translation/warn-severe # for internal use; can be preseeded debian-installer/theme # Choices: American Samoa, Australia, Cook Islands, Fiji, Fre... localechooser/countrylist/Oceania # Are you sure you want to exit now? di-utils-reboot/really_reboot # Choices: Brazil, Portugal, other localechooser/shortlist/pt_BR # for internal use only debconf/frontend # for internal use; can be preseeded debian-installer/exit/halt # Choices: Belize, Costa Rica, El Salvador, Guatemala, Hondur... localechooser/countrylist/Central_America # Keep the current keyboard layout in the configuration file? keyboard-configuration/unsupported_config_layout # Compose key: # Choices: No compose key, Right Alt (AltGr), Right Control, ... keyboard-configuration/compose # Method for temporarily toggling between national and Latin ... # Choices: No temporary switch, Both Logo keys, Right Alt (Al... keyboard-configuration/switch # Installer components to load: # Choices: cfdisk-udeb: Manually partition a hard drive (cfdi... anna/choose_modules # Integrity test successful cdrom-checker/passed # Manually select a CD-ROM module and device? 
cdrom-detect/manual_config # Terminal plugin not available debian-installer/terminal-plugin-unavailable # Insert a Debian CD-ROM cdrom-checker/askmount # Additional locales: # Choices: aa_DJ.UTF-8, aa_DJ, aa_ER, aa_ER@saaho, aa_ET, af_... localechooser/supported-locales # for internal use only cdrom-detect/usb-hdd # for internal use; can be preseeded preseed/late_command # Failed to run preseeded command preseed/command_failed # Modules to load: # Choices: hw-detect/select_modules # Keyboard model: # Choices: keyboard-configuration/model # Continue the installation in the selected language? localechooser/translation/warn-light # Choices: Aruba, Belgium, Netherlands, other localechooser/shortlist/nl # for internal use only cdrom/codename # Choices: British Indian Ocean Territory, Christmas Island, ... localechooser/countrylist/Indian_Ocean # for internal use; can be preseeded preseed/boot_command # Web server started save-logs/httpd_running # System locale: # Choices: debian-installer/locale # Choices: Macedonia\, Republic of, Albania, other localechooser/shortlist/sq # Country of origin for the keyboard: # Choices: keyboard-configuration/layout # Keymap to use: # Choices: American English, Albanian, Arabic, Asturian, Bang... keyboard-configuration/xkb-keymap # Keyboard layout: # Choices: English (US), English (US) - Cherokee, English (US... keyboard-configuration/variant # Keep default keyboard options ()? keyboard-configuration/unsupported_options # Use Control+Alt+Backspace to terminate the X server? keyboard-configuration/ctrl_alt_bksp # for internal use keyboard-configuration/variantcode # for internal use keyboard-configuration/optionscode # for internal use keyboard-configuration/modelcode # Keep current keyboard options in the configuration file? keyboard-configuration/unsupported_config_options # Keep the current keyboard layout in the configuration file? 
keyboard-configuration/unsupported_config_layout # Method for toggling between national and Latin mode: # Choices: Caps Lock, Right Alt (AltGr), Right Control, Right... keyboard-configuration/toggle # Compose key: # Choices: No compose key, Right Alt (AltGr), Right Control, ... keyboard-configuration/compose # Method for temporarily toggling between national and Latin ... # Choices: No temporary switch, Both Logo keys, Right Alt (Al... keyboard-configuration/switch # Key to function as AltGr: # Choices: The default for the keyboard layout, No AltGr key,... keyboard-configuration/altgr # Keep default keyboard layout ()? keyboard-configuration/unsupported_layout # Keyboard model: # Choices: A4Tech KB-21, A4Tech KBS-8, A4Tech Wireless Deskto... keyboard-configuration/model # for internal use keyboard-configuration/layoutcode # for internal use keyboard-configuration/store_defaults_in_debconf_db
Short answer

From the Debian wiki page dedicated to D-I preseed:

Do not work off a debconf-get-selections (--installer) generated preseed.cfg but get the values from it and modify the example preseed file with them.

The preseed example file provided by Debian should be enough to get you started, but you can find a lot of other preseed files provided by different people for different purposes on the same wiki page.

Less short answer

In the list of debconf questions you posted, each question with a comment "for internal use" without "can be preseeded" should not be preseeded. But a lot of other debconf questions should not be pre-answered with a preseed file either, like the hardware-related questions (if you want to run the installation on different hardware), or the questions recording some automatic configuration failure or success (which can be preseeded in some special cases, but could just as well be answered by the automatic process). The output of debconf-get-selections contains a lot of auto-answered questions most people never see and do (should) not care about. These automatic choices change over time, with new hardware, software, better detection or new possibilities. It is important to touch as little of the automatic configuration as you can, to benefit from all the improvements of the debian-installer and to keep the changes needed to your preseed file to a minimum over time and across different hardware.
What values from debconf-get-selections should not be preseeded?
1,305,944,454,000
I'm writing a script to automate setting up Puppet agent configuration files in Docker. Basically, I need to ensure that the following section is in /etc/puppet/puppet.conf:

[agent]
server=$PUPPETMASTER_HOSTNAME
masterport=$PUPPETMASTER_PORT

What I've been doing so far in my Puppet agent runit script is this:

function write_puppet_config () {
  read -d '' puppet_config <<EOF
[agent]
server=$1
masterport=$2
EOF
  echo -e "$puppet_config" >> /etc/puppet/puppet.conf
}

# default puppet master port is 8410
test -z "$PUPPET_MASTER_TCP_PORT" && export PUPPET_MASTER_TCP_PORT="8410"

# if there is a puppet master host defined, rewrite the config to match
if [ ! -z "$PUPPET_MASTER_TCP_HOST" ]; then
  write_puppet_config "$PUPPET_MASTER_TCP_HOST" "$PUPPET_MASTER_TCP_PORT"
fi

The problem should be pretty apparent. If the Puppet configuration already specifies the configuration, I'm just appending another [agent] section, which is bad. I could just switch on conditional logic (i.e. grep to see if it's there and then rewrite it with sed if it is), but is there a way to do an edit from the command line? I'd like to basically run a command which says "if there isn't an agent section, add it, and then make sure that server and masterport are set to the right values in that section." I know that structured tools like this exist for XML, but what about INI-style files?
Here are a few script examples. These are bare minimum and don't bother with error checking, command line options, etc. I've indicated whether I've run the script myself to verify its correctness.

Ruby

Install the inifile rubygem for this script. This script is tested.

#!/usr/bin/env ruby
# filename: ~/config.rb
require 'inifile'

PUPPETMASTER_HOSTNAME='hello'
PUPPETMASTER_PORT='world'

ini = IniFile::load('/etc/puppet/puppet.conf')
ini['agent']['server'] = PUPPETMASTER_HOSTNAME
ini['agent']['masterport'] = PUPPETMASTER_PORT
ini.save

Usage:

$ chmod 700 ~/config.rb
$ sudo ~/config.rb    # or, if using rvm, rvmsudo ~/config.rb

Perl

Install Config::IniFiles using cpan or your OS package manager (if there is a package available). This script is untested as I've stopped using perl on my system. It may need a little work, and corrections are welcome.

#!/usr/bin/env perl
# filename: ~/config.pl
use Config::IniFiles;

my $PUPPETMASTER_HOSTNAME='perl';
my $PUPPETMASTER_PORT='1234';

my $ini = Config::IniFiles->new(-file => '/etc/puppet/puppet.conf');
if (! $ini->SectionExists('agent')) {
    $ini->AddSection('agent');
}
if ($ini->exists('agent', 'server')) {
    $ini->setval('agent', 'server', $PUPPETMASTER_HOSTNAME);
} else {
    $ini->newval('agent', 'server', $PUPPETMASTER_HOSTNAME);
}
if ($ini->exists('agent', 'masterport')) {
    $ini->setval('agent', 'masterport', $PUPPETMASTER_PORT);
} else {
    $ini->newval('agent', 'masterport', $PUPPETMASTER_PORT);
}
$ini->RewriteConfig();

Usage:

$ chmod 700 ~/config.pl
$ sudo ~/config.pl

awk

This script is more Bash and *nix friendly and uses a common utility of *nix OS's, awk. This script is tested.

#!/usr/bin/env awk
# filename: ~/config.awk
BEGIN {
    in_agent_section = 0;
    is_host_done = 0;
    is_port_done = 0;
    host = "awk.com";
    port = "4567";
}
in_agent_section == 1 {
    if ($0 ~ /^server[[:space:]]*=/) {
        print "server=" host;
        is_host_done = 1;
        next;
    } else if ($0 ~ /^masterport[[:space:]]*=/) {
        print "masterport=" port;
        is_port_done = 1;
        next;
    } else if ($0 ~ /^\[/) {
        in_agent_section = 0;
        if (! is_host_done) {
            print "server=" host;
        }
        if (! is_port_done) {
            print "masterport=" port;
        }
    }
}
/^\[agent\]/ {
    in_agent_section = 1;
}
{
    print;
}

Usage:

$ awk -f ~/config.awk < /etc/puppet/puppet.conf > /tmp/puppet.conf
$ sudo mv /tmp/puppet.conf /etc/puppet/puppet.conf
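One more option, if Python is available: the standard-library configparser handles INI-style files. A minimal sketch (the hostname and port values passed in the test below are placeholders; note that configparser writes "server = value" with spaces around the equals sign, which Puppet's config parser also accepts):

```python
import configparser

def ensure_agent(path, server, masterport):
    """Make sure [agent] exists in the INI file at `path` with the
    given server and masterport values, preserving other sections."""
    cfg = configparser.ConfigParser()
    cfg.read(path)  # a missing file is silently ignored by read()
    if not cfg.has_section('agent'):
        cfg.add_section('agent')
    cfg['agent']['server'] = server
    cfg['agent']['masterport'] = masterport
    with open(path, 'w') as f:
        cfg.write(f)
```

This is idempotent in the sense the question asks for: running it twice leaves a single [agent] section with the right values.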
Editing INI-like files with a script
1,305,944,454,000
Is there a way to read a .vimrc file for only a single ssh session? That is, when I log in I perform some operation so that vim uses say /tmp/myvimrc until I log out? I do not want to permanently overwrite the current .vimrc, I just need to use a different set of settings for the duration of my login every once in a while.
Suppose you have this other set of settings in /tmp/myvimrc. If my reading of man vim is correct, you can start vim with this set of settings using the following:

$ vim -u /tmp/myvimrc

To make this an option for the rest of the session, I would create a function that sets this as an alias for vim. In bash, I would put something like this in my .bashrc file:

function vimswitch {
    alias vim='vim -u /tmp/myvimrc'
}

Then, when I wanted my new vim settings, I would just run:

$ vimswitch

Note that I wouldn't store myvimrc in /tmp since this could easily be cleared out upon reboot. If you are using a shell other than bash this should still be possible, but the syntax could differ slightly.
Temporary .vimrc
1,347,520,499,000
What are the main differences between the Windows registry and the approach used in UNIX/Linux, and what are the advantages and disadvantages of each approach?
There is no real cognate in UNIX, but as wollud1969 says, /etc comes close. That, though, is only part of the story. You'd also need to consider things under /var (for information about installed software, running services, etc), /usr/local/etc (at least on FreeBSD and certain Linux distros) for configuration information for installed third party apps, and of course each user's dotfiles, which customise how software works for them (roughly equivalent to the HKEY_CURRENT_USER hive in the registry). Then there's /dev for device interfaces, /proc for running process data, and the kernel itself (either through sysctl, a kernfs virtual file system, etc). Depending on your particular platform, there may be other places to look, too. The primary advantage in the UNIX approach, from my perspective as a UNIX user these last 12 years, is that application config files, wherever they live, are usually just plain old text files, so can be read and edited by plain old humans. (Except, possibly, the sendmail config file, but that's a completely different religious war...). Many applications (browsers, desktop apps, etc) create config files for you, but they are text files, and the apps usually won't stop working if those files are then edited by hand, provided the edits don't break their syntax. The downside, though, is that there is no universal config language, so you need to learn the syntax for each app you manage. In reality, though, this is only a small annoyance at worst. The Windows Registry was developed, at least in part, to address a similar state of affairs that was deemed problematic by Microsoft, where application ini files were not centrally managed, with no strict control on what values went in them, and no standard location for software to put them.
The registry fixes some of those concerns (it is centrally managed, with specific data types that can be stored in it), but it has disadvantages of its own: its binary format means that even experienced Windows admins need to use a GUI tool to look at it; it's prone to getting corrupted if you lose power; and not all software authors are sufficiently conscientious to clean up after themselves when you decide to uninstall their kewl shareware app. And, as with almost any other file in Windows, it's entirely possible for the various components of the registry to become fragmented on disk, resulting in painfully slow read and update operations. There is no requirement for software to make use of the registry, and even Microsoft's own .NET platform uses XML files instead. The Wikipedia page about the registry is quite informative.
Differences between Windows registry and UNIX/Linux approach [closed]
1,347,520,499,000
I installed Redhat 6 x86_64. I am using the Network connection screen to set a static IP address like below (I want two PCs in my house to see each other: one Redhat PC and one Mac):

192.168.0.5
255.255.255.0
192.168.0.1

When I run ifconfig it displays only lo and virbr0 information. I don't know what these items are (I don't really know much about network settings). When I try ifconfig -a it displays eth0, lo, sit0 and virbr0. The information for eth0 is as follows:

Link encap:Ethernet  HWaddr 90:2B:34:74:05:30
BROADCAST MULTICAST  MTU:1500  Metric:1
RX packets:192 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 frame:0
collisions:0 txqueuelen:1000
RX bytes:53811 (52.5 KiB)  TX bytes:468 (468.0 b)
Interrupt:29 Base address:0xc000

Could someone help me point out whether anything is wrong with my settings, or how to resolve this problem?
You can set a static IP by editing the file /etc/sysconfig/network-scripts/ifcfg-eth0 as the root user in Redhat. It should look like this:

DEVICE=eth0
BOOTPROTO=STATIC
IPADDR=192.168.0.5
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes

After saving this file, you need to restart the network service using the following commands:

$ sudo /etc/init.d/network stop
$ sudo /etc/init.d/network start

This should assign an IP address to the eth0 interface as well, and the ifconfig command should then list eth0 too.
Setup static IP in redhat 6
1,347,520,499,000
I want to automatically configure my fstab in Python by running a script. I thought of using ConfigParser, but I am unable to use it properly.
You can use the fstab module. Its documentation is here. Example:

import pprint
from fstab import Fstab

fstab = Fstab()
for entry in fstab.entries:
    pprint.pprint(entry)
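If you would rather not depend on a third-party module, here is a minimal stdlib-only sketch of the reading half. The field names follow fstab(5); this is an illustration, not a complete parser (it ignores escape sequences such as \040 in mount points):

```python
from collections import namedtuple

# The six whitespace-separated fields of an fstab line, per fstab(5).
FstabEntry = namedtuple(
    'FstabEntry', 'device mountpoint fstype options dump passno')

def parse_fstab(text):
    """Parse fstab-formatted text into a list of FstabEntry tuples,
    skipping blank lines and comments."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        fields = line.split()
        # The dump and passno fields are optional and default to 0.
        fields += ['0'] * (6 - len(fields))
        entries.append(FstabEntry(*fields[:6]))
    return entries
```

For modifying fstab you would edit the tuples, render them back to lines, and write the file atomically (write to a temp file, then rename over /etc/fstab).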
how to read and modify fstab in python?
1,347,520,499,000
What is the difference between ssh configuration file settings: At the top (global) level In a Host * scope? Assuming there is a difference, in which cases would each be preferred?
The SSH configuration documentation touches on this indirectly: For each parameter, the first obtained value will be used. The configuration files contain sections separated by Host specifications, and that section is only applied for hosts that match one of the patterns given in the specification. Since the first obtained value for each parameter is used, more host-specific declarations should be given near the beginning of the file, and general defaults at the end. So settings in the “top” level can’t be overridden, whereas settings in Host * will be overridden by any setting defined before that section (in the “top” level, or in a section matching the target host). This answers “in which cases would each be preferred”: the “top” level should be used for settings which shouldn’t be overridden, and the Host * section, which should come last, should be used for default settings.
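As a concrete illustration (the host name and option values are invented), here is a config laid out according to that rule:

```text
# Top level: read first for every host, so it cannot be overridden below
IdentitiesOnly yes

Host build.example.com
    User builder
    Port 2222

# General defaults: kept last so that any earlier match wins
Host *
    User admin
    ServerAliveInterval 60
```

Connecting to build.example.com gets User builder and Port 2222; every other host falls through to User admin; and IdentitiesOnly yes applies everywhere, since no later section can override a value that was already obtained.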
ssh config: global settings vs `Host *`
1,347,520,499,000
I'm trying to understand what cloned_interfaces in FreeBSD's rc.conf really does. Manual page says: cloned_interfaces: (str) Set to the list of clonable network interfaces to create on this host. Further cloning arguments may be passed to the ifconfig(8) create command for each interface by setting the create_args_<interface> variable. If an interface name is specified with sticky keyword, the interface will not be destroyed even when rc.d/netif script is invoked with stop argument. This is useful when reconfiguring the interface without destroying it. Entries in cloned_interfaces are automatically appended to network_interfaces for configuration. This doesn't give any useful information of what it does. It is used by for example if_bridge, if_tap and if_epair. What does it actually do? Why do I need it for specific network modules and not for others? Does it create some kind of dummy device? When is it needed? Security implications? Performance implications?
cloned_interfaces is one of the several settings in rc.conf, rc.conf.local, et al. that control the setting up and shutting down of network interfaces. In the Mewburn rc system it is /etc/rc.d/netif that is mostly responsible for using these settings. With nosh system management the external formats import subsystem takes these settings and translates them into a suite of single-shot and long-running services in /var/local/sv. Both systems at their bases run ifconfig a lot and run some long-running dæmons. cloned_interfaces is almost the same as the network_interfaces setting in that it lists network interfaces to be brought up and shut down. The single difference between the twain is that network_interfaces describes network interfaces that pre-exist, because hardware detection (of network interface hardwares) has brought them into existence; whereas cloned_interfaces are network interfaces that are brought into existence by dint of these service startup and shutdown actions alone. A bridge, tap, or epair network interface does not represent actual network interface hardware. Thus an extra step is necessary in startup and shutdown, the point where a new network interface is cloned and destroyed. This is done with, again, the ifconfig command. The first bridge network interface is cloned by running ifconfig bridge0 create, and destroyed with ifconfig bridge0 destroy. Listing bridge0 in the cloned_interfaces list causes this to happen and these commands to be run first and last; whereas listing it in network_interfaces would not, and the system would assume that there was an existing bridge0 device to be manipulated. (Technically, the loopback interface is not hardware, either. It is cloned, too; hence the first cloned loopback interface being lo0, for those who have ever wondered about the name. But there is special casing for it because it is not optional as bridges, taps, and epairs are.) Other than that, the two sets of interfaces are treated the same. 
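To make this concrete, here is a hedged rc.conf sketch (the interface names are illustrative; the addm member syntax follows the FreeBSD Handbook's bridging example):

```text
cloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm em0 addm tap0 up"
ifconfig_em0="up"
```

At startup this causes ifconfig bridge0 create and ifconfig tap0 create to run before the usual per-interface configuration, whereas em0, being real hardware, needs no create step.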
Further reading

Jonathan de Boyne Pollard (2017). "Networking". nosh Guide. Softwares.
Andrew Thompson. "Bridging". FreeBSD Handbook.
Brooks Davis (2004). The Challenges of Dynamic Network Interfaces.
What does 'cloned_interfaces' in rc.conf accomplish?
1,347,520,499,000
I want to generate xorg.conf but for that X needs to not be running. How do I stop X or start without it? I tried ctrl + alt + F2 but the X server is still running. I'm running Lubuntu 14.10.
I ended up doing the following:

sudo service lightdm stop

Then I had to press ctrl + alt + F2 and log in on the second virtual terminal; otherwise it would just sit there with a dark screen. To start it back up:

sudo service lightdm start
stop/restart X server
1,347,520,499,000
Sometimes I forget to make a backup of a given Linux file such as /etc/rc.local, /etc/rsyslog.conf, /etc/dhcpcd.conf, etc., and later wish I had. Distribution agnostic, is there a good approach to later getting a copy of the original, unf'd up file?
While the topic of configuration file backup/versioning might seem simple on the surface, it is one of the hot topics of system/infrastructure administration.

Distribution agnostic, to keep automatic backups of /etc, a simple solution is to install etckeeper. By default it commits /etc to a repository/version control system installed on the same system. The commits/backups happen by default daily and/or each time there are package updates. The etckeeper package is present in pretty much all Linux distributions. See: https://help.ubuntu.com/lts/serverguide/etckeeper.html or https://wiki.archlinux.org/index.php/Etckeeper

It could be argued it is a good industry standard to have this package installed.

If you do not have etckeeper installed and need a particular etc file, there are several options: you might copy it from a similar system of yours, or you can ask your package manager to download the installation package (or download it by hand) and extract the etc file from there; one of the easiest ways is using mc (midnight commander) to navigate inside packages as if they were directories. You can also get packages from the distribution repositories; in the case of Debian that is http://packages.debian.org

Ultimately, if the etc/configuration files are mangled beyond recognition, you always have the option to reinstall the particular package. Move the etc files to a backup name/directory, and then, for instance in Debian:

apt-get install --reinstall package_name

You can also configure and install the source repos for your particular distribution/version, install the source package, and get the etc files from there. https://wiki.debian.org/apt-src (again a Debian example)

For some packages, you might also find samples of the configuration files at /usr/share/doc/package_name, which might or might not be fit for use.
As a last resort, you may also find etc files in the repositories/GitHub addresses of the corresponding open source projects; just bear in mind that distributions often change default settings and move things around. Obviously, none of these alternatives exempts you from having a sound backup policy in place, from which you could retrieve your lost /etc files. Times also move fast: if following a devops philosophy, you might choose to discard certain systems altogether and redeploy them in case some files get corrupted; you might also use CI and redeploy the files, for instance from Jenkins.
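For the package-manager route, a sketch like the following (Debian family; the package and file names are just examples, and this is not the only way) pulls a pristine conffile out of a downloaded .deb without installing anything. dpkg-deb exposes the package payload as a tar stream, so plain tar can extract a single member:

```shell
# extract_from_tar <member>: read a tar stream on stdin, print one member
extract_from_tar() {
  tar -xO -f - "$1"
}

# Example (hypothetical package and path, shown for illustration only):
#   apt-get download rsyslog
#   dpkg-deb --fsys-tarfile rsyslog_*.deb | extract_from_tar ./etc/rsyslog.conf
```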
How to get copies of default Linux etc files
1,347,520,499,000
Under Debian Jessie I had backports in my package sources for the first time. I got collisions in apt because some packages I had installed from http://www.deb-multimedia.org/ have higher versions in backports. At first, the pin priority for multimedia was 100. I tried to set the priority for backports to -1, but it didn't work.

LANG=C cat /etc/apt/preferences

Package: *
Pin: origin deb http://http.us.debian.org/debian jessie-backports main release o=Debian Backports,a=jessie-backports,n=jessie-backports,l=Debian Backports,c=non-free
Pin-Priority: -1

Package: *
Pin: origin deb http://http.us.debian.org/debian jessie-backports main release o=Debian Backports,a=jessie-backports,n=jessie-backports,l=Debian Backports,c=main
Pin-Priority: -1

Package: *
Pin: origin deb http://http.us.debian.org/debian jessie-backports contrib release o=Debian Backports,a=jessie-backports,n=jessie-backports,l=Debian Backports,c=contrib
Pin-Priority: -1

Package: *
Pin: origin www.deb-multimedia.org
Pin-Priority: 300

What is wrong with my /etc/apt/preferences?

LANG=C apt-cache policy | grep backports | egrep -i -v translat

 100 http://http.us.debian.org/debian/ jessie-backports/non-free i386 Packages
     release o=Debian Backports,a=jessie-backports,n=jessie-backports,l=Debian Backports,c=non-free
 100 http://http.us.debian.org/debian/ jessie-backports/contrib i386 Packages
     release o=Debian Backports,a=jessie-backports,n=jessie-backports,l=Debian Backports,c=contrib
 100 http://http.us.debian.org/debian/ jessie-backports/main i386 Packages
     release o=Debian Backports,a=jessie-backports,n=jessie-backports,l=Debian Backports,c=main
 100 http://http.us.debian.org/debian/ jessie-backports/non-free amd64 Packages
     release o=Debian Backports,a=jessie-backports,n=jessie-backports,l=Debian Backports,c=non-free
 100 http://http.us.debian.org/debian/ jessie-backports/contrib amd64 Packages
     release o=Debian Backports,a=jessie-backports,n=jessie-backports,l=Debian Backports,c=contrib
 100 http://http.us.debian.org/debian/ jessie-backports/main amd64 Packages
     release o=Debian Backports,a=jessie-backports,n=jessie-backports,l=Debian Backports,c=main

The folder

ls -al /etc/apt/preferences.d/
insgesamt 8
drwxr-xr-x 2 root root 4096 Jan 25  2011 .
drwxr-xr-x 6 root root 4096 Dez 11 11:53 ..

is empty.
You only need one entry with the appropriate archive name: Package: * Pin: release a=jessie-backports Pin-Priority: -1 Note that backports are pinned to 100 by default so they are not installation candidates unless you specify -t jessie-backports. I don't know how that plays with packages from other sources though, especially if they have higher versions than the stable packages...
Where is the pin-priority for debian-backports defined?
Using jq, is it possible to update the property value of an object that contains a specific value in some other property? In the example below I'd like to set the value of the "value" property of all objects that have "keyname" = "foo". The example .json file looks like this:

"root" : {
  "instances": [
    {
      "name": "1",
      "configs": [
        {
          "keyname": "foo",
          "value": "" // <- update/set this
        },
        {
          "keyname": "barrr",
          "value": "barrrr"
        }
      ]
    },
    {
      "name": "2",
      "configs": [
        {
          "keyname": "foo",
          "value": "" // <- update/set this
        },
        {
          "keyname": "buzzz",
          "value": "buzzz"
        }
      ]
    }
  ]
}

I tried this, but in vain; I get an error about an array not being a string:

jq '(.root.instances.configs[] | select(.keyname==foo)).value = foo'
Assuming that your JSON document is well formed — which the example that you show is not, as it contains multiple issues:

$ cat file
{
  "root": {
    "instances": [
      {
        "name": "1",
        "configs": [
          {
            "keyname": "foo",
            "value": ""
          },
          {
            "keyname": "barrr",
            "value": "barrrr"
          }
        ]
      },
      {
        "name": "2",
        "configs": [
          {
            "keyname": "foo",
            "value": ""
          },
          {
            "keyname": "buzzz",
            "value": "buzzz"
          }
        ]
      }
    ]
  }
}

$ jq '( .root.instances[].configs[] | select(.keyname == "foo") ).value |= "foo"' file
{
  "root": {
    "instances": [
      {
        "name": "1",
        "configs": [
          {
            "keyname": "foo",
            "value": "foo"
          },
          {
            "keyname": "barrr",
            "value": "barrrr"
          }
        ]
      },
      {
        "name": "2",
        "configs": [
          {
            "keyname": "foo",
            "value": "foo"
          },
          {
            "keyname": "buzzz",
            "value": "buzzz"
          }
        ]
      }
    ]
  }
}

This jq expression updates the value of the .value key to the string foo. The key that is updated is selected from one of the entries in .root.instances[].configs[]. Note that .root.instances is an array and that each .configs entry in each of its elements is also an array. The select() statement tests the .keyname key against the string foo.

Making the query key and the new value into variables is done as follows:

jq --arg querykey 'foo' \
   --arg newval 'The train said "choo choo"' \
   '( .root.instances[].configs[] | select(.keyname == $querykey) ).value |= $newval' file

This creates two internal jq variables called $querykey and $newval. Their values will be properly encoded, so that e.g. $newval can contain double quotes, as shown above.
Use jq to update property of object that contains other property with specific value
I want to be sure that whatever string I pass into the line wpa-ssid "abc" in /etc/network/interfaces can't be used to break out of the configuration. All I can find in the manual is that \ can be used at the end of a line to continue on the next line. But what about \" in the middle of a line? My worry is an SSID something like:

A" up rm -rf /\

Is there any general encoding that can be used to put arbitrary characters into the SSID field?
In Debian's /etc/network/interfaces (or any other distribution using Debian's ifupdown utility), a backslash-newline sequence is removed, and backslash is not special anywhere else. A double quote character is not special either. The character # starts a comment if it's the first non-whitespace character on a (non-continuation) line. Null bytes are treated as newline characters (I think — the parser uses C strings and has no special handling for null bytes, so they might cause additional mischief).

Configuration lines take the form of an option name followed by a value, separated by whitespace. Leading and trailing whitespace is ignored. Some built-in options further parse the line into words; the value of options to iface always runs to the end of the line. For example, the line wpa-ssid "a  b"  "cd" sets the option wpa-ssid to the 12-character string "a  b"  "cd" (internal whitespace is preserved). Since WPA Supplicant's ifupdown script strips double quotes at the beginning and at the end of the wpa-ssid configuration string, the line above is equivalent to wpa-ssid a  b"  "cd. This way, you can have leading and trailing whitespace in the SSID.

I can't find a quoting issue in the WPA Supplicant ifupdown scripts, so it looks like anything that ifupdown will produce is safe. Thus you can allow any string as an SSID to be injected into /etc/network/interfaces, provided that it does not contain any newline or null byte. Add double quotes around the string (if you don't, SSIDs with leading or trailing whitespace, or that end with \, or that begin or end with ", will be mangled).
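If a script of yours writes the SSID into /etc/network/interfaces, a small guard like this sketch (the function name is made up) enforces exactly the constraints above: reject newlines (shell variables cannot hold null bytes anyway) and add the surrounding double quotes:

```shell
# emit_wpa_ssid <ssid>: print a wpa-ssid line safe for /etc/network/interfaces
emit_wpa_ssid() {
  ssid=$1
  nl='
'
  case $ssid in
    *"$nl"*)
      echo 'refusing SSID containing a newline' >&2
      return 1 ;;
  esac
  # surrounding quotes keep leading/trailing whitespace from being stripped
  printf '    wpa-ssid "%s"\n' "$ssid"
}

emit_wpa_ssid 'A" up rm -rf /\'
```

Even the hostile example SSID from the question comes out as a single inert configuration line.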
escape characters in /etc/network/interfaces
In Firefox we have two options in the Firefox -> Preferences -> Fonts and colors -> Colors menu: Use system colors and Sites can use other colors. I would like to keep the first one checked (that part is fine) and change the second one in a quick way. A quick way could be pressing a keyboard shortcut, running a terminal command, or changing the contents of a config file (because then I can write a shell script and bind it to a keyboard command). My motivation is that I would like to always use my system colors, but if a web page has strange visuals, I'd like to change back to the original quickly. Any ideas?
I found a solution. I asked on the Mozilla forum and got an answer. The solution is: install an extension called PrefBar. With this extension we can put a checkbox in Firefox that toggles the property browser.display.use_document_colors. We can set a shortcut too (for example, F1). The extension can toggle several other options as well.
How to change a Firefox option quickly (via shortcuts, command line, ...)?
When modifying config files from the command line, I often want to find the setting in the config file and modify that line if that setting exists. If that setting doesn't exist, I want to add it to the end of the file. I end up doing something like:

if [ `grep -c '^setting=' example.conf` == 0 ]
then
    echo "setting=value" >> example.conf
else
    sed -i 's/^setting=.*/setting=value/g' example.conf
fi

Which seems like an awful lot of code for something so simple. This doesn't even do basic things like check that the config file already ends in a newline before appending to it. Surely there is a utility that does this, or a simpler command that I can use.
Here is a confset Perl script that I just wrote that I'm going to put in my path:

- Can work with multiple files in a single invocation
- Can modify multiple config values in each file in a single invocation
- Separator can be specified (with --separator)
- Option to be liberal about white space around names

Usage: confset <options> name1=value1 name2=value2 file1.conf file2.conf
Options:
  -s --separator <value>       What comes between names and values (default =)
  -w --whitespace <true|false> Allow space around names and values (default false)

So to handle the case I outlined in the question, I would call it with:

confset example.conf setting=value

Here is the script:

#!/usr/bin/perl
use strict;

my $scriptname = $0;
my $separator = '=';
my $whitespace = 0;
my @files = ();
my @namevalues = ();

# read in the command line arguments
for (my $i=0; $i<scalar(@ARGV); $i++){
    my $arg = @ARGV[$i];
    if ($arg =~ /^-/){
        &printHelp(*STDOUT, 0) if ($arg eq "-h" or $arg eq "--help");
        &printHelp(*STDERR, 1) if ($i+1 >= scalar(@ARGV));
        my $opt = @ARGV[++$i];
        if ($arg eq "-s" or $arg eq "--separator"){
            $separator = $opt;
        } elsif ($arg eq "-w" or $arg eq "--whitespace"){
            $whitespace = 0;
            $whitespace = 1 if ($opt =~ /1|t|y/);
        } else {
            &printHelp(*STDERR, 1);
        }
    } elsif ( -e $arg){
        push(@files, $arg);
    } else {
        push(@namevalues, $arg);
    }
}

# check the validity of the command line arguments
if (scalar(@files) == 0){
    print STDERR "ERROR: No files specified\n";
    printHelp(*STDERR, 1);
}
if (scalar(@namevalues) == 0){
    print STDERR "ERROR: No name value pairs specified\n";
    printHelp(*STDERR, 1);
}

my $names = {};
foreach my $namevalue (@namevalues){
    my ($name, $value) = &splitnv($namevalue);
    if ($name){
        $names->{$name} = {"value",$value,"replaced",0};
    } else {
        print STDERR "ERROR: Argument not a file and contains no separator: $namevalue\n";
        printHelp(*STDERR, 1);
    }
}

# Do the modification to each conf file
foreach my $file (@files){
    # read in the entire file into memory
    my $contents = "";
    open FILE, $file or die $!;
    while (my $line = <FILE>){
        chomp $line;
        my ($name, $value) = &splitnv($line);
        # set matching lines to their new value
        if ($names->{$name}){
            $line = $name . $separator . $names->{$name}->{value};
            $names->{$name}->{replaced} = 1;
        }
        $contents .= "$line\n";
    }
    close FILE or die $!;

    # add any new lines that didn't already get set
    foreach my $name (keys %$names){
        if (!$names->{$name}->{replaced}){
            $contents .= $name . $separator . $names->{$name}->{value}."\n";
        }
        # reset for next file
        $names->{$name}->{replaced} = 0;
    }

    # overwrite the file
    open FILE, ">$file" or die $!;
    print FILE $contents;
    close FILE or die $!;
}

# Print help message to the specified stream and exit with the specified value
sub printHelp(){
    my ($stream, $exit) = @_;
    print $stream "Usage: $scriptname <options> name1=value1 name2=value2 file1.conf file2.conf\n";
    print $stream "Options:\n";
    print $stream "  -s --separator <value>       What comes between names and values (default =)\n";
    print $stream "  -w --whitespace <true|false> Allow space around names and values (default false)\n";
    exit $exit;
}

# Split a string into a name and value using the global separator
sub splitnv(){
    my ($str) = @_;
    my $ind = index($str, $separator);
    return (0,0) if ($ind < 0);
    my $name = substr($str, 0, $ind);
    my $value = substr($str, $ind+length($separator));
    $name =~ s/(^[ \t])*|([ \t])*$//g if ($whitespace);
    return ($name, $value);
}
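For simpler cases, the same update-or-append logic fits in a short shell function. This is only a sketch: it assumes GNU sed's -i option and takes no care with regex metacharacters in the name or separator characters other than =:

```shell
# confset_sh <name> <value> <file>: set name=value, appending if absent
confset_sh() {
  name=$1 value=$2 file=$3
  if grep -q "^${name}=" "$file"; then
    sed -i "s|^${name}=.*|${name}=${value}|" "$file"
  else
    printf '%s=%s\n' "$name" "$value" >> "$file"
  fi
}

# e.g.: confset_sh setting value example.conf
```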
Change a value in a config file, or add the setting if it doesn't exist?
Are the files in /etc/sudoers.d read in a particular order? If so, what is the convention for that ordering?
From man sudoers, the exact position found with this command:

$ LESS='+/sudo will suspend processing' man sudoers

    Files are parsed in sorted lexical order. That is, /etc/sudoers.d/01_first will be parsed before /etc/sudoers.d/10_second. Be aware that because the sorting is lexical, not numeric, /etc/sudoers.d/1_whoops would be loaded after /etc/sudoers.d/10_second. A consistent number of leading zeroes in the file names can avoid such problems.

That's under the title "Including other files from within sudoers":

$ LESS='+/Including other files from within sudoers' man sudoers

Lexical order is also called "dictionary order", as given by the values defined by the environment variable LC_COLLATE when the locale is C (numbers, then uppercase, then lowercase letters). That's the same order as given by LC_COLLATE=C ls /etc/sudoers.d/.

The list of files included, in the specific order in which they are loaded, can be exposed with:

$ visudo -c
/etc/sudoers: parsed OK
/etc/sudoers.d/README: parsed OK
/etc/sudoers.d/me: parsed OK
/etc/dirtest/10-defaults: parsed OK
/etc/dirtest/1one: parsed OK
/etc/dirtest/2one: parsed OK
/etc/dirtest/30-alias: parsed OK
/etc/dirtest/50-users: parsed OK
/etc/dirtest/Aone: parsed OK
/etc/dirtest/Bone: parsed OK
/etc/dirtest/aone: parsed OK
/etc/dirtest/bone: parsed OK
/etc/dirtest/zone: parsed OK
/etc/dirtest/~one: parsed OK
/etc/dirtest/éone: parsed OK
/etc/dirtest/ÿone: parsed OK

Note that the order is not UNICODE but C.
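The ordering is easy to preview without touching /etc/sudoers.d at all: sorting candidate file names under the C locale shows exactly the parse order. A small sketch:

```shell
# Preview the lexical (C-locale) order sudo uses for /etc/sudoers.d files
printf '%s\n' 10_second 1_whoops 01_first README | LC_COLLATE=C sort
# 01_first
# 10_second
# 1_whoops
# README
```

Note how 1_whoops lands after 10_second, as the man page warns.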
Are the files in /etc/sudoers.d read in a particular order?
I use a drawing program called Inkscape, which has both a GUI and a command line interface. When used on the command line, it has a large number of options that can only be controlled through a user-specific config file, which is hardcoded to be: $HOME/.config/inkscape/preferences.xml This config file always contains the options that were most recently used in the GUI, which may be the wrong ones when I'm scripting. To work around this, I have my script save a copy of the config file, replace it with a standard config file, run the program, and then copy the saved config file back. This works OK but is not really clean. For example, it won't work properly if two instances of the script are being run concurrently. On Unix, is there a cleaner way to carry out this task of faking out a program so it takes its config file from someplace that I want, rather than from the pathname hardcoded in the program? Maybe something involving links, or something like BSD jails?
Inkscape has a feature for this as of 0.47: $ INKSCAPE_PORTABLE_PROFILE_DIR=/some/other/path inkscape --args Put your script's custom preferences.xml file in /some/other/path. It should be a dedicated directory, because Inkscape will populate it with all the other files it normally puts in ~/.config/Inkscape when you run it like this.
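A concurrency-safe wrapper for a script could then look like this sketch — each invocation gets a private throwaway profile directory, so two runs never fight over the same preferences.xml (the function name and file names here are made up):

```shell
# with_profile <prefs.xml> <command...>: run a command with a private,
# temporary INKSCAPE_PORTABLE_PROFILE_DIR seeded from prefs.xml
with_profile() {
  prefs=$1; shift
  dir=$(mktemp -d) || return 1
  cp "$prefs" "$dir/preferences.xml"
  INKSCAPE_PORTABLE_PROFILE_DIR=$dir "$@"
  status=$?
  rm -rf "$dir"
  return $status
}

# e.g.: with_profile batch-prefs.xml inkscape --export-png=out.png in.svg
```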
Clean way to temporarily replace a config file?
Is it possible to set up a keybinding in Openbox for switching between open windows within an application? Just like you can in gnome 3 with alt + [key above Tab] .
I have implemented this function by using wmctrl. The relevant part in Openbox's rc.xml:

<keybind key="A-space">
  <action name="execute">
    <execute>wmctrl-switch-by-application</execute>
  </action>
</keybind>

Below is the code in wmctrl-switch-by-application:

# taken from https://unix.stackexchange.com/questions/26546/can-you-switch-between-windows-within-an-application-in-openbox
# taken from: http://www.st0ne.at/?q=node/58

# get id of the focused window
active_win_id=$(xprop -root | grep '^_NET_ACTIVE_W' | awk -F'# 0x' '{print $2}')

# get window manager class of current window
win_class=$(wmctrl -x -l | grep $active_win_id | awk '{print $2 " " $3}' )

# get list of all windows matching with the class above
win_list=$(wmctrl -x -l | grep -- "$win_class" | awk '{print $1}' )

# get next window to focus on
switch_to=$(echo $win_list | sed s/.*$active_win_id// | awk '{print $1}')

# if the current window is the last in the list ... take the first one
if [ -z "$switch_to" ];then
    switch_to=$(echo $win_list | awk '{print $1}')
fi

# switch to window
wmctrl -i -a $switch_to
Can you switch between windows within an application in Openbox?
For a few days now, my web/mail server (CentOS 6.4) has been sending out spam mails by the bunch, and only stopping the postfix service puts an end to it. SMTP is set up to only accept connections over SSL and with username/password. And I already changed the password of the (suspected) infected email account. Email was set up via iRedMail. Any help on identifying and stopping this is more than welcome!

ADDED: Some log excerpts:

Mar 23 05:01:52 MyServer postfix/smtp[9494]: 4E81026038: to=<[email protected]>, relay=mail.suddenlinkmail.com[208.180.40.132]:25, delay=3, delays=0.07/0/2.4/0.5, dsn=2.0.0, status=sent (250 Message received: [email protected])
Mar 23 05:02:01 MyServer postfix/smtp[9577]: 209BA26067: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=14, delays=12/0/0/2, dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as B654226078)
Mar 23 05:02:01 MyServer postfix/smtp[9495]: 8278726077: to=<[email protected]>, relay=mx-biz.mail.am0.yahoodns.net[98.139.171.245]:25, delay=0.88, delays=0.25/0/0.47/0.14, dsn=4.7.1, status=deferred (host mx-biz.mail.am0.yahoodns.net[98.139.171.245] said: 421 4.7.1 [TS03] All messages from [IPADDRESS] will be permanently deferred; Retrying will NOT succeed. See http://postmaster.yahoo.com/421-ts03.html (in reply to MAIL FROM command))

A mail header of an undeliverable report:

Return-Path: <MAILER-DAEMON>
Delivered-To: [email protected]
Received: from localhost (icantinternet.org [127.0.0.1])
    by icantinternet.org (Postfix) with ESMTP id 4669E25D9D
    for <[email protected]>; Mon, 24 Mar 2014 14:20:15 +0100 (CET)
X-Virus-Scanned: amavisd-new at icantinternet.org
X-Spam-Flag: YES
X-Spam-Score: 9.501
X-Spam-Level: *********
X-Spam-Status: Yes, score=9.501 tagged_above=2 required=6.2
    tests=[BAYES_99=3.5, BAYES_999=0.2, RAZOR2_CF_RANGE_51_100=0.5, RAZOR2_CF_RANGE_E8_51_100=1.886, RAZOR2_CHECK=0.922, RDNS_NONE=0.793, URIBL_BLACK=1.7] autolearn=no
Received: from icantinternet.org ([127.0.0.1])
    by localhost (icantinternet.org [127.0.0.1]) (amavisd-new, port 10024)
    with ESMTP id FOrkYnmugXGk for <[email protected]>; Mon, 24 Mar 2014 14:20:13 +0100 (CET)
Received: from spamfilter2.webreus.nl (unknown [46.235.46.231])
    by icantinternet.org (Postfix) with ESMTP id D15BA25D14
    for <[email protected]>; Mon, 24 Mar 2014 14:20:12 +0100 (CET)
Received: from spamfilter2.webreus.nl (localhost [127.0.0.1])
    by spamfilter2.webreus.nl (Postfix) with ESMTP id 7FB2EE78EFF
    for <[email protected]>; Mon, 24 Mar 2014 14:20:13 +0100 (CET)
X-Virus-Scanned: by SpamTitan at webreus.nl
Received: from mx-in-2.webreus.nl (mx-in-2.webreus.nl [46.235.44.240])
    by spamfilter2.webreus.nl (Postfix) with ESMTP id 3D793E78E5A
    for <[email protected]>; Mon, 24 Mar 2014 14:20:09 +0100 (CET)
Received-SPF: None (mx-in-2.webreus.nl: no sender authenticity information available from domain of [email protected])
    identity=pra; client-ip=62.146.106.25; receiver=mx-in-2.webreus.nl;
    envelope-from=""; x-sender="[email protected]"; x-conformance=sidf_compatible
Received-SPF: None (mx-in-2.webreus.nl: no sender authenticity information available from domain of [email protected])
    identity=mailfrom; client-ip=62.146.106.25; receiver=mx-in-2.webreus.nl;
    envelope-from=""; x-sender="[email protected]"; x-conformance=sidf_compatible
Received-SPF: None (mx-in-2.webreus.nl: no sender authenticity information available from domain of [email protected])
    identity=helo; client-ip=62.146.106.25; receiver=mx-in-2.webreus.nl;
    envelope-from=""; x-sender="[email protected]"; x-conformance=sidf_compatible
Received: from athosian.udag.de ([62.146.106.25])
    by mx-in-2.webreus.nl with ESMTP; 24 Mar 2014 14:20:03 +0100
Received: by athosian.udag.de (Postfix) id 3B16E54807C; Mon, 24 Mar 2014 14:19:59 +0100 (CET)
Date: Mon, 24 Mar 2014 14:19:59 +0100 (CET)
From: [email protected] (Mail Delivery System)
Subject: ***Spam*** Undelivered Mail Returned to Sender
To: [email protected]
Auto-Submitted: auto-replied
MIME-Version: 1.0
Content-Type: multipart/report; report-type=delivery-status; boundary="36D9C5488E5.1395667199/athosian.udag.de"
Content-Transfer-Encoding: 7bit
Message-Id: <[email protected]>
Pravin offers some good general points, but doesn't really elaborate on any of them and doesn't address your likely actual problems.

First, you need to find out how postfix is receiving those messages and why it's choosing to relay them (the two questions are very likely related). The best way to do it is by looking at the message ID of any one of the messages and then grepping the mail.log file for all log entries regarding it. This will tell you at the very least where the message came from and what postfix did with it right up until it left its care and went on into the world. Here's a (redacted) sample excerpt:

Mar 26 00:51:13 vigil postfix/smtpd[9120]: 3B7085E038D: client=foo.bar.com[1.2.3.4]
Mar 26 00:51:13 vigil postfix/cleanup[9159]: 3B7085E038D: message-id=<------------@someserver>
Mar 26 00:51:13 vigil postfix/qmgr[5366]: 3B7085E038D: from=<[email protected]>, size=456346, nrcpt=2 (queue active)
Mar 26 00:51:13 vigil postfix/lmtp[9160]: 3B7085E038D: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=0.3, delays=0.11/0/0/0.19, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=04611-19, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 6EA115E038F)
Mar 26 00:51:13 vigil postfix/qmgr[5366]: 3B7085E038D: removed

This tells me the following things:

- The message came in from foo.bar.com, a server with IP address 1.2.3.4 calling itself foo.bar.com
- (Implied by the lack of warnings) According to forward and reverse DNS, that address does indeed match that name.
- The message was meant for a user named [email protected], which the server decided was an acceptable destination address.
- As per its configuration, the mail server relayed the message through 127.0.0.1:10024 (our spam/virus filter) for further processing.
- The filter said "Okay, I'll queue this as message with ID 6EA115E038F and handle it from here."
- Having received this confirmation, the main server declared it was done and removed the original message from the queue.

Now, once you know how the message got into the system you can start finding out where the problem lies.

If it came from elsewhere and was relayed to somewhere else entirely, postfix is currently functioning as an open relay. This is very, very bad and you should tighten up your smtpd_recipient_restrictions and smtpd_client_restrictions settings in /etc/postfix/main.cf.

If it came in from localhost, it's very likely that one webhosting user or another has been compromised with a php script that sends out spam on demand. Use the find command to look for .php files that were recently added or altered, then take a good look at any suspicious names.

Anything more specific will depend too much on the outcome of the above investigation, so it's pointless to attempt to elaborate. I will leave you with the more general admonishment to at the very least install and configure postgrey at earliest opportunity.
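For the compromised-webhosting case, the find step mentioned above can be wrapped like this (a sketch; adjust the webroot path and the day window to your setup):

```shell
# recent_php <webroot> [days]: list PHP files modified within the last N days
recent_php() {
  find "$1" -type f -name '*.php' -mtime "-${2:-7}"
}

# e.g.: recent_php /var/www 7
```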
My Postfix installation is sending out spam; how to stop it?
I have been trying to get this to work for hours now! I would like to set up a simple web server. My web files shall be in /var/www. I also want to have phpMyAdmin. I created a directory /var/phpmyadmin. Now I want to access the normal web files in the standard way. For instance: the file /var/www/test.php should be accessible as http://localhost/test.php. The phpMyAdmin part should be accessed like this: http://localhost/phpmyadmin. With the config below I get a 404, also with this URL: http://localhost/phpmyadmin/index.php

For this I created this file in the sites-available folder of nginx:

server {
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default_server ipv6only=on; ## listen for ipv6

    root /var/www;
    index index.html index.htm index.php;
    try_files $uri $uri/ $uri/index.html $uri/index.htm $uri/index.php;

    # This didn't work
    location /phpmyadmin/ {
        alias /var/phpmyadmin;
    }

    # And this did neither. (Never used both at the same time!)
    location /phpmyadmin/ {
        root /var;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        # With php5-cgi alone:
        #fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}

What am I doing wrong?

Edit: Interesting to note is that this works (the root directory, http://localhost, works):

root /var/www/htdocs;
index index.php index.html index.htm;

location /phpmyadmin/ {
    root /var/www/phpmyadmin;
}

And this doesn't:

index index.php index.html index.htm;

location / {
    root /var/www/htdocs;
}

location /phpmyadmin/ {
    root /var/www/phpmyadmin;
}

phpMyAdmin still doesn't work!
Your goal is to completely separate your "regular" web files from your phpMyAdmin installation. It should be stressed that each server configuration in Nginx can (and should) have only one webroot. That being said, these are your options:

1. Install phpMyAdmin in a directory under your webroot, which in your case is /var/www/phpmyadmin. It can be accessed through http://localhost/phpmyadmin. This is the simplest configuration and I'm including it here for the sake of completeness (and people coming here from search engines).

2. Install phpMyAdmin in a directory outside your webroot and then create a symlink named phpmyadmin in your webroot pointing to that directory. In that case, you need to make sure that you have specified disable_symlinks off in your server configuration.

3. You can achieve separation on the same vhost by creating 2 server configurations listening on different ports, having different webroots and communicating through the proxy_pass directive. A basic outline of such a configuration is the following:

server {
    listen 80;
    server_name localhost;
    root /var/www/htdocs;
    index index.php index.html index.htm;

    location /phpmyadmin {
        proxy_pass http://127.0.0.1:8080/;
    }

    # ...Add more location directives, php support, etc...
}

server {
    listen 8080;
    server_name localhost;
    root /var/www/phpmyadmin;
    index index.php index.html index.htm;

    # ...Specify additional location directives, php support, etc...
}

In this case, all requests to phpMyAdmin will be transparently passed to the server instance listening on port 8080 through the /phpmyadmin location in the server instance listening on port 80.

4. Finally, you can achieve separation on different vhosts by creating 2 server configurations listening on the same port, but having different server_name directives and different root locations.
For example, a basic outline like this:

server {
    listen 80;
    server_name dev.local;
    root /var/www/htdocs;
    index index.php index.html index.htm;

    # ...Add more location directives, php support, etc...
}

server {
    listen 80;
    server_name phpmyadmin.local;
    root /var/www/phpmyadmin;
    index index.php index.html index.htm;

    # ...Specify additional location directives, php support, etc...
}

Then, you would go ahead and add the following entries to your /etc/hosts:

127.0.0.1 dev.local
127.0.0.1 phpmyadmin.local

and then you can access your files through http://dev.local and your phpMyAdmin instance through http://phpmyadmin.local. Obviously, from your local workstation.
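The symlink option deserves a short outline of its own. This is an untested sketch along the lines of the question's original config; the paths mirror the question's layout:

```nginx
# Beforehand, on the shell: ln -s /var/phpmyadmin /var/www/phpmyadmin
server {
    listen 80;
    server_name localhost;
    root /var/www;
    index index.php index.html index.htm;

    disable_symlinks off;   # allow nginx to follow the phpmyadmin symlink

    # ...php location block as in the question...
}
```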
nginx server config with multiple locations does not work
I've encountered a relatively common problem with the ASIX AX88179 USB 3.0 Gigabit Ethernet adapter, where it was not working at all, or was working sporadically, and dmesg was showing errors like

[23552.344134] ax88179_178a 2-1:2.1 eth1: Failed to read reg index 0x0000: -32

Searching online, I've found reports of this or similar problems without satisfactory solutions or explanations. After some debugging, it turned out that the problem is solved if the cdc_mbim module is loaded before ax88179_178a. The following solves the problem until the next reboot:

# rmmod ax88179_178a
# modprobe cdc_mbim
# modprobe ax88179_178a   # optional

I've checked that cdc_mbim is not declared a dependency of ax88179_178a, neither directly nor indirectly. How can I make ax88179_178a depend on cdc_mbim, so that cdc_mbim is always loaded automatically before ax88179_178a?

Update. My question seems to be a duplicate of Create Linux module dependency for autoloading module.
A similar, but slightly cleaner strategy also involving a file in modprobe.d/ is to use the softdep feature to tell modprobe to load cdc_mbim before ax88179_178a. In /etc/modprobe.d/ax88179.conf: softdep ax88179_178a pre: cdc_mbim
How to fix an apparently missing kernel module dependency declaration?
This is my first question and I'm still pretty new, so please forgive me if I've missed or botched something, or if this is an obvious solution. I'm using CentOS 5.8 (yes, I know it's ancient) and trying to test some squid configurations.

From the Squid wiki:

    NP: Squid must be built with the --enable-http-violations configure option before building.

I've done some searching to try to determine which configuration options were specified at package build time, but short of reading through all of the CentOS documentation I can't seem to locate them. I know this question may be similar to this one, but in this case the specific squid package may have been custom built, and I'm not sure I have access to the source without jumping through some hoops. Is there a way I can list the configuration flags with yum or rpm without extracting the spec file?
The question is about using RPM metadata to retrieve information about package-specific compile-time options. The information you're looking for isn't present in the RPM metadata. Either you need to have more than just an RPM (ideally a package build log or some of the files from the build directory), or you need to use a package-specific way. I don't know the location of build information for CentOS; for Fedora it would be: http://koji.fedoraproject.org/

For squid, the package-specific way is fairly easy:

# squid -v
Squid Cache: Version 3.4.5
configure options: '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' '--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' '--enable-eui' '--enable-follow-x-forwarded-for' '--enable-auth' '--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam' '--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory' '--enable-auth-negotiate=kerberos' '--enable-external-acl-helpers=LDAP_group,time_quota,session,unix_group,wbinfo_group' '--enable-storeid-rewrite-helpers=file' '--enable-cache-digests' '--enable-cachemgr-hostname=localhost' '--enable-delay-pools' '--enable-epoll' '--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' '--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' '--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio' '--with-default-user=squid' '--with-dl'
'--with-openssl' '--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fpie' 'LDFLAGS=-Wl,-z,relro -pie -Wl,-z,relro -Wl,-z,now' 'CXXFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fpie' 'PKG_CONFIG_PATH=%{_PKG_CONFIG_PATH}:/usr/lib64/pkgconfig:/usr/share/pkgconfig' (the above output has been made using a Fedora rawhide version of squid) For other packages, there may or may not be a command to show build time configuration. For downloading, extracting and examining the SRPM to guess compiled in features from the .spec file, see the end of the other answer.
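If you do end up pulling the SRPM apart, the ./configure flags usually sit in the %build section of the .spec file, so a plain grep gets you most of the way. A rough sketch, shown on a made-up spec fragment (on a real SRPM you would run the grep against the extracted squid.spec):

```shell
# Hypothetical spec fragment standing in for an extracted squid.spec
cat > squid.spec.sample <<'EOF'
%build
%configure \
   --enable-http-violations \
   --enable-ssl \
   --with-default-user=squid
make %{?_smp_mflags}
EOF

# Pull out just the feature flags
grep -oE -- '--(enable|disable|with|without)[[:alnum:]=_,-]*' squid.spec.sample
```

This prints one flag per line (--enable-http-violations, --enable-ssl, --with-default-user=squid), which is often enough to answer "was feature X compiled in" without reading the whole spec.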
How do I determine which configuration options an rpm package is built with?
1,347,520,499,000
I'm trying to set up Debian Squeeze (on a Dreamplug) as a router, but I can't seem to get the pieces to fit together.

ETH0: upstream/internet - DHCP client
ETH1: downstream/LAN - 192.168.0.1, DHCP server

sudo vim /etc/network/interfaces

    auto lo br0
    iface lo inet loopback

    auto eth0
    iface eth0 inet dhcp

    auto eth1
    iface eth1 inet static
        address 192.168.0.1
        network 192.168.0.0
        netmask 255.255.255.0
        broadcast 192.168.0.255

    iface br0 inet dhcp
        bridge_ports eth0 eth1

sudo vim /etc/dhcp/dhcpd.conf

    option domain-name "MyPlug.MyServer.com";
    option domain-name-servers 8.8.8.8, 192.168.0.1;
    default-lease-time 600000000;
    max-lease-time 720000000;

    subnet 192.168.0.0 netmask 255.255.255.0 {
        range 192.168.0.100 192.168.0.200;
        option routers 192.168.0.1;
        option broadcast-address 192.168.0.255;
    }

sudo vim /etc/default/isc-dhcp-server

    INTERFACES="eth1"

service networking restart
service isc-dhcp-server restart

My Windows 7 machine picks up an IP... AFTER I force a release/renew. I was able to get it to connect to the server via PuTTY once. Is anything noticeably wrong with my settings, or is there anything else I can look for?
Don't bridge your internal and external interfaces. Your box is a router, not a switch.

To make your machine a router you have to tell it to "forward" packets between interfaces. I do so by echo 1 > /proc/sys/net/ipv4/ip_forward. IIRC the way(TM) to do it is adding a line

    net.ipv4.ip_forward=1

to /etc/sysctl.conf and then executing /etc/init.d/procps restart. The proc file system, usually mounted at /proc, is a representation of kernel information and configuration as files that can be read and written. By writing 0 or 1 to /proc/sys/net/ipv4/ip_forward we are disabling or enabling the kernel function to forward IP packets between interfaces. We want the kernel to forward packets!

Now your machine is a router, but you also need masquerading. To do that you need:

    iptables -t nat -A POSTROUTING -i eth1 -o eth0 -j MASQUERADE

(see http://tldp.org/HOWTO/IP-Masquerade-HOWTO/ if you'd like to know more). As long as we're using IPv4, you will only get one IP address from your ISP, and all your clients share this one address when interacting with systems on the Internet. Masquerading takes care of everything needed to handle this sharing of an IP address. The point is we need to tell iptables when to apply masquerading. If iptables won't accept -i and -o anymore, a suitable replacement rule is

    iptables -t nat -A POSTROUTING -s 192.168.0.0/24 ! -d 192.168.0.0/24 -j MASQUERADE

You may need to replace the subnet definitions 192.168.0.0/24 (both!) with the subnet your clients live in. The rule says "do masquerading for all packets that originate from the client subnet and are addressed to hosts outside the client subnet".

I don't know the Dreamplug, but you should have some file /etc/firewall* or /etc/iptables* where you can add this statement, so that it is executed on every reboot. Check your documentation for "firewall rules" and where you have to put them.

As for your DHCP configuration, your lease times seem ridiculously high. Drop three to five zeros. There is also a chance that some clients out there can't or don't handle such big numbers. You should also reverse the ordering of domain-name-servers: clients will ask the first server in the list first. If your router acts as a name server as well, it is most likely to remember previous queries for some time, which means that if your clients request the same address a second time, the answer comes much more quickly compared to asking a Google name server.
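The one-off commands above, collected in one place, can be sketched as a tiny script. The RUN=echo dry-run wrapper is my own addition for illustration; clear it and run as root to actually apply the settings:

```shell
#!/bin/sh
# Dry-run by default: RUN=echo just prints the commands.
# Set RUN='' (and run as root) to apply them for real.
RUN=${RUN-echo}

setup_router() {
    # enable packet forwarding (persist it via net.ipv4.ip_forward=1 in /etc/sysctl.conf)
    $RUN sysctl -w net.ipv4.ip_forward=1
    # masquerade everything leaving the client subnet
    $RUN iptables -t nat -A POSTROUTING -s 192.168.0.0/24 ! -d 192.168.0.0/24 -j MASQUERADE
}

setup_router
```

Replace 192.168.0.0/24 with your client subnet, and remember the iptables rule still needs to be saved somewhere that runs at boot, as described above.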
Debian Squeeze as a router
1,347,520,499,000
Does lubuntu have an anti-blue light feature?
Yes, at least you can use Redshift on it — install the redshift-gtk package. Some desktop environments such as GNOME have similar features built-in, I’m not sure Lubuntu’s does.
Does lubuntu have an anti-blue light feature?
1,347,520,499,000
I have changed some stuff within the sshd_config file and want to reset the file to its default settings. How would I go about doing this?
The default ssh config file is at /private/etc/ssh/sshd_config (this is the macOS path); you can copy it to your .ssh directory with the following command:

    sudo cp /private/etc/ssh/sshd_config ~/.ssh/config

Then restart sshd:

    sudo launchctl stop com.openssh.sshd
    sudo launchctl start com.openssh.sshd
How to reset the sshd_config file to its default settings
1,347,520,499,000
I am running Fedora 25. I am unable to locate the file rsyslog.conf under /etc/.

The output of the command ls /etc/*.conf is:

    asound.conf                 kdump.conf       radvd.conf
    brltty.conf                 krb5.conf        request-key.conf
    chrony.conf                 ld.so.conf       resolv.conf
    dleyna-server-service.conf  libaudit.conf    rygel.conf
    dnsmasq.conf                libuser.conf     sestatus.conf
    dracut.conf                 locale.conf      sos.conf
    e2fsck.conf                 logrotate.conf   sysctl.conf
    extlinux.conf               man_db.conf      tcsd.conf
    fprintd.conf                memtest86+.conf  Trolltech.conf
    fuse.conf                   mke2fs.conf      updatedb.conf
    fwupd.conf                  mtools.conf      usb_modeswitch.conf
    hba.conf                    nfs.conf         vconsole.conf
    host.conf                   nfsmount.conf    wvdial.conf
    idmapd.conf                 nsswitch.conf    xattr.conf
    jwhois.conf                 passwdqc.conf

The command find / -name rsyslog.conf gives the output:

    /usr/lib/dracut/modules.d/98syslog/rsyslog.conf

Why is it like that?
Rsyslog is not installed by default on Fedora Workstation (although it is in some other flavors of Fedora). Many use cases can be served by querying the systemd journal directly with journalctl or other tools. Or, you can install and configure rsyslog — sudo dnf install rsyslog, and then find the config file you were expecting.
rsyslog.conf file is not present under /etc/ in Fedora 25
1,347,520,499,000
I am confused why Apache is not responding on port 80...

    $ wget http://localhost:80
    --2014-05-06 15:32:44--  http://localhost/
    Resolving localhost (localhost)... 127.0.0.1
    Connecting to localhost (localhost)|127.0.0.1|:80... failed: Connection refused.

...but instead on port 8080...

    $ wget http://localhost:8080
    --2014-05-06 15:32:38--  http://localhost:8080/
    Resolving localhost (localhost)... 127.0.0.1
    Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 177 [text/html]
    Saving to: ‘index.html’

    100%[=================================================>] 177  --.-K/s  in 0s

    2014-05-06 15:32:38 (16,4 MB/s) - ‘index.html’ saved [177/177]

There is not much to see in the output of apache2ctl:

    $ apache2ctl -t -D DUMP_VHOSTS
    VirtualHost configuration:
    *:80     is a NameVirtualHost
        default server localhost (/etc/apache2/sites-enabled/000-default.conf:1)
        port 80 namevhost localhost (/etc/apache2/sites-enabled/000-default.conf:1)
        port 80 namevhost localhost (/etc/apache2/sites-enabled/000-default.conf:1)

However, netstat confirms the port:

    $ sudo netstat -anp | grep :8080
    tcp6  0  0 :::8080  :::*  LISTEN  5353/apache2

As asked by Joel, here is ports.conf:

    $ sudo cat /etc/apache2/ports.conf
    # If you just change the port or add more ports here, you will likely also
    # have to change the VirtualHost statement in
    # /etc/apache2/sites-enabled/000-default
    Listen 8080

    <IfModule ssl_module>
        Listen 443
    </IfModule>

    <IfModule mod_gnutls.c>
        Listen 443
    </IfModule>

    # vim: syntax=apache ts=4 sw=4 sts=4 sr noet
    # NameVirtualHost *:8800
    Listen 8800
What's the value of the Listen directive in the config file in /etc/apache2/ports.conf? Yours says 8080 and 8800, not 80, which is why you got those results.
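To get back onto port 80, change those Listen lines and restart Apache. A rough sketch, shown here on a scratch copy of the file (on the real box you would edit /etc/apache2/ports.conf itself and then run sudo service apache2 restart):

```shell
# Scratch copy standing in for /etc/apache2/ports.conf
printf 'Listen 8080\nListen 443\nListen 8800\n' > ports.conf.sample

# Swap 8080 for 80 and drop the stray 8800 listener
sed -i -e 's/^Listen 8080$/Listen 80/' -e '/^Listen 8800$/d' ports.conf.sample

cat ports.conf.sample
```

The VirtualHost already says *:80, so the vhost side needs no change once the Listen directives match.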
Why is Apache running on port 8080 instead on port 80?
1,347,520,499,000
TL;DR: when I was trying to compile a driver for a USB DAQ device, I reconfigured the kernel. The driver refused to compile under the default distro kernel, but everything works with my tweaked kernel. The driver consists of 2 kernel modules. I know which options I changed, but I want to know which particular configuration option enabled my driver. Is there any way to figure this out without trying (configuring and compiling the kernel with) every possible combination of the options?

Longer story: I have an Advantech USB-4702 DAQ device which comes with a driver for various distros, e.g. openSUSE 11.4. It has to be compiled from source, and it compiles fine on supported distributions (I tried openSUSE 11.4 32-bit with kernel 2.6.37.6-24-desktop). When I was trying to get it to work under SLES 11 SP3 (64-bit, kernel 3.0.76-0.11-default) I got compile errors. One of them was caused by this snippet in the source:

    #ifndef CONFIG_USB
    # error "This driver needs to have USB support."
    #endif

So I took a look at the configuration options of the running kernel (from /proc/config.gz) and found CONFIG_USB to be enabled (I guess I would not be able to use my USB keyboard and mouse if it was disabled). Then I started to play with the kernel configuration and enabled some more options (some as modules). I compiled the kernel, installed it, and rebooted. Then the driver compiled without any errors or warnings, and I am able to use the device now.

The question is: how can I find out which particular option "enabled" compilation of the driver? I know which options were changed, but I don't want to enable anything that is not necessary for the driver. And I don't want to go through configuring and compiling the kernel with every possible combination of the options.
After some further experimenting I can confirm the claim made in one of my comments: the CONFIG_USB option has to have the value y; m is not enough. Incidentally, the kernel in openSUSE 11.4 has it set to y by default and the kernel in SLES11SP3 has m. It's a pity that the error message does not state this clearly.

An easy way of setting it is via make menuconfig, then selecting Y for the option Support for host-side USB under Device Drivers -> USB Support.
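A quick way to see how the option was built on a given machine is to grep the running kernel's config. Demonstrated here on a scratch file in the same format as /proc/config.gz; on the real box you would run the same grep against /proc/config.gz itself (or use zgrep):

```shell
# Scratch stand-in for /proc/config.gz
printf 'CONFIG_USB=m\nCONFIG_USB_HID=y\n' | gzip > config.sample.gz

# y means built in, m means module; the driver above needs y
gzip -dc config.sample.gz | grep '^CONFIG_USB='
```

This prints CONFIG_USB=m for the sample, immediately exposing the y-vs-m distinction that the compile error hid.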
How do I know which kernel configuration option enabled my driver?
1,347,520,499,000
Many projects these days use more than one programming / scripting language, and in standard DRY tradition these should not have separate configuration files if they need the same information. After a small survey in /etc, it looks like a lot of incompatible syntaxes are used in Ubuntu:

    varname=value                  - /etc/adduser.conf
    varname: value                 - /etc/debconf.conf
    varname = value                - /etc/deluser.conf
    $varname value                 - /etc/insserv.conf
    varname value                  - /etc/login.defs
    set varname value              - /etc/lftp.conf
    [section] varname = value      - /etc/mke2fs.conf
    section label varname value, set varname value - /etc/smartd.conf

As far as I can see, none of these are "YAML or XML or JSON" (one of them is INI, though). Which format would you recommend (and why) for a project which needs to provide simple values (debug = true, welcome = "Hello world!", threads = 4), arrays of simple values (servers = [dev, test, prod]), and values which refer to other variables (thread_msg = "Using $threads threads") to Bash, Perl and PHP?
YAML, XML and JSON are serialization formats, not configuration formats. These formats are easy for machines to read and write, but not as easy for humans to read and write. Do not use them for configuration; your users will likely hate you for it.

Also, some of these files are simply NAME value pairs. Others might not really be configuration files but shell files, meaning they can basically be sourced by a shell for processing.

I suggest using an actual configuration format like INI or Apache style (Config::General in Perl). Config::Any is a good choice of Perl module for loading a config, because it allows the user to essentially pick the format of their choice.
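For the simple-values part of the question, a plain key=value file is the lowest common denominator: bash can source it directly, and it is trivial to read from Perl or PHP (PHP's parse_ini_file handles this shape). A minimal sketch; the file name and the get helper are made up for the example:

```shell
# Hypothetical shared config file
cat > app.conf.sample <<'EOF'
debug=true
threads=4
EOF

# In bash you can simply source it...
. ./app.conf.sample
echo "$threads"

# ...or read a single key without sourcing:
get() { awk -F= -v k="$1" '$1 == k { print $2 }' app.conf.sample; }
get debug
```

Arrays and variable references go beyond what plain key=value can express cleanly, which is where a richer format like INI plus Config::Any earns its keep.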
Language agnostic configuration file format
1,347,520,499,000
Is there any way to change the height of a conky window?

.conkyrc:

    background no
    update_interval 1

    cpu_avg_samples 2
    net_avg_samples 2

    override_utf8_locale yes

    double_buffer yes
    no_buffers yes

    text_buffer_size 2048
    #imlib_cache_size 0

    # Window specifications #
    # own_window_class Conky
    own_window yes
    own_window_type desktop
    own_window_transparent yes
    own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
    own_window_argb_visual yes

    border_inner_margin 0
    border_outer_margin 0

    minimum_size 200 200
    maximum_width 200

    alignment tr
    gap_x 0
    gap_y 25

    # Graphics settings #
    draw_shades no
    draw_outline no
    draw_borders no
    draw_graph_borders no

    # Text settings #
    use_xft yes
    xftfont Ubuntu:size=8
    xftalpha 0.5

    uppercase no

    temperature_unit celsius

    default_color FFFFFF

    # Lua Load #
    lua_load ~/.conky/draw_bg.lua
    lua_draw_hook_pre draw_bg
    lua_load ~/.conky/clock_rings.lua
    lua_draw_hook_post clock_rings

    TEXT
    ${voffset 8}${goto 25}${color FFFFFF}${font Ubuntu:size=16}${time %A}${font}${voffset -8}${alignr 50}${color FFFFFF}${font Ubuntu:size=38}${time %e}${font}
    ${color FFFFFF}${goto 25}${voffset -30}${color FFFFFF}${font Ubuntu:size=18}${time %b}${goto 75}${font Ubuntu:size=20}${time %Y}${font}${color 0B8904}
    ${voffset 150}${font Ubuntu:size=10}${font}
    ${font Ubuntu:size=12}${color FFFFFF}${alignr}${font}
    ${voffset -20}${alignr 50}${color FFFFFF}${font Ubuntu:size=38}${time %H}${font}
    ${alignr 50}${color FFFFFF}${font Ubuntu:size=38}${time %M}${font}
    ${voffset -95}
    ${color FFFFFF}${goto 23}${voffset 48}${cpu cpu0}%
    ${color 0B8904}${goto 23}CPU
    ${color FFFFFF}${goto 48}${voffset 23}${memperc}%
    ${color 0B8904}${goto 48}RAM
    ${color FFFFFF}${goto 73}${voffset 23}${swapperc}%
    ${color 0B8904}${goto 73}Swap
    ${color FFFFFF}${goto 98}${voffset 23}${fs_used_perc /}%
    ${color 0B8904}${goto 98}Disk
    ${color FFFFFF}${voffset 25}${alignr 62}${downspeed eth1}${goto 135}D
    ${color FFFFFF}${alignr 62}${upspeed eth1}${goto 135}U
    ${color 0B8904}${goto 123}Net
    ${color FFFFFF}${font Ubuntu:size=8}${goto 55}Uptime: ${goto 100}${uptime_short}
    ${color FFFFFF}${font Ubuntu:size=8}${goto 42}Processes: ${goto 100}${processes}
    ${color FFFFFF}${font Ubuntu:size=8}${goto 50}Running: ${goto 100}${running_processes}}

draw_bg lua script:

    -- Change these settings to affect your background.
    -- "corner_r" is the radius, in pixels, of the rounded corners.
    -- If you don't want rounded corners, use 0.
    corner_r=0

    -- Set the colour and transparency (alpha) of your background.
    bg_colour=0x000000
    bg_alpha=.8

    require 'cairo'

    function rgb_to_r_g_b(colour,alpha)
        return ((colour / 0x10000) % 0x100) / 255., ((colour / 0x100) % 0x100) / 255., (colour % 0x100) / 255., alpha
    end

    function conky_draw_bg()
        if conky_window==nil then return end
        local w=conky_window.width
        local h=conky_window.height
        local cs=cairo_xlib_surface_create(conky_window.display, conky_window.drawable, conky_window.visual, w, h)
        cr=cairo_create(cs)

        cairo_move_to(cr,corner_r,0)
        cairo_line_to(cr,w-corner_r,0)
        cairo_curve_to(cr,w,0,w,0,w,corner_r)
        cairo_line_to(cr,w,h-corner_r)
        cairo_curve_to(cr,w,h,w,h,w-corner_r,h)
        cairo_line_to(cr,corner_r,h)
        cairo_curve_to(cr,0,h,0,h,0,h-corner_r)
        cairo_line_to(cr,0,corner_r)
        cairo_curve_to(cr,0,0,0,0,corner_r,0)
        cairo_close_path(cr)

        cairo_set_source_rgba(cr,rgb_to_r_g_b(bg_colour,bg_alpha))
        cairo_fill(cr)
    end

Screenshot of setup

Question

I want to increase the background (drawn by the lua script using conky_window.height) to occupy the entire screen height.

Tried

Changing minimum_size has no effect.
Adding lines at the bottom has no effect, see https://i.sstatic.net/1aP3O.jpg

Fix

Turns out that conky_window.height, as used by the lua script, is preserved between conky restarts. Logging out and back in resolves this issue. Changing minimum_size then works.
Just add ${voffset 200} at the end of the .conkyrc file and play with the value.
Increasing conky height
1,347,520,499,000
I have some Debian desktop machines, and during installation I left the domain name field blank, because I don't host any websites and I do not have a static IP (whatsmyip.org gives a different IP every few months). What should the domain name be in this situation?
You have several choices. You can use the .local domain name, which is reserved for machines that are not accessible from the Internet. (You can use it on a machine that can make outgoing connections to the Internet, or even from a machine that can but normally doesn't receive incoming connections from the Internet.) This name is reserved for that use, it will never be used by a machine on the Internet. Another similar, more common but not officially-sanctioned name is .localdomain. It is preferable as some systems only support .local for names discovered by mDNS (Linux doesn't care but OSX does, thanks roima). Alternatively, you can use a name that you pick, that isn't in use as a TLD. This has the advantage that you can use different names for different private networks. Alternatively, you can use names under a public TLD, even if the machine isn't reachable from the Internet. This can be confusing if these names aren't recorded in the domain name system however. For a single machine, having a domain name recorded is pretty much useless. The domain name setting is not used much. Its most common use is as a default zone to search for host names, as a default for the domain or search setting in /etc/resolv.conf, i.e. when you access the host foo, the application will try foo.localdomain or whatever you've picked. Setting a domain name is useful when you have multiple machines on your local network — either physical or virtual machines. If you have multiple machines, you'll probably want to set up a local name server (which doesn't require using a domain name, you can stick to dot-less host names). Setting a distinctive domain name is useful when some of your computers have variable network connectivity, e.g. a laptop, or a computer where you sometimes use a VPN. You can then use the domain name as an indication of which network you're currently connected to.
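To make the resolv.conf point concrete, this is roughly what the setting ends up looking like; the fragment is purely illustrative (the nameserver address is invented):

```
# /etc/resolv.conf
search localdomain
nameserver 192.168.1.1
```

With this in place, looking up the bare host name foo will also try foo.localdomain.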
Correct domain name for a non-server desktop machine
1,347,520,499,000
I'm trying to set up two network profiles in CentOS: one for home, one for work. The home profile has a fixed IP address, fixed gateway and DNS server addresses. The work profile depends on DHCP.

I've created a 'home' and a 'work' directory in /etc/sysconfig/networking/profiles. Each has the following files containing the proper configuration:

    -rw-r--r-- 2 root root 422 Apr 17 20:17 hosts
    -rw-r--r-- 5 root root 223 Apr 17 20:18 ifcfg-eth0
    -rw-r--r-- 1 root root 101 Apr 17 20:17 network
    -rw-r--r-- 2 root root  73 Apr 17 20:18 resolv.conf

There was already a 'default' profile, which contains the same files. Then I issued these commands:

    system-config-network-cmd --profile work --activate
    service network restart

I was expecting these files to get copied from the profiles/work directory to /etc/sysconfig/ and /etc/sysconfig/network-scripts. And most files do get copied, except for ifcfg-eth0. Strangely enough, that file seems to be overwritten with the current settings when I issue system-config-network-cmd. The other files are also touched, but their contents stay intact.

The system is CentOS 5.7 running on a virtual PC within a Windows 7 machine. Here is the output of ifconfig:

    # ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:03:FF:6F:2E:AB
              inet addr:192.168.1.200  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::203:ffff:fe6f:2eab/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:4199761 errors:7 dropped:0 overruns:0 frame:0
              TX packets:1733750 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2316624688 (2.1 GiB)  TX bytes:415533386 (396.2 MiB)
              Interrupt:9

Can someone tell me what I'm missing here?
As follows from Red Hat's documentation on networking profiles, you should not use the base interface name (eth0) for profile interfaces, but have one named eth0_work and so on.

BTW, you don't need to restart network configuration, since profile switching handles that on its own. An example:

    # system-config-network-cmd --profile foobar --activate
    Network device deactivating...
    Deactivating network device eth0, please wait...
    Network device activating...
    Activating network device eth0_foobar, please wait...
How to configure network profiles in Centos?
1,347,520,499,000
I get the following output whenever I issue task:

    TASKRC override: /path/taskrc
    TASKDATA override: /path/.task

It's because I put the config and data files in a non-default external location, specified by the $TASKRC and $TASKDATA environment variables of Taskwarrior. How can I make task quiet so it does not warn me every time? I'd like to find the command-line switch to silence it for a single invocation, and also the config file option to make it permanent, if any.
You need to lower the verbosity by removing header from verbose. By default, verbose=yes, so you need to manually list each type of message that you want to see. For example, I fixed this by fully defining verbosity minus a few items: verbose=blank,footnote,label,new-id,affected,edit,special,project,sync,unwait in my ~/.config/task/config (or whatever your $TASKRC is). Note that I also removed filter from my verbosity, but that's not necessary to fix the problem. Just remove header. Note also that removing header will also hide the [task custom] message at the top of the output. If you need that message, the alternative would be to manually filter out the warning using grep and some regular expressions. TL;DR: place this in the file at $TASKRC: verbose=blank,footnote,label,new-id,affected,edit,special,project,sync,unwait
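The grep fallback mentioned above would look something like this. It is demonstrated on canned output, since the exact warning text comes straight from the question:

```shell
# Canned task output standing in for `task ... 2>&1`
sample='TASKRC override: /path/taskrc
TASKDATA override: /path/.task
1 task'

# Drop just the override warnings, keep everything else
echo "$sample" | grep -vE '^TASK(RC|DATA) override:'
```

In real use you would wrap the whole thing, e.g. task "$@" 2>&1 | grep -vE '^TASK(RC|DATA) override:', though the verbose= setting above is the cleaner fix.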
How to override warning in Taskwarrior?
1,347,520,499,000
I don't like big text configuration files, so I would like to split my rc.xml file into multiple files:

    rc.xml
    rc.keyboard.xml
    rc.mouse.xml
Why don't you just split it into the files you propose and then cat them all together?

    cat rc-something.xml rc.keyboard.xml rc.mouse.xml > rc.xml

The only problem is that you will need to cat them again each time you modify one of the individual files, but that should be trivial.
How can I split Openbox `rc.xml` into multiple files?
1,295,448,882,000
I am making a script that needs to access the computer's monitor configuration. How can I do that? Is there a command or a file I could read to get this information? At the moment, I do:

    xwininfo -root

But that only gives me the total resolution, not the details. What I need is the resolution of each screen individually.
This is heavily dependent on the set up of the system. One way to get the information would be if xrandr is being used:

    xrandr --query

This will display something like:

    Screen 0: minimum 320 x 200, current 3046 x 1050, maximum 8192 x 8192
    VGA1 connected 1680x1050+1366+0 (normal left inverted right x axis y axis) 473mm x 296mm
       1680x1050      60.0*+
       1280x1024      75.0     60.0
       1152x864       75.0
       1024x768       75.1     60.0
       800x600        75.0     60.3
       640x480        75.0     60.0
       720x400        70.1
    LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 353mm x 198mm
       1366x768       60.0*+
       1360x768       59.8     60.0
       1024x768       60.0
       800x600        60.3     56.2
       640x480        59.9
    DP1 disconnected (normal left inverted right x axis y axis)

You could then use some text processing tool to pull out the resolution for each display.
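For instance, a short awk one-liner does the job. It is demonstrated below on a captured copy of the output above so the filtering is visible; in practice you would pipe xrandr --query straight into the awk:

```shell
# Captured xrandr output (abridged) standing in for `xrandr --query`
sample='Screen 0: minimum 320 x 200, current 3046 x 1050, maximum 8192 x 8192
VGA1 connected 1680x1050+1366+0 (normal left inverted right x axis y axis) 473mm x 296mm
LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 353mm x 198mm
DP1 disconnected (normal left inverted right x axis y axis)'

# Print "output geometry" for each connected display; newer xrandr inserts
# the word "primary" after "connected" on one line, hence the ternary.
echo "$sample" | awk '/ connected/ { print $1, ($3 == "primary" ? $4 : $3) }'
```

Real use: xrandr --query | awk '/ connected/ { print $1, ($3 == "primary" ? $4 : $3) }' — note that the pattern " connected" (with a leading space) deliberately skips "disconnected" lines.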
How to retrieve monitors configuration from the command line?
1,295,448,882,000
I'm new to Linux. I'm trying to run a crontab entry, as the vagrant user, to back up my database:

    * * * * * /usr/bin/mysqldump -h localhost -u root -p root mydb | gzip > /var/backup/all/database_`date +%Y-%m-%d`.sql.gz >/dev/null 2>&1

When the crontab runs there is no backup file in the folder (/var/backup/all has the permission scheme 755). This is the error from /var/log/syslog:

    Aug 16 11:55:01 precise64 CRON[2213]: (vagrant) CMD (/usr/bin/mysqldump -h localhost -u root -p root mydb | gzip > /var/backup/all/database_`date +%Y-%m-%d`.sql.gz >/dev/null 2>&1)
    Aug 16 11:55:01 precise64 CRON[2212]: (CRON) info (No MTA installed, discarding output)

So I think cron can't create the backup file, perhaps because of a permission problem. I know I didn't install an MTA, but I use >/dev/null 2>&1 precisely to stop cron from mailing the output, so why the error?
Of course, the error is that you don't have a mailer (sendmail, postfix, etc.) installed and active. That being said, your other problem is that the >/dev/null 2>&1 only applies to the LAST command, in this case gzip. Thus there must be some type of output going to STDERR from your mysqldump. The correct way to do what I think you want is:

    * * * * * (command | command) >/dev/null 2>&1
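Applied to the entry from the question, that would look roughly like the line below. Two details worth flagging, although the answer above doesn't cover them: % is special in crontab and must be escaped as \%, and mysqldump takes the password attached directly to -p (a space after -p makes it prompt and treat the next word as a database name), so check that too. PASSWORD here stands for the real password:

```
* * * * * (/usr/bin/mysqldump -h localhost -u root -pPASSWORD mydb | gzip > /var/backup/all/database_`date +\%Y-\%m-\%d`.sql.gz) >/dev/null 2>&1
```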
crontab error with (No MTA installed) but I use >/dev/null 2>&1
1,295,448,882,000
How do I disable ChaCha20-Poly1305 encryption for ssh under Debian? I tried (as root):

    echo 'Ciphers [email protected]' > /etc/ssh/sshd_config.d/anti-terrapin-attack
    echo 'Ciphers [email protected]' > /etc/ssh/ssh_config.d/anti-terrapin-attack
    systemctl restart sshd

But my ssh -Q cipher is still showing [email protected].

UPDATE: As the answers to fully solving my question are spread across different answers, let me summarize them in one place.

Why? What's the fuss? -- Check out Attack Discovered Against SSH; Debian's stable openssh version is generations behind the official fix, so I need to fix it myself now.

Why is the OP's attempt not working? -- Two points:

- ssh -Q cipher always shows all of the ciphers compiled into the binary
- all configuration files in the "/etc/ssh/sshd_config.d" directory should end with ".conf"

How to disable the attack? -- See Floresta's practical solution https://unix.stackexchange.com/a/767135/374303

How to verify that the attack is disabled? -- Based on gogoud's practical solution:

    nmap --script ssh2-enum-algos -sV -p 22 localhost | grep chacha20 | wc
          0       0       0

Better to run it before and after applying Floresta's fixes.
ssh -Q cipher always shows all of the ciphers compiled into the binary, regardless of whether they are enabled or not. This is true also for algorithms which are insecure or disabled by default. The configuration you have set up should be sufficient to disable the algorithm, assuming you're using a recent version of OpenSSH which supports this syntax. You can verify this by attempting to connect via ssh -vvv, which will print the server-to-client cipher list.

If you don't have a recent version of OpenSSH, then this syntax is not supported, and you need to explicitly list the ciphers you want. The default is listed in man sshd_config and, for my version of OpenSSH (Debian's 9.6), would look like this (without ChaCha):

    Ciphers aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]

Assuming you have modern clients, placing AES-GCM first in the list will improve performance (and security, if you're not using encrypt-then-MAC), but there was an older version of OpenSSH which would segfault during rekeying with AES-GCM (which all major distros patched), which is why they're at the end of the list.

Note that if you're using a patched OS on both the client and server, then it's not necessary to disable [email protected]. The reason is that authorized clients and servers will negotiate a secure connection with the [email protected] and [email protected] extensions that are in patched versions of OpenSSH. It doesn't matter what random scrapers do, because they'll drop off shortly thereafter and there's no need to protect them.
How to disable ChaCha20-Poly1305 encryption to stop the terrapin ssh attack
1,295,448,882,000
My server is running Debian Wheezy and htop 1.0.1, but it does not display any values on the meter bars except the 100.0% value of the CPUs. Is it possible to always show the values of a meter?

This is the current display of htop on my server:

But I want to have the numbers always on my meters, like on this screenshot from https://hisham.hm/htop/index.php?page=screenshots:

My htoprc is located in ~/.config/htop/htoprc and contains the following data:

    # The parser is also very primitive, and not human-friendly.
    fields=0 48 17 18 38 39 40 2 46 47 49 1
    sort_key=46
    sort_direction=1
    hide_threads=0
    hide_kernel_threads=1
    hide_userland_threads=0
    shadow_other_users=0
    show_thread_names=0
    highlight_base_name=1
    highlight_megabytes=1
    highlight_threads=0
    tree_view=0
    header_margin=1
    detailed_cpu_time=0
    cpu_count_from_zero=0
    color_scheme=0
    delay=15
    left_meters=LeftCPUs2 CPU Memory Swap
    left_meter_modes=1 1 1 1
    right_meters=RightCPUs2 Tasks LoadAverage Uptime
    right_meter_modes=1 2 2 2
Well, you create an htoprc file; mine is located in ~/.config/htop/htoprc. I don't know how it is under Ubuntu, but that should work too. Inside you just have to put:

    left_meters=AllCPUs Memory Swap
    left_meter_modes=1 1 1

That should give you the output you want. You can also change the number of the color_scheme; maybe the background is the same color as the numbers.
Display numbers in meters of htop
1,295,448,882,000
To make things more readable, I'd like to put a little more margin between lines of text. I couldn't find an answer with man terminator_config or any of the Preferences panes.
Nowadays, Terminator luckily supports this. In Preferences/Global, you can set the "cell height". It does exactly what @egmont described in his answer. I've actually been browsing the source code in order to add this feature before discovering… it was already there.
How to change the line spacing in Terminator?
1,295,448,882,000
Whenever I open an image in feh, the background is set to the standard dark gray and gray checkerboard pattern, like this:

As you can see, it's the checkerboard background. How do I permanently change this to black? I've searched Google and other places, but I can't seem to find a straight answer. I'm guessing feh's config file is involved, but I can't find any examples of how to do it in the config file. I know you can do it on the command line with --bg-color black (or something), but I'd like to just have it set to black by default.
It seems that you cannot put your desired default options in a config file. If you know about $PATH you can resort to a hack. Create this script:

    #!/bin/sh
    feh --bg-color black "$@"

Call it feh and place it in your $PATH before /usr/bin/ (assuming that feh itself is in /usr/bin/). Some distros have ~/bin/ in $PATH by default, so you would put that script into ~/bin/ (and make it executable). Otherwise just create this folder yourself and prepend it to your $PATH.

Also, if you want to set multiple default options, you can group them into themes. (Theme is the feh developer's name for a named group of options.) Create ~/.config/feh/themes and add this line to that file:

    default --bg-color black

feh -Tdefault will then start feh with your desired default options. This is handy if you want to set multiple options at once. Unfortunately there is no way to set a default theme either, so in your case it doesn't help by itself. But you can fall back to the same hack as above:

    #!/bin/sh
    feh -Tdefault "$@"

Alternative: If you are just going to call feh manually from the command line, you can instead set an alias in your shell. In bash you would add this line to your ~/.bashrc and restart the interpreter (e.g. re-open the terminal):

    alias feh="feh --bg-color black"

In fish shell you would run:

    abbr -a feh feh --bg-color black
How to permanently set default color of feh's background to black?
1,295,448,882,000
This is all on Debian Testing (= Stretch as of now). I am trying to configure opendkim, but it won't use the socket I want it to. According to man opendkim.conf, the Socket can be configured in /etc/opendkim.conf. I have also tried creating the file /etc/default/opendkim as I see it in my Jessie box, but that did not work either. Thus, I have tried entering the following line in /etc/opendkim.conf: Socket inet:39172@localhost Now, according to /etc/init.d/opendkim, this file is read: if [ -f /etc/opendkim.conf ]; then CONFIG_SOCKET=`awk '$1 == "Socket" { print $2 }' /etc/opendkim.conf` fi To me, that looks good so far. But the following snippet, which follows immediately, seems to throw away the information that was just read: # This can be set via Socket option in config file, so it's not required if [ -n "$SOCKET" -a -z "$CONFIG_SOCKET" ]; then DAEMON_OPTS="-p $SOCKET $DAEMON_OPTS" fi DAEMON_OPTS="-x /etc/opendkim.conf -u $USER -P $PIDFILE $DAEMON_OPTS" I don't really understand what this is supposed to do. $CONFIG_SOCKET is never actually used to start opendkim, is it? Why is it being read from the configuration file in the first place, then? I noticed there is also a file /etc/systemd/system/multi-user.target.wants/opendkim which does not seem to load any configuration. If it is of any importance: To restart opendkim, I enter service opendkim restart. My check to see if the socket has been read is: telnet localhost 39172 says Connection refused and /var/log/syslog says: opendkim[8343]: OpenDKIM Filter v2.11.0 starting (args: -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock) My question is: How should I be configuring the socket for opendkim on Debian Testing/Stretch? Which probably also solves the mystery of how the script above is supposed to work.
You are configuring it correctly, but this is an open bug with Debian Stretch where it ignores configuration: See: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864162
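For what it's worth, the awk extraction quoted from the init script does parse the Socket value correctly; the bug is that the result is then never handed to the daemon. You can verify the parsing step in isolation (the sample file path and contents below are illustrative, not a real system file):

```shell
# Run the exact awk line from /etc/init.d/opendkim against a sample config.
cat > /tmp/opendkim.conf.sample <<'EOF'
Syslog yes
Socket inet:39172@localhost
EOF
CONFIG_SOCKET=$(awk '$1 == "Socket" { print $2 }' /tmp/opendkim.conf.sample)
echo "$CONFIG_SOCKET"   # inet:39172@localhost
```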
How to configure the socket for opendkim on Debian Testing/Stretch
1,295,448,882,000
On my Linux machine, I do the following: $ env | grep -i LESSOPEN LESSOPEN=|/usr/bin/lesspipe.sh %s So from the env command I see that: LESSOPEN=|/usr/bin/lesspipe.sh %s I want to change the variable LESSOPEN, so I do the following search to find where it is set, so that I can make this change. $ grep -Ril "LESSOPEN" / But the grep search did not find any file containing LESSOPEN.
On Red Hat and CentOS systems, it is defined in /etc/profile.d/less.sh. On version 5, this contains # less initialization script (sh) [ -x /usr/bin/lesspipe.sh ] && export LESSOPEN="${LESSOPEN-|/usr/bin/lesspipe.sh %s}" On other systems, such as version 7, the value may be ||/usr/bin/lesspipe.sh %s; there is a slightly different interpretation between values that begin with | and ||, detailed in the man page for less. You can either edit that file if you want all users of bash-like shells on your system to see a different value, or override it for yourself by editing ~/.bashrc or ~/.bash_profile to have an export LESSOPEN=whatever line. On Linux systems, grep -r string / or grep -R string / may run into problems when reading certain special files. grep will hang when reading /dev/rfkill, and, due to what I believe is a buffer allocation bug, will run out of memory reading certain large files in /proc. An alternative is to exclude /dev and /proc: find / '(' -path /proc -o -path /dev ')' -prune -o -type f -exec grep -il lessopen {} +
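As a sketch of the per-user override mentioned above, you would put a line like this in ~/.bashrc or ~/.bash_profile (the lesspipe path is the Red Hat/CentOS one; adjust it for your system):

```shell
# Override LESSOPEN for this user only; the leading '||' is the newer
# prefix form whose exact semantics are described in less(1).
export LESSOPEN='||/usr/bin/lesspipe.sh %s'
echo "$LESSOPEN"
```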
Which file defines the LESSOPEN environment variable?
1,295,448,882,000
I am looking for a clean, "modern" way to configure, start, and stop the dummy0 network interface (from the dummy kernel module). My /etc/network/interfaces used to work on an older system but now fails silently on ifup dummy0: iface dummy0 inet static address 10.10.0.1 netmask 255.255.255.0 # post-up ip link set dummy0 multicast on Uncommenting the post-up line produces this error (showing that it runs but that the interface is never created): dummy0: post-up cmd 'ip link set dummy0 multicast on'failed: returned 1 (Cannot find device "dummy0") This shell script works perfectly but isn't a nice clean config file: #!/bin/sh sudo ip link add dummy0 type dummy sudo ip link set dummy0 multicast on sudo ip addr add 10.10.0.1/24 dev dummy0 sudo ip link set dummy0 up My intention is to use it both manually and with a systemd service: [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/ifup dummy0 ExecStop=/sbin/ifdown dummy0 StandardOutput=syslog+console Environment: Kubuntu 18.04.2 LTS NetworkManager 1.10.6 iproute2 4.15.0 ifupdown2 1.0 systemd 237 +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid Questions: How can I convert the shell script into a working /etc/network/interfaces configuration? Are there any another cleaner or recommended ways to do this?
The interface wasn't "created" previously; ifupdown relied on it magically appearing as soon as the 'dummy' kernel module was loaded. This is old compatibility behavior, and (as far as I recall) it also interfered with explicit creation of the same interface name, so it was disabled through a module parameter. Now dummy0 has to be created the same way dummy1 or dummyfoobar are created. You should be able to create the interface in a "pre-up" command: iface dummy0 inet static address 10.10.0.1/24 pre-up ip link add dummy0 type dummy If you also use NetworkManager on this system, recent NM versions support dummy interfaces. nmcli con add type dummy ifname dummy0 ipv4.addresses 10.10.0.1/24 [...] If the interface should be created on boot and remain forever, that can be done using systemd-networkd (one .netdev configuration to create the device, one .network config to set up IP addresses). However, 'networkctl' still does not have manual "up" or "down" subcommands.
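For the systemd-networkd route mentioned at the end, a minimal sketch could look like the following (file names are arbitrary, the address is taken from the question, and this is untested here, so treat it as a starting point):

```ini
# /etc/systemd/network/10-dummy0.netdev -- creates the device on boot
[NetDev]
Name=dummy0
Kind=dummy

# /etc/systemd/network/10-dummy0.network -- assigns the address
[Match]
Name=dummy0

[Network]
Address=10.10.0.1/24
```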
Modern way to configure dummy0 in /etc/network/interfaces or similar?
1,295,448,882,000
I recently installed kmscon on my system. Now I want to configure it to use a different keyboard layout (neo2) and, while I am at it, a different font, too. I stumbled across this question here when searching for configuration file examples. But nowhere I could find an example configuration file or instructions how to format the configuration. Can you point me to any additional resource or give me an example?
Further investigation brought me to this page. Now my config file looks like the following: # config file for kmscon linux console xkb-layout=de xkb-variant=neo font-name=Inconsolata font-size=10 This answers both my questions.
How can I configure /etc/kmscon/kmscon.conf to use specific a) font and b) keyboard layout?
1,295,448,882,000
TLDR In sshd_config(5), this config segment: Match Address fe80::/10 PasswordAuthentication yes ... is not matching link-local IPv6 addresses as expected. Why and how to fix? I am trying to configure sshd to only allow password authentication when connecting from local addresses. Otherwise public key authentication is required. This is the relevant config in sshd_config. PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys PasswordAuthentication no # Allow password auth on local network Match Address 169.254.0.0/16,192.168.0.0/16 PasswordAuthentication yes Match Address fe80::/10 PasswordAuthentication yes What works: Public key authentication enabled on all addresses, as expected. Password authentication enabled on address range 169.254.0.0/16,192.168.0.0/16 when connecting via IPv4, as expected. What does not work: Password authentication is not enabled on address range fe80::/10 when connecting via IPv6. Relevant line in var/log/secure: sshd[9457]: Connection reset by fe80::39c9:9db5:5a2a:1299%eth0 port 60468 [preauth] ... which is an address that should be matched by fe80::/10 Checklist items I've done: IPv6 traffic is not blocked by firewall sshd is listening on both stacks $ netstat -tupln | grep sshd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 8903/sshd tcp6 0 0 :::22 :::* LISTEN 8903/sshd Combining / splitting the Match statements for IPv4 and IPv6 does nothing Match Address 169.254.0.0/16,192.168.0.0/16,fe80::/10 This doesn't work either. Putting the IPv6 address in square brackets Match Address [fe80::]/10 No bueno. sshd does not log any config error in var/log/secure Not a client problem - tried OpenSSH, PuTTY, WinSCP and got the same error Versions: sshd running on CentOS 7 $ uname -msr Linux 5.4.72-v8.1.el7 aarch64 $ ssh -V OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017 I've already asked this question over on /r/sysadmin's discord server, and people's reaction was "weird". See our full conversation here if you are interested. 
It has some more minor details on the different things I tried.
After trying just about everything I could think of, I was able to find a solution that worked for me. I wanted to allow password auth to users on my LAN but only allow key based auth from outside the LAN, which is why I ended up finding this post.

From other reading I saw some indication that square brackets should be used with an ipv6 address, and I also saw in the sshd logs that it logged the interface name (i.e. eth0, wlan0) where my connection came from when I would connect using ipv6. I decided to test all of the combos until I found something that worked.

I put in my full ipv6 address and did not use a /10 to make sure that the format of that would not interfere, and I tried with and without the interface name and with and without brackets (and putting them around different parts of the address), and I can definitively say that sshd does not like the brackets. Any time I included them it did not work. It also did not work without the interface name specified, even if I used my full exact ipv6 address, so it seems like sshd expects an ipv6 address to not include square brackets as it would in a URL, and expects that it will include the interface name no matter what.

The last piece was the /10 to include all link local addresses. I initially expected the correct form to be fe80::/10%eth0 but surprisingly that did not work. Instead sshd expects you to write fe80::%eth0/10. I guess this kind of makes sense if you view the /10 as modifying the entire ip address, where a proper and complete ipv6 address is some number and an interface name while an ipv4 is only the number; but in either case, with that twist unraveled, I had a solution.
This was the full match block I used to allow ipv4 and ipv6 local connections to authenticate with passwords: Match Address 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,fe80::%eth0/10 PasswordAuthentication yes Obviously you would need to modify the name eth0 to be the correct interface name for your machine (you can look at the list of them with ifconfig or check the sshd logs when you attempt to connect via ipv6 to see what interface your connection is using), and if you wanted to support connections from multiple interfaces I think you need to specify ,fe80::%name/10 for each one. I hope this answer helps others who stumble upon this thread (and maybe OP, though I am not sure how much help this will be five months later)
sshd_config - "Match Address <IPv6>" not matching
1,295,448,882,000
I have seen lines like the one below, in a number of config files (xmobar has one of these and dmenu too has this) '-*-fixed-*-*-*-*-18-*-*-*-*-*-*-*' What is the meaning of each token which is separated by a hyphen (-)? Where can I see a list of fonts which I can use in these configs? There is another variation of this with obvious placeholders. Where can I see a list of fonts for this? (I understand that xft fonts are different) xft:Bitstream Vera Sans Mono:size=10:antialias=true
xfontsel will allow you to view, select and adjust all of these fields: You can also use the command xlsfonts to see a list of installed fonts. This will list all fonts, so you may want to pipe it through grep to view required subsets, eg,: xlsfonts | grep droid Similarly, fc-list will display a list of all installed fonts. You can read more about the differences between core and Xft fonts here: http://en.wikibooks.org/wiki/Guide_to_X11/Fonts#Core_versus_Xft_fonts
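The hyphen-separated string is an X Logical Font Description (XLFD): fourteen fields meaning foundry, family, weight, slant, set-width, add-style, pixel size, point size, X resolution, Y resolution, spacing, average width, registry, and encoding, with * as a wildcard. A quick sketch that labels the example pattern from the question:

```shell
# Split the XLFD pattern from the question into its 14 named fields.
xlfd='-*-fixed-*-*-*-*-18-*-*-*-*-*-*-*'
IFS=- read -r _ foundry family weight slant setwidth addstyle \
    pixelsize pointsize resx resy spacing avgwidth registry encoding <<EOF
$xlfd
EOF
# Here only family ("fixed") and pixel size ("18") are constrained;
# every other field is wildcarded.
echo "family=$family pixelsize=$pixelsize"
```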
Meaning of different tokens in a font config string
1,295,448,882,000
I need to work on a Solaris server over ssh from my Ubuntu (Lucid) laptop. I got Home/End Insert/Delete Page Up/Down working in csh and bash using bindkey and ~/.inputrc, respectively. But I can't figure out how to get them working in less. How can I figure out what the problem is, and fix it?
I found the answer here, in section 4.4. less (1). to use it with the movement keys, have this plain ASCII file .lesskey in your home directory: ^[[A back-line ^[[B forw-line ^[[C right-scroll ^[[D left-scroll ^[OA back-line ^[OB forw-line ^[OC right-scroll ^[OD left-scroll ^[[6~ forw-scroll ^[[5~ back-scroll ^[[1~ goto-line ^[[4~ goto-end ^[[7~ goto-line ^[[8~ goto-end then run the command lesskey. (These are escape sequences for vt100-like terminals.) This creates a binary file .less containing the key bindings.
Why don't Page U/Down, Home/End work in less on Solaris over ssh from Ubuntu?
1,295,448,882,000
I was just wondering why echo $MANPATH does not work (on my system (Debian Jessie x86_64 GNU/Linux 3.16.0-4-amd64)). The manpath command alone works well: user@host:~$ manpath /usr/local/man:/usr/local/share/man:/usr/share/man /etc/manpath.config - exists and contains uncommented lines, according to the ones listed by manpath. The manpath man page says: If $MANPATH is set, manpath will simply display its contents ... So, why does manpath work and echo $MANPATH doesn't?
From my man manpath (Ubuntu 16.10 - as you didn't mention your system details): If $MANPATH is set, manpath displays its value rather than determining it on the fly. So $MANPATH is more of an override to the otherwise default configuration held by /etc/manpath.config. Note also: DESCRIPTION If $MANPATH is set, manpath will simply display its contents and issue a warning. If not, manpath will determine a suitable manual page hierarchy search path and display the results.
Why does 'manpath' work and 'echo $MANPATH' does not?
1,295,448,882,000
I've got 2 PCs connected on an Ethernet crossover cable. On PC1 I run: sudo ip addr add 192.168.2.1 peer 192.168.2.2 dev eth0 sudo ip link set dev eth0 up On PC2 I run: sudo ip addr add 192.168.2.2 peer 192.168.2.1 dev eth0 sudo ip link set dev eth0 up sudo ip route add default via 192.168.2.1 What's the cleanest way to replicate this set up in the interfaces(5) config file, so I can run ifup(8) instead of configuring each interface manually? I'm on Linux Mint 15, if it matters, but I'd think it'd be the same in any Debian-based distro.
Here's the interfaces(5) "native" way to describe a point-to-point connection (written for PC2): iface eth0 inet static address 192.168.2.2 pointopoint 192.168.2.1 gateway 192.168.2.1 It's also useful to know that in a pinch, if you have an unusual configuration that interfaces(5) doesn't support, you can tell it to run exactly your set of commands: iface eth0 inet manual up ip link set eth0 up up ip addr add 192.168.2.2 peer 192.168.2.1 dev eth0 up ip route add default via 192.168.2.1 down ip route del default via 192.168.2.1 down ip addr del 192.168.2.2 peer 192.168.2.1 dev eth0 down ip link set eth0 down This way is more error-prone, of course, and in this instance it's unnecessary. But you can also add up and down to non-manual definitions, if you want to tweak a standard setup.
Static IP for 2 PCs on crossover cable in interfaces(5) file?
1,295,448,882,000
So fat, I am setting up microphone settings : $ amixer set 'Rear Mic' 90% mute cap $ amixer set 'Rear Mic Boost' 80% But, after some sys. update, my default recoding chanell changed to 'Front Mic' : $ amixer sget 'Input Source' Simple mixer control 'Input Source',0 Capabilities: cenum Items: 'Front Mic' 'Rear Mic' 'Line' 'CD' 'Mix' Item0: 'Front Mic' How to change 'Input Source' to 'Read Mic' with amixer ? (Currently I do it manually with alsamixer or kmix - I would love to automatize it on startup).
I found a solution here: http://thenerdshow.com/index30e5.html where I found: $ amixer -c0 cset iface=MIXER,name='Input Source',index=1 'Front Mic' # (Record from Front Mic) slightly adjusted for my sound card and setup (default sound card, different item ordering): $ amixer cset name='Input Source',index=0 'Rear Mic'
amixer - How to change recording channel?
1,295,448,882,000
Is there any chance to create a configuration that does the following job? Only connect to available WiFi if its signal is stronger than 30 % At many places, I stay in the border area of barely available WiFi signals. There, the inevitable connection drops are just annoying, so I always have to switch between mobile data and WiFi manually by myself. Is there any chance to set up some configuration that only allows connecting to WiFi when the signal strength is strong enough to avoid drops (and thereby guarantee a stable connection)? Simplified approach: If signal strength is < 30 % ⇒ connection not allowed If signal strength is ≥ 30 % ⇒ connection allowed The value of 30 % is only an example of course... Maybe 20 % would make more sense, we will see!
I tried writing a script in python (python3, but works in 2 as well) that you can use for that. I've tried it up to the connecting and disconnecting part, so that you can use the method that you prefer:

import os

with open("/proc/net/wireless", "r") as f:
    data = f.read()
link = int(data[177:179])
level = int(data[182:185])
noise = int(data[187:192])
# print("{}{}{}".format(link, level, noise))
lmtqlty = -80
if link < lmtqlty:
    os.system("nmcli c down id NAME")  # Will disconnect the network NAME
else:
    os.system("nmcli c up id NAME")    # Will connect the network NAME

You have to run it as sudo, but it's no problem since you will now put it into a cron service. I have not used cron services yet, but if you can't manage it yourself I will give it a try.

EDIT explanation: When you read the contents of "/proc/net/wireless", you get the following long string:

Inter-| sta-|   Quality        |   Discarded packets               | Missed | WE
 face | tus | link level noise |  nwid  crypt   frag  retry   misc | beacon | 22
 wlan0: 0000   31.  -79.  -256        0      0      0      7      0        0

So you want to extract the correct values from the Quality column. This file gives you information about the connection between this system and the network. Here you have more information about it, and to explain what each Quality subcolumn means let me quote this other post: Decibel is a logarithmic unit (1 dB = 1/10 Bel, 1 Bel = power ratio 1.259 = amplitude ratio 1.122) that describes a relative relationship between signals. See wikipedia for details and a table. Negative decibels mean the received signal is weaker than the sent signals (which of course happens naturally). Level means how strong the signal is when received compared to how strong it was / it was assumed to be when sent. This is a physical measurement, and in principle the same for every Wifi hardware. However, often it's not properly calibrated etc. Link is a computed measurement for how good the signal is (i.e. how easy it is for the hardware/software to recover data from it).
That's influenced by echoes, multipath propagation, the kind of encoding used, etc.; and everyone uses their own method to compute it. Often (but not always) it is computed to some value that's on the same scale as the "level" value. From experience, for most hardware I've seen, something around -50 means the signal is ok-ish, something around -80 means it's pretty weak, but just workable. If it goes much lower, the connection becomes unreliable. These values should be read just as a rough indication, and not as something scientific you can depend on, and you shouldn't expect them to be similar or even comparable on different hardware, not even "level". The best way to learn to interpret it is to take your hardware, carry it around a bit, watch how the signal changes and what the effects on speed, error rate etc. are. So I think you are interested in link (I just changed it up there). To give you one more idea from my searching, here is a one-liner that dynamically shows the link value: watch -n 1 "awk 'NR==3 {print \"WiFi Signal Strength = \" \$3 \"00 %\"}' /proc/net/wireless" You could integrate it in a bash script rather than python :)
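One caveat with the script above: the fixed byte offsets into /proc/net/wireless shift whenever the interface name has a different length. A sketch of a more robust parse that splits the third line on whitespace instead (the sample data is taken from the file contents quoted above):

```python
# Parse link/level/noise from /proc/net/wireless by field position
# rather than by byte offset.
sample = (
    "Inter-| sta-|   Quality        |   Discarded packets               | Missed | WE\n"
    " face | tus | link level noise |  nwid  crypt   frag  retry   misc | beacon | 22\n"
    " wlan0: 0000   31.  -79.  -256        0      0      0      7      0        0\n"
)

def parse_wireless(data):
    # The third line holds the first interface: name, status, link, level, noise, ...
    fields = data.splitlines()[2].split()
    link, level, noise = (float(f.rstrip(".")) for f in fields[2:5])
    return link, level, noise

print(parse_wireless(sample))  # (31.0, -79.0, -256.0)
```

On a real system you would pass the contents of open("/proc/net/wireless").read() instead of the sample string.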
How to create a configuration to only connect to WiFi if signal is ≥ 30 %?
1,295,448,882,000
I do not want to store my sent file for mutt in my home directory. I would like to move this file to the .mutt directory. Inside my muttrc I did this - set record="~/.mutt/" but this causes an error and is non functional. I do not want to deactivate the sent file. I just want to move it. How do I move this file using muttrc?
You need to specify the full filename using record: set record = "~/.mutt/sent" You could also use + to place your sent mail in a mailbox alongside your other mailboxes (thanks to grochmal for the suggestion): set record = "+sent" The mailbox location is set using the folder variable, and is ~/Mail by default.
Mutt sent file in home directory - how to relocate
1,295,448,882,000
I use vim in many different contexts ... actually, probably most people do: There is the editing of configuration files, programming, documenting, email and so forth. I very frequently found myself wishing for a facility that lets me put a vimrc into a directory and whenever I start vim from within that directory a local vimrc gets read as well to make context specific adaptions. For instance: a vimrc in etc to implicitly set/release locks and create backups before/on write a vimrc for each of my programming projects for (for example) project-specific indentation and frequently used macros, e.g. Debian patch signature a vimrc in every documentation directory for the docs language specific settings (spellcheck!) This would not only help me myself, but also enable me/us to share project and task specific settings among many users in each of the described situations and ahve the configuration with the context where we want it applied. This seems very sensible to me and not too difficult to have/implement. Does anybody know of a vim plugin or have a config that honors vimrc files found within working directories?
Central configuration If it's okay to configure the local exceptions centrally, you can put such autocmds into your ~/.vimrc: :autocmd BufRead,BufNewFile /path/to/dir/* setlocal ts=4 sw=4 On the other hand, if you want the specific configuration stored with the project (and don't want to embed this in all files via modelines), you have the following two options: Local config with built-in functionality If you always start Vim from the project root directory, the built-in :set exrc enables the reading of a .vimrc file from the current directory. You can place the :set ts=4 sw=4 commands in there. Local config through plugin Otherwise, you need the help of a plugin; there are several on vim.org; I can recommend the localrc plugin, which even allows local filetype-specific configuration. Note that reading configuration from the file system has security implications; you may want to :set secure.
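For the concrete cases listed in the question (per-project indentation, language-specific spellcheck for documentation), the central-configuration approach sketched above could be extended like this; the paths are illustrative:

```vim
" In ~/.vimrc: directory-specific settings (example paths, adjust to taste)
autocmd BufRead,BufNewFile ~/projects/foo/* setlocal ts=4 sw=4 expandtab
autocmd BufRead,BufNewFile ~/docs/de/*      setlocal spell spelllang=de
```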
Customised vimrc for subfolders and projects
1,295,448,882,000
Long story short: I've used only distributions with an "imperative configuration management/packaging" approach, so far. And,... I'm annoyed by hard-to-trace breakages/issues with imperative configuration management (when experimenting). I've found NixOS, which advertises: NixOS has a completely declarative approach to configuration management: you write a specification of the desired configuration of your system in NixOS’s modular language, and NixOS takes care of making it happen. I'm considering using NixOS as my main desktop operating system, and storing its configuration in a Git repository. So, is the NixOS configuration gittable? Can I "define" my main operating system configuration by a git repository (probably with some "apply" commands)?
The NixOS configuration consists of two files (although you can break it up into more files): configuration.nix and hardware-configuration.nix. Both files are stored in /etc/nixos and they are text files. Hence, you can certainly put them in a GIT repo.
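Putting those two files under git is then ordinary version control. A sketch, done in a scratch directory here so it runs anywhere; on a real NixOS system you would run this inside /etc/nixos as root, and apply committed changes with nixos-rebuild switch:

```shell
# Track the NixOS configuration in git (scratch-directory demo; the
# file contents here are placeholders, not a working configuration).
mkdir -p /tmp/nixos-demo
cd /tmp/nixos-demo
printf '{ ... }: { }\n' > configuration.nix
printf '{ ... }: { }\n' > hardware-configuration.nix
git init -q
git add configuration.nix hardware-configuration.nix
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "initial NixOS configuration"
git log --oneline
```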
Can I manage my NixOS configuration in version control like git?
1,295,448,882,000
In order to troubleshoot an issue I'm looking in my kernel configuration settings for: CONFIG_SECCOMP, CONFIG_HAVE_ARCH_SECCOMP_FILTER and CONFIG_SECCOMP_FILTER. The first one is present in the kernel's config file as: CONFIG_SECCOMP=y but the other two are simply not present. This leaves me wondering how to interpret that.. Should settings missing in a kernel's config be interpreted as <setting>=n or are defaults used?
For boolean or tristate yes/no/module settings, missing and n are equivalent. Boolean settings correspond to a C preprocessor macro which is either defined or not. Source files check whether the macro is defined with #ifdef. If the setting is n, the macro is not defined, which is equivalent to the default state. Yes/no/module tristate settings are expanded in makefiles. Options set to y cause a source file to be compiled and the resulting object file to be linked into the main kernel image. Options set to m cause a source file to be compiled and the resulting object file to be linked as a separate module. Options set to n don't cause anything to be built. Some configuration options don't have a direct impact on the file, but only cause configuration interfaces to prompt you for a category of settings. If you have a .config file in the kernel source tree, you can run make oldconfig to regenerate the file with unknown options removed and options not present in the file added with their default setting. Some options are skipped from the resulting file if their category is skipped by setting the category prompt option to n.
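You can check this mechanically: a script scanning a .config only needs to treat =y (or =m) as enabled, and everything else, whether the option is commented out as "is not set" or absent entirely, as disabled. A sketch against a made-up sample file:

```shell
# Sample .config fragment (illustrative): one option enabled, one
# explicitly disabled, one missing entirely.
cat > /tmp/config.sample <<'EOF'
CONFIG_SECCOMP=y
# CONFIG_SECCOMP_FILTER is not set
EOF

check() {
    # "=y" or "=m" means built-in/module; anything else means disabled.
    if grep -q "^$1=[ym]" /tmp/config.sample; then
        echo "$1 enabled"
    else
        echo "$1 disabled (unset or =n)"
    fi
}
check CONFIG_SECCOMP                   # enabled
check CONFIG_SECCOMP_FILTER            # disabled (explicitly not set)
check CONFIG_HAVE_ARCH_SECCOMP_FILTER  # disabled (absent, same result)
```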
Should settings missing in a kernel's config be interpreted as `<setting>=n` or are `defaults` used?
1,295,448,882,000
Has anyone here found the best way to STIG a version of RHEL 6.x automatically? The other answers I have found are either out of date or do not completely STIG the machine. Even an image that has a STIG of RHEL will help.
This project sounds like what you're looking for, titled: stig-fix-el6. excerpt DISA STIG Scripts to harden a system to the RHEL 6 STIG. These scripts will harden a system to specifications that are based upon the following previous hardening provided by the following projects: DISA RHEL 6 STIG V1 R2 http://iase.disa.mil/stigs/os/unix/red_hat.html NIST 800-53 (USGCB) Content for RHEL 5 http://usgcb.nist.gov/usgcb/rhel_content.html NSA SNAC Guide for Red Hat Enterprise Linux 5 http://www.nsa.gov/ia/_files/os/redhat/NSA_RHEL_5_GUIDE_v4.2.pdf Aqueduct Project https://fedorahosted.org/aqueduct Tresys Certifiable Linux Integration Platform (CLIP) http://oss.tresys.com/projects/clip The contents of the project include the following scripts: apply.sh - master script that runs scripts in cat1-cat4 and misc toggle_ipv6.sh - toggles IPv6 support, requires reboot (default is off) toggle_nousb.sh - toggles the 'nousb' kernel flag only toggle_udf.sh - toggles 'udf' mounting of DVDs (USGCB Blacklists udf) toggle_usb.sh - toggles 'nousb' kernel flag and the mass storage kernel module config - Directory with some pre-STIGed configurations (auditd,iptables,system-auth-local,etc.) cat1 - CAT I STIG Scripts cat2 - CAT II STIG Scripts cat3 - CAT III STIG Scripts cat4 - CAT IV STIG Scripts misc - NSA SNAC, GNOME, and other miscellaneous lockdown scripts manual - Manually run (There be dragons here) backups - Backup copy of modified files to compare and restore configurations
STIG (Security Technical Implementation Guides) automation
1,295,448,882,000
I want to extract the default keybindings from my .i3config file and source them from another file. I did that like this: #~/.i3config ... #source default keybindings . ~/.path_to_other_file But this doesn't work. Restarting i3 causes an error "you have a syntax error in your config file!" I can't think why this wouldn't be possible, but . ~/path_to_other_file and source ~/path_to_other_file both don't work.
There is actually a simple reason why this does not work as you expect. i3's config file is not a shell script. So, the question is, why would you want to do this? If you are hoping to be able to run commands in your i3 config specified in the script you mention, then it's not going to work. It seems like you're hoping to break down your config file into several smaller shell scripts; this will also not work. If you want i3 to be aware of a set of keybinds in any sensible way, you should put them directly in your config file. If, on the other hand, you just want to run a shell script when starting i3, this is quite easy. All you need to do is use the well-documented exec command (I imagine it would look something like this): exec sh /path/to/script/to/be/run
Possible to source a file in .i3config
1,295,448,882,000
I have an Apache 2 server set up on my computer that I use for local testing. To be clear, it is not hosting sites on the internet. It's just for local debugging and designing. I was using Ubuntu Linux, but now I have a new computer using Linux Mint. What I'd like to do is take all the Apache sites and settings that I have on the old Ubuntu machine and reproduce them on the new Linux Mint machine. I only know how to do this manually, one site at a time, starting from scratch. Creating a file for each site in the sites-available directory and activating them with a2ensite. And then making edits to any configuration files, like adding some lines to my php.ini file to enable Xdebug, and hoping I haven't missed anything. I'm sure I'm doing this inefficiently and in a way that's prone to human error. Is it not possible to in some way copy the entirety of the Apache 2 settings and sites on my Ubuntu machine and put them on my Linux Mint machine in one go? Or at least, in a minimum of steps that is less than recreating each site and setting from scratch? Please note that I am more of a designer than an administrator, so please assume my knowledge of Linux commands and server settings is minimal.
Your server settings, like any system-wide program settings, are to be found under /etc. The exact location depends on the distribution, but /etc/apache or /etc/apache2 are good bets. Both Ubuntu and Mint use /etc/apache2. If you have the same plug-ins installed and versions of Apache that aren't too far apart, you can simply copy the whole /etc/apache2 directory to the new machine. You'll need to copy your document root(s) as well, of course. If you're running some web applications, you'll need to migrate them as well. This may or may not be as straightforward as copying some files, it depends heavily on the application. In particular, if there's a database involved, you'll need to install the same database software (typically MySQL), dump the database on the old machine, and restore the dump on the new machine.
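As a sketch of the copy step (scratch directories here so it is runnable anywhere; on the real machines you would transfer /etc/apache2 and your document roots between hosts, e.g. with rsync over ssh, and dump/restore any MySQL databases with mysqldump):

```shell
# Demonstrate copying the whole Apache config tree in one go.
# The paths below are stand-ins for /etc/apache2 on the two machines.
mkdir -p /tmp/old-etc/apache2/sites-available /tmp/new-etc
echo "<VirtualHost *:80>" > /tmp/old-etc/apache2/sites-available/mysite.conf
cp -a /tmp/old-etc/apache2 /tmp/new-etc/
cat /tmp/new-etc/apache2/sites-available/mysite.conf
```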
Can I duplicate my Apache server settings on a new Linux install?
1,295,448,882,000
I love lynx. I love browsing without tabs. Call me a luddite, but I only use a modern browser if I have to. Which is about twice a day, for a few minutes at most. There's one thing I really really hate about lynx, though. It's not immediately apparent how to customize lynx's behavior when it comes to filetypes. If I encounter a .pdf file, it downloads it, then dutifully asks me if I'd like to save it to disk. Thanks, lynx. It's like you read my mind or something. If I encounter a .torrent file, lynx downloads it, then opens it with transmission-gtk. Uh... no, lynx. I would have either preferred transmission-cli or just having the torrent file. If I try to open a magnet URL, lynx doesn't know what to do with it. (Psst! transmission-cli, lynx!) But the worst is when I download .ogg, because lynx assumes that I want to play it with VLC in the TTY using caca to render the video as ASCII. Bad lynx! How do I whip lynx into shape? How do I customize this behavior? Editing /etc/lynx/lynx.cfg does not seem to do the trick.
Lynx does the standard thing (unlike Firefox and Chrome) and uses the system's mailcap database. The system mailcap is in /etc/mailcap, and the per-user file is ~/.mailcap. Add entries like:

application/x-bittorrent; transmission-cli '%s'; needsterminal
application/pdf; pdftotext '%s'; copiousoutput
application/ogg; vlc '%s'; test=test -n "$DISPLAY"
Customize Lynx's filetype behavior
1,295,448,882,000
For some reason, I want to use a Logitech (aka Logicool) Marble Mouse (aka Trackman Marble) trackball upside down. Is there a way to reverse the left/right and up/down rolling respectively software-wise, without modifying the hardware? I tried this by writing a configuration file in /etc/X11/xorg.conf.d/ such as:

Section "InputClass"
    Identifier "Marble Mouse"
    MatchProduct "Logitech USB Trackball"
    Option "EmulateWheel" "true"
    Option "EmulateWheelButton" "8"
    Option "XAxisMapping" "6 7"
    Option "Emulate3Buttons" "true"
    Option "ButtonMapping" "1 2 3 5 4 7 6 2 2"
EndSection

The crucial part is that I switched buttons 4 and 5, and 6 and 7, so that instead of:

Option "ButtonMapping" "1 2 3 4 5 6 7 2 2"

I have:

Option "ButtonMapping" "1 2 3 5 4 7 6 2 2"

But this is not working, and this may only be valid for scroll wheel emulation mode. How can I reverse the rolling?
In gaming we just refer to this as "invert mouse". From the Xorg mouse(4) man page:

Option "InvX" "boolean"
    Invert the X axis. Default: off.
Option "InvY" "boolean"
    Invert the Y axis. Default: off.
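Applied to the question's setup, a sketch of how that could look. Note the assumption: the InvX/InvY spelling belongs to the classic mouse(4) driver, while the evdev driver spells them "InvertX"/"InvertY", so check which driver actually binds your trackball before copying this:

```
Section "InputClass"
    Identifier "Invert Marble Mouse"
    MatchProduct "Logitech USB Trackball"
    Option "InvX" "true"    # "InvertX" if the evdev driver is in use
    Option "InvY" "true"    # "InvertY" if the evdev driver is in use
EndSection
```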
Reversing the direction of a trackball
1,295,448,882,000
The file /proc/config.gz isn't updated when I rebuild the kernel with a changed configuration (from make menuconfig). For instance, I have rebuilt the kernel with BLK_DEV_IO_TRACE, which works fine, but config.gz still shows # CONFIG_BLK_DEV_IO_TRACE is not set. Isn't it the .config file in the root directory of the kernel source that gets included in the kernel binary when we enable CONFIG_IKCONFIG? And BTW, config.gz shows CONFIG_IKCONFIG=y while in fact it is CONFIG_IKCONFIG=m. I'm using the Android NDK standalone GCC toolchain to build this kernel (3.18 arm64).

NOTE: Just to clarify, as it's causing confusion, I'm sure my new kernel is running with the new configuration. I've enabled a long list of changes to my default configuration which are working now; a number of userspace programs depend on these configurations:

CONFIG_IKCONFIG=m
CONFIG_IKCONFIG_PROC=y
CONFIG_VETH=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_NFS_FS=m
CONFIG_NFS_V2=m
CONFIG_NFS_V3=m
CONFIG_NFS_V4=m
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_NFSD=m
CONFIG_NFSD_V3=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_V4_SECURITY_LABEL=y
CONFIG_KEYS_DEBUG_PROC_KEYS=y
CONFIG_OVERLAY_FS=m
CONFIG_UTS_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_CLS_CGROUP=m
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NETFILTER_NETLINK=m
CONFIG_ISO9660_FS=m
CONFIG_SQUASHFS=m
CONFIG_UDF_FS=m
CONFIG_UNIX_DIAG=m
CONFIG_PSTORE=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_DEBUG_FS=y
CONFIG_FTRACE=y
CONFIG_BLK_DEV_IO_TRACE=y

config.gz shows Linux/arm64 3.18.71 Kernel Configuration while the current kernel is Linux/arm64 3.18.140 Kernel Configuration. Also it doesn't match any of the 16 *defconfig files in arch/arm64/configs/. There are 185 differences (88 additions, 97 drops) between the actual config and config.gz. Initially I used arch/arm64/configs/franco_mido_defconfig, the one provided by the custom kernel developer.
I should have done more research prior to posting this question, but I thought I might be missing something. For reference, the problem turns out to be specific to my kernel source. The custom kernel developer applied a patch to always include an older configuration in the kernel binary. So this should be undone (considering the risks, if any):

ifeq ($(CONFIG_MACH_XIAOMI_MIDO),y)
$(obj)/config_data.gz: arch/arm64/configs/mido_defconfig FORCE
else ifeq ($(CONFIG_MACH_XIAOMI_TISSOT),y)
$(obj)/config_data.gz: arch/arm64/configs/tissot_defconfig FORCE
else
$(obj)/config_data.gz: $(KCONFIG_CONFIG) FORCE
endif
Why does “/proc/config.gz” show wrong configuration?
1,295,448,882,000
I want Guake to open as a floating window by default under i3. I created an entry in ~/.i3/config that reads as follows:

for_window [class="guake"] floating enable

My xprop output for the window is:

$ xprop
GDK_TIMESTAMP_PROP(GDK_TIMESTAMP_PROP) = 0x61
WM_STATE(WM_STATE):
        window state: Normal
        icon window: 0x0
_NET_WM_DESKTOP(CARDINAL) = 4294967295
_NET_WM_STATE(ATOM) = _NET_WM_STATE_ABOVE, _NET_WM_STATE_STICKY, _NET_WM_STATE_SKIP_TASKBAR, _NET_WM_STATE_SKIP_PAGER
WM_HINTS(WM_HINTS):
        Client accepts input or input focus: True
        Initial state is Normal State.
        window id # of group leader: 0x1200001
XdndAware(ATOM) = BITMAP
_MOTIF_DRAG_RECEIVER_INFO(_MOTIF_DRAG_RECEIVER_INFO) = 0x6c, 0x0, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x10, 0x0, 0x0, 0x0
_MOTIF_WM_HINTS(_MOTIF_WM_HINTS) = 0x2, 0x0, 0x0, 0x0, 0x0
_NET_WM_SYNC_REQUEST_COUNTER(CARDINAL) = 18874401
_NET_WM_WINDOW_TYPE(ATOM) = _NET_WM_WINDOW_TYPE_NORMAL
_NET_WM_USER_TIME(CARDINAL) = 3768611
_NET_WM_USER_TIME_WINDOW(WINDOW): window id # 0x1200020
WM_CLIENT_LEADER(WINDOW): window id # 0x1200001
_NET_WM_PID(CARDINAL) = 1265
WM_LOCALE_NAME(STRING) = "en_US.UTF-8"
WM_CLIENT_MACHINE(STRING) = "class-VirtualBox"
WM_NORMAL_HINTS(WM_SIZE_HINTS):
        program specified location: 0, 0
        program specified minimum size: 1 by 1
        window gravity: North
WM_PROTOCOLS(ATOM): protocols WM_DELETE_WINDOW, WM_TAKE_FOCUS, _NET_WM_PING, _NET_WM_SYNC_REQUEST
WM_CLASS(STRING) = "guake", "Main.py"
WM_ICON_NAME(STRING) = "Guake Terminal"
_NET_WM_ICON_NAME(UTF8_STRING) = "Guake Terminal"
WM_NAME(STRING) = "Guake Terminal"
_NET_WM_NAME(UTF8_STRING) = "Guake Terminal"

How do I make it so that Guake always floats when opened in i3?
The correct command is:

for_window [instance="guake"] floating enable

Add the above to ~/.i3/config in order to allow Guake to function and float as normal. Note that it matches on instance rather than class: in the xprop output, WM_CLASS(STRING) = "guake", "Main.py" lists the instance first and the class second, so [class="guake"] never matches.
Make Guake Float in i3wm
1,295,448,882,000
I would like deadline to be the default IO scheduler for my system, and I don't want to lose that config when I reboot. What is the proper way of doing that? (I'm using Debian) Some hints: have a startup script doing echo deadline >| /sys/block/sda/queue/scheduler, use the kernel parameter elevator=deadline on GRUB startup config, use a udev rule like SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="deadline", etc. What is the "preferred" solution? EDIT: can I have a configuration which set the iosched to deadline only for drives present at bootup, but not for subsequent hot-plugged drives (like USB keys)?
Depends on the situation really. All things equal, I would prefer the GRUB approach, purely because it is simple and you get your chosen scheduler right at the start of the boot. The main issue with it is that it is a system wide setting and if you have more than one disk and want different schedulers on each, then it is no use. The udev approach is better in this case, it offers the most fine grained control. You could even set different schedulers for external disks and they would be activated when you plug them in. The one I would least prefer is the startup script. The options here are either to put it in the /etc/rc.local script, in which case the scheduler won't change until (very) late in the boot process, or to put it in with the other sysvinit scripts. The latter is the most complicated of all as it requires writing LSB tags etc to do it properly. Also, it is more difficult (for me at least) to reliably get the correct disk via /sys. Note the example you give won't work if the disk you are trying to set the scheduler for is no longer sda for whatever reason. With udev you can match the device(s) according to a range of properties.
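For reference, the two declarative approaches might look like this (a sketch; the rule file name and the device matching are assumptions, so adapt them to your disks):

```
# --- GRUB approach: edit /etc/default/grub, then run update-grub ---
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"

# --- udev approach: e.g. /etc/udev/rules.d/60-iosched.rules ---
# Matching ATTR{removable}=="0" addresses the EDIT: internal disks get
# deadline, while hot-plugged USB keys (removable=1) keep the default.
SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{removable}=="0", ATTR{queue/scheduler}="deadline"
```

After adding a udev rule, you can verify the result per disk with cat /sys/block/sda/queue/scheduler, where the active scheduler is shown in brackets.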
What is the recommended way of setting a default IO scheduler on Linux?
1,295,448,882,000
How to autologin a specified user with xdm? I know it's possible with other display managers but I wasn't able to figure out how xdm has to be configured to autologin a certain user. Is it possible? Or should I rather remove xdm and simply use an initscript with startx?
I haven't used xdm in a long while but as far as I know autologin is not supported by xdm (and, as per one of the devs, not needed).
How to autologin with XDM?
1,295,448,882,000
When there is no .zshrc file in a user's home directory and zsh is started, an interactive configuration utility is run instead of directly giving access to the shell prompt. I set up zsh to be the default shell on my Debian Wheezy systems. Therefore every newly created user gets zsh as login shell if I do not change that manually. Also there is a default .zshrc in /etc/skel, so all regular users on my system have a copy of the file in their home directory. This is not the case for system users. When I now change into a system user (for example the user for a specific network daemon) via sudo or su I run into the configuration tool, because these users have no .zshrc in their home directories. It doesn't feel right to place a .zshrc into each and every daemon's home directory, which also would be a pain to set up and maintain on a lot of systems. But I still wouldn't want to downgrade to a less comfortable bash for these users. Is there a way to disable the zsh configuration tool without having to create a .zshrc file in the user's home directory? Additionally, a way to set up a single file as the system-wide default .zshrc for all users who don't have one would be nice too.
from: http://www.zsh.org/mla/users/2007/msg00398.html

The shell then executes code in the file scripts/newuser in the shared library area (by default /usr/local/share/zsh/<VERSION>/scripts/newuser). This feature can be turned off simply by removing this script. The module can be removed entirely from the configured shell by editing the line starting "name=zsh/newuser" in the config.modules file, which is generated in the top level distribution directory during configuration: change the line to include "link=no auto=no".

Also, /etc/zsh/zshrc is sourced by every shell that has the interactive, rcs & globalrcs options set (which most interactive zsh processes do). Alternatively, add

zsh-newuser-install() { :; }

in /etc/zsh/zshenv. This has the obvious side-effect of users not being able to use the function until they undefine yours. You can refine that by adding a test of the UID:

if (( EUID < 1000 )) && (( EUID != 0 )); then
    # or whatever the highest daemon uid is on your system
    zsh-newuser-install() { :; }
fi
Disable the configuration tool in Zsh
1,295,448,882,000
My hands get tired every time I press Ctrl+b to activate tmux's prefix and then q to switch between panes (pane numbers). I want to map them to F1 and F2 respectively. I understand I need to change something in the configuration file ~/.tmux.conf but I'm not sure what, especially how to refer to function keys while mapping, and how to go beyond the prefix's definition, that is, refer to prefix+q in this case. Thanks in advance :)
Your question isn't clear, as q by default prints the numbers of the panes; it doesn't switch between them... Nevertheless, you can achieve what you are after with some simple binds: first, resetting the prefix key to Ctrl+q, and then setting F1 to move to the left pane and F2 to the right. With that knowledge, you can adapt to whatever it is you are actually asking.

# set prefix key to ctrl+q
set -g prefix C-q

bind -n F1 select-pane -L
bind -n F2 select-pane -R

Note: I have included the -n switch, which obviates the need for using the prefix first, as you indicate you are tired of this. If you do want to hit the prefix before changing panes, just remove it.
How can I change specific keybindings in Tmux?
1,401,581,406,000
I often find myself in the situation where I need to look up the syntax and logic for a configuration file of some program on my computer. While I can do man mosquitto, this will not necessarily yield the help section for the file /etc/mosquitto.conf. What I am searching for is something like man /etc/mosquitto.conf or man ./mosquitto.conf, which should open the exact help I need for a given file. Not just mosquitto, that was only an example. Is there such a mapping somewhere? Is there a program that I can use to find help about specific configuration files, instead of having to search the internet?
TL/DR: There's no centralized repository of information. The most up-to-date source of information about an application / tool / etc. is its own documentation. Otherwise there are man pages, info pages, tldr pages, HTML documentation, literature, the internet...

Man pages are just one form of software documentation. The fastest way to find out whether a man page exists is just giving the command man foo. The command apropos foo outputs a list of man pages containing info about foo. man7.org hosts The Linux man-pages project, a web repository of man pages. It contains lists of man pages by section, alphabetically and by project. Man pages are viewed on the terminal by just man foo (for example man hosts), never man /path/foo or man ./foo (for example man /etc/hosts). Distributions can also host their own man pages online.

Not everything has a man page. There's no man page for .bashrc, but some information about it is found in man bash, though not specifics on its contents. xattr has a man page, xattr.conf doesn't; man xattr.conf outputs No manual entry for xattr.conf. Xattr's man page doesn't mention the conf file either. Some info about it can be seen just by cat /etc/xattr.conf:

# Format:
#   <pattern>     <action>
#
# Actions:
#   permissions - copy when trying to preserve permissions.
#   skip        - do not copy.

Other files like .bashrc also contain documentation in the form of comments.

Similar projects to man pages are GNU Info and TLDR. While man pages contain references to other man pages, they're static, so following references requires opening another page. GNU Info has internal hyperlinks. TLDR pages are a sort of cheatsheet, a community effort to simplify the man pages that also provides practical examples. Both are used just like man, i.e. info xattr and tldr xattr.

GUI applications (GNOME and KDE, for example) don't use any of the above. Their end-user documentation is provided using HTML, and they can include viewers like GNOME's Yelp.
In the end providing documentation is entirely up to the developers. They can freely choose in which form they provide it - or not to provide it at all. There's a great many of them. Consequently the quality and availability of specific information varies a lot, and creating a single repository is simply impossible. Today the fastest and easiest way to find info is the internet.
Manpage for configuration file
1,401,581,406,000
Say I have a shell configuration file config like this: HOST=localhost PORT=8080 Now I have a template template like this: The host is <%= @HOST %> The port is <%= @PORT %> How do I substitute placeholders in template with values in config file? I can certainly do it like this: $ . config $ sed -e "s/<%= @HOST %>/$HOST/" \ > -e "s/<%= @PORT %>/$PORT/" < template The host is localhost The port is 8080 But if there are many config values this becomes too cumbersome. How would I do this in more generic way? I would like to iterate over each placeholder and substitute it with a real value.
You could do something like:

eval "cat << __end_of_template__
$(sed 's/[\$`]/\\&/g;s/<%= @\([^ ]*\) %>/${\1}/g' < template)
__end_of_template__"

That is, have sed replace all the <%= @xxx %> with ${xxx} after having escaped all the $, \ and ` characters, and let the shell do the expansion. Or if you can't guarantee that template will not contain a __end_of_template__ line:

eval "cut -c2- << x
$(sed 's/[\$`]/\\&/g;s/<%= @\([^ ]*\) %>/${\1}/g;s/^/y/' < template)
x"
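Another generic way, avoiding eval entirely, is to turn the config file itself into a sed script (a sketch; it assumes values contain no characters special to sed replacements, such as /, & or backslashes):

```shell
cd "$(mktemp -d)"

# Recreate the question's two files so the example is self-contained.
cat > config <<'EOF'
HOST=localhost
PORT=8080
EOF
cat > template <<'EOF'
The host is <%= @HOST %>
The port is <%= @PORT %>
EOF

# Rewrite each NAME=VALUE line of config as a sed substitution command
# "s/<%= @NAME %>/VALUE/g", then run the generated script on the template.
sed 's|\([^=]*\)=\(.*\)|s/<%= @\1 %>/\2/g|' config > subst.sed
sed -f subst.sed template
# The host is localhost
# The port is 8080
```

This scales to any number of config values without listing them by hand, at the cost of the escaping caveat noted above.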
Substitute placeholders in template
1,401,581,406,000
root@debian:/home/debian8# cat /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root    ALL=(ALL:ALL) ALL

# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:
includedir /etc/sudoers.d

The only change on line 27 is one removed # character; the original format is as below.

#includedir /etc/sudoers.d

I just removed the # character.

root@debian:/home/debian8# ls /etc/sudoers.d
myRules  README
root@debian:/home/debian8# cat /etc/sudoers.d/myRules
debian8 ALL=(ALL:ALL) NOPASSWD:ALL

How do I fix it?
#includedir /etc/sudoers.d is not a comment, #includedir is a directive. The hash sign is part of it. Just re-add it.
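For reference, the end of the stock file should read exactly like this, hash included:

```
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
```

After restoring it (always edit via visudo), you can check the whole configuration with visudo -c, which parses /etc/sudoers and everything under /etc/sudoers.d.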
sudo: parse error in /etc/sudoers near line 27
1,401,581,406,000
On Arch Linux with GNOME 3.24.2 and Firefox 53.0.3 (soon to be upgraded to 54 when it comes into the repository) I have found in my about:support section that it says: Multiprocess Windows 0/1 (Disabled by add-ons) So I was wondering if there is any way that I can check which add-on(s) is doing this?
If Multiprocess Windows is listed as "Disabled by add-ons", open about:addons and disable all your add-ons, then enable them one by one to find which add-on is disabling e10s. Although Firefox will not allow incompatible add-ons to load at all, an add-on can be incompatible with e10s specifically; in that case its developers have chosen to disable e10s while their add-on is enabled.
How to check which add-on(s) in Firefox is disabling e10s on Arch?