| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,361,651,451,000 |
I'm running OpenSuse with dwm as a window manager. Since I made the switch from KDE, I haven't found a way to take a screenshot or even capture part of my screen.
Is there a way I can do this in a command-line environment?
|
The euphoniously named scrot takes screenshots from the command line...
It has a couple of simple options, including a time delay and image quality.
If you want to take a shot in the console, and you are running a framebuffer, you can use fbgrab.
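For example (illustrative file names; scrot's -d, -q and -s options are described in its man page):

```shell
# Capture the whole screen after a 5-second delay, at image quality 85
scrot -d 5 -q 85 screen.png
# Interactively select a window or rectangular area with the mouse
scrot -s area.png
```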
| Capturing area of the screen without a Desktop Environment? |
1,361,651,451,000 |
I would like to display Chinese characters in dwm's status bar. More specifically I would like the symbols to represent the different tags in dwm. Using an online converter, I found that the unicode representation for the symbols I want is:
憤怒
unicode: 憤怒
Putting the Unicode characters directly into my config.h doesn't work; they don't even show up in vim. My locale is set to ISO-8859-1 and I'm using the Liberation Mono font for dwm.
What can I do to get those symbols up there?
EDIT
Following Mat's instructions and patching dwm, the patch command hangs. Running strace:
[max@prometheus dwm-6.0]$ strace patch -Np1 ../dwm-pango/dwm-pango/dwm-6.0-pango.patch
execve("/usr/bin/patch", ["patch", "-Np1", "../dwm-pango/dwm-pango/dwm-6.0-p"...], [/* 30 vars */]) = 0
brk(0) = 0x1d52000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc4713000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=92801, ...}) = 0
mmap(NULL, 92801, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9dc46fc000
close(3) = 0
open("/lib/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\25\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1983446, ...}) = 0
mmap(NULL, 3804112, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9dc4152000
mprotect(0x7f9dc42e9000, 2097152, PROT_NONE) = 0
mmap(0x7f9dc44e9000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x197000) = 0x7f9dc44e9000
mmap(0x7f9dc44ef000, 15312, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f9dc44ef000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc46fb000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc46fa000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc46f9000
arch_prctl(ARCH_SET_FS, 0x7f9dc46fa700) = 0
mprotect(0x7f9dc44e9000, 16384, PROT_READ) = 0
mprotect(0x61a000, 4096, PROT_READ) = 0
mprotect(0x7f9dc4714000, 4096, PROT_READ) = 0
munmap(0x7f9dc46fc000, 92801) = 0
brk(0) = 0x1d52000
brk(0x1d75000) = 0x1d75000
getpid() = 10412
lstat("/tmp/po8GP02f", 0x7fffdc075210) = -1 ENOENT (No such file or directory)
lstat("/tmp/pikSWXEs", 0x7fffdc075210) = -1 ENOENT (No such file or directory)
lstat("/tmp/prB1wVgF", 0x7fffdc075210) = -1 ENOENT (No such file or directory)
lstat("/tmp/pp27ATSR", 0x7fffdc075210) = -1 ENOENT (No such file or directory)
rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9dc4186cb0}, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGHUP, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGHUP, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGPIPE, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0
rt_sigaction(SIGTERM, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGTERM, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0
rt_sigaction(SIGXCPU, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGXCPU, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0
rt_sigaction(SIGXFSZ, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGXFSZ, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0
rt_sigaction(SIGINT, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGINT, {0x40cd90, [], SA_RESTORER, 0x7f9dc4186cb0}, NULL, 8) = 0
fstat(0, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 2), ...}) = 0
open("/tmp/pp27ATSR", O_RDWR|O_CREAT|O_EXCL|O_TRUNC, 0600) = 3
fcntl(3, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE)
fstat(3, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc4712000
lseek(3, 0, SEEK_CUR) = 0
fstat(0, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 2), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9dc4711000
read(0,
Could I be missing something?
|
I don't think you'll get Unicode support from dwm without patching it (and adding additional dependencies, notably pango).
If that's an option for you, the pango patch from the official list of patches seems to work; just run the patch command in the dwm folder, passing the patch file on standard input:
$ tar xzf dwm-6.0.tar.gz
$ cd dwm-6.0
$ patch -Np1 < ../dwm-6.0-pango.patch
After that, you can edit your config file and put Unicode literals (\u followed by the Unicode codepoint in hex) in the tags strings, for example:
/* tagging */
static const char *tags[] = { "\u00c0",
"\u61a4\u6012",
"\u10e5\u10d0\u10e0",
"4", "5", "6", "7", "8", "9" };
First item is À, second are your two symbols, third is some Georgian script ('cos I think it looks cool).
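If you need to look up a character's codepoint yourself, one way to do it from the shell (assuming the input is UTF-8 and iconv supports UTF-32BE, which GNU iconv does) is:

```shell
# Print the Unicode codepoint of 憤 as four big-endian hex bytes (UTF-32)
printf '憤' | iconv -f UTF-8 -t UTF-32BE | od -An -tx1
# → 00 00 61 a4, i.e. U+61A4
```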
With a large font, this results in:
| Unicode characters in uxterm and dwm statusbar |
1,361,651,451,000 |
I'm running dwm with dmenu under Arch Linux. While dmenu is working, it doesn't start some programs, e.g., emacs, although it shows the command with auto-completion. When I start them in the terminal, it works fine.
What can I do? Is there an error log file for dmenu?
|
dmenu doesn't have built-in logging, but it is a very simple program and it is not difficult to have it log its output to a file.
First, determine where pacman has placed the dmenu files with pacman -Ql dmenu. You should get:
dmenu /usr/
dmenu /usr/bin/
dmenu /usr/bin/dmenu
dmenu /usr/bin/dmenu_path
dmenu /usr/bin/dmenu_run
...
You can then open /usr/bin/dmenu_run, which is just a shell script, and add a temporary hack to write all output to a file, like so:
dmenu_path | dmenu "$@" | ${SHELL:-"/bin/sh"} &>/home/michael/dmenu_log
Selecting emacs from dmenu will now fail, but you will get the output in your log file:
]P0000000]P85e5e5e]P18a2f58]P9cf4f88]P2287373]PA53a6a6]P3914e89]PBbf85cc]P4395573]PC4779b3]P55e468c]PD7f62b3]P62b7694]PE47959e]P7899ca1]PFc0c0c0[H[JVim: Warning: Output is not to a terminal
Vim: Warning: Input is not from a terminal
...and a lot more
which makes the error pretty clear when you remove all the escapes. To have Emacs work, you'd have to assign a terminal as well from dmenu, something along the lines of: urxvt -e emacs yourfile.txt.
There is a long dmenu hacking thread on the Arch boards which has all manner of interesting hacks for dmenu, it is well worth checking out.
1. I don't have Emacs installed, but you'll get the same error...
| Dmenu does not start some programs (e.g., emacs) -- is there a log file? |
1,361,651,451,000 |
In KDE, there was a system setting where you could specifically set the monitors to never go black.
Now I've switched to dwm and (it might not be related) my screens dim after about 10 minutes. How do I change this setting directly from the command line? I'm guessing this has to do with X?
|
You need to change the DPMS settings, which are controllable with xset. You can disable all DPMS with:
$ xset -dpms
And re-enable them with:
$ xset +dpms
You can also control how long before the monitor switches into each state (standby, suspend, and off; they're explained in this Wikipedia article) by passing 3 integers for the number of seconds before each state should be activated:
# Switch to standby after a minute, suspend after two minutes, and off after three minutes
$ xset dpms 60 120 180
Setting a time of 0 disables the state, so -dpms is equivalent to:
$ xset dpms 0 0 0
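Note that the X server's built-in screen-saver blanking is separate from DPMS and often defaults to about 10 minutes; xset controls it too (a sketch, to be run inside the X session):

```shell
# Show the current screen-saver and DPMS settings
xset q
# Disable X screen-saver blanking entirely
xset s off
```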
| How to prevent my screens from dimming (going black) from the command line |
1,361,651,451,000 |
I'm using the dynamic window manager of suckless (dwm). I noticed that firefox is able to send nice notifications when a download has finished. See the two figures
When I'm on a different tag, I get this kind of notification (inverted 1 tag) upon a finished download.
I'd like to use this kind of notification for my other uxterms. E.g. in case a long job has finished it should light up like above. At best, this would also work inside the GNU screen sessions that I'm using.
I'm not sure if this is a Xorg or a dwm feature. Any ideas?
EDIT: @scai's answer is very much to the point, but lacks full compatibility with GNU screen. In case anybody can still improve this, it'd be very much appreciated.
|
This is probably the urgency hint which can be set on windows. This hint is recognized by most window managers.
Most terminals can be configured to set the urgency hint when receiving a bell.
(u)xterm for example has the bellIsUrgent option and (u)rxvt has urgentOnBell.
To ring the bell in a terminal just run tput bel or echo "\a" (depending on the shell you might need to pass option -e to echo).
When using screen you have to turn off the visual bell and turn on the audible bell via vbell off in your screenrc or by pressing ctrl+a ctrl+g.
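Those terminal options can be set via X resources; a sketch for ~/.Xresources (load it with xrdb -merge ~/.Xresources):

```
! Make xterm/uxterm set the urgency hint on bell
XTerm*bellIsUrgent: true
! The same for (u)rxvt
URxvt.urgentOnBell: true
```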
| What kind of notification is Firefox sending when a download has finished? |
1,361,651,451,000 |
I'm trying to create an array of simple hotkeys on my desktop, running OpenSuse with dwm. Things like:
Ctrl+E /opt/eclipse/eclipse
Can this be configured from within dwm? If not is there an external application which I can run (in the background) to listen for these hotkeys.
Also, is it possible for the hotkeys to only work when I am not hovering over a window (so that the window doesn't grab my input by accident)?
|
You can configure hotkeys in your config.h. To use your eclipse example (with a rule to have it open in a specific tag1 when you hit Ctrl+e):
static const Rule rules[] = {
{ "Eclipse", NULL, NULL, 1 << 0, False, -1 },
...
/* commands */
static const char *eclipsecmd[] = { "/opt/eclipse/eclipse", NULL };
...
static Key keys[] = {
{ ControlMask, XK_e, spawn, {.v = eclipsecmd } },
The window will not grab the input, irrespective of where the focus is.
1. Ignore the rule if you don't want to assign eclipse to the first tag...
| Custom hotkeys in dwm |
1,361,651,451,000 |
I'm writing a bash script to print system stats to my dwm status bar using xsetroot. Everything works as expected. What I'm currently missing is an easy way that just uses standard *nix tools to give me the current load for every core on my system (I have 4 cores.). I can't figure out how to do this e.g. using top. All other posts I found on this site so far just deal with average load. Has anybody done this before?
The main reason that I want it for every single core is to have a cheap and rough tool to check if a program is running some code I wrote in parallel (e.g. a for each loop).
|
Calculating the average per core usage from /proc/stat
The best solution I have come up with so far uses bc to account for floating point arithmetic:
# Calculate average cpu usage per core.
# user nice system idle iowait irq softirq steal guest guest_nice
# cpu0 30404 2382 6277 554768 6061 0 19 0 0 0
A=($(sed -n '2,5p' /proc/stat))
# user + nice + system + idle
B0=$((${A[1]} + ${A[2]} + ${A[3]} + ${A[4]}))
B1=$((${A[12]} + ${A[13]} + ${A[14]} + ${A[15]}))
B2=$((${A[23]} + ${A[24]} + ${A[25]} + ${A[26]}))
B3=$((${A[34]} + ${A[35]} + ${A[36]} + ${A[37]}))
sleep 2
# user + nice + system + idle
C=($(sed -n '2,5p' /proc/stat))
D0=$((${C[1]} + ${C[2]} + ${C[3]} + ${C[4]}))
D1=$((${C[12]} + ${C[13]} + ${C[14]} + ${C[15]}))
D2=$((${C[23]} + ${C[24]} + ${C[25]} + ${C[26]}))
D3=$((${C[34]} + ${C[35]} + ${C[36]} + ${C[37]}))
# cpu usage per core
E0=$(echo "scale=1; (100 * ($B0 - $D0 - ${A[4]} + ${C[4]}) / ($B0 - $D0))" | bc)
E1=$(echo "scale=1; (100 * ($B1 - $D1 - ${A[15]} + ${C[15]}) / ($B1 - $D1))" | bc)
E2=$(echo "scale=1; (100 * ($B2 - $D2 - ${A[26]} + ${C[26]}) / ($B2 - $D2))" | bc)
E3=$(echo "scale=1; (100 * ($B3 - $D3 - ${A[37]} + ${C[37]}) / ($B3 - $D3))" | bc)
echo $E0
echo $E1
echo $E2
echo $E3
The average cpu usage per core can be computed directly from /proc/stat (credit to @mikeserv for the hint to use /proc/stat):
# Here we make use of bash direct array assignment
A0=($(sed '2q;d' /proc/stat))
A1=($(sed '3q;d' /proc/stat))
A2=($(sed '4q;d' /proc/stat))
A3=($(sed '5q;d' /proc/stat))
# user + nice + system + idle
B0=$((${A0[1]} + ${A0[2]} + ${A0[3]} + ${A0[4]}))
B1=$((${A1[1]} + ${A1[2]} + ${A1[3]} + ${A1[4]}))
B2=$((${A2[1]} + ${A2[2]} + ${A2[3]} + ${A2[4]}))
B3=$((${A3[1]} + ${A3[2]} + ${A3[3]} + ${A3[4]}))
sleep 0.2
C0=($(sed '2q;d' /proc/stat))
C1=($(sed '3q;d' /proc/stat))
C2=($(sed '4q;d' /proc/stat))
C3=($(sed '5q;d' /proc/stat))
# user + nice + system + idle
D0=$((${C0[1]} + ${C0[2]} + ${C0[3]} + ${C0[4]}))
D1=$((${C1[1]} + ${C1[2]} + ${C1[3]} + ${C1[4]}))
D2=$((${C2[1]} + ${C2[2]} + ${C2[3]} + ${C2[4]}))
D3=$((${C3[1]} + ${C3[2]} + ${C3[3]} + ${C3[4]}))
# cpu usage per core
E0=$(((100 * (B0 - D0 - ${A0[4]} + ${C0[4]})) / (B0 - D0)))
E1=$(((100 * (B1 - D1 - ${A1[4]} + ${C1[4]})) / (B1 - D1)))
E2=$(((100 * (B2 - D2 - ${A2[4]} + ${C2[4]})) / (B2 - D2)))
E3=$(((100 * (B3 - D3 - ${A3[4]} + ${C3[4]})) / (B3 - D3)))
echo $E0
echo $E1
echo $E2
echo $E3
or even shorter by making extensive use of bash direct array assignment:
# Here we make use of bash direct array assignment by assigning lines
# 2 to 5 to one array
A=($(sed -n '2,5p' /proc/stat))
# user + nice + system + idle
B0=$((${A[1]} + ${A[2]} + ${A[3]} + ${A[4]}))
B1=$((${A[12]} + ${A[13]} + ${A[14]} + ${A[15]}))
B2=$((${A[23]} + ${A[24]} + ${A[25]} + ${A[26]}))
B3=$((${A[34]} + ${A[35]} + ${A[36]} + ${A[37]}))
sleep 0.2
# user + nice + system + idle
C=($(sed -n '2,5p' /proc/stat))
D0=$((${C[1]} + ${C[2]} + ${C[3]} + ${C[4]}))
D1=$((${C[12]} + ${C[13]} + ${C[14]} + ${C[15]}))
D2=$((${C[23]} + ${C[24]} + ${C[25]} + ${C[26]}))
D3=$((${C[34]} + ${C[35]} + ${C[36]} + ${C[37]}))
# cpu usage per core
E0=$((100 * (B0 - D0 - ${A[4]} + ${C[4]}) / (B0 - D0)))
E1=$((100 * (B1 - D1 - ${A[15]} + ${C[15]}) / (B1 - D1)))
E2=$((100 * (B2 - D2 - ${A[26]} + ${C[26]}) / (B2 - D2)))
E3=$((100 * (B3 - D3 - ${A[37]} + ${C[37]}) / (B3 - D3)))
echo $E0
echo $E1
echo $E2
echo $E3
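A sketch that generalizes the same arithmetic to any number of cores, instead of hardcoding four sets of variables (same total/idle delta math as above):

```shell
#!/bin/bash
# Read all cpuN lines, wait, read again, then compute busy% per core:
# busy = (total delta - idle delta) / total delta,
# where total = user + nice + system + idle
mapfile -t S1 < <(grep '^cpu[0-9]' /proc/stat)
sleep 0.2
mapfile -t S2 < <(grep '^cpu[0-9]' /proc/stat)
for i in "${!S1[@]}"; do
    a=(${S1[i]})
    c=(${S2[i]})
    t1=$(( a[1] + a[2] + a[3] + a[4] ))
    t2=$(( c[1] + c[2] + c[3] + c[4] ))
    (( t2 > t1 )) || continue   # guard against a zero-tick interval
    echo "${a[0]}: $(( 100 * ( (t2 - t1) - (c[4] - a[4]) ) / (t2 - t1) ))%"
done
```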
A top based solution
This can also be achieved without installing an additional tool, using only top (I used this in a later post). By default, top only shows the average cpu load when it starts, but it shows all cpus when you press 1. To use top's per-cpu output in batch mode, we need to make this the default behaviour when top starts. This can be done with a ~/.toprc file. Fortunately, that file can be created automatically: start top, press 1, and then press W, which writes ~/.toprc to your home folder. When you now run top -bn 1 | grep -F '%Cpu', you will see that top outputs all of your cores. We now have everything we need: all the required information is in column 3 of top's output.
There is only one problem: when the cpu usage for a core reaches 100%, the column with the current load moves from column 3 to column 2. Hence, with awk '{print $3}' you will then see us as the output for column 3. If you're fine with that, leave it. If not, you could have awk print column 2 as well; it will just be :. A solution that avoids all those pitfalls is:
top -bn 2 | grep -F '%Cpu' | tail -n 4 | gawk '{print $2 $3}' | tr -s '\n\:\,[:alpha:]' ' '
It translates all newlines (\n), colons, commas, and letters ([:alpha:]) in the output to spaces, and squeezes (-s) the result down to single spaces.
| How to get cpu usage for every core with a bash script using standard *nix tools |
1,361,651,451,000 |
I would like to use the inconsolata font in dwm's statusbar. Right now my config.h is set up like so using terminus:
static const char font[] = "-*-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*";
I tried changing that to:
static const char font[] = "-*-inconsolata-medium-r-normal-*-17-*-*-*-*-*-*-*";
But it didn't work. I currently have inconsolata working in urxvt by setting the following line in .Xdefaults:
URxvt.font: xft:inconsolata:size=10
So, I believe the best bet would be to just patch Xft support into dwm, but I can't seem to find a patch anywhere and I'm not sure how to do it myself.
Any help would be greatly appreciated. I am currently using dwm 5.8.2.
(At the time of this writing I didn't have enough points to create new tags so I just went with X11 and fonts, please feel free to modify).
|
There is a (reasonably old) thread on the suckless mailing list about this issue that includes a patch called pango.
There is slightly more recent version in the AUR for 5.8.2:
https://aur.archlinux.org/packages.php?ID=33193
| How do I add Xft support to dwm? |
1,361,651,451,000 |
I'm using Pertag patch
As a Gentoo user I apply dwm patches using Portage. Basically, there's a directory where the user can put a patch for the source code, and Portage will apply it during the compilation process (if the patching procedure goes wrong, the whole compilation will fail).
So basically I used just these commands:
cp ~/pertag.patch /etc/portage/patches/x11-wm/dwm
emerge -av dwm
Anyway, the pertag patch seems to work fine: I can set different layouts for different tags, etc. But when I restart dwm, all these changes are gone and I need to set everything up again.
Is there a way (maybe different patch?) to save these changes, so when I start dwm again, tag 2 still has for example monocle layout, tag 3 floating layout, and there's no statusbar on tag 4?
|
dwm is an acronym for dynamic window manager: the central principle of dwm is that the tags are supposed to be dynamic, not fixed. See why tags don't remember their layout. The pertag patch breaks this paradigm.
If you want to be able to have your window manager use static workspaces, you are better off using xmonad or awesome (both inspired by dwm).
However, if you really are intent on patching out dwm's core and defining feature, there are some patches floating about that will do this, like Jokerboy's remember tags patch, which should be used in conjunction with his pertag patch.
| Dwm - pertag patch - save state between restarts |
1,361,651,451,000 |
I'm currently running Fedora 19 on my PC and I want to remove GNOME and then use dwm. Is it enough when I install dwm and remove GNOME completely?
|
You don't have to remove GNOME to use dwm, you can just install dwm and use it instead.
Removing GNOME might simplify circumventing xdm/gdm (dm = display manager; these are the things that control the graphical login) -- but it also might not. If you install multiple DEs, they may configure the dm to use a chooser; however, a stand-alone window manager such as dwm won't be included.
Meaning, you have to do a bit of manual work to run dwm anyway, so I recommend you just leave GNOME on disk. To use dwm you will want to create a ~/.Xclients:
#!/bin/sh
dwm
Make that executable: chmod +x ~/.Xclients (I'm not sure if that is really necessary). If either that or ~/.xinitrc already exists, edit that instead and comment out whatever is there (i.e. add # to the beginning of the line), and put dwm at the bottom.
At this point, you should be able to try dwm by logging out and switching to an unused VT (e.g. via ctrl-alt-F3). Log in on the console and type startx.
You can then try rebooting to see if xdm will use your configuration. If not, you need to disable the xdm or gdm services. I don't have those installed, so I am not sure what systemd calls them -- systemctl list-units | grep dm should provide a clue. Then systemctl disable [whatever]. You'll need root or sudo to use the systemctl commands. Then reboot. You will probably end up at a console prompt, just log in and type startx.
| Remove GNOME and install dwm |
1,361,651,451,000 |
I've been using KDE on Opensuse for some time now and I feel like improving performance and using the most of my dual screens.
I discovered tiling window managers and thought it would be a cool thing to try out. However, I'm stuck on a couple of issues.
Will all my applications still be compatible? LibreOffice? Google Chrome? The GIMP? Kate? VLC? VirtualBox?
Will it fully support my dual screens?
Is it difficult to change environments from the one OpenSuse shipped with? Do I need to change anything internally?
How easy is it to switch back to KDE and keep all my previous settings?
Which tiling window manager should I be looking for?
I realize that the last bullet point might be seen as off-topic (recommendation), but I will remove it if it's an issue.
|
I have lots of experience using xmonad and I think you'll be fine if you give it a whirl. Regarding your specific questions:
Almost everything will work just as it normally does. Regarding the specific list you gave, the only one that needs some TLC is Chrome: full screen is a bit flaky and you'll need to futz with your xmonad.hs config to get it to work properly, but it's totally doable; a little googling will turn up the proper changes to your config.
You can find xmonad.hs files for dual screens here.
Here's a link that gives you the details on installing xmonad using cabal.
Xmonad is my favorite (tiling) window manager. I don't have much experience with the others, but I hear Ratpoison is pretty good.
| Easily switching over to a dynamic window manager from KDE |
1,361,651,451,000 |
I'm trying to add the transparency patch to dwm. I downloaded the .diff file and in my dwm directory ran this:
max@linux-vwzy:~/misc/dwm/dwm-5.9> patch < dwm-transparency.diff
patching file config.def.h
patching file dwm.c
Hunk #1 FAILED at 58.
Hunk #5 succeeded at 306 (offset 1 line).
Hunk #6 succeeded at 847 (offset 27 lines).
Hunk #7 succeeded at 882 (offset 27 lines).
Hunk #8 FAILED at 1125.
Hunk #9 succeeded at 1558 with fuzz 1 (offset 2 lines).
2 out of 9 hunks FAILED -- saving rejects to file dwm.c.rej
I've patched dwm before to add a couple other patches. Is it possible that they are conflicting? Or is this another error?
|
The patch is failing because the other patches that you have previously applied have shifted the code around sufficiently to defeat patch's attempts to apply the change, even with an offset (as can be seen in those hunks that did succeed).
If you open dwm.c.rej you will see the failed hunks, then it is just a matter of hand patching them in to dwm.c.
For each failed hunk, search in dwm.c for the original code (the lines that begin with a - in dwm.c.rej) and replace them with the patched code (those lines beginning with a +). If dwm recompiles without error, you have successfully patched in transparency.
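The hunks in dwm.c.rej use the unified diff format; a made-up illustrative example (not the actual transparency hunks):

```diff
--- dwm.c
+++ dwm.c
@@ -58,2 +58,3 @@
 /* unchanged context line, to help locate the spot */
-old line, to be found and replaced in dwm.c
+its replacement
+a newly added line
```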
| Patching a file (in this case dwm) and failed hunks |
1,361,651,451,000 |
I'm using dwm with the dualstatus patch. This adds a status bar on the bottom of the screen, in addition to the standard bar at the top. The text in each bar is set in ~/.xinitrc (for example) like this:
xsetroot -name "top text;bottom text"
Is there a way in bash to set the top text and bottom text at different intervals? For example, I have a script topbar that displays system information, e.g. output from the uptime command, and a script bottombar that displays information like weather, battery state, etc.
The goal is to have the top bar update every second, while the bottom bar only updates every minute because its information comes from more expensive processes (e.g. querying my music player, checking the battery state, etc.) Right now my ~/.xinitrc looks like this:
while true; do
bottomdisp=$(bottombar)
for s in {1..60}
do
xsetroot -name "$(topbar);$bottomdisp";
sleep 1;
done
done &
xbindkeys
( ( sleep 5 && /usr/bin/xscreensaver -no-splash -display :0.0 ) & )
exec rundwm
This updates every second, though. Is there a simpler way to do this? The ideas I could think of were
Maybe a way to tell xsetroot to preserve whatever's in the bottom bar? As a last resort, I may tweak some of the code in the dualstatus patch to allow it to preserve the current state of the top/bottom bars if something like xsetroot -name ';bottom text' is passed, but that's not ideal because my C is rusty and I still use the above command to clear the bars at times.
Use a cronjob to update a cache of the text in the bottom bar, and run that once a minute. Even though the top/bottom bars would display every second, only the top bar would actually change every second.
Any other methods for this? Is there a simple(r) way to do this in bash that I missed?
|
I would suggest having bash keep track of your previous bottom string and only update it once a minute (when seconds of the current time modulo 60 is equal to 0 in this code).
while true; do
(( 10#$(date +%s) % 60 )) || bottomdisp=$(date)
xsetroot -name "$(topbar);$bottomdisp";
sleep 1;
done &
This syntax makes it easy to modify the frequency of the secondary (or have multiple) intervals, e.g. just change 60 to 15 for updates 4 times a minute.
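An alternative sketch for the ~/.xinitrc loop that uses a simple counter instead of the wall clock, so a missed tick cannot skip the minute update (topbar and bottombar are the scripts from the question):

```shell
i=0
while true; do
    # refresh the expensive bar once every 60 iterations
    (( i == 0 )) && bottomdisp=$(bottombar)
    xsetroot -name "$(topbar);$bottomdisp"
    sleep 1
    i=$(( (i + 1) % 60 ))
done &
```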
| Is it possible to use xsetroot and dwm to set the top and bottom bars at different time intervals? |
1,361,651,451,000 |
I am trying to build dwm from source.
Grabbing the source doesn't work:
(28) $ apt-get source dwm
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: You must put some 'source' URIs in your sources.list
This is my sources.list file:
## CRUNCHBANG
## Compatible with Debian Wheezy, but use at your own risk.
deb http://packages.crunchbang.org/waldorf waldorf main
deb-src http://packages.crunchbang.org/waldorf waldorf main
## DEBIAN
deb http://ftp.debian.org/debian/ wheezy main contrib non-free
deb-src http://ftp.debian.org/debian/ wheezy main contrib non-free
## DEBIAN SECURITY
deb http://security.debian.org/ wheezy/updates main
#deb-src http://security.debian.org/ wheezy/updates main
What URL(s) should I put in my sources.list to get apt-get source dwm to work?
|
Don't forget to run apt-get update after changing your sources.list.
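With the deb-src lines already in place, the full sequence would be something like (build-dep is optional but pulls in what is needed to compile dwm):

```shell
sudo apt-get update          # re-read sources.list, including the deb-src lines
apt-get source dwm           # fetch and unpack the source package
sudo apt-get build-dep dwm   # install dwm's build dependencies
```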
| Error while trying to retrieve dwm sources |
1,361,651,451,000 |
I have a laptop with an external monitor and I want to use the external monitor as the primary one. I'm also running Debian with dwm. xrandr -q gives me this:
Screen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192
LVDS1 connected 1366x768+1920+0 (normal left inverted right x axis y axis) 345mm x 194mm
1366x768 60.0*+ 50.0
VGA1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 509mm x 286mm
1920x1080 60.0*+
(I omitted some of the other resolutions for brevity). My dwm config.h file has several tag rules like this:
static const Rule rules[] = {
/* xprop(1):
* WM_CLASS(STRING) = instance, class
* WM_NAME(STRING) = title
*/
/* class instance title tags mask isfloating monitor */
{ "Chromium", NULL, NULL, 1, False, -1 },
{ "xxxterm", NULL, NULL, 1, False, -1 },
{ "Surf", NULL, NULL, 1, False, -1 },
{ "Iceweasel", NULL, NULL, 1 << 1, False, -1 },
{ "Vlc", NULL, NULL, 1 << 3, False, -1 },
};
and I have this xrandr command in my ~/.xinitrc file that configures the monitors properly when I run startx:
xrandr --output VGA1 --auto --output LVDS1 --auto --right-of VGA1
I want my tag rules to apply whenever I open a program, and for that program to be automatically assigned to whichever monitor has focus. I found a reddit post that said that -1 would do this, but for any program that has a tag rule, they always open on my laptop (LVDS1), not the external monitor (VGA1).
For programs that don't have a tag rule, e.g. st, they will open on whichever monitor/tag has focus. I tried changing the monitor value to 0 or 1, rebuilding dwm and restarting X, and the result is the same.
How do I configure dwm so that programs with tag rules open in their respective tag on whichever monitor has focus?
My full config.h:
/* appearance */
static const char font[] = "-*-terminus-medium-r-*-*-16-*-*-*-*-*-*-*";
static const char normbordercolor[] = "#333333";
static const char normbgcolor[] = "#101010";
static const char normfgcolor[] = "#999999";
static const char selbordercolor[] = "#224488";
static const char selbgcolor[] = "#224488";
static const char selfgcolor[] = "#ffffff";
static const unsigned int borderpx = 1; /* border pixel of windows */
static const unsigned int snap = 32; /* snap pixel */
static const unsigned int minwsz = 20; /* Minimal height of a client */
static const Bool showbar = True; /* False means no bar */
static const Bool topbar = True; /* False means bottom bar */
static const Bool viewontag = False; /* Switch view on tag switch */
static const Bool extrabar = True; /* False means no extra bar */
/* tagging */
static const char *tags[] = {"1", "2", "3", "4", "5", "6", "7", "8", "9" };
static const Rule rules[] = {
/* xprop(1):
* WM_CLASS(STRING) = instance, class
* WM_NAME(STRING) = title
*/
/* class instance title tags mask isfloating monitor */
{ "Chromium", NULL, NULL, 1, False, -1 },
{ "xxxterm", NULL, NULL, 1, False, -1 },
{ "Surf", NULL, NULL, 1, False, -1 },
{ "Iceweasel", NULL, NULL, 1 << 1, False, -1 },
{ "Vlc", NULL, NULL, 1 << 3, False, -1 },
{ NULL, NULL, "IPython", 1 << 4, False, -1 },
{ "Eclipse", NULL, NULL, 1 << 4, False, -1 },
{ "Quodlibet", NULL, NULL, 1 << 5, False, -1 },
{ "Icedove", NULL, NULL, 1 << 6, False, -1 },
{ "libreoffice", NULL, NULL, 1 << 7, False, -1 },
{ "Gnumeric", NULL, NULL, 1 << 7, False, -1 },
{ "Abiword", NULL, NULL, 1 << 7, False, -1 },
{ "Keepassx", NULL, NULL, 1 << 8, False, -1 },
};
/* layout(s) */
static const float mfact = 0.50; /* factor of master area size [0.05..0.95] */
static const float smfact = 0.00; /* factor of tiled clients [0.00..0.95] */
static const int nmaster = 1; /* number of clients in master area */
static const Bool resizehints = False; /* True means respect size hints in tiled resizals */
#include "patchlibs/bstack.c"
#include "patchlibs/bstackhoriz.c"
#include "patchlibs/fibonacci.c"
#include "patchlibs/gaplessgrid.c"
#include "patchlibs/tcl.c"
static const Layout layouts[] = {
/* symbol arrange function */
{ "T", tile }, /* first entry is default */
{ "F", NULL }, /* no layout function means floating behavior */
{ "B", bstack },
{ "G", gaplessgrid },
{ "M", monocle },
{ "H", bstackhoriz },
{ "C", tcl },
{ "S", spiral },
{ "D", dwindle },
};
/* key definitions */
#define MODKEY Mod1Mask
#define WINKEY Mod4Mask
#define TAGKEYS(KEY,TAG) \
{ MODKEY, KEY, view, {.ui = 1 << TAG} }, \
{ MODKEY|ControlMask, KEY, toggleview, {.ui = 1 << TAG} }, \
{ MODKEY|ShiftMask, KEY, tag, {.ui = 1 << TAG} }, \
{ MODKEY|ControlMask|ShiftMask, KEY, toggletag, {.ui = 1 << TAG} },
/* helper for spawning shell commands in the pre dwm-5.0 fashion */
#define SHCMD(cmd) { .v = (const char*[]){ "/bin/sh", "-c", cmd, NULL } }
/* commands */
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", font, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
static const char *termcmd[] = { "st", NULL };
static const char *chromiumcmd[] = {"chromium-incognito", NULL};
static const char *icedovecmd[] = {"icedove", NULL};
static const char *xxxtermcmd[] = {"xxxterm", NULL};
static const char *musiccmd[] = {"quodlibet", NULL};
static const char *ipythoncmd[] = {"ipython3qt", NULL};
static const char *iceweaselcmd[] = {"iceweasel", NULL};
static const char *texteditcmd[] = {"scite", NULL};
static const char *lockcmd[] = {"lock", NULL};
static const char *videocmd[] = {"vlc", NULL};
static const char *screenshotcmd[] = {"screenshot", NULL};
#include "patchlibs/movestack.c"
static Key keys[] = {
/* modifier key function argument */
{ WINKEY, XK_t, spawn, {.v = termcmd } },
{ WINKEY, XK_c, spawn, {.v = chromiumcmd } },
{ WINKEY, XK_d, spawn, {.v = icedovecmd } },
{ WINKEY, XK_x, spawn, {.v = xxxtermcmd } },
{ WINKEY, XK_i, spawn, {.v = iceweaselcmd } },
{ WINKEY, XK_m, spawn, {.v = musiccmd } },
{ WINKEY, XK_e, spawn, {.v = texteditcmd } },
{ WINKEY, XK_p, spawn, {.v = ipythoncmd } },
{ WINKEY, XK_l, spawn, {.v = lockcmd } },
{ WINKEY, XK_v, spawn, {.v = videocmd } },
{ WINKEY, XK_s, spawn, {.v = screenshotcmd } },
{ MODKEY, XK_p, spawn, {.v = dmenucmd } },
{ MODKEY, XK_b, togglebar, {0} },
{ MODKEY, XK_b, toggleextrabar, {0} },
{ MODKEY, XK_j, focusstack, {.i = +1 } },
{ MODKEY, XK_k, focusstack, {.i = -1 } },
{ MODKEY, XK_i, incnmaster, {.i = +1 } },
{ MODKEY, XK_d, incnmaster, {.i = -1 } },
{ MODKEY, XK_h, setmfact, {.f = -0.05} },
{ MODKEY, XK_u, setsmfact, {.f = -0.05} },
{ MODKEY, XK_m, setsmfact, {.f = +0.05} },
{ MODKEY, XK_l, setmfact, {.f = +0.05} },
{ MODKEY, XK_Return, zoom, {0} },
{ MODKEY, XK_Tab, view, {0} },
{ MODKEY|ShiftMask, XK_c, killclient, {0} },
{ ControlMask|ShiftMask, XK_t, setlayout, {.v = &layouts[0]} },
{ ControlMask|ShiftMask, XK_f, setlayout, {.v = &layouts[1]} },
{ ControlMask|ShiftMask, XK_b, setlayout, {.v = &layouts[2]} },
{ ControlMask|ShiftMask, XK_g, setlayout, {.v = &layouts[3]} },
{ ControlMask|ShiftMask, XK_m, setlayout, {.v = &layouts[4]} },
{ ControlMask|ShiftMask, XK_h, setlayout, {.v = &layouts[5]} },
{ ControlMask|ShiftMask, XK_c, setlayout, {.v = &layouts[6]} },
{ ControlMask|ShiftMask, XK_s, setlayout, {.v = &layouts[7]} },
{ ControlMask|ShiftMask, XK_d, setlayout, {.v = &layouts[8]} },
{ ControlMask, XK_space, setlayout, {0} },
{ MODKEY|ShiftMask, XK_space, togglefloating, {0} },
{ MODKEY, XK_0, view, {.ui = ~0 } },
{ MODKEY|ShiftMask, XK_0, tag, {.ui = ~0 } },
{ MODKEY, XK_comma, focusmon, {.i = -1 } },
{ MODKEY, XK_period, focusmon, {.i = +1 } },
{ MODKEY|ShiftMask, XK_comma, tagmon, {.i = -1 } },
{ MODKEY|ShiftMask, XK_period, tagmon, {.i = +1 } },
{ MODKEY|ShiftMask, XK_j, movestack, {.i = +1 } },
{ MODKEY|ShiftMask, XK_k, movestack, {.i = -1 } },
TAGKEYS( XK_1, 0)
TAGKEYS( XK_2, 1)
TAGKEYS( XK_3, 2)
TAGKEYS( XK_4, 3)
TAGKEYS( XK_5, 4)
TAGKEYS( XK_6, 5)
TAGKEYS( XK_7, 6)
TAGKEYS( XK_8, 7)
TAGKEYS( XK_9, 8)
{ MODKEY|ShiftMask, XK_q, quit, {0} },
};
/* button definitions */
/* click can be ClkLtSymbol, ClkStatusText, ClkWinTitle, ClkClientWin, or ClkRootWin */
static Button buttons[] = {
/* click event mask button function argument */
{ ClkLtSymbol, 0, Button1, setlayout, {0} },
{ ClkLtSymbol, 0, Button3, setlayout, {.v = &layouts[3]} },
{ ClkWinTitle, 0, Button2, zoom, {0} },
{ ClkStatusText, 0, Button2, spawn, {.v = termcmd } },
{ ClkClientWin, MODKEY, Button1, movemouse, {0} },
{ ClkClientWin, MODKEY, Button2, togglefloating, {0} },
{ ClkClientWin, MODKEY, Button3, resizemouse, {0} },
{ ClkTagBar, 0, Button1, view, {0} },
{ ClkTagBar, 0, Button3, toggleview, {0} },
{ ClkTagBar, MODKEY, Button1, tag, {0} },
{ ClkTagBar, MODKEY, Button3, toggletag, {0} },
};
|
The default dwm behaviour is to open applications, whether a rule matches them or not, on the monitor/screen that has focus.
To open Surf on the third tag of the currently focused monitor, the rule would be:
{ "Surf", NULL, NULL, 1 << 2, True, -1 },
To open VLC on the second tag of the primary monitor, irrespective of where the focus is, the rule would be:
{ "VLC", NULL, NULL, 1 << 1, True, 1 },
If your rules are not conforming to this behaviour, then there is likely something else wrong with the way you have configured dwm. Pasting your entire config.h might help.
| How do I configure multiple monitors to work with dwm's tag rules? |
1,361,651,451,000 |
I recently installed dwm (a Linux window manager) which extensively uses Alt+Shift key combinations for navigation, but none of them are working. For example, Alt+Shift+c should close a window.
I installed screenkey to check if my keyboard was working and saw that when I pressed Alt+Shift it registered as Alt+ISO_Next_Group. When I pressed Alt+Shift+c it registered as Alt+ISO_Next_Group then Alt+c.
I found this Arch Wiki page which mentions ISO_Next_Group but does not seem to suggest any solutions. What can I do to get Alt+Shift working again?
|
My issue was actually particular to MX Linux. By default the "Layout Switch" key is set to Alt+Shift. You can change this in fskbsettings, which is installed by default.
| Alt+Shift Not Working, Getting Alt+ISO_Next_Group Instead |
1,361,651,451,000 |
I have a dual screen setup which I've configured by running this command at startup:
xrandr --output VGA-0 --auto --right-of DVI-0
I'm running OpenSuse 11.4 with dwm as my window manager. I can post the output of any command or dump any file if you think it will help.
The problem happens when I try and run a fullscreen game. My right screen goes black and loses the signal. On the other screen, my resolution goes down and I see a magnified version of the upper left corner of my desktop (the dwm tag bar). I can hear the game audio but I can't see anything other than what I described above.
Another important thing is when this happens I need to reboot (or at least restart X) because I get no input and can't close anything.
EDIT 1
02:00.0 VGA compatible controller: ATI Technologies Inc Juniper [Radeon HD 5750 Series]
I'm not sure how I can find what driver I'm using, but I expect it's openSUSE's stock driver. When the issue happens, I can't see my mouse at all, even if I try to move it to the upper left. Pressing Ctrl+Alt+F1 drops me to a command-line login. Once I enter my information I can hear the audio again, but I am still in the CLI. Alt+SysReq+S does nothing at all.
EDIT 2
After running the game and getting "locked up", I hit Ctrl+Alt+F1 and was brought to a shell. I logged in and ran my display command and received the following error message:
X Error of failed request BadMatch (invalid parameter attributes)
Major opcode of failed request: 150 (RANDR)
Minor opcode of failed request: 7 (RRSetScreenSize)
Serial number of failed request: 40
Current serial number in output stream: 41
It looks like it's saying I've mistyped it or something, however I tried a couple times and I'm fairly certain my syntax was correct. I still tried going back to X with Ctrl+Alt+F7 and I was stuck in the same situation. I then went back to the console and killed the game's process, and went back to X to see what happened. The resolutions and screens were still messed up, however I had mouse and keyboard support and could use my browser.
|
I ended up being able to fix the issue by installing fglrx from the SuSE repositories. It seems some capabilities (acceleration) were not supported in the open-source version of the driver.
| Can't run certain fullscreen applications |
1,361,651,451,000 |
I am new to Arch and did a fresh install.
I have configured it to use dwm and I start it with startx. The problem is that some commands in xinitrc seem to not run. It clearly works to some extent, because dwm is starting, but I can't say the same for the other commands.
My xinitrc is located at ~/.xinitrc (i.e. /home/xor/.xinitrc) and looks like this:
exec dwm
set xkbmap de
feh --bg-scale ~/background.png
xinput set-prop "UNIW0001:00 093A:0255 Touchpad" 349 1
xinput set-prop "UNIW0001:00 093A:0255 Touchpad" 326 1
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
It should start dwm (which it does), set the keyboard layout to German QWERTZ (which works), set a background image (which does not work), adjust some touchpad settings (which do not work), and do something with the nvidia driver (I'm not exactly sure what it does; I guess it enables the card/driver, but I know that the drivers work and I can use my GPU).
I can paste the not working commands in a terminal and they work.
Is there an error log for xinit somewhere, and what is the problem in my case?
Thanks for help!
|
The problem with your .xinitrc is that it starts with exec dwm.
Launching a command this way makes dwm replace the running shell, which prevents everything further down in the script from ever being executed.
Quoted from man exec
If exec is specified with command, it shall replace the shell
with command without creating a new process.
I suggest you have a look to Arch's Xinit wiki in which you will notice that exec should be the last thing the .xinitrc script does.
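A corrected version of the script from the question would run all the setup commands first and exec dwm last. This is only a sketch; note that the standard spelling of the layout command is setxkbmap, not "set xkbmap":

```shell
#!/bin/sh
# ~/.xinitrc sketch: setup commands first, exec last (exec replaces the shell)
setxkbmap de
feh --bg-scale ~/background.png
xinput set-prop "UNIW0001:00 093A:0255 Touchpad" 349 1
xinput set-prop "UNIW0001:00 093A:0255 Touchpad" 326 1
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
exec dwm
```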
| xinitrc seems to not run some commands |
1,361,651,451,000 |
I installed NetBeans using zypper install netbeans. Trying to run it, I get the "loading" splash screen, which is followed by a small popup asking if I want to register my copy and receive some benefits. Once that popup is gone, I'm left with a big beige screen and nothing else. It's almost as if NetBeans can't be displayed. Running netbeans from the command line does not give me any output or error messages.
I also had this same issue when I was trying to install it using the .sh file found on their website.
I'm running opensuse 11.4 with dwm as my window manager. I'm also using the AMD's fglrx graphics driver (I have also experienced this issue using the default open source driver shipped with opensuse).
EDIT When trying to run sudo netbeans, I get:
/usr/share/netbeans/6.8/bin/../platform11/lib/nbexec: WARNING: environment variable DISPLAY is not set
And no display. This is not the same behavior as when I launch it as a regular user.
|
There's the same type of problem for awesomewm and probably quite a few other window managers.
The dmw wiki has a section on this: Fixing misbehaving Java applications. The solution proposed is to change the window manager name by installing wmname, and then running:
$ wmname LG3D
If that works, make sure that is called at every X session startup.
The awesomewm wiki has this same suggestion, and other workarounds that are most likely relevant to dwm here: Problems with Java.
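For a startx setup, one hedged way to make this persistent is to put the call in ~/.xinitrc just before the window manager is started (a sketch):

```shell
# ~/.xinitrc sketch: impersonate the LG3D window manager so Java apps render
wmname LG3D
exec dwm
```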
| Netbeans has no display on my computer |
1,361,651,451,000 |
I am really struggling to get the session polkit to work.
I am not really familiar with how it works, but I had been using GNOME before switching to dwm, where it worked perfectly, so I wanted to replicate that.
First of all: as I understood it, polkit is responsible for granting momentary privilege escalation to the user by prompting for the root password.
Is this correct?
How can I replicate that behavior without a DE but with a WM like dwm?
|
I use dwm as well, and in my .xinitrc file I have a polkit authentication agent set to start at login. I use xfce-polkit:
/usr/lib/xfce-polkit/xfce-polkit &
As an example I also use Thunar as my file manager and have a custom action that calls a root session of Thunar using pkexec.
Using a polkit agent will give you the same DE behaviour with dwm.
| dwm - session polkit |
1,361,651,451,000 |
I'm new to dwm (suckless.org) and also to GNU/Linux. I know a bit of the C language but don't really understand the config.h file.
SYS-CONFIG
I use Ubuntu 18.04 (installed with netinstaller + vanilla gnome...) and recently I wanted to try dwm 6.2.
HOW I INSTALLED IT
I downloaded the tar.gz file from the suckless.org website and built it by typing make in a terminal in that folder (without any errors). I also installed dwm via the Ubuntu repository and created a symbolic link in ~/bin/. Thereafter, I created a .xinitrc in my home folder and put exec dwm in it. Then I rebooted and logged in. I didn't change the config file.
PROBLEM
The default keybinding, Shift+Alt+Enter doesn't open gnome-terminal.
config.h
/* key definitions */
#define MODKEY Mod1Mask
#define TAGKEYS(KEY,TAG) \
{ MODKEY, KEY, view, {.ui = 1 << TAG} }, \
{ MODKEY|ControlMask, KEY, toggleview, {.ui = 1 << TAG} }, \
{ MODKEY|ShiftMask, KEY, tag, {.ui = 1 << TAG} }, \
{ MODKEY|ControlMask|ShiftMask, KEY, toggletag, {.ui = 1 << TAG} },
/* helper for spawning shell commands in the pre dwm-5.0 fashion */
#define SHCMD(cmd) { .v = (const char*[]){ "/bin/sh", "-c", cmd, NULL } }
/* commands */
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", col_gray1, "-nf", col_gray3, "-sb", col_cyan, "-sf", col_gray4, NULL };
static const char *termcmd[] = { "st", NULL };
static Key keys[] = {
/* modifier key function argument */
{ MODKEY, XK_p, spawn, {.v = dmenucmd } },
{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
{ MODKEY, XK_b, togglebar, {0} },
{ MODKEY, XK_j, focusstack, {.i = +1 } },
{ MODKEY, XK_k, focusstack, {.i = -1 } },
{ MODKEY, XK_i, incnmaster, {.i = +1 } },
{ MODKEY, XK_d, incnmaster, {.i = -1 } },
{ MODKEY, XK_h, setmfact, {.f = -0.05} },
{ MODKEY, XK_l, setmfact, {.f = +0.05} },
{ MODKEY, XK_Return, zoom, {0} },
{ MODKEY, XK_Tab, view, {0} },
{ MODKEY|ShiftMask, XK_c, killclient, {0} },
{ MODKEY, XK_t, setlayout, {.v = &layouts[0]} },
{ MODKEY, XK_f, setlayout, {.v = &layouts[1]} },
{ MODKEY, XK_m, setlayout, {.v = &layouts[2]} },
{ MODKEY, XK_space, setlayout, {0} },
{ MODKEY|ShiftMask, XK_space, togglefloating, {0} },
{ MODKEY, XK_0, view, {.ui = ~0 } },
{ MODKEY|ShiftMask, XK_0, tag, {.ui = ~0 } },
{ MODKEY, XK_comma, focusmon, {.i = -1 } },
{ MODKEY, XK_period, focusmon, {.i = +1 } },
{ MODKEY|ShiftMask, XK_comma, tagmon, {.i = -1 } },
{ MODKEY|ShiftMask, XK_period, tagmon, {.i = +1 } },
TAGKEYS( XK_1, 0)
TAGKEYS( XK_2, 1)
TAGKEYS( XK_3, 2)
TAGKEYS( XK_4, 3)
TAGKEYS( XK_5, 4)
TAGKEYS( XK_6, 5)
TAGKEYS( XK_7, 6)
TAGKEYS( XK_8, 7)
TAGKEYS( XK_9, 8)
{ MODKEY|ShiftMask, XK_q, quit, {0} },
};
|
Here is the problem:
static const char *termcmd[] = { "st", NULL };
The dwm build from suckless.org uses st as default terminal emulator therefore Alt+Shift+Enter is mapped to st which is not installed on your system. You need to change st to gnome-terminal or whatever other terminal emulator you want (and which is installed on your system).
Once you have edited the configuration file, run make and sudo make install to apply the changes to your system.
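If you prefer to script the edit, a hypothetical one-liner with sed would be the following, run inside the dwm source directory (-i.bak keeps a backup of the original file):

```shell
# Swap the default terminal in config.h, keeping config.h.bak as a backup
sed -i.bak 's/{ "st", NULL }/{ "gnome-terminal", NULL }/' config.h
grep termcmd config.h
```

Afterwards rebuild and reinstall dwm as usual.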
| The default keybinding for opening a terminal in dwm does not work |
1,361,651,451,000 |
The context
I use dwm as my main windows manager and anki to create flashcards.
The problem
Some windows of the anki program are not correctly being displayed (see image or gif below)
Gif
Screenshot
The question
How can I make anki windows to be correctly displayed when using dwm?
Additional context
At a first glance I thought that it was because of the windows manager that I use (dwm) but then I tried opening anki while using i3 the problem still occurred.
I tried searching a similar issue on the Internet but didn't find any meaningful information. These are some of the searches that I performed
"anki" "dwm" bad windows display
"anki" "dwm" bad windows drawing
"anki" "linux" bad windows display
"anki" "linux" bad windows drawing
I've previously used Anki on dwm and didn't have any issues. Unfortunately, I don't remember the previous version I was using, so I can't tell whether this issue was caused by an Anki update or not.
|
I was helped in the Anki forum after creating a post.
I solved the issue by executing
$ ANKI_NOHIGHDPI=1 anki
You can find further information on this issue in the related links that were posted by addons_zz in his reply to my post in the Anki forum.
| Windows of the anki program are not correctly being displayed |
1,361,651,451,000 |
I wrote a script for my dwm status bar. One part of it is finding the current CPU usage for every single core on my system. I figured out a way myself, but I need some help fixing a bug in it. Here is the command: top -bn 2 -d 0.5 | grep -F '%Cpu' | tail -n 4 | awk '{print $3}'. This assumes that your top shows every core by default. You can achieve this by running top, pressing 1, and then pressing W to save the current configuration to a .toprc file in your home folder. Every time you open top now it will display all of your cores. My aforementioned command has the drawback that when assigned to a variable like this:
CPU=$(top -bn 2 -d 0.5 | grep -F '%Cpu' | tail -n 4 | awk '{print $3}')
and then using $CPU with xsetroot like this: xsetroot -name "$CPU", I get the output I want in my status bar, but between every CPU percentage there are two symbols stacked on top of each other separating them, v and t. How do I get rid of them? Does this have something to do with me using an array instead of a string here?
You can see the problem on the left side of the picture.
Command:
top -bn 2 -d 0.5 | grep -F '%Cpu' | tail -n 4 | awk '{print $3}'
sample output:
14.3
12.0
8.0
10.0
Note for anyone using this script: When the cpu usage for a core reaches 100% the array that the command outputs will move the column with the current load from column 3 to column 2. Hence, with awk '{print $3}' you will then see us, as output for column 3. If you're fine with that leave it. If not you could have awk print column 2 as well. It will just be :. A solution that avoids all those pitfalls is:
top -bn 2 | grep -F '%Cpu' | tail -n 4 | gawk '{print $2 $3}' | tr -s '\n\:\,[:alpha:]' ' '
|
The "stuff" between the values seems a visual representation of the newline character to me ( octal character code 12), which you would get when using:
echo -e 'a\012b'
What you could try is to pipe the output through tr '\n' ' ', as in:
echo -e 'a\012b' | tr '\n' ' '
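Putting it together with the sample output from the question (printf stands in for the top pipeline here so the sketch is reproducible):

```shell
# Join the per-core values onto one line; each newline becomes a space
CPU=$(printf '14.3\n12.0\n8.0\n10.0\n' | tr '\n' ' ')
echo "$CPU"   # prints: 14.3 12.0 8.0 10.0
```

In the real script, replace the printf with top -bn 2 -d 0.5 | grep -F '%Cpu' | tail -n 4 | awk '{print $3}' and pass the result to xsetroot -name "$CPU".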
| How to print a bash variable that has an array as value |
1,361,651,451,000 |
The context
This gif shows what happens when trying to opening scilab while using i3
This gif shows what happens when trying to opening scilab while using dwm
The problem
I'm currently using dwm because using tags is more suitable for my workflow than using workspaces with i3, so in order to use Scilab I have to switch back to i3.
The question
Why does dwm behave like this while i3 does not?
By understanding the context, I will be able to search through all the available dwm patches in order to discern whether a given one would be useful to solve this specific issue.
The current workaround
scilab-cli runs well in both dwm and i3, I can plot graphs and perform any operation, so for the moment I can use scilab-cli while using dwm.
Additional context
I wonder whether setting one of these environment variables might help
$ ./bin/scilab --randomtext 2>&1 | tail -n 6
Several environment variables can be declared:
SCIVERBOSE Provides debugging information of the startup
JAVA_HOME Declares which Java Virtual Machine to use
SCI_DISABLE_TK Disables Tk (but not Tcl) features
SCI_JAVA_ENABLE_HEADLESS Runs Java Headless VM (without GUI; Windows and Linux only)
SCI_DISABLE_EXCEPTION_CATCHING Disable the catch by Scilab of exception (segfault, ...)
|
This is a common rendering issue with Java applications and non-reparenting window managers. There are two solutions:
Use wmname to impersonate another window manager, e.g.
$ wmname LG3D
Set no-reparenting flag for JDK
export _JAVA_AWT_WM_NONREPARENTING=1
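For a startx setup, the flag can go into ~/.xinitrc before the window manager starts, so every Java application dwm spawns inherits it (a sketch):

```shell
# ~/.xinitrc sketch: exported variables are inherited by apps dwm launches
export _JAVA_AWT_WM_NONREPARENTING=1
exec dwm
```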
| Scilab displays an empty screen when executing it while using "dwm" but not while using "i3" |
1,620,517,149,000 |
What are the dependencies needed to compile dwm from scratch?
To install dwm (the suckless window manager) in Arch Linux, do I need to install xorg, make and other packages?
|
The dependencies needed to compile dwm from scratch in Void Linux are:
sudo xbps-install base-devel libX11-devel libXft-devel libXinerama-devel freetype-devel fontconfig-devel
The dependencies needed to compile dwm from scratch in Arch Linux are:
sudo pacman -S base-devel libx11 libxft libxinerama freetype2 fontconfig
| How to install dwm in void linux |
1,620,517,149,000 |
Edit: The border on the bottom is not set by me. I don't have it in my config.h file. To be honest, I wouldn't know how to set it.
So, it doesn't happen with any other programs, just terminal emulators: all of xterm, urxvt, st, and xfce4-terminal.
Edit: I want to get rid of this empty space. Is there a patch or some setting I can tweak to avoid it? Again, it only appears when the client is a terminal emulator, not with Firefox, pcmanfm, etc.
|
Most application windows (firefox, pcmanfm, libre office, etc) allow you to resize them in pixels.
Most (all?) terminal emulators only allow you to resize them in characters - e.g. 80x24.
On my XFCE system (with a 2560x1440 screen), I have a top panel (menus, short-cut icons, some status displays) and a bottom panel (taskbar, desktop switcher, status bar, etc). Between those two, I can fit a terminal window that is 192 characters wide by 51 characters high. There is a small area (maybe 4 or 5 pixels) that is not covered by the terminal window.
The combination of my font setting "Monospace Regular" @ 16pt and 192x51 characters requires 2513 pixels by 1336 pixels (so says xwininfo). The handful of extra pixels is not enough to have an extra line (if there was, I would use it).
| border on the bottom of terminal emulators even in monocle view [closed] |
1,620,517,149,000 |
I'm setting up key bindings in dwm for things like changing brightness and taking screenshots. In order to bind the appropriate key, I followed someone else's example and added
#define XF86AudioMute 0x1008ff12
to my config.h, and referred to that key in my keybinding.
This works fine, but I have no idea where this value came from or how to find other similar values. For example, the PrtSc button on my keyboard is one that I haven't been able to find a value for.
What are these values, and how do I find them?
|
The definition
#define XF86AudioMute 0x1008ff12
comes from the header file XF86keysym.h, though it's spelled differently:
#define XF86XK_AudioMute 0x1008FF12 /* Mute sound from the system */
To find the keysyms that your keyboard sends, use xev. Not all keys will send keysyms, however (in that case, you can't do much).
Further reading:
What does this output from xev mean?
| How do you find what value a keyboard key has? (dwm keybinding) |
1,620,517,149,000 |
I recently started using dwm, which is really nice, but I have a problem with regards to my multi monitor setup.
I have my monitors placed like the following:
-----------------------------------------------------------------
| || |
| || |
| || |
| || |
| 2 || 3 |
| || |
| || |
| || |
| || |
-----------------------------------------------------------------
----------------------------------
| |
| |
| |
| |
| 1 |
| |
| |
| |
| |
----------------------------------
but when dragging my mouse from one screen to the next, I have to drag it from 1 -- 2 -- 3 through the right side of the screen with a lower number. This means that my screens are ordered as follows:
---------------------------------------------------------------------------------------------------
| || || |
| || || |
| || || |
| || || |
| 1 || 2 || 3 |
| || || |
| || || |
| || || |
| || || |
---------------------------------------------------------------------------------------------------
Is there a way for me to make dwm understand the correct placement of the screens?
Thanks for any help ;-)
|
What I do is start dwm manually (e.g. using startx) and, before executing it in .xinitrc, run an xrandr command that sets up the external monitors. xrandr supports position-related arguments of the form "display x is above display y" and so on.
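A .xinitrc sketch for the layout in the question might look like this. The output names eDP-1, HDMI-1 and DP-1 are assumptions, so check yours with plain xrandr first:

```shell
# ~/.xinitrc sketch: monitor 2 above monitor 1, monitor 3 to the right of 2
xrandr --output HDMI-1 --above eDP-1 \
       --output DP-1 --right-of HDMI-1
exec dwm
```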
| screen placements on dwm |
1,620,517,149,000 |
I find no way to switch between the last two focused windows. How can I do that? There is Mod+Tab, but that is for tags only.
|
Toggling between windows in a tag is done through changing the focus between next and previous:
Mod1-j Focus next window.
Mod1-k Focus previous window.
| dwm. How to switch between last two windows? |
1,620,517,149,000 |
I want to set up icons for my tags in dwm. I have downloaded the ttf-font-awesome package with sudo pacman -S ttf-font-awesome and I changed this line in my config.h of dwm:
static const char *fonts[] = { "FontAwesome:size=16", "consolas:size=16" };
But it still doesn't work. Some icons show up correctly, while others render slightly or even completely differently. I copied the unicode glyph from fontawesome.com and added it to my *tags array in config.h. Did I miss something? Any help is greatly appreciated!
|
The solution was quite simple. I also had a Nerd Font loaded, and this font caused the icons to be rendered differently from what I expected. I removed the Nerd Font and everything worked just fine.
| dwm doesn't load fontawesome |
1,620,517,149,000 |
I am running Arch Linux on a Dell XPS-13 9380 with the DWM window manager (which uses X11).
I recently started using an external monitor with my laptop. My issue is that when and only when the second display is active, my terminal emulator glitches when I type into it. Please note that this only affects the terminal emulator on my laptop display, NOT the external display. It is especially annoying when editing files in emacs or vim.
Here is the script that I run to activate the second display:
xrandr --output DP-1 --auto --left-of eDP-1
The external dislay is DP-1, the integrated display is eDP-1.
By "glitches", I mean when a key is pressed that changes the display of the terminal, the text will flicker between the new change and the old change. If I am moving the cursor around with the arrow keys, the cursor will flicker around and sometimes settle in the wrong location. Note that the cursor is actually where it is supposed to be, it is just rendering improperly. This is purely a graphical issue.
I noticed the issue on the Alacritty and Kitty terminal emulators. These are both GPU accelerated emulators, so I tried URXVT and did not notice the same issues. Note: I do not believe that the Dell XPS-13 9380 has onboard graphics.
I am not sure if this is a firmware issue or an X11 issue.
Any ideas to get this to stop? I do not want to have to switch to another emulator.
UPDATE:
It also glitches for ST, meaning that the problem likely has nothing to do with the fact that alacritty and kitty are GPU accelerated. I am not sure why urxvt works fine...
|
It was an X11 problem. I installed the xf86-video-intel driver (for the Intel UHD Graphics 620 chip; consult the Arch wiki to be sure it matches your hardware) and then added this to /etc/X11/xorg.conf.d:
20-intel.conf:
Section "Device"
Identifier "Intel Graphics"
Driver "intel"
Option "TripleBuffer" "true"
Option "TearFree" "true"
EndSection
| Terminal emulators glitch when using two displays (DWM) |
1,620,517,149,000 |
I'm using Arch Linux, dwm and dwmblocks. At startup dwmblocks shows only the blocks' icons, without loading the scripts' output. If I run
$ killall dwmblocks
and restart
$ dwmblocks &
it loads all required modules flawlessly.
For me, as a nonprofessional, it seems that the $PATH is not read before startx. So in .xinitrc I've sourced the bashrc (where the $PATH is extended) with
source $HOME/.bashrc &
before running dwmblocks &.
Also I've tried to delay the execution of dwmblocks by placing sleep 2 a line above. This doesn't help either.
Searching the Xorg log files wasn't a success either. I found them as indicated by the Arch wiki; however, the files don't seem to give any clue about dwmblocks.
|
The command source is not portable. It is a bash-specific (and possibly some other shells') alias for the standard POSIX shell command ".". So, it looks like your file is being read by something other than bash, which means you should use "." and not source.
Also, you can't be sure that $HOME will be set (it may well be in this case, but it might not) so use an absolute path instead, to be on the safe side. Finally, you don't need to send that to the background, it will just read the file and exit. Putting all that together, try using this instead:
. /home/alex/.bashrc
Also note that environment variables are better placed in ~/.profile and not ~/.bashrc since ~/.bashrc is only read by the bash shell and only for interactive, non-login shell sessions.
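The difference is easy to demonstrate with a throwaway file; the "." form below works in plain POSIX sh (and in bash), while source is not guaranteed to:

```shell
# Create a tiny env file and read it with the portable '.' builtin
printf 'GREETING=hello\n' > /tmp/env_example
sh -c '. /tmp/env_example && echo "$GREETING"'   # prints: hello
```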
| dwmblocks not reading $PATH |
1,620,517,149,000 |
My first desktop environment was GNOME. When I switch to dwm and try to open the terminal with Alt+Shift+Enter, the terminal opens in the GNOME environment, but not in dwm.
|
In the source code of dwm there's a variable called "termcmd". Assuming the terminal under GNOME is called "gnome-terminal", you'll have to replace "st" with "gnome-terminal" in config.h and recompile.
| Terminal problem DWM |
1,620,517,149,000 |
Is it possible to swallow a terminal running Tmux? I can't seem to get the patch working unless Tmux is disabled.
I've tried with "Screen" on st and there were no issues :/.
Thanks!
https://dwm.suckless.org/patches/swallow/
(Using dwm-swallow-20200522-7accbcf.diff)
EDIT:
Couldn't figure it out, but found something awesome:
DVTM (Similar to running DWM in a terminal).
https://github.com/martanne/dvtm
|
I was having the same issue as you so I did some digging into how the swallow patch works and why it can't swallow tmux windows.
Basically, dwm’s swallow patch is incapable of handling applications launched from terminal emulators running tmux because the patch figures out which window should be swallowed by finding the parent process of the recently launched GUI application. This fails when running tmux because tmux forks applications from its server process which is a direct child of PID 1 (the init process). There’s no direct path up the process tree from the GUI application to the terminal emulator which means dwm can't figure out which terminal should be swallowed by the new application so it spawns the application normally.
Swallowing still works with screen because screen is a child process of the terminal emulator and so are applications launched from it. In this case, there is a direct path up the process tree from the GUI application to the terminal emulator so dwm can find out which terminal to swallow.
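The ancestry walk the patch relies on can be sketched in shell (an illustration only; the patch itself does this in C after obtaining the window's PID). For a tmux-spawned application the walk reaches the tmux server and then PID 1 without ever passing through the terminal emulator:

```shell
# Print the chain of ancestor PIDs of the current shell, stopping at PID 1
pid=$$
while [ "$pid" -gt 1 ]; do
    echo "$pid"
    pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
done
```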
I have a more in depth exploration of how it works along with process tree graphs on my blog if you're interested.
There is a workaround in the form of a program called devour. It doesn't provide true window swallowing the same way the swallowing patch does, but it does work with tmux.
| Suckless' DWM Swallow Patch + Tmux |
1,620,517,149,000 |
I'm in the process of creating a nice-looking status bar with dwm and I want to implement unicode symbols with Font Awesome. Supposedly I can use the Pango patch, but it's out of date (for version 6.1), so patching fails. Is there another method I could use?
|
There was an XFT patch for 6.1 posted to the mailing list some time ago: that should work.
Your only other option is to update pango for 6.1, which may be a significant task given the changes to dwm over that version bump.
| DWM Unicode Support |
1,620,517,149,000 |
I tried to introduce colors to the status bar with xsetroot, but that did not work (of course).
I then found the status2d patch, but it does not use ANSI color escape codes and it slows my status bar down.
Is there an alternative?
|
Have you tried the statuscolors patch? It provides colored status text:
https://dwm.suckless.org/patches/statuscolors/
| How do I bring color to my dwm status bar? |
1,620,517,149,000 |
I want to add an additional input language to my Linux system. I am using MX Linux with the dwm desktop environment. How can I do this?
|
Since MX Linux is Debian based, you can set your keyboard layout using,
sudo dpkg-reconfigure keyboard-configuration
The keyboard settings file is /etc/default/keyboard if you prefer to do it manually. You can set the layout, the available languages, variants and the key combination to switch layout/language. In Debian the settings in this file are respected by console and Xorg.
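For reference, a hedged sketch of /etc/default/keyboard with two layouts and an Alt+Shift toggle; the us,de pair and the toggle option are assumptions, so substitute your own languages:

```shell
# /etc/default/keyboard sketch: two layouts, switched with Alt+Shift
XKBMODEL="pc105"
XKBLAYOUT="us,de"
XKBVARIANT=","
XKBOPTIONS="grp:alt_shift_toggle"
BACKSPACE="guess"
```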
Second part, is how to view the selected language in dwm bar. There are many ways, I'll suggest you two:
I suppose you've patched dwm with the systray patch. If not, I suggest you do it, since many apps use the system tray. In that case you can apt install fbxkb. It's a light app which shows a flag icon for the selected language in the system tray. Easy and nice, but I don't like the flag on the systray :)
Use some dwm status bar customization. There are many available in dwm status monitor page. Most support showing current keyboard layout; I use dwm-bar. More steps to setup than the first proposal, but it's helpful to have whatever info you like on the bar - since you can add much more than just the current keyboard layout.
| How to add keyboard layout in dwm |
1,620,517,149,000 |
Since the last system update, for various applications the font size has increased. Even one application for which I can be sure, that it has not been replaced, because I have a custom version and install it from source (see dmenu).
As you can see in the picture, the font for dmenu appears ridiculously large. The screenshot also shows which font was specified in the source code and the output of fc-list, which indicates, that the font is installed.
Some other applications (such as gimp) show a similar phenomenon.
The overall resolution is full hd 1920x1080.
What happened and how can I reverse it? It is surprising to me that some applications, such as the terminal (alacritty) or VS Code, still render fine. I already checked here and made sure that no nvidia drivers are installed (i.e., sudo pacman -Q | grep nvidia shows no output).
|
First I installed the xorg-xrdb package which provides X11 access to the file .Xresources .
sudo pacman -S xorg-xrdb
Then I created .Xresources inside my home directory with the following content
*.dpi: 96
Lastly I have to make sure that .Xresources is loaded, so I added the following line to my .xinitrc
xrdb -merge ~/.Xresources
| Why do some fonts appear zoomed in after system update? |
1,620,517,149,000 |
I just got a quick question regarding dwm: how can I switch the places of two windows, so window A is swapped with window B? Or is it even possible to move windows around?
Because I just like to have my main windows open up on the left side of the screen, so I can easily look at them.
The problem is that they sometimes move to the right side.
Thanks in advance, I would really appreciate your help with that.
|
Figured it out myself, by default mod + Return switches the master and stack.
| DWM: Switch two windows places | Move windows |
1,620,517,149,000 |
I want to run Telegram Desktop as a background process on dwm, such that I receive notifications when I have a new message and such that I don't have to keep the GUI open the entire time.
How would I go about that?
Whenever I close the GUI, the process closes entirely and I don't receive notifications.
|
You can try patching it: https://dwm.suckless.org/patches/systray
By default DWM doesn't sport a systray where Telegram could be minimized to.
| How do I run Telegram Desktop as a background process in dwm? |
1,620,517,149,000 |
I am attempting to set up the behavior described in the title. For reference, there is an answer which solves this for Emacs. I, however, use dwm/st and zsh.
The solutions I have clumsily tried to come up with include modifying the .zshrc file with the following line:
cd $pwd
I have realized this does not make sense, as the path displayed by this instance of pwd will in fact be the path in which the terminal itself is opened, which is $HOME. Maybe the solution is messing with st, but I have not had any ideas of how to do so. Any help would be appreciated.
|
I hope this does the trick:
st & disown
EDIT:
You can make an alias to it and put it to your rc file (I don't know if it works on zsh)
alias st='st & disown'
So when you call it in the current shell it will open a new terminal in your current directory. Note though that aliases are not inherited by subshells (you can define a function instead).
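Since aliases don't survive into subshells or scripts, a small wrapper function is the more robust variant. A sketch for your zsh rc file (works in bash too):

```shell
# Wrapper: launch st detached from the current shell, inheriting
# the shell's working directory. `command` skips the function itself
# so we don't recurse.
st() {
    command st "$@" >/dev/null 2>&1 & disown
}
```

Dropping this in ~/.zshrc means every `st` you type spawns a terminal in the directory you're currently in, already disowned.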
Maybe I misunderstood and you intended this:
There's this patch of st that let you spawn a new terminal on the current working directory with the ctrl+shift+return keybinding.
https://st.suckless.org/patches/newterm/
I think I'll patch my st too at this point.
| Opening Terminal in Current Directory [closed] |
1,620,517,149,000 |
I'm using Arch Linux with the dwm desktop. I was trying to make some changes in my ~/.xinitrc, and now my dwm status bar displays weird characters. I reverted all the changes I made in /etc/X11/xconfig.d and ~/.xinitrc, but the problem persists. The characters also get displayed on some web forms (which makes it very difficult for me to post even this). Do you have any idea what may cause this?
Here's a screenshot:
|
This is symptomatic of you not having installed a monospace font, and not having changed the font declaration in config.h, the default which ships with dwm being:
static const char *fonts[] = { "monospace:size=10" };
| Dwm status bar and web forms display weird fonts |
1,620,517,149,000 |
After setting up shortcuts in config.h, pactl is not working at all. However, when I run pactl in a terminal, it works as intended.
config.h:
#include <X11/XF86keysym.h>
static const char *upvol[] = { "/usr/bin/pactl", "set-sink-volume", "@DEFAULT_SINK@", "+5%", NULL };
static const char *downvol[] = { "/usr/bin/pactl", "set-sink-volume", "@DEFAULT_SINK@", "-5%", NULL };
static const char *mutevol[] = { "/usr/bin/pactl", "set-sink-mute", "@DEFAULT_SINK@", "toggle", NULL };
static Key keys[] = {
{ 0, XF86XK_AudioLowerVolume, spawn, {.v = downvol } },
{ 0, XF86XK_AudioMute, spawn, {.v = mutevol } },
{ 0, XF86XK_AudioRaiseVolume, spawn, {.v = upvol } }
};
Even if I replace @DEFAULT_SINK@ with 1 (which is my current sink), it still doesn't work.
function keys work because xev detects that they have assigned events:
KeymapNotify event, serial 37, synthetic NO, window 0x0,
keys: 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I run DWM through LightDM session with dwm.desktop file in /usr/share/xsessions:
[Desktop Entry]
Encoding=UTF-8
Name=dwm
Comment=Dynamic window manager
Exec=dwm
Icon=dwm
Type=XSession
|
Turns out DWM in some cases must be fully reinstalled to overwrite some settings
sudo make uninstall && sudo make install
| pactl not working in DWM config file |
1,330,333,166,000 |
I have destroyed my Mint Linux installation. I just wanted access to my remote storefront. What happened was that I was having trouble with the ICEauthority file in my home directory. Following different directions on the internet, I came to the conclusion that I could recursively chmod 755 my home directory to allow that file to work. Eventually I ran into problems with the system loading. Only by setting the home directory to executable permission for root was I able to get read/write access. But then I reset my machine (oh why, oh why did I reset my machine!!!). Now the system throws me the same ICEauthority error, but it never gets me into the OS because the disk is encrypted. Nothing I've tried seems to work, and I don't have the original mount passphrase. I've also tried sudo ecryptfs-recover-private, but the system then just says "No such file or directory":
frankenmint@honeybadger /home $ sudo ecryptfs-recover-private
INFO: Searching for encrypted private directories (this might take a while)...
INFO: Found [/home/.ecryptfs/frankenmint/.Private].
Try to recover this directory? [Y/n]: y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] y
INFO: Enter your LOGIN passphrase...
Passphrase:
Inserted auth tok with sig [979c6cdf80d2e44d] into the user session keyring
mount: No such file or directory
ERROR: Failed to mount private data at [/tmp/ecryptfs.Hy3BV96c].
I’m really worried because I had important files on there that were stored on a virtual machine…If I could just get to those files then I would have no qualms nuking the setup and starting over
|
I found that running sudo bash and then running ecryptfs-recover-private as root (rather than via sudo) worked. Not sure why it should be any different.
Edit:
TL;DR:
# ecryptfs-unwrap-passphrase /mnt/crypt/.ecryptfs/user/.ecryptfs/wrapped-passphrase - | ecryptfs-add-passphrase --fnek -
< Type your login password here >
Inserted auth tok with sig [aaaaaaaaaaaaaaaa] into the user session keyring
Inserted auth tok with sig [bbbbbbbbbbbbbbbb] into the user session keyring
You will not see a prompt and must type your login password, blind, into the above command.
Replace the aaaaaaaaaaaaaaaa and bbbbbbbbbbbbbbbb below with the hex signatures between brackets from the output above, in order:
# mount -i -t ecryptfs -o ecryptfs_sig=aaaaaaaaaaaaaaaa,ecryptfs_fnek_sig=bbbbbbbbbbbbbbbb,ecryptfs_cipher=aes,ecryptfs_key_bytes=16 /mnt/crypt/.ecryptfs/user/.Private /mnt/plain
Preliminaries
It turns out just running as root did not work reliably for me; sometimes it did, sometimes it didn't. Basically, ecryptfs seems buggy and quite user-unfriendly, often confusing login passwords and mount passphrases. After going down a deep, dark rabbit hole, I have some tips that should help. These notes are for Ubuntu 17.10, ecryptfs-utils 111-0, and you should become root before starting. I assume you want to mount your home directory from /mnt/crypt (which should already be mounted) to /mnt/plain, and you should replace user with the username.
Start Easy
The first thing to try is:
# ecryptfs-recover-private /mnt/crypt/.ecryptfs/user/.Private
If this works, well, you're lucky. If not, it may give an error message from mount about no such file or directory. This is extremely misleading: what it really means is your mount passphrase is wrong or missing.
Get The Signatures
Here is the important part: we need to verify ecryptfs is really trying the right mount passphrase(s). The passphrases must be loaded into the Linux kernel before ecryptfs can mount your filesystem. ecryptfs asks the kernel for them by their signature. The signature is a 16-byte hex value (and is not cryptographically sensitive). You can find the passphrase signatures ecryptfs is expecting:
# cat /mnt/crypt/.ecryptfs/user/.ecryptfs/Private.sig
aaaaaaaaaaaaaaaa
bbbbbbbbbbbbbbbb
Remember these. The goal is to get passphrases with these signatures loaded into the kernel and then tell ecryptfs to use them. The first signature (aaaaaaaaaaaaaaaa) is for the data, and the second (bbbbbbbbbbbbbbbb) is the FileName Encryption Key (FNEK).
Get the mount passphrase
This command will ask you for your login password (with a misleading prompt), and output your mount passphrase:
# ecryptfs-unwrap-passphrase /mnt/crypt/.ecryptfs/user/.ecryptfs/wrapped-passphrase
Copy this but be careful!!, as this is extremely cryptographically sensitive, the keys to the kingdom.
Try an interactive mount
The next thing to try is:
# mount -t ecryptfs /mnt/crypt/.ecryptfs/user/.Private /mnt/plain
The crucial thing here is that mount needs your (super-sensitive) mount passphrase that we just copied (not your login password).
This will ask you some questions, and you can accept the defaults except say yes to Enable filename encryption. It may give you a warning and ask to cache the signatures; you can say yes to both, but do double-check that you've got the right mount passphrase.
You will see the options that mount has decided to try for you:
Attempting to mount with the following options:
ecryptfs_unlink_sigs
ecryptfs_fnek_sig=bbbbbbbbbbbbbbbb
ecryptfs_key_bytes=16
ecryptfs_cipher=aes
ecryptfs_sig=aaaaaaaaaaaaaaaa
Mounted eCryptfs
If the signatures are wrong (don't match what you got from Private.sig), the mount won't work.
...but it will very unhelpfully report that it did. You will have to do an ls /mnt/plain and cat a file to make sure. At this point you can also look in /var/log/syslog and verify that ecryptfs is looking for the same signatures we are.
There are clearly two serious issues with ecryptfs here, and we have to work around them.
Load the keys into the kernel
If the interactive mount didn't help, we have to load the keys into the kernel ourselves and manually specify them in the mount options.
# ecryptfs-add-passphrase --fnek
And paste in your (super-sensitive) mount passphrase copied from above. This should output:
Inserted auth tok with sig [aaaaaaaaaaaaaaaa] into the user session keyring
Inserted auth tok with sig [bbbbbbbbbbbbbbbb] into the user session keyring
Mount manually
Now the passphrases are loaded into the kernel, and we just need to tell mount to use them:
# umount /mnt/plain
# mount -i -t ecryptfs -o ecryptfs_sig=aaaaaaaaaaaaaaaa,ecryptfs_fnek_sig=bbbbbbbbbbbbbbbb,ecryptfs_cipher=aes,ecryptfs_key_bytes=16 /mnt/crypt/.ecryptfs/user/.Private /mnt/plain
You'll notice the options are similar to what the interactive mount printed out, except we're manually telling ecryptfs what's up.
Hopefully this works. If not, you can check that the keys are loaded into the kernel with the correct signatures using keyctl list @u, which should print out at least the two signatures you're expecting.
| mount: No such file or directory with encrypted recovery |
1,330,333,166,000 |
Someone asked me on another site about this question: a file named "abc.dat" has 0 file size but 8 blocks, and this is the output I asked him to give me (some text has been translated from Chinese to English):
$ cp abc.dat abc2.dat; ls -ls abc2.dat #try to copy, it still 8 blocks but 0 byte
8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Feb 27 19:39 abc2.dat
8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Sep 18 19:11 abc.dat #sorry, this may be the extra wrong output he added
$ stat abc.dat
File: 'abc.dat'
Size: 0 Blocks: 16 IO Block: 4096 regular empty file
Device: 32h/50d Inode: 3715853 Links: 1
Access: (0664/-rw-rw-r--) Uid:( 1000/rokeabbey) Gid:( 1000/rokeabbey)
Access: 2018-02-26 21:13:57.640639992 +0800
Modify: 2017-09-18 19:11:42.221533011 +0800
Change: 2017-09-18 19:11:42.221533011 +0800
Birth: -
$ touch abc3.dat ; ls -sl | grep abc #try to create new empty file, it still 8 blocks by default
8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Feb 27 19:39 abc2.dat
8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Feb 27 19:40 abc3.dat
8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Sep 18 19:11 abc.dat
I've learned a bit about sparse files, file metadata, and symlink cases, but none of those cases would cause a 0-byte file size with 8 blocks. Is there any filesystem setting, such as a minimum block size for any file?
He told me that his systems is Ubuntu 16.04 and ext4.
[UPDATE]
$ df -Th /home/rokeabbey
/home/rokeabbey/.Private ecryptfs 138G 39G 92G 30% /home/rokeabbey
[UPDATE] I can reproduced with ecryptfs
xb@dnxb:/tmp/test$ sudo mkdir /opt/data
xb@dnxb:/tmp/test$ sudo apt-get install ecryptfs-utils
...
xb@dnxb:/tmp/test$ sudo mount -t ecryptfs /opt/data /opt/data
Passphrase:
...
Selection [aes]: 1
...
Selection [16]: 1
Enable plaintext passthrough (y/n) [n]: y
Enable filename encryption (y/n) [n]: y
...
Would you like to proceed with the mount (yes/no)? : yes
...
in order to avoid this warning in the future (yes/no)? : no
Not adding sig to user sig cache file; continuing with mount.
Mounted eCryptfs
xb@dnxb:/tmp/test$ l /opt/data
total 8.0K
52953089 drwxr-xr-x 9 root root ? 4.0K Feb 27 23:16 ../
56369402 drwxr-xr-x 2 root root ? 4.0K Feb 27 23:16 ./
xb@dnxb:/tmp/test$ sudo touch /opt/data/testing
xb@dnxb:/tmp/test$ less /opt/data/testing
xb@dnxb:/tmp/test$ sudo umount /opt/data
xb@dnxb:/tmp/test$ ls -ls /opt/data
total 8
8 -rw-r--r-- 1 root root 8192 Feb 27 23:42 ECRYPTFS_FNEK_ENCRYPTED.FWbECDhE0C37e-Skw2B2pnQpP9gB.b3yDfkVU5wk7WhvMreg8yVnuEaMME--
xb@dnxb:/tmp/test$ less /opt/data/ECRYPTFS_FNEK_ENCRYPTED.FWbECDhE0C37e-Skw2B2pnQpP9gB.b3yDfkVU5wk7WhvMreg8yVnuEaMME--
"/opt/data/ECRYPTFS_FNEK_ENCRYPTED.FWbECDhE0C37e-Skw2B2pnQpP9gB.b3yDfkVU5wk7WhvMreg8yVnuEaMME--" may be a binary file. See it anyway?
xb@dnxb:/tmp/test$ sudo mount -t ecryptfs /opt/data /opt/data
Passphrase:
Select cipher:
...
Selection [aes]: 1
...
Selection [16]: 1
Enable plaintext passthrough (y/n) [n]: y
Enable filename encryption (y/n) [n]: y
...
Would you like to proceed with the mount (yes/no)? : yes
...
in order to avoid this warning in the future (yes/no)? : no
Not adding sig to user sig cache file; continuing with mount.
Mounted eCryptfs
xb@dnxb:/tmp/test$ ls -ls /opt/data
total 8
8 -rw-r--r-- 1 root root 0 Feb 27 23:42 testing
xb@dnxb:/tmp/test$
|
This happens if the file system is encrypted; the FS needs to store extra metadata for the file, even if it is empty.
As I happen to have a machine handy with a vanilla ecryptfs mount (Ubuntu 12.04-LTS), I can confirm that an empty file will get 8 blocks:
$ touch test
$ ls -ls test
8 -rw-rw-r-- 1 admin admin 0 feb 27 16:45 test
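For comparison, on an ordinary unencrypted filesystem a freshly created empty file normally allocates no data blocks at all, which is what makes the ecryptfs metadata overhead stand out. A quick check with GNU stat:

```shell
# Create an empty file and look at its size and 512-byte block count.
# On ext4/tmpfs this typically reports "0 bytes, 0 blocks"; on an
# ecryptfs mount the lower file's ~8 KiB of metadata shows up as
# 8 (or 16) blocks instead.
f=$(mktemp)
stat -c '%s bytes, %b blocks' "$f"
rm -f "$f"
```

Running the same two-liner inside and outside the encrypted mount makes the difference visible immediately.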
| How is it possible 8 blocks get allocated but file size 0? |
1,330,333,166,000 |
I can't do ssh public key login to my server and I think this issue is related to the fact my home is encrypted. I chose the option "encrypt my home folder" under the Ubuntu install setup. The permissions on /home/MY-USER are 700.
I've tried another workstation and everything works fine there. I would be glad if someone helped me get out of this without removing the encryption.
|
The underlying problem is that sshd cannot read ~/.ssh/authorized_keys before you log in, because your home directory is still encrypted at that point. In the sshd_config file, you can change the location where sshd looks for your authorized keys. You could make a new folder at /etc/ssh/keys/, put your authorized_keys file in there, and then point the AuthorizedKeysFile option in /etc/ssh/sshd_config at the new location. Since that path lives outside your encrypted home, it is readable at login time; you'll still want to take certain measures to keep the folder's permissions tight.
This is assuming you're the only user of the computer. If not, you can make folders like /etc/ssh/keys/john/ and /etc/ssh/keys/dogbert/ and then set the AuthorizedKeysFile option to /etc/ssh/keys/%u/authorized_keys
| Can't do SSH public key login under encrypted home |
1,330,333,166,000 |
Ecryptfs encrypts filenames and sometimes I need to find particular file, so I would like a tool to map the encrypted filenames back to their plaintext file name.
|
You would want to use the utility ecryptfs-find
| Is there a tool to map ecryptfs plaintext and encrypted filenames? |
1,330,333,166,000 |
On my Debian testing system, I want to completely conceal the home folders. That means I not only want the data to be encrypted, but I also want to preclude determining any (or most) information from the encrypted data.
For instance, file names should also be encrypted. But not being an expert in data protection, maybe other file/folder attributes also need to be encrypted to ensure privacy.
I considered ecryptfs to achieve this (Package ecryptfs-utils)
However, is this the right choice for my needs?
I also would appreciate links to step-by-step instructions on the implementation of encrypted home-folders in Debian very much!
[edit] I do a fresh install, therefore it's not necessary to migrate a previously unencrypted home folder.
|
Ecryptfs stores each encrypted file in one file (the lower file, in ecryptfs terminology). The directory structure of the lower files mirrors that of the payload files, although the file names are encrypted. The metadata (modification times, in particular) of the lower files also reveals that of the payload files. The size of the lower file is slightly larger than the size of the payload (with a fixed overhead for Ecryptfs's metadata)¹.
If you're storing your own work, where the attacker would already know roughly what kinds of data you have (“I already know this is a source code tree, and I know these are spreadsheets; what I want to know is what's in them!”), none of that is a problem. But if you're storing directory trees that may be identified by their layout (directory structure, approximate sizes, dates), then Ecryptfs is not the right tool for you.
Use encryption at the block device level. Linux provides this with dm-crypt. You can encrypt either the whole disk (except for a small area for the bootloader), or encrypt /home or some other partition. If you don't encrypt the whole disk, keep in mind that confidential information might end up in other places, especially the swap space (if you have any encrypted data anywhere, you should encrypt your swap). Note that if you go for whole-disk encryption, your computer will not be able to boot unattended, you will have to type your passphrase at the keyboard.
Since the whole block device is encrypted, the location of file content and metadata cannot be detected by an attacker who steals the disk. Apart from a header at the beginning of the encrypted area, the content is indistinguishable from random noise. An attacker could derive some information from seeing multiple snapshots of the encrypted data and studying how various sectors evolve over time, but even with this it would be hard to find out anything interesting, and this doesn't apply if you stop modifying the data after the attacker has seen the ciphertext (as in the case of a disk theft).
Many distributions offer the possibility to create a dmcrypt volume or encrypt the whole disk at install time. You may have to select the “advanced” or “server” installation image as opposed to the “desktop” or “basic” image.
The tool to manipulate dm-crypt volumes is cryptsetup. To create a dmcrypt volume, create a partition /dev/sdz9, say, then run cryptsetup luksFormat /dev/sdz9. You'll need to add the volume to /etc/crypttab; use cryptsetup luksOpen to activate the volume on the spot, or cryptmount -a after you've set up /etc/crypttab. Dm-crypt is only a cipher layer, so you'll need to make a filesystem on the encrypted volume.
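Putting those cryptsetup steps together, a sketch of the manual route — the device /dev/sdz9, the mapping name crypthome, and ext4 are placeholder choices; everything runs as root, and luksFormat destroys any existing data on the partition:

```shell
cryptsetup luksFormat /dev/sdz9            # set up the LUKS header; prompts for a passphrase
cryptsetup luksOpen /dev/sdz9 crypthome    # unlock; appears as /dev/mapper/crypthome
mkfs.ext4 /dev/mapper/crypthome            # dm-crypt is only a cipher layer: make a filesystem
mount /dev/mapper/crypthome /home

# /etc/crypttab entry so the volume is offered for unlocking at boot:
#   crypthome  /dev/sdz9  none  luks
```

The corresponding /etc/fstab entry then refers to /dev/mapper/crypthome like any other block device.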
Install Backtrack 5 r2 into running LUKS setup installed with ubuntu has a tutorial on setting up dm-crypt entirely manually.
¹ Experimentally, with default settings, the lower file size is the payload file size, rounded up to a multiple of 4kB, plus an 8kB overhead.
| conceal home folder completely - is ecryptfs the right choice? |
1,330,333,166,000 |
I have an encrypted shared folder on my Synology NAS DS413 (which uses ecryptfs). I can manually mount the encrypted folder and read the decrypted files without issue, using Synology's GUI. For some reason, I have never been able to mount the encrypted folder using my passphrase, but I can always do it by using the private key generated during ecryptfs setup.
So I have since been doing some research on decrypting the encrypted files without a Synology (for example, if this thing catches fire or is stolen and I need to restore from backup). I've read several threads and howtos on decrypting Synology/ecryptfs encrypted shares using Linux and ecryptfs-utils. But the howtos always tell you to provide the passphrase and never mention the use of the key for decryption. So my question is: how do I decrypt using the key (which works to mount and decrypt with Synology's software)? The key I have is 80 bytes and is binary. The first 16 bytes are integers only and the remaining bytes appear to be random hex.
Thanks for any tips!
|
Short answer: Use the passphrase $1$5YN01o9y to reveal your actual passphrase from the keyfile with ecryptfs-unwrap-passphrase (the backslashes escape the $ letters):
printf "%s" "\$1\$5YN01o9y" | ecryptfs-unwrap-passphrase keyfile.key -
Then use your passphrase with one of the instructions you probably already know, like AlexP's answer here or Robert Castle's article.
Or do it all in a single line:
mount -t ecryptfs -o key=passphrase,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=no,ecryptfs_enable_filename_crypto=yes,passwd=$(printf "%s" "\$1\$5YN01o9y" | ecryptfs-unwrap-passphrase /path/to/keyfile.key -) /path/to/encrypted/folder /path/to/mountpoint
I just tested the whole decryption process with a keyfile and can confirm its working:
Created a new encrypted shared folder in DSM 6.2 and downloaded the keyfile.
Shut down the NAS, removed a drive, connected it to a Ubuntu x64 18.04.2 machine and mounted the raid and volume group there.
Installed ecryptfs-utils and successfully got access to the decrypted data using the mount command mentioned above with the downloaded keyfile.
Credits: I found that $1$5YN01o9y-passphrase in a post in a German Synology forum. The user that probably actually found out the secret in 2014 is known there as Bastian (b666m).
| how to decrypt ecryptfs file with private key instead of passphrase |
1,330,333,166,000 |
I installed Ubuntu LTS 14.04 server edition on a remote computer, and added my local public key to ~/.ssh/authorized_keys on the remote computer. I found that I still needed to use password to log in the remote computer, even after setting the permission of ~/.ssh to 700, and ~/.ssh/* to 600 on the remote computer. However, once I log in, I can start using public key for authorization for other ssh sessions.
My home directory is encrypted.
How can I fix this?
|
Here is the solution from the link I posted in my comment. This comes from here, which references this superuser post.
Create .ssh folder in /home for the keys to be stored
sudo mkdir /home/.ssh
Move existing authorized_keys file into .ssh dir as username
sudo mv ~/.ssh/authorized_keys /home/.ssh/username
Create symbolic link to authorized_keys file in user .ssh dir
ln -s /home/.ssh/username ~/.ssh/authorized_keys
Update sshd_config file to set the new path for the authorized_keys file
sudo vim /etc/ssh/sshd_config
Change the AuthorizedKeysFile line to:
AuthorizedKeysFile /home/.ssh/%u
Reboot the computer
sudo shutdown -r now
Login to your server and you should be presented with a minimal un-decrypted home directory... You will need to create and edit a .profile file in there to get ecryptfs to mount your home directory.
sudo vim ~/.profile
Add these lines:
ecryptfs-mount-private
cd /home/username
Log out/Restart, and go back in again. You should be prompted for your password after SSH key auth, and then be presented with your decrypted home directory.
You should now be able to login using SSH keys every time, no matter if your home dir is decrypted or not.
| SSH public keys not working; my home directory is encrypted |
1,330,333,166,000 |
I have an SSD disk with several partitions. One of them is has a btrfs volume, mounted as /home, which holds an ecryptfs home directory.
When I trim the volumes, it seems that fstrim doesn't trim data blocks on that volume. Why? Below you can see all the information about the setup, and the procedure I follow, with comments.
$ cat /etc/fstab:
UUID=xxx / ext4 errors=remount-ro 0 1
UUID=yyy /media/vfio ext4 defaults 0 2
UUID=zzz /home btrfs defaults 0 2
$ mount | grep sda:
/dev/sda5 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda1 on /media/vfio type ext4 (rw,relatime,stripe=32721,data=ordered)
/dev/sda2 on /home type btrfs (rw,relatime,ssd,space_cache,subvolid=5,subvol=/)
$ ls -la /home /home/myuser/.Private # summary:
/home:
.ecryptfs
myuser
/home/myuser/.Private -> /home/.ecryptfs/myuser/.Private
$ df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 16G 11G 4,7G 69% /
/dev/sda1 93G 52G 36G 60% /media/vfio
/dev/sda2 828G 542G 286G 66% /home
/home/myuser/.Private 828G 542G 286G 66% /home/myuser
I execute fstrim on all the volumes, for the first time.
$ fstrim -va:
/home/myuser: 286,4 GiB (307518836736 bytes) trimmed
/home: 286,4 GiB (307485450240 bytes) trimmed
/media/vfio: 40,4 GiB (43318886400 bytes) trimmed
/: 5,4 GiB (5822803968 bytes) trimmed
It seems that fstrim runs twice on the /home tree, due to the additional ecryptfs mount. This would be ok (I could avoid it by running an fstrim with specific mountpoints). The problem is that trimming /home is not working as expected, as each run finds and trims the same amount of data.
This is shown by a further run.
$ fstrim -v / (this is ok):
/: 0 B (0 bytes) trimmed
$ fstrim -v /home (this isn't ok):
/home: 286,4 GiB (307478061056 bytes) trimmed
Note that the sda2 (/home) trimming takes some time to run, so it's actually doing something.
|
It is a common misunderstanding to worry about the sizes reported by fstrim.
It really doesn't mean anything. Just ignore it.
fstrim just issues the appropriate ioctl, everything else is the decision of the filesystem, and filesystems behave very differently. For example, ext4 tries to avoid trimming the same things over and over, so you will see 0 bytes trimmed. xfs doesn't care and trims everything that's free, so you'll always see <roughly free space> bytes trimmed. Other filesystems may do other things, it all depends on how the filesystem chose to implement the FITRIM syscall logic, if it's implemented at all.
As long as the amount of data trimmed is not larger than free space, you should be fine regardless of what fstrim (the filesystem, really) reports.
In the end only the SSD itself really knows what's currently trimmed and what not. Trimming already trimmed blocks does not cause any harm whatsoever.
Don't make conclusions based on x bytes trimmed as reported by fstrim.
If you want to verify that data was trimmed, you have to check raw data on the disk. ( https://unix.stackexchange.com/a/85880/30851 ) but that method might not work for btrfs, I have never tried.
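Independently of what the filesystem reports, you can at least verify that the kernel believes the device supports discard at all; non-zero DISC-GRAN/DISC-MAX columns mean TRIM requests can actually reach the SSD (the device name here is the one from the question):

```shell
lsblk --discard /dev/sda
```

If both columns are 0, no amount of fstrim tuning will help, and the problem is the device or the stack in between.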
| Why fstrim appears not to trim data blocks on btrfs (+ecrypts)? |
1,330,333,166,000 |
Are there any security related risks when mounting with root privileges compared to mounting with regular users?
I have a script which does some ecryptfs mounting with non-root privileges (by design), but it doesn't work as expected on all required Linux systems, so I'm wondering if switching to mounting as root is a good idea.
Any help is greatly appreciated.
|
Deep down, mounting is performed by root anyway: only root can call the mount system call. Programs such as mount, pmount and fusermount are setuid root and restrict what non-root callers are allowed to mount.
If you're mounting a filesystem that doesn't implement file ownership (e.g. FAT), the user calling mount will end up owning the files (unless overridden by a mount option). Other than that, it doesn't matter who does the mounting.
I'm not saying that mounting as root is the right solution in your scenario. I don't know what your scenario is. But there is no direct security risk in doing the mounting as root as opposed to some other user.
| Are there any security risks when mounting as root? |
1,330,333,166,000 |
I'm using ecryptfs for an encrypted home directory. I would like to try out the mount option "ecryptfs_xattr" on my encrypted home directory, because it will probably improve performance. Can I specify this option somewhere, and still have it decrypt the home when I log in? (I assume I have to re-create the encrypted home directory, that's no problem)
|
First of all, it's extremely doubtful that you'll see a noticeable performance improvement using xattrs for eCryptfs metadata.
As for specifying particular mount options, you can sort of do this using the "ALIAS" feature, which I've documented in the mount.ecryptfs_private manpage. Here, you can add some fstab-style mount options, which can work for other eCryptfs encrypted directories, but unfortunately not $HOME. The reason for this is that if you mangle these options, you could render your $HOME directory unmountable, so we've restricted the options you can tweak for encrypted $HOME. Sorry.
Full disclosure: I'm one of the authors and maintainers of eCryptfs.
| Mount options with ecryptfs encrypted home |
1,330,333,166,000 |
Can I decrypt an ecryptfs Private directory from a script?
My basic use case for this type of activity is for doing remote backups. Imagine you have a machine (call it privatebox) with an encrypted private directory that stores your photos (or some other sensitive information). It is only decrypted upon logging in. And imagine that you want to be able to write a script on a remote machine that will log into the privatebox, decrypt the directory to add a photo, then re-encrypt it and log out. All without user interactive steps being required (maybe it runs from cron). Note that the passphrase for the privatebox would NOT be stored on the privatebox in plain text or anything. And since it would be encrypted (except during the update) it would be protected if someone obtained the SD card, etc.
Such a script would work like this (in my mind):
setup private directory on privatebox that is encrypted with a passphrase
setup ssh keys from local machine to privatebox so you can use ssh non-interactively (cron can login)
Then what? How do you decrypt a private folder non-interactively if you know the passphrase?
It seems that ecryptfs is specifically designed to not allow this (even with SSH key trickery, you still have to manually mount your private directory).
Basically, what I'm looking for is a non-interactive version of 'ecryptfs-mount-private' or something similar if anyone knows a solution. Something like:
% ecryptfs-mount-private -p $PASSPHRASE
Where I could pass the passphrase instead of having to type it.
If ecryptfs can't do this, does anyone know of an alternative? Thanks!
|
Okay I figured this out. Thanks for your help Xen2050, I don't have enough reputation here to give you an upvote (yet).
Here's the bash script that works for me:
#Set this variable to your mount passphrase. Ideally you'd take it from $1 so that the actual value isn't stored in the script, which would defeat the purpose.
mountphrase='YOURMOUNTPASSPHRASE'
#Add tokens into user session keyring
printf "%s" "${mountphrase}" | ecryptfs-add-passphrase > tmp.txt
#Now get the signature from the output of the above command
sig=`tail -1 tmp.txt | awk '{print $6}' | sed 's/\[//g' | sed 's/\]//g'`
rm -f tmp.txt #Remove temp file
#Now perform the mount
sudo mount -t ecryptfs -o key=passphrase:passphrase_passwd=${mountphrase},no_sig_cache=yes,verbose=no,ecryptfs_sig=${sig},ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_passthrough=no,ecryptfs_enable_filename_crypto=no /home/user/.Private /home/user/Private
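As an aside, the bracketed signature can also be extracted in a single pipeline, with no temp file; a sketch (the hex value below is made up, not a real signature):

```shell
# Sample last line of `ecryptfs-add-passphrase` output; the hex value is invented.
sample='Inserted auth tok with sig [28e0a9cf22d4bf15] into the user session keyring'

# Pull out whatever sits between the square brackets:
sig=$(printf '%s\n' "$sample" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "$sig"   # → 28e0a9cf22d4bf15
```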
Note that I had to disable filename encryption for this to work. When I tried using filename encryption I got a library error during the mount. In order to not have filename encryption, you must use the following when creating your Private/ directory:
ecryptfs-setup-private -n
This is now working for me.
In response to some people who would say 'why do it this way?', well, I don't always want to mount my private data on each login. I want a quick way of mounting the data that does not require my actual user login password. Suppose I want to share the data with someone else: I would have to give them my password. Using the mount passphrase lets me have essentially a mount password that can be less secure than my login. This is also useful if you want to automount your data and store the phrase somewhere (perhaps on a USB stick as a key to unlock your data). I would never want to store my login password anywhere in plain text. But if you know the content of your data, and you know the data itself is less private than your own account, this is a fine solution.
| Non-interactive ecryptfs directory encrypt/decrypt |
1,330,333,166,000 |
I have been using ecryptfs encryption for a long time. I prefer per-user encryption: even root cannot read a user's files while that user is not logged in. When a user is not logged in, no one (including root) should be able to read their files. But ecryptfs is not an encryption scheme for the entire system.
So today I am thinking of using LUKS to encrypt the entire partition and then using ecryptfs to encrypt each user's home. Is this the recommended (standard) way to achieve what I want? Should I use both LUKS and ecryptfs at the same time?
Thanks a lot.
|
This is exactly what I do with my desktop. I have my entire partition encrypted with LUKS. And then I have my home directory encrypted using ecryptfs. The reason I encrypt my home directory using ecryptfs is because the desktop is used by my partner as well.
If you are the sole user of your system, ecryptfs may not be necessary.
Make sure you use different passwords for LUKS and ecryptfs so that compromise of the LUKS password will still protect the files in your home directory.
| Is it good to have both LUKS and ecryptfs encryptions at the same time? |
1,330,333,166,000 |
I currently have an unencrypted external hard drive that I use as a backup for my encrypted (with LUKS) main machine. To update my backup, I simply log in to the main machine and rsync to my external hard drive. Clearly, having an unencrypted backup for material that was worth encrypting in the first place is a bad idea. However, due to time constraints, I am unable to regularly update my backup without the help of something like rsync. It follows that any encryption method that I use on the external drive must be compatible with rsync. However, I have run into the following issues:
Userspace stackable encryption methods like EncFS or eCryptfs appear to both take up a lot of space and not play nice with rsync. The hidden files reponsible for the encryption seem to change frequently enough that rsync ends up having to copy so many files that it's barely worth even using rsync.
luksipc would be an option, but its latest documentation tells me to instead use the cryptsetup-reencrypt tool from dm-crypt. Sadly, whenever I look up the relevant documentation on the Arch wiki for cryptsetup-reencrypt I can neither tell what to do, nor whether it'll work with rsync. The cryptsetup-reencrypt tool also seems to be new enough that it's hard to find documentation on it that someone at my level can read.
Plain LUKS, or anything similar, isn't an option, because the earlier-mentioned time constraints prevent me from being able to wipe the drive and make the backup again from scratch.
Duplicity could be an option, but it doesn't seem able to encrypt any unencrypted files that are on the external hard drive (i.e. where it's copying to).
Overall, it looks like #2 might be my best option for the goal of encrypting my external drive and keeping that drive up to date with rsync, but I don't really know where to begin and I'm not very open to the possibility that I might have to wipe the drive before encrypting it. Am I missing anything useful?
|
Nowadays cryptsetup itself supports non-destructively transforming an unencrypted partition into a encrypted LUKS device with the reencrypt subcommand.
Assuming that your external drive is accessible via /dev/sdX and the current filesystem is located on /dev/sdXY, you first need to shrink the filesystem to make room for the LUKS header and some scratch space for the encryption operation (32 MiB works). The exact command depends on your filesystem, e.g. for ext4:
e2fsck -f /dev/sdXY
resize2fs /dev/sdXY NEWSIZE
(Note that XFS doesn't support shrinking, thus you would need to fstransform it first ...)
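NEWSIZE is left abstract above. resize2fs accepts a size in filesystem blocks (or with K/M/G suffixes); a sketch of the arithmetic with hypothetical numbers — a 100 GiB filesystem with 4 KiB blocks, shrunk by 64 MiB to stay safely above the 32 MiB minimum:

```shell
block_size=4096                                       # from: tune2fs -l /dev/sdXY
fs_blocks=$((100 * 1024 * 1024 * 1024 / block_size))  # hypothetical current size
shrink=$((64 * 1024 * 1024 / block_size))             # blocks to give up
echo $((fs_blocks - shrink))                          # → 26198016, the value for: resize2fs /dev/sdXY <blocks>
```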
Trigger the encryption:
cryptsetup reencrypt --encrypt /dev/sdXY --reduce-device-size 32M
Enlarge the filesystem again:
cryptsetup open /dev/sdXY backup
resize2fs /dev/mapper/backup
cryptsetup close backup
(without a size argument resize2fs uses all available space)
Since you don't change the content of your existing filesystem you can continue using rsync. Instead of something like
mount /dev/sdXY /mnt/backup
rsync -a /home /mnt/backup
umount /mnt/backup
you now have to do something like:
cryptsetup open /dev/sdXY backup
mount /dev/mapper/backup /mnt/backup
rsync -a /home /mnt/backup
umount /mnt/backup
Since you mention your time constraints: cryptsetup reencrypt isn't necessarily as fast as a cryptsetup luksFormat followed by a fresh rsync.
An alternative to the above is to switch to Restic for your backup needs. Restic encrypts all backups, supports incremental backups and is very fast.
If your external drive is large enough you can start with Restic by initializing a Restic repository in a new subdirectory. After the first Restic backup is finished you can remove the old unencrypted backup files. Finally, you have to wipe the free space to destroy any traces of the old unencrypted backup files.
| Encrypting a currently used external hard drive such that it can be updated with rsync? |
1,330,333,166,000 |
My system takes exactly 95 seconds to boot: 5 seconds actual boot and 90 seconds waiting for a nonexistent drive:
(...boot.log...)
A start job is running for dev-disk-by\x2duuid-6bbb4ed8\x2d53ea\x2d4603\x2db4f7\x2d1205c7d24e19.device (1min 29s / 1min 30s)
Timed out waiting for device dev-disk-by\x2duuid-6bbb4ed8\x2d53ea\x2d4603\x2db4f7\x2d1205c7d24e19.device.
This device is not listed in fstab, and I did not even manage to find the piece of hardware (usb disks etc.). Where can it come from and how can I disable it?
I have ecryptfs on my home directory, and I have manually disabled swap in order to save my SSD disk.
|
The file /etc/crypttab is a (lesser-known) counterpart of fstab for managing crypto filesystems. The default installation of Ubuntu configured encrypted swap:
cryptswap1 UUID=6bbb4ed8-53ea-4603-b4f7-1205c7d24e19 /dev/urandom swap,offset=1024,cipher=aes-xts-plain64
Originally I had disabled this swap partition in fstab only, which is not enough.
Anybody who knows more about the purpose and inner workings of /etc/crypttab is welcome to extend this vague self-answer of mine.
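To expand slightly on the entry above: per crypttab(5), the four whitespace-separated fields are target name, source device, key file, and options. Annotated (same entry as above):

```
# <target name>  <source device>                             <key file>    <options>
cryptswap1       UUID=6bbb4ed8-53ea-4603-b4f7-1205c7d24e19   /dev/urandom  swap,offset=1024,cipher=aes-xts-plain64
#
# With /dev/urandom as the "key file", a fresh random key is generated on every
# boot, so swap contents are unrecoverable after shutdown. That also means the
# device is set up anew each boot, which is why disabling it in fstab alone
# is not enough: the crypttab line must be removed or commented out as well.
```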
| Why does systemd wait for a disk not present in `fstab`? |
1,330,333,166,000 |
I'm using ext4 encryption. https://wiki.archlinux.org/index.php/Ext4#Using_file-based_encryption
Before I decrypt a directory, I can see lots of encrypted filenames in it.
I would like to copy the encrypted files so that I can decrypt them on a different machine.
I could do this with ecryptfs. How do I do this with ext4 encryption.
|
You can see encrypted & padded filenames, but you should be unable to read file contents. So trying to copy the files unencrypted will result in errors such as:
cp: cannot open 'vault/YgI8PdDi8wY33ksRNQJSvB' for reading: Required key not available
So you are pretty much not supposed to do this. The practical answer is to decrypt it, then copy it. The copy will be re-encrypted if you picked an encrypted location as the target directory. Over the network with rsync/ssh the transfer will be encrypted also. So most things work, just storing it in the cloud is probably out of the question. Filesystem specific encryption does not work outside of the filesystem.
Circumventing the read barrier is not sufficient: unlike ecryptfs, where all metadata is stored in regular files, ext4 encryption involves metadata hidden in the filesystem itself, not visible to you, so you cannot easily copy it.
The closest I found is e4crypt get_policy, e4crypt set_policy which allows you to encrypt a directory with an existing key without knowing the actual key in clear text. But it only works for empty directories, not for files.
You can also encrypt a vault directory, populate it with files, then hardlink those files to the root directory, then delete the vault directory. You end up with encrypted files (contents) in the root directory (which you are not supposed to be able to encrypt). The filesystem just knows that the file is encrypted. (Not recommended to actually do this.)
If you must make a copy anyway, I guess you can do it the roundabout way:
make a raw dd copy of the entire filesystem
change filesystem UUID
delete the files you didn't want
Otherwise I guess you'd need a specialized tool that knows how to replicate an encrypted directory + metadata from one ext4 filesystem to another, but I didn't see a way to do so with e4crypt or debugfs.
debugfs in particular seems to be devoid of policy / crypt related features except for ls -r which shows encrypted filenames in their full glory as \x1e\x5c\x8d\xe2\xb7\xb5\xa0N\xee\xfa\xde\xa66\x8axY which means the ASCII representation regular ls shows is encoded in some way to be printable.
The actual filename is padded to, and stored in the filesystem as, 16 random bytes, but regular ls shows it as 22 ASCII characters instead. Copying such a file the traditional way would create a file stored as its ASCII character representation when you really need to store it as raw bytes. So that's just bound to fail at so many layers.
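The 16-bytes-to-22-characters arithmetic is consistent with an unpadded base64-style encoding (4 output characters per 3 input bytes). ext4 actually uses its own base64 variant, so this is only an illustrative sketch of the length relationship:

```python
import base64
import os

raw = os.urandom(16)                        # like one 16-byte encrypted name
shown = base64.b64encode(raw).rstrip(b"=")  # unpadded base64 representation
print(len(raw), len(shown))                 # → 16 22
```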
tl;dr if there is a way to do it then I don't know about it :-}
| Copying ext4 encrypted files |
1,330,333,166,000 |
On Ubuntu 20.04 - and I have encountered this with (vanilla) GNOME before - with KDE Plasma (no, not Kubuntu!), I am faced with a strange thing that happens every few hours or so and for which I have no explanation or remedy as of yet.
Somehow the ecryptfs-encrypted home folder which gets mounted when I log on "disappears" out of the blue. I mostly notice it due to weird symptoms starting to occur, such as all sorts of programs reporting files from $HOME they can not find, which they deem corrupt or for which they simply report they can't open them.
The first time this happens, I can usually run /usr/bin/ecryptfs-mount-private, enter my passphrase and be done with it. Alas, this still doesn't recover functionality of certain KDE desktop elements. As an example, I am unable search for installed programs from that point on and so everything that isn't already running becomes unavailable until I log off and back on.
Subsequent times this happens and I attempt using /usr/bin/ecryptfs-mount-private I usually see:
$ /usr/bin/ecryptfs-mount-private
Enter your login passphrase:
Inserted auth tok with sig [2123456789012312] into the user session keyring
mount: No such file or directory
Even logging off in such situation becomes a minor nightmare as you can see from the following screenshot. The dialogs pop up merely based on the fact that I am opting to log off!
So my questions (yeah, plural ... since I'm currently at a loss how to even start diagnosing this):
which entity could be causing this automatic removal of my $HOME? ... I was reminded of weird behavior like when sessions get purged when you log off and so suddenly your Screen or Tmux sessions also get killed (unless you use loginctl with enable-linger)
what are the steps to troubleshoot such an issue? (keep in mind that the desktop behaves all weird when this happens!). I tried to look at journalctl output and in the logs with ripgrep, but I don't really know what terms to look for ...
suppose this is a known bug, what's the workaround if any?
It reminds me a bit of Tmux/Screen getting killed when logging out, something I'd not normally expect and that can be prevented only by starting Tmux/Screen after logging into SSH (i.e. separate login session) or enabling session lingering.
The one thing I found with journalctl which seems odd and correlates to the "lost" home directory is the following:
Sep 01 23:39:11 machine smbd[220424]: pam_unix(samba:session): session closed for user johndoe
Sep 01 23:39:11 machine systemd[1]: home-johndoe.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit home-johndoe.mount has successfully entered the 'dead' state.
Sep 01 23:39:11 machine systemd[1977]: home-johndoe.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
... but that would indicate that something caused by the Samba daemon on behalf of my interactive user account leads to another part of the system assuming that I logged off and unmounting my $HOME ... that sounds exceedingly unlikely, no?
The above pattern, pam_unix(samba:session) closing a session for my username followed by the $HOME folder becoming inaccessible, is the smoking gun, but also the only one so far. Currently reading up on how this whole session business is supposed to work and why that mount unit "thinks" it can "reap" my mounted home folder while I am still interactively logged on.
Edit #1: since the comment indicates that the configuration of Samba could be relevant, I am adding it here. I replaced my actual username with johndoe in the dump from testparm:
# Global parameters
[global]
debug uid = Yes
dns proxy = No
guest account = johndoe
log file = /var/log/samba/log.%m
map to guest = Bad Password
max log size = 1000
obey pam restrictions = Yes
panic action = /usr/share/samba/panic-action %d
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
passwd program = /usr/bin/passwd %u
security = USER
server role = standalone server
server string = %h server (Samba, Ubuntu)
syslog = 7
syslog only = Yes
workgroup = NULL
idmap config * : backend = tdb
[sharename]
force create mode = 0660
force directory mode = 0770
guest ok = Yes
guest only = Yes
path = /data/sharedir
read only = No
As you can tell, nothing special, but my guess is that the fact that I am "defaulting" to my own user as the guest user via a global setting is somehow causing the login session to appear for my user.
There are no entries with samba:session marker other than a handful more entries like the log line reproduced above.
Edit #2: my /etc/pam.d/samba looks like this:
@include common-auth
@include common-account
@include common-session-noninteractive
... and so I attempted to edit those referenced files and add debug (separated by a blank space) on every line that referenced either pam_unix or pam_ecryptfs. The result - after a reboot - was that I could no longer log into KDE at all. It simply stalled. So I used one of the other terminals to log on as root and revert my changes (which thanks to etckeeper was trivial).
Edit #3: a temporary workaround is to disable session lingering for my user by setting KillExcludeUsers=root johndoe in /etc/systemd/logind.conf or "locally" via loginctl. Which makes this seem more and more like a defect. ... Edit #4: the workaround turned out not to work.
|
Well, that's stupid of course, since I "wasted" 200 reputation on a bounty mere hours ago, but I seem to have solved the puzzle. Anyone providing hints what to look out for and try which are more straightforward than mine will get the bounty.
Alright, so it turned out that pam_unix from the logs was an important clue. I was able in the end to provoke the situation and thereby reproduce the unmounting reliably.
What I did is also described in the respective ticket on launchpad.net, but I'll reproduce the relevant parts which aren't in the question above here.
My smb.conf before I dug into this issue looked like this as per testparm output:
# Global parameters
[global]
debug uid = Yes
dns proxy = No
guest account = johndoe
log file = /var/log/samba/log.%m
map to guest = Bad Password
max log size = 1000
obey pam restrictions = Yes
panic action = /usr/share/samba/panic-action %d
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
passwd program = /usr/bin/passwd %u
security = USER
server role = standalone server
server string = %h server (Samba, Ubuntu)
syslog = 7
syslog only = Yes
workgroup = NULL
idmap config * : backend = tdb
[sharename]
force create mode = 0660
force directory mode = 0770
guest ok = Yes
guest only = Yes
path = /data/sharedir
read only = No
I opted for a sort of brute-force trial&error method. In Tmux I had several panes open, while attempting to produce an MWE for a defect report. This was effectively what I was running:
while mountpoint /home/johndoe; do sudo service smbd restart; date; sleep 2s ; done
watch 'mount|grep ecryptfs'
sudo tail -F /var/log/auth.log|grep samba:session
... in another Tmux window I then edited/saved the /etc/samba/smb.conf.
Bang!
The auth.log showed the log entry (smbd[144802]: pam_unix(samba:session): session closed for user johndoe) and the mount point vanished.
I had found how to reproduce the annoying condition at last.
Given its name my first pick was indeed the obey pam restrictions setting. So I set it to no (but I could have simply commented it out, because it defaults to no).
Restarted the smbd service, logged off and back in and attempted to reproduce the error condition again.
This time it could not be reproduced. So evidently the obey pam restrictions setting had influenced this whole pam_unix and samba:session business.
Edit #1: in the mentioned ticket further information was requested. In particular in pam-auth-update I was asked to deactivate all but the Unix authentication setting. Like this:
[*] Unix authentication
[ ] Register user sessions in the systemd control group hierarchy
[ ] Create home directory on login
[ ] eCryptfs Key/Mount Management
[ ] Inheritable Capabilities Management
And it turned out that not the second systemd-related setting was the issue, but the fourth one: eCryptfs Key/Mount Management.
Lessons learned
don't place a bounty if you are going to investigate it yourself 😉
cargo cult garbage can really harm what you're doing ... this particular setting was one I had sort of carried around in my configuration management for smb.conf while evidently it could have been thrown out by now ... oh well
if all else fails, brute force and trial & error seem to be viable methods to hunt down a root cause
| ecryptfs mounted home folder "disappears" when Samba closes session for my (interactive) user via PAM |
1,330,333,166,000 |
If I enable filename encryption in eCryptfs, when I unmount the filesystem all my files have names which start with "ECRYPTFS_FNEK_ENCRYPTED". I understand the need for the file system to have a signature in the filename which it can use to identify a filename as encrypted, but I would like to use something more discreet. Is there a way that I can change what this string is so that there aren't a bunch of files on the filesystem boldly listed as "ENCRYPTED"? It seems like this could easily be a mount option but if there is one I am missing it. Is there something somewhere else, like a PAM configuration file I can use?
|
The prefix is a constant in the kernel source:
fs/ecryptfs/ecryptfs_kernel.h
188:#define ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX "ECRYPTFS_FNEK_ENCRYPTED."
189:#define ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX_SIZE 24
It may just be a matter of editing those constants, recompiling the ecryptfs kernel module, and loading that.
| Can the eCryptfs encrypted filename prefix be changed? |
1,330,333,166,000 |
I'm new here and I hope to find an answer. Please tell me if you need more information.
I have an disk encryption for my home partition on Linux 4.13.0-43-generic x86_64 (Ubuntu 16.04.4 LTS).
Today when I started the laptop, I got a message that my disk is full and there is no space available any more. With the disk usage analyzer I saw that the encryption directory (/home/.ecryptfs/bianca/.Private) is completely full; the other partitions have enough space.
I did not find any answer via Google, but I would like to know whether there may be encryption files that are no longer needed because they are outdated or old? If yes, is it possible to remove such files or directories from this directory? Is there any tool that can delete files if they are no longer used?
Or do you have any other recommendation for what I can do?
I would be glad if someone has already had experience with this and can share it with me.
Thank you in advance.
Bianca
edit:
Output of lsblk:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 465,8G 0 disk
├─sda2 8:2 0 1K 0 part
├─sda5 8:5 0 465,3G 0 part
│ ├─ubuntu--mate--vg-swap_1 253:1 0 15,7G 0 lvm
│ │ └─cryptswap1 253:2 0 15,7G 0 crypt [SWAP]
│ └─ubuntu--mate--vg-root 253:0 0 449,6G 0 lvm /
└─sda1 8:1 0 487M 0 part /boot
|
/home/.ecryptfs/bianca/.Private contains the encrypted versions of all your home files, when you're logged in they're decrypted on-the-fly to your home (~ or /home/bianca). It should be approximately the same size as your home when you're logged in. Delete (or backup/move) some files out of your home, not directly from /home/.ecryptfs/bianca/.Private since it's probably not clear which home files they really are.
Disk Usage Analyzer / baobab is a tool I like, or just du (there are some commands to make it more readable & sorted, a web search or man has more info)
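For the du route, one common invocation (my own habit, not something baobab-specific) that summarizes each top-level home entry and sorts human-readable sizes, largest last:

```shell
# Per-entry totals under $HOME, human-readable, sorted by size:
du -sh "$HOME"/* 2>/dev/null | sort -h | tail -5
```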
| Disk encryption with ecryptfs - full disk |
1,330,333,166,000 |
I have an embedded Linux system with unencrypted kernel image and initramfs in NAND flash.
My RootFS is in SD card.
I want to encrypted some files on SD Card as my SD card is easily accessible physically.
For this I am planning to use eCryptfs.
But I want to keep the keys inside NAND flash in kernel image or initramfs.
What are my options? What is the best way to secure some files on my SD card?
|
If you didn't have an initramfs, you could do it with kernel parameters. Just add a random string as kernel parameter and then use /proc/cmdline as the key for your encryption. If it's not easy to add such parameters to your boot loader, the Linux kernel has a CMDLINE config option that lets you compile it in. (Note: it is possible for kernel parameters to end up in log files and such. Whether it's suitable for your scenario depends on what is running on / writing to your SD card.)
With initramfs, of course you're free to do whatever you want. It can ask for your passphrase at bootup, or include a key, or do both using encrypted keys. It's up to you but the exact implementation depends on what your initramfs looks like without such modifications. You can look at various initramfs guides online to get an idea of how it works in principle, for example: https://wiki.gentoo.org/wiki/Custom_Initramfs
You just have to be careful to not leave an unencrypted copy of the key on the SD card itself. At the same time you should have a copy somewhere since it may be hard to get it back out of the NAND if the device ever breaks.
| eCryptfs key in kernel image or initramfs |
1,330,333,166,000 |
Is it possible to setup ecryptfs mounts to prompt for password upon bootup? Say for example /home and /var are ecryptfs folders that need to be mounted; how do I force a prompt upon bootup to ask for mount passwords?
|
Solution is to use luks/dm-crypt and then modify /etc/crypttab file to do what I need.
| Bootup prompt for ecryptfs password |
1,330,333,166,000 |
I have encrypted my home disk a long time ago and I configured pam to mount it automatically on login.
However after update to version 1.1.5-3 pam is not mounting the disk anymore.
Here are the logs :
PAM adding faulty module: /usr/lib/security/pam_ecryptfs.so
pam_unix(login:session): session closed for user ben
PAM unable to dlopen(/usr/lib/security/pam_ecryptfs.so): /usr/lib/security/pam_ecryptfs.so: cannot open shared object file: No such file or directory
Seems like the pam-ecryptfs library has been removed.
How can I fix this?
|
In ecryptfs-utils 96-1 the file pam_ecryptfs.so was installed in /lib/security, which was changed in ecryptfs-utils 96-2 to /usr/lib/security. You might just need to update your system.
| Pamd not mounting ecryptfs disk anymore since upgrade |
1,330,333,166,000 |
I tried to install eCryptfs on my server to open an eCryptfs directory I created on my home computer.
I got these errors.
$ sudo mount -t ecryptfs /home/(place)/enc/ /home/(place)/enc/
Unable to get the version number of the kernel module. Please make
sure that you have the eCryptfs kernel module loaded, you have sysfs
mounted, and the sysfs mount point is in /etc/mtab. This is necessary
so that the mount helper knows which kernel options are supported.
Make sure that your system is set up to auto-load your filesystem
kernel module on mount.
Enabling passphrase-mode only for now.
Unable to link the KEY_SPEC_USER_KEYRING into the
KEY_SPEC_SESSION_KEYRING; there is something wrong with your kernel
keyring. Did you build key retention support into your kernel?
I have no idea what that means. My server has Debian on it. Should I do something?
|
It seems that OpenVZ is the problem (again). OpenVZ containers use the host's (parent) kernel, and I can't do anything about that.
| Cannot mount eCryptfs |
1,330,333,166,000 |
Recently, we rebooted the server and got an ecryptfs mount failure:
...
Signature not found in user keyring
Perhaps try the interactive 'ecryptfs-mount-private'
user@host:~$
Could that be because of password change?
Although,
1. There's no mount password
2. We might have login password
When trying to recover mount directory, it outputs:
user@host:~$ ls
Access-Your-Private-Data.desktop README.txt
user@host:~$ ecryptfs-mount-private
Enter your login passphrase:
Error: Unwrapping passphrase and inserting into the user session keyring failed [-5]
Info: Check the system log for more information from libecryptfs
ERROR: Your passphrase is incorrect
Enter your login passphrase:
user@host:~$ sudo ecryptfs-mount-private
[sudo] password for user:
Enter your login passphrase:
Inserted auth tok with sig [ad21fabcda6abfeab] into the user session keyring
fopen: No such file or directory
user@host:~$
So, as you can see, it shows a strange error, fopen: No such file or directory, and when running ecryptfs-mount-private without sudo, it fails.
When mounting the folder using ecryptfs-recover-private and the login password, it mounts in a temporary folder like a charm.
Also, we've tried to ecryptfs-rewrap-password and it doesn't work without sudo. So, using sudo ecryptfs-rewrap-password succeeded in rewrapping, but after reboot the same situation persists.
All in all, what could this be; how to fix this auto mount encrypted home directory at login?
|
I set up an ecryptfs private folder, then removed the r & w permission from the wrapped-passphrase file to test... If you had checked the syslog right after seeing the message
Info: Check the system log for more information from libecryptfs
You would have seen lines like this:
Jan 15 00:21:48 sys ecryptfs-insert-wrapped-passphrase-into-keyring: Failed to detect wrapped passphrase version: Permission denied
Jan 15 00:21:48 sys ecryptfs-insert-wrapped-passphrase-into-keyring: Error attempting to unwrap passphrase from file [/home/user/.ecryptfs/wrapped-passphrase]; rc = [-13]
Together those would be a pretty strong arrow pointing to check the permissions of the ~/.ecryptfs/wrapped-passphrase file. (No sudo or strace required)
All in all, just make sure you're running the ecryptfs-mount-private command as the same user whose directory you're trying to mount, and that the wrapped-passphrase file has -rw------- (600) permissions and the same owner as the encrypted directory.
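The fix itself is a one-liner; sketched here on a scratch file so it runs anywhere. On a real system the target would be /home/&lt;user&gt;/.ecryptfs/wrapped-passphrase (path as in the answer above), with a matching chown to the affected user:

```shell
f=$(mktemp)           # stand-in for ~/.ecryptfs/wrapped-passphrase
chmod 600 "$f"        # owner read/write only, i.e. -rw-------
stat -c '%a' "$f"     # → 600
rm -f "$f"
```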
| "ecryptfs-mount-private" returns "fopen: No such file or directory" |
1,515,991,255,000 |
I want accomplish the following task:
On an Ubuntu system with multiple user accounts and encrypted home directories, an NFS share should be mounted after login.
I want to use systemd user service for this, but can't get it fully working.
What's working so far:
- manual mount with user rights, configured with sudo
- enabling the user service and starting it with systemctl --user start usermount.service
After a reboot systemd doesn't even know that this unit exists.
I think there is a problem in combination with an encrypted $HOME (ecryptfs in my case), because the service unit and autostart configuration are located in .config/systemd/user/. My assumption is that the systemd user process is started immediately after login, before the home directory is decrypted, and hence doesn't see the user's configuration.
What are my options to solve this task?
|
It's a bug in the ecryptfs package configuration.
You can use a quick fix:
Open /etc/pam.d/common-session and switch the lines
session optional pam_systemd.so
session optional pam_ecryptfs.so unwrap
to
session optional pam_ecryptfs.so unwrap
session optional pam_systemd.so
so that pam_systemd.so is loaded after pam_ecryptfs.so
| Use Systemd user services with ecryptfs |
1,515,991,255,000 |
I needed to reinstall Ubuntu on my hard drive because I think I corrupted some files and it wasn't booting up (I know, it was very stupid). I tried to reinstall from a LiveCD by checking the "keep the files" option. Unfortunately, I did this kind of carelessly: when it asked me for the username and password for the reinstallation, I chose something different than what was already there, which I realize was very stupid. Now I have a new user directory /home/newusername along with /home/oldusername, where /home/oldusername contains README.txt, Access-Your-Private-Data.desktop and .Private. I think .Private has all the original data encrypted and I am trying to recover it. oldusername does not actually exist as a user, by the way.
I've been reading a whole bunch of answers about how to recover my encrypted files with people that seem to have very similar problems as me, but none of the solutions provided to them seem to work for me and I don't really understand why.
I tried this: https://help.ubuntu.com/community/EncryptedPrivateDirectory#Recovering_Your_Data_Manually
And it seems like my data is still encrypted after I am done. I think one thing that I am doing differently is that I am not using a LiveCD here. I'm just running the commands on the disk.
I feel so stupid for getting to this stage, because I did so many dumb things to get to this point and it feels very hopeless. Please help if you possibly can.
|
Using a Live CD was the only thing I hadn't tried and turned out that worked. I'm not sure why it was necessary though. It was only a directory that was encrypted, so it feels like it should have been able to decrypt it. Anyway, if you are stuck like I was, then that should probably do it.
| Cannot recover encrypted files on Ubuntu |
1,515,991,255,000 |
I've recently installed Ubuntu 12.04 LTS (minimal virtual machine) on a VPS host. I've encrypted the virtual disk and encrypted the home directory of my main user account, that I'm using to access the install through SSH (using public key authentication). So far, so good.
The thing is: I can't figure out what exact action it is that makes my hidden files appear and become usable (such that I am able to use aliases, such as ll).
When I sign in initially with SSH and issue ls -al, all I see is:
dr-x------ 2 username username 4096 Feb 10 01:10 .
drwxr-xr-x 4 root root 4096 Feb 10 01:10 ..
lrwxrwxrwx 1 username username 56 Feb 10 01:10 Access-Your-Private-Data.desktop -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop
lrwxrwxrwx 1 username username 34 Feb 10 01:10 .ecryptfs -> /home/.ecryptfs/username/.ecryptfs
lrwxrwxrwx 1 username username 33 Feb 10 01:10 .Private -> /home/.ecryptfs/username/.Private
lrwxrwxrwx 1 username username 52 Feb 10 01:10 README.txt -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt
(As an en passant: why are permissions set to lrwxrwxrwx for most items here? Isn't that far too tolerant?)
Then, when I issue ecryptfs-mount-private (as per the README.txt) and issue another ls -al, I keep seeing the same as above.
One time, I believed I was able to use aliases after I issued a sudo command, but another time I was not able to. Then, another time I issued cd /home/username (which is the same directory as ~ already, is it not?), and all of a sudden all other hidden files appeared and I was able to use aliases.
But now, after a few minutes in (but perhaps I've actually imagined this), even though I'm still able to see all the hidden files, I'm not able to use aliases anymore. This makes me believe this behavior is somehow coupled with sudo, but I can't seem to figure out what exactly is going on here.
Can somebody enlighten me and explain what exact actions I need to undertake to see all hidden files in my home directory and enable aliases, and why this is (is it because of the encryption; is it coupled with sudo; or perhaps something completely different)?
If possible, preferably, I'd like this to automatically be enabled when I login with SSH. Is that possible?
edit (clarification for Hauke Laging's comment):
When I log in with SSH and immediately issue ll in ~, I get -bash: ll: command not found. (Although now, it appears to work immediately after I logged out and logged in again, but perhaps this is because the other time it was the first time after boot up? I have no idea, really. It all appears to behave rather random.)
Then, one time, I believe I issued a sudo ls -al, or some other inane command, after which ll appeared to work.
Another time ll didn't work, and only after I issued ecryptfs-mount-private and did an explicit cd /home/username, the hidden .bashrc (etc.) files appeared with ls -al, after which I was finally able to use an alias such as ll.
But, you know what? I think I'll have to investigate this a bit more thoroughly, since my vague descriptions are probably getting us nowhere. I was hoping my problem would immediately ring a bell, but it appears I have to be a bit more precise about what actions I undertook.
What I am trying to accomplish though, is this:
When I log in through SSH (whether it be the first time after boot up, or any other time after that), I want all hidden files, in my home directory, to appear immediately when I issue ls -al and that aliases, such as ll, are immediately available as well.
My analysis of the problem, thus far, is that it appears that I first have to decrypt my home dir, before I am able to see the mentioned hidden files (apart from the ecryptfs-related symbolic links) and use aliases. Is that a correct assessment?
|
Situation: you have an encrypted home directory.
Step 1: you log in over SSH. Your encrypted data is not mounted, so what you see is your “real” home directory on the (unencrypted) main filesystem. This home directory doesn't contain much that's directly usable:
~/.ecryptfs/ contains control data for your encrypted data
~/.Private/ contains your encrypted data in encrypted form
~/Access-Your-Private-Data.desktop is a desktop icon to mount your private data
~/README.txt contains human-readable instructions to mount your data.
Your .bashrc and other dot files that you're used to aren't available because they're on the encrypted volume that isn't mounted.
Step 2: you run ecryptfs-mount-private. This mounts your encrypted data to ~ (your home directory). After that point, ls -lA ~ will show your dot files such as ~/.bashrc. If you run a new instance of bash, it will therefore read your .bashrc and you'll have your aliases available.
A subtlety is that after step 2, the current directory of your interactive bash shell is still the home directory on the non-encrypted volume. Mounting the encrypted volume on ~ changed the directory that ~ refers to, but does not change the directory that bash has open¹. If you run cd ~ in bash (or the shortcut cd), it will change its current directory to what is now called ~, even though this won't affect the value reported by pwd.
If you log in over SSH while your encrypted home directory is already mounted, then as soon as your login shell starts, it sees the files in your encrypted volume, so your aliases are loaded. Mounting an encrypted volume (or any other filesystem) is a global action, it is not confined to a login session. If you run ecryptfs-umount-private, that makes your encrypted data unavailable, back to the boot-time state.
¹ Technically, this is the current directory of the bash process, not an open file, but the behavior is the same.
| What makes the files in my home directory appear, when I log in through SSH then run ecryptfs-mount-private? |
1,515,991,255,000 |
I recently powered on some old notebook with Linux Mint 11 (Katya) installed, and I thought I remembered my user password, but turns out I didn't. So I reset that password using the instructions here.
After doing that, I could login successfully with my new password, but right after that I got a series of error messages and was left with the plain Mint desktop: no program menu, no nautilus, no context menu; all I could do was restart or shutdown.
Here are the errors, in order of appearance:
Could not update ICEAuthority file /home/my_user/.ICEauthority
There is a problem with the configuration server.
(/usr/lib/libgconf2-4/gconf-sanity-check-2 exited with code 256)
The panel encountered a problem while loading "OAFIID:GNOME_mintMenu".
Do you want to delete the applet from your configuration?
Of course, I always answered "Don't delete".
Nautilus could not create the following required folders:
/home/my_user/Desktop, /home/my_user/.nautilus. Before running
Nautilus, please create these folders or set permissions such that
Nautilus can create them.
When I restarted, I selected recovery mode from the GRUB menu and managed to login into a terminal and navigate to my home folder. When I ran ls there, all my files were gone, and in their place were a .desktop file and a README.
It seems Mint realized I changed my password and took it as an attempt to hack into the system, so it encrypted the files in my home folder.
Kudos to Linux security schemes, but what can I do now? Don't want to reinstall, I need those old files.
I tried running ecryptfs-mount-private like the README suggests, but it asked me for a passphrase, and the new one doesn't work. Figures, it needs the old one.
|
It is absolutely essential that you record your randomly generated mount passphrase, without which it's impossible to recover your data. I can't stress that more strongly :-)
You should write this down, or print it out and store it somewhere safe.
Alternatively, you might consider using the zEscrow service from Gazzang. In Ubuntu (or Mint) 12.04 or later, just install the zescrow-client package, and run the zescrow command. It will prompt you for a zEscrow server and your login password, and then encrypt your mount passphrase and send it to a remote zEscrow server for safe keeping. You'll receive a nonced url, which you'll need to click on, authenticate with a Google OpenID account, and "claim" your upload. Here's a nice little how-to guide that I've written.
| I reset my password and now I can login, but without nautilus or program menu |
1,515,991,255,000 |
I used ecryptfs-migrate-home to encrypt my home folder on my Debian (Testing) system.
As I am a complete encryption greenhorn, I don't know yet how to check whether the encryption succeeded. However, I suppose the encryption works, but filenames are not encrypted. I want to have encrypted filenames as well.
How can I get encrypted filenames for my (already encrypted) /home-folder?
|
The encrypted home utilities don't support the ability to enable encrypted filenames after you've set up your encrypted home directory. But, I looked at the ecryptfs-migrate-home script and believe that it should be enabling filename encryption by default.
Let's verify that filename encryption is enabled. Do you have two lines in your key signature file?
$ wc -l ~/.ecryptfs/Private.sig
2 /home/user/.ecryptfs/Private.sig
If wc reports that there are two lines, things are looking good so far. Check to see if the eCryptfs mount includes the filename encryption key signature mount option:
$ grep ecryptfs_fnek_sig= /proc/mounts
/home/user/.Private /home/user ecryptfs rw,nosuid,nodev,relatime,ecryptfs_fnek_sig=0011223344556677,ecryptfs_sig=8899aabbccddeeff,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs 0 0
If you see the ecryptfs_fnek_sig option, things are looking even better. Now make sure that filenames are encrypted in the lower filesystem:
$ ls /home/.ecryptfs/user/.Private
Do all filenames have a "ECRYPTFS_FNEK_ENCRYPTED." prefix? If so, the filename encryption feature is configured and working correctly.
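The mount-option check can also be scripted. Here is a minimal sketch that applies the same test to a sample mount line; the line below is illustrative, and in practice you would read /proc/mounts directly as shown above:

```shell
# Illustrative string in the shape of a /proc/mounts entry (not real data)
mount_line='/home/user/.Private /home/user ecryptfs rw,ecryptfs_fnek_sig=0011223344556677,ecryptfs_sig=8899aabbccddeeff 0 0'

case $mount_line in
    *ecryptfs_fnek_sig=*) fnek=enabled ;;   # filename encryption key signature present
    *)                    fnek=disabled ;;
esac
echo "filename encryption: $fnek"
```

Swapping the sample string for `$(grep ' ecryptfs ' /proc/mounts)` turns this into a live check.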
| setup filename encryption for encrypted home folder in eCryptfs |
1,515,991,255,000 |
After an update and a reboot, my ecryptfs home directory in Linux is failing to decrypt and mount on boot. It caused a bit of a panic, as Mint seemed entirely reinstalled from scratch (but the contents of /usr/ are still there and the wifi is still configured, so that's how I know it was just the home dir that was affected). /home/username/ was just brand new.
The /home/.ecryptfs/username/ directory was still there (with subdirectories .ecryptfs and .Private), and the amount of free space on the disk hadn't increased. I managed to mount and decrypt it from a live USB, and backed up the unencrypted data to external storage. Any tips on restoring everything in place (i.e. without reinstalling Linux from scratch, and then copying the unencrypted home dir in its place)?
Really unexpected that this happened. I don't know which of the updates messed things up. The packages that were updated are:
firefox-locale-en:amd64 (96.0.2+linuxmint1+una, 96.0.3+linuxmint1+una), firefox-locale-nl:amd64 (96.0.2+linuxmint1+una, 96.0.3+linuxmint1+una), libwebkit2gtk-4.0-37:amd64 (2.34.3-0ubuntu0.20.04.1, 2.34.4-0ubuntu0.20.04.1), gir1.2-webkit2-4.0:amd64 (2.34.3-0ubuntu0.20.04.1, 2.34.4-0ubuntu0.20.04.1), firefox:amd64 (96.0.2+linuxmint1+una, 96.0.3+linuxmint1+una), libjavascriptcoregtk-4.0-18:amd64 (2.34.3-0ubuntu0.20.04.1, 2.34.4-0ubuntu0.20.04.1), gir1.2-javascriptcoregtk-4.0:amd64 (2.34.3-0ubuntu0.20.04.1, 2.34.4-0ubuntu0.20.04.1
Only other thing that I can think of was that I had just installed two different CUDA versions before the reboot (but those were in /usr/local anyway).
Edit: rechecked /var/log/apt/history.log and found that separately before that (about an hour earlier), I had installed cmake (with apt). What was installed was the following: Install: librhash0:amd64 (1.3.9-1, automatic), cmake-data:amd64 (3.16.3-1ubuntu1, automatic), cmake:amd64 (3.16.3-1ubuntu1). I don't recall doing any reboot between installing cmake and running the other updates. Could the installation of librhash be the reason things broke?
Edit2: restored a Timeshift snapshot from before the issues, hoping that some of the software I'd installed was causing the issue. No luck.
|
After trying to diagnose what happened, the most likely cause would be a bad SSD trim. Or solar rays. Not sure.
Run journalctl | grep fstrim to check. This is what I did and I had a weekly trim that happened ~3h before the fateful reboot.
For others who might come across this, boot from live USB, mount/decrypt your home dir (https://askubuntu.com/a/873171/1113584) and copy your data somewhere safe. Then back up your packages (with dpkg --get-selections > mylist.list and flatpak list > flatpaklist.list - because Mint's backup tool doesn't see any packages installed). Don't forget to copy these .list files to a backup too.
At this point ecryptfs is probably borked and not worth fixing.
So what you need to do to restore your system:
reinstall Mint (with or without encryption)
reinstall all of the programs in the two .list files you just made
boot again from the live USB you just used to install Mint, then open the backup and the partition you installed Mint to (note: this can mess up permissions; if so, just skip to step 4)
copy back the /home/ dir you just backed up
If all goes well, you should have your system back as it was before (and a day wasted).
Thanks to lARRYlAFFER and DJPH from #linuxmint-help for the suggestions.
| Encrypted home dir suddenly failing to mount on boot |
1,515,991,255,000 |
I have installed Fedora, but used existing /home partition from previous Ubuntu install:
partitions:
/boot/efi,
/ (formatted during install),
/home (kept from Ubuntu),
user was set-up with same username and password as I had on Ubuntu install.
After installation, I couldn't login. So, I installed packages ecryptfs-simple.x86_64 and ecryptfs-utils.x86_64.
To successfully login with mounted /home/<username> I have to:
login to terminal,
run ecryptfs-mount-private,
login through gdm.
Direct login through gdm fails.
How can I make gdm to automatically run ecryptfs-mount-private when logging in?
|
It was SELinux issue. I solved it by setting up proper security contexts for home and ecryptfs stuff. Run this with unmounted ecryptfs home:
chcon -u unconfined_u -t user_home_dir_t /home/<username>/
chcon -u unconfined_u -t ecryptfs_t /home/.ecryptfs/<username>/.ecryptfs/
chcon -u unconfined_u -t ecryptfs_t /home/.ecryptfs/<username>/.ecryptfs/*
chcon -h -u unconfined_u -t user_home_t /home/<username>/* /home/<username>/.*
chcon -h -u unconfined_u -t ecryptfs_t /home/<username>/.ecryptfs /home/<username>/.Private
I have done other experimenting previously, which may have some effect:
enabling ecryptfs home encryption in SELinux: setsebool -P use_ecryptfs_home_dirs 1
configured pam to use ecryptfs:
setting USEECRYPTFS=yes in /etc/sysconfig/authconfig
regenerating authconfig --enableecryptfs --updateall
Check grep ecrypt /etc/pam.d/*:
/etc/pam.d/postlogin:auth optional pam_ecryptfs.so unwrap
/etc/pam.d/postlogin:password optional pam_ecryptfs.so unwrap
/etc/pam.d/postlogin:session optional pam_ecryptfs.so unwrap
/etc/pam.d/postlogin-ac:auth optional pam_ecryptfs.so unwrap
/etc/pam.d/postlogin-ac:password optional pam_ecryptfs.so unwrap
/etc/pam.d/postlogin-ac:session optional pam_ecryptfs.so unwrap
I hope I didn't miss anything in the answer.
| How to automatically `ecryptfs-mount-private` on `gdm` login in Fedora 27? |
1,515,991,255,000 |
I recently got an SSD. I use it to store my / as well as my /home directories (on different partitions).
For each user, I would like to have most of their folders on my big RAID-1 with 2 hard drives (I'm talking about /home/<user>/Downloads, /home/<user>/Music, /home/<user>/Documents, etc. to make this more clear).
First I thought about symlinks, but I think this wouldn't work, as the whole home-directories should be encrypted with ecryptfs.
So, how can this be achieved?
|
I found a solution. Yet it is not perfect, but I think it can be improved.
Basically I did what @rcoup suggested here:
https://askubuntu.com/questions/103835/securely-automount-encrypted-drive-at-user-login/165451#165451
On Debian, for some reason, mount.ecryptfs_private is in /sbin/. One can run mount.ecryptfs_private without root privileges; however, instead of
mount.ecryptfs_private extra
I had to use
/sbin/mount.ecryptfs_private extra
I wrote a script to mount every folder in home separately; however, that's maybe not the best way to do it, as every time I move a file (e.g. from Downloads to Music) this process takes some time. Maybe it would be better to use /sbin/mount.ecryptfs_private to just mount one folder and use symlinks then.
| have /home/user/Downloads (and other user folders) on a different partition |
1,319,313,753,000 |
What can you do with the eval command? Why is it useful? Is it some kind of a built-in function in bash? There is no man page for it..
|
eval is part of POSIX. It is a special built-in utility of the shell.
It's described in the "POSIX Programmer's Manual": http://www.unix.com/man-page/posix/1posix/eval/
eval - construct command by concatenating arguments
It will take an argument and construct a command of it, which will then be executed by the shell. This is the example from the manpage:
foo=10 x=foo # 1
y='$'$x # 2
echo $y # 3
$foo
eval y='$'$x # 5
echo $y # 6
10
In the first line you define $foo with the value '10' and $x with the value 'foo'.
Now define $y, which consists of the string '$foo'. The dollar sign must be quoted here (as '$') so that it is not expanded immediately.
To check the result, echo $y.
The result of 1)-3) will be the string '$foo'
Now we repeat the assignment with eval. It will first evaluate $x to the string 'foo'. Now we have the statement y=$foo which will get evaluated to y=10.
The result of echo $y is now the value '10'.
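The same two-step expansion can be written more compactly. Here is a self-contained sketch (the variable names are just for illustration):

```shell
foo=10 x=foo

eval "y=\$$x"   # $x expands first, leaving the string y=$foo; eval then runs it
echo "$y"       # prints 10
```

In bash specifically, the indirect expansion ${!x} gives the same lookup without eval.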
This is a common function in many languages, e.g. Perl and JavaScript.
Have a look at perldoc eval for more examples: http://perldoc.perl.org/functions/eval.html
| What is the "eval" command in bash? |
1,319,313,753,000 |
In order to run ssh-agent I have to use:
eval $(ssh-agent)
Why is it necessary to eval the output of ssh-agent? Why can't I just run it?
|
ssh-agent outputs the environment variables you need to have to connect to it:
shadur@proteus:~$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-492P67qzMeGA/agent.7948; export SSH_AUTH_SOCK;
SSH_AGENT_PID=7949; export SSH_AGENT_PID;
echo Agent pid 7949;
shadur@proteus:~$
By calling eval you immediately load those variables into your environment.
As to why ssh-agent can't do that itself... Note the word choice. Not "won't", "can't". In Unix, a process can only modify its own environment variables, and pass them on to children. It can not modify its parent process' environment because the system won't allow it. This is pretty basic security design.
You could get around the eval by using ssh-agent utility where utility is your login shell, your window manager or whatever other thing needs to have the SSH environment variables set. This is also mentioned in the manual.
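The restriction is easy to demonstrate with any child process; a subshell is the simplest (the variable name below is just for illustration, and this assumes it isn't already set in your environment):

```shell
# The subshell is a child process: its export cannot reach the parent.
(export AGENT_DEMO=set_in_child)

echo "${AGENT_DEMO:-unset}"   # prints: unset
```

ssh-agent is in exactly the subshell's position here, which is why it prints the assignments for the parent to eval instead.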
| Why eval the output of ssh-agent? |
1,319,313,753,000 |
The bash manual states:
eval [arg ...]
The args are read and concatenated together into a single com-
mand. This command is then read and executed by the shell, and
its exit status is returned as the value of eval. If there are
no args, or only null arguments, eval returns 0.
I try
eval `nonsense`
echo $?
The result is 0.
Whereas when I execute the back-quoted command separately:
`nonsense`
echo $?
The result is 127.
From what is written in the bash manual I would expect eval to return 127 when taking the back-quoted nonsense as argument.
How to obtain the exit status of the argument of eval?
|
When you do the following -
`nonsense`
echo $?
You basically are asking "Tell me the exit status when I try to get the output of the command nonsense"
the answer to that is "command not found" or 127
But when you do the following
eval `nonsense`
echo $?
You are asking "tell me the exit status of eval when I evaluate an empty string" (the output of command nonsense) which is equal to running eval without arguments.
eval has no problems in running without arguments and its exit status becomes 0
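To see the two exit statuses side by side, capture them into variables (the command name below is deliberately bogus):

```shell
out=$(definitely_not_a_command_xyz 2>/dev/null)
sub_status=$?            # 127: command not found, from the substitution

eval "$out"
eval_status=$?           # 0: eval of the resulting empty string succeeds

echo "$sub_status $eval_status"   # prints: 127 0
```

So to get the status of the argument of eval, check $? after the command substitution itself, before handing its output to eval.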
| return value from eval |
1,319,313,753,000 |
I've been using docker for a while and there's a command I write each time I boot up my docker:
eval $(docker-machine env)
I know eval shouldn't be used unless necessary, but its use here is suggested by docker-machine itself:
docker-machine env outputs environment variables like this:
docker-machine env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://<some_ip>:<some_port>"
export DOCKER_CERT_PATH="/home/gableroux/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
# Run this command to configure your shell:
# eval $(docker-machine env)
eval grabs these and load them in my current session.
Now what if I'd like to have an alias like this:
alias dockereval="eval $(docker-machine env)"
Syntax is good, but the problem is that when the alias is defined in a dotfile (let's say .zshrc as an example), the content of the $() is evaluated at the moment the alias is registered, i.e. when you source that file.
which dockereval
Results in
dockerenv: aliased to eval
I tried a few things like:
alias dockereval="docker-machine env | eval"
alias dockereval="docker-machine env | /bin/bash"
alias dockereval="eval `docker-machine env`"
but none did work. 2nd one is probably because it's running in a different session, 3rd does the same as $() I guess
Is there an other way to load these environment variables with an alias?
|
Enclose your alias in single quotes instead of double quotes.
alias dockereval='eval $(docker-machine env)'
Double quotes allow expansion of variable (in bash at least) while single quotes don't
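The timing difference is easy to see with plain variables and eval (aliases are only expanded in interactive shells by default, so this sketch uses eval in their place):

```shell
x=1
with_double="echo $x"    # $x expands now: stores the text "echo 1"
with_single='echo $x'    # stores the literal text "echo $x"

x=2
d_out=$(eval "$with_double")   # still 1: expansion happened at definition time
s_out=$(eval "$with_single")   # 2: expansion happens at use time
echo "$d_out $s_out"           # prints: 1 2
```

The single-quoted form is what you want for the alias, since docker-machine env must be re-run each time the alias is used.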
| How to save an alias of an eval $(other_comand) command |
1,319,313,753,000 |
As I was looking at this answer https://stackoverflow.com/a/11065196/4706711 in order to figure out how to use parameters like --something or -s, some questions arose regarding the answer's script:
#!/bin/bash
TEMP=`getopt -o ab:c:: --long a-long,b-long:,c-long:: \
-n 'example.bash' -- "$@"`
if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-a|--a-long) echo "Option a" ; shift ;;
-b|--b-long) echo "Option b, argument \`$2'" ; shift 2 ;;
-c|--c-long)
# c has an optional argument. As we are in quoted mode,
# an empty parameter will be generated if its optional
# argument is not found.
case "$2" in
"") echo "Option c, no argument"; shift 2 ;;
*) echo "Option c, argument \`$2'" ; shift 2 ;;
esac ;;
--) shift ; break ;;
*) echo "Internal error!" ; exit 1 ;;
esac
done
echo "Remaining arguments:"
for arg do echo '--> '"\`$arg'" ; done
First of all, what does the shift command do in the following line?
-a|--a-long) echo "Option a" ; shift ;;
Second, what is the purpose of using the eval command in the following line?
eval set -- "$TEMP"
I tried commenting out that line in the script mentioned above and got the following response:
$ ./getOptExample2.sh -a 10 -b 20 --a-long 40 -charem --c-long=echi
Param: -a
Option a
Param: 10
Internal error!
But if I uncomment it it runs like a charm:
Option a
Option b, argument `20'
Option a
Option c, argument `harem'
Option c, argument `echi'
Remaining arguments:
--> `10'
--> `40'
|
One of the many things that getopt does while parsing options is to rearrange the arguments, so that non-option arguments come last, and combined short options are split up. From man getopt:
Output is generated for each element described in the previous section.
Output is done in the same order as the elements are specified in the
input, except for non-option parameters. Output can be done in
compatible (unquoted) mode, or in such way that whitespace and other
special characters within arguments and non-option parameters are
preserved (see QUOTING). When the output is processed in the shell
script, it will seem to be composed of distinct elements that can be
processed one by one (by using the shift command in most shell
languages).
[...]
Normally, no non-option parameters output is generated until all
options and their arguments have been generated. Then '--' is
generated as a single parameter, and after it the non-option parameters
in the order they were found, each as a separate parameter.
This effect is reflected in your code, where the option-handling loop assumes that all option arguments (including arguments to options) come first, and come separately, and are finally followed by non-option arguments.
So, TEMP contains the rearranged, quoted, split-up options, and using eval set makes them script arguments.
Why eval? You need a way to safely convert the output of getopt to arguments. That means safely handling special characters like spaces, ', " (quotes), *, etc. To do that, getopt escapes them in the output for interpretation by the shell. Without eval, the only option is set $TEMP, but you're limited to what's possible by field splitting and globbing instead of the full parsing ability of the shell.
Say you have two arguments. There is no way to get those two as separate words using just field splitting without additionally restricting the characters usable in arguments (e.g., say you set IFS to :, then you cannot have : in the arguments). So, you need to able to escape such characters and have the shell interpret that escaping, which is why eval is needed. Barring a major bug in getopt, it should be safe.
As for shift, it does what it always does: remove the first argument, and shift all arguments (so that what was $2 will now be $1). This eliminates the arguments that have been processed, so that, after this loop, only non-option arguments are left and you can conveniently use $@ without worrying about options.
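Both points can be sketched in isolation. The string below merely imitates the shape of getopt's quoted output (the option and values are illustrative, not real getopt output):

```shell
TEMP="'-b' 'arg with spaces' '--' 'pos1'"

# Without eval: only field splitting happens, the quotes stay literal
set -- $TEMP
count_plain=$#          # 6 words: '-b' / 'arg / with / spaces' / '--' / 'pos1'

# With eval: the shell re-parses the string and honours the quoting
eval set -- "$TEMP"
count_eval=$#           # 4 arguments, the embedded space preserved

opt=$1 optarg=$2
shift 2                 # consume the option and its argument together
rest=$1                 # what remains starts at the '--' separator

echo "$count_plain $count_eval [$opt] [$optarg] [$rest]"
```

This is why both the eval set -- line and the per-option shift calls are needed: one reconstructs the argument list faithfully, the other consumes it.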
| Bash: Why is eval and shift used in a script that parses command line arguments? |
1,319,313,753,000 |
Consider the commands
eval false || echo ok
echo also ok
Ordinarily, we'd expect this to execute the false utility and, since the exit status is non-zero, to then execute echo ok and echo also ok.
In all the POSIX-like shells I use (ksh93, zsh, bash, dash, OpenBSD ksh, and yash), this is what happens, but things get interesting if we enable set -e.
If set -e is in effect, OpenBSD's sh and ksh shells (both derived from pdksh) will terminate the script when executing the eval. No other shell does that.
POSIX says that an error in a special built-in utility (such as eval) should cause the non-interactive shell to terminate. I'm not entirely sure whether executing false constitutes "an error" (if it was, it would be independent of set -e being active).
The way to work around this seems to be to put the eval in a sub shell,
( eval false ) || echo ok
echo also ok
The question is whether I'm expected to have to do that in a POSIX-ly correct shell script, or whether it's a bug in OpenBSD's shell? Also, what is meant by "error" in the POSIX text linked to above?
Extra bit of info: The OpenBSD shells will execute the echo ok both with and without set -e in the command
eval ! true || echo ok
My original code looked like
set -e
if eval "$string"; then
echo ok
else
echo not ok
fi
which would not output not ok with string=false using the OpenBSD shells (it would terminate), and I wasn't sure it was by design, by mistake or by misunderstanding, or something else.
|
That no other shell needs such a workaround is a strong indication that it is a bug in OpenBSD ksh. In fact, ksh93 doesn't show this issue.
The presence of || in the command line should prevent the shell exit caused by a return code of 1 on the left-hand side.
The error of a special built-in shall cause the exit of a non-interactive shell according to POSIX, but that is not always true in practice. Trying to continue out of a loop is an error, and continue is a builtin. Yet most shells do not exit on:
continue 3
A builtin that emits a clear error but doesn't exit.
So, the exit on false is generated by the set -e condition not by the builtin characteristic of the command (eval in this case).
The exact conditions under which set -e shall exit are considerably fuzzier in POSIX.
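The subshell workaround can be verified from any shell by running the snippet in a fresh sh; this sketch assumes /bin/sh is not one of the affected pdksh derivatives:

```shell
# set -e is active inside the child sh, yet the subshell around eval
# confines any exit, so the || branch and the final echo both run.
out=$(sh -c 'set -e; ( eval false ) || echo ok; echo also ok')
printf '%s\n' "$out"
```

On the shells that behave as POSIX suggests, this prints "ok" and "also ok" on separate lines.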
| Behaviour of "eval" under "set -e" in conditional expression |
1,319,313,753,000 |
I have the following file variable and values
# more file.txt
export worker01="sdg sdh sdi sdj sdk"
export worker02="sdg sdh sdi sdj sdm"
export worker03="sdg sdh sdi sdj sdf"
I perform source in order to read the variable
# source file.txt
example:
echo $worker01
sdg sdh sdi sdj sdk
Until now everything is perfect.
But now I want to read the variables from the file and print their values.
With a simple bash loop I read the second field and try to print the value of each variable:
# for i in ` sed s'/=/ /g' /tmp/file.txt | awk '{print $2}' `
do
echo $i
declare var="$i"
echo $var
done
but it prints only the variable names and not the values:
worker01
worker01
worker02
worker02
worker03
worker03
expected output:
worker01
sdg sdh sdi sdj sdk
worker02
sdg sdh sdi sdj sdm
worker03
sdg sdh sdi sdj sdf
|
You have export worker01="sdg sdh sdi sdj sdk", then you replace = with a space to get export worker01 "sdg sdh sdi sdj sdk". The space separated fields in that are export, worker01, "sdg, sdh, etc.
It's probably better to split on =, and remove the quotes, so with just the shell:
$ while IFS== read -r key val ; do
val=${val%\"}; val=${val#\"}; key=${key#export };
echo "$key = $val";
done < vars
worker01 = sdg sdh sdi sdj sdk
worker02 = sdg sdh sdi sdj sdm
worker03 = sdg sdh sdi sdj sdf
key contains the variable name, val the value. Of course this doesn't actually parse the input, it just removes the double quotes if they happen to be there.
| bash + read variables & values from file by bash script |
1,319,313,753,000 |
What is the difference between using:
eval 'echo "foo"'
and
echo 'echo "foo"' | bash
is there any?
|
Short Answer
The command run by eval is executed in the current shell and the command piped to bash is executed in a sub-shell, e.g.:
> echo 'x=42' | bash; echo $x
> eval 'x=42'; echo $x
42
Longer Answer
In the comments the claim was made that in more recent versions of bash (>=4.2) the first command could also have the same effect. However this does not appear to be the case.
There are actually a couple of factors which cause the piped command not to run in the current session: the pipe and the bash command.
For the most part, piped commands run in subshells. The Bash Manual (Section 3.2.2: Pipelines) has the following to say:
Each command in a pipeline is executed in its own subshell (see Command Execution Environment).
As pointed out in the comments, this behavior can be modified via the lastpipe option. The Bash Manual (Section 4.3.2: The Shopt Builtin) has the following to say about the lastpipe option:
lastpipe
If set, and job control is not active, the shell runs the last command of a pipeline not executed in the background in the current shell environment.
We can verify that this is the case as follows.
First enable lastpipe:
> shopt -s lastpipe
Then disable job-control:
> set +m
Now execute a command which sets a variable from within a pipe:
> unset x
> echo x=42 | while IFS= read -r line; do eval "${line}"; done;
> echo $x
42
Notice that we use the while loop and read command as a work-around since the eval command cannot read its input from stdin (hence cannot get its input from a pipe).
This example demonstrates that the right-most command in the pipe can, in fact, be executed in the current shell. However this does not actually affect our original example. Even with lastpipe enabled and job-control disabled, we still get the following result when piping to bash:
> echo 'x=42' | bash; echo $x
>
This is because the bash command itself executes its input in a subshell.
| eval vs. pipe through bash |
1,319,313,753,000 |
I have a bash shell variable containing a string formed of multiple words delimited by whitespace. The string can contain escapes, such as escaped whitespace within a word. Words containing whitespace may alternatively be quoted.
A shell variable that is used unquoted ($FOO instead of "$FOO") becomes multiple words but quotes and escapes in the original string have no effect.
How can a string be split into words, giving consideration to quoted and escaped characters?
Background
A server offers restricted access over ssh using the ForceCommand option in the sshd_config file to force execution of a script regardless of the command-line given to the ssh client.
The script uses the variable SSH_ORIGINAL_COMMAND (which is a string, set by ssh, that contains the command-line provided to the ssh client) to set its argument list before proceeding. So, a user doing
$ ssh some_server foo 'bar car' baz
will see the script execute and it will have SSH_ORIGINAL_COMMAND set to foo bar car baz which would become four arguments when the script does
set -- ${SSH_ORIGINAL_COMMAND}
Not the desired result. So the user tries again:
$ ssh some_server foo bar\ car baz
Same result - the backslash in the second argument needs to be escaped for the client's shell so ssh sees it. What about these:
$ ssh some_server foo 'bar\ car' baz
$ ssh some_server foo bar\\ car baz
Both work, as would a printf "%q" quoting wrapper that can simplify the client-side quoting.
Client-side quoting allows ssh to send the correctly quoted string to the server so that it receives SSH_ORIGINAL_COMMAND with the backslash intact: foo bar\ car baz.
However there is still a problem because set does not consider the quoting or escaping. There is a solution:
eval set -- ${SSH_ORIGINAL_COMMAND}
but it is unacceptable. Consider
$ ssh some_server \; /bin/sh -i
Very undesirable: eval can't be used because the input can't be controlled.
What is required is the string expansion capability of eval without the execution part.
|
Use read:
read -a ssh_args <<< "${SSH_ORIGINAL_COMMAND}"
set -- "${ssh_args[@]}"
This will parse words from SSH_ORIGINAL_COMMAND into the array ssh_args, treating backslash (\) as an escape character. The array elements are then given as arguments to set. It works with an argument list passed through ssh like this:
$ ssh some_server foo 'bar\ car' baz
$ ssh some_server foo bar\\ car baz
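As a quick local sketch of how read -a handles the backslash escape (no ssh involved; the sample string stands in for what sshd would put in the variable):

```shell
# Simulate what SSH_ORIGINAL_COMMAND would contain after client-side quoting
SSH_ORIGINAL_COMMAND='foo bar\ car baz'
read -a ssh_args <<< "${SSH_ORIGINAL_COMMAND}"
set -- "${ssh_args[@]}"
echo "$#"               # 3 arguments, not 4
printf '<%s>\n' "$@"    # <foo> <bar car> <baz>
```

Note that read (without -r) only honours backslash escapes; quote characters in the input are not treated specially.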
A printf "%q" quoting ssh wrapper allows these:
$ sshwrap some_server foo bar\ car baz
$ sshwrap some_server foo 'bar car' baz
Here is such a wrapper example:
#!/bin/bash
h=$1; shift
QUOTE_ARGS=''
for ARG in "$@"
do
ARG=$(printf "%q" "$ARG")
QUOTE_ARGS="${QUOTE_ARGS} $ARG"
done
ssh "$h" "${QUOTE_ARGS}"
| Can bash expand a quoted and/or escaped string variable into words? |
1,319,313,753,000 |
I have the following example.
#!/bin/bash
ARGUMENTS="-executors 1 -description \"The Host\" "
# call1
# error: parameter Host" is not allowed
java -jar swarm-client.jar $ARGUMENTS
# call2
# works fine with eval
eval java -jar swarm-client.jar $ARGUMENTS
In $ARGUMENTS, I have a quoted argument. I do not understand why grouping of argument by escaped quotes is not working in call1. I do not understand why is eval necessary to resolve the quoting problem.
I think I do not understand the process and the order of command evaluation in shell. Can you explain it to me?
|
You don't pass quoted arguments to a command, you pass arguments.
When you enter:
cmd arg1 arg2
The shell parses that line in its own syntax where space is a word delimiter and calls cmd with cmd, arg1 and arg2 as arguments.
Note: cmd does not receive any space character in its arguments, the spaces are just operators in the shell language syntax.
Like when in C, you write func("foo", "bar"), at run time, func receives two pointer arguments, it does not see any of the ( or , or " or space character.
Also part of the shell syntax is quoting. " is used to be able to have words that contain characters that are otherwise part of the shell syntax.
When you do:
cmd "arg 1" arg2
cmd receives cmd, arg 1 and arg2 as arguments. It does not see any " character. Those " are used to prevent the space from being treated as a word separator in the shell syntax.
Now, when you do:
cmd $VAR
it's not the same as doing:
cmd the content of the variable
If it were, you'd have trouble with:
VAR='foo; reboot'
echo $VAR
for instance.
In Bourne-like shell, the content of $VAR is not passed verbatim as a single argument to cmd either (unfortunately; it's been fixed in some other shells like rc, es, fish and to a lesser extent zsh). Instead, it's subject to splitting and globbing (split+glob) and the resulting words passed to cmd.
The splitting is done based on the characters in the special $IFS variable, by default space, tab and newline.
For your $ARGUMENTS which contains -executors 1 -description "The Host", that's splitting into -executors, 1, -description, "The and Host". Since none of those words contain wildcard character, the glob part doesn't apply, so it's those words that are passed to cmd.
Here, you could use the split+glob operator, and use as separator for the splitting part a character that does not appear in those words:
ARGUMENTS='-executors|1|-description|The Host'
IFS='|'
cmd $ARGUMENTS
Or better, for shells that support them (like bash), use arrays, where you can have a variable that contains all those arguments.
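A minimal sketch of the array approach, using printf as a stand-in for the real command to show what it receives:

```shell
# Each array element is passed as exactly one argument, spaces and all
args=(-executors 1 -description "The Host")
printf '<%s>\n' "${args[@]}"   # <-executors> <1> <-description> <The Host>
```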
eval is to evaluate shell code. So the other option is to have ARGUMENTS contain shell code (text in the shell syntax as opposed to a list of arguments), and have that passed to eval for interpretation. But remember to quote the variable to avoid the split+glob operator:
eval "cmd $ARGUMENTS"
| Why is using eval necessary to pass quoted arguments |
1,319,313,753,000 |
I encountered a strange bug today, when running a script in a directory containing a directory with parentheses in it, such as a().
Minimal working example
I managed to reduce the bug to the following minimal working example:
Create an empty directory in /tmp and cd into it:
mkdir /tmp/foo
cd /tmp/foo
Create a script named foo.sh in it containing:
foo() {
somevar=1;
case somevar in
aha) echo "something" ;;
*) echo "other" ;;
esac;
};
Run the following command:
eval $(/bin/cat foo.sh)
There should not be any error.
Create a file with parentheses:
touch "a()"
Run the command again:
eval $(/bin/cat foo.sh)
I now get the error:
bash: syntax error near unexpected token `('
Why does bash even care about what files are in the directory? Why do parentheses cause an error?
System information:
$ bash --version
GNU bash, version 4.4.19(1)-release (x86_64-pc-linux-gnu)
Copyright © 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
More detailed background and original error:
My problem came from using a script sourcing /usr/share/modules/init/bash from the environment-modules package, as summarized here:
$ dpkg -l environment-modules
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=============================================================-===================================-===================================-================================================================================================================================
ii environment-modules 4.1.1-1 amd64 Modular system for handling environment variables
$ source /usr/share/modules/init/bash
$ touch "a()"
$ source /usr/share/modules/init/bash
bash: eval: line 43: syntax error near unexpected token `('
bash: eval: line 43: ` a() _mlshdbg='' ;;'
|
This is neither strange nor a bug in bash (it does seem to be a bug in /usr/share/modules/init/bash though). You are using an unquoted command substitution together with eval. The string that is the result of the command substitution will, since it is unquoted, undergo word splitting and filename expansion (globbing). The *) in the code matches the filename a(), so it is replaced by this filename in the filename expansion stage.
Running your example under set -x highlights this:
$ eval $(cat foo.sh)
++ cat foo.sh
+ eval 'foo()' '{' 'somevar=1;' case somevar in 'aha)' echo '"something"' ';;' 'a()' echo '"other"' ';;' 'esac;' '};'
bash: syntax error near unexpected token `('
The same thing in the yash shell:
$ eval $(cat foo.sh)
+ cat foo.sh
+ eval 'foo()' '{' 'somevar=1;' case somevar in 'aha)' echo '"something"' ';;' 'a()' echo '"other"' ';;' 'esac;' '};'
eval:1: syntax error: `)' is missing
eval:1: syntax error: `esac' is missing
eval:1: syntax error: `}' is missing
And with ksh93:
$ eval $(cat foo.sh)
+ cat foo.sh
+ eval 'foo()' '{' somevar='1;' case somevar in 'aha)' echo '"something"' ';;' 'a()' echo '"other"' ';;' 'esac;' '};'
ksh93: eval: syntax error: `(' unexpected
And dash:
$ eval $(cat foo.sh)
+ cat foo.sh
+ eval foo() { somevar=1; case somevar in aha) echo "something" ;; a() echo "other" ;; esac; };
dash: 1: eval: Syntax error: "(" unexpected (expecting ")")
Only the zsh would handle this as it does not perform the globbing:
$ eval $(cat foo.sh)
+zsh:2> cat foo.sh
+zsh:2> eval 'foo()' '{' 'somevar=1;' case somevar in 'aha)' echo '"something"' ';;' '*)' echo '"other"' ';;' 'esac;' '};'
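The filename expansion step can also be reproduced in isolation, without eval: an unquoted expansion whose value is *) picks up the file, a quoted one does not (run here in a throwaway directory):

```shell
dir=$(mktemp -d)
cd "$dir"
touch 'a()'
pat='*)'
printf '%s\n' $pat     # unquoted: glob matches, prints a()
printf '%s\n' "$pat"   # quoted: prints *) literally
```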
The correct way to handle this would be to source the foo.sh script:
. ./foo.sh
There is really no reason to use eval "$(cat foo.sh)" as far as I can see.
This is also a code injection vulnerability:
$ touch '*) echo "hello" ;; *)'
$ eval $(cat foo.sh)
$ declare -f foo
foo ()
{
somevar=1;
case somevar in
aha)
echo "something"
;;
*)
echo "hello"
;;
*)
echo "other"
;;
esac
}
Another way of breaking this command easily without creating a specially named file, is to set the IFS variable to a set of characters other than the default:
$ IFS=';{} '
+ IFS=';{} '
$ eval $(cat foo.sh)
++ cat foo.sh
+ eval 'foo()' '
' somevar=1 '
' case somevar 'in
' 'aha)' echo '"something"' '' '
' '*)' echo '"other"' '' '
' esac '
' ''
bash: syntax error near unexpected token `somevar=1'
This breaks it because of the word-splitting step rather than the file globbing step in the evaluation of the arguments to eval. With IFS=';{} ', each of those characters would be used to split the text in foo.sh up into words (and those characters would then be removed from the string).
Not even zsh would be immune to this:
$ IFS=';{} '
+zsh:2> IFS=';{} '
$ eval $(cat foo.sh)
+zsh:3> cat foo.sh
+zsh:3> eval 'foo()' $'\n' 'somevar=1' $'\n' case somevar $'in\n' 'aha)' echo '"something"' '' $'\n' '*)' echo '"other"' '' $'\n' esac $'\n' '' ''
zsh: parse error near `)'
Related:
Security implications of forgetting to quote a variable in bash/POSIX shells
When is double-quoting necessary?
Why does my shell script choke on whitespace or other special characters?
| File with parentheses/brackets in working directory causes eval error |
1,319,313,753,000 |
I'm trying to use eval command to eval a comment -- I'm not sure if this is the right way to do it. Example:
i=?? (What I want here is either a #, to comment what's after, or blank)
somecommand arg1 arg2 $(eval $i) >> file
So based on the $i value it has to be either:
somecommand arg1 arg2 # >> file as of "Don't print to file"
or
somecommand arg1 arg2 >> file as of "Print to file"
An example script for more clarity:
i=true
somecommand arg1 arg2 >> file1
[some code]
somecommand arg1 arg2 >> file2
[some code]
somecommand arg1 arg2 >> file3
[some code]
And so on...
I want it to print the output to the files only if $i it true; or, as I tried at first, to eval the $i to be a comment and comment the 'output to file' piece of code.
I asked because I think there is a more elegant way than doing something like this:
if $i
then
somecommand arg1 arg2 >> file3
else
somecommand arg1 arg2
fi
|
You could always do:
unset -v log
# or
log=true
([ -z "$log" ] || exec >> file1; somecommand arg1 arg2)
([ -z "$log" ] || exec >> file2; somecommand arg1 arg2)
Or:
if [ -n "$log" ]; then
exec 3>> file1 4>> file2
else
exec 3>&1 4>&1
fi
somecommand arg1 arg2 >&3
somecommand arg1 arg2 >&4
Or:
log() {
local output="$1"; shift
if [ -n "$output" ]; then
"$@" >> "$output"
else
"$@"
fi
}
log "${log+file1}" somecommand arg1 arg2
log "${log+file2}" somecommand arg1 arg2
Or (make sure the data passed to eval is not dynamic to avoid code injection vulnerabilities, hence the use of single quotes below inside which no expansion occurs):
eval ${log+'>> file1'} 'somecommand arg1 arg2'
eval ${log+'>> file2'} 'somecommand arg1 arg2'
With zsh:
if (($+log)); then
alias -g 'log?=>>'
else
alias -g 'log?=#'
fi
somecommand arg1 arg2 log? file1
somecommand arg1 arg2 log? file2
Or even (if you don't intend to use >> for anything other than that kind of conditional logging):
(($+log)) || alias -g '>>=#'
somecommand arg1 arg2 >> file1
somecommand arg1 arg2 >> file2
bash doesn't have alias -g, doesn't let you alias things like >>, but you could use simple aliases if you move the redirection to the start:
shopt -s expand_aliases
skip_one() { shift; "$@"; }
if [[ -v log ]]; then
alias 'log?=>>'
else
alias 'log?=skip_one'
fi
log? file1 somecommand arg1 arg2
log? file2 somecommand arg1 arg2
| How to conditionally redirect the output to files based on variable in bash |
1,319,313,753,000 |
I want a script to make awk to become an interactive mathematical calculator, to eval mathematical expressions given in each line.
I.e., instead of constructing awk commands to calculate expressions like the following:
$ awk 'BEGIN{print 180/1149}'
0.156658
$ awk 'BEGIN{print (150+141)/1149}'
0.253264
I want my script to take my mathematical expressions as input and do the calculation interactively. So the session will look like (alternative of input and output):
180/1149
0.156658
(150+141)/1149
0.253264
1 + 2
3
2 * 3 - 5
1
However I'm not able to do that myself:
$ awk '{print}'
180/1149
180/1149
^C
$ awk '{print $0}'
180/1149
180/1149
1 + 2
1 + 2
^C
If there is no simple solution to awk, what else, like perl?
|
The arithmetic evaluation is done as part of the awk/perl language evaluation. perl has a eval to evaluated arbitrary strings as perl code at runtime, so you can do:
perl -lne 'print eval $_'
But awk doesn't (though in the case of GNU awk, its debugger does, see below), so you'd need to run awk for each line of input, and pass the contents of that line inside the code argument for instance using GNU xargs:
xargs -rd '\n' -I CODE awk -- 'BEGIN{print(CODE)}'
You could also do:
sed 's/.*/BEGIN{print(&)}/' | awk -f -
To run one awk to interpret all the code lines, but awk would only start evaluating it upon reaching the end of input, so can't be used interactively. You'd be able to reuse results from a past evaluation like in perl though. Like on:
a = 3+4
a*2
It would print 7, then 14.
Using gawk's debugger
awk has a debugger mode which has an eval to which you can pass awk code to interpret, so you could hijack that with something like:
sed -u 's/["\\]/\\&/g;s/.*/eval "print(&)"/' | gawk -D -f /dev/null
| awk or perl to eval mathematical expressions in each line |
1,319,313,753,000 |
With bash >5, I'm trying to assign a different value to variables depending on the architecture specified in a variable. I use a function to do so. This works perfectly:
# arguments:
#   variable name to assign,
#   value for mac arch,
#   value for pi arch
create_variable_for_arch() {
if [ "$_run_for_arch" = "mac" ]; then
eval $1=\$2
else
eval $1=\$3
fi
}
However, this breaks my script for some reason:
create_variable_for_arch() {
if [ "$_run_for_arch" = "mac" ]; then
declare "$1"="$2"
else
declare "$1"="$3"
fi
}
Here is a snippet to demonstrate how I use create_variable_for_arch()
declare _moonlight_opt_audio
declare _arch_specific_stream_command
#
while getopts "b:fahdr:s" options; do
case $options in
a)
create_variable_for_arch "_moonlight_opt_audio" \
"--audio-on-host" "-localaudio"
;;
esac
done
create_variable_for_arch "_moonlight_opt_fps" "--fps 60" "-fps 60"
start_streaming() {
_arch_specific_options="$_moonlight_opt_resolution $_moonlight_opt_fps $_moonlight_opt_audio $_moonlight_opt_display_type $_moonlight_opt_bitrate"
create_variable_for_arch "_arch_specific_stream_command" "$_arch_specific_options stream $_target_computer_ip $_moonlight_opt_app_name" "stream $_arch_specific_options -app $_moonlight_opt_app_name $_target_computer_ip"
moonlight $_arch_specific_stream_command
}
The trace looks like this with eval()
+ start_streaming
+ _arch_specific_options='--resolution 1920x1080 --fps 60 --bitrate 5000'
+ create_variable_for_arch _arch_specific_stream_command '--resolution 1920x1080 --fps 60 --bitrate 5000 stream 192.168.1.30 StreamMouse' 'stream --resolution 1920x1080 --fps 60 --bitrate 5000 -app StreamMouse 192.168.1.30'
+ '[' mac = mac ']'
+ eval '_arch_specific_stream_command=$2'
++ _arch_specific_stream_command='--resolution 1920x1080 --fps 60 --bitrate 5000 stream 192.168.1.30 StreamMouse'
+ moonlight --resolution 1920x1080 --fps 60 --bitrate 5000 stream 192.168.1.30 StreamMouse
moonlight --resolution 1920x1080 --fps 60 --bitrate 5000 stream 192.168.1.30 StreamMouse
But with declare it looks like this:
+ start_streaming
+ _arch_specific_options=
+ create_variable_for_arch _arch_specific_stream_command ' stream 192.168.1.30 ' 'stream -app 192.168.1.30'
+ '[' mac = mac ']'
+ declare '_arch_specific_stream_command= stream 192.168.1.30 '
+ echo moonlight
moonlight
$_arch_specific_options ends up with no value. What is going on? I've tried a few different ways of quoting or not quoting variables, but I don't really understand what's doing what in terms of quotations.
|
declare (like the typeset of other shells; also understood by bash as an alias for declare) declares a variable in the current scope (and can set a type and/or value).
So here, you would declare a variable that is local to the create_variable_for_arch function. When that function returns, that variable would be gone.
bash's declare/typeset has a -g option to declare the variable global, but you can't use that either as it declares the variable (and sets its type and/or value) in the outer-most scope as opposed to the scope of the caller of the function, so is pretty useless there (it's more useful in mksh/zsh/yash where it's only skipping the making it local, or with ksh93 which has static scoping, see What do `declare name` and `declare -g` do? for details).
SO here, your options are either to use eval, or to use namerefs:
create_variable_for_arch() {
if [ "$_run_for_arch" = mac ]; then
eval "$1=\$2"
else
eval "$1=\$3"
fi
}
Or, assuming $_run_for_arch is constant in your script:
if [ "$_run_for_arch" = "mac" ]; then
create_variable_for_arch() { eval "$1=\$2"; }
else
create_variable_for_arch() { eval "$1=\$3"; }
fi
Or with namerefs:
create_variable_for_arch() {
typeset -n _var_name="$1"
if [ "$_run_for_arch" = mac ]; then
_var_name=$2
else
_var_name=$3
fi
}
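A usage sketch of the nameref version (requires bash 4.3+; the variable names and values are taken from the question):

```shell
_run_for_arch=mac
create_variable_for_arch() {
  typeset -n _var_name="$1"   # nameref: _var_name aliases the caller's variable
  if [ "$_run_for_arch" = mac ]; then
    _var_name=$2
  else
    _var_name=$3
  fi
}
create_variable_for_arch _moonlight_opt_audio --audio-on-host -localaudio
echo "$_moonlight_opt_audio"   # --audio-on-host
```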
It's often (rightly) recommended to avoid eval for security reasons, but eval is safe when used properly. declare and namerefs would be as unsafe here when used improperly, as they can both also evaluate code.
All of:
f() { eval "$1=\$2"; }
f() { declare "$1=$2"; }
f() { declare -n v="$1"; v=$2; }
Would run the reboot command if called with:
f 'a[$(reboot)]' value
It's important to make sure the first argument is a variable name to avoid the arbitrary command execution vulnerability.
f() { declare $1=$2; }
would be much worse. As those parameter expansions are unquoted, they're subject to split+glob, so even the contents of $2 can end up being evaluated as shell code, as in:
f var 'foo a[$(reboot)]='
| Why does substituting eval with declare (for creating dynamic variables) result in an empty variable? |
1,319,313,753,000 |
I have a small script with the following lines
echo mom,dad |awk -F, '{print $1,$2}' | while read VAR1 VAR2
do
for i in VAR1 VAR2
do
eval X=\$$i
echo $X
done
done
OUTPUT:
mom
dad
What is this line doing eval X=\$$i?
I understand the rest of the lines, but I don't understand the iterations of this for loop with eval. Can someone shed light on this ? I am using Solaris 5.10 with Korn Shell.
|
eval performs an extra level of substitution and processing on the remainder of the line.
In the first iteration of the loop, i is set to "VAR1", and one level of backslash-escaping is reduced, so:
eval X=\$$i
becomes:
X=$VAR1
which evaluates to:
X=mom
(repeat for the next loop, only $i is then VAR2, and $VAR2=dad)
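In bash (though not in the older ksh88 shipped as /bin/ksh on Solaris), the same indirection is available without eval via ${!name}:

```shell
VAR1=mom VAR2=dad
for i in VAR1 VAR2; do
  X=${!i}      # indirect expansion: the value of the variable whose name is in $i
  echo "$X"
done
```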
| What does eval X=\$$i mean in UNIX? |
1,319,313,753,000 |
I need a lot of different aliases to create ssh tunnels to different servers. To give you a few of them:
alias tunnel_1='autossh -M 20000 -N -L 8080:localhost:8080 -N -L 9200:localhost:9200 -N -L 8090:localhost:8090 [email protected]'
alias tunnel_2='autossh -M 20000 -N -L 8000:localhost:8080 -N -L 9200:localhost:9200 -N -L 8090:localhost:8090 [email protected]'
I came up with this function I added in my aliases :
addPort () {
echo "-N -L $1:localhost:$1 "
}
tunnel () {
aliasString="autossh -M 20000 "
for port in "${@:2}"
do
aliasString+=$(addPort $port)
done
aliasString+="$1"
eval $aliasString
}
so I just need to do this to tunnel to the server I want:
tunnel [email protected] 8080 9000 7200
It’s working well, But I’d like not to use eval if it’s possible.
is there another way to call autossh directly and give it the correct params without using eval?
|
Use a single shell function:
tunnel () {
local host="$1"; shift
local port
local args
args=( -M 20000 )
for port do
args+=( -N -L "$port:localhost:$port" )
done
autossh "${args[@]}" "$host"
}
or, for /bin/sh:
tunnel () {
host="$1"; shift
for port do
set -- "$@" -N -L "$port:localhost:$port"
shift
done
autossh -M 20000 "$@" "$host"
}
Both of these functions extract the first argument into the variable host, and then build a list of strings made up from the provided port numbers.
At the end, both functions invoke autossh with the given list and the host.
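To inspect the argument list either version builds, you can substitute printf for autossh (the host name below is made up):

```shell
tunnel_dry_run () {
  local host="$1"; shift
  local port args
  args=( -M 20000 )
  for port do
    args+=( -N -L "$port:localhost:$port" )
  done
  # Print one argument per line instead of actually running autossh
  printf '%s\n' autossh "${args[@]}" "$host"
}
tunnel_dry_run user@example.com 8080 9200
```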
| building an alias from a function |
1,319,313,753,000 |
Suppose:
a=b; b=c; c=d
Then eval echo \$$a produces the output:
c
If we want to extract the output d using just input a, I tried the following way:
(i) eval echo \$(eval echo \$$a) produces the error:
syntax error near unexpected token '('
(ii) eval echo \$\(eval echo \$$a\) produces the output:
c
I am not able to understand why escape slashing the bracket got rid of the error.
Also, could someone please explain why I didn't get the output as d in the second instance?
|
First, a word of caution:
From a security standpoint, it's a really bad idea to use eval in any shell script unless you know exactly what you're doing. (And even then, there are virtually zero instances where it is actually the best solution.) As a beginner to shell scripting, please just forget that eval even exists.
For further reading, see Eval command and security issues.
To get the output d, you could use:
eval echo \$${!a}
Or:
eval eval echo \\\$\$$a
Where you went wrong was in passing the unescaped parentheses characters to echo. If they are preceded by an unquoted $, it is command substitution. But if the $ is quoted and not the parentheses, it isn't valid shell syntax.
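Step by step, with bash's ${!name} indirection doing the first hop and eval the second:

```shell
a=b; b=c; c=d
echo "${!a}"          # one level of indirection: the value of $b, i.e. c
eval echo "\$${!a}"   # second level via eval: the value of $c, i.e. d
```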
| Using the eval command twice |
1,319,313,753,000 |
I would like to get, in the output, the content of KW0_TEXT and KW1_TEXT from the "for" of this script:
#!/bin/sh
STRS=" KW0 KW1 "
KW0_TEXT="text text text"
KW1_TEXT="text text text text"
for str in ${STRS}; do
echo ${str}_TEXT
eval echo ${str}_TEXT
done
so far, in the output, I got only:
KW0_TEXT
KW0_TEXT
KW1_TEXT
KW1_TEXT
|
If your /bin/sh is actually /bin/bash, you can use variable indirection:
#!/bin/bash
STRS=" KW0 KW1 "
KW0_TEXT="text text text"
KW1_TEXT="text text text text"
for str in ${STRS}; do
var=${str}_TEXT
printf "%s\n" "${!var}"
done
| Get contents of passing string from a script |
1,319,313,753,000 |
I was checking .bashrc to set colors for ls comand and found this
export SHELL='/bin/bash'
export LS_OPTIONS='--color=auto'
eval "`dircolors`"
alias ls='ls $LS_OPTIONS'
Will there be any problem if I use dircolors instead of using it with eval? What's the difference?
|
Neither eval dircolors nor dircolors will work.
What you need is:
eval "$(dircolors)"
(or the ancient form eval "`dircolors`")
That is, you need to evaluate the output of dircolors. dircolors outputs code to be evaluated by the shell like:
LS_COLORS='...'
export LS_COLORS
That's the code you want to evaluate. eval dircolors is just like dircolors, so it will just run dircolors with its output not redirected, so that shell code above will just end up being displayed and not evaluated by any shell.
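The pattern generalizes: a command that prints shell assignments only affects the current shell if its output is passed to eval. A fake stand-in for dircolors makes this easy to see:

```shell
# Hypothetical stand-in that prints code the way dircolors does
fake_dircolors() {
  printf "LS_COLORS='di=01;34'\nexport LS_COLORS\n"
}
fake_dircolors             # merely displays the code
eval "$(fake_dircolors)"   # actually runs it in this shell
echo "$LS_COLORS"          # di=01;34
```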
Also if $LS_OPTIONS is meant to contain a list of ls options in shell syntax for instance accepting things like --exclude='*~', then you'd need to define it as:
ls() {
eval 'command ls '"$LS_OPTIONS"' "$@"'
}
Or with zsh:
alias ls='ls "${(Q@)${(z)LS_OPTIONS}}"'
If it's meant to contain a space-separated list of options instead, with bash 4.4+
ls() {
local IFS=' '
local -
set -o noglob
command ls $LS_OPTIONS "$@"
}
Or with zsh:
alias ls='ls ${(s: :)LS_OPTIONS}'
| Will "dircolors" work here instead of " eval `"dircolors`"` "? |
1,319,313,753,000 |
I wish to execute different commands and check the return code afterwards before moving to the next steps in the script. At the same time I also wish to log the output of the executed commands to a file using the tee command.
Example:
#set non-existing folder
local_path="~/njn"
log_path_file="test.log"
cmd="ls -l ${local_path} | tee -a ${log_path_file}";
eval ${cmd}
returncode=$?
echo "execution result: ${returncode}" | tee -a ${log_path_file};
if [ ${returncode} -eq 0 ]; then
echo "success" | tee -a ${log_path_file}
else
echo "not success" | tee -a ${log_path_file}
fi
returncode is 0 where it should be > 0
I want the returncode variable to have the actual return of the executed command (in this example, the ls -l command.
I've seen there's a solution using a file to write the output of the command to it and then reading the return code from it (Here), but I'm looking for a more elegant solution.
|
After some additional testing, I've arrived at this code, which is wrapped nicely and returns the executed command's return code.
The code which @Freddy posted is nearly complete. The return code is set inside the function but is not propagated outside it.
The use of the shopt -s lastpipe was taken from this page: Bash FAQ entry #24: "I set variables in a loop. Why do they suddenly disappear after the loop terminates? Or, why can't I pipe data to read?"
Here is the final working solution:
#!/bin/bash
log_path_file="./logs/test.log"
exe_cmd()
{
echo "`date +%Y-%m-%d---%r` [Info]: Command to execute: $@" | tee -a ${log_path_file};
echo "" | tee -a ${log_path_file};
echo "" | tee -a ${log_path_file};
set +m
shopt -s lastpipe
cmdResult=0
{
"$@"
returncode=$?
# save result code
cmdResult=${returncode}
if [ "$returncode" -eq 0 ]; then
echo "`date +%Y-%m-%d---%r` [Info]: successfully executed \"$@\""
else
echo "`date +%Y-%m-%d---%r` [Info]: failed to execute \"$@\", exit code: ${returncode}"
fi
} 2>&1 | tee -a "$log_path_file"
echo "`date +%Y-%m-%d---%r` [Info]: cmdResult result ${cmdResult}"
return ${cmdResult};
}
cmd="scp some_user@$some_host:some_path/* a_local_path/sub_path";
exe_cmd ${cmd}
returncode=$?
echo "`date +%Y-%m-%d---%r` [Info]: scp execution result: ${returncode}" | tee -a ${log_path_file};
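An arguably simpler route in bash, avoiding lastpipe entirely, is the PIPESTATUS array, which records the exit status of every element of the most recent foreground pipeline (the log file below is a throwaway):

```shell
log_path_file=$(mktemp)
ls -l /nonexistent 2>&1 | tee -a "$log_path_file"
returncode=${PIPESTATUS[0]}   # exit status of ls, not of tee
echo "execution result: ${returncode}" | tee -a "$log_path_file"
rm -f "$log_path_file"
```

The exact nonzero code depends on the ls implementation, but ${PIPESTATUS[0]} always reflects the first command in the pipeline.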
| Get return code of first piped command using eval? |
1,319,313,753,000 |
I have a little ugly bash script on my Ubuntu machine that contains the lines:
search_command="find -L $(printf "%q" "$search_folder") \( ! -regex '.*/\..*/..*' \) -mindepth 1 2> /dev/null"
for i in "${IGNOREENDINGS[@]}"
do
search_command="$search_command -not -name \"*$i\""
done
search_command="$search_command | sed 's|^${search_folder}/\?||'"
choice=$(eval "$search_command"|fzf -q "$file_query" -1 --preview "preview $search_folder {}")
The script lets me choose a file, using fzf among the matches of a GNU find command.
It has the following problem: Once I choose a file in the interface of fzf the script closes the fzf interface, so that seems to be done, but then I still have to wait for the find command to complete (verified with top), which somehow takes very long. I'm not really sure why; the files that I want always appear almost instantly.
I included a few extra lines above to avoid an X Y problem. I am happy with anything with the same functionality and quicker execution.
|
There are two possibilities here: either fzf is not actually exiting when you select a file, or find is not exiting when fzf does. If it's the latter one, you can write a script to close find manually when fzf exits.
The way that pipes work in Linux, find does not know that the pipe it is writing to has nothing reading from that pipe until it tries to write to that pipe and fails. As a consequence, if you pick the file after find has already found everything it's going to find, find is no longer writing to the pipe, and so will iterate over the whole file system before exiting.
As an illustration of this, if you make a random file in / and then run find / -name $random_file_name | head -n 1, you will very quickly get all the output you are going to get, but the program will continue to run for a long time.
One way to get around this is by killing the process yourself when it's done. Probably the easiest way to do it in your specific case is a named pipe:
tmp_fifo=`mktemp -u`
mkfifo "$tmp_fifo"
eval "$search_command" > "$tmp_fifo" &
choice="$(fzf -q "$file_query" -1 --preview "preview $search_folder {}" < "$tmp_fifo")" ; kill $!
rm "$tmp_fifo"
This creates a temporary named pipe, find writes to it, and fzf reads from it. But when fzf exits, kill $! is run, where $! stands for the last background process to have been started, in this case find.
| Bash: How to kill eval if the process that receives its output terminates |
1,319,313,753,000 |
In trying to make my .zshrc neater, I stumbled over the following problem/question:
"How can I run the output of another command?". While I'm sure this is a simple problem, I just don't understand what I'm doing wrong.
I want to add pip-completion to my config. For this, I need to add the output of $ pip completion --zsh to .zshrc:
$ pip completion --zsh
# pip zsh completion start
function _pip_completion {
local words cword
read -Ac words
read -cn cword
reply=( $( COMP_WORDS="$words[*]" \
COMP_CWORD=$(( cword-1 )) \
PIP_AUTO_COMPLETE=1 $words[1] 2>/dev/null ))
}
compctl -K _pip_completion pip
# pip zsh completion end
Now, the lines above simply don't look nice. Instead I tried adding the following line to .zshrc:
eval $(pip completion --zsh)
However, pip completion is not being "installed" (as opposed to when I add the lines themselves in .zshrc), but I also don't get any errors.
I have a feeling zsh doesn't evaluate the # lines properly, but I'm not sure how to test it. When I run $ eval $(pip completion --zsh) no errors pop out.
Where am I going wrong? Is there a similar alternative to evaluating the output of $ pip completion --zsh in my .zshrc?
|
You need to run
eval "$(pip completion --zsh)"
Compare the output of these two commands:
echo $(pip completion --zsh)
echo "$(pip completion --zsh)"
In zsh, $VARIABLE means “take the value of VARIABLE unless it's empty”. But $(command) means “take the output of command and break it into words at whitespace”. So in eval $(pip completion --zsh), all sequences of whitespace, including newlines, are treated as word separators. Then eval puts all of its arguments together with a space between each. But a space is not equivalent to a newline, and the code that eval executes is just one long comment line.
This is a subset of Why does my shell script choke on whitespace or other special characters?: in zsh, unlike other sh-like shells, word splitting only happens on command substitution, not on variable substitution (except in a restricted form where an empty word resulting from a variable substitution is eliminated), and no globbing happens automatically on the result of expansion. You do need double quotes in zsh around command substitution, and around variable substitution when it might result in an empty word.
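The collapse of newlines into word separators can be reproduced with any multi-line command output (a generic demonstration in sh, not the pip script itself; zsh splits unquoted command substitutions the same way):

```shell
#!/bin/sh
out=$(printf '# a comment\necho ran')
echo $out     # unquoted: the newline becomes a word separator -> one long line
echo "$out"   # quoted: both lines survive
eval $out     # joined into '# a comment echo ran' -- all a comment, echo never runs
eval "$out"   # the newline is preserved, so the echo on line 2 executes
```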
| Evaluate multi-line output (with comments) of another command. (pip-completion) |
1,319,313,753,000 |
When use eval on a command, does eval apply the shell twice to the redirection part of the command?
Suppose the filename in the redirection contains a whitespace, if eval applies all the steps of parsing, shell expansions and word splitting and other steps to it more than once, the filename will be split into two words during the word splitting step in the second round.
Does the following example imply that eval doesn't apply the shell twice to the redirection part of the following command, so the filename is not split into two words?
$ filename="my file"
$ eval "cat" < "$filename"
hi
|
In the example you provide, the only thing eval is getting is cat, the redirection is happening outside the eval and the file is being provided as stdin for the eval "cat" command.
One variation is to quote the whole command, including the redirection, using single quotes:
$ eval 'cat < "$filename"'
hi
Now eval is getting the whole string, including the redirection and the variable name, so it's doing variable expansion and the needed quoting for the filename with spaces. This would still work.
Another option is using double quotes for the string:
$ eval "cat < '$filename'"
hi
Now the variable is expanded by the shell, but this still works since the quotes inside it keep the filename together. (Note that this would break if the filename contains an apostrophe, though.)
What would not work is this:
$ eval "cat" "<" "$filename"
This is similar to your example, but with the < quoted, the redirection will not be executed by the external shell. eval will then put together the arguments, and the resulting command will be:
cat < my file
Which will not work as expected, since the quotes around my file are now gone...
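The contrast can be reproduced directly (the file and variable names are just for illustration):

```shell
#!/bin/sh
filename="my file"
printf 'hi\n' > "$filename"

eval 'cat < "$filename"'      # works: eval expands $filename inside the quotes
# eval "cat" "<" "$filename"  # broken: eval would run `cat < my file` -- two words

rm -f "$filename"
```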
| Does `eval` apply the shell to the redirection part of the following command? |
1,319,313,753,000 |
I am writing a script which accepts two arguments:
#! /bin/bash
eval for i in {$1..$2}; do echo $i; done
I run it like:
$ ./myscript 0002 0010
syntax error near unexpected token `do'
Why is the error?
I thought it might be because the looping should be grouped. But by replacing eval for i in {$1..$2}; do echo $i; done with eval { for i in {$1..$2}; do echo $i; done; }, the error remains.
Note:
I hope to perform parameter expansion before brace expansion by using eval.
The desired output of my example is 0002 0003 0004 0005 0006 0007 0008 0009 0010. (See Perform parameter expansion before brace expansion?)
|
That's because the shell evaluated the ; itself, so eval didn't see it: the eval command ends at the first ;, leaving do to start a new command, which is a syntax error.
You have to escape any shell special character to delay its evaluation and pass it literally to eval:
eval for i in \{"$1".."$2"\}\; do echo \"\$i\"\; done
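For example, with the endpoints hard-coded (bash only, since bash zero-pads a brace range when both endpoints carry leading zeros):

```shell
#!/bin/bash
a=0002 b=0005
eval for i in \{"$a".."$b"\}\; do echo \"\$i\"\; done
# 0002
# 0003
# 0004
# 0005
```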
| Error when eval a for-loop |
1,319,313,753,000 |
Consider the following:
$ a='friend'
$ b='printf "%s\n" "$a"'
$ eval "$b"
friend
This should be completely safe. Let's however say that $b is the same but $a is unknown. Are there any security implications then to eval "$b" and if so, what can I do to mitigate them?
|
If b contains the literal string printf "%s\n" "$a", i.e. you didn't expand $a into it before hand, then yes, eval "$b" should be fine. Not sure why you'd need eval there, though, since you just have a static command. Just run printf "%s\n" "$a" directly.
You said in comments you want to store some commands for future use. That's the job of functions. E.g. that printf command could be made into a function like this:
println() {
printf "%s\n" "$1"
}
which you run as println "hello there", println "$a" or whatever. "$1" is the first argument to the function, but of course you could read stdin instead, or use multiple arguments ("$2", "$3", ...; or all of them as a list "$@" (alike "${array[@]}")).
Similarly for the longer set of operations:
#!/bin/bash
say_hi() {
echo "hello, $1"
}
louder() {
echo "$1!"
}
funcs=(say_hi louder)
names=(Huey Dewey Louie)
for name in "${names[@]}"; do
tmp=$name
for func in "${funcs[@]}"; do
tmp=$($func "$tmp")
done
echo "result: $tmp"
done
| Security implications of executing strings using eval in bash |
1,319,313,753,000 |
Why does this produce _results=""...
_results="$( grep ${_gopts[@]} )"
And this produces the desired _results (list of SSHFS entries in fstab)...
_results="$( eval grep ${_gopts[@]} )"
The _gopts array is identical in both cases and consists of...
declare -p _gopts
declare -a _gopts=([0]="--extended-regexp" [1]="--with-filename" [2]="--recursive" [3]="--include" [4]="fstab" [5]="'^[^#]*sshfs#'" [6]="/etc")
|
Because of the extra quotes around the 5th element (the regexp: [5]="'^[^#]*sshfs#'").
The grep command will be passed an argument of the form '^regex' instead of ^regex, which will not match, ever (there's no way for regex to both follow a single quote and start at the beginning of the line, at the same time).
Remove them, and then quote the array expansion ("${_gopts[@]}" instead of ${_gopts[@]}):
declare -p _gopts
declare -a _gopts=([0]="--extended-regexp" [1]="--with-filename" [2]="--recursive" [3]="--include" [4]="fstab" [5]="^[^#]*sshfs#" [6]="/etc")
_results=$( grep "${_gopts[@]}" )
You may also want to use single instead of double quotes in the array declaration: double quotes aren't necessary, since no element contains variables & other expansions.
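A minimal reproduction of the underlying problem (a hypothetical array and temp file, not the original fstab setup): quote characters stored in an array element are data, not syntax, so grep receives them as part of the pattern.

```shell
#!/bin/bash
f=$(mktemp)
printf 'x sshfs# y\n' > "$f"                 # hypothetical test input

bad=("'sshfs#'")                             # literal single quotes in the element
good=("sshfs#")

grep -c "${bad[0]}"  "$f" || true            # 0 -- the quotes are part of the pattern
grep -c "${good[0]}" "$f"                    # 1 -- matches

rm -f "$f"
```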
| Why does this bash idiom require eval? |