I am in the directory foo which contains the subdirectories bar1 bar2 and bar3 and nothing else. I would like to act pushd on foo, bar1, bar2 and bar3 in one command, however I am encountering difficulties: find `pwd` | xargs pushd returns xargs: pushd: No such file or directory I tried to circumvent the problem with a while loop: find `pwd` | while read line ; do pushd $line ; done Which gives an output which looks correct: ~/foo ~/foo ~/foo/bar1 ~/foo ~/foo ~/foo/bar2 ~/foo/bar1 ~/foo ~/foo ~/foo/bar3 ~/foo/bar2 ~/foo/bar1 ~/foo ~/foo However using dirs after that shows that I have not added any new directories to the stack: ~/foo Can anyone spot what I am doing wrong?
You’re very close.  You can do what you want with while read line; do pushd "$line"; done < <(find "$(pwd)" -type d) The problem with your command is that pushd, like cd, must be done in the main (parent) process to be useful, and (with some variation based on how your system is set up), the commands in a pipeline run in subprocesses (child processes).  < <(cmd) magically gives you a pipe without giving you a pipeline. This requires that you are running bash (or maybe one of the other advanced shells?), since POSIX doesn’t support <(cmd). The xargs approach, sadly, was doomed to failure, because pushd (like cd) is a shell builtin command (i.e., there is no program called pushd), and xargs requires an external, executable program.  You can get output that (almost) looks right with this command: $ find "$(pwd)" -type d | xargs -i sh -c 'pushd "$1"' sh {} ~/foo ~/foo ~/foo/bar1 ~/foo ~/foo/bar2 ~/foo ~/foo/bar3 ~/foo but that is executing the shell as an external program, and doing so (independently) for each directory.  This is a little closer: $ find "$(pwd)" -type d | xargs sh -c 'for line do pushd "$line"; done' sh ~/foo ~/foo ~/foo/bar1 ~/foo ~/foo ~/foo/bar2 ~/foo/bar1 ~/foo ~/foo ~/foo/bar3 ~/foo/bar2 ~/foo/bar1 ~/foo ~/foo executing a single shell process, and telling it to loop over all the directories.  As you can see, this technique is similar to your second attempt (and my answer), and the result is the same as your second attempt — you get a new shell process that calls pushd four times, and ends up with a directory stack that is five levels deep (counting the starting directory twice) — but that new shell process is in a subprocess, and, by the time you see the next shell prompt, that shell process is gone. For reference, note that this command is somewhat similar to an answer I gave a couple of years ago.  Stéphane Chazelas discusses the sh -c long_complex_shell_command sh command construct here and here.
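The subshell behaviour described above is easy to demonstrate without pushd at all. In this sketch (portable sh; the counter is a stand-in for the directory stack), state changed inside a pipeline's while loop is lost, which is exactly what happens to the stack in the question's second attempt:

```shell
# A variable modified inside a pipeline's while loop does not survive in
# shells (such as bash and dash) that run pipeline elements in subshells.
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "after pipeline: count=$count"    # count=0: the increments happened in a child

# In bash, feeding the loop with `done < <(printf 'a\nb\nc\n')` instead keeps
# the loop in the parent shell, so the increments (or pushd calls) persist.
```

Note that ksh and zsh run the last element of a pipeline in the parent shell, so the counter would survive there; the demonstration assumes a bash- or dash-like shell.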
How can I pushd a series of directories in one go?
I have used pushd to move about to various directories and now if I run dirs -v I get: 0 ~/Desktop 1 /etc 2 /var/log 3 ~/Downloads 4 /tmp How can I popd to a specific directory in the middle of the stack? E.g. option 2: /var/log. man bash says: +n Removes the nth entry counting from the left of the list shown by dirs, starting with zero. For example: ``popd +0'' removes the first directory, ``popd +1'' the second. I've tried popd +3, and it pops the correct directory off the stack, but doesn't change the current working directory. How can I popd the particular directory and change the current working directory to the "popped" dir?
cd "`dirs +<number>`" where <number> is 0 or 3 or something else. In any case, I recommend you check out a cd wrapper such as http://davidcorne.com/tag/cd/ , which pushes onto the dir stack in the background and allows you to do cd -- instead of dirs -v and cd -<number> to get you into the directory you want. It also replaces initial tildes with $HOME, eliminating the problem you've alluded to in the comments.
How can I popd to a specific directory?
I would like to share my directory stack (the one accessed with dirs) across sessions and tmux panes/windows. In the zshbuiltins man page I have found autopushd to add every directory I switch into to the stack. But there does not seem to be a native way to have the stack persist and share it. Do I have to save the stack to a temporary file or is there a better solution?
Generally speaking, this is not a good idea. Consider this scenario: Session one wants to temporarily change directories, so it pushes the current directory, expecting to pop it later when it is finished with the new directory. Session two tries the same thing. Session one tries to pop its original directory off the stack, but gets session two's directory instead. This doesn't really answer your question, but I'm trying to illustrate why there is no native method for doing so. For a one-time "export" of the stack from one session to another, your best bet is writing to a temp file and reading it back in, but there isn't going to be a general method for keeping the stack in sync among multiple sessions.
Persistent directory stacks across sessions in zsh
I would like to be able to programmatically detect when my pushd stack is non-empty, within a bash shell. Is there any way to detect this? Something akin to $SHLVL would be nice. But so far, the only solution I've found is to wrap pushd and popd with aliases that parse the output of the originals to detect the depth. While that would work, it doesn't feel particularly elegant. (Clearly, the information is stored somewhere in the environment.) My reason is, I wish to adorn my bash prompt with a pushd depth-count, when this is the case.
You can check DIRSTACK length: $ [[ ${#DIRSTACK[@]} -gt 1 ]] && echo dir stack non-empty Note that you can not use this method if DIRSTACK is unset.
Detect pushd depth in bash?
I've been using pushd and popd for a long time while writing bash script. But today when I execute which pushd, I get nothing as output. I can't understand this at all. I was always thinking that pushd is simply a command, just like cd, ls etc. So why does which pushd give me nothing?
popd and pushd are commands built into Bash; they're not actual executables that live on your HDD as true binaries. An excerpt from the bash man page: DIRSTACK An array variable (see Arrays below) containing the current contents of the directory stack. Directories appear in the stack in the order they are displayed by the dirs builtin. Assigning to members of this array variable may be used to modify directories already in the stack, but the pushd and popd builtins must be used to add and remove directories. Assignment to this variable will not change the current directory. If DIRSTACK is unset, it loses its special properties, even if it is subsequently reset. The full list of all the builtin commands is available in the Bash man page as well as here - http://structure.usc.edu/bash/bashref_4.html. You can also use compgen -b or enable to get a full list of all these builtins: compgen $ compgen -b | grep -E "^push|^pop" popd pushd enable $ enable -a | grep -E "\bpop|\bpus" enable popd enable pushd Additionally, if you want to get help on the builtins you can use the help command: $ help popd | head -5 popd: popd [-n] [+N | -N] Remove directories from stack. Removes entries from the directory stack. With no arguments, removes the top directory from the stack, and changes to the new top directory. $ help pushd | head -5 pushd: pushd [-n] [+N | -N | dir] Add directories to stack. Adds a directory to the top of the directory stack, or rotates the stack, making the new top of the stack the current working
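Since which only searches $PATH for executable files, the portable way to find out how a name will actually be resolved is command -v or type. A minimal sketch (using cd, another builtin, so it works in any POSIX shell):

```shell
# 'which' is an external program scanning $PATH, so it cannot see builtins.
# 'command -v' and 'type' are shell-aware and report them correctly.
command -v cd    # prints: cd
type cd          # reports that cd is a shell builtin
```

This is why which pushd prints nothing while type pushd identifies it as a builtin.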
Why can't I which pushd
I have a script which I source while in bash. It does various things and retains the $PWD I was in before sourcing it. pushd ~/.dotfiles >/dev/null || exit 1 # Do various things popd >/dev/null || exit 1 The script runs (mostly) fine in zsh too, but when I source it from the ~/.dotfiles location, I end up in the previous $OLDPWD after sourcing it. It seems that zsh disregards the pushd line if the current location is already the same, so the popd command goes to the $OLDPWD from before when the script was sourced. Is there a way of stopping zsh from ignoring the "redundant" pushd command, while also keeping the script compatible with bash? I do have the following in my .zshrc, but it also happens when I unset them: setopt AUTO_PUSHD PUSHD_SILENT PUSHD_IGNORE_DUPS
pushd and popd are intended for interactive convenience. They are neither reliable nor useful in scripts. You're seeing the effect of the pushd_ignore_dups option, which is off by default but you've apparently enabled in your configuration. Another potential problem with pushd and popd is that you need to make sure that you're really calling them in pairs. There's a high risk of accidentally omitting a popd on an error path, and that throws confusion everywhere. In an auxiliary script meant to be sourced, you should generally not change the current directory at all. Just use absolute paths everywhere. If you really want to change the current directory, use the cd command. Save the old current directory in a variable. # instead of pushd... foo_old_pwd=$PWD cd /foo/directory # instead of popd... cd "$foo_old_pwd" This works in all Bourne-style shells (even the ones that don't implement pushd and popd), and will never restore the wrong directory. However, any code based on changing out of a directory and back in will break in one unusual case: if you don't have permission to change back into the original directory. This can happen after a process loses privileges.
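As a sketch of the save-and-restore pattern above, here is a small POSIX-sh helper (the name in_dir is invented for this example) that runs one command in another directory and then returns:

```shell
# Run a command in the given directory, then restore the caller's working
# directory; works in any Bourne-style shell, no pushd/popd required.
in_dir() {
    in_dir_old_pwd=$PWD
    cd "$1" || return 1
    shift
    "$@"
    in_dir_rc=$?
    cd "$in_dir_old_pwd" || return 1
    return "$in_dir_rc"
}

in_dir /tmp pwd    # runs pwd inside /tmp, then comes back
```

A subshell, ( cd /foo/directory && some-command ), achieves the same effect without touching the parent's working directory at all, and also sidesteps the lost-permission case mentioned at the end of the answer.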
Confusing pushd/popd behaviour when sourcing script in zsh
I want to push a directory onto the directory stack in order to refer to it using "tilde shorthand" (e.g. ~1 refers to the second entry in the directory list), but I don't want to actually switch to the directory. In bash, it seems this can be done using the -n flag to pushd. What's the equivalent in zsh?
You can edit the dirstack variable. function insertd { emulate -L zsh typeset -i n=0 while [[ $1 == [+-]<-> ]]; do n=$1 shift done if (($# != 1)); then echo 1>&2 "Usage: insertd [+N|-N] DIR" return 3 fi dirstack=($dirstack[1,$n] $1 $dirstack[$n,-1]) } If you want to add this behavior to pushd itself, you can make it a function. function pushd { if [[ $1 == -n ]]; then shift insertd "$@" else builtin pushd "$@" fi } This simple version does not treat combinations of -n and another option exactly like bash. You can even edit the variable directly. vared dirstack
What is the zsh equivalent of "pushd -n" in bash?
Without Oh-My-Zsh, I can pushd two identical path: $ dirs ~ $ pushd Desktop Desktop ~ $ pushd ~ ~ Desktop ~ With Oh-My-Zsh: $ dirs ~ $ pushd Desktop Desktop ~ $ pushd ~ ~ Desktop How do I disable this? I want the original Zsh behavior.
(Inspired by this answer) It is set in $ZSH/lib/directories.zsh: setopt auto_pushd setopt pushd_ignore_dups auto_pushd makes cd behave the same as pushd. However, this would result in a directory stack overflow if you keep changing directory, so they set pushd_ignore_dups as well, to limit the stack. This is not a problem for me, since I disabled auto_pushd. Therefore, add unsetopt pushd_ignore_dups in ~/.zshrc. Reference Options - Zsh documentation
Oh-My-Zsh remove duplicated path in directory stack
I'm pushing directories to my stack using a while loop that reads the contents of a file. I've tried two approaches that should be equivalent, but they behave differently. Approach 1 export DIRSTACK_SAVEFILE="/path/to/file" # file contains full paths to folders in each line while read line ; do pushd -n "$line" done < <(tac $DIRSTACK_SAVEFILE) Using this, I can later use dirs and pushd +n and all the folders are loaded into the stack. Approach 2 export DIRSTACK_SAVEFILE="/path/to/file" # file contains full paths to folders in each line tac $DIRSTACK_SAVEFILE | while read line ; do pushd -n "$line" done After executing this approach, the directory stack in my shell has no new folders. Question Why does only the 1st approach change the directory stack in my shell? I've searched to understand how process substitution and pipes work. I think I understand what I read here about process substitution but I haven't found any explanation about pipes that helps me understand this behavior. P.S.: I'm using GNU bash, version 5.1.8(1)-release (x86_64-redhat-linux-gnu)
As documented in man bash under Pipelines: Each command in a pipeline is executed as a separate process (i.e., in a subshell). Therefore, the changes to the current working directory happen in a subshell which doesn't influence the current working directory of the parent shell. You can run the last element of a pipeline in the current shell by setting shopt -s lastpipe In this case, it should make the second approach work, as well.
`pushd` inside a while loop from file contents does not behave the same depending on read method [duplicate]
I have seen How do I use pushd and popd commands? , and I am aware that with pushd <dir> I would push <dir> to the directory stack, with popd I would pop the top directory from the directory stack - and with dirs -v I should be viewing/listing the directory stack; the cited post gives this example: $ pushd /home; pushd /var; pushd log me@myhost:/home$ dirs -v 0 /home 1 /var 2 /tmp So, I'm on: $ echo $(cat /etc/issue; lsb_release -idcr); uname -a; bash --version | head -1 Ubuntu 14.04.5 LTS \n \l Distributor ID: Ubuntu Description: Ubuntu 14.04.5 LTS Release: 14.04 Codename: trusty Linux MyPC 4.4.0-104-generic #127~14.04.1-Ubuntu SMP Mon Dec 11 12:44:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu) And I'm doing this test in a bash terminal: user@PC:~$ cd /tmp/ user@PC:tmp$ dirs -v ## just lists, no directory stack total 32 drwx------ 2 user user 4096 Jan 5 18:22 MozillaMailnews/ prwxrwxr-x 1 user user 0 Jan 6 11:54 SciTE.22933.in| -rw------- 1 user user 0 Jan 4 12:07 config-err-pY6fcB drwx------ 2 user user 4096 Jan 6 00:55 firefox_user/ -rw------- 1 user user 275 Jan 6 12:07 h drwx------ 2 user user 4096 Jan 5 15:43 mozilla_user0/ drwx------ 2 user user 4096 Jan 6 02:11 mozilla_mozillaUser0/ drwx------ 3 user user 4096 Jan 5 10:28 sni-qt_vlc_19957-mFfsIO/ drwx------ 2 user user 4096 Jan 4 12:07 ssh-Ry3s5LesiOrb/ drwx------ 6 root root 4096 Jan 4 12:07 systemd-generator.yBhsiB/ user@PC:tmp$ pushd /tmp/ssh-Ry3s5LesiOrb/ /tmp/ssh-Ry3s5LesiOrb /tmp user@PC:ssh-Ry3s5LesiOrb$ dirs -v ## again, just lists, no directory stack total 0 srw------- 1 user user 0 Jan 4 12:07 agent.1477= user@PC:ssh-Ry3s5LesiOrb$ pushd /tmp/sni-qt_vlc_19957-mFfsIO/ /tmp/sni-qt_vlc_19957-mFfsIO /tmp/ssh-Ry3s5LesiOrb /tmp user@PC:sni-qt_vlc_19957-mFfsIO$ dirs -v ## again, just lists, no directory stack total 4 drwxrwxr-x 2 user user 4096 Jan 5 10:28 icons/ user@PC:sni-qt_vlc_19957-mFfsIO$ popd /tmp/ssh-Ry3s5LesiOrb /tmp user@PC:ssh-Ry3s5LesiOrb$ 
dirs -v ## again, just lists, no directory stack total 0 srw------- 1 user user 0 Jan 4 12:07 agent.1477= user@PC:ssh-Ry3s5LesiOrb$ popd /tmp As you can see from this snippet, in the test I never got dirs -v to list/print the directory stack - it simply lists the files in the current directory similar to ls ?! So, how can I get dirs -v to show/print/list the directory stack? Alternatively, is there another command I could use to show/print/list the directory stack in a bash terminal?
Judging by that output, dirs has been aliased to an ls invocation somewhere, most likely in your ~/.bashrc. Comment out the alias line in your ~/.bashrc with a #. In your current session you can use unalias dirs to remove this alias. Check again with type dirs.
dirs -v does not list the directory stack?
I am using zsh in Babun (Cygwin with oh-my-zsh and some extras). I noticed some odd behavior, it looks like cd is behaving like pushd? { ~ } » mkdir foo { ~ } » pushd foo ~/foo ~ { foo } » popd ~ The above is fine and expected, but see the below. { ~ } » cd foo { foo } » dirs ~/foo ~ I tried checking if there was some alias being set somewhere, but I saw no such thing. { foo } » alias | egrep "(cd|pushd)" -='cd -' 1='cd -' 2='cd -2' 3='cd -3' 4='cd -4' 5='cd -5' 6='cd -6' 7='cd -7' 8='cd -8' 9='cd -9' grt='cd $(git rev-parse --show-toplevel || echo ".")' pu=pushd Why is my cd appending dirs? It's not really a problem, I am more just curious.
I see now. oh-my-zsh does setopt auto_pushd which is described here as: AUTO_PUSHD (-N) Make cd push the old directory onto the directory stack.
Why is cd appending dirs like pushd?
I have aliased pushd in my bash shell as follows so that it suppresses output: alias pushd='pushd "$@" > /dev/null' This works fine most of the time, but I'm running into trouble now using it inside functions that take arguments. For example, test() { pushd . ... } Running test without arguments is fine. But with arguments: > test x y z bash: pushd: too many arguments I take it that pushd is trying to take . x y z as arguments instead of just .. How can I prevent this? Is there a "local" equivalent of $@ that would only see . and not x y z?
Aliases define a way to replace a shell token with some string before the shell even tries to parse the code. It's not a programming structure like a function. In alias pushd='pushd "$@" > /dev/null' and then: pushd . What's going on is that the pushd is replaced with pushd "$@" > /dev/null and then the result is parsed. So the shell ends up parsing: pushd "$@" > /dev/null . Redirections can appear anywhere on the command line, so it's exactly the same as: pushd "$@" . > /dev/null or > /dev/null pushd "$@" . When you're running that from the prompt, "$@" is the list of arguments your shell received so unless you ran set arg1 arg2, that will likely be empty, so it will be the same as pushd . > /dev/null But within a function, that "$@" will be the arguments of the function. Here, you either want to define pushd as a function like: pushd() { command pushd "$@" > /dev/null; } Or an alias like: alias pushd='> /dev/null pushd' or alias pushd='pushd > /dev/null'
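The function form is the robust fix precisely because a function receives its own "$@". A minimal sketch with a made-up wrapper name (quiet in place of pushd, since pushd itself is bash-only):

```shell
# A function's "$@" is its own argument list, so calling it from inside
# another function cannot pick up that function's arguments by accident.
quiet() { command "$@" > /dev/null; }

outer() {
    quiet echo "this output is discarded"    # quiet sees only its own args
    echo "outer received $# argument(s)"
}
outer a b c    # prints: outer received 3 argument(s)
```

Had quiet been the alias from the question instead, the call inside outer would have expanded to include outer's a b c as extra arguments, reproducing the "too many arguments" error.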
$@ in alias inside script: is there a "local" $@?
In Ubuntu 16.04 with Bash I had a problem when I didn't have a convenient way to upgrade all my WordPress components (core, translations, theme, plugin) and I used the following code to solve it: cat <<-EOF > /etc/cron.daily/cron_daily #!/bin/bash drt="/var/www/html" for dir in "$drt/*/"; do if pushd "$dir"; then wp plugin update --all --allow-root wp core update --allow-root wp language core update --allow-root wp theme update --all --allow-root rse popd fi done EOF What I'd like to ask actually comprises these questions: How does the dir variable relate to the asterisk (*) coming a little bit after it on the same line (how does the Bash interpreter know that dir represents each directory inside /var/www/html)? How does the pushd-popd sequence(?) work here? I understand it as "if you are inside $dir, which represents each iteration of the for loop, do stuff".
How does the dir variable resembles the asterisk (*) coming a little bit after it in the same line (how is the Bash interpreter knows that $dir [iteratively] represents each directory inside /var/www/html/? That's just how Bash shell globs work/behave, but you are mistaken about one thing: dir/* is a glob which includes all files within dir, not just all directories: in a POSIX environment, directories are a file type, but in this case only directories are relevant to the for loop and the subsequent pushd-popd pair. How does the pushd-popd sequence(?) works here? I understand it to "if you are inside $dir which resembles each iteration of the for loop, do stuff". pushd and popd are a pair of tools that work with a data structure known as a "stack". Picture a spring-loaded dispenser, like a Pez dispenser. If you push an item onto a stack, you are storing it for later use, like pushing one 'pill' into the top of the Pez dispenser. If you pop an item from a stack, you are taking it out for use or reference. This removes it from the stack. Take a look at this behavior for a simple example of how pushd and popd work: $ pwd /home/myuser $ pushd /etc /etc ~ $ pwd /etc $ pushd /usr/local /usr/local /etc ~ $ pwd /usr/local $ popd /etc ~ $ pwd /etc $ popd ~ $ pwd /home/myuser Your for loop basically works by saying if I can push this directory onto the directory stack, then that means firstly that the directory now exists, and secondly that that directory is now the current working directory. It will proceed to do the work, and then popd to go back to wherever it had been before, and then run the next iteration of the loop.
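The visit-each-directory shape of that loop can be sketched portably with a subshell cd in place of the pushd/popd pair (the directory names below are invented for the demonstration):

```shell
# Visit every subdirectory, do some work there, and end up back where we
# started; the subshell confines the cd, just as pushd/popd brackets it.
base=$(mktemp -d)                    # stand-in for /var/www/html
mkdir "$base/site1" "$base/site2"

for dir in "$base"/*/; do            # trailing / makes the glob match dirs
    [ -d "$dir" ] || continue
    ( cd "$dir" && echo "working in $(basename "$PWD")" )
done

rm -r "$base"
```

Each loop iteration enters one directory, does its work, and the parent shell's working directory is untouched throughout, which is the same guarantee the if pushd ... popd pair provides in the cron script.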
How does pushd work?
I have a file named filename.sh in different directories. I am trying to simultaneously submit this file to run on a number of cluster nodes. I need to find this file wherever they may be, cd into their containing folder and execute the command "sbatch run_code.sh" This is what I have tried. find . -mindepth 1 -type f -name filename.sh -exec sbatch run_code.sh {} \; And it works except it runs command from current folder which does not contain dependencies needed to run the code. alternatively I am leaning towards a for loop like this for d in dirname; do cd "$(dirname "$(find . -type f -name filename.sh)")" && sbatch run_code.sh) done which does not work. I look forward to getting some help here. Please.
Sounds like you want: find . -name filename.sh -type f -execdir sbatch run_code.sh {} ';' -mindepth 1 is superfluous. It excludes ., but . is excluded anyway by both -type f and -name filename.sh. -execdir (from BSD), is not a standard find predicate, but chances are that if your find supports -mindepth (also non-standard though from GNU), it will also support -execdir. -execdir is exactly for that. It's like -exec except it runs the command in the parent directory of the selected file. Here, depending on the find implementation, the command being run will be either sbatch run_code.sh ./filename.sh or sbatch run_code.sh filename.sh. Remove the {} if you don't want the filename.sh to be passed as an argument to the command. With find implementations that don't support -execdir, you could do: find . -name filename.sh -type f -exec sh -c ' cd "${1%/*}" && exec sbatch run_code.sh "${1##*/}"' sh {} ';' ${1%/*} is $1 but with the shortest tail matching /* removed, so it will act like "$(dirname -- "$1")" but is more efficient and more reliable¹. ${1##*/} removes the longest head matching */ giving you a basename-like result. Or to avoid running one sh per file: find . -name filename.sh -type f -exec sh -c ' for file do (cd "${file%/*}" && exec sbatch run_code.sh "${file##*/}"); done' sh {} + With the zsh shell, you could also do: for f (./**/filename.sh(ND.)) (cd $f:h && sbatch run_code.sh $f:t) (the exec (to save a process) is not necessary as zsh does it implicitly already for the last command run in the subshell). $f:h expands to $f's head (dirname) and $f:t to its tail (basename). ¹ It would still work, for instance, if the dir name ended in newline characters; there are cases where using dirname (or zsh's $var:h) gives better results though, such as when the path to get the dirname of doesn't have any / or ends in / characters.
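The two parameter expansions doing the dirname/basename work above can be checked on any sample path (the path below is made up):

```shell
# ${p%/*}  removes the shortest suffix matching /*  -> dirname-like result
# ${p##*/} removes the longest prefix matching */   -> basename-like result
p='/srv/data/proj/filename.sh'
echo "${p%/*}"     # prints: /srv/data/proj
echo "${p##*/}"    # prints: filename.sh
```

Both expansions are POSIX, so they work in the sh -c snippets regardless of which /bin/sh is installed.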
how to "execute command on file in multiple subdirectories within containing folder" i.e cd into each subdirectory containing file and execute command
I have pushd several pathnames, so my dirs stack has multiple stack frames. Now I would like to empty the stack, without changing my current directory. I wonder how to do that? Thanks.
Since dirs is a builtin, you can get help for it by running help dirs. Doing so will inform you about the -c option, which clears your directory stack. dirs -c
How can I empty the `dirs` stack, without changing my current directory?
pushd pushes a directory onto the directory stack and changes the working directory. I guess that is all we need. But besides that, why does pushd always output the stack to stdout? I think that is unnecessary and adds clutter to the screen, or I miss its point. Thanks.
pushd and popd primarily provide a convenient way to navigate directories interactively. In scripts, cd is more often used. As an interactive tool, pushd gives the user feedback about the current state of the directory stack whenever a new directory is successfully pushed onto it. The cd command is sometimes wrapped in a shell function (for interactive use only) that in the same way outputs the new directory whenever the current working directory is changed. The r alias in ksh shells, which repeats the most recently given command, also gives the user feedback in the form of echoing the actual command that is being executed. All these small pieces of feedback are there to help the user navigate and just to generally ensure that they are doing what they think they're doing when working at an interactive shell prompt.
Why does `pushd` always output the stack to stdout? [closed]
I am trying to extract an SFX file under Linux Mint 15 (64 bit) but it's not working. I've done chmod +x on the file and tried to run it like a script with no luck (it gives me an error that there's no such file or directory). What's interesting is that this worked for me when I was running Linux Mint 14 (64 bit). I found an article that mentions glibc support and how newer distributions have removed 32 bit glibc binaries but I'm not quite sure if this is accurate in my case since I'm not running RHEL. EDIT: I forgot to mention that I tried the solution posted on that article but it did not fix my problem. I've also tried using 7z, 7za, unzip, and unzipsfx with no success. unzipsfx gives me the error "unzipsfx: cannot find myself! [unzipsfx]" which I find rather strange. A quick note: The SFX relies on six other archives in the RAR format. I'm not dealing with zip, 7z, or any other format like that. Am I doing something wrong? Something must have changed between distributions since extracting worked fine for me before...
Use unrar to extract files from RAR SFX archives. Like this: unrar x filename.sfx
Extracting SFX files in Linux
At best I would like to have a call like this: $searchtool /path/to/search/ -contained-file-name "*vacation*jpg" ... so that this tool does a recursive scan of the given path takes all files with supported archive formats which should at least be the "most common" like zip, rar, 7z, tar.bz, tar.gz ... and scan the file list of the archive for the name pattern in question (here *vacation*jpg) I'm aware of how to use the find tool, tar, unzip and alike. I could combine these with a shell script but I'm looking for a simple solution that might be a shell one-liner or a dedicated tool (hints to GUI tools are welcome but my solution must be command line based).
(Adapted from How do I recursively grep through compressed archives?) Install AVFS, a filesystem that provides transparent access inside archives. First run this command once to set up a view of your machine's filesystem in which you can access archives as if they were directories: mountavfs After this, if /path/to/archive.zip is a recognized archive, then ~/.avfs/path/to/archive.zip# is a directory that appears to contain the contents of the archive. find ~/.avfs"$PWD" \( -name '*.7z' -o -name '*.zip' -o -name '*.tar.gz' -o -name '*.tgz' \) \ -exec sh -c ' find "$0#" -name "$1" ' {} '*vacation*.jpg' \; Explanations: Mount the AVFS filesystem. Look for archive files in ~/.avfs$PWD, which is the AVFS view of the current directory. For each archive, execute the specified shell snippet (with $0 = archive name and $1 = pattern to search). $0# is the directory view of the archive $0. {\} rather than {} is needed in case the outer find substitutes {} inside -exec ; arguments (some do it, some don't). Or in zsh ≥4.3: mountavfs ls -l ~/.avfs$PWD/**/*.(7z|tgz|tar.gz|zip)(e\'' reply=($REPLY\#/**/*vacation*.jpg(.N)) '\') Explanations: ~/.avfs$PWD/**/*.(7z|tgz|tar.gz|zip) matches archives in the AVFS view of the current directory and its subdirectories. PATTERN(e\''CODE'\') applies CODE to each match of PATTERN. The name of the matched file is in $REPLY. Setting the reply array turns the match into a list of names. $REPLY\# is the directory view of the archive. $REPLY\#/**/*vacation*.jpg matches *vacation*.jpg files in the archive. The N glob qualifier makes the pattern expand to an empty list if there is no match.
Find recursively all archive files of diverse archive formats and search them for file name patterns
This is a more specific question of How to open rar file in linux (asked in 2015) that had no detailed answer for p7zip to open RAR files at this time of writing. p7zip is essentially the 7-zip archive manager on Linux, except that does not include the graphical interface. p7zip should be able to open RAR files like 7-zip does, but some recently downloaded RAR files can not be opened using p7zip. The RAR file itself is not broken for sure. This question may cover the following sub-questions (without the question marks, as not to be confused with the main question), which are relevant to explain the how-to: Does p7zip really support RAR format Which package to install for p7zip to support RAR format, and which repository would provide the package Which binary of 7z, 7za, or 7zr can open the RAR file Can p7zip be used to open the RAR file via graphical interface When using p7zip to open the RAR file, the command failed with message "Error: Can not open file as archive", then how to solve So how to use p7zip to open RAR files? This is a self-answer question that has been written like a new question, which was created based on the discussion in this meta post. Should there be more than one answer, the most accurate and most complete answer will be accepted after some time (not immediately).
p7zip is the Unix command-line port of 7-Zip, which has many supported formats. p7zip supports the RAR format for unpacking or extracting only. Users can either download the binaries and source code or install the packages provided by Linux distributions and other supported systems. With the binaries and source code, p7zip is available in a single download file that can handle all supported formats and installation is optional. That means the command-line tool can be run as soon as the download file (tar.bz2) is extracted by a native archive manager on Linux. With the packages, p7zip will require separate packages to handle all supported formats and installation is necessary. For Debian (and Ubuntu), p7zip is available in three different packages from the repositories. Ensure that the main and non-free repositories (or universe and multiverse repositories for Ubuntu) are enabled and updated first. Install the required package 'p7zip-rar' that will additionally include 'p7zip-full' as one of the package dependencies; the other package 'p7zip' is not required at all. sudo apt-get install p7zip-rar Then run the 7z command to extract the RAR file. 7z x filename.rar Short explanation: p7zip provides the 7zr command that can only handle 7z archives. p7zip-full provides the 7z command that can handle more supported formats and p7zip-rar is required for 7z to handle RAR archives. Note the following use cases: 7zr and 7za commands will not work and only the 7z command will work with the RAR format. p7zip on ArchWiki has noted the difference between 7z, 7za and 7zr binaries. 7z can be used with any of the supported graphical file archivers, including file-roller, ark, xarchiver, engrampa. Just install one of the supported archive managers to use p7zip via a graphical interface, and no further configuration is needed.
Newer versions of RAR files (notably RAR version 5) should be unpacked or extracted using a newer version of p7zip (at least 16.02), otherwise p7zip will return an error and fail to open the file. Example output of p7zip (9.20) failing to test a RAR file (Rar5): $ 7z t sample.rar 7-Zip 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18 p7zip Version 9.20 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,1 CPU) Processing archive: sample.rar Error: Can not open file as archive Example output of p7zip (16.02) successfully testing a RAR file (Rar5): $ p7zip_16.02/bin/7z t sample.rar 7-Zip [32] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21 p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,32 bits,1 CPU Intel(R) Celeron(R) M processor 1.60GHz (6D8),ASM) Scanning the drive for archives: 1 file, 483579957 bytes (462 MiB) Testing archive: sample.rar -- Path = sample.rar Type = Rar5 Physical Size = 483579957 Solid = - Blocks = 5 Encrypted = - Multivolume = - Volumes = 1 Everything is Ok Files: 5 Size: 498584235 Compressed: 483579957 The latter output of p7zip (16.02) includes the line Type = Rar5, which indicates RAR version 5. The file command may contain similar but less human-readable information about the RAR version. $ file *.rar sample4.rar: RAR archive data, v1d, os: Unix sample.rar: RAR archive data, va6, flags: Archive volume, Commented, Locked, os: Unix Notice that the older RAR file (Rar) includes v1d whilst the newer RAR file (Rar5) includes va6 within the description of each file. As of this writing, there is no clear information to determine whether these file descriptions correspond to the RAR versions or not. TL;DR p7zip can open RAR files, provided the package (p7zip-rar), the command (7z), and the newer version (16.02+ for Rar5 support) are used to handle the RAR format. Answerer's note: This self-answer--some 18 months later--will be accepted and made a community wiki.
Anyone with minimum reputation may edit to make this answer more complete, should there be any missing information.
How to use p7zip to open RAR files?
1,448,949,666,000
I have a directory full of rar files, with extensions ranging from .r00 to .r30. It also has one file with a .rar extension. From all of this must come a video file. How do I do it?
If you only have the rar program, the x command will accomplish the task: rar x <part_name> The program automatically searches for the appropriate parts of the archive. Note: sometimes the naming can be <archive_name>.part##.rar.
Working with rar files
1,448,949,666,000
I have noticed that Nautilus (GNOME Files) can extract some RAR files that cannot be extracted using free packages like unrar-free or file-roller via CLI, nor using GUI tools like Engrampa or Xarchiver. I don't know why exactly. No passwords involved or anything unusual, just (what seem like) regular RAR files. Maybe different formats? Anyway, I'd like to know what (if any) standalone tool Nautilus uses for extracting RAR files so I could do it myself on the command line. I only use Debian free repositories, so I guess it should be a free package... This is my first question, let me know if I should improve anything. Thanks!!
Nautilus uses libarchive to process archives; this supports some RAR formats without any external helpers. bsdtar is a command-line tool using the same library; in Debian it’s packaged as libarchive-tools.
What free tool does Nautilus use to extract RAR files?
1,448,949,666,000
I've got a 20GB RAR file to extract with a password on Debian Linux Google Cloud VM. I first tried sudo apt-get install unrar but the following output was given: Reading package lists... Done Building dependency tree Reading state information... Done Package unrar is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'unrar' has no installation candidate I found that this is likely to be because I don't have the multiverse activated, so I tried sudo add-apt-repository multiverse. This didn't work: Error: 'multiverse' invalid I eventually found a post saying that 'unrar free' could be installed. I installed it, and ran unrar-free -x -p Filename.rar. It is currently going through each file in the archive and giving the following output: Extracting Folder_name/image/0/1.jpg Failed Extracting Folder_name/image/0/10.jpg Failed Extracting Folder_name/image/0/100.jpg Failed Extracting Folder_name/image/0/1000.bmp Failed Apparently unrar-free is unable to extract archives in the RAR 3.0 format. I don't know how to tell which version of RAR this archive was compressed in. How can I extract this RAR file? I don't mind paying some money if it means faster extraction - I've got 140GB of RAR files to get through.
You can extract RAR archives, including RAR 5 archives, in Debian with unar, which is available in the main repositories. To be able to install the unrar package, you need to enable the non-free repositories (non-free in the “free as in freedom” sense): sudo sed -i.bak 's/buster[^ ]* main$/& contrib non-free/g' /etc/apt/sources.list sudo apt update (The sed command adds contrib non-free to the end of every line containing “buster”; use the appropriate codename if you’re using a different release.) This will allow you to run sudo apt install unrar and use that to extract your RAR archives.
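To see exactly what that sed substitution does, you can try it on a throwaway file first (the file name and the sources line below are made up for the demo):

```shell
# A sample sources.list line (hypothetical, for illustration only).
printf 'deb http://deb.debian.org/debian buster main\n' > sample.sources.list

# The same substitution as above: append "contrib non-free" to buster lines;
# -i.bak edits in place and keeps the original as sample.sources.list.bak.
sed -i.bak 's/buster[^ ]* main$/& contrib non-free/g' sample.sources.list

cat sample.sources.list
# → deb http://deb.debian.org/debian buster main contrib non-free
```

The & in the replacement stands for the whole matched text, so the original "buster main" is kept and the extra components are appended after it.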
How can I extract a RAR file on Debian?
1,448,949,666,000
I am using Linux Mint 14 Nadia . Earlier whenever I right clicked any folder I got options like Extract it, Compress etc. But now I am not getting them . Probably I changed/deleted some of the utilities . I have nautilus installed on my system . Is there any way I can get those 2 features back in the options list when I right click any folder ?
From the Linux Mint Forum I found the following post, Extract, Compress in right-click menu. According to that post you have 2 options: use Caja instead of Nautilus (I believe Caja is just a fork of Nautilus), or install File Roller + Xarchiver. According to that post these are the steps required to install File Roller:
open Synaptic Package Manager
search for file-roller
select file-roller to install
search for xarchiver
select xarchiver to install
restart
How to get extract/compress option on right clicking?
1,448,949,666,000
I have a lot of rar files - Folder/ --- Spain.rar --- Germany.rar --- Italy.rar All the files contains no root folder so it's just files. What I want to achieve when extracting is this structure: - Folder/ -- Spain/ ---- Spain_file1.txt ---- Spain_file2.txt -- Germany/ ---- Germany_file1.txt ---- Germany_file2.txt -- Italy/ ---- Italy_file1.txt ---- Italy_file2.txt So that a folder with the name of the archive is created and the archive is extracted to it. I found this bash example in another thread but it's not working for me, it's trying to create one folder with all the files as name. #!/bin/bash for archive in "$(find . -name '*.rar')"; do destination="${archive%.rar}" if [ ! -d "$destination" ] ; then mkdir "$destination"; fi unrar e "$archive" "$destination" done Any ideas how I can do this?
I have a script in my personal archive that does exactly this. More precisely, it extracts e.g. Spain.rar to a new directory called Spain, except that if all the files in Spain.rar are already under the same top-level directory, then this top-level directory is kept. #!/bin/sh # Extract the archive $1 to a directory $2 with the program $3. If the # archive contains a single top-level directory, that directory # becomes $2. Otherwise $2 contains all the files at the root of the # archive. extract () ( set -e archive=$1 case "$archive" in -) :;; # read from stdin /*) :;; # already an absolute path *) archive=$PWD/$archive;; # make absolute path esac target=$2 program=$3 if [ -e "$target" ]; then echo >&2 "Target $target already exists, aborting." return 3 fi case "$target" in /*) parent=${target%/*};; */[!/]*) parent=$PWD/${target%/*};; *) parent=$PWD;; esac temp=$(TMPDIR="$parent" mktemp -d) (cd "$temp" && $program "$archive") root= for member in "$temp/"* "$temp/".*; do case "$member" in */.|*/..) continue;; esac if [ -n "$root" ] || ! [ -d "$member" ]; then root=$temp # There are multiple files or there is a non-directory break fi root="$member" done if [ -z "$root" ]; then # Empty archive root=$temp fi mv -v -- "$root" "$target" if [ "$root" != "$temp" ]; then rmdir "$temp" fi ) # Extract the archive $1. process () { dir=${1%.*} case "$1" in *.rar|*.RAR) program="unrar x";; *.tar|*.tgz|*.tbz2) program="tar -xf";; *.tar.gz|*.tar.bz2|*.tar.xz) program="tar -xf"; dir=${dir%.*};; *.zip|*.ZIP) program="unzip";; *) echo >&2 "$0: $1: unsupported archive type"; exit 4;; esac if [ -d "$dir" ]; then echo >&2 "$0: $dir: directory already exists" exit 1 fi extract "$1" "$dir" "$program" } for x in "$@"; do process "$x" done Usage (after installing this script in your $PATH under the name extract and making it executable): extract Folder/*.rar
Unrar to folder with same name as archive
1,448,949,666,000
I enjoy using squashfs for compression because of the simplicity of mounting them as loop devices to access the files inside. I have a lot of rar, tgz and zip files that I would like to convert to squashfs. In this answer, I saw that it is possible to use a pseudo file when compressing a disk image to squashfs to avoid having to use a temporary file the size of the whole disk. mkdir empty-dir mksquashfs empty-dir squash.img -p 'sda_backup.img f 444 root root dd if=/dev/sda bs=4M' I would like to use pseudo files to convert from rar, tgz or zip to squashfs in the same way (on the fly), so I don't have to first extract the whole archive to disk and then compress to squashfs in a separate operation. Some of these archives contain thousands of individual files, some of which will have spaces or other special characters in their filenames. I looked at the README, and I think I would need to use the -pf <pseudo-file> option, but I'm not sure how to create the pseudo file on the fly (and also not have problems with filenames with spaces). I think I would need to use process substitution to create the list of files from the source archive. Ideally I would like to have a command that is able to convert any rar, tgz or zip without having to individually create the pseudo file for each archive, but if anyone can tell me how I can do it with one of those archive formats, then hopefully I can work it out for the others. Thanks everyone.
You could mount them with fuse-zip or archivemount and then create the squashfs file from the mount point. For example, this would work for a zip file: $ mkdir /tmp/zmnt $ fuse-zip -r /path/to/file1.zip /tmp/zmnt $ mksquashfs /tmp/zmnt /path/to/file1.squashfs $ fusermount -u /tmp/zmnt
How to convert from rar or tgz to squashfs without having to extract to temporary folder?
1,448,949,666,000
Suppose you have a multipart rar file, say file.part1.rar, file.part2.rar, file.part3.rar. I know that I can extract only the first parts using for example unrar e -kb file.part1.rar However, assume that I have only file.part3.rar and not parts 1 and 2. Is there then a way to extract the content of part 3? Is this possible if the rar file contains a video file? (Extracting part 3 should then result in a video file which contains only the last x minutes.) If it is not possible, is it because it is in principle impossible, or because there is currently just no program which can do it? Edit: Is it possible to extract, say, the last part if you have just the last and the first part but not the parts in between?
It seems to be impossible, because a multipart rar archive contains metadata about all the files inside all the parts of the archive. And even if you try to unrar a single file (a movie) not from the beginning, it will fail, because the file contains its own metadata at the beginning, and even a video stream is a container with a special format and its own headers and so on (mp4, mkv...). Even if you try to chop the file, it's a bad idea. Just find a full source of the file you want ))
unrar part of a multipart rar file
1,448,949,666,000
I had three RAR files in the same directory: file1.rar, file2.rar and file3.rar. I wanted to extract them with one command using expansion, bearing in mind that asterisks have to be escaped in arguments for unrar, unzip, 7z, etc. I tried this command: unrar x file\*rar It resulted in: UNRAR 5.00 beta 8 freeware Copyright (c) 1993-2013 Alexander Roshal No files to extract However, this command worked: unrar x file\* And this command works: ls file*rar It results in: file1.rar file2.rar file3.rar So why doesn't the first command work?
It doesn't work in 5.2.7 (newer version) either. I'd suggest trying unrar x file\*.rar, note the dot before rar. That goes down a slightly different code path, at least in 5.2.7, and it works in 5.2.7. Why? Well, after a few minutes of looking through unrar's source (take a look at match.cpp if you want to try!), I can comfortably say "because Alexander Roshal really, really, really should have used glob(3) instead". Why didn't he? Probably because it's not available on Windows, where AFAIK rar originates. On Windows extensions are special, and it seems the unrar code treats the extension as sort-of-not-really part of the filename—a plain, final trailing * will match one, but a * in the middle will not. Not sure if this is expected behavior on Windows, but it's surely not on Unix. Workaround The sane way to deal with brokenness like this is probably something like: for f in file*rar; do unrar x "$f" done Let the shell expand the glob and give unrar one file at a time. Just hope none of your files have * in their names... I at first said it worked in 5.2.7; that was mistaken: I lost the backslash while testing…
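A quick way to convince yourself the loop does the right thing is to substitute a harmless printf for unrar, so you can see exactly which names the shell expands (the file names below are invented for the demo):

```shell
cd "$(mktemp -d)"                      # scratch directory for the demo
touch file1.rar file2.rar file3.rar

# the shell expands the glob; each file is handed over one at a time
# (printf stands in for: unrar x "$f")
for f in file*rar; do
    printf 'would run: unrar x %s\n' "$f"
done > extract.log
cat extract.log
```

Each line of extract.log corresponds to one unrar invocation, which is why this sidesteps unrar's own (broken) wildcard matching entirely.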
Why can't unrar expand this expression?
1,448,949,666,000
I want to extract only a specific file type using unrar. With unzip command I can extract all archives with a specific extension. unzip "$FileName" *[.txt,.TXT] How can I do the same with unrar? Do I need to iterate through every file?
unrar x "$FileName" \*.txt \*.TXT In bash you can also use: unrar x "$FileName" \*.{txt,TXT} Bash transforms this to the former form.
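You can watch the brace expansion happen by letting echo print the arguments the shell would hand to unrar (archive.rar is a placeholder name):

```shell
# bash expands \*.{txt,TXT} into two arguments before unrar ever runs;
# echo shows exactly what unrar would receive
bash -c 'echo unrar x archive.rar \*.{txt,TXT}'
# → unrar x archive.rar *.txt *.TXT
```

The backslash keeps the * out of the shell's pathname expansion, so unrar receives the literal patterns *.txt and *.TXT and does the matching itself.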
Unrar specific files using wildcards
1,448,949,666,000
When bash-completion is loaded, unrar x completes after pressing tab to RAR-archives in the directory. But for multipart archives with the new naming convention, like Filename.part01.rar Filename.part02.rar Filename.part03.rar it doesn't see any difference between the first archive ending on .part1.rar, .part01.rar or .part001.rar and all the others parts like .part02.rar which are never opened directly, it completes them all. Is it possible to configure bash-completion so that only the first part of multipart RAR-archives is completed? This means files which end on .rar but must not end on .part□.rar where □ is a number greater than 1 with leading zeros (e.g. 2 or 02 or 002)? The following works for me. I DO NOT know if this is 100% correct: # unrar(1) completion -*- shell-script -*- _unrar() { local cur prev words cword cmp_opts=1 i _init_completion || return # Check if all of the middle part are options. # If not, we break at the last-option idx, and won't complete opts. for ((i=1; i<${#words[@]}-1; i++)); do # not using the whole list for checking -- too verbose if [[ ${words[i]} != -* || ${words[i]} == '--' ]]; then cmp_opts=0 break fi done if [[ $cur == -* ]] && ((cmp_opts)); then # options COMPREPLY=( $( compgen -W '-ad -ap -av- -c- -cfg- -cl -cu -dh -ep -f -idp -ierr -inul -kb -o+ -o- -ow -p -p- -r -ta -tb -tn -to -u -v -ver -vp -x -x@ -y' -- "$cur" ) ) elif ((cword == 1)); then # command COMPREPLY=( $( compgen -W 'e l lb lt p t v vb vt x' -- "$cur" ) ) elif ((cword == i+1)); then # archive _filedir '[rR][aA][rR]' # If there is a second, third, ... ninth part for i in "${COMPREPLY[@]}"; do if [[ $i == *.part*(0)[2-9].[rR][aA][rR] ]]; then # Only look for the first, since it's the only useful one COMPREPLY=() _filedir 'part*(0)1.[rR][aA][rR]' break fi done else # files.../path... _filedir fi } && complete -F _unrar unrar # ex: ts=4 sw=4 et filetype=sh
Look at https://github.com/scop/bash-completion/pull/12/files to get a sense of how this filtering can be done. Basically you will need to post-process COMPREPLY[] in some way to get rid of the mis-completions. You can also add a wrapper around it: _mycomp_unrar(){ local i _unrar "$@" # use the old one # now copy the for i in "${COMPREPLY[@]}" stuff } && complete -r unrar # remove old completion complete -F _mycomp_unrar unrar # use your good new one Or you can send a Pull Request (as shown above) and see what happens. Added commit https://github.com/Arthur2e5/bash-completion-1/commit/a586ede to fix the problem that the existence of parts will stop normal files from showing up. (The glob as a whole is.. unreadable.) Now you need to copy the if ((cmp_parts)) part too. Also, make cmp_parts local.
Bash completion for `unrar`
1,448,949,666,000
I am used to raring files -- but I am looking for something faster. I see that there is a "split" command. How does this command compare to rar? What are the differences between split and rar?
split is a traditional UNIX tool, that does one job only—splitting files. If you had a bunch of files to archive to individual disks, you might do it like this:
 ____________________
|     FILESYSTEM     |        _________          ____________
|   dir1/   dir2/    |  tar  |         |  gzip  |            |
|   file1   file3    |------>| ARCHIVE |------->| COMPRESSED |
|   file2   file4    |       |_________|        |  ARCHIVE   |
|____________________|                          |____________|
                                                      |
                                                      | s
                                                      | p
                                                      | l
                                                      | i
                                                      | t
                                                      |
                     +----------------+----------------+------------------+
                     |                |                |
                    \|/              \|/              \|/
              ____________     ____________     ____________
             |            |   |            |   |            |
             | COMPRESSED |   | COMPRESSED |   | COMPRESSED |
             |  ARCHIVE   |   |  ARCHIVE   |   |  ARCHIVE   |  . . .
             |   DISK 1   |   |   DISK 2   |   |   DISK 3   |
             |____________|   |____________|   |____________|
You use tar to combine a bunch of files into one archive; you use gzip to make that archive smaller by compressing it; and you finally use split to cut that compressed archive into chunks that fit on your disks. The advantage here is that you can easily switch out parts—say, you could use bzip2 or xz to compress your archive, or cpio to make your archive. rar (and also zip) come from the DOS/Windows world, where you don't normally chain together tools. So they actually combine an archiver (like tar), a compressor (like gzip), and a file splitter (like split) into one tool. The advantage is that the three parts can have more knowledge of each other—say, you could avoid splitting a single file across disks (which is near impossible with the distinct programs).
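The pipeline above can be sketched with the standard tools; everything here (the directory name, the 1 KiB chunk size) is made up just to keep the demo small:

```shell
cd "$(mktemp -d)"                        # scratch area for the demo
mkdir dir1 && head -c 4096 /dev/urandom > dir1/file1

# archive -> compress -> split into 1 KiB chunks named chunk.aa, chunk.ab, ...
tar cf - dir1 | gzip | split -b 1k - chunk.

# reassembly is just concatenation, in order
cat chunk.* | gzip -d | tar tf -         # lists dir1/ and dir1/file1
```

Because the chunk names sort in the order split created them, a plain cat glob is all the "joining" tool you need—there is no un-split command.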
What are the differences between split and rar?
1,448,949,666,000
I have many uncompressed rar multi-part archives on my ftp server. Under Windows, with 7zip, I have no problem extracting part1 when I have not yet downloaded part2 from the server. There are notifications about errors, but if some files were complete in part1 they extract correctly and I can use these files. Under Linux it looks like there is 7zip with a non-free rar module (I think 7z was installed when I was installing GNOME). How can I extract my files from an incomplete multi-part archive under Linux? I'm using Debian 7 (amd64 architecture) with GNOME.
With unrar, you have the switch -kb (“keep broken”), which doesn't erase the extracted files even when there are errors.
Extract incomplete RAR archive under linux (desktop)
1,448,949,666,000
I'm dealing with a large amount of password protected .rar files which need to be repacked to remove the password. (The password is known.) I was wondering if there was a script to batch/recursively extract & repack them while keeping the same name and directory structure that they had before.
I would split this task up into two parts. The first is a script rerar that extracts and rebuilds the rar, taking the name of a rar file as a parameter: #!/bin/bash R="$PWD"/"$1" # if realpath is available you can use R=$(realpath "$1") tmpdir=$(mktemp -d --suff rerar) pushd "$tmpdir" # extract preserving directory structure of the archive # replace YOUR_PASS_WORD in the next line, with no space after "-p"! unrar x -pYOUR_PASS_WORD "$R" # backup the rar file, optional mv "$R" "$R".org # re-create recursively going over the files here rar a -r "$R" . popd rm -rf "$tmpdir" Now you only have to run this on all the rar files involved, e.g. by using find find . -name "*.rar" -exec ./rerar {} \; It is not as efficient as calling the script with multiple parameters, but here the time-consuming action is recreating the rar archive, which is why I went for the simple solution.
Batch Extract & Repack .RAR Files
1,448,949,666,000
I have a 100+ rar files which I want to extract using find's exec command. I'd like to see the usual rar output so I can monitor its progress, and also to pipe the output to grep and then on to wc to count the 'All OK' lines (which rar prints if an archive is extracted successfully). I tested with the following command (without the final wc -l component), which was designed to find 2 rar archives, but that simply printed 4 'All OK' lines, 2 for each of the 2 rar archives that were extracted. $ find -iname 'TestNum*.rar' -execdir rar e '{}' \; | tee - | grep -i 'All OK' All OK All OK All OK All OK Note: 2 archives extracted, but 4 'All OK' lines, and none of rar's output. What I wanted was something like: $ find -iname 'TestNum*.rar' -execdir rar e '{}' \; | ... ??? Extracting from TestNum1.rar All OK Extracting from TestNum2.rar All OK 2 The final line with just 2 being the wc -l output, showing the actual number of matches of 'All OK'. Is this possible? Thanks. EDIT @ 2018-04-25 19:21 I've just realized that I could just do the following which works fine: find . -iname 'TestNum*.rar' -execdir rar e '{}' \; | tee rar_out grep -i 'All OK' rar_out | wc -l Still out of interest is what I originally asked possible?
tee can send to stdout and to a file. In your example you send both outputs to stdout (which in this case is the pipe). One way around this is to use a named pipe to capture the output: mkfifo p cat p & # this blocks until something is written to p find -iname 'TestNum*.rar' -execdir rar e '{}' \; | tee p | grep -c 'All OK'
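If you'd rather not create a named pipe, another option is to tee one copy to stderr—so the rar output still shows on the terminal—while grep counts on stdout. A printf stands in for the find/rar pipeline here, faking the 'All OK' lines, since the actual archives are hypothetical:

```shell
# the printf fakes two successful extractions; in real use replace it with:
#   find -iname 'TestNum*.rar' -execdir rar e '{}' \;
printf 'Extracting from TestNum1.rar\nAll OK\nExtracting from TestNum2.rar\nAll OK\n' |
    tee /dev/stderr | grep -c 'All OK'
# the count (2) arrives on stdout; the full output is still visible on stderr
```

This keeps everything in one pipeline with no temporary files or fifos, at the cost of the progress output now technically being on stderr.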
Send find's output to stdout and piped to grep
1,448,949,666,000
This is running 64-bit rar on an AWS AMI Linux EC2 instance (4 CPU cores, Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz, 8 GB RAM). I have a folder with 8568 files. When I create a rar file and add all the files to it, it takes about 3 minutes before it begins adding the files. Is this normal? Do you know what the reason for the delay might be (analyzing the files?)? The command I'm using: /usr/local/bin/rar a -r -iddpq -ierr /path/to/compress/myfile.rar /path/to/compress/*.log Each file is only about 980 bytes.
After running a reduced-scope test, I see that the rar binary is calling stat on each file 7 times before finally opening the file to read the contents. I would have chased down the behavior in the source code, but it's not available (in debian, at least). $ strace -o rar.strace rar a -r -iddpq -ierr path/to/compress/myfile.rar path/to/compress/*.log ... $ grep /5.log rar.strace execve("/usr/bin/rar", ["rar", "a", "-r", "-iddpq", "-ierr", "path/to/compress/myfile.rar", "path/to/compress/1.log", "path/to/compress/2.log", "path/to/compress/3.log", "path/to/compress/4.log", "path/to/compress/5.log"], [/* 16 vars */]) = 0 stat64("path/to/compress/5.log", {st_mode=S_IFREG|0644, st_size=980, ...}) = 0 stat64("path/to/compress/5.log", {st_mode=S_IFREG|0644, st_size=980, ...}) = 0 stat64("path/to/compress/5.log", {st_mode=S_IFREG|0644, st_size=980, ...}) = 0 stat64("path/to/compress/5.log", {st_mode=S_IFREG|0644, st_size=980, ...}) = 0 stat64("path/to/compress/5.log", {st_mode=S_IFREG|0644, st_size=980, ...}) = 0 stat64("path/to/compress/5.log", {st_mode=S_IFREG|0644, st_size=980, ...}) = 0 stat64("path/to/compress/5.log", {st_mode=S_IFREG|0644, st_size=980, ...}) = 0 open("path/to/compress/5.log", O_RDONLY|O_LARGEFILE) = 5
long delay in (win)rar before adding files
1,448,949,666,000
I created a custom action in Thunar for extracting RAR archives with unar: unar %N It works, but it doesn’t inform me when it’s done. Is it possible to show some kind of indicator (e.g., a progress bar) while it’s extracting? Or a notification as soon as it’s finished?
You could always run the command in a terminal. Your notification is when the terminal closes itself :). It will also show whatever progress / activity indicator is provided by the unar command. gnome-terminal -x unar -- %N I have not tested whether xfce4-terminal accepts the -x option. xterm -e unar -- %N urxvt should also accept the -e option. Apologies in advance for any eyestrain due to running xterm with its default font size. gnome-terminal also has a -e option. With gnome-terminal, the option takes a single command argument, and splits it based on spaces. E.g. gnome-terminal -e "sleep 1". We can't use this because filenames could also contain spaces. With xterm, -e can actually behave either way, depending on how many arguments you pass. So the behaviour of gnome-terminal is less magic and probably nicer, provided you don't mind that gnome-terminal --help fails to document either option.
Inform me when extraction of RAR archive (with unar via Thunar custom action) is finished
1,448,949,666,000
The file test.rar can be opened with 7z e test.rar. When double-clicking on test.rar with Xarchiver, an error occurs as follows: How can I add support for RAR files to Xarchiver?
sudo apt-get install rar unrar Once installed, right-click on the rar file.
How to open RAR file with Xarchiver?
1,448,949,666,000
I have only enabled main component of Debian repository (Debian 8, Jessie) yet file-roller (Archive Manager) can extract a RAR compressed file. I have the following programs installed: file-roller, p7zip-full but I do not have these programs on my machine, p7zip, p7zip-rar, rar, unrar, unrar-free. What is the backend that file-roller is using?
Posting don's comment as a potential Answer: It's using unar (from unarchiver) as of v 3.6
Why can file-roller extract rar files in Debian 8?
1,448,949,666,000
Prior to reinstalling VPS and upgrading from Debian 6 to Debian 8, I have archived /etc/ folder. Now, I am trying to extract and overwrite everything, but somewhere in the process I get this message Extracting /etc/rc2.d/K01sendmail OK Extracting /etc/rc2.d/S03maldet OK Extracting /etc/rc2.d/S01rsyslog OK Extracting /etc/login.defs OK Extracting /etc/ucf.conf OK Extracting /etc/memstat.conf OK Extracting /etc/mtab OK Cannot close the file /etc/mtab Program aborted What is this /etc/mtab and how can I prevent it from aborting my /etc/ folder overwriting? I am doing rar x to extract over the current /etc. It looks like something broke down because it cannot boot anymore
Ouch, are you really using rar? I don't think rar properly stores symbolic links, ownership and permissions. In /etc, that would break many, many things. /etc/mtab is just one that happens to be a symbolic link to a read-only file, so you got an error for this one — but many other symbolic links were saved as regular files, and while extracting them from the backup succeeded, the end result is not a valid system. The worst problems would be from the permissions, though — you can probably still boot with symbolic links replaced by their restored content (but then you'd run into problems whenever you install software), but not with broken permissions or ownership. Use a native Unix tool such as tar, cpio or pax to back up system directories. Even then, beware that some things won't work if you blithely extract a backup of /etc from a different installation on Debian, because some services use dynamically-assigned user and group IDs; when you restore /etc/passwd and /etc/group from a different backup, that will introduce an inconsistency between /etc and permissions elsewhere. I'm not sure if there's a good solution to that one if you just want to restore /etc as a whole. You can't restore your rar backup. Reinstall the system, then extract the rar archive in a different directory. Figure out what files you modified on the original system (based on the dates, maybe) and copy only those. Don't copy any file you don't understand. In the user and group databases (/etc/passwd, /etc/group, /etc/shadow, /etc/gshadow), copy only the entries for human users; let Debian manage the system users. Going forward, a much better way to back up /etc independently is to put it under version control. Etckeeper is great at that. Run etckeeper init after installation. When you make some change in /etc, run etckeeper commit and enter a message to describe your change (your future self will thank you). Push a copy of the repository to your backup area.
To restore a backup, initialize etckeeper on the new system, add the backup as an external repository and merge it into the local branch.
Cannot close the file /etc/mtab when restoring /etc with rar
1,448,949,666,000
I try to install rar using yaourt, problem is I get +5k results and can't filter out the package containing rar. Neither |grep helps nor |head, the first hundreds of lines are lib'rar'ies. What could I do to get around this?
You use an AUR helper that actually works and is not fundamentally insecure: cower -s '^rar$' aur/rar 5.3.0-1 (668, 6.91) A command-line port of the rar compression utility
finding rar in yaourt
1,448,949,666,000
Here is an example of what I am trying to do: I have a folder (called 'dir') that contains the following:
dir
|_sub1.rar
|_sub2.rar
|_sub3.rar
I will cd to dir and want to run a command that will extract all .rar files and place the contents into a folder with the same name. sub1.rar should be extracted to sub1, sub2.rar should be extracted to sub2, and so on.
set -e cd dir for rar in ./*.rar do [ -f "$rar" ] || continue dir=${rar%.rar} mkdir "$dir" ( cd "./$dir" unrar x "../$rar" ) # maybe rm "$rar" done Nothing clever here. Assumes you have an unrar command that takes an x option to do the eXtract. Just run a loop over the things matching ./*.rar, make sure it is a file, make a directory, then use a subshell to change directory and extract it.
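The directory name in that loop comes from ${rar%.rar}, the shortest-suffix-removal parameter expansion; in isolation it behaves like this:

```shell
rar=./sub1.rar
dir=${rar%.rar}      # strip the shortest trailing match of ".rar"
echo "$dir"
# → ./sub1
```

Because % removes only a trailing match, a name like sub1.rar.rar would become sub1.rar, which is usually what you want for nested extensions.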
Unrar all .rar files in a directory to a folder with the same name
1,448,949,666,000
unrar used to be available in the EPEL repository, but now it is gone. I noticed that CERT Forensics Tools has it now, so I installed it; unrar e [my file] works, but Archive Manager (GUI) doesn't work with rar files. I also tried unar as this article suggested. Same issue. Any clue how to get it to work with Archive Manager? Thanks!
Download rar from here. Then do: tar xzvf /pathtofile/rarlinux-your_version.tar.gz ln -s /pathtofile/rar/rar /usr/bin/rar ln -s /pathtofile/rar/unrar /usr/bin/unrar The command to decompress with unrar is: unrar x filename.part1.rar or rar rar x filename.part1.rar Make sure all the files are in the current directory. Sample output: Extracting from myfile1.splitted.r36 ... myfile1 Extracting from myfile1.splitted.r37 ... myfile1 Extracting from myfile1.splitted.r38 ... myfile1 Extracting from myfile1.splitted.r39 ... myfile1 Extracting from myfile1.splitted.r40 ... myfile1 OK All OK Or use rpmfusion: rpmfusion can be found here. rpmfusion configuration Centos RHEL 7 or compatible like CentOS: sudo yum localinstall --nogpgcheck https://mirrors.rpmfusion.org/free/el/rpmfusion-free-release-7.noarch.rpm https://mirrors.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-7.noarch.rpm
Extract Rar File on CentOS 7?
1,448,949,666,000
I am trying to extract a multi-part rar archive. When I extract the very first file of the archive, several different folders get created, and some of them contain no files at all. I viewed all the rar archive contents with: unrar l filename_n.rar, and for the very last one of them, the output was something like the following lines:
Attributes         Size    Date     Time  Name
----------- ---------  ---------- -----  ----
*  ..A....   38007775  2017-06-12 02:08  32 xxx-xxxxx-xx
*  ..A....   27291830  2017-06-12 02:08  33 xxx-xxxxx-xx
*  ..A....        519  2017-06-12 02:08  33 xxx-xxxxx-xx
*  ..A....   45289788  2017-06-12 02:08  33 xxx-xxxxx-xx
   ...D...          0  1979-11-30 00:00  27 xxx-xxxxx-xx
   ...D...          0  1979-11-30 00:00  27 xxx-xxxxx-xx
   ...D...          0  1979-11-30 00:00  27 xxx-xxxxx-xx
   ...D...          0  1979-11-30 00:00  27 xxx-xxxxx-xx
   ...D...          0  1979-11-30 00:00  31 xxx-xxxxx-xx
   ...D...          0  1979-11-30 00:00  33 xxx-xxxxx-xx
I want to know what each of the attributes (A, D) means in the output, why some of the entries have a size of 0 even though they have names, and why they don't result in any file after extraction.
Those are FAT-style attributes. A means “archive”, and is used to track files that need to be backed up. D means directory, which also explains why the entries have a zero size. As far as the strange dates and times go, as I understand the RAR technote, directory entries don’t necessarily even have an associated timestamp; this might be what’s going on here (November 30, 1979 at midnight looks like a default timestamp to me).
RAR archive files yield incomplete extraction results
1,448,949,666,000
I need a way to create RAR files with the old-style volume naming, like *.r00 ... Since version 5, using the -vn switch fails with: -vn switch is not supported for RAR 5.x archive format. ...
According to the rar command's help page: vn Use the old style volume naming scheme Thus, if you want to create volumes with the old naming scheme you need to use the following: rar a -vn -v<volume size> archive [files ...] The -vn switch is not working on version 5.+, but you can force using version 4 with the -ma4 switch: rar a -ma4 -vn -v<volume size> archive [files ...]  Related: Switch -VN - use the old style volume naming scheme Switch -MA[4|5] - specify a version of archiving format
Create Rarfiles with old style?
1,448,949,666,000
The unrar package is installed on my Arch Linux system. I want to create a rar file, so I tried to install the rar package from the Arch AUR, but the repo is currently unavailable. So, I wonder if it is possible to create a rar file with the help of the 'unrar' package?
You can't create a rar archive with unrar; unrar can only be used to unpack rar files. You need the full rar binary to create a rar file on Linux.
Create a rar file with the help of unrar package
1,448,949,666,000
I'm trying to install the unrar package for my cPanel. Some people suggested that I need the package unrar-4.20-2.mga3.nonfree.x86_64.rpm. So when I run rpm -ivh unrar-4.20-2.mga3.nonfree.x86_64.rpm I get this error: [root /]# rpm -ivh unrar-4.20-2.mga3.nonfree.x86_64.rpm warning: unrar-4.20-2.mga3.nonfree.x86_64.rpm: Header V3 RSA/SHA1 signature: NOKEY, key ID 80420f66 error: Failed dependencies: libc.so.6(GLIBC_2.14)(64bit) is needed by unrar-4.20-2.mga3.nonfree.x86_64 libc.so.6(GLIBC_2.7)(64bit) is needed by unrar-4.20-2.mga3.nonfree.x86_64 rpmlib(PayloadIsXz) <= 5.2-1 is needed by unrar-4.20-2.mga3.nonfree.x86_64 [root /]# I saw posts suggesting to install EPEL, and still the problem exists. I didn't see anything relevant for installing EPEL, but I still tried and nothing worked. Any suggestions will be much appreciated. Note: I have CentOS 5.7.
Install rpmforge from here: http://wiki.centos.org/AdditionalResources/Repositories/RPMForge#head-5aabf02717d5b6b12d47edbc5811404998926a1b . Then run yum update && yum install unrar.
unrar for centos 5.7 needs unrar-4.20-2.mga3.nonfree.x86_64.rpm which causes an error
1,660,407,829,000
I am trying to install rar in Kali Linux to create a rar archive file. I have followed articles that say to use this command: sudo apt install rar. However, it returns with: Reading package lists... Done Building dependency tree... Done Reading state information... Done Package rar is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'rar' has no installation candidate I also tried to search for rar with aptitude search rar | grep rar, but I can't find rar. I have also updated Kali Linux with sudo apt update EDIT: rar can now be installed on Kali Linux. See https://packages.debian.org/sid/rar.
You can download the latest version of rar from the official site and install it.
How to install rar in Kali Linux
1,371,844,630,000
Say we have a named pipe called fifo, and we're reading and writing to it from two different shells. Consider these two examples:

shell 1$ echo foo > fifo
<hangs>
shell 2$ cat fifo
foo
shell 1$ echo bar > fifo
<hangs>

shell 1$ cat > fifo
<typing> foo
<hangs>
shell 2$ cat fifo
foo
^C
shell 1$ <typing> bar
<exits>

I can't wrap my head around what happens in these examples, and in particular why trying to write 'bar' to the pipe in the first example results in a blocking call, whereas in the second example it triggers a SIGPIPE. I do understand that in the first case, two separate processes write to the pipe, and thus it is opened twice, while in the second case it is only opened once by a single process and written to twice, with the process reading from the pipe being killed in the meantime. What I don't understand is how that affects the behaviour of write. The pipe(7) man page states: If all file descriptors referring to the read end of a pipe have been closed, then a write(2) will cause a SIGPIPE signal to be generated for the calling process. This condition doesn't sound clear to me. A closed file descriptor just ceases to be a file descriptor, right? How does saying "the reading end of the pipe has been closed" differ from "the reading end of the pipe is not open"? I hope my question was clear enough. By the way, if you could suggest pointers for understanding in detail the functioning of Unix pipes in relationship to open, close, read and write operations, I'd greatly appreciate it.
Your example uses a fifo, not a pipe, so it is subject to fifo(7). pipe(7) also says: A FIFO (short for First In First Out) has a name within the filesystem (created using mkfifo(3)), and is opened using open(2). Any process may open a FIFO, assuming the file permissions allow it. The read end is opened using the O_RDONLY flag; the write end is opened using the O_WRONLY flag. See fifo(7) for further details. Note: although FIFOs have a pathname in the filesystem, I/O on FIFOs does not involve operations on the underlying device (if there is one). I/O on pipes and FIFOs The only difference between pipes and FIFOs is the manner in which they are created and opened. Once these tasks have been accomplished, I/O on pipes and FIFOs has exactly the same semantics. So now from fifo(7): The kernel maintains exactly one pipe object for each FIFO special file that is opened by at least one process. The FIFO must be opened on both ends (reading and writing) before data can be passed. Normally, opening the FIFO blocks until the other end is opened also. So before both ends (here meaning there is at least a reader and a writer) are opened, write blocks as per fifo(7). After both ends have been opened, and then (the) reading end(s) closed, write generates SIGPIPE as per pipe(7). For an example of pipe usage (not a fifo), look at the example section of pipe(2): it involves pipe() (no open(), since pipe() creates the pair already opened), close(), read(), write() and fork() (there's almost always a fork() around when using a pipe). The simplest way to handle SIGPIPE from your own C code, if you don't want it to die when writing to a fifo, would be to call signal(SIGPIPE, SIG_IGN); and handle it by checking for errno EPIPE after each write() instead.
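To see both behaviours from a shell, here is a small sketch (assumptions: a throwaway fifo path from mktemp, and a shell that supports the exec 3> fd syntax; exit status 141 is 128 + SIGPIPE's signal number 13):

```shell
f=$(mktemp -u) && mkfifo "$f"

cat "$f" > /dev/null &          # a reader, so the fifo has both ends
reader=$!
exec 3> "$f"                    # open the write end; this blocks until a reader exists
echo hello >&3                  # succeeds: the read end is open

kill "$reader"                  # now close the only read end
wait "$reader" 2>/dev/null
( echo again >&3 )              # writing with no readers left raises SIGPIPE
echo "writer exit status: $?"   # 141 (= 128 + 13) on most shells

exec 3>&-
rm -f "$f"
```

Swap the kill line for another cat "$f" and the second write would simply block or succeed instead, matching the first example in the question.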
Under what conditions exactly does SIGPIPE happen?
1,371,844,630,000
Solutions that I have come across for replacing the contents of an input file with converted output involve using a temp file or the sponge utility. Stephane Chazelas's answer here indicates another way involving opening the file in read-write mode as below. tr ' ' '\t' < file 1<> file How does this actually work without corrupting the file in question?
This only works because tr does not change the file size. 1<>file opens file as standard output in overwrite mode. (<> is called read/write mode, but since few programs read stdout, it's more useful to focus on what it actually does.) Normally, when you redirect output (>file), the file is opened in "write" mode, which causes it to either be created or emptied. Another common option is >>file, "append" mode, which skips the step where the file is emptied, but puts all output at the end. 1<>file also skips emptying the file, but it puts the write cursor at the beginning of the file. (You need the 1 because <> defaults to redirecting stdin, not stdout). This is only very occasionally useful, since very few utilities are so precise in their modification. Another case would be a search and replace where the replacement is exactly the same length as the original. (A shorter replacement wouldn't work either, because the file is not truncated at the end; if the output is shorter than the original, you'd end up with whatever used to be at the end of the file still being at the end of the file.)
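A quick demonstration of both points (file names and contents here are arbitrary examples):

```shell
# same-length replacement: safe
printf 'a b c\n' > file
tr ' ' '_' < file 1<> file
cat file                  # a_b_c

# shorter output: the old tail is left behind
printf 'hello world\n' > file
printf 'bye\n' 1<> file
cat file                  # prints "bye", then "o world" (leftover end of the old contents)
rm file
```

The second case shows why the trick is only safe when the transformation preserves length: the file is never truncated, so whatever the new output doesn't overwrite stays put.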
Overwriting an input file
1,371,844,630,000
I have an old ext4 disk partition that I have to investigate without disturbing it. So I copied the complete partition to an image file and mounted that image file while continuing my investigation. Now while I do not write to the mounted filesystem, I do have to mount it with read/write access, because one of the programs makes assumptions on what I intend to do, and requires write access, even though I do not intend to write to it. You know the kind of 'smart' programs. Now the problem is that, when mounting an ext4 filesystem read/write, the last mount point is written into the filesystem itself, i.e. the mount command changes my image file, including file access time and file modification time. That is annoying for a lot of other reasons. I cannot find an option in mount(8) nor in ext4(5) to avoid this. Is there another way to mount with read/write access, without the mount command writing to the filesystem?
I agree with @UlrichSchwarz: mount it read-only, then use OverlayFS or UnionFS to create a writable filesystem on top. You can make the writable layer (the bit where the modifications go) disposable or persistent. Either way the changes are not stored on the master filesystem.
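A minimal sketch of that setup (all paths here are placeholders; needs root and a kernel with overlayfs support):

```shell
mkdir -p /mnt/lower /mnt/upper /mnt/work /mnt/merged
mount -o ro,loop disk.img /mnt/lower        # the image itself is never written
mount -t overlay overlay \
      -o lowerdir=/mnt/lower,upperdir=/mnt/upper,workdir=/mnt/work \
      /mnt/merged                           # writable view for the fussy program
```

Point the write-hungry program at /mnt/merged; every modification lands in /mnt/upper, so deleting that directory later discards all changes. For a throwaway layer, upperdir and workdir can live on a tmpfs.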
Can I mount ext4 without writing the last mountpoint to the filesystem?
1,371,844,630,000
I want a process (and all its potential children) to be able to read the filesystem according to my user profile, but I want to restrict that process's write permission to only a set of pre-selected folders (potentially only one). chroot seems to act too broadly: restricting the process to a particular part of the filesystem makes things cumbersome, with the need to mount /bin folders and the like. My process should be able to read the content of the filesystem as any normal process I launch. I could use a docker container and mount a volume, but that seems overkill: need to install docker, create an image, launch the container in it, etc... Is there a way to do something like?: restricted-exec --read-all --write-to /a/particular/path --write-to /another/particular/path my-executable -- --option-to-the-executable Some sort of unveil, but controlled by the calling process and only for write access.
firejail does the job: mkdir -p ~/test && firejail --read-only=/tmp --read-only=~/ --read-write=~/test/ touch ~/test/OK ~/KO /tmp/KO Only ~/test/OK gets created; the writes to ~/KO and /tmp/KO are denied by the sandbox.
Restrict linux process write permission to one folder
1,371,844,630,000
Here are my commands:

mt -f /dev/st0 rewind
dd if=/dev/st0 of=-

As I understand it the first command rewinds my tape in /dev/st0, and the second command writes the contents of /dev/st0 to -. My questions are:

Where is -?
What is this command doing when it writes the data from the tape to -?

The result of the command is:

dd: writing to '-': No space left on device
1234567+0 records in
1234566+0 records out
140000000000 bytes (141 GB) copied, 14500.9 s, 9.8 MB/s

It appears to me I have written the data to something, but I would like to verify where that data was written. Is it just reading the tape? Thanks for the help
It's been a long time since I've used tape. However, here's what I believe is happening mt -f /dev/st0 rewind This rewinds the tape in /dev/st0 ready for writing. Once the device is closed the tape is then automatically rewound because you didn't use the non-rewind device probably called something like /dev/nst0. Obviously in this instance the second part of this operation is effectively a no-op. dd if=/dev/st0 of=- This reads as many blocks of 512 bytes from the tape device /dev/st0 as possible, and writes them to a file called - in your current directory. (Specifically, - is not an alternative name for stdout.) For a tape this can cause a lot of overruns and rewinds as it tries to handle partial reads from the typically larger block size (often 4K or 8K, but can be much larger). At the end of the dd operation the device is closed and the tape will be rewound automatically. Depending on the block size you may want something like this (I've called the output file tape.dat rather than -) dd bs=4K if=/dev/st0 > tape.dat
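You can verify that dd treats of=- as a literal filename rather than stdout (run this in a throwaway directory):

```shell
cd "$(mktemp -d)"
echo hello | dd of=- 2>/dev/null   # dd does not special-case "-"
ls                                 # shows a file literally named "-"
cat ./-                            # hello   ("./-" stops cat from reading stdin)
```

This also explains the "No space left on device" error in the question: the tape contents were being copied into a regular file named - until the filesystem filled up.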
Testing LTO drive with mt and dd
1,371,844,630,000
I was not able to chmod a file in my /dev/sda3 on my Ubuntu12.04 system. This is what I get when I try to see its information: $ mount | grep 'media' /dev/sda2 on /media/sda2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda6 on /media/sda6 type ext4 (rw) /dev/sda3 on /media/sda3 type fuseblk (ro,nosuid,nodev,allow_other,default_permissions,blksize=4096) When I try to change the mode of /media/sda3 to rw, I get error: $ sudo mount -o remount,rw /media/sda3 Remounting is not supported at present. You have to umount volume and then mount it once again. So, I unmount /media/sda3 $ sudo umount /media/sda3 Then when I try to mount it again, I get the same error: $ sudo mount -o remount,rw /media/sda3 Remounting is not supported at present. You have to umount volume and then mount it once again. I guess, I do not know the right commands. I have done a couple of trial-n-error from google search, but no luck so far. And I am afraid to do a blunder. Any help on how to mount the filesystem with read-write permissions will be great. Thanks a lot!
Don't use -o remount. That's only useful for remounting, that is, unmounting and mounting again in one operation which isn't supported in your case. Therefore, you need to unmount just like you did and then run: sudo mount -o rw /media/sda3
Remounting is not supported at present. You have to umount volume and then mount it once again
1,371,844,630,000
I want to delete a few files from my iso image on hard disk. So, I did: ravbholua@ravbholua-Aspire-5315:/media/ravbholua/f34890dd-20d2-4d78-92c9-1de7c0957f00$ sudo mount -o loop check_bholua99.iso /media/iso2 mount: block device /media/ravbholua/f34890dd-20d2-4d78-92c9-1de7c0957f00/check_bholua99.iso is write-protected, mounting read-only ravbholua@ravbholua-Aspire-5315:/media/ravbholua/f34890dd-20d2-4d78-92c9-1de7c0957f00$ cd /media/iso2 ravbholua@ravbholua-Aspire-5315:/media/iso2$ ls | tail DSC00966.JPG DSC00969.JPG DSC00970.JPG DSC00972.JPG DSC00973.JPG DSC00974.JPG DSC00975.JPG DSC00977.JPG DSC00980.JPG DSC00982.JPG ravbholua@ravbholua-Aspire-5315:/media/iso2$ sudo rm DSC00982.JPG rm: cannot remove ‘DSC00982.JPG’: Read-only file system As it failed to delete for the said reason, I tried to remount with the option of rw ravbholua@ravbholua-Aspire-5315:/media/ravbholua/f34890dd-20d2-4d78-92c9-1de7c0957f00$ sudo umount /media/iso2 ravbholua@ravbholua-Aspire-5315:/media/ravbholua/f34890dd-20d2-4d78-92c9-1de7c0957f00$ sudo mount -o loop,rw check_bholua99.iso /media/iso2 mount: block device /media/ravbholua/f34890dd-20d2-4d78-92c9-1de7c0957f00/check_bholua99.iso is write-protected, mounting read-only Why it messages that the block device is write-protected? Please suggest how to mount it rw so that I can edit it. ( I know one way is to copy the files of the iso onto a diff. directory and edit/delete it; then make a new iso image. But this is not what I prefer. )
ISO9660 is a read-only filesystem. It can't be mounted in rw mode because there is no support for that in the filesystem itself. If you want to make a new ISO with a different set of files, you need to make an entirely new ISO with mkisofs or similar utilities.
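A sketch of that workflow (assuming mkisofs, or its genisoimage alias, is installed; paths follow the question's example):

```shell
mkdir /tmp/iso_work
cp -a /media/iso2/. /tmp/iso_work/        # copy out the mounted ISO's contents
rm /tmp/iso_work/DSC00982.JPG             # make the changes
mkisofs -o /tmp/new.iso -R -J -V myDevice /tmp/iso_work
```

-R adds Rock Ridge extensions (Unix permissions and long names), -J adds Joliet for Windows, and -V sets the volume label.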
mounting iso image: message "block device is write protected, mounting read-only"! [duplicate]
1,371,844,630,000
How can I, preferably from the terminal, measure the read and write speeds of /dev/sdx?
The easiest way is to use this command: hdparm Typical usage and output: # hdparm -tT /dev/sda1 /dev/sda1: Timing cached reads: 2408 MB in 2.00 seconds = 1201.84 MB/sec Timing buffered disk reads: 260 MB in 3.03 seconds = 85.95 MB/sec Please be careful with this command and don't play with switches too much unless you know what you are doing as some of them might even kill your drive if not correctly used.
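If hdparm isn't available, dd gives a rough figure (the test file path is arbitrary; conv=fdatasync forces the data to the medium before dd reports its timing):

```shell
# write speed: 64 MiB of zeroes, flushed before the rate is printed
dd if=/dev/zero of=testfile bs=1M count=64 conv=fdatasync
# read speed (caution: the page cache can inflate this right after writing)
dd if=testfile of=/dev/null bs=1M
rm testfile
```

For an honest read figure, drop the page cache first (as root: echo 3 > /proc/sys/vm/drop_caches) or read more data than you have RAM.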
How can the read/write speed of a partition or drive be measured? [duplicate]
1,371,844,630,000
I run Ubuntu 16 Desktop as the host, and in VirtualBox I run Ubuntu 16 Server as a guest, which uses a raw partition on a disk different from the one used by the host. I am searching for a solution that will allow me safe read-write access to the guest's filesystem (or at least to some directory on the guest partition!). I'd like to know about every possibility, even if it sacrifices some ext4 features (security/performance) or results in an actually unsafe FS on the guest side. I am not experienced in the Unix environment, but I guess that it is achievable through proper mounting configuration for the host partition (from fstab) and proper root mounting on the guest side. I have tried mounting on both sides with "defaults" options, but when I create a file from the host it does not show up on the guest FS, although it is read-write accessible from the host! When a file is edited, the change does not actually show up on the guest.
Don't do this... If two operating systems try to access the same raw block device at the same time then you should expect to see data corruption. Even if one of them is read-only, that read-only instance will cache data (e.g. directory contents, file contents) and won't know that the underlying data blocks have changed. At best this may result in perceived corruption inside the OS; at worst this may cause the OS to treat the filesystem as bad. If both OSes have write access to the device then the worst-case scenario is that the filesystem itself gets corrupted. (There are some filesystems that allow multi-server access, but they are not common.) Instead you should have one OS access the block device and then NFS-export it to the other OS, which can then mount the filesystem over the network.
How to get read-write access (safe) to ext4 filesystem used by second OS running from virtualbox
1,371,844,630,000
When using SWAP on an SSD, does the filesystem offer any write-cycle protection? i.e. randomizing or otherwise managing writes to avoid "flash wear-out", since modern SSDs are generally rated in the low thousands of write cycles before wear-out. I had previously assumed that all filesystems (SWAP included) would, though when reading ArchWiki A recommended tweak for SSDs using a swap partition is to reduce the swappiness of the system to some very low value (for example 1), and thus avoiding writes to swap. I noticed that my laptop generally has 100MB-400MB of swap at any given time. Am I accelerating the wear on my SSD by allowing it to go to swap?
Swap is not a filesystem. I don't think OSes take any particular care of how they arrange the swap space. Most filesystems either don't care or are optimized for rotating disk drives in that they try to privilege sequential reads and writes (i.e. avoid fragmenting files too much), which is not relevant on SSD. Modern SSD drives (as opposed to cheap flash media) do their own wear leveling in firmware, so the OS shouldn't need to care about this. When the OS is writing to the same address, the firmware maps each access to different physical blocks, in order to avoid having many erase cycles to the same block. With an SSD drive in a PC or server, as opposed to a flash memory in an embedded device, flash wear-out is usually not something you need to care much about. You may however want to reduce swappiness on SSD compared to HDD. Swappiness is a compromise: it's the parameter that controls whether the kernel prefers to keep file contents or process data in RAM. Higher swappiness means that the kernel is more likely to swap out process data in order to make room for file contents. With many workloads (but not always), a lot of file contents is only ever read, whereas application data is written relatively often, so increasing swappiness increases the proportion of I/O in the write direction. Since SSDs are usually relatively slow to write compared with reading, the optimal swappiness tends to be lower for SSD.
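To inspect and tune that knob (the sysctl name is standard on Linux; the value 1 is just the example from the wiki quote):

```shell
# current value (the usual default is 60; newer kernels accept 0-200)
cat /proc/sys/vm/swappiness
# lower it for the running system (needs root):
#   sysctl vm.swappiness=1
# make it persistent across reboots:
#   echo 'vm.swappiness=1' > /etc/sysctl.d/99-swappiness.conf
```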
SWAP write protection on SSD
1,371,844,630,000
I have a usb drive. It mounts as "ro." When I mount -o "remount,rw" I see this in dmesg: hfsplus: filesystem was not cleanly unmounted, running fsck.hfsplus is recommended. leaving read-only. The verbose flag (mount -v) doesn't tell me anything more. It actually says the mount worked: mount: /dev/sdb1 mounted on /media/ecarroll/myDevice. Running fsck.hfsplus on the block device gets me $ sudo fsck.hfsplus /dev/sdb1 ** /dev/sdb1 ** Checking HFS Plus volume. ** Checking Extents Overflow file. ** Checking Catalog file. ** Checking Catalog hierarchy. ** Checking Extended Attributes file. ** Checking volume bitmap. ** Checking volume information. ** The volume myDevice appears to be OK. I wanted this to be clear in the above, but the -o remount,force also doesn't work. sudo fsck.hfsplus -f /dev/sdb1; sudo mount -o remount,force,rw /dev/sdb1 ; sudo dmesg -c ** /dev/sdb1 ** Checking HFS Plus volume. ** Checking Extents Overflow file. ** Checking Catalog file. ** Checking Catalog hierarchy. ** Checking Extended Attributes file. ** Checking volume bitmap. ** Checking volume information. ** The volume 2819010011 appears to be OK. [97230.751669] hfsplus: filesystem was not cleanly unmounted, running fsck.hfsplus is recommended. leaving read-only. How do I mount this device read-write?
After you get ** The volume myDevice appears to be OK. You can unplug the device, and plug it back in and automount will take care of it with -o rw. I'm unaware of what it takes to get -o rw to work, after mount thinks the device is dirty, without pulling it and plugging it back in.
I can't -o "remount,rw" a usb drive
1,371,844,630,000
Say I have a program being run on multiple computers all on the same network (and all on the same account). Every once in a while the program needs to read/write a file save.dat. The actual file itself isn't important, more so its contents (just 3 separate numbers that need to be kept track of). As soon as the file is updated, the other computers should see that it has updated too, i.e. the program will read it and won't see an old copy. What way could I go about having these programs all access this constantly updating file? Perhaps some website (like pastebin)?
Set up one of the machines as an NFS server and let it serve an NFS share to the others. Let the file live on that shared network filesystem. This is a fairly common solution for e.g. sharing home directories between many client machines from a file server. You will be able to find more information about how to do this on the web, along with tutorials for how to set up NFS servers. This is done slightly differently depending on what flavour of Unix you may be using.
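A minimal sketch of that setup (hostname, network range and paths are placeholders; assumes a Linux server with an NFS server package installed):

```shell
# on the server: export the directory that holds save.dat
echo '/srv/shared 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
# on each client: mount it and treat it like a local path
mount -t nfs server:/srv/shared /mnt/shared
```

If several machines may rewrite save.dat at the same time, have them serialize the update (for example with flock(1) around the write); NFS itself does not make concurrent writes atomic.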
Having multiple computers on network access file?
1,371,844,630,000
I use Ubuntu 17.04, though this problem has persisted through many previous versions. Long ago (a year or two, I think) I started to spot it. Immediately (no more than a couple of minutes) after the system starts up, the system load indicator starts to report writing to the disk, as shown in the next screen shot (sorry for a photo, since I don't know how to make a screenshot with the dropdown window visible) As you can see, the disk write is being reported as 1.5 MB/s (though it is often 1-2 GB/s) and the iotop program reports no writing at all. Then, after five or ten minutes, everything turns back to normal. I tried to search for this, but the advice given in this question seems to yield no result in my case, like the advice to use iotop. Does anyone know what is going on here and should I be worried? Thank you.
Your system is under heavy load (99.24%) from: fstrim (8) - discard unused blocks on a mounted filesystem which is an obvious source of writes to the disk. On the other hand, the fact that a rate goes very high doesn't mean that a lot of data is being written. If you write 100 bytes in 1 nanosecond, you will get a rate of 100 gigabytes per second, but you actually just wrote 100 bytes. That is not exactly what happens in what is being reported, due to averaging and other issues, but it should give you an idea of why a rate can be very high while little is actually written.
Why is the write rate to the disk being up to several GB/s, while no process reports any writing?
1,371,844,630,000
New Linux (Debian) installation. Two hard disks, A and B. On A I have the whole system, on B I just have the lost+found directory. Only root has read/write access to B. How can I create a directory in B from the command line (with root privileges)? With: mkdir /newDir The directory is created in A. The directory /mnt is empty. The directory /media contains: cdrom0 This is the output of mount: sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=8251934,mode=755) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,relatime,size=13206492k,mode=755) /dev/sda2 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) systemd-1 on /proc/sys/fs/binfmt_misc type autofs 
(rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime) /dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=utf8,shortname=mixed,errors=remount-ro) rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=6603248k,mode=700,uid=1000,gid=1000) gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000) /dev/sdb1 on /media/pietrom/2ffc680f-08e5-4a14-bbb7-f8c01fdff532 type ext4 (rw,nosuid,nodev,relatime,data=ordered,uhelper=udisks2)
I understand that your hard disk B is mounted on /media/pietrom/2ffc680f-08e5-4a14-bbb7-f8c01fdff532, so in order to create a new folder on it, do: mkdir /media/pietrom/2ffc680f-08e5-4a14-bbb7-f8c01fdff532/newDir
How to write on a second hard disk?
1,627,133,222,000
How do I tell zypper to reinstall all currently installed packages?
You can reinstall all currently installed packages by this command: zypper in -f $(rpm -q -a --qf '%{NAME} ') Maybe this information will be useful.
How to reinstall all installed packages with zypper
1,627,133,222,000
I'll change my system from 32 bits to 64 bits, and it will be the same distribution I had before, Debian Squeeze, but I do not want to lose the programs I installed, because I do not remember the names of them all. So I would like a command that saves the names of all the programs I installed to a file (not the standard programs that came with the system), and then, on the new system, lets me point at that file and its directory so that everything is installed automatically. One more question: is it possible to do this with programs that were installed manually with dpkg -i package.deb? If so, how can I do that?
Have you tried to use dpkg --get-selections >packages? If you want to exclude some packages, you can edit the output file packages. When you're done, transfer it to the target system and say: dpkg --set-selections <packages And packages will be marked for installation. You'll most likely also need to say aptitude update; aptitude dist-upgrade. The other question: if those packages are i386 architecture packages, and you have multiarch installed, you can install the .debs with the usual dpkg -i package.deb. But it's probably better to investigate on a case-by-case basis and install 64 bit versions of those packages that have them.
How to create a list of installed packages for easy/automatic reinstall after disk is formatted
1,627,133,222,000
I'm intending to replace a NAS's Custom Linux with Arch Linux (details at the bottom), of course wishing to preserve all user data and (due to the SSH-only headless access) attempting to be fool-proof since a mistake may require firmware reinstallation (or even brick the device). So instead of running the installation (downloading the appropriate Arch Linux ARM release and probably using the chroot into LiveCD approach) without appropriate preparations, what must I keep in mind? (Please feel free to answer without restriction to Arch Linux) More precisely: Do I somehow have to bother with figuring out how a specific device's boot process works (e.g. which parts of the bootloader reside on the flash memory and which ones are on the harddisk) or can I rely on the distribution's installer to handle this correctly? How can I determine whether some (possibly proprietary) drivers are used and how can I migrate them into the new setup? Is the RAID configuration safe from accidental deletion? Is there a way to fake the booting process so I can check for correct installation while the original system remains accessible by simply rebooting? E.g. using chroot and kexec somehow? What else should I be aware of? The specific case is that I want to replace the custom Linux from a Buffalo LinkStation Pro Duo (armv5tel architecture, the nas-central description is a bit more helpful here and also provides instructions on how to gain SSH root access) with Arch Linux ARM. But a more general answer may be more helpful for others as well.
With the required skill, and especially knowledge about the installed Linux, it is not worthwhile to replace it anymore. And whatever you do, you probably never want to replace the already installed kernel. However, you can have your Arch Linux relatively easily and fool-proof!

The concept: you install Arch Linux into some directory on your NAS and chroot (man chroot) into it. That way you don't need to replace the NAS Linux. You install and configure your Arch Linux and replace the native Linux's services with Arch Linux services step by step. As your Arch Linux installation gets more complete and powerful, you automate the chrooting procedure, turning off the services provided by the native Linux one by one while automating the starting of services within the chrooted Arch Linux. When you're done, the boot procedure of your NAS works like this: load the kernel and mount the hdds, chroot into Arch Linux, exec /sbin/init in your chrooted environment.

You need to work out the precise steps yourself, because I know neither Arch Linux nor your NAS and its OS. You need to create the target directory into which you want to install Arch Linux; it needs to be on a device with sufficient available writable space (mkdir /suitable/path/archlinux). Then you need to bootstrap your Arch Linux:

cd /suitable/path/archlinux
wget -nd https://raw.githubusercontent.com/tokland/arch-bootstrap/master/arch-bootstrap.sh
bash arch-bootstrap.sh <your-nas-architecture>

Now you have a basic Arch Linux in that path. You can chroot into it along the lines of:

cp /etc/resolv.conf etc/resolv.conf
cp -a /lib/modules/$(uname -r) lib/modules
mount -t proc archproc proc
mount -t sysfs archsys sys
mount -o bind /dev dev
mount -t devpts archdevpts dev/pts
chroot . bin/bash

Then you should source /etc/profile. Now your current shell is in your Arch Linux and you can use it as if you had replaced your native Linux ... which you have, for the scope of your current process.
Obviously you want to install stuff and configure your Arch Linux. When you use your current shell to execute /etc/init.d/ssh start you are actually starting the ssh daemon of your Arch Linux installation. When you're done and you really want to entirely replace your native Linux (services) with Arch Linux, your NAS's native Linux doesn't start any services anymore but executes the chroot procedure above, with the difference that the last line is exec chroot . sbin/init. This is not as complete as a real replacement, but as fool-proof as it gets. And as stated initially, with the knowledge and skill required for this, IMHO (!), a complete replacement is not necessary or worthwhile.
How to safely replace one Linux distribution with another one via SSH?
1,627,133,222,000
I'm having some issues with my system and I would like to reinstall packages to see if that resolves it, but I'm not sure how I'd go about doing this. How do I reinstall all installed packages in Alpine Linux?
You can do this by using a combination of apk info and apk fix: $ apk info | xargs sudo apk fix Be careful however as this may break your system if you do not have enough storage available to reinstall every package.
How do I reinstall all installed packages in Alpine Linux?
1,627,133,222,000
I have a 2 year old laptop with Linux Mint 13 on it. Recently I've been having some problems with it (computer freezing, my settings suddenly disappearing and more), so I've been thinking about installing a new distro. I was recommended Xubuntu and I want to try it. Is it possible to install it in place of my Mint, while keeping my /home directory (it would become my new /home)? I have lots of files in there, including various IDEs for various programming languages (my /home takes about 100 GB), and I'm 100% sure that if I decided to back up everything I would sooner or later realize that I forgot to back up something.
First of all, it is always good to have /home on a different partition, precisely to enable multiple installations to use the same home (you can have them installed simultaneously on different partitions, all of them using the same home). But it's too late for it now. You can always copy everything on a different hard drive (100GB is nothing nowadays). But you can also do what you want. You don't have to erase the entire hard drive to install linux, you can just remove the distro-specific files (/usr, /bin, /sbin, /lib, /var,... everything except home) and then proceed with the installation. However, you have to be careful - installation wizards are usually annoying and want to reformat and repartition your hard drive. You can usually state that you don't want to do that, but ubuntu is the most windows-like distro and there could be problems (I've never installed it for precisely that reason - it wants to be smarter than me and just gets in my way). I'd recommend you make a backup to an external drive just in case. Resizing partitions is a tricky business and I wouldn't recommend it (it's not always possible to do it the way you want). What would I do? I'd just put in a new hard drive and have separate drives for home and system (system is also usually partitioned to separate /boot and sometimes /var from the rest). Edit: after installation, if you don't assign the same user ids as before, the ownership will be messed up and you will have to chown the directory recursively.
Installing different distro without losing /home
1,627,133,222,000
I have accidentally deleted /bin/sh, and I am trying to re-install it. If I type sh in the terminal, it says The program 'sh' can be found in the following packages: * bash * dash If I try to apt-get install bash, I get bash is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 109 not upgraded. So, how am I supposed to get /bin/sh back?
Try sudo ln -s dash /bin/sh. The "dash" package should already have set this symlink in its post-installation routines.
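To see what the fix does before touching the live system, here is the same symlink created and checked in a scratch directory (the real command, as the answer says, is ln -s dash /bin/sh run as root):

```shell
# Demonstrate the /bin/sh -> dash symlink on a stand-in directory, so
# nothing on the live system changes.
bindir=$(mktemp -d)          # stand-in for /bin
touch "$bindir/dash"         # stand-in for the dash binary
ln -sf dash "$bindir/sh"     # the actual repair command: ln -sf dash /bin/sh
readlink "$bindir/sh"        # prints: dash
```

On the real system, readlink /bin/sh should print dash afterwards, matching what the dash package sets up.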
How to reinstall /bin/sh
1,627,133,222,000
Since Windows still does not offer means to skip rewriting the MBR, what can I do before reinstalling Windows to get Grub2 back into the MBR as easy as possible? (I'll also appreciate it if the answer then describes how to restore Grub)
Some people have been able to get NTLDR to chain to grub: http://stringofthoughts.wordpress.com/2009/04/27/booting-linux-from-ntloader-windows-bootloader/ Although in practice it's easier to use a live CD. I usually do something like: mount /dev/sda2 /mnt/somewhere # mount the linux partition chroot /mnt/somewhere bash mount /proc grub-install /dev/sda2
What preparations should I make before reinstalling windows?
1,627,133,222,000
I have done something stupid and messed up my Debian Squeeze installation. Now, the problem is that I have enough data and software installations that I don't want to repeat; is it possible to reinstall the OS and still somehow save and maintain the data on my box? I have still not tinkered with the installation; I am just frustrated with issues and lack of time.
If you don't mind acting like a cowboy (and I have never tried this): Back up your /etc and /home directories (and next time, put your homedir in a separate partition). Do the same with anything else you changed outside those two directories. Do a fresh install. Restore your backups. The system-wide config dir, /etc, should be overwritten by this action. As for home, there's no risk at all restoring, assuming you are going to be logging in as the same username (and assuming that this was the first user you created during both installations).
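A minimal sketch of the backup-and-restore round trip, assuming an external drive (the /media/usbdisk path is hypothetical) and demonstrated here on a scratch tree so it can run anywhere:

```shell
# On the real system (as root), something like:
#   tar -czpf /media/usbdisk/etc-backup.tar.gz  -C / etc
#   tar -czpf /media/usbdisk/home-backup.tar.gz -C / home
# The same round trip, demonstrated on a scratch tree:
root=$(mktemp -d); backup=$(mktemp -d)
mkdir -p "$root/home/user1"
echo 'important data' > "$root/home/user1/notes.txt"
tar -czpf "$backup/home-backup.tar.gz" -C "$root" home   # -p preserves permissions
# After the fresh install, restore (on the real system: tar -xzpf ... -C /):
restore=$(mktemp -d)
tar -xzpf "$backup/home-backup.tar.gz" -C "$restore"
cat "$restore/home/user1/notes.txt"   # prints: important data
```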
Is it possible to reinstall Debian afresh and retain all the customisation and data from existing installation?
1,627,133,222,000
I want to be able to format my OS SDD and format and reinstall without losing any user data, is this possible? I'm currently doing this on Windows 7 and 8 with the help of portable apps and some hacks. Drivers are downloaded automatically on Windows so I don't know how to translate this on Linux. For example, my Firefox and Thunderbird profiles are pointed to my second drive so I just update the path and I'm done. What would be the best approach? I'm very confused about where things are installed, after reading a few articles apparently it's all over the filesystem (from /opt to /usr) so there's no easy way to do this? And of course I'm moving the /home folder to the other disk.
It's actually quite straightforward. You need to know how you use your system and what will grow over time. In general, the simplest is the following: /home = your user directory, as you already mentioned /var = log files go here; mail spool; printer, etc. -> this is good to place on a separate partition /tmp = temporary files (you could do the allocation in RAM for quicker access) swap file = put on your fastest HD and google swap file recommended size for linux (~1.5x RAM) When you get more experience you'll know where on your Linux system you tend to put your files, but optionally you may also want to consider these: /usr/local = this may be a good place if you like to 'make' most of your installs /opt = like /usr/local; also many apps are installing here more and more. Migrate your current data to some other storage area; you can use rsync or cp to copy the files to another storage location. Attach all the hard drives you want to have participate in your file system to the PC. Install the fresh copy of Linux. I have not used all the distributions, but one common theme among the ones I've used is the ability to configure your partitions and mount points during install. If you go with my recommendations above, you'll want to choose 'customize' partitions or mount points when the install gets to the point of formatting your drive. At this point, the wizard should show you a new screen that allows you to create, edit and delete partitions on your hard drives. If it's anything like Fedora, it will have defaulted to only the / partition and the swap partition. Swap is usually defaulted at a sensible value; keep it. Modify your / partition, subtracting from the default size the sizes that you want to assign /home and /var ...
Alternatively, if these will be on another hard drive, then you don't have to modify /; just define the /home and /var and other mount points on the other hard drive. Note that if you don't want to do the optional ones now, you can do them after without losing data. Once you have the partitions/mounts configured like you want, finish the wizard and let the system come up. Copy your backed-up data back to the right places (e.g., user1 data to /home/user1 and so forth); the wizard install will have taken care of mounting the /var file system to the correct place. If you need details outside of the install wizard (i.e., you want to do it yourself), let me know and I can walk you through that. Essentially the wizard is defining your partitions on the hard drives and then updating /etc/fstab (for more, see 'man fstab' and 'man mount') so that your file system mounts correctly at boot time.
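For reference only, a sketch of the /etc/fstab entries such a layout might end up with. The device names and filesystem types here are assumptions; use the values from your own install (blkid shows them):

```
# <device>   <mount point>  <type>  <options>  <dump>  <pass>
/dev/sda1    /              ext4    defaults   0       1
/dev/sdb1    /home          ext4    defaults   0       2
/dev/sdb2    /var           ext4    defaults   0       2
/dev/sda2    none           swap    sw         0       0
```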
Separating OS on a SSD from user settings and apps on HDD
1,627,133,222,000
This is primarily a sanity check before installing the latest openSUSE on my parents’ computer. To make sure that they can continue where they left off, is it a simple case of using the -d and -M flags to reuse the old home dir? useradd -u previous-uid -d old-home -M user-name There aren't any particular gotchas that I need to look out for? (have backups, etc. sorted already).  I am primarily checking this will be fine (or otherwise) in relation to file permissions, as it would be reusing the same UID from a different install. Further to Julie Pelletier’s comment, when looking for config files, the main things I see are Libre Office, kde and backup software installed with zypper, and Mozilla Firefox and Thunderbird installed directly. While this gave me more to think about, it's not my primary focus with this question.
If they have one account, that should be enough, but if each of your parents has their own login, you might want to check that the groups are similarly set up, especially if they are sharing files and using group permissions to read and write in some shared directory. You can add a group with groupadd and use usermod to add such a group to the accounts created by useradd (if you do the groupadd first, you can also specify the group as an option to useradd).
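One way to make sure the numeric IDs match is to read them out of the old installation's passwd file before recreating the accounts. The path, user name, and sample data below are all hypothetical; the executable part is just the awk lookup:

```shell
# Pull the old UID/GID for a given user from a backed-up /etc/passwd.
old_passwd=old-etc-passwd    # e.g. a copy of /etc/passwd saved before the reinstall
printf 'alice:x:1000:1000:Alice:/home/alice:/bin/bash\n' > "$old_passwd"  # sample data
uid=$(awk -F: '$1 == "alice" { print $3 }' "$old_passwd")
gid=$(awk -F: '$1 == "alice" { print $4 }' "$old_passwd")
echo "uid=$uid gid=$gid"     # prints: uid=1000 gid=1000
# Then recreate the account with the same numeric IDs (as root):
#   groupadd -g "$gid" alice
#   useradd  -u "$uid" -g "$gid" -d /home/alice -M alice
```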
When installing an updated Linux version, how do I reuse old home directory? [closed]
1,627,133,222,000
I've had my workstation for a couple of years now and its slowly become a complex beast, taking on many roles for my dev and test work. I have done some research and was thinking of setting up some chroot environments to help keep things contained (web/app server, 32/64bit development env, etc...). I'd like to configure this and migrate from the current setup, so I can make sure it all works before committing to the move. I would then like to reinstall my host OS. (A more recent version of Ubuntu as I'm on an older LTS release. I could reinstall the same version if it complicates things, I just want a clean host, as things have become a mess over time) If I create the chroots on a mounted filesystem, can I reinstall the host OS, reconfigure chroot, mount the chroot directory and things work? Am I oversimplifying things? Any information or links are very much appreciated.
If you install the OS inside a chroot in a directory deep in an existing filesystem, you won't be able to boot it without hassle. It's possible if you work on it, but it's not a good way to get a clean installation. I recommend first making a clean installation in a system partition of its own. Shrink one of your existing partitions to make room. If you need to do that from outside your installed operating system, get SystemRescueCD or GParted Live. After you've done the initial installation, you can start running the new installation in a chroot. When you're confident that it can do what you want, switch over to running your new installation. Mount your old installation somewhere and run your old programs in a chroot if necessary. Mount your data partitions under the new installation. Finally, relinquish the old installation. For most services, you'll have to choose whether to run the service from the chroot and the service from the master installation. Running services in the chroot will be more complicated to arrange. Note that Ubuntu starts services when you install them with dpkg or APT, so if you do package installations in the chroot, you should disable that; see “Services in the chroot” in this answer for how to do that. In your new installation, follow these recommendations to keep things clean: Never manually modify a file under /bin, /sbin, /lib, /usr or /var, except that /usr/local is fair game. Install the etckeeper package. All your changes in /etc will be tracked under version control (Bazaar by default). Changes will be committed automatically every night and every time you run APT by default, but strive to commit your changes manually with a meaningful message. Make all modifications that are personal preferences and not intimately tied to the machine in your home directory, not in /etc. If you need both a 64-bit environment and a 32-bit environment, this guide should help.
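On Debian/Ubuntu, the usual way to stop package installation from starting services inside the chroot is a policy-rc.d script that returns 101 ("action forbidden"); a sketch, with the chroot path as a placeholder:

```shell
# Create /usr/sbin/policy-rc.d inside the chroot so dpkg/APT won't start services.
chroot_dir=${chroot_dir:-$(mktemp -d)}     # replace with your real chroot path
mkdir -p "$chroot_dir/usr/sbin"
cat > "$chroot_dir/usr/sbin/policy-rc.d" <<'EOF'
#!/bin/sh
exit 101
EOF
chmod +x "$chroot_dir/usr/sbin/policy-rc.d"
"$chroot_dir/usr/sbin/policy-rc.d" || echo "policy-rc.d exit code: $?"   # 101
```

Remove the file (or make it exit 0) once you switch to running services from the new installation for real.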
Backing up chroots for host reinstall
1,627,133,222,000
I'm used to keeping all my files in my /home directory after wiping out my root / device and reinstalling distributions. This is pretty handy, as all private files and personal settings are kept after reinstallation of a Linux distribution. Now I noticed that there are ~300k files currently on my /home device. I'm wondering: Should I clean up or even wipe out /home before I install a new distro? Is there anything that could cause problems if I use the /home directory across multiple (different) Linux distributions?
Your home directory is meant to be used for your own files, which means that you definitely can use it with different distributions. Problems can arise if you use different versions of the same software: the older one could break because of incompatible changes in config files, but that should not be the case if the versions are not very distant.
Should I clean up the home directory prior installing a new distribution?
1,627,133,222,000
I have a Debian-based server, but I don't have physical access to it and it doesn't have a DVD drive or similar. I only have root access. Is it possible to format and reinstall Debian just by using the root account? I was thinking of solutions like installing to a separate partition and, after the install, formatting the current partition, or perhaps using VMware and always running it as a VM, although this would impede performance. This is because my Debian server is currently in my home country (at my parents') while I moved to another country. Do you think perhaps I should dump the server altogether and go for a hosted solution? I normally would prefer to keep my server, because hosted solutions normally cost you more than having your own server and paying only for electricity.
I would say the answer is maybe, but I wouldn't do it, and I would STRONGLY recommend you DO NOT ATTEMPT IT. The idea is fairly simple but requires perfect execution, which Murphy's Law will mess up. If your hardware has PXE boot and there is another Linux machine on the network where your server resides, you can set up a network boot environment, wipe the MBR on the primary drive to force a network boot, and reboot, hoping that your network boot configuration is perfect, that there are no issues installing the packages, that they don't ask for any input, and that post-install configuration (such as getting root or some other admin user enabled) works perfectly and everything is happy afterwards. My experience tells me that there is a great chance it will not be so unless you have console access and quite possibly physical access. DO NOT ATTEMPT IT!!! Another approach depends on the hardware you have and your ability to connect to something like Dell's DRAC or HP's iLO, which allows you to mount CDs via the network and boot from them. But again, this requires that you have these cards installed in the server and that your hardware is actually capable of supporting them.
Reinstall Debian
1,627,133,222,000
I uninstalled Samba this way (my OS is Debian 11): sudo rm -f /etc/samba/smb.conf sudo apt purge samba sudo apt install samba Now check for the default Samba configuration file: sudo ls /etc/samba/smb.conf ls: cannot access '/etc/samba/smb.conf': No such file or directory
The package which “owns” smb.conf is samba-common; you need to purge that, and re-install samba (since it will be removed when removing samba-common): sudo apt purge samba-common sudo apt install samba
Why can't get the default configuration file after reinstalling samba?
1,627,133,222,000
I was attempting to update my CentOS 7 system tonight and kept getting an error from python-urllib3. I tracked down the error to a directory that should not have been present. So, I went to remove the offending directory and inadvertently deleted the parent instead. In this case, the parent was /usr/lib/python2.7/site-packages Anyone who has worked with yum for long enough knows that would break yum, so... What to do? The solution I came up with is below, and worked to fix my system. Depending on what modifications were made to your system, you may have to re-do some of those (customized configurations in /etc are overwritten by yum reinstall) but this should work for 99% of cases.
Since rpm does not require python (thank god), we use rpm to find out the full name of every package that either has python in the name, or requires the base python package. # rpm -qa |grep -i python |sort # rpm -q --whatrequires python |sort Once you have the full list of packages, you need to find out where yum downloads them from. # grep -i '\[base\]' /etc/yum.repos.d/* This should give you /etc/yum.repos.d/CentOS-Base.repo Then, you need to visit the mirrorlist page with your web browser # egrep 'mirrorlist.*=(os|updates)' /etc/yum.repos.d/CentOS-Base.repo There should be 2 lines. You'll have to do a tiny bit of modification to them (in a notepad!) before you paste them to the browser: mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra Specifically, you need to change the release and arch, and delete the infra. Mine ended up looking like this when I went to paste it. http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os Once that was done, I was able to wget all of the packages I needed by combining several of the commands above together into 4 one-liner scripts:
# while read line; do wget http://yum.tamu.edu/centos/7.5.1804/updates/x86_64/Packages/${line}.rpm; done < <(while read line; do rpm -ql ${line} |grep -iq /usr/lib/python2.7/site-packages; if [ $? -eq 0 ]; then echo ${line}; fi; done < <(rpm -q --whatrequires python |sort))
# while read line; do wget http://yum.tamu.edu/centos/7.5.1804/os/x86_64/Packages/${line}.rpm; done < <(while read line; do rpm -ql ${line} |grep -iq /usr/lib/python2.7/site-packages; if [ $? -eq 0 ]; then echo ${line}; fi; done < <(rpm -q --whatrequires python |sort))
# while read line; do wget http://yum.tamu.edu/centos/7.5.1804/updates/x86_64/Packages/${line}.rpm; done < <(while read line; do rpm -ql ${line} |grep -iq /usr/lib/python2.7/site-packages; if [ $? -eq 0 ]; then echo ${line}; fi; done < <(rpm -qa |grep -i python |sort))
# while read line; do wget http://yum.tamu.edu/centos/7.5.1804/os/x86_64/Packages/${line}.rpm; done < <(while read line; do rpm -ql ${line} |grep -iq /usr/lib/python2.7/site-packages; if [ $? -eq 0 ]; then echo ${line}; fi; done < <(rpm -qa |grep -i python |sort))
Note, if you copy and paste the package names from the rpm commands above into a text file on your distro, this could be reduced to 2 lines. For example, if you place the package names into /tmp/packagedownload.txt, you could do the following, instead of the above: # while read line; do wget http://yum.tamu.edu/centos/7.5.1804/updates/x86_64/Packages/${line}.rpm; done </tmp/packagedownload.txt # while read line; do wget http://yum.tamu.edu/centos/7.5.1804/os/x86_64/Packages/${line}.rpm; done </tmp/packagedownload.txt Once the rpm files are downloaded with wget, you can simply issue the command below to fix the system and get yum working: # rpm -ivh --force *.rpm Then you can fix any other packages that might still be broken (hopefully none are) by issuing the below: # while read line; do xargs yum -y reinstall $line; done </tmp/packagedownload.txt
How to recover from a major mistake that breaks yum
1,627,133,222,000
I have installed postfix using the ports tree without making modifications to anything. In my main.cf file I can't specify any 'Mysql:/' arguments because postfix does not have mysql support. Now I want to reinstall postfix with mysql support. I have tried the following: make -f Makefile.init makefiles \ 'CCARGS=-DHAS_MYSQL -I/usr/local/mysql/include' \ 'AUXLIBS_MYSQL=-L/usr/local/mysql/lib -lmysqlclient -lz -lm' This command outputs 'make: cannot open Makefile.init.' And when I try to run make with a custom Makefile containing this code: make makefiles \ CCARGS="-DHAS_MYSQL -I/usr/include/mysql \ AUXLIBS="-L/usr/lib/mysql/ -lmysqlclient \ FreeBSD outputs: 'don't know how to make makefiles. Stop' Thanks in advance
Note: If you're looking for the recently released Postfix 3.0 series, you should substitute mail/postfix-current for mail/postfix below. You may manually set configuration options when using the ports tree, but you don't have to. If you installed postfix via pkg, run pkg delete postfix with root privileges. If you installed via local ports tree compilation, do: cd /usr/ports/mail/postfix make deinstall If your ports tree is not located at /usr/ports, substitute the path as necessary. To compile with MySQL support: cd /usr/ports/mail/postfix make config Then select the 'MYSQL' option and any others you'd like, and make install clean If you'd prefer not to use the interactive options, follow the directions here instead. FreeBSD's extensive handbook is one of its best features (which is saying a lot, because there are a lot of good things to find in FreeBSD). I would suggest reading through the chapter on ports. Good luck!
FreeBSD Reinstall postfix with mysql support
1,627,133,222,000
Problem Previously, I had a system dual-booting Linux Mint and Windows 10. Having become increasingly frustrated by some configuration issues that seemed to stem from my initial choice of /home partition, I decided to re-install Mint and fix the problem at the root, so to speak. My intent in doing this was to replace the old Linux Mint partitions without touching any of the Windows stuff. Unfortunately, in doing this, I must have accidentally selected the wrong partition on which to install GRUB. Now, when I boot my computer, Windows doesn't show up as an option in GRUB. The drive partition containing the Windows installation seems to still be there (the C:\ drive), but I must have overwritten something related by mistake. I was insufficiently cautious - I kept a backup of all my files, but did not make a full disk image, so I can't just roll back and try again. Is there a solution that does not involve re-installing Windows from disk? Further context, potentially useful I have two relevant drives - an SSD (nvme0n1) for booting and an HDD (sda) for data.
fdisk -l has the following output (skipping the ram stuff):
Disk /dev/nvme0n1: 238.5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x24419fa1
Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 1026047 1024000 500M 7 HPFS/NTFS/exFAT
/dev/nvme0n1p2 1026048 249968639 248942592 118.7G 7 HPFS/NTFS/exFAT
/dev/nvme0n1p3 249970686 461053951 211083266 100.7G 5 Extended
/dev/nvme0n1p5 249970688 461053951 211083264 100.7G 83 Linux
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E7593B53-7765-4219-8B4C-D029ADEA196E
Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1743808511 1743544320 831.4G Microsoft basic data
/dev/sda3 1743808512 1953523711 209715200 100G Linux filesystem
Partition 1 does not start on physical sector boundary.
Windows' C:\ is on nvme0n1p2. Linux Mint has / on nvme0n1p5. sda2 is an NTFS partition for documents and such, to be shared between the two. sda3 is the /home for the Linux side. I don't know what the other partitions are for; I could guess, but a poor guess is what got me in this situation to begin with.
Well, I kept doing research and found the solution was sudo update-grub. That found the Windows installation. Everything seems to be working so far; leaving this up in case someone else hits the problem.
Accidentally replaced my boot loader, new one isn't picking up old system
1,627,133,222,000
A few days ago, I changed some files in the Windows directory using Debian. After that it would cause a kernel panic in my Windows OS. Is there any way to install a new Windows on the same corrupt disk partition? Please help me.
Insert your Windows disk, do manual partitioning, and choose to install to ONLY that partition. I think you'll have problems with GRUB afterwards, and you will need to repair it with a live CD. In any case, before you do anything, make a backup of your data.
Remove corrupt windows from laptop and install new windows again using linux [closed]
1,627,133,222,000
I'm new to Arch Linux and I want to ask how often I will need to reinstall the whole of Arch Linux. If you're also a Windows user, you'll know (or you might) that you need to remove and reinstall a fresh Windows from time to time to reduce disk usage or for security reasons (like viruses, etc.). So I'm wondering how often you need to do the same for Arch Linux (or another Linux distro)?
No stable operating system requires reinstallation as a matter of general use. I can think of a few reasons one might want to do so, and why they don't apply to a typical Linux distribution: Installed application programs acting erratically. If you keep track of applications through the package manager (pacman in the case of Arch), you can reinstall only the misbehaving application. Though if you bypass the package manager, you might be asking for trouble. Disk layout of system files. Over time, updates happen. This could result in system files being placed randomly across the hard drive, rather than in a single (fast) place. This could be the cause for system slowdown. Many (but not all) Linux installations separate the system files from the user files by partitions, which lessens the impact of this (among other benefits). Now, if you find your system in a broken state, it may be easier to reinstall from scratch. This should be a rare state though.
(How often) Will you need to reinstall Arch Linux [closed]
1,627,133,222,000
A few months ago, I bought a Chromebook CR-48 Mario on eBay and I "modded" it and flashed a new BIOS and installed a different Linux version. Now I want to re-install Chrome OS. Where can I download the .img or .iso for Chrome OS?
You can get most releases at http://getchrome.eu/download.php For official documentation from Google on recovering your device go to: https://support.google.com/chromebook/answer/1080595?hl=en
Where Can I Download Chrome OS?
1,627,133,222,000
I've recently reinstalled my Debian server but forgot to back up the MySQL databases, which were in the folder /var/lib/mysql. I need to recover them and tried to use PhotoRec, but I can't find the MySQL databases there. I have shut down my server for the moment so that the chances of recovery don't deteriorate. Is there any better way to do this, perhaps something with more chance of success? Thank you so much!!
You were correct to power off your system. Your best bet is to boot from a rescue disk, such as SystemRescueCD, and try to recover the files using file recovery utilities. SystemRescueCD comes installed with PhotoRec and TestDisk. The extundelete utility is also worth a try. While it does not come installed on SystemRescueCD, you could install it onto removable media or customize into SystemRescueCD.
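Whichever tool you try from the rescue disk, it's worth working on a copy: image the disk once with dd and point the recovery tools at the image. Device and destination below are assumptions; the invocation is demonstrated on a scratch file so it can run anywhere:

```shell
# On the real system (from the rescue environment), something like:
#   dd if=/dev/sdX of=/mnt/external/disk.img bs=4M conv=noerror,sync status=progress
# The same copy, demonstrated on a scratch file instead of a real disk:
src=$(mktemp); dst=$(mktemp)
printf 'pretend this is /var/lib/mysql\n' > "$src"
dd if="$src" of="$dst" bs=4M status=none
cmp "$src" "$dst" && echo "image matches source"
```

Then run PhotoRec/TestDisk/extundelete against disk.img rather than the live device.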
recover mysql databases after reinstall debian
1,627,133,222,000
I have wheezy installed from a pen drive. Updated things and thought I had got maven and eclipse. But maven projects would not compile. I uninstalled all the maven programs I could find in Add/Remove Software. That was 2 days back. Now I can't remember what I removed, and anyway I fear something is missing. What -all- do I need to install to get maven working with java projects, run from the command line and eclipse? Should I reinstall eclipse/STS? What I tried: downloaded libmaven-install-plugin-java_2.3-4_all.deb Tried to install: dpkg -i libmaven-install-plugin-java_2.3-4_all.deb Output: Selecting previously unselected package libmaven-install-plugin-java. (Reading database ... 133826 files and directories currently installed.) Unpacking libmaven-install-plugin-java (from .../libmaven-install-plugin-java_2.3-4_all.deb) ... dpkg: dependency problems prevent configuration of libmaven-install-plugin-java: libmaven-install-plugin-java depends on libmaven2-core-java; however: Package libmaven2-core-java is not installed. libmaven-install-plugin-java depends on libplexus-digest-java; however: Package libplexus-digest-java is not installed. dpkg: error processing libmaven-install-plugin-java (--install): dependency problems - leaving unconfigured Errors were encountered while processing: libmaven-install-plugin-java
Do not manually download packages and try to install them with dpkg. Instead, use apt-get or aptitude that will figure out the necessary dependencies and download and install the correct packages for you, for example: aptitude install maven I don't know anything about Maven, so I don't know if this is the correct package and if it's really the one your missing. You will have to figure that out by yourself: http://packages.debian.org/search?keywords=maven Now can't remember what I removed Look into /var/log/dpkg.log, /var/log/aptitude.log and /var/log/apt/*. One of these log files probably contains information on what packages you removed.
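To see which packages were removed, a grep over dpkg's log is usually enough. The sample log file below is fabricated for illustration; on the real system you'd grep /var/log/dpkg.log directly:

```shell
# Removals appear in dpkg.log as: DATE TIME remove PKG VERSION <none>
log=sample-dpkg.log
cat > "$log" <<'EOF'
2013-06-01 10:00:00 install maven 3.0.4-1 <none>
2013-06-03 11:30:00 remove libmaven2-core-java 2.2.1-8 <none>
2013-06-03 11:30:02 remove libplexus-digest-java 1.0-1 <none>
EOF
grep ' remove ' "$log" | awk '{ print $4 }'
# prints the removed package names:
#   libmaven2-core-java
#   libplexus-digest-java
```

You can then feed those names back to aptitude install to restore them.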
wheezy maven install
1,627,133,222,000
I recently wiped an old Ubuntu server of mine and installed Debian, but I forgot to back up a very important file. I know there is a chance that the computer has already written over the file, but what can I do to look for it?
The first thing is to not use the system at all. Everything you do increases the chances that you'll overwrite a part of the file. Depending on how important it is, I would pull the plug and image the hard drive first (e.g., with dd). If nothing else, use a live CD to look for the file. If it's very important, you could send it in to a recovery service. The technique varies depending on how you wiped the old system, the file system, and perhaps disk media. If you just deleted everything and reinstalled, you could try something like extundelete. If you deleted the partition, you'll need to scan the disk/image file for files and pick through the matches. If it's a text file, you could even just grep through the image file.
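If the lost file was plain text, grepping the raw image for a phrase you remember is often enough to locate it. The image name is hypothetical; the behaviour is demonstrated here on a scratch "image" containing binary padding:

```shell
# On the real image: grep -a -b 'unique phrase from the file' disk.img
# -a treats the binary image as text, -b prints the byte offset of each matching line.
img=$(mktemp)
{ head -c 100 /dev/zero; printf '\n'; printf 'my very important note\n'; } > "$img"
grep -a -b 'very important' "$img"
# prints: 101:my very important note
```

With the offset in hand, dd with skip= can carve out the surrounding region for inspection.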
Recover file from previous installation? [duplicate]
1,588,501,819,000
I wish to run a Python script that I have locally on disk on a remote machine. I used to run bash scripts like this: cat script.sh | ssh user@machine but I do not know how to do the same for a Python script.
As others have said, pipe it into ssh. But what you will want to do is give the proper arguments. You will want to add -u to get the output back from ssh properly. And want to add - to handle the output and later arguments. ssh user@host python -u - < script.py If you want to give command line arguments, add them after the -. ssh user@host python -u - --opt arg1 arg2 < script.py
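You can see the argument handling locally without any ssh involved, since the remote side is just python reading the script on stdin (python3 assumed here):

```shell
# Everything after the lone "-" becomes sys.argv[1:] inside the script.
printf 'import sys\nprint(sys.argv[1:])\n' | python3 -u - --opt arg1 arg2
# prints: ['--opt', 'arg1', 'arg2']
```

Over ssh, the same arguments land in the script the same way.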
Run local python script on remote machine
1,588,501,819,000
I have a service which is sporadically publishing content in a certain server-side directory via rsync. When this happens I would like to trigger the execution of a server-side procedure. Thanks to the inotifywait command it is fairly easy to monitor a file or directory for changes. I would like however to be notified only once for every burst of modifications, since the post-upload procedure is heavy, and don't want to execute it for each modified file. It should not be a huge effort to come up with some hack based on the event timestamp… I believe however this is a quite common problem. I was not able to find anything useful though. Is there some clever command which can figure out a burst? I was thinking of something I can use in this way: inotifywait -m "$dir" $opts | detect_burst --execute "$post_upload"
Drawing on your own answer, if you want to use the shell read you could take advantage of the -t timeout option, which sets the return code to >128 if there is a timeout. E.g. your burst script can become, loosely:

    interval=$1; shift
    while :
    do
        if read -t $interval
        then
            echo "$REPLY"               # not timeout
        else
            [ $? -lt 128 ] && exit      # eof
            "$@"
            read || exit                # blocking read, infinite timeout
            echo "$REPLY"
        fi
    done

You may want to start with an initial blocking read to avoid detecting an end of burst at the start.
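The timeout behaviour the script relies on can be checked in isolation. Here is a small self-contained demo (bash-specific, since read -t is a bash extension, not POSIX):

```shell
bash -c '
  { echo line1; sleep 2; } | {
    # data is already available: read succeeds immediately
    read -t 1 first && echo "got: $first"
    # no more data arrives within 1 second: read times out with status > 128
    read -t 1 second || { rc=$?; [ "$rc" -gt 128 ] && echo "timed out"; }
  }
'
```

The first read returns at once with `line1`; the second waits one second, hits the timeout, and returns a status above 128 (as opposed to EOF, where the status is between 1 and 128), which is exactly the distinction the burst script exploits.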
Monitor a burst of events with inotifywait
1,588,501,819,000
I'm an amateur radio operator, and I use a software package called WSJT-X to connect my computer to my radio to operate it using what we call "digital modes". This works by sending audio signals from the speaker output of a USB sound card to my radio, and reading audio signals from the microphone input on that same card. Because I am lazy, and my nice TV with the nice speakers and comfy couch is upstairs, but my computer and radio are downstairs, I'd like to run WSJT-X from my laptop upstairs. I know that it's possible to use PulseAudio as a remote audio sink, but is it possible to use it as a remote source as well?
Yes, you can (this is an example of something you can do): How can I use PulseAudio to share a single LINE-IN/MIC jack on the entire LAN?

On the sender side, simply load the RTP sender module:

    load-module module-rtp-send

On the receiver sides, create an RTP source:

    load-module module-null-sink sink_name=rtp
    load-module module-rtp-recv sink=rtp
    set-default-source rtp_monitor

Now the audio data will be available from the default source rtp_monitor. This can help you with the setup. You will have to add both the input sink and the output one. You might also want to read How to set up PulseAudio remote properly and securely?.
PulseAudio as remote source *and* sink?
1,588,501,819,000
My office has one default gateway and behind that is a local network with locally assigned IP addresses for all computers, including mine. I have admin rights on my Ubuntu office PC, and it is essential that I can access the computer during weekends through SSH. At the office I do not have a public IP, but I always get the same local IP from DHCP. I'm free to set up any software I like on my PC, although I cannot set up port forwarding on the main firewall. I get a public IP on my home computer, which also runs Linux. Please note I cannot install TeamViewer-like software. How can I solve my problem?
It's easy:

1. [execute from the office machine] Set up a connection Office -> Home (as Home has a public IP). This sets up a reverse tunnel from your office machine to home:

    ssh -CNR 19999:localhost:22 homeuser@home

2. [execute from the home machine] Connect to your office from home. This uses the tunnel from step 1; note that the reverse-forwarded port listens only on the home machine's loopback interface by default, hence localhost:

    ssh -p 19999 officeuser@localhost

Please ensure that ssh tunneling is not against your company policies, because sometimes you can get fired for such a connection scheme (e.g. my employer would fire me for that).

P.S. In the first step you may want to use autossh or something like that, so your tunnel connection will be automatically restored in case of an unstable network.
SSH PC at office in local network from home
1,588,501,819,000
Is it possible to transport a whole device, as in a /dev entry, over TCP? I'm talking about transporting e.g. a joystick over TCP, or a mouse, an RS-232 port, a framebuffer device, a sound card, disks, etc. I'm mostly interested in input devices (keyboards, joysticks, tablets, mice, etc.) in a more generic fashion than specialized software for remote mice/keyboards.
As long as those are USB devices, what you're looking for has been possible for several years with USB/IP, which has been part of the mainline kernel since Linux 3.17. See the usbip package on Debian-like systems. You may even have Windows clients (i.e. accessing USB devices plugged into a Linux server). As for block devices, Linux has offered Network Block Device support for even longer.
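A rough outline of a USB/IP session, assuming the usbip package is installed on both ends (the host name and bus ID below are placeholders, and this needs real hardware plus the relevant kernel modules, so treat it as a sketch rather than something to paste verbatim):

```shell
# on the server that has the device plugged in
sudo modprobe usbip_host
sudo usbipd -D                        # start the usbip daemon
usbip list -l                         # find the bus ID of the device, e.g. 1-1
sudo usbip bind -b 1-1                # export it over the network

# on the client
sudo modprobe vhci-hcd
usbip list -r server.example.com      # see what the server exports
sudo usbip attach -r server.example.com -b 1-1
# the device now appears in lsusb / /dev on the client as if plugged in locally
```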
Is it possible to transport any device over TCP?
1,588,501,819,000
I need to enable remote execution of a shell script on a server, but deny all other access. The script is a toggle that accepts a single parameter (on/off/status). This comes close to answering my question, but while it works to run scripts remotely, I still can't pass arguments. Is there a way to do this without making a new shell account for each possible argument? Basically, I want to be able to run on my remote server:

    myclient$ ssh remotecontrol@myserver on
    myclient$ ssh remotecontrol@myserver off
    myclient$ ssh remotecontrol@myserver status

And have this correspond to:

    myserver$ ./myscript.sh on
    myserver$ ./myscript.sh off
    myserver$ ./myscript.sh status

Arcege's response in the linked article could do this, but it doesn't work as a one-liner in a shell script.

SOLUTION DETAILS:

    # useradd -Urms /home/remotecontrol/shell.sh remotecontrol

Added the following line to ~remotecontrol/.ssh/authorized_keys to disable remote login:

    no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [...public key...] remoteuser@remotehost

Added the following shell script as ~remotecontrol/shell.sh:

    #!/usr/bin/env bash
    case "$2" in
        on)     echo 'Command: on' ;;
        off)    echo 'Command: off' ;;
        status) echo 'Command: status' ;;
        *)      echo 'Invalid command' ;;
    esac
If you use a custom shell as suggested by Arcege and 2bc, then that shell will receive the command which the user intends to execute as an argument, because the shell is invoked like this:

    shellname -c the_original_command

So ignore the -c (that's your $1) and find the command in $2. For example:

    #!/bin/sh
    case "$2" in
        on)  do something ;;
        off) do something else ;;
        *)   echo "wrong command name!" >&2
             exit 1
    esac

If you use a forced command as suggested by Coren, then you will find the command that the user intended to invoke in the environment variable $SSH_ORIGINAL_COMMAND. You can use a script similar to the one above.
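How the $2 dispatch behaves can be tried out locally, by invoking a stand-in script the same way sshd invokes a login shell (the path and messages here are made up for the demo):

```shell
# create a toy restricted shell
cat > /tmp/restricted.sh <<'EOF'
#!/bin/sh
# sshd runs the login shell as: shell -c "command", so $1 is "-c"
case "$2" in
    on)     echo 'Command: on' ;;
    off)    echo 'Command: off' ;;
    status) echo 'Command: status' ;;
    *)      echo 'Invalid command' >&2; exit 1 ;;
esac
EOF
chmod +x /tmp/restricted.sh

# simulate what sshd would do for: ssh remotecontrol@myserver status
/tmp/restricted.sh -c status
```

The last line prints `Command: status`; any other argument falls through to the error branch and exits nonzero, which is what makes this safe as a login shell.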
Restricted ssh remote execution with arguments
1,588,501,819,000
I started an fsck locally from tty1 on my Linux server and it seems to take forever. Had I thought about this beforehand, I would have run fsck inside a screen session. Is there any way to monitor tty1 via SSH, or to see the output from the running fsck process? I don't need to interact, just see how it's going.
If tty1 is the first virtual console on a Linux system, you can view its contents via /dev/vcs1: cat /dev/vcs1 (as root). (Thanks to Sato Katsura for pointing out that this is Linux-specific!)
Monitor/spy on tty1 or see the output from tty1 via SSH? [duplicate]
1,588,501,819,000
From time to time I want to use vim as scratch pad for commands that I would like to send to a command line shell like psql (Postgres), ghci (Haskell programming language), octave ('calculator'), gnuplot (plot) etc. The advantages would be that you could put comments next to command lines, directly document your session, incrementally develop command lines, test examples ad-hoc in manuals etc. Pro features I would like to use: send a selection to a shell, send e.g. the next 10 lines to a shell, display the output of a shell command into a vim output buffer, into a vim yank-register, directly insert it etc. There should be some support of a shell-session concept, i.e. the shell should not be started for each command from scratch. I could live with a kind of remote controlled xterm which I would put side by side to a vim window.
Try vim-slime, an environment inspired by Emacs's SLIME mode. It sends the contents of Vim to a screen or tmux session. In the future you can probably also use Xiki, but for now its Vim support is incomplete.
How to configure vim to interact with interactive command line shells?
1,588,501,819,000
I want to replace TeamViewer with a FOSS solution. I need to support some remote computers. I have a working SSH tunnel set up between two computers using a middleman server like this: Kubuntu_laptop--->nat_fw--->Debian_Server<--nat_fw<--Kubuntu_desktop This SSH tunnel is working now. Next I want to connect to the desktop on "Kubuntu_desktop" from "Kubuntu_laptop" using the SSH tunnel. Regarding the connection for this leg: Debian_Server<--nat_fw<--Kubuntu_desktop Here is how it is established: autossh -M 5234 -N -f -R 1234:localhost:22 [email protected] -p 22 I cannot change the existing monitoring port (5234) or the remote (- R) port number (1234 in this example). Can vnc tunnel over this existing SSH connection? UPDATE: the answer is no; I need to set up a new SSH tunnel for use with vnc as described here. Regarding the connection for this leg: Kubuntu_laptop--->nat_fw--->Debian_Server I can use any SSH parameters required. I cannot open up any ports on the routers/firewalls. x11vnc server was recommended to me, so I'm testing with that. It is running on the desktop and listening on port 5900. However, I did not use any command line options when starting x11vnc, so it probably isn't configured correctly yet. Will vnc work over this existing SSH connection? Notice that there are no ports 5900 defined. And note that I cannot change the port number for the -R option as I mentioned above. I have a lot of questions about how to get this working, but one is whether vnc can listen on the existing port (-R 1234 in the example above). And if so, can I still ssh into that box as I do now? 
Here's what I tried so far. On the remote desktop (where the x11vnc server is installed):

    tester@Kubuntu_desktop:~> autossh -M 5234 -i ~/.ssh/my_id_rsa -fNR 1234:localhost:5901 [email protected]

Make sure the x11vnc server is running on port 5901:

    tester@Kubuntu_desktop:~> x11vnc -autoport 5901

On my laptop:

    sudo ssh -NL 5901:localhost:1234 -i ~/.ssh/admin_id_rsa [email protected]

Then connect a local VNC client to localhost port 5901: open KRDC on Kubuntu_laptop and connect to (vnc) localhost:5901. I'm getting a failed connection: server not found.
It sounds like you currently have a default ssh connection between the laptop and server:

    Kubuntu_laptop--->nat_fw--->Debian_Server

Modify the parameters of that ssh connection so you have:

    -fNL [localIP:]localPort:remoteIP:remotePort

For example:

    -fNL 5900:localhost:1234

If your laptop used VNC on the default port of 5900, then you would tell your laptop to VNC to localhost, which would then send the VNC traffic on port 5900 to the server on port 1234.

Next you need to catch the traffic arriving on port 1234 server-side and forward that to the desktop:

    Debian_Server<--nat_fw<--Kubuntu_desktop

Modify the parameters of the desktop ssh connection to include:

    -fNR [remoteIP:]remotePort:localIP:localPort

For example:

    -fNR 1234:localhost:5900

All traffic sent to port 1234 on the localhost of the server will now be transported to the desktop and arrive on port 5900, where the VNC server is hopefully listening.

Change port 5900 to be appropriate for the protocol you are using. It could be 3389 for RDP, or 5901 for VNC since 5900 might be in use. Also, I just picked port 1234 randomly for use on the server.

Some notes in response to your updated question:

- The default port for ssh is 22, so the -p 22 is redundant, since it overrides the default and sets it to 22.
- The settings that look like localPort:remoteIP:remotePort have nothing to do with the port that ssh is using for the tunnel, which is still 22 unless you override it on the client with -p and override the port on the ssh server as well. So all of the previously mentioned ssh commands are using port 22, and you can confirm this by looking at your listening and established network connections. You will not need to open any additional ports on a firewall. The previous commands were correct.
- Based on what you added in the update, the command for the desktop should be:

    autossh -M 5234 -fNR 1234:localhost:5900 [email protected]

- Sorry, I have no suggestions as far as a VNC client is concerned.
You'll have to open a separate question for that, however I'm guessing it will be down-voted since it is an opinion question.
Remote support: routing RDP over ssh tunnel?
1,588,501,819,000
I needed to replace a Gyration remote control (which was for controlling MythTV) and it is no longer made. So I got a Keyspan TSBX-2404. It works in the sense that it communicates with the box (Fedora 15) but unlike the Gyration it sends multiple keypresses whenever I hit a button. For example, in xev if I press a button it will give 4-6 entries. It is controlling MythTV this way too--for example hitting the right arrow key will scroll two stops in a horizontal list rather than one. This is not acceptable, and I am wondering if there is some way to deal with it. For example, is there a setting in X that will filter out multiple keypresses that happen within a certain amount of time?
What you want to do is turn on BounceKeys. I've never needed that, and I'm not quite sure how you turn it on in Fedora 15, but maybe that will give you something to search for.
Preventing multiple keypresses from RF remote
1,588,501,819,000
I have an infrared remote control which sends RC-5 signals and a computer with an IR receiver. The computer runs Debian 8 and I'm trying to set up LIRC so that I can control the music player daemon (MPD) with the remote. I have installed the lirc package and added a configuration file for RC-5 signals in /etc/lirc/lircd.conf.d/. The daemon seems to be active: $ systemctl status lirc.service ● lirc.service - LSB: Starts LIRC daemon. Loaded: loaded (/etc/init.d/lirc) Active: active (exited) since Sun 2016-01-31 20:18:17 CET; 32s ago Process: 408 ExecStart=/etc/init.d/lirc start (code=exited, status=0/SUCCESS) However, when I try to test the remote control with irw it fails: $ irw connect: No such file or directory According to man irw this seems to be cause by the absence of the socket file /var/run/lirc/lircd. The directory /var/run/lirc is empty. Any clues would be greatly appriciated.
Updated 2021-01-10 for LIRC 0.10.1. Here are the steps I need to perform to make it work:

1. Install LIRC:

    # apt install lirc

2. In /etc/lirc/lirc_options.conf, set driver and device to the following values:

    driver = default
    device = /dev/lirc0

3. Download a configuration file for the remote control and copy it to /etc/lirc/lircd.conf.d/. Make sure the file ends with .conf. In my case the protocol is RC-5 and I found a working configuration file at http://lirc.sourceforge.net/remotes/rc-5/RC-5.

4. Restart the LIRC daemon:

    # systemctl restart lircd

5. To find out the name of each button, run irw, point the remote control at the IR receiver and press buttons.

6. Specify what should happen when a button is pressed in the file /etc/lirc/irexec.lircrc. Here is the file I created for MPD:

    begin
        button = sys_14_command_21
        prog = irexec
        config = mpc prev
    end
    begin
        button = sys_14_command_20
        prog = irexec
        config = mpc next
    end
    begin
        button = sys_14_command_35
        prog = irexec
        config = mpc play
    end
    begin
        button = sys_14_command_30
        prog = irexec
        config = mpc pause
    end
    begin
        button = sys_14_command_36
        prog = irexec
        config = mpc stop
    end

7. Start irexec:

    # systemctl start irexec

8. Run irexec at startup:

    # systemctl enable irexec
Setting up LIRC in Debian 8
1,588,501,819,000
If we have a physical server running OpenBSD, what are the available remote console solutions for it? Are there any unique methods because it's OpenBSD? By console I of course mean access that still works when the machine has no network connection, e.g. no SSH.
I can think of the following alternatives:

- make the serial port available using a network-attached serial console switch
- connect keyboard, video and mouse to a network-attached KVM switch
- connect the serial port to another host and make the serial port network-connected with socat
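For the last option, a single socat invocation on the host that owns the serial port is enough. This is a sketch; the device name, baud rate and TCP port are examples to adjust for your hardware:

```shell
# expose the serial console on TCP port 2323; clients can then connect with
# e.g. "nc consolehost 2323" or "telnet consolehost 2323"
socat TCP-LISTEN:2323,reuseaddr,fork /dev/ttyS0,raw,echo=0,b9600
```

Note that this offers no authentication or encryption on its own, so it should only be run on a trusted network or wrapped in an SSH tunnel.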
Available console solution for a physical machine running OpenBSD?
1,588,501,819,000
In iKVM (Supermicro) I have switched to text console mode using <Ctrl>+<Alt>+<F1>. How can I switch back to graphics mode? (Pressing the same key shortcut again doesn't do anything.)
Have you tried <Ctrl>+<Alt>+<F6>? By default F6 is for graphics mode, and the first five, F1 to F5, are for text consoles in Linux. Switching back to X Window: Alt+F7.
iKVM: return to graphic mode
1,588,501,819,000
I have found an old infrared remote controller whose receiver connects over USB. I connected it to my Linux box (Mint LMDE, kernel 3.2.0-4-amd64). It's recognized by lsusb as "Zydacron HID Remote Control". It works ... almost ... I can change the volume, start/stop the media player, choose the track in the playlist. But some keys seem not to react (not configured). How can I configure all the keys? Should I install "lirc"?
So I have to come back on this, because I found a "better" solution (IMHO) without LIRC!

As I said, the first time I connected the USB receiver, almost all buttons on the remote were working, without any other software nor any configuration. On various advice (not only here), I installed LIRC and the plugins I found for the software I use most often. After some difficulties, I configured LIRC in the sense that the computer was receiving scan codes and they were being translated. After this, I started Totem and activated the LIRC plugin ... and nothing worked anymore! :-( Not even the keys which were working before. Same thing with Banshee or VLC! However, when I closed the application or disabled the LIRC plugin, my keys worked again and I could set the volume, start, stop and pause any mp3 or video, etc.

As I understood it, making the remote recognized by LIRC isn't enough: I had to write a configuration file for each and every program I would like to use ... even for keys which were working without LIRC. Sounds crazy ... not to mention that finding the LIRC actions accepted by every plugin seems rather difficult, and some software (like Banshee, for example) doesn't offer more possibilities than those I already had without LIRC (even less). So I searched ...

First finding: since kernel 2.6.36, the LIRC drivers are integrated. This is the reason why, when I configured LIRC, I had to use the "devinput" driver. Since this version, all remote controls are recognized as external keyboards! This also explains why most of the keys were working out of the box.

So, as it's a keyboard, what we have to do is "remap" the non-working keys to another code/action. This is how:

Start by doing an lsusb and identify your remote controller:

    Bus 006 Device 002: ID 13ec:0006 Zydacron HID Remote Control

You must write down the ID 13ec:0006; it will be useful. Now display the content of /dev/input/by-id in long format.
    ls -l /dev/input/by-id/
    lrwxrwxrwx 1 root root 10 Apr 15 19:27 usb-13ec_0006-event-kbd -> ../event10

You find the correct line thanks to the ID, and then the event associated with it! Now, with this information, we will try to read from the remote:

    sudo /lib/udev/keymap -i input/event10

When you press a key on the remote, you should see the scan code and the currently associated keycode:

    scan code: 0xC00CD   key code: playpause
    scan code: 0x70028   key code: enter
    scan code: 0x7002A   key code: backspace
    scan code: 0x7001E   key code: 1
    scan code: 0x70022   key code: 5

Beware: some keys may return a keycode, but that keycode may not be recognized by your window manager (GNOME 3 in my case), or the keycode isn't correct. In my case, I had to remap the number keys to the keypad (Belgian keyboard) and the special keys (audio, video, DVD, ...) to some unused function keys.

Now we will write our keymap file. You can use any name; in my case, I named it 'zydacron':

    sudo vi /lib/udev/keymaps/zydacron

There are already several files in this folder. The format is very simple:

    <scan code> <keycode> <# comment eventually>

Example:

    0x70027 kp0
    0x7001E kp1
    0x7001F kp2
    0xC0047 f13 # music
    0xC0049 f14 # photo
    0xC004A f15 # video
    0xC00CD playpause # Play/Pause

You can put in only the keys which need to be remapped! You will find on this page the official list of all key codes. Again, it doesn't mean that every key code on this list is supported by your window manager; you will have to test to be sure. When the file is done, we can test it with:

    sudo /lib/udev/keymap input/event10 /lib/udev/keymaps/zydacron

If something doesn't work, you will have to try another keycode and then redo the mapping. When everything works as you expect, we will make it permanent.
Edit the file /lib/udev/rules.d/95-keymap.rules:

    sudo vi /lib/udev/rules.d/95-keymap.rules

In the file, after LABEL="keyboard_usbcheck" but before GOTO="keyboard_end", add the following line:

    ENV{ID_VENDOR_ID}=="13ec", ENV{ID_MODEL_ID}=="0006", RUN+="keymap $name zydacron"

You can recognize the vendor ID and model ID as the two parts of the ID found with lsusb, and also the name of my file. Adapt it to your own values. Restart the udev process:

    sudo service udev restart

(or reboot your computer), and you are done. Now each time you plug in your receiver, no matter which USB port it lands on or which event number the system assigns, the mapping will be done automatically.

Little tip: I mapped one key as "tab" and another as "F10", very useful in Banshee to "jump" across sub-windows and to open the main menu.
Configure remote control Zydacron
1,588,501,819,000
I want to display the desktop of my Raspberry Pi on my laptop. The rPi runs Raspbian with LXDE; I am on Ubuntu 12.04/awesome. Is it possible to display the complete rPi desktop on my laptop's X server? I don't want to see just windows the ssh -X ... way; I want the complete desktop. As I read, VNC just sends pictures over the net. And I didn't really understand what exactly NX does :D. Some compression on top of X11... What is the real raw X windowing system remote desktop procedure?
Well, first create a new nested X server:

    user@host $ Xephyr :1 -screen 800x600 &

A window called "Xephyr on :1" should spawn. SSH into the remote host and forward the display to the created display:

    user@host $ DISPLAY=:1 ssh -Y username@remotehost

Now start a session on the remote host, in my case LXDE:

    user@remotehost $ lxsession

You should now see the desktop in Xephyr. Otherwise a second X server could be created. hf
X windowing system remote desktop procedure? (No VNC, no XN)
1,588,501,819,000
In Windows you can open a file's properties and see which users have access to that particular file, including remote users/groups that are part of your domain. In Unix, when you open a file's properties, you only get the local users [or groups]: owner and others. Is there any way of knowing which remote users have access to a particular file? [Is there anything built into Unix?] My current solution is using the 'last' command. [Remote users who logged in as a local user have at least that local user's permissions.] But this solution only works if the user actively logged in to the system, which means I miss every remote user that didn't log into the system.
Depends entirely on what mechanism you're using. NFS with sys authentication relies purely on the user/group/other permissions and UID/GID matching, so you have to figure out 'by hand' whether any given user is a member of the right group. Remote users are validated by the server hosting the storage, so you can simply refer to your local name lookup, e.g. LDAP, local group files, etc.

So you can usually do this by the simple expedient of running the id command:

    id $username

to see memberships. Depending on precisely what you have configured locally, something like:

    getent group $group_name

will show you group members from LDAP. (Or just read /etc/group if that's your auth source.)

If you're looking for something more potent, you can start to look towards NFSv4 and Kerberos, which allow for more detailed (CIFS-like) ACLs and stronger (e.g. 'domain level' authenticated) user authentication.
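The two lookups above can be tried with any account; for example, using root, which exists on essentially every system:

```shell
# group memberships of a user, as resolved by the local name service
id -Gn root

# members and GID of a group, from whatever source nsswitch.conf points at
# (LDAP, local /etc/group, NIS, ...)
getent group root
```

Whoever appears in the relevant group's member list, or matches the file's owner UID, is who the NFS server will grant the corresponding access to.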
Is there any way of knowing who [including remote users] can access a paticular file / folder?
1,588,501,819,000
In our school lab we have 20 computers, and we want to set up Linux on them. Is it possible to control them (I mean tasks like updating, upgrading and so on) from a central computer? N.B.: Zorin Grid can do the task I want, but it is currently in the development phase. Is anything like it available? Edit: Thanks to all for helping me. As I'm new to Linux and still learning the CLI, I was unaware of the usage of ssh. ssh is just an awesome tool for remote access.
Yes, that's possible; it's very standard. Actually, the usual Linux way of working makes it easy. First of all, on Linux, all administrative things are usually done using the command line: you enter a command to update your computer. Thanks to ssh, you can log in to your computer from anywhere (as long as you know its address) and do that; you don't have to sit in front of it. As soon as you have more than one computer on which you want to do the same things that way, it's not a good idea to do it manually for every one of your machines. Instead, you use a simple automation tool to do the same thing on all computers, check whether everything worked, and similar things. There are many different ways of doing that! I'm not sure why the world needs "Zorin Grid", since so many other tools that do the same already exist. I personally like ansible: I just have a list of my computers in a text file, and a list of the software I want installed, the settings I want made and the files to be backed up in another text file, and then I tell ansible to go and do that. It will do it, and tell me the results.
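A minimal sketch of the "same command on every machine" idea, with echo standing in for the actual remote call (the hostnames are placeholders; once ssh keys are set up you would swap the echo for ssh "root@$host" "$cmd", or graduate to an ansible inventory doing the same thing):

```shell
#!/bin/sh
# list of lab machines (hypothetical names)
hosts="lab-pc01 lab-pc02 lab-pc03"
# the administrative task to run everywhere (Debian-style update here)
cmd="apt-get update && apt-get -y upgrade"

for host in $hosts; do
    # dry run: print what would be executed instead of executing it remotely
    echo "would run on $host: $cmd"
done
```

Tools like ansible are essentially this loop with error reporting, parallelism and idempotent modules layered on top.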
Controlling other Linux desktops from a central one