1,383,413,207,000
In the context of /var, is that an abbreviation for "variables"? Or what's the intention behind the naming of this folder?
From the Filesystem Hierarchy Standard: /var contains variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files. So you're almost correct: it's not "variables", it's an abbreviation for "variable", for files which have variable, changing data.
What is "var" an abbreviation of?
1,383,413,207,000
On a terminal in Ubuntu (14.04) when I hit Tab after cd /usr/bin it gives cd /usr/bin/X11. If I keep hitting Tab, I get cd /usr/bin/X11/X11/X11/X11/X11/X11/X11 and so on. Should it be like this or am I looking at something funny?
Yes, it looks somewhat funny, but that's an intentional configuration we all have for backward compatibility. On Debian/Ubuntu-based systems, the x11-common package provides such a symlink:

$ ls -l /usr/bin/X11
lrwxrwxrwx 1 root root 1 Mar 17 02:52 /usr/bin/X11 -> ./
$ dpkg -S /usr/bin/X11
x11-common: /usr/bin/X11

man hier contains some historical description of the /usr/bin/X11 and /usr/X11R6 directories, which are no longer used.
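This kind of self-referencing symlink is easy to reproduce in a scratch directory (the names below are made up for illustration):

```shell
# A symlink that points at "./" resolves to its own directory, just like
# /usr/bin/X11 -> ./ on Debian/Ubuntu, so it can be appended indefinitely.
sandbox=$(mktemp -d)
mkdir "$sandbox/bin"
ln -s ./ "$sandbox/bin/X11"

target=$(readlink "$sandbox/bin/X11")            # the stored link target, "./"
deep=$(cd "$sandbox/bin/X11/X11/X11" && pwd -P)  # still resolves to $sandbox/bin
```

However deep the X11 chain goes, pwd -P shows you never left the original directory.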
Infinitely nested directories within /usr/bin/X11
1,383,413,207,000
I am running a web application and I want to move a directory containing my PHP files above the server root directory. Where would be the best place to put them? On the web server I rent from Namecheap, there is a directory called php in the same directory as public_html. Is this a good place to put my PHP files? The php directory has some subdirectories such as data, docs, and ext. Or is this directory merely for configuration files? FYI, this server is using Apache.
I use a structure like:

/var/www/sites/
/var/www/sites/project.com/
/var/www/sites/project.com/includes/
/var/www/sites/project.com/library/
/var/www/sites/project.com/www/
/var/www/sites/project.com/www/index.php

where /var/www/sites/project.com/www/ is set as the virtual host's document root, and I use index.php to include files from library/ and includes/. This way I have organized my project to have the bulk of the PHP outside of Apache's document root, as you're looking to do, so that the server doesn't 'blat' out the contents of PHP scripts, etc. However, if it's managed hosting, you're going to have to play within the box drawn by the host. I'd say the same level as public_html/ is a good place to put your directories with PHP files. The public_html/ directory is roughly equivalent to /var/www/sites/project.com/www/ in my example directory structure. As to the nature of the php/ directory, maybe only your host knows. If it's empty, I'd say they're encouraging you to use it.
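The layout above can be scaffolded in one go; here it is created under a throwaway root instead of /var/www (project.com is the answer's placeholder name):

```shell
# Recreate the answer's per-project layout under a scratch directory
root=$(mktemp -d)
site="$root/sites/project.com"
mkdir -p "$site/includes" "$site/library" "$site/www"
touch "$site/www/index.php"   # only www/ would become the vhost document root
```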
What is the best place to put php files above server root directory?
1,383,413,207,000
I recently downloaded Master PDF Editor. It's proprietary software for Linux, and the archive contained basically only a *.desktop file and the actual binary. Looking at the *.desktop file, the binary is supposed to be placed in /opt/master-pdf-editor-3. I'm aware I could change that, but I followed the suggestion. Naturally, I still cannot call the binary on its own, since it's not in my PATH. I can think of several solutions: I could add the binary's path to PATH, I could create a (soft or hard) link inside a folder that is already in PATH, such as /usr/bin, or I could write a shell script in the same place that calls the binary. I was wondering, is there some sort of commonly accepted best practice or rule for when to use one over the others? If it matters, I'm on Arch Linux. P.S.: This question is very similar, but the focus there is on the directory structure and not on the different possibilities for calling the binary itself.
Creating a hardlink should probably be avoided; there's no need for one, and a symlink is simpler and safer. Your other solutions are also fine, though. You can create a script that calls the binary, or you can add the directory to your PATH. The latter might be preferable if you expect to add other binaries in /opt as well. This is essentially a matter of preference, and in such cases, usually the simplest solution is the best. So, just create a soft link and you're all set:

sudo ln -s /opt/master-pdf-editor-3 /usr/bin

Alternatively, of course, you could just call the binary with its full path:

/opt/master-pdf-editor-3

Finally, if it's only for your user, you can create an alias by adding this line to your shell's initialization file (e.g. ~/.bashrc):

alias master-pdf-editor-3='/opt/master-pdf-editor-3'

Anyway, no, there isn't any single Best Way© to do this. It depends on how you want your system to be set up and on your own preferences as the system's administrator.
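As a sketch of the symlink route (all paths below are stand-ins created in a sandbox; for PATH lookup to work, the thing linked into the bin directory must resolve to the executable itself):

```shell
# Simulate an /opt install plus a symlink into a directory that is on PATH
sandbox=$(mktemp -d)
mkdir -p "$sandbox/opt/tool-1.0" "$sandbox/usr/bin"
printf '#!/bin/sh\necho hello\n' > "$sandbox/opt/tool-1.0/tool"
chmod +x "$sandbox/opt/tool-1.0/tool"

# Link the binary into the bin directory; now plain "tool" is found via PATH
ln -s "$sandbox/opt/tool-1.0/tool" "$sandbox/usr/bin/tool"
out=$(PATH="$sandbox/usr/bin:$PATH" tool)
```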
How to call a binary outside PATH
1,383,413,207,000
The SDK for Dart comes as a zip file for Linux. I've only previously installed software using apt-get. Is there a convention for where I should put the dart-sdk folder? And should I add dart-sdk/bin to my path, or symlink it into a folder that's already in the path?
In the Arch package, it seems Dart gets installed into /opt/dart-sdk. This also matches what the FHS says: /opt is reserved for the installation of add-on application software packages. The use of /opt for add-on software is a well-established practice in the UNIX community. The System V Application Binary Interface [AT&T 1990], based on the System V Interface Definition (Third Edition), provides for an /opt structure very similar to the one defined here. The Arch package also seems to put stuff in /usr/bin (I suspect symlinks to the things in /opt/dart-sdk/bin).
Where is the recommended place to install applications that are not from apt?
1,669,279,247,000
I have been learning Linux recently, but I'm confused about how the Linux system works, especially how Linux handles program files. In Windows, all programs are in one directory, Program Files (and some in Windows), but in Linux, when I install a program automatically (with apt), I think it "randomly" places the program files, not in one place (not just a Program Files folder). Can I change the DEFAULT place/folder for programs in Linux? I have Linux Mint 17.3 64-bit.
In short: As long as you use the package manager that comes with your Linux distribution, you can't change the place where the binaries are installed. However, the installation paths follow a long-established convention (with minor differences between distributions). If you build packages on your own you could, in theory, choose the installation prefix yourself, but usually departing from the conventions comes with some kind of penalty (read: It's a bad idea if you don't know exactly what you're doing).
Change the default program installation location?
1,669,279,247,000
I was looking to download some tool and it said to update your PATH variable, but I thought /usr/bin was the "standard".
It is not too uncommon to have tools that expect to be installed at user level. As such, they will not assume that you can modify anything directly under /usr. It is common, however, to have a ~/bin or ~/usr/bin directory where you can include symlinks to tools that you have installed for your user, so that you don't have to constantly update your $PATH variable.
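A minimal sketch of that per-user setup, using a temporary directory as a stand-in for $HOME (the tool name is made up):

```shell
# Create ~/bin, drop a script into it, and put it on PATH for this session
home=$(mktemp -d)   # stand-in for $HOME
mkdir -p "$home/bin"
printf '#!/bin/sh\necho user-tool\n' > "$home/bin/mytool"
chmod +x "$home/bin/mytool"

PATH="$home/bin:$PATH"
out=$(mytool)
```

To make this permanent you would typically add export PATH="$HOME/bin:$PATH" to your shell's init file instead of setting it per session.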
Should I use symlinks in /usr/bin or use the PATH variable instead?
1,669,279,247,000
I've installed Windows 7 and Linux as dual boot. My partitions are:

/dev/sda2: UUID="EC328C61328C329E" TYPE="ntfs"
/dev/sda3: UUID="800E88610E8851D8" TYPE="ntfs"
/dev/sda4: UUID="20e7c430-bab0-4aa1-8afe-caa9d97e1de3" TYPE="ext4"

where sda2 is Windows, sda3 is a shared partition, and sda4 is Linux. sda3 has the mount point /windows. Because sda2 and sda4 are small partitions, I created the directories Music, Documents, etc. and redirected the Windows libraries there. I want to do the same in Linux, but editing ~/.config/user-dirs.dirs to

XDG_DESKTOP_DIR="$HOME/Plocha"
XDG_DOWNLOAD_DIR="$HOME/"
XDG_TEMPLATES_DIR="$HOME/Šablony"
XDG_PUBLICSHARE_DIR="$HOME/Veřejné"
XDG_DOCUMENTS_DIR="/windows/home/Documents"
XDG_MUSIC_DIR="/windows/home/Music"
XDG_PICTURES_DIR="/windows/home/Pictures"
XDG_VIDEOS_DIR="/windows/home/Videos"

has no effect. The folders have icons as if it works, but when I click on Music in the file browser, it goes to /home/myUser/Music, not into /windows/home/Music. It would be great if it worked for the cd ~/Music command too :)
Keep the lines as they were in the original user-dirs.dirs:

XDG_MUSIC_DIR="$HOME/Music"
XDG_PICTURES_DIR="$HOME/Pictures"
XDG_VIDEOS_DIR="$HOME/Videos"

and now create symbolic links pointing to your Windows folders (make sure you have no important data in the three concerned folders):

cd ~
rm -fr Music Pictures Videos
ln -s /windows/home/Music
ln -s /windows/home/Pictures
ln -s /windows/home/Videos

By the way, you had better create a swap partition; you don't mention having done that already.
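The effect of those symlinks can be sketched in a sandbox (temporary directories stand in for ~ and /windows/home, and the file name is made up):

```shell
home=$(mktemp -d)   # stand-in for ~
win=$(mktemp -d)    # stand-in for /windows/home
mkdir -p "$win/Music"
touch "$win/Music/song.mp3"

# Replace ~/Music with a symlink into the shared partition
ln -s "$win/Music" "$home/Music"
found=$(ls "$home/Music")   # lists the shared folder's contents
```

After this, both the file browser and cd ~/Music land in the shared folder, which is what the question asked for.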
Redirect home to shared NTFS partition
1,669,279,247,000
I want to start using our home server as a media server for movies and shows. The question is: where is the most appropriate place to store them? The candidates seem to be:

- /var
- my user directory
- a new user created specifically for media, with the files stored there

Ideas or suggestions?
It's up to you, really. I store all my media in /home/{name}/videos and share it with Samba; no problems with that. You can store it in /var; nothing wrong with that either. There are only problems if you are working in a multi-user environment. If that were the case, I would suggest you store personal files in /home/{name} and shared files in /var/. Provided you don't have any extra partitions, it does not matter where you put the files.
Where to store media files on a shared linux server
1,669,279,247,000
If ls -aR | grep ":$" returns

./.hiddendir2:
./dir1:
./dir3:
./dir3/.hiddendir4:

then how can you update the pattern given to grep so that you only get

./dir1:
./dir3:

(note that the last line containing .hiddendir4 was omitted even though its parent was not hidden)? Here is a wordy version of the same question with some context: I found a great script at http://www.centerkey.com/tree/ for printing a directory tree, but the output is sometimes cluttered by the descendants of the .git directory for repos I'm working on. If I read the source correctly, I need to change the pattern given to grep at this line:

ls -aR | grep ":$"

So far I came up with ^\./(?!\.)\w+:$, which seemed to work when testing at http://regexpal.com/, but then broke when I tried running it in bash. I suspect the combination of (1) differences between the regex testing tools I'm using and how grep parses a pattern, (2) bash escape-character requirements, and (3) pattern design all need to be addressed to solve this. Help appreciated before I end up spending too long on this.
The first command you posted can't give the results you show, because they don't have colons at the end; presumably you stripped them. The script you refer to does this to select directory paths, which ls -R displays with a colon appended, but there is nothing preventing a file name and path from ending with a colon and giving a false positive. This also makes your title misleading; you want to keep most directories and exclude only a few.

The question as asked: there are several different "flavors" (standards) of regular expressions, most similar but with important differences in the details. Two are common in Unix and Unix-origin software, unimaginatively called Basic Regular Expressions (BRE) and Extended Regular Expressions (ERE). There is an even simpler form used in most shells (and standard find) to match filenames (and case choices), with only ?, * and [...], that isn't even called a regexp, just a pattern. There is a more extended form defined by Perl, but usable outside it, called Perl Compatible Regular Expressions (PCRE). See "Why does my regular expression work in X but not in Y?" and http://en.wikipedia.org/wiki/Regular_Expression .

Your (?! "lookahead" exists only in PCRE, while standard grep does BRE by default or ERE with -E, although some versions of grep can do PCRE, or you can install a separate pcregrep. But you don't need it. If you wanted only the non-hidden children of the current directory, just do '^\./\w' or '^\./[^.]', depending on how strict you want to be. But you say you want no hidden directory anywhere in the path, which is hard to do with a positive regexp and much easier with negative matching like grep -v '/\.' . Backslash is special both in bash (and most if not all shells) and in grep (BRE or ERE), so it must be either doubled (\\) or single-quoted; I prefer the latter. Note that double quotes are not sufficient here.

Better approaches: you actually want only directory paths, so as suggested by other answers,

find . -type d | grep -v '/\.'

is a better approach. That doesn't waste time listing ordinary-file names you then discard. Alternatively, you can just use ls -R | grep ':$' without the -a; by default ls already skips hidden entries (both ordinary files and directories), as the script you refer to does!
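The negative-match approach can be checked against the question's exact tree:

```shell
# Rebuild the question's directory tree in a sandbox
root=$(mktemp -d)
mkdir -p "$root/.hiddendir2" "$root/dir1" "$root/dir3/.hiddendir4"

# List directories only, dropping any path with a hidden component
visible=$(cd "$root" && find . -type d | grep -v '/\.' | LC_ALL=C sort)
```

Only ".", "./dir1" and "./dir3" survive: both hidden directories contain "/." somewhere in their path, so grep -v drops them, including the one nested under a non-hidden parent.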
How to exclude directories from `ls -R` output?
1,669,279,247,000
I'm quite new to the UNIX world, so feel free to let me know if my question is silly. The so-called Filesystem Hierarchy Standard states that the /var directory is supposed to keep data like logs and caches of, including but not limited to, local packages: /var contains variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files. ... /var is specified here in order to make it possible to mount /usr read-only. Everything that once went into /usr that is written to during system operation (as opposed to installation and software maintenance) must be in /var. I'm primarily wondering how this should work in a local multi-user system. /var is a kind of global directory for the system, and all cache data and logs of all users appear to be shared among them. Isn't that considered wrong in some way? I mean that programs (packages) launched on behalf of different users will use the same cache, let alone the fact that all local users can read the logs and look through each other's cache data. Please help me understand this concept. Thanks.
"/var is kind of a global directory for a system, and all cache data and logs of all users appear to be shared among them." This is partially incorrect. Not all caches and logs are stored under /var, and what is stored there is not necessarily shared by all users. Applications and/or the OS own what is stored in /var. The only exception is a directory which is effectively shared and writable by all users: /var/tmp. Users and/or applications deciding to store something there can still protect the subdirectories and files they create with Unix file permissions. "I mean that programs (packages) being launched on behalf of different users will use the same cache, let alone the fact that all local users can read logs and look through each other's cache data." No, different users generally use different caches. There are also cases where a common cache is a plus. When confidential or personal data is stored under /var, this data is protected by the application, so users are not granted the right to see other people's data; e.g. the mail spool is not readable by (non-root) users.
Why isn't the var directory user-specific?
1,669,279,247,000
I have a folder structure like this:

/domains/some-domain-1/applications/j2ee-apps
/domains/some-domain-2/applications/j2ee-apps
/domains/some-domain-3/applications/j2ee-apps

What is the best way to handle folders like this, where the parent and child folders are the same? For example, if I wanted to cd to a cousin dir j2ee-apps under another domain folder, is there an easy way? What if I wanted to run an ls from the top level and get everything in the bottom folders (e.g. j2ee-apps)? Does anyone have any clever advice on how best to work with this?
You can set a variable to hold the variant part, like so:

domain='some-domain-1'
cd /domains/$domain/applications/j2ee-apps
domain='some-domain-2'
cd /domains/$domain/applications/j2ee-apps

Since the cd command is the same, you can recall it in your shell with the arrow keys. Depending on how frequently you do this, you might want to define a function in your .bashrc:

cdj2() {
    cd /domains/"$1"/applications/j2ee-apps
}

Then you can run cdj2 some-domain-1. Shell globbing (aka pathname expansion) can take care of the other part (see Stéphane Gimenez's answer). The find command would be useful if the directory structures weren't exactly the same but you still wanted to see all the files matching a certain name:

find /domains -name 'j2ee-apps' -exec ls {} \;
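The globbing part mentioned above looks like this in a sandbox (domain and file names shortened/made up for illustration):

```shell
# Two sibling trees with the same shape
root=$(mktemp -d)
mkdir -p "$root/domains/d1/applications/j2ee-apps" \
         "$root/domains/d2/applications/j2ee-apps"
touch "$root/domains/d1/applications/j2ee-apps/app1.ear" \
      "$root/domains/d2/applications/j2ee-apps/app2.ear"

# One wildcard visits every cousin j2ee-apps directory at once
listing=$(ls "$root"/domains/*/applications/j2ee-apps)
```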
how to have find on a directory with a changing pattern?
1,669,279,247,000
I'm running FreeBSD 10 and I'd like to build dwm. I've installed Xorg using pkg install. Where are the headers located? Maybe I'm just old-fashioned, but I first looked in /usr/X11R6 ... not there. Does anyone have any idea where Xorg installs its header files on FreeBSD?
You can install dwm from the port (/usr/ports/x11-wm/dwm). You can use your own config.h:

make DWM_CONF=/path/to/dwm/config.h

I think you should use the ports system instead of compiling it yourself; that way it appears in your package list.
Location of Xorg headers on FreeBSD 10
1,669,279,247,000
My Ubuntu boot has iwconfig living in /sbin. I was under the impression that /sbin is usually for programs that are required for bootstrapping, or at least need to be available before /usr is mounted. I don't really see how iwconfig fits either of these conventions, though. What's the rationale?
As per the Filesystem Hierarchy Standard, /sbin is the place where "utilities used for system administration (and other root-only commands)" are stored. And yes, iwconfig may be needed by some startup scripts before /usr is mounted (if it is on a different partition); therefore the place for it is not /usr/sbin.
Why is iwconfig in /sbin?
1,669,279,247,000
Although I did some research on this subject, I couldn't find exactly the information I wanted; everyone approaches it in a different way. Per the Filesystem Hierarchy Standard, I should store my files at:

Temp files: /var/tmp/app_name/* or /tmp/app_name/*
Cache files: /var/cache/app_name/*
Config files: ~/.config/app_name/*
Log files: /var/log/app_name/*
Data files (database, etc.): ???

Q1: Is that the right approach for recent systems?

For the XDG standard explained here:

Temp files: ??? or /tmp/app_name/*
Cache files: ~/.cache/app_name/*
Config files: ~/.config/app_name/*
Log files: ???
Data files (database, etc.): ???

I can't understand why we store a cache file in ~/.cache. It doesn't make sense to me, because there is already a built-in cache folder called /var/cache. So I'm confused: everywhere I look, there is a different approach.

Q2: Where should we put the files (data, logs, temp files, configs, etc.) for a pure Linux distribution (one that does not use $XDG) when creating applications?

Q3: Some applications use the Linux structure, but some use the XDG structure. How do they choose? According to what situation? Do they use the $XDG environment variables if we're using them?

For reference, my env | grep -i "XDG" output:

XDG_VTNR=1
XDG_SESSION_ID=1
XDG_DATA_DIRS=/home/furkan/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share
XDG_RUNTIME_DIR=/run/user/1000
XDG_SEAT=seat0

P.S.: The ??? marks the parts I don't know.
Part of your confusion may be the distinction between user applications and system applications. So, for example, Apache isn't run as an end user ("Harry" normally doesn't run Apache; it's run from a system startup script, by systemd or init or however). These sorts of applications typically follow the filesystem standard and store log files in /var/log, configuration files in /etc, and so on. Similarly, commands executed by the system administrator as root and designed to affect the whole machine (e.g. apt or yum) also follow the filesystem standard. However, applications designed to be executed by the end user (e.g. a web browser and other desktop applications) follow the XDG standard. Here "Harry" has his own personal cache, which is different from "Julie's"; they visit different web sites, so they have cached different pages. Similarly, Harry may configure his desktop differently from Julie, so the configuration will be in the ~/.config area. Some locations (e.g. /tmp) are designed to be shared by all users, so even desktop apps can use them... but even here a more modern /run/user/ structure is sometimes used.
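The usual way a desktop application resolves those per-user XDG paths is an environment-variable lookup with a fixed fallback; a minimal sketch (myapp is a made-up name, and a temporary directory stands in for $HOME):

```shell
# XDG base directories fall back to fixed defaults when the variable is unset
unset XDG_CACHE_HOME
home=$(mktemp -d)   # stand-in for $HOME

# The spec's default for XDG_CACHE_HOME is $HOME/.cache
cache_dir="${XDG_CACHE_HOME:-$home/.cache}/myapp"
mkdir -p "$cache_dir"
```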
Where should I store my application files (data, cache, logs, crashes, etc.) [closed]
1,669,279,247,000
I'm learning the dd command on Linux. For test purposes, I ran this command:

sudo dd bs=4M if=/dev/mmcblk0 of=/media/some_remote_host/stuff/myImage.img

I know that dd takes an image of the given disk/drive, but I'm curious whether it includes directories where filesystems are mounted according to the Unix file structure, such as /mnt or /media. I tried to look it up but couldn't find an answer. And if it does include the /mnt and /media directories, does that mean it can potentially get stuck in an infinite loop? It would index and start storing the contents of the /media folder while writing to it. Or does the actual writing of the *.img occur only after all folders are scanned and taken into memory?

TL;DR: Does the dd command include the /mnt and /media folders in the image?

Hope you can enlighten me on these questions. Thank you. Device specs: Raspbian OS 32-bit, single drive (SD card).
dd doesn't know anything about mounts or folders or unix file structure in filesystems. dd only knows about raw data and a few trivial transformations of raw data and data blocks. It was originally designed to read and write data from or to block devices (including disks and tapes) and could handle changing the structure of that data back and forth between blocked and streamed data and do some trivial character transformations and padding adjustments. Running dd if= on the device of a writable mounted filesystem is dangerous, as that filesystem can change while dd is reading it. Blocks dd has already read may be changed (and changes not seen by dd) at the same time blocks it hasn't read are changed, and the results will be a corrupt image of the filesystem on output. Since dd is reading from a block device of (presumably) fixed size, and reading it without any knowledge of the underlying filesystem, if you are writing the output to a file in the same filesystem, it won't be an infinite loop, but it increases the likelihood and severity that your output image will be a corrupt filesystem. As filesystems mounted on directories are not part of the parent filesystem (but the (empty) directories are), they will not be in the dd output image (but the directories themselves will be, including files that might be under the mount point if it wasn't empty before mounting).
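dd's raw, byte-oriented view is easy to see on an ordinary file; it copies block ranges with no idea what the bytes mean:

```shell
# Copy 4 bytes out of a 6-byte file, skipping the first 2 input blocks (bs=1)
work=$(mktemp -d)
printf 'ABCDEF' > "$work/src"
dd if="$work/src" of="$work/out" bs=1 skip=2 count=4 2>/dev/null
out=$(cat "$work/out")   # CDEF
```

Imaging a whole device works the same way: the input just happens to be a block device, and whatever filesystem structure the bytes encode is invisible to dd.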
DD command included directories
1,669,279,247,000
I have gone through a related thread here. A colleague of mine copied the phantomjs binary to /usr/bin. On our system, this binary will be used only by a non-root user. At the moment, the regular user is able to run this binary fine, but going by the thread linked in this question, it does not look like the right thing to put a binary used by a non-root user under /usr/bin. So is there any reason for me to move it from /usr/bin to /usr/local/bin, or is it better to leave it in /usr/bin? In other words, does it make any difference to a binary whether it is put under /usr/bin or /usr/local/bin, or is it just a 'rule' that is generally followed?

[root@seco01 ~]# which phantomjs
/usr/bin/phantomjs
[root@seco01 ~]#

P.S.: /usr/local/bin is included in $PATH.
The Filesystem Hierarchy Standard says, under "/usr/local : Local hierarchy", that: The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr. Locally installed software must be placed within /usr/local rather than /usr unless it is being installed to replace or upgrade software in /usr. As a general rule of thumb, in modern Linux distributions, software that is managed by the distribution's package manager uses /usr/bin etc., while /usr/local/bin is used by local installations of software that is not managed by the package manager. The typical case here is software that is installed via make, make install. Static stand-alone binaries also fall into that category. Another possibility is /opt. The division between /opt and /usr/local is not clear-cut, but /usr/local is more customary for local installations. Here is what the FHS says about it: "/opt : Add-on application software packages."
Should I move a binary used by a non-root user from /usr/bin/ to /usr/local/bin/?
1,669,279,247,000
find . * -depth -print0 | xargs -0 rmdir

This finds and removes all empty folders (including hidden ones) recursively. I only tried it on my home folder and a pendrive on a Linux PC, and it worked, but I don't know if it is safe to run from / as root. I once nuked my OS by running some command off the internet (which I didn't understand) like that.
The man page for rmdir says:

Remove the DIRECTORY(ies), if they are empty.

If you want to remove all empty directories, then it is safe. The question you need to ask is: do you want to remove all empty directories? Some applications need a directory even if it's empty. For example, journald can be configured so that it only logs to persistent storage if /var/log/journal exists. If you run your command when that directory is empty, it will be deleted, and afterwards journald will not log to persistent storage, as it can't find the directory. I believe Fedora is configured this way by default. Also, empty (unmounted) mount points could be deleted by your command. They should be reasonably easy to fix, but it could still catch you out.
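That caveat aside, the "only empty directories are removed" guarantee is easy to verify in a sandbox (file names are made up):

```shell
# rmdir refuses to touch non-empty directories, so only the empty chain goes
work=$(mktemp -d)
mkdir -p "$work/empty/nested" "$work/full"
touch "$work/full/keep.txt"

# Depth-first walk: nested empty dirs are removed before their parents;
# rmdir's errors on non-empty dirs are harmless and suppressed here
find "$work" -depth -type d -exec rmdir {} \; 2>/dev/null
```

empty/nested goes first, which makes empty/ itself removable, while full/ and its file survive untouched.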
Is this command safe to run from / as root
1,669,279,247,000
I just bought an external hard drive that's 1TB. I formatted my laptop and burned Linux to it. The thing is, I also use my hard drive for other things. Can I move these boot files into a custom folder? Or should I leave them alone?
I'm not sure why you would want to move the /boot stuff. It should be possible, but it would break a lot of fundamental assumptions in boot-related tools. One common thing that people do is store /boot on a separate partition, in which case /boot/boot should be a symlink to '.' so that when the bootloader mounts that partition, it can still access files under /boot. That actually relates to the underlying question: "I also use my hard drive for other things" sounds like you might want a separate partition for that. But it's hard to be sure without knowing exactly what you want to do.
Possible to move boot files?
1,669,279,247,000
I have a tmp folder in /var, and it has the file channel.xml in it. I don't know what this is, whether it should be there, and what permissions it should have, because at the moment it is wide open. Also, what is the folder above /var called, please?
The directory /var/tmp should have permissions rwxrwxrwt, which allow anyone to change to this directory, create and read files in it, and write/rename/delete files they own (this last restriction is caused by the t in the permissions, which is the "sticky bit"). The file you see in there is a temporary file created by something on your system. You can look at its ownership, or try to see what process has it open, to get a clue as to what wrote it. The XML schema at the top of the file may also help with that. The folder above /var is /, also known as the root directory.
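Those rwxrwxrwt permissions (octal mode 1777, the leading 1 being the sticky bit) can be reproduced on any directory you own:

```shell
# Recreate /var/tmp-style permissions on a scratch directory
shared=$(mktemp -d)
chmod 1777 "$shared"
mode=$(ls -ld "$shared" | cut -c1-10)   # drwxrwxrwt
```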
What is the tmp folder in /var?
1,669,279,247,000
When installing software in $HOME, how do the linux filesystem hierarchy directories map to subdirectories of $HOME? I am asking this question so that I can write a build system that picks reasonable default paths for a user install. The build system infrastructure is Haskell-specific (Cabal) but the installed files include C++ headers and libraries, for which Cabal doesn't have a default install path.
$PREFIX is ~/.local/. Everything else maps under there.
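A sketch of that mapping, using a temporary directory as a stand-in for $HOME (the exact set of subdirectories is a common convention mirroring /usr, not something Cabal mandates):

```shell
# Mirror the /usr layout under ~/.local for per-user installs
home=$(mktemp -d)   # stand-in for $HOME
prefix="$home/.local"
mkdir -p "$prefix/bin" "$prefix/lib" "$prefix/include" "$prefix/share"
```

Build systems then take this as the install prefix, e.g. ./configure --prefix="$HOME/.local", so C++ headers land in include/ and libraries in lib/.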
Filesystem organization for software in HOME
1,669,279,247,000
Like most of us, I have several machines: at home, at work, for travelling... etc. I mainly write papers or books while I code. But I'm tired of svn'ing, rsync'ing and so on, so I've decided to carry a pendrive with me, with my Ubuntu customizations (bash, emacs, ...) and at the end of the day, do a rsnapshot. My question is: how do I minimally run my home directory from a pendrive? What should I put in there? Thanks for any input.
The answer is pretty simple :-) You should put there the documents you are working on and the dotfiles of the applications you use. There's no such thing as a minimal set of files you need. If an application is missing its configuration file, it will usually create a new one, as on first start. Which files you will need depends on the applications you use, so you are the only one who can answer this. If you are unable to trace some config files, keep an eye on the subdirectories of ~/.gnome2 or ~/.kde. To tell the system where your new home directory is located, you can automount your pendrive to /home/username, or simply change the location of your user's home directory in /etc/passwd to your pendrive's mountpoint. If this doesn't fit your question, please be more specific. :-)
Carry-on Ubuntu Customization
1,669,279,247,000
I'm using a Linux system on which I don't have root. My home directory is remote-mounted and backed up (and there's a quota on that filesystem). Now, I would like to work on some files on this machine itself, they don't need to be backed up, and I would like to (but not have to) have them relatively large. Where can I put these files? Where can I create a folder of my own outside my own home directory? Notes: Under /tmp is not a solution, the files need to persist. It's a Fedora 20, but if you have a Debian'ish answer that's interesting as well. The local filesystems don't have a quota.
You could use /var/tmp, but: if you have a quota, the admin will probably not appreciate you creating large files outside of your $HOME directory, and quite likely you even have a limit for /var/tmp, so that might only be an option for small files. However, if you are a member of the fuse group (ask your admin to add you to the group if you're not), you can use sshfs and mount a remote file system of arbitrary size. Example:

sshfs <user@host:directory> <mountpoint>
Where can I/should I place files outside my home directory?
1,669,279,247,000
I'm trying to structure an Android project for continuous integration/continuous delivery via Gradle and git. The main code will be pulled from git onto the build server without various files that contain keys. Gradle needs these files to successfully build the project. I will pull those files onto the build server separately, but I'm looking for a common place to store them both in my local environment and on the build server, so that I can reference this location in ENVs and then point to it in the Gradle build file. The build server runs as root; my local environment obviously runs as a user. Where is a non-root-accessible, public place in the Linux file system besides /home/$USER? The distros I'm using are Ubuntu and Debian.
The most official location for such files would be in /srv, which is for “site-specific data which is served by this system”. Alternatively, you can use /home/autobuilder as the home directory of the autobuilder system user and store the files there. Another common convention is to have a toplevel directory such as /net which contains one subdirectory per machine name, and put site-specific data there — that's probably the clearest convention if the files are stored on a single machine and mounted elsewhere over NFS or some other network filesystem. Or you can choose a directory under /var/local — /var is a kind of default place for variable data not owned by users. Or you can use a separate toplevel directory: as the system administrator, you get to decide how you organize your storage, as long as it doesn't conflict with assumptions made by your distribution's tools (i.e. keep off directories that are reserved for the distribution).
Public, Non-Root Access to file system?
1,669,279,247,000
I am configuring a Linux work station that will be used by a small number of users (20-30). These users will belong to a small set of groups (5-10) with each user belonging to at least one group and potentially multiple groups. On the work station there are files that should only be writeable by members of a particular group. Each file is only writeable by members of one group so standard Linux permissions should work just fine. I have two questions. Who should own the files that already exist? I was thinking either root or creating a set of dummy users corresponding to the groups. Is there a better choice that I am missing? It seems like this is unlikely a unique situation so I was hoping there was a standard convention. The second question is where should I put the files. If I made dummy users I could create subdirectories in /home/. If root owns the files do I go with /srv/groups/ or maybe `/share/? Again is there a convention?
Who should own the files that already exist? I was thinking either root or creating a set of dummy users corresponding to the groups. Leaving them owned by root but belonging to a common group, presuming the files are masked 0002 (i.e., are group-writable), has a little bit of an advantage in terms of preventing them from becoming accidentally reowned if you create users to match the groups and the people who are in the groups can log in as those users. I'm referring to accidents here because of course a malicious user in the group will just be able to delete the files in any case. But if they are owned by root (or any other user that is not in the group), then while someone in the group can still write to them (and thus delete them), they won't be able to reown them or modify the permissions such that other members of the group won't be able to subsequently access the file. Using a group but no fixed owner (i.e., files can be owned by anyone, but should be in the correct group with group permissions) has an advantage if users will be creating files (see below). Creating new users just to match the groups will probably create more potential problems than it actually solves. If using group permissions works, stick with that. You can also create a little command for the superuser:

#!/bin/sh
chown -R root:groupx "$1"
chmod -R g+w "$1"

And use it as foo /some/directory. This will ensure everything in the tree is owned by root, with group groupx, and group-writable. There is a potential problem using root as the owner if root then adds the setuid bit to a file, but I believe only the owner can do that. If you are really worried, create a dummy user -- but not one that matches the group: one that has no privileges and that no one can use. There is one further issue with users creating new files, which by default will be owned by them. They will be able to change the group to the correct one, which will make the file accessible to others, but they won't be able to change the owner.
For that reason, and because people may forget, you may want to run foo /some/directory at regular intervals or opportune moments (e.g. when no one is logged in, since changing the ownership may affect software which has the file open). Taking the last paragraph into account, you could just say the owner does not matter at all, only the group is important. In that case the foo command should use:

chgrp -R groupx "$1"

instead of chown. where should I put the files Creating a /home/groupx is absolutely fine even if groupx is a group and not a user. The only potential issue would be if you then go and create a user with the same name -- but you don't want that anyway. Put the files there and foo /home/groupx. If you don't want users to be able to create files, set the directory to 755. They will still be able to modify files owned by their group.
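A minimal sketch of that maintenance command as a shell function, using the group-only variant; the function name, the group name groupx, and the setgid detail are illustrative additions, not part of the answer above:

```shell
#!/bin/sh
# fix_group_tree: keep a shared tree group-owned and group-writable,
# without caring who the user owner is.
# Usage: fix_group_tree <group> <directory>
fix_group_tree() {
    grp=$1
    dir=$2
    chgrp -R "$grp" "$dir"    # everything belongs to the shared group
    chmod -R g+w "$dir"       # group members may modify/delete
    chmod g+s "$dir"          # optional: new files inherit the group (setgid dir)
}
```

Run it as fix_group_tree groupx /home/groupx from cron or at opportune moments, as described above.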
Who should own files shared by a group and where should they go
1,669,279,247,000
There is a server with several users. A user may want to read another user's files (few gigabytes) and may want to put processed files in other's directory. What's the common practice to do it? I can think of create a /share folder and make it rwx to all. Is this the common practice?
I think you are searching for /usr/local/share/ but it is hard to answer since this depends on what kind of files you are planning to "share" between users. But if we are talking about office files or something like that, maybe you should use some kind of revision system like Subversion or Git, and then the users will have a checkout/clone in their homedir. Update: A way to make this a little bit better would be that every user gets their own subdir in the shared folder. Each user is allowed to write in their own folder but not in the other subfolders, and all users are allowed to read from all the directories. That way you don't have to think about file collisions if two users use the same filename, or one deleting a colleague's files by mistake. Btw the idea behind /usr/ is described in the Filesystem Hierarchy Standard ( http://www.pathname.com/fhs/2.2/fhs-4.1.html ) - "/usr is shareable, read-only data." So I would probably use a dir in either /home/ or /var/ instead...
The common practice to share file for all users on the same machine? [duplicate]
1,669,279,247,000
I have some sources I want to compile using make. The sources will be compiled into a driver I'm going to use. What is the correct place for such files? /usr/share? /opt? /usr/local/...? Edit: the driver is going to be a kernel driver, and I'll be using dkms for the installation. The distro I'll be using is Ubuntu, but I'll might also use it for other distros in the future
It's really up to you but normally sources are stored in /usr/src as mentioned in the comments. Since system-wide installed applications could install their sources in this directory as well, to avoid possible conflicts, you could use /usr/local/src instead but again in the end no one stops you from storing sources anywhere you want as long as you remember where they are and there are no conflicts. You may as well create /src - easy to find, easy to cd to.
What is the correct place for driver sources?
1,669,279,247,000
I am encrypting my /home partition and I would also like any secrets /root (not /) contains to be encrypted. Is the /root folder considered part of the /home partition by default, or do I need to explicitly add that folder to the encrypted volume? Thanks!
/root is normally on the root partition. It's meant to be available even if something goes wrong and other partitions can't be mounted. Note that /root only contains what you put yourself. Sensitive data created by the system ends up under /etc or /var. These days, with most CPUs having accelerated AES instructions, disk encryption is very cheap. So if you're concerned about confidential data outside of your home directory, you should encrypt the whole system except /boot. This is a good idea anyway because confidential data can end up in other places: wifi passwords in /etc, printed documents in /var/spool/cups, logs showing what wifi networks you connected to and when in /var/log, etc. If you really want to encrypt only /home and not the whole system, which I repeat is not a good idea, you can make /root a symbolic link: mv /root /home ln -s /home/root /
On Debian 9, is /root part of the /home partition?
1,669,279,247,000
In a project of mine I have a couple of specialized commands which are only called by other commands, and are not supposed to be invoked by the end user. Should these internal commands be installed in PREFIX/bin or are they better installed somewhere else, for instance in PREFIX/lib? Edit: The internal commands in question are all Awk scripts. Since they are architecture independent I consider installing them in PREFIX/share/PACKAGE-NAME/bin. Would that be a good choice?
The Linux Filesystem Hierarchy Standard says this, /bin contains commands that may be used by both the system administrator and by users, but which are required when no other filesystems are mounted (e.g. in single user mode). It may also contain commands which are used indirectly by scripts. and The /lib directory contains those shared library images needed to boot the system and run the commands in the root filesystem, i.e. by binaries in /bin and /sbin. and, Utilities used for system administration (and other root-only commands) are stored in /sbin, /usr/sbin, and /usr/local/sbin. /sbin contains binaries essential for booting, restoring, recovering, and/or repairing the system in addition to the binaries in /bin. Programs executed after /usr is known to be mounted (when there are no problems) are generally placed into /usr/sbin. Locally-installed system administration programs should be placed into /usr/local/sbin. Finally, /usr/lib includes object files and libraries. On some systems, it may also include internal binaries that are not intended to be executed directly by users or shell scripts. Applications may use a single subdirectory under /usr/lib. If an application uses a subdirectory, all architecture-dependent data exclusively used by the application must be placed within that subdirectory. There's obviously /usr/bin, and a number of other options. Your description, to me, feels most like /usr/lib.
Where do I install non-end user commands?
1,669,279,247,000
I want to secure my data on a multi-user system, so ful-system encryption is not an option. I encrypted /home/me with ecryptfs, the swap with cryptsetup. But I read that the applications can sometimes use /tmp to store personal data, so I will encrypt it exactly like the swap. Is there any other directory where user data can be ? I'm wondering if it is the case of /var for example. I checked and found nothing sensible, but I want to be sure. In what directories can user data be ?
It might be worth looking at the Filesystem Hierarchy Standard which describes what the various filesystems are generally for (from a standards perspective, not everyone has to follow it), which will let you know where personalised data might be stored. However, the short answer is - anywhere an application wants to store it, and has permission to store it. Generally, non-admin users only have write access to /tmp/, /home/user, /var/tmp/, some directories in /run/ (/run/lock and /run/shm for example). However, the mail daemon and the print daemon, which could in theory be used to store personalised data in files, if only temporarily, have write access to locations in /var/spool/. Syslog could in theory store personal information, depending on how users interact with the system and hence there could be content in /var/log/. None of that covers other products which might be creating data in other locations (/opt/, /srv/, anywhere else they're configured to do so). Doing it piecemeal is going to be complex. NB: There's a summary of the FHS on Wikipedia.
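As a rough, hedged audit of the first point, one can enumerate the directories a given account can actually write to (GNU find assumed; the pruned paths are only examples):

```shell
#!/bin/sh
# List directories under a tree that the current user can write to,
# staying on one filesystem and skipping /proc and /sys as examples.
writable_dirs() {
    find "${1:-/}" -xdev \
        \( -path /proc -o -path /sys \) -prune -o \
        -type d -writable -print 2>/dev/null
}
```

This will not catch data written on the user's behalf by daemons (mail, print spools, syslog), which is exactly the point made above.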
Where can users' personal data be?
1,669,279,247,000
I am programming in shell scripting, and would like to log results. Is there any way to know what is the default path storage for logfiles in my operating system? I have researched: set | grep "log" -i but there is nothing that seems to be like a log path directory.
For any system that follows the Filesystem Hierarchy Standard this should be /var/log. I think this is safe to assume for most modern systems. Note that this is for system processes (ie daemons etc), for user processes the common thing to do is just to create the log file as a hidden file in the user's home directory. Eg ~/.myscript.log
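A hedged sketch of that convention: try the system location first and fall back to a hidden file in $HOME (the script name "myscript" is made up):

```shell
#!/bin/sh
# Pick a log file following the FHS convention: /var/log when we are
# allowed to write there (daemons, root), ~/.myscript.log otherwise.
log_path() {
    if [ -w /var/log ]; then
        echo /var/log/myscript.log
    else
        echo "$HOME/.myscript.log"
    fi
}
```

The script can then simply append, e.g. echo "started" >> "$(log_path)".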
How can I know the default path for log directory?
1,669,279,247,000
I created some scripts for administrative tasks etc, I made them to be independent from environment - every dependency is injected through arguments. However it is annoying to provide to script commonly used dependencies every time I run it, and I don't want to hardcode in it any local information, so I created wrappers. I put my general scripts in $HOME/bin but where should I put wrappers that contain local information and are only for speeding up invocation? Example: Think about script that makes and sends to given ftp server encrypted system backup. It was made as a generic script that can be used with any gpg public keys or ftp servers, however I'm using always only specific public key and uploading it only to specific ftp server, so I created a wrapper with this information. This generic script is actually in /root/bin as this is administrative tool, but where to put that wrapper?
Forget the wrapper stuff :-) All you need is a dot file with the user's configuration options, in the user's home directory. You can have one in /etc for system-wide config options as well. Make your script check for these dot files and, if they exist, use them. HTH.
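A sketch of that pattern; the rc-file names and the FTP_HOST variable are purely illustrative:

```shell
#!/bin/sh
# Defaults live in the script; /etc overrides them system-wide and the
# user's dot file overrides both (names here are made up).
FTP_HOST=ftp.example.com    # built-in default

load_config() {
    if [ -r /etc/backuprc ]; then . /etc/backuprc; fi
    if [ -r "$HOME/.backuprc" ]; then . "$HOME/.backuprc"; fi
}
```

Since the rc files are sourced, they can set any variable the generic script accepts, which gives you the wrapper's effect without a wrapper.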
Where to put wrapper scripts?
1,669,279,247,000
I just installed npm and node.js, and I couldn't access npm. And I'm like "why?" and my OS is like "because /usr/local/bin is at 700 permissions" and I'm like "should it really be that way?" /usr/local is supposed to be .. the local user's bin folder? Then why does it require root access? It is filled with GAE stuff. Maybe Google App Engine changed it, I don't know.
No, /usr/local/bin and pretty much everything in it should be set 755.
Should my /usr/local/bin be 700 permissions?
1,669,279,247,000
Some how, can't remember why, I got into the habit of downloading source to the directory /opt, which I chown to my user/group. I have this feeling that it is not a good thing to do. Is there anything wrong with owning a directory that is outside of your own home directory?
First of all, another question explains the /opt directory. Now, your question about the ownership depends on the environment. Is this is a work system, or personal system? If it's entirely your computer system, download wherever you want to! That seems like a fine place to maintain ownership of on your own system. If this is a work computer, it would make more sense to have a more restricted account own that directory, in case other people need the source you're downloading for any reason like using it or auditing it.
Bad to own a directory outside of your home directory?
1,669,279,247,000
I have files like these :

REPORT_100_COMPLETED.csv
REPORT_100_FAILED.csv
REPORT_101_COMPLETED.csv
REPORT_101_FAILED.csv
REPORT_102_COMPLETED.csv
REPORT_102_FAILED.csv

I want all of them to be put inside subfolders according to the related id :

100
 | REPORT_100_COMPLETED.csv
 | REPORT_100_FAILED.csv
101
 | REPORT_101_COMPLETED.csv
 | REPORT_101_FAILED.csv
102
 | REPORT_102_COMPLETED.csv
 | REPORT_102_FAILED.csv

and so on, anyone can help? Thank you in advance!
for i in REPORT_*_*.csv; do
    dir=$(cut -d'_' -f2 <<<"$i")
    mkdir -p "$dir" && mv "$i" "$dir"/
done
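The same idea can also be written without the cut subprocess, using shell parameter expansion and quoting so odd filenames stay safe; the REPORT_ prefix is assumed, as in the question, and the function name is made up:

```shell
#!/bin/sh
# Move REPORT_<id>_<status>.csv files into a subdirectory named <id>.
sort_reports() {
    cd "$1" || return 1
    for f in REPORT_*_*.csv; do
        [ -e "$f" ] || continue    # no matches: skip the literal pattern
        id=${f#REPORT_}            # strip the leading REPORT_
        id=${id%%_*}               # keep everything before the next _
        mkdir -p "$id" && mv "$f" "$id"/
    done
}
```

Run it as sort_reports /path/to/reports; the [ -e ] guard keeps the loop from acting on the unexpanded pattern when nothing matches.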
How can I move all files matching a pattern into a new folder?
1,402,555,469,000
The updatedb command, as I understand it, is basically the utility that keeps the mlocate.db database file updated. And it's a good idea to keep it updated regularly. For that matter, I've kept a daily cron to run the updatedb command. Now I'm looking under the hood, for the sake of understanding it better. When I cat /etc/updatedb.conf I see a few options: PURNE_BIND_MOUNTS PRUNENAMES PRUNESPATHS PRUNEFS These options, as I read the manpage and other sites, let updatedb know to skip scanning specific files or directories on the file system. So, here's my question. Why would we want to skip indexing anything at all? Obviously there must be good moments for it.
First there's an error in your updatedb.conf as it says PURNE_BIND_MOUNTS when it should be PRUNE_BIND_MOUNTS. Now to answer your question, there are various reasons why you might want to skip indexing specific directories: PRUNE_BIND_MOUNTS - prevents indexing of bind mounts. Bind mounts allow you to mount a specific folder or device on the filesystem more than once. There will never be any differences between the two, so most of the time there would be no point in indexing them twice. NFS/Remotes - you might not want to index remote filesystem mounts on a local filesystem, as that might be slow or not even needed. Temporary directories (/tmp) which often change or are updated. You might not want to index them either. And there might be cases where you have other specific directories you want to index for quick locating of files; you might even not want to index your system files at all and keep the database to specific/personal directories.
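For illustration, a possible /etc/updatedb.conf in mlocate syntax; the exact values here are only examples, not recommendations:

```
PRUNE_BIND_MOUNTS="yes"
PRUNEFS="nfs nfs4 cifs sshfs tmpfs proc sysfs"
PRUNENAMES=".git .bzr .hg .svn"
PRUNEPATHS="/tmp /var/tmp /var/spool /media /mnt"
```

Each variable is a space-separated list; see updatedb.conf(5) on your system for the authoritative syntax.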
updatedb for a beginner
1,402,555,469,000
I need to encrypt the whole disk on Debian 7.5 (it will work as a server), but I need to enter the encryption password via SSH. So I need to encrypt the whole disk except primary system features such as the SSH server, because for example I need to remotely restart this server. Do you know of any effective options?
First step would be to decide what needs to be encrypted and what not. There is no need to encrypt a standard Debian server release, it's not like it contains any secrets. Create at least two partitions, one for the normal stuff and one for the sensitive stuff. Then you install the complete server as normal, without any sensitive data (on the normal partition). Disable autostarting for all services that need the sensitive data. Set up the encryption stuff, see if manual mounting and manually starting the servers work. Finally, to reduce the work needed, create a script to automate that. For example name it /root/decrypt-and-start.sh:

#!/bin/sh
# mount the encrypted filesystem
# this will ask for a password
mount-encrypted-file-system
# start the services
service apache2 start
service foo start

You can now start this script with ssh root@server ./decrypt-and-start.sh; you will need to provide the root password (or use passwordless authentication) and the disk password.
Encrypt my disk but allowing to enter the password with SSH
1,402,555,469,000
This thread What do the numbers in a man page mean? answers the question of the significance of the numbers between parentheses within a man page. My question is related. I have a local installation of a package called ffmpeg. The build folder has the typical bin, lib, etc. and then the folder: man/man1/ with the following files: ffmpeg-bitstream-filters.1 ffmpeg-scaler.1 libavdevice.3 ffmpeg-codecs.1 ffmpeg-utils.1 libavfilter.3 ffmpeg-devices.1 ffmpeg.1 libavformat.3 ffmpeg-filters.1 ffplay.1 libavutil.3 ffmpeg-formats.1 ffprobe.1 libswresample.3 ffmpeg-protocols.1 ffserver.1 libswscale.3 ffmpeg-resampler.1 libavcodec.3 My questions are: Why is there a subfolder under man called man1? Why not just in man? And why the suffix 1? Which path should I add to MANPATH? The one pointing to man ? or man/man1? What do the suffices in the files above mean? Are they the same numbers within parentheses described in the thread I mentioned above?
The suffixes (such as 1) correspond to the numbers mentioned in "What do the numbers in a man page mean?". They represent sections of the manual. "Which path should I add to MANPATH? The one pointing to man?" Yes (i.e., not one of the inner man1, man2, etc. directories). The suffixes on the files have the same significance as the directory suffixes from #1. Notice man1 contains all .1 files, man2 all .2 files, etc.
Man folders and MANPATH
1,402,555,469,000
I am trying to find this path on FreeBSD but it is not working: /usr/local/lib/python2.7/dist-packages This same path is working on Ubuntu. Can someone please tell me where I can find the Python dist-packages? Actually, I am trying to find the Django folder.
Start your python and then type import sys print sys.path You are likely to have something ending in site-packages or dist-packages in there. Django probably has its own, additional, folder structure (I know that web2py does), so you might have more luck using a web page with that code and analysing the output from that.
Where are Python dist-packages stored in FreeBSD?
1,402,555,469,000
I wrote a package, and would like to use /var to persist some data. The data I'm storing would perhaps even be thought of as an addition for /var/db. The pattern I observe is that files in /var/db, and the surrounds, are owned by root. The primary (intended) use of the package filters cron jobs - meaning you would need permissions to edit the crontab. Should I presume a sudo install of the package? Should I have the package gracefully degrade to a /usr subdir, and if so then which one? If I 'opinionate' that any non-sudo install requires a configrc (with paths), where should the package look (presuming a shared-host environment) for that config file? Should I use /usr/lib as per the thoughts in this article? Incidentally, this package is a ruby gem, and you can find it here.
If that package is installed as root it has access to /var. If it's installed by a user (who can neither write to /var nor /usr) the default procedure is to set --prefix=$HOME/somedir in the configure script. Or you provide other means to set the directory to a location the user has write access to.
Do best-practices indicate that usage of /var should be restricted to sudoers
1,402,555,469,000
I have a set of libraries and some apps that depend on it. Some of these libraries' names might conflict with already installed libraries. The easiest way for me to deploy them would be Install the libraries in a fixed-path "/usr/local/[my-firm]/lib" Compile my apps with an rpath pointing to this path My apps' installer can tell if the libraries are installed by looking at something like "/usr/local/[my-firm]/libversion" What do you think about it? Is installing the libraries in a fixed path acceptable? Edit I should add that I wish to be able to ship my libraries and my apps independently.
If the libraries you install are specific for your application and may conflict with system libraries installed then I would recommend setting up a structure like this: /opt/<app>/<version>/lib or /opt/<app-libs>/<version>/lib This way you can deploy at will separately from others and not affect anything that someone else might require and you can force your application to look at those paths if you choose.
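A sketch of how a wrapper might launch the application against such a versioned tree; the paths, names, and helper are invented for illustration:

```shell
#!/bin/sh
# Launch a binary against its private, versioned library directory,
# prepending it to any LD_LIBRARY_PATH already in effect.
APP_ROOT=/opt/myapp/1.2.3    # hypothetical install prefix

app_lib_path() {
    printf '%s/lib%s\n' "$APP_ROOT" "${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
}

run_app() {
    LD_LIBRARY_PATH=$(app_lib_path) exec "$APP_ROOT/bin/myapp" "$@"
}
```

Alternatively, baking an rpath such as $ORIGIN/../lib into the binaries at link time avoids the wrapper entirely.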
Deploy libs in hard-coded dir
1,402,555,469,000
I've got a pogoplug running Arch Linux that I'm using for a NAS with rsync backing up my data over SSH. Right now, I'm the only one using it, but I'd like to add my roommate as well. She has her own external hard drive on which to store her backup data. Would it be strange to mount /home/my-user/ on the hard drive I currently have my data backing up to, and /home/roommate-user/ on her hard drive? Or is there a more sensible way to do it? What's the best practice here?
Yes, that's a perfectly reasonable way to do it. Having home directories on a separate partition from the OS is pretty common. Having home directories directly under /home is common on systems with a small number of users; systems in institutions with a large number of users often have subdirectories under /home corresponding to different departments of the institution, which may be mounted from different disk pools or different servers. On a system with one or a small number of main users, making these users' home directories mount points is reasonable; the only downside is that each of them will contain a lost+found directory. You may prefer to have /home/my-disk and /home/roommate-disk as mount points and /home/my-disk/my-user and /home/roommate-disk/roommate-user as home directories, but having /home/my-user and /home/roommate-user as both mount points and home directories is also fine.
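In /etc/fstab the arrangement could look something like this; the device names and filesystem type are examples, and mounting by UUID would be more robust against the drives being detected in a different order:

```
# device      mount point           type  options   dump  pass
/dev/sdb1     /home/my-user         ext4  defaults  0     2
/dev/sdc1     /home/roommate-user   ext4  defaults  0     2
```

With that in place each user's home directory is simply the root of their own drive.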
Linux multi-user system: Each user's home partition on its own Hard Drive
1,402,555,469,000
I have a default OpenBSD 5.1 install; enabling httpd is OK, I know how. QUESTION: How can I use the /dev/sd0a on /mnt/usbhdd type ffs (local, nodev) instead of the default "/var/www" directory? What are the most secure mount options (for a webserver that only serves static html/files)?
Either mount /dev/sd0a to /var/www or configure your web server to use /mnt/usbhdd instead of /var/www. – Marco Oct 29 '12 at 11:41
How to move the /var/www directory to a USB HDD?
1,402,555,469,000
I have 142 notes in Tomboy but I don't understand why is there such an amount of files and folders created inside the main folder of Tomboy. In there, there are 5 subfolders, named "0", "1"... "4" and in each one of them there are 100 subfolders named: "0", "1", ..., "99", inside folder "0" "100", "101", ..., "199" inside folder "1" and so on... Inside each subfolder there may be a file or two. In total, nautilus tells me there are about 1300 items inside the folder of Tomboy. Can someone please explain to me why is this so? My concern is that I sync these notes on Dropbox which, I think, might have a hard time to crawl all these folders for changes, eg. when starting Ubuntu. Correct me please if this isn't true.
That kind of multiple-level directory structure is a common solution to speed up file retrieval when you have to handle a large number of files. A lot of other applications use that kind of structure to keep some kind of file cache (e.g. Firefox, Squid). Instead of a single large directory with all the files, the application creates a structure of subdirectories and uses some rule to choose in which directory to put each file. In this way, it is easier and quicker to find the needed file. Let's do a simple example: if I want to keep a file for each one of my customers, I'll create a directory for each alphabet letter and then, in each one of these directories, a directory for each alphabet letter. Now I can put the "DimitrisTzortzis.info" file in the T/D directory. From this point, I start to make some suppositions based only on my experience, because I do not know the details of the Dropbox and Tomboy implementations. Usually, a large structure of nested subdirectories is easier to manage than a single large directory (at least, on unix filesystems). This should be true also for Dropbox, so you do not have to worry about this. That "strange" structure will help Dropbox to speed things up. Unluckily the Tomboy directory structure seems to grow when you create new notes and not to shrink when you delete them. This could explain why your structure is so big with so few notes in it.
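The mapping described in the question is cheap to compute; a sketch, assuming buckets of 100 as in the layout you observed (folder 1 holds items 100-199, and so on):

```shell
#!/bin/sh
# Map an item number to its two-level bucket path: top dir n/100, leaf n.
# E.g. item 142 lives under 1/142, item 57 under 0/57.
bucket_path() {
    n=$1
    echo "$((n / 100))/$n"
}
```

This is the usual trick: a fixed arithmetic rule means the application never has to search for where a file lives.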
Why are there so many files and folders within the folder of Tomboy?
1,402,555,469,000
I've been looking for where some files on my system have come from, such as (for sake of having an example, but question not specific to this) /etc/udev/hwdb.bin: $ pacman -Qo /etc/udev/hwdb.bin error: No package owns /usr/lib/udev/hwdb.bin Then searching, it seems clear it's compiled by systemd-hwdb, which is itself distributed with systemd (and included in its file listing): $ pacman -Qo "$(which systemd-hwdb)" /usr/bin/systemd-hwdb is owned by systemd 245.5-2 I've seen this with several different packages, and at first thought it's simply an omission, and they should be listed - but perhaps it's because they're files generated by an included executable, rather than being distributed with the package itself? Is that correct? So if a hypothetical package is packaged as a script that merely downloads and installs the 'real' package, the first's file listing would be nothing more than 'installer.sh'?
Package file listings for Arch Linux include files that are contained in the package, that are installed when the package is installed. They don't include files that the installed application could potentially create on your system. For instance, the package for the Evolution e-mail program would not list every e-mail that could be downloaded to your system, and a video game package would not list save game files that might be created by the user (both for obvious reasons - they can't be predicted). The file lists include files that are installed and managed by the package manager.
Are package file listings supposed to include only distributed files, or runtime generated files too?
1,402,555,469,000
I need to write a program that receives a block device as input, like /dev/sda1, and has to perform a set of operations depending on whether the filesystem inside is currently running or not. We'll assume the input will always have a correct Linux directory tree; the only thing I need to know is if there's a particular directory structure or file(s) that can reliably determine whether the system inside is running. I mean whether the filesystem contains the root of a system that is powered on. It should work for any filesystem or Linux kernel version. Thanks!
I’ve written a function that returns 1 if the argument is the root device, 0 if it is not, and a negative value for error:

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

static int root_check(const char *disk_dev)
{
    static const char root_dir[] = "/";
    struct stat root_statb;
    struct stat dev_statb;

    if (stat(root_dir, &root_statb) != 0) {
        perror(root_dir);
        return -1;
    }
    if (!S_ISDIR(root_statb.st_mode)) {
        fprintf(stderr, "Error: %s is not a directory!\n", root_dir);
        return -2;
    }
    if (root_statb.st_ino <= 0) {
        fprintf(stderr, "Warning: %s inode number is %ld; "
                        "unlikely to be valid.\n",
                        root_dir, (long)root_statb.st_ino);
    } else if (root_statb.st_ino > 2) {
        fprintf(stderr, "Warning: %s inode number is %ld; "
                        "probably not a root inode.\n",
                        root_dir, (long)root_statb.st_ino);
    }
    if (stat(disk_dev, &dev_statb) != 0) {
        perror(disk_dev);
        return -1;
    }
    if (S_ISBLK(dev_statb.st_mode))
        ;                               /* That's good. */
    else if (S_ISCHR(dev_statb.st_mode)) {
        fprintf(stderr, "Warning: %s is a character-special device; "
                        "might not be a disk.\n", disk_dev);
    } else {
        fprintf(stderr, "Warning: %s is not a device.\n", disk_dev);
        return(0);
    }
    if (dev_statb.st_rdev == root_statb.st_dev) {
        printf("It looks like %s is the root file system (%s).\n",
               disk_dev, root_dir);
        return(1);
    }
    // else
    printf("(It looks like %s is NOT the root file system.)\n", disk_dev);
    return(0);
}

The first two tests are basically sanity checks: if stat("/", …) fails or “/” is not a directory, your filesystem is broken.  The st_ino tests are something of a shot in the dark. AFAIK, inode numbers should never be negative or zero.  Historically (by which I mean 30 years ago), the root directory always had inode number 1.  This may still be true for a few flavors of *nix (anybody heard of “Minix”?), and it may be true for the special filesystems, like /proc, and for Windows (FAT) filesystems, but most contemporary Unix and Unix-like systems seem to use inode number 1 for tracking bad blocks, pushing the root up to inode number 2.
S_ISBLK is true for “block devices”, like /dev/sda1, where the output from ls -l begins with “b”.  Likewise, S_ISCHR is true for “character devices”, where the output from ls -l begins with “c”.  (You may occasionally see disk names like /dev/rsda1; the “r” stands for “raw”.  Raw disk devices are sometimes used for fsck and backup, but not mounting.)  Every inode has a st_dev, which says what filesystem that inode is on.  Inodes for devices also have st_rdev fields, which say what device they are.  (The two comma-separated numbers you see in place of the file size when you ls -l a device are the major and minor device numbers packed into st_rdev.) So, the trick is to see whether the st_rdev of the disk device matches the st_dev of the root directory; i.e., is the specified device the one that “/” is on?
How to determine whether a linux filesystem belongs to a running system or not
1,402,555,469,000
I am new to this, so sorry if its obvious. I am running a Debian server and installing the likes of UWSGI, NGinx etc on there. The configurations keep talking about pointing to "sockets". In the build options I seem to be able to specify where the sockets for each program go. By default it looks like most of them go in /tmp (not all of them). Is this a good place for them to go? I'm trying to keep things as organized as possible but just bunging them in /tmp doesn't seem like the best option.
I would do a mkdir /var/run/nginx (or whatever the program name is) and locate them there. You can then restrict access to the sockets if needed by changing the ownership of that directory. It's probably not terribly needed for security reasons unless you are a bit paranoid or let people you don't know very well log in via ssh.
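A sketch of creating such a per-service socket directory with restricted access; the service name, user, and group in the example line are invented:

```shell
#!/bin/sh
# Create a private runtime directory for a service's Unix sockets.
# Usage: make_socket_dir <directory> <user> <group>
make_socket_dir() {
    mkdir -p "$1"
    chown "$2:$3" "$1"
    chmod 770 "$1"    # only the service user and its group may reach the sockets
}
# e.g. make_socket_dir /var/run/nginx www-data www-data
```

The programs would then point their socket paths (in the uWSGI/nginx configs) at files inside that directory instead of /tmp.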
Where to locate "sockets"
1,402,555,469,000
Edit: places where I said $HOME/.bin I should have written $HOME/bin or any equivalent is fine. I.e., any user-writable directory that is in the user's PATH. So I have a bash script which I am distributing as a client for my API. The current version installs like this: curl -s http://api.blah.com/install | sudo sh. I may try to deal with six different package management systems so that they can just apt-get or brew install at some point, but for now I am going with the one-liner since I want this solution to work for multiple systems. However, apparently there are quite a few users on systems like cygwin or even Macs who don't have sudo at all or don't have it set up. The scenario is a user signs up for my API, enters their credit card information. I have a bash client for the API that doubles as a reference implementation and also a way to try out the API or deploy VMs and Docker containers using the command line. I want to create an easy way for users to install the API client. For example, there used to be a one-line install for npm: curl http://npmjs.org/install.sh | sh. Also homebrew has a one-line installer: ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)" (see http://brew.sh). My install script just downloads the API client script and puts it in /usr/bin and makes it executable. But I am thinking, based on the issues with sudo and the fact that this doesn't really need to be installed globally, I would like to just install it into the user's $HOME/.bin or $HOME/local/bin (create it if there is no existing equivalent). This is my current install script:

#!/bin/bash
BASE="https://api.blah.com"
sudo bash -c "curl -s $BASE/mycmd > /usr/bin/mycmd"
sudo chmod uga+x /usr/bin/mycmd

First wrinkle that occurs to me is that many users are now in zsh. So if I add or modify a line in ~/.bashrc that updates the PATH to include $HOME/.bin, that won't work for those systems.
So to restate the question, what code should I use to download and install my script into a user-writable directory in that user's PATH ($HOME/local/bin, or whatever is available in PATH by default) directory and make sure that is in their PATH, with the requirement (or at least strong desire) that this will work on almost everyone's system (if they have something like a Unix prompt) (without requiring sudo)? Since some people consider modifying the PATH to be evil, I would like to install in the default $HOME/whatever/bin for that system, that is already in the user's PATH, if possible, rather than automatically modifying their PATH to include some particular directory. Thanks very much!
I ended up giving them two options on my "Getting Started" page. I explain briefly what the installer script does, such as looking for ~/.local/bin or the like and then potentially adding that to PATH in ~/.zshrc or ~/.bashrc. I also give them the option of manually installing, instead of using the script, with simple instructions to do that. To run the automatic installer the user would paste and execute a command like this:

curl -s https://thesite.com/installmycmd > /tmp/inst; source /tmp/inst

This is the installmycmd script:

#!/bin/bash
BASE="https://thesite.com"

declare -a bindirs
bindirs=($HOME/bin $HOME/.local/bin $HOME/.bin)
founddir="false"

findprofile() {
    profiles=($HOME/.zshrc $HOME/.bashrc $HOME/.bash_login $HOME/.login $HOME/.profile)
    for prof in "${profiles[@]}"; do
        if [ -f "$prof" ]; then
            echo "$prof"
            return
        fi
    done
    touch $HOME/.profile
    echo "$HOME/.profile"
}

for bindir in "${bindirs[@]}"; do
    if [ -d "$bindir" ]; then
        founddir=true
        echo "You have a user bin dir here $bindir."
        whichprofile=$(findprofile)
        pathline=$(grep -E '^(export )?PATH=' "$whichprofile")
        if [[ ! $pathline == *$bindir* ]]; then
            echo "Appending $bindir to PATH in $whichprofile"
            echo -e "\nexport PATH=\$PATH:$bindir" >> "$whichprofile"
            NEWPATH=$PATH:$bindir
            export NEWPATH
        else
            echo "That is in your PATH in $whichprofile"
        fi
        break
    fi
done

if [ -n "$NEWPATH" ]; then
    echo "Exported PATH: $NEWPATH"
    export PATH=$NEWPATH
fi

if [[ "$founddir" == "false" ]]; then
    echo "Could not find ~/.bin or ~/.local/bin or ~/bin."
    echo "Creating ~/.local/bin and adding to PATH"
    mkdir -p $HOME/.local/bin
    bindir=$HOME/.local/bin
    whichprofile=$(findprofile)
    echo "Appending PATH edit to $whichprofile"
    echo -e "\nexport PATH=\$PATH:\$HOME/.local/bin" >> "$whichprofile"
    export PATH=$PATH:$HOME/.local/bin
fi

bash -c "curl -s $BASE/JSON.sh > $bindir/JSON.sh"
bash -c "curl -s $BASE/mycmd > $bindir/mycmd"
chmod ug+x $bindir/mycmd
chmod ug+x $bindir/JSON.sh
What code should I use to download and install my script into a user-writable directory in the user's PATH (without requiring sudo)?
1,402,555,469,000
I currently have this which works but isn't as tidy as I'd like; mkdir -p backup/{daily/{directories,databases,logs},weekly/{directories,databases,logs},monthly/{directories,databases,logs}} Is it possible to nest the directories,databases,logs part so as to not have to have it 3 times in the command?
No need for nesting: mkdir -p backup/{daily,weekly,monthly}/{directories,databases,logs}
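You can preview the expansion with echo before creating anything. Brace expansion is a bash feature, so this sketch invokes bash explicitly in case you're at a plain POSIX sh prompt:

```shell
cd "$(mktemp -d)"   # scratch directory for the demo
# Preview what the braces expand to (nothing is created yet):
bash -c 'echo backup/{daily,weekly,monthly}/{directories,databases,logs}'
# Now create all nine directories in one shot:
bash -c 'mkdir -p backup/{daily,weekly,monthly}/{directories,databases,logs}'
ls -d backup/*/*
```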
Create nested directories the short way
1,402,555,469,000
I was working on a project to control my digital camera using Ubuntu and gphoto2. At one point I noticed a new "~" directory in my project folder, /home/greg/project/~. When I enter this new "~" directory, it takes me backwards to my home directory /home/greg/ (as you might expect). I don't know exactly how this happened, but my suspicion is that when I ran gphoto2 on the command line and specified an output file, I typed "~/filename.jpg" expecting the file would show up in my home directory, but instead bash put a new "~" directory in the working directory. I'm wasn't aware something like that could be done (if that's what happened). More important than how it got there is how I should get rid of it. I can't remove the directory, because as far as I can tell it is my home directory. It doesn't appear to be a sym-link, (at least not according to ls -l), but I'm not sure what that would even really mean.
You need to quote it to protect it from shell expansion. ls ~ # list your home directory ls "~" # list the directory named ~ ls \~ # list the directory named ~ Same thing with rm, rmdir, etc. The shell changes ~ to /home/greg before passing it to the commands, unless you quote or escape it. You can see this with echo: anthony@Zia:~$ echo ~ /home/anthony anthony@Zia:~$ echo \~ ~ You'll want to be careful, because rm -Rf ~ would be a disaster. I suggest if at all in doubt, first rename it (mv -i \~ newname) then you can examine newname to make sure you want to delete it, and then delete it.
Where did this copy of my home directory come from?
1,402,555,469,000
I have been using unix systems the majority of my life. I often find myself teaching others about them. I get a lot of questions like "what is the /etc folder for?" from students, and sometimes I have the same questions myself. I know that all of the information is available with a simple google search, but I was wondering if there are any tools or solutions that are able to add descriptions to folders (and/or files) that could easily be viewed from the command line? This could be basically an option to ls or a program that does something similar. I would like there to be something like this:

$ ls-alt --show-descriptions /
...
/etc – Configuration Files
/opt – Optional Software
/bin - Binaries
/sbin – System Binaries
/tmp – Temporary Files
...

Could even take this a step further and have a verbose descriptions option:

$ ls-alt --show-descriptions-verbose /
...
/etc – The /etc directory contains the core configuration files of the system, used primarily by the administrator and services, such as the password file and networking files.
/opt – Traditionally, the /opt directory is used for installing/storing the files of third-party applications that are not available from the distribution’s repository.
/bin - The ‘/bin’ directory contains the executable files of many basic shell commands like ls, cp, cd etc. Mostly the programs are in binary format here and accessible by all the users in the Linux system.
/sbin – This is similar to the /bin directory. The only difference is that it contains the binaries that can only be run by root or a sudo user. You can think of the ‘s’ in ‘sbin’ as super or sudo.
/tmp – This directory holds temporary files. Many applications use this directory to store temporary files. /tmp directories are deleted when your system restarts.
...
I know that there is no default way to do this with ls, and to add such a feature would probably require a lot of re-writing of kernel code to account for the additional data being stored, so I'm not asking how to do this natively necessarily (unless there is an easy way I am overlooking). I am more asking if there is a tool that already exists for educational purposes that enables this sort of functionality? I guess it would take output from ls and then do a lookup to match directory names to descriptions it has already saved somewhere, but I digress.
tree --info will do what you want. You can create .info text files that contain your remarks about certain files and folders or groups of files and folders (using wildcards). tree --info will then show them in the directory listing. Multi-line comments are possible. There is also a global info file in /usr/share/finfo/global_info that contains explanations for the Linux file system (see here). This file also gives you an easy look at how the .info file syntax works. The homepage of the software is https://fossies.org/linux/misc/tree-2.1.1.tgz/.
Is there a way to add a "description" field / meta-data that could viewed in ls output (or an alternative to ls)?
1,402,555,469,000
How would I create a file named . (dot) and read or write data to it, given that . also refers to the current directory? I know this is possible because I have a directory structure I'm looking at with ls --all -l that shows a file named . owned by a different user than the user that owns the . and .. directories.
I'm afraid it only looks like you have a file called '.'. What is very likely happening is that you have a file whose name starts with a dot but is then followed by a whitespace or other special character. To demonstrate how you'd figure this out:

$ cd "$(mktemp --directory)"
$ touch '. '
$ for path in .*
> do
>     printf '%s' "$path" | xxd
> done
00000000: 2e        .
00000000: 2e20      . 
00000000: 2e2e      ..

The dotfile (the second entry above) shows up as a dot (0x2e) followed by a space character (0x20).
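Removing the troublesome file is then just a matter of quoting the name so the shell doesn't treat it specially; a quick sketch:

```shell
dir=$(mktemp -d) && cd "$dir"
touch '. '        # a file literally named dot-space
ls -A             # the listing shows what looks like a lone dot
rm -- '. '        # quoting keeps the shell from expanding or mangling it
ls -A             # directory is empty again
```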
Creating a file named '.' and how to read and write data to it?
1,402,555,469,000
If I run df -h on an Oracle Linux 5 server, I get the output below:

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 12G 8.8G 1.8G 84% /
/dev/sda4 3.8G 592M 3.0G 17% /home
/dev/sda1 99M 70M 24M 75% /boot
tmpfs 942M 0 942M 0% /dev/shm
/dev/sdc1 51G 1.8G 46G 4% /u000

I installed MySQL 5.1.73-community-log on this server, which is running. If I run the command below:

mysql> show variables like '%dir%';
+-----------------------------------------+----------------------------+
| Variable_name                           | Value                      |
+-----------------------------------------+----------------------------+
| basedir                                 | /                          |
| binlog_direct_non_transactional_updates | OFF                        |
| character_sets_dir                      | /usr/share/mysql/charsets/ |
| datadir                                 | /var/lib/mysql/            |
| innodb_data_home_dir                    |                            |
| innodb_log_group_home_dir               | ./                         |
| innodb_max_dirty_pages_pct              | 90                         |
| plugin_dir                              | /usr/lib64/mysql/plugin    |
| slave_load_tmpdir                       | /tmp                       |
| tmpdir                                  | /tmp                       |
+-----------------------------------------+----------------------------+
10 rows in set (0.00 sec)

Please advise on:

how can I know in which partition MySQL has been installed? Which partition will be used by MySQL to store data?
What is /dev/sdc1 and how can I use the available space (46G)?
What is tmpfs?
What is tmpfs? Tmpfs is a file system which keeps all files in virtual memory. Read More /dev/sdc1 is just another file system that you have created and mounted in your system. The mount point is /u000. Read more on mount points How can I know in which partition MySQL has been installed? Which partition will be used by MySQL to store data? Check answer here Quoting from answer in the link above: mysql -uUSER -p -e 'SHOW VARIABLES WHERE Variable_Name LIKE "%dir"' basedir gives the installation directory. datadir gives the directory where the data is stored. Refer here for a detailed explanation of each dir --basedir=path The path to the MySQL installation directory. --datadir=path The path to the MySQL data directory. To check which filesystem (mount point) a directory belongs to: For example, if I want to find out which file system the /home directory belongs to, df /home output: /home (/dev/sda4 ): 5895840 blocks 92467 i-nodes where /dev/sda4 is the file system where /home resides Refer here to change the mount point of MySQL.
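To tie the two outputs together: pointing df at a path (rather than a device) reports the file system that path lives on, so pointing it at the datadir shows which partition holds the MySQL data. A sketch, with /var/lib standing in for the poster's /var/lib/mysql:

```shell
# Which device and mount point hold the MySQL data directory?
datadir=${datadir:-/var/lib}   # on the poster's server this would be /var/lib/mysql
df -P "$datadir" | awk 'NR==2 {print "device:", $1, " mounted on:", $6}'
```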
How to know in which partition my application has been installed?
1,402,555,469,000
I am very new to Linux (Ubuntu 13.10) and having difficulty understanding the file system, particularly with regards to installing applications. In Windows, when I installed an application, it put the contents of the application in a folder in Program Files, by default. I knew exactly where to look if I wanted to browse all my applications, for example to see if I had already installed something. However, I have found that with Linux, there is much less consistency. For example, when I installed the Komodo code editor, by default it created a folder in /home. Then, when I installed the TexLive Latex editor, by default it created a folder in /usr/local. From searching around, I have found that most applications are supposed to be installed in /bin or /usr/bin. Why are there so many different default places to install applications? Why not the consistency found in Windows? On a slightly different note, if I were to move my Komodo folder from /home to /bin, for example, will doing a simple cut and paste be acceptable, or will there be references to the original folder in /home that will now be invalid? I have already added the Komodo bin folder to $PATH, so I already know about that one, but are there any other such references?
It does actually tend to be consistent. The standard is the FHS specification and while it is admittedly not always followed it mostly is: /bin : Essential user command binaries (for use by all users) /boot : Static files of the boot loader /dev : Device files /etc : Host-specific system configuration /home : User home directories (optional) /lib : Essential shared libraries and kernel modules /media : Mount point for removeable media /mnt : Mount point for a temporarily mounted filesystem /opt : Add-on application software packages /root : Home directory for the root user (optional) /sbin : System binaries /srv : Data for services provided by this system /tmp : Temporary files Then, you also have /usr/local : The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr. The approach is just different is all. While Windows stores files by source (all files installed by a program are placed in the same folder), *nix systems install by type. So, the manual page will be in /usr/man or /usr/local/man, the executables (.exe in Windows) in /usr/bin or /usr/local/bin, the libraries (.dll in WIndows) in /usr/lib or /usr/local/lib etc. The good thing is that you don't care, that's all controlled by the package manager (dpkg in Debian based systems like Ubuntu). 
So, to see where a particular package has installed its files, you can use this command (using the package xterm as an example) : $ dpkg-query -L xterm /usr /usr/share /usr/share/menu /usr/share/menu/xterm /usr/share/doc-base /usr/share/doc-base/xterm-faq /usr/share/doc-base/xterm-ctlseqs /usr/share/icons /usr/share/icons/hicolor /usr/share/icons/hicolor/scalable /usr/share/icons/hicolor/scalable/apps /usr/share/icons/hicolor/scalable/apps/xterm-color.svg /usr/share/icons/hicolor/48x48 /usr/share/icons/hicolor/48x48/apps /usr/share/icons/hicolor/48x48/apps/xterm-color.png /usr/share/applications /usr/share/applications/debian-xterm.desktop /usr/share/applications/debian-uxterm.desktop /usr/share/pixmaps /usr/share/pixmaps/filled-xterm_32x32.xpm /usr/share/pixmaps/mini.xterm_32x32.xpm /usr/share/pixmaps/xterm-color_32x32.xpm /usr/share/pixmaps/xterm_32x32.xpm /usr/share/pixmaps/filled-xterm_48x48.xpm /usr/share/pixmaps/mini.xterm_48x48.xpm /usr/share/pixmaps/xterm-color_48x48.xpm /usr/share/pixmaps/xterm_48x48.xpm /usr/share/man /usr/share/man/man1 /usr/share/man/man1/xterm.1.gz /usr/share/man/man1/uxterm.1.gz /usr/share/man/man1/resize.1.gz /usr/share/man/man1/lxterm.1.gz /usr/share/man/man1/koi8rxterm.1.gz /usr/share/doc /usr/share/doc/xterm /usr/share/doc/xterm/xterm.terminfo.gz /usr/share/doc/xterm/xterm.termcap.gz /usr/share/doc/xterm/README.i18n.gz /usr/share/doc/xterm/ctlseqs.ms.gz /usr/share/doc/xterm/ctlseqs.txt.gz /usr/share/doc/xterm/xterm.faq.gz /usr/share/doc/xterm/changelog.Debian.gz /usr/share/doc/xterm/NEWS.Debian.gz /usr/share/doc/xterm/copyright /usr/share/doc/xterm/README.Debian /usr/share/doc/xterm/xterm.faq.html /usr/share/doc/xterm/xterm.log.html /usr/bin /usr/bin/resize /usr/bin/xterm /usr/bin/uxterm /usr/bin/lxterm /usr/bin/koi8rxterm /etc /etc/X11 /etc/X11/app-defaults /etc/X11/app-defaults/XTerm-color /etc/X11/app-defaults/XTerm /etc/X11/app-defaults/UXTerm-color /etc/X11/app-defaults/UXTerm /etc/X11/app-defaults/KOI8RXTerm-color 
/etc/X11/app-defaults/KOI8RXTerm

So, while it is easy enough to see where everything is installed, you rarely need to do so. To remove a package, just use apt:

sudo apt-get remove xterm

You can safely let the system worry about where everything is installed. Unlike under Windows, you don't need a specific deinstaller to remove each program; the whole thing is managed centrally by the package manager and is actually much more transparent to the user.
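The reverse lookup also exists: given a file, ask the package manager which package owns it (dpkg -S on Debian/Ubuntu; rpm -qf is the rough Red Hat equivalent). A sketch with a made-up helper name that degrades gracefully where dpkg is absent:

```shell
# Print the package that owns a file, or a note if nothing does.
owner_of() {
    if command -v dpkg >/dev/null 2>&1; then
        dpkg -S "$1" 2>/dev/null || echo "no package owns $1"
    else
        echo "unknown ($1: no dpkg on this system)"
    fi
}

owner_of /usr/bin/dpkg
```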
Consistency in default installation directories
1,402,555,469,000
I'm trying to locate where some files are stored and I can easily browse to them via ssh by going to "cd ~/foldername", however, I have no idea what directory "~/" actually is. When I browse around folders via WinSCP (yes, I'm a Windows admin), I can't seem to locate this folder at all. Note: I'm using Amazon Linux on EC2.
The tilde character is shorthand for the home directory of the current logged-in user. If you are logged in as jason, then ~ is likely /home/jason/. More generally, ~username expands to that user's home directory, as given in /etc/passwd. A bare ~ is also the same as the $HOME environment variable. The expansion from ~ to the home directory is done by the shell.
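A quick demonstration that the expansion is the shell's doing, not the command's:

```shell
echo ~          # the shell substitutes your home directory before echo runs
echo "$HOME"    # the same value, taken from the environment
echo '~'        # quoted: echo receives (and prints) a literal ~
```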
What is the "~/" directory?
1,402,555,469,000
(See update at the foot of the question). This is a followup question to "Make directory copies using find". That question involved manipulating a bunch of directories. This got too complicated to handle in a single command, so I decided to go with an approach which saved the list of directories to a bash array, despite reservations about portability. It seems that the POSIX shell standard (the Unix shell standard, as I understand it) does not have arrays. The makefile I'm using appears below. This works except for the last step. A summary of what I'm trying to do follows. As discussed in the earlier question, I want to loop over the directory x86-headers, and collect into a bash array a list of its top level subdirectories that contain the file C/populate.sh (but may also contain other files). In my test setup, for example, there is only one directory in x86-headers which contains the file libc/C/populate.sh, namely libc. I then perform some operations on these subdirectories. The most important of these is that I make a temporary copy of each directory which looks like libc.tmp_g4zlb. Namely dirname followed by 'tmp_' followed by a 5 digit random string. So, some questions:

1) As discussed in the earlier question, I'm looping over the directory 'x86-headers'. Here I am still using find. @Gilles indicated in his answer that this was not ideal. He might be right. Problems with find here:

a) The values returned look like ./libc. I don't want a leading ./.

b) The find command I'm using lists all directories. I only want to consider those that contain a file with the relative path of C/populate.sh. The approach Gilles was using might be better, but I don't understand it. See the gilles target below. I'd like to get the list of the directories and save them to an array.
2) The bit that I'm having problems with is the last step, namely echo "(progn (require 'parse-ffi) (ccl::parse-standard-ffi-files :$$i'.tmp_'$(RND)))" | \ /usr/lib/ccl-bootstrap/lx86cl -I /usr/lib/ccl-bootstrap/lx86cl.image; done \ The relevant bit is that I'm trying to pass the temporary value libc.tmp_g4zlb to ccl, which is a Common Lisp compiler. Without the substitution, it would look like echo "(progn (require 'parse-ffi) (ccl::parse-standard-ffi-files :libc.tmp_g4zlb))" | \ /usr/lib/ccl-bootstrap/lx86cl -I /usr/lib/ccl-bootstrap/lx86cl.image; done \ This works. The version using $$i above doesn't. The problem seems to be the leading ./. Without that, it should work. For the record, the error I get is ? > Error: Too many arguments in call to #<Compiled-function CCL::PARSE-STANDARD-FFI-FILES #x488B8406>: > 3 arguments provided, at most 2 accepted. > While executing: CCL::PARSE-STANDARD-FFI-FILES, in process listener(1). This is, however, not the error that I get when passing ./libc.tmp_g4zlb to the compiler. That error looks like > Error: Filename "/home/faheem/test/foo/x86-headers/.\\/libc.tmp_g4zlb/C/**/" contains illegal character #\/ > While executing: CCL::%SPLIT-DIR, in process listener(1). So it is possible there is something else going on. 3) Does the overall approach look reasonable? Please feel free to suggest possible improvements, even if it involves a completely different strategy. #!/usr/bin/make -f # -*- makefile -*- export SHELL=/bin/bash export RND:=$(shell tr -cd a-z0-9 < /dev/urandom | head -c5) export CCL_DEFAULT_DIRECTORY=$(CURDIR) clean: rm -rf x86-headers/libc.tmp* foo: clean PATH=$$PATH:$(CURDIR)/ffigen4/bin; \ echo $$PATH; \ export CURDIR=$(CURDIR); \ echo $$CURDIR; \ array=( $$(cd x86-headers && find . 
-mindepth 1 -maxdepth 1 -type d) ); \ cd x86-headers && \ for i in "$${array[@]}"; do \ echo $$i; done; \ for i in "$${array[@]}"; do \ mkdir -p $$i."tmp_"$(RND)/C; done; \ for i in "$${array[@]}"; do \ cp -p $$i/C/populate.sh $$i".tmp_"$(RND)/C; done; \ for i in "$${array[@]}"; do \ cd $$i".tmp_"$(RND)/C; ./populate.sh; done; \ for i in "$${array[@]}"; do \ echo $$i'.tmp_'$(RND); done; \ for i in "$${array[@]}"; do \ echo "(progn (require 'parse-ffi) (ccl::parse-standard-ffi-files :$$i'.tmp_'$(RND)))" | \ /usr/lib/ccl-bootstrap/lx86cl -I /usr/lib/ccl-bootstrap/lx86cl.image; done; \ gilles: cd x86-headers; for x in */C/populate.sh; do \ echo -- "$${x%%/*}$$suffix"; done; \ UPDATE: It is possible that the question (or questions) got lost in all the details. So, let me try to simplify things. In his answer, Gilles wrote for x in */C/populate.sh; do mkdir -- "${x%%/*}$suffix" mkdir -- "${x%%/*}$suffix/C" cp -p -- "$x" "./${x%%/*}$suffix/C" done As I commented on his question, x here matches patterns of the form */C/populate.sh. Also, ${x%%/*} matches the first part of the string, namely the top level directory name. Now, something like for x in */C/populate.sh; do myarr[] = "${x%%/*}" done would create an array containing a list of top level directories, which is what I want. However, I don't know what syntax to use. I need to use a counter which runs over the loop, like i=0, 1,... to index myarr on the LHS. If I had a working piece of code like this, it would go some way towards solving my issue.
If you want to append things to an array in a loop, you can use something like:

for x in */C/populate.sh; do
    myarr=("${myarr[@]}" "${x%%/*}")
done

See the bash arrays documentation for lots of stuff you can do with them. Alternatively, you can use += to append to an array (see here) like this:

myarr+=("${x%%/*}")

A few comments:

As I commented on his question, x here matches patterns of the form */C/populate.sh.

That's not how I'd have explained it. */foo/bar is a glob pattern. It is expanded by the shell to a list of all files/directories that match this pattern. (Try echo */C/populate.sh, you'll see all the matches printed.) The for loop iterates over this set of matches, using $x as the loop variable.

Also, ${x%%/*} matches the first part of the string, namely the top level directory name.

${x%%/*} doesn't "match" anything. It's a string manipulation function that operates on $x. From the docs, ${var%%string} removes the longest match of string from the end of $var. In this case, it removes everything from the first / onward, and the expansion is what remains. So to break the above three lines of code down, what happens is:

the shell generates a list of items (files or directories) that match the glob */C/populate.sh.
for each of those, the loop body is executed with $x set to that item
${myarr[@]} expands to the list of each item in the array
${x%%/*} expands to $x minus everything from the first / onward
myarr is then reconstructed with its old contents plus the new, stripped-down item.
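Putting the += form to work on the question's exact pattern (run under bash, since POSIX sh has no arrays; the directory names here are made up for the demo):

```shell
bash <<'EOF'
myarr=()
for x in libc/C/populate.sh libm/C/populate.sh; do
    myarr+=("${x%%/*}")          # keep only the top-level directory name
done
printf '%s\n' "${myarr[@]}"      # libc, then libm
echo "count: ${#myarr[@]}"       # count: 2
EOF
```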
Looping over directory entries in bash and saving to an array
1,402,555,469,000
My problem is that my /var partition is over 80% full (total size: 4.9G). I was reviewing the files in this partition and I saw a directory using 3.4G (/var/lib/pengine). I dug into that pengine directory and saw it is full of files with the .bz2 extension. I want to move these files but I would like to know whether it is risky or not.
bz2 is a type of data compression; it doesn't tell you anything about the purpose of the files. Pengine (whatever that is, a game?) probably needs them. If the files are using up most of the space on /var you could consider moving them to a partition with more space, e.g. /home

# umask 22
# mkdir /home/var_lib_overflow
# mv /var/lib/pengine /home/var_lib_overflow/
# ln -s /home/var_lib_overflow/pengine /var/lib/

FHS suggests they could be "crash recovery files" from an editor, in which case they should go away by themselves.
Are .bz2 files inside /var/lib/pengine safe to delete?
1,402,555,469,000
At a low level, /usr/share/geany.glade does configure some important aspects of Geany which are not controlled by geany.conf (e.g., visibility of the menubar). Copying a modified geany.glade file into ~/.local/share/geany/ did not make any change. So, how can this program be altered without root privileges? What is the local directory where this file should be placed, such that it overrides the content of /usr/share/geany.glade, much in the same way as .conf files can be stored in .config/share?
geany.glade defines the user interface, and isn’t user-overridable (in the same way that the application’s code isn’t user-overridable). If you want to tweak the interface without being root, you’ll need to install your own copy of Geany in your home directory and modify it there.
What is the per-user correlate of /usr/share?
1,402,555,469,000
Asking this question on mpv player and dvds, I stumbled into a more generic question: is it generally possible to specify a path in which one of the directory names is variable? Let's say that I want to execute a file with a command. The executable is in /dir1/dir2/dir3/, but the name of dir2 is variable, although it will always contain dir3 (similar to a DVD's VIDEO_TS, whose path always looks like /media/username/NAME-OF-DVD/VIDEO_TS/ while NAME-OF-DVD varies). If I want to execute that file with a command I have to specify the path. Can such a command be used (with a path in which one directory name may be "generic")?
Bash can make use of globbing. Globbing allows you to specify a pattern that will match multiple values. It works similarly to regex, but it is important to note they are not the same.

*(pattern) matches a pattern 0 or more times
?(pattern) matches a pattern 0 or 1 times
+(pattern) matches a pattern 1 or more times
[ ] can match a value contained within, including [a-z] for a through z
( | ) can match values on either side of the pipe

(The parenthesized forms are bash's extended globs; enable them with shopt -s extglob.) A bare * with no parenthesized pattern acts as a wildcard, matching any string. So a path like /dir1/dir2/dir3/ can be represented as:

/dir1/*/dir3/
/dir1/dir*/dir3/
/dir1/*(dir2|otherdir)/dir3/
/dir1/dir*[1-99]/dir3/

For more info check out this link: http://mywiki.wooledge.org/glob or this one: http://www.linuxjournal.com/content/bash-extended-globbing
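A runnable sketch of the *( | ) form; bash only parses these patterns after shopt -s extglob is enabled, which is why the option comes first:

```shell
bash <<'EOF'
shopt -s extglob              # extended globs are off by default in bash
cd "$(mktemp -d)"
mkdir -p dir2/dir3 otherdir/dir3 unrelated/dir3
ls -d *(dir2|otherdir)/dir3   # matches dir2/dir3 and otherdir/dir3 only
EOF
```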
Is it possible to specify a path in which a directory name is variable?
1,402,555,469,000
I am building a small embedded system for a x86_64 target, with a Linux kernel and an initramfs which contains a dynamically linked busybox. I tried to install the needed libraries (libm.so.6, libc.so.6) into /lib and the linker ld-linux-x86-64.so.2 into /lib64 (because the busybox binary requests it at this place).

lib
├── libc.so.6
└── libm.so.6
lib64
└── ld-linux-x86-64.so.2

but it failed to link:

/sbin/init: error while loading shared libraries: libm.so.6: cannot open shared object file: No such file or directory

I managed to make it work by moving everything into /lib64:

lib64
├── ld-linux-x86-64.so.2
├── libc.so.6
└── libm.so.6

or by creating a symlink between /lib and /lib64:

lib
├── ld-linux-x86-64.so.2
├── libc.so.6
└── libm.so.6
lib64 -> lib

But I still don't understand why the first configuration does not work. Why is the linker not able to find libraries in /lib?

EDIT: To make it work properly (thanks to yaegashi):

Go into your initramfs root directory.
Create a file /etc/ld.so.conf with the library path you need.
echo /lib > etc/ld.so.conf
Generate your ld.so.cache file.
ldconfig -r .
Regenerate your initramfs.

It's done
Read the manual of ld.so (the dynamic linker/loader). The actual search paths are mainly determined by /etc/ld.so.cache (which is compiled from /etc/ld.so.conf by ldconfig) or built-in paths in your ld.so binary. So check your platform configurations and how you built your glibc. You can watch detailed activities of ld.so by running any binary with LD_DEBUG=libs set in the environment variable. $ LD_DEBUG=libs ls 17441: find library=libselinux.so.1 [0]; searching 17441: search cache=/etc/ld.so.cache 17441: trying file=/lib/x86_64-linux-gnu/libselinux.so.1 17441: 17441: find library=libacl.so.1 [0]; searching 17441: search cache=/etc/ld.so.cache 17441: trying file=/lib/x86_64-linux-gnu/libacl.so.1 17441: 17441: find library=libc.so.6 [0]; searching 17441: search cache=/etc/ld.so.cache 17441: trying file=/lib/x86_64-linux-gnu/libc.so.6 ...
The linker does not find libraries in /lib
1,422,471,670,000
Using Mint Linux 19 here. I have a script file called test.sh. This script file is located in a path such as /home/shyam/Mi A1/tissot, and I use it to flash an Android system onto my phone. The script contains command lines like:

fastboot $* flash modem_a `dirname $0`/images/modem.img
if [ $? -ne 0 ] ; then echo "Flash modem_a error"; exit 1; fi
fastboot $* flash modem_b `dirname $0`/images/modem.img
if [ $? -ne 0 ] ; then echo "Flash modem_b error"; exit 1; fi
.....

The images folder mentioned in these commands is a separate folder inside the same tissot folder and holds files such as the modem.img used in the snippet above.
adb and fastboot run fine with the Android device.
test.sh is executable, as explained in Isiah's answer in How do I run .sh files.
I tried to run the script by double clicking on the file and selecting "run in terminal", but it did not work: nothing was flashed onto my phone and the terminal window vanished in a jiffy.

Question

How do I run this file and see output similar to "Target reported size...OKAY"? I am sure that the path in the test file does not match, but I don't know how to fix that, nor how to see the output for each line as described above.
Open the terminal application (you can find it in the application menu) and run:

    cd /home/shyam/Mi\ A1/tissot
    chmod u+x test.sh
    ./test.sh

Each line is a command. Keep in mind that running applications found on the internet can be dangerous. Before doing this you should learn how to use the Linux shell (this is a good starting point: https://en.wikipedia.org/wiki/Bash_(Unix_shell) ) and understand what the program does.
How to run a script file
1,422,471,670,000
I have a bunch of directories that contain MP3 files inside. These directories contain no other directories inside. How do I delete all directory structure without deleting the files? That would be basically move all files found inside these directories to the current directory. The current directory is the directory where the other directories are.
With find, in a single one-liner:

    find . -mindepth 2 -type f -execdir sh -c 'mv -vt ../ "$@" ; rmdir "$PWD"' _ {} +

-mindepth 2 makes find ignore the current directory's own files.
-execdir is important here: it makes find change into the directory where a file was found, so the commands inside run in that directory itself.
mv -vt ../ "$@" expands to mv -vt ../ "file1" "file 2" "..." "fileN".
rmdir "$PWD" deletes the directory -execdir is running in, and runs after all files have been moved up to the parent directory.

Be careful: you may overwrite files that share the same file name when moving them to the destination path.
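Since this relies on a couple of subtle find/mv features, the same flattening idea can be rehearsed in a throwaway directory tree first (all names here are made up for the demo; mv -t and find's -empty/-delete are GNU extensions):

```shell
# Demo in a throwaway tree: two album directories with one file each.
top=$(mktemp -d)
mkdir -p "$top/album1" "$top/album2"
touch "$top/album1/a.mp3" "$top/album2/b.mp3"

# Move every file that is at least one level deep up to $top
# (-t names the destination so find can batch many files with '+').
find "$top" -mindepth 2 -type f -exec mv -t "$top" {} +

# Remove the now-empty directories (GNU find).
find "$top" -mindepth 1 -type d -empty -delete

ls "$top"    # -> a.mp3  b.mp3
```

Running it on a copy of the real music directory first is a cheap way to confirm no two files share a name.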
copying files recursively without preserving directories
1,422,471,670,000
I want to install PHPStorm IDE on my PC. The Linux version is distributed as a .tar.gz archive which contains bin, help, jre64, lib, license and plugin directories and 2 text files. I have searched around and the place for user-intalled programs is apparently /usr/local or /opt. Should I install it into /usr/local or /opt? In case that /usr/local was the answer to the first question: /usr/local already contains some directories which are the same as in the PHPStorm archive (bin, lib); should I copy the directories from the archive directly into /usr/local or create a phpstorm subdirectory and put them there?
Consider that you have two options for installing software:

1) System-wide: the application will be accessible by all users, and has to be installed with administrator (root) privileges.

2) Only for your user, inside your /home/user; installation does not need administrator privileges.

In case 1) you normally have the two places you mention: /usr/local and /opt. If the .tar.gz has its own directory structure, I recommend you put the files inside /opt/PHPStorm, maintaining that structure. /usr/local is more suitable for installers that know its sub-directories. Nevertheless, for an IDE like PHPStorm, it would be better to install it inside your home, in a location like /home/user/bin/PHPStorm. This way it is easier to upgrade, to install complements, etc.
Should I install programs directly into /usr/local or a subdirectory in usr/local or into /opt? [duplicate]
1,422,471,670,000
I'm currently doing a CTF challenge and have extracted what I assume is the flag by injecting a relative path into the $filename parameter of a call to the PHP function file_get_contents($filename) that is executed by an Apache server on the target system. I got the name of the file the flag was hidden in through a hint at some other point but I was wondering: Is there a file that's commonly present on a Linux system that holds information about the directory structure and files contained in these directories? I've tried searching through several log files in order to find references to interesting files but I've yet to find a detailed list of file names. I also do not have permission to read entire device files. I do have full control over the $filename parameter but from I gathered, I can only insert absolute or relative paths without wildcards in there.
The locate command, which is installed on most Linux machines, gets its information from a database containing all the paths on the system. A cron job maintains this database. There are competing implementations of locate and probably competing ideas on where the database should be, so you may have to dig up a few versions to find all the places where it might be hiding. (If you were on the machine already you could locate locatedb!) On my Debian machine it's /var/cache/locate/locatedb
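If locate or its database isn't present on the machine, a similar path listing can be generated on demand with plain find — slower, because it walks the filesystem live rather than reading a prebuilt database. A minimal sketch, using /etc as a sample root:

```shell
# Build a locate-style list of every path under a directory tree.
# /etc is just an example root; point find wherever you need.
# -xdev stays on one filesystem; stderr is silenced for unreadable dirs.
find /etc -xdev 2>/dev/null | sort > /tmp/pathlist.txt
wc -l < /tmp/pathlist.txt
```

The resulting file can then be grepped for "interesting" names the same way you would query locate.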
File containing directory structure
1,422,471,670,000
I'm writing a tar implementation for a package manager, and I wonder if I can skip the permission setting of the user in the tar headers and keep what is there by default, the user who wrote the files, in my case root. Would this present a problem? The package manager is designed to not write anything inside /home.
Most but not all files that are part of the system are owned by the root user. It's rare for system files not to be owned by root, because a user that owns system files can modify them and this is usually not desirable. It's a lot more common to have files that are owned by a group other than root, and that have mode 660 or 664 or 640. It's possible to design a Unix system where all system files (outside of /dev, /home, and the parts of /var containing user data such as mailboxes and crontabs) are owned by root. I don't know whether this is the case for Arch Linux. But not allowing files to be owned by a different group would significantly restrict the security protections of the system, it wouldn't be viable. So you'll need to remember group ownership anyway. Why not remember user ownership as well?
Are all files outside of /home/userabc owned by root?
1,422,471,670,000
I've googled, searched Unix & Linux Stack Exchange, and... well, there's not really much else I can do, is there? I was browsing some quines in various shell languages, and I came upon the following shebang: #!/xhbin/bash A lot of old websites (seem to) talk about it like we talk about /bin today, but on the other hand, a lot of the aforementioned shell quines use /bin/bash. What on earth was xhbin? I thought it was /bin all the way down back to like, 1970?
Mikel is right. From this post on Google Groups' group for CS246 at the University of Waterloo: Don't let the /xhbin part spook you. This is a peculiarity of the MFCF/CSCF environment. I don't know all of the details, but some time ago MFCF developed a program called xhier to make it easier to distribute software to a large number of machines. The xh in xhbin just means that your shell has been changed to an instance of the shell that is under xhier's control. It's still bash. MFCF and CSCF are, respectively, the Math Faculty Computing Facility and Computer Science Computing Facility.
What was xhbin?
1,422,471,670,000
I've been writing a Linux device driver for some measurement devices I'm attaching to my Raspberry Pi. I've created my kernel module and an application to access the character device driver, but the device needs to be calibrated regularly and I need to store the calibration data somewhere. Where is that data usually stored? My best guess is /etc, but I'd like to hear from someone who knows more about this than I do.
Per the Filesystem Hierarchy Standard, /var/lib/ might be the right place: This hierarchy holds state information pertaining to an application or the system. State information is data that programs modify while they run, and that pertains to one specific host. Users must never need to modify files in /var/lib to configure a package's operation. State information is generally used to preserve the condition of an application (or a group of inter-related applications) between invocations and between different instances of the same application. State information should generally remain valid after a reboot, should not be logging output, and should not be spooled data. /etc isn't right for calibration data, since /etc should be able to be mounted read-only.
Where to store calibration files for a custom Linux device driver
1,422,471,670,000
I have Debian wheezy chrooted in my Android. However all its directory is in my internal memory. So, if I apt-get install something, it gets installed in /data/data/.../debian/usr/local/bin directory. I have bound my external sd under /sdcard/sdext2 in Debian. I can access it by cd /sdcard/sdext2 and verified with ls that it is ok. I would like to have Debian install apps under /sdcard/sdext2/usr/local/bin instead of /usr/local/bin. How can I do that without moving the whole root directory?
Debian does not install anything into /usr/local, in the sense that official Debian packages are forbidden to touch that hierarchy. Also, Debian packages can assume absolute installation paths, so they may not work correctly if moved by hand (or by somehow tricking dpkg into installing them into a different hierarchy). On the other hand, software packages using the GNU Autotools build system (i.e. those you install by ./configure && make && sudo make install) indeed use the /usr/local hierarchy by default, and you can override that:

    $ ./configure --prefix=/sdcard/sdext2/usr/local

You may want to override other default directories, too. Browse the output of ./configure --help for those not influenced by --prefix.
How to change the default directory where programs get installed
1,422,471,670,000
Could someone please explain this to me? I'm having trouble understanding how this is set up and why it behaves the way it does. I wanted to see where zsh is actually installed on a machine that I use, so I used ls to find where the symlink points.

    $ ls -l /bin/zsh
    lrwxrwxrwx 1 root root 14 Dec  1  2011 /bin/zsh -> ../sfw/bin/zsh*

When I saw this, I thought, "Okay, the symlink uses a relative path. No big deal." But if I then try to cd directly to what should be where the relative path points, I get this:

    $ cd /bin/sfw/bin
    cd: no such file or directory: /bin/sfw/bin

However, if I specifically type this

    $ cd ../sfw/bin

when /bin is the working directory, then it works. And then I also get this:

    $ pwd
    /usr/sfw/bin

What is going on here?
/bin is probably a symlink to /usr/bin on your system. If that were true then: /bin/../sfw/bin/zsh would actually be the same as /usr/bin/../sfw/bin/zsh which reduces to /usr/sfw/bin/zsh which is where zsh actually lives. Note that what you tried, which was /bin/sfw/bin does not correspond to any path that you actually could see on the system. The correct way to resolve a relative path (../sfw/bin/zsh) given the absolute path path that forms the base for that relative path (/bin) is to concatenate them together as /bin/ + ../sfw/bin/zsh → /bin/../sfw/bin/zsh.
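The resolution rule is easy to rehearse in a throwaway tree that mimics the layout guessed at above (all paths here are stand-ins created under mktemp):

```shell
base=$(mktemp -d)
mkdir -p "$base/usr/bin" "$base/usr/sfw/bin"
echo real-zsh > "$base/usr/sfw/bin/zsh"
ln -s usr/bin "$base/bin"                    # bin -> usr/bin, as suspected above
ln -s ../sfw/bin/zsh "$base/usr/bin/zsh"     # the relative link from the question

# The '..' is resolved against the link's real parent (usr/bin), not
# against the 'bin' symlink the user typed:
cat "$base/bin/zsh"          # -> real-zsh
readlink -f "$base/bin/zsh"
```

readlink -f shows the fully canonicalized path, which ends in usr/sfw/bin/zsh — exactly the /usr/sfw/bin the asker landed in.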
Strangeness with Symlinks and Relative Paths
1,422,471,670,000
On my server I have a bunch of accounts that have WordPress installed and also have backup directories in the format softaculous_backups. I can find all the directories using the command:

    $ find / -type d -name "softaculous_backups" -ls

I can use commands to check the disk space used by each folder, such as:

    $ du -smc * | sort -n
    $ du -sh *

etc... But how do I combine the two commands into one, so I get output like:

    Size      Directory
    123456789 /home/useraccount/softaculous_backups
How about this:

    find / -type d -name "softaculous_backups" -exec du -sm {} \; | sort -n

For every directory found, du -sm is executed. After that, all output is sorted numerically.
Search directories and return size / disk space usage for each
1,422,471,670,000
I'm under the impression that there are a bunch of directory names that have common semantics, like (in no particular order): usr tmp src lib etc share opt bin sbin var data include There are others that may or may not be UNIX-y (e.g. libexec) but they show up frequently enough to imply that conventions exist. I noticed that Racket uses a collects directory. Is there any other software that uses a collects directory?
Racket has a concept of collections. From its website: A library is module declaration for use by multiple programs. Racket further groups libraries into collections that can be easily distributed and added to a local Racket installation. It appears as though the collects directory holds collections that are distributed with Racket. To confirm this, I asked a question in the #racket IRC channel. I received a response from a Racket developer. < EvanTeitelman> What is the purpose of the `collects` directory in the Racket git repository? < [developer]> It used to hold all libraries (aka collections) of Racket implemented in Racket. < EvanTeitelman> soegaard: Are those collections distributed with Racket? < [developer]> yes Outside of Racket, I have never seen a directory named collects used in that manner.
Is there any common meaning to a `collects` directory?
1,422,471,670,000
If I were to write a library for a C-incompatible language (like D), where would be the best place to install my "header files" to? /usr/include seems like a bad idea, since the FHS says it's for "header files included by C programs."
You define your own conventions, but I'd indeed stay away from /usr/include. /usr/lib/<lang> seems popular here for interpreted languages (I've at least /usr/lib/python, /usr/lib/perl and /usr/lib/ruby with variants for the handling of version specific stuff) I think that /usr/share/<lang> is more proper from the FHS (I've also /usr/share/tcl with a symbolic link from /usr/lib/tcl) if there is no binary data there (or at least only architecture independent binary data). Still in the FHS spirit, I'd tend to use /opt/<lang>/share or /opt/<lang>/lib while providing the installer (or the distribution) an easy way to use /usr/share/<lang> or /usr/lib/<lang>.
Where are "headers" for other languages kept?
1,422,471,670,000
In Linux, according to the Filesystem Hierarchy Standard, /opt is the designated location for add-on application software packages. Thus, when developing my own software package, which is not a dependency for anything else, I would place that in /opt/somepackage, with a hierarchy of my choice underneath. FreeBSD, according to the link above, does not strictly follow the FHS but installs third-party packages into /usr/local. OPNsense, which is based on FreeBSD, installs its own code (at least in part) into /usr/local/opnsense. The hier manpage on FreeBSD makes no mention of /opt – thus a package installing itself in that location would be unlikely to collide with anything else, but would introduce a top-level path that is almost as exotic as installing straight to /somepackage. What would be the appropriate installation location in FreeBSD? /usr/local/somepackage instead of /opt/somepackage, again with a hierarchy of my choice underneath? Note that I have seen the following posts, which provide some insight but don’t fully answer my question: In Linux I'd use "/opt" for custom software. In FreeBSD? – asks specifically about software not managed by the package manager, whereas I am asking about developing my own .pkg. What might be an equivalent to Linux /opt/ in OpenBSD? – asks about OpenBSD, which may be different from FreeBSD
You install user programs in /usr/local/. There is no /opt in the standard install or mentioned in the hier man page. From the man page:

    /usr/     contains the majority of user utilities and applications

And:

    local/    local executables, libraries, etc.  Also used as the default
              destination for the ports(7) framework.  Within local/, the
              general layout sketched out by hier for /usr should be used.

And:

    NOTES
        This manual page documents the default FreeBSD file system layout,
        but the actual hierarchy on a given system is defined at the system
        administrator's discretion.  A well-maintained installation will
        include a customized version of this document.

If you are installing an executable that includes other things like documentation, data, helper files, etc., I would put this in /usr/local/$package because it's a package of things. If it's the executable alone, it should go in /usr/local/bin because it's a binary.
Where to install custom software packages on FreeBSD?
1,422,471,670,000
ls - list directory contents

    empty_dir# ls -al
    total 0
    drwxr-xr-x. 2 root root   6 Dec 31 09:49 .
    dr-xr-x---. 6 root root 284 Dec 31 09:49 ..

find - search for files in a directory hierarchy

    empty_dir# find .
find has to deliberately exclude . and .. It has to avoid descending into them, as it would do for other directories returned by readdir(). Rather than show the directories . and .. but not show any of their contents, it excludes them entirely. This is the desired behaviour, for example if you used find -exec touch \{\} \;. Users would not wish this command to affect .. (the parent directory).

A satisfactory answer would point to some formal specification which describes this. Arguably, POSIX is trying to document this. I don't understand it well enough to rely on it as a formal spec. But the bolded sentence below suggests that it does not "encounter" . and ..

    The find utility shall recursively descend the directory hierarchy from
    each file specified by path, evaluating a Boolean expression composed of
    the primaries described in the OPERANDS section for each file
    encountered. Each path operand shall be evaluated unaltered as it was
    provided, including all trailing <slash> characters; all pathnames for
    other files encountered in the hierarchy shall consist of the
    concatenation of the current path operand, a <slash> if the current path
    operand did not end in one, and the filename relative to the path
    operand. The relative portion shall contain no dot or dot-dot
    components, no trailing <slash> characters, and only single <slash>
    characters between pathname components.

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/find.html
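You can see the difference side by side in a freshly created empty directory:

```shell
# In an empty directory, ls -a lists the dot entries, find does not.
d=$(mktemp -d)
cd "$d"
ls -a      # ->  .  ..
find .     # ->  .
```

find still prints the starting point "." itself (it is the path operand), but never emits the . and .. entries of any directory it descends into.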
Why does find . not show all files
1,422,471,670,000
I've got a computer that boots off an SSD, with an HDD for miscellaneous user data (currently, my /home partition). I want to move some installed software from /usr/local/bin to a directory on the HDD, to save space on the SSD for things where the improved loading time is more significant (and also save wear on the SSD). Is there a standard place to put that sort of thing, or should I just come up with something myself and add it to my PATH/symlink it into /usr/local/bin?
I'm not aware of any standard having rules or recommendations for this situation, however it is fairly common so well worth discussing. Firstly, I would avoid symlinks. In my opinion it is usually much cleaner to modify the path. Using either /etc/environment or /etc/profile is probably best. As for directory structure, I would recommend something along the lines of local/ (/mnt/hdd/local/bin for example). This would be consistent with using $HOME/.local as a user software prefix and /usr/local as a sysadmin's custom/non-distro software prefix. As for only installing the binaries, for most software that would be a case of setting exec_prefix=/path/to/local/. For other software, you would need to look at their specific build documentation.
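As a sketch of the PATH approach, with a temporary directory standing in for the HDD mount point (in /etc/environment or /etc/profile you would hard-code the real path instead):

```shell
# Stand-in for a bin directory on the HDD, e.g. /mnt/hdd/local/bin
# (the real path is whatever you chose at mount time).
hdd=$(mktemp -d)
mkdir -p "$hdd/bin"
printf '#!/bin/sh\necho hello from the HDD\n' > "$hdd/bin/hello"
chmod +x "$hdd/bin/hello"

# Extend the search path so binaries there are found by name:
PATH="$PATH:$hdd/bin"
hello      # -> hello from the HDD
```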
Proper location for software on non-boot partition
1,422,471,670,000
I have some sshfs mounts which I want to put in a Linux filesystem location following the Filesystem Hierarchy Standard. The standard is strangely silent on where network mounts should be placed: media Mount point for removeable media mnt Mount point for mounting a filesystem temporarily Mounting under /net could conflict with NFS autofs mounts from the same hostname. Where is a sensible place to put sshfs mounts given that creating directories directly under / is frowned upon?
The FHS is defining directory names and usage. Creating a custom directory directly under the root one is considered risky as it might conflict with a future version of the standard or with a new OS owned directory. Unlike many other Unix and Unix like OSes file system standards (e.g. freeBSD and Solaris), the FHS fails for some reason to define /net as a generic mount point for automounted NFS shares. On the other hand, the FHS defines /mnt and /media for a similar but distinct purposes. While /media is for locally attached devices like CD, DVDs and thumb drives, /mnt doesn't restrict the kind of device so should theoretically be usable to store your sshfs mount, for example in /mnt/sshfs/xxx, but creating an exclusive subdirectory under /mnt might conflict with existing admin usage so I wouldn't recommend doing it. /mnt is defined to hold file systems temporarily mounted here by the administrator, which doesn't exactly match file systems automatically mounted by a daemon. There is no way to use /net to store sshfs mounts as autofs configuration is forbidding to have multiple handlers for the same mount point. As auto.smb is suggesting /cifs for its root mount point directory, I would simply use /sshfs. The risk for /sshfs to clash in the future with an OS owned directory is essentially zero. Excerpt from the auto.smb manual page: # Put a line like the following in /etc/auto.master: # /cifs /etc/auto.smb --timeout=300 Excerpt for the auto.master default configuration file: # NOTE: mounts done from a hosts map will be mounted with the # "nosuid" and "nodev" options unless the "suid" and "dev" # options are explicitly given. # # /net -hosts
Where should sshfs mounts be placed in the filesystem?
1,422,471,670,000
I have a NFS-mounted home directory where I keep some executable programs for different OS and machine architectures. (A particular executable is built for one particular architecture only.) I have to store these executables in different bin directories (one bin directory for each OS/machine combination) so that I can easily put those directories in PATH. Is there a conventional place to put executables for a particular OS/architecture? The best I could find was MultiarchSpec - Filesystem layout | Ubuntu Wiki. It's only for libraries, e.g. /lib/x86_64-linux-gnu - it doesn't say anything about executables. Currently I have this:

    d=~/".local/$(uname | tr A-Z a-z)-$(uname -m | tr A-Z a-z)/bin"
    if [ -d "$d" ]; then
        export PATH="$PATH":"$d"
    fi
While the current version of the FHS does not discuss locations of binaries for different architectures, a related issue came up in their bugtracker. This proposal could be extended to include the OS as well, but that should be discussed in the FHS mailing list. In short: no standard exists (yet) and every site is free to choose its own, whatever is suited best for the particular requirement. If your PATH setup above works for you, it's the right setup :-)
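The question's snippet generalizes naturally to a loop over several candidate directories, adding only those that exist — a sketch using the same ~/.local-style convention (the base directory here is a temporary stand-in, and the "noarch" directory name is invented for the demo):

```shell
# OS/machine tag, e.g. linux-x86_64
arch="$(uname | tr 'A-Z' 'a-z')-$(uname -m | tr 'A-Z' 'a-z')"
base=$(mktemp -d)                 # stand-in for ~/.local
mkdir -p "$base/$arch/bin"        # only the native-arch dir exists here

for d in "$base/$arch/bin" "$base/noarch/bin"; do
    if [ -d "$d" ]; then          # skip candidates that don't exist
        PATH="$PATH:$d"
    fi
done
echo "$arch"
```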
Multi-arch bin directories
1,422,471,670,000
What I want to do is present a directory, the contents of which are sourced from a series of other directories on the file system. It would be like a soft-link, only a link to multiple directories at once, rather than just one. The directory might be considered read-only - or, if updates to a file are made, it would be good if those updates occurred in the source-file (i.e. the directory is a "window" into a number of locations on the file-system) If a new file is created in this 'pseudo' directory, it should either be dropped into some default location, or rejected altogether. Similar functionality exists in Windows (under the name libraries) and OSX (smart folders) - is there any Unix equivalent?
It's not available as part of standard Unix or a graphical Linux interface. Linux system administrators can use overlayfs. Actually one of the most important uses is to allow modifications to a running LiveCD system, e.g. installing extra packages. There are also equivalents in FUSE, which can be used on Linux without root privileges. There will be additional limitations from FUSE but it could work well for the same cases as the features you mentioned. unionfs-fuse appears to be available in ubuntu. ("union mount" is an older term for the concept).
Unix equivalent of smart folders
1,422,471,670,000
I have created new module code and want to compile it for my kernel, but I am not sure about one thing: should I copy it to some designated directory before I start the compilation, or can I just compile it wherever I want? Thank you very much!
There isn't anywhere special the source code needs to be. Normally it'd be wherever your repository is. If you want to leave it somewhere for the next admin to find, the most obvious place would be a company VCS server. /usr/src would also be a reasonable place to look, as well as $HOME. Eventually, if you decide to submit the module for inclusion in the kernel, you'll have to put it in the proper spot in your linux git checkout.
Where (the directory) should I put my newly created module in the kernel?
1,422,471,670,000
I am writing a piece of software (C++) which generates Python scripts. Where should these temporarily existing scripts be placed in the file system? I read a couple pages about the Filesystem Hierarchy Standard, but I didn't find any reference to generated scripts. /usr/bin does not seem to be a good idea as it might be read-only for certain users. My idea would be to place them under /var/run/my_program. Is that right/ok? Or what is the "right place"? Edit: The scripts are only used while the creating programs runs. That means they do not have to live past a reboot.
Temporary files whose lifetime doesn't exceed that of the program that creates them, and in particular aren't supposed to survive a reboot, go into /tmp. Or rather, the convention is to use the directory indicated by the environment variable TMPDIR, and fall back to /tmp if it isn't set. You can execute files in /tmp. While a system administrator could mount it without executable permissions, this would be a hardening configuration for a system that only runs specific applications: it is to be expected that preventing execution under /tmp would break some applications, and it typically wouldn't improve security anyway. Keep in mind that this directory is often shared between users, so you need to be careful when creating files there not to accidentally start using an existing file owned by another user. Use the mktemp utility or the mkstemp function, or better, create a private temporary directory with mktemp or mkdtemp and work in that directory. /run or /var/run are not appropriate because you may not have the permission to create files there (in fact, you will not have the permission to create files there unless granted by the system administrator). They're for system use, not for applications.
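A minimal sketch of the recommended pattern — a private directory from mktemp -d under $TMPDIR, removed when the program exits (the program name "myprog" is a placeholder):

```shell
# Honour $TMPDIR with /tmp as fallback; mktemp -d creates a private
# mode-0700 directory, so other users can't tamper with the generated
# scripts before they are executed.
workdir=$(mktemp -d "${TMPDIR:-/tmp}/myprog.XXXXXX")
trap 'rm -rf "$workdir"' EXIT        # clean up when the program exits

cat > "$workdir/gen.py" <<'EOF'
print("generated")
EOF
ls -l "$workdir"
```

The C equivalent is mkdtemp(3) plus writing the scripts inside the returned directory.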
Where should generated scripts be placed in the filesystem?
1,422,471,670,000
Is there a conventional location to store non-secret bulk files that can be accessible by multiple users? I know that I can create a new folder (and set the right permissions) in most places in the file system. But I'd like to follow a established convention so (a) the structure is more familiar to our new users and (b) it avoids subtle problems with certain directories that I'm not experienced enough to anticipate. If it matters, this is a fresh Red Hat 6 install on a VM. Users will access it through ssh, and maybe RDP or the vCloud Director. The tarred & gziped files are 6.6GB; uncompressed is 11GB.
Usually it depends on exactly what the files are for. In practice I've seen people use /srv for server data, /var/www/html for web-accessible files, and /usr/local as sort of a catch-all. I've also seen a lot of vendors using /opt as being similar to C:\Program Files on Windows (i.e. a collection of third-party application roots), which may themselves have files that non-root users should have access to. There's also nothing all that wrong with creating a directory directly underneath root (e.g. /accountingData, etc.) as long as it's consistent between servers.
recommended storage location for multiple users [duplicate]
1,422,471,670,000
This is a bit vague of a question, as I could just create a folder anywhere in the filesystem and set the permissions accordingly. However, I would like to know where the "correct" location to place shared media in the Linux filesystem hierarchy would be. In my case I have a single machine with a user for personal/recreational stuff and a work user. I have my entire music library duplicated in each users home directory, but it is not really an optimal solution in the long term with regards to adding music. My initial guess would be to place a folder in /usr, since the data is mostly static. But in the case where I would like to share other items, for instance documents, between the two accounts, should the share then rather be created in /var, or is /opt a better place for a shared drive/folder?
There are as many answers to this as there are Linux servers out there. Personal advice is to pick one way and then stick with it. My personal setup is inherited from Ubuntu (as it was once upon a time); all my data is stored in an LVM volume group, so each data category has its own LVM drive. I.e. music is on the 'music' drive, movies on the 'movies' drive, etc. Thus it is logical that I share each of them from their mount points. In the Ubuntu world, removable media used to be in the /media/ folder, i.e. /media/music, /media/movies, etc. Nowadays removable media is mounted a bit differently (/media/username/drive, I think), but I stick to the old way as it works well for me. Then all the other things with user & group permissions mentioned above are of course required so all the different users can access the data. My $0.02. Cheers!
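Whichever parent directory you settle on, a common recipe for the sharing itself is a dedicated group plus a setgid directory, so files created inside inherit the group. A sketch on a throwaway path (the 'media' group is hypothetical, so the chgrp line is left commented out):

```shell
shared="$(mktemp -d)/media"     # stand-in for e.g. /srv/media or /media/music
mkdir -p "$shared"
# chgrp media "$shared"         # requires a shared 'media' group (groupadd media)
chmod 2775 "$shared"            # the leading 2 is the setgid bit: files
                                # created inside inherit the directory's group
stat -c '%a %n' "$shared"
```

Both user accounts would then be added to the group (usermod -aG media user), after which either can read and write the shared library.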
"Correct" place to store shared media on a single machine
1,422,471,670,000
Not sure what I'm missing here, but when I try to list the contents of the Applications directory from the Terminal it produces a directory with a couple .dmg files and none of the actual application directories or content. I would like to get access to the list of actual Applications within the directory. I'm on Yosemite and have attached a screen shot. Any help would be great.
You are using the ~/Applications/ folder and not /Applications. Try:

    cd /Applications

and then:

    ls

~ means home, which is /Users/user/. There is a per-user Applications folder, but it is very rarely used; it is for applications installed only for that particular user. All default applications are in /Applications.
Unable to view Applications Directory Content in Terminal
1,422,471,670,000
I'm new to Unix, and wanted to give it a try. I had an old PC lying around and decided to turn it into a network based backup system to keep the data in the other PCs safe. So I installed FreeBSD on it, and set up Bacula on it to handle the backups. So far so good. I'm configuring the system now, and I noticed that the default configuration stores backups in /tmp. The FreeBSD manual says that /tmp should be used for files that are not usually preserved across a reboot, which is obviously not the case for backups. I have a separate disk I want to keep the backups on, and I know how to configure Bacula to write to wherever I want it to write to. My question is, where should I mount the disk? It seems like maybe /var would work. I was thinking of creating a /var/bacula/ directory and mounting my disk there. Would this be appropriate, or is there some other directory that should be used for long term storage?
For a permanently attached disk for storing backups of other hosts, /var/bacula is fine; hier(7) says /var is for "multi-purpose log, temporary, transient and spool files" (emphasis mine). Backups by their very nature change over time, making /var a good choice. MySQL, on some platforms, is configured to use /var as its primary storage, for example. Alternatively, you could mount it at /usr/local/bacula to follow the FreeBSD convention of putting software installed from ports, and its associated configuration and data files, under /usr/local. On the other hand, I have my backups stored under a new top-level directory, /data, which also contains my NFS and SMB shares.
Where should I mount a disk to store backups?
1,422,471,670,000
The kdebase-workspace package on Arch Linux only preservers changes made to /usr/share/config/kdm/kdmrc when the package is updated. I need to edit /usr/share/config/kdm/Xsetup to get my monitors to rotate correctly, but the changes get lost every time kdebase-workspace gets updated. The Arch Wiki recommends copying /usr/share/config/kdm/Xsession to /usr/share/config/kdm/Xsession.custom. I could do this with /usr/share/config/kdm/Xsetup, but I thought files in /usr/share/ are supposed to be managed by the package manager. It seems like this might be a bug in the package (i.e., should it be saving all the configuration files) or should I be making a change in /usr/share/config/kdm/kdmrc to tell it to look some place else and if so where?
Files under /usr are meant to be under the control of the package manager (except for files under /usr/local). Configuration files that the system administrator may modify live in /etc. This is part of the traditional unix directory structure and codified for Linux in the Filesystem Hierarchy Standard. The recommendation in the Arch Wiki to edit files under /usr is a bad idea; the fact that your changes are overwritten by an upgrade is expected. Arch Linux manages files in a somewhat nonstandard way. You can mark the file as not to be changed on upgrade (this is documented on the wiki) by declaring it in /etc/pacman.conf:

NoUpgrade = usr/share/config/kdm/Xsetup

You may want to replace /usr/share/config/kdm/Xsetup by a symbolic link to a file under /etc (e.g. /etc/kdm/Xsetup), to make it easier to keep track of the customizations that you've made.
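The move-and-symlink step can be rehearsed in a scratch tree before touching the live system; the sketch below mirrors the real paths under a temporary root, so nothing outside it is modified (substitute the actual /usr/share/config/kdm and /etc/kdm paths when doing it for real):

```shell
# Rehearse the symlink approach under a throwaway root.
root=$(mktemp -d)
mkdir -p "$root/usr/share/config/kdm" "$root/etc/kdm"
echo '# xrandr calls would go here' > "$root/usr/share/config/kdm/Xsetup"

# move the file under etc, then point the old location at it
mv "$root/usr/share/config/kdm/Xsetup" "$root/etc/kdm/Xsetup"
ln -s "$root/etc/kdm/Xsetup" "$root/usr/share/config/kdm/Xsetup"

# the old path now resolves through the symlink
readlink "$root/usr/share/config/kdm/Xsetup"
```

Combined with the NoUpgrade entry, upgrades then leave the symlink (and your file under /etc) alone.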
Why do upgrades to KDM/KDE not preserve changes to configuration files?
1,422,471,670,000
I'm writing a server on a shared development desktop that moonlights as a server. This is my first attempt at writing a linux server for someone else on a shared box. I'd like to conform to unix standards and make it as professional as possible. In what directory should the websocket server be placed? I was thinking /var, but it's root's, so I don't know how exactly to navigate that. How should execution be managed? In other words, should a new group be made that has permissions to execute the server in case it crashes or needs restarting? If so, how?
I'm not sure what you mean by “websocket”. If this is a web server, it listens on TCP, and there's no reason why it would also have to listen through a unix socket. Assuming you do want to use a unix socket, if your server is started by root, you can create a subdirectory in /var/run, give your daemon write permissions there, and have your daemon create its socket there under a predictable name (e.g. /var/run/gracchusd/sock). If your server is not started by root, you can create the socket under /tmp, with a name containing a randomly-generated part (e.g. /tmp/gracchusd-nyBBCxs9.sock). Generally speaking, the Filesystem Hierarchy Standard tells you the main rules about where to put files (though beware that it's a little out of date, in particular it doesn't mention /run which is now common but not yet universal). To start and stop your service, create service starting scripts or description files. You (or the people who port your server to different distributions) will need to create one for each init system as they work differently. Some init systems come with their own monitoring mechanism, or you can use a separate monitoring daemon. Unless your server needs to run as root, it is good practice to run it as a dedicated user and group. Configuration files should be owned by root and not group-writable, so that even if the daemon is breached, the attacker cannot change the configuration. If the daemon needs to read confidential files, they can be owned by root or other users and readable by the daemon group.
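For instance, on a systemd-based distribution the service description could look like the sketch below. The unit name, account, and binary path are all assumptions for the hypothetical gracchusd daemon; RuntimeDirectory= asks systemd to create /run/gracchusd owned by the service user before start, which is a suitable home for the unix socket:

```ini
[Unit]
Description=gracchusd websocket server
After=network.target

[Service]
# run as a dedicated, unprivileged account (assumed to exist)
User=gracchusd
Group=gracchusd
# systemd creates /run/gracchusd owned by the service user
RuntimeDirectory=gracchusd
ExecStart=/usr/local/sbin/gracchusd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Other init systems (SysV scripts, OpenRC, etc.) need their own equivalent, as noted above.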
Server socket location, startup and monitoring
1,422,471,670,000
In the past when I have compiled applications from source I have extracted the source code to ~/src and compiled from there. I realize now that there may be no need for me to create the ~/src directory, as Linux probably already has an established location for source code for applications such as this. Is this the case? What is the directory in Linux that is established as the place for source code from third party applications that I want to compile?
There's no pre-determined, or even globally preferred, location. The closest analogue I know of would be the /usr/src tree in Red Hat Enterprise Linux and derivatives, but most applications that you compile are designed to be unrolled into their own directories, compiled as a non-privileged user, and only then installed with root privileges.
Where to place source code for applications compiled from source?
1,422,471,670,000
Which directories should I expect to have in an install prefix when I'm writing makefiles? I've noticed that in the common prefix /usr there is no etc directory, yet there is an include directory, which doesn't exist under the root directory. Which paths are hard-coded (such as /etc and perhaps /var), and which directories belong inside a prefix? As far as I can see, /bin and /lib are standard.
See the FHS (Filesystem Hierarchy Standard) for details: http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard and http://www.pathname.com/fhs/
What do I install into a given install prefix
1,487,538,464,000
Update: The specification provided in the answer below negates the actual question; that is, it applies to a broader scope of specifications, thereby eliminating the need for this question (see answer). I'm struggling to find cohesive documentation on this one. Basically, I would like to know, with references where possible, what the standards are for an executable mounting a volume. To clarify: I'm looking for a specification. Below are some examples:

* The executable *may* mount a volume at a subdirectory of the path provided by the caller (say it's not an empty directory), e.g. $ARGX/$NEWPLACE
* The executable *cannot* create directories, ever.
* The executable *cannot* create directories unless specified by the caller.
* The executable *may* create directories specified by the caller if they do not exist.
* The executable *must* mount a volume at /mnt/$OTHERPATH if the mount point passed by the caller is unavailable.
* The executable *cannot* mount a volume at $BADPATH, $WORSEPATH, and the like, even if specified by the caller.
* The caller *expects* the volume to be mounted at $ARGX.
* The caller *expects* to be alerted if the mount point is not empty.
* The caller *expects* the executable to abort on all errors.

The reason is, just looking at various distros, not only have they changed over the years, but each has its own opinion of where things go (slight tangent here: wouldn't /mnt/$USER/$VOLUME be a better global solution to the hierarchy, since /mnt was originally for mounting things, and having a separate directory for each user would work the way /home/$USER/ does?). Back on topic, I want to remain as distribution-agnostic as possible (which I realise is like asking to make all of the people happy all of the time), so any specification would be appreciated. Thanks.
I couldn't find a specification specifically referencing mounting, but Chapter 19. Additional Recommendations - Linux Standard Base Core Specification, Generic Part seems applicable. 19.1.1. Directory Write Permissions The application should not depend on having directory write permission in any directory except /tmp, /var/tmp, and the invoking user's home directory.
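Absent a formal specification for the mount behaviour itself, a program can at least honour the caller expectations listed in the question with a pre-flight check. Here is a sketch (the function name and messages are made up), exercised against a scratch directory rather than a real mount point:

```shell
# Pre-flight check before mounting: the target must exist and be a
# directory, and (per the "alert if not empty" expectation) the
# caller is warned when it already has contents.
check_mount_target() {
  tgt=$1
  [ -e "$tgt" ] || { echo "error: $tgt does not exist" >&2; return 1; }
  [ -d "$tgt" ] || { echo "error: $tgt is not a directory" >&2; return 1; }
  if [ -n "$(ls -A "$tgt")" ]; then
    echo "warning: $tgt is not empty"
    return 2
  fi
  echo "ok: $tgt"
}

demo=$(mktemp -d)
mkdir "$demo/empty" "$demo/busy"
touch "$demo/busy/existing-file"
check_mount_target "$demo/busy" || true   # warns, refuses
check_mount_target "$demo/empty"          # accepts
```

Whether the tool should then create missing directories, fall back to another path, or abort is exactly the policy question the question asks about; the check only makes the chosen policy explicit.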
GNU/Linux: What is the behaviour specification (if any) of a program mounting a volume?
1,487,538,464,000
It seems to me that there's a lot of confusion (or at least there is on my part) about how different file systems are structured among different distributions of Linux/Unix. It would stand to reason then that instead of having different types of packages for each system, it would be more useful to have environment variables that point to the different directories in individual file system structures. For example: If I wanted to know the location of the "program files" directory on a windows system, I could use the environment variable %ProgramFiles% or %ProgramFiles(x86)%. Is there any such facility on Linux or Unix systems?
Linux doesn't have an equivalent to Windows's %ProgramFiles% variable because it doesn't need one. There's a standard location for programs that are installed in their own directory: /opt. But most programs aren't installed there, because they come in packages, and their files are located where they will be found by other programs. The reasons why Windows has a %ProgramFiles% variable are in fact largely historical:

* Windows has drive letters. Even if \Program Files were the standard location, there'd still be the question of whether it's C:\Program Files, D:\Program Files, etc. Linux has never had this issue because symbolic links allow a directory to appear anywhere, regardless of which physical storage medium it's on. (Modern versions of Windows don't need this because they have an equivalent feature, but the location remains modifiable for backward compatibility.)
* Windows lets the administrator choose the names of system directories. Linux doesn't; that's OK because it never did, whereas Windows had to accommodate, for backward compatibility, administrators who chose different locations.
* Every Windows program comes with its own installer, so there's no real package management mechanism, and the only way to keep track of what files belong to what package is to have one package per directory. That's beginning to change, but it's not quite there yet. In contrast, Linux usually stores files where they will be found and lets the package manager keep track of which package they belong to.

Linux does have environment variables that specify paths: PATH for executable commands, LD_LIBRARY_PATH for shared libraries, MANPATH for manual pages, etc. They're all about where to find files, not where to put files. Where to put files isn't something that programs know, it's something that package managers know. Package managers have their own data files; they don't need environment variables to tell them where things are.
The directory structure on Linux systems is standardized in the Filesystem Hierarchy Standard. There's no need for environment variables for most of these, either because the location is standard or because there's no need for a single location. The fact that different distribution have different package systems isn't due to having different directory structures. It's one of the main differentiators between distributions.
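The way such lookup variables work can be seen by reimplementing the shell's own command search. This sketch (the function name is made up) walks the colon-separated $PATH the same way the shell does before running a command:

```shell
# Print the first directory in $PATH holding an executable with the
# requested name -- the same search the shell performs.
find_in_path() (
  IFS=:
  for dir in $PATH; do
    if [ -x "$dir/$1" ] && [ ! -d "$dir/$1" ]; then
      printf '%s\n' "$dir/$1"
      exit 0
    fi
  done
  exit 1   # not found anywhere on $PATH
)

find_in_path ls
```

The point stands out clearly here: PATH tells programs where to *look*, and nothing in it tells an installer where a new file *belongs*.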
Standard environment variables for distribution-specific paths
1,487,538,464,000
I know that ls lists the names of the files in a given directory and ls -i shows the names and the inode numbers. But why is it slower? EDIT: This happens with big directories. The names and the inode numbers are stored together in the directory information block, so why does it take more time to query the inode numbers?
strace shows me that ls -i is calling lstat() on each filename. That would explain the extra work. Given that readdir() has already returned the inode number, this appears to be sub-optimal; while it feels like a bug, this behaviour is for consistency with mount points. (see Thomas' comment)
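That the inode number is already present in the directory entry is easy to confirm: for an ordinary file, ls -i and stat report the same number, since both ultimately describe the same on-disk inode (GNU coreutils assumed for the stat -c syntax):

```shell
# Compare the inode number ls -i prints against what stat reports
# for the same file.
d=$(mktemp -d)
touch "$d/sample"

from_ls=$(ls -i "$d/sample" | awk '{print $1}')
from_stat=$(stat -c %i "$d/sample")   # GNU stat syntax

echo "ls -i : $from_ls"
echo "stat  : $from_stat"
```

Mount points are the exception: there readdir()'s d_ino describes the mounted-over directory, which is why ls resorts to lstat() for a consistent answer.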
Why is ls -i slower than ls?
1,487,538,464,000
Suppose I create a separate partition for the /usr directory. This is the directory which contains all user programs. If I switch from one OS to another (e.g. from Ubuntu to Arch Linux or vice versa), can Arch Linux run those programs? Will the Blender that I installed on Ubuntu work on Arch Linux?
You can make them run on different distributions with some work. The main things programs rely on are libraries. These libraries will be stored in different locations in different distributions, but you can find out where these libraries are linked with the ldd command. For example, this is the output of ldd when run against /usr/bin/vlc on Debian:

linux-vdso.so.1 (0x00007fff11969000)
libvlc.so.5 => /usr/lib/libvlc.so.5 (0x00007f597eb01000)
libvlccore.so.5 => /usr/lib/libvlccore.so.5 (0x00007f597e819000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f597e5fd000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f597e3f9000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f597e0f5000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f597ddf7000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f597dbe1000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f597d834000)
libdbus-1.so.3 => /lib/x86_64-linux-gnu/libdbus-1.so.3 (0x00007f597d5ee000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f597d3e6000)
/lib64/ld-linux-x86-64.so.2 (0x00007f597ed37000)

You can see that most of the libraries point to ones in /lib/x86_64-linux-gnu. While on Arch, the libraries for vlc are located in /usr/lib:

linux-vdso.so.1 (0x00007fff5a1fe000)
libvlc.so.5 => /usr/lib/libvlc.so.5 (0x00007f84fd7c2000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f84fd5a4000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f84fd3a0000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f84fcff5000)
libvlccore.so.7 => /usr/lib/libvlccore.so.7 (0x00007f84fcce1000)
librt.so.1 => /usr/lib/librt.so.1 (0x00007f84fcad9000)
libdbus-1.so.3 => /usr/lib/libdbus-1.so.3 (0x00007f84fc892000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007f84fc58f000)
/lib64/ld-linux-x86-64.so.2 (0x00007f84fd9e0000)

As you can see, the libraries are located in slightly different locations, with the vlc binary itself having different dependencies.
So, theoretically, with an Arch install of VLC, I could run it on Debian by linking the libraries into the correct places. You could also expand the places where the system looks for libraries by setting the LD_LIBRARY_PATH variable, like so: export LD_LIBRARY_PATH=/usr/local/libs:$LD_LIBRARY_PATH
Does software installed in one distribution of linux run in another distribution of linux?
1,487,538,464,000
I have the following directory structure:

/media/storage/sqlbackup/CUSTOMER1
/media/storage/sqlbackup/CUSTOMER2
...
/media/storage/sqlbackup/CUSTOMER*

Each CUSTOMER* directory may contain subdirectories named daily, weekly, and monthly. If a CUSTOMER* directory does not contain daily OR weekly OR monthly, I want the missing ones to be created; if it does, I want them to remain. Before: CUSTOMER1/daily After: CUSTOMER1/{daily,weekly,monthly} I was trying to do this with clever use of find, but couldn't work out how to return all the directories that don't match.
You can create the directories while hiding any error related to the directory already existing:

for custDir in /media/storage/sqlbackup/CUSTOMER*
do
    mkdir -p "$custDir"/{daily,weekly,monthly}
done

You cannot use /media/storage/sqlbackup/CUSTOMER*/{daily,weekly,monthly} because the {...} sequence is expanded before the wildcard, and a wildcard pattern will only match files/directories that exist.
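The same loop can be rehearsed safely in a scratch tree first. This rendering replaces the brace expansion with an explicit list so it also runs under plain POSIX shells without {…} support (paths are illustrative):

```shell
# Rehearse in a throwaway tree: CUSTOMER1 already has daily/,
# CUSTOMER2 has nothing; after the loop both have all three.
base=$(mktemp -d)
mkdir -p "$base/CUSTOMER1/daily" "$base/CUSTOMER2"

for custDir in "$base"/CUSTOMER*; do
  for sub in daily weekly monthly; do
    mkdir -p "$custDir/$sub"   # -p silently keeps existing dirs
  done
done

ls "$base/CUSTOMER1"
ls "$base/CUSTOMER2"
```

Existing daily/ directories (and anything inside them) are untouched; only the missing subdirectories appear.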
Find all directories NOT containing matched subdirectory and create them
1,487,538,464,000
I would like a cleaner way to write this nest of commands:

if [ ! -d $BACKUPDIR ]; then
    mkdir -p $BACKUPDIR
    mkdir -p $BACKUPDIR/directories
    mkdir -p $BACKUPDIR/databases
    mkdir -p $BACKUPDIR/logs
else
    :
fi
With brace expansion, you could do just mkdir -p "$BACKUPDIR"/{directories,databases,logs} If you want to make sure the subdirectories exist, too, you can just run mkdir without the test. With -p it shouldn't complain about existing directories, and there will be no chance of the main directory $BACKUPDIR existing, but the subdirectories missing. (Of course, if BACKUPDIR is empty, this will (try to) create the subdirectories in the file system root directory. But I'll assume you've set BACKUPDIR to some value earlier in the script.)
create a nest of directories in bash
1,487,538,464,000
For example, running a command on n sorted subdirectories, where n is an input. Or, how can I run a for loop on a range of subdirectories where I can give that range as an input? Like the following, except how can I define the range here? for d in ["sd1"-"sd2"] do ( cd "$d" && do stuff ) done
Use brace expansion if you have a shell that supports it:

for d in sd{1..2}; do
    ( cd "$d" && dostuff )
done

With zsh, ksh93 or yash -o braceexpand (but not bash), you can do

n=4
for d in sd{1..$n}; do
    ( cd "$d" && dostuff )
done

Related question: Can I use variables inside {} expansion without `eval`? A variation on this would be

for (( i=1; i<=n; ++i )); do
    str="sd$i"
    ( cd ... )
done

This is the C-style for loop supported by bash and other shells (still an extension to the POSIX standard though).
How can I do subdirectory manipulation in shell?
1,487,538,464,000
I would like to put everything from a given directory into an archive, while also saving the whole directory layout, so the leading (parent) directory names would be required. I run the following command:

tar --create --gzip --recursion --file=/home/user/test_backup.tgz --directory=/home/user/opt .

However this only saves the directory layout from under opt:

dir1
dir1/file
file1
file2

As I understand from the manual, I should use --absolute-names or -P, but neither of them works when I add them to the command above:

tar --absolute-names --create --gzip --recursion --file=/home/user/test_backup.tgz --directory=/home/user/opt .

What could be the problem? The desired directory layout in the archive would be:

/home/user/opt/dir1
/home/user/opt/dir1/file
/home/user/opt/file1
/home/user/opt/file2
When you use --directory FOO . you're telling tar to change to the FOO directory and start archiving from there. If you want full path names then you should specify them as the pattern. eg tar -czf /home/user/test_backup.tgz /home/user/opt However this will strip off the leading / character, so you need to tell tar to not do that: tar --absolute-names -czf /home/user/test_backup.tgz /home/user/opt
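The contrast is easy to see against a scratch copy of the layout from the question (GNU tar assumed; nothing outside the temporary directory is touched):

```shell
# Build a throwaway copy of the layout and archive it both ways.
top=$(mktemp -d)
mkdir -p "$top/home/user/opt/dir1"
touch "$top/home/user/opt/file1" "$top/home/user/opt/dir1/file"

# -C (--directory) makes the members relative to opt/ :
tar -czf "$top/rel.tgz" -C "$top/home/user/opt" .
tar -tzf "$top/rel.tgz"

# naming the full path keeps the leading directories;
# --absolute-names also preserves the leading / :
tar --absolute-names -czf "$top/abs.tgz" "$top/home/user/opt"
tar --absolute-names -tzf "$top/abs.tgz"
```

The first listing shows ./file1, ./dir1, and so on; the second shows the members with their full leading path, which is the layout the question asks for.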
Tar --absolute-names flag does not work
1,487,538,464,000
For example, on my Red Hat Enterprise Linux 7.2 there are

/etc/rc.d/init.d/rabbitmq-server
/etc/logrotate.d/rabbitmq-server
/usr/sbin/rabbitmq-server
/usr/lib/ocf/resource.d/rabbitmq/rabbitmq-server

more than 4 'kinds' of rabbitmq-server (in fact there are 2 others). Are they the same thing? I mean, if I want to start rabbitmq, can I use any of these rabbitmq-server commands?
No, they are totally different and have different content. For example, I have A/file and B/file:

$ cat A/file
hello
$ cat B/file
there

We can see they say different things. In your case:

/etc/rc.d/init.d/rabbitmq-server - this will be the startup-at-boot-time script
/etc/logrotate.d/rabbitmq-server - this will manage the log files
/usr/sbin/rabbitmq-server - this is the main server program that's started by the init script
/usr/lib/ocf/resource.d/rabbitmq/rabbitmq-server - this is part of your HA configuration.

So all 4 files do different things.
Are same name in different directory the same thing?
1,487,538,464,000
I have a directory structure

DIR
  SUBDIR1
    11-01-11.txt
    13-05-23.txt
  SUBDIR2
    12-05-56.txt
    13-04-02.txt
    15-04-06.txt

I would like to write a bash script that leads to this desired output:

DIR
  SUBDIR1
    sub_dir1_merged.txt
  SUBDIR2
    sub_dir2_merged.txt

I would like to maintain the original directory structure, merge files into a single sub_dirname_merged.txt file for each, and delete all original files. I tried the following code

for f in */
do
    cat $f/*.txt > "$f"/$f_merged.txt
    rm $f/*.txt
done

but this does not work fully.
The script:

for f in */*.txt; do
    cat "$f" >> "$(dirname "$f")/$(dirname "$f")_merged.txt"
    rm "$f"
done

Loop over the txt files across the subdirectories: for f in */*.txt; do. $(dirname "$f") returns the name of the folder in which the txt file resides, which is used both to name the merged file and as the path in which to save it. rm "$f" removes the file. When you are going to use the rm command, be sure the result of the script is what you expect before executing it.
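The loop can be rehearsed against a throwaway copy of the layout from the question (safe to run; nothing outside the temporary directory is touched):

```shell
# Build the example tree, run the merge loop, inspect the result.
base=$(mktemp -d)
mkdir -p "$base/SUBDIR1" "$base/SUBDIR2"
printf 'a\n' > "$base/SUBDIR1/11-01-11.txt"
printf 'b\n' > "$base/SUBDIR1/13-05-23.txt"
printf 'c\n' > "$base/SUBDIR2/12-05-56.txt"

cd "$base"
for f in */*.txt; do
  d=$(dirname "$f")
  cat "$f" >> "$d/${d}_merged.txt"
  rm "$f"
done

ls SUBDIR1 SUBDIR2
cat SUBDIR1/SUBDIR1_merged.txt   # glob order: a, then b
```

Note the glob list is expanded once before the loop starts, so the freshly created *_merged.txt files (which also end in .txt) are not swept up within the same run; rerunning the loop, however, would merge them into themselves, so run it once.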
merge multiple files in several subdirectories into 1 file for each subdirectory and delete original files
1,487,538,464,000
I created a directory dir on the Desktop, and then I keyed in cd dir so as to make dir my current directory. Then, from the dir directory itself, I typed rmdir /home/user_name/Desktop/dir in the terminal, and surprisingly this removed the dir directory. But when I checked my current working directory using pwd, it still showed that I am in the dir directory. So my question is: how is it possible that I am working in a directory that has already been deleted? I am currently working on Ubuntu.
If you want to understand why this is, you need to understand the difference between directory entries and inodes. rm, rmdir and mv all act on the directory entry naming the file/directory, not the actual data. If you have a file/dir open (e.g. by being in the directory), the name is removed from its parent directory, but the inode and the data associated with the file/dir are not freed until all handles pointing to it are closed. So, when you "cd ..", the filesystem can swoop in and reclaim the deleted directory. https://en.wikipedia.org/wiki/Inode http://www.grymoire.com/Unix/Inodes.html
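Both effects are easy to demonstrate. Below, a file's contents remain readable through an open descriptor after rm removes its directory entry, and pwd keeps reporting a directory that rmdir has already deleted (all inside a temporary directory):

```shell
d=$(mktemp -d)

# 1. data outlives the directory entry while a handle is open
echo 'still here' > "$d/f"
exec 3< "$d/f"     # open a read handle on fd 3
rm "$d/f"          # removes the name; the inode's data remains
cat <&3            # prints: still here
exec 3<&-          # close the handle; now the data can be freed

# 2. pwd still reports a directory that no longer exists
mkdir "$d/sub"
cd "$d/sub"
rmdir "$d/sub"     # allowed: we hold the directory open, not its name
pwd                # still reports .../sub
cd /
```

Creating new files inside the deleted directory would fail, though, since there is no longer any directory entry to attach them to.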
Trying to remove current directory using rmdir
1,487,538,464,000
Man expects the man directories listed in $MANPATH or $(manpath) to be split by section into directories named "man$section". This duplicates the section information that is already available in the suffix of the manpage. (e.g. for ls.1.gz, the .1 info gets duplicated in man1/). Why not skip the middle man-directories and make the manpath directories flat when flat seems good enough for $PATH directories?
It doesn't duplicate the information: you can have more suffixes in a given directory than the plain ".1" or ".3", e.g., (depending on the platform) letters following the numbers. For example, Debian follows the ".3" with an application suffix such as "pm" for Perl modules. Here is (part of) the listing from /usr/share/man/man1, to illustrate:

-rw-r--r-- 1 root 592 Apr 17 2012 411toppm.1.gz
-rw-r--r-- 1 root 3827 Tue 15:21:13 CA.pl.1ssl.gz
lrwxrwxrwx 1 root 17 Feb 19 2012 GET.1p.gz -> lwp-request.1p.gz
lrwxrwxrwx 1 root 17 Feb 19 2012 HEAD.1p.gz -> lwp-request.1p.gz
lrwxrwxrwx 1 root 17 Feb 19 2012 POST.1p.gz -> lwp-request.1p.gz
-rw-r--r-- 1 root 2490 Aug 29 2011 SOAPsh.1p.gz
-rw-r--r-- 1 root 2428 Aug 29 2011 XMLRPCsh.1p.gz
-rw-r--r-- 1 root 5112 Apr 5 2012 alien.1p.gz
-rw-r--r-- 1 root 3130 Oct 26 2012 apt-show-versions.1p.gz
-rw-r--r-- 1 root 4011 Tue 15:21:13 asn1parse.1ssl.gz
-rw-r--r-- 1 root 2847 Tue 15:21:13 c_rehash.1ssl.gz
-rw-r--r-- 1 root 9796 Tue 15:21:13 ca.1ssl.gz
-rw-r--r-- 1 root 6410 Tue 15:21:13 ciphers.1ssl.gz
-rw-r--r-- 1 root 8419 Tue 15:21:13 cms.1ssl.gz
-rw-r--r-- 1 root 6394 Jun 26 2012 cpanm.1p.gz
-rw-r--r-- 1 root 2631 Tue 15:21:13 crl.1ssl.gz
-rw-r--r-- 1 root 2636 Tue 15:21:13 crl2pkcs7.1ssl.gz
-rw-r--r-- 1 root 2272 Jun 19 2014 dbilogstrip.1p.gz
-rw-r--r-- 1 root 3255 Jun 19 2014 dbiprof.1p.gz

Additionally, the various directories are split up because, in systems using cat directories, the filenames would (usually) be duplicated. And finally, there's a split-up to keep directory sizes (relatively) small and improve performance.
Man directory layout -- why the subdirectories?