I realised that pdftk does not update the PageLabel metadata when using update_info_utf8. I've got a PDF file (let's call it file.pdf) which contains the metadata

    PageLabelBegin
    PageLabelNewIndex: 1
    PageLabelStart: 1
    PageLabelNumStyle: LowercaseLetters
    PageLabelBegin
    PageLabelNewIndex: 3
    PageLabelStart: 1
    PageLabelNumStyle: LowercaseRomanNumerals

If I issue the commands

    pdftk file.pdf dump_data_utf8 > data.txt
    pdftk file.pdf cat 1-end output file2.pdf
    pdftk file2.pdf update_info_utf8 data.txt output file2_updated.pdf

I would expect file2_updated.pdf to contain the same metadata as file.pdf. However, all PageLabel metadata is lost, although the Bookmark metadata, and hence the table of contents, is preserved. What is happening here? Did I make a mistake, or is this a bug in pdftk? For reference, I use version 2.02, which appears to be the newest one.
You are not doing anything wrong, pdftk never supported updating page labels (although the code suggests it was a planned feature). If you want to keep using software based on pdftk I suggest the fork pdftk-java, which implements this missing feature. Disclaimer: I maintain pdftk-java.
pdftk does not update PageLabel metadata
If I touch a new file, I see that its modification time is showing as the current time in my local time-zone rather than in UTC. If I were to copy that file to a machine that's in another time-zone, would it update to that machine's time-zone or would it still show the same time it showed on my machine? Do file modification times contain time-zone information?
They should be in Unix time: seconds since 1970-01-01T00:00 UTC. If you copy the file to another machine, the stored timestamp stays the same; it is merely displayed in that machine's local time. The above is true on Unix filesystems. On at least some Microsoft filesystems, the time is stored in local time (for backward compatibility with older MS operating systems).
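You can see both halves of this (UTC storage, local-time display) with GNU date, which formats the same stored epoch value under whatever TZ you give it (the zone names assume tzdata is installed):

```shell
# The stored value is just seconds since the epoch; only the display changes.
epoch=0
TZ=UTC date -d "@$epoch" '+%Y-%m-%d %H:%M %Z'                # 1970-01-01 00:00 UTC
TZ=America/New_York date -d "@$epoch" '+%Y-%m-%d %H:%M %Z'   # 1969-12-31 19:00 EST
```

The underlying number never changed; only the rendering did, which is exactly what happens when a file moves between machines in different time zones.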
Are file modification times in UTC or in the local time zone?
I tried to code a bash script that loops through a given directory and all its subdirectories and extracts metadata attributes with the mdls command into separate text files. Since I have a lot of filenames containing spaces and other special characters, my code base is derived from the answer "Looping through files with spaces in the names?". But after each file the script waits for pressing Enter/Return manually. How can I make it automatically loop through files and folders?

    #!/bin/bash
    # write metadata of files in separate file
    find . -type f -name '*.*' -exec sh -c '
      for file do
        echo "$file"
        mdls "$file" > "$file"_metadata.txt
        read line </dev/tty
      done
    ' sh {} +
The read line </dev/tty command is reading a line from the terminal (i.e., the keyboard). If you don't want to do that, delete that command.

Does the loop seem to be working well aside from that? (At a glance, it looks pretty much OK to me.) If not, edit your question to say what happens.

-name '*.*' says to look only at files whose names contain a period (a.k.a. "dot", .). If you want to exclude files whose names don't include a . (like data and README), then that's the perfect way to do it. But if you want to handle all files, leave off the -name clause entirely; find looks at all files by default, and clauses like -type f, -name something, -mtime value, etc., all specify restrictions (i.e., exclusions) to the search.
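The -name behavior described above is easy to see with a couple of throwaway files (the names are made up for illustration):

```shell
tmp=$(mktemp -d)
touch "$tmp/photo.jpg" "$tmp/README"

# Only names containing a dot are matched:
find "$tmp" -type f -name '*.*'    # lists photo.jpg only

# Without the -name clause, every regular file is found:
find "$tmp" -type f                # lists photo.jpg and README

rm -r "$tmp"
```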
Bash script for writing metadata attributes of files into separate text files
I have over 80,000 photos that have been given proper EXIF keywords, but Google Drive requires the data to be in the Description field for it to be searchable in their online app. I need to copy the contents of the Keywords entry to the Description on this mass of photos, which goes deep into subdirectories.
Using ExifTool you could just run:

    exiftool -TagsFromFile file.jpg '-Keywords>Description' file.jpg

You can find more info in the manpage for exiftool.
Copy EXIF meta data from the Keywords to the Description on a huge amount of photos in sub folders
This is just my comment from "How to cache or otherwise speed up `du` summaries?" formulated as a question of its own:

Are there any extended discussions about creating a filesystem in which the total size of each directory (think du) is saved and "bubbles up" (i.e., is propagated upwards in the tree so that all parent directory sizes are correct as well) whenever changed, e.g. due to a write to a file, a deletion, etc., so that du would be instant? From the answer I linked above, it's clear that I/O performance would suffer as a result of doing this; I just wonder by how much. Would it decrease by orders of magnitude or just a couple (dozen) percent?

Closely related to this is the concept of "bubbling up" mtimes in the same manner, so that each directory's mtime reflects the most recent change within its entire subtree. Both of these features together could, for instance, speed up rsync's --update mode considerably for trees with many deeply nested files.
Modern filesystems such as zfs / btrfs / bcachefs actually go in the opposite direction and allow / encourage sharing extents between files. With this, the notion of "how much data does a directory occupy" becomes less well-defined (though this was already true to some extent due to hard links): using reflinks, it's possible to create a directory that apparently contains much more data than would even fit on the filesystem (at least as far as simple disk analysis tools like du or ncdu can understand it).

One way to rephrase the question would be "how much free space would be freed if this directory was deleted", which is less ambiguous but not all that useful, because as soon as you create a snapshot, most directories will now have a unique size of 0 (because their data is also reachable via the snapshot).

I have also encountered these problems:

- It's difficult to understand space usage on filesystems which allow data sharing
- Analyzing the space usage on large filesystems takes too much time (I/O)

For this reason, I created btdu, a sampling disk usage profiler, which solves these problems for btrfs.

As for the general "bubbling up" concept: I'm not sure about other filesystems, but this does actually resemble the way btrfs works internally, where there is one main root (b-)tree which recursively references other trees. When any tree (many levels deep) is updated, a new copy is written elsewhere on the disk (hence the COW aspect of btrfs) and the parent is updated to point to the new copy, which in turn causes its parent to get updated in the same way, and so on up until the root tree. (In practice, the implementation employs many optimizations to preserve its invariants but keep the performance reasonable.)
Filesystem with "bubbling" directory size
When doing an mp3 → mp3 (or flac → mp3) conversion, -map_metadata can be used to copy metadata from the input file to the output file:

    ffmpeg -hide_banner -loglevel warning -nostats -i "${source}" -map_metadata 0 -vn -ar 44100 -b:a 256k -f mp3 "${target}"

However, when I use this, I notice that it doesn't copy all the metadata correctly. Inspecting the input and output files with the tool eyeD3, I see this:

    $ eyeD3 input.mp3
    input.mp3 [ 4.15 MB ]
    --------------------------------------------------------------------------------
    Time: 01:46     MPEG1, Layer III        [ 320 kb/s @ 44100 Hz - Stereo ]
    --------------------------------------------------------------------------------
    ID3 v2.3:
    title: Track title
    artist: Artist Name
    album: Album Name
    album artist: Various Artists
    composer: Composer Name
    recording date: 2019
    eyed3.id3:WARNING: Non standard genre name: Soundtracks
    track: 17/37        genre: Soundtracks (id None)
    disc: 1/1
    FRONT_COVER Image: [Size: 86555 bytes] [Type: image/jpeg]
    Description:
    PRIV: [Data: 42 bytes]
    Owner Id: Google/StoreId
    PRIV: [Data: 40 bytes]
    Owner Id: Google/StoreLabelCode
    --------------------------------------------------------------------------------

    $ eyeD3 path/to/output.mp3
    /tmp/test.mp3 [ 3.26 MB ]
    --------------------------------------------------------------------------------
    Time: 01:46     MPEG1, Layer III        [ 256 kb/s @ 44100 Hz - Stereo ]
    --------------------------------------------------------------------------------
    ID3 v2.4:
    title: Track title
    artist: Artist Name
    album: Album Name
    album artist: Various Artists
    composer: Composer Name
    recording date: 2019
    eyed3.id3:WARNING: Non standard genre name: Soundtracks
    track: 17/37        genre: Soundtracks (id None)
    disc: 1/1
    PRIV: [Data: 40 bytes]
    Owner Id: Google/StoreLabelCode
    PRIV: [Data: 42 bytes]
    Owner Id: Google/StoreId
    --------------------------------------------------------------------------------

Specifically, it's not copying the FRONT_COVER image correctly: somehow it's being dropped along the way.
How can I ensure that the FRONT_COVER Image is copied during the conversion process?
The front cover is treated as a video stream with a special disposition, and using -vn disables its processing. Use:

    ffmpeg -hide_banner -loglevel warning -nostats -i "${source}" -map_metadata 0 -c:v copy -disposition:v:0 attached_pic -ar 44100 -b:a 256k -f mp3 "${target}"
ffmpeg not copying FRONT_COVER image metadata during conversion
In our company we have a shared Windows folder. If I access it from Windows, I can open the Properties dialog of a file and find its metadata: who created it, last access times and so on. On Linux I have this folder mounted with mount -t cifs. I want to write a script which grabs some stats about folder usage. Is there any way to access this metadata from Linux?

UPD: I can't use getfacl, stat or ls -la for my task, because all of those give me only the local Linux username under which this folder is mounted, but not names from the Windows domain server.

UPD2: I mount the share with the command

    sudo mount -t cifs //data/Shared /mnt/Shared -o uid=1000,gid=1000,user=<my_windows_account_name>,dom=<my_domain>,pass=<my_windows_password>

where uid=1000 and gid=1000 are the uid and gid of my Linux account.
You're using

    mount -t cifs //data/Shared /mnt/Shared -o uid=1000,gid=1000,user=<my_windows_account_name>,dom=<my_domain>,pass=<my_windows_password>

What this tells the local system is two-fold:

1. Authenticate to the remote server with the credentials specified as the tuple { user, domain, password }
2. Fake all accesses to/from the remote share as if they are from the user account with UID 1000 and GID 1000

You need to continue using #1, although I would strongly recommend that you move the user credentials into a secure file that can be read only by root and the local user representing the account credentials. See man mount.cifs for the details:

    # As root...
    cat >/usr/local/etc/Shared.cifs <<'X'
    username=my_windows_account_name
    domain=my_domain
    password=my_windows_password
    X
    chmod u=rw,go= /usr/local/etc/Shared.cifs
    chown my_unix_account_name:root /usr/local/etc/Shared.cifs

    # Then mount becomes
    mount -t cifs //data/Shared /mnt/Shared -o credentials=/usr/local/etc/Shared.cifs,noperm

However, you need to stop using #2 and instead have your local client understand the names used within the AD context. That's too much for here, but the essentials are these:

- Install realmd and the samba dependencies
- Ensure your DNS servers are the AD domain servers (or local equivalents)
- Run realm discover to find and check that you can see the correct AD domain
- Run realm join {domain} to join to the domain

You will probably now want to deny logins from other AD users to your local system. The commands to review are variations of realm deny --all and realm permit --groups 'domain admins', along with AllowUsers and AllowGroups in /etc/ssh/sshd_config. If you're not a Domain Admin you'll need to change this accordingly. The man pages are quite good.

You can test that the join was successful with commands such as these:

    net ads testjoin
    getent passwd my_windows_account_name    # As above
    getent group "domain admins"             # An example group that will exist
How can I fetch file metadata on cifs mounted folder?
I have a folder with opus sound files. I would like to convert that folder listing to something I can use in a spreadsheet. I want the result to contain something like file name, length of the sound and file size. How can this be done?
I'd have recommended exiftool for that, as it can extract metadata from most types of files and format it in CSV:

    exiftool -csv -FileSize -Duration -- *.opus > list.csv

But at least my version of exiftool doesn't seem to support extracting duration from opus files. As mentioned by @dodrg, ffprobe can, though, and it also supports formatting its output as CSV:

    {
      echo filename,duration,size
      for file in *.opus; do
        ffprobe -v warning \
          -of csv=p=0 \
          -show_entries format=filename,duration,size \
          -i "$file"
      done
    } > list.csv

That's much slower though, as it looks like ffprobe cannot return information from more than one file at a time, so we need to run one invocation per file.

Beware the order of the fields seems to be fixed. Whether you specify -show_entries format=filename,duration,size or -show_entries format=size,filename,duration, the fields are always listed in filename, duration, size order.

You could also tell it to output everything in JSON format and reformat the information with the fields you want, in the order you want, with jq and mlr. mlr can also help you join information returned by exiftool with information returned by ffprobe. For example:

    exiftool -csv -- *.opus |
      mlr --csv join --ijson -f <(
        for file in *.opus; do
          ffprobe -v warning -show_format -of json -i "$file"
        done | jq .format) -j SourceFile -l filename > list.csv

Then add cut -f SourceFile,duration,size,Artist... to the mlr command to restrict the output to the fields you're interested in.
Convert directory with opus files to CSV or similar?
I would like to search for "ytversion" and replace it with "mqversion" in the title tag of multiple mp3 files. I would like this process not to edit/delete any other parts of the metadata content. Is this possible? If yes, which tools would I have to use?

I know I can search for a certain string in the metadata of multiple mp3 files. This is possible in EasyTag. However, how can I replace that particular string with another string in the pre-defined metadata field (the title field in the example above)? I do not need to use EasyTag, it's just what I had installed at some point.

I suppose the answer to my question relies on regular expressions, which I would definitely be OK with using. It's just that I do not know of any program (whether CLI or GUI) that is capable of employing them or actually implements them.
You can do this using the id3v2 tool, which should be in the repositories for your operating system (note that this solution assumes GNU grep, the default if you are running Linux):

    ## Iterate over all file/dir names ending in mp3
    for file in /path/to/dir/with/mp3/files/*mp3; do
      ## read the title and save it in the variable $title
      title=$(id3v2 -l "$file" | grep -oP '^(Title\s*|TIT2\s*.*\)):\K(.*?)(?=Artist:)')
      ## check if this title matches ytversion
      if [[ "$title" =~ "ytversion" ]]; then
        ## If it does, replace ytversion with mqversion and
        ## save in the new variable $newTitle
        newTitle=$(sed 's/ytversion/mqversion/g' <<<"$title")
        ## Set the title tag for this file to the value in $newTitle
        id3v2 -t "$newTitle" "$file"
      fi
    done

This is slightly complicated because the id3v2 tool prints the title and artist on the same line:

    $ id3v2 -l foo.mp3
    id3v1 tag info for foo.mp3:
    Title  : : title with mqversion string     Artist:
    Album  :                                   Year: , Genre: Unknown (255)
    Comment:                                   Track: 0
    id3v2 tag info for foo.mp3:
    TENC (Encoded by): iTunes v7.0
    TIT2 (Title/songname/content description): : title with mqversion string

The -o flag tells grep to only print the matching portion of a line, and -P enables PCRE regular expressions. The regex is searching for either a line starting with Title followed by 0 or more whitespace characters and then a : (^Title\s*:), or a line starting with TIT2 and then having a ): (^TIT2\s*.*\)). Everything matched up to that point is then discarded by the \K. It then searches for the shortest string of characters (.*?) followed by Artist: ((?=Artist:); this is called a positive lookahead and matches the string you are looking for without having it count in the match, so it isn't printed by grep).
Is it possible to search for a string and replace it with another string in the title tag of multiple mp3 files?
What I understand is, you could change the permission of a file for its owner by, say,

    chmod u=0 file.txt

In this case, we removed the r, w and x permissions for the owner of this file. But under what circumstances would we want to do that? If you are the file owner, why would you want to downgrade the permissions of your own file?
It is not a defense against an intelligent actor, because as owner, they could chmod() the file at any time, giving their permissions back. It might be useful against programs, if you want to prevent your own programs from playing with some of your files for whatever reason. However, typically it is more feasible to simply move the file away. It might also be useful if the underlying filesystem driver doesn't support chmod(). For example, on davfs or vfat, file modes are determined by the mount flags and not by the filesystem metadata.
Under what circumstances would a user/superuser change the permission of a file for its owner? [closed]
Many Linux players (like Audacious, Banshee, Amarok, Exaile) can easily access audio CD track info (names and other metadata), but Deadbeef cannot, although it has a cdda plugin that should do just that. Along with VLC, Amarok, Rhythmbox, Xine and Kaffeine, Deadbeef is one of the players that can read CD-Text (track info accessible offline, from the CD itself), but it oddly fails at accessing online CDDB, a somewhat trivial task these days. Can those settings be adjusted to make this work?
The problem is with the option "Prefer CD-Text over CDDB" (under Preferences > Plugins > Audio CD player > Configure): that should NOT be checked. When it was, instead of preferring CD-Text when available, it just showed CD-Text and never CDDB info. And as CD-Text is usually absent, it showed generic track names ('Track 1', 'Track 2', etc.) even if CDDB data was available and CD-Text was not. (It surely looks like a bug.)
Deadbeef audio player does not retrieve online (freedb, CDDB) info about CD tracks
I have a bunch of files that have just hashes as names and no file endings. (It's an iPhone backup to be precise.) I know there are SQLite databases amongst these files. How do I find them?
As a starting point, use the file command to identify the file type:

    find . -print0 | xargs -0 file

Result:

    ./.X11-unix:  sticky directory
    ./.Test-unix: sticky directory
    ./test.db:    SQLite 3.x database

Then add some grepping to filter out results.
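As a sketch of the full pipeline, here's the grepping step on fabricated test files (the "database" below is just an empty file carrying the SQLite magic string, which is enough for file to recognize it):

```shell
tmp=$(mktemp -d)
printf 'SQLite format 3\000' > "$tmp/0a1b2c3d"   # fake: only the 16-byte magic header
touch "$tmp/otherfile"                           # something that is not a database

# Keep only the paths that `file` identifies as SQLite, stripping its description:
find "$tmp" -type f -print0 | xargs -0 file | grep 'SQLite' | cut -d: -f1

rm -r "$tmp"
```

The cut -d: -f1 at the end leaves you with a clean list of paths you can feed to further commands. (Note this naive split breaks on filenames containing a colon.)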
How do I find all sqlite databases inside a bunch of files without file endings?
I need to generate a list of all files and directories with their permissions and owner, e.g.

    -rw-rw-r-- black www-data foo/
    -rw-r--r-- black www-data foo/foo.txt
    -rw-rw-r-- black www-data bar/
    -rwxrwxr-x black www-data bar/foo.sh

I need this list to compare it with another instance where I have a bug.
If you have GNU find, you can use its printf action:

    find . -printf "%M %u %g %p\n"

This will list all files in the current directory and any subdirectories, with their type and permissions in ls style, owner and group, and full name starting from the current directory. If you want consistent spacing, you can use field width specifiers, e.g.

    find . -printf "%M %-20u %-20g %p\n"

You can output tabs with \t.
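Since the point is to compare against another instance, it can help to sort each listing so that diff lines the two snapshots up (the file paths below are placeholders):

```shell
# On each machine, generate a sorted listing (GNU find):
find . -printf "%M %u %g %p\n" | sort > /tmp/perms-local.txt

# After copying the other machine's listing over, show only entries that differ:
diff /tmp/perms-local.txt /tmp/perms-other.txt
```

diff exits non-zero when the listings differ, which also makes this usable inside a script condition.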
Get list of all files and directories with their permission and owners
I'm studying the ext4 filesystem and am confused by the 128-byte inode size, because it appears to conflict with the last metadata value it stores, which is supposed to be at byte offset 156. In this documentation it states that inodes are 128 bytes in length. I called dumpe2fs on an unmounted /dev/sdb1. The dumpe2fs result corroborates that the inode size is 128. But I'm confused because this documentation delineates the metadata stored in the inode. For each entry of metadata there is a corresponding physical offset. The last entry is the project id. Its offset is 0x9c (which is 156 as an integer). It appears the metadata offsets exceed the allocated size of the inode. What am I misunderstanding here?
> it states that inodes are 128 bytes in length

No. It states that [emphasis mine]:

> […] each inode had a disk record size of 128 bytes. Starting with ext4, it is possible to allocate a larger on-disk inode at format time for all inodes in the filesystem to provide space beyond the end of the original ext2 inode. The on-disk inode record size is recorded in the superblock as s_inode_size. The number of bytes actually used by struct ext4_inode beyond the original 128-byte ext2 inode is recorded in the i_extra_isize field for each inode […]

By default, ext4 inode records are 256 bytes, and (as of August 2019) the inode structure is 160 bytes (i_extra_isize = 32).

Your doubt:

> The last entry is the project id. Its offset is 0x9c (which is 156 as an integer). It appears the metadata offsets exceed the allocated size of the inode.

The last entry starts at offset 156 and takes 4 bytes (__le32). It's within the default 160 bytes. If dumpe2fs says the inode size is 128 for your filesystem, this means the filesystem uses the original 128-byte ext2 inode. There is no i_extra_isize (it would be at offset 0x80, decimal 128) or anything specified beyond.
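As a quick check of the arithmetic, shell arithmetic expansion converts the hex offset and confirms that the 4-byte project-id field fits exactly within the 160 bytes actually used by the inode structure:

```shell
# i_projid starts at offset 0x9c and is a __le32 (4 bytes):
echo $((0x9c))        # 156
echo $((0x9c + 4))    # 160 -> the end of the field coincides with the 160-byte struct size
```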
Why do inode offset values appear to exceed inode size?
When we buy an N-gigabyte flash drive, the free space the OS reports is less than N gigabytes. For example, for a 2 GB flash drive, the total space we can use is 1.86 GB. As far as I know, the difference is due to metadata. Is that right? My question: is there any command or program in Linux to see or use the whole 2 GB of space? Can I see that metadata and the filesystem structures? I appreciate your time and consideration.
The manufacturer sold you the 2 GB USB stick as 2 gigabytes, meaning 2,000,000,000 bytes. Your computer is showing the stick in units of gibibytes (GiB). 1 gibibyte is 1024 x 1024 x 1024 bytes, which is 1,073,741,824 bytes. If you divide your 2,000,000,000 by 1,073,741,824 you'll end up with 1.86264514923095703125 or, rounded to two decimal places, 1.86 GiB. In other words, 2 GB = 1.86 GiB. Computers tend to work with GiB as it's a power of 2 (1 GiB = 2^30) while humans (and disk manufacturers [who are human after all]) work with GB as it's a power of 10 (1 GB = 10^9).
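The arithmetic above is easy to verify from the shell with awk (any POSIX awk will do):

```shell
# 2 GB (decimal, as advertised) expressed in GiB (binary, as the OS reports):
awk 'BEGIN { printf "%.2f\n", 2000000000 / (1024 * 1024 * 1024) }'   # 1.86
```

The same one-liner works for any capacity: swap in 500000000000 for a "500 GB" disk and you get the familiar ~465.66 GiB.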
Use/See whole the flash memory space
ImageMagick 6.9.11-60 on Debian. How do I print the date & time a photo was taken on the image itself (on existing images)? I have set this option in the camera settings, but it only applies to new photos. The date is in the image metadata. It should be the actual date the photo was taken, not the date it was saved on the PC hard drive.
A lot will vary depending on the images you have and the metadata they hold, but for example with ImageMagick you can imprint the EXIF date and time from myphoto.jpg with

    magick myphoto.jpg -fill black -undercolor white -pointsize 96 -gravity southeast \
      -annotate 0 ' %[exif:datetime] ' output.jpg

If you don't have the magick command, convert will work just as well. If you want a non-ISO date format, you can extract the date first with identify and manipulate it with awk or other tools, e.g.

    identify -format '%[exif:datetime]' myphoto.jpg |
      awk -F'[ :]' '{print $3":"$2":"$1 " " $4":"$5}'

would change 2023:10:04 09:29:24 to 04:10:2023 09:29. If your image does not have EXIF data, you might be able to find other time information, e.g. '%[date:create]'. Use identify -verbose on the file to list all the properties. To calculate a suitable pointsize for the annotation you can similarly use identify to find the width of the image:

    identify -format '%w' myphoto.jpg

See imagemagick for many examples and alternatives. Here's a small shell script:

    #!/bin/bash
    let ps=$(identify -format '%w' input.jpg)/20
    # 2023:10:04 09:29:24
    datetime=$(identify -format %[exif:datetime] input.jpg)
    if [ -z "$datetime" ]
    then
        # date:create: 2023-11-11T14:40:21+00:00
        datetime=$(identify -format %[date:create] input.jpg)
    fi
    datetime=$(echo "$datetime" | gawk -F'[- :T+]' '{print $3":"$2":"$1 " " $4":"$5}')
    magick input.jpg -fill black -undercolor white -pointsize "$ps" -gravity southeast \
        -annotate 0 "$datetime" output.jpg
Imagemagick, how to print date the photo was taken on the image
I recovered images from an Android LOST.DIR folder. The data was recovered successfully; now I'd like to set the modified timestamp of each file to be equal to the created-on value in the binary data. I'm using Ubuntu 19.10. In this case, I'd like both modified and created-on to equal 2019:06:03; the format doesn't matter. The solution should support looping over all files in the folder.
You need an EXIF tool to retrieve the image creation timestamp, then use touch to set the filesystem timestamp accordingly. I just tried this shell script (e.g. ex.sh) under Arch Linux, where I installed perl-image-exiftool:

    #! /bin/bash
    for fn; do
        ls -l "$fn"
        touch -m -t "$(exiftool -createdate -d '%Y%m%d%H%M.%S' -s3 "$fn")" "$fn"
        ls -l "$fn"
        echo "------------------"
    done

You can omit the echo and ls lines; they're just there to display the before and after timestamps of the files. Invoke it as

    ./ex.sh *.jpg

or

    ./ex.sh 01.jpg 02.jpg
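If you want to check the touch half in isolation, the snippet below uses a hard-coded timestamp standing in for exiftool's output, so it runs without exiftool installed (the file name is made up):

```shell
tmp=$(mktemp -d)
touch "$tmp/photo.jpg"

# Pretend exiftool returned 2019-06-03 00:00.00 in touch's CCYYMMDDhhmm.SS format:
touch -m -t 201906030000.00 "$tmp/photo.jpg"

# GNU date -r reads a file's modification time back out:
date -r "$tmp/photo.jpg" '+%Y:%m:%d'    # 2019:06:03

rm -r "$tmp"
```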
How to set modified file's metadata to equal the created_on value of recovered images?
I need to upload my photos to Shutterfly, but they don't seem to support EXIF data on PNG/BMP images. I need to thus convert to JPG for some prints, but I want to lose as little information as possible because the prints will be large format. Is there any way to efficiently bulk convert PNG/TIFF images to JPG with a minimal loss in quality (using imagemagick or otherwise)? The relevant point is that they will print location/time on the print if included in the metadata. I would like to make use of that feature.
Using ImageMagick you can set the quality of output images like this:

    convert input_file.bmp -quality 90 output_file.jpg

Of course you can set the quality to 95 (or even to 100) to check the result, but IMHO 90 is a very good value (and a good balance between quality and size). If you want to do a bulk conversion you can use the command:

    mogrify -format jpeg -quality 90 *.bmp
Bulk convert from PNG/TIFF to JPG with minimal loss in quality
When I overwrite one file with another, not only is its modification time updated, but also its birth time, which is unwanted. I want to make Dolphin overwrite files in such a way that the target's birth time stays the same as it was before overwriting. But if it's impossible to configure Dolphin that way, any other method will be appreciated. Debian 12.
Birth time can’t be set arbitrarily, other than by setting the “context” time (system time typically). This means that it isn’t possible to copy a file to another and have the copy preserve the original’s birth time. (See Why does this use of `cp -a` not preserve creation time? for details.) It is however possible to preserve an existing target’s birth time (and I get the impression that’s what you’re after). To do so, the copy operation has to ensure that the target isn’t replaced entirely, only its contents; more explicitly, instead of deleting the existing target and creating a new file for the copy, the existing target should be emptied (truncated) and the source data copied to it. This is how cp is supposed to operate, unless an error prevents it from truncating or writing to the existing target. So one way of achieving your goal is to use plain old cp to copy your files. You should also ensure, before starting the copy operation, that all the existing target files are writable.
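One way to convince yourself of this behaviour is to check that the target's inode number survives the copy (shown with GNU coreutils stat; keeping the same inode is what lets the birth time survive):

```shell
tmp=$(mktemp -d)
printf 'old contents\n' > "$tmp/target"
printf 'new contents\n' > "$tmp/source"

before=$(stat -c %i "$tmp/target")
cp "$tmp/source" "$tmp/target"      # truncates and rewrites the existing inode
after=$(stat -c %i "$tmp/target")

[ "$before" = "$after" ] && echo "same inode, so birth time is preserved"
rm -r "$tmp"
```

By contrast, a tool that deletes the target and creates a fresh file would produce a new inode, and with it a new birth time.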
How to preserve file creation date when overwriting files?
Using Ubuntu 20.04, I can right-click on a jpg and select 'Properties'. A window will open containing the tab 'Image'. In this tab, there is a section called 'Keywords', the content of which I would like to retrieve from the terminal. I tried identify -verbose example.jpg, exif example.jpg, and file example.jpg, but none of these approaches delivered the keywords. Does anybody know how I could achieve this? The goal is to create a folder for each keyword (if not already existing) and put each jpg in the respective folder. I want to write this in a shell script, which I will initiate using a personal command. Any suggestions on how to write this script are also more than welcome. Thanks in advance!

Edit 8 Feb 21: Following this thread, I converted the jpg to xmp. The metadata is available in the xmp. How can I read it out easily?
Let's try this option, using the exiv2 tool:

    sudo apt install exiv2

Then we can print the XMP data like this:

    $ exiv2 -P X image.jpg
    Xmp.iptc.Keywords                            XmpBag      1  Some tag
Extract jpg property 'keyword' from terminal
I found the opusenc command (from the opus-tools package in Debian) to edit Opus file metadata. But, according to its man page synopsis (and other sections of the man page), it doesn't provide a way to edit the album_artist tag (unlike metaflac for FLAC files or id3v2 for MP3 files):

    opusenc [ -h ] [ -V ] [ --help-picture ] [ --quiet ] [ --bitrate kbit/sec ] [ --vbr ]
            [ --cvbr ] [ --hard-cbr ] [ --comp complexity ]
            [ --framesize 2.5, 5, 10, 20, 40, 60 ] [ --expect-loss pct ] [ --downmix-mono ]
            [ --downmix-stereo ] [ --max-delay ms ] [ --title 'track title' ]
            [ --artist author ] [ --album 'album title' ] [ --genre genre ]
            [ --date YYYY-MM-DD ] [ --comment tag=value ] [ --picture filename|specification ]
            [ --padding n ] [ --discard-comments ] [ --discard-pictures ] [ --raw ]
            [ --raw-bits bits/sample ] [ --raw-rate Hz ] [ --raw-chan N ]
            [ --raw-endianness flag ] [ --ignorelength ] [ --serial serial number ]
            [ --save-range file ] [ --set-ctl-int ctl=value ] input.wav output.opus

So, simply: how do I add an album_artist tag to an opus file?
The raw Opus codec does not define tags at all: it is a pure encoding format designed for streaming (https://datatracker.ietf.org/doc/html/rfc6716). For storage, Opus is placed in an Ogg container, and it's the container that supports tags (as Vorbis comments); a regular .opus file is already Opus-in-Ogg, so arbitrary tags can be stored there. That's why opusenc only offers a generic --comment tag=value option (visible in the synopsis above) rather than a dedicated album-artist flag; at encode time you can use

    opusenc --comment "ALBUMARTIST=Some Artist" input.wav output.opus

and for an already encoded file, a dedicated comment editor such as opustags can write the same tag. As a variant you can convert to mp3 or flac, but these two will re-encode the audio stream.
Add “album_artist” tag to an opus file
I have a directory of recordings with

    foo - bar-202009.opus
    unix - tilde-2020se.opus
    stack - exchange-29328f.opus

I'm trying to set the metadata from the file name; for foo - bar.202009.opus (there is some inconsistency with -, so the easy way is scrapping -<6digitchars>.opus), the song title should be foo - bar. Using sed to scrap:

    $ ls | sed 's/-[a-zA-Z0-9]*\w//g; s/.opus//g'
    foo - bar
    unix - tilde
    stack - exchange

I use id3tag to set it; the syntax is id3tag --song=[title] [file]. I have all those in a directory, so I iterate with while & read:

    ls | while read x; \
    do id3tag \
    $(echo \
    --song=\"$(echo $x | sed 's/-[a-zA-Z0-9]*\w//g; s/.opus//g')\" \
    \"${x}\" ); \
    done

The problem with the above is that the spaces in the input file name cause id3tag to interpret each word as another file (even when enclosed with \"${x}\"), so the output right now is

    foo 'bar"' '"bar-202009.opus"'
    .
    .

Is there a way for id3tag to see a file with spaces as a single file?
Avoid parsing ls; loop over a glob instead:

    #!/bin/sh
    for file in *; do
        scrap="$(echo "$file" | sed 's/-[a-zA-Z0-9]*\w//g; s/.opus//g')"
        id3tag --song="${scrap}" "${file}"
    done

Quoting "${scrap}" and "${file}" keeps each argument as a single word, so file names containing spaces are passed to id3tag as one file.
ID3tag - Script for extracting metadata from filename
Hello, I am writing a bash script; this script must set an ID on a PDF file. Is there a way to do this? As an example, using the bash shell, the document ID is:

    $ exiftool example.pdf | grep 'Document ID'
    Document ID                     : uuid:d037451d-240e-4d82-ba6d-92390b1d2962

For example:

    $ pdftool --setDocID "newID" example.pdf
exiftool -DocumentID="uuid:$newID" example.pdf See the examples in man exiftool.
How to I set document ID or ID to PDF file via terminal?
1,391,337,837,000
It seems I am misusing grep/egrep. I was trying to search for strings spanning multiple lines and could not find a match, while I know that what I'm looking for should match. Originally I thought that my regexes were wrong, but I eventually read that these tools operate per line (and my regexes were so trivial that they could not be the issue). So which tool would one use to search for patterns across multiple lines?
Here's a sed one that will give you grep-like behavior across multiple lines: sed -n '/foo/{:start /bar/!{N;b start};/your_regex/p}' your_file How it works -n suppresses the default behavior of printing every line /foo/{} instructs it to match foo and do what comes inside the squigglies to the matching lines. Replace foo with the starting part of the pattern. :start is a branching label to help us keep looping until we find the end to our regex. /bar/!{} will execute what's in the squigglies to the lines that don't match bar. Replace bar with the ending part of the pattern. N appends the next line to the active buffer (sed calls this the pattern space) b start will unconditionally branch to the start label we created earlier so as to keep appending the next line as long as the pattern space doesn't contain bar. /your_regex/p prints the pattern space if it matches your_regex. You should replace your_regex by the whole expression you want to match across multiple lines.
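For a concrete run (my own example, not from the original answer), with foo as the start, bar as the end, and foo.*bar as the full expression; the script is spelled out with newlines, which also sidesteps label-parsing quirks in some sed implementations:

```shell
# Collect lines from foo up to bar into one pattern space, then test it
printf 'x\nfoo\nmid\nbar\ny\n' | sed -n '
/foo/{
:start
/bar/!{N;b start
}
/foo.*bar/p
}'
# → foo
# → mid
# → bar
```

Note that inside the pattern space, . matches the embedded newlines, so foo.*bar spans all three lines.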
How can I "grep" patterns across multiple lines?
1,391,337,837,000
I would like to list the files, recursively and uniquely, that contain the given word. Example: checking for the word 'check', what I normally do is a grep $ grep check * -R But as there are many occurrences of this word, I get a lot of output. So I just need to list the filenames that contain the given search word. I guess some trick with find and xargs would suffice here, but I'm not sure. Any ideas?
Use the -l or --files-with-matches option, which is documented as follows: Suppress normal output; instead print the name of each input file from which output would normally have been printed. The scanning will stop on the first match. (-l is specified by POSIX.) So, for your example you can use the following: $ grep check * -lR
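A quick illustrative run in a scratch directory (the file names are made up):

```shell
cd "$(mktemp -d)"                      # scratch directory
printf 'please Check this\n' > a.txt   # contains the word (mixed case)
printf 'nothing here\n'      > b.txt   # does not
grep -li check ./*.txt
# → ./a.txt
```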
List the files containing a particular word in their text
1,391,337,837,000
I occasionally search through files in vim or less using / or ?, but as far as I can tell, the search patterns are case sensitive. So for example, /foo won't find the same things that /FOO will. Is there an easy way to make it less strict? How can I search in vim or less for a pattern that is NOT case sensitive?
In vi or vim you can ignore case by :set ic, and all subsequent searches will consider the setting until you reset it by :set noic. In less there are options -i and -I to ignore case.
How can I search in vim for a pattern that is NOT case sensitive?
1,391,337,837,000
I have a font with the name Media Gothic. How can I find the file name of that font in Linux? I need to copy that file to another system. I've tried: find /usr/share/fonts/ -name '*media*' But this gives no results. gothic gives some other fonts. TTF is a binary format so I can't use grep.
Have you tried: fc-list | grep -i "media" Also give fc-scan and fc-match a try.
Find font file from font name on Linux
1,391,337,837,000
I downloaded a video a few months back. I don't remember very well the name it was saved under. Is there any command or method that will output only video files, so that I can search for my video there? From the man pages, I couldn't find any option of find doing that work. The video file I downloaded may have any extension (like webm etc.), and it is also possible that I changed the name at the time to something like 'abcde', which I don't remember now. So the search can't be based on the file name! (Just mentioning one similarity: in Perl there are ways to check whether a file is a text file or binary, etc. Similarly there may be a way to check whether a file is a video or multimedia file.)
The basic idea is to use the file utility to determine the type of each file, and filter on video files. find /some/directory -type f -exec file -N -i -- {} + | sed -n 's!: video/[^:]*$!!p' This prints the names of all files in /some/directory and its subdirectories recursively whose MIME type is a video type. The file command needs to open every file, which can be slow. To speed things up: Restrict the command to the directory trees where it's likely to be, such as /tmp, /var/tmp and your home directory. Restrict the search to files whose size is in the right ballpark, say at least 10MB. Restrict the search to files whose modification time is in the right ballpark. Note that downloading the file may have set its modification time to the time of download or preserved the time, depending on what program you used to download (and with what settings). You may also filter on the inode change time (ctime), which is the time the file was last modified or moved around (created, renamed, etc.) in any way. Here's an example which constrains the modification time to be at least 60 days ago and the ctime to be no more than 100 days ago. find /tmp /var/tmp ~ -type f -size +10M \ -mtime +60 -ctime -100 \ -exec file -N -i -- {} + | sed -n 's!: video/[^:]*$!!p'
How to search for video files on Ubuntu?
1,391,337,837,000
I want to find all *.h,*.cpp files in folders with defined mask, like */trunk/src*. So, I can find separately *.h and *.cpp files: find . -path "*/trunk/src/*.h" find . -path "*/trunk/src/*.cpp" What is the best way to get the file-list both of types (*.h and *.cpp)? PS I'd like to pipe the list to grep.
You can use -o for "or": find . -path '*/trunk/src/*.h' -o -path '*/trunk/src/*.cpp' which is the same as find . -path '*/trunk/src/*' \( -name '*.h' -o -name '*.cpp' \) If you want to run grep on these files: find . \( -path '*/trunk/src/*.h' -o -path '*/trunk/src/*.cpp' \) -exec grep PATTERN {} + or find . -path '*/trunk/src/*' \( -name '*.h' -o -name '*.cpp' \) -exec grep PATTERN {} +
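A quick demonstration of the grouping in a scratch tree (file names are made up); the parentheses matter because the implied AND binds tighter than -o:

```shell
cd "$(mktemp -d)"
mkdir -p proj/trunk/src
touch proj/trunk/src/x.h proj/trunk/src/y.cpp proj/trunk/src/z.c
find . -path '*/trunk/src/*' \( -name '*.h' -o -name '*.cpp' \) | sort
# → ./proj/trunk/src/x.h
# → ./proj/trunk/src/y.cpp
```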
Get list of all files by mask in terminal
1,391,337,837,000
given the following structure: oz123@debian:~/ $ tree . . ├── a │ ├── a1 │ ├── a2 │ └── a3 ├── a1 │ ├── a11 │ ├── a12 │ └── a31 ├── b │ └── b1 │ ├── b11 │ │ └── b21 │ │ └── b31 │ ├── b12 │ └── b3 └── c 16 directories, 0 files How do I find all the end nodes? I found the following solution, which seems to be good, but I have to prove that there is no test case that will fail it. The help page of the -links option states: You can also search for files that have a certain number of links, with ‘-links’. Directories normally have at least two hard links; their . entry is the second one. If they have subdirectories, each of those also has a hard link called .. to its parent directory. The . and .. directory entries are not normally searched unless they are mentioned on the find command line. possible solution: oz123@debian:~/ $ find . -type d -links 2 ./a/a2 ./a/a3 ./a/a1 ./c ./a1/a31 ./a1/a11 ./a1/a12 ./b/b1/b12 ./b/b1/b3 ./b/b1/b11/b21/b31 Can anyone provide a better solution (without using pipes and sed; this has to be performant ...)? Will it work on any filesystem?
As an addition to your own solution with -links, I want to just add that it will not work on filesystems that do not follow the Unix directory-link convention. From man find on option -noleaf these are at least CD-ROM, MS-DOS filesystems and AFS volume mount points. For a reference, this question was already discussed with different solutions which are indeed slower and usually resort to piping to sed / awk and similar. https://stackoverflow.com/questions/1574403/list-all-leaf-subdirectories-in-linux https://stackoverflow.com/questions/4269798/use-gnu-find-to-show-only-the-leaf-directories
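If you cannot rely on the directory link-count convention at all (the filesystems mentioned above), here is a filesystem-agnostic sketch (my own addition, not from the linked answers): it reverse-sorts the directory list and keeps a path only when the previously printed path is not strictly below it. It assumes path names without newlines:

```shell
# A directory is a leaf iff no previously seen (reverse-sorted) path
# lies strictly below it.
find . -type d | sort -r | awk 'index(prev, $0 "/") != 1; {prev = $0}'
```

The trailing "/" in the comparison prevents a sibling such as ./foo from being mistaken for an ancestor of ./foobar.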
find all end subdirectories in a tree
1,391,337,837,000
If I want to search for a file in the system, I use the following command: sudo find `pwd` -name filename.ext I want to make an alias for an easier word like search, so I used the command: alias search "find `pwd` -name " The problem is that the command translates the pwd part to the actual path I'm in now. When I type simply alias to see the list of aliases, I see: search find /path/to/my/homedir -name How can I avoid this?
Use single quotes to avoid shell expansion at time of definition alias search='find `pwd` -name '
Trying to use `pwd` inside an alias giving unexpected results [duplicate]
1,391,337,837,000
I have a large music collection stored on my hard drive; and browsing through it, I found that I have a lot of duplicate files in some album directories. Usually the duplicates exist alongside the original in the same directory. Usually the format is filename.mp3 and duplicate file is filename 1.mp3. Sometimes there may be more than one duplicate file, and I have no idea if there are duplicate files across folders (for example duplicates of album directories). Is there any way I can scan for these duplicate files (for example by comparing filesize, or comparing the entire files to check if they are identical), review the results, and then delete the duplicates? The ones that have a longer name, or the ones that have a more recent modified/created date would usually be the targets of deletion. Is there a program out there that can do this on Linux?
There is such a program, and it's called rdfind: SYNOPSIS rdfind [ options ] directory1 | file1 [ directory2 | file2 ] ... DESCRIPTION rdfind finds duplicate files across and/or within several directories. It calculates checksum only if necessary. rdfind runs in O(Nlog(N)) time with N being the number of files. If two (or more) equal files are found, the program decides which of them is the original and the rest are considered duplicates. This is done by ranking the files to each other and deciding which has the highest rank. See section RANKING for details. It can delete the duplicates, or replace them with symbolic or hard links.
Search and Delete duplicate files with different names
1,391,337,837,000
I want to search for the lines that contains any of the following characters: : / / ? # [ ] @ ! $ & ' ( ) * + , ; = %
grep "[]:/?#@\!\$&'()*+,;=%[]" Within a bracketed expression, [...], very few character are "special" (only a very small subset, like ], - and ^, and the three combinations [=, [: and [.). When including ] in [...], the ] must come first (possibly after a ^). I opted to put the ] first and the [ last for symmetry. The only other thing to remember is that a single quoted string can not include a single quote, so we use double quotes around the expression. Since we use a double quoted string, the shell will poke around in it for things to expand. For this reason, we escape the $ as \$ which will make the shell give a literal $ to grep, and we escape ! as \! too as it's a history expansion in bash (only in interactive bash shells though). Would you want to include a backslash in the set, you would have to escape it as \\ so that the shell gives a single backslash to grep. Also, if you want to include a backtick `, it too must be escaped as \` as it starts a command substitution otherwise. The command above would extract any line that contained at least one of the characters in the bracketed expression. Using a single quoted string instead of a double quoted string, which gets around most of the annoyances with what characters the shell interprets: grep '[]:/?#@!$&'"'"'()*+,;=%[]' Here, the only thing to remember, apart from the placing of the ], is that a single quoted string can not include a single quote, so instead we use a concatenation of three strings: '[]:/?#@!$&' "'" '()*+,;=%[]' Another approach would be to use the POSIX character class [[:punct:]]. This matches a single character from the set !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~, which is a larger set than what's given in the question (it additionally contains "-.<>^_`{|}~), but is all the "punctuation characters" that POSIX defines. LC_ALL=C grep '[[:punct:]]'
Search for special characters using grep
1,391,337,837,000
I need to find all xml-files that are placed in folders named config. Also config must be somewhere under a folder named trunk. For example, I am interested in all files like below: ~/projects/e7/trunk/a/b/c/config/foo.xml ~/projects/d/trunk/config/bar.xml ~/projects/trunk/config/other.xml ~/projects/e/e/e/trunk/e/e/e/e/e/e/e/e/config/eeeee.xml I tried the find command: find ~/projects -regex "*/trunk/*/config/*.xml" , but the output was empty. What is the correct way to find the required files?
That's not a regex; it's a shell-style glob pattern (and find's -regex matches the whole path against a real regular expression). For glob patterns, use the -path predicate instead.
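For instance (a made-up layout mirroring the question), two -path tests can be combined (implicitly ANDed) to cover both a config directly under trunk and one nested deeper:

```shell
cd "$(mktemp -d)"
mkdir -p projects/d/trunk/config projects/e7/trunk/a/b/c/config projects/x/config
touch projects/d/trunk/config/bar.xml \
      projects/e7/trunk/a/b/c/config/foo.xml \
      projects/x/config/no.xml          # no trunk above it, must not match
find projects -path '*/trunk/*' -path '*/config/*.xml' | sort
# → projects/d/trunk/config/bar.xml
# → projects/e7/trunk/a/b/c/config/foo.xml
```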
How to find files with a certain subpath?
1,391,337,837,000
I attempted find -name 'a*' 'z*' '*a' '*z' but it gave me the error find: paths must precede expression: z* I know how to find files starting with a through z, or ending with a-z, but not starting with specific letters.
Assuming I understood your question, you are possibly overcomplicating it. This should do find your_directory -type f -name '[az]*[az]' This omits files whose name is a single letter a or z. If you also want to include them, you need to specify another pattern: the name must match either [az]*[az] or [az]. find your_directory -type f \( -name '[az]*[az]' -o -name '[az]' \)
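A quick check in a scratch directory (made-up names):

```shell
cd "$(mktemp -d)"
touch az aza apple mango a
find . -type f \( -name '[az]*[az]' -o -name '[az]' \) | sort
# → ./a
# → ./az
# → ./aza
```

apple is rejected because it ends in e, and mango neither starts nor ends with a or z.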
How search for a file beginning with either a or z and ending with a or z?
1,391,337,837,000
In our working group we used Recoll on an Ubuntu PC to index all the PDFs. A while ago we moved everything to a Red Hat server. Is there a Recoll alternative which doesn't require a GUI and supports searching through a web interface?
You can use Recoll from the command line if you want to.
What tools can I use for PDF indexing?
1,391,337,837,000
I want to check for the absence of the following sequence of characters, $$$$$ (i.e., 5 dollar signs), in a JSON file using grep, as it has been used instead of a comma to separate fields and I need to make sure this did not cause conflicts with an existing similar sequence. However, when I grep $, I get a similar number of lines. It seems that $ is a special character for end of line? How can I search for $$$$$ using grep? Is $ a special character?
It seems that $ is a special character for end of line? Yep, exactly. And there's an end-of-line on each and every line. You'll need to use \$\$\$\$\$ as the pattern, or use grep -F '$$$$$', to tell grep to use the pattern as a fixed string instead of a regular expression. Or a shorter version regex pattern: \$\{5\} in basic regex or \${5} in extended regular expressions (grep -E). (In basic regular expressions (plain grep), you really only need to escape the last dollar sign, so $$$$\$ would do instead of \$\$\$\$\$. But in extended (grep -E) and Perl regular expressions they all need to be escaped so better do that anyway.)
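A quick check of the -F variant on made-up input:

```shell
# -F takes the pattern as a fixed string, so nothing needs escaping
printf 'price $$$$$ here\nno dollars\n' | grep -F '$$$$$'
# → price $$$$$ here
```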
How to grep '$$$$$'? [duplicate]
1,391,337,837,000
I have a large set of files in a directory. The files contain arbitrary text. I want to search for the file name inside that particular file's text. To clarify, I have file1.py.txt (yes, two dots, .py.txt) and file2.py.txt, both containing text. I want to search for the existence of the string @code prefix.file1.py inside file1.py.txt and for the string @code prefix.file2.py inside file2.py.txt How can I customize grep such that it goes through every file in the directory, searching for the string in each file using that particular file name? EDIT: The output I am looking for is written in a separate file, result.txt, which contains: filename (if a match is found), the line text (where the match is found)
With GNU awk: gawk ' BEGINFILE{search = "@code prefix." substr(FILENAME, 3, length(FILENAME) - 6)} index($0, search)' ./*.py.txt Would report the matching lines. To print the file name and matching line, change index($0, search) to index($0, search) {print FILENAME": "$0} Or to print the file name only: index($0, search) {print FILENAME; nextfile} Replace FILENAME with substr(FILENAME, 3) to skip outputting the ./ prefix. The list of files is lexically sorted. The ones whose name starts with . are ignored (some shells have a dotglob option to add them back; with zsh, you can also use the (D) glob qualifier).
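A plain-sh sketch of the same idea for systems without GNU awk (my own addition; the file names are illustrative):

```shell
# For each file, derive the marker from its own name, then grep for it.
for f in ./*.py.txt; do
  base=${f#./}                         # e.g. file1.py.txt
  marker="@code prefix.${base%.txt}"   # e.g. @code prefix.file1.py
  grep -F -- "$marker" "$f" /dev/null  # /dev/null makes grep show the filename
done
```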
Searching for a string that contains the file name
1,391,337,837,000
I'm working with XML files, each of which could be dozens of lines long. There are literally hundreds of these files, all over a directory structure. Yes, it is Magento. I need to find the file that has the <foo><bar><boom><bang> element. A <boom><bang> tag could be defined under other tags, so I need to search for the full path, not just the end tag or tags. There could be dozens of lines between each tag, and other tags between them: <foo> <hello_world> ... 50 lines .... </hello_world> <bar> <giraffe> ... 50 lines .... </giraffe> <boom> <bang>Vital information here</bang> </boom> </bar> </foo> What is the elegant, *nix way of searching for the file that defines <foo><bar><boom><bang>? I'm currently on an up-to-date Debian-derived distro. This is my current solution, which is far from elegant: $ grep -rA 100 foo * | grep -A 100 bar | grep -A 100 boom | grep bang | grep -E 'foo|bar|boom|bang'
You could try xmlstarlet to select if the path exists then output the filename: find . -name '*.xml' -exec xmlstarlet sel -t -i '/foo/bar/boom/bang' -f -n {} +
Find XML file with specific path
1,391,337,837,000
I have a directory structure as follows: dir |___sub_dir1 |_____files_1 |___sub_dir2 |_____files_2 |___sub_dirN |_____files_N Each sub_directory may or may not have a file called xyz.json. I want to find the total count of xyz.json files in the directory dir. How can I do this?
You can use : find path_to_dir -name xyz.json | wc -l
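A quick worked example in a scratch tree:

```shell
cd "$(mktemp -d)"
mkdir -p sub_dir1 sub_dir2 sub_dir3
touch sub_dir1/xyz.json sub_dir3/xyz.json
find . -name xyz.json | wc -l
# → 2
```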
get count of a specific file in several directories
1,391,337,837,000
I want to search for arbitrary file/directory names, but only want to list file paths containing the search string at the same position once. Especially not every file within a directory matching the search string. Here is an example, locate -i flatpak lists: /etc/flatpak /etc/dbus-1/system.d/org.freedesktop.Flatpak.SystemHelper.conf /etc/flatpak/remotes.d /etc/profile.d/flatpak.sh /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/74 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/75 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/76 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/77 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/78 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/79 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/7a /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/7b /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/7c /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/7d /var/lib/flatpak /var/lib/flatpak/.changed /var/lib/flatpak/.removed /var/lib/flatpak/app /var/lib/flatpak/appstream /var/lib/flatpak/exports /var/lib/flatpak/repo /var/lib/flatpak/runtime But I want a search result like this: /etc/flatpak /etc/dbus-1/system.d/org.freedesktop.Flatpak.SystemHelper.conf /etc/profile.d/flatpak.sh /home/simon/.cache/gnome-software/flatpak /var/lib/flatpak And which tool is best suited for this? locate, find, fd-find?
Sounds like you want to search for flatpak in the file name only (and not in other path components), so you can use the -b/--basename option: locate -ib flatpak Another approach could be to use the -r/--regex option and write: locate -ir 'flatpak[^/]*$' That is flatpak followed by any number of characters other than / followed by the end of the file path. That might however miss filenames that have non-characters (in the current locale) after flatpak.
Search for arbitrary files but only list matches in results once
1,391,337,837,000
I'm writing a bash script and I need to create an array with the 10 most recent image files (from new to old) in the current dir. I consider "image files" to be files with certain extensions, like .jpg or .png. I only require a few specific image types to be supported, I can also express this in one regex like "\.(jpg|png)$". My problem is, if I try to do this with e.g. $list=(ls -1t *.jpg *.png | head -10) the resulting list of files somehow becomes one element, instead of each filename being a separate element in my array. If I try to use $list=(find -E . -iregex ".*(jpg|png)" -maxdepth 1 -type f | head -10), I'm not sure how to sort the list on date/time and keep only the filenames. Also find seems to put ./ in front of every file but I can get rid of that with sed. And also with find I still have the problem of my entire list becoming one entry in the $list array.
The correct syntax is: list=($(ls -t *.jpg *.png | head -10)) echo First element: ${list[0]} echo Last element: ${list[9]} However, this solution will have problems with file names containing space characters (or any white space in general).
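If file names with spaces matter, here is a sketch that avoids word splitting (my own addition; it assumes GNU find, bash, and no newline characters in file names):

```shell
# Newest-first list of the 10 most recent images, tolerant of spaces
list=()
while IFS= read -r f; do
  list+=("$f")
done < <(find . -maxdepth 1 -type f \( -iname '*.jpg' -o -iname '*.png' \) \
           -printf '%T@\t%p\n' | sort -rn | head -n 10 | cut -f2-)
printf '%s\n' "${list[@]}"
```

find prints the modification time as a sortable number before each name, sort -rn orders newest first, and cut drops the timestamp again.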
bash command to create array with the 10 most recent images in a dir?
1,391,337,837,000
Is it possible to scan the entire filesystem (or recursively some directory) for files created by a user with a specific $UID during the last 5 minutes? For example, show me files from /home/ which were created by root during the last 5 minutes.
Yes, try: find /home -uid 0 -mmin -5 -print The final -print is not mandatory; it depends on your implementation of find. (Note that -mmin tests the modification time; most Unix filesystems don't record a creation time.)
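To try the time filter without being root, you can substitute your own UID (the file below is created just for the demonstration):

```shell
cd "$(mktemp -d)"
touch fresh
find . -uid "$(id -u)" -mmin -5 -type f
# → ./fresh
```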
Find files created by UID during last 5 minutes
1,391,337,837,000
This has happened to me twice (EDIT: many times and I can replicate it) now. I'm working on a Raspberry Pi, looking for a file I already know exists and so I type this command: sudo find / -iname 'firefox_binary.py' The first time I type it, it runs without errors, but it doesn't find the file. However, when I run the same command only seconds later, it finds it. It's the same command, run in the same terminal window, under the same path, on the same system, with the same file structure, with only a few seconds separating the first run from the second run. How is this result even possible?
This is a real bug found in find version 4.4.2, but the bug has been fixed in find version 4.6.0.
How is it that the same find command can give two different results?
1,391,337,837,000
This answer on opening all files in vim except [condition]: https://unix.stackexchange.com/a/149356/98426 gives an answer similar to this: find . \( -name '.?*' -prune \) -o -type f -print (I adapted the answer because my question here is not about vim) Where the negated condition is in the escaped parentheses. However, on my test files, the following find . -type f -not -name '^.*' produces the same results, but is easier to read and write. The -not method, like the -prune method, prunes any directories starting with a . (dot). I am wondering what are the edge cases where the -not and the -prune -o -print method would have different results. Findutils' infopage says the following: -not expr: True if expr is false -prune: If the file is a directory, do not descend into it. (and further explains that -o -print is required to actually exclude the top matching directory) They seem to be hard to compare this way, because -not is a test and -prune is an action, but to me, they are interchangeable (as long as -o -print comes after -prune)
First, note that -not is a GNU extension and is the equivalent of the standard ! operator. It has virtually no advantage over !. The -prune predicate always evaluates to true and affects the way find walks the directory tree. If the file for which -prune is run is of type directory (possibly determined after symlink resolution with -L/-H/-follow), then find will not descend into it. So -name 'pattern' -prune (short for -name 'pattern' -a -prune) is the same as -name 'pattern' except that the directories whose name matches pattern will be pruned, that is find won't descend into them. -name '.?*' matches on files whose name starts with . followed by one character (the definition of which depends on the current locale) followed by 0 or more characters. So in effect, that matches . followed by one or more characters (so as not to prune . the starting directory). So that matches hidden files with the caveat that it matches only those whose name is also entirely made of characters, that is are valid text in the current locale (at least with the GNU implementation). So here, find . \( -name '.?*' -a -prune \) -o -type f -a -print Which is the same as find . -name '.?*' -prune -o -type f -print since AND (-a, implied) has precedence over OR (-o). finds files that are regular (no symlink, directory, fifo, device...) and are not hidden and are not in hidden directories (assuming all file paths are valid text in the locale). find . -type f -not -name '^.*' Or its standard equivalent: find . -type f ! -name '^.*' Would find regular files whose name doesn't start with ^.. find . -type f ! -name '.*' Would find regular files whose name doesn't start with ., but would still report files in hidden directories. find . -type f ! -path '*/.*' Would omit hidden files and files in hidden directories, but find would still descend into hidden directories (any level deep) only to skip all the files in them, so is less efficient than the approach using -prune.
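The difference is easy to see on a small scratch tree: the -prune version never enters .hidden, while ! -name '.*' still reports the regular file inside it:

```shell
cd "$(mktemp -d)"
mkdir -p demo/.hidden demo/visible
touch demo/.hidden/f demo/.top demo/visible/f
find demo \( -name '.?*' -prune \) -o -type f -print
# → demo/visible/f
find demo -type f ! -name '.*' | sort
# → demo/.hidden/f
# → demo/visible/f
```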
Difference between GNU find -not and GNU find -prune -o -print
1,391,337,837,000
I accidentally deleted some files spread across my home directory, but I do not know exactly which ones were removed. How can I get a list of all backup files missing their corresponding file? (equivalently, files having names ending with a tilde, without there being another file in the same directory with the same name sans trailing tilde?) I tried a few things so far; although I don't remember the exact flags, it was something like: grep -Rlv '(.*)\n\\1~|.*(?!~)' That didn't work, and neither did: ls -R | grep -v '(.*)\n\\1~|.*(?!~)' How can I find these files?
Just find all files with a tilde, remove the tilde and look for the "original": find . -name '*~' -print0 | while IFS= read -r -d '' file; do [ -e "${file%\~}" ] || echo cp "$file" "${file%\~}"; done Explanation: find ~/ -name '*~' -print0 : find all files in $HOME that end in a tilde and print them separated by the null (\0) character. The latter is necessary to deal with weird file names that contain newlines etc. while IFS= read -r -d '' file; : read each file found by find into $file. IFS= : turns off bash's automatic split at whitespace -r : treat backslashes literally (not as escape characters) -d '' : sets the input delimiter to the null character. "${file%\~}" : removes the tilde, see here [ -e "${file%\~}" ] || echo cp "$file" "${file%\~}" : the echo will be run only if the file name (sans tilde) does not exist. To actually copy the files, just remove the echo.
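A dry run in a scratch directory, printing instead of copying (here only a~ lacks its original; bash, as in the answer):

```shell
cd "$(mktemp -d)"
touch b b~ a~        # a~ is the only backup without its original
find . -name '*~' -print0 |
while IFS= read -r -d '' file; do
  [ -e "${file%\~}" ] || printf '%s\n' "$file"
done
# → ./a~
```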
How to list backup files missing corresponding real files?
1,391,337,837,000
I want to list the files that contain a pattern, without outputting the line(s) containing the pattern.  I assume that it's possible to do this with grep.  How should I use grep in that sense?  For example, the following command gives all the lines containing "stringpattern" (case insensitive) in all the .txt files.  I want to have only the name of the file (± the line number). grep -ni stringpattern *.txt Ideally, if the string/pattern is present more than once in one file, I would like to have multiple lines of output for that file.
If you need only files that match: grep -lie pattern -- *.txt I don't think that you can use only grep to print only files and line numbers, because with option -n, it outputs on every line 'file:line:match'. If the file names don't contain : nor newline characters, you can though pipe this to cut to get only what you want. grep -nie pattern -- /dev/null *.txt | cut -d: -f 1,2 The /dev/null is needed for the case where *.txt expanded to only one filename where grep then would otherwise not print the file name. With the GNU implementation of grep or compatible, you can use the -H / --with-filename instead to ensure the file name is always printed.
Only output the name of file containing a pattern (without the line output) using grep
1,391,337,837,000
I'd like to search recursively for files with a specific Change date. Note, this is not the Access or Modify date, but the Change date as outputted by stat. #stat prototype.js File: `prototype.js' Size: 175637 Blocks: 352 IO Block: 4096 regular file Device: 803h/2051d Inode: 18146171 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 507/user) Gid: ( 505/user) Access: 2016-11-03 04:54:05.000000000 -0500 Modify: 2016-01-06 03:38:54.000000000 -0600 Change: 2016-07-23 03:42:37.000000000 -0500 In my specific case, I'd like to find all files with a Change date of 2016-07-23. The OS is CentOS release 6.4. Thanks!
The command that sufficed was: find . -type f -newerct 2016-07-23 ! -newerct 2016-07-24
Find files by change date [duplicate]
1,391,337,837,000
I have got tons of directories with thousands of files of various filetypes: dir |__ subdir | |__ file.foo | |__ file.bar | |__ file.txt | |__ (...) |__ (...) What is a fast and efficient way to move, from all subdirs, all .txt files which have 2 or more lines to another, selected directory?
On a GNU system: find dir -type f -name '*.txt' -exec awk ' FNR == 2 {printf "%s\0", FILENAME; nextfile}' {} + | xargs -r0 mv -t newdir (note that it may cause files with the same name to overwrite each other. A single invocation of GNU mv will guard against that, but if xargs invokes several, then it could become a problem).
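The selection part can be tried on its own with any POSIX awk; FNR == 2 fires at most once per file, so nextfile in the answer is only an efficiency measure:

```shell
cd "$(mktemp -d)"
printf 'one\n' > short.txt
printf 'one\ntwo\nthree\n' > long.txt
awk 'FNR == 2 {print FILENAME}' ./*.txt
# → ./long.txt
```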
Move specific files from multiple directories
1,391,337,837,000
In order to store attachments, a /path/to/atts/ directory will have numerous child directories (product IDs) created (from 1 to ~10,000 or maybe more in the future), and in each of these subdirs, 1 to ~10 attachment files will be created. In /path/to/atts/ 1 ├── file1.1 ├── file1.2 └── file1.3 2 └── file2.1 ... 10000 ├── file10000.1 ├── file10000.2 ├── file10000.3 ├── file10000.4 └── file10000.5 (actually 1 .. 10000 was chosen for the sake of a simpler explanation - IDs will be int32 numbers) I'm wondering, on the ext4 file system, what is the cd (actually path resolution) complexity when reaching /path/to/atts/54321/..., for instance: Does the path resolution check all inodes/names one by one in the atts dir until 54321 is reached? Meaning on average n/2 entries are checked (O(n)) Or is there some tree structure within a dir that reduces the search (e.g. a trie, alphabetical order...) that would reduce dramatically the number of inodes checked, like log(n) instead of n/2? If it is the former, I'll change the way the products tree structure is implemented. Just to be clear: the question is not about a find search of a file in a file system tree (that's O(n)). It's actually path resolution (done by the FS), crossing a directory where thousands of file names reside (the product IDs).
You can read about the hash tree index used for directories here. A linear array of directory entries isn't great for performance, so a new feature was added to ext3 to provide a faster (but peculiar) balanced tree keyed off a hash of the directory entry name.
'cd' complexity on ext4
1,391,337,837,000
I'd like to search all PHP files and find a particular string that is identified by a regular expression. The regular expression that I use to find the string is: \$[a-zA-Z0-9]{5,8}\s\=\s.{30,50}\;\$[a-zA-Z0-9]{5,8}\s\=\s[a-zA-Z0-9]{5}\(\) I tried to use: grep -r "\$[a-zA-Z0-9]{5,8}\s\=\s.{30,50}\;\$[a-zA-Z0-9]{5,8}\s\=\s[a-zA-Z0-9]{5}\(\)" *.php but this does not seem to work. find . -name '*.php' -regex '\$[a-zA-Z0-9]{5,8}\s\=\s.{30,50}\;\$[a-zA-Z0-9]{5,8}\s\=\s[a-zA-Z0-9]{5}\(\)' -print does not work either. What I need is to search a path and all subdirectories for PHP files that contain a string identified by the regular expression stated above. What is the best way to accomplish this? For your information, this is a string similar to the ones I try to find: <?php $tqpbiu = '9l416rsvkt7c#*3fob\'2Heid0ypax_8u-mg5n';$wizqxqk = Array();$wizqxqk[] = $tqpbiu[11].$tqpbiu[5].$tqpbiu[21].$tqpbiu[27].$tqpbiu[9].$tqpbiu[21].$tqpbiu[29].$tqpbiu[15].$tqpbiu[31].$tqpbiu[36].$tqpbiu[11].$tqpbiu[9].$tqpbiu[22].$tqpbiu[16].$tqpbiu[36];$wizqxqk[] = ... etc. As you probably realize, this is malware code. So this string is similar but different in each file. However, the regular expression does a good job finding all the files if similar content appears somewhere in the file. Before, I had downloaded all files to my Windows PC and then used EMEditor to search by regular expression. This works fine on the PC, but for this I need to download everything, and it would be nice to be able to search directly at the Linux command prompt. Any tip would be very much appreciated.
Since you are using grep to search using a regular expression, you have to be aware that grep by default interprets the search string as a basic regular expression (BRE). The syntax you use contains extended regular expression (ERE) syntax, so you would need to use the -E flag. Copying the string example you posted into a file test.php, the call

~$ grep -E '\$[a-zA-Z0-9]{5,8}\s=\s.{30,50}\;\$[a-zA-Z0-9]{5,8}\s=\s[a-zA-Z0-9]{5}\(\)' *.php
$tqpbiu = '9l416rsvkt7c#*3fob\'2Heid0ypax_8u-mg5n';$wizqxqk = Array();$wizqxqk[] = $tqpbiu[11].$tqpbiu[5].$tqpbiu[21].$tqpbiu[27].$tqpbiu[9].$tqpbiu[21].$tqpbiu[29].$tqpbiu[15].$tqpbiu[31].$tqpbiu[36].$tqpbiu[11].$tqpbiu[9].$tqpbiu[22].$tqpbiu[16].$tqpbiu[36];$wizqxqk[] = ... etc.

finds the string (output in bold as highlighted by grep), so you could use that with the -r option (since you seem to be using GNU grep) to recursively look for it.

Also, keep in mind that the -regex option of find does not check if the file content matches the regular expression, but rather if the file's name matches. To do a regex-based search within all .php or .txt files using find, use

find . -type f \( -name '*.php' -o -name '*.txt' \) -exec grep -EH '\$[a-zA-Z0-9]{5,8}\s=\s.{30,50}\;\$[a-zA-Z0-9]{5,8}\s=\s[a-zA-Z0-9]{5}\(\)' {} \;

where the -H option to grep will ensure the filename is printed, too. Alternatively, use grep -El etc. to only print the filenames (which makes for cleaner output if many files match).

Some general remarks

As correctly noted by Stéphane Chazelas, and as reference for possible future readers: several elements of your syntax are non-portable extensions to the regular expression syntax, and the behavior of other constructs may vary depending on the environment settings:

Character classes (not to be confused with character lists) are extensions to the standard ERE. The \s shorthand notation e.g. is a Perl extension to regular expressions, and is not necessarily portable across programs designed to handle regular expressions.
The meaning of range specifications in character lists (such as [a-z]) can depend on the locale settings, specifically the collation order. The "naive" interpretation that [a-z] means abcdefgh....xyz is only true in the C locale; in others it often means aAbBcCdD ... xXyYz, so this needs to be used with care (see here and here for further discussions on the subject). If the program you use supports them, character classes may be a "safer", but as stated, not necessarily portable, way to express that kind of specification (the intention behind your use of [a-zA-Z0-9] would be fulfilled with the [[:alnum:]] POSIX character class, for example).

You have escaped several characters that actually don't have a special meaning in (most implementations of) regular expressions, e.g. \= and \;. This may work in many cases (the GNU awk man-page e.g. states "\c   The literal character c" in the section "String constants"), but should in general be avoided, since when trying to port the regex to other programs/environments it may get a special meaning there (in vim, \= actually is a regex quantifier), or even within the same program in a future version.
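The BRE/ERE point is easy to check directly: in GNU grep's BRE, an unescaped {5,8} is taken literally, so the interval only works with -E or with backslash-escaped braces:

```shell
# Same interval in three flavors; only the first and third match.
printf 'abcdef\n' | grep -E '^[a-z]{5,8}$'     # ERE: matches
printf 'abcdef\n' | grep '^[a-z]{5,8}$' || echo 'no match (BRE, unescaped)'
printf 'abcdef\n' | grep '^[a-z]\{5,8\}$'      # BRE, escaped: matches
```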
Find recursively all files whose content match a specific regular expression
1,391,337,837,000
How would I go about finding all files recursively that have ACLs different from what I'm searching for? For example I would like to find all files in a directory that have ACLs that are not identical to the following example: # owner: bob # group: bobs-group user::rwx user:fred:rwx group::rw- mask::rwx other::r-- Using a separate search I'd like to be able to do the same for directories, but with slightly different permissions.
You may use find and a diff. Save the desired reference permissions in a file, e.g. perref:

$ cat perref
# owner: bob
# group: bobs-group
user::rwx
user:fred:rwx
group::rw-
mask::rwx
other::r--

Do some find-magic by simply comparing the output of getfacl with the reference and negating matches. As this needs to cut the first line of getfacl output (i.e. the filename), process substitution is needed here; this must go via a shell script with proper quoting.

find -type f \
  ! -exec bash -c 'diff -q <(getfacl "$1" | sed 1d ) perref >/dev/null' bash '{}' \; \
  -print

Or -print0 in the end, depending on the further plans. This works as diff has an exit status of 0 if files are identical. Remove the ! for finding files with matching ACLs. Use -type d for doing the search on directories.
Find Files Recursively With Different ACLs
1,391,337,837,000
I have an HTML file on a Linux server that contains a long list of links. I am trying to edit this file as follows. Find original occurrences of this type: http://www.test.org/name Replace them with: http://www.test.org/archive/name How can I do this? I have tried running: sed -i -e 's/http://www.test.org/name/http://www.test.org/archive/name/g' user.html However I get the following error back: sed: couldn't open file ww.test.org/name/http://www.test.org/archive/name/g: No such file or directory I am aware that there are questions that answer similar queries, but they have not been of help.
/ is the default delimiter of sed's s command; use another one:

sed -i 's~http://www.test.org/name~http://www.test.org/archive/name~g' user.html
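A quick way to sanity-check the substitution on a throwaway line before editing user.html (the sample line here is made up):

```shell
# The ~ delimiter means the slashes in the URLs need no escaping.
printf '<a href="http://www.test.org/name/page1">x</a>\n' |
  sed 's~http://www.test.org/name~http://www.test.org/archive/name~g'
# → <a href="http://www.test.org/archive/name/page1">x</a>
```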
Replacing a Part of URL String in a Linux File With Another String
1,391,337,837,000
Is there a CLI tool similar to gnome-search-tool? I'm using locate, but I'd prefer that it grouped results where directory name is matched. I get a lot of results where the path is matched which is not what I want: /tmp/dir_match/xyz /tmp/dir_match/xyz2/xyz3 It needs to be fast and thus use a search index.
locate is very versatile and can take -r and a regexp pattern, so you can do lots of sophisticated matching. For example, to match directories a a0 a1 and so on use '/a[0-9]*/'. This will only show directories with files in them since you need the second / in the path. To match the directory alone use $ to anchor the pattern to the end of the path, '/a[0-9]*$'. Note, there are at least 2 versions of the locate command, one from GNU, and one from Redhat (known as mlocate). Use --version to find which you have. They differ slightly in the regex style. For example, if we change the above pattern '/a[0-9]*$' to use + instead of * to avoid matching a on its own, then mlocate needs \+ and gnu just +. For example, to match a directory a and all underneath it you might use for both versions

locate -r '/a\(/\|$\)'

For mlocate you might prefer --regex, which uses extended syntax:

locate --regex '/a(/|$)'

To do the same for gnu locate you would need to add the option --regextype egrep, for example.
Simple CLI tool for searching
1,391,337,837,000
I have a set of files containing boot variables from several cisco switches in the network. I have a requirement to filter only the switches with the boot variable empty on the next reload and print the hostname, given this data:

hostname1#show boot
---------------------------
Switch 1
---------------------------
Current Boot Variables:
BOOT variable = flash:cat9k_iosxe.bin;

Boot Variables on next reload:
BOOT variable = 
Manual Boot = no
Enable Break = no
Boot Mode = DEVICE
iPXE Timeout = 0

hostname2#show boot
---------------------------
Switch 1
---------------------------
Current Boot Variables:
BOOT variable = flash:cat9k_iosxe.bin;

Boot Variables on next reload:
BOOT variable = flash:cat9k_iosxe.bin;
Manual Boot = no
Enable Break = no
Boot Mode = DEVICE
iPXE Timeout = 0

Desired result:

hostname1
BOOT variable = 

Thanks!
awk '{a[++i]=$0} /BOOT variable =.$/ {for(x=NR-10;x<=NR;x++) print a[x]}' filename | awk '/^hostname/ || /BOOT variable =.$/ {print $0}' | sed "s/#.*//g"

The first awk buffers every line and, whenever it sees a "BOOT variable =" line with only a single trailing character after the "=" (the trailing space of an empty value), prints the last eleven buffered lines — enough to reach back to the hostname line. The second awk keeps only the hostname lines and the empty "BOOT variable =" lines, and the sed strips everything from "#" onward, turning "hostname1#show boot" into "hostname1".

Results in:

hostname1
BOOT variable =
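An alternative sketch that avoids buffering: remember the most recent hostname line, and print it whenever a "BOOT variable =" line has an empty value. Note this flags any empty BOOT variable line, not only the "next reload" section, which is sufficient for the sample data shown; in practice point awk at the real file(s) instead of the heredoc-created sample:

```shell
# Sample input as in the question (trimmed).
cat > showboot.txt <<'EOF'
hostname1#show boot
Current Boot Variables:
BOOT variable = flash:cat9k_iosxe.bin;
Boot Variables on next reload:
BOOT variable = 
Manual Boot = no
hostname2#show boot
Current Boot Variables:
BOOT variable = flash:cat9k_iosxe.bin;
Boot Variables on next reload:
BOOT variable = flash:cat9k_iosxe.bin;
Manual Boot = no
EOF

# Remember the latest hostname; print it when a BOOT variable line has
# nothing (or only spaces) after the "=".
awk '/#show boot/          { host = $0; sub(/#.*/, "", host) }
     /^BOOT variable = *$/ { print host; print }' showboot.txt
```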
How to match for an empty string in a grep pattern search?
1,391,337,837,000
I have a folder with big files (a few GB each). I would like to search for a PATTERN through these files. I can do this with grep or ack:

$ grep -n 'PATTERN' /path/to/files/*.log

Now, I have a list with all lines including PATTERN. However, I need some area around these lines to see context: a few lines before PATTERN occurs and a few lines after it occurs. I would prefer to list these line blocks with the coordinates of each (file and line number). How can it be done?
With gnu grep:

grep -B <number_lines> -A <number_lines> -n 'PATTERN' /path/to/files/*.log

e.g. to get the 6 lines above the line grep matched, and 4 lines after it:

grep -B 6 -A 4 -n 'PATTERN' /path/to/files/*.log

From man grep:

Context Line Control
-A NUM, --after-context=NUM
    Print NUM lines of trailing context after matching lines.
-B NUM, --before-context=NUM
    Print NUM lines of leading context before matching lines.
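When the same amount of context is wanted on both sides, -C (or --context) is shorthand for equal -B and -A values. A small self-contained demo (sample.log is a made-up file; with -n, context lines are marked with "-" and the match with ":"):

```shell
# PATTERN sits on line 3; ask for one line of context on each side.
printf 'l1\nl2\nPATTERN\nl4\nl5\n' > sample.log
grep -C 1 -n 'PATTERN' sample.log
# → 2-l2
#   3:PATTERN
#   4-l4
```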
Examine a bunch of huge files
1,391,337,837,000
I have a folder with some files (snippet of the contents of the folder) PAT1.URGRSVP.50.WR786842JOB11632.WRS20140.FILE0005.DAT PAT1.URGRSVP.50.WR786842JOB11643.WRS20140.FILE0003.DAT PAT1.URGRSVP.51.WR786842JOB11643.WRS29232.FILE0003.DAT PAT1.URGRSVP.50.WR786842JOB11694.WRS20140.FILE0002.DAT ... ... ... My focus is on the 3rd (50,50,51,50) and the 5th (WRS20140,WRS20140,WRS29232,WRS20140) blocks. How can I write a script that displays the duplicate filenames with the same 3rd block AND 5th block (The duplicates of the combination of the 3rd and the 5th block strings)? So the output should list the following in the above example PAT1.URGRSVP.50.WR786842JOB11643.WRS20140.FILE0003.DAT PAT1.URGRSVP.50.WR786842JOB11694.WRS20140.FILE0002.DAT
ls *.DAT | awk -F. '{ if (c[$3$5]) print $0 ; c[$3$5]=$0}' In the above, awk looks at each file name using . as a field separator. If it has seen the combination of the third and fifth fields before, it prints the file name. With your file names as input, the above produces: PAT1.URGRSVP.50.WR786842JOB11643.WRS20140.FILE0003.DAT PAT1.URGRSVP.50.WR786842JOB11694.WRS20140.FILE0002.DAT MORE: Let's examine the awk commands in more detail: if (c[$3$5]) print $0 ; c[$3$5]=$0 The above consists of two statements: one "if" statement and one assignment. The "if" statement is: if (c[$3$5]) print $0 In this statement, c is an "associative array". This means that that you give it a key and it gives you back a value. We are using $3$5 as the key where $3 is the third "block" (what awk would call the third "field") and $5 is the fifth block. If that key was previously unassigned, then c[$3$5] returns an empty (false) value. So, if this combination of third and fifth blocks was seen before, then print $0 is executed, meaning that the whole of the file name is printed. If not, the print statement is skipped. The second statement is: c[$3$5]=$0 This assigns the name of the file ($0) to the associative array under the key of the third and fifth fields: $3$5. Thus, the next time that those fields are seen in the "if" statement, the print statement will execute.
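A variation on the same idea that avoids parsing ls output (safer with unusual file names) and joins the two blocks with awk's SUBSEP separator, so keys like "50"+"1WRS..." and "501"+"WRS..." cannot collide. A pattern with no action prints the record, and c[key]++ is zero (false) only on the first occurrence:

```shell
# Create the question's example files in a scratch directory, then list
# every name after the first with the same 3rd+5th blocks.
mkdir -p demo && cd demo
touch PAT1.URGRSVP.50.WR786842JOB11632.WRS20140.FILE0005.DAT \
      PAT1.URGRSVP.50.WR786842JOB11643.WRS20140.FILE0003.DAT \
      PAT1.URGRSVP.51.WR786842JOB11643.WRS29232.FILE0003.DAT \
      PAT1.URGRSVP.50.WR786842JOB11694.WRS20140.FILE0002.DAT

printf '%s\n' *.DAT | awk -F. 'c[$3 SUBSEP $5]++'
# → PAT1.URGRSVP.50.WR786842JOB11643.WRS20140.FILE0003.DAT
#   PAT1.URGRSVP.50.WR786842JOB11694.WRS20140.FILE0002.DAT
```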
Find duplicate file names with specific matching pattern
1,391,337,837,000
I am searching for files, since I added an external backup HD. I want to continue working elsewhere while find/grep/locate searches for a file. When a match is found, I'd like to be alerted so that I can stop the search, in case it was the one I intended to find. Can there be an audible alert per match?
At least with GNU find, the -printf action supports a \a (terminal bell) escape char - so at its simplest you could do something like find . -name foo -printf '\a' -print I'm not aware of an equivalent with grep or locate.
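For grep there is no built-in equivalent, but a small loop over line-buffered output can ring the terminal bell as each match arrives (a sketch; sample.txt is a made-up file standing in for your real search):

```shell
printf 'nothing here\nfound MATCH two\n' > sample.txt

# \a is the terminal bell; each matching line is echoed after the beep.
grep -n --line-buffered 'MATCH' sample.txt | while IFS= read -r hit; do
    printf '\a%s\n' "$hit"
done
# → (beep) 2:found MATCH two
```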
how can i make grep/find/locate beep as it finds each match
1,281,554,099,000
I have a couple of machines at home (plus a number of Linux boxes running in VMs) and I am planning to use one of them as a centralized file server. Since I am more of a Linux user than a sysadmin, I'd like to know: what is the equivalent of, let's say, "Active Directory"? My objective is to have my files available on any of the machines that I log on to in my network.
You either build your own Active Directory-equivalent from Kerberos and OpenLDAP (Active Directory basically is Kerberos and LDAP, anyway) and use a tool like Puppet (or OpenLDAP itself) for something resembling policies, or you use FreeIPA as an integrated solution. There's also a wide range of commercially supported LDAP servers for Linux, like Red Hat Directory Server. RHDS (like 389 Server, which is the free version of RHDS) has a nice Java GUI for management of the directory. It does neither Kerberos nor policies though. Personally, I really like the FreeIPA project and I think it has a lot of potential. A commercially supported version of FreeIPA is included in standard RHEL6 subscriptions, I believe. That said, what your are asking about is more like a fileserver solution than an authentication solution (which is what AD is). If you want your files on all machines you log into, you have to set up an NFS server and export an NFS share from your fileserver to your network. NFSv3 has IP-range based ACL's, NFSv4 would be able to do proper authentication with Kerberos and combines nicely with the authentication options I described above. If you have Windows boxes on your network, you will want to setup a Samba server, which can share out your files to Linux and Windows boxes alike. Samba3 can also function as an NT4 style domain controller, whereas Samba4 is able to mimic a Windows 2003 style domain controller.
What is the equivalent of Active Directory on Linux
1,281,554,099,000
I haven't found a slam-dunk document on this, so let's start one. On a CentOS 7.1 host, I have gone through the linuxconfig HOW-TO, including the firewall-cmd entries, and I have an exportable filesystem. [root@<server> ~]# firewall-cmd --list-all internal (default, active) interfaces: enp5s0 sources: 192.168.10.0/24 services: dhcpv6-client ipp-client mdns ssh ports: 2049/tcp masquerade: no forward-ports: rich rules: [root@<server> ~]# showmount -e localhost Export list for localhost: /export/home/<user> *.localdomain However, if I showmount from the client, I still have a problem. [root@<client> ~]# showmount -e <server>.localdomain clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) Now, how am I sure that this is a firewall problem? Easy. Turn off the firewall. Server side: [root@<server> ~]# systemctl stop firewalld And client side: [root@<client> ~]# showmount -e <server>.localdomain Export list for <server>.localdomain: /export/home/<server> *.localdomain Restart firewalld. Server side: [root@<server> ~]# systemctl start firewalld And client side: [root@<client> ~]# showmount -e <server>.localdomain clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host) So, let's go to town, by adapting the iptables commands from a RHEL 6 NFS server HOW-TO... 
[root@ ~]# firewall-cmd \
> --add-port=111/tcp \
> --add-port=111/udp \
> --add-port=892/tcp \
> --add-port=892/udp \
> --add-port=875/tcp \
> --add-port=875/udp \
> --add-port=662/tcp \
> --add-port=662/udp \
> --add-port=32769/udp \
> --add-port=32803/tcp
success
[root@<server> ~]# firewall-cmd \
> --add-port=111/tcp \
> --add-port=111/udp \
> --add-port=892/tcp \
> --add-port=892/udp \
> --add-port=875/tcp \
> --add-port=875/udp \
> --add-port=662/tcp \
> --add-port=662/udp \
> --add-port=32769/udp \
> --add-port=32803/tcp \
> --permanent
success
[root@<server> ~]# firewall-cmd --list-all
internal (default, active)
  interfaces: enp5s0
  sources: 192.168.0.0/24
  services: dhcpv6-client ipp-client mdns ssh
  ports: 32803/tcp 662/udp 662/tcp 111/udp 875/udp 32769/udp 875/tcp 892/udp 2049/tcp 892/tcp 111/tcp
  masquerade: no
  forward-ports:
  rich rules:

This time, I get a slightly different error message from the client:

[root@<client> ~]# showmount -e <server>.localdomain
rpc mount export: RPC: Unable to receive; errno = No route to host

So, I know I'm on the right track. Having said that, why can't I find a definitive tutorial on this anywhere? I can't have been the first person to have to figure this out! What firewall-cmd entries am I missing?

Oh, one other note. My /etc/sysconfig/nfs files on the CentOS 6 client and the CentOS 7 server are unmodified, so far. I would prefer to not have to change (and maintain!) them, if at all possible.
This should be enough:

firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload
NFS servers and firewalld
1,281,554,099,000
I'm running a small server for our flat share. It's mostly a file server with some additional services. The clients are Linux machines (mostly Ubuntu, but some other distros too) and some Mac(-Book)s in between (but they're not important for the question). The server is running Ubuntu 11.10 (Oneiric Ocelot) 'Server Edition'; the system from which I do my setup and testing runs the 11.10 'Desktop Edition'. We were running our shares with Samba (which we are more familiar with) for quite some time but then migrated to NFS (because we don't have any Windows users in the LAN and wanted to try it out), and so far everything works fine. Now I want to set up auto-mounting with autofs to smooth things up (up to now everyone mounts the shares manually when needed). The auto-mounting seems to work too. The problem is that our "server" doesn't run 24/7 to save energy (if someone needs stuff from the server s/he powers it on and shuts it down afterwards, so it only runs a couple of hours each day). But since the autofs setup, the clients hang up quite often when the server isn't running. I can start all clients just fine, even when the server isn't running. But when I want to display a directory (in terminal or Nautilus) that contains symbolic links to a share under /nfs while the server isn't running, it hangs for at least two minutes (because autofs can't connect to the server but keeps trying, I assume). Is there a way to avoid that? So that the mounting would be delayed until a change into the directory, or until the content of that directory is accessed? Not when "looking" at a link to a share under /nfs? I think not, but maybe it is possible not to try to access it for so long? And just give me an empty directory or a "can't find / connect to that dir" or something like that. When the server is running, everything works fine.
But when the server gets shut down before a share got unmounted, tools (like df or ll) hang (presumably because they think the share is still mounted but the server won't respond anymore). Is there a way to unmount shares automatically when the connection gets lost? Also, the clients won't shut down or restart when the server is down and they still have shares mounted. They hang (infinitely, as it seems) in "killing remaining processes" and nothing seems to happen. I think it all comes down to neat timeout values for mounting and unmounting. And maybe to removing all shares when the connection to the server gets lost. So my question is: how to handle this? And as a bonus: is there a good way to link inside /nfs without the need to mount the real shares (an autofs option, or maybe using a pseudo FS for /nfs which gets replaced when the mount happens, or something like that)?

My Setup

The NFS setting is pretty basic but has served us well so far (using NFSv4):

/etc/default/nfs-common
NEED_STATD=
STATDOPTS=
NEED_IDMAPD=YES
NEED_GSSD=

/etc/idmapd.conf
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup

/etc/exports
/srv/ 192.168.0.0/24(rw,no_root_squash,no_subtree_check,crossmnt,fsid=0)

Under the export root /srv we've got two directories with bind:

/etc/fstab (Server)
...
/shared/shared/ /srv/shared/ none bind 0 0
/home/Upload/ /srv/upload/ none bind 0 0

The 1st one is mostly read only (but I enforce that through file attributes and ownership instead of NFS settings) and the 2nd is rw for all. Note: they have no extra entries in /etc/exports; mounting them separately works though. On the client side they get set up in /etc/fstab and mounted manually as needed (morton is the name of the server and it resolves fine).
/etc/fstab (Client)
morton:/shared /nfs/shared nfs4 noauto,users,noatime,soft,intr,rsize=8192,wsize=8192 0 0
morton:/upload /nfs/upload nfs4 noauto,users,noatime,soft,intr,rsize=8192,wsize=8192 0 0

For the autofs setup I removed the entries from /etc/fstab on the clients and set the rest up like this:

/etc/auto.master
/nfs /etc/auto.nfs

First I tried the supplied executable /etc/auto.net (you can take a look at it here) but it won't automatically mount anything for me. Then I wrote an /etc/auto.nfs based on some HowTos I found online:

/etc/auto.nfs
shared -fstype=nfs4 morton:/shared
upload -fstype=nfs4 morton:/upload

And it kinda works... Or would work if the server ran 24/7. So we get the hangups when a client boots without the server running, or when the server goes down while shares were still connected.
Using any mount system, you want to avoid situations where Nautilus lists the directory containing a mount that may or may not be mounted. So, with autofs, don't create mounts in, for instance, /nfs. If you do, when you use Nautilus to list the 'File System' it will try to create whatever mounts should exist in /nfs, and if those mount attempts fail it takes minutes to give up. So what I did was change auto.master to create the mounts in /nfs/mnt. This fixed the problem for me. I only get a long delay if I try to list the contents of /nfs/mnt, which I can easily avoid.
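Adapting the configuration from the question, the only change is the autofs mount point in the master map (a sketch; the symlinks people actually browse can then live directly under /nfs without triggering mount attempts when listed):

```
# /etc/auto.master -- mount under /nfs/mnt instead of /nfs
/nfs/mnt /etc/auto.nfs

# /etc/auto.nfs (unchanged)
shared -fstype=nfs4 morton:/shared
upload -fstype=nfs4 morton:/upload
```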
automount nfs: autofs timeout settings for unreliable servers - how to avoid hangup?
1,281,554,099,000
I'm currently setting up a home server using a very, very old PC. It has Ubuntu 11.10 installed on it, but it can't actually handle the GUI. I want to install the server edition of Ubuntu, which is command line only, but have no idea how to do so. What can I do?
Actually, if you just have a problem with running the GUI there's no need to install another distribution; simply modify the startup sequence to prevent the graphical interface from coming up and work from the command line as you desire. I don't have access to a system right now, but I believe the script you'll need will be found in the /etc/init.d or /boot/grub directory. Perhaps someone here can give you the name of the script before I get home to check.

I just found this: Starting Ubuntu without the GUI. I see three ways to do it:

Changing the default runlevel
You can set it at the beginning of /etc/init/rc-sysinit.conf: replace 2 by 3 and reboot. You can enable the graphical interface with telinit 2. (More about runlevels)

Do not launch the graphical interface service on boot
update-rc.d -f xdm remove
Quick and easy. You can re-enable the graphical interface with service xdm start or revert your changes with update-rc.d -f xdm defaults

Remove packages
apt-get remove --purge x11-common && apt-get autoremove
I think it suits best for a computer considered as a server. You can re-enable the graphical interface by reinstalling the packages.

There's also this: Possible to install ubuntu-desktop and then boot to no GUI

The point being, you can prevent the GUI from coming up if that's your main issue.
How do I switch from Ubuntu desktop to Ubuntu server?
1,281,554,099,000
I'm looking into installing a file server on my network, for serving data and backups. I want this machine to be available at all times, but I would rather not keep it on all the time (as to conserve power). Is it possible to set things up so that the thing automatically suspends (or powers off) after some time and then automatically powers back on when I try to connect to it (using some wake-up on LAN magic, without having to send explicit WOL packets)?
OS X can do this now, as of Snow Leopard. It's made possible through the Sleep Proxy Service. It's pretty much automatic. The only requirement is that you have a second always-on Apple device on your LAN that can act as the sleep proxy. Their current low-power embedded boxes all support this, I believe: Airport, Time Machine, and Apple TV. In the general case, though, I believe the answer is no. I'm not aware of any other OS that has implemented a service like this. The technology is open source, so there's no reason this couldn't be everywhere eventually. It's probably too new to see widespread adoption just yet. You might now be asking, why do you need a second Apple box on the LAN? When a PC is asleep, the kernel — and therefore the network stack — is not running, so there is no code in your OS that can respond to a "magic" packet of the sort you're wishing for. Wake-on-LAN magic packets aren't handled by the OS. They're recognized by the network interface IC, which responds by sending a signal to the CPU that releases it from the sleep state. It can do this because the IC remains powered up in some sleep states. (This is why the Ethernet link light stays on while a PC is "off" on some machines.) The reason the Apple technology works is that just before the PC goes to sleep, it notifies the sleep proxy. The sleep proxy then arranges to temporarily accept traffic for the sleeping machine, and if it gets something interesting, it sends a WOL packet to the PC and hands off the traffic it received.
How to power off a system but still keep it available on the network
1,281,554,099,000
I would like to learn more about Linux. I briefly went through a few books and quite a few articles online, but the only way to learn something is to actually start using it. I would like to jump in at the deep end and configure a Linux server. So far I have downloaded Ubuntu Server. I'm looking for a goal or a challenge, if you like; something that will familiarize me with Linux servers. Ideally, I would like to be able to configure secure mail, file and web servers. I have a strong programming background, so I hope that it will help me out. I understand that this is not a specific question; I'm just looking for a milestone or a goal, otherwise I can spend weeks reading books and online articles.

Edit 1: Thank you all for the replies. Based on what you have said so far, I think that there are a few different areas that I need to learn about:

Kernels. Am I correct to say that this is the first thing I should concentrate on?
Virtualisation. Once I'm happy with my knowledge about kernels I'd like to concentrate on KVM. I've read briefly about hypervisors and I believe that they also fall under virtualisation. Please correct me if I'm wrong.
Security. Ideally I would like to leave this till last, but I guess that the majority of packages that I will require are online. So I'm not sure whether I should give this a higher priority. SSH, Linux as a firewall and remote access through the shell fall under this category.

Finally I will have a look at backup routines (using Linux as a file server) and I'll configure web and mail servers. I guess that the mail server might be a pain. I'm tempted to start a blog and see where it takes me after two weeks. As regards distributions, I have seen that there are hundreds of different Linux distributions. To be perfectly honest I don't want anything simple but, at the same time, I don't want to spend hours on a very basic operation to start with.
Ideally I would like to work only from the command prompt; once I can do that, I'll be able to work with most of the pretty GUIs (I hope so anyway). Once again, thank you for your help and I will really appreciate any further advice.

Edit 2: This leaves me with a final question of which distribution of Linux I should be using.
Here are a couple:

- run Linux as your primary operating system, on both your desktop and your laptop, if any
- install KVM and virt-manager and build a couple of virtual machines
- build a package for your distro of choice (a .deb or .rpm file); it helps in understanding a lot of things
- build your own kernel

These might not seem directly related to your own personal goals of learning to build web servers, but I assure you, if you understand Linux, you will build all kinds of servers easily.
A small challenge to familiarize myself with Linux [closed]
1,281,554,099,000
I have a 100gig file on a remote server, what I need to do is connect to that machine and zcat that file and pipe the output of zcat to a command on a local machine... I was hoping smbclient would help but I can't seem to find a way to run a command locally but have the left side of the pipe come from a remote source. What I think it will kinda look like with my made up command remoteZcatCommand remoteZcatCommand 100gigRemotefile.gz | grep findSomeStuff This maybe somewhat misleading as grep findSomeStuff will actually be a command that requires the hardware on the local machine. Also moving the 100gig file to the local machine is not an option.
smbclient -E -U $USERNAME //server/sharename $PASSWORD -c 'get \\dir\\filename.gz /dev/fd/1' 2>/dev/null | zcat | yourcommand The -E instructs smbclient to send all messages to standard error instead of standard out where those messages will mess up the output we actually want. I'm not interested in those messages so those are discarded by the 2>/dev/null. The -U $USERNAME is to indicate what username should be used when connecting to the SMB server. The //server/sharename should be obvious. The $PASSWORD is $USERNAME's password on the SMB server. The -c 'get \\dir\\filename.gz /dev/fd/1' is the command that should be executed: get the named file (escaping the backslashes by doubling them), and send it to the local file /dev/fd/1 which is the same as the command's standard output. The standard output is then piped through zcat to expand it before whatever further processing is necessary.
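If SSH access to the server is also an option, the same streaming can be done in one pipeline: ssh user@server 'cat /path/to/100gigRemotefile.gz' | zcat | yourcommand — the file stays compressed on the wire and is expanded locally, nothing touches local disk. Below is a runnable sketch of that pipeline shape with a local file standing in for the remote side (the host and paths above are placeholders, and so is the grep standing in for "yourcommand"):

```shell
# Stand-in for the remote compressed file.
printf 'a line with findSomeStuff in it\n' | gzip -c > remote.gz

# In real use, the 'cat remote.gz' below would be:
#   ssh user@server 'cat /path/to/100gigRemotefile.gz'
cat remote.gz | zcat | grep findSomeStuff
# → a line with findSomeStuff in it
```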
Using SMB to pipe contents of file to local command on local machine
1,281,554,099,000
Is there any (simple) way to deny FTP connections based on the general physical location? I plan to use FTP as a simple cloud storage for me and my friends. I use an odroid c2 (similar to raspberry pi but uses arm64 architecture) running Debian 8 with proftpd and ufw as my firewall. Ftp server runs on a non-standard port which I prefer not to mention here. I want to do this to increase the security of my server.
Use PAM and the geoip module:

This PAM module provides GeoIP checking for logins. The user can be allowed or denied based on the location of the originating IP address. This is similar to pam_access(8), but uses a GeoIP City or GeoIP Country database instead of host name / IP matching.
Limit FTP connections by area
1,281,554,099,000
I'm very new to Unix, but after having become comfortable with bash over the past year and after having played with Ubuntu recently, I've decided to make my next computer run Ubuntu, and I think my wife is on board for her next computer as well. Is it easy to set up a central family server so that each computer acts as a client for the information that is stored only in a single place? What are the options? Are there any online how-to documents for this?
You can use Fish or SFTP to transfer files between computers, with minimal prior setup. Both protocols transfer files over SSH, which is secure and encrypted. They are very well integrated into KDE: you can type fish:// or sftp:// URLs into Dolphin's Location Bar, or you can use the "Add Network Folder" wizard. SFTP at least seems to be supported by Gnome too. I personally use Fish. On the server machine Fish and SFTP need only an SSH server running, which you can also use to administer the server machine. Everyone who wants to access the server over Fish or SFTP needs a user account on the server. The usual file access permissions apply for files accessed over the network. Fish and SFTP are roughly equivalent to shared directories on Windows, but both work over the Internet too. Usual (command line) programs however can't see the remote files; only programs that use the file access libraries of either Gnome or KDE can see them. To access the remote files through scripts, KDE has the kioclient program.

For a setup with a central server that serves both user identities and files, look at NIS and NFS. Both are quite easy to set up, especially with the graphical installers from openSUSE. This is the setup where every user can work at any machine and find his/her personal environment. However the client machines become unusable when they can't access the server. Furthermore a simple NFS installation has very big security holes. The local computers, where the users sit, have to handle the access rights. The NFS server trusts any computer that has the right IP address. A smart 12-year-old kid with a laptop can get access to every file, by replacing one of the local machines with the laptop and recreating the NFS client setup (which is easy).

Edit: Of course there is Samba, which has already been mentioned by Grokus. It seems to be quite universal: it can serve files, printers, and login information.
It is compatible with Windows and Linux; there is even a PAM module (Winbind) that lets Linux use the login information from a Samba or Windows server. Samba (and Windows) does not have the security problems of NFS; it handles user identification and access rights on the server. (Please note: I have never administered or installed a Samba server.) My conclusion: Fish or SFTP are IMHO best for usage at home. Use Samba if you have Windows clients too. NFS is only useful if you can trust everybody, but I expect it to create the lowest CPU load.
Setting up a family server
1,281,554,099,000
I have recently purchased myself an HP rack server for use as a personal fileserver. This server currently lives under my bed as I have nowhere else to put it. For those not aware (as I was not fully) this server is VERY LOUD. I need to be able to access my files a lot of the time during the day, and due to the situation of my server, turning it off every night at the wall (it likes to suddenly spring into action for no apparent reason) isn't really an option. I would really like if the server could remain powered on all the time, but when not in use enter a sleep state such that the fans turn off, if nothing else, over LAN. The server also runs Debian. If this kind of setup can't happen for whatever reason, I could settle for the machine shutting down at a certain time of day (or night) and starting up again in the morning, or something to that effect. I have very little idea about how to go about such a task, other than to use wake/sleep-on-LAN.
It seems, after trials and tribulations with innumerable ways to get my server to do what it's told, that the best way to solve my problem of it being loud is just to put it in the garage and hope no water damage occurs during cold nights (which it shouldn't, as the server will be on 24/7). Thanks to everyone who offered actual technical help, but it seems what I ideally wanted cannot be done.
How can I make my Linux server sleep and wake on LAN when not in use?
1,281,554,099,000
I work with a couple of different nodejs live servers as part of my job and there seems to be some kind of leak within my tooling/workflow causing file watchers to accumulate over time until they hit the system limit. I then get the following cli errors: Error from chokidar (<path-to-folder>): Error: ENOSPC: System limit for number of file watchers reached, watch '<path-to-folder>/<filename>' I found the following command that should return the number of file watchers in use: find /proc/*/fd -user "$USER" -lname anon_inode:inotify -printf '%hinfo/%f\n' 2>/dev/null | xargs cat | grep -c '^inotify' and it returns 515160, even though I've seemingly shut down all of my live servers. I have two sets of questions: How can I diagnose this? Can I get a list of all registered watchers, their watched path and corresponding PID, or something of that sort? Is there a way for me to kill them all? Is killing all file watchers even a good idea? Can I kill only watchers registered by my servers? I'm running Debian 11.
The command you provided is searching in /proc for any file descriptors in /proc/*/fd/ which are symlinks to anon_inode:inotify. It's pretty straightforward to also report the commands and PIDs of these processes, along with the number of watches set: #!/bin/bash cd /proc for p in [0-9]* do cd $p if find fd -user "$USER" -lname anon_inode:inotify 2>/dev/null | grep -q . then IFS= read -d '' cmd < cmdline numwatch=$(cat fdinfo/* | grep -c '^inotify') [[ $numwatch -ge 1 ]] && printf '%s\n PID %s\t %s watches\n' "$cmd" "$p" "$numwatch" fi cd .. done In fact, I found that oligofren has already written a similar script, inotify-consumers, as an answer which has more nicely formatted output. However, finding the actual paths being watched turns out to be more complicated. You only have the inodes from /proc/*/fdinfo, so you have to search through the whole filesystem to find the path which maps to the inode. This is an expensive operation. There is a C++ program inotify-info which does this; also found from an answer here. I just built it on my machine and it works. Run with no arguments, it just lists numbers of watches per process, the same as the inotify-consumers script. Given a particular command name or PID, it also searches for the paths to the inodes watched by that process. Killing all watchers is probably not a good idea, but after seeing which processes are using lots of watches you can make an informed choice.
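If, after cleaning up stale watcher processes, the limit itself turns out to be too low for legitimate use, it can be inspected and raised through the standard Linux sysctl interface (a sketch; 524288 below is just a commonly used example value, and the right number depends on your workload and memory):

```shell
# Inspect the current per-user inotify watch limit (standard Linux sysctl path):
cat /proc/sys/fs/inotify/max_user_watches

# Raise it until the next reboot (requires root):
sudo sysctl fs.inotify.max_user_watches=524288

# To persist the change across reboots, add this line to /etc/sysctl.conf
# or a file under /etc/sysctl.d/:
#   fs.inotify.max_user_watches=524288
```

Each watch costs a small amount of unswappable kernel memory, so raising the limit trades memory for headroom rather than fixing a leak.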
How to tackle leaking file watchers on Debian 11?
1,281,554,099,000
I'm a novice in the Linux world and I'm French; I don't have good English. I want to create a data platform with a table on an intranet network! So, how do I set up a localhost server with the HTTP protocol on Apache? Is there a local web directory on Linux? What packages should I install? How do I allow other computers to connect to my localhost? Is it possible to make an intranet server? Is it possible to connect the localhost to the worldwide web?
How do I set up a localhost server with the HTTP protocol on Apache? It depends on the OS you are using. If you are using Ubuntu: sudo apt-get install apache2 If you are using CentOS/Redhat: sudo yum install httpd Then start the service with service httpd start (or) service apache2 start and check with localhost:80. You can host your local website at /var/www/html, but you have to provide configuration such as what content you would like to host and at what address (URL) you wish to access that content. Is there a local web directory on Linux? What packages should I install? How do I allow other computers to connect to my localhost? Other computers cannot access your HTTP server via localhost; you can let them access it with either an IP address, a hostname (DNS or hosts entry needed), or a URL (DNS or hosts entry needed). Is it possible to make an intranet server? Is it possible to connect the localhost to the worldwide web? An intranet server is possible, but worldwide access via localhost is not. You need to host it either on a static IP or contact a website hoster. Hosting website with Ubuntu Hosting website with CentOS
How to set up a localhost server with http protocol on apache [closed]
1,281,554,099,000
During CentOS install, there is a choice of "base environment". I want to install a general file, DNS, mail, etc. server, but there appear to be 3 different options for servers: Infrastructure Server File and Print Server Server with GUI What are the differences between these options, and is this documented somewhere? Alternatively, is the "Server with GUI" choice a kind of all-inclusive choice, that I can use to be sure that I am not missing anything at a later point?
Here's the environment choice for a CentOS 7 install: The difference is in the packages being installed, and in the out-of-the-box configuration (firewall, services started at boot, etc.) So for instance, "File and Print Server" will install Samba and NFS, "Basic Web Server" will install and configure a basic httpd service, and so on. Note that this choice is done for your convenience; you can always install additional packages to have the server do whatever you want. So you can safely go with "Minimal Install" and then add what you need. "Server with GUI" only means that the server will boot up on a graphical X environment. I guess it's "Minimal Install" plus GUI.
What are the differences between options for base environment in CentOS install?
1,281,554,099,000
When your University sends everyone an e-mail saying "We're sorry! One or more of our disks have failed; thus we've lost 'x' number of home directories on our servers." Is it basically just the root directory allocated for each student/professor/faculty-member? And also, what do they mean by "server" here? Thanks :)
A user's home directory is the initial directory when a user logs in. Normally the user may create files and directories only in their home directory (apart from temporary directories). Also, various settings (user-specific startup files and such) are usually stored in the user's home directory. A server is just another name for a host (a computer). Think of a computer that offers other computers some kind of service (e.g. web server, file server, etc.).
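For example, on a live system you can check where your own home directory is from the shell:

```shell
# The login shell sets $HOME from the passwd database:
echo "$HOME"

# The authoritative value is the sixth colon-separated field of your
# passwd entry (getent also consults NIS/LDAP, not only /etc/passwd):
getent passwd "$(id -un)" | cut -d: -f6
```

The two should agree unless something has overridden $HOME in the current session.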
What exactly is a "home directory"?
1,281,554,099,000
So I made a dedicated Samba file server on my Debian(3.2) machine. I have had great success accessing it from both Windows and Unix. I can SSH into it on the local network. When I try to SSH into it via the public IP address, it says connection refused. I would like to be able to ssh into it remotely, directing into the Samba share. How would I go about doing this? I hear I might have to port forward? Do I need to change anything in the smb.conf file? Here's my sshd_config file: # Package generated configuration file # See the sshd_config(5) manpage for details # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin yes StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords #PasswordAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes 
#KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding yes X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. UsePAM yes
Scriptonaut, probably your problem has nothing to do with Samba; it has to do with port forwarding/NAT. If you have your Samba-serving Debian computer in a LAN, behind a router, you need the router configured to transfer requests to some of its ports to your Samba-running machine. First, I'll explain how outgoing connections work with a router. When two machines speak via TCP/IP, each machine (source machine and destination machine) is addressed with an IP/port-number pair, so the connection is determined by two pairs: source IP/port number and destination IP/port number. When you open a tab in Mozilla and access Google on your 192.168.1.2 machine, it transfers some IP packets to the router with the source address of itself (IP=192.168.1.2) and an arbitrary outgoing TCP port number it allocated for that browser tab (like 43694), and asks the router to transfer those packets to the Google machine with a certain IP, on port 80 of that machine, because 80 is the standard port for incoming HTTP connections (you can see the list of standard TCP ports in the /etc/services file on Linux). The router allocates a port of its own at random (e.g. 12345), replaces the source IP/port pair in those packets with its own WAN IP (74.25.14.86) and port 12345, and remembers that if it gets a response on port 12345 from Google, it should automatically transfer that response back to 192.168.1.2, port 43694. Now, what happens when an outside machine wants to access your server? When you try to access your Samba server from the outside machine, it sends IP packets to your WAN IP=74.25.14.86, port 22 (because 22 is the standard TCP port for listening for SSH connections; you can see the list of standard TCP ports in the /etc/services file on Linux). Your router receives those packets. 
By default, firewalls on routers are configured to block all incoming connections to any port if there was no outgoing connection bound to that port (so, when you were accessing Google in the previous case, the router didn't block the response from Google to port 12345 of itself, because it remembered that your 192.168.1.2 initiated the connection to Google and the response from Google should come to port 12345). But it would block attempts to initiate connections from the outside world to port 22 of it, because port 22 was not mapped by any connection initiated from the LAN. So, what you need to do is configure your router to transfer all connections to its port 22 from the outside to port 22 of your 192.168.1.2. This can be done in the web interfaces of hardware routers; usually the menu option you need is called "Port forwarding" or "NAT - network address translation".
How do I SSH into my Samba file server?
1,281,554,099,000
I have a Debian Jessie (Version 8.1) server that serves multiple domain names. Each has their own folder configured under /var/www/. Each domain name has a unique conf (example.com.conf) file under /etc/apache2/sites-enabled which is linked to a matching conf file under /etc/apache2/sites-available. Each conf file has: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/example_com_dir ServerName example.com ServerAlias *.example.com </VirtualHost> I wanted to be able to accept all emails sent to each of the domains (any email sent to any [email protected]) and forward it to my Gmail. I successfully installed EXIM4 on it, and configured using dpkg-reconfigure exim4-config as follows: mail sent by smarthost; no local mail System mail name: myDomainName.TLD IP-addresses to listen on for incoming SMTP connections: 127.0.0.1 ; ::1 Other destinations for which mail is accepted: <BLANK> Visible domain name for local users: <BLANK> IP address or host name of the outgoing smarthost: smtp.gmail.com::587 Keep number of DNS-queries minimal (Dial-on-Demand)? No Split configuration into small files? No Root and postmaster mail recipient: <BLANK> Then I completed all the other steps in this tutorial: https://www.vultr.com/docs/setup-exim-to-send-email-using-gmail-in-debian. Inside /etc/hosts I have: 127.0.0.1 localhost 127.0.1.1 install.install install # The following lines are desirable for IPv6 capable hosts ::1 localhost ip6-localhost ip6-loopback ff02::1 ip6-allnodes ff02::2 ip6-allrouters Inside /etc/hostname I have one line: example.com Inside /etc/email-addresses I have: root: [email protected] info: [email protected] *: [email protected] When I run echo 'Test Message.' | mail -s 'Test Message' [email protected] I do get an email in my Gmail. Also, if a run any script from cron.d and it outputs any prints, I do get those as email notifications. So I know outgoing emails work. 
But when I send an email from [email protected] to [email protected] I do not get any notification in [email protected]. Question #1: I want to be able to get all incoming emails and forward them to somewhere else. For example, I want to send from [email protected] to my domain [email protected] and have the server send it to [email protected]. What do I have to configure in order to do so? How can that be configured for a server serving multiple domains? Question #2: I know it might be opinion based, but what are some of the free, user friendly, with web GUI access email servers that I can configure on Debian Jessie (8.1)?
Reconfigure your config by running # dpkg-reconfigure exim4-config General type of mail configuration: internet site Other destinations for which mail is accepted: example.com IP-addresses to listen on for incoming SMTP connections: fill in your IP address Those should be the most important items to change. Remove any smarthost if it's still asked. Now it should accept incoming SMTP connections (if you entered the IP address correctly), and send email on via the aliases you're already created.
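For forwarding incoming mail for specific local recipients on to an external mailbox, one hedged sketch on Debian is the /etc/aliases file, which exim's default configuration (the system_aliases router) looks up directly at delivery time; the addresses below are placeholders in the question's own redacted style:

```
# /etc/aliases -- forwarding targets for local recipients.
root: [email protected]
info: [email protected]
```

Because the default router does an exact lookup per local part, each address you want forwarded needs its own line here; a catch-all for arbitrary recipients requires editing the exim router configuration itself.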
How to configure EXIM4 to relay emails?
1,281,554,099,000
I would very much like to allow users in a small office environment to harness the power of the slocate indexed database on the file server. Currently, when users are looking for a file on our fileserver, they need to run find from their Windows workstations on the network shares that are available from the server. This loads up the server while others are working. Alternatively, I could set the indexers on every workstation to index the server locations. This is not ideal either, as the server would again be loaded with a task that must be run multiple times a day on the same set of data! Ideally, the file server will carry out its own indexing and my users (who are oblivious to Linux and its command line) will be able to log on to a simple website on the file server and run a search in much the same way I run locate commands on the command line. Is there something available?
I looked and did not find any offering that provided just a web app interface to an existing slocate database file. So you have the following options: Roll your own. Shouldn't be too difficult use a CGI based approach which would allow users to search for entries in your pre-built slocate database file. Skip using the slocate database file and use a dedicated search engine such as one of the following that includes both a crawler and a web frontend: OpenSearchServer Hyper Estraier Recoll + Recoll-WebUI Wumpus Search Engine
is there a web app for returning results to a search on an indexed database?
1,281,554,099,000
I'm trying to run tnftpd on OS X, which is NetBSD's FTP server and used to be OS X's FTP server. I built and installed it from Apple's sources. Unfortunately, it seems that I cannot run the server without root privileges. These have been my approaches so far to get the server to work without root privileges: I've tried changing the port number via the -P option, to ensure that it uses no privileged ports. I've tried fiddling with config files, such as ftpd.conf and ftpusers. I've also tried the -r option (which disallows root privileges once a user logs in). All of these attempts have been to no avail. Some examples to illustrate my attempts: $ ftpd -lnD # exit code is 0, but `ps' shows no server running $ ftpd -lnDr # supposed to drop root privileges, but same as above $ # let's try running on a different port... $ ftpd -lnDr -P 50001 # exit code still 0, but no dice However, if I try something like this (this is a scenario where I have no custom configurations in place): $ sudo ftpd -lnD Password: $ ps aux | grep -i ftpd root 21998 0.0 0.0 4298888 720 ?? Ss 10:41PM 0:00.00 ftpd -lnD I have no problems. How can I run the tnftpd server without root privileges? Is it even possible?
According to the man page tnftpd(8) ... The server uses the TCP protocol and listens at the port specified in the ``ftp'' service specification; see services(5). and a scan through ftpd.conf(5) shows no obvious means to fiddle with the listen port (as opposed to the data port, which is different) so let's see if we can modify the services file, which is probably a bad idea. $ sudo perl -i.oops -pe 's/^(ftp\s+21)/${1}21/' /etc/services $ grep 2121 /etc/services ftp 2121/udp # File Transfer [Control] ftp 2121/tcp # File Transfer [Control] scientia-ssdb 2121/udp # SCIENTIA-SSDB scientia-ssdb 2121/tcp # SCIENTIA-SSDB nupaper-ss 12121/tcp # NuPaper Session Service nupaper-ss 12121/udp # NuPaper Session Service $ And with this horrible, horrible kluge effected we now start ftpd... (this is on a 10.11.6 system which has ftpd installed by default under /usr/libexec) $ /usr/libexec/ftpd -lnDr -P 50001 $ And it is running as not-root at the not-21 port: $ pgrep -lf ftpd 35258 /usr/libexec/ftpd -lnDr -P 50001 $ lsof -P -p 35258 | grep 2121 ftpd 35258 jhqdoe 4u IPv4 0x817b7cd1effd8d7f 0t0 TCP *:2121 (LISTEN) ftpd 35258 jhqdoe 5u IPv6 0x817b7cd1effa3107 0t0 TCP *:2121 (LISTEN) $ Whether this works or not I dunno; do you really need FTP? To undo this change, sudo mv /etc/services.oops /etc/services
How to run tnftpd without root on OS X?
1,281,554,099,000
I'm running a personal server at home with a CentOS 7 OS and a 12TB zpool. It's been running for a couple of years and yesterday I noticed some problems so I went in to have a look. At first it seemed like one of my drives had failed, with zpool import giving the following results: pool: media id: 1363376331138686016 state: DEGRADED status: One or more devices contains corrupted data. action: The pool can be imported despite missing or damaged devices. The fault tolerance of the pool may be compromised if imported. see: http://zfsonlinux.org/msg/ZFS-8000-4J config: media DEGRADED raidz1-0 DEGRADED ata-ST3000DM001-1ER166_W500G55Q ONLINE ata-ST3000DM001-1CH166_Z1F278KB UNAVAIL sdc ONLINE sde ONLINE sdf ONLINE This looks ok, however I can't seem to import the pool directly. Running zpool import media gives me: cannot import 'media': I/O error Destroy and re-create the pool from a backup source. I've looked around to figure out the problem but everything I've found has given me nothing. Some other things I've tried: zpool import -fFX: zpool could not be imported zpool import -fFV: zpool imported with FAULTED status zpool status: no pools available Is anyone able to point me in the right direction? I'm not sure what my next course of diagnosis should be.
Your best bet is to destroy the pool, recreate it with a replacement for the failed drive ata-ST3000DM001-1CH166_Z1F278KB, and then restore from backup. If that's not an option (it should be - neither ZFS nor RAID are a substitute for backups! nor were they ever intended to be), then you could try taking the zpool offline until you have a replacement for ata-ST3000DM001-1CH166_Z1F278KB ready to be installed. When you have acquired a replacement drive, try to import the pool in DEGRADED state and immediately replace the failed drive with the good new drive. BTW, https://serverfault.com/questions/548568/zfs-recover-from-faulted-pool-state may have some useful information for you.
DEGRADED zpool can't be imported with I/O error
1,281,554,099,000
For research purposes, I use the University server. The server login is a two-step process. First, I type ssh -p 44 [email protected] (altered for security reasons). Then the password is prompted and I type the credential. Then I type ssh [email protected]. Then the password is prompted and I type the credential. Now if I have to transfer a file (say a PDF file named first.pdf) to the working directory, I use the following commands: scp -P 44 first.pdf [email protected]:~/ scp first.pdf [email protected]:~/ Up to here it is clear. My doubt begins after this: if I have to bring first.pdf back from the final working directory to the local machine (working computer), how do I do it? I was able to access the intermediate directory ([email protected]:~/) through FileZilla, but I was not able to access the final main directory through FileZilla. I tried other software such as WinSCP, but it didn't work either. GUI software generally has a single tab for typing the username and password, but here I have a two-step login process; hence the whole confusion. How can I access the final working directory through GUI software?
The rsync command can help you with this: rsync -av -e "ssh -p 44 [email protected]" [email protected]:~/first.pdf ./ The server [email protected] will be used as a proxy to reach the file on the remote host. The file first.pdf will be copied to your local machine.
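As an alternative (a sketch assuming OpenSSH 7.3 or newer on the client; the Host aliases and hostnames below are placeholders, not taken from the question), the intermediate hop can be declared once in ~/.ssh/config with ProxyJump, after which plain scp/sftp — and GUI clients that read this file — reach the inner machine in one step:

```
# ~/.ssh/config -- hypothetical aliases for the two-hop setup
Host gateway
    HostName intermediate.example   # the first-hop host
    User your-login
    Port 44

Host work
    HostName final.example          # the inner working host
    User your-login
    ProxyJump gateway
```

With this in place, scp work:~/first.pdf . copies the file back directly; WinSCP offers a comparable tunnel option in its advanced site settings.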
How to bring back the files from a remote server (initially accessed through 2 step login) back to our local computer?
1,281,554,099,000
I am trying to set up a Raspberry Pi as a Plex Media Server. The server is set up and running, and I can access it via the web interface. I want my media files in the directory /mnt/sda/will/plex. These are the permissions of that directory: pi@raspberrypi:/mnt/sda/will $ sudo ls -lstr total 8 4 drwxrwxrwx 3 will root 4096 Dec 28 16:53 plex 4 drwxrwxrwx 2 will will 4096 Dec 28 16:53 'test folder' My Plex server is using user will: pi@raspberrypi:/mnt/sda/will $ sudo nano /etc/default/plexmediaserver with the line export PLEX_MEDIA_SERVER_USER=will. Neither the folder plex nor test folder can be seen by Plex: no media added to the directory /mnt/sda/will/ or either sub-directory is picked up by Plex. This has all the hallmarks of a permissions issue, but I can't see at all where they are incorrect. It might be worth noting that I can access the folders via a networked drive as user will.
Take a look at the permissions on the folders at or above /mnt/sda/will. All of those folders need to have r and x permission on them. 'x' means something different on folders than files. You could probably solve the problem running the following commands: sudo chmod a+rX /mnt sudo chmod a+rX /mnt/sda sudo chmod a+rX /mnt/sda/will When running chmod +X (the X being capitalized) it will only apply the x permission to folders which makes it very useful when used recursively.
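The capital-X behaviour can be demonstrated in a self-contained way with a throwaway directory (all paths below are temporary):

```shell
# Capital X adds execute permission to directories (making them
# traversable) but leaves plain files without any execute bit untouched.
tmp=$(mktemp -d)
mkdir "$tmp/folder"
touch "$tmp/file"
chmod 700 "$tmp/folder"
chmod 600 "$tmp/file"

chmod a+rX "$tmp/folder" "$tmp/file"

# folder is now drwxr-xr-x (755); file is -rw-r--r-- (644), still no x.
ls -ld "$tmp/folder" "$tmp/file"
rm -rf "$tmp"
```

This is why recursive a+rX is safe on mixed trees: media files stay non-executable while every directory on the path becomes searchable.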
Plex Media Server cannot see sub folders.. Permissions issue?
1,281,554,099,000
Preface to my Situation and Motivation I am trying to familiarize myself with Ubuntu Server edition as I set up a home server. I am setting up a small virtual network with a hypervisor for experimenting. Now, many hypervisors do have a snapshot feature, as does mine, but I still would like to learn about LVM snapshots, as there is no third party with a real server. My personal intent with my home server is to set up a headless file server that will also function as a web server and perhaps DNS, mail, and active directory--something that will be rarely touched, if ever, other than for updating. That being said, THE QUESTIONS: In a server environment, how big should a snapshot be? Ideally snapshots after a milestone: a snapshot after setting up all of the user accounts, another after configuring something like Samba, etc. In a desktop environment, how big should a snapshot be? Ideally after major milestones, such as personal projects, installing and uninstalling different programs, etc. I cannot convey exactly what I mean, but hopefully it is clear. Personal Thoughts As I understand it, snapshots work by tracking changes to the data rather than being actual direct copies like mirrored backups. That being said, I assume that something more than a gigabyte is more than perfect for the lifetime, or the majority of the lifetime, of a server that will never really be modified. I assume even a few hundred megabytes would be more than enough. On a personal desktop, I assume you would want a snapshot with the space of a few gigabytes, or even a series of snapshots totalling several gigabytes.
Start with what you think is a reasonable value, and then set up LVM2's snapshot auto extension feature to cover things if something goes wrong. LVM2 has this really neat feature that will automatically extend snapshot volumes that are nearly full without requiring any user intervention. To use it, you need to edit some of the defaults in the LVM2 configuration file (usually located in /etc/lvm/lvm.conf). The relevant configuration parameters are snapshot_autoextend_threshold and snapshot_autoextend_percent, both found in the activation section of the configuration. The first one (snapshot_autoextend_threshold) controls the maximum amount of space that LVM will let the snapshot have allocated before trying to extend it as a percentage of the total size, and the second one (snapshot_autoextend_percent) controls the amount of space to add when it gets extended as a percentage of the current size. There are a couple of caveats to using this functionality though: This isn't event based, but instead polls the state of the snapshot. This means that there is some lag between when a write happens that pushes the snapshot space usage over the auto extension threshold and when the snapshot actually gets extended. Make sure and plan your threshold and extension amount accordingly. Each snapshot may end up taking up a bit more space than the original volume. Unless your careful, this can quickly run you out of space.
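For concreteness, the two parameters live in the activation section of /etc/lvm/lvm.conf; the numbers below are illustrative examples, not recommendations:

```
activation {
    # Try to extend a snapshot once it is 70% full ...
    snapshot_autoextend_threshold = 70
    # ... and grow it by 20% of its current size each time.
    snapshot_autoextend_percent = 20
}
```

A threshold of 100 disables auto extension entirely, so any value below 100 turns the feature on.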
How large should an LVM snapshot be?
1,281,554,099,000
I have installed a Samba server on Ubuntu Server 12.04 so I can store files there from my desktop or laptop or whatever. /etc/samba/smb.conf: #======================= Global Settings ===================================== [global] workgroup = WORKGROUP server string = Samba Server %v netbios name = xxx security = user map to guest = bad user dns proxy = no #============================ Share Definitions ============================== [Public] path = xxx browsable = yes writable = yes guest ok = yes read only = no create mask = 0660 directory mode = 0770 [Jochem] path = xxx browsable = yes writable = yes guest ok = no valid users = jochem read only = no create mask = 0660 directory mode = 0770 I added a user: sudo adduser jochem, sudo addgroup smbgrp, sudo adduser jochem smbgrp, sudo smbpasswd -a jochem Then restarted the service: sudo service smbd restart. When I open Windows Explorer and go to \\xxx\ I can see 2 folders: Public and Jochem. I can access Public without any problem, can write files to it, read, etc. Then when I want to access Jochem, I get this window: I then fill in my username and password as entered via smbpasswd -a jochem, but then I get an error message. What is wrong here? I think it might be because I am logging in via my PC's domain JOCHEM-PC, but I've tried xxx\jochem and it doesn't work either.. Any ideas?
I found that this happened because I was already logged in with a different account to the same network but a different shared directory. I left out other directories since posting the whole config was a bit too much, or so I thought.. It appears, as the error says, you can only be connected to the shared directories with one user at the same time. Adding my account to multiple shares instead of having different accounts for different shares was the solution!
Samba shares - Cannot access with users
1,281,554,099,000
I have installed FTP and SFTP on my Linux Mint system for WordPress file transfer. I had to create new user accounts for FTP and SFTP on my system during the installation process. Is it possible to run FTP and SFTP from my home user account, and if yes, how do I configure it? And if single-user-account access is possible, what is advisable: multiple/separate user accounts for FTP/SFTP, or a single account merged with the home user? I tried searching for solutions on various forums, but did not get any clarity. Adding for more clarity: for WordPress, I referred to this link wordpress installation To upload WordPress files to GitHub, I needed to convert the site to a static site, so to use the 'Simply Static' WordPress plugin I needed FTP to upload files from my local WordPress installation. Below are the links I used for FTP and SFTP installation. I used these 2 links for FTP installation ftp install link 1 & its user creation snap ftp install link 2 & its user creation snap and for SFTP sftp install link & its user creation snap For both FTP and SFTP I had to create new users to use them in FileZilla/an FTP client. So I want to clarify whether I can use my default system user for FTP/SFTP access and how to configure it, and whether it is advisable to create new separate users, and the reason for it. I searched for how and why I can't use the system user for all these server processes, but did not get any clarity from the internet.
The SFTP article you quoted is misleading at best, and inaccurate for the remainder. SFTP has nothing to do with vsftpd. You can use sftp with your own user account already: sftp you@yourhost If you have SFTP you almost certainly do not need to install vsftpd. At least, not unless you have a legacy file transfer application you need to support. In direct response, Is it possible to run ftp and sftp from home user account, if yes, how to configure it Yes. Undo everything in the SFTP document, including the creation of a user account. It was already installed and configured for that.
can we run ftp or sftp from single user account?
1,281,554,099,000
I'm part of a project group, and one of our tasks is to upload some content to a server. I am completely new to this and have some questions. There is content on different servers (distributed across several servers). My task is to get all the content, upload it to a single server, and make it available to others; e.g., everybody that visits that server (via a website) should be able to download the content. How does one do that on a UNIX server? And how do I make the content available to others? I have no experience with servers, but I have experience with Java (also Java EE). What do I need to learn to be able to do these tasks? I'll be getting an account for the server soon, which will give me access to it. Please, if the question is in the wrong place, or it violates any rules, let me know. Thanks.
You will need access to the old servers too. Then you should be able to simply copy the contents using scp or, more efficiently, by creating a gzipped tar file of the content on the old servers, moving everything to the new one with scp and untaring there. If this is going to be a repeated task, with changing content on the old servers, it might be better to use rsync. This will copy everything on the first run and limit itself to the changes that occured since the last rsync on later runs. To make the content accessible via the webserver, you have to configure apache accordingly. The details about the Unix-commands mentioned can be found in the man pages. One example for a simple apache setup can be found here: http://www.thegeekstuff.com/2011/07/apache-virtual-host/ For more details see the official apache documentation: http://httpd.apache.org/docs/
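The tar-over-ssh approach described above can be sketched like this. The hostnames and paths (olduser@oldserver, /srv/content, /srv/www) are placeholders, not from the question; the pipe is demonstrated here between two local directories, which is exactly what the ssh variant does across the network:

```shell
set -e

# Stand-ins for the old server's content tree and the new server's web root
mkdir -p tar_demo_src tar_demo_dst
echo "hello" > tar_demo_src/index.html

# Create a gzipped tar stream of the source tree and unpack it at the target.
# Across the network, the left-hand side becomes:
#   ssh olduser@oldserver 'tar czf - -C /srv/content .'
tar czf - -C tar_demo_src . | tar xzf - -C tar_demo_dst

# For repeated runs with changing content, rsync only transfers the deltas:
#   rsync -az olduser@oldserver:/srv/content/ /srv/www/
cat tar_demo_dst/index.html   # hello
```

Streaming the tar output avoids creating a temporary archive file on either machine.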
How to upload to a unix server?
1,623,772,750,000
I just signed up with digital ocean for my Debian server. In the server I created multiple users, each user having an access to 1 different folder to view files in there. Now what I want to do is to access that folder from my web browser and login with username and password so that I can start view my txt file. What is the approach that I should take?
You need to set up a web server, for instance Apache, setting up user areas, and WebDAV services if you want users to write files there. As an introduction to the topic: how to configure webdav access with apache on ubuntu-14-04.

For starters, you can do:

sudo apt install apache2
sudo a2enmod dav
sudo a2enmod dav_fs

The user authentication will depend on the method you used to create the users.
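For username/password login in the browser, a minimal sketch on top of that uses HTTP basic auth. The /srv/files path, the Alias and the site file name below are assumptions for illustration, not taken from the question:

```apache
# e.g. in /etc/apache2/sites-available/files.conf
Alias /files /srv/files
<Directory /srv/files>
    DAV On
    AuthType Basic
    AuthName "File server"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Directory>
```

Create the credentials with htpasswd -c /etc/apache2/.htpasswd someuser, enable the site with a2ensite files, and reload Apache. Note that these are Apache users stored in the htpasswd file, separate from the system accounts, unless you configure a PAM-based auth module instead.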
Access Debian server folder via web browser with user login
1,623,772,750,000
Our IT department needs to set up a file server that will hold the organization's documentation and all other important information; it's currently being held on our Windows AD server. Main objective: To allow all windows users the ability to access that Linux File server using a networked folder as they did before. We decided to use Ubuntu Server 14.04 64-bit. I am no expert in Linux, but I know in the past I have worked on servers using SAMBA; I am not sure if SAMBA is what I need to achieve this. The Server must be on the Organization's Domain, and be able to be accessed using Active Directory credentials. Is Samba what I need to make this happen?
Samba is a good choice for your situation. If you've decided to go with Ubuntu, there's a guide here. This page should set up exactly what you're looking for. If you're new to Linux, my recommendation is to read that guide for understanding, before attempting any commands. Don't run a command if you're not sure what it will do first. The command man man will explain man pages and how to use them to find information. Many commands can also be appended with --help or -h to offer a shorter explanation. Good luck!
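As a rough illustration of where that guide ends up (a sketch only: the realm, workgroup, idmap range and share path are placeholder values, and the idmap backend choice is exactly the kind of detail the guide covers), the AD-member smb.conf looks something like:

```ini
[global]
   security = ads
   realm = CORP.EXAMPLE.COM
   workgroup = CORP
   idmap config * : backend = tdb
   idmap config * : range = 10000-99999
   winbind use default domain = yes

[docs]
   path = /srv/docs
   read only = no
   valid users = @"domain users"
```

Once the server is joined to the domain, Windows users can browse \\server\docs with their existing Active Directory credentials.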
Ubuntu and Windows environment
1,623,772,750,000
I run a Windows Server 2008 machine. It is used as a file server (File Services feature), in addition to local shares I use CrushFTP for SFTP and HTTP access to my files. I would like to convert to Linux (Ubuntu at first). What I want is to install the OS on a single HDD (500 GB) and then have a software RAID 1 with two 2 TB HDDs. The RAID 1 volume will be used only for storing, and all programs will be installed on the OS HDD. In what stage of this setup should I make the RAID 1 volume? I have been searching around and it seems like most guides suggests making it during the installation of the OS. Is this the best way to do it, and when I upgrade with more HDDs, will I be able to extend the volume?
You need to create the RAID1 volume during installation if you want to install to a RAID1 volume. You're finding this in guides because usually, when people are asking about installation and about RAID, it's because they want to install on RAID. Since you don't want to install on a RAID volume, you have no RAID volume to create during installation. You need to create the RAID volume when you connect the first disk that should be in this volume to your system. This is done with the mdadm command. The size of a RAID1 volume is the size of the smallest disk that makes up that volume. If at some point you replace the smallest disk by a larger disk, you can grow the volume to cover the disk that is now the smallest one.
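Creating that storage-only array afterwards is a handful of mdadm commands. This is a non-runnable sketch: /dev/sdb and /dev/sdc are placeholders for your two 2 TB disks, everything needs root, and mdadm --create destroys the contents of the named devices, so double-check the device names first:

```shell
# Build the mirror from the two data disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it
mkfs.ext4 /dev/md0
mkdir -p /srv/storage
mount /dev/md0 /srv/storage

# Make the array and the mount persistent across reboots
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /srv/storage ext4 defaults 0 2' >> /etc/fstab
```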
Ubuntu Server - OS on single HDD, file server on software RAID 1
1,623,772,750,000
I am looking for a linux based file server that I can use to store all my files and access remotely through Internet. I have come across a few different alternatives, most promising amahi, but most of these servers prefer the server to manage DHCP, which I can not do as first of all, restarting all the devices on my network would be a seriously tedious task and the router currently managing the DHCP will have more up-time than my server. I also have a windows home server, which is connected to the Internet through my router, but I can't mount it when I'm not at the local network, I can only manage the files through a browser, which is really unpractical. So basically what I want is: A home file server that can be mounted as a disk, or viewed in finder in any other way, in OSX (and windows, but I'll mostly be using OSX) and I must be able to do this both at the local network and from any other location, without having to manage the DHCP from the server. I have thought of setting up openVPN (or another vpn connection if available) to access the local network remotely as if my machines was physically there, but I have no idea of how to setup the router to make this work, especially since I already have a server connected to the router.
If all you need is simple access to your files, I think that setting up a dedicated file server might be overkill. I often need to access files from home when I'm at work or vice versa and I just mount the remote volume locally using sshfs. I haven't tried this on OSX but according to their page, osxfuse should be able to do it. What you need to do is forward the appropriate port (22 for example) from your router to your server then install osxfuse and do something like

sshfs server:/path/to/folder /local/path/

You can then access your files at /local/path just like any other mounted volume. Apparently there is also a way of doing this on Windows but I have never tried it.
Linux based internet file server
1,623,772,750,000
I have two Macs that I'd like to start backing up using Time Machine, but all of my storage is attached to my Linux file server. How can I use my Linux file server (which happens to be running Ubuntu Karmic) as a custom Time Capsule replacement, and have my Macs (running 10.6) automatically back up to it using Time Machine? And lastly, is this wise? Is there any inherent risk in doing this that compromises the whole point of the backup?
There are a few hack-ish options out there, see here, and here. But I certainly wouldn't do it. This is a hack, it's not supported by Apple in any way and there is no guarantee that the next OS X update won't break it; and if it does, you're stuck with your backups in a network share that is pretty much useless at that point. Perhaps if you only need the initial backup you'll make do, but remember that Time Machine backups are incremental; you are very likely not to be able to restore the latest version of your data. With some bash $voodoo you might be able to pull it off (getting the initial backup, looking at time stamps, merging...). From my point of view you have two options: you either stick with the Apple-supported solution, be that local drives for Time Machine or investing in a Time Capsule, or you find another way to back up your two Macs to your Linux server, avoiding Time Machine altogether. And on that last note, I know a lot of people who swear by SuperDuper, and for many years I've used iBackup, which is in reality a glorified GUI for rsync, and that gives me some comfort.
How can I (and should I) use my Linux file server as a Time Machine backup server for my Macs?
1,623,772,750,000
1) I run an SSH session on the remote client to get files from the server.
2) There is a server and it keeps a very broad directory structure.
3) I've got a list of thousands of path-names to the files on the server. Yet they are only a small fraction of the whole content of the server. So, the files are to be fetched one-by-one, not by directories, no wild cards.

Task: get all files by the list and place them on the client machine with all the relative paths created on the client.

The problem I've encountered is that sftp cannot write a file into a non-existent dir:

get -p /q/w/e/r/t/y/file /base/q/w/e/r/t/y/file

does not create the sequence of q/w/e/r/t/y/ in /base/ (not even the 1st subdirectory).

Note: the solution may be for a single file too. I'll try to make a batch after it.

Not important note: actually, I don't need those paths - but there may be files with the same names and they should not conflict in the new place. So the idea of draining all files into one plain directory is not acceptable.
Use a combination of mkdir and dirname before you do a sftp get:

mkdir -p $(dirname /base/q/w/e/r/t/y/file)

dirname will extract the full directory path to the file; mkdir -p will ensure that the entire directory tree is created (even if it is partially available).
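One way to batch this (a sketch; list.txt, the /base prefix and user@server are placeholders) is to build the local directory tree and an sftp batch file from the path list in one pass, then run a single sftp -b session:

```shell
set -e
base=./base                  # local destination prefix (stand-in for /base)
batch=./sftp_batch.txt
printf '%s\n' /q/w/e/r/t/y/file /a/b/c/other > list.txt   # stand-in path list

: > "$batch"
while IFS= read -r path; do
    mkdir -p "$base$(dirname "$path")"                    # pre-create the tree
    printf 'get -p %s %s\n' "$path" "$base$path" >> "$batch"
done < list.txt

cat "$batch"
# Then fetch everything in one session, against the real server:
#   sftp -b sftp_batch.txt user@server
```

Pre-creating the directories locally works around sftp's refusal to write into a directory that does not exist yet, and the batch file avoids opening thousands of separate connections.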
Get remote files by a list preserving their relative paths, and do it on a remote machine
1,623,772,750,000
I'm very new to the world of Active Directory, Windows Server etc., so I apologise if some of the questions I ask are a bit stupid, but I'll try and explain exactly what I want to do below, and my current setup.

I'm running Ubuntu Server on a Raspberry Pi, using Kerberos and other software detailed in this video, to use it as an AD DC for my four clients that connect to it. At the moment this is really a test network on my Pi 2, before I launch on my Pi 4. The Raspberry Pi is only just powerful enough to run the network and authenticate user logons and manage group policy etc., but DNS resolution is extremely slow. From the client perspective, the network is operating completely fine with logons and policy etc., EXCEPT what they have noticed is the time it takes to make a quick Google search has increased dramatically and sometimes the search even fails.

Now, here's the question... is there a way to operate my AD DC server setup to manage group policy, users, groups, logon etc. without sending external DNS requests, e.g. bbc.co.uk or google.com, via the AD DC? I want them to be processed as they would have been before the server came along (by the router??), simply because it can't handle them, and the setup before the server was perfectly fine at handling them.

The Windows clients are configured in DNS settings to use the AD DC as their preferred DNS server (if I change this, then they lose connection to the domain and can't find it...) and use 8.8.8.8, Google's DNS server, as their secondary one, but whether I enter this in or not doesn't really seem to have an effect. And if the AD DC server is down, ALL external DNS requests across the entire network fail. It's like the backup isn't even there. You can't get onto Google from a client when the DC is down.

Any info I'm happy to provide.

Secondary bonus question: wondering why Samba network transfer speed is dramatically slower using AD on this Raspberry Pi rather than just installing Samba and having it as a network share.
Gone from 30Mb/s to 2Mb/s
The windows clients are configured in dns settings to use the ADDC as their preferred dns server (if I change this, then they lose connection to the domain and can't find it...) and use 8.8.8.8 google's dns server as their secondary one

That's an invalid configuration, prone to unexpected errors in the Active Directory context. In an AD environment your (Windows) clients must not use a DNS server that doesn't know about your internal AD setup.

Is there a way to operate my AD-DC server setup to manage group policy, users, groups, logon etc. without sending external DNS requests e.g. bbc.co.uk or google.com via the AD-DC

The Domain Controller for Active Directory must be canonical for your DNS domain. If you're asking whether you can have clients that don't use the DC for external DNS requests, then the answer is a qualified yes.

1. All DNS requests for your AD-controlled DNS domain must go to a DC
2. DNS requests for somewhere else may be resolved by any suitably responsive DNS server
3. You must not send DNS requests for your internal AD-controlled domain to an external server because its "NXDOMAIN" response will cause breakage

In the case of #2, for Linux-based or other UNIX system clients you can use either systemd or dnsmasq to implement the stateful selection. (I've done both in various cases previously.)

systemd

Create /etc/systemd/network/20-local.network, setting the DNS server to be your AD DC, and your local domain as appropriate, remembering the leading ~:

# Network interface name (*=any)
[Match]
Name=*

# Specific DNS server(s) to use for this domain
[Network]
DNS=10.0.0.1
Domains=~contoso.com

Create /etc/systemd/resolved.conf.d/20-local.conf, setting the default external DNS servers as appropriate:

[Resolve]
DNS=1.1.1.1 9.9.9.9

Reload the network and check the resolver:

systemctl restart systemd-networkd
resolvectl status

You should see that queries for your AD-controlled domain go to your Domain Controller, and everything else goes to 1.1.1.1 and/or 9.9.9.9.
dnsmasq.conf

Edit /etc/dnsmasq.conf setting the following values appropriately:

# Global nameservers
server=1.1.1.1
server=9.9.9.9

# Domain-specific nameserver (forward and reverse)
server=/contoso.com/10.0.0.1
server=/1.0.0.10.in-addr.arpa/10.0.0.1

Restart dnsmasq.

Potentially useful references:

How to configure the network interfaces from Dave Embedded Systems
Configure systemd-resolved to use a specific DNS nameserver for a given domain from Gist (Github)
Need to set different DNS configurations for home and work from Ask Fedora
domain-based routing with systemd-resolved from SuperUser
Active directory server set up DNS resolution failure or VERY SLOW, can I route external DNS requests the traditional way, before the server existed?
1,623,772,750,000
I have a linux system running Debian Wheezy stable on it and am using it as a file repository (not a true file server because I am using BitTorrent sync to move files as I move around the world). Some family has expressed a desire to store data similarly but I would like to give them the space without me necessarily having direct access to their files. I know there are some sites that if you store files on them they make it a point to let you know that they can't access it or they can't see it, etc. What mechanism are they using to achieve that level of privacy for their users? Are they able to still scan the directories for viruses or malware or by providing that kind of privacy, do they effectively make it where they cannot see the directory to scan? Or is it something else altogether (encryption, etc)? My apologies if this is better asked somewhere else or if it has already been answered. Thank you for your help! Cheers.
You pretty much have to trust the administrators of a service with the data you upload to it. Any assertion that a service provider cannot see data provided to it is dubious at best (speaking as a security professional). Even if the provider restricts access or encrypts data, you still have to trust their access restrictions (and usually they will ultimately need access to the data somehow just to provide the service) and their encryption. The only way to be reasonably certain is to encrypt the data yourself with your own encryption tools before uploading it.
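A concrete sketch of that last point, using openssl for a symmetric pre-upload encryption pass. The passphrase and filenames are placeholders, -pbkdf2 requires OpenSSL 1.1.1 or newer, and a real setup would use gpg or a proper key-management scheme rather than a passphrase on the command line:

```shell
set -e
echo "family records" > secret.txt

# Encrypt before uploading: only secret.txt.enc ever leaves your machine
openssl enc -aes-256-cbc -pbkdf2 -pass pass:correct-horse \
    -in secret.txt -out secret.txt.enc

# Decrypt after fetching it back
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:correct-horse \
    -in secret.txt.enc -out roundtrip.txt

cat roundtrip.txt   # family records
```

Since the server only ever stores the ciphertext, its administrators cannot read the content, but they also cannot scan it for malware; that trade-off is inherent to client-side encryption.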
Prevent access to files on linux file server
1,623,772,750,000
I have been researching the security threats to a web server. It makes me want to secure my own on my Raspbian OS system. What is the list of things that are recommended or optional to install or configure on the server? I currently have:

ClamAV
Fail2ban
Apache httpd

Also, please suggest how I can configure these packages to make them as secure as possible, including SSH, as I will be working on it remotely.
1. Upgrade everything

apt update && apt upgrade -y

2. Secure ssh

grep -P ^Pass /etc/ssh/sshd_config
PasswordAuthentication no

3. Configure iptables (or ufw, etc) to permit only ssh and https

apt install -y iptables-persistent
iptables --flush
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p icmp --icmp-type 255 -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -j REJECT
iptables -A FORWARD -j REJECT --reject-with icmp-host-prohibited
iptables-save > /etc/iptables/rules.v4
ip6tables --flush
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT
ip6tables -A INPUT -j REJECT
ip6tables -A FORWARD -j REJECT
ip6tables-save > /etc/iptables/rules.v6

4. Either prevent apache from running scripts (PHP, etc) or make sure your code is good.

5. Use off-site air-gapped backups, layered security, intrusion detection systems, full disk encryption, computers isolated by task.

6. Use static analysis

https://www.ssllabs.com/ssltest/
https://securityheaders.com/
https://developers.google.com/web/tools/lighthouse/
https://jigsaw.w3.org/css-validator/
https://github.com/exakat/php-static-analysis-tools
etc

7. Educate yourself...

https://arstechnica.com/author/dan-goodin/
https://www.youtube.com/watch?v=jmgsgjPn1vs&list=PLhixgUqwRTjx2BmNF5-GddyqZcizwLLGP
https://www.youtube.com/user/BlackHatOfficialYT
https://tools.ietf.org/html/
https://www.w3schools.com/
etc
Securing my Web Server with NAS
1,512,206,558,000
When I log in to an SSH server/host I get asked whether the hash of its public key is correct, like this:

# ssh 1.2.3.4
The authenticity of host '[1.2.3.4]:22 ([[1.2.3.4]:22)' can't be established.
RSA key fingerprint is SHA256:CxIuAEc3SZThY9XobrjJIHN61OTItAU0Emz0v/+15wY.
Are you sure you want to continue connecting (yes/no)? no
Host key verification failed.

In order to be able to compare, I used this command on the SSH server previously and saved the results to a file on the client:

# ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
2048 f6:bf:4d:d4:bd:d6:f3:da:29:a3:c3:42:96:26:4a:41 /etc/ssh/ssh_host_rsa_key.pub (RSA)

For some great reason (no doubt) one of these commands uses a different (newer?) way of displaying the hash, thereby helping man-in-the-middle attackers enormously because it requires a non-trivial conversion to compare these. How do I compare these two hashes, or better: force one command to use the other's format? The -E option to ssh-keygen is not available on the server.
ssh

# ssh -o "FingerprintHash sha256" testhost
The authenticity of host 'testhost (256.257.258.259)' can't be established.
ECDSA key fingerprint is SHA256:pYYzsM9jP1Gwn1K9xXjKL2t0HLrasCxBQdvg/mNkuLg.
# ssh -o "FingerprintHash md5" testhost
The authenticity of host 'testhost (256.257.258.259)' can't be established.
ECDSA key fingerprint is MD5:de:31:72:30:d0:e2:72:5b:5a:1c:b8:39:bf:57:d6:4a.

ssh-keyscan & ssh-keygen

Another approach is to download the public key to a system which supports both MD5 and SHA256 hashes:

# ssh-keyscan testhost >testhost.ssh-keyscan
# cat testhost.ssh-keyscan
testhost ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItb...
testhost ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0U...
testhost ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMKHh...
# ssh-keygen -lf testhost.ssh-keyscan -E sha256
256 SHA256:pYYzsM9jP1Gwn1K9xXjKL2t0HLrasCxBQdvg/mNkuLg testhost (ECDSA)
2048 SHA256:bj+7fjKSRldiv1LXOCTudb6piun2G01LYwq/OMToWSs testhost (RSA)
256 SHA256:hZ4KFg6D+99tO3xRyl5HpA8XymkGuEPDVyoszIw3Uko testhost (ED25519)
# ssh-keygen -lf testhost.ssh-keyscan -E md5
256 MD5:de:31:72:30:d0:e2:72:5b:5a:1c:b8:39:bf:57:d6:4a testhost (ECDSA)
2048 MD5:d5:6b:eb:71:7b:2e:b8:85:7f:e1:56:f3:be:49:3d:2e testhost (RSA)
256 MD5:e6:16:94:b5:16:19:40:41:26:e9:f8:f5:f7:e7:04:03 testhost (ED25519)
How to compare different SSH fingerprint (public key hash) formats?
1,512,206,558,000
I want to make efficient use of the fingerprint reader on my laptop. I was able to configure fingerprint reading through fprint and PAM (using the steps described in the second comment here), but I've encountered a small problem. When logging in with the fingerprint reader the GNOME keyring isn't unlocked. Now I understand that this is this way because fprint and the keyring have no support for hardware-based keystore unlocking like, for example, Windows Hello does. I have no problem with this restriction, but it means that I have to type my password on login anyway. How I get around this right now is by waiting 10s on the first login so the fingerprint reader times out and I get the password prompt. Then I enter the password to log in and the keyring gets unlocked with the login. When I unlock my device or run sudo commands afterwards I will still use the fingerprint reader. So my question is if it is possible to configure PAM in a way that allows me to do the first login directly with the password (without waiting for the fingerprint sensor to time out) while still allowing me to unlock and run sudo commands with the fingerprint reader. I'm running Linux Mint with the Cinnamon desktop.
Well, this is a bit tricky. The authentication methods for various services are controlled by files in /etc/pam.d/ directory. The pam-auth-update command will update the common-* files, which are @included by the service-specific files. I'm more of a KDE guy myself, but Cinnamon is a GNOME derivative, so its initial login is probably handled by /etc/pam.d/gdm (or /etc/pam.d/gdm-greeter if it exists). You might achieve what you want by replacing the @include lines in that file by the contents of the corresponding /etc/pam.d/common-* files, leaving out the lines referring to pam_fprintd. This should result the initial login omitting all the components related to the fingerprint sensor, but everything else still having the fingerprint authentication available as usual. (You can also achieve the same in the opposite way: by using pam-auth-update to remove the fingerprint authentication configuration from the common-* files, and then adding them back in to the service-specific file for those services you want only. But it seems like this would be more work than the first method.) Warning: Changing the PAM configuration files can easily lock you out of your computer in case of typos or other mistakes. Before editing the configuration files and logging out to test the changes, make sure you have an alternative way in to undo your change in case it turns out it does not work. In this case, switch to the text-mode virtual console, log in from there and become root, so that you'll be already fully authenticated on that console. Then switch back to the GUI mode, make your changes (first making a backup copy of any file you'll change) and log out of the GUI to test it. If the GUI login no longer works, just switch back to the text-mode virtual console where you're already logged in with root powers, and use it to undo your changes. Alternatively, start a SSH session of two from another computer before making the changes, and backup any files before changing them. 
If you lock yourself out with no alternative way in, you'll need to boot the system using a live Linux media, then mount your root filesystem and undo your changes to it.
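To make the first method concrete, here is an untested sketch. The exact file name is distribution-dependent, and the pam_unix options must be copied from your own /etc/pam.d/common-auth rather than from here; only the auth stack involves the fingerprint reader, so the other @include lines can stay as they are:

```text
# /etc/pam.d/gdm (or whichever file your login screen actually uses)
# Before:
#   @include common-auth
# After (the contents of common-auth, minus the pam_fprintd.so line):
auth    [success=1 default=ignore]      pam_unix.so nullok_secure try_first_pass
auth    requisite                       pam_deny.so
auth    required                        pam_permit.so
```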
Use fingerprint reader for everything but first login
1,512,206,558,000
I administer a lot of hosts, and every time I ssh into a new batch for the first time, it is tedious to tell my secure shell client yes for each and every host that I accept the host key fingerprint for adding into ~/.ssh/known_hosts. If we accept as a given that I am confident that there are in fact no compromised host keys, is there any way to automate this? I do not want to disable key checking for subsequent connections. For the sake of discussion, let's say that I have a list of all hosts in a text file, hostlist.txt.
ssh-keyscan will check, but not verify, a remote host key fingerprint. Iterate through the host list and append to ~/.ssh/known_hosts:

while read host; do
    if entry=$(ssh-keyscan $host 2> /dev/null); then
        echo "$entry" >> ~/.ssh/known_hosts
    fi
done < hostlist.txt
How can I automate adding entries to .ssh/known_hosts?
1,512,206,558,000
When I connect to my Dropbear SSH server for the first time, I get the following message:

me@laptop:~$ ssh me@server
The authenticity of host 'server' can't be established.
RSA key fingerprint is SHA256:NycCxoRiiSAGA7Rvlnuf1gU8pazIpXJKZ3ukdivyam8.
Are you sure you want to continue connecting (yes/no)?

To make sure that this is the correct server, I want to compare the stated fingerprint from that message to the server's real fingerprint. How can I find out the server's RSA host key fingerprint?
Locate the host key file on the server:

me@server:~$ ls /etc/dropbear/
authorized_keys  config  dropbear_rsa_host_key

Use dropbearkey to get the public key portion and fingerprint of that host key:

me@server:~$ sudo dropbearkey -y -f /etc/dropbear/dropbear_rsa_host_key
Public key portion is:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCk/0IEQhlDHfe3jd1MafWLEsTMFADflBWiODik6CnHXmXUrp1XmQ0fo16ffRxupnIuieU44VZkfCP8MX+WIVMRc/+UOJAox7U+v7B3T9H0a4ZeB48NyPdUCZ9MVSbk+kWmHn+UoXtPdMZ/htQ13XHJLHU8h2I+4dTUs1TYWeW4b8LppRexUJPCjdc9YxmkwI+ctHs6I1oguqSy6IP+aAlK0+QkNrG8HeFe1Nmg2iL5SuYfJCIgxJylK+s6KVMpzVPv7VNX3bSt1NJvf2etowR7kzTZs+uCJyzdILO2p5yAo9V80/zzwyqV+exPHUjD/SE9tYjEBkzKKNo215xQvAzV me@server
Fingerprint: sha1!! 41:b0:5e:af:8c:4d:2b:ae:fd:75:7d:f1:d5:35:e1:49:14:2e:08:12
How to verify fingerprint of Dropbear RSA host key?
1,512,206,558,000
When I try any other finger with: %> fprintd-enroll left-index-finger Using device /net/reactivated/Fprint/Device/0 failed to claim device: Not Authorized: net.reactivated.fprint.device.setusername It doesn't work for me; But if I don't specify finger (which uses right-index by default): %> fprintd-enroll Using device /net/reactivated/Fprint/Device/0 Enrolling right index finger. It works Running on Arch Linux , and packages installed from aur: fprintd 0.4.1-4 libfprint 0.4.0-3 UPDATE %> fprintd-enroll -f left-index-finger Using device /net/reactivated/Fprint/Device/0 Enrolling right index finger.
I think this is only supported as of fprintd 0.5.1: http://cgit.freedesktop.org/libfprint/fprintd/commit/?id=7eb1f0fd86a4168cc74c63b549086682bfb00b3e When I build fprintd 0.5.1, the -f option does work correctly.
fprintd-enroll works with right-index finger only
1,512,206,558,000
OpenSSH's display format for host key fingerprints has changed recently - between versions 6.7 and 6.8. When connecting to a new host, the message now looks like this: user@desktop:~$ ssh 10.33.1.114 The authenticity of host '10.33.1.114 (10.33.1.114)' can't be established. ECDSA key fingerprint is SHA256:9ZTSzJsnk0byQRs24iKoYrf/d5eDvQL60tR/zO41k/I. Are you sure you want to continue connecting (yes/no)? On the remote host server (which I reached through a 3rd machine, where I had accepted the key earlier using an older client), I can see the fingerprint with user@server:~$ ssh-keygen -l -f /etc/ssh/ssh_host_ecdsa_key 256 a2:7e:2b:87:4c:47:69:16:78:9e:1a:4b:db:a7:a2:57 root@server (ECDSA) But there's no way to match these two up. If I install an older ssh version on desktop, and first connect using that, I see user@desktop:~$ ssh 10.33.1.114 The authenticity of host '10.33.1.114 (10.33.1.114)' can't be established. ECDSA key fingerprint is a2:7e:2b:87:4c:47:69:16:78:9e:1a:4b:db:a7:a2:57. Are you sure you want to continue connecting (yes/no)? That matches, so I can safely accept it, and it gets added to my ~/.ssh/known_hosts. Then the newer version of ssh also accepts it. But that requires me to build/install the older ssh version on desktop. From an answer to another question about server fingerprints, I learned that the old form can be shown with ssh-keygen -E md5, and the new one is -E sha256. But the -E option only appeared when SHA256 became the default - the version of ssh-keygen on server can only show MD5. To see the SHA256 fingerprint of the key I trust, I'd first have to retrieve it (eg. through that 3rd machine) and put it where the newer ssh-keygen can find it. Or I'd have to run a newer ssh-keygen on server. (-E means something completely different for ssh.) How can I display both keys (the one that I trust, and the one that I'm being presented with) in the same format? Preferably without installing additional versions, or copying key files around?
Use ssh -o FingerprintHash=md5 10.33.1.114 to get the old-md5 fingerprint from the client.
Verify host key fingerprint in old format
1,512,206,558,000
I am using gpg - $ gpg --version gpg (GnuPG) 2.2.12 libgcrypt 1.8.4 I am trying to understand the difference between the two commands : $ gpg --list-key and: $ gpg --fingerprint from whatever little I see, I don't see any difference between two outputs. Am I looking at something wrong ?
The --fingerprint option prints the fingerprint into 10 groups of 4 caracters to easily verify the gpg key.
which part is the fingerprint in gpg public key
1,512,206,558,000
I am trying to make work my finger print sensor thinkpad x390 yoga. I installed printfd package using yay. When I try to run fprintd-enroll, I get this error: Using device /net/reactivated/Fprint/Device/0 failed to claim device: GDBus.Error:net.reactivated.Fprint.Error.Internal: Open failed with error: The driver encountered a protocol error with the device. When I try to run it for second time I get this: Using device /net/reactivated/Fprint/Device/0 failed to claim device: GDBus.Error:net.reactivated.Fprint.Error.Internal: Open failed with error: Device 06cb:00bd is already open I tried installing thinkfinger package but stillno luck. How can I resolve this problem ? This is my lsusb output: Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 010: ID 06cb:00bd Synaptics, Inc. Prometheus MIS Touch Fingerprint Reader Bus 001 Device 008: ID 04f2:b67c Chicony Electronics Co., Ltd Integrated Camera Bus 001 Device 033: ID 2cb7:0210 Fibocom L830-EB-00 LTE WWAN Modem Bus 001 Device 005: ID 056a:51af Wacom Co., Ltd Pen and multitouch sensor Bus 001 Device 012: ID 8087:0aaa Intel Corp. Bluetooth 9460/9560 Jefferson Peak (JfP) Bus 001 Device 002: ID 058f:9540 Alcor Micro Corp. AU9540 Smartcard Reader Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Thanks for help
The solution was to update firmware of fingerprint device. I achieved by: Installing fwupd sudo pacman -S fwupd Check if system can see device: fwupdmgr get-devices Refresh firmware database: fwupdmgr refresh --force Updating my firmware: fwupdmgr update You have to reboot immediatlly to apply update and prevent your device from weird behavior. After all these steps fprintd-enroll will run without problems
fprintd: The driver encountered a protocol error with the device