Using "&&" is not effective for my cURL form submission (the second line runs even when the first fails). In the code below, the first line is the curl submission; the second line is for queuing:

curl -b cookies.txt \
     -d title="$(sed '1,/sblmtitle/d;/slpstitle/,$d' sedut.html)" \
     -d taxonomy%5Btags%5D%5B1%5D="$(sed '1,/sblmkategori/d;/slpskategori/,$d' sedut.html)" \
     -d teaser_include=1 \
     -d body="$(sed '1,/sblmkonten/d;/slpskonten/,$d' sedut.html)" \
     -d field_source%5B0%5D%5Burl%5D="$(sed '1,/sblmurl/d;/slpsurl/,$d' sedut.html)" \
     -d changed= \
     -d form_build_id=form-424f851ad50bd4781c8c25ab7efd5c4c \
     -d form_token=0e7cc7437faf816f1ecd96087286bda9 \
     -d form_id=post_node_form \
     -d op=Save http://www.web.org/submit/post &&
for file in $(ls *.html | sort -r | tail -1); do
    mv $file sedut.html
done

If cURL fails to submit, it prints the contents of sedut.html. If cURL submits successfully, it prints nothing. But cURL always exits with 0 whether the submission succeeds or fails. I think the best workflow is "if cURL prints nothing, run the second line" and "if cURL prints something, don't run the second line". I've looked at the shell's if command but still have no idea how to implement this, because the examples cover a different case.
You hit upon the key here: if curl's output differs between success and failure, you can test for it. First, direct that output to a file. Then leverage the -s operator from test:

-s file    True if file exists and has a size greater than zero.

Here is some example code:

curl -b cookies.txt ... -o /tmp/curl_output
if [ -s /tmp/curl_output ]; then
    # do failure stuff here
else
    # do successful stuff here
fi

Another implementation, for academic reasons:

if curl -b cookies.txt ... 2>&1 | grep 'some line from sedut.html' >/dev/null 2>&1; then
    # do failure stuff here
else
    # do successful stuff here
fi
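As a side note, curl does have a flag that makes HTTP-level errors visible in the exit status: -f/--fail makes curl exit non-zero for HTTP responses >= 400, which lets plain && chaining work. A minimal sketch, demonstrated here against a closed local port (no real server is assumed) since the poster's URL is a placeholder:

```shell
# -f/--fail makes curl exit non-zero on HTTP errors (>= 400), so "&&"
# chaining works. Demonstration against a closed local port, which makes
# curl fail with exit code 7 (could not connect):
if curl -sf --max-time 2 -o /dev/null http://127.0.0.1:9/; then
    echo "unexpected success"
else
    echo "curl reported failure; not queuing next file"
fi
```

The caveat, and why output testing remains the right fix here: a site that returns HTTP 200 even when the submission fails (apparently the poster's case) will not trip --fail, so inspecting the response body is still required.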
Success/fail tests for cURL (don't execute second line if first line fails)
I can see that the uppercase letter means the default here. Is there a standard for this? I'd like to read the full standard.
It isn't a written standard. It is, however, a de facto standard, meaning it has been used and found useful, so it came to be the common way to do things. As Alex said - it comes from common sense.
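The convention is easy to implement yourself, which is probably why it spread: the uppercase letter is what an empty answer falls through to. A minimal sketch (the prompt text mirrors yum's):

```shell
# Minimal sketch of the convention: the uppercase letter is the default,
# i.e. pressing Enter without typing anything selects it (here: No).
confirm() {
    printf 'Is this ok [y/N]: ' >&2
    read -r answer || answer=""
    case "$answer" in
        [Yy]*) echo "proceeding" ;;
        *)     echo "aborting" ;;    # empty input falls through to the default, N
    esac
}

confirm </dev/null    # no input selects the default, so this prints "aborting"
```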
What's the standard used by yum prompt "Is this ok [y/N]:"?
I'm writing a perl program where I want to accept all kinds of "file names" from the user, including /home/foo/bar.txt and scp://server/some/file.txt or whatever. And then I thought that if I can find a command-line program that does it, I'll find a perl module that does it too, or I'll just run it through perl's system() call. I thought of cURL and/or lftp, but on Ubuntu I get:

> curl scp://server/some/file
curl: (1) Protocol scp not supported or disabled in libcurl
> lftp -c 'get scp://server/some/file'
get: scp - not supported protocol

For curl I investigated why that was on Ubuntu and found "curl and pycurl is not compiled with sftp support" - basically it ain't gonna happen unless I recompile libcurl*, but I don't want to require my users to apply a patch to libcurl*. Sure, I can write it myself, as in:

if ($proto eq 'scp') {
    # handle scp
    open I, 'scp ...' or die;
} elsif ($proto eq 'http') {
    open I, 'wget ...' or die;
} else {
    # whatever
}

But I'd much rather somebody else write/test/debug that with a huge list of protocols, and I'm surprised I couldn't find anything that does this out of the box! Do you know of any that I missed?
I believe the protocol name is sftp, not scp. On my system, the following works: lftp -c 'get sftp://someserver/file', as do ftp and http.
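If no single tool ends up covering every scheme, the dispatcher the poster sketched in perl can also be expressed in a few lines of shell. This is a hypothetical sketch: the tool choices (lftp, wget, cat) are illustrative assumptions, not a complete mapping, and the commands are only echoed here rather than run:

```shell
# Hypothetical dispatcher: pick a fetch tool based on the URL scheme.
fetch() {
    url=$1
    case "$url" in
        scp://*|sftp://*)           echo "would run: lftp -c \"get $url\"" ;;
        http://*|https://*|ftp://*) echo "would run: wget $url" ;;
        /*)                         echo "would run: cat $url" ;;  # plain local path
        *)                          echo "unsupported: $url" >&2; return 1 ;;
    esac
}

fetch "scp://server/some/file.txt"
fetch "/home/foo/bar.txt"
```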
Need command-line program to download all of http:// https:// scp:// ftp:// style links (on e.g. ubuntu)
When I try to send email from alpine I get the following error:

Error sending: connection failed to gmail-smtp...

(The message disappears before I have time to copy it.) I suspect this is a firewall issue (I'm on a university campus). How can I fix this (or at least confirm the cause)?

Edit: I tried to telnet:

yotam@sumsum:$> telnet smtp.gmail.com 587
Trying 209.85.229.109...
telnet: Unable to connect to remote host: Connection timed out
Try connecting directly to the SMTP port on the server via telnet. See the Wikipedia page for an illustrative example. Start with the following (for an SMTP connection to smtp.gmail.com at port 25); replace with the server and port you are using. In the following, EHLO and HELP are typed at the client prompt as part of the SMTP negotiation. Note that the server's response to HELP is to point you to a copy of the Simple Mail Transfer Protocol (SMTP) specification.

$ telnet smtp.gmail.com 25
Trying 74.125.53.109...
Connected to gmail-smtp-msa.l.google.com.
Escape character is '^]'.
220 mx.google.com ESMTP k9sm2692779pbc.22
EHLO
250-mx.google.com at your service, [59.183.41.125]
250-SIZE 35882577
250-8BITMIME
250-STARTTLS
250 ENHANCEDSTATUSCODES
HELP
214 2.0.0 http://www.google.com/search?btnI&q=RFC+2821 k9sm2692779pbc.22

If you can get this far, I suspect you don't have connection issues. Just one question to clarify: is Alpine trying to deliver your mail directly, or are you handing off mail delivery to a local mail server, e.g. Exim or Postfix? I have all my mail handed off to Exim, which then sends it to a smarthost; that seems the preferred thing to do, at least for hosts that don't have a static IP address. One advantage of the latter approach is that you can look at your mail server logs and/or queue to see what is happening with your mail.
alpine failed to connect to smtp.gmail.com
So someone decided to set my gnome background as a practical joke to a rather disturbing picture. However, I don't use gnome and only log into it by accident. I need to remove the picture so that when I accidentally log into gnome it's not there. Can someone tell me what part of what file to modify?
gconftool-2 -t string -s /desktop/gnome/background/picture_options scaled               # background style
gconftool-2 -t string -s /desktop/gnome/background/picture_filename PATHTOIMAGEHERE     # background file
How do I unset a background in gnome on the command line?
I've installed Ubuntu Server 10.10, and when I press up in a terminal nothing happens. On Ubuntu Desktop this gave me the previous command that I ran. How can I make it do this in Ubuntu Server as well?
Does the up arrow key work in other contexts? It sounds like your keyboard might be mapped wrong; bash history itself should work out of the box.
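Another thing worth ruling out, hedging a bit: on minimal server installs the account may be using a shell without readline history (e.g. plain sh) rather than bash, and in that case the arrow keys print escape sequences or do nothing. A quick check, assuming the usual Ubuntu path for bash:

```shell
# First check which shell you are actually in; a minimal shell (e.g. sh)
# has no readline history, so the up arrow does nothing useful there.
echo "$0"            # or: echo "$SHELL"
# If it is not bash, switching the login shell usually restores
# arrow-key history (path below is the usual Ubuntu one):
# chsh -s /bin/bash
```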
Pressing `up` to get the previous command in a tty on Ubuntu Server
I am planning to setup a groupware server that's either Citadel or SOGo, which supports the GroupDAV, CardDAV, or SyncML protocols. Is there a commandline e-mail client that supports syncing contacts via such protocols either out of the box or with a plugin/extension?
According to the documentation of the two software products you linked to (here and here), both support storing directory information using LDAP. If you do not find a command-line email client that supports the protocols you mentioned, you could try using LDAP instead; every decent email client supports LDAP.
Commandline e-mail client that syncs contacts with external server?
I have un-tarred the nzbget 0.7.0 debug version and put it in the /ffp/bin directory, and I have a config file in the /ffp/etc/ directory, but when I try to run it I get the following:

root@NAS:/mnt/HD_a2/ffp/bin# nzbget
-sh: nzbget: not found
root@NAS:/mnt/HD_a2/ffp/bin# sh nzbget
nzbget: line 1: syntax error: word unexpected (expecting ")")

I used this how-to: http://www.aroundmyroom.com/2009/01/27/the-how-to-that-replaces-all/ and this tar: nzbget-0.7.0-bin-dns323-arm-debug.tar.gz from http://sourceforge.net/projects/nzbget/files/

What did I do wrong? P.S. I logged in as root.
nzbget is a binary file; you can't use sh to process it. You would only do that if nzbget were a shell script. Running just nzbget didn't work because, by default, the current directory is not on the PATH, so you need to do something like:

$ ./nzbget

Or:

$ /mnt/HD_a2/ffp/bin/nzbget
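The "not found" part can be reproduced with any executable, which makes the PATH behaviour easy to see. A small self-contained demonstration (myprog stands in for nzbget):

```shell
# Why the bare name failed: "." is not on PATH, so a bare "myprog" is not
# found, while an explicit path works.
dir=$(mktemp -d)
cd "$dir"
printf '#!/bin/sh\necho hello\n' > myprog
chmod +x myprog
myprog 2>/dev/null || echo "bare name: not found"   # current dir not on PATH
./myprog                                            # prints: hello
```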
CH3MNAS Fun Plug and NZBget. Cannot launch NzbGet 0.7, word unexpected
I wrote a bash command to scan /var/log/auth.log for messages occurring on the current day indicating unauthorised access. Currently it just fetches messages matching BREAK-IN and unauthorized. What other strings should I search for in /var/log/auth.log to keep tabs on unauthorized access? Here's the script for reference:

cat /var/log/auth.log | grep "$(date | awk '{print $2" "$3}')" | grep -E '(BREAK-IN|Invalid user|Failed|refused|su|Illegal)'

Edit: here's the amended command, based on Justin's suggestions and what I found through Google:

grep "$(date | awk '{print $2" "$3}')" /var/log/auth.log | grep -E '(BREAK-IN|Invalid user|Failed|refused|su|Illegal)'
You could look for "Invalid user", which is logged when someone tries to log on with an account that does not exist. "Failed password" is logged when an invalid password is entered. Also, you don't need to use cat to dump the file to grep; grep can read the file directly as an argument:

grep search-criteria /path/to/file
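Putting those patterns together, here is a check against some made-up auth.log-style lines (the sample log entries are illustrative, not real output):

```shell
# Patterns from the question plus "Failed password", run against sample
# auth.log-style lines; the first two lines match, the CRON line does not.
printf '%s\n' \
  'Oct 15 10:00:01 host sshd[1]: Invalid user admin from 1.2.3.4' \
  'Oct 15 10:00:02 host sshd[2]: Failed password for root from 1.2.3.4' \
  'Oct 15 10:00:03 host CRON[3]: pam_unix(cron:session): session opened' |
grep -E 'BREAK-IN|Invalid user|Failed password|refused|Illegal'
```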
What strings should I look for in /var/log/auth.log?
I have a command-line utility that generates text strings; running the command within its directory:

./Utility -P

prints output on the terminal. How can I make a shell script show the generated string in an info dialog box, where the output can be selected by mouse and copied? So I need to send information from a shell script to a GUI desktop environment. I mean launching the shell script by double-clicking on it, so there is no longer a need to open a terminal every time and type the command.

EDIT: working example:

#!/bin/bash
output="$( ./Utility -P )"
zenity --info --text "${output}" --icon-name=info --title "Daily test" --width=300 --height=80
This will capture standard output from the sub-shell and save it in a variable named output (you can call it any other name):

output="$( ./Utility -P )"
zenity --info --text "${output}" --title "Utility finished"

There are alternative tools such as yad (a fork of zenity, iirc), or even a way to use your desktop's notification utility (via dbus) if you have one. Assuming you have xfce4-notification-daemon or some other daemon consuming that notification stuff:

notify-send --icon /opt/icons/Utility.png -- "Utility finished" "${output}"

where the first string goes to the "title" of the notification, and the second string becomes the "body" of the notification.

EDIT: or even just

xmessage "${output}"

if you like minimal dependencies, though that assumes ${output} doesn't match one of the options supported by xmessage. Alternatively:

./Utility -P | xmessage -file -

which doesn't have that problem.
Shell script to run command and show output in notification box
I'd like to generate a full-page QR code from the command line. The following command sort of accomplishes this:

qrencode -o- https://example.net | convert png:- -gravity center -units pixelsperinch -density 300 -resize 2551x3295 -extent 2551x3295 qrcode.pdf

Unfortunately, the resulting QR code is quite blurry, as though ImageMagick is doing some sort of unwanted antialiasing. Is there a way to make the code completely crisp, or just a better/simpler approach to generating a PDF file with a full-page QR code?
Change your -resize to -scale, which uses "nearest neighbour" interpolation, i.e. it only uses colours already present in the original image without introducing new ones by interpolating between the old ones:

qrencode -o- https://example.net | convert png:- -gravity center -units pixelsperinch -density 300 -scale 2551x3295 -extent 2551x3295 qrcode.pdf
How to generate full-page QR code from command line?
If you have only one instance of VLC running, you can talk to VLC with dbus-send using org.mpris.MediaPlayer2.vlc as destination:

$ dbus-send --dest=org.mpris.MediaPlayer2.vlc ...

If you have two instances of VLC running, they have different destinations xxxx and yyyy. If you want to talk to one of them, you must use xxxx or yyyy as destination:

$ dbus-send --dest=:xxxx ...

The destination of the first instance I can find in this way:

$ dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListQueuedOwners string:org.mpris.MediaPlayer2.vlc
method return time=1702494718.199915 sender=org.freedesktop.DBus -> destination=:1.1256 serial=3 reply_serial=2
array [
   string ":1.1251"
]

Then I see that the destination is :1.1251. But how can I get the destination of the second instance?
I always use busctl to get the complete D-Bus information. There is a system bus (busctl --system) and a session bus (busctl --user), or a complete list (busctl -l). Enter:

busctl --user | grep "vlc"

then you get something like this:

:1.641                              91266 vlc    xxxxx :1.641  session-c2.scope c2 -
:1.642                              91266 vlc    xxxxx :1.642  session-c2.scope c2 -
:1.643                              91266 vlc    xxxxx :1.643  session-c2.scope c2 -
:1.644                              91266 vlc    xxxxx :1.644  session-c2.scope c2 -
:1.654                              91361 vlc    xxxxx :1.654  session-c2.scope c2 -
:1.655                              91361 vlc    xxxxx :1.655  session-c2.scope c2 -
:1.656                              91361 vlc    xxxxx :1.656  session-c2.scope c2 -
:1.657                              91361 vlc    xxxxx :1.657  session-c2.scope c2 -
org.kde.StatusNotifierItem-91266-2  91266 vlc    xxxxx :1.644  session-c2.scope c2 -
org.kde.StatusNotifierItem-91361-2  91361 vlc    xxxxx :1.657  session-c2.scope c2 -
org.mpris.MediaPlayer2.vlc          91266 vlc    xxxxx :1.641  session-c2.scope c2 -
org.mpris.MediaPlayer2.vlc.instance91361 91361 vlc xxxxx :1.654 session-c2.scope c2 -

The org.* entries are the active services. You see there is a second service called org.mpris.MediaPlayer2.vlc.instance91361: the first VLC instance is on PID 91266, the second VLC instance is on PID 91361. So in my example the call:

dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListQueuedOwners string:org.mpris.MediaPlayer2.vlc.instance91361

will give you the information about the second instance.

By the way, you can examine the D-Bus service for its object tree:

busctl --user tree org.mpris.MediaPlayer2.vlc.instance91361

you get:

└─/org
  └─/org/mpris
    └─/org/mpris/MediaPlayer2

Then you can also introspect the objects:

busctl --user introspect org.mpris.MediaPlayer2.vlc.instance91361 /org/mpris/MediaPlayer2

output:

NAME                                TYPE      SIGNATURE RESULT/VALUE FLAGS
org.freedesktop.DBus.Introspectable interface -         -            -
.Introspect                         method    -         s            -
org.freedesktop.DBus.Properties     interface -         -            -
.Get                                method    ss        v            -
.GetAll                             method    s         a{sv}        -
.Set                                method    ssv       -            -
.PropertiesChanged                  signal    sa{sv}as  -            -
org.mpris.MediaPlayer2              interface -         -            -
.Quit                               method    -         -            -
.Raise                              method    -         -            -
.CanQuit                            property  b         true         emits-change
.CanRaise                           property  b         false        emits-change
.CanSetFullscreen                   property  b         false        emits-change
.DesktopEntry                       property  s         "vlc"        emits-change
.Fullscreen                         property  b         false        emits-change writable
.HasTrackList                       property  b         false        emits-change
.Identity                           property  s         "VLC media player" emits-change
.SupportedMimeTypes                 property  as        29 "audio/mpeg" "audio/x-mpeg" "video/m… emits-change
.SupportedUriSchemes                property  as        21 "file" "http" "https" "rtsp" "realrt… emits-change
org.mpris.MediaPlayer2.Player       interface -         -            -
.Next                               method    -         -            -
.OpenUri                            method    s         -            -
.Pause                              method    -         -            -
.Play                               method    -         -            -
.PlayPause                          method    -         -            -
.Previous                           method    -         -            -
.Seek                               method    x         -            -
.SetPosition                        method    ox        -            -
.Stop                               method    -         -            -
.CanControl                         property  b         true         emits-change
.CanPause                           property  b         false        emits-change
.CanPlay                            property  b         false        emits-change
.CanSeek                            property  b         false        emits-change
.LoopStatus                         property  s         "None"       emits-change writable
.MaximumRate                        property  d         32           emits-change writable
.Metadata                           property  a{sv}     0            emits-change
.MinimumRate                        property  d         0.032        emits-change writable
.PlaybackStatus                     property  s         "Stopped"    emits-change
.Position                           property  i         0            emits-change
.Rate                               property  d         1            emits-change writable
.Shuffle                            property  d         false        emits-change writable
.Volume                             property  d         0            emits-change writable
org.mpris.MediaPlayer2.TrackList    interface -         -            -
.AddTrack                           method    sob       -            -
.GetTracksMetadata                  method    ao        aa{sv}       -
.GoTo                               method    o         -            -
.RemoveTrack                        method    o         -            -
.CanEditTracks                      property  b         true         emits-change
.Tracks                             property  ao        0            emits-change
.TrackAdded                         signal    a{sv}o    -            -
.TrackListReplaced                  signal    aoo       -            -
.TrackMetadataChanged               signal    oa{sv}    -            -
.TrackRemoved                       signal    o         -            -

You see all defined interfaces and methods.
For example, let's take the OpenUri method from the interface org.mpris.MediaPlayer2.Player:

busctl --user call org.mpris.MediaPlayer2.vlc /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player OpenUri s "your url"

will open the video in the first instance;

busctl --user call org.mpris.MediaPlayer2.vlc.instance91361 /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player OpenUri s "your url"

will open a video in the second instance.

call syntax:

call SERVICE OBJECT INTERFACE METHOD [SIGNATURE [ARGUMENT...]]

For further information, see busctl --help. Have a nice day!
How to get the destinations of two instances of VLC
I can run a command in all directories named _QWE using:

find . -name '_QWE' -type d -execdir touch {}/1234.txt \;

However, I need to run

-execdir rename 's!^\./(\d+ -)\s(\d+\.)!$1!' {} \;

in all the immediate sub-directories of _QWE, to rename the sub-directories of the sub-directories of _QWE. I mean, suppose I have a directory structure like:

├── folder_1
│   └── _QWE
│       ├── Course 1
│       │   ├── 1 - Introduction
│       │   ├── 2 - 1. Basics of Course 1
│       │   ├── 3 - 2. Conclusion of Course 1
│       ├── Course 2
│       │   ├── 1 - Introduction
│       │   ├── 2 - 1. Basics of Course 2
│       │   ├── 3 - 2. Conclusion of Course 2
├── folder_2
│   └── folder_3
│       └── _QWE
│           ├── Course X1
│           │   ├── 1 - Introduction
│           │   ├── 2 - 1. Basics of Course X1
│           │   ├── 3 - 2. Conclusion of Course X1
│           ├── Course X2
│           │   ├── 1 - Introduction
│           │   ├── 2 - 1. Basics of Course X2
│           │   ├── 3 - 2. Conclusion of Course X2

Here I want to rename:

1 - Introduction
2 - 1. Basics of Course 1
3 - 2. Conclusion of Course 1
1 - Introduction
2 - 1. Basics of Course 2
3 - 2. Conclusion of Course 2
1 - Introduction
2 - 1. Basics of Course X1
3 - 2. Conclusion of Course X1
1 - Introduction
2 - 1. Basics of Course X2
3 - 2. Conclusion of Course X2

Here for example, 3 - 2. Conclusion of Course X2 will be renamed to 3 - Conclusion of Course X2; this is what 's!^\./(\d+ -)\s(\d+\.)!$1!' does. Just to be clear, 3 - 2. Conclusion of Course X2 is a directory name, not a file name. How can I do that?

Update 1: I could get the paths using:

for dir in $(find . -name '_QWE' -type d)/*/ ; do echo $dir ; done

or,

for dir in $(find . -name '_QWE' -type d)/*/ ; do (cd "$dir"; pwd); done

But,

for dir in $(find . -name '_QWE' -type d)/*/*/ ; do rename 's!^\./(\d+ -)\s(\d+\.)!$1!' $dir ; done

is not yielding any output.
You could use the -path predicate to pick out directories immediately under the matching directories:

find . -depth -type d -path '*/_QWE/*/*' ! -path '*/_QWE/*/*/*' -exec rename 's!(/\d+)\s+-\s+\d+\.\s+([^/]*)$!$1 $2!' {} +

I have modified your RE slightly to anchor it to the filename component of the matched path. (For PCRE expressions as used here, \s+ matches one or more whitespace characters; \d+ matches one or more digits; [^/]* matches a run of zero or more characters except /.) Using your example tree of directories, here is the corresponding output when using rename -n:

./folder_1/_QWE/Course 1/2 - 1. Basics of Course 1 renamed as ./folder_1/_QWE/Course 1/2 Basics of Course 1
./folder_1/_QWE/Course 1/3 - 2. Conclusion of Course 1 renamed as ./folder_1/_QWE/Course 1/3 Conclusion of Course 1
./folder_1/_QWE/Course 2/2 - 1. Basics of Course 1 renamed as ./folder_1/_QWE/Course 2/2 Basics of Course 1
./folder_1/_QWE/Course 2/3 - 2. Conclusion of Course 1 renamed as ./folder_1/_QWE/Course 2/3 Conclusion of Course 1
./folder_2/folder_3/_QWE/Course X1/2 - 1. Basics of Course 1 renamed as ./folder_2/folder_3/_QWE/Course X1/2 Basics of Course 1
./folder_2/folder_3/_QWE/Course X1/3 - 2. Conclusion of Course 1 renamed as ./folder_2/folder_3/_QWE/Course X1/3 Conclusion of Course 1
./folder_2/folder_3/_QWE/Course X2/2 - 1. Basics of Course 1 renamed as ./folder_2/folder_3/_QWE/Course X2/2 Basics of Course 1
./folder_2/folder_3/_QWE/Course X2/3 - 2. Conclusion of Course 1 renamed as ./folder_2/folder_3/_QWE/Course X2/3 Conclusion of Course 1

For testing, I also added a sample set of directories using the same names under a folder_4, but not under a directory _QWE. As expected, these were correctly ignored.
Rename the sub-sub-directories of a directory named _QWE
In my Lubuntu 18.04 distribution I have installed google-chrome version 114. Via a bash script, at every boot, Chrome is launched and connects to a specific URL. The URL is always the same, but sometimes the content of the web site changes, so I need to clear google-chrome's cache at every boot. This need comes from the fact that my system is always switched off (without a correct shutdown) while Chrome is running, so the browser is not closed correctly and its cache at the next boot is not empty. If the browser were closed properly I could select the Google Chrome option "Clear cookies and site data when you quit Chrome", as explained for example in this link. My problem is the same as described in this post. To clear the cache I have followed the tips in this link and created a bash script containing the following commands:

rm -rf /home/myuser/.cache/google-chrome/*
rm -rf /home/myuser/.config/google-chrome/Default/*

The script is executed at boot, before starting Google Chrome. Sometimes, even if very rarely, I have noticed some malfunctioning of Google Chrome which could depend on the presence of cached data despite deleting the previous folders. Because of these malfunctions, I'm asking whether the previous commands are enough to delete all cached data, or whether I must perform other operations.

EDIT: It would also be useful for me to know the role of the two folders (in .cache and in .config) that I am deleting at boot.
After many power-off/power-on tests of my system, it is reasonable to conclude that a script which executes the commands:

rm -rf /home/myuser/.cache/google-chrome/*
rm -rf /home/myuser/.config/google-chrome/Default/*

is sufficient to clear all of Google Chrome's cached data.
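A hypothetical, slightly more defensive variant of that boot script: wrap the two rm commands in a function that refuses to wipe while a chrome process is still running, and take the home directory as a parameter (the paths under it match the ones from the question; the pgrep guard and the function name are assumptions, not part of the original script):

```shell
# Hypothetical guarded version of the boot-time cache wipe.
clear_chrome_cache() {
    base=$1
    # Don't delete the profile out from under a running browser.
    if pgrep -x chrome >/dev/null 2>&1; then
        echo "chrome is running; not clearing" >&2
        return 1
    fi
    rm -rf "$base/.cache/google-chrome/"*
    rm -rf "$base/.config/google-chrome/Default/"*
}

# at boot:  clear_chrome_cache /home/myuser
```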
How to clear all cache data of google-chrome by commands executed by a bash script?
I've got a program that will be generating 4 large binary files (400GB+ each) that I need to upload to AWS S3 as quickly as possible. I'd like to begin uploading before the files are completely written. I'm toying with a few approaches, and one that I feel may be viable is using split, but my implementation has a lot of room for improvement and I'm wondering if anyone has more appropriate techniques. By piping tail -f of the output file into split I can split the file successfully, but I need to kill the tail process once the file completes, and that just seems sub-optimal. This splits the file into 1MB chunks (small, for testing):

tail -f -n +0 file1.bin | split -d -a 5 -b 1M - file1.bin_split_

Can someone suggest a better solution for splitting a file in real time? I am looking for a command-line solution; my shell is Bash.
Thanks for the input folks; lots of good suggestions and questions have allowed me to come up with a workable solution for the handful of times I need to use this. To answer a bunch of questions though:

- it's for a migration to a different AWS account, including restoring the DB;
- I'm constrained by security and architecture and want to make the best use of the tools at my disposal rather than putting time and effort into changes that may have negligible benefits;
- I can do it the slow way, carrying out each step sequentially, but there are benefits to getting this done quicker;
- it's AWS, so I can spec/add whatever disks I decide;
- I don't want to stream it directly in case there is an interruption of some sort (i.e. I really need to put the data onto local EBS as a plan B);
- is rsync not relatively slow?;
- and finally, named pipes are an option, but then I'd need to rename the output files; I'm not convinced they would give me a huge benefit.

Anyway, the solution I've come up with is as follows:

source: backup from EBS volume1 --> EBS volume2; split 20GB chunks from EBS volume2 --> EBS volume3; upload chunks to S3 from EBS volume3
target: download chunks from S3 to stdout, appending on to target file(s) in EBS volume2; restore from EBS volume2 into EBS volume1

Code (apologies, this is hacked together, but it will be thrown away soon):

tails3.sh

#!/bin/bash
#tails3.sh
#takes two parameters: file to tail/split, and location to upload it to
#splits the file to /tmpbackup/ into 20GB chunks.
#waits while the chunks are still growing
#sends the final (less than 20GB) chunk on the basis that the tail -f has completed
#i.e. for splitting 3 files simultaneously:
#for file in <backup_filename>.00[1-3]; do bash -c "./tails3.sh $file s3://my-bucket/parts/ > /dev/null &"; done
# $1=filename
# $2=bucket/location

set -o pipefail

LOGFILE=$1.log

timestamp() { date +"%Y-%m-%d %H:%M:%S"; }

function log {
    printf "%s - %s\n" "$(timestamp)" "${*}"
    printf "%s - %s\n" "$(timestamp)" "${*}" >> "${LOGFILE}"
}

function closeoff {
    while kill -0 $tailpid 2>/dev/null; do
        kill $tailpid 2>/dev/null
        sleep 1
        log "slept waiting to kill"
    done
}

tail -f -n +0 $1 > >(split -d -a 5 -b 20G - /tmpbackup/$1_splitting_) &
tailpid=$!

inotifywait -e close_write $1 && trap : TERM && closeoff &

log "Starting looking for uploads in 5 seconds"
sleep 5

FINISHED=false
PARTSIZE=21474836480
FILEPREVSIZE=0

until $FINISHED; do
    FILETOTRANSFER=$(ls -1a /tmpbackup/${1}_splitting_* | head -n 1)
    RC=$?
    kill -0 $tailpid >/dev/null
    STILLRUNNING=$?
    log "RC: ${RC}; Still running: ${STILLRUNNING}"
    if [[ $RC > 0 ]]; then
        if [[ ${STILLRUNNING} == 0 ]]; then
            log "tail still running, will try again in 20 seconds"
            sleep 20
        else
            log "no more files found, tail finished, quitting"
            FINISHED=true
        fi
    else
        log "File to transfer: ${FILETOTRANSFER}, RC is ${RC}"
        FILEPART=${FILETOTRANSFER: -5}
        FILESIZE=$(stat --format=%s ${FILETOTRANSFER})
        log "on part ${FILEPART} with current size '${FILESIZE}', prev size '${FILEPREVSIZE}'"
        if [[ ${FILESIZE} == ${PARTSIZE} ]] || ([[ ${STILLRUNNING} > 0 ]] && [[ ${FILESIZE} == ${FILEPREVSIZE} ]]); then
            log "filesize: ${FILESIZE} == ${PARTSIZE}'; STILLRUNNING: ${STILLRUNNING}; prev size '${FILEPREVSIZE}'"
            log "Going to mv file ${FILETOTRANSFER} to _uploading_${FILEPART}"
            mv ${FILETOTRANSFER} /tmpbackup/${1}_uploading_${FILEPART}
            log "Going to upload /tmpbackup/${1}_uploading_${FILEPART}"
            aws s3 cp /tmpbackup/${1}_uploading_${FILEPART} ${2}${1}_uploaded_${FILEPART}
            mv /tmpbackup/${1}_uploading_${FILEPART} /tmpbackup/${1}_uploaded_${FILEPART}
            log "aws s3 upload finished"
        else
            log "Sleeping 30"
            sleep 30
        fi
        FILEPREVSIZE=${FILESIZE}
    fi
done

log "Finished"

and s3join.sh

#!/bin/bash
#s3join.sh
#takes two parameters: source filename, plus bucket location
#i.e. for a 3 part backup:
#for i in 001 002 003; do bash -c "./s3join.sh <backup_filename>.$i s3://my-bucket/parts/ > /dev/null &"; done
#once all files are downloaded into the original, delete the $FILENAME.downloading file to cleanly exit
#you can tell when the generated file matches the size of the original file from the source server
# $1 = target filename
# $2 = bucket/path

set -o pipefail

FILENAME=$1
BUCKET=$2
LOGFILE=${FILENAME}.log

timestamp() { date +"%Y-%m-%d %H:%M:%S"; }

function die {
    log ${*}
    exit 1
}

function log {
    printf "%s - %s\n" "$(timestamp)" "${*}"
    printf "%s - %s\n" "$(timestamp)" "${*}" >> "${LOGFILE}"
}

touch ${FILENAME}.downloading
i=0

while [ -f ${FILENAME}.downloading ]; do
    part=$(printf "%05d" $i)
    log "Looking for ${BUCKET}${FILENAME}_uploading_${part}"
    FILEDETAILS=$(aws s3 ls --summarize ${BUCKET}${FILENAME}_uploaded_${part})
    RC=$?
    if [[ ${RC} = 0 ]]; then
        log "RC was ${RC} so file is in s3; output was ${FILEDETAILS}; downloading"
        aws s3 cp ${BUCKET}${FILENAME}_uploaded_${part} - >> ${FILENAME}
        ((i=i+1))
    else
        log "could not find file, sleeping for 30 seconds. remove ${FILENAME}.downloading to quit"
        sleep 30
    fi
done

Using the above, I start my backup, then immediately trigger tails3.sh with the backup filenames that are being generated. This splits those files to a different volume. When the split parts get to 20GB (hardcoded) they start uploading to S3. This repeats until all files are uploaded and the tail -f of the backup file is terminated.

Shortly after this begins, on the target server I trigger s3join.sh using the backup filenames being generated on the source. This process then polls S3 periodically, downloading any 'parts' that it finds and appending them to the backup files. This keeps going until I tell it to stop (by deleting .downloading), as I was too lazy to set it to stop after downloading any file that isn't exactly 20GB...

And, for good measure, as soon as the first set of parts is appended into the target backup file, I can begin the DB restore, because restore is the slowest part of the process, backup the next slowest, and S3 upload/download is the fastest. I.e. backup at ~500MB/s; upload/download at up to ~700MB/s; restore at ~400MB/s. A test of the process today on a dev environment, which sequentially should have been (1hr backup + 20 mins upload + 20 mins download + 1hr restore = 2hr 40m), did the source-to-target restore in about 1hr 20m.

One final thing to note: I get the impression that there is some caching on tail -f for the aws s3 cp, as Read MB/s doesn't seem to be getting hit as hard as it should.
Split large file in realtime whilst it is still being written to
Is it possible in bash - or another sh-derived shell - to run in the foreground, from the command line, a list of commands that have their own variable scope (so any values assigned to variables in that scope are not known outside that scope), but also, if spawning a background command, have that background command still be a job under the parent shell, i.e. still under the command-line shell's job control? If there is more than one way to do this, which is the shortest in practice? I know that using parentheses creates a new subshell with its own scope, but any spawned background command will then no longer be under the shell's job control.
In zsh, you can use an anonymous function, but you still need to declare variables as local. For instance:

$ () { local t; for t in 100 200 300; do sleep $t & done; }
[2] 4186
[3] 4187
[4] 4188
$ jobs
[2]    running    sleep $t
[3]  - running    sleep $t
[4]  + running    sleep $t
$ typeset -p t
typeset: no such variable: t

With any shell with support for local scope in functions, you can use a normal function like:

run() { eval "$@"; }

Then:

run 'local t; for t in 100 200 300; do sleep "$t" & done'
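The same idea works with an ordinary named function in bash or any POSIX-ish shell with local: a function body runs in the current shell (unlike parentheses, which fork a subshell), so local gives the variable scoping while background jobs started inside it still land in the invoking shell's job table. A minimal sketch (sleep durations are just placeholders):

```shell
# A function body runs in the current shell, so "local" scopes t while
# the "&" jobs started inside it remain jobs of the invoking shell.
run_scoped() {
    local t
    for t in 2 3 4; do
        sleep "$t" &
    done
}

run_scoped
jobs                      # the three sleeps are listed under this shell
echo "t is: ${t-unset}"   # t did not leak out of the function
```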
Separate variable scope spawning commands under job control
I am working with an old Raspberry Pi that only has text mode. The distro is Raspberry Pi OS, based on Debian 11. By default, emoji characters print only as white diamond shapes in the tty; in fbterm they show up as a question mark in a diamond shape. I can get fbterm to display glyphs like Chinese characters by installing a proper font, e.g. "fonts-wqy-zenhei". However, this does not work for emoji, even after I install emoji fonts like "noto-color-emoji".
Emojis are graphical elements. Terminals like fbterm are character-based, and cannot display graphical elements. In terminal environments you're limited to smileys such as :-) or ;-) just like in ye olden days.
Display emoji in tty or fbterm?
I'm using exiv2 0.27.2. I want to print the tag values of multiple webp files, but without the filename being printed. With the following command:

exiv2 -g Exif.Image.Artist -Pv *.webp

I get the following output:

3q2NIGNI_o.webp  tomato
3qAwrJWu_o.webp  orange
3qDZg9vz_o.webp  cantelope

I just want the tag value output, without the filename, like so:

tomato
orange
cantelope
You either post-process the output with a tool like sed:

exiv2 -g Exif.Image.Artist -Pv ./*.webp | sed 's/.*\.webp[[:blank:]]*//'

or use a loop to pass a single file at a time:

for f in ./*.webp; do exiv2 -g Exif.Image.Artist -Pv "$f"; done

or use exiftool, e.g.:

exiftool -q -p '$Exif:Artist' ./*.webp
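The sed part can be exercised on its own with simulated exiv2 output (exiv2 itself isn't needed for this):

```shell
# Simulated 'exiv2 -Pv' output piped through the sed filter above;
# everything up to and including the .webp name and trailing blanks is stripped.
printf '3q2NIGNI_o.webp  tomato\n3qAwrJWu_o.webp  orange\n' |
  sed 's/.*\.webp[[:blank:]]*//'
# tomato
# orange
```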
Exiv2: How to print tag values without printing the corresponding filenames
1,287,184,988,000
How to chain commands, i.e. make the output of one command become the input of another?

basename | dirname /dev/null

Expected output:

dev

Actual output:

usage: basename string [suffix]
       basename [-a] [-s suffix] string [...]
/dev

Also tried the following but it also did not work:

dirname /dev/null | basename
It should be:

basename "$(dirname /dev/null)"

Though for arbitrary file paths where you can't guarantee path components won't start with -, you'd need to add some --s to mark the end of options:

basename -- "$(dirname -- "$file")"

$(...) can be used to run a command and collect its output to be used as a command-line parameter to another program.

Beware that since $(...) removes all the trailing newline characters from the output of commands, the above still doesn't work for arbitrary file paths; in the example above, those whose dirname ends in newline characters. Working around that, though possible, is painful enough that the problem is usually left ignored.

In the zsh shell, one can use csh-style modifiers to get the tail (basename) or head (dirname) of file paths, which don't have the issue:

tail_of_head=$file:h:t

Those modifiers are also available in vim. They are also available in bash, but only for history expansion, not parameter expansion like in csh or zsh.
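As a concrete illustration of the substitution at work:

```shell
# $(...) runs the inner command and pastes its output in as an argument.
path=/dev/null
dir=$(dirname "$path")       # -> /dev
base=$(basename "$dir")      # -> dev
echo "$base"
# dev
```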
How to chain commands
1,287,184,988,000
I have a few csv-s, each containing 3 columns separated by ",". Example:

header1,header2,header3
value1,value2,value3
value1,value2,value3
...

Using this tutorial, I thought if I execute

paste -d "," *csv > output.csv

I would end up with something like this:

header1,header2,header3,header1,header2,header3,...
value1,value2,value3,value1,value2,value3,...
value1,value2,value3,value1,value2,value3,...

but instead the output looks like this:

header1,header2,header3,
header1,header2,header3,
header1,header2,header3,
...
value1,value2,value3,
value1,value2,value3,
...

In particular, each line is 3 columns wide, instead of the number of csv files * 3 wide. What am I doing wrong?
Most likely, your original files have \r\n line endings. If that is so, the final file would have an extra \r between each line segment. Try using tr:

paste -d "," *csv | tr -d "\r" > output.csv
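A self-contained way to reproduce and fix the symptom (the file names here are made up):

```shell
# Two CRLF-terminated CSVs: without tr, a stray \r would land mid-line.
dir=$(mktemp -d)
printf 'a,b\r\nc,d\r\n' > "$dir/f1.csv"
printf 'e,f\r\ng,h\r\n' > "$dir/f2.csv"
paste -d "," "$dir"/f*.csv | tr -d '\r' > "$dir/output.csv"
cat "$dir/output.csv"
# a,b,e,f
# c,d,g,h
```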
paste command puts data from csv files vertically line by line instead of horizontally next to each other
1,287,184,988,000
When I type a long command on a command-line interface, something strange can happen in the layout. The characters I type don't wrap across lines correctly; instead, they merge into one line or overwrite each other, and the cursor isn't displayed in its right place. For example, I want to type:

/home/user/example/a/b/c>$ tar --create --file example.tar e
xample

But it shows:

xampleuser/example/a/b/c>$ tar --create --file example.tar e

As shown above, the second line overwrites the first line. This problem happens in Linux on different computers. I've met similar problems both in a tty and in GUI terminal emulators. It's only a problem in the display, because what I type is exactly what is entered, even if it is not what is shown. I use an American keyboard layout. The encoding and keymap settings are all the default ones, and the keyboard itself is fine.

More details:

font: terminus-132n (tty), terminus 24pt (GUI terminal emulator)
OS: Linux 5.18.15-arch1-1
$LANG: en_US.UTF-8
Try this... Enter this command:

export PS1="$PWD>"

Then try a long command and see if the behavior changes. If it does, there's most likely a problem with the PS1 definition in your profile. When customizing PS1, try not to get too fancy, and avoid special characters and control codes if at all possible.
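A frequent culprit worth checking alongside that test: color escape sequences in PS1 that are not marked as zero-width. This is a sketch of the general rule for bash, not taken from the asker's actual setup:

```shell
# Wrong: readline counts the escape bytes as printable characters,
# so long lines wrap too late and overwrite the prompt.
PS1='\e[32m\u@\h:\w\$ \e[0m'

# Right: \[ ... \] tells readline these sequences occupy no columns.
PS1='\[\e[32m\]\u@\h:\w\$ \[\e[0m\]'
```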
Chaotic Command-line Interface Layout
1,287,184,988,000
Sometimes I have output from a command line script that I would like to further process/filter with cli tools. I can't re-run the command because it takes a long time or will not produce the same output again. Currently I paste the output into a new file in the text editor, save it and then use cat on cli to pipe it into tr, sed and other tools. This is cumbersome. Is there a quicker way for such text processing tasks?
As suggested by @muru, I use a clipboard-to-shell tool now: xsel. When I find that CLI output needs to be mangled further, I select and copy it with Ctrl+Shift+C to the clipboard. Then I use xsel -b (= xsel --clipboard) to paste it as standard input to other tools:

$ xsel -b | grep foo | sed s/bar/baz/

This is a solution that works everywhere without needing setup.
Quickly filter text with cli commands
1,287,184,988,000
I have a Linux server. I expect there's a headless GUI that can be controlled from the CLI for my server. I know it is possible to display a GUI with XRDP, but I want to control it over SSH or from the CLI. It works fine when I'm using XRDP, and I have Openbox (a window manager) installed. I expect I can interact with the GUI from the CLI, or maybe there's a Python library that can handle it:

mouseclick(2,3)                       # mouse click at coordinate (2,3)
screenshot("./current_screen.png")    # save a screenshot of the current screen to the specified path

...and whatever other features such a library can handle. I found a similar library, pyautogui, but pyautogui only works if there's an existing GUI. I mean the Python script errors with "Display Not Found" if I run it from the CLI:

# t.py
import pyautogui
print(pyautogui.size())

It gave me this error:

root@server-kentang:~/py# python3 t.py
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/Xlib/support/unix_connect.py", line 76, in get_socket
    s.connect('/tmp/.X11-unix/X%d' % dno)
FileNotFoundError: [Errno 2] No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "t.py", line 1, in <module>
    import pyautogui
  File "/usr/local/lib/python3.8/dist-packages/pyautogui/__init__.py", line 249, in <module>
    import mouseinfo
  File "/usr/local/lib/python3.8/dist-packages/mouseinfo/__init__.py", line 223, in <module>
    _display = Display(os.environ['DISPLAY'])
  File "/usr/local/lib/python3.8/dist-packages/Xlib/display.py", line 80, in __init__
    self.display = _BaseDisplay(display)
  File "/usr/local/lib/python3.8/dist-packages/Xlib/display.py", line 62, in __init__
    display.Display.__init__(*(self, ) + args, **keys)
  File "/usr/local/lib/python3.8/dist-packages/Xlib/protocol/display.py", line 58, in __init__
    self.socket = connect.get_socket(name, host, displayno)
  File "/usr/local/lib/python3.8/dist-packages/Xlib/support/connect.py", line 76, in get_socket
    return mod.get_socket(dname, host, dno)
  File "/usr/local/lib/python3.8/dist-packages/Xlib/support/unix_connect.py", line 78, in get_socket
    raise error.DisplayConnectionError(dname, str(val))
Xlib.error.DisplayConnectionError: Can't connect to display ":0": [Errno 2] No such file or directory
It seems that the DISPLAY environment variable was not set correctly here. It should be set to something like this:

export DISPLAY=:0.0

Or more generally:

export DISPLAY=$HOSTNAME:$N.$W

where $HOSTNAME, $N and $W should match the existing situation. In the OP's case, HOSTNAME is empty (which means localhost), N is 0 (this can change with each execution instance of the X server) and W is 0 (which is mostly constant), to get a working configuration.
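Spelled out as a runnable sketch (the values here are illustrative):

```shell
# Build DISPLAY from its three parts: host, display number, screen number.
HOSTPART=""      # empty -> the local machine
N=0              # display number; changes per running X server instance
W=0              # screen number; usually 0
export DISPLAY="$HOSTPART:$N.$W"
echo "$DISPLAY"
# :0.0
```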
Interacting with GUI from CLI (headless GUI)
1,287,184,988,000
I would like to know if there is any difference in the final output between the various command-line tools for encoding FLAC files, like ffmpeg, sox, the “official” flac etc. In some contexts, I have noticed that it's recommended to use flac over the others, but given that FLAC represents lossless encoding, am I correct in assuming that they should all produce identical output (given the same options)?
The FLAC encoder has a ton of parameters, so you'll need to consult the source code of ffmpeg/sox to see how they use the codec. But despite all of this, does it really matter? FLAC is a lossless codec, so even if flac, ffmpeg and sox produce different FLAC files, they will all decode bit-perfectly to the same audio. FFmpeg will produce a different output (header), as it adds itself to the tags unless instructed otherwise.
FLAC encoders – any difference in output between the command-line tools?
1,287,184,988,000
I use Z shell on a Mac and have some settings in both .zshrc and .profile in my home directory. I found that when I only have .profile, zsh imports .profile in a new session; when I have both .profile and .zshrc, zsh only imports .zshrc and ignores .profile. Is there any way to make the shell respect both settings files, or import .profile from .zshrc?
Zsh typically sources .zprofile rather than .profile at login. (It will not source .zshrc if the login does not go to an interactive session.) However, if you want it to source .profile, and there's nothing specific to another shell in there (like bash-specific stuff), you can always put source .profile in the .zprofile file.

.zshrc is sourced when an interactive shell is invoked, whether it's a login shell or not. It doesn't source .zprofile again if you're not logging in, however. I would just put everything you want always to be sourced into one of the two files, and then have the other one source that one. For example, I keep all my aliases and environment variables in my .zshrc and have the line source $HOME/.zshrc in my .zprofile, so it always gets read, even in non-interactive login shells. Just be sure not to have the two files source each other, which is tempting, as that obviously creates an infinite cycle. You can also use a third file for the stuff you want both to do and have them both source that third file.
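A minimal sketch of that layout, assuming ~/.profile contains only POSIX-sh-compatible settings:

```shell
# ~/.zprofile -- read by zsh at login; pull in the other files here.
# Do NOT also source ~/.zprofile from ~/.zshrc, or you create a cycle.
[ -f ~/.profile ] && source ~/.profile
[ -f ~/.zshrc ]   && source ~/.zshrc
```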
How to make Z shell respect both ~/.zshrc and ~/.profile simultaneously?
1,287,184,988,000
I have one big folder that has lots of .txt files. I am trying to group these .txt files into separate subfolders based on specific rules from a rules.csv file (which says what subfolder they belong to):

LARGE FOLDER:
file1.txt
file2.txt
...
file100.txt

The rules would be:

file1.txt file3.txt file8.txt belong to "subfolder1"
file2.txt file4.txt file23.txt belong to "subfolder2"
etc

Here's the list of rules in a CSV (rules.csv): the first column is the filename and the second column is the subfolder I want to move it to.

file1.txt subfolder1
file3.txt subfolder1
file8.txt subfolder1
file2.txt subfolder2
file4.txt subfolder2
file23.txt subfolder2
file5.txt subfolder3
file6.txt subfolder3
file9.txt subfolder3
file11.txt subfolder3
file16.txt subfolder3
file12.txt subfolder4
file13.txt subfolder4
file14.txt subfolder4
file19.txt subfolder4
file24.txt subfolder4
file28.txt subfolder4
file30.txt subfolder4
file78.txt subfolder5
file37.txt subfolder5
file49.txt subfolder5
file88.txt subfolder5

That's what I am trying to achieve. Would there be a way to move these .txt files into their respective subfolders with a terminal command like mv, reading the rules from the CSV file mentioned above? (Not sure if that's even possible.)

I tried something like this:

mv file1.txt,file3.txt,file8.txt* /subfolder1

but it seems counterproductive to do it manually for each file without the rules :(
Assuming your file names contain no whitespace, as shown in your question, you could use a simple shell loop to move the files into their corresponding directories in the second column:

while IFS=' ' read -r fileName dirName; do
  mkdir -p "./$dirName" && mv "./$fileName" "./$dirName"
done <rules.txt

If your rules.csv is really a .csv file (comma-delimited), then change IFS=' ' to IFS=',' above (and remember that the file names and directory names must not contain a comma character either).
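A self-contained run of the loop on throwaway files:

```shell
# Create two files and a space-delimited rules file, then apply the loop.
dir=$(mktemp -d) && cd "$dir" || exit 1
printf 'x\n' > file1.txt
printf 'y\n' > file2.txt
printf 'file1.txt subfolder1\nfile2.txt subfolder2\n' > rules.txt
while IFS=' ' read -r fileName dirName; do
  mkdir -p "./$dirName" && mv "./$fileName" "./$dirName"
done < rules.txt
ls subfolder1 subfolder2
```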
Move files in subfolders based on the specific rules from a file
1,287,184,988,000
I have the following files, whose names start with a digit:

$ echo [0-9]*
1001tracklistsIcon.svelte 1passwordIcon.svelte 3mIcon.svelte 42Icon.svelte 4chanIcon.svelte 4dIcon.svelte 500pxIcon.svelte

And I'd like to rm them. I tried this, but it is not working:

$ find . -type f -name [0-9]* -exec rm {} \;
find: 1passwordIcon.svelte: unknown primary or operator

How can I do it?
You need to wrap your pattern in "":

$ find . -type f -name "[0-9]*" -exec rm {} \;

Otherwise your shell will replace it with the matching file names before running find.
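A quick demonstration of the quoted form on throwaway files:

```shell
dir=$(mktemp -d) && cd "$dir" || exit 1
touch 1a.svelte 42b.svelte keep.svelte
# Unquoted, the shell would expand [0-9]* to the matching file names first,
# and find would see them as extra (invalid) arguments. Quoted, find itself
# matches the pattern against each file name:
find . -type f -name "[0-9]*" -exec rm {} \;
ls
# keep.svelte
```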
How to rm files starting with a digit
1,287,184,988,000
I have this bash script:

#!/bin/bash
OriginFilePath="/home/lv2eof/.config/google-chrome/Profile 1/"
OriginFileName="Bookmarks"
OriginFilePathAndName="$OriginFilePath""$OriginFileName"
DestinationFilePath="/home/Config/Browser/Bookmarks/ScriptSaved/Chrome/Profile 1/"
DestinationFileName=$(date +%Y%m%d-%H%M%S-Bookmarks)
DestinationFilePathAndName="$DestinationFilePath""$DestinationFileName"
echo cp \"$OriginFilePathAndName\" \"$DestinationFilePathAndName\"
cp \"$OriginFilePathAndName\" \"$DestinationFilePathAndName\"

When I execute it from the command line I get this output:

[~/] lv2eof@PERU $$$ csbp1
cp "/home/lv2eof/.config/google-chrome/Profile 1/Bookmarks" "/home/Config/Browser/Bookmarks/ScriptSaved/Chrome/Profile 1/20211207-001444-Bookmarks"
cp: target '1/20211207-001444-Bookmarks"' is not a directory
[~/] lv2eof@PERU $$$

So I get an error and the file isn't copied. Nevertheless, if I issue the command on the command line directly:

[~/] lv2eof@PERU $$$ cp "/home/lv2eof/.config/google-chrome/Profile 1/Bookmarks" "/home/Config/Browser/Bookmarks/ScriptSaved/Chrome/Profile 1/20211207-001444-Bookmarks"
[~/] lv2eof@PERU $$$

As you can see, everything works fine and the file is copied. Shouldn't commands work the same inside and outside bash scripts? What am I doing wrong?
It is perhaps hard to notice, but the message gives you two hints:

cp: target '1/20211207-001444-Bookmarks"' is not a directory
            |                           |
            |                           +-- Notice the quote
            +-- Space in target

In other words, 1/20211207-001444-Bookmarks" is not a directory. So why is it saying that? In your script you have:

cp \"$OriginFilePathAndName\" \"$DestinationFilePathAndName\"

By escaping the quotes, you are saying the quotes are part of the arguments - that is, treat the quotes as literal text. They are concatenated with the value of the variable. It should be:

cp "$OriginFilePathAndName" "$DestinationFilePathAndName"

In short: you quote the variable to tell bash it should be treated as one argument. From your question, the actual arguments to cp become 4, not 2:

"/home/lv2eof/.config/google-chrome/Profile
1/Bookmarks"
"/home/Config/Browser/Bookmarks/ScriptSaved/Chrome/Profile
1/20211207-001444-Bookmarks"

In other words: copy 1, 2 and 3 into 4.
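The word-splitting can be seen directly with printf (a sketch with a shortened path):

```shell
# printf '<%s>\n' prints each argument it receives on its own line.
name="Profile 1/Bookmarks"
printf '<%s>\n' \"$name\"      # escaped quotes: the value splits into 2 words
# <"Profile>
# <1/Bookmarks">
printf '<%s>\n' "$name"        # proper quoting: one argument
# <Profile 1/Bookmarks>
```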
Linux "cp" command inside bash script
1,287,184,988,000
Objective: to find the IP addresses of HTTP servers in a pcap file with a specific header string. Can or should the -l option (to flush output) be used? One way: the following was done, but I am wondering if it can be shortened. If this question is too broad, please advise.

tshark -r file.pcap -T fields -e ip.src -e http.server > name.txt && cat name.txt | sort | uniq -c | sort -nr | grep "xxx_xxx"
If you want a count of the src IP addresses in the frames that also contain an HTTP response with a Server header containing xxx_xxx, you could do:

tshark -r file.pcap -T fields -e ip.src 'http.server contains "xxx_xxx"' | sort | uniq -c | sort -nr

See the doc for the syntax of Wireshark display filters. Some of tshark's own analysis reports (with -z) might also be useful to you, like:

tshark -r file.pcap -z http_srv,tree -2R 'http.server contains "xxx_xxx"'
tshark -r file.pcap -z hosts,ip -2R 'http.server contains "xxx_xxx"'
tshark -r file.pcap -z conv,ip -2R 'http.server contains "xxx_xxx"'
TShark pcap filter command possibly simplified?
1,287,184,988,000
I have a laptop with Artix Linux on it that I'm using as a web server. I want to keep it minimalist, without a graphical environment and with only the absolutely necessary software. My problem is that I still don't know how to turn the display off/on (to save energy) when I'm not interacting with it (which I do very rarely). I am aware of these posts:

Turn off monitor using command line
How to turn off the monitor under TTY

But they either talk about solutions that work for a graphical environment, or they use some additional software (vbetool) that I'm not even able to install. It would also be very cool if I could turn the display off/on through SSH.
Never mind, I found a page in the ArchWiki that explains everything. No additional software is needed. All I have to do is change the value in /sys/class/backlight/intel_backlight/brightness to 0 to turn the display off. To turn it back on, I can use any value greater than 0. The maximum value can be found in /sys/class/backlight/intel_backlight/max_brightness. Note that the intel_backlight part is hardware-dependent; it might be something else, like acpi_video0, on a different machine.
Turn off/on laptop display from TTY without additional software
1,287,184,988,000
I have an example to better illustrate what I'm talking about:

$ touch tas
$ ln -s /etc/leviathan_pass/leviathan3 /tmp/l2/tas
ln: failed to create symbolic link '/tmp/l2/tas': File exists

Basically, I can only create the symlink if the file at the link's location doesn't already exist. I understand this issue when talking about hard links - there's no way of linking two different files, as it would lead to an inode conflict (so the file must be created at the time the command runs to ensure, I'm presuming, that they both "point" to the same inode). But when talking about soft links it doesn't make sense to me: symlinks have nothing to do with inodes, so what could be the problem? Thanks in advance for any help.
The ln command won't clobber existing files by default. You can use

ln -sf TARGET LINK_NAME

to force overwriting the destination path (LINK_NAME) with a symlink, and

ln -f TARGET LINK_NAME

to overwrite LINK_NAME with a hard link. As for your explanation: there is no inode conflict involved; ln simply refuses to replace an existing file by default, and with -f it just replaces it. You are partially right, though, in that the target has to exist first for hard links.
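To see both behaviours side by side (throwaway files in a temp directory):

```shell
dir=$(mktemp -d) && cd "$dir" || exit 1
touch tas target
ln -s target tas 2>/dev/null || echo "refused: tas already exists"
ln -sf target tas              # -f: unlink tas first, then create the link
readlink tas
# target
```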
Why can't I symlink a preexisting file to a target file? [duplicate]
1,287,184,988,000
With the screen command, the -X option allows you to execute a command in the specified screen session, but when you try to use it when creating a new screen, e.g.:

screen -dmS -S downloader -X "wget https://google.com"

you get the error No screen session found.. So it's clear that the -X option only works for pre-existing screen sessions. Is it possible to specify a command to be run on the creation of a new screen? If it's not possible in screen, is it possible in another multiplexer like tmux?
I found my answer; although it was under a different title and the question was slightly different, this does the job:

screen -d -S downloader -m wget https://google.com

It creates a new screen called downloader, detaches it and runs the command.
Execute command when creating new screen session
1,287,184,988,000
I'm using stat like this:

stat -f "%Sp %p %l %Su %u %Sg %g %z %a %N %Y" /*

I also need to tell whether a file is hidden or not (macOS). The . notation is not enough; macOS hides more files. For example, this is what I need:

ls -lO
total 9
drwxrwxr-x  32 root admin sunlnk             1024 Jun  4 22:00 Applications
drwxr-xr-x  66 root wheel sunlnk             2112 Feb 18 23:23 Library
drwxr-xr-x@  9 root wheel restricted          288 Jan  1  2020 System
drwxr-xr-x   7 root admin sunlnk              224 May 18 08:12 Users
drwxr-xr-x   4 root wheel hidden              128 Jun  7 12:49 Volumes
drwxr-xr-x@ 38 root wheel restricted,hidden  1216 Jan  1  2020 bin
drwxr-xr-x   2 root wheel hidden               64 Jun  6  2020 cores
dr-xr-xr-x   3 root wheel hidden             4602 Jun  1 14:24 dev
lrwxr-xr-x@  1 root wheel restricted,hidden    11 Jan  1  2020 etc -> private/etc

I need to run it as one command for the sake of processing speed. My goal is everything from my stat call above plus the 5th column of the ls output. Any hints?

I've noticed that %T prints @ for hidden items. It could, however, show it for other reasons too. Can this be used or not? If no stat solution is found, is there a way to merge stat results with the extra ls -lO column on a command line?
If macOS stat is like FreeBSD's, the flags can be expressed in the format specification with %f for the numeric form or %Sf for the decoded text form, like in ls -lo. See man stat, man chflags and man ls on your system for details.
Can stat show if file is hidden?
1,287,184,988,000
I have the below curl command, which works fine and I get the response back. I am posting JSON data to an endpoint which gives me a response back after hitting it.

curl -v 'url' \
  -H 'Accept-Encoding: gzip, deflate, br' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Connection: keep-alive' \
  -H 'DNT: 1' \
  -H 'Origin: url' \
  --data-binary '{"query":"\n{\n  data(clientId: 1234, filters: [{key: \"o\", value: 100}], key: \"world\") {\n    title\n    type\n    pottery {\n      text\n      pid\n      href\n      count\n      resource\n    }\n  }\n}"}' \
  --compressed

Now I am trying to read the binary data from a temp.txt file outside, but somehow it doesn't work and I get an error:

curl -v 'url' \
  -H 'Accept-Encoding: gzip, deflate, br' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Connection: keep-alive' \
  -H 'DNT: 1' \
  -H 'Origin: url' \
  --data-binary "@/Users/david/Downloads/temp.txt" \
  --compressed

Below is the content I have in my temp.txt file.

Original "temp.txt" file:

{
  data(clientId: 1234, filters: [{key: "o", value: 100}], key: "world") {
    title
    type
    pottery {
      text
      pid
      href
      count
      resource
    }
  }
}

This is the error I am getting:

.......
* upload completely sent off: 211 out of 211 bytes
< HTTP/1.1 500 Internal Server Error
< date: Fri, 28 May 2021 23:38:12 GMT
< server: envoy
< content-length: 0
< x-envoy-upstream-service-time: 1
<
* Connection #0 to host url left intact
* Closing connection 0

Is there anything wrong I am doing? Also, if I copy into the temp.txt file the exact same content that I have in my original curl command, with the \n in it, then it works fine.

Updated "temp.txt" file:

{"query":"\n{\n data(clientId: 1234, filters: [{key: \"o\", value: 100}], key: \"world\") {\n title\n type\n pottery {\n text\n pid\n href\n count\n resource\n }\n }\n}"}

Meaning, if I keep the content like this in the temp.txt file, then it works fine from my second curl. So it seems I need to find a way to convert the newlines to \n manually in the temp.txt file before sending the curl request - or is there another way?
Your data payload is a JSON document containing a query key. The value of that key is a JSON-encoded document, possibly describing some form of query, which is not in itself a JSON document. Newlines are encoded as \n in JSON values, and the JSON parser that the server is using would translate these into literal newlines when it receives your request.

Your attempt to put the decoded query value in a separate file and pass that in your curl call fails, because the API you are talking to expects the data to be a JSON document with a JSON-encoded value for the query key.

The correct thing to do to offload the query into a separate file is to do exactly what you did in your last example. Put the JSON document with the encoded query in a file and reference it using --data-binary @filename on the curl command line:

curl \
  --header 'Content-Type: application/json' \
  --data-binary '@/Users/david/Downloads/temp.txt' "$API_ENDPOINT"
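If you'd rather keep the raw, multi-line query in a file and generate the JSON wrapper on the fly, any JSON encoder can do the \n escaping for you. A sketch using python3 (the file names and the shortened query are made up for illustration):

```shell
# Wrap the raw query file in {"query": "..."} with proper \n escaping.
dir=$(mktemp -d)
printf '{\n  data(clientId: 1234) {\n    title\n  }\n}\n' > "$dir/query.txt"
python3 -c 'import json, sys; print(json.dumps({"query": sys.stdin.read()}))' \
  < "$dir/query.txt" > "$dir/payload.json"
cat "$dir/payload.json"
```

The resulting payload.json can then be passed to curl with --data-binary "@$dir/payload.json".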
How to send curl request with post data imported from a file
1,287,184,988,000
I'm running a bash script that outputs logs under a log directory. Initially I want to know how many logs have been generated, so I simply do the following:

ls logs | wc -l

And as the bash script is running in real time, I can further use:

watch -n 5 'ls logs | wc -l'

to display the number of log files continuously. But I also know the total number of log files that should be generated, by checking how many lines are in a file called file.txt:

cat file.txt | wc -l

Now I want to display how many more logs need to be generated by subtracting the two numbers. I searched: there are several ways to do math in the shell, e.g. using expr or double parentheses. For example, assuming the total number of lines in file.txt is 1000, if I hard-code this into the command, the final command would be like:

watch -n 5 'ls logs | wc -l | xargs expr 1000 - '

But I want to substitute 1000 with the result coming from the cat file.txt | wc -l command. I tried the following, but it is not working. I'm not sure if xargs can accept two arguments from 2 commands, or if this is not how we use it here.

watch -n 5 'ls logs | wc -l | xargs expr "cat file.txt | wc -l" + '

(Note: what if I only want to count the number of lines in file.txt that do not begin with a # character and are not empty? How do I do that here?)

Much appreciated.
I'm not sure if xargs can accept two arguments from 2 commands. It can run a command with multiple arguments spread across multiple lines from input. For example, this will tell xargs to run expr with three arguments taken from input (which are, in order, the number of logs, -, and the number of lines in file.txt): (ls logs | wc -l; printf "%s\n" -; wc -l < file.txt) | xargs -n 3 expr It might be more idiomatic to use command substitution here: expr "$(ls logs | wc -l)" - "$(wc -l < file.txt)" [W]hat if I only want to count number of lines in file.txt that not begin with a # letter and not an empty line? Count with grep: grep -c -v -e '^#' -e '^$' file.txt -c to count matching lines -v to invert the match, so lines not matching the following are selected -e '^# to match lines beginning (^) with a # -e '^$ to match lines which end ($) immediately after they begin (^),, i.e., they are empty.
how to use one command line to calculate substration of two intergers as results from two other commands
1,619,895,044,000
The time command is very useful for checking how much time it takes for a piece of code that I develop to run... However, I'd like to have a way to check the memory consumption of the codes that I develop too, regardless of the language that I use. So, if it's bash, python, or node.js... I'd like to have a way of checking how much RAM memory I used on the code, just so I can get more aware of what I'm doing avoiding memory duplication and stuff like that. Is there any command line that I can use for checking the amount of memory that a script that I create consumes?
On many Unix-like systems, yes: GNU's implementation of /usr/bin/time (with the path, to avoid the similar shell built-in) will tell you how much memory a given program execution used; for example:

$ /usr/bin/time ls
...
0.00user 0.00system 0:00.00elapsed 50%CPU (0avgtext+0avgdata 2208maxresident)k
0inputs+0outputs (0major+139minor)pagefaults 0swaps

shows that ls used at most 2208K of RAM. Other tools such as Valgrind will show more information, specifically concerning heap usage:

$ valgrind ls
...
==10107==
==10107== HEAP SUMMARY:
==10107==     in use at exit: 32,928 bytes in 83 blocks
==10107==   total heap usage: 506 allocs, 423 frees, 97,271 bytes allocated
==10107==
==10107== LEAK SUMMARY:
==10107==    definitely lost: 0 bytes in 0 blocks
==10107==    indirectly lost: 0 bytes in 0 blocks
==10107==      possibly lost: 0 bytes in 0 blocks
==10107==    still reachable: 32,928 bytes in 83 blocks
==10107==         suppressed: 0 bytes in 0 blocks
==10107== Rerun with --leak-check=full to see details of leaked memory
==10107==
==10107== For counts of detected and suppressed errors, rerun with: -v
==10107== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Is there any way of checking the max memory consumption of a code that I run with the command line? [duplicate]
1,619,895,044,000
I'd like to use keys from a second keyboard to create custom shortcuts that are useful to me. For example, having to write a lot of math formulas, I'd like to press a key from the second keyboard, let's say the a key, for writing "\nabla" in a LaTeX editor. At the moment, I'm following this answer, that exploits the actkbd daemon - http://users.softlab.ntua.gr/~thkala/projects/actkbd/. The answer suggests to use actkbd for copying the character(s) I'm interested in to the clipboard, and this solution works nicely. (So far I've just tried with the keyboard of my laptop, since I do not have a second keyboard yet, but I assume I can use actkbd only on the second keyboard just by changing the input device in the actkbd command, as described in the answer). Unfortunately, there is one drawback: the key that I'm binding is still doing its old job (so after pressing a, the "a" character will be written in the editor), while I'd like to completely overwrite the action of that key. Therefore, at the moment I could only use those keys that do not affect the writing flow (for example, ctrl, shift, alt, and so on). Do you have any suggestions on how to adjust this solution in order to "overwrite" the action of those keys? OS: Ubuntu 20.04 LTS
For this I would use kmonad, although it is probably overkill. It allows you to redefine every key differently on every connected keyboard. These keys can be traditional keyboard macros, or can act as new modifiers. This allows you to define new "layers" so you can type, say, Greek or maths. For the specific case of nabla, you can either have the system type \ n a b l a, or have it press Ctrl+Shift+U 2 2 0 7 Space to enter the Unicode character directly, depending on your OS.
Remap keys from a second keyboard in Ubuntu - towards the optimal solution
1,619,895,044,000
My script takes an arbitrary number of arguments and also a few options. I need to extract any options plus the first argument into one string, and any remaining arguments into a second string, i.e.:

./script.sh foo FILE1                        [s1="foo" s2="FILE1"]
./script.sh foo FILE1 FILE2 FILE3            [s1="foo" s2="FILE1 FILE2 FILE3"]
./script.sh -i -l foo FILE1                  [s1="-i -l foo" s2="FILE1"]
./script.sh -i -l foo FILE1 FILE2 FILE3      [s1="-i -l foo" s2="FILE1 FILE2 FILE3"]

I just need to split $@ into these two strings. I don't need to process the arguments, i.e. with getopt. What is the easiest way to do this?

EDIT: extracting into arrays rather than strings is fine.
set -o extendedglob first_set=("$@[1,(i)^-*]") first=${(j[ ])first_set} would store in $first the concatenation of all arguments up to the first that doesn't start with - with one space character in between. Then for $second, you can get the rest: second_set=("$@[$#first_set + 1, -1]") second=${(j[ ])second_set} In any case, note that $@ is not a string, it's a list of 0 or more strings. For instance, if you invoke your script from a Bourne like shell using a command line such as: script.sh -i -l 'foo bar' 'File 1' File\ 2 "File 3" That executes your script as: execve("/path/to/script.sh", ["script.sh", "-i", "-l", "foo bar", "File 1", "File 2", "File 3"], environ) Which becomes (assuming the script starts with #! /bin/zsh -): execve("/bin/zsh", ["/bin/zsh" /* or "script.sh" depending on the system*/, "-", "/path/to/script.sh", "-i", "-l", "foo bar", "File 1", "File 2", "File 3"], environ) And in your script, $@ will contain all those strings in the argv[] argument to execve() that come after /path/to/script.sh. Above we split that list into two sets ($first_set and $second_set array variables), and then join the arguments in those sets into two scalar variable ($first and $second). But after that joining is done, you can no longer get back to the original list of arguments. For instance, $second in that case will contain File 1 File 2 File 3 and there's no way to tell which of those space characters are the ones that delimit arguments and which ones were part of the arguments.
zsh: extract command line arguments into two strings
1,619,895,044,000
I have two tab-delimited files where I need to match text in the first column of file 1 to any position in lines of file 2. Upon a match I then want to print what's in the second column of the matching line of file 1 at the end of the matching line in file 2 (example below). I know this can almost certainly be done with awk, but I'm not very competent with awk or sed, and searching related questions on here and trying to adapt their scripts hasn't worked out for me. Any input would be much appreciated.

File 1

protein_1.p1 note "PJD5F7, match to databaseID=64575, (species X)";
protein_1.p2 note "PJD5F7, match to databaseID=64575, (species X)";
protein_3.p1 note "PA5F9H, match to databaseID=93689, (species W)";
protein_4.p1 note "Q7GT5J, match to databaseID=89045, (species Y)";
protein_4.p3 note "YE6G3L, match to databaseID=44968, (species Z)";

File 2

chromosome_1 programID transcript_id "protein_1.p1"; parent "protein_1";
chromosome_1 programID transcript_id "protein_1.p2"; parent "protein_1";
chromosome_1 programID transcript_id "protein_2.p1"; parent "protein_2";
chromosome_1 programID transcript_id "protein_2.p2"; parent "protein_2";
chromosome_1 programID transcript_id "protein_3.p1"; parent "protein_3";
chromosome_1 programID transcript_id "protein_4.p1"; parent "protein_4";
chromosome_1 programID transcript_id "protein_4.p2"; parent "protein_4";
chromosome_1 programID transcript_id "protein_4.p3"; parent "protein_4";

Desired output

chromosome_1 programID transcript_id "protein_1.p1"; parent "protein_1"; note "PJD5F7, match to databaseID=64575, (species X)";
chromosome_1 programID transcript_id "protein_1.p2"; parent "protein_1"; note "PJD5F7, match to databaseID=64575, (species X)";
chromosome_1 programID transcript_id "protein_2.p1"; parent "protein_2";
chromosome_1 programID transcript_id "protein_2.p2"; parent "protein_2";
chromosome_1 programID transcript_id "protein_3.p1"; parent "protein_3"; note "PA5F9H, match to databaseID=93689, (species W)";
chromosome_1 programID transcript_id "protein_4.p1"; parent "protein_4"; note "Q7GT5J, match to databaseID=89045, (species Y)";
chromosome_1 programID transcript_id "protein_4.p2"; parent "protein_4";
chromosome_1 programID transcript_id "protein_4.p3"; parent "protein_4"; note "YE6G3L, match to databaseID=44968, (species Z)";
We could parse file1, map values ($2) to keys ($1), then parse file2 and append the value to the line when a part of the line ($3) matches any key. BEGIN {OFS = FS = "\t"} FNR == NR {arr[$1] = $2; next} {for (x in arr) if ($3 ~ x) {$0 = $0 " " arr[x]; break}} {print} This prints correct results for your example but it is not what you want, for several reasons. The first is that it could fail for various cases, like protein_1.p1 and protein_1.p11. The second is performance: the time for every line of file2 is not constant but proportional to the size of file1. So we have to modify the above script. You probably want to define a regex for the protein string to match. This way, the matching becomes strict enough and also, at the second parse, time depends on matching a regex on a field, not on the array size. BEGIN {OFS = FS = "\t"; re = "\\<protein_[[:digit:]]+\\.p[[:digit:]]+\\>"} FNR == NR {if ($1 ~ re) arr[$1] = $2; next} match($3, re) {$0 = $0 " " arr[substr($3,RSTART,RLENGTH)]} {print} Notes: re: "protein_" followed by one or more digits, ".p" and again one or more digits, all inside word boundaries. The dot is escaped so it matches literally. Word characters are [:alnum:] and _, so the rest are boundaries (note that the \< and \> word boundaries are a GNU awk extension). There is also a sanity check on the 1st field of file1. If match() succeeds, the built-in variables RSTART and RLENGTH hold the index and the length of the matched string, and this substring is what we use as the key into the array. 
Usage: > awk -f tst.awk file1 file2 chromosome_1 programID transcript_id "protein_1.p1"; parent "protein_1"; note "PJD5F7, match to databaseID=64575, (species X)"; chromosome_1 programID transcript_id "protein_1.p2"; parent "protein_1"; note "PJD5F7, match to databaseID=64575, (species X)"; chromosome_1 programID transcript_id "protein_2.p1"; parent "protein_2"; chromosome_1 programID transcript_id "protein_2.p2"; parent "protein_2"; chromosome_1 programID transcript_id "protein_3.p1"; parent "protein_3"; note "PA5F9H, match to databaseID=93689, (species W)"; chromosome_1 programID transcript_id "protein_4.p1"; parent "protein_4"; note "Q7GT5J, match to databaseID=89045, (species Y)"; chromosome_1 programID transcript_id "protein_4.p2"; parent "protein_4"; chromosome_1 programID transcript_id "protein_4.p3"; parent "protein_4"; note "YE6G3L, match to databaseID=44968, (species Z)";
Map field of first file based on patten matching in second file
1,619,895,044,000
Consider a directory containing sub directories sub1, sub2, sub3 etc. Now consider the case that I am in sub3 and want to switch to sub4, I do something like cd ../sub4. However I want something like next to switch to the "next" directory and prev to switch to the previous one, where the order should be alphanumerical (and optionally by mtime). Maybe this could be also be bound to a convenient keyboard shortcut. For example using next when you are in sub3 brings you to sub4 etc. Is there any build in functionality of zsh or any tool to get something like this out ouf the box?
You could define cdprev and cdnext functions like: cdnext cdprev() { local dirs i dirs=(${PWD%/*}/*(nN-/)) if (($#dirs <= 1)); then print -ru2 No other dir in $PWD:h return 1 fi i=$dirs[(Ie)$PWD] if [[ $0 = cdnext ]]; then ((i++)) else ((i--)) fi ((i <= $#dirs)) || i=1 ((i >= 1 )) || i=-1 print -ru2 $0: ${(D)dirs[i]} cd $dirs[i] }
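For shells without zsh's glob qualifiers, the same idea can be approximated with a plain loop over the sorted sibling directories. This is a hypothetical POSIX-ish sketch, not part of the original answer, and it leaves out the optional mtime ordering; next_dir and the sub* names are invented for the demo.

```shell
# Print the alphabetically next sibling of the current directory,
# or fail if the current directory is the last one. The body uses
# ( ) so that `cd`/`exit` inside it stay in a subshell.
next_dir() (
    cur=$(pwd)
    prev=""
    for d in "${cur%/*}"/*/; do
        d=${d%/}
        # when the previous entry was our own directory, this one is "next"
        [ "$prev" = "$cur" ] && { printf '%s\n' "$d"; exit 0; }
        prev=$d
    done
    exit 1
)

# demo: three siblings, starting from sub2
tmp=$(mktemp -d)
mkdir "$tmp/sub1" "$tmp/sub2" "$tmp/sub3"
cd "$tmp/sub2" && next_dir
```

In practice you would wire it up as `cd "$(next_dir)"`, and a mirror-image prev_dir is the same loop remembering the previous entry instead.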
Fast way to switch to next/previous directory in command line
1,619,895,044,000
This sequence of commands works OK: pngtopnm file.png 2> /dev/null > dump1 pnmfile < dump1 stdin: PPM raw, 1920 by 1080 maxval 255 ls -l dump1 -rw-r----- 1 cmb 6220817 Sep 15 14:26 dump1 But redoing the pipeline to use 'tee' truncates the output in the dump file: pngtopnm file.png 2> /dev/null | tee dump2 | pnmfile stdin: PPM raw, 1920 by 1080 maxval 255 ls -l dump2 -rw-r----- 1 cmb 49152 Sep 15 14:34 dump2 I'm not clear on what difference it makes where 'tee' is sending stdin to what gets saved in the dump file - why is 'dump2' truncated, and not identical to 'dump1'? cmp dump[12] cmp: EOF on dump2 after byte 49152, in line 4 I suspect its something to do with 'pnmfile', since putting something else at the end of the pipeline seems to work OK - 'dump3' is the right size/same content as dump1, and the end of the pipe ('fmt') is doing something to the file...: pngtopnm file.png 2> /dev/null | tee dump3 |fmt -10 > dump4 ls -l dump[34] -rw-r----- 1 cmb 6220817 Sep 15 14:41 dump3 -rw-r----- 1 cmb 6224311 Sep 15 14:41 dump4 (XUbuntu 20.04, diffutils 3.7, Netpbm 10.0, coreutils 8.30)
pngtopnm file.png 2> /dev/null | tee dump2 | pnmfile pnmfile only reads until it has enough information to print the file description, then exits, closing its end of the pipe. The next time tee writes to that pipe it receives SIGPIPE and is killed, so dump2 stops at whatever had been written by then. Try tee -p dump2: with GNU tee, -p makes write errors on pipes non-fatal, so tee keeps writing dump2 to completion.
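The effect is easy to reproduce without netpbm. In this sketch, head stands in for pnmfile as a reader that exits early: plain tee dies on SIGPIPE and leaves a short file, while GNU tee -p writes the file to completion. All file names and sizes here are invented for the demo.

```shell
workdir=$(mktemp -d)
seq 1 200000 > "$workdir/full"                       # reference copy, ~1.4 MB

# reader exits after one line -> tee is killed by SIGPIPE mid-write
seq 1 200000 | tee "$workdir/dump2" | head -n 1 > /dev/null || true

# with -p, pipe write errors are ignored and the file is written fully
seq 1 200000 | tee -p "$workdir/dump3" | head -n 1 > /dev/null

wc -c "$workdir/full" "$workdir/dump2" "$workdir/dump3"
```

Note that -p (and the underlying --output-error=warn-nopipe) is a GNU coreutils option; other tee implementations may not have an equivalent.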
tee pipeline and pnmtools - truncated file
1,619,895,044,000
I am trying to make a custom command prompt that looks like this: [][][][]$, where the [] can be filled with custom information. For example, if I write in the console . file.sh 0 2 "date -R" then the command prompt looks like this [Sat, 29 Aug 2020 11:02:40 +0200][][][]$ the 0 stands for position, and 2 stands for the type of the value (1 is string, 2 is command which is in this example, and 3 is a csv file) Basically, I want my command prompt to be dynamic, so every time I hit enter the values should be updated (not all values have to be updated, for example string stays the same all the time, or a csv column.) So when I hit enter I want my prompt go from [Sat, 29 Aug 2020 11:02:40 +0200][][][]$ to [Sat, 29 Aug 2020 11:02:45 +0200][][][]$ for example. Here is my full code: #!/bin/bash updatedata() { v=$(awk -v strSearch="$1" ' BEGIN{ FS=";" } { gsub(/\r/,"") for(i=1;i<=NF;i++){ if($i==strSearch){ print i exit } } } ' data.csv) sum=0 for x in `cut -f $v -d ';' data.csv` do x="${x/$'\r'/}" let sum=$sum+$x done if [ $pos -eq 0 ] then v0=$sum elif [ $pos -eq 1 ] then v1=$sum elif [ $pos -eq 2 ] then v2=$sum elif [ $pos -eq 3 ] then v3=$sum fi } while [ "$#" -gt 0 ]; do pos=$1 typevar=$2 stringvar=$3 case $pos in 0) v0=$3 ;; 1) v1=$3 ;; 2) v2=$3 ;; 3) v3=$3 ;; *) echo "One of the values has invalid position entered, try again" esac case $typevar in 1) if [ $pos -eq 0 ] then if [ "$stringvar" != "null" ] then v0=$stringvar else v0="" fi elif [ $pos -eq 1 ] then if [ "$stringvar" != "null" ] then v1=$stringvar else v1="" fi elif [ $pos -eq 2 ] then if [ "$stringvar" != "null" ] then v2=$stringvar else v2="" fi elif [ $pos -eq 3 ] then if [ "$stringvar" != "null" ] then v3=$stringvar else v3="" fi fi ;; 2) if [ $pos -eq 0 ] then v0=`eval $3` elif [ $pos -eq 1 ] then v1=`eval $3` elif [ $pos -eq 2 ] then v2=`eval $3` elif [ $pos -eq 3 ] then v3=`eval $3` fi ;; 3) updatedata $3 ;; *) echo "Invalid type of variable, try again" esac shift shift shift done export 
PS1="[$v0][$v1][$v2][$v3]$" I tried using export for the PS1, didn't work. I also tried using single quoted for the PS1 like this: export PS1='[$v0][$v1][$v2][$v3]$' and that didn't work either. I also tried to do this: export PS1='[$(v0)][$(v1)][$(v2)][$(v3)]$' and that didn't work either. I don't know what to do! example of CSV file: Date_of_report;Municipality_code;Municipality_name;Province;Total_reported;Hospital_admission;Deceased 2020-03-13 10:00:00;GM0003;Appingedam;Groningen;0;0;0 2020-03-13 10:00:00;GM0010;Delfzijl;Groningen;0;0;0 2020-03-13 10:00:00;GM0014;Groningen;Groningen;3;0;0 2020-03-13 10:00:00;GM0024;Loppersum;Groningen;0;0;0 2020-03-13 10:00:00;GM0034;Almere;Flevoland;1;1;0 2020-03-13 10:00:00;GM0037;Stadskanaal;Groningen;0;0;0 2020-03-13 10:00:00;GM0047;Veendam;Groningen;0;0;0 2020-03-13 10:00:00;GM0050;Zeewolde;Flevoland;1;0;0 2020-03-13 10:00:00;GM0059;Achtkarspelen;Friesland;0;0;0 2020-03-13 10:00:00;GM0060;Ameland;Friesland;0;0;0 2020-03-13 10:00:00;GM0072;Harlingen;Friesland;0;0;0 2020-03-13 10:00:00;GM0074;Heerenveen;Friesland;0;0;0
Your script currently only updates the prompt when it is explicitly sourced. If you want it to run every time the prompt refreshes, I think you need to use PROMPT_COMMAND. Try the following modified script. This will call the function set_prompt to update the prompt every time. I've also exported the commands to generate the text so that they can be run again to update when you get a new prompt. Using your example command of . file.sh 0 2 "date -R", I can then see the date update when I press enter. #!/bin/bash updatedata() { v=$(awk -v strSearch="$1" ' BEGIN{ FS=";" } { gsub(/\r/,"") for(i=1;i<=NF;i++){ if($i==strSearch){ print i exit } } } ' data.csv) sum=0 for x in `cut -f $v -d ';' data.csv` do x="${x/$'\r'/}" let sum=$sum+$x done echo $sum } while [ "$#" -gt 0 ]; do pos=$1 typevar=$2 stringvar=$3 case $pos in 0) v0=$3 ;; 1) v1=$3 ;; 2) v2=$3 ;; 3) v3=$3 ;; *) echo "One of the values has invalid position entered, try again" esac case $typevar in 1) if [ $pos -eq 0 ] then if [ "$stringvar" != "null" ] then export PROMPT0="echo $stringvar" else export PROMPT0="" fi elif [ $pos -eq 1 ] then if [ "$stringvar" != "null" ] then export PROMPT1="echo $stringvar" else export PROMPT1="" fi elif [ $pos -eq 2 ] then if [ "$stringvar" != "null" ] then export PROMPT2="echo $stringvar" else export PROMPT2="" fi elif [ $pos -eq 3 ] then if [ "$stringvar" != "null" ] then export PROMPT3="echo $stringvar" else export PROMPT3="" fi fi ;; 2) if [ $pos -eq 0 ] then export PROMPT0="exec $3" elif [ $pos -eq 1 ] then export PROMPT1="exec $3" elif [ $pos -eq 2 ] then export PROMPT2="exec $3" elif [ $pos -eq 3 ] then export PROMPT3="exec $3" fi ;; 3) if [ $pos -eq 0 ] then export PROMPT0="updatedata $3" elif [ $pos -eq 1 ] then export PROMPT1="updatedata $3" elif [ $pos -eq 2 ] then export PROMPT2="updatedata $3" elif [ $pos -eq 3 ] then export PROMPT3="updatedata $3" fi ;; *) echo "Invalid type of variable, try again" esac shift shift shift done function set_prompt() { v0=$($PROMPT0) 
v1=$($PROMPT1) v2=$($PROMPT2) v3=$($PROMPT3) export PS1="[$v0][$v1][$v2][$v3]$" } export PROMPT_COMMAND=set_prompt
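To isolate the mechanism the answer relies on: bash evaluates $PROMPT_COMMAND just before printing each primary prompt, so rebuilding PS1 inside that function is what makes the prompt refresh on every Enter. A minimal sketch, with placeholder values instead of the question's CSV data:

```shell
# set_prompt rebuilds PS1 from scratch; anything computed here
# (like the date) is re-evaluated before every prompt.
set_prompt() {
    v0=$(date -R)        # dynamic: refreshed each time
    v1="static"          # placeholder for a fixed string value
    PS1="[$v0][$v1][][]\$ "
}
PROMPT_COMMAND=set_prompt

# A script can only invoke it by hand; an interactive shell calls it
# automatically each time you press Enter.
set_prompt
echo "$PS1"
```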
The values don't get updated in my custom command prompt when I hit enter
1,619,895,044,000
Okay, so I'm kinda stumped here. I'm in the midst of a deep-dive into BASH, for the purposes of writing some automation scripts for my employer. Also, full disclosure here: I'm a web guy; hitherto I've known JUST enough CLI to operate git. My issue is, about half the staff are using MacOS, the other half are using GitBash, and it seems there are different flags supported (or, more relevantly, not supported) on the two different BASH instantiations. Further conundrum: what with everyone working from home and the world ending and all, I cannot reliably demand every one of our staff "switch to distro X"/"upgrade to version Y". Now, I know how to test if a given program is installed (though, I confess: I'm not crystal-clear why one is preferable to the other, and please: correct me if either is a terrible way of handling this), in the forms of: type foo >/dev/null 2>&1 || { echo >&2 "COMMAND NOT FOUND"; } ...and... [ -x "$(command -v foo)" ] || echo 'COMMAND NOT FOUND' ...but in my specific case, BOTH platforms HAVE foo installed, but, as a for-instance, only MacOS has a -b/--bar modifier flag. So, how do I test to see if a given FEATURE of a command is supported? Every time I try to pass a flag to one of the tests, the program PERFORMING the test seems to believe it's directed at IT, ignores it entirely, or errors out. How can I test to see if foo -b/foo --bar is a valid/installed/accessible command? Update/Answer Since the general consensus presented in the well-reasoned and honestly excellent answers received below appears to be a hybrid of "no, one cannot", punctuated with "there's simply too many factors to be able to reliably glean anything resembling conclusive, useful data", I'm closing out the question. 
My intention here was to ascertain the correct approach for verifying the presence of a potentially non-existent flag across two different platforms - known platforms, fortunately, in my case - and the appropriate tack in this case would seem to be the one I'm currently taking: maintain two separate scripts, one for each platform. Thank you to all who took the time to respond! It's much appreciated.
In a useful and reliable way, you probably can't. You basically need to test each command individually to see what it does. (The command "echo" is a particularly bad case of this. There are a lot of variations.) What you can do, is check version numbers and operating system. For BASH, try bash --version | head -1 or bash_version="$( bash --version | head -1|sed -e 's/.* version //;s/ .*//' )" or just use the predefined symbol $BASH_VERSION For the operating system, uname -s is a good start, and other options might give you more details. Once you have these, you can start writing conditional code as needed. Having said all that... for your specific case of a foo command that may or may not take a --bar option: Figure out what the two versions do with matching command lines that are otherwise safe to run. For instance, if both versions of foo report the first invalid option, try writing something like: output="$( foo --bar --somethingtotallyinvalid 2>&1 )" and then figuring out what is in variable output. Of course, in this case, it is liable to be the valid option list. If so, you might consider: output="$( foo --somethingtotallyinvalid 2>&1 | grep -e --bar )"
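If the two platforms are known, one pragmatic compromise is to probe the flag once with a harmless invocation and branch on the exit status. This only works for commands that exit non-zero on unknown flags and whose probe run has no side effects; date and the flags below are illustrative stand-ins, not commands from the question.

```shell
# Succeeds if running the command with the given flag (and any other
# harmless arguments) exits 0; stdout and stderr are both discarded.
supports_flag() {
    "$@" > /dev/null 2>&1
}

if supports_flag date -u; then
    echo "date -u: supported"
fi

if supports_flag date --no-such-flag; then
    echo "date --no-such-flag: supported"
else
    echo "date --no-such-flag: not supported"
fi
```

Cache the result in a variable at script startup rather than probing on every call.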
Is there a convenient way to test if a given flag is supported on a command?
1,619,895,044,000
I am using Debian Buster and would like to find out which process does the most writes on a specific partition, just like iotop but limited to a single block device?
iotop cannot do that, because it reads the per-process I/O counters (/proc/PID/io), which are totals across all block devices, including virtual filesystems like tmpfs. What you'll need to do is block I/O tracing: https://tunnelix.com/debugging-disk-issues-with-blktrace-blkparse-btrace-and-btt-in-linux-environment/ https://www.collabora.com/news-and-blog/blog/2017/03/28/linux-block-io-tracing/ https://www.linux.com/topic/networking/linux-block-io-tracing/ As far as I know there are no ready-made solutions for that.
How can I see which process does the most writes on a specific partition?
1,619,895,044,000
How mount a mount point available in desktop/file manager under Removable Media category but has not ever been clicked so not been recognized from shell yet, by use of command line? Then only if this first click was done, it further can be recognized and used on shell by commands
On a systemd system you likely have udisksctl, from the udisks2 package. Quoting the udisks(8) manual: udisks provides interfaces to enumerate and perform operations on disks and storage devices. Any application (including unprivileged ones) can access the udisksd(8) daemon via the name org.freedesktop.UDisks2 on the system message bus[1]. Use $ udisksctl status to list the devices you can act upon, $ udisksctl info --block-device /dev/sdXn to inspect them (either the block device sdX or one of its partitions, sdXn) and $ udisksctl mount --block-device /dev/sdXn to mount a volume. The command will output the full path of its mount point. Finally, to unmount and power-off a device: $ udisksctl unmount --block-device /dev/sdXn $ udisksctl power-off --block-device /dev/sdX See also How to mount an image file without root permission? Mounting from dolphin vs commandline Eject / safely remove vs umount
mount point available in desktop Removable Media using shell command line
1,619,895,044,000
I need to transfer a particular file from my Linux PC to Android phone using a Bash script. I have already exposed my phone filesystem to the PC. With this I can easily communicate between the two using nautilus and GS Connect on PC and KDE Connect on phone. By the way my both devices are on same home network. Please Help!
Finally, I got the solution using sftp. I have used the following script. #! /bin/bash #Capture and share screenshot to my phone gnome-screenshot cd /home/prm/Pictures FILE="$(ls -Art | tail -n 1)" #To get the last created file echo $FILE sftp sftp://192.168.1.3:1761/primary/DCIM/Screenshots <<EOF put "$FILE" bye EOF
Linux command to move file from desktop to phone on home network
1,619,895,044,000
am trying to write a bash script to get the total size of sub folders in a S3 bucket. My bucketpath s3://path1/path2/subfolders Inside the path2 folder i have many sub-folder like 2019_06 2019_07 2019_08 2019_09 2019_10 2019_11 2019_12 I need to get the size of each subfolder in a bash script. I wrote a script like #!/bin/bash FILES=$(mktemp) aws s3 ls "s3://path1/path2/" >> "$FILES" cat $FILES echo for file in $FILES do if [ ! -e "$file" ] then s3cmd du -r s3://path1/path2/$file echo "$file"; echo continue fi echo done The output of cat $tmpfile is as below 2019_06 2019_07 2019_08 2019_09 2019_10 2019_11 2019_12 But am getting error. While passing the variable into the for loop. Ideally my aim is like for each iteration when for loop runs inside do .....The command should be like s3cmd du -r s3://path1/path2/2019_06 s3cmd du -r s3://path1/path2/2019_07 s3cmd du -r s3://path1/path2/2019_08 etc... So that i can get the total size of the folder Kindly help! Update I have edited the code as suggested #!/bin/bash FILES=$(mktemp) aws s3 ls "s3://path1/path2/" >> "$FILES" for file in `cat $FILES` do if [ -n "$file" ] echo $file done
First of all, if you want to check if a file exist no need for exclamation mark ! since[ -e FILE ] will return True if FILE exists. But the problem is your bash script cannot check if 2019_06 existed because these files are in S3. Lines in $FILES are just strings. You can check with [ -n STRING ] which means True if the length of "STRING" is non-zero. for file in `cat $FILES` do if [ -n "$file" ] then echo $file s3cmd du -r s3://path1/path2/$file fi done
How to pass each line of an output file as an argument to a for loop in the same bash script?
1,619,895,044,000
I copied column 7,8 and 9 from file 1 into columns 7,8 and 9 in file 2 producing a new file 3. the produced file (file 3) is not aligned as the original files, How can I edit it to preserve the alignment ? I used the command: awk '(getline line < "file 1") > -1 {split(line,a); $7 = a[7]; $8 = a[8]; $9= a[9]} 1' file 2 > file 3 file 1: GRM in vacuum 192700 1GRM C1 1 17.188 0.311 13.994 -0.5971 0.0204 -0.0724 1GRM C2 2 0.094 0.383 0.005 0.4831 -0.8709 -0.2204 1GRM C3 3 0.091 0.524 0.008 -0.7098 0.3449 -0.3952 file 2: GRM in vacuum 192760 1GRM C1 1 0.061 0.071 14.000 1GRM C2 2 0.184 0.142 14.000 1GRM C3 3 0.184 0.284 0.000 file 3 (The output): GRM in vacuum 192760 1GRM C1 1 0.061 0.071 14.000 -0.5971 0.0204 -0.0724 1GRM C2 2 0.184 0.142 14.000 0.4831 -0.8709 -0.2204 1GRM C3 3 0.184 0.284 0.000 -0.7098 0.3449 -0.3952 To solve the alignment problem I used: awk 'BEGIN{fmt="%10s%9s%7d%8.3f%8.3f%8.3f%8.4f%8.4f%8.4f"} (getline line < "file 1") > -1 {n = split(line,a)} n > 6 {$0 = sprintf(fmt,$1,$2,$3,$4,$5,$6,a[7],a[8],a[9])} 1' "file 2" > file 3 but still I have 2 problems. The first problem is the alignment of the columns in the output file is not like the original files (file 1 and file 2). 
The second problem happens at line 10002, column 2 and 3 combine together which lead to disappearing a complete column in the output file starting from line 10002 to the end, below are the 3 files at line 10002: file 1: 2500GRM C3 9999 15.716 8.242 0.002 0.2372 -0.2989 -0.0758 # line 10001 2500GRM C410000 15.592 8.311 0.003 0.2603 -0.2492 -0.2394 # line 10002 2501GRM C110001 15.591 8.453 0.006 0.0887 -0.2458 -0.7014 # line 10003 2501GRM C210002 15.714 8.524 0.007 -0.0788 0.0598 -0.9619 # line 10004 file 2: 2500GRM C3 9999 15.433 8.378 0.000 # line 10001 2500GRM C410000 15.310 8.449 0.000 # line 10002 2501GRM C110001 15.310 8.591 0.000 # line 10003 2501GRM C210002 15.433 8.662 0.000 # line 10004 file 3: 2500GRM C3 9999 15.433 8.378 0.000 0.2372 -0.2989 -0.0758 # line 10001 2500GRM C410000 15.310 8.449 0.000 -0.2492 -0.2394 # line 10002 2501GRM C110001 15.310 8.591 0.000 -0.2458 -0.7014 # line 10003 2501GRM C210002 15.433 8.662 0.000 0.0598 -0.9619 # line 10004 I have attached all the files in the below link: https://drive.google.com/drive/folders/13diMVxlp-T9BXE_jnm_LL1jUPbz8eren
The problem is that you have either 8 or 9 data fields in file1 and 5 or 6 in file2. Either C3 9999 is one badly formatted field or C410000 should be two fields C4 and 10000. To adjust the formatting depending on the number of fields you can use two format strings and switch between them save the number of array elements n when you split the line and take the last three values a[n-2], a[n-1], a[n] awk ' BEGIN{ fmt1="%8s %6s%5s %7.3f %7.3f %7.3f %7.4f %7.4f %7.4f" ORS fmt2="%8s %11s %7.3f %7.3f %7.3f %7.4f %7.4f %7.4f" ORS } (getline line < "file 1") > -1{ n=split(line, a) } NF<=3{ print; next } # print original line NF==6{ printf fmt1, $1, $2, $3, $4, $5, $6, a[n-2], a[n-1], a[n]; next } # 6 + 3 fields { printf fmt2, $1, $2, $3, $4, $5, a[n-2], a[n-1], a[n] } # 5 + 3 fields ' "file 2" > "file 3" Output: ... 2500GRM C3 9999 15.433 8.378 0.000 0.2372 -0.2989 -0.0758 2500GRM C410000 15.310 8.449 0.000 0.2603 -0.2492 -0.2394 2501GRM C110001 15.310 8.591 0.000 0.0887 -0.2458 -0.7014 2501GRM C210002 15.433 8.662 0.000 -0.0788 0.0598 -0.9619 ...
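The core of the fix is picking between the two printf formats by field count. Isolated on two toy lines (values from the sample data, trimmed to the three coordinate columns), note that %6s%5s and %11s occupy the same total width, which is what keeps the columns aligned across both line types:

```shell
fmt1='%8s %6s%5s %7.3f %7.3f %7.3f\n'   # 6-field lines: "C3" and "9999" separate
fmt2='%8s %11s %7.3f %7.3f %7.3f\n'     # 5-field lines: "C410000" fused into one field

line1=$(printf "$fmt1" 2500GRM C3 9999 15.433 8.378 0.000)
line2=$(printf "$fmt2" 2500GRM C410000 15.310 8.449 0.000)
printf '%s\n%s\n' "$line1" "$line2"
```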
How to solve the alignment problem of the columns in a text file after copy and paste?
1,619,895,044,000
When I use less, at times I like to have things go on for a while so I use the ESC-F sequence key which, up to here, does what I want. The only way I've found to go back to the normal less command is to use Ctrl-C. However, when I do that, it stops (Cancels) the running process. What I'm looking for is a way to return to the normal less functionality without stopping the running process so I can look at a few things, then see the following output without having to restart my process. Is there such a capability?
The correct answer is Ctrl+X: it interrupts the ESC-F "keep reading" mode and returns you to normal less operation without stopping anything else. In pipe operations like find /var/log -name "*.log" | xargs less, Ctrl+C would instead terminate less itself and drop you back to the prompt.
How do I cancel the effect of "ESC-F" in "less" without canceling the running process?
1,619,895,044,000
If I have, for example, a column of numbers: 4 6 8 10 12 Is there a way in Vim that I could find all numbers higher than 8, than do math only with these numbers? Obtaining a new column as a result: 4 6 8 (10+2)=12 (12+2)=14
You can achieve this kind of substitution using the \= special replace expression. Check out :help sub-replace-special for all the details, but here's how this specific replacement could work: %s/\d\+/\=str2nr(submatch(0)) > 8 ? str2nr(submatch(0)) + 2 : submatch(0) In the replacement part of the :substitute command, after the \=, submatch(0) gives you the full match. The str2nr global function converts that to a number. If you'd like the logic to be more readable, you could extract it to a function: function! IncrementNumber(number_string) let number = str2nr(a:number_string) if number > 8 let number += 2 endif return number endfunction You can put that in your .vimrc, and tweak it, set the threshold of 8 as a parameter, etc, and then call this on the file: %s/\d\+/\=IncrementNumber(submatch(0))
How can I edit columns of numbers in vim using conditionals?
1,619,895,044,000
Computer Environment OS: Arch linux - Manjaro Shell: zsh AIM I'm trying to enable the following commands to not require a password input for my main user account: ab sudo systemctl stop NetworkManager sudo systemctl start NetworkManager FAILED ATTEMPT I've read and have tried to follow some of the online help and got so far as this by using sudo -i to create the file /etc/sudoers.d/ab with the following code: ab ALL=(root) NOPASSWD: sudo systemctl stop NetworkManager ALL=(root) NOPASSWD: sudo systemctl start NetworkManager TROUBLESHOOTING ATTEMPTS I've tried to make the following edits without success: changing root to ALL changing systemctl to /bin/systemctl deleting sudo Each time I make the edit and save, I cat /etc/sudoers.d/ab to check that changes were made, and I always open up a new terminal to try out the command, each time trying a combination of the following while still being asked for a password input: sudo systemctl start NetworkManager sudo /bin/systemctl start NetworkManager systemctl start NetworkManager /bin/systemctl start NetworkManager QUESTIONS Is starting a new terminal enough, or do I need to restart my whole system to initiate the changes? Or maybe I'm forgetting another step?
The lines in /etc/sudoers.d/ab should probably be like this: ab ALL=(root) NOPASSWD: /bin/systemctl stop NetworkManager ab ALL=(root) NOPASSWD: /bin/systemctl start NetworkManager With sudo and normal, locally stored sudoers.d files (and nothing advanced like sudoers information stored in a LDAP server), any changes to the sudoers files should take effect immediately, with no need to logout/login, start new terminals, or anything like that. Normally sudo will log both successful and failed attempts to use it, so you should look at the appropriate log file (usually either /var/log/secure or /var/log/auth.log, depending on distribution) for messages from sudo. Those messages will include the command the user is attempting to execute through sudo, in the exact form you'll need to write it into the sudoers file to allow it.
Trouble trying to set no password for certain cli commands in linux
1,619,895,044,000
I've looked at a lot of bash tab completion questions and haven't yet found one that answers this one. I'm on a linux system (GNU bash, version 4.2.46(2)-release (x86_64-redhat-linux-gnu)) and normally tab completion works just fine. However, when I go to use tab completion for an environment variable or command-line option, it fails. For example, ls /v<TAB> -> ls /var/ export FOO=/v<TAB> -> export FOO=/v (bell plays) When I do this on my Mac (GNU bash, version 5.0.16(1)-release (x86_64-apple-darwin18.7.0)) tab completion of the environment variable value works fine. On both machines, $COMP_WORDBREAKS is "'><=;|&(:. Someone (I don't know where this came from) suggested that shopt -u progcomp might help, and it does fix the problem! However, I don't need to unset that option Mac-side and I'm worried it might cause other strange side effects. Is that a normal thing to unset? Is there anything else I can do to figure this out?
shopt -u progcomp disables programmable completion, i.e. scripts that may provide e.g. per-program completion. They might provide features like only completing files that match *.tar or such when the command line starts with tar, etc. Those scripts usually come with your distribution, or not, in the case of the Mac. It's fine to disable progcomp if the bugs and misfeatures caused by those scripts seem more annoying than the actual features are useful (and for me, that's about 100 % of the time).
Bash tab completion not working following = character
1,619,895,044,000
On an Amazon Linux 2 instance, the command line is throwing the following connection refused error every time a command is run that references a file path. The same error is thrown when an https url is used in place of the file path. Why is this happening, and how can this problem be remediated so that the file can be read and used from the command line? Here is the console output: [kubernetes-host@ip-of-ec2-instance ~]$ sudo kubectl apply -f rbac-kdd.yaml | tee kubeadm-rbac-kdd.out unable to recognize "rbac-kdd.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused unable to recognize "rbac-kdd.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused [kubernetes-host@ip-of-ec2-instance ~]$ The relative path of the file is correct. The command is trying to apply calico to a Kubernetes cluster created by kubeadm, if that helps. But I am thinking this is a basic linux question. SELinux has been disabled on this Amazon Linux 2 EC2 instance. Would appreciate some pointers on this as I try to identify possible causes. PROBLEM ISOLATED: Also, the contents of .kube/config indicate port 6443 as follows: [kubernetes-host@ip-of-ec2-instance ~]$ cat /home/kubernetes-host/.kube/config apiVersion: v1 clusters: - cluster: certificate-authority-data: <encrypted-certificate-authority-data-here> server: https://ip-of-ec2-instance:6443 name: kubernetes contexts: - context: cluster: kubernetes user: kubernetes-admin name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin user: client-certificate-data: <encrypted-client-certificate-data-here> client-key-data: <encrypted-client-key-data-here> [kubernetes-host@ip-of-ec2-instance ~]$ The problem seems to be that the kubectl apply command is using port 8080 while the Kubernetes apiserver is using port 6443. 
How can this mismatch be remediated so that the kubectl apply command uses port 6443? Further, kubectl is able to see that 6443 is the correct port, and curl can reach the correct 6443 port, as follows: [kubernetes-host@ip-of-ec2-instance ~]$ kubectl cluster-info Kubernetes master is running at https://ip-of-ec2-instance:6443 KubeDNS is running at https://ip-of-ec2-instance:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. [kubernetes-host@ip-of-ec2-instance ~]$ curl https://ip-of-ec2-instance:6443 curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. [kubernetes-host@ip-of-ec2-instance ~]$ [kubernetes-host@ip-of-ec2-instance ~]$ curl https://127.0.0.1:6443 curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. [kubernetes-host@ip-of-ec2-instance ~]$ Why is kubectl apply NOT able to map to port 6443, when kubectl cluster-info is able to map to the correct port?
This looks like you can't connect to the Kubernetes API server. This could be for many reasons The kubernetes API server is not running The API server is not listening on TCP/8080 The API server is not listening on the loopback address of 127.0.0.1 The API server is not listening with HTTP (but with HTTPS) A local firewall (such as iptables) is blocking the connection TCPwrapper is blocking the connection. A mandatory access control system such as SELinux is blocking the connection, but you said this was disabled. And if you have AppArmor installed on Amazon Linux, then I don't know if anyone can help you. :) and this list can go on to many more esoteric reasons why this connection won't happen. some remediation/troubleshooting steps Make sure the k8s api server is running (I don't know how you've installed it, so I can't suggest how you'd check, probably with systemctl status or docker ps). Run ss -ln and check for something listening on 127.0.0.1:8080 or *:8080 see if you can connect to the socket with something else curl -k https://127.0.0.1:8080 to check https, or curl http://127.0.0.1:8080 for HTTP. If your API server is running in a docker container, make sure it's listening on 8080 on the host. docker ps or docker inspect to see the port forwarding. Check the firewall, iptables -S, this is a longshot, not often will you see rules blocking packets going to localhost. Check /etc/hosts.deny for anything that might stop you (again, this is a long shot, because this doesn't usually get configured by accident). Edit After seeing some more of your troubleshooting data. I noticed that you're running kubectl as root. And your kubeconfig is in a user directory. You should run the kubectl as the user "kubernetes-host" by just dropping the sudo at the beginning of your command. The kubeconfig file will direct Kubectl to the right endpoint (address and port), but running as root, kubectl will not check in /home/kubernetes-host/.kube/config. 
So try kubectl apply -f rbac-kdd.yaml If you have to run as root for some reason, you should: 1) Question the life choices that led you here. 2) Run sudo kubectl apply --kubeconfig=/home/kubernetes-host/.kube/config -f rbac-kdd.yaml to explicitly use the config in the kubernetes-host user's home directory.
unable to recognize file. connection refused
1,619,895,044,000
I'd like to show that entering passwords via read is insecure. To embed this into a half-way realistic scenario, let's say I use the following command to prompt the user for a password and have 7z¹ create an encrypted archive from it: read -s -p "Enter password: " pass && 7z a test_file.zip test_file -p"$pass"; unset pass My first attempt at revealing the password was by setting up an audit rule: auditctl -a always,exit -F path=/bin/7z -F perm=x Sure enough, when I execute the command involving read and 7z, there's a log entry when running ausearch -f /bin/7z: time->Thu Jan 23 18:37:06 2020 type=PROCTITLE msg=audit(1579801026.734:2688): proctitle=2F62696E2F7368002F7573722F62696E2F377A006100746573745F66696C652E7A697000746573745F66696C65002D7074686973206973207665727920736563726574 type=PATH msg=audit(1579801026.734:2688): item=2 name="/lib64/ld-linux-x86-64.so.2" inode=1969104 dev=08:03 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1579801026.734:2688): item=1 name="/bin/sh" inode=1972625 dev=08:03 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1579801026.734:2688): item=0 name="/usr/bin/7z" inode=1998961 dev=08:03 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=CWD msg=audit(1579801026.734:2688): cwd="/home/mb/experiments" type=EXECVE msg=audit(1579801026.734:2688): argc=6 a0="/bin/sh" a1="/usr/bin/7z" a2="a" a3="test_file.zip" a4="test_file" a5=2D7074686973206973207665727920736563726574 type=SYSCALL msg=audit(1579801026.734:2688): arch=c000003e syscall=59 success=yes exit=0 a0=563aa2479290 a1=563aa247d040 a2=563aa247fe10 a3=8 items=3 ppid=2690563 pid=2690868 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts17 ses=1 comm="7z" exe="/usr/bin/bash" key=(null) This line seemed the most promising: 
type=EXECVE msg=audit(1579801026.734:2688): argc=6 a0="/bin/sh" a1="/usr/bin/7z" a2="a" a3="test_file.zip" a4="test_file" a5=2D7074686973206973207665727920736563726574 But the string 2D7074686973206973207665727920736563726574 is not the password I entered. My question is twofold: Is audit the right tool to get at the password? If so, is there something I have to change about the audit rule? Is there an easier way, apart from audit, to get at the password? ¹ I'm aware that 7z can prompt for passwords by itself.
What's insecure is not read(2) (the system call to read data from a file). It isn't even read(1) (the shell builtin to read a line from standard input). What's insecure is passing the password on the command line. When the user enters something that the shell reads with read, that thing is visible to the terminal and to the shell. It isn't visible to other users. With read -s, it isn't visible to shoulder surfers. The string passed on the command line is visible in the audit logs. (The string may be truncated, I'm not sure about that, but if it is it would be only for much longer strings than a password.) It's just encoded in hexadecimal when it contains characters such as spaces that would make the log ambiguous to parse. $ echo 2D7074686973206973207665727920736563726574 | xxd -r -p; echo -pthis is very secret $ perl -l -e 'print pack "H*", @ARGV' 2D7074686973206973207665727920736563726574 -pthis is very secret That's not the main reason why you shouldn't pass a secret on the command line. After all, only the administrator should be able to see audit logs, and the administrator can see everything if they want. It is worse to have the secret in the logs, though, because they may be accessible to more people later (for example through an improperly secured backup). The main reason why you shouldn't pass a secret on the command line is that on most systems the command line is also visible to other users. (There are hardened systems where this isn't the case, but that's typically not the default.) Anyone running ps, top, cat /proc/*/cmdline or any similar utility at the right time can see the password. The 7z program overwrites the password soon after it starts (as soon as it's been able to make an internal copy), but that only reduces the window of danger, it doesn't remove the vulnerability. Passing a secret in an environment variable is safe. The environment is not visible to other users. But I don't think 7z supports that. 
To pass the password without making it visible through the command line, you need to pass it as input, and 7z reads from the terminal, not from stdin. You can use expect to do that (or pexpect if you prefer Python to TCL, or Expect.pm in Perl, or expect in Ruby, etc.). Untested: read -s -p "Enter password: " pass pass=$pass expect \ -c 'spawn 7z a -p test_file.zip test_file' \ -c 'expect "assword:" {send $::env(pass)}' \ -c 'expect eof' -c 'catch wait result' unset pass
Sniff password entered with read and passed as a command line argument
1,619,895,044,000
$ ls -ltr /{,usr/}bin/l*|tail -4 -r-xr-xr-x 1 root bin 31544 Dec 20 2017 /usr/bin/login -r-xr-xr-x 1 root bin 31544 Dec 20 2017 /bin/login lrwxrwxrwx 1 root root 15 Aug 28 2018 /usr/bin/libpng-config -> libpng12-config lrwxrwxrwx 1 root root 15 Aug 28 2018 /bin/libpng-config -> libpng12-config This gives the names of all executable files that start with the letter l in the /usr/bin and /bin directories. If I play with , by changing its position, I get results that I don't understand. For example, see the output below. $ ls -ltr /{,usr,/}bin/l*|tail -4 /usrbin/l*: No such file or directory -r-xr-xr-x 1 root bin 31544 Dec 20 2017 /bin/login -r-xr-xr-x 1 root bin 31544 Dec 20 2017 //bin/login lrwxrwxrwx 1 root root 15 Aug 28 2018 /bin/libpng-config -> libpng12-config lrwxrwxrwx 1 root root 15 Aug 28 2018 //bin/libpng-config -> libpng12-config Same as above (the order of the list has changed) but usr is missing. $ ls -ltr /{usr,/}bin/l*|tail -4 /usrbin/l*: No such file or directory -rwxr-xr-x 1 root other 2286 Jun 15 2017 //bin/libpng14-config -r-xr-xr-x 1 root bin 28608 Oct 20 2017 //bin/ldaplist -r-xr-xr-x 1 root bin 31544 Dec 20 2017 //bin/login lrwxrwxrwx 1 root root 15 Aug 28 2018 //bin/libpng-config -> libpng12-config These are the files present only in /bin, with an extra / prefixed to every entry. Please explain what kind of charm is being performed by ,.
The braces are replaced by each of the strings between the commas, so X{,a,b}Y is expanded to XY XaY XbY, so your /{usr,/}bin/l is expanded to /usrbin/l* /bin/l*, and not /usr/bin/l* as you seem to expect. /{,usr/}bin/l* #-> /[]bin/l* /[usr/]bin/l* /{,usr,/}bin/l* #-> /[]bin/l* /[usr]bin/l* /[/]bin/l* /{usr,/}bin/l* #-> /[usr]bin/l* /[/]bin/l*
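A convenient way to see exactly what the shell expands a brace pattern to, without touching the filesystem, is to echo it (brace expansion is a bash/ksh/zsh feature, not plain POSIX sh):

```shell
echo X{,a,b}Y      # XY XaY XbY
echo /{,usr/}bin   # /bin /usr/bin
echo /{usr,/}bin   # /usrbin //bin
echo /{,usr,/}bin  # /bin /usrbin //bin
```

Unlike globbing, brace expansion happens before any filename matching, so the words are produced whether or not the paths exist.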
Significance of comma ',' operator, for concatenating strings in Unix
1,619,895,044,000
We want to remove the list of commands that were run in the past in the New Command window. It is the AutoFill or AutoComplete history of commands. To see them, follow these steps: Launch Terminal Open Shell menu, and click on New Command If it is empty, type something and run it. The next time you open the New Command, you'll be able to see the prior commands because it is stored in the dropdown list of the New Command window. Any idea on how to clear these?
If you run this command:
open -a Finder ~/Library/Preferences/com.apple.Terminal.plist
you'll see a CommandHistory key where you can remove entries. If com.apple.Terminal.plist is not at the above path, you can search for it with:
find ~ -name com.apple.Terminal.plist
On macOS, how do we remove the past commands history from Terminal > Shell > New Command?
1,619,895,044,000
I have text files in ASCII format and want to replace a specific value in each of them with a value saved in another text file. Consider files named text_1. . . . 50.asc, each with 5 columns and 4 rows. Sample data in a file:
0.40007 0.544 0.6795 0.1545 -3.4028
0.61488 0.8471 0.7444 0.3537 0.0709
0.65128 0.6651 0.7948 0.9200 0.893
0.70952 0.5990 0.5061 0.610 0.893
And I want to replace the value at the 5th column, 1st row of each file with the corresponding value in Replacing_values.txt. It has this data:
1
2
3
4
.
.
.
50
Expected result (continued for all files):
0.40007 0.544 0.6795 0.1545 1
0.61488 0.8471 0.7444 0.3537 0.0709
0.65128 0.6651 0.7948 0.9200 0.893
0.70952 0.5990 0.5061 0.610 0.893
I have tried this:
for i in `seq 50`; do x=`awk 'FNR==(1) {print $5}' *.asc`; y=`cat Replacing_values.txt`; echo $x==$y ;done
With flexible GNU awk features: gawk -i inplace -v repl="Replacing_values.txt" 'FNR==1{ getline $5 < repl }1' *.asc -i inplace - allows to modify the input file(s) in-place -v repl="Replacing_values.txt" - a variable keeping the filename with replace values FNR==1 - consider only the 1st line of each input file getline $5 < repl - read next record from repl file and assign it to the 5th column $5
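To watch the getline mechanism in isolation, here is a sketch with made-up two-file sample data, printing to stdout instead of editing in place (this part works with plain POSIX awk too; only -i inplace is GNU-specific):

```shell
cd "$(mktemp -d)"
printf 'a b c d e\n' > one.asc
printf 'p q r s t\n' > two.asc
printf '1\n2\n' > Replacing_values.txt
# on the first line of each input file, read the next record from
# Replacing_values.txt straight into the 5th field
awk -v repl="Replacing_values.txt" 'FNR==1{ getline $5 < repl }1' one.asc two.asc
# a b c d 1
# p q r s 2
```

Each time FNR==1 fires, the next line of Replacing_values.txt is consumed, so the order of the input files determines which value each .asc file receives.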
Update specific value of one file by values in another text file
1,619,895,044,000
We have several clustered servers and need to check 1 server in each cluster. How do I compare entries in a list to return only one server in each cluster? All server names follow [a-z]-[a-z]-[a-z][0-9].domain_name Example server list. test-rac-1.domain_name test-rac-2.domain_name test-rac-3.domain_name test-rac-dg1.domain_name test-rac-dg2.domain_name test-rac-dg3.domain_name qat-rac-1.domain_name qat-rac-2.domain_name qat-rac-3.domain_name ser-ser-ser.domain_name long-serv-name.domain_name Result server list. test-rac-1.domain_name test-rac-dg1.domain_name qat-rac-1.domain_name ser-ser-ser.domain_name long-serv-name.domain_name -- result can be any of the servers in the cluster.
You could filter all lines with grep using a regex. grep '[a-z-]1\?\.domain_name' file > newfile This returns all lines with letters or minus and an optional 1 before ".domain_name". The inverse operation would be to remove all lines containing numbers > 1 before ".domain_name": grep -v '\([2-9]\|[0-9]1\)\.domain_name' file > newfile This matches lines with 2 to 9 as last digit before ".domain_name" or numbers with at least two digits where the last digit is 1 (to match 11 or 21 etc). The -v option is used to select non-matching lines.
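With the sample list saved to a file, the first command behaves like this (note that \? is the BRE spelling of the optional quantifier in GNU grep; with grep -E you would write 1? instead):

```shell
cd "$(mktemp -d)"
cat > servers.txt <<'EOF'
test-rac-1.domain_name
test-rac-2.domain_name
test-rac-3.domain_name
qat-rac-1.domain_name
ser-ser-ser.domain_name
long-serv-name.domain_name
EOF
# keep lines whose last character before ".domain_name" is a letter,
# a dash, or a 1 preceded by a letter/dash
grep '[a-z-]1\?\.domain_name' servers.txt
# test-rac-1.domain_name
# qat-rac-1.domain_name
# ser-ser-ser.domain_name
# long-serv-name.domain_name
```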
uniq cluster name
1,619,895,044,000
I currently installed Arch Linux and I want to install the wxPython module for Spyder3. I had problems installing it with pip. Therefore, I downloaded the wxpython tarball. Here are the steps which I followed: cd Downloads sudo tar -xvzf wxPython-4.0.6.tar.gz cd wxpython-4.0.6 After that, I wanted to build the setup.py file, but I got this error: python setup.py build running build WARNING: Building this way assumes that all generated files have been generated already. If that is not the case then use build.py directly to generate the source and perform the build stage. You can use --skip-build with the bdist_* or install commands to avoid this message and the wxWidgets and Phoenix build steps in the future. "/usr/bin/python" -u build.py build Traceback (most recent call last): File "build.py", line 30, in <module> import pathlib2 ModuleNotFoundError: No module named 'pathlib2' Command '"/usr/bin/python" -u build.py build' failed with exit code 1. I also tried python setup.py install, but I got here the same error. Does anyone know how to fix it? All help is welcome.
Ensure pip, setuptools, and wheel are up to date. While pip alone is sufficient to install from pre-built binary archives, up-to-date copies of the setuptools and wheel projects are useful to ensure you can also install from source archives:
python -m pip install --upgrade pip setuptools wheel
The error "ModuleNotFoundError: No module named 'pathlib2'" means the required module is missing, so install it and try again:
sudo pip install pathlib2
or
sudo pip3 install pathlib2
Failed to build setup.py on Arch Linux
1,619,895,044,000
If I want to use the name of my primary screen (HDMI-0, found out via xrandr) within another command, the device is never found. Instead I have to use the name HEAD-0. From what I've already read I assume this is probably a nVidia-thing, but I don't understand how it works, why it's done and most importantly: How can I find out, which of my screens has which HEAD-name?
Not sure if it's the correct answer, but you can query connected displays via nvidia-settings --query dpys. If I understand it correctly, HEAD-x is mapped to a display of the output of nvidia-settings in the order they appear. For example: HEAD-0 is the first connected display, HEAD-3 the fourth, etc.
How to find out, which screen is HEAD-0, HEAD-1, etc.?
1,619,895,044,000
I can use goobook to create a google contact with NAME and EMAIL but I need to add a PHONE.
I have patched goobook to accept a phone number when adding new contacts. Clone my forked repository: git clone https://gitlab.com/ardrabczyk/goobook && cd goobook Now you can follow the instructions in README.rst. In this case, as you're now installing goobook from source just do this: sudo python3 ./setup.py install Personally, I don't like installing packages globally and using sudo if there's no such need so consider doing this instead: python3 setup.py install --user You won't need to enter root's password and goobook will be installed to ~/.local/bin. Check the new help for add command: $ ~/.local/bin/goobook add -h usage: goobook add [-h] [NAME] [EMAIL] [PHONE] Create new contact, if name and email is not given the sender of a mail read from stdin will be used. positional arguments: NAME Name to use. EMAIL E-mail to use. PHONE Phone number to use. optional arguments: -h, --help show this help message and exit Add a new test entry with a phone number: ~/.local/bin/goobook add fork-goobook [email protected] 789456123 Make sure it was created correctly: $ ~/.local/bin/goobook query 789456123 [email protected] [email protected] Keep in mind that the change I've introduced has not been formally accepted by goobook developers and that you're now using a fork.
How to create a google contact from command line?
1,619,895,044,000
I have multiple (22) files that are named like this: chr1.out, chr2.out...,chr22.out each of those files has 46 columns and multiple lines The first 6 columns and 6 rows in one of those files look like this: alternate_ids rsid chromosome position alleleA alleleB index rs4814683 rs4814683 NA 9795 G T 1 rs6076506 rs6076506 NA 11231 T G 2 rs6139074 rs6139074 NA 11244 A C 3 rs1418258 rs1418258 NA 11799 C T 4 rs7274499 rs7274499 NA 12150 C A 5 rs6116610 rs6116610 NA 12934 G A 6 Let's say this is in file chr1.out. What I would like to do is replace all NAs in the chromosome column with 1, so it would look like this: alternate_ids rsid chromosome position alleleA alleleB index rs4814683 rs4814683 1 9795 G T 1 rs6076506 rs6076506 1 11231 T G 2 rs6139074 rs6139074 1 11244 A C 3 rs1418258 rs1418258 1 11799 C T 4 rs7274499 rs7274499 1 12150 C A 5 rs6116610 rs6116610 1 12934 G A 6 I would like to do the same for each of those 22 files, so chr2.out gets 2 in the 3rd column, chr3.out gets 3 in the 3rd column, etc.
Using a bash script: #!/bin/bash tmp_d=$(mktemp -q -d -t 'replace.XXXXX' || mktemp -q -d) for f in chr*.out; do tmp_f="${tmp_d}/$f" n="${f#chr}" n="${n%.out}" awk -v n="$n" '$3 == "NA" { $3=n }1' "$f" > "$tmp_f" mv "$tmp_f" "$f" done rm -r "$tmp_d" First we make a tmp directory, as we will be creating tmp files. Then we loop through each chr*.out file: Create a variable for this file in the tmp directory remove the chr prefix remove the .out suffix awk will then replace any NA in the third column with the number extracted from the filename and save that to the tmp file replace the original file with the tmp file After the loop finishes we remove the tmp directory. All the tmp handling can be avoided if you have GNU awk (gawk), which supports the -i inplace option.
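To see the filename-to-number extraction and the awk substitution in isolation, here is a sketch with a hypothetical one-row sample:

```shell
cd "$(mktemp -d)"
printf 'rs1 rs1 NA 9795 G T 1\n' > chr7.out
f=chr7.out
n="${f#chr}"    # strip the leading "chr"  -> "7.out"
n="${n%.out}"   # strip the trailing ".out" -> "7"
awk -v n="$n" '$3 == "NA" { $3=n }1' "$f"
# rs1 rs1 7 9795 G T 1
```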
How to replace all values (all NAs) in a column with numeric part of the file name?
1,555,798,757,000
I'm taking a Linux course and I have no idea how to get past this annoying error: chcon: cannot access path: No such file or directory But before that, we had to define our virtual host. Perhaps that's where the error resides, but I'm not sure, because I've checked and retyped everything and still get the error after replacing everything with my FQDN. Then he asked us to "create the actual directories we just defined for our Virtual Hosts". cd /var/www/html sudo mkdir default sudo chcon -R -t httpd_sys_content_t beta.lt.unt.edu/ beta-vh.lt.unt.edu/ sudo systemctl restart httpd.service This is where my brain starts hurting, because instead I get this error: chcon: cannot access ‛elm.lt.unt.edu/elm-vh.lt.unt.edu/’: No such file or directory Why is it saying that? I asked the instructor but it was no help.
Two things: As steve (vaguely) said, you appear to be trying to change the context of directories that don’t exist yet.  You have showed us mkdir default; you need to create elm.lt.unt.edu and elm-vh.lt.unt.edu also. It looks like you actually said sudo chcon -R -t httpd_sys_content_t elm.lt.unt.edu/elm-vh.lt.unt.edu/ when you should have said sudo chcon -R -t httpd_sys_content_t elm.lt.unt.edu/ elm-vh.lt.unt.edu/ (with a space between the two directory names).
chcon: cannot access (file): No such file or directory
1,555,798,757,000
The following script takes a user input (path to a mounted macOS volume such as /Volumes/Macintosh\ HD/) #!/bin/bash # Author: Swasti Bhushan Deb # macOS 10.13.3 # kMDItemWhereFroms.sh read -e -p "Enter the full path to the Mounted Volume (e.g /Volume /Macintosh HD): " path var=$(mdfind -name 'kMDItemWhereFroms="*"' -onlyin "$path") echo "$var" Output: /Users/swastibhushandeb/Documents/encase_examiner_v710_release_notes.pdf /Users/swastibhushandeb/Desktop/AirPrint Forensics.pdf As a next step, I would like the script to run mdls (which prints the values of all the metadata attributes associated with a file) on each line of output from kMDItemWhereFroms.sh. This can also be performed manually by: mdls /Users/swastibhushandeb/Documents/encase_examiner_v710_release_notes.pdf However, if such processing is to be automated, what bash coding strategies/options are available? How can the output be directed to a CSV file so that each column contains fields from the mdls command output?
You can make use of mdfind's -0 option together with xargs to have the names found terminated by a NUL character (and therefore not having to worry about space/tab/newlines etc). read -e -p 'Path? ' path mdfind -0 -name 'kMDItemWhereFroms="*"' -onlyin "$path" | xargs -0 mdls If you want to see the path/file names as well (and not only the output of mdls) it becomes a bit more elaborate: mdfind -0 -name 'kMDItemWhereFroms="*"' -onlyin "$path" | \ xargs -0 -n 1 sh -c 'echo "$1" && mdls "$1"' _ (The _ at the end is just syntactical sugar for sh which will assign the first argument, typically the name of the command, to $0)
Parsing "mdls" output
1,555,798,757,000
To find a python script, I use pgrep -af python. Is there a similar command to find node.js scripts?
You could just do: pgrep -a node This could potentially detect false positives if you have another process with node in its name. Also note this wouldn't work if the node script is using a node hashbang and was run without the node command, although I think that would also be the case for python scripts.
Show all running node.js scripts using bash
1,555,798,757,000
I want to create a CLI in bash that allows a user to fetch a list of database instances on our platform. I want them to be able to type in something like: $ dbinv instances show --environment=all Equally, if they want to look at the users of a specific instance called db1, they might type: $ dbinv instances users show --environment=production --instance=db1 How would one go about developing this from a CLI perspective? What should I be researching or exploring? Should I be looking at something like Ruby or Python, rather than bash?
You may be able to use all 3 languages. I would recommend looking at how svn, hg, git, and docker do it. They will have a wrapper command, e.g. dbinv. It will look at its 1st argument and then call a helper script; e.g. dbinv show arg1 will call «directory-containing-dbinv-commands»/dbinv-show arg1. This wrapper can easily be done in bash. The sub-commands can be done in other languages (use a #! at the top of a script to specify the language). To write the wrapper, look up "$1", "$@", shift and exec (the last is less important).
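A minimal sketch of that dispatch pattern in bash (all names here are hypothetical, and the scripts are written to a scratch directory just for the demo):

```shell
dir=$(mktemp -d)
cat > "$dir/dbinv" <<'EOF'
#!/bin/sh
# wrapper: "dbinv <subcommand> args..." runs "dbinv-<subcommand> args..."
cmddir=$(dirname "$0")
sub=$1
shift
exec "$cmddir/dbinv-$sub" "$@"
EOF
cat > "$dir/dbinv-show" <<'EOF'
#!/bin/sh
echo "show called with: $*"
EOF
chmod +x "$dir/dbinv" "$dir/dbinv-show"
"$dir/dbinv" show --environment=all
# show called with: --environment=all
```

The dbinv-show helper could just as well be Python or Ruby; the wrapper only cares that it is executable and follows the dbinv-<name> naming convention.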
Sub-commands in bash [closed]
1,555,798,757,000
I am using this command, find -name (file name) -ls | awk '{print $11,"\t",$5,"\t",$7,"\t",$8,$10}' to gather information on tons of files. However, some files show weird numbers where the date should be if they were modified in 2018 (ls-style listings print a time instead of a year for recently modified files). I was wondering if you have any suggestion to convert those numbers to a standard format, i.e., May 2016, May 2017, May 2018. I have no problem with the output for files that were modified before 2017. Is there any way to get output with the current year in that format, like May 2018?
Since you are using Linux, you can make use of the -printf argument to the find command: find -name 'pattern' -printf '%p\t\t%Tb %TY\n' Sample output: $ find -name 'file*' -printf '%p\t\t%Tb %TY\n' ./file1 Sep 2018 ./file6 Sep 2018 ./file4 Sep 2018 ./file2 Sep 2018 ./file3 Sep 2018 ./file5 Sep 2018
convert the dates to a standard format
1,555,798,757,000
I am supposed to use ls to find files that end in a certain letter, whether or not the file has an extension. For example, I want it to do this > ls test test.txt test.ascii other.txt > ls [something] test test.txt test.ascii so that I can find files that end with a 't', without including files whose extension ends in t. Edit: I am supposed to assume that the only period characters in the filename will be for the extension and there will be no others
The [something] is a pipe to grep: ls | grep -e ".*t\.[^.]*" -e "^[^.]*t$" The first pattern matches names that have a t immediately before the extension's dot; the second matches extensionless (dot-free) names that end in t.
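With the sample files from the question, the pipeline looks like this (run in a scratch directory):

```shell
cd "$(mktemp -d)"
touch test test.txt test.ascii other.txt
# keep names ending in t before the extension, or dot-free names ending in t
ls | grep -e ".*t\.[^.]*" -e "^[^.]*t$"
# test
# test.ascii
# test.txt
```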
Using ls to find files that end in a character, ignoring extension
1,555,798,757,000
I want to remove the .html file from the /home/user1/html/ directory. I have tried nearly all of the solutions posted on a myriad of other web sites. Nothing is working. user1@comp1:~/html$ sudo rm -f .html rm: cannot remove '.html': Permission denied Properties of directory: user1@comp1:~$ ls -al total 0 drwxrwxrwx 1 user1 user1 4096 Aug 21 14:48 html Properties of file: user1@comp1:~/html$ ls -al total 3912 -rwxrwxrwx 0 user1 user1 1365246 Aug 20 17:20 .html Things I have tried on directory (all run successfully): sudo chown $USER:$USER ./html sudo chmod 777 ./html sudo chmod -R 777 ./html Things I have tried on the file (all run successfully): sudo chown $USER:$USER .html sudo chmod 777 .html sudo chmod 777 . I tried looking at the file's attributes (did not run successfully): user1@comp1:~/html$ lsattr .html lsattr: Inappropriate ioctl for device While reading flags on .html strace with sudo: user1@comp1:~/html$ strace sudo rm -f .html execve("/usr/bin/sudo", ["sudo", "rm", "-f", ".html"], [/* 17 vars */]) = -1 EPERM (Operation not permitted) write(2, "strace: exec: Operation not perm"..., 38strace: exec: Operation not permitted ) = 38 exit_group(1) = ? 
+++ exited with 1 +++ strace without sudo: user1@comp1:~/html$ strace rm -f .html execve("/bin/rm", ["rm", "-f", ".html"], [/* 17 vars */]) = 0 brk(NULL) = 0x805000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=39157, ...}) = 0 mmap(NULL, 39157, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fcfcb47e000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\t\2\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1868984, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcfcb470000 mmap(NULL, 3971488, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fcfcae30000 mprotect(0x7fcfcaff0000, 2097152, PROT_NONE) = 0 mmap(0x7fcfcb1f0000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1c0000) = 0x7fcfcb1f0000 mmap(0x7fcfcb1f6000, 14752, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fcfcb1f6000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcfcb460000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcfcb450000 arch_prctl(ARCH_SET_FS, 0x7fcfcb460700) = 0 mprotect(0x7fcfcb1f0000, 16384, PROT_READ) = 0 mprotect(0x60d000, 4096, PROT_READ) = 0 mprotect(0x7fcfcb425000, 4096, PROT_READ) = 0 munmap(0x7fcfcb47e000, 39157) = 0 brk(NULL) = 0x805000 brk(0x826000) = 0x826000 open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=1668976, ...}) = 0 mmap(NULL, 1668976, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fcfcb28d000 close(3) = 0 ioctl(0, TCGETS, {B38400 opost isig icanon echo ...}) = 0 newfstatat(AT_FDCWD, ".html", 
{st_mode=S_IFREG|0777, st_size=1365246, ...}, AT_SYMLINK_NOFOLLOW) = 0 unlinkat(AT_FDCWD, ".html", 0) = -1 EACCES (Permission denied) open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=2995, ...}) = 0 read(3, "# Locale name alias data base.\n#"..., 4096) = 2995 read(3, "", 4096) = 0 close(3) = 0 open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) write(2, "rm: ", 4rm: ) = 4 write(2, "cannot remove '.html'", 21cannot remove '.html') = 21 open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No 
such file or directory) open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale-langpack/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory) write(2, ": Permission denied", 19: Permission denied) = 19 write(2, "\n", 1 ) = 1 lseek(0, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek) close(0) = 0 close(1) = 0 close(2) = 0 exit_group(1) = ? +++ exited with 1 +++
You must run fsck on the partition where the file is. To do that you must boot in single-user mode and do something like fsck.ext4 /dev/yourpartdevice (change ext4 to match the partition's filesystem type, and replace yourpartdevice with the partition that has the problem). But... "lsattr: Inappropriate ioctl for device While reading flags on .html" looks like a hardware problem, and fsck may not be capable of correcting the file. If this solves your problem, please mark this as the correct answer. UPDATE for other users reading this answer: RAM can do a lot of crazy things, so checking your RAM before running fsck is a good idea, because faulty RAM can make fsck behave very destructively. Good luck!
Yet another `rm: cannot remove 'file': Permission denied`
1,555,798,757,000
I noticed this interesting set of commands today: $ seq 5 > alfa.txt $ awk '{print 6 > ARGV[1]} 1' alfa.txt 1 2 3 4 5 $ cat alfa.txt 6 6 6 6 6 My first question was why I was getting several 6s rather than just one, but then I remembered you need to close each time: awk '{print 6 > ARGV[1]; close(ARGV[1])} 1' alfa.txt However, what also puzzles me is: if I am clobbering the input from the very beginning, how am I still able to go through and read the entire file? My guess is that awk is actually writing to a buffer, then writing to the actual file at the end, or perhaps every time the buffer fills. If the latter is true, what is the buffer size?
At least on my system, it appears to be 32768 from a file, and 65536 from a pipe: $ yes | head -100000 | tee file > pipe $ awk '{print "n" > ARGV[1]}' file $ sed s/y/n/ pipe | awk 'BEGIN {while (getline < "-") print > ARGV[1]}' pipe $ wc -l file pipe 32768 file 65536 pipe
Awk buffer size
1,555,798,757,000
Okay, so this might be a very silly question; I don't write shell scripts too often. I'm trying to start 3 processes in the background, one after another, within a shell script, for example: #!/bin/sh PROCESS1 & PROCESS2 & PROCESS3 & Here is the problem. I need to start these processes in the same order as shown. Also, the PID of PROCESS2 needs to be passed as a command line argument to PROCESS3. All of these processes run in an infinite loop and they work smoothly when run in 3 separate terminals. I tried: #!/bin/sh PROCESS1 & PROCESS2 & PID_PROCESS2=$! PROCESS3 ${PID_PROCESS2} & This starts PROCESS1 and PROCESS3, but PROCESS2 exits immediately without printing any error. It just vanishes. The ps command shows no trace of PROCESS2. Printing PID_PROCESS2 returns some value 'p' and PROCESS3 runs just fine with the value 'p' as its argument. What's the problem and what am I missing? PROBABLY IMPORTANT DETAILS In the above example, I'm using qualified paths to invoke the respective processes; all of them are native binaries and are in the same directory. For example, #!/bin/sh /usr/bin/PROCESS1 & The output of ps is as described above, $ps | grep "/path/to/PROCESS" 10064 root 16536 S /path/to/PROCESS1 10066 root 11084 S /path/to/PROCESS3 10065 which clearly shows that PROCESS2 started but exited for some unknown reason. PROCESS2 communicates with PROCESS1 via a FIFO (named pipe) and it's a one-way communication. WORKAROUND #!/bin/sh /path/to/PROCESS1 & /path/to/PROCESS2 & PROCESS2_PID=$! export P2PID=${PROCESS2_PID} sh -c "/path/to/PROCESS3 ${P2PID}" This seems to do the job, with one extra process for sh. $ps | grep "/path/to/PROCESS" 10174 root 16536 R /path/to/PROCESS1 10175 root 71720 S /path/to/PROCESS2 10177 root 27772 S sh -c /path/to/PROCESS3 10175 10076 root 11084 S /path/to/PROCESS3 100175 But I still don't have any idea why this works. Can someone suggest what sort of "magic" happened in this case?
Based on what you're describing, it sounds like there's something fundamentally wrong with PROCESS2 that's causing it to exit. If I model what you're describing with 3 processes, it mostly works as one would expect when you background the 3 processes and then capture and pass the 2nd process's PID to process 3.

Example script:

    $ cat runny.bash
    #!/bin/bash

    proc3func() {
        echo $1
        sleep 7 &
    }

    sleep 9 &
    sleep 8 &
    PID2=$!
    proc3func ${PID2} &

Example run:

    $ ./runny.bash ; sleep 2; ps -eaf
    4279
    UID        PID  PPID  C STIME TTY      TIME     CMD
    ...
    vagrant   4278     1  0 20:21 pts/1    00:00:00 sleep 9
    vagrant   4279     1  0 20:21 pts/1    00:00:00 sleep 8
    vagrant   4282     1  0 20:21 pts/1    00:00:00 sleep 7

In the above output we can see the PID, 4279, being echoed to the screen, followed by the output of ps -eaf which shows our 3 processes.

Debugging

I'd suggest enabling set -x so that you can follow what commands are executing as you run your script, or running it like this:

    $ bash -x ./runny.bash
    + PID2=4612
    + sleep 9
    + sleep 8
    + proc3func 4612
    + echo 4612
    4612
    + sleep 7
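A minimal, runnable sketch of the $!-capture pattern the answer relies on (the sleep commands stand in for the real processes):

```shell
#!/bin/sh
# $! holds the PID of the most recently backgrounded job
sleep 9 &
sleep 8 &
PID2=$!            # PID of the second backgrounded process
kill -0 "$PID2"    # succeeds while that process is still alive
echo "second process has PID $PID2"
wait
```

`kill -0` sends no signal at all; it only checks that the PID exists, which is a cheap way to confirm the capture worked.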
How to pass PID of one process to another process within the same shell script?
1,555,798,757,000
I have a large tab-delimited file like below:

    ENSBTAP00000053998       GO:0005576  GO:0006952
    ENSBTAP00000014280       GO:0005515
    XP_010996658.1           GO:0005515  GO:0032947
    ENSCAFP00000005761-D107  GO:0006826  GO:0006879  GO:0008199
    XP_010987712.1-D2        GO:0004579  GO:0008250  GO:0016021
    ENSBTAP00000018349-D5    GO:0003677  GO:0003700  GO:0005634  GO:0006355  GO:0043565

How could I convert the above table as shown below?

    ENSBTAP00000053998       GO:0005576
    ENSBTAP00000053998       GO:0006952
    ENSBTAP00000014280       GO:0005515
    XP_010996658.1           GO:0005515
    XP_010996658.1           GO:0032947
    ENSCAFP00000005761-D107  GO:0006826
    ENSCAFP00000005761-D107  GO:0006879
    ENSCAFP00000005761-D107  GO:0008199
    XP_010987712.1-D2        GO:0004579
    XP_010987712.1-D2        GO:0008250
    XP_010987712.1-D2        GO:0016021
    ENSBTAP00000018349-D5    GO:0003677
    ENSBTAP00000018349-D5    GO:0003700
    ENSBTAP00000018349-D5    GO:0005634
    ENSBTAP00000018349-D5    GO:0006355
    ENSBTAP00000018349-D5    GO:0043565
With Awk: awk '{for(i=2;i<=NF;i++) print $1,$i}' OFS='\t' file or Perl perl -alne '$x = shift @F; print "$x\t$_" for @F' file
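To sanity-check the Awk one-liner, here is a self-contained run on a made-up two-line sample (the /tmp path and the IDs are illustrative only):

```shell
# build a tiny tab-separated sample file
printf 'A\tGO:1\tGO:2\nB\tGO:3\n' > /tmp/sample.tsv

# pair the first field with each subsequent field, one pair per line
awk '{for(i=2;i<=NF;i++) print $1,$i}' OFS='\t' /tmp/sample.tsv
# A	GO:1
# A	GO:2
# B	GO:3
```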
Split multiple column into two columns based on first column
1,555,798,757,000
is there a better way, preferably without an extra software-stack, to lock specific ssh users into a program without access to a working shell? Imagine a cli program which should be the only interface a user has access to via ssh. My hacky solution: In /etc/passwd replacing the user-shell with following script: #!/bin/bash /bin/bash -c /usr/bin/cli exit 1
Issue at Hand You desire to lock remote users into using a specific shell when they connect to your server. As you have probably found out, chsh or other solutions are geared towards local users. Solution As per this solution by user muru, I would edit your sshd_config to use the ForceCommand option. You could use a ForceCommand along with Match: Match Address 10.1.0.0/16 ForceCommand /usr/bin/[some shell] From man sshd_config: Match Introduces a conditional block. ... The arguments to Match are one or more criteria-pattern pairs or the single token All which matches all criteria. The available criteria are User, Group, Host, LocalAddress, LocalPort, and Address. ForceCommand Forces the execution of the command specified by ForceCommand, ignoring any command supplied by the client and ~/.ssh/rc if present. The command is invoked by using the user's login shell with the -c option. So, the command you specify would be executed using the user's login shell, which must accept the -c option. The connection is closed when the command exits, so for all practical purposes, that command is their shell. Using ForceCommand in your configuration file you can force the use of a shell that supports the -c option. I would also reference this serverfault post to get more information on how to complete this task. Conclusion Use your sshd_config options to force the use of a shell that can support -c as that will close the shell and session once complete. Please comment if you have any questions or issues with this answer. I appreciate feedback to correct any misconceptions and to improve my posts. I can update my answer as needed. Best of Luck!
Replace login shell with program (mini-jail)
1,555,798,757,000
I want to find all .mb files in multiple big folders, but I only want to return one file from each folder if there are many files matching my search criterion. folder structure .. --abc |_scenes | |__ file1.mb | |__ file2.mb |... --def |_scenes | |__ file3.mb | |__ file4.mb |... if I do find /net/*/scenes -maxdepth 1 -type f -size +200M It returns all file1.mb file2.mb file3.mb file4.mb How can I return only file1.mb and file3.mb?
find + awk solution: find /net/*/scenes -maxdepth 1 -type f -name "*.mb" \ | awk -F'/' '{ fn = $NF; $NF = "" }!a[$0]++{ print $0 fn }' OFS='/' -F'/' and OFS='/' - stand for input and output field separator fn = $NF - storing the last field value (i.e. a filename) into variable fn !a[$0]++{ print $0 fn } - on the 1st occurrence of unique directory path (presented by $0) print the line(the whole filepath) Or using GNU coreutils pipeline: find /net/*/scenes -maxdepth 1 -type f -name "*.mb" -printf "%H %p\n" \ | sort -k1,1 -u | cut -d' ' -f2
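A quick way to try the pipeline on a throwaway copy of the folder layout from the question (temporary directory only):

```shell
d=$(mktemp -d)
mkdir -p "$d/abc/scenes" "$d/def/scenes"
touch "$d/abc/scenes/file1.mb" "$d/abc/scenes/file2.mb" "$d/def/scenes/file3.mb"

# prints exactly one .mb path per scenes directory (the awk array a[]
# deduplicates on the directory part of each path)
find "$d"/*/scenes -maxdepth 1 -type f -name "*.mb" |
    awk -F'/' '{ fn = $NF; $NF = "" }!a[$0]++{ print $0 fn }' OFS='/'
```

Note that which of the two abc files is printed depends on the order find emits them; pipe through sort first (as in the second variant above) if you need a deterministic pick.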
How do I return the first matched result in each folder with find command in tcsh?
1,555,798,757,000
I tried chfn to change the umask value of a user as follows sudo chfn -o umask=022 username But I have this error chfn: Office: '=' is not allowed I also tried a failed attempt to escape the = sign as follows sudo chfn -o umask\=022 username sudo chfn -o "umask=022 username" How can I use or escape the equal sign with this command? Thx
chfn changes the information in the fourth field of /etc/passwd (or equivalent). Most of the data there is only used for display purposes, and it's even called the "user name or comment field" in the passwd(5) man page. Debian's man page for chfn(1) however mentions that part of it is used for "accounting information". Apparently pam_umask.so also reads it, which is what I suppose you want. The man page also mentions the prohibition on the equal sign: These fields must not contain any colons. Except for the other field, they should not contain any comma or equal sign. -o, --other OTHER Change the user's other GECOS information. This field is used to store accounting information used by other applications, and can be changed only by a superuser. It seems that the chfn on your CentOS follows a different syntax, and doesn't provide a way to change the "other" part. Testing on Debian, the result of chfn -o 'umask=022' username is: username:x:1000:1000:Full name,,,,umask=022:/home/username:/bin/bash So, a workaround for the lack of functionality in chfn would be edit the file manually (with vipw), and add the umask=022 after the fourth comma in the comment field.
How do I use equal sign on chfn command?
1,555,798,757,000
When you write on the terminal 'vim filename', i know vim receives filename as a parameter but i guess that a program receives vim as a parameter too... is it just the terminal emulator or another program?
At the point at which you type vim filename on the command line, the shell has already started, so the shell (and the terminal emulator) does not get vim as a command line argument. vim, on the other hand does, but this is not available to the user. When a program is started, its name is usually given as the zeroth command line argument. You can see this by starting a shell and echoing $0: $ sh $ echo $0 sh $ exit The shell executes the commands on the command line using execve() (or a similar exec() function), whose POSIX specification says The value in argv[0] should point to a filename string that is associated with the process being started by one of the exec functions. argv[0] in the text above corresponds to $0 in a shell script. The Rationale section goes on to say: The requirement on a Strictly Conforming POSIX Application also states that the value passed as the first argument be a filename string associated with the process being started. Although some existing applications pass a pathname rather than a filename string in some circumstances, a filename string is more generally useful, since the common usage of argv[0] is in printing diagnostics. In some cases the filename passed is not the actual filename of the file; for example, many implementations of the login utility use a convention of prefixing a <hyphen-minus> (-) to the actual filename, which indicates to the command interpreter being invoked that it is a "login shell".
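You can watch argv[0] being chosen by the caller using bash itself: with -c, the first word after the command string becomes $0 inside it (the name below is made up):

```shell
# bash -c lets the caller set the zeroth argument explicitly
bash -c 'echo "argv[0] is: $0"' mycustomname
# argv[0] is: mycustomname
```

This is the same mechanism login uses to pass "-bash" as argv[0] to mark a login shell.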
Which program does receive command line commands?
1,555,798,757,000
For changing a specific image's size I use the following command: mogrify -geometry x50 my_image.png Every time that I take a new print screen, an image is saved on my ~/Pictures folder. I'd like to make a script that watches my Pictures folder and takes an action copying a reduced sized version of my new image to a different folder on my computer, e.g. ~/.icons/... I know I could solve this problem using cron, but I don't really want to take actions at regular intervals of time. I want a command ( or a script) that can find out what's different on a folder based on logs or something like this. Is that possible? How can I do it?
I've created this functional script that solves my problem using inotify-tools, so I'm leaving it here in case it's useful for someone else.

    #!/bin/bash

    watchedFolder=~/Pictures
    iconsFolder=~/.icons
    imageGeometry=100

    # -p makes this a no-op if the folder already exists
    mkdir -p "$iconsFolder"

    while true
    do
        fileName=$(inotifywait -q -e create --format "%f" "$watchedFolder")
        sleep 1s
        cp "$watchedFolder/$fileName" "$iconsFolder"
        mogrify -geometry "x$imageGeometry" "$iconsFolder/$fileName"
    done

Save it as e.g. ~/automatedIcons.bash and make it executable with chmod +x ~/automatedIcons.bash. Now if you run it, it's already working: it will copy every new picture created inside the Pictures folder to a new location and change its size. To make it run on boot use crontab -e and write one line with the script's location on it, e.g. @reboot /home/myUserName/automatedIcons.bash.

This is just a functional script, so if anyone has any suggestion about improving the way it works, feel free to write in the comments.
How can I execute scripts based on changes that are happening on a specific folder?
1,555,798,757,000
Long story short, for my first thread here, I have a software RAID5 array set up as follow: 4 disk devices with a linux-RAID partition on each. Those disks are: /dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/md0 is the raid5 device with a ciphered LVM on it. I use cryptsetup to open the device, then vgscan and lvcan -a to map my volumes. Yesterday, I found out that /dev/sdd1 was failing. Here are the steps I followed: 0. remove the failing disk # mdadm --remove /dev/md0 /dev/sdd1 1. perform a check on the faulty drive mdadm --examine /dev/sdd1 I got the "could not read metadata" error. 2. tried to read the partition table I used parted and discovered that my Linux-RAID partition was gone, and when I tried to re-create it (hoping to be able to re-add the drive) I got the "your device is not writable" So, it's been clear: that hard drive is dead. 3. Extract the hard drive from my case (bad things follow) So I tried to extract /dev/sdd1 from my case not knowing which of the 4 drives it was. So I unplugged one SATA cable to find out that I had just unplugged /dev/sde1 ; I replugged it and unplugged the following one, nice catch! it was /dev/sdd1 4. what have I done?! sad face using : # mdadm --detail /dev/md0 I realized that /dev/sde1 left the array marked as "removed". I tried to re-add it, not using --re-add, but : mdadm --add /dev/md0 /dev/sde1 /proc/mdstat showed me the rebuilding process and mdadm --detail /dev/md0 displayed /dev/sde1 as "spare" ; I know I might have done something terrible here. I tried to remove /dev/sde1 from the array and use --re-add but mdadm told me he couldn't do it and advise me to stop and reassemble the array 5. Where to go from here? First thing first, I am waiting for a new hard drive to replace the faulty one. 
Once I have it and set it up as a new Linux-RAID partition device known as /dev/sdd1, I will have to stop the array (the LVM volumes are no longer mounted, obviously, and cryptsetup closed the ciphered device, yet mdadm has not been able to stop the array). I was thinking about rebooting the entire system and working from a clean start. Here is what I figured I should do:

    # mdadm --stop /dev/md0
    # mdadm --examine /dev/sd*1
    # mdadm --assemble --scan --run --verbose

I read that without the --run option, mdadm will refuse to scan the degraded array.

Best case scenario: /dev/sde1 is recognized by the re-assembling process and the new /dev/sdd1 is used to repair the previously faulty one. I would not have lost any data and will be happy.

Worst, and most common, case scenario: re-assembling the array fails to recover /dev/sde1 and I have to start from a blank new array.

Am I missing something here? What should I review in this procedure?

Best Regards from France
So, I managed to get a full recovery, thanks to this link. What I did is as follows:

I replaced the faulty disk and restarted the server. Then, I formatted the new disk as a Linux-RAID partition type and examined the members:

    # mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sde1

Then, based on the link above, I (re)created the array, based on the info given by the --examine command:

    # mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=512 --name=server:0 /dev/sda1 /dev/sdb1 missing /dev/sde1 --assume-clean

As stated on that link, the --assume-clean did the trick! It avoided the "spare" state of /dev/sde1 and used it as an active part of the new array. The key thing when re-creating the array from "existing" devices is not to mess up the chunk parameter, or you will lose the data.

I then added the new device to this new array:

    # mdadm --add /dev/md0 /dev/sde1

The server started rebuilding (it took 6 hrs for 10 TB), and after that, I forced an integrity check on the whole array (which took 6 hrs as well).

I recovered everything and I am quite relieved!
MDADM - Disaster recovery or move on from the state I put my RAID5 array into
1,555,798,757,000
Suppose that I have only the following in ~/foo: . .. foo With file managers if I cut the subfolder foo and paste it into ~ it automatically replaces the contents of ~/foo with that of ~/foo/foo. But is there a native command-line tool to do so, although I can achieve the goal with a function, too?
I don’t know any way to do it in one step, but the easiest way around the problem is to remove the problem.  The fact that the two directories have the same name is a problem; so, rename one of them: mv foo foo2 && mv foo2/foo foo && rmdir foo2
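A self-contained dry run of the rename trick, on a throwaway directory, shows the inner folder ending up in its parent's place:

```shell
cd "$(mktemp -d)"
mkdir -p foo/foo
touch foo/foo/file.txt

# sidestep the name clash: rename outer, hoist inner, drop the shell
mv foo foo2 && mv foo2/foo foo && rmdir foo2

ls foo
# file.txt
```

rmdir (rather than rm -r) at the end is a nice safety net: it fails if foo2 unexpectedly still contains anything besides the moved subfolder.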
How do I replace a folder with its only subfolder of the same name in CLI?
1,555,798,757,000
How can I tell if a printer is shared on the command line instead of CUPS web GUI shell (e.g., http://localhost:631/printers/HP_LaserJet_Professional_P1108).
You can find this information in the /etc/cups/printers.conf file. Use: view /etc/cups/printers.conf or less /etc/cups/printers.conf
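To pull out just the sharing state non-interactively, you can grep for the Shared directive. This assumes the typical printers.conf layout sketched in the comment below; the actual printer name on your system will differ:

```shell
# each printer block in printers.conf looks roughly like:
#   <Printer HP_LaserJet_Professional_P1108>
#   ...
#   Shared Yes
#   </Printer>
grep -E '^<(Default)?Printer |^Shared ' /etc/cups/printers.conf
```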
How to know whether a printer is shared on CLI shell?
1,555,798,757,000
I use rsync -rP --rsh=ssh user@[ip]:~/datasets/ ./ to download a directory containing lots of large files from the server. I want to resume the progress when it is interrupted (by Ctrl+C or network error). But I found that when I restart rsync with the same parameters, it downloads from the very beginning, even when there is an existing file in the local directory. Why? How to use rsync properly?
The rsync command will not resume properly because the modification times of the remote and local files differ. You may request that the modification times on the local files are set to the same timestamp as the remote files with --times (or -t), but most often one uses --archive (or -a) which implies both -r and -t as well as a number of other options that are useful when creating an exact copy of a set of files (-rlptgoD): rsync --archive -P user@server:datasets/ ./ Note that --rsh=ssh is the default and that unless the remote server is set up in a peculiar way, you will not need to use ~ to get your home directory.
rsync command does not resume
1,555,798,757,000
Suppose I have a file called index.html and I want to compress it and display the compression size of it. Well I would do this... bzip2 index.html -v Now that gives me all of the data bits/bytes, percent compression ratio, and the in and out compression. Suppose I want the in number (in my case it is a 20). Well this what I tried and it worked in other contexts with gzip I don't have a problem. So normally I would use awk like so (but it doesn't work). bzip2 index.html -v | awk '{print $4}' I also tried bzip2 index.html -v | cut -f4 The above attempts only produce whatever -v was giving me anyway and doesn't extract only the information that I want. Here is an example output from my compressed index.html file bzip2 index.html -v index.html: 0.346:1, 22.00 bits/bytes, -175.00% saved, 20 in, 55 out I'm trying to get the "20 in", more specifically just the number 20.
bzip2 prints that information to stderr. This prevents error messages from intermingling with decompressed data when one decompresses to stdout as with bzip2 -dc or bzcat. You need to send stderr to awk.

My bzip2 produces this format:

    $ bzip2 index.html -v
      index.html: 1.444:1, 5.542 bits/byte, 30.73% saved, 179 in, 124 out.

To redirect stderr to stdout and use awk to select the compression number:

    $ bzip2 index.html -v 2>&1 | awk '{print $5}'
    30.73%

In shell, 0 is standard in, 1 is standard out, and 2 is standard error. 2>&1 tells the shell to take standard error (2) and send it to standard out (1).
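The stdout/stderr split is easy to demonstrate without bzip2 at all:

```shell
# a pipe carries only stdout (fd 1); stderr (fd 2) bypasses it
# until you merge the two with 2>&1
noisy() { echo "to stdout"; echo "to stderr" >&2; }

noisy 2>/dev/null          # prints only: to stdout
noisy 2>&1 | grep stderr   # prints only: to stderr
```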
How do you extract certain information from bzip2 -v?
1,555,798,757,000
I have a command, # ssh -t computer.A 'command' to execute a simple command on computer.A remotely over SSH. To add to this, I want to execute a command on computer.B through computer.A.  In my head it would look like this: # ssh -t computer.A 'ssh -t computer.B 'shutdown -p now' shutdown -p now' This command would first shutdown computer.B and then computer.A, but it only shuts down computer.B, and ignores the command to shutdown computer.A. What is going wrong?
You would need to escape your quotes and separate the two commands from each other:

    ssh -t computer.A 'ssh -t computer.B '\''shutdown -p now'\''; shutdown -p now'

(A single quote cannot appear inside a single-quoted string, so the idiom is to close the string, insert an escaped quote, and reopen it: '\''.) If you wanted the second shutdown on computer.A only to run if the first was successful, replace the ; with &&.

You could also alternate the quotes like so:

    ssh -t computer.A 'ssh -t computer.B "shutdown -p now"; shutdown -p now'
Performing multiple operations in one SSH command
1,555,798,757,000
According to the man page: -b, --bytes=SIZE put SIZE bytes per output file -C, --line-bytes=SIZE put at most SIZE bytes of lines per output file So if -b already splits a file by bytes per file, what is the purpose of -C? How is it any different?
-C attempts to put complete lines of output into the target file, up to a maximum size of SIZE, whereas -b just counts bytes without regards to line endings. -C may put less output into the output file in order to stop at the closest line ending that doesn't put it over size.
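The difference is easy to see with a small test file of ten 10-byte lines (throwaway temporary directory only):

```shell
d=$(mktemp -d) && cd "$d"
for i in 0 1 2 3 4 5 6 7 8 9; do echo "line-$i-xx"; done > in   # 100 bytes total

split -b 25 in b.    # b.aa is exactly 25 bytes, cut mid-line
split -C 25 in c.    # c.aa is 20 bytes: two whole lines fit, a third would not

wc -c b.aa c.aa
```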
What is the difference between split -C and split -b?
1,555,798,757,000
Is there a way that I could reference the same path from one command to the next? For example, I may want to list the contents of a specific folder: $ ls ~/Documents/some/dir Then, once I've done that, I may want to perform some action in that same directory: $ mv ~/Documents/some/dir/file.txt ~/Documents/other/dir Is there a way to, essentially, invoke that path without typing it again (or using some sort of auto-suggestion or auto-fill that I can do with .zsh)? I vaguely remember reading about something along these lines but I don't remember what the technique is?
That's the insert-last-word widget which in emacs mode is bound to Alt+_ and Alt+. by default. So $ ls ~/Documents/some/dir $ ls Alt+_/file.txt Alt+_ Or you can use the $_ special variable, or csh-style history expansion (!$) if you prefer.
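The $_ variable works non-interactively too (in bash as well as zsh), which makes it easy to demonstrate; the path below is just an example:

```shell
# $_ expands to the last argument of the previous command
mkdir -p /tmp/demo-dir
ls "$_"          # same as: ls /tmp/demo-dir
```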
How can I reference the same path from one command to the next? [duplicate]
1,555,798,757,000
I have a command that takes a pid and operates on it. Works great. lsof -p 1112| wc -l But when I use the approach to pipe in the pid, that I normally use, it fails because this is a java app: lsof -p (ps -e | grep logstash) | wc -l it fails to work, because java apps do not show up by their name in ps -e, rather they show up as java. (Which doesn't help, because there are multiple java apps) You can see logstash 7 up from the bottom of this output from ps aux 498 1795 16.9 50.7 551391388 12422888 ? Sl Dec14 1425:36 /usr/bin/java root 1896 0.0 0.0 80900 3344 ? Ss Dec14 0:01 /usr/libexec/po postfix 1901 0.0 0.0 81152 3360 ? S Dec14 0:00 qmgr -l -t fifo root 1926 0.0 0.0 183032 1792 ? Ss Dec14 0:00 /usr/sbin/abrtd root 1938 0.0 0.0 116880 1260 ? Ss Dec14 0:00 crond root 1957 0.0 0.0 21108 492 ? Ss Dec14 0:00 /usr/sbin/atd root 1992 0.0 0.0 4064 512 tty1 Ss+ Dec14 0:00 /sbin/mingetty root 1994 0.0 0.0 4064 516 tty2 Ss+ Dec14 0:00 /sbin/mingetty root 1996 0.0 0.0 4064 512 tty3 Ss+ Dec14 0:00 /sbin/mingetty root 1998 0.0 0.0 4064 516 tty4 Ss+ Dec14 0:00 /sbin/mingetty root 2000 0.0 0.0 4064 516 tty5 Ss+ Dec14 0:00 /sbin/mingetty root 2002 0.0 0.0 4064 512 tty6 Ss+ Dec14 0:00 /sbin/mingetty logstash 37916 10.7 2.2 4767300 553372 ? SNsl Dec19 167:39 /usr/bin/java - root 37972 0.0 0.0 0 0 ? S Dec19 1:12 [flush-253:2] postfix 47810 0.0 0.0 80980 3384 ? S 13:30 0:00 pickup -l -t fi root 48006 0.0 0.0 0 0 ? S 14:00 0:00 [flush-253:3] root 48064 0.1 0.0 104616 4592 ? Ss 14:04 0:00 sshd: root@pts/ root 48066 0.0 0.0 108352 1828 pts/0 Ss 14:04 0:00 -bash root 48083 0.0 0.0 110240 1136 pts/0 R+ 14:05 0:00 ps aux What is the way to grep out the pid for logstash ?
We established two solutions:

    var1=`pgrep -f logstash`; ls -al /proc/$var1/fd | wc -l

or, more hackish,

    ls -al /proc/`pgrep -f logstash`/fd | wc -l

Note the use of backticks.
how to pipe PID of java app into a command?
1,555,798,757,000
I have several folders ,where each folder contain two files fastq.gz. Usually they are named as sample_R1.fastq.gz and sample_R2.fastq.gz. where sample_ can represent the folder name ,or something else. But in my case the folders are : 1008_a 2085_a 2130_a 2192_a 2221_a 2242_a 2269_a 2482_a And each of these folder consists of these files as : 1008_a Files : C85CBANXX_s6_1_O07_0452_SL137634.fastq.gz C85CBANXX_s6_2_O07_0452_SL137634.fastq.gz 2085_a : C7V65ANXX_s6_1_M19_0413_SL131164.fastq.gz C7V65ANXX_s6_2_M19_0413_SL131164.fastq.gz How can I rename these files to just like 1008_a_R1.fastq.gz & 1008_a_R2.fastq.gz for folder 1008_a 2085_a_R1.fastq.gz ,2085_a_R2.fastq.gz for folder 2085_a And so on ,since all other folders have different kinds of patterns inside them. Thanks, Ron
find + bash solution:

Sample folder structure (for ex. 1008_a and 2085_a):

    $ tree 1008_a/ 2085_a/
    1008_a/
    ├── C85CBANXX_s6_1_O07_0452_SL137634.fastq.gz
    └── C85CBANXX_s6_2_O07_0452_SL137634.fastq.gz
    2085_a/
    ├── C7V65ANXX_s6_1_M19_0413_SL131164.fastq.gz
    └── C7V65ANXX_s6_2_M19_0413_SL131164.fastq.gz

The job:

    find . -type f -regextype posix-egrep \
        -regex ".*/[0-9]{4}_a/[[:alnum:]_]+_[12]_[[:alnum:]_]+\.fastq\.gz$" -exec bash -c \
        'path=${0%/*}/; bn=${0##*/}; dir_n=${0%/*}; dir_n=${dir_n##*/};
         new_fn=$(sed -E "s/.+_([12])_.+(\.fastq\.gz)$/${dir_n}_R\1\2/" <<<"$bn");
         mv "$0" "$path$new_fn"' {} \;

Results:

    $ tree 1008_a/ 2085_a/
    1008_a/
    ├── 1008_a_R1.fastq.gz
    └── 1008_a_R2.fastq.gz
    2085_a/
    ├── 2085_a_R1.fastq.gz
    └── 2085_a_R2.fastq.gz
Renaming files inside folders
1,555,798,757,000
I want to target all files called fooxxxbarxxx. The common thing among all those files is that it contains foo and bar. I've tried to use *foo*bar* and *foo**bar* but it doesn't work. Specifically, I'm trying to create soft links to those files, and the rest of the code already works for more straightforward executions (looks into all subfolders of path): shopt -s globstar ln -s /path/**/*foo*bar* . Thanks
In bash shell you need to use extglob option for this OR type shell expansions. shopt -s extglob nullglob and then do the globbing as ln -s /path/**/@(*foo*bar*|*bar*foo*)
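A self-contained check of the pattern against throwaway files. Note that extglob must be enabled before the line containing the pattern is parsed, so it cannot sit on the same line joined with a semicolon:

```shell
shopt -s extglob nullglob
cd "$(mktemp -d)"
touch afoobarz abarfooz plain

# matches files containing foo and bar in either order; "plain" is skipped
printf '%s\n' @(*foo*bar*|*bar*foo*)
# abarfooz
# afoobarz
```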
How can I stack wildcards to target specific files?
1,555,798,757,000
I frequently need to do the following things in MySQL: Create a non-root user Set that user's password Grant that user all privileges Make a database with the same name as the new user Allow usage only for users from the localhost. Previously I did this using PHPmyadmin, but I would prefer doing it directly from BASH. Is there a CLI way to execute these steps?
This works for me: echo 'create database testdb; create user "testdb"@"%" identified by "mypassword"; grant all privileges on testdb.* to testdb;' | mysql -u root -p The percentage sign indicates that connections to this database may be made from other systems. Replace the % with localhost if you only need an account that needs access to and from the same system.
Creating an autorized, all privileged user and an identically named DB via CLI (Bash) in one row
1,555,798,757,000
I could do: !systemctl:p to get systemctl reload bind result printed (as last command in the history starting with systemctl string). but doing the same with the partial search on the command history: !?reload:p results in zsh: no such event: reload:p the former looks the most recent event in the history that starts with systemctl string and prints it on the screen, thanks to :p modifier, instead of executing. i thought :p is true for !? as well on any shell. and bash also results in bash: !?reload:p: event not found. how can i achieve the printing and not executing of the found command line on partial command history search in common unix shells?
Per the manual (emphasize mine): !?str[?] Refer to the most recent command containing str. The trailing '?' is necessary if this reference is to be followed by a modifier or followed by any text that is not to be considered part of str. so in your case it's !?reload?:p that is, you need a trailing ? after the search string.
printing and not executing the result of zsh history expansion on partial search
1,555,798,757,000
I need to securely erase a CD without marks of intentional data loss (can't scratch or break it), so I was wondering: how could I use growisofs or dd to burn all non burned spaces in the disk to render it unreadable?
growisofs works only with DVD or BD media. dd cannot write to unformatted CD (only CD-RW but not CD-R could be formatted). A CD-R medium might be still writable (aka "appendable") on its unused area. But it cannot be blanked in any way. You may only overwrite the unused area by some harmless bytes. (Question is why you want to do that.) For that you need a CD capable burn program, like cdrecord, wodim, cdrskin, or xorriso. First check whether the CD-R is still writable: prog=cdrskin $prog -v dev=/dev/sr0 -msinfo You may use "cdrecord", "wodim", "xorrecord", or "xorriso -as cdrecord" instead of "cdrskin". If the CD-R is still writable, you will get two comma separated numbers on standard output. If it is not writable any more (aka "closed") then you will get no output on stdout but rather some error message like: cdrskin: FATAL : -msinfo can only operate on appendable (i.e. -multi) discs cdrecord: Cannot get next writable address for 'invisible' track. wodim: Cannot get next writable address for 'invisible' track. xorriso : FAILURE : Output medium is not appendable. Cannot obtain -msinfo. If you get the numbers, you may burn to the medium with random bytes until the burner throws an error because it is full: prog=cdrskin dd if=/dev/urandom bs=1M | $prog -v dev=/dev/sr0 -eject - or if all zeros is good enough for your purpose: dd if=/dev/zero bs=1M | $prog -v dev=/dev/sr0 -eject - But as said, why would you want to do that ? Have a nice day :) Thomas
Can growisofs or dd be forced to erase a CD-R?
1,499,006,042,000
I'm trying to learn some more Linux, and from experience the best way is to try to bang your head against the wall. So now that I've done a task manually a few times I would like to automate it. This involves making a oneliner to kill some tasks so I can restart them. At the moment I'm working with the following: for i in `ps aux | egrep "[c]ouchpotato|[s]abnzbd|[s]ickbeard|[d]eluge|[n]zbhydra"|awk '{print $2, $11, $12}'`; do echo $i; done The thing is that as soon as I run the for loop it breaks up the lines I get from awk. Running ps aux | egrep "[c]ouchpotato|[s]abnzbd|[s]ickbeard|[d]eluge|[n]zbhydra"|awk '{print $2, $11, $12}' gives me the result I'm looking for, namely: 27491 /usr/local/couchpotatoserver-custom/env/bin/python /usr/local/couchpotatoserver-custom/var/CouchPotatoServer/CouchPotato.py 27504 /usr/local/deluge/env/bin/python /usr/local/deluge/env/bin/deluged 27525 /usr/local/deluge/env/bin/python /usr/local/deluge/env/bin/deluge-web 27637 /usr/local/nzbhydra/env/bin/python /usr/local/nzbhydra/share/nzbhydra/nzbhydra.py 27671 /usr/local/sabnzbd/env/bin/python /usr/local/sabnzbd/share/SABnzbd /SABnzbd.py 28084 /usr/local/sickbeard-custom/env/bin/python /usr/local/sickbeard-custom/var/SickBeard/SickBeard.py But adding it to my for loop breaks it into: 27491 /usr/local/couchpotatoserver-custom/env/bin/python /usr/local/couchpotatoserver-custom/var/CouchPotatoServer/CouchPotato.py 27504 /usr/local/deluge/env/bin/python /usr/local/deluge/env/bin/deluged etc... My goal is for $i to contain the whole line - is this possible? Also, is it possible to use get only the command from $11 and $12? I don't need to have the whole path to python and I don't even need to have the whole path to the application. Thanks!
Notice that the for loop output is broken apart at the word boundaries, viz., whitespace/newlines. Whereas what you said you wanted is the whole line to come contained in $i. So you need to do these 2 things:

1. set the input field separator to a newline.
2. disable the wildcard expansion.

    set -f; IFS=$'\n'; for i in `.....`; do echo "$i"; done

Note: DO NOT quote the backquotes, or you shall end up giving the for loop one big blob of argument, which would be the whole ps output, and that does you no good.

HTH
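An alternative technique (different from the IFS/set -f tweak above) sidesteps word splitting entirely by reading the command's output one line at a time:

```shell
# while read consumes one full line per iteration, so no IFS or glob
# tweaking of the surrounding shell is needed; -r keeps backslashes literal
ps aux | grep -E "[c]ouchpotato|[s]abnzbd" | awk '{print $2, $11, $12}' |
while IFS= read -r line; do
    echo "$line"
done
```

One caveat of this form: the loop body runs in a subshell, so variables set inside it are not visible afterwards.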
Scripting with 'for' and grep/egrep
1,499,006,042,000
I am kind of stuck in a tricky situation. I have a file, whose contents look like this: 3 2017-05-30 2017-09-29 2 2017-05-27 2017-08-26 1 2017-05-27 2017-08-26 Now, user selects to modify a date by choosing labelNum which would be values in column 1. Upon prompt, if user enters 3. I want to print column 2. So I wrote. cat temp.txt | grep $labelNum | awk '{print $2}' If labelNum is 3, I get the output as 2017-05-30 But, if user enters labelNum as 2, then i get: 2017-05-30 2017-05-27 2017-05-27 Because it is looking for '2' everywhere in the .txt file. However, I want the column 2 for labelNum 2, which would be 2017-05-27 Is there a way to do this? I tried using awk to replace grep but no luck. Thanks. Edit: The rows are dynamic and can change as and when more entries are added to text file. So can't really use sed to skip to the line
With single awk: awk -v lbl=$labelNum '$1 == lbl{ print $2 }' temp.txt -v lbl=$labelNum - passing in labelNum variable value into awk script $1 == lbl - if the 1st column value equal to the variable value - executes the followed expression
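Reproducing the question's data shows the exact-match behaviour; quoting $labelNum is also a good habit in case it is ever empty:

```shell
cat > /tmp/temp.txt <<'EOF'
3 2017-05-30 2017-09-29
2 2017-05-27 2017-08-26
1 2017-05-27 2017-08-26
EOF

labelNum=2
# only the row whose first column equals 2 matches, unlike grep 2
awk -v lbl="$labelNum" '$1 == lbl{ print $2 }' /tmp/temp.txt
# 2017-05-27
```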
Grep for a specific element
1,499,006,042,000
Under Waayland I used busctl --user set-property org.gnome.Mutter.DisplayConfig /org/gnome/Mutter/DisplayConfig org.gnome.Mutter.DisplayConfig PowerSaveMode off to turn off / on the display however after having to go back to X11 due to Wayland being unusable this command works same as dpms force off. With X11 I can run sleep 1; xset dpms force off but this only puts the monitor into standby and will wake as soon as any input is detected such as mouse moves. This is unwanted behaviors and I prefer the ability to wake the display with a specific shortcut. This way I can be sure the display won't turn on on it's own or accidentally. So, how do I force the display to turn off in such a way as to prevent user input from waking it again under X11?
I think you possibly misunderstand what DPMS "off" means. Look at the table in Wikipedia: what DPMS actually does is signal the power saving state by turning the horizontal sync and vertical sync signals (or the HDMI equivalent) off, and disabling the DAC in the graphics card, while the rest of the graphics card keeps running. So you are not turning everything completely off, you are entering the "deepest" power saving mode possible. OTOH, using xrandr --off really completely shuts off the output, and disables everything in the graphics card that is used to produce the output, as if the monitor was not connected to anything at all. And of course, if it is your only monitor, this doesn't work, as then there is no more graphics display to draw anything on. This is really for enabling and disabling additional second or third monitors. So you don't want it "completely off", you want the deepest DPMS power saving state, which happens to be called "off". Your busctl command tells Wayland to use PowerSaveMode, i.e. DPMS. And Wayland doesn't seem to re-enable DPMS when it detects mouse or keyboard inputs, so it stays off. In the same way, xset dpms tells the X server to use DPMS. This is completely the same thing. The difference is that the X server re-enables DPMS when it detects inputs. As to "why", that's how the developers decided it should work. In X, xset dpms works even when there is no extra screensaver, which is why the way to turn the screen on again was incorporated in the X server. For Wayland, the designers seem to have decided that you always need an extra screensaver program (whose job it is to communicate the wanted PowerSaveMode to Wayland), so it leaves it to the screensaver to monitor inputs and turn the screen on again. That you are able to fake being a screensaver program using busctl is more or less an accident. It's not a bug, it's a different design.
As I said, try grabbing the mouse and keyboard inputs with evtest --grab /dev/input/eventX (use plain evtest to see which device is which; careful, the numbers don't necessarily stay the same across boots, so look at the udev symlinks) or the equivalent ioctl if you are writing your own screensaver program. If you want to monitor the inputs for a specific key combination, you need to do that anyway.
Turn off X11 / Xorg display (not standby)
1,499,006,042,000
I have a large tab file with 15 columns (FILE1) and a list (FILE2) of names which should appear in the table. The problem is the name may appear in columns 4 to 10 in FILE1 and it may not be a case match. I want a command which searches each line for a hit and then prints the whole line. Preferably this would not be case sensitive and would not print lines where the names in FILE2 are part of a larger word. I have tried the following:

grep -Fwf FILE2 FILE1 > out
xargs -I {} grep "^{}" FILE1 < FILE2 > out

The first just copies FILE1 into out. The second gives a blank out file. I've also tried a few awk commands which either give an empty out file or, as above, copy FILE1. I'm trying to improve my Linux skills at the moment, so if you could explain your method I would be very grateful.

File1

tax_id GeneID Symbol LocusTag Synonyms dbXrefs chromosome map_location description type_of_gene Symbol_from_nomenclature_authority Full_name_from_nomenclature_authority Nomenclature_status Other_designations Modification_date
7 5692769 NEWENTRY - - - - - Record to support submission of GeneRIFs for a gene not in Gene (Azotirhizobium caulinodans. Use when strain, subtype, isolate, etc. is unspecified, or when different from all specified ones in Gene.). other - - - - 20160818
9 1246500 repA1 pLeuDn_01 - - - - putative replication-associated protein protein-coding - - - - 20160813
9 1246501 repA2 pLeuDn_03 - - - - putative replication-associated protein protein-coding - - - - 20160716
9 1246502 leuA pLeuDn_04 - - - - 2-isopropylmalate synthase protein-coding - - - - 20160903
9 1246503 leuB pLeuDn_05 - - - - 3-isopropylmalate dehydrogenase protein-coding - - - - 20150520
9 1246504 leuC pLeuDn_06 - - - - isopropylmalate isomerase large subunit protein-coding - - - - 20160806
9 1246505 leuD pLeuDn_07 - - - - isopropylmalate isomerase small subunit protein-coding - - - - 20160730
9 1246509 ibp pBPS1_01 - - - - Ibp protein protein-coding - - - - 20150801
9 1246510 repA1 pBPS1_02 - - - - repA1 protein protein-coding - - - - 20160813

File2

sacX
arcB
metB
sprT
adrB_2
fadD
trpC
ansP2
group_1428
plsX
repA
The answer is in the comments above; see the replies by @Philippos and @George Vasiliou. Briefly, the answer is:

grep -Fwf FILE2 FILE1 > out

I was having an issue whereby when I executed the command it would copy FILE1. This was occurring because of trailing blank lines in FILE2. When I removed these, the command worked perfectly. As some of the text in my files may not match case-wise, I included -i in the above command. Thanks to all who helped.
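The blank-line gotcha is easy to reproduce: an empty line in the pattern file is an empty pattern, and an empty pattern matches every line of the input. A small demo (the file contents here are made-up stand-ins, and -w is dropped for brevity):

```shell
printf 'alpha one\nbeta two\n' > FILE1
printf 'alpha\n\n' > FILE2            # note the trailing blank line

grep -cFf FILE2 FILE1                 # counts 2: the empty pattern matches every line
grep -v '^$' FILE2 > FILE2.clean      # strip blank lines from the pattern file
grep -cFf FILE2.clean FILE1           # counts 1: only the real match remains
```

Stripping empty lines from the pattern file before running grep is a cheap safeguard worth scripting in.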
extract lines from large tab delimited file using a list
1,499,006,042,000
I was testing imapsync 1.727 to sync IMAP from an older version of Zimbra (7.1.4) to version 8.7.7 and got the error above with the command below:

imapsync \
--maxsize 52428800 --buffersize 8192000 \
--nofoldersizes --nosyncacls --subscribe --syncinternaldates \
--authmech2 PLAIN \
--exclude '(?i)\b(Junk|Spam|Trash)\b' \
--skipheader 'X-*' \
--regexflag 's/\\\\(Answered|Flagged|Deleted|Seen|Recent|Draft)[^\s]*\s*//ig' --debugflags \
--regextrans2 's,:,-,g' \
--regextrans2 's,\",'\'',g' \
--regextrans2 's,\s+(?=/|$),,g' \
--regextrans2 's,^(Briefcase|Calendar|Contacts|Emailed Contacts|Notebook|Tasks)(?=/|$), $1 Folder,ig' \
--host1 "$host1" --host2 "$host2" \
--user1 "$username" --authuser1 admin_account_name \
--password1 admin_account_password \
--user2 "$username" --authuser2 admin_account_name \
--password2 admin_account_password \
--regextrans2 's,\",-,g' \ # change quotes to dashes
--regextrans2 's,&AAo-|&AA0ACg-|&AA0ACgANAAo-(?=/|$),,g' \
--ssl1 --authmech1 PLAIN --maxcommandlength1 16384 \
--dry --debug --debugimap \

Why did it fail at line 18 but not on the --regextrans2 options on the other lines?
You can't have a line continuation that is followed by a comment on the same line. This is ok:

echo \
hello

This is not ok:

echo \ #newline here
hello

In the first example, the \ escapes the newline, and the command that is executed will be echo hello. In the second case, the \ just escapes the space after it, and we get #newline here as output, followed by the error message hello: not found [No such file or directory] (or similar). So, remove the comment (everything, including the space, after the last \) on the line that now reads

--regextrans2 's,\",-,g' \ # change quotes to dashes
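If you still want a comment next to each option, one workaround (a sketch, bash-specific, with made-up option values) is to collect the options in an array, since comments are allowed after each element inside an array assignment:

```shell
# Build the option list in a bash array; '#' starts a comment even mid-assignment.
opts=(
  --regextrans2 's,",-,g'    # change quotes to dashes: a comment is fine here
  --dry
)
# The array then expands to the clean option list, comments already stripped:
echo imapsync "${opts[@]}"
```

This keeps the per-option annotations without ever mixing a comment onto a backslash-continued line.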
--regextrans2: command not found
1,499,006,042,000
var $abc contains:

abc jkl
def mno
ghi pqr

var $def contains:

stu
vwx
yz

Expected output:

abc jkl stu
def mno vwx
ghi pqr yz

heemayl's solution: I tried paste <(echo "$abc") <(echo "$def") but it is giving output as below:

ASFSFGFGGRRFDFFFFFH 33566
AHSHDFFBORDASHFYEHFYUCH 33568
FASFSSFHJUYRT 33371
FASIFIDFGGGDDDDD 33364
AFDDDGGGGGDER 33371
FDGGGGHJJK 16225
AISJFKDJFKDDKFJKDJFF 33568
KDFJKDJFKDJFKDFJK 33567

How to align the second column correctly?

Solution: paste <(echo "$abc") <(echo "$def") | column -t
Using paste, with help from process substitution to get two file descriptors for paste to operate on:

paste <(echo "$abc") <(echo "$def")

Be careful with the quoting of the variables. Example:

$ echo "$abc"
abc jkl
def mno
ghi pqr
$ echo "$def"
stu
vwx
yz
$ paste <(echo "$abc") <(echo "$def")
abc jkl	stu
def mno	vwx
ghi pqr	yz
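Since paste joins with tabs, the columns can look ragged when the left-hand fields differ in width; piping through column -t (the fix the asker arrived at, assuming a column implementation such as util-linux is available) realigns them:

```shell
# Same data as in the question, held in shell variables with embedded newlines
abc='abc jkl
def mno
ghi pqr'
def='stu
vwx
yz'

# paste merges line by line; column -t reflows the result into aligned columns
paste <(echo "$abc") <(echo "$def") | column -t
```

Every output line then carries three whitespace-aligned fields instead of a raw tab-joined pair.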
Join multiple columns from multiple variables? [duplicate]
1,499,006,042,000
I didn't have mail installed before, so I installed it using:

apt-get install mailutils

After that I tried to send a mail with this command:

mail -s "Ssubjects" [email protected]

But I just see this error message:

cannot send message: Process exited with a non-zero status

and I can't understand what the problem is. Should I configure something? Thanks
Possible duplicate: mail: cannot send message: process exited with a non-zero status Try: sudo dpkg-reconfigure postfix as proposed in the answer.
'mail' use problem from command line
1,499,006,042,000
How do I show an alert in virtual console ttyX (not necessarily the active one) so that the user sees the alert on the next command invocation. I'm looking for something similar to the "you have mail" alert.
If you know which user is logged in on which virtual console, you can use write. E.g., assume user dirk is logged in on tty2; you can do

echo 'You have a message' | write dirk tty2

and the user will see the message (with two additional lines). The user on the virtual console needs to enable receiving messages with mesg y, unless you send the message as root IIRC. The user sees this message immediately, no matter if he invokes a command or not. The alternative would be to hook into the shell of the user (possibly via the shell prompt) by setting up .profile etc. to actively check for messages in some file.
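The .profile hook can be sketched as a small function; the alert-file path and the function name are assumptions, any per-user file works. Called from ~/.profile (or from PROMPT_COMMAND in bash), it shows pending alerts on the next command invocation, just like "you have mail":

```shell
# check_alerts: print and clear pending alerts from a per-user alert file.
check_alerts() {
    alertfile="$1"
    if [ -s "$alertfile" ]; then   # -s: file exists and is non-empty
        cat "$alertfile"           # show the pending alert(s)
        : > "$alertfile"           # truncate so each alert is shown only once
    fi
}

# Another process (or root) queues an alert by appending a line, e.g.:
#   echo 'You have a message' >> /path/to/alertfile
```

Unlike write, the message is not delivered immediately; the user sees it the next time the hook runs, which matches the behavior asked for.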
Alert in virtual console
1,499,006,042,000
I'm trying to compare a new file (e.g., new.txt) to an old file (e.g., old.txt) to see what was added in the new file. I'm trying to add the newly added information to a new file called newCourses.txt and the modified information to modifiedCourses.txt. If this is not possible with a diff, what are the alternatives without installing a package or software?

old.txt

2016 2BUSI 4850 K002 BUSINESS MW 02:10P-09:30P
2016 2BUSI 4840 K002 PRESPECH MW 07:10P-09:30P
2016 2BUSI 4820 K002 SCHLOFSC MW 07:10P-09:30P
2016 2BUSI 4870 K002 HISTORYZ MW 04:10P-09:30P

new.txt

2016 2BUSI 4850 K002 BUSINESS MW 07:10P-09:30P
2016 2BUSI 4840 K002 PRESPECH MW 07:10P-09:30P
2016 2BUSI 4820 K002 SCHLOFSC MF 07:10P-09:30P
2016 2BUSI 4870 K002 HISTORYZ MW 06:10P-09:30P
2017 4NONE 2938 K112 RECREATI TS 11:10P-11:55P

The output when I do diff old.txt new.txt:

1c1
< 2016 2BUSI 4850 K002 BUSINESS MW 02:10P-09:30P
---
> 2016 2BUSI 4850 K002 BUSINESS MW 07:10P-09:30P
3,4c3,5
< 2016 2BUSI 4820 K002 SCHLOFSC MW 07:10P-09:30P
< 2016 2BUSI 4870 K002 HISTORYZ MW 04:10P-09:30P
\ No newline at end of file
---
> 2016 2BUSI 4820 K002 SCHLOFSC TF 07:10P-09:30P
> 2016 2BUSI 4870 K002 HISTORYZ MW 06:10P-09:30P
> 2017 4NONE 2938 K112 RECREATI TS 11:10P-11:55P
\ No newline at end of file

How can I output it to two different files, such that newCourses.txt would contain

2017 4NONE 2938 K112 RECREATI TS 11:10P-11:55P

and modifiedCourses.txt would contain

2016 2BUSI 4850 K002 BUSINESS MW 07:10P-09:30P
2016 2BUSI 4820 K002 SCHLOFSC TF 07:10P-09:30P
2016 2BUSI 4870 K002 HISTORYZ MW 06:10P-09:30P
You could use awk: awk 'NR==FNR{ z[$5]=$0; next}{ if ($5 in z){ if ($0!=z[$5]){ print >"modifiedCourses.txt"}} else { print >"newCourses.txt"}}' old.txt new.txt This reads old.txt and saves the lines into an array (the indices are the names of the courses) and then reads new.txt and for each course it checks if it's an index of the array: if it is, it checks if the line has changed and if so it prints it to modifiedCourses.txt ; if not an index, it prints the line to newCourses.txt You can change $0 to $7 if the only change that matters is the hours.
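Recreating the two files from the question (using new.txt exactly as posted) and running the script shows the split in action:

```shell
# Build old.txt and new.txt with the question's data
cat > old.txt <<'EOF'
2016 2BUSI 4850 K002 BUSINESS MW 02:10P-09:30P
2016 2BUSI 4840 K002 PRESPECH MW 07:10P-09:30P
2016 2BUSI 4820 K002 SCHLOFSC MW 07:10P-09:30P
2016 2BUSI 4870 K002 HISTORYZ MW 04:10P-09:30P
EOF
cat > new.txt <<'EOF'
2016 2BUSI 4850 K002 BUSINESS MW 07:10P-09:30P
2016 2BUSI 4840 K002 PRESPECH MW 07:10P-09:30P
2016 2BUSI 4820 K002 SCHLOFSC MF 07:10P-09:30P
2016 2BUSI 4870 K002 HISTORYZ MW 06:10P-09:30P
2017 4NONE 2938 K112 RECREATI TS 11:10P-11:55P
EOF

# Index old.txt by course name ($5), then classify each new.txt line
awk 'NR==FNR{ z[$5]=$0; next }
     { if ($5 in z) { if ($0 != z[$5]) print > "modifiedCourses.txt" }
       else print > "newCourses.txt" }' old.txt new.txt
```

After this, modifiedCourses.txt holds the three changed course lines (the unchanged PRESPECH line is dropped) and newCourses.txt holds the single 2017 addition.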
Saving diffs to two files for modified and new additions