date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,321,145,927,000 |
Are there any utilities to filter a sample from a stream on the command-line, e.g.
print every 100th line of a file or
print a line of a file out with probability 0.01 or
via algorithms like reservoir sampling?
Update: So far I found:
print every 100th line of a file: sed -n '0~100p'
|
The simple solutions with (GNU) awk:
Every one in 100 (lines with number divisible by 100):
do_something | awk 'NR % 100 == 0'
or pseudorandomly:
do_something | awk 'rand() < 0.01'
The sample will only be approximately 1% of the lines, and awk's rand() produces the same sequence on every run unless seeded, so you may need to add BEGIN{ srand() } to initialize a new seed for each run.
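The reservoir sampling mentioned in the question can also be done in awk. Here is a sketch of Algorithm R that keeps a uniform random sample of k lines from a stream of unknown length (k=10 is arbitrary, and seq 1000 stands in for do_something):

```shell
seq 1000 | awk -v k=10 '
  BEGIN { srand() }
  NR <= k { pool[NR] = $0; next }   # fill the reservoir first
  {
    i = int(rand() * NR) + 1        # pick a slot in 1..NR
    if (i <= k) pool[i] = $0        # replace with probability k/NR
  }
  END { for (i = 1; i <= k; i++) print pool[i] }'
```

Every input line ends up in the sample with equal probability k/NR, and only k lines are ever held in memory.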
| Take a sample from a file or stream? |
1,321,145,927,000 |
I want to parse some output so I exclude all lines that contain either foo or bar, as well as all lines immediately preceding those lines. For example:
echo "
1 some line
2 line to exclude because the next line has one of the terms
3 line with foo
4 line to exclude because the next line has one of the terms too
5 line with bar
6 another line
">InputFile
I want the output:
1 some line
6 another line
I tried cat InputFile | grep -v "foo" | grep -v "bar", but it doesn't exclude lines 2 and 4, and grep's -B1 option for the preceding line doesn't work either.
|
Here's a more verbose version of essentially the same thing @1_CR posted above and @ash showed on pastebin, hopefully with more readable syntax:
awk '{
lastLine = currentLine;
currentLine = $0;
}
/foo|bar/ \
{
getline currentLine
next
}
NR > 1 \
{
print lastLine;
}
END \
{
if ( currentLine !~ /foo|bar/ )
print currentLine;
}
' InputFile
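A shorter alternative (assuming GNU tac is available): reverse the file so the line to be dropped comes after the matching line, skip both, then reverse back:

```shell
tac InputFile | awk '/foo|bar/ { getline; next } 1' | tac
```

In the reversed stream, each foo/bar line is immediately followed by the line that originally preceded it, so getline plus next discards both in one step.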
| grep filter all lines plus one before each hit |
1,321,145,927,000 |
I have sample:
"name": "The title of website",
"sync_transaction_version": "1",
"type": "url",
"url": "https://url_of_website"
I want to get the following output:
"The title of website" url_of_website
I need to remove the protocol prefix from the URL, so that only url_of_website is left (and no http in the front).
The problem is that I'm not very familiar with sed across multiple lines; my research led me to https://unix.stackexchange.com/a/337399/256195, but I still can't produce the result.
The valid JSON object I'm trying to parse is Google Chrome's Bookmarks file; sample:
{
"checksum": "9e44bb7b76d8c39c45420dd2158a4521",
"roots": {
"bookmark_bar": {
"children": [ {
"children": [ {
"date_added": "13161269379464568",
"id": "2046",
"name": "The title is here",
"sync_transaction_version": "1",
"type": "url",
"url": "https://the_url_is_here"
}, {
"date_added": "13161324436994183",
"id": "2047",
"meta_info": {
"last_visited_desktop": "13176472235950821"
},
"name": "The title here",
"sync_transaction_version": "1",
"type": "url",
"url": "https://url_here"
} ]
} ]
}
}
}
|
This works on the JSON document given in the question:
$ jq -r '.roots.bookmark_bar.children[]|.children[]|["\"\(.name)\"",.url]|@tsv' file.json
"The title is here" https://the_url_is_here
"The title here" https://url_here
This accesses the .children[] array of each .roots.bookmark_bar.children[] array entry and creates a string that is formatted according to what you showed in the question (with a tab character in-between the two pieces of data).
If the double quotes are not necessary, you could change the cumbersome ["\"\(.name)\"",.url] to just [.name,.url].
To trim the https:// off from the URLs, use
.url|ltrimstr("https://")
instead of just .url.
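Putting the pieces together (same document as above), the full command would look like this, printing the titles with the https:// prefix removed:

```shell
jq -r '.roots.bookmark_bar.children[]
       | .children[]
       | ["\"\(.name)\"", (.url | ltrimstr("https://"))]
       | @tsv' file.json
```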
| How to combine strings from JSON values, keeping only part of the string? |
1,321,145,927,000 |
I read the man page but I could not find how to use -f when using tmux list-panes. It seems to be a filter, I assume this works like grep in some way?
Does anyone know how -f works?
|
The -f option already existed on some other commands, like choose-tree, where a hint as to how it works is given. It takes a format string and, for each pane, if that evaluates to true (i.e. neither 0 nor empty), the pane is listed.
For example, if you have 2 panes, one of which is in tree-mode:
$ tmux list-panes -F '#{pane_id} >#{pane_mode}<'
%0 ><
%1 >tree-mode<
then you can show only the one in tree-mode by a matching filter, #{m:a,b} matches glob a with string b and is true if they are the same:
$ tmux list-panes -F '#{pane_id} >#{pane_mode}<' -f '#{m:tree-mode,#{pane_mode}}'
%1 >tree-mode<
To invert the choice to show only panes not in tree mode, use #{?e,a,b} which selects string a if string e is true, else b:
$ tmux list-panes -F '#{pane_id} >#{pane_mode}<' -f '#{?#{m:tree-mode,#{pane_mode}},0,1}'
%0 ><
| How does tmux list-panes -f (filter) work? |
1,321,145,927,000 |
I want to back up my /home directory with rsync. I have read rsync's man page and decided to use filter rules for this task.
What I would like to achieve: Exclude all files and directories in the Repos directory but keep all pull_all.sh files and output directories --- regardless where they are located within the Repos directory.
So far, I have ended up with following filter list, but this backs up only the pull_all.sh files but not the output directories:
# Files prefixed with "+ " are included. Files prefixed with "- " are excluded.
#
# The order of included and excluded files matters! For instance, if a folder
# is excluded first, no subdirectory can be included anymore. Therefore,
# mention included files first. Then, mention excluded files.
#
# See section "FILTER RULES" of rsync manual for more details.
# Included Files
# TODO: These rules do not work properly!
+ output/***
+ pull_all.sh
- Repos/**
# Excluded Files
- .android
- .cache
...
I use the filter list in my script run_rsync.sh:
#!/bin/bash
date="$(date +%Y-%m-%d)"
hostname="$(hostname)"
# debug_mode="" # to disable debug mode
debug_mode="--list-only"
# Note: With trailing "/" at source directory, source directory is not created at destination.
rsync ${debug_mode} --archive --delete --human-readable --filter="merge ${hostname}.rsync.filters" --log-file=logfiles/$date-$hostname-home.log --verbose /home backup/
Unfortunately, the existing StackExchange threads have not solved my problems:
https://stackoverflow.com/questions/8270519/rsync-exclude-a-directory-but-include-a-subdirectory
Using Rsync include and exclude options to include directory and subdirectory but exlude files in subdirectory
What's going wrong here?
[Update] Here is an example of what the home directory looks like, and which files to keep or ignore:
user@hostname:~$ tree /home/ | head
/home/
└── user
    ├── Desktop                -> keep this
    │   ├── file1              -> keep this
    │   └── file2              -> keep this
    ├── Documents              -> keep this
    └── Repos
        ├── pull_all.sh        -> keep this
        ├── subdir1
        │   └── output         -> keep this
        ├── subdir2
        │   └── another_subdir
        │       └── output     -> keep this
        ├── subdir3            -> do not keep (because it does not contain any "output")
        └── file3              -> do not keep
|
Slightly restating what I've interpreted as your requirements,
Include all pull_all.sh files regardless of where we find them
Include all output directories and their contents regardless of where we find them
Exclude the Repos directory, other than what we have already stated
Include everything else
This can be specified as follows
rsync --dry-run --prune-empty-dirs -av \
    --include 'pull_all.sh' \
    --include 'Repos/**/output/***' \
    --include '*/' \
    --exclude 'Repos/***' \
    /home backup/
Some notes
The --include '*/' is required so that rsync will consider heading down into the Repos directory tree (to look for pull_all.sh files), which would otherwise be excluded by the final --exclude statement.
The three different uses of * are different:
* matches anything except / characters
** matches anything including / characters
dir/*** is a shortcut equivalent to specifying dir/ and dir/**.
The --prune-empty-dirs flag stops rsync creating empty directories, which is particularly important as we need to process the Repos directory tree looking for pull_all.sh and output items.
Remove --dry-run when you are happy with the results.
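If you want to convince yourself before touching real data, the same rules can be exercised in a throwaway tree (the paths here are made up for the demo, with src/ as the transfer root instead of /home):

```shell
mkdir -p demo/src/Repos/subdir1/output demo/src/Documents
touch demo/src/Repos/pull_all.sh \
      demo/src/Repos/subdir1/junk.txt \
      demo/src/Repos/subdir1/output/result.log \
      demo/src/Documents/notes.txt
cd demo
rsync --prune-empty-dirs -a \
    --include 'pull_all.sh' \
    --include 'Repos/**/output/***' \
    --include '*/' \
    --exclude 'Repos/***' \
    src/ dst/
```

dst/ should then contain Documents/notes.txt, Repos/pull_all.sh and Repos/subdir1/output/result.log, but not Repos/subdir1/junk.txt.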
| rsync: Use filters to exclude top-level directory but include some of its subdirectories |
1,321,145,927,000 |
How to specify multiple filter conditions connected with AND in less?
I would like to filter a log having lines do NOT contain "nat" but contains an IP address 192.168.1.1, like:
&/!nat && 192.168.1.1
However it does not work, the result is empty...
Please advise.
|
I am fairly sure less does not allow displaying only lines that contain 192.168.1.1 but not nat:
You can use | (or), but ||, & and && do not exist.
You can only invert the match for the whole expression (with ! at the beginning); ! is not special anywhere else in the pattern.
An alternative is to filter with sed before less:
sed -n -e '/nat/ d' -e '/192\.168\.1\.1/ p' FILE | less
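Or with two greps feeding less, which keeps the logic readable (-F treats the IP as a fixed string, so the dots are literal):

```shell
grep -F '192.168.1.1' FILE | grep -v nat | less
```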
| less: multiple filter conditions with AND |
1,321,145,927,000 |
I'm trying to filter unwanted messages from a cron job (systemd) from rsyslog output. However rsyslog always complains about the second argument of re_match(). The filter rule I have is:
if $programname == "systemd" and re_match($msg, '^Started [Ss]ession \d+ of user ntpmon\.$') then stop
I started putting the regex in double-quotes, and rsyslog complained. Then I put the regex in single quotes, and rsyslog still complains.
The documentation is a bit vague:
re_match(expr, re)
returns 1, if expr matches re, 0 otherwise. Uses POSIX ERE.
How do I fix it (the filter, not the docs)?
|
You need to double the backslash, otherwise rsyslog tries to interpret \d as an escape sequence within a string, and this is not parseable. So it should be \\d.
But \d is not part of POSIX ERE syntax anyway. You presumably meant [0-9], for example, for a digit. So try
'^Started [Ss]ession [0-9]+ of user ntpmon\\.$'
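Combining both fixes, the filter rule from the question would become (an untested sketch):

```
if $programname == "systemd" and re_match($msg, '^Started [Ss]ession [0-9]+ of user ntpmon\\.$') then stop
```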
| What is the correct syntax for rsyslog's re_match()? |
1,321,145,927,000 |
I want to take a list of numbers representing line numbers of a source file I want to filter and filter those lines from the source file. How I can build a unix pipeline to extract these lines from the source file?
The pipeline might look like:
cat sourcefile.tsv | some-filter linenumbers.txt > extractedrecords.tsv
I can't think of a combination of unix tools to do this off the top of my head. The fallback is to write a bash script that does sed -n [number]p sourcefile.tsv for every number in linenumbers.txt.
If my fallback plan is reasonably efficient compared to alternatives, please let me know that too.
|
Assuming linenumbers.txt has one number per line
awk 'NR == FNR{a[$0]; next};FNR in a' linenumbers.txt sourcefile.tsv > extractedrecords.tsv
Might do the job.
Or, with bash
join -t':' -o2.1,2.2 <(sort linenumbers.txt) <(awk '{print NR":"$0}' \
sourcefile.tsv | sort -k1,1 -t':') | sort -k1,1n -t':' | cut -f2- -d':'
All the extra jumping through hoops is needed because join does not support numerically sorted input files.
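Your sed fallback can also be collapsed into a single pass by turning linenumbers.txt into a sed script (one p command per number), so the source file is read only once:

```shell
sed -n "$(sed 's/$/p/' linenumbers.txt)" sourcefile.tsv > extractedrecords.tsv
```

The inner sed converts each number N into the command Np; the newline-separated commands form a valid sed script for the outer invocation.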
| How do I efficiently select specific line numbers from a list of records? |
1,321,145,927,000 |
I'm analyzing some packetfilter logs and wanted to make a nice table of some output, which normally works fine when I use column -t. I can't use a tab as my output field separator (OFS) in this case because it jacks up the multi-word string fields with the table view.
My original data consists of rows like this:
2018:01:24-09:31:21 asl ulogd[24090]: id="2103" severity="info" sys="SecureNet" sub="ips" name="SYN flood detected" action="SYN flood" fwrule="50018" initf="eth0" srcmac="12:34:56:78:90:ab" dstmac="cd:ef:01:23:45:67" srcip="192.168.1.123" dstip="151.101.65.69" proto="6" length="52" tos="0x00" prec="0x00" ttl="128" srcport="59761" dstport="80" tcpflags="SYN"
I'm getting the data into a comma-delimited (CSV) format using:
grep -EHr "192\.168\.1\.123" |
cut -d':' -f2- |
awk -F '"' 'BEGIN{
OFS=",";
print "name","action","srcip","srcport","dstip","dstport","protocol","tcpflags"
}
{
print $10,$12,$22,$36,$24,$38,$26,$(NF-1)
}'
This works fine and produces this kind of output (IP addresses all changed, I don't really have an internal host flooding this site):
name,action,srcip,srcport,dstip,dstport,protocol,tcpflags
SYN flood detected,SYN flood,192.168.1.123,59761,151.101.65.69,80,6,SYN
SYN flood detected,SYN flood,192.168.1.123,59764,151.101.65.69,80,6,SYN
SYN flood detected,SYN flood,192.168.1.123,59769,151.101.65.69,80,6,SYN
SYN flood detected,SYN flood,192.168.1.123,59771,151.101.65.69,80,6,SYN
SYN flood detected,SYN flood,192.168.1.123,59772,151.101.65.69,80,6,SYN
SYN flood detected,SYN flood,192.168.1.123,59890,151.101.65.69,80,6,SYN
SYN flood detected,SYN flood,192.168.1.123,60002,151.101.65.69,80,6,SYN
SYN flood detected,SYN flood,192.168.1.123,60005,151.101.65.69,80,6,SYN
SYN flood detected,SYN flood,192.168.1.123,60006,151.101.65.69,80,6,SYN
For some reason, whenever I use column to display the table output (-t), it adds a newline after the first column where no newline exists in the original data. For example:
$ cat mydata.csv | column -s ',' -t
name
action srcip srcport dstip dstport protocol tcpflags
SYN flood detected
SYN flood 192.168.1.123 59761 151.101.65.69 80 6 SYN
SYN flood detected
SYN flood 192.168.1.123 59764 151.101.65.69 80 6 SYN
SYN flood detected
SYN flood 192.168.1.123 59769 151.101.65.69 80 6 SYN
The expected output would be like follows:
name action srcip srcport dstip dstport protocol tcpflags
SYN flood detected SYN flood 192.168.1.123 59761 151.101.65.69 80 6 SYN
SYN flood detected SYN flood 192.168.1.123 59764 151.101.65.69 80 6 SYN
SYN flood detected SYN flood 192.168.1.123 59769 151.101.65.69 80 6 SYN
Adding -x to column makes no difference either, nor does specifying the number of columns with -c (I have plenty of screen width in the terminal). Why is it doing that when there is no newline in the original data?
I really don’t think it is a character in my data because it is also happening with the header column which I created in my awk BEGIN block.
|
I can reproduce your issue if I insert a row in the CSV file whose first comma-separated value is a very long string.
name
action srcip srcport dstip dstport protocol tcpflags
SYN flood detected
SYN flood 192.168.1.123 59761 151.101.65.69 80 6 SYN
SYN flood detected
SYN flood 192.168.1.123 59764 151.101.65.69 80 6 SYN
SYN flood detected
SYN flood 192.168.1.123 59769 151.101.65.69 80 6 SYN
SYN flood detected
SYN flood 192.168.1.123 59771 151.101.65.69 80 6 SYN
SYN flood detected
SYN flood 192.168.1.123 59772 151.101.65.69 80 6 SYN
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxx SYN flood 192.168.1.123 59890 151.101.65.69 80 6 SYN
SYN flood detected
SYN flood 192.168.1.123 60002 151.101.65.69 80 6 SYN
SYN flood detected
SYN flood 192.168.1.123 60005 151.101.65.69 80 6 SYN
SYN flood detected
SYN flood 192.168.1.123 60006 151.101.65.69 80 6 SYN
Note that there is no newline between the name and action columns in the actual output, but a line wrap (due to the line being so long) giving the illusion of a newline followed by indentation.
This means that you should look in your data for an entry with a very long name value.
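A quick way to locate such rows is to print any record whose first comma-separated field exceeds some length (the threshold of 30 here is arbitrary):

```shell
awk -F, 'length($1) > 30 { print NR ": " $1 }' mydata.csv
```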
| Why is column adding a newline in the middle of my row where one is not present in the original data? |
1,435,844,238,000 |
I'm trying to create an exim filter to block spam messages containing
" 85% OFF"
in the subject line. That's a blank space, then any two digits, then the percent sign, a blank space and the word OFF in all caps. This is what I wrote:
# Exim filter
if
$header_subject: contains " \\d{2}% OFF"
then
fail text "no spam please"
seen finish
endif
However, that doesn't seem to work and lets matching messages through, although it passes the regex101 testing. What am I missing? What's the correct syntax?
|
I temporarily replaced my .forward file with yours and confirmed that it doesn't work.
There are two problems.
contains performs a substring match, and does not understand regular expressions. For regexes you want matches rather than contains.
The \d PCRE-style character class appears to be broken, as does the {N} syntax! I tried all kinds of combinations. The only syntax which worked was [0-9][0-9]. Even [0-9]{2} didn't work, and neither did [0-9]\\{2\\}, nor \\d\\d. (I'm running version 4.72.) Edit: Here is the reason: Quotes do not have to be used on regexes in Exim filter files, except when regexes contain whitespace. However, backslashes must be doubled up even in unquoted regexes. In quoted regexes, they must be doubled up again: $header_subject: matches "\\\\d{2}% OFF". Count 'em: four backslashes.
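Putting it together with the digit syntax the answer found to work reliably, the corrected filter would be (a sketch):

```
# Exim filter
if
  $header_subject: matches " [0-9][0-9]% OFF"
then
  fail text "no spam please"
  seen finish
endif
```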
Test your script with /usr/sbin/exim -bF <scriptfile> -f <sender>. A test message must be supplied on standard input (e.g. redirected from a file).
For testing Subject: processing, it can contain just that header line and nothing else. The capital F in -bF is to enable processing of your fail command which is disabled if you use -bf.
| regex usage in exim filtering |
1,435,844,238,000 |
I'm trying to set a couple of DNS servers to resolve only specified domains.
My first attempt was to run DNSmasq and create a manual list of domains/IPs, like so:
no-resolv
address=/whatsapp.com/192.155.212.202
But big services like google, twitter, whatsapp, facebook, etc.. use several IP ranges and distribute them in different ways (subdomains, protocols, geolocation, device type of the client, etc.), and this is causing some troubles.
I think the simplest approach would be to say something like:
Forward DNS queries of these domains to resolv.conf and block anything else
Is there a way to do it?
|
You're looking for the server= directive instead of the address= directive. Unfortunately, you'll have to specify your actual DNS servers, it won't get them from resolv.conf (since you are using no-resolv to prevent that).
server=/whatsapp.com/8.8.8.8
server=/whatsapp.com/8.8.4.4
server=/example.com/8.8.8.8
server=/example.com/8.8.4.4
⋮
You probably want to generate that with a script. And of course you can use your normal DNS servers instead of Google Public DNS.
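A minimal generator might look like this (the domain list and upstream servers are examples):

```shell
for domain in whatsapp.com example.com; do
    for ns in 8.8.8.8 8.8.4.4; do
        printf 'server=/%s/%s\n' "$domain" "$ns"
    done
done > whitelist.conf
```

The resulting whitelist.conf can then be included from the dnsmasq configuration.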
Alternatively, you can use BIND (though note that there are other configs if your goal isn't filtering).
| DNS whitelist domains |
1,435,844,238,000 |
OpenDNS offers a quite simple way for internet filtering by categories. Of course who could get the correct IP address can easily bypass the filter but it would be enough for my expectations.
The bigger problem is that changing DNS provider at client side is not a big deal.
So my question is whether it is possible to force to use only specific DNS provider at local network.
The target device is a WiFi router with OpenWRT.
(However I would welcome any similarly simple to set up filtering solution but the main question is the DNS provider forcing.)
|
With iptables firewall this works (Openwrt also uses iptables):
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p udp --dport 53 -j DNAT --to 192.168.1.1
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 53 -j DNAT --to 192.168.1.1
On your router use the OpenDNS servers. 192.168.1.1 is the OpenWrt router's IP, and 192.168.1.0/24 is the LAN subnet; modify the above rules to match your own network setup.
If you are trying out the rules at the OpenWrt prompt, replace -A with -I. If you are saving them in a script that loads on bootup or on restart, the -A switch should work.
With this setup, whatever DNS servers the client machines use, when the DNS request reaches the router its destination IP is rewritten to the router's own. You can find out more about iptables on OpenWrt here.
| Force to use specific DNS provider at network |
1,435,844,238,000 |
I have a 400MB+ Tomcat log file (catalina.out). How can I pull out entries for a given time period?
|
Not sure how well this will perform on your 400MB file, but here are some CLI one-liners that would do the trick.
If you're looking for entries from one specific date, a plain grep for that date can probably do what you need.
Otherwise, you can use sed:
sed -n '/date1/,/date2/p' filename
For example with an input file "test":
Day 0: foo
Day 1: hello
Day 2: world
Day 3: blah
You could run
[me@mybox tmp]$ sed -n '/Day 1/,/Day 2/p' test
Day 1: hello
Day 2: world
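One caveat of the sed range is that both boundary patterns must occur literally in the file. If they might not, an awk string comparison on the timestamp fields works instead, assuming the log lines start with a lexically sortable date and time (a sketch; adjust the fields and bounds to your catalina.out format):

```shell
awk '$1" "$2 >= "2015-07-02 10:00" && $1" "$2 <= "2015-07-02 11:00"' catalina.out
```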
| How can I get entries for a given time period from a 400MB+ log file? |
1,435,844,238,000 |
How to get a list of the latest files with the extension .jmx recursively and use the file names in another process?
I have a list of JMX files organized into different folders and want to pick the latest file in each subfolder and use the JMX files in another process. (run the JMeter tests)
Ideally, the latest file should be the one with the higher version number available in the file name.
Sample list from two subfolders
/test_plans/accounts/filter/TestPlan-API-Accounts-Filter-1.2.jmx
/test_plans/accounts/filter/TestPlan-API-Accounts-Filter-1.1.jmx
/test_plans/accounts/filter/TestPlan-API-Accounts-Filter-1.0.jmx
/test_plans/account-activation/TestPlan-Account-Activation-1.2.jmx
/test_plans/account-activation/TestPlan-Account-Activation-1.1.jmx
/test_plans/account-activation/TestPlan-Account-Activation-1.1 .jmx
/test_plans/account-activation/TestPlan-Account-Activation-1.0 .jmx
I need to pick TestPlan-API-Accounts-Filter-1.2.jmx and TestPlan-Account-Activation-1.2.jmx
I can get a list of files recursively with find ./test_plans -type f | sort -nr
|
With zsh, you could do:
typeset -A latest
for jmx (**/*.jmx(nN)) latest[$jmx:h]=$jmx
Which builds the $latest associative array, where latest[some/dir]=some/dir/file-99.jmx, with the value being the file whose name sorts last numerically (thanks to the n glob qualifier, which enables NUMERIC_GLOB_SORT for that glob).
Then to do something with those files:
ls -ld -- $latest
Or loop over them with:
for file ($latest) {
...
}
Or the Bourne-style syntax if you prefer:
for file in $latest; do
...
done
To loop over the keys (the directories) of the associative array:
for dir (${(k)latest}) ...
Or both key and value:
for dir file (${(kv)latest}) ...
(though you can always also use dir=$file:h to get the parent directory from the file, or $latest[$dir] for the dir from the file).
To sort the files by last modification time instead of numerically on their name, replace the n glob qualifier with Om.
To do something similar with GNU bash 4.4+, find and GNU sort:
typeset -A latest
readarray -td '' files < <(
LC_ALL=C find . -name '.?*' -prune -o -name '*.jmx' -print0 |
sort -zV)
for jmx in "${files[@]}"; do
latest[${jmx%/*}]=$jmx
done
And then:
ls -ld -- "${latest[@]}"
for file in "${latest[@]}"; do
...
done
for dir in "${!latest[@]}"; do
file=${latest[$dir]}
...
done
Above, it's sort -V (version sort) that sorts the list of files in a similar fashion as zsh's n glob qualifier.
sort -n wouldn't work, as it only compares an initial numeric prefix, not version numbers embedded in the names.
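Once either version has populated latest, handing the plans to JMeter might look like this (a sketch; it assumes jmeter is on the PATH and uses its non-GUI -n/-t/-l options, writing each result next to its plan):

```shell
for file in "${latest[@]}"; do
    jmeter -n -t "$file" -l "${file%.jmx}.jtl"
done
```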
| How to get a list of latest files recursively with a specific extension and copy them into a folder |
1,435,844,238,000 |
I use the tail, head and grep commands to search log files. Most of the time the combination of these 3 commands, in addition to using pipe, gets the job done. However, I have this one log that many devices report to literally every few seconds. So this log is very large. But the pattern of the reporting is the same:
Oct 10 11:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0xD 0xD 0xD
Oct 10 11:58:50 Unit ID: 1111
In the above example, it shows that UDP packet was sent to the socket server for a specific unit id.
Now sometimes I want to view the packet information for this unit within a specific time range by quering the log.
Oct 10 11:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0xD 0xD 0xD
Oct 10 11:58:50 Unit ID: 1111
... // A bunch of other units reporting including unit id 1111
Oct 10 23:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0x28 0x28 0x28
Oct 10 23:58:50 Unit ID: 1111
So in the example above, I would like to display log output only for Unit ID: 1111 within the time range of 11:58 and 23:58. So the possible results can look like this:
Oct 10 11:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0xD 0xD 0xD
Oct 10 11:58:50 Unit ID: 1111
Oct 10 12:55:11 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0x28 0xD 0x28
Oct 10 12:55:11 Unit ID: 1111
Oct 10 15:33:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0x33 0xD 0x11
Oct 10 15:33:50 Unit ID: 1111
Oct 10 23:58:50 Received Packet from [xxx.xx.xxx.xx:xxxx]: 0x28 0x28 0x28
Oct 10 23:58:50 Unit ID: 1111
Notice the results only display Unit ID: 1111 information and not the other units.
Now the problem with using something like this:
tail -n 10000 | grep -B20 -A20 "Oct 10 23:58:50 Unit ID: 1111"
is that will display a lot of stuff, not just the stuff that I need.
|
The following remembers each line in l; when a line's time field ($3) is within range and mentions Unit ID: 1111, it prints the saved Received Packet line followed by the current one. Note the upper bound is 23:59 so that the string "23:58:50" still compares inside the range:
awk '$3 >= "11:58" && $3 < "23:59" && /Unit ID: 1111/{print l"\n"$0};{l=$0}'
| Search and Filter Text on large log files |
1,435,844,238,000 |
Can anyone point me at a 'pick' filter script that works somewhat as described below?
I've spent about an hour hunting for a simple bash script/filter that will allow me to pipe in a list of values and will spit out a subset of them depending on choices I make at the console. I know there are examples written in C but I wanted a mostly-portable bash script I can use in Cygwin / Gitbash etc. (The context: I want to be able to run some command in some subdirectories, and I want to separate the choice of which directories to run the command, from the choice of command to run.)
As hypothetical example of usage:
$ echo "foo
> bar
> baz" | pick.sh
* Options:
* 1. foo
* 2. bar
* 3. baz
* Choices? 2 3
bar
baz
The lines marked * are supposed to be where the script interactively lets me choose which elements to 'pick' and once I decided lines 2 and 3 it proceeds to send those out to STDOUT.
Choices ideally could be a combination of space-separated numbers eg 2 3 4, inclusive ranges eg 2-4 .. or maybe even fancy enough to use some kind of autocompletion allowing typing the first few letters of the items themselves.
Well, there it is, I think it would be a very useful bash pipeline filter in general!
(Thanks for reading this far..)
|
Self-demonstrating example using sed line specifiers (where N,M means lines N through M):
$ ./pick.sh < ./pick.sh
Lines:
1 #!/usr/bin/env bash
2
3 set -o errexit -o nounset -o noclobber
4
5 trap 'rm --force --recursive "$working_directory"' EXIT
6 working_directory="$(mktemp --directory)"
7 input_file="${working_directory}/input.txt"
8
9 cat > "$input_file"
10
11 echo 'Lines:'
12 cat --number "$input_file"
13
14 IFS=' ' read -p 'Choices: ' input < /dev/tty
15 lines=($input)
16
17 sed --quiet "$(printf '%sp;' "${lines[@]}")" "$input_file"
Choices: 1 5,7 17
#!/usr/bin/env bash
trap 'rm --force --recursive "$working_directory"' EXIT
working_directory="$(mktemp --directory)"
input_file="${working_directory}/input.txt"
sed --quiet "$(printf '%sp;' "${lines[@]}")" "$input_file"
Basically, save standard input to a temporary file, print the file with line numbers, prompt for input ranges, and pass the input ranges to sed to print each of them.
One quirk of this method is that lines will be printed in the order they appear in the file, not the order you specify:
…
Choices: 3 1
#!/usr/bin/env bash
set -o errexit -o nounset -o noclobber
If you really need input order it would be simple to loop over lines, although this is of course less efficient.
The script assumes you have GNU cat, sed, etc. installed. If you're using BSD tools the command flags will be different.
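For the single-choice case, bash's built-in select gives you much of this for free, though it supports neither ranges nor multiple picks:

```shell
select item in foo bar baz; do
    echo "picked: $item"
    break
done
```

select prints the numbered menu and prompt to stderr and reads the choice from stdin, so the echoed result stays clean on stdout.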
| Picking selected items from stdin via choices typed into console |
1,435,844,238,000 |
I have a fairly simple little script. Basically, it performs ping over a given domain. It is like this:
ping -c2 $1 | head -n4
And it prints out for example:
PING google.com (172.217.17.206): 56 data bytes
64 bytes from 172.217.17.206: icmp_seq=0 ttl=55 time=2.474 ms
64 bytes from 172.217.17.206: icmp_seq=1 ttl=55 time=2.668 ms
which is okay for me.
But for example like you know sometimes the ping command does not return any response from the ICMP request. Like for example:
ping intel.com
PING intel.com (13.91.95.74): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
--- intel.com ping statistics ---
And when this happens, the script gets stuck for several seconds before resuming.
When that happens, I'd like to just skip it and continue. I'm actually not sure whether that's possible at all.
I was thinking at first to pipe it to grep for 'Request timeout' or to put the result in a variable and then cat | grep the variable.
Can someone think of a way for this and is it possible at all to just skip the execution when it hits Request timeout?
|
If you don't want it to be stuck for "several seconds" when a server fails to respond, add a timeout using the -W option. For example:
ping -c2 -W2 "$1"
-W2 sets a two-second timeout. Change the limit to fit your needs.
Aside
When referencing shell variables, like $1, always put them in double-quotes, like "$1", unless you explicitly know about and want word splitting and pathname expansion.
Documentation
From man ping:
-W timeout
Time to wait for a response, in seconds. The option affects only timeout in absence of any responses, otherwise ping waits
for two RTTs.
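If you also want to detect the failure rather than just shorten it, ping exits nonzero when it received no replies, so the script can branch on the exit status (a sketch, combining it with -W):

```shell
#!/bin/bash
if out=$(ping -c2 -W2 "$1" 2>/dev/null); then
    printf '%s\n' "$out" | head -n4
else
    echo "no reply from $1, skipping" >&2
fi
```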
| Filtering the output from the ping command |
1,435,844,238,000 |
I've got a folder with several thousand files with names like ousjgforuigor-TIMESTAMP.txt
The timestamp is a standard Unix timestamp (e.g. 1543932635). Is there an easy way to list only files with a filename-timestamp > a provided one?
The number of characters before the timestamp is variable, but the name always ends with -TIMESTAMP.txt
I could write a bash script to do this, but that seems like overkill.
|
Using zsh's expression-as-a-glob-qualifier,
t=1543951252 zsh -c 'datefilter() { ts=${REPLY##*-}; ts=${ts%*.txt}; ((ts >= $t)) }; print -l *-<->.txt(+datefilter)'
The overall command (towards the end) is print -l, which prints each argument on a separate line; the whole thing is wrapped in a zsh -c call (presumably from bash) that sets the environment variable t to the cutoff timestamp. Instead of printing the filenames, you could collect them in an array, delete them, or do anything else you want with them.
The glob qualifier *-<->.txt picks up potentially-matching filenames -- ones that begin with anything (*), followed by a dash (-), followed by any range of numbers (zsh's range operator <->), followed by .txt; that globbing is then sent to the glob qualifier (+datefilter), which is a call to the corresponding function.
The datefilter function takes the incoming filename (in $REPLY) and prunes it down to the timestamp value. It returns true if that timestamp value is greater than or equal to the given timestamp in t. Files that succeed in that test are kept as filenames; the rest are dropped.
You could do something similar in bash by manually looping over the glob:
for f in *-*.txt
do
ts=${f##*-}
ts=${ts%.txt}
[[ ts -ge t ]] && printf '%s\n' "$f"
done
Although the bash wildcard * could pick up stray filenames such as foo-bar.txt where bar is not required to be a number. You'd have to hard-code in some assumptions otherwise, such as:
for f in *-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].txt; do # ...
or
for f in *-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*.txt; do # ...
to force some number of digits to appear between the dash and the period.
| Filter files by string timestamp in filename |
1,435,844,238,000 |
For example, I want to get only the 3rd element in each row when I call:
xinput --list --short|grep "slave pointer"
I get the output:
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=11 [slave pointer (2)]
⎜ ↳ MCE IR Keyboard/Mouse (ite-cir) id=12 [slave pointer (2)]
⎜ ↳ Logitech Unifying Device. Wireless PID:101a id=14 [slave pointer (2)]
I would like to get only the names like "SynPS/2 Synaptics TouchPad", ...
I saw somewhere a solution with awk and print somehow, but isn't there a simpler solution to achieve that without awk or perl or such?
|
Gawk is pretty simple for this kind of thing but OK, you can also use cut:
xinput --list --short|grep "slave pointer" | cut -f 1
That will also include the leading space and ↳ characters. If you need to get rid of those, try this:
xinput --list --short|grep "slave pointer" | cut -f 1 | cut -d" " -f 5-
| Filtering the Xth element in a row? |
1,435,844,238,000 |
I want to construct a huge blob of data (a backup of a sort) and send it over the network (ssh or rsync) to another host. There is enough space on the remote for the data, but not on the local host, so I cannot store it as a local file. I'd like to compute a checksum of the data as it is entering the pipe (and later compare with the checksum of the resulting remote file). So I'm looking for a program that I can put in the middle of a pipeline and let it compute a checksum of everything passing through.
Two "MITM" programs that pop up in my head are pv and mbuffer but neither seems to have this functionality. Also cat and dd fall short :-) The various programs for computing checksums like md5sum, sha1sum etc. all consume their input and do not pass it on. Help? Thanks!
|
You can use tee and process substitution >(…), e.g.
cat blob | tee >(md5sum >&2) | ssh user@remote 'tee >(md5sum >&2) >/tmp/blob'
This pipe writes the checksums to stderr so as not to interfere with stdout.
You can redirect the hash to a file if you want to keep it.
cat blob | tee >(md5sum >blob.md5) | <your pipe>
| checksum a pipeline |
1,435,844,238,000 |
I have a huge number of images from which I'd like to feed some to ffmpeg time to time.
But I only want to feed ones that are alphabetically after certain image (last frame of previous run, name stored in some file).
Can I for example find out an order/index number of that one file and then do head/tail using that number?
Or is there some magical -pattern_type glob I could use as ffmpeg -i parameter?
Best filtering solution so far seems to be this but it seems a bit heavy:
find . -maxdepth 1 -type f | sort | awk '$0 > "./picture_2022-04-22_13-46-12.jpg"'
One alternative would be to put the list into text file, do parsing there and feed the text file to ffmpeg but I'd like to think there is some simpler way?
|
After sorting, some sed or awk could be used to match from pattern until the end of the stream. I assume that your final ffmpeg command accepts a list of file arguments. I use a printf instead of ffmpeg below.
find . -type f -print0 | sort -z | sed -nz '/pattern/,$p' | xargs -r0 printf '%s\n'
GNU-style NUL separation of the arguments is used (-print0, -z, -0). The sed command filters the sorted arguments, from pattern (inclusive) to the end of the stream.
If you want to get files from pattern until, not the end, but until a second pattern, the only modification is for the sed, it becomes sed -zn '/pattern1/,/pattern2/p'.
A shorter alternative for your case (depth 1) where we don't test for regular files, would be:
printf '%s\0' ./*.jpg | sed -nz '/pattern/,$p' | xargs -r0 printf '%s\n'
Here the files are already alphabetically sorted after the first step.
Also, you can compare with a string, not necessarily existing in the filenames, excluding or including, preferably using awk like you already do. For example, get all files named with a date later than 2022-01:
printf '%s\0' ./*.jpg | awk 'BEGIN {RS=ORS="\0"} $0 > "./picture_2022-01"' | xargs -r0 printf '%s\n'
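Wrapped as a function, the from-pattern-to-end step might look like this (still GNU-specific, and the pattern must not contain a /):

```shell
# from_pattern: NUL-safe "everything from the first name matching PATTERN
# through the end of the sorted list" (GNU sort/sed/xargs assumed).
from_pattern() {
    pat=$1
    shift
    printf '%s\0' "$@" | sort -z | sed -nz "/$pat/,\$p" | xargs -r0 printf '%s\n'
}
```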
| Filtering file list to show only files alphabetically after certain file |
1,435,844,238,000 |
I am trying to create a BASH script that will run a command, filter the output of that command and then read the results of that output to then print only sections that meet the given requirement.
For example:
I have managed to reduce the output of the original command so that the output looks like this:
Profile: 1
PsA of Profile 1: 13
PsL of Profile 1: 15
Profile: 2
PsA of Profile 2: 0
PsL of Profile 2: 0
I am trying to write a BASH script that will read each Profile section individually, and only print the profile numbers that have PsA and PsL values of over 0.
For clarity, the output needs to be only the profile value, so in this example - 1, with 2 discarded.
It also does need to be a BASH script due to the work that I am trying to do.
I am really new to all of this, and am utterly stuck. Please help!
** EDIT **
For clarity, I am trying to work with the Volatility Framework. I am looking at the profiles that can be obtained, and currently - the exact output looks like this:
Profile suggestion (KDBGHeader): WinXPSP3x86
PsActiveProcessHead : 0x8055a158 (31 processes)
PsLoadedModuleList : 0x80553fc0 (122 modules)
Profile suggestion (KDBGHeader): WinXPSP2x86
PsActiveProcessHead : 0x8055a158 (31 processes)
PsLoadedModuleList : 0x80553fc0 (122 modules)
What I need is for the script to check the PsActiveProcessHead and PsLoadedModuleList (PsA and PsL) - specifically, the number of processes and modules found - if BOTH of those, which are shown in brackets, are above 0 then print the Profile suggestion.
There may be 1 profile suggestion, there may be more - I need the script to output any profile found that has both modules and processes listed above 0.
My apologies for the unclear original question, I tried to make it simpler and adapt the answers but am still struggling.
Sorry!
(To be abundantly clear, the above is just an example of the output format, they will not always both have numbers above 0 and there may be more than 2 profile suggestions)
|
You can use sed with an N;D scheme:
sed -n 'N;s/PsA\( of Profile \([0-9]*\): \)[^0].*\nPsL\1[^0].*/\2/p;D'
The N appends the next line, so you always have two in the pattern space. Then you simply define a pattern with both values of the same profile and a number that does not start with zero (so values with leading zeroes will fail!). If there is a match, replace the pattern by the referenced profile number and print it (while default output is disabled by the -n option). Then start over with D, which deletes the first of the two lines in the pattern space.
Update according to question update
For the real world scenario you gave, I suggest a different approach:
sed -n '/Profile suggestion/!d;h;n;/(0/d;n;//d;g;p' yourfile
Explanation:
/Profile suggestion/!d means: Drop all lines that are not profile suggestions. Stop the script here to continue with the next line.
h copies the profile suggestion to the hold space, so we can print it if needed
n continues with the next line. The current one is not printed because of the -n option to the sed command
/(0/d deletes this cycle if we found the pattern (0, because this means no processes
n;//d exactly like above (an empty // regex reuses the last one), to make sure the second line also has a non-zero count
At this point of the script we know we had a profile suggestion with two lines following, each with a non-zero number of processes/modules. g copies the hold space back to the pattern space, so we can print the suggestion
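For comparison, roughly the same filter can be sketched in awk, assuming each suggestion line is followed by exactly two Ps* lines and that a zero count always prints as "(0 ":

```shell
# filter_profiles: read kdbgscan-style output on stdin; print the profile name
# when both of the following Ps* counts are non-zero.
filter_profiles() {
    awk '/Profile suggestion/ { prof = $NF; n = 0; next }
         /^Ps/ { if ($0 !~ /\(0 /) n++; if (n == 2) print prof }'
}
```

Use it as filter_profiles < volatility_output.txt.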
| BASH Script - Reading output and cutting by value |
1,435,844,238,000 |
I am running MacOS(really BSD), and I want to redirect certain traffic over an ssh tunnel using a a local forward. Seems easy enough, but I am repeatedly blocked by the ambiguous "/etc/pf.conf:29: syntax error" message at every turn. I must have gone through 30 iterations of the rule by now. Additionally, I have read the relevant OpenBSD packet filter information regarding syntax and redirection. Am at quiet the loss, and seek the help of someone smarter than myself about the BSD packet filter.
The goal is to take any traffic sourcing from my local machine destined to a machine on the internet to port 1234 and redirect the traffic to 127.0.0.1:1234. My specific os is OS X 10.10.2 Yosemite.
Here is the latest iteration of the rule which causes pfctl to return "syntax error"
pass out quick on en6 from any to en6 port 1234 rdr-to 127.0.0.1 port 1234
Based on the documentation and other random blogs on the Internet, this rule looks correct; pfctl however, disagrees.
The breakdown based on my understanding of the documentation is:
pass - the action to pass the traffic
out - the direction of traffic flow
quick - if the packet matches this rule, then consider this the last rule in the chain
on en6 - the interface on which to apply the rule
from any - the source of the packet (should always be my machine)
to en6 port 1234 - to anything on the interface destined for port 1234
rdr-to 127.0.0.1 port 1234 - redirect the packet to this interface
|
You can do it with the older rdr syntax; rdr-to belongs to the newer OpenBSD pf grammar, which the pf shipped with OS X does not accept, and that appears to be why you get the syntax error. With $ext_if defined as a macro for en6:
rdr pass quick on $ext_if inet proto tcp from any to any port 1234 -> 127.0.0.1 port 1234
| macos - local port redirection using pfctl and syntax errors |
1,435,844,238,000 |
I am unable to launch Gnome System Log Viewer after setting some filters. This is so, even after rebooting and reinstalling this GUI program. I found the following relevant line in /var/log/messages:
kernel - [ 2345.123456] traps: logview[1234] trap int3 ip:32682504e9
sp:7fff9123c150 error:0
It seems to be some exception error with the kernel. How to deal with it and get the viewer to launch again?
UPDATE:
I tried launching it manually with the following command: gnome-system-log and it gives me a more verbose error:
GLib-GIO-ERROR **: g_menu_item_set_detailed_action: Detailed action
name 'win.filter_hide info' has invalid format Trace/breakpoint trap
It appears that the regex I wrote for win.filter_hide has some invalid format. How can I access this and change it manually without the GUI?
UPDATE2:
I tried:
$ gsettings get org.gnome.gnome-system-log filters
@as []
$ gsettings reset org.gnome.gnome-system-log filters
It doesn't work. I think I am somewhere close, but not sure how to access win.filter_hide from here. From this image, I don't see how installing dconf-editor would help me access that filter.
UPDATE3:
I finally manage to take a peep at the values by logging in as root:
# gsettings get org.gnome.gnome-system-log filters
['hide info:1:::\\tinfo', 'error:0:#efef29292929::\\terr', 'show all:0:#000000000000::\\d']
# gsettings reset org.gnome.gnome-system-log filters
(process:3453): dconf-WARNING **: failed to commit changes to dconf: The connection is closed
Not sure where is the problem. But as can be seen, I can't even do a reset when logged on as root. And I can't access those values when logged on as normal user.
UPDATE4:
Finally it is solved. The reason why connection is closed is because the root is logged in the user environment. This should work:
$ su -c "gsettings reset org.gnome.gnome-system-log filters" -
|
The filter settings are saved as a gsettings scheme: org.gnome.gnome-system-log.filters. You can edit them with dconf-editor (org>gnome>gnome-system-log>filters). Replace the space in the name of the filter with a dash (or some other character), and gnome-system-log will work again.
| Unable to launch Gnome System Log Viewer after setting filters |
1,435,844,238,000 |
So I'm extremely new to rsyslog (recently switch from syslog-ng) and I really like that I can have dynamic filenames... My work has recently started using docker and they're sending a lot of fields in the syslogtag to a remote host. So instead of setting up filters for every instance, I'm trying to write a dynamic filter to parse out the relevant details and put it to it's own log/directory such as /var/log/docker/app name/syslog.log
I have the 'app name' working when they're providing the proper delimiters between the fields but when they're not using the proper one, the regex is returning **NO MATCH** and is putting everything to a '/var/log/docker/**NO MATCH**/syslog.log'. Using the **NO MATCH** directory is not an issue but grouping every remote host together in one file is. Is there a way to test if the regex returned no match and then have it change the filename from 'syslog.log' to '%hostname%.log' ?
|
I am not an expert, but you can start with this; you will need to repeat your regexp, since I don't know of a way to evaluate a template to get at the **NO MATCH** string. For example, say your "appname" part is matched by the regexp [a-z]+:, then you can write
$template nomatch,"/var/log/docker/nomatch/%hostname%.log"
if (not re_match($msg, "[a-z]+:")) then {
action(type="omfile" dynaFile="nomatch")
stop
}
The $template describes your wanted filename, the if tries to find the match, and then does the action of writing to the file, with no further handling of this message.
The action() parameters are described here in RainerScript.
| rsyslog7 filter to hostname if no match to regex |
1,435,844,238,000 |
I was wondering how to filter a table with several columns based on a specific value in each of the columns of interest.
I have this example here:
Chr1 16644 0 0 1 1
Chr1 16645 0 0 1 1
Chr1 16646 0 0 1 1
Chr1 16647 0 0 1 1
Chr1 16648 0 0 1 1
Chr1 16649 0 0 1 1
Chr1 16650 0 0 1 1
Chr1 16651 0 0 1 1
Chr1 16782 0 0 0 0
Chr1 16783 0 0 0 0
Chr1 16784 0 0 0 0
Chr1 16785 0 0 0 0
Chr1 16786 0 0 1 1
Chr1 16787 0 0 1 1
Chr1 16788 0 0 1 1
Chr1 16789 0 0 1 1
Chr1 16790 0 0 1 1
And I would like to remove all the rows containing a zero in all of the columns 3,4,5,6.
I have tried it as such
cat STARsamples_read_depth.txt | awk '$3 != 0 && $4 != 0&& $5 != 0 && $6 != 0' | less
But it removes also the rows where only some of these columns have a zero, not in all four!
Is there a way to do it?
thanks
Assa
|
As @Devon mentioned in the comments: Use || instead of &&.
The reason is that you want to show lines where at least one of the columns 3,4,5,6 differs from zero.
Here's another way to understand it. You're trying to remove lines where those columns are all zeros. Let's begin with the other way around: print all the lines where all those columns are 0. This is easy:
awk '$3 == 0 && $4 == 0 && $5 == 0 && $6 == 0'
Now you want to invert this statement: Show all the lines that don't match the condition above. So you just negate the statement.
awk '(!($3 == 0 && $4 == 0 && $5 == 0 && $6 == 0))'
The command above will also fulfill your requirement, by the way.
Anyway, according to logical negation rules, the negation of the statement "A and B" is "not A or not B". So to negate this statement:
$3 == 0 && $4 == 0 && $5 == 0 && $6 == 0
You need to negate each expression, and transform all the "and" operators to "or".
$3 != 0 || $4 != 0 || $5 != 0 || $6 != 0
Now you can better understand why your command didn't work. The negation of the statement you used would be:
$3 == 0 || $4 == 0 || $5 == 0 || $6 == 0
Which means it would remove all the lines where at least one of the columns (and not all) is zero.
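You can sanity-check the equivalence of the two forms on a trimmed copy of the sample rows:

```shell
# Both filters should agree: negated AND versus OR of negations (De Morgan).
sample='Chr1 16644 0 0 1 1
Chr1 16782 0 0 0 0
Chr1 16786 0 0 1 1'
kept_neg=$(printf '%s\n' "$sample" | awk '(!($3 == 0 && $4 == 0 && $5 == 0 && $6 == 0))')
kept_or=$(printf '%s\n' "$sample" | awk '$3 != 0 || $4 != 0 || $5 != 0 || $6 != 0')
```

Both keep the first and third rows and drop the all-zero one.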
| How to filter a table using awk |
1,435,844,238,000 |
I got a command foo that outputs a list of files separated by a new line.
How can I filter those files by their content using regex, and output the filtered files list?
|
If your system has GNU xargs, you could do something like
foo | xargs -d '\n' grep -l regex
| Filter a list of files by content |
1,435,844,238,000 |
I have a file containing full path names of files and directories.
From this list I would like to filter out any pathnames that reference directories so that I am left with a list containing only file paths.
Can anybody think of an elegant solution?
|
while IFS= read -r file; do
[ -d "$file" ] || printf '%s\n' "$file"
done <input_file
Would print the files that are not determined to be of type directory (or symlink to directory). It would keep all other types of files (regular, symlinks (except those to directories), sockets, pipes...) and those for which the type cannot be determined (for instance because they don't exist or are in directories which you don't have search permission for).
Some variations depending on what you meant by file and directory (directory is one of many types of files on Unix):
the file exists (after symlink resolution) and is not of type directory:
[ -e "$file" ] && [ ! -d "$file" ] && printf '%s\n' "$file"
file exists and is a regular file (after symlink resolution):
[ -f "$file" ] && printf '%s\n' "$file"
file exists and is a regular file before symlink resolution (excludes symlinks):
[ -f "$file" ] && [ ! -L "$file" ] && printf '%s\n' "$file"
etc.
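A quick run on a throwaway directory illustrates the behaviour described: the directory is dropped, while the non-existent path passes through because its type cannot be determined (paths under /tmp/pathdemo are made up for the demo):

```shell
# Set up one directory, one regular file, and one path that does not exist.
mkdir -p /tmp/pathdemo/somedir
touch /tmp/pathdemo/somefile
printf '%s\n' /tmp/pathdemo/somedir /tmp/pathdemo/somefile /tmp/pathdemo/missing > /tmp/pathdemo/list

# Filter: keep everything that is not a directory.
while IFS= read -r file; do
    [ -d "$file" ] || printf '%s\n' "$file"
done < /tmp/pathdemo/list
```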
| Filter directories from list of files and directories |
1,435,844,238,000 |
I have a large file that needs to be filtered by the first field (which is never repeated). Example as below:
NC_056429.1_398 2 3 0.333333 0.333333 0.333333 0.941178
NC_056429.1_1199 2 0 0.333333 0.333333 0.333333 0.941178
NC_056442.1_7754500 0 3 0.800003 0.199997 0.000000 0.000001
NC_056442.1_7754657 1 2 0.000000 0.199997 0.800003 0.888891
NC_056442.1_7754711 2 0 0.888891 0.111109 0.000000 0.800002
NC_056442.1_7982565 0 1 0.800003 0.199997 0.000000 0.666580
NC_056442.1_7982610 1 0 0.800003 0.199997 0.000000 0.000000
NC_056442.1_7985311 2 0 0.888891 0.111109 0.000000 0.000000
I am trying to use awk to filter a file in a shell script by the first column, and I need to use a variable because its in a while loop. The while loop calls in a text file such as:
NC_056442.1 7870000 # 1st field = $chrname, 2nd field = $pos
NC_056443.1 1570000
Previously in the script, I find a target value using a calculation with $pos to get $startpos and $endpos as shown below:
chrname="NC_056442.1" # column 1 in pulled file
startpos=7754657 # calculated in prior script
endpos=7982610 # calculated in prior script
start=${chrname}_${startpos} # this was an attempt to simplify the awk command
end=${chrname}_${endpos}
awk -v s="$start" -v e-"$end" '/s/,/e/' file.txt > cut_file.txt
If I manually type in the values, like below, I get a file that includes lines 5-8 only.
awk '/NC_056442.1_7754657/,/NC_056442.1_7982610/' file.txt > cut_file.txt
Output File
NC_056442.1_7754657 1 2 0.000000 0.199997 0.800003 0.888891
NC_056442.1_7754711 2 0 0.888891 0.111109 0.000000 0.800002
NC_056442.1_7982565 0 1 0.800003 0.199997 0.000000 0.666580
NC_056442.1_7982610 1 0 0.800003 0.199997 0.000000 0.000000
I am struggling because I do not know how to get the s and e variables to actually run. I have tried a variety of options including "ENVIRON[]". As someone relatively new to bash (and a first post here), I do not know how to troubleshoot this. I am open to answers outside of awk. Please let me know if I need to rephrase my question or add more information.
|
Don't try to do this by matching regular expressions. Instead, use _ or space as awk's field separator so you have the chromosome and positions in easy to use variables:
start=1234567
end=7654321
awk -v s="$start" -v e="$end" -F '[ _]' '$3 >= s && $3 <= e' file.txt > cut_file.txt
Also, avoid using CAPS for variable names in shell scripts. By convention, the global environment variables are capitalized so if you use capital letters for your own variables, that can lead to naming collisions and hard to find bugs.
Now, you haven't shown us the loop you are using. Whatever it is though, you would be better off looping in awk itself instead of the shell. Shell loops are slow.
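That awk-internal loop could be sketched like so; the layout of the ranges file (one "chrname start end" triple per line) is an assumption, since the full script isn't shown:

```shell
# filter_ranges RANGES_FILE DATA_FILE
# RANGES_FILE holds "chrname start end" per line; DATA_FILE is the big file.
# Keeps data rows whose position falls inside their chromosome's range.
filter_ranges() {
    awk -F '[ _]' '
        NR == FNR { lo[$1 "_" $2] = $3; hi[$1 "_" $2] = $4; next }
        {
            key = $1 "_" $2
            if (key in lo && $3 >= lo[key] && $3 <= hi[key]) print
        }
    ' "$1" "$2"
}
```

This reads all ranges in one pass and then filters the data in a second pass, with no shell loop at all.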
| How to use variables (that change in loop) in awk statement to cut a file |
1,435,844,238,000 |
I have two files
file1:
U 20 100 1_A 1_A
U 14 200 1_B 1_B
U 14 300 1_C 1_C
file2:
D 12 90 1_A 1_A
D 15 97 1_A 1_A
D 16 99.5 1_A 1_A
D 9 111 1_A 1_A
D 71 200 1_B 1_B
D 88 198 1_B 1_B
D 12 210 1_B 1_B
D 11 211 1_B 1_B
D 9 266 1_C 1_C
D 18 278 1_C 1_C
D 20 300.5 1_C 1_C
D 17 300 1_C 1_C
The 4th column includes the same values in both files (the 5th column too which is the same as the 4th) but in file1 every value appears only once meanwhile in file2 each value is present multiple times with differences in the 2nd and 3rd column.
I would like to get the lines from file2 where the 3rd column's value is within the range of ±1 of the corresponding line from file1 (where the 4th column's values are the same).
Expecting output:
D 16 99.5 1_A
D 71 200 1_B
D 20 300.5 1_C
D 17 300 1_C
tried using this:
while read c1 c2 c3 c4
do
awk '{if ( a = $4 && b < $3+1 && b > $3-1 ) print $1 " " $2 " " $3 " " $5 }' a="$c4" b="$c3" file2.txt > output.txt
done < file1.tx
and I got this output:
D 20 300.5 1_C
D 17 300 1_C
so it's only using the b value from the last line.
|
Use just awk, without the need for a shell loop:
awk 'NR==FNR{ col4[$4]=$3; next }
(-1< col4[$4]-$3 && col4[$4]-$3 <1) { print $1, $2, $3, $5 }' file1 file2
You should check whether the difference of the two numbers is within (-1,1) exclusive, rather than adding ±1 to the third column's value and comparing with its pair.
If you want differences within [-1,1] inclusively:
awk 'NR==FNR{ col4[$4]=$3; next }
(-1<= col4[$4]-$3 && col4[$4]-$3 <=1) { print $1, $2, $3, $5 }' file1 file2
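If you need it reusable, the exclusive-range filter can be wrapped in a small function; this sketch also skips names that never occur in file1, which the one-liners above would otherwise compare against an empty (numerically zero) reference:

```shell
# near_matches FILE1 FILE2
# FILE1 maps names (column 4) to reference values (column 3); rows of FILE2
# are kept when their column 3 lies strictly within +/-1 of that reference.
near_matches() {
    awk 'NR == FNR { ref[$4] = $3; next }
         ($4 in ref) {
             d = ref[$4] - $3
             if (-1 < d && d < 1) print $1, $2, $3, $5
         }' "$1" "$2"
}
```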
| Filtering a file based on values from another file |
1,636,736,061,000 |
I want to filter a file for lines starting with a space. I use the following command:
grep -v "^ " < input > input_no_starting_space
To double check my results, I run the following:
grep "^ " < input > double_check
and then count the number of lines in input_no_starting_space and double_check to see whether their sum adds up to the number of lines in input. For this I use wc -l.
For some reason, this check fails. Meaning, the sum of the number of lines is less than the number of lines in input. My file has millions of lines, but I cannot seem to reproduce the issue on a small example. Is there by any chance something wrong with the way I use grep (since I would expect that grep and grep -v always give the complement of one another), or is this more likely an artifact in my file? In case of the latter, what could this artifact be?
This is using GNU grep 3.4 on Ubuntu 20.04.3.
|
Maybe your input file does not contain only text data.
Try using grep with the -a option.
See also the --binary-files=TYPE option of grep, and the first relevant paragraph of man grep about data encoding and NUL bytes:
If a file's data or metadata indicate that the file contains binary data, assume that the file is of type TYPE. Non-text bytes indicate binary data; these are either output bytes that are improperly encoded for the current locale, or null input bytes when the -z option is not given.
| grep -v does not return complement of grep |
1,636,736,061,000 |
I had question about IPtables.
Let's start with this example of my book:
What rules you would set for a mail server accepting connections for
EMSTP (port 465) and IMAP (port 993) having a network interface eth1
exposed to the Internet and another network interface eth2 exposed to
the corporate network?
I tried to respond with this:
Iptable -A FORWARD -p EMSTP, IMAP -s all -i eth1 -m multiport 465,993 state –state NEW, ESTABILISHED -j ACCEPT
Iptable -A FORWARD -p EMSTP, IMAP -s all -i eth2 -m multiport 465,993 state –state NEW, ESTABILISHED -j ACCEPT
I thought about FORWARD because isn't specified if traffic is INPUT
or OUTPUT... So I used the generic in/out (FORWARD if I can use in
this mode)
The protocol is specified(so I think don't have problems about)
I Used two rules because I used different interface, but I think
can do all in the same rules, just adding another -i inside the same rule.
For the network, I think that one is (internet) and another one is
local network (I really don't know what mean for "corporate")
My question is if my response is good and if it is mandatory to use this type of format.
What change is I swap the order of the rules?
In this case ad example:
Iptable -A FORWARD -j ACCEPT -i eth1 -p EMSTP, IMAP -s all -m multiport 465,993 state –state NEW, ESTABILISHED
Just swapping the jump and the interface (-j and -i)
Someone can help to understand?
|
First, some reminders:
The -p argument is to specify transport protocols like TCP, UDP, ICMP... not higher-level protocols like IMAP.
The OUTPUT and INPUT chains are for packets outgoing from the machine and incoming to the machine. If you want to filter packets that are forwarded (when your machine acts as a gateway), you must use the FORWARD chain. To distinguish in and out there, use the input or output interfaces and the source and destination IPs.
ESTABILISHED --> typo !!! :)
Now, let's have a look to your problem:
What rules you would set for a mail server accepting connections for EMSTP (port 465) and IMAP (port 993) having a network interface eth1 exposed to the Internet and another network interface eth2 exposed to the corporate network?
The problem is too broad since it says that:
The machine is a mail server.
It has two interfaces
It must accept connections for mail related protocols.
But it isn't said that the connections must be accepted for both networks (internet / corporate). Anyway, let's assume that it is the case.
iptables works with discriminants: -i is one that matches packets incoming on THAT interface.
Since you want the traffic to be accepted on both interfaces, simply remove -i.
As mentioned previously, -p is to specify the transport protocol. Mail runs over TCP, so use -p tcp.
So your first responses would work (minus typos and some syntax errors; the idea is OK).
Your last one won't, because it allows packets coming from the internet (eth1) to pass through your server and go on to your corporate network.
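For reference, a corrected sketch of the rules for this scenario (untested here); INPUT/OUTPUT is used because the mail daemon terminates the connections itself, and the OUTPUT rule is only needed if your OUTPUT policy is not ACCEPT:

```
# 465 = SMTP over TLS, 993 = IMAPS; both ride on TCP
iptables -A INPUT  -p tcp -m multiport --dports 465,993 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport --sports 465,993 -m state --state ESTABLISHED -j ACCEPT
```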
| Iptable order of rules with example |
1,636,736,061,000 |
I have a binary that I will be running multiple times in parallel, each instance executed with different input from the command line.
I wanted htop to list only these processes so that I can compare the usage of memory based on the cli inputs.
I tried htop -p but this lists only one process even if I give multiple process IDs as the input.
Is there any way to get the output with the input being multiple process IDs, or with part of the process name?
Example as I hope to see in htop:
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
356 root 20 0 52952 7980 6632 S 0.0 0.8 0:00.00 ./test 1
357 root 20 0 2356 416 352 S 0.0 0.8 0:00.00 ./test 2
358 root 20 0 2356 332 268 S 0.0 0.8 0:00.00 ./test 3
Many thanks!
|
From man htop:
F4, \
Incremental process filtering: type in part of a process command line and only
processes whose names match will be shown. To cancel filtering, enter the Filter
option again and press Esc.
So, once you start htop, type \test and press Enter to filter in only commands containing test.
| htop - See/Filter all the instances of a binary |
1,636,736,061,000 |
I am dealing with datasets on xml files; which have the following structure:
<?xml version="1.0" standalone="yes"?>
<exper>
<entry>
<Source />
<Status>pass</Status>
<Title>S2</Title>
</entry>
<entry>
<Source />
<Status>fail</Status>
<Title>S1</Title>
</entry>
<entry>
<Source />
<Status>pass</Status>
<Title>S3</Title>
</entry>
I could parse this in Python and be done with it, but is there anything that can be done on the fly, maybe using some sort of visual editor, to get only a list of the Title tag for each entry?
I am using Notepad on Windows to read the file, which is not the best way to go for sure. I also have MinGW, so I could maybe run awk, although I was told that parsing XML files is not ideal for either sed or awk.
|
1) Install xmlstarlet https://sourceforge.net/projects/xmlstar/files/xmlstarlet/1.6.1/
2) Process XML documents from command line:
xmlstarlet sel -t -v "//Title" -n input.xml
The output:
S2
S1
S3
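If xmlstarlet isn't at hand and the XML stays as line-regular as the sample, even a sed scrape produces the same list (fragile for real-world XML, where tags may span or share lines, but serviceable for a quick look):

```shell
# extract_titles: print the text of each <Title>...</Title>, one per line.
# Assumes at most one Title element per input line.
extract_titles() {
    sed -n 's:.*<Title>\(.*\)</Title>.*:\1:p' "$@"
}
```

Use it as extract_titles input.xml, or on a pipe.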
| Easy way to filter xml files in a visual way |
1,636,736,061,000 |
I have a big file with many columns and rows, looks like:
A B C D E F1 F2 F3 F4 F5
a1 b1 c1 d1 e1 0 0 1 0 1
a2 b2 c2 d2 e2 1 0 0 1 1
a3 b3 c3 d3 e3 1 1 0 0 1
....
The A, B, C, D, E columns contain some information, and F1-5 columns represent some ids. The 0s or 1s mean absence/presence of the A-E information for that id.
I want to create files for each id, while every file contains the ABCDE information that the id has.
For example, F5 have three 1s in the first 3 rows, so
F5.txt:
A B C D E
a1 b1 c1 d1 e1
a2 b2 c2 d2 e2
a3 b3 c3 d3 e3
F1 has two 1s in the first 3 rows, so
F1.txt:
A B C D E
a2 b2 c2 d2 e2
a3 b3 c3 d3 e3
How to filter this file and create new files with the id names (F1, F2...) using awk?
|
AWK solution:
awk 'NR==1{ split($0,h); columns=sprintf("%s %s %s %s %s",h[1],h[2],h[3],h[4],h[5]); next }
{ for (i=6;i<=NF;i++)
if ($i) {
if (!a[h[i]]++) print columns > h[i]".txt";
print $1,$2,$3,$4,$5 > h[i]".txt"
}
}' file
split($0,h) - split the 1st record into array h to obtain header column names
columns=sprintf("%s %s %s %s %s",h[1],h[2],h[3],h[4],h[5]) - constructing common columns string A B C D E
if ($i) - if the current field (from the 6th field onward) is truthy, i.e. neither "" (empty string) nor 0 - ready for further processing
h[i] - points to the current filename, i.e. F1 etc (or as you wrote: represents some ids)
if (!a[h[i]]++) print columns > h[i]".txt" - if the file named h[i] is being written for the first time, print the header/columns line to it (as the 1st line)
Viewing results:
$ head F*.txt
==> F1.txt <==
A B C D E
a2 b2 c2 d2 e2
a3 b3 c3 d3 e3
==> F2.txt <==
A B C D E
a3 b3 c3 d3 e3
==> F3.txt <==
A B C D E
a1 b1 c1 d1 e1
==> F4.txt <==
A B C D E
a2 b2 c2 d2 e2
==> F5.txt <==
A B C D E
a1 b1 c1 d1 e1
a2 b2 c2 d2 e2
a3 b3 c3 d3 e3
| Filter rows with a specific header name and containing "1" in a column |
1,636,736,061,000 |
I have 3 files: list_file, file1, and file2. I want to extract entire lines from file1 and file2 based on a list_file pairwise and concatenate the results on output.
Namely, I need to extract only lines from file1 and file2 whose names in the 4th column match the names in the first and second columns of list_file (respectively), then concatenate both entire lines on output, following the same paired order displayed in list_file.
Names in column 1 of list_file are present in file1, and names in column 2 of list_file are present in file2.
list_file:
uth1.g20066 uth2.g18511
uth1.g3149 uth2.g22348
uth1.g20067 uth2.g18512
uth1.g20068 uth2.g18514
uth1.g3154 uth2.g22355
file1
ut1A 11256 13613 uth1.g20065
ut1A 25598 47989 uth1.g20066
ut1A 39912 40142 uth1.g3148
ut1A 40324 40617 uth1.g3149
ut1A 40699 41034 uth1.g3150
file2
ut1B 16951 39342 uth2.g18511
ut1B 31265 31495 uth2.g22347
ut1B 31677 31970 uth2.g22348
ut1B 32052 32387 uth2.g22349
ut1B 41596 46862 uth2.g18522
Desired output:
ut1A 25598 47989 uth1.g20066 ut1B 16951 39342 uth2.g18511
ut1A 40324 40617 uth1.g3149 ut1B 31677 31970 uth2.g22348
In order to perform this task I tried the below python code and it works, however it is clumsy (many loops) and it is very slow on large input files, so it would be great to make it more concise. It would also be interesting to have entirely new script as an alternative, perhaps with awk. Thanks.
data = open("list_file.txt")
data1 = open("file1.txt")
all_lines1 = data1.readlines()
data2 = open("file2.txt")
all_lines2 = data2.readlines()
output = open("output.txt", "w")
for line in data:
columns = line.split( )
geneH1data = columns[0]
geneH2data = columns[1]
for line1 in all_lines1:
columns1 = line1.split( )
chr1 = columns1[0]
start1 = int(columns1[1])
end1 = int(columns1[2])
geneH1data1 = columns1[3]
for line2 in all_lines2:
columns2 = line2.split( )
chr2 = columns2[0]
start2 = int(columns2[1])
end2 = int(columns2[2])
geneH2data2 = columns2[3]
if geneH1data==geneH1data1 and geneH2data==geneH2data2:
output.write(chr1 + " " + str(start1) + " " + str(end1) + " " + geneH1data + " " + chr2 + " " + str(start2) + " " + str(end2) + " " + geneH2data + '\n')
output.txt
ut1A 25598 47989 uth1.g20066 ut1B 16951 39342 uth2.g18511
ut1A 40324 40617 uth1.g3149 ut1B 31677 31970 uth2.g22348
|
Using GNU awk for ARGIND:
$ awk '
ARGIND<3 { a[ARGIND,$4]=$0; next }
((1,$1) in a) && ((2,$2) in a) { print a[1,$1], a[2,$2] }
' file1 file2 list_file
ut1A 25598 47989 uth1.g20066 ut1B 16951 39342 uth2.g18511
ut1A 40324 40617 uth1.g3149 ut1B 31677 31970 uth2.g22348
If you don't have GNU awk just tweak it to:
$ awk '
FNR==1 { argind++ }
argind<3 { a[argind,$4]=$0; next }
((1,$1) in a) && ((2,$2) in a) { print a[1,$1], a[2,$2] }
' file1 file2 list_file
ut1A 25598 47989 uth1.g20066 ut1B 16951 39342 uth2.g18511
ut1A 40324 40617 uth1.g3149 ut1B 31677 31970 uth2.g22348
and then it'll work in any awk. If you want the output tab-separated instead of blank-separated just tweak it again:
$ awk '
BEGIN { OFS="\t" }
FNR==1 { argind++ }
argind<3 { a[argind,$4]=$0; next }
((1,$1) in a) && ((2,$2) in a) { print a[1,$1], a[2,$2] }
' file1 file2 list_file
ut1A 25598 47989 uth1.g20066 ut1B 16951 39342 uth2.g18511
ut1A 40324 40617 uth1.g3149 ut1B 31677 31970 uth2.g22348
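The portable version can be checked end to end by reconstructing the three input files; their contents below are my assumption, inferred from the output shown above, since the question doesn't show them:

```shell
# reconstructed sample inputs (assumed from the joined output above)
printf '%s\n' 'ut1A 25598 47989 uth1.g20066' 'ut1A 40324 40617 uth1.g3149' > file1
printf '%s\n' 'ut1B 16951 39342 uth2.g18511' 'ut1B 31677 31970 uth2.g22348' > file2
printf '%s\n' 'uth1.g20066 uth2.g18511' 'uth1.g3149 uth2.g22348'            > list_file

# store each row of file1/file2 keyed by its 4th column, then join on list_file
awk '
FNR==1   { argind++ }
argind<3 { a[argind,$4]=$0; next }
((1,$1) in a) && ((2,$2) in a) { print a[1,$1], a[2,$2] }
' file1 file2 list_file
```

Because the lookups are hash-based, this is a single pass over each file, which is why it is so much faster than the nested Python loops.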
| Filtering lines of two files based on a third list file |
1,636,736,061,000 |
I am writing an Ansible playbook that grants and rejects access to some dirs.
Base is a list of user dictionaries:
users:
- {name: 'user1', dirs: ['dir1', 'dir2']}
- {name: 'user2', dirs: ['dir1', 'dir3']}
- {name: 'user3', dirs: []}
As a helper fact, I created a list of all dirs occurring in any of the user records.
Now I want to transform this list into two new lists:
dirs_allowed:
- {'dir1': ['user1', 'user2']}
- {'dir2': ['user1']}
- {'dir3': ['user2']}
this one is easy, but I cannot find a solution for this one:
dirs_forbidden:
- {'dir1': ['user3']}
- {'dir2': ['user2', 'user3']}
- {'dir3': ['user1', 'user3']}
So, my question is: How can I get a list of all users who don't have the current dir (=item in a with_items loop over all dirs) in their 'dirs' attribute?
It is certainly possible to do it somehow with helper variables/facts or 'when'-conditions on the task but I would really like to transform the lists themselves because I want to learn how to deal with such complex transformations.
The idea behind it is to provide just a single dict as input and extract everything needed for the particular tasks from this dict, without lots of set_fact-tasks in between which would make the playbook difficult to read and possibly fail because of undefined variables if the task is moved to a place where the intermediate variables are not (yet) set.
|
Use filter difference. For example
- set_fact:
users_all: "{{ users|json_query('[].name') }}"
- set_fact:
dirs_forbidden: "{{ dirs_forbidden|default([]) + [
{(item.keys()|list|first):
(users_all|difference(item.values()|list|first))}] }}"
loop: "{{ dirs_allowed }}"
- debug:
var: dirs_forbidden
give
"dirs_forbidden": [
{
"dir1": [
"user3"
]
},
{
"dir2": [
"user2",
"user3"
]
},
{
"dir3": [
"user1",
"user3"
]
}
]
FWIW. Working with dictionaries makes the code simpler in this case. For example
- set_fact:
users_all: "{{ users|json_query('[].name') }}"
- set_fact:
dirs_all: "{{ users|
json_query('[].dirs')|flatten|unique }}"
- set_fact:
dirs_allowed: "{{ dirs_allowed|default({})|
combine({item: users|json_query(query)}) }}"
vars:
query: "[?dirs.contains(@, '{{ item }}')].name"
loop: "{{ dirs_all }}"
- debug:
var: dirs_allowed
- set_fact:
dirs_forbidden: "{{ dirs_forbidden|default({})|
combine({item: (users_all|difference(dirs_allowed[item]))}) }}"
loop: "{{ dirs_all }}"
- debug:
var: dirs_forbidden
give
"dirs_allowed": {
"dir1": [
"user1",
"user2"
],
"dir2": [
"user1"
],
"dir3": [
"user2"
]
}
"dirs_forbidden": {
"dir1": [
"user3"
],
"dir2": [
"user2",
"user3"
],
"dir3": [
"user1",
"user3"
]
}
| Ansible - negative filter question |
1,636,736,061,000 |
I can discard lpr default filters using the -l (or -o raw) option. But how can I list them ?
(FWIW I'm using lpr from cups-client 1:2.2.6-15.fc28, Fedora.)
|
This is not the most satisfying solution, but setting
LogLevel info
in /etc/cups/cupsd.conf (then restarting CUPS e.g. with sudo systemctl restart cups), the filters are listed in CUPS logs when a job is sent.
CUPS logs being handled by journald by default in Fedora (28, at least), they can be accessed by
$ journalctl -b -u cups
…
juil. 23 15:31:56 Schenker cupsd[14390]: [Job 20] Adding start banner page "none".
juil. 23 15:31:56 Schenker cupsd[14390]: [Job 20] Queued on "Brother_MFC-9330CDW" by "goug".
juil. 23 15:31:56 Schenker cupsd[14390]: REQUEST localhost - - "POST /printers/Brother_MFC-9330CDW HTTP/1.1" 200 358 Create-Job successful-ok
juil. 23 15:31:56 Schenker cupsd[14390]: [Job 20] File of type application/pdf queued by "goug".
juil. 23 15:31:56 Schenker cupsd[14390]: [Job 20] Adding end banner page "none".
juil. 23 15:31:56 Schenker cupsd[14390]: [Job 20] Started filter /usr/lib/cups/filter/pdftopdf (PID 14599)
juil. 23 15:31:56 Schenker cupsd[14390]: [Job 20] Started filter /usr/lib/cups/filter/pdftops (PID 14600)
juil. 23 15:31:56 Schenker cupsd[14390]: [Job 20] Started filter /usr/lib/cups/filter/brother_lpdwrapper_mfc9330cdw (PID 14601)
juil. 23 15:31:56 Schenker cupsd[14390]: [Job 20] Started backend /usr/lib/cups/backend/dnssd (PID 14602)
juil. 23 15:31:56 Schenker cupsd[14390]: REQUEST localhost - - "POST /printers/Brother_MFC-9330CDW HTTP/1.1" 200 40108 Send-Document successful-ok
juil. 23 15:31:59 Schenker cupsd[14390]: [Job 20] Job completed.
juil. 23 15:31:59 Schenker cupsd[14390]: Expiring subscriptions...
NB: the -f option of journalctl may prove handy.
| How can I list default lpr filters? |
1,636,736,061,000 |
I tried to filter my log file by the functionality
For example:
195.xx.x.x - - [13/Apr/2017:09:60:xx +0200] "POST /userx/index.php?m=contacts&xxxx...
192.xx.x.x - - [13/Apr/2017:09:45:xx +0200] "POST /userx/index.php?m=customer&xxxx...
197.xx.x.x - - [13/Apr/2017:09:10:xx +0200] "POST /userx/index.php?m=meeting&xxxx...
197.xx.x.x - - [13/Apr/2017:09:20:xx +0200] "POST /userx/index.php?m=dashboard&xxxx...
In this case my functionalities are contacts,customer,meeting,dashboard
I am trying to ignore the default welcome page. I used
awk '$7 !~ /m=dashboard/ ' logfile
My question is: can I ignore several functionalities that are listed in a file?
cat file:
dashboard
meeting
so that only these lines remain:
195.xx.x.x - - [13/Apr/2017:09:60:xx +0200] "POST /userx/index.php?m=contacts
192.xx.x.x - - [13/Apr/2017:09:45:xx +0200] "POST /userx/index.php?m=customer
|
sed '/\//!{H;d};G;/m=\(.*\)\n.*\1/d;P;d' file log
Explanation: First read file with the filter keywords, then the logfile. Lines containing no / are interpreted to be keywords and appended to the hold space (H). Other lines get the hold space appended (G) and are deleted if the keyword after the m= is repeated in the keyword list (/m=\(.*\)\n.*\1/d). If not, it's printed without the appended hold space (P).
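If the sed one-liner feels too terse, the same filtering can be sketched in awk, reading the keyword file into an array first. The file names (file for the keywords, logfile for the log) and the m= extraction are my assumptions about the question's setup, and the log lines are simplified:

```shell
printf '%s\n' dashboard meeting > file

cat > logfile <<'EOF'
195.xx.x.x - - [13/Apr/2017:09:60:xx +0200] "POST /userx/index.php?m=contacts
192.xx.x.x - - [13/Apr/2017:09:45:xx +0200] "POST /userx/index.php?m=customer
197.xx.x.x - - [13/Apr/2017:09:10:xx +0200] "POST /userx/index.php?m=meeting
197.xx.x.x - - [13/Apr/2017:09:20:xx +0200] "POST /userx/index.php?m=dashboard
EOF

# 1st file: remember the keywords; then drop lines whose m= value is one of them
awk 'NR==FNR { skip[$0]; next }
     match($0, /m=[^&]*/) { if (substr($0, RSTART+2, RLENGTH-2) in skip) next }
     1' file logfile
```

Only the contacts and customer lines survive; adding a keyword to the filter file needs no change to the command.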
| Filter a log file |
1,636,736,061,000 |
This is what I am trying to do:
I have a server using Postfix on a Ubuntu precise 64bits and I have a table list of emails in /etc/postfix/virtual, like this:
[email protected] [email protected]
[email protected] [email protected]
Now I want to put a filter that get all mails sent and add some prefix to the subject or add something else to the end of the mail.
In the file /etc/postfix/master.cf I put:
filter unix - n n - 10 pipe
flags=Rq user=filter argv=/home/filter/filtro.php -f ${sender} -- ${recipient}
I created the user filter and put the file /home/filter/filtro.php:
#!/usr/bin/php
<?php
$myFile = "/home/filter/testFile.txt";
$fh = fopen($myFile, 'a');
fwrite($fh, "\n-----------------------\n");
fwrite($fh, json_encode($_SERVER['argv']) );
?>
It was just to see if it's working, but it's not.
Can anyone shed some light on my problem?
Thanks!
|
You can use mimedefang configured as smtpd_milter with postfix. It can change/add/delete headers (action_change_header/action_insert_header/action_delete_header) and append text (append_text_boilerplate) to the mails. More info here
| Create a Filter for Postfix on Virtual Mails |
1,636,736,061,000 |
How do I search in a textfile with grep for the occurrence of a word or another word?
I want to filter the apache log file for all lines including "bot" or "spider"
cat /var/log/apache2/access.log|grep -i spider
shows only the lines including "spider", but how do I add "bot"?
|
use a basic regular expression (alternation with \| is a GNU extension):
grep -i 'spider\|bot'
or extended regex (or even perl regex -P):
grep -Ei 'spider|bot'
or multiple literal patterns (faster than a regular expression):
grep -Fi -e 'spider' -e 'bot'
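A quick check of the fixed-string variant on invented sample lines:

```shell
# made-up log excerpts: two should match, one should not
printf '%s\n' 'GET / Googlebot' 'GET / Mozilla' 'GET / spider-x' > access.sample

grep -Fi -e 'spider' -e 'bot' access.sample
```

Both matching lines come out in file order; "Googlebot" is caught by the case-insensitive "bot" pattern.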
| Use grep with or [duplicate] |
1,636,736,061,000 |
I use the following command successfully
rsync -e 'ssh' -avr [email protected]:/home/mikrotik /bck/mikrotik/
How can I add date filter to this command? I would like to sync only files that are newer than n days from remote dir [email protected]:/home/mikrotik to local dir /bck/mikrotik/
|
Unless you intentionally delete files periodically from /bck/mikrotik that are still present on the source system, or you have many thousands of files and you're seeing a time impact while rsync skips the files it's already transferred, your date filter shouldn't be necessary.
However, having said that you can use find to generate the set of candidate files for transfer. Here we're considering only files that have been created/modified within the last seven days:
ssh -n [email protected] 'cd /home/mikrotik && find . -type f -mtime -7 -print0' |
rsync -av --files-from='-' --from0 [email protected]:/home/mikrotik /bck/mikrotik/
If you don't have a version of find that supports -print0, replace it with -print and remove --from0 from the rsync. The difference is that you then won't be able to copy files containing an embedded newline in their name
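The find stage can be dry-run locally before wiring it into the ssh/rsync pipeline. The file names here are invented, and touch -d with a relative date is a GNU extension:

```shell
mkdir -p mikrotik-demo
touch mikrotik-demo/fresh.rsc                     # modified now
touch -d '10 days ago' mikrotik-demo/stale.rsc    # too old for the filter

# only files modified within the last 7 days are selected
find mikrotik-demo -type f -mtime -7 -print
```

Only fresh.rsc is printed, so only it would be handed to rsync.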
| Rsync with date filter over ssh |
1,636,736,061,000 |
I have log.txt with sample output like these:
...
10-Feb-2022 15:15:14.099 lorem
10-Feb-2022 15:15:15.133 ipsum
10-Feb-2022 15:15:16.233 dolor
...
I expect the output of filtered log.txt will be
...
1644480914 lorem
1644480915 ipsum
1644480916 dolor
...
I have figured out how to convert date string to timestamp
date --date='10-Feb-2022 15:15:14.099' +"%s"
Output:
1644480914
I still don't get how to apply that date command to log.txt.
Also, how do I pipe into that date command?
printf "10-Feb-2022 15.15.17.012 water" | date --date=<what must i put here?> + "%s"
I expect the output of the piped command to be similar, just without disturbing the other column, i.e. 1644480917 water
|
Your date command seems to support the -d option to convey a date/time. Try:
cut -d' ' -f-2 file | date +%s -f- | paste -d' ' - file | cut -d' ' -f1,4-
1644502514 lorem
1644502515 ipsum
1644502516 dolor
It cuts the date/time fields from the input, feeds them to the date command via the -f (read DATEFILE) option, pastes it back to the input file, and cuts out the old date/time fields.
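An alternative sketch with a plain shell loop, which is easier to follow but forks one date process per line, so it will be much slower on big logs (GNU date assumed; the file name log.sample is my choice):

```shell
cat > log.sample <<'EOF'
10-Feb-2022 15:15:14.099 lorem
10-Feb-2022 15:15:15.133 ipsum
10-Feb-2022 15:15:16.233 dolor
EOF

# read date and time fields, convert them, keep the rest of the line
while read -r day time rest; do
    printf '%s %s\n' "$(date -d "$day $time" +%s)" "$rest"
done < log.sample
```

Note that the resulting epoch numbers depend on the local timezone, which is also why the answer's output above differs from the numbers in the question.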
| date string column to timestamp column in a log file |
1,636,736,061,000 |
I have a question about a problem that I have to solve, I have my lines in this way:
Input
GTEX-1117F-0003-SM-58Q7G
GTEX-1117F-0003-SM-5DWSB
GTEX-111CU-0826-SM-5EGIJ
GTEX-111CU-0926-SM-5EGIK
GTEX-ZZPU-2726-SM-5NQ8O
GTEX-ZZPU-2626-SM-5E45Y
K-562-SM-2AXVE
K-562-SM-26GMQ
I have another file that tells me that the first letters are the "patients" (e.g. GTEX-1117F, GTEX-111CU, GTEX-ZZPU, and K-562).
I need a single command to find out which patient has the most samples.
Thus, I need to know, for example, how many samples the "patient" GTEX-1117F has; in this case it has 2.
Output required
GTEX-1117F 2
GTEX-111CU 2
GTEX-ZZPU 2
K-562 2
And then I need the "patient" with the most samples (e.g. K-562 140).
|
I will give a different sample so the count is more visible:
GTEX-1117F-0003-SM-58Q7G
GTEX-1117F-0003-SM-58Q7G
GTEX-1117F-0003-SM-5DWSB
GTEX-111CU-0826-SM-5EGIJ
GTEX-111CU-0926-SM-5EGIK
GTEX-ZZPU-2726-SM-5NQ8O
GTEX-ZZPU-2626-SM-5E45Y
K-562-SM-2AXVE
The command, assuming the patient id is in the format string-string:
$ cut -d'-' -f1,2 file | sort | uniq -c | awk '{ print $2, $1 }' | sort -rnk2 | head -1
GTEX-1117F 3
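The counting can also be done in one awk pass, avoiding the intermediate sort/uniq step entirely (file name samples.txt is my choice; data copied from above):

```shell
cat > samples.txt <<'EOF'
GTEX-1117F-0003-SM-58Q7G
GTEX-1117F-0003-SM-58Q7G
GTEX-1117F-0003-SM-5DWSB
GTEX-111CU-0826-SM-5EGIJ
GTEX-111CU-0926-SM-5EGIK
GTEX-ZZPU-2726-SM-5NQ8O
GTEX-ZZPU-2626-SM-5E45Y
K-562-SM-2AXVE
EOF

# count per patient (first two dash-separated fields), then pick the maximum
awk -F'-' '{ n[$1 "-" $2]++ } END { for (p in n) print p, n[p] }' samples.txt |
    sort -k2,2nr | head -n 1
```

Drop the head -n 1 to see the full per-patient table.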
| reading and discarding lines |
1,636,736,061,000 |
I'm trying to figure this out: how does the Ansible ipaddr filter work? It always seems to return False:
$ ansible -m debug -a 'msg={{"www.google.com"|ipv4}}' 10.1.38.15
10.1.38.15 | SUCCESS => {
"msg": false
}
|
The ipv4 filter is not a name resolution filter. It simply tests if the passed string is a valid IPv4 address.
If you want to resolve a DNS address you probably should be using the lookup plugin 'dig'.
https://docs.ansible.com/ansible/latest/plugins/lookup/dig.html
Example
$ ansible localhost -m debug \
-a 'msg={{lookup("dig","www.google.com/a",wantlist=true)|first}}'
localhost | SUCCESS => {
"msg": "172.217.14.196"
}
| Ansible hostname resolution with ipaddr filter returns False? |
1,636,736,061,000 |
I have a file where I want to grep for an md5 hash.
I was able to do that but how can I display the match to stdout?
When I do grep -e "[0-9a-f]\{32\}" file
I just get:
Binary file file matches.
Is there a way to print the result to stdout?
|
From https://www.gnu.org/software/grep/manual/html_node/Usage.html
Why does grep report “Binary file matches”?
If grep listed all matching “lines” from a binary file, it would probably generate output that is not useful, and it might even muck up your display. So GNU grep suppresses output from files that appear to be binary files. To force GNU grep to output lines even from files that appear to be binary, use the -a or ‘--binary-files=text’ option. To eliminate the “Binary file matches” messages, use the -I or ‘--binary-files=without-match’ option.
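As a small illustration (the file name and the md5 of the empty string are just placeholders), a single NUL byte is enough to make GNU grep classify a file as binary, and -a restores normal matching:

```shell
# the NUL byte makes grep treat the file as binary
printf 'hash: d41d8cd98f00b204e9800998ecf8427e\n\0\n' > bin.dat

# without -a, GNU grep would only report that the binary file matches;
# with -a the actual match is printed
grep -oa '[0-9a-f]\{32\}' bin.dat
```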
| grep for pattern |
1,636,736,061,000 |
as a newbie to linux kernel and all the commands, I am reaching out to you guys, hoping you can help me solve my issue.
When running the next command
sudo dmidecode -t 5
I get the following output:
# dmidecode 3.0
Getting SMBIOS data from sysfs.
SMBIOS 2.4 present.
Handle 0x0084, DMI type 5, 46 bytes
Memory Controller Information
Error Detecting Method: None
Error Correcting Capabilities:
None
Supported Interleave: One-way Interleave
Current Interleave: One-way Interleave
Maximum Memory Module Size: 32768 MB
Maximum Total Memory Size: 491520 MB
Supported Speeds:
70 ns
60 ns
Supported Memory Types:
FPM
EDO
DIMM
SDRAM
Memory Module Voltage: 3.3 V
Associated Memory Slots: 15
0x0085
0x0086
0x0087
0x0088
0x0089
0x008A
0x008B
0x008C
0x008D
0x008E
0x008F
0x0090
0x0091
0x0092
0x0093
Enabled Error Correcting Capabilities:
None
Is there any command to filter the output so I get the supported speeds (70ns, 60ns) in any way?
I tried
sudo dmidecode -t 5 | grep -i -e DMI -e speed
which gave me this output:
# dmidecode 3.0
Handle 0x0084, DMI type 5, 46 bytes
Supported Speeds:
but this doesn't output the following lines.
Any suggestions are very welcome, thanks!
|
This will list the supported speeds:
dmidecode | awk '/^\t[^\t]/ { speeds = 0 }; /^\tSupported Speeds:/ { speeds = 1 } /^\t\t/ && speeds'
This works by matching lines as follows:
lines starting with a single tab mean that we’re not expecting speeds;
lines starting with a single tab followed by “Supported Speeds:” mean that we are expecting speeds;
lines starting with two tabs when we are expecting speeds are output as-is.
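Since dmidecode indents with literal tabs, the filter can be tried offline on a reconstructed snippet; printf is used below so the tabs survive copy-paste (the snippet is abridged from the question's output):

```shell
{
    printf 'Memory Controller Information\n'
    printf '\tMaximum Total Memory Size: 491520 MB\n'
    printf '\tSupported Speeds:\n'
    printf '\t\t70 ns\n'
    printf '\t\t60 ns\n'
    printf '\tSupported Memory Types:\n'
    printf '\t\tFPM\n'
} > dmi.sample

awk '/^\t[^\t]/ { speeds = 0 }; /^\tSupported Speeds:/ { speeds = 1 } /^\t\t/ && speeds' dmi.sample
```

Only the two speed lines come out; the FPM line is suppressed because "Supported Memory Types:" reset the flag.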
| Filter dmidecode memory controller for supported speeds |
1,636,736,061,000 |
Extend of this question: How to combine strings from JSON values, keeping only part of the string?
The output from the link above also includes the "name" of folder-type entries; I need to exclude this "type":
"date_added": "13170994909893393",
"date_modified": "13184204204228233",
"id": "2098",
"name": "ElasticSearch",
"sync_transaction_version": "1",
"type": "folder"
How do I keep a field only if the "type" of the same object is "url", and ignore the object otherwise?
A valid pattern that will be put in the output:
"type": "url",
"url": "https://url_here"
|
Given an array of objects, jq can select the ones that fulfil a certain criteria using select().
It sounds like you may want to use
.array[] | select(.type == "url")
Given a JSON document like
{ "array": [ { "type": "folder", "name": "example folder" },
{ "type": "url", "name": "example url" } ] }
the above query will return
{
"type": "url",
"name": "example url"
}
| jq: parse json file with constraint from other field [duplicate] |
1,636,736,061,000 |
I'd like to filter an input like this
foo 2022-11-11
foo 2022-12-11
something else
bar 2022-12-07
to obtain
foo 2022-11-11
bar 2022-12-07
I'm starting with grep -oP "^[A-z]{3}" | sort -u but of course this will not print the full line.
|
I suggest to take only from first column to first column (-k 1,1) into consideration:
grep -E '^[[:alpha:]]{3} ' | sort -k 1,1 -u
Output:
bar 2022-12-07
foo 2022-11-11
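Running the pipeline on the question's exact input confirms the behaviour; with GNU sort, -u keeps the first line of each key group, so foo retains its earliest date:

```shell
printf '%s\n' 'foo 2022-11-11' 'foo 2022-12-11' 'something else' 'bar 2022-12-07' |
    grep -E '^[[:alpha:]]{3} ' | sort -k 1,1 -u
```

"something else" is dropped by the grep because its fourth character is not a space.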
| grep unique results but show full line containing the match |
1,636,736,061,000 |
I have some files in a folder. By running the following command I can create a list of the filtered filenames (test1, test2, test3) and pass it via xargs to a grep command; the filter file named in the command contains a few values to be filtered out.
ls -Art1 | grep test | xargs -L1 grep -vf filter > ouput
However, when I run this command, the output file contains the filtered results of test1, test2 and test3 all in one.
I would like to have a separate file for each input: test1 > test1_output, test2 > test2_output, ...
i.e. take the value passed by xargs and just append the string "_output" to it to generate an output filename.
|
There's no need to pipe the output of ls to search for a name match. Just use a shell glob. This also allows you to define an output file for each filter attempt
for file in *test*
do
[ -f "./$file" ] && grep -vf filter "./$file" >"${file}_output"
done
Technically this is potentially a slightly different set of file matches to your ls -A as your code considers dot files whereas mine does not. Fixable if relevant.
In your comment you mention you are performing two actions on each file. If I have understood you correctly, then for such a situation you can modify the code like this:
for file in *test*
do
if [ -f "./$file" ]
then
grep -f filter "./$file" >"${file}_delta"
grep -vf filter "./$file" >"${file}_output"
fi
done
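A self-contained dry run of the first loop (directory, file names and filter contents are all invented here):

```shell
mkdir -p loopdemo
printf '%s\n' keep drop      > loopdemo/test1
printf '%s\n' drop keep keep > loopdemo/test2
printf 'drop\n'              > loopdemo/filter

for file in loopdemo/*test*; do
    [ -f "$file" ] && grep -vf loopdemo/filter "$file" > "${file}_output"
done

head loopdemo/*_output
```

Note the glob is expanded once before the loop starts, so the freshly created *_output files are not themselves re-processed.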
| Passing value with xargs to generate dynamic output filename |
1,636,736,061,000 |
So I have files that I need to remove/move/filter.
All files in a directory follow the same naming pattern; let's say the directory frames contains files named timestamp_in_nanoseconds.jpg.
This is a sample of those files, using tail piped from ls: ls | tail. (I'm using tail because ls is too slow, maybe because there are too many files to list.)
1660649146201561661.jpg
1660649146411875151.jpg
1660649146622526505.jpg
1660649146832063432.jpg
1660649147042957234.jpg
1660649147254488848.jpg
1660649147466753015.jpg
1660649147889093171.jpg
1660649148193314525.jpg
1660649148786199681.jpg
What if I want to move files to another directory like frames2 within a specified range like this:
From 1660649147000000000.jpg
Until 1660649148000000000.jpg
Hence, frames2 dir will contain these files only:
1660649147042957234.jpg
1660649147254488848.jpg
1660649147466753015.jpg
1660649147889093171.jpg
|
Use a wildcard
mv 1660649147?????????.jpg frames2/
Depending on whether or not you mean to include the upper limit, maybe also
mv 1660649148000000000.jpg frames2/
If there are too many files for the wildcard to match without running out of buffer space, use find instead:
find . -name 'frames2' -prune -o -name '1660649147?????????.jpg' -exec mv -t frames2/ {} +
Notes
The -name 'frames2' -prune clause prevents find descending into the frames2 subdirectory. You don't need it (or the -o "or") if frames2 is actually elsewhere.
If your mv does not have the GNU extension -t, change the -exec clause to -exec mv {} frames2/ \;, but realise it will take considerably longer to complete.
In all cases, you can amend mv to echo mv to see the set of chosen files without actually moving them.
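A dry run with empty stand-in files shows the wildcard picking exactly the one-second window (timestamps copied from the listing above):

```shell
mkdir -p frames frames2
touch frames/1660649146832063432.jpg frames/1660649147042957234.jpg \
      frames/1660649147889093171.jpg frames/1660649148786199681.jpg

# ten fixed digits plus nine wildcards = the full 19-digit timestamp
mv frames/1660649147?????????.jpg frames2/
ls frames2
```

The 1660649146... and 1660649148... files stay behind in frames.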
| Remove/move files in a directory with filename timestamp pattern |
1,658,821,946,000 |
Sorry if this is a basic question, I don't use linux often but I have a 13GB file that I want to filter down.
I have 123 columns and I want to remove rows that have only a . in the 75th column.
I've been looking into how to do this, at the moment I've got:
awk '$75 !~/./ {print $0}' oldfile.txt > newfile.txt
Am I along the right lines? When I run this it outputs an empty file.
|
try
awk '$75 != "." ' oldfile.txt > newfile.txt
this will match 75 th column for an extact dot.
What you did,
awk '$75 !~/./ {print $0}'
keeps only rows whose 75th column contains no character at all, because in a regexp . matches any single character.
More precisely,
awk '$75 ~ /./ '
matches any row whose 75th column has at least one character (which is every row, if you have at least 75 columns).
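A synthetic check, since the real 13 GB file obviously isn't at hand; the data is generated so that only one row has a lone "." in column 75:

```shell
# 3 rows of 80 columns; only row 2 has "." in column 75
awk 'BEGIN {
    for (r = 1; r <= 3; r++) {
        $0 = ""
        for (c = 1; c <= 80; c++) $c = (r == 2 && c == 75) ? "." : "x"
        print
    }
}' > wide.sample

awk '$75 != "."' wide.sample | wc -l    # 2 of the 3 rows survive
```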
| How to filter rows by one column? |
1,658,821,946,000 |
Suppose I have log.txt
The sample of log.txt are like these format:
Code Data Timestamp
...
C:57 hello 1644498429
C:56 world 1644498430
C:57 water 1644498433
...
If I want to filter lines that contain C:57, I can achieve it with
cat log.txt | grep C:57
then I redirect the output to a new file, hence
cat log.txt | grep C:57 > filtered_log.txt
However, whenever log.txt changes I have to execute that command again. I want it to run periodically, or on every change to the file, or only when a new line containing the string C:57 arrives.
|
You can use tail -f thusly:
tail -f log.txt|grep C:57 >> filtered_log.txt
This continuously reads log.txt, grepping for the token C:57 and appending any matches to filtered_log.txt.
The use of cat to read the log and pipe that to grep is a useless use of cat. grep can directly read a file. You're wasting I/O by combining a cat and a grep.
The one drawback here is that the appearance of filtered output may be delayed due to buffering. This can be circumvented with:
tail -f log.txt|grep --line-buffered C:57 >> filtered_log.txt
or by using the stdbuf command to unbuffer grep's output:
tail -f log.txt|stdbuf -o0 grep C:57 >> filtered_log.txt
| Filter changing file periodically and redirect filtered output to new file |
1,658,821,946,000 |
I want to sync all *.sh files in exactly one sub directory. I tried this cmd but all files in a directory are synced instead of only particular file types.
rsync -vr -n --include="*.sh" --exclude="*/*/" --prune-empty-dirs /source /target
I also tried adding a filter like
--filter="+ *.sh"
but this did not change the result.
Another filter
rsync -vr -n --include="*.sh" --filter="-! *.sh" --exclude="*/*/" --prune-empty-dirs
gives me an empty list. If I exclude "*" I also exclude "*.sh"...
What is wrong? Thanks!
The depth should be one - the name of the subdirectory is not known. For example,
home/subdirectory1/subsubdirectory/subsubsubdirectory/file.sh
home/subdirectory1/file1.sh
home/subdirectory2/file2.sh
home/subdirectory3/file
In the above example rsync should start in home, sync directory hierarchy and only the files file1.sh and file2.sh
|
Your examples don't seem to match your description. I think what you are saying you want is this,
match all the *.sh files in an unknown subdirectory immediately underneath your /home
put the files in the unknown subdirectory on the target
do not include /home in the destination path
Looking from /home, this command will match all files */*.sh and copy them and their partial paths to the target. (Remove the --dry-run if you are happy with the intended result.)
rsync --dry-run -avR /home/./*/*.sh /target
For example, /home/subdir/file.sh will be copied to /target/subdir/file.sh
| rsync only particular file types in subdirectories with a fixed maximum depth |
1,658,821,946,000 |
I'm trying to use the following command to extract a time period for the current day and grep for a specific message, as you can see below:
awk -v date="$(date +%Y%m%d)" '$1="date" && $3>"180000" && $3<"192000"' /app/exploit/log/FILE_send.log | grep "End success file transfer /app/reception/FILENAME.dat to HOSTX"
But I'm having trouble to filter by the date column which is the first.
Here's the output of the awk command when I remove the variable from the awk command and specify a date on $1=20190405:
1 - 180050 | INFO | FILE_SEND.sh | | FILENAME.dat | 1800498307000 | End success file transfer /app/reception/FILENAME.dat to HOSTX
Here is the line that log has:
20190405 - 180050 | INFO | FILE_SEND.sh | | FILENAME.dat | 1800498307000 | End success file transfer /app/reception/FILENAME.dat to HOSTX
I can't see why the column gets replaced by 1.
How can I filter the first column for the current day?
|
The problem with your original command is that = is an assignment in awk, not a comparison, and && binds tighter than =, so $1="date" && $3>"180000" && $3<"192000" assigns the truth value (1 or 0) of the whole expression to $1; that is why the first column shows up as 1. Use == and the unquoted variable name instead. You can use the command:
awk -F'[-|]' -v date="20190405" '$1 == date && $2>180000 && $2<192000' file_name | grep "End success file transfer /app/reception/FILENAME.dat to HOSTX"
The output of awk command will be:
20190405 - 180050 | INFO | FILE_SEND.sh | | FILENAME.dat | 1800498307000 | End success file transfer /app/reception/FILENAME.dat to HOSTX
With variable date use:
awk -F'[-|]' -v date="$(date +%Y%m%d)" '$1 == date && $2>180000 && $2<192000' filename
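A reconstructed check on sample data; the first log line is copied from the question, while the second line with time 193000 is invented to show a record falling outside the window:

```shell
cat > send.sample <<'EOF'
20190405 - 180050 | INFO | FILE_SEND.sh | | FILENAME.dat | 1800498307000 | End success file transfer /app/reception/FILENAME.dat to HOSTX
20190405 - 193000 | INFO | FILE_SEND.sh | | FILENAME.dat | 1930008307000 | End success file transfer /app/reception/FILENAME.dat to HOSTX
EOF

# -F'[-|]' splits on both "-" and "|", so $1 is the date and $2 the time
awk -F'[-|]' -v date="20190405" '$1 == date && $2 > 180000 && $2 < 192000' send.sample
```

Only the 180050 line is printed; 193000 falls outside the 180000..192000 window.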
| awk filtering one column log file |
1,658,821,946,000 |
HTML bookmarks saved from Chrome or other browsers are frequently exported as an HTML file full of <a href tags that I'd like to filter and rearrange:
<a href="https://<a-web-site>">Title of the website</a>
How to use basic linux's util like sed/grep/awk to filter and arrange items like:
Title of the website https://<a-web-site>
|
With sed:
$ echo '<a href="https://<a-web-site>">Title of the website</a>' | sed -e 's|.*href="\(.*\)".*>\(.*\)</a>|\2 \1|g'
Title of the website https://<a-web-site>
| Filter on html tag |
1,658,821,946,000 |
I'd like to parse incoming mail with a custom script, something that would be easily done with aliases(5) for real users or with procmail for virtual users.
But my system runs Plesk and the setup is: virtual_transport = plesk_virtual, so I cannot configure it as virtual_transport = procmail.
What is plesk_virtual anyway, and where are its configuration files and documentation? (Searching "plesk_virtual" site:https://docs.plesk.com/ finds nothing, and neither does the Mail doc.)
|
OK, at least I've found that plesk_virtual is an external delivery method, defined in /etc/postfix/master.cf:
plesk_virtual unix - n n - - pipe flags=DORhu user=popuser:popuser
argv=/usr/lib64/plesk-9.0/postfix-local -f ${sender} -d ${recipient} -p
/var/qmail/mailnames
Thus it's a binary /usr/lib64/plesk-9.0/postfix-local.
source, graph on page 3
| Mail processing with custom script on Postfix and Plesk (plesk_virtual) |
1,658,821,946,000 |
I am building a wordlist containing words like error, fail, kill, warn, out of, over, too, etc.,
so that I can filter tons of logs for issues within seconds using grep.
In particular, it's for digging through Linux logs.
Found 1: https://github.com/cornet/ccze
static char *words_bad[] = {
"warn", "restart", "exit", "stop", "end", "shutting", "down", "close",
"unreach", "can't", "cannot", "skip", "deny", "disable", "ignored",
"miss", "oops", "not", "backdoor", "blocking", "ignoring",
"unable", "readonly", "offline", "terminate", "empty", "virus"
};
static char *words_error[] = {
"error", "crit", "invalid", "fail", "false", "alarm", "fatal"
};
Found 2: https://raygun.com/platform/crash-reporting
So, my question is: does a wordlist for these kinds of bad words already exist?
Sorry for the typos.
Thank you.
|
Shell wrapper with wordlist
grepbad() {
grep --color=always -i "warn\|restart\|exit\|stop\|end\|shutting\|down\|close\|\
unreach\|can't\|cannot\|skip\|deny\|disable\|ignored\|\
miss\|oops\|not\|backdoor\|blocking\|ignoring\|\
unable\|readonly\|offline\|terminate\|empty\|virus" "$@"
}
grepgood() {
grep --color=always "activ\|start\|ready\|online\|load\|ok\|register\|detected\|\
configured\|enable\|listen\|open\|complete\|attempt\|done\|\
check\|listen\|connect\|finish\|clean" "$@"
}
greperror() {
grep --color=always -i 'error\|crit\|invalid\|fail\|false\|alarm\|fatal\|over\|too\|out of\|kill\|exception\|ban\|not' "$@"
}
grepsystem() {
grep --color=always "ext2-fs\|reiserfs\|vfs\|iso\|isofs\|cslip\|ppp\|bsd\|\
linux\|tcp/ip\|mtrr\|pci\|isa\|scsi\|ide\|atapi\|\
bios\|cpu\|fpu\|discharging\|resume" "$@"
}
Screenshot
| searching logs for failures using grep with wordlist? |
1,658,821,946,000 |
I have a program that returns some content every time I run it. Some of that content may already have been shown during the previous run.
foo
bar
baz
and on next execution it might show two "old lines" and one "new line":
bar
baz
house
Is it possible to filter out and save previously unseen content of these commands to a file, so that I end up with
foo
bar
baz
house
Note that I run this command very irregularly, so there may be minutes up to days between two executions.
|
If you have sponge from the moreutils package installed:
$ printf '%s\n' foo bar baz > log # first call
$ printf '%s\n' bar baz house | grep -v -F -x -f log | sponge -a log # subsequent calls
$ cat log
foo
bar
baz
house
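If sponge isn't available, a temporary file does the same job (the file names seen.log and seen.tmp are my choice):

```shell
printf '%s\n' foo bar baz > seen.log                             # first call

printf '%s\n' bar baz house | grep -vFx -f seen.log > seen.tmp   # subsequent calls
cat seen.tmp >> seen.log
rm -f seen.tmp

cat seen.log
```

One caveat: grep exits non-zero when nothing new arrives, which matters if the script runs under set -e.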
| Save "unseen" output of command to file |
1,658,821,946,000 |
All I need is the Zip file name.
In the first step I searched for the author:
egrep -ni -B1 --color "$autor: test_autor" file_search_v1.log > result1.log
which worked; the result was:
zip: /var/www/dir_de/html/dir1/dir2/7890971.zip
author: test_autor
zip: /var/www/dir_de/html/dir1/dir2/10567581.zip
author: test_autor
But, as mentioned above, all I need is the zip file name.
In the second step I tried to filter the result of the first search again:
egrep -ni -B1 --color "$autor: test_autor" file_search_v1.log | xargs grep -i -o "\/[[:digit:]]]\.zip"
to search only for the filename, but unfortunately this does not work.
My question:
How should the second grep filter look so that I only get the zip file name?
|
grep -B1 "$autor: test_autor" file_search_v1.log | grep -o "[^/]*\.zip$"
Change the first grep as needed. The second grep filters out the parts at the end of the line containing non-/ characters followed by the .zip suffix.
If you know that your zip file names contain only digits, you could exchange [^/] with [[:digit:]].
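Put together on the snippet from the question (the literal author string stands in for the $autor variable here):

```shell
cat > file_search_v1.log <<'EOF'
zip: /var/www/dir_de/html/dir1/dir2/7890971.zip
author: test_autor
zip: /var/www/dir_de/html/dir1/dir2/10567581.zip
author: test_autor
EOF

grep -B1 "author: test_autor" file_search_v1.log | grep -o "[^/]*\.zip$"
```

Only the bare file names come out, because the second grep keeps just the trailing non-/ run ending in .zip.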
| grep with filter grep, how? |
1,658,821,946,000 |
I have a file.txt (it does not have the same number of columns for each row):
e.g.
1 21:10 21:23
2 1:94 1:100 1:123
3 14:1 14:60 14:23
I have another file (file2.txt) that contains 4 columns (separated with " ")
a 21 20 60
b 2 80 90
c 14 50 100
d 2 10 20
e 14 1 12
I want to check the initial part of each entry in a row of the 1st file (21, 1, 14; it is the same within each row) and select the entries whose part after ":" lies between the values in the second file (3rd and 4th columns), iff the 2nd column there equals the part before ":".
e.g. of computation:
> file.txt: 1st row: 21:10 21:23 --> 21 is in row a of file2.txt, so 20<10<60 NO,
> but 20<23<60 is true so I take it... and so on: I check each row of file2.txt
another example:
file.txt: 3rd row: 14:1 14:60 14:23 --> 14 is in rows c and e of file2.txt, so
50<1<100 NO (for c)
50<60<100 YES (for c)
50<23<100 NO (for c)
1<1<12 YES (for e)
1<60<12 NO (for e)
1<23<12 NO(for e)
If an entry lies between the two values for at least one row of file2.txt, I take it.
Ex. of results:
1 21:23
2 14:1 14:60
(row 2 is eliminated because 1 is not contained in any cell of column 2 of file2.txt)
|
Assuming you mean <= instead of < since 1<1<12 is not YES:
$ cat tst.awk
NR==FNR {
cnt[$2]++
beg[$2,cnt[$2]] = $3
end[$2,cnt[$2]] = $4
next
}
{
out = ""
for (i=2; i<=NF; i++) {
split($i,parts,/:/)
key = parts[1]
for (j=1; j<=cnt[key]; j++) {
if ( (beg[key,j] <= parts[2]) && (parts[2] <= end[key,j]) ) {
out = out OFS $i
break
}
}
}
if ( out != "" ) {
print ++outNr out
}
}
$ awk -f tst.awk file2 file1
1 21:23
2 14:1 14:60
The above was tested using the 2 input files you provided:
$ tail file1 file2
==> file1 <==
1 21:10 21:23
2 1:94 1:100 1:123
3 14:1 14:60 14:23
==> file2 <==
a 21 20 60
b 2 80 90
c 14 50 100
d 2 10 20
e 14 1 12
and the expected output you provided.
| Filtering a .txt file using a regular expression and a second file |
1,658,821,946,000 |
I have a list of directories similar to the below in a .txt file
/Season_1/101
/Season_1/101/Thumbnails
/Season_1/101/Thumbnails/Branded
/Season_1/101/massive_screengrabs
/Season_1/102/massive_screengrab
/Season_1/102/thumbnails
/Season_1/102/thumbnails/Branded
/Season_1/103/Thumbnails
/ARCHIVE/480x360 v6/Season 2
/ARCHIVE/480x360 v6/Season 3
/ARCHIVE/480x360 v6/Season 4
I'm looking for a way to filter out directories based on the shortest common root directory when compared to the rest of the list. The results would look like the below.
/Season_1/101
/Season_1/102/massive_screengrab
/Season_1/102/thumbnails
/Season_1/103/Thumbnails
/ARCHIVE/480x360 v6/Season 2
/ARCHIVE/480x360 v6/Season 3
/ARCHIVE/480x360 v6/Season 4
This also needs to work with all sorts of randomly named directories, so anything that relies on a specific string like "/Season_1/101" to solve this particular example would not work, as the directories could be named anything.
Any help is greatly appreciated.
|
The following command will work with text files that do not contain blank lines. If you need to accommodate blank lines some modification would be required.
sort textfile | awk 'BEGIN { FS="/" }; { if ( NR == 1 || $0 !~ lastField ) { print $0; lastField = $NF } }' > newtextfile
Where textfile is your text file and newtextfile is where you want to output the results to. You can omit > newtextfile if you want to see the results on standard output.
The file is first sorted so that the shortest version of any group of similar paths comes first, setting it up for awk to iterate record by record. Awk then checks whether the last component of the most recently printed line is contained in the current line, and only prints lines which don't have that duplication.
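As a hedged sanity check, the pipeline can be run against the sample listing from the question (the scratch file name dirs.txt below is invented for the demo). Because of the sort, the /ARCHIVE lines come out first:

```shell
# Sample list from the question, written to a scratch file (name assumed)
cat > dirs.txt <<'EOF'
/Season_1/101
/Season_1/101/Thumbnails
/Season_1/101/Thumbnails/Branded
/Season_1/101/massive_screengrabs
/Season_1/102/massive_screengrab
/Season_1/102/thumbnails
/Season_1/102/thumbnails/Branded
/Season_1/103/Thumbnails
/ARCHIVE/480x360 v6/Season 2
/ARCHIVE/480x360 v6/Season 3
/ARCHIVE/480x360 v6/Season 4
EOF

# Sort, then skip every line that still contains the last path
# component of the most recently printed line
sort dirs.txt |
  awk 'BEGIN { FS="/" }; { if ( NR == 1 || $0 !~ lastField ) { print $0; lastField = $NF } }'
```

One caveat: lastField is matched as a regular expression, so a last path component containing regex metacharacters (dots, brackets, and so on) could skip lines it shouldn't.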
| Filter directory list text file based on the short common root directory |
1,658,821,946,000 |
I try to filter my log file by request. I want to filter all the request(that you can find on the 7th column:/userx/index...) that have (m=xxx and a=xxx) or (m=xxx and doajax=xxx) and only have the request with those parameters
For example:
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?m=xxxx&doajax=xxxx&action=xxxxx&id=x
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?detailed=1&id=amgervais
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?m=xxx&a=xxxx&dialog=x&actionId=x&prospectId=xx
result of the filter:
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?m=xxxx&doajax=xxxx&action=xxxxx
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?m=xxx&a=xxxx
I tried to use this command to find the requests which have m=xxx and a=xxx, but I don't know how to handle the other case (m=xxx and doajax=xxx) at the same time.
awk '$7 ~ /m=/' logfile | awk '$7 ~ /&a=/'
|
What's wrong with
awk '( $7 ~ /m=xxx/ ) && (( $7 ~ /a=xxx/ ) || ( $7 ~ /doajax=xxx/ )) {
split($7,A,"&") ; $7 = A[1] "&" A[2] ; print ;} ' logfile
where
&& stands for logical and,
|| for logical or,
split($7,A,"&") splits the 7th field into the array A, using & as the separator,
$7 = A[1] "&" A[2] changes the 7th field (in memory, not in the file) to the selected sub-fields,
print prints the resulting line.
(this can be one-lined; I broke the line for readability).
this give
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?m=xxxx&doajax=xxxx
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?m=xxx&a=xxxx
If you want full line with doajax :
awk '/doajax/ { print ; next ; }
( $7 ~ /m=xxx/ ) && ( $7 ~ /a=xxx/ ) { split($7,A,"&") ; $7 = A[1] "&" A[2] ;print ;}'
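For a quick check, the first variant can be run against the three sample lines from the question (written to a scratch file named logfile, as assumed throughout):

```shell
# Sample log lines from the question (scratch file name assumed)
cat > logfile <<'EOF'
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?m=xxxx&doajax=xxxx&action=xxxxx&id=x
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?detailed=1&id=amgervais
192.xx.x.x - - [11/Apr/2017:09:59:xx +0200] "POST /userx/index.php?m=xxx&a=xxxx&dialog=x&actionId=x&prospectId=xx
EOF

# Keep requests with (m= and a=) or (m= and doajax=),
# trimming field 7 down to its first two parameters
awk '( $7 ~ /m=xxx/ ) && (( $7 ~ /a=xxx/ ) || ( $7 ~ /doajax=xxx/ )) {
     split($7,A,"&") ; $7 = A[1] "&" A[2] ; print ;}' logfile
```

This prints the two trimmed lines shown earlier in the answer; the middle sample line, which has no m= parameter, is dropped.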
| filter a log file by request |
1,658,821,946,000 |
Is there any distribution with a built in solution for web content filtering for my network?
|
Ubuntu has Squid in its repository, and it is easy to configure. I do believe other distros have it as well.
https://help.ubuntu.com/lts/serverguide/squid.html
| Linux Distrib with build in Web filter solution |
1,658,821,946,000 |
I am having the following problem in linux:
I need to find specific files from a directory and copy those files (if they exist) into another directory
this is the command (I need to find specific text files starting with the word log) and only take the last 10 of them; the command works
find /mydir -type f -name 'log*.txt' | tail -n 10
there is no recursion involved; I can just find the files and copy them
however i am having a problem combining this with a copy command ; i tried this:
find /mydir -type f -name 'log*.txt' | tail -n 10 -exec cp --parents \{\} /tmp/mydir \;
it can't execute -exec , something is wrong here.
thanks
|
I would use xargs to copy all files.
You can pipe the output to xargs which executes cp to all passed arguments.
The Wikipedia article describes xargs better than I could do and you could look into the manpage of Linux.
A call that does what you want could be:
find /mydir -type f -name 'log*.txt' | tail -n 10 | xargs -I % cp % /tmp/mydir/
The -I % option defines % as a placeholder: for each input line, xargs substitutes the file name for % in the cp command
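A throwaway end-to-end run in a scratch directory created for the demo (the file names and paths below are invented, not the /mydir and /tmp/mydir from the question):

```shell
# Build a scratch tree with a few matching files and one non-matching file
tmp=$(mktemp -d)
mkdir -p "$tmp/mydir" "$tmp/dest"
touch "$tmp/mydir/log1.txt" "$tmp/mydir/log2.txt" "$tmp/mydir/log3.txt" "$tmp/mydir/notes.md"

# Find matching files, keep the last 10 lines, copy each one
find "$tmp/mydir" -type f -name 'log*.txt' | tail -n 10 | xargs -I % cp % "$tmp/dest/"
ls "$tmp/dest"
```

Two caveats: tail -n 10 keeps the last ten lines of find's output, which is not sorted by modification time, so these are not necessarily the ten most recent files; and -I % substitution can break on file names containing newlines, quotes, or leading blanks.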
| How to combine find and cp in linux to copy specific files |
1,658,821,946,000 |
Currently I have the following bash script that gets the width and height of all the files in the directory:
ls | cat -n | while read n f; do
width=$(identify -format "%w" "$f")
height=$(identify -format "%h" "$f")
echo "$width , $height"
done
How can I only get the height/width of files where the filename ends in -example99.jpg?
|
for filename in *-example99.jpg
do
    width=$(identify -format "%w" "$filename")
    height=$(identify -format "%h" "$filename")
    echo "$width , $height"
done
| bash filter the file names by regex or similar in a while loop |
1,658,821,946,000 |
I want to filter out any comma and any double quote mark from the output of some command, for some entries.
Pseudocode:
removechar --any -, -"
Current output could look like any of these
lorem, ipsum " dolor ,"
",,lorem,, ipsum ,,, """ dolor ","
,lorem ipsum ,,, """ dolor ,
Desired output:
lorem ipsum dolor
lorem ipsum dolor
lorem ipsum dolor
Update
I might also need to remove any redundant whitespace character, for example:
a, b"
will become
ab
Question
How to strip characters by argument?
|
You could use tr:
<input tr -d ',"' >output
or, to remove the comma and quote characters and squeeze adjacent spaces (as shown in your desired output)
<input tr -d ',"' | tr -s ' ' >output
or more generally to remove all punctuation and squeeze all horizontal whitespace
<input tr -d '[:punct:]' | tr -s '[:blank:]' >output
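Checking the second variant against the three sample lines from the question:

```shell
# Delete commas and double quotes, then squeeze runs of spaces
printf '%s\n' \
  'lorem, ipsum " dolor ,"' \
  '",,lorem,, ipsum ,,, """ dolor ","' \
  ',lorem ipsum ,,, """ dolor ,' |
  tr -d ',"' | tr -s ' '
```

Each line comes out as lorem ipsum dolor, though note that tr -s only squeezes repeated spaces, so a single leading or trailing space can remain; a final sed 's/^ *//; s/ *$//' would trim those if needed.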
| How to strip characters by argument? |
1,658,821,946,000 |
I am struggling with a filter where I am trying to trim data in a specific column of a CSV after the 3rd (or nth) occurrence of the character \.
My data looks something like this:
data,data,c:\path1\folder2\folder3\folder4\...,data,data,data
data,data,c:\path1\folder2\folder3\folder4\...,data,data,data
data,data,c:\path1\folder2\folder3\folder4\...,data,data,data
data,data,c:\path1\folder2\folder3\folder4\...,data,data,data
I want the filter to produce:
data,data,c:\path1\folder2\folder3\,data,data,data
The 3rd column contains a file path, and it may be anywhere from one folder to many. I want there to be a maximum of 3 folders.
I don't want to delete the other remaining columns but edit the file in place.
I've been experimenting with awk, sed, and trying to combine the cut command cut -f1-4 -d '\' into an awk statement but cannot for the life of me get this to work.
|
awk -F "\\" '{gsub(/\.*,/,",",$0);print $1"\\"$2"\\"$3"\\"$4$NF}' file.txt
data,data,c:\path1\folder2\folder3,data,data,data
data,data,c:\path1\folder2\folder3,data,data,data
data,data,c:\path1\folder2\folder3,data,data,data
data,data,c:\path1\folder2\folder3,data,data,data
Python
#!/usr/bin/python
import re
qw=re.compile(r'\.*')
k=open('file.txt','r')
for i in k:
respa=re.sub(qw,"",i.strip()).strip().split('\\')
print "{0}\\{1}\\{2}\\{3}{4}".format(respa[0],respa[1],respa[2],respa[3],respa[-1])
output
data,data,c:\path1\folder2\folder3,data,data,data
data,data,c:\path1\folder2\folder3,data,data,data
data,data,c:\path1\folder2\folder3,data,data,data
data,data,c:\path1\folder2\folder3,data,data,data
| Trim pathname in CSV file |
1,658,821,946,000 |
I have a csv file and I need to filter it out into two files based on whether the last column contains the word "ecDNA". I already have two more copies of the file to edit without changing the original file. Is there any way I can delete all the lines that do not contain "ecDNA" from one file and only retain lines that contain "ecDNA" from another copy of the file?
|
awk -F, '$NF ~ /ecDNA/' oldfile > newfile
NF is the number of fields (columns) on the current input line, so $NF is the value (contents) of the last field. If $NF contains "ecDNA", then print the line. Otherwise, ignore it.
If you need the match to be case-insensitive (and you're using GNU awk), use:
awk -F, -v IGNORECASE=1 '$NF ~ /ecDNA/' oldfile > newfile
For the inverted match (lines without ecDNA in the last field), negate the condition operator:
awk -F, '$NF !~ /ecDNA/' oldfile > newfile2
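For example, with a small invented CSV (the data below is made up for illustration; oldfile, newfile, and newfile2 follow the names used above):

```shell
# Toy CSV whose last column decides which file each line goes to
cat > oldfile <<'EOF'
chr1,1000,5000,ecDNA
chr2,2000,6000,linear
chr3,3000,7000,ecDNA_amplicon
EOF

awk -F, '$NF ~ /ecDNA/'  oldfile > newfile    # lines whose last field mentions ecDNA
awk -F, '$NF !~ /ecDNA/' oldfile > newfile2   # everything else
```

Since ~ is a regex (substring) match, ecDNA_amplicon also lands in newfile; anchor the pattern (/^ecDNA$/) if only exact matches should count.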
| How to compile lines with a certain word in the last column into a separate file? |
1,658,821,946,000 |
I am able to get the file difference using git diff command and I got it filtered like below:
-This folder contains common database scripts.
+This folder contains common database scripts.
+
+
+
+New Line added.
However I want to be able to get only the difference, that is, the line New Line added. How can I achieve that? Note that here I want to delete the pair of lines containing
'+This folder contains common database scripts.' and
'-This folder contains common database scripts.'
and also remove the whitespace-only lines (the three bare '+' lines)
|
Try This:
If +New Line added. is the last line of the output of git diff:
git diff | tail -1 | tr -d '\n'
If you want to get rid of +
git diff | tail -1 | sed -e 's/^+//' | tr -d '\n'
| Filter the below text using shell commands |
1,297,916,768,000 |
I want to run a java command once for every match of ls | grep pattern -. In this case, I think I could do find pattern -exec java MyProg '{}' \; but I'm curious about the general case - is there an easy way to say "run a command once for every line of standard input"? (In fish or bash.)
|
That's what xargs does.
... | xargs command
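One nuance worth noting: by default xargs splits its input on any whitespace and packs as many arguments as possible into each invocation. For strictly one run per input item, add -n 1 (or use -I with a placeholder, which also keeps blanks within a line intact). A small illustration, with echo standing in for the real command:

```shell
# One invocation of "echo run:" per input item
printf '%s\n' alpha beta gamma | xargs -n 1 echo run:
# run: alpha
# run: beta
# run: gamma
```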
| Execute a command once per line of piped input? |
1,297,916,768,000 |
In Fish when you start typing, autocompletion automatically shows the first autocompleted guess on the line itself.
In zsh you have to hit tab, and it shows the autocompletion below. Is there anyway to make zsh behave more like fish in this regard?
(I am using Oh My Zsh...)
|
I have implemented a zsh-autosuggestions plugin.
It should integrate nicely with zsh-history-substring-search and zsh-syntax-highlighting which are features ported from fish.
| How to make zsh completion show the first guess on the same line (like fish's)? |
1,297,916,768,000 |
I have recently come across a file whose name begins with the character '♫'. I wanted to copy this file, feed it into ffmpeg, and reference it in various other ways in the terminal. I usually auto-complete weird filenames but this fails as I cannot even type the first letter.
I don't want to switch to the mouse to perform a copy-paste maneuver. I don't want to memorize a bunch of codes for possible scenarios. My ad hoc solution was to switch into vim, paste !ls and copy the character in question, then quit and paste it into the terminal. This worked but is quite horrific.
Is there an easier way to deal with such scenarios?
NOTE: I am using the fish shell if it changes things.
|
If the first character of the file name is printable but neither alphanumeric nor whitespace, you can use the [[:punct:]] glob operator:
$ ls *.txt
f1.txt f2.txt ♫abc.txt
$ ls [[:punct:]]*.txt
♫abc.txt
| Dealing with file names with special first characters (ex. ♫) |
1,297,916,768,000 |
bash and fish scripts are not compatible, but I would like to have a file that defines some some environment variables to be initialized by both bash and fish.
My proposed solution is defining a ~/.env file that would contain the list of environment variables like so:
PATH="$HOME/bin:$PATH"
FOO="bar"
I could then just source it in bash and make a script that converts it to fish format and sources that in fish.
I was thinking that there may be a better solution than this, so I'm asking for better way of sharing environment variables between bash fish.
Note: I'm using OS X.
Here is an example .env file that I would like both fish and bash to handle using ridiculous-fish's syntax (assume ~/bin and ~/bin2 are empty directories):
setenv _PATH "$PATH"
setenv PATH "$HOME/bin"
setenv PATH "$PATH:$HOME/bin2"
setenv PATH "$PATH:$_PATH"
|
bash has special syntax for setting environment variables, while fish uses a builtin. I would suggest writing your .env file like so:
setenv VAR1 val1
setenv VAR2 val2
and then defining setenv appropriately in the respective shells. In bash (e.g. .bashrc):
function setenv() { export "$1=$2"; }
. ~/.env
In fish (e.g. config.fish):
function setenv; set -gx $argv; end
source ~/.env
Note that PATH will require some special handling, since it's an array in fish but a colon delimited string in bash. If you prefer to write setenv PATH "$HOME/bin:$PATH" in .env, you could write fish's setenv like so:
function setenv
if [ $argv[1] = PATH ]
# Replace colons and spaces with newlines
set -gx PATH (echo $argv[2] | tr ': ' \n)
else
set -gx $argv
end
end
This will mishandle elements in PATH that contain spaces, colons, or newlines.
The awkwardness in PATH is due to mixing up colon-delimited strings with true arrays. The preferred way to append to PATH in fish is simply set PATH $PATH ~/bin.
| Share environment variables between bash and fish |
1,297,916,768,000 |
At the moment I need to set the fish shell to be my default shell on NixOS and there is no official documentation on how to do that declaratively (not by running chsh) in NixOS.
|
In your configuration.nix,
{ pkgs, ... }:
{
...
programs.fish.enable = true;
users.users.<myusername> = {
...
shell = pkgs.fish;
...
};
}
Followed by nixos-rebuild switch.
More info in NixOS Wiki.
| How to change the default shell in NixOS? |
1,297,916,768,000 |
Using Bash I've often done things like cd /study && ls -la
I understand that the double ampersand is telling the terminal don't execute part two of this command unless part one completes without errors.
My question is, Having just moved to the Fish shell and trying the same command I get an error stating I can't use && and instructing me to use a single & which I believe backgrounds the task which isn't what I want.
Can anyone tell me the correct syntax to run my old Bash command in the Fish shell?
|
Instead of &&, which doesn't exist in fish (prior to version 3.0, which added support for it), use ; and the command and:
cd /study; and ls -la
According to the fish tutorial:
Unlike other shells, fish does not have special syntax like && or || to combine commands. Instead it has commands and, or, and not.
| Run a command only if the previous command was successful in Fish (like && in bash) |
1,297,916,768,000 |
Is there a way I can do something like run myscript.sh in fish ?
I am using Arch Linux, and have installed the fish shell together with oh-my-fish
Can someone tell me which file I must edit to add my custom shell startup commands?
In zsh it was the ~/.zshrc file. What is it in the fish shell?
I have a problem: if I put my stuff in bashrc it is not loaded by fish. If I enter bash commands in the fish file ~/.config/fish/config.fish, it throws errors
Is there a way to get fish to load an "sh" file so that I can put all my bash things in that file?
|
I tried sourcing .profile on fish startup and it worked like a charm for me.
Just do:
echo 'source ~/.profile;clear;' > ~/.config/fish/config.fish
Quit Terminal (or iTerm2), start it again, and fire up an alias from .profile to test.
| How to edit the fish shell startup script? |
1,297,916,768,000 |
I created a directory d and a file f inside it. I then gave myself only read permissions on that directory. I understand this should mean I can list the files (e.g. here), but I can't.
will@wrmpb /p/t/permissions> ls -al
total 0
drwxr-xr-x 3 will wheel 102 4 Oct 08:30 .
drwxrwxrwt 16 root wheel 544 4 Oct 08:30 ..
dr-------- 3 will wheel 102 4 Oct 08:42 d
will@wrmpb /p/t/permissions> ls d
will@wrmpb /p/t/permissions>
If I change the permissions to write and execute, I can see the file.
will@wrmpb /p/t/permissions> chmod 500 d
will@wrmpb /p/t/permissions> ls d
f
will@wrmpb /p/t/permissions>
Why is this? I am using MacOS.
Edit: with reference to @ccorn's answer, it's relevant that I'm using fish and type ls gives the following:
will@wrmpb /p/t/permissions> type ls
ls is a function with definition
function ls --description 'List contents of directory'
command ls -G $argv
end
|
Some preparations, just to make sure that ls does not try more things
than it should:
$ unalias ls 2>/dev/null
$ unset -f ls
$ unset CLICOLOR
Demonstration of the r directory permission:
$ ls -ld d
dr-------- 3 ccorn ccorn 102 4 Okt 14:35 d
$ ls d
f
$ ls -l d
ls: f: Permission denied
$ ls -F d
ls: f: Permission denied
In traditional Unix filesystems, a directory was simply a list of (name, inode
number) pairs. An inode number is an integer used as an index into the filesystem's inode table, where the rest of the file metadata is stored.
The r permission on a directory allows to list the names in it,
but not to access the information stored in the inode table, that is,
getting file type, file length, file permissions etc, or opening the file.
For that you need the x permission on the directory.
This is why ls -l, ls -F, ls with color-coded output etc fail
without x permission, whereas a mere ls succeeds.
The x permission alone allows inode access, that is, given an explicit
name within that directory, x allows to look up its inode and access that directory entry's metadata:
$ chmod 100 d
$ ls -l d/f
-rw-r--r-- 1 ccorn ccorn 0 4 Okt 14:35 d/f
$ ls d
ls: d: Permission denied
Therefore, to open a file /a/b/c/f or list its metadata,
the directories /, /a, /a/b, and /a/b/c must be granted x permission.
Unsurprisingly, creating directory entries needs both w and x permissions:
$ chmod 100 d
$ touch d/g
touch: d/g: Permission denied
$ chmod 200 d
$ touch d/g
touch: d/g: Permission denied
$ chmod 300 d
$ touch d/g
$
Wikipedia has a brief overview in an article on file system permissions.
| Why can't I list a directory with read permissions? |
1,297,916,768,000 |
I do not know why, but after making a whole bunch of fish aliases. I'm assuming I have neglected one simple step after assigning them all but I cannot seem find the solution myself.
Can anyone lend me a hand?
Thank you very much!
~Ev
|
It basically boiled down to:
Open ~/.config/fish/config.fish in your favorite editor. If it's not already there, it'll make it for you. (Don't su it though.)
Add all the aliases you want. It'll save them and always load then because this is apparently Fish's version of bashrc.
Save it, baby!
Enjoy.
| Fish-Shell Will Not Save my Aliases |
1,297,916,768,000 |
I have fish installed in my Linux Mint DE. I really like how fish makes things easier and it looks so pretty although I haven't find a correct answer about why I can't execute:
sudo: !!: command not found
At first I tried to escape the exclamation signs with sudo !! but didn't work either. Does someone know why is this failing?
|
I haven't found an inbuilt replacement for !! in Fish; however, you can write a function that allows you to keep using !!
Taken from this answer https://superuser.com/a/719538/226822
function sudo --description "Replacement for Bash 'sudo !!' command to run last command using sudo."
if test "$argv" = !!
echo sudo $history[1]
eval command sudo $history[1]
else
command sudo $argv
end
end
| fish: sudo: !!: command not found |
1,297,916,768,000 |
While using fish as my shell, I'm trying to set permissions on a bunch of C source files in the current dir with
find . -type f -name "*.c" -exec chmod 644 {} +;
I get an error
find: missing argument to `-exec'
or
find . -type f -name "*.c" -exec chmod 644 {} \;
I get an error
chmod: cannot access '': No such file or directory
What's wrong?
|
fish happens to be one of the few shells where that {} needs to be quoted.
So, with that shell, you need:
find . -type f -name '*.c' -exec chmod 644 '{}' +
When not quoted, {} expands to an empty argument, so the command becomes the same as:
find . -type f -name '*.c' -exec chmod 644 '' +
And find complains about the missing {} (or ; as + is only recognised as the -exec terminator when following {}).
With most other shells, you don't need the quotes around {}.
| find -exec not working in fish |
1,297,916,768,000 |
I'm a bash user, starting a new job at a place where people use fish shell.
I'm looking at the history command which I often use in bash. When I use it in fish I get a long list of my history which I can scroll up and down on with the arrow keys.
There are no numbers like in bash and pressing enter is the same as the down key.
How can I run a past command with fish shell's history?
|
The history command in the fish shell isn't bash-compatible, it's just displaying it in a pager (e.g. less).
To select an old command, you'll probably want to enter the part you remember right into the commandline, press up-arrow until you have found what you want and then press enter to execute.
E.g. on my system I enter mes, press up and rm -I meson.build appears (with the "mes" part highlighted). I then press enter and it executes.
| How does history work in fish shell? |
1,297,916,768,000 |
On an Ubuntu ($ uname -a : Linux kumanaku 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux), I just installed fish ($ fish --version : fish, version 2.7.1) using the following commands :
sudo apt-add-repository ppa:fish-shell/release-2
sudo apt-get update
sudo apt-get install fish
chsh -s /usr/bin/fish
echo /usr/bin/fish | sudo tee -a /etc/shells
I can launch fish and use it but when I launch a simple shell file like :
echo "something"
I got the following message :
$ ./myscript.sh
Failed to execute process './myscript.sh'. Reason:
exec: Exec format error
The file './myscript.sh' is marked as an executable but could not be run by the operating system.
There's no shebang in my script. If I add #!/usr/bin/env fish, everything's ok (i.e. the script is successfully launched) but I'd like to avoid such a line to keep my script compatible with different shells.
Any idea ?
|
You need a shebang line if the executable file cannot be run natively by the kernel. The kernel can only run machine code in a specific format (ELF on most Unix variants), or sometimes other formats (e.g. on Linux you can register executable formats through binfmt_misc). If the executable file needs an interpreter then the kernel needs to know which interpreter to call. That's what the shebang line is for.
If your script is in fish syntax, its first line must be
#!/usr/bin/env fish
(You can use the absolute path instead, but then you'll have to modify the script if you want to run it on a machine where the fish executable is in a different location, e.g. /usr/bin/fish vs /usr/local/bin/fish.)
If your script is in sh syntax, use
#!/bin/sh
(All modern Unix systems have a POSIX sh at /bin/sh so you don't need env.)
If your script is in bash syntax (which is sh plus some bash-specific extensions), use
#!/usr/bin/env bash
On Linux, in practice, #!/bin/bash will also work.
All of this is independent of which shell you're calling the script from. All that matters is what language the script is written in.
| fish shell : exec format error |
1,297,916,768,000 |
I have an array whose elements may contain spaces:
set ASD "a" "b c" "d"
How can I convert this array to a single string of comma-separated values?
# what I want:
"a,b c,d"
So far the closest I could get was converting the array to a string and then replacing all the spaces. The problem is that this only works if the array elements don't contain spaces themselves
(echo $ASD | tr ' ' ',')
|
Since fish 2.3.0 you can use the string builtin:
string join ',' $ASD
The rest of this answer applies to older versions of fish.
One option is to use variable catenation:
echo -s ,$ASD
This adds an extra comma to the beginning. If you want to remove it, you can use cut:
echo -s ,$ASD | cut -b 2-
For completeness, you can also put it after and use sed:
echo -s $ASD, | sed 's/,$//'
| In the Fish shell, how can I join an array with a custom separator? |
1,297,916,768,000 |
Is there an equivalent of POSIX shells' set -x or set -o xtrace that cause the shell to display the commands being run in the fish shell?
|
Since fish 3.1.0, the fish_trace variable makes this functionality available:
> set fish_trace on; isatty; set -e fish_trace
++ source share/fish/functions/isatty.fish
++++ function isatty -d 'Tests if a file descriptor is a tty'
+ isatty
+++ set -l options h/help
+++ argparse -n isatty h/help --
+++ if
+++ set -q _flag_help
+++ end if
+++ if
+++ set -q 'argv[2]'
+++ end if
+++ set -l fd
++++ set fd 0
+++ '[' -t 0 ']'
+ set -e fish_trace
The location of the trace output is controlled by the --debug-output option to the fish process.
Before fish 3.1.0, you can use fish -p some_trace_file to run a fish session which outputs a profile to "some_trace_file", which can achieve almost the same effect (with some disadvantages - see the comments below).
| xtrace equivalent in the fish shell |
1,297,916,768,000 |
In bash, if I run kill %1, it kills a backgrounded command in the current shell (the most recent one, I believe).
Is there an equivalent of this in fish? I haven't been able to find it online in a bit of web searching.
I'm not sure if I did it wrong, but
$ ps
PID TTY TIME CMD
73911 pts/5 00:00:00 fish
73976 pts/5 00:00:00 ps
$ sleep 100
^Z⏎
$ kill %1
$ ps
PID TTY TIME CMD
73911 pts/5 00:00:00 fish
74029 pts/5 00:00:00 sleep
74121 pts/5 00:00:00 ps
|
Your command has worked, but because the job is stopped it has not responded to the signal.
From your example where it didn't seem to work, try continuing the process with fg or bg, or forcibly terminate the process with kill -SIGKILL %1, and it will exit.
kill %1 works immediately in bash and zsh because it is a builtin command in these shells and sends SIGCONT in addition to SIGTERM (or the specified signal).
| kill %1 equivalent in fish |
1,297,916,768,000 |
I want to be able to check if a fish shell is being run in login, interactive, or batch mode, and this question only discusses bash.
|
Use the status command:
$ fish -c 'status --is-interactive; and echo yes; or echo no'
no
$ status --is-interactive; and echo yes; or echo no
yes
Also, status --is-login. That should cover your bases.
| How can I check if a shell is login/interactive/batch in fish? |
1,297,916,768,000 |
I'd like to always run a fish script in the background even if the user doesn't specify that.
In bash, this can be done by surrounding the script with ( at the start and ) & at the end.
Is there anyway for a fish script to run itself in the background?
|
fish does not fork to execute subshells, so it is not yet possible to run fish script in the background - see https://github.com/fish-shell/fish-shell/issues/563
A hackish workaround is to invoke fish again, like so:
#!/usr/local/bin/fish
fish -c 'sleep 5 ; echo done' &
| Run fish script in background? [closed] |
1,297,916,768,000 |
In bash, I usually do grep -f <(command) ... (I pick grep just for example) to mimic a file input.
What is the equivalent in fish shell? I cannot find it in the documentation.
|
The <() and >() constructs are known as "process substitution". I don't use fish, but according to its documentation, it doesn't directly support this:
Subshells, command substitution and process substitution are strongly related. fish only supports command substitution, the others can be achieved either using a block or the psub shellscript function.
Indeed, psub seems to be what you want:
## bash
$ seq 10 | grep -f <(seq 4 5)
4
5
## fish
~> seq 10 | grep -f (seq 4 5 | psub)
4
5
| Bash's Process Substitution "<(command)" equivalent in fish shell |
1,297,916,768,000 |
I would have assumed that following examples work perfectly fine.
$ which -a python python3 pip | xargs -I {} {} --version
No '{}' such file or folder
$ which -a python python3 pip | xargs -I _ _ --version
No '_' such file or folder
$ which -a python python3 pip | xargs -I py py --version
No 'py' such file or folder
But they don't work at all when I run it interactively it doesn't even substitute the string. I see no note in manual page regarding a special case in first position. Why does this not work?
$ which -a python python3 pip | xargs -I py -p eval py --version
eval ~/.pyenv/shims/python --version?...y
xargs: eval: No such file or directory
This is even more surprising because it does substitute correctly.
How can I use the argument in first place? I don't want to use ... -I py sh -c "py --version" because this will create a new sub-process. I wish to know how to eval command in current env.
|
In xargs -I replstr utility arguments, POSIX requires that the replstr be only substituted in the arguments, not utility. GNU xargs is compliant in that regard (busybox xargs is not).
To work around it, you can use env as the utility:
which -a ... | xargs -I cmd env cmd --version
(chokes on leading whitespace, backslash, single quote, double quote, newline, and possibly sequences of bytes not forming valid characters, because of the way xargs interprets its input).
Or better:
for cmd in (which -a ...)
$cmd --version
end
Which would limit problematic characters in file names to newline only.
In any case, you can't and don't want to use eval here. eval is the shell builtin (special builtin in POSIX shells) command to interpret shell code, not to run commands. xargs is an external command to the shell, so it cannot run builtins of your shell or of any shell without starting an interpreter of that shell like with:
which -a ... |
xargs -rd '\n' sh -c 'for cmd do "$cmd" --version; done' sh
Using sh instead of fish here as AFAIK fish inline scripts can't take arguments. But still not using eval here which wouldn't make sense as we don't want those file names to be interpreted as shell code.
Also using -rd '\n' which is GNU-specific, but doesn't have all the issues of -I, to pass the full contents of all lines as separate arguments to the utility (here sh).
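A minimal illustration of the env trick, with printf and echo standing in for the real which output and interpreter paths:

```shell
# The replstr cannot appear in the utility position itself, but it can be
# the first argument to env, which then execs it as the command
printf '%s\n' echo | xargs -I cmd env cmd hello
# hello
```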
| xargs | Use input as command |
1,297,916,768,000 |
In bash I could either do echo -e "a\tb" or echo a$'\t'b.
How do you do this in fish?
|
\t, without quotes. Same thing for other control characters (\n for a newline, etc.).
| Print tab character in fish |
1,297,916,768,000 |
I want to use FISH shell. But I've read FISH is not a POSIX shell so setting it to default shell by chsh is not recommended. What I want is whenever I start xfce4-terminal I would like to start FISH shell instead of bash. Adding exec fish to .bashrc seems to be a solution, but I want a to know if there is a way to start fish without starting it on top of bash.
|
Yes, sure. Run:
xfce4-terminal --preferences
And make: Run a custom command instead of my shell and type fish in the box just below. That's it, close and start xfc4-terminal. That's it. Enjoy.
| How can I make xfce4-terminal start fish shell? |
1,297,916,768,000 |
If I run fish from a bash prompt, it will inherit the environment variables I have set in my .bashrc and .profile files, including the important $PATH variable. So far so good.
Now, I want xfce4-terminal, my terminal emulator, to use fish instead of bash. I found that it supports a -e command line parameter for that but if I run xfce4-terminal -e fish from a GUI application launcher1 then I don't get the environment variables from bash. What gives?
1 Launching xfce4-terminal from an interactive bash prompt also didn't work, but gnome-terminal, which has a similar command line switch, did. However, gnome-terminal also didn't get the variables when launched from a GUI shortcut.
Edit: I've since bitten the bullet and just made fish into my login shell with chsh --shell /usr/bin/fish. Its much simpler than what I was trying to do before and it avoids some undesirable effects of running fish inside bash (such as having the $SHELL environment variable set to bash)
|
When bash is run as a non-interactive shell it will not source the .bashrc file and if you use a graphical display manager then the login shell used to run the GUI launcher commands might not have sourced the .profile file. Thus, commands run using the GUI launcher might not have the desired environment variables set when run.
A workaround I found was to tell the terminal emulator to run bash in interactive mode (with the -i flag) and then immediately run fish inside of it:
xfce4-terminal -e 'bash -i -c fish'
| How do I create a GUI application launcher for xfce4-terminal with fish but inheriting the environment variables from bash? |
1,297,916,768,000 |
I often find myself wanting to move a file,
then create a symlink where it was.
In doing this by hand I tend to twist my mind.
(Esp after doing half a dozen files)
Use cases:
Moving all my "dot files" to a folder so i can version control them
Moving a file onto a faster disk (scratch) for High Performance Computing
If there is not a single command,
I would appreciate a fish script.
(fish is not a POSIX shell and does not support the sh language)
|
function lnmv
    # Usage: lnmv DEST_DIR FILE...
    # Moves each FILE into DEST_DIR, then symlinks it back to its old path.
    set dest_dir $argv[1]
    set files $argv[2..-1]
    for f in $files
        set dest $dest_dir/$f
        # only create the symlink if the move succeeded
        mv -- $f $dest
        and ln -s -- $dest $f
    end
end
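For comparison, here is a POSIX sh sketch of the same idea, in case you also want it outside fish. The helper name `lnmv_sh` is made up for this example, and error handling is kept minimal:

```shell
# lnmv_sh DEST_DIR FILE...
# Move each FILE into DEST_DIR, then leave a symlink behind at the old path.
lnmv_sh() {
    dest_dir=$1
    shift
    for f in "$@"; do
        # only create the symlink if the move succeeded
        mv -- "$f" "$dest_dir/$f" && ln -s -- "$dest_dir/$f" "$f"
    done
}
```

Note that with a relative DEST_DIR the symlink target is relative to the directory the symlink lives in, so run it from the files' parent directory.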
| Is there a command to move a file, and symlink it back to where it was? |
1,297,916,768,000 |
I'm trying to get the CLI password manager pass to work in my fish shell with auto completion. I've already found the necessary file, yet am having trouble finding out where to put it, or rather getting it to work. So far I've added it to:
~/.config/fish/pass.fish
~/.config/fish/completions/pass.fish
and added the content to my ~/.config/fish/config.fish file.
with no success.
|
The second option listed (~/.config/fish/completions/pass.fish) is the preferred approach. The third should also work.
I tried the following:
Put the file at ~/.config/fish/completions/pass.fish
Type pass followed by a space
Hit tab
And I see completions from that file.
It's possible that fish is looking somewhere else. Try echo $fish_complete_path and verify that it includes ~/.config/fish/completions/. If it does not, you can put back the defaults by erasing it and starting a new session: set -e fish_complete_path.
| Adding pass completion to fish shell |
1,297,916,768,000 |
I use the fish shell and would like to be able to "source" some shell scripts written with sh-compatible syntax, which fish cannot read. For example lots of software expects you to source some shell scripts to export helpful environment variables:
# customvars.sh
FOOBAR=qwerty
export FOOBAR
If I don't care about preserving local variable and function definitions, one workaround is to use exec to replace the current process with /bin/sh and then go back to fish
# We start out on fish
exec /bin/sh
# Running a POSIX shell now
. customvars.sh
exec /usr/bin/fish
# Back on fish
echo $FOOBAR
Is there a way to boil this pattern down to an one-liner that I can save in a function somewhere? If I try doing
exec /bin/sh -c '. vars.sh; /usr/bin/fish'
my terminal emulator window closes immediately instead of taking me back to an interactive fish prompt.
|
For the specific problem of having scripts in POSIX-compatible shell language that set environment variables and wanting to use them in fish, one existing solution is Bass. It is a script that works similarly to terdon's answer but can handle more corner cases.
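The underlying technique that such tools automate can be sketched in a few lines: run the POSIX script in sh, dump the resulting environment, and translate each variable into a fish set -gx command that fish could then eval. This sketch assumes simple single-line values (it splits each env line at the first = and does no quoting of the value, so it mishandles multiline or exotic values); customvars.sh is the example script from the question:

```shell
# recreate the example script from the question in a temp dir
tmp=$(mktemp -d)
printf 'FOOBAR=qwerty\nexport FOOBAR\n' > "$tmp/customvars.sh"

# source it in sh, dump env, and rewrite each VAR=VALUE line
# as a fish "set -gx VAR VALUE" command
fish_cmds=$(sh -c ". '$tmp/customvars.sh'; env" |
    while IFS='=' read -r name value; do
        printf 'set -gx %s %s\n' "$name" "$value"
    done)
```

In fish you would then pipe those generated commands through source (or eval them), which is essentially what Bass does, with much more care around quoting and corner cases.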
| Is it possible to exec some commands in a subshell without immediately exiting afterwards? |
1,297,916,768,000 |
I would like tab completion to behave differently
when the cursor is at the beginning of a word
than when the cursor is at the end of a word.
I've only ever seen shells that tab-complete the suffix, like this:
$ tiff2␣
tiff2bw tiff2pdf tiff2ps tiff2rgba
However, sometimes I would also like to tab-complete the prefix
when the caret is at the beginning of the word.
That is, I want to expand to all the commands that end in 2tiff
if the cursor is at the beginning of the word 2tiff, like this:
$ ␣2tiff
raw2tiff gif2tiff bmp2tiff ppm2tiff pnmtotiff ras2tiff e2mtiff fax2tiff
Fish does this in some cases:
~> ␣2tiff
bmp2tiff (Executable, 17kB) ppm2tiff (Executable, 14kB)
fax2tiff (Executable, 18kB) ras2tiff (Executable, 14kB)
gif2tiff (Executable, 18kB) raw2tiff (Executable, 17kB)
This also has the side effect of moving the cursor to the end of the word,
and only works if there is no valid suffix completion:
~> ␣tiff
tiff2bw (Convert a color TIFF image to greyscale)
tiff2pdf (Convert a TIFF image to a PDF document)
tiff2ps (Convert a TIFF image to)
tiff2rgba (Convert a TIFF image to RGBA color space)
…and 10 more rows
I cannot find a way to make bash or zsh
do prefix tab-completion in either case.
|
Zsh does this provided that you enable the “new-style completion system” and turn on the complete_in_word option.
autoload -U compinit; compinit
setopt complete_in_word
After that, you can press Tab anywhere in a word, including at the beginning, and you'll get completion proposals for the middle of the word (for the beginning, if the cursor is at the beginning of the word). (With some fancier settings, completion might insert things elsewhere in the word too.)
Another option in zsh is to activate wildcard completion with setopt glob_complete. For example, type *2tiff and press Tab (with the cursor at the end of the word) to complete at the beginning of the word.
There is a limitation that in the default configuration, pressing Tab at the beginning of the command line inserts a tab instead of completing. I think this limitation is in expand-or-complete; if you bind Tab to menu-complete or complete-word then it completes at the beginning of the command line too. With expand-or-complete, you do get completion at the beginning of a word provided that it isn't the very first non-whitespace thing on the command line, e.g. in =2tiff or ;2tiff.
| Shell that tab-completes prefix? |
1,297,916,768,000 |
I use fish as my standard shell.
And I use sudo sometimes.
But I'm having problems with credential caching.
On an Ubuntu system, this works:
niklas@Niklas-Mobil~> sudo true
[sudo] password for niklas:
niklas@Niklas-Mobil~> sudo true
niklas@Niklas-Mobil~>
On a Debian system, this doesn't work:
niklas@ThinServer ~> sudo true
[sudo] password for niklas:
niklas@ThinServer ~> sudo true
[sudo] password for niklas:
niklas@ThinServer ~>
But on the same system as the same user with bash:
niklas@ThinServer:~$ sudo true
[sudo] password for niklas:
niklas@ThinServer:~$ sudo true
niklas@ThinServer:~$
It works using dash, too.
Why aren't the credentials cached when I'm using fish?
(And what can I do to solve this problem?)
|
Edit your /etc/sudoers, add this line (or edit if it's existed):
Defaults !tty_tickets
With tty_tickets enabled, sudo somehow treats each command run from fish as coming from a separate session. This is probably because the tty's modification date, as reported by stat, changes under fish — caused by fish's futimes() call.
See more details:
fish issue #122
Disable futimes()
| sudo and fish: no credential caching |
1,297,916,768,000 |
In bash it's possible to do
my_function() {
echo "hello" |
#remove 'l'
tr -d 'l'
}
but in fish ( http://fishshell.com ) I wasn't able to do the same:
function my_function
echo "hello" | \
# remove 'l'
tr -d 'l'
end
I've tried with backslash at the end of the comment too but no luck.
I tend to use this idiom a lot in bash, piping several commands and splitting each command in its own line with a commment for each.
Is there any way to achieve something similar in fish?
|
It's impossible because of the bug reported at https://github.com/fish-shell/fish-shell/issues/983.
However, there is work on a real fish grammar (as opposed to the quick, hacky parser) in the ast branch of the fish-shell repository (now merged, but disabled by default). Currently there is no patch to support this syntax, but it is still a work in progress, and I'm almost sure that the final grammar will support it.
| Is it possible to have comments in multiline commands in fish? |
1,297,916,768,000 |
I'm trying to improve the performance of my fish prompt, and since my prompt includes my current git branch, I'm wondering if there may be a way to make it faster.
Right now I'm using git symbolic-ref HEAD | sed 's/refs\/heads\///', and when I first cd into a git repo, it sometimes hangs for a little while. I'm wondering if there is a known faster method, or how I could find out. Whenever I run time git symbolic-ref HEAD, it simply outputs 0.00 real.
|
git symbolic-ref HEAD is, as far as I know, the fastest method: it basically just opens .git/HEAD and some config files (/etc/gitconfig, $HOME/.gitconfig and .git/config). If you are sure the delay is caused by the git command, it is probably due to I/O latency.
If you want a faster method you have to read .git/HEAD yourself but I doubt that it will make things faster.
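For illustration, reading .git/HEAD yourself can be sketched as follows. This assumes a non-detached HEAD, i.e. the file contains a line of the form "ref: refs/heads/<branch>" (on a detached HEAD it holds a raw commit hash instead, which this sketch does not handle); the sample repo layout is fabricated for the demo:

```shell
# fabricate a minimal repo layout so the sketch is self-contained
repo=$(mktemp -d)
mkdir "$repo/.git"
printf 'ref: refs/heads/main\n' > "$repo/.git/HEAD"

# read the symbolic ref directly and strip the prefix
read -r ref < "$repo/.git/HEAD"
branch="${ref#ref: refs/heads/}"
```

In practice this saves at most a fork/exec of git per prompt, so measure before assuming it helps.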
| What's the fastest (CPU time) way to get my current git branch? |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.