This is related to Shell script too slow for output to Conky
This code is nearly perfect:
stdbuf -oL jack_cpu_load \
| grep --line-buffered "jack DSP load" \
| stdbuf -oL cut -d' ' -f4 \
| while read line; do
echo "scale=0; $line*100/1" | bc -l > /tmp/buffer
done &
The only issue is that bc drops the leading zero before the decimal point when the output is <1 (printing e.g. .42 instead of 0.42).
I'd like to see either two places after the decimal point or no fraction at all; a zero for output <1 would be fine.
|
You just need to make it
stdbuf -oL jack_cpu_load |
grep --line-buffered "jack DSP load" |
stdbuf -oL cut -d' ' -f4 |
while read line; do echo "$line" > /tmp/buffer; done &
to output the value that you input without modification.
| bc removes decimal point |
I am comparing the floating point values in shell script based on this reference. Following is the script contents
num1=50.960
num2=6.65E+07
echo "${num1} < ${num2}" | bc
When I run the script, the output is '0', but according to the comparison it should be '1'. I need input on why the comparison is not working as expected.
|
The bc utility does not understand 6.65E+07 as the number you want it to be.
On OpenBSD, the E here is a hexadecimal digit, so 6.65E is 6.664 (6.65 + 0.014), and then +07 adds 7 to that, yielding 13.664, which is clearly less than 50.960. On GNU systems, 6.65E is 6.659, which is also not what you want.
Instead, you want num2 to be the string 6.65*10^7 or 66500000.
$ num1=50.960; num2='6.65*10^7'; printf '%s < %s\n' "$num1" "$num2" | bc
1
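If the E-notation input itself cannot be changed, awk (unlike bc) understands scientific notation natively; a sketch using the values from the question:

```shell
num1=50.960
num2=6.65E+07
# awk parses 6.65E+07 as 66500000, so the numeric comparison
# prints 1 as expected
awk -v a="$num1" -v b="$num2" 'BEGIN { print (a < b) }'
```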
| Floating point comparison in shell |
I want to do a simple calculation, storing the output in a variable and displaying the result with decimal places using the bc command. But it only works for the subtraction, not for the division.
numberTotalX=$(echo "$varnameY - $var1" | bc)
echo " Number.total.x is $numberTotalX "
the result is:
Number.total.x is 7443.576
but when it come to this command:
numberSplitInteger=$(echo "$numberTotalX / $var2" | bc)
echo " Number of split integer is $numberSplitInteger "
the result is a plain integer, with no decimal places:
Number of split integer is 2461
|
You are looking for scale; just use:
numberSplitInteger=$(echo "scale=x;$numberTotalX / $var2" | bc)
It will give you x digits after the decimal point.
| How display the output of calculation in decimal point using bc command |
I am trying to find square root of 12345 and then scale it to 100 decimal places using the command shown below:
val=$(bc <<< "scale=100 ; sqrt ( 12345) " )
But the problem is that I want to rescale the value to 10 decimal places, using bc only, in the echo command, and it is not working. I tried the following lines of code, but none worked.
echo "scale=10 ; "$val" " | bc
echo "scale=10 ; $val " | bc
echo "scale=10 ; val " | bc
Can somebody here help me and point out the difference?
Note: I don't want to directly set scale=10 in the sqrt() call, because the last decimal place would then differ from the true answer; to get high accuracy I have to do it this way.
|
If you are not worried about rounding in bc, you can do:
$ echo "scale=10; $val/1" | bc
111.1080555135
It is when a value gets divided (even by 1) that the number of decimals gets adjusted to the scale.
Of course, you can always use the shell's printf (again, some rounding might creep in):
$ printf '%5.10f\n' "$val"
111.1080555135
Why do you need precision?
| Rescaling a scaled value using bc command |
I'm trying to remove an integer from a decimal value that I currently have.
current syntax:
h=$(echo "scale=2; (($e/$g/$g))" | bc)
echo $h
This is used to convert seconds to minutes and then hours; however, it returns e.g. "21.15" hours.
I want to keep the 0.15 and multiply it by 60 (leaving me with 9 minutes), and eventually get 21 hours 9 minutes.
|
The bc remainder operator is %
expr % expr
The result of the expression is the "remainder" and it is computed
in the following way. To compute a%b, first a/b is computed to
scale digits. That result is used to compute a-(a/b)*b to the
scale of the maximum of scale+scale(b) and scale(a). If scale is
set to zero and both expressions are integers this expression is
the integer remainder function.
Ex.
$ echo '21.15 % 1' | bc
.15
$ echo '(21.15 % 1) * 60' | bc
9.00
| Discard integer and keep floating point |
I have a bind9 testing environment in Debian wheezy that I am trying to set up two A records that are returned in a fixed order. In my named.conf.options file I have the following configuration:
options {
...
rrset-order { order fixed; };
};
This is functional to the point that my records are always returned in the same order, but the problem is that bind is choosing to sort them numerically (smallest numbers first) and I am trying to sort them the other direction.
Based on this link I understand that the fixed keyword should set the response to whatever order I've got in my configuration file. However, I cannot alter the order of the returned results by changing the order of the records in the zone file.
Does anyone know how to return multiple A records for a DNS address in a specific order?
|
Bind9 on Wheezy doesn't support that option. One should also ask why this is wanted or needed, as the ordering breaks as soon as the response hits the cache of some recursor. It is also not well suited for failover purposes, as most clients don't implement the code to make that happen.
If you maintain the client code, have a look at SRV resource records, which allow you to set a priority and weight for every record. But this depends on the rest of the problem you are trying to solve.
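As a sketch, an SRV record set for a hypothetical _http._tcp service makes priority and weight explicit; clients that honor SRV contact the target with the lowest priority value first:

```
; zone file fragment with hypothetical names
;                               pri wt port target
_http._tcp.example.com. 3600 IN SRV 10 60 80 server1.example.com.
_http._tcp.example.com. 3600 IN SRV 10 40 80 server2.example.com.
_http._tcp.example.com. 3600 IN SRV 20  0 80 backup.example.com.
```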
| How to return multiple DNS A records in a specific order using bind9? |
I have a BIND 9.9.5-9+deb8u8-Raspbian DNS server running on a RPi3 in my network. It is - for everything that's not my home-zone - configured as a "forward only" with the forwarders "{ 8.8.8.8; 8.8.4.4; 208.67.222.222; 208.67.220.220; };".
a) the normal case
Usually, dns resolution works perfectly. Even when results are not in cache, they are fetched from the forward servers and delivered back to my clients often within less than 100 ms. Here's an example:
client:~ $ time nslookup faz.net
Server: [my_server]
Address: [my_server]#53
Non-authoritative answer:
Name: faz.net
Address: 40.118.6.229
real 0m0.095s […]
This is how the traffic looks in tcpdump, everything's perfect as far as I can see, and DNSSEC-validation seems to work fine as well:
06:48:21.880240 IP [my_client].59563 > [my_server].domain: 614+ A? faz.net. (25)
06:48:21.881246 IP [my_server].28766 > google-public-dns-a.google.com.domain: 30021+% [1au] A? faz.net. (36)
06:48:21.916031 IP google-public-dns-a.google.com.domain > [my_server].28766: 30021 1/0/1 A 40.118.6.229 (52)
06:48:21.917093 IP [my_server].44792 > google-public-dns-a.google.com.domain: 10551+% [1au] DS? faz.net. (36)
06:48:21.952356 IP google-public-dns-a.google.com.domain > [my_server].44792: 10551 0/6/1 (757)
06:48:21.956635 IP [my_server].domain > [my_client].59563: 614 1/0/0 A 40.118.6.229 (41)
b) the problematic case
However, in some cases it takes very long, up to 14 seconds, to deliver a result, or I don't get one at all (even though a dig @8.8.4.4 domain gives me a proper address within around 50 ms). Here's one example:
client:~ $ time nslookup sueddeutsche.net
Server: [my_server]
Address: [my_server]#53
Non-authoritative answer:
Name: sueddeutsche.net
Address: 213.61.179.40
Name: sueddeutsche.net
Address: 213.61.179.41
real 0m4.994s […]
And this is what goes on during these 4.9 seconds, as seen in the tcpdump traffic:
06:48:47.608104 IP [my_client].53592 > [my_server].domain: 51417+ A? sueddeutsche.net. (34)
06:48:47.609158 IP [my_server].1507 > google-public-dns-a.google.com.domain: 56678+% [1au] A? sueddeutsche.net. (45)
06:48:48.809517 IP [my_server].27279 > google-public-dns-b.google.com.domain: 22525+% [1au] A? sueddeutsche.net. (45)
06:48:50.009592 IP [my_server].37926 > resolver2.opendns.com.domain: 41597+% [1au] A? sueddeutsche.net. (45)
06:48:50.081468 IP resolver2.opendns.com.domain > [my_server].37926: 41597 2/0/1 A 213.61.179.41, A 213.61.179.40 (77)
06:48:50.082768 IP [my_server].1301 > resolver2.opendns.com.domain: 24793+% [1au] DS? sueddeutsche.net. (45)
06:48:50.233748 IP resolver2.opendns.com.domain > [my_server].1301: 24793 0/1/1 (121)
06:48:50.235373 IP [my_server].57628 > resolver1.opendns.com.domain: 8635+% [1au] DS? sueddeutsche.net. (45)
06:48:50.282862 IP resolver1.opendns.com.domain > [my_server].57628: 8635 0/1/1 (121)
06:48:50.287796 IP [my_server].32127 > google-public-dns-a.google.com.domain: 924+% [1au] DS? sueddeutsche.net. (45)
06:48:51.487853 IP [my_server].61208 > google-public-dns-b.google.com.domain: 39031+% [1au] DS? sueddeutsche.net. (45)
06:48:51.547262 IP google-public-dns-b.google.com.domain > [my_server].61208: 39031 0/6/1 (766)
06:48:51.551509 IP [my_server].52786 > resolver2.opendns.com.domain: 28853+% [1au] Type32769? sueddeutsche.net.dlv.isc.org. (57)
06:48:51.589595 IP resolver2.opendns.com.domain > [my_server].52786: 28853 NXDomain 0/1/1 (125)
06:48:51.592942 IP [my_server].30477 > resolver2.opendns.com.domain: 17693+% [1au] DS? sueddeutsche.net.dlv.isc.org. (57)
06:48:51.790903 IP resolver2.opendns.com.domain > [my_server].30477: 17693 NXDomain 0/1/1 (125)
06:48:51.792342 IP [my_server].6503 > resolver1.opendns.com.domain: 17946+% [1au] DS? sueddeutsche.net.dlv.isc.org. (57)
06:48:52.005244 IP resolver1.opendns.com.domain > [my_server].6503: 17946 NXDomain 0/1/1 (125)
06:48:52.006662 IP [my_server].52356 > google-public-dns-b.google.com.domain: 39821+% [1au] DS? sueddeutsche.net.dlv.isc.org. (57)
06:48:52.334093 IP google-public-dns-b.google.com.domain > [my_server].52356: 39821 NXDomain 0/6/1 (748)
06:48:52.342161 IP [my_server].56473 > resolver1.opendns.com.domain: 17279+% [1au] Type32769? sueddeutsche.net.dlv.isc.org. (57)
06:48:52.382211 IP resolver1.opendns.com.domain > [my_server].56473: 17279 NXDomain 0/1/1 (125)
06:48:52.383674 IP [my_server].52741 > google-public-dns-b.google.com.domain: 65018+% [1au] Type32769? sueddeutsche.net.dlv.isc.org. (57)
06:48:52.424757 IP google-public-dns-b.google.com.domain > [my_server].52741: 65018 NXDomain$ 0/6/1 (748)
06:48:52.430544 IP [my_server].domain > [my_client].53592: 51417 2/0/0 A 213.61.179.40, A 213.61.179.41 (66)
Now from what I can understand, it seems that at 06:48:50.081468 my_server got a proper response from resolver2.opendns.com, while google didn't deliver anything. Then my_server asked back for DNSSEC-validation to which resolver2.opendns.com replied. Shouldn't that be the point at which my_server delivers the result back to my_client? Why doesn't it do this?
Here's a second case, for dasoertliche.de, a domain that should work without any problems:
23:39:01.569234 IP [my_client].52174 > [my_server].domain: 57873+ A? dasoertliche.de. (33)
23:39:01.570413 IP [my_server].60368 > resolver2.opendns.com.domain: 39796+% [1au] A? dasoertliche.de. (44)
23:39:01.617721 IP resolver2.opendns.com.domain > [my_server].60368: 39796 1/0/1 A 82.98.79.52 (60)
23:39:01.618712 IP [my_server].41112 > resolver2.opendns.com.domain: 47487+% [1au] DS? dasoertliche.de. (44)
23:39:01.667144 IP resolver2.opendns.com.domain > [my_server].41112: 47487 0/1/1 (100)
23:39:01.668616 IP [my_server].24077 > resolver1.opendns.com.domain: 13310+% [1au] DS? dasoertliche.de. (44)
23:39:01.854327 IP resolver1.opendns.com.domain > [my_server].24077: 13310 0/1/1 (100)
23:39:01.856006 IP [my_server].38412 > google-public-dns-a.google.com.domain: 56597+% [1au] DS? dasoertliche.de. (44)
23:39:03.056263 IP [my_server].35182 > google-public-dns-b.google.com.domain: 45370+% [1au] DS? dasoertliche.de. (44)
23:39:04.256333 IP [my_server].47744 > google-public-dns-a.google.com.domain: 50222+% [1au] DS? dasoertliche.de. (44)
23:39:04.305620 IP google-public-dns-a.google.com.domain > [my_server].47744: 50222| 0/0/1 (44)
23:39:04.306296 IP [my_server].45040 > google-public-dns-a.google.com.domain: Flags [S], seq 3961861791, win 29200, options [mss 1460,sackOK,TS val 11766745 ecr 0,nop,wscale 7], length 0
23:39:04.345835 IP google-public-dns-a.google.com.domain > [my_server].45040: Flags [S.], seq 2448404480, ack 3961861792, win 42408, options [mss 1380,sackOK,TS val 4100658423 ecr 11766745,nop,wscale 7], length 0
23:39:04.345899 IP [my_server].45040 > google-public-dns-a.google.com.domain: Flags [.], ack 1, win 229, options [nop,nop,TS val 11766749 ecr 4100658423], length 0
23:39:04.346167 IP [my_server].45040 > google-public-dns-a.google.com.domain: Flags [P.], seq 1:47, ack 1, win 229, options [nop,nop,TS val 11766749 ecr 4100658423], length 4662876+% [1au] DS? dasoertliche.de. (44)
23:39:04.385803 IP google-public-dns-a.google.com.domain > [my_server].45040: Flags [.], ack 47, win 332, options [nop,nop,TS val 4100658463 ecr 11766749], length 0
23:39:04.394945 IP google-public-dns-a.google.com.domain > [my_server].45040: Flags [P.], seq 1:752, ack 47, win 332, options [nop,nop,TS val 4100658472 ecr 11766749], length 75162876 0/6/1 (749)
23:39:04.394975 IP [my_server].45040 > google-public-dns-a.google.com.domain: Flags [.], ack 752, win 240, options [nop,nop,TS val 11766753 ecr 4100658472], length 0
23:39:04.398143 IP [my_server].45040 > google-public-dns-a.google.com.domain: Flags [F.], seq 47, ack 752, win 240, options [nop,nop,TS val 11766754 ecr 4100658472], length 0
23:39:04.401876 IP [my_server].37878 > resolver2.opendns.com.domain: 50849+% [1au] Type32769? dasoertliche.de.dlv.isc.org. (56)
23:39:04.437774 IP google-public-dns-a.google.com.domain > [my_server].45040: Flags [F.], seq 752, ack 48, win 332, options [nop,nop,TS val 4100658515 ecr 11766754], length 0
23:39:04.437858 IP [my_server].45040 > google-public-dns-a.google.com.domain: Flags [.], ack 753, win 240, options [nop,nop,TS val 11766758 ecr 4100658515], length 0
23:39:04.456088 IP resolver2.opendns.com.domain > [my_server].37878: 50849 NXDomain 0/1/1 (124)
23:39:04.457411 IP [my_server].38743 > resolver2.opendns.com.domain: 45844+% [1au] DS? dasoertliche.de.dlv.isc.org. (56)
23:39:04.658497 IP resolver2.opendns.com.domain > [my_server].38743: 45844 NXDomain 0/1/1 (124)
23:39:04.659855 IP [my_server].39296 > resolver1.opendns.com.domain: 17204+% [1au] DS? dasoertliche.de.dlv.isc.org. (56)
23:39:04.708134 IP resolver1.opendns.com.domain > [my_server].39296: 17204 NXDomain 0/1/1 (124)
23:39:04.713195 IP [my_server].55899 > google-public-dns-a.google.com.domain: 5854+% [1au] DS? dasoertliche.de.dlv.isc.org. (56)
23:39:04.780837 IP google-public-dns-a.google.com.domain > [my_server].55899: 5854 NXDomain 0/6/1 (736)
23:39:04.786940 IP [my_server].27908 > resolver1.opendns.com.domain: 4148+% [1au] Type32769? dasoertliche.de.dlv.isc.org. (56)
23:39:05.267688 IP resolver1.opendns.com.domain > [my_server].27908: 4148 NXDomain 0/1/1 (124)
23:39:05.269026 IP [my_server].38523 > google-public-dns-a.google.com.domain: 60609+% [1au] Type32769? dasoertliche.de.dlv.isc.org. (56)
23:39:06.469277 IP [my_server].58501 > google-public-dns-b.google.com.domain: 5485+% [1au] Type32769? dasoertliche.de.dlv.isc.org. (56)
23:39:06.572296 IP [my_client].52174 > [my_server].domain: 57873+ A? dasoertliche.de. (33)
23:39:07.669762 IP [my_server].52520 > google-public-dns-a.google.com.domain: 43149+% [1au] Type32769? dasoertliche.de.dlv.isc.org. (56)
23:39:07.706440 IP google-public-dns-a.google.com.domain > [my_server].52520: 43149| 0/0/1 (56)
23:39:07.706903 IP [my_server].59047 > google-public-dns-a.google.com.domain: Flags [S], seq 4227748459, win 29200, options [mss 1460,sackOK,TS val 11767085 ecr 0,nop,wscale 7], length 0
23:39:07.747595 IP google-public-dns-a.google.com.domain > [my_server].59047: Flags [S.], seq 719567413, ack 4227748460, win 42408, options [mss 1380,sackOK,TS val 3894129283 ecr 11767085,nop,wscale 7], length 0
23:39:07.747657 IP [my_server].59047 > google-public-dns-a.google.com.domain: Flags [.], ack 1, win 229, options [nop,nop,TS val 11767089 ecr 3894129283], length 0
23:39:07.747982 IP [my_server].59047 > google-public-dns-a.google.com.domain: Flags [P.], seq 1:59, ack 1, win 229, options [nop,nop,TS val 11767089 ecr 3894129283], length 5821405+% [1au] Type32769? dasoertliche.de.dlv.isc.org. (56)
23:39:07.788998 IP google-public-dns-a.google.com.domain > [my_server].59047: Flags [.], ack 59, win 332, options [nop,nop,TS val 3894129324 ecr 11767089], length 0
23:39:07.789344 IP google-public-dns-a.google.com.domain > [my_server].59047: Flags [P.], seq 1:739, ack 59, win 332, options [nop,nop,TS val 3894129325 ecr 11767089], length 73821405 NXDomain$ 0/6/1 (736)
23:39:07.789372 IP [my_server].59047 > google-public-dns-a.google.com.domain: Flags [.], ack 739, win 240, options [nop,nop,TS val 11767093 ecr 3894129325], length 0
23:39:07.790414 IP [my_server].59047 > google-public-dns-a.google.com.domain: Flags [F.], seq 59, ack 739, win 240, options [nop,nop,TS val 11767093 ecr 3894129325], length 0
23:39:07.796565 IP [my_server].domain > [my_client].52174: 57873 1/0/0 A 82.98.79.52 (49)
23:39:07.831137 IP google-public-dns-a.google.com.domain > [my_server].59047: Flags [F.], seq 739, ack 60, win 332, options [nop,nop,TS val 3894129366 ecr 11767093], length 0
23:39:07.831221 IP [my_server].59047 > google-public-dns-a.google.com.domain: Flags [.], ack 740, win 240, options [nop,nop,TS val 11767097 ecr 3894129366], length 0
Again, at 23:39:01.617721 my_server receives the proper answer from resolver2.opendns.com but for a reason I don't understand a whole flood of communication ensues between my_server and the google dns servers.
Any ideas what is happening here and how I can improve the situation?
c) Update
As per request, here's the output of free...
[my_server]:~ $ free
total used free shared buffers cached
Mem: 996448 331696 664752 15752 29808 180668
-/+ buffers/cache: 121220 875228
Swap: 0 0 0
... and vmstat:
[my_server]:~ $ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 664752 29808 180676 0 0 0 1 20 23 0 0 100 0 0
d) Update 2:
Just found out that my /var/log/syslog has entries concerning the resolution problem with dasoertliche.de (the second problematic case). It's the following:
Dec 13 23:39:01 raspi-server named[642]: validating @0x713c0030: de SOA: got insecure response; parent indicates it should be secure
Dec 13 23:39:01 raspi-server named[642]: error (no valid RRSIG) resolving 'dasoertliche.de/DS/IN': 208.67.220.220#53
Dec 13 23:39:01 raspi-server named[642]: validating @0x712c0030: de SOA: got insecure response; parent indicates it should be secure
Dec 13 23:39:01 raspi-server named[642]: error (no valid RRSIG) resolving 'dasoertliche.de/DS/IN': 208.67.222.222#53
Dec 13 23:39:04 raspi-server named[642]: success resolving 'dasoertliche.de/DS' (in '.'?) after reducing the advertised EDNS UDP packet size to 512 octets
Dec 13 23:39:04 raspi-server named[642]: validating @0x711c2040: dlv.isc.org SOA: got insecure response; parent indicates it should be secure
Dec 13 23:39:04 raspi-server named[642]: validating @0x73401378: dlv.isc.org SOA: got insecure response; parent indicates it should be secure
Dec 13 23:39:04 raspi-server named[642]: error (no valid RRSIG) resolving 'dasoertliche.de.dlv.isc.org/DS/IN': 208.67.220.220#53
Dec 13 23:39:04 raspi-server named[642]: validating @0x713c0030: dlv.isc.org SOA: got insecure response; parent indicates it should be secure
Dec 13 23:39:04 raspi-server named[642]: error (no valid RRSIG) resolving 'dasoertliche.de.dlv.isc.org/DS/IN': 208.67.222.222#53
Dec 13 23:39:04 raspi-server named[642]: error (insecurity proof failed) resolving 'dasoertliche.de.dlv.isc.org/DLV/IN': 208.67.220.220#53
Dec 13 23:39:05 raspi-server named[642]: validating @0x712c0030: dlv.isc.org SOA: got insecure response; parent indicates it should be secure
Dec 13 23:39:05 raspi-server named[642]: error (insecurity proof failed) resolving 'dasoertliche.de.dlv.isc.org/DLV/IN': 208.67.222.222#53
Dec 13 23:39:07 raspi-server named[642]: success resolving 'dasoertliche.de.dlv.isc.org/DLV' (in '.'?) after reducing the advertised EDNS UDP packet size to 512 octets
Now it all starts to make a lot more sense, I believe. What I suppose happened: It seems OpenDNS does not support DNSSEC, so when my_server got back the initial (correct) resolution from OpenDNS and asked for DNSSEC, it must have figured the response to be insecure, because there was no proper DNSSEC response. Hence from 23:39:01.856006 onwards, it tried to get confirmation from google DNS. But what do the following lines in syslog and tcpdump mean exactly? Why are there seconds in which google didn't reply and what do the replies and the following exchange of information between google and my_server mean?
|
After more research, the problem appears to be the following:
The initial setup contained both forwarders that were DNSSEC-capable (GoogleDNS 8.8.8.8, 8.8.4.4) and some that weren't (OpenDNS 208.67.222.222, 208.67.220.220). I had BIND9 running with DNSSEC fully enabled, as per the following configuration:
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;
a) Whenever a request (A?) was forwarded to the GoogleDNS servers, my_server got a reply (A), sent a DNSSEC-query (DS?) and got a proper answer. Resolution done, signature confirmed, all working like a charm, case closed (see above, the normal case).
b) Whenever a request (A?) was forwarded to the OpenDNS servers, my_server got a reply (A) and sent a DNSSEC query (DS?), which OpenDNS failed to answer properly as it does not support DNSSEC. So BIND9 threw an error in syslog, stating that it got an insecure response, and tried to get its DNSSEC validation elsewhere (see above, the problematic case).
I still don't fully understand what happened then, but obviously that's when the hiccups started. I don't know whether the GoogleDNS servers didn't like giving out DS answers without having served the corresponding A query first. But in both of the problematic cases (sueddeutsche.net, dasoertliche.de) it seems the entries were also not properly signed, so DNSSEC validation failed. DNSSEC lookaside validation (DLV) was then started (the Type32769 queries), and again it all went south. No idea why.
c) Solution: After all this, I have done the following and have not yet run into any problems (so the question seems solved):
First, I switched forwarders to only
forwarders { 8.8.8.8; 8.8.4.4; };
so that there's no longer a mixup of DNSSEC support. Second, I commented out
//dnssec-lookaside auto;
because, after digging through lots of tcpdumps, it appears that whenever resolution is slow, the delays are either caused by GoogleDNS taking some time to answer (which happens rarely) or, regularly, occur during DLV. Since DLV is currently being phased out anyway, with entries no longer available after 2017
https://www.isc.org/blogs/dlv/
I believe that's acceptable from a security-standpoint.
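Putting the changes together, the relevant part of named.conf.options would look roughly like this (a sketch; the forward only line is assumed from the setup described in the question):

```
options {
    ...
    forward only;
    forwarders { 8.8.8.8; 8.8.4.4; };

    dnssec-enable yes;
    dnssec-validation yes;
    // dnssec-lookaside auto;   // disabled: DLV is being phased out
};
```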
Now an alternative solution would be to ditch the GoogleDNS servers and only use OpenDNS as forwarders. But then I'd have to comment out all the dnssec entries mentioned above, completely disabling DNSSEC, because OpenDNS does not support it. That would leave my DNS queries open to attacks, which would require adding an alternative layer of security like dnscrypt (as mentioned by Rui F Ribeiro). While that looks like a worthwhile project (as it completely encrypts DNS traffic, thereby keeping it not only unaltered but also unreadable by attackers), it's a little over my current time budget.
Any DNS experts out there who want to chime in if the explanations above make sense?
| BIND9: DNS resolves sometimes (!) take very long or don't work at all |
I'm configuring BIND9 to obtain a wildcard certificate from Let's Encrypt.
When I try to generate TSIG key according to instruction here, I got the following error:
# dnssec-keygen -a HMAC-SHA512 -b 512 -n HOST keyname.
dnssec-keygen: fatal: unknown algorithm HMAC-SHA512
Then I read help and document about dnssec-keygen, there is no algorithm called HMAC-SHA512 indeed:
# dnssec-keygen -h
Usage:
dnssec-keygen [options] name
Version: 9.14.2
name: owner of the key
Options:
-K <directory>: write keys into directory
-a <algorithm>:
RSASHA1 | NSEC3RSASHA1 |
RSASHA256 | RSASHA512 |
ECDSAP256SHA256 | ECDSAP384SHA384 |
ED25519 | ED448 | DH
-3: use NSEC3-capable algorithm
-b <key size in bits>:
RSASHA1: [1024..4096]
NSEC3RSASHA1: [1024..4096]
RSASHA256: [1024..4096]
RSASHA512: [1024..4096]
DH: [128..4096]
ECDSAP256SHA256: ignored
ECDSAP384SHA384: ignored
ED25519: ignored
ED448: ignored
(key size defaults are set according to
algorithm and usage (ZSK or KSK)
-n <nametype>: ZONE | HOST | ENTITY | USER | OTHER
(DNSKEY generation defaults to ZONE)
-c <class>: (default: IN)
-d <digest bits> (0 => max, default)
-E <engine>:
name of an OpenSSL engine to use
-f <keyflag>: KSK | REVOKE
-g <generator>: use specified generator (DH only)
-L <ttl>: default key TTL
-p <protocol>: (default: 3 [dnssec])
-s <strength>: strength value this key signs DNS records with (default: 0)
-T <rrtype>: DNSKEY | KEY (default: DNSKEY; use KEY for SIG(0))
-t <type>: AUTHCONF | NOAUTHCONF | NOAUTH | NOCONF (default: AUTHCONF)
-h: print usage and exit
-m <memory debugging mode>:
usage | trace | record | size | mctx
-v <level>: set verbosity level (0 - 10)
-V: print version information
Timing options:
-P date/[+-]offset/none: set key publication date (default: now)
-P sync date/[+-]offset/none: set CDS and CDNSKEY publication date
-A date/[+-]offset/none: set key activation date (default: now)
-R date/[+-]offset/none: set key revocation date
-I date/[+-]offset/none: set key inactivation date
-D date/[+-]offset/none: set key deletion date
-D sync date/[+-]offset/none: set CDS and CDNSKEY deletion date
-G: generate key only; do not set -P or -A
-C: generate a backward-compatible key, omitting all dates
-S <key>: generate a successor to an existing key
-i <interval>: prepublication interval for successor key (default: 30 days)
Output:
K<name>+<alg>+<id>.key, K<name>+<alg>+<id>.private
I dug into another question: can't generate key via dnssec-keygen, but my problem remains unsolved.
What should I do?
|
After a bit of searching, I found that the documentation of the plugin certbot-dns-rfc2136 is obsolete!
In BIND9's official git repository, I found the following commit message:
[func] The use of dnssec-keygen to generate HMAC keys is
deprecated in favor of tsig-keygen. dnssec-keygen
will print a warning when used for this purpose.
All HMAC algorithms will be removed from
dnssec-keygen in a future release. [RT #42272]
So, the final solution is:
tsig-keygen -a hmac-sha512 tsig-key > /etc/bind/tsig.key
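The generated file contains a key clause that can be included from named.conf; it looks roughly like this (the secret shown is only a placeholder):

```
key "tsig-key" {
        algorithm hmac-sha512;
        secret "base64-encoded-secret-goes-here==";
};
```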
| How to generate TSIG key for certbot plugin 'certbot-dns-rfc2136' |
I was looking at bind9-host
shirish@debian:"04 Jan 2020 15:48:02" ~$ aptitude show bind9-host=1:9.11.5.P4+dfsg-5.1+b1
Package: bind9-host
Version: 1:9.11.5.P4+dfsg-5.1+b1
State: installed
Automatically installed: no
Priority: standard
Section: net
Maintainer: Debian DNS Team <[email protected]>
Architecture: amd64
Uncompressed Size: 369 k
Compressed Size: 271 k
Filename: pool/main/b/bind9/bind9-host_9.11.5.P4+dfsg-5.1+b1_amd64.deb
Checksum-FileSize: 271156
MD5Sum: 8cd326a23a51acdb773df5b7dce76060
SHA256: 977287c7212e9d3e671b85fdd04734b4908fe86d4b3581e47fb86d8b27cfdb3b
Archive: testing
Depends: libbind9-161 (= 1:9.11.5.P4+dfsg-5.1+b1), libdns1104 (= 1:9.11.5.P4+dfsg-5.1+b1), libisc1100 (= 1:9.11.5.P4+dfsg-5.1+b1), libisccfg163 (= 1:9.11.5.P4+dfsg-5.1+b1), liblwres161 (= 1:9.11.5.P4+dfsg-5.1+b1), libc6 (>= 2.14), libcap2 (>= 1:2.10), libcom-err2 (>= 1.43.9), libfstrm0 (>= 0.2.0), libgeoip1, libgssapi-krb5-2 (>= 1.6.dfsg.2), libidn2-0 (>= 2.0.0), libjson-c4 (>= 0.13.1), libk5crypto3 (>= 1.6.dfsg.2), libkrb5-3 (>= 1.6.dfsg.2), liblmdb0 (>= 0.9.6), libprotobuf-c1 (>= 1.0.0), libssl1.1 (>= 1.1.0), libxml2 (>= 2.6.27)
Provides: host
Description: DNS lookup utility (deprecated)
This package provides /usr/bin/host, a simple utility (bundled with the BIND 9.X sources) which can be used for converting domain names to IP addresses and the reverse.
This utility is deprecated, use dig or delv from the dnsutils package.
Homepage: https://www.isc.org/downloads/bind/
What was interesting to me is that while the utility itself is deprecated and the program has numerous issues, Debian still seems to keep it, and I don't see why. I also don't see any deprecation notice or documentation in /usr/share/doc/bind9-host. There is usually a NEWS.gz which gives this info, but this package doesn't have one; Changelog.gz and the other files don't mention it either.
Interestingly, they continue to ship it:
$ apt-cache policy bind9-host
bind9-host:
Installed: 1:9.11.5.P4+dfsg-5.1+b1
Candidate: 1:9.11.5.P4+dfsg-5.1+b1
Version table:
1:9.15.7-1 100
100 http://cdn-fastly.deb.debian.org/debian experimental/main amd64 Packages
1:9.11.14+dfsg-1 100
100 http://cdn-fastly.deb.debian.org/debian unstable/main amd64 Packages
*** 1:9.11.5.P4+dfsg-5.1+b1 900
900 http://cdn-fastly.deb.debian.org/debian testing/main amd64 Packages
100 /var/lib/dpkg/status
|
host is not deprecated by Internet Systems Consortium, the BIND company. It does not even deprecate nslookup as it once did.
This deprecation of host was done in 2018 by a Debian Developer, on xyr own initiative, in response to a 2013 Debian bug report about the package description, which did not actually mention deprecation. The Debian package description is the only place where deprecation is mentioned, and no rationale is given for it.
If one were going to deprecate ISC tools (again), there is a far more obvious place to start.
As a Debian user, you might like to submit a bug report about this deprecation.
Further reading
Justin B Rye (2013-11-14). bind9-host: unhelpful package description. Debian bug #729561.
Bernhard Schmidt (2018-03-22). Update bind9-host description. salsa.debian.org.
Mark Andrews (2004-08-19). 1700. [func] nslookup is no longer to be treated as deprecated.. gitlab.isc.org.
Jonathan de Boyne Pollard (2001). nslookup is a badly flawed tool. Don't use it.. Frequently Given Answers.
https://unix.stackexchange.com/a/446293/5132
| why host from bind9-host is/was deprecated and when? |
I'm running a name server using bind9 on Debian.
I noticed that there are multiple "named" processes running, when bind starts:
How can I limit this to n bind instances (processes)?
What is the recommended use of multiple bind processes? I know that bind is a relatively low-intensity application in terms of CPU and network use.
|
Depending on your distribution, there is likely a configuration file in which you can pass the -n #cpus switch to named.
From the named man page:
-n #cpus
Create #cpus worker threads to take advantage of multiple CPUs. If
not specified, named will try to determine the number of CPUs
present and create one thread per CPU. If it is unable to
determine the number of CPUs, a single worker thread will be
created.
On Debian
$ sudo vi /etc/default/bind9
Append config line:
OPTIONS="-n 4"
Restart the server:
$ sudo service bind9 restart
On CentOS/Fedora
$ sudo vi /etc/sysconfig/named
To force bind to take advantage of 4 CPUs, add / modify as follows:
OPTIONS="-n 4"
Restart the service:
$ sudo service named restart
References
Force BIND DNS Server to take full advantage of Dual Core Multiple Intel / AMD Cpu
| Multiple named processes for bind9 in Debian |
1,612,895,013,000 |
Reading about acl statement in bind's ARM found the following:
localnets:
"Matches any host on an IPv4 or IPv6 network for which the
system has an interface. When addresses are added or removed,
the localnets ACL element is updated to reflect the changes."
"for which the system has an interface" sounds like nonsense to me.
I understand what a network interface is, but I don't understand the text quoted above.
Could you please explain the meaning of the above quote,
and what is the difference between localhost and localnets?
Thanks
|
I guess localhost matches the host's own addresses (in BIND it covers the IPv4 and IPv6 addresses of all local interfaces, not just 127.0.0.1), while localnets matches every network from which the machine has an address assigned to one of its interfaces.
For example, if you have two interfaces, each with an address from a different network, then localnets matches both networks:
eth0 ip 10.0.0.1 netmask 255.0.0.0
eth1 ip 192.168.0.1 netmask 255.255.255.0
So localnets matches 10.0.0.0/8 and 192.168.0.0/24.
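The matched prefix is simply the interface address ANDed with its netmask. A small shell sketch of that computation, using the eth1 values above:

```shell
# Derive the network prefix localnets would match for an interface:
# bitwise-AND each octet of the address with the corresponding netmask octet.
ip=192.168.0.1
mask=255.255.255.0
IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"   # 192.168.0.0
```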
| What is the difference between localhost and localnets in named configuration |
1,612,895,013,000 |
I have a bind9 server spun up on one of my old test boxes, and it's close. Everything appears to be working, however I'm getting 'timed out resolving' errors spamming my syslog from what appears to be 3 specific DNS servers...
68.237.161.12
68.237.161.14
156.154.71.1
bind9 info
Jul 25 07:18:59 toe-lfs named[23935]: starting BIND 9.14.4 (Stable Release) <id:ab4c496>
Jul 25 07:18:59 toe-lfs named[23935]: running on Linux x86_64 4.9.9 #1 SMP Sat Sep 23 11:18:52 EDT 2017
Jul 25 07:18:59 toe-lfs named[23935]: built with '--prefix=/usr' '--sysconfdir=/etc' '--localstatedir=/var' '--mandir=/usr/share/man' '--enable-threads' '--with-libtool' '--disable-static' '--without-python'
Jul 25 07:18:59 toe-lfs named[23935]: running as: named -4 -u named -t /srv/named -c /etc/named.conf
Jul 25 07:18:59 toe-lfs named[23935]: compiled by GCC 6.3.0
Jul 25 07:18:59 toe-lfs named[23935]: compiled with OpenSSL version: OpenSSL 1.0.2k 26 Jan 2017
Jul 25 07:18:59 toe-lfs named[23935]: linked to OpenSSL version: OpenSSL 1.0.2k 26 Jan 2017
Jul 25 07:18:59 toe-lfs named[23935]: compiled with zlib version: 1.2.11
Jul 25 07:18:59 toe-lfs named[23935]: linked to zlib version: 1.2.11
here's a sampling of my sys.log
Jul 25 06:24:56 toe-lfs named[16927]: timed out resolving 'ns2prod.18.azuredns-prd.info/A/IN': 68.237.161.14#53
Jul 25 06:24:57 toe-lfs named[16927]: timed out resolving 'static.xx.fbcdn.net/A/IN': 68.237.161.14#53
Jul 25 06:24:58 toe-lfs named[16927]: timed out resolving 'azuredns-prd.info/DS/IN': 68.237.161.12#53
Jul 25 06:24:59 toe-lfs named[16927]: timed out resolving 'azuredns-prd.info/DS/IN': 68.237.161.14#53
Jul 25 06:26:56 toe-lfs named[16927]: timed out resolving 'settingsfd-geo.trafficmanager.net/A/IN': 156.154.71.1#53
Jul 25 06:26:57 toe-lfs named[16927]: timed out resolving 'settingsfd-geo.trafficmanager.net/A/IN': 68.237.161.12#53
Jul 25 06:26:59 toe-lfs named[16927]: timed out resolving 'settingsfd-geo.trafficmanager.net/A/IN': 68.237.161.14#53
Jul 25 06:27:00 toe-lfs named[16927]: timed out resolving 'beacons.gcp.gvt2.com/A/IN': 68.237.161.12#53
Jul 25 06:27:01 toe-lfs named[16927]: timed out resolving 'beacons.gcp.gvt2.com/A/IN': 68.237.161.14#53
Jul 25 06:58:26 toe-lfs named[16927]: timed out resolving 'us-ne-courier-4.push-apple.com.akadns.net/A/IN': 68.237.161.14#53
Jul 25 06:58:27 toe-lfs named[16927]: timed out resolving 'gsp-ssl-geomap.ls-apple.com.akadns.net/A/IN': 68.237.161.14#53
Jul 25 06:58:28 toe-lfs named[16927]: timed out resolving 'us-ne-courier-4.push-apple.com.akadns.net/A/IN': 68.237.161.12#53
Jul 25 06:58:28 toe-lfs named[16927]: timed out resolving 'gsp-ssl-geomap.ls-apple.com.akadns.net/A/IN': 68.237.161.12#53
Jul 25 06:58:29 toe-lfs named[16927]: timed out resolving 'gsp-ssl-gspxramp.ls-apple.com.akadns.net/A/IN': 68.237.161.12#53
Jul 25 06:58:29 toe-lfs named[16927]: timed out resolving 'e4478.a.akamaiedge.net/A/IN': 68.237.161.12#53
Jul 25 06:58:29 toe-lfs named[16927]: timed out resolving 'e6858.dsce9.akamaiedge.net/A/IN': 68.237.161.12#53
Jul 25 06:58:30 toe-lfs named[16927]: timed out resolving 'help.apple.com/A/IN': 68.237.161.12#53
Jul 25 06:58:30 toe-lfs named[16927]: timed out resolving 'cds.apple.com/A/IN': 68.237.161.12#53
Jul 25 06:58:30 toe-lfs named[16927]: timed out resolving 'stocks-edge.apple.com/A/IN': 68.237.161.12#53
Jul 25 06:58:30 toe-lfs named[16927]: timed out resolving 'apple-finance.query.yahoo.com/A/IN': 68.237.161.12#53
Jul 25 06:58:30 toe-lfs named[16927]: timed out resolving 'stocks-sparkline.apple.com/A/IN': 68.237.161.12#53
Jul 25 06:58:30 toe-lfs named[16927]: timed out resolving 'gateway-carry.icloud.com/A/IN': 68.237.161.12#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'gsp-ssl-gspxramp.ls-apple.com.akadns.net/A/IN': 68.237.161.14#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'e4478.a.akamaiedge.net/A/IN': 68.237.161.14#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'e6858.dsce9.akamaiedge.net/A/IN': 68.237.161.14#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'help.apple.com/A/IN': 68.237.161.14#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'cds.apple.com/A/IN': 68.237.161.14#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'stocks-edge.apple.com/A/IN': 68.237.161.14#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'apple-finance.query.yahoo.com/A/IN': 68.237.161.14#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'stocks-sparkline.apple.com/A/IN': 68.237.161.14#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'gateway-carry.icloud.com/A/IN': 68.237.161.14#53
Jul 25 06:58:31 toe-lfs named[16927]: timed out resolving 'clientservices.googleapis.com/A/IN': 68.237.161.14#53
I can include the conf files if they'd be helpful. I would just need to triple check and sanitize them. Any thoughts?
edit: included named.conf
acl corpnets {
localhost;
172.30.24.0/22;
};
key "rndc-key" {
algorithm hmac-sha256;
secret "*****some secret key******";
};
controls {
inet 127.0.0.1 port 953
allow { 127.0.0.1; } keys { "rndc-key"; };
};
options {
directory "/etc/namedb";
pid-file "/var/run/named.pid";
statistics-file "/var/run/named.stats";
## listen-on { 172.30.24.1; };
managed-keys-directory "/etc";
recursion yes;
allow-recursion { corpnets; };
allow-query { corpnets; };
allow-transfer { none; };
forwarders {
156.154.71.1;
68.237.161.12;
68.237.161.14;
8.8.8.8;
8.8.4.4;
};
};
zone "." {
type hint;
file "root.hints";
};
zone "0.0.127.in-addr.arpa" {
type master;
file "pz/127.0.0";
};
## zone "30.172.IN-ADDR.ARPA" {
## type master;
## file "/etc/namedb/db.30.172";
## };
zone "24.30.172.IN-ADDR.ARPA" {
type master;
file "/etc/namedb/db.24.30.172";
};
// Bind 9 now logs by default through syslog (except debug).
// These are the default logging rules.
logging {
category default { default_syslog; default_debug; };
category unmatched { null; };
channel default_syslog {
syslog daemon; // send to syslog's daemon
// facility
severity info; // only send priority info
// and higher
};
channel default_debug {
file "named.run"; // write to named.run in
// the working directory
// Note: stderr is used instead
// of "named.run"
// if the server is started
// with the '-f' option.
severity dynamic; // log at the server's
// current debug level
};
channel default_stderr {
stderr; // writes to stderr
severity info; // only send priority info
// and higher
};
channel null {
null; // toss anything sent to
// this channel
};
};
|
I just had this and fixed it by removing the faulty forwarders. At the end of each timeout error is the IP of one of the forwarders from your config, but the errors never mention Google's nameservers (8.8.8.8). If you delete the first three forwarders, the errors should go away.
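A quick way to confirm this yourself is to query each forwarder directly with a short timeout (a diagnostic sketch, not a fix; the IPs are the ones from the question's config):

```shell
# Query each forwarder directly; servers that time out or answer nothing
# are the ones producing the "timed out resolving" log entries.
for ns in 156.154.71.1 68.237.161.12 68.237.161.14 8.8.8.8; do
  echo "== $ns =="
  dig +time=2 +tries=1 @"$ns" example.com A +short || echo "$ns: no answer"
done
```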
| bind9 - timed out resolving |
1,612,895,013,000 |
Resolvconf is a package created to handle various specific situations, such as LANs with DHCP, VPNs, and other cases where several programs try to modify /etc/resolv.conf by hand.
It prioritizes nameservers using an ordered list of interfaces; for example, tun interfaces and DHCP clients rank above a PPP connection.
/etc/resolvconf/interface-order
# interface-order(5)
lo.inet6
lo.inet
lo.@(dnsmasq|pdnsd)
lo.!(pdns|pdns-recursor)
lo
tun*
tap*
hso*
em+([0-9])?(_+([0-9]))*
p+([0-9])p+([0-9])?(_+([0-9]))*
eth*([^.]).inet6
eth*([^.]).ip6.@(dhclient|dhcpcd|pump|udhcpc)
eth*([^.]).inet
eth*([^.]).@(dhclient|dhcpcd|pump|udhcpc)
eth*
@(ath|wifi|wlan)*([^.]).inet6
@(ath|wifi|wlan)*([^.]).ip6.@(dhclient|dhcpcd|pump|udhcpc)
@(ath|wifi|wlan)*([^.]).inet
@(ath|wifi|wlan)*([^.]).@(dhclient|dhcpcd|pump|udhcpc)
@(ath|wifi|wlan)*
ppp*
*
My problem is that if you have the resolvconf package and also install a DNS server like BIND9 or dnsmasq, resolvconf will automatically give top precedence to 127.0.0.1.
Well, I don't want that; I want resolvconf to work normally, as if bind9/dnsmasq weren't installed. But I can't find an option like "ignore local DNS as a possible DNS choice" in the resolvconf configuration.
|
Fine, it seems that after some tries I found a solution...
Comment out all the localhost lines in the interface-order file, especially these two:
# lo.@(dnsmasq|pdnsd)
# lo.!(pdns|pdns-recursor)
Everything worked as intended ;)
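If you want to script that edit, a sed sketch (shown against an inline sample; point it at /etc/resolvconf/interface-order with -i once the output looks right):

```shell
# Prefix "# " to the lo.@(...) and lo.!(...) entries, leaving everything
# else (lo, lo.inet, tun*, ...) untouched.
sed -E 's/^(lo\.[@!].*)$/# \1/' <<'EOF'
lo.inet
lo.@(dnsmasq|pdnsd)
lo.!(pdns|pdns-recursor)
lo
tun*
EOF
```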
| Prevent resolvconf from assigning localhost when bind9/dnsmasq is found on host |
1,612,895,013,000 |
I came across a strange problem. Maybe someone knows the answer. I use named-checkzone to check my zone file.
The following zone file will display ignoring out-of-zone data when I use fully qualified domain names:
$TTL 30
@ IN SOA localhost. admin.example.com. (
2017072702 ; serial
3 ; refresh
1 ; retry
2 ; expire
1M) ; negative TTL
IN NS localhost.;
www.example.com. IN A 192.168.111.45
www.example.com. IN AAAA fe80::22c9:d0ff:1ecd:c0ef
foo.example.com. IN A 192.168.121.11
bar.example.com. IN CNAME www.example.com.
;generate 100 hosts
$GENERATE 1-100 host$.example.com. IN A 10.20.45.$
However with relative names no such messages are displayed as in this zone file:
$TTL 30
@ IN SOA localhost. admin.example.com. (
2017072702 ; serial
3 ; refresh
1 ; retry
2 ; expire
1M) ; negative TTL
IN NS localhost.;
www IN A 192.168.111.45
www IN AAAA fe80::22c9:d0ff:1ecd:c0ef
foo IN A 192.168.121.11
bar IN CNAME www.example.com.
;generate 100 hosts
$GENERATE 1-100 host$ IN A 10.20.45.$
Could someone explain why that is so?
The command I enter is the following sudo named-checkzone www.example.com /var/named/example.com.zone.
The output for the file containing the fully qualified domain names (FQDN) is as follows:
/var/named/example.com.zone:11: ignoring out-of-zone data (foo.example.com)
/var/named/example.com.zone:12: ignoring out-of-zone data (bar.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host1.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host2.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host3.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host4.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host5.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host6.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host7.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host8.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host9.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host10.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host11.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host12.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host13.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host14.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host15.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host16.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host17.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host18.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host19.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host20.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host21.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host22.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host23.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host24.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host25.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host26.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host27.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host28.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host29.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host30.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host31.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host32.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host33.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host34.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host35.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host36.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host37.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host38.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host39.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host40.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host41.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host42.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host43.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host44.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host45.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host46.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host47.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host48.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host49.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host50.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host51.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host52.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host53.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host54.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host55.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host56.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host57.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host58.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host59.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host60.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host61.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host62.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host63.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host64.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host65.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host66.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host67.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host68.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host69.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host70.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host71.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host72.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host73.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host74.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host75.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host76.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host77.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host78.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host79.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host80.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host81.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host82.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host83.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host84.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host85.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host86.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host87.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host88.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host89.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host90.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host91.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host92.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host93.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host94.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host95.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host96.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host97.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host98.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host99.example.com)
/var/named/example.com.zone:14: ignoring out-of-zone data (host100.example.com)
zone www.example.com/IN: loaded serial 2017072702
OK
The output for the file containing the relative domain names is as follows:
zone www.example.com/IN: loaded serial 2017072702
OK
|
The command was wrong. The first parameter has to be the zone name, not a host name within the zone:
sudo named-checkzone example.com /var/named/example.com.zone
| named-checkzone displays “ignoring out-of-zone” data |
1,612,895,013,000 |
I tried to block the .zip TLD on my laptop (running Fedora 38) with BIND.
Installing bind
Updating named.conf:
options {
listen-on port 53 { 127.0.0.1; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
secroots-file "/var/named/data/named.secroots";
recursing-file "/var/named/data/named.recursing";
allow-query { localhost; };
/*
- If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
- If you are building a RECURSIVE (caching) DNS server, you need to enable
recursion.
- If your recursive DNS server has a public IP address, you MUST enable access
control to limit queries to your legitimate users. Failing to do so will
cause your server to become part of large scale DNS amplification
attacks. Implementing BCP38 within your network would greatly
reduce such attack surface
*/
recursion yes;
forwarders { 8.8.8.8; };
dnssec-validation yes;
managed-keys-directory "/var/named/dynamic";
geoip-directory "/usr/share/GeoIP";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
/* https://fedoraproject.org/wiki/Changes/CryptoPolicy */
include "/etc/crypto-policies/back-ends/bind.config";
/* this makes it block everything */
// response-policy { zone "zip"; };
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "zip" IN {
type master;
file "zip-rpz";
allow-update { none; };
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
Added /var/named/zip-rpz:
$TTL 1D ; default expiration time (in seconds) of all RRs without their own TTL value
@ IN SOA ns.zip. postmaster.ns.zip. ( 2020091025 7200 3600 1209600 3600 )
@ IN NS ns1 ; nameserver
* IN A 127.0.0.1 ; localhost
IN AAAA :: ; localhost
Apply temporarily
sudo systemctl enable named
sudo service named restart
resolvectl dns wlp0s20f3 127.0.0.1
However, running dig url.zip returns 127.0.0.1 only for the next minute or so – after that it shows the "correct" IP (and I can visit the site in the browser again).
Why is it getting reset?
If I remove the forwarders line, same result.
If I set recursion no;, I am unable to resolve anything other than .zip urls (those point to 127.0.0.1)
|
I think I solved it.
If I’m not mistaken, the problem seems to be with systemd-resolved / resolvectl not persisting its settings for long...
If I change the file /etc/systemd/resolved.conf such that it contains
...
[Resolve]
DNS=127.0.0.1
...
And then reboot, it seems to do (finally) what it should.
I’d still like to know why
resolvectl dns wlp0s20f3 127.0.0.1
only takes effect so briefly
| How do I get BIND (DNS) to be authoritative about a tld for more than a minute |
1,612,895,013,000 |
I set up a DNS server using the bind9 utility, took the settings from the example, and this is how the configuration turned out:
file: /etc/bind/named.conf.local
//
// Do any local configuration here
//
// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";
zone "testing.net" {
type master;
file "/etc/bind/db.forward.com";
};
zone "12.168.192.in-addr.arpa" {
type master;
file "/etc/bind/db.reverse.com";
};
file: /etc/bind/db.forward.com
;
; BIND data file for local loopback interface
;
$TTL 604800
@ IN SOA ns.testing.net. root.localhost. (
2 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.testing.net.
ns IN A 192.168.12.1
server IN A 192.168.12.1
www 3600 IN CNAME ns.testing.net.
file: /etc/bind/db.reverse.com
;
; BIND reverse data file for local loopback interface
;
$TTL 604800
@ IN SOA ns.testing.net. root.localhost. (
1 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.
1 IN PTR ns.testing.net.
1 IN PTR ns.testing.net.
I have a service running on the system which is available under the name demo.testing.net.
I also run an access point on this system; clients connected to it can resolve the name ns.testing.net.
Could you please tell me how to configure bind9 so that clients can also reach the system under the name demo.testing.net? What should be specified in the configuration files?
Thank you very much!
|
For Linux hosts you should put the following in their /etc/resolv.conf:
nameserver 192.168.12.1
search testing.net
They should then test with:
ping demo
Don't forget to put the following line in /etc/bind/db.forward.com:
demo IN A 192.168.12.<host ip>
and in /etc/bind/db.reverse.com:
<host ip> IN PTR demo.testing.net.
and restart your bind9 server.
Good luck.
| Add another name to DNS server bind9 |
1,612,895,013,000 |
I have a DNS server with Bind9 installed, that has IP 192.168.145.119. This works as a resolver for a DNS server on IP 192.168.145.1.
I have set it up to work as a forwarder, and it works when using ping, dig, etc. I have also set up a zone with CNAMEs. This works fine, as intended. However, reverse lookups don't work. If I run nslookup 192.168.145.96 I get:
** server can't find 96.145.168.192.in-addr.arpa: NXDOMAIN
How can I resolve this issue?
This is my named.conf
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
options {
directory "/var/cache/bind";
recursion yes;
allow-query { any; };
allow-transfer {
localhost;
# Bind9 slave
192.168.145.218;
};
forwarders {
192.168.145.1;
};
dnssec-enable no;
dnssec-validation false;
auth-nxdomain no; # conform to RFC1035
listen-on-v6 { any; };
};
include "/etc/bind/domain.conf";
domain.conf
zone "domain" {
type master;
file "/etc/bind/zones/db.domain";
allow-transfer {
192.168.145.218;
};
notify yes;
};
db.domain
;
; BIND reverse data file for broadcast zone
;
$TTL 604800
@ IN SOA ns1.domain admin.domain. (
202001161 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
IN NS ns1.domain.
IN NS ns2.domain.
ns1.domain. IN A 192.168.145.119
ns2.domain. IN A 192.168.145.218
docker-registry-vm1.domain IN CNAME docker-registry-vm1.internal.
dns-master-vm1.domain. IN CNAME dns-master-vm1.internal.
dns-slave-vm1.domain. IN CNAME dns-slave-vm1.internal.
|
In one of the configurations I had a lot of empty zones: BIND serves built-in empty zones for the RFC 1918 reverse ranges (including 168.192.in-addr.arpa), so reverse queries were answered with NXDOMAIN locally instead of being forwarded. I had to add empty-zones-enable no; to my named.conf.
Now it looks like this:
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
options {
directory "/var/cache/bind";
recursion yes;
allow-query { any; };
empty-zones-enable no;
allow-transfer {
localhost;
#Bind9 slave
192.168.145.167;
};
forwarders {
192.168.145.1;
};
dnssec-enable false;
dnssec-validation false;
auth-nxdomain yes; # conform to RFC1035
listen-on-v6 { any; };
};
include "/etc/bind/domain.conf";
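A narrower alternative (an assumption, not the setting used above): BIND can disable a single built-in empty zone instead of all of them, e.g. just the one swallowing 192.168.x.x reverse lookups:

```
options {
        // Keep the other built-in empty zones; disable only the one
        // covering the 192.168.x.x reverse range.
        disable-empty-zone "168.192.in-addr.arpa";
};
```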
| Forward reverse lookups with Bind9 |
1,612,895,013,000 |
I am running a DNS and DCHP service on a local server (Raspberry on Stretch).
When checking the zone files, I get:
# [2019-02-03 10:32] maxg@rpiserver /etc/bind/zones $
named-checkzone rpiserver argylecourt.org.db
argylecourt.org.db:22: ignoring out-of-zone data (argylecourt.org)
argylecourt.org.db:23: ignoring out-of-zone data (argylecourt.org)
zone rpiserver/IN: has no NS records
zone rpiserver/IN: not loaded due to errors.
This is the contents of the argylecourt.org.db zone file:
; Host-to-IP Address DNS Pointers for argylecourt.org
; Note: The extra “.” at the end of the domain names are important.
;
; $ORIGIN .
$TTL 86400 ; 1 day
; rpiserver.argylecourt.org. IN SOA rpiserver.argylecourt.org. hostmaster.argylecourt.org. (
@ IN SOA rpiserver.argylecourt.org. hostmaster.argylecourt.org. (
2019020203 ; serial
8H ; refresh
4H ; retry
2W ; expire
1D ; minimum
)
; NS indicates that rpiserver is the name server on argylecourt.org
; MX indicates that rpiserver is (also) the mail server on argylecourt.org
argylecourt.org. IN NS rpiserver.argylecourt.org.
argylecourt.org. IN MX 10 rpiserver.argylecourt.org.
;$ORIGIN argylecourt.org.
; Set the address for localhost.argylecourt.org
;localhost IN A 127.0.0.1
;localhost IN A 192.168.1.7
rpiserver IN A 192.168.1.7
www IN CNAME argylecourt.org
I also have errors in the reverse zone:
# [2019-02-03 10:43] maxg@rpiserver /etc/bind/zones $
named-checkzone rpiserver rev.1.168.192.in-addr.arpa
zone rpiserver/IN: NS 'rpiserver' has no address records (A or AAAA)
zone rpiserver/IN: not loaded due to errors.
... which has this contents:
$TTL 86400 ; 1 day
; IP Address-to-Host DNS Pointers for the 192.168.1 subnet
@ IN SOA rpiserver.argylecourt.org. hostmaster.argylecourt.org. (
2019020203 ; serial
8H ; refresh
4H ; retry
2W ; expire
1D ; minimum
)
; define the authoritative name server
; IN NS rpiserver.argylecourt.org.
IN NS rpiserver.
[update 1] Have just read: BIND Reverse DNS Ignoring out-of-zone data -- which resulted in 0 errors when applied to my situation.
# [2019-02-03 10:46] maxg@rpiserver /etc/bind/zones $
named-checkzone 1.168.192.in-addr.arpa rev.1.168.192.in-addr.arpa
zone 1.168.192.in-addr.arpa/IN: loaded serial 2019020203
OK
# [2019-02-03 10:52] maxg@rpiserver /etc/bind/zones $
named-checkzone argylecourt.org argylecourt.org.db
zone argylecourt.org/IN: loaded serial 2019020203
OK
[update 2] restarting bind9 results in:
# [2019-02-03 11:19] maxg@rpiserver /etc/bind/zones $
sudo service bind9 status
● bind9.service - BIND Domain Name Server
Loaded: loaded (/lib/systemd/system/bind9.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2019-02-03 11:19:40 AEST; 22s ago
Docs: man:named(8)
Process: 5661 ExecStop=/usr/sbin/rndc stop (code=exited, status=0/SUCCESS)
Main PID: 5667 (named)
CGroup: /system.slice/bind9.service
└─5667 /usr/sbin/named -f -u bind
Feb 03 11:19:40 rpiserver named[5667]: managed-keys-zone: journal file is out of date: removing journal file
Feb 03 11:19:40 rpiserver named[5667]: managed-keys-zone: loaded serial 648
Feb 03 11:19:40 rpiserver named[5667]: zone 0.in-addr.arpa/IN: loaded serial 1
Feb 03 11:19:40 rpiserver named[5667]: zone localhost/IN: loaded serial 2
Feb 03 11:19:40 rpiserver named[5667]: zone 127.in-addr.arpa/IN: loaded serial 1
Feb 03 11:19:40 rpiserver named[5667]: zone 1.168.192.in-addr.arpa/IN: loaded serial 2017061507
Feb 03 11:19:40 rpiserver named[5667]: zone 255.in-addr.arpa/IN: loaded serial 1
Feb 03 11:19:40 rpiserver named[5667]: zone argylecourt.org/IN: loaded serial 2017061536
Feb 03 11:19:40 rpiserver named[5667]: all zones loaded
Feb 03 11:19:40 rpiserver named[5667]: running
Where do I need to look to fix this problem?
|
I started digging further when I realised that the serial number in the logs was old.
cat /etc/bind/named.conf.local showed that the zone was declared with [file "/var/lib/bind/argylecourt.org.db";]
... while I was updating /etc/bind/zones/argylecourt.org.db instead.
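A grep like the following makes that mismatch easy to spot (sketch against an inline sample; run it on your real /etc/bind/named.conf.local):

```shell
# List every "file" directive so you can compare the paths named loads
# with the paths you are actually editing.
grep -n 'file "' <<'EOF'
zone "argylecourt.org" {
        type master;
        file "/var/lib/bind/argylecourt.org.db";
};
EOF
```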
| BIND9 DNS zone file check reveals "ignoring out-of-zone data" |
1,612,895,013,000 |
I'd like to build bind zone files via Ansible. To decide how to structure the jinja2 template I need to know if there is any difference in any of these zone configurations:
1.) good old fashioned way:
$ORIGIN foo.bar.
@ IN SOA dns.foo.bar. hostmaster.foo.bar. (
2018111601
3H
1H
604800
86400)
86400 IN NS ns01.foo.bar.
86400 IN NS ns02.foo.bar.
www IN A 10.0.0.1
-
2.) Do I have to specify $ORIGIN if the zone name is already foo.bar?
from named.conf:
zone "foo.bar" in{
type master;
file "zones/foo.bar";
};
from zones/foo.bar:
@ IN SOA dns.foo.bar. hostmaster.foo.bar. (
2018111601
3H
1H
604800
86400)
86400 IN NS ns01.foo.bar.
86400 IN NS ns02.foo.bar.
www IN A 10.0.0.1
-
3.) Split-up apex and use '@' multiple times
$ORIGIN foo.bar.
@ IN SOA dns.foo.bar. hostmaster.foo.bar. (
2018111601
3H
1H
604800
86400)
www IN A 10.0.0.1
@ 86400 IN NS ns01.foo.bar.
@ 86400 IN NS ns02.foo.bar.
-
4.) No use of '@' placeholder
foo.bar. IN SOA dns.foo.bar. hostmaster.foo.bar. (
2018111601
3H
1H
604800
86400)
foo.bar. 86400 IN NS ns01.foo.bar.
foo.bar. 86400 IN NS ns02.foo.bar.
$ORIGIN foo.bar.
www IN A 10.0.0.1
-
I always want this as answer:
$ dig foo.bar ANY +noall +answer
foo.bar. 1784 IN SOA dns.foo.bar. hostmaster.foo.bar. 2018121401 10800 3600 604800 86400
foo.bar. 86384 IN NS ns01.foo.bar.
foo.bar. 86384 IN NS ns02.foo.bar.
$ dig www.foo.bar +short
10.0.0.1
Question:
Do all variants results in the same dns answer?
|
Answer: yes, they're all the same. Though note I haven't actually loaded these zones into a DNS server to confirm; e.g., I may have missed a typo when reading the question. Load them into a DNS server, allow zone transfers, and then transfer them — you should get the exact same result.
Details:
If you check “Other Zone File Directives” in the BIND9 manual, $ORIGIN defaults to the zone you specify in named.conf. Mainly you'd use $ORIGIN in manually-written files, e.g., to make it easier to deal with subdomains ($ORIGIN subdomain.domain.com., then define all your records for the subdomain).
Same section tells you that @ is a shortcut for the current origin. So spelling it out is exactly the same thing.
When you specify two records for the same name in a row without repeating the name, the second record just implicitly uses the last one's name. To quote RFC 1035 (which calls the name the record's owner):
The last two forms represent RRs. If an entry for an RR begins with a blank, then the RR is assumed to be owned by the last stated owner. If an RR entry begins with a <domain-name>, then the owner name is reset.
(BTW: $ORIGIN and @ are in the RFC as well, so they should apply to servers other than BIND that use the same zone file format. I just used the BIND manual to get terminology newer than 1987.)
These are all convenience features of the "master file" format — they have nothing to do with the DNS wire protocol. They don't even survive loading the file into BIND (if you have bind rewrite the zone file, e.g., due to allowing DNS updates, then you'll find it'll rewrite the file much closer to your #4).
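If you want to verify the equivalence yourself without serving the zones, one approach (a sketch; it assumes the four variants are saved as separate files under zones/, and that named-checkzone from the BIND utilities is installed) is to dump each file in canonical form and diff the results:

```
$ named-checkzone -D foo.bar zones/variant1 > /tmp/v1
$ named-checkzone -D foo.bar zones/variant2 > /tmp/v2
$ diff /tmp/v1 /tmp/v2 && echo "identical"
```

The -D flag makes named-checkzone print the loaded zone in canonical form, with all $ORIGIN, @, and implicit-owner shortcuts already expanded, so identical output means identical zones.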
| There is any difference between this zone syntax? |
1,612,895,013,000 |
I use bind for a simple setup on my LAN, just a cache for external domains and the internal resolver for the LAN. The problem is that the output of the reverse resolver is wrong; it should return only the domain name. It seems that due to some error the server doesn't find the resources to answer correctly, but I have not found any error in the logs. I paste below the configuration and the output of nslookup:
output nslookup:
$ nslookup server1.example.com
Server: 192.168.1.131
Address: 192.168.1.131#53
Name: server1.example.com
Address: 192.168.1.130
$ nslookup 192.168.1.130
130.1.168.192.in-addr.arpa name = server1.example.com.1.168.192.in-addr.arpa.
bind config:
// This is the primary configuration file for the BIND DNS server named.
options {
directory "/opt/etc/bind";
pid-file "/opt/etc/bind/named.pid";
query-source address * port 53;
forwarders {
// OPENDNS dns
208.67.222.222;
208.67.220.220;
// GOOGLE dns
8.8.8.8;
8.8.4.4;
};
auth-nxdomain no; # conform to RFC1035
};
logging {
channel update_debug {
file "/var/log/bind_update_debug.log" versions 3 size 100k;
severity debug;
print-severity yes;
print-time yes;
};
channel security_info {
file "/var/log/bind_security_info.log" versions 1 size 100k;
severity info;
print-severity yes;
print-time yes;
};
channel bind_log {
file "/var/log/bind.log" versions 3 size 1m;
severity info;
print-category yes;
print-severity yes;
print-time yes;
};
channel query_log {
file "/var/log/bind_query.log" versions 3 size 1m;
severity debug 3;
print-category yes;
print-severity yes;
print-time yes;
};
category default { bind_log; };
category queries { query_log; };
category lame-servers { null; };
category update { update_debug; };
category update-security { update_debug; };
category security { security_info; };
};
// prime the server with knowledge of the root servers
zone "." {
type hint;
file "/etc/bind/db.root";
};
// be authoritative for the localhost forward and reverse zones, and for
// broadcast zones as per RFC 1912
zone "localhost" {
type master;
file "/etc/bind/db.local";
};
zone "127.in-addr.arpa" {
type master;
file "/etc/bind/db.127";
};
zone "0.in-addr.arpa" {
type master;
file "/etc/bind/db.0";
};
zone "255.in-addr.arpa" {
type master;
file "/etc/bind/db.255";
};
zone "example.com" {
type master;
file "/etc/bind/db.example.com";
notify no;
};
zone "1.168.192.in-addr.arpa" {
type master;
file "/etc/bind/db.192";
notify no;
};
db.example.com:
;
; BIND data file for local loopback interface
;
$TTL 604800
@ IN SOA example.com. admin.example.com. (
2 ; Serial
1D ; Refresh
1H ; Retry
1W ; Expire
3H ) ; Negative Cache TTL
; name server - NS records
IN NS ns.example.com.
; name server - A records
ns IN A 192.168.1.131
; 192.168.1.0/255 - A records
laptop IN A 192.168.1.102
server1 IN A 192.168.1.130
server2 IN A 192.168.1.131
router IN A 192.168.1.1
db.192:
;
; BIND reverse data file for empty rfc1918 zone
;
$TTL 604800
@ IN SOA example.com. admin.example.com. (
2 ; Serial
1D ; Refresh
1H ; Retry
1W ; Expire
3H ) ; Negative Cache TTL
; name server
IN NS ns.example.com.
; name server PTR record
131 IN PTR ns.example.com
; PTR Records
102 IN PTR laptop.example.com
130 IN PTR server1.example.com
131 IN PTR server2.example.com
1 IN PTR router.example.com
Can anyone suggest where the mistake lies? Is it a trivial configuration error? thx
|
IN NS ns.example.com.
131 IN PTR ns.example.com
102 IN PTR laptop.example.com
130 IN PTR server1.example.com
You used a fully qualified domain name once, and then did not use it in any of the other cases. You clearly intend to use fully-qualified domain names here, given the question that you are asking. So make all of those names fully-qualified.
A fully-qualified (human-readable form) domain name ends with a dot.
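As a sketch, the records in db.192 would then read like this (every name on the right-hand side now ends with a dot):

```
        IN  NS   ns.example.com.
131     IN  PTR  ns.example.com.
102     IN  PTR  laptop.example.com.
130     IN  PTR  server1.example.com.
131     IN  PTR  server2.example.com.
1       IN  PTR  router.example.com.
```

Without the trailing dot, BIND appends the zone origin, which is exactly how you ended up with server1.example.com.1.168.192.in-addr.arpa. in the nslookup output.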
| bind9 reverse resolve problem |
1,612,895,013,000 |
BIND9 v9.18 improves support for DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). However, while the docs explain how to use TLS for the server part, it does not reveal how to enable DNS-over-TLS for query forwarding. Does BIND9 v9.18 support it?
How does the config snippet need to be tweaked to use DoT for the forwarders?
options {
[…]
forwarders {
// Forward to Cloudflare public DNS resolver
1.1.1.1;
1.0.0.1;
};
[…]
}
Simply adding port 853 and expecting some magic to happen does not seem to be enough.
|
As this is the top hit on Google for configuring BIND9 to forward via DNS-over-TLS, here's how I've configured and tested on BIND 9.19.13, connecting to OpenDNS.
I created a named.conf.dot in /etc/bind/ and referenced it via an include, but you could just as easily add this directly to named.conf
tls OpenDNS-DoT {
ca-file "/etc/ssl/certs/IdenTrust_Commercial_Root_CA_1.pem";
remote-hostname "dns.opendns.com";
};
options {
forwarders port 853 tls OpenDNS-DoT {
// OpenDNS public
208.67.222.222;
208.67.220.220;
};
};
According to Aram at ISC, this feature will be included in the next stable release, 9.20.
| How to use DNS-over-TLS with BIND9 forwarders |
1,612,895,013,000 |
I'm wondering whether there is a command I can run to get a complete list of the domain names that my BIND9 currently manages on my master DNS server.
Hypothetically, something such as:
named --list
And that would give me all the names of all the zones I have currently setup on that master.
Now, the reason for asking is that the way I've been setting up my slave BIND9 is by adding a new entry for each master entry. For example:
zone "example.app" {
type slave;
file "/var/cache/bind/example.app.zone";
masters { 192.168.0.1; };
allow-transfer { none; };
};
This allows my slave BIND9 to ask for the info from the master BIND9. It works, and it's fine when you have a few entries. When you have well over 50, it's not just tedious: you make many mistakes, which means no second DNS for those zones, and nothing tells you that the second DNS is missing...
I'm thinking that there is probably a much better way to setup the slave saying that any domain name managed by the master is to be replicated on the slave. But I do not want my slave to manage anyone else DNS. So only allow my master (192.168.0.1 in my example) to make changes.
Either solution would be fine with me. The second one would be better, of course.
|
First of all, one usually asks only one question per post, since it's better for site readability and referencing...
For your first question, you can ask bind to dump the zones it's currently managing, see the dumpdb command of rndc:
dumpdb [-all|-cache|-zones|-adb|-bad|-fail] [view ...]
Dump cache(s) to the dump file (named_dump.db).
Nevertheless:
not sure it will dump only the zones for which the server is master
you will have to parse the output, because it's not simply a list of zones but a full dump.
For your second question, there's actually two possibilities:
It looks like bind9 implemented a new feature (catalog zones) to manage this, see http://ftp.isc.org/isc/bind9/cur/9.11/doc/arm/Bv9ARM.ch04.html#catz-info
Write a script that would generate slave's configuration from master's one.
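A minimal sketch of the second approach, assuming a hypothetical plain-text zone list (one zone name per line) and the master at 192.168.0.1; the file names here are illustrative:

```shell
#!/bin/sh
# Stand-in for a real zone list exported from the master's configuration
printf 'example.app\nexample.org\n' > /tmp/zones.list

# Emit one slave stanza per zone
while read -r z; do
    printf 'zone "%s" {\n    type slave;\n    file "/var/cache/bind/%s.zone";\n    masters { 192.168.0.1; };\n    allow-transfer { none; };\n};\n' "$z" "$z"
done < /tmp/zones.list > /tmp/slave-zones.conf
```

The generated file can then be pulled into the slave's configuration with an include statement in named.conf, so regenerating it after adding zones on the master is a one-step operation.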
| How do I list all the domain names that my BIND9 manages as a master? |
1,612,895,013,000 |
I want to create a Debian based DNS Server to run BIND9.
There is plenty of information on package dependencies, but it is all about how to install required packages when installing package-x.y.z. However, I cannot find anything about how to find out all the packages that are not required by package-x.y.z and uninstall them.
What I want to be able to do is get the answer to this question:
What are the absolute minimum set of packages required in order to successfully run BIND9, such that I can uninstall (or not install in the first place) all packages that are not required (directly or indirectly) in order to run BIND9?
For example, it's obvious that named/bind requires network connectivity, else it could not serve inbound DNS queries, so we know all packages and drivers for IP networking will be required. We also know we must have NTP, because it serves a pretty important timekeeping function that, although maybe not directly required or used by the named/bind process, is definitely indirectly required in order to enforce DNSSEC and other PKI tasks. We obviously also need everything for local user authentication, and for running the SSH service to allow server management.
Perhaps I really should be asking - what is the absolute bare minimum Debian install that I can build, onto which I would only need to install BIND9?
|
For a stand-alone hardware server or VM:
Step 1.) install Debian "base system" and nothing else. This should automatically provide you with a minimalist system that can still boot on its own and update itself over an internet connection, both of which I consider basic requirements for any sort of modern stand-alone installation.
After this step you can run apt list --installed to review the current state. Anything marked as [installed,automatic] is there because something else depends on it. Pay attention to the packages which are marked just [installed].
If you remove a non-automatic package, the package management will suggest for uninstallation any automatic packages that are no longer depended on by anything.
Step 2.) apt install bind9 and review any dependencies that are of the type Recommends:. I actually prefer the aptitude package management UI for this kind of minimalist installation work - it can display dependencies both in normal and reverse direction (i.e. "this package is depended on/recommended by/suggested by these other packages") before actually installing anything.
Step 3.) See if you need anything else, e.g. the openssh-server package if you want SSH access, or tools for DNS zone file or log management.
Step 4.) You're done!
For building a containerized service, there is usually a container build tool (docker-compose or similar) to which you specify what you need (i.e. bind9), and which can then automatically fill in any dependencies. The resulting container should have everything needed to satisfy your stated requirements and nothing more.
| How do I determine the bare minimum Debian package requirements to run BIND9? |
1,612,895,013,000 |
The Setup
I have a containerized named service which is given their own IP with the following container file
FROM alpine:latest
RUN apk --no-cache add bind bind-tools bind-dnssec-tools bind-dnssec-root
COPY --chmod=500 --chown=root:root init.sh /usr/sbin/init
COPY --chmod=444 --chown=root:root bindetc/named.conf /etc/bind/named.conf
RUN chmod 770 /var/bind
RUN chown root:named /var/bind
COPY --chmod=440 --chown=root:named bindetc/direct.db /var/bind/direct.db
COPY --chmod=440 --chown=root:named bindetc/reverse.db /var/bind/reverse.db
VOLUME "/var/bind"
EXPOSE 53/tcp 53/udp
CMD /usr/sbin/named -f -g -u named
I have a mix of an authority server and an recursive one with the following configuration
bindetec/named.conf
acl LAN {
192.168.0.0/24;
}
options {
directory "/var/bind";
allow-recursion {
192.168.0.0/24;
127.0.0.1/32; // localhost
};
forwarders {
1.1.1.1; // Cloudflare
208.67.222.222; // OpenDNS
};
listen-on { 192.168.0.136; 127.0.0.1; };
listen-on-v6 { none; };
allow-transfer port 53 { 192.168.0.136; 0.0.0.0; };
allow-query { localhost; LAN; };
recursion yes;
pid-file "/var/run/named/named.pid";
dump-file "/var/bind/data/cache_dump.db";
statistics-file "/var/bind/data/named_stats.txt";
memstatistics-file "/var/bind/data/named_mem_stats.txt";
};
zone "." IN {
type master;
file "/var/bind/direct.db";
allow-update { none; };
};
zone "in-addr.arpa" IN {
type master;
file "/var/bind/reverse.db";
allow-update { none; };
};
With the the following bindetc/direct.db:
$TTL 3600
$ORIGIN intranet.domain.
@ IN SOA ns1.intranet.domain. postmaster.intranet.domain. (909090 9000 900 604800 1800)
@ IN NS ns1.intranet.domain.
ns1 IN A 192.168.0.136
and the following bindetc/reverse.db:
$TTL 604800
@ IN SOA ns1.intranet.domain. postmaster.intranet.domain. (909090 9000 900 604800 1800)
@ IN NS ns1.intranet.domain.
136.0.168.192 IN PTR ns1.intranet.domain.
The IP of the container is 192.168.0.136.
The problem
When trying to resolve any public DNS record, for example google.com, it gives an essentially empty response like the following, instead of asking Cloudflare or OpenDNS for the IP of that record.
; <<>> DiG 9.16.44 <<>> google.com @192.168.0.136
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 27326
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 1f5514b62f24a19b0100000065ed3501a3ae047abe73afef (good)
;; QUESTION SECTION:
;google.com. IN A
;; Query time: 48 msec
;; SERVER: 192.168.0.136#53(192.168.0.136)
;; WHEN: Sat Mar 09 22:20:17 CST 2024
;; MSG SIZE rcvd: 67
|
The dig command indicates your BIND is responding with a SERVFAIL error code, so it probably thinks your configuration has a fatal error. You really should see what log messages BIND is producing.
Your zone declaration zone "." IN specifies your BIND is type master for the DNS root zone, so effectively you're telling BIND that it already knows all the top level domains in existence. So why should it ask someone else about "google.com" when it already knows there is no such top-level domain as ".com"?
Unless you really want to declare a separate DNS universe for yourself, or are actually maintaining a root nameserver, you should never configure your nameserver as a master of zone ".".
A more normal declaration for the root zone might be:
// prime the server with knowledge of the root servers
zone "." {
type hint;
file "/usr/share/dns/root.hints";
};
where /usr/share/dns/root.hints is a standard list of root DNS servers, available at https://www.internic.net/domain/named.cache . At startup, BIND will use this list to contact one of the root name servers, to get an absolutely up-to-date version of the same list.
Because you are planning to use forwarders, you can also omit it entirely: BIND will use its built-in list of root DNS servers if you don't specify it. The zone "." of type hint is just a way to replace the built-in list if it becomes out of date.
Since your BIND should send queries for any zones it isn't authoritative for to the forwarders, you shouldn't need to care whether BIND has an up-to-date list of root nameservers or not.
If you don't want your BIND to start attempting to contact other nameservers on its own if the forwarders are not responding, you may want to add the line forward only; to the options{ ... }; segment of your configuration.
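For example, a sketch of the relevant options fragment with the forwarders kept from the original configuration:

```
options {
    forwarders {
        1.1.1.1;         // Cloudflare
        208.67.222.222;  // OpenDNS
    };
    forward only;  // never fall back to contacting other nameservers directly
};
```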
You should declare your intranet.domain forward zone as:
zone "intranet.domain" IN {
type master;
file "/var/bind/direct.db";
allow-update { none; };
};
Since you have allow-query { localhost; LAN; };, you probably should have an acl LAN { ... }; block somewhere, too.
| BIND9 as DNS server unable to fallback not defined directions to public DNS |
1,612,895,013,000 |
I want to setup my Kerberos authentication using DNS lookups to define its servers. This can be done with URI records in the DNS database. There is given an example for KDC Discovery that looks like:
_kerberos.EXAMPLE.COM URI 10 1 krb5srv:m:tcp:kdc1.example.com
Now I try to add this record to the DNS database with nsupdate:
~$ sudo nsupdate
> update add _kerberos.EXAMPLE.COM URI 10 1 krb5srv:m:tcp:kdc1.example.com
ttl 'URI': not a valid number
>
Doesn't work this way. What is the command to add the URI record? Is there another way to add the record to the DNS database?
|
When you run nsupdate to add a record, you must specify a Time-To-Live (TTL) value, which sets the maximum time the record can be cached by any resolving DNS server before it queries an authoritative DNS server for an up-to-date version of the record again. This is true for all record types. The TTL value goes in between the name and the record type.
If you wanted to specify a TTL of 7200 seconds (2 hours), for example:
$ sudo nsupdate
> update add _kerberos.EXAMPLE.COM 7200 URI 10 1 "krb5srv:m:tcp:kdc1.example.com"
> send
Please also note the double quotes around the string.
If your nearest DNS server is the authoritative one for your DNS zone, and the record is used locally only, then the TTL value might not be very important, but it still needs to be specified.
| How to add URI record to bind9 DNS zone? |
1,326,966,806,000 |
Don't you just love it when two commands each do one thing you want but neither do both?
This is what cal does. Nice formatting. Lacks week numbers though:
$ cal
January 2012
Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30 31
This is what ncal does. Weird formatting, but with week numbers:
$ ncal -w
January 2012
Su 1 8 15 22 29
Mo 2 9 16 23 30
Tu 3 10 17 24 31
We 4 11 18 25
Th 5 12 19 26
Fr 6 13 20 27
Sa 7 14 21 28
1 2 3 4 5
The kind of output I want, actually a crossbreed between cal and ncal -w:
$ cal --magik-calendar-week-option
January 2012
Su Mo Tu We Th Fr Sa
1 1 2 3 4 5 6 7
2 8 9 10 11 12 13 14
3 15 16 17 18 19 20 21
4 22 23 24 25 26 27 28
5 29 30 31
|
If neither of these commands suit your needs you can use gcal to do what you want instead.
Example
$ gcal -K
April 2014
Su Mo Tu We Th Fr Sa CW
1 2 3 4 5 13
6 7 8 9 10 11 12 14
13 14 15 16 17 18 19 15
20 21 22 23 24 25 26 16
27 28 29 30 17
Prints the week number in the last column to the right.
NOTE: gcal is available in most package managers, e.g., brew install gcal, apt install gcal, dnf install gcal, etc.
References
Gcal the ultra-powerful command line GNU calendar
Gcal user's manual
The many uses of gcal
cal command - start monday
| Displaying week's number in certain format using ncal or cal |
1,326,966,806,000 |
Given two numbers, month and year, how can I compute the first and the last day of that month ? My goal is to output these three lines:
month / year (month in textual form but that is trivial)
for each day of the month: name of the day of the week for the current day: Fri. & Sat. & Sun. [...]
day number within the month: 1 & 2 & 3 [...] & 28 & .. ?
I'm looking for a solution using GNU date or BSD date (on OS X).
|
Some time ago I had a similar issue. Here is my solution:
$ ./get_dates.sh 2012 07
The first day is 01.2012.07, Sunday
The last day is 31.2012.07, Tuesday
$ cal
July 2012
Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30 31
Script itself:
#!/bin/bash
# last day for month
lastday() {
# ja fe ma ap ma jn jl ag se oc no de
mlength=('xx' '31' '28' '31' '30' '31' '30' '31' '31' '30' '31' '30' '31')
year=$1
month=$2
if [ $month -ne 2 ] ; then
echo ${mlength[$month]}
return 0
fi
leap=0
((!(year%100))) && { ((!(year%400))) && leap=1 ; } || { ((!(year%4))) && leap=1 ; }
feblength=28
((leap)) && feblength=29
echo $feblength
}
# date to Julian date
date2jd() {
year=$1
month=$2
day=$3
lday=$(lastday $year $month) || exit $?
if ((day<1 || day> lday)) ; then
echo day out of range
exit 1
fi
echo $(( jd = day - 32075
+ 1461 * (year + 4800 - (14 - month)/12)/4
+ 367 * (month - 2 + (14 - month)/12*12)/12
- 3 * ((year + 4900 - (14 - month)/12)/100)/4
- 2400001 ))
}
jd2dow()
{
days=('Sunday' 'Monday' 'Tuesday' 'Wednesday' 'Thursday' 'Friday' 'Saturday')
jd=$1
if ((jd<1 || jd>782028)) ; then
echo julian day out of range
return 1
fi
((dow=(jd+3)%7))
echo ${days[dow]}
}
echo -n "The first day is 01.$1.$2, "
jd2dow $(date2jd $1 $2 01)
echo -n "The last day is $(lastday $1 $2).$1.$2, "
jd2dow $(date2jd $1 $2 $(lastday $1 $2))
I didn't have GNU date on the machines where I needed it, therefore I didn't solve it with date. Maybe there is a more elegant solution.
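If GNU date is available (the stock OS X date does not accept -d), the calendar math can be avoided entirely: asking for "+1 month -1 day" from the first of the month yields the last day. A sketch reproducing the script's output for July 2012:

```shell
#!/bin/sh
# Requires GNU date (the -d option with relative items)
year=2012 month=07
first="$year-$month-01"
last=$(date -d "$first +1 month -1 day" +%d)   # last day of the month, e.g. 31
printf 'The first day is 01.%s.%s, %s\n' "$year" "$month" "$(date -d "$first" +%A)"
printf 'The last day is %s.%s.%s, %s\n' "$last" "$year" "$month" "$(date -d "$year-$month-$last" +%A)"
```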
| First and last day of a month |
1,326,966,806,000 |
I use alpine to read mails, and occasionally get emails from people with vCalendar files in attachment. Is there a command line utility that reads and displays vCalendar files?
|
I just googled and found vcal, a perl script for displaying vcal files. According to the man page it should do exactly what you need.
There is also gcalcli a command line interface for google calendar which allows you to manage your google calendar. This may allow you to add events you received directly to your existing calendar.
| Command line utility to read vCalendar files |
1,326,966,806,000 |
I'm looking to use the terminal more and more, and I'd like to find a terminal calendar app that can sync with Google calendar.
I'm running ubuntu 14.04
|
Take a look at:
gcalcli,
and also:
remind , which has PHP scripts to convert iCAL entries to Remind format.
| Google calendar in the terminal |
1,326,966,806,000 |
In gnome-shell's top bar, calendar items are shown. This is great. However, I miss the ability to click on an item and then see more details, or simply to be taken to the specific event item in Evolution or another preferred calendar application.
The missing functionality is this: click on calendar item@top bar --> open default calendar's details about this calendar item.
Is it configurable, and if yes how, to make gnome-shell calendar open a specific calendar app when clicking onto a calendar item?
|
At the moment, this does not seem possible, because no API exists that would allow gnome-shell to interact with events from e.g. Evolution or gnome-calendar. See https://gitlab.gnome.org/GNOME/gnome-shell/issues/262#note_540721
For an issue that discusses this specifically, see https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/1297
| How to make gnome-shell calendar open calendar app's event details when clicking onto calendar entry? |
1,326,966,806,000 |
I need a calendar that stores everything locally as I work from places that don't have internet sometimes (I'm a virtual office). I really can't figure it out from reading what I can find about the subject. I am using Mint 12.
|
Mozilla Thunderbird plus lightning addon (official replacement for Sunbird which is no longer actively maintained)
| Which application to use for a calendar? |
1,326,966,806,000 |
I am an OpenBSD user and I am writing an awk script which automatically generates TeX course calendars for all the courses that I teach. To obtain the actual calendar, I use the Unix cal command. The problem is that the output of the cal command uses spaces as delimiters, which creates all sorts of problems when I apply my nawk script to it. I looked at the source code for cal, and it seems that nothing short of hacking the source code will force the cal command to use a delimiter other than a space. What would be the simplest way to get the calendar to look like this:
June 2012
Su, Mo, Tu, We, Th, Fr, Sa
, , , , , 1, 2
3, 4, 5, 6, 7, 8, 9
10, 11, 12, 13, 14, 15, 16
17, 18, 19, 20, 21, 22, 23
24, 25, 26, 27, 28, 29, 30
|
You could use sed for that.
$ cal|sed -e '1n;s/\(..\)\(.\)/\1,\2/g'
May 2012
Su, Mo, Tu, We, Th, Fr, Sa
, , 1, 2, 3, 4, 5
6, 7, 8, 9, 10, 11, 12
13, 14, 15, 16, 17, 18, 19
20, 21, 22, 23, 24, 25, 26
27, 28, 29, 30, 31
1n prints the first line and moves to the next, so the substitution is skipped for it. The substitution then takes the characters three at a time and prints the first two, followed by a comma, then the third.
| Cal no space delimiter |
1,326,966,806,000 |
In Debian Jessie 64-bit with Gnome 3.14.1, System Monitor shows the evolution-calendar-factory process using 1.1 GiB, and evolution-alarm-notify using 826.6 MiB of virtual memory. I don't use any calendar or alarm, so isn't this somewhat pointless? Almost 2 GiB of memory (even virtual) for what, exactly? How can I lower this to be proportional to usage (i.e. almost nothing)? Actually my only "calendar use" is the small calendar that pops up when I click on the date in the top bar, where I need to browse a few months to see which day of the week some nearby date was/will be. Considering I cannot even browse full years there (only month by month), it seems the biggest waste of memory I've ever seen.
When I click on the date in the top bar, and then choose "Open Calendar", I get a Welcome screen, where I read: "Welcome to Evolution. The next few screens will allow Evolution to connect to your email accounts, and to import files from other applications." That means Evolution isn't connected to anything yet, so what are those 2 GiB of memory for?
Another, related question: where is all this virtual memory located (gnome-shell and firefox-esr are using another 3.2 GiB), since I have 0 (zero) bytes of swap usage?
|
The virtual size or vsz of a process is not physical memory usage.
Virtual memory can be address space that is allocated but never backed by physical space. It can also be mmapped files which are already backed by disk. 64-bit machines should be able to address 256 TiB of virtual space. The virtual-size metric was more important on 32-bit machines, when processes trying to allocate more than 2 GB without PAE could hit the addressable limits.
Unlike Windows, the term "virtual memory" does not refer to the area where active memory is paged to disk. This is referred to as swap space.
If you want something closer to actual physical memory usage per process, look at the PSS metric in /proc/${pid}/smaps which accounts for shared memory.
awk '/^Pss:/ { total += $2 } END{ print total }' /proc/*/smaps
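To see what the awk expression does, here is the same command run on a canned, hypothetical smaps excerpt instead of live /proc data; it sums the second field of every line starting with "Pss:":

```shell
# Two Pss lines (40 kB + 2 kB) and one unrelated Rss line
printf 'Rss:   100 kB\nPss:    40 kB\nPss:     2 kB\n' \
    | awk '/^Pss:/ { total += $2 } END { print total }'
# prints 42
```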
| Why does evolution-calendar-factory uses so much virtual memory? |
1,326,966,806,000 |
Is there a CLI utility that allows one to sync up with their iCloud calendar (i.e., iPhone calendar on the cloud) to modify calendar events, etc.?
|
Another solution is suggested in this blog:
I used software from http://icloud.niftyside.com/ which I installed on my Uberspace. It was just a matter of unpacking it into a directory of the webserver and visiting the site. Then I entered my credentials and got all the URLs.
| iCloud calendar command line utility |
1,326,966,806,000 |
I have centos 7 with xfce.
And I cannot find where is the necessary configuration file located to make Monday as first day of the week.
|
Thanks to Artem S. Tashkinov for the links.
I have found solution for centos:
https://www.rosehosting.com/blog/how-to-set-up-system-locale-on-centos-7/
And the right command to change the locale for centos is:
localectl set-locale LANG=en_GB.utf8
And after that it is necessary to restart the OS.
| Configure first day of the week for xfce calendar in centos 7 |
1,326,966,806,000 |
I've configured 2-Step Verification with my Google accont (under https://myaccount.google.com/security). Emails with OAuth 2 seem to come through fine but I keep getting
"Failed to connect calendar “[email protected] : MyName”
Data source “MyName” does not support OAuth 2.0 authentication
and
Failed to connect address book “[email protected] : Contacts”
why is this and how can I fix that?
I'm using Evolution 3.34.1
|
The "fix" that worked for me was to give up on Evolution and I switched to using Thunderbird instead
| Evolution fails to connect to my Google calendar |
1,326,966,806,000 |
Is it possible to install a PHP extension if PHP is already installed, without having to rebuild PHP? I need to install the calendar extension, but I'd rather not build PHP myself; I installed it with apt-get and did not build it from source.
https://www.php.net/manual/en/calendar.installation.php
|
If you installed PHP with apt-get, then you also need to install any required PHP modules in the same way.
In your case you need to do this:
apt-get install php-calendar
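Afterwards you can check that the module is actually loaded (and remember to restart Apache or PHP-FPM if PHP runs as a server module, so the new extension is picked up):

```
$ php -m | grep -i calendar
calendar
```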
| Install php extension if PHP is already installed |
1,326,966,806,000 |
Hi, I'm trying to launch cal with sxhkd, and it doesn't work: the terminal window is closed right after the command is executed.
I tried to do it using the following:
# launch urxvt 20x8 with cal
super + c
urxvt -geometry 20x8 -e cal -m
I also tried to set bspwm to open the window as floating with this: bspc rule -a urxvt:cal state:floating but it doesn't have any effect
|
You need to add -hold to urxvt, in order not to destroy its window when the program executed within it exits.
urxvt -hold -geometry 20x8 -e cal -m
| Launching urxvt with calendar (cal) using sxhkd |
1,326,966,806,000 |
For Linux Mint 18.3, 32-bit, MATE desktop 1.18.0.
In BASH, typing calendar produces the following error.
rbv@rbv-F80Q ~ $ calendar
In file included from /usr/share/calendar/calendar.all:23:0,
from <stdin>:16:
/usr/share/calendar/calendar.croatian:10:0: fatal error: hr_HR/calendar.all: No such file or directory
#include <hr_HR/calendar.all>
^
compilation terminated.
Feb 15 Galileo Galilei born in Pisa, Italy, 1564
------ list of dates ------
Feb 16 Stephen Decatur burns US frigate in Tripoli, 1804
rbv@rbv-F80Q ~ $
I've located the cited file calendar.croatian and offending line:
/*
* Croatian calendar files
*
* $FreeBSD$
*/
#ifndef _calendar_croatian_
#define _calendar_croatian_
/* THIS IS THE LINE CITED IN THE ERROR */
#include <hr_HR/calendar.all>
#endif /* !_calendar_croatian_ */
But I have no idea what to do. Delete the line? Edit it? Or?
Or is the best solution to edit calendar.all and simply delete the line #include <calendar.croatian>? Although I'd like to actually fix the problem if possible rather than simply deleting things...
EDIT #1: Unable to reinstall bsdmainutils
Tried the suggestion to reinstall basmainutils but it seems not to exist on my system AND I'm unable to use apt-get to download and install it.
rbv@rbv-F80Q ~ $ sudo apt-get install --reinstall bsdmainutils
[sudo] password for rbv:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reinstallation of bsdmainutils is not possible, it cannot be downloaded.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
rbv@rbv-F80Q ~ $
With specific regard to reinstall, it does not seem to already exist on this 32-bit Linux Mint / MATE computer.
EDIT #2: Attempt to use dpkg-query to identify owner of basmainutils
In response to commentor suggestion, ran the following:
rbv@rbv-F80Q ~ $ dpkg-query -S /usr/share/calendar/calendar.all
bsdmainutils: /usr/share/calendar/calendar.all
This seemingly indicates that although I can't use about or which or apropos or man to find anything about bsdmainutils, it's evidently the owner of calendar.all.
Note also that the locate bsdmainutils command produced the following:
rbv@rbv-F80Q ~ $ locate bsdmainutils
/etc/cron.daily/bsdmainutils
/etc/default/bsdmainutils
/usr/share/doc/bsdmainutils
/usr/share/doc/bsdmainutils/README
/usr/share/doc/bsdmainutils/calendarJudaic.py.gz
/usr/share/doc/bsdmainutils/changelog.gz
/usr/share/doc/bsdmainutils/copyright
/usr/share/doc/bsdmainutils/source.data.gz
/usr/share/lintian/overrides/bsdmainutils
/var/lib/dpkg/info/bsdmainutils.conffiles
/var/lib/dpkg/info/bsdmainutils.list
/var/lib/dpkg/info/bsdmainutils.md5sums
/var/lib/dpkg/info/bsdmainutils.postinst
/var/lib/dpkg/info/bsdmainutils.prerm
rbv@rbv-F80Q ~ $
So on the one hand bsdmainutils seems to not be available to apt-get and so on, yet there are some basmainutils files present on the system.
EDIT #3: Circumvention found, see my answer to my own question, below
Although apt-get was unable to to locate and so reinstall bsdmainutils, the Synaptic package manager did list and so could reinstall it. Details below.
|
Well, I "fixed" the problem -- but I'm not happy with the way it ended up being fixed.
The solution was to use Synaptic rather than apt-get on the command line to reinstall bsdmainutils. After doing that the error with calendar no longer occurred.
But this reflects another recurring problem I'm having, à la "Synaptic won't list programs that apt-get will install...?"
In this instance the opposite of that existing post's issue occurred: Synaptic "knew about" a package that apt-get on the CLI did not. Point being that I can't figure out why apt-get often finds programs that Synaptic doesn't list. And in this instance, the opposite...
Edit #1: Problem may occur because Bleachbit deletes calendar files
IF you reinstall bsdmainutils and calendar again works properly, AND later the same problem recurs, THEN note that depending on settings, Bleachbit may be deleting files contained in /usr/share/calendar/. Use Edit -> Preferences -> Whitelist -> Add folder -> to exclude the files contained in /usr/share/calendar from deletion. NOTE that you may need to perform this operation for both Bleachbit as root and Bleachbit in user-account modes...
| calendar built-in displays error for "#include <hr_HR/calendar.all>" |
1,326,966,806,000 |
I am a new laptop user after using an iMac desktop for about seven years. Over that time, I had become fairly reliant on Apple's services syncing between my iPhone and my iMac.
Now that I'm on Fedora, I'm having to run two separate calendars and contact books, and manually update changes between the two.
Is there a way I can integrate Apple's iCloud features (namely contacts and calendar) onto my laptop? It would make day-to-day computing much easier.
|
No, there is no clear way to integrate a Linux desktop with iCloud services, as Apple makes it difficult for anyone who is not using an Apple machine to use their services. ;)
| Is it possible to integrate Apple iCloud services (e.g. contacts, calendars) with Fedora 20? |
1,326,966,806,000 |
I recently moved to a different country and want to see the national holidays in the KDE calendar that shows when you click the clock (the Digital Clock 3 in the taskbar). However, the KDE cal only shows US holidays and I cannot find a way in the config and on the web to change that.
|
Click on the gear symbol near the top right of your screenshot to open the settings window of the KDE clock widget (which is responsible for the calendar display).
Then select "Holidays" from the left column, and pick the country or countries you want.
| Show Non-US holidays in the KDE Digital Clock calendar view |
1,326,966,806,000 |
I'm trying to replace VEVENT to VTODO entries in an .ics file if it matches current date on another line (it was exported incorrectly):
BEGIN:VCALENDAR
BEGIN:VEVENT
DTSTART:20220340T140000
END:VEVENT
BEGIN:VEVENT
DTSTART:20230620T193700
END:VEVENT
BEGIN:VEVENT
DTSTART:20210210T193800
END:VEVENT
END:VCALENDAR
The second VEVENT entry has current time so it should become:
BEGIN:VTODO
DTSTART:20230620T193700
END:VTODO
There are more entries between BEGIN:VEVENT and END:VEVENT lines, I've redacted them for clarity.
I've tried this with sed, but the ranges pick the first occurrence of VEVENT in the entire file, not first occurrence after (or before) the matched pattern, so it replaces all of them.
sed -i "/BEGIN:VEVENT/,/DTSTART:$(date +%Y%m%dT%H%M)/{s/VEVENT/VTODO/}" org.ics
I was trying to adapt it to another question here, which I thought was relevant: Find a string and replace another string after the first is found
sed -n "/DTSTART:$(date +%Y%m%dT%H%M)/,${/END:VEVENT/{x//{x b}g s/VEVENT/VTODO/}}" org.ics
but it didn't work at all:
sed: -e expression #1, char 25: unexpected `,'
|
So the following should work:
sed 'H;/BEGIN:VEVENT/h;/END:VEVENT/!d;x;/DTSTART:'"$(date +%Y%m%dT%H%M)"'/s/VEVENT/VTODO/g' org.ics
Explanation:
H: Appends to the hold space which creates kind of a buffer (space), then we do pattern matching
/BEGIN:VEVENT/h and store it in the hold space, so now we run another pattern matching
/END:VEVENT/!d and if the pattern doesn't match, then delete it.
x; Exchanges hold space with pattern space so right now we have the lines we need in the pattern space
Finally, substitute the line DTSTART... if it matches the date. So s/.../.../g is executed only if there is a match.
/DTSTART:'"$(date +%Y%m%dT%H%M)"'/s/VEVENT/VTODO/g
Update
sed '1n;H;/BEGIN:VEVENT/h;/END:VEVENT/!d;x;/DTSTART:'"$(date +%Y%m%dT193700)"'/s/VEVENT/VTODO/g' org.ics | sed '$aEND:VCALENDAR'
| Double match - substitute pattern on subsequent line if prevous line matches another pattern? |
1,326,966,806,000 |
On my host I can show the adoption of the Gregorian calendar as it occurred for Great Britain and its colonies in 1752:
$ cal september 1752
September 1752
Su Mo Tu We Th Fr Sa
1 2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30
The corresponding adoption for Denmark and Norway happened much earlier:
Sunday, 18 February 1700, was followed by Monday, 1 March 1700.
I thought I could override the timezone environment variable to show this, but that doesn't work:
$ TZ=DK cal february 1700
February 1700
Su Mo Tu We Th Fr Sa
1 2 3
4 5 6 7 8 9 10
11 12 13 14 15 16 17
18 19 20 21 22 23 24
25 26 27 28 29
How do I get cal to show Denmark's adoption of the Gregorian calendar?
My cal is supplied by bsdmainutils version 11.1.2.
|
cal follows the POSIX specification for cal, which says
as though the Gregorian calendar had been adopted on September 14, 1752.
There’s no option to show the calendar with a different switch date.
ncal supports different switch dates with the -s option:
$ ncal -s DK february 1700
February 1700
Mo 5 12
Tu 6 13
We 7 14
Th 1 8 15
Fr 2 9 16
Sa 3 10 17
Su 4 11 18
| Can `cal` show other Gregorian adoptions? |
1,326,966,806,000 |
Is there such an application that updates your wallpaper according to your planner/calendar so that you can have a weekly view of your planner/calendar.
|
Yes, gcalcli with Conky.
You can follow the detailed installation tutorial in this article
| Auto-updating desktop calendar as wallpaper |
1,453,145,646,000 |
I have found multiple examples of "esac" appearing at the end of a bash case statement but I have not found any clear documentation on its use. The man page uses it, and even has an index entry for the word (https://www.gnu.org/software/bash/manual/bashref.html#index-esac), but does not define its use. Is it the required way to end a case statement, best practice, or pure technique?
|
Like fi for if and done for for, esac is the required way to end a case statement.
esac is case spelled backward, rather like fi is if spelled backward. I don't know why the token ending a for block is not rof. (Strictly, the loop body is delimited by do…done; od was reportedly avoided because the od(1) utility already existed.)
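For illustration (this example is mine, not from the original answer), here is a minimal case statement with its required esac terminator:

```shell
# fi closes if, done closes do, and esac closes case -- all required.
classify() {
  case $1 in
    [[:digit:]]*) echo number ;;
    [[:alpha:]]*) echo word ;;
    *)            echo other ;;
  esac
}
classify 42    # number
classify foo   # word
```

Omitting the esac is a syntax error, just as omitting fi or done would be.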
| What does "esac" mean at the end of a bash case statement? Is it required? |
1,453,145,646,000 |
I would like to do something like this where on Friday, the output is for both conditions that match:
#!/bin/bash
NOW=$(date +"%a")
case $NOW in
Mon)
echo "Mon";;
Tue|Wed|Thu|Fri)
echo "Tue|Wed|Thu|Fri";;
Fri|Sat|Sun)
echo "Fri|Sat|Sun";;
*) ;;
esac
As the code above is written, the only output on Friday would be:
Tue|Wed|Thu|Fri
Desired output on Friday:
Tue|Wed|Thu|Fri
Fri|Sat|Sun
I understand that normally, only the commands corresponding to the first pattern that matches the expression are executed.
Is there a way to execute commands for additional matched patterns?
EDIT: I am not looking for fall-through behavior, but that's also a nice thing to know about. Thanks steeldriver.
|
You can use the ;;& conjunction. From man bash:
Using ;;& in place of ;; causes the shell to test
the next pattern list in the statement, if any, and execute any
associated list on a successful match.
Ex. given
$ cat myscript
#!/bin/bash
NOW=$(date -d "$1" +"%a")
case $NOW in
Mon)
echo "Mon";;
Tue|Wed|Thu|Fri)
echo "Tue|Wed|Thu|Fri";;&
Fri|Sat|Sun)
echo "Fri|Sat|Sun";;
*) ;;
esac
then
$ ./myscript thursday
Tue|Wed|Thu|Fri
$ ./myscript friday
Tue|Wed|Thu|Fri
Fri|Sat|Sun
$ ./myscript saturday
Fri|Sat|Sun
For more information (including equivalents in other shells) see
Can bash case statements cascade?
| Possible to match multiple conditions in one case statement? |
1,453,145,646,000 |
I am trying to use a variable consisting of different strings separated with a | as a case statement test. For example:
string="\"foo\"|\"bar\""
read choice
case $choice in
$string)
echo "You chose $choice";;
*)
echo "Bad choice!";;
esac
I want to be able to type foo or bar and execute the first part of the case statement. However, both foo and bar take me to the second:
$ foo.sh
foo
Bad choice!
$ foo.sh
bar
Bad choice!
Using "$string" instead of $string makes no difference. Neither does using string="foo|bar".
I know I can do it this way:
case $choice in
"foo"|"bar")
echo "You chose $choice";;
*)
echo "Bad choice!";;
esac
I can think of various workarounds but I would like to know if it's possible to use a variable as a case condition in bash. Is it possible and, if so, how?
|
The bash manual states:
case word in [ [(] pattern [ | pattern ] ... ) list ;; ] ... esac
Each pattern examined is expanded using tilde expansion, parameter and variable expansion, arithmetic substitution, command substitution, and process substitution.
No «Pathname expansion»
Thus, a pattern is NOT expanded with «Pathname expansion».
Therefore, a pattern cannot contain a "|" inside it; a "|" can only join two separate patterns.
This works:
s1="foo"; s2="bar" # or even s1="*foo*"; s2="*bar*"
read choice
case $choice in
$s1|$s2 ) echo "Two val choice $choice"; ;; # not "$s1"|"$s2"
* ) echo "A Bad choice! $choice"; ;;
esac
Using « Extended Globbing »
However, word is matched with pattern using « Pathname Expansion » rules.
And « Extended Globbing » here, here and, here allows the use of alternating ("|") patterns.
This also work:
shopt -s extglob
string='@(foo|bar)'
read choice
case $choice in
$string ) printf 'String choice %-20s' "$choice"; ;;&
$s1|$s2 ) printf 'Two val choice %-20s' "$choice"; ;;
*) printf 'A Bad choice! %-20s' "$choice"; ;;
esac
echo
String content
The next test script shows that the pattern that match all lines that contain either foo or bar anywhere is '*$(foo|bar)*' or the two variables $s1=*foo* and $s2=*bar*
Testing script:
shopt -s extglob # comment out this line to test unset extglob.
shopt -p extglob
s1="*foo*"; s2="*bar*"
string="*foo*"
string="*foo*|*bar*"
string='@(*foo*|*bar)'
string='*@(foo|bar)*'
printf "%s\n" "$string"
while IFS= read -r choice; do
case $choice in
"$s1"|"$s2" ) printf 'A first choice %-20s' "$choice"; ;;&
$string ) printf 'String choice %-20s' "$choice"; ;;&
$s1|$s2 ) printf 'Two val choice %-20s' "$choice"; ;;
*) printf 'A Bad choice! %-20s' "$choice"; ;;
esac
echo
done <<-\_several_strings_
f
b
foo
bar
*foo*
*foo*|*bar*
\"foo\"
"foo"
afooline
onebarvalue
now foo with spaces
_several_strings_
| How can I use a variable as a case condition? |
1,453,145,646,000 |
In many languages it is possible to assign the result of a case/switch statement to a variable, rather than repeating the variable assignment many times within the case statement. Is it possible to do something like this in the Bash shell?
color_code=$(case "$COLOR" in
(red) 1;;
(yellow) 2;;
(green) 3;;
(blue) 4;;
esac)
(Or, as an aside, in any other shells?)
|
The variable=$(...) construct will take the standard output of whatever command is in $(...) and assign it to variable. Thus, to get variable assigned the way that you want, the values have to be sent to standard output. This is easily done with the echo command:
color_code=$(case "$COLOR" in
red) echo 1;;
yellow) echo 2;;
green) echo 3;;
blue) echo 4;;
esac)
This will work on bash as well as all other POSIX shells.
The Optional Left Parens
According to the POSIX standard, the left parens in a case statement is optional and the following works as well:
color_code=$(case "$COLOR" in
(red) echo 1;;
(yellow) echo 2;;
(green) echo 3;;
(blue) echo 4;;
esac)
As Gilles points out in the comments, not all shells accept both forms in combination with $(...): for an impressively detailed table of compatibility, see "$( )" command substitution vs. embedded ")".
| Variable assignment outside of case statement |
1,453,145,646,000 |
I want to catch if a variable is multiline in a case statement in POSIX shell (dash).
I tried this:
q='
'
case "$q" in
*$'\n'*) echo nl;;
*) echo NO nl;;
esac
It returns nl in zsh but NO nl in dash.
Thanks.
|
The dash shell does not have C-strings ($'...'). C-strings is an extension to the POSIX standard. You would have to use a literal newline. This is easier (and looks nicer) if you store the newline in a variable:
#!/bin/dash
nl='
'
for string; do
case $string in
*"$nl"*)
printf '"%s" contains newline\n' "$string"
;;
*)
printf '"%s" does not contain newline\n' "$string"
esac
done
For each command line argument given to the script, this detects whether it contains a newline or not. The variable used in the case statement ($string) does not need quoting, and the ;; after the last case label is not needed.
Testing (from an interactive zsh shell, which is where the dquote> secondary prompt comes from):
$ dash script.sh "hello world" "hello
dquote> world"
"hello world" does not contain newline
"hello
world" contains newline
| POSIX catch newline in case statement |
1,453,145,646,000 |
My question is the zsh equivalent of the question asked here: How can I use a variable as a case condition? I would like to use a variable for the condition of a case statement in zsh. For example:
input="foo"
pattern="(foo|bar)"
case $input in
$pattern)
echo "you sent foo or bar"
;;
*)
echo "foo or bar was not sent"
;;
esac
I would like to use the strings foo or bar and have the above code execute the pattern case condition.
|
With this code saved to the file first,
pattern=fo*
input=foo
case $input in
$pattern)
print T
;;
fo*)
print NIL
;;
esac
under -x we may observe that the variable appears as a quoted value while the raw expression does not:
% zsh -x first
+first:1> pattern='fo*'
+first:2> input=foo
+first:3> case foo (fo\*)
+first:3> case foo (fo*)
+first:8> print NIL
NIL
That is, the variable is being treated as a literal string. If one spends enough time in zshexpn(1) one might be aware of the glob substitution flag
${~spec}
Turn on the GLOB_SUBST option for the evaluation of spec; if the
`~' is doubled, turn it off. When this option is set, the
string resulting from the expansion will be interpreted as a
pattern anywhere that is possible,
so if we modify $pattern to use that
pattern=fo*
input=foo
case $input in
$~pattern) # !
print T
;;
fo*)
print NIL
;;
esac
we see instead
% zsh -x second
+second:1> pattern='fo*'
+second:2> input=foo
+second:3> case foo (fo*)
+second:5> print T
T
for your case the pattern must be quoted:
pattern='(foo|bar)'
input=foo
case $input in
$~pattern)
print T
;;
*)
print NIL
;;
esac
| Using a variable as a case condition in zsh |
1,453,145,646,000 |
I'm trying to revive my rusty shell scripting skills, and I've run into a problem with case statements. My goal in the program below is to evaluate whether a user-supplied string begins with a capital or lowercase letter:
# practicing case statements
echo "enter a string"
read yourstring
echo -e "your string is $yourstring\n"
case "$yourstring" in
[A-Z]* )
echo "your string begins with a Capital Letter"
;;
[a-z]* )
echo "your string begins with a lowercase letter"
;;
*)
echo "your string did not begin with an English letter"
;;
esac
myvar=nope
case $myvar in
N*)
echo "begins with CAPITAL 'N'"
;;
n*)
echo "begins with lowercase 'n'"
;;
*)
echo "hahahaha"
;;
esac
When I enter a string beginning with a lowercase letter (e.g., "mystring" with no quotes), the case statement matches my input to the first case and informs me that the string begins with a capital letter. I wrote the second case statement to see if I was making some obvious syntax or logic error (perhaps I still am), but I don't have the same problem. The second case structure correctly tells me that the string held by $myvar begins with a lowercase letter.
I have tried using quotes to enclose $yourstring in the first line of the case statement, and I've tried it without quotes. I read about the 'shopt' options and verified that 'nocasematch' was off. (For good measure, I toggled it on and tried again, but I still didn't get the correct result from my first case statement.) I've also tried running the script with sh and bash, but the output is the same. (I call the shell explicitly with "sh ./case1.sh" and "bash ./case1.sh" because I did not set the execution bit. Duplicating the file and setting the execution bit on the new file did not change the output.)
Though I don't understand all of the output from running the shell with the '-x' debug option, the output shows the shell progressing from the first "case" line to execution of the command following the first pattern. I interpret this to mean that the first pattern was a match for the input string, but I am uncertain why.
When I switch the order of the first two patterns (and corresponding commands), the case statement succeeds for lowercase letters but incorrectly reports "MYSTRING" as beginning with lowercase letters. Since anything alphabetic is detected as matching whichever pattern appears first, I think I have a logical error...but I'm not sure what.
I found a post by "pludi" on unix.com in which it is advised that "the tests for lowercase and upper case characters were [a-z] and [A-Z]. This no longer works in certain locales and/or Linux distros." (see https://www.unix.com/shell-programming-and-scripting-128929-example-switch-case-bash.html) Sure enough, replacing the character ranges with [[:upper:]] and [[:lower:]] resolved the problem.
I'm on Fedora 31, and my locale output is as follows:
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
I'd like to know whether I'm not understanding character ranges, or not understanding how pattern matching works in case statements, or if the underlying shell capabilities changed (and why?). If anyone has the patience, I would greatly appreciate an explanation; I'm also happy to read relevant documentation. Thanks!
|
A simple answer, one which no doubt others can supersede.
The character set ordering is now different depending on which locale is in use. The concept of locale was introduced to support different nationalities and their different languages. As you can see from the output of locale there are several different areas now addressed - not just collation.
In your case it's US, and for sorting and collation purposes the alphabet is either AaBbCc...Zz or A=a, B=b, C=c, etc. (I forget which, and I'm not at a computer where I can verify one over the other). Locales are very complicated, and in certain locales there can be characters that are invisible as far as sorting and collation are concerned. The same character can sort differently depending on which locale is in use.
As you've found, the correct way to identify lowercase characters is with [[:lower:]]; this will include accented characters where necessary, and even lowercase characters in different alphabets (Greek, Cyrillic, etc.).
If you want the classic ordering you can revert per application or even per command by setting LC_ALL=C. For a contrived example,
grep some_pattern | LC_ALL=C sort | nl
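A minimal sketch of the fix discussed above, using the POSIX character classes instead of ranges (the function name is mine):

```shell
# Locale-independent classification: [[:upper:]]/[[:lower:]] instead of [A-Z]/[a-z]
first_char_case() {
  case $1 in
    [[:upper:]]*) echo upper ;;
    [[:lower:]]*) echo lower ;;
    *)            echo other ;;
  esac
}
first_char_case MYSTRING   # upper
first_char_case mystring   # lower
```

Unlike [A-Z], the [[:upper:]] class gives the same answer regardless of the locale's collation order.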
| How do I differentiate between uppercase and lowercase characters in a case statement? |
1,453,145,646,000 |
I have the following code.
read -p "Enter a word: " word
case $word in
[aeiou]* | [AEIOU]*)
echo "The word begins with a vowel." ;;
[0-9]*)
echo "The word begins with a digit." ;;
*[0-9])
echo "The word ends with a digit." ;;
[aeiou]* && [AEIOU]* && *[0-9])
echo "The word begins with vowel and ends with a digit." ;;
????)
echo "You entered a four letter word." ;;
*)
echo "I don't know what you've entered," ;;
esac
When I run this:
Enter a word: apple123
case2.sh: line 10: syntax error near unexpected token `&&'
case2.sh: line 10: ` [aeiou]* && [AEIOU]* && *[0-9])'
It looks like case statement doesn't support AND operator, and also I believe the && operator in my above case statement logically incorrect.
I understand that we can use if...else to check if the input starts with a vowel and digit. But I am curious if case has any builtin function like AND operator.
|
You are correct in that the standard definition of case does not allow for a AND operator in the pattern. You're also correct that trying to say "starts with a lower-case vowel AND starts with an upper-case vowel" would not match anything. Note also that you have your patterns & explanations reversed for the begins/ends with a digit tests -- using a pattern of [0-9]* would match words that begin with a digit, not end with a digit.
One approach to this would be to combine your tests into the same pattern, most-restrictive first:
case $word in
([AaEeIiOoUu]??[0-9]) echo it is four characters long and begins with a vowel and ends with a digit;;
([AaEeIiOoUu]*[0-9]) echo it is not four characters long begins with a vowel and ends with a digit;;
# ...
esac
Another (lengthy!) approach would be to nest your case statements, building up appropriate responses each time. Does it begin with a vowel, yes or no? Now, does it end in a digit, yes or no? This would get unwieldy quickly, and annoying to maintain.
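A hedged sketch of that nested approach (the names are mine), mainly to show how the branch count doubles with each additional test:

```shell
# Nested cases: two tests already require four leaf branches
describe() {
  case $1 in
    [AaEeIiOoUu]*)
      case $1 in
        *[0-9]) echo "vowel start, digit end" ;;
        *)      echo "vowel start" ;;
      esac ;;
    *)
      case $1 in
        *[0-9]) echo "digit end" ;;
        *)      echo "neither" ;;
      esac ;;
  esac
}
describe arm9     # vowel start, digit end
describe jeff42   # digit end
```

Adding a third test (four-character words, say) would double this again to eight branches, which is why the approaches below scale better.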
Another approach would be to use a sequence of case statements that builds up a string (or array) of applicable statements; you could even add * catch-all patterns to each if you wanted to provide "negative" feedback ("word does not begin with a vowel", etc).
result=""
case $word in
[AaEeIiOoUu]*)
result="The word begins with a vowel." ;;
esac
case $word in
[0-9]*)
result="${result} The word begins with a digit." ;;
esac
case $word in
*[0-9])
result="${result} The word ends with a digit." ;;
esac
case $word in
????)
result="${result} You entered four characters." ;;
esac
printf '%s\n' "$result"
For examples:
$ ./go.sh
Enter a word: aieee
The word begins with a vowel.
$ ./go.sh
Enter a word: jeff42
The word ends with a digit.
$ ./go.sh
Enter a word: aiee
The word begins with a vowel. You entered four characters.
$ ./go.sh
Enter a word: 9arm
The word begins with a digit. You entered four characters.
$ ./go.sh
Enter a word: arm9
The word begins with a vowel. The word ends with a digit. You entered four characters.
Alternatively, bash extended the syntax for the case statement to allow for multiple patterns to be selected, if you end the pattern(s) with ;;&:
shopt -s nocasematch
case $word in
[aeiou]*)
echo "The word begins with a vowel." ;;&
[0-9]*)
echo "The word begins with a digit." ;;&
*[0-9])
echo "The word ends with a digit." ;;&
????)
echo "You entered four characters." ;;
esac
Note that I removed the * catch-all pattern, since that would match anything & everything, when falling through the patterns this way. Bash also has a shell option called nocasematch, which I set above, that enables case-insensitive matching of the patterns. That helps reduce redundancy -- I removed the | [AEIOU]* part of the pattern.
For examples:
$ ./go.sh
Enter a word: aieee
The word begins with a vowel.
$ ./go.sh
Enter a word: jeff42
The word ends with a digit.
$ ./go.sh
Enter a word: aiee
The word begins with a vowel.
You entered four characters.
$ ./go.sh
Enter a word: 9arm
The word begins with a digit.
You entered four characters.
$ ./go.sh
Enter a word: arm9
The word begins with a vowel.
The word ends with a digit.
You entered four characters.
| How to specify AND / OR operators (conditions) for case statement? |
1,453,145,646,000 |
How do I add a condition in a case statement whereby, if it does not detect the required conditions, it will execute the command?
My code:
case $price in
[0-9] | "." | "$") echo "Numbers, . , $ Only"
;;
esac
This command will execute if it detects numbers, "." or "$". How do I change it so that the command executes when it does not detect those? Or are there other, better commands to do this?
|
Add a default case:
case $price in
[0-9] | "." | "$") true
;;
*)
do-something
;;
esac
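If you specifically want the "does not match" branch first, bash's extended globbing offers a negation pattern; a sketch (this assumes bash with extglob enabled, and like the original patterns it matches single characters only):

```shell
# Requires bash: extglob's !(...) matches anything NOT matching the listed patterns
shopt -s extglob
check_price() {
  case $1 in
    !([0-9]|.|\$)) echo invalid ;;
    *)             echo ok ;;
  esac
}
check_price abc   # invalid
check_price 7     # ok
```

The default-case approach above remains the portable choice, since extglob is a bash extension.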
| How to negate a case pattern |
1,453,145,646,000 |
Suppose one has the following case:
#!/bin/sh
case $1 in
e|ex|exa|exam|examp|exampl|example) echo "OK"
;;
t|te|tes|test) echo "Also OK"
;;
*) echo "Error!"
;;
esac
Is there a more elegant and at the same time POSIX-compliant solution (i.e., no bash, zsh, etc.) to a situation like this?
P.S. No need for exampleeee or Exam to work.
|
What you can do is turn the comparison around:
case "example" in
"$1"*) echo OK ;;
*) echo Error ;;
esac
With multiple words, you can stick with your original idea
case "$1" in
e|ex|exa|exam|examp|exampl|example) : ;;
t|te|tes|test) : ;;
f|fo|foo) : ;;
*) echo error ;;
esac
or use a loop and a "boolean" variable
match=""
for word in example test foo; do
case "$word" in
"$1"*) match=$word; break ;;
esac
done
if [ -n "$match" ]; then
echo "$1 matches $match"
else
echo Error
fi
You can decide which is better. I think the first one is elegant.
| How to match a specific word or its parts in a case statement? |
1,453,145,646,000 |
I'm building a function that will calculate the gauge of wire required given amperage, distance(in feet), and allowable voltage drop.
I can calculate the "circular mils" given those values and with that get the AWG requirement. I started building a large if elif statement to compare the circular mils to it's respected gauge but I believe case is the right tool for this.
I haven't yet found any examples of case being used to compare numbers though so I'm wondering if it's even possible to do something like below:
what.gauge () {
let cmils=11*2*$1*$2/$3
let amils=17*2*$1*$2/$3
case $cmils in
320-403)
cawg="25 AWG"
;;
404-509)
cawg="24 AWG"
;;
510-641)
cawg="23 AWG"
;;
etc...
}
|
case $cmils in
3[2-9][0-9]|40[0-3])
cawg="25 AWG"
;;
40[4-9]|4[1-9][0-9]|50[0-9])
cawg="24 AWG"
;;
51[0-9]|6[0-3][0-9]|64[01])
cawg="23 AWG"
;;
esac
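If the glob ranges get awkward, a plain if/elif chain with arithmetic tests is an alternative that case patterns can't express directly; a sketch using the same thresholds (the function name is mine):

```shell
# POSIX arithmetic tests; thresholds copied from the glob patterns above
gauge_for() {
  cmils=$1
  if   [ "$cmils" -ge 320 ] && [ "$cmils" -le 403 ]; then echo "25 AWG"
  elif [ "$cmils" -ge 404 ] && [ "$cmils" -le 509 ]; then echo "24 AWG"
  elif [ "$cmils" -ge 510 ] && [ "$cmils" -le 641 ]; then echo "23 AWG"
  else echo "out of table"
  fi
}
gauge_for 400   # 25 AWG
gauge_for 600   # 23 AWG
```

The trade-off: the glob version keeps the case structure, while the arithmetic version makes the range boundaries directly readable.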
| Can I use comparison operators in case? |
1,453,145,646,000 |
I want to check whether an argument to a shell script is a whole number
(i.e., a non-negative integer: 0, 1, 2, 3, …, 17, …, 42, …, etc,
but not 3.1416 or −5) expressed in decimal (so nothing like 0x11 or 0x2A).
How can I write a case statement using regex as condition (to match numbers)? I tried a few different ways I came up with (e.g., [0-9]+ or ^[0-9][0-9]*$); none of them works. Like in the following example, valid numbers are falling through the numeric regex that's intended to catch them and are matching the * wildcard.
i=1
let arg_n=$#+1
while (( $i < $arg_n )); do
case ${!i} in
[0-9]+)
n=${!i}
;;
*)
echo 'Invalid argument!'
;;
esac
let i=$i+1
done
Output:
$ ./cmd.sh 64
Invalid argument!
|
case does not use regexes, it uses patterns
For "1 or more digits", do this:
shopt -s extglob
...
case ${!i} in
+([[:digit:]]) )
n=${!i}
;;
...
If you want to use regular expressions, use the =~ operator within [[...]]
if [[ ${!i} =~ ^[[:digit:]]+$ ]]; then
n=${!i}
else
echo "Invalid"
fi
| Matching numbers with regex in case statement |
1,453,145,646,000 |
How to use bash's "case" statement for "very specific" string patterns (multiple words, including spaces) ?
The problem is: I receive a multi-word String from a function which is pretty specific, including a version number. The version number now changes from time to time. Instead of adding more and more specific string patterns to my case statement, I would like to use a joker (or whatever the word for this is, like "Ubuntu 16.04.? LTS" or "Ubuntu 16.04.* LTS" . However, I didn't find any solution to this yet.
See this shell script I used so far:
.
.
.
case "${OS}" in
"SUSE Linux Enterprise Server 11 SP4")
echo "SLES11 detected."
;;
"Ubuntu 16.04.3 LTS" | "Ubuntu 16.04.4 LTS" )
echo "UBUNTU16 detected."
;;
"CentOS Linux 7 (Core)")
echo "CENTOS7 detected."
;;
*)
echo "Unknown OS detected. Quitting for safety reasons."
exit -1
;;
.
.
.
|
You can use patterns in case statements, however you can't quote them so you have to escape whitespace.
case ${OS} in
"SUSE Linux Enterprise Server 11 SP4")
echo "SLES11 detected."
;;
Ubuntu\ 16.04.[3-4]\ LTS)
echo "UBUNTU16 detected."
;;
"CentOS Linux 7 (Core)")
echo "CENTOS7 detected."
;;
*)
echo "Unknown OS detected. Quitting for safety reasons."
exit -1
;;
esac
| how to use "joker" or wildcard in string patterns (spaces separated words) in a bash case statement? |
1,453,145,646,000 |
Can I write a pattern in sed that matches patterns like Aa, Bb, Cc, etc. (i.e., Given an uppercase letter, it should match the corresponding lowercase letter) without enumerating all possibilities?
|
With perl, you can do:
$ echo 'fooÉébAar' | perl -Mopen=locale -pe 's/([[:upper:]])(??{lc$^N})/<$&>/g'
foo<Éé>b<Aa>r
That uses the (??{code}) special perl operator, where you can dynamically specify the regexp to match on. Here lc$^N is the lowercase version of $^N, the last capture group.
With GNU sed, you could do:
$ echo 'fooÉébAar' | sed -Ee 's/./&\L&/g;s/([[:upper:]](.)\2.)/<<\1>>/g;s/(.)./\1/g'
foo<Éé>b<Aa>r
The idea is that we first append each character in the input with their lower case version (X becomes Xx, x becomes xx), so if we see a Xxx after that (([[:upper:]](.)\2: X followed a repeated character), that means we've got an uppercase character followed by its lower case version.
Note that those would not work for characters in decomposed form. For instance for É when expressed as E followed by a combining acute accent. To work around that you could use perl's \X grapheme cluster regexp operator instead:
$ printf 'E\u0301\u0302\u00e9\u0302 \u00c9e\u301 foo Ee\u301\n' |
perl -Mopen=locale -MUnicode::Normalize -pe '
s/((?=[[:upper:]])\X)(?{$c1 = $^N})(\X)(??{
NFD(lc$c1) eq NFD($^N) ? qr{} : qr{(?!)}})/<$&>/g'
<É̂é̂> <Éé> foo Eé
Above using canonical normalisation forms (NFD) so that grapheme clusters are always represented in the same way at the character level.
It would still fail to match on things like Fffi where that ffi (U+FB03) is a single (typographical ligature) character but that's probably just as well anyway.
| Matching Uppercase/lowercase pairs with sed |
1,453,145,646,000 |
If you write a Bash case statement, can you get the current match without explicitly assigning it to a variable ?
Consider
case $(some subshell command sequence) in
one) stuff ;;
*) stuff "$case_match";;
esac
I know that I can do the below but wanted to be more succinct.
case_match=$(some subshell command sequence)
case "$case_match" in
...
Is there a special variable representing $case_match ?
Following comment from @Hauke Laging another contrived example. You can do this:
today=$(date '+%A')
case "$today" in
*) echo "It's $today" ;;
esac
But it would be nice to do this - the question asked if this was possible (but it isn't):
case $(date '+%A') in
*) echo "It's $case_match" ;;
esac
(where case_match is the golden variable that eludes us)
|
You can with a bit of a hack, but you need to be sure you have an unset variable available. For example:
unset foo # Make sure foo is unset, or at least set to the empty string
case ${foo:-$(date +%-m)} in   # %-m (GNU date) suppresses zero-padding, so January yields "1" and matches the 1) pattern
1) echo "Jan" ;;
2) echo "Feb" ;;
# ...
11) echo "Nov" ;;
12) echo "Dec" ;;
esac
Here, foo is being used as a dummy variable solely to allow the default parameter expansion to produce the output of the command substition.
| Is there a special variable containing a case statement match |
1,453,145,646,000 |
summary: I'd like to use a bash case statement (in other code) to classify inputs as to whether they are
a positive integer
a negative integer
zero
an empty string
a non-integer string
Executable code follows, which is correctly classifying the following inputs:
''
word
a\nmultiline\nstring
2.1
-3
but is classifying both the following as ... negative integers :-(
0
42
details:
Save the following to a file (e.g. /tmp/integer_case_statement.sh), chmod it, and run it:
#!/usr/bin/env bash
### This should be a simple `bash` `case` statement to classify inputs as
### {positive|negative|zero|non-} integers.
### Trying extglob, since my previous integer-match patterns failed.
### Gotta turn that on before *function definition* per https://stackoverflow.com/a/34913651/915044
shopt -s extglob
declare cmd=''
function identify() {
local -r it=${1} # no quotes in case it's multiline
# shopt -s extglob # can't do that here
case ${it} in
'')
# empty string, no need for `if [[ -z ...`
>&2 echo 'ERROR: null arg'
;;
?(-|+)+([[:digit:]]))
# it's an integer, so just say so and fallthrough
>&2 echo 'DEBUG: int(it), fallthrough'
;&
-+([[:digit:]]))
# it's negative: just return it
>&2 echo 'DEBUG: int(it) && (it < 0), returning it'
echo "${it}"
;;
0)
# it's zero: that's OK
>&2 echo 'DEBUG: int(it) && (it == 0), returning it'
echo '0'
;;
++([[:digit:]]))
# it's positive: just return it
>&2 echo 'DEBUG: int(it) && (it > 0), returning it'
echo "${it}"
;;
*)
# not an integer, just return it
>&2 echo 'DEBUG: !int(it)'
echo "${it}"
;;
esac
} # end function identify
echo -e "'bash --version'==${BASH_VERSION}\n"
echo "identify '':"
identify ''
echo
# > ERROR: null arg
echo 'identify word:'
identify word
echo
# > DEBUG: !int(it)
# > word
echo 'identify a
multiline
string:'
identify 'a
multiline
string'
echo
# > DEBUG: !int(it)
# > a
# > multiline
# > string
echo 'identify 2.1:'
identify 2.1
echo
# > DEBUG: !int(it)
# > 2.1
echo 'identify -3:'
identify -3
echo
# > DEBUG: int(it), fallthrough
# > DEBUG: int(it) && (it < 0), returning it
# > -3
echo 'identify 0:'
identify 0
echo
# > DEBUG: int(it), fallthrough
# > DEBUG: int(it) && (it < 0), returning it
# > 0
echo 'identify 42:'
identify 42
echo
# > DEBUG: int(it), fallthrough
# > DEBUG: int(it) && (it < 0), returning it
# > 42
exit 0
The current output is inlined in the file, but for ease of reading, here's my current output separately:
'bash --version'==4.3.30(1)-release
identify '':
ERROR: null arg
identify word:
DEBUG: !int(it)
word
identify a
multiline
string:
DEBUG: !int(it)
a
multiline
string
identify 2.1:
DEBUG: !int(it)
2.1
identify -3:
DEBUG: int(it), fallthrough
DEBUG: int(it) && (it < 0), returning it
-3
identify 0:
DEBUG: int(it), fallthrough
DEBUG: int(it) && (it < 0), returning it
0
identify 42:
DEBUG: int(it), fallthrough
DEBUG: int(it) && (it < 0), returning it
42
The latter 2 inputs are my problem: why is the case statement identifying
0 as a negative integer (rather than as 0)
42 as a negative integer (rather than as positive)
? Your assistance is appreciated.
|
summary: Thanks to
frostschutz for s/;&/;;&/
Freddy for the correct positive-integer pattern (guess I just got nightblind)
I also added an additional clause to detect signed zeros, and a few more testcases.
details:
Save this improved code to a file (e.g. /tmp/integer_case_statement.sh), chmod it, and run it:
#!/usr/bin/env bash
### Simple `bash` `case` statement to classify inputs as {positive|negative|zero|non-} integers.
### Trying extglob, since my previous integer-match patterns failed.
### Gotta turn that on before *function definition* per https://stackoverflow.com/a/34913651/915044
shopt -s extglob
declare input=''
### For `case` *patterns* (NOT regexps), see
### https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html
function identify() {
local -r it=${1} # no quotes in case it's multiline
# shopt -s extglob # can't do that here
case ${it} in
'')
# empty string, no need for `if [[ -z ...`
>&2 echo 'ERROR: null arg'
;;
[+-]0)
>&2 echo 'ERROR: zero should not be signed'
;;
?(-|+)+([[:digit:]]))
# it's an integer, so just say so and fallthrough
>&2 echo 'DEBUG: int(it), fallthrough'
# ;& # this only runs the next clause, thanks https://unix.stackexchange.com/users/30851/frostschutz
;;& # works
-+([[:digit:]]))
>&2 echo 'DEBUG: it < 0'
;;
0)
>&2 echo 'DEBUG: it == 0'
echo '0'
;;
?(+)+([[:digit:]])) # thanks https://unix.stackexchange.com/users/332764/freddy
>&2 echo 'DEBUG: it > 0'
;;
*)
>&2 echo 'DEBUG: !int(it)'
;;
esac
} # end function identify
echo -e "'bash --version'==${BASH_VERSION}\n"
for input in \
'' \
'@#$%^&!' \
'word' \
'a
multiline
string' \
'2.1' \
'-3' \
'+3' \
'+0' \
'0' \
'-0' \
'42' \
; do
echo "identify '${input}'"
identify "${input}"
ret_val="${?}"
if [[ "${ret_val}" -ne 0 ]] ; then
>&2 echo "ERROR: retval='${ret_val}', exiting ..."
exit 3
fi
echo # newline
done
exit 0
On this Debian workstation, the above currently outputs:
'bash --version'==4.3.30(1)-release
identify ''
ERROR: null arg
identify '@#$%^&!'
DEBUG: !int(it)
identify 'word'
DEBUG: !int(it)
identify 'a
multiline
string'
DEBUG: !int(it)
identify '2.1'
DEBUG: !int(it)
identify '-3'
DEBUG: int(it), fallthrough
DEBUG: it < 0
identify '+3'
DEBUG: int(it), fallthrough
DEBUG: it > 0
identify '+0'
ERROR: zero should not be signed
identify '0'
DEBUG: int(it), fallthrough
DEBUG: it == 0
0
identify '-0'
ERROR: zero should not be signed
identify '42'
DEBUG: int(it), fallthrough
DEBUG: it > 0
Your assistance is appreciated!
| bash `case` statement to classify input as non- and integers |
1,453,145,646,000 |
I want the user to type in -s followed by a number (e.g. -s 3). However, I can't seem to be passing another variable next to the existing -s. This is my code:
echo choose
read choice
case $choice in
-a) echo you chose a
;;
-s $num) echo you chose the number $num #this is the -s number (e.g. -s 3) choice.
;;
esac
exit 0
|
There is almost certainly a better way to handle what you're doing. For starters you should avoid prompting the user for any input and instead make them provide arguments on the command line while running the program, but modifying your code to work:
read -rp 'choose: ' choice
case $choice in
-a) echo 'you chose a';;
-s\ [1-4]) echo "you chose the number ${choice#"-s "}";;
esac
Your num variable doesn't appear to be set so it will expand to nothing making your case pattern simply -s ) and -s 4 wont match -s ) because...well they aren't the same. So we need to modify that to expect a number after it (-s\ [1-4])). Then we use parameter expansion to remove the -s.
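The ${choice#pattern} expansion used above removes the shortest matching prefix; seen in isolation:

```shell
# Strip a literal "-s " prefix from the stored reply.
choice='-s 3'
num=${choice#"-s "}
echo "$num"    # prints: 3
```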
The way I would handle it would be to use getopts similar to:
#!/bin/bash
while getopts as: opt; do
case $opt in
a) printf '%s\n' 'You have chosen: a';;
s) n=$OPTARG; printf '%s: %d\n' 'You have chosen s with an argument of' "$n";;
esac
done
With this you would specify the arguments on the command line when running the script, such as:
./script.sh -s 4
| How to have a case parameter in shell scripting that is followed by a different parameter? |
1,453,145,646,000 |
I have two digit month value (01 to 12). I need to get the three letter month abbreviation (like JAN, FEB, MAR etc.) I am able to get it in mixed case using the following command:
date -d "20170711" | date +"%b"
The output is "Jul" I want it to be "JUL". Is there a standard date option to get it?
|
Since you're dealing with a fairly static piece of information (barring more intercalary events), just use built-in shell commands:
function capdate() {
case "$1" in
(01) printf "JAN";;
(02) printf "FEB";;
(03) printf "MAR";;
(04) printf "APR";;
(05) printf "MAY";;
(06) printf "JUN";;
(07) printf "JUL";;
(08) printf "AUG";;
(09) printf "SEP";;
(10) printf "OCT";;
(11) printf "NOV";;
(12) printf "DEC";;
(*) printf "invalid"; return 1;;
esac
}
Sample run:
$ m=$(capdate 01); echo $?, $m
0, JAN
$ m=$(capdate Caesar); echo $?, $m
1, invalid
Adjust the text if your locale has different date +%b names.
| How to get Month in all upper case |
1,453,145,646,000 |
case "$1" in
all)
echo "$1"
;;
[a-z][a-z][a-z][a-z][a-z][a-z])
echo "$1"
;;
*)
printf 'Invalid: %s\n' "$3"
exit 1
;;
esac
With this the only input accepted is all, and 6 characters. It won't accept 4 characters or more than 6.
What I want to do here is to only allow characters, not digits or symbols, but of unlimited length.
What is the correct syntax? Thanks
|
You can do this with the standard pattern match by looking for any of the non-allowed characters, and rejecting the input if you find any. Or you can use extended globs (extglob) or regexes and explicitly make sure the whole string consists of characters that are allowed.
#/bin/bash
shopt -s extglob globasciiranges
case "$1" in *([a-zA-Z])) echo "case ok" ;; esac
[[ "$1" = *([a-zA-Z]) ]] && echo " [[ ok"
[[ "$1" =~ ^[a-zA-Z]*$ ]] && echo "rege ok"
globasciiranges prevents [a-z] from matching accented letters, but the regex match doesn't obey it. With the regex, you'd need to set LC_COLLATE=C to prevent matching them.
All of those allow the empty string. To prevent that, change the asterisks to plusses (* to +).
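The first approach the answer mentions (reject on finding any non-allowed character) needs no extglob at all and works in plain POSIX sh; a sketch:

```shell
# Return 0 only for non-empty, all-ASCII-letter input.
is_alpha() {
    case $1 in
        '' | *[!a-zA-Z]*) return 1 ;;   # empty, or contains a non-letter
        *)                return 0 ;;
    esac
}
```

As with the answer's patterns, the empty-string branch can be dropped if empty input should be accepted.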
| Case statement allow only alphabetic characters? |
1,453,145,646,000 |
I wrote this code to echo a greeting depending on what time of day it is, but when I run it it doesn't show any errors but doesn't echo anything to the command line either. To try to troubleshoot I commented out everything and echoed just the time variable, which worked fine. So, what am I doing wrong?!
#!/bin/bash
time=$(date +%H)
case $time in
#check if its morning
[0-11] ) echo "greeting 1";;
#check if its afternoon
[12-17] ) echo "greeting 2";;
#check if its evening
[18-23] ) echo "greeting 3"
esac
|
[...] introduces a character class, not an integer interval. So, [18-23] is identical to [138-2], which is the same as [13], as there's nothing between 8 and 2.
You can use the following as a fix:
case $time in
#check if its morning
0?|1[01] ) echo "greeting 1";;
#check if its afternoon
1[2-7] ) echo "greeting 2";;
#check if its evening
1[89]|2? ) echo "greeting 3"
esac
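A quick way to convince yourself of the single-character behaviour (a sketch, assuming a POSIX shell): a bracket expression matches exactly one character, so a two-digit hour such as 17 can never match it.

```shell
# [18-23] is a character class, not a numeric range: it matches one
# character, so the two-character string "17" never matches.
match() { case $1 in [18-23]) echo yes ;; *) echo no ;; esac; }
match 1    # yes ("1" is a literal member of the class)
match 17   # no (two characters, the pattern matches exactly one)
```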
| bash script with case statement not returning an output |
1,453,145,646,000 |
I am writing a script which must accept a word from a limited predefined list as an argument. I also would like it to have completion. I'm storing list in a variable to avoid duplication between complete and case. So I've written this, completion does work, but case statement doesn't. Why? One can't just make case statement parameters out of variables?
declare -ar choices=('foo' 'bar' 'baz')
function do_work {
case "$1" in
"${choices[*]}")
echo 'yes!'
;;
*)
echo 'no!'
esac
}
complete -W "${choices[*]}" do_work
|
The list in complete -W list is interpreted as a $IFS delimited list, and it's the $IFS at the time of completion that is taken into account.
So if you have:
complete -W 'a b,c d' do_work
do_work completion will offer a, b,c and d when $IFS contains space and a b and c d when $IFS contains , and not space.
So it's mostly broken by design. It also doesn't allow offering arbitrary strings as completions.
With these limitations, the best you can do is assume $IFS will never be modified (so will always contain space, tab and newline, characters that as a result can't be used in the completion words), and do:
choices='foo bar baz'
do_work() {
case "$1" in
(*' '*) echo 'no!';;
(*)
case " $choices " in
(*" $1 "*) echo 'yes!';;
(*) echo 'no!';;
esac;;
esac
}
complete -W "$choices" do_work
You could add a readonly IFS to make sure $IFS is never modified, but that's likely to break things especially considering that bash doesn't let you declare a local variable that has been declared readonly in a parent scope, so even functions that do local IFS=, would break.
As for the more generic question of how to check whether a string is found amongst the elements of an array, bash (contrary to zsh) doesn't have an operator for that, but you could easily implement it with a loop:
amongst() {
local string needle="$1"
shift
for string do
[[ $needle = "$string" ]] && return
done
false
}
And then:
do_work() {
if amongst "$1" "${choices[@]}"; then
echo 'yes!'
else
echo 'no!'
fi
}
The more appropriate structure to look-up strings is to use hash tables or associative arrays:
typeset -A choices=( [foo]=1 [bar]=1 [baz]=1 )
do_work() {
if [[ -n ${choices[$1]} ]]; then
echo 'yes!'
else
echo 'no!'
fi
}
complete -W "${!choices[*]}" do_work
Here with "${!choices[*]}" joining the keys of the associative array with whichever is the first character of $IFS at that point (or with no separator if it's set but empty).
Note that bash associative arrays can't have an empty key, so the empty string can't be one of the choices, but anyway complete -W wouldn't support that either, and completing an empty string is not very useful anyway except maybe for the completion listing showing the user it's one of the accepted values.
| Bash, use case statement to check if the word is in the array |
1,453,145,646,000 |
I'm having a hard time getting regex matches to work in a bash case statement.
Example code:
#!/bin/bash
str=' word1 word2'
echo "With grep:"
echo "$str" |grep '^\s*\<word1\>'
echo "With case:"
case "$str" in
'^\s*\<word1\>') echo "$str" ;;
esac
The example works with grep, but not with case... I'm confused, because some simpler regexes work with case. Does case use a different syntax for regex? Am I just not escaping things properly?
|
That is because case doesn't use regex's but bash Pathname Expansion. You can learn more from bash man page or from Bash Reference Manual.
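For comparison, the OP's regex '^\s*\<word1\>' can be approximated with plain patterns: globs have no "zero or more" repetition for a bracket class, so one way (a sketch) is to trim leading whitespace first and then match word1 as the leading word:

```shell
# Approximate grep's '^\s*\<word1\>' with POSIX sh patterns.
is_word1_lead() {
    s=$1
    while :; do                 # strip leading whitespace, one char at a time
        case $s in
            [[:space:]]*) s=${s#?} ;;
            *) break ;;
        esac
    done
    case $s in
        word1 | word1[[:space:]]*) return 0 ;;  # word1 alone, or followed by space
        *)                         return 1 ;;
    esac
}
is_word1_lead ' word1 word2' && echo matched
```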
| Regex in case statement [duplicate] |
1,453,145,646,000 |
case "$1","$name" in
-py | --python | --python3,*) if [[ "$name" =~ \..+$ ]]; then
That doesn't catch stuff, which actually it should,
like…
USERNAME@HOSTNAME:~$ myscript --python surfer
Funny thing:
Simplify the multi pattern conditional to…
--python,*) if [[ "$name" =~ \..+$ ]]; then
and it works!
With the bitterly-repetitive outlook to have to place that section 3 times: 1st for -py, then for --python, and finally for --python3 for catching all patterns.
But the other thing is - the other way around:
case "$1" in
-py | --python | --python3) if [[ ! "$name" =~ \.py$ ]]; then
That's fine, that works!
So, that disproves my assumption that the multi-pattern syntax might be incorrect,
that it might need the spaces removed, or some kind of bracket around all 3 patterns so they are interpreted as a group, where the first OR the second OR the third pattern is supposed to be caught.
And with all this I really have the impression that you can't have both in
GNU bash, version 4.3: multiple patterns AND, alongside that conditional, a second conditional like "$name". Could that be? Or have I made a mistake in trying to achieve that?
|
The object of your case clause needs to match properly:
case "$1","$name" in
-py | --python | --python3,*) if [[ "$name" =~ \..+$ ]]; then
should be
case "$1","$name" in
-py,* | --python,* | --python3,*) if [[ "$name" =~ \..+$ ]]; then
But this could probably be more clearly expressed with less repetition as:
if [[ "$name" =~ \..+$ ]]; then
case "$1" in
-py | --python | --python3)
do_stuff
;;
esac
fi
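The difference is easy to check in isolation with a hypothetical check helper:

```shell
# Each alternative in a case pattern must match the whole subject, so when
# the subject is "$1,$name" every alternative needs its own ",*" tail.
check() {
    case "$1,$2" in
        -py,* | --python,* | --python3,*) echo matched ;;
        *)                                echo 'not matched' ;;
    esac
}
check --python surfer   # matched
check -py script.py     # matched
check --perl script     # not matched
```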
| How is the correct syntax for a more complex case statement? |
1,453,145,646,000 |
I often execute raw versions of remote Bash scripts in GitHub with this pattern:
wget -O - https://raw.githubusercontent.com/<username>/<project>/<branch>/<path>/<file> | bash
Generally I can do it without problems but since I have added the following code to a certain script I get an endless loop of echo (it often occurs even if I just copy-paste it from GitHub to the terminal and execute directly):
while true; do
read -p "Example question: Do you wish to edit the PHP file now?" yn
case $yn in
[Yy]* ) nano PROJECT/PHP_FILE; break;;
[Nn]* ) break;;
* ) echo "Please answer yes or no.";;
esac
done
I can at least partly solve the problem with something like:
cd ACTION_DIRECTORY
wget https://raw.githubusercontent.com/<username>/<project>/<branch>/<path>/<file> &&
source FILENAME &&
rm FILENAME
Which indicates that the | bash piping at least worsens the problem because the problem always happens with it.
Why would echo "Please answer yes or no." happen "endlessly"? (which I stop it with CTRLC+C)
Do you find any problem either in the single lined execution command and/or in the while true; do case esac done?
|
Why would echo … happen "endlessly"?
With wget … | bash you pipe wget to bash. The stdin of bash comes from wget. read reads from the same stdin.
In general, a read that reads from the same place the script comes from can consume parts of the script. In your case bash needs to read the whole while … done fragment before executing it (because e.g. it could be while … done <whatever). By the time read runs, there is nothing more to read. Even the first read fails.
The script in question doesn't check if read fails.
Additionally, read -p displays its prompt only when input comes from a terminal, so with stdin being a pipe you never see the prompt.
If in the script there was </dev/tty read … or while … done </dev/tty then read would not read from the stdin of bash, it would read from the console. This would work but the method requires a change in the script itself. In general the change would break things if you run the script differently and need read to read from the stdin of the entire script that would happen to be different than /dev/tty.
But then nano (if you answered y) may complain about its stdin. If your fix was </dev/tty read … then nano would yield Too many errors from stdin because its stdin would be the (already broken) pipe from wget. If your fix was while … done </dev/tty then it would work.
With a bigger script there may be more places to "fix" this way. A proper general solution is not to hijack the stdin at all.
A general solution is to run one of these:
bash <(wget -O - …)
. <(wget -O - …)
depending on if you want the script to run in a separate bash (like in your wget … | bash … try) or in the current shell (like in your source FILENAME solution).
Note you can now redirect the stdin independently, e.g.:
yes | bash <(wget -O - …)
The <(some_command …) syntax is called process substitution. It works in Bash and in few other shells but not in pure sh (see bashisms). Some interesting observations here: Process substitution and pipe (the "Preserving STDIN" section of this answer is exactly your question in a nutshell).
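The swallowed-stdin effect is easy to reproduce locally without wget; in this sketch, only the last line is executed because read consumes the middle line as data:

```shell
# The script arrives on bash's stdin; read then consumes the next script
# line as its input instead of letting it be executed.
printf '%s\n' 'read -r x' 'echo "read got: [$x]"' 'echo leftover' | bash
# prints only: leftover
```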
| Executing a remote script from a code repository causes endless loop |
1,453,145,646,000 |
Hello ALL and thanks in advance.
I have searched the forum for my situation and have been unable to locate a solution. I've got a script that I am passing arguments/options/parameters to at the command line. One of the values has a space in it, which I have put in double quotes. It might be easier to provide an example. Forgive my usage of arguments/options/parameters.
$: ./test1.ksh -n -b -d "Home Videos"
My problem is setting a variable to "Home Videos" and it being used together. In my example, the -d is to specify a directory. Not all the directories have spaces, but some do in my case.
This is an example of the code I have that is not working as I expect it to.
#!/bin/ksh
Function1()
{
echo "Number of Args in Function1: $#"
echo "Function1 Args: $@"
SetArgs $*
}
SetArgs()
{
echo -e "\nNumber of Args in SetArgs: $#"
echo "SetArgs Args: $@"
while [ $# -gt 0 ]
do
case $1 in
-[dD])
shift
export DirectoryName=$1
;;
-[nN])
export Var1=No
shift
;;
-[bB])
export Var2=Backup
shift
;;
*)
shift
;;
esac
done
Function2
}
Function2()
{
echo "Directory Name: ${DirectoryName}"
}
Function1 $*
When I run this, I'm getting only Home for the DirectoryName instead of Home Videos. Seen below.
$ ./test1.ksh -n -b -d "Home Videos"
Number of Args in Function1: 5
Function1 Args: -n -b -d Home Videos
Number of Args in SetArgs: 5
SetArgs Args: -n -b -d Home Videos
Var1 is set to: No
Var2 is set to: Backup
Directory Name: Home
What I am expecting and I have not been able to get it to happen is:
$ ./test1.ksh -n -b -d "Home Videos"
Number of Args in Function1: 4
Function1 Args: -n -b -d "Home Videos"
Number of Args in SetArgs: 4
SetArgs Args: -n -b -d "Home Videos"
Var1 is set to: No
Var2 is set to: Backup
Directory Name: Home Videos <-- Without double quotes in the final usage.
Any help I can get on this will be greatly appreciated... I've tried escaping the double quotes, without any success.
Thank you for your time and efforts in helping me figure this out.
Regards,
Daniel
|
Using $* or $@ unquoted never makes sense.
"$*" is the concatenation of the positional parameters with the first character (or byte depending on the shell) of $IFS, "$@" is the list of positional parameters.
When unquoted, it's the same but subject to split+glob (or only empty removal with zsh) like any other unquoted parameter expansion, (some shells do also separate arguments in $* even if $IFS is empty).
Here you want to pass the list of arguments as-is to your function, so it's:
SetArgs "$@"
[...]
Function1 "$@"
Note that with ksh88, $IFS has to contain the space character (which it does by default) for that to work properly (a bug inherited from the Bourne shell, fixed in ksh93).
Also note that with some implementations of ksh (like older versions of zsh in ksh emulation),
export DirectoryName=$1
is also subject to split+glob. export is one of those commands in Korn-like shells that can evaluate shell code (through arithmetic evaluation in array indices), so it's one of those cases where it's important to quote variables to avoid introducing command injection vulnerabilities.
Example:
$ (exec -a ksh zsh-4.0.1 -c 'export x=$a' ksh 'foo psvar[0`uname>&2`]')
Linux
Note that [ $# -gt 0 ] is another split+glob invocation which doesn't make sense (less likely to be a problem at least with the default value of $IFS).
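The effect of the missing quotes is easy to demonstrate in isolation (assuming the default IFS): count how many arguments survive each forwarding style.

```shell
count() { echo $#; }
fwd_unquoted() { count $*; }     # word splitting breaks "Home Videos" apart
fwd_quoted()   { count "$@"; }   # arguments forwarded exactly as received
fwd_unquoted -d "Home Videos"    # prints 3
fwd_quoted   -d "Home Videos"    # prints 2
```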
| Passing options/args/parameters with spaces from the script to a function within |
1,453,145,646,000 |
I am trying to use case to run this function
if [[ $input -gt 0 || $input -eq 0 ]];
Is it possible to put in case to test the input for greater than 0 or equal to 0, or even 0 and less than 0, in case.
|
If only integers need to be handled and -0 does not need to be handled correctly,
the following works:
case "$input" in
''|*[!0-9-]*|[0-9-]*-*)
echo "invalid input"
;;
[0-9]*)
echo "input >= 0"
;;
-[1-9]*)
echo "input < 0"
;;
*)
echo "invalid input"
;;
esac
But it is usually better to use if .. then .. elif ... then .. else .. fi constructs for distinction of cases with more complicated expressions than case with pattern matching.
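Combining both points, a POSIX sketch: one case pattern to validate (this variant also rejects misplaced signs and routes -0/+0 through the numeric test so they classify as zero), then if/elif for the comparison:

```shell
classify() {
    case $1 in
        '' | [+-] | *[!0-9+-]* | ?*[+-]*)   # empty, bare sign, bad char,
            echo invalid; return 1 ;;       # or a sign not in first place
    esac
    if   [ "$1" -gt 0 ]; then echo positive
    elif [ "$1" -eq 0 ]; then echo zero
    else                      echo negative
    fi
}
```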
| How to enter the condition to check 0 or more than 0 in case |
1,453,145,646,000 |
Take a look at these attempts:
$ case `true` in 0) echo success ;; *) echo fail ;; esac
fail
$ if `true` ; then
> echo "success"
> else
> echo "fail"
> fi
success
Now, why is the case statement failing? You might wonder why I don't just use the if statement and I shall explain. My command if complex and might return different return codes on which I want act on. I don't want to run the command multiple times and I can't do:
my_command
res=$?
case $res in
...
esac
This is because I use set -e in my script and therefore if my_command returns failure the script aborts.
But I have a workaround...
set +e
my_command
res=$?
set -e
case $res in
...
esac
But this is ugly, so returning to my initial question... why can I just use the case my_command in ... esac version?
|
You can't use
case $(somecommand) in ...
to test the exit status of somecommand because the command substitution expands to the output of the command, not its exit status.
Using $(true) doesn't work since true doesn't produce any output on standard output.
You could do
{ somecommand; err="$?"; } || true
case $err in
0) echo success ;;
*) echo fail
esac
This will stop the script running under errexit (-e) from exiting.
From the bash manual (from the description of set -e):
The shell does not exit if the command that fails is
part of the command list immediately following a while
or until keyword, part of the test following the if or
elif reserved words, part of any command executed in a
&& or || list except the command following the final &&
or ||, any command in a pipeline but the last, or if the
command's return value is being inverted with !.
In this case, with only the possibilities to either succeed or to fail, it would be easier to just do
if somecommand; then
echo success
else
echo fail
fi
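The same trick generalizes to distinguishing several exit codes while set -e stays enabled; a sketch where mycmd is a hypothetical stand-in for the complex command:

```shell
set -e
mycmd() { return "${1:-0}"; }     # stand-in: exits with the code it is given

run() {
    mycmd "$1" && err=0 || err=$?   # the || list keeps set -e from firing
    case $err in
        0) echo success ;;
        2) echo 'recoverable failure' ;;
        *) echo "failed with $err" ;;
    esac
}
run 0      # success
run 2      # recoverable failure
run 7      # failed with 7
```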
| How to use case statement to deal with multiple return values |
1,453,145,646,000 |
I am trying to create a small script for creating simple, all-default Apache virtual host files (it should be used any time I establish a new web application).
This script prompts me for the domain.tld of the web application and also for its database credentials, in verified read operations:
read -p "Have you created db credentials already?" yn
case $yn in
[Yy]* ) break;;
[Nn]* ) exit;;
* ) echo "Please create db credentials and then comeback;";;
esac
read -p "Please enter the domain of your web application:" domain_1 && echo
read -p "Please enter the domain of your web application again:" domain_2 && echo
if [ "$domain_1" != "$domain_2" ]; then echo "Values unmatched. Please try again." && exit 2; fi
read -sp "Please enter the app DB root password:" dbrootp_1 && echo
read -sp "Please enter the app DB root password again:" dbrootp_2 && echo
if [ "$dbrootp_1" != "$dbrootp_2" ]; then echo "Values unmatched. Please try again." && exit 2; fi
read -sp "Please enter the app DB user password:" dbuserp_1 && echo
read -sp "Please enter the app DB user password again:" dbuserp_2 && echo
if [ "$dbuserp_1" != "$dbuserp_2" ]; then echo "Values unmatched. Please try again." && exit 2; fi
Why I do it with Bash
As for now I would prefer Bash automation over Ansible automation because Ansible has a steep learning curve and its docs (as well as some printed book I bought about it) where not clear or useful for me in learning how to use it). I also prefer not to use Docker images and then change them after-build.
My problem
The entire Bash script (which I haven't brought here in its fullness) is a bit longer and the above "heavy" chuck of text makes it significantly longer - yet it is mostly a cosmetic issue.
My question
Is there an alternative for the verified read operations? A utility that both prompts twice and compares in one go?
Related: The need for $1 and $2 for comparison with a here-string
|
How about a shell function? Like
function read_n_verify {
read -p "$2: " TMP1
read -p "$2 again: " TMP2
[ "$TMP1" != "$TMP2" ] &&
{ echo "Values unmatched. Please try again."; return 2; }
read "$1" <<< "$TMP1"
}
read_n_verify domain "Please enter the domain of your web application"
read_n_verify dbrootp "Please enter the app DB root password"
read_n_verify dbuserp "Please enter the app DB user password"
Then do your desired action/s with $domain, $dbrootp, $dbuserp.
$1 is used to transport the variable name for the later read from the "here string", which in turn is used as it's easier here than a (could be used as well) "here document".
$2 contains the prompt (free) text, used last to allow for (sort of) "unlimited" text length.
Upper case TMP and [ ... ] && "sugar-syntax" (whatever this might be) are used by personal preference.
if - then - fi could be used as well and would eliminate the need for the braces that collect several commands into one single command to be executed as the && branch.
| read-verification alternative (two prompts and if-then comparison alternative) |
1,453,145,646,000 |
Today I have learned some tricks about menu option in command line.
One of these was
cat << EOF
Some lines
EOF
read -n1 -s
case $newvar in
"1") echo "" ;;
esac
It's really magical.
I can't find any description in the man page about this behaviour. How is the input to the read command pushed into the case statement? As far as I know, a variable is usually used for this.
I just want to understand the process of this combination further.
while :
do
clear
cat<<EOF
==============================
Menu Install DHCP Tool
------------------------------
Please enter your choice:
(1) Config Network Interface
(2) Check status
(3) Config DHCP server
(Q)uit
------------------------------
EOF
read -n1 -s
case "$REPLY" in
"1") config_network ;;
"2") check_status ;;
"3") config_dhcp ;;
"q") exit ;;
* ) echo "invalid option" ;;
esac
sleep 0.2
done
|
The documentation of read notes that:
If no names are supplied, the line read is assigned to the variable REPLY.
From that point it's a normal case statement. -n1 reads a single byte and -s turns off terminal echo of the input.
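A minimal reproduction of the REPLY behaviour (using plain read -r instead of -n1 -s so it also works with piped input):

```shell
printf 'q\n' | bash -c '
    read -r                  # no variable name given: the line lands in REPLY
    case $REPLY in
        q) echo quitting ;;
        *) echo "got: $REPLY" ;;
    esac'
# prints: quitting
```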
| What does "read -n1 -s" mean in this script? |
1,453,145,646,000 |
I have a script loaded as a service in /etc/init.d/myfile
When I try to start the service I get the error
/etc/init.d/myservice: 21: /etc/init.d/myservice: Syntax error: "(" unexpected
The issue seems to be with the process substitution <( in the source command. I use it without any problem in other scripts to extract variables from my main config file but inside a case statement I don't know how to make it work.
myservice contains:
#!/bin/sh
#/etc/init.d/myservice
### BEGIN INIT INFO
# Provides: myservice
# Required-Start: $remote_fs $syslog $network
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: my service
# Description: Start the myservice service
### END INIT INFO
case "$1" in
start)
# start processes
# Import the following variables from config.conf: cfgfile, dir, bindir
source <(grep myservice /opt/mysoftware/config.conf | grep -oP '.*(?= #)')
if [ -f $cfgfile ]
then
echo "Starting myservice"
/usr/bin/screen -U -d -m $bindir/myscript.sh $cfgfile
else
echo "myservice could not start because the file $cfgfile is missing"
fi
;;
stop)
# kill processes
echo "Stopping myservice"
screen -ls | grep Detached | cut -d. -f1 | awk '{print $1}' | xargs kill
;;
restart)
# kill and restart processes
/etc/init.d/myservice stop
/etc/init.d/myservice start
;;
*)
echo "Usage: /etc/init.d/myservice {start|stop|restart}"
exit 1
;;
esac
exit 0
The file config.conf is a list of variable declarations with a short description and the name of the script using them. I use grep filters to source only the variables I need for a given script.
It looks like this:
var1=value # path to tmp folder myservice
var2=value # log file name myservice script1.sh script2.sh
var3=value # prefix for log file script1.sh script2.sh
Note: The service worked fine before I converted it to start using the config file instead of hardcoded values.
Thank you.
|
Bash, ksh93, zsh, and other recent shells support process substitution (the <(command) syntax), but it is a non-standard extension. Dash (which is /bin/sh on Ubuntu systems) doesn't support it, and bash when invoked as /bin/sh doesn't, either.
If you have bash available, change the first line of your script to, for example, #!/bin/bash .
[On systems that have bash in a directory on a mountable filesystem, such as /usr/local/bin on some systems, you might need to make sure that filesystem is available before your service is started.]
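If the script must keep #!/bin/sh, one POSIX-compatible alternative (a sketch; the sed expression only approximates the question's grep -oP filter) is to stage the filtered assignments in a temporary file and dot-source that:

```shell
# Keep the "myservice" lines of the config, drop the trailing " # ..."
# comment, and source the result in the current shell.
load_service_vars() {
    _tmp=$(mktemp) || return 1
    sed -n '/myservice/s/ #.*//p' "$1" > "$_tmp"
    . "$_tmp"
    rm -f "$_tmp"
}
```

load_service_vars /opt/mysoftware/config.conf would then define cfgfile and bindir in the current shell, as the source <(...) line intended.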
| How to use process substitution within a case statement without getting syntax errors? |
1,453,145,646,000 |
How to have spaces or tabs in the menu list?
PS3='Please enter your choice: '
options=("Option 1" "Option 2" "Quit")
select opt in "${options[@]}"
do
case $opt in
"Option 1")
echo "Your choice is 1"
;;
"Option 2")
echo "Your choice is 2"
;;
"Quit")
break
;;
*) echo "Invalid option";;
esac
done
And I got this:
[user@Server:/home/user] ./test.sh
1) Option 1
2) Option 2
3) Option 3
4) Quit
Please enter your choice:
But I'd like something like this:
[user@Server:/home/user] ./test.sh
1) Option 1
2) Option 2
3) Option 3
4) Quit
Please enter your choice:
Ideas?
|
The select statement in bash, which is what displays the menu, does not allow specifying an indent for the menu.
Just a comment on the code: It's usually easier to let the case statement act on $REPLY rather than the variable with the selected string. It saves you from having to type in the strings twice.
E.g.
select opt in "${options[@]}"
do
case $REPLY in
1)
echo "Your choice is 1"
;;
2)
echo "Your choice is 2"
;;
3)
break
;;
*) echo 'Invalid option' >&2
esac
done
or, for this specific example,
select opt in "${options[@]}"
do
case $REPLY in
[1-2])
printf 'Your choice is %s\n' "$REPLY"
;;
3)
break
;;
*) echo 'Invalid option' >&2
esac
done
| Linux - case command |
1,453,145,646,000 |
FuzzyTime()
{
local tmp=$( date +%H )
case $((10#$tmp)) in
[00-05] )
wtstr="why don't you go to bed"
;;
[06-09] )
wtstr="I see your very eager to start the day"
;;
[10-12] )
wtstr="and a very good day too you"
;;
[13-18] )
wtstr="Good Afternoon"
;;
[19-21] )
wtstr="Good Evening"
;;
[22-23] )
wtstr="it is getting late, it's time to party or go to bed"
;;
*)
wtstr="guess the planet your on has more than a 24 hour rotation"
echo 'case value is:' $tmp
;;
esac
}
The case variable represents hours in a 24-hour context, however it seems the numbers 08 and 17 cause an issue. I resolved the 08 by using $((10#$tmp)), but now 17 is an issue; any advice? This is my first bash script ever, so sorry in advance if this is a silly question.
|
[] denotes character ranges:
[10-12] is a single-character class containing the digit 1, the range 0-1, and the digit 2 -- it matches one character in the range 0-2, not the numbers 10 through 12.
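A quick demonstration of that matching behavior (my sketch, runnable in any POSIX shell):

```shell
# [10-12] is a single-character class: the chars 1, the range 0-1, and 2
for x in 0 1 2 3 11 17; do
  case $x in
    [10-12]) echo "$x matches" ;;
    *)       echo "$x does not match" ;;
  esac
done
# only 0, 1 and 2 match; 11 and 17 are two characters long,
# so a one-character pattern can never match them
```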
Use simple comparisons with if-elif-else-fi:
if [ "$tmp" -ge 0 ] && [ "$tmp" -le 5 ]; then
echo "<0,5>"
elif [ "$tmp" -ge 6 ] && [ "$tmp" -le 9 ]; then
echo "<6,9>"
#...
else
#...
fi
(Or you could iterate over an array of range limits if you want every interval, but you might as well hardcode it in this case--as you are trying to do).
Edit: requested array version:
FuzzyTime(){
local needle=$1 #needle is $1
    : ${needle:=$( date +%H )} #if needle is empty, set it to "$(date +%H)"
local times=( 0 6 10 13 19 22 24 0 )
local strings=(
"why don't you go to bed"
"I see your very eager to start the day"
"and a very good day too you"
"Good Afternoon"
"Good Evening"
"it is getting late, it's time to party or go to bed"
"guess the planet your on has more than a 24 hour rotation"
)
local b=0
# length(times) - 2 == index of the penultimate element
local B="$((${#times[@]}-2))"
for((; b<B; b++)); do
if ((needle >= times[b] && needle < times[b+1])); then break; fi
done
echo "${strings[$b]}"
}
FuzzyTime "$1"
test:
$ for t in {0..27}; do printf '%s -- %s\n' "$t" "$(FuzzyTime "$t")"; done
0 -- why don't you go to bed
1 -- why don't you go to bed
2 -- why don't you go to bed
3 -- why don't you go to bed
4 -- why don't you go to bed
5 -- why don't you go to bed
6 -- I see your very eager to start the day
7 -- I see your very eager to start the day
8 -- I see your very eager to start the day
9 -- I see your very eager to start the day
10 -- and a very good day too you
11 -- and a very good day too you
12 -- and a very good day too you
13 -- Good Afternoon
14 -- Good Afternoon
15 -- Good Afternoon
16 -- Good Afternoon
17 -- Good Afternoon
18 -- Good Afternoon
19 -- Good Evening
20 -- Good Evening
21 -- Good Evening
22 -- it is getting late, it's time to party or go to bed
23 -- it is getting late, it's time to party or go to bed
24 -- guess the planet your on has more than a 24 hour rotation
25 -- guess the planet your on has more than a 24 hour rotation
26 -- guess the planet your on has more than a 24 hour rotation
27 -- guess the planet your on has more than a 24 hour rotation
| case statement not behaving as expected (fuzzytime() function) |
1,453,145,646,000 |
I have made a simple backup program for my bin folder. It works.
Code and resultant STDOUT below.
Using rsync to copy from the local ~/bin folder to a /media/username/code/bin folder. The code works fine when there is only one result from mount | grep media, but I cannot quite fathom how to advance it to letting me select from multiple results of the mount/grep.
I suspect the for LINE loop below is lucky to work at all, as I believe for splits its list on spaces in shell scripting, but since there are no spaces in the results it splits on the \n instead? I tried find /media and of course got a lot of results. Not the way to go I think. O_O
check_media () {
FOUNDMEDIA=$(mount | awk -F' ' '/media/{print $3}')
echo -e "foundmedia \n$FOUNDMEDIA"
echo For Loop
for LINE in $FOUNDMEDIA
do
echo $LINE
done
CHOSENMEDIA="${FOUNDMEDIA}/code/bin/"
echo -e "\nchosenmedia \n$CHOSENMEDIA\n"
exit
}
foundmedia
/media/dee/D-5TB-ONE
/media/dee/DZ61
For Loop
/media/dee/D-5TB-ONE
/media/dee/DZ61
chosenmedia
/media/dee/D-5TB-ONE
/media/dee/DZ61/code/bin/
You can see how I add the save path /code/bin to the found media, but with multiple results I get a chosenmedia which cannot work. I would like to be able to choose the media I want to rsync my backup to, or restore from.
|
Assuming your mount points don't contain whitespace you can use a script like this:
#!/bin/bash
check_media() {
local found=(_ $(mount | awk '/media/{print $3}'))
local index chosen
until [[ "$chosen" =~ ^[0-9]+$ ]] && [[ "${chosen:-0}" -ge 1 ]] && [[ "${chosen:-0}" -lt ${#found[@]} ]]
do
for ((index=1; index < "${#found[@]}"; index++))
do
printf '%2d) %s\n' $index "${found[$index]}" >&2
done
read -p "Choice (q=quit): " chosen || { echo; return 1; }
[ "$chosen" == q ] && return 1
done
printf '%s\n' "${found[$chosen]}"
}
if chosen=$(check_media)
then
printf 'Got: %s\n' "$chosen"
fi
However, if you are writing for bash the whole thing can be encapsulated in just a couple of lines, replacing check_media like this:
select chosen in $(mount | awk '/media/{print $3}')
do
printf 'Got: %s\n' "$chosen"
# rsync
break
done
| Selecting from various media using awk shell script |
1,453,145,646,000 |
st.txt
"failed" "aa" "2018-04-03T17:43:38Z"
while read status name date; do
case "$status" in
'aborted')
echo -1
;;
"failed")
echo -1
;;
'succeeded')
echo 0
;;
*)
echo 0
esac
exit 0
done < st.txt
But I always get 0 as the output.
|
You should replace "failed" with "\"failed\"". It should be:
while read status name date; do
case "$status" in
'aborted')
echo -1
;;
"\"failed\"")
echo -1
;;
'succeeded')
echo 0
;;
*) echo 0
esac
exit 0
done<st.txt
Also consider using read with -r.
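Another option (a sketch of my own, not from the answer above) is to strip the literal quotes from the field before comparing, so the case patterns stay readable:

```shell
# remove the surrounding double quotes from the first field, then compare
while read -r status name date; do
  status=${status#\"}   # drop leading  "
  status=${status%\"}   # drop trailing "
  case $status in
    aborted|failed) echo -1 ;;
    *)              echo 0  ;;
  esac
done < st.txt
```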
There is also an easier way to do what you want:
if [ "$(cut -d ' ' -f1 st.txt)" = "\"failed\"" ]
then
printf -- "-1\n"
fi
| compare variable with string bash |
1,405,814,980,000 |
I've seen the phrase "sh compatible" used usually in reference to shells. I'm not sure if it also applies to the programs that might be run from within shells.
What does it mean for a shell or other program to be "sh compatible"? What would it mean to be "sh incompatible"?
Edit:
This question asking the difference between bash and sh is very relevant:
Difference between sh and bash
I'd still like a direct answer to what it means to be "sh compatible". A reasonable expectation might be that "sh compatible" means "implements the Shell Command Language" but then why are there so many "sh compatible" shells and why are they different?
|
Why are there so many “sh compatible” shells?
The Bourne shell was first publicly released in 1979 as part of Unix V7. Since pretty much every Unix and Unix-like system descends from V7 Unix — even if only spiritually — the Bourne shell has been with us “forever.”¹
The Bourne shell actually replaced an earlier shell, retronymed the Thompson shell, but it happened so early in Unix’s history that it’s all but forgotten today. The Bourne shell is a superset of the Thompson shell.²
Both the Bourne and Thompson shells were called sh. The shell specified by POSIX is also called sh. So, when someone says sh-compatible, they are handwavingly referring to this series of shells. If they wanted to be specific, they’d say “POSIX shell” or “Bourne shell.”³
The POSIX shell is based on the 1988 version of KornShell, which in turn was meant to replace the Bourne shell on AT&T Unix, leapfrogging the BSD C shell in terms of features.⁴ To the extent that ksh is the ancestor of the POSIX shell, most Unix and Unix-like systems include some variant of the Korn shell today. The exceptions are generally tiny embedded systems, which can’t afford the space a complete POSIX shell takes.
That said, the Korn shell — as a thing distinct from the POSIX shell — never really became popular outside the commercial Unix world. This is because its rise corresponded with the early years of Unix commercialization, so it got caught up in the Unix wars. BSD Unixes eschewed it in favor of the C shell, and its source code wasn’t freely available for use in Linux when it got started.⁵ So, when the early Linux distributors went looking for a command shell to go with their Linux kernel, they usually chose GNU Bash, one of those sh-compatibles you’re talking about. While GNU Bash does go beyond POSIX in many ways, you can ask it to run in a more pure POSIX mode.
That early association between Linux and Bash sealed the fate of many other shells, including ksh, csh and tcsh. There are die-hards still using those shells today, but they’re very much in the minority.⁶
All this history explains why the creators of relative latecomers like bash, zsh, and yash chose to make them sh-compatible: Bourne/POSIX compatibility is the minimum a shell for Unix-like systems must provide in order to gain widespread adoption.
In many systems, the default interactive command shell and /bin/sh are different things. /bin/sh may be:
The original Bourne shell. This is common in older UNIX® systems, such as Solaris 10 (released in 2005) and its predecessors.⁸
A POSIX-certified shell. This is common in newer UNIX® systems, such as Solaris 11 (2010).
The Almquist shell. This is an open source Bourne/POSIX shell clone originally released on Usenet in 1989, which was then contributed to Berkeley’s CSRG for inclusion in the first BSD release containing no AT&T source code, 4.4BSD-Lite.⁷ The Almquist shell is often called ash, even when installed as /bin/sh.
There are two important ash forks outside the BSD world:
dash, famously adopted by Debian and Ubuntu in 2006 as the default /bin/sh implementation. (Bash remains the default interactive command shell in Debian derivatives.)
The ash command in BusyBox, which is frequently used in embedded Linuxes and may be used to implement /bin/sh. Since it postdates dash and it was derived from Debian’s old ash package, I’ve chosen to consider it a derivative of dash rather than ash, despite its command name within BusyBox.
(BusyBox also includes a less featureful alternative to ash called hush. Typically only one of the two will be built into any given BusyBox binary: ash by default, but hush when space is really tight. Thus, /bin/sh on BusyBox-based systems is not always dash-like.)
GNU Bash, which disables most of its non-POSIX extensions when called as sh.
This choice is typical on desktop and server variants of Linux, except for Debian and its derivatives.
Apple switched the default in Mac OS X from tcsh to Bash in 2003 for version 10.3 (Panther), then kept with it thru 10.14 (Mojave), released in 2018. It wasn’t until the following year’s release — 10.15 (Catalina) — that they switched away from Bash to zsh, which differs in many ways, but not in being broadly POSIX-compatible.
A shell with ksh93 POSIX extensions, as in OpenBSD. Although the OpenBSD shell changes behavior to avoid syntax and semantic incompatibilities with Bourne and POSIX shells when called as sh, it doesn’t disable any of its pure extensions, being those that don’t conflict with older shells.
This is not common; you should not expect ksh93 features in /bin/sh.
I used “shell script” above as a generic term meaning Bourne/POSIX shell scripting. This is due to the ubiquity of Bourne family shells. To talk about scripting on other shells, you need to give a qualifier, like “C shell script.” Even on systems where a C family shell is the default interactive shell, it is better to use the Bourne shell for scripting.
It is telling that when Wikipedia classifies Unix shells, they group them into Bourne shell compatible, C shell compatible, and “other.”
This diagram may help:
(Click for SVG version, 30 kiB, or view full-size PNG version, 213 kiB.)
What would it mean to be “sh incompatible”?
Someone talking about an sh-incompatible thing typically means one of three things:
They are referring to one of those “other” shells.⁹
They are making a distinction between the Bourne and C shell families.
They are talking about some specific feature in one Bourne family shell that isn’t in all the other Bourne family shells. ksh93, bash, and zsh in particular have many features that don’t exist in the older “standard” shells. Those three are also mutually-incompatible in a lot of ways, once you get beyond the shared POSIX/ksh88 base.
It is a classic error to write a shell script with a #!/bin/sh shebang line at the top but to use Bash or Korn shell extensions within. Since /bin/sh is one of the shells in the Korn/POSIX family diagram above on so many systems these days, such scripts will work on the system they are written on, but then fail on systems where /bin/sh is something from the broader Bourne family of shells. Best practice is to use #!/bin/bash or #!/bin/ksh shebang lines if the script uses such extensions.
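For instance, this contrived script (my sketch, not from the original text) runs fine where /bin/sh happens to be bash, but fails where it is dash:

```shell
#!/bin/sh
# ${var^^} (uppercasing) is a bash 4+ extension, not POSIX;
# dash typically rejects it with "Bad substitution"
greeting=hello
echo "${greeting^^}"
```

The fix is to declare the real dependency with a #!/bin/bash shebang line instead.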
There are many ways to check whether a given Bourne family shell script is portable:
Go through the Portable Shell Programming chapter in the GNU Autoconf manual. You may recognize some of the problematic constructs it talks about in your scripts.
Run checkbashisms on it, a tool from the Debian project that checks a script for “bashisms.”
Run it under posh, a shell in the Debian package repository that purposely implements only features specified by SUS3, plus a few other minor features.
Run it under obosh from the Schily Tools project, an improved version of the Bourne shell as open sourced by Sun as part of OpenSolaris in 2005, making it one of the easiest ways to get a 1979 style Bourne shell on a modern computer.
The Schily Tools distribution also includes bosh, a POSIX type shell with many nonstandard features, but which may be useful for testing the compatibility of shell scripts intended to run on all POSIX family shells. It tends to be more conservative in its feature set than bash, zsh and the enhanced versions of ksh93.
Schily Tools also includes a shell called bsh, but that is an historical oddity which is not a Bourne family shell at all.
Why are they different?
For the same reasons all “New & Improved!” things are different:
The improved version could only be improved by breaking backwards compatibility.
Someone thought of a different way for something to work, which they like better, but which isn’t the same way the old one worked.
Someone tried reimplementing an old standard with incomplete understanding, causing them to mess up and create an unintentional difference.
Footnotes and Asides:
Early versions of BSD Unix were just add-on software collections for V6 Unix. Since the Bourne shell wasn’t added to AT&T Unix until V7, BSD didn’t technically start out having the Bourne shell. BSD’s answer to the primitive nature of the Thompson shell was the C shell.
Nevertheless, because the first standalone versions of BSD (2.9BSD and 3BSD) were based on V7 or its portable successor UNIX/32V, they did include the Bourne shell.
(The 2BSD line turned into a parallel fork of BSD for Digital’s PDP minicomputers, while the 3BSD and 4BSD lines went on to take advantage of newer computer types like Vaxen and Unix workstations. 2.9BSD was essentially the PDP version of 4.1cBSD; they were contemporaneous, and shared code. Because PDPs didn’t all instantly disappear when the VAX arrived, the 2BSD line is still shambling along.)
It is safe to say that the Bourne shell was everywhere in the Unix world by 1983. That’s a good approximation to “forever” in the computing industry. MS-DOS got a hierarchical filesystem that year (awww, how cuuute!) and the first 24-bit Macintosh with its 9” B&W screen — not grayscale, literally black and white — wouldn’t come out until early the next year.
The Thompson shell was quite primitive by today’s standards. It was only an interactive command shell, rather than the script programming environment we expect today. It did have things like pipes and I/O redirection, which we think of as prototypically part of a “Unix shell,” so that we think of the MS-DOS command shell as getting them from Unix.
The Bourne shell also replaced the PWB shell, which added important things to the Thompson shell like programmability (if, switch and while) and an early form of environment variables. The PWB shell is even less well-remembered than the Thompson shell since it wasn’t part of every version of Unix.
When someone isn’t specific about POSIX vs Bourne shell compatibility, there is a whole range of things they could mean.
At one extreme, they could be using the 1979 Bourne shell as their baseline. An “sh-compatible script” in this sense would mean it is expected to run perfectly on the true Bourne shell or any of its successors and clones: ash, bash, ksh, zsh, etc.
Someone at the other extreme assumes the shell specified by POSIX as a baseline instead. We take so many POSIX shell features as “standard” these days that we often forget that they weren’t actually present in the Bourne shell: built-in arithmetic, job control, command history, aliases, command line editing, the $() form of command substitution, etc.
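The $() form is a handy concrete example of that gap: it nests cleanly, which backticks do not, yet the 1979 Bourne shell knows nothing about it (a small sketch):

```shell
# POSIX command substitution nests without escaping gymnastics...
outer=$(echo "inner is $(echo nested)")
echo "$outer"                  # inner is nested

# ...whereas the backtick equivalent needs escaping at each level:
outer=`echo "inner is \`echo nested\`"`
echo "$outer"                  # inner is nested
```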
Although the Korn shell has roots going back to the early 1980s, AT&T didn’t ship it in Unix until System V Release 4 in 1988. Since so many commercial Unixes are based on SVR4, this put ksh in pretty much every relevant commercial Unix from the late 1980s onward.
(A few weird Unix flavors based on SVR3 and earlier held onto pieces of the market past the release of SVR4, but they were the first against the wall when the revolution came.)
1988 is also the year the first POSIX standard came out, with its Korn shell based “POSIX shell.” Later, in 1993, an improved version of the Korn shell came out. Since POSIX effectively nailed the original in place, ksh forked into two major versions: ksh88 and ksh93, named after the years involved in their split.
ksh88 is not entirely POSIX-compatible, though the differences are small, so that some versions of the ksh88 shell were patched to be POSIX-compatible. (This from an interesting interview on Slashdot with Dr. David G. Korn. Yes, the guy who wrote the shell.)
ksh93 is a fully-compatible superset of the POSIX shell. Development on ksh93 has been sporadic since the primary source repository moved from AT&T to GitHub with the newest release being about 3 years old as I write this, ksh93v. (The project’s base name remains ksh93 with suffixes added to denote release versions beyond 1993.)
Systems that include a Korn shell as a separate thing from the POSIX shell usually make it available as /bin/ksh, though sometimes it is hiding elsewhere.
When we talk about ksh or the Korn shell by name, we are talking about ksh93 features that distinguish it from its backwards-compatible Bourne and POSIX shell subsets. You rarely run across the pure ksh88 today.
AT&T kept the Korn shell source code proprietary until March 2000. By that point, Linux’s association with GNU Bash was very strong. Bash and ksh93 each have advantages over the other, but at this point inertia keeps Linux tightly associated with Bash.
As to why the early Linux vendors most commonly choose GNU Bash over pdksh, which was available at the time Linux was getting started, I’d guess it’s because so much of the rest of the userland also came from the GNU project. Bash is also somewhat more advanced than pdksh, since the Bash developers do not limit themselves to copying Korn shell features.
Work on pdksh stopped about the time AT&T released the source code to the true Korn shell. There are two main forks that are still maintained, however: the OpenBSD pdksh and the MirBSD Korn Shell, mksh.
I find it interesting that mksh is the only Korn shell implementation currently packaged for Cygwin.
csh/tcsh was usually the default interactive shell on BSD Unixes through the early 1990s.
Being a BSD variant, early versions of Mac OS X were this way, through Mac OS X 10.2 “Jaguar”. OS X switched the default shell from tcsh to Bash in OS X 10.3 “Panther”. This change did not affect systems upgraded from 10.2 or earlier. The existing users on those converted systems kept their tcsh shell.
FreeBSD used tcsh as the default root shell until version 14, but it is now one of the POSIX-compatible Almquist shell variants. This is true on NetBSD as well.
OpenBSD uses a fork of pdksh as the default shell instead.
The higher popularity of Linux and OS X makes some people wish FreeBSD would also switch to Bash, but they won’t be doing so any time soon for philosophical reasons. It is easy to switch it, if this bothers you.
4.4BSD-Lite in turn became the base for all modern BSD derivatives, with /bin/sh remaining as an Almquist derivative in most of them, with one major exception noted below. You can see this direct descendancy in the source code repositories for NetBSD and FreeBSD: they were shipping an Almquist shell derivative from day 1.
It is rare to find a system with a truly vanilla Bourne shell as /bin/sh these days. You have to go out of your way to find something sufficiently close to it for compatibility testing.
I’m aware of only one way to run a genuine 1979 vintage Bourne shell on a modern computer: use the Ancient Unix V7 disk images with the SIMH PDP-11 simulator from the Computer History Simulation Project. SIMH runs on pretty much every modern computer, not just Unix-like ones.
With OpenSolaris, Sun open-sourced the SVR4 version of the Bourne shell for the first time. Prior to that, the source code for the post-V7 versions of the Bourne shell was only available to those with a Unix source code license.
That code is now available separately from the rest of the defunct OpenSolaris project from a couple of different sources.
The most direct source is the Heirloom Bourne shell project. This became available shortly after the original 2005 release of OpenSolaris. Some portability and bug fixing work was done over the next few months, but then development on the project halted.
Jörg Schilling has done a better job of maintaining a version of this code as obosh in his Schily Tools package. See above for more on this.
Keep in mind that these shells derived from the 2005 source code release contain multi-byte character set support, job control, shell functions, and other features not present in the original 1979 Bourne shell.
One way to tell whether you are on an original Bourne shell is to see if it supports an undocumented feature added to ease the transition from the Thompson shell: ^ as an alias for |. That is to say, a command like ls ^ more will give an error on a Korn or POSIX type shell, but it will behave like ls | more on a true Bourne shell.
Occasionally you encounter a fish, scsh or rc/es adherent, but they’re even rarer than C shell fans.
The rc family of shells isn’t commonly used on Unix/Linux systems, but the family is historically important, which is how it earned a place in the diagram above. rc is the standard shell of the Plan 9 from Bell Labs operating system, a kind of successor to 10th edition Unix, created as part of Bell Labs’ continued research into operating system design. It is incompatible with both Bourne and C shell at a programming level; there’s a lesson in there.
| What does it mean to be "sh compatible"? |
1,405,814,980,000 |
Will the executable of a small, extremely simple program, such as the one shown below, that is compiled on one flavor of Linux run on a different flavor? Or would it need to be recompiled?
Does machine architecture matter in a case such as this?
int main()
{
return (99);
}
|
It depends. Something compiled for IA-32 (Intel 32-bit) may run on amd64 as Linux on Intel retains backwards compatibility with 32-bit applications (with suitable software installed). Here's your code compiled on RedHat 7.3 32-bit system (circa 2002, gcc version 2.96) and then the binary copied over to and run on a Centos 7.4 64-bit system (circa 2017):
-bash-4.2$ file code
code: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
-bash-4.2$ ./code
-bash: ./code: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
-bash-4.2$ sudo yum -y install glibc.i686
...
-bash-4.2$ ./code ; echo $?
99
Ancient RedHat 7.3 to Centos 7.4 (essentially RedHat Enterprise Linux 7.4) is staying in the same "distribution" family, so will likely have better portability than going from some random "Linux from scratch" install from 2002 to some other random Linux distribution in 2018.
Something compiled for amd64 would not run on 32-bit only releases of Linux (old hardware does not know about new hardware). This is also true for new software compiled on modern systems intended to be run on ancient old things, as libraries and even system calls may not be backwards portable, so may require compilation tricks, or obtaining an old compiler and so forth, or possibly instead compiling on the old system. (This is a good reason to keep virtual machines of ancient old things around.)
Architecture does matter; amd64 (or IA-32) is vastly different from ARM or MIPS so the binary from one of those would not be expected to run on another. At the assembly level the main section of your code on IA-32 compiles via gcc -S code.c to
main:
pushl %ebp
movl %esp,%ebp
movl $99,%eax
popl %ebp
ret
which an amd64 system can deal with (on a Linux system--OpenBSD by contrast on amd64 does not support 32-bit binaries; backwards compatibility with old archs does give attackers wiggle room, e.g. CVE-2014-8866 and friends). Meanwhile on a big-endian MIPS system main instead compiles to:
main:
.frame $fp,8,$31
.mask 0x40000000,-4
.fmask 0x00000000,0
.set noreorder
.set nomacro
addiu $sp,$sp,-8
sw $fp,4($sp)
move $fp,$sp
li $2,99
move $sp,$fp
lw $fp,4($sp)
addiu $sp,$sp,8
j $31
nop
which an Intel processor will have no idea what to do with, and likewise for the Intel assembly on MIPS.
You could possibly use QEMU or some other emulator to run foreign code (perhaps very, very slowly).
However! Your code is very simple code, so will have fewer portability issues than anything else; programs typically make use of libraries that have changed over time (glibc, openssl, ...); for those one may also need to install older versions of various libraries (RedHat for example typically puts "compat" somewhere in the package name for such)
compat-glibc.x86_64 1:2.12-4.el7.centos
or possibly worry about ABI changes (Application Binary Interface) for way old things that use glibc, or more recently changes due to C++11 or other C++ releases. One could also compile static (greatly increasing the binary size on disk) to try to avoid library issues, though whether some old binary did this depends on whether the old Linux distribution was compiling most everything dynamic (RedHat: yes) or not. On the other hand, things like patchelf can rejigger dynamic (ELF, but probably not a.out format) binaries to use other libraries.
However! Being able to run a program is one thing, and actually doing something useful with it another. Old 32-bit Intel binaries may have security issues if they depend on a version of OpenSSL that has some horrible and not-backported security problem in it, or the program may not be able to negotiate at all with modern web servers (as the modern servers reject the old protocols and ciphers of the old program), or SSH protocol version 1 is no longer supported, or ...
| Will a Linux executable compiled on one "flavor" of Linux run on a different one? |
1,405,814,980,000 |
Node.js is very popular these days and I've been writing some scripts on it. Unfortunately, compatibility is a problem. Officially, the Node.js interpreter is supposed to be called node, but Debian and Ubuntu ship an executable called nodejs instead.
I want portable scripts that Node.js can work with in as many situations as possible. Assuming the filename is foo.js, I really want the script to run in two ways:
./foo.js runs the script if either node or nodejs is in $PATH.
node foo.js also runs the script (assuming the interpreter is called node)
Note: The answers by xavierm02 and myself are two variations of a polyglot script. I'm still interested in a pure shebang solution, if such exists.
|
The best I have come up with is this "two-line shebang" that really is a polyglot (Bourne shell / Node.js) script:
#!/bin/sh
':' //; exec "$(command -v nodejs || command -v node)" "$0" "$@"
console.log('Hello world!');
The first line is, obviously, a Bourne shell shebang. Node.js bypasses any shebang that it finds, so this is a valid javascript file as far as Node.js is concerned.
The second line calls the shell no-op : with the argument // and then executes nodejs or node with the name of this file as parameter. command -v is used instead of which for portability. The command substitution syntax $(...) isn't strictly Bourne, so opt for backticks if you run this in the 1980s.
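The fallback chain works because || short-circuits: command -v exits non-zero when the name is not found, so the first interpreter that exists wins. In isolation (using a deliberately nonexistent name for illustration):

```shell
# pick nodejs if present, otherwise node -- same pattern as the shebang line
interp=$(command -v nodejs || command -v node)
echo "would exec: $interp"

# the general shape, demonstrated with a name that certainly does not exist:
command -v no-such-interpreter-xyz || command -v sh   # prints the path to sh
```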
Node.js just evaluates the string ':', which is like a no-op, and the rest of the line is parsed as a comment.
The rest of the file is just plain old javascript. The subshell quits after the exec on second line is completed, so the rest of the file is never read by the shell.
Thanks to xavierm02 for inspiration and all the commenters for additional information!
| Universal Node.js shebang? |
1,405,814,980,000 |
I know there are many differences between OSX and Linux, but what makes them so totally different, that makes them fundamentally incompatible?
|
The whole ABI is different, not just the binary format (Mach-O versus ELF) as sepp2k mentioned.
For example, while both Linux and Darwin/XNU (the kernel of OS X) use sc on PowerPC and int 0x80/sysenter/syscall on x86 for syscall entry, there's not much more in common from there on.
Darwin directs negative syscall numbers at the Mach microkernel and positive syscall numbers at the BSD monolithic kernel — see xnu/osfmk/mach/syscall_sw.h and xnu/bsd/kern/syscalls.master. Linux's syscall numbers vary by architecture — see linux/arch/powerpc/include/asm/unistd.h, linux/arch/x86/include/asm/unistd_32.h, and linux/arch/x86/include/asm/unistd_64.h — but are all nonnegative. So obviously syscall numbers, syscall arguments, and even which syscalls exist are different.
The standard C runtime libraries are different too; Darwin mostly inherits FreeBSD's libc, while Linux typically uses glibc (but there are alternatives, like eglibc and dietlibc and uclibc and Bionic).
Not to mention that the whole graphics stack is different; ignoring the whole Cocoa Objective-C libraries, GUI programs on OS X talk to WindowServer over Mach ports, while on Linux, GUI programs usually talk to the X server over UNIX domain sockets using the X11 protocol. Of course there are exceptions; you can run X on Darwin, and you can bypass X on Linux, but OS X applications definitely do not talk X.
Like Wine, if somebody put the work into
implementing a binary loader for Mach-O
trapping every XNU syscall and converting it to appropriate Linux syscalls
writing replacements for OS X libraries like CoreFoundation as needed
writing replacements for OS X services like WindowServer as needed
then running an OS X program "natively" on Linux could be possible. Years ago, Kyle Moffet did some work on the first item, creating a prototype binfmt_mach-o for Linux, but it was never completed, and I know of no other similar projects.
(In theory this is quite possible, and similar efforts have been done many times; in addition to Wine, Linux itself has support for running binaries from other UNIXes like HP-UX and Tru64, and the Glendix project aims to bring Plan 9 compatibility to Linux.)
Somebody has put in the effort to implement a Mach-O binary loader and API translator for Linux!
shinh/maloader - GitHub takes the Wine-like approach of loading the binary and trapping/translating all the library calls in userspace. It completely ignores syscalls and all graphical-related libraries, but is enough to get many console programs working.
Darling builds upon maloader, adding libraries and other supporting runtime bits.
| What makes OSX programs not runnable on Linux? |
1,405,814,980,000 |
I've always been unlucky with regards to choosing a laptop that I can install Linux on. If it's not the wireless card that's not working out of the box, it's the video card. Also, I'm still not able to hibernate my computer, close the lid and resume where I left off at a later point. I always have to shut down the laptop or leave it on.
Is there a laptop vendor that is considered to have the best trade off between performance and compatibility with Linux? If not, then what should I look for when buying a laptop?
|
I'm not sure what issues you're constantly experiencing, but I run Gentoo on a Lenovo Thinkpad without problems (the fingerprint reader does not work), with possible problems after the removal of the BKL in recent kernels (however, 2.6.33 worked OK). Previously I used an IBM Thinkpad.
From my small experience with them:
Thinkpads seem to have a community which helps with configuring them (IRC channel, website).
Unless you need high-performance graphics, use Intel. I had much trouble getting an ATI card (XPress 200M) to work (basic OpenGL was OK, but there were problems with KMS, at least some time ago).
Don't trust the Windows recovery tool. Back up the positions of your partitions: it said it wouldn't change partitions other than C:, but it deleted the first secondary partition (/dev/sda5). Strangely, GRUB was left in its place and the data was undamaged (fortunately I could reverse-engineer the positions).
In addition to recommending Linux laptops, I can recommend Thinkpads (you asked); I didn't use many other laptops, but they worked.
| Which laptop is most compatible with Linux? [closed] |
1,405,814,980,000 |
I have picked up -- probably on Usenet in the mid-1990s (!) -- that the construct
export var=value
is a Bashism, and that the portable expression is
var=value
export var
I have been advocating this for years, but recently, somebody challenged me about it, and I really cannot find any documentation to back up what used to be a solid belief of mine.
Googling for "export: command not found" does not seem to bring up any cases where somebody actually had this problem, so even if it's genuine, I guess it's not very common.
(The hits I get seem to be newbies who copy/pasted punctuation, and ended up with 'export: command not found' or some such, or trying to use export with sudo; and newbie csh users trying to use Bourne shell syntax.)
I can certainly tell that it works on OS X, and on various Linux distros, including the ones where sh is dash.
sh$ export var=value
sh$ echo "$var"
value
sh$ sh -c 'echo "$var"' # see that it really is exported
value
In today's world, is it safe to say that export var=value is safe to use?
I'd like to understand what the consequences are. If it's not portable to v7 "Bourne classic", that's hardly more than trivia. If there are production systems where the shell really cannot cope with this syntax, that would be useful to know.
|
export foo=bar
was not supported by the Bourne shell (an old shell from the 70s from which modern sh implementations like ash/bash/ksh/yash/zsh derive). That was introduced by ksh.
In the Bourne shell, you'd do:
foo=bar export foo
or:
foo=bar; export foo
or with set -k:
export foo foo=bar
Now, the behaviour of:
export foo=bar
varies from shell to shell.
The problem is that assignments and simple command arguments are parsed and interpreted differently.
The foo=bar above is interpreted by some shells as a command argument and by others as an assignment (sometimes).
For instance,
a='b c'
export d=$a
is interpreted as:
'export' 'd=b' 'c'
with some shells (ash, older versions of zsh (in sh emulation), yash) and:
'export' 'd=b c'
in the others (bash, ksh).
While
export \d=$a
or
var=d
export $var=$a
would be interpreted the same in all shells (as 'export' 'd=b' 'c') because that backslash or dollar sign stops those shells that support it to consider those arguments as assignments.
If export itself is quoted or the result of some expansion (even in part), depending on the shell, it would also stop receiving the special treatment.
See Are quotes needed for local variable assignment? for more details on that.
The Bourne syntax though:
d=$a; export d
is interpreted the same by all shells without ambiguity (d=$a export d would also work in the Bourne shell and POSIX compliant shells but not in recent versions of zsh unless in sh emulation).
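To make the difference concrete, here is the unambiguous Bourne-style idiom in action (a minimal sketch; any POSIX sh should print the same thing):

```shell
a='b c'
d=$a; export d                # assignment first, then export: parsed the same everywhere
sh -c 'printf "%s\n" "$d"'    # child shell sees the full value: b c
```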
It can get a lot worse than that. See for instance that recent discussion about bash when arrays are involved.
(IMO, it was a mistake to introduce that feature).
| Where is "export var=value" not available? |
1,405,814,980,000 |
Of course, the standard way of testing if a file is empty is with test -s FILE, but one of our clients have received a script containing tests like this:
RETVAL=`ls -s ./log/cr_trig.log | awk '{print $1}'`
if test $RETVAL -ne 0
then
echo "Badness: Log not empty"
exit 25
fi
with claims from the supplier that it works in the two environments that they have tested it in. Needless to say, it failed badly in both of the places where I tested it.
So, I got curious. When does ls -s print 0 for empty files?
This is my findings so far:
GFS on Linux: 4
ext4 on Linux: 0
ZFS on Solaris: 1
UFS on Solaris: 0
jfs on AIX : 0
VxFS on HP-UX: 0
HFS on HP-UX: 0
HFS on Mac OS X: 0
I haven't examined networked file systems yet.
Question: How can I elegantly explain to the others that their scripts are wrong ?
In my opinion, the "correct" version would be:
if test -s ./log/cr_trig.log
then
echo "Badness: Log not empty"
exit 25
fi
|
Very interesting finding. Although I've never used ls -s to check whether a file is empty or not, I would have assumed that it reports 0 for empty files, too.
To your question: As Mat already commented, show them your test results. To explain the results to them, state that ls -s reports the number of blocks allocated in the filesystem, not the actual size in bytes. Obviously some filesystem implementations allocate blocks even if they don't have to store any data, instead of storing just a NULL pointer in the inode.
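A quick sketch of the difference (block counts are filesystem-dependent, which is exactly the problem; test -s asks about the byte size instead):

```shell
touch empty.log
ls -s empty.log               # block count: 0 on ext4/UFS, but 1 on ZFS, 4 on GFS
test -s empty.log || echo "file is empty"   # portable: true only if size > 0
```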
The explanation for this may be performance-related. Creating empty files that will stay empty is an exception in normal processing (the most common usage I've seen would be the creation of status files, where the existence of a file represents a certain state of the software).
But normally a file created will get some data soon, so the designers of a certain FS may have assumed that it pays off to immediately allocate a data block upon file creation, so when the first data arrives this task is already done.
A second reason could be that the file contained data in the past which has been erased. Instead of freeing the last data block, it may be worthwhile to keep that data block for reuse by the same file.
EDIT:
One more reason came to mind: the filesystems where you found values > 0 are ZFS, the RAID+LVM+FS implementation, and GFS, a cluster filesystem. Both may have to store metadata to maintain file integrity that is not stored in inodes. It could be that ls -s counts the data blocks allocated for this metadata.
| When does `ls -s` print "0" |
1,405,814,980,000 |
RedHat and CentOS are binary compatible. So everything that works on the one will most probably work on the other (same RPMs, same libs, same versions, same dependencies)...
Does the same hold true when comparing Ubuntu LTS with Debian? When trying to build up a mirror for Ubuntu LTS I noticed that the packages were coming from a Debian repository...
Will everything work the same in the same sense it does with RH/CO, or is this a day/night difference (like OpenSuSE compared with SLES)?
|
Ubuntu is derived from Sid, the unstable, rolling-release version of Debian; every Ubuntu major release is essentially Sid frozen at a certain point in time and enriched with everything that transforms Debian into an Ubuntu distribution.
The answer to your question is no.
Some libraries are also placed in directories with different naming conventions. The Ubuntu kernel is not even close to the vanilla flavour and is full of patches.
| Is Ubuntu LTS binary compatible with Debian? |
1,405,814,980,000 |
This question answers why Linux can't run OSX apps, but is there some application similar to Wine that allows one to do so?
|
Since wine is a re-implementation of the Windows API - you're looking for a re-implementation of the Macintosh API or various "kits" that Apple provides to let OSX apps link to the system frameworks. I don't know of any that fit the bill. The only thing even close is the Chameleon Project, which brings UIKit from iOS to Mac OS X.
Since I don't have a real library for you, Lion is allowed to be virtualized on Mac hardware. Perhaps that would work for your needs while you wait for a lighter implementation like wine?
There are about a hundred hits on Google for "how to run lion in vmware", and all basically point to the check for a server plist file that the installer wants to see before it will proceed. Here is one that's fairly clear on the steps.
| Is there something like wine to run OSX apps on linux? |
1,405,814,980,000 |
I'm curious about the file or symlink /etc/mtab. I believe this is a legacy mechanism. On every modern linux I've used this is a symbolic link to /proc/mounts and if mtab were to be a regular file on a "normal" file system /etc there would be challenges in making software work with mount namespaces.
For a long time I'd presumed that one of two things were true. Either:
We're waiting for software referencing /etc/mtab to age out or be updated
Other non-Linux OSes still use the same file name and the link is there for cross-platform compliance
However both of these seem shaky ideas. I can't find good reference to any modern OS keeping the same file name outside Linux. And it seems to have lived for much too long to be simply a backward compatibility issue; far more significant changes seem to have come and gone in that same time.
So I'm left wondering if /etc/mtab is really just there for historic reasons. Is it in any way officially deprecated? Is there any solid modern reason [as of 2023] to keep it?
I don't want to delete it from my system, but as a software developer I'd like to understand its usefulness and whether to avoid it.
|
Should the use of /etc/mtab now be considered deprecated?
Depends on who you ask. If you ask the authors of mount on Linux, yes; since 2018 it says
… is completely disabled in compile time by default, because on current Linux systems it is better …
I think that's pretty strong a statement. Prior to that, /etc/mtab was "also supported", but it was considered better to not use it:
This real mtab file is still supported, but on current Linux systems it is
better to make it a symlink to…
That sentence was there since 2014. Before that, it was only recommended:
The mtab file is still supported, but it's recommended to use a symlink to …
In other words, yeah. This has been deprecated for nearly a decade. You shouldn't rely on it. Ignore.
The source of truth is /proc/mounts, if anything. (Listing mounts correctly, uniquely and unambiguously is a logically non-trivial problem, considering that Linux mount namespaces exist.)
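On a typical modern Linux box this is easy to see for yourself (a sketch; the exact symlink target may vary by distribution):

```shell
ls -l /etc/mtab           # usually a symlink, e.g. /etc/mtab -> ../proc/self/mounts
head -n 3 /proc/self/mounts   # the kernel's per-namespace mount table: the source of truth
```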
| Should the use of /etc/mtab now be considered deprecated? |
1,405,814,980,000 |
Wikipedia says that dash executes faster than bash. My question is, if I set /bin/sh to dash, will all scripts that use /bin/sh in their shebang line that was intended for bash work under dash?
|
No, not all scripts intended for bash work with dash. A number of bashisms will not work in dash, such as C-style for loops and the double-bracket comparison operators. If you have a set of bash scripts that you want to use with dash, you may consider using checkbashisms. This tool will check your script for bash-only features that aren't likely to work in dash.
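For example, bash's pattern-substitution expansion is one such bashism (a sketch; assumes bash is installed — the second line is expected to fail whether dash rejects the syntax or is simply absent):

```shell
bash -c 'v=hello; echo "${v//l/L}"'                     # bash prints: heLLo
dash -c 'v=hello; echo "${v//l/L}"' 2>/dev/null \
  || echo 'dash: ${v//l/L} not supported'               # dash rejects the expansion
```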
| dash compatibility to bash |
1,405,814,980,000 |
The C locale is defined to use the ASCII charset and POSIX does not provide a way to use a charset without changing the locale as well.
What would happen if the encoding of C were switched to UTF-8 instead?
The positive side would be that UTF-8 would become the default charset for any process, even system daemons. Obviously there would be applications that would break because they assume that C uses 7-bit ASCII. But do these applications really exist? Right now a lot of written code is locale- and charset-aware to a certain extent, I would be surprised to see code that can only deal with 7-bit clean input and cannot be easily adapted to accept a UTF-8-enabled C.
|
The C locale is not the default locale. It is a locale that is guaranteed not to cause any “surprising” behavior. A number of commands have output of a guaranteed form (e.g. ps or df headers, date format) in the C or POSIX locale. For encodings (LC_CTYPE), it is guaranteed that [:alpha:] only contains the ASCII letters, and so on. If the C locale were modified, this would cause many applications to misbehave. For example, they might reject input that is invalid UTF-8 instead of treating it as binary data.
If you want all programs on your system to use UTF-8, set the default locale to UTF-8. All programs that manipulate a single encoding, that is. Some programs only manipulate byte streams and don't care about encodings. Some programs manipulate multiple encodings and don't care about the locale (for example, a web server or web client sets or reads the encoding for each connection in a header).
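A quick illustration of the [:alpha:] guarantee (a sketch; é is two bytes in UTF-8, neither of which is an ASCII letter, so the C locale finds nothing alphabetic):

```shell
printf 'é\n' | LC_ALL=C grep -c '[[:alpha:]]'   # prints 0: no ASCII letters in the input
# under an available UTF-8 locale (e.g. LC_ALL=en_US.UTF-8) the same command would print 1
```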
| What would break if the C locale was UTF-8 instead of ASCII? |
1,405,814,980,000 |
grep and sed are both described as using "basic regex" ("BRE") by default. BRE is well described here.
But consider this output:
# echo ' aaaaa ' | grep '\(aaaaa\|bbbbb\)'
aaaaa
# echo ' aaaaa ' | sed '/\(aaaaa\|bbbbb\)/ s/ /_/g'
aaaaa
In the first command, the \( ... \| ... \) syntax clearly acted as (X OR Y), since the output passed grep.
In the second command, the \( ... \| ... \) syntax clearly didn't act as (X OR Y), because the spaces weren't changed to underscores.
(By contrast, both commands recognise \+ as "one or more repetitions")
What's gone on? Why do there seem to be two flavours of BRE in FreeBSD, one of which recognises syntax that the other doesn't?
The deeper question is, many projects look at BRE to provide portability to other unix-like systems. But this suggests that even BREs aren't likely to be the same across platforms, if they can't even be the same within individual platforms. Argh?
|
The description in the linked article is wrong.
The actual POSIX definition states that:
The interpretation of an ordinary character preceded by an unescaped <backslash> ( '\' ) is undefined, except for [(){}, digits and inside a bracket expression]
And ordinary characters are defined as any except the BRE special characters .[^$* and the backslash itself.
So, unlike that page claims, the \+ is undefined in BRE, and so is \|.
Some regex implementations define them as the same as ERE + and | though, particularly the GNU ones. But you shouldn't count on that, stick to the defined features instead.
The problem here, of course is that the ERE alternation operator | doesn't exist at all in BRE, and the equivalent to ERE + is hideously ugly (it's \{1,\}). So you probably want to use ERE instead.
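A sketch of the ERE route: both GNU and BSD tools accept -E (for sed, -E was long a common extension before being standardized in newer editions of POSIX):

```shell
echo ' aaaaa ' | grep -E '(aaaaa|bbbbb)'             # plain ERE alternation, no backslashes
echo ' aaaaa ' | sed -E '/(aaaaa|bbbbb)/ s/ /_/g'    # prints: _aaaaa_
```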
| Does FreeBSD contain multiple variants of basic regex? |
1,405,814,980,000 |
So, GNU awk has some extensions that are missing in the macOS awk.
I want to be sure that my awk program also runs on the macOS awk (which I don't have access to).
Now GNU awk has two different compatibility flags and I'm not sure which to use: --traditional and --posix.
The latter is more strict. Does --traditional suffice to achieve compatibility with the macOS awk?
|
No because
MacOS implements features that are part of POSIX but weren't part of BWK awk (which gawk --traditional is intended to be compatible with) such as RE intervals so some language constructs don't mean the same across the 2 variants despite being valid in both.
MacOS awk has bugs that aren't present in GNU awk so a working gawk script could fail on MacOS no matter what options you give it.
Both awks can/do implement functionality that's undefined by POSIX or the "traditional" awk spec however they like.
So --posix will be closer to what you want than --traditional but still has differences with MacOS, and neither option nor any other option does what you want - guarantee a gawk script will run the same in MacOS awk.
For example, with gawk (which does not support RE intervals like {2} in traditional mode but does in posix mode):
$ awk --version | head -1
GNU Awk 5.0.1, API: 2.0
$ echo 'ab{2}c' | awk --traditional '/b{2}/'
ab{2}c
$ echo 'ab{2}c' | awk --posix '/b{2}/'
$
$ echo 'ab{2}c' | awk --traditional '/b\{2\}/'
awk: cmd. line:1: warning: regexp escape sequence `\{' is not a known regexp operator
awk: cmd. line:1: warning: regexp escape sequence `\}' is not a known regexp operator
ab{2}c
whereas with MacOS which does support RE intervals:
$ awk --version | head -1
awk version 20200816
$ echo 'ab{2}c' | awk '/b{2}/'
$
$ echo 'ab{2}c' | awk '/b\{2\}/'
ab{2}c
For example, with gawk:
$ awk 'BEGIN{print 1 == 2 ? 3 : 4}'
4
$ awk --traditional 'BEGIN{print 1 == 2 ? 3 : 4}'
4
$ awk --posix 'BEGIN{print 1 == 2 ? 3 : 4}'
4
whereas with MacOS:
$ awk 'BEGIN{print 1 == 2 ? 3 : 4}'
awk: syntax error at source line 1
context is
BEGIN{print 1 >>> == <<<
awk: illegal statement at source line 1
awk: illegal statement at source line 1
See https://unix.stackexchange.com/a/588743/133219 for more info on that specific error.
Another difference is how a directory given as a file name is handled:
$ mkdir foo
$ echo 7 > bar
with GNU awk:
$ awk '{print FILENAME, $0}' foo bar
awk: warning: command line argument `foo' is a directory: skipped
bar 7
$ awk --traditional '{print FILENAME, $0}' foo bar
awk: fatal: cannot open file `foo' for reading (Is a directory)
$ awk --posix '{print FILENAME, $0}' foo bar
awk: fatal: cannot open file `foo' for reading (Is a directory)
and MacOS awk:
$ awk '{print FILENAME, $0}' foo bar
bar 7
| GNU awk --traditional vs --posix |
1,405,814,980,000 |
I occasionally do work on an older Solaris machine whose default version of grep is non-POSIX-compliant. This causes problems in my rc files because the default grep on the machine doesn't support the options I need.
This is a machine at my place of work, and I'm not an admin; so I can't just install newer/better versions of commands as I see fit. However, I notice that the machine does have a suitable XPG version of grep at /usr/xpg4/bin/grep.
Obviously, I can solve the problem (for Solaris) in my rc files with:
alias grep='/usr/xpg4/bin/grep'
But what about machines where this isn't necessary? My goal is to have a single rc file for each shell that I can drop into any Unix-like system and have it just work.
This got me thinking...
Is there ever a case where I wouldn't want to use the XPG version of a command?
If so, when?
Couldn't I just blindly add /usr/xpg4/bin/ to the beginning of $PATH in my rc files on all machines and forgo aliasing individual commands to their XPG* versions?
Or will this cause problems for some commands?
Is it the case that /usr/xpg4/bin/ exists only on machines where it is "necessary"?
I ask because I notice that /usr/xpg4/bin/ doesn't exist on my Ubuntu machine.
So to sum up, is this a good idea?
if [ -d "/usr/xpg4/bin" ]; then
#Place XPG directory at beginning of path to always use XPG version of commands
export PATH="/usr/xpg4/bin:$PATH"
fi
If not, why not?
|
Several commercial Unix systems have backward-compatible utilities in /bin and /usr/bin, and a directory such as /usr/xpg4/bin that contain POSIX-compliant utilities. That way, old applications can stick to the old PATH with just /bin and /usr/bin, and newer applications use a PATH with the POSIX utilities first. Unless you need backward compatibility with the 1980s utilities from that Unix system, you're better served by using the POSIX utilities.
In my .profile, I put the following directories ahead of /bin and /usr/bin, if they exist:
/bin/posix
/usr/bin/posix
/usr/xpg6/bin
/usr/xpg4/bin
/usr/xpg2/bin
I think this covers at least Solaris, Tru64 (aka Digital Unix aka OSF/1) and HP-UX.
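That profile logic can be sketched portably as follows (directory names as above; iterating in reverse priority order means the last-prepended directory, /bin/posix, ends up first in PATH):

```shell
# prepend each POSIX-utility directory that exists on this system
for d in /usr/xpg2/bin /usr/xpg4/bin /usr/xpg6/bin /usr/bin/posix /bin/posix; do
  if [ -d "$d" ]; then PATH="$d:$PATH"; fi
done
export PATH
```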
Rather than determine the PATH automatically, you should be able to find a suitable PATH for POSIX-relying applications by calling the getconf utility:
PATH=$(getconf PATH)
Some systems such as *BSD and Linux only ship POSIX-compliant utilities, so they're in the usual directories (/bin, /usr/bin) and there's no need for any separate directory.
| When to use XPG* version of a command? |
1,405,814,980,000 |
This works perfectly well on any Linux :
$ echo foo bar | sed -n '/foo/{/bar/{;p;}}'
foo bar
But fails on OSXs ancient BSD variant :
❯ echo foo bar | sed -n '/foo/{/bar/{;p;}}'
sed: 1: "/foo/{/bar/{;p;}}": extra characters at the end of } command
Am I missing some magical incantation?
Is there a way to write this in a portable manner ?
I'd hate to have to revert to a pipeline of grep | grep | grep commands.
Update: I have low rep here so I can't upvote, but thanks to all repliers for your well-considered advice.
|
A sed editing command should be terminated by ; or a literal newline. GNU sed is very forgiving about this.
Your script:
/foo/{/bar/{;p;}}
Expanded:
/foo/{
/bar/{
p
}
}
This would work as a sed script fed to sed through -f.
If we make sure to replace newlines with ; (only needed at the end of commands and {...} groups of commands) so that we can use it on the command line, we get
/foo/{/bar/{p;};}
This works with OpenBSD sed (the original did not, due to that second ; missing).
In this particular case, this may be further simplified to
/foo/{/bar/p;}
| BSD sed vs GNU - is it capable of nested matches? |
1,405,814,980,000 |
Most software that runs on Linux can run on FreeBSD using an optional built-in compatibility layer.
AIX is based on UNIX System V with BSD-compatible extensions. Is there a Linux compatibility layer in IBM AIX?
|
If you're thinking about running Linux binaries directly on AIX, then no, there is no such feature (even if you can find binaries for the Power architecture for the Linux software you're trying to use).
IBM does provide something called the AIX Toolbox for Linux Applications which should help porting software developed for Linux to AIX. It is a collection of tools and libraries usually found on Linux including GCC, Gnome and KDE, and a bunch of libraries and tools (gawk, bash, ncurses, rsync, lsof, ...). But you'll have to recompile:
Because Linux and AIX use different Application Binary Interfaces (ABIs) (like
Linux on different hardware platforms uses different ABIs), there is in general no
binary compatibility when changing operating systems or hardware architectures.
For example:
Linux applications that have been compiled under Linux on hardware other
than IBM pSeries or IBM iSeries can in general not run under Linux for
pSeries without recompilation.
Linux applications that have been compiled under Linux for pSeries cannot
run under AIX, including the AIX Toolbox for Linux Applications.
Linux applications that have been compiled under AIX using the AIX Toolbox
for Linux Applications cannot run under Linux for pSeries.
This is from the Linux Applications on pSeries IBM Redbook (PDF link, 4.7M), which describes the toolkit and has some porting notes and a chapter on running native Linux in pSeries hardware.
| Linux compatibility layer for IBM AIX |
1,405,814,980,000 |
Summary
Is xargs -I s printf s more compatible than xargs -n 1 printf?
Background
To handle binary data that may include 0x00.
I know how to convert binary data to text, like this:
# make sure that you have done this: export LC_ALL=C
od -A n -t x1 -v | # or -t o1 or -t u1 or whatever
tr ABCDEF abcdef | # because POSIX doesn't specify in which case
tr -d ' \t\n' | # because POSIX says they are delimiters
fold -w 2 |
grep . # to make sure to terminate final line with LF
... and here is how to convert back to binary:
# input: for each line, /^[0-9a-f]\{2\}$/
# also make sure export LC_ALL=C before
awk -v _maxlen="$(getconf ARG_MAX 2>/dev/null)" '
BEGIN{
# (1) make a table
# assume that every non-null byte can be converted easily
# actually not portable in Termux; LC_ALL=C does not work and
# awk is gawk by default, which depends on locale.
# to deal with it, here is alternative:
# for(i=0;i<256;i++){
# xc[sprintf("%02x",i)]=sprintf("\\\\%03o",i);
# xl[sprintf("%02x",i)]=5;
# }
# # and skip to (2)
# but why not just env -i awk to force one true awk, if so.
# also is not it pretty rare that C locale is not available?
for(i=1;i<256;i++){
xc[sprintf("%02x",i)]=sprintf("%c",i);
xl[sprintf("%02x",i)]=1;
}
# now for chars that requires special converting.
# numbers; for previous char is \\ooo.
for(i=48;i<58;i++){
xc[sprintf("%02x",i)]=sprintf("\\\\%03o",i);
xl[sprintf("%02x",i)]=5;
}
# and what cannot be easily passed to xargs -n 1 printf
# null
xc["00"]="\\\\000"; xl["00"]=5;
# <space>
xc["09"]="\\\\t"; xl["09"]=3;
xc["0a"]="\\\\n"; xl["0a"]=3;
xc["0b"]="\\\\v"; xl["0b"]=3;
xc["0c"]="\\\\f"; xl["0c"]=3;
xc["0d"]="\\\\r"; xl["0d"]=3;
xc["20"]="\\\\040"; xl["20"]=5;
# meta chars for printf
xc["25"]="%%"; xl["25"]=2;
xc["5c"]="\\\\\\\\";xl["5c"]=4;
# hyphen; to prevent to be treated as if it were an option
xc["2d"]="\\\\055"; xl["2d"]=5;
# chars for quotation
xc["22"]="\\\""; xl["22"]=2;
xc["27"]="\\'\''"; xl["27"]=2;
# (2) preparation
# reason why 4096: _POSIX_ARG_MAX
# reason why length("printf "): because of ARG_MAX
# reason why 4096/2 and _maxlen/2: because some xargs such as GNU specifies buffer length less than ARG_MAX
if(_maxlen==""){
maxlen=(4096/2)-length("printf ");
}else{
maxlen=int(_maxlen/2)-length("printf ");
}
ORS=""; LF=sprintf("\n");
arglen=0;
}
{
# (3) actual conversion here.
# XXX. not sure why arglen+4>maxlen.
# but I think maximum value for xl[$0] is 5.
# and maybe final LF is 1.
if(arglen+4>maxlen){
print LF;
arglen=0;
}
print xc[$0];
arglen+=xl[$0];
}
END{
# for some xargs who hates input w/o LF termination
if(NR>0)print LF;
}
' |
xargs -n 1 printf
I found an issue for null input: in GNU/Linux, it fails, like this:
$ xargs -n 1 printf </dev/null
printf: missing operand
Try 'printf --help' for more information.
Then I found three alternatives: xargs -n 1 printf 2>/dev/null || :; adding if(NR==0)printf"\"\"\n"; in the END block; and xargs -I s printf s.
I have seen only the first one actually used, in ShellShoccar-jpn's programs, but I think it's kind of heavy-handed.
The second one is also less clean than the last one.
Can the third one also be an alternative not only on GNU/Linux, but in every other (or most other) environments?
Since I have only GNU/Linux, I have no idea how to validate my idea in other environments.
The easiest way would be to obtain their sources and study them, or refer to their manuals.
If it is impossible to validate at all, then I will have to give up.
My knowledge
It seems that printf requires at least one argument, as POSIX says so.
Some xargs ignores input without LF termination; grep ^ | xargs something here is more portable than xargs something here for input that may not have LF termination.
xargs is not portable for input without non-blank lines; printf ' \n\n' | xargs echo foo outputs nothing on FreeBSD and foo on GNU/Linux. In this case, you have to make the command for xargs safe for such input or let the command ignore the error.
FreeBSD's xargs receives its arguments as if they were $@ while GNU/Linux's as if they were "$@".
Escaping by backslash works for xargs, like printf '\\\\\\'"'" | sed "$(printf 's/[\047\042\\]/\\\\&/g')" | xargs printf to obtain \' as output.
PS
I found out that xargs -E '' is more compatible than without the option, as some xargs defaults -E _.
|
xargs is probably the worst POSIX utility when it comes to portability (and interface design). I would stay away from it. How about:
<file.hex awk -v q="'" -v ORS= '
BEGIN{
for (i=0; i<256; i++) c[sprintf("%02x", i)] = sprintf("\\%o", i)
}
NR % 50 == 1 {print sep"printf "q; sep = q"\n"}
{print c[$0]}
END {if (sep) print q"\n"}
' | sh
instead for instance?
The awk part outputs something like:
printf '\61\12\62\12\63\12\64\12\65\12\66\12\67\12\70\12\71\12\61\60\12\61\61\12\61\62\12\61\63\12\61\64\12\61\65\12\61\66\12\61\67\12\61\70\12\61\71\12\62\60'
printf '\12\62\61\12\62\62\12\62\63\12\62\64\12\62\65\12\62\66\12\62\67\12\62\70\12\62\71\12\63\60\12\63\61\12\63\62\12\63\63\12\63\64\12\63\65\12\63\66\12\63'
printf '\67\12\63\70\12\63\71\12\64\60\12\64\61\12\64\62\12\64\63\12\64\64\12\64\65\12\64\66\12\64\67\12\64\70\12\64\71\12\65\60\12\65\61\12\65\62\12\65\63\12'
printf '\65\64\12\65\65\12\65\66\12\65\67\12\65\70\12\65\71\12\66\60\12\66\61\12\66\62\12\66\63\12\66\64\12\66\65\12\66\66\12\66\67\12\66\70\12\66\71\12\67\60'
printf '\12\67\61\12\67\62\12\67\63\12\67\64\12\67\65\12\67\66\12\67\67\12\67\70\12\67\71\12\70\60\12\70\61\12\70\62\12\70\63\12\70\64\12\70\65\12\70\66\12\70'
printf '\67\12\70\70\12\70\71\12\71\60\12\71\61\12\71\62\12\71\63\12\71\64\12\71\65\12\71\66\12\71\67\12\71\70\12\71\71\12\61\60\60\12'
For sh to interpret. In sh implementations where printf is builtin, that would not fork extra processes. In those where it's not, those lines should be more than short enough to avoid the ARG_MAX limit but still not run more than one printf for every 50 bytes.
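As a sanity check, the whole encode/decode roundtrip can be exercised end to end (a sketch assuming LC_ALL=C and standard od/fold/cmp utilities; file names are arbitrary):

```shell
export LC_ALL=C
seq 100 > orig.bin                                  # sample input to encode
<orig.bin od -A n -t x1 -v | tr 'ABCDEF' 'abcdef' |
  tr -d ' \t\n' | fold -w 2 | grep . > file.hex     # one hex pair per line
<file.hex awk -v q="'" -v ORS= '
  BEGIN{for (i=0; i<256; i++) c[sprintf("%02x", i)] = sprintf("\\%o", i)}
  NR % 50 == 1 {print sep"printf "q; sep = q"\n"}
  {print c[$0]}
  END {if (sep) print q"\n"}
' | sh > out.bin                                    # decode via generated printf commands
cmp orig.bin out.bin && echo roundtrip-ok
```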
Note that you can't really determine the max length of a command line based on the value of ARG_MAX alone. How that limit is reached and handled depends greatly on the system and version thereof. On many, ARG_MAX is a limit on a cumulative size including the argv[] and envp[] lists of pointers (so typically 8 bytes per argument/envvar on a 64-bit system) plus the size of each arg/env string in bytes (including the NUL delimiter). Linux also has an independent limit on the size of a single argument.
Also note that replacing \12 with \n for instance is only valid on ASCII-based systems. POSIX doesn't specify the encoding of characters (other than NUL). There are still POSIX systems using some variants of EBCDIC instead of ASCII.
| Is "xargs -I s printf s" more compatible than "xargs -n 1 printf"? |
1,405,814,980,000 |
I want to install the Systemd-free Artix Linux, but noticed on DistroWatch that it misses many packages. Being an Arch based distribution, is it possible to install packages directly from Arch repository? May the fact that the two use two different init systems (OpenRC vs Systemd) be a problem?
|
I am not familiar with Artix; however it says on their Wikipedia page:
Artix Linux has its own repos but most packages without systemd
dependencies from Arch Linux repos and the Arch User Repository (AUR)
can also be used.
For packages that do rely on systemd, they would need to provide replacement packages that work with OpenRC (or whichever alternative init system you want to use). Note that this applies mostly to daemon packages, which run in the background, provide a service, and need to be started at boot by the init system.
Personally, I use Parabola, which is a libre distro that is also based on Arch. They support OpenRC and have a [nonsystemd] repository that contains replacement packages that work with OpenRC.
If it were me, I would post on the Artix forum, to see if they have something similar and verify if they have versions of the packages you want that work with OpenRC.
| Can Systemd-free Artix Linux install packages from parent Arch Linux? |
1,405,814,980,000 |
I was pondering getting a Google Chromecast or Cubetek Ezcast the other day, mostly for its novelty and maybe using it as a media player or device to conduct presentations.
The way they're set up, with the whole DIAL technology, seems a little weird to me, but that may just be because I've never seen it in action. I know however, that there are issues with some routers. This is a concern, as I have OpenWrt routers, only.
I know the control app runs on Android, but what about my Linux laptop and/or desktop/workstation? I know Chrome uses a Chrome app, but is it gonna work if the client doesn't have a WLAN radio, but is otherwise connected to a wireless network access point (bridging both networks, mind you)? And what about OpenWrt, is it gonna work out of the box?
Now, there are at least two ways how I'd like to use the stick:
As a media player - pretty much in stand-alone mode, playing streams off the internet
As a presentation dongle - so I can "take control" of any TV or projector featuring an HDMI port and use it for conducting a presentation, showing a video, etc.
Are my use cases even covered by that? The idea is not to attach it permanently to my home TV, but instead have it with me on the go. The form factor almost implies mobility, even though attaching power to it seems a bit weird.
Both use cases should be covered by using my (Android) tablet and Linux laptop.
In case I'm investigating the totally wrong cave, please suggest alternatives.
|
If the Cubetek Ezcast is like the Tronsmart EZcast, it probably has its own proprietary extension to uPnP protocol, which makes it near-unusable with anything but the EZcast software. I started digging into how to use it with Linux, but it didn't seem possible at the time.
It's probably just a matter of time before someone reimplements the EZcast protocol, though.
| Chromecast / Cubetek Ezcast with Linux computer? |
1,405,814,980,000 |
I was using the following command on my previous dedicated server with the same version of the FreeBSD installation:
# uname -a
FreeBSD 9.2-RELEASE FreeBSD 9.2-RELEASE #0 r255898: Thu Sep 26 22:50:31 UTC 2013 [email protected]:/usr/obj/usr/src/sys/GENERIC amd64
The command is following:
netstat -ntu -f inet
Output:
netstat: illegal option -- t
Why doesn't it work anymore? I don't have access to my previous dedicated server, so I can't use the man page to check the differences.
|
Up to FreeBSD 8.x (at least as of 8.4-RELEASE) it was possible to use the -t option with netstat -i/-I (show the state of all network interfaces/a specific interface).
From FreeBSD 8.4-RELEASE netstat man:
If -t is also present, show the contents of watchdog timers.
This indeed had disappeared from FreeBSD 9.x (see FreeBSD 9.2-RELEASE netstat man).
We can only conclude that it is no longer possible to check the value of these timers through netstat (if these timers even have meaning in the 9.x releases).
By the way, -t had no meaning in combination with -n, so I guess it did not report an error on 8.x only because the syntax checker was a bit too permissive; it was adding nothing to your netstat output.
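In practice, then, the fix on the 9.x box is simply to drop the flag. A small sketch of the before/after commands (stored and printed rather than executed, so the example stays runnable anywhere; the exact flag semantics may differ per FreeBSD release):

```shell
#!/bin/sh
# Hedged sketch of the workaround on FreeBSD 9.x: remove -t, keep the rest.
old_cmd="netstat -ntu -f inet"   # fails on 9.x: illegal option -- t
new_cmd="netstat -nu -f inet"    # the same command with -t removed
iface_cmd="netstat -i -f inet"   # interface state, where -t applied on 8.x
printf '%s\n%s\n' "$new_cmd" "$iface_cmd"
```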
| netstat command doesn't work anymore on the new dedicated server |
1,405,814,980,000 |
I have some software that, among other things, needs to:
Assess a file's rwxrwxrwx permissions;
Work under every possible flavor of Unix and Linux you can find in the wild.
Currently, it does that running the ls -l command. If the file is a symlink, I have to get the permissions of the target file. The -L switch works nicely for that.
The question: Are there flavors of Unix in which I run the risk of that switch not being available? If so, which ones? (If they're rare enough and old enough, I might just be able to ignore the problem.)
|
Well, POSIX specifies -L for ls, and it's in the GNU, FreeBSD, OpenBSD, and general BSD implementations, so I'd say it's pretty likely to be available anywhere.
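As a quick illustration (the temporary file names are made up for the example), here is the difference -L makes on a symlink:

```shell
#!/bin/sh
# Minimal sketch: compare ls with and without -L on a symlink.
tmpdir=$(mktemp -d)
touch "$tmpdir/real"
chmod 644 "$tmpdir/real"
ln -s "$tmpdir/real" "$tmpdir/link"

# Without -L, ls -ld reports the link itself (mode starts with 'l');
# with -L, it reports the permissions of the target file.
link_mode=$(ls -ld "$tmpdir/link" | cut -c1-10)
target_mode=$(ls -ldL "$tmpdir/link" | cut -c1-10)
printf '%s\n%s\n' "$link_mode" "$target_mode"

rm -rf "$tmpdir"
```

The first ten characters of the long listing are the type and rwxrwxrwx bits, which is exactly the field your software needs to assess.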
| How universal is the -L (dereference symlink) switch of the 'ls' command? |