date | question_description | accepted_answer | question_title
|---|---|---|---|
1,400,384,865,000 |
Added support for detecting duplicate IPv4 addresses, with a timeout
configurable through the ipv4.dad-timeout connection property.
-- NEWS
dad-timeout / int32 / -1
Timeout in milliseconds used to check for the presence of duplicate IP addresses on the network. If an address conflict is detected, the activation will fail. A zero value means that no duplicate address detection is performed, -1 means the default value (either configuration ipvx.dad-timeout override or zero). A value greater than zero is a timeout in milliseconds. The property is currently implemented only for IPv4.
-- IPv4 Settings
Quote taken from the manual for NetworkManager, as of version 1.14.4.
I failed to find ipvx.dad-timeout documented anywhere. It is described as an "override", not a default. So it sounds more likely that ipvx.dad-timeout is not set by default. In other words, the default is not to enable IPv4 Duplicate Address Detection. Is that right?
|
These properties are generally part of the connection profile. However, some of these properties have a special value that indicates the "default" value. For those, the default value may be configured in NetworkManager.conf. Consequently, that's documented in man NetworkManager.conf -- as opposed to man nm-settings.
But note that the default value in NetworkManager.conf only matters if you don't specify an explicit value in the profile itself. The profile's value has precedence.
"-1 means the default value (either configuration ipvx.dad-timeout override or zero)" means that -1 is the default value for this property in the profile. This allows fallback to a configured default value in NetworkManager.conf, and if it is still unspecified there, the final value of 0 (disabled) is used.
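For example, this is how the two knobs fit together (the connection name "Wired connection 1" and the 200 ms value are just placeholders):

```shell
# Per-profile value; takes precedence over any configured default:
nmcli connection modify "Wired connection 1" ipv4.dad-timeout 200

# Global default, consulted when the profile's value is left at -1.
# In /etc/NetworkManager/NetworkManager.conf:
#   [connection]
#   ipv4.dad-timeout=200
```

If neither is set, the profile's -1 falls through to 0, i.e. DAD disabled.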
| Do the default settings of NetworkManager detect if there is a conflicting IPv4 address on the network? |
1,400,384,865,000 |
I have a Fedora Core machine, and it is picking up an IPV6 address, but not an IPV4 address. I have run dhclient -r eth0, but no IPV4 address has come. I have reset the port using ifconfig, but the address keeps coming up as an IPV6 address.
|
It's possible that the lightning strike destroyed the network port on your machine, and that the IPv6 address is a self-assigned address. Have you tried using the 'ping6' command to try pinging an outside IP address via IPv6 to really ensure that you've got IPv6 connectivity to the outside world?
Also, I concur with mattdm here -- FC4 is very old -- it's definitely worth looking at upgrading to a more modern version of Fedora.
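For instance (2001:4860:4860::8888 is just a well-known public IPv6 resolver, used here as an outside test target):

```shell
# Send three echo requests over IPv6 to an outside address:
ping6 -c 3 2001:4860:4860::8888
```

If this gets replies, you have real IPv6 connectivity and not merely a self-assigned address.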
| Fedora Core 4 won't get an IPV4 address |
1,400,384,865,000 |
After some research, I was wondering whether it is possible to define a TTL per interface, the way the hop limit can be defined per interface in IPv6.
To change the TTL in IPv4, I can change the file
/proc/sys/net/ipv4/ip_default_ttl
But this changes TTL for all interfaces.
However, in IPv6 you can set a different hop limit value for each interface:
/proc/sys/net/ipv6/conf/eth*/hop_limit
So am I missing something, or is there no way to configure a different TTL for each interface?
|
If that entry does not exist for IPv4, it's probably not supported.
But have you tried modifying TTL values with iptables? See if the TTL target helps.
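A sketch of what that could look like (interface names and TTL values are illustrative; the TTL target lives in the mangle table and needs the corresponding kernel/iptables support):

```shell
# Force a per-interface TTL on outgoing packets:
iptables -t mangle -A POSTROUTING -o eth0 -j TTL --ttl-set 65
iptables -t mangle -A POSTROUTING -o eth1 -j TTL --ttl-set 128
```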
| Define specific TTL for each interface |
1,400,384,865,000 |
At my place I have a router that gives me an IPV4 address. My Gentoo PC works
fine but my Gentoo laptop stopped resolving names. I visited my parents and
tried connecting to their router, which gives me an IPV6 address, and presto,
everything was back to normal (I added the IPV6 equivalents of my nameservers
to /etc/resolv.conf). Then I tried another router in my parents' network,
which sits behind the IPV6 one but which gives me an IPV4 address, and then
again I can't resolve names (though if I boot a Gentoo LiveCD I can, so it's a
configuration issue on my laptop and related to IPV4) (I re-added the IPV4
addresses back to /etc/resolv.conf, and I also tried leaving the IPV6 ones). I
can ping addresses just fine, I just can't resolve names. What could be
causing this and how could I fix it?
My resolv.conf (IPV4, used to work in the past):
# dnsmasq
nameserver 127.0.0.1
# OpenNIC
nameserver 31.171.155.107
nameserver 79.133.43.124
IPV6 (working in the IPV6 router):
# dnsmasq
nameserver ::1
# OpenNIC
nameserver 2a05:dfc7:5::53
nameserver 2001:19f0:7001:929:5400:00ff:fe30:50af
Notice that the IPV4 one used to work on the IPV4 routers; it stopped working
without me modifying it. Something else got modified and name resolving
stopped working. My entire /etc folder (except ssl, shadow, etc) can be found
here.
I flushed all iptables rules and set all policies to ACCEPT. No network related
services are running except dnsmasq (I tried disabling it and removing the
localhost line from resolv.conf, to no avail). I connect to the internet using
bare wpa_supplicant and ip, and it works fine; it's just the name resolving
which does not (not even when connecting though the wired interface using
Gentoo's rc scripts). Both iptables -L and ip6tables -L return:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Some tests and their results (all on an IPV4 router connected wirelessly,
though the results are the same with a wired connection):
$ nslookup google.com 8.8.8.8
;; connection timed out: no servers could be reached.
...
$ dig @8.8.8.8 gentoo.org
;; connection timed out: no servers could be reached.
tcpdump registers nothing when trying to ping a hostname such as google.com,
but successfully registers ICMP echoes when pinging an IP address:
$ ping gentoo.org
ping: unknown host gentoo.org
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=48 time=74.0 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=48 time=73.7 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=48 time=73.7 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 73.708/73.810/74.012/0.344 ms
tcpdump:
$ tcpdump -i wlp3s0 port 53
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wlp3s0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
$ tcpdump -i wlp3s0
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wlp3s0, link-type EN10MB (Ethernet), capture size 262144 bytes
07:05:55.807483 IP 192.168.25.11 > 8.8.8.8: ICMP echo request, id 5903, seq 1, length 64
07:05:55.881446 IP 8.8.8.8 > 192.168.25.11: ICMP echo reply, id 5903, seq 1, length 64
07:05:56.808617 IP 192.168.25.11 > 8.8.8.8: ICMP echo request, id 5903, seq 2, length 64
07:05:56.882287 IP 8.8.8.8 > 192.168.25.11: ICMP echo reply, id 5903, seq 2, length 64
07:05:57.810421 IP 192.168.25.11 > 8.8.8.8: ICMP echo request, id 5903, seq 3, length 64
07:05:57.884089 IP 8.8.8.8 > 192.168.25.11: ICMP echo reply, id 5903, seq 3, length 64
^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel
ifconfig on the ipv4 router:
wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.25.11 netmask 255.255.255.0 broadcast 192.168.25.255
inet6 fe80::16ec:71f7:dcc5:f175 prefixlen 64 scopeid 0x20<link>
ether 00:07:c8:82:a2:96 txqueuelen 1000 (Ethernet)
RX packets 6 bytes 568 (568.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 24 bytes 4038 (3.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ip route:
$ ip route
default via 192.168.25.1 dev wlp3s0
169.254.0.0/16 dev wlp3s0 proto kernel scope link src 169.254.144.184 metric 304
192.168.25.0/24 dev wlp3s0 proto kernel scope link src 192.168.25.11
nsswitch:
# /etc/nsswitch.conf:
# $Header: /var/cvsroot/gentoo/src/patchsets/glibc/extra/etc/nsswitch.conf,v 1.1 2006/09/29 23:52:23 vapier Exp $
passwd: compat
shadow: compat
group: compat
# passwd: db files nis
# shadow: db files nis
# group: db files nis
hosts: files dns
networks: files dns
services: db files
protocols: db files
rpc: db files
ethers: db files
netmasks: files
netgroup: files
bootparams: files
automount: files
aliases: files
I also ran strace -e open dig @8.8.8.8 gentoo.org and the last thing it does
is open /etc/resolv.conf (successfully).
|
iptables -L gives an incomplete view; in particular, it shows neither the nat nor the mangle table, which can and do influence how packets flow. For complete debugging, these tables will also need to be inspected:
iptables -t nat -n -L
or the entire firewall ruleset dumped e.g. with
iptables-save
| Can not resolve names but can ping addresses when connected to IPV4 router, all works fine with IPV6 router |
1,400,384,865,000 |
I have IPv6 configured on my local network via radvd; it advertises a routable IPv6 block, from which all the machines auto-configure addresses.
I have IPv4 assigned to a NAT'd block via dhcpd and that updates named.
My problem is that when I set the AAAA record for a host for its IPv6 address (which doesn't change), named will then start rejecting name updates from dhcpd for the A record.
named reports the following error:
'name not in use' prerequisite not satisfied (YXDOMAIN)
dhcpd will report the following error:
Has an A record but no DHCID, not mine
How can I convince dhcpd to ignore the AAAA record when doing the named update?
|
I found an answer here http://www.gelato.unsw.edu.au/IA64wiki/IPv6DDNS
Essentially, dhcpd has a way to add hooks for events, so on an IPv4 registration it calls a script that generates the standard MAC-based IPv6 address and registers that.
UPDATE:
(I'm using ISC DHCP 4.1.)
When using the "on commit" hook, it removes the existing dynamic update, so you need to copy that into your "on commit" section; mine now looks like this:
on commit {
if (not static) {
# Setup IPv6 Address
set new-ddns-fwd-name = pick-first-value(ddns-hostname, host-decl-name);
if (exists host-name and option host-name ~~ "^[a-z0-9.-]+$") {
set new-ddns-fwd-name = option host-name;
} elsif (exists dhcp-client-identifier and option dhcp-client-identifier ~~ "^[a-z0-9.-]+$") {
set new-ddns-fwd-name = substring(option dhcp-client-identifier, 1, 50);
} elsif (new-ddns-fwd-name = "") {
set new-ddns-fwd-name = binary-to-ascii (16, 8, "-",
substring (hardware, 1, 6));
}
set ddns-fwd-name = new-ddns-fwd-name;
execute ("/opt/bin/ddns-ipv6", ddns-fwd-name, ucase(
binary-to-ascii(16, 8, ":", substring(hardware, 1, 6))),
binary-to-ascii(10, 8, ".", leased-address));
unset new-ddns-fwd-name;
switch (ns-update (not exists (IN, A, ddns-fwd-name, null),
add (IN, A, ddns-fwd-name, leased-address,
lease-time / 2))) {
default:
unset ddns-fwd-name;
break;
case NOERROR:
set ddns-rev-name =
concat (binary-to-ascii (10, 8, ".", reverse (1, leased-address)), ".",
pick (config-option server.ddns-rev-domainname,
"in-addr.arpa."));
switch (ns-update (delete (IN, PTR, ddns-rev-name, null),
add (IN, PTR, ddns-rev-name, ddns-fwd-name, lease-time / 2)))
{
default:
unset ddns-rev-name;
on release or expiry {
execute ("/opt/bin/ddns-ipv6", "-d", pick-first-value(ddns-hostname, host-decl-name));
switch (ns-update (delete (IN, A, ddns-fwd-name,
leased-address))) {
case NOERROR:
unset ddns-fwd-name;
break;
}
on release or expiry;
}
break;
case NOERROR:
on release or expiry {
execute ("/opt/bin/ddns-ipv6", "-d", pick-first-value(ddns-hostname, host-decl-name));
switch (ns-update (delete (IN, PTR, ddns-rev-name, null))) {
case NOERROR:
unset ddns-rev-name;
break;
}
switch (ns-update (delete (IN, A, ddns-fwd-name,
leased-address))) {
case NOERROR:
unset ddns-fwd-name;
break;
}
on release or expiry;
}
}
break;
}
}
}
| How to configure dhcpd to register ipv4 with bind while having static ipv6 addresses? |
1,495,372,992,000 |
I'm running Debian 8 on my server. Recently, the server started using only IPv6 for all outgoing TCP connections. It still accepts IPv4 for incoming connections, however.
Because of this, I can't access any web sites (port 80), make any SSH connections (port 22), or reach any other host via any outgoing TCP port from my server now.
I completely disabled all iptables rules via iptables -F followed by iptables -X, and the problem persists.
Here are some command outputs which might be pertinent:
% sudo ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether aa:bf:5c:77:b2:82 brd ff:ff:ff:ff:ff:ff
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether f2:3c:91:96:da:28 brd ff:ff:ff:ff:ff:ff
inet 45.33.123.70/24 brd 45.33.123.255 scope global eth0
valid_lft forever preferred_lft forever
inet 45.33.5.47/24 scope global eth0:1
valid_lft forever preferred_lft forever
inet 192.168.135.4/17 scope global eth0:2
valid_lft forever preferred_lft forever
inet6 2600:3c00::f03c:91ff:fe96:da28/64 scope global mngtmpaddr dynamic
valid_lft 87sec preferred_lft 27sec
inet6 fe80::f03c:91ff:fe96:da28/64 scope link
valid_lft forever preferred_lft forever
4: teql0: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 100
link/void
5: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
6: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
link/gre 0.0.0.0 brd 0.0.0.0
7: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1476 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1464 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
10: ip6_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN group default qlen 1000
link/tunnel6 :: brd ::
11: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
12: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
link/tunnel6 :: brd ::
13: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1000
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
% sudo route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 45.33.123.1 0.0.0.0 UG 0 0 0 eth0
45.33.5.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
45.33.123.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.128.0 0.0.0.0 255.255.128.0 U 0 0 0 eth0
I don't know what could have caused this change to IPv6-only on output. But in any case, I just want to go back to IPv4 for the default for all outgoing connections.
Thank you for any insights and suggestions.
|
I take the "nuke it from orbit" approach when it comes to IPv6.
Add ipv6.disable=1 to your kernel options in /etc/default/grub then run update-grub and reboot:
GRUB_CMDLINE_LINUX_DEFAULT="... ipv6.disable=1"
Alternatively, if you can't easily modify kernel parameters, add this to your sysctl.conf or run sysctl to set it manually:
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
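To apply the sysctl variant without a reboot (same values as above; requires root):

```shell
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
# Verify; should print 1 once applied:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
```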
| debian 8: My machine started to only use IPv6 for *outgoing* connections. How to go back to IPv4? |
1,495,372,992,000 |
My understanding is that
tcp6 is used for connections over IPv6 & tcp is used for connections over IPv4.
and
::ffff:127.0.0.1 is representing IPv6 address which is mapped to IPv4 address.
But when I use netstat to find open connections on a port like
netstat -anp | grep 31210
I get output as
tcp 0 0 ::ffff:127.0.0.1:64876 ::ffff:127.0.0.1:31210 ESTABLISHED 23755/java
This seems to mean that IPv6 communication is done using tcp.
How is this possible?
|
TCP4 and TCP6 protocols don't exist. Those names can be used as shorthand for TCP over IPv4 and TCP over IPv6 respectively, but that's an abuse of language -- the protocol used is always TCP.
Due to the separation of layers in the ISO/OSI model, the TCP segment (layer 4) is always the same whether it accompanies an IPv4 or an IPv6 packet (layer 3).
The only thing that changes in the TCP segment is the Checksum field, calculated according to RFC 793 for IPv4 and RFC 2460 for IPv6, since the size of an IP address differs between the two versions of the protocol. (I am not sure whether the Options field is used differently too.) However, it's still the same ol' TCP.
And yes, ::ffff:127.0.0.1 represents an IPv4 address (loopback in this case) mapped to an IPv6 address.
| IPv6 over TCP or TCP6 |
1,495,372,992,000 |
I have Debian (kernel 4.19.194-1) as a router server with LAN, WAN, PPPOE (as gateway), and COMPUTER1 in the LAN network, which should have access to the internet through the Debian router.
As firewall I use nftables with rules:
#!/usr/sbin/nft -f
flush ruleset
define EXTIF = "ppp0"
define LANIF = "enp1s0"
define WANIF = "enp4s0"
define LOCALIF = "lo"
table firewall {
chain input {
type filter hook input priority 0
ct state {established, related} counter accept
ct state invalid counter drop
ip protocol icmp counter accept
ip protocol igmp counter accept comment "Accept IGMP"
ip protocol gre counter accept comment "Accept GRE"
iifname { $LOCALIF, $LANIF } counter accept
tcp dport 44122 counter accept
udp dport 11897 counter accept
udp dport 1194 counter accept
udp dport {67,68} counter accept comment "DHCP"
counter reject
}
chain forwarding {
type filter hook forward priority 0
# teleguide.info for nft monitor
ip daddr 46.29.166.30 meta nftrace set 1 counter accept
ip saddr 46.29.166.30 meta nftrace set 1 counter accept
udp dport 1194 counter accept
tcp dport 5938 counter accept
udp dport 5938 counter accept
ip daddr 10.10.0.0/24 counter accept
ip saddr 10.10.0.0/24 counter accept
ip protocol gre counter accept comment "Accept GRE Forward"
counter drop comment "all non described Forward drop"
}
chain outgoing {
type filter hook output priority 0
oifname $LOCALIF counter accept
}
}
table nat {
chain prerouting {
type nat hook prerouting priority 0
iifname $EXTIF udp dport 1194 counter dnat to 10.10.0.4
}
chain postrouting {
type nat hook postrouting priority 0
ip saddr 10.10.0.0/24 oifname $EXTIF counter masquerade
}
}
lsmod:
tun 53248 2
pppoe 20480 2
pppox 16384 1 pppoe
ppp_generic 45056 6 pppox,pppoe
slhc 20480 1 ppp_generic
binfmt_misc 20480 1
i915 1736704 0
ppdev 20480 0
evdev 28672 2
video 49152 1 i915
drm_kms_helper 208896 1 i915
iTCO_wdt 16384 0
iTCO_vendor_support 16384 1 iTCO_wdt
parport_pc 32768 0
coretemp 16384 0
sg 36864 0
serio_raw 16384 0
pcspkr 16384 0
drm 495616 3 drm_kms_helper,i915
parport 57344 2 parport_pc,ppdev
i2c_algo_bit 16384 1 i915
rng_core 16384 0
button 20480 0
nft_masq_ipv4 16384 3
nft_masq 16384 1 nft_masq_ipv4
nft_reject_ipv4 16384 1
nf_reject_ipv4 16384 1 nft_reject_ipv4
nft_reject 16384 1 nft_reject_ipv4
nft_counter 16384 25
nft_ct 20480 2
nft_connlimit 16384 0
nf_conncount 20480 1 nft_connlimit
nf_tables_set 32768 3
nft_tunnel 16384 0
nft_chain_nat_ipv4 16384 2
nf_nat_ipv4 16384 2 nft_chain_nat_ipv4,nft_masq_ipv4
nft_nat 16384 1
nf_tables 143360 112 nft_reject_ipv4,nft_ct,nft_nat,nft_chain_nat_ipv4,nft_tunnel,nft_counter,nft_masq,nft_connlimit,nft_masq_ipv4,nf_tables_set,nft_reject
nf_nat 36864 2 nft_nat,nf_nat_ipv4
nfnetlink 16384 1 nf_tables
nf_conntrack 172032 8 nf_nat,nft_ct,nft_nat,nf_nat_ipv4,nft_masq,nf_conncount,nft_connlimit,nft_masq_ipv4
nf_defrag_ipv6 20480 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
ip_tables 28672 0
x_tables 45056 1 ip_tables
autofs4 49152 2
ext4 745472 2
crc16 16384 1 ext4
mbcache 16384 1 ext4
jbd2 122880 1 ext4
fscrypto 32768 1 ext4
ecb 16384 0
crypto_simd 16384 0
cryptd 28672 1 crypto_simd
glue_helper 16384 0
aes_x86_64 20480 1
raid10 57344 0
raid456 172032 0
async_raid6_recov 20480 1 raid456
async_memcpy 16384 2 raid456,async_raid6_recov
async_pq 16384 2 raid456,async_raid6_recov
async_xor 16384 3 async_pq,raid456,async_raid6_recov
async_tx 16384 5 async_pq,async_memcpy,async_xor,raid456,async_raid6_recov
xor 24576 1 async_xor
raid6_pq 122880 3 async_pq,raid456,async_raid6_recov
libcrc32c 16384 3 nf_conntrack,nf_nat,raid456
crc32c_generic 16384 5
raid0 20480 0
multipath 16384 0
linear 16384 0
raid1 45056 2
md_mod 167936 8 raid1,raid10,raid0,linear,raid456,multipath
sd_mod 61440 6
ata_generic 16384 0
ata_piix 36864 4
libata 270336 2 ata_piix,ata_generic
psmouse 172032 0
scsi_mod 249856 3 sd_mod,libata,sg
ehci_pci 16384 0
i2c_i801 28672 0
uhci_hcd 49152 0
lpc_ich 28672 0
ehci_hcd 94208 1 ehci_pci
mfd_core 16384 1 lpc_ich
usbcore 299008 3 ehci_pci,ehci_hcd,uhci_hcd
r8169 90112 0
realtek 20480 2
libphy 77824 2 r8169,realtek
usb_common 16384 1 usbcore
nft monitor trace (verdict accept everywhere):
trace id 2c2a8923 ip firewall forwarding packet: iif "enp1s0" oif "ppp0" ether saddr xxx ether daddr xxx ip saddr 10.10.0.96 ip daddr 46.29.166.30 ip dscp cs0 ip ecn not-ect ip ttl 127 ip id 32611 ip length 52 tcp sport 62489 tcp dport https tcp flags == syn tcp window 8192
trace id 2c2a8923 ip firewall forwarding rule ip daddr 46.29.166.30 nftrace set 1 counter packets 0 bytes 0 accept (verdict accept)
trace id 2c2a8923 ip nat postrouting packet: oif "ppp0" @ll,xxx ip saddr 10.10.0.96 ip daddr 46.29.166.30 ip dscp cs0 ip ecn not-ect ip ttl 127 ip id 32611 ip length 52 tcp sport 62489 tcp dport https tcp flags == syn tcp window 8192
trace id 2c2a8923 ip nat postrouting rule ip saddr 10.10.0.0/24 oifname "ppp0" counter packets 0 bytes 0 masquerade (verdict accept)
trace id 73f8f405 ip firewall forwarding packet: iif "ppp0" oif "enp1s0" ip saddr 46.29.166.30 ip daddr 10.10.0.96 ip dscp af32 ip ecn not-ect ip ttl 58 ip id 0 ip length 52 tcp sport https tcp dport 62489 tcp flags == 0x12 tcp window 29200
trace id 73f8f405 ip firewall forwarding rule ip saddr 46.29.166.30 nftrace set 1 counter packets 0 bytes 0 accept (verdict accept)
trace id ca8ec4f5 ip firewall forwarding packet: iif "enp1s0" oif "ppp0" ether saddr xxx ether daddr xxx ip saddr 10.10.0.96 ip daddr 46.29.166.30 ip dscp cs0 ip ecn not-ect ip ttl 127 ip id 32612 ip length 40 tcp sport 62489 tcp dport https tcp flags == ack tcp window 256
And I don't know why, but with these rules some sites work fine from COMPUTER1 and some do not.
For example: https://google.com works well both from the server and from COMPUTER1, but https://teleguide.info works from the server (wget) and does not work from COMPUTER1.
Any idea what's wrong?
|
The firewall rules did not cause the problem. Instead, it's due to the MTU difference between "plain" Ethernet and PPPoE. Since the PPPoE and PPP headers take up (at least) 8 bytes, and the usual MTU of Ethernet itself is 1500 bytes, the MTU of PPPoE in that case will be at most 1492 bytes.
I don't know MTU matters well enough to tell all the details, but as far as I know, if the TCP SYN packet advertises an MSS larger than what can fit into the MTU of the interface that the replies will come in through, the replying traffic can have trouble actually getting in.
AFAIK, the reason it works fine from the router/server itself is that its MSS is derived from the MTU of its own outbound interface (ppp0), while COMPUTER1's outbound interface is plain Ethernet.
For TCP traffic, one can work around the problem by having a rule in a forward hook chain:
tcp flags syn tcp option maxseg size set 1452
1452 comes from 1500 - 8 - 40, where the 40 is the combined size of the IPv4 and TCP headers (20 bytes each). For IPv6 you may need 1500 - 8 - 60 = 1432, since the IPv6 header is 40 bytes.
You might need to have the rule ordered before any accept rules. (It could depend on the whole structure of the ruleset though, I think.)
P.S. Not sure if you need any measure for UDP traffic.
Alternatively, you can probably just set the MTU of the Ethernet interfaces of all the LAN "clients" of this "router" (and that of its LANIF) to 1492. It's probably less of a "workaround", but could be quite a hassle.
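To make the arithmetic concrete, here is a small sketch (the nft command at the end is commented out and assumes the chain names from the question's ruleset):

```shell
# MSS = MTU - IP header - TCP header.
# PPPoE MTU: 1500 (Ethernet) - 8 (PPPoE + PPP headers) = 1492.
pppoe_mtu=$((1500 - 8))
mss_v4=$((pppoe_mtu - 20 - 20))   # 20-byte IPv4 header + 20-byte TCP header
mss_v6=$((pppoe_mtu - 40 - 20))   # 40-byte IPv6 header + 20-byte TCP header
echo "$mss_v4 $mss_v6"            # prints: 1452 1432

# Hypothetical placement in the ruleset above (chain names assumed):
#   nft insert rule ip firewall forwarding tcp flags syn tcp option maxseg size set 1452
```

As far as I know, newer nftables versions can also clamp to the route MTU with `tcp option maxseg size set rt mtu`, which avoids hard-coding the number.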
| Router with nftables doesn't work well |
1,495,372,992,000 |
A connection is a 5 tuple (ip src/dst, port src/dst, protocol).
What about different connections between ipv4 and ipv6?
If I define the iptables rule:
iptables -A INPUT -p tcp -m connlimit --connlimit-above 50 -j REJECT --reject-with tcp-reset
It limits the tcp connections to 50.
What about ipv6 tcp connections? should I write also
ip6tables -A INPUT -p tcp -m connlimit --connlimit-above 50 -j REJECT --reject-with tcp-reset
?
Does it mean that I can have 100 TCP connections overall (50 IPv4 + 50 IPv6)?
How does it work?
Thanks.
|
You will have 50 connections of each kind, since iptables handles only IPv4 and ip6tables deals only with IPv6 connections. They will not "sum up", because each protocol version is managed by a different tool.
Will nftables, the "new firewall", be able to deal with both protocols, summing everything up? No. You will have the same tool (the nft binary) dealing with the protocols independently, using the rule keywords nft add rule ip6 ... and nft add rule ip ...
As pointed out in the comments, the nft_connlimit extension was recently added in Linux 4.18, allowing you to count IPv4 and IPv6 connections together if you use the inet family when creating rules.
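A sketch of such a combined limit with nftables (table and chain names are illustrative; needs a kernel with nft_connlimit, i.e. 4.18 or later):

```shell
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; }'
# ct count matches once the rule has seen more than 50 concurrent
# connections -- across IPv4 and IPv6 together, thanks to the inet family:
nft add rule inet filter input meta l4proto tcp ct count over 50 counter reject with tcp reset
```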
Related Stuff:
Serverfault: How do you set a max connection limit with nftables?
| iptables limit the number of connections in the system for both ipv4 ipv6 |
1,495,372,992,000 |
ss -lnp in server shows following information:
# ss -lnp
Recv-Q Send-Q Local Address:Port Peer Address:Port
0 128 :::22 :::* users:(("sshd",3847,4))
0 128 *:22 *:* users:(("sshd",3847,3))
0 10 127.0.0.1:25 *:* users:(("sendmail",1605,4))
0 128 127.0.0.1:199 *:* users:(("snmpd",22765,8))
0 128 :::80 :::* users:(("httpd2-prefork",15058,4),("httpd2-prefork",2235,4),("httpd2-prefork",1209,4))
#
According to the output of ss, one might think that Apache listens only on TCP port 80 on all the IPv6 addresses. Actually, Apache also serves requests over IPv4. Why is that so? In addition, how is it possible that PIDs 15058, 2235 and 1209 all listen on the same TCP port?
|
1) This is how Linux works (by default) if you listen for connections on an IPv6 socket: it accepts IPv4 connections on the same socket too, presenting them as IPv4-mapped IPv6 addresses.
https://utcc.utoronto.ca/~cks/space/blog/linux/Ipv6DualBinding
https://utcc.utoronto.ca/~cks/space/blog/programming/ModernIPv6Handling
2) The processes share the same "socket", which was created and "bound" to port 80.
In this case it is shared because the processes forked (cloned) after opening the socket. This is exactly the same as forked processes inheriting open files. Like when you run ls, it inherits file descriptors from the shell, which includes a handle allowing it to write its output to the terminal. Unix treats lots of things as files :).
However it wouldn't be possible to bind a second socket to listen on the same port (no matter what process you are). (Pedantry: unless both processes use SO_REUSEPORT).
| According to socket statistics Apache listens only on IPv6, but actually serves IPv4 as well |
1,495,372,992,000 |
Does IPv6 have the superuser requirement for ports below 1024? I saw in the changelog for Linux 4.11 that they added a sysctl option to change it, but it only lists it under IPv4.
Also does Open/FreeBSD have that restriction in IPv6?
|
These ports belong to TCP/UDP, which are protocols running on top of IP. So things should be the same whichever version of IP you are using.
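On Linux 4.11 and later the threshold can be inspected and changed; despite the net.ipv4 prefix, my understanding is that it applies to IPv6 sockets as well (changing it requires root):

```shell
# Read the first unprivileged port (default 1024):
sysctl net.ipv4.ip_unprivileged_port_start
# Allow unprivileged binds from port 80 upward:
sysctl -w net.ipv4.ip_unprivileged_port_start=80
```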
| IPv6 superuser ports |
1,495,372,992,000 |
I have a Debian Jessie 8 server with three unique IPv4 IPs. I connect to the server from Windows 7 via Putty. I can open three Putty windows using each of the three IPs. I am trying to execute a Perl script that checks whois information using Perl's use Net::Whois::Raw and the system's whois using backticks
$domain_info = `whois google.com 2>&1`;
$domain_info2 = whois(google.com);
The script is automated and keeps checking different URLs. The issue is that, because of the call frequency from the three windows, I am getting
whois limit exceeded - see www.pir.org/whois for details
How can I execute the Perl script so that each Putty window uses the public IP I logged in with?
|
According to
http://search.cpan.org/~nalobin/Net-Whois-Raw-2.85/lib/Net/Whois/Raw.pm, you can:
set_ips_for_server('whois.ripn.net', ['127.0.0.1']);
You can specify
IPs list which will be used for queries to desired whois server. It
can be useful if you have few interfaces, but you need to access whois
server from specified ips.
| Execute Perl commands from a specific IP? |
1,495,372,992,000 |
I've connected two PCs running Linux Mint 20.2 (with NetworkManager) using an Ethernet cable. On enabling the interface, the PCs obtained IPv6 addresses and I'm able to ping one from the other. But I'm getting an annoying GUI notification, "activation of network connection failed", and the status of the wired connection in the NetworkManager applet is "Connecting...".
My initial guess was that it is due to being unable to obtain an IPv4 address from DHCP, so I disabled IPv4 in the NetworkManager GUI for that wired connection. The message remained the same. Then I disabled DNS and routes, again in the NetworkManager GUI, for the wired IPv6 connection. The message still remained. Now, however, the wired connection gets automatically disconnected completely according to the NetworkManager GUI, though the LEDs on the RJ45 port remain lit/blinking green/orange (after sudo ifconfig eth down the LEDs turn off completely on the PC where the command is run). After some time the IPv6 connection gets re-established, for reasons yet unclear to me (ping again starts getting replies).
What do "Connecting..." and "activation of network connection failed" mean in the above situation?
I want the simplest scripted way to connect two Linux PCs, preferably via IPv6. As of now, as I see it, this works almost out of the box, but those messages might interfere (and they are surely annoying, and I've found no way to disable them in Cinnamon yet).
Added 1:
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8604] manager: NetworkManager state is now DISCONNECTED
Jan 11 04:52:55 mint NetworkManager[1184]: <warn> [1641876775.8655] device (enp0s25): Activation: failed for connection 'Wired connection 1'
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8660] device (enp0s25): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8836] dhcp4 (enp0s25): canceled DHCP transaction
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8837] dhcp4 (enp0s25): state changed timeout -> done
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8879] policy: auto-activating connection 'Wired connection 1' (*****)
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8900] device (enp0s25): Activation: starting connection 'Wired connection 1' (*****)
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8938] device (enp0s25): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8944] manager: NetworkManager state is now CONNECTING
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8947] device (enp0s25): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8956] device (enp0s25): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 11 04:52:55 mint NetworkManager[1184]: <info> [1641876775.8963] dhcp4 (enp0s25): activation: beginning transaction (timeout in 45 seconds)
Jan 11 04:53:40 mint NetworkManager[1184]: <warn> [1641876820.8574] dhcp4 (enp0s25): request timed out
Jan 11 04:53:40 mint NetworkManager[1184]: <info> [1641876820.8575] dhcp4 (enp0s25): state changed unknown -> timeout
Jan 11 04:53:40 mint NetworkManager[1184]: <info> [1641876820.8577] device (enp0s25): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 11 04:53:40 mint NetworkManager[1184]: <info> [1641876820.8600] manager: NetworkManager state is now DISCONNECTED
Added 2:
Added 1 above was captured before IPv4 was disabled; below, with it disabled, fewer lines remain (the dhcp4 ones are gone):
Jan 11 07:49:13 mint NetworkManager[1184]: <info> [1641887353.8456] device (enp0s25): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 11 07:49:13 mint NetworkManager[1184]: <info> [1641887353.8478] manager: NetworkManager state is now DISCONNECTED
Jan 11 07:49:13 mint NetworkManager[1184]: <warn> [1641887353.8536] device (enp0s25): Activation: failed for connection 'Wired connection 1'
Jan 11 07:49:13 mint NetworkManager[1184]: <info> [1641887353.8560] device (enp0s25): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')
Jan 11 07:49:13 mint NetworkManager[1184]: <info> [1641887353.8588] policy: auto-activating connection 'Wired connection 1' (*****)
Jan 11 07:49:13 mint NetworkManager[1184]: <info> [1641887353.8622] device (enp0s25): Activation: starting connection 'Wired connection 1' (****)
Jan 11 07:49:13 mint NetworkManager[1184]: <info> [1641887353.8627] device (enp0s25): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jan 11 07:49:13 mint NetworkManager[1184]: <info> [1641887353.8639] manager: NetworkManager state is now CONNECTING
Jan 11 07:49:13 mint NetworkManager[1184]: <info> [1641887353.8647] device (enp0s25): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jan 11 07:49:13 mint NetworkManager[1184]: <info> [1641887353.8660] device (enp0s25): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jan 11 07:49:45 mint NetworkManager[1184]: <info> [1641887385.8471] device (enp0s25): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')
Jan 11 07:49:45 mint NetworkManager[1184]: <info> [1641887385.8497] manager: NetworkManager state is now DISCONNECTED
|
Read NetworkManager's log messages. They should tell you in more detail what is happening and what is failing on your network connection.
On systems using systemd-journald as a primary log mechanism (such as modern Ubuntu/Mint), you'll need a command like this:
journalctl -x -b _SYSTEMD_UNIT=NetworkManager.service
This will display all messages logged by NetworkManager since the latest system boot-up. The first line of output should be -- Journal begins at <timestamp>, ends at <timestamp>. -- which tells you the time range of available journals (it is adjustable, but logs from before the beginning of the journal are already gone).
On systems with traditional syslog logging, you should usually look at logs stored in /var/log, e.g. /var/log/daemon.log (Debian/Ubuntu-based systems) or /var/log/messages (RedHat-style systems).
Your log indicates NetworkManager is still trying to get an IPv4 address by DHCP. It is using a connection definition named Wired connection 1: you will be able to see how it is defined in detail if you type nmcli connection show 'Wired connection 1'. In particular, check:
nmcli connection show 'Wired connection 1' | grep method
The response should be about three lines, like this:
ipv4.method: auto
ipv6.method: auto
proxy.method: none
For your use case, ipv4.method should be either disabled or link-local, and ipv6.method should probably be link-local too, to tell NetworkManager that a global internet connection is not expected with this connection definition.
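If the methods are wrong, they can be changed with nmcli — a sketch; adjust the connection name to match yours:

```shell
nmcli connection modify 'Wired connection 1' ipv4.method link-local ipv6.method link-local
nmcli connection up 'Wired connection 1'
```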
See man 5 nm-settings-nmcli for details on each setting in the nmcli connection show <connection name> output. Note that they are organized hierarchically, so to find ipv4.method for example, you should first search for a section title ipv4 setting and then search for just method after that.
| "Connecting...", "Connection failed. Activation of network connection failed" How to find out what does it mean exactly? (ping works) |
1,495,372,992,000 |
I have a use case where I want to forward certain IPv4 ports incoming into a machine to the same ports on another machine that uses IPv6.
I assume I can do this with [auto]ssh, but wonder if this is high performance, or if there is something else I could use? IPtables is one option, but I understand that this is IPv4 only and that I therefore need to use IP6tables. Will that work for IPv4 <-> IPv6 (bidirectional)?
What are my options for the highest performance, and preferably something that can run as a service?
|
You could use socat. It's a relay for bidirectional data transfers between two independent data channels. You can forward IPv4 to IPv6 and the other way around too.
Example for port 4000:
sudo socat TCP4-LISTEN:4000,fork TCP6:[xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx]:4000
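To keep it running as a service, a minimal systemd unit could look like this (the unit name and addresses are placeholders, not from the question):

```ini
# /etc/systemd/system/socat-forward.service  (hypothetical name)
[Unit]
Description=Forward TCP port 4000 from IPv4 to IPv6
After=network.target

[Service]
ExecStart=/usr/bin/socat TCP4-LISTEN:4000,fork,reuseaddr TCP6:[2001:db8::1]:4000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now socat-forward`.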
| Highest performance way to route traffic from IPv4 to IPv6 on Linux? |
1,495,372,992,000 |
I have installed the Oracle VM VirtualBox in my laptop. Then I downloaded the Full ISO image for Oracle Linux 9.1 version (via https://yum.oracle.com/oracle-linux-isos.html) and used that in order to create an Oracle Linux (64 bit) virtual machine. The installation was successful and I was able to login to my VM.
I can ssh from my laptop to my Linux VM without any issues. I can browse the internet within the Linux machine.
I opened up a terminal and executed the below command
ping google.com
Unfortunately, it doesn't receive any replies.
[root@localhost ~]# ping google.com
PING google.com(bom12s05-in-x0e.1e100.net (2404:6800:4009:80f::200e)) 56 data bytes
--- google.com ping statistics ---
113 packets transmitted, 0 received, 100% packet loss, time 114760ms
I did nslookup google.com as well as cat /etc/resolv.conf. Name server details are fetched properly.
I am not sure whether I am missing something here since I can access internet via an internet browser within Linux VM.
Your thoughts and opinions are much appreciated.
Result of resolvectl status
[root@localhost ~]# resolvectl status
bash: resolvectl: command not found...
Install package 'systemd-resolved' to provide command 'resolvectl'? [N/y] y
* Waiting in queue...
* Loading list of packages....
The following packages have to be installed:
systemd-resolved-250-12.0.1.el9_1.x86_64 System daemon that provides network name resolution to local applications
Proceed with changes? [N/y] y
* Waiting in queue...
* Waiting for authentication...
* Waiting in queue...
* Downloading packages...
* Requesting data...
* Testing changes...
* Installing packages...
Failed to get global data: Could not activate remote peer.
Result of nslookup google.com
Server: X.X.X.X
Address: X.X.X.X#98
Non-authoritative answer:
Name: google.com
Address: 142.250.199.174
Name: google.com
Address: 2567:7845:4756:74e::352e
Result of grep nameserver /etc/resolv.conf
nameserver X.X.X.X
nameserver fe35::f2db:74ff:fe65:576f%enp0s3
Please note that the Server value and nameserver value in nslookup and grep commands are identical.
Cheers
|
According to the user manual, IPv6 is disabled by default for guests using the NAT adapter.
You could try to change the guest network adapter to Bridged if you have IPv6 enabled on your LAN (network connections should be restarted in the guest or you could simply reboot it).
or
You could enable IPv6 using VBoxManage natnetwork modify --netname natnet1 --ipv6=on (to find out all NAT network adapters run VBoxManage natnetwork list).
In any case, ping -4 host.com should work, since it forces IPv4.
| ping google.com within the Oracle Linux 9.1 VM is not working |
1,495,372,992,000 |
This page, linked from the avahi-autoipd man page says:
Most modern Linux distributions already include full IPv4 link-local support
However, if I look at the routing table on my Fedora 34 machine, I only see these three routes:
default via 10.180.64.1 dev wlo1 proto dhcp metric 600
10.180.64.0/22 dev wlo1 proto kernel scope link src 10.180.66.146 metric 600
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
By my understanding, the first line means packets with an IPv4LL destination (169.254.x.x) will be sent to the router rather than directly to their destination.
This would mean that the packet would only be delivered if the router was aware of IPv4LL addresses, which I don't think is true of all routers.
Does Fedora actually handle IPv4LL addresses out of the box? If so, how?
|
Linux distributions stopped doing IPv4LL by default.
network-manager: no longer falls back to link-local ipv4
Date: Wed, 11 Mar 2009 19:42:01 UTC
since the upgrade to 0.7, it seems that [NetworkManager] doesn't fall back to
link-local ipv4 in case dhcp times out, and instead goes into
'disconnected' state. ...
Re: DHCP fall back to link-local? (IPv4)
Date: Thu, 16 Apr 2009 11:53:01 -0400
... you've just added about 45
seconds of latency to the connection
... if you know you want zeroconf, use zeroconf, don't use DHCP
... fallback to zeroconf is
simply confusing for a ton of users (which is why that behavior was
removed in the first place)
My Fedora Workstation system mentions "Link-Local Only", as an alternative option to DHCP, under the IPv4 tab. I haven't tried it, so I give no promises about whether it works at all :-).
Note that your link was to an Apple page that was last updated in 2005. The Apple page also has a note at the top saying that it is no longer updated.
| Is Fedora 34 configured for IPv4 link-local addresses? |
1,495,372,992,000 |
I'm trying to set up a Raspberry Pi 4 as a WiFi access point. Following the official documentation I managed to bridge my eth0 interface and set up hostapd. The bridge's IP is provided by an existing DHCP server on the network, in contrast to the documentation.
The problem I'm facing is that none of the connected WiFi devices get an IPv4 address. They can connect to the internet over IPv6, but then fail to reach any IPv4 target on the internet.
Browsing IPv6 based websites works fine from any wifi client (Android or Windows 10).
Linux raspberrypi4 4.19.75-v7l+ #1270 SMP Tue Sep 24 18:51:41 BST 2019 armv7l GNU/Linux
hostapd v2.8-devel
|
A.B's comment is actually right. My Raspberry Pi and the current Raspbian on it do load the br_netfilter module. Upon removing it and restarting hostapd, all my clients now get valid IPv4 addresses. Inserting the module breaks the functionality again.
Thank you A.B!
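If you need the same fix, removing and blacklisting the module can be done roughly like this (a sketch; the blacklist file name is arbitrary):

```shell
# Unload the module now (assumes nothing else on the system needs it):
sudo modprobe -r br_netfilter
# Prevent it from being auto-loaded on the next boot:
echo 'blacklist br_netfilter' | sudo tee /etc/modprobe.d/disable-br-netfilter.conf
sudo systemctl restart hostapd
```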
| hostapd clients don't get ipv4 addresses |
1,495,372,992,000 |
I have a .pcap file with fragmented IP traffic. I replay this file with tcpreplay, but I also need to replay it with the DF (don't fragment) bit set in some packets.
I supposed that tcprewrite would help, but it seems there is no ability to change IP header flags in this utility.
So which (preferably console) utility should I use to correctly alter IP header flags in a pcap file on Linux? If tcprewrite or any other tool can do so, some examples would be helpful.
By the way, after altering the DF bit, the checksum of the IP header should be updated accordingly.
|
There are various methods I would approach this.
If there aren't many packets, or it's a one-time change, I really like WireEdit; TraceWrangler is another GUI option.
Otherwise, two options if you have any programming experience are Scapy (Python) and PcapPlusPlus (C++). This PcapPlusPlus link might be enough of a tutorial for what you are trying to do that, with very little programming experience, you can do what you want.
Finally, I found bittwist, an older application, but it has the -f d option.
-f flags
Control flags. Possible characters for flags are:
- : remove all flags
r : set the reserved flag
d : set the don’t fragment flag
m : set the more fragment flag
Example: -f d
If any of the flags is specified, all original flags are
removed automatically.
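If none of those tools fit, the edit itself is simple enough to script: the DF bit is bit 0x4000 of the flags/fragment-offset word at bytes 6–7 of the IPv4 header, and the header checksum must be recomputed afterwards. A stdlib-only Python sketch of just that header edit (not a full pcap rewriter):

```python
import struct

def ip_checksum(header: bytes) -> int:
    """RFC 791 header checksum: ones'-complement sum of 16-bit words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def set_df(ip_header: bytes) -> bytes:
    """Return a copy of a raw IPv4 header with DF set and checksum fixed."""
    hdr = bytearray(ip_header)
    flags_frag = struct.unpack("!H", hdr[6:8])[0] | 0x4000   # DF bit
    hdr[6:8] = struct.pack("!H", flags_frag)
    hdr[10:12] = b"\x00\x00"                 # zero checksum before recomputing
    hdr[10:12] = struct.pack("!H", ip_checksum(bytes(hdr)))
    return bytes(hdr)
```

Verifying the result is easy: summing a valid header with ip_checksum (checksum field included) yields 0.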
| Setting 'DF'-bit in IP-header inside pcap file |
1,495,372,992,000 |
I tried to solve my problem that my touchpad is not running on my Lenovo 720-15IKB by installing the latest rc kernel 4.14.rc5 as described here, which really worked! The touchpad is working then! But now I have a new problem caused by that kernel:
Networking doesn't work correctly with kernel 4.14-rc5
I don't get any IPv4 address any more in my local network. IPv6 works correctly. Since IPv6 is running in my network, I could add all needed addresses by hand in my /etc/hosts file, but that is no solution ;)
I could only work around it like this:
Instead of DHCP I used manual WiFi configuration, which still didn't help at first. Then I connected a USB-LAN adapter once and noticed that I got correct internet settings via LAN. This somehow seems to have fixed some misconfiguration. I can now get correct internet settings via WiFi too, and after a reboot I can reconnect via WiFi only. But DHCP still doesn't work. I tested this with 3 different WiFi networks in different places.
I just installed plain standard Ubuntu 17.10 with systemd and Network Manager, no modifications.
How can I get IPv4 with DHCP running with the latest kernel?
|
Looking on google:
https://ubuntuforums.org/showthread.php?t=2372492
https://www.phoronix.com/scan.php?page=news_item&px=AppArmor-Linux-4.14
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1724450
It seems related to AppArmor, and I read there are patches (like perhaps editing apparmor's configuration: apparmor-for-4.14.diff ).
This Ubuntu page on AppArmor gives information on how to partially disable it. The same command, aa-complain, can be used to allow either a given command or a whole profile to be bypassed. So first install the required tools (be creative if the network isn't working yet...):
apt install apparmor-utils
For dhclient and related binaries (including its communication with NetworkManager), doing this fixes the DHCP issue:
sudo aa-complain /etc/apparmor.d/sbin.dhclient
In case another unrelated command behaves differently than before and there's no existing profile for it, just using sudo aa-complain /path/to/command should allow it to work unhindered. Keep security considerations in mind.
| Get DHCP running with Kernel 4.14-rc5 |
1,495,372,992,000 |
I have a Linux-Board with two Ethernet-Interfaces (eth0, eth1).
On eth0 I have an IPv4 network; on eth1 there's an IPv6 network.
Now I want to route packets from specific devices on the IPv4 network to the IPv6 network and vice versa. Each IPv4 device has a unique IPv6 address and each IPv6 device has a unique IPv4 address, which shall be specified in a text file.
I read about tayga but it seems that I can use it with only one eth-interface. I don't know if this is really what I need.
Isn't it possible to manage this with standard linux tools?
Do you think a simple C program which receives IP packets on one interface, rewrites the IP addresses and the IP PDU layout, and sends them back out on the other interface would work?
|
If I understand your situation correctly I think the best solution for you would be to use SIIT-DC (SIIT-DC: Stateless IP/ICMP Translation for IPv6 Data Center Environments). It allows you to map an IPv4 address to an IPv6 address and vice versa.
The tool to do this with I personally like best is Jool. It is a Linux kernel module that implements both NAT64 and SIIT.
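A rough sketch of what a one-to-one mapping setup with Jool's SIIT mode might look like (addresses are placeholders, and you should check man jool_siit for the exact syntax of your Jool version):

```shell
sudo modprobe jool_siit
sudo jool_siit instance add "example" --netfilter
# One EAMT (Explicit Address Mapping Table) entry per IPv4<->IPv6 host pair:
sudo jool_siit eamt add 192.0.2.10 2001:db8::10
```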
| Routing packets between IPv4 and IPv6 networks on different Interfaces |
1,495,372,992,000 |
So, one of my servers is behind NAT, and since there is already a publicly accessible apache server going on my LAN, I decided to access it from the outside with different ports, and remap them to the standard port of the apache on this new machine I want to get a cert on. I did that with classic port forwarding via my router.
Now, if I want to use letsencrypt on said server, it obviously fails because it tries to use the standard port, which will direct to my other server's apache installation (which btw. already has a letsencrypt-cert).
Now I guess I need some way to tell letsencrypt to use my self-defined port instead of the standard one to connect from the outside, but I haven't found anything yet. Is that even possible? If it is, how?
|
It's not possible to use a non-standard port, as a conforming ACME server will still try to contact the default ports 80/443 for the http-01/tls-sni-01 challenges.
E.g. certbot has separate options to listen on a non-standard port, but that still doesn't help to pass the challenge:
certonly:
Options for modifying how a cert is obtained
--tls-sni-01-port TLS_SNI_01_PORT
Port used during tls-sni-01 challenge. This only
affects the port Certbot listens on. A conforming ACME
server will still attempt to connect on port 443.
(default: 443)
--http-01-port HTTP01_PORT
Port used in the http-01 challenge.This only affects
the port Certbot listens on. A conforming ACME server
will still attempt to connect on port 80. (default:
80)
Probably in your case the best way would be to use another verification method -- webroot.
In this case you don't need your 80 and 443 to be available to the outside world, but just a specific directory (which might be configured with proxy on webserver side, I assume).
Details are available here
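With certbot that would look roughly like this (a sketch; the webroot path must be a directory that your publicly reachable webserver serves under /.well-known/acme-challenge/, and the domain is a placeholder):

```shell
certbot certonly --webroot -w /var/www/example -d example.com
```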
| Changing the port letsencrypt tries to connect on |
1,495,372,992,000 |
Is there a tool that can convert a list of IPs into a fixed network CIDR form, i.e. x.x.x.0/24 and x.x.0.0/16? For example, to demonstrate this, I have a list of IPs that can possibly be converted to these fixed forms like below:
First, if I want to convert the list of IPs below to /24 networks with the fixed form x.x.x.0/24:
./list2cidr 24 iplist.txt
1.22.3.4
1.28.3.5
1.211.3.7
1.211.3.2
1.211.3.1
Output:
1.22.3.4
1.28.3.5
1.211.3.0/24
How does it work?
First, it scans through the list looking for IPs in the same network: for /24 it compares the first three octets, and IPs that match are combined into x.x.x.0/24. In this case, there are 3 IPs sharing the same first three octets:
1.211.3.7
1.211.3.2
1.211.3.1
So this would be: 1.211.3.0/24
Another example: if I want to convert the list of IPs below to /16 networks with the fixed form x.x.0.0/16:
./list2cidr 16 iplist.txt
1.1.4.1
1.1.4.2
1.22.44.1
1.22.3.2
1.22.1.9
Output:
1.1.0.0/16
1.22.0.0/16
When I pass the argument 16, it compares the first two octets; IPs that share them are combined into x.x.0.0/16.
Should I start writing a script for this or is there a tool that exists for this purpose?
EDIT:
I'm interested to make the CIDR form looks like one of these: 1.1.1.0/24 and 1.1.0.0/16. So, 1.1.0.0/24 or 1.1.1.0/16 are not what I want.
That means, if pass list2cidr 24 iplist.txt, it must form this output
x.x.x.0/24
and if I pass list2cidr 16 iplist.txt, it must form this output
x.x.0.0/16
Currently, I have implemented the first part of the output using a bash script, but I have not fully tested it yet.
|
I don't know of one, so I wrote one (in Perl). I also figured your tool design was incomplete.
It should only give CIDR blocks of the requested levels, even when there is only one address therein.
It should also work without a target level.
Thus:
$ cat sample
1.22.3.4
1.28.3.5
1.211.3.7
1.211.3.2
1.211.3.1
$ list2cidr 24 sample
1.22.3.0/24
1.28.3.0/24
1.211.3.0/24
$ list2cidr 16 sample
1.22.0.0/16
1.28.0.0/16
1.211.0.0/16
$ list2cidr sample
1.0.0.0/8
1.16.0.0/12
1.22.3.4
1.28.3.5
1.211.3.0/29
1.211.3.0/30
1.211.3.1
1.211.3.2
1.211.3.7
$
My current implementation only does IPv4. The code is:
#!/usr/bin/perl -w
use strict;
my $target;
if ($ARGV[0] =~ m{^\d+}) {
$target = + shift @ARGV;
}
my $map = [];
sub record($)
{
my $v = shift;
my $m = $map;
for my $i ( 0 .. 31 ) {
my $k = $v & (1 << (31-$i));
$m = $m->[!!$k] ||= (($i == 31) ? $v : []);
}
}
while (<>) {
chomp;
if (m{^\s*(\d+)\.(\d+)\.(\d+)\.(\d+)\s*\z}) {
if (($1 < 256) && ($2 < 256) && ($3 < 256) && ($4 < 256)) {
record(($1<<24) | ($2<<16) | ($3 << 8) | $4);
next;
}
}
printf("Invalid: %s\n", $_);
}
sub output($$$) {
my ($addr, $bits, $indent) = @_;
printf "%*s%d.%d.%d.%d",
$indent*4, '',
0xff & ($addr >> 24),
0xff & ($addr >> 16),
0xff & ($addr >> 8),
0xff & ($addr );
printf("/%d", $bits) if $bits < 32;
print "\n";
}
sub walk($$$$);
sub walk($$$$) {
my ($prefix, $bits, $indent, $m) = @_;
#printf ("%d %d %d ...\n", $prefix, $bits, $indent);
if ($bits == ($target//-1)) {
output $prefix<<(32-$bits), $bits, 0;
} elsif ($bits == 32) {
warn 'mismatch '.$prefix.' != '.$m unless $prefix == $m;
output $prefix, $bits, $indent unless defined $target;
} elsif (defined $m->[0]) {
if (defined $m->[1]) {
output $prefix<<(32-$bits), $bits, $indent unless defined $target;
walk($prefix*2, $bits+1, $indent+1, $m->[0]);
walk($prefix*2+1, $bits+1, $indent+1, $m->[1]);
} else {
walk($prefix*2, $bits+1, $indent, $m->[0]);
}
} else {
if (defined $m->[1]) {
walk($prefix*2+1, $bits+1, $indent, $m->[1]);
} else {
warn sprintf('Empty node at prefix=%x bits=%d indent=%d', $prefix, $bits, $indent);
}
}
}
walk (0, 0, 0, $map);
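If Python is an option, the stdlib ipaddress module gets you the fixed-prefix variant in a few lines (a sketch of the grouping logic only, not a drop-in replacement for the script above):

```python
import ipaddress

def list2cidr(prefix, ips):
    """Map each IPv4 address to its enclosing /prefix network,
    deduplicated, keeping the order of first appearance."""
    seen = {}
    for ip in ips:
        # strict=False zeroes the host bits, e.g. 1.22.3.4/24 -> 1.22.3.0/24
        net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        seen.setdefault(net, None)
    return [str(n) for n in seen]

print(list2cidr(24, ["1.22.3.4", "1.28.3.5", "1.211.3.7", "1.211.3.2", "1.211.3.1"]))
# → ['1.22.3.0/24', '1.28.3.0/24', '1.211.3.0/24']
```

Like my Perl tool, this reports every group at the requested level even when it contains a single address.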
| Convert list of IP into fixed CIDR form |
1,495,372,992,000 |
I don't have support for IPv6 on my system, and I am only using IPv4. My wpa_supplicant logs are flooded with following error messages:
wpa_supplicant[3370]: nl80211: Failed to open /proc/sys/net/ipv6/conf/wlan0/drop_unicast_in_l2_multicast: No such file or directory
wpa_supplicant[3370]: nl80211: Failed to set IPv6 unicast in multicast filter
which in itself would be harmless, but makes it difficult to actually find other useful messages.
How can I tell wpa_supplicant to only use IPv4 and not try to configure IPv6?
|
As I mentioned in your other post, disabling IPv6 support is not possible in wpa_supplicant. If your only goal is to stop wpa_supplicant from logging the two errors mentioned in your question, just clone the source code and modify this function by commenting out the lines that set IPv6 params.
// comment out these lines in nl80211_configure_data_frame_filters(...)
static int nl80211_configure_data_frame_filters(void *priv, u32 filter_flags)
{
struct i802_bss *bss = priv;
char path[128];
int ret;
/* P2P-Device has no netdev that can (or should) be configured here */
if (nl80211_get_ifmode(bss) == NL80211_IFTYPE_P2P_DEVICE)
return 0;
wpa_printf(MSG_DEBUG, "nl80211: Data frame filter flags=0x%x",
filter_flags);
/* Configure filtering of unicast frame encrypted using GTK */
ret = os_snprintf(path, sizeof(path),
"/proc/sys/net/ipv4/conf/%s/drop_unicast_in_l2_multicast",
bss->ifname);
if (os_snprintf_error(sizeof(path), ret))
return -1;
ret = nl80211_write_to_file(path,
!!(filter_flags &
WPA_DATA_FRAME_FILTER_FLAG_GTK));
if (ret) {
wpa_printf(MSG_ERROR,
"nl80211: Failed to set IPv4 unicast in multicast filter");
return ret;
}
/** THIS BLOCK
os_snprintf(path, sizeof(path),
"/proc/sys/net/ipv6/conf/%s/drop_unicast_in_l2_multicast",
bss->ifname);
ret = nl80211_write_to_file(path,
!!(filter_flags &
WPA_DATA_FRAME_FILTER_FLAG_GTK));
if (ret) {
wpa_printf(MSG_ERROR,
"nl80211: Failed to set IPv6 unicast in multicast filter");
return ret;
}
**/
/* Configure filtering of unicast frame encrypted using GTK */
os_snprintf(path, sizeof(path),
"/proc/sys/net/ipv4/conf/%s/drop_gratuitous_arp",
bss->ifname);
ret = nl80211_write_to_file(path,
!!(filter_flags &
WPA_DATA_FRAME_FILTER_FLAG_ARP));
if (ret) {
wpa_printf(MSG_ERROR,
"nl80211: Failed set gratuitous ARP filter");
return ret;
}
/* Configure filtering of IPv6 NA frames */
/** THIS BLOCK
os_snprintf(path, sizeof(path),
"/proc/sys/net/ipv6/conf/%s/drop_unsolicited_na",
bss->ifname);
ret = nl80211_write_to_file(path,
!!(filter_flags &
WPA_DATA_FRAME_FILTER_FLAG_NA));
if (ret) {
wpa_printf(MSG_ERROR,
"nl80211: Failed to set unsolicited NA filter");
return ret;
}
**/
return 0;
}
But really what you should do is send an email to the folks at Hostap ([email protected]) and explain that you do not support IPv6, and wpa_supplicant is spamming your logs and you'd like it to stop. I'll be honest, the maintainers are pretty hit or miss on answering questions, so make sure that you ask your question clearly, and give all the information they need to make an assessment.
| wpa_supplicant: disable IPv6 |
1,495,372,992,000 |
All of the articles I have read that explain why there are 13 root DNS servers say that each IP address takes 32 bytes, hence (13 × 32) = 416 bytes, leaving up to 96 bytes for other protocol information. For example, see below:
"At the time the DNS was designed, the IP address in use was IPv4, which contains 32 bits. For efficient networking and better performance, these IP addresses should fit into a single packet (using UDP, the DNS’s default protocol). Using IPv4, the DNS data that can fit into a single packet is limited to 512 bytes. As each IPv4 address requires 32 bytes, having 13 servers uses 416 bytes, leaving up to 96 bytes for the remaining protocol information."
Isn't each IP address 32 bits? What does the above statement mean when it says "each IPv4 address requires 32 bytes"? Why would a 32-bit address take 32 bytes?
|
"... As each IPv4 address requires 32 bytes, having 13 servers uses 416 bytes, leaving up to 96 bytes for the remaining protocol information."
The DNS protocol never transmits just plain IP addresses, but properly formatted queries and answers composed of DNS resource records.
The "IPv4 address requires 32 bytes" probably does not refer to the size of the plain IP address, but to the size of the A resource record as formatted for transfer in the DNS protocol.
It looks like this value would have been accurate back when all root DNS servers had unique, non-systematic names, but since the root nameservers have now been re-named to the format x.ROOT-SERVERS.NET, the current state is a bit more complicated.
I just ran tcpdump on a BIND9 DNS server start-up, and it looks like the first A record will take just slightly more than 32 bytes, as it includes:
the full name a.root-servers.net (with one byte for the length of each name component and one zero byte at the end = 20 bytes total)
a 16-bit record type code (2 bytes)
a 16-bit record class code (2 bytes)
a 32-bit TTL value (4 bytes)
a 16-bit data length value (2 bytes)
a 32-bit IP address (4 bytes)
So if you're requesting the A records for the root DNS servers, the first answer record would actually take 34 bytes.
Any subsequent answer records in the same DNS message can refer back to any previously-mentioned name or part of one, so that if a.root-servers.net is mentioned in full, then b.root-servers.net can be expressed in just 4 bytes (2 bytes for the b part, 2 bytes to back-reference the root-servers.net suffix). As a result, any other A records for the root servers will take just 18 bytes each.
The actual start-up query by BIND9 is equivalent to dig . NS and happens over TCP rather than UDP.
As a result, the first answer record is a NS record of 31 bytes, listing the first root DNS server with full name. Subsequent NS records for the other root servers will take just 15 bytes each. As the A records presented as additional information will be able to back-reference each root server hostname in full, each A record for a root DNS server will take just 16 bytes. The response also includes the IPv6 AAAA records for the root nameservers. Even so, total length of the DNS response is just 1097 bytes.
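The per-record arithmetic above can be checked with a few lines of Python (a sketch; it only counts uncompressed names):

```python
def dns_name_wire_length(name: str) -> int:
    """Uncompressed wire size of a DNS name: one length byte per label,
    the label bytes themselves, and a terminating zero byte."""
    labels = name.rstrip(".").split(".")
    return sum(1 + len(label) for label in labels) + 1

# Fixed A-record fields: TYPE(2) + CLASS(2) + TTL(4) + RDLENGTH(2) + IPv4(4) = 14
print(dns_name_wire_length("a.root-servers.net"))        # → 20
print(dns_name_wire_length("a.root-servers.net") + 14)   # → 34
```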
| Ip address is of 32 bit, which means 4 bytes. Yet all answers to question on 13 root dns servers say otherwise [closed] |
1,495,372,992,000 |
My scenario
Relevant entries in my /etc/hosts (I have them written in the same order you see them here)
172.22.5.107 www.wordpress-rend-adri.com
192.168.1.116 www.wordpress-rend-adri.com
I use my laptop in my house and school, hence I'm always dealing with 2 address spaces:
192.168.1.0/24
172.22.0.0/16
So I have those entries because I have a VM with a WordPress install for an exercise. That way, no matter where I am, I'll be able to access my WordPress (as long as the DHCP server offers me the same IP in both networks, obviously).
My question
Knowing all of this, now I can tell you that I made that configuration in my /etc/hosts even though one teacher told me that I can only have one record for a name pointing to a single IP. He said that if I had a duplicated record for the same name, it would always take the first one and stop. But he also told me to try it out, so I did.
The reality is that, for example, at my house (where I'm using 192.168.1.0/24), even though the first record is for the other IP, I can still make a connection, and when I ping the name, the correct IP answers. And yes, I made sure of this: I tried it in an incognito Firefox window, and I also commented out the line with my home IP to check what happened.
Then, I tried to exchange both records. I mean, I just did this:
192.168.1.116 www.wordpress-rend-adri.com
172.22.5.107 www.wordpress-rend-adri.com
So in this case, obviously it is still working.
And when I went to school, the same happened when using the other address space.
So...
Why is it said that you can only have one record per name in your /etc/hosts, if this configuration actually worked for me?
Is Firefox, the ping binary, or whatever tool you use doing an internal name-resolution step to check which entry actually works, before making the final connection?
I'm asking this because, for example, with ping you just start getting answers from the IP that works; you don't see failed connection attempts to the other, earlier IPs.
|
I have done a few tests on my Debian/WSL setup:
~$ uname -a
Linux DESKTOP-OMM8LBC 4.4.0-17763-Microsoft #864-Microsoft Thu Nov 07 15:22:00 PST 2019 x86_64 GNU/Linux
# /etc/hosts
172.22.5.107 www.wordpress-rend-adri.com # Unreachable IP from my LAN
216.58.198.164 www.wordpress-rend-adri.com # IP for www.google.com
192.168.0.12 www.wordpress-rend-adri.com # IP for another running machine on my LAN
157.240.1.35 www.wordpress-rend-adri.com # IP for www.facebook.com
~$ ping www.wordpress-rend-adri.com
PING www.wordpress-rend-adri.com (192.168.0.12) 56(84) bytes of data.
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=1 ttl=64 time=49.9 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=2 ttl=64 time=5.85 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=3 ttl=64 time=5.58 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=4 ttl=64 time=6.25 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.12): icmp_seq=5 ttl=64 time=6.19 ms
--- www.wordpress-rend-adri.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 9ms
rtt min/avg/max/mdev = 5.575/14.754/49.919/17.584 ms
So ping picked the local IP placed between two working WAN IPs.
Second test:
/etc/hosts
172.22.5.107 www.wordpress-rend-adri.com # Unreachable IP from my LAN
216.58.198.164 www.wordpress-rend-adri.com # IP for www.google.com
#192.168.0.12 www.wordpress-rend-adri.com # IP for one running machine on my LAN
157.240.1.35 www.wordpress-rend-adri.com # IP for www.facebook.com
~$ ping www.wordpress-rend-adri.com
PING www.wordpress-rend-adri.com (172.22.5.107) 56(84) bytes of data.
# Stuck here
Third test:
/etc/hosts
#172.22.5.107 www.wordpress-rend-adri.com # Unreachable IP from my LAN
216.58.198.164 www.wordpress-rend-adri.com # IP for www.google.com
#192.168.0.12 www.wordpress-rend-adri.com # IP for one running machine on my LAN
157.240.1.35 www.wordpress-rend-adri.com # IP for www.facebook.com
~$ ping www.wordpress-rend-adri.com
PING www.wordpress-rend-adri.com (216.58.198.164) 56(84) bytes of data.
64 bytes from www.wordpress-rend-adri.com (216.58.198.164): icmp_seq=1 ttl=54 time=24.5 ms
64 bytes from www.wordpress-rend-adri.com (216.58.198.164): icmp_seq=2 ttl=54 time=22.4 ms
64 bytes from www.wordpress-rend-adri.com (216.58.198.164): icmp_seq=3 ttl=54 time=21.7 ms
64 bytes from www.wordpress-rend-adri.com (216.58.198.164): icmp_seq=4 ttl=54 time=30.5 ms
--- www.wordpress-rend-adri.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7ms
rtt min/avg/max/mdev = 21.734/24.768/30.457/3.440 ms
Fourth test:
/etc/hosts
#172.22.5.107 www.wordpress-rend-adri.com # Unreachable IP from my LAN
216.58.198.164 www.wordpress-rend-adri.com # IP for www.google.com
192.168.0.12 www.wordpress-rend-adri.com # IP for one running machine on my LAN
192.168.0.1 www.wordpress-rend-adri.com # IP for my router
157.240.1.35 www.wordpress-rend-adri.com # IP for www.facebook.com
~$ ping www.wordpress-rend-adri.com
PING www.wordpress-rend-adri.com (192.168.0.1) 56(84) bytes of data.
64 bytes from www.wordpress-rend-adri.com (192.168.0.1): icmp_seq=1 ttl=64 time=1.56 ms
64 bytes from www.wordpress-rend-adri.com (192.168.0.1): icmp_seq=2 ttl=64 time=1.35 ms
--- www.wordpress-rend-adri.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 1.349/1.455/1.561/0.106 ms
So my conclusion is that ping does not try one IP after another; it favours the router and local IPs over WAN IPs.
Update :
The choice of IP above is confirmed by the following Python command:
python -c 'import socket;print(socket.gethostbyname("www.wordpress-rend-adri.com"))'
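Under the hood, socket.gethostbyname() returns only a single "preferred" address, while socket.getaddrinfo() exposes the full ordered candidate list that applications iterate over. A minimal sketch, using localhost as a stand-in since the hostname above exists only in the asker's /etc/hosts:

```python
import socket

# "localhost" stands in for the multi-homed hostname from /etc/hosts above.
host = "localhost"

single = socket.gethostbyname(host)          # the one preferred address
candidates = [sa[0] for _, _, _, _, sa in
              socket.getaddrinfo(host, None, socket.AF_INET)]

print(single)      # the address ping (and the one-liner above) picked
print(candidates)  # the full ordered list a resolver client sees
```

On a multi-homed name the order of `candidates` reflects the resolver's address-selection preference, which is what makes ping favour on-link addresses.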
| Different IP:hostName mappings for same host in `/etc/hosts`. Why does this work? |
1,495,372,992,000 |
I'm trying to add one main IP, two extra IP and one IP6. Here is my interfaces file located at /etc/network/interfaces:
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
allow-hotplug ens192
iface ens192 inet static
address 23.227.198.250/26
gateway 23.227.198.194
dns-nameservers 8.8.8.8
dns-search deb12.domain.com
auto ens192:0
iface ens192:0 inet static
address 23.227.198.253
gateway 23.227.198.194
auto ens192:1
iface ens192:1 inet static
address 23.227.198.254
gateway 23.227.198.194
iface ens192 inet6 static
address 2a02:748:4000:6::0199/64
gateway 2a02:748:4000:6::1
Then I restart networking; the output of systemctl restart networking is:
Job for networking.service failed because the control process exited with error code.
See "systemctl status networking.service" and "journalctl -xeu networking.service" for details.
Then journalctl -x says:
Subject: A start job for unit networking.service has begun execution
Defined-By: systemd
Support: https://www.debian.org/support
A start job for unit networking.service has begun execution.
The job identifier is 412.
ifup[2057]: RTNETLINK answers: File exists
ifup[2048]: ifup: failed to bring up ens192:1
systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Subject: Unit process exited
Defined-By: systemd
Support: https://www.debian.org/support
An ExecStart= process belonging to unit networking.service has exited.
The process' exit code is 'exited' and its exit status is 1.
systemd[1]: networking.service: Failed with result 'exit-code'.
Subject: Unit failed
Defined-By: systemd
Support: https://www.debian.org/support
The unit networking.service has entered the 'failed' state with result 'exit-code'.
systemd[1]: Failed to start networking.service - Raise network interfaces.
Subject: A start job for unit networking.service has failed
Defined-By: systemd
A start job for unit networking.service has finished with a failure.
The job identifier is 412 and the job result is failed.
the output of systemctl status networking:
networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Mon 2023-11-20 04:53:15 EST; 3min 15s ago
Docs: man:interfaces(5)
Process: 2048 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Process: 2074 ExecStopPost=/usr/bin/touch /run/network/restart-hotplug (code=exited, status=0/SUCCESS)
Main PID: 2048 (code=exited, status=1/FAILURE)
CPU: 18ms
systemd[1]: Starting networking.service - Raise network interfaces...
ifup[2057]: RTNETLINK answers: File exists
ifup[2048]: ifup: failed to bring up ens192:1
systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: networking.service: Failed with result 'exit-code'.
systemd[1]: Failed to start networking.service - Raise network interfaces.
I could not find any solution for this issue on the web.
It would mean a lot to me if you could help me resolve this issue.
I've tried rewriting the network file and checked if I have any typo in it.
Also, vim does not highlight the comments in the file (I've deleted the comments while posting here), so I'm thinking that maybe the OS does not recognize this file as it should.
I also checked the chmod and chown of this file and everything is fine.
Something strange has happened: even though I receive the error (mentioned above), I can ping all IPs (v4 and v6) from my local network. Also, I can ping Google's IPv4 and IPv6 from this server.
|
There is only one default gateway: the gateway used in the default route, which is what gateway controls. The second time the network configuration tools attempt to add a default route with the same metric etc., the kernel refuses it, which triggers the RTNETLINK answers: File exists error.
Just keep the first instance of:
gateway 23.227.198.194
and delete the two duplicates.
Note: it's better to bring down the interface (using ifdown ens192) before making changes to the configuration, to avoid a desynchronization between the ifupdown tool's state and the actual network state. Of course one should keep an out-of-band way to access the system if it's accessed remotely (or else reboot).
Information below is not needed to solve the problem, it's just a remark telling that ens192:0 and ens192:1 could and should be completely avoided, but this remark can be ignored.
There's no reason to use so-called alias interfaces (they are not interfaces, they are labels attached to the additional addresses). Today ifupdown internally uses iproute2 (ip link, ip addr) and not ifconfig, which is the only known remaining "customer" of these fake interfaces, handled through the older API (netdevice(7)). ifconfig is obsolete on Linux and should not be used anymore by tools or humans; it should be replaced with ip link and ip addr, which use the newer kernel API (rtnetlink(7)). To display an IPv4 address with such a compatibility label (and not the other addresses on the same interface), one can use for example:
ip -details addr show dev ens192 label ens192:0
But of course, just doing:
ip addr show dev ens192
will display all 3 addresses on the interface.
You can remove the :0 and :1 everywhere in the configuration and leave only ens192. ifupdown will just add the addresses as usual (without a label, so they are no longer displayed by ifconfig). Besides, that would be the only method for IPv6 (though an extra :x would be ignored), since so-called alias interfaces on Linux are a workaround for IPv4 and were never needed for IPv6 (ifconfig for IPv6 uses a different syntax, with add and del instead).
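Putting both points together (a single gateway, no alias interfaces), the stanzas could be collapsed into one interface definition. A sketch reusing the addresses from the question, with the extra addresses added label-free via ip:

```
auto ens192
allow-hotplug ens192
iface ens192 inet static
    address 23.227.198.250/26
    gateway 23.227.198.194
    dns-nameservers 8.8.8.8
    dns-search deb12.domain.com
    # extra addresses: no labels, no duplicate gateways
    up   ip addr add 23.227.198.253/26 dev ens192
    up   ip addr add 23.227.198.254/26 dev ens192
    down ip addr del 23.227.198.253/26 dev ens192
    down ip addr del 23.227.198.254/26 dev ens192

iface ens192 inet6 static
    address 2a02:748:4000:6::0199/64
    gateway 2a02:748:4000:6::1
```

All three IPv4 addresses then appear with a plain ip addr show dev ens192.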
| raise network interfaces - debian 12 |
1,495,372,992,000 |
I am trying to ssh into an embedded Linux device (let's call it petalinux) connected to my Ubuntu server running 22.04 (let's call it oip). petalinux and oip are connected by a direct ethernet cable and a serial UART. I am able to connect to the device using a serial terminal (minicom), but ssh times out. I am also unable to ping the device. So, I believe it is one of two things:
Network settings are incompatible (subnet mask, gateway, etc.), or
The firewall on the embedded Linux device is not letting traffic through.
In order to check the first one, I tried finding the IP address of oip; however, I do not see an IPv4 associated with eno1:
EDIT (add output of ifconfig, arp, netstat, and ip route)
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether cc:48:3a:66:a3:b3 brd ff:ff:ff:ff:ff:ff
altname enp0s31f6
inet6 fe80::91fe:23e9:430c:4c90/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:5d:96:04 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: wlp60s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 18:47:3d:31:82:dd brd ff:ff:ff:ff:ff:ff
inet 192.168.29.146/24 brd 192.168.29.255 scope global dynamic noprefixroute wlp60s0
valid_lft 40510sec preferred_lft 40510sec
inet6 2405:201:d001:8b9b:6aa1:628a:c18a:852c/64 scope global temporary dynamic
valid_lft 4186sec preferred_lft 4186sec
inet6 2405:201:d001:8b9b:9fbf:f0f:d3d4:34f5/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 4186sec preferred_lft 4186sec
inet6 fe80::8d6c:7a7c:f242:2c67/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:0c:f4:0a:c5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
6: br-71e276fedf6f: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:93:94:0b:70 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 brd 172.19.255.255 scope global br-71e276fedf6f
valid_lft forever preferred_lft forever
$ ifconfig -a
br-71e276fedf6f: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.19.0.1 netmask 255.255.0.0 broadcast 172.19.255.255
ether 02:42:93:94:0b:70 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:0c:f4:0a:c5 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether cc:48:3a:66:a3:b3 txqueuelen 1000 (Ethernet)
RX packets 12827 bytes 1018633 (1.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3854 bytes 679852 (679.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 16 memory 0xed700000-ed720000
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 31692 bytes 4654677 (4.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 31692 bytes 4654677 (4.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:5d:96:04 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlp60s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.29.146 netmask 255.255.255.0 broadcast 192.168.29.255
inet6 fe80::8d6c:7a7c:f242:2c67 prefixlen 64 scopeid 0x20<link>
inet6 2405:201:d001:8b9b:9fbf:f0f:d3d4:34f5 prefixlen 64 scopeid 0x0<global>
inet6 2405:201:d001:8b9b:2e48:2aca:eaf7:f31b prefixlen 64 scopeid 0x0<global>
ether 18:47:3d:31:82:dd txqueuelen 1000 (Ethernet)
RX packets 7661151 bytes 11185482986 (11.1 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2146363 bytes 514484076 (514.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
$ arp -an
? (192.168.29.73) at 5c:e9:1e:9c:3a:b0 [ether] on wlp60s0
? (192.168.29.1) at a8:da:0c:c0:06:48 [ether] on wlp60s0
$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.29.1 0.0.0.0 UG 0 0 0 wlp60s0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 virbr0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-71e276fedf6f
192.168.29.0 0.0.0.0 255.255.255.0 U 0 0 0 wlp60s0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
$ ip route
default via 192.168.29.1 dev wlp60s0 proto dhcp metric 600
169.254.0.0/16 dev virbr0 scope link metric 1000 linkdown
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.19.0.0/16 dev br-71e276fedf6f proto kernel scope link src 172.19.0.1 linkdown
192.168.29.0/24 dev wlp60s0 proto kernel scope link src 192.168.29.146 metric 600
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
On petalinux's side, the ip addr output is more straightforward. Here, I have set the eth0 IP statically.
EDIT (add output of ifconfig, arp, netstat, and ip route)
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
link/can
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether a6:e6:68:9d:46:5f brd ff:ff:ff:ff:ff:ff
inet 192.168.0.10/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a4e6:68ff:fe9d:465f/64 scope link
valid_lft forever preferred_lft forever
$ ifconfig -a
can0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
NOARP MTU:16 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:10
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Interrupt:47
eth0 Link encap:Ethernet HWaddr A6:E6:68:9D:46:5F
inet addr:192.168.0.10 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::a4e6:68ff:fe9d:465f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:77 errors:0 dropped:0 overruns:0 frame:0
TX packets:692 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:17852 (17.4 KiB) TX bytes:54248 (52.9 KiB)
Interrupt:48
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:626 errors:0 dropped:0 overruns:0 frame:0
TX packets:626 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:56300 (54.9 KiB) TX bytes:56300 (54.9 KiB)
sit0 Link encap:IPv6-in-IPv4
NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
$ arp -an
-sh: arp: command not found
$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
$ ip route
default via 192.168.0.1 dev eth0 proto static
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10
In addition, the SSH server is running on the device, as checked via the sudo systemctl status sshd command. The end-goal is to debug why I am unable to ping or ssh to the device.
|
With 192.168.0.10/24 already set on petalinux's eth0, which is linked to oip's eno1, allow communication between oip and petalinux by running this on oip as root (so probably prefixed with sudo):
ip addr add 192.168.0.11/24 dev eno1
And that's it: each should now be able to reach the other using the other's IP address: 192.168.0.10 or 192.168.0.11 (both being within 192.168.0.0/24). I don't see any strange problem here, except the superfluous information (libvirt and Docker are running too).
Note: petalinux won't be reachable directly from the PC (mentioned in comment) nor can it reach "outside" through oip for at least two reasons: Ubuntu's Docker sets filter/FORWARD to DROP, and the PC probably doesn't have an adequate route to 192.168.0.0/24. But that's not part of the question.
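Note that an address added with ip addr add is lost at reboot. On Ubuntu 22.04 a persistent equivalent, assuming the default networkd renderer, could be a netplan fragment along these lines (the file name is hypothetical; apply with netplan apply):

```
# /etc/netplan/99-petalinux-link.yaml  (hypothetical file name)
network:
  version: 2
  ethernets:
    eno1:
      addresses:
        - 192.168.0.11/24
```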
| Unable to ping or ssh into embedded Linux device: eno1 has no IPv4 address |
1,495,372,992,000 |
My linux box has 3 IP4 addresses and a range of IP6 addresses.
Supposing I wish to make a curl fetch, how to stipulate from which address the request emanates?
Note: I'm actually using Python/PyCurl, however I'm interested in both bash+curl and curl-only solutions. If curl-only I can implement with PyCurl. If bash+curl, I can rewrite my code in bash.
|
Do you mean the --interface option? From man curl:
--interface
Perform an operation using a specified interface. You can
enter interface name, IP address or host name. An example
could look like:
curl --interface eth0:1 https://www.example.com/
If this option is used several times, the last one will be used.
Note that you may also use a specific DNS interface.
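When an IP address is given, --interface works by binding the outgoing socket to that local address before connecting (libcurl's CURLOPT_INTERFACE; PyCurl exposes it as pycurl.INTERFACE). The same mechanism in plain Python, as a self-contained sketch over loopback:

```python
import socket

# A throwaway listener so the sketch needs no external server.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# source_address pins the local (source) IP of the connection,
# which is what `curl --interface <IP>` does under the hood.
client = socket.create_connection(("127.0.0.1", port),
                                  source_address=("127.0.0.1", 0))
conn, peer = server.accept()
print(peer[0])  # the source address the server saw

client.close(); conn.close(); server.close()
```

With PyCurl the equivalent would be roughly c.setopt(pycurl.INTERFACE, "203.0.113.5"), using one of the machine's own addresses (the address here is a placeholder).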
| Specify which of my machine's IP4 or IP6 addresses is to be used for a curl request |
1,495,372,992,000 |
The official Debian networking documentation tells to use:
ifup 6to4
But ifup is not found (the ifupdown and ifupdown2 commands are not found either, even after having been installed). Does it have something to do with prefix delegation? If so, do I have to configure it?
And the linux documentation project (i.e. tldp.org) says to use
ip -6 addr add <ipv6address>/<prefixlength> dev <interface>
but ONLY when you have a global IPv6 address, which is not my case.
I read other tutorials, which nevertheless didn't enable me to reach a solution, so I'm asking for help here.
I'm using Debian stable 10.4 with Xfce 4.12 and Zsh 5.7.1.
I have a TP-LINK N900 Wireless PCI Express Adapter TL-WDN4800 and a Intel I219-V Gibabit LAN controller.
As a side note, the Ethernet network dialog in the desktop panel prints: "device not managed".
MAIN OBJECTIVE: I need to activate IPv6 connectivity to fetch some IPv6 web servers.
➜ ping6 wiki.debian.org
connect: Network is unreachable
Whereas echo requests with IPv4 work without any loss:
➜ ping4 wiki.debian.org
PING wilder.debian.org (82.195.75.112) 56(84) bytes of data.
64 bytes from wilder.debian.org (82.195.75.112): icmp_seq=1 ttl=52 time=35.4 ms
64 bytes from wilder.debian.org (82.195.75.112): icmp_seq=2 ttl=52 time=35.3 ms
64 bytes from wilder.debian.org (82.195.75.112): icmp_seq=3 ttl=52 time=190 ms
64 bytes from wilder.debian.org (82.195.75.112): icmp_seq=4 ttl=52 time=35.3 ms
64 bytes from wilder.debian.org (82.195.75.112): icmp_seq=5 ttl=52 time=181 ms
64 bytes from wilder.debian.org (82.195.75.112): icmp_seq=6 ttl=52 time=181 ms
^C
--- wilder.debian.org ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 12ms
rtt min/avg/max/mdev = 35.277/109.735/190.063/74.440 ms
Here are my network devices:
➜ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 4c:cc:6a:cf:5f:bd brd ff:ff:ff:ff:ff:ff
3: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 18:d6:c7:1c:b7:d5 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.189/24 brd 192.168.0.255 scope global dynamic noprefixroute wlp4s0
valid_lft 7171sec preferred_lft 7171sec
inet6 fe80::b08:601b:a8d8:1474/64 scope link noprefixroute
valid_lft forever preferred_lft forever
You can notice in the before-last line that the LOCAL link address (i.e. fe80::) has a /64 mask, which is a GLOBAL one! A link-local mask would be /10, whereas a global address would start with 2xxx (e.g. 2001::).
Except for lo, which probably means localhost, I do not know what enp0s31f6 and wlp4s0 are.
I only know that enp0s31f6 was renamed from eth0, but that doesn't explain anything to me, apart from the fact that I'm using the new syntax for network interface names:
➜ sudo dmesg | grep -i eth
[ 1.701805] e1000e 0000:00:1f.6 eth0: (PCI Express:2.5GT/s:Width x1) 4c:cc:6a:cf:5f:bd
[ 1.701809] e1000e 0000:00:1f.6 eth0: Intel(R) PRO/1000 Network Connection
[ 1.701912] e1000e 0000:00:1f.6 eth0: MAC: 12, PHY: 12, PBA No: FFFFFF-0FF
[ 1.703934] e1000e 0000:00:1f.6 enp0s31f6: renamed from eth0
[ 7.706185] Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Here are some settings to help you understand my network configuration:
➜ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
auto enp0s31f6
allow-hotplug enp0s31f6
iface enp0s31f6 inet dhcp
iface enp0s31f6 inet6 auto
➜ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 omega.dominion omega
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
➜ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.0.1
Please can someone help me to correctly set up IPv6 connectivity?
EDIT:
I'm behind a TP-LINK AC1350 wireless router Archer C59 v1.0 (but no proxy configured) (general specs are here: https://www.tp-link.com/us/home-networking/wifi-router/archer-c59/#specifications)
I'm using nn-connection-editor application to manage my network. Here are my current settings:
➜ sudo cat /etc/NetworkManager/system-connections/TP-LINK_902C
[connection]
id=TP-LINK_902C
uuid=f2fef445-f44e-4216-8d51-eb4dd4e23ea6
type=wifi
permissions=
timestamp=1589139366
[wifi]
mac-address-blacklist=
mode=infrastructure
seen-bssids=50:C7:BF:90:90:2C;
ssid=TP-LINK_902C
[wifi-security]
key-mgmt=wpa-psk
psk-flags=1
[ipv4]
dns=8.8.8.8;8.8.4.4;
dns-search=
method=auto
[ipv6]
addr-gen-mode=eui64
dns-search=
ip6-privacy=2
method=auto
Now I run the diagnostic tool ndisc6:
➜ rdisc6 wlp4s0
Soliciting ff02::2 (ff02::2) on wlp4s0...
Timed out.
Timed out.
Timed out.
No response.
Which is strange, because LAN discovery of all routers via echo requests seems to work correctly:
➜ ping -c3 -I wlp4s0 ff02::02
ping6: Warning: source address might be selected on device other than wlp4s0.
PING ff02::02(ff02::2) from :: wlp4s0: 56 data bytes
64 bytes from fe80::52c7:bfff:fe90:902c%wlp4s0: icmp_seq=1 ttl=64 time=45.4 ms
64 bytes from fe80::52c7:bfff:fe90:902c%wlp4s0: icmp_seq=2 ttl=64 time=1.65 ms
64 bytes from fe80::52c7:bfff:fe90:902c%wlp4s0: icmp_seq=3 ttl=64 time=1.62 ms
--- ff02::02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 5ms
rtt min/avg/max/mdev = 1.624/16.230/45.421/20.641 ms
I took some screenshots from web administration's interface (i.e. http://tplinkwifi.net) to help investigating further:
1) IPv6 internet settings
2) Possible options
3) IPv4 general settings
4) Their counter-part in IPv6
5) Other wireless network system parameters
EDIT2:
It seems the modem supplied by my ISP doesn't provide any IPv6 connectivity, so it became clear that I need a more recent modem or an IPv6 tunnel. I now consider the question answered, and I thank you user4556274, Johan Myréen and bey0nd for your insights :)
|
Maybe some basics first:
An IPv6 address of a host/interface always consists of 128 bits, which include the prefix (first 64 bits) and the interface identifier [IID] (last 64 bits). Therefore the CIDR notation for a host/interface address is always /64.
The scope of an IPv6 host/interface address is one of the following:
Link-Local: An address out of the fe80::/64 range. As the prefix is always fe80:0:0:0, there is no distinct separation of layer 3 networks, and therefore this address is only used for communication within the current layer 2 segment of the connected LAN.
Local (ULA): An address out of the fd00::/8 range, which consists of a 64-bit prefix and a 64-bit IID. It should only be routed in the LAN and not over the internet.
Global: An address out of the 2000::/3 range, which can be routed over the internet and also consists of a 64-bit prefix and a 64-bit IID.
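These ranges can be checked programmatically with Python's ipaddress module; a quick illustration using the link-local address from the question's ip addr output plus two hypothetical addresses for the other scopes:

```python
import ipaddress

LINK_LOCAL   = ipaddress.ip_network("fe80::/10")
UNIQUE_LOCAL = ipaddress.ip_network("fd00::/8")
GLOBAL_UNI   = ipaddress.ip_network("2000::/3")

def scope(text):
    """Classify an IPv6 address into one of the scopes described above."""
    a = ipaddress.ip_address(text)
    if a in LINK_LOCAL:
        return "link-local"
    if a in UNIQUE_LOCAL:
        return "unique-local"
    if a in GLOBAL_UNI:
        return "global"
    return "other"

print(scope("fe80::b08:601b:a8d8:1474"))  # address from the question
print(scope("fd12:3456:789a::1"))         # hypothetical ULA
print(scope("2001:db8::1"))               # documentation-range global address
```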
Your computer seems to have two network interfaces:
- enp0s31f6, which seems to be an Ethernet interface without a connection, and
- wlp4s0, which seems to be a wireless network interface connected to an AP.
As there is only a Link-Local address on the wireless interface, it seems that either
- this interface is not configured to accept any automatic configuration (SLAAC or DHCPv6), or
- the access point (AP) doesn't provide an IPv6 network.
edit: As the result of rdisc6 wlp4s0 shows, there is no IPv6 autoconfiguration information broadcast in your network, even though the router seems to be set to SLAAC + Stateless DHCP. Therefore, as Johan Myréen stated in his comment, you'll need to talk to your ISP to see if IPv6 is available at all, or find a way to tunnel IPv6 with a tunnel provider.
| How to add an IPv6 address? |
1,495,372,992,000 |
It used to be that you could force command line FTP to use IPv4 like so:
ftp -4 ftp.example.com
However, at some point in the relatively recent past the "-4" (and for that matter, the "-6") option seems to have been removed. Despite exhaustively searching the Web (even for the exact error "ftp: 4: unknown option") I can't find out how to, as the old man page reads, "Use only IPv4 to contact any host" and force use of IPv4. Instead I'm forced to wait for the client to time out on the IPv6 in the DNS before trying IPv4, which is waste of time.
Is there any other way to accomplish this?
And before I get lectured on the insecurity of FTP, I'm aware of that and my options. However, I'm connecting to a very old server with non-critical log-in credentials to retrieve non-sensitive data.
My ftp on Xubuntu 14.04 LTS supports the -4 option, but ftp on CentOS 7.7 doesn't.
|
-4 and -6 are options added by a patch in the Debian version of netkit-ftp; you’ll find these available in any Debian derivative. Fedora, RHEL and CentOS don’t have an equivalent patch, so their ftp doesn’t support these options.
To force IPv4, you could try specifying the target IP address rather than the host name.
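One way to do that without looking the address up by hand is getent ahostsv4, which resolves only IPv4 records. A sketch, using localhost as a stand-in so the lookup works anywhere; substitute the real FTP host:

```shell
host=localhost                     # placeholder for the real FTP server
addr=$(getent ahostsv4 "$host" | awk 'NR==1 {print $1}')
echo "$addr"
# ftp "$addr"                      # then connect via the literal IPv4 address
```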
| What happened to the "-4" option for command line FTP? |
1,495,372,992,000 |
I am using Red Hat Enterprise 6 and I'm trying to search through the /etc/ directory for files that contain any IPv4 address.
|
Sure:
grep -lrE '([0-9]{1,3}\.){3}[0-9]{1,3}' /etc
-l means list only matching files
-r is recursive
-E is extended regex
Regex taken from https://unix.stackexchange.com/a/296597/243015
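A quick way to sanity-check the command on throwaway files (the directory and file names here are made up):

```shell
tmp=$(mktemp -d)
printf 'nameserver 8.8.8.8\n' > "$tmp/resolv.conf"   # contains an IPv4 address
printf 'no addresses here\n'  > "$tmp/motd"          # does not
matches=$(grep -lrE '([0-9]{1,3}\.){3}[0-9]{1,3}' "$tmp")
echo "$matches"                                       # only resolv.conf matches
rm -r "$tmp"
```

Note the pattern is deliberately loose: it also matches non-addresses such as 999.999.999.999 or dotted version strings, which is usually acceptable for a quick audit of /etc.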
| How do I use grep to output only the names of files that contain any ipv4 address |
1,363,937,239,000 |
Linux systems have had a long-standing (>5 yr) problem configuring audio devices, especially commonplace headphones with combo jacks.
Since many people want to use their favorite linux systems for video chatting, there are records of frustrating unresolved problems all across various forums.
I get that the drivers for the external mic (in headphones with combo jacks) are not currently available (or not developed(?)).
So given that, the user should be able to use the internal microphone for input and headphone for output.
Along this line I went down the rabbit hole (dug up issues as old as 5-6 years) and tried a lot of things, only to get no success in the end (I am using common combo-jack headphones and an HP laptop running Ubuntu 16.04).
Many people have variously reported this issue.
Here's what commonly happens..
When headphone is not connected,
Internal microphone and speakers work well.
PulseAudio shows:
pacmd list-cards shows:
ports:
analog-input-internal-mic: Internal Microphone (priority 8900, latency offset 0 usec, available: unknown)
properties:
device.icon_name = "audio-input-microphone"
analog-input-mic: Microphone (priority 8700, latency offset 0 usec, available: no)
properties:
device.icon_name = "audio-input-microphone"
analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
properties:
device.icon_name = "audio-speakers"
analog-output-headphones: Headphones (priority 9000, latency offset 0 usec, available: no)
properties:
device.icon_name = "audio-headphones"
When headphone is connected,
Output through headphones work well.
But the external microphone (on the headphones) does not work (stuttering noise; drivers not present, fine), and then the internal microphone is 'unplugged' too, so there is no way to record any sound.
PulseAudio shows:
pacmd list-cards shows:
ports:
analog-input-internal-mic: Internal Microphone (priority 8900, latency offset 0 usec, available: no)
properties:
device.icon_name = "audio-input-microphone"
analog-input-mic: Microphone (priority 8700, latency offset 0 usec, available: yes)
properties:
device.icon_name = "audio-input-microphone"
analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: no)
properties:
device.icon_name = "audio-speakers"
analog-output-headphones: Headphones (priority 9000, latency offset 0 usec, available: yes)
properties:
device.icon_name = "audio-headphones"
So the headphones give the output, that's great, but is there any way to force the internal microphone for input? (somehow make it available: yes)
|
After many trials and tests I found that this issue can be solved through hardware rather than software. Using a 'USB Headphone/Microphone Splitter', I was able to force internal mic for input and headphones for output.
Output need to be set to 'USB Audio Device'.
Input would be the default built-in internal microphone. Just need to make sure that it is ON.
It works with Skype too: just set input and output to default and turn off the automatic microphone adjustment.
| Headphones with combo jack: Force internal mic for input and headphones for output |
1,363,937,239,000 |
Every time time I connect headphones to the 3.5mm audio jack on my Dell XPS 13, I hear continuous white noise in addition to the audio I expect to hear. It's much louder than the typical noise floor for a headphone jack.
I've found many other reports of this same problem for both the XPS 13 9350 (1, 2) and the XPS 13 9360 (1, 2, 3), so it doesn't seem like I have a faulty unit.
Is there a way to stop this noise?
|
Set Headphone Mic Boost gain to 10dB. Any other value seems to cause the irritating background noise in headphones. This can be done with amixer:
amixer -c0 sset 'Headphone Mic Boost' 10dB
To make this happen automatically every time your headphones are connected, install acpid.
Start it by running:
sudo systemctl start acpid.service
Enable it by running:
sudo systemctl enable acpid.service
Create the following event script /etc/acpi/headphone-plug:
event=jack/headphone HEADPHONE plug
action=/etc/acpi/cancel-white-noise.sh %e
Then create the action script /etc/acpi/cancel-white-noise.sh:
#! /bin/bash
amixer -c0 sset 'Headphone Mic Boost' 10dB
Now Headphone Mic Boost will be set to 10dB every time headphones are connected. To make this effective you need to restart your laptop.
| How to prevent white noise in headphones on Dell XPS 13 9350/9360 |
1,363,937,239,000 |
After I use Jack, the PulseAudio outputs and inputs are replaced by a dummy device. I've tried to kill PulseAudio and reload Alsa, but the only way I can use an Alsa-based application again is to reboot. I know that there must be a way to fix the problem without rebooting. I have had this problem in multiple Linux distros, including Ubuntu and currently Fedora 19.
Output of service alsa-utils restart:
Redirecting to /bin/systemctl restart alsa-utils.service
Failed to issue method call: Unit alsa-utils.service failed to load:
No such file or directory. See system logs and 'systemctl status
alsa-utils.service' for details.
And systemctl status alsa-utils.service:
alsa-utils.service
Loaded: error (Reason: No such file or directory)
Active: inactive (dead)
alsactl kill quit and alsactl init proceed with no errors.
|
The solution turned out to be simpler than it appeared. The output of fuser -v /dev/snd/* revealed jackd was silently hogging the audio card even after QjackCtl supposedly killed it. Running killall jackd fixed the problem. The problem wasn't with PulseAudio, but rather jackd running invisibly in the background.
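A minimal sketch of the same diagnosis, assuming fuser and killall are available (both come with the psmisc package on most distros):

```shell
# Show every process holding an ALSA device node; -v prints the
# user, PID and command name, so you can confirm it is jackd.
# A non-zero exit just means "nothing found", so keep going.
fuser -v /dev/snd/* || true

# If jackd shows up in that list, stop it; '|| true' keeps a
# script going when no jackd process exists.
killall jackd || true
```

After that, ALSA/PulseAudio applications should be able to open the card again without a reboot.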
| How to restart Alsa/PulseAudio after using Jack |
1,363,937,239,000 |
I am trying to set up Jack, as I've heard it's the Linux equivalent to ASIO on Windows. I play guitar for fun and thought it would be cool to play with Ardour or find a FOSS equivalent to Guitar Rig.
However I do not understand... well, anything. I don't understand what Jack does. From what I can gather, the general flow is
[sound hardware] → [kernel] → [JACK] → [ALSA] → [PulseAudio] → [Phonon] → [my headphones]
(Phonon comes in because I use KDE. I think.)
I don't actually know what the arrows represent. The JACK website contains essentially zero beginning user oriented documentation, except for one page describing how to use JACK with PulseAudio.
As a beginner who, regardless of JACK, doesn't understand how sound works in Linux, where can I go to learn? I'd like to gain an understanding of the sound stack. But for JACK all I was able to find is its barren Wiki (including two juicy links named Configuring and running a JACK server and Setting up a simple audio chain, which both turn out to be "Coming Soon" pages which haven't been edited in five years) and a Linux Journal article from 2005.
Many things confuse me. How can I tell which sound devices Linux recognizes? I have three: an onboard chip, a USB audio interface (an M-Audio FastTrack), and a USB webcam that has a microphone. Do all of these things get recognized by Linux? Do they all register specifically as sound devices? Does each device have to have independent drivers for JACK, ALSA, PulseAudio, etc.? Is there a basic way I can test my device to make sure it has output? Is there a way I can monitor my devices to see if the software is actually using them?
Right now Amarok sound is audible, but Youtube sound isn't. Amarok is also running through my USB FastTrack instead of my onboard sound chip. Hydrogen refuses to start, presumably because I have JACK or Alsa or something configured wrong. I have no idea how to figure out the rhyme or reason for these things.
|
In my endeavor with Linux sound I have ended up disabling autospawning of Pulse Audio (so it doesn't restart when shut down):
Add autospawn=no to ~/.pulse/client.conf.
Stop with pactl exit
Start with pulseaudio
When doing live sound work or the like, I shut down PA and run JACK only, with no PA bridge. I have never gotten latency satisfactorily low using PA or JACK+PA.
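The three steps above can be sketched as a script. The config path is the one from the answer; newer PulseAudio versions read ~/.config/pulse/client.conf instead, and `pulseaudio --start` daemonizes where plain `pulseaudio` runs in the foreground:

```shell
conf="$HOME/.pulse/client.conf"
mkdir -p "$(dirname "$conf")"

# Append autospawn=no only once, so the script is safe to re-run.
grep -qs '^autospawn=no' "$conf" || echo 'autospawn=no' >> "$conf"

# Stop the running daemon; ignore the error if none is running.
pactl exit 2>/dev/null || true

# ... jackd-only session goes here ...

# Bring PulseAudio back afterwards.
pulseaudio --start 2>/dev/null || true
```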
This article seems to give a rather good and quick introduction to the layers, which also mentions Phonon.
You have perhaps read this already, and it is also not up to date, but it may bring you closer to an understanding: Linux Music Workflow: Switching from Mac OS X to Ubuntu with Kim Cascone. Note the diagram above the heading "Workflow". (Which you can also find here under JACK Schematic diagram.) Also read the links, e.g. the one on top, Introduction to Linux Audio; even though it is from 2004, it gives you a quick view of ALSA.
Though I'm not too familiar with either myself, I believe a good approach is to split the learning into parts:
Get an understanding of ALSA
Get an understanding of JACK (Especially since you want to do studio work.)
Get an understanding of Pulse Audio
in that order. It is no wonder one struggles to grasp Linux sound; that has quite a bit to do with its history and how it all has evolved. That is also why, if one wants to truly understand it, it is worth learning that history. Thus again: ALSA is a good place to start. Do some sniffing around OSS, and work your way up.
A quick way to get it working might be to follow either of these guides.
Simplistically: ALSA is part of the kernel and knows how to handle various hardware. JACK, as well as PulseAudio, uses APIs to control and interact with the hardware. ALSA can also be used alone as a sound server. Applications use the JACK/PA APIs to do multithreaded sound work.
A quick view of your system can be achieved by running the alsa-info.sh script found here.
A very simplified diagram of a blurry view showing some of the connections:
+------------------------------------------------+
| SOUNDCARD |
|------------------------------------------------| _____ __
| ___________ | / \/ \
| | ADC | <---- analog in --[o---7 :===========|==|==|=[';]
| -----|----- | \____7 \__/
| __________ AMP | |
| | MIXER |----+------o |
| +---|---+-- AMP_____|______ | _______
| | | DAC | ---> analog out -[o------[ o o o ] ♫ ♬ ♪ ♩ ♭ ♪
| | +----------+ | | |
| | | | (o) |
| -- -+---^-- --v-- -- -- --^-- --v-- --+-- | | |
| CONTROLS | | ((0)) |
| | |_______|
| |
+------------------------------||----------------+
||
ADC: Analog to digital ||
DAC: Digital to analog |- udev trigged and mounted
_______________________________||________________
| |
| KERNEL |
|¨ ¨ ¨ ¨ ¨ ¨ ¨ ¨ ¨ ¨ ¨ ¨ ¨ ¨ ¨ -|-|-|-|- ¨ ¨ ¨ ¨ ¨|
| |
| ALSA API <--> [Device Drivers] |
| ^ | module-alsa-card +--------|--
| | | | |
+---------|--|---------------------------| Memory Buffer I/O
: | v | |
| +----|---|--
| JACK ------------ PULSE AUDIO --------------+ |
| sinks | |--
| * hardware-access-points * hardware-sink | | Uses ALSA API for HW I/O
| * virtual-devices * mediaplayer-sink | | Mixing, Control etc.
| * recorder-sink | |
| * ... | |--
| | |
|-----------------|------|--------------------|---|
| APPLICATIONS -----------------+ |
|-------------------------------------------------|
| |
| Software based mixing |
| |
+-------------------------------------------------+
| How do I use Jack? How does Linux sound work? [closed] |
1,363,937,239,000 |
Linux newbie: How do I use Jack? How does Linux sound work?
I have an app that is trying to output sound through ALSA or JACK, but I am not hearing anything.
Here are a couple of articles, from which I learned that ALSA is the kernel-mode sound driver for linux, and libasound is the user-space library to which applications interface.
Furthermore PulseAudio and JACK are audio servers/routers to allow multiple applications to control multiple hardware and applications.
Here is a nice article how to route all-applications -> jack -> PulseAudi -> ALSA. Looks pretty simple, but I do not understand why the need of so many layers. Why not directly jack to alsa?
How do I list the client applications, using alsa? (I need to find out why am I not hearing anything)
How do I route jack directly to alsa? Or should I do as the last article points out - route it through PulseAudio?
After all this is done, how do I list information on all client applications to jack?
|
I can understand your confusion, I've been there :)
Let's start with the fact that PulseAudio and JACK are both sound servers in a sense, though with different aims in mind. JACK is aimed at the professional audio user/musician, while PA aims at providing ease of use.
The audio route is a little different than what you have in your question.
all-applications->PA to jack sink->jack audio server -> libasound and ALSA.
This way PA, which is, as usual, the default audio output (sink), pipes the sound to JACK. The above looks like this in JACK's patchbay (after the sink and source modules have been loaded with load-module)
the 'system' entries are provided by the ALSA backend, while the PA JACK sink and source are provided by the PA to jack modules.
If you are running some flavour of ubuntu, then you can add the following in qjackctl -> setup -> "options" tab -> execute after startup
pactl load-module module-jack-sink channels=2; pactl load-module module-jack-source channels=2; pactl set-default-sink jack_out; pactl set-default-source jack_in
The above should load the "PA to jack" modules (2 channels, L+R, for each), and set the default playback device for all applications to be the PA to jack sink module. Additionally, it connects the line in/mic input to the PA to jack source input, so that applications that need access to the default input device (such as Skype) can get it through the PA to jack source module.
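To verify the bridge is actually in place after running that command, a couple of read-only queries should do (assuming a running PulseAudio daemon):

```shell
# The two bridge modules should show up in the module list...
pactl list short modules | grep -E 'module-jack-(sink|source)' \
    || echo "JACK bridge modules not loaded"

# ...and the defaults should now point at jack_out / jack_in.
pactl info | grep -E '^Default (Sink|Source):' || true
```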
Now if an application outputs sound to ALSA, it should play back through the default device, i.e. through PulseAudio. Which raises the question: do you really need JACK at all? And which application is that?
In any case, if the application is jack-aware it should show up on qjackctl's patchbay and then you can connect it in the audio path as you see fit.
For more information see here. Also JACK's FAQ and wiki are tremendously helpful.
| Linux sound: how does it work and why do I need to chain 3 architectures to use JACK? |
1,363,937,239,000 |
How do I print and connect jack-audio and midi ports from the command line, similar to aconnect -io or aconnect 20:0 132:1 for inputs and outputs of ALSA MIDI?
|
jack_lsp [options] [filter string]
is able to print all jack ports (Audio and MIDI).
From the help-text:
List active Jack ports, and optionally display extra information.
Optionally filter ports which match ALL strings provided after any options.
Display options:
-s, --server <name> Connect to the jack server named <name>
-A, --aliases List aliases for each port
-c, --connections List connections to/from each port
-l, --latency Display per-port latency in frames at each port
-L, --latency Display total latency in frames at each port
-p, --properties Display port properties. Output may include:
input|output, can-monitor, physical, terminal
-t, --type Display port type
-h, --help Display this help message
--version Output version information and exit
For more information see http://jackaudio.org/
To connect the ports from the command line, you can use jack_connect.
With jack_lsp you could get an output like this, showing all current JACK ports:
system:capture_1
system:capture_2
system:playback_1
system:playback_2
system:midi_capture_1
system:midi_playback_1
amsynth:L out
amsynth:R out
amsynth:midi_in
system:midi_playback_2
system:midi_capture_2
As an example, you could connect system:midi_capture_1 with amsynth:midi_in by running: jack_connect system:midi_capture_1 amsynth:midi_in
To see which ports are connected you could use jack_lsp -c and get an output similar to this:
system:capture_1
system:capture_2
system:playback_1
   amsynth:L out
system:playback_2
   amsynth:R out
system:midi_capture_1
   amsynth:midi_in
system:midi_playback_1
amsynth:L out
   system:playback_1
amsynth:R out
   system:playback_2
amsynth:midi_in
   system:midi_capture_1
system:midi_playback_2
system:midi_capture_2
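Since jack_lsp -c indents each connection under its owning port, the output is easy to post-process. For instance, a small awk filter (a sketch, assuming the usual indented-connection layout) can turn it into explicit pairs:

```shell
# Rewrite `jack_lsp -c` output as "port -> peer" lines.
list_connections() {
    awk '
        /^[^ ]/ { owner = $0; next }               # unindented line: a port name
        { sub(/^ +/, ""); print owner " -> " $0 }  # indented line: a connection
    '
}

# Usage: jack_lsp -c | list_connections
```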
| Print and connect Jack Audio and MIDI ports from the command line |
1,363,937,239,000 |
I am a bit confused about audio device names. If I use command aplay -l I get the list of all audio devices on my system:
**** List of PLAYBACK Hardware Devices ****
card 0: NVidia [HDA NVidia], device 0: VT1708S Analog [VT1708S Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: NVidia [HDA NVidia], device 2: VT1708S Alt Analog [VT1708S Alt Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: NVidia [HDA NVidia], device 3: VT1708S Digital [VT1708S Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: HDMI [HDA ATI HDMI], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 3: USB [Scarlett 2i4 USB], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0
Notice that the order is card 0, card 1, card 3 where card 2 is not listed. This confuses me.
I know that every entry here is a single device (not card) so if I am correct (and please confirm or correct me if I am wrong) I would name my soundcard "Scarlett 2i4" like hw:3,0? Or is it hw:2,0 because card 2 is missing?
Now when I open a JACK and want to tweak audio settings, I have different names than the ones above. The names are:
hw:USB,0
hw:USB
hw:0
plughw:0
/dev/audio
/dev/dsp
Where does a JACK get this device table? What kind of naming convention is this and how can I figure out which device is which (I want JACK to primarily use my "Scarlet 2i4")? Is there any terminal command which will let me know this?
At the moment my ~/.jackrc settings are like this:
/usr/bin/jackd -nziga-scarlet-2i4 -t2000 -dalsa -dhw:0 -r48000 -p128 -n2
|
Each card has a number (also called "index").
Typically, a driver grabs the first free number, but it's possible to force drivers to use another number. It's also possible for numbers to remain free because they were used previously by an unplugged device.
Each card has name (such as "HDA NVidia"), and a unique ID (such as "NVidia").
Each PCM device has a number/index (which is fixed, and determined by the driver), a name, and an ID (typically, ID and name are identical).
In a device name like hw:0,0, the first parameter is the card (either the card number, or the card ID), and defaults to 0.
The second parameter is the device number (using the ID is not possible), and defaults to 0.
In ALSA device names, hw specifies a hardware device, while plughw adds plugins to automatically convert sample formats and rates if the capabilities of the hardware and the application do not match. (JACK typically does not need this.)
Jack does not have a list of devices.
That window is the QJackCtl tool, which is commonly used to start Jack.
The /dev/audio and /dev/dsp devices are OSS devices; this interface is obsolete in Linux, and showing them in this list does not make sense (these devices are actually the same as hw:0).
The default list in QJackCtl does not show other cards than the first one; you have to click the button next to the list.
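The number-to-ID mapping can also be read straight from /proc/asound/cards; here is a small parsing helper (a sketch, not an official tool):

```shell
# Each card line in /proc/asound/cards looks like:
#  3 [USB            ]: USB-Audio - Scarlett 2i4 USB
# The bracketed token is the card ID, usable as hw:USB.
card_ids() {
    awk -F'[][]' 'NF > 2 { id = $2; gsub(/ /, "", id); printf "%s %s\n", $1 + 0, id }'
}

# Usage: card_ids < /proc/asound/cards
```

With that mapping, hw:3 and hw:USB name the same card, which is why QJackCtl can offer hw:USB,0 as a device string.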
| Alsa & JACK - card and device names (different naming conventions) |
1,363,937,239,000 |
I need a simple way to connect the midi keyboard to pulse audio and leave it active. ( i'm not worried about low latency.)
So far, I've looked at Ted's Linux MIDI Guide and followed all of that, but I reverted to a normal-latency kernel when the low-latency one caused trouble with my input devices. Following Ted's instructions, I can run /usr/bin/audio start and then the vmpk script, which is nice, but then I can't use Pulse (for watching tutorials on YouTube).
Is it best in the long run to use jack audio for everything, even on a normal 250hz kernel?
|
For beginners who don't need to fuss with studio-grade settings...
I made an executable file, pulsepiano, adapted from Ted's Linux MIDI Guide to use Pulse instead of JACK.
So far the only thing I can't get the script to do is hook up the MIDI out from the keyboard, but that might be another topic.
You have to install fluidsynth and vmpk, and get the soundfont FluidR3_GM.sf2. The trailing ampersand runs the command in the background. The aconnect info is also adapted from Ted's guide.
If you have problems,
use: kill -9 [PID of vmpk|fluidsynth|qsynth]
or: killall fluidsynth, killall vmpk, and so on.
Hope it isn't too much info. Without opening each app manually, this is about as beginner as it gets for midi.
#!/bin/bash
fluidsynth --server \
--no-shell \
--audio-driver=pulseaudio \
--gain=1.0 \
--reverb=0.42 \
--chorus=0.42 \
/usr/share/sounds/sf2/FluidR3_GM.sf2 &>/tmp/fluidsynth.out &
sleep 2
vmpk &
sleep 2
vmpkport=$(aconnect -i |grep "client.*VMPK Output" | cut -d ' ' -f 2)0
synthport=$(aconnect -i |grep "FLUID Synth" | cut -d ' ' -f 2)0
echo "vmpk on ${vmpkport} & synth on ${synthport}"
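A helper in the same spirit as the script's grep/cut lines can resolve the keyboard's client number too. 'USB Keyboard' below is a placeholder; check `aconnect -i` for the real client name on your system:

```shell
# Extract "client:port" for a named ALSA sequencer client from
# `aconnect -i` / `aconnect -o` output, e.g.
#   client 24: 'USB Keyboard' [type=kernel]   ->   24:0
midi_port() {
    grep "client.*$1" | head -n 1 | sed -n 's/^client \([0-9]*\):.*/\1:0/p'
}

# Usage:
#   kbd=$(aconnect -i | midi_port 'USB Keyboard')
#   synth=$(aconnect -o | midi_port 'FLUID Synth')
#   aconnect "$kbd" "$synth"
```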
| Simple way to connect midi keyboard to pulseaudio without using Jack |
1,363,937,239,000 |
Audacity has a very nice noise cancellation filter. Is it possible, using JACK with ALSA, to pipe live audio through Audacity's noise cancellation filter?
|
No; for real-time processing you would want to use Ardour.
| Live noise cancellation of microphone audio with JACK, ALSA, Audacity? |
1,363,937,239,000 |
What is the difference between software like JACK, PulseAudio, ALSA, etc.?
And how do these relate to the audio server and the audio device driver in a Linux system?
|
Very briefly:
ALSA contains the actual device drivers (in the kernel source), and a library to access those drivers. You can use sound perfectly fine with just ALSA alone.
PulseAudio implements an additional audio routing level on top of ALSA, including volumes and conversions. Most distros use the PulseAudio + ALSA combo as the default.
JACK is intended for high-fidelity, minimal-latency applications, like a digital audio workstation (DAW). It uses a single audio card as a master clock (while PulseAudio automatically converts between formats, bit rates, and clock skew between cards). Like PulseAudio, you can also route audio between devices. Unlike PulseAudio, it also handles MIDI.
Today, JACK also uses mostly the ALSA drivers.
"Audio server" isn't a particularly well-defined concept. ALSA is a library, while PulseAudio and JACK both run a server process. You can have other "audio servers" on top of that, depending on your definition.
Details are easy to find on the internet, e.g. with the link mentioned above in the comments.
| Pulse Audio vs ALSA vs audio server vs audio device driver |
1,363,937,239,000 |
I've installed jack2 as a substitution for jack from official repositories (I'm on Arch Linux):
# pacman -S jack2
I need to use jack2 because it provides jackd (it's needed for another application), while jack2_dbus does not provide it.
According to this manual, in order to configure such parameters as sampling rate, one should use jack_control, but it is available only for jack2_dbus (which I cannot use).
I also have read this article, but unfortunately, I can't follow it (it was written for jack, apparently jack2 does not include jackstart anymore):
[mark@arch ~]$ jackstart -R -d alsa -d hw:1U -p 512 -r 48000 -z s
bash: jackstart: command not found
I would like to somehow set the default audio card, because when an application uses JACK on my system, it uses the card with index 0, and this is not what I want (I want, say, the audio card with index 2).
Here is my ~/.asoundrc:
#
# ALSA Configuration File
#
defaults.ctl.card 2
defaults.pcm.card 2
defaults.dmix.rate 44100
defaults.dmix.channels 2
Is there a configuration file that controls which audio card will be used when an application invokes jackd? Are there any other means to set this parameter (and others)?
|
You choose the audio card only once, when starting jackd. You can list the cards available to ALSA with aplay -l (aplay is part of alsa-utils). Then you can start the JACK daemon and pick the card to use with jackd -d alsa -d hw:<card>,<device>.
| How to configure which sound card jack2 will use |
1,363,937,239,000 |
I am running espeak on Linux Mint 14.
Whenever I try to run it, it shows the following warnings (not errors, as it works correctly).
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
I searched the net about these kinds of errors and found this answer. I tried using this:
espeak "Hello, I am Espeak, the voice synthesizer" 2>/dev/null
This shows no warnings, but when I use it within my code, it shows the errors.
|
My espeak also returns similar messages:
$ espeak -v en-us+3 -s 120 -k 20 "Pray. For. Moe. Jo."
ALSA lib pcm.c:2212:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2212:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2212:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
Cannot connect to server socket err = No such file or directory
Cannot connect to server socket
jack server is not running or cannot be started
Redirecting them to /dev/null gets rid of them, but that's only hiding the messages:
$ espeak -v en-us+3 -s 120 -k 20 "Pray. For. Moe. Jo." 2>/dev/null
$
PulseAudio
According to this thread it looks like there is an issue with how PulseAudio is configured, specifically that there are PCMs in ALSA's configuration that aren't correct. The thread says you can safely ignore those if you like.
Specifically these messages:
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
The other messages are related to BlueTooth (hence the BT_...) in the message.
Specifically these messages:
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
In general it looks like all these messages can safely be ignored. If you're inclined to try to get rid of them, I would focus my attention on whether any Bluetooth services are running, and turn them off. Additionally, I'd look through the ALSA configurations under /etc/alsa and /etc/pulse.
Workaround
If you want to completely disregard these messages you can run espeak ... and redirect these messages to /dev/null.
espeak -v en-us+3 -s 120 -k 20 "Pray. For. Moe. Jo." &> /dev/null
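A middle ground between seeing everything and &>/dev/null is to filter only the known harmless lines off stderr. This is a sketch; the patterns match the messages shown above, and anything else still comes through:

```shell
# Drop the known ALSA/JACK chatter; any other stderr line passes.
drop_alsa_noise() {
    grep -vE '^(ALSA lib|Cannot connect|jack server)' || true
}

# bash usage, keeping stdout untouched:
#   espeak "Pray. For. Moe. Jo." 2> >(drop_alsa_noise >&2)
```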
| Espeak showing some warnings and Input output error |
1,363,937,239,000 |
I'm trying to play with ardour. When I started it up, it complained that jackd isn't running, so I ran jackd -d alsa, which displayed:
jackdmp 1.9.6
Copyright 2001-2005 Paul Davis and others.
Copyright 2004-2010 Grame.
jackdmp comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
no message buffer overruns
no message buffer overruns
JACK server starting in realtime mode with priority 10
audio_reservation_init
Acquire audio card Audio0
creating alsa driver ... hw:0|hw:0|1024|2|48000|0|0|nomon|swmeter|-|32bit
Using ALSA driver HDA-Intel running on card 0 - HDA Intel at 0xfc320000 irq 44
configuring for 48000Hz, period = 1024 frames (21.3 ms), buffer = 2 periods
ALSA: final selected sample format for capture: 32bit integer little-endian
ALSA: use 2 periods for capture
ALSA: final selected sample format for playback: 32bit integer little-endian
ALSA: use 2 periods for playback
It seems this didn't help much because running ardour2 displayed the following:
WARNING: Your system has a limit for maximum amount of locked memory!
This might cause Ardour to run out of memory before your system runs
out of memory. You can view the memory limit with 'ulimit -l', and it
is normally controlled by /etc/security/limits.conf
Ardour 2.8.11
(built using 7387 and GCC version 4.4.5)
Copyright (C) 1999-2008 Paul Davis
Some portions Copyright (C) Steve Harris, Ari Johnson, Brett Viren, Joel Baker
Ardour comes with ABSOLUTELY NO WARRANTY
not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
This is free software, and you are welcome to redistribute it
under certain conditions; see the source for copying conditions.
loading default ui configuration file /etc/ardour2/ardour2_ui_default.conf
loading user ui configuration file /home/wena/.ardour2/ardour2_ui.conf
Loading ui configuration file /etc/ardour2/ardour2_ui_dark.rc
theme_init() called from internal clearlooks engine
ardour: [INFO]: Ardour will be limited to 1024 open files
loading system configuration file /etc/ardour2/ardour_system.rc
ardour: [INFO]: No H/W specific optimizations in use
librdf warning - Model does not support contexts
librdf warning - Model does not support contexts
librdf warning - Model does not support contexts
ardour: [INFO]: looking for control protocols in /home/wena/.ardour2/surfaces/:/usr/lib/ardour2/surfaces/
ardour: [INFO]: Control protocol Tranzport not usable
ardour: [INFO]: Control surface protocol discovered: "Mackie"
ardour: [INFO]: Control surface protocol discovered: "Generic MIDI"
powermate: Opening of powermate failed - No such file or directory
ardour: [INFO]: Control protocol powermate not usable
Cannot connect to server socket err = Connection refused
Cannot connect to server socket
jack server is not running or cannot be started
[note] These are native Debian packages.
|
A snippet from the Debian-specific README file for ardour (located at "/usr/share/doc/ardour/README.Debian"):
You have to run jackd and ardour as the same user.
That is, I assumed I was supposed to start jackd as root.
| Issues with ardour and jackd |
1,363,937,239,000 |
I am using jackd and pulseaudio with the pulseaudio-jack-module so that I can have pulseaudio and jack running at the same time. I mostly use jack applications, except for web browsing and a few other applications.
I am trying to record audio but I get really bad feedback if I try to record audio. If I plug in my headphones the feedback mostly goes away but if I touch the laptop I can hear it in my recording.
It sounds like Linux is still recording through my built-in mic. I am wondering if I can fix this, or will I need to buy a USB mic or something like that?
I am using kxstudio's audio
Here's some debugging output
aplay -L
null
Discard all samples (playback) or generate zero samples (capture)
pulse
PulseAudio Sound Server
default
Playback/recording through the PulseAudio sound server
sysdefault:CARD=Loopback
Loopback, Loopback PCM
Default Audio Device
front:CARD=Loopback,DEV=0
Loopback, Loopback PCM
Front speakers
surround21:CARD=Loopback,DEV=0
Loopback, Loopback PCM
2.1 Surround output to Front and Subwoofer speakers
surround40:CARD=Loopback,DEV=0
Loopback, Loopback PCM
4.0 Surround output to Front and Rear speakers
surround41:CARD=Loopback,DEV=0
Loopback, Loopback PCM
4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=Loopback,DEV=0
Loopback, Loopback PCM
5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=Loopback,DEV=0
Loopback, Loopback PCM
5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=Loopback,DEV=0
Loopback, Loopback PCM
7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
dmix:CARD=Loopback,DEV=0
Loopback, Loopback PCM
Direct sample mixing device
dmix:CARD=Loopback,DEV=1
Loopback, Loopback PCM
Direct sample mixing device
dsnoop:CARD=Loopback,DEV=0
Loopback, Loopback PCM
Direct sample snooping device
dsnoop:CARD=Loopback,DEV=1
Loopback, Loopback PCM
Direct sample snooping device
hw:CARD=Loopback,DEV=0
Loopback, Loopback PCM
Direct hardware device without any conversions
hw:CARD=Loopback,DEV=1
Loopback, Loopback PCM
Direct hardware device without any conversions
plughw:CARD=Loopback,DEV=0
Loopback, Loopback PCM
Hardware device with all software conversions
plughw:CARD=Loopback,DEV=1
Loopback, Loopback PCM
Hardware device with all software conversions
sysdefault:CARD=PCH
HDA Intel PCH, CS4208 Analog
Default Audio Device
front:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
Front speakers
surround21:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
2.1 Surround output to Front and Subwoofer speakers
surround40:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
4.0 Surround output to Front and Rear speakers
surround41:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
iec958:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Digital
IEC958 (S/PDIF) Digital Audio Output
dmix:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
Direct sample mixing device
dmix:CARD=PCH,DEV=1
HDA Intel PCH, CS4208 Digital
Direct sample mixing device
dsnoop:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
Direct sample snooping device
dsnoop:CARD=PCH,DEV=1
HDA Intel PCH, CS4208 Digital
Direct sample snooping device
hw:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
Direct hardware device without any conversions
hw:CARD=PCH,DEV=1
HDA Intel PCH, CS4208 Digital
Direct hardware device without any conversions
plughw:CARD=PCH,DEV=0
HDA Intel PCH, CS4208 Analog
Hardware device with all software conversions
plughw:CARD=PCH,DEV=1
HDA Intel PCH, CS4208 Digital
Hardware device with all software conversions
hdmi:CARD=NVidia,DEV=0
HDA NVidia, HDMI 0
HDMI Audio Output
hdmi:CARD=NVidia,DEV=1
HDA NVidia, HDMI 1
HDMI Audio Output
hdmi:CARD=NVidia,DEV=2
HDA NVidia, HDMI 2
HDMI Audio Output
dmix:CARD=NVidia,DEV=3
HDA NVidia, HDMI 0
Direct sample mixing device
dmix:CARD=NVidia,DEV=7
HDA NVidia, HDMI 1
Direct sample mixing device
dmix:CARD=NVidia,DEV=8
HDA NVidia, HDMI 2
Direct sample mixing device
dsnoop:CARD=NVidia,DEV=3
HDA NVidia, HDMI 0
Direct sample snooping device
dsnoop:CARD=NVidia,DEV=7
HDA NVidia, HDMI 1
Direct sample snooping device
dsnoop:CARD=NVidia,DEV=8
HDA NVidia, HDMI 2
Direct sample snooping device
hw:CARD=NVidia,DEV=3
HDA NVidia, HDMI 0
Direct hardware device without any conversions
hw:CARD=NVidia,DEV=7
HDA NVidia, HDMI 1
Direct hardware device without any conversions
hw:CARD=NVidia,DEV=8
HDA NVidia, HDMI 2
Direct hardware device without any conversions
plughw:CARD=NVidia,DEV=3
HDA NVidia, HDMI 0
Hardware device with all software conversions
plughw:CARD=NVidia,DEV=7
HDA NVidia, HDMI 1
Hardware device with all software conversions
plughw:CARD=NVidia,DEV=8
HDA NVidia, HDMI 2
Hardware device with all software conversions
aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Loopback [Loopback], device 0: Loopback PCM [Loopback PCM]
Subdevices: 8/8
Subdevice #0: subdevice #0
Subdevice #1: subdevice #1
Subdevice #2: subdevice #2
Subdevice #3: subdevice #3
Subdevice #4: subdevice #4
Subdevice #5: subdevice #5
Subdevice #6: subdevice #6
Subdevice #7: subdevice #7
card 0: Loopback [Loopback], device 1: Loopback PCM [Loopback PCM]
Subdevices: 8/8
Subdevice #0: subdevice #0
Subdevice #1: subdevice #1
Subdevice #2: subdevice #2
Subdevice #3: subdevice #3
Subdevice #4: subdevice #4
Subdevice #5: subdevice #5
Subdevice #6: subdevice #6
Subdevice #7: subdevice #7
card 1: PCH [HDA Intel PCH], device 0: CS4208 Analog [CS4208 Analog]
Subdevices: 0/1
Subdevice #0: subdevice #0
card 1: PCH [HDA Intel PCH], device 1: CS4208 Digital [CS4208 Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 2: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 2: NVidia [HDA NVidia], device 7: HDMI 1 [HDMI 1]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 2: NVidia [HDA NVidia], device 8: HDMI 2 [HDMI 2]
Subdevices: 1/1
Subdevice #0: subdevice #0
I would just like to be able to record some short clips, not for music or anything, more like a voice over.
I have qas mixer and it
In the above screenshot I have headphones plugged in so that I don't blow out my ears.
I made this sample clip; it's very short, but you can literally hear my keystrokes and me moving my hands as I try to type. You can listen to the 10 second clip here
So does this mean that I need a usb mic or is there something wrong with my audio setup? How could I fix this?
|
It does look like you're boosting your mic gain; that does make noise more pronounced.
Also, you probably have too many mics active. You should methodically go through each capture option and mute it, and then unmute one-by-one until you identify the one you actually are using.
| linux audio feedback |
1,363,937,239,000 |
There is no sound anymore in my headphones (jack connection) even though it used to work.
During my last session, I made quite a lot of tweaks to my installation; so far, here is what I could retrieve:
sudo apt install qt5ct qt4-qtconfig libqt5svg5 kvantum
Updates with the update manager (including upgrading to Linux kernel 4.15.0.46.48)
Since then, I have rebooted and that is when I noticed sound was gone.
Headphones seem to be detected when plugged in ("Headphones built-in audio" shows up in the Device section of the Audio configuration GUI when I plug them in)
The headphones work (tested elsewhere)
I've tried booting on the previous kernel and removing the newly installed one (current situation: I'm on 4.15.0.45 now), and I still
have no sound.
This is all beyond my knowledge, and my research has led me nowhere.
How can I troubleshoot such an issue?
|
I've been given the answer on the LinuxMint forum:
To manually reload ALSA: sudo alsa force-reload
It sounds like a strange solution, as I would assume that rebooting would reload the modules anyway. But apparently it didn't, and this solution fixed the issue for good.
| No sound with headphones in Linuxmint 19.1 |
1,363,937,239,000 |
In the shell, when I pipe jack_cpu_load through sed, or cut, no matter what options I use it stops printing just before the lines I want to see.
jack_cpu_load | sed -n 8p will print:
Jack: JackClient::kActivateClient name = jack_cpu_load ref = 4
The very next line should read something like jack DSP load 0.294772 which is what I'm looking for, but when I run jack_cpu_load | sed -n 9p which should print that line, there is nothing. Just a cursor, until I hit Ctrl+C and kill it.
Unfortunately there is very little documentation on this command and I'm just a user, a musician no less, trying to hack together something that will let me see the dsp load at a glance in my status bar.
Terminal Output:
tony@hydra ~ $ jack_cpu_load
Jack: JackClient::SetupDriverSync driver sem in flush mode
Jack: JackPosixSemaphore::Connect name = jack_sem.1000_default_jack_cpu_load
Jack: JackPosixSemaphore::Connect sem_getvalue 0
Jack: Clock source : system clock via clock_gettime
Jack: JackLibClient::Open name = jack_cpu_load refnum = 4
Jack: JackClient::Activate
Jack: JackClient::ClientNotify ref = 4 name = jack_cpu_load notify = 2
Jack: JackClient::kActivateClient name = jack_cpu_load ref = 4
jack DSP load 0.163633
jack DSP load 0.159914
jack DSP load 0.159449
jack DSP load 0.164087
jack DSP load 0.159971
^CJack: jack_client_close
For this:
tony@hydra ~ $ jack_cpu_load 2>&1 | sed -n 8p
Jack: JackClient::kActivateClient name = jack_cpu_load ref = 4
And for this:
tony@hydra ~ $ jack_cpu_load 2>&1 | sed -n 9p
There's nothing.
The output of strace -f jack_cpu_load is here: http://justpaste.it/e7st
|
Maybe the output doesn't all go to stdout. Try jack_cpu_load 2>&1 | sed -n 8p
Or it is a buffering issue. Try stdbuf -i0 -o0 -e0 jack_cpu_load | sed -n 9p
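The buffering effect can be seen with standard tools: when writing to a pipe, GNU sed block-buffers its output unless -u is given (stdbuf -oL does the same for arbitrary programs). A small sketch with simulated jack_cpu_load output:

```shell
# With -u, sed flushes each selected line immediately instead of
# holding it in a block buffer until the buffer fills or input ends.
printf 'Jack: JackClient::Activate\njack DSP load 0.163633\n' | sed -nu '2p'
# prints: jack DSP load 0.163633
```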
| Pipe output of jack_cpu_load through sed |
1,363,937,239,000 |
I have the Jack Audio Connection Kit (JACK) installed, but cannot seem to get jack_control start to start the service.
I'm using Slackware64-current, which recently updated its /etc/dbus-1/system.conf to have a more restrictive configuration:
<!-- ... -->
<policy context="default">
<!-- All users can connect to system bus -->
<allow user="*"/>
<!-- Holes must be punched in service configuration files for
name ownership and sending method calls -->
<deny own="*"/>
<deny send_type="method_call"/>
<!-- Signals and reply messages (method returns, errors) are allowed
by default -->
<allow send_type="signal"/>
<allow send_requested_reply="true" send_type="method_return"/>
<allow send_requested_reply="true" send_type="error"/>
<!-- All messages may be received by default -->
<allow receive_type="method_call"/>
<allow receive_type="method_return"/>
<allow receive_type="error"/>
<allow receive_type="signal"/>
<!-- Allow anyone to talk to the message bus -->
<allow send_destination="org.freedesktop.DBus"/>
<!-- But disallow some specific bus services -->
<deny send_destination="org.freedesktop.DBus"
send_interface="org.freedesktop.DBus"
send_member="UpdateActivationEnvironment"/>
</policy>
Ever since the update, running jack_control start as a regular user produces the following error:
--- start
DBus exception: org.jackaudio.Error.Generic: failed to activate
dbusapi jack client. error is -1
It did not do this before. The new configuration file says I'm supposed to punch a hole for it in the service configuration files. I'm not even quite sure what DBUS has to do with JACK.
Extra information:
JACK2 SVN revision 4120 (2011-02-09)
DBUS version 1.4.1
DBUS-Python version 0.83.1
|
I figured this out a while ago. Turns out it was a CAS-ARMv7 patch to JACK that broke DBUS functionality and I managed to fix using this patch. The issues were resolved some time ago in the JACK subversion repository and it works fine now.
| Configuring DBUS to start JACK |
1,363,937,239,000 |
I'm currently trying to script the headphone plug-in/out event. I found out that I can script this quite easily as an acpi event..
I created a file in /etc/acpi/events/ with the event event=jack[ /]headphone, which then just calls my script.
I've also determined the file and exact line, which holds information about whether the headphones are currently plugged in or not. In the file /proc/asound/card0/codec#0 one specific Pin-ctls: is 0x00 if plugged in and 0x40: OUT if unplugged.
Now the problem I see there: when I check the current status of the headphone jack as soon as the ACPI event is triggered, will the codec#0 file already contain the current value? Might I have a race condition here? Or is it safe to use like that?
|
I found out that the problem can be easily circumvented, by checking for the specific plug/unplug event on the jack. The solution below will give the script the information about the specific jack events, which will mute sound, when the jack is unplugged.
/etc/acpi/events/jack:
event=jack[ /]headphone
action=/etc/acpi/actions/jack.sh "%e"
/etc/acpi/actions/jack.sh:
#!/bin/bash
event=$(echo "$1" | cut -d " " -f 3)
case "$event" in
plug)
;;
unplug)
amixer set Master mute
;;
*)
#null
esac
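The field extraction in the script can be checked on its own. Assuming acpid passes an event string of the form jack/headphone HEADPHONE unplug (the exact format is an assumption, implied by the script above), field 3 is the action keyword:

```shell
# Field 3 of the hypothetical event string is the plug/unplug keyword
# that the case statement switches on.
echo "jack/headphone HEADPHONE unplug" | cut -d " " -f 3
# prints: unplug
```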
| Possible race condition when scripting headphone plug-in event |
1,363,937,239,000 |
The only command line solution for gapless playback I found so far (working with ALSA and JACK) is moc (»music on console«). While I'm still searching for a simpler way I was wondering if it is possible to loop an audio file into a new file for a given number of times?
Something like:
loop-audio infile.flac --loop 32 outfile.flac
for repeating infile.flac 32 times into outfile.flac
|
Sometimes it is just good to know that life with Linux can be as easy as imagined, in this case by using SoX (Sound eXchange):
sox infile.flac outfile.flac repeat 32
this even works with different file formats like:
sox infile.flac outfile.mp3 repeat 32
would loop into a 128 kbps MP3
other bit rates can be set using the option:
-C|--compression FACTOR Compression factor for output format
a 320 kbps MP3 can be obtained with this command:
sox infile.flac -C 320 outfile.mp3 repeat 32
and finally a simple gapless playback from the command line with mpv:
mpv --loop-file infile.flac
or the same even simpler:
mpv --loop infile.flac
| Loop audio file from the command line (gapless) or into new file |
1,363,937,239,000 |
I made the stupid mistake of installing jack2 while using pulseaudio. The audio wasn't working at all (and I realized I didn't need Jack) so I decided to remove jack2. Now I do have sound (with the laptop's built-in speakers), but pavucontrol is not loading (frozen at "Establishing connection with pulseaudio. Please wait...") and I get this when I run pulseaudio:
~>$ pulseaudio
E: [pulseaudio] ltdl-bind-now.c: Failed to open module module-jack-sink.so: module-jack-sink.so: cannot open shared object file: No such file or directory
E: [pulseaudio] module.c: Failed to open module "module-jack-sink".
E: [pulseaudio] main.c: Module load failed.
E: [pulseaudio] main.c: Failed to initialize daemon.
How can I remove everything that depends on jack? I want to run plain pulseaudio.
I'm using arch linux
Thanks!
|
I managed to solve it by removing all local config files from ~/.config/pulse and ~/.pulse
Now it works perfectly
| Remove everything jack related |
1,363,937,239,000 |
Even with the realtime kernel, following the steps here: https://jackaudio.org/faq/linux_rt_config.html and setting jack to realtime (QJackCtl -> Settings -> Parameters -> Realtime, or using the jackd -R command line arg), I was getting a ton of XRUNs (visible in QJackCtl as the red number, or in the Messages dialog), making the sound stutter a lot.
Increasing the Frames/Period and Periods/Buffer made the XRUNs go away, but increased the latency to a few hundred ms.
It works on a different computer with a similar manjaro installation, so I guess it might be somewhat related to the hardware. The mainboard is a "MSI B450M Mortar Max", it's an AMD system. aplay --list-devices says Realtek ALC892, lspci -v | grep -i audio says "Starship/Matisse HD Audio Controller".
|
It works when I increase the Sample Rate. I have it set to 88200 now and it works really well, without XRUNs and stuttering.
You can do that in QJackCtl in Setup... -> Settings -> Parameters
| Lots of XRUNs in Jack |
1,363,937,239,000 |
I have two computers and they have pretty similar manjaro installations. Both have the same jack2, QJackCtl and kernel versions installed.
Computer 1
This is the one that works:
I can start jack and hear stuff in lmms and Hydrogen. Other audio output from pulseaudio will then stop which is expected afaik. When stopping and starting jack in QJackCtl on this one, it looks like this:
01:01:48.817 Client deactivated.
01:01:48.827 JACK is stopping...
Jack main caught signal 15
Released audio card Audio0
audio_reservation_finish
01:01:49.074 JACK was stopped
01:01:51.610 JACK is starting...
01:01:51.611 /usr/bin/jackd -dalsa -dhw:0 -r48000 -p1024 -n2
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
01:01:51.652 JACK was started with PID=1969.
Cannot create RT messagebuffer thread: Operation not permitted (1)
Retrying messagebuffer thread without RT scheduling
Messagebuffer not realtime; consider enabling RT scheduling for user
no message buffer overruns
Cannot create RT messagebuffer thread: Operation not permitted (1)
Retrying messagebuffer thread without RT scheduling
Messagebuffer not realtime; consider enabling RT scheduling for user
no message buffer overruns
Cannot create RT messagebuffer thread: Operation not permitted (1)
Retrying messagebuffer thread without RT scheduling
Messagebuffer not realtime; consider enabling RT scheduling for user
no message buffer overruns
jackdmp 1.9.14
Copyright 2001-2005 Paul Davis and others.
Copyright 2004-2016 Grame.
Copyright 2016-2019 Filipe Coelho.
jackdmp comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
JACK server starting in realtime mode with priority 10
self-connect-mode is "Don't restrict self connect requests"
Cannot lock down 82280346 byte memory area (Cannot allocate memory)
audio_reservation_init
Acquire audio card Audio0
creating alsa driver ... hw:0|hw:0|1024|2|48000|0|0|nomon|swmeter|-|32bit
configuring for 48000Hz, period = 1024 frames (21.3 ms), buffer = 2 periods
ALSA: final selected sample format for capture: 32bit integer little-endian
ALSA: use 2 periods for capture
ALSA: final selected sample format for playback: 32bit integer little-endian
ALSA: use 2 periods for playback
Cannot use real-time scheduling (RR/10) (1: Operation not permitted)
AcquireSelfRealTime error
01:01:53.832 JACK connection change.
01:01:53.834 Server configuration saved to "/home/mango/.jackdrc".
01:01:53.835 Statistics reset.
01:01:53.866 Client activated.
01:01:53.867 Patchbay deactivated.
01:01:53.882 JACK connection graph change.
Cannot lock down 82280346 byte memory area (Cannot allocate memory)
Computer 2
On this one, all pulseaudio apps will keep playing sound, lmms and Hydrogen won't. When stopping and starting jack in QJackCtl, this is all I see:
00:52:35.422 Client deactivated.
00:52:36.599 JACK connection change.
00:52:36.618 Client activated.
00:52:36.619 Patchbay deactivated.
Cannot lock down 82280346 byte memory area (Cannot allocate memory)
As you can see, it's not logging much stuff.
Inserting the /usr/bin/jackd -dalsa -dhw:0 -r48000 -p1024 -n2 command from the working machine here yields:
`default' server already active
Failed to open server
If I run the same command directly after a reboot or after using jack_control stop, it yields:
audio_reservation_init
Acquire audio card Audio0
creating alsa driver ... hw:0|hw:0|1024|2|48000|0|0|nomon|swmeter|-|32bit
ALSA: Cannot open PCM device alsa_pcm for playback. Falling back to capture-only mode
Released audio card Audio0
audio_reservation_finish
Cannot initialize driver
JackServer::Open failed with -1
Failed to open server
Same for jackd -d alsa
The PCM error message does not appear on computer 1.
On both machines, QJackCtl claims Jack to be "Active"
Where could I continue to look for the problem?
Thanks
|
The solution for no audio playing at all was to check the output devices in QJackCtl
Setup... -> Settings -> Advanced -> Output Device
and to set it to my soundcard.
| jack is not playing audio, pulseaudio keeps being active |
1,363,937,239,000 |
I have failed to pipe the recorded audio input on my audio device (an ALSA-driven HiFiBerry with an ADC, hosted on a Raspberry Pi) despite trying my best: piping the output into tee after arecord, installing and setting up pulseaudio and using parec, as well as jackd and jack_capture.
Critical diagram of my mission:
Line level audio -> ADC -> PCM/Wav File -> DAC -> line level audio.
The problem is that no output ever plays during recording.
I am recording a line level input and would like to hear it. Latency is not critical but I expect it to be 200ms or less.
I have succeeded in this incredibly hackish solution which is just to "arecord" in one terminal window and "aplay" in another (which works) but this cannot possibly be the solution to my problem.
To compound the issue, my attempts at googling this have failed miserably as google believes I simply must be trying to capture the output of one application to a file. I am not. I want to monitor the sound card's input. In Apple's Logic Pro this is referred to as "software monitoring" -- I figured this would be easy. I have also seen it referred to as "play-through" but maybe this is something else.
My hopes were raised with pulseaudio -- it's just "sources and sinks" they said.
I did succeed to record a pcm file with pulseaudio's parec, alsa's arecord and jackd + jack_capture. I'm clearly missing something obvious.
parec -d alsa_input.platform-soc_sound.stereo-fallback | sox -t raw -b 16 -e signed -c 2 -r 44100 - /mnt/audio/pulsetest.wav
Clearly my os and hardware can "duplex" because I can arecord and aplay at the same time.
Can this be done or should I continue using arecord and aplay?
|
Please try something like
pacmd load-module module-loopback source="alsa_input.platform-soc_sound.stereo-fallback" sink="alsa_output.whatever"
where "alsa_output.whatever" is the actual name of the line-out sink. You can see the sinks with
pacmd list-sinks | grep name:
I thought "loopback" is a term relegated to capturing system audio
"Loopback" is a generic term for sending some output back to some input. You have a loopback interface for networking (lo), a loopback function on real LAN devices, an ALSA loopback driver, and the Pulseaudio loopback module, and probably many more.
I set default-sample-format and rate in the pulse conf
Don't do this, Pulseaudio does resampling. Don't mess with anything else, just establish a module-loopback connection from your source to your sink.
It should be so simple.
It is simple. I have no idea what you are doing to make it difficult.
I guess I should bust out a sniffer and probe the chips to see what alsa is doing.
This is a fun thing to do (in particular on real PC, with Intel HDA drivers - on a RaspPi it's very boring), but since you are working on the Pulseaudio level, and not ALSA, probably not helpful. As you have a working aplay and arecord, this shouldn't be an issue.
If you have the additional requirement (which you never mentioned in your question) of having the source and the sink use 96k/24bit, then this is going to be a journey in Pulseaudio: Pulseaudio is designed to run ALSA hardware sources/sinks with reasonable defaults, and then will upsample/downsample when streaming, as required. Fiddling with Pulseaudio internals to change that is tricky.
Have a look at the module-loopback docs: You can set the desired rate and format with extra parameters (and that's what you should do), but there's no guarantee that the source and sink will end up in this mode, and Pulseaudio won't do resampling somewhere.
You also didn't say where this requirement comes from; if you want to use the RaspPi to do professional-grade audio-processing (DAW, digital audio workstation), then JACK is a better choice than Pulseaudio. But all of this depends on your situation.
That said, if you are running module-loopback together with parec --rate=96000 --format=s24le, then of course it cannot change bitrate and format on a running sink, but if you already got the format and bitrate you want, then just leave it alone and be happy.
And yes, with Pulseaudio, you can have multiple streams from a source, and multiple streams into a sink.
| What is the ideal way to play-through or monitor audio input using arecord, parec or jackd? |
1,618,762,703,000 |
There are several guides of how to use JACK with ALSA dmix plugin, like this and this. All of them suggest to route the JACK output through dmix, which causes a latency to the jackified programs. To avoid it, I've decided to plug dmix into JACK instead. I tried the following .asoundrc:
pcm.!default {
type plug
slave.pcm "dmixer"
}
pcm.dmixer {
type dmix
ipc_key 1024
slave {
pcm "jack"
period_time 0
period_size 1024
buffer_size 4096
rate 48000
format S24_3LE
}
bindings {
0 0
1 1
}
}
pcm.jack {
type jack
playback_ports {
0 system:playback_1
1 system:playback_2
}
capture_ports {
0 system:capture_1
1 system:capture_2
}
}
But when I try to use it, I receive an error.
$ aplay test
ALSA lib pcm_direct.c:1525:(_snd_pcm_direct_get_slave_ipc_offset) Invalid type 'jack' for slave PCM
aplay: main:722: audio open error: Invalid argument
Is there any way to route dmix output through JACK?
|
The dmix plugin works only with a hw plugin as slave.
If you want to mix the output of Jack and other programs, use Jack on top of dmix, or consider using PulseAudio.
| Plug dmix into JACK |
1,618,762,703,000 |
I built amSynth 1.5.1 from source.
JACK init failed:
error: could not open ALSA MIDI interface
However, I had amSynth 1.3.2 (the one in the Mint 17 repositories) running perfectly fine.
What caused this/how can I fix it?
|
Discovery: the reason this happened is because I did not configure amsynth to be built with JACK and ALSA support when compiling.
Look at the output of running ./configure:
| Build with ALSA support............................... : no
| Build with JACK support............................... : no
Two steps to fix:
Make sure the following packages are installed: libjack-jackd2-dev libasound2-dev
Go back to the amsynth-1.5.1 directory and run sudo ./configure --with-alsa --with-jack followed by the usual sudo make and sudo make install
| AmSynth 1.5.1 with jack and alsa (JACK init failed: error: could not open ALSA MIDI interface) |
1,618,762,703,000 |
Sound newbie here.
I'm trying to configure listener but get A LOT of errors.
My goal is to record sound using that tool from external usb mic which is in webcam.
So, I have a headless (no X running) Raspberry Pi Model B+ running Raspbian 10. There's no realtime priority because I was unable to set it up on this OS, and to be honest I'm unsure I need it: I'm OK if the recording is a bit shifted in time.
I had set up libsndfile and portaudio as well as
apt install -y jackd2 pulseaudio-module-jack jack-tools libasound2-dev libbjack-ocaml libbjack-ocaml-dev libjack-jackd2-0 libjack-jackd2-dev
I do see the device and was able to record sound by
arecord -D hw:C525,0 -d 5 -f dat test.wav -c 1
By plugging the device in and out I found that it is mapped as /dev/media2, /dev/video0 and /dev/video1 (those disappear when the webcam is unplugged) so I tried to run setlistener /dev/media2 but it fails with errors (same as linked above).
I tried (to be honest, I don't fully understand what it does):
[as user] pulseaudio --start
[below as root]
export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket
# the file above does exist
export DISPLAY=":0"
jackd -r -d alsa
jackdmp 1.9.12
...
xcb_connection_has_error() returned true
JACK server starting in non-realtime mode
self-connect-mode is "Don't restrict self connect requests"
audio_reservation_init
dbus_bus_request_name() failed. (1)
Failed to acquire device name : Audio0 error : Connection ":1.23" is not allowed to own the service "org.freedesktop.ReserveDevice1.Audio0" due to security policies in the configuration file
Audio device hw:0 cannot be acquired...
Cannot initialize driver
JackServer::Open failed with -1
Failed to open server
The "is not allowed" part is confusing because I run it as root. Also tried jackd -r -d C525, jackd -r -d hw:C525 and even jackd -r -d hw:C525,0 but these three return
xcb_connection_has_error() returned true
Unknown driver "[the name]"
What am I doing wrong?
Alternatively, I'm looking for a tool which will record audio only when the sound is louder than a certain dB level.
|
Partial answer:
I've never used listener, and your link doesn't seem to include a man page. But it says there's an ALSA version of listener.
So since you are running headless anyway: Remove PulseAudio, remove JACK, remove DBUS (unless you need that for something else). Download the ALSA version of listener, point it directly to your hw:C525,0 device (or maybe use plughw instead, if you need format conversion).
This should get rid of all the trouble with PulseAudio and JACK running at the same time (bad idea in the first place), either of them hogging the actual hardware, and one of them trying to access X through funny dependencies.
| setlistener: errors with jack and alsa |
1,618,762,703,000 |
After an upgrade to Debian wheezy (I did not upgrade the kernel - it is still 3.8.2) I can no longer start jackd in the way I used to. I get "you are not allowed to use realtime scheduling".
My investigation shows that this is related to a sudo command in my script, where I sudo from root to martin. The sudo is required because I start jackd when my firewire mixing console gets switched on, using an udev rule. I can reproduce the problem by typing sudo from the commandline.
In short, this is what I observe
start jackd as martin -> works
start jackd as root -> works
login as root and su - martin, then start jackd -> works
as root sudo -u martin /usr/bin/jackd ... -> does not work
as above but sudo -E -u martin ... -> does not work
My /etc/security/limits.conf contains these lines
@audio - rtprio 40
@audio - nice -20
@audio - memlock 1554963
sudo -u martin id shows that I am in the audio group, however root is not. After sudoing from root to martin, martin has no realtime permissions
sudo -u martin sh -c "ulimit -e -r"
scheduling priority (-e) 0
real-time priority (-r) 0
Adding root to the audio group made no difference. Root still has no realtime permissions, and after sudo -u martin the limits for martin still look as above.
|
I would imagine that sudo is preserving the root user's environment, and therefore may not have paths or other environment variables that the martin user has set. It may also be that you need to run jack via sudo from a shell with the -s /path/to/shell option.
However as root, you have the rights to su (substitute user) without being prompted for a password (and not require configuration of sudo to achieve this, sudo is specifically aimed at non-root users).
su - martin -c /usr/bin/jackd ...
-c tells su what command to run, and the - option (which can also be done via -l) will attempt to set up the environment similar to that of the user it is being ran as (in this case martin).
| Losing (realtime) permission when sudoing from root to myself |
1,618,762,703,000 |
I see that with audio servers (in my case, pipewire) you can alter the "latency". (please forgive me, I am not very knowledgeable with these things.)
PIPEWIRE_LATENCY="128/48000"
The Arch Linux wiki described this as "request[ing] a custom buffer size".
I was wondering, is there a "downside" to setting the latency really low? Is it simply more responsive audio at a higher cost of resources?
|
When the buffer is small, it fills up more quickly and empties out more quickly. This is why the latency shrinks.
However, the processes that put data into the buffer and take data out of the buffer will be triggered more often. So you may see higher consumption of your computer's CPU by the audio software when you make the buffer too small. In extreme cases, using the audio system with a small buffer can make the other software on your computer respond slower, or perhaps "choppy" or "stuttering" where it alternates between smooth and frozen.
A small buffer can also cause the audio stream to stutter if the process that puts audio data into the buffer can't respond fast enough and the buffer goes completely empty for brief moments. The process that's taking the audio data out of the buffer and passing it through the output to your speakers (or headphones) will run out of data and there will be interruptions in the sound (often called "drop outs").
It's hard to predict what size will be "too small", so you may have to experiment and see what compromise gives you the shortest latency without impacting the audio streams and the rest of your computer.
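For intuition: the time one buffer covers is simply frames divided by sample rate, so the 128/48000 setting above corresponds to roughly 2.7 ms per buffer:

```shell
# Buffer latency in milliseconds = frames / sample_rate * 1000
awk 'BEGIN { printf "%.2f ms\n", 128 / 48000 * 1000 }'
# prints: 2.67 ms
```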
| Downside of decreasing audio latency |
1,618,762,703,000 |
To test and resolve this problem I am using a fresh install of Fedora 33, audio settings or configuration files left untouched.
By default the audio is heavily distorted (it is possible to make out what is being played, but overall it is not usable).
However when I install and start jackd and leave it running with the following settings:
jackd -r -dalsa -dhw:0 -r48000 -p256 -n2
and then try an audio file with mpv (which is able to use JACK), the sound is crisp and clear, working as intended:
mpv --ao=jack test.flac
Note: jackd with -r44100 works too.
This is of course not a satisfying general approach because not every software is able to use JACK by itself, so it doesn't work with Firefox, for example.
Because JACK is able to handle things properly, I guess that either the pulse or alsa (automatic) settings are causing the problem? Or could it be something else?
In short: how can I replicate what JACK does using an ALSA configuration (or PulseAudio, for that matter)? A solution through ALSA would be preferred so this answer works without Pulse as well. It is of course also possible that PulseAudio is the part causing the problem; I do not know.
Additional information:
Output of aplay -l:
**** List of PLAYBACK Hardware Devices ****
card 0: Studio [Audiofuse Studio], device 0: USB Audio [USB Audio]
Subdevices: 0/1
Subdevice #0: subdevice #0
card 1: Generic [HD-Audio Generic], device 0: ALC1220 Analog [ALC1220 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: Generic [HD-Audio Generic], device 1: ALC1220 Digital [ALC1220 Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
Output of jackd -r -dalsa -dhw:0 -r48000 -p256 -n2:
jackdmp 1.9.14
Copyright 2001-2005 Paul Davis and others.
Copyright 2004-2016 Grame.
Copyright 2016-2019 Filipe Coelho.
jackdmp comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
no message buffer overruns
no message buffer overruns
no message buffer overruns
JACK server starting in non-realtime mode
self-connect-mode is "Don't restrict self connect requests"
audio_reservation_init
Acquire audio card Audio0
creating alsa driver ... hw:0|hw:0|256|2|48000|0|0|nomon|swmeter|-|32bit
configuring for 48000Hz, period = 256 frames (5.3 ms), buffer = 2 periods
ALSA: final selected sample format for capture: 32bit integer little-endian
ALSA: use 2 periods for capture
ALSA: final selected sample format for playback: 32bit integer little-endian
ALSA: use 2 periods for playback
Output of aplay --dump-hw-params -D hw:Studio -t raw /dev/zero:
Playing raw data '/dev/zero' : Unsigned 8 bit, Rate 8000 Hz, Mono
HW Params of device "hw:Studio":
--------------------
ACCESS: MMAP_INTERLEAVED RW_INTERLEAVED
FORMAT: S32_LE
SUBFORMAT: STD
SAMPLE_BITS: 32
FRAME_BITS: [320 576]
CHANNELS: [10 18]
RATE: [44100 192000]
PERIOD_TIME: [125 297211)
PERIOD_SIZE: [6 13107]
PERIOD_BYTES: [240 524288]
PERIODS: [2 1024]
BUFFER_TIME: (62 594422)
BUFFER_SIZE: [12 26214]
BUFFER_BYTES: [480 1048576]
TICK_TIME: ALL
--------------------
aplay: set_params:1343: Sample format non available
Available formats:
- S32_LE
Output of cat /proc/asound/Studio/stream0 (only "Playback"):
Playback:
Status: Stop
Interface 1
Altset 1
Format: S32_LE
Channels: 18
Endpoint: 1 OUT (ASYNC)
Rates: 44100, 48000, 88200, 96000, 176400, 192000
Data packet interval: 125 us
Bits: 24
Interface 1
Altset 2
Format: S32_LE
Channels: 18
Endpoint: 1 OUT (ASYNC)
Rates: 44100, 48000, 88200, 96000, 176400, 192000
Data packet interval: 125 us
Bits: 24
Interface 1
Altset 3
Format: S32_LE
Channels: 10
Endpoint: 1 OUT (ASYNC)
Rates: 44100, 48000, 88200, 96000, 176400, 192000
Data packet interval: 125 us
Bits: 24
Channel map: FL FR FC LFE RL RR FLC FRC RC SL
|
I see, I tried converting the file myself using sox. What I got then was the following error: Playing WAVE 'CONVERTED-test.wav' : Signed 32 bit Little Endian, Rate 44100 Hz, Stereo aplay: set_params:1349: Channels count non available. Using channels 18 with the sox command then finally makes aplay play the file properly and clearly! Note: when I convert the file using sox with channels 10, the file is played by aplay too, however in that case it is again distorted. Any other number of channels will result in the channels count non available error.
So it looks like the interface with 10 channels has a bug in the driver or somewhere else, and you need to convince Pulseaudio to use the interface with 18 channels.
Looking at module-alsa-sink, you can point it at an ALSA device.
Which probably means you need to do the configuration in ~/.asoundrc.
You configure the number of channels in the hw plugin, but I've never done this myself, and I don't have hardware with those channel choices.
So I guess this will need some experimentation with the various configuration files. This is hard to do remotely, and I cannot give you step-by-step instructions.
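Still, as a hedged starting point (untested; the PCM name studio18 is made up, and hw:Studio comes from the aplay output above), an ~/.asoundrc entry forcing the 18-channel interface might look like:

```
# ~/.asoundrc sketch: select the hardware's 18-channel mode and let the
# "plug" plugin convert stereo streams into it
pcm.studio18 {
    type plug
    slave {
        pcm "hw:Studio"
        channels 18
    }
}
```

Pulseaudio's module-alsa-sink could then be pointed at device=studio18, but this will need the experimentation mentioned above.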
Another option would be to file a bugreport on the ALSA tracker. They might find out the problem in the driver (which probably needs a quirk, if it's the generic USB driver), or give you advice how the configuration files would look like.
ALSA Bug tracking is explained here.
| Audio distorted on pulse/alsa, works fine on JACK. How to analyze why JACK works correctly and use that knowledge to fix pulse/alsa configuration? |
1,618,762,703,000 |
Attempting to write some small midi programs with rtmidi on my Raspberry PI (Debian Wheezy). But, when I compile it, g++ cannot find jack/jack.h.
However, when I run sudo apt-get install jackd it says I have everything up to date. I'm missing something...
|
/usr/include/jack/jack.h is provided by libjack-dev (jackd1) or libjack-jackd2-dev (jackd2) package. You need to install either of them.
Debian has a service where you can search for packages that contain specific files. Here's the result for jack/jack.h.
| Jack and Raspberry Pi |
1,618,762,703,000 |
Debian Buster. I use Pulseaudio as sound server but sometimes launch Jack for MAO.
When Jack is on, I can get the sound of Pulseaudio applications thanks to the pulseaudio-module-jack that adds a Pulseaudio sink to Jack (as I explained in https://askubuntu.com/a/1213554/419514).
Except I came to realize that not all applications work.
I do get the sound of vlc. But when using Firefox, Quodlibet or Audacity, nothing comes out. In fact, when clicking the "play" button, the cursor on the time slider doesn't even move. The "play" button indicates the file is playing but it is not. The playback begins as soon as I stop Jack.
I couldn't find any relevant log.
|
I had to open pavucontrol and set "Jack sink" as the output for each application. Once this is set, the output switches back to my soundcard when I stop jackd and is automatically set back to "Jack sink" when I start it again.
| Pulse audio + Jack : some pulse audio apps work some don't |
1,398,727,302,000 |
Are there any notable differences between LXC (Linux containers) and FreeBSD's jails in terms of security, stability & performance?
On first look, both approaches look very similar.
|
No matter the fancy name used here, both are solutions to a specific problem: A better segregation solution than classic Unix chroot. Operating system-level virtualization, containers, zones, or even "chroot with steroids" are names or commercial titles that define the same concept of userspace separation, but with different features.
Chroot was introduced on 18 March 1982, months before the release of 4.2BSD, as a tool to test its installation and build system, but today it still has its flaws. Since the first objective of chroot was only to provide a new root path, other aspects of the system that needed to be isolated or controlled (network, process view, I/O throughput) were left uncovered. This is where the first containers (user-level virtualization) appeared.
Both technologies (FreeBSD Jails and LXC) make use of userspace isolation to provide another layer of security. This compartmentalization will ensure that a determined process will communicate only with other processes in the same container on the same host, and if using any network resource to achieve "outside world" communication, all will be forwarded to the assigned interface/channel that this container has.
Features
FreeBSD Jails:
Considered stable technology, since it is a feature inside FreeBSD since 4.0;
It takes the best of ZFS filesystem at the point where you could clone jails and create jail templates to easily deploy more jails. Some more ZFS madness;
Well documented, and evolving;
Hierarchical Jails allow you to create jails inside a jail (we need to go deeper!). Combine with allow.mount.zfs to achieve more power, and use other variables like children.max to define the maximum number of child jails.
rctl(8) will handle resource limits of jails (memory, CPU, disk, ...);
FreeBSD jails handle Linux userspace;
Network isolation with vnet, allowing each jail to have its own network stack, interfaces, addressing and routing tables;
nullfs to help linking folders to ones that are located on the real server to inside a jail;
ezjail utility to help mass deployments and management of jails;
Lots of kernel tunables (sysctl). security.jail.allow.* parameters will limit the actions of the root user of that jail.
Maybe FreeBSD jails will extend some of the VPS project features like live migration in the near future.
There is some effort of ZFS and Docker integration running. Still experimental.
FreeBSD 12 supports bhyve inside a jail and pf inside a jail, creating further isolation to those tools
Lots of interesting tools were developed during the last years. Some of them are indexed on this blog post.
Alternatives: FreeBSD VPS project
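As a taste of how little ceremony a jail needs, here is a hedged sketch using the ezjail utility mentioned above. The jail name, the em0 interface and the 192.0.2.10 address are assumptions for illustration; this must be run as root on FreeBSD:

```shell
# One-time setup: fetch and populate the base jail
ezjail-admin install

# Create and start a jail bound to a single IP on em0
ezjail-admin create myjail 'em0|192.0.2.10'
ezjail-admin start myjail

# Get a shell inside the jail
jexec myjail /bin/sh
```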
Linux Containers (LXC):
New "in kernel" technology but being endorsed by big ones(specially Canonical);
Unprivileged containers starting from LXC 1.0, makes a big step into security inside containers;
UID and GID mapping inside containers;
Kernel namespaces, to make separation of IPC, mount, pid, network and users. These namespaces can be handled in a detached way, where a process that uses a different network namespace will not necessarily be isolated on other aspects like storage;
Control Groups (cgroups) to manage resources and grouping them. CGManager is the guy to achieve that.
Apparmor/SELinux profiles and Kernel capabilities for better enforcing Kernel features accessible by containers. Seccomp is also available on lxc containers to filter system calls. Other security aspects here.
Live migration functionality is being developed. It’s really hard to say when it will be ready for production use, since docker/lxc will have to deal with userspace process pause, snapshot, migration and consolidation - ref1, ref2. Live migration works with basic containers (no device passthrough, complex network services, or special storage configurations).
API bindings to enable development in Python 3 and 2, Lua, Go, Ruby and Haskell
Centralized "What's new" area. Pretty useful whenever you need to check if some bug was fixed or a new feature got committed. Here.
An interesting alternative could be lxd, which works with lxc under the hood but adds some nice features like a REST API, OpenStack integration, etc.
Another interesting thing is that Ubuntu seems to be shipping zfs as the default filesystem for containers on 16.04. To keep the projects aligned, lxd launched its 2.0 version, and some of the features are zfs related.
Alternatives: OpenVZ, Docker
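For comparison with the jail sketch above, here are the equivalent first steps with the lxc userspace tools. The container name and the distribution/release arguments are assumptions (pick ones offered by the download template); this requires root or a configured unprivileged setup:

```shell
# Create a container from a pre-built image via the download template
lxc-create -t download -n mycontainer -- -d ubuntu -r trusty -a amd64

# Start it in the background and get a shell inside
lxc-start -n mycontainer -d
lxc-attach -n mycontainer

# Inspect state and stop it again
lxc-info -n mycontainer
lxc-stop -n mycontainer
```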
Docker. Note here that Docker uses namespaces, cgroups creating "per app"/"per software" isolation. Key differences here. While LXC creates containers with multiple processes, Docker reduces a container as much as possible to a single process and then manage that through Docker.
Effort on integrating Docker with SELinux and reducing capabilities inside a container to make it more secure - Docker and SELinux, Dan Walsh
What is the difference between Docker, LXD, and LXC
Docker no longer uses lxc. They now have a specific lib called runc that handles the integration with low-level Kernel namespace and cgroups features directly.
Neither technology is a security panacea, but both are pretty good ways to isolate an environment that doesn’t require Full Virtualization due to mixed operating systems infrastructure. Security will come after a lot of documentation reading and implementation of kernel tunables, MAC and isolations that those OS-Level virt offer to you.
See Also:
Hand-crafted containers
BSD Now: Everything you need to know about Jails
ezjail – Jail administration framework
A Brief History of Containers: From the 1970s to 2017
Docker Considered Harmful - Good article about the security circus around container technologies.
| Linux LXC vs FreeBSD jail |
1,398,727,302,000 |
Were I root, I could simply create a dummy user/group, set file permissions accordingly and execute the process as that user. However I am not, so is there any way to achieve this without being root?
|
More similar Qs with more answers worth attention:
https://stackoverflow.com/q/3859710/94687
https://stackoverflow.com/q/4410447/94687
https://stackoverflow.com/q/4249063/94687
https://stackoverflow.com/q/1019707/94687
NOTE: Some of the answers there point to specific solutions not yet mentioned here.
Actually, there are quite a few jailing tools with different implementation, but many of them are either not secure by design (like fakeroot, LD_PRELOAD-based), or not complete (like fakeroot-ng, ptrace-based), or would require root (chroot, or plash mentioned at fakechroot warning label).
These are just examples; I thought of listing them all side-by-side, with indication of these 2 features ("can be trusted?", "requires root to set up?"), perhaps at Operating-system-level virtualization Implementations.
In general, the answers there cover the full described range of possibilities and even more:
virtual machines/OS
(the answer mentioning virtual machines/OS)
kernel extension (like SELinux)
(mentioned in comments here),
chroot
Chroot-based helpers (which however must be setUID root, because chroot requires root; or perhaps chroot could work in an isolated namespace--see below):
[to tell a little more about them!]
Known chroot-based isolation tools:
hasher with its hsh-run and hsh-shell commands. (Hasher was designed for building software in a safe and repeatable manner.)
schroot mentioned in another answer
...
ptrace
Another trustworthy isolation solution (besides a seccomp-based one) would be the complete syscall-interception through ptrace, as explained in the manpage for fakeroot-ng:
Unlike previous implementations, fakeroot-ng uses a technology that leaves the traced process no choice regarding whether it will use fakeroot-ng's "services" or not. Compiling a program statically, directly calling the kernel and manipulating ones own address space are all techniques that can be trivially used to bypass LD_PRELOAD based control over a process, and do not apply to fakeroot-ng. It is, theoretically, possible to mold fakeroot-ng in such a way as to have total control over the traced process.

While it is theoretically possible, it has not been done. Fakeroot-ng does assume certain "nicely behaved" assumptions about the process being traced, and a process that break those assumptions may be able to, if not totally escape, then at least circumvent some of the "fake" environment imposed on it by fakeroot-ng. As such, you are strongly warned against using fakeroot-ng as a security tool. Bug reports that claim that a process can deliberatly (as opposed to inadvertly) escape fakeroot-ng's control will either be closed as "not a bug" or marked as low priority. It is possible that this policy be rethought in the future. For the time being, however, you have been warned.
Still, as you can read it, fakeroot-ng itself is not designed for this purpose.
(BTW, I wonder why they have chosen to use the seccomp-based approach for Chromium rather than a ptrace-based...)
Of the tools not mentioned above, I have noted Geordi for myself, because I liked that the controlling program is written in Haskell.
Known ptrace-based isolation tools:
Geordi
proot
fakeroot-ng
... (see also How to achieve the effect of chroot in userspace in Linux (without being root)?)
seccomp
One known way to achieve isolation is through the seccomp sandboxing approach used in Google Chromium. But this approach supposes that you write a helper which would process some (the allowed ones) of the "intercepted" file access and other syscalls; and also, of course, make effort to "intercept" the syscalls and redirect them to the helper (perhaps, it would even mean such a thing as replacing the intercepted syscalls in the code of the controlled process; so, it doesn't sound to be quite simple; if you are interested, you'd better read the details rather than just my answer).
More related info (from Wikipedia):
http://en.wikipedia.org/wiki/Seccomp
http://code.google.com/p/seccompsandbox/wiki/overview
LWN article: Google's Chromium sandbox, Jake Edge, August 2009
seccomp-nurse, a sandboxing framework based on seccomp.
(The last item seems to be interesting if one is looking for a general seccomp-based solution outside of Chromium. There is also a blog post worth reading from the author of "seccomp-nurse": SECCOMP as a Sandboxing solution ?.)
The illustration of this approach from the "seccomp-nurse" project:
A "flexible" seccomp possible in the future of Linux?
Back in 2009 there also appeared suggestions to patch the Linux kernel so that there is more flexibility in the seccomp mode -- so that "many of the acrobatics that we currently need could be avoided". ("Acrobatics" refers to the complications of writing a helper that has to execute many possibly innocent syscalls on behalf of the jailed process and of substituting the possibly innocent syscalls in the jailed process.) An LWN article wrote on this point:
One suggestion that came out was to add a new "mode" to seccomp. The API was designed with the idea that different applications might have different security requirements; it includes a "mode" value which specifies the restrictions that should be put in place. Only the original mode has ever been implemented, but others can certainly be added. Creating a new mode which allowed the initiating process to specify which system calls would be allowed would make the facility more useful for situations like the Chrome sandbox.

Adam Langley (also of Google) has posted a patch which does just that. The new "mode 2" implementation accepts a bitmask describing which system calls are accessible. If one of those is prctl(), then the sandboxed code can further restrict its own system calls (but it cannot restore access to system calls which have been denied). All told, it looks like a reasonable solution which could make life easier for sandbox developers. That said, this code may never be merged because the discussion has since moved on to other possibilities.
This "flexible seccomp" would bring the possibilities of Linux closer to providing the desired feature in the OS, without the need to write helpers that complicated.
(A blog posting with basically the same content as this answer: http://geofft.mit.edu/blog/sipb/33.)
namespaces (unshare)
Isolating through namespaces (unshare-based solutions) -- not mentioned here -- e.g., unsharing mount-points (combined with FUSE?) could perhaps be a part of a working solution for you wanting to confine filesystem accesses of your untrusted processes.
More on namespaces, now, as their implementation has been completed (this isolation technique is also known under the name "Linux Containers", or "LXC", isn't it?..):
"One of the overall goals of namespaces is to support the implementation of containers, a tool for lightweight virtualization (as well as other purposes)".
It's even possible to create a new user namespace, so that "a process can have a normal unprivileged user ID outside a user namespace while at the same time having a user ID of 0 inside the namespace. This means that the process has full root privileges for operations inside the user namespace, but is unprivileged for operations outside the namespace".
For real working commands to do this, see the answers at:
Is there a linux vfs tool that allows bind a directory in different location (like mount --bind) in user space?
Simulate chroot with unshare
and special user-space programming/compiling
But well, of course, the desired "jail" guarantees are implementable by programming in user-space (without additional support for this feature from the OS; maybe that's why this feature hasn't been included in the first place in the design of OSes); with more or less complications.
The mentioned ptrace- or seccomp-based sandboxing can be seen as some variants of implementing the guarantees by writing a sandbox-helper that would control your other processes, which would be treated as "black boxes", arbitrary Unix programs.
Another approach could be to use programming techniques that can care about the effects that must be disallowed. (It must be you who writes the programs then; they are not black boxes anymore.) To mention one, using a pure programming language (which would force you to program without side-effects) like Haskell will simply make all the effects of the program explicit, so the programmer can easily make sure there will be no disallowed effects.
I guess, there are sandboxing facilities available for those programming in some other language, e.g., Java.
Cf. "Sandboxed Haskell" project proposal.
NaCl--not mentioned here--belongs to this group, doesn't it?
Some pages accumulating info on this topic were also pointed at in the answers there:
page on Google Chrome's sandboxing methods for Linux
sandboxing.org group
| How to "jail" a process without being root? |
1,398,727,302,000 |
In FreeBSD 4.9 it was very easy to accomplish with just a single command like
jail [-u username] path hostname ip-number command
if path was / you had running just the same program as usual but all its network communication was restricted to use only given IP-address as the source. Sometimes it's very handy.
Now in Linux there's LXC, which does look very similar to FreeBSD's jail (or Solaris' zones) — can you think of similar way to execute a program?
|
Starting the process inside a network namespace that can only see the desired IP address can accomplish something similar. For instance, suppose I only wanted localhost available to a particular program.
First, I create the network namespace:
sudo ip netns add limitednet
Namespaces have a loopback interface by default, so next I just need to bring it up:
sudo ip netns exec limitednet ip link set lo up
Now, I can run a program using ip netns exec limitednet and it will only be able to see the loopback interface:
sudo ip netns exec limitednet ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
If I wanted to limit it to an address other than localhost, I could add other interfaces into the namespace using:
ip link set DEVICE_NAME netns NAMESPACE
I'd have to experiment a bit more to figure out how to add a single IP address into a namespace in the case where an interface has more than one IP address.
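One way to approach that (an untested sketch; the veth names and the 192.0.2.0/24 addresses are assumptions) is to move one end of a veth pair into the namespace and assign it only the single address the program should use:

```shell
# Create a veth pair and push one end into the namespace
sudo ip link add veth-host type veth peer name veth-jail
sudo ip link set veth-jail netns limitednet

# The host side keeps one address of the pair
sudo ip addr add 192.0.2.1/24 dev veth-host
sudo ip link set veth-host up

# Inside the namespace, the program sees exactly one address
sudo ip netns exec limitednet ip addr add 192.0.2.2/24 dev veth-jail
sudo ip netns exec limitednet ip link set veth-jail up
sudo ip netns exec limitednet ip route add default via 192.0.2.1
```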
The LWN article on namespaces is also helpful.
| Linux: Is there handy way to exec a program binding it to IP-address of choice? |
1,398,727,302,000 |
I have a FreeBSD jail in which I run a server using the command:
/usr/sbin/daemon /path/to/script.py
At the moment I have to run this command every time I restart the machine and the jail starts. I'd like to have this command started from /etc/rc. Is there an easy way to create a FreeBSD rc script for a daemon command?
UPDATE: I read through this BSD documentation about rc scripts, and from that I created the following script in /etc/rc.d/pytivo:
#!/bin/sh
. /etc/rc.subr
name=pytivo
rcvar=pytivo_enable
procname="/usr/local/pytivo/pyTivo.py"
command="/usr/sbin/daemon -u jnet $procname"
load_rc_config $name
run_rc_command "$1"
This works to start the python script I am wanting as a daemon when the jail starts... (given pytivo_enable="YES" is in /etc/rc.conf) but the rc script doesn't know if the daemon is running (it thinks it isn't when it is) and it gives a warning when I try to start it:
[root@meryl /home/jnet]# /etc/rc.d/pytivo start
[: /usr/sbin/daemon: unexpected operator
Starting pytivo.
[root@meryl /home/jnet]#
So it's close, and it works, but I feel like I should be able to get better functionality than this.
|
command should not contain multiple words. This is the cause of the [ error you see. You should set any flags separately.
Also, you should use pytivo_user to set the running uid, and not daemon -u. See the rc.subr(8) man page for all these magic variables.
Also, you should let the rc subsystem know that pytivo is a Python script so that it can find the process when it checks to see if it's running.
Finally, you should use the idiomatic set_rcvar for rcvar.
Something like this (I'm not sure this is the right Python path):
#!/bin/sh
# REQUIRE: LOGIN
. /etc/rc.subr
name=pytivo
rcvar=`set_rcvar`
command=/usr/local/pytivo/pyTivo.py
command_interpreter=/usr/local/bin/python
pytivo_user=jnet
start_cmd="/usr/sbin/daemon -u $pytivo_user $command"
load_rc_config $name
run_rc_command "$1"
| Is there an easy way to create a FreeBSD rc script? |
1,398,727,302,000 |
I've installed jailkit on Ubuntu 12.04 and I have set up a user's shell to /bin/bash - but when it is invoked it runs /etc/bash.bashrc instead of /etc/profile
If you haven't used jailkit before here's the gist of it:
A "jailed" version of the system root is created somewhere, like /home/jail
Jailed users home directories are moved inside that folder like /home/jail/home/testuser
Relevant configuration files are copied to /home/jail/etc/ - including a limited /etc/passwd
Programs that you want to allow access to are copied to the corresponding directories, like
/bin/bash
When a jailed user logs in they are chrooted to /home/jail/ and can't see any files above that
So I have a testuser who has an entry in /etc/passwd like this:
testuser:x:1002:1003::/home/jail/./home/testuser:/usr/sbin/jk_chrootsh
In the file /home/jail/etc/passwd there is an entry like:
testuser:1001:1003::/home/testuser:/bin/bash
I've read through bash(1), and so I think the problem is that bash thinks it is not being invoked as a login shell:
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file
/etc/profile, if that file exists.
I get that bash is actually being invoked by /usr/sbin/jk_chrootsh but I don't understand how bash is determining what type of shell it is, and what set of startup files it should run.
I'd like to see if I can troubleshoot this - but I don't understand:
How does bash know how it is being invoked?
ps: I also looked into login(1) without much luck.
|
Normally bash knows that it's a login shell because when the login program invokes it, it tells bash that its name is -bash. That name is in argv[0], the zeroth command line argument, which is conventionally the way the user invoked the program. The initial hyphen is a convention to tell a shell that it's a login shell. Bash also behaves as a login shell if you pass it the option --login or -l. See Difference between Login Shell and Non-Login Shell? for more details.
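This is easy to verify from a shell: bash's exec builtin has an -a option that sets argv[0] for the invoked program, so we can simulate what login does (the --norc/--noprofile flags just keep startup files from interfering with the output):

```shell
# Plain invocation: bash does not consider itself a login shell
bash --norc -c 'shopt -q login_shell && echo login || echo non-login'
# prints "non-login"

# Same binary, but with argv[0] set to "-bash": now it is a login shell
bash --norc -c 'exec -a -bash bash --noprofile --norc -c "shopt -q login_shell && echo login || echo non-login"'
# prints "login"
```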
As of Jailkit 2.16, jk_chrootsh reads the absolute path to the shell to invoke from various sources, and passes this path as argv[0], and passes its own command line arguments down to that shell. In the normal use case where jk_chrootsh is itself used in /etc/passwd, there is no way to pass an argument such as -l. Since the absolute path doesn't begin with -, there is no way to make jk_chrootsh invoke a login shell, short of using a tiny intermediate program.
#include <unistd.h>
int main (void) {
/* Re-exec bash with argv[0] set to "-bash" so it behaves as a login shell */
execl("/bin/bash", "-bash", (char *)NULL);
return 127; /* only reached if execl() fails */
}
I would have expected jk_chrootsh to have an easy way of invoking a login shell. I suggest making a feature request.
| How does bash know how it is being invoked? |
1,398,727,302,000 |
The goal is to install and run programs in a displaced (relocated) distro (whose / must not coincide with the global /) inside a host Linux system. The programs are not adapted for using a different / .
fakechroot is not a complete solution because it employs library-substitution instead of acting on the level of system calls (so not good for statically linked binaries).
|
The solution must probably be based either on ptrace or namespaces (unshare).
ptrace-based solutions are probably less efficient than namespaces/unshare-based ones (but the latter technology is cutting-edge and not a well-explored path, probably).
ptrace-based
UMView
As for ptrace-based solutions, thanks to the comments at https://stackoverflow.com/a/1019720/94687, I've discovered UMView:
http://wiki.virtualsquare.org/wiki/index.php/ViewFS
http://wiki.virtualsquare.org/wiki/index.php/Virtual_installation_of_software
The linked docs describe how to have a "copy-on-write view" of the host fs -- that's not exactly like performing a chroot. Exact instructions on how to achieve /-substitution in umview would be nice to have in an answer to my question (please write one if you figure out how to do this!).
umview must be open-source, because it is included in Ubuntu and Debian -- http://packages.ubuntu.com/lucid/umview.
"Confining programs"
Another implementation is described in http://www.cs.vu.nl/~rutger/publications/jailer.pdf, http://www.cs.vu.nl/~guido/mansion/publications/ps/secrypt07.pdf.
They have a change-root-ing policy rule, CHRDIR, whose effect is similar to chroot. (Section "The jailing policy")
However, they might have not published their source code (partially based on a modified strace http://www.liacs.nl/~wichert/strace/ -- Section "Implementation")...
geordi
Geordi (http://www.eelis.net/geordi/, https://github.com/Eelis/geordi) could probably be modified to make the wanted rewriting of file arguments to system calls in the jailed programs.
proot
PRoot is a ready to use ptrace-based tool for this. http://proot.me/:
chroot equivalent
To execute a command inside a given Linux distribution, just give
proot the path to the guest rootfs followed by the desired command.
The example below executes the program cat to print the content of a
file:
proot -r /mnt/slackware-8.0/ cat /etc/motd
Welcome to Slackware Linux 8.0
The default command is /bin/sh when none is specified. Thus the
shortest way to confine an interactive shell and all its sub-programs
is:
proot -r /mnt/slackware-8.0/
$ cat /etc/motd
Welcome to Slackware Linux 8.0
unshare-based
user_namespaces support in the Linux kernel has become more mature since the question was asked. Now you can perform a chroot as a normal user with the help of unshare, like in Simulate chroot with unshare:
unshare --user --map-root-user --mount-proc --pid --fork
chroot ......
su - user1
| How to achieve the effect of chroot in userspace in Linux (without being root)? |
1,398,727,302,000 |
Is it possible to use LXC on a desktop system to confine browsers and other pieces of software that have in the past been shown to be prone to certain kinds of exploits. So what I want to achieve is to jail, say Firefox, be still able to view its windows etc and yet be sure it only has read and write access to anything "inside the bubble", but not the host system.
The example lxc-sshd container in LXC suggests something like this should be possible (app-level containers), but I have only seen this for program that require a TTY at most.
Can this work also under KDE, GNOME, Unity ...?
|
Firejail is a Linux namespaces sandbox program that can jail Firefox or any other GUI software. It should work on any Linux computer.
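Typical usage is a one-liner. Firejail ships profiles for common browsers, and its --private option (which mounts a throwaway or dedicated home directory) is one way to approximate the "inside the bubble" requirement; the paths below are placeholders:

```shell
# Run Firefox under firejail's default Firefox profile
firejail firefox

# Stricter: give it a fresh, temporary home directory instead of yours
firejail --private firefox

# Or confine its writes to a dedicated persistent directory
firejail --private=/home/me/firefox-bubble firefox
```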
| Can LXC be used to jail instances of an installed browser? |
1,398,727,302,000 |
I have a question about giving a shell account to somebody. How safe is it? He could read /etc. How can I give a secured shell account that will only restrict the user to some bins and his own home? Is the only way a chroot jail?
|
One of the easiest and most efficient ways to control what a user can do is lshell.
lshell is a shell coded in Python, that lets you restrict a user's environment to limited sets of commands, choose to enable/disable any command over SSH (e.g. SCP, SFTP, rsync, etc.), log user's commands, implement timing restriction, and more.
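As a rough illustration (not a tested configuration; the command list and paths are placeholders), a minimal /etc/lshell.conf could look something like this:

```
[default]
# commands the user may run
allowed         : ['ls','cd','cat','less','scp']
# characters and patterns that are always refused
forbidden       : [';','&','|','`','>','<']
# warn this many times before disconnecting the user
warning_counter : 2
# restrict the user to these paths
path            : ['/home']
```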
| How can I safely give a shell to somebody? |
1,398,727,302,000 |
I run Fedora with GNOME and recently installed Viber for linux. It's very good and all is working just fine.
As Viber is not open-source software, I decided to run it as another user, to make sure that it doesn't have access to my files without my consent, by creating a user:group viber:viber. If I try to open files that belong to me or any other user from inside Viber, it can't read them. Goal achieved, but only partly. Now I stumbled on another problem: Viber works only for messages, and when I try to make a call it says that it can find neither microphone nor speakers. I think I know why (I run the X server (GNOME) as myself and Viber as viber:viber (user:group)).
How to make Viber run as viber:viber and at the same time, to let it use microphone and speakers, while running GNOME session as myusername:myusername?
SELinux solutions are also welcome but with concrete examples! chroot is not the solution, as it will lead, as I understand, to the same problem as described above.
EDIT 1:
The exec for running it is: su - viber -c /opt/viber/Viber
|
I wasn't aware of Pulseaudio running on Fedora as audio server.
After researching, I have finally found a way to share audio (microphone and speakers) among other users, while running Pulseaudio as normal user (myself) and not in System Mode.
In order to do that you will only need to copy initial configuration file to your home directory:
cp /etc/pulse/default.pa ~/.pulse/default.pa
Afterwards, add the following configuration option to it (~/.pulse/default.pa):
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1
Now, under those users/user, that you want to share the audio with, in their home directories (NOT YOURS) create Pulseaudio custom user-config file ~/.pulse/client.conf and add the following option:
default-server = 127.0.0.1
Don't forget to restart your audio server or your computer/server to apply new settings.
Eventually, I can run Viber as another user viber:viber and have access to microphone and speakers, while running GNOME session as myusername:myusername.
Successfully tested on Fedora 20.
| Running Viber as another user while using "mine" X Server's microphine and speakers |
1,398,727,302,000 |
If you have a web server (e.g. nginx), you often use a FastCGI server or another application HTTP server for dynamic content. That means in both cases you have a nice process separation between the web-server process and the FastCGI (or application-HTTP-server) process - in the following called the slave.
The web server is configured such that FastCGI traffic goes over a socket, or HTTP requests are proxied.
By creating different users for the slave and the web server, you can protect filesystem locations if there is a security problem in the slave process.
But how do I jail the slave process even more under Linux?
(Such that it cannot access the net, send mails etc.)
I can think of following routes:
SELinux
Linux system namespaces ('containers', cgroups)
What is the most convenient way on a current distribution like e.g. Debian? How to do it in practice? Any configuration examples?
|
Under Ubuntu, another way of jailing is AppArmor!
It is a path based mandatory access control (MAC) Linux Security Module (LSM). In Ubuntu 10.04 it is enabled by default for selected services.
The documentation is quite fragmented. The Ubuntu documentation could be ... better. Even the upstream documentation does not give a good introduction. The reference page states:
WARNING: this document is in a very early stage of creation it is not in any shape yet to be used as a reference manual
However, getting started is relatively easy. An AppArmor profile matches an executable path, e.g. /var/www/slave/slave. The default rule of a profile is deny (which is great) if nothing else matches. Profile deny-rules always match before allow-rules. An empty profile denies everything.
Profiles for different binaries are stored under /etc/apparmor.d. apparmor_status displays what profiles are active, what are in enforce-mode (good), or only in complain mode (only log messages are printed).
Creating a new profile for /var/www/slave/slave is just:
aa-genprof /var/www/slave/slave
Start in another terminal /var/www/slave/slave and do a typical use case. After it is finished press s and f in the previous terminal.
Now /etc/apparmor.d contains a profile file var.www.slave.slave. If the slave does some forking, the resulting profile will be very sparse: all accesses by the children are ignored.
Anyway, the profile is now active in enforce mode and you can just iteratively trigger actions in the slave and watch tail -f /var/log/messages for violations. In another terminal you edit the profile file and execute aa-enforce var.www.slave.slave after each change. The log displays then:
audit(1308348253.465:3586): operation="profile_replace" pid=25186 name="/var/www/slave/slave"
A violation looks like:
operation="open" pid=24583 parent=24061 profile="/var/www/slave/slave"
requested_mask="::r" denied_mask="::r" fsuid=10004 ouid=10000 name="/var/www/slave/config"
A profile rule like:
/var/www/slave/config r,
would allow the access in the future.
This is all pretty straight forward.
AppArmor supports coarse-grained network rules, e.g.
network inet stream,
Without this rule no internet access is possible (including localhost), i.e. with that rule you can use iptables for finer-grained rules (e.g. based on slave uid).
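As a hedged sketch of such finer-grained uid-based rules (the user name "slave" and the port number are assumptions, not part of the original setup):

```shell
# Hypothetical uid-based egress policy for the slave process:
# allow only a local database connection for user "slave",
# then drop everything else that uid tries to send.
iptables -A OUTPUT -m owner --uid-owner slave -o lo -p tcp --dport 5432 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner slave -j DROP
```

This complements the AppArmor network rule: AppArmor decides whether the process may use the network at all, while iptables decides where that traffic may go.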
Another documentation fragment contains something about sub profiles for php scripts.
The var.www.slave.slave profile skeleton looks like:
#include <tunables/global>
/var/www/slave/slave {
#include <abstractions/base>
network inet stream,
/var/www/slave/config r,
/var/www/slave/exehelper/foo ix,
/var/www/slave/db/* rw,
...
}
With such a profile the slave is not able anymore to call utilities like mail or sendmail.
| How to jail a fastcgi server (or a web-proxied server)? |
1,398,727,302,000 |
I am trying to set up a ssh-chroot jail on one of my NAS servers. The system runs on NAS4Free (which is based on nanobsd). The user should be able to run only one command, which is a bash-script that opens ssh to another server and executes one command there.
To setup the chroot I have this in my sshd config.
Match User op
ChrootDirectory %h
X11Forwarding no
AllowTcpForwarding no
The script has this line in it:
ssh -i /.ssh/id_rsa backup@$externalresource -t "/mnt/storage/backup/run_project.sh '$1' '$2'"
I can log in to that chroot using ssh but when I run the script it gives the following error when trying to execute the ssh command in it.
Couldn't open /dev/null: Operation not supported
The same happens, when I try to run ssh plain within the chroot
[I have no name!@nas /]$ ssh
Couldn't open /dev/null: Operation not supported
/dev/null looks like this:
$ ls -la dev/
total 8
drwx--x--x 2 root staff 512 Nov 29 18:16 .
drwxr-xr-x 8 root staff 512 Nov 29 18:06 ..
crw-rw-rw- 1 root staff 0x18 Nov 29 18:16 null
Without the 666 permissions I get a /dev/null permission denied error of course.
I created dev/null using
mknod dev/null c 2 2
I have tried to find an explanation of why /dev/null returns "Operation not supported" but have not found anything that helps.
Could someone please explain how to fix this?
|
I created dev/null using mknod dev/null c 2 2
Your knowledge is outdated. Things do not work this way any more, now that NAS4Free is based on the likes of FreeBSD 10 and 11. (Nor are those the device numbers for the null device anyway.) Read the mknod manual. You can still run mknod to create device nodes in an actual disc or RAM filesystem, but the nodes that you create will be pretty much entirely useless. As you can see, the kernel does not let you open devices with them.
This is why in jails — actual jails, the ones that come with the operating system, not simple chrooted environments that one can set up with sshd_config — one obtains the device files by mounting a devfs instance within the jail. It is also why jails have knobs to control whether devfs can be mounted and what devfs ruleset applies to it.
If you want a /dev/null in your changed root environment, you'll have to use mount_nullfs to make the actual /dev tree visible within the changed root. If you use a bona fide jail, just configure it to mount a devfs on /dev.
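A sketch of the devfs route for the chrooted-sshd case from the question, assuming the chroot is the user's home at /home/op (the path and the ruleset number are assumptions; ruleset 4 is the stock devfsrules_jail set from /etc/defaults/devfs.rules, which exposes only a few safe devices such as null, zero, and random):

```shell
# Remove the useless hand-made node, then mount a real devfs in the chroot
rm /home/op/dev/null
mount -t devfs devfs /home/op/dev

# Restrict this devfs instance to the jail ruleset (hides disks, ttys, ...)
devfs -m /home/op/dev ruleset 4
devfs -m /home/op/dev rule applyset
```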
If you do use a bona fide jail, you of course set it up to run sshd inside the jail, listening on the jail's IP address and enabled as a service in the jail's /etc/rc.conf in the normal way.
Further reading
mknod. FreeBSD 11.0 Manual.
devfs. FreeBSD 11.0 Manual.
devfs.rules. FreeBSD 11.0 Manual.
Documentation:Howto:Jails. NAS4Free wiki.
Matteo Riondato. "Jails". FreeBSD Handbook.
Scott Robb (2015-03-04). FreeBSD Jails.
| ssh in chrooted jail doesn't work because of /dev/null operation not supported |
1,398,727,302,000 |
Is there a common way of distinguishing between the messages of multiple processes in syslog-ng besides setting different facilities?
+1 if filtering and therefore logging in different files would be possible.
I have a system setup with two running sshd instances. One is running in a chrooted environment. Since syslog is used, all messages end up in the same logfile.
One possibility would be to change the facility of the jailed sshd to something like local0, but I wonder if there is some 'cleaner' way to do this.
Installing other syslog daemon, for example rsyslog is not an option here.
This question is somehow related to:
https://stackoverflow.com/questions/20010873/syslog-process-specific-priority
and
syslog: process specific priority
|
Change the name of the executable (note that that also affects PAM configuration).
ln /path/to/sshd /path/to/sshd-whatever
Start as /path/to/sshd-whatever. And define PAM configuration in /etc/pam.d/sshd-whatever. Log entries will show as sshd-whatever instead of sshd.
| Separate messages of multiple sshds in syslog-ng |
1,398,727,302,000 |
After calling make installworld (or make world), there are two ways of updating source files in the new world: calling mergemaster -p or make distribution. I know that mergemaster calls make distribution but what else does it do and why would I call it instead of just make distribution?
|
make distribution just installs new configuration files, while mergemaster walks interactively over all config files and asks you which ones you want (and intelligently upgrades files you never edited in the first place if possible). It even gives you the option to merge them as needed. Basically, it automates the process of installing updated config files, doing all the diffs automatically and giving you a nicer way of merging the old and new config trees.
If you're curious how it works, mergemaster is just a shell script.
| What does 'mergemaster' do that 'make distribution' doesn't? |
1,398,727,302,000 |
Let's assume I have a casual user who can log in to the system via SSH into a bash shell. I also have a PHP (though the language is irrelevant here) script that acts as a process accepting various commands and other user input and acts according to them (essentially a 'shell-like' script).
Now, what I want to do is to lock the user inside said PHP script, i.e. run it as soon as the user logs in (this part is simple via .bashrc), but at the same time ensure that when the script execution ends, the user is also automatically 'kicked out' of bash and consequently the ssh session, so that he cannot do anything via bash itself and stays limited to what the PHP script offers.
Is that even possible? If yes, how would I go about doing it?
Update: Based on the answers so far - having bash in between my script and the user logging in via SSH is not a requirement at all. It just seemed like a necessity to me at first. Anything that forces the user into my script only, directly after an SSH login, is a welcome answer.
|
Following the updated information, you should have the user authenticate with a public/private key pair and, in the .ssh/authorized_keys file, force the key to only run the script.php file. You shouldn't rely on .bashrc for protection, especially since it is meant to initialize the environment, not to restrict it.
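A sketch of such a forced-command entry, assuming a hypothetical script at /home/user/script.php and the PHP CLI at /usr/local/bin/php; with command= set, sshd ignores whatever command the client asks for and runs only this one, and the extra options close off port forwarding, agent forwarding, X11, and PTY allocation:

```
command="/usr/local/bin/php /home/user/script.php",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... user@example
```

This is one line in ~/.ssh/authorized_keys, shown with the public key abbreviated.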
| "Virtual" shell, ie. jailing an user inside a process after the (SSH) login |
1,398,727,302,000 |
Strangely enough, it appears that Squid cannot start if the squid user is not assigned the UID 100. When trying to use another UID, the service systematically fails to start with the following error:
2016/03/10 10:53:13| storeDirWriteCleanLogs: Starting...
2016/03/10 10:53:13| Finished. Wrote 0 entries.
2016/03/10 10:53:13| Took 0.00 seconds ( 0.00 entries/sec).
FATAL: Ipc::Mem::Segment::create failed to shm_open(/squid-cf__metadata.shm): (17) File exists
Squid Cache (Version 3.5.15): Terminated abnormally.
CPU Usage: 0.026 seconds = 0.026 user + 0.000 sys
Maximum Resident Size: 48640 KB
Page faults with physical i/o: 0
Here is how I usually proceed to change a service's UID:
vi /etc/group to change service's GID:
Before:
squid:*:100:
After:
squid:*:1234:
vipw to change the service UID:
Before:
squid:*:100:100::0:0:squid caching-proxy pseudo user:/var/squid:/usr/sbin/nologin
After:
squid:*:1234:1234::0:0:squid caching-proxy pseudo user:/var/squid:/usr/sbin/nologin
Restore files ownership using the two following commands:
find / -uid 100 -exec chown -v squid {} +
find / -gid 100 -exec chgrp -v squid {} +
I'm installing Squid in a FreeBSD jail managed with ezjail.
I've thought about IPCs but did not find any IPC using ipcs -a when Squid is running properly.
I've also tried, in a new clean jail, to first create the squid user with a custom UID and then install the squid package, to ensure that the right UID is used right from the beginning and no squid user with UID 100 existed at any time, but the issue is the same (I also conclude from this that it cannot be an ownership issue). Setting the UID to 100 makes Squid able to start.
I tried several UIDs (including low ones like 101); all of them show the same behavior.
I've also tried to play with the squid_user parameter in rc.conf but with no luck: setting it to either squid or root does not seem to change anything.
I've another jail on the same system with a Squid running with UID 100, but shutting it down did not change the issue in any way (in all cases I would be highly surprised by interference between two jails).
|
OK, I got it now. After a few checks on the FreeBSD forums, I can now confirm that FreeBSD jails act a bit like Swiss cheese as far as SHM objects are concerned. Indeed, FreeBSD does not provide any isolation at all for SHM objects: all jails can access all SHM objects, system-wide, with no way to prevent it.
The error mentioned in this question is therefore quite logical:
When different UIDs were used, Squid was not able to start because it tried to access SHM objects owned by a different user, hence the error message "create failed to shm_open(/squid-cf__metadata.shm): (17) File exists",
When the same UID was used in both installations, Squid started successfully, but the situation was probably even worse, since both instances would now fight over the same memory objects, rewriting each other's data and deleting the objects on shutdown...
Because of this, specific measure must be taken to ensure that each Squid instance will use different SHM objects names.
By default, Squid creates the following SHM objects on FreeBSD systems (the exact behavior is OS-dependent):
/squid-cf__metadata.shm
/squid-cf__queues.shm
/squid-cf__readers.shm
Squid offers the -n parameter to give the instance a specific name. Concretely, this name will (among other things) replace "squid" in the SHM object names above, making them unique system-wide.
Therefore, when setting up a new Squid instance in a new jail, it is necessary to edit /etc/rc.conf and add an entry such as the one below (replace "something_unique" with an instance name unique on your host):
squid_flags="-n something_unique"
This allows to properly start both Squid servers, each one with different UIDs.
Obviously, while a bit out-of-scope here, the fact that Squid's SHM objects can be freely accessed from any jail can constitute a security issue in its own right and must not be ignored...
| How to change Squid service UID in FreeBSD? |
1,398,727,302,000 |
I'm using FreeBSD 11.2 at the moment (likely to move to 12 in a while). I need a tiny authoritative-only DNS server (no lookup or caching needed, under 10 domains and under 10 queries/hr, almost no record changes).
I'm likely to go with TinyDNS, part of djbdns, which is well reputed for security and also seems minimal and does what I need.
Part of the reason for security is that it'll be exposed to the internet, although behind an IP/port filter and a very low-rate rate limiter (using pf for this). But for that reason, I do want to take especial care over how the daemon is set up, to avoid obvious vulnerabilities. By that, I mean aspects such as the needed users and groups, start/stop scripts, jail/chrooting, and minimising/disabling/denying the main non-essential access/capabilities that an attacker could pivot from.
(I should mention that I can install tinydns "ordinarily" on a test system, and create the needed .conf file, so it's purely how to run it in a secure manner, that's missing)
I'm not experienced in setting up software to run chrooted/jailed, or reviewing a chrooted/jailed package for appropriate security practices, and this'll also be my first time trying to do something like this, although I've chosen the specific DNS server package specifically for its apparent setup simplicity.
Ignoring the .conf file, what would a "recipe" look like, to set TinyDNS up to run correctly as a service, and ideally with minimised access to "other stuff not essential for the daemon but helpful to an attacker"?
|
This is rather lengthy, so here is the very short version for those who cannot be bothered to read it all:
echo "testjail { }" >> /etc/jail.conf
mkdir -p /usr/local/jails/testjail
bsdinstall jail /usr/local/jails/testjail
service jail start testjail
pkg -j testjail install -y nginx
sysrc -j testjail "nginx_enable"=YES
service -j testjail nginx start
I would typically rate this question as "too broad" and refer people directly to the FreeBSD Handbook. But I myself find that section to be rather poorly written and confusing. It is all there - but things are much easier than they seem! I just wish they focused more on the concepts rather than listing commands. And you might have fun reading Jails – High value but shitty Virtualization
What I will do instead is to humbly outline what I personally do and what stumbling blocks I have met. My fails might not be yours, but as I once was in the same situation as you, I hope my journey can help. The better you understand FreeBSD overall the easier jails become.
What is a jail?
Many descriptions gloss over the important part of what a jail is. It helps immensely to understand what the kernel does. It "runs" code and keeps track of PIDs (Process IDs) and UIDs (User IDs). This is common knowledge for many Unix users. The FreeBSD kernel has then added the concept of a JID (Jail ID). The kernel is thereby able to partition processes into jails. This in effect means that the FreeBSD kernel is able to "virtualize" systems without any overhead. You still only have one kernel but can have multiple systems. This was key for my understanding of the concept.
If you run a vanilla box "without" jails, then all processes belong to JID 0. When you start playing with jails we then call this the "jail host".
With that in mind the next step for me was to understand the connection with how FreeBSD actually works. If you come from a Linux background you probably know that Linux is "only" the kernel. What makes the system is then the userland supplied with the distribution (Ubuntu, Debian, Slackware etc.). FreeBSD is the combination of the kernel and userland. It is the full OS (Operating System).
So a very crude summary of Chapter 12. The FreeBSD Booting Process is:
BIOS/UEFI Bootstrap
FreeBSD Bootloader
FreeBSD Kernel
FreeBSD init runs rc scripts and all that magic
You might know how modular FreeBSD is and that you could replace the rc system with something like OpenRC but we ignore that for now to keep it simple.
When you then start (boot) a jail, the kernel does init with a new JID (i.e. 1). Everything here is then confined to jail 1. Whatever init needs to touch must be within the chroot'ed filesystem. If you want to keep things simple, that is the full userland. But we want to keep the jail separate from the host system, so this is a full copy of the userland!
The importance of how these things are interconnected cannot be overstated. When you fully grok it you will notice that jails are directly supported by many FreeBSD tools. Not just simple things such as ps -J but also bsdinstall, freebsd-update and pkg! If maintaining a FreeBSD system is second nature to you then jails will be a walk in the park!
But for most of us we are somewhere inbetween on that path to Nirvana and might have to struggle with some of the concepts to get it done "right".
hier
I am a huge fan of hier. It gives a clear and consistent view of where things should be placed. Unfortunately the powers that be never made any decision on what a nice default location for jails would be. When you combine this with a lot of different tutorials that use different locations and mix in various jargon such as "base install" and "templates", things get confusing fast!
The Handbook does not discuss this and just refer to /here/is/the/jail. And in the final example they use /home/j and /home/j/mroot. I prefer to keep user directories and only user directories in /home. And simply using a shorthand notation like j is a big no-no in my book.
I would say that the most common and "correct" location used is /usr/local/jails. A strong contender, mostly for people using ZFS, is /jails.
In this location I would put the chroots for my jails. Thus I have spoken. Henceforth jails shall be found here.
I found this very confusing when I started out. Giving me freedom to place it anywhere made me more insecure as I did not understand the consequences.
To add to this confusion, many tutorials were using "basejail", "skeleton directories" and "templates". This added to the confusion of what a jail really is and conflated it with a lot of management.
Filesystem
We do not really care what filesystem we are using. It can be UFS or ZFS. For the sake of the jail we only need a directory for our chroot.
When using UFS the important thing to note is how you have arranged your slices. The beginner has often not given this much thought: which slice has enough space to accommodate the chroots? This is why I specifically do not like to use /home for this purpose. Maybe you create a slice for the purpose, or you just create a directory somewhere and later figure out whether you have enough space.
If you are using ZFS the issues are actually the same. Because of the way ZFS is structured you could argue that it is better to create a new dataset rather than just doing mkdir. But as a beginner you should not worry: just doing mkdir is simpler, and you can move on without learning ZFS first.
The next problem is that many tutorials also operate with a "basejail", which is a full vanilla system ready to copy when creating a new jail. Some tutorials will then tell you to do a ZFS clone rather than a copy. Along the way you will update the basejail and naturally make snapshots of it. But then you notice that you cannot delete any snapshots which have clones of them. So in actual use I prefer zfs send/receive.
In my mind tutorials should only touch on ZFS as an addendum. ZFS is a huge topic on itself and should be treated like that. When you are comfortable with both jails and ZFS then you can combine to your hearts content and reap the benefits. If not you end up building a very brittle tower without any ability to fix things when they break.
fat/thick vs thin
The FreeBSD Handbook talks about "complete" jails and "service" jails. Most other places use the terms thick (fat/full) and thin jails. Thick and thin jails are both "full" virtualized FreeBSD systems, which The FreeBSD Handbook refers to as "complete" jails. The "service" jail is, however, much more elusive.
A thick jail is then a copy of the full operating system. But how much fat is that? FreeBSD 11.2 with base/lib32/ports weighs in at 1.4G, but for a jail "base" alone would likely do, which keeps you at a relatively svelte 512M (compared to the 20G of C:\Windows for Windows 10).
A thin jail is then a bag of tricks to minimize the on-disk usage of a "complete" jail. By using nullfs (a loopback file system sub-tree) to mount a read-only filesystem, plus a couple of well-placed symlinks to enable the read/write parts, you slim down the disk usage. Unfortunately I do not have any numbers at hand. When you have the on-disk structure in place, you simply add mount.fstab to your jail.conf to mount the filesystem when starting the jail.
I got the ideas for how to do this from FreeBSD Jails the hard way. This taught me how to handle jails without any 3rd-party utilities and finally made the pieces click for me. And despite the title - it is actually the easy way. They did however get one thing wrong: you should not do a zfs clone but rather a zfs send/receive, as described in Unadulterated Jails, the Simple Way. Both have probably found inspiration in the slightly outdated Multiple FreeBSD Jails with nullfs. Another worthwhile source is FreeBSD Thin Jails. Finally, a newer source which also covers VNET (we will get back to that!) is How to configure a FreeBSD Jail with vnet and ZFS
The above is commonly referred to as a thin jail but it could be done in other ways. All roads lead to Rome and you decide for yourself how you want to implement it locally.
All this leads us to the elusive "service" jail. A jail which just runs one specific service in the most locked down scenario. Just the actual application and only a thin layer to support it. It can be done but I have not seen a lot of work on it.
The typical jail.conf would contain:
exec.start = "/bin/sh /etc/rc"; # Start command
exec.stop = "/bin/sh /etc/rc.shutdown"; # Stop command
This is what ensures that the rc system is run in a jail when it starts/stops. If you have an extremely simple executable you could simply point to that instead. So the question is: how much of the OS does my service need access to? What shared libraries are needed, and do you run scripts?
I know of no work which has done this for you. So you start with a full jail and then removes what is not needed for your particular service. Many command-line utilities such as top, ps and tail are safe to remove as they are typically not used by a daemon. But you might miss them if you are debugging within the jail yourself. If you start a daemon directly with exec.start you do not need rc.subr and friends.
If you go that route and start a killing spree to determine how much can be removed from the OS (to reduce the attack surface) while the service remains functional, then you should be aware of nanobsd. This can help you tailor the installation to your needs.
So - definitely doable. But I know of no public work. Most people go with thick or thin jails and just run their designated service there.
Dependency hell
When doing thin jails you need to be very careful to do everything correctly or things will break. When you update the system you need to remember to update all the different parts: the basejail or sources you started from, the templates, and the actual jails. This is a lot of moving parts, which makes management harder. It can be done, but you should weigh the effort against the payoff. Disk space is very cheap, so you would need quite a few services to make it worth the effort.
I would then recommend starting with simple fat jails, each with a full OS within.
If you are very comfortable with ZFS then by all means go wild. But if not then I would strongly suggest to have the chroot in a simple directory. When you feel comfortable working with a jail then you can start adding the cream.
The key is then to treat the jail as a separate system. If you maintain your system properly today you should already have the habit of using freebsd-update fetch install, and you will know that it updates the kernel and the userland.
When adding jails to the mix you just need to remember to update those as well:
freebsd-update -b /usr/local/jails/testjail install
If you are doing upgrades - then remember the jails as well:
freebsd-update -b /usr/local/jails/testjail --currently-running 10.3-RELEASE -r 11.0-RELEASE upgrade
Just the same way as you keep your packages fresh
pkg upgrade
pkg -j testjail upgrade
Hey, man! I just wanted to jail a service!
OK! With all the caveat emptors let's get this done the easy way on a totally vanilla system. TinyDNS is a bad example so let us just install nginx
Create a sane jail config in /etc/jail.conf
Allow jails to start on boot
Add the jail name "testjail"
Add the directory for the "testjail" chroot
Install the OS into the chroot (Choose only base. De-select lib32/ports)
Start the jail
Add nginx package to the jail.
Tell the jail to start nginx at startup
Start nginx now!
Done!
1 & 2 only for getting ready for jails. 3 - 6 for each new jail. 7 - 9 for each service.
cat <<'EOF'>/etc/jail.conf
# Global settings applied to all jails.
host.hostname = "${name}.jail";
ip4 = inherit;
ip6 = inherit;
path = "/usr/local/jails/${name}";
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.clean;
mount.devfs;
# Specific settings can be applied for each jail:
EOF
sysrc "jail_enable"=YES
echo "testjail { }" >> /etc/jail.conf
mkdir -p /usr/local/jails/testjail
bsdinstall jail /usr/local/jails/testjail
service jail start testjail
pkg -j testjail install -y nginx
sysrc -j testjail "nginx_enable"=YES
service -j testjail nginx start
This is what I would claim to be the "correct" way of doing a jail and simple management. There are many variations - the last line could be replaced with service jail restart testjail. At this point I am able to browse a web page on my jailed nginx instance.
Some FreeBSD defaults do not lend themselves so well to jails, so consider setting these as well:
sysrc -j testjail sendmail_enable="NONE"
sysrc -j testjail sendmail_submit_enable="NO"
sysrc -j testjail sendmail_outbound_enable="NO"
sysrc -j testjail sendmail_msp_queue_enable="NO"
Remove a jail
To delete a jail:
Stop the jail
Remove the jail name from /etc/jail.conf
Reset flags so you are allowed to delete the files
Delete the files
service jail stop testjail
sed -i '' '/^testjail {/ d' /etc/jail.conf
chflags -R noschg /usr/local/jails/testjail
rm -rf /usr/local/jails/testjail
The above is to show that it is not hard to manage jails with the tools at hand, without the need for a 3rd-party jail management tool.
Hey, man! I just wanted to jail TinyDNS!
You write you could install tinydns "ordinarily" - hence I will leave that to you. For others who might be following they would probably guess that it should be as simple as:
pkg -j testjail install djbdns
But whoever created that package was not nice enough to create a service wrapper for tinydns in /usr/local/etc/rc.d, so you need to figure out how to run it as a daemon yourself. You could choose to use alternative service management such as supervisor, or a simple hack like adding it to /etc/rc.local. Or, for brownie points, create an rc script and contribute it back for all of us to enjoy!
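For anyone attempting the brownie points, here is an untested sketch of what such an rc.d script could look like; the paths, UID/GID values, and the use of daemon(8) to background tinydns are all assumptions to adapt (upstream tinydns is designed to run under daemontools, so this is only one possible shape):

```shell
#!/bin/sh
# Hypothetical /usr/local/etc/rc.d/tinydns -- sketch only, adjust paths.

# PROVIDE: tinydns
# REQUIRE: NETWORKING

. /etc/rc.subr

name="tinydns"
rcvar="tinydns_enable"
pidfile="/var/run/${name}.pid"

# tinydns reads its configuration from environment variables:
tinydns_env="IP=127.0.1.4 ROOT=/usr/local/etc/tinydns/root UID=1001 GID=1001"

# daemon(8) backgrounds the process and writes the pidfile for us
command="/usr/sbin/daemon"
command_args="-p ${pidfile} /usr/local/bin/tinydns"

load_rc_config $name
: ${tinydns_enable:=NO}
run_rc_command "$1"
```

With this in place the usual sysrc tinydns_enable=YES and service tinydns start pattern would apply, inside or outside a jail.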
This leads us to the only two specific jail commands you really need to know: jls which lists running jails and jexec which allows you to execute stuff inside the jail.
root@test:~ # jls
JID IP Address Hostname Path
3 testjail.jail /usr/local/jails/testjail
To get a command-line as root we simply execute the root shell (remember that is tcsh!)
root@test:~ # jexec testjail /bin/tcsh
root@testjail:/ # tinydns
tinydns: fatal: $IP not set
root@testjail:/ # exit
exit
root@test:~ #
When working within that shell everything is within your jail/chroot.
What they do not tell you!
Up to now I have not strayed far from what most tutorials will tell you. After wasting a lot of time in the filesystem, they skip the exciting part. Today there is not much fun to be had with computers without network access. In the above example we shared the network stack with the host. This means that you cannot run anything on the jail host which uses the same ports as a jail. In my example this was port 80 for the webserver.
On the other hand, it is now very easy to set up a firewall: you simply treat all ports as local. If you are using IPv6, things are a breeze. But most (all?) of us still need to struggle with IPv4. IPv4 is simple enough, but you probably do not have the addresses you need, so we have to resort to some sort of NAT.
If you want to get really fancy with networking, there are virtual network interfaces (VNET). It has matured over the years, but to use it you need to compile a kernel with VIMAGE support. This is enabled in the generic kernel from FreeBSD 12.0 onwards.
When using VNET you have a nice virtual interface where you can run your firewall inside the jail.
I currently do not use VNET, as a unified firewall on the jail host is enough for what I want. With it I control which traffic goes to and from each jail. This makes it even harder to break things from inside a jail.
On the jail host I often allow some outbound traffic (http/ftp/name resolution). But for all traffic in my jails I specify exactly which ports are allowed, in both directions and between jails.
The trick is to move the traffic to another interface. The local loopback interface lo0 is a great candidate, but to keep the rules better separated it is even better to clone it into a new interface named lo1. You could set this up and prepopulate the IP addresses on the jail host using ifconfig, but there is no need, as the jail subsystem handles all this for you automatically.
To do this our /etc/jail.conf now looks like this:
# Global settings applied to all jails.
host.hostname = "${name}.jail";
interface = "lo1";
path = "/usr/local/jails/${name}";
mount.fstab = "/usr/local/jails/${name}.fstab";
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.clean;
mount.devfs;
# Specific settings can be applied for each jail:
testjail { ip4.addr = 172.17.2.1; }
Notice how ip4/ip6 = inherit; has changed to interface = "lo1";. Notice also mount.fstab - this is how I add my nullfs mounts, but I will not go further into this now.
Then I have added the internal IP I want to use to testjail.
We then clone the loopback interface by adding lo1 to /etc/rc.conf:
sysrc "cloned_interfaces"="lo1"
You need to reboot for this. If you want to try it out now without a reboot you need to create the interface yourself:
ifconfig lo1 create
ifconfig lo1 up
This has the same effect as setting cloned_interfaces="lo1" in /etc/rc.conf. You do not need to create aliases for the IP addresses, as this is handled automatically when the jail starts.
This is however boring as traffic gets nowhere.
To get the traffic going we need to set up the firewall and get some NAT going. My poison of choice is pf. You have your ruleset in /etc/pf.conf
To NAT from the outside to a jail IP for http do like this:
rdr pass inet proto tcp from any to (em0) port http -> 172.17.2.1 port http
Many services do not like to start if they are not able to connect to themselves. Hence I add a rule for that as well.
pass on lo1 proto tcp from 172.17.2.1 to 172.17.2.1 port http
The redirect (rdr) rule is only needed if you want someone from the outside to connect. When you start to have multiple jails you often just want one jail to be able to connect to another.
pass on lo1 proto tcp from 172.17.2.1 to 172.17.2.2 port 8180
With these simple firewall settings you can have very nice network isolation between your jails and the outside.
Summary
From what I have experienced myself I would suggest the following path to understand how to use jails:
Understand basic kernel functionality - what is a process
Understand chroot
Understand the boot process
Build a fat jail
Work with the jail. Use/update
Understand hier
Build a thin jail
Work with multiple jails
Understand networking
Setup Networking
Understand ZFS
That is my recipe. Probably not quite what you hoped for :-)
If you do not want to handle the gritty bits with the supplied tools then have a look at some of the 3rd party tools:
Bastille
ezjail
iocage
From all this you can see that your question was rather broad and depends heavily on your preferred setup. I believe I have shown a recipe for a well-behaved package such as nginx, but maybe you need to ask some other questions:
How do I daemonize tinydns?
How do I run tinydns as an unprivileged user?
How do I write an rc script?
But then I would leave the jail parts out of it.
| First time installing TinyDNS as a service in a FreeBSD jail - how to do it? |
1,398,727,302,000 |
I'm running FreeBSD 10.3 p4 and observed some strange behavior
When restarting the machine pf starts due to /etc/rc.conf entry
# JAILS
cloned_interfaces="${cloned_interfaces} lo1"
gateway_enable="YES"
ipv6_gateway_enable="YES"
# OPENVPN -> jails
cloned_interfaces="${cloned_interfaces} tun0"
# FIREWALL
pf_enable="YES"
pf_rules="/etc/pf.conf"
fail2ban_enable="YES"
# ... other services ...
# load ezjail
ezjail_enable="YES"
but ignores all rules concerning jails, so I have to reload the rules manually with:
sudo pfctl -f /etc/pf.conf
My pf.conf reads:
#external interface
ext_if = "bge0"
myserver_v4 = "xxx.xxx.xxx.xxx"
# internal interfaces
set skip on lo0
set skip on lo1
# nat all jails
jails_net = "127.0.1.1/24"
nat on $ext_if inet from $jails_net to any -> $ext_if
# nat and redirect openvpn
vpn_if = "tun0"
vpn_jail = "127.0.1.2"
vpn_ports = "{8080}"
vpn_proto = "{tcp}"
vpn_network = "10.8.0.0/24"
vpn_network_v6 = "fe80:dead:beef::1/64"
nat on $ext_if inet from $vpn_network to any -> $ext_if
rdr pass on $ext_if proto $vpn_proto from any to $myserver_v4 port $vpn_ports -> $vpn_jail
# nsupdate jail
nsupdate_jail="127.0.1.3"
nsupdate_ports="{http, https}"
rdr pass on $ext_if proto {tcp} from any to $myserver_v4 port $nsupdate_ports -> $nsupdate_jail
# ... other jails ...
# block all incoming traffic
#block in
# pass out
pass out
# block fail2ban
table <fail2ban> persist
block quick proto tcp from <fail2ban> to any port ssh
# ssh
pass in on $ext_if proto tcp from any to any port ssh keep state
I had to disable blocking all incoming traffic as ssh via ipv6 stopped working.
Any suggestions how to fix the problem?
|
The problem here is that /etc/rc.d/pf runs before /usr/local/etc/rc.d/ezjail, so the kernel hasn't configured the jailed network by the time it tries to load the firewall rules. You might be tempted to alter the pf script to start after ezjail, but that's not a good idea - you want your firewall to start early in the boot process, but jails get started quite late on. service -r shows what order your rc scripts will run.
You don't show any of your pf.conf rules, but my guess is that they use static interface configuration. Normally, hostname lookups and interface name to address translations are carried out when the rules are loaded. If a hostname or IP address changes, the rules need to be reloaded to update the kernel. However, you can change this behaviour by surrounding interface names (and any optional modifiers) in parentheses, which will cause the rules to update automatically if the interface's address changes. As a simple (and not very useful) example:
ext_if="em0"
pass in log on $ext_if to ($ext_if) keep state
The pf.conf manpage is very thorough. In particular, the "PARAMETERS" section is relevant here.
| Freebsd: pf firewall doesn't work on restart |
1,398,727,302,000 |
Question
I want to separate PHP (PHP-FPM) and Nginx into different jails. One jail with Nginx, and one with PHP-FPM / PHP / Wordpress.
Nginx is good at serving static assets, so I would like to serve those directly with Nginx. How can I mount a folder from one jail into another jail (read-only)?
I also have a Nodejs app in another jail, so I would also like to serve the static assets of it directly with Nginx.
Side question: When you host multiple PHP sites on the same server, do you have to install PHP / PHP-FPM in each jail if you want each web app in its own jail?
Info
Version: FreeBSD 10.2
Filesystem: root on ZFS
Sources
keramida.wordpress.com - freebsd-nullfs
cyberciti.biz - freebsd-mount_nullf-usrports-inside-jail
I have found this blog on using mount_nullfs for it. But can you use it between jails instead of between the host and a jail?
|
nullfs can be used to give a jail read-only access to parts of the host's file system. All the jails live within the host's file system, so the idea of jail-to-jail access is moot.
On my system (and I do jails the hard way) I have the following directive in /etc/jails.conf:
mount.fstab = "/etc/fstab.${name}";
which means I have separate fstabs for each jail, which then contains something like:
/jail/base /jail/somejail/base nullfs ro 0 0
There is obviously a whole range of arguments regarding the partitioning of jails, processes and applications. Personally, I like to keep an application self contained within a single jail, then use (yet another) nginx jail to reverse-proxy to all of the application jails. Using ZFS and one application per jail makes it very easy to manage different versions of the stack simultaneously, test new versions, and roll-back where necessary. In summary, I advocate running nginx and php-fpm in each application jail (that contain both static and dynamic content).
| FreeBSD jails - Nginx, PHP-FPM, Wordpress - Share folder between jails (read-only) |
1,398,727,302,000 |
Why does chroot operation result in error: "bash: /root/.bashrc: Permission denied"?
I've been testing chroot for learning purposes, and have encountered the following error, when executing /bin/bash:
nlykkei@debian:~$ id
uid=1000(nlykkei) gid=1000(nlykkei) groups=1000(nlykkei),27(sudo)
nlykkei@debian:~$ sudo chroot --userspec nlykkei:root --groups sudo / /bin/bash
bash: /root/.bashrc: Permission denied
nlykkei@debian:/$ id
uid=1000(nlykkei) gid=0(root) groups=0(root),27(sudo)
It seems like /bin/bash is attempting to access root's .bashrc instead of nlykkei's?
Furthermore, I cannot use my home directory as NEWROOT and execute /bin/bash there, even though I created ~/bin/bash (a copy):
nlykkei@debian:~$ ls -la ~/bin/bash
-rwxr-xr-x 1 nlykkei nlykkei 1168776 Sep 23 10:49 /home/nlykkei/bin/bash
nlykkei@debian:~$ sudo chroot --userspec nlykkei:root --groups sudo /home/nlykkei/ /bin/bash
chroot: failed to run command ‘/bin/bash’: No such file or directory
Any ways to resolve these issues?
nlykkei@debian:~$ uname -a
Linux debian 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) x86_64 GNU/Linux
|
Passing --userspec to chroot is not the same thing as running su - user inside the chroot environment; i.e. the home directory is still that of root i.e. /root which is why bash is trying to read /root/.bashrc which is not allowed for non-root users.
Your second problem is probably due to not having included all the necessary shared libraries in the chroot environment. From the chroot info document:
If you want to use a dynamically linked executable, say ‘bash’, then
first run ‘ldd bash’ to see what shared objects it needs. Then, in
addition to copying the actual binary, also copy the listed files to
the required positions under your intended new root directory.
Finally, if the executable requires any other files (e.g., data,
state, device files), copy them into place, too.
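That copying can be scripted so no library is missed. A rough sketch for Linux systems with ldd on the PATH; the jail root below is a temporary placeholder directory, so substitute your real chroot path:

```shell
#!/bin/sh
# Copy a dynamically linked binary and every shared object ldd reports
# into a chroot tree, preserving the original directory layout.
JAILROOT=$(mktemp -d)     # placeholder; use e.g. /home/jail in practice
bin=/bin/bash

mkdir -p "$JAILROOT$(dirname "$bin")"
cp "$bin" "$JAILROOT$bin"

# ldd lines look like "libc.so.6 => /lib/.../libc.so.6 (0x...)";
# extract each absolute path (the vDSO prints no path and is skipped).
for lib in $(ldd "$bin" | grep -o '/[^ ]*'); do
    mkdir -p "$JAILROOT$(dirname "$lib")"
    cp "$lib" "$JAILROOT$lib"
done
```

Repeat for each binary the jail needs; extras that ldd cannot see (helper programs a command execs, locale data, terminfo) still have to be tracked down, typically with strace.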
| Why does `chroot` operation result in error: "bash: /root/.bashrc: Permission denied"? |
1,398,727,302,000 |
I have been reading over the creation of Jails on FreeBSD, and one stumbling block I have is regarding network interfaces.
Say my local router is 10.0.2.1, and the BSD box is 10.0.2.5. I would like jails on 10.0.2.6-10.
Do I just define them in rc.conf (without any other work), or do I have to set up a bridge or something like that (I see FreeNAS uses a bridge)?
I'm not particularly strong on the networking side of things, so any good explanation about how the jails share and access the network would be valuable.
|
Jails get ip aliases on your network interface. If the jails use the same interface as the host and are on the same subnet you don't need to do any additional routing.
If your jails do not use the same interface you would need to bridge the primary interface with the interface the Jails use.
| Network interfaces for Jails |
1,398,727,302,000 |
I want to let some of my friends access my computer by making them user accounts. They will mostly access my computer by sftp and ssh, but they could also sometimes access it at my home. However I don't want them to be able to see all my file (not my personal files from my home directory, I mean files that reside outside of the user directory, like etc, lib...)
I asked the question recently:
OpenSSH, chroot user: Root needs to own the user directory, is there any consequence?
And the answer that was given to me was that if I chroot the user, I will need to create a complete environment for every user.
Is there a way to actually prevent users from going outside of their home directory, and to prevent them from passing an argument to a program like cp that points outside of their home directory? In other words, any way to keep my system private? What is the best solution? I want them to be able to fully use all my programs, but be unable to copy or read files, or use programs to read or copy files, outside of their home directory.
|
Depending on what "fully use all my programs" means, the options are:
Use standard Unix file permissions to protect your files. The advantage here is that it's really easy to set up as it's just a matter of deciding which files you want protected and setting the appropriate permissions on them. The downside is that your friend will not be able to do everything on the system as they won't have root access
Run a FreeBSD jail. FreeBSD has jails that are designed for exactly this purpose. They take a little effort to set up, but you're giving your friend a full filesystem that they can use as they wish: http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails.html
Run a true Virtual Machine. Xen or Virtualbox can be run to give a fully operational server to your friend. This can be quite resource-intensive in terms of memory, CPU and disk, but it's the most separate from your files.
| How to completely hide stuff from a user? |
1,398,727,302,000 |
My title question is exactly my question: is it possible to create a FreeBSD 10 or 11 jail inside a FreeBSD 9 instance?
|
The jail and the host system will both share the same kernel. So:
Old jail, recent host: Running an outdated jail on a newer host should not cause any issue (the FreeBSD kernel will ensure retro-compatibility, enabled by default even for pre-FreeBSD 8 kernel as a kernel compilation option),
Recent jail, old host: I would not try to run a newer jail on an older host. This would mean running a FreeBSD 10/11 userland on a FreeBSD 9 kernel, which is definitely not recommended.
So, in your case, the answer is no, it is not possible (even if it may install successfully it will most likely lead you directly to a wall).
| Is it possible to create a freebsd 10 or 11 jail in freebsd 9? |
1,398,727,302,000 |
I created a chroot jail and copied multiple binaries and their corresponding libraries to the relevant subdirectories. Example:
cp -v /usr/bin/edit /home/jail/usr/bin
ldd /usr/bin/edit
linux-vdso.so.1 (0x00007fff565ae000)
libm.so.6 => /lib64/libm.so.6 (0x00007f7749145000)
libtinfo.so.5 => /lib64/libtinfo.so.5 (0x00007f7748f11000)
libacl.so.1 => /lib64/libacl.so.1 (0x00007f7748d08000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f7748b04000)
libperl.so => /usr/lib/perl5/5.18.2/x86_64-linux-thread-multi/CORE/libperl.so (0x00007f7748771000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f7748554000)
libc.so.6 => /lib64/libc.so.6 (0x00007f77481ad000)
libattr.so.1 => /lib64/libattr.so.1 (0x00007f7747fa8000)
/lib64/ld-linux-x86-64.so.2 (0x00007f7749446000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f7747d6d000)
cp -v /lib64/{libm.so.6,libtinfo.so.5,libacl.so.1,libdl.so.2,libpthread.so.0,libc.so.6,libattr.so.1,ld-linux-x86-64.so.2,libcrypt.so.1} /home/jail/lib64/
I did the same with the man command and copied all manual files with cp -rv /usr/share/man/ /home/jail/usr/share/, but if I execute it, it returns this error:
-bash-4.2$ man gzip
execve: No such file or directory
What could be missing?
More details:
-bash-4.2$ ls /usr/share/man
ca da el es fr.ISO8859-1 hu it man0p man1p man3 man4 man6 man8 mann pl pt_BR sk sv zh zh_TW
cs de eo fr fr.UTF-8 id ja man1 man2 man3p man5 man7 man9 nl pt ru sr uk zh_CN
Update:
-bash-4.2$ strace -f /usr/bin/mandb ls 2>ls.log
-bash-4.2$ cat ls.log
execve("/usr/bin/mandb", ["/usr/bin/mandb", "ls"], [/* 45 vars */]) = 0
brk(0) = 0x138b000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd43a9ac000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/lib64/tls/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/lib64/tls/x86_64", 0x7ffde87d2510) = -1 ENOENT (No such file or directory)
open("/lib64/tls/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/lib64/tls", 0x7ffde87d2510) = -1 ENOENT (No such file or directory)
open("/lib64/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/lib64/x86_64", 0x7ffde87d2510) = -1 ENOENT (No such file or directory)
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20\34\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1974416, ...}) = 0
mmap(NULL, 3828256, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fd43a3e6000
mprotect(0x7fd43a584000, 2093056, PROT_NONE) = 0
mmap(0x7fd43a783000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x19d000) = 0x7fd43a783000
mmap(0x7fd43a789000, 14880, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fd43a789000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd43a9ab000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd43a9aa000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd43a9a9000
arch_prctl(ARCH_SET_FS, 0x7fd43a9aa700) = 0
mprotect(0x7fd43a783000, 16384, PROT_READ) = 0
mprotect(0x601000, 4096, PROT_READ) = 0
mprotect(0x7fd43a9ad000, 4096, PROT_READ) = 0
brk(0) = 0x138b000
brk(0x13ac000) = 0x13ac000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/locale/de_DE.UTF-8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/locale/de_DE.utf8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/locale/de_DE/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/locale/de.UTF-8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/locale/de.utf8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/locale/de/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
getuid() = 1000
geteuid() = 1000
getgid() = 100
execve("/usr/lib/man-db/mandb", ["/usr/bin/mandb", "ls"], [/* 45 vars */]) = -1 ENOENT (No such file or directory)
dup(2) = 3
fcntl(3, F_GETFL) = 0x8001 (flags O_WRONLY|O_LARGEFILE)
close(3) = 0
write(2, "execve: No such file or director"..., 34execve: No such file or directory
) = 34
exit_group(-22) = ?
+++ exited with 234 +++
Update2:
Ok this part was missing:
cp -rv /usr/lib/man-db/ usr/lib/
Now I get this error:
man: error while loading shared libraries: libmandb-2.6.6.so: cannot open shared object file: No such file or directory
Strangely, it's not part of the ldd output:
# which mandb
/usr/bin/mandb
# ldd /usr/bin/mandb
linux-vdso.so.1 (0x00007fffd64d0000)
libc.so.6 => /lib64/libc.so.6 (0x00007f1885120000)
/lib64/ld-linux-x86-64.so.2 (0x00007f18854c7000)
Finally I needed those libraries:
cp /usr/lib64/libmandb-2.6.6.so usr/lib64/libmandb-2.6.6.so
cp /usr/lib64/libgdbm.so.4 usr/lib64/libgdbm.so.4
After that man loaded, but no text is displayed:
# man ls
Man: find all matching manual pages (set MAN_POSIXLY_CORRECT to avoid this)
* ls (1)
ls (1p)
Man: What manual page do you want?
Man: 1
I compared the strace results of the jail and root user and they differ now only in this part (jail is left):
As I added a bind mount to /var/run/nscd, the socket is available for the jail user:
-bash-4.2$ if [[ -S /var/run/nscd/socket ]]; then echo "socket is available"; fi
socket is available
So the problem seems to be something else?!
Update3:
@nobody
Yes, passwd and group are present:
-bash-4.2$ ls -la /etc
total 124
drwxr-xr-x 4 root root 216 Nov 11 14:15 .
drwxr-xr-x 13 root root 183 Nov 4 08:49 ..
-rw-r--r-- 1 root root 779 Nov 3 12:43 group
-rw-r--r-- 1 root root 67659 Nov 11 13:55 ld.so.cache
-rw-r--r-- 1 root root 2335 Nov 4 09:02 localtime
-rw-r--r-- 1 root root 12061 Nov 11 13:16 manpath.config
-rw-r--r-- 1 root root 1304 Nov 11 14:15 nsswitch.conf
-rw-r--r-- 1 root root 3961 Nov 3 12:43 passwd
drwxr-xr-x 2 root root 4096 Nov 3 14:13 postfix
-rw-r--r-- 1 root root 9168 Nov 4 09:02 profile
drwxr-xr-x 2 root root 4096 Nov 4 09:02 profile.d
-rw-r--r-- 1 root root 8006 Nov 4 09:17 vimrc
Update4:
The -Tascii flag returned more missing binaries:
-bash-4.2$ man -Tascii ls
man: can't execute tbl: No such file or directory
man: can't execute groff: No such file or directory
man: command exited with status 255: /usr/bin/zsoelim | /usr/lib/man-db/manconv -f UTF-8:ISO-8859-1 -t ANSI_X3.4-1968//IGNORE | tbl | groff -mandoc -Tascii
So I copied tbl, groff and zsoelim and the complete dir /usr/share/groff. Now two additional binaries were missing:
-bash-4.2$ man -Tascii ls
groff: couldn't exec troff: No such file or directory
groff: couldn't exec grotty: No such file or directory
man: command exited with status 4: /usr/bin/zsoelim | /usr/lib/man-db/manconv -f UTF-8:ISO-8859-1 -t ANSI_X3.4-1968//IGNORE | tbl | groff -mandoc -Tascii
After copying these, the manual was displayed:
But without the -Tascii flag it's still blank/empty. :|
Update5:
Default pager seems to be less
-bash-4.2$ env | grep MANPATH
MANPATH=/usr/share/man
-bash-4.2$ env | grep PAGER
PAGER=less
|
You should run the command strace -f man ls 2>ls.log and see how many execve lines there are in the ls.log file. You will have /usr/bin/pager, nroff, groff, tbl… groff alone needs a lot of files to work properly. Also check how many of the openat calls in the log file are successful.
| man returns execve: No such file or directory in chroot jail |
1,398,727,302,000 |
I am trying to secure a custom application as much as possible from outside tampering.
I've seen many pages on jailing a user, but they usually include many exceptions, and I want to lock down this user as much as possible.
The user only needs to execute an application that is a websocket++ client & server that needs the ability to:
Accept incoming connections port forwarded from 443 to another port, for example 8000
Seek outgoing connections
Communicate with a local PostgreSQL server
Read from & write to a few specific files in the directory where the application is executed
Get output from ntpd -c 'rv'
Accept keyboard input
How can my intent be implemented?
|
if you really
want to lock down this user as much as possible
create a virtual machine. chroot doesn't really isolate the process.
If a real virtual machine is too heavy, maybe you can have a look at Linux containers, a lightweight form of virtual machine. They are harder to configure, though.
If you want something even more lightweight, you can try to configure SELinux. It is maybe even harder to configure, but it should do exactly what you want.
chroot is not intended as a security measure, and there are various ways to work around it.
| Absolutely jail a user with minimum IP, file, & command rights |
1,398,727,302,000 |
I want a process (and all its potential children) to be able to read the filesystem according to my user profile but I want to restrict that process's write permission to only a set of pre-selected folders (potentially only one).
chroot seems to act too broadly: restricting the process to a particular part of the filesystem makes things cumbersome, since I would need to mount /bin folders and the like into it. My process should be able to read the content of the filesystem like any normal process I launch.
I could use a docker container and mount a volume but that seems overkill: need to install docker, create an image, launch the container in it, etc...
Is there a way to do something like?:
restricted-exec --read-all --write-to /a/particular/path --write-to /another/particular/path my-executable -- --option-to-the-executable
Some sort of unveil but controlled by the calling process and only for write access.
|
firejail does the job:
mkdir -p ~/test && firejail --read-only=/tmp --read-only=~/ --read-write=~/test/ touch ~/test/OK ~/KO /tmp/KO
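The same restrictions can also live in a firejail profile instead of on the command line. A sketch assuming firejail's profile syntax; the profile file name and paths are purely illustrative:

```
# ~/.config/firejail/my-executable.profile (hypothetical name)
read-only ${HOME}
read-write ${HOME}/test
read-only /tmp
```

If the profile file name matches the program name, firejail should pick it up automatically when you run `firejail my-executable`.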
| Restrict linux process write permission to one folder |
1,398,727,302,000 |
Is it possible to jail an older 32-bit FreeBSD, such as 6.4 or 8.4 in a 64-bit FreeBSD 10.2?
I'll also appreciate pointers and explanations on how to accomplish this and information on what prerequisites my host needs to fulfill.
NB: according to this blog article, jailing an older FreeBSD on a current one is possible. But that article makes no mention of 32-bit versus 64-bit.
|
COMPAT_FREEBSD32 needs to be enabled in your FreeBSD kernel (It is enabled in the GENERIC kernel)
There might be problems with the 32bit ps and top programs.
| Can I jail an older 32-bit FreeBSD on a (current) 64-bit FreeBSD? |
1,398,727,302,000 |
I'm trying to setup a chroot jail, but I'm not sure how to make this work in SSH and SFTP. A quick question, will something like this work for both SSH and SFTP or just SSH? If it doesn't work for both, how can I setup a chroot jail (or an alternative) to do so?
|
If SSH does a chroot, then it will be effective for all processes started by SSH.
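For the common SFTP-only case, OpenSSH can perform the chroot itself, and because the in-process internal-sftp server is used, no binaries or libraries need to be copied into the jail. A sketch for sshd_config (the group name is an assumption; the chroot directory and every path component above it must be owned by root and not group- or world-writable):

```
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Interactive SSH logins into the same chroot would still need a shell and its libraries inside the jail, exactly as with a manual chroot.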
| Is a Chroot Jail for SSH *and* SFTP? |
1,398,727,302,000 |
First of all, I know about virtualisation and containers. I'm sure "he wants containers" is what popped in your mind. (Don't deny it!)
However, containers are like chroot: if you want to execute bash in one, you need to copy/mirror a bash executable into the container FS along with all the required libs. (If I misunderstood something, please correct me.)
What I want to know is whether I can start a program like busybox from the current namespace (using the original FS, so no copy needed) and then isolate it (for instance using Linux FS namespace) to let it access one unique directory.
Somehow ssh (sftp actually) seems to be able to do something like this without requiring the sshd executable to be in the chrooted FS. But I still lack skills to understand what's going on by myself.
|
I think you're probably looking for containers.
Or perhaps not. Linux namespaces can be pretty transparent, after all. I don't believe there is a way to unshare a namespace for a process which is already running, but you definitely can unshare a namespace at launch time with little to no effect.
cd /tmp
echo you >hey
sudo unshare -m busybox
echo hey >you; cat hey
you
...and from another terminal...
cd /tmp
cat you
hey
...the mount tree is shared by default from the parent namespace, and, though busybox's mount propagation flags are all set to private by default, it makes no difference until a change to the mount tree is somehow effected in the namespace. This need not be done through busybox, either.
...in busybox's terminal...
echo "#$$"
#8854
...and now from the other...
sudo nsenter -t8854 -m mount -t tmpfs none /tmp
cd .; cat hey
you
...but from busybox's terminal, and therefore from the namespace we just effected a mount...
cd .; cat hey
cat: hey: no such file or directory
...because a new private tmpfs was mounted over the shared /tmp in the nsenter command...
cd ..
umount tmp
cat tmp/hey
you
| Is there a way to isolate a running program from the rest of a Linux system? |
1,398,727,302,000 |
I use FreeBSD 9.1 64-bit from the list here.
On my freebsd amazon instance, I have a jail running :
# jls
JID IP Address Hostname Path
1 192.168.1.101 01.gideon.com /jails/01.gideon.com
If I go to that jail's console I can't install perl in it. (If I do portsnap fetch inside the jail I get: host: isc_socket_bind: address not available, and then it says no mirrors, giving up.)
I've looked at several articles and posts but I'm confused about what goes where, I just want you to tell me where I should put the right entries, this is a sample from this article, my system info is below:
rc.conf
hostname="" #what goes here?
defaultrouter=""#what goes here?
#I don't understand what this is for?
ifconfig_em0="inet 192.168.0.10 netmask 255.255.255.0"
#I'm guessing this should be like this:
ifconfig_xn0="inet 192.168.1.101 netmask 255.255.255.0"
# Should I use an alias?
# ifconfig_em0_alias0="inet 192.168.0.111 netmask 255.255.255.0"
This is what ifconfig -a gives me:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
inet 127.0.0.1 netmask 0xff000000
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
xn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=503<RXCSUM,TXCSUM,TSO4,LRO>
ether 12:31:39:2a:dc:cc
inet 10.8.106.58 netmask 0xfffffe00 broadcast 10.8.107.255
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
media: Ethernet manual
status: active
This is my /etc/resolv.conf:
# Generated by resolvconf
search ec2.internal
nameserver 172.16.0.23
So how do I go about this?
|
Modification of networking from inside a FreeBSD jail isn't allowed. A jail can use all host addresses, a restricted set configured when the jail is created, or no networking at all. And, as far as I can see, the allowed IPs are automatically placed on the interfaces seen inside the jail.
You should specify the exact FreeBSD version in updates to the question, because jail behaviour is extended with each release and the details can differ subtly.
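Concretely, the allowed addresses are fixed from the host side when the jail is defined, for example in /etc/jail.conf with the jail(8) utility on recent FreeBSD versions. A sketch modelled on the jail above; treat the exact parameter values as illustrative:

```
# /etc/jail.conf on the host (parameter values illustrative)
gideon {
    host.hostname = "01.gideon.com";
    path = "/jails/01.gideon.com";
    ip4.addr = "xn0|192.168.1.101/24";   # alias added/removed automatically
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

Also copy the host's /etc/resolv.conf into the jail's /etc so that name resolution (and hence portsnap's mirror lookup) works inside it.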
| enable networking inside a freebsd jail |
1,398,727,302,000 |
Is it conceptually possible to take a FreeBSD jail and copy the contents to my root filesystem and expect it to boot up?
Note that because the jail doesn't have a kernel, I will have to copy that from the original host.
I'd like to make whatever configuration changes I want and use that as a starting point for my real installation. The point of doing that in the jail is that it won't impact my working environment while the configuration is taking place.
|
The original question was aimed at streamlining the BSD installation process. Having since installed FreeBSD numerous times and scripted the install, I find the approach in this question a different way to go about it and not that useful.
The installation can be completely automated, allowing for hands-free installation.
To answer the question posed above, it wouldn't make much sense to copy the contents of the jail, as all the other files would also need to be copied and the process would be error-prone.
| Can the contents of a FreeBSD jail be installed as the main OS? |
1,472,759,778,000 |
I'm running FreeNAS (v9.2.1.9) as a file and media server and have several jails installed, some of them as plugin jails others as "standard" jails.
After I had replaced one of the plugin jails by a "standard" (hand made) jail, I wanted to remove the old plugin jail. I rushed ahead by just deleting the dataset on which the plugin jail was installed, but the plugin jail is still showing up in the web interface under the "Jails" tab. So I assume at some place there is still some residuals of the plugin jail left. Since I have no idea what the plugin installer did and there is no built-in way to remove a plugin jail, I'm stuck with this "unclean" removal.
Now I'm wondering, how can I remove a plugin jail from FreeNAS completely? What additional cleanup is required in my case?
|
I had a similar issue on FreeNAS 9.3 and could not find a remove button. I got it to disappear by going to the jails volume, and removing the directory which held the jail:
# rm -fr /mnt/black/jails/.couchpotato_1.meta/
This also removed the complaints during boot. No negative side effects seen yet.
| How to remove a (plugin) jail from FreeNAS |
1,472,759,778,000 |
I'm in the process of setting up a FreeBSD 10 server with just some services on a KVM server:
NGINX webserver w PHP-FPM
2 domains, 3 subdomains
Mailserver
3 Email addresses
MySQL (or other) database server if needed
The server will only be administered by me. One SSH account with sudo access and later a second one with restrictions, mainly for SFTP operations.
I set everything up with ZFS (encrypted root) and am now thinking about whether I should / need to use jails or not.
I'm also unsure if it would make more sense to encrypt the jails and not the root volume although I don't have any idea how to do that.
From what I have read, the security gain of jails can be debated, and I'm unsure whether I gain anything from the added complexity of the whole system.
|
It's really up to you.
The benefit of setting up jails is that you're going to separate security issues that come up between the jails, as well as making it easier to upgrade and manage them separately, which can increase reliability.
The downside of setting up the jails of course is that you have to learn it, it's a bit of management overhead, etc.
Personally, I would probably set up a jail for a front-end nginx reverse proxy, then a separate web server jail for each web app. This way an issue with one web app wouldn't affect the others. Similarly, the mail server would get its own jail, as would MySQL.
Take a look at ezjail, it'll make setting up the jails easier.
| FreeBSD 10: Do I need jails? |
1,472,759,778,000 |
I would like to know how I can constrain a user to only be able to access, with RWX permissions, directories like /etc/httpd, /etc/php and /var/www/html, as well as their own home directory.
Also, I would like to be able to constrain this user to only start/stop/restart the Apache service.
All I could think of is chroot, but I have just done that with one directory. Any ideas?
|
A jailed user won't be able to access those folders as-is. If you have acl enabled on the filesystem, you could create a regular user and control access to the directories by using an access control list.
To give user 'Bob' access to the directories, create a group, place Bob in that group and then recursively give the group access to all existing and newly created files in /etc/http/:
# groupadd WebAccessGroup
# usermod -a -G WebAccessGroup Bob
# setfacl -Rm d:g:WebAccessGroup:rwx,g:WebAccessGroup:rwx /etc/httpd/
You could also give just user "Bob" wrx access to /etc/httpd without creating a group:
# setfacl -Rm d:u:Bob:rwx,u:Bob:rwx /etc/httpd/
To allow the WebAccessGroup group to start and stop Apache, you could give the group sudo access to run the specific script that you call to start/stop Apache as root:
Use the 'visudo' command to add the following to your /etc/sudoers file:
# visudo
%WebAccessGroup ALL=(root) NOEXEC: /usr/sbin/httpd
And then Bob would start Apache using sudo:
$ sudo /usr/sbin/httpd -k start
** Note: If you run Apache on a non-standard port as a non-root user ("anotheruser" in this example), it's safer and better to change ALL=(root) to ALL=(anotheruser) and to run the start command like:
sudo -u anotheruser /usr/sbin/httpd -k start
| chroot an user to more than one directory in different locations |
1,472,759,778,000 |
I want to be able to generically pick a certain executable (potentially malicious) x and run it (from an admin account) with write access restricted to certain directories (dynamically deduced) "${dirs[@]}".
The executable should have access to whatever is globally accessible on the system.
I figured I could use simple user-switching and have a dedicated, stateless system user foreveralone, for running theses executables.
Whenever I wanted to run x with these restrictions, I would flock a lock file, chown -R foreveralone:foreveralone -- "${dirs[@]}", and then run sudo -u foreveralone -g foreveralone $PWD/x.
After that, I would chown the write directories to someone else, so that foreveralone would have no persistent files on the filesystem.
I figure I would also need to clean up foreveralone's files from globally writable directories (e.g., /tmp, /dev/shm).
My questions are:
Is this a feasible and secure mechanism for jailing processes, given a standardly set-up *nix system?
What exactly are the standard globally writable places and files on a standardly set-up *nix?
How can I find them better than with something like
sudo -u foreveralone -g foreveralone find / ! -type l -writable 2>/dev/null | grep -v '^/proc'
(My find skills are very weak. /proc/$pid appears to have lots of files that look writable but in fact aren't, so I'm skipping those; I wonder what's up with that.)
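For reference, a permission-bit-based variant I sketched (it checks the world-writable bit rather than -writable, so it doesn't depend on which user runs it, and it prunes /proc and /sys, whose permission bits are misleading) would be:

```shell
# world_writable DIR -- list world-writable files, directories and sockets
# under DIR. Unlike find's -writable test, this checks the other-write
# permission bit itself, so it works the same for any calling user.
world_writable() {
    find "$1" \( -path /proc -o -path /sys \) -prune -o \
        \( -type f -o -type d -o -type s \) -perm -0002 -print 2>/dev/null
}

# Example: world_writable /
```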
Anyway, on my system, 3. returns (filtered to show filetypes):
character special file /dev/full
character special file /dev/fuse
character special file /dev/net/tun
character special file /dev/null
character special file /dev/ptmx
character special file /dev/random
character special file /dev/tty
character special file /dev/urandom
character special file /dev/zero
character special file /sys/kernel/security/apparmor/.null
directory /run/lock
directory /run/shm
directory /tmp
directory /tmp/.ICE-unix
directory /tmp/.X11-unix
directory /var/local/dumps
directory /var/mail
directory /var/spool/samba
directory /var/tmp
regular empty file /run/sendmail/mta/smsocket
regular empty file /sys/kernel/security/apparmor/.access
socket /dev/log
socket /run/acpid.socket
socket /run/avahi-daemon/socket
socket /run/cups/cups.sock
socket /run/dbus/system_bus_socket
socket /run/gdm_socket
socket /run/mysqld/mysqld.sock
socket /run/samba/nmbd/unexpected
socket /run/sdp
socket /tmp/.ICE-unix/2537
socket /tmp/mongodb-27017.sock
socket /tmp/.X11-unix/X0
Is there a better (simpler / more flexible) solution to this?
In my particular case, x would be a potentially malicious build script which should run without writing to the wrong places or reading things that aren't globally readable.
|
Let me show you something using only classic Unix concepts, because I think it can help (or maybe not).
Imagine that I want the nano executable to be runnable by every user, but never as the user who calls it: instead it should run in a limited environment, with access only to the Apache configuration or to files in certain groups. In other words, I want nano to be executed like a service, limited to the privileges of a dedicated virtual user.
1- First, create the user nano and disable its login:
useradd nano -d /var/nano
mkdir /var/nano
chown -R nano:nano /var/nano
passwd -l nano
2- Force nano to always run as the user nano (for example, if root calls nano, it must run as nano, not as root):
chown nano:nano /usr/bin/nano
chmod a+s /usr/bin/nano
The +s (setuid/setgid) bits mean that nano will run with the privileges of its owner, not those of whoever called it.
3- Call nano as root for a test:
#nano
#ps aux | grep nano
nano 3399 0.0 0.0 13828 3840 pts/0 S+ 08:48 0:00 nano
Beautiful! nano now runs as the user nano, regardless of which user I logged in with.
4- So what now? I want nano to be able to edit the files under /var/www:
chgrp -R www-data /var/www/ (yes, I know this is unnecessary on Debian if the group ownership is already correct)
chmod -R g+rw /var/www
adduser nano www-data
5- What more?
You will note that every user can now use nano (or a special copy of it, "nano-special") to edit the files under /var/www. So what if you want only users in the group nano to be able to do that?
Simply remove the execute permission for others:
chmod o-x /usr/bin/nano
...and add the allowed users to the group nano:
adduser myuser1 nano
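As a small aside (my own sketch, not part of the steps above): you can check from a script whether the setuid bit from step 2 really took effect, using the POSIX test -u operator:

```shell
# show_setuid FILE -- report whether FILE has the setuid bit set
# (i.e. whether a `chmod a+s` like the one in step 2 took effect).
show_setuid() {
    if [ -u "$1" ]; then
        echo "setuid: yes"
    else
        echo "setuid: no"
    fi
}

# Example: show_setuid /usr/bin/nano
```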
| Globally writable files and user-based process jailing |
1,472,759,778,000 |
Update: I'm happy to report that @dartonw's answer worked; I did a checkout and then buildworld, and it built successfully in about 6 hours.
So I've been having some issues with jails in FreeBSD. I run FreeBSD 9.1 64-bit on EC2 as a small instance. I recently tried:
cd /usr/src;make buildworld
And after nine hours of compiling it gives me:
{standard input}:12044: Warning: end of file not at end of a line; newline inserted
{standard input}:12142: Error: invalid character '_' in mnemonic
c++: Internal error: Killed: 9 (program cc1plus)
Please submit a full bug report.
See <URL:http://gcc.gnu.org/bugs.html> for instructions.
*** [TransAutoreleasePool.o] Error code 1
Stop in /usr/src/lib/clang/libclangarcmigrate.
*** [all] Error code 1
Stop in /usr/src/lib/clang.
*** [all] Error code 1
Stop in /usr/src/lib.
*** [lib__L] Error code 1
Stop in /usr/src.
*** [libraries] Error code 1
Stop in /usr/src.
*** [_libraries] Error code 1
Stop in /usr/src.
*** [buildworld] Error code 1
Stop in /usr/src.
I came across this article, which says:
Let's synchronise sources.
# cd /usr/share/examples/cvsup/
# cp standard-supfile /etc/freebsd-supfile
The list CVSup mirror sites is here.
But, the link says:
Warning: cvsup has been deprecated by the project, and its use is not recommended. Subversion should be used instead.
What should I do then? Where can I find an updated article? Should I update my ports collection?
|
You can use Subversion in basically the same way as documented for cvsup. In short:
# portsnap fetch update
# cd /usr/ports/devel/subversion
# make install clean
Then to update /usr/src (assuming you have sources installed):
# svn update /usr/src
If sources are not already installed in /usr/src, you can check out a fresh working copy:
# svn checkout https://svn.freebsd.org/base/head /usr/src
See Using Subversion in the FreeBSD Handbook for more options. You can get more information on using Subversion in general at the Subversion Primer.
Unless you want to customize the ports (i.e. make local changes to the source code), use portsnap. It is the official replacement for the port management functionality previously handled by cvsup and will probably meet most of your needs. See portsnap in the FreeBSD Handbook for a detailed but easy to follow guide.
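If you are scripting this, a rough way to decide between svn update and a fresh checkout is to test for a working copy first. This is a heuristic of mine, not from the Handbook: since Subversion 1.7 a working copy has a single .svn directory at its root.

```shell
# is_svn_wc DIR -- succeed if DIR looks like the root of a Subversion
# working copy (single top-level .svn directory, Subversion >= 1.7).
is_svn_wc() {
    [ -d "$1/.svn" ]
}

# Example:
#   if is_svn_wc /usr/src; then svn update /usr/src; fi
```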
| the right way to synchronise sources on freebsd |
1,472,759,778,000 |
This question is actually too broad...
What I really want to know is whether or not it actually chroots and, if so, how an SSH user daemon[1] can be launched in that jail despite the obvious lack of the required binaries/libraries in the chroot.
Google is surprisingly silent on the matter, but a good reference explaining it would be enough (however, I'm not literate enough to read and understand their C).
[1]: I'm talking about the actual transient daemon with user privileges that is launched upon connection by the main root OpenSSH daemon.
|
The other answer is quite vague (as is the question), so I will try to be more explicit about this mechanism. I know this topic is not for everybody, but for those interested it is quite a nice thing to know about.
There are two different places where a chroot happens, and you are poking at both of them, so let me align the ideas:
First, there is privilege separation: a security mechanism in which the unprivileged, network-facing child of sshd is chrooted. This is usually an empty directory, like /var/empty.
The reason, in a few words: if there were a vulnerability in that child, it would probably not be exploitable, because the process cannot see the filesystem and is limited in other ways (see sandbox and seccomp for further reading).
Second, you can chroot the user's session (not only for SFTP) into a specific directory to prevent access to the rest of the filesystem. This is probably the part you are interested in, based on the title.
The magic of SFTP in a chroot is that you can specify Subsystem sftp internal-sftp (instead of the full path Subsystem sftp /usr/lib/openssh/sftp-server). This means sshd has the whole sftp server compiled in: instead of exec()ing a separate binary, it just calls the function where the server behaviour is defined. As a result, no supporting files are needed inside the user's chroot (unlike a normal session, where you need a shell and its dependent shared objects). You may also want a logging socket inside the chroot, if you are interested in such information.
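To make this concrete, a minimal sshd_config fragment for a chrooted SFTP-only setup might look like the following. The group name sftponly and the choice of the home directory as the chroot are my assumptions, not something stated above:

```
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
```

Note that sshd insists that the ChrootDirectory, and every component of its path, be owned by root and not writable by group or others; otherwise the login will fail.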
| How is OpenSSH sftp jail/chroot working? |
1,472,759,778,000 |
I have been trying to get Forgejo running in a TrueNAS CORE (FreeBSD) jail for over a week. When I manually start Forgejo as the git user, it runs as expected; however, attempting to run it with the rc file provided by the ports package errors out.
Forgejo Port
rc.d script
When I start forgejo manually it runs:
root@Forgejo:/home/jailuser # su git
git@Forgejo:/home/jailuser $ forgejo web -c /usr/local/etc/forgejo/conf/app.ini
2024/04/23 18:59:36 cmd/web.go:242:runWeb() [I] Starting Forgejo on PID: 4748
2024/04/23 18:59:36 cmd/web.go:111:showWebStartupMessage() [I] Forgejo version:1.21.11-1 built with GNU Make 4.4.1, go1.21.9 : bindata, pam, sqlite, sqlite_unlock_notify
However, when I attempt to start the forgejo service I get the following pid not found error:
root@Forgejo:/home/jailuser # service forgejo start
/usr/local/etc/rc.d/forgejo: DEBUG: Sourcing /etc/defaults/rc.conf
/usr/local/etc/rc.d/forgejo: DEBUG: pid file (/var/run/forgejo.pid): not readable.
/usr/local/etc/rc.d/forgejo: DEBUG: checkyesno: forgejo_enable is set to YES.
/usr/local/etc/rc.d/forgejo: DEBUG: run_rc_command: doit: forgejo_start
_
root@Forgejo:/home/jailuser # mount
Main/iocage/jails/Forgejo/root on / (zfs, local, noatime, nfsv4acls)
root@Forgejo:/home/jailuser # ll /var
total 81
drwxr-x--- 2 root wheel 2 Mar 1 18:50 account/
drwxr-xr-x 4 root wheel 4 Mar 1 18:50 at/
drwxr-x--- 4 root audit 4 Mar 1 18:50 audit/
drwxrwx--- 2 root authpf 2 Mar 1 18:50 authpf/
drwxr-x--- 2 root wheel 8 Apr 23 03:21 backups/
drwxr-xr-x 2 root wheel 2 Mar 1 18:50 cache/
drwxr-x--- 2 root wheel 3 Mar 1 19:06 crash/
drwxr-x--- 3 root wheel 3 Mar 1 18:50 cron/
drwxr-xr-x 14 root wheel 17 Apr 20 21:43 db/
dr-xr-xr-x 2 root wheel 2 Mar 1 18:50 empty/
drwxrwxr-x 2 root games 2 Mar 1 18:50 games/
drwx------ 2 root wheel 2 Mar 1 18:50 heimdal/
drwxr-xr-x 3 root wheel 23 Apr 23 00:00 log/
drwxrwxr-x 2 root mail 5 Apr 20 21:01 mail/
drwxr-xr-x 2 daemon wheel 3 Apr 20 19:28 msgs/
drwxr-xr-x 2 root wheel 2 Mar 1 18:50 preserve/
drwxr-xr-x 6 root wheel 18 Apr 23 18:56 run/
drwxrwxr-x 2 root daemon 2 Mar 1 18:50 rwho/
drwxr-xr-x 9 root wheel 9 Mar 1 18:50 spool/
drwxrwxrwt 3 root wheel 3 Mar 1 18:50 tmp/
drwxr-xr-x 3 unbound unbound 3 Mar 1 18:50 unbound/
drwxr-xr-x 2 root wheel 4 Mar 1 19:24 yp/
root@Forgejo:/home/jailuser #
Manually executing the daemon command results in an exit status of 0 with no other useful information. I tried relocating the pid file to a directory with 777 permissions and still got the same error. My only guess right now is that forgejo is dying almost immediately, before daemon(8) is able to create the pid file. I'm not sure how to get stdout from forgejo to see if there are any errors (forgejo is not logging anything to its log file directory). Any ideas?
UPDATE:
Adding truss to the init script on the call to daemon yields the following:
53609: mmap(0x0,135168,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON,-1,0x0) = 34376810496 (0x801048000)
53609: mprotect(0x801044000,4096,PROT_READ) = 0 (0x0)
53609: issetugid() = 0 (0x0)
53609: sigfastblock(0x1,0x801047490) = 0 (0x0)
53609: open("/etc/libmap.conf",O_RDONLY|O_CLOEXEC,0101130030) = 3 (0x3)
53609: fstat(3,{ mode=-rw-r--r-- ,inode=16052,size=35,blksize=4096 }) = 0 (0x0)
53609: read(3,"includedir /usr/local/etc/libmap.d\n",35) = 35 (0x23)
53609: close(3) = 0 (0x0)
53609: open("/usr/local/etc/libmap.d",O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC,0165) ERR#2 'No such file or directory'
53609: open("/var/run/ld-elf.so.hints",O_RDONLY|O_CLOEXEC,0100416054) = 3 (0x3)
53609: read(3,"Ehnt\^A\0\0\0\M^@\0\0\0w\0\0\0\0\0\0\0v\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0",128) = 128 (0x80)
53609: fstat(3,{ mode=-r--r--r-- ,inode=741826,size=247,blksize=4096 }) = 0 (0x0)
53609: pread(3,"/lib/casper:/lib:/usr/lib:/usr/lib/compat:/usr/local/lib:/usr/local/lib/compat/pkg:/usr/local/lib/perl5/5.36/mach/CORE\0",119,0x80) = 119 (0x77)
53609: close(3) = 0 (0x0)
53609: open("/lib/casper/libutil.so.9",O_RDONLY|O_CLOEXEC|O_VERIFY,00) ERR#2 'No such file or directory'
53609: open("/lib/libutil.so.9",O_RDONLY|O_CLOEXEC|O_VERIFY,00) = 3 (0x3)
53609: fstat(3,{ mode=-r--r--r-- ,inode=190,size=79952,blksize=80384 }) = 0 (0x0)
53609: mmap(0x0,4096,PROT_READ,MAP_PRIVATE|MAP_PREFAULT_READ,3,0x0) = 34376945664 (0x801069000)
53609: mmap(0x0,98304,PROT_NONE,MAP_GUARD,-1,0x0) = 34376949760 (0x80106a000)
53609: mmap(0x80106a000,32768,PROT_READ,MAP_PRIVATE|MAP_FIXED|MAP_NOCORE|MAP_PREFAULT_READ,3,0x0) = 34376949760 (0x80106a000)
53609: mmap(0x801072000,49152,PROT_READ|PROT_EXEC,MAP_PRIVATE|MAP_FIXED|MAP_NOCORE|MAP_PREFAULT_READ,3,0x7000) = 34376982528 (0x801072000)
53609: mmap(0x80107e000,4096,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_PREFAULT_READ,3,0x12000) = 34377031680 (0x80107e000)
53609: mmap(0x80107f000,4096,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_PREFAULT_READ,3,0x12000) = 34377035776 (0x80107f000)
53609: mmap(0x801080000,8192,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_ANON,-1,0x0) = 34377039872 (0x801080000)
53609: munmap(0x801069000,4096) = 0 (0x0)
53609: close(3) = 0 (0x0)
53609: open("/lib/casper/libc.so.7",O_RDONLY|O_CLOEXEC|O_VERIFY,012320443000) ERR#2 'No such file or directory'
53609: open("/lib/libc.so.7",O_RDONLY|O_CLOEXEC|O_VERIFY,012320443000) = 3 (0x3)
53609: fstat(3,{ mode=-r--r--r-- ,inode=126,size=1940168,blksize=131072 }) = 0 (0x0)
53609: mmap(0x0,4096,PROT_READ,MAP_PRIVATE|MAP_PREFAULT_READ,3,0x0) = 34376945664 (0x801069000)
53609: mmap(0x0,4190208,PROT_NONE,MAP_GUARD,-1,0x0) = 34377048064 (0x801082000)
53609: mmap(0x801082000,540672,PROT_READ,MAP_PRIVATE|MAP_FIXED|MAP_NOCORE|MAP_PREFAULT_READ,3,0x0) = 34377048064 (0x801082000)
53609: mmap(0x801106000,1343488,PROT_READ|PROT_EXEC,MAP_PRIVATE|MAP_FIXED|MAP_NOCORE|MAP_PREFAULT_READ,3,0x83000) = 34377588736 (0x801106000)
53609: mmap(0x80124e000,40960,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_PREFAULT_READ,3,0x1ca000) = 34378932224 (0x80124e000)
53609: mmap(0x801258000,24576,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_PREFAULT_READ,3,0x1d3000) = 34378973184 (0x801258000)
53609: mmap(0x80125e000,2240512,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED|MAP_ANON,-1,0x0) = 34378997760 (0x80125e000)
53609: munmap(0x801069000,4096) = 0 (0x0)
53609: close(3) = 0 (0x0)
53609: mprotect(0x80124e000,36864,PROT_READ) = 0 (0x0)
53609: mprotect(0x80124e000,36864,PROT_READ|PROT_WRITE) = 0 (0x0)
53609: mprotect(0x80124e000,36864,PROT_READ) = 0 (0x0)
53609: readlink("/etc/malloc.conf",0x7fffffffc610,1024) ERR#2 'No such file or directory'
53609: issetugid() = 0 (0x0)
53609: mmap(0x0,2097152,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON|MAP_ALIGNED(21),-1,0x0) = 34382807040 (0x801600000)
53609: mmap(0x0,2097152,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON|MAP_ALIGNED(12),-1,0x0) = 34384904192 (0x801800000)
53609: mmap(0x0,4194304,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON|MAP_ALIGNED(21),-1,0x0) = 34387001344 (0x801a00000)
53609: mprotect(0x1026000,4096,PROT_READ) = 0 (0x0)
53609: sigaction(SIGHUP,{ SIG_IGN SA_RESTART ss_t },{ SIG_DFL 0x0 ss_t }) = 0 (0x0)
53609: sigaction(SIGTERM,{ SIG_IGN SA_RESTART ss_t },{ SIG_DFL 0x0 ss_t }) = 0 (0x0)
53609: socket(PF_LOCAL,SOCK_DGRAM|SOCK_CLOEXEC,0) = 3 (0x3)
53609: getsockopt(3,SOL_SOCKET,SO_SNDBUF,0x7fffffffd85c,0x7fffffffd858) = 0 (0x0)
53609: setsockopt(3,SOL_SOCKET,SO_SNDBUF,0x7fffffffd85c,4) = 0 (0x0)
53609: connect(3,{ AF_UNIX "/var/run/logpriv" },106) = 0 (0x0)
53609: openat(AT_FDCWD,"/var/run",O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC,00) = 4 (0x4)
53609: openat(4,"forgejo.pid",O_WRONLY|O_NONBLOCK|O_CREAT|O_CLOEXEC,0600) = 5 (0x5)
53609: flock(5,LOCK_EX|LOCK_NB) = 0 (0x0)
53609: fstatat(4,"forgejo.pid",{ mode=-rw------- ,inode=742728,size=0,blksize=131072 },0x0) = 0 (0x0)
53609: fstat(5,{ mode=-rw------- ,inode=742728,size=0,blksize=131072 }) = 0 (0x0)
53609: ftruncate(5,0x0) = 0 (0x0)
53609: fstat(5,{ mode=-rw------- ,inode=742728,size=0,blksize=131072 }) = 0 (0x0)
53609: cap_rights_limit(4,{ CAP_UNLINKAT }) = 0 (0x0)
53609: cap_rights_limit(5,{ CAP_PWRITE,CAP_FTRUNCATE,CAP_FSTAT,CAP_EVENT }) = 0 (0x0)
53609: sigaction(SIGHUP,{ SIG_IGN 0x0 ss_t },{ SIG_IGN SA_RESTART ss_t }) = 0 (0x0)
53609: fork() = 53610 (0xd16a)
53610: <new process>
53610: setsid() = 53610 (0xd16a)
53609: exit(0x0)
53609: process exit, rval = 0
53610: sigaction(SIGHUP,{ SIG_IGN SA_RESTART ss_t },0x0) = 0 (0x0)
53610: madvise(0x0,0,MADV_PROTECT) ERR#1 'Operation not permitted'
53610: pipe2(0x7fffffffd9c0,0) = 0 (0x0)
53610: kqueuex() ERR#78 'Function not implemented'
53610: SIGNAL 12 (SIGSYS) code=SI_KERNEL
53610: process killed, signal = 12
UPDATE:
TrueNAS-13.0-U6.1
jailuser@Forgejo:~ $ uname -a
FreeBSD Forgejo 13.1-RELEASE-p9 FreeBSD 13.1-RELEASE-p9 n245429-296d095698e TRUENAS amd64
|
This is most likely the known incompatibility between a FreeBSD 13.3 userland and the older TrueNAS CORE kernel. Your truss output ends with kqueuex() ERR#78 'Function not implemented' followed by SIGSYS: kqueuex(2) is a system call that first appeared in FreeBSD 13.3, so a daemon(8) built for 13.3 dies immediately under the 13.1 kernel that your uname shows.
See (2024-03-07, archived):
WARNING - don't upgrade your TrueNAS CORE jails to FreeBSD 13.3 just yet | TrueNAS Community
and, today, https://forums.freebsd.org/threads/forgejo-failing-to-start-as-service-pid-file-not-readable.93214/#post-653734 (about FreeBSD 13.3).
TrueNAS CORE 13.3 is not yet released:
TrueNAS CORE 13.3 Plans - Announcements - TrueNAS Community Forums
(I'm present there, and in other TrueNAS Community topics.)
| Forgejo pid file (/var/run/forgejo.pid) : not readable in Truenas Core (FreeBSD Jail) |
1,472,759,778,000 |
I've inherited some systems that run on freebsd and inside jails. Basically the services running are old versions of qmail, spamd, dovecot, etc. None of the versions are up to date or even maintainable any more.
At present we can't move from these systems but I would at least like to be able to troubleshoot them.
My question:
Normally I would be able to run, for example, service qmail status and get some info about the top-level process. How do I do this inside a jail? In the case of the qmail process I can use qmailctl, but what would be the equivalent for spamd or dovecot?
Also, how do you go about troubleshooting these types of services? The logs don't really give a very good steer on what could be going wrong.
|
# jls
JID IP Address Hostname Path
1 127.0.0.2 ports12.localhost /SPACE/jails/ports12
2 127.0.0.3 py37jail.localhost /SPACE/jails/py37jail
OK, now I know which jails are running. I'm going to log in as root, as root is understood inside the ports12.localhost jail(8):
jexec -l -U root 1
root@ports12:~ #
Now that you know how to get in, you can do whatever you like, just as you would on the host system; nearly every command is available. For example, service dovecot status or service spamd status works inside the jail exactly as it would on the host, and you can also run a single command non-interactively from the host, e.g. jexec 1 service spamd status. When you're done, simply type exit or press Ctrl+d. See also: jexec(8), jls(8), and jail.conf(5)
| how can I manage services running in freebsd jail |