date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,460,253,961,000 |
I have an embedded system based on the Intel-Atom with PCH which we are busy developing. In the embedded environment I have:
A serial console through the PCH, which means this doesn't work with the standard kernel (CONFIG_SERIAL_PCH_UART_CONSOLE is required).
The SATA drive is only available in the embedded environment and can't be taken out for install.
I can boot via USB drive.
The system does have ethernet via the PCH which I have not yet confirmed to work.
I have managed to build a custom Linux 3.16.7 kernel that can be booted with console=uartPCH0,115200 and then displays a console on the serial line.
However, to move from here to an actual installation seems to be problematic.
I am unable to convince debian-installer to be built using my custom kernel.
My current theory is a double bootstrap process where I first bootstrap an installation into a usb-drive and then boot that and then bootstrap an installation into the SATA drive on the system?
Any better suggestions?
I'm not sure if there is some way to install via a network console?
The system requires the e1000e driver, which I assume is built into the standard Debian installer ISOs; however, so far I have been unable to find clear documentation on how to convince the install system to boot and then open up ssh/telnet.
Any hints?
|
I managed to solve my problem with debootstrap, here is a quick run-down of the process I followed.
unmount usb
Partition the USB (4GB)
Zap out GPT with gdisk, as my board didn't want to boot GPT.
Create just one Linux partition, nothing else.
I had lots of problems getting a usb drive bootable on my embedded system.
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /media/usb
debootstrap jessie /media/usb http://my.mirror/debian
I highly recommend setting up something like apt-cacher
chroot /media/usb
Mount all these:
mount -t devtmpfs dev /dev
mount -t devpts devpts /dev/pts
mount -t proc proc /proc
mount -t sysfs sysfs /sys
Edit /etc/fstab : (I use nano for editing normally)
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
UUID=xxxx / ext4 errors=remount-ro 0 1
To write the UUID into the file, use: blkid -o value -s UUID /dev/sdb1 >> /etc/fstab
house-keeping:
apt-get install locales
dpkg-reconfigure locales
apt-get install console-setup
dpkg-reconfigure keyboard-configuration (optional?)
apt-get install console-data
passwd root
adduser linuxuser
Install grub and kernel
apt-get install grub-pc
I installed grub into both /dev/sdb and /dev/sdb1, but I think you could use install-mbr for /dev/sdb instead.
apt-get install linux-image-686-pae
now edit /etc/default/grub:
uncomment GRUB_TERMINAL=console
add GRUB_GFXPAYLOAD_LINUX=text
to GRUB_CMDLINE_LINUX_DEFAULT add: console=tty0 console=ttyPCH0,115200
run update-grub2
edit /etc/default/console-setup :
CODESET="guess"
FONTFACE=
FONTSIZE=
VIDEOMODE=
create /etc/kernel-img.conf with this inside:
image_dest = /
do_symlinks = yes
do_bootloader = yes
do_bootfloppy = no
do_initrd = yes
link_in_boot = no
Now install custom kernel with dpkg -i
For me, two options were important:
CONFIG_SERIAL_PCH_UART=y
CONFIG_SERIAL_PCH_UART_CONSOLE=y
although I did highly customize the kernel after that.
Currently I am compiling 3.14 with the rt-patch, using the linux-source-3.14 package I downloaded from wheezy-backports.
Other things to do before restarting (optional)
edit /etc/modules to force drivers to load
edit /etc/network/interfaces
echo myHostName > /etc/hostname
apt-get install telnetd
apt-get install openssh-server
At this stage I could boot the usb on my target embedded system and repeat the whole process again to install debian on the SATA drive. Obviously I needed to install things like debootstrap on the usb drive first to facilitate this but that was minor.
| Installing Debian on Embedded system with serial console or network console (PCH) |
1,460,253,961,000 |
I want to run debootstrap multiple times for the same target.
Often I am in transit with no internet access and would like to run it offline.
How can I run the process one with internet access and then multiple times afterwards without internet access?
I am open to using wrappers/alternatives like multistrap and cache options like apt-cacher-ng or squid.
The key requirement is that I can run it completely offline.
|
You can use squid-deb-proxy as is to run offline (even the InRelease files are cached), but you need to modify it slightly so that it serves cached values even when you are online.
You need to modify the squid-deb-proxy.conf file so take a copy and put it in your project.
There are absolute paths in the conf file to /etc and /var, so you need to make these relative if you want to separate it from the system.
In order to still use the proxy when you are online you need to add this to the conf file.
#Use cached values when offline
offline_mode on
You can then start it like this:
mkdir -p squid/var/log/squid-deb-proxy
mkdir -p squid/var/run/
echo "Starting an instance of squid using the working dir for caches and logs instead of the system dirs"
squid -Nf squid/squid-deb-proxy.conf
Then before you start debootstrap
#Use a caching proxy to save bandwidth
export http_proxy=http://127.0.0.1:8000
| How Can I Run debootstrap Offline? |
1,460,253,961,000 |
I am running Ubuntu 14.04.2, 64 bit host system. Using debootstrap, I installed a minimal Ubuntu 14.04.2, 32 bit system in trusty32 directory. This is what my schroot configuration look like:
[trusty_i386]
description=Ubuntu 14.04 Trusty for i386
directory=/home/dipanjan/trusty32
personality=linux32
root-users=dipanjan
type=directory
users=dipanjan
I logged in to the 32-bit jail once using chroot, next time using schroot. Astonishingly, the output of uname -m differs. In chroot session, x86_64 (host system architecture) is returned while in schroot session, i686 (guest system architecture) is returned. Can someone explain this discrepancy?
$ sudo chroot trusty32/
(trusty_i386)root@dipanjan-OptiPlex-960:/# uname -m
x86_64
(trusty_i386)root@dipanjan-OptiPlex-960:/# exit
exit
$ schroot -c trusty_i386
(trusty_i386)dipanjan@dipanjan-OptiPlex-960:~$ uname -m
i686
(trusty_i386)dipanjan@dipanjan-OptiPlex-960:~$ exit
logout
|
chroot doesn't change processes' personality by default, so within the chroot you still see the host's (kernel) architecture, x86_64.
On the other hand you've set up your trusty_i386 schroot with a linux32 personality, so schroot runs that when setting the chroot up — and linux32 (which links to setarch) changes the current personality to report a 32-bit kernel architecture, i686.
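You can reproduce the personality effect outside schroot with setarch (which linux32 is a link to); a small sketch, assuming an x86_64 host with util-linux installed:

```shell
# schroot's "personality=linux32" boils down to a personality(2) call,
# the same thing setarch/linux32 perform before exec'ing a command.
uname -m                          # the real kernel architecture
if [ "$(uname -m)" = "x86_64" ] && command -v setarch >/dev/null 2>&1; then
    setarch i686 uname -m         # now reported as i686
fi
```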
| Why does uname -m report differently in chroot and schroot environment? |
1,460,253,961,000 |
In https://wiki.debian.org/Multistrap#Steps_for_Squeeze_and_later, it's required to run the following command in chroot environment:
/var/lib/dpkg/info/dash.preinst install
This command works in Stretch; however, there is no /var/lib/dpkg/info/dash.preinst file in Buster.
What might be the equivalent command to complete the installation in Buster?
Reproduction
git clone https://github.com/ceremcem/multistrap-example
cd multistrap-example
./build.sh buster
|
dash.preinst was removed a year ago, because it was no longer necessary — its purpose was to ensure that /bin/sh’s ownership could switch between bash and dash, but bash stopped shipping /bin/sh.
The equivalent command is nothing, you don’t need to run the preinst any more.
| No /var/lib/dpkg/info/dash.preinst in Buster |
1,460,253,961,000 |
I am trying to install linux in android by chroot method.
Following tutorials available on the internet, I tried debootstrap on Kali Linux and Mint, but every time it returns no output.
debootstrap --verbose –-arch=arm64 -–foreign jessie ./jessie ftp://ftp.debian.org/debian/
I tried various options but it returns no output.
|
You should create a mount point, then mount the target partition where Debian will be installed, e.g. sdaX:
mkdir /mnt/debinst
mount /dev/sdaX /mnt/debinst
According to man debootstrap, the format is:
debootstrap [OPTION...] SUITE TARGET [MIRROR [SCRIPT]]
in your case it should be:
debootstrap --verbose --arch=arm64 --foreign jessie /mnt/debinst ftp://ftp.debian.org/debian/
Note also that your original command contains typographic en-dashes (–-arch, -–foreign) instead of plain --, which debootstrap will not parse; retype the options as shown above.
Tutorial: Installing Debian GNU/Linux from a Unix/Linux System
| debootstrap not working in kali and mint |
1,460,253,961,000 |
I've set up full working debootstrap-ed arm chroot environment. I can chroot in it, run commands, etc.
I am making a script that will customize chroot env, but I struggle with this.
For example:
chroot $target_dir echo this is a test > /tmp/test
Can anyone explain why, in this example, the output is written to my host environment rather than inside the chroot?
Just to mention, I could execute, for example:
echo this is a test > $target_dir/tmp/test
but I would like to know why chrooted execution 'fails'
Edit:
This also works:
chroot $target_dir /bin/bash -c "echo test > /tmp/test"
|
When you run:
chroot $target_dir echo this is a test > /tmp/test
The > /tmp/test happens "for" the chroot command, much as if you had written:
> /tmp/test chroot $target_dir echo this is a test
If you want the redirection to happen inside of the chroot command, one way would be:
chroot $target_dir sh -c 'echo this is a test > /tmp/test'
... as this puts sh inside the chroot, so the redirection to /tmp/test is performed there, against the chroot's own filesystem.
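The same thing can be demonstrated without a chroot, using env as a stand-in for any wrapper command:

```shell
# A redirection is processed by the invoking shell before the wrapper
# command ever runs, and it may appear anywhere on the command line.
tmp=$(mktemp -d)
env echo this is a test > "$tmp/a"       # wrapper + trailing redirection
> "$tmp/b" env echo this is a test       # same thing, redirection written first
# Both files were created by the *outer* shell, not by env or echo.
cat "$tmp/a" "$tmp/b"
rm -r "$tmp"
```

Either way, the outer shell opens the target file itself, which is exactly why a path like /tmp/test resolves outside the chroot.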
| Unable to execute redirection in chroot environment |
1,460,253,961,000 |
I want to generate a Debian rootfs for an aarch64 machine from an x86_64 workstation.
What are the required steps to achieve this?
As far as I know the proper way is to use debootstrap:
https://wiki.debian.org/es/debootstrap
Is that correct, or is there another recommended way?
EDIT: my goal is to build a rootfs to be mounted from an ARM device. Precisely, I will install this rootfs on a micro SD card.
|
When building your own arm64 Debian port, debootstrap will be the best option. There may already exist a working Debian image/tarball/installer for your arm64 board but if you require more granular control over your system, building it yourself will be best.
I am including a link to the Debian Wiki on debootstrap as well as the manpage for reference.
For more information on arm64 Debian, check out this Debian Wiki page.
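A common way to do this from an x86_64 host is a two-stage debootstrap with qemu-user-static; the following is only a sketch (the suite name and paths are assumptions, adjust to taste), and it requires root, network access, and the debootstrap and qemu-user-static packages:

```shell
# Sketch: cross-architecture debootstrap from an x86_64 host.
if ! command -v debootstrap >/dev/null 2>&1 \
   || ! command -v qemu-aarch64-static >/dev/null 2>&1; then
    echo "debootstrap/qemu-user-static not installed; skipping" >&2
    exit 0
fi
# Stage 1: download and unpack arm64 packages (runs natively on the host).
debootstrap --arch=arm64 --foreign bullseye ./rootfs http://deb.debian.org/debian
# Stage 2: run the arm64 configuration scripts under qemu user emulation.
cp /usr/bin/qemu-aarch64-static ./rootfs/usr/bin/
chroot ./rootfs /debootstrap/debootstrap --second-stage
```

The resulting ./rootfs directory can then be copied onto the micro SD card's root partition. On systems with binfmt_misc already set up by qemu-user-static, the explicit cp of the emulator binary may be unnecessary.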
| Building arm / aarch64 rootfs |
1,460,253,961,000 |
I want to create my own Debian Live boot stick from a chroot (debootstrap) environment.
The root filesystem should be mounted as squashfs and grub should be able to boot the system from a single EFI partition.
So far, I have a squashfs image of the chroot environment, vmlinuz and initrd.img on the stick.
However, I don't know, how I have to configure grub so that it doesn't boot my local system (tried with: grub-install...) but instead the squashfs of the USB stick.
|
I solved the problem!
First, the chroot environment must have an initramfs capable of loading a squashfs image. For this, I simply installed the live-boot packages in the chroot and then updated the initramfs. /proc, /dev/pts, /dev and /sys should be mounted in the chroot for this to work.
# @ root on localhost
mount -o bind /proc /debootstrap/proc
mount -o bind /dev /debootstrap/dev
mount -o bind /dev/pts /debootstrap/dev/pts
mount -o bind /sys /debootstrap/sys
# @ root in chroot
apt install live-boot live-boot-initramfs-tools
update-initramfs -u
When that is done, these directories should be unmounted, and the squashfs can be created at /target/live/filesystem.squashfs.
# @ root on localhost
umount /debootstrap/proc
umount /debootstrap/dev
umount /debootstrap/dev/pts
umount /debootstrap/sys
# @ root on localhost
mksquashfs -comp xz /debootstrap /target/live/filesystem.squashfs
I formatted the USB stick as FAT32 and mounted it at /target. Now GRUB can be installed.
# @ root on localhost
grub-install --target=x86_64-efi --root-directory=/target
Once that's done, copy vmlinuz and initrd.img to /target/boot and create /target/boot/grub/grub.cfg with the following content.
insmod all_video
set default=0
set timeout=0
menuentry "debian live" {
linux /boot/vmlinuz boot=live toram=filesystem.squashfs quiet
initrd /boot/initrd.img
}
That's it; your PC should now be able to EFI-boot this stick.
| Live debian USB stick with debootstrap, squashfs and grub |
1,460,253,961,000 |
My goal is to stand up a chroot with the basic Unix tool-set (bash, cp, touch, cat, etc.) and any dependencies needed to run apt-get inside a codespace. Using debootstrap gets me close: the basic tools are installed and I can run apt-get. The problem is that subprocesses (process substitution) do not work. I expect the last line to print "test", but instead I get an error saying that the subprocess' file handle is no good. I would have thought the vanilla debootstrap environment would be sufficiently cooked so that subprocesses work, but that does not seem to be the case. Is there a switch I can pass?
$ sudo apt-get install -y binutils debootstrap
$ cd /home/codespace
$ CHROOT=/home/codespace/chroot
$ mkdir -p "${CHROOT}"
$ sudo debootstrap stable "${CHROOT}" http://deb.debian.org/debian/
$ sudo chroot "${CHROOT}"
$ cat < <(echo test)
bash: /dev/fd/63: No such file or directory
To reproduce,
sign up for codespaces beta here.
fire up a codespace for this project, open a bash shell (ctrl+`).
run the above.
This also reproduces outside of codespace so any Ubuntu environment will probably yield similar results.
|
Found the answer here. Turns out /proc was not mounted. To mount proc I ran
$ cd "${CHROOT}"
$ sudo mount --types proc /proc proc/
Now things work!
$ cat < <(echo test)
test
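The reason mounting /proc fixes it: on Linux, /dev/fd (which bash uses for process-substitution paths like /dev/fd/63) is just a symlink into procfs:

```shell
# Bash implements <(...) by passing a /dev/fd/N path to the command,
# and /dev/fd is a symlink to /proc/self/fd -- so those paths point
# at nothing until procfs is mounted inside the chroot.
ls -ld /dev/fd
readlink /dev/fd    # typically /proc/self/fd on Linux
```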
| Subprocess launched inside a chroot created on a codespace with debootstrap fail with /dev/fd/62: No such file or directory |
1,390,493,252,000 |
I am running Linux Mint Debian edition (essentially Debian testing) and the Cinnamon desktop environment. Every time I launch google-chrome it asks to become the default browser. I have told it to do so in all ways I can think of but I still get this pop-up:
What I have tried:
Clicking on "Set as default" in the pop-up.
Making chrome the default in its settings:
Using my desktop environment's (cinnamon) settings app to set it as default:
Associating it with all relevant mimetypes in the various ways and files where such things are defined:
$ xdg-mime query default text/html
chrome.desktop
$ grep chrome .local/share/applications/mimeapps.list
text/html=chrome.desktop
x-scheme-handler/http=chrome.desktop
x-scheme-handler/https=chrome.desktop
x-scheme-handler/about=google-chrome.desktop
x-scheme-handler/about=google-chrome.desktop;
text/html=emacs.desktop;google-chrome.desktop;firefox.desktop;
x-scheme-handler/http=chrome.desktop;
$ grep chrome /usr/share/applications/defaults.list
application/xhtml+xml=google-chrome.desktop
text/html=google-chrome.desktop
text/xml=gedit.desktop;pluma.desktop;google-chrome.desktop
x-scheme-handler/http=google-chrome.desktop
x-scheme-handler/https=google-chrome.desktop
In those files, I replaced all occurrences of firefox (my previous default) with google-chrome. No other browsers are defined anywhere in the file:
$ grep -E 'firefox|opera|chromium' /usr/share/applications/defaults.list \
.local/share/applications/mimeapps.list
$
Launching chrome as root in case that helps but it won't let me:
Using Debian's alternatives system to set it as default:
$ sudo update-alternatives --install /usr/bin/www-browser www-browser /usr/bin/google-chrome 1080
update-alternatives: using /usr/bin/google-chrome to provide /usr/bin/www-browser (www-browser) in auto mode
$ ls -l /etc/alternatives/www-browser
lrwxrwxrwx 1 root root 22 Jan 23 17:03 /etc/alternatives/www-browser -> /usr/bin/google-chrome
None of these seem to have any effect. Will no one rid me of this turbulent pop-up?
|
For Chromium, when I choose "Don't ask again", Chromium stores the following setting in my ~/.config/chromium/Profile 1/Preferences file:
{
"alternate_error_pages": {
"enabled": false
},
"apps": {
"shortcuts_have_been_created": true
},
"autofill": {
"negative_upload_rate": 1.0,
"positive_upload_rate": 1.0
},
"bookmark_bar": {
"show_on_all_tabs": true
},
"bookmark_editor": {
"expanded_nodes": [ "1" ]
},
"browser": {
"check_default_browser": false,
[...]
For standard Google Chrome:
Close Chrome.
In Terminal, paste open ~/Library/Application Support/Google/Chrome/Default/Preferences (and then hit enter)
search for "browser":{ and replace it with "browser":{"check_default_browser":false,
When you start chrome back up it shouldn't prompt you anymore.
Note:
The preferences setting seems to differ substantially between chrome versions. On Chrome-78.0 the setting
"browser":{"default_browser_infobar_last_declined":"13236762067983049"}
seems to work. I assume it simulates clicking the x.
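On Linux, the same edit can be scripted; a sketch using sed, assuming the usual profile location (the path varies by Chrome channel and profile, and you should only run this while Chrome is closed):

```shell
# Hypothetical profile path -- adjust for your Chrome channel/profile.
PREFS="$HOME/.config/google-chrome/Default/Preferences"
if [ -f "$PREFS" ]; then
    cp "$PREFS" "$PREFS.bak"   # keep a backup; Preferences is one long JSON line
    sed -i 's/"browser":{/"browser":{"check_default_browser":false,/' "$PREFS"
fi
```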
| How can I make chrome (stop asking to be) the default browser? |
1,390,493,252,000 |
If I launch xterm with its default bitmap fonts and then select the 'Large' font from the 'VT Fonts' menu (via ctrl+right mouse), I get a very usable bitmap font with apparently good Japanese character support.
I'd like to know what this font is so that I can use it elsewhere. Unfortunately, I've found no information on what default settings XTerm uses (i.e. when none are explicitly specified). Lots of sites show how to use X resources to specify new settings (e.g. particular fonts), but none I've seen say what defaults are used if I do nothing.
I've tried eyeballing the font, and it looks similar to and is the same width as 9x15, but it uses more vertical space. It appears not to be 9x15 with different line spacing, though, as specifying this font directly fails to display some Japanese characters that 'Large' can handle just fine.
Although I'll be happy to know what this specific font is, I really want to know where to find what defaults XTerm uses for its resources more generally. If it makes any difference, I'm running Ubuntu 12.04 LTS, 64-bit.
[I have seen this question on the subject already, which is why I'm specifically asking about defaults rather than trying to get live values from a running XTerm.]
|
The appres utility lists the resources used by an application, both user and default.
appres XTerm xterm
The first argument is the class name (xterm -class Xxx). The second argument, which is optional, is the instance name (xterm -name xxx).
The “Large” font is .VT100.font5 or .VT100.utf8Fonts.font5. See the manual for whether .utf8Fonts is used, it's a bit complex. If you have more than one among *.VT100.font5 and ?.VT100.font5 and XTerm.VT100.font5 and xterm.VT100.font5, the last one in this list applies; see the X documentation for the gory details of resource name precedence.
appres XTerm | grep font5
| How can I find the default (font) resource XTerm is using? |
1,390,493,252,000 |
In my case, it seems as if LD_LIBRARY_PATH is set to the empty string. But all standard system tools still work fine, so I guess the dynamic linker checks for that case and uses some default for LD_LIBRARY_PATH in that case.
What is that default value? I guess it at least includes /usr/lib but what else? Is there any good systematic way in figuring out the standard locations where the dynamic linker would search?
This question is slightly different from what paths the dynamic linker will search in. Having a default value means, that it will use the value of LD_LIBRARY_PATH if given, or if not given, it will use the default value - which means, it will not use the default value if LD_LIBRARY_PATH is provided.
|
The usual dynamic linker on Linux uses a cache to find its libraries. The cache is stored in /etc/ld.so.cache, and is updated by ldconfig which looks on the paths it’s given in /etc/ld.so.conf (and nowadays typically files in /etc/ld.so.conf.d). Its contents can be listed by running ldconfig -p.
So there is no default value for LD_LIBRARY_PATH, default library lookup doesn’t need it at all. If LD_LIBRARY_PATH is defined, then it is used first, but doesn’t disable the other lookups (which also include a few default directories).
The ld.so(8) manpage has the details:
If a shared object dependency does not contain a slash, then it is
searched for in the following order:
Using the directories specified in the DT_RPATH dynamic section
attribute of the binary if present and DT_RUNPATH attribute does
not exist. Use of DT_RPATH is deprecated.
Using the environment variable LD_LIBRARY_PATH, unless the
executable is being run in secure-execution mode (see below), in
which case it is ignored.
Using the directories specified in the DT_RUNPATH dynamic section
attribute of the binary if present.
From the cache file /etc/ld.so.cache, which contains a compiled
list of candidate shared objects previously found in the augmented
library path. If, however, the binary was linked with the -z nodeflib linker option, shared objects in the default paths are
skipped. Shared objects installed in hardware capability
directories (see below) are preferred to other shared objects.
In the default path /lib, and then /usr/lib. (On some 64-bit
architectures, the default paths for 64-bit shared objects are
/lib64, and then /usr/lib64.) If the binary was linked with the
-z nodeflib linker option, this step is skipped.
If LD_LIBRARY_PATH is not set or is empty, it is ignored. If it is set to empty values (with LD_LIBRARY_PATH=: for example), those empty values are interpreted as the current directory.
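To see what the cache lookup described above actually resolves, ldconfig can print the compiled cache; for example:

```shell
# Inspect the compiled cache (/etc/ld.so.cache) that the dynamic linker
# consults after LD_LIBRARY_PATH and the DT_RUNPATH attribute.
if command -v ldconfig >/dev/null 2>&1; then
    ldconfig -p | head -n 5
fi
```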
| What is the default value of LD_LIBRARY_PATH? [duplicate] |
1,390,493,252,000 |
The TCP KeepAlive (socket option SO_KEEPALIVE) is governed by three options—time after which the mechanism triggers, probing interval, and number of failed probes after which the connecting is declared broken.
Their defaults are:
tcp_keepalive_time = 7200
tcp_keepalive_intvl = 75
tcp_keepalive_probes = 9
Sending probes after 1¼ minutes sounds reasonable, and declaring failure after 9 failed probes does as well, but what is the idea behind the initial time being 2 hours?
Even tcp(7) says
Note that underlying connection tracking mechanisms and application timeouts may be much shorter.
The main point of enabling keepalive is to prevent any stateful network elements from dropping the state information, but such elements tend to drop the connections in a couple of minutes. With some rate-limited servers, curl with short --keepalive-time seems to significantly improve reliability of downloads.
So why is the default so long?
|
TCP Keep-alive was defined at a time when even the concept of firewall, let alone stateful firewall or NAT, was probably not widespread. From RFC 1122 (October 1989):
4.2.3.6 TCP Keep-Alives
Implementors MAY include "keep-alives" in their TCP
implementations, although this practice is not universally
accepted. If keep-alives are included, the application MUST
be able to turn them on or off for each TCP connection, and
they MUST default to off.
Keep-alive packets MUST only be sent when no data or
acknowledgement packets have been received for the
connection within an interval. This interval MUST be
configurable and MUST default to no less than two hours.
[...]
The main idea at the time wasn't about stateful information lost:
DISCUSSION:
A "keep-alive" mechanism periodically probes the other
end of a connection when the connection is otherwise
idle, even when there is no data to be sent. The TCP
specification does not include a keep-alive mechanism
because it could: (1) cause perfectly good connections
to break during transient Internet failures; (2)
consume unnecessary bandwidth ("if no one is using the
connection, who cares if it is still good?"); and (3)
cost money for an Internet path that charges for
packets.
[...]
A TCP keep-alive mechanism should only be invoked in
server applications that might otherwise hang
indefinitely and consume resources unnecessarily if a
client crashes or aborts a connection during a network
failure.
I skimmed the updating RFCs, but couldn't find any mention of keep-alives.
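For reference, the kernel defaults quoted in the question live in procfs and can be inspected directly; applications can override them per socket (via TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT after enabling SO_KEEPALIVE) without touching the system-wide values:

```shell
# System-wide keep-alive defaults: idle time (s), probe interval (s),
# and probe count before the connection is declared dead.
for f in tcp_keepalive_time tcp_keepalive_intvl tcp_keepalive_probes; do
    [ -r "/proc/sys/net/ipv4/$f" ] || continue
    printf '%s = %s\n' "$f" "$(cat "/proc/sys/net/ipv4/$f")"
done
```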
| Default TCP KeepAlive settings |
1,390,493,252,000 |
When I open a file into ranger with a GUI application not listed in the rifle.conf file (i.e. using the open_with command), the ranger terminal window gets "suspended" until I close the GUI app.
For this reason, I'd like to have a way to open files with a specific application, but still get the ability to navigate the files in the ranger terminal.
This is the default behaviour when you open the same file with one of the application listed in the rifle.conf file.
Is there any way to achieve the goal?
|
Try open_with with the f or t flag:
open_with [application] [flags] [mode]
Open the selected files with the given application, unless it is omitted, in which case the default application is used. flags
change the way the application is executed and are described in their
own section in this man page. The mode is a number that specifies
which application to use. The list of applications is generated by the
external file opener "rifle" and can be displayed when pressing "r" in
ranger.
Note that if you specify an application, the mode is ignored.
Flags give you a way to modify the behavior of the spawned process. They are used in the commands :open_with (key "r") and :shell
(key "!").
f Fork the process. (Run in background)
c Run the current file only, instead of the selection
r Run application with root privilege (requires sudo)
t Run application in a new terminal window
| Ranger - open_with without suspending |
1,390,493,252,000 |
I have 150 Debian Jessie machines that open ODS files in Gnumeric when double-clicked despite LibreOffice Calc being installed. I know it is possible to change this by right-clicking the ODS file and changing its default program from the Properties window, but getting 150 users to do this is not an option. They all use xfce4 and thunar.
I need to do this via CLI so I can do it across all workstations remotely. I have looked in /usr/share/applications and ~/.local/share/application/mimetypes.list with no luck - comparing the files before and after changing it via GUI revealed no changes here.
How can I use bash to make these workstations open ODS files with LibreOffice Calc by default?
EDIT: Unlike the answers to this question, my Jessie installs do not have ~/.config/mimeapps.list or /usr/share/applications/defaults.list
|
You can use mimeopen with -d option:
man mimeopen :
DESCRIPTION
This script tries to determine the mimetype of a file and open it with
the default desktop application. If no default application is
configured the user is prompted with an "open with" menu in the
terminal.
-d, --ask-default
Let the user choose a new default program for given files.
Example:
mimeopen -d file.mp4
sample output:
Please choose a default application for files of type video/mp4
1) VLC media player (vlc)
2) Other...
Verify it:
xdg-open file.mp4
| Setting default application for filetypes via CLI? |
1,390,493,252,000 |
Where is the environment variable $SHELL first set on a UNIX system?
How can I find and print all of this type of default settings of my terminal?
|
Traditionally, by login(1):
ENVIRONMENT
login sets the following environment variables:
HOME The user's home directory, as specified by the password
database.
SHELL The user's shell, as specified by the password database.
Though these days it might be a window manager or terminal program making those settings, depending on the flavor of unix and how far they've departed from tradition. env will show what's currently set in the environment, which a shell or something else may have altered from the default. However, "terminal settings" are not typically environment variables, and shells like bash or zsh have a set command, and other places they hide settings...
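You can check where the value comes from by comparing it against the password database directly; for example:

```shell
# SHELL is seeded by login(1) from the shell field (the 7th) of the
# user's passwd entry; the two usually agree unless something reset
# SHELL later in the session.
getent passwd "$(id -un)" | cut -d: -f7
printf 'SHELL=%s\n' "${SHELL-<unset>}"
```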
| What sets the $SHELL environment variable? |
1,390,493,252,000 |
How do I set a manually downloaded Firefox as my default web browser so that clicking a link in another application will open the link in this Firefox?
I tried these commands, but they didn't seem to work:
update-alternatives --install /usr/bin/x-www-browser x-www-browser /home/user/firefox/firefox 100
update-alternatives --set x-www-browser /home/user/firefox/firefox
What do I have to do?
|
update-alternatives changes the application to use to open a web browser, not the application to use to open a web page. The two are not directly related: “I want to browse the web” is different from “I want to browse this web page”, and there are different kinds of content that happen to all open in a web browser.
What you need to change is which application is associated with the MIME type text/html, and perhaps others. These are configured through the /etc/mailcap file.
On Debian, /etc/mailcap is automatically generated from the applications you have installed. When multiple applications can open the same type, there is a priority system (similar, but distinct, from the priority system for alternatives). You can override these priorities by adding entries to /etc/mailcap.order. For example, the following line will cause Firefox to be used in preference of any other application for all the types it supports:
firefox:*/*
After you've changed /etc/mailcap.order, run /usr/sbin/update-mime as root to update /etc/mailcap.
If you want to use a program that doesn't come from a Debian package, edit it directly into /etc/mailcap, in the User Section.
# ----- User Section Begins ----- #
text/html; /home/user/firefox/firefox '%s'; description=HTML Text; test=test -n "$DISPLAY"; nametemplate=%s.html
# ----- User Section Ends ----- #
If you want to set preferences for your own account, define them in ~/.mailcap: the entries in that file override the ones in /etc/mailcap. You have to put full mailcap lines there, such as
text/html; /home/user/firefox/firefox '%s'; description=HTML Text; test=test -n "$DISPLAY"; nametemplate=%s.html
| How to set downloaded Firefox to default web browser in Debian? |
1,390,493,252,000 |
I use nautilus as file manager and would like to use Vim instead of Gedit to edit my text files. Many files (log files, empty files, …) are already opened with Vim, however not all of them, e.g. tex files and XML files are still opened with Gedit.
update-alternatives --get-selections | grep edit yields
editor auto /usr/bin/vim.gnome
gnome-text-editor manual /usr/bin/vim.gnome
readline-editor auto /usr/bin/rlwrap
and I have also set the VISUAL and EDITOR environment variables to point to vim.
Although the questions is about changing the default applicatin for any file type, it is fine to respond with a solution that just addresses the mentioned problem changing the default editor, since that is what bothers me at the moment.
EDIT:
The answer of “hesse” worked for most file types, but not for all. For instance Makefiles are still opened with Gedit. file --mime-type Makefile returns text/plain, which is already included in ~/.local/share/applications/defaults.list. However file --mime-type somefile also returns text/plain but is opened with Vim.
I use Debian unstable.
|
You should take a look in ~/.local/share/applications/defaults.list under [Default Applications]. There you should set the text/plain to point to the .desktop entry for vim, which is usually located in /usr/share/applications/. E.g:
text/plain=gvim.desktop
| Set default application for particular file types in nautilus |
1,390,493,252,000 |
ARM machines often have a default password. On Arch Linux, this is:
User: alarm
Password: alarm
I am assuming that the "arm" part of "alarm" refers the architecture, but what does the "al" stand for?
Perhaps I am completely off on my assumption.
|
The al part stands for Arch Linux, the arm part for ARM as you surmised.
| What does the "al" in "alarm", the default Arch Linux ARM processor username+password stand for? |
1,390,493,252,000 |
The find command allows you to search by size, which you can specify using units spelled out in the man page:
File uses n units of space. The following suffixes can be used:
`b' for 512-byte blocks (this is the default if no suffix is used)
`c' for bytes
`w' for two-byte words
`k' for Kilobytes (units of 1024 bytes)
`M' for Megabytes (units of 1048576 bytes)
`G' for Gigabytes (units of 1073741824 bytes)
Is there a historical reason b is chosen for "block" rather than "byte", which I suspect would be the more common assumption? And why would block be the default rather than byte? When and why would someone ever want to use this unit? Converting to bytes or kilobytes involves a bit of math; it doesn't seem very convenient as a default unit.
|
The first versions of Unix happened to use 512-byte blocks in their filesystem and disk drivers. Unix started out as a pretty minimalist and low-level system, with an interface that closely followed the implementation, and leaked details that should have remained abstracted away such as the block size. This is why today, “block” still means 512 bytes in many contexts, even though there can be different block sizes, possibly even different block sizes applying to a given file (one for the filesystem, one for the volume manager, one for the disk…).
The implementation tracked disk usage by counting how many data blocks were allocated for a file, so it was easy to report the size of a file as a number of blocks. The disk usage and the size of a file can differ, not only because the disk usage is typically the size rounded up to a whole number of blocks, but also because sparse files have fewer blocks than the size would normally require. As far as I know, early Unix systems that implemented sparse files had find -size use the number of blocks used by the file, not the file size; modern implementations use the file size rounded up (there's a note to this effect in the POSIX specification).
The earliest find implementations only accepted a number of blocks after -size. At some point, find -size started accepting a c suffix to indicate a number of characters instead of blocks; I don't know who started it, but it was the case in 4.3BSD. Other suffixes appeared later, for example in FreeBSD it was release 6.2 that introduced k, M and other suffixes but not b which I think only exists in GNU and BusyBox find.
Historically, many programs used “character” and “byte” interchangeably, and tended to prefer the term “character”. For example, wc -c counts bytes. Support for multibyte characters, and hence a character count that differs from the byte count, is a relatively recent phenomenon.
In summary, there is no purpose. The 512-byte block size, the fact that it's the default unit, and the use of the letter b did not arise deliberately, but through historical happenstance.
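A quick demonstration of the default block unit versus the byte suffix (GNU find; run in a throwaway directory, file names are arbitrary):

```shell
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/small" bs=1 count=100  2>/dev/null   # 100 bytes
dd if=/dev/zero of="$dir/big"   bs=1024 count=2 2>/dev/null   # 2048 bytes

# Default unit is 512-byte blocks, rounded up: +1 means "more than one block"
blocks=$(find "$dir" -type f -size +1)
# The 'c' suffix switches to bytes: match the 100-byte file exactly
bytes=$(find "$dir" -type f -size 100c)

echo "larger than one 512-byte block: $blocks"   # .../big
echo "exactly 100 bytes:              $bytes"    # .../small
rm -r "$dir"
```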
| Purpose of find command's default size unit 512 bytes |
1,390,493,252,000 |
I'm using Linux Mint Debian edition and I have set Firefox as my default browser in my settings.
But HTTP links in other apps like hotot and pidgin open with Chromium!
Why is this happening, and is there any way to track down the problem?
|
I'm going to guess the following
all of those tools use XdgUtils
if you type xdg-open http://google.com it'll open with Chromium
and that you have the problem described in this Ubuntu forums post
So my suggested answer is:
$ xdg-mime default firefox.desktop x-scheme-handler/http
(and ditto for https)
| My default browser is set to Firefox but links open with Chromium |
1,390,493,252,000 |
I often see instructions that include vim or nano, meaning to open the file in that step in your text editor of choice. Is there an agnostic command I can use in place of the specific program that would open the input in the user's default in-terminal text editor, whether it's vim, nano, or something else?
I see editor mentioned in the Similar Questions sidebar—is that still limited to Debian-based distros? And is there any alternative?
|
You can use $EDITOR, provided that it's been defined:
$EDITOR filename.txt
But I think most docs use nano because if someone's blindly following along, it's a safe bet to use. If the user has decided they actually prefer one editor over another, they'll know enough to replace it with vim, emacs, etc themselves.
edit may work well on Debian-based systems, but on others it invokes ex, which isn't recommended.
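If you're writing a script or docs and can't assume $EDITOR is set, a common hedge is to fall back through VISUAL and EDITOR to a safe default; a minimal sketch:

```shell
# Prefer VISUAL, then EDITOR, then fall back to vi
choose_editor() {
    printf '%s\n' "${VISUAL:-${EDITOR:-vi}}"
}

unset VISUAL EDITOR
choose_editor            # vi   (nothing set)
EDITOR=nano
choose_editor            # nano
VISUAL=vim
choose_editor            # vim  (VISUAL wins)
```

A real script would then run "$(choose_editor)" "$file" instead of hard-coding nano or vim.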
| Command for the default in-terminal text editor |
1,390,493,252,000 |
Whenever I open an image in feh, the background is set to the standard dark-gray-and-gray checkerboard pattern like this:
As you can see, it's the checkerboard background. How do I permanently change this to black?
I've searched Google and other places, but I can't seem to find a straight answer. I'm guessing feh's config file is involved, but I can't find any examples of how to do it in the config file. I know you can do it in the command line with --bg-color black (or something) but I'd like to just have it set to black by default.
|
It seems that you cannot put your desired default options in a config file.
If you know about $PATH you can resort to a hack.
Create this script:
#!/bin/sh
feh --bg-color black "$@"
Call it feh and place it in your $PATH before /usr/bin/ (assuming that feh itself is in /usr/bin/).
Some distros have ~/bin/ in $PATH by default. So you would put that script into ~/bin/ (and make it executable). Otherwise just create this folder yourself and prepend it to your $PATH.
Also, if you want to set multiple default options, you can group them into themes. (Theme is the feh developer's name for a named group of options.) Create ~/.config/feh/themes and add this line to that file:
default --bg-color black
feh -Tdefault will then start feh with your desired default options. This is handy if you want to set multiple options at once. Unfortunately there is no way to set a default theme either. So, in your case it doesn't help. But you can fallback to the same hack as above:
#!/bin/sh
feh -Tdefault "$@"
Alternative:
If you are just going to call feh manually from the commandline, you can instead set an alias in your shell. In bash you would add this line to your ~/.bashrc and restart the interpreter (e.g. re-open the terminal):
alias feh="feh --bg-color black"
In fish shell you would run:
abbr -a feh feh --bg-color black
| How to permanently set default color of feh's background to black? |
1,390,493,252,000 |
By default when you start i3wm, all workspaces start with the horizontal/vertical split layout (splith/splitv).
Is there a way to set a different default, like stacking or tabbed, for all containers on all workspaces? Something I can add to my ~/.i3/config,
instead of manually switching each workspace to a specific layout using mod+w/e/s.
https://i3wm.org/docs/userguide.html#_changing_the_container_layout
|
If you want to change the behaviour of all new workspaces, just add
workspace_layout stacking
(or tabbed or default) to your .i3/config file, see section 4.8 of the documentation. The default is either horizontal, vertical or automatic, and governed by the default_orientation option, see section 4.7.
You can have finer-grained control using airblader's per-workspace-layout.pl script in the contrib directory
and also using the layout-saving and restoring feature.
| in i3wm how can I set the default layout for all workspaces |
1,390,493,252,000 |
I am on a debian 9 system with unknown desktop environment (ssh access). How can I find out which program is used by default to view a file with a given extension (e.g. pdf)?
Edit: since the extension is not what determines the handler, the MIME type of the given file can be found using file:
file -i file.ext
|
Ask xdg-mime.
$ xdg-mime query default application/pdf
atril.desktop
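Putting the two commands together, a small sketch (assuming the file and xdg-utils packages are installed; the handler printed, if any, depends on your system):

```shell
# Look up the default handler for a file's detected MIME type
f=$(mktemp)
echo 'hello' > "$f"
mime=$(file -b --mime-type "$f")
echo "$mime"                                   # text/plain
command -v xdg-mime >/dev/null &&
    xdg-mime query default "$mime" || true     # e.g. atril.desktop
rm -f "$f"
```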
| How can I check which application opens a file by extension? [duplicate] |
1,390,493,252,000 |
I have a crontab which launches a tmux startup script as follows:
-sh-3.00# crontab -l
@reboot /root/scripts/tmux_autostart.sh
where
#!/bin/bash
# setup tmux session
tmux new -d -s my_session
but when the system boots I don't have my regular prompt but shell prompt :
-sh-3.00#
how to change it to bash if I already have this in my config .tmux.conf
set-option -g default-shell /bin/bash
EDIT
-sh-3.00# cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
|
Your @reboot job is in root's crontab. The variables set in a crontab only apply in this crontab, so the settings in /etc/crontab have no influence on the job executed by root's crontab.
The default shell in Cron is /bin/sh, and the SHELL environment variable is set to /bin/sh unless overridden. So Tmux starts with SHELL=/bin/sh.
It appears that your /bin/sh is Bash 3.00. The prompt indicates that bash was started as a login shell, and that no initialization file set PS1 (there was probably no initialization file at all).
If you set default-shell in ~/.tmux.conf, this takes precedence over the SHELL environment variable. I suspect you aren't showing .tmux.conf in root's home directory but in some other location, maybe your own home directory.
You have a choice of setting SHELL=/bin/bash in root's crontab, or writing a .tmux.conf file in root's home directory.
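For example, the first option amounts to one extra line in root's crontab (the script path is taken from the question):

```
SHELL=/bin/bash
@reboot /root/scripts/tmux_autostart.sh
```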
| How to make my tmux which starts via crontab @reboot use bash? |
1,390,493,252,000 |
I accidentally wreaked unknown amounts of havoc on my web server by running
sudo chown -R myuser:mygroup * .*
in /var/www, not remembering that .* would include the parent directory (as ..). I realized what was happening after a second or so, but by then it was too late, half the directories in /var had been "re-owned". I know I can reset most of it with
sudo chown -R root:root /var
but what files are there that need to be owned by specific non-root users (or groups) that I would have to change manually?
This is on Gentoo, and here's a directory listing:
$ ls -l /var
drwxr-xr-x 9 root root 4096 May 12 2009 cache
drwxr-xr-x 4 root root 4096 Aug 20 22:49 db
drwxr-xr-x 3 root root 4096 Aug 20 22:42 dist
drwxr-xr-x 4 root root 4096 Nov 1 2009 edata
drwxr-xr-x 2 root root 4096 Jun 17 2008 empty
drwxr-xr-x 5 git git 4096 Feb 13 2010 git
drwxr-xr-x 23 root root 4096 Jul 19 03:22 lib
drwxrwxr-x 3 root uucp 4096 Aug 12 00:14 lock
drwxr-xr-x 10 root root 4096 Aug 20 03:10 log
lrwxrwxrwx 1 root root 15 Nov 7 2008 mail -> /var/spool/mail
drwxr-xr-x 10 root root 4096 Aug 21 00:22 run
drwxr-xr-x 8 root root 4096 Feb 13 2010 spool
drwxr-xr-x 2 root root 4096 Jun 17 2008 state
drwxr-xr-x 13 root root 4096 Dec 23 2009 svn
drwxrwxrwt 5 root root 4096 Aug 14 01:53 tmp
drwxr-xr-x 13 root root 4096 Aug 11 20:21 www
drwxr-xr-x 2 root root 4096 Dec 14 2008 www-cache
I can provide listings of subdirectories but that gets pretty long pretty fast. (dist, edata, git, svn, and www are things I manage myself so ownership in those won't be an issue)
|
Well, "/var" is generally for data generated by programs, so it may not be possible to tell you exactly who should own what without duplicating your system. I can think of two ways you might fix it:
Set up another version of your web server on a spare or virtual machine and then check /var.
Just change to root/root and then see what errors come up (most of the directories will have this ownership structure).
The downside to 1 is the amount of time it will take; the plus side being that it will be accurate. Item 2 is much faster but less accurate even if it's mostly true. The big problem here is that on an important production box 2 may not be feasible.
| What files in /var need to have specific owners? |
1,390,493,252,000 |
Is it possible to change the default message broadcast by shutdown to something else?
|
As @Zelda mentioned the messages are hardcoded. If you want to change it beyond amending the message with additional bits:
$ sudo shutdown -h +120 Save your work.
You'll need to recompile shutdown, creating your own executable that includes the customized message.
For example, here's a sample source file, shutdown.c. Lines such as these would need to be changed, and the .c files would need to be rebuilt.
/*
* Tell everyone the system is going down in 'mins' minutes.
*/
void warn(int mins)
{
char buf[MESSAGELEN + sizeof(newstate)];
int len;
buf[0] = 0;
strncat(buf, message, sizeof(buf) - 1);
len = strlen(buf);
if (mins == 0)
snprintf(buf + len, sizeof(buf) - len,
"\rThe system is going down %s NOW!\r\n",
newstate);
else
snprintf(buf + len, sizeof(buf) - len,
"\rThe system is going DOWN %s in %d minute%s!\r\n",
newstate, mins, mins == 1 ? "" : "s");
wall(buf, 0);
}
| Changing shutdown broadcast message |
1,390,493,252,000 |
I'm on Linux Mint Olivia. I just installed Lynx.
How do I set Lynx as my browser, so when I open links from the terminal, they open in that terminal with Lynx?
|
First make .desktop application for lynx:
[Desktop Entry]
Type=Application
Name=Lynx
Exec=gnome-terminal -e 'lynx %u'
And save it to an application directory, e.g. /usr/share/applications/, naming it lynx.desktop, and give it execute permission (chmod +x /usr/share/applications/lynx.desktop).
Then set it as default web browser by using:
xdg-settings set default-web-browser lynx.desktop
Now try to Open link and it will be open with lynx in the terminal.
Note: lynx is a command-line web browser and hence needs a terminal, which is why I've used gnome-terminal in my example Exec command. Your terminal application may be different. This works for me on my current system.
| Change default web browser to lynx from terminal |
1,390,493,252,000 |
I have hosts where I can set up an application either on a single-node architecture or distributed.
So I have an inventory.
[STG]
node1
[LIVE]
app_node
db_node
gateway_node
So I want a variable whose default value is single but which can be changed on the CLI to distributed.
I have a role definition
- hosts:
gather_facts: no
roles:
- {role: setup, tags: ['setup', 'orchestra']}
So I want the hosts line to be dynamic based on the map value:
- hosts: 'if single then host == STG else LIVE'
|
There are several options:
Put the logic into the expression of hosts:
shell> cat pb.yml
- hosts: "{{ (map_value == 'single')|ternary('STG', 'LIVE') }}"
tasks:
- debug:
var: ansible_play_hosts
run_once: true
gives what you want
shell> ansible-playbook pb.yml -e map_value=single
PLAY [single] ********************************************************************************
TASK [debug] *********************************************************************************
ok: [node1] =>
ansible_play_hosts:
- node1
PLAY RECAP ***********************************************************************************
node1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
shell> ansible-playbook pb.yml -e map_value=distributed
PLAY [distributed] ***************************************************************************
TASK [debug] *********************************************************************************
ok: [app_node] =>
ansible_play_hosts:
- app_node
- db_node
- gateway_node
PLAY RECAP ***********************************************************************************
app_node: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Create children (group aliases)
shell> cat hosts
[STG]
node1
[single:children]
STG
[LIVE]
app_node
db_node
gateway_node
[distributed:children]
LIVE
Then, the playbook gives the same results
shell> cat pb.yml
- hosts: "{{ map_value }}"
tasks:
- debug:
var: ansible_play_hosts
run_once: true
If you can't change the inventory file put the aliases into a separate file. For example,
shell> tree inventory/
inventory/
├── 01-hosts
└── 02-aliases
shell> cat inventory/01-hosts
[STG]
node1
[LIVE]
app_node
db_node
gateway_node
shell> cat inventory/02-aliases
[single:children]
STG
[distributed:children]
LIVE
Then, the playbook gives the same results
shell> ansible-playbook -i inventory pb.yml -e map_value=single
...
shell> ansible-playbook -i inventory pb.yml -e map_value=distributed
...
Use the inventory plugin constructed. See
shell> ansible-doc -t inventory constructed
For example, the inventory
shell> tree inventory
inventory
├── 01-hosts
└── 02-constructed.yml
shell> cat inventory/01-hosts
[STG]
node1
[STG:vars]
map_group_value=single
[LIVE]
app_node
db_node
gateway_node
[LIVE:vars]
map_group_value=distributed
shell> cat inventory/02-constructed.yml
plugin: constructed
use_extra_vars: true
compose:
map_group: map_value
groups:
map_group: map_group == map_group_value
Then, the playbook
- hosts: map_group
tasks:
- debug:
var: ansible_play_hosts
run_once: true
gives the same results
shell> ansible-playbook -i inventory pb.yml -e map_value=single
...
shell> ansible-playbook -i inventory pb.yml -e map_value=distributed
...
If you want to test the use case with a role, create one
shell> cat roles/setup/tasks/main.yml
- debug:
var: ansible_play_hosts
run_once: true
and use it in the playbook with any tags you like
shell> cat pb.yml
- hosts: map_group
roles:
- role: setup
| Is it possible to specify hosts for an Ansible role based on a map value? |
1,390,493,252,000 |
Similar to this question, I have some applications (Calibre, texdoc) open PDFs with Mendeley. Opening PDFs from Thunar, Thunderbird, Firefox etc. opens evince, the expected default.
It seems that those applications use xdg-open since:
$ xdg-mime query default application/pdf
mendeleydesktop.desktop
I tried to find where this comes from but was unsuccessful; I fixed it with
xdg-mime default evince.desktop application/pdf
The question remains: where did xdg-open get the idea that Mendeley should be the default PDF viewer from?
I'm using Ubuntu 16.04 with i3 4.11. xdg-open is at version 1.1.0 rc3.
|
The question remains: where did xdg-open get the idea that Mendeley should
be the default PDF viewer from?
This is an eminently reasonable question.
Here's a somewhat long answer in three parts.
Option 1: read the documentation
For example, the FreeDesktop standard
on mimetype associations has this to say:
Association between MIME types and applications
Users, system administrators, application vendors and distributions can
change associations between applications and mimetypes by writing into a
file called mimeapps.list.
The lookup order for this file is as follows:
$XDG_CONFIG_HOME/$desktop-mimeapps.list user overrides, desktop-specific (for advanced users)
$XDG_CONFIG_HOME/mimeapps.list user overrides (recommended location for user configuration GUIs)
$XDG_CONFIG_DIRS/$desktop-mimeapps.list sysadmin and ISV overrides, desktop-specific
$XDG_CONFIG_DIRS/mimeapps.list sysadmin and ISV overrides
$XDG_DATA_HOME/applications/$desktop-mimeapps.list for completeness, deprecated, desktop-specific
$XDG_DATA_HOME/applications/mimeapps.list for compatibility, deprecated
$XDG_DATA_DIRS/applications/$desktop-mimeapps.list distribution-provided defaults, desktop-specific
$XDG_DATA_DIRS/applications/mimeapps.list distribution-provided defaults
In this table, $desktop is one of the names of the current desktop,
lowercase (for instance, kde, gnome, xfce, etc.)
Note that if the environment variables such as XDG_CONFIG_HOME and XDG_DATA_HOME are not set, they will revert to their default values.
$XDG_DATA_HOME defines the base directory relative to which user specific data files should be stored. If $XDG_DATA_HOME is either not set or empty, a default equal to $HOME/.local/share should be used.
$XDG_CONFIG_HOME defines the base directory relative to which user specific configuration files should be stored. If $XDG_CONFIG_HOME is either not set or empty, a default equal to $HOME/.config should be used.
This illustrates one of the trickiest aspects of mimetype associations:
they can be set in many different locations,
and those settings might be overridden in a different location.
However, ~/.config/mimeapps.list is the one that we should use to set our own associations.
This also matches the documentation for the GNOME desktop.
To override the system defaults for individual users, you need to create a
~/.config/mimeapps.list file with a list of MIME types for which you want
to override the default registered application.
There's also this helpful tidbit:
You can use the gio mime command to verify that the default registered
application has been set correctly:
$ gio mime text/html
Default application for “text/html”: myapplication1.desktop
Registered applications:
myapplication1.desktop
epiphany.desktop
Recommended applications:
myapplication1.desktop
epiphany.desktop
The cross-platform command to check mimetype associations is:
xdg-mime query default application/pdf
For GNOME, the command is:
gio mime application/pdf
For KDE Plasma the command is:
ktraderclient5 --mimetype application/pdf
When I look at my ~/.config/mimeapps.list file,
it looks something like this:
[Added Associations]
application/epub+zip=calibre-ebook-viewer.desktop;org.gnome.FileRoller.desktop;
<snip>
application/pdf=evince.desktop;qpdfview.desktop;okularApplication_pdf.desktop;<snip>
<snip>
[Default Applications]
application/epub+zip=calibre-ebook-viewer.desktop
<snip>
application/pdf=evince.desktop;
You can see there only one entry for application/pdf under [Default Applications];
so evince.desktop is the default handler for PDF files.
I don't have Mendeley installed, but one way to make it the default PDF handler
is to put its desktop file here instead of evince.desktop.
Notice we're trusting the documentation here that ~/.config/mimeapps.list
is the correct file; we don't actually know that for sure.
We'll come back to this in part 3.
Option 2: read the source code.
xdg-open is a shell script that behaves differently
depending on the value of $XDG_CURRENT_DESKTOP.
You can see how this works here:
if [ -n "${XDG_CURRENT_DESKTOP}" ]; then
case "${XDG_CURRENT_DESKTOP}" in
# only recently added to menu-spec, pre-spec X- still in use
Cinnamon|X-Cinnamon)
DE=cinnamon;
;;
ENLIGHTENMENT)
DE=enlightenment;
;;
# GNOME, GNOME-Classic:GNOME, or GNOME-Flashback:GNOME
GNOME*)
DE=gnome;
;;
KDE)
DE=kde;
;;
Since you are using i3,
the DE variable will be set to generic and the script will call
its open_generic() function,
which in turn will call either run-mailcap or mimeopen
depending on what is installed.
Note that you can get some extra information
by setting the XDG_UTILS_DEBUG_LEVEL, e.g.
XDG_UTILS_DEBUG_LEVEL=4 xdg-open ~/path/to/example.pdf
However, the debug information is not that informative for our purposes.
Option 3: trace the opened files.
From the previous investigations,
we know that mimetype associations are stored in files somewhere on the hard drive,
not e.g. as environment variables or dconf settings.
This means we don't have to rely on documentation,
we can use strace to determine what files the xdg-open command actually opens.
For the application/pdf mimetype, we can use this:
strace -f -e trace=open,openat,creat -o strace_log.txt xdg-open /path/to/example.pdf
The -f is to trace child processes since xdg-open doesn't do everything by itself.
The -e trace=open,openat,creat is to trace just the syscalls open, openat, and creat.
These are from the man page from man 2 open or online.
The -o strace_log.txt is to save to a log file to inspect later.
The output is somewhat voluminous,
but we can ignore the lines that say ENOENT (No such file or directory)
since these files do not exist.
You can also use other commands such as xdg-mime or gio mime.
I found that gio mime read these files in my home directory:
~/.local/share/mime/mime.cache
~/.config/mimeapps.list
~/.local/share/applications
~/.local/share/applications/mimeapps.list
~/.local/share/applications/defaults.list
~/.local/share/applications/mimeinfo.cache
It also read these system-level files:
/usr/share/mime/mime.cache
/usr/share/applications/defaults.list
/usr/share/applications/mimeinfo.cache
/var/lib/snapd/desktop/applications
/var/lib/snapd/desktop/applications/mimeinfo.cache
To look for application/pdf associations, this should do the trick:
grep 'application/pdf' ~/.local/share/mime/mime.cache ~/.config/mimeapps.list ~/.local/share/applications/mimeapps.list ~/.local/share/applications/defaults.list ~/.local/share/applications/mimeinfo.cache /usr/share/mime/mime.cache /usr/share/applications/defaults.list /usr/share/applications/mimeinfo.cache /var/lib/snapd/desktop/applications/mimeinfo.cache | less
From here you can see where Mendeley's desktop file is getting added.
I have some applications (Calibre, texdoc) open PDFs with Mendeley. Opening
PDFs from Thunar, Thunderbird, Firefox etc. opens evince, the expected
default.
Firefox and Thunderbird have their own default application settings.
I believe texdoc relies on xdg-open.
I'm not sure about Thunar,
but I doubt it is relying on xdg-open.
So ultimately this is probably due to:
xdg-open having different fallbacks than other applications on i3; and
Mendeley's installer adding mimetype associations in some files but not others.
Addendum: xdg-open should not use the mimeinfo.cache file on i3,
but if you need to regenerate it, this is the command to use:
update-desktop-database ~/.local/share/applications
and here is the documentation:
Caching MIME Types
To make parsing of all the desktop files less costly, a
update-desktop-database program is provided that will generate a cache
file. The concept is identical to that of the 'update-mime-database' program
in that it lets applications avoid reading in (potentially) hundreds of
files. It will need to be run after every desktop file is installed. One
cache file is created for every directory in $XDG_DATA_DIRS/applications/,
and will create a file called $XDG_DATA_DIRS/applications/mimeinfo.cache.
https://specifications.freedesktop.org/desktop-entry-spec/0.9.5/ar01s07.html
Related:
https://askubuntu.com/questions/939027/pdf-book-opens-in-mendeley-when-openned-from-calibre
https://askubuntu.com/questions/992582/how-do-mimeinfo-cache-files-relate-to-mimeapps-list
How to make xdg-open follow mailcap settings in Debian
xdg-open opens a different application to the one specified by xdg-mime query
| Why does xdg-open use Mendeley as default for PDFs? |
1,390,493,252,000 |
I'm trying to set the default g++ to 4.7.2 which I'm told by my host is installed (I'm also told that c++11 is also installed); however, neither of us know how to set the default g++ to 4.7.2 because g++ --version gives
g++ (GCC) 4.1.2 20080704 (Red Hat 4.1.2-54)
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I tried these sudo commands here, but I just found out that they don't work because CentOS uses yum.
How can I set the default g++ to 4.7.2 (if it's even installed) on CentOS 5.9?
|
You need to set the CXX environment variable, for example export CXX="/usr/bin/g++-4.7". Similarly, CC is the variable that controls the C compiler.
| set default g++ on CentOS? |
1,390,493,252,000 |
One of the most common recommendations that I read for users that recently installed Linux Mint is to enable their firewall, which is pretty simple to do. But why is the firewall off by default in the first place? Is there any reason for this?
|
Linux Mint is an Ubuntu-based distribution intended for desktop systems. One of its chief priorities is "ease of use", so a firewall just puts into play something that could break things for users. It's easier if the firewall only gets turned on if the operator is someone who knows what such a thing even is, versus a novice user saying "Why don't it no worky?"
| Why is the firewall off by default with Linux Mint? |
1,390,493,252,000 |
I have recently switched to Gnome 3 from Gnome 2 (and switched to Linux recently before that), and Gnome 3 doesn't give me as many options to change settings via the GUI, and especially not to change default settings.
Specifically, I'm trying to change the lid close action on my laptop, since I don't want it to suspend on lid close ever. (I changed this for my own user(s) via the gnome-tweak-tool.)
I've taken a few unsuccessful stabs. I imagine this has to do with sudo for some user, whether sudo for root or gdm.
I've tried (in a console window in a Gnome session and in an SSH session from a remote machine):
> sudo gsettings get org.gnome.settings-daemon.plugins.power lid-close-ac-action
'suspend'
> sudo gsettings set org.gnome.settings-daemon.plugins.power lid-close-ac-action "blank"
For this, I receive an error about an inability to initialize X11.
I've also tried:
> sudo -u gdm gsettings get org.gnome.settings-daemon.plugins.power lid-close-ac-action
But, this asks for gdm's password, which I've never set. I have continued with passwd, but it tells me: Cannot unlock the password for `gdm'! And, I could try forcing an unlock of the user, but this resistance to me indicates that perhaps I should abandon this path.
I don't know if each of these warrants its own line of questioning, but in the end, I'm just trying to set the laptop lid close setting (the default for all users), though I'd like to know more generally how to set Gnome's default preferences.
|
Eureka!
Thanks to a combination of the answers here, a discussion about setting the login screen's wallpaper, and a general discussion about running an X program from another console, I finally managed to solve this.
I do need to set the setting as the gdm user. But, simply running gsettings set ... as gdm will fail because of the X11 error. So, I also need to attach the command to an X session.
But, sudo su gdm didn't give me the terminal as gdm, as I had hoped, so I eventually created a simple shell script to run the commands I need.
setblank.sh:
#!/bin/sh
export DISPLAY=":0"
export XAUTHORITY="$1"
export XAUTHLOCALHOSTNAME="localhost"
gsettings set org.gnome.settings-daemon.plugins.power lid-close-ac-action "blank"
or, more generally (gset.sh):
#!/bin/sh
export DISPLAY=":0"
export XAUTHORITY="$1"
export XAUTHLOCALHOSTNAME="localhost"
gsettings set "$2" "$3" "$4"
Once I had this, I could call it like:
sudo sudo -u gdm gset.sh Xauthority-file org.gnome.settings-daemon.plugins.power lid-close-ac-action "blank"
And this does the trick!
One additional note about the Xauthority file: You will need to copy the Xauthority file for your user to a file that gdm has permission to read. (For a quick and dirty example: cp $XAUTHORITY /tmp/.Xauthority and chown gdm:root /tmp/.Xauthority)
| Set Default/Global Gnome Preferences (Gnome 3) |
1,390,493,252,000 |
I want to add a few directories to the skeleton directory. When I add new user I want to add my own directories to the new home directories.
|
As thrig pointed out, all that's needed is to create the directory structure that you want under /etc/skel.
Quoting from the useradd man page
-k, --skel SKEL_DIR
The skeleton directory, which contains files and directories to be copied in the user's home directory, when the home directory is created by useradd.
This option is only valid if the -m (or --create-home) option is specified.
If this option is not set, the skeleton directory is defined by the SKEL variable in /etc/default/useradd or, by default, /etc/skel.
... and the default SKEL variable in /etc/default/useradd is /etc/skel.
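A sketch of the mechanism, using temporary directories so it runs without root; on a real system the source directory would be /etc/skel and the copy is done by useradd -m:

```shell
# Simulate what useradd -m does with a custom skeleton
skel=$(mktemp -d)
mkdir -p "$skel/bin" "$skel/projects/docs"

home=$(mktemp -d)
cp -a "$skel/." "$home/"      # useradd copies the skeleton's contents like this
ls "$home"                    # bin  projects
```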
| Skeleton directory - how to add my own directories |
1,390,493,252,000 |
I just switched to OpenSUSE (KDE) from my Fedora 20 KDE setup, and I find myself greatly missing a feature that was a default in Fedora: a "Recently Used" option in the file manager when saving and uploading files. As far as I can tell, both are using Dolphin, but the appearance is inconsistent. I still see something like what I want in Gimp (see the "recently used"):
But every other file manager looks like this (notice the impoverished left-side menu):
I've set Dolphin as my default everywhere I can see to go (such as "Configure Desktop" -> "Default Applications" -> "File Manager") but I can't get an appearance that matches the Gimp one (at least in the Places menu). I want the menu back because it saves tons of time on uploads and downloads online, and as a web professional, it's a big deal for me...
|
You can add this entry manually: right click on the Places, choose Add Entry, enter this in Location field: recentdocuments:/// and save it. It should be available now in all KDE/Qt file dialogs.
| Get "recent files" in global file manager (Dolphin?) |
1,390,493,252,000 |
What is the count default in dd command if not specified ?
dd if=/dev/mem bs=1k skip=768
instead of full form like
dd if=/dev/mem bs=1k skip=768 count=50
I did not find an answer with Google.
|
The default is unlimited: dd keeps copying until it reaches the end of its input (or runs out of space on the output).
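A quick way to see this behaviour (a hypothetical sketch; /tmp/dd-demo.out is just an assumed scratch file):

```shell
# With no count=, dd keeps copying blocks until the input is exhausted:
# 5 bytes with bs=2 become three blocks (2+2+1), all of which are copied.
printf 'hello' | dd bs=2 of=/tmp/dd-demo.out 2>/dev/null
cat /tmp/dd-demo.out   # hello
```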
| dd count default |
1,390,493,252,000 |
I'm running FreeBSD 9.1-RELEASE. I've installed GNU grep with portmaster textproc/gnugrep.
However the "default" grep for users is still FreeBSD grep.
# /usr/local/bin/grep -V
/usr/local/bin/grep (GNU grep) 2.12
# grep -V
grep (GNU grep) 2.5.1-FreeBSD
I want to make GNU grep the default. I understand that the problem is with the order of directories specified in my PATH environment variable:
# echo $PATH
/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/root/bin
However, I fear to move the /usr/local/bin entry to the beginning of my PATH. Is it safe?
In Linux distros like Debian such tasks are usually accomplished via dpkg-divert and/or update-alternatives.
What is the best way to do what I want in FreeBSD and not break system upgrades and such?
|
Update: Note this answer is from 2013, it applies to FreeBSD 8.x and earlier. A BSD grep was added in revision 222273 and appeared in FreeBSD-9.0 (oddly that change is missing from the usually comprehensive release notes: Google search). A fully-featured GNU grep continues to be available in the ports collection.
FreeBSD grep was GNU grep, albeit an old version with a few patches applied:
# which grep
/usr/bin/grep
# /usr/bin/grep -V
grep (GNU grep) 2.5.1-FreeBSD
Copyright 1988, 1992-1999, 2000, 2001 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
It has a small number of patches (most of which originate from Fedora Linux), if you have /usr/src/ installed those are detailed in /usr/src/gnu/usr.bin/grep/FREEBSD-upgrade.
If you need something specifically in the port version (2.12 vs 2.5.1: many bugfixes, speed improvements, and PCRE support via -P, which is not enabled in the system version), it should be quite safe to reorder your PATH to put /usr/local/bin first; this is what I usually do. (It's good practice to use su - so that root's environment is set correctly, though on FreeBSD the default ~root/.cshrc sets the PATH explicitly.)
Otherwise check your shell man page and set an alias as required, but this is really only for interactive use, shell scripts or Makefiles won't observe it.
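As a sketch of both approaches for interactive shells (paths taken from the answer; whether /usr/local/bin/grep exists depends on the installed port):

```shell
# Option 1: reorder PATH so the ports prefix wins (e.g. in ~/.profile):
export PATH="/usr/local/bin:$PATH"
hash -r          # forget any cached location of grep
command -v grep  # on the FreeBSD box this should now print /usr/local/bin/grep

# Option 2: an alias, effective for interactive use only:
alias grep='/usr/local/bin/grep'
```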
| How to make GNU grep the default in FreeBSD? |
1,390,493,252,000 |
Actually, I think my question is too basic, but, even after performing a deep search on the internet, I still didn't understand how a browser selects a particular font for rendering a particular script on Linux systems. I know that Linux systems have fontconfig for managing font rendering in applications, but the behaviour of fontconfig is not consistent with respect to browsers. Mainly, Chromium and its derivatives don't always obey fontconfig rules. They try to select fonts on their own. On the other hand, Firefox and its derivatives have very good integration with Linux systems and they tend to obey fontconfig settings. I tried different browsers to test their behaviour and found that even if we have configured rules for specific fonts in fontconfig, we still need more tweaking to get consistent behaviour among browsers. They tend to choose their own fonts. This led me to confusion. So my questions regarding this are:
How does a browser actually select its UI font, as well as the font for rendering a particular script?
I know that we can specify fonts in CSS, but what will happen if we don't specify them? How will browsers fall back to other fonts?
What is the role of fontconfig in terms of browsers ? Do browsers use fontconfig or something else to prioritize fonts ?
Am I missing something ?
Please help me to clear my confusion. Any help will be appreciated.
|
After a long time of studying the behavior of fontconfig and browsers, I came to know these points :
Fontconfig is comparatively small, but complex piece of software. It has an algorithm that matches the best font for given conditions and rendering parameters. These conditions and parameters are collectively known as "patterns".
There are several characteristics of a font, and different fonts may share some characteristics. So while selecting fonts for a given pattern, all fonts on the system must be checked for relevance. Some fonts have characteristics that override others. For example, some fonts may have Unicode support for exotic scripts/symbols, but poor drawing quality. At the same time, some fonts may support fewer scripts/symbols, but have good drawing quality. In such a case, fontconfig tries to prefer the font that is more likely to render the given character set, and may actually ignore the drawing quality. So we simply cannot guarantee the results of fontconfig, because they depend on the font files themselves and on the requesting application's specific requirements. Finally, as users, we always know what font is good for a given case. But fontconfig does not know it. Obviously, it cannot learn about aesthetics, quality and bugs in a font. It just sees every font as a font.
Some apps, like browsers and graphic designing suites may also use their own patterns and fonts if they want. In that case, they may request fontconfig, or behave on their own.
We can reproduce the priority behavior of fontconfig on different systems, but those systems should have exactly same versions of font files, config files and rendering framework. Otherwise, if some font file has updated metrics or script/language data, it may be preferred over the other.
It's always best to set your own fonts.conf file in your home directory to help fontconfig to match the correct font, in case it fails to do so.
More info here : https://www.freedesktop.org/software/fontconfig/fontconfig-user.html
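A minimal sketch of such a per-user file (the path ~/.config/fontconfig/fonts.conf and the font name DejaVu Sans are illustrative assumptions, not from the answer):

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- Prefer one concrete font whenever an application asks for generic sans-serif -->
  <alias>
    <family>sans-serif</family>
    <prefer>
      <family>DejaVu Sans</family>
    </prefer>
  </alias>
</fontconfig>
```

You can then run fc-match sans-serif to check which font fontconfig resolves the pattern to.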
| Browser's Mechanism of font choosing on Linux Systems |
1,390,493,252,000 |
While writing shell scripts, in order to achieve as much portability as possible, one might want to try and stick to using command line tools that are likely to be already installed on target systems.
We can evaluate for specific tools, as in this example:
Are there versions of Unix that don't have awk in default install?
Also we can search by target distributions, like this:
What packages are installed by default in Debian? [...]
In contrast to doing this kind of search one command/distro at a time, are there any official or somewhat established sets of commands that are, if not guaranteed, at least very likely to be installed on any *nix system? What if we narrowed this down to Linux?
|
Anything that can reasonably be called Unix or Unix-like has POSIX utilities. You can generally assume that the utilities are present, that they support the listed options, and that they behave as indicated. There are a few limitations:
Features that are marked as optional may not be present everywhere.
Recently added features may not be present everywhere yet. Check the “change history” section.
Software has bugs. Any given system usually deviates from the specification in a few corner cases. And sometimes the developers or the distribution maintainers don't care about deviating from the specification. There's no way to find that out other than from experience.
Here are limitations that you're likely to run into on many Linux distributions:
ed and pax are often missing from the default installation.
Corner cases of job control tend to behave weirdly outside of ksh.
If you limit to non-embedded Linux, you can make some additional assumptions.
Most distributions follow the Filesystem Hierarchy Standard, which mandates a number of utilities beyond POSIX.
Bash is available. But /bin/sh might not be bash.
Most POSIX utilities are the GNU coreutils implementation, which offers quite a few extensions.
util-linux is available (but a few utilities may be replaced by another implementation of a utility with the same name and possibly with different options).
On embedded Linux, the shell and utilities are usually from BusyBox. Because BusyBox is intended for small systems, it deliberately omits some features, including features that are mandated by POSIX. BusyBox has a lot of compile-time configuration options, so you can't really anticipate what will be available on a given system. If you want to maximize portability to embedded Linux, when you use a utility, look at its source code in BusyBox and avoid options that are under a conditional compilation guard. This won't help if an installation is missing that utility altogether though.
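One practical consequence for portable scripts: probe for an optional utility before relying on it. A minimal sketch (pax chosen only because it is named above as often missing):

```shell
# POSIX-portable availability check: command -v is specified by POSIX
# and works in plain sh, so the probe itself is safe everywhere.
if command -v pax >/dev/null 2>&1; then
    echo "pax is available"
else
    echo "pax is missing, using a fallback" >&2
fi
```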
| Set of CLI tools that are installed by default on most distributions |
1,390,493,252,000 |
I'm running Ubuntu 20.04. I just downloaded Awesome and I'd like to start running it by default. I tried to open Awesome on the command line and received the message E: awesome: main:772: another window manager is already running (can't select SubstructureRedirect). What does this mean/how can I set up Awesome as the default window manager?
|
You have to select it from the display manager (aka the login screen). If you use GNOME, click on the top right corner and then click Log Out. After you log out, select your user, but before you enter your password, click the gear at the lower right of the screen. A menu will appear; select Awesome. After that you can enter your password and log in. Now you are using the Awesome WM!
| How to set Awesome as the default WM in Ubuntu 20.04 |
1,390,493,252,000 |
Inside my bash script:
This works:
CWD="${1:-${PWD}}"
But, if I replace it with:
CWD="${1:=${PWD}}"
I get the following error
line #: $1: cannot assign in this way
Why can't I assign to ${1}?
|
From bash's manpage:
Positional Parameters
A positional parameter is a parameter denoted by one or more digits,
other than the single digit 0. Positional parameters are assigned from
the shell's arguments when it is invoked, and may be reassigned using
the set builtin command. Positional parameters may not be assigned to
with assignment statements. The positional parameters are temporarily
replaced when a shell function is executed (see FUNCTIONS below).
and later, under Parameter Expansion
${parameter:=word}
Assign Default Values. If parameter is unset or null, the expansion of word is assigned to parameter. The value of parameter is then substituted. Positional parameters and special parameters may not be assigned to in this way.
If you want to assign a default value to the positional parameter $1 like in your question, you can use
if [ -n "$1" ]
then
CWD="$1"
else
[ "$#" -gt 0 ] && shift   # drop an empty first argument if one was passed
set -- default "$@"
CWD=default
fi
Here I've used a combination of shift and set. I've just come up with this and I'm not sure if it's the proper way to change a single positional parameter.
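For completeness, a sketch of the expansion-based alternative from the question (/tmp is just an illustrative default value):

```shell
# Variant 1: leave $1 alone and expand with a default:
CWD="${1:-/tmp}"

# Variant 2: rewrite the positional parameters themselves
# (note: this simple form keeps only $1; further arguments are dropped):
set -- "${1:-/tmp}"
echo "$1"   # prints /tmp when the script was called with no arguments
```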
| Bash: Error in assigning default value to a variable |
1,390,493,252,000 |
DeadBeeF has a new option, 'Designer mode', allowing you to modify the displayed features, modules and addons, much like Foobar2000 does with its layout editing mode.
Example of difficulty:
All the interface can be modified, you may end up with no interface at all or you may find it hard to restore the default one.
After removing the upper area of the default layout, it took me some time to put it back and make it look like so:
If that is missing you have to put an HBox in all that area, and then, from left to right, in the three boxes: playback controls, seekbar and volume bar. Then, to make it look as before, right-click on each of those: check 'fill' for the controls (uncheck all the rest), 'expand' and 'fill' for the seekbar, and uncheck all for the volume bar.
Being so new, it is very easy to make mistakes; saving the default or the custom layout would be good. Is there a configuration file that I can backup so that I can save and restore the changes or the default settings?
I don't see a reset button for the interface changes either. Is there one?
|
Try going to /home/$USER/.config/ and removing the whole deadbeef config folder to reset. Save that folder elsewhere to back up the configuration.
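A sketch of the backup/reset steps (the deadbeef directory name is assumed from the answer; check your ~/.config first):

```shell
cfg="$HOME/.config/deadbeef"
backup="$HOME/deadbeef-config.backup"

# Save the current layout/configuration, if it exists:
if [ -d "$cfg" ]; then
    cp -a "$cfg" "$backup"
fi

# To reset DeadBeeF to its defaults, remove the directory (with DeadBeeF closed):
#   rm -rf "$cfg"
# To restore the saved layout later:
#   cp -a "$backup" "$cfg"
```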
| How to save customized interface of DeadBeef? |
1,390,493,252,000 |
I'm using Debian 7.5, and I've installed Python 3.3 and 3.2. How do I make 3.3 the default for when someone types python in the command line?
|
To change the version of python that is executed when you type python on the command line, and only then, define an alias in your shell initialization file (the one for interactive shells). This is ~/.bashrc for bash, ~/.zshrc for zsh, ~/.cshrc for csh, ~/.config/fish/config.fish for fish. Use the correct path for Python 3.3 for your installation.
alias python='/usr/local/bin/python3.3'
If you want this to work for all users, you can put it in a system-wide file; however I don't recommend it, because this causes python typed on the command line to be a different version from python executed from a script or any other place, which is confusing.
In Debian wheezy, python in the default search path should be Python 2.7, because there are programs that depend on it (several packages ship Python 2 scripts that have #!/usr/bin/env python as their shebang line). If you want, you can change the system default for Python 3 to be Python 3.3 instead of the 3.2 that ships with Debian wheezy. To do that, create a symbolic link in /usr/local/bin (you'll need to be root to do this). If you installed Python 3 directly in /usr/local:
ln -s python3.3 /usr/local/bin/python3
If you installed it somewhere else:
ln -s /path/to/python3.3/bin/python3.3 /usr/local/bin/python3
Scripts that ship with Debian with the shebang #!/usr/bin/python3 will keep using 3.2, but scripts that use #!/usr/bin/env python3 will now use 3.3, and typing python3 on the command line will invoke 3.3.
| How to change the default version of Python in Debian 7.5? |
1,469,148,183,000 |
I've installed windows 7 and linux dualboot. My partitions are:
/dev/sda2: UUID="EC328C61328C329E" TYPE="ntfs"
/dev/sda3: UUID="800E88610E8851D8" TYPE="ntfs"
/dev/sda4: UUID="20e7c430-bab0-4aa1-8afe-caa9d97e1de3" TYPE="ext4"
where sda2 is windows sda3 is shared partition and sda4 is linux
sda3 has the mount point /windows
Because sda2 and sda4 are small partitions I created directories Music, Documents, etc. and redirected windows libraries in here.
I want to do the same in Linux by editing ~/.config/user-dirs.dirs
to
XDG_DESKTOP_DIR="$HOME/Plocha"
XDG_DOWNLOAD_DIR="$HOME/"
XDG_TEMPLATES_DIR="$HOME/Šablony"
XDG_PUBLICSHARE_DIR="$HOME/Veřejné"
XDG_DOCUMENTS_DIR="/windows/home/Documents"
XDG_MUSIC_DIR="/windows/home/Music"
XDG_PICTURES_DIR="/windows/home/Pictures"
XDG_VIDEOS_DIR="/windows/home/Videos"
has no effect. The folders have icons as if it works, but when I click on Music in the file browser it goes to /home/myUser/Music, not into /windows/home/Music.
It would be great if it would work for cd ~/Music command too :)
|
Keep the lines as they were in the original user-dirs.dirs:
XDG_MUSIC_DIR="$HOME/Music"
XDG_PICTURES_DIR="$HOME/Pictures"
XDG_VIDEOS_DIR="$HOME/Videos"
And now create symbolic links to point to your Windows folders (make sure you have no important data in the three concerned folders):
cd ~
rm -fr Music Pictures Videos
ln -s /windows/home/Music
ln -s /windows/home/Pictures
ln -s /windows/home/Videos
By the way, you had better create a swap partition; you don't mention having done that already.
| Redirect home to shared NTFS partition |
1,469,148,183,000 |
There are many clipboard managers for Unix-based operating systems, but is there a way to actually know which one is being used?
I am on Fedora 20 under Gnome 3.10.1 and I know that I'm using GPaste 3.10.
But I would like to know if there is a command line which would output GPaste 3.10 (except gpaste --version, obviously).
|
After doing an extensive search I wasn't able to find a method for doing this. It would seem there is no way to find out which downstream tool is collecting the contents of the clipboards in an attempt to provide a "management" facility around them.
| Knowing default clipboard manager |
1,469,148,183,000 |
Prompted for a system password on an AntiX Live USB/CD and it's not root...
|
AntiX's default password is demo
| What is AntiX's root default password? |
1,469,148,183,000 |
I often attach files when on the web (using Chrome/Firefox), or when using email (Thunderbird). I need to navigate to a specific folder, and there I often need one of the most recent files.
However, Thunar always resets the "Order by" column to "Name", so for every single attachment I need to re-order the column. This becomes annoying after so many attachments from the same folder.
Is there a way of having Thunar remember the "Sort By" column for a certain folder? And if not, is there a way of setting Thunar to always sort by date? I think I prefer this over the "Name" column, generally.
Using: XFCE, Thunar, Arch Linux.
|
$EDITOR .config/gtk-2.0/gtkfilechooser.ini
You are looking for the lines SortColumn and SortOrder
SortColumn=modified
SortOrder=descending
should give you what you are trying to achieve.
| Have Thunar remember file ordering for folders |
1,469,148,183,000 |
When I right click an mkv file in file browser and select 'Open With' I see the list:
Dragon Player
VLC Media Player
When I double click it, it launches the file in Dragon Player. I'd like to change that so that it launches in VLC Media Player as the default double-click action. Where do I set that?
I looked in Settings > Default Applications, but there is nothing there that seems to apply to this situation.
|
You can right click -> properties -> file type options and edit, delete, reorder or add the entries, which will appear in the "open with" dialog.
You can access this same dialog via System Settings -> File associations.
| How can I prioritize 'open with' apps in KDE? |
1,469,148,183,000 |
I just downloaded opera-stable_56.0.3051.99_amd64.deb from https://www.opera.com/de/download
and ran
sudo dpkg -i opera-stable_56.0.3051.99_amd64.deb
It complained about some missing dependencies so I ran
sudo apt-get -f install
sudo apt install apt-transport-https
then reran
sudo dpkg -i opera-stable_56.0.3051.99_amd64.deb
Anyway, I am worried about what evil this has done to my otherwise clean Debian distribution and how to undo it. At the least, it has replaced the Firefox that used to launch from the icon at the bottom of Xfce with itself.
Actually, the command under the button is:
exo-open --launch WebBrowser %u
...so no opera keyword.
How do I get it back to how it was and if necessary clean up anything else bad which it might have done?
|
When installing software, always use the same methodology to remove said software as you've installed it with... (true on any OS) 0:-)
Therefore it's always a good idea to install from Debian pre-packaged repositories until you have a bit more experience. ;-)
In this case, to uninstall opera:
sudo dpkg --remove opera-stable
to then get rid of any dependencies you now no longer need, run:
sudo apt autoremove && sudo apt autoclean
| Opera browser set itself as default without asking! |
1,469,148,183,000 |
I have a Debian workstation with two interfaces: the ethernet jack attached to the motherboard (eth0), and a USB to ethernet adapter (eth6). eth0 connects to the internet, while eth6 connects to some special equipment only accessible over ethernet.
When the workstation starts, it connects to eth6 by default; when this happens the workstation cannot reach the internet and I must choose to connect to eth0 from the network manager. I would like it to connect to eth0 by default. How can I do this?
|
You just need to set the default gateway to the correct interface.
It should be configured in your GUI (Network Manager for example) or if you feel a bit geeky, you can configure it in your /etc/network/interfaces
This is a minimal configuration example for your /etc/network/interfaces file:
auto eth0
iface eth0 inet static
address 192.168.0.100
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameserver 192.168.0.1 8.8.8.8
You need to replace the values with your personal network settings.
Make sure not to set two gateways at the same time. This will cause problems and is only possible if you use different routing tables.
Edit: Not using DHCP on eth6 should work as well. Configure just an IP address in your Network Manager and leave eth0 as it is.
| Setting default network interface? |
1,469,148,183,000 |
I am thinking about doing an Ubuntu installation with their mini ISO, which comes only with the barebones system without a desktop environment/GUI.
I remember being given the option to set up automatic login when doing a full Ubuntu desktop installation, but how do I enable that for the mini ISO install? Did I miss something? Also, is there a generalised way to do this on any Linux OS?
Thanks!
|
You might find some ideas in this thread of linuxquestions.org
| Auto login for Ubuntu (or other Linux) without GUI? |
1,469,148,183,000 |
What's the difference between apt install php and apt install php-defaults?
At first glimpse I would theorize that php should include all the defaults for the (latest) PHP version.
I ask this as a follow up to this question.
|
The difference is that
apt install php-defaults
doesn’t work, because php-defaults is a source package, not a binary package.
A source package contains the source code and packaging descriptors used to build one or more binary packages. Source packages aren’t directly installable.
| apt: What's the difference between "apt install php" and "apt install php-defaults"? |
1,469,148,183,000 |
I want to create a directory where multiple users will be able to contribute to the same files and I want each file that any user creates to have write permission by default for everyone in the group.
I did setgid for a directory and all new files have the right group. However new files are still created without write permissions in the group.
Here is an illustration of what I'm trying to do:
(as a root user):
mkdir --mode=u+rwx,g+rws,o-rwx /tmp/mydir
chown root.mygroup /tmp/mydir
touch /tmp/mydir/test.txt
Then when I do ls -la /tmp/mydir/ I'm getting
drwxrws--- 2 root mygroup 4096 Sep 12 12:04 .
drwxrwxrwt 11 root root 4096 Sep 12 12:04 ..
-rw-r--r-- 1 root mygroup 0 Sep 12 12:03 test.txt
So, write permission never gets populated for a group for all new files authored by members of that group. I understand that other group users still can override that by doing chmod g+w for specific files such as test.txt in the example above and this is the right behavior in most of the cases, but is there a way to recursively alter that for a specific directory and allow write permissions to be automatically set for a group as well as the owner for all new files within that dir?
|
Default permissions for new files and folders are determined by the umask. If you configure the default umask for your users to 002, group permission will be set to rw for new files and folders. Configuring the umask for all users can be done using pam_umask.
To use pam_umask, on Debian based distributions you should configure the module in /etc/pam.d/common-session by appending the following to the end of the file:
session optional pam_umask.so
Then configure the desired umask value in /etc/login.defs.
Note that the mask configured using PAM isn't applied to all Gnome applications (for details, see How to set umask for the entire gnome session). However sessions launched from ssh or tty are not affected.
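To see the umask effect in isolation (a sketch using a throwaway directory):

```shell
# With umask 002, files created with the usual 0666 request end up 0664,
# i.e. readable and writable by both the owner and the group:
umask 002
dir=$(mktemp -d)
touch "$dir/test.txt"
ls -l "$dir/test.txt"   # permissions column shows -rw-rw-r--
```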
If you do not want to alter the default umask on your system, you can use POSIX Access Control Lists. When a default ACL is set for a directory, new files inherit it. ACLs can be set and read using setfacl and getfacl, respectively. Some file systems might need an additional mount flag to enable ACLs.
| Auto set write permission for a group |
1,469,148,183,000 |
defaults read -g AppleLanguages
produces something like this:
(
en,
de
)
I'd like to extract just the first element, in this case "en",
defaults read -g AppleLanguages | awk '/\(/ , /,/'
but awk always includes the search patterns. What can I do to produce just "en"?
|
You could just print the second line. You can use many tools for this:
sed
defaults read -g AppleLanguages | sed -n '2s/,//p'
Explanation: The 2 means "run the following commands only on the second line". The -n suppresses normal output (nothing is printed unless explicitly told to do so). The substitution (s///) deletes the comma and the /p at the end prints the lines where the substitution occurred.
perl
defaults read -g AppleLanguages | perl -ne 's/,// && print if $.==2'
Explanation: Remove the first comma (s/,//) and print the line if the current line number ($.) is 2. The -n means "read the input line by line and apply the script given by -e to each line".
Unix tools
defaults read -g AppleLanguages | head -n 2 | tail -n 1 | tr -d ,
Explanation: head -n 2 prints the two first lines, tail -n 1 prints the last one (therefore, the second of the file) and tr -d , deletes commas.
awk
defaults read -g AppleLanguages | awk 'NR==2{sub(",","");print}'
Explanation: NR==2{} means "run what's in the brackets only on the second line". sub(",","") deletes the first comma.
| read first element in array |
1,469,148,183,000 |
I built emacs 24.3 from source on OS X 10.8 and when I attempt to set the default font with
Options -> Set Default Font -> [font]
My choice is not saved next time I open emacs.
I'm attempting to use misc->6x13
|
That's expected behavior: this menu only changes options for the current instance of Emacs. The Options menu mostly provides quick access to some options that people commonly change mid-session.
To make permanent changes, open the Customize interface from the Options menu. Go to “Emacs” (the toplevel customization dialog), then “Faces”, then “Basic Faces”, and configure the “Default face” item. (This is for Emacs 23, I haven't checked if Emacs 24 has the same structure.) Click on “Set for current session” to test and on “Save for future sessions” when you're satisfied with the values.
| Setting default emacs font not saving (built from source OS X 10.8) |
1,469,148,183,000 |
I'm on Lubuntu 20.04, with no PulseAudio installed. I'm having some trouble editing my ALSA setting, as any change I make interferes with my microphone.
In particular, if I use the following basic configuration file:
pcm.!default {
type hw
card 2
}
ctl.!default {
type hw
card 2
}
Then I am unable to run OBS and Discord in parallel, as the first tries to open the microphone in stereo mode, while the latter in mono. The last to try always fails to open the device.
However, with just the lines
defaults.pcm.card 2
defaults.ctl.card 2
Everything works correctly. This hints to me that the default device that ALSA provides is more flexible than a simple type hw plugged to the correct device. I tried to look into somehow making ALSA print its defaults, but could not find anything about it.
How can I replicate the default ALSA device in my configuration file, so that I can make and test my changes as diffs to what ALSA already does for me?
|
The default definition of the default device can be found in /usr/share/alsa/pcm/default.conf. If it does not redirect to a driver-specific default, it is defined like this:
pcm.!default {
type plug
slave.pcm {
type hw
card 2
}
}
The plug plugin implements automatic sample rate/format conversion.
Most drivers do have their own default definition. In particular, most motherboard devices are handled by /usr/share/alsa/cards/HDA-Intel.conf, which defines something like this to allow multiple clients:
pcm.!default {
type asym
playback.pcm {
type plug
slave.pcm "dmix:2"
}
capture.pcm {
type plug
slave.pcm "dsnoop:2"
}
}
| What exactly is the default pcm ALSA device? |
1,469,148,183,000 |
I understand from Richard Stallman in this video, made probably in 2012/2013, that Ubuntu isn't 100% free by the GNU organization's definition of Free Software, as according to Stallman there, it "spies" on users and also shares the "spied" information with Amazon.
I asked about it on AskUbuntu but my question was put off topic and heavily disliked, so I removed it.
I'm not sure if the mechanisms Stallman mentions (without even knowing how accurate he was) still exist in Ubuntu in 2018, but do they exist in Debian?
Does the Debian desktop have such things?
I don't mind installing some third-party software on a Debian desktop (like Nvidia software), but I would still like to know whether the aforementioned mechanisms (per Stallman), or even similar tracking mechanisms, exist in Debian.
I ask this question as I strongly contemplate moving to a new OS, and I'm very much into Debian systems, for various reasons.
|
This was a feature provided by Unity’s shopping lens; it was removed in Ubuntu 17.10, and was never available in Debian.
| Does the search tracking and Amazon information sharing attributed to Ubuntu part of Debian? |
1,469,148,183,000 |
I'm using LC_TIME="en_AU.UTF-8" in general, and I'm happy with that. However, when I use Thunderbird, I'd like it to use a 12-hour clock. I've created a custom locale, and it works fine if I launch Thunderbird with
LC_TIME=en_AU_12h.utf8 /usr/bin/thunderbird
However, can I make Thunderbird launch like this by default? It seems to me that I'd have to make several modifications.
I sometimes launch Thunderbird from my Desktop Environment, so I'd have to modify thunderbird.desktop.
I sometimes launch Thunderbird from the command line, so I'd have to put the altered command in my $PATH, perhaps /usr/local/bin/thunderbird.
I have a custom script to launch several programs at once, so that would also have to be modified.
Is there a way to change Thunderbird's default environment variables, so I don't have to change so many files?
|
The usual way is to create a script which calls the binary, and set the variables in that script. In fact, it is not uncommon for the executables of complex programs to be set up like that; chromium, for example. So, if /usr/bin/thunderbird isn't already a script (check), you can create a script called /usr/bin/thunderbird, or perhaps /usr/local/bin/thunderbird, and have it call the original thunderbird executable. Of course, you'd have to rename the original thunderbird for the first option to work.
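A sketch of such a wrapper (the install location and PATH shadowing are assumptions for illustration; a temp dir is used here so the sketch is harmless to run):

```shell
# Create the wrapper (written to a temp dir here; in practice save it as
# /usr/local/bin/thunderbird so it shadows /usr/bin/thunderbird on PATH):
bindir=$(mktemp -d)
cat > "$bindir/thunderbird" <<'EOF'
#!/bin/sh
# Set the environment, then replace this process with the real binary.
LC_TIME=en_AU_12h.utf8
export LC_TIME
exec /usr/bin/thunderbird "$@"
EOF
chmod +x "$bindir/thunderbird"
```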
| How can I consistently set an environment variable for a single program? |
1,469,148,183,000 |
If /etc/sysconfig/network-scripts/eth0 does not include any NM_CONTROLLED setting on a RHEL-based distribution, what's the default behaviour?
Is there any difference in the default setting between RHEL5/6/7?
|
Actually, only NM_CONTROLLED="no" (or is it "false"?) does anything.
Putting "yes" (or is it "true"?) is the same as not having the line at all:
NetworkManager will manage the network devices it supports, and it will ignore the devices unknown to it anyway.
| RHEL5/6/7 : If NM_CONTROLLED is not set what is the default value? |
1,469,148,183,000 |
I looked through this question about setting the default PDF reader to evince. None of it seems to help for my install of Debian 10.3 with the Cinnamon desktop. I had a poke around and found this:
$ cat /usr/share/applications/x-cinnamon-mimeapps.list | grep pdf
application/pdf=evince.desktop;
application/x-ext-pdf=evince.deskto;
and this:
$ cat .config/mimeapps.list
[Default Applications]
x-scheme-handler/tg=telegramdesktop.desktop
application/pdf=evince.desktop
application/x-ext-pdf=evince.desktop
and all other possible places:
$ grep -rnw /usr/share/applications/ -e pdf
/usr/share/applications/mimeinfo.cache:16:application/pdf=libreoffice-draw.desktop;gimp.desktop;inkscape.desktop;org.gnome.Evince.desktop;
/usr/share/applications/mimeinfo.cache:165:application/x-ext-pdf=org.gnome.Evince.desktop;
/usr/share/applications/mimeinfo.cache:359:image/pdf=display-im6.q16.desktop;
/usr/share/applications/org.gnome.Evince.desktop:178:Keywords=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;viewer;
/usr/share/applications/org.gnome.Evince.desktop:179:Keywords[ar]=pdf;ps;بوستسكربت;dvi;xps;djvu;tiff;مستند;عرض;عارض;
/usr/share/applications/org.gnome.Evince.desktop:180:Keywords[be]=pdf;ps;postscript;dvi;xps;djvu;tiff;дакумент;прэзентацыя;праглядальнік;
/usr/share/applications/org.gnome.Evince.desktop:181:Keywords[ca]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentació;visualitzador;
/usr/share/applications/org.gnome.Evince.desktop:182:Keywords[cs]=pdf;ps;postscript;postskript;dvi;xps;djvu;tiff;dokument;prezentace;prohlížeč;
/usr/share/applications/org.gnome.Evince.desktop:183:Keywords[da]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokument;præsentation;fremviser;
/usr/share/applications/org.gnome.Evince.desktop:184:Keywords[de]=pdf;ps;postscript;dvi;xps;djvu;tiff;Dokument;Präsentation;Betrachter;
/usr/share/applications/org.gnome.Evince.desktop:185:Keywords[el]=pdf;ps;postscript;dvi;xps;djvu;tiff;έγγραφο;παρουσίαση;εφαρμογή προβολής;document;presentation;viewer;
/usr/share/applications/org.gnome.Evince.desktop:186:Keywords[en_GB]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;viewer;
/usr/share/applications/org.gnome.Evince.desktop:187:Keywords[es]=pdf;ps;postscript;dvi;xps;djvu;tiff;documento;presentación;visor;
/usr/share/applications/org.gnome.Evince.desktop:188:Keywords[fi]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;viewer;asiakirja;katselin;
/usr/share/applications/org.gnome.Evince.desktop:189:Keywords[fr]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;présentation;visionneur;visualiseur;
/usr/share/applications/org.gnome.Evince.desktop:190:Keywords[fur]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentazion;visualizadôr;
/usr/share/applications/org.gnome.Evince.desktop:191:Keywords[gl]=pdf;ps;postscript;dvi;xps;djvu;tiff;documento;presentación;visor;
/usr/share/applications/org.gnome.Evince.desktop:192:Keywords[hr]=pdf;ps;postskripta;dvi;xps;djvu;tiff;dokument;prezentacija;preglednik;
/usr/share/applications/org.gnome.Evince.desktop:193:Keywords[hu]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokumentum;prezentáció;nézegető;
/usr/share/applications/org.gnome.Evince.desktop:194:Keywords[id]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokumen;presentasi;peninjau;
/usr/share/applications/org.gnome.Evince.desktop:195:Keywords[is]=pdf;ps;postscript;dvi;xps;djvu;tiff;skjal;kynning;skoðari;
/usr/share/applications/org.gnome.Evince.desktop:196:Keywords[it]=pdf;ps;postscript;dvi;xps;djvu;tiff;documento;presentazione;visualizzatore;
/usr/share/applications/org.gnome.Evince.desktop:197:Keywords[kk]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;viewer;құжат;презентация;көрсету;
/usr/share/applications/org.gnome.Evince.desktop:198:Keywords[ko]=pdf;ps;postscript;포스트스크립트;dvi;xps;djvu;tiff;document;문서;presentation;프리젠테이션;viewer;뷰어;보기;
/usr/share/applications/org.gnome.Evince.desktop:199:Keywords[lt]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokumentas;pateiktis;prezentacija;žiūryklė;
/usr/share/applications/org.gnome.Evince.desktop:200:Keywords[lv]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokuments;prezentācija;skatītājs;
/usr/share/applications/org.gnome.Evince.desktop:201:Keywords[nb]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokument;presentasjon;visning;
/usr/share/applications/org.gnome.Evince.desktop:202:Keywords[nl]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;presentatie;viewer;weergave;
/usr/share/applications/org.gnome.Evince.desktop:203:Keywords[pl]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokument;prezentacja;przeglądarka;
/usr/share/applications/org.gnome.Evince.desktop:204:Keywords[pt_BR]=pdf;ps;postscript;dvi;xps;djvu;tiff;documento;apresentação;visualizador,visualização;
/usr/share/applications/org.gnome.Evince.desktop:205:Keywords[ro]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;viewer;prezentare;vizualizator;
/usr/share/applications/org.gnome.Evince.desktop:206:Keywords[ru]=pdf;ps;postscript;dvi;xps;djvu;tiff;документ;презентация;просмотр;
/usr/share/applications/org.gnome.Evince.desktop:207:Keywords[sk]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokument;prezentácia;prehliadač;prezerač;
/usr/share/applications/org.gnome.Evince.desktop:208:Keywords[sl]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokument;predstavitev;pregledovalnik;
/usr/share/applications/org.gnome.Evince.desktop:209:Keywords[sr]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;viewer;пдф;пс;пост-скрипт;дви;икспс;дежави;тифф;документ;презентација;приказивање;dokument;prezentacija;prikazivanje;
/usr/share/applications/org.gnome.Evince.desktop:210:Keywords[sr@latin]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;viewer;pdf;ps;post-skript;dvi;iksps;dežavi;tiff;dokument;prezentacija;prikazivanje;dokument;prezentacija;prikazivanje;
/usr/share/applications/org.gnome.Evince.desktop:211:Keywords[sv]=pdf;ps;postscript;dvi;xps;djvu;tiff;dokument;presentation;visare;bläddrare;
/usr/share/applications/org.gnome.Evince.desktop:212:Keywords[tr]=pdf;ps;postscript;dvi;xps;djvu;tiff;belge;sunum;görüntüleyici;
/usr/share/applications/org.gnome.Evince.desktop:213:Keywords[vi]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;tài;liệu;tai;lieu;presentation;trình;diễn;trinh;dien;viewer;xem;
/usr/share/applications/org.gnome.Evince.desktop:214:Keywords[zh_CN]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;viewer;文档;演示;幻灯;查看器;
/usr/share/applications/org.gnome.Evince.desktop:215:Keywords[zh_TW]=pdf;ps;postscript;dvi;xps;djvu;tiff;document;presentation;viewer;文件;簡報;檢視器;
/usr/share/applications/org.gnome.Evince.desktop:224:MimeType=application/pdf;application/x-bzpdf;application/x-gzpdf;application/x-xzpdf;application/x-ext-pdf;application/postscript;application/x-bzpostscript;application/x-gzpostscript;image/x-eps;image/x-bzeps;image/x-gzeps;application/x-ext-ps;application/x-ext-eps;application/illustrator;application/x-dvi;application/x-bzdvi;application/x-gzdvi;application/x-ext-dvi;image/vnd.djvu+multipage;application/x-ext-djv;application/x-ext-djvu;image/tiff;application/x-cbr;application/x-cbz;application/x-cb7;application/x-cbt;application/x-ext-cbr;application/x-ext-cbz;application/x-ext-cb7;application/x-ext-cbt;application/vnd.comicbook+zip;application/vnd.comicbook-rar;application/oxps;application/vnd.ms-xpsdocument;
/usr/share/applications/inkscape.desktop:309:MimeType=image/svg+xml;image/svg+xml-compressed;application/vnd.corel-draw;application/pdf;application/postscript;image/x-eps;application/illustrator;image/cgm;image/x-wmf;application/x-xccx;application/x-xcgm;application/x-xcdt;application/x-xsk1;application/x-xcmx;image/x-xcdr;application/visio;application/x-visio;application/vnd.visio;application/visio.drawing;application/vsd;application/x-vsd;image/x-vsd;
/usr/share/applications/display-im6.q16.desktop:14:MimeType=image/avs;image/bie;image/x-ms-bmp;image/cmyk;image/dcx;image/eps;image/fax;image/fits;image/gif;image/gray;image/jpeg;image/pjpeg;image/miff;image/mono;image/mtv;image/x-portable-bitmap;image/pcd;image/pcx;image/pdf;image/x-portable-graymap;image/pict;image/png;image/x-portable-anymap;image/x-portable-pixmap;image/ps;image/rad;image/x-rgb;image/rgba;image/rla;image/rle;image/sgi;image/sun-raster;image/targa;image/tiff;image/uyvy;image/vid;image/viff;image/x-xbitmap;image/x-xpixmap;image/x-xwindowdump;image/x-icon;image/yuv;
/usr/share/applications/gimp.desktop:252:MimeType=image/bmp;image/g3fax;image/gif;image/x-fits;image/x-pcx;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-psd;image/x-sgi;image/x-tga;image/x-xbitmap;image/x-xwindowdump;image/x-xcf;image/x-compressed-xcf;image/x-gimp-gbr;image/x-gimp-pat;image/x-gimp-gih;image/tiff;image/jpeg;image/x-psp;application/postscript;image/png;image/x-icon;image/x-xpixmap;image/x-exr;image/x-webp;image/heif;image/heic;image/svg+xml;application/pdf;image/x-wmf;image/jp2;image/x-xcursor;
/usr/share/applications/x-cinnamon-mimeapps.list:2:application/pdf=evince.desktop;
/usr/share/applications/x-cinnamon-mimeapps.list:9:application/x-ext-pdf=evince.deskto;
/usr/share/applications/libreoffice-draw.desktop:25:MimeType=application/vnd.oasis.opendocument.graphics;application/vnd.oasis.opendocument.graphics-flat-xml;application/vnd.oasis.opendocument.graphics-template;application/vnd.sun.xml.draw;application/vnd.sun.xml.draw.template;application/vnd.visio;application/x-wpg;application/vnd.corel-draw;application/vnd.ms-publisher;image/x-freehand;application/clarisworks;application/x-pagemaker;application/pdf;application/x-stardraw;
/usr/share/applications/libreoffice-draw.desktop:217:Keywords=Vector;Schema;Diagram;Layout;OpenDocument Graphics;Microsoft Publisher;Microsoft Visio;Corel Draw;cdr;odg;svg;pdf;vsd;
That seems to indicate that when you double click on a pdf, it should already launch with evince, but it launches with libreoffice. Any ideas how to fix this?
|
If by "double click on a pdf" you mean opening it from the file manager, then which file manager are you using? File managers can have their own MIME-type handling. For example, in Nautilus you can change it by right-clicking on the file, opening Properties, selecting the Open With tab and setting your application as the default.
| set default pdf reader in debian 10 buster with cinnamon desktop |
1,469,148,183,000 |
# ...
def show():
"""
Show image
"""
t = Twitter(auth=authen())
try:
target = g['stuff'].split()[0]
if target != 'image':
return
id = int(g['stuff'].split()[1])
tid = c['tweet_dict'][id]
tweet = t.statuses.show(id=tid)
media = tweet['entities']['media']
for m in media:
res = requests.get(m['media_url'])
img = Image.open(BytesIO(res.content))
img.show()
except:
debug_option()
printNicely(red('Sorry I can\'t show this image.'))
# ...
This is the section of code that the developer claims will open an image with the OS's default image viewer. For me, it opens it with imagemagick but I want it to open with feh. How can I change the OS's default image viewer?
|
Under the hood, PIL defaults to using the display command provided by ImageMagick to display images (or xv, if it exists). If you want to open an image with another program, you might have to modify PIL's source, and here is how.
| Change default image viewer |
1,469,148,183,000 |
Topic is the question.
Requirements:
FOSS of course
Independent of desktop environments
Is compliant with the Association between MIME types and applications standard
Optionally has a neat GUI (my application uses mimeapps.list and I want to refer my users to a user-friendly way to change default apps)
|
Found the cli tool. A GUI would still be nice to have...
$ xdg-mime query filetype foo.jpg # Get the mimetype of the file
image/jpeg
$ xdg-mime default gwenview.desktop image/jpeg # Set a new association
| Is there an XDG compliant, DE independent "default-application-setter" application? |
1,469,148,183,000 |
I have installed PyCharm via flatpak on Linux Mint 21.1. I want to open every .py file in the PyCharm application, but when I right-click and choose "open with other application", PyCharm is not on the list.
How can I set PyCharm to be the default app for .py files?
Thank you for your help.
EDIT:
I found .desktop file in /home/kacka/.local/share/applications/userapp-com.jetbrains.PyCharm-Professional.desktop-95N5X1.desktop but when I try to set it up I get this:
|
Solution was to go to:
/var/lib/flatpak/exports/bin/
where I found this:
lrwxrwxrwx 1 root root 105 Jan 7 13:28 com.jetbrains.PyCharm-Professional -> ../../app/com.jetbrains.PyCharm-Professional/current/active/export/bin/com.jetbrains.PyCharm-Professional
So I set the default application to:
/var/lib/flatpak/exports/bin/com.jetbrains.PyCharm-Professional
and now it works.
| How to set an application installed via flatpak as the default application for some types of files? |
1,469,148,183,000 |
Normally when I make a directory with mkdir, the permissions I expect are 751 or 755. However, for some reason, new files and directories are created with mode 700, even in a user's home directory.
What controls the default permissions on new files, and what kind of configuration change could have led to this?
|
As @Tejas mentioned, you need to understand umask and its values to change the default permissions.
I recommend you read this article so you'll understand how to use it properly.
In addition, you should know that setting umask on the command line is not permanent: after rebooting your system, the value you've set will be gone. To set it permanently, write the new umask value in your shell's configuration file (~/.bashrc, which is executed for interactive non-login shells, or ~/.bash_profile, which is executed for login shells).
Good luck!
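To illustrate (a minimal sketch using scratch directories): a umask of 077 masks out all group and other bits, which produces exactly the 700 directories described in the question, while the common default of 022 yields 755:

```shell
# New directories get mode 0777 & ~umask; new files get 0666 & ~umask.
base=$(mktemp -d)
( umask 077; mkdir "$base/d077" )   # 0777 & ~077 = 700
( umask 022; mkdir "$base/d022" )   # 0777 & ~022 = 755
stat -c %a "$base/d077"             # prints 700
stat -c %a "$base/d022"             # prints 755
```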
| Why are group permissions missing on new directories? |
1,469,148,183,000 |
Is there a way to set default options for cryptsetup? For example, lets say I want to make sure that I only open cryptsetup devices with the -r option. I would like to add it to a config file, so that I don't have to type it every time (and potentially forget it)
Reading man cryptsetup did not reveal any information.
|
AFAIK there is no configuration file for cryptsetup. You can of course define an alias and put that somewhere where it gets read in at login:
alias cryptsetup='cryptsetup --readonly'
| setting default options for cryptsetup |
1,469,148,183,000 |
I'm using gnu make and stow to manage some configurations (dotfiles).
I have multiple directories in my repo:
dotfiles/
├── Makefile
├── package1/
└── package2/
Currently, my Makefile looks like:
PACKAGES = package1 package2
.PHONY: all $(PACKAGES)
all: $(PACKAGES)
package1:
stow --no-fold $@
package2:
stow --no-fold $@
I want to define a default rule for packages, so I did:
PACKAGES = package1 package2
.PHONY: all $(PACKAGES)
all: $(PACKAGES)
%:
stow --no-fold $@
But that didn't work:
$ make
make: Nothing to be done for `all'.
$ make package1
make: Nothing to be done for `package1'.
$ make package2
make: Nothing to be done for `package2'.
So: is it possible to define a "default" rule for directories? If yes, how do I do it?
|
You could replace your rule with:
$(PACKAGES):
stow --no-fold $@
The match-anything pattern rule (%:) didn't fire because GNU make skips implicit-rule search for targets declared .PHONY; with no explicit rule found, each phony target had nothing to do, hence "Nothing to be done". An explicit rule for $(PACKAGES), as above, avoids that.
| Make pattern match directories |
1,469,148,183,000 |
I have a Linux device that uses the USB gadget framework for RNDIS support. The goal was to be able to connect any computer to this device without having to mess with IP settings, so I've set a static IP address on my RNDIS device. As far as communication goes, everything works. What does not work: my host PC adds my RNDIS device as a gateway, and thereby loses its internet connection. I can remove the gateway route each time I plug in my device, but this hurts the user experience.
How do I modify my RNDIS configuration so that the host PC does not add a gateway?
|
The RNDIS device may have a static IP address, but where and how does the host PC get the IP address settings for connecting to the RNDIS device?
If the RNDIS device provides the settings for the host, using DHCP, PPPoE or some other mechanism, then it should not provide a default gateway setting unless it is prepared to act as an Internet gateway.
In the terms of pppd options, that would mean removing any defaultroute options and adding nodefaultroute instead.
In generic DHCP server terms, that would mean not providing the DHCP option #3 at all - if using the ISC dhcpd for example, you should remove any option routers ... line from the dhcpd.conf file.
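As a sketch (assuming ISC dhcpd and made-up addresses), the dhcpd.conf block for the RNDIS link would look something like this:

```
subnet 192.168.7.0 netmask 255.255.255.252 {
    range 192.168.7.2 192.168.7.2;          # one address for the host PC
    option subnet-mask 255.255.255.252;
    # deliberately no "option routers ..." line, so the host never
    # installs the RNDIS device as its default gateway
}
```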
| RNDIS interface gets a gateway |
1,469,148,183,000 |
How do I set the default folder order in Gnome? I'm talking about setting 'type order' as default so the folders will always be listed at the top, and other files after them.
Here is the image:
You see that everything is listed by type, and I want this to be the default order, so that it won't reset when I reboot.
|
You can achieve this by opening Nautilus preferences as follows:
open a Nautilus window > click on "Files" in the top menu > Preferences (as shown here)
and from there set up the Default view. Specifically: Arrange items: By type.
| Default Folder Order in Gnome |
1,371,822,060,000 |
Is there a command to recover/undelete deleted files by rm?
rm -rf /path/to/myfile
How can I recover myfile? If there is a tool to do this, how can I use it?
|
The link someone provided in the comments is likely your best chance.
Linux debugfs Hack: Undelete Files
That write-up, though it looks a little intimidating, is actually fairly straightforward to follow. In general the steps are as follows:
Use debugfs to view a filesystem's journal
$ debugfs -w /dev/mapper/wks01-root
At the debugfs prompt
debugfs: lsdel
Sample output
Inode Owner Mode Size Blocks Time deleted
23601299 0 120777 3 1/ 1 Tue Mar 13 16:17:30 2012
7536655 0 120777 3 1/ 1 Tue May 1 06:21:22 2012
2 deleted inodes found.
Run the command in debugfs
debugfs: logdump -i <7536655>
Determine the file's inode and data block from the output
...
...
....
output truncated
Fast_link_dest: bin
Blocks: (0+1): 7235938
FS block 7536642 logged at sequence 38402086, journal block 26711
(inode block for inode 7536655):
Inode: 7536655 Type: symlink Mode: 0777 Flags: 0x0 Generation: 3532221116
User: 0 Group: 0 Size: 3
File ACL: 0 Directory ACL: 0
Links: 0 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x4f9fc732 -- Tue May 1 06:21:22 2012
atime: 0x4f9fc730 -- Tue May 1 06:21:20 2012
mtime: 0x4f9fc72f -- Tue May 1 06:21:19 2012
dtime: 0x4f9fc732 -- Tue May 1 06:21:22 2012
Fast_link_dest: bin
Blocks: (0+1): 7235938
No magic number at block 28053: end of journal.
With the above inode info, run the following commands
# dd if=/dev/mapper/wks01-root of=recovered.file.001 bs=4096 count=1 skip=7235938
# file recovered.file.001
file: ASCII text, with very long lines
The file has been recovered to recovered.file.001.
Other options
If the above isn't for you I've used tools such as photorec to recover files in the past, but it's geared for image files only. I've written about this method extensively on my blog in this article titled:
How to Recover Corrupt jpeg and mov Files from a Digital Camera's SDD Card on Fedora/CentOS/RHEL.
| Recover deleted files on Linux |
1,371,822,060,000 |
Is there a simple option on extundelete how I can try to undelete a file called /var/tmp/test.iso that I just deleted?
(it is not so important that I would start to remount the drive read-only or such things. I can also just re-download that file again)
I am looking for a simple command with that I could try if I manage to fast-recover it.
I know it is possible by remounting the drive read-only (see How do I simply recover the only file on an empty disk just deleted?).
But is this also possible somehow on the still-mounted disk?
For info:
if the deleted file is on an NTFS partition it is easy with ntfsundelete e.g. if you know the size was about 250MB use
sudo ntfsundelete -S 240m-260m -p 100 /dev/hda2
and then undelete the file by inode e.g. with
sudo ntfsundelete /dev/hda2 --undelete --inodes 8270
|
Looking at the usage guide on extundelete, it seems as though you're limited to undeleting files in a few ways.
Restoring all
extundelete is designed to undelete files from an unmounted partition to a separate (mounted) partition. extundelete will restore any files it finds to a subdirectory of the current directory named “RECOVERED_FILES”. To run the program, type “extundelete --help” to see various options available to you.
Typical usage to restore all deleted files from a partition looks like this:
$ extundelete /dev/sda4 --restore-all
Restoring a single file
In addition to this method highlighted in the command line usage:
--restore-file path/to/deleted/file
Attemps to restore the file which was deleted at the given filename,
called as "--restore-file dirname/filename".
So you should be able to accomplish what you want doing this:
$ extundelete --restore-file /var/tmp/test.iso /dev/sda4
NOTE: In both cases you need to know the device (here /dev/sda4) to run these commands against, and you'll have to remount the filesystem read-only first. This is one of the conditions of using extundelete and there isn't any way around it.
| undelete a just deleted file on ext4 with extundelete |
1,371,822,060,000 |
Hi, I have many files that have been deleted, but for some reason the disk space associated with the deleted files cannot be reclaimed until I explicitly kill the process holding the file open.
$ lsof /tmp/
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
cron 1623 root 5u REG 0,21 0 395919638 /tmp/tmpfPagTZ4 (deleted)
The disk space taken up by the deleted file above causes problems such as when trying to use the tab key to autocomplete a file path I get the error bash: cannot create temp file for here-document: No space left on device
But after I run kill -9 1623 the space for that PID is freed and I no longer get the error.
My questions are:
why is this space not immediately freed when the file is first deleted?
what is the best way to get back the file space associated with the deleted files?
and please let me know any incorrect terminology I have used or any other relevant and pertinent info regarding this situation.
|
On unices, a filename is just a link (a directory entry) pointing to an inode, which describes where the file's data resides (which can be a hard drive or even a RAM-backed filesystem). Each file records the number of references to it: references can be filenames (plural, if there are multiple hard links to the same file), and also every time a file is opened, the process effectively holds a reference to the same data.
The space is physically freed only if there are no links left (therefore, it's impossible to get to it). That's the only sensible choice: while the file is being used, it's not important if someone else can no longer access it: you are using it and until you close it, you still have control over it - you won't even notice the filename is gone or moved or whatever. That's even used for tempfiles: some implementations create a file and immediately unlink it, so it's not visible in the filesystem, but the process that created it is using it normally. Flash plugin is especially fond of this method: all the downloaded video files are held open, but the filesystem doesn't show them.
So, the answer is, while the processes have the files still opened, you shouldn't expect to get the space back. It's not freed, it's being actively used. This is also one of the reasons that applications should really close the files when they finish using them. In normal usage, you shouldn't think of that space as free, and this also shouldn't be very common at all - with the exception of temporary files that are unlinked on purpose, there shouldn't really be any files that you would consider being unused, but still open. Try to review if there is a process that does this a lot and consider how you use it, or just find more space.
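This behaviour is easy to demonstrate (a Linux-specific sketch using a scratch file): the data of an unlinked file stays reachable through any open descriptor, and is freed only on close:

```shell
tmp=$(mktemp)
printf 'still here' > "$tmp"
exec 3< "$tmp"                     # hold the file open on descriptor 3
rm "$tmp"                          # unlink: the name is gone...
recovered=$(cat /proc/self/fd/3)   # ...but the data is still reachable
exec 3<&-                          # close: now the space is really freed
echo "$recovered"                  # prints: still here
```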
| Best way to free disk space from deleted files that are held open |
1,371,822,060,000 |
Never thought this would happen to me, but there you go. ¯\_(ツ)_/¯
I ran a build script from a repository inside the wrong directory without looking at the source first. Here's the script Scripts/BuildLocalWheelLinux.sh:
cd ../Dependencies/cpython
mkdir debug
cd debug
../configure --with-pydebug --enable-shared
make
cd ../../..
cd ..
mkdir -p cmake-build-local
cd cmake-build-local
rm -rf *
cmake .. -DMVDIST_ONLY=True -DMVPY_VERSION=0 -DMVDPG_VERSION=local_build
make -j
cd ..
cd Distribution
python3 BuildPythonWheel.py ../cmake-build-local/[redacted]/core.so 0
python3 -m ensurepip
python3 -m pip install --upgrade pip
[more pip install stuff]
python3 -m setup bdist_wheel --plat-name manylinux1_x86_64 --dist-dir ../dist
cd ..
cd Scripts
The dangerous part seems to be
mkdir -p cmake-build-local
cd cmake-build-local
rm -rf *
But thinking about it, it actually seems like it couldn't possibly go wrong.
The way you're supposed to run this script is cd Scripts; ./BuildLocalWheelLinux.sh. When I ran it the first time, it showed an error on the very last line (as I learned afterwards). I was in a hurry, so I thought "maybe the docs are outdated, I'll try running from the project root instead". So I ran ./Scripts/BuildLocalWheelLinux.sh. Suddenly, vscode's theme and zoom level changed, my zsh terminal config was reset, terminal fonts were set to default, and I Ctrl+C'd once I realized what was happening.
There are some files remaining, but there's no obvious pattern to them:
$ ls -la
total 216
drwx------ 27 felix felix 4096 May 12 18:08 .
drwxr-xr-x 3 root root 4096 Apr 15 16:39 ..
-rw------- 1 felix felix 12752 Apr 19 11:07 .bash_history
-rw-r--r-- 1 felix felix 3980 Apr 15 13:40 .bashrc
drwxrwxrwx 7 felix felix 4096 May 12 18:25 .cache
drwx------ 8 felix felix 4096 May 12 18:26 .config
drwx------ 3 root root 4096 Apr 13 21:40 .dbus
drwx------ 2 felix felix 4096 Apr 30 12:18 .docker
drwxr-xr-x 8 felix felix 4096 Apr 15 13:40 .dotfiles
-rw------- 1 felix felix 8980 Apr 13 18:10 examples.desktop
-rw-r--r-- 1 felix felix 196 Apr 19 15:19 .gitconfig
-rw-r--r-- 1 felix felix 55 Apr 16 13:56 .gitconfig.old
-rw-r--r-- 1 felix felix 1040 Apr 15 13:40 .gitmodules
drwx------ 3 felix felix 4096 May 6 10:10 .gnupg
-rw-r--r-- 1 felix felix 1848 May 5 14:24 heartbeat.tcl
-rw------- 1 felix felix 1610 Apr 13 20:36 .ICEauthority
drwxr-xr-x 5 felix felix 4096 Apr 21 16:39 .ipython
drwxr-xr-x 2 felix felix 4096 May 4 09:35 .jupyter
-rw------- 1 felix felix 161 Apr 27 14:23 .lesshst
drwx------ 3 felix felix 4096 May 12 18:08 .local
-rw-r--r-- 1 felix felix 140 Apr 29 17:54 minicom.log
drwx------ 5 felix felix 4096 Apr 13 18:25 .mozilla
drwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Music
drwxr-xr-x 6 felix felix 4096 May 12 17:16 Nextcloud
-rw-r--r-- 1 felix felix 52 Apr 16 11:43 .nix-channels
-rw------- 1 felix felix 1681 Apr 20 10:33 nohup.out
drwx------ 3 felix felix 4096 Apr 15 11:16 .pki
-rw------- 1 felix felix 946 Apr 16 11:43 .profile
drwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Public
drwxr-xr-x 2 felix felix 4096 May 12 18:08 .pylint.d
-rw------- 1 felix felix 1984 May 12 18:06 .pythonhist
-rw-r--r-- 1 felix felix 2443 Apr 19 13:40 README.md
drwxr-xr-x 13 felix felix 4096 May 12 18:08 repos
drwxr-xr-x 6 felix felix 4096 Apr 19 11:08 snap
drwx------ 3 felix felix 4096 May 5 15:33 .ssh
drwxr-xr-x 5 felix felix 4096 Apr 26 17:39 .stm32cubeide
drwxr-xr-x 5 felix felix 4096 May 5 15:52 .stm32cubemx
drwxr-xr-x 2 felix felix 4096 Apr 23 11:44 .stmcube
drwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Templates
drwxr-xr-x 3 felix felix 4096 Apr 19 11:57 test
drwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Videos
-rw------- 1 felix felix 14313 May 12 10:45 .viminfo
-rw-r--r-- 1 felix felix 816 Apr 15 13:40 .vimrc
drwxr-xr-x 3 felix felix 4096 Apr 16 12:08 .vscode
-rw-r--r-- 1 felix felix 2321 Apr 19 18:47 weird_bug.txt
-rw-r--r-- 1 felix felix 162 Apr 15 13:40 .xprofile
.config is gone, as well as some standard XDG dirs like Pictures and Desktop, but .bashrc is still there. .nix-channels is still there, but .nix-defexpr was nuked.
So, this leads me to two questions:
What went wrong? I'd like to fix this build script and make a PR to prevent this from happening in the future.
What order were the files deleted in? Obviously not in alphabetical order, but * expands in alphabetical order, so something else is going on here, it seems.
|
Ouch. You aren't the first victim.
What went wrong?
Starting in your home directory, e.g. /home/felix, or even in /home/felix/src or /home/felix/Downloads/src.
cd ../Dependencies/cpython
Failed because there is no ../Dependencies.
mkdir debug
cd debug
You're now in the subdirectory debug of the directory you started from.
../configure --with-pydebug --enable-shared
make
Does nothing because there's no ../configure or make.
cd ../../..
cd ..
If you started out no more than three directory levels deep, with cd debug reaching a fourth level, the current directory is now the root directory. If you started out four directory levels deep the current directory is now /home.
mkdir -p cmake-build-local
This fails since you don't have permission to write in / or /home.
cd cmake-build-local
This fails since there is no directory cmake-build-local.
We now get to…
What order were the files deleted in?
rm -rf *
This tries to recursively delete every file in the current directory, which is / or /home. The home directories are enumerated in alphabetical order, but the files underneath are enumerated in the arbitrary order of directory traversal. It's the same order as ls --sort=none (unless rm decides to use a different order for some reason). Note that this order is generally not preserved in backups, and can change when a file is created or removed in the directory.
How to fix the script
First, almost any shell script should have set -e near the top. set -e causes the script to abort if a command fails. (A command fails if its exit status is nonzero.) set -e is not a panacea, because there are circumstances where it doesn't go into effect. But it's the bare minimum you can expect and it would have done the right thing here.
(Also the script should start with a shebang line to indicate which shell to use, e.g. #!/bin/sh or #!/bin/bash. But that wouldn't help with this problem.)
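A quick sketch of the difference, run against a throwaway script (since re-running the real one is exactly what we want to avoid): with set -e, the failed cd aborts the script before anything dangerous happens:

```shell
# Hypothetical stand-in for the top of the build script.
cat > /tmp/demo_set_e.sh <<'EOF'
#!/bin/sh
set -eu                                 # abort on errors and unset variables
cd /nonexistent/Dependencies/cpython    # fails, so the script stops here
echo "this rm -rf stand-in never runs"
EOF
status=0
sh /tmp/demo_set_e.sh 2>/dev/null || status=$?
echo "exit status: $status"             # nonzero: the script bailed out early
```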
rm -rf *, or variants like rm -rf $foo.* (what if $foo turns out to be empty?), are fragile. Here, instead of
mkdir -p cmake-build-local
cd cmake-build-local
rm -rf *
it would be more robust to just remove and re-create the directory. (This would not preserve the permissions on the directory, but here this is not a concern.)
rm -rf cmake-build-local
mkdir cmake-build-local
cd cmake-build-local
Another way is more robust against deleting the wrong files, but more fragile against missing files to delete: delete only files that are known to have been built, by running make clean which has rm commands for known build targets and for known extensions (e.g. rm *.o is ok).
| I just deleted everything in my home directory. How? And why are some files still there? |
1,371,822,060,000 |
I use RHEL4 with LVM2 on it. At times, even after removing large files (more than a GB), the partition usage does not get updated in the output of the df command.
-bash-3.00$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/sys-root 3.9G 1.4G 2.3G 39% /
/dev/cciss/c0d0p1 251M 19M 219M 8% /boot
/dev/mapper/sys-home 250G 125G 113G 53% /home
/dev/mapper/sys-tmp 3.9G 41M 3.7G 2% /tmp
/dev/mapper/sys-var 3.9G 3.6G 98M 98% /var
But when I check using du it shows the proper size
-bash-3.00$ sudo du -sh /var/
179M /var/
You can see there that the df output shows the /var partition to be 3.6 GB used, but du shows only 179 MB.
Now the problem is that neither sync nor partprobe updates the information, though surely rebooting the host would resolve the issue.
But as this is a production server I cannot reboot it. Is there any way to update the disk information manually without rebooting the host?
|
When a file is removed/deleted/unlinked, if it is still held open by any process then only the directory entry for the file is erased, not the file's data. When the file is finally closed by all processes, the data is returned to the free space pool. It's a feature, since you can have anonymous files this way.
To see if you have any open deleted file on a filesystem, run one of these commands, where /mount/point is the mount point (/var in your case):
lsof +L1 /mount/point
This article on open, unlinked files should help explain this some more.
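If killing the process is not acceptable, there is a Linux-specific trick: truncate the deleted-but-open file through /proc, which frees its blocks immediately. A sketch, demonstrated here against a scratch file held open by the shell itself (in the real case you would use the PID and FD columns from the lsof output, e.g. /proc/1623/fd/5):

```shell
big=$(mktemp)
head -c 1048576 /dev/zero > "$big"    # 1 MiB of data
exec 4< "$big"                        # stands in for a process holding it open
rm "$big"                             # deleted, but the space is still in use
before=$(stat -Lc %s /proc/self/fd/4)
: > /proc/self/fd/4                   # truncate the open (deleted) file
after=$(stat -Lc %s /proc/self/fd/4)
exec 4<&-
echo "before=$before after=$after"    # prints: before=1048576 after=0
```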
| Updating disk free size without rebooting the host |
1,371,822,060,000 |
I was looking at the man page for the rm command on my MacBook and I noticed the the following:
-W Attempt to undelete the named files. Currently, this option can only be used to recover
files covered by whiteouts.
What does this mean? What is a "whiteout"?
|
A whiteout is a special marker file placed by some "see-through" higher-order filesystems (those which use one or more real locations as a basis for their presentation), particularly union filesystems, to indicate that a file that exists in one of the base locations has been deleted within the artificial filesystem even though it still exists elsewhere. Listing the union filesystem won't show the whited-out file.
Having a special kind of file representing these is in the BSD tradition that macOS derives from: macOS uses st_mode bits 0160000 to mark them. Using ls -F, those files will be marked with a % sign, and ls -W will show that they exist (otherwise, they're generally omitted from listings). Many union systems also make normal files with a special name to represent whiteouts on systems that don't support those files.
I'm not sure that macOS exposes these itself in any way, but other systems from its BSD heritage do and it's possible that external filesystem drivers could use them.
| macOS rm command '-W' option - undelete |
1,371,822,060,000 |
One program created lots of nested sub-folders. I tried to use command
rm -fr * to remove them all. But it's very slow. I'm wondering is there any faster way to delete them all?
|
The fastest way to remove them from that directory is to move them out of there, after that just remove them in the background:
mkdir ../.tmp_to_remove
mv -- * ../.tmp_to_remove
rm -rf ../.tmp_to_remove &
This assumes that your current directory is not the toplevel of some mounted partition (i.e. that ../.tmp_to_remove is on the same filesystem).
The -- after mv (as edited in by Stéphane) is necessary if you have any file/directory names starting with a -.
The above removes the files from your current directory in a fraction of a second, as it doesn't have to recursively handle the subdirectories. The actual removal of the tree from the filesystem takes longer, but since it is out of the way, its actual efficiency shouldn't matter that much.
| What's the fastest way to remove all files & subfolders in a directory? [duplicate] |
1,371,822,060,000 |
I accidentally changed all the contents of the .bashrc file. I hadn't scripted much in it yet, so there's no big problem for now: I had only added a few aliases, which I can write again one by one.
How can I restore my .bashrc file with the default settings?
I use Linux Mint.
|
There exist backup copies of .bashrc, .profile etc. in /etc/skel/. So one could replace a corrupt .bashrc simply by overwriting it from there.
How do I restore .bashrc to its default?
cp /etc/skel/.bashrc ~/
| How can I restore my default .bashrc file again? |
1,371,822,060,000 |
I have a VPS I'm planning to delete. This particular cloud provider makes no guarantee that the data on the drive will be wiped before giving the disk to the next person. What's a best effort attempt I can make to secure-wipe sensitive data (whether existing as files or as deleted data) on the drive?
Assume the provider does not offer a separate, bootable OS to perform maintenance from
If not every last bit of sensitive data can be guaranteed to be wiped, that's ok
(I would have encrypted the data, if it were that critically sensitive!)
|
Use the scrub command[1] on the user data portions[2] of the VPS filesystem.
BEWARE: The following commands purposely destroy data.
Here is a list of ideas for scrubbing targets, in a sensible order, but you may need to vary it for your particular VPS configuration:
Databases, typically stored under /var. For instance, if you're using MySQL, you'd want to say something like this:
# service mysql stop # command varies by OS, substitute as necessary
# find /var/lib/mysql -type f -exec scrub {} \;
/usr/local should only contain software you added to the system outside the normal OS package system. Nuke it all:
# find /usr/local -type f -exec scrub {} \;
The web root. For most Linux web servers on bare VPSes running Apache, this is a pretty good guess:
# service apache stop # ditto caveat above
# find /var/www -type f -exec scrub {} \;
If you're on a managed VPS with a nice control panel front end which lets you set up virtual hosting, or you're on shared hosting, chances are that your web root lives somewhere else. You'll need to find it and use that instead of /var/www.
Email. Be sure to catch both the MTA's spooling directories as well as the individual users' mailbox files and directories.
Any configuration files with potentially sensitive data in them. I can't rightly think of anything in this category, since configuration data is generally fairly boring. One way to attack it would be to say
# ls -ltr /etc | tail -30
That will give you the 30 files you most recently touched in /etc, which will give you a list of files most likely touched by you, rather than containing stock configuration information.
Be careful! There are files you can scrub in /etc that will prevent you from being able to log back in. You might want to put off scrubbing those until later in the process.
Password files, keys, etc. This list varies considerably between systems, but here are some places to start looking:
/etc/shadow
/etc/pki/*
/etc/ssh/*key*
/etc/ssl/{certs,private}/*
~/.ssh # for each user
At this point, you probably cannot log back in again, so be sure not to drop your SSH connection to the VPS.
Erase the free space on every mounted filesystem that may contain user data:
For each user-data filesystem[2] mount point MOUNTPT:
# mkdir MOUNTPT/scrub
# scrub -X MOUNTPT/scrub
For instance, if /home is on its own filesystem, you'd create a /home/scrub directory and scrub -X that. You have to do this for each filesystem separately. This fills that filesystem with pseudorandom noise.
If there is user data on the root filesystem, don't do that one yet, since filling the root filesystem may crash the system.
Burn the world. If the OS hasn't crashed by this point, your shell hasn't dropped your session, etc., you can do a best-effort attempt to burn the world:
# find /var /home /etc -type f -exec scrub {} \;
Unix being the way it is about file locking, you still might not lose your connection to the VPS while this command executes, even though it is overwriting files you need to log in. You may nevertheless be unable to execute any more commands once it does finish. This is definitely a "saw off the tree limb you are sitting on" kind of command.
If by some thin chance you are still logged in after this completes, you can now erase the free space on the root filesystem:
# mkdir /scrub
# scrub -X /scrub
Nuke the VPS. Finally, log into your VPS control panel and tell it to reinstall your VPS with a different OS. Pick the biggest and most featureful one your VPS provider offers. This will overwrite part of your VPS's disk with fresh, uninteresting data. There's a chance it will overwrite something sensitive that your prior steps missed.
In all the scrub(1) commands above, I haven't given any special options, since the defaults are reasonable. If you are feeling especially paranoid, there are methods in scrub to use more passes, different data overwriting patterns, etc.
Scrub uses data overwriting techniques that require truly heroic measures to overcome. It's a question of incentives: how much work is someone willing to put in to recover your data? That tells you how paranoid you should be about following the steps above, and adding additional steps.
Due to the nature of virtual machines, there may be "echoes" of your user data in the host system due to VPS migrations and such, but those echoes are inaccessible to outsiders. If you cared about such things, you shouldn't have chosen to use a VPS provider in the first place.
If you added other directories to the standard list[2] of user data trees, you should probably scrub those early on, since the order of scrubbing is from most-user-centric to least.
You do the least user centric parts last, since they tend to be parts of the filesystem that affect the system's own functioning. You don't want to lock yourself out of the VPS before you're done scrubbing.
[1] Scrub is highly portable, and is probably in your OS's package repo already, but if you have to build it from source it's not hard.
[2] Typically, the trees containing user data are /home, /usr/local, /var, and /etc, in decreasing "density" of user data vs system default data. You may need to add other directories to this list due to your system administration style or VPS management software preferences.
We aren't going to bother scrubbing places like /usr/bin and /lib, as these should only contain copies of files that are widely available, and thus boring. (The OS, software you've installed from public sources, etc.)
| Secure wipe (scrub) filesystem of VPS from VPS itself |
1,371,822,060,000 |
I have a process that has been running for a very long time.
I accidentally deleted the binary executable file of the process.
Since the process is still running and is unaffected, the original binary must still exist somewhere else....
How can I get recover it? (I use CentOS 7, the running process is written in C++)
|
It could only be in memory and not recoverable, in which case you'd have to try to recover it from the filesystem using one of those filesystem recovery tools (or from memory, maybe). However!
$ cat hamlet.c
#include <unistd.h>
int main(void) { while (1) { sleep(9999); } }
$ gcc -o hamlet hamlet.c
$ md5sum hamlet
30558ea86c0eb864e25f5411f2480129 hamlet
$ ./hamlet &
[1] 2137
$ rm hamlet
$ cat /proc/2137/exe > newhamlet
$ md5sum newhamlet
30558ea86c0eb864e25f5411f2480129 newhamlet
$
With interpreted programs, obtaining the script file may be somewhere between tricky and impossible, as /proc/$$/exe will point to perl or whatever, and the input file may already have been closed:
$ echo sleep 9999 > x
$ perl x &
[1] 16439
$ rm x
$ readlink /proc/16439/exe
/usr/bin/perl
$ ls /proc/16439/fd
0 1 2
Only the standard file descriptors are open, so x is already gone (though may for some time still exist on the filesystem, and who knows what the interpreter has in memory).
| How to recover the deleted binary executable file of a running process |
1,371,822,060,000 |
I have an hour-long crontab job that runs every hour and produces some mtr (traceroute) output every 10 minutes (it will run for over an hour before the output is emailed back to me), and I want to see its progress so far.
On Linux, I have used lsof -n | fgrep cron (lsof is similar to BSD's fstat), and it seems like I might have found the file, but it is annotated as having been deleted (a standard practice for temporary files is to be deleted right after opening):
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
...
cron 21742 root 5u REG 202,0 7255 66310 /tmp/tmpfSuELzy (deleted)
And it cannot be accessed by its prior name anymore:
# stat /tmp/tmpfSuELzy
stat: cannot stat `/tmp/tmpfSuELzy': No such file or directory
How do I access such a deleted file that is still open?
|
The file can be accessed through the /proc filesystem: you already know the PID and the FD from the lsof output.
cat /proc/21742/fd/5
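A self-contained illustration of the same mechanism (Linux-only sketch; here sleep plays the role of the cron job, holding a hypothetical temporary file open on its stdin):

```shell
tmp=$(mktemp -d)
echo "partial output" > "$tmp/tmpfile"
sleep 30 < "$tmp/tmpfile" &           # process holding the temporary file open
pid=$!
rm "$tmp/tmpfile"                     # file now shows as "(deleted)" in lsof
cat /proc/$pid/fd/0                   # but its contents remain readable
cp /proc/$pid/fd/0 "$tmp/recovered"   # and can be copied out for safekeeping
kill $pid
```

Copying the fd entry out is a good idea: once the holding process exits, the data is gone for good.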
| How can I access a deleted open file on Linux (output of a running crontab task)? |
1,371,822,060,000 |
I started downloading a big file and accidentally deleted it a while ago. I know how to get its current contents by cping /proc/<pid>/fd/<fd>, but since the download is still in progress it'll be incomplete at the time I copy it someplace else.
Can I somehow salvage the file right at the moment the download finishes but before the downloader closes the file and I lose it for good?
|
Using tail in follow mode should allow you to do what you want.
tail -n +0 -f /proc/<pid>/fd/<fd> > abc.deleted
I just did a quick test and it seems to work here. You did not mention whether your file was a binary file or not. My main concern is that it may not copy from the start of file but the -n +0 argument should do that even for binary files.
The tail command may not terminate at the end of the download so you will need to terminate it yourself.
| Recover deleted file that is currently being written to |
1,371,822,060,000 |
In the first terminal A, I create a directory, enter the directory, and create a file:
$ mkdir test
$ cd test
$ touch file1.txt
$ ls
file1.txt
Then in another terminal B, I delete the directory:
$ rm -r test
$ mkdir test
$ cd test
$ touch file2.txt
And back again the terminal A (not doing any cd), I try to list the files:
$ ls
ls doesn't see anything and it doesn't complain either.
What happens in the background? How come ls doesn't see the problem? And is there a standard, portable, and/or recommended way to find out that something is not right in terminal A?
pwd just prints the seemingly correct directory name. touch file3.txt says no such file or directory, which is not helpful. Only bash -c "pwd" gives two long error lines which somehow give away that something is wrong, but they are not really descriptive, and I'm not sure how portable that is between different systems (I'm on Ubuntu 16.04). cd .. && cd test fixes the problem, but does not really explain what happened.
|
How come ls doesn't see the problem?
There is no "problem" in the first place.
something is not right in the terminal A
There is nothing not right. There are defined semantics for processes having unlinked directories open just as there are defined semantics for processes having unlinked files open. Both are normal things.
There are defined semantics for unlinking a directory entry that referenced something (whilst having that something open somewhere) and then creating a directory entry by the original name linking to something else: You now have two of those somethings, and referencing the open description for the first does not access the second, or vice versa. This is as true of directories as it is of files.
A process can have an open file description for a directory by dint of:
it being the process's working directory;
it being the process's root directory;
it being open by the process having called the opendir() library function; or
it being open by the process having called the open() library function.
rmdir() is allowed to fail to remove links to a still-open directory (which was the behaviour of some old Unices and is the behaviour of some non-Unix-non-Linux POSIX-conformant systems), and is required to fail if the still-open directory is unlinked via a name that ends in a pathname component .; but if it succeeds and removes the final link to the directory the defined semantics are that a still-open but unlinked directory:
has no directory entries at all;
cannot have any directory entries created thereafter, even if the attempting process has write access or privileged access.
Your operating system is one of the ones that does not return EBUSY from rmdir() in these circumstances, and your shell in the first terminal session has an unlinked but still open directory as its current directory. Everything that you saw was the defined behaviour in that circumstance. ls, for example, showed the empty still open first directory, of the two directories that you had at that point.
Even the output of pwd was. When run as a built-in command in that shell it was that shell internally keeping track of the name of the current directory in a shell/environment variable. When run as a built-in command in another shell, it was the other shell failing to match the device and i-node number of its working directory to the second directory now named by the contents of the PWD environment variable that it inherited, thus deciding not to trust the contents of PWD, and then failing in the getcwd() library function because the working directory does not have any names any longer, it having been unlinked.
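Those defined semantics can be observed directly (a Linux sketch; /bin/pwd is the external coreutils binary, as opposed to the shell builtin):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/test"
cd "$tmp/test"
rmdir "$tmp/test"                       # unlink the directory we are sitting in
pwd                                     # builtin: still prints the remembered name
/bin/pwd 2>/dev/null || pwd_failed=yes  # external pwd: getcwd() fails, no name remains
touch file3.txt 2>/dev/null || touch_failed=yes  # no new entries can be created here
cd /                                    # the only way out is to cd somewhere that exists
```

The builtin keeps working because it reports the shell's cached PWD; the external pwd and touch fail because the open-but-unlinked directory has no names and accepts no new entries.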
Further reading
rmdir(). "System Interfaces". The Open Group Base Specifications. IEEE 1003.1:2017.
https://unix.stackexchange.com/a/413225/5132
Why can't I remove the '.' directory?
Does 'rm .*' ever delete the parent directory?
| What happens when the current directory is deleted? |
1,371,822,060,000 |
I'm setting up an automation process where I am deleting files in a directory which contains sub-directories. I only want to delete the files in the directory, and want to keep the sub-directories intact. So right now I am just using rm * to delete the files in that directory. However, this command throws the message: cannot remove 'dir': Is a directory. I know I'm being nit-picky, but I don't want that message to repeatedly appear in my logs. Is there a better command I can use for deletion, or a way to tell rm to ignore the sub-directories?
|
You can just throw away the error messages:
rm * 2>/dev/null
That'll throw away all errors. If you want to see other potential errors then we can do something more complicated:
rm * 2>&1 | grep -v 'cannot remove .*: Is a directory'
In this way other errors will still be logged.
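Alternatively, you can avoid the error altogether by only asking for regular files to be deleted in the first place, with find's -maxdepth and -delete (a sketch using a temporary directory as a stand-in for your target):

```shell
dir=$(mktemp -d)                 # stand-in for your target directory
touch "$dir/a.txt" "$dir/b.txt"
mkdir "$dir/subdir"
touch "$dir/subdir/keep.txt"
# Delete only the regular files directly inside $dir; subdirectories
# (and everything in them) are left untouched:
find "$dir" -maxdepth 1 -type f -delete
ls "$dir"                        # only "subdir" remains
```

This produces no error messages at all, so nothing needs to be filtered from the logs.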
| Ignore 'cannot remove `dir`: Is a directory message |
1,371,822,060,000 |
Accidentally, an rm -rf command was launched on my root directory instead of the current directory. I stopped the file removal with Ctrl+C, but some files had already been removed. Is there a Linux command to list all recently removed files from the system, to find the affected applications?
Operating System: CentOS 6.3
|
*nix systems typically have a locate utility installed. It has a database, usually updated nightly, that has the names of (almost) all files on your system. Just run:
locate /path/to/dir/of/interest
and you should see a list of files that were in that directory as of the last database update. You can diff this against the current list.
Because it will be overwritten automatically with a new version, you might make a back-up copy of that database now. On debian-influenced systems, it is stored in /var/lib/mlocate/mlocate.db.
How to show missing files
Make a backup of the old database:
cp /var/lib/mlocate/mlocate.db ~/old.db
Update the database. The command to do this may vary. On a debian-like system, try:
sudo /etc/cron.daily/mlocate
Get the new and old file lists for your directory:
locate -d ~/old.db /your/dir | sort >~/old.list
locate /your/dir | sort >~/new.list
Get a list of all new and missing files:
diff ~/old.list ~/new.list
Additional notes
Not all files are listed in locate's database. A configuration file, typically /etc/updatedb.conf, determines which files and directories are excluded.
In the past I have used some version of locate that, by default, would only list files that still exist. If that is the case for your locate, you will want to turn that feature off.
| Is there a UNIX command to list all recently removed files from a system |
1,371,822,060,000 |
I am using RHEL8, and I see the directory - ~/.local/share/Trash/files
And there were so many files in it. Looking at the name and files present there it gave me an intuition that it is similar to recycle bin of Windows OSes.
Just tried playing around with it, and it appears that only files deleted through the file manager end up in Trash (~/.local/share/Trash/files), not files deleted using the rm command. Why is that?
Am I missing something here? I tried googling for more information, but none of the results gave a satisfactory answer.
Can I get a better understanding of this directory, ~/.local/share/Trash/files?
|
This is used by programs which comply with the FreeDesktop.org Trash specification. rm doesn’t follow this specification, but many current desktop environments do: instead of deleting files outright, they move them to the appropriate trash directory, thus allowing them to be “undeleted” if necessary.
On the command-line, one tool which can be used is gio trash; gio trash ${file} will move ${file} to the trash, and gio trash --empty will empty the trash. So if you wish you could make rm a function based on gio trash.
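For context, the FreeDesktop spec pairs each entry in Trash/files with a metadata file under ~/.local/share/Trash/info/ recording where the file came from; a typical foo.txt.trashinfo looks roughly like this (path and date are illustrative):

```ini
[Trash Info]
Path=/home/user/Documents/foo.txt
DeletionDate=2023-05-04T12:00:00
```

This metadata is what lets a file manager restore a trashed file to its original location; rm knows nothing of it, which is why files removed with rm never appear in the Trash.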
| Is ~/.local/share/Trash/files used by GNOME only for deleted files and not by rm command? |
1,371,822,060,000 |
I had a file with symbolic link
link -> original_file
original_file
I mistakenly ran the unlink command on original_file
Now the original file is missing and the symbolic link is broken. What to do? How to recover the original file?
|
As the man page specifies, the unlink command will remove a specified file :
UNLINK(1)
NAME
unlink - call the unlink function to remove the specified file
Unlink will remove hard-links and symbolic-links as well.
As a file in Linux is a hard link to an inode, if a regular file is specified as a parameter, that hard link will be removed; and if it was the last hard link to the file's inode, then the file is, in effect, erased.
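A small demonstration of that point (a sketch using a temporary directory): the data survives unlink as long as another hard link remains, but a symlink gives no such protection:

```shell
tmp=$(mktemp -d)
echo "data" > "$tmp/original"
ln "$tmp/original" "$tmp/hardlink"   # a second name for the same inode
unlink "$tmp/original"               # removes one name only...
cat "$tmp/hardlink"                  # ...the data survives via the other link
ln -s "$tmp/hardlink" "$tmp/symlink"
unlink "$tmp/hardlink"               # that was the last hard link: the data is gone
cat "$tmp/symlink" 2>/dev/null || echo "dangling symlink"
```

So in your situation the data is really gone unless you had another hard link to it; the broken symlink itself holds nothing, and recovery would require filesystem-level undelete tools or backups.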
| unlink original file instead of symbolic link. What to do? |
1,371,822,060,000 |
SERVER:~ # df -mP /home/
Filesystem 1048576-blocks Used Available Capacity Mounted on
/dev/mapper/rootvg-home_lv 496 491 0 100% /home
SERVER:~ #
SERVER:/home # lsof | grep -i deleted | grep -i "home" | grep home
badprocess 4315 root 135u REG 253,2 133525523 61982 /home/username/tr5J6fRJ (deleted)
badprocess2 44654 root 133u REG 253,2 144352676 61983 /home/username/rr2sxv4L (deleted)
...
SERVER:/home #
Files were deleted while they were still in use. So they still consume space. But we don't want to restart the "badprocess*". OS is SLES9, but we are asking this "in general".
Question: How can we remove these already deleted files without restarting the process that holds them, so the space would free up?
|
You can use the entries in /proc to truncate such files.
# ls -l /proc/4315/fd
That will show all the files opened by process 4315. You've already used lsof and that shows that the deleted file is file descriptor 135, so you can free the space used by that deleted file as follows:
# > /proc/4315/fd/135
The same goes for the other deleted file opened by process 44654, there it's file descriptor 133, so:
# > /proc/44654/fd/133
You should now see that the space is freed up.
You can also use this to copy the contents of a file that's been deleted but still held open by a process, just cp /proc/XXX/fd/YY /some/other/place/filename.
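Here is a reproducible sketch of the truncation trick (Linux-only; sleep plays the role of badprocess, holding a hypothetical file open on its stdin):

```shell
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big" bs=1M count=4 2>/dev/null
sleep 30 < "$tmp/big" &              # a process holding the file open
pid=$!
rm "$tmp/big"                        # now "(deleted)", but still consuming 4 MiB
: > /proc/$pid/fd/0                  # truncate it through the process's fd entry
size=$(stat -L -c %s /proc/$pid/fd/0)  # the deleted file's size is now 0
echo "$size"
kill $pid
```

Note that the process keeps running with its (now empty) file descriptor; only the dead data is released, no restart required.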
| How to reclaim storage of "deleted", but still used files on Linux? |
1,371,822,060,000 |
Using Linux Mint with Cinnamon.
I deleted some files using Shift+Del, so that they wouldn't go to the trash, hoping they would be deleted immediately. However, the files are still there; they have only been renamed with a ~ at the end, making them invisible, but they are not gone.
If I were to delete the tilde from the name the files would be restored. I see the functionality as some kind of safety net, though redundant with the trash can. If I wanted a two-step "safe" operation I would just send the files to the trash and then empty the trash.
Only by deleting the renamed files with Shift+Del again can I finally get rid of them.
So what is the One-Step operation to permanently delete files then?
|
The setting that controls immediate deletion is available in dconf-editor.
org → nemo → preferences → enable-delete.
Even though the default setting is enabled it didn't work correctly. I disabled it and re-enabled it again, and now Shift+Del works as expected: Files-be-gone-for-good...
| How to fully delete files bypassing the trash? |
1,371,822,060,000 |
I've noticed, if a file is renamed, lsof displays the new name.
To test it out, created a python script:
#!/bin/python
import time
f = open('foo.txt', 'w')
while True:
time.sleep(1)
Saw that lsof follows the rename:
$ python test_lsof.py &
[1] 19698
$ lsof | grep foo | awk '{ print $2,$9 }'
19698 /home/bfernandez/foo.txt
$ mv foo{,1}.txt
$ lsof | grep foo | awk '{ print $2,$9 }'
19698 /home/bfernandez/foo1.txt
Figured this may be via the inode number. To test this out, I created a hard link to the file. However, lsof still displays the original name:
$ ln foo1.txt foo1.link
$ stat -c '%n:%i' foo*
foo1.link:8429704
foo1.txt:8429704
$ lsof | grep foo | awk '{ print $2,$9 }'
19698 /home/bfernandez/foo1.txt
And, if I delete the original file, lsof just lists the file as deleted even though there's still an existing hard link to it:
$ rm foo1.txt
rm: remove regular empty file ‘foo1.txt’? y
$ lsof | grep foo | awk '{ print $2,$9,$10 }'
19698 /home/bfernandez/foo1.txt (deleted)
So finally...
My question
What method does lsof use to keep track of open file descriptors that allows it to:
Keep track of filename changes
Not be aware of existing hard links
|
You are right in assuming that lsof uses the inode from the kernel's name cache. Under Linux platforms, the path name is provided by the Linux /proc file system.
The handling of hard links is better explained in the FAQ:
3.3.4 Why doesn't lsof report the "correct" hard linked file path
name?
When lsof reports a rightmost path name component for a
file with hard links, the component may come from the
kernel's name cache. Since the key which connects an open
file to the kernel name cache may be the same for each
differently named hard link, lsof may report only one name
for all open hard-linked files. Sometimes that will be
"correct" in the eye of the beholder; sometimes it will
not. Remember, the file identification keys significant
to the kernel are the device and node numbers, and they're
the same for all the hard linked names.
The fact that the deleted node is displayed at all is also specific to Linux (and later builds of Solaris 10, according to the same FAQ).
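You can watch that /proc-based name tracking directly, without lsof (a Linux-only sketch; sleep holds the file open on its stdin):

```shell
tmp=$(mktemp -d)
touch "$tmp/foo.txt"
sleep 30 < "$tmp/foo.txt" &        # process with the file open
pid=$!
readlink /proc/$pid/fd/0           # shows .../foo.txt
mv "$tmp/foo.txt" "$tmp/foo1.txt"
after=$(readlink /proc/$pid/fd/0)  # now .../foo1.txt: the rename is tracked
echo "$after"
kill $pid
```

The fd symlink reflects the rename because the kernel resolves it from its internal dentry for the open file, not from a stored path string — which is exactly the information lsof reports.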
| How does `lsof` keep track of open file descriptors' filenames? |
1,371,822,060,000 |
I was trying to get SFML working on fedora 24, and I accidentally deleted the usr/include directory in the process. Is there any way to reinstall all the missing files? Or do I have to reinstall the whole OS? I have tried running sudo dnf --exclude=kernel\* reinstall \* and it seemed to fix some of the problem but I am still missing a lot of the files that were in that directory originally. Is there any way to reinstall everything without reinstalling the whole OS?
|
You can make a list of the packages whose include-files are missing by using the "verify" feature of rpm.
Something like this:
#!/bin/sh
rpm -qa|while read name
do
include=$(rpm -ql "$name" |grep -E '^/usr/include/' |wc -l)
[ $include = 0 ] && continue
missing=$(rpm -V "$name" |grep -E '^missing[[:space:]]+/usr/include/' |wc -l)
[ $missing = 0 ] && continue
printf '# missing %d of %d %s\n' $missing $include $name
printf "sudo dnf -y reinstall %s\n" $name
done
It prints a script with comments indicating the number of missing files, as well as commands for reinstalling the broken packages. Here is an example:
# missing 1 of 1 libXcomposite-devel-0.4.4-7.fc23.x86_64
sudo dnf -y reinstall libXcomposite-devel-0.4.4-7.fc23.x86_64
| Accidentally deleted /usr/include. What can I do to reinstall the files that were in that directory? |
1,371,822,060,000 |
I have an old log that stays in (deleted) state, and after applying
> /proc/'pid'/fd/4 the space is not reclaimed.
In fact, the size of the file is zeroed, but the space is still used?
Have I forgotten something ? Do I have to perform unlink of some sort ?
lr-x------ 1 root root 64 Mar 10 16:11 4 -> /var/app/logs/app.log (deleted)
appl 'pid' appl 4r REG 253,2 0 6193157 /var/app/logs/app.log (deleted)
|
In fact the space is reclaimed by the filesystem, but the size of the file is only temporarily reduced to 0 until the next write by the process that still has the file open. At that point the size is increased to the previous size, plus the newly written data, but you now have a sparse file, where the start of the file is full of notional zeroes, which take no space on the disc.
You can see this effect with a simple test. Create a large file that is slowly updated every 10 seconds:
$ { dd count=1k if=/dev/zero; while sleep 10;do echo hi; done; } >/tmp/big &
[2] 1050
$ pid=$!
Check its size and the disc space used:
$ ls -ls /tmp/big
516 -rw-r--r-- 1 meuh users 524516 Aug 15 15:58 /tmp/big
$ du -a /tmp/big
516 /tmp/big
$ df /tmp
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 1966228 2924 1963304 1% /tmp
The file is 524516 bytes, 516 blocks, and the filesystem has used 2924 blocks.
Now use your > command to truncate the file, and immediately check the size:
$ > /proc/$pid/fd/1; ls -ls /tmp/big
0 -rw-r--r-- 1 meuh users 0 Aug 15 15:59 /tmp/big
It is zero. After 10 seconds check again:
$ ls -ls /tmp/big
4 -rw-r--r-- 1 meuh users 524534 Aug 15 15:59 /tmp/big
$ df /tmp
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 1966228 2416 1963812 1% /tmp
As you can see the space has been reclaimed by the disc (from 2924 to 2416 blocks used), but the size of the file is as it was before, plus a bit; the number of blocks it occupies (4, the first number in the ls -ls output) is small, hence the sparseness. lsof -p $pid also shows the offset, not the size.
| Free space not reclaimed after truncating the fd |
1,371,822,060,000 |
Apparently there was a problem on a production server and someone deleted the contents of the /var directory.
This caused several errors with various services, such as the web server, which I have been fixing.
The problem I'm having now is with apt: it won't let me update, remove, or install packages.
For example, I want to reinstall the database server mariadb-server, but it tells me that the package is not installed on the system (this is false, I installed it personally):
root# apt remove mariadb-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'mariadb-server' is not installed, so not removed
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
chkconfig : Depends: perl but it is not going to be installed
Recommends: insserv but it is not going to be installed
libboost-chrono1.50.0 : Depends: libgcc1 (>= 1:4.4.0) but it is not going to be installed
Depends: libstdc++6 (>= 4.4.0) but it is not going to be installed
libboost-system1.50.0 : Depends: libgcc1 (>= 1:4.4.0) but it is not going to be installed
Depends: libstdc++6 (>= 4.4.0) but it is not going to be installed
libboost-thread1.50.0 : Depends: libgcc1 (>= 1:4.4.0) but it is not going to be installed
Depends: libstdc++6 (>= 4.6) but it is not going to be installed
libc6 : Depends: libgcc1 but it is not going to be installed
libprotobuf-lite7 : Depends: libgcc1 (>= 1:4.4.0) but it is not going to be installed
Depends: libstdc++6 (>= 4.4.0) but it is not going to be installed
Depends: zlib1g (>= 1:1.1.4) but it is not going to be installed
oracle-java8-jdk : Depends: libasound2 (>= 1.0.16)
Depends: libgcc1 (>= 1:4.4.0) but it is not going to be installed
Depends: libx11-6 but it is not going to be installed
Depends: libxext6 but it is not going to be installed
Depends: libxi6 but it is not going to be installed
Depends: libxrender1 but it is not going to be installed
Depends: libxtst6 but it is not going to be installed
Recommends: netbase but it is not going to be installed
redis-server : Depends: init-system-helpers (>= 1.18~) but it is not going to be installed
Depends: libjemalloc1 (>= 2.1.1) but it is not going to be installed
Depends: adduser but it is not going to be installed
redis-tools : Depends: libjemalloc1 (>= 2.1.1) but it is not going to be installed
watchdog : Depends: debconf (>= 0.5) but it is not going to be installed or
debconf-2.0
Depends: init-system-helpers (>= 1.18~) but it is not going to be installed
Depends: makedev (>= 2.3.1-24) but it is not going to be installed or
udev but it is not going to be installed
Depends: lsb-base (>= 3.2-14) but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
If I run the command it suggests to resolve the unmet dependencies:
root# apt-get -f install
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
adduser apt apt-utils base-passwd ca-certificates coreutils debconf debconf-i18n debianutils dpkg e2fslibs e2fsprogs gnupg gnupg-curl gpgv ifupdown init-system-helpers initscripts insserv iproute2 isc-dhcp-client isc-dhcp-common
krb5-locales libacl1 libalgorithm-c3-perl libapt-inst1.5 libapt-pkg4.12 libarchive-extract-perl libasound2 libasound2-data libatm1 libattr1 libaudit-common libaudit1 libblkid1 libbz2-1.0 libcgi-fast-perl libcgi-pm-perl
libclass-c3-perl libclass-c3-xs-perl libcomerr2 libcpan-meta-perl libcurl3-gnutls libdata-optlist-perl libdata-section-perl libdb5.3 libdebconfclient0 libdns-export100 libfcgi-perl libffi6 libgcc1 libgcrypt20 libgdbm3 libgmp10
libgnutls-deb0-28 libgpg-error0 libgpm2 libgssapi-krb5-2 libhogweed2 libidn11 libirs-export91 libisc-export95 libisccfg-export90 libjemalloc1 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libldap-2.4-2 liblocale-gettext-perl
liblog-message-perl liblog-message-simple-perl liblzma5 libmodule-build-perl libmodule-pluggable-perl libmodule-signature-perl libmount1 libmro-compat-perl libncurses5 libnettle4 libp11-kit0 libpackage-constants-perl libpam-modules
libpam-modules-bin libpam0g libparams-util-perl libpcre3 libpng12-0 libpod-latex-perl libpod-readme-perl libreadline6 libregexp-common-perl librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db libselinux1 libsemanage-common
libsemanage1 libsepol1 libslang2 libsmartcols1 libsoftware-license-perl libss2 libssh2-1 libssl1.0.0 libstdc++6 libsub-exporter-perl libsub-install-perl libsystemd0 libtasn1-6 libterm-ui-perl libtext-charwidth-perl libtext-iconv-perl
libtext-soundex-perl libtext-template-perl libtext-wrapi18n-perl libtinfo5 libusb-0.1-4 libustr-1.0-1 libuuid1 libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxi6 libxrender1 libxtables10 libxtst6 lsb-base makedev netbase
openssl passwd perl perl-base perl-modules psmisc raspbian-archive-keyring readline-common redis-server redis-tools rename sensible-utils startpar sysv-rc sysvinit-utils tar tzdata util-linux uuid-runtime x11-common zlib1g
Suggested packages:
aptitude synaptic wajig dpkg-dev apt-doc python-apt debconf-doc debconf-utils whiptail dialog gnome-utils libterm-readline-gnu-perl libgtk2-perl libnet-ldap-perl libqtgui4-perl libqtcore4-perl gpart parted fuse2fs e2fsck-static
gnupg-doc libpcsclite1 parcimonie xloadimage imagemagick eog ppp rdnssd net-tools bootchart2 iproute2-doc resolvconf avahi-autoipd libasound2-plugins alsa-utils rng-tools gnutls-bin gpm krb5-doc krb5-user libpam-doc
libsasl2-modules-otp libsasl2-modules-ldap libsasl2-modules-sql libsasl2-modules-gssapi-mit libsasl2-modules-gssapi-heimdal perl-doc make libb-lint-perl libcpanplus-dist-build-perl libcpanplus-perl libfile-checktree-perl
libobject-accessor-perl readline-doc bum bootlogd sash bzip2 ncompress xz-utils tar-scripts dosfstools kbd console-tools util-linux-locales
Recommended packages:
libarchive-tar-perl
The following NEW packages will be installed:
adduser apt apt-utils base-passwd ca-certificates coreutils debconf debconf-i18n debianutils dpkg e2fslibs e2fsprogs gnupg gnupg-curl gpgv ifupdown init-system-helpers initscripts insserv iproute2 isc-dhcp-client isc-dhcp-common
krb5-locales libacl1 libalgorithm-c3-perl libapt-inst1.5 libapt-pkg4.12 libarchive-extract-perl libasound2 libasound2-data libatm1 libattr1 libaudit-common libaudit1 libblkid1 libbz2-1.0 libcgi-fast-perl libcgi-pm-perl
libclass-c3-perl libclass-c3-xs-perl libcomerr2 libcpan-meta-perl libcurl3-gnutls libdata-optlist-perl libdata-section-perl libdb5.3 libdebconfclient0 libdns-export100 libfcgi-perl libffi6 libgcc1 libgcrypt20 libgdbm3 libgmp10
libgnutls-deb0-28 libgpg-error0 libgpm2 libgssapi-krb5-2 libhogweed2 libidn11 libirs-export91 libisc-export95 libisccfg-export90 libjemalloc1 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libldap-2.4-2 liblocale-gettext-perl
liblog-message-perl liblog-message-simple-perl liblzma5 libmodule-build-perl libmodule-pluggable-perl libmodule-signature-perl libmount1 libmro-compat-perl libncurses5 libnettle4 libp11-kit0 libpackage-constants-perl libpam-modules
libpam-modules-bin libpam0g libparams-util-perl libpcre3 libpng12-0 libpod-latex-perl libpod-readme-perl libreadline6 libregexp-common-perl librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db libselinux1 libsemanage-common
libsemanage1 libsepol1 libslang2 libsmartcols1 libsoftware-license-perl libss2 libssh2-1 libssl1.0.0 libstdc++6 libsub-exporter-perl libsub-install-perl libsystemd0 libtasn1-6 libterm-ui-perl libtext-charwidth-perl libtext-iconv-perl
libtext-soundex-perl libtext-template-perl libtext-wrapi18n-perl libtinfo5 libusb-0.1-4 libustr-1.0-1 libuuid1 libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxi6 libxrender1 libxtables10 libxtst6 lsb-base makedev netbase
openssl passwd perl perl-base perl-modules psmisc raspbian-archive-keyring readline-common rename sensible-utils startpar sysv-rc sysvinit-utils tar tzdata util-linux uuid-runtime x11-common zlib1g
The following packages will be upgraded:
redis-server redis-tools
2 upgraded, 153 newly installed, 0 to remove and 3 not upgraded.
11 not fully installed or removed.
Need to get 37.1 MB of archives.
After this operation, 120 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://archive.raspberrypi.org/debian/ jessie/main libasound2-data all 1.0.28-1+rpi3 [65.3 kB]
Get:2 http://archive.raspberrypi.org/debian/ jessie/main libasound2 armhf 1.0.28-1+rpi3 [320 kB]
Get:3 http://mirrordirector.raspbian.org/raspbian/ jessie/main libgcc1 armhf 1:4.9.2-10+deb8u2 [39.5 kB]
Get:4 http://archive.raspberrypi.org/debian/ jessie/main x11-common all 1:7.7+16 [251 kB]
[...]
Get:152 http://mirrordirector.raspbian.org/raspbian/ jessie/main libterm-ui-perl all 0.42-1 [19.1 kB]
Get:153 http://mirrordirector.raspbian.org/raspbian/ jessie/main libtext-soundex-perl armhf 3.4-1+b2 [13.3 kB]
Get:154 http://mirrordirector.raspbian.org/raspbian/ jessie/main psmisc armhf 22.21-2 [117 kB]
Get:155 http://mirrordirector.raspbian.org/raspbian/ jessie/main rename all 0.20-3 [12.4 kB]
Fetched 37.1 MB in 2min 29s (249 kB/s)
Reading changelogs... Done
E: Cannot get debconf version. Is debconf installed?
debconf: apt-extracttemplates failed: No such file or directory
Extracting templates from packages: 19%E: Cannot get debconf version. Is debconf installed?
debconf: apt-extracttemplates failed: No such file or directory
Extracting templates from packages: 38%E: Cannot get debconf version. Is debconf installed?
debconf: apt-extracttemplates failed: No such file or directory
Extracting templates from packages: 58%E: Cannot get debconf version. Is debconf installed?
debconf: apt-extracttemplates failed: No such file or directory
Extracting templates from packages: 77%E: Cannot get debconf version. Is debconf installed?
debconf: apt-extracttemplates failed: No such file or directory
Extracting templates from packages: 96%E: Cannot get debconf version. Is debconf installed?
debconf: apt-extracttemplates failed: No such file or directory
Extracting templates from packages: 100%
dpkg: regarding .../libgcc1_1%3a4.9.2-10+deb8u2_armhf.deb containing libgcc1:armhf, pre-dependency problem:
libgcc1 pre-depends on multiarch-support
multiarch-support is unpacked, but has never been configured.
dpkg: error processing archive /var/cache/apt/archives/libgcc1_1%3a4.9.2-10+deb8u2_armhf.deb (--unpack):
pre-dependency problem - not installing libgcc1:armhf
Errors were encountered while processing:
/var/cache/apt/archives/libgcc1_1%3a4.9.2-10+deb8u2_armhf.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
Here is proof that mariadb-server is installed (the error is due to the removal of /var, so I want to reinstall it):
root# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111 "Connection refused")
As you can see, both apt and mariadb-server throw errors due to missing content in the /var directory. So I'm stuck in a loop and don't know how to proceed to solve this problem.
|
I will be referencing the following article and Debian Wiki post for my answer.
First, on a working Raspbian system, create a debootstrap environment. Following the instructions from the Debian Wiki you would only need to complete the following:
mkdir /debootstrap
debootstrap stable /debootstrap http://deb.debian.org/debian/
Now, you are using Raspbian so you may need to add the correct gpg-key, as pointed out here, and reference the Raspbian Deb mirror: something like http://archive.raspbian.org/raspbian. I am going to include a link to a cross-platform build guide in case you are doing this on an amd64 system instead of a Raspberry Pi.
IMPORTANT NOTE: if you are prompted, DO NOT install grub to the MBR! This might break your existing install!
Next, on the broken system get a list of all installed programs. Initially I suggested using dpkg --get-selections, however as user A.B. points out this will fail: dpkg references various /var directories to work, and with your /var missing this will of course fail. User A.B. points out that you can reference the contents of /usr/share/doc/ and related symlinks. I would start by seeing what is available and try building an installed.txt from the results.
cd /usr/share/doc && for i in *; do echo $i install >> /home/user/installed.txt; done
Compare the results of this installed.txt with what you expect. Another place to check would be /usr/share/man. This directory has manpages, but will only be accurate for packages that install manpages. If you have a known working Raspberry Pi to compare against, and it is configured the same as the broken Raspberry Pi, you can build installed.txt from the output of that Raspberry Pi's dpkg --get-selections.
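If you do have a working host, the two package lists can be diffed with comm — a minimal sketch with illustrative file names (reference.txt is assumed to come from the working Pi's dpkg --get-selections, first column only; broken.txt from ls /usr/share/doc on the broken one):

```shell
# Both lists must be sorted for comm to work correctly
sort -o reference.txt reference.txt
sort -o broken.txt broken.txt
# Lines only in reference.txt: packages the broken host appears to be missing
comm -23 reference.txt broken.txt > missing.txt
```

The -23 flags suppress lines unique to the second file and lines common to both, leaving only packages present on the reference host.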
Copy installed.txt from the host missing /var to the host doing the fixing. scp or rsync will work here, but at this point you do need to confirm that you have a working ssh configuration between the broken host and the host doing the fixing. Without a working ssh configuration between these two hosts, you will be unable to copy over /var. Any ssh issues need to be solved before moving on.
Now, move this file into the /debootstrap environment and prepare to install all the same software inside this chroot.
cp installed.txt /debootstrap/
chroot /debootstrap/
mount -t proc none /proc
dpkg --set-selections < installed.txt
If need be, make sure your sources.list matches. debootstrap by default only includes main.
apt update
apt install dselect
dselect update
apt-get dselect-upgrade
If you were unable to build a good installed.txt
You may need to simply manually install all necessary packages in your debootstrap chroot. Here is where good setup documentation or infrastructure as code comes in handy. Run apt install package1 package2 package3 ... packageN in the chroot to build your /var to as close as possible match the original /var. Instead of installing dselect you should follow these steps:
chroot /debootstrap/
mount -t proc none /proc
apt update
apt install package1 package2 package3 ... packageN
At this point your debootstrap environment should match that of your target, broken host. Here is where you need to use rsync to copy /var over. rsync -a should be enough. As pointed out in my link, you may need to allow PermitRootLogin without-password in your sshd_config for rsync to work.
I will also just directly quote the last steps and considerations Pete Donnell at Alephnull.uk had to run:
...I had to reinstall the mariadb-server packages on the broken server using apt-get install --reinstall mariadb-server mariadb-server-10.1 mariadb-server-core-10.1.
The next step is to restore your user data. This will be specific to the structure that you use, so I can’t help with instructions for that. Once you’ve done that, check the permissions of the files in /var/lib and /var/log against a working server (or perhaps your bootstrap) to check that they are correct. It’s quite likely that the user IDs of the system accounts will be different between the server and the bootstrap environment.
Now you should check the various services that are installed, using service --status-all. Not all of the services should necessarily be enabled, again I recommend comparing against a working server. Try to restart any services that aren't currently running but should be. If all the permissions are correct and the relevant user data (if any) has been restored then the service should start successfully. If it doesn’t, check the systemd status with systemctl status <service-name>, the systemd logs with journalctl -xe and the service’s log files in /var/log. Those should give you enough information to track down any remaining problems.
I highly recommend you spend time comparing the debootstrap environment and your broken host as well if you are fortunate enough to have a second working host. You want to confirm that all the permissions are right and services are running properly. Only once you have done this, I would reboot the host and confirm that everything came back.
Please thoroughly read through every link I have provided before trying any commands. If there are any mistakes, misconceptions, or issues with my post please let me know and I will work to correct them.
Best of Luck!
| Problems with apt in Raspbian where /var content has been removed or corrupted
I started a very long running job (expected to take 6 days to finish), and want to see its output, so I did:
$ nohup ./thejob.sh > out.txt
When I need to see the job's progress, I tail -f the file.
But the out.txt file was growing too large, so I deleted it and created it again:
$ rm out.txt
$ touch out.txt
After this, no output is being sent to the file. I think the job lost its binding to the file. I can see the job is running by using top, but I can't see its progress anymore.
Is there any way to see it again?
|
The old file isn't gone yet, and you can access it if you know the PID of the process writing to it. Go to /proc/<pid>/fd and look at 1, which is its stdout.
This also means, though, that you haven't reclaimed any space from trying to remove the file.
Also, once the process exits, the file will be removed.
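A minimal, self-contained demonstration of this (the while loop stands in for the long-running job; paths are illustrative):

```shell
# Start a process whose stdout is redirected to a file
( while :; do echo tick; sleep 1; done ) > /tmp/out.txt &
pid=$!
rm /tmp/out.txt          # unlink the file while the process still writes to it
ls -l /proc/$pid/fd/1    # the symlink target now ends in "(deleted)"
cat /proc/$pid/fd/1      # the data is still readable through the fd
kill $pid
```

Opening /proc/$pid/fd/1 reaches the unlinked file directly, so you can keep tailing it or even copy its contents out before the process exits.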
| How to see the contents of a file I deleted, but that a process is still writing to |
I hadn't slept much. A bad script was creating a folder under somewhere/~ and I tried to remove it with rm -rf ~.
I hit Ctrl+C quickly and I don't think I've lost many files. I understand they are gone for good. Is there still a way to get a list of the files that were removed?
|
I hit Ctrl+C quickly and I don't think I've lost many files. I understand they are gone for good. Is there still a way to get a list of the files that were removed?
There is no easy way to "list" what you have deleted. You could run extundelete /dev/home_partition(sdx) --restore-all to try to recover your data, or follow this extensive answer for a safer approach to managing your deleted data:
accidental fsck on mounted
TestDisk is your friend on this quest, too.
| Get list of files deleted by rm -rf |
Is there any way to use rsync to delete all the files in the destination that are found in the source? I have 30+ directories in the source and 100+ files in the destination.
I want only those 30+ to be deleted recursively from the destination, and I'm wondering if rsync or any other tool would help me do that.
Source
a/
b/
c/
destination
a/
abc/
xyz/
b/
c/
...
|
I don't think that rsync can do that, but you can make a list of files, modify that list and copy it as a script to the destination.
Assuming that your file names don't contain newlines or single quotes ('), run this on the source machine:
cd basedir
find . -type f | sed 's/^/rm -f '\''/' | sed 's/$/'\''/' > /var/tmp/to_remove
Then copy the to_remove file over to the destination machine, cd to the base directory there, and source it. Files in the list that don't exist will have no effect, and files only present in the destination will not be touched.
If you also want to delete directories, you can use an additional command, but note that this deletes directories that exist in the source and are empty in the destination, regardless of whether files were removed from the directory or not (rmdir only removes empty directories):
find . -depth -type d | sed 's/^/rmdir '\''/' | sed 's/$/'\''/' > /var/tmp/to_remove_dirs
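If your file names may contain quotes or newlines, a NUL-delimited variant of the same idea sidesteps the quoting pitfalls entirely — a sketch, assuming the same source/destination workflow:

```shell
# On the source machine, record file names NUL-terminated
cd basedir
find . -type f -print0 > /var/tmp/to_remove0

# ...copy to_remove0 to the destination, cd to the base directory there, then:
xargs -0 rm -f < /var/tmp/to_remove0
```

xargs -0 reads the NUL-separated list, so no shell quoting is involved at all; nonexistent files are silently skipped thanks to rm -f.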
| Delete the files on the destination which are found in the source using rsync |
I'm interested in accurately removing a git repository in a reasonable time.
But it takes quite a while to do so. Here, I have a small test repo where the .git folder is < 5MiB.
$ du -ac ~/tmp/.git | tail -1
4772 total
$ find ~/tmp/.git -type f | wc -l
991
Using shred's default options, this takes quite long. In the next command I use --force to change permissions and --zero to overwrite with zeros after shredding. The default shredding method is to overwrite with random data three times (-n3).
I also want to remove the files afterwards. According to man shred, --remove=wipesync (the default, when --remove is used) only operates on directories, but this seems to slow me down even when I operate only on files. Compare (each time I reinitialized the git repo):
$ time find ~/tmp/.git -type f | xargs shred --force --zero --remove=wipesync
real 8m18.626s
user 0m0.097s
sys 0m1.113s
$ time find ~/tmp/.git -type f | xargs shred --force --zero --remove=wipe
real 0m45.224s
user 0m0.057s
sys 0m0.473s
$ time find ~/tmp/.git -type f | xargs shred --force --zero -n1 --remove=wipe
real 0m33.605s
user 0m0.030s
sys 0m0.110s
Is there a better way to do it?
EDIT: Yes, encryption is the key. I'm now just adding two more benchmarks using -n0.
time find ~/tmp/.git -type f | xargs shred --force --zero -n0 --remove=wipe
real 0m32.907s
user 0m0.020s
sys 0m0.333s
Using 64 parallel shreds:
time find ~/tmp/.git -type f | parallel -j64 shred --force --zero -n0 --remove=wipe
real 0m3.257s
user 0m1.067s
sys 0m1.043s
|
Forget about shred: it spends a lot of time doing useless things and misses the essential point.
shred wipes files by making multiple passes of overwriting files with random data (a “Gutmann wipe”), because with the disk technologies of 20–30 years ago and some expensive laboratory equipment, it was possible (at least in theory) to recover overwritten data. This is no longer the case with modern disk technologies: overwriting just once with zeroes is just as good — but the idea of multiple random passes stayed around well after it had become obsolete. See https://security.stackexchange.com/questions/10464/why-is-writing-zeros-or-random-data-over-a-hard-drive-multiple-times-better-th
On the other hand, shred utterly fails in wiping sensitive information, because it only wipes the data in the files that it is told to erase. Any data that was stored in previously erased files may still be recoverable by accessing the disk directly instead of via the filesystem. Data from a git tree may not be very easy to reconstruct; nevertheless this is a realistic threat.
To be able to quickly wipe some data, encrypt it. You can use ecryptfs (home directory encryption), or encfs (encryption of a directory tree), or dm-crypt (whole-partition encryption), or any other method. To wipe the data, just wipe the key.
See also How can I be sure that a directory or file is actually deleted?
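The principle can be illustrated at file level with openssl — a sketch, where the paths, the repository directory, and the cipher choice are all illustrative (this is not a substitute for dm-crypt or ecryptfs): once the data exists only in encrypted form, destroying the small key file is an effectively instant wipe.

```shell
# Generate a random 32-byte key and store the repo only in encrypted form
head -c 32 /dev/urandom > /tmp/repo.key
tar -cz repo/ | openssl enc -aes-256-cbc -pbkdf2 \
    -pass file:/tmp/repo.key -out /tmp/repo.tar.gz.enc

# ...later, to "shred" the whole repository: wipe only the 32-byte key
shred -u /tmp/repo.key
```

Shredding 32 bytes takes a fraction of a second no matter how large the repository is, and without the key the ciphertext is useless.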
| How can I shred a git repository, reasonably fast? |
In the process of trying to fix an issue, I accidentally deleted /sbin/sysctl when I was intending to delete /etc/sysctl.conf.
When I run sysctl I get the error that says
The program 'sysctl' is currently not installed. You can install it by typing:
sudo apt-get install procps
When I try both install and upgrade it says procps is already the newest version. I've also tried removing procps and reinstalling it but I get this error:
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
libegl1-mesa : Depends: libgbm1 (>= 7.11~1) but it is not going to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
Is there a way for me to just get the sysctl file from this package? Is there a fix for the error when I try to remove procps so that I can make it work that way?
If it matters, I'm on Linux Mint 17.3 Rosa, which is based on Ubuntu 14.04.
|
You can use the --reinstall flag to apt-get
apt-get --reinstall install procps
| Restore part of a package that was accidentally deleted |
I executed rm -f *.gz about 30 days ago. Is there any way to find out the list of files that were deleted?
|
You can use the debugfs utility, the interactive file system debugger for ext2/ext3/ext4 file systems.
First, run debugfs /dev/sda2 in your terminal (replacing /dev/sda2 with your own partition).
Once in debug mode, you can use the command lsdel to list inodes corresponding with deleted files.
When files are removed in Linux they are only unlinked, but their
inodes (addresses on the disk where the file data actually resides)
are not removed
To get the paths of these deleted files you can use debugfs -R "ncheck 320236" /dev/sda2, replacing the number with your particular inode (and the device with your partition).
Inode Pathname
320236 /path/to/file
From here you can also inspect the contents of deleted files with cat. (NOTE: You can also recover from here if necessary).
Reference here.
As a precaution for the future, you can use inotify-tools; the inotifywait command listens for events happening in a specified directory.
Specifically, if you want to watch for deleted files and folders, use this:
inotifywait -m -r -e delete directory_name
and log its output to a file.
I would also recommend looking at iwatch.
| View list of deleted files |
I just wrote an important text and stored it as a simple text file. Then I accidentally cut out the content (with Ctrl+X) when I just wanted to copy it, and saved the file. Now it is of course empty. Is there any possibility to get the content back?
Any help is very much appreciated; it really took some time to write.
|
Solved it with the help of the accepted answer here: Can overwritten files be recovered?
For larger files that may be in multiple non-contiguous blocks, I do this:
grep -a -b "text in the deleted file" /dev/sda1
13813610612:this is some text in the deleted file
which will give you the offset in bytes of the matching line. Follow this with a series of dd commands, starting with
dd if=/dev/sda1 count=1 skip=$(expr 13813610612 / 512)
You'd also want to read some blocks before and after that block.
I needed to set count to 10 to get my entire file; choose it according to the file size.
| Restore deleted text file content [duplicate] |
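A sketch of turning the byte offset reported by grep -a -b into a dd block range (the device and offset below are the example values from the answer; the same logic works on a partition image file):

```shell
# Placeholder device and byte offset — substitute your own values
dev=/dev/sda1
offset=13813610612
block=$((offset / 512))
# Start a few blocks early so text spilling across block boundaries is caught
start=$((block > 5 ? block - 5 : 0))
dd if="$dev" bs=512 skip=$start count=10 2>/dev/null > recovered.bin
```

recovered.bin then contains roughly 5 KiB surrounding the match; widen count if the file is larger or fragmented.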
I tried a little experiment where I created 2 folders Dir1 and Dir2 inside my Desktop directory, such that Dir1 is parent of Dir2.
/home/username/Desktop/Dir1/Dir2
Then I used cd to set my working directory to /home/username/Desktop/Dir1/Dir2.
Next I used rm -r /home/username/Desktop/Dir1 to remove Dir1.
Now if I use pwd it still shows /home/username/Desktop/Dir1/Dir2, which no longer exists. Also, at this point if I use ls or cd .. it generates an error saying 'Cannot access /home/username/Desktop/Dir1/Dir2: No such file or directory', which is absolutely true, but I was thinking this happened because pwd was not updated after the folder deletion.
The solution is simple as far as I can tell: you can go to the parent directory first and then delete the target directory.
I want to know if there is some specific reason for pwd not getting updated, whether my solution is correct, and/or whether I have just found a bug.
|
Actually, Dir2 does exist, but the name Dir2 does not. Confused? :) The shell's current directory is still the directory referred to by the name Dir2, and this keeps the directory around. This is analogous to anonymous files. Normally, when a file's link count goes to zero, the file is deleted and the inode freed. However, if a process still has the file open, the kernel does not delete the file until the process closes it, either explicitly or implicitly by exiting. In Dir2's case, the shell still has the directory "open" as long as it doesn't change its current directory.
What is gone is the name Dir1 in the Desktop directory and the whole hierarchy of names below it, including the . and .. entries. The directory formerly known as Dir1 is also gone (assuming no other process has it as its current directory). Files and directories at the inode level do not form a hierarchy, i.e. there are no links from inodes to parent, child or sibling entries. The hierarchy is built up separately by directory entries, which are essentially (name, inode) pairs, pointing to files and other directories.
After this lengthy introduction we can rephrase your original question so that it reads: "why does the shell not change its current directory to something else, when the directory entry Dir2 is removed from Dir1?" Well, one reason is that the shell doesn't even know this. Some other process has run the rm program and removed the directories, but there is no mechanism by which the shell would be told about this. Second, which directory would the shell choose as its new current directory? The directory is changed using the chdir system call, which takes a string containing the new directory as argument. The shell could try a chdir(".."), but as we saw above, we already destroyed the .. entry! Third, why should the shell change the current directory? It has no reason to do so, it is comfortable where it sits, and it is not in the habit of magically change directories without being explicitly told to do so.
Granted, the situation is kind of pathological, but it is up to the user to avoid it.
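The whole situation can be reproduced in a scratch location — a short sketch:

```shell
mkdir -p /tmp/Dir1/Dir2
cd /tmp/Dir1/Dir2
rm -r /tmp/Dir1
pwd          # still prints /tmp/Dir1/Dir2 — the shell's remembered name
cd ..        # fails: the ".." entry was destroyed along with Dir1
cd /tmp      # recovery: chdir to an absolute path that still exists
```

Note that pwd keeps printing the old name because the shell caches it; only an explicit cd to a surviving absolute path gets you out.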
| Why doesn't pwd update after directory removal?
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.