| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,371,822,060,000 |
Today I noticed that I lost approximately 1 TB of very old movies from my collection. I have no idea how it happened, but Munin shows when it happened. I'm pretty sure it was my fault (I was awake at that hour, yes, but I am not 100% sure). How can I prevent something like that from happening again? How can I prevent myself (or a program/script) from deleting more than x GB of data? Any suggestion is welcome.
|
As someone who has successfully removed the /Windows subdirectory from a running Windows system, and deleted the contents of /bin on a running Linux box (it didn't die!)... I know the feeling. (I still don't know how I did the Windows thing; it shouldn't be possible, since Windows locks files in use.)
Several options:
Remove write permission from the containing directory: chmod a-w /my_movie_dir
Use chattr and lsattr to set/check the immutable flag: chattr +i "Earth vs The Flying Saucers.m4v"
Mount that drive/partition read-only by default (see fstab), so that you have to run mount -o rw,remount /my_movie_dir before you can do bad things.
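As a rough sketch of the first option (the directory name here is hypothetical), removing the write bit on the containing directory blocks deletions by ordinary users, since unlinking a file requires write permission on its directory:

```shell
# create a throwaway "movie" directory and protect it (illustrative paths)
dir=$(mktemp -d)
chmod 755 "$dir"
touch "$dir/movie.m4v"

chmod a-w "$dir"            # drop write permission for everyone
stat -c %a "$dir"           # write bits are now cleared (r-xr-xr-x)

# rm "$dir/movie.m4v" now fails for non-root users with "Permission denied";
# restore write permission before intentionally deleting or adding files:
chmod u+w "$dir"
```

Note that root bypasses ordinary permission checks, which is why the chattr immutable flag (which even root must clear first) is the stronger option.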
| How to prevent massive file delete on Debian |
1,371,822,060,000 |
I accidentally moved an entire directory of ~100GB to trash. I was trying to place it in bookmarks but dragged it into trash instead. It's there in the trash, but when I try to restore it I run out of space on the disk.
Prior to deletion I had less than 50GB free on the disk; to restore the normal way I would need about 68GB more free. That is, if I restore, I have to delete every file from trash immediately after restoring it so I can revert to the initial state. I tried rsync -av --remove-source-files /Trash/file /Dest, but it also doesn't work.
Any suggestions to solve the problem?
I use MX17 beta 2, based on Debian stable. The disk is NTFS-formatted.
|
If you are moving within the same partition, then
mv /source/* /dest/
should work without creating a copy or consuming more space.
Alternatively, do the same exercise with /dest/ on an external drive or partition, then copy the files back once you have cleared space in the original location.
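A quick way to convince yourself that a same-filesystem mv does not copy any data is to check that the file's inode number is unchanged after the move (a sketch with throwaway files):

```shell
dir=$(mktemp -d)
echo "big movie data" > "$dir/file"
before=$(stat -c %i "$dir/file")     # inode number before the move

mkdir "$dir/dest"
mv "$dir/file" "$dir/dest/file"      # same filesystem: only the directory entry moves
after=$(stat -c %i "$dir/dest/file")

[ "$before" = "$after" ] && echo "same inode: no data was copied"
```

This is why an in-partition mv needs essentially no free space, while restoring across filesystems needs room for a full copy first.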
| Accidentally trashed large file |
1,371,822,060,000 |
I accidentally started an rm -rf on a large directory that I was working in. The directory contains, among other things, a data directory containing a number of subdirectories that each contain thousands of text files. Essentially it looks like this
$ tree data
data
├── collection0
│ ├── input
│ │ ├── file0.txt
│ │ ├── file1.txt
│ │ ├── ...
│ │ └── file9999.txt
│ └── output
│ ├── file0.txt
│ ├── file1.txt
│ ├── ...
│ └── file9999.txt
├── ...
└── collection99
├── input
│ ├── file0.txt
│ ├── file1.txt
│ ├── ...
│ └── file9999.txt
└── output
├── file0.txt
├── file1.txt
├── ...
└── file9999.txt
I was able to interrupt the rm -rf process pretty quickly, but of course in the half-second or so of execution time a number of files in other subdirectories were deleted.
My question is: is there a way to ascertain with 100% certainty whether a given subdirectory lost any files during this time? It seems as though the Modify time on directories that lost files was updated to the time the files were deleted, and using this method I believe no files in the data subdirectories were deleted (assume 2021-09-08 is the date of the rm -rf event):
$ find data -mindepth 2 -maxdepth 2 -type d -exec stat {} -c '%n %y' \;
data/collection0/input 2021-08-28 05:45:49.624228368 -0400
data/collection0/output 2021-08-28 05:45:49.624228368 -0400
...
data/collection99/input 2021-08-29 04:55:38.772912003 -0400
data/collection99/output 2021-08-29 04:55:38.772912003 -0400
$ find data -mindepth 2 -maxdepth 2 -type d -exec stat {} -c '%n %y' \; | grep 2021-09-08
$
Is this a reliable method?
|
The Linux man page for stat(2) says that:
The field st_mtime is changed by file modifications, for example, by mknod(2), truncate(2), utime(2), and write(2) (of more than zero bytes). Moreover, st_mtime of a directory is changed
by the creation or deletion of files in that directory. The st_mtime field is not changed for changes in owner, group, hard link count, or mode.
The field st_ctime is changed by writing or by setting inode information (i.e., owner, group, link count, mode, etc.).
So, yes, you should be able to rely on the modification time being updated if the rm touched each directory. Provided, of course, that it wasn't manually reset afterwards; but in that case the change timestamp (ctime) would be updated. If I understand the man page correctly, the ctime also updates any time the mtime does, so it should be enough to look at just that one.
Also note that you can only use the mtime to prove the negative. If the timestamp was updated, there's no way to know whether that happened because a file was removed, another was created, or the timestamp was manually modified.
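A quick sketch of the behaviour the man page describes: creating or deleting an entry in a directory bumps that directory's mtime, which is the property the question relies on.

```shell
dir=$(mktemp -d)
m1=$(stat -c %Y "$dir")        # directory mtime as epoch seconds

sleep 1                        # ensure a visible timestamp difference
touch "$dir/newfile"           # creating an entry modifies the directory
m2=$(stat -c %Y "$dir")
[ "$m2" -gt "$m1" ] && echo "directory mtime advanced on create"

sleep 1
rm "$dir/newfile"              # deleting an entry modifies it again
m3=$(stat -c %Y "$dir")
[ "$m3" -gt "$m2" ] && echo "and again on delete"
```

Note this only tracks the directory's own entries; changes deeper in the tree do not propagate up, which is why the question checks each subdirectory individually.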
| Ensuring no files were deleted from directory at a certain time |
1,515,038,215,000 |
Is there a way to cd out of a directory which has just been deleted (go up one level into the upper folder which still exists)?
It often happens to me that I have a console opened for a folder, and then I delete the folder with my temporary test data and create another one.
However, both cd .. and cd $(pwd)/.. only get me to the trash bin, and not to the upper directory when I try to leave the deleted folder.
So, current situation is:
$ mkdir -p /home/me/test/p1
$ cd /home/me/test/p1
now I delete the folder p1
$ cd ..
me:~/.local/share/Trash/files$ ...
I'm now searching for a way to get into /home/me/test/ and not into the Trash bin. Is there such a command?
|
The PWD variable holds the current path.
To go up one level:
cd $(dirname $PWD)
will expand to
cd $(dirname /home/me/foo/bar/baz/deleteddirectory)
which expands to
cd /home/me/foo/bar/baz/
This assumes you deleted only one level of directory.
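If more than one level was deleted, a small helper function (hypothetical name cdup, not a standard command) can climb to the nearest ancestor that still exists:

```shell
# climb to the nearest existing ancestor of the (possibly deleted) cwd
cdup() {
    d=$PWD
    while [ ! -d "$d" ]; do
        d=$(dirname "$d")      # strip one path component at a time
    done
    cd "$d" || return
}
```

After deleting /home/me/test/p1 while inside it, running cdup would land you in /home/me/test (or whatever ancestor still exists).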
| cd out of deleted folder |
1,515,038,215,000 |
The /usr/ directory accidentally got deleted on CentOS 8. For recovering CentOS, I found this link, which requires a live CentOS 8 image installed on a USB drive; however, CentOS 8 doesn't have a live ISO release, as per this discussion.
On MS Windows, I get errors when trying to write the DVD ISO to a USB drive using Rufus and also Etcher. Can you point me to how to recover the data in CentOS?
|
Re-installation: all your installed programs rely on data in /usr, so you can't simply reinstall from the broken system itself. Your idea of using a live CD/DVD goes in this direction: you use it to fix the problem, which here would mean reinstalling all programs, but that is unlikely to work cleanly. Since the reinstall would be invoked within a chroot, it doesn't matter that the live system has working copies of its own; you could spend a week finding which directories to bind-mount...
Stick to making a backup with a live CD like Knoppix and a USB drive, and then reinstall.
KNOPPIX
| Accidentally deleted /usr/ directory in CentOS 8 |
1,515,038,215,000 |
Can we delete the /bin folder in the /usr directory in Linux?
If yes, then what are its consequences?
|
Can we delete the /bin folder in the /usr directory in Linux?
The privileged user (root) can delete the /bin folder.
What are its consequences?
An answer from Linux Filesystem Hierarchy:
The bin directory contains several useful commands that are of use to
both the system administrator as well as non-privileged users. It usually
contains the shells like bash, csh, etc.... and commonly used commands
like cp, mv, rm, cat, ls. For this reason and in contrast to /usr/bin,
the binaries in this directory are considered to be essential. The reason
for this is that it contains essential system programs that must be
available even if only the partition containing / is mounted. This
situation may arise should you need to repair other partitions but have
no access to shared directories (ie. you are in single user mode and
hence have no network access). It also contains programs which boot
scripts may depend on.
| Can anyone delete the '/bin' folder in linux? |
1,515,038,215,000 |
I have a regular directory than contains directly underneath it several btrfs snapshots. Is it safe to do an rm -rf on the parent directory, or do I need to first do a btrfs subvolume delete SUBVOL on each of the snapshots before removing the parent directory?
|
rm -rf is not unsafe per se, so go ahead and run it. However, it won't completely work. For some reason, an empty Btrfs subvolume cannot be removed with the rmdir(2) system call.
rm -rf will remove all of the contents of all of the subvolumes (regular files, etc...) but the empty subvolumes themselves as well as the parent directories of all those subvolumes will remain. You will have to delete those with btrfs subvolume delete and then run rm -rf again to take care of all the now-empty non-subvolume directories that couldn't be deleted earlier.
You probably know that deleting a subvolume with btrfs subvolume delete is much faster than deleting all its contents. So if you know ahead of time that the directory tree you are about to rm -rf has subvolumes in it, you can save yourself some work and time by btrfs subvolume delete'ing them first, then running rm -rf (which will work completely) after that.
| How to safely delete a regular directory that contains several btrfs snapshots inside it? |
1,515,038,215,000 |
I accidentally deleted the dir, /path/to/dir, and all its contents from an application.
However, I still have a terminal window open, still cd-ed in /path/to/dir!
Q: Is there any way to recursively recover /path/to/dir?
Note that lsof gives me this:
$ lsof | egrep '/path/to'
bash 3113 hs cwd DIR 252,0 4096 42207179 /path/to
bash 3487 hs cwd DIR 252,0 0 42207253 /path/to/dir (deleted)
Further, if I do this...
$ ls /proc/3487/fd/
0 1 2 255
$ cd /tmp
$ dd if=/proc/3487/fd/255 of=recovered.dir bs=1M
... the dd command just sits there doing nothing, with recovered.dir's size not growing. I was assuming here that the process 3487 has the directory /path/to/dir open at the file-descriptor 255, and so, if I dd or cat it, I would be able to recover the entire tree /path/to/dir.
There are plenty of articles on the web on how to use lsof to recover deleted regular files, but none for recovering deleted directories.
Would greatly appreciate a quick response!!
|
Unfortunately you cannot do what you seek.
While the directory may be held open, the files that reside within the directory are not part of the directory itself. The directory simply stores the file names. In addition to this, with the files having been deleted, the directory has already been modified to remove those files.
In short, unless each individual file is held open, they cannot be recovered in this manner.
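For completeness, here is the sketch that does work for a regular file that is still held open by some process, using /proc on Linux (paths are illustrative; the shell itself plays the role of the process holding the file open):

```shell
tmp=$(mktemp)
echo "important data" > "$tmp"

exec 3< "$tmp"                 # some process (here: this shell) holds it open
rm "$tmp"                      # file is now deleted, but its data still exists

# /proc/<pid>/fd/<fd> still points at the open inode, so its contents
# can be copied back out before the last reference is closed
cp "/proc/$$/fd/3" /tmp/recovered.txt
exec 3<&-                      # release the reference

cat /tmp/recovered.txt         # prints: important data
```

This is exactly what fails for a directory: even though fd 255 references the deleted directory inode, the files it once listed are separate inodes with no remaining references.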
| Linux: Recovering a deleted dir that another terminal is still cd'ed into |
1,515,038,215,000 |
I applied the following faulty sed command to my nginx configuration:
sed 's/php7.2-fpm/php7.4-fpm/g' /etc/nginx/sites-enabled/default > /etc/nginx/sites-enabled/default
And the entire file is gone. I have no backups.
Is there a way to restore the original file? Re-configuring everything will take me days.
|
Exploring the running Nginx process
If Nginx is still running (don't restart it!), you can ask it to dump its configuration. I have no experience with Nginx, so this is just something I read on the web, I don't know how much information you can recover this way or what might go wrong if you try it.
/usr/sbin/nginx -c /some/other/config -T
On older versions, you can try to dump the configuration from the running process the hard way. I don't know how hard this might be or whether there is a hope of reading the configuration file after the server has been running for a while.
You may be able to dump the configuration information in its internal binary representation even if the original text of the configuration file is no longer available. You'd at least have the configuration, but not the comments.
Looking for deleted files
Deleted content remains on the disk until it's overwritten. But finding it can be very difficult. Disk blocks with deleted content immediately become available for reuse, and there's no particular reason why long-deleted blocks would be overwritten before freshly deleted blocks. So don't get your hopes up.
Another reason not to get your hopes up is that you're likely to find many old copies of the file. If the file is larger than a filesystem block (often 4kB but it depends on the filesystem and configuration), it might be difficult or impossible to piece the parts of a single version together. But there is a small chance that you can find something.
Make sure you don't write to the disk anymore. Ideally, you should mount the filesystem read-only. The command mount -o remount,ro / does that, but it only works if there are no files open for writing, so for example it won't work if /var/log is on the same filesystem as /etc. You may want to shut down the logging service to avoid logs overwriting the freshly deleted file. You may have to force the issue. If you decide not to force it, be very careful to avoid writing as much as possible. If you need to install new software, make sure to write to some other filesystem. If you've recovered some things, write them to some other filesystem.
Recovering files is hard. It's an act of desperation. If you really need to do this, I suggest starting with the Arch Wiki guide (the Arch wiki tends to have good information even if you aren't running Arch Linux).
Preventing similar incidents in the future
Make backups.
Keep configuration files under version control. I use etckeeper for /etc on all the Linux machines I maintain. Activate the daily autocommit if you aren't disciplined about committing. In addition to storing copies of old versions, it gives you the opportunity to document why you made a change.
| Restore deleted version of nginx configuration |
1,515,038,215,000 |
I had an issue with /var running low on space despite it appearing to have only 2 gigs of files allocated on the 5 gig partition. I determined the issue was that /var/log/messages had been deleted but was still open by rsyslog, and was at 2.88 gigs.
I was able to resolve the issue for now by restarting rsyslog, thus releasing the 2.8 gig file to be properly deleted. However, I want to know how it got into this state to begin with. Shouldn't rsyslog be automatically rotating files to prevent the log growing indefinitely? Is there something I can do to prevent this from happening again in the future?
|
The way this is supposed to work on a lot of distros is through logrotate, which is called from cron or a systemd timer every day. Logrotate looks for its configuration, e.g., in /etc/logrotate.d. As an example, on Debian you'd find /etc/logrotate.d/rsyslog, which rotates /var/log/messages (and a lot of other logfiles), here is an excerpt:
/var/log/messages
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
postrotate
/usr/lib/rsyslog/rsyslog-rotate
endscript
}
After it rotates the files, it tells rsyslog to close the old log file (which is what you did by restarting rsyslog) by running /usr/lib/rsyslog/rsyslog-rotate, which sends a SIGHUP to rsyslogd. The daemon closes its files when it gets SIGHUP, without restarting; see the "Signals" section of the rsyslogd(8) man page.
| /var/log/messages growing indefinitely because rsyslog won't let it be deleted |
1,515,038,215,000 |
I am looking for options for recovering deleted files (I don't recall whether corruption was involved).
The filesystem holding the corrupted files is VFAT, on a USB drive, though I run a Linux system (but can use Windows if necessary). The original files were created using Windows.
The corrupted files I have found (so far) are images from a scanner, scanned using the "pictures" setting. Many images are not corrupted, but some are: part of the image is present, but most of it is covered in a grey colour which looks like a piece of dark grey paper, but it is definitely not.
|
I suspect the comment you’re referring to is this one:
See PhotoRec, it fits your requirements and is available packaged in most Linux distributions. For more advanced recovery, see also Foremost.
These tools are file recovery tools, designed to retrieve deleted files; they won't help you fix corrupted file contents. The corruption you're describing sounds like typical JPEG bitstream corruption, and you'd be better off looking for a bitstream repair tool. I'm not aware of any such tools for Unix-style platforms other than macOS, but a web search finds a few tools for Windows or macOS (I'm not linking any because I haven't tried any).
| Recovery software for corrupted files when running a linux system? |
1,515,038,215,000 |
The Scrub utility on Linux can accept different methods of scrubbing. These allow for different types and orders of 'passes'. For example, the 4-pass DoD 5220.22-M section 8-306 procedure is a 4-pass method where the passes are in order of
Random
0x00
0xff
Verify
What is the scope of a pass? Does each pass write to the entire file / drive before beginning the next pass, or is the target for scrubbing divided first into blocks, and the whole 4-pass process is performed on each block before moving to the next one?
|
The scope of a pass is one full sweep of that pattern, from the start to the end of the object being destroyed; scrub then starts another round with the next pattern in that pattern group/method.
Although this is not explicitly stated in the docs (and I could not find any trace of parallel pattern processing in the source code), on a 379MB file you can see that it runs each pattern in sequence. Using the dod pattern group as an example:
[root@host ~]# scrub -p dod file
scrub: using DoD 5220.22-M patterns
scrub: padding file with 744 bytes to fill last fs block
scrub: scrubbing file 398323712 bytes (~379MB)
scrub: 0x00 |................................................|
scrub: 0xff |................
[root@host ~]# scrub -p dod file
scrub: using DoD 5220.22-M patterns
scrub: padding file with 744 bytes to fill last fs block
scrub: scrubbing file 398323712 bytes (~379MB)
scrub: 0x00 |................................................|
scrub: 0xff |................................................|
scrub: random |.........................
[root@host ~]# scrub -p dod file
scrub: using DoD 5220.22-M patterns
scrub: padding file with 744 bytes to fill last fs block
scrub: scrubbing file 398323712 bytes (~379MB)
scrub: 0x00 |................................................|
scrub: 0xff |................................................|
scrub: random |................................................|
scrub: 0x00 |................................................|
scrub: verify |................................................|
I think it's safe to conclude that scrub applies each pattern, one after another, over the whole object being destroyed.
| What constitutes a Scrub "Pass"? |
1,515,038,215,000 |
I did Move to Trash > Empty Trash. However, .fileNames and .directoryNames stay in the filesystem: hidden .files/.directories which have exactly the same names as my deleted files/directories.
For instance, suppose you choose to share the folder where you did the removals: all your removals will be shared too. This is not what I want, because it can lead to severe mistakes.
I would like to do this as a one-liner, probably via gsettings. That would help maintenance and future installations.
Malyy's answer covers the GUI approach of enabling Include a delete command that bypasses Trash, but I really need a one-liner.
System: Ubuntu 16.04
|
From the CLI, as requested:
gsettings set org.gnome.nautilus.preferences enable-delete true
Explanation, as requested by the OP:
Nautilus is part of GNOME, so it stores its preferences under org.gnome.nautilus.preferences. From there, I just had to look through the list of keys.
Also, you can get all Nautilus-related settings by gsettings list-recursively | grep nautilus.
| How to Make MoveToTrash mean Delete Permanently by One-liner? |
1,515,038,215,000 |
Is there a way to restore a folder, or rather the files and folders it contained, after it was replaced by an empty one with the same name?
FileSystem: Ext4
OS: openSUSE 42.1
If it is possible, what is the easiest way?
Can I do this from the running system itself?
|
Don't do it from the running system. You should boot a live CD or USB, mount the hard drive read-only, and then try extundelete; or don't mount anything and try foremost or photorec. The more you use the system, the less likely it is that you will recover your data. Good luck.
| Recover files inside a folder which was replaced by an empty one. (openSUSE, Ext4) |
1,515,038,215,000 |
I am working on a server with others and I want to completely delete files which were previously deleted using rm. I cannot format the server disk as others share it with me. Is it effective to shred the parent location where I used to store the files?
|
In the future, you would do better to overwrite or shred the file contents before removing the file.
On a personal machine, you could overwrite free space by copying /dev/zero to new files until the hard disk is full, then deleting those files, perhaps with some difficulty, after booting from a different device or in single-user mode.
On a shared server, filling the disk might affect the other users, and dealing with the aftermath might be more awkward. Also, completely filling the disk might not be possible for an unprivileged user, since an ext{2,3,4} filesystem usually reserves some free space for the root/admin user, and SSDs may not actually erase the underlying flash blocks.
Google "shred", "overwrite", and perhaps "forensic countermeasures" for many endless discussions on how to securely erase files. For most purposes, overwriting once is enough, but some people view physical shredding followed by incineration as insufficiently cautious.
Is it effective to shred the parent location where I used to store the files?
If you mean the parent directory: no. You could move any other files to a different directory and then remove the old parent directory, but that does nothing to the blocks the deleted files once occupied.
| Is there a way to completely delete a previously deleted file without formatting the whole drive? |
1,515,038,215,000 |
I'm trying to find some files on my NAS and then delete the files/folders that are more than 5 days old. I can use the find command as:
find /volume1/docker/UV/videos -type f -print
and then I get all the files from the videos folder and subdirectories.
But if I try:
find /volume1/docker/UV/videos -mtime +2 -print
then nothing happens, and I know there are files more than 2 days old; the same happens if I change the 2 to 1.
So I can't get a list of files and subdirectories where there are files more than 2 days old. What I want is to find the files/folders and then change -print to -delete, but I'm using -print first so I know what the results will be; at the moment, the result is nothing.
Can someone help/guide me?
|
The issue is that the -atime, -ctime and -mtime options do some unexpected rounding. The explanation is in man find under -atime:
-atime n
File was last accessed n *24 hours ago.
When find figures out how many 24-hour periods ago
the file was last accessed, any fractional part is ignored,
so to match -atime +1,
a file has to have been accessed at least two days ago.
There are alternatives in modern versions of find:
-mmin (and variants) round to the minute, not to the day.
So -mmin "+$(( 60*24*2 ))" works based on current time of day 2 days ago.
-daystart measures times based on 00:00:00 of the current day.
That is a fairly blunt instrument,
and is sensitive to the order of options on the command line.
If you are on a system without these recent extensions to find
(e.g., Solaris or AIX),
or don't want all your housekeeping to have an enforced midnight cutoff,
and don't want a different cutoff time every time you execute,
using a reference file is a good alternative.
Paul--) touch -t 202009020301 FileToRetain
Paul--) touch -t 202009020300 FileOnCusp
Paul--) touch -t 202009020259 FileToDelete
Paul--)
Paul--) touch /tmp/myRefFile -t $( date -d '4 days ago' '+%Y%m%d0300' )
Paul--) ls -ltr /tmp/myRefFile .
-rw-r--r-- 1 paul paul 0 Sep 2 03:00 /tmp/myRefFile
.:
total 0
-rw-r--r-- 1 paul paul 0 Sep 2 02:59 FileToDelete
-rw-r--r-- 1 paul paul 0 Sep 2 03:00 FileOnCusp
-rw-r--r-- 1 paul paul 0 Sep 2 03:01 FileToRetain
Paul--) find . -type f ! -newer /tmp/myRefFile -delete
Paul--) ls -ltr /tmp/myRefFile .
-rw-r--r-- 1 paul paul 0 Sep 2 03:00 /tmp/myRefFile
.:
total 0
-rw-r--r-- 1 paul paul 0 Sep 2 03:01 FileToRetain
Paul--)
The reference file should not be in the directory being cleansed (it might delete itself at an interesting moment), and in production you should probably use mktemp or put the PID as part of the name to avoid problems with concurrent uses, and rm it afterwards.
Of course, if you do not have modern find, then you probably don't have date -d either. My housekeeping was on a monthly basis, so on Solaris one has to script month/year overflow but not month-end or leap-year – just set dd=01. But aligning to specific day in week or month is something else that find by itself cannot do cleanly: date -d 'last Sunday 06:00' is helpful as a reference file.
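Putting the -mmin alternative together for the original task (the NAS path is taken from the question; always verify with -print before switching to -delete):

```shell
# list files whose mtime is more than 5 full days old, measured to the minute
find /volume1/docker/UV/videos -type f -mmin +$((60 * 24 * 5)) -print

# once the listing looks right, the same expression can delete:
# find /volume1/docker/UV/videos -type f -mmin +$((60 * 24 * 5)) -delete
```

Because -mmin rounds to the minute rather than whole 24-hour periods, this avoids the surprising -mtime behaviour where +1 means "at least two days old".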
| Find command in linux and delete, but test with print, give no results |
1,515,038,215,000 |
In a testing environment I executed:
rm -rf /var/www/html/${domain} /etc/nginx/sites-available/${domain} /etc/nginx/sites-enabled/${domain}
The result was that html, sites-available and sites-enabled directories were deleted with all their content.
What's wrong with that command? Given that I supplied full paths, I don't see what could cause this; I assume it has nothing to do with the recursiveness of -r.
|
If the value of domain was empty or undefined, you just ran, e.g., rm -rf /var/www/html.
You can check explicitly that domain is defined:
if [ -z "$domain" ]; then
echo "ERROR: domain is undefined" >&2
exit 1
fi
Also, using set -u in your script can prevent this sort of problem. This causes the use of an undefined variable to result in an error:
$ set -u
$ echo $undefined_variable
bash: undefined_variable: unbound variable
| rm -rf destroyed directories of the files set to be deleted (multiple arguments) |
1,515,038,215,000 |
I have an AWS linux instance with a root drive of 32GB.
I have an EBS mount of 20GB
On my root drive I ran out of space, so I cleared out some files. However, my root drive is still full. I can't find out why, because when I look at the sizes of the directories using du and ncdu, they show the drive should have a lot of space.
df
I get the following results
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 32894812 31946092 848472 98% /
devtmpfs 2017224 60 2017164 1% /dev
tmpfs 2025364 0 2025364 0% /dev/shm
/dev/xvdh 20511356 4459276 15003504 23% /mnt/ebs
My /dev/xvda1 is still full
After some research I installed a great tool ncdu to display disk space and the results are:
ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help
--- / -------------------------------------
4.2GiB [##########] /mnt
1.5GiB [### ] /var
1.2GiB [## ] /usr
684.9MiB [# ] /opt
464.3MiB [# ] /home
141.7MiB [ ] /lib
53.5MiB [ ] /boot
21.2MiB [ ] /lib64
10.8MiB [ ] /sbin
8.1MiB [ ] /bin
7.1MiB [ ] /etc
2.7MiB [ ] /tmp
60.0KiB [ ] /dev
48.0KiB [ ] /root
e 16.0KiB [ ] /lost+found
e 4.0KiB [ ] /srv
e 4.0KiB [ ] /selinux
e 4.0KiB [ ] /media
e 4.0KiB [ ] /local
. 0.0 B [ ] /proc
0.0 B [ ] /sys
0.0 B [ ] .autofsck
If I du -h my total is
8.3G /
So why would my disk be 98% full when it clearly has a lot of space? Am I missing something to do with the mounts, and is there any other tool I can run to find out why it is so full?
|
What did you delete? If you remove a file that is still in use by a running process (e.g., a daemon), that disk space is only released when the process is shut down/restarted.
For example, if you removed current Apache log files, the space will still be in use until you restart Apache. Similarly for system logs (those in /var/log).
You can either:
Restart the process(es) in question (e.g. Apache, syslogd, etc).
Restart your system.
Once you do one of the above, you should see more available space.
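The effect is easy to reproduce: a deleted file whose descriptor is still open keeps working, and keeps its blocks, until the descriptor is closed. A minimal sketch using the current shell as the "daemon":

```shell
logfile=$(mktemp)
exec 3>> "$logfile"            # the shell now holds the file open, like a daemon would
rm "$logfile"                  # directory entry gone; df still counts the blocks

echo "still logging" >&3       # writes continue to succeed
ls -l "/proc/$$/fd/3"          # the link target ends in "(deleted)" on Linux

exec 3>&-                      # closing the fd is what finally frees the space
```

This is also why du and df disagree in such cases: du walks directory entries, while df reports allocated blocks, including those of deleted-but-open files. lsof | grep deleted (or ls -l /proc/*/fd) is the usual way to find the culprit process.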
| Disk still full after deleting some files |
1,515,038,215,000 |
The wipe tool has the option -k Keep files, i.e. do not remove() them after overwriting.
What does it mean not to remove a file that was... well... wiped?
|
The manpage describes -k thus:
Keep files: do not unlink the files after they have been overwritten. Useful if you want to wipe a device,
while keeping the device special file. This implies -F.
The main use-case for -k is
wipe -k /dev/sda
which will overwrite all the contents of the drive, without removing the device node.
If you use it on a “standard” file, it will wipe the contents of the file, but leave the file itself (its name, and its presence in the parent directory) alone.
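GNU shred behaves the same way by default on a regular file, so it makes a convenient stand-in for a sketch: the contents are overwritten but the file itself remains unless you also pass -u (shred's equivalent of wipe without -k).

```shell
f=$(mktemp)
echo "secret" > "$f"

shred -n 1 "$f"                # one overwrite pass; the file is NOT unlinked

[ -e "$f" ] && echo "file still exists"
grep -q secret "$f" || echo "original contents are gone"

# shred -n 1 -u "$f"           # -u would additionally remove the file
```

The same keep-the-node behaviour is what makes wipe -k /dev/sda safe for the device file while destroying the data behind it.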
| What does it mean to overwrite a file but keep it? |
1,515,038,215,000 |
I am currently working on HP-UX and have a cron job which runs every morning and deletes files older than a certain number of days, based on a retention policy. Recently I added a subdirectory at the path from which files were getting deleted, and I want a different policy for files residing under this subdirectory.
Currently my command is as below:
find /directory/backup -mtime +1 -name "FileName*.EXT" -exec rm {} \;
When this command runs it deletes all the files under the backup directory as well as any matching files inside the subdirectory of backup. That is, there is another directory called backup_db inside the backup directory:
pwd
/directory/backup/backup_db
I want to restrict deletion to the backup directory only, not the subdirectory. How can I achieve that? Any help is highly appreciated.
I tried man rm but didn't find anything that could help.
|
With your command line, the find command will produce a series of individual rm commands and execute them. Each rm command will delete one file, specified with a full pathname. So it is not the rm you need to restrict, but the find.
The syntax to do this can be a bit tricky, for historical reasons.
You could do it like this:
find /directory/backup ! -path /directory/backup -prune -mtime +1 -type f -name "FileName*.EXT" -exec rm {} +
The find $DIR ! -path $DIR -prune <desired_conditions> [desired_action] form is the POSIX-compatible way to restrict find to a single directory. But it will still list the names of sub-directories of that particular directory, unless you add further conditions.
Using -type f will restrict find from attempting to generate rm commands for sub-directories in the backup directory, which would just create useless error messages: to delete a directory, you'd need a recursive rm or rmdir, and I understood that's something you don't want to do in this case.
Replacing the semicolon at the end of the command specification for -exec with a plus sign will improve performance if there are a lot of files to delete: instead of generating one rm command per file, it will generate as few commands as possible, stuffing as many pathname arguments onto each rm command as the command line length allows. This drastically reduces the need to fork() new processes.
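Where GNU find is available (not the case on stock HP-UX, so treat this as an aside rather than a fix for the questioner's system), -maxdepth 1 expresses the same restriction more directly:

```shell
# GNU find only: stay in the top-level backup directory, ignore backup_db
find /directory/backup -maxdepth 1 -type f -mtime +1 -name "FileName*.EXT" -exec rm {} +
```

The -prune form above remains the portable choice, but -maxdepth is easier to read and harder to get wrong when it exists.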
| Delete one type of file from specific directory however not from subdirectory |
1,515,038,215,000 |
Suppose all files in the current directory are processed with the following pipeline:
ls | sed -rn 's/(.*)\.jpg$/mv -n & \1.jpeg/pi' | sh
How can I create a file, named so that running the above pipeline deletes all files in the current directory, i.e. through code injection?
Please give the command used to create that file.
|
The command you have above will (somewhat clumsily) rename all files in the current directory from *.jpg to *.jpeg; it could be modified to delete all files, but it is hardly appropriate to the task.
However, it sounds like you are trying to craft a filename such that when the above command encounters it, it will delete everything in the current directory instead. This is certainly achievable (although it makes one question your motives...).
DISCLAIMER: You are probably going to do damage with this. That's all on you; test first so you don't ruin something you care about, and expect legal action from your target if you are caught maliciously doing this on their system.
Now that's out of the way, you want to create a file that matches the sed input but creates a suitable output to still do what you want.
Create a file called ;rm *;.jpg (or ;rm -rf *;.jpg for a more aggressive removal).
You can run this through just the sed part to see what it will do via:
echo ";rm * ; .jpg" | sed -rn 's/(.*)\.jpg$/mv -n & \1.jpeg/pi'
EDIT: As a side note, that command you've listed can be made to execute (almost) anything you like as the user running it, a delete of all files is a rather unimaginative (and destructive) use of this weakness. Maybe consider something more colourful (and easier to reverse)?
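Run in a scratch directory, the full pipeline's effect can be rehearsed safely (file names here are only for the demo):

```shell
demo=$(mktemp -d)
cd "$demo"
touch a.txt b.jpg            # innocent bystanders
touch -- ';rm *;.jpg'        # the malicious filename ('--' stops option parsing)

# The vulnerable pipeline from the question ('|| true' because the
# injected commands leave a nonzero exit status behind):
ls | sed -rn 's/(.*)\.jpg$/mv -n & \1.jpeg/pi' | sh 2>/dev/null || true

ls -A                        # prints nothing: the injected 'rm *' wiped the directory
```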
| How to create a file name for the pipeline to delete all files? |
1,515,038,215,000 |
I would like to know if it is possible to recover files that I have deleted?
I have VPS with CentOS 7 and if it is possible I would like to look for (and recover) deleted files by their file extensions.
Thank you.
|
I've had pretty good success with TestDisk & PhotoRec for recovering deleted files. I haven't tried it via an SSH session, but if you do, make sure you install and run the screen command: I've had recovery scans that last over 24 hours.
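On CentOS 7, TestDisk/PhotoRec usually comes from the EPEL repository. A possible sequence over SSH might be the following (the device name /dev/vda is a placeholder for your VPS disk; verify package availability on your system):

```
# yum install -y epel-release
# yum install -y testdisk screen
# screen -S recovery
# photorec /dev/vda
```

Detach from the screen session with Ctrl-a d and reattach later with screen -r recovery, so a day-long scan survives a dropped SSH connection. PhotoRec also lets you restrict recovery to particular file types/extensions from its file options menu.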
| Recovering files on CentOS 7 |
1,479,007,317,000 |
The LUKS / dm-crypt / cryptsetup FAQ page says:
2.15 Can I resize a dm-crypt or LUKS partition?
Yes, you can, as neither dm-crypt nor LUKS stores partition size.
I'm befuzzled:
What is "resized" if no size information is stored?
How does a "resize" get remembered across open / closes of a encrypted volume?
|
It's about online resize.
For example if you use LVM, create a LV of 1G size, and put LUKS on that, it's like this:
# lvcreate -L1G -n test VG
# cryptsetup luksFormat /dev/mapper/VG-test
# cryptsetup luksOpen /dev/mapper/VG-test lukstest
# blockdev --getsize64 /dev/mapper/VG-test
1073741824
# blockdev --getsize64 /dev/mapper/lukstest
1071644672
So the LUKS device is about the same size as the VG-test device (1G minus 2MiB used by the LUKS header).
Now what happens when you make the LV larger?
# lvresize -L+1G /dev/mapper/VG-test
Size of logical volume VG/test changed from 1.00 GiB (16 extents) to 2.00 GiB (32 extents).
Logical volume test successfully resized.
# blockdev --getsize64 /dev/mapper/VG-test
2147483648
# blockdev --getsize64 /dev/mapper/lukstest
1071644672
The LV is 2G large now, but the LUKS device is still stuck at 1G, as that was the size it was originally opened with.
Once you luksClose and luksOpen, it would also be 2G — because LUKS does not store a size, it defaults to the device size at the time you open it. So close and open (or simply rebooting) would update the crypt mapping to the new device size. However, since you can only close a container after umounting/stopping everything inside of it, this is basically an offline resize.
But maybe you have a mounted filesystem on the LUKS, it's in use, and you don't want to umount it for the resize, and that's where cryptsetup resize comes in as an online resize operation.
# cryptsetup resize /dev/mapper/lukstest
# blockdev --getsize64 /dev/mapper/lukstest
2145386496
cryptsetup resize updates the active crypt mapping to the new device size, no umount required, and then you can follow it up with resize2fs or whatever to also grow the mounted filesystem itself online.
If you don't mind rebooting or remounting, you'll never need cryptsetup resize as it happens automatically offline. But if you want to do it online, that's the only way.
When shrinking (cryptsetup resize --size x), the resize is temporary. LUKS does not store device size, so next time you luksOpen, it will simply use the device size again. So shrinking sticks only if the backing device was also shrunk accordingly.
For a successful shrink you have to work backwards... growing is grow partition first, then LUKS, then filesystem... shrinking is shrink filesystem first, and partition last.
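For completeness, a shrink on the example stack above might look like this. It's a sketch only: 1G is an illustrative target, ext4 cannot be shrunk while mounted, cryptsetup resize --size counts 512-byte sectors (1GiB = 2097152 sectors), and the LV keeps 2MiB extra for the LUKS header:

```
# umount /mnt/lukstest
# e2fsck -f /dev/mapper/lukstest
# resize2fs /dev/mapper/lukstest 1G
# cryptsetup resize --size 2097152 lukstest
# lvresize -L 1026M /dev/mapper/VG-test
```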
If the resize doesn't work, it's most likely due to the backing device not being resized, for example the kernel may refuse changes to the partition table while the drive is in use. Check with blockdev that all device layers have the sizes you expect them to have.
| What does `cryptsetup resize` do if LUKS doesn't store partition size? |
1,479,007,317,000 |
I want to list all the physical volumes associated with a logical volume.
I know lvdisplay, pvscan, and pvdisplay -m could do the job, but I don't want to use these commands. Is there another way to do it without using lvm2 package commands?
Any thoughts on comparing the major and minor numbers of devices?
|
There are two possibilities:
If you accept dmsetup as a non-lvm package command (on openSUSE there is a separate package, device-mapper) then you can do this:
dmsetup table "${vg_name}-${lv_name}"
Or you do this:
start cmd: # ls -l /dev/mapper/linux-rootfs
lrwxrwxrwx 1 root root 7 27. Jun 21:34 /dev/mapper/linux-rootfs -> ../dm-0
start cmd: # ls /sys/block/dm-0/slaves/
sda9
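The two steps can be combined into one command (using the example's linux-rootfs LV):

```
# ls "/sys/block/$(basename "$(readlink -f /dev/mapper/linux-rootfs)")/slaves"
```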
| list the devices associated with logical volumes without using lvm2 package commands |
1,479,007,317,000 |
cryptsetup can be invoked with --readonly or -r option, which will set up a read-only mapping:
cryptsetup --readonly luksOpen /dev/sdb1 sdb1
Once I have opened a device as read-only, can I later re-map it to read-write? Obviously, I mean mapping it read-write without closing it first, and then opening it again. Can I remap it without having to type my password again?
If this is not possible, is this just that cryptsetup does not support this, or is there some more fundamental level?
|
It doesn't seem to be possible with the cryptsetup command. Unfortunately cryptsetup has a few such immutable flags... --allow-discards is also one of them. If this wasn't set at the time you opened the container, you can't add it later.
At least, not with the cryptsetup command. However, since cryptsetup creates regular Device Mapper targets, you can resort to dmsetup to modify them. Of course, this isn't recommended for various reasons: it's like changing the partition table of partitions that are in use - mess it up and you might lose all your data.
The device mapper allows dynamic remapping of all devices at runtime and it doesn't care about the safety of your data at all; which is why this feature is usually wrapped behind the LVM layer which keeps the necessary metadata around to make it safe.
Create a read-only LUKS device:
# truncate -s 100M foobar.img
# cryptsetup luksFormat foobar.img
# cryptsetup luksOpen --read-only foobar.img foobar
The way dmsetup sees it:
# dmsetup info foobar
Name: foobar
State: ACTIVE (READ-ONLY)
Read Ahead: 256
Tables present: LIVE
[...]
# dmsetup table --showkeys foobar
0 200704 crypt aes-xts-plain64 ef434503c1874d65d33b1c23a088bdbbf52cb76c7f7771a23ce475f8823f47df 0 7:0 4096
Note the master key which normally shouldn't be leaked, as it breaks whatever brute-force protection LUKS offers. Unfortunately I haven't found a way without using it, as dmsetup also lacks a direct --make-this-read-write option. However dmsetup reload allows replacing a mapping entirely, so we'll replace it with itself in read-write mode.
# dmsetup table --showkeys foobar | dmsetup reload foobar
# dmsetup info foobar
Name: foobar
State: ACTIVE (READ-ONLY)
Read Ahead: 256
Tables present: LIVE & INACTIVE
It's still read-only after the reload, because reload goes into the inactive table.
To make the inactive table active, use dmsetup resume:
# dmsetup resume foobar
# dmsetup info foobar
Name: foobar
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
And thus we have a read-write LUKS device.
Does it work with a live filesystem?
# cryptsetup luksOpen --readonly foobar.img foobar
# mount /dev/mapper/foobar /mnt/foobar
mount: /mnt/foobar: WARNING: device write-protected, mounted read-only.
# mount -o remount,rw /mnt/foobar
mount: /mnt/foobar: cannot remount /dev/mapper/foobar read-write, is write-protected.
So it's read-only. Make it read-write and remount:
# dmsetup table --showkeys foobar | dmsetup reload foobar
# dmsetup resume foobar
# mount -o remount,rw /mnt/foobar
# echo hey it works > /mnt/foobar/amazing.txt
Can we go back to read-only?
# mount -o remount,ro /mnt/foobar
# dmsetup table --showkeys foobar | dmsetup reload foobar --readonly
# dmsetup resume foobar
# mount -o remount,rw /mnt/foobar
mount: /mnt/foobar: cannot remount /dev/mapper/foobar read-write, is write-protected.
So it probably works. The process to add the allow_discards flag to an existing crypt mapping is similar - you have to reload with a table that contains this flag. However a filesystem that has already detected the absence of discard support might not be convinced to re-detect this on the fly. So it's unclear how practical it is.
Still, unless you have very good reason not to, you should stick to re-opening using regular cryptsetup commands, even if it means umounting and re-supplying the passphrase. It's safer all around and more importantly, doesn't circumvent the LUKS security concept.
| remap read-only LUKS partition to read-write |
1,479,007,317,000 |
Usually, block device drivers report the correct size of the device, and it is possible to actually use all the "available" blocks, so the filesystem knows in advance how much it can write to such a device. But in some special cases, as with dm-thin or dm-vdo devices, this does not hold: these block devices can return an ENOSPC error at any moment if their underlying storage (which the upper-level FS knows nothing about) gets full. Therefore, my question is: what happens in such a scenario? An EXT4 filesystem is mounted r/w in async mode (which is the default) and is doing a massive amount of writes. The disk cache (dirty memory) gets involved too, and at that moment there is a lot of data still to be written if the user runs the sync command. But suddenly, the underlying block device of that EXT4 filesystem starts to refuse any writes due to "no space left". What will the behavior of the filesystem be? Will it print errors and go to r/o mode, aborting all writes and possibly causing data loss? If not, will it just wait for space, periodically retrying writes and refusing new ones? In that case, what happens to the huge disk cache if other processes try to allocate lots of RAM? (On Linux, dirty memory is considered Available, isn't it?) Considering the worst scenario: if the disk cache was taking up most of the RAM at the moment of the ENOSPC error (because the admin has set vm.dirty_ratio too high), can the kernel crash or lock up? Or will it just make all processes that want to allocate memory wait/hang? Finally, does the behavior differ across filesystems?
Thanks in advance.
|
When the block device overcommits its available data capacity, as with thin provisioning, or has other reasons not to be able to accept more writes, such as a full snapshot, it has to report an error to whatever is writing to it. ENOSPC would make no sense in this context, so the error chosen is usually EIO (Input/output error).
UPDATE: actually LVM has configurable behavior. For Thin provisioned LV:
--errorwhenfull n (default): blocks for up to (configurable) 60 seconds, just as the OP considered, then errors. Unless an automatic action is performed during these 60s, chances are the result will be the same as an immediate error.
Note also that if the timeout is completely disabled:
Disabling timeouts can result in the system running out of resources,
memory exhaustion, hung tasks, and deadlocks. (The timeout applies to
all thin pools on the system.)
--errorwhenfull y: immediately returns an error
If the "user" is a filesystem, it will react to I/O error the same as if this was caused by an actual media error, possibly depending on mount options (eg, for ext4 possible options are errors={continue|remount-ro|panic}). I can't tell for sure what happens to dirty data still in cache when one of the non-panic options is chosen. One could imagine it's either left in cache or will be lost, but one should assume it will be lost anyway.
As this is a severe result, such disk space should be actively monitored and once a threshold is reached, there should be either data freed or more actual space added so the overcommitted space never gets full. Same for snapshots, especially the non-thin-provisioned kind which uses more space over time: it should be removed when not needed anymore. There are even options to auto-increase the thin-provisioned space for emergencies (when the layer providing space to the thin provisioning layer can still provide more).
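For LVM thin pools, the auto-extend behaviour mentioned above lives in lvm.conf; the threshold and percent values below are just examples, and dmeventd monitoring must be active for it to trigger:

```
# /etc/lvm/lvm.conf, "activation" section
activation {
    # grow a thin pool by 20% once it crosses 70% usage
    # (a threshold of 100 disables auto-extension entirely)
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
}
```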
further references:
Automatically extend thin pool LV
Managing free space on VDO volumes
| How does EXT4 handle sudden lack of space in the underlying storage? |
1,479,007,317,000 |
I'm currently trying to use dm-integrity in standalone mode. For that I installed a plain Ubuntu Server 20.04 in a VirtualBox VM.
In the next steps I create the dm-integrity device, a ext4 filesystem and mount it:
integritysetup format /dev/sdb
integritysetup open /dev/sdb hdd-int
mkfs.ext4 /dev/mapper/hdd-int
mkdir /data
mount /dev/mapper/hdd-int /data
echo "/dev/mapper/hdd-int /data ext4 defaults 0 0" >> /etc/fstab
NOTE: For simplification I use /dev/sdb instead of /dev/disk/by-id/<ID>.
Now I reboot and see that the device /dev/mapper/hdd-int does not exist, and therefore the mount to /data failed.
Now my question: how can I permanently persist the information of the dm-integrity device, so that the mount is already there after a reboot? Should I create a line in /etc/fstab? Or is there another config file?
|
Disclaimer: This is not a standard implementation by any means and also has not been battle tested in practice. It may break at any time. Use at your own risk. Make backups!!!
So in addition to my theoretical answer, here's an example implementation of standalone dm-integrity in a fresh Ubuntu 20.04 Desktop install. Steps 1-4 are the setup and installation process, steps 5-8 the custom udev rule and hook.
Ingredients:
a drive using the GPT partitioning scheme (for providing PARTLABEL, since integrity lacks UUID)
one or more partitions using DM-Integrity, identified by integrity-somename label.
custom udev rule to set up DM-Integrity for each labelled partition
custom initramfs hook to include integritysetup binary as well as the udev rule for early setup
Step-by-step implementation:
1. Create partitions
The key point here is that every integrity partition gets a partition label, in this example one integrity-root and one integrity-home, to be used for the root / and /home partitions respectively.
# parted /dev/vda
GNU Parted 3.3
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit mib
(parted) mklabel gpt
(parted) disk_set pmbr_boot on
(parted) mkpart grub 1MiB 2MiB
(parted) set 1 bios_grub on
(parted) mkpart boot 2MiB 1024MiB
(parted) set 2 lvm on
(parted) mkpart integrity-root 1024MiB 10240MiB
(parted) set 3 lvm on
(parted) mkpart integrity-home 10240MiB 100%
(parted) set 4 lvm on
(parted) print free
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 19456MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: pmbr_boot
Number Start End Size File system Name Flags
0.02MiB 1.00MiB 0.98MiB Free Space
1 1.00MiB 2.00MiB 1.00MiB grub bios_grub
2 2.00MiB 1024MiB 1022MiB boot lvm
3 1024MiB 10240MiB 9216MiB integrity-root lvm
4 10240MiB 19455MiB 9215MiB integrity-home lvm
19455MiB 19456MiB 0.98MiB Free Space
(parted)
Information: You may need to update /etc/fstab.
Verify that the partitions appear under /dev/disk/by-partlabel accordingly:
# ls -l /dev/disk/by-partlabel
total 0
lrwxrwxrwx 1 root root 10 May 2 17:52 boot -> ../../vda2
lrwxrwxrwx 1 root root 10 May 2 17:52 grub -> ../../vda1
lrwxrwxrwx 1 root root 10 May 2 17:52 integrity-home -> ../../vda4
lrwxrwxrwx 1 root root 10 May 2 17:52 integrity-root -> ../../vda3
2. Set up Integrity
With the partitions set up, you actually have to turn them into integrity devices.
# integritysetup format /dev/disk/by-partlabel/integrity-root
WARNING!
========
This will overwrite data on /dev/disk/by-partlabel/integrity-root irrevocably.
Are you sure? (Type uppercase yes): YES
Formatted with tag size 4, internal integrity crc32c.
Wiping device to initialize integrity checksum.
You can interrupt this by pressing CTRL+c (rest of not wiped device will contain invalid checksum).
Finished, time 01:14.903, 9081 MiB written, speed 121.2 MiB/s
# integritysetup open /dev/disk/by-partlabel/integrity-root integrity-root
Repeat the same for /dev/disk/by-partlabel/integrity-home, then verify it exists under /dev/mapper:
# ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 May 2 2020 control
lrwxrwxrwx 1 root root 7 May 2 18:07 integrity-home -> ../dm-1
lrwxrwxrwx 1 root root 7 May 2 18:07 integrity-root -> ../dm-0
Note this naming scheme technically collides with LVM, so you should not use integrity as a VG name.
3. Filesystem, RAID or LVM
With integrity in place, you also have to create a filesystem. Otherwise the Ubuntu installer does not know what to make of this mystery device and tries to create a partition table on it, instead.
# mkfs.ext4 /dev/mapper/integrity-root
# mkfs.ext4 /dev/mapper/integrity-home
So this is the point where you put your filesystem on the integrity device.
Alternatively you can go with RAID or LVM here. You could also go with LUKS, I suppose, but why would you do that when LUKS2 already has built-in support for Integrity? If you choose LUKS here, chances are you're following the wrong tutorial.
4. Install Ubuntu
The Ubuntu desktop installer technically does not support integrity at all, however since you set up the filesystems manually, it will allow you to use them anyway. It just won't be able to boot without further steps below.
In the "Installation type" dialog, select "Something else" (for manual partitioning)
"Change" integrity-root to mount point /
"Change" integrity-home to mount point /home
Don't forget about your bootloader! (Impossible to use an integrity device for it)
"Change" /dev/vda1 to "Reserved BIOS boot area"
"Change" /dev/vda2 to mount point /boot
Leave the other partitions alone (don't format the integrity devices)
Note this will be completely different for an UEFI Secure Boot setup. For simplicity, this example uses good old legacy bios grub booting.
Finally it should look like this:
Click "Install Now".
If you continue, the changes listed below will be written to the disks. Otherwise, you will be able to make further changes manually.
WARNING: This will destroy all data on any partitions you have removed as well as on the partitions that are going to be formatted.
The partition tables of the following devices are changed:
Virtual disk 1 (vda)
The following partitions are going to be formatted:
LVM VG integrity, LV home as ext4
LVM VG integrity, LV root as ext4
partition #2 of Virtual disk 1 (vda) as ext2
Since we're basically fooling the installer into using an integrity device as target, it wrongly assumes LVM VG-LV constellation. Just ignore it and proceed.
However, don't reboot. It won't work just yet.
While the installation is running, you can verify it's going smoothly by running lsblk in a terminal:
# lsblk
vda 252:0 0 19G 0 disk
├─vda1 252:1 0 1M 0 part
├─vda2 252:2 0 1022M 0 part /target/boot
├─vda3 252:3 0 9G 0 part
│ └─integrity-root 253:0 0 8.9G 0 crypt /target
└─vda4 252:4 0 9G 0 part
└─integrity-home 253:1 0 8.9G 0 crypt /target/home
lsblk does not support integrity devices yet either; it wrongly assumes them to be crypt devices. No matter, everything is going to the right place if integrity-root is /target, integrity-home is /target/home and /dev/vda2 is /target/boot.
When the installation is finished, select "Continue testing" instead of "Reboot now".
5. Chroot & install integritysetup
To make Ubuntu actually support mounting the Standalone Integrity partitions, you'll have to chroot into your fresh install and set up a custom udev rule and initramfs hook.
# mount /dev/mapper/integrity-root /target
# mount /dev/mapper/integrity-home /target/home
# mount /dev/vda2 /target/boot
# mount --bind /dev /target/dev
# mount --bind /proc /target/proc
# mount --bind /run /target/run
# mount --bind /sys /target/sys
# chroot /target
Now, integritysetup is probably not installed yet. If you used RAID or LVM, this is also where you have to make sure mdadm, lvm and others are installed too.
# apt-get install cryptsetup
6. Custom udev rule
Custom udev rules go into /etc/udev/rules.d. For reference, the standard rule that creates the /dev/disk/by-partlabel/ links looks like this:
ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_PART_ENTRY_NAME}=="?*", SYMLINK+="disk/by-partlabel/$env{ID_PART_ENTRY_NAME}"
So our custom rule could look like this:
ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_PART_ENTRY_NAME}=="integrity-?*", RUN+="/usr/sbin/integritysetup open $env{DEVNAME} $env{ID_PART_ENTRY_NAME}"
Save it as /etc/udev/rules.d/99-integrity.rules.
This should make udev run integritysetup open for every partition with an integrity-xyz partition label. Note that these names have to be unique system-wide, so in a RAID setup, each drive needs to use different partition labels.
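To test the rule without rebooting, you can reload udev and replay the add events for the integrity partitions (device names as in this example), then check that the mappings appear:

```
# udevadm control --reload
# udevadm trigger --action=add /dev/vda3 /dev/vda4
# udevadm settle
# ls -l /dev/mapper
```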
7. Custom initramfs hook (Ubuntu specific)
By itself, the udev rule might already work fine, if root / itself is not on Integrity. The standard initramfs should mount a non-integrity rootfs fine, at which point the full system takes over to handle everything else.
But with the rootfs itself on Integrity, we need the initramfs to set it up for us, or it won't be able to mount rootfs, and fail booting. That means adding the integritysetup binary as well as the udev rule itself.
With Ubuntu's initramfs-tools, this can be achieved by creating a custom hook script:
#!/bin/sh
PREREQ=""
prereqs()
{
echo "$PREREQ"
}
case $1 in
prereqs)
prereqs
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
# Begin real processing below this line
force_load dm_integrity
copy_exec /usr/sbin/integritysetup /usr/sbin
copy_file text /etc/udev/rules.d/99-integrity.rules
Save it as /etc/initramfs-tools/hooks/integrity.
8. Update initramfs
As with all changes to initramfs configuration, you have to rebuild the initramfs to take effect:
# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-5.4.0-28-generic
cryptsetup: WARNING: target 'integrity-root' not found in /etc/crypttab
update-initramfs: Generating /boot/initrd.img-5.4.0-26-generic
cryptsetup: WARNING: target 'integrity-root' not found in /etc/crypttab
Unfortunately, Ubuntu's default cryptsetup hook is confused and mistakes the integrity device for a cryptsetup one. Thankfully the warning is harmless and can be ignored.
9. Reboot
If everything went well, after rebooting from the Live CD to the installed system, in a terminal lsblk should greet you like this:
integrity@ubuntu $ lsblk
vda 252:0 0 19G 0 disk
├─vda1 252:1 0 1M 0 part
├─vda2 252:2 0 1022M 0 part /boot
├─vda3 252:3 0 9G 0 part
│ └─integrity-root 253:0 0 8,9G 0 crypt /
└─vda4 252:4 0 9G 0 part
└─integrity-home 253:1 0 8,9G 0 crypt /home
And since lsblk misidentifies them as crypt devices, check dmsetup table to see they're in fact integrity devices:
integrity@ubuntu:~$ sudo dmsetup table
[sudo] password for integrity:
integrity-root: 0 18598008 integrity 252:3 0 4 J 6 journal_sectors:130944 interleave_sectors:32768 buffer_sectors:128 journal_watermark:50 commit_time:10000 internal_hash:crc32c
integrity-home: 0 18595960 integrity 252:4 0 4 J 6 journal_sectors:130944 interleave_sectors:32768 buffer_sectors:128 journal_watermark:50 commit_time:10000 internal_hash:crc32c
At that point, you're done. Enjoy your new Linux system with Standalone Integrity!
(Until it breaks, anyway. Use at your own risk, make backups!!!)
| dm-integrity standalone mapper device lost after reboot |
1,479,007,317,000 |
Recently I found an article mentioning that dm-cache has significantly improved in Linux. I also found that in userspace you see it as lvmcache, which is quite confusing for me. I thought that the LVM caching mechanism was something different from dm-cache. On my server I'm using dm-cache set up directly at the device-mapper level using dmsetup commands, no LVM commands involved.
So what is it in the end? Is lvmcache just a CLI for easier dm-cache setup? Is it a better idea to use it instead of raw dmsetup commands?
My current script looks like this:
#!/bin/bash
CACHEPARAMS="512 1 writethrough default 0"
CACHEDEVICES="o=/dev/mapper/storage c=/dev/mapper/suse-cache"
MAPPER="storagecached"
if [ "$1" == "-u" ] ; then
{
for i in $CACHEDEVICES ; do
if [ "`echo $i | grep \"^c=\"`" != "" ] ; then
__CACHEDEV=${i:2}
elif [ "`echo $i | grep \"^o=\"`" != "" ] ; then
__ORIGINALDEV=${i:2}
fi
done
dmsetup suspend $MAPPER
dmsetup remove $MAPPER
dmsetup remove `basename $__CACHEDEV`-blocks
dmsetup remove `basename $__CACHEDEV`-metadata
}
else
{
for i in $CACHEDEVICES ; do
if [ "`echo $i | grep \"^c=\"`" != "" ] ; then
__CACHEDEV=${i:2}
elif [ "`echo $i | grep \"^o=\"`" != "" ] ; then
__ORIGINALDEV=${i:2}
fi
done
__CACHEDEVSIZE="`blockdev --getsize64 \"$__CACHEDEV\"`"
__CACHEMETASIZE="$(((4194304 + (16 * $__CACHEDEVSIZE / 262144))/512))"
if [ "$__CACHEMETASIZE" == ""$(((4194303 + (16 * $__CACHEDEVSIZE / 262144))/512))"" ] ; then
__CACHEMETASIZE="$(($__CACHEMETASIZE + 1))" ; fi
__CACHEBLOCKSSIZE="$((($__CACHEDEVSIZE/512) - $__CACHEMETASIZE))"
__ORIGINALDEVSIZE="`blockdev --getsz $__ORIGINALDEV`"
dmsetup create `basename $__CACHEDEV`-metadata --table "0 $__CACHEMETASIZE linear /dev/mapper/suse-cache 0"
dmsetup create `basename $__CACHEDEV`-blocks --table "0 $__CACHEBLOCKSSIZE linear /dev/mapper/suse-cache $__CACHEMETASIZE"
dmsetup create $MAPPER --table "0 $__ORIGINALDEVSIZE cache /dev/mapper/`basename $__CACHEDEV`-metadata /dev/mapper/`basename $__CACHEDEV`-blocks $__ORIGINALDEV $CACHEPARAMS"
dmsetup resume $MAPPER
}
fi
Would lvmcache do it better? I feel kind of okay with doing it this way because I see what's going on, and I don't value ease of use more than clarity of setup. However, if a cache set up using lvmcache would be better optimized, then I think it's a no-brainer to use it instead.
|
lvmcache is built on top of dm-cache; it sets dm-cache up using logical volumes, and avoids having to calculate block offsets and sizes. Everything is documented in the manpage; the basic idea is to use
the original LV (slow, to be cached)
a new cache data LV
a new cache meta-data LV
The two cache LVs are grouped into a "cache pool" LV, then the original LV and cache pool LV are grouped into a cached LV which you use instead of the original LV.
lvmcache also makes it easy to set up redundant caches, change the cache mode or policy, etc.
| What's the difference between lvmcache and dm-cache? |
1,479,007,317,000 |
I have to map several loopback devices via dmsetup.
I could track which loopback device is mapped to a particular /dev/dm-X device file, but is there an easy way to get this info from the /dev/dm-X device itself?
dmsetup info was of no help for me here.
|
The constituent devices are under /sys/block/dm-X/slaves. E.g.,
$ ls /sys/block/dm-2/slaves/
loop0
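This can be turned into a small overview of all mappings; a sketch that, on reasonably recent kernels, also reads each node's dm name from sysfs (on a machine with no device-mapper devices it simply prints nothing):

```shell
#!/bin/bash
# Print every device-mapper node with its dm name and underlying (slave) devices.
shopt -s nullglob                 # skip the loop body when no dm-* node exists
for d in /sys/block/dm-*; do
    printf '%s (%s): %s\n' "${d##*/}" "$(cat "$d/dm/name")" \
        "$(ls "$d/slaves" | tr '\n' ' ')"
done
```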
| Get target device by /dev/dm-X entry |
1,479,007,317,000 |
I notice that if a device mapping is created with the low-level dmsetup or through ioctls, the device mapping will no longer be there after reboot.
Is this normal? I am using a USB to test out dm_crypt
If it is normal, how do I go about making the mapping stay around? Do I need to look into udev?
Thanks!
Edit for clarification
What I mean by device mapping is the table entry that specifies how to map each range of physical block sectors to a virtual block device. You can see what I mean, if using LVM, with the dmsetup table command. This will dump all current device table mappings. Here's an example for the device mapping linear target, tying two disks together into a LVM swap (physical block abstraction):
vg00-lv_swap: 0 1028160 linear /dev/sdb 0
vg00-lv_swap: 1028160 3903762 linear /dev/sdc 0
The format here is:
<mapping_name>: <start_block> <segment_length> <mapping_target> <block_device> <offset>
Where:
mapping_name: the name of the virtual device
start_block: starting block for virtual device
segment_length: length in sectors (512 byte chunks)
mapping_target: device mapping target such as linear, crypt, or striped
block_device: which physical block device to use (a device path, or a major:minor pair)
offset: offset on physical block device
My problem is that, after creating a new entry in the device mapping table, it disappears after boot. That is, running something like:
dmsetup create TestEncrypted --table "0 $(blockdev --getsz /dev/sdb) crypt serpent-cbc-essiv:sha256 a7f67ad...ee 0 /dev/sdb 0"
and then rebooting causes the mapping table entry to disappear (i.e. doesn't show up with dmsetup table), as well as the corresponding /dev/mapper/TestEncrypted
|
I'm not 100% sure I understand what you mean by mapping, but yes, this seems normal. You need to add the device to either /etc/crypttab or /etc/fstab like you would to mount any other drive.
https://wiki.archlinux.org/index.php/Dm-crypt/System_configuration#crypttab
^ Should have the information you're looking for.
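For the plain (non-LUKS) mapping from the question, the crypttab entry could look roughly like this. It's a sketch: the keyfile path is a placeholder, and with plain dm-crypt the cipher= and size= options must exactly match what was used when the device was first set up, or you get garbage instead of your data:

```
# /etc/crypttab: <target name> <source device> <key file> <options>
TestEncrypted  /dev/sdb  /etc/keys/test.key  plain,cipher=serpent-cbc-essiv:sha256,size=256
```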
| How to make device mappings stay after reboot? |
1,479,007,317,000 |
I wanted to resize my LVM setup, followed some tutorial on the net, and the system totally crashed (I got a boot error about a UUID not being found). I ran a LiveCD and tried to at least recover some files, but I can't mount the drive. I did pvcreate with the missing UUID on /dev/sda1. Then, as most tutorials say, I ran
vgchange -ay
and then tried to mount, but I get:
device-mapper: resume ioctl on failed: Invalid argument.
dmesg prints something like this:
device-mapper: table: 252:0: sda1 too small for target: start=2048, len=15499264, dev_size=497664
Mount obviously doesn't work. I'm totally stuck there and I damn need to recover these files. Any ideas?
Edit:
I need a single folder, any workaround way to get it out (might do on Windows Host, it's a VDI disk) would be okay.
|
After trying hundreds of different command-line combinations I found a tutorial (originally targeting another problem) with the GParted live CD. I booted it, then in a terminal:
testdisk
I chose the first option, then chose:
write
It allowed me to mount the disk and recover the folder I needed.
| Cannot mount LVM: resume ioctl failed |
1,479,007,317,000 |
The dmsetup snapshot documentation says:
<persistent?> is P (Persistent) or N (Not persistent - will not survive
after reboot). O (Overflow) can be added as a persistent store option
to allow userspace to advertise its support for seeing "Overflow" in the
snapshot status. So supported store types are "P", "PO" and "N".
The difference between persistent and transient is with transient
snapshots less metadata must be saved on disk - they can be kept in
memory by the kernel.
Where is this persistent data stored?
|
There is a difference between the data in the first block of a persistent vs transient dmsetup snapshot device:
Given these devices:
$ losetup
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO
/dev/loop1 0 0 0 0 /home/var/ravi/tmp/issue/snap-dev 0
/dev/loop0 0 0 0 0 /home/var/ravi/tmp/issue/base-dev 0
And an initially zeroed-out snapshot device backing file:
$ od -xc snap-dev
0000000 0000 0000 0000 0000 0000 0000 0000 0000
\0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
3751613000
Here's what happens when using the non-persistent N flag:
$ sudo dmsetup -v create snapdev --table '0 8 snapshot /dev/loop0 /dev/loop1 N 1'
Name: snapdev
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
Open count: 0
Event number: 0
Major, minor: 254, 5
Number of targets: 1
$ od -xc snap-dev
0000000 0000 0000 0000 0000 0000 0000 0000 0000
\0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
3751613000
Note that the backing file is unchanged - it's all still \0 bytes.
Now, trying again with the P flag for persistence:
$ sudo dmsetup remove snapdev
$ sudo dmsetup -v create snapdev --table '0 8 snapshot /dev/loop0 /dev/loop1 P 1'
Name: snapdev
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
Open count: 0
Event number: 0
Major, minor: 254, 5
Number of targets: 1
$ od -xc snap-dev
0000000 6e53 7041 0001 0000 0001 0000 0001 0000
S n A p 001 \0 \0 \0 001 \0 \0 \0 001 \0 \0 \0
0000020 0000 0000 0000 0000 0000 0000 0000 0000
\0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
3751613000
In this case, the first bytes of the device are SnAp\001.
My guess is that persistent data is stored in the first block or blocks of the snapshot device itself.
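Based on that observation, here is a hypothetical helper (the name and approach are mine, not part of dmsetup) that reports whether a COW backing file or device begins with the persistent-store magic "SnAp" seen in the od output above:

```shell
# Report whether a dm-snapshot COW store carries the persistent-store
# magic "SnAp" in its first four bytes.
check_cow_persistent() {
    magic=$(dd if="$1" bs=4 count=1 2>/dev/null)
    if [ "$magic" = "SnAp" ]; then
        echo persistent
    else
        echo transient-or-unused
    fi
}
```

Run against the backing file from the experiment, this would print `persistent` after the P run and `transient-or-unused` after the N run (where the file stayed all zeroes).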
| dmsetup: Where is persistent metadata stored? |
1,479,007,317,000 |
In the journal, I'm getting lines such as:
Jan 27 18:23:08 tara kernel: device-mapper: table: 254:2: adding target device sdb2 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=33553920
Jan 27 18:23:08 tara kernel: device-mapper: table: 254:2: adding target device sdb2 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=33553920
Jan 27 18:23:08 tara kernel: device-mapper: table: 254:3: adding target device sdb2 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=34393292288
Jan 27 18:23:08 tara kernel: device-mapper: table: 254:3: adding target device sdb2 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=34393292288
How do I interpret this:
What exactly is aligned incorrectly here?
Where do the start= numbers come from?
How can I make the alignment consistent?
Further info:
[ravi@tara ~]$ uname -a
Linux tara 4.8.17-1-MANJARO #1 SMP PREEMPT Mon Jan 9 10:24:58 UTC 2017 x86_64 GNU/Linux
[ravi@tara ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 3.7T 0 disk
sdb 8:16 0 3.7T 0 disk
├─sdb1 8:17 0 200M 0 part
└─sdb2 8:18 0 3.7T 0 part
├─usb-eMMC_backup 254:2 0 32G 0 lvm
└─usb-ark 254:3 0 3.6T 0 lvm /ark
sdc 8:32 1 7.5G 0 disk
└─sdc1 8:33 1 7.5G 0 part
mmcblk0 179:0 0 29.1G 0 disk
├─mmcblk0p1 179:1 0 200M 0 part /mnt/esp
└─mmcblk0p2 179:2 0 28.9G 0 part
├─lvm-root 254:0 0 24G 0 lvm /
└─lvm-swap 254:1 0 4.9G 0 lvm [SWAP]
mmcblk0boot0 179:8 0 4M 1 disk
mmcblk0boot1 179:16 0 4M 1 disk
mmcblk0rpmb 179:24 0 4M 0 disk
[ravi@tara ~]$
|
Interpretation
The first start= value of 33553920 is the offset of the first PE of the first LV in the target device (above, /dev/sdb2).
This can be confirmed by:
sudo pvs -o +pe_start --units b
There is also another start= value repeated because sdb2's VG contains two LVs (usb-eMMC_backup and usb-ark). I don't know why each is repeated though.
Cause
An alignment inconsistency exists because pe_start is not divisible by the PHY-SEC value of:
lsblk -t /dev/sdb
PHY-SEC was 4096, and 33553920 % 4096 != 0.
All LVs in the VG will be misaligned if pe_start is misaligned (with the default PE size of 4MiB).
Resolution
The VG will need to be created such that pe_start is divisible by the disk's sector size.
Passing --dataalignment 1m to vgcreate will set pe_start = 1048576 B = 1 MiB.
What if I'm still getting alignment inconsistency?
Assuming the underlying partition is aligned, there may still be an (incorrect) alignment message generated. See this answer for a possible cause and resolution, especially if using USB attached SATA (UAS).
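The divisibility check described above can be sketched as a tiny shell helper (the function name is mine, for illustration only):

```shell
# An offset is aligned if it is an exact multiple of the physical sector size.
is_aligned() {
    [ $(( $1 % $2 )) -eq 0 ]
}

# The misaligned pe_start from the log vs. a 1 MiB-aligned one:
is_aligned 33553920 4096 && echo aligned || echo misaligned   # prints: misaligned
is_aligned 1048576  4096 && echo aligned || echo misaligned   # prints: aligned
```

Feed it the pe_start from `pvs -o +pe_start --units b` and the PHY-SEC from `lsblk -t` to reproduce the check by hand.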
| Device mapper table alignment inconsistency |
1,479,007,317,000 |
I have an HP N40L microserver, with 2 identical drives, I used the system to hardware-RAID them as a mirror. I then installed mint on the system about a year ago.
This has been running perfectly, updating, etc. until I upgraded to Mint 17.
I thought everything was fine, but I've noticed that mint is only using 1 of the drives to boot, then for some reason was showing the contents of the other drive.
i.e. it boots sdb1, but df shows sda1. I'm sure df used to show a /dev/mapper/pdc_bejigbccdb1 drive which was the RAID array. Thus any updates to Grub go to sda1, but it boots sdb1 then loads the fs sda1.
N40L marty # df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 233159608 113675036 107617644 52% /
none 4 0 4 0% /sys/fs/cgroup
/dev 2943932 12 2943920 1% /media/sda1/dev
tmpfs 597588 1232 596356 1% /run
none 5120 0 5120 0% /run/lock
none 2987920 0 2987920 0% /run/shm
none 102400 4 102396 1% /run/user
From cat /etc/fstab
N40L marty # cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
/dev/mapper/pdc_bejigbccdb1 / ext4 errors=remount-ro 0 1
/dev/mapper/pdc_bejigbccdb5 none swap sw 0 0
If I do ls /dev/mapper/ I get
N40L marty # ls /dev/mapper
total 0
crw------- 1 root root 10, 236 Jul 24 17:03 control
How do I get my raid back and how do I get grub to boot to it?
Further update:
N40L grub # dmraid -r
/dev/sdb: pdc, "pdc_bejigbccdb", mirror, ok, 486328064 sectors, data@ 0
/dev/sda: pdc, "pdc_bejigbccdb", mirror, ok, 486328064 sectors, data@ 0
N40L grub # dmraid -s
*** Set
name : pdc_bejigbccdb
size : 486328064
stride : 128
type : mirror
status : ok
subsets: 0
devs : 2
spares : 0
N40L grub # dmraid -ay -vvv -d
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
DEBUG: not isw at 250059348992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 250058267136
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
DEBUG: not isw at 250059348992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 250058267136
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: pdc metadata discovered
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: not found pdc_bejigbccdb
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: not found pdc_bejigbccdb
NOTICE: added /dev/sdb to RAID set "pdc_bejigbccdb"
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: found pdc_bejigbccdb
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: found pdc_bejigbccdb
NOTICE: added /dev/sda to RAID set "pdc_bejigbccdb"
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bejigbccdb" to 16
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bejigbccdb" to 16
RAID set "pdc_bejigbccdb" was not activated
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "pdc_bejigbccdb"
DEBUG: freeing device "pdc_bejigbccdb", path "/dev/sda"
DEBUG: freeing device "pdc_bejigbccdb", path "/dev/sdb"
So my system sees the two drives and sees they should be part of an array, but will not activate the array and thus does not create /dev/mapper/pdc_bejigbccdb, so I cannot install grub to it and boot from it.
How do I get dmraid to activate and create the mapper entry?
|
I fixed it, but I can't honestly tell you how.
Basically I booted into a LiveUSB version of Mint 17. I noticed the raid array was happy, so I mounted the system and chrooted into it.
I then installed dmraid again and mdadm (don't know why I did that), updated my grub settings and installed grub to the array.
A reboot later, it complained about mdadm, but all is well and it's booting from the array now.
Quite a surprise really. Thank you for all your help.
| RAID-1 mirror has become a single hard disk |
1,479,007,317,000 |
I'm benchmarking various cryptsetup volumes and I'm getting unexpected results on Debian.
I'm using numbers from this talk as a rough reference. One of the slides shows benchmark results for various configurations:
My setup is not identical and I'm running all tests in VMs, so I don't expect results to match exactly, but I think they should roughly reflect what's on the slide. In particular I expect to see a performance drop of about 35% for authenticated integrity modes (AES-XTS, HMAC-SHA256) compared to non-authenticated counterparts (AES-XTS), and then another 35% for journaled vs. non-journaled integrity.
But here are my results, similar for Ubuntu Server 20.04 and Debian 10.4:
LUKS2 container:
Capacity 1056964608 B
Read 26.5MB/s
Write 8855kB/s
LUKS2 with hmac-sha256, no journal:
Capacity 1040322560 B
Read 19.0MB/s
Write 6352kB/s
LUKS2 with hmac-sha256, journaled:
Capacity 1040322560 B
Read 18.9MB/s
Write 6311kB/s
About 30% performance drop after enabling integrity, that's expected. But then the difference between journaled and non-journaled integrity is marginal. I mean, that's much better than the original benchmark so I should be happy, but how do I know that the journal is actually working, and if it is, how do I opt out?
Here are my cryptsetup format commands:
cryptsetup luksFormat --type luks2 /dev/sdb --sector-size 4096
cryptsetup luksFormat --type luks2 /dev/sdb --sector-size 4096 --integrity hmac-sha256
cryptsetup luksFormat --type luks2 /dev/sdb --sector-size 4096 --integrity hmac-sha256 --integrity-no-journal
Benchmark command:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/dev/mapper/sdb --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75
VMs are configured on VirtualBox 6.1 with settings default for Debian or Ubuntu respectively. Disks are 1 GB VDIs, fixed size and pre-filled with zeros, host buffering disabled. Underlying SSD is using 4k sectors, hence --sector-size 4096.
Interestingly, both the basic --integrity variant and the --integrity-no-journal one create an intermediate sdb_dif mapped device with a journal, and both sdb devices have identical size:
$ sudo integritysetup status /dev/mapper/sdb_dif
/dev/mapper/sdb_dif is active and is in use.
type: INTEGRITY
tag size: 32
integrity: (none)
device: /dev/sdb
sector size: 4096 bytes
interleave sectors: 32768
size: 2031880 sectors
mode: read/write
failures: 0
journal size: 8380416 bytes
journal watermark: 50%
journal commit time: 10000 ms
$ sudo blockdev --getsize64 /dev/mapper/sdb
1040322560
|
Summary of answer:
cryptsetup format ignores the --integrity-no-journal flag.
Instead, your options are:
At each open, always provide --integrity-no-journal.
At your first open (i.e. when formatting the inner device with a filesystem, or to add the inner device to an MD RAID), provide --persistent --integrity-no-journal to persist the --integrity-no-journal setting. Then future open will not need the flag. This option only works with cryptsetup, not if you are using direct integritysetup.
While the device is already opened, issue a refresh --persistent --integrity-no-journal. This option only works with cryptsetup, not if you are using direct integritysetup.
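To confirm which mode an opened device is actually using, you can also inspect its live table with `dmsetup table <name>`. A sketch (the helper name is mine), assuming the documented dm-integrity table layout of `<start> <len> integrity <dev> <offset> <tag_size> <mode> ...`, where the mode letter is J for journaled and D for direct (no journal):

```shell
# Print the dm-integrity mode letter from `dmsetup table` output on stdin
# (J = journaled, D = direct/no journal, B = bitmap).
integrity_mode() {
    awk '$3 == "integrity" { print $7 }'
}

# Example with a table line such as a journaled device would show:
printf '0 2031880 integrity 8:16 0 32 J 8 journal_sectors:16320\n' | integrity_mode   # prints: J
```

On a real system this would be something like `sudo dmsetup table sdb_dif | integrity_mode`.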
Old text:
Did you provide the --integrity-no-journal flag to integritysetup open? It looks like dm-integrity does not save the (non-)existence of a journal in the superblock when you format.
I formatted a USB 2.0 flash disk partition with integritysetup format /dev/sdb1 --no-wipe.
Then I opened it with integritysetup open /dev/sdb1 int-sdb1, and did sync; echo 1 > /proc/sys/vm/drop_caches; dd count=16384 bs=4096 if=/dev/zero of=/dev/mapper/int-sdb1. This consistently gave me results between 2.1Mb/s and 2.4Mb/s.
I closed it and then reopened with integritysetup open /dev/sdb1 int-sdb1 --integrity-no-journal, and issued the same dd command. This time it gave me between 4.0Mb/s and 7.0Mb/s, a marked improvement. The massive variance might be due to the flash translation layer; it's a lousy throwaway cheap disk.
I repeated this again with integritysetup format /dev/sdb1 --no-wipe --integrity-no-journal. Again, what matters is whether you gave --integrity-no-journal to the open command, not to the format command.
So it might be a point of confusion in integritysetup: giving --integrity-no-journal to the format command has no lasting effect.
| Cryptsetup with dm-integrity - weird benchmark results |
1,479,007,317,000 |
LVM snapshots seem to have extremely poor performance. It seems that dm-thin snapshots use a new implementation:
Another significant feature is support for an arbitrary depth of
recursive snapshots (snapshots of snapshots of snapshots ...). The
previous implementation of snapshots did this by chaining together
lookup tables, and so performance was O(depth). This new
implementation uses a single data structure to avoid this degradation
with depth. Fragmentation may still be an issue, however, in some
scenarios.
However, dm-thin seems to be pretty bare bones. In the documentation they say that end-users are advised to use lvm2. lvm seems to have lvm-thin, so I'm wondering whether lvm-thin leverages dm-thin, or whether they are different implementations and they meant that a future version of lvm (which doesn't exist yet) might leverage dm-thin.
|
LVM2 is the current version of LVM, not a future version.
$ rpm -q lvm
package lvm is not installed
$ rpm -q lvm2
lvm2-2.02.177-5.fc28.x86_64
^ lvm 2.02 has been around for some time :)
LVM is very closely tied to DM; "in fact, DM is maintained by the LVM core team". There's no independent implementation of thin provisioning in the LVM layer; it depends on DM. AFAIK there's only one "thin provisioning" implementation in DM, so it's nice and simple.
I think you're right this isn't explained in any prominent documentation for lvmthin. You could look at the LVM source code, or this blog article by a user.
Also if you use lvmthin, you will notice the devices you're using are still /dev/mapper/... or something related, which is straightforward to verify as being a DM device.
| Does lvm-thin use dm-thin underneath or they are completely separate utilities? |
1,479,007,317,000 |
We are using dm-verity for a squashfs root file system.
Using kernel 4.8.4 everything was ok, after upgrading to kernel 4.14.14 mount fails, even though the veritysetup verify command validates the image.
# veritysetup verify /dev/mmcblk0p5 /dev/mmcblk0p6 --hash-offset 4096 d35f95a4
b47c92332fbcf5aced9c4ed58eb2d5115bad4aa52bd9d64cc0ee676b --debug
# cryptsetup 1.7.4 processing "veritysetup verify /dev/mmcblk0p5 /dev/mmcblk0p6 --hash-offset 4096 d35f95a4b47c92332fbcf5aced9c4ed58eb2d5115bad4aa52bd9d64cc0ee676b --debug"
# Running command verify.
# Allocating crypt device /dev/mmcblk0p6 context.
# Trying to open and read device /dev/mmcblk0p6 with direct-io.
# Initialising device-mapper backend library.
# Trying to load VERITY crypt type from device /dev/mmcblk0p6.
# Crypto backend (OpenSSL 1.0.2m 2 Nov 2017) initialized in cryptsetup library version 1.7.4.
# Detected kernel Linux 4.14.14-yocto-standard armv7l.
# Reading VERITY header of size 512 on device /dev/mmcblk0p6, offset 4096.
# Setting ciphertext data device to /dev/mmcblk0p5.
# Trying to open and read device /dev/mmcblk0p5 with direct-io.
# Activating volume [none] by volume key.
# Trying to activate VERITY device [none] using hash sha256.
# Verification of data in userspace required.
# Hash verification sha256, data device /dev/mmcblk0p5, data blocks 10462, hash_device /dev/mmcblk0p6, offset 2.
# Using 2 hash levels.
# Data device size required: 42852352 bytes.
# Hash device size required: 348160 bytes.
# Verification of data area succeeded.
# Verification of root hash succeeded.
# Releasing crypt device /dev/mmcblk0p6 context.
# Releasing device-mapper backend.
Command successful.
# veritysetup create vroot /dev/mmcblk0p5 /dev/mmcblk0p6 --hash-offset 4096 d3
5f95a4b47c92332fbcf5aced9c4ed58eb2d5115bad4aa52bd9d64cc0ee676b --debug
# mount -o ro /dev/mapper/vroot /mnt/
device-mapper: verity: 179:5: metadata block 2 is corrupted
EXT4-fs (dm-0): unable to read superblock
device-mapper: verity: 179:5: metadata block 2 is corrupted
EXT4-fs (dm-0): unable to read superblock
device-mapper: verity: 179:5: metadata block 2 is corrupted
EXT4-fs (dm-0): unable to read superblock
device-mapper: verity: 179:5: metadata block 2 is corrupted
SQUASHFS error: squashfs_read_data failed to read block 0x0
squashfs: SQUASHFS error: unable to read squashfs_super_block
device-mapper: verity: 179:5: metadata block 2 is corrupted
FAT-fs (dm-0): unable to read boot sector
mount: mounting /dev/mapper/vroot on /mnt/ failed: Input/output error
Same error message appears in dmesg.
The above commands were run on the target device.
On my host machine, Debian 8 (kernel 3.16.0-5), using the files which eventually ended up in /dev/mmcblk0p5 and /dev/mmcblk0p6,
I was able to get everything working:
# veritysetup create vroot rootfs-image.squashfs rootfs-image.hashtbl --hash-offset 4096 d35f95a4b47c92332fbcf5aced9c4ed58eb2d5115bad4aa52bd9d64cc0ee676b
# mount /dev/mapper/vroot /tmp/mnt
|
By having a look at /proc/crypto I found there are two modules providing sha256: one from Atmel and the generic one:
name : sha256
driver : atmel-sha256
module : kernel
priority : 100
[...]
name : sha256
driver : sha256-generic
module : kernel
priority : 0
By disabling the Atmel SHA hw accelerator in the kernel, CONFIG_CRYPTO_DEV_ATMEL_SHA=n, it will use the generic implementation and then everything works.
It seems like something changed from Kernel 4.8.4 to Kernel 4.14.14 that breaks things. That is another issue...
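As a sketch of that /proc/crypto inspection, the helper below (hypothetical, reading its input from stdin so it works on any saved copy) prints each registered driver for an algorithm together with its priority; the kernel picks the highest-priority one:

```shell
# List driver/priority pairs for one algorithm from /proc/crypto-formatted
# input on stdin. Records are "key : value" lines separated by blank lines.
crypto_drivers() {
    awk -v alg="$1" '
        $1 == "name"             { want = ($3 == alg); next }
        want && $1 == "driver"   { drv = $3 }
        want && $1 == "priority" { print drv, $3 }
    '
}
```

`crypto_drivers sha256 < /proc/crypto` would print lines like `atmel-sha256 100` and `sha256-generic 0`, matching the excerpt above.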
| veritysetup verify successful but mount fails after upgrade to new kernel |
1,395,964,235,000 |
I can't find the dig command on my new CentOS installation. I've tried dnf install dig but it says that it cannot find the package.
How do I install dig on CentOS?
|
The DIG tool is part of the BIND Utilities so you need to install them. To install the BIND Utilities, type the following:
$ dnf install bind-utils
| How to install dig on CentOS? |
1,395,964,235,000 |
Why do the commands dig and nslookup sometimes print different results?
~$ dig facebook.com
; <<>> DiG 9.9.2-P1 <<>> facebook.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6625
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;facebook.com. IN A
;; ANSWER SECTION:
facebook.com. 205 IN A 173.252.110.27
;; Query time: 291 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sun Oct 6 17:55:52 2013
;; MSG SIZE rcvd: 57
~$ nslookup facebook.com
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: facebook.com
Address: 10.10.34.34
|
dig uses the OS resolver libraries. nslookup uses its own internal ones.
That is why Internet Systems Consortium (ISC) has been trying to get people to stop using nslookup for some time now. It causes confusion.
| dig vs nslookup |
1,395,964,235,000 |
I needed to automatically get my own WAN-IP-address from my router. I found this question and, among others, a solution with dig was proposed:
dig +short myip.opendns.com @resolver1.opendns.com
It works perfectly, but now I want to understand what it is doing.
Here is what I (hope to) understand so far (please correct me, if I am wrong):
+short just gives me a short output
@resolver1.opendns.com is the DNS server,
which is asked what IP address belongs to the given domain
What's not clear to me is myip.opendns.com. If I would write www.spiegel.de instead, I would get the IP address of the domain www.spiegel.de, right?
With myip.opendns.com I get the WAN-IP of my router. So is myip.opendns.com just emulating a domain, which is resolved to my router?
How does it do it? Where does it get my IP from? And how is it different to what webpages, like e.g., www.wieistmeineip.de, are doing? They also try to get my IP.
In the answer of Krinkle on the question I mentioned, it is stated
that this "dns-approach" would be better than the "http-approach".
Why is it better and what is the difference?
There has to be a difference, because the WAN-IP I get from dig +short myip.opendns.com @resolver1.opendns.com (ip1) is the one I can also see in the web interface of my router, whereas www.wieistmeineip.de (and other similar sites too) is giving me another IP address (ip2).
I could imagine that my ISP is using some kind of sub-LAN, so that my requests to webservers are going through another (ISP-) router which has ip2, so that www.wieistmeineip.de is just seeing this address (ip2). But, again, what is myip.opendns.com doing then?
Additionally: Opening ip1 from within my LAN is giving me the test website from my RasPi, opening it from the outside of my LAN (mobile internet) does not work. Does it mean, that ip1 is no proper "internet IP" but more like a LAN IP?
|
First to summarize the general usage of dig: it requests the IP assigned to the given domain from the default DNS server. So e.g. dig google.de would request the IP assigned to the domain google.de. That would be 172.217.19.99.
The command you mentioned is:
dig +short myip.opendns.com @resolver1.opendns.com
What this command does is: it sends a request for the IP of the domain myip.opendns.com to the DNS server resolver1.opendns.com. This server is programmed so that, if this special domain is requested, the IP address the request comes from is sent back.
The reasons why the method of querying the WAN IP using DNS is better were mentioned by krinkle: standardised, more stable and faster.
Note that per default, dig asks for the IPv4 address (DNS A record). If dig establishes a connection to opendns.com via IPv6, you'll get no result back (since you asked for your IPv4 address but have an IPv6 address in use). Thus, a more robust command might be:
dig +short ANY @resolver1.opendns.com myip.opendns.com
This will return your IP address, version 4 or 6, depending on dig's connection. To specify an IP version, use dig -4 or dig -6 as shown in Mafketel's answer.
The reason I could imagine for those two IPs is that your router caches DNS requests and returns an old IP.
Another problem could be DualStack Lite. That is often used by new internet contracts. Do you know whether your ISP is using DS Lite?
| How does dig find my WAN-IP-address? What is "myip.opendns.com" doing? |
1,395,964,235,000 |
On Alpine Linux, I'd like to know how to extract just the IP address from a DNS / dig query. The query I'm running looks like this:
lab-1:/var/# dig +answer smtp.mydomain.net +short
smtp.ggs.mydomain.net
10.11.11.11
I'd like to be able to get just the IP address returned.
I'm currently playing around with the bash pipe and the awk command. But so far, nothing I've tried is working.
Thanks.
|
I believe dig +short outputs two lines for you because the domain
you query, smtp.mydomain.net is a CNAME for smtp.ggs.mydomain.net,
and dig prints the intermediate resolution step.
You can probably rely on the last line from dig's output being the IP
you want, though, and therefore the following should do:
dig +short smtp.mydomain.net | tail -n1
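If you'd rather not depend on line order, an alternative sketch (the function name is mine) is to filter the dig +short output down to lines that look like IPv4 addresses — a crude pattern, not a strict validator:

```shell
# Keep only lines of `dig +short` output that look like dotted-quad IPv4
# addresses.
extract_ipv4() {
    grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

# With the output from the question:
printf 'smtp.ggs.mydomain.net\n10.11.11.11\n' | extract_ipv4   # prints: 10.11.11.11
```

So the full pipeline would be `dig +short smtp.mydomain.net | extract_ipv4`.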
| how to extract just the IP address from a DNS query |
1,395,964,235,000 |
I'm using a local BIND9 server to host some local dns records. When trying to dig for a local domain name I can't find it if I don't explicitly tell dig to use my local BIND9 server.
user@heimdal:~$ dig +short heimdal.lan.se
user@heimdal:~$ dig +short @192.168.1.7 heimdal.lan.se
192.168.1.2
Ubuntu 17.04 and systemd-resolved are used. This is the content of my /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.53
And the output from systemd-resolve --status
Global
DNS Servers: 192.168.1.7
192.168.1.1
DNSSEC NTA: 10.in-addr.arpa
16.172.in-addr.arpa
168.192.in-addr.arpa
17.172.in-addr.arpa
18.172.in-addr.arpa
19.172.in-addr.arpa
20.172.in-addr.arpa
21.172.in-addr.arpa
22.172.in-addr.arpa
23.172.in-addr.arpa
24.172.in-addr.arpa
25.172.in-addr.arpa
26.172.in-addr.arpa
27.172.in-addr.arpa
28.172.in-addr.arpa
29.172.in-addr.arpa
30.172.in-addr.arpa
31.172.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test
The DNS Servers section does seem to have rightfully configured 192.168.1.7 as the main DNS server (my local BIND9 instance). I can't understand why it's not used ... ?
|
So, changing my wired eth0 interface to be managed solved this issue for me.
Changing ifupdown to managed=true in /etc/NetworkManager/NetworkManager.conf
[ifupdown]
managed=true
Then restart NetworkManager
sudo systemctl restart NetworkManager
After this it works flawlessly.
This was not 100%, though. I also applied these changes to try to kill the resolver:
sudo service resolvconf disable-updates
sudo update-rc.d resolvconf disable
sudo service resolvconf stop
Big thanks to this blog post regarding the subject:
https://ohthehugemanatee.org/blog/2018/01/25/my-war-on-systemd-resolved/ (if unavailable use https://github.com/ohthehugemanatee/ohthehugemanatee.org/blob/main/content/blog/source/2018-01-25-my-war-on-systemd-resolved.markdown)
Let's pray this works. This whole systemd-resolved business is just so ugly.
| Why doesn't systemd-resolved use my local DNS server? |
1,395,964,235,000 |
I cannot find dig command on my Cygwin, nor any package name that would directly point to it. If there is a package containing it, then which one to install?
|
To find the proper package that contains a specific file, you can always use cygcheck -p to ask the Cygwin server:
$ cygcheck -p bin/dig
Found 6 matches for bin/dig
bind-debuginfo-9.11.5-2.P4 - bind-debuginfo: Debug info for bind
bind-debuginfo-9.11.6-1 - bind-debuginfo: Debug info for bind
bind-debuginfo-9.11.9-1 - bind-debuginfo: Debug info for bind
bind-utils-9.11.5-2.P4 - bind-utils: DNS server and utilities suite
bind-utils-9.11.6-1 - bind-utils: DNS server and utilities suite
bind-utils-9.11.9-1 - bind-utils: DNS server and utilities suite
Additional information source: cygcheck man page:
-p, --package-query search for REGEXP in the entire cygwin.com package repository (requires internet connectivity)
| How to install dig on Cygwin? |
1,395,964,235,000 |
I'm running mint Mate 17.2.
When I use dig, for a certain specific domain name, the resolved IP "answer" is wrong, and the answer server is 127.0.0.1.
Trying to access this domain from my local computer via ssh, a web browser, etc also resolves to the wrong IP.
DNS lookup using online tools or other computers works correctly.
Something on the local machine is intercepting the request and returning a wrong cached result. I have looked at various caching programs, but I don't think I have installed or configured any.
The IP address being returned is the old IP and the master DNS records were changed over a year ago.
How do I determine what program is intercepting DNS locally and disable it so I can have this domain resolve correctly on my computer?
/etc/resolv.conf:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
|
Resolvconf is pointing to a local piece of software running on port 53 on the local machine.
To find out which one:
sudo netstat -anlp | grep :53
As we have found out, it is the avahi daemon.
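As a sketch, the owning PID/program can be pulled out of that netstat output with a little awk (the helper name is mine, and the field positions are assumed from typical `netstat -anlp` formatting; input comes from stdin):

```shell
# Print the PID/program field for sockets bound to port 53.
port53_owner() {
    awk '$4 ~ /:53$/ { print $NF }'
}

# Example with a typical netstat line:
printf 'udp 0 0 127.0.1.1:53 0.0.0.0:* 1234/avahi-daemon\n' | port53_owner   # prints: 1234/avahi-daemon
```

On a live system: `sudo netstat -anlp | port53_owner`.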
To trace DNS resolution, also following command is useful:
dig +trace www.cnn.com
If you want to control your DNS settings yourself, especially on servers (I noticed you said Mint), I would recommend doing away with resolvconf
You can uninstall it with:
dpkg --purge resolvconf
Then, if you got the IP via DHCP leave it as it is, otherwise fill in your DNS servers in /etc/resolv.conf.
If you are also not interested in mDNS resolution, or are in a corporate network, I recommend uninstalling avahi.
In desktop settings, it may be advisable either to reboot or to restart all services. I would at least restart networking with service networking restart.
The Avahi mDNS/DNS-SD daemon implements Apple's Zeroconf architecture
(also known as "Rendezvous" or "Bonjour"). The daemon registers local
IP addresses and static services using mDNS/DNS-SD and provides two
IPC APIs for local programs to make use of the mDNS record cache the
avahi-daemon maintains.
In a work setting, it may also be worth tracking down, at the network level, which servers/workstations are announcing mDNS records, and whether they are strictly necessary. I would bet some forgotten hosts file or some old server setting is propagating your old IP address via mDNS.
You may also listen in the local network mDNS packets with:
sudo tcpdump -n udp port 5353
From mDNS
The multicast Domain Name System (mDNS) resolves host names to IP
addresses within small networks that do not include a local name
server. It is a zero-configuration service, using essentially the same
programming interfaces, packet formats and operating semantics as the
unicast Domain Name System (DNS). Although Stuart Cheshire designed
mDNS to be stand-alone capable, it can work in concert with unicast
DNS servers.
| How do I figure out where wrong local dns results are coming from? |
1,395,964,235,000 |
Using dig I can query a specific DNS server for some DNS records, for instance
dig example.com A @192.168.1.1
Where in this instance 192.168.1.1 is my router's ip.
Is there a way, using dig or any other program, to find out what DNS servers my router is using? (when it doesn't have the query cached)
I have limited access to the router due to restrictions of the ISP. So in the web interface I cannot find anything.
|
You can use the +trace option to dig to see the entire sequence of queries, from your system to root servers, all the way down to the authoritative servers.
| Finding out what DNS server are being used |
1,395,964,235,000 |
On Ubuntu 14.04, when I'm performing a
dig google.de
on my machine, I get a REFUSED status (reducing to relevant lines):
me@machine:~# dig google.de
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 26926
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
My /etc/resolv.conf knows three nameservers:
nameserver 1.2.3.4
nameserver 1.2.3.5
nameserver 8.8.8.8
where the first two are corporate owned nameservers. But at least the last one (8.8.8.8) shouldn't return a refused status. So how can I enable recursion so that the last nameserver is considered?
|
The DNS resolver will only move on to the other name servers if the first one returns an error (i.e. SERVFAIL) or can't be reached. If the DNS server returns NXDOMAIN then the resolver considers that the proper answer and won't check the others. NXDOMAIN is considered a final, definitive answer that the requested domain does not exist.
In your case the first nameserver is reached and is denying you.
In that nameserver's named.conf you should have something like allow-query { any; };
Or
One solution might be to temporarily change the order of the nameservers in /etc/resolv.conf & put 8.8.8.8 first
Or
Just to direct dig to use 8.8.8.8 as the DNS server at command line you can do :
dig @8.8.8.8 google.de
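When scripting a fallback across servers, it can help to extract the status code from dig's header output; a minimal sketch (the function name is mine):

```shell
# Extract the status (NOERROR, NXDOMAIN, REFUSED, ...) from a dig
# response header read on stdin.
dig_status() {
    sed -n 's/.*status: \([A-Z]*\),.*/\1/p'
}

# Example with the header line from the question:
printf ';; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 26926\n' | dig_status   # prints: REFUSED
```

A wrapper could then retry against 8.8.8.8 only on REFUSED or SERVFAIL, and treat NXDOMAIN as final.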
| How to enable nameserver recursion? |
1,395,964,235,000 |
Result of dig -6 google.com:
; <<>> DiG 9.8.3-P1 <<>> -6 google.com
;; global options: +cmd
;; connection timed out; no servers could be reached
What does it mean if dig -4 google.com works correctly? Does it mean that my provider doesn't support IPv6?
Update
My /etc/resolv.conf
#
# Mac OS X Notice
#
# This file is not used by the host name and address resolution
# or the DNS query routing mechanisms used by most processes on
# this Mac OS X system.
#
# This file is automatically generated.
#
nameserver 192.168.88.1
192.168.88.1 is my router
|
-4/-6 tells dig to only use IPv4/IPv6 connectivity to carry your query to the nameserver - it doesn't change whether to query for A records (IPv4) or AAAA records (IPv6), if that's what you intended. If dig -4 works but dig -6 doesn't, it just means that your local nameserver can't be reached via IPv6, which can have various reasons. Sure, not having IPv6 connectivity is among them, but it's unfortunately also common for some home routers not to act as a DNS forwarder on IPv6. They don't strictly need to, since your machine can use IPv4 to query for AAAA records.
If you want to quickly check if you can reach google.com via IPv6, you could do
ping6 google.com
| Why does dig -6 google.com not work for me? [closed] |
1,395,964,235,000 |
I have a lab set up with DNS running on a CentOS7 server (dns01.local.lab). The local.lab domain is defined in named.conf:
zone "local.lab" IN {
type master;
file "local.lab.zone";
allow-update { none; };
};
I also have a reverse zone but that doesn't matter for this question as far as I can tell.
The zone file looks like:
$TTL 86400
@ IN SOA dns01.local.lab. root.local.lab. (
1 ; Serial
3600 ; Refresh
1800 ; Retry
604800 ; Expire
86400 ; Minimum TTL
)
@ IN NS dns01.local.lab.
@ IN A 192.168.122.100
@ IN A 192.168.122.1
dns01 IN A 192.168.122.100
virt-host IN A 192.168.122.1
If I use nslookup using just the hostname I get a resolved IP:
[root@dns01 ~]# nslookup dns01
Server: 192.168.122.100
Address: 192.168.122.100#53
Name: dns01.local.lab
Address: 192.168.122.100
However, if I use dig using just the hostname I do not get the expected response:
[root@dns01 ~]# dig dns01
; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.2 <<>> dns01
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 9070
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;dns01. IN A
;; AUTHORITY SECTION:
. 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2016020401 1800 900 604800 86400
;; Query time: 95 msec
;; SERVER: 192.168.122.100#53(192.168.122.100)
;; WHEN: Thu Feb 04 09:15:07 HST 2016
;; MSG SIZE rcvd: 109
I only get the expected response when I use the FQDN:
[root@dns01 ~]# dig dns01.local.lab
; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.2 <<>> dns01.local.lab
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9070
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;dns01.local.lab. IN A
;; ANSWER SECTION:
dns01.local.lab. 86400 IN A 192.168.122.100
;; AUTHORITY SECTION:
local.lab. 86400 IN NS dns01.local.lab.
;; Query time: 8 msec
;; SERVER: 192.168.122.100#53(192.168.122.100)
;; WHEN: Thu Feb 04 09:22:15 HST 2016
;; MSG SIZE rcvd: 74
Reverse lookups with dig provide the expected answer. Likewise with nslookup.
I know that dig and nslookup use different resolver libraries, but from what I understand dig is considered the better way.
As the results above indicate, the correct named server is being queried. It's as if dig doesn't recognize that the server is the authority for hostname being queried.
named.conf:
options {
listen-on port 53 { 127.0.0.1; 192.168.122.100; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query {localhost; 192.168.122.0/24; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
zone "local.lab" IN {
type master;
file "local.lab.zone";
allow-update { none; };
};
zone "122.168.192.in-addr.arpa" IN {
type master;
file "local.lab.revzone";
allow-update { none; };
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
|
Does dig +search dns01 give you what you want? If so, is it possible that +nosearch somehow got added to your ~/.digrc?
ETA: Or, if you're like me, maybe the dig fairies failed to come and add +search to your ~/.digrc.
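For reference, ~/.digrc is just a list of default command-line options, one per line, so making dig honour the search list by default is a one-line file (contents below are an illustration, not from the asker's system):

```
# ~/.digrc -- every option listed here is applied to each dig invocation
+search
```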
| dig does not resolve unqualified domain names, but nslookup does |
1,395,964,235,000 |
I'm experimenting with a Win10 IoT board that runs a web interface on minwinpc.local. This works fine in the browser, and also when I use ping.
However, when I use dig or nslookup, I cannot get resolve working.
How can ping and the browser possibly get the IP if the more basic tools fail to do the resolve?
Setup is just a DragonBoard with Win10 IoT Core, connected to an iPhone hotspot. Client that tries connecting is running macOS Sierra. No special hosts or resolve files have been adjusted.
ping
$ping minwinpc.local
PING minwinpc.local (172.20.10.3): 56 data bytes
64 bytes from 172.20.10.3: icmp_seq=0 ttl=128 time=6.539 ms
dig
$ dig minwinpc.local any @172.20.10.1
; <<>> DiG 9.8.3-P1 <<>> minwinpc.local any @172.20.10.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 61796
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;minwinpc.local. IN ANY
;; Query time: 51 msec
;; SERVER: 172.20.10.1#53(172.20.10.1)
;; WHEN: ...
;; MSG SIZE rcvd: 35
nslookup
$ nslookup minwinpc.local
Server: 172.20.10.1
Address: 172.20.10.1#53
** server can't find minwinpc.local: NXDOMAIN
Related questions:
https://stackoverflow.com/questions/45616546
MSDN forums (same question)
|
This is not a problem of a more basic protocol not working, but rather that there are multiple name service resolution protocols being used; ping here understands multicast DNS (mDNS) and is able to resolve the name minwinpc.local to an IP address via that protocol. dig and nslookup by contrast may only understand or use the traditional DNS protocol, which knows nothing about mDNS, and thus fail.
The .local domain is a clear indicator of mDNS (via a web search on ".local domain"); more can be read about it in [RFC 6762]. Another option for debugging a situation like this would be to run tcpdump or WireShark and look for packets that contain minwinpc.local; this may reveal the mDNS traffic.
Still another option would be to nmap the IP of the minwinpc.local device; this may well show that the device is listening on UDP/5353 and then one can research what services that port is used for (and then one could sudo tcpdump udp port 5353 to inspect what traffic involves that port).
| dig / nslookup cannot resolve, but ping can |
1,395,964,235,000 |
Situation:
Linux machine running in Azure
looking for a public domain that returns 112 results
the packet response size is 1905 bytes
Case 1:
interrogating google DNS 8.8.8.8 - it returns un-truncated response. Everything is OK.
Case 2:
interrogating Azure DNS 168.63.129.16 - it returns a truncated response and tries to switch to TCP, but it fails there, with error "unable to connect to server address". However, it works perfectly well if I run the interrogation with "sudo".
The problem can be reproduced all the time:
Without sudo:
$ dig aerserv-bc-us-east.bidswitch.net @8.8.8.8
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>> aerserv-bc-us-east.bidswitch.net @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49847
;; flags: qr rd ra; QUERY: 1, ANSWER: 112, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;aerserv-bc-us-east.bidswitch.net. IN A
;; ANSWER SECTION:
aerserv-bc-us-east.bidswitch.net. 119 IN CNAME bidcast-bcserver-gce-sc.bidswitch.net.
bidcast-bcserver-gce-sc.bidswitch.net. 119 IN CNAME bidcast-bcserver-gce-sc-multifo.bidswitch.net.
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 59 IN A 35.211.189.137
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 59 IN A 35.211.205.98
--------
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 59 IN A 35.211.28.65
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 59 IN A 35.211.213.32
;; Query time: 12 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Thu Oct 03 22:28:09 EEST 2019
;; MSG SIZE rcvd: 1905
[azureuser@testserver~]$ dig aerserv-bc-us-east.bidswitch.net
;; Truncated, retrying in TCP mode.
;; Connection to 168.63.129.16#53(168.63.129.16) for aerserv-bc-us-east.bidswitch.net failed: timed out.
;; Connection to 168.63.129.16#53(168.63.129.16) for aerserv-bc-us-east.bidswitch.net failed: timed out.
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>> aerserv-bc-us-east.bidswitch.net
;; global options: +cmd
;; connection timed out; no servers could be reached
;; Connection to 168.63.129.16#53(168.63.129.16) for aerserv-bc-us-east.bidswitch.net failed: timed out.
With sudo:
[root@testserver ~]# dig aerserv-bc-us-east.bidswitch.net
;; Truncated, retrying in TCP mode.
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>> aerserv-bc-us-east.bidswitch.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8941
;; flags: qr rd ra; QUERY: 1, ANSWER: 112, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280
;; QUESTION SECTION:
;aerserv-bc-us-east.bidswitch.net. IN A
;; ANSWER SECTION:
aerserv-bc-us-east.bidswitch.net. 120 IN CNAME bidcast-bcserver-gce-sc.bidswitch.net.
bidcast-bcserver-gce-sc.bidswitch.net. 120 IN CNAME bidcast-bcserver-gce-sc-multifo.bidswitch.net.
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 60 IN A 35.211.56.153
.......
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 60 IN A 35.207.61.237
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 60 IN A 35.207.23.245
;; Query time: 125 msec
;; SERVER: 168.63.129.16#53(168.63.129.16)
;; WHEN: Thu Oct 03 22:17:18 EEST 2019
;; MSG SIZE rcvd: 1905
I checked everything I could find on the internet, but nowhere did I see an explanation of why this works as intended only when run from the root account or with sudo permissions when the response packet is too big and gets truncated, forcing the DNS query to switch from UDP to TCP.
Adding "options edns0" or "options use-vc" or "options edns0 use-vc" to /etc/resolv.conf doesn't help either.
Same behavior in CentOS 7.x, Ubuntu 16.04 and 18.04
Update: tested with curl and telnet, the behavior is the same. Works with sudo or from root account, fails without sudo or from standard account.
Can anyone please provide some insight about why it needs superuser permissions when switching from UDP to TCP and help with some solution, if any?
UPDATE:
I know this is long post, but please read it all before answering.
Firewall is set to allow any to any.
Port 53 is open on TCP and UDP in all the test environments I have.
SELinux/AppArmor is disabled.
Update2:
Debian9 (kernel 4.19.0-0.bpo.5-cloud-amd64 ) works correctly without the sudo.
RHEL8 (kernel 4.18.0-80.11.1.el8_0.x86_64) works correcly, but with huge delays (up to 30sec), without sudo.
Update3:
List of distributions I was able to test and it doesn't work:
RHEL 7.6, kernel 3.10.0-957.21.3.el7.x86_64
CentOS 7.6, kernel 3.10.0-862.11.6.el7.x86_64
Oracle7.6, kernel 4.14.35-1902.3.2.el7uek.x86_64
Ubuntu14.04, kernel 3.10.0-1062.1.1.el7.x86_64
Ubuntu16.04, kernel 4.15.0-1057-azure
Ubuntu18.04, kernel 5.0.0-1018-azure
Ubuntu19.04, kernel 5.0.0-1014-azure
SLES12-SP4, kernel 4.12.14-6.23-azure
SLES15, kernel 4.12.14-5.30-azure
So, basically the only distribution I tested that is without problems is Debian 9. Since RHEL 8 has huge delays, which may trigger timeouts, I cannot consider it fully working.
So far, the biggest difference between Debian 9 and the rest of the distributions I tested is systemd (missing on Debian 9)... not sure how to check if this is the cause.
Thank you!
|
"Can anyone please provide some insight about why this works like this and help with some solution, if any?"
SHORT ANSWER:
A default Azure VM is created with broken DNS: systemd-resolved needs further configuration. sudo systemctl status systemd-resolved will quickly confirm this. /etc/resolv.conf points to 127.0.0.53, a local unconfigured stub resolver.
The local stub resolver systemd-resolved was unconfigured. It had no forwarder set so after hitting 127.0.0.53 it had nobody else to ask. Ugh. Jump to the end to see how to configure it for Ubuntu 18.04.
If you care about how that conclusion was reached, then please read the Long Answer.
LONG ANSWER:
Why DNS Responses Are Truncated over 512 Bytes:
TCP [RFC793] is always used for full zone transfers (using AXFR) and
is often used for messages whose sizes exceed the DNS protocol's
original 512-byte limit.
Source: https://www.rfc-editor.org/rfc/rfc7766
ANALYSIS:
This was trickier than I thought. So I spun-up an Ubuntu 18.04 VM in Azure so I could test from the vantage point of the OP:
My starting point was to validate nothing was choking-off the DNS queries:
sudo iptables -nvx -L
sudo apparmor_status
All chains in the iptables had their default policy set to ACCEPT and although Apparmor was set to "enforcing", it wasn't on anything involved with DNS. So no connectivity or permissions issues observed on the host at this point.
Next I needed to establish how the DNS queries were winding through the gears.
cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
search ns3yb2bs2fketavxxx3qaprsna.zx.internal.cloudapp.net
So according to resolv.conf, the system expects a local stub resolver called systemd-resolved. Checking the status of systemd-resolved per the hint given in the text above we see it's erroring:
sudo systemctl status systemd-resolved
● systemd-resolved.service - Network Name Resolution
Loaded: loaded (/lib/systemd/system/systemd-resolved.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-10-08 12:41:38 UTC; 1h 5min ago
Docs: man:systemd-resolved.service(8)
https://www.freedesktop.org/wiki/Software/systemd/resolved
https://www.freedesktop.org/wiki/Software/systemd/writing-network-configuration-managers
https://www.freedesktop.org/wiki/Software/systemd/writing-resolver-clients
Main PID: 871 (systemd-resolve)
Status: "Processing requests..."
Tasks: 1 (limit: 441)
CGroup: /system.slice/systemd-resolved.service
└─871 /lib/systemd/systemd-resolved
Oct 08 12:42:14 test systemd-resolved[871]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
<Snipped repeated error entries>
/etc/nsswitch.conf sets the order of sources used to resolve DNS queries. What does this tell us?
hosts: files dns
Well, the DNS queries will never hit the local systemd-resolved stub resolver as it's not specified in /etc/nsswitch.conf.
Are the forwarders even set for the systemd-resolved stub resolver?!?!? Let's review that configuration in /etc/systemd/resolved.conf
[Resolve]
#DNS=
#FallbackDNS=
#Domains=
#LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#Cache=yes
#DNSStubListener=yes
Nope: systemd-resolved has no forwarder set to ask if a local ip:name mapping is not found.
The net result of all this is:
/etc/nsswitch.conf sends DNS queries to DNS if no local IP:name mapping found in /etc/hosts
The DNS server to be queried is 127.0.0.53, and we just saw from reviewing its config file /etc/systemd/resolved.conf that it is not configured. With no forwarder specified there, there's no way we'll successfully resolve anything.
TESTING:
I tried to override the stub resolver 127.0.0.53 by directly specifying 168.63.129.16. This failed:
dig aerserv-bc-us-east.bidswitch.net 168.63.129.16
; <<>> DiG 9.11.3-1ubuntu1.9-Ubuntu <<>> aerserv-bc-us-east.bidswitch.net 168.63.129.16
;; global options: +cmd
;; connection timed out; no servers could be reached
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 24224
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;168.63.129.16. IN A
;; Query time: 13 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Oct 08 13:26:07 UTC 2019
;; MSG SIZE rcvd: 42
Nope: seeing ;; SERVER: 127.0.0.53#53(127.0.0.53) in the output tells us that we've not overridden it and the local, unconfigured stub resolver is still being used.
However using either of the following commands overrode the default 127.0.0.53 stub resolver and therefore succeeded in returning NOERROR results:
sudo dig aerserv-bc-us-east.bidswitch.net @168.63.129.16
or
dig +trace aerserv-bc-us-east.bidswitch.net @168.63.129.16
So any queries that relied on using the systemd-resolved stub resolver were doomed until it was configured.
SOLUTION:
My initial (incorrect) belief was that TCP/53 was being blocked: the whole "Truncated 512" message was a bit of a red herring. The stub resolver was simply not configured. I made the assumption (I know, I know, "NEVER ASSUME" ;-) ) that DNS was otherwise configured.
How to configure systemd-resolved:
Ubuntu 18.04
Edit the hosts directive in /etc/nsswitch.conf as below by prepending resolve to set systemd-resolved as the first source of DNS resolution:
hosts: resolve files dns
Edit the DNS directive (at a minimum) in /etc/systemd/resolved.conf to specify your desired forwarder, which in this example would be:
[Resolve]
DNS=168.63.129.16
Restart systemd-resolved:
sudo systemctl restart systemd-resolved
RHEL 8:
Red Hat almost does everything for you in respect to setting up systemd-resolved as a stub resolver, except they didn't tell the system to use it!
Edit the hosts directive in /etc/nsswitch.conf as below by prepending resolve to set systemd-resolved as the first source of DNS resolution:
hosts: resolve files dns
Then restart systemd-resolved:
sudo systemctl restart systemd-resolved
Source: https://www.linkedin.com/pulse/config-rhel8-local-dns-caching-terrence-houlahan/
CONCLUSION:
Once systemd-resolved was configured my test VM's DNS behaved in the expected way. I think that about does it....
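As a quick self-check after reconfiguring, the ;; SERVER: line in dig's output tells you which resolver actually answered (the crux of the diagnosis above). A tiny filter makes it easy to eyeball; answering_server is my own helper name, not part of dig:

```shell
# Print just the address of the server that answered a dig query, taken
# from the ";; SERVER: 127.0.0.53#53(127.0.0.53)" line dig emits.
answering_server() {
    awk -F'[()]' '/^;; SERVER:/ {print $2}'
}
```

For example, `dig example.com | answering_server` prints the address of the resolver dig consulted.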
| Unable to run DNS queries when response is bigger than 512 Bytes and truncated |
1,395,964,235,000 |
I have been doing an effort to go full on DNSSEC on my system with the following setup:
dnscrypt-proxy installed, up and running on 127.0.0.1 with require_dnssec = true
systemd-resolved running, with DNSSEC=yes and DNS=127.0.0.1
only nameserver 127.0.0.1 in /etc/resolv.conf
connected through NetworkManager to a WiFi network about which I know DHCP configuration sets 8.8.8.8 and 8.8.8.4 as DNS servers
/run/systemd/resolve/resolv.conf lists 8.8.8.8 and 8.8.8.4 below 127.0.0.1.
resolvectl status shows
DNSSEC setting: yes
DNSSEC supported: yes
Current DNS Server: 127.0.0.1
DNS Servers: 127.0.0.1
in the Global section, but
DNSSEC setting: yes
DNSSEC supported: yes
Current DNS Server: 8.8.8.8
DNS Servers: 8.8.8.8
8.8.8.4
in my interface's section (why?).
tcpdump shows no activity at all on udp:53 when using a web browser, dig, or other normal usage. This I take to mean that my local dnscrypt-proxy is dealing with all DNS requests on my system. I also assume that because of the configuration settings mentioned above, I am going DNSSEC all the way.
However, from time to time the journal contains lines like:
Nov 30 09:10:41 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question v.dropbox.com IN SOA: failed-auxiliary
Nov 30 09:10:41 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question bolt.v.dropbox.com IN DS: failed-auxiliary
Nov 30 09:10:41 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question bolt.v.dropbox.com IN SOA: failed-auxiliary
Nov 30 09:10:41 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question bolt.v.dropbox.com IN A: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question v.dropbox.com IN SOA: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question d.v.dropbox.com IN A: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question v.dropbox.com IN SOA: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question d.v.dropbox.com IN A: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question d2e801s7grwbqs.cloudfront.net IN SOA: failed-auxiliary
Nov 30 09:10:43 tuxifaif systemd-resolved[179937]: DNSSEC validation failed for question d2e801s7grwbqs.cloudfront.net IN A: failed-auxiliary
resolvectl query v.dropbox.com results in the same DNSSEC validation error
dig v.dropbox.com works just fine
dig v.dropbox.com @8.8.8.8 also works just fine (of course resulting in two lines of output for tcpdump)
I also checked https://dnsleaktest.com, which tells me that a lot of 172.253.x.x servers are receiving a request to resolve domain names I enter into my webbrowser. These IPs seem to be owned by Google.
So, what does this mean? Is there any (non DNSSEC) querying going on on this system?
Any insights are appreciated!
|
If both dnscrypt-proxy and systemd-resolved are using 127.0.0.1:53, this should not be the case. You need to disable systemd-resolved as recommended by dnscrypt-proxy wiki, and also lock /etc/resolv.conf for possible changes made by your Network Manager. So, here are the steps:
Disable systemd-resolved:
sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved
Check if there is anything else using the same address:port pair as dnscrypt-proxy: sudo ss -lp 'sport = :domain'. If there are, disable them.
If dnscrypt-proxy is listening on 127.0.0.1:53 and resolv.conf has nameserver 127.0.0.1, add options edns0 (required to enable DNS Security Extensions) below the nameserver entry in resolv.conf so that it looks like:
nameserver 127.0.0.1
options edns0
Lock /etc/resolv.conf file for changes: sudo chattr +i /etc/resolv.conf.
You might want to restart dnscrypt-proxy.
Overall, the point is to make sure that:
Only dnscrypt-proxy is using 127.0.0.1:53
resolv.conf has the same address used by dnscrypt-proxy
resolv.conf is protected from changes made by other software such as your Network Manager.
Also, just because the DNS leak test shows Google IPs does not mean that the DNS resolver service is operated by Google. It could be that the servers are owned by Google but operated by another entity. If you don't want that, you can choose a different resolver from the dnscrypt-proxy public resolvers list. Make sure that DNSSEC support is present for the selected resolver. I personally use dnscrypt.eu resolvers, which are no-log, no-filter, non-Google and DNSSEC-enabled.
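Once everything is in place, one way to confirm that validation actually happens is to query a deliberately mis-signed zone (dnssec-failed.org is a well-known test zone for this) and check that the resolver refuses it with SERVFAIL. The helper below is a sketch of that idea; validates_dnssec is my own name:

```shell
# Return success (0) if the resolver appears to validate DNSSEC: a query
# for a known-broken zone such as dnssec-failed.org must come back as
# SERVFAIL from a validating resolver. Defaults to 127.0.0.1.
validates_dnssec() {
    dig dnssec-failed.org "@${1:-127.0.0.1}" +noall +comments \
        | grep -q 'status: SERVFAIL'
}
```

Usage: `validates_dnssec 127.0.0.1 && echo "resolver validates"`.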
References:
https://github.com/DNSCrypt/dnscrypt-proxy/wiki/Installation-linux
https://github.com/DNSCrypt/dnscrypt-proxy/wiki/systemd
| Going all-in on DNSSEC |
1,395,964,235,000 |
When trying to resolve my public IP address I get an empty string:
ip=$(dig +short myip.opendns.com @resolver1.opendns.com)
|
For some reason OpenDNS is also not working for me at work; i.e., your command is not at fault, it is simply that OpenDNS is not answering that specific query for the public IP address in some network settings.
Google also offers a similar service for finding out which public IP address you are using. Do:
ip=$(dig TXT +short o-o.myaddr.l.google.com @ns1.google.com)
As IPv6 takes precedence when present, to force an IPv4 answer, do:
ip=$(dig -4 TXT +short o-o.myaddr.l.google.com @ns1.google.com)
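Since any single provider can be unreachable (as the question shows), a fallback chain is more robust. This sketch (the myip name is mine) tries OpenDNS first and falls back to Google's TXT service, stripping the quotes that a TXT answer carries:

```shell
# Try OpenDNS first; if it returns nothing, fall back to Google's
# o-o.myaddr.l.google.com TXT record (which comes back quoted).
myip() {
    ip=$(dig +short myip.opendns.com @resolver1.opendns.com)
    [ -n "$ip" ] || ip=$(dig -4 TXT +short o-o.myaddr.l.google.com @ns1.google.com | tr -d '"')
    printf '%s\n' "$ip"
}
```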
| resolve my ip with dig returns empty string |
1,395,964,235,000 |
I would like to make dig always forget a DNS record.
I mean, if I do dig yahoo.com then I get a record back with a TTL of 1790 seconds.
Even though I have no caching service installed, the next time I run the same command, the TTL has decreased.
Somehow, dig does remember the answer. Is it possible to clear that, so I always get a fresh answer back?
|
dig doesn’t remember queries. But it makes use of name servers listed in /etc/resolv.conf, unless the server to be queried is specified explicitly. Such servers normally accept recursive queries and have caches for their results. So dig can receive records cached by (intermediate) servers.
Use
dig +trace …
to override this behaviour, forcing it to query an authoritative server. See dig(1) for more information.
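Another way to see a fresh TTL is to ask one of the zone's own (authoritative) nameservers directly, which reports the configured TTL rather than a counted-down cached one. A small sketch of that idea (auth_ttl is my own helper name):

```shell
# Look up the zone's first NS record, then ask that server directly;
# the second field of the answer line is the authoritative TTL,
# which does not count down the way a cached TTL does.
auth_ttl() {
    ns=$(dig +short NS "$1" | head -n 1)
    dig @"$ns" "$1" +noall +answer | awk '{print $2; exit}'
}
```

Usage: `auth_ttl yahoo.com` prints the configured TTL for the first answer record.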
| Force dig to forget records |
1,395,964,235,000 |
Let's say I'm trying to lookup the IPs mail.yahoo.com, gmail.com and mail.google.com
If I execute:
dig @8.8.8.8 +nocomments +noquestion \
+noauthority +noadditional +nostats +nocmd \
gmail.com mail.yahoo.com mail.google.com
I get:
gmail.com. 299 IN A 173.194.123.21
gmail.com. 299 IN A 173.194.123.22
mail.yahoo.com. 0 IN CNAME login.yahoo.com.
login.yahoo.com. 0 IN CNAME ats.login.lgg1.b.yahoo.com.
ats.login.lgg1.b.yahoo.com. 0 IN CNAME ats.member.g02.yahoodns.net.
ats.member.g02.yahoodns.net. 0 IN CNAME any-ats.member.a02.yahoodns.net.
any-ats.member.a02.yahoodns.net. 17 IN A 98.139.21.169
mail.google.com. 0 IN CNAME googlemail.l.google.com.
googlemail.l.google.com. 243 IN A 173.194.123.21
googlemail.l.google.com. 243 IN A 173.194.123.22
Can I ensure that if I see a CNAME record, the A record corresponding to it won't appear before a CNAME corresponding to another machine or an A record for another hostname?
For instance, let me focus on mail.yahoo.com (I just want the IP or IPs mail.yahoo.com resolves to):
This is the output:
mail.yahoo.com. 0 IN CNAME login.yahoo.com.
login.yahoo.com. 0 IN CNAME ats.login.lgg1.b.yahoo.com.
ats.login.lgg1.b.yahoo.com. 0 IN CNAME ats.member.g02.yahoodns.net.
ats.member.g02.yahoodns.net. 0 IN CNAME any-ats.member.a02.yahoodns.net.
any-ats.member.a02.yahoodns.net. 17 IN A 98.139.21.169
The hostname I'm looking for (mail.yahoo.com) is the first column of the first entry. Then there's a bunch of CNAMEs I really don't care about, and then an A record with the actual IP (which I do care about).
Is there a possibility of getting the CNAMEs or A records out of order? Something like:
ats.login.lgg1.b.yahoo.com. 0 IN CNAME ats.member.g02.yahoodns.net. #(!)BAD
ats.member.g02.yahoodns.net. 0 IN CNAME any-ats.member.a02.yahoodns.net. #(!)BAD
mail.yahoo.com. 0 IN CNAME login.yahoo.com.
login.yahoo.com. 0 IN CNAME ats.login.lgg1.b.yahoo.com.
any-ats.member.a02.yahoodns.net. 17 IN A 98.139.21.169
Or even worse (the actual A record on top):
any-ats.member.a02.yahoodns.net. 17 IN A 98.139.21.169
mail.yahoo.com. 0 IN CNAME login.yahoo.com.
login.yahoo.com. 0 IN CNAME ats.login.lgg1.b.yahoo.com.
ats.login.lgg1.b.yahoo.com. 0 IN CNAME ats.member.g02.yahoodns.net.
ats.member.g02.yahoodns.net. 0 IN CNAME any-ats.member.a02.yahoodns.net.
Or the worst of the worst (in a multi-resolution dig execution, as the one shown at the top of the post):
ats.member.g02.yahoodns.net. 0 IN CNAME any-ats.member.a02.yahoodns.net.
any-ats.member.a02.yahoodns.net. 17 IN A 98.139.21.169
mail.google.com. 0 IN CNAME googlemail.l.google.com. # This one I want
gmail.com. 299 IN A 173.194.123.21 # This one I want
gmail.com. 299 IN A 173.194.123.22 # This one I want
mail.yahoo.com. 0 IN CNAME login.yahoo.com. # This one I want
login.yahoo.com. 0 IN CNAME ats.login.lgg1.b.yahoo.com.
ats.login.lgg1.b.yahoo.com. 0 IN CNAME ats.member.g02.yahoodns.net.
googlemail.l.google.com. 243 IN A 173.194.123.21
googlemail.l.google.com. 243 IN A 173.194.123.22
|
dig does not reorder the results, it shows them in the order that the nameserver returns them. Nameservers normally shuffle the results (either randomly or round-robin) each time they're queried for a particular record (to implement a simple form of load balancing), although there may be server configuration options that override this. In the case of BIND, the relevant options are rrset-order and sortlist.
As far as I can tell, if you perform multiple queries with a single dig invocation, it's as if you had executed dig separately for each name, in that order. I can't imagine why the code wouldn't just loop through them in the order they're on the command line.
If the server has to follow CNAME records to get the final answer, the DNS specification says that each alias will be added to the response in the order they're processed. So you're guaranteed that the original name you gave will be first, and the final results will be last.
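If you'd rather not depend on ordering at all, you can reconstruct the chain yourself from the answer records. The sketch below (follow_chain is my own helper, not a dig feature) follows CNAMEs from a starting name regardless of the order dig printed them in:

```shell
# Read dig answer lines on stdin ("<name>. <ttl> IN <type> <rdata>"),
# follow the CNAME chain starting at $1 (include the trailing dot),
# and print the A records the chain ends at. Bails out after 32 hops
# so a CNAME loop cannot hang it.
follow_chain() {
    awk -v name="$1" '
        $4 == "CNAME" { cname[$1] = $5 }
        $4 == "A"     { a[$1] = (a[$1] ? a[$1] ORS $5 : $5) }
        END {
            n = name
            for (i = 0; i < 32 && (n in cname); i++) n = cname[n]
            print a[n]
        }'
}
```

Usage: `dig +noall +answer mail.yahoo.com | follow_chain mail.yahoo.com.`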
| Dig command: Is the output guaranteed to be sorted? |
1,395,964,235,000 |
Recently I wondered how a root server returns information about domains it doesn't hold itself.
I thought that a root DNS server, for example a.root-servers.net, doesn't perform recursive queries itself but instead returns a referral: an RR pointing to the nameservers for the TLD of the query.
I issued a query for twitter.com hoping to get RRs for the nameservers of com.,
but got:
dig @a.root-servers.net twitter.com +norecurse
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @a.root-servers.net twitter.com +norecurse
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51937
;; flags: qr ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;twitter.com. IN A
;; ANSWER SECTION:
twitter.com. 748 IN A 104.244.42.65
twitter.com. 748 IN A 104.244.42.193
;; Query time: 27 msec
;; SERVER: 198.41.0.4#53(198.41.0.4)
;; WHEN: Thu Dec 29 17:54:09 MSK 2016
;; MSG SIZE rcvd: 61
Could you explain why the root server returns the IP of Twitter? It seems that it should return a referral only.
Correct me if I'm wrong. Thanks.
|
Unable to replicate:
$ dig @a.root-servers.net twitter.com +norecurse
; <<>> DiG 9.8.3-P1 <<>> @a.root-servers.net twitter.com +norecurse
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47005
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 14
;; QUESTION SECTION:
;twitter.com. IN A
;; AUTHORITY SECTION:
com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
com. 172800 IN NS c.gtld-servers.net.
com. 172800 IN NS d.gtld-servers.net.
com. 172800 IN NS e.gtld-servers.net.
com. 172800 IN NS f.gtld-servers.net.
com. 172800 IN NS g.gtld-servers.net.
com. 172800 IN NS h.gtld-servers.net.
com. 172800 IN NS i.gtld-servers.net.
com. 172800 IN NS j.gtld-servers.net.
com. 172800 IN NS k.gtld-servers.net.
com. 172800 IN NS l.gtld-servers.net.
com. 172800 IN NS m.gtld-servers.net.
;; ADDITIONAL SECTION:
a.gtld-servers.net. 172800 IN A 192.5.6.30
b.gtld-servers.net. 172800 IN A 192.33.14.30
c.gtld-servers.net. 172800 IN A 192.26.92.30
d.gtld-servers.net. 172800 IN A 192.31.80.30
e.gtld-servers.net. 172800 IN A 192.12.94.30
f.gtld-servers.net. 172800 IN A 192.35.51.30
g.gtld-servers.net. 172800 IN A 192.42.93.30
h.gtld-servers.net. 172800 IN A 192.54.112.30
i.gtld-servers.net. 172800 IN A 192.43.172.30
j.gtld-servers.net. 172800 IN A 192.48.79.30
k.gtld-servers.net. 172800 IN A 192.52.178.30
l.gtld-servers.net. 172800 IN A 192.41.162.30
m.gtld-servers.net. 172800 IN A 192.55.83.30
a.gtld-servers.net. 172800 IN AAAA 2001:503:a83e::2:30
;; Query time: 31 msec
;; SERVER: 198.41.0.4#53(198.41.0.4)
;; WHEN: Thu Dec 29 08:32:23 2016
;; MSG SIZE rcvd: 489
| How does root(DNS) server could answer about twitter.com? |
1,395,964,235,000 |
I'm trying to fetch DNS data without any local or ISP resolvers using the DIG tool.
For example I try google.nl
I start at a rootserver (d.root-servers.net):
dig @199.7.91.13 google.nl
Then I take the entry from this:
;; ADDITIONAL SECTION:
ns1.dns.nl. 172800 IN A 193.176.144.5
dig @193.176.144.5 google.nl
;; AUTHORITY SECTION:
google.nl. 3600 IN NS ns2.google.com.
Now here is where I get stuck. Because google.nl uses the nameservers of google.com, it won't send any glue records.
Should I then dig from root again for google.com?
dig @199.7.91.13 google.com
;; ADDITIONAL SECTION:
a.gtld-servers.net. 172800 IN A 192.5.6.30
dig @192.5.6.30 google.com
Now in this case, google.com is using its own nameservers, so glue records are provided in the ;; ADDITIONAL SECTION:, but it is possible a domain would be using different nameservers.
Then you'd need to fetch those nameservers too; I feel like you could go on like that endlessly until you are finally served an actual IP address instead of a reference to a nameserver.
Is that how it works, or is there a shorter way to get the IP address of a nameserver (e.g. the IP address of ns2.google.com when resolving google.nl)?
|
That is the way it works. Not ideal, to be sure. Resolvers will hit a hardcoded limit if the recursion goes on for too long... See for example
https://cybermashup.com/2014/09/23/how-dns-kills-the-internet/
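The referral chase described in the question can be sketched as a small loop with a hard iteration cap, much like the limit real resolvers enforce. This is only a sketch: it assumes dig is installed and the network is reachable, the starting server and query name are illustrative, and it cheats by resolving each NS name through a recursive resolver instead of chasing it from the root as well.

```shell
#!/bin/bash
# Sketch: follow referrals by hand, starting from a root server.
name=google.nl
server=199.7.91.13                    # d.root-servers.net, as in the question
for _ in 1 2 3 4 5; do                # hard cap, like real resolvers
    reply=$(dig @"$server" "$name" +norecurse +noall +answer +authority)
    # Done as soon as we get an A record (columns: name TTL class type data).
    addr=$(printf '%s\n' "$reply" | awk '$4 == "A" {print $5; exit}')
    if [ -n "$addr" ]; then echo "$addr"; break; fi
    # Otherwise take the first NS name from the referral...
    ns=$(printf '%s\n' "$reply" | awk '$4 == "NS" {print $5; exit}')
    [ -z "$ns" ] && break
    # ...and (the cheat) look up its address with a recursive resolver.
    server=$(dig +short "$ns" | head -n 1)
done
```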
| How to dig from root to bottom? |
1,395,964,235,000 |
I've got a txt of a bunch of domains I need to run dig +short mx on.
I've got a script set to run the command, print it to a results.txt with:
./dig_domain_mx.sh > results.txt
Downside is, I need to be able to know which domains relate to each result, so my solution is to print the current line being read by the script, then print the dig output for the line, then add a linebreak.
I can't seem to find anything alluding to this being possible with dig commands alone, so how would I go about this within my Bash script?
Current Bash script, nothing special:
#!/bin/bash
dig -f domains.txt +short mx
|
To avoid running one dig and read per line of the file, you could do:
dig -f domains.txt mx +noall +answer
Which would give an output like:
stackexchange.com. 300 IN MX 5 alt1.aspmx.l.google.com.
stackexchange.com. 300 IN MX 1 aspmx.l.google.com.
stackexchange.com. 300 IN MX 10 alt3.aspmx.l.google.com.
stackexchange.com. 300 IN MX 10 alt4.aspmx.l.google.com.
stackexchange.com. 300 IN MX 5 alt2.aspmx.l.google.com.
google.com. 600 IN MX 10 aspmx.l.google.com.
google.com. 600 IN MX 30 alt2.aspmx.l.google.com.
google.com. 600 IN MX 20 alt1.aspmx.l.google.com.
google.com. 600 IN MX 40 alt3.aspmx.l.google.com.
google.com. 600 IN MX 50 alt4.aspmx.l.google.com.
You can pipe to awk '{print $1,$5,$6}' to remove the 300 IN MX.
An alternative to the while read loop could be xargs:
$ xargs -tn1 dig mx +short < domains.txt
dig mx +short stackexchange.com
1 aspmx.l.google.com.
10 alt3.aspmx.l.google.com.
10 alt4.aspmx.l.google.com.
5 alt2.aspmx.l.google.com.
5 alt1.aspmx.l.google.com.
dig mx +short google.com
20 alt1.aspmx.l.google.com.
30 alt2.aspmx.l.google.com.
40 alt3.aspmx.l.google.com.
50 alt4.aspmx.l.google.com.
10 aspmx.l.google.com.
| Bash print current line, line's output, and linebreak to file |
1,395,964,235,000 |
I've pored over the man pages and I'm pretty sure the answer is "no" but is there a way to prevent dig from resolving a CNAME record for a host?
For example:
$ dig +short mail.yahoo.com A
edge.gycpi.b.yahoodns.net.
66.218.84.40
66.218.84.44
66.218.84.41
66.218.84.45
66.218.84.42
66.218.84.43
There is not an A record for this host, so I should get no answer. It seems like A and AAAA are treated differently from any other record type in this regard.
I've tried the +norecurse and +noadditional options without success. I can easily parse the response in my script to see if it has multiple lines where the first one is a FQDN, but it feels like I shouldn't have to.
|
According to RFC 1034 you can ask for a CNAME record type, and if one exists that's what you'll get.
dig -t cname +short www.bbc.co.uk
www.bbc.co.uk.pri.bbc.co.uk.
However, there doesn't seem to be a way to ask for (say) an A record but disallow lookups through a CNAME:
dig -t cname +short uk.www.bbc.co.uk.pri.bbc.co.uk # No output
Indeed, section 3.6.2 of RFC 1034 writes that,
When a name server fails to find a desired RR in the resource set associated with the domain name, it checks to see if the resource set consists of a CNAME record with a matching class. If so, the name server includes the CNAME record in the response and restarts the query at the domain name specified in the data field of the CNAME record. The one exception to this rule is that queries which match the CNAME type are not restarted.
Per RFC terminology, as there is no "should" in this description it is a definitive course of action.
To get the behaviour you're seeking you would probably need to wrap dig in some custom code such as this,
query=www.bbc.co.uk
result=$(dig -t cname +short "$query" | xargs)
[ -z "$result" ] && result=$(dig -t a +short "$query" | xargs)
printf "Result: %s\n" "$result"
| Using dig to query an address without resolving CNAMEs |
1,395,964,235,000 |
I'm using the dig utility to get the TTL value of websites on local DNS and it always shows the same value of 5s for all websites. What can the reason be? How can I get the original DNS TTL value? Am I doing it correctly?
Also, running the command with the master/slave DNS server name of google gives 5m, while for some others I get no result at all.
I think in the case of 5s it's just a cached value and the 5m is the original value. But I don't understand why it's showing the same values for all websites.
|
What is the reason for getting TTLs of 5s for all websites?
Your local DNS server (possibly within your router) seems to be manipulating the DNS data, for unknown reasons.
How can I get the original TTL value?
You sort of answered this yourself already: by using a good DNS server instead of a manipulative one.
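One concrete way to compare, assuming dig and network access (1.1.1.1 is just one example of a public resolver): query the same name through both servers and read the second column of the answer line, which is the remaining TTL.

```shell
# TTL as seen through the configured (possibly manipulating) resolver:
dig +noall +answer example.com
# TTL as seen through a public resolver:
dig @1.1.1.1 +noall +answer example.com
# dig's answer format is "name TTL class type data", so the TTL is column 2:
dig @1.1.1.1 +noall +answer example.com | awk '{print $2}'
```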
Running the command with the master/slave DNS server name of google gives 5m, while for some others I get no result at all.
A DNS server can operate in two roles:
an authoritative DNS server will act as a complete & up-to-date source of information for one or more domains, and will ignore any requests that are not about those domains. There must be at least two public authoritative DNS servers for any publicly-accessible DNS domain. ns1.google.com is the usual name for Google's first public authoritative DNS server.
a resolving DNS server will accept requests regarding any domain, and will make further requests to other DNS servers as necessary to figure out the answers. Usually, resolving DNS servers are for the owning organization's own use and/or their clients only, but there are a number of public DNS resolvers, like Google's 8.8.8.8 and 8.8.4.4 or quad9.net's 9.9.9.9 and 149.112.112.112, or Cloudflare's 1.1.1.1.
These two DNS roles (authoritative and resolving) can be combined into a single DNS server that does both things, but the best current practice is to keep them separate whenever possible.
If you are getting your current DNS server configuration with DHCP, you could try overriding the DHCP-assigned DNS servers with one or two of above-mentioned public resolvers.
Or if your current DNS server IP address is the IP of your router, you might check the router's configuration to figure out what DNS servers the router itself uses, and then configure those to your system directly (to bypass the possibly poor/suspicious DNS implementation of the router). The router probably gets its DNS settings by DHCP or similar technology from your Internet Service Provider, and so uses your ISP's DNS servers by default.
You might also try just resetting your router. If this fixes the problem, your router might have had some DNS-affecting malware on it. Such router malware might be non-persistent, and just resetting the router might clear it (until it gets re-infected using the same vulnerability that allowed the original infection). If this is the case, you probably should see if your router's firmware can be updated to fix the vulnerability.
| Getting the same DNS TTL value for all websites |
1,395,964,235,000 |
I personally love to dig sites that I know. Here's a weird thing I saw on my terminal after running dig www-contestwinners.com:
; <<>> DiG 9.9.5-3ubuntu0.1-Ubuntu <<>> www-contestwinners.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27237
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www-contestwinners.com. IN A
;; ANSWER SECTION:
www-contestwinners.com. 300 IN A 8.29.143.192
;; Query time: 40 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Sun Jun 06 20:31:40 IST 2015
;; MSG SIZE rcvd: 67
Which is absurd, as my ISP is about 200 km away and I'm on a DSL connection. Considering that packets travel at the speed of light, the time for a packet to reach the ISP's server from my device is about 0.6 ms, which is not much; but the site's server is in the US, about 13000 km from my ISP, which makes the one-way time to the server about 43 ms. So the total round trip should take about (43+0.6)x2 = 87.2 ms.
I'm totally confused/excited. I think I just broke a basic law of Physics :p
EDIT: I checked dig www-contestwinners.com +trace to check against caching and got:-
;; global options: +cmd
. 412410 IN NS i.root-servers.net.
. 412410 IN NS k.root-servers.net.
. 412410 IN NS f.root-servers.net.
. 412410 IN NS g.root-servers.net.
. 412410 IN NS e.root-servers.net.
. 412410 IN NS l.root-servers.net.
. 412410 IN NS h.root-servers.net.
. 412410 IN NS b.root-servers.net.
. 412410 IN NS a.root-servers.net.
. 412410 IN NS m.root-servers.net.
. 412410 IN NS j.root-servers.net.
. 412410 IN NS c.root-servers.net.
. 412410 IN NS d.root-servers.net.
. 518400 IN RRSIG NS 8 0 518400 20150618050000 20150608040000 48613 . TJF2HD0Ob5niqlCZNhlOYHvwlZmEpebgV8uFwgvRLBCQb22sq+S8Hr4d CX9S5WgzRlTxCSQ3Bi9TJNlyf221rE1K53kFbRae6/vzjR2MukvF5d8G SEWinOcJ9n7l6fTq/HoxCv/GfliY6gTPWxrc8uiABdYYOj3u3XoUmbF7 Cug=
;; Received 397 bytes from 127.0.1.1#53(127.0.1.1) in 5306 ms
com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
com. 172800 IN NS c.gtld-servers.net.
com. 172800 IN NS d.gtld-servers.net.
com. 172800 IN NS e.gtld-servers.net.
com. 172800 IN NS f.gtld-servers.net.
com. 172800 IN NS g.gtld-servers.net.
com. 172800 IN NS h.gtld-servers.net.
com. 172800 IN NS i.gtld-servers.net.
com. 172800 IN NS j.gtld-servers.net.
com. 172800 IN NS k.gtld-servers.net.
com. 172800 IN NS l.gtld-servers.net.
com. 172800 IN NS m.gtld-servers.net.
com. 86400 IN DS 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766
com. 86400 IN RRSIG DS 8 1 86400 20150618050000 20150608040000 48613 . kSsNvyvdzfiJAxfpaRq4+bAe2JuKcDTcRnHDgGhiHNRsbcg04fHv/TNt Kkl0LuBpLcBWhBr74OkCLJxx5Q1KFkRhum2R7gHj6h5u8s4J84feqWeu fx69Defg2NWhToWDnqz0WzlUKF0nDsEyXTJDjsUeFrXu+baR3NSMLxvb zdU=
;; Received 746 bytes from 192.203.230.10#53(e.root-servers.net) in 5221 ms
www-contestwinners.com. 172800 IN NS dan.ns.cloudflare.com.
www-contestwinners.com. 172800 IN NS anna.ns.cloudflare.com.
CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN NSEC3 1 1 0 - CK0QFMDQRCSRU0651QLVA1JQB21IF7UR NS SOA RRSIG DNSKEY NSEC3PARAM
CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN RRSIG NSEC3 8 2 86400 20150613045314 20150606034314 33878 com. syTPVIWKgitzBOsgVgzOl7nEIsu7jhsmSXPzzLuGVUwZZC1QHc4dxmKP MkZUR+VcaY657/Knjk7Il5oOKWo8ZlTatk3+34504gWwdnbB3BShqTKS CFsWOEdw5wyf0gumuQk5GKnVR5Noo+q2+ZOxxy7LkEl0F/h7fuYj7sJA VkM=
A7FUA6LKSQCDHL6JDO7SFU649KJ6FAU5.com. 86400 IN NSEC3 1 1 0 - A7G276R35K72HBA9TE7NSAOEFTS5CADU NS DS RRSIG
A7FUA6LKSQCDHL6JDO7SFU649KJ6FAU5.com. 86400 IN RRSIG NSEC3 8 2 86400 20150614050705 20150607035705 33878 com. lIZrvjQdR4oNJTo8gW1uuzs1IuFXiqwZbI757xxBRdrYl22IDSDM4U4G i8PNVSOQ2T3ub++0VhoioWnp3aD+Uc1XmdR/jI5Z5bosIsfIrCj+CSCm ZlDShTEDsfOBLxvZ2LByGwibTHi/yuH57O+Zx3zp21RZu3xLAn2WT2aZ TrE=
;; Received 675 bytes from 192.12.94.30#53(e.gtld-servers.net) in 1539 ms
www-contestwinners.com. 300 IN A 8.29.143.192
;; Received 67 bytes from 173.245.59.108#53(dan.ns.cloudflare.com) in 53 ms
How is this possible?
|
The name server you queried isn't in the US. It's much closer to you. (So, unfortunately, your Nobel will have to wait.)
That trace output shows www-contestwinners.com is using CloudFlare for its DNS provider. CloudFlare operates numerous servers around the world and your query gets directed to the closest (or as best they can manage) server.
(Note that the name server need not be—and often isn't—anywhere near the web server. Often name servers aren't even handled by the same company.)
| Dig faster than speed of light, Possible? |
1,395,964,235,000 |
I have a big file that holds something like 40000 rows of domain names.
I would like to read that file and use dig (or something else) to look up the IP addresses of the domain names in the DNS, and print them out to another file.
How do I do this?
EDIT:
Been testing this with some of the proposed solutions. With this for most the part:
#!/bin/bash
> ips.txt
cat test.txt | while read host; do
ip=$(getent hosts "$host")
if [ $? -ne 0 ]; then
echo "Host $host was not resolved.";
continue
fi
ip=$(echo "$ip" | awk '{ print $1 }')
echo "Host: $host, IP: $ip" >> ips.txt
done
This produces a file that is empty.
Not sure why this is not working.
I tried another solution:
for host in 0.accountkit.com 0.bigclicker.me 0.cdn.ddstatic.com 0.facebook.com 0.fls.doubleclick.net 0.hiveon.net 0.mining.garden
do
# get ips to host using dig
ips=($(dig "$host" a +short | grep '^[.0-9]*$'))
for ip in "${ips[@]}";
do
printf 'allow\t\t%s\n' "$ip"
done
done > allowedip.txt
This will print the ip-addresses but problem is that I need to read the DNS names from the file and not in the script itself.
|
Another loop. This one reads a list of hostnames from hosts and writes each hostname and its zero or more IPv4 addresses to ips. I've separated the host from its list of IP addresses with a tab (\t), and each IP address is separated from the next with a space:
#!/bin/bash
while IFS= read -r host
do
if [[ -n "$host" ]]
then
ips=$(dig +short "$host" | grep '^[[:digit:].]*$' | xargs)
printf "%s\t%s\n" "$host" "$ips"
fi
done <hosts >ips
Example data:
Source file hosts
bbc.co.uk
google.co.uk
Results file ips
bbc.co.uk 151.101.192.81 151.101.128.81 151.101.64.81 151.101.0.81
google.co.uk 216.58.213.3
| Read file and print out ip-address to another file |
1,395,964,235,000 |
I'm trying to learn more about the dig command and have come across the -x option which is meant for reverse lookup, i.e. you give an IP address and get back a domain name.
I tried doing dig -x www.google.com which I guess doesn't really make sense with the -x option, but I got back this response:
; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> -x www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 2959
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;com.google.www.in-addr.arpa. IN PTR
;; AUTHORITY SECTION:
in-addr.arpa. 3180 IN SOA b.in-addr-servers.arpa. nstld.iana.org. 2015074802 1800 900 604800 3600
;; Query time: 0 msec
;; SERVER: 128.114.142.6#53(128.114.142.6)
;; WHEN: Sun Oct 16 17:06:24 PDT 2016
;; MSG SIZE rcvd: 124
Can anybody help me get a better understanding of what this reponse tells us, I thought you weren't supposed to use the -x option with a domain name.
|
Notice in the response that you got back status: NXDOMAIN and ANSWER: 0. This means there was no record found matching your query.
The -x option to dig is merely a convenience for constructing a PTR query. It splits on dots, reverses it, appends in-addr.arpa., and sets the type to PTR.
The information you did get back is the SOA record for the authoritative domain (in-addr.arpa), and is for result caching. Negative lookups (queries which have no results) can be cached for a duration as specified in the SOA record.
See RFC-2308:
Name servers authoritative for a zone MUST include the SOA record of
the zone in the authority section of the response when reporting an
NXDOMAIN or indicating that no data of the requested type exists.
This is required so that the response may be cached. The TTL of this
record is set from the minimum of the MINIMUM field of the SOA record
and the TTL of the SOA itself, and indicates how long a resolver may
cache the negative answer.
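For an IPv4 address, the name construction that -x performs can be reproduced by hand. In this sketch the reverse name is built with awk (192.0.2.1 is just a documentation-range example), and the final two dig calls, which assume network access, are equivalent:

```shell
ip=192.0.2.1
# What dig -x does for IPv4: reverse the octets and append in-addr.arpa.
rev=$(printf '%s' "$ip" | awk -F. '{print $4 "." $3 "." $2 "." $1 ".in-addr.arpa."}')
echo "$rev"              # 1.2.0.192.in-addr.arpa.
# These two queries are equivalent:
dig -x "$ip" +short
dig PTR "$rev" +short
```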
| dig -x www.google.com |
1,395,964,235,000 |
I'm making a script to perform "dig ns google.com" and cut off all of the result except for the answers section.
So far I have:
#!/bin/bash
echo -n "Please enter the domain: "
read d
echo "You entered: $d"
dr="$(dig ns $d)"
sr="$(sed -i 1,10d $dr)"
tr="$(head -n -6 $sr)"
echo "$tr"
Theoretically, this should work. The sed and head commands work individually outside of the script to cut off the first 10 and last 6 respectively, but when I put them inside my script sed comes back with an error and it looks like it's trying to read the variable as part of the command rather than the input. The error is:
sed: invalid option -- '>'
So far I haven't been able to find a way for it to read the variable as input. I've tried surrounding it in "" and '' but that doesn't work. I'm new to this whole bash scripting thing obviously, any help would be great!
|
sed takes its input from stdin, not from the command line, so your script won't work either theoretically or practically. sed -i 1,10d $dr does not do what you think it does...sed will treat the value of "$dr" as a list of filenames to process.
Try echo "$dr" | sed -e '1,10d' or sed -e '1,10d' <<<"$dr".
BTW, you must use double-quotes around "$dr" here, otherwise sed will not get multiple lines of input separated by \n, it will only get one input line.
Or better yet, to get only the NS records, replace all of your sed commands with just this one command:
tr=$(echo "$dr" | sed -n -e '/IN[[:space:]]\+NS[[:space:]]/p')
Alternatively, eliminate all the intermediate steps and just run this:
tr=$(dig ns "$d" | sed -n -e '/IN[[:space:]]\+NS[[:space:]]/p')
Or you can get just the nameservers' hostnames with:
tr=$(dig ns "$d" | awk '/IN[[:space:]]+NS[[:space:]]/ {print $5}')
BTW, the output of host -t ns domain.example.com may be easier to parse than the output of dig.
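As a quick offline check of the pattern, you can feed a canned dig answer line through the same awk command (the hostname is just sample data):

```shell
sample='google.com.   21599   IN   NS   ns2.google.com.'
printf '%s\n' "$sample" | awk '/IN[[:space:]]+NS[[:space:]]/ {print $5}'
# prints: ns2.google.com.
```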
| How can I get my bash script to remove the first n and last n lines from a variable? |
1,395,964,235,000 |
When I was scrutinizing the DNS query sent by dig, I found something odd compared to what the RFCs state. I don't know the different variants of DNS protocols out there, but from RFC 1035:
Z : Reserved for future use. Must be zero in all queries and responses.
This is the memory snapshot of the received DNS query, exported from the eclipse IDE in the debugging mode.
00000000 F5 F4 01 20 00 01 00 00 00 00 00 01 03 77 77 77 06 67 6F 6F 67 6C 65 03 63 6F 6D 00 00 01 00 01 00 00 29 10 00 00 00 00 ... .........www.google.com.......).....
00000028 00 00 0C 00 0A 00 08 1F 0A 12 91 67 67 B0 B9 ...........gg..
Mapping the corresponding bits to DNS fields (From the RFC 1035)
F5F4 -> DNS Transaction ID.
01 -> QR, Opcode, AA, TC, RD
20 -> 00100000 => RA -> 0 , Z -> 010
Z is non zero! Does dig use some other variant of DNS, if at all the DNS variants exists? Or should it be considered as an issue in the dig?
|
There are not multiple variants of DNS, but RFC 1035 was amended on this subject by RFC 2535, "DNS Security Extensions".
Its section 6.1 shows the message format:
1 1 1 1 1 1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ID |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|QR| Opcode |AA|TC|RD|RA| Z|AD|CD| RCODE |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| QDCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ANCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| NSCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ARCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
As you can see, the previous 3-bit Z field is now split between Z, AD and CD.
Hence, your 0x20 = 0b00100000 is to be split as follows:
RA = 0 : recursion NOT available from the server replying this message
Z = 0 : as expected, should be 0
AD = 1 : "all the data included in the answer and authority
portion of the response has been authenticated by the server
according to the policies of that server."
CD = 0 : checking not disabled (makes sense in query, not in a response, it is defined as "Pending (non-authenticated) data is
acceptable to the resolver sending the query.")
RCODE = 0 : No error
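As a sanity check, the second flags byte can be decoded with plain shell arithmetic, mirroring the layout above (RA is bit 7, Z bit 6, AD bit 5, CD bit 4, and RCODE the low four bits):

```shell
flags=0x20   # the second flags byte captured from the query
echo "RA=$(( (flags >> 7) & 1 ))"     # RA=0
echo "Z=$(( (flags >> 6) & 1 ))"      # Z=0
echo "AD=$(( (flags >> 5) & 1 ))"     # AD=1
echo "CD=$(( (flags >> 4) & 1 ))"     # CD=0
echo "RCODE=$(( flags & 0xF ))"       # RCODE=0
```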
DNS specifications are complicated, often ambiguous, split in many documents, etc.
This recent effort can help in finding your way around them:
https://powerdns.org/hello-dns/
| Understanding the dig's DNS query: Does dig set non zero value for Z field? |
1,395,964,235,000 |
I need to check mail servers' IP addresses from a list of domains to see if they match a certain IP address. Specifically:
Build a list of the domains I want to query
Dig the MX record(s) of each domain
Dig the A record(s) of the results of the MX record query for the IP address
If any of the IPs match a specific IP, return a "yes" or "no"
I'm stuck at step 3.
Here's the relevant portion of my script so far
#!/bin/bash
# Bulk DNS Lookup
#
# File name/path of domain list:
domain_list='domains.txt' # One FQDN per line in file.
# File name of output text
output='ns_output.txt'
# Clears previous output
> $output
# IP address of the nameserver used for lookups:
ns_ip='192.168.250.67'
#
# Seconds to wait between lookups:
loop_wait='1' # Is set to 1 second.
for domain in `cat $domain_list` # Start looping through domains
do
echo $domain "Mail servers" >> $output
MX=$(dig @$ns_ip MX $domain +short) #query MX records from domain list and store it as variable $MX
echo $MX >> $output;
echo " " >> $output
echo " " >> $output
sleep $loop_wait # Pause before the next lookup to avoid flooding NS
done;
The problem is I don't know how to turn the output into a variable so that I can run another A record dig.
c****s.com Name Servers
c****s.com. 14400 IN NS ns1.a****l.com. yes
c****s.com Mail servers
10 mail.c*****s.com. 20 mail2.c****s.com.
Is there any way to query the results to return an IP address for each of the servers returned from the MX query?
Edit: I tried everyone's answer and while they all would have worked, I just found Gilles' easiest to implement. Here's my final code:
MX=$(dig @$ns_ip MX $domain +short) #query MX records from domain list and store it as variable $MX
arr=( $MX ) #creates array variable for the MX record answers
for ((i=1; i<${#arr[@]}; i+=2)); #since MX records have multiple answers, for loop goes through each answer
do
echo ${arr[i]} >> $output; #outputs each A record from above MX dig
dig A +short "${arr[i]}" >> $output #queries A record for IP and writes answer
MX_IP=$(dig A +short "${arr[i]}") #sets IP address from the dig to variable MX_IP
if [[ "${arr[i]}" == *"a****d"* ]] #if the mail server host name contains a***d
then
echo "yes - spam filter" >> $output
else
if [[ $MX_IP == $CHECK_IP ]] #if not, check to see if the mail server's IP matches ours.
then
echo "yes - mail server" >> $output
else
echo "no" >> $output
fi
fi
done
Here's sample output (domains and IPs censored in a fit of paranoia):
a***l.com Mail servers lastmx.a****d.net.
85.x.x.x
209.x.x.x
95.x.x.x yes - spamfilter
....
mail.b***c.com.
72.x.x.x yes - mail server
backup.b***c.com.
50.x.x.x no
mail2.b***c.com.
50.x.x.x no
|
The way to go:
arr=( $MX )
for ((i=1; i<${#arr[@]}; i+=2)); do dig A +short "${arr[i]}"; done
Output:
108.177.15.26
209.85.233.27
172.253.118.27
108.177.97.26
173.194.202.26
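This works because dig MX +short prints "priority hostname" pairs, so after word-splitting into the array the even indices hold priorities and the odd indices hold hostnames. An offline sketch with canned data:

```shell
# Canned +short MX output (two records), as it would be word-split:
MX="10 aspmx.l.google.com. 20 alt1.aspmx.l.google.com."
arr=( $MX )                      # relies on default IFS word splitting
for ((i=1; i<${#arr[@]}; i+=2)); do
    echo "${arr[i]}"             # hostname; arr[i-1] is its priority
done
```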
| Script to query a list of domains for MX records then query the answers for the IP addresses? |
1,395,964,235,000 |
I installed dig via pacman -S bind and it hangs when started as dig:
# dig google.com
; <<>> DiG 9.16.25 <<>> google.com
;; global options: +cmd
;; connection timed out; no servers could be reached
DNS resolution works, though:
# ping google.com
PING google.com (216.58.214.174) 56(84) bytes of data.
64 bytes from mad01s26-in-f174.1e100.net (216.58.214.174): icmp_seq=1 ttl=116 time=2.76 ms
64 bytes from mad01s26-in-f14.1e100.net (216.58.214.174): icmp_seq=2 ttl=116 time=2.91 ms
# dig @192.168.10.3 google.com
; <<>> DiG 9.16.25 <<>> @192.168.10.3 google.com
(...)
;; ANSWER SECTION:
google.com. 180 IN A 216.58.214.174
;; Query time: 0 msec
;; SERVER: 192.168.10.3#53(192.168.10.3)
;; WHEN: Sat Feb 05 21:12:14 CET 2022
;; MSG SIZE rcvd: 55
It seems that dig cannot make use of the local DNS configuration - is there something I should do in addition to just installing it?
EDIT: per the comments requests:
I control the firewall and it is open out
/etc/resolv.conf is empty, /etc/systemd/resolved.conf has DNS=192.168.10.3 and FallbackDNS=8.8.8.8 1.1.1.1
dig google.com @8.8.8.8 gives the same result as above (with a different IP due to geolocalization)
traceroute -p 53 -n 8.8.8.8 goes through
|
On Arch, /etc/resolv.conf is not symlinked to /run/systemd/resolve/resolv.conf, and dig uses the former.
# rm /etc/resolv.conf && ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
solves the problem, but this is a deliberate Arch choice (see the edit below) in how resolution is set up in Arch (Ubuntu, for instance, provides the /etc/resolv.conf link).
EDIT: as @muru mentions in the comments, this is documented in Arch and the suggested solution is
ln -rfs /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
| Naked dig cannot reach servers, otherwise resolution works |
1,395,964,235,000 |
I'm a shell beginner. Here's what I want to do, but I don't know how to implement it.
Any help is appreciated, thanks in advance!
Step 1: Get the domain resolution A record via dig.
dig @8.8.8.8 liveproduseast.akamaized.net +short | tail -n1
Step 2: Form the obtained IP address and domain name into a line that looks like this.
23.1.236.106 liveproduseast.akamaized.net
Step 3: Add it to the last line of the /etc/hosts file.
127.0.0.1 localhost loopback
::1 localhost
23.1.236.106 liveproduseast.akamaized.net
Step 4: Set it up to automate the task and run it every 6 hours. When the parsing IP has changed, update it to the /etc/hosts file (replacing the previously added IP).
crontab -e
0 */6 * * * /root/test.sh > /dev/null 2>&1
|
One way to do that is to replace the old IP with the new one:
$ cat /root/test.sh
#!/bin/sh
current_ip=$(awk '/liveproduseast.akamaized.net/ {print $1}' /etc/hosts)
new_ip=$(dig @8.8.8.8 liveproduseast.akamaized.net +short | tail -n1 | grep '^[.0-9]*$')
[ -z "$new_ip" ] && exit
if sed "s/$current_ip/$new_ip/" /etc/hosts > /tmp/etchosts; then
cat /tmp/etchosts > /etc/hosts
rm /tmp/etchosts
fi
On the sed part, if you're using GNU you can simply do:
sed -i "s/$current_ip/$new_ip/" /etc/hosts
Or if you have moreutils installed
sed "s/$current_ip/$new_ip/" /etc/hosts | sponge /etc/hosts
Explanation
grep '^[.0-9]*$' catches an IP address; if there isn't one, it outputs nothing.
awk '/liveproduseast.akamaized.net/ {print $1}' /etc/hosts
Find a line which contains exactly "liveproduseast.akamaized.net", then grab its first column, which is the IP.
sed "s/what to replace/replacement/" file
Replace the first occurrence of what you want to replace with the replacement value
And note, you cannot do:
sed "s/what to replace/replacement/" file > file
More details: https://stackoverflow.com/questions/6696842/how-can-i-use-a-file-in-a-command-and-redirect-output-to-the-same-file-without-t
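A small self-contained demonstration of why the same-file redirect fails and of the temp-file pattern that works (everything lives in a throwaway mktemp directory):

```shell
tmp=$(mktemp -d)
printf 'old line\n' > "$tmp/hosts"
# Wrong: the shell truncates the output file *before* sed reads it,
# so this would leave the file empty:
#   sed 's/old/new/' "$tmp/hosts" > "$tmp/hosts"
# Safe: write to a temporary file, then move it into place.
sed 's/old/new/' "$tmp/hosts" > "$tmp/hosts.new" && mv "$tmp/hosts.new" "$tmp/hosts"
cat "$tmp/hosts"    # prints: new line
rm -r "$tmp"
```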
| How do I write dig output to /etc/hosts file? |
1,395,964,235,000 |
I'm finding that a few commands (for now, dig and nslookup) fail no matter what, with the following output:
19-Jan-2016 15:01:50.219 ENGINE_by_id failed (crypto failure)
19-Jan-2016 15:01:50.219 error:2606A074:engine routines:ENGINE_by_id:no such engine:eng_list.c:389:id=gost
dig: dst_lib_init: crypto failure
Even stuff like dig -h results in this, so I guess this happens before the actual command execution starts.
I remember these commands used to work, but they're not something I used very often, so I can't exactly pinpoint the origin.
I can, however, say that I have messed with ssl options recently. Particularly, I was having problems handling GPG keys, and had to run export OPENSSL_CONF=/etc/ssl/openssl.cnf in order to make it work
I also found this issue, which seems to be similar. But that project has nothing to do with what I'm doing, and their solution (unsetting OPENSSL_CONF) did not work for me
EDIT:
I'm running Arch Linux.
The only change I did regarding OpenSSL configurations was running export OPENSSL_CONF=/etc/ssl/openssl.cnf which I needed to use gpg, but I already tried unsetting that
Running unset OPENSSL_CONF; dig -h results in the same output
|
Run:
ldd "$(which dig)" | grep crypto; this will show you which crypto lib you're using at the moment. If this is different from the expected one (usually openssl) you have a few options:
Remove the lib which interferes
Modify the LD_LIBRARY_PATH env variable, and point it to the openssl lib location
Fix the problem by removing the unwanted library's location from /etc/ld.so.conf and /etc/ld.so.conf.d/* files, running ldconfig afterwards. Warning: this will most probably break applications using it.
| "crypto failure" error when running various commands |
1,395,964,235,000 |
I'm unsure if I found a bug in bind. I've setup a simple dns server on debian 12.
in named.conf.options
zone "rpz-test" {
type master;
file "/etc/bind/rpz-test.zone";
check-names ignore;
}
in rpz-test.zone
;RPZ
$TTL 604800
@ IN SOA rpz.zone. rpz.zone. (
2; serial
604800; refresh
86400; retry
2419200; expire
604800; minimum
)
IN NS localhost.
*.com A 127.0.0.1
sub.domain.com A 127.0.0.1
Now... If I use dig to check the configuration once bind9 is started...
This is what happens:
dig whatever.com @localhost -p 53
-> replies 127.0.0.1
dig sub.domain.com @localhost -p 53
-> replies 127.0.0.1
dig domain.com @localhost -p 53
-> breaks the wildcard and is resolved
Practically, if a subdomain of a domain is declared, the parent domain is resolved externally!
Very weird; wasn't the wildcard supposed to override the subsequent declarations?
Probably the problem is in my configuration; I'm not sure if it is a bug. However, the versions I'm using are:
debian 12.2
bind 9.18.19~deb12u1
|
According to RFC 1034, this is the expected behaviour:
Wildcard RRs do not apply
When the query name or a name between the wildcard domain
and the query name is known to exist. For example, if a
wildcard RR has an owner name of "*.X", and the zone also
contains RRs attached to B.X, the wildcards would apply to
queries for name Z.X (presuming there is no explicit
information for Z.X), but not to B.X, A.B.X, or X.
If you find this unclear, you are not alone, to the extent that they wrote RFC 4592 to clarify the usage of wildcards. To sum it up: as soon as you add an RR entry for sub.domain.com., you are defining two domains sub.domain.com. and domain.com. to which your wildcard *.com. does not apply.
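If you want domain.com itself covered too, one hedged workaround (following the question's example zone) is to list it explicitly, since the wildcard no longer applies to it once sub.domain.com exists:

```
*.com           A 127.0.0.1
domain.com      A 127.0.0.1   ; needed explicitly once sub.domain.com exists
sub.domain.com  A 127.0.0.1
```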
| Bind with RPZ acts weirdly if a subdomain is used aside a wildcard |
1,395,964,235,000 |
I'm trying to test some nameservers against a domain name.
For that, I created a script that reads a list of nameservers and asks for a domain name.
Something basic like this:
#!/bin/bash
domain=$1
[ -z $domain ] && read -p "DOMAIN NAME: " domain
namefile="./nameserver"
echo "RESULT - NAMESERVER DOMAIN IP"
for host in $(cat "$namefile"); do
IPADD=$(dig +short "$host" "$domain" A 2> /dev/null)
[[ ! -z $IPADD ]] && result="OK" || result="FAIL"
echo "$result - Nameserver: $host - Domain: $domain - IP answer: $IPADD"
done
The issue I'm having is that, when Dig fails, it is not redirecting errors to null. Thus, the $IPADD variable receives a wrong value.
# CORRECT nameserver
# dig +short @8.8.8.8 google.com A 2> /dev/null
142.250.218.206
# WRONG nameserver
# dig +short @8.8.8.80 google.com A 2> /dev/null
;; connection timed out; no servers could be reached
If I test it with a wrong nameserver address, I still get an error message, like shown above.
As I understand it, when redirecting stderr to null, it should not display that error message.
Any idea?
Thank you.
|
I think I understand what the issue is now. Actually... Not an issue, a behavior.
If I intentionally input an invalid option, Dig gives me a syntax error.
$ dig @8.8.8.8 google.com -A
Invalid option: -A
Usage: dig [@global-server] [domain] [q-type] [q-class] {q-opt}
{global-d-opt} host [@local-server] {local-d-opt}
[ host [@local-server] {local-d-opt} [...]]
Use "dig -h" (or "dig -h | more") for complete list of options
If I do the same, but redirecting stderr to null, Dig shows me nothing.
$ dig @8.8.8.8 google.com -A 2> /dev/null
Therefore, it seems Dig is correctly redirecting errors to null, not to stdout.
Now, if I input an incorrect or unresponsive nameserver, Dig actually tells me that it did not get an answer, connection timed out; no server could be reached. Dig does not see that as an error.
Also, it returns code 9, which means "No reply from server".
In other words, no DNS service could be reached with the provided nameserver.
As I understand it, "no server could be reached" might not be the best response. Maybe changing the term server to service would enhance that response a bit.
@schrodingerscatcuriosity comment makes more sense to me now and seems to be a possible way to handle Dig's response in my script.
But, since the actual IP response is required, redirecting all output to null (&> /dev/null) is not desired.
I've added a workaround to suppress the timeout output.
for host in $(cat "$namefile"); do
result="OK"
IPADD=$(dig +timeout=1 +short "$host" "$domain" A 2> /dev/null)
echo "$IPADD" | grep -s -q "timed out" && { IPADD="Timeout" ; result="FAIL" ; }
echo "$result - Nameserver: $host - Domain: $domain - IP answer: $IPADD"
done
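Since dig also reports failures through its exit status (the return code 9 mentioned above), a hedged alternative is to branch on $? instead of grepping the text output. A minimal sketch — dig_exit_label is a hypothetical helper name, and the codes are the ones documented in dig's man page:

```shell
# Hypothetical helper: map dig's documented exit codes to labels,
# so the loop can report why a nameserver failed.
dig_exit_label() {
  case "$1" in
    0)  echo "success" ;;
    1)  echo "usage error" ;;
    8)  echo "couldn't open batch file" ;;
    9)  echo "no reply from server" ;;
    10) echo "internal error" ;;
    *)  echo "unknown ($1)" ;;
  esac
}

# usage sketch inside the loop (not run here):
#   IPADD=$(dig +timeout=1 +short "$host" "$domain" A 2> /dev/null)
#   status=$?
#   [ "$status" -eq 0 ] && result="OK" || result="FAIL ($(dig_exit_label "$status"))"
```

This keeps the raw IP answer available while still distinguishing a timeout from an empty answer.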
| Error redirection fail with bind - dig |
1,395,964,235,000 |
I am logged in via SSH to a remote machine, which is a Raspberry Pi 4. I am trying to use dnsutils to extract the public IP of the remote machine. However, it is returning the public IP of the client machine that I am physically using. I am using a shell script to do this. The specific command that I am using is
public_ip="$(dig +short myip.opendns.com @resolver1.opendns.com)"
I am using SSH via a Windows 11 machine. Not that it matters, but I am using the Visual Studio Code remote - SSH extension package. I have also tried SSH via Windows PowerShell and it's still returning the public IP of the client machine.
|
It's highly unlikely opendns is discovering the IP address of your SSH client if you are executing dig on your remote machine.
The most likely thing is that your client's public IPv4 address is the same as your remote machine's public IPv4 address. This would happen if the two are in the same physical location or otherwise on the same LAN (perhaps through a VPN).
This is due to Network Address Translation, introduced in the late 1990s to slow down IPv4 address exhaustion. It means that your home or office router takes one IP address for the whole LAN and makes it appear as if everything on your LAN is just one machine with that single IP address.
In case it isn't obvious to future readers: requesting myip.opendns.com from the DNS server resolver1.opendns.com (or any of the resolver*.opendns.com servers) will result in an A record containing the DNS client's IP address as seen by the DNS server.
| dnsutils points to public IP of client machine, not server |
1,592,034,953,000 |
I've just switched to bullseye (see sources below)
deb http://deb.debian.org/debian/ testing main contrib non-free
deb-src http://deb.debian.org/debian/ testing main contrib non-free
deb http://deb.debian.org/debian/ testing-updates main contrib non-free
deb-src http://deb.debian.org/debian/ testing-updates main contrib non-free
deb http://deb.debian.org/debian-security testing-security main
deb-src http://deb.debian.org/debian-security testing-security main
deb http://security.debian.org testing-security main contrib non-free
deb-src http://security.debian.org testing-security main contrib non-free
The update and upgrade went fine, but full-upgrade fails due to the following error message:
The following packages have unmet dependencies:
libc6-dev : Breaks: libgcc-8-dev (< 8.4.0-2~) but 8.3.0-6 is to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
From what I see on the packages.debian.org, Debian testing should have libgcc-8-dev: 8.4.0-4, so I don't see why an older version is to be installed.
How can I fix this, to finalize the bullseye full-upgrade?
|
Installing gcc-8-base (sudo apt install gcc-8-base) appeared to do the trick and fixed the problem for me.
| Full-upgrade to Debian testing fails due to libc6-dev : Breaks: libgcc-8-dev |
1,592,034,953,000 |
I'm currently on 18.03 and would like to upgrade to 18.09. How would I go about doing this?
I've found the following via a web search but it's not very conclusive:
https://discourse.nixos.org/t/how-to-upgrade-from-18-03-to-18-09/933
I'm assuming I could possibly just change my channel referenced by nixos? But I'm not sure if this is ideal for allowing to rollback in the case of things going wrong.
sudo nix-channel --list
nixos https://nixos.org/channels/nixos-18.03
unstable https://nixos.org/channels/nixos-unstable
In addition I've also seen the following: https://github.com/NixOS/nixpkgs/issues/40351#issuecomment-388405973 (quoted below) - do I need to take this into consideration?
Also:
/etc/nixos/configuration.nix:
# This value determines the NixOS release with which your system is to be
# compatible, in order to avoid breaking some software such as
# database servers. You should change this only after NixOS release
# notes say you should.
system.stateVersion = "17.09"; # Did you read the comment?

I didn't see when a command was issued to change this. I read the release notes, news and available information. I waited for the command to do it, but did not find one.
Anyway, a couple of days after release I changed "17.09" -> "18.03".
|
To upgrade NixOS:
Ensure you have a backup of your NixOS installation and that you know how to restore from the backup, if the need arises.
Review the NixOS release notes to ensure you account for any changes that need to be done manually. In particular, sometimes options change in backward-incompatible ways.
As the root user, replace the NixOS channel so it points to the one you want to upgrade to, while ensuring it is named nixos:
nix-channel --add https://nixos.org/channels/nixos-18.09 nixos
and update the channel (nix-channel --update).
As the root user, build your system:
nixos-rebuild --upgrade boot
Reboot to enter your newly-built NixOS.
If things go wrong you can reboot, select the previous generation, use nix-channel to add the old channel, and then nixos-rebuild boot to make the working generation the default; I think it's more reliable to rebuild than to use nixos-rebuild --rollback.
Alternative process
If you want to try the upgrade without messing around with channels,
you can use a GIT clone of the nixpkgs repo:
cd nixpkgs
git checkout release-18.03
nixos-rebuild -I nixpkgs="$PWD" build
If all is well...
sudo nixos-rebuild -I nixpkgs="$PWD" boot
The downside to this approach is that subsequent calls to Nix tools, such as nixos-rebuild, require the -I flag to specify the correct nixpkgs.
That is, until you update the channel.
| How do I upgrade Nixos to use a new channel nixos version? |
1,592,034,953,000 |
I'm trying to upgrade from Fedora 30 to 31 and I've successfully done these two steps:
dnf upgrade --refresh
dnf install dnf-plugin-system-upgrade
However, when I do the next:
dnf system-upgrade download --releasever=31
... I get this:
Before you continue ensure that your system is fully upgraded by running "dnf --refresh upgrade". Do you want to continue [y/N]: y
Adobe Systems Incorporated 35 kB/s | 2.9 kB 00:00
Fedora Modular 31 - x86_64 23 kB/s | 25 kB 00:01
Fedora Modular 31 - x86_64 - Updates 19 kB/s | 16 kB 00:00
Fedora 31 - x86_64 - Updates 17 kB/s | 18 kB 00:01
Fedora 31 - x86_64 37 kB/s | 25 kB 00:00
google-chrome 18 kB/s | 1.3 kB 00:00
MariaDB 9.7 kB/s | 2.9 kB 00:00
packages-microsoft-com-prod 16 kB/s | 3.0 kB 00:00
PostgreSQL common RPMs for Fedora 31 - x86_64 11 kB/s | 3.0 kB 00:00
PostgreSQL 12 for Fedora 31 - x86_64 3.3 kB/s | 3.8 kB 00:01
RPM Fusion for Fedora 31 - Free - Updates 29 kB/s | 9.1 kB 00:00
RPM Fusion for Fedora 31 - Free 26 kB/s | 9.9 kB 00:00
RPM Fusion for Fedora 31 - Nonfree - Updates 11 kB/s | 9.4 kB 00:00
RPM Fusion for Fedora 31 - Nonfree 21 kB/s | 10 kB 00:00
skype (stable) 6.6 kB/s | 2.9 kB 00:00
teams 4.9 kB/s | 3.0 kB 00:00
Fedora 31 - x86_64 - VirtualBox 247 B/s | 181 B 00:00
Visual Studio Code 19 kB/s | 3.0 kB 00:00
Yarn Repository 25 kB/s | 2.9 kB 00:00
terminate called after throwing an instance of 'libdnf::ModulePackageContainer::EnableMultipleStreamsException'
what(): Cannot enable multiple streams for module 'ant'
Aborted (core dumped)
Is there some way to overcome this problem? Any and all ideas are welcome. I don't mind if I have to disable/remove some of my extra package repos, if that is what it takes ...
|
It's really weird but I have stumbled on this issue too and found out that you have to disable these repos:
fedora-modular.repo
fedora-updates-modular.repo
fedora-updates-testing-modular.repo
Thanks @vonbrand and @dbdemon for the idea.
| Upgrade from Fedora 30 to 31: Cannot enable multiple streams for module 'ant' |
1,592,034,953,000 |
I tried to upgrade my Debian System using apt, the repository is set to "testing" so I expected it to change to the next version "Bullseye" from "Buster" automatically but since "Buster" moved on I get:
404 Not Found [IP: 151.101.12.204 80]
when running apt update.
The security.debian.org address does not seem to have Release files, did the address change?
E: The repository 'http://security.debian.org testing/updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
this are the relevant entries of my /etc/apt/sources.list:
deb http://ftp.ch.debian.org/debian/ testing main contrib non-free
deb-src http://ftp.ch.debian.org/debian/ testing main contrib non-free
deb http://security.debian.org/ testing/updates main contrib non-free
deb-src http://security.debian.org/ testing/updates main contrib non-free
# jessie-updates, previously known as 'volatile'
deb http://ftp.ch.debian.org/debian/ testing-updates main contrib non-free
deb-src http://ftp.ch.debian.org/debian/ testing-updates main contrib non-free
I checked man apt-secure but could not find or understand the relevant information.
Update: I got two answers so far, both referring to the official debian.org page, but suggesting completely different solutions. Can someone please explain? I decided not to remove the security.debian.org entries, but changed the version-attribute format instead.
|
From https://wiki.debian.org/Status/Testing
deb http://security.debian.org testing-security main contrib non-free
deb-src http://security.debian.org testing-security main contrib non-free
The entries slightly changed after the latest release.
Here is an announcement to debian-devel-announce:
... over the last years we had people getting confused over -updates (recommended updates) and /updates (security updates). Starting with Debian 11 "bullseye" we have therefore renamed the suite including the security updates to -security. An entry in sources.list should look like
deb security.debian.org/debian-security bullseye-security main
For previous releases the name will not change.
| Debian testing - upgrade "Buster" to "Bullseye" version, no server for security.debian.org |
1,592,034,953,000 |
This might be a possible bug, but it is something that has been bothering me for a couple of days now.
The difference between apt-get upgrade and apt-get dist-upgrade has been both well-known and well-established by now i.e. upgrade installs/upgrades while dist-upgrade is capable of install/remove/upgrade if package removal happens to be necessary for either an installation or upgrading of another package. The difference in packages can be easily discovered with something like
(the following is a quick and dirty method and will need sudo password to be entered already in the terminal for copy pasting. Also, as I have several packages and drivers I patched myself that I need kept back for functionality, I included the OR in the awk to extract only those to be installed and those to be upgraded, and not those listed as to be kept back, but the following should work even if those lines are not present in your apt upgrade outputs) :
$echo -e 'n' | sudo apt-get dist-upgrade | awk '
/be installed|be upgraded/{f=1;next}; /not upgraded|kept back/{f=0}f' | awk '
BEGIN {RS=" ";} {print $0}
' | grep . > apt_get_dist_list
$echo -e 'n' | sudo apt-get upgrade | awk '
/be installed|be upgraded/{f=1;next}; /not upgraded|kept back/{f=0}f' | awk '
BEGIN {RS=" ";} {print $0}
' | grep . > apt_get_upgrade_list
and when I compare the two outputs with:
$diff apt_get_dist_list apt_get_upgrade_list | grep -E '<|>'
in my case I get the following:
< gir1.2-nm-1.0
< libcpupower2
< linux-kbuild-5.2
< blueman
< linux-cpupower
< linux-headers-amd64
< linux-image-amd64
< pdf-parser
Which makes the difference quite clear, especially given the presence of linux-header-* and linux-image-* in apt-get dist-upgrade
Now if I repeat the same process for apt upgrade and apt full-upgrade
$echo -e 'n' | sudo apt upgrade | awk '
/be installed|be upgraded/{f=1;next}; /not upgraded|kept back/{f=0}f' | awk '
BEGIN {RS=" ";} {print $0}
' | grep . > apt_upgrade_list
$echo -e 'n' | sudo apt full-upgrade | awk '
/be installed|be upgraded/{f=1;next}; /not upgraded|kept back/{f=0}f' | awk '
BEGIN {RS=" ";} {print $0}
' | grep . > apt_fullupgrade_list
and compare:
$diff apt_get_dist_list apt_fullupgrade_list | grep -E '<|>'
I get nothing, as expected because apt full-upgrade and apt-get dist-upgrade are meant to behave in the same exact way, but when I compare:
$diff apt_get_upgrade_list apt_upgrade_list | grep -E '<|>'
I get the same output as when comparing apt-get upgrade with apt-get dist-upgrade.
> gir1.2-nm-1.0
> libcpupower2
> linux-kbuild-5.2
> blueman
> linux-cpupower
> linux-headers-amd64
> linux-image-amd64
> pdf-parser
and the only conclusion I can arrive at is that apt upgrade is exactly the same as apt full-upgrade which also makes it the same as apt-get dist-upgrade, which ultimately means that not only is apt upgrade redundant, but what is also more of a concern is that currently apt does not allow for the same behavior as apt-get upgrade.
|
They’re not redundant; there’s an additional subtlety:
apt-get upgrade will only upgrade currently-installed packages;
apt upgrade will upgrade currently-installed packages and install new packages pulled in by updated dependencies;
the various dist-upgrade and full-upgrade variants will upgrade currently-installed packages, install new packages introduced as dependencies, and remove packages which are broken by upgraded packages.
Put another way:
Command
Upgrade
Install
Remove
apt-get upgrade
Yes
No
No
apt upgrade
Yes
Yes
No
apt-get dist-upgrade, apt full-upgrade etc.
Yes
Yes
Yes
In practice, apt upgrade is safer than apt-get upgrade (by default) because it allows updated kernels to be installed automatically when the ABI changes. See apt-get upgrade holds back a kernel update. What are the official instructions for applying updates on Debian 9? for an example.
apt-get upgrade can be told to behave like apt upgrade with the --with-new-pkgs option. This is also configurable using APT configuration files; you can see apt’s specific settings with apt-config dump | grep '^Binary::apt' (the setting involved here is APT::Get::Upgrade-Allow-New).
Note that package removal above doesn’t include autoremoval, i.e. the removal of packages which are only installed as dependencies and are no longer necessary. So even though apt full-upgrade may remove some packages, it isn’t redundant with apt autoremove or apt full-upgrade --autoremove.
| apt full-upgrade vs apt upgrade redundancy |
1,592,034,953,000 |
Ubuntu's do-release-upgrade command upgrades the operating system to the latest release. What is Debian's way or tool for the same purpose (to upgrade to the latest stable release)?
|
Debian does not provide a single command to upgrade the OS to a new release. The Release Notes for each release include upgrade instructions for supported hardware architectures.
You can find release notes for all Debian releases via the Debian Releases page.
For example, to upgrade a 64-bit PC from stretch to buster, follow the instructions in Chapter 4. Upgrades from Debian 9 (stretch) under Debian 10 -- Release Notes for Debian 10 (buster), 64-bit PC.
You should always be able to find the release notes for the current stable release at https://www.debian.org/releases/stable/releasenotes.
Although upgrading a Debian release from "oldstable" to stable is usually painless, it's important to follow the Release Notes because the OS can differ from release to release in ways that could affect your specific installation.
The Release Notes also contain information and tips about changes in the new release that can save considerable time and effort.
For example, the upgrade process for some previous releases recommended the use of aptitude for the upgrade.
For upgrades from stretch to buster, the apt tool is recommended instead of aptitude.
(Although aptitude is suggested for resolution of some problems after the upgrade.)
| What is Debian's equivalent of do-release-upgrade to upgrade the operating system? (for example, from stretch to buster) |
1,592,034,953,000 |
I ran an apt-get upgrade and an apt-get dist-upgrade on a new update notified today for Debian 12.
The last one is failing with this message, and can see later that it involves NVidia driver (I use the one of the Debian distribution) compilation:
dkms: autoinstall for kernel: 6.1.0-18-amd64 failed!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
sudo apt-get dist-upgrade
Lecture des listes de paquets... Fait
Construction de l'arbre des dépendances... Fait
Lecture des informations d'état... Fait
Calcul de la mise à jour... Fait
Les NOUVEAUX paquets suivants seront installés :
libllvm16 linux-headers-6.1.0-18-amd64 linux-headers-6.1.0-18-common linux-image-6.1.0-18-amd64
Les paquets suivants seront mis à jour :
linux-headers-amd64 linux-image-amd64 postgresql-14
3 mis à jour, 4 nouvellement installés, 0 à enlever et 0 non mis à jour.
Il est nécessaire de prendre 0 o/119 Mo dans les archives.
Après cette opération, 593 Mo d'espace disque supplémentaires seront utilisés.
Souhaitez-vous continuer ? [O/n] O
Lecture des fichiers de modifications (« changelog »)... Terminé
Préconfiguration des paquets...
Sélection du paquet libllvm16:amd64 précédemment désélectionné.
(Lecture de la base de données... 822688 fichiers et répertoires déjà installés.)
Préparation du dépaquetage de .../0-libllvm16_1%3a16.0.6-15~deb12u1_amd64.deb ...
Dépaquetage de libllvm16:amd64 (1:16.0.6-15~deb12u1) ...
Sélection du paquet linux-headers-6.1.0-18-common précédemment désélectionné.
Préparation du dépaquetage de .../1-linux-headers-6.1.0-18-common_6.1.76-1_all.deb ...
Dépaquetage de linux-headers-6.1.0-18-common (6.1.76-1) ...
Sélection du paquet linux-headers-6.1.0-18-amd64 précédemment désélectionné.
Préparation du dépaquetage de .../2-linux-headers-6.1.0-18-amd64_6.1.76-1_amd64.deb ...
Dépaquetage de linux-headers-6.1.0-18-amd64 (6.1.76-1) ...
Préparation du dépaquetage de .../3-linux-headers-amd64_6.1.76-1_amd64.deb ...
Dépaquetage de linux-headers-amd64 (6.1.76-1) sur (6.1.69-1) ...
Sélection du paquet linux-image-6.1.0-18-amd64 précédemment désélectionné.
Préparation du dépaquetage de .../4-linux-image-6.1.0-18-amd64_6.1.76-1_amd64.deb ...
Dépaquetage de linux-image-6.1.0-18-amd64 (6.1.76-1) ...
Préparation du dépaquetage de .../5-linux-image-amd64_6.1.76-1_amd64.deb ...
Dépaquetage de linux-image-amd64 (6.1.76-1) sur (6.1.69-1) ...
Préparation du dépaquetage de .../6-postgresql-14_14.11-1.pgdg120+1_amd64.deb ...
Dépaquetage de postgresql-14 (14.11-1.pgdg120+1) sur (14.10-1.pgdg120+1) ...
Paramétrage de linux-image-6.1.0-18-amd64 (6.1.76-1) ...
I: /vmlinuz.old is now a symlink to boot/vmlinuz-6.1.0-17-amd64
I: /initrd.img.old is now a symlink to boot/initrd.img-6.1.0-17-amd64
I: /vmlinuz is now a symlink to boot/vmlinuz-6.1.0-18-amd64
I: /initrd.img is now a symlink to boot/initrd.img-6.1.0-18-amd64
/etc/kernel/postinst.d/dkms:
dkms: running auto installation service for kernel 6.1.0-18-amd64.
Sign command: /usr/lib/linux-kbuild-6.1/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Building module:
Cleaning build area...
env NV_VERBOSE=1 make -j32 modules KERNEL_UNAME=6.1.0-18-amd64........(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.1.0-18-amd64 (x86_64)
Consult /var/lib/dkms/nvidia-current/525.147.05/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.1.0-18-amd64 failed!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
dpkg: erreur de traitement du paquet linux-image-6.1.0-18-amd64 (--configure) :
le sous-processus paquet linux-image-6.1.0-18-amd64 script post-installation installé a renvoyé un état de sortie d'erreur 1
dpkg: des problèmes de dépendances empêchent la configuration de linux-image-amd64 :
linux-image-amd64 dépend de linux-image-6.1.0-18-amd64 (= 6.1.76-1); cependant :
Le paquet linux-image-6.1.0-18-amd64 n'est pas encore configuré.
dpkg: erreur de traitement du paquet linux-image-amd64 (--configure) :
problèmes de dépendances - laissé non configuré
Paramétrage de libllvm16:amd64 (1:16.0.6-15~deb12u1) ...
Paramétrage de linux-headers-6.1.0-18-common (6.1.76-1) ...
Paramétrage de postgresql-14 (14.11-1.pgdg120+1) ...
Paramétrage de linux-headers-6.1.0-18-amd64 (6.1.76-1) ...
/etc/kernel/header_postinst.d/dkms:
dkms: running auto installation service for kernel 6.1.0-18-amd64.
Sign command: /usr/lib/linux-kbuild-6.1/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Building module:
Cleaning build area...
env NV_VERBOSE=1 make -j32 modules KERNEL_UNAME=6.1.0-18-amd64........(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.1.0-18-amd64 (x86_64)
Consult /var/lib/dkms/nvidia-current/525.147.05/build/make.log for more information.
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
dkms: autoinstall for kernel: 6.1.0-18-amd64 failed!
run-parts: /etc/kernel/header_postinst.d/dkms exited with return code 11
Failed to process /etc/kernel/header_postinst.d at /var/lib/dpkg/info/linux-headers-6.1.0-18-amd64.postinst line 11.
dpkg: erreur de traitement du paquet linux-headers-6.1.0-18-amd64 (--configure) :
le sous-processus paquet linux-headers-6.1.0-18-amd64 script post-installation installé a renvoyé un état de sortie d'erreur 1
dpkg: des problèmes de dépendances empêchent la configuration de linux-headers-amd64 :
linux-headers-amd64 dépend de linux-headers-6.1.0-18-amd64 (= 6.1.76-1); cependant :
Le paquet linux-headers-6.1.0-18-amd64 n'est pas encore configuré.
dpkg: erreur de traitement du paquet linux-headers-amd64 (--configure) :
problèmes de dépendances - laissé non configuré
Traitement des actions différées (« triggers ») pour postgresql-common (257.pgdg120+1) ...
Building PostgreSQL dictionaries from installed myspell/hunspell packages...
en_us
fr
Removing obsolete dictionary files:
Traitement des actions différées (« triggers ») pour libc-bin (2.36-9+deb12u4) ...
Des erreurs ont été rencontrées pendant l'exécution :
linux-image-6.1.0-18-amd64
linux-image-amd64
linux-headers-6.1.0-18-amd64
linux-headers-amd64
E: Sub-process /usr/bin/dpkg returned an error code (1)
Looking about what it complains, I did a cat on the log file it points, and found:
a NVidia card driver compilation problem:
ld -m elf_x86_64 -z noexecstack --no-warn-rwx-segments -r -o /var/lib/dkms/nvidia-current/525.147.05/build/nvidia-uvm.o @/var/lib/dkms/nvidia-current/525.147.05/build/nvidia-uvm.mod
{ echo /var/lib/dkms/nvidia-current/525.147.05/build/nvidia.ko; echo /var/lib/dkms/nvidia-current/525.147.05/build/nvidia-uvm.ko; echo /var/lib/dkms/nvidia-current/525.147.05/build/nvidia-modeset.ko; echo /var/lib/dkms/nvidia-current/525.147.05/build/nvidia-drm.ko; echo /var/lib/dkms/nvidia-current/525.147.05/build/nvidia-peermem.ko; :; } > /var/lib/dkms/nvidia-current/525.147.05/build/modules.order
sh /usr/src/linux-headers-6.1.0-18-common/scripts/modules-check.sh /var/lib/dkms/nvidia-current/525.147.05/build/modules.order
make -f /usr/src/linux-headers-6.1.0-18-common/scripts/Makefile.modpost
sed 's/ko$/o/' /var/lib/dkms/nvidia-current/525.147.05/build/modules.order | scripts/mod/modpost -m -o /var/lib/dkms/nvidia-current/525.147.05/build/Module.symvers -e -i Module.symvers -T -
ERROR: modpost: GPL-incompatible module nvidia.ko uses GPL-only symbol '__rcu_read_lock'
ERROR: modpost: GPL-incompatible module nvidia.ko uses GPL-only symbol '__rcu_read_unlock'
make[3]: *** [/usr/src/linux-headers-6.1.0-18-common/scripts/Makefile.modpost:126 : /var/lib/dkms/nvidia-current/525.147.05/build/Module.symvers] Erreur 1
make[2]: *** [/usr/src/linux-headers-6.1.0-18-common/Makefile:1991 : modpost] Erreur 2
make[2] : on quitte le répertoire « /usr/src/linux-headers-6.1.0-18-amd64 »
make[1]: *** [Makefile:250 : __sub-make] Erreur 2
make[1] : on quitte le répertoire « /usr/src/linux-headers-6.1.0-18-common »
make: *** [Makefile:82 : modules] Erreur 2
What should I do from here?
Am I in danger if I reboot my computer now?
Isn't it stuck in the middle, between 6.1.0-17 and 6.1.0-18?
|
This has now been fixed in Bookworm, see the announcement for details. Ensure that bookworm-updates is present in your repository configuration (/etc/apt/sources.list):
deb https://deb.debian.org/debian bookworm-updates main contrib non-free non-free-firmware
(The announcement doesn’t mention contrib, non-free, and non-free-firmware, but they are necessary in this instance.)
Then run apt update and apt upgrade as root, as usual.
| Debian 12 linux-image-6.1.0-18-amd64 dist-upgrade fails on nvidia GPL-incompatible module nvidia.ko uses GPL-only symbol '__rcu_read_lock' |
1,592,034,953,000 |
I'm currently on Ubuntu 16.04 LTS. As of writing this, 18.04 LTS is available. However, I do not wish to upgrade to it.
Instead, I would like to upgrade to 17.04 LTS.
I've done:
sudo apt update
sudo apt dist-upgrade
Many tutorials suggest
sudo do-release-upgrade
as the next step. But I believe that would upgrade to the latest distro and not the target 17.04.
How do I go about this?
|
To answer your question, I don’t think Ubuntu officially supports upgrades to releases other than either the latest release or the latest LTS. It might be possible to upgrade to a specific release by changing the appropriate code name in /etc/apt/sources.list and running apt update && apt dist-upgrade, but that won’t take into account any upgrade step performed by the do-release-upgrade tool (if any).
However in your specific case, 17.04 isn’t an LTS, and is already out of support. 16.04 is still supported; if you don’t want to upgrade to 18.04 you should stick with 16.04.
| Upgrade ubuntu to a specific release |
1,592,034,953,000 |
I cannot find any information about it. Maybe someone has some insights to share.
apt suggests to downgrade some SSL packages.
# apt-get update && apt-get dist-upgrade --assume-yes
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be DOWNGRADED:
libssl-dev libssl1.1 openssl
0 upgraded, 0 newly installed, 3 downgraded, 0 to remove and 0 not upgraded.
E: Packages were downgraded and -y was used without --allow-downgrades.
Why would these packages be downgraded? I didn't initiate anything to downgrade them. It's just what happened during my regular daily dist-upgrade.
I assume there's some critical security issue in SSL that they cannot fix quickly and easily, so they downgraded to the latest version without that issue. But so far I haven't found any information about such a thing.
Additional info
Linux <hostname> 4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux
libssl-dev/now 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 amd64 [installed,local]
libssl-dev/stable 1.1.1d-0+deb10u5 amd64
libssl-dev/stable 1.1.1d-0+deb10u4 amd64
libssl-dev/stable 1.1.1d-0+deb10u5 i386
libssl-dev/stable 1.1.1d-0+deb10u4 i386
libssl1.1/now 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 amd64 [installed,local]
libssl1.1/stable 1.1.1d-0+deb10u5 amd64
libssl1.1/stable 1.1.1d-0+deb10u4 amd64
libssl1.1/stable 1.1.1d-0+deb10u5 i386
libssl1.1/stable 1.1.1d-0+deb10u4 i386
openssl/now 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 amd64 [installed,local]
openssl/stable 1.1.1d-0+deb10u5 amd64
openssl/stable 1.1.1d-0+deb10u4 amd64
openssl/stable 1.1.1d-0+deb10u5 i386
openssl/stable 1.1.1d-0+deb10u4 i386
# apt policy libssl-dev libssl1.1 openssl
libssl-dev:
Installed: 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0
Candidate: 1.1.1d-0+deb10u5
Version table:
*** 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 100
100 /var/lib/dpkg/status
1.1.1d-0+deb10u5 1000
500 http://security.debian.org/debian-security buster/updates/main amd64 Packages
1.1.1d-0+deb10u4 1000
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main amd64 Packages
libssl1.1:
Installed: 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0
Candidate: 1.1.1d-0+deb10u5
Version table:
*** 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 100
100 /var/lib/dpkg/status
1.1.1d-0+deb10u5 1000
500 http://security.debian.org/debian-security buster/updates/main amd64 Packages
1.1.1d-0+deb10u4 1000
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main amd64 Packages
openssl:
Installed: 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0
Candidate: 1.1.1d-0+deb10u5
Version table:
*** 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 100
100 /var/lib/dpkg/status
1.1.1d-0+deb10u5 1000
500 http://security.debian.org/debian-security buster/updates/main amd64 Packages
1.1.1d-0+deb10u4 1000
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main amd64 Packages
# apt policy
Package files:
100 /var/lib/dpkg/status
release a=now
500 https://packages.sury.org/php buster/main i386 Packages
release o=deb.sury.org,n=buster,c=main,b=i386
origin packages.sury.org
500 https://packages.sury.org/php buster/main amd64 Packages
release o=deb.sury.org,n=buster,c=main,b=amd64
origin packages.sury.org
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster-updates/non-free i386 Packages
release o=Debian,a=stable-updates,n=buster-updates,l=Debian,c=non-free,b=i386
origin ftp.hosteurope.de
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster-updates/non-free amd64 Packages
release o=Debian,a=stable-updates,n=buster-updates,l=Debian,c=non-free,b=amd64
origin ftp.hosteurope.de
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster-updates/main i386 Packages
release o=Debian,a=stable-updates,n=buster-updates,l=Debian,c=main,b=i386
origin ftp.hosteurope.de
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster-updates/main amd64 Packages
release o=Debian,a=stable-updates,n=buster-updates,l=Debian,c=main,b=amd64
origin ftp.hosteurope.de
500 http://security.debian.org/debian-security buster/updates/non-free i386 Packages
release v=10,o=Debian,a=stable,n=buster,l=Debian-Security,c=non-free,b=i386
origin security.debian.org
500 http://security.debian.org/debian-security buster/updates/non-free amd64 Packages
release v=10,o=Debian,a=stable,n=buster,l=Debian-Security,c=non-free,b=amd64
origin security.debian.org
500 http://security.debian.org/debian-security buster/updates/main i386 Packages
release v=10,o=Debian,a=stable,n=buster,l=Debian-Security,c=main,b=i386
origin security.debian.org
500 http://security.debian.org/debian-security buster/updates/main amd64 Packages
release v=10,o=Debian,a=stable,n=buster,l=Debian-Security,c=main,b=amd64
origin security.debian.org
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/contrib i386 Packages
release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=contrib,b=i386
origin ftp.hosteurope.de
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/contrib amd64 Packages
release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=contrib,b=amd64
origin ftp.hosteurope.de
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/non-free i386 Packages
release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=non-free,b=i386
origin ftp.hosteurope.de
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/non-free amd64 Packages
release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=non-free,b=amd64
origin ftp.hosteurope.de
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main i386 Packages
release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=main,b=i386
origin ftp.hosteurope.de
500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main amd64 Packages
release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=main,b=amd64
origin ftp.hosteurope.de
Pinned packages:
openssl -> 1.1.1d-0+deb10u5 with priority 1000
openssl -> 1.1.1d-0+deb10u4 with priority 1000
libssl-dev -> 1.1.1d-0+deb10u5 with priority 1000
libssl-dev -> 1.1.1d-0+deb10u4 with priority 1000
libssl-doc -> 1.1.1d-0+deb10u5 with priority 1000
libssl-doc -> 1.1.1d-0+deb10u4 with priority 1000
libssl1.1 -> 1.1.1d-0+deb10u5 with priority 1000
libssl1.1 -> 1.1.1d-0+deb10u4 with priority 1000
Solution
Based on the answer of @Louis Thompson ...
The currently installed packages are in fact provided by the unofficial PHP repository maintained by Ondřej Surý.
https://packages.sury.org/php/
https://packages.sury.org/php/dists/buster/main/debian-installer/binary-amd64/Packages
To stay in line with my Debian installation I downgraded these packages. Now everything works fine with my PHP installation and my PHP applications that use SSL functionality.
Update
Thanks to @William Turrell. I installed apt-listchanges to get information about such changes in the future. It would have made things a lot easier.
|
https://www.debian.org/security/2021/dsa-4855
This, and other package information about openssl in Debian Buster, indicates that 1.1.1d is the current stable version. It looks like you've acquired 1.1.1j from elsewhere (gbp2578a0), and it doesn't have this important security patch.
| Debian 10: Why some SSL packages will be downgraded?
1,592,034,953,000 |
I've tried to run command apt-get update && apt-get upgrade && apt-get dist-upgrade as root, but nothing happens. I think that the problem is in non-fully complete apt sources. Am I right? What sources I need to set?
|
Update your apt repositories to use stretch instead of jessie (This can be done manually with a text editor, but sed can be used to automatically update the file.)
[user@debian-9 ~]$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
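Before running sed with -i against the real /etc/apt/sources.list, the substitution can be previewed on a throwaway copy first — a sketch using a hypothetical one-line file:

```shell
# hypothetical one-line sources.list copy; the real file lives at /etc/apt/sources.list
printf 'deb http://deb.debian.org/debian jessie main\n' > /tmp/sources.list.preview
sed -i 's/jessie/stretch/g' /tmp/sources.list.preview
cat /tmp/sources.list.preview
# deb http://deb.debian.org/debian stretch main
```

Once the preview looks right, the same sed expression can be applied to the real file as shown above.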
Please note: Debian 9 (Stretch) is marked testing for a reason. You may notice stability problems when using it.
| How to upgrade from Debian 8 Jessie to Debian 9 Stretch? |
1,592,034,953,000 |
I would like to upgrade my Debian machine from Jessie to Stretch, but aptitude is reporting that I have 19 obsolete packages. Some of these, like BerkeleyDB, I use routinely.
A set of upgrade instructions say to remove any obsolete software before doing the upgrade, but I want to continue using some of the software. Am I stuck using Jessie forever?
|
If the packages don't conflict with new/updated packages in stretch, there's no particular reason why you should remove them.
If they do conflict, the package manager will let you know.
BTW, I still have some packages installed on my system that haven't been in debian for a decade or two. They still work. I've had others that I had to recompile for newer debian releases, and a few more that I stopped using because they weren't worth the bother of re-compiling (or, more commonly, hacking so that they compiled against the newer versions of various libraries).
I still have old versions of libdb installed:
$ dpkg -l libdb[0-9.]* | grep ii
ii libdb4.6 4.6.21-21 amd64 Berkeley v4.6 Database Libraries [runtime]
ii libdb4.6++ 4.6.21-18 amd64 Berkeley v4.6 Database Libraries for C++ [runtime]
ii libdb5.1:amd64 5.1.29-7 amd64 Berkeley v5.1 Database Libraries [runtime]
ii libdb5.1:i386 5.1.29-7 i386 Berkeley v5.1 Database Libraries [runtime]
ii libdb5.3:amd64 5.3.28-13.1+b1 amd64 Berkeley v5.3 Database Libraries [runtime]
ii libdb5.3:i386 5.3.28-13.1+b1 i386 Berkeley v5.3 Database Libraries [runtime]
ii libdb5.3-dev 5.3.28-13.1+b1 amd64 Berkeley v5.3 Database Libraries [development]
ii libdb5.3-sql:amd64 5.3.28-13.1+b1 amd64 Berkeley v5.3 Database Libraries [SQL runtime]
libdb4.6 hasn't been in Debian since "Squeeze" (Debian 6), around 2014.
I purge them occasionally when I have nothing installed that uses the old libs...if/when I remember.
| How to handle "obsolete" packages when upgrading distribution? |
1,592,034,953,000 |
I was not able to put Solus Linux in place because - as it needed a first big upgrade after installation - this first upgrade crashed and the system would get stuck and not boot after that. This happened several times, twice in Solus Budgie and once in Gnome. The problem is also mentioned here.
I have fixed it as said here: not only have I avoided installing anything before this first full upgrade, but I have run the full upgrade command from a TUI login session (i.e. a virtual terminal login session) instead of from a terminal emulator running in a(n X) GUI login session.
Everything went fine in this way.
As other systems use full upgrades - like between versions (Ubuntu, Mint), I thought I should ask about this, as such upgrades involve risks that maybe could be avoided in this way.
Is this procedure safer? Why?
If yes: why is it not more widely recommended?
UPDATE after comments, answers and edits by others than OP:
I was asked What do you call tty and how it differs from terminal? - but that is what I am trying to know, what I am asking here. I don't know what tty1 etc essentially is, I have just used it sometimes (Ctrl-Shift-F1, F2 etc) to kill a process or to log out forcibly when the desktop was stuck in Linux because I have read about all those steps when needed.
There is no point in underlining the similarities between tty or what's its name and terminal: my point is that during upgrade in a normal terminal the aforementioned system used to crash completely. As stated at the link I posted, "the XOrg system would crush". I guess tty1 (I mean the out-of-desktop, out-of-Xorg CLI environment accessed with Ctrl-Alt-F1 ...F6) puts you out of the context that entailed the problem, and thus avoids the latter.
That is at least one big difference between the two (tty and terminal) ways of upgrading, isn't it? - I didn't have problems usually with terminal upgrades, but sometimes I did, and most certainly in the case described above; and now I wonder whether that could be avoided through tty in a more general manner - more general than the specific problem that was avoided.
Basically what happened is that I fixed a problem and I want to know what I did - and why. I want to learn something out of it. - (The same case with this other question.)
TUI (what I had initially called "tty") is accessible with Ctrl+Alt+some F key. That may vary between machines. On my present one it's Ctrl+Alt+F2 to F6, while Ctrl+Alt+F1 is to go back to desktop.
|
By using a VT (Ctrl+Alt+F1) for system updates, you're reducing the risk of breaking the system since GUIs crash more often than VTs.
Note that the same robustness can be achieved by running the upgrade with screen or tmux since those processes will survive a GUI crash or SSH disconnection as well. After the GUI crashes or the SSH connection breaks, the upgrade will continue running in the background and the admin can reconnect to tmux or screen at their leisure to check on the upgrade progress.
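As a minimal, runnable illustration of the same idea — the process outliving its controlling terminal — nohup can stand in where screen or tmux are unavailable. The echo below is a placeholder for the real sudo apt-get dist-upgrade:

```shell
# placeholder command standing in for 'sudo apt-get dist-upgrade'
# (in real use you would append '&' to background it and log out safely)
nohup sh -c 'echo simulated dist-upgrade output' > /tmp/upgrade.log 2>/dev/null
# the job keeps its output on disk even if the terminal or GUI session dies
cat /tmp/upgrade.log
# simulated dist-upgrade output
```

Unlike screen/tmux, nohup offers no way to reattach interactively; it only guarantees the process ignores the hangup signal and its output is preserved in the log file.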
| Is it safer to do full/heavy system upgrades from a TUI login session (outside Xorg) than a GUI login session? [closed] |
1,592,034,953,000 |
While in the process of migrating some servers from my testing network, a few of them installed systemd; I was a bit surprised as I am using sysV, and have systemd pinned to -1.
What happened?
|
It seems my systemd pinning to -1 was not enough. There are a couple more systemd-related packages that need to be APT-pinned too.
So, I changed my /etc/apt/preferences.d/01nosystemd to:
Package: systemd
Pin: origin ""
Pin-Priority: -1
Package: dh-systemd
Pin: origin ""
Pin-Priority: -1
Package: systemd-shim
Pin: origin ""
Pin-Priority: -1
Package: libpam-systemd
Pin: origin ""
Pin-Priority: -1
And also ran the commands:
sudo apt-mark hold systemd dh-systemd systemd-shim libpam-systemd
Additional note: libsystemd0 is not forgotten, simply it cannot be avoided so easily, due to several dependencies.
Link: linux.org wiki questions - Prevent systemd installation
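The four identical stanzas above can also be generated in one go — a sketch writing to a scratch path (the real file is /etc/apt/preferences.d/01nosystemd):

```shell
# scratch copy; move it to /etc/apt/preferences.d/01nosystemd when satisfied
for pkg in systemd dh-systemd systemd-shim libpam-systemd; do
    printf 'Package: %s\nPin: origin ""\nPin-Priority: -1\n\n' "$pkg"
done > /tmp/01nosystemd.preview
grep -c '^Package:' /tmp/01nosystemd.preview
# 4
```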
| systemd apt pinned to -1 and installed in upgrade from Debian 8 to Debian 9 |
1,592,034,953,000 |
In the future I have to upgrade several Debian 10 systems to Debian 11.
The problem is: The systems have no access to the internet.
What options do I have to upgrade the system?
After some research, I found apt-offline which seemed to be suitable for this task.
I tried apt-offline on a fully updated example Debian 10 in the following manner:
offline system:
change the /etc/apt/sources.list from buster (Debian 10) to bullseye (Debian 11)
create apt-offline.sig with : sudo apt-offline set --upgrade-type dist-upgrade apt-offline.sig
on the online system:
create bundle.zip with apt-offline get --bundle bundle.zip apt-offline.sig
on the offline system:
install bundle with : sudo apt-offline install bundle.zip
This does not work. apt tries to fetch packages from the internet when apt-get dist-upgrade is performed and the bundle.zip is also only 27Mb big.
It doesn't look like apt-offline is suitable for doing Debian release upgrades.
Is there any other method to perform an offline release upgrade from debian 10 to debian 11?
|
You can upgrade a Debian installation using downloadable images, e.g. the amd64 DVD image. You don’t need to re-install the operating system to use these; they can be used to upgrade an existing setup in the same way as repositories hosted on the Internet. The release notes contain detailed instructions; basically, you need to download the image, mount it, then run
apt-cdrom add
to have it taken into account.
(the image needs to be mounted or inserted first so that apt-cdrom can scan it)
You can then run apt upgrade and apt dist-upgrade as usual.
If the images you downloaded don’t contain all the packages necessary to upgrade your specific systems, you can create images containing all of Debian using Jigdo. Alternatively, you can remove the non-upgradable packages temporarily, upgrade to Debian 11, then use apt-offline to install them again.
| How to upgrade Debian 10 to Debian 11 without internet? |
1,592,034,953,000 |
During a dist-upgrade operation I am encountering an issue with apt packages.
When running any of the following commands I encounter the same error:
$ sudo apt dist-upgrade
$ sudo apt --fix-broken install
$ sudo apt-get autoremove
Error:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Correcting dependencies... failed.
The following packages have unmet dependencies:
gdm3 : Depends: gir1.2-gdm-1.0 (= 41~rc-0ubuntu2pop0~1634915133~21.10~cf40258) but 42.0-1ubuntu6pop1~1650301427~22.04~2055533 is installed
Depends: libgdm1 (= 41~rc-0ubuntu2pop0~1634915133~21.10~cf40258) but 42.0-1ubuntu6pop1~1650301427~22.04~2055533 is installed
gnome-settings-daemon : Depends: gnome-settings-daemon-common (= 40.0.1-1ubuntu3pop0~1639691325~21.10~3bcd31b) but 42.1-1ubuntu3pop0~1651657687~22.04~0386384 is installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
E: Unable to correct dependencies
I found 3 related articles, none of the solutions in them solved my problem:
https://askubuntu.com/questions/124845/eerror-pkgproblemresolverresolve-generated-breaks-this-may-be-caused-by-hel
https://askubuntu.com/questions/1279062/upgrade-from-18-04-to-20-04-prevented-by-eerror-pkgproblemresolverresolve-g
https://askubuntu.com/questions/633544/e-error-pkgproblemresolverresolve-generated-breaks-this-may-be-caused-by-he
In the questions above they appear to be focused on specific packages, not a dist-upgrade, so I don't know how to simply identify and remove an offending package.
If I try to remove an offending package I get what looks like it will be a chain of dependencies that reach into the dist-upgrade, something I guess I don't want to mess with.
$ sudo apt-get remove gdm3
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
gnome-settings-daemon : Depends: gnome-settings-daemon-common (= 40.0.1-1ubuntu3pop0~1639691325~21.10~3bcd31b) but 42.1-1ubuntu3pop0~1651657687~22.04~0386384 is to be installed
pop-desktop : Depends: gdm3 but it is not going to be installed
Recommends: io.elementary.sideload but it is not installable
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
|
I found the primary PopOS upgrade thread here:
https://www.reddit.com/r/pop_os/comments/ucge6e/upgrade_help_thread/
These steps solved the problem for me:
pop-upgrade release repair
sudo apt-get install -f
sudo apt-get full-upgrade --allow-downgrades
In particular --allow-downgrades was something I was missing based on other resources I had come across.
| On dist-upgrade: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages |
1,592,034,953,000 |
At work, I received a VM on VirtualBox. I have no clue about its composition except that it is running Debian 10.
Quickly, because tools like Chrome were complaining, I had to do an apt-get update + upgrade to bring packages to recent enough versions.
But because I still have some problems (freezing of VirtualBox or of my VM, I can't tell which), I am thinking about doing an apt-get dist-upgrade.
At home, I wouldn't have questioned myself and the command would have been issued immediately. But at work, in front of an unknown VM, I'm wondering if it's wise.
What are the (general) risks I'm facing if I launch an apt-get dist-upgrade?
Would you say, even when you don't know what OS and software settings you have below you, that it's, in general, a good idea to do so?
Conversely, if I refuse to do any dist-upgrade in this "VM life", what may I face? Am I doomed to encounter some troubles?
|
While 'apt upgrade' offers a conservative approach to upgrading packages without installing or removing any additional software, 'apt dist-upgrade' provides a more comprehensive solution, handling complex dependency changes and minimizing the impact on other packages.
Performing a dist-upgrade is recommended to keep your system up to date with the latest security patches and bug fixes, but
on an unknown VM the decision should be based on a careful assessment of the risks involved!
A dist-upgrade may install new packages or remove existing ones.
If you decide to dist-upgrade, make sure a backup or rollback is available, so you can restore the original state.
Risks if you decide to:
A dist-upgrade may install new packages or remove existing ones; this can lead to conflicts between packages if there are custom configurations or third-party software installed on the system.
It can introduce changes that may not be fully tested or compatible with your specific environment; this can result in system instability, crashes, or unpredictable behavior.
If the VM has been customized or runs specific software, its configurations may be overwritten or conflict with those customizations, which could lead to functional issues.
Risks if you don't:
By not upgrading the system components, you might miss out on important security updates that could leave your system vulnerable to exploits and attacks.
Newer packages and dependencies may not be compatible with older versions.
You might miss out on bug fixes, performance improvements, and new features introduced in the updated packages.
Why use apt-get upgrade instead of apt-get dist-upgrade?
Apt Upgrade vs Apt Dist-upgrade: The Key Differences
What's the difference between apt-get upgrade vs dist-upgrade?
| What are the risks of doing apt-get upgrade(s), but never apt-get dist-upgrade(s)? |
1,592,034,953,000 |
I just upgraded my system from Debian 11 to 12, by following this guide from cyberciti. This system has been kept up-to-date for more than 9 years, so it has been through at least 4 major upgrades (Debian 7 or 8 to 12 today).
During the first run of apt upgrade --without-new-pkgs, I had an error about libudev (sadly I forgot to keep the output from the command), that I fixed by removing 2 files:
$ rm /lib/x86_64-linux-gnu/libudev.so.1 /lib/x86_64-linux-gnu/libudev.so.1.6.5
I was able to finish the upgrade and reboot.
By performing further investigation, it looks like my system has some other duplicated libraries:
$ dpkg --search /lib/x86_64-linux-gnu/perl/ /usr/lib/x86_64-linux-gnu/perl/
dpkg-query: no path found matching pattern /lib/x86_64-linux-gnu/perl/
libperl5.36:amd64: /usr/lib/x86_64-linux-gnu/perl
It looks like these directories are not symbolic links:
$ ls -ld /lib/ /lib/x86_64-linux-gnu/ /usr/ /usr/lib/ /usr/lib/x86_64-linux-gnu/
drwxr-xr-x 84 root root 4.0K Jul 5 21:05 /lib//
drwxr-xr-x 78 root root 96K Jul 5 21:05 /lib/x86_64-linux-gnu//
drwxr-xr-x 12 root root 4.0K Jul 5 20:32 /usr//
drwxr-xr-x 84 root root 4.0K Jul 5 21:05 /usr/lib//
drwxr-xr-x 78 root root 96K Jul 5 21:05 /usr/lib/x86_64-linux-gnu//
I already had a very similar issue before: Can't restore systemd after upgrade from Debian 10 to 11: “undefined symbol: seccomp_api_get”
Here are the questions:
Is it normal to have duplicated libraries in /lib/x86_64-linux-gnu/ and /usr/lib/x86_64-linux-gnu/?
Can I rely on the output of dpkg --search and delete the paths that show no path found …? Or can some tool help me clean this mess?
What events on my system could have made this possible?
Update: actually it looks like these 2 directories are identical:
$ ls -lh /lib/x86_64-linux-gnu/test.ignore /usr/lib/x86_64-linux-gnu/test.ignore
ls: cannot access '/lib/x86_64-linux-gnu/test.ignore': No such file or directory
ls: cannot access '/usr/lib/x86_64-linux-gnu/test.ignore': No such file or directory
$ touch /lib/x86_64-linux-gnu/test.ignore
$ ls -lh /lib/x86_64-linux-gnu/test.ignore /usr/lib/x86_64-linux-gnu/test.ignore
-rw-r--r-- 1 root root 0 Jul 5 22:14 /lib/x86_64-linux-gnu/test.ignore
-rw-r--r-- 1 root root 0 Jul 5 22:14 /usr/lib/x86_64-linux-gnu/test.ignore
But I don't understand how it works. And now I know that I must not delete files from /lib/x86_64-linux-gnu/ because it would delete files in /usr/lib/x86_64-linux-gnu/ too.
|
If you want to see a symlink, you mustn’t add /:
ls -ld /lib
should show you a different result.
Debian 12 enforces a “merged /usr”, the “duplication” you’re seeing is normal.
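The trailing-slash effect can be reproduced with a throwaway symlink (all paths here are hypothetical scratch paths):

```shell
# build a directory and a symlink pointing at it
mkdir -p /tmp/usrdemo/real
ln -sfn /tmp/usrdemo/real /tmp/usrdemo/link
ls -ld /tmp/usrdemo/link | cut -c1    # l  -- the symlink itself
ls -ld /tmp/usrdemo/link/ | cut -c1   # d  -- the directory it points to
```

With the trailing slash, ls follows the link and reports on the target directory, which is why /lib/ and /usr/lib/ looked like two independent directories above.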
| After upgrading to Debian 12, duplicated files in /lib/x86_64-linux-gnu/ and /usr/lib/x86_64-linux-gnu/ |
1,592,034,953,000 |
I use Ubuntu 10.04 for more than a year and I often feel that I need to reinstall it to newest version.
I'd like to take most of my configurations and important settings to the new system. I already have some files and directories in mind that I certainly want to back up, but I'm afraid I will forget something.
Is there some checklist, guideline or even software I can use to help me with backing up the important data? I don't want to back up the whole partition. (it's not so critical)
How can I cleanly update to a newer version?
|
As psusi correctly points out, you shouldn't need to reinstall a Debian derivative. Just upgrade.
Regardless, the obvious answer to the backup question is to use version control to back up your home directory and config settings. For the config files in /etc on a Unix-like system, Joey Hess's etckeeper is popular. I'd recommend using a distributed version control system like Mercurial or Git, which can be used to periodically push the repository contents off your hard drive, and thus acts as an automatic backup. With Mercurial you can set up a
post-commit hook which will push after you commit, so your backups always stay completely up to date.
Note that this is not a general backup solution in this case, but works well for important config files and so forth, since they are general small text files, and therefore ideal for source control. And in this situation distributed version control is super-efficient, comparable in performance to rsync but better because of the atomicity of version control. Mercurial, at least, will roll back rather than push a partial changeset, and I imagine Git does the same. Also, version controlling your config files has obvious additional benefits.
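A minimal sketch of the idea with Git, using a scratch directory in place of a real home or /etc (etckeeper automates exactly this for /etc; the file name below is just an example):

```shell
# scratch directory standing in for the config tree you want to track
mkdir -p /tmp/demo-etc
printf 'PermitRootLogin no\n' > /tmp/demo-etc/sshd_config.sample
git -C /tmp/demo-etc init -q
git -C /tmp/demo-etc add .
git -C /tmp/demo-etc -c user.email=demo@example.com -c user.name=demo commit -qm 'initial config snapshot'
git -C /tmp/demo-etc log --oneline | wc -l
# 1
```

From there, adding a remote and pushing after each commit gives the automatic off-disk backup described above.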
| Upgrading Ubuntu to a newer version while keeping important files and settings |
1,592,034,953,000 |
I am trying to update my Kali version to 2020.3 from 2020.1 as I have many important files and tools installed, but I was not able to do it.
I consulted this site https://www.kali.org/docs/general-use/updating-kali/
I tried sudo apt full-upgrade but nothing happened. The version is still 2020.1.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Kali
Description: Kali GNU/Linux Rolling
Release: 2020.1
Codename: kali-rolling
$ sudo apt full-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
clang-9 libclang-common-9-dev libclang-cpp9 libz3-dev llvm-9 llvm-9-dev llvm-9-runtime llvm-9-tools
The following packages will be upgraded:
clang libomp-8-dev libomp5-8 libpocl2 libpocl2-common pocl-opencl-icd
6 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 47.2 MB/61.0 MB of archives.
After this operation, 310 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 https://kali.mirror.garr.it/mirrors/kali kali-rolling/main amd64 libpocl2 amd64 1.4-6 [15.8 MB]
Get:2 https://kali.mirror.garr.it/mirrors/kali kali-rolling/main amd64 libpocl2-common all 1.4-6 [79.2 kB]
Get:3 https://kali.mirror.garr.it/mirrors/kali kali-rolling/main amd64 libz3-dev amd64 4.8.7-4 [87.3 kB]
Get:4 https://kali.mirror.garr.it/mirrors/kali kali-rolling/main amd64 llvm-9-runtime amd64 1:9.0.1-9 [212 kB]
Get:5 https://kali.mirror.garr.it/mirrors/kali kali-rolling/main amd64 llvm-9 amd64 1:9.0.1-9 [4,850 kB]
Get:6 https://kali.mirror.garr.it/mirrors/kali kali-rolling/main amd64 llvm-9-tools amd64 1:9.0.1-9 [328 kB]
Get:7 https://kali.mirror.garr.it/mirrors/kali kali-rolling/main amd64 llvm-9-dev amd54 1:9.0.1-9 [25.9 MB]
Fetched 39.9 MB in 13s (2,990 kB/s)
Selecting previously unselected package libclang-ccp9.
(Reading database ... 251246 files and directories currently installed.)
Preparing to unpack .../00-libclang-ccp9_1%3a9.0.1-9_amd64.deb ...
Unpacking libclang-cpp9 (1:9.0.1-9) ...
Selecting previously unselected package libclang-common-9-dev.
Preparing to unpack .../01-libclang-common-9-dev_1%3a9.0.1-9_amd64.deb ...
Unpacking libclang-common-9-dev (1:9.0.1-9) ...
Selecting previously unselected package clang-9.
Preparing to unpack .../02-clang-9_1%3a9.0.1-9_amd64.deb ...
Unpacking clang-9 (1:9.0.1-9) ...
Preparing to unpack .../03-clang_1%3a9.0-49_amd64.deb ...
Unpacking clang (1:9.0-49) over (1:8.0-48.3) ...
Preparing to unpack .../04-libomp-8-dev_1%3a8.0.1-8_amd64.deb ...
Unpacking libomp-8-dev (1:8.0.1-8) over (1:8.0.1-4) ...
Preparing to unpack .../05-libomp5-8_1%3a8.0.1-8_amd64.deb ...
Unpacking libomp5-8:amd64 (1:8.0.1-8) over (1:8.0.1-4) ...
Preparing to unpack .../06-pocl-opencl-icd_1.4-6_amd64.deb ...
Unpacking pocl-opencl-icd:amd64 (1.4-6) over (1.3-10) ...
Preparing to unpack .../07-libpocl2_1.4-6_amd64.deb ...
Unpacking ibpocl2:amd64 (1.4-6) over (1.3-10) ...
Preparing to unpack .../08-libpocl2-common_1.4-6_amd64.deb ...
Unpacking ibpocl2-common (1.4-6) over (1.3-10) ...
Selecting previously unselected package libz3-dev:amd64.
Preparing to unpack .../09-libz3-dev_4.8.7-4_amd64.deb ...
Unpacking libz3-dev:amd64 (4.8.7-4) ...
Selecting previously unselected package llvm-9-runtime.
Preparing to unpack .../10-llvm-9-runtime_1%3a9.0.1-9_amd64.deb ...
Unpacking llvm-9-runtime (1:9.0.1-9) ...
Selecting previously unselected package llvm-9.
Preparing to unpack .../11-llvm-9_1%3a9.0.1-9_amd64.deb ...
Unpacking llvm-9 (1:9.0.1-9) ...
Selecting previously unselected package llvm-9-tools.
Preparing to unpack .../12-llvm-9-tools_1%3a9.0.1-9_amd64.deb ...
Unpacking llvm-9-tools (1:9.0.1-9) ...
Selecting previously unselected package llvm-9-dev.
Preparing to unpack .../13-llvm-9-dev_1%3a9.0.1-9_amd64.deb ...
Unpacking llvm-9-dev (1:9.0.1-9) ...
Setting up libz3-dev:amd64 (4.8.7-4) ...
$ grep Version /etc/os-release
VERSION="2020.1"
VERSION_ID="2020.1"
VERSION_CODENAME="kali-rolling"
$ sudo apt full-uprade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
$ sudo apt update
Hit:1 http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
sources.list below
deb http://kali.mirror.garr.it/mirrors/kali kali-rolling main non-free contrib
deb-src http://kali.mirror.garr.it/mirrors/kali kali-rolling main non-free contrib
I could still install a fresh Kali 2020.3, but it will take a lot of time to download and install all the tools. I have a really slow internet.
Please help.
|
You are using the mirror http://kali.mirror.garr.it/mirrors/kali. At the moment, it has an older snapshot of the Kali Linux repository. For example, its base-files version is 1:2020.1.0 while in Kali Linux 2020.3 it is 1:2020.3.1. That is why you cannot upgrade to 2020.3 with your configuration now.
According to kali.org wiki, your sources.list should contain only the default host, which will redirect apt to the nearest up-to-date mirror automatically:
deb http://http.kali.org/kali kali-rolling main non-free contrib
Change your sources.list to that one line and try again.
That single line in sources.list should be enough. You could also add another line for programs' source code as explained at kali.org. The lines in sources.list which start with deb are for pre-compiled binary packages, which are ready to run. The lines which start with deb-src are for source code, which you have to compile into binary before running. If you wonder whether you need that, you, likely, don't. The difference between deb and deb-src lines in sources.list was also explained before.
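For completeness, a sources.list that also pulls source code would pair the two lines like this (the deb-src line is only needed if you actually build packages from source):

```
deb http://http.kali.org/kali kali-rolling main non-free contrib
deb-src http://http.kali.org/kali kali-rolling main non-free contrib
```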
| Update version of Kali Linux to 2020.3 from 2020.1 |
1,592,034,953,000 |
When upgrading from Jessie to Stretch, at the end of dist-upgrade, it ends with an error:
Errors were encountered while processing:
nagios-nrpe-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
Have tried running apt upgrade, install, and reinstall without correcting this.
What to do?
|
To finish installing nagios-nrpe-server, I ended up verifying the post-install scripts.
At nagios-nrpe-server.postinst:
#!/bin/sh
set -e
# Automatically added by dh_installinit
if [ -x "/etc/init.d/nagios-nrpe-server" ]; then
update-rc.d nagios-nrpe-server defaults >/dev/null
invoke-rc.d nagios-nrpe-server start || exit $?
fi
# End automatically added section
As I have nagios-nrpe being invoked by (x)inetd and not running as a daemon, it failed startup and thus the apt dist-upgrade error.
For the moment I have commented out the start line, and am considering whether to file a bug and/or change from xinetd to a daemon. I use xinetd because I also use it to invoke the backup daemon.
| Strange nagios-nrpe-server error upgrading from Jessie to Stretch |
1,592,034,953,000 |
I'm using a cheap VM that's getting pretty old. So old that recently, apt-get update && apt-get upgrade returned errors, because wheezy packages were removed from the mirrors.
So I decided to update my Debian install. I was overconfident and tried to update wheezy straight to buster.
The main problem is that I'm trying to update through ssh, and every time an error occurs, the ssh connection closes, and I can't see the error details. I have no idea what the errors were on the server side; I just see that my local ssh client crashes.
What I did:
I changed the lines in /etc/apt/sources.list to reference buster rather than wheezy
I did a update && upgrade that updated nothing (I'm guessing none of the packages were compatible) then a dist-upgrade that crashed ssh and, as a bonus, did something so that I can't run nano or vim anymore without my ssh crashing.
I edited the sources.list (using echo > because no editor works anymore) to point to jessie
I did a update && upgrade that upgraded a few things then dist-upgrade that went a bit further than before, then crashed at "Preconfiguring packages ...".
No editor works anymore, every time I try to run nano or vim my local ssh client crashes.
cat /etc/debian_version gets me 8.11 but every time I try to dist-upgrade it stills tries to upgrade everything like nothing was ever upgraded.
I'm guessing I'm in some pretty messed up state and I'll have trouble restoring a stable state, but what can I try to see the actual errors thrown, so I can at least try to make it work again?
|
If you have screen or tmux already installed, you could use those to run apt-get in recoverable sessions. This will have two advantages: when you’re disconnected, you can reconnect and see what happened, and apt-get won’t be interrupted by the terminal suddenly going away.
You should also be able to see everything that happened in /var/log/apt/term.log, although I’m not sure that was true back in Wheezy days.
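Another low-tech safeguard is to tee the output to a file yourself, so a copy survives even if the client dies mid-run — a sketch with an echo standing in for the real apt-get command:

```shell
# placeholder for: sudo apt-get dist-upgrade 2>&1 | tee /root/dist-upgrade.log
sh -c 'echo simulated apt-get output' 2>&1 | tee /tmp/dist-upgrade.log
cat /tmp/dist-upgrade.log
# simulated apt-get output
```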
However given that this is a cheap VM I would suggest creating a new Buster VM and copying any data you need from the old to the new, rather than trying to recover the old VM.
| Upgrade Debian Wheezy through SSH crashes |
1,592,034,953,000 |
Upgraded lubuntu 17.04 to 17.10 on an EeePC 900a. Appears to work fine except that the left side of the display is junk. The full screen looks fine before linux is booted, the Asus EeePC splash.
System has 2GB RAM, 32 GB SSD & wireless USB mouse. The EeePC 900a link's specification incorrectly refers to the model as 900 in one spot, and the system does not have a webcam.
When attempting the upgrade, a popup said there was not enough space on /boot; per its instructions, I changed COMPRESS from gzip to xz in /etc/initramfs-tools/initramfs.conf.
I am able to ssh into the system.
Note that even though the left side of the display is garbled, the mouse pointer is clear on the entire screen.
Booting the same system on 17.04 lubuntu from thumb drive works fine.
|
Exact same problem and no solution so far. There's a workaround: if you suspend the machine and resume it, the display works fine again.
You can suspend the computer using the power menu on the login screen (top right corner icon).
| Upon upgrade of lubuntu 17.10 from 17.04 display messed up on an eeepc 900a |
1,592,034,953,000 |
Does Linux Mint have a direct upgrade without a clean install with an USB?
|
Yes, the Linux Mint upgrade guide lists the available upgrade paths and provides links to the corresponding documentation.
In general, you can upgrade from any given release or point release to the latest point release for that major version (e.g. 19.1 to 19.3), and from the latest point release for a given major version to the next major version (e.g. 19.3 to 20).
| Does Linux Mint have a direct upgrade like Ubuntu |
1,592,034,953,000 |
I was upgrading Ubuntu 18.04 to Ubuntu 20.04 using do-release-upgrade -f DistUpgradeViewNonInteractive, but the upgrade halted due to a power cut and upon resuming the upgrade I was getting the following errors:
In the hope of resuming the upgrade, upon execution of do-release-upgrade -f DistUpgradeViewNonInteractive I get:
$ sudo do-release-upgrade -f DistUpgradeViewNonInteractive
Checking for a new Ubuntu release
There is no development version of an LTS available.
To upgrade to the latest non-LTS development release
set Prompt=normal in /etc/update-manager/release-upgrades.
I tried to resume the installation of packages using apt-get update --fix-missing and I get:
$ sudo apt-get update --fix-missing
Hit:1 http://in.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://in.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:3 http://in.archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu focal-security InRelease
appstreamcli: symbol lookup error: appstreamcli: undefined symbol: AS_APPSTREAM_METADATA_PATHS
Reading package lists... Done
E: Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/cache/app-info -a -e /usr/bin/appstreamcli; then appstreamcli refresh-cache > /dev/null; fi'
E: Sub-process returned an error code
I tried searching for solutions on Google and Stack Overflow; I found many related issues, but none addressed my exact problem, i.e. resuming an interrupted upgrade that fails with the appstreamcli error.
So, I am presenting how I fixed the problem.
If you have solved this problem using any other approach, you are welcome to mention it.
|
I tried the following steps:
First I tried purging using sudo apt-get purge libappstream3 as suggested here, but this package was not found in the system.
Then, as suggested in this blog post and this comment, I tried sudo dpkg --configure -a, and the installation resumed and completed without any error.
After that, I tried running sudo apt-get update && sudo apt-get upgrade -y and it worked properly and updated the packages.
| Interrupted upgrade of Ubuntu from 18.04 to 20.04 gives - Problem executing scripts APT::Update::Post-Invoke-Success |
1,592,034,953,000 |
So I was in the process of upgrading a Debian 8 to 9 following those instructions. Something went wrong, Python broke, and I haven't been able to fix it after hours of trying. Some samples:
# apt-get install --fix-broken
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
coinor-libcoinmp1v5 kde-style-oxygen kde-style-oxygen-qt4 libboost-date-time1.62.0 libboost-filesystem1.62.0 libboost-iostreams1.62.0
...
libkf5mailcommon5 libkf5mailimporter5 libkf5messagecomposer5 libkf5messagelist5 libkf5sendlater5 libkf5templateparser5 libkjsembed4 libkntlm4
libokularcore7 liborcus-0.11-0 libpagemaker-0.0-0 libspeechd2 libwps-0.4-4
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
accountsservice akonadi-backend-mysql akonadi-server akregator apper appstream appstream-index apt apt-utils ark baloo-kf5 bluedevil bluez
...
python-mpi4py python-scipy python3 python3-apt python3-cffi-backend python3-chardet python3-crypto python3-cryptography python3-dbus
python3-debian python3-decorator python3-dev python3-gi python3-idna python3-joblib python3-keyring python3-keyrings.alt python3-minimal
python3-nose python3-numpy python3-pkg-resources python3-py python3-pyasn1 python3-pycurl python3-pyqt4 python3-pyqt5 python3-pytest
python3-secretstorage python3-setuptools python3-simplejson python3-sip python3-six python3-software-properties python3-wheel python3-xdg
python3.5 python3.5-dev qapt-batch qdbus qdbus-qt5 qml-module-org-kde-activities qml-module-org-kde-bluezqt qml-module-org-kde-draganddrop
...
task-ssh-server tasksel tasksel-data uno-libs3 ure user-manager vlc vlc-bin vlc-data vlc-l10n vlc-nox vlc-plugin-base vlc-plugin-notify
vlc-plugin-qt vlc-plugin-samba vlc-plugin-video-output
Suggested packages:
gnome-control-center akonadi-backend-postgresql akonadi-backend-sqlite limba apt-doc powermgmt-base pulseaudio-module-bluetooth bluez-alsa
...
vlc-plugin-skins2 vlc-plugin-video-splitter vlc-plugin-visualization
The following packages will be REMOVED:
aptitude coinor-libcoinmp1 coinor-libcoinutils3 coinor-libosi1 gparted juk k3b k3b-i18n kaccessible katepart kde-baseapps kde-baseapps-bin
...
task-kde-desktop vlc-plugin-pulse
The following NEW packages will be installed:
accountsservice appstream baloo-kf5 bluedevil bluez bluez-obexd breeze breeze-cursor-theme breeze-icon-theme catdoc coinor-libcoinmp1v5
...
python3-cffi-backend python3-crypto python3-cryptography python3-idna python3-keyring python3-keyrings.alt python3-py python3-pyasn1
python3-pycurl python3-pytest python3-secretstorage python3-xdg python3.5 python3.5-dev qdbus-qt5 qml-module-org-kde-activities
...
python3-debian python3-decorator python3-dev python3-gi python3-joblib python3-minimal python3-nose python3-numpy python3-pkg-resources
python3-pyqt4 python3-pyqt5 python3-setuptools python3-simplejson python3-sip python3-six python3-software-properties python3-wheel qapt-batch
qdbus qml-module-qtquick-controls qml-module-qtquick-layouts qml-module-qtquick-window2 qml-module-qtquick2 qt4-designer qt4-dev-tools
qt4-linguist-tools qt4-qmake qtbase5-dev-tools qtchooser qtcreator qtcreator-data qtdeclarative5-dev-tools qttools5-dev-tools
software-properties-kde sqlitebrowser systemsettings task-desktop task-english task-ssh-server tasksel tasksel-data uno-libs3 ure vlc vlc-data
vlc-nox vlc-plugin-notify vlc-plugin-samba
211 upgraded, 514 newly installed, 165 to remove and 243 not upgraded.
25 not fully installed or removed.
Need to get 0 B/488 MB of archives.
After this operation, 409 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Reading changelogs... Done
Extracting templates from packages: 100%
Preconfiguring packages ...
(Reading database ... 223763 files and directories currently installed.)
Preparing to unpack .../software-properties-kde_0.96.20.2-1+deb9u1_all.deb ...
Failed to import the site module
Traceback (most recent call last):
File "/usr/lib/python3.5/site.py", line 580, in <module>
main()
File "/usr/lib/python3.5/site.py", line 566, in main
known_paths = addusersitepackages(known_paths)
File "/usr/lib/python3.5/site.py", line 287, in addusersitepackages
user_site = getusersitepackages()
File "/usr/lib/python3.5/site.py", line 263, in getusersitepackages
user_base = getuserbase() # this will also set USER_BASE
File "/usr/lib/python3.5/site.py", line 253, in getuserbase
USER_BASE = get_config_var('userbase')
File "/usr/lib/python3.5/sysconfig.py", line 595, in get_config_var
return get_config_vars().get(name)
File "/usr/lib/python3.5/sysconfig.py", line 538, in get_config_vars
_init_posix(_CONFIG_VARS)
File "/usr/lib/python3.5/sysconfig.py", line 410, in _init_posix
from _sysconfigdata import build_time_vars
File "/usr/lib/python3.5/_sysconfigdata.py", line 6, in <module>
from _sysconfigdata_m import *
ImportError: No module named '_sysconfigdata_m'
dpkg: warning: subprocess old pre-removal script returned error exit status 1
dpkg: trying script from the new package instead ...
Failed to import the site module
Traceback (most recent call last):
File "/usr/lib/python3.5/site.py", line 580, in <module>
main()
File "/usr/lib/python3.5/site.py", line 566, in main
known_paths = addusersitepackages(known_paths)
File "/usr/lib/python3.5/site.py", line 287, in addusersitepackages
user_site = getusersitepackages()
File "/usr/lib/python3.5/site.py", line 263, in getusersitepackages
user_base = getuserbase() # this will also set USER_BASE
File "/usr/lib/python3.5/site.py", line 253, in getuserbase
USER_BASE = get_config_var('userbase')
File "/usr/lib/python3.5/sysconfig.py", line 595, in get_config_var
return get_config_vars().get(name)
File "/usr/lib/python3.5/sysconfig.py", line 538, in get_config_vars
_init_posix(_CONFIG_VARS)
File "/usr/lib/python3.5/sysconfig.py", line 410, in _init_posix
from _sysconfigdata import build_time_vars
File "/usr/lib/python3.5/_sysconfigdata.py", line 6, in <module>
from _sysconfigdata_m import *
ImportError: No module named '_sysconfigdata_m'
dpkg: error processing archive /var/cache/apt/archives/software-properties-kde_0.96.20.2-1+deb9u1_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 1
Failed to import the site module
Traceback (most recent call last):
File "/usr/lib/python3.5/site.py", line 580, in <module>
main()
File "/usr/lib/python3.5/site.py", line 566, in main
known_paths = addusersitepackages(known_paths)
File "/usr/lib/python3.5/site.py", line 287, in addusersitepackages
user_site = getusersitepackages()
File "/usr/lib/python3.5/site.py", line 263, in getusersitepackages
user_base = getuserbase() # this will also set USER_BASE
File "/usr/lib/python3.5/site.py", line 253, in getuserbase
USER_BASE = get_config_var('userbase')
File "/usr/lib/python3.5/sysconfig.py", line 595, in get_config_var
return get_config_vars().get(name)
File "/usr/lib/python3.5/sysconfig.py", line 538, in get_config_vars
_init_posix(_CONFIG_VARS)
File "/usr/lib/python3.5/sysconfig.py", line 410, in _init_posix
from _sysconfigdata import build_time_vars
File "/usr/lib/python3.5/_sysconfigdata.py", line 6, in <module>
from _sysconfigdata_m import *
ImportError: No module named '_sysconfigdata_m'
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
/var/cache/apt/archives/software-properties-kde_0.96.20.2-1+deb9u1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
Also on single packages:
# apt-get install --reinstall python3-minimal
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
apper : Depends: packagekit (>= 0.8.6) but it is not going to be installed
python3 : Depends: python3-minimal (= 3.4.2-2) but 3.5.3-1 is to be installed
software-properties-kde : Depends: python3-software-properties (= 0.96.20.2-1+deb9u1) but 0.92.25debian1 is to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
And basically the same as above with python3 or python
# dpkg --remove --force-remove-reinstreq --force-depends python3
dpkg: python3: dependency problems, but removing anyway as you requested:
python3-pygments depends on python3:any (>= 3.3.2-2~).
software-properties-common depends on python3 (>= 3.2.3-3~).
python3-joblib depends on python3:any (>= 3.3.2-2~).
python3-debian depends on python3:any (>= 3.3.2-2~); however:
Package python3 is to be removed.
python3-ptyprocess depends on python3:any (>= 3.3.2-2~).
python3-traitlets depends on python3:any (>= 3.3.2-2~); however:
Package python3 is to be removed.
python3-decorator depends on python3 (>= 3.2.3-3~).
dh-python depends on python3:any (>= 3.3.2-2~).
debian-goodies depends on python3.
python3-pykde4 depends on python3 (<< 3.5).
python3-pykde4 depends on python3 (>= 3.4~).
python3-pykde4 depends on python3 (<< 3.5).
python3-pykde4 depends on python3 (>= 3.4~).
python3-setuptools depends on python3:any (>= 3.3).
python3-setuptools depends on python3:any (<< 3.5).
python3-setuptools depends on python3:any (>= 3.3).
python3-setuptools depends on python3:any (<< 3.5).
python3-software-properties depends on python3 (>= 3.2.3-3~).
python3-sip depends on python3 (>= 3.4~).
python3-sip depends on python3 (<< 3.5).
python3-sip depends on python3 (>= 3.4~).
python3-sip depends on python3 (<< 3.5).
unattended-upgrades depends on python3.
python3-pip depends on python3:any (>= 3.4~).
python3-chardet depends on python3.
python3-chardet depends on python3:any (>= 3.3.2-2~).
python3-chardet depends on python3.
python3-chardet depends on python3:any (>= 3.3.2-2~).
python3-pexpect depends on python3:any (>= 3.3.2-2~); however:
Package python3 is to be removed.
python3-nose depends on python3; however:
Package python3 is to be removed.
python3-nose depends on python3:any (>= 3.3.2-2~); however:
Package python3 is to be removed.
python3-nose depends on python3; however:
Package python3 is to be removed.
python3-nose depends on python3:any (>= 3.3.2-2~); however:
Package python3 is to be removed.
python3-ipython-genutils depends on python3:any (>= 3.3.2-2~).
python3-uno depends on python3 (>= 3.4~).
python3-uno depends on python3 (<< 3.5).
python3-uno depends on python3 (>= 3.4~).
python3-uno depends on python3 (<< 3.5).
python3-pyqt4 depends on python3 (>= 3.4~).
python3-pyqt4 depends on python3 (<< 3.5).
python3-pyqt4 depends on python3 (>= 3.4~).
python3-pyqt4 depends on python3 (<< 3.5).
python3-wcwidth depends on python3:any (>= 3.3.2-2~).
lsb-release depends on python3:any (>= 3.4~).
python3-gi depends on python3 (>= 3.4~).
python3-gi depends on python3 (<< 3.5).
python3-gi depends on python3 (>= 3.4~).
python3-gi depends on python3 (<< 3.5).
python3-pkg-resources depends on python3:any (>= 3.3).
python3-pkg-resources depends on python3:any (<< 3.5).
python3-pkg-resources depends on python3:any (>= 3.3).
python3-pkg-resources depends on python3:any (<< 3.5).
python3-wheel depends on python3:any (>= 3.3.2-2~).
python3-wheel depends on python3.
python3-wheel depends on python3:any (>= 3.3.2-2~).
python3-wheel depends on python3.
gdebi-core depends on python3:any (>= 3.3.2-2~).
software-properties-kde depends on python3 (>= 3.2.3-3~).
python3-dev depends on python3 (= 3.4.2-2).
python3-six depends on python3:any (>= 3.3.2-2~).
python3-simplejson depends on python3 (>= 3.4~).
python3-simplejson depends on python3 (<< 3.5).
python3-simplejson depends on python3 (>= 3.4~).
python3-simplejson depends on python3 (<< 3.5).
python3-pyqt5 depends on python3 (>= 3.4~).
python3-pyqt5 depends on python3 (<< 3.5).
python3-pyqt5 depends on python3 (>= 3.4~).
python3-pyqt5 depends on python3 (<< 3.5).
python3-numpy depends on python3 (>= 3.4~).
python3-numpy depends on python3 (<< 3.5).
python3-numpy depends on python3 (>= 3.4~).
python3-numpy depends on python3 (<< 3.5).
python3-ipython depends on python3:any (>= 3.3.2-2~); however:
Package python3 is to be removed.
ipython3 depends on python3:any (>= 3.3.2-2~); however:
Package python3 is to be removed.
python3-dbus depends on python3 (<< 3.5).
python3-dbus depends on python3 (>= 3.4~).
python3-dbus depends on python3 (<< 3.5).
python3-dbus depends on python3 (>= 3.4~).
python3-apt depends on python3 (<< 3.5).
python3-apt depends on python3 (>= 3.4~).
python3-apt depends on python3 (<< 3.5).
python3-apt depends on python3 (>= 3.4~).
python3-prompt-toolkit depends on python3:any (>= 3.3.2-2~); however:
Package python3 is to be removed.
python3-simplegeneric depends on python3 (>= 3.1.3-13~).
python3-pickleshare depends on python3:any (>= 3.3.2-2~).
(Reading database ... 223763 files and directories currently installed.)
Removing python3 (3.4.2-2) ...
Failed to import the site module
Traceback (most recent call last):
File "/usr/lib/python3.5/site.py", line 580, in <module>
main()
File "/usr/lib/python3.5/site.py", line 566, in main
known_paths = addusersitepackages(known_paths)
File "/usr/lib/python3.5/site.py", line 287, in addusersitepackages
user_site = getusersitepackages()
File "/usr/lib/python3.5/site.py", line 263, in getusersitepackages
user_base = getuserbase() # this will also set USER_BASE
File "/usr/lib/python3.5/site.py", line 253, in getuserbase
USER_BASE = get_config_var('userbase')
File "/usr/lib/python3.5/sysconfig.py", line 595, in get_config_var
return get_config_vars().get(name)
File "/usr/lib/python3.5/sysconfig.py", line 538, in get_config_vars
_init_posix(_CONFIG_VARS)
File "/usr/lib/python3.5/sysconfig.py", line 410, in _init_posix
from _sysconfigdata import build_time_vars
File "/usr/lib/python3.5/_sysconfigdata.py", line 6, in <module>
from _sysconfigdata_m import *
ImportError: No module named '_sysconfigdata_m'
dpkg: error processing package python3 (--remove):
subprocess installed pre-removal script returned error exit status 1
Failed to import the site module
Traceback (most recent call last):
File "/usr/lib/python3.5/site.py", line 580, in <module>
main()
File "/usr/lib/python3.5/site.py", line 566, in main
known_paths = addusersitepackages(known_paths)
File "/usr/lib/python3.5/site.py", line 287, in addusersitepackages
user_site = getusersitepackages()
File "/usr/lib/python3.5/site.py", line 263, in getusersitepackages
user_base = getuserbase() # this will also set USER_BASE
File "/usr/lib/python3.5/site.py", line 253, in getuserbase
USER_BASE = get_config_var('userbase')
File "/usr/lib/python3.5/sysconfig.py", line 595, in get_config_var
return get_config_vars().get(name)
File "/usr/lib/python3.5/sysconfig.py", line 538, in get_config_vars
_init_posix(_CONFIG_VARS)
File "/usr/lib/python3.5/sysconfig.py", line 410, in _init_posix
from _sysconfigdata import build_time_vars
File "/usr/lib/python3.5/_sysconfigdata.py", line 6, in <module>
from _sysconfigdata_m import *
ImportError: No module named '_sysconfigdata_m'
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
python3
I've tried this, but then an error about another missing package occurs:
# ln -fs /usr/lib/python3.5/plat-x86_64-linux-gnu/_sysconfigdata_m.py /usr/lib/python3.5/
A lot of things are now broken:
# lsb_release -a
Failed to import the site module
Traceback (most recent call last):
File "/usr/lib/python3.5/site.py", line 580, in <module>
main()
File "/usr/lib/python3.5/site.py", line 566, in main
known_paths = addusersitepackages(known_paths)
File "/usr/lib/python3.5/site.py", line 287, in addusersitepackages
user_site = getusersitepackages()
File "/usr/lib/python3.5/site.py", line 263, in getusersitepackages
user_base = getuserbase() # this will also set USER_BASE
File "/usr/lib/python3.5/site.py", line 253, in getuserbase
USER_BASE = get_config_var('userbase')
File "/usr/lib/python3.5/sysconfig.py", line 595, in get_config_var
return get_config_vars().get(name)
File "/usr/lib/python3.5/sysconfig.py", line 538, in get_config_vars
_init_posix(_CONFIG_VARS)
File "/usr/lib/python3.5/sysconfig.py", line 410, in _init_posix
from _sysconfigdata import build_time_vars
File "/usr/lib/python3.5/_sysconfigdata.py", line 6, in <module>
from _sysconfigdata_m import *
ImportError: No module named '_sysconfigdata_m'
But the upgrade is halfway through:
# cat /etc/issue
Debian GNU/Linux 9 \n \l
# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
VERSION_CODENAME=stretch
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
# ll $(which python)
lrwxrwxrwx 1 root root 9 Jan 24 2017 /usr/bin/python -> python2.7*
# ll $(which python3)
-rwxr-xr-x 1 root root 11855256 Jun 13 2019 /usr/bin/python3*
Why does the installer insist on using a broken python3 instead of python (which is 2.7)? And how do I fix this mess?
Edit:
# apt policy python3
E: Invalid operation policy
# apt-cache policy python3
python3:
Installed: 3.4.2-2
Candidate: 3.5.3-1
Version table:
3.5.3-1 0
500 http://httpredir.debian.org/debian/ stretch/main amd64 Packages
*** 3.4.2-2 0
100 /var/lib/dpkg/status
# uname -a
Linux myserver2c 3.16.0-11-amd64 #1 SMP Debian 3.16.84-1 (2020-06-09) x86_64 GNU/Linux
|
The problem comes from the security repository: it is not set correctly.
Replace the following line described on the webpage:
deb http://security.debian.org stretch/updates main contrib non-free
with:
deb http://security.debian.org/debian-security stretch/updates main contrib non-free
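As a sketch, that substitution can be done with sed; the demo below runs on a throwaway copy under /tmp (an assumed path for the demo), and once the output looks right the same sed can be run with -i.bak on /etc/apt/sources.list as root:

```shell
# Stand-in for /etc/apt/sources.list:
printf 'deb http://security.debian.org stretch/updates main contrib non-free\n' > /tmp/sources.list.demo
# Rewrite the security entry to include /debian-security:
sed -i 's|security\.debian\.org stretch/updates|security.debian.org/debian-security stretch/updates|' /tmp/sources.list.demo
cat /tmp/sources.list.demo
# prints: deb http://security.debian.org/debian-security stretch/updates main contrib non-free
```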
Then run:
sudo apt update
sudo apt dist-upgrade
To solve the Python error, switch to python2.7; see Change the Python3 default version in Ubuntu. Then create /var/lib/dpkg/info/python3.postinst with the following content:
#!/bin/bash
/bin/true
Then run:
sudo dpkg --configure -a
sudo apt update
sudo apt full-upgrade
If it still doesn't work, remove the python3 entry from /var/lib/dpkg/status (after backing the file up), then run the above commands.
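One detail the steps above leave implicit: dpkg expects maintainer scripts to be executable, so the stub needs chmod +x. A sketch, staged under /tmp here; copy the result to /var/lib/dpkg/info/python3.postinst as root:

```shell
# Create the no-op postinst stub and make it executable:
cat > /tmp/python3.postinst <<'EOF'
#!/bin/bash
/bin/true
EOF
chmod +x /tmp/python3.postinst
/tmp/python3.postinst && echo "stub exits cleanly"
```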
| Broken python during Debian 8->9 upgrade |
1,592,034,953,000 |
I have been on testing for a while, and would like a more stable system, so I think the best way to accomplish that would be to pin my system to buster.
I know downgrades are risky, but I think this should be a side-grade, and therefore not risky?
Here's my current setup:
------------------------------
$ cat /etc/apt/sources.list
# I've added non-free, to allow installation of nvidia-driver
deb http://ftp.us.debian.org/debian/ testing main contrib non-free
deb-src http://ftp.us.debian.org/debian/ testing main
deb http://security.debian.org/debian-security testing-security main
deb-src http://security.debian.org/debian-security testing-security main
deb http://ftp.us.debian.org/debian/ testing-updates main
deb-src http://ftp.us.debian.org/debian/ testing-updates main
deb http://deb.debian.org/debian stretch-backports main contrib non-free
------------------------------
$ cat /etc/debian_version
buster/sid
Can I just:
change all the testing to buster, or the relevant mirror
run apt update + apt dist-upgrade
So my new /etc/apt/sources.list:
deb http://deb.debian.org/debian/ buster main contrib non-free
deb-src http://deb.debian.org/debian/ buster main contrib non-free
deb http://deb.debian.org/debian/ buster-updates main
deb-src http://deb.debian.org/debian/ buster-updates main
deb http://deb.debian.org/debian-security buster/updates main
deb-src http://deb.debian.org/debian-security buster/updates main
deb http://ftp.debian.org/debian buster-backports main contrib non-free
deb-src http://ftp.debian.org/debian buster-backports main contrib non-free
|
The right time to “side-grade” from testing to the latest stable release of Debian is when testing becomes stable, or shortly thereafter. The main indicator that such a change is no longer possible is when the next version of glibc migrates to testing — and that happened in mid-September.
In any case, you’re in unsupported downgrade territory; it only gets worse after the glibc bump because you’re then more likely to have to downgrade large numbers of packages in one go.
Changing your repository configuration as you suggest won’t actually cause any change when you next run apt upgrade (or even apt full-upgrade), because APT doesn’t downgrade packages by default. To switch from testing to stable, you’d also have to (temporarily) configure your pin priorities so that Buster packages have a priority greater than 1000:
Package: *
Pin: release a=buster
Pin-Priority: 1001
Then “upgrade”, taking careful note of all the downgrades, and as A.B mentions, re-installing affected packages to make sure all their files are present.
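As a sketch, that pin could be staged like this (written under /tmp for the demo; the real file would live in /etc/apt/preferences.d/, the name 99-buster-downgrade is just illustrative, and the file should be removed again once the downgrade is done):

```shell
# Stage the temporary pin file:
tee /tmp/99-buster-downgrade >/dev/null <<'EOF'
Package: *
Pin: release a=buster
Pin-Priority: 1001
EOF
cat /tmp/99-buster-downgrade
# With the real file in place, `apt-cache policy` should show buster packages at priority 1001.
```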
I would recommend not doing this. You’d be better off re-installing stable, or staying on testing until the next stable release (probably sometime in 2021).
| Debian: Now that buster is stable, how can I "side-grade" away from testing? |
1,592,034,953,000 |
I am trying to upgrade my Ubuntu installation to version 18.04 from 16.04
I was not offered any upgrade by the software updater, so I tried to do it from the command line, and I followed this set of commands: https://linuxconfig.org/how-to-upgrade-to-ubuntu-18-04-lts-bionic-beaver
But I am getting the message that there is no upgrade:
m@m-XPS-M1530:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Codename: xenial
m@m-XPS-M1530:~$ sudo apt update
[sudo] password for m:
Sorry, try again.
[sudo] password for m:
Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:2 http://dl.google.com/linux/chrome/deb stable Release
Hit:3 http://packages.microsoft.com/repos/vscode stable InRelease
Hit:5 http://ppa.launchpad.net/atareao/telegram/ubuntu xenial InRelease
Get:6 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Hit:7 http://ppa.launchpad.net/george-edison55/cmake-3.x/ubuntu xenial InRelease
Hit:8 http://ppa.launchpad.net/libreoffice/libreoffice-prereleases/ubuntu xenial InRelease
Hit:9 http://ppa.launchpad.net/noobslab/apps/ubuntu xenial InRelease
Get:10 http://security.ubuntu.com/ubuntu xenial-security/main amd64 DEP-11 Metadata [67.5 kB]
Get:11 http://security.ubuntu.com/ubuntu xenial-security/main DEP-11 64x64 Icons [68.0 kB]
Get:12 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 DEP-11 Metadata [107 kB]
Get:13 http://security.ubuntu.com/ubuntu xenial-security/universe DEP-11 64x64 Icons [142 kB]
Hit:14 http://gb.archive.ubuntu.com/ubuntu xenial InRelease
Get:15 http://gb.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
Get:16 http://gb.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]
Get:17 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [767 kB]
Get:18 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main i386 Packages [708 kB]
Get:19 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 DEP-11 Metadata [319 kB]
Get:20 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main DEP-11 64x64 Icons [224 kB]
Get:21 http://gb.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 DEP-11 Metadata [246 kB]
Get:22 http://gb.archive.ubuntu.com/ubuntu xenial-updates/universe DEP-11 64x64 Icons [326 kB]
Get:23 http://gb.archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 DEP-11 Metadata [5,964 B]
Get:24 http://gb.archive.ubuntu.com/ubuntu xenial-backports/main amd64 DEP-11 Metadata [3,328 B]
Get:25 http://gb.archive.ubuntu.com/ubuntu xenial-backports/universe amd64 DEP-11 Metadata [5,088 B]
Fetched 3,309 kB in 14s (225 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up-to-date.
m@m-XPS-M1530:~$ sudo apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.
m@m-XPS-M1530:~$ sudo apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.
m@m-XPS-M1530:~$ sudo apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.
m@m-XPS-M1530:~$ sudo apt install update-manager-core
Reading package lists... Done
Building dependency tree
Reading state information... Done
update-manager-core is already the newest version (1:16.04.12).
0 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.
m@m-XPS-M1530:~$ sudo do-release-upgrade
Checking for a new Ubuntu release
No new release found.
What is the problem and how can I fix it?
The laptop I am using is relatively old, with 3 GB of RAM.
|
That's because the currently released version is 18.04.0. To existing LTS¹ users the upgrade becomes available automatically from 18.04.1 onwards.
If you insist in upgrading right now feel free to use:
do-release-upgrade --devel-release
(I didn't upgrade: I like other people to have the problems and debug version 0 for me) >:-)
¹LTS = Long Term Support.
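For reference, the channel do-release-upgrade follows is governed by the Prompt line in /etc/update-manager/release-upgrades (Prompt=lts restricts offers to LTS releases). A quick way to inspect that setting, demonstrated here on a sample copy so nothing real is touched:

```shell
# Sample standing in for /etc/update-manager/release-upgrades:
cat > /tmp/release-upgrades.demo <<'EOF'
[DEFAULT]
Prompt=lts
EOF
awk -F= '/^Prompt=/ {print $2}' /tmp/release-upgrades.demo
# prints: lts
```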
| Problem upgrading Ubuntu to version 18.04 |
1,592,034,953,000 |
At the moment I am running fedora-27 and I would like to upgrade it to fedora-30. I followed the steps described in the fedora wiki.
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --refresh --best --allowerasing --releasever=28
sudo dnf system-upgrade reboot
Note that I tried to upgrade gradually to the next release instead of upgrading directly to fedora-30, since from previous experience (i.e. upgrading from fedora-24 to fedora-27) I've found that to be smoother.
After performing the dnf system-upgrade reboot command, my laptop rebooted, the upgrade screen came on, and eventually I was booted into fedora-27 again; there is no fedora-28 option on the boot menu.
These are the steps that I followed to understand what's going wrong:
Following @DavidYockey's suggestion, I had a look at /boot in case there is something related to f28, but there is nothing there either (https://i.sstatic.net/Lgx33.png). I also checked the /boot/grub2/grub.cfg file and there isn't any entry related to f28 (https://pastebin.com/Z81uJ0gr). So I guess this means it's not related to grub.
I checked with journalctl -r -p err but I couldn't see anything helpful there either, apart from the following entry, which doesn't specify why the upgrade failed (https://pastebin.com/dnaDHcAQ):
systemd1: Failed to start System Upgrade using DNF.
I then had a look at the dnf.log file, which can be found here. I saw some critical errors there, but I am not sure what to do. For example:
2019-06-28T05:43:26Z CRITICAL Error opening file for checksum: /var/lib/dnf/system-upgrade/fedora-f21308f6293b3270/packages/compat-libicu57-57.1-2.fc28.x86_64.rpm
2019-06-28T05:43:26Z CRITICAL Package "compat-libicu57-57.1-2.fc28.x86_64" from repository "fedora" has incorrect checksum
I ran sudo dnf repolist all and it seems that some repositories are disabled.
I am wondering how I can enable them; maybe I can't.
I enabled the disabled repositories by editing the .repo files in /etc/yum.repos.d (changing enabled=0 to enabled=1 where needed) and then repeated sudo dnf upgrade --refresh, sudo dnf system-upgrade download --refresh --best --allowerasing --releasever=28 and sudo dnf system-upgrade reboot. Still, dnf.log gives me the same critical error seen in step 3.
Any idea what to do next in order to eventually upgrade to fedora-30?
|
The problem was the compat-libicu57-57.1-2.fc28.x86_64.rpm file, which was saved in
/var/lib/dnf/system-upgrade/fedora-f21308f6293b3270/packages/
The critical error that was encountered was referring to an incorrect checksum.
To solve the issue, the following steps were followed:
The /var/lib/dnf/system-upgrade/fedora-f21308f6293b3270/packages/compat-libicu57-57.1-2.fc28.x86_64.rpm file was deleted
I downloaded the compat-libicu57-57.1-2.fc28.x86_64.rpm file from rpmfind.net
I moved the downloaded .rpm file to /var/lib/dnf/system-upgrade/fedora-f21308f6293b3270/packages/
I then run sudo dnf system-upgrade reboot and the system was upgraded to f28
| Upgrading from Fedora-27 to Fedora-28 |
1,592,034,953,000 |
After performing a dist upgrade from 17.10 to 18.04 texstudio no longer runs. I get the following error
This application failed to start because it could not find or load the Qt platform plugin "xcb"
in "".
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
Reinstalling the application may fix this problem.
Aborted (core dumped)
I suspect the issue lies with the texstudio configuration, since the path at the end of the first line of the error is empty ("").
If I run ldd /usr/bin/texstudio I get
linux-vdso.so.1 (0x00007ffc1f53d000)
libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f64914f2000)
libquazip5.so.1 => /usr/lib/x86_64-linux-gnu/libquazip5.so.1 (0x00007f64912c4000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f64910a7000)
libhunspell-1.6.so.0 => /usr/lib/x86_64-linux-gnu/libhunspell-1.6.so.0 (0x00007f6490e38000)
libpoppler-qt5.so.1 => /usr/lib/x86_64-linux-gnu/libpoppler-qt5.so.1 (0x00007f6490bbf000)
libQt5PrintSupport.so.5 => /usr/lib/x86_64-linux-gnu/libQt5PrintSupport.so.5 (0x00007f6490950000)
libQt5Widgets.so.5 => /usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5 (0x00007f6490109000)
libQt5Gui.so.5 => /usr/lib/x86_64-linux-gnu/libQt5Gui.so.5 (0x00007f648f9a0000)
libQt5Network.so.5 => /usr/lib/x86_64-linux-gnu/libQt5Network.so.5 (0x00007f648f614000)
libQt5Xml.so.5 => /usr/lib/x86_64-linux-gnu/libQt5Xml.so.5 (0x00007f648f3d8000)
libQt5Script.so.5 => /usr/lib/x86_64-linux-gnu/libQt5Script.so.5 (0x00007f648ef42000)
libQt5Core.so.5 => /usr/lib/x86_64-linux-gnu/libQt5Core.so.5 (0x00007f648e7f7000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f648e5d8000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f648e24f000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f648deb1000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f648dc99000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f648d8a8000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f648d680000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f648d47c000)
libpoppler.so.73 => /usr/lib/x86_64-linux-gnu/libpoppler.so.73 (0x00007f648cfe8000)
libGL.so.1 => /usr/lib/x86_64-linux-gnu/libGL.so.1 (0x00007f648cd5c000)
libpng16.so.16 => /usr/lib/x86_64-linux-gnu/libpng16.so.16 (0x00007f648cb2a000)
libharfbuzz.so.0 => /usr/lib/x86_64-linux-gnu/libharfbuzz.so.0 (0x00007f648c88c000)
libicui18n.so.60 => /usr/lib/x86_64-linux-gnu/libicui18n.so.60 (0x00007f648c3eb000)
libicuuc.so.60 => /usr/lib/x86_64-linux-gnu/libicuuc.so.60 (0x00007f648c034000)
libdouble-conversion.so.1 => /usr/lib/x86_64-linux-gnu/libdouble-conversion.so.1 (0x00007f648be23000)
libglib-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007f648bb0d000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6492612000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f648b909000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f648b703000)
libfreetype.so.6 => /usr/lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f648b44f000)
libfontconfig.so.1 => /usr/lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f648b20a000)
libjpeg.so.8 => /usr/lib/x86_64-linux-gnu/libjpeg.so.8 (0x00007f648afa2000)
libnss3.so => /usr/lib/x86_64-linux-gnu/libnss3.so (0x00007f648ac5e000)
libsmime3.so => /usr/lib/x86_64-linux-gnu/libsmime3.so (0x00007f648aa32000)
libnspr4.so => /usr/lib/x86_64-linux-gnu/libnspr4.so (0x00007f648a7f5000)
liblcms2.so.2 => /usr/lib/x86_64-linux-gnu/liblcms2.so.2 (0x00007f648a59d000)
libtiff.so.5 => /usr/lib/x86_64-linux-gnu/libtiff.so.5 (0x00007f648a326000)
libGLX.so.0 => /usr/lib/x86_64-linux-gnu/libGLX.so.0 (0x00007f648a0f5000)
libGLdispatch.so.0 => /usr/lib/x86_64-linux-gnu/libGLdispatch.so.0 (0x00007f6489e3f000)
libgraphite2.so.3 => /usr/lib/x86_64-linux-gnu/libgraphite2.so.3 (0x00007f6489c12000)
libicudata.so.60 => /usr/lib/x86_64-linux-gnu/libicudata.so.60 (0x00007f6488069000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f6487df7000)
libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007f6487be2000)
libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f64879b0000)
libnssutil3.so => /usr/lib/x86_64-linux-gnu/libnssutil3.so (0x00007f6487781000)
libplc4.so => /usr/lib/x86_64-linux-gnu/libplc4.so (0x00007f648757c000)
libplds4.so => /usr/lib/x86_64-linux-gnu/libplds4.so (0x00007f6487378000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6487170000)
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f6486f4a000)
libjbig.so.0 => /usr/lib/x86_64-linux-gnu/libjbig.so.0 (0x00007f6486d3c000)
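Every library in the listing above resolves to a path, so the binary itself is not missing a dependency. A quick way to confirm that (rather than eyeballing dozens of lines) is to filter the ldd output for unresolved entries; /bin/ls is used here as a stand-in binary:

```shell
# Print only unresolved libraries; if there are none, say so explicitly.
ldd /bin/ls | grep "not found" || echo "all libraries resolved"
```

Note that ldd only inspects direct link-time dependencies; a Qt *plugin* such as xcb is loaded at runtime via dlopen, so it can be broken even when ldd on the main binary looks clean.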
If I run qtchooser -print-env I get
QT_SELECT="default"
QTTOOLDIR="/usr/lib/x86_64-linux-gnu/qt4/bin"
QTLIBDIR="/usr/lib/x86_64-linux-gnu"
And the contents of /usr/lib/x86_64-linux-gnu/qtchooser are:
lrwxrwxrwx 1 root root 50 Dec 22 2017 4.conf -> ../../../share/qtchooser/qt4-x86_64-linux-gnu.conf
lrwxrwxrwx 1 root root 50 Dec 22 2017 5.conf -> ../../../share/qtchooser/qt5-x86_64-linux-gnu.conf
lrwxrwxrwx 1 root root 50 Dec 22 2017 qt4.conf -> ../../../share/qtchooser/qt4-x86_64-linux-gnu.conf
lrwxrwxrwx 1 root root 50 Dec 22 2017 qt5.conf -> ../../../share/qtchooser/qt5-x86_64-linux-gnu.conf
So it looks like qtchooser is defaulting to Qt 4, but texstudio needs Qt 5. So I ran
ln -s qt5.conf default.conf in /usr/lib/x86_64-linux-gnu/qtchooser, and now qtchooser -print-env prints
QT_SELECT="default"
QTTOOLDIR="/usr/lib/qt5/bin"
QTLIBDIR="/usr/lib/x86_64-linux-gnu"
However, this does not resolve the issue: texstudio still won't start, and prints the same error message.
|
There was a grub bug reported for these exact symptoms. Following links in the comments, I found the following VirtualBox ticket, where another user posted the fix below:
sudo apt-get install --reinstall libqt5dbus5 libqt5widgets5 libqt5network5 libqt5gui5 libqt5core5a libdouble-conversion1 libxcb-xinerama0
Essentially, this bug resulted in packages being flagged as installed when they were not.
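If the reinstall alone does not appear to help, Qt has a built-in way to show exactly which plugin directories it searches and why loading "xcb" fails: the QT_DEBUG_PLUGINS environment variable. This is a general Qt diagnostic aid, not part of the original fix:

```shell
# Ask Qt to trace plugin loading, then launch any Qt 5 application
# (e.g. texstudio) from this same shell to see the plugin search paths
# and the precise reason the xcb platform plugin fails to load.
export QT_DEBUG_PLUGINS=1
echo "QT_DEBUG_PLUGINS=$QT_DEBUG_PLUGINS (now run: texstudio)"
```

The trace typically names the exact shared library that fails to load, which tells you which package to reinstall.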
| Upgrade from Ubuntu 17.10 to 18.04 Broke Texstudio/Qt |