I just installed Mint 16 and I see that the root user is not available at the login screen. I logged in as a normal user, went to the "Login Window" option, and set "Allow root login" there. Then I restarted the PC and still don't see the root user in the login window. I also tried the commands below, but they didn't work either:

sudo passwd root
sudo sh -c 'echo "greeter-show-manual-login=true" >> /etc/lightdm/lightdm.conf'
Linux Mint 16 uses the Mint-X theme by default, which only displays the password box for chosen non-root users. To enable the User entry field (in which you will be able to specify root), do this: from Menu ==> Administration ==> Login Window ==> Theme, choose Clouds, and log out.
Enable root login from GUI
To meet my security needs I set a rather long user password on my notebook. But when I am at home or in another secure location, typing it is cumbersome. It would be nice to let gdm (or rather mdm, since I am using Mint 13 with MATE) search for a specific file on a pendrive and, when it is present, treat it as a security token and log me in automatically with it. I use encrypted home folders.
You want to use pam_usb. Read more here: http://pamusb.org/
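A minimal sketch of what a pam_usb setup typically looks like (the device name, username, and Debian/Mint PAM file layout below are illustrative assumptions, not from the question):

```shell
# 1. register the pendrive and pair it with the user
sudo pamusb-conf --add-device my-pendrive
sudo pamusb-conf --add-user myusername

# 2. put pam_usb before the password check in the PAM stack,
#    e.g. in /etc/pam.d/common-auth on Debian-family systems:
#    auth    sufficient      pam_usb.so
#    auth    required        pam_unix.so nullok_secure
```

One caveat for the asker's setup: pam_usb handles authentication, but an ecryptfs-encrypted home is normally unwrapped with the login password, so the encrypted home folder may still require typing the password to mount.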
Mint 13: Is it possible to skip standard login password dialog in presence of a pendrive with the key
I've edited /etc/gdm/custom.conf to show the following:

[daemon]
AutomaticLoginEnable=true
AutomaticLogin=username

where username is obviously the username I'd like to automatically log into. I don't know if it matters or not, but I do have multiple users. To my mind this should not matter, however, since we're specifying a username here in the settings. I'm using RHEL 6.4. Any ideas?
I'm not an expert in this, but I would suggest first making sure gdm is actually your current display manager and that you're using the correct file: check the output of cat /etc/sysconfig/desktop and look for other .conf files in /etc/gdm/. Maybe try renaming the file to gdm.conf. Also, did you try GUI solutions, or do you require something that can be done from the command line? You should be able to configure gdm with sudo gdmsetup.
Automatic login still doesn't work after editing custom.conf
I have always used text-console boot without a graphic interface. This includes the login, which is also text-only since I do some sync stuff at login time, specifically on tty1, where I have an autologin script that executes the following command through the file /etc/systemd/system/getty@tty1.service.d/override.conf:

/usr/sbin/agetty --autologin <myusername> --noclear %I 38400 linux

I just upgraded to Fedora 36 (from 35), and the boot goes fine until the very last step. Instead of showing me a text login prompt, it shows a black screen with an underscore at the top left. Why?

The systemctl output looks fine except for precisely one red line:

● getty@tty1.service loaded failed failed Getty on tty1

In the logs, I can see an error:

agetty[4565]: /dev/tty1: "cannot get controlling tty: Operation not permitted"
agetty[4565]: setting terminal attributes failed: Input/output error

I can see this error in the console each time I try, as root:

service getty@tty1 start

I don't know what went wrong, but it seems the agetty command changed?

EDIT: If I put the line

ExecStart=-/usr/sbin/agetty --autologin <myusername> --noclear tty1 38400 linux

in the file /etc/systemd/system/getty@tty1.service.d/override.conf, the autologin works. Yet if I change it to

ExecStart=-/home/<myusername>/bin/myautologinscript.sh

with the same line inside the script, it doesn't!!
That "cannot get controlling tty: Operation not permitted" error is returned by agetty when the TIOCSCTTY ioctl() it makes (so the terminal device becomes the controlling terminal of agetty's session) fails. It issues that ioctl() if the terminal currently doesn't control any session, or if it controls a session whose id is not agetty's pid. From the tty_ioctl(2) man page:

TIOCSCTTY  Argument: int arg
  Make the given terminal the controlling terminal of the calling process. The calling process must be a session leader and not have a controlling terminal already. For this case, arg should be specified as zero.
  If this terminal is already the controlling terminal of a different session group, then the ioctl fails with EPERM, unless the caller has the CAP_SYS_ADMIN capability and arg equals 1, in which case the terminal is stolen, and all processes that had it as controlling terminal lose it.

So it won't work if agetty is not a session leader. If you start a script that runs agetty in a child process, instead of starting agetty directly, then the shell will be the session leader and agetty will not. Using exec agetty will run agetty in the same process as the shell (it replaces the shell), and then agetty will be the session leader.

If that worked in the previous version of Fedora with the same modus operandi, my guess would be that systemd had already made the terminal device the controlling terminal of the session, maybe by calling open() on it without the O_NOCTTY flag. But even then, having agetty not be the session leader sounds like a bad idea. (I struck that, as in that case the controlling session would still not match agetty's pid, so it would still try to issue the TIOCSCTTY; I don't have another explanation for now.)
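A sketch of the fix this answer implies for the asker's wrapper script (the script path and the "sync work" placeholder are assumptions): end the script with exec so agetty replaces the shell and stays the session leader.

```shell
#!/bin/sh
# /home/myusername/bin/myautologinscript.sh (sketch)

# ... do the pre-login sync work here ...

# exec replaces this shell with agetty, so agetty keeps the session
# leadership that systemd gave the ExecStart= process; without exec,
# agetty runs as a child and its TIOCSCTTY ioctl fails with EPERM.
exec /usr/sbin/agetty --autologin myusername --noclear tty1 38400 linux
```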
Text boot without text console: agetty problem in Fedora 36
I configured AutoLogin on my Linux Mint MATE system because I am using it as a small home server for file sharing, etc., and some apps don't work well if the user is not logged in. But I don't want the system to be unprotected: anyone can access it if it logs in automatically. So how do I AutoLock immediately after AutoLogin?
Answering my own question: I was trying some commands on startup like

mate-screensaver-command -l

but it was not working, as mate-screensaver might not be running yet. So I tried:

mate-screensaver
sleep 1
mate-screensaver-command -l

but with no success either. I then discovered that the problem was that mate-screensaver, once started, does not return until its process ends, which never happens. So the final solution is to make a file like this:

#!/bin/bash
/usr/bin/mate-screensaver &
sleep 1
/usr/bin/mate-screensaver-command -l
sleep 2
/usr/bin/mate-screensaver-command -l
sleep 3
/usr/bin/mate-screensaver-command -l
sleep 4
/usr/bin/mate-screensaver-command -l

I run the command 4 times just to be absolutely sure it is going to lock, because the command may fail if the screensaver has not successfully started yet. There could be a more professional approach, like checking whether it has locked with mate-screensaver-command --query. After saving the file, make it executable (via its properties or chmod) and put it in startup (just type "start" in the MATE menu to find Startup Applications), then disable the mate-screensaver entry in the startup apps, as you are already starting it in this script.
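A sketch of the "more professional" variant hinted at above (the exact wording that --query prints is an assumption; check what mate-screensaver-command --query outputs on your system before relying on the grep):

```shell
#!/bin/bash
# start the screensaver daemon in the background, then retry the lock
# until --query reports the screensaver active, instead of fixed sleeps
/usr/bin/mate-screensaver &
for i in 1 2 3 4 5; do
    sleep "$i"
    /usr/bin/mate-screensaver-command --lock
    if /usr/bin/mate-screensaver-command --query | grep -q 'is active'; then
        break
    fi
done
```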
How to AutoLock after AutoLogin
I am thinking about doing an Ubuntu installation with their mini ISO, which comes with only the barebones system, without a desktop environment/GUI. I remember being given the option to set up automatic login when doing a full Ubuntu desktop installation, but how do I enable that for the mini ISO install? Did I miss something? Also, is there a generalised way to do this on any Linux OS? Thanks!
You might find some ideas in this thread on linuxquestions.org.
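For what it's worth, on systemd-based distros (an assumption; the linked thread also covers other init systems) the usual sketch is to override the tty1 getty unit; "myusername" below is a placeholder:

```shell
# drop-in override so agetty logs myusername in automatically on tty1
sudo mkdir -p /etc/systemd/system/getty@tty1.service.d
sudo tee /etc/systemd/system/getty@tty1.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin myusername --noclear %I $TERM
EOF
sudo systemctl daemon-reload
```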
Auto login for Ubuntu (or other Linux) without GUI?
I could use some help with what I think is a basic request on the newest edition of Linux Mint (which I think would also be applicable to Ubuntu). I have a home system with 3 GB of RAM and accounts for 4 family members. As the initial login process takes 15-20 seconds and kids are impatient, I'd like a way to have an active session auto-started for each user when the system first boots... in other words, multiple auto-logins in the background plus a normal login screen. That way, when a user goes to log in as normal, their session is already running and is switched to instantly. I have plenty of RAM and this machine always runs, so is there any way to pull this off via some type of login scripts?
Assuming you already have one user that auto-logs in, you can probably use a script that runs after login (set it up with gnome-session-properties) that:

- gets a list of users to auto-login from some file
- checks, for each of those users, whether they are logged in yet
- if one is not, uses xdotool to switch to that user (by simulating clicking on Menu and then Logout, etc.); each of these users must auto-run the script as well, thereby daisy-chaining the login process
- if all users are logged in, switches to some specially marked user (the first) that auto-logs in, unless that is the user already running the script
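The "check whether each listed user is logged in yet" step could be sketched like this; the list-file path and the simulated `who` output are illustrative assumptions, not from the question:

```shell
#!/bin/sh
# print the users from a list file that do not yet appear in `who` output
users_not_logged_in() {
    list_file=$1
    who_output=$2
    while read -r user; do
        # `who` lines start with the username followed by whitespace
        printf '%s\n' "$who_output" | grep -q "^$user " || echo "$user"
    done < "$list_file"
}

# demo with fake data: alice and carol have sessions, bob does not
printf 'alice\nbob\ncarol\n' > /tmp/autologin-users.list
WHO='alice    tty2         2024-01-01 10:00
carol    tty3         2024-01-01 10:01'
users_not_logged_in /tmp/autologin-users.list "$WHO"
```

In the real script you would pass "$(who)" as the second argument and feed each printed name to the xdotool switching step.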
Auto-start multiple background user sessions in Linux Mint
I have Linux Mint 14 Xfce (4.10) and have also installed the LXDE desktop, so I can choose between these sessions if I want. Normally I would set one as default, and at startup I am not asked for username & password and am logged in automatically, as intended. (Under Settings/Login Window/Security, "Enable automatic login" is checked; I have also verified that /etc/lightdm/lightdm.conf contains the line autologin-user=cipricus.)

But even before installing LXDE, I was asked for username and password after logout, despite the fact that in Settings/Users and Groups I have the setting of not being asked for a password on login. In /etc/mdm/mdm.conf I see the line AutomaticLoginEnable=true. (In Settings/Session and startup/General, 'Display session chooser' is unchecked. But this regards the other type of 'session': not the first type, which involves selecting between users-passwords-desktops, but the one that involves selecting between sets of saved settings of the same user and desktop. More on this distinction/confusion here.)

I find it odd that if I restart the computer I can enter the default session without a password, but if I just log out, the username & password are needed to log in:

- opening the computer and entering the default session/DE: no username & password
- selecting between sessions: username & password needed

In the future I might decide to activate a password at startup, but I still don't want a password being asked in order to change or re-enter a session after the system has started. Are there other settings to make?
Trying many possible combinations of settings, I solved it, but the conclusion is that there is something amiss with Xfce's session manager settings or GUI. What I have verified:

As stated in the question, when this problem happens, under Settings/Login Window/Security "Enable automatic login" is checked, while "Enable timed login" is not checked.

The odd thing is that in order to avoid typing username & password after logging out, it is enough to check 'Enable timed login'. The login window appears, but in this case just pressing 'Enter' starts the session. Even with 'Enable automatic login' unchecked, typing username & password after logging out is not necessary if 'Enable timed login' is checked. That doesn't make much sense to me, but it works.

Edit after restart: Because (related to a different problem - here) "Automatically save session on logout" (Menu/Settings/Session and Startup - General tab) was disabled, the solution above was not saved after restart. So, in case automatic session saving is disabled, make the 'good' settings as described above, and in Menu/Settings/Session and Startup - Session tab click the "Save session" button.

In this way, after logout, username and password are not required to log in to the default session, but are required to log in to a different one. This may seem odd, considering that in Settings/Session and Startup/General there is an option to 'Display chooser on login'. But checking that displays only DE-specific sessions (the ones saved within a certain DE, that is, within the generic "session"). In fact it seems that passwords are asked for desktop environments, not for saved "sessions". This double meaning of "session" is confusing.

There is no real logic to this; the solution is just a limited workaround, and there may be many other variables depending on other settings I haven't touched yet. For example, the login experience varies even depending on the style and theme of the login window. Some themes may display the username as a button (if Style "Themed with face browser" is selected under Settings/Login Window/Local), but some may not; pressing Enter as said above enters the session directly, but clicking that username button makes the password necessary. Hopefully this application (xfce4-session) will be in better shape in a future update.
How to enter/choose session after logout without password in (Linux Mint) Xfce?
When I opened Wireshark (fresh install of live USB Kali with persistence) for the first time, it complained about me being root. This is what I found by googling it:

"Yes, it's recommended and advisable not to run such tools in a super-user or high-permission account. Giving root to such tools can go sideways should the tool malfunction. You can create a non-super-user or non-root account and that should fix the error dialog. Also, the tool should still work when that error dialog shows up; it just warns you of the privileges you are assigning to the tool." https://null-byte.wonderhowto.com/forum/problems-with-wireshark-as-root-user-kali-0169494/

But when I create a user and try to log out of root, I get stuck in an autologin loop as root. I also saw elsewhere that most of the tools in Kali require root permissions (I figured I'd just sudo everything; I started in Ubuntu years ago and have become so used to sudo that I prefer it to a root account). But my question is: which is the proper way of doing things in Kali? Creating a sub-account and disabling the root login (thus my sub-question: how?), or disabling the warnings in Wireshark? What can go wrong with Wireshark if you run it as root? And finally: is this some kind of hazing ritual with Kali? (Bill Lubanovic jokes in his book on Python that installing pip is a kind of hazing ritual for new Python programmers. I thought maybe this is like that.)

UPDATE: Even when I find a way to log in as non-root (by waiting for the screen to lock and selecting a different user), I can't run Wireshark because I don't have the permissions! And I can't seem to figure out a way to run Wireshark with permissions without resorting to the CLI!

Update 2: I am not new to Linux. By a "fresh install of live USB with persistence" I mean I just set up a live USB stick with Kali Linux and configured persistence following tutorials like this. The error message in Wireshark is this one. Most of the tutorials I found online are about disabling the warning. I have set up a sudoer account, but every time I boot Kali or attempt to log out of Kali, I go through the boot process and the last item is always something along the lines of 'Started User Manager for UID 0', as in root. How do I disable this?

Update 3: I'm going to clarify my question: what is the normal, correct way of doing this in Kali? According to the meta question on Kali questions, Kali doesn't even have proper apt support, so I'm afraid to follow the directions Draconis posted, because they require installing special libraries. I'm not trying to run Kali as a production environment or a normal desktop (I have it on a flash drive); I'm just trying to use some tools in it to pentest my server. (I am totally new to pentesting, and even though pentesterlab says not to bother using Kali but just use the distro you're already comfortable in, I didn't want to go installing Wireshark and a bunch of other tools in my desktop distro. I figured I'd start learning the basics of pentesting in Kali with pre-rolled tools. Perhaps this was a mistake.)

I did figure out how to log out of root: it requires using the lock screen button. Sounds amateur not to know that, but I'm not used to GNOME and I spend more time in the CLI than not. I run an Ubuntu server and mostly use my BunsenLabs box to mess around with MySQL databases and write Python scripts with vim. And it seems funny to me that I can't log out of root without getting stuck in an autologin loop. I am not new to Linux, but for the past 10 years, off and on, I have been using somewhat easier distros (Ubuntu, CrunchBang, and now BunsenLabs; I have even dipped my toes into Slackware and Fedora). For a tool that I figured was meant to be started up from a flash drive, there's an awful lot of configuration that has to happen before I can even get started using Wireshark. This doesn't make sense to me. There is no way that a pentester sets up special accounts to run Wireshark every time he boots his flash drive (given that most people, I would imagine, wouldn't even bother setting up persistence). So my question is: is it thoroughly normal to run Wireshark as root in Kali Linux when running it from a flash drive? If it is not, what is the typical way of doing things in Kali?
There are a few different important points here. The first one is: Kali is not a good first Linux distribution to start off with. If you're not familiar with account permissions, and especially if you don't want to use the command line, then Kali isn't right for you; I'd recommend Ubuntu instead. Anything you can do in Kali, you can also do in Ubuntu (once you install the right packages and tools), and it actually has a learning curve, as opposed to Kali's "learning sheer vertical cliff". (OP, you say you've been using Ubuntu for years, so this warning isn't intended for you. But it's worth saying anyway for other people who find this question.)

Second, Kali is designed to run pretty much everything as root, which isn't a very good security practice! Some versions of Wireshark come with a warning: WIRESHARK CONTAINS OVER ONE POINT FIVE MILLION LINES OF SOURCE CODE. DO NOT RUN THEM AS ROOT. But if you're using Kali, it's assumed that:

- you know what you're doing, and
- you know why running as root is, in general, a bad idea, and so
- you're going to make sure that even root isn't going to have the power to do anything really bad (because, e.g., you're definitely not running this on the production server full of sensitive information)

As far as what could go wrong: Wireshark has quite a lot of different "dissectors" to analyze incoming traffic. Because there are so many, and they're so complicated, it's hard to be sure none of them will glitch out when given specially-crafted packets. At best, this could make Wireshark crash. At worst, it could allow arbitrary code execution - and arbitrary code execution as root is very bad.

The recommended way of using Wireshark without letting it run as root involves giving its dumpcap executable two extra capabilities: CAP_NET_ADMIN (allowing it to control network interfaces) and CAP_NET_RAW (allowing it to access raw packets).
Full details on this are outside the scope of this question, but this article explains how to do it. Unfortunately, manipulating capabilities does require using the command line. If you're not comfortable with that, then Kali probably isn't right for you: it's built for command-line use first and foremost.
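The capability change that kind of article describes usually amounts to something like the following sketch (the dumpcap path is the usual Debian-family one, and the wireshark group only exists if the package created it; both are assumptions to adjust for your system):

```shell
# let a non-root user in the wireshark group capture packets
sudo usermod -aG wireshark myuser

# grant dumpcap the two capabilities instead of running Wireshark as root
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap

# verify the capabilities took
getcap /usr/bin/dumpcap
```

Log out and back in afterwards so the new group membership takes effect.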
Kali Linux Can't Log in as non-root user and wireshark complaining about root
Question 1: On CentOS 6.5, I have a username called admin. How can I make the system log in to it automatically?

Question 2: How can I configure the system to log in as the root user when there are other users on the system?

Please don't bother me with security risks, as this is a testbed station; there is no sensitive information that I need to worry about. Thank you.
For Question 1: Just edit /etc/gdm/custom.conf with your favorite editor. Then, under the [daemon] section, add 2 lines so it looks like the code below (change username to the username you want to use):

[daemon]
AutomaticLoginEnable=true
AutomaticLogin=username
How do I remove login password for Centos6.5?
Running Ubuntu server, I've reconfigured /lib/systemd/system/getty@.service to:

[Service]
# the VT is cleared by TTYVTDisallocate
##ADDED THIS HERE##
ExecStart=-/sbin/agetty -a diagnosticuser --noclear %I $TERM
Type=idle
Restart=always
RestartSec=0
UtmpIdentifier=%I
TTYPath=/dev/%I
TTYReset=yes
TTYVHangup=yes
TTYVTDisallocate=yes
KillMode=process
IgnoreSIGPIPE=no
SendSIGHUP=yes

so that I could have my fancy little device automatically log in to a user whose shell is a diagnostic menu. The problem is that all the consoles automatically log in to that user now. Is there a way to get just the first one to log in, and leave the rest with a regular login prompt? (Can I have my cake as well as eat it?) I was thinking maybe I'd replace /sbin/agetty with something that checks whether diagnosticuser is already logged in, but I was a little confused by the hyphen in "-/sbin/agetty" and I didn't want to take my chances.
Create a new file for terminal 1, /lib/systemd/system/getty@tty1.service, and copy into it the config you defined above. In /lib/systemd/system/getty@.service use the following:

ExecStart=-/sbin/agetty --noclear %I $TERM

Console 1 will autologin as diagnosticuser; all other consoles will prompt for credentials. (As for the hyphen that confused you: a leading "-" in ExecStart= just tells systemd to ignore a non-zero exit status from the command.)
How to configure agetty to autologon on only one terminal
I was trying to set up tty autologin for FreeBSD. I copied the P|Pc|Pc part and made a change:

Pcal console:\
        :al=root:ht:np:sp#9600

And I modified the tty line for ttyv0:

ttyv0 "/usr/libexec/getty Pcal" xterm on secure

But now if I do init q, I get:
Try changing it to:

Pcal|Pcal console:\
        :al=root:ht:np:sp#9600

ttyv0 "/usr/libexec/getty Pcal" xterm on secure

Now Pcal will be the gettytab entry. | is used as a separator to enter alternative names (not sure about that, I'm not a gettytab expert ;)). The last field before the :\ looks like a "human-readable" string for the gettytab line. See if this works. Does anyone know what the | is used for, and how you should read those gettytab entries?
Setting up tty autologin on FreeBSD
I've been troubleshooting an issue with a SysVinit service not coming online properly at boot within a systemd environment. What I've found is that when no service file or overrides are present in /etc/systemd/system/ for the said service, it autostarts properly. In this case, as I understand it, systemd should be dynamically loading the startup script by reading the "legacy" sysvinit scripts present on the system, although I'm not 100% clear on that. What I'm confused about is that as soon as I pass the edit --full option to systemctl for said service, a flat file is generated at /etc/systemd/system/ and said service now fails to autostart at boot. Using the edit option and trying to add any overrides also seems to cause the service to fail at boot. Examples, if needed, provided below.

Example of the system when it works:

The service (on CentOS), in this example called "ProgramExample", has an init script placed in /etc/init.d/programexample and also /etc/rc.d/init.d/programexample:

# ls -l /etc/rc.d/init.d/programexample
-rwxr-xr-x. 1 root root 2264 Mar 29 14:11 /etc/rc.d/init.d/programexample

No service file present at /etc/systemd/system/:

# ls -lh /etc/systemd/system/programexample.service
ls: cannot access /etc/systemd/system/programexample.service: No such file or directory

systemctl status output in this configuration:

# systemctl status programexample.service
● programexample.service - LSB: Start ProgramExample at boot time
   Loaded: loaded (/etc/rc.d/init.d/programexample; bad; vendor preset: disabled)
   Active: active (exited) since Wed 2017-03-29 15:53:06 CDT; 14min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1297 ExecStart=/etc/rc.d/init.d/programexample start (code=exited, status=0/SUCCESS)

Mar 29 15:53:05 centos7-box systemd[1]: Starting LSB: Start ProgramExample at boot time...
Mar 29 15:53:05 centos7-box su[1307]: (to programexample) root on none
Mar 29 15:53:06 centos7-box programexample[1297]: ProgramExample (user programexample): instance name set to centos7-box
Mar 29 15:53:06 centos7-box programexample[1297]: instance public base uri set to https://192.168.0.148.programexample.net/programexample/
Mar 29 15:53:06 centos7-box programexample[1297]: instance timezone set to US/Central
Mar 29 15:53:06 centos7-box programexample[1297]: starting java services
Mar 29 15:53:06 centos7-box programexample[1297]: ProgEx server started.
Mar 29 15:53:06 centos7-box systemd[1]: Started LSB: Start ProgramExample at boot time.

With the above configuration, without any files created/placed in /etc/systemd/system/, the ProgramExample service autostarts properly.

Once systemctl edit --full (or just edit) is used:

Once any edits are passed to systemctl, I have observed the following:

- A flat file or an override directory will be placed in /etc/systemd/system/
- Said service, in this case ProgramExample, fails to start at boot
- I will be unable to "enable" said service using systemctl

systemctl status output in this configuration (post-edit):

# systemctl status programexample.service
● programexample.service - LSB: Start ProgramExample at boot time
   Loaded: loaded (/etc/rc.d/init.d/programexample; static; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:systemd-sysv-generator(8)

This is the service file that is being generated and placed in /etc/systemd/system/ when using the edit --full option:

# Automatically generated by systemd-sysv-generator

[Unit]
Documentation=man:systemd-sysv-generator(8)
SourcePath=/etc/rc.d/init.d/programexample
Description=LSB: Start ProgramExample at boot time
Before=runlevel2.target
Before=runlevel3.target
Before=runlevel4.target
Before=runlevel5.target
Before=shutdown.target
Before=adsm.service
After=all.target
After=network-online.target
After=postgresql-9.4.service
Conflicts=shutdown.target

[Service]
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
ExecStart=/etc/rc.d/init.d/programexample start
ExecStop=/etc/rc.d/init.d/programexample stop
ExecReload=/etc/rc.d/init.d/programexample reload

What is happening here? Am I correct that, without the flat service file and/or service override directory in /etc/systemd/system/, systemd is dynamically reading this information from the service's init script? I've tried numerous iterations of editing the service file at /etc/systemd/system/ and of leaving the default file in place, and cannot get autostarting to work or the service to go into an "enabled" state. I believe it would be preferable to have a systemd .service file for systemd configurations, instead of relying on systemd to read from init-script LSB headers, for compatibility and concurrency reasons, but the default file systemd creates fails to start or enable, along with numerous other simpler iterations of the .service file I've attempted.
I've now found that the issue was that the service file automatically generated by systemd-sysv-generator lacks an [Install] section with a WantedBy option. I added the following to my generated file at /etc/systemd/system/programexample.service, which allowed me to properly enable the service:

[Install]
WantedBy=multi-user.target

After that I ran systemctl daemon-reload to ensure my service file was re-read by systemd. Now I received a proper notification that my service was actually symlinked somewhere to be "enabled":

[root@centos7-box ~]# systemctl enable programexample.service
Created symlink from /etc/systemd/system/multi-user.target.wants/programexample.service to /etc/systemd/system/programexample.service.

This link helped me better understand the service file. I am not a fan of the way systemd-sysv-generator does not include an [Install] section with a WantedBy option by default. If systemd can dynamically read the LSB headers and properly start services at boot, why doesn't it generate the service file accordingly? I suppose some systemd growing pains are to be expected.

Update July 7 2020: Working with Debian Buster and trying to enable a SysVinit legacy service, I was presented with this wonderful message, which I believe would have saved me some time when I dealt with this issue in 2017:

Synchronizing state of programexample.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable programexample
The unit files have no installation config (WantedBy=, RequiredBy=, Also=, Alias= settings in the [Install] section, and DefaultInstance= for template units). This means they are not meant to be enabled using systemctl.
Possible reasons for having this kind of units are:
• A unit may be statically enabled by being symlinked from another unit's .wants/ or .requires/ directory.
• A unit's purpose may be to act as a helper for some other unit which has a requirement dependency on it.
• A unit may be started when needed via activation (socket, path, timer, D-Bus, udev, scripted systemctl call, ...).
• In case of template units, the unit is meant to be enabled with some instance name specified.
init service failing to enable once a systemd service file is generated
I installed telegram-desktop via Flatpak and would like to auto-start the messenger when logging into GNOME 3 (or Unity, for that matter). Is there a way to do this robustly?
Starting from the answer given by @intika, I found a solution I like more. Instead of replicating the content of the existing desktop file /var/lib/flatpak/exports/share/applications/org.telegram.desktop.desktop, I linked it inside my personal ~/.config/autostart/. Works like a charm :-)
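Concretely, the linking approach is just this (a sketch; the export path shown is the system-wide Flatpak default and differs for --user installs):

```shell
mkdir -p ~/.config/autostart
ln -s /var/lib/flatpak/exports/share/applications/org.telegram.desktop.desktop \
      ~/.config/autostart/org.telegram.desktop.desktop
```

Because it is a symlink rather than a copy, any changes Flatpak makes to the exported desktop file on upgrade are picked up automatically.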
How to add a Flatpak's app to Gnome 3 auto-starts?
These don't work:

sudo update-rc.d kdeconnectd disable
sudo systemctl disable kdeconnectd.service

There is no script for it in /etc/init.d/, and the /usr/share/dbus-1/services/org.kde.kdeconnect.service file only has

Exec=/usr/lib/x86_64-linux-gnu/libexec/kdeconnectd

set. sysv-rc-conf, rcconf and bum don't list kdeconnect. /etc/xdg/autostart/kdeconnectd.desktop looks like this:

[Desktop Entry]
Type=Application
Exec=/usr/lib/x86_64-linux-gnu/libexec/kdeconnectd
X-KDE-StartupNotify=false
X-KDE-autostart-phase=0
X-GNOME-Autostart-enabled=false
OnlyShowIn=KDE;GNOME;Unity;XFCE;
NoDisplay=true

(kdeconnectd has port 1716 open all the time when running.) It's not just starting automatically when booting, but also a while after I end the process, without me opening it. I'm running Debian 9/KDE.

Update: the Debian issue is now here
I had the same problem and solved it by deleting two files: /usr/share/dbus-1/services/org.kde.kdeconnect.service and /etc/xdg/autostart/org.kde.kdeconnect.daemon.desktop. I don't know if it's the proper way to fix it, but it worked for me.
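A less destructive sketch of the autostart half of this fix (my suggestion, using the standard XDG autostart override mechanism rather than deleting packaged files, so a package upgrade can't resurrect the entry):

```shell
# shadow the system-wide autostart entry with a per-user "Hidden" one;
# the file name must exactly match the one under /etc/xdg/autostart/
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/org.kde.kdeconnect.daemon.desktop <<'EOF'
[Desktop Entry]
Hidden=true
EOF
```

Note this only covers session autostart; the D-Bus service file can still start the daemon on demand, which may explain the "starts again a while after ending the process" behaviour in the question.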
How to prevent kdeconnectd from starting automatically?
I am using an embedded Linux with BusyBox. I would like to automatically run my app called "myApplication" (runlevel 5; after boot, all the services are up). What I have done so far: I made a script under /etc/init.d/ called S90myscript, then added this line to the inittab:

::sysinit:/etc/init.d/S90myscript

The script contains the following:

#!/bin/sh
### BEGIN INIT INFO
# Provides: myApplication
# Should-Start: $all
# Required-Start: $remote_fs $network $local_fs
# Required-Stop: $remote_fs
# Default-Start: 5
# Default-Stop: 0 6
# Short-Description: start myprogram at boot time
### END INIT INFO
# set -e
. /lib/lsb/init-functions
PATH=/root:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin
PROGRAMNAME="myApplication"
case "$1" in
  start)
    $PROGRAMNAME
    ;;
  stop)
    skill $PROGRAMNAME
    ;;
esac
exit 0

Am I missing something? Symlinks? Is what I did wrong? Thank you in advance.
Found the solution.

I placed myApplication in /usr/sbin/.
I created a symlink named myApp pointing to the script located at /etc/init.d/S99myAppScript (notice that there is no .sh extension, and I had to run sudo chmod 755 on this script).
I added the following line at the end of the rcS file located in /etc/init.d/, just before the final done:

myApp &

After rebooting the system, myApplication autoruns.
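The same steps can be captured in a small function. The DESTDIR prefix is my own addition so the steps can be rehearsed against a scratch tree before touching the device; the application binary itself (myApplication in /usr/sbin) is assumed to already be in place.

```shell
#!/bin/sh
# Sketch of the accepted answer's steps. Call as: install_myapp ""
# (as root) on the real device, or pass a scratch directory to dry-run.
install_myapp() {
    DESTDIR="$1"
    mkdir -p "$DESTDIR/usr/sbin" "$DESTDIR/etc/init.d"
    # the start script lives at /etc/init.d/S99myAppScript, no .sh suffix
    chmod 755 "$DESTDIR/etc/init.d/S99myAppScript"
    # symlink /usr/sbin/myApp -> the start script
    ln -sf /etc/init.d/S99myAppScript "$DESTDIR/usr/sbin/myApp"
    # launch it in the background at the end of rcS, just before the
    # final `done`; the grep keeps the edit idempotent
    grep -q '^myApp &$' "$DESTDIR/etc/init.d/rcS" ||
        sed -i '/^done$/i myApp &' "$DESTDIR/etc/init.d/rcS"
}
```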
How to Autorun a program using busybox after boot?
1,498,678,510,000
In XFCE4, there is a list of items to be started when an XFCE4-session is started (XFCE4 Settings xfce4-settings-manager → Tab "Application Autostart"): I'm wondering where this list is stored. In ~/.config/autostart I have three .desktop files, which are available in the aforementioned list, but there are many more items in that list than those three files. I was wondering if those items are stored somewhere in a human readable file, or perhaps directory structure. Although I'm not exactly planning to edit those items via scripting, it would help if it was at all possible to modify that list while a session is not active. For instance should I want to edit those items over SSH, while no one is logged in.
The entries you see are populated from: ~/.config/autostart (user-specific) and /etc/xdg/autostart/ (system-wide) If you want to disable something from the second system-wide location, you create the appropriate entry to your start-up directory with this content: [Desktop Entry] Hidden=true E.g. I have /etc/xdg/autostart/blueman.desktop - to disable it you create: ~/.config/autostart/blueman.desktop with the above content. Redefining something looks a tad tedious and over-complicated but you first have to disable it, then create your own desired entry.
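When editing over SSH with no session active (the scenario in the question), it helps to see which entries are actually in effect. A sketch that merges the two directories above; the optional system-directory argument is my own addition, purely so the function can be exercised against a scratch copy.

```shell
#!/bin/sh
# Print every autostart entry XFCE would consider, noting whether it
# comes from the user or system directory and whether it is hidden.
list_autostart() {
    user_dir="${XDG_CONFIG_HOME:-$HOME/.config}/autostart"
    sys_dir="${1:-/etc/xdg/autostart}"
    for f in "$sys_dir"/*.desktop "$user_dir"/*.desktop; do
        [ -f "$f" ] || continue
        name=$(basename "$f")
        case "$f" in
            "$user_dir"/*) scope=user ;;
            *) scope=system
               # a user file with the same name overrides the system one
               [ -f "$user_dir/$name" ] && continue ;;
        esac
        if grep -qi '^Hidden=true' "$f"; then
            state=hidden
        else
            state=enabled
        fi
        printf '%-7s %-8s %s\n' "$scope" "$state" "$name"
    done
}
```

Run `list_autostart` with no argument to inspect the real directories.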
XFCE4 - Session and Startup: where are autostart items saved?
1,498,678,510,000
I have a PC for routing purposes; it has 2 network interfaces and runs Debian. I want to add a sound to it, to signal that it is booted and okay. So I've created an /etc/init.d/beep-startup script, according to the top-secret LSByization manual (cannot stop laughing at its naming, sorry), which (along with hundreds of LSB strings) contained a couple of core commands:

### BEGIN INIT INFO
# Provides: beep-startup
# Required-Start: $all
# Required-Stop: $all
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: qwer
# Description: asdf
### END INIT INFO
case "$1" in
start)
    # power chord + extreme uprising = feels like it's clearly up and feeling good
    beep -f 220 -l 100
    beep -f 330 -l 100
    beep -f 880 -l 100
    ;;
stop)
    # power chord + slow chirp down, feels like a low descending shutdown sound, right after ctrl+alt+del or /sbin/reboot
    beep -f 440 -l 100
    beep -f 330 -l 100
    beep -f 220 -l 100
    ;;
*)
    echo "Usage: sudo rm / -rf"
esac

Then of course I ran chmod +x and update-rc.d beep-startup defaults. But now the trick is that when I see it in action, the actual beep occurs while the system is still in the booting process, not when the login prompt pops up. So there are 2-3 seconds between the beep and the login prompt, and almost 5-7 seconds between the chirp sound and correct routing beginning (4 cores * 2 GHz + 4 GB DDR3 RAM, not quite bad for a router). The shutdown beep works like a charm, occurring almost instantly, right after the ctrl+alt+del combo, as it's supposed to confirm "shutdown sequence initiated". I want to beep just a "nanosecond" before it shows the login prompt - e.g. after $all and #everything (however you want to write it) has really finished initializing and the system is ready to display the login prompt. How can I create an autorun script that executes right when the system is done with everything, but just before it shows the login prompt? PS. Kinda whining, but..
It's a router with console-only stuff, no Xorg or anything like that: the only required user is root, and most of its tasks are root-related. The system can be "initializing", "running", "shutting down" or "powered off" - no runlevels required. Debian is real trash for that kind of device; the runlevelless Busybox init feels very "sexy" in that sense. Later on I am planning to get rid of heavy Debian in favor of buildroot-like distros. But that would happen only in production: only after I stabilize and figure out what software I need. For now I do like the apt-style "runtime" package-adding possibility in the development and software-usability testing stage. So far I have had great experience with iptables, and the worst experience with shorewall+webmin, where the first blocked the second, along with blocking dnsmasq, cutting off all WAN connectivity, and so on. I hope more user-friendly solutions will be found. For now the question is about "run a script after everything from /etc/init.d has been launched and is operating".
Excellent solution here. Solution 1: Create an @reboot entry in your crontab to run a script called /usr/local/bin/runonce. Create a directory structure called /etc/local/runonce.d/ran using mkdir -p. Create the script /usr/local/bin/runonce as follows:

#!/bin/sh
for file in /etc/local/runonce.d/*
do
    if [ ! -f "$file" ]
    then
        continue
    fi
    "$file"
    mv "$file" "/etc/local/runonce.d/ran/$(basename "$file").$(date +%Y%m%dT%H%M%S)"
    logger -t runonce -p local3.info "$file"
done

Now place any script you want run at the next reboot (once only) in the directory /etc/local/runonce.d and chown and chmod +x it appropriately. Once it's been run, you'll find it moved to the ran subdirectory with the date and time appended to its name. There will also be an entry in your syslog. Credit goes to Dennis Williams: https://serverfault.com/questions/148341/linux-schedule-command-to-run-once-after-reboot-runonce-equivalent
How to run script at start when everything is up and running?
1,498,678,510,000
There is an application, Zoom, which depends on software known as ibus. After installation, ibus likes to autostart. I don't want ibus to autostart. I've removed autostart files that start im-daemon from ~/.config/autostart and /etc/xdg/autostart, but the application still starts. I've searched for systemd services and found none which start ibus. If I log out and log back in, the application starts again. How can I find the source and stop this malware-emulating software from autostarting? I'm on Debian 10 Cinnamon. Currently, I have deleted the binary for im-daemon, which causes the autostart program to fail. However, I still want to know how and why this software strives to hide its activity from the user.
I am assuming you are talking about the Zoom Meeting application. I ran the command

strace -o debug.txt -e trace=file -f ./ZoomLauncher

This showed that at one point it loads libibusplatforminputcontextplugin.so, which is part of the zoom package. To guess what is happening without a full deep dive into the product I ran:

strings ./platforminputcontexts/libibusplatforminputcontextplugin.so | grep -i ibus

This showed that there are several strings that reference ibus, so the application likely needs it. The easier solutions are not using the GNOME variant, or trying to run the application in Wine. If you want to stop it directly, you could try to stop the changes from happening. I am not running Cinnamon, so I am guessing at the solution and you may need to do more research. The debug file from earlier can show all files accessed. Running the following can show a cleaned-up list of file accesses:

cat debug.txt | grep -v "No such file or directory\|RDONLY\|exited\|unfinished\|\"/dev/" | grep "[0-9]* openat("

Zoom seemed interested in my "/run/user/1000/dconf/user" file. To be clear (due to recent events in the news), I am not saying this is malicious; I am saying that on my system the modify time changed at the same time as when I was running zoom. My lack of a stronger statement is due to my current knowledge of dconf being low, and from what I know there are many legitimate reasons for any friendly application to change a field in this file. If it is modifying dconf settings, there could be something in there that is starting ibus. If that is the case, then I would recommend changing permissions on that file so that your user cannot change it:

chmod 400 /run/user/1000/dconf/user

or the dconf file under your home directory, if zoom is modifying that one. This can very possibly cause poor and unexpected behavior, but the question's tone seems to indicate that this would be acceptable. There are also options for locking dconf, with the same caveats: Locking dconf
Disable autostart of ibus on login
1,498,678,510,000
I've got a problem: A while ago, I installed a piece of software called wii-u-gc-adapter, and when I run it, I can plug my GameCube controller in to my Linux machine desktop computer to play games. This works great. I must have taken some advice at some point in time to put somewhere in my computer an instruction to run this process at startup. I bring this up because when I turn my computer off, it hangs for one minute 30 seconds waiting for this process to end. When I run ./wii-u-gc-adapter myself, I then manually kill it. But at some point in installing it, I told my system to run it. The program is listed in usr/local/bin, which doesn't surprise me. Here is the end of my $pstree ├─whoopsie───2*[{whoopsie}] ├─wii-u-gc-adapte───2*[{wii-u-gc-adapte}] ├─wpa_supplicant └─xdg-permission-───2*[{xdg-permission-}] In htop I see the following when I filter for wii: When I shutdown my computer, I have to wait for one minute 30 seconds, and when I press F2 I see this message: A stop job is running for Wii U Gamecube Adapter I'd like to clean up this loose end. I usually document what I do when I modify a file, but I don't think I did here, so I'm having a hard time finding where I made a change that causes this program to run on startup. [Some progress here - edit 1] ~$ ps j 1045 PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND 1 1045 1045 1045 ? -1 Ssl 0 0:00 /usr/local/bin/wii-u-gc-adapter So the parent process of 1045 is PPID 1, i.e. it looks like someone told systemd to start this process. I would like to take this process off that list. [ more progress here] Found a gamecube.service file by going to /etc and using ag to search for it. systemd/system/gamecube.service 2:Description=Wii U Gamecube Adapter 8:ExecStart=/usr/local/bin/wii-u-gc-adapter I'd like to completely remove this service. 
[Third edit] I am following this answer: https://superuser.com/a/936976 [Fourth edit] Following the procedure from superuser, after finding that the parent process was indeed systemd with ps j, this problem is now resolved.
Solution:

Find the process name in pstree.
Find the process in htop and get its PID from there.
Use the command ps j [PID] to find the PID of the parent process.
If the PPID is 1, then it is being started by systemd.
If it is started by systemd, use this answer along with the information you got from the first two steps to totally remove the process from systemd's list of what to run on start up.
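The final removal step for a unit like the question's gamecube.service typically comes down to a handful of systemctl commands. A hedged sketch, wrapped in a function so it can be reviewed (or stubbed out) before being run as root:

```shell
#!/bin/sh
# Fully remove a hand-written systemd unit from /etc/systemd/system.
# Run as root, e.g.: remove_unit gamecube.service
remove_unit() {
    unit="$1"
    systemctl stop "$unit"               # end the running process now
    systemctl disable "$unit"            # drop any enable symlinks
    rm -f "/etc/systemd/system/$unit"    # delete the unit file itself
    systemctl daemon-reload              # make systemd forget the unit
    systemctl reset-failed               # clear any leftover failed state
}
```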
What's causing a process to start at startup?
1,498,678,510,000
CentOS 7.x with GNOME 3 Shell by default provides the following *.desktop files under /etc/xdg/autostart/ with AutostartCondition key: # gnome-welcome-tour.desktop [Desktop Entry] Type=Application Name=Welcome Exec=/usr/libexec/gnome-welcome-tour AutostartCondition=if-exists run-welcome-tour OnlyShowIn=GNOME; NoDisplay=true And # gnome-initial-setup-first-login.desktop [Desktop Entry] Name=Initial Setup #... Icon=preferences-system Exec=/usr/libexec/gnome-initial-setup --existing-user Terminal=false Type=Application StartupNotify=true Categories=GNOME;GTK;System; OnlyShowIn=GNOME; NoDisplay=true AutostartCondition=unless-exists gnome-initial-setup-done #... My questions: Am I correct in thinking AutostartCondition key determines if the value of Exec key gets executed by GNOME 3 (or another XDG compliant desktop or session manager) after reading the /etc/xdg/autostart/*.desktop file on startup? How do I query the current value for AutostartCondition? In relation to question #2: I've attempted the following unsuccessfully (I've already completed both gnome-welcome-tour and gnome-initial-setup and am not prompted on login): [user@user-centos-7 ~]$ gconftool-2 --recursive-list / | grep gnome-initial-setup-done [user@user-centos-7 ~]$ gsettings list-schemas | while read -r SCHEMA; do gsettings list-recursively $SCHEMA; done | grep gnome-initial-setup-done [user@user-centos-7 ~]$ [user@user-centos-7 ~]$ gconftool-2 --recursive-list / | grep run-welcome-tour [user@user-centos-7 ~]$ gsettings list-schemas | while read -r SCHEMA; do gsettings list-recursively $SCHEMA; done | grep run-welcome-tour [user@user-centos-7 ~]$
The session manager reads the .desktop files of all the start-up apps. If it finds an AutostartCondition key in any of those files, it checks its value: if the condition is not met, that particular app is removed from the list of start-up apps. The autostart conditions are described in a very old post on the freedesktop mailing list:

The Autostart-Condition Key

The Autostart-Condition key gives a condition which should be tested before autostarting the application; if the condition is not met, then the application MUST NOT be autostarted. The condition can be in one of the following forms:

if-exists FILE
    The application should only be autostarted if FILE exists (relative to $XDG_CONFIG_HOME).
unless-exists FILE
    The application should only be autostarted if FILE *doesn't* exist (relative to $XDG_CONFIG_HOME).
DESKTOP-ENVIRONMENT-NAME [DESKTOP-SPECIFIC-TEST]
    The application should only be autostarted under the named desktop environment (as with OnlyShowIn). If DESKTOP-SPECIFIC-TEST is also given, the desktop environment will evaluate it in some manner specific to that desktop to determine whether or not the application should be autostarted.

which would end up being used like:

Name=kgpg
# start only under KDE, and only if the given kconfig key is set
Autostart-Condition=KDE kgpgrc:User Interface:AutoStart:false

Name=vino
# start only under GNOME, and only if the given gconf key is set
Autostart-Condition=GNOME /desktop/gnome/remote_access/enabled

Name=beagled
# start under any desktop environment, unless
# ~/.config/beagle/disable-autostart exists
Autostart-Condition=unless-exists beagle/disable-autostart

So, in your particular case, the autostart conditions are that ~/.config/run-welcome-tour exists and, respectively, that ~/.config/gnome-initial-setup-done doesn't exist.
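Given those rules, the "current value" of each condition in the question is simply the presence or absence of a marker file under $XDG_CONFIG_HOME, so you can query and flip it from the shell. A sketch, with the file names taken from the two .desktop files in the question:

```shell
#!/bin/sh
# Query and toggle the two autostart conditions; only plain marker
# files under $XDG_CONFIG_HOME are involved.
cfg="${XDG_CONFIG_HOME:-$HOME/.config}"
mkdir -p "$cfg"

# Current state: if-exists / unless-exists are just file tests.
if [ -e "$cfg/run-welcome-tour" ]; then
    echo "welcome tour: will run"
else
    echo "welcome tour: will not run"
fi
if [ -e "$cfg/gnome-initial-setup-done" ]; then
    echo "initial setup: done (suppressed)"
else
    echo "initial setup: will run on next login"
fi

# Re-arm the welcome tour and suppress initial setup for the next login:
touch "$cfg/run-welcome-tour"
touch "$cfg/gnome-initial-setup-done"
```

To re-trigger the initial setup instead, rm the gnome-initial-setup-done marker.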
Understanding AutostartCondition key in .desktop files
1,498,678,510,000
Which program runs $XDG_CONFIG_HOME/autostart in Debian 9? I tried putting the following .desktop file in $XDG_CONFIG_HOME/autostart: [Desktop Entry] Type=Application Name=test Comment=test NoDisplay=true Exec=sh -c 'cat /proc/$$/status >~/test_output' NotShowIn=GNOME;KDE;XFCE; Its PPID is 1 (systemd),but I can't find how systemd handles the $XDG_CONFIG_HOME/autostart entries.
This is handled by desktop environments which implement the Desktop Application Autostart Specification. If you’re using the default desktop environment in Debian 9, GNOME, autostart applications are started by gnome-session. I imagine the fact that your process ends up with systemd as its parent is because its original parent stops and leaves it running; processes whose parent dies are reparented to pid 1.
Which program runs $XDG_CONFIG_HOME/autostart in Debian 9?
1,498,678,510,000
I'm using Telegram Desktop on Debian 8.7. It starts automatically even though I've not created any .desktop file under ~/.kde/Autostart. It's quite annoying as I don't want its window to show up maximised but I'd rather start it with the --startintray flag. I've also tried to create the .desktop file under the Autostart folder but that seems to not override this weird behaviour. By opening the running services I can see why this occurs (why telegram opens by itself), I get the following hint when I go over "Telegram" with the cursor:
I fixed it! KDE was saving the session upon shutdown and restoring it upon startup, thus reopening all programs. I disabled it under Startup and Workspace > Startup and Shutdown > Desktop Session > On Login > Start with an empty session. (Restore previous session was the default)
How can I prevent Telegram from starting automatically on Debian 8.7 Jessie?
1,498,678,510,000
I'm using a beaglebone black which works with Debian 8.6. I want to start a program after reboot. I tried crontab but it didn't work. @reboot sleep 60 && /home/debian/acspilot/start.sh Program consists of a config.sh file and a acsp.py python script. Each code works fine from terminal. Here are the codes: start.sh: #!/bin/sh sudo su cd home/debian/acs/ ./config_pins.sh python acsp.py config_pins.sh : #! /bin/bash cd /sys/devices/platform/bone_capemgr File=slots if grep -q "Override Board Name,00A0,Override Manuf,univ-emmc" "$File"; then cd echo -e "\nHooray!! configuration available" echo -e "\n UART 4 configuration p9.11 and p9.13" sudo config-pin P9.11 uart sudo config-pin -q P9.11 sudo config-pin P9.13 uart sudo config-pin -q P9.13 echo -e "\n UART 1 configuration p9.26 and p9.24" sudo config-pin P9.24 uart sudo config-pin -q P9.24 sudo config-pin P9.26 uart sudo config-pin -q P9.26 echo -e "\n UART 5 configuration p8.38 and p8.37" sudo config-pin P8.38 uart sudo config-pin -q P8.38 sudo config-pin P8.37 uart sudo config-pin -q P8.37 echo -e "\n UART configuration end" else echo "Oops!!configuration is not available" echo "Please check uEnv.txt file and only disable HDMI" fi acsp.py: import Adafruit_BBIO.PWM as PWM import Adafruit_BBIO.UART as UART import time # UART communication begins UART.setup("UART1") # pwm begins PWM.start("P9_14", 5,50) ser = serial.Serial(port = "/dev/ttyO1", baudrate=9600) ser.close() ser.open() while ser.isOpen(): for i in range(1,99) print i ser.write(str(i)+"%") PWM.set_duty_cycle("P9_14", i) time.sleep(5) ser.close() UART.cleanup() PWM.stop("P9_14") PWM.cleanup()
The script contains sudo'ed lines, and if it is cronned under a non-root user it cannot run unattended, since sudo has no terminal to prompt for a password on. Instead, first become root with

sudo su -

then run crontab -e as the root user and add the @reboot task line there.
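The root crontab entry could look something like this (the log redirection is an assumption added here for debugging; the command itself is taken from the question):

# edit with: sudo crontab -e
@reboot sleep 60 && /home/debian/acspilot/start.sh >> /var/log/acspilot.log 2>&1

With the job running as root, the sudo calls inside the scripts become unnecessary, and the log file will show any remaining errors after a reboot.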
Run shell script after reboot in beaglebone black
1,498,678,510,000
I am on Ubuntu 18.04 and trying to swap Ctrl and CapsLock using xmodmap, but I failed to find a way of doing that automatically: .[X|x]modmap[rc] and .config/autostart didn't work. What other ways are there? Could it be done through systemd? SHORT: a desktop entry in .config/autostart or /etc/xdg/autostart. Exec is not a full-fledged shell command, so sh -c might be required:

[Desktop Entry]
Type=Application
Exec=sh -c "xmodmap ~/.xmodmaprc"
Since Ubuntu switched back from Unity to Gnome in version 17.10, you should be able to use the Gnome autostart mechanism (if it is sufficient that the shell command is launched on login). To do so: you will need sudo privileges create a shell script that runs the necessary command (say switch_ctrl_capslock.sh) and place it in /usr/local/bin create a .desktop file /etc/xdg/autostart/switch_ctrl_capslock.desktop with (more or less) the following content: [Desktop Entry] Type=Application Exec=/usr/local/bin/switch_ctrl_capslock.sh If everything is setup correctly, the script should be run once when the user logs into Gnome. For further reading, have a look at Autostart application on login How do I start applications automatically on login?
Run shell command exectly one time at login
1,498,678,510,000
I am trying out Slackware 14.2. I can start sshd with

/etc/rc.d/rc.inet1 sshd start

but my question is: how do I add a service to run at boot on Slackware Linux? Basically, how do I permanently add services to the system on Slackware, and also check a service's status? So far I have been able to achieve the above using this link,

$ sudo nano /etc/rc.d/rc.M

and adding these lines:

# Start the sshd server
if [ -x /etc/rc.d/rc.sshd ]; then
  . /etc/rc.d/rc.sshd
fi

and it did work; the ssh server started automatically after boot, as I was able to ssh into that system. But how do I check that service's status from within the system, other than ps aux | grep ssh or netstat -lntp | grep ssh or using tools like lsof? What I mean is something like the usual sudo service sshd status or sudo systemctl status sshd.
Slackware uses the BSD-style init system. The sshd daemon is started on boot by the rc.inet2 script and stopped by rc.0 on shutdown or rc.6 on reboot. To start the sshd daemon on boot, add execute permission to the rc.sshd script:

chmod +x /etc/rc.d/rc.sshd

To disable sshd on boot, remove the execute permission:

chmod -x /etc/rc.d/rc.sshd

Alternatively, you can manage the sshd daemon (stop, start and restart) using the rc script:

sh /etc/rc.d/rc.sshd
usage: /etc/rc.d/rc.sshd start|stop|restart
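The stock rc scripts have no status subcommand, so for a systemctl status-like check you can sketch a small helper around pgrep. The function name and output format here are my own invention, not part of Slackware:

```shell
#!/bin/sh
# Hypothetical `service status` substitute for BSD-style init:
# reports whether a daemon with the given process name is running.
rc_status() {
    if pgrep -x "$1" > /dev/null 2>&1; then
        echo "$1 is running (pid $(pgrep -x "$1" | head -n 1))"
        return 0
    else
        echo "$1 is stopped"
        return 1
    fi
}
```

Usage: rc_status sshd.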
How to add service to run at boot on slackware linux?
1,498,678,510,000
I saw How to manage startup applications in Debian 9? At that time, gnome-tweak-tool was the go-to tool; however, since a few days ago, that app is not there anymore. I am running Debian buster. https://tracker.debian.org/pkg/gnome-tweak-tool. See the news section and specifically https://tracker.debian.org/news/931994. I even tried out a version from snapshot, but it didn't do anything for a couple of apps which I wanted to be there right from boot. I tried bum, only to see and understand that it's a run-level editor - useful, but not for the scenario I have in mind, which is simply asking the system to start some services after logging on to the desktop. Even systemd-ui, which was a separate package, is no longer there. So how can we do the same in MATE?
To figure out what happened to a package, you need to look at the removal reason from unstable, not testing. In gnome-tweak-tool’s case, this is given as “RoM; source package has been renamed to gnome-tweaks”, and true enough, there is now a gnome-tweaks source package, which builds a gnome-tweaks binary package and a gnome-tweak-tool transitional package. The MATE equivalent is mate-tweak. However in MATE, you configure your startup applications in the main control centre, using the “Startup Applications” applet.
Managing startup applications in debian buster 10 (mate)
1,498,678,510,000
I'm following the answer of Autostarting .desktop application at startup not working, and it is working well. I'm launching a python script. The .desktop application looks like this: [Desktop Entry] Type=Application Name=Autostart Script Exec=shutdown_notify.py Icon=system-run This is working well - but if I log out and log in I see the first instance of my script is still running. $ ps -C shutdown_notify.py PID TTY TIME CMD 4026 ? 00:00:01 shutdown_notify 25421 ? 00:00:00 shutdown_notify Is there a way to make sure the script exits on log out? Do I need to add logic to quit on log out? Update with logind.conf info as requested by @binarysta $ grep -E 'KillUserProcesses|KillOnlyUsers|KillExcludeUsers' /etc/systemd/logind.conf #KillUserProcesses=no #KillOnlyUsers= #KillExcludeUsers=root Update, I have two autostart files: one launches the python script above, the other launches gnome-terminal. After a reboot I see: $ ps aux | grep -E "gnome-terminal|shutdown_notify" training 6495 0.9 0.1 84936 29360 ? S 07:13 0:00 /usr/bin/python3 /usr/local/bin/shutdown_notify.py training 6565 0.7 0.2 622880 34448 ? Sl 07:13 0:00 /usr/lib/gnome-terminal/gnome-terminal-server training 9647 0.0 0.0 13264 2564 pts/0 S+ 07:13 0:00 grep --color=auto -E gnome-terminal|shutdown_notify Log out, log in, and I see: training 6495 0.1 0.1 85076 29360 ? S 07:13 0:00 /usr/bin/python3 /usr/local/bin/shutdown_notify.py training 19110 3.1 0.1 84936 29636 ? S 07:15 0:00 /usr/bin/python3 /usr/local/bin/shutdown_notify.py training 19141 2.3 0.2 696496 34584 ? Sl 07:15 0:00 /usr/lib/gnome-terminal/gnome-terminal-server training 19421 0.0 0.0 13264 2696 pts/0 S+ 07:15 0:00 grep --color=auto -E gnome-terminal|shutdown_notify
This is because KillUserProcesses is no by default in Ubuntu. This setting causes user processes not to be killed when the user completely logs out. To change this behavior so that all user processes are killed on the user's logout, set KillUserProcesses=yes in /etc/systemd/logind.conf and log in again. The current value can be checked with the following (it should be true after the change):

busctl get-property org.freedesktop.login1 /org/freedesktop/login1 org.freedesktop.login1.Manager KillUserProcesses
b true

UPDATE The difference here between gnome-terminal (a GUI application) and the shutdown_notify.py process is that the gnome-terminal-server process is bound to the same session as all the other X11 processes running. On logout, the desktop environment and windowing system (X11) are terminated, which is why gnome-terminal exits.
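The edit itself can be scripted; a sketch that flips the (possibly commented-out) default in place. It is wrapped in a function only so it can be rehearsed on a copy of the file first; on the real system, run it as root against /etc/systemd/logind.conf:

```shell
#!/bin/sh
# Set KillUserProcesses=yes in a logind.conf-style file, whether the
# line is currently commented out (#KillUserProcesses=no) or already
# set to some value.
enable_kill_user_processes() {
    sed -i 's/^#\{0,1\}KillUserProcesses=.*/KillUserProcesses=yes/' "$1"
}
```

Usage: sudo sh -c '. ./script.sh; enable_kill_user_processes /etc/systemd/logind.conf'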
Autostarted .desktop application doesn't exit on log out
1,498,678,510,000
I'm using Ubuntu Mate with XMonad. I can't seem to understand how to run something at startup after the login. I want some programs like Firefox to run when I login to my desktop. Now let's say I just want to run a simple script: /home/juser/.xmonad/autostart.sh The file is set as executable. I've tried many things. Startup Applications from Ubuntu Mate settings are obviously not working on xmonad. The second thing I've tried was putting the command at the end of my .xsessionrc file, after xmonad is executed. My .xsessionrc file: #!/bin/bash xrdb -merge .Xresources stalonetray & feh --bg-scale /usr/share/backgrounds/cosmos/sombrero.jpg & udiskie & xfce4-power-manager & xrandr --auto --output HDMI-1 --primary --left-of VGA-1 & # Firefox PulseAudio fix pulseaudio --start --exit-idle-time=-1 & compton -bCG --active-opacity 1.0 --shadow-ignore-shaped & if [ -x /usr/bin/nm-applet ] ; then nm-applet --sm-disable & fi exec xmonad exec /home/juser/.xmonad/autostart.sh #THIS IS NOT WORKING The desktop starts successfully but my script is not executed. Another thing that doesn't work is to use SpawnOnce inside xmonad.hs file. Something like that ( I've pasted my entire file here: https://pastebin.com/yUXjbgva ): ... import XMonad.Util.SpawnOnce myConfig = docks defaultConfig ... , startupHook = myStartupHook ... myStartupHook = do spawnOnce "/home/juser/.xmonad/autostart.sh" ... My script is simply ignored same as in previous example. So I gave up on SpawnOnce directive. Do I have any other options? What am I doing wrong?
The line exec xmonad in your shell script replaces the shell running the script with the xmonad process, so there's nobody left to run the next line. Type help exec in a bash shell, or see bash(1). You probably want to rewrite the last two lines as

/home/juser/.xmonad/autostart.sh &
exec xmonad

if none of the autostarted stuff needs xmonad.
Ubuntu with Xmonad - How to run programs on startup
1,498,678,510,000
On i3 start up I'd like HexChat to automatically start in my fifth workspace. I know how to edit my config (~/.i3/config) to start HexChat on i3 start up, namely by adding exec hexchat line to it, but that starts it in my first workspace, when I want it started in my fifth workspace (i.e. $workspace5 in my i3 config). Despite this, I want workspace 1 to be the one I'm shown on i3 start up (which is what I'm shown with my current config). My i3 configs are here and my distribution is openSUSE Tumbleweed.
You need to find some criteria that matches your window, then you can configure a workspace for it. I don't know HexChat, so here is an example for xclock. If you run this well-know X11 application, then run xprop and click on the clock window you will get output showing you the window class is XClock: WM_CLASS(STRING) = "xclock", "XClock" So in your config you would have assign [class="^XClock$"] 5 exec --no-startup-id xclock This matches the class with a regular expression, hence the ^ and $, but in most cases you can be less explicit.
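Applied to HexChat and the $workspace5 variable from the question's config, the lines would look something like this. The window class ^Hexchat$ is a guess here; verify the real class with xprop first:

assign [class="^Hexchat$"] $workspace5
exec --no-startup-id hexchat

Since assign places the window on the target workspace without switching your view to it, workspace 1 should remain the one shown at startup, as the question requires.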
How do I get HexChat to start on i3 login in its own workspace (but not in workspace 1)?
1,498,678,510,000
I came across the messages below with sudo journalctl --since today | tail -n 3000. Shouldn't autostart entries be removed from there when removing a package? It seems like general good practice to remove autostart entries if the package has been removed (or to prompt the user about it during removal, or to remove them when running a command like sudo apt-get autoremove). Moreover, wouldn't it cause executables to get autostarted if they happen to be named/aliased/linked the same way as the command in the Exec line there? If this isn't currently done, what would be the right way to auto-remove these entries? Where could an issue about this exist? For now, under Debian 12/KDE, I just moved them like so:

sudo mv /etc/xdg/autostart/xbindkeys.desktop ~/DisabledAutostarts_defunct/xbindkeys.desktop

and hope that is fine as a workaround.

systemd-xdg-autostart-generator[1712]: Exec binary 'xbindkeys_autostart' does not exist: No such file or directory
systemd-xdg-autostart-generator[1712]: /etc/xdg/autostart/xbindkeys.desktop: not generating unit, executable specified in Exec= does not exist.
systemd-xdg-autostart-generator[1712]: Exec binary '/usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1' does not exist: No such file or directory
systemd-xdg-autostart-generator[1712]: /etc/xdg/autostart/polkit-gnome-authentication-agent-1.desktop: not generating unit, executable specified in Exec= does not exist.
systemd-xdg-autostart-generator[1712]: Exec binary 'gsettings-data-convert' does not exist: No such file or directory
systemd-xdg-autostart-generator[1712]: /etc/xdg/autostart/gsettings-data-convert.desktop: not generating unit, executable specified in Exec= does not exist.
systemd-xdg-autostart-generator[1712]: Exec binary 'start-pulseaudio-x11' does not exist: No such file or directory
systemd-xdg-autostart-generator[1712]: /etc/xdg/autostart/pulseaudio.desktop: not generating unit, executable specified in Exec= does not exist.
systemd-xdg-autostart-generator[1712]: Exec binary 'kmixctrl' does not exist: No such file or directory systemd-xdg-autostart-generator[1712]: /etc/xdg/autostart/restore_kmix_volumes.desktop: not generating unit, executable specified in Exec= does not exist. systemd-xdg-autostart-generator[1712]: Exec binary 'kmix' does not exist: No such file or directory systemd-xdg-autostart-generator[1712]: /etc/xdg/autostart/kmix_autostart.desktop: not generating unit, executable specified in Exec= does not exist. systemd-xdg-autostart-generator[1712]: Exec binary 'usbguard-applet-qt' does not exist: No such file or directory systemd-xdg-autostart-generator[1712]: /etc/xdg/autostart/usbguard-applet-qt.desktop: not generating unit, executable specified in Exec= does not exist. Files that cause things to autostart are different from any /etc/ config file which just provides some configs if the application is running. In addition to the problem mention, it can for example even also be a problem when reinstalling a later version of the same software because it could create an additional entry in that folder with another name so it starts twice. Issue is here now.
These files are treated as configuration files, and are only removed on package purge, not package removal. apt list '~c' will list packages which have been removed but not purged (including potentially some which dpkg still considered to be installed but which aren’t really). sudo apt purge '~c' will purge them, and you’ll see their configuration files in /etc disappear too. I agree that this situation isn’t great. I think the best solution would be if the Desktop Application Autostart specification allows autostart files to be dropped somewhere under /usr (in the same way as regular .desktop files can be installed in /usr/share/applications); that way packages could ship their default autostart files there, and they would be removed automatically along with the other non-configuration contents.
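Equivalent to apt list '~c', you can pull the bare package names straight out of dpkg's status database, which is handier for scripting. A small sketch:

```shell
#!/bin/sh
# List packages that are removed but still have configuration files
# ("rc" state in dpkg) -- the ones whose leftovers, such as
# /etc/xdg/autostart entries, may still be lying around.
list_rc_packages() {
    dpkg -l | awk '$1 == "rc" { print $2 }'
}
```

Run list_rc_packages on its own to inspect, or purge them all at once with sudo apt purge $(list_rc_packages).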
Shouldn't files in /etc/xdg/autostart/ be removed when removing a package?
1,498,678,510,000
I am trying to start a GUI application (a Python 3 project) at startup. I've created a script in /etc/xdg/autostart/:

@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@xscreensaver -no-splash
export DISPLAY=:0
@/usr/bin/python3 ~/path/to/the/file.py

I've also tried the following script in /home/debian/.config/autostart/fileName:

@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@xscreensaver -no-splash
export DISPLAY=:0
@lxterminal -e /usr/bin/python3 ~/path/to/the/file.py

Something to note is that some of the libraries used are only available under the "debian" login; hence, I need to run this script as "debian". Any suggestions on how to improve this? Currently, nothing happens on startup. I tried putting a touch command before the export command to see if the file is even invoked; I didn't see any file generated either.
Found the solution here : Why does LXQT Autostart not do anything? I updated the /home/debian/.config/autostart folder and created a new .desktop file with inputs: [Desktop Entry] Exec=sh script_name Path=/full/path/to/working/directory Name=MyAppName Type=Application Version=1.0
Autostarting a python GUI application on startup as local user (beagle bone black)
1,498,678,510,000
I'm trying to configure the Autoexec section of Dosbox. I downloaded DOSBOX for Windows and I added these lines : [autoexec] # Lines in this section will be run at startup. # You can put your MOUNT lines here. MOUNT C C:\DOSGAMES\ C: I saved and ran Dosbox.exe, it worked ! I'm doing the same thing for Dosbox in Fedora 30, configuring the file dosbox-0.74.conf located in /usr/share/dosbox/translations/fr. But when I run it, it doesn't work. Here is a screenshot : http://image.noelshack.com/fichiers/2019/30/2/1563874333-dosboxconf.jpg It worked for Windows... I tried to put these lines on each dosbox-0.74.conf located in each translation folder, but still not working. I think that the application doesn't run this configuration file, it seems to be another file, but which one? I've made a search of dosbox and these were the only files existing.
The .dosbox folder is hidden by design. Each user has their own default config file in ~/.dosbox/dosbox.conf . For root, go into /root/ and press Ctrl+H , and hidden folders will appear. You can choose other config files with the corresponding flag, so make sure to configure the right dosbox.conf file.
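For reference, a sketch of what the per-user file could look like once found — the [autoexec] section mirrors the one from the question; the mount path here is just an assumption to adapt:

```ini
; ~/.dosbox/dosbox-0.74.conf (per-user default config, hidden folder)
[autoexec]
; Lines in this section will be run at startup.
MOUNT C ~/dosgames
C:
```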
AUTOEXEC Configuration for DOSBOX doesn't work (Works on Windows)
1,498,678,510,000
I want to access the screensaver state of Ubuntu (when my laptop is idle for some minutes), and run an application when the screensaver starts; when I click or move the mouse, the application should exit as the screensaver exits. After searching on Google I came across the command xssstate -i But I don't know what it does and how I can use it.
You can use xssstate -s to check out the screensaver's status: $ xssstate -s off then, based on the output, decide what you have to do. -i returns X's idle time. You can create a simple script and run it using cron, then in that script use xssstate to see whether you have to run or end your program.
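A hedged sketch of that cron-driven approach: poll xssstate -s and start or stop the program depending on the reported state. The state strings ("on"/"off"/"disabled") and the commands are assumptions for illustration — check what your xssstate build actually prints:

```python
import subprocess

# Hypothetical commands -- replace with your own program.
START_CMD = ["my-app"]
STOP_CMD = ["pkill", "-f", "my-app"]

def decide(state: str):
    """Map an `xssstate -s`-style state string to an action.

    Assumes the tool prints one of "on", "off" or "disabled";
    returns the command to run, or None if nothing should happen.
    """
    state = state.strip().lower()
    if state == "on":                     # screensaver active -> start the program
        return START_CMD
    if state in ("off", "disabled"):      # screensaver gone -> stop the program
        return STOP_CMD
    return None                           # unknown output -> do nothing

def poll_once():
    """One cron tick: read the state and act on it."""
    out = subprocess.run(["xssstate", "-s"],
                         capture_output=True, text=True).stdout
    cmd = decide(out)
    if cmd:
        subprocess.run(cmd)
```

Running poll_once from a one-minute cron job gives roughly the behaviour asked for, at cron's granularity.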
Use of command "xssstate " in ubuntu?
1,498,678,510,000
I have a bash script, called runthunderbird, that… runs thunderbird : #!/bin/bash sleep 5 thunderbird & disown When I execute runthunderbird from krunner, Thunderbird starts after 5 seconds as intended, then the runthunderbird process ends, as evidenced by ps -ef or the process list in the System monitor from Plasma 5.27.10. However, the same System monitor has an Applications tab where runthunderbird remains listed, instead of thunderbird-bin, with the resource (e.g. memory) consumption of Thunderbird. If instead of just Thunderbird, the script runs several graphical applications before ending, they are lumped together in the Applications tab, again under the name of the (terminated) script. I am wondering how this is managed and how I should write my script to completely detach thunderbird from the script that runs it. I am also wondering whether this is related to Thunderbird not actually being run when the runthunderbird script is run at login from a desktop file in ~/.config/autostart.
It's managed via cgroups and systemd. (That is, I'm 95% sure it's managed via cgroups and systemd, although I don't have KDE installed here to confirm.) Recently, both GNOME and KDE have begun using systemd's support for on-the-fly creation of .services and .scopes for launching desktop apps if the user-level systemd --user service manager is running. This is based on the Linux cgroup feature – each systemd .service or .scope corresponds to a Linux cgroup. (Cgroups predate systemd but saw little use before it.) As soon as the app process has been spawned, they ask systemd to move it into a temporary .scope unit (and accordingly a new cgroup). Later, the task manager can look at the cgroup name of each process via /proc/<PID>/cgroup and read the resource values from /sys/fs/cgroup (which also provides a list of PIDs within each cgroup). Take a look systemd-cgls or loginctl user-status to see the resulting cgroup tree; check out systemd-cgtop as an example of per-cgroup resource metering. You can even add the "Systemd unit" column in htop or ps, e.g. ps -e -o pid,cmd,unit,uunit. You might also see individual cgroups being created for each terminal tab; GNOME terminals (libvte) and tmux do this now and I've heard that Konsole does as well. (Among other things, it allows each tab to have its own task limit; I've successfully ran the "bash smiley" forkbomb in GNOME Terminal and the rest of the system just kept chugging along.) I am wondering how this is managed and how I should write my script to completely detach thunderbird from the script than runs it. Cgroups are created a) by starting services, b) on-the-fly with systemd-run --user. To "detach" Thunderbird, you could use: systemd-run --user [--scope] --collect /bin/thunderbird (…or you could create a real "thunderbird.service" in your ~/.config/systemd/user/ and run it using systemctl --user start. That, combined with "thunderbird.timer" to delay the startup, would remove the need for your script entirely.) 
Use --scope and -u/--unit="app-foo-bar" to make it more similar to what GNOME and KDE are doing. Systemd-run's default is to create a .service; the difference is that .service units always have their process spawned by systemd itself while .scope allows the caller to move an already running process. This also means that in scope mode the command will still run "in foreground" unless you & it as usual, whereas in service mode it will always run "in background" and you would need to explicitly ask for -t/--pty if you want to run an interactive terminal app, or -P/--pipe if the program isn't necessarily interactive but you still want to see its output; otherwise all stdout goes to the system logs (journalctl). The -G/--collect just prevents stale units from lingering around. Systemd-run can be useful to specify a cgroup-level memory limit for a potentially large task without risk of OOM'ing the entire system – take a look at the -p/--property parameters.
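To make the lookup the task manager performs more concrete, here is a small sketch that parses the /proc/&lt;pid&gt;/cgroup format described above. The sample line is made up to resemble a process inside a per-app scope; on a cgroup-v2 system the file contains a single line starting with "0::":

```python
def cgroup_path(cgroup_file_text: str) -> str:
    """Extract the cgroup path from the contents of /proc/<pid>/cgroup.

    Each line has the form hierarchy-ID:controller-list:path;
    on cgroup v2 there is exactly one line, "0::<path>".
    """
    first = cgroup_file_text.splitlines()[0]
    return first.split(":", 2)[2]

# Hypothetical sample: a Thunderbird spawned into its own app scope.
sample = "0::/user.slice/user-1000.slice/user@1000.service/app.slice/app-thunderbird-1234.scope\n"
unit = cgroup_path(sample).rsplit("/", 1)[-1]
```

A real task manager would read open("/proc/%d/cgroup" % pid) and then group processes (and fetch memory figures from /sys/fs/cgroup) by that final unit name — which is why everything the script spawned still shows up under one entry.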
How are "Applications" run with krunner managed wrt processes?
1,689,350,565,000
I am running the latest version of Debian on an Intel NUC and I've setup PM2 autostart at boot with pm2 startup For my needs I need to delay its start by let's say 30 seconds. How to do that please?
You've probably figured this out by now, but for others who come here the answer is quite simple. You just open your shell script file, which is the one that PM2 runs upon start. In here you put in a "sleep"; for the 30 seconds simply write sleep 30s at the top of your .sh file. I used this to open my Magic Mirror after 30s, so I was sure my Pi connected to the internet before opening the app. My .sh file looks like this: sleep 30s cd ./MagicMirror DISPLAY=:0 npm start See this link for more info: https://www.lifewire.com/use-linux-sleep-command-3572060
How to delay PM2 autostart at boot?
1,689,350,565,000
I have compiled my code in this path : /home/m/ChatScript-master/SRC and created executable file myapp . I can run it from inside the SRC folder like ./myapp. But when I try /home/m/ChatScript-master/SRC/myapp from my /home/m it gives me: in cs_init.txt at 0: Error opening utf8writeappend file LOGS/startlog.txt: No such file or directory Why do I get this error message? My main problem is that, I want to build a kiosk like system and want to add my executable file inside : /home/m/.config/openbox/autostart like this: $ cat /home/m/.config/openbox/autostart echo 7 > /tmp/yy /home/m/ChatScript-master/SRC/myapp & echo 8 > /tmp/yy2 But it doesn't work! I could do it already with other programs, but this program gives me this error! NOTE: There are some folders inside ChatScript-master directory like SRC and LOGS and my executable file is inside SRC folder.
Your program uses a relative path from the current working directory to access LOGS/startlog.txt. If there is no LOGS directory in the current directory, the application fails. To correct this, make sure that the application uses an absolute path for accessing the file, or change the working directory as you start the application: ( cd /home/m/ChatScript-master/SRC && ./myapp ) & ... assuming /home/m/ChatScript-master/SRC contains the needed LOGS directory.
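A small self-contained demonstration of that failure mode, using throwaway directories instead of the real ChatScript tree — the directory names are stand-ins:

```python
import os
import tempfile

base = tempfile.mkdtemp()                      # plays the role of /home/m
src = os.path.join(base, "SRC")                # plays ChatScript-master/SRC
os.makedirs(os.path.join(src, "LOGS"))         # the app expects LOGS/ under its cwd

def open_log() -> bool:
    """Mimic the app opening LOGS/startlog.txt relative to the cwd."""
    try:
        with open("LOGS/startlog.txt", "w") as f:
            f.write("started\n")
        return True
    except FileNotFoundError:                  # "No such file or directory"
        return False

os.chdir(base)                  # like running .../SRC/myapp from /home/m
failed_outside = not open_log()

os.chdir(src)                   # like ( cd .../SRC && ./myapp )
works_inside = open_log()
```

The relative open only succeeds when the working directory contains LOGS, which is exactly why the cd-then-run wrapper fixes the autostart case.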
I get error when I want to execute my program from /home
1,689,350,565,000
I'm talking about the autostart scripts placed in ~/.config/autostart. I know that systemd-xdg-autostart-generator creates .service files for them. So where do I find the generated files so that I can check their status using systemctl status foo.service?
You can find the generated units at $XDG_RUNTIME_DIR/systemd/generator.late. To check the status: systemctl status --user foo.service
Check status of systemd autostart scrips
1,689,350,565,000
I'm thinking of developing an application to watch changes to auto-starting applications and services that would notify when: A service state changes (enabled/disabled/added/removed) An application is added/removed to boot Eventually revert the change Autostart locations to be watched: Systemd services Systemd timers Cron... ~/.kde/Autostart /etc/init.d/ /etc/xdg/autostart/ ~/.xinitrc etc. Note that I am looking for a GUI application, eventually something that resides in the system tray, or maybe a script that pops up a message/window. There are some tools and commands like incron, diff <(cat old) <(cat new), notify-send, zenity and gxmessage that make it easy to write a bash script that could take care of all that, but is there an application that already does that? Is there a similar tool that I can start from to avoid writing everything from scratch, or a tool/application that has a different purpose but could be adapted to the needed function? (any programming language)
Startup-Watcher Ended up writing it from scratch... Watches for changes in 32 locations Notifies when a change occurs Saves changes to /home/../.startup-watcher/changes Starts hidden in the tray Root not required Watches root and user And much more. Captures https://github.com/Intika-Linux-Apps/Startup-Watcher
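For anyone who would rather script it, the diff-based idea from the question can be sketched as a polling snapshot comparison. The watched location list and the idea of piping changes to notify-send are assumptions to adapt:

```python
import os

# Extend with the other autostart locations from the question.
WATCHED = ["/etc/xdg/autostart"]

def snapshot(dirs):
    """Map each watched directory to the set of entries it currently holds."""
    state = {}
    for d in dirs:
        try:
            state[d] = set(os.listdir(d))
        except FileNotFoundError:      # location absent on this system
            state[d] = set()
    return state

def diff(old, new):
    """Return {dir: (added, removed)} for directories that changed."""
    changes = {}
    for d in new:
        added = new[d] - old.get(d, set())
        removed = old.get(d, set()) - new[d]
        if added or removed:
            changes[d] = (added, removed)
    return changes
```

In a loop you would take a snapshot, sleep, take another, and call notify-send (or zenity) for each entry diff(old, new) reports — essentially the diff &lt;(cat old) &lt;(cat new) trick without the temp files.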
Is there an application to watch changes in auto starting apps and services?
1,689,350,565,000
I tried to initialize a dummy interface in a shell manually without any problem. In order to bring up this interface at every boot, I then tried to add it to /etc/bashrc or /etc/profile as below: ip link set name eth0 dev dummy0 ip link set eth0 address d0:17:c2:a9:a5:5e ifconfig eth0 hw ether d0:17:c2:a9:a5:5e I also added the config below to /etc/modules-load.d/dummy.conf but it did not work either. /sbin/ip link set name eth0 dev dummy0
In most of the distros the answer for executing commands at boot is /etc/rc.local (today this includes Red Hat and new versions of Debian, but you need to create the rc.local file first). So create a script as /etc/rc.local (if it does not exist - if it is already there, just include the lines before exit 0). The file contents must have an exit 0 at the end. #!/bin/bash ip link set name eth0 dev dummy0 ip link set eth0 address d0:17:c2:a9:a5:5e ifconfig eth0 hw ether d0:17:c2:a9:a5:5e exit 0 and after that remember to make it executable with: chmod +x /etc/rc.local If that works for you please mark this as the correct answer. In most cases you don't want to create a service or to use another solution that can interfere with your boot system, as rc.local is a "user place" for doing things like that. Good luck!
Why I cannot initialize dummy interface at bashrc?
1,689,350,565,000
I have a local VM running on Ubuntu that, on start, I need to run two commands on (as my user, andreas): sudo mount -a docker-compose up -d The last command is run in my home directory, and can't be run as root otherwise docker gives me grief. How do I run these two commands automatically when the machine loads, one as root, and one as me? Thanks
Run the script as root and use sudo's -u option to run the docker-compose as your user, and the -i option to make sure it's running in a login shell (i.e. the same environment you would have if you logged in[1]). e.g. #!/bin/sh mount -a sudo -i -u yourusername docker compose up -d alternatively, use su: #!/bin/sh mount -a su -l yourusername -c 'docker compose up -d' [1] Except for anything to do with X or your GUI desktop environment.
Mount and Start Docker on Startup
1,380,179,815,000
How can I reliably address different machines on my network? I've always used the .local suffix to talk to computers on my local network before. With a new router, though, .local rarely (though sometimes) works. I've found that .home and .lan both usually work, but not always. .-------. .--------. .-----. | modem |---| router |))))))(wifi))))))| foo | .-------. .--------. v .-----. || | v /_^_^_\ | \))))))).-----. / cloud \ | | bar | \-_-_-/ .-----. .-----. | baz | .-----. So, from a terminal on foo, I can try: ssh bar.local ssh bar.home ssh bar.lan ssh baz.local ssh baz.home ssh baz.lan and sometimes some of those suffixes work and some don't, but I don't know how to predict which or when. foo, bar, and baz are all modern Linux or Android systems and the Linux boxes all have (or can have) avahi-daemon, or other reasonably-available packages, installed (I don't want to set up static IP addresses: I'd like to keep using DHCP (from the router) for each machine, and even if I was okay with static addresses I'd want to be able to enter hostnames in the unrooted Android machines, where I can't edit the hosts file to map a chosen hostname to an IP address.)
There are no RFCs that specify .lan and .home. Thus, it is up to the router's vendor what pseudo TLDs (top-level-domain names) are by default configured. For example my router vendor (AVM) seems to use .fritz.box by default. .local is used by mDNS (multicast DNS), a protocol engineered by Apple. Using example.local only works on systems (and for destinations) that have a mDNS daemon running (e.g. MacOSX, current Linux distributions like Ubuntu/Fedora). You can keep using dhcp - but perhaps you have to configure your router a little bit. Most routers let you configure such things like the domain name for the network. Note that using pseudo TLDs is kind of dangerous - .lan seems to be popular - and better than .local (because it does not clash with mDNSs .local) - but there is no guarantee that ICANN will not introduce it as new TLD at some point. 2019 update: Case in point, .box isn't a pseudo TLD, anymore. ICANN delegated .box in 2016. Thus, it makes sense to get a real domain name - and use sub-domains of it for private stuff, e.g. when your domain is example.org you could use: lan.example.org internal.example.org ...
What's the difference between .local, .home, and .lan?
1,380,179,815,000
Running a server machine with CentOS 7, I've noticed that the avahi service is running by default. I am kind of wondering what the purpose of it is. One thing it seems to do (in my environment) is randomly disabling IPv6 connectivity, which looks like this in the logs: Oct 20 12:23:29 example.org avahi-daemon[779]: Withdrawing address record for fd00::1:2:3:4 on eno1 Oct 20 12:23:30 example.org Withdrawing address record for 2001:1:2:3:4:5:6:7 Oct 20 12:23:30 example.org Registering new address record for fe80::1:2:3:4 on eno1.*. (the suffixes 1:2:3... are made up) And indeed, after that the public 2001:1:2:3:4:5:6:7 IPv6 address is not accessible anymore. Because of that I've disabled the avahi service via: # systemctl disable avahi-daemon.socket avahi-daemon.service # systemctl mask avahi-daemon.socket avahi-daemon.service # systemctl stop avahi-daemon.socket avahi-daemon.service So far I haven't noticed any limitations. Thus, my question about the use-case(s) of avahi on a server system.
Avahi is the opensource implementation of Bonjour/Zeroconf. excerpt - http://avahi.org/ Avahi is a system which facilitates service discovery on a local network via the mDNS/DNS-SD protocol suite. This enables you to plug your laptop or computer into a network and instantly be able to view other people who you can chat with, find printers to print to or find files being shared. Compatible technology is found in Apple MacOS X (branded ​Bonjour and sometimes Zeroconf). A more detailed description is here along with the Wikipedia article. The ArchLinux article is more useful, specifying the types of services that can benefit from Avahi. In the past I'd generally disable it on servers, since every server I've managed in the past was explicitly told about the various resources that it needed to access. The two big benefits of Avahi are name resolution & finding printers, but on a server, in a managed environment, it's of little value.
What is the purpose of avahi on a RHEL 7 server?
1,380,179,815,000
I'm currently working on a project that has required some DNS troubleshooting. However I am fairly new to the wonderful world of networking and I'm at a bit of a loss as to where to begin. My specific problem probably belongs on the Raspberry Pi Stack Exchange, so I'll avoid crossposting. Just looking for information here. Looking for information, I was lead to the resolv.conf(5) file, resolvconf(8), systemd-resolve(1), and the beast that avahi appears to be. My Raspberry Pi with Raspbian Buster appears to have avahi-daemon running. My Ubuntu 18.04.4 LTS has systemd-resolved AND avahi-daemon. Does resolvconf(8) (man page only on Ubuntu) coordinate the two? When is /etc/resolv.conf used/ignored? On Ubuntu: $ cat /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN # 127.0.0.53 is the systemd-resolved stub resolver. # run "systemd-resolve --status" to see details about the actual nameservers. nameserver 127.0.0.53 search telus On Raspbian: $ cat /etc/resolv.conf # Generated by resolvconf nameserver 192.168.0.1 nameserver 8.8.8.8 nameserver fd51:42f8:caae:d92e::1 Which utilities are responsible for this? I don't really understand enough jargon to sift through the man pages and differentiate all these, and I'd love an explanation of how their roles are related.
When you run a command such as ping foobar the system needs to work out how to convert foobar to an ip address. Typically the first place it looks is /etc/nsswitch.conf. This might have a line such as: hosts: files dns mdns4 This tells the lookup routine to first look in "files", which is /etc/hosts. If that doesn't find a match then it will then try to do a DNS lookup. And if we still don't know the answer then it'll try to do a mDNS lookup. The DNS lookup is where the system then looks at /etc/resolv.conf. This tells it what DNS servers to look at. On my machines I have this auto-configured by DHCP. % cat /etc/resolv.conf # Generated by NetworkManager search mydomain nameserver 10.0.0.1 nameserver 10.0.0.10 How resolv.conf is built can change, depending on the operating system, what optional components you got, other configuration entries, boot sequence... In your case, on Ubuntu, you're running the systemd programs that configure this file to point to your local systemd-resolved and that will know how to talk to the real DNS servers. On my primary servers, which have static IP addresses and no systemd-resolved, I manually edit this file. Finally mdns4 tells the routines to try asking avahi-daemon if it knows the name. You can change the rules. eg if /etc/nsswitch.conf just said: hosts: files then only the local /etc/hosts file is used. Other entries are possible; eg ldap would make it do an LDAP lookup.
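The resolution order can be read straight out of nsswitch.conf; here is a sketch of that parsing (the sample content is hypothetical, mirroring the line shown above):

```python
def hosts_sources(nsswitch_text: str):
    """Return the lookup sources on the hosts: line, in order.

    Bracketed action specifiers like [NOTFOUND=return] are skipped,
    since they modify behaviour rather than name a source.
    """
    for line in nsswitch_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if line.startswith("hosts:"):
            fields = line[len("hosts:"):].split()
            return [f for f in fields if not f.startswith("[")]
    return []

sample = "passwd: files\nhosts: files mdns4_minimal [NOTFOUND=return] dns mdns4\n"
```

Running hosts_sources over the real /etc/hosts configuration file contents gives you the order in which files, dns and mdns would be consulted for a name like foobar.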
What is the difference between resolvconf, systemd-resolve, and avahi?
1,380,179,815,000
What I would like is to use avahi-daemon to multicast more than one name, so that I could connect to it with domainA.local and domainB.local. I could then reroute these addresses to the different web interfaces of different applications with nginx. Is it possible to configure avahi-daemon in such a way that it would multicast multiple names? P.S. Using avahi-daemon is not a requirement. If there is another program that has this functionality I would gladly switch. Research and results So as suggested by gollum, I tried avahi-aliases first. It is in the repositories, but it did not appear to install correctly on my system. According to the instructions it should have installed a script in /etc/init.d/, but there was none. I then gave the other link that gollum suggested a try and this worked straight away. It does depend on python-avahi and is just an example of a python script that needs to run in the background. I am now able to broadcast domainA.local, domainB.local and domainC.local and, in combination with nginx, that leads to different web interfaces on the machine, all accessible on port 80. Update After some more fiddling with the two, I also discovered that avahi-aliases can only broadcast subdomains. So if your computer name were elvispc then avahi-aliases can only broadcast subdomainA.elvispc.local and subdomainB.elvispc.local, whereas the python script will broadcast any name.
A cumbersome solution would be running several instances of the following command in background: avahi-publish -a -R whatever.local 192.168.123.1 A better solution is probably publishing cnames using python-avahi. See e.g. https://github.com/airtonix/avahi-aliases or http://www.avahi.org/wiki/Examples/PythonPublishAlias Update: The avahi wiki seems to be gone. Here is the archived page of the link I've posted: https://web.archive.org/web/20151016190620/http://www.avahi.org:80/wiki/Examples/PythonPublishAlias
Multicasting multiple mdns names
1,380,179,815,000
I am running Kali 2.0 64-bit, and I recently noticed that avahi-daemon is starting at boot time, listening on several UDP ports. How do I disable it completely, without purging the package itself? I have tried sudo rcconf --off avahi-daemon But there is a warning: Service 'avahi-daemon' is already off. Skipping... I then tried sudo update-rc.d -f avahi-daemon remove It doesn't produce any errors, nor warnings, but avahi-daemon still persists at boot time. I then tried editing the /etc/default/avahi-daemon file by adding AVAHI_DAEMON_START = 0 But that doesn't work either. I finally used the upstart manual override: echo manual | sudo tee /etc/init/avahi-daemon.override And still no go. Please help, I am at my wits' end! Thank you.
sudo systemctl disable avahi-daemon to disable boot time startup. A few other options are systemctl list-units for a list of all known units, systemctl enable to enable boot time startup, systemctl start to start the service from terminal, but not enable boot time loading and systemctl stop to stop a service which has been started. man systemctl and man systemd will provide complete set of options. Most (not all though) modern Linux distributions have switched or are switching to systemd from the traditional SysV init scripts. Also, http://blog.jorgenschaefer.de/2014/07/why-systemd.html covers some of the basics of systemd.
How to disable avahi-daemon without uninstalling it
1,380,179,815,000
I have a Ubuntu 16.04 based HTPC/Media Server that's running 24/7. As far as I can remember using an official Ubuntu distro, I've always had issues with the avahi-daemon. The issue is pretty often discussed online. Some people decide to just delete daemon, however, I actually need it as I'm running a CUPS server and use Kodi as my AirPlay reciever. The issue mDNS/DNS-SD is inherently incompatible with unicast DNS zones .local. We strongly recommend not to use Avahi or ​nss-mdns in such a network setup. N.B.: nss-mdns is not typically bundled with Avahi and requires a separate download and install. (avahi.org) The symptoms are simple - after around 2-4 days of uptime the network connection will go down and this will be logged Mar 17 18:33:27 15 avahi-daemon[1014]: Withdrawing address record for 192.168.1.200 on enp3s0. Mar 17 18:33:27 15 avahi-daemon[1014]: Leaving mDNS multicast group on interface enp3s0.IPv4 with address 192.168.1.200. Mar 17 18:33:27 15 avahi-daemon[1014]: Interface enp3s0.IPv4 no longer relevant for mDNS. The network will go back up without issues if you physically reconnect the Ethernet plug, or if you reconnect software-side. Possible solutions There are three solutions listed on the official wiki, which has been non-functional since what appears to be June 2016, so I'm providing a non-direct archive.org link 1.) Edit /etc/nsswitch.conf from "hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4" to hosts: files dns mdns4 2.) Modify /etc/avahi/avahi-daemon.conf from domain-name=.local to domain-name=alocal 3.) 
"Ask the administrator to move the .local zone" (as said on the wiki) What I did The first solution did not appear to work for me - the daemon still works, however, the network will go down the same way as before (to be fair, on the wiki it does say "Your Mileage May Vary") The second solution causes the daemon to seemingly function properly (nothing wrong if you look at the logs) but the iOS devices fail to "see" the machine as a printer or an AirPlay receiver (as well as iTunes on my Windows machine) The third solution is tricky, because I'm not well versed in the "ins and outs" of how a network functions, and I'm not sure I actually tried it. Here's what I mean: on my Asus router running Asuswrt-Merlin I went into a settings subcategory /LAN/DHCP Server/Basic Config. There I set "RT-AC68U's Domain Name" as "lan" (a domain name I saw advised on the web, because it doesn't conflict with anything, unlike "local"). As far as I can understand, that's what "moving the .local zone" means. If this is in fact correct, then this solution does not work for me either. Conclusion So, what should I do? I've been battling with this problem for over 4 months now, and every answer online comes down to those I've already tried; frankly, I'm completely lost. Thanks in advance!
So I tried to change the "host-name" parameter in "avahi-daemon.conf" to something that's not the machine's hostname, and I've been running for 2 weeks without any issues. Maybe this had to do with the machine also running Samba, and Windows using the ".local" domain for its own purposes?
avahi-daemon and ".local" domain issues
1,380,179,815,000
So far, I have avahi-daemon running on all my Ubuntu machines, partly because it is installed by default. The router I used to have was quite dumb and did not really do anything except DHCP and DHCPv6. I could access the other Linux computers with hostname.local which worked fine for my purposes. Now I have an AVM FRITZ!Box 7360 which also does some more regarding hostnames as I can access the Linux machines with hostname.fritz.box in my local network as well. For some reason, I also can do the following now (Linux → Windows): $ ping martin-pavilion.local PING martin-pavilion.local (192.168.188.28) 56(84) bytes of data. 64 bytes from Martin-Pavilion.fritz.box (192.168.188.28): icmp_seq=1 ttl=128 time=0.633 ms The martin-pavilion is running Windows 8. I do not think that it was accessible via .local previously, and the FRITZ!Box seems to translate the .local into the .fritz.box. What is happening here? I somewhat got that Zeroconf/Avahi/Bonjour managed to let every computer know about every other one. Does the FRITZ!Box do the same or is this different? My /etc/resolv.conf is: # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 127.0.1.1 search fritz.box
The FritzBox home router is using DHCP requests to update the FritzBox's DNS forwarding. Specifically: if there is a hostname option provided in the DHCP request then a hostname.fritz.box DNS record is provided by the FritzBox's DNS forwarding. This is distinct from mDNS's .local domain. The FritzBox is not a mDNS proxy server.
`.fritz.box` and `.local` hostnames in the same network: Which do I really need?
1,380,179,815,000
I just set up my new Pi2 with Raspbian. All works well, I installed avahi, so that I can reach the Pi via raspberrypi.local. However, the Pi does not find my MacBook, which is usually resolvable via mymacbook.local. For example, this is what I get when pinging: raspberrypi $ ping mymacbook.local ping: unknown host mymacbook.local The other way around works fine. What do I need to do, to make Raspbian search the .local domain? The Pi is connected via WiFi (wpa_supplicant), using DHCP.
What you are trying to do is to add multicast DNS to the name searching on Raspbian. Install the package libnss-mdns (i.e.: sudo apt-get install libnss-mdns). This will pull in the Avahi packages to implement multicast DNS (which is used for name resolution for ".local" domains). After installation ensure that /etc/nsswitch.conf has the line: hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 Edit: when going from mac-->raspi, to ensure that the Mac can log into your Raspberry Pi, install the package avahi-daemon and add a file /etc/avahi/services/ssh.service containing <?xml version="1.0" standalone='no'?><!--*-nxml-*--> <!DOCTYPE service-group SYSTEM "avahi-service.dtd"> <service-group> <name replace-wildcards="yes">%h</name> <service> <type>_ssh._tcp</type> <port>22</port> </service> </service-group> Note that the Raspberry Pi ships with IPv6 turned off. If the other host does not implement IPv4 link-local addresses then you may need to turn on IPv6 on the Raspberry Pi to have an IP protocol in common between the two machines. You can turn on IPv6 on the RasPi by deleting /etc/modprobe.d/ipv6.conf and rebooting.
How to search .local?
1,380,179,815,000
I have avahi-daemon running on a Debian 9.1 server; however, avahi-browse -a does not display any services in my home network, consisting of a single 192.168.178.0/24 network. I can access all clients (tested with ping and, where applicable, ssh) and server# tcpdump port 5353 gives quite a bit of output from my clients, e.g., 15:30:07.206879 IP Client-OSX.fritz.box.mdns > 224.0.0.251.mdns: 0 [20a] [9q] PTR (QM)? _services._dns-sd._udp.local. PTR (QM)? _http._tcp.local. PTR (QM)? _ipp._tcp.local. PTR (QM)? _pdl-datastream._tcp.local. PTR (QM)? _printer._tcp.local. PTR (QM)? _scanner._tcp.local. PTR (QM)? _privet._tcp.local. PTR (QM)? _http-alt._tcp.local. PTR (QM)? _ssh._tcp.local. (847) However, my clients do not see my server nor the other way around, but the clients see each other's services, e.g., client1# avahi-browse -a + enp0s25 IPv6 client2 SSH Remote Terminal local + enp0s25 IPv4 my-printer _privet._tcp local ... /etc/avahi/avahi-daemon.conf: [server] host-name=alexandria #domain-name=local browse-domains=fritz.box use-ipv4=yes use-ipv6=yes allow-interfaces=eno1 eno2 # deny-interfaces=eth1 # check-response-ttl=no # use-iff-running=no enable-dbus=yes # disallow-other-stacks=no allow-point-to-point=yes # cache-entries-max=4096 # clients-max=4096 # objects-per-client-max=1024 # entries-per-entry-group-max=32 ratelimit-interval-usec=1000000 ratelimit-burst=1000 [wide-area] enable-wide-area=yes [publish] disable-publishing=no #disable-user-service-publishing=no #add-service-cookie=no publish-addresses=yes publish-hinfo=no publish-workstation=no publish-domain=yes #publish-dns-servers=192.168.50.1, 192.168.50.2 publish-resolv-conf-dns-servers=yes publish-aaaa-on-ipv4=yes #publish-a-on-ipv6=no [reflector] enable-reflector=yes #reflect-ipv=no [rlimits] #rlimit-as= rlimit-core=0 rlimit-data=4194304 rlimit-fsize=0 rlimit-nofile=768 rlimit-stack=4194304 rlimit-nproc=3 Clients run Kubuntu 16.04 and macOS. What am I missing?
avahi-browse requires the libnss-mdns package, and /etc/nsswitch.conf must have the two entries mdns4 and mdns appended to its hosts: line, i.e. hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 mdns I also added multicast to the interface by first installing net-tools to get ifconfig and then running server$ ifconfig eth0 allmulti
avahi-browse -a does not show any results
1,380,179,815,000
I work in a large office building with hundreds of other computers on the same LAN. I don't have any reason to communicate with most of these computers, and when I do, it's always on an "opt-in" basis (like adding a network mount to my fstab). But Linux Mint is automatically adding printers throughout the building, and the "Network" sidebar in my file manager is filled with computers that belong to people I don't know. Finally, /var/log/syslog is filled with entries like the following, that make it difficult to find issues of real importance: org.gtk.vfs.Daemon[2500]: ** (process:6388): WARNING **: Failed to resolve service name 'XXX': Too many objects avahi-daemon[872]: dbus-protocol.c: Too many objects for client ':1.65', client request failed. I would like to disable this automatic discovery of services, especially printers and network shares. I also would like to ensure that my computer is not automatically broadcasting any information about itself to the rest of the LAN. What steps should I take to do this? Is it sufficient to disable avahi-daemon?
Stop the CUPS service (embodied by a process called cupsd), for example sudo service cups stop Open /etc/cups/cupsd.conf in your favorite editor, for example sudo vim /etc/cups/cupsd.conf Look if there is a line in this file saying Browsing Yes and change this line to Browsing No This should disable the sharing of your own print queues installed locally with the other computers in the same network. (I'm simply assuming you do not want this, given that you also do not want to 'see' other printers shared by other computers...) Likewise, make sure that file has the following lines: BrowseLocalProtocols none BrowseDNSSDSubTypes none DefaultShared No The first two should disable the automatic addition of printers shared on the network. Now start the CUPS service again, for example sudo service cups start
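The manual edit above can also be scripted. This is a sketch demonstrating the substitution on a sample line rather than the real file; on an actual system you would run sed -i on /etc/cups/cupsd.conf as root (and likewise for the Browse* directives):

```shell
# Demonstrate the "Browsing Yes" -> "Browsing No" edit on a sample line.
# Real system equivalent (assumption, run as root):
#   sed -i 's/^Browsing Yes$/Browsing No/' /etc/cups/cupsd.conf
printf 'Browsing Yes\n' | sed 's/^Browsing Yes$/Browsing No/'
```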
How can I limit engagement with a large office LAN?
1,380,179,815,000
How can a computer running avahi ascertain and display its OWN hostname in the event that it is dynamically changed to foo-2, foo-3, etc. due to hostname conflicts with other devices on the network? When two computers (both with hostname = foo) running avahi-daemon are on the same network, as expected they can be accessed via ssh as foo.local and foo-2.local. E.g., the hostname collision is being handled correctly by avahi. However, on both machines the command hostname returns foo, so that is not dynamically updated when avahi does its hostname renaming. What command will show the correct (dynamic) hostname to access a computer? These are mobile devices, and I want to display "my hostname is XXXX.local" on each device so that when more than one device is present the user knows WHICH hostname to enter to reach "their" device.
Run: avahi-resolve -a <IP> | cut -f 2 This will return a list of hostnames (one per-line) registered on mDNS for the IP address you passed in. If you pass in your own local IP, it will return what you have registered. Under normal circumstances, it should return exactly one line with your local host name (or whatever incremental hostname if there were collisions). If you remove the cut command at the end, you can just parse the lines yourself in your own code by splitting at the first tab character and taking the second part of each line. Also, there might be some call you can make on DBus to get this info, but if there is, I've not found any info about it.
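To illustrate the parsing the answer describes, here is the tab-split applied to a sample avahi-resolve-style line (the IP and hostname are made-up values; avahi-resolve prints "<address>\t<hostname>"):

```shell
# cut -f 2 splits on the tab and keeps the hostname field
printf '192.168.1.42\tfoo-2.local\n' | cut -f 2
```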
when using avahi, how can a host know if its name is hostname.local or hostname-2.local
1,380,179,815,000
From time to time, Avahi takes 100% CPU, and cups-browsed delays the system shutdown with the message "a stop job is running for make remote cups printers available locally". I know Avahi finds printers, as Xfce shows me 16 printers. I also know I don't need 16 printers. On the other hand, I'm not sure what cups-browsed is doing. I do use a network printer, but I know its IP and model. With that in mind, can I just replace Avahi and cups-browsed with something simpler instead of fixing them?
Given your use case, you don’t need either package, and you don’t need to replace them with anything — CUPS on its own will be able to provide access to your network printer. cups-browsed is the CUPS component which finds printers on your network, by interpreting Bonjour broadcasts. Since you don’t need to automatically find printers, it’s safe to remove it; you can add the printer you need manually using whichever CUPS printer configuration tool you want. Nothing in Debian strictly depends on cups-browsed so you can easily remove it. (As a side note, cups-browsed is the component which provides access to driver-less printers, if you enable CreateIPPPrinterQueues in /etc/cups/cups-browsed.conf; it will then automatically create printer queues for IPP printers on the network. It turns out printing can be simple, in some cases.) avahi-daemon is a Bonjour server; it broadcasts your computer’s information on the network, and allows other applications to publish and resolve Bonjour information. Since you’re using Xfce you should be able to remove it too, if you don’t need its services. (GNOME on Debian depends on avahi-daemon but Xfce doesn’t.) The Bonjour client for most applications is provided by libavahi-client3, and you won’t be able to easily remove that because many packages depend on it, including CUPS.
Is it advisable to remove Avahi and cups-browsed?
1,380,179,815,000
At the startup I work at, we're setting up a device for image capture and analysis. It's a box with a camera, with Ubuntu Linux embedded, and, let's assume, we don't want to connect a monitor to this device for configuration. Some guys came up with the solution of having a configuration webpage when connecting the device directly to a notebook through a network cable, just like you do with a router or modem, by accessing a well-known IP. It sounds like a solution, but the fact is that the device is not a router, and as I see it, it's quite a different context: the device won't be delegating an address to the notebook (making it part of a router's network, where it can have a well-known address) since it's not a router. So I'm now looking for a solution that resembles the experience of configuring a router, but for something that's not a router: a device that I should be able to access at a well-known address. For that, I've dug a bit into zeroconf/APIPA, but per the zeroconf RFC 3927 the IP address must be one generated "using a pseudo-random number generator with a uniform distribution in the range from 169.254.1.0 to 169.254.254.255 inclusive". I think a random IP solution may still work, even though it's not a well-known address, in case there's some means of discovering which IP this device has got. Besides this, the device should be using NetworkManager to handle connectivity through its many interfaces; say one interface's connectivity goes down, it would then choose another interface. We were thinking about having an eth0 alias, so that eth0 is both handled by NetworkManager (together with the other interfaces) and offers fixed-IP access through an alias not managed by NetworkManager. Not sure whether that's even possible.
It's all about device discovery. I've also proposed using nmap to reach the device, but that has two drawbacks: scanning is slow on large networks, and it's not a simple webpage access; a client using nmap must be built and used to do the discovery. If there's no means of simple access at a well-known IP, having a random one is also a solution, given that the device can then be discovered like a printer on the network or something of the sort. It may be assumed that the solution can be one to configure a device directly connected to a notebook through a network cable, acquiring access to the device's configuration webpage, as well as one where the notebook is connected to the same local network as the device and is able to access the device by discovering it on the network or reaching it through an alternative, exotic, fixed address. Notice that accessing the local network router or using nmap/arp scan is not an option. What matter should be studied to address this problem? Is there a common approach people use for this? In my experience I recall configuring my devices, but none fits the problem: Router: Provides an easy-to-access configuration webpage at a well-known address, but it's the router; it's the gateway and it will be delegating my own address. Cubox-i: I have one of these devices; I had to discover it using nmap on my network and access its ssh. Printers: I have never owned one, so I don't know how their device discovery/configuration works, but I have used them on networks before; they were generally listed in the device settings on a Windows machine. I still have to take a look at "Avahi", "UPnP", "Zeroconf" and other names in the field which I have never worked with. Maybe this is the kind of example that may fit the situation. If there's a simple tool I can run on my Arch Linux machine and have its IP discovered by other devices like my Android or my Windows notebook, I'd like to know.
I've also thought about broadcasting but I'm not sure this would be OK in all LANs, where broadcasting could be blocked or unreliable (unsure regarding this).
The best way to do this is with avahi which implements multicast-dns (this is what Apple calls Bonjour). I would disable Network Manager and go with configuring networking in /etc/network/interfaces. The interfaces file supports the ipv4ll method, which uses avahi-autoipd to configure an interface with an IPv4 Link-Layer address (169.254.0.0/16 family). Next, set up a service in avahi to ensure the host advertises itself via bonjour and add mDNS name resolution to /etc/nsswitch.conf. If the rest of your systems are configured to resolve mDNS names, it should all work like magic.
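A sketch of the interface part of this setup, assuming Debian-style /etc/network/interfaces with avahi-autoipd installed and an interface named eth0 (both are assumptions for illustration):

```
# /etc/network/interfaces (sketch): bring eth0 up with an IPv4
# link-local address via the ipv4ll method (backed by avahi-autoipd)
auto eth0
iface eth0 inet ipv4ll
```

The hosts: line in /etc/nsswitch.conf then needs mdns4_minimal [NOTFOUND=return] before dns (provided by libnss-mdns) so that name.local lookups resolve on the clients.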
How to have a device's IP discovered in a network in a simple manner?
1,380,179,815,000
I used to be able to ssh user@hostname.local between machines on my LAN, but it is no longer working. I can ssh using the IP of course, but it's DHCP so it may change from time to time. Both machines run Debian 9.12, one is a VM in a Windows host, but still, it DID work; I haven't fooled around with the config files, just regular updates. ping hostname.local ping: hostname.local: Name or service not known (it might not be exactly that message, as I translate from French) ssh hostname.local ssh: Could not resolve hostname hostname.local: Name or service not known (ssh outputs in English) From avahi.org: Avahi is a system which facilitates service discovery on a local network via the mDNS/DNS-SD protocol suite I've looked into /etc/resolv.conf, /etc/avahi/avahi-daemon.conf and /etc/nsswitch.conf, but it's the standard out-of-the-box config. /etc/resolv.conf (reset by network-manager each time it starts) # Generated by NetworkManager search lan nameserver xx.xx.xx.xx # DNS IPs obtained from DHCP nameserver xx.xx.xx.xx man resolv.conf says that the search list contains only the local domain name by default (something like that; I translated from the man page in French); shouldn't it be local instead of lan? I tried to change it and ping or ssh another host on my LAN right away (without restarting network-manager); it didn't work. And when I restart network-manager, it rewrites /etc/resolv.conf and sets search lan. /etc/nsswitch.conf (default, I haven't made any change) # /etc/nsswitch.conf # # Example configuration of GNU Name Service Switch functionality. # If you have the `glibc-doc-reference' and `info' packages installed, try: # `info libc "Name Service Switch"' for information about this file.
passwd: compat group: compat shadow: compat gshadow: files hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname networks: files protocols: db files services: db files ethers: db files rpc: db files netgroup: nis I've tried to discover hosts and services with avahi-browse and nbtscan, which rely on avahi (zeroconf / Bonjour), but they seem to find only the host on which they run. (I know this is a possible duplicate of other questions, but I didn't find any answer and I don't have enough reputation to do anything)
Found it! It seems that my router has a DNS server indeed: nslookup host_ip router_ip Server: 192.168.1.254 Address: 192.168.1.254#53 69.1.168.192.in-addr.arpa name = hostname.lan. So that answers the .local vs .lan question. In recent Debian, the local domain is .lan. Still, ping hostname.lan returned unknown host. Thanks to https://askubuntu.com/questions/623940/network-manager-how-to-stop-nm-updating-etc-resolv-conf, I found out that /etc/resolv.conf is a symlink to /var/run/NetworkManager/resolv.conf; so I had to replace it with my own resolv.conf: search lan nameserver 192.168.1.254 so that it uses the router's DNS (which will route the queries if necessary). Restarting network-manager systemctl restart network-manager and it works like a charm: $ ping hostname.lan PING hostname.lan (192.168.1.69) 56(84) bytes of data. 64 bytes from hostname.lan (192.168.1.69): icmp_seq=1 ttl=64 time=2.02 ms (ping google.fr to make sure WAN queries are processed)
Can't resolve hostname.local on LAN
1,380,179,815,000
I have a Debian server on my local network for home media center/NAS purposes. It is running multiple services, such as Plex or Ajenti, which I can access like so: http://debian.local:32400/web for Plex https://debian.local:8000/ for Ajenti However, I would like to access these services like so: http://plex.local for Plex https://ajenti.local for Ajenti Is it possible to configure this via avahi alone, or what other simple solution would you suggest?
So the way to do the thing you're directly asking is to add those hosts as aliases to /etc/avahi/hosts (which reads like /etc/hosts) so that those work and are advertised over zeroconf/avahi . The second thing would be to install a reverse proxy (using Apache or Nginx or something) to forward requests for those hosts to the right service. No idea how well plex will work behind a reverse proxy, though.
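A sketch of the first piece: /etc/avahi/hosts takes address/hostname pairs just like /etc/hosts, so the extra .local names can be advertised like this (192.168.1.10 stands in for the server's actual LAN address):

```
# /etc/avahi/hosts (sketch): extra .local aliases advertised for this machine
192.168.1.10 plex.local
192.168.1.10 ajenti.local
```

A reverse proxy then maps each name to the right backend, e.g. an nginx server block with server_name plex.local proxying to http://127.0.0.1:32400.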
Advertising local HTTP services via Avahi/Zeroconf on Debian
1,380,179,815,000
How can I test software that uses avahi while my laptop is disconnected from any router? All of the services run on the same machine, so Avahi would be advertising an IP address of 127.0.0.1 for all of the services. As an example, I am using a file at /etc/avahi/services/postgresql.service to register a database: $ cat /etc/avahi/services/postgresql.service <?xml version="1.0" standalone="no"?><!--*-nxml-*--> <!DOCTYPE service-group SYSTEM "avahi-service.dtd"> <service-group> <name>my_database_name</name> <service> <type>_postgresql._tcp</type> <port>5432</port> </service> </service-group> When I am connected to the router, avahi-browse shows the service: $ avahi-browse -a | grep my_database_name + wlan0 IPv6 my_database_name PostgreSQL Server local + wlan0 IPv4 my_database_name PostgreSQL Server local When I am disconnected from the router, avahi-browse no longer shows any services and my software cannot find the database: $ avahi-browse -a Here is my interface information when I am disconnected: $ ifconfig eth0 Link encap:Ethernet HWaddr f0:de:f1:ac:a6:37 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:20 Memory:f3900000-f3920000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:3637633 errors:0 dropped:0 overruns:0 frame:0 TX packets:3637633 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1242209987 (1.2 GB) TX bytes:1242209987 (1.2 GB) I am running Ubuntu 14.04 but may need to do this on other distributions as well. Update: One of my (busy) friends sent me the following, which I have not been able to figure out yet: Avahi, by nature will not properly bind to a loop interface. 
You can either turn up a dummy interface (best solution) or turn up a non-routable number on a physical interface. (works, but can be problematic if you are using a transient connection.)
Avahi requires that the interface have the MULTICAST flag set. That is, ifconfig dummy0 multicast Once the MULTICAST flag is set, avahi will automatically advertise services on that interface, no need to restart or otherwise mess with avahi configuration unless the interface is disallowed in the avahi configuration.
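The ifconfig step above can be done with iproute2 instead; this is a sketch that requires root, and the interface name dummy0 and the 169.254.10.1 address are arbitrary choices:

```shell
# Create a multicast-capable dummy interface for avahi to bind to (run as root)
ip link add dummy0 type dummy     # auto-loads the dummy module
ip link set dummy0 multicast on   # the flag avahi requires
ip addr add 169.254.10.1/16 dev dummy0
ip link set dummy0 up
```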
How can I use avahi without a network connection?
1,380,179,815,000
I want to use avahi on a system with a read-only rootfs where /etc is not writable. I can start avahi-daemon with the -f option to specify a non-standard location for the avahi-daemon.conf file (the default location is /etc/avahi/avahi-daemon.conf). However, I can't find any way to specify a non-standard location for the service definitions (default location /etc/avahi/services). Is there any option for this?
It seems that Avahi does not provide any configuration option for searching for service definitions in a non-standard location (other than rebuilding with a custom --prefix, but that obviously has other implications). In case someone else needs this, these are the options I've found: Symlink /etc/avahi/services to a different directory. For this to work, the avahi daemon must be started with the --no-chroot option; otherwise it will not be able to reach out of the chroot jail. Bind mount a different directory on /etc/avahi/services. This does not require that --no-chroot is used. Both options work fine.
Using a non-standard location for avahi services
1,380,179,815,000
I am using Ubuntu 20.04. I have tried to stop and disable the avahi-daemon as below: $ sudo systemctl stop avahi-daemon.service $ sudo systemctl disable --now avahi-daemon.socket $ sudo systemctl disable --now avahi-daemon.service However, the service restarts after I reboot. I would also like to stop the cups service from starting at boot, and tried the above, but without luck! How do I disable a service from starting at boot in Ubuntu 20.04?
systemctl disable works most of the time but to be sure, use systemctl mask. What this does is to link the unit files to /dev/null which effectively causes any start operation or command to fail. You can see more on this in the man page.
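Applied to the two services in the question, that might look like the following sketch (requires root; the exact cups unit names may vary slightly by release):

```shell
sudo systemctl mask avahi-daemon.service avahi-daemon.socket
sudo systemctl mask cups.service cups.socket cups.path
# To reverse it later:
#   sudo systemctl unmask avahi-daemon.service avahi-daemon.socket \
#     cups.service cups.socket cups.path
```

Masking the .socket and .path units matters because socket/path activation can otherwise restart a merely disabled service.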
Ubuntu 20.04: Permanently disable a service start at boot
1,380,179,815,000
$ systemctl status avahi-daemon ● avahi-daemon.service - Avahi mDNS/DNS-SD Stack Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2018-05-08 08:01:52 BST; 8h ago Main PID: 817 (avahi-daemon) Status: "avahi-daemon 0.7 starting up." Tasks: 2 (limit: 4915) Memory: 1.8M CGroup: /system.slice/avahi-daemon.service ├─817 avahi-daemon: running [alan-laptop.local] └─852 avahi-daemon: chroot helper $ rpm -q avahi avahi-0.7-12.fc28.x86_64
The avahi-daemon source code includes several more status messages, e.g. sd_notifyf(0, "STATUS=Server startup complete. Host name is %s. Local service cookie is %u.", avahi_server_get_host_name_fqdn(s), avahi_server_get_local_service_cookie(s)); However, by default avahi-daemon enters a secure chroot during startup. This means it can't access the systemd notification socket /run/systemd/notify. It has blocked itself from sending status messages to systemd. Oops. # ls -l /proc/817/root lrwxrwxrwx. 1 root root 0 May 8 16:27 /proc/817/root -> /etc/avahi # ls -l /proc/817/root/ -rw-r--r--. 1 root root 1753 Jul 10 2017 avahi-daemon.conf drwxr-xr-x. 2 root root 4096 Apr 6 16:48 etc -rw-r--r--. 1 root root 1121 Jul 10 2017 hosts drwxr-xr-x. 2 root root 4096 Apr 6 16:48 services
Why is the status message for avahi-daemon.service permanently "starting up"?
1,380,179,815,000
I've installed the KDE desktop, which depends on avahi. It has two daemons, avahi-daemon and avahi-dnsconfd. The Arch Linux wiki has no info about avahi-dnsconfd; I've tried the Daemon and Avahi pages.
Here's the description from debian's avahi-dnsconfd package: Package: avahi-dnsconfd Description-en: Avahi DNS configuration tool Avahi is a fully LGPL framework for Multicast DNS Service Discovery. It allows programs to publish and discover services and hosts running on a local network with no specific configuration. For example you can plug into a network and instantly find printers to print to, files to look at and people to talk to. . This tool listens on the network for announced DNS servers and passes them to resolvconf so it can use them. This is very useful on autoconfigured IPv6 networks. Homepage: http://avahi.org/ More info should be available at the Avahi home page.
What does the `avahi-dnsconfd` daemon do?
1,380,179,815,000
Suppose I have dynamically published a DNS-SD service using avahi-publish, as in avahi-publish -s "My service" _myservic._tcp 1234 How can I un-publish it without restarting the Avahi daemon? I want to make a service discoverable via Avahi/Bonjour, but stop advertising it if I shut the service down. I don't want to restart the whole Avahi daemon, as I may be advertising other dynamic services that I don't want to stop.
It appears I should have experimented more before asking the question. When you publish a service with avahi-publish, the process continues running in the foreground (the man pages don't mention this!) To un-publish, you terminate the avahi-publish process. Previously, I had been using a static configuration with services files; I hadn't wanted to move to using dynamic configuration before I knew how it was supposed to work.
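In a script, that might look like the following sketch (the service name and type are the question's example values):

```shell
# Publish in the background and remember the PID; killing the process
# withdraws the advertisement without restarting the avahi daemon.
avahi-publish -s "My service" _myservic._tcp 1234 &
pub_pid=$!
# ... while the service runs ...
kill "$pub_pid"
```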
How to un-publish a DNS-SD service?
1,380,179,815,000
I'm trying to connect to several computers via their hostnames, since they get their IPs via DHCP. I can successfully ping the machines via ping host-01.local. ping, wget, avahi-resolve and even Firefox all send out the required mDNS packets, which I checked through Wireshark (UDP on port 5353). However, ssh doesn't seem to try to resolve the addresses at all. There are no mDNS queries issued, and the output for ssh host-01.local just says: ssh: Could not resolve hostname host-01.local: Device or resource busy For ssh -vvv host-01.local it is: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: resolving "host-01.local" port 22 ssh: Could not resolve hostname host-01.local: Device or resource busy The client is running Linux Mint. The network I'm in doesn't seem to matter. Everything else related to avahi seems to work just fine.
For some reason I had the 32-bit version of ssh on my system. Installing the 64-bit version seems to have solved all of my problems. After looking through the strace of the command, I noticed ssh had failed trying to load a bunch of libraries. This baffled me at first, because most of these libraries are installed on my system, until I noticed some of the paths, which included i386-linux-gnu. That's when I realized I must have the wrong package installed.
SSH is unable to resolve local domain names
1,380,179,815,000
systemd-networkd: Is it possible to configure a default link-local address, which will be tried first? Something like the '--start=' parameter of avahi-autoipd.
As of systemd v252-stable you can do IPv4LLStartAddress=169.254.1.1 Something like [Match] Name=en* eth* KernelCommandLine=!nfsroot [Network] DHCP=yes LinkLocalAddressing=fallback IPv4LLStartAddress=169.254.1.1 [DHCP] RouteMetric=10 ClientIdentifier=mac [DHCPv4] MaxAttempts=3 Would allow for a network interface that gets DHCP but will use a link local address starting at 169.254.1.1 if DHCP is not available. It will figure out a new address if conflicts occur.
systemd-networkd: how to set a default link-local address?
1,361,291,930,000
The man pages for badblocks do not seem to mention what the three numbers in the output mean in particular: Pass completed, 7 bad blocks found (7/0/0 errors) Pass completed, 120 bad blocks found (0/0/120 errors) I'm guessing it's "Errors while reading/writing/comparing". Can someone enlighten me?
Your guess is correct. The source code looks like this: if (v_flag) fprintf(stderr, _("Pass completed, %u bad blocks found. (%d/%d/%d errors)\n"), bb_count, num_read_errors, num_write_errors, num_corruption_errors); So it's read/write/corruption errors. And a corruption error means a failed comparison with previously written data: if (t_flag) { /* test the comparison between all the blocks successfully read */ int i; for (i = 0; i < got; ++i) if (memcmp (blkbuf+i*block_size, blkbuf+blocks_at_once*block_size, block_size)) bb_count += bb_output(currently_testing + i, CORRUPTION_ERROR); }
How to interpret badblocks output
1,361,291,930,000
I need to do a destructive (rw) test on a new drive, and a read-only test on a drive that fell out of my RAID array. I want to see whether it finds problems and how far along it is.
Let /dev/sda be the new drive on which to run the destructive read-write test and /dev/sdb the old drive where you want the non-destructive read-only test: # badblocks -wsv /dev/sda # badblocks -sv /dev/sdb -s shows a progress indicator -v gives verbose output -w enables the destructive read-write test -n would be the non-destructive read-write test Read-only testing is the default and doesn't need special parameters.
How do you use badblocks?
1,361,291,930,000
I'm running badblocks to check for bad segments on an external drive; it's been about an hour and it has not yet finished. Now I need to go, and am considering cancelling it. Is this risky? Should I avoid it? Clearly I will need to start again from scratch; I just want to know whether aborting midway carries any risk.
From examining the source code, I find that: If you didn't specify -n or -w, badblocks doesn't write to the disk at all, so you're safe interrupting it. If you specified -w, badblocks has already overwritten the filesystem, so it's much too late to worry about interrupting the process. If you specified -n, badblocks uses a signal handler to prevent the program from exiting with the disk in an inconsistent state, so it is safe to press ctrl-c.
Is interrupting badblocks risky?
1,361,291,930,000
The tl;dr: how would I go about fixing a bad block on 1 disk in a RAID1 array? But please read this whole thing for what I've tried already and possible errors in my methods. I've tried to be as detailed as possible, and I'm really hoping for some feedback. This is my situation: I have two 2TB disks (same model) set up in a RAID1 array managed by mdadm. About 6 months ago I noticed the first bad block when SMART reported it. Today I noticed more, and am now trying to fix it. This HOWTO page seems to be the one article everyone links to for fixing bad blocks that SMART is reporting. It's a great page, full of info; however, it is fairly outdated and doesn't address my particular setup. Here is how my config is different: Instead of one disk, I'm using two disks in a RAID1 array. One disk is reporting errors while the other is fine. The HOWTO is written with only one disk in mind, which brings up various questions, such as 'do I use this command on the disk device or the RAID device?' I'm using GPT, which fdisk does not support. I've been using gdisk instead, and I'm hoping that it is giving me the same info that I need. So, let's get down to it. This is what I have done; however, it doesn't seem to be working. Please feel free to double-check my calculations and method for errors. The disk reporting errors is /dev/sda: # smartctl -l selftest /dev/sda smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.4.4-2-ARCH] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed: read failure 90% 12169 3212761936 With this, we gather that the error resides on LBA 3212761936.
Following the HOWTO, I use gdisk to find the start sector to be used later in determining the block number (as I cannot use fdisk since it does not support GPT): # gdisk -l /dev/sda GPT fdisk (gdisk) version 0.8.5 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Disk /dev/sda: 3907029168 sectors, 1.8 TiB Logical sector size: 512 bytes Disk identifier (GUID): CFB87C67-1993-4517-8301-76E16BBEA901 Partition table holds up to 128 entries First usable sector is 34, last usable sector is 3907029134 Partitions will be aligned on 2048-sector boundaries Total free space is 2014 sectors (1007.0 KiB) Number Start (sector) End (sector) Size Code Name 1 2048 3907029134 1.8 TiB FD00 Linux RAID Using tune2fs I find the block size to be 4096. Using this info and the calculation from the HOWTO, I conclude that the block in question is ((3212761936 - 2048) * 512) / 4096 = 401594986. The HOWTO then directs me to debugfs to see if the block is in use (I use the RAID device as it needs an EXT filesystem; this was one of the commands that confused me, as I did not, at first, know whether I should use /dev/sda or /dev/md0): # debugfs debugfs 1.42.4 (12-June-2012) debugfs: open /dev/md0 debugfs: testb 401594986 Block 401594986 not in use So block 401594986 is empty space; I should be able to write over it without problems. Before writing to it, though, I try to make sure that it, indeed, cannot be read: # dd if=/dev/sda1 of=/dev/null bs=4096 count=1 seek=401594986 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.000198887 s, 20.6 MB/s If the block could not be read, I wouldn't expect this to work. However, it does. I repeat using /dev/sda, /dev/sda1, /dev/sdb, /dev/sdb1, /dev/md0, and +-5 to the block number to search around the bad block. It all works.
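The LBA-to-filesystem-block arithmetic above (sector size 512, filesystem block size 4096, partition start sector 2048 from gdisk) can be sanity-checked in the shell:

```shell
# (LBA - partition start sector) * sector size / filesystem block size
echo $(( (3212761936 - 2048) * 512 / 4096 ))
```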
I shrug my shoulders and go ahead and commit the write and sync (I use /dev/md0 because I figured modifying one disk and not the other might cause issues; this way both disks overwrite the bad block): # dd if=/dev/zero of=/dev/md0 bs=4096 count=1 seek=401594986 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.000142366 s, 28.8 MB/s # sync I would expect that writing to the bad block would have the disks reassign the block to a good one; however, running another SMART test shows otherwise: # 1 Short offline Completed: read failure 90% 12170 3212761936 Back to square 1. So basically, how would I fix a bad block on 1 disk in a RAID1 array? I'm sure I've not done something correctly... Thanks for your time and patience. EDIT 1: I've tried to run a long SMART test, with the same LBA returning as bad (the only difference is that it reports 30% remaining rather than 90%): SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed: read failure 30% 12180 3212761936 # 2 Short offline Completed: read failure 90% 12170 3212761936 I've also used badblocks with the following output. The output is strange and seems to be mis-formatted, but I tried to test the numbers output as blocks, and debugfs gives an error: # badblocks -sv /dev/sda Checking blocks 0 to 1953514583 Checking for bad blocks (read-only test): 1606380968ne, 3:57:08 elapsed. (0/0/0 errors) 1606380969ne, 3:57:39 elapsed. (1/0/0 errors) 1606380970ne, 3:58:11 elapsed. (2/0/0 errors) 1606380971ne, 3:58:43 elapsed. (3/0/0 errors) done Pass completed, 4 bad blocks found. (4/0/0 errors) # debugfs debugfs 1.42.4 (12-June-2012) debugfs: open /dev/md0 debugfs: testb 1606380968 Illegal block number passed to ext2fs_test_block_bitmap #1606380968 for block bitmap for /dev/md0 Block 1606380968 not in use Not sure where to go from here.
badblocks definitely found something, but I'm not sure what to do with the information presented...

EDIT 2: More commands and info. I feel like an idiot forgetting to include this originally. These are the SMART values for /dev/sda. I have 1 Current_Pending_Sector, and 0 Offline_Uncorrectable.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       166
  2 Throughput_Performance  0x0026   055   055   000    Old_age   Always       -       18345
  3 Spin_Up_Time            0x0023   084   068   025    Pre-fail  Always       -       5078
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       75
  5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   252   252   051    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0024   252   252   015    Old_age   Offline      -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       12224
 10 Spin_Retry_Count        0x0032   252   252   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   252   252   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       75
181 Program_Fail_Cnt_Total  0x0022   100   100   000    Old_age   Always       -       1646911
191 G-Sense_Error_Rate      0x0022   100   100   000    Old_age   Always       -       12
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0002   064   059   000    Old_age   Always       -       36 (Min/Max 22/41)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0030   252   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age   Always       -       30
223 Load_Retry_Count        0x0032   252   252   000    Old_age   Always       -       0
225 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       77

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu May  5 06:30:21 2011
     Raid Level : raid1
     Array Size : 1953512383 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953512383 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Jul  3 22:15:51 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : server:0  (local to host server)
           UUID : e7ebaefd:e05c9d6e:3b558391:9b131afb
         Events : 67889

    Number   Major   Minor   RaidDevice State
       2       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

As per one of the answers: it would seem I did switch seek and skip for dd. I was using seek as that's what is used in the HOWTO. Using this command causes dd to hang:

# dd if=/dev/sda1 of=/dev/null bs=4096 count=1 skip=401594986

Using blocks around that one (..84, ..85, ..87, ..88) seems to work just fine, and using /dev/sdb1 with block 401594986 reads just fine as well (as expected, as that disk passed SMART testing). Now, the question that I have is: when writing over this area to reassign the blocks, do I use /dev/sda1 or /dev/md0? I don't want to cause any issues with the RAID array by writing directly to one disk and not having the other disk update.

EDIT 3: Writing to the block directly produced filesystem errors. I've chosen an answer that solved the problem quickly:

# 1  Short offline       Completed without error       00%     14211         -
# 2  Extended offline    Completed: read failure       30%     12244         3212761936

Thanks to everyone who helped. =)
All these "poke the sector" answers are, quite frankly, insane. They risk (possibly hidden) filesystem corruption. If the data were already gone, because that disk stored the only copy, it'd be reasonable. But there is a perfectly good copy on the mirror. You just need to have mdraid scrub the mirror. It'll notice the bad sector, and rewrite it automatically. # echo 'check' > /sys/block/mdX/md/sync_action # use 'repair' instead for older kernels You need to put the right device in there (e.g., md0 instead of mdX). This will take a while, as it does the entire array by default. On a new enough kernel, you can write sector numbers to sync_min/sync_max first, to limit it to only a portion of the array. This is a safe operation. You can do it on all of your mdraid devices. In fact, you should do it on all your mdraid devices, regularly. Your distro likely ships with a cronjob to handle this, maybe you need to do something to enable it? Script for all RAID devices on the system A while back, I wrote this script to "repair" all RAID devices on the system. This was written for older kernel versions where only 'repair' would fix the bad sector; now just doing check is sufficient (repair still works fine on newer kernels, but it also re-copies/rebuilds parity, which isn't always what you want, especially on flash drives) #!/bin/bash save="$(tput sc)"; clear="$(tput rc)$(tput el)"; for sync in /sys/block/md*/md/sync_action; do md="$(echo "$sync" | cut -d/ -f4)" cmpl="/sys/block/$md/md/sync_completed" # check current state and get it repairing. read current < "$sync" case "$current" in idle) echo 'repair' > "$sync" true ;; repair) echo "WARNING: $md already repairing" ;; check) echo "WARNING: $md checking, aborting check and starting repair" echo 'idle' > "$sync" echo 'repair' > "$sync" ;; *) echo "ERROR: $md in unknown state $current. ABORT." 
exit 1 ;; esac echo -n "Repair $md...$save" >&2 read current < "$sync" while [ "$current" != "idle" ]; do read stat < "$cmpl" echo -n "$clear $stat" >&2 sleep 1 read current < "$sync" done echo "$clear done." >&2; done for dev in /dev/sd?; do echo "Starting offline data collection for $dev." smartctl -t offline "$dev" done If you want to do check instead of repair, then this (untested) first block should work: case "$current" in idle) echo 'check' > "$sync" true ;; repair|check) echo "NOTE: $md $current already in progress." ;; *) echo "ERROR: $md in unknown state $current. ABORT." exit 1 ;; esac
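On kernels that expose sync_min/sync_max, the scrub can be narrowed to a window around the suspect sector rather than covering the whole array. A sketch, with the sysfs writes left commented out; note that translating a disk LBA into an array offset depends on the partition start and the md data offset, so the numbers here are illustrative:

```shell
# Sketch: limit an mdraid check to a window around a suspect spot.
# Assumes a kernel exposing sync_min/sync_max, and that "$sector" is
# already an offset within the array (disk LBA minus partition start
# minus the md data offset; the exact translation depends on layout).
md=md0
sector=3212759888     # illustrative: 3212761936 minus 2048 partition start
margin=10240          # scrub roughly 5 MiB on either side

min=$(( sector - margin ))
if [ "$min" -lt 0 ]; then min=0; fi
max=$(( sector + margin ))
echo "window: $min-$max"
# echo "$min" > /sys/block/$md/md/sync_min
# echo "$max" > /sys/block/$md/md/sync_max
# echo check  > /sys/block/$md/md/sync_action
```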
Linux - Repairing bad blocks on a RAID1 array with GPT
1,361,291,930,000
When you're using ext4, you can check for badblocks with the command e2fsck -c /dev/sda1 # or whatever. This will "blacklist" the blocks by adding them to the bad block inode. What is the equivalent of this for an LVM2 physical volume? The filesystem on it is ext4, but presumably, the bad blocks that are detected will become invalid as the underlying LVM setup moves data around on the physical disk. In other words, how can I check for bad blocks to not use in LVM?
When you're using ext4, you can check for badblocks with the command e2fsck -c /dev/sda1 or whatever. This will "blacklist" the blocks by adding them to the bad block inode.

e2fsck -c runs badblocks on the underlying hard disk. You can use the badblocks command directly on an LVM physical volume (assuming that the PV is in fact a hard disk, and not some other kind of virtual device like an MD software RAID device), just as you would use that command on a hard disk that contains an ext file system. That won't add any kind of bad block information to the file system, but I don't really think that that's a useful feature of the file system; the hard disk is supposed to handle bad blocks.

Even better than badblocks is running a SMART selftest on the disk (replace /dev/sdX with the device name of your hard disk):

smartctl -t long /dev/sdX
smartctl -a /dev/sdX | less

The test itself will take a few hours (it will tell you exactly how long). When it's done, you can query the result with smartctl -a; look for the self-test log. If it says "Completed successfully", your hard disk is fine.

In other words, how can I check for bad blocks to not use in LVM?

As I said, the hard disk itself will ensure that it doesn't use damaged blocks and it will also relocate data from those blocks; that's not something that the file system or the LV has to do. On the other hand, when your hard disk has more than just a few bad blocks, you don't want something that relocates them; you want to replace the whole hard disk, because it is failing.
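Putting the pieces together, a minimal wrapper for scanning a PV read-only might look like this; the function name and the safety check are my own additions, not part of any standard tool:

```shell
# Sketch: a read-only badblocks pass over an LVM PV, with a basic
# safety check first. Read-only is badblocks' default mode, so no data
# is written.
scan_pv() {
    pv=$1
    if [ ! -b "$pv" ]; then
        echo "error: $pv is not a block device" >&2
        return 1
    fi
    badblocks -sv "$pv"    # -s show progress, -v verbose
}
# scan_pv /dev/sdb   # the PV's device, not the LV or the filesystem
```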
How can I check for bad blocks on an LVM physical volume?
1,361,291,930,000
I have a HDD which I don't entirely trust, but still want to use (burstcoin mining, where if I get a bad block in a file, I'll only lose a few cents). How can I tell btrfs to mark certain blocks as bad (eg from badblocks output)? If I can't pre-mark blocks as bad, will any bad blocks identified by btrfs scrub be avoided in future if the file using them is deleted?
Sadly, no. btrfs doesn't track bad blocks and btrfs scrub doesn't prevent the next file from hitting the same bad block(s). This btrfs mailing list post suggests to use ext4 with mkfs.ext4 -c (this "builds a bad blocks list and then won't use those sectors"). The suggestion to use btrfs over mdadm 3.1+ with RAID0 will not work. It seems that LVM doesn't support badblock reallocation. A work-around is to build a device excluding blocks known to be bad: btrfs over dmsetup. The btrfs Project Ideas wiki says: Not claimed — no patches yet — Not in kernel yet Currently btrfs doesn't keep track of bad blocks, disk blocks that are very likely to lose data written to them. Btrfs should accept a list in badblocks' output format, store it in a new btree (or maybe in the current extent tree, with a new flag), relocate whatever data the blocks contain, and reserve these blocks so they can't be used for future allocations. Additionally, scrub could be taught to test for bad blocks when a checksum error is found. This would make scrub much more useful; checksum errors are generally caused by the disk, but while scrub detects afflicted files, which in a backup scenario gives the opportunity to recreate them, the next file to reuse the bad blocks will just start getting errors instead. These two items would match an ext4 feature (used through e2fsck). Please comment if the status changes and I will update this answer.
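To make the dmsetup workaround concrete, here is a rough sketch of a linear device-mapper table that maps around one bad region; all sector numbers are invented, and a real table would be generated from your badblocks list:

```shell
# Rough illustration of the "btrfs over dmsetup" workaround: a linear
# dm table that skips one known-bad region. Sector counts are made up.
bad_start=1000000    # first 512-byte sector of the bad region
bad_len=8            # its length in sectors
dev_size=2000000     # usable size of the underlying device, in sectors
dev=/dev/sdX1        # placeholder underlying device

table="0 $bad_start linear $dev 0
$bad_start $(( dev_size - bad_start - bad_len )) linear $dev $(( bad_start + bad_len ))"
echo "$table"
# echo "$table" | dmsetup create nobad   # then mkfs.btrfs /dev/mapper/nobad
```

The mapped device ends up bad_len sectors smaller than the original, with the bad region simply absent from the address space btrfs sees.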
Can btrfs track / avoid bad blocks?
1,361,291,930,000
My Ubuntu 13.10 system has been performing very poorly over the last day or so. Looking at the kernel logs, it appears that the <1yr old 3TB SATA disk is having issues with a particular sector: Nov 4 20:54:04 mediaserver kernel: [10893.039180] ata4.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Nov 4 20:54:04 mediaserver kernel: [10893.039187] ata4.01: BMDMA stat 0x65 Nov 4 20:54:04 mediaserver kernel: [10893.039193] ata4.01: failed command: READ DMA EXT Nov 4 20:54:04 mediaserver kernel: [10893.039202] ata4.01: cmd 25/00:08:f8:3f:83/00:00:af:00:00/f0 tag 0 dma 4096 in Nov 4 20:54:04 mediaserver kernel: [10893.039202] res 51/40:00:f8:3f:83/40:00:af:00:00/10 Emask 0x9 (media error) Nov 4 20:54:04 mediaserver kernel: [10893.039207] ata4.01: status: { DRDY ERR } Nov 4 20:54:04 mediaserver kernel: [10893.039211] ata4.01: error: { UNC } Nov 4 20:54:04 mediaserver kernel: [10893.148527] ata4.00: configured for UDMA/133 Nov 4 20:54:04 mediaserver kernel: [10893.180322] ata4.01: configured for UDMA/133 Nov 4 20:54:04 mediaserver kernel: [10893.180345] sd 3:0:1:0: [sdc] Unhandled sense code Nov 4 20:54:04 mediaserver kernel: [10893.180349] sd 3:0:1:0: [sdc] Nov 4 20:54:04 mediaserver kernel: [10893.180353] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Nov 4 20:54:04 mediaserver kernel: [10893.180356] sd 3:0:1:0: [sdc] Nov 4 20:54:04 mediaserver kernel: [10893.180359] Sense Key : Medium Error [current] [descriptor] Nov 4 20:54:04 mediaserver kernel: [10893.180371] Descriptor sense data with sense descriptors (in hex): Nov 4 20:54:04 mediaserver kernel: [10893.180373] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 Nov 4 20:54:04 mediaserver kernel: [10893.180384] af 83 3f f8 Nov 4 20:54:04 mediaserver kernel: [10893.180389] sd 3:0:1:0: [sdc] Nov 4 20:54:04 mediaserver kernel: [10893.180393] Add. 
Sense: Unrecovered read error - auto reallocate failed Nov 4 20:54:04 mediaserver kernel: [10893.180396] sd 3:0:1:0: [sdc] CDB: Nov 4 20:54:04 mediaserver kernel: [10893.180398] Read(16): 88 00 00 00 00 00 af 83 3f f8 00 00 00 08 00 00 Nov 4 20:54:04 mediaserver kernel: [10893.180412] end_request: I/O error, dev sdc, sector 2944614392 Nov 4 20:54:04 mediaserver kernel: [10893.180431] ata4: EH complete The kern.log file is around 33MB mostly full of the above error repeated and the sector doesn't appear to be any different in the repeated messages. I'm currently running the following command on the now unmounted disk to test and attempt to sort out any issues the disk might have. I'm around 12hrs in and expect it to take another 24/48 hours as the disk is so large: e2fsck -c -c -p -v /dev/sdc1 My question is: Is this drive failing, or am I looking at a common issue here? I'm wondering if there is any point to me to repairing or ignoring bad sectors and whether I should replace the disk under warranty whilst it's still covered. My knowledge of the above command is somewhat lacking, so I'm sceptical as to whether it'll help or not. Quick update! e2fsck finally finished after 2 days with lots of 'multiply-claimed block(s) in inode'. Trying to mount the filesystem resulted in an error, forcing it to drop back to read-only: Nov 11 08:29:05 mediaserver kernel: [211822.287758] EXT4-fs (sdc1): warning: mounting fs with errors, running e2fsck is recommended Nov 11 08:29:05 mediaserver kernel: [211822.301699] EXT4-fs (sdc1): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro Trying to read the sector manually: sudo dd count=1 if=/dev/sdc of=/dev/null skip=2944614392 dd: reading ‘/dev/sdc’: Input/output error 0+0 records in 0+0 records out 0 bytes (0 B) copied, 5.73077 s, 0.0 kB/s Trying to write to it: sudo dd count=1 if=/dev/zero of=/dev/sdc seek=2944614392 dd: writing to ‘/dev/sdc’: Input/output error 1+0 records in 0+0 records out 0 bytes (0 B) copied, 2.87869 s, 0.0 kB/s On both counts, the Reallocated_Sector_Ct remained 0. The drive does go into a sleep state quite often. I'm now thinking this could be a filesystem issue? I'm not 100%.
Bad sectors are always an indication of a failing HDD, in fact the moment you see an I/O error such as this, you probably already lost/corrupted some data. Make a backup if you haven't one already, run a self test smartctl -t long /dev/disk and check SMART data smartctl -a /dev/disk. Get a replacement if you can. Bad sectors can't be repaired, only replaced by reserve sectors, which harms HDD performance, as they require additional seeks to the reserve sectors every time they are accessed. Marking such sectors as bad on the filesystem layer helps, as they won't ever be accessed then; however it's hard to determine which sectors were already reallocated by the disk, so chances are the filesystem won't know to avoid the affected region.
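When checking the SMART data, the attributes that matter most for bad sectors can be filtered out of the smartctl -A output; a small sketch (the helper name is mine, and the field positions assume the standard attribute-table layout):

```shell
# Sketch: pull the sector-health attributes out of `smartctl -A`
# output. In the usual table layout, field 2 is the attribute name and
# the last field its raw value; exotic firmware may format differently.
health_attrs() {
    awk '$2 ~ /^(Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable)$/ {
        print $2 "=" $NF
    }'
}
# smartctl -A /dev/sdc | health_attrs
```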
Does a bad sector indicate a failing disk?
1,361,291,930,000
I may be misunderstanding some concepts here, but as far as I know, each disk has a partition table and actual partitions. I'm looking to test a hard drive for bad sectors and errors, but the tools I found to do this are meant for partitions -- not disks. badblocks takes a partition /dev/sda1 not /dev/sda. Same story with e2fsck. As far as I understand, those tools only test space assigned to partitions, not a whole disk. Is there any way to test an entire disk?
Is there any way to test whole disk? Yes, using badblocks: badblocks /dev/sda The manpage refers to partitions because badblocks can tell mkfs.ext2 about the bad blocks it finds, and that only works when checking partitions. But badblocks itself works fine on full disks. However badblocks is really a relic of a bygone era when hard drives didn’t manage their bad blocks themselves. Nowadays drives track errors themselves and are capable of remapping bad sectors as circumstances allow (typically, when a bad sector is rewritten). You’re probably better off running SMART tests and checking the results: smartctl -t long /dev/sda smartctl -t offline /dev/sda smartctl -x /dev/sda (make sure each test completes before running the next one).
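Making sure each test completes before the next one starts can be scripted by polling smartctl; a sketch, with the parsing based on the usual "Self-test execution status" wording and the device-touching loop left commented out:

```shell
# Sketch: detect whether a SMART self-test is still running, so the
# next one isn't started too early. Parses the "Self-test execution
# status" block that `smartctl -c` prints.
selftest_running() {
    grep -A1 'Self-test execution status' | grep -q 'in progress' \
        && echo 1 || echo 0
}
# while [ "$(smartctl -c /dev/sda | selftest_running)" = 1 ]; do
#     sleep 60
# done
```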
How to check entire hard disk for errors and bad sectors
1,361,291,930,000
I can't find anything about what badblocks actually considers a bad block. I've read the man page and looked at a bunch of questions on here, but I can't find specifics. Also, how good is badblocks? Should I trust its results? My company historically used Victoria on Hirens Boot CD to test hard disks, but that isn't always a good option on newer computers.
badblocks reads, and writes, and compares (not necessarily in that order). Subsequently badblocks -v will output messages like:

Pass completed, n bad blocks found (x/y/z errors)

Which means it found n bad blocks, consisting of x read errors, y write errors and z corruption errors.

It considers read errors and write errors as they occurred while reading and writing (as reported by the kernel). These errors can also be caused by cable / controller / driver problems. A corruption error is where data was compared and found to be different than expected (i.e. the data it read deviated from previously known/written data). In particular badblocks might write various data patterns (specified by one or more -t pattern options) and check if each pattern was written correctly.

It's possible to get false positives for corruption errors, if you have another program doing its own writes while badblocks is running. If another program writes, the disk is behaving correctly, but badblocks won't know about that; it just sees what it considers the wrong data. Which is also why you should never run badblocks on a drive that is in use, on a drive that already has a filesystem that could be mounted automatically without you knowing, or on drives you already suspect are bad but you still wish to recover your data.

In terms of data recovery, you should always go with ddrescue instead of badblocks. ddrescue does very much the same thing badblocks (read mode) does: it reads the entire drive and logs the sectors it couldn't read; but at the same time it produces a useful copy, whereas badblocks just discards the data entirely.

Is it trustworthy? badblocks is a tool like any other; it does exactly what it says in the manpage: search a device for bad blocks. It may or may not be the right tool for whatever it is you want to do. In the wrong hands, it might be the cause of data corruption.
The so-called non-destructive mode is a false friend and does not imply safety for your data at all. badblocks (write mode) is primarily useful to put a new, empty drive through the wringer before trusting it with data. For a read-only test, it's usually better to go with SMART self-tests (smartctl -t long or smartctl -t select). Safer than badblocks and friendly to other I/O.
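For the data-recovery case, a typical ddrescue invocation looks like the sketch below; the paths are placeholders, -n skips the slow scraping phase on a first pass, and the helper only prints the command rather than running it:

```shell
# Sketch: imaging a suspect drive with ddrescue instead of badblocks.
# The map file lets a second run resume and retry only the bad areas.
# Check the destination has enough space before running it for real.
rescue_cmd() {
    src=$1 img=$2
    echo "ddrescue -n $src $img $img.map"
}
rescue_cmd /dev/sdc /mnt/backup/sdc.img
```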
Is badblocks trustworthy?
1,361,291,930,000
I got a new drive and I'm confused if smartctl detects bad sectors or not. Both short and extended self-tests completed without error. But the Error Log indicates Uncorrectable error in data for 96 sectors. Here's the smartctl output: smartctl 5.41 2011-06-09 r3365 [i686-linux-3.2.0-52-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Hitachi Deskstar T7K500 Device Model: Hitachi HDT725025VLA380 Serial Number: VFL104R73X993Z LU WWN Device Id: 5 000cca 316f723ca Firmware Version: V5DOA73A User Capacity: 250,059,350,016 bytes [250 GB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 7 ATA Standard is: ATA/ATAPI-7 T13 1532D revision 1 Local Time is: Wed Feb 5 19:19:29 2014 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x80) Offline data collection activity was never started. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 4949) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 83) minutes. 
SCT capabilities: (0x003f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000b 100 100 016 Pre-fail Always - 0 2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0 3 Spin_Up_Time 0x0007 110 110 024 Pre-fail Always - 338 (Average 340) 4 Start_Stop_Count 0x0012 100 100 000 Old_age Always - 1838 5 Reallocated_Sector_Ct 0x0033 100 100 005 Pre-fail Always - 0 7 Seek_Error_Rate 0x000b 100 100 067 Pre-fail Always - 0 8 Seek_Time_Performance 0x0005 100 100 020 Pre-fail Offline - 0 9 Power_On_Hours 0x0012 099 099 000 Old_age Always - 11746 10 Spin_Retry_Count 0x0013 100 100 060 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 1822 192 Power-Off_Retract_Count 0x0032 099 099 000 Old_age Always - 2103 193 Load_Cycle_Count 0x0012 099 099 000 Old_age Always - 2103 194 Temperature_Celsius 0x0002 162 162 000 Old_age Always - 37 (Min/Max 12/48) 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0 197 Current_Pending_Sector 0x0022 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0008 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x000a 200 253 000 Old_age Always - 0 SMART Error Log Version: 1 ATA Error Count: 27 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. 
Error 27 occurred at disk power-on lifetime: 11706 hours (487 days + 18 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 60 e4 33 e7 47 Error: UNC 96 sectors at LBA = 0x07e733e4 = 132592612 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 25 03 80 c4 33 e7 40 00 02:28:22.700 READ DMA EXT 25 03 01 00 00 00 40 00 02:28:22.200 READ DMA EXT 25 03 01 00 00 00 40 00 02:28:22.200 READ DMA EXT 25 03 01 00 00 00 40 00 02:28:22.200 READ DMA EXT ef 03 46 c4 33 e7 00 00 02:28:22.200 SET FEATURES [Set transfer mode] Error 26 occurred at disk power-on lifetime: 11706 hours (487 days + 18 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 60 e4 33 e7 47 Error: UNC 96 sectors at LBA = 0x07e733e4 = 132592612 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 25 03 80 c4 33 e7 40 00 02:28:11.700 READ DMA EXT 25 03 01 00 00 00 40 00 02:28:11.200 READ DMA EXT 25 03 01 00 00 00 40 00 02:28:11.200 READ DMA EXT 25 03 01 00 00 00 40 00 02:28:11.200 READ DMA EXT ef 03 46 c4 33 e7 00 00 02:28:11.200 SET FEATURES [Set transfer mode] Error 25 occurred at disk power-on lifetime: 11706 hours (487 days + 18 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 60 e4 33 e7 47 Error: UNC 96 sectors at LBA = 0x07e733e4 = 132592612 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 25 03 80 c4 33 e7 40 00 02:28:00.700 READ DMA EXT 25 03 01 00 00 00 40 00 02:28:00.200 READ DMA EXT 25 03 01 00 00 00 40 00 02:28:00.200 READ DMA EXT 25 03 01 00 00 00 40 00 02:28:00.200 READ DMA EXT ef 03 46 c4 33 e7 00 00 02:28:00.200 SET FEATURES [Set transfer mode] Error 24 occurred at disk power-on lifetime: 11706 hours (487 days + 18 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 60 e4 33 e7 47 Error: UNC 96 sectors at LBA = 0x07e733e4 = 132592612 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 25 03 80 c4 33 e7 40 00 02:27:49.700 READ DMA EXT 25 03 01 00 00 00 40 00 02:27:49.200 READ DMA EXT 25 03 01 00 00 00 40 00 02:27:49.200 READ DMA EXT 25 03 01 00 00 00 40 00 02:27:49.200 READ DMA EXT ef 03 46 c4 33 e7 00 00 02:27:49.200 SET FEATURES [Set transfer mode] Error 23 occurred at disk power-on lifetime: 11706 hours (487 days + 18 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 60 e4 33 e7 47 Error: UNC 96 sectors at LBA = 0x07e733e4 = 132592612 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- 25 03 80 c4 33 e7 40 00 02:27:38.900 READ DMA EXT 25 03 08 7c a8 3a 40 00 02:27:38.900 READ DMA EXT 35 03 08 7c a8 3a 40 00 02:27:38.900 WRITE DMA EXT 25 03 08 7c a8 3a 40 00 02:27:38.900 READ DMA EXT 25 03 08 a4 eb 94 40 00 02:27:38.900 READ DMA EXT SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 11746 - # 2 Short offline Completed without error 00% 11744 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. And here's a screenshot with the Error Log: So what is going on? Does the drive have bad sectors or not? UPDATE1: Just to be sure I also used badblocks as suggested in How do you use badblocks?. First, the non-destructive, 1h-long read-only method: root@xubuntu:/home/xubuntu# badblocks -sv /dev/sda Checking blocks 0 to 244198583 Checking for bad blocks (read-only test): done Pass completed, 0 bad blocks found. 
(0/0/0 errors) And then the destructive, 10h-long write method (use with care!): root@xubuntu:/home/xubuntu# badblocks -wsv /dev/sda Checking for bad blocks in read-write mode From block 0 to 244198583 Testing with pattern 0xaa: done Reading and comparing: done Testing with pattern 0x55: done Reading and comparing: done Testing with pattern 0xff: done Reading and comparing: done Testing with pattern 0x00: done Reading and comparing: done Pass completed, 0 bad blocks found. (0/0/0 errors) As suggested in the answers, it really doesn't look like there are bad sectors on this hard-drive. (Yay!)
Your disk had some problems with reading data from the surface, but it seems that the disk dealt with it. I had a similar situation:

Error 29 occurred at disk power-on lifetime: 18836 hours (784 days + 20 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 08 00 40 37 e6  Error: UNC 8 sectors at LBA = 0x06374000 = 104284160

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  c8 00 08 00 40 37 e6 08      03:39:32.447  READ DMA
  c8 00 08 f8 3f 37 e6 08      03:39:32.447  READ DMA
  c8 00 08 f0 3f 37 e6 08      03:39:32.447  READ DMA
  c8 00 08 e8 3f 37 e6 08      03:39:32.447  READ DMA
  c8 00 08 e0 3f 37 e6 08      03:39:32.447  READ DMA

And when I wanted to perform a test, I got:

Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 7  Short offline       Completed: read failure       90%     18845         104284160

Ultimately, I managed to unblock the sectors, and after running the extended test, which scans the whole surface, I got the following result:

Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 3  Extended offline    Completed without error       00%     18858         -

If there were bad blocks, they could be observed in the table under:

  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0

In your case, there's no indication of bad sectors, because the extended test was performed (11746 h) after the last error occurred (11706 h). So, you can sleep peacefully. :)

As I mentioned in comments, there are two types of bad sectors. Here's a short summary of the difference between the two:

There are two types of bad sectors — often divided into "physical" and "logical" bad sectors or "hard" and "soft" bad sectors.
A physical — or hard — bad sector is a cluster of storage on the hard drive that’s physically damaged. The hard drive’s head may have touched that part of the hard drive and damaged it, some dust may have settled on that sector and ruined it, a solid-state drive’s flash memory cell may have worn out, or the hard drive may have had other defects or wear issues that caused the sector to become physically damaged. This type of sector cannot be repaired. A logical — or soft — bad sector is a cluster of storage on the hard drive that appears to not be working properly. The operating system may have tried to read data on the hard drive from this sector and found that the error-correcting code (ECC) didn’t match the contents of the sector, which suggests that something is wrong. These may be marked as bad sectors, but can be repaired by overwriting the drive with zeros — or, in the old days, performing a low-level format. Windows’ Disk Check tool can also repair such bad sectors.
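The "overwriting with zeros" repair for a soft bad sector can be done with dd on a single sector; in this sketch the LBA is the one from the error log above, the target device is a placeholder, and the destructive write is left commented out:

```shell
# Sketch: zero one suspect 512-byte sector so the drive can reallocate
# it. This DESTROYS whatever that sector held; the LBA must match your
# own SMART error log, and /dev/sdX is a placeholder device.
lba=104284160
sector_size=512
offset=$(( lba * sector_size ))   # byte offset, useful for cross-checking

echo "sector $lba starts at byte $offset"
# dd if=/dev/zero of=/dev/sdX bs=$sector_size count=1 seek=$lba oflag=direct
```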
Does my hard-drive have bad sectors or not?
1,361,291,930,000
I'd like to test my hard disks with badblocks, but I don't know which values I should use for the block size (-b option) and blocks number (-c option). I randomly tried a few values and it seems they can significantly change the speed of the process. But how can I choose the optimal values?
Here are the relevant excerpts for the same question, which has already been answered at How do I choose the right parameters when using badblocks? and Using badblocks on modern disks. There is a badblocks benchmark script available that should suit your purpose.

With regards to the -b option: this depends on your disk. Modern, large disks have 4KB blocks, in which case you should set -b 4096. You can get the block size from the operating system (e.g. lsblk -o NAME,PHY-SeC /dev/sdX), and it's also usually obtainable by either reading the disk's information off of the label, or by googling the model number of the disk.

If -b is set to something larger than your block size, the integrity of badblocks results can be compromised (i.e. you can get false negatives: no bad blocks found when they may still exist). If -b is set to something smaller than the block size of your drive, the speed of the badblocks run can be compromised. I'm not sure, but there may be other problems with setting -b to something smaller than your block size: since it isn't verifying the integrity of an entire block, it might still be possible to get false negatives if it's set too small.

The -c option corresponds to how many blocks should be checked at once. Batch reading/writing, basically. This option does not affect the integrity of your results, but it does affect the speed at which badblocks runs. badblocks will (optionally) write, then read, buffer, check, repeat for every N blocks as specified by -c.

If -c is set too low, this will make your badblocks runs take much longer than ordinary, as queueing and processing a separate IO request incurs overhead, and the disk might also impose additional overhead per request. If -c is set too high, badblocks might run out of memory. If this happens, badblocks will fail fairly quickly after it starts.
Additional considerations here include parallel badblocks runs: if you're running badblocks against multiple partitions on the same disk (a bad idea), or against multiple disks over the same I/O channel, you'll probably want to tune -c to something sensibly high given the memory available to badblocks, so that the parallel runs don't fight for I/O bandwidth and can parallelize in a sane way. I have marked this answer as community wiki for further improvement.
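To get a feel for how much memory a given -b/-c pair implies, the buffer size can be estimated with simple arithmetic. This is only a sketch: the exact allocation strategy depends on the badblocks version and test mode, and the two-buffer assumption for -w mode is mine, not from the man page.

```shell
# Estimate badblocks' buffer memory for given -b/-c values.
# Hedged assumption: one buffer of (block size x blocks-at-once) bytes,
# plus a second pattern buffer in -w mode.
block_size=4096       # -b: match the disk's physical sector size
blocks_at_once=65536  # -c: blocks tested per batch
buf=$((block_size * blocks_at_once))
echo "per-buffer memory: $((buf / 1024 / 1024)) MiB"         # 256 MiB
echo "-w mode, two buffers: $((2 * buf / 1024 / 1024)) MiB"  # 512 MiB
```

With 32 GB of RAM even large -c values are harmless in this respect; the returns just diminish once per-request overhead stops being the bottleneck.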
How can I choose optimal values for block size and blocks number for badblocks?
1,361,291,930,000
Windows has had a "Resilient File System" (ReFS) since Windows 8. Are there similarly resilient filesystems for Linux? What I expect from such a filesystem is that a bad block won't screw up either files or the journal. I'm no FS geek, so please explain if such error resilience is unfit for a desktop / is CPU intensive / is memory intensive / lowers the HDD's lifespan / is already in some FS like ext4 / etc. Is there something like this available for Linux?
If you're looking for advanced filesystems for general-purpose computers in the Linux world, there are two candidates: ZFS and BTRFS. ZFS is older and more mature, but it's originally from Solaris and the port to Linux isn't seamless. BTRFS is still under heavy development, and not all features are ready for prime time yet. Both filesystems offer per-file checksumming, so you will know if a file is corrupted. This is more of a security protection than a protection against failing hardware, because failing hardware tends to make a file unreadable: the hardware has its own checksums, so reading wrong data is extremely unlikely (if a disk read returns wrong data, and you're sure it's not an application error, blame your RAM, not your disk). If you want resilience, by far the best thing to do is RAID-1 (i.e. mirroring) over two disks. When a disk starts failing, it's rare that only a few sectors are affected; usually more sectors follow quickly, if the disk hasn't stopped working altogether. So replicating data over the same disk doesn't help very often. Replicating data over two disks doesn't need any filesystem support. The only reason you might want to replicate data on the same disk is if you have a laptop which can only accommodate one disk, but even then the benefits are very small. Remember that no matter how much replication you have, you still need offline backups, to protect against massive hardware failures (power surge, fire, …) and against software-level problems (e.g. accidental file deletion or overwrite).
Are there error resilient filesystems for Linux?
1,361,291,930,000
When running Clonezilla on an NTFS drive with bad sectors, the cloning is interrupted and I get a suggestion from Clonezilla to use the -rescue option to save as much as possible from the damaged drive. What does the -rescue option do? How do I use it? When should I use it?
The -rescue option is meant to "Continue reading past block read errors", as per this manual. This means that it doesn't stop when it reaches a sector error; it simply ignores or bypasses them, enabling you to pull off as much data as possible. If the option isn't selected, Clonezilla will halt and throw a warning message like the one you received. You must therefore be extra careful to validate the extracted files for errors. Binary files such as executables and libraries should consequently be dismissed or replaced with a reliable version.
What is the "-rescue" function of Clonezilla and when should I use it?
1,361,291,930,000
Is there any user space tool that can retrieve and dump the list of bad blocks in a NAND flash device? I've checked the mtdinfo command line utility, and also searched /proc and /sys, but couldn't find anything. I am looking for something suitable for use from a shell script. I could parse dmesg as the kernel prints bad block information on init, but I am hoping there will be a better way.
I have not been able to find any user space utility doing what I need. The closest I have found is the nanddump utility from mtd-utils, which can dump NAND contents, including bad blocks.
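As a stopgap, the dmesg-parsing approach mentioned in the question can be wrapped in a tiny helper. This is a sketch: recent kernels' NAND BBT scan prints lines like "Bad eraseblock 42 at 0x000000540000", but the exact wording varies between kernel versions, so the pattern may need adjusting; list_bad_blocks is a made-up name.

```shell
# Extract bad-eraseblock lines from kernel log output.
# Matches the nand_bbt message format "Bad eraseblock N at 0xADDR";
# adjust the pattern if your kernel words it differently.
list_bad_blocks() {
    grep -oE 'Bad eraseblock [0-9]+ at 0x[0-9a-f]+'
}
# Usage: dmesg | list_bad_blocks
```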
Print list of bad blocks in NAND flash from user space
1,361,291,930,000
Context I am running a Raspberry Pi Zero with a Micro SD as it is designed.  However, for this specific application, I cannot use a read-only system as I usually do with a Raspberry Pi. Objective Keep an eye on the health state of the Micro SD (and eventually get a notification somehow that it is time to replace the Micro SD). Questions What is the best way to monitor the health state of the Micro SD (no fsck as I cannot unmount the Micro SD while the system is running)? Is the following proposed approach the best solution in terms of practicality and effectiveness? Proposed approach I thought of using the output of badblocks to keep an eye on the state of the Micro SD, and eventually replacing it when it is time.  But when is it time to replace the Micro SD?  How many bad blocks are too many?  Should I look at write or read errors, or both? I made the following consideration: Micro SD cards automagically reallocate data outside of bad blocks. So if the Micro SD is 80% full, there are 20% of blocks that can "go bad" and still keep the Micro SD running. Adding a little bit of a confidence interval, can we say that if badblocks outputs a number of bad blocks that is below, let's say, 50% of the free-space blocks, it is still safe to use the SD card? To clarify: Total number of blocks: 100 Free space (in blocks): 20 Maximum number of blocks that are acceptable to be corrupted (or whatever): 10
How many bad blocks are too many? A single one. badblocks is the wrong tool: the moment it reports the first bad block, the flash medium has already been used to the point of damage. Just like fsck, it can only detect when things have already gone wrong – and you might have lost data. In SD cards (and generally, all flash mass storage devices you're likely to encounter), there's a flash translation layer in the hardware, which makes the inherently analog, unreliable, and wearout-prone actual memory look like plain, reliable, block-addressable memory. It does that by two means: extensive forward error correction, to combat the fact that bits simply like to flip in the physical medium (the more a page has been written to, the more often this happens); and wear leveling, to make writes to logical pages (as seen by the Pi / operating system) end up distributed as evenly as possible over the physical pages, so that you don't end up hammering the same page all the time. That means the wear leveling happens in the hardware, invisible to the operating system. The moment the file system driver sees a bad block, there were irreparable errors that the error correction couldn't handle. That's very unlikely unless that page has been written to a lot, in which case the overall write volume has already brought the wear leveling to its knees – meaning you've written so much that even after distributing the writes all over the medium, some places were written to so often that they failed. (The next thing that usually happens is that the controller inside the SD card tells the host that it has become a read-only device.) So, nope: you can't check the state of the SD card by verifying the data on it (which is what both fsck and badblocks do), unless you wait until it's far too late. You need info from inside the SD card with an estimate of how much total write "reserve" there is.
There is, as far as I know, no standard reply for the "gimme health information, dear SD card" command, but "industrial"-labeled SD cards might support CMD65, and there are a few Linux tools (sdmon) that support reading it. In short: if you need reliable write storage, then SD cards aren't it. The interface wasn't designed for that originally; instead, it was assumed writes would happen mostly linearly, in large chunks at once, and with very little duplicate writing (typical for camera usage: write one full image, then update the FAT a single time). If you need long-term reliable mass storage on your Pi, I'm afraid your best choice is buying a USB-to-M.2 (SATA, typically) converter enclosure (and a powered USB hub, if necessary) and plugging it into your RPi Zero, after equipping it with an SSD. Most of these controllers support S.M.A.R.T. commands, so you actually get something to monitor using smartctl; make sure that each attribute's "Normalized" value is above the "Threshold" value. Make sure you don't buy the cheapest SSD you can find: every mid-range SSD brings attribute 177 (wear-level-count), attribute 195 (Hardware ECC Recovered) and attribute 182 (erase-fail-count-total). A decrease in the normalized values of these doesn't necessarily mean anything bad (it's expected to happen, otherwise these metrics would be useless), but as soon as you're getting close to the threshold, you might want to consider replacing the SSD. (Note that chances are your RPi Zero might die first; between the 5 € RPi Zero and a consumer SSD, the SSD is the one that was designed for mechanical, thermal and electrical reliability.)
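Once SMART is available through such a bridge, the normalized-value-vs-threshold check can be scripted. A sketch: the column positions follow smartctl's ATA attribute table (ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH …), and check_smart is a made-up helper name — verify the layout against your smartctl version.

```shell
# Flag SMART attributes whose normalized VALUE has dropped to (or below)
# the failure THRESH, from smartctl's ATA attribute table layout:
#   ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE ...
check_smart() {
    awk '$4 ~ /^[0-9]+$/ && $6 ~ /^[0-9]+$/ && $4 + 0 <= $6 + 0 {
        print "attribute " $1 " (" $2 ") at threshold"
    }'
}
# Usage: smartctl -A /dev/sda | check_smart
```

Run it from cron and mail yourself the (normally empty) output, and you have a crude replacement for smartd's notification feature.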
Interpreting the output of badblocks: when is it time to replace the MicroSD card?
1,361,291,930,000
Anyone know what happens when UBI uses up all its reserved PEBs that are reserved for bad block management? For example say I have a UBI volume that has 14 PEBs reserved # ubinfo -d 1 ubi1 Volumes count: 1 Logical eraseblock size: 126976 bytes, 124.0 KiB Total amount of logical eraseblocks: 1466 (186146816 bytes, 177.5 MiB) Amount of available logical eraseblocks: 787 (99930112 bytes, 95.3 MiB) Maximum count of volumes 128 Count of bad physical eraseblocks: 0 Count of reserved physical eraseblocks: 14 Current maximum erase counter value: 9 Minimum input/output unit size: 2048 bytes Character device major/minor: 249:0 Present volumes: 0 What happens when UBI finds bad block number 15? Does it not allow the volumes to be used?
I've tested it on armv5tel GNU/Linux 2.6.39+ by marking physical eraseblocks (PEB) as bad using the U-Boot command line: When the bad PEB count is higher than the amount of reserved PEBs, the volume will still be usable. As long as free blocks are available they are used to replace the bad ones. Problems will occur when all PEBs are used up and a new bad block is discovered.
UBI bad block management
1,361,291,930,000
I use badblocks to test my 32GB class-10 microSD card that I use to boot my RPi. I already have a functioning file system on it, so I don't want to scan it with the -w option (destructive read-write test). I have two options: I could use the default read-only test, or I could use a non-destructive read-write test (which is done by backing up the sector, testing it destructively, and then restoring the sector's original content). What should I consider when I choose the test type? I would like it to be as fast as possible, but I also need accurate results.
The read-only test only reads. That's basically the default testing method for just about everything, and pretty much the same as what disks do for SMART self-tests. The non-destructive read-write test works by overwriting data, then reading to verify, and then writing the original data back afterwards. The only way to verify that writing data works is by actually writing data; no read-only test will ever do that for you. People who only do read tests (the majority, simply because write tests take at least twice as long) simply take it on good faith that when reading works, writing (and being able to read the data that was written later) will probably work too. However, "non-destructive" is relative... after all, the write itself might destroy data (on a medium with limited write cycles), and once a block is broken there is no way to write the original data back either, so even though the test is non-destructive by intent, if your hardware is faulty it might still lose you some additional data. Therefore you shouldn't use badblocks if there is data on the medium you hope to recover, especially not if you already know it's going bad... if you don't have a backup already, just do the ddrescue directly. That also happens to be a read-only test, and the logfile will tell you where the error zones are...
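For reference, the three modes side by side, tried out on a scratch file instead of a real device. A sketch under the assumption that your badblocks build accepts a regular file in place of a device node (the e2fsprogs version does), which makes a safe playground; on real hardware you'd point these at /dev/mmcblk0 or similar.

```shell
# Skip gracefully where e2fsprogs isn't installed.
command -v badblocks >/dev/null || { echo "badblocks not installed"; exit 0; }

# Compare badblocks' test modes on a throwaway 4 MiB file -- nothing
# outside the temp file is touched.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=4 status=none
badblocks -sv "$img"     # read-only test (the default): safe on real data
badblocks -nsv "$img"    # non-destructive read-write: contents restored
badblocks -wsv "$img"    # destructive write test: contents ERASED
rm -f "$img"
```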
What are the pros and cons of badblock's two non-destructive tests?
1,361,291,930,000
I have /dev/sda mounted on /, as the root partition. Can I safely run badblocks in read-only mode on this device? Will it show false positives/negatives because it's mounted?
Read-only is just that - reading from the disk. It will pick up sector read errors but (obviously) not sector write errors. Categorically, it is safe to run on a device that is being used as a mounted filesystem. With respect to possible false positives, block I/O is not "managed", i.e. there are no reader/writer locks, so there is no interaction between badblocks and the filesystem layer.
Can I safely run badblocks in read-only mode on a mounted drive?
1,361,291,930,000
I ran fsck -c on the (unmounted) partition in question a while ago. The process was unattended and the results were not stored anywhere (except in the bad-block inode). Now I'd like to get the bad-block information to know if there are any problems with the hard drive. Unfortunately, the partition is used in a production system and can't be unmounted. I see two ways to get what I want: Run badblocks in read-only mode. This will probably take a lot of time and cause unnecessary burden on the system. Somehow extract information about bad blocks from the filesystem itself. How can I view known bad blocks registered in a mounted filesystem?
Try dumpe2fs -b /dev/<WHATEVER>
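For completeness, here's the whole round trip demonstrated on a scratch ext2 image rather than a live partition. A sketch using e2fsprogs tools (mke2fs, e2fsck, dumpe2fs); block 2000 is an arbitrary example, and in real use you'd simply run the last command against /dev/sdXN while it stays mounted.

```shell
# Skip gracefully where e2fsprogs isn't installed.
command -v mke2fs >/dev/null || { echo "e2fsprogs not installed"; exit 0; }

# Record a block as bad with e2fsck -l, then read the bad-block inode
# back with dumpe2fs -b.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=4 status=none
mke2fs -q -F "$img"                        # fresh filesystem in a file
echo 2000 > badlist                        # pretend block 2000 is bad
e2fsck -f -y -l badlist "$img" || true     # exit 1 = fs modified, expected
dumpe2fs -b "$img"                         # lists the recorded bad block(s)
rm -f "$img" badlist
```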
How to view bad blocks on mounted ext3 filesystem?
1,361,291,930,000
I made a full disk image from a 4 to 5 year old laptop HDD. That HDD was in a laptop that was carried often to places, so, over the years, it has probably experienced physical stresses to some degree. The HDD still works intact, but Guymager, the program I used, showed how many bad sectors were encountered while capturing that image; specifically, 19 bad sectors. Which LBA numbers and which files are affected by these 19 bad sectors? I would first like to create a list of bad LBA's and then I would like to list every file on each of those LBA's in a separate step.
What works for listing blocks on all disks, independent of file systems (low-level)? And what works with FAT and NTFS? The LBA number and bad block detection are totally independent of the file system. Finding files is completely dependent on the filesystem; don't expect a single tool to work for all filesystems. You can use badblocks to scan for bad blocks, you can use smartctl to get the LBA(s) of reallocated blocks or bad blocks detected by the harddisk firmware, and you can use fdisk etc. if you want to convert between partition-relative numbers (if you did e.g. badblocks /dev/sda1) and LBAs. As mentioned in the other answer, you can find the affected files for ext2/ext3/ext4 with debugfs. You can use the FIBMAP ioctl to find the LBA of the n-th block of a given file for all filesystems, but if you want to find the file for a given LBA, this is probably not practical. There's also filefrag, which probably uses this ioctl. I'm sure there are forensic tools for FAT (and possibly even for NTFS) which find a file for a given block number, but I couldn't name any offhand. Edit: Googling finds fatcat for forensic analysis of FAT filesystems; it seems with -L you can get a file for a specific cluster (which you can calculate from the LBA). I have no experience with this program. Googling more will probably turn up more such programs.
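For the ext family, the arithmetic-plus-debugfs route looks roughly like this. A sketch: all numbers are made-up examples, and the partition offset and filesystem block size must be substituted with your own values.

```shell
# Map an absolute disk LBA (512-byte sectors) to an ext4 filesystem
# block, then ask debugfs which inode/path owns it. All values here
# are example placeholders.
lba=1234567890      # bad sector, e.g. from smartctl's self-test log
part_start=2048     # partition start sector, from: fdisk -l /dev/sdX
fs_block_size=4096  # from: dumpe2fs -h /dev/sdX1 (look for "Block size")
fs_block=$(( (lba - part_start) * 512 / fs_block_size ))
echo "filesystem block: $fs_block"
# debugfs -R "icheck $fs_block" /dev/sdX1   # block -> inode number
# debugfs -R "ncheck <inode>" /dev/sdX1     # inode number -> path
```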
List bad blocks and affected files
1,361,291,930,000
I have just tried tweaking the badblocks utility to use more RAM and possibly achieve somewhat higher performance. The exact command I am running is (without the HDD's S/N): badblocks -v -b 4096 -c 98304 -w -s /dev/disk/by-id/ata-WDC_WD5000LPCX-24C6HT0_SN >> /root/spare-hdd-badblocks.log 2>&1 & I do not use the badblocks tool very often, however, so if I may ask... What does the -c switch do exactly, and why is it suggested for achieving higher speeds? Does it really eat more memory, and if so, as I have plenty, could it possibly be wise to increase it further? From its man page: -c: Number of blocks is the number of blocks which are tested at a time. The default is 64. I do not fully understand it; I just hope someone does. Credit, math, and source of further valuable info: http://www.pantz.org/software/badblocks/badblocksusage.html My system: Debian 11 on a headless Xeon server with 32 GB ECC RAM.
The -c flag controls the number of blocks tested in one go. By increasing this number you're reducing overhead (system calls), marginally improving performance. (Consider dd vs dd bs=64M as another example of this optimisation process.) However, I'm less convinced that badblocks is even relevant these days. Disk firmware has got much more sophisticated, and the OS no longer needs to omit faulty sectors, as the disk does that for you itself. What's more, with SMART you can even get the disk to self-test regularly, and with SMART monitoring you'll be notified when (if) there's a problem - probably in enough time to replace the disk before you lose the data.
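The dd analogy can be felt directly: copy the same data with a tiny request size and a large one and compare the timings. A sketch using a temp file, so no disk is at risk; the timing difference comes from syscall count, the same effect -c tunes in badblocks.

```shell
# Same data volume, different request sizes: the small-bs run issues
# 16384 read()/write() pairs, the large-bs run only 2.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 status=none
time dd if="$f" of=/dev/null bs=512 status=none   # many small requests
time dd if="$f" of=/dev/null bs=4M status=none    # few large requests
rm -f "$f"
```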
Speeding up `badblocks` by tweaking its `-c` switch
1,361,291,930,000
The description of my problem is quite large, so first I will give a short summary, then I will precisely describe the situation. Short summary: the manufacturer's diagnostic tool found and repaired some errors on my hard disk. As far as I understand the tool's manual, these errors were bad blocks. However, smartctl (the Linux tool for SMART on hard disks) doesn't show any reallocated sectors and says that the hard disk is good. First question: how is this possible? Repairing bad blocks means reallocating sectors, right? So why doesn't smartctl report any reallocated sectors? Second question: I bought this disk a few months ago and I still have warranty on it. Should I demand that the seller replace it with a new one, or is this disk good enough to continue using? And now the precise description: I have a Western Digital hard disk, model WDC WD5000AAKX-001CA0. Recently I noticed that sometimes my computer hangs for a few seconds (sometimes up to one minute). After such hangs dmesg shows errors like this: knoppix@Microknoppix:~$ dmesg (...)
[ 504.003363] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [ 504.003374] ata1.00: failed command: READ DMA EXT [ 504.003383] ata1.00: cmd 25/00:00:80:07:01/00:02:00:00:00/e0 tag 0 dma 262144 in [ 504.003385] res 40/00:00:09:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout) [ 504.003389] ata1.00: status: { DRDY } [ 509.016652] ata1: link is slow to respond, please be patient (ready=0) [ 514.030002] ata1: soft resetting link [ 514.200386] ata1.00: configured for UDMA/133 [ 514.200420] ata1: EH complete [ 546.003333] ata1: lost interrupt (Status 0x50) [ 546.003364] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [ 546.003371] ata1.00: failed command: READ DMA EXT [ 546.003380] ata1.00: cmd 25/00:00:80:15:06/00:02:00:00:00/e0 tag 0 dma 262144 in [ 546.003381] res 40/00:00:09:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout) [ 546.003386] ata1.00: status: { DRDY } [ 546.003401] ata1: soft resetting link [ 546.181205] ata1.00: configured for UDMA/133 [ 546.181234] ata1: EH complete However, smartctl says that "SMART overall-health self-assessment test result: PASSED" (I will paste complete output of smartctl few paragraphs later). Whenever I tried to make smartctl self test (with smartctl -t short or smartctl -t long) such tests were reported as aborted by host. So I downloaded bootable CD diagnostic tool for my hd - this one: http://support.wdc.com/product/download.asp?groupid=606&sid=2&lang=en Using this tool first I did quick test and it showed error (unfortunately, I don't remember what was the error code). As far as I understand this tool performs just SMART quick self test (http://wdc.custhelp.com/app/answers/detail/search/1/a_id/940/c/130/p/227,295 says "QUICK TEST - performs SMART drive quick self-test to gather and verify the Data Lifeguard information contained on the drive.") Then I did extended test. 
As far as I understand, this extended test looks for bad sectors (http://wdc.custhelp.com/app/answers/detail/search/1/a_id/940/c/130/p/227,295 says "EXTENDED TEST - performs a Full Media Scan to detect bad sectors"). After some time the tool told that it found and repaired some errors. Now I booted machine with knoppix and did "smartctl --all". Here is its output: root@Microknoppix:/home/knoppix# smartctl --all /dev/sda smartctl 5.43 2012-06-05 r3561 [i686-linux-3.4.9] (local build) Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Western Digital Caviar Blue Serial ATA Device Model: WDC WD5000AAKX-001CA0 Serial Number: WD-WMAYUW952768 LU WWN Device Id: 5 0014ee 6ad1d9ef1 Firmware Version: 15.01H15 User Capacity: 500,107,862,016 bytes [500 GB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Wed Dec 12 03:34:39 2012 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 8160) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. 
Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 83) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x3037) SCT Status supported. SCT Feature Control supported. SCT Data Table supported. SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 486 3 Spin_Up_Time 0x0027 189 141 021 Pre-fail Always - 1525 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 587 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 098 098 000 Old_age Always - 1553 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 578 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 173 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 413 194 Temperature_Celsius 0x0022 097 093 000 Old_age Always - 46 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 5 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 5 SMART Error Log Version: 1 ATA Error Count: 2 CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] 
Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 2 occurred at disk power-on lifetime: 1548 hours (64 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 04 51 01 30 4f c2 a0 Error: ABRT Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- b0 d6 01 be 4f c2 a0 02 00:02:58.316 SMART WRITE LOG b0 da 01 00 4f c2 a0 02 00:02:58.259 SMART RETURN STATUS 80 44 00 00 44 57 a0 02 00:02:58.259 [VENDOR SPECIFIC] b0 d6 01 be 4f c2 a0 02 00:02:58.241 SMART WRITE LOG 80 45 00 01 44 57 a0 02 00:02:58.241 [VENDOR SPECIFIC] Error 1 occurred at disk power-on lifetime: 1515 hours (63 days + 3 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 04 51 01 30 4f c2 a0 Error: ABRT Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- b0 d6 01 be 4f c2 a0 02 00:02:21.841 SMART WRITE LOG b0 da 01 00 4f c2 a0 02 00:02:21.784 SMART RETURN STATUS 80 44 00 00 44 57 a0 02 00:02:21.784 [VENDOR SPECIFIC] b0 d6 01 be 4f c2 a0 02 00:02:21.768 SMART WRITE LOG 80 45 00 01 44 57 a0 02 00:02:21.768 [VENDOR SPECIFIC] SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Conveyance offline Completed without error 00% 1552 - # 2 Conveyance offline Completed: read failure 90% 1548 787927349 # 3 Conveyance offline Completed: read failure 90% 1515 883391611 # 4 Short offline Completed without error 00% 1503 - # 5 Short offline Completed without error 00% 1503 - # 6 Short offline Aborted by host 80% 1502 - # 7 Extended offline Completed without error 00% 9 - # 8 Short offline Completed without error 00% 6 - # 9 Short offline Aborted by host 90% 6 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. As you can see, on one hand one conveyance offline was completed with read failure. But, on the other hand, all attributes seems good - for instance, Reallocated_Sector_Ct is 0. I also tried once again to cat the whole disk to /dev/null - and again I have errors in dmesg: root@Microknoppix:/home/knoppix# nice -n 20 ionice -c 3 cat /dev/sda > /dev/null During this cat dmesg shows such errors: knoppix@Microknoppix:~$ dmesg (...) 
[ 504.003363] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [ 504.003374] ata1.00: failed command: READ DMA EXT [ 504.003383] ata1.00: cmd 25/00:00:80:07:01/00:02:00:00:00/e0 tag 0 dma 262144 in [ 504.003385] res 40/00:00:09:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout) [ 504.003389] ata1.00: status: { DRDY } [ 509.016652] ata1: link is slow to respond, please be patient (ready=0) [ 514.030002] ata1: soft resetting link [ 514.200386] ata1.00: configured for UDMA/133 [ 514.200420] ata1: EH complete [ 546.003333] ata1: lost interrupt (Status 0x50) [ 546.003364] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [ 546.003371] ata1.00: failed command: READ DMA EXT [ 546.003380] ata1.00: cmd 25/00:00:80:15:06/00:02:00:00:00/e0 tag 0 dma 262144 in [ 546.003381] res 40/00:00:09:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout) [ 546.003386] ata1.00: status: { DRDY } [ 546.003401] ata1: soft resetting link [ 546.181205] ata1.00: configured for UDMA/133 [ 546.181234] ata1: EH complete I thought it could be the fault of the motherboard or of the data cable that connects the disk to the motherboard. So I connected another disk to my motherboard using the same cable and slot and cat'ed it to /dev/null. It succeeded without dmesg showing any errors.
There are no reallocated sectors because they failed to reallocate. Your drive is showing 5 Offline_Uncorrectable sectors, which happens when automatic repair fails. There are obvious read failures shown in the dmesg output, SMART errors, and read failures from SMART tests. There are ways of repairing these sectors as you have mentioned in the question, but from my experience it is a very short-term fix. Replace the drive.
manufacturer's tool found bad blocks, but smartctl doesn't show any
1,361,291,930,000
TLDR; HDD seemed damaged. Unable to format a partition (mkfs.ext4 I/O errors), even with a newly created GPT table. A SMART test shows some errors. I was about to throw the disk away. Before that, out of curiosity, I ran a full badblocks test. Big surprise: it didn't detect any bad blocks! Went back to GParted, created a GPT table + a few partitions. Everything works fine now! What did badblocks do? The full story: I am trying to make sense of what just happened. I was about to throw an HDD away because I was unable to create partitions on it, and SMART showed some errors. Before throwing the disk away I just wanted to play a little with badblocks, and... big surprise: badblocks seemed to have repaired my disk! I didn't even know that it could do that! So I am happy now, I can indeed use my disk, it works fine, but I am still trying to figure out what just happened. It's a 4 TB Seagate HDD that I hadn't used in a few years. I plugged it into a SATA ↔ USB adapter (the adapter works fine, I use it with several other HDDs). With GParted I created a new GPT partition table, and then a partition. It was unable to proceed to the end; there was a mkfs.ext4 I/O error: (...) Allocating group tables: done Writing inode tables: done Creating journal (131072 blocks): done Writing superblocks and filesystem accounting information: 0/895 mke2fs 1.46.2 (28-Feb-2021) mkfs.ext4: Input/output error while writing out and closing file system I tried several times, with different USB adapters, different USB cables, and different USB ports. It never worked. I then did a SMART short test: # smartctl -t short -C /dev/sde (...) # smartctl -a /dev/sde (...) Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short captive Completed: read failure 90% 528 191105024 (...) Obviously the HDD seems defective, right?
So I was about to throw it away, but did a badblocks test first: # badblocks -wvs -t random -b 4096 /dev/sde Checking for bad blocks in read-write mode From block 0 to 976754645 Testing with random pattern: done Reading and comparing: done Pass completed, 0 bad blocks found. (0/0/0 errors) The test lasted about 19 hours (4 TB disk), and it didn't show any errors. I was very surprised! Back in GParted, I created a new GPT table and some partitions, and everything went smoothly. I ended up doing some copy tests I usually do in order to check a disk's performance, and everything seems normal (155 MB/s R/W when copying big files). I also did another SMART short test, and it completed without error this time: # smartctl -t short -C /dev/sde (...) # smartctl -a /dev/sde (...) Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short captive Completed without error 00% 549 - # 2 Short captive Completed: read failure 90% 528 191105024 (...) Can someone make sense of that? It's as if running badblocks somehow repaired my HDD. How is that possible? Is badblocks even supposed to do that? Note: more info is available if needed (full SMART output and full GParted results)
Yes, badblocks can have that effect — not really by design, but because hard drives can remap failing blocks, and will do so when they encounter a failed block during a write (since there’s no data that can be lost). By writing to every single accessible sector in the drive, badblocks gives ample opportunity for the drive to do so; and if the drive’s spare capacity is sufficient to remap all the failed blocks, badblocks won’t see anything amiss. If you run smartctl -a on the drive, you should see that it has a non-zero “reallocated sector count” (attribute 5). This indicates that it has remapped sectors. While the drive may work fine now, this does indicate that it has problems, so it should be treated with suspicion; if part of its storage has failed, more is liable to fail in the not-too-distant future. See also SSD: `badblocks` / `e2fsck -c` vs reallocated/remapped sectors.
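To check that attribute without wading through the full report, the raw value can be extracted from smartctl's attribute table; the sample line below is invented for illustration, and extracting the last field is an assumption based on smartctl's usual table layout (RAW_VALUE is the final column):

```shell
# Hypothetical sample line in the format `smartctl -A` prints for attribute 5;
# the numbers here are made up for illustration.
line='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       8'
# The raw value is the last field; extract it with awk:
raw=$(printf '%s\n' "$line" | awk '{print $NF}')
echo "reallocated sectors (raw): $raw"
```

On a real system you would pipe `smartctl -A /dev/sdX` through the same awk filter after selecting the attribute-5 line.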
I/O errors, but after running badblocks everything works again: how is that possible?
1,361,291,930,000
I have a 2TB disk that I use in a notebook. The disk was formatted as ext4 and it works fine in the notebook, but when I attach it to a desktop (via a SATA-USB adapter), I am unable to mount it due to the following error: From desktop: # mount /dev/sdd1 /mnt mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error. # dmesg | grep sdd [ 6978.692452] sd 11:0:0:0: [sdd] 3907029166 512-byte logical blocks: (2.00 TB/1.82 TiB) [ 6978.692604] sd 11:0:0:0: [sdd] Write Protect is off [ 6978.692606] sd 11:0:0:0: [sdd] Mode Sense: 03 00 00 00 [ 6978.692799] sd 11:0:0:0: [sdd] No Caching mode page found [ 6978.692803] sd 11:0:0:0: [sdd] Assuming drive cache: write through [ 6978.789625] sdd: sdd1 [ 6978.789631] sdd: p1 size 3907027120 extends beyond EOD, enabling native capacity [ 6978.792344] sdd: sdd1 [ 6978.792346] sdd: p1 size 3907027120 extends beyond EOD, truncated [ 6978.793299] sd 11:0:0:0: [sdd] Attached SCSI disk [ 7002.085079] EXT4-fs (sdd1): bad geometry: block count 488378390 exceeds size of device (488378389 blocks) # fdisk -l /dev/sdd Disk /dev/sdd: 1.8 TiB, 2000398932992 bytes, 3907029166 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0xa3bf120c Device Boot Start End Sectors Size Id Type /dev/sdd1 2048 3907029167 3907027120 1.8T 83 Linux From Notebook: # dmesg | grep sdb [ 6.747344] sd 1:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB) [ 6.747347] sd 1:0:0:0: [sdb] 4096-byte physical blocks [ 6.747369] sd 1:0:0:0: [sdb] Write Protect is off [ 6.747372] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 [ 6.747407] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 6.769650] sdb: sdb1 [ 6.770587] sd 1:0:0:0: [sdb] Attached SCSI disk [ 14.128886] EXT4-fs (sdb1): mounted filesystem with ordered data mode. 
Opts: data=ordered Here I tried remounting it, and it worked fine: [ 286.189504] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: (null) # fdisk -l /dev/sdb Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: dos Disk identifier: 0xa3bf120c Device Boot Start End Sectors Size Id Type /dev/sdb1 2048 3907029167 3907027120 1.8T 83 Linux My question is: why does one computer show a different number of sectors on the disk than the other one? I checked for bad blocks; none were found.
This happens with faulty USB interface adapters. Possible reasons for a faulty adapter: it is too old, it is a cheap adapter, or it has bad firmware. These errors became a lot more frequent with the advent of advanced format drives. Some adapters try to "translate" AF drive interactions so they emulate legacy format drives. This means you can: use the USB adapter to format the drive, and then continue to use the USB adapter on both computers; get a better USB adapter, so you won't have to reformat your drive; or use internal SATA connectors on both computers. Note that formatting will destroy all data on the drive.
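The off-by-two sector count in the question explains the kernel messages exactly; this can be checked with plain shell arithmetic, using the numbers from the fdisk and dmesg output above:

```shell
# Partition size as recorded in the partition table: 3907027120 sectors of 512 bytes.
part_sectors=3907027120
# ext4 uses 4096-byte blocks here, i.e. 8 sectors per block:
fs_blocks=$(( part_sectors / 8 ))
echo "$fs_blocks"        # block count mkfs recorded

# The adapter reports 3907029166 total sectors (2 fewer than the drive itself),
# so after the partition start at sector 2048 only this many whole 4096-byte
# blocks fit on the device:
avail_blocks=$(( (3907029166 - 2048) / 8 ))
echo "$avail_blocks"     # one block short -> the "bad geometry" error
```

The two numbers reproduce the kernel's "block count 488378390 exceeds size of device (488378389 blocks)" complaint.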
ext4-fs: bad geometry: block count exceeds size of device
1,361,291,930,000
I have purchased a new HDD for my backups. Before entrusting the device with the job of keeping my data safe I want to make sure that it is in good condition. The drive is a new internal 3.5 inch SATA drive. I started a destructive write test with badblocks using the following command. (Important: DON'T just copy-paste the following command; it will erase all data on your disk) # badblocks -wsv -t random /dev/<device> After ~ 1:30h the badblocks run has reached 0.36% completion. iotop reports average write speeds between 1.6 and 2.5 MB/s, which is about 1% of the write speed the drive should actually be capable of. The IO load reported by iotop is 99.9% though. Is there something odd going on, or is it really common for badblocks to perform that slowly?
You need to add the -c option to do more than 64 blocks and probably -b to specify a block size other than 1KiB. Right now you're doing 64KiB at a time, which is a lot of seeks. Something like: badblocks -c 2560 -b 4096 -wsv -t random /dev/«device» ought to run much faster. That's 10MiB (= 4KiB × 2560) at a time; go higher with -c if that's still not running full-speed. Also your disk likely has 4K sectors, hence the -b 4096. Otherwise one bad sector will be reported as 4. (You may wish to consider in addition—or even instead—smartctl -t long. And of course mirror your backups if you're paranoid.)
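To see why the defaults are so slow, compare the per-request sizes; this is simple shell arithmetic, no disk involved:

```shell
# badblocks defaults: -b 1024 (1 KiB blocks) and -c 64 (64 blocks at once)
default_chunk=$(( 1024 * 64 ))
echo "$default_chunk"    # bytes per I/O request with the defaults (64 KiB)

# With the suggested -b 4096 -c 2560:
tuned_chunk=$(( 4096 * 2560 ))
echo "$tuned_chunk"      # bytes per I/O request after tuning (10 MiB)
```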
What data transfer / write speeds are to be expected for a badblock destructible write test?
1,361,291,930,000
I have a pretty basic system running Ubuntu 16.04 (this question is not specific to Ubuntu, but rather to ext4 partitions), 1 HDD, running a few partitions: sda1 - EXT4 - 100G - / sda2 - EXT4 - 723.5G - /home sda3 - NTFS - 100G - (windows) sda5 - SWAP - 8G Whenever I try to access one of 3-4 files in a specific directory in the /home partition (the specific folder causing the issues is /home/path/to/broken/folder), the /home partition will error and remount read-only. dmesg shows the following errors: EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1415: comm rm: pblk 0 bad header/extent: invalid magic - magic 0, entries 0, max 0(0), depth 0(0) Aborting journal on device sda2-8. EXT4-fs (sda2): Remounting filesystem read-only EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1417: comm rm: pblk 0 bad header/extent: invalid magic - magic 0, entries 0, max 0(0), depth 0(0) EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1416: comm rm: pblk 0 bad header/extent: invalid magic - magic 0, entries 0, max 0(0), depth 0(0) So I understand what is going on...some bad block is causing an error and is remounting the drive read-only to prevent further corruption. I know it is these specific files because I can undo the error by Logging in as root Running sync Stopping lightdm (and all sub-processes) Stopping all remaining processes with open files on /home by finding them with lsof | grep /home Unmounting /home Running fsck /home (fixing the errors) Remounting /home Everything is fine again, read and write, until I try to access the same files again, then this entire process is repeated to fix it again. The way I've tried to access the files is by running ls /home/path/to/broken/folder and rm -r /home/path/to/broken/folder, so it seems any kind of HDD operation on that part of the drive errors it and throws it into read-only again. I honestly don't care about the files, I just want them gone. 
I am willing to remove the entire /home/path/to/broken/folder folder, but every time I try this, it fails and throws into read-only. I ran badblocks -v /dev/sda2 on my hard drive, but it came out clean, no bad blocks. Any help would still be greatly appreciated. Still looking for a solution to this. Some information that might be useful below: $ debugfs -R 'stat <1415>' /dev/sda2 debugfs 1.42.13 (17-May-2015) Inode: 1415 Type: regular Mode: 0644 Flags: 0x80000 Generation: 0 Version: 0x00000000 User: 0 Group: 0 Size: 0 File ACL: 0 Directory ACL: 0 Links: 1 Blockcount: 0 Fragment: Address: 0 Number: 0 Size: 0 ctime: 0x5639ad86 -- Wed Nov 4 01:02:30 2015 atime: 0x5639ad86 -- Wed Nov 4 01:02:30 2015 mtime: 0x5639ad86 -- Wed Nov 4 01:02:30 2015 Size of extra inode fields: 0 EXTENTS: Now I looked at this myself and compared it to what I suspect to be a non-corrupted inode: $ debugfs -R 'stat <1410>' /dev/sda2 debugfs 1.42.13 (17-May-2015) Inode: 1410 Type: regular Mode: 0644 Flags: 0x80000 Generation: 0 Version: 0x00000000 User: 0 Group: 0 Size: 996 File ACL: 0 Directory ACL: 0 Links: 1 Blockcount: 0 Fragment: Address: 0 Number: 0 Size: 0 ctime: 0x5639ad31 -- Wed Nov 4 01:01:05 2015 atime: 0x5639ad31 -- Wed Nov 4 01:01:05 2015 mtime: 0x5639ad31 -- Wed Nov 4 01:01:05 2015 Size of extra inode fields: 0 EXTENTS: (0):46679378 I have bolded what I believe are the key differences here. I looked at other non-corrupted inodes and they display something similar to the 1410 that has a non-zero size and an extent. Bad header/extent makes sense here...it has no extent....how do I fix this without reformatting my entire /home partition? I really feel like I've handed this question to someone smarter than me on a silver platter, I just don't know what the meal (answer) is!
Finally found the answer from somebody else on another site, just zeroed the inodes and rechecked the system, that was all! debugfs -w /dev/sda2 :clri <1415> :clri <1416> :clri <1417> :q fsck -y /dev/sda2 To anybody else with this issue, I found my bad inodes using find on the bad mount, then checked dmesg for errors on the bad inodes.
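The dmesg step at the end can be scripted; this sketch pulls the inode number out of one error line (the line is copied from the question's dmesg output, and the sed pattern is an assumption about the message format):

```shell
# Sample error line in the format shown in the question's dmesg output
line='EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1415: comm rm: pblk 0 bad header/extent: invalid magic'
# Extract the number after "inode #" for use with debugfs clri
inode=$(printf '%s\n' "$line" | sed -n 's/.*inode #\([0-9]*\):.*/\1/p')
echo "$inode"
```

On a real system you would feed all of `dmesg` through the same sed filter and de-duplicate the resulting inode numbers before zeroing them.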
Partition Errors and Remounts Read-Only when Accessing Specific File
1,361,291,930,000
The problem that I am having is the extremely long time that fsck is taking. I have thoroughly made searches on Google, but I could not find anything that would resolve the problem. The command that I am running is sudo fsck.ext4 -vc /dev/sdb1. I have a 200GB SATA hard drive which has some bad sectors. It is SMART-compatible, however, SMART somehow is not capable of remapping the sectors. The command that I am running is going to check for bad sectors and add them to the bad block list. However, here is the output so far: e2fsck 1.42 (29-Nov-2011) Checking for bad blocks (read-only test): 1.95% done, 11:53:24 elapsed. (1657/0/0 errors) At this rate it will probably take around 1 month. Now don't tell me "Your hard drive is too old and it's gonna fail soon blah blah blah". I just want to add the bad blocks to the badblocks list. The hard drive is not developing any new bad sectors. My machine has an i3 quad-core with 8GB of RAM. My CPU usage is under 10%, and about 1.5GB of the RAM is used. Nothing is paged. The disk which I am checking has a newly created ext4 filesystem with nothing on it. I just don't understand why it will take 1 month to fsck a disk and list bad blocks. Something is definitely wrong here. Any advice?
SMART doesn't remap sectors, it just detects and logs errors. Bad sectors are remapped automatically when written to. You can do this with dd or hdparm --write-sector. If your drive cannot remap the sector because it has run out of reserve sectors, then you are one step away from panic. Remapping them in the file system does not make much sense. If hdparm -t /dev/sdb gives you reasonable results then you may run badblocks on its own (with -s) in order to check whether it's faster if run directly, and run it through strace if it is not faster in order to get an impression of where the performance problem comes from. Maybe there are certain areas on the disk which cause a lot of read retries.
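Remapping by writing boils down to dd with bs= set to the sector size and seek= set to the failing LBA. The addressing can be rehearsed safely on a scratch file first; the LBA below is an arbitrary example, not one from the question, and on a real drive (of=/dev/sdX) the write would destroy that sector's contents:

```shell
# Safe demonstration on a temporary file; on a real drive the target would be
# e.g. of=/dev/sdb with LBA set to the failing sector number (destructive!).
tmp=$(mktemp)
LBA=100
dd if=/dev/zero of="$tmp" bs=512 count=1 seek="$LBA" conv=notrunc 2>/dev/null
# The file now ends right after sector 100: (100 + 1) * 512 = 51712 bytes
size=$(wc -c < "$tmp")
echo "$size"
rm -f "$tmp"
```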
Extremely long time for an ext4 fsck
1,361,291,930,000
I tried to move some files to my NAS (ShareCenter DNS-320), but an error shows up. When using file managers it is an Input/output error; when using rsync on a mounted cifs/smb share it is rsync: close failed on "/mnt/nas1/_am-unordered/.long-file-name.mkv.PI2rPM": Input/output error (5) rsync error: error in file IO (code 11) at receiver.c(856) [receiver=3.1.0] # mount | grep mnt/nas1 //192.168.x.y/backup on /mnt/nas1 type cifs (rw,relatime,vers=1.0,cache=strict,username=admin,domain=BACKUP,uid=1000,forceuid,gid=0,noforcegid,addr=192.168.x.y,file_mode=0755,dir_mode=0755,nounix,serverino,rsize=61440,wsize=65536,actimeo=1) I assume that there are bad sectors inside the NAS, so I need to run fsck to check whether there is a broken disk inside my RAID-0 NAS. I have installed fun_plug using this tutorial, and now I can ssh into the NAS successfully. Normally I would use fsck -yvckfC -E fragcheck /dev/sdX to check for bad sectors on a single unmounted disk. The question is: how do I run badblocks and add the results to the bad block list on a mounted RAID-0 partition, given that the ssh service is running from the mounted partition on the NAS? # umount /mnt/HD/HD_a2/ umount: /mnt/HD/HD_a2: device is busy. 
(In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) # lsof /mnt/HD/HD_a2/ COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sh 1963 root 1w REG 9,1 5191 12 /mnt/HD/HD_a2/ffp.log sh 1963 root 2w REG 9,1 5191 12 /mnt/HD/HD_a2/ffp.log sh 1963 root 10r REG 9,1 1942 246939802 /mnt/HD/HD_a2/fun_plug rc 1986 root txt REG 9,1 587316 141426950 /mnt/HD/HD_a2/ffp/bin/bash rc 1986 root mem REG 9,1 28892 139854377 /mnt/HD/HD_a2/ffp/lib/ld-uClibc-0.9.33-git.so rc 1986 root mem REG 9,1 260898 139853932 /mnt/HD/HD_a2/ffp/lib/libreadline.so.6.2 **snip** sshd 5519 root mem REG 9,1 60546 139854375 /mnt/HD/HD_a2/ffp/lib/libgcc_s.so.1 sshd 5519 root mem REG 9,1 359940 139854378 /mnt/HD/HD_a2/ffp/lib/libuClibc-0.9.33-git.so The NAS's current RAID configuration is: # cat /proc/mdstat Personalities : [linear] [raid0] [raid1] md1 : active raid0 sda2[0] sdb2[1] 7808789888 blocks 64k chunks md0 : active raid1 sdb1[1] sda1[0] 524224 blocks [2/2] [UU] unused devices: <none>
You're working from a shaky premise, being that badblocks can solve your problem in the first place. Why badblocks Is an Untrustworthy Repair Method As you use a hard drive, it continually does its best to hide problems from you by swapping fresh sectors in for dodgy ones. The hard disk ships from the factory with a pool of spare sectors for this very purpose. As long as the number of new bad sectors grows slowly, the spare sector pool shrinks slowly enough that the hard drive appears to run flawlessly. The only way badblocks can detect a bad sector is when the spare sector pool has run dry, which means that it has been degrading for some time. Put another way, visible bad sectors mean the hard disk has swept so many problems under the rug that the rug is starting to look lumpy. As far as I'm aware, hard drives have done this sort of silent fix for decades now, probably from the early days of IDE. The last systems I used that exposed their initial set of bad sectors from the start used ESDI and MFM hard disks, dating from the late 1980s. This is not to say that modern hard drives no longer ship with an initial set of bad sectors. They do. Bad sectors are mapped out at the factory so that a badblocks test on a new hard disk will turn up zero bad sectors. (Sectors from the spare sector pool are mapped in to take the bad sectors' place.) If a badblocks scan turns up bad sectors on a new drive or one still within its warranty period, that's sufficient reason to have it replaced immediately. It is possible for badblocks to return a consistent result over a long enough period of time for the filesystem's bad sector list to be useful. This can indeed allow you to continue using an out-of-warranty or otherwise irreplaceable hard drive past the point where the drive's own self-repair features have stopped working. However, it is also possible for badblocks to return different results between closely spaced tests. (Say, two tests done a day or a week apart.) 
When the hard disk gets into such a bad state, the filesystem's bad sector list becomes pointless; the hard disk is dying. The filesystem's bad sector list only provides a benefit when the list remains stable over long periods of time. Bottom line: Replace the hard disk while it is still readable. Yes, I realize this probably means rebuilding the entire NAS, but that's the cost of RAID-0, a.k.a. "scary RAID." A Better Solution: Monitoring You cannot tell that a sector swap has taken place short of tracking the size of the spare sector pool over time via SMART. Some hard drives won't report this, even if you did want to track it, and those that do provide it may report only a modified version of the truth rather than the literal truth. That said, this command may tell you what you need to know: # smartctl -x /dev/sda | grep Realloc 5 Reallocated_Sector_Ct PO--CK 200 200 140 - 0 196 Reallocated_Event_Count -O--CK 200 200 000 - 0 While the raw and normalized values that smartctl reports may not be precisely correct, an increasing number here — especially a large increase over a brief period — is always bad. Notice that the last column is zero on the machine I ran that command on. This is what I mean when I say the report may not be entirely trustworthy. That's the "raw" value, while the "200" columns are the "normalized" value. This drive is claiming there have been no reallocations, ever, which is almost certainly not true. As for "200", that is a value the hard drive manufacturer came up with on their own, with their own meaning. You can't compare it between hard drive brands, and may not even be able to compare it to other hard drives from the same manufacturer. But once again: if you monitor these values and they suddenly start increasing, that is a bad sign, even though it doesn't actually tell you what's going on at the oxide level. smartctl reports information on individual hard drives, not RAID devices. 
It does know how to talk to several types of hardware RAID controllers to extract per-drive information, but there's no need to have specific support for software RAID, since the underlying devices are directly available. Thus, you'd need to monitor both /dev/sda and /dev/sdb separately in your case, not /dev/md1. smartd — a companion tool to smartctl — does this sort of background continuous monitoring.
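A minimal smartd configuration for this setup could look like the sketch below; the device names and mail address are placeholders, and on a real system the content would go into /etc/smartd.conf rather than a temporary file:

```shell
# Write a hypothetical smartd configuration to a scratch file for illustration.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# Monitor both md member drives individually, not /dev/md1 itself.
# -a: track all attributes; -m: mail this (placeholder) address on failure
/dev/sda -a -m admin@example.com
/dev/sdb -a -m admin@example.com
EOF
content=$(cat "$cfg")
echo "$content"
rm -f "$cfg"
```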
Adding badblocks on mounted partition
1,361,291,930,000
The drive is currently in NTFS. After running chkdsk for hours, I have found the locations of bad sectors (below). I want to reformat the disk in EXT4. I have heard that EXT4 has some sort of metadata to mark bad sectors, and there is a utility for that, but I do not want to run the test for hours again. Can I just directly tell EXT4 about the bad sector locations below that I have already found by chkdsk? Stage 4: Looking for bad clusters in user file data ... Read failure with status 0xc000009c at offset 0x280036f1000 for 0x10000 bytes. Read failure with status 0xc000009c at offset 0x280036fb000 for 0x1000 bytes. Read failure with status 0xc000009c at offset 0x280cb987000 for 0x10000 bytes. Read failure with status 0xc000009c at offset 0x280cb993000 for 0x1000 bytes. Read failure with status 0xc000009c at offset 0x280dbdc2000 for 0x10000 bytes. Read failure with status 0xc000009c at offset 0x280dbdc4000 for 0x1000 bytes. Read failure with status 0xc000009c at offset 0x2835d5bb000 for 0x10000 bytes. Read failure with status 0xc000009c at offset 0x2835d5c0000 for 0x1000 bytes.
You can use the "badblocks" command together with e2fsck to specify a list of bad disk blocks to the filesystem. As others have commented, that is not great, because that means your disk is on the verge of increasing failure. Also, because this is normally handled at the drive level today, this badblocks code is rarely used these days.
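If you do decide to translate the chkdsk results, note that they are byte offsets; assuming the new ext4 filesystem starts at the same partition offset and uses 4096-byte blocks, the conversion to a block number for e2fsck -l or mke2fs -l is plain integer division (shown here for the first offset in the question):

```shell
# First failing offset reported by chkdsk, as a byte offset:
offset=$(( 0x280036f1000 ))
# With 4096-byte ext4 blocks, the corresponding filesystem block number:
block=$(( offset / 4096 ))
echo "$block"
```

Both assumptions (same starting offset, 4 KiB filesystem blocks) must hold for the resulting block numbers to be meaningful.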
Manually telling EXT4 about bad sectors?
1,361,291,930,000
Before I sell an old HDD I completely read and write the disk to verify that it has no bad sectors. I always did the writing like this: dd if=/dev/zero of=/dev/sdb bs=100M status=progress But my computer has 32GB RAM and a lot of the data might still be in the cache when dd exits. Is there a way to see when the OS fails to write the cache to the disk (after dd has terminated)? What is the right option for dd to flush the cache before exiting? sync, fsync or fdatasync? Please don't suggest disk checking tools. dd is enough for me.
Although you specifically requested not to recommend disk checking tools, I will do so and hereby recommend: the disk itself. You can ask the drive to perform a thorough, internal self-test, eliminating all possible sources of problems regarding caches. The self-test can conveniently be accessed via gsmartcontrol. If you really do not want this, you still should consider using a tool like F3. It will not only check whether the data can be written but also whether the written data can be read back afterwards (which is, in my opinion, the more important function of a storage medium). With dd's conv=fdatasync, dd does not terminate before the last block has been written.
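The effect of conv=fdatasync can be tried risk-free on a regular file instead of a device; dd then calls fdatasync() and returns only after the data has been flushed:

```shell
# Write 4 MiB of zeros to a throwaway file; conv=fdatasync makes dd flush
# the data to stable storage before it exits.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=4 conv=fdatasync 2>/dev/null
bytes=$(wc -c < "$tmp")
echo "$bytes"
rm -f "$tmp"
```

The same option applied to the original command (of=/dev/sdb) guarantees dd does not exit with unwritten data still sitting in the page cache.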
Using dd to detect bad sectors
1,361,291,930,000
mkfs.vfat -c does a simple check for bad blocks. badblocks runs multiple passes with different patterns and thus detects intermittent errors that mkfs.vfat -c will not catch. mkfs.vfat -l filename can read a file of bad blocks produced by badblocks. But I have been unable to find an example of how to generate that file using badblocks. My guess is that it is as simple as: badblocks -w /dev/sde1 > filename mkfs.vfat -l filename /dev/sde1 But I have been unable to confirm this. Is there an authoritative source that can confirm this or explain how to use badblocks to generate input for mkfs.vfat -l filename?
From man badblocks: -o output_file Write the list of bad blocks to the specified file. Without this option, badblocks displays the list on its standard output. The format of this file is suitable for use by the -l option in e2fsck(8) or mke2fs(8). So the correct way would be: badblocks -o filename /dev/sde1 mkfs.vfat -l filename /dev/sde1
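For reference, the file written by -o is plain text with one bad block number per line, so it is easy to inspect or even assemble by hand; the block numbers below are invented for illustration:

```shell
# Build a hypothetical bad-block list in the same one-number-per-line format
printf '%s\n' 10240 10241 99999 > /tmp/badlist
cat /tmp/badlist
entries=$(wc -l < /tmp/badlist)
echo "$entries entries"
```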
Using badblocks with mkfs -l
1,361,291,930,000
I thought I could nuke all partitions of a drive by using dd if=/dev/zero of=/dev/sdX. In the past this has always worked for me, but in this case it is not working as expected. #check the partitions ➜ ~ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 477G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi └─sda2 8:2 0 476.4G 0 part / sdb 8:16 1 14.6G 0 disk ├─sdb1 8:17 1 292M 0 part /media/james/Gentoo amd64 20190703T214502Z └─sdb2 8:18 1 6.3M 0 part /media/james/GENTOOLIVE #unmount and confirm the drive is still seen. ➜ ~ sudo umount "/media/james/Gentoo amd64 20190703T214502Z" ➜ ~ sudo umount "/media/james/GENTOOLIVE" ➜ ~ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 477G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi └─sda2 8:2 0 476.4G 0 part / sdb 8:16 1 14.6G 0 disk ├─sdb1 8:17 1 292M 0 part └─sdb2 8:18 1 6.3M 0 part #Run dd ➜ ~ sudo dd if=/dev/zero of=/dev/sdb bs=3M dd: error writing '/dev/sdb': No space left on device 2649+0 records in 2648+0 records out 8330620928 bytes (8.3 GB, 7.8 GiB) copied, 5.50879 s, 1.5 GB/s #the partitions are still there! ➜ ~ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 477G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi └─sda2 8:2 0 476.4G 0 part / sdb 8:16 1 14.6G 0 disk ├─sdb1 8:17 1 292M 0 part └─sdb2 8:18 1 6.3M 0 part ➜ ~ lsblk #after unplugging and replugging the drive, the old partition still mounts and still contains files. I was able to open several and read the contents. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 477G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi └─sda2 8:2 0 476.4G 0 part / sdb 8:16 1 14.6G 0 disk ├─sdb1 8:17 1 292M 0 part └─sdb2 8:18 1 6.3M 0 part /media/james/GENTOOLIVE What is really confusing me is that if I look in Gparted, the device is shown as 8GB unallocated, but this is a 16GB drive. I ran badblocks -wsv, which passed but did so suspiciously quickly (minutes instead of hours). 
After unplugging and replugging, the drive shows up as /dev/sdc, and Gparted sees 14.56GB partition called "gentoo" Testing with pattern 0xaa: set_o_direct: Invalid argument/0/0 errors) done Reading and comparing: done Testing with pattern 0x55: done Reading and comparing: done Testing with pattern 0xff: done Reading and comparing: done Testing with pattern 0x00: done Reading and comparing: done Pass completed, 0 bad blocks found. (0/0/0 errors) ➜ ~ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 477G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi └─sda2 8:2 0 476.4G 0 part / sdc 8:32 1 14.6G 0 disk ├─sdc1 8:33 1 292M 0 part └─sdc2 8:34 1 6.3M 0 part I'm guessing I should just put this flash drive out to pasture, but it seems to me such an odd sequence of events, I'm curious as to what sort of failure might have caused it (not really looking for a fix). Edit: This was on Xubuntu 18.04 Edit2: After a reboot, zeroing works as expected. I guess it was just a temporary issue with the OS. I'm still curious about what sort of issue though. Edit3: I spoke too soon, a reboot was not sufficient. I thought dd was working because it was taking a normal amount of time, but it seems not. ➜ ~ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 477G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi └─sda2 8:2 0 476.4G 0 part / sdb 8:16 1 14.6G 0 disk ├─sdb1 8:17 1 292M 0 part /media/james/Gentoo amd64 20190703T214502Z └─sdb2 8:18 1 6.3M 0 part ➜ ~ sudo dd if=/dev/zero of=/dev/sdb [sudo] password for james: Sorry, try again. 
[sudo] password for james: dd: writing to '/dev/sdb': No space left on device 30629377+0 records in 30629376+0 records out 15682240512 bytes (16 GB, 15 GiB) copied, 4232.1 s, 3.7 MB/s ➜ ~ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 477G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi └─sda2 8:2 0 476.4G 0 part / sdb 8:16 1 14.6G 0 disk ├─sdb1 8:17 1 292M 0 part /media/james/Gentoo amd64 20190703T214502Z └─sdb2 8:18 1 6.3M 0 part Edit 4: Ok, so dd did actually work, but lsblk did not update until I ejected and put back in. ➜ ~ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 477G 0 disk ├─sda1 8:1 0 512M 0 part /boot/efi └─sda2 8:2 0 476.4G 0 part / sdb 8:16 1 14.6G 0 disk Edit 5: I checked dmesg and there is a warning about the disk not being properly mounted. ➜ ~ journalctl --dmesg --since="3 days ago" | grep sdb Jul 09 19:59:27 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] 30595072 512-byte logical blocks: (15.7 GB/14.6 GiB) Jul 09 19:59:27 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Write Protect is off Jul 09 19:59:27 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Mode Sense: 43 00 00 00 Jul 09 19:59:27 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA Jul 09 19:59:27 james-Latitude-E7470 kernel: sdb: sdb1 Jul 09 19:59:27 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Attached SCSI removable disk Jul 09 19:59:33 james-Latitude-E7470 kernel: FAT-fs (sdb1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck. 
Jul 10 02:38:38 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] 30629376 512-byte logical blocks: (15.7 GB/14.6 GiB) Jul 10 02:38:38 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Write Protect is off Jul 10 02:38:38 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Mode Sense: 43 00 00 00 Jul 10 02:38:38 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA Jul 10 02:38:38 james-Latitude-E7470 kernel: sdb: sdb1 sdb2 Jul 10 02:38:38 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Attached SCSI removable disk Jul 10 04:12:42 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] 30629376 512-byte logical blocks: (15.7 GB/14.6 GiB) Jul 10 04:12:42 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Write Protect is off Jul 10 04:12:42 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Mode Sense: 43 00 00 00 Jul 10 04:12:42 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA Jul 10 04:12:42 james-Latitude-E7470 kernel: sd 3:0:0:0: [sdb] Attached SCSI removable disk
#the partitions are still there! This part, at least, is still normal. You have to re-read the partition table to update the partition information. You can trigger a re-read with blockdev --rereadpt /dev/sdx (alternatively sfdisk --re-read, but that option has been removed from recent versions of sfdisk). For everything else, yes, it's possible for flash-based storage to fail into a read-only mode. Strange things might also happen for other reasons, for example if the USB connection is not stable: the /dev/sdx device might be dropped and redetected as /dev/sdy, and all writes to /dev/sdx go to limbo, but I guess that would have showed up in your lsblk. Sometimes there are interesting error messages in dmesg, but all in all... if your USB stick has failed you just have to get a new one, no way around it. For the sake of completeness, there is also this special case here (user error): # dd bs=1M if=/dev/zero of=/dev/sdx dd: error writing '/dev/sdx': No space left on device 7956+0 records in 7955+0 records out 8341966848 bytes (8.3 GB, 7.8 GiB) copied, 2.08568 s, 4.0 GB/s So. Did this command write to a device? Not at all. I don't even have a /dev/sdx. Instead it filled my devtmpfs with a 50%-of-RAM-sized regular file of zeroes. (I really should adjust my tmpfs size limits. If you do this on two instances of tmpfs, the system crashes because RAM is full.) That happens when the device is missing entirely, or the device name is mistyped, since dd does not check for existence beforehand at all; and if your machine has lots of RAM, and /dev is not limited to a sane size like 10M, then you get this confusing result.
dd if=/dev/zero leaves drive contents intact? Bad USB stick?
1,361,291,930,000
I recently bought a 4TB hard disk and I want to do a read-write badblocks test on it before I start using it. Since running badblocks with the -w option would take ages for 4TB, I thought I would first write a pattern to the disk and then read that pattern back through badblocks with the -t option, completing it in hours rather than days. The problem is that I couldn't understand how the -t option of badblocks works, and I got no search results when searching the web for an example of using badblocks with the -t option. From the man page, the test pattern is a numeric value between 0 and ULONG_MAX-1 (I'm making a wild guess that ULONG_MAX is 2^32), but I'm not sure how to provide the pattern (decimal? hexadecimal? binary? an ASCII string with length < 2^(32/8)?). And is the pattern size related to the block size? The other part of the problem is to write a script to fill the hard disk with the pattern. I can write a Ruby script to do that, but a one-liner bash command piped through pv would be nice!
Letting badblocks write the pattern in the first place should be no slower than writing it any other way. Especially if you use the -b block-size and -c blocks-at-once options so it doesn't do small reads/writes. This example overwrites the disk with "random" pattern in 1MiB blocks: badblocks -v -w -t random -b 4096 -c 256 /dev/thedisk If there is a problem with speed, it should have some other cause...
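The 1MiB figure in that example is just the product of the two options, which you can verify with shell arithmetic:

```shell
# -b 4096 (block size in bytes) times -c 256 (blocks processed at once):
chunk=$(( 4096 * 256 ))
echo "$chunk"            # bytes per read/write pass (1 MiB)
```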
How to write a pattern to a device to use it with badblocks with -t test_pattern option
1,361,291,930,000
I have a MD drive set in RAID6 configuration with LVM2 on top. One of my drives was faulted last night due to bad blocks. Now apart from the fact that I am aware I should physically replace the drive, I am wondering: Can I feed MD a list of badblocks (such as I can with mke2fs -l [badblockslist]) in such a way that the these blocks are no longer used and effectively extend the life of the disk?
No, you can't. You should also check the drive's SMART status either with the gnome disk utility or with smartctl from the smartmontools package. If it is only a few bad sectors, md should have tried to rewrite them, which should have triggered the drive to automatically remap them to the spare pool. If you have enough bad sectors that the spare pool has run out, then you need to replace the drive immediately.
Can MD cope with a badblock list?
1,361,291,930,000
I am trying to check a mounted partition to see if the drive has errors:

    [root@virtuality ~]# /sbin/badblocks -v /dev/sdb1
    Segmentation fault

Uh oh. What does this mean? Why is badblocks segfaulting? Can I fix it? (System is CentOS release 4.6, drive is a SATA drive) EDIT: Using strace:

    [root@virtuality ~]# strace /sbin/badblocks -v /dev/sdb1
    ...[snip]...
    open("/dev/sdb1", O_RDONLY)             = 3
    ioctl(3, BLKGETSIZE, 0x7fbffff878)      = 0
    close(3)                                = 0
    open("/dev/sdb1", O_RDONLY)             = 3
    --- SIGSEGV (Segmentation fault) @ 0 (0) ---
    +++ killed by SIGSEGV +++
It turned out this was a numbskull error; my copy of badblocks apparently just had a bug. I ran yum update, and after that badblocks no longer segfaulted.
Why is badblocks segfaulting?
1,361,291,930,000
The hard drive is connected through an external enclosure via USB 3.0.

    $ sudo smartctl -a /dev/sdb
    === START OF INFORMATION SECTION ===
    Model Family:     Seagate Barracuda 3.5
    Device Model:     ST4000DM004-2CV104
    Firmware Version: 0001
    User Capacity:    4,000,787,030,016 bytes [4.00 TB]
    Sector Sizes:     512 bytes logical, 4096 bytes physical
    Rotation Rate:    5425 rpm
    Form Factor:      3.5 inches
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   ACS-3 T13/2161-D revision 5
    SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

You can see that it shows 4096 bytes physical in the above output. But all the following commands report 512 bytes:

    $ sudo fdisk -l /dev/sdb
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk model: USB3.0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 4096 bytes / 33553920 bytes

    $ cat /sys/block/sdb/queue/hw_sector_size
    512
    $ cat /sys/block/sdb/queue/physical_block_size
    512

    $ lsblk -o NAME,PHY-SeC /dev/sdb
    NAME    PHY-SEC
    sdb         512
    └─sdb1      512

    $ sudo blockdev --getbsz /dev/sdb
    512

So what is the real physical sector size of the hard drive? I think it should be 4096, but I don't know why all the other commands give me a different result. By the way, the reason I want to find the real physical sector size is that I want to use -b 4096 when running badblocks. Thanks a lot. PS: If the answer is 4096, are there any other tools in Linux that can show the real result besides smartctl? I found one more command that shows 4096:

    $ sudo hdparm -I /dev/sdb | grep -i physical
        Physical Sector size:  4096 bytes
According to fdisk, it's a USB disk, so the information is hidden behind the USB bridge. smartctl has a database of many disks, so it can get the physical size from there. In fact, any HD built in the last few years, especially in the multi-TB sizes, will have a physical block size of 4096 bytes. I've just been looking at my smallish NVMe drive and gdisk says that logical/physical sizes are both 512 bytes - but partitions will still be aligned on 2048-sector boundaries (that's 1 MiB). It says the same about SATA SSDs.
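To survey what the kernel believes for every disk in one go, you can read the block queue attributes directly from sysfs. A small sketch, keeping in mind that over a USB bridge these values may be wrong (often 512/512, as in the question), while hdparm -I or smartctl -i query the drive itself:

```shell
# Print the kernel's idea of logical/physical sector size per block device.
# Over a USB-SATA bridge the physical size may be misreported as 512.
for q in /sys/block/*/queue; do
    dev=${q%/queue}
    printf '%-8s logical=%s physical=%s\n' "${dev##*/}" \
        "$(cat "$q/logical_block_size")" \
        "$(cat "$q/physical_block_size")"
done
```

When the sysfs values and hdparm/smartctl disagree, trust the latter for the drive's native geometry, and use that figure for badblocks -b.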
What is my hard drive's physical sector size? fdisk, smartctl, etc. give different answers