I want to do regular backups of my remote VPS via a cron job. Both systems run Debian 10. I have been following this guide and tweaked it to my liking. Relevant parts of the script, /root/.local/bin/backup:

    #!/bin/bash
    SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
    rsync -avzAHXh -e ssh root@[someIP]:/path/to/remote/dir /path/to/local/dir \
        || { echo "rsync died with error code $?"; exit 1; }

When I run this from the terminal, everything works fine. However, if I run it via a cron job:

    crontab -u root -e
    # m h dom mon dow command
    0 6 * * * /root/.local/bin/backup >> /var/log/backup 2>&1

then /var/log/backup shows:

    root@[someIP]: Permission denied (publickey).^M
    rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
    rsync error: error in rsync protocol data stream (code 12) at io.c(235) [Receiver=3.1.3]
    rsync died with error code 12

What is going wrong in the cron job, and what can I do about it?

PS: I have deleted the passphrase for the GPG key I use here, trying to make this work. Ideally I would like a solution that works even when I add a passphrase again.
Commands executed by cron have an extremely basic execution environment and PATH setting. A frequent error is to test a command or script as your normal user ID, only to have it fail when run by cron, because exported environment variables are often overlooked. It is prudent to do a final test, as root, in a completely cut-down environment (a sub-shell) before deployment.

You can always add a one-off cron job that simply prints its environment to a log file; this lets you emulate the exact conditions your command will run under when invoked by cron. A second advantage is that any error can be displayed on your terminal, which makes debugging easier.

It looks like the variable assigned in your script is not exported, so SSH will not pick it up.

It's also important to use absolute file paths in a script: unless you change directory explicitly, you can't assume you are in any particular directory. Printing the working directory in the one-off test script mentioned above helps as well. You cannot assume all distributions are identical in this respect, and it certainly doesn't hurt to check.
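The export point can be demonstrated without cron at all. In this sketch, FOO is a hypothetical stand-in for SSH_AUTH_SOCK (and its value is a made-up example path); a child process such as ssh only sees variables that have been exported:

```shell
#!/bin/sh
# FOO stands in for SSH_AUTH_SOCK; the path is a made-up example.
FOO=/run/user/1000/gnupg/S.gpg-agent.ssh
sh -c 'echo "child sees: ${FOO:-nothing}"'   # not exported: child sees nothing
export FOO
sh -c 'echo "child sees: ${FOO:-nothing}"'   # exported: child sees the value
```

This is why adding `export SSH_AUTH_SOCK` after the assignment in the backup script is the first thing to try.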
Remote Server: Permission denied (publickey) when using rsync with ssh with gpg via cronjob
I notice that I am getting 2 different cron processes in ps:

    [user@host ~]$ ps aux | grep -i cron
    500        746  0.0  0.0   6384   676 pts/1  S+  13:40  0:00 grep -i cron
    root       905  0.0  0.0  20408  1036 ?      Ss  2019  54:54 crond
    root     30406  0.0  0.1  39152  1672 ?      S   Feb14  0:00 CROND
    smmsp    30429  0.0  0.2  76424  3820 ?      S   Feb14  0:00 /usr/sbin/sendmail -FCronDaemon -i -odi -oem -oi -t -f root

I also have a zombie process that has 30406 as its parent:

    [user@host ~]$ ps -A -ostat,ppid | grep -e '[zZ]' | awk '{ print $2 }' | uniq | xargs ps -p
      PID TTY          TIME CMD
    30406 ?        00:00:00 crond

What is this second CROND process?
CROND is a child process of crond; it gets created when a crontab entry is processed.

    $ ps -ef | grep -i cron
    UID        PID   PPID C STIME TTY  TIME     CMD
    root      2289      1 0 Feb12 ?    00:00:02 /usr/sbin/crond -n
    root    446475   2289 0 14:37 ?    00:00:00 /usr/sbin/CROND -n

Process 2289 (crond) is the parent process of 446475 (CROND).
Why is my host displaying 2 cron processes?
System: Mac Big Sur. I've been working more in the terminal and I thought it would be fun to have fortune present a different message regularly during login. I know I can just add the fortune command to my .bash_profile, but I wanted to run it as a motd for every user of the system and schedule it with launchd/crontab. The last time, I set up a crontab to run my script as root, but it emailed me an error about not being able to locate the commands in my script. Here's my script:

    #!/bin/bash
    # create a motd to be run by crontab every hour
    fortune quotes | cowsay -f blowfish | lolcat > /etc/motd

I haven't tried this yet, but I was thinking I could use the absolute paths to the executables, if that is the issue. Or do I need backquotes around the commands? Would I need to change the root .bashrc file to have a search path to the file locations?
The issue is most likely that the script does not have the correct value of the environment variable PATH. It is in any case not any form of syntax issue. You can solve this in one of three (or more) ways:

1. Use absolute paths for each command, as you yourself suggest. The only instance when you really need to do this is when you have multiple variants of utilities in the directories that you would otherwise add to PATH, and it's important that you pick the correct variant. For example, you may have lolcat in both /usr/local/bin and /opt/bin, and you really want to use the lolcat from /opt/bin. At the same time, you want to use the cowsay in /usr/local/bin and not the one in /opt/bin. In this situation, you can't just add the two paths to PATH.

2. Add the directory (or directories) where the commands are located to the PATH variable in the script, before running your pipeline. E.g.,

    #!/bin/sh
    PATH=$PATH:/usr/local/bin
    fortune quotes | cowsay -f blowfish | lolcat >/etc/motd

The PATH variable does not need to be exported as it's already an environment variable. The benefit of this is that the script isn't littered with absolute paths to commands (easier to read).

3. You can also modify the PATH variable as you call the script from your crontab:

    0 * * * * PATH=$PATH:/usr/local/bin /path/to/my-motd-script.sh

(or whatever your schedule looks like). Setting a variable on the command line like this sets it in the environment of your script. You could also use the env utility to the same effect:

    0 * * * * env PATH="$PATH:/usr/local/bin" /path/to/my-motd-script.sh

This would allow you to keep your current script unmodified.

Changing the PATH in root's .bashrc file would have no effect at all, as that file is not read for non-interactive shell sessions, like cron jobs.

Note that I changed the #! line to #!/bin/sh. I did that as there is nothing in the script that actually needs bash. In this case, this has more to do with aesthetics than with actual functionality.
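As a quick illustration of why PATH matters here, the following sketch (mycmd is a made-up command created only for this demo) shows the same command failing and then succeeding purely depending on whether its directory is on PATH:

```shell
#!/bin/sh
# Create a throwaway command in a temporary directory.
dir=$(mktemp -d)
printf '#!/bin/sh\necho hello from mycmd\n' > "$dir/mycmd"
chmod +x "$dir/mycmd"

mycmd 2>/dev/null || echo "not found: directory is not on PATH"

# Extend PATH, exactly as suggested for the motd script above.
PATH=$PATH:$dir
mycmd    # now found via the extended PATH

rm -rf "$dir"
```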
Bash or path problem?
I'm currently working on a bash script that will add jobs to root's crontab. I'm trying to achieve this with the command:

    sudo crontab -e -u root | { cat; echo "@reboot /home/$CURRENT_USER/scripts/reboot.sh"; } | crontab -

I'm testing this command in a shell on Ubuntu Server 18.04, and right after typing the password for sudo, the shell hangs completely. I've tested this several times. Server resources are OK, and everything works fine after establishing a new SSH connection. Please help me understand what is wrong with this command and what I should do to make it write something to root's crontab (sudo su doesn't work, because it creates a new shell, which stops the script).
crontab -e invokes an editor. The output from the editor goes to the cat command, but (at best) it's waiting for you to edit the file. You probably should do something like this instead:

    job="@reboot /home/$CURRENT_USER/scripts/reboot.sh"
    tab="$(crontab -l)"
    { echo "$tab" | grep -vxF -e "$job"; echo "$job"; } | crontab

If the snippet isn't already being run as root, change both instances of the crontab command to sudo crontab -u root.
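The grep -vxF filter is what makes that snippet idempotent: the job line is removed if already present, then re-added exactly once. Here is a self-contained sketch of just that filtering logic, using a fake crontab snapshot instead of the real crontab command (the job path is made up):

```shell
#!/bin/sh
# A fake existing crontab; in the real snippet this would come from `crontab -l`.
tab='0 5 * * 1 /usr/bin/some-backup'
job='@reboot /home/user/scripts/reboot.sh'

# Remove any existing copy of the job line, then append it once.
new=$( { printf '%s\n' "$tab" | grep -vxF -e "$job"; printf '%s\n' "$job"; } )
printf '%s\n' "$new"

# Applying the same filter a second time does not duplicate the job line.
new2=$( { printf '%s\n' "$new" | grep -vxF -e "$job"; printf '%s\n' "$job"; } )
[ "$new" = "$new2" ] && echo "no duplicates"
```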
I cannot echo job to root's crontab with sudo - shell hangs
I have this report, but I can't understand why it doesn't recognize the gradle command... I'm new to cron, thanks for your help!

    From [email protected] Sun Dec 13 04:02:01 2020
    Return-Path: <[email protected]>
    X-Original-To: ubuntu
    Delivered-To: [email protected]
    Received: by vps-a29e040b.vps.ovh.net (Postfix, from userid 1000)
        id EF0278157A; Sun, 13 Dec 2020 04:02:01 +0000 (UTC)
    From: [email protected] (Cron Daemon)
    To: [email protected]
    Subject: Cron <ubuntu@vps-a29e040b> ./myscript.sh
    MIME-Version: 1.0
    Content-Type: text/plain; charset=UTF-8
    Content-Transfer-Encoding: 8bit
    X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin>
    X-Cron-Env: <SHELL=/bin/sh>
    X-Cron-Env: <HOME=/home/ubuntu>
    X-Cron-Env: <LOGNAME=ubuntu>
    Message-Id: <[email protected]>
    Date: Sun, 13 Dec 2020 04:02:01 +0000 (UTC)

    TERM environment variable not set.
    ./myscript.sh: line 4: gradle: command not found
cron is fairly easy to use, but it has some idiosyncrasies you should be aware of. Here's a brief summary:

1. cron jobs have a different ENVIRONMENT

cron jobs run with an ENVIRONMENT that's different from your interactive shell. The PATH is part of the ENVIRONMENT, and this may be the cause of your issue: the PATH to gradle may not be in the PATH used by cron. This can cause issues for any cron job.

Solution: Typically this is resolved by using a full path specification for all commands. Instead of this:

    0 2 * * * gradle

use this:

    0 2 * * * /full/path/to/gradle

2. cron has no awareness of resource availability

cron does not verify that the resources needed to run a job are available before attempting to run the job. This is typically encountered when running a cron job at boot time (i.e. using the @reboot facility); an example is a job that runs at boot time and requires access to network resources.

Solution: This is typically resolved by adding a sleep command before the job:

    @reboot ( /bin/sleep 30; /home/myhome/myprog.sh )

3. Runtime errors in a cron job are not reported

cron has no access to your terminal, so stderr goes to /dev/null. If you want to see errors (and you always do!), you need only redirect stderr to a file. This is one way to do that:

    0 2 * * * /home/myhome/dosomething.sh > /home/myhome/cronjoblog 2>&1

This redirects all stdout (1) to /home/myhome/cronjoblog, and redirects stderr (2) to stdout (2>&1). All output goes to your cronjoblog file.

Further reading on cron
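The redirection from point 3 can be tried outside cron. This sketch captures both streams of a command into one log file, exactly as the crontab line above does (the log path is a temporary file here):

```shell
#!/bin/sh
log=$(mktemp)
# > sends stdout to the log; 2>&1 then sends stderr to the same place.
{ echo "normal output"; echo "an error" >&2; } > "$log" 2>&1
cat "$log"
rm -f "$log"
```

Both "normal output" and "an error" end up in the log, in order.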
Can't Use Gradle With Cron Ubuntu
This is what my crontab looks like:

    * * * * * /bin/sh /home/rathindu/assignment/test.sh

The test.sh file:

    #!/bin/sh
    mkdir new

The script is not running. But if I just open the terminal and invoke the script without using crontab, it works perfectly. When I inspect the cron syslog, this is what I get:

    CRON[6909]: (CRON) info (No MTA installed, discarding output
Just as @αғsнιη suggested in the comment, I replaced every relative path with an absolute path and it worked perfectly. There was no need to use /bin/mkdir; it worked fine with a simple mkdir, but the paths to the files had to be changed to their absolute paths:

    mkdir new

had to be changed to

    mkdir /home/username/folder/new

And regarding the

    CRON[6909]: (CRON) info (No MTA installed, discarding output

message, it was just a matter of installing a local mailbox:

    apt-get install postfix

and then the mails can be found with:

    tail -f /var/mail/<cron user>
Cronjob does not work on Linux Mint 20
I'm a newbie with the crontab command, and while I was investigating it, I suddenly typed some number and made my crontab -e look like this:

    pi@raspberrypi:~ $ crontab -e
    no crontab for pi - using an empty one
    889

Is there any way to set crontab back to default, or how can I delete it? I just want to use crontab to run my tasks automatically.

Edit: this is what it shows after I followed your instruction (export VISUAL=vi, then crontab -e):

    # Edit this file to introduce tasks to be run by cron.
    #
    # Each task to run has to be defined through a single line
    # indicating with different fields when the task will be run
    # and what command to run for the task
    #
    # To define the time you can provide concrete values for
    # minute (m), hour (h), day of month (dom), month (mon),
    # and day of week (dow) or use '*' in these fields (for 'any').
    #
    # Notice that tasks will be started based on the cron's system
    # daemon's notion of time and timezones.
    #
    # Output of the crontab jobs (including errors) is sent through
    # email to the user the crontab file belongs to (unless redirected).
    #
    # For example, you can run a backup of all your user accounts
    # at 5 a.m every week with:
    # 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
    #
    # For more information see the manual pages of crontab(5) and cron(8)
    #
    # m h dom mon dow command
    "/tmp/crontab.QzVh1G/crontab" 23 lines, 898964 characters

It seems I cannot edit this file except by using qa! to exit. Is there anything I missed?
Your editor is set to ed. The ed editor is a very basic line editor which outputs the number of bytes in the file when you open it; in this case, your crontab file contains 889 bytes (type ,p and press Enter in the editor to see the contents of the file). You most likely don't want to use ed as your editor (or you would have recognized that you had started it).

To exit the editor, simply type q and press Enter, or press Ctrl+D. Then run crontab -e again, but with the VISUAL environment variable set to the editor that you most commonly use to edit files on your system. Here is how you may set VISUAL to vi, as an example, but you may use nano or any other terminal editor that happens to be installed:

    export VISUAL=vi
    crontab -e

You may want to set the value of VISUAL in your shell's startup file (in ~/.bashrc if you're using bash).
Crontab -e simple problem
I have installed Manjaro Linux with 5.7.0-3-MANJARO kernel. The problem is that every time I boot the system, I have to start my NetworkManager module manually with the command sudo systemctl start NetworkManager. I want my system to start it automatically. I tried adding it as a cron job with crontab -e, but it does not work. How do I fix this?
Do not try to use cron for this! Cron is meant to run repeating tasks; if you try to do it with cron, it will constantly run this command again (it doesn't do any real damage, but that doesn't make it a good idea either).

The correct solution:

    sudo systemctl enable NetworkManager

(The difference between enable and start is that enable makes sure that, from that moment on, the service always starts when the system boots.)
Setup a new startup job in Manjaro
I run a .sh bash file from the directory httpdocs/pub/ftp-admin/. On the command line, from that directory, everything works fine. If I make a cron task in Plesk, all Magento 2 commands fail and I get the message "Magento supports PHP 7.1.3 or later." See the script below; what is wrong? I run CentOS Linux 7.8.2003 with PHP 7.2.31.

    #!/usr/bin/env bash
    file=*.ZIP
    if [ -f $file ]
    then
        echo $file "exist"
        # cp $file ./backup
        # unzip -P web $file
        # rm -f $file
    fi
    #
    if [ -f *BASIC.XML ]
    then
        mv *_BASIC.XML BASIC.XML
        php -f ../../bin/magento import:job:run 1
    fi
    #
    if [ -f *PRICES.XML ]
    then
        mv *_PRICES.XML PRICES.XML
        php -f ../../bin/magento import:job:run 4
        php -f ../../bin/magento import:job:run 2
    fi
    #
    if [ -f *STOCKINFO.XML ]
    then
        mv *_STOCKINFO.XML STOCKINFO.XML
        php -f ../../bin/magento import:job:run 3
    fi
    #
    cp -f *.XML ./backup
    #
    rm -f *.XML
The problem is solved. Cron tasks in Plesk run under a lower PHP version than from SSH. Editing the script like this solved the problem:

    /opt/plesk/php/7.2/bin/php -f $MAGEPATH/magento import:job:run 1
cron job task error while command line works OK
I'd like to run the following backup command as a recurring job, where User is my username and /mnt/Rsync_Dell is a folder I created on OMV5 with a static IP:

    sudo rsync -apEo /home/User/ /mnt/Rsync_Dell/Prova/

If I run this command in the terminal, it runs successfully; in the crontab file it fails. What I do is the following:

    crontab -e

and insert

    %% * * * * sudo rsync -avzpEo /home/User/Desktop /mnt/Rsync_Dell/prova/

where %% is any minute in the future, just to test it out. I run other rsync commands locally, and they work. It just seems that the combination of crontab and Samba shares fails.
Since you can't give sudo a password when using it from your own crontab, and since the purpose of using sudo in the first place is to gain root privileges, running your rsync command from root's crontab rather than from your own seems like a better option. You may edit the root user's crontab using

    sudo crontab -e

Within it, you would add the same schedule that you show in your question, but without the sudo command.
rsync backup running via cron
Most logs on my Mac have a timestamp before the output message, like this:

    Nov 14 17:55:24 - SoftRAID Driver: SoftRAID driver loaded, version 5.8.1.

I am running an executable in cron, and instead of using mail I want to output all info to a log file and add a timestamp to each line in front of the output. I am using >> to append to a single file. This is my crontab:

    SHELL=/bin/bash
    MAILTO=""
    * 6-23 * * * /usr/local/bin/urlwatch >> /Users/john/cronjobs/urlwatch.log

This is what I get in urlwatch.log:

    UNCHANGED: (01)urlwatch update released (https://github.com/thp/urlwatch/releases/latest)

How do I make the output look like this?

    Nov 14 17:55:24 - UNCHANGED: (01)urlwatch update released (https://github.com/thp/urlwatch/releases/latest)

I have tried many different suggestions from across the web and have had no luck. If saving to a text file is easier, that will work as well. I tried this, which is close:

    * 6-23 * * * (date && /usr/local/bin/urlwatch) >> /Users/john/cronjobs/urlwatch.log

Output in the log file looks like this:

    Sun Mar 15 13:35:00 CDT 2020
    UNCHANGED: (03)RansomWhere? Objective-See (https://objective-see.com/products/ransomwhere.html)
    UNCHANGED: (01)urlwatch update released (https://github.com/thp/urlwatch/releases/latest)
    UNCHANGED: (02)urlwatch webpage (https://thp.io/2008/urlwatch/)
If you have (or can obtain, from Homebrew for example) the ts timestamp utility, then you should be able to do something like

    /usr/local/bin/urlwatch | /path/to/ts >> /Users/john/cronjobs/urlwatch.log

(/path/to/ts would typically be /usr/bin/ts on Linux systems; it may be another location on OSX). From man ts:

    DESCRIPTION
        ts adds a timestamp to the beginning of each line of input.

If you want to specify a non-default time format then you can do so, but remember that the % sign has special significance in cron and must be escaped. See for example "How can I execute date inside of a cron tab job?"

You may find that groups of lines are given the same timestamp; that may be because the program generating them is buffering its output. If either the unbuffer or stdbuf utilities is available for your system, it may be possible to enforce line-buffering as described in "Turn off buffering in pipe".
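If ts is not installed and you'd rather not add it, a minimal substitute can be sketched with awk. This is only an approximation of ts, not a drop-in replacement, but it produces the "Mon DD HH:MM:SS -" prefix from the question:

```shell
#!/bin/sh
# Prefix each input line with a timestamp, roughly like the desired log format.
printf 'line one\nline two\n' | awk '{
    cmd = "date \"+%b %d %H:%M:%S\""
    cmd | getline stamp   # read the current time
    close(cmd)            # close so the next line gets a fresh timestamp
    print stamp " - " $0
}'
```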
How to redirect output to a log file from cron and prepend timestamp?
I have created a downloads file inside the /etc/cron.d/ directory. The following is the content of the downloads file:

    * * * * * root /usr/bin/python3 /path/python.py

File permissions and owner:

    -rw-r--r-- 1 root root 79 Dec 25 22:45 downloads

The command

    systemctl status crond

gave the following error:

    Unit crond.service could not be found.

Running /usr/bin/python3 /path/python.py from the terminal executes correctly.
Append one blank line to the end of your downloads file: cron jobs need newline termination characters. Also, it is better to manage cron jobs with crontab -e (if you want root privileges, sudo crontab -e). In case you forget the trailing newline, crontab will warn you.
cron not running in Ubuntu 18.04.3 LTS
I recently installed Timeshift and set it to back up weekly. However, when I looked in /etc/cron.d, I only found timeshift-hourly, which ran:

    0 * * * * root timeshift --check --scripted

which appears to check with Timeshift if a backup is scheduled and, if so, do it. However, I would like to get a cron email every time I back up, and I'm guessing that if I set it for timeshift-hourly, then I will get an email every time it checks, every hour. How would I be able to get a cron email every time Timeshift does a backup, and not every time it checks whether it's time to back up?
This is already built into Timeshift, just disabled by default. In the Timeshift settings menu, simply uncheck the "Stop cron emails" box and you should be good to go.
Timeshift weekly cron emails
I want to put timestamp traces into two different files when I call the script from a cron task: one file for stdout (script.log) and another for stderr (script.err). With this cron line I get timestamps in the script.log file, but it doesn't work for script.err:

    */1 * * * * ((/home/user/script.sh) | ts "\%H:\%M:\%.S ->") 2>>/home/user/script.err >> /home/user/script.log

How can I add timestamps to script.err too?
One way is to redirect the stdout of ts first, and then redirect the stderr of the pipeline to stdout and pipe it to another ts:

    */1 * * * * (/home/user/script.sh | ts "\%H:\%M:\%.S ->" >> /home/user/script.log) 2>&1 | ts "\%H:\%M:\%.S ->" >> /home/user/script.err

(Though this way, any stderr output from the first ts will also be logged by the second ts.)
how to add timestamp to stdout and stderr from a crontab job?
I have a script (main.sh) which calls another script (Get_Files.sh) and gets a value in a variable like this:

    File_to_Refresh="$(sh /Aug/work/Get_Files.sh /Aug/Work/Universal_File.txt)"

Now I have to schedule the script main.sh for 3 different files, i.e. Universal_File.txt with 3 different values, on 3 different days. So, to make the script generic, I want to pass the universal file to Get_Files.sh from the cron entry only. How can I achieve this? The following works if I run the script manually, like sh /Aug/Work/main.sh /Aug/Work/Universal_File.txt, but it's not working when I run it through cron:

    File_to_Refresh="$(sh /Aug/work/Get_Files.sh "$1")"

Cron:

    45 08 * * * sh /Aug/Work/main.sh /Aug/Work/Universal_File.txt
If you have 3 cron entries, one for each day (as mentioned in the comments), you should be able to specify the file used for each cron entry as an argument and use $1 in the main.sh script:

    $ cat main.sh
    File_to_Refresh=$(sh sub.sh $1)
    echo FileToRefresh: $File_to_Refresh

    $ cat sub.sh
    echo Sub \$1: $1

    $ ./main.sh /Aug/Work/File1.txt
    FileToRefresh: Sub $1: /Aug/Work/File1.txt

I'm not convinced you need the quote marks around the $(xxx); it seems to work both ways for me.
Cron entry to pass the value to script
I want to schedule a job for the 3rd Friday of every month, at 1 AM. I checked a few cron entry websites, but that didn't work for me. I was also checking some awk options for this, so far without success. Can you help me with that? I tried to run this for today; the cron fires fine, but the script runs continuously at the same point and never completes:

    0 1 15-21 * * test $(date +\%u) -eq 5 && echo "3rd friday" && Extract_Param.sh /landing/file/ABC/file.txt
Unfortunately, when you give both a day-of-month and a day-of-week in a crontab entry, either one is sufficient for the job to fire. (Didn't know that either, but the manpage says so.) This means we can't simply give cron both constraints and expect it to compute that the 3rd Friday is a Friday between the 15th and 21st of the month. Fortunately, the above-linked man page also advises us:

    One can, however, achieve the desired result by adding a test to the
    command (see the last example in EXAMPLE CRON FILE below). […]
    # Run on every second Saturday of the month
    0 4 8-14 * * test $(date +\%u) -eq 6 && echo "2nd Saturday"

so you should be fine with

    0 1 15-21 * * test $(date +\%u) -eq 5 && echo "3rd friday"
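The guard logic can be sanity-checked against a known date; 2019-10-18, for instance, was the 3rd Friday of that month. This sketch assumes GNU date for the -d option:

```shell
#!/bin/sh
dom=18                          # day of month; 15-21 is the 3rd-Friday window
dow=$(date -d 2019-10-18 +%u)   # %u gives 5 for Friday (GNU date)
if [ "$dom" -ge 15 ] && [ "$dom" -le 21 ] && [ "$dow" -eq 5 ]; then
    echo "3rd friday"
fi
```

In the real crontab the 15-21 window is enforced by the day-of-month field, so only the day-of-week test needs to live in the command.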
CRON Job Schedule issues
I'm trying to run a daily cron app. What I did:

    crontab -e

Inside the file, I have:

    0 0 * * * cd /home/ec2-user/myapp && docker-compose up

When I check /var/log/cron, I get:

    Jul 29 00:00:01 localhost CROND[28549]: (ec2-user) CMD (cd /home/ec2-user/myapp && docker-compose up)

But I have no logs in myapp, and I can tell the app has not run. What am I missing?
docker-compose cannot be found in the cron user's PATH variable, and therefore cannot be run. One way to fix this is to provide the complete path to the binary:

    0 0 * * * cd /home/ec2-user/myapp && /usr/local/bin/docker-compose up
Cron logs show activity, but app logs show nothing
I'm running Ubuntu and have a simple cron job fetching a remote JSON file and (over)writing it to the server:

    */15 * * * * /usr/bin/curl -m 120 -s https://path/to/remote/json.json > /store/json/here.json

However, I need to make sure the external JSON actually returns data before it overwrites the JSON file located on the server. How can I achieve this? I've found some ways to do it straight in bash, but they don't seem to work when I put them in the crontab.
Write a short shell script and call the script from your crontab. The script may look something like

    #!/bin/sh

    PATH=/usr/bin:$PATH

    cd /store/json || exit 1

    if curl -m 120 -s https://path/to/remote/json.json >here.json.tmp &&
       [ -s here.json.tmp ]
    then
        mv here.json.tmp here.json
    else
        rm here.json.tmp
    fi

The -s test is true if the given file has a size greater than zero.
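The download-to-temp-then-rename pattern hinges on that [ -s file ] test. A quick sketch of its behavior, using a temporary file instead of a real download:

```shell
#!/bin/sh
tmp=$(mktemp)                  # mktemp creates an empty file
[ -s "$tmp" ] || echo "empty: keep the old copy"
echo '{"ok": true}' > "$tmp"   # now the file has content
[ -s "$tmp" ] && echo "non-empty: safe to move into place"
rm -f "$tmp"
```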
Cronjob fetching JSON with cURL. Check if response isn't empty before writing to file
I've been trying to get all cron job outputs into a file (not email). The alias is set in /etc/aliases:

    logthecron: "|cronlog.sh"

and in the crontab, MAILTO=logthecron. The cronlog.sh file writes the output to some file:

    #!/bin/sh
    $@ 2>&1 | sed -e "s/\(.*\)/[`date`] \1/" >> /tmp/a

I am using sendmail. Sendmail uses smrsh, a restricted shell utility that provides the ability to specify, through the /etc/smrsh directory, an explicit list of executable programs available to sendmail. So I symlinked cronlog.sh and sendmail into that directory, something like:

    ln -s /root/cron/cronlog.sh /etc/smrsh/

And I still keep getting this error:

    May 10 09:33:11 sandbox01 smrsh: uid 8: attempt to use "cronlog.sh"
    May 10 09:33:11 sandbox01 sendmail[23870]: x4ADXB5Y023868: to="|cronlog.sh", ctladdr=<logthecron@[hostname]> (8/0), delay=00:00:00, xdelay=00:00:00, mailer=prog, pri=30787, dsn=5.0.0, stat=Service unavailable
    May 10 09:33:11 sandbox01 sendmail[23870]: x4ADXB5Y023868: x4ADXB5Y023870: DSN: Service unavailable

Note: I am using CentOS 7, the file is executable, email works without issues, and I have tried the entire directory path in the alias. I do not want to write individual cron job outputs; I want to write the output of all cron jobs to one file.

References: smrsh: http://www.faqs.org/docs/securing/chap22sec182.html; "Logging ALL stderr output of crontab to file"
Instead of trying to use MAILTO (which will always be interpreted as an email address), use SHELL. Set SHELL to the path of a small executable shell script that runs the given command with output directed to a file:

    #!/bin/sh
    now=$(date)
    /bin/sh "$@" 2>&1 |
        awk -v now="$now" '{ printf("[%s]\t%s\n", now, $0) }' >/tmp/cronjob.log

The "$@" here will be expanded to -c followed by the job specification from the crontab file. It's important to write "$@" with the double quotes.

In the crontab, use

    SHELL=/path/to/cronrun
    # rest of crontab below...

(assuming /path/to/cronrun is the correct path to that short script)
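The wrapper's behavior can be tried directly. This sketch invokes the same /bin/sh "$@" plus awk pipeline with a fixed timestamp, standing in for what cron would run (the function name and job text are made up for the demo):

```shell
#!/bin/sh
# Stand-in for the wrapper: cron effectively runs it as `cronrun -c 'job...'`.
run_job() {
    now="Mon Jan  1 00:00:00 UTC 2024"   # fixed stamp for a reproducible demo
    /bin/sh "$@" 2>&1 | awk -v now="$now" '{ printf("[%s]\t%s\n", now, $0) }'
}
# Both stdout and stderr of the job get the timestamp prefix.
run_job -c 'echo hello; echo oops >&2'
```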
Logging all crontab output using MAILTO
    matthew@matthew-pc:~$ crontab -l
    [...]
    @reboot 1 5 * sleep 10s && export DISPLAY=:0 && xdg-open ~/@/Testing.jpg

Testing.jpg was not opened at Linux startup on May 1. How can I use crontab to open a file at Linux startup on a specific date, e.g. May 2?
You can't mix a schedule with @reboot; cron doesn't support that. If you want to open a file at Linux startup on a specific date, do this:

1. Write a script (chkdate.sh) that checks the date for a match to (for example) May 2.
2. Use the @reboot spec in cron to run chkdate.sh.

[EDIT: re chkdate.sh, a script to check today's date against a specific date]

You could write a bash script to accomplish this. The date command returns today's date; having that, you only need to test today's date against the specific date and take the desired action. Here's one way to approach it. NOTE: I use a Debian-based Linux distribution, so what follows reflects that; your system may vary somewhat.

Use the date command: open a terminal window, or simply get the command prompt on your system. At the command prompt ($ in my case), type date +%Y-%m-%d:

    $ date +%Y-%m-%d
    2019-05-02

Specify the date format needed: date can provide its output in a variety of formats. In the example above, the format is "year-month-day". If you are interested only in the month and day, the command would be date +%m-%d:

    $ date +%m-%d
    05-02

Regardless of the format you need, you must learn to use the system documentation. To learn how to instruct date to provide its output in the format you need, read its documentation:

    $ man date

Note that the single character q will close the man page.

Once you've decided on the format of the date output, you can begin to write the script chkdate.sh. Let's create the file and open it in an easy-to-use text editor:

    $ nano ~/chkdate.sh

You may begin entering your script; for example:

    #!/bin/bash

    SOMEDAY="05-02"
    TODAY=`date +%m-%d`

    if [ $TODAY == $SOMEDAY ]
    then
        echo "Today is the day!"
    else
        echo "Today is NOT the day."
    fi

Now save the file (^O) and exit the nano editor (^X).

You will need to set the file permissions to make the script executable, and you may then run it:

    $ chmod 755 ~/chkdate.sh
    $ ~/chkdate.sh
    Today is the day!

Because today is May 2nd, the then branch of the if statement is executed (echo "Today is the day!"). On any other day of the year, the else branch would run. Also note that the else branch is not required; you may simply omit it if you don't need it. Assuming this executes, you can modify the script to replace the echo commands with the command(s) you want to run.
How do I use Crontab to open a file at Linux startup only on a specific date?
I have a set of 6 jobs to be run via cron. Let's call them jobs A, B, C, D, E and F. A and B take 2 minutes each to complete; C, D, E and F take 3 minutes each. No job depends on another. The problem with running them all together is a burst of CPU usage followed by everything sitting idle. I'm looking to space the execution of these jobs apart so that they don't lock up resources and produce erroneous results; that is, I would like none of these jobs to overlap with any other. I'm finding it quite hard to work out a schedule for this.
As user Archemar already pointed out in his comment, use the utility run-parts. In many Linux distributions this is also used to execute the cron scripts of several system software packages, stored in the directories /etc/cron.daily, /etc/cron.hourly and so on. Normally run-parts will execute all your scripts one after the other, in sequence. Beware of the naming conventions: run-parts will normally not execute any script files which contain a period in the filename.
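Sequential execution is the whole trick: since each script must finish before the next starts, the six jobs can never overlap. This sketch emulates what run-parts does with a plain loop (run-parts itself may not be installed everywhere; the job names and outputs are made up):

```shell
#!/bin/sh
# Build a throwaway directory of "jobs", named so they sort in the desired order.
dir=$(mktemp -d)
printf '#!/bin/sh\necho job A done\n' > "$dir/01-jobA"
printf '#!/bin/sh\necho job B done\n' > "$dir/02-jobB"
chmod +x "$dir"/*

# Like run-parts: run every script in lexical order, one after the other.
for script in "$dir"/*; do
    "$script"
done

rm -rf "$dir"
```

A single cron entry pointing at such a directory then replaces six separate, carefully staggered entries.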
Work out a cron job schedule
I'm trying to setup a relatively short shell script called backup_extract.sh which will essentially unzip a zip file, drop a database, create a database of the same name, then import an sql file that was created from the zip file. This zip file is created on a separate server, then transferred to this server every day - it's a simple database backup of a wordpress website & this script is intended to restore that database on the new server to serve as a 1:1 clone of the other website for backup purposes. The problem I'm having is when I just bash run the script it works fine, but when I let cron take over nothing seems to happen. Code looks a bit like this (obviously omitting mysql user & password, as well as full urls but I assure you i am using the exact path to the files as if they were from the root) cd /var/www/html/backups/database mkdir /var/www/html/backups/database/$(date +%Y%m%d) unzip /var/www/html/backups/database/*.zip -d /var/www/html/backups/database/$(date +%Y%m%d) mv /var/www/html/backups/database/*.zip /var/ww/html/backups/database/$(date +%Y%m%d) cd /var/www/html/backups/database/$(date +%Y%m%d)/dup-installer mv /var/www/html/backups/database/$(date +%Y%m%d)/dup-installer/*.sql /var/www/html/backups/database/$(date +%Y%m%d)/dup-installer/database_$(date +%Y%m%d).sql mysql -u USERNAME -p'PASSWORD' database -e \ 'DROP DATABASE database'; mysql -u USERNAME -p'PASSWORD' -e \ 'CREATE DATABASE database'; mysql -u USERNAME -p'PASSWORD' database < /var/www/html/backups/database/$(date +%Y%m%d)/dup-installer/database_$(date +%Y%m%d).sql echo "Backup completed" >> /var/www/html/backups/database/$(date +%Y%m%d)/backup-status.txt Now this is my first time writing a script like this and I'm sure I'm doing some things wrong. But let me explain a few things about what this script is doing. First I have no idea if the cd commands do anything at all when the crontab is involved, but they're there just in case. if i don't need them great. 
2nd, a directory is created using the current date. 3rd, a zip file is unzipped into the folder we just created - this is where it gets complicated: every day a new zip file will be moved to this folder, however the name is procedurally generated by a different server & there's no way to predict EXACTLY what the zip file's name will be, so I need a way to unzip any zip file that exists in this folder - it will be moved once it is unzipped so there's no worry about unzipping several zip files at once. 4th, the zip file is moved to the folder we created on line 2. 5th, we cd into a folder that was unzipped from the zip file. 6th, we rename any sql file in here (there will only ever be 1) to a name that is more predictable (same issue with this as there was with the zip file: the sql filename is unpredictable & therefore hard to lock down, so I used a wildcard to target any sql files in that folder). 7th, we log in to MySQL & drop the database. 8th, we log in to MySQL & re-create the database with the same name as the one we just deleted. 9th, we log in to MySQL & import the sql file we renamed earlier. 10th, we create an arbitrary text file that just says backup completed. I've tested the shell script on its own using bash -x several times & it seems to be working fine. There are a few issues, like it doesn't actually seem to care what folder the .zip file is in & wants to unzip EVERY zip file on the server, which isn't good, but the core issue is that crontab isn't working. My crontab line is this: 00 00 * * * /var/www/html/(actual filepath)/backups/backup_extract.sh I've been changing the times while testing, but generally that's what it looks like. When I check /var/log/cron it acts like it has run the script, but in actuality it has done nothing. I've even tried creating a simple crontest.sh file that just creates a crontest.txt file & tested that both using bash & using cron - bash works fine but cron does not.
My solution for this may not be the same for everyone, but basically crontab -e didn't work AT ALL. I ended up having to edit /etc/crontab: I set the PATH variable to /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin and SHELL to /bin/bash, then added my cron jobs in that file. Now the cron job is running correctly. I'm not sure why crontab -e didn't work; someone with more experience could probably give some possible reasons, but I'm completely clueless.
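For reference, a minimal /etc/crontab along those lines might look like this (a sketch: the SHELL/PATH values are the ones mentioned above, the schedule and script path come from the question, and note the extra user field that system crontabs require):

```
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin

# m h dom mon dow user  command
0 0 * * * root /var/www/html/backups/backup_extract.sh
```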
Cron log shows shell script has run but it really hasn't
1,570,835,992,000
I have a script very similar to the one at the end of the pygame example page, and I want to run it on startup to make the whole thing listen for commands on startup. I ran "crontab -e" and as a first line I put "@reboot python3 myscript.py". Upon rebooting, the script doesn't work, and returns pygame.error: video system not initialized. On the other hand, if I execute "python3 myscript.py" from the terminal, the script works. The vast majority of answers simply notes that pygame.init() is not called, which is not the case here. According to a post translated from German, this can be solved by exporting a variable somewhere, but I am not sure where or how to do that. Is there a way to give X11 privileges to the script ran on reboot by Crontab? UPDATE: Upon chaining commands like Seamus suggested, setting DISPLAY=:0 doesn't work at all. From terminal after login, I did echo $DISPLAY and got :1. When setting: @reboot (sleep 30; export DISPLAY=:1; python3 /full/path/to/myscript.py) it starts working after login, but not before (which is my main requirement, as the device should be autonomous). I am not sure if this :1 is relevant and what is :0.
If your script runs from the command line, but not from cron, there are two usual suspects: The PATH for cron is different than your path as a "regular" user. You can address that issue by using a full path spec for your script: @reboot python3 /full/path/to/myscript.py cron is unaware of the status of any services when it starts. For cron, @reboot means when cron starts, not when the network is up, or any other services your cron job may require. One way to take care of this is to sleep for a bit before running your script: @reboot (sleep 30; python3 /full/path/to/myscript.py) This will delay execution of your script by 30 seconds. Nothing magic about 30 seconds... it will depend upon your system. You may wish to do some trial and error testing to find a reliable sleep time. Note that another approach that avoids cron's shortcomings wrt status of system services is to use systemd. Some (myself included) find cron easier to use, but certainly systemd has some advantages. Finally, it's usually a good idea to collect any error messages generated when starting your script. Another addition to the single line in your crontab will accomplish that: @reboot (sleep 30; python3 /full/path/to/myscript.py >> /home/yours/cronjoblog) 2>&1 Any error messages from stderr will be appended to the file you designate.
X11 and pygame in Crontab
1,570,835,992,000
I have a shell script that gets updates from github and then runs the code that it retrieved. It is located at /home/me/Desktop/refreshCode.sh -- refreshCode.sh -- #!/bin/bash cd /home/me/src/ProductionMonitor sudo /usr/bin/git -C /home/me/src/ProductionMonitor pull sudo cp /home/me/src/ProductionMonitor/* /home/me/Desktop/Production cd /home/me/Desktop/Production sudo /usr/bin/python3 prodmain.py >> logfile.data I know that the shell script can launch and runs as expected manually. It opens up the tkinter window and I can interact with the screen. However, I cannot seem to get it to launch automatically. I tried setting up a crontab for @reboot and was not successful; it never showed the UI. I set up a systemd service as such: -- prodmon.service -- [Unit] Description=Service to run production monitor After=multi-user.target [Service] Type=forking ExecStart=/home/me/Desktop/refreshCode.sh User=caleb Group=caleb WorkingDirectory=/home/me/Desktop/Production/ PIDFile=/var/run/prodmon.pid [Install] WantedBy=multi-user.target Checking the status of the service after boot, it says that it is running; however, the tkinter UI never shows up.
You should probably not sudo all the stuff; everything is under your own directory. The main point of your question is that your script does not know where to display the window. Try adding echo "DISPLAY=$DISPLAY" >> logfile.data You will probably see an empty DISPLAY= in logfile.data. Furthermore, if you use: /usr/bin/python3 prodmain.py >> logfile.data 2>/tmp/errorfile you will probably see a /tmp/errorfile with something like: prodmain: Xt error: Can't open display: prodmain: DISPLAY is not set So that is the reason why it does not display at boot. Now how to solve it: that depends a lot on what you want. You can start the program from ~/.xinitrc when you log in to the graphical environment. Or you can cut the job into two parts: do the git pull and cp at boot, and start prodmain from the .xinitrc.
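A sketch of the diagnostic idea (the function name and log path are my own placeholders, not from the original answer): log the DISPLAY the job actually sees, and refuse to start the UI when it is unset:

```shell
#!/bin/sh
# check_display: log the DISPLAY the job sees; succeed only when it is set.
# The log path /tmp/prodmon-env.log is an arbitrary placeholder.
check_display() {
    echo "DISPLAY=${DISPLAY:-<unset>}" >> /tmp/prodmon-env.log
    [ -n "$DISPLAY" ]    # failure => caller should skip starting the UI
}

if check_display; then
    echo "display available, UI could start"
else
    echo "no display; skipping UI start"
fi
```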
How to get a shell script to execute correctly on reboot
1,570,835,992,000
I have cron set to run a task for user pi (on my Raspberry Pi running Raspbian Stretch) every 15 minutes. 6,21,36,51 * * * * /usr/bin/mosquitto_sub -h Pi3Plus.local -v -C 1 -t weather >> weather.log I also have ssmtp setup to send email via gmail. Every so often the cron task fails to connect to the server, and generates an error message. The problem is that it then attempts to send a message to user pi, which ssmtp changes to pi@gmail and sends to gmail, where it fails. I have read the man for ssmtp, ssmtp.conf, cron, crontab but cannot find anything to stop these messages. I could write a script to trap error messages in the cron task to prevent it generating an error.
From the crontab manual: In addition to LOGNAME, HOME, and SHELL, cron(8) will look at MAILTO if it has any reason to send mail as a result of running commands in "this" crontab. If MAILTO is defined (and non-empty), mail is sent to the user so named. If MAILTO is defined but empty (MAILTO=""), no mail will be sent. Otherwise mail is sent to the owner of the crontab. Simply add the variable to the top of the crontab file, as shown in this example (taken from the same man page, slightly edited for clarity): # mail any output to 'paul', no matter whose crontab this is MAILTO=paul # run five minutes after midnight, every day 5 0 * * * $HOME/bin/daily.job >> $HOME/tmp/out 2>&1
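Applied to the job in the question, the crontab could look like this (a sketch; MAILTO="" suppresses mail for every job below it):

```
MAILTO=""
6,21,36,51 * * * * /usr/bin/mosquitto_sub -h Pi3Plus.local -v -C 1 -t weather >> weather.log
```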
Prevent cron sending error messages
1,570,835,992,000
I have some cron tasks, and some of them don't stop sending me emails. An example task is: */2 * * * * php app/console mautic:email:fetch > /dev/null 2>&1 (All the tasks with the problem are mautic tasks). I've tried some tricks for avoiding emails: > /dev/null >/dev/null >/dev/null 2>&1 >/dev/null 2>&1 || true || true All of them continue to send mail on each run. An example email: /bin/sh: 1: cannot create 1: Permission denied (I understand that's a strange error, but it's an example. I know I need to solve the error and not silence it, but I want to know why I cannot silence it with a normal method). The question is: why, even when I redirect the task's output, or when I use || true to change the task's result, does cron continue sending emails? The only solution I can find (in the linked question) is to add MAILTO="" after the "normal" (or not-spamming) cron tasks (and before these other ones). Related question: How do I completely silence a cronjob to /dev/null/?.
The /bin/sh: 1: cannot create 1: Permission denied error is probably because you have a typo in a redirection. Perhaps instead of 2>&1 you have 2>1 or 2>1&. (Normally the attempt to create a file named 1 in your home directory would succeed, but if a file named 1 already exists and is not writable then you'll get that error.) The reason why that error isn't silenced is that the message is not coming from the command whose output has been redirected. The message is being reported by the shell during the time it is trying to set up redirection for the command. Output from the shell itself has not been redirected, so the message is collected by cron and emailed to you.
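The difference is easy to reproduce by hand; this sketch shows how the typo 2>1 silently creates a file named 1 instead of duplicating the file descriptor:

```shell
#!/bin/sh
# '2>1' redirects stderr into a FILE named "1"; '2>&1' duplicates fd 1.
workdir=$(mktemp -d)
cd "$workdir"

ls /nonexistent 2>1 || true              # typo: error text lands in ./1
ls /nonexistent >/dev/null 2>&1 || true  # correct: error is silenced

test -s 1 && echo "the typo left a file named 1 behind"
```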
Why cron doesn't stop sending emails even redirecting to /dev/null
1,570,835,992,000
I was configuring crontab as user "pi" to execute an sh script every 30 minutes. In the terminal: crontab -u pi -e I added this line: */30 * * * * /bin/sh /home/pi/test.sh And the script test.sh has three lines: #!/bin/sh /usr/bin/transmission-gtk echo "done" > /home/pi/startup/result.txt As a result, every 30 minutes result.txt is updated, but transmission-gtk never shows up. Namely, only 1 (of 2) commands in test.sh worked. But when I manually execute /home/pi/test.sh in the terminal, everything works fine: result.txt is updated and transmission-gtk shows up. I know that everything in crontab should be written with absolute paths, and I'm pretty sure that transmission-gtk is in /usr/bin. And of course, from the above we know the user "pi" has permission to execute both test.sh and transmission-gtk. Can anybody tell me why this is happening?
Apparently, transmission-gtk is a graphical program that you wish to start? In that case, you need to tell cron on which display to start it. Try adding this to your crontab before your line (assuming you're using display 0): DISPLAY=:0 Note that crontab environment settings are plain NAME=value lines; no export keyword is used there. You should also have error messages in your mail (probably /var/mail/username). That would tell us more about the problem.
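So the crontab might end up looking like this (a sketch; :0 assumes the default display, which may differ on your system):

```
DISPLAY=:0
*/30 * * * * /bin/sh /home/pi/test.sh
```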
execute sh script from crontab problem
1,570,835,992,000
Env:- Ubuntu 18.04 I write one C program and trying to speak to port and fetch some data and dump into one file. Then I created one bash script and added this C program and expect to run at multiple intervals. I'm able to run this bash script without any issues. I'm running as root. <<snip>> #!/bin/bash interval=$1 time=$2 ./CC-test $interval $time <<snip>> May I know is there any permission delegation issue for invoking some commands via cron? or do we need to tell cron to excecute with administrative privilege? Anyway I'm running cron as root, then I don't think so, if anything other required. As a test, I just tried two commands in a shell script as follows #!/bin/bash date >> test fdisk -l >> test Even here also I can able to run manually and even both output is printing without any issues. For here I put it is in cron, on the "date" command output which is printed in test file. Please shed me some views on this.
The cron daemon is always running as root. The cron jobs will be run as the user whose cron job they belong to. If you add the cron job with crontab -e as user john, then the job will be running as user john, not as root. To edit root's cron jobs, use sudo crontab -e. The difference between running a command from the interactive command line and from a cron job is that the environment (environment variables, current working directory etc.) may well be different. Ideally, a cron job should explicitly set the needed variables to the correct values. For example, a script executed from cron may want to add a few paths to the PATH variable if 3rd-party utilities are used from non-standard paths, and it may want to cd into the correct directory to set the working directory for the rest of the script (so that, in your example, you are in the correct directory when you run ./CC-test).
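One way to make the script cron-proof is to pin the environment at the top rather than inheriting it; a sketch (the PATH value and working directory are illustrative, and the real CC-test call is shown commented out):

```shell
#!/bin/bash
# Cron-safe wrapper: set PATH and the working directory explicitly,
# since cron provides only a minimal environment.
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:$PATH
cd "${WORKDIR:-/tmp}" || exit 1   # never assume cron's working directory
pwd                               # log where we actually are
# ./CC-test "$1" "$2"             # the real job would run here
```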
Bash script able to run manually, but with cron not working. + Ubuntu 16.04
1,570,835,992,000
In my /etc/crontab # /etc/crontab: system-wide crontab # Unlike any other crontab you don't have to run the `crontab' # command to install the new version when you edit this file # and files in /etc/cron.d. These files also have username fields, # that none of the other crontabs do. We can use crontab to edit a user-defined crontab files under /var/spool/cron/crontabs/, but can't use the same way to edit /etc/crontab or files under /etc/cron.d/. Are they supposed to be edited? If yes, how? Thanks.
Use your preferred editor. nano and vim are good editors. The 6th field is the username under which the entry should run. EDIT: I have a BSD 4.2 box. I have to export the editor before editing the crontab file on it. EDITOR=vi export EDITOR crontab -e
How would you edit `/etc/crontab` and files under `/etc/cron.d/`?
1,570,835,992,000
I am trying to get launchd (which is macOS's alternative to cron) to run a job for me. It doesn't work, while running the commands in a bash shell launched with sudo does. I tried redirecting the output to files, but examining them hasn't enlightened me either. I'd like to have access to this 'primal' environment launchd uses to execute commands, so that I can experiment directly there and see what is missing. PS: Here is the script I run in my launchd job (as root): #!/usr/bin/env bash export HOME=/Users/evar source /users/evar/.bashrc /Users/evar/anaconda/bin/python /Base/_Code/Misc/hosts/updateHostsFile.py --auto --replace --backup &> /Users/evar/log/hosts.out2 # tmux new -d -s hosts "/Users/evar/anaconda/bin/python /Base/_Code/Misc/hosts/updateHostsFile.py --auto --replace --backup" # I tried this, but sudo tmux kept saying no sessions while logs kept saying "duplicate session hosts".
At the beginning of your script you could do something like env > /tmp/myscriptrun_env.$$ This will put the inherited environment into a file in /tmp. Now from the command line you can use the command env -i /bin/sh to generate a new shell with a fresh environment. Inside this shell you can source the /tmp file you created. You may want to edit it first and add export in front of each line to better mimic the launched environment.
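A small sketch of the round trip, using a controlled file instead of a full env dump (the variable name and temp file are placeholders); set -a auto-exports everything that gets sourced, which saves editing export onto each line:

```shell
#!/bin/sh
# Save an "environment" to a file, then load it inside a clean shell.
envfile=$(mktemp)
printf 'MYVAR=%s\n' "hello" > "$envfile"   # stand-in for `env > file`

# env -i starts with an empty environment; set -a exports sourced variables.
env -i /bin/sh -c "set -a; . '$envfile'; set +a; echo \"MYVAR=\$MYVAR\""

rm -f "$envfile"
```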
How can I open a shell in the environment that cron (alternatively launchd) uses to execute commands?
1,570,835,992,000
I have this crontab * * * * * tar -czf /backup/$(date +%F--%T)-localusers.tgz /vagrant It does not work. But if I do tar -czf /backup/$(date +%F--%T)-localusers.tgz /vagrant/ It works. Anybody have a clue what's going on? I do keep getting a mail though: N 10 (Cron Daemon) Thu Aug 23 10:43 28/1130 "Cron <root@localhost> tar -czf"
Your problem is likely because of cron's special treatment of the percent sign: The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab file. Percent signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input. So you need to escape them: * * * * * tar -czf /backup/$(date +\%F--\%T)-localusers.tgz /vagrant
Tar crontab not working, while command works on its own [duplicate]
1,570,835,992,000
I need to make a planned task: check the available space on the hard disk each day and remove files if this condition is verified: available < 1 GB. Here is the script that I wrote, but it doesn't seem to work: var="df -h | sed -n 2p |awk '{ print $4 }' " if[var<15];then ./bin/dss stop rm -rf tmp/* rm -rf caches/* ./bin/dss start fi I haven't done the crontab part yet.
Try the code below. Two fixes compared to the original: the pipeline must run in a command substitution $(...) rather than inside quotes, and the test needs the -le operator with a plain number, so ask df for gigabyte blocks (-BG) and strip the unit suffix: #!/bin/bash var=$(df -BG | sed -n 2p | awk '{ print $4 }' | tr -d 'G') if [ "${var}" -le 15 ] then ./bin/dss stop rm -rf tmp/* rm -rf caches/* ./bin/dss start fi
How to check the available space on the hard disk each day and if it is lower than 1GO remove files
1,570,835,992,000
I wrote a shell script: #!/bin/bash #clear #echo "Good morning, world." source activate python36 python /opt/project1/Table_Control.py opt/project1/connection.yaml Then I added it to cron: # job 1 19 * * * /opt/project1/start.sh and also tried another variant: # job 1 19 * * * cd /opt/project1/ && ./start.sh I checked and got this result: May 11 19:01:01 server01 CROND[127428]: (root) CMD (/opt/project1/start.sh) and May 11 19:43:01 server01 CROND[13797]: (root) CMD (cd /opt/project1/ && ./start.sh) BUT the job is supposed to send me an email, and I have not received any. Running the shell script /opt/project1/start.sh manually works fine. How can I solve this problem?
Usually, to debug such issues, it is handy to add some logging, possibly for each line. Like this: source activate python36 >> /tmp/1.log 2>&1 python /opt/project1/Table_Control.py opt/project1/connection.yaml >> /tmp/2.log 2>&1 You can also set the cron task to be launched every minute so that you don't need to wait. That should help.
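Alternatively, one exec line at the top of start.sh captures everything at once; a sketch (the log path is an arbitrary choice, and the real work is shown commented out):

```shell
#!/bin/bash
# Send stdout and stderr of everything below to one log file.
LOG=${LOG:-/tmp/start.log}
exec >> "$LOG" 2>&1
echo "=== run at $(date) ==="
echo "PATH=$PATH"       # cron's minimal PATH is a frequent culprit
# source activate python36
# python /opt/project1/Table_Control.py /opt/project1/connection.yaml
echo "=== done ==="
```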
Why is a job started by cron not working?
1,570,835,992,000
We are seeing our /var/lib/logrotate/status get invalid entries like the following: saurabh@1236:~$ cat /var/lib/logrotate/status logrotate state -- version 2 "/var/log/syslog" 2018-3-13 "/var/log/auth.log" 2018-3-13 "/var/log/debug" 2018-3-13 "/var/log/lpr.log" 2018-3-13 "/var/log/user.log" 2018-3-13 "/var/log/mail.info" 2018-3-13 "/var/log/cron.log" 2018-3-13 og/messages" 2018-3-13 <=== Corrupted entry "/var/log/cron.log" 2018-3-13 "/var/log/messages" 2018-3-13 I'm not sure how the file gets corrupted in this way; it happens randomly every 10-12 days. My guess is that this is caused by multiple cron jobs editing the file concurrently, but I am not sure. To test that, I added a random delay to one recently added cron job, like this: */10 * * * * root sleep $(expr $RANDOM \% 90); /usr/sbin/logrotate -f /etc/logrotate.d/myFile Any suggestion for a concrete solution?
There are multiple concurrent instances of logrotate being run by cron jobs on your machine. There is no locking on the state file used, and so the different logrotate jobs "step on each other's toes" when updating it. Since you added your myFile configuration for logrotate in the /etc/logrotate.d directory, you don't have to explicitly rotate them in a separate cron job. The usual logrotate cron job run would pick up that configuration automatically. If you need to run the rotation more often than what the system's default log rotation happens, I'd recommend putting the myFile configuration elsewhere. To ensure that your rotation job does not use the same state file (in the instances when the rotation job may run at the same time as the system's log rotation job), use another state file: /usr/sbin/logrotate -f -s /some/location/myFile.state /some/location/myFile Note that the job does not need to run as root unless the logfiles are owned by root or some user other than yourself. In other words, if the logfiles belong to you, you may do the rotation in a personal cron job.
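Put together as a system cron entry with its own state file, it might look like this (a sketch; the state-file path, config path, and schedule are placeholders):

```
# private state file, so this job cannot clobber /var/lib/logrotate/status
*/10 * * * * root /usr/sbin/logrotate -f -s /var/lib/logrotate/myFile.state /etc/logrotate-custom/myFile
```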
/var/lib/logrotate/status gets invalid entries
1,570,835,992,000
When I run a cronjob, it's nice being able to follow its logs. If I want to see all logs from cron with journalctl, I can select the cron unit (journalctl -u cron). But when I have multiple cronjobs, it's not very helpful filtering only for the cron unit. Is there some way to filter for a specific cronjob in journalctl, or even specify a certain unit that it should belong to?
Since these are simply messages from cron, the metadata is all about cron. Example for a couple of messages about a cronjob, using --output=json: { "__CURSOR": "s=74429436aba942b6bbfc70cf45bfecc6;i=188d;b=108f80cdd87342bcb9dcafca15c45b57;m=6c13fcb19;t=5697ae32bb8a9;x=d292b55b7b7a140d", "__REALTIME_TIMESTAMP": "1523351401773225", "__MONOTONIC_TIMESTAMP": "29011987225", "_BOOT_ID": "108f80cdd87342bcb9dcafca15c45b57", "PRIORITY": "6", "_UID": "0", "_GID": "0", "_CAP_EFFECTIVE": "3fffffffff", "_MACHINE_ID": "5a75b95396344578a23193fb7b823946", "_HOSTNAME": "muru-1604", "_SYSTEMD_SLICE": "system.slice", "_TRANSPORT": "syslog", "SYSLOG_FACILITY": "10", "SYSLOG_IDENTIFIER": "CRON", "MESSAGE": "pam_unix(cron:session): session opened for user root by (uid=0)", "_COMM": "cron", "_EXE": "/usr/sbin/cron", "_CMDLINE": "/usr/sbin/CRON -f", "_AUDIT_LOGINUID": "0", "_SYSTEMD_CGROUP": "/system.slice/cron.service", "_SYSTEMD_UNIT": "cron.service", "SYSLOG_PID": "22158", "_PID": "22158", "_AUDIT_SESSION": "110", "_SOURCE_REALTIME_TIMESTAMP": "1523351401772733" } { "__CURSOR": "s=74429436aba942b6bbfc70cf45bfecc6;i=188e;b=108f80cdd87342bcb9dcafca15c45b57;m=6c13fcba9;t=5697ae32bb939;x=33e51a528b0cef96", "__REALTIME_TIMESTAMP": "1523351401773369", "__MONOTONIC_TIMESTAMP": "29011987369", "_BOOT_ID": "108f80cdd87342bcb9dcafca15c45b57", "PRIORITY": "6", "_UID": "0", "_GID": "0", "_CAP_EFFECTIVE": "3fffffffff", "_MACHINE_ID": "5a75b95396344578a23193fb7b823946", "_HOSTNAME": "muru-1604", "_SYSTEMD_SLICE": "system.slice", "_TRANSPORT": "syslog", "SYSLOG_IDENTIFIER": "CRON", "_COMM": "cron", "_EXE": "/usr/sbin/cron", "_CMDLINE": "/usr/sbin/CRON -f", "_AUDIT_LOGINUID": "0", "_SYSTEMD_CGROUP": "/system.slice/cron.service", "_SYSTEMD_UNIT": "cron.service", "SYSLOG_FACILITY": "9", "MESSAGE": "(root) CMD ([ -x /usr/sbin/dma ] && /usr/sbin/dma -q1)", "_AUDIT_SESSION": "110", "SYSLOG_PID": "22159", "_PID": "22159", "_SOURCE_REALTIME_TIMESTAMP": "1523351401773201" } These two are related (see for 
example, the PIDs or the timestamps), but that relation is not easily expressed as a filter. As such, there's not much that journalctl can do for you. If you use a systemd timer, then the corresponding unit can of course be used as a filter for journalctl (along with all the other benefits of systemd timers: checking the next run time, running a job immediately, properly stopping a long-running job, etc.).
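For completeness, a minimal timer pair might look like this (unit names, schedule, and script path are all placeholders); the payoff is that journalctl -u myjob.service then shows exactly this job's output:

```ini
# /etc/systemd/system/myjob.service
[Unit]
Description=My periodic job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myjob.sh

# /etc/systemd/system/myjob.timer (a separate file)
[Timer]
OnCalendar=hourly

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now myjob.timer, then filter the journal by the service unit.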
Specify a systemd unit for cronjob
1,570,835,992,000
I want to run a script on startup that establishes a GRE tunnel. The script works fine if I just run /root/tunnel.sh after rebooting, it runs and establishes the tunnel. Below are the contents of my crontab -e for root user on my machine. @reboot sleep 15; /root/tunnel.sh Am I missing something? I'm running CentOS 7 if that helps.
/root/tunnel.sh: line 2: ip: command not found Your root login profile (one of ~/.bash_profile, ~/.bash_login, or ~/.profile) is setting $PATH to include /usr/sbin, while your (non-login) script is not setting $PATH to include /usr/sbin. Either expand $PATH in your script or use full paths to programs that are in /usr/sbin. PATH=$PATH:/usr/sbin or /usr/sbin/ip ...
What's wrong with my cronjob?
1,570,835,992,000
Content of /var/spool/mail/root: From [email protected] Fri Feb 23 12:40:02 2018 Return-Path: <[email protected]> X-Original-To: root Delivered-To: [email protected] Received: by test.corp.test.biz (Postfix, from userid 0) id 202C12C0A32; Fri, 23 Feb 2018 12:40:02 -0500 (EST) From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <root@test> /etc/cron.d/0yum.cron > /etc/cron.d/0yum.cron.log Content-Type: text/plain; charset=UTF-8 Auto-Submitted: auto-generated X-Cron-Env: <LANG=en_US.UTF-8> X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <HOME=/root> X-Cron-Env: <PATH=/usr/bin:/bin> X-Cron-Env: <LOGNAME=root> X-Cron-Env: <USER=root> Message-Id: <[email protected]> Date: Fri, 23 Feb 2018 12:40:02 -0500 (EST) /bin/sh: /etc/cron.d/0yum.cron: Permission denied Permission of /etc/cron.d/0yum.cron: -rw-r--r--. 1 root root 4999 Feb 21 12:49 /etc/cron.d/0yum.cron Contents of /etc/cron.d/0yum.cron: #!/bin/bash # Only run if this flag file is set (by /etc/rc.d/init.d/yum-cron) if [ ! -f /var/lock/subsys/yum-cron ]; then exit 0 fi DAILYSCRIPT=/etc/yum/yum-daily.yum WEEKLYSCRIPT=/etc/yum/yum-weekly.yum LOCKDIR=/var/lock/yum-cron.lock LOCKFILE=$LOCKDIR/pidfile TSLOCK=$LOCKDIR/ts.lock # Grab config settings if [ -f /etc/sysconfig/yum-cron ]; then source /etc/sysconfig/yum-cron fi # set default for SYSTEMNAME [ -z "$SYSTEMNAME" ] && SYSTEMNAME=$(hostname) # Only run on certain days of the week dow=`date +%w` DAYS_OF_WEEK=${DAYS_OF_WEEK:-0123456} if [ "${DAYS_OF_WEEK/$dow/}" == "${DAYS_OF_WEEK}" ]; then exit 0 fi # if DOWNLOAD_ONLY is set then we force CHECK_ONLY too. # Gotta check before one can download! if [ "$DOWNLOAD_ONLY" == "yes" ]; then CHECK_ONLY=yes fi YUMTMP=$(mktemp /var/run/yum-cron.XXXXXX) touch $YUMTMP [ -x /sbin/restorecon ] && /sbin/restorecon $YUMTMP # Random wait function random_wait() { sleep $(( $RANDOM % ($RANDOMWAIT * 60) + 1 )) } # Note - the lockfile code doesn't try and use YUMTMP to email messages nicely. 
# Too many ways to die, this gets handled by normal cron error mailing. # Try mkdir for the lockfile, will test for and make it in one atomic action if mkdir $LOCKDIR 2>/dev/null; then # store the current process ID in there so we can check for staleness later echo "$$" >"${LOCKFILE}" # and clean up locks and tempfile if the script exits or is killed trap "{ rm -f $LOCKFILE $TSLOCK; rmdir $LOCKDIR 2>/dev/null; rm -f $YUMTMP; exit 255; }" INT TERM EXIT else # lock failed, check if process exists. First, if there's no PID file # in the lock directory, something bad has happened, we can't know the # process name, so clean up the old lockdir and restart if [ ! -f $LOCKFILE ]; then rmdir $LOCKDIR 2>/dev/null echo "yum-cron: no lock PID, clearing and restarting myself" >&2 exec $0 "$@" fi OTHERPID="$(cat "${LOCKFILE}")" # if cat wasn't able to read the file anymore, another instance probably is # about to remove the lock -- exit, we're *still* locked if [ $? != 0 ]; then echo "yum-cron: lock failed, PID ${OTHERPID} is active" >&2 exit 0 fi if ! kill -0 $OTHERPID &>/dev/null; then # lock is stale, remove it and restart echo "yum-cron: removing stale lock of nonexistant PID ${OTHERPID}" >&2 rm -rf "${LOCKDIR}" echo "yum-cron: restarting myself" >&2 exec $0 "$@" else # Remove stale (more than a day old) lockfiles find $LOCKDIR -type f -name 'pidfile' -amin +1440 -exec rm -rf $LOCKDIR \; # if it's still there, it wasn't too old, bail if [ -f $LOCKFILE ]; then # lock is valid and OTHERPID is active - exit, we're locked! echo "yum-cron: lock failed, PID ${OTHERPID} is active" >&2 exit 0 else # lock was invalid, restart echo "yum-cron: removing stale lock belonging to stale PID ${OTHERPID}" >&2 echo "yum-cron: restarting myself" >&2 exec $0 "$@" fi fi fi # Then check for updates and/or do them, as configured { # First, if this is CLEANDAY, do so CLEANDAY=${CLEANDAY:-0} if [ ! 
"${CLEANDAY/$dow/}" == "${CLEANDAY}" ]; then /usr/bin/yum $YUM_PARAMETER -e ${ERROR_LEVEL:-0} -d ${DEBUG_LEVEL:-0} -y shell $WEEKLYSCRIPT fi # Now continue to do the real work if [ "$CHECK_ONLY" == "yes" ]; then random_wait touch $TSLOCK /usr/bin/yum $YUM_PARAMETER -e 0 -d 0 -y check-update 1> /dev/null 2>&1 case $? in 1) exit 1;; 100) echo "New updates available for host `/bin/hostname`"; /usr/bin/yum $YUM_PARAMETER -e ${ERROR_LEVEL:-0} -d ${DEBUG_LEVEL:-0} -y -C check-update if [ "$DOWNLOAD_ONLY" == "yes" ]; then /usr/bin/yum $YUM_PARAMETER -e ${ERROR_LEVEL:-0} -d ${DEBUG_LEVEL:-0} -y --downloadonly update echo "Updates downloaded, use \"yum -C update\" manually to install them." fi ;; esac elif [ "$CHECK_FIRST" == "yes" ]; then # Don't run if we can't access the repos random_wait touch $TSLOCK /usr/bin/yum $YUM_PARAMETER -e 0 -d 0 check-update 2>&- case $? in 1) exit 1;; 100) /usr/bin/yum $YUM_PARAMETER -e ${ERROR_LEVEL:-0} -d ${DEBUG_LEVEL:-0} -y update yum /usr/bin/yum $YUM_PARAMETER -e ${ERROR_LEVEL:-0} -d ${DEBUG_LEVEL:-0} -y shell $DAILYSCRIPT ;; esac else random_wait touch $TSLOCK /usr/bin/yum $YUM_PARAMETER -e ${ERROR_LEVEL:-0} -d ${DEBUG_LEVEL:-0} -y update yum /usr/bin/yum $YUM_PARAMETER -e ${ERROR_LEVEL:-0} -d ${DEBUG_LEVEL:-0} -y shell $DAILYSCRIPT fi } >> $YUMTMP 2>&1 if [ ! -z "$MAILTO" ] && [ -x /bin/mail ]; then # if MAILTO is set, use mail command (ie better than standard mail with cron output) [ -s "$YUMTMP" ] && mail -r "$MAIL_FROM" -s "System update: $SYSTEMNAME" $MAILTO < $YUMTMP else # default behavior is to use cron's internal mailing of output from cron- script cat $YUMTMP fi rm -f $YUMTMP exit 0 I'm getting the above permission denied for my yum script. This could possibly relate to my last question here: BAD FILE MODE yum-cron
Files in /etc/cron.d/ should be cron entries, not scripts. That's why your previous question, BAD FILE MODE yum-cron, showed cron complaining; it expected a plain text file (not an executable script) containing crontab entries. That's why, for example, I have a file named /etc/cron.d/0hourly which contains: # Run the hourly jobs SHELL=/bin/bash PATH=/sbin:/bin:/usr/sbin:/usr/bin MAILTO=root 01 * * * * root run-parts /etc/cron.hourly Since you have a script, I'd recommend placing it in one of the /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, or /etc/cron.monthly directories and ensuring that it's executable there (for example, chmod u+x /etc/cron.daily/0yum.cron).
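If you still want something under /etc/cron.d, it has to be a crontab entry that points at the script wherever it lives (a sketch; the schedule and script location are placeholders):

```
# /etc/cron.d/0yum: crontab syntax with a user field, NOT a script
30 3 * * * root /usr/local/sbin/0yum.cron
```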
Permission denied yum-cron
1,570,835,992,000
First of all, I am not very familiar with cron... Under .../cron/crontabs I have a file (which is called root; I'm not sure whether it needs to be the same as the user?) with these jobs: * * * * * sleep 0; wget -O /var/cache/file.txt 'IP-ADDRESS' * * * * * sleep 10; wget -O /var/cache/file.txt 'IP-ADDRESS' * * * * * sleep 20; wget -O /var/cache/file.txt 'IP-ADDRESS' ... And this works: it downloads the contents of the IP address and saves it into my file every 10 seconds. There are three other commands with sleep 30, 40 and 50. Now I want to add another job that executes a python script every 10 seconds. I tried to create a new file under .../cron/crontabs, which I called job2, but nothing happened. Can I just create as many cron files as I want? Do I need to start them somehow? Since this did not work, I tried to add my second job to the existing root file, which now reads: * * * * * sleep 0; wget -O /var/cache/file.txt 'IP-ADDRESS' * * * * * /home/user/Documents/pythonscript * * * * * sleep 10; wget -O /var/cache/file.txt 'IP-ADDRESS' * * * * * /home/user/Documents/pythonscript * * * * * sleep 20; wget -O /var/cache/file.txt 'IP-ADDRESS' * * * * * /home/user/Documents/pythonscript ... where pythonscript is an executable, and I made sure that the cron folder has permissions to the python script path... This still does not work. How do I make cron execute a python script?
You can have only one crontab per user, and its filename is the username. Also, the proper way to edit it is to run crontab -e; it's entirely possible that the cron daemon doesn't notice you've changed the file directly. Just call crontab -e now: it will open your editor with the crontab, it will reload the config after you save and exit, and your other jobs should start running.
Two Cron jobs at the same time - one not working
1,570,835,992,000
I'm running the following script on @reboot cron with root:

autossh -f -i /home/pi/.ssh/myRemote.pem -R 2210:localhost:22 [email protected]

When I run it manually it works fine, but from cron I see it continually failing in the logs:

Nov 25 01:15:56 kirkins autossh[1936]: starting ssh (count 1)
Nov 25 01:15:56 kirkins autossh[1936]: ssh child pid is 1947
Nov 25 01:16:01 kirkins autossh[1936]: ssh exited prematurely with status 130; autossh exiting
Nov 25 01:16:40 kirkins autossh[605]: starting ssh (count 13)
Nov 25 01:16:40 kirkins autossh[605]: ssh child pid is 1949
Nov 25 01:16:40 kirkins autossh[605]: ssh exited with error status 255; restarting ssh
Nov 25 01:18:48 kirkins autossh[605]: starting ssh (count 14)
Nov 25 01:18:48 kirkins autossh[605]: ssh child pid is 1970
Nov 25 01:18:49 kirkins autossh[605]: ssh exited with error status 255; restarting ssh

Does anyone know what's going wrong? I saw some related posts on other StackExchange sites, but none of the solutions worked for me.
The reason is probably that ssh does not like the fact that it is started without a controlling terminal (cron children do not have one). You could try ssh -tt. Or run it within screen / tmux.
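A hedged crontab sketch of the asker's @reboot entry with the suggested flag added (the host is a placeholder here; whether -tt alone is enough depends on why ssh exited with status 130):

```shell
# crontab entry: -tt forces pseudo-terminal allocation even though cron
# provides no controlling terminal; autossh passes unrecognized options on to ssh.
@reboot autossh -f -i /home/pi/.ssh/myRemote.pem -tt -R 2210:localhost:22 user@remotehost
```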
Auto-SSH works manually but not in background
1,570,835,992,000
I need to be able to start a named "session" from within a cron job and run a job within that named session. The job I need to run MAY cause my appliance to not run properly if problems exist, and I need to be able to reattach to the named session it created later if needed, or to close it. Does anyone have any idea how I might do this?
GNU Screen or TMUX are probably your best options. The general concept of both is pretty similar to a tabbed window manager, but they're both for terminal usage, and you can detach from a session and re-attach later. I'm not quite certain about the syntax needed for TMUX, but for screen the command you want is:

screen -dmS <name> <command>

Replace <name> with the name of the session, and <command> with the command to run. You can then re-attach to the session with:

screen -D -r <name>

The only caveat is that you have to be running as the same user when you try to reattach that the session was started as (you can technically reattach to other users' sessions, but it's a bit more complicated).
Using session command from a cron job (linux)
1,570,835,992,000
By "adjacent cron tasks" I mean tasks doing things of the same context and at close times. For example, as of now I have these cron tasks:

0 0 * * * for dir in /var/www/html/*/; do cd "$dir" && /usr/local/bin/wp plugin update --all --allow-root; done
0 0 * * * for dir in /var/www/html/*/; do cd "$dir" && /usr/local/bin/wp core update --allow-root; done
0 0 * * * for dir in /var/www/html/*/; do cd "$dir" && /usr/local/bin/wp theme update --all --allow-root; done

As you can see, these do pretty "heavy" work: they update all plugins in several sites, then their cores, and then their themes. I've given one minute to the first task, one to the second and one to the third. From my experience, putting them all on the first minute (minute 0) isn't problematic, because cron timing is just the minimal time to start the task, and each process associated with the task starts only when it can, so I would bet that generally there isn't any problem doing so. Anyway, my question is this: Is there (at least) a best practice to split adjacent cron tasks to later minutes in the same hour (0, 1, 2) on the same day, or even to different hours on the same day? In this case my task timing would instead look like this:

0 0 * * *
0 1 * * *
0 2 * * *
For anything other than running a simple command, schedule a script that does the complicated processing instead of trying to do it directly in the crontab.

#!/bin/sh
for dir in /var/www/html/*/; do
    ( cd "$dir" && /usr/local/bin/wp plugin update --all --allow-root )
done
for dir in /var/www/html/*/; do
    ( cd "$dir" && /usr/local/bin/wp core update --allow-root )
done
for dir in /var/www/html/*/; do
    ( cd "$dir" && /usr/local/bin/wp theme update --all --allow-root )
done

Or even

#!/bin/sh
for dir in /var/www/html/*/; do
    ( cd "$dir" && {
        /usr/local/bin/wp plugin update --all --allow-root
        /usr/local/bin/wp core update --allow-root
        /usr/local/bin/wp theme update --all --allow-root
    } )
done
Is there a best practice to make adjacent cron tasks near in time?
1,570,835,992,000
I have typed

for user in $(cut -f1 -d: /etc/passwd); do crontab -u $user -l; done

and that says

no crontab for root
no crontab for daemon
...
no crontab for apache2

and I very often get the You have new mail in /var/mail/root message. When I read them, they are all the same:

From [email protected] Wed Aug 2 15:40:02 2017
Return-Path: <[email protected]>
X-Original-To: root
Delivered-To: [email protected]
Received: by lxc2014.localdomain (Postfix, from userid 0) id 03E571D666; Wed, 2 Aug 2017 15:40:02 +0000 (UTC)
From: [email protected] (Cron Daemon)
To: [email protected]
Subject: Cron <root@lxc2014> /dev/.x;^Mno crontab for root
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8bit
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=root>
Message-Id: <[email protected]>
Date: Wed, 2 Aug 2017 15:40:02 +0000 (UTC)

/bin/sh: 1: ^Mno: not found

I know that ^M is \r, but why is cron trying to run this file?
When you get mail like From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <root@lxc2014> /dev/.x;^Mno crontab for root X-Cron-Env: <LOGNAME=root> /bin/sh: 1: ^Mno: not found and you can't tell which crontab contains the offending command, you can grep for pieces of command in the standard places: grep -r "no crontab" /etc/cron* /var/spool/cron | cat -vet The cat -vet will show any embedded control characters that would otherwise be invisible or result in cursor motion. In your case, you found the command at /var/spool/cron/crontabs/root:* * * * * /dev/.x;^Mno crontab for root$ and the file contains this (line breaks added for readability): # DO NOT EDIT THIS FILE - edit the master and reinstall.\n # (- installed on Thu Jul 20 20:50:12 2017)\n # (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)\n * * * * * /dev/.x;^Mno crontab for root Because of the embedded Ctrl+M character, running crontab -u root -l appears to show only no crontab for root. This looks like someone is trying to hide the crontab entry. I suggest that someone familiar with security and forensics take a look at your system to determine whether it's been compromised. You can remove this crontab with crontab -u root -r. You mentioned that /dev/.x doesn't exist, and that that string doesn't appear in any file under /etc, but please keep monitoring your system to see whether these files reappear. That would be a strong indicator that your system is still compromised. If possible, please install all security patches offered by your distribution.
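To see why cat -vet matters here, a small reproduction (the file is a scratch file, not a real crontab): the embedded carriage return is invisible to plain cat but shows up as ^M.

```shell
# Recreate a line with a hidden carriage return, like the malicious crontab entry.
suspect=$(mktemp)
printf '* * * * * /dev/.x;\rno crontab for root\n' > "$suspect"

# Plain cat lets the \r overwrite the start of the line on screen;
# cat -vet prints it visibly as ^M and marks each line end with $.
cat -vet "$suspect"
```

The output is a single line ending in $ with ^M where the carriage return sits.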
why I get this /bin/sh: 1: ^Mno: not found error
1,570,835,992,000
I'm trying to create my first cron job. I'm new to bash scripting too, although I do know some python. I am puzzled by the following. Here is my cronjob file created with crontab -e:

*/1 * * * * /home/darren/.bash_scripts/urxvt_colors.sh

Contents of urxvt_colors.sh:

#!/bin/bash
python ~/.Py_Scripts/xr_random_colors.py
xrdb ~/.Xresources

Here is what baffles me. The python part of the cron job works: python ~/.Py_Scripts/xr_random_colors.py is executed every minute. This python script changes the color scheme in my ~/.Xresources file. I confirmed this actually happens by checking every minute. But xrdb ~/.Xresources does not update the file. Running which python shows /usr/bin/python and which xrdb shows /usr/bin/xrdb. So since they are both executed from /usr/bin, how come only the python script executes? Also, if I run the ./urxvt_colors.sh script manually from my terminal, then it works as expected: the python script runs and so does xrdb ~/.Xresources. What's happening here?
Try changing your script like this:

#!/bin/bash
python ~/.Py_Scripts/xr_random_colors.py && xrdb ~/.Xresources

and I recommend you use full paths to files.

PS: you may need to define the DISPLAY variable when executing the script:

*/1 * * * * DISPLAY=:0 /home/darren/.bash_scripts/urxvt_colors.sh
Cronjob executes `/usr/bin/python` but not `usr/bin/xrdb` [duplicate]
1,570,835,992,000
I am using Ubuntu 14.04. A server backup will take place every week. But will this affect the crontab? Because sudo crontab -e does not contain the cron job which I gave. Please help! Thanks in advance.
Every user has their individual crontab. This includes the root user. When you add a cronjob with crontab -e, you add it into the list of jobs for the current user. This means that with sudo crontab -e, you will be editing the list of cron jobs for the root user. To edit the crontab for a specific user, use sudo crontab -e -u username, or log in as that user and use crontab -e.
"sudo crontab -e" does not show the cron job which I gave
1,570,835,992,000
I would like to know how to make a terminal program run automatically at reboot with preset input variables. What I am trying to do is run bro control, which is a terminal program that requires input. I know that when you run cron jobs that are terminal commands, it does not show the terminal screen. I would like it to run like that in the background with predefined inputs. The command also needs root permissions; if there is a cron job way of doing this, that is easy: just put it in your root crontab. An example of the code to be run automatically:

broctl start
exit

Last time I asked this question no one had an answer. I am hoping that by making it broader and more understandable someone will have an answer. I have researched this and cannot find an answer. Hopefully, someone will know the way to do this. By the way, I am running Linux Mint.
Save your desired input into a file, and pass that file as an input argument to the program. my_command --some_flags_if_needed < path/to/file/containing/input
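Wired into cron for the reboot case, it could look like this (the paths and the input file's contents are hypothetical; broctl's actual prompts determine what goes in the file):

```shell
# /root/broctl-input.txt holds the answers the program would normally read
# from the terminal, one per line. Root's crontab entry then becomes:
@reboot /usr/bin/broctl start < /root/broctl-input.txt
```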
how to run make a terminal program run automatically at reboot with preset input variables
1,493,277,389,000
I have a script from within which I write some text to a file. It works fine if I manually invoke it from the shell, but it doesn't seem to work properly if it's invoked from cron. The file is created but nothing gets written to /tmp/tx_buf. The script looks like:

#!/bin/bash
declare -i Threshold=1000
tmpfile="/tmp/tx_buf"
if [ -e $tmpfile ]
then
    echo "$tmpfile exists, read value"
    typeset -i last=$(cat $tmpfile)
    echo $last
fi
typeset -i val=$(cat /sys/class/net/eth0/statistics/tx_packets)
echo $val > $tmpfile
declare -i diff=`expr $val - $last`
echo "difference: $diff"
if [[ "$diff" -gt "$Threshold" ]]
then
    echo "music is playing, invoke action"
    `xdotool mousemove_relative 1 1`
else
    echo "no music playing, ignore"
fi

Why is this, I'm wondering?
You could do some troubleshooting in the following way. Change the script header like this:

#!/bin/bash
exec 1>"/tmp/$(basename "$0").log" 2>&1
set -x
...insert the rest of your script here....

(basename strips the directory part of $0, so the log path stays valid however the script is invoked.) After cron has started your script you should find a file called /tmp/<scriptname>.log. This should give details about what was going on during runtime.
nothing is written to my text file if run from within cron
1,493,277,389,000
I have a script which, before being launched, checks via pwd if the path upon launch is something specific (say dir/subdir/script):

current_folder=$(pwd | grep dir/subdir/script)
if [ "$current_folder" == "" ]; then
{
    echo "something bad"
    exit
}
fi

How can this script be launched via crontab? I cannot remove the check with pwd or amend the script content under any circumstance, since it's subject to continuous updates that would replace my changes. Thanks everyone
You can specify several commands, separated by ; or &&, as your cron job, for example: * * * * * cd /some/path && foo (This will only run foo if the cd was successful.)
launch a script which requires to be launched from a specific path via crontab
1,493,277,389,000
I have a problem with a script scheduled in cron. In cron I have the following line:

33 09 * * 1-5 oracle /data1/backup/scripts-test/rman.sh > /data1/backup/log.txt 2> /data1/backup/log_err.txt

As you see, I have to use the oracle user to run the rman script. rman.sh looks as follows:

#!/bin/bash
ORACLE_HOME="/data1/app/oracle/product/12.1.0.2/db_1"
ORACLE_SID="eelxtest"
PATH=$ORACLE_HOME/bin:$PATH
now="$(date)"
logloc="/data1/backup/scripts-test/log"
rmanscript="/data1/backup/scripts-test"
jboss="/usr/JBossEAP/jboss-eap-6.4/bin"
ip="x.x.x.x"
ServerGroup="EELX-Server-Group-Test"
logfile="$logloc/$(date '+%Y-%m-%d')_log.txt"
echo "============================================" | tee -a "$logfile"
date | tee -a "$logfile"
echo "STEP1 closing JBoss Server Group" | tee -a "$logfile"
$jboss/jboss-cli.sh --controller=$ip --connect /server-group=$ServerGroup:stop-servers | tee -a "$logfile"
echo "STEP2 oracle backup. See rman log." | tee -a "$logfile"
$ORACLE_HOME/bin/rman msglog /data1/backup/scripts-test/log/$(date '+%Y-%m-%d')_rman.log cmdfile=$rmanscript/rman_backup.cmd
echo "STEP3 starting jboss Server Group" | tee -a "$logfile"
$jboss/jboss-cli.sh --controller=$ip --connect /server-group=$ServerGroup:start-servers | tee -a "$logfile"

rman_backup.cmd is:

connect target /
shutdown immediate;
startup mount;
run {
allocate channel ch1 device type disk;
backup as compressed backupset full database format '/data1/extDirectories/xxx/yyy/oracle/test/%T_eelxtest_full_%u.bkp';
backup format '/data1/extDirectories/xxx/yyy/oracle/test/%T_archivelog_eelxtest_%u.bkp' (archivelog all delete input);
backup spfile;
backup current controlfile format '/data1/extDirectories/xxx/yyy/oracle/test/%T_ora_ctl_file_eelxtest_%u.bkp';
release channel ch1;
}
sql 'alter database open'

As a result of the job in cron I get the following messages:

Message file RMAN<lang>.msb not found
Verify that ORACLE_HOME is set properly

So I have verified ORACLE_HOME in both the oracle and root profiles, and it is in place in .bash_profile. Furthermore there is nothing mentioned in the rman trace or log, because rman didn't start at all. Please help.
Thomas, you probably need to export the variable ORACLE_HOME: add export ORACLE_HOME after its declaration in the rman.sh script.
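A minimal sketch of the fix, assuming the same values as in the asker's rman.sh: marking the variables as exported is what makes them visible to the rman child process.

```shell
# Without 'export', these are plain shell variables and child processes
# such as rman never see them, hence "Verify that ORACLE_HOME is set properly".
export ORACLE_HOME="/data1/app/oracle/product/12.1.0.2/db_1"
export ORACLE_SID="eelxtest"
export PATH="$ORACLE_HOME/bin:$PATH"

# A child process now inherits the values:
sh -c 'echo "child sees ORACLE_SID=$ORACLE_SID"'
```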
Problem running rman in cron
1,493,277,389,000
I have a file which contains usernames and encrypted passwords (openssl passwd) in the format user:password. Now I want to change the passwords of these users with a cronjob once a week. With the help of Janos I made a script which changes the password to a $RANDOM generated value, saves the encrypted password in pw.txt, and the non-encrypted one in randompw.txt:

r=$RANDOM
cut -d: -f1 pw.txt | while read -r user; do
    echo "$user:$(openssl passwd $r)"
done > newpw.txt
mv newpw.txt pw.txt
echo $r > randompw.txt

So my problems are:

1.) With this, I just have one random generated value for all users, but I want a random value for each user (each line in the file).
2.) It would be good if I could get the username and the cleartext password of each user into randompw.txt; currently, I just have only one $RANDOM password there.

Does anyone have an idea? Old post
You can save the generated password in a variable, and write it to two files:

One file in clear
One file hashed

For example:

# initialize (truncate) output files
> clear.txt
> hashed.txt
cut -d: -f1 pw.txt | while read -r user; do
    # generate a hash from a random number
    hash=$(openssl passwd $RANDOM)
    # use the first 8 letters of the hash as the password
    pass=${hash:0:8}
    # write the password in clear format to one file, and hashed to another
    echo "$user:$pass" >> clear.txt
    echo "$user:$(openssl passwd $pass)" >> hashed.txt
done
Password change Script
1,493,277,389,000
I'm maintaining a server which runs mailman. In it I find a crontab which looks like the following:

0 8 * * * list [ -x /usr/lib/mailman/cron/checkdbs ] && /usr/lib/mailman/cron/checkdbs
0 9 * * * list [ -x /usr/lib/mailman/cron/disabled ] && /usr/lib/mailman/cron/disabled
...

When I type list I get No command 'list' found. My searches for "crontab list", "linux list command", "mailman cron list" bring up results for listing things. What does list in crontab do? What command is list referring to?
Lines in the system crontab (which is what I think you're looking at) have six fixed fields plus a command, in the form: minute hour day-of-month month day-of-week user command This is different from the per-user crontab which lacks the user field. My guess is that list is the mailman user on that system. This user is usually called mailman, but for whatever reason someone thought list was better (more generic?).
What does list in crontab do?
1,493,277,389,000
I want a computer (Debian, XFCE) to shut down every day at a specific time, with a pop-up window in advance telling about the imminent shutdown with, say, OK, Skip and Delay 60 min buttons. I noticed xmessage being installed, and it gives an easy way to handle rudimentary pop-ups with defined buttons. But the workaround with sed on the crontab (to alter the event) and service cron reload (with the appropriate rights in /etc/sudoers), with all the exceptions I would have to capture, seems too fiddly. Any ideas? P.S.: shutdown's own messages are not being read since the users are usually not on the terminal.
I think you should consider a slightly different approach: instead of using cron for a shutdown, use cron to display a message with xmessage. Then, after the actions (if any) taken from xmessage events (buttons pressed or not), you initiate a shutdown. In other words: at a certain time, display xmessage via cron if no action is taken (button pressed) after a certain time - shutdown if button is pressed, delay shutdown with whatever time. I wrote something similar to your needs in this thread.
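A sketch of the button-to-action mapping (the exit codes and the xmessage invocation shown in the comment are assumptions; check your xmessage's -buttons and -timeout behaviour before relying on them):

```shell
# Map an exit status from the dialog to the action to take.
handle_choice() {
  case "$1" in
    0)   echo "shutdown now" ;;     # OK pressed (or dialog timed out)
    101) echo "skip" ;;             # Skip pressed
    102) echo "delay 60" ;;         # Delay 60 min pressed
    *)   echo "skip" ;;             # anything unexpected: play it safe
  esac
}

# In the real cron-launched script (needs DISPLAY set for the user's X session):
#   xmessage -center -timeout 300 \
#     -buttons "OK:0,Skip:101,Delay 60 min:102" \
#     "The system will shut down in 5 minutes."
#   action=$(handle_choice $?)
handle_choice 102
```

Keeping the decision logic in a function like this also makes it easy to test without an X session.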
Periodic shutdown with popup-message and skip/delay-button
1,493,277,389,000
Here's a little background as it might be the cause of the problem. I'm running Armbian legacy Jessie on an Orange Pi Zero. It does not include a desktop, so I've installed X, lightdm and Xfce. I haven't managed to start X on boot, so I have a @reboot line in crontab that executes a script that includes this:

#!/bin/bash
while ! ping -c 1 -W 1 192.168.1.100; do
    sleep 1
done
/usr/bin/startx

Everything works perfectly (I have autologin enabled and it starts the Mumble client on 1:0). I then have a python script that monitors my GPIO (a push-to-talk button) and sends "Ctrl + 1" if the button is pressed. Mumble is listening to that combination and starts broadcasting when it is pressed. I must run my python script as root to be able to access the GPIO, so I have added these lines to /etc/profile (so that root can access X):

export DISPLAY=:1.0
export XAUTHORITY=/home/icuser/.Xauthority

As I said, this works perfectly when executed with sudo:

sudo python /home/icuser/sendptt_zero.py

but when I execute my script with (@reboot in crontab):

sudo /usr/bin/python /home/icuser/sendptt_zero.py >> /home/hallgren/ic.log 2>&1 &

I get this in my ic.log file (when I press the GPIO button that starts the emulate-keyboard-key function in Python; I'm using http://www.autopy.org/):

No protocol specified
Could not open main display

My Python script also has this line (it won't work without it):

os.environ['DISPLAY'] = ':1.0'

Do you have any ideas on how to get it to start X automatically, and why it works from the command line with sudo but not when started from crontab?
Cron doesn't use /etc/profile. Write the variables at the top of your crontab file.
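For the asker's setup, that could look like this at the top of root's crontab (values taken from the question; Vixie-style crontabs accept NAME=value lines before the job entries, and sudo is unnecessary in root's own crontab):

```shell
DISPLAY=:1.0
XAUTHORITY=/home/icuser/.Xauthority
@reboot /usr/bin/python /home/icuser/sendptt_zero.py >> /home/hallgren/ic.log 2>&1
```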
Why can't I run GUI apps with sudo from crontab when I can with sudo: “No protocol specified”? [duplicate]
1,493,277,389,000
I was wondering if my crontab jobs were written correctly. I am hoping to run them on a VPS, and monitoring them isn't really possible. Without further ado, here are my cron jobs:

# cd into directory at 2:57 AM
57 2 * * 1-5 cd /folder_name
# activate the virtual environment
58 2 * * 1-5 . env/bin/activate
# run the main script
59 2 * * 1-5 python main.py
# at 5pm break the script (worried the most about this part)
0 16 * * 1-5 ^C

Also, I changed my system clock to eastern time; does that mean the cron jobs will run using the eastern time zone?
No, cron is not a shell. Write a script:

#!/bin/sh
cd /folder_name
. env/bin/activate
exec python main.py

Make it executable, then point a crontab entry to it:

57 2 * * 1-5 /path/to/script

The script should then run every Monday to Friday, at 2:57 in (your machine's idea of) the local timezone. If you configured your mail system properly, the results (if any) are mailed to you.
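The ^C entry from the question can't work, since cron has no terminal to type into; a hedged equivalent is to deliver the same signal (SIGINT) by name. The match pattern here is an assumption; verify what pgrep -af 'python main.py' matches on your box before scheduling it:

```shell
# Stop the job at 16:00 by sending the Ctrl-C signal to the matching process.
0 16 * * 1-5 pkill -INT -f 'python main.py'
```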
Using Cron/Python
1,493,277,389,000
On my CentOS box, I'm trying to print the CPU USAGE and FREE MEMORY output numbers into a text file. When I type the command in the terminal, it is all fine, but when it is executed via crontab, the MEMORY output is always blank.

Manually typing in the terminal:

# echo CPU: `top -b -n1 | grep "Cpu(s)" | awk '{print $2 + $4}'`, RAM: `awk '/^Mem/ {print $4}' <(free -m)` >> ~/stats.txt
# cat ~/stats.txt
CPU: 3.8, RAM: 1307

The same command in crontab:

*/10 * * * * echo CPU: `top -b -n1 | grep "Cpu(s)" | awk '{print $2 + $4}'`, RAM: `awk '/^Mem/ {print $4}' <(free -m)` >> ~/stats.txt

Then inside the text file:

# cat ~/stats.txt
CPU: 3.4, RAM:
CPU: 4.1, RAM:
CPU: 3.9, RAM:

Why is the RAM output always blank?
Because dash doesn't understand this kind of bashism: <(free -m) Instead, use: free -m | awk '/^Mem/ {print $4}'
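To see the difference, a small demonstration with canned free output (the sample numbers are invented so the sketch runs anywhere): the pipe form works in any POSIX shell, while the <(...) process substitution is a bashism that dash rejects.

```shell
# Stand-in for `free -m` output in the old CentOS column layout.
fake_free() {
  printf '             total       used       free     shared    buffers     cached\n'
  printf 'Mem:          3951       2644       1307          0        123        456\n'
  printf 'Swap:         2047          0       2047\n'
}

# Portable pipe form: works under dash, which is what cron's /bin/sh often is.
ram=$(fake_free | awk '/^Mem/ {print $4}')
echo "RAM: $ram"
```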
Shell Script (in Crontab) : Saving Memory print into text file always gives blank
1,493,277,389,000
I have a script job.sh in /home/user/scripts, which is then symlinked to /home/user/bin/job. The custom binaries path has been included in .bashrc, so whenever I issue the command job param1 etc from the cli everything works as expected. When said command has to be run through a cronjob, it doesn't. On the other hand, if the cronjob refers to the full path (/home/user/bin/job instead of simply job) everything runs fine. Any pointer on how to tackle this issue?
cron intentionally runs with a limited environment (including a restricted path, it does not have the same path as your standard shell). You either need to run a script (including the full path to the script) which then sets a path variable internally, or you need to set the path in the crontab line itself. One example of that is, 12 0 * * * (export PATH=$PATH:/somedirectory; job) Really though, it's safer to just include the full path to whatever you're running in the crontab, and set the path correctly in your scripts that cron executes.
cron not working with commands symlinked in custom PATH
1,493,277,389,000
I am trying to automate backup management on a server on a Raspberry Pi and put a regular tar triggering into a crontab. I set it up via bash scripts: first I set up the task and then use a script to carry out the expected actions. The problem is the script isn't executed by crontab. When I run the same command as saved in crontab (I check it with the crontab -l command to be sure it's exactly the same) it all works properly. Here is my backup script:

#!/bin/sh
function set_backup(){
    SCHEDULE_FILE="Configuration/crontab"
    SCHEDULE="* * * * *"
    USER=$(whoami)
    WORKING_DIRECTORY="$PWD"
    SCRIPT_NAME="/Admin/BackupHomeServer.sh"
    COMMAND=' /bin/bash ' #these spaces are essential for proper file formatting
    printf "$SCHEDULE " > $SCHEDULE_FILE
    #printf "$USER" >> $SCHEDULE_FILE
    printf "$COMMAND" >> $SCHEDULE_FILE
    printf "$WORKING_DIRECTORY" >> $SCHEDULE_FILE
    printf "$SCRIPT_NAME\n" >> $SCHEDULE_FILE
    crontab $SCHEDULE_FILE
}
function perform_backup(){
    BACKUP_FILENAME=$(date +%Y-%m-%d)
    BACKUP_FILENAME+=".tar.gz"
    BACKUP_DIR=$(cat Configuration/PyHomeServer.conf | grep "Backup directory:" | awk '{print $3}')
    FILES_TO_INCLUDE="Configuration/ Database/"
    if [ ! -d "$BACKUP_DIR" ]; then
        mkdir $BACKUP_DIR
    fi
    tar -cvzpf $BACKUP_DIR/$BACKUP_FILENAME $FILES_TO_INCLUDE
}
function retrieve_latest_backup(){
    echo To be implemented
}
if [ "$#" = "0" ]; then
    perform_backup
elif [ "$1" = "retrieve" ]; then
    retrieve_latest_backup
elif [ "$1" = "set" ]; then
    set_backup
fi

This is what I see when I call crontab -l:

* * * * * /bin/bash /home/gonczor/Documents/ServerPy/Admin/BaskupHomeServer.sh

I've temporarily got rid of passing the username after reading this thread, and I've also ensured the path is correct after reading this one.
I've found an answer in "How Linux Works" by Brian Ward. I had simply messed up the syntax. Deleting the passing of $USER fixed the problem. In other words, the file I am passing to crontab should have the following structure:

m h dm m dw command

and not:

m h dm m dw user command

And again, thank you roaima for the useful tips.
Crontab task not trigerred
1,493,277,389,000
The timer functionality of systemd includes monotonic timers, which measure time in real uptime since some starting point after boot. This means that after a boot, the service triggered by the timer is started and then the timer fires according to some predefined conditions based on actual uptime, i.e., excluding suspend episodes. Is it possible to have a monotonic timer that not only crosses suspend episodes, but also actual downtime, so crosses reboots?
No, currently this is not possible. A feature request has been filed: https://github.com/systemd/systemd/issues/3107
cross-reboot monotonic systemd timer
1,493,277,389,000
I have a server at home that I use as a NAS and for some other services. The server runs Debian Jessie, with 4x 4 TB hard drives in RAID5. I use this server to store all my home data, movies, games, etc. About 75% of it is filled. I learned about AIDE some time ago after checking my cron reports, with aide giving an error:

run-parts: /etc/cron.daily/aide exited with return code 1
/etc/cron.daily/tripwire:
### Error: File could not be opened.
### Filename: /var/lib/tripwire/myhostname.twd
### No such file or directory
### Exiting...
run-parts: /etc/cron.daily/tripwire exited with return code 8

So to initialize my AIDE database, I executed the command sudo aideinit according to this tutorial. However, this command has been running for the last two days!!! I noticed that it's scanning the whole server, including my whole RAID array! I learned this because it gave stdout messages related to the data in my RAID. Part of them are the following:

/raidarray/Games/UT2004/Help/BallisticFiles/Render_PistolP.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_M290P.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_FP9A5Pickups.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_NRP57P.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_M290S.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_R78S.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_A42S.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_MRT6Clip.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_EKS43S.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/BallisticStripe2.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_M50Clip.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/BallisticGoldLogo.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_Rockets.jpg mtime in future
/raidarray/Games/UT2004/Help/BallisticFiles/Render_M925S.jpg mtime in future

So what's going on? I'm quite new to AIDE, but I would like to understand how it works. Should it really take that long? Does it make sense for it to scan my whole RAID array? How would you manage this?
It turns out the solution is to exclude the array's mount point from the scan. Just add this line to aide's config file to exclude the folder /raidarray:

!/raidarray/.*
AIDE is taking forever to initialize
1,493,277,389,000
My setuid.today has a different date format than setuid.yesterday:

setuid.today (German localization?): 3 Dez
setuid.yesterday: Dec 3

I'm getting emails with the diff reports every day. I guess one of the periodic scripts changed something during the last update. I'm on FreeBSD 10.2. How should I proceed?
To summarize what we discovered in the comments: At some point (before the upgrade), /etc/login.conf was populated with :lang=de_DE.UTF-8 in the default class. After updating to FreeBSD 10.2, cron was presumably restarted, and picked up the new locale. The new locale caused the date formats inside the setuid.today file to change. The FreeBSD 10.2 Release Notes don't mention changes to /etc/login.conf (closest is the Inconsistency between locale and rune locale states patch, but it does not appear to touch /etc/login.conf). The solution is to change the default locale back and use ~/.login_conf overrides where a different locale is desired; then restart cron.
setuid.today date format changed after FreeBSD 10.2 update
1,493,277,389,000
I have read a lot of posts around this topic, but I didn't find one to fit my needs. Basically, I want to make two backups, one at midday (12 PM) and the other at midnight (12 AM), for each database on a MySQL server, but I want to leave out the system databases mysql and information_schema (as far as I know; if there is another one please let me know). After reading a lot of topics I came up with this bash script:

#!/bin/sh
now="$(date +'%d_%m_%Y_%H_%M_%S')"
filename="db_backup_$now".gz
backupfolder="/home/backups"
fullpathbackupfile="$backupfolder/$filename"
logfile="$backupfolder/"backup_log_"$(date +'%Y_%m')".txt
echo "mysqldump started at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
mysqldump --user=userbackup --password=***** --default-character-set=utf8 database | gzip > "$fullpathbackupfile"
echo "mysqldump finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
find "$backupfolder" -name db_backup_* -mtime +7 -exec rm {} \;
echo "old files deleted" >> "$logfile"
echo "operation finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
echo "*****************" >> "$logfile"
exit 0

This script makes a backup of the database database and keeps the last 7 .gz files. Can anyone help me improve this script so that I can back up each database other than the system ones and keep the last 7 copies of each?
I'm currently using PostgreSQL, and I do something that looks really close to what you want to achieve, so this is my backup script:

#!/bin/bash
#
#------------------------------------------------------------------------------
# Editable parameters:
#
## Filesystem location to place backups.
BACKUP_DIR="/path/to/backup/folder"
## The user used to connect to the postgres instance
USER="postgres"
PWD="pwd_in_plaintext_is_not_a_"
## Just the date string that will be appended to the backup file names
BACKUP_DATE="$(date +'%d_%m_%Y_%H_%M_%S')"
## Number of days you want to keep copies of your databases
NUMBER_OF_DAYS=7
## Uncomment the following line if you want to overwrite the whole folder each time
#rm -rf ${BACKUP_DIR}/backupFulldb-*
#
#------------------------------------------------------------------------------
# don't change anything below this line

# Vacuum all databases before beginning the backup
vacuumdb --all -U ${USER}

DATABASES=`psql -U ${USER} -l -t | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d'`

for i in ${DATABASES}; do
    if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then
        echo Dumping $i to ${BACKUP_DIR}
        pg_dump -U ${USER} --column-inserts $i | gzip -c > ${BACKUP_DIR}/backupFulldb-$i-${BACKUP_DATE}.out.gz
    fi
done

find ${BACKUP_DIR} -type f -prune -mtime +${NUMBER_OF_DAYS} -exec rm -f {} \;

You just need a query that lists all dbs in your MySQL instance, and substitute it for the DATABASES array. Reading this post and this one, I assume you could e.g. do as follows (the -N flag suppresses the column-name header row, which you don't want in the list):

while read line
do
    DATABASES+=("$line")
done < <(mysql -u${USER} -p${PWD} -N INFORMATION_SCHEMA -e "SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA")

And sure, fix the db names you want to exclude:

if [ "$i" != "mysql" ] && [ "$i" != "information_schema" ]; then
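The exclusion itself can be kept in a small filter so it's easy to test and extend. A sketch (performance_schema and sys are added on the assumption of a newer MySQL; trim the list to match your server):

```shell
# Drop system schemas from a newline-separated list of database names.
filter_dbs() {
  grep -Ev '^(mysql|information_schema|performance_schema|sys)$'
}

# Example input as it might come back from `mysql -N -e "SELECT schema_name ..."`
# (shop and blog are invented example databases):
printf 'information_schema\nmysql\nshop\nblog\nsys\n' | filter_dbs
```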
Script to back up each DB on the server but omit system databases
1,493,277,389,000
I have a client running on Rackspace, they have three servers that are synced together entirely, I've added a bunch of crons in the crontab using wget to the web address of the cron (that sync to each server). When you go to the client's domain, a load balancer takes care of you. I only want to run the crons once on one specific server (web01) Issue is when each server runs the crontabs it gets run x3 for each server. So I put in PHP a check if the hostname is anything other than web01 to die, however since we're using WGET and they are running through the load balancer, we're finding that the LB is sending two requests to web01 and the other to either web02 or web03. I'm thinking this needs to be done entirely in the crontab checking if the current server is web01 and to run the cron jobs inside the IF statement, otherwise do nothing however I'm not entirely sure the best way to express that with an IF statement.
You have three almost-identically configured servers, but you want to run the cron jobs on only one of them? Let's follow your naming scheme for the servers: web01, web02, and web03. I'm using shell script, but the principle is exactly the same for PHP. (I can throw PHP around but I don't consider myself well-versed enough to code it in public.) Near the top of each cron script you can do something like this, which will cause the script to exit unless the server name is web01. [ web01 != "$(hostname)" ] && exit This isn't very resilient in the face of a server outage. If web01 dies you presumably want to pick up the cron jobs on another server. Assuming you have access to DNS you can create a CNAME to the server responsible for running cron. Here's a sample entry for bind; it'll be similar for other systems: cron.contoso.com 300 IN CNAME web01.contoso.com You can then use either dig or nslookup to compare the server name to the alias, and within five minutes (300 seconds) you can have swapped the responsibilities for cron to another server: myaddr="$(nslookup "$(hostname)" | awk '$1=="Address:"{a=$2} END{print a}')" cronaddr="$(nslookup cron.contoso.com | awk '$1=="Address:"{a=$2} END{print a}')" [ "$myaddr" != "$cronaddr" ] && exit Or: myaddr="$(dig +short "$(hostname).contoso.com")" cronaddr="$(dig +short cron.contoso.com)" [ "$myaddr" != "$cronaddr" ] && exit
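The simple hostname guard can be wrapped in a tiny function so each cron script needs only one extra line at the top. A minimal sketch, assuming `uname -n` as the portable spelling of `hostname`, with `web01` standing in for whichever name you designate:

```shell
#!/bin/sh
# Succeeds (exit status 0) only when this machine's node name matches $1.
run_only_on() {
    [ "$1" = "$(uname -n)" ]
}

# Usage at the top of each cron script:
#   run_only_on web01 || exit 0   # silently skip on every other server
```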
Running crons on a scaled mirrored servers (cron logic needed) [closed]
1,493,277,389,000
A while ago I wanted to have messages from cron sent to my mail address so I installed ssmtp. But then I decided that I'd rather receive the messages in the file under /var/mail as usual, so I removed ssmtp. But the messages from cron jobs do not show up under /var/mail now; they seem to be entirely lost? I am using Debian, and ssmtp was installed and uninstalled using apt-get. How can I restore the original setting or find the messages somewhere?
If there is no Mail Transport Agent installed (e.g. Exim, Postfix, Sendmail) then there will be no sendmail binary for cron jobs to interact with, and odds are any messages that were attempted to be sent are lost. You'll need to ensure that a MTA is installed and properly configured.
I lost my mail spool
1,493,277,389,000
Say I have a crontab like: */30 * * * * /root/scripts/remove_log_files.sh This will remove some logs that I don't want every 30 mins. What would happen if the server shut down 25 minutes after the cron job last ran and restarted 10 minutes later? Update: according to some searches the job will not be fired. But will the job be delayed another 30 minutes before being executed, or 25 minutes? I couldn't find a resource discussing this online. Most of the discussion is about how to shut down a machine with crontab. Thanks
Traditional cron checks every minute if the current time matches one of the time patterns in the crontab and executes every matched line. There is no notion of "missed jobs" or "jobs running soon" at all. The pattern */30 * * * * matches on timestamps with minutes divisible by 30 (that is, 0 and 30). If you want something like "run every 25 minutes of system uptime" you need a more modern cron implementation. One of them is fcron, which adds lots of extra ways to describe when to run jobs, including the very useful "don't run two of these jobs simultaneously", something traditional cron isn't capable of.
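The "minutes divisible by 30" point is easy to see by enumerating which minute values a step field matches. A throwaway sketch, not part of any cron implementation:

```shell
#!/bin/sh
# Enumerate the minute values a crontab field like */STEP matches:
# exactly those where minute % STEP == 0, regardless of uptime or of
# when the job last ran.
matched_minutes() {
    seq 0 59 | awk -v s="$1" '$1 % s == 0'
}

matched_minutes 30   # prints 0 and 30
```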
What would crontab do if the server is shutdown?
1,493,277,389,000
The below line should provide an output date which when run manually gives an proper output such as Fri Jul 17 01:42:07 2015. But when run using cron, it gives the epoch date i.e. Wed Dec 31 19:00:00 1969 job_date=`iwgetwfobj $i | sed -n 2p |tr -s '=' '@'|awk -F'@' '{print $6}'|tr - d \" |tr -d \>| perl -e 'print localtime(<>) . "\n";'` Please let me know the change to be made. Any help will be appreciated.
The issue was resolved by putting the full path before the command-line tool iwgetwfobj. Thanks for the help.
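More generally, cron runs jobs with a minimal PATH, so besides prefixing the full path you can set PATH once at the top of the crontab. A sketch, where the path list and script name are illustrative:

```
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * myscript-that-calls-iwgetwfobj.sh
```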
Output date is not proper when the script is run using cron
1,493,277,389,000
I'm on my FreeBSD server. Cron jobs suddenly stopped working. When I run "top" the service is not listed. But when I check "service cron status", it is running with the given PID. I restarted the service and even the server but the problem still persists. How can I troubleshoot this?
You can check with: ps -ef | grep cron
Cron service running but not visible by "top"
1,493,277,389,000
I'm trying to make a script that will run in cron, that will search a path for a specific file name. In the path there might be more folders with that same file name. If found and they are newer than 2 days, rsync the parent folder and all content. This part works but only syncs "filename.txt" that is newer than 2 days and nothing else: #!/bin/sh find /path/ -name "filename.txt" -type f -mmin -$((60*48)) -exec rsync --ignore-existing -az --log-file=/path/ {} /source /destination \;
I'm assuming by parent folder you just mean the folder with filename.txt. You can get find to print this folder name with -printf '%h\n' instead of the -exec. You can pipe this into a shell loop or xargs, for example: find /path/ -name "filename.txt" -type f -mtime -2 -printf '%h\n' | xargs -i rsync ... {} /destination I think you need to add -R to your rsync, else all the directories will be superimposed at the destination.
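To see what `-printf '%h\n'` actually hands to rsync, here is a self-contained demonstration in a throwaway directory. GNU find is assumed, and the directory names are made up:

```shell
#!/bin/sh
# %h prints the directory containing each match -- the "parent folder"
# that the rsync in the pipeline above receives as its source argument.
tmp=$(mktemp -d)
mkdir -p "$tmp/project/logs"
touch "$tmp/project/logs/filename.txt"

parents=$(find "$tmp" -name filename.txt -type f -printf '%h\n')
echo "$parents"

rm -rf "$tmp"
```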
Want to use find with rsync in script
1,431,389,242,000
I am hoping to write a cron wrapper that records all output in a folder $CRON_LOG_DIR. It would be used as, e.g., follows: * * * * * $CRON_WRAPPER "<job name>" "command" which would record the full output from stdout and stderr of command under: $CRON_LOG_DIR/<date>/<job_name>/command_timestamp.log I am using the script below for $CRON_WRAPPER, however, it does not seem to record stdout for command when command is made of multiple commands, e.g. * * * * * $CRON_WRAPPER "<job name>" "command1 && command2 && command3" Why? Below is the full CRON_WRAPPER #!/usr/bin/env zsh COMMAND_NAME=$1 shift DATE=$(date +%B_%d_%Y) TIMESTAMP=$(date +%Y-%m-%d_%H-%M-%S) THIS_CRON_LOG_DIR=$CRON_LOG_DIR/$DATE mkdir -p $THIS_CRON_LOG_DIR CRON_LOG_FILE=${THIS_CRON_LOG_DIR}/${COMMAND_NAME}_${TIMESTAMP}.log # Joining commands: to_join=$@ joined=$(printf ",%s" "${to_join[@]}") # Redirecting all output to our logging file: joined=${joined:1}">> $CRON_LOG_FILE 2>&1" # Finally evaluating the command eval $joined
If the first argument to the script is jobname and the second is command1 && command2 && command3 then the command you build up in the joined variable is something like command1 && command2 && command3>> /path/to/cron/log/dir/May_12_2015/jobname_2015-05-12_01-09-25.log 2>&1 You call eval on this string, and it's parsed normally. The redirection only applies to `command3`. You want to treat the second argument as a shell snippet, not as a prefix of a shell snippet. So don't concatenate stuff to it. The redirection belongs directly in the script, not under eval, since you're specifying it directly in the script and not via a string that needs to be parsed. Note that your code would break if jobname contained any shell special character, e.g. $CRON_WRAPPER $(touch /tmp/foo) stuff would execute the code touch /tmp/foo. Another problem with your script is that you're putting commas between the arguments if there are more than two, which doesn't make sense. Joining them with spaces would make more sense, and that's simply "$*". (For joining with commas as separators, what you did is overly complex. In zsh, use ${(j:,:)@}.) Replace the part of the script starting with # Joining commands: by eval "$*" >>$CRON_LOG_FILE 2>&1
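If the wrapper's callers can pass the job as separate arguments instead of one quoted string, eval can be dropped entirely. A minimal sh sketch of that variant, with the log path handling simplified; note a single string containing `&&` would still need `sh -c`:

```shell
#!/bin/sh
# Run "$@" as one command, appending stdout and stderr to a log file.
# No string re-parsing happens, so job names and arguments containing
# shell metacharacters cannot inject code.
log_run() {
    logfile=$1
    shift
    "$@" >>"$logfile" 2>&1
}

log=$(mktemp)
log_run "$log" echo "hello from the job"
cat "$log"   # prints: hello from the job
```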
A generic cron wrapper that records all output
1,431,389,242,000
I am running a cron script like: */45 * * * * /path/to/php /path/to/file.php It creates a file and puts it in the home directory; but I want to create this file in my desired path, like: */45 * * * * /path/to/php /path/to/file.php /path/where/file/to/save Actually the file.php creates file.xls and puts it in the home directory. I want file.xls to be created in some other directory instead. Is there some way?
If the PHP script creates the file in the current directory, change to the desired output directory: */45 * * * * cd /path/where/file/to/save && /path/to/php /path/to/file.php If the PHP script creates the file in your home directory, you might be able to pretend your home directory is elsewhere — but if it also tries to read other files from your home directory, then it will also read these files from elsewhere. */45 * * * * HOME=/path/where/file/to/save /path/to/php /path/to/file.php
How to set path in cronscript
1,431,389,242,000
I am trying to take a db backup with a php file that is running from a cronjob. It was running fine when I tested with sample db. But when I used the actual db there was an error. I am using shell_exec() to run the command from the php file and the error is: sh: -c: line 0: syntax error near unexpected token `)' I understand that this happens because the password starts with a special character ')'. What can I do to solve this issue WITHOUT CHANGING the password?
After wasting some time I got it working. I just had to escape the character like this: $pass = '\)d@340kgfj';
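The same effect can be had without per-character escaping by quoting: single-quote the literal on the PHP side, and make sure the shell sees the password inside double quotes. A shell-side sketch with a placeholder password:

```shell
#!/bin/sh
# An unquoted `)` is shell syntax; inside double quotes it is just data.
pass=')d@340kgfj'            # placeholder, not a real password
arg="--password=$pass"       # safe: the variable expansion is quoted
printf '%s\n' "$arg"
```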
Having problem with mysql dbdump when password starts with special character
1,431,389,242,000
I found this file tonight: /etc/cron.daily/apt.cron #!/bin/sh [ ! -f /var/lock/subsys/apt ] && exit 0 [ -f /etc/sysconfig/apt ] && . /etc/sysconfig/apt [ $CHECK_ONLY == "yes" ] && OPTS="$OPTS --check-only" OPTS="$OPTS $EXTRA_OPTIONS" export HOME=/root if /usr/bin/apt-get -qq update; then /usr/bin/apt-get dist-upgrade -qq --check-only if [ $? -eq 100 ]; then /usr/bin/apt-get -q -y $OPTS dist-upgrade fi fi Questions: What's the deal, how did it get there? What does it do?
Try this: rpm -qf /etc/cron.daily/apt.cron That's the rpm command which tells you which package a file comes from (if any). In this case, the file comes from the apt package, which you have installed on your system for some reason. This is the "apt-rpm" port, which, as the name implies, can handle Fedora's RPM packages. It's kind of interesting as a curiosity, but for most practical purposes, you really want yum or the newer dnf. The script does what it says, basically: runs apt-get dist-upgrade daily, with quiet output and no prompts, and the options from /etc/sysconfig/apt if it exists.
Why is there an apt-get script set in my cron.daily, I'm on Fedora 20?
1,431,389,242,000
It sounds really simple, but many times I was stuck at the beginning of learning a new command. The tutorial writer thinks it's natural to start, but it's not. This is the tutorial I've been reading: http://code.tutsplus.com/tutorials/scheduling-tasks-with-cron-jobs--net-8800 After reading that, I started something like: * * * * * /usr/bin/sh t.sh And I got this error: bxg: command not found bxg is an empty file I created in the directory to debug; the tree graph is like this: . ├── bxg └── t.sh Inside t.sh, the content is: a=$(date) mkdir "$a" Obviously, bash didn't recognize this as a cron command. Then I searched for the cron process to see if it is started already: ps aux | grep cron root 942 0.0 0.0 3056 908 ? Ss 10:34 0:00 cron zen 4924 0.0 0.0 4652 568 pts/1 T 11:54 0:00 grep --color=auto cron zen 9722 0.0 0.0 4656 568 pts/1 S+ 12:05 0:00 grep --color=auto cron It seems it's running there, but another tutorial says that the process is called crond. I tried the following: service crond start crond: unrecognized service Maybe the process I grepped out with "ps aux" isn't the authentic process: service cron start start: Job is already running: cron OK, I'm totally out of ideas now; friends, help me!
crontab -e it will open a configuration file in editor and you then add your line to the end: * * * * * /usr/bin/sh t.sh
How to start a cron task?
1,431,389,242,000
I am running Fedora 20 and I am having some issues when running logrotate and anacron which I suspect may be related to SELinux: Failed to determine timestamp: Cannot assign requested address chgrp: changing group of /var/log/mariadb: Permission denied I did some research and came across this article from Gentoo wiki that says the following: If you want to perform system administrative tasks using cronjobs, you will need to take special care that the domain in which the job runs has sufficient privileges. First, make sure that your cronjobs run in the system_cronjob_t domains. This means that the cronjobs must be defined as either scripts in the /etc/cron.hourly, /etc/cron.daily, ... directories crontab entries in the /etc/cron.d directory crontab entries in the /etc/crontab file A check on my SELinux default policies reveals that I have the following instead: /etc/cron.daily(/.*)? all files system_u:object_r:bin_t:s0 /etc/cron.hourly(/.*)? all files system_u:object_r:bin_t:s0 /etc/cron.monthly(/.*)? all files system_u:object_r:bin_t:s0 /etc/cron.weekly(/.*)? all files system_u:object_r:bin_t:s0 /etc/cron\.d(/.*)? all files system_u:object_r:system_cron_spool_t:s0 /etc/crontab regular file system_u:object_r:system_cron_spool_t:s0 Should I change the SELinux policy to so that these have system_cronjob_t as the context label instead?
It seems that this is how it should be: From changelog of selinux-policy-3.12.1-139: - Allow systemd_cronjob_t to be entered via bin_t Do you have any errors in /var/log/audit/audit.log pertaining to mariadb? A quick and easy check is to run setenforce 0 and then run your cron jobs. If they fare better then it was SELinux causing the issue.
What should be the security context of these cron files?
1,431,389,242,000
This weekend with daylight savings, we had a situation that when the system clock hit 3:00am from the jump, we had hundreds of processes fire off out of the crontab. Everything I'm reading says that if you schedule processes during this time and you're running the system in the local timezone, then you may have scripts not run at all (since you skip over the 2:00 hour). However, in this situation, we had several processes spawn 50+ times out of the cron. Has anyone experienced why several processes would be executed so many times? Again, I've read that they may run twice, or not at all (depending on fall/spring), but 50+ times? Each script that ran was scheduled at the top of the hour of 3:00, and not again for at least an hour. Here are a few: 0 3 * * * 0 0-6 * * *
It looks like this was a bug in the version of crond that we were running documented here: https://bugzilla.redhat.com/show_bug.cgi?id=436694 I was able to replicate in a VM upgrading to 1.4.4-12 fixed the issue. Thanks for all the replies!
Crontab in Daylight Savings
1,431,389,242,000
I have some cron jobs set up to run, some of them each minute. I know I can log them to text files by simply putting php /path/to/file.php > /var/logs/something.txt but can I do this every minute? The nature of the log's output means that the log file will be very small, but I don't know how to log each minute's output to a separate file. Thanks
You can pipe the output to cronolog for log-file time handling. For documentation see Cronolog Usage and download Cronolog at Sourceforge. General example command: "|/path/to/cronolog [OPTIONS] logfile-spec" where logfile-spec for you could be /var/log/cmdOutput_%Y_%m_%d_%H_%M.log
Cron Job - Log Each Minutes Activity [duplicate]
1,431,389,242,000
Here is my crontab: # rsnapshot jobs 0 9-21 * * * /usr/bin/rsnapshot -c /home/kaiyin/.rsnapshot.conf hourly 52 22 * * * /usr/bin/rsnapshot -c /home/kaiyin/.rsnapshot.conf daily 42 22 * * 6 /usr/bin/rsnapshot -c /home/kaiyin/.rsnapshot.conf weekly 32 22 1 * * /usr/bin/rsnapshot -c /home/kaiyin/.rsnapshot.conf monthly * * * * * /bin/echo hi >> /tmp/testlog The last one runs ok, but the rsnapshot ones will not run, why?
System crontabs If these are system level crontab entries (/etc/crontab), then they're missing the username they should be running as. # For details see man 4 crontabs # Example of job definition: # .---------------- minute (0 - 59) # | .------------- hour (0 - 23) # | | .---------- day of month (1 - 31) # | | | .------- month (1 - 12) OR jan,feb,mar,apr ... # | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat # | | | | | # * * * * * user-name command to be executed User crontab If on the other hand these are running as as user's crontab (crontab -e) then are they running as user kaiyin? If not then they won't work because the user whose crontab this is doesn't have read access to /home/kaiyin.
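For the system-crontab case, the corrected entries would carry the user field; a sketch of the first line (the others follow the same pattern):

```
# /etc/crontab -- note the extra user-name column:
0 9-21 * * * kaiyin /usr/bin/rsnapshot -c /home/kaiyin/.rsnapshot.conf hourly
```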
Why don't these cron jobs run?
1,431,389,242,000
I've just installed the 'mail' command in Ubuntu (mailutils package) in order to view feedback from cron jobs. I type 'mail' at the prompt and see something like this: "/var/mail/*$USER*": 1 message 1 unread >U 1 *Name* *Date* Output from your job I type 1 at the ? prompt and get a lot of output about the message (From, Date, Subject...) but on the last line it says Error: Can't open display: ? and I'm returned to the prompt. Does anyone have any idea what the problem might be?
The mail program opens emails in a pager. The environment variable PAGER can override the default pager, which is typically less. In Debian-based systems, there is a /usr/bin/pager that is managed by the alternatives system. You need to ensure that your pager is not a GUI application, which would require X. An easy way to test this is to set PAGER temporarily. PAGER=/usr/bin/less mail
Linux 'mail' command: Can't open display
1,431,389,242,000
I'm trying to send cron output to an email address and am struggling... I'm running the following command: 13 15 * * 1-5 root /path/to/mysql-backup.sh 2>&1 | mail -s "Daily Database Backup Report" [email protected] That shows this error within /var/mail/root /usr/bin/mail: line 1: syntax error near unexpected token `(' /usr/bin/mail: line 1: `Config file not found (-s)' Is this trying to validate/execute the output of the cron? Do you do this on your server? If so, how?
In my experience, /usr/bin/mail is a binary executable, but on your system the shell seems to be loading and interpreting it. syntax error near unexpected token is a bash diagnostic. This can happen if you have overwritten an executable. Is there any conceivable chance that you have overwritten /usr/bin/mail with the text "Config file not found (-s)", causing said text to be fed to the shell when you try to execute it?
Sending cron output to email?
1,431,389,242,000
I'm trying to create an interface that will show a message on the screen at set intervals. Cron is an ideal tool for this case, only it doesn't read data from a file during its run (as far as I could tell). I could create a bunch of files to read but this is redundant. Is there a way to add a line into the user crontab file or remove a line from it after I have read it from a file?
You can always write a cronjob that calls a script that conditionally calls crontab -l > oldcrontab and then executes crontab file. Where file would be the new crontab that should be installed, constructed from oldcrontab modulo an appended/removed line.
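The append/remove mechanics can be sketched against a saved crontab dump. In real use the dump would come from `crontab -l` and go back via `crontab file`; here plain temp files stand in so the line handling itself is visible, and the notify commands are invented:

```shell
#!/bin/sh
# Append a line to a crontab dump, or remove an exactly matching one.
# Note: grep exits non-zero when nothing remains, so guard that case
# if the crontab may become empty.
add_line()    { printf '%s\n' "$2" >> "$1"; }
remove_line() { grep -vxF "$2" "$1" > "$1.tmp" && mv "$1.tmp" "$1"; }

tab=$(mktemp)
add_line "$tab" '* * * * * /usr/bin/notify lunch'
add_line "$tab" '0 18 * * * /usr/bin/notify home'
remove_line "$tab" '* * * * * /usr/bin/notify lunch'
cat "$tab"   # prints only the 18:00 line
```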
Is there a way to combine a file with crontab?
1,431,389,242,000
So I built an MLB scoreboard with an RPI 3. Right now the display can either show my favorite teams playing, OR all the teams playing and cycle through them as they are going on. I would like to add a toggle switch to the back of the scoreboard to select either the All-Teams or My-Teams depending on who's playing at that time. I have two separate nearly identical directories for the scoreboard accommodating either All-Teams or My-Teams. Right now I have a cron file called "start-scoreboard.sh" that will start the scoreboard showing either All-Teams, or My-Teams depending on what I have entered into "cd /home/pi/mlb-led-scoreboard-(All-Teams OR My-Teams here)" within the cron file. Is there a way for the RPI to read the state of a gpio pin and run either the All-Teams OR My-Teams directory? Here is an example of what I have in my "start-scoreboard.sh" cron file with the All-Teams directory listed... #!/bin/bash @reboot sleep 60 && start-scoreboard.sh cd /home/pi/mlb-led-scoreboard-All-Teams n=0 until [ $n -ge 10 ] do python main.py --led-gpio-mapping=adafruit-hat --led-brightness=50 --led-slowdown-gpio=3 --led-rows=32 --led-cols=64 && break n=$[$n+1] sleep 10 done
So first off: you're doing GPIO things in your python script. Honestly, you should just use the very same library you use there (which I don't know) to read the GPIO state and behave accordingly! Same for your do python… ; sleep 10; done loop: this would really be something you should rather integrate in your python script. Secondly, sure, just check the value of the GPIO-pseudofile, depending on your kernel setup probably in /sys/class/gpio/gpio{number}/value (might need to set the direction of that pin with echo in > /sys/class/gpio/gpio{number}/direction; if it's not there, you might need to export it first by echo {number} > /sys/class/gpio/export) ; that's the deprecated sysfs interface, but I bet on default kernels for the RPi it's still enabled. Alternatively, use the gpio tool that you can install for your Linux distro, with gpio read {id}.
How to read a pin state on a Raspberry Pi and select (either/or) between two entries in a cron file?
1,431,389,242,000
I have an odd problem with my Debian server. Up until a few weeks ago, anacron was running my cron jobs fine and I can see I have backups from then so it was running stuff. Then it appears to have just stopped running cron tasks in /etc/cron.daily/weekly/monthly. The logs, however, don't show any issues. Here's my recent anacron log: Nov 27 22:30:58 localhost anacron[2229443]: Anacron 2.3 started on 2023-11-27 Nov 27 22:30:58 localhost anacron[2229443]: Normal exit (0 jobs run) Nov 27 22:30:58 localhost systemd[1]: anacron.service: Deactivated successfully. Nov 27 23:33:19 localhost systemd[1]: Started anacron.service - Run anacron jobs. Nov 27 23:33:19 localhost anacron[2230690]: Anacron 2.3 started on 2023-11-27 Nov 27 23:33:19 localhost anacron[2230690]: Normal exit (0 jobs run) Nov 27 23:33:19 localhost systemd[1]: anacron.service: Deactivated successfully. Nov 28 07:34:01 localhost systemd[1]: Started anacron.service - Run anacron jobs. Nov 28 07:34:01 localhost anacron[2240027]: Anacron 2.3 started on 2023-11-28 Nov 28 07:34:01 localhost anacron[2240027]: Will run job `cron.daily' in 5 min. Nov 28 07:34:01 localhost anacron[2240027]: Jobs will be executed sequentially Nov 28 07:39:01 localhost anacron[2240027]: Job `cron.daily' started Nov 28 07:39:02 localhost anacron[2240113]: Updated timestamp for job `cron.daily' to 2023-11-28 Nov 28 07:39:03 localhost anacron[2240027]: Job `cron.daily' terminated Nov 28 07:39:03 localhost anacron[2240027]: Normal exit (1 job run) Nov 28 07:39:03 localhost systemd[1]: anacron.service: Killing process 2240176 (ConfigServer Ve) with signal SIGKILL. Nov 28 07:39:03 localhost systemd[1]: anacron.service: Killing process 2240182 (sleep) with signal SIGKILL. Nov 28 07:39:03 localhost systemd[1]: anacron.service: Deactivated successfully. Nov 28 08:30:42 localhost systemd[1]: Started anacron.service - Run anacron jobs. 
Nov 28 08:30:42 localhost anacron[2241219]: Anacron 2.3 started on 2023-11-28 Nov 28 08:30:42 localhost anacron[2241219]: Normal exit (0 jobs run) It seems to claim that the cron.daily (as well as weekly/monthly, further back) are running fine and exiting normally. But no backups are being made, and no emails are being sent to me (I set up a test script that should always print output and therefore generate an email). I can't for the life of me figure out why anacron apparently isn't doing anything, and isn't logging any errors. I don't remember doing anything back a few weeks ago that could've caused this. Here's my /etc/anacrontab: # /etc/anacrontab: configuration file for anacron # See anacron(8) and anacrontab(5) for details. SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin HOME=/root LOGNAME=root # These replace cron's entries 1 5 cron.daily run-parts --report /etc/cron.daily 7 10 cron.weekly run-parts --report /etc/cron.weekly @monthly 15 cron.monthly run-parts --report /etc/cron.monthly What can I do to debug this and figure out what's wrong? My OS is Debian 12.1.
OK, I just discovered (thanks to this) that run-parts ignores files/links that have an extension; it doesn't match the . character. No idea why. Changed my symlinks in /etc/cron.daily not to have the .extension and now they run. Crazy.
Anacron doesn't appear to run anything on Debian, but why? [duplicate]
1,431,389,242,000
So I want to back up my system drive (the full drive, not just a partition) every month using dd, onto an external hard drive. So I have something like this in my crontab 0 9 1 * * dd if=/dev/sda | gzip -c > /mnt/5E13119070E2D202/Backups/system_drive.backup.img.gz And that works fine. But I am trying to figure out how to replace /dev/sda (the system drive) with something which is persistent between reboots. Using blkid (trimmed): /dev/sda5: UUID="58141b62-72af-463c-a3c3-57d0b739c632" TYPE="swap" PARTUUID="c1b89110-05" /dev/sda1: UUID="a97d9b38-e8a6-4cc2-9684-b7e579c1a990" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="c1b89110-01" /dev/sdg1: BLOCK_SIZE="512" UUID="5E13119070E2D202" TYPE="ntfs" PARTUUID="000b4ae7-01" Any ideas?
But I am trying to figure out how to replace /dev/sda (system drive) with something which is persistent between reboots. So, you need a "world wide unique name" for your whole disk. Luckily, Linux has you covered. Find out the unique name for your drive by finding the right wwn-* entry in /dev/disk/by-id. You could just ls -l /dev/disk/by-id/wwn-* and look for sda, or do something like find /dev/disk/by-id -name 'wwn-*' -lname '*/sda'. Either way, you get a symlink like /dev/disk/by-id/wwn-0x1234cafe. In your script, devlink=/dev/disk/by-id/wwn-0x1234cafe gzip < "${devlink}" > /mnt/5E13119070E2D202/Backups/system_drive.backup.img.gz as there's literally no advantage to using dd here. Quite the contrary! Instead of inserting dd with its own block sizes and potential copy overhead, just let the compressor do its work directly on the input. I do recommend you do not use gzip for that, for two reasons: its algorithm is old, slow, and not very good at compressing. it's single-threaded, putting another bottleneck atop the already slow algorithm You could, instead of gzip, at least use pigz, which is multithreaded and does the same. But, while I have much respect for Adler and his compressors, it's been a couple of decades, and modern compressors are both faster and better at compressing; so this would fare better on both the speed and compression ratio fronts: #!/bin/sh devlink=/dev/disk/by-id/wwn-0x1234cafe zstd -5 -T0 --force -- "${devlink}" > /mnt/5E13119070E2D202/Backups/system_drive.backup.img.zst # ^ ^ ^ ^ ^ # | | | | | # \-------------------- -5: use a medium-low compression ratio. # | | | | still better than `gzip --best`, typically, but also faster. # | | | | Possible values are 1 to 19, where you typically see # | | | | diminishing returns for values > 12. # | | | | # \----------------- -T0: use as many threads as you have CPU cores # | | | # \------------ --force: because the input is not a regular file, zstd wants # | | to be explicitly told that, yes, we want this. 
# | | # \---- --: afterwards there are the file names. Just for good measure. # | # \- "${devlink}": input filename A couple of other remarks: Running a backup on an active, read/write-mounted root file system: you can do that, yes, but it'll be in need of repair the very moment you try to restore it. Hence, kind of a bad idea. Do this from a live image, preferably, or sync; mount -o remount,ro ${root partition} / before, and mount -o remount,rw … after, at least. Yes, that will mess with the operation of your system while the backup is running. But you're not doing a backup to write "some backup somewhere for uncertain purposes", but to have a restorable, reliable state of your machine. If you need to do this on a live system, you'd typically have a different approach. You'd use LVM or a snapshotting file system (ZFS, btrfs) for your root and data volumes instead, would do a snapshot, and back up that snapshot. If I guess correctly from your partial partition list, your system isn't even set up to use LVM; that's really annoying for you (nobody wants to deal with raw partitions in 2023! this isn't the 90s). Consider reinstalling your system to use either LVM or btrfs or ZFS. You're writing your backup to an NTFS volume. Linux NTFS drivers are probably less reliable than they should be for backups. Also, ntfs-3g is slow! So, you'd crank up the compression ratio (your gzip must use -9 / --best, and your zstd would tend towards -12 to -14) at the expense of compression speed, because you'd mostly be limited by write speed, not compression speed, anyways, and writing less makes things more reliable, because fewer bits written means fewer chances for mistakes, and faster, since fewer bits to write means less waiting.
using a cron job to automatically backup the same drive using dd
1,689,071,548,000
I have a simple shell script that I am running via cron. I am using it to perform a scheduled git pull operation (yeah, I know there are probably better methods, but this is another team's repository and I just need to periodically read from it, so I'm opting for a quick and dirty solution). Anyway, my script does something like this: #!/usr/bin/bash if /usr/bin/cd /opt/repos/other-teams-repo then echo "success!" else echo "fail :(" exit 1 fi /usr/bin/pwd if /usr/bin/git pull then echo "success on the pull" else echo "fail on the pull" exit 1 fi From the command line, this works just fine. But when I run it via cron: 57 11 * * 1 /opt/myuser/shell/otheruser-repo-git-pull.sh > /opt/log/repo-pull.git.cronlog 2>&1 In my cronlog, I get this: success! /home/myuser fatal: Not a git repository (or any parent up to mount point /home) Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set). fail on the pull When I change /usr/bin/cd to cd and /usr/bin/pwd to pwd it works just fine. I'm just curious if anyone has any insight as to why this might be?
The working directory is a property of the process, so an external binary like /usr/bin/cd can't change the shell's working directory. (It'll just change its own working directory and then exit.) You need the shell's builtin one, which you get when using just cd. Yes, one might sensibly ask "What is the point of the cd external command?" With pwd vs. /usr/bin/pwd there is no such issue, as the external pwd inherits the working directory of the shell, so it can print it same as the builtin one (as long as the path doesn't contain symlinks).
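The per-process nature of the working directory is easy to demonstrate: a child process's cd vanishes when the child exits. A minimal check:

```shell
#!/bin/sh
# The child shell changes *its own* working directory, then exits;
# the parent's cwd is untouched -- which is exactly why the external
# /usr/bin/cd cannot do anything useful for the calling script.
cd /tmp
before=$(pwd)
sh -c 'cd /'
after=$(pwd)
[ "$before" = "$after" ] && echo "parent cwd unchanged"
```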
Using '/usr/bin/cd' in a cron script fails while 'cd' works
1,689,071,548,000
I am running OpenSUSE 15.4 and I am trying to set up a CRON job using crontab -e. Inside my crontab I have the following: */1 * * * * /usr/bin/Rscript /run/media/matt/A34E-C6B8/folder/myRScript.R So I give it the full path to where the Rscript program can be found and then the location of the R script I want to execute. I check the system log files by running sudo tail -f /var/log/messages and I find this: 2023-05-04T16:41:03.034501+02:00 localhost systemd[1]: Started Time & Date Service. 2023-05-04T16:41:09.628633+02:00 localhost CRON[6807]: (matt) CMDEND (/usr/bin/Rscript /run/media/matt/A34E-C6B8/folder/myRScript.R) 2023-05-04T16:41:09.630489+02:00 localhost CRON[6807]: pam_unix(crond:session): session closed for user matt 2023-05-04T16:41:09.631608+02:00 localhost systemd[1]: session-c1005.scope: Deactivated successfully. So, it looks like the cronjob is being run but being deactivated...
Adding >> /home/myuser/myscript.log 2>&1 to the end of the crontab line saves a log of the R code, and now I see that I have the following error in the R code, which is why I was getting no output!

    Could not open chrome browser.
    Client error message:
        Summary: UnknownError
        Detail: An unknown server-side error occurred while processing the command.
        Further Details: run errorDetails method
    Check server log for further details.

Additionally, there is no cron log file on OpenSUSE, so manually creating one helped (https://certsimple.com/how-to-check-crontab-logs-in-suse-linux/). There is no specific crontab log for SUSE, but you can check your system logs to see if cron is running properly. To do this, open a terminal and type the following command:

    tail -f /var/log/messages

You should see cron-related messages in the output. If you don't see any messages, then cron is not running.
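The redirection idiom in that crontab line is worth seeing in isolation: with >> logfile 2>&1, both the job's stdout and its stderr land in the same file, so errors inside the job stop being invisible. A minimal sketch (the braced commands stand in for the real job):

```shell
#!/bin/sh
# Emulate the crontab redirection ">> log 2>&1": stdout and stderr
# of the job both end up in the same log file.
log=$(mktemp)
{ echo "normal output"; echo "error output" >&2; } >> "$log" 2>&1
content=$(cat "$log")
printf '%s\n' "$content"
rm -f "$log"
```

Both lines appear in the log.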
Fixing a cronjob issue when running a script and cron task being deactivated [duplicate]
1,689,071,548,000
I have this command which runs properly when executed in a terminal:

    ssh someuser@someserver -t "sudo systemctl start someservice"

No password is asked for ssh (there is a public key to connect), and someuser can execute sudo to start someservice without a password. I need to insert the above command into crontab. Unfortunately it's not executed; I suspect the problem arises from using ssh -t for the pseudo-terminal which is needed by sudo. It seems that the pseudo-terminal cannot be allocated through cron (my assumption, no hard evidence). To summarize, my goal is to execute the command, without interaction, on a timed interval. Any ideas how to sort this following the existing method? Working alternatives welcomed.
As I originally suspected, the problem was ssh -t. The solution: I added this in /etc/sudoers on the SSH server side:

    Defaults:someuser !requiretty

I can't tell exactly what requiretty is used for. To be on the safe side I disabled it for one user only, not globally. It doesn't seem to affect this user in other respects.
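With requiretty disabled for that user, the pseudo-terminal is no longer needed, so the cron entry can use a plain non-interactive ssh. A hedged sketch (the schedule is a placeholder; BatchMode is an optional hardening so ssh fails fast instead of prompting if the key ever becomes unavailable):

```shell
# m h dom mon dow command
0 3 * * * ssh -o BatchMode=yes someuser@someserver "sudo systemctl start someservice"
```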
sudo through ssh on cron
1,689,071,548,000
I have an old iMac running macOS Catalina. I've scheduled 2 CRON jobs, one running in the morning (say, 8am) and one running in the afternoon (say, 5pm). The CRON jobs run a few R scripts, in case that matters. Given that CRON jobs won't execute if the machine is sleeping, I'm wondering what's the best way to make sure both jobs are executed. Right now I've scheduled for the Mac to "wake up" in the morning and "Shut down" in the evening, but unless I prevent it going to sleep completely, the afternoon job isn't executed. Two potential solutions came to mind: first, setting up a different user profile with a different schedule - but that didn't work, since there's only one schedule per machine. Second, "Power Nap", but that doesn't seem to execute the job either, or at least not consistently. Now, short of either never allowing sleep or building a robot to move the mouse in the afternoon and wake up the machine, I'm running out of options... Any better ideas? Thanks, Philipp
Apple intends to phase out cron (but this has been going on for such a long time that I forgot when they first announced it :-)). Nevertheless, using launchd instead would solve your problem quite easily. From man launchd.plist:

    StartCalendarInterval <dictionary of integers or array of dictionaries of integers>
    This optional key causes the job to be started every calendar interval as specified. Missing arguments are considered to be wildcard. The semantics are similar to crontab(5) in how firing dates are specified. Multiple dictionaries may be specified in an array to schedule multiple calendar intervals. Unlike cron which skips job invocations when the computer is asleep, launchd will start the job the next time the computer wakes up. If multiple intervals transpire before the computer is woken, those events will be coalesced into one event upon wake from sleep.

This answer on StackOverflow has a basic launchd.plist template for simple cronjobs.
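For completeness, a minimal launchd job using StartCalendarInterval might look like the following (the label, program path, and time are made-up placeholders; adjust to the actual R script). Saved as e.g. ~/Library/LaunchAgents/local.afternoon-job.plist and loaded with launchctl load, it fires at 17:00, or on the next wake if the machine was asleep:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.afternoon-job</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/Rscript</string>
        <string>/Users/philipp/scripts/afternoon.R</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>17</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
```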
Wake up / CRON on Mac
1,689,071,548,000
I'm using crontab to send messages to all users. I wrote:

    */1 * * * * wall $(bash some_shell_script.sh)

But the problem is I always have to press Ctrl+D to end the message. How can I solve this?
The wall command executes within the context of cron. It does its thing and exits. On the receiving display devices (terminals) you will get the notification from wall. This notification cares nothing about what you are doing on the terminal, so if you are quietly sitting at a command line prompt when the notification message is sent, the prompt will not be rewritten. You have found that you need to hit Enter to get the prompt re-sent, but in practical terms this is unnecessary: you could simply enter your command normally.

Scenario timeline:

 1. You are logged in and sitting at a command prompt
 2. The wall notification is sent to your terminal
 3. You type ls (and hit Enter)

At step 3 you do not have a visible prompt but the shell is still patiently waiting for your command.

Alternatively, perhaps you're actually talking about how you exit from the crontab command. In this instance the Ctrl+D is used to signal the end of input.

Scenario crontab:

 1. You enter the line */1 * * * * bash some_shell_script.sh | wall
 2. You hit Ctrl+D to end data input to the crontab command

Note that the Ctrl+D in the last step is nothing to do with wall. Also, this crontab entry will repeat your wall command every minute. Is that really what you want?
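The difference between the two forms matters here. With command substitution, wall gets the script's output as arguments, and if that output happens to be empty, wall falls back to reading its message from stdin and waits for Ctrl+D. A pipe instead delivers the message on stdin and ends it with an automatic EOF the moment the producer exits. A minimal sketch (cat stands in for wall, since wall wants terminals to broadcast to):

```shell
#!/bin/sh
# When the message arrives on a pipe, the reader sees EOF as soon
# as the producer exits -- no interactive Ctrl+D is needed.
msg=$(echo "maintenance at 22:00" | cat)
printf '%s\n' "$msg"
```

Prints "maintenance at 22:00" and exits on its own.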
How to exit "wall" without pressing Ctrl+D? [closed]
1,689,071,548,000
I would like to run a crontab job to delete files older than 5 days in a folder AND redirect the command output to a file in case of errors. This command deletes the files when run from the command line:

    /usr/bin/find /mnt/SQL_Backups/* -mtime +5 -exec rm {} \;

but when I add the stdout and stderr redirection to it, it fails:

    /usr/bin/find /mnt/SQL_Backups/* -mtime +5 -exec rm {} \; > /mnt/output/CRONDeleteFiles.txt 2>$1

From the command line, the error is:

    -bash: $1: ambiguous redirect

while from the crontab email error message, I get this error:

    /bin/sh: 1: cannot create : Directory nonexistent

I suspect it has something to do with my redirection code? What is the right way to do this?
stderr to stdout is redirected with 2>&1, not 2>$1 as in your example.
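A runnable sketch of the corrected command, using a temporary directory in place of /mnt/SQL_Backups and relying on GNU touch -d to backdate one file past the 5-day threshold:

```shell
#!/bin/sh
# Stand-ins for /mnt/SQL_Backups and /mnt/output/CRONDeleteFiles.txt.
dir=$(mktemp -d)
log=$(mktemp)

touch "$dir/recent.bak"
touch -d '10 days ago' "$dir/stale.bak"   # GNU touch: backdate 10 days

# "2>&1" (not "2>$1") sends stderr to the same place as stdout.
find "$dir" -type f -mtime +5 -exec rm {} \; > "$log" 2>&1

remaining=$(ls "$dir")
printf '%s\n' "$remaining"
rm -rf "$dir" "$log"
```

Only recent.bak survives the deletion.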
Pipe command output from a CRONTAB job deleting files more that 5 days old [closed]
1,689,071,548,000
As root, I installed ssmtp and configured /etc/ssmtp/ssmtp.conf as follows:

    # Sender email address
    [email protected]
    # Destination SMTP server and port
    mailhub=mail.domain.com:587
    # Username and password
    [email protected]
    AuthPass=password
    # Sender domain
    rewriteDomain=domain.com
    # Machine's hostname
    hostname=mail.domain.com:587
    # Allow set From name in each email
    FromLineOverride=YES
    UseSTARTTLS=YES
    UseTLS=YES

I also configured the reverse aliases in /etc/ssmtp/revaliases, adding the following row:

    root:[email protected]:mail.domain.com:587

I set up a cron job by running crontab -e and added the rows (just to test it's running):

    [email protected]
    * * * * * echo "this is a test"

If I run grep cron /var/log/syslog I see the following error:

    cron[2704289]: sendmail: RCPT TO:<[email protected]> (553 5.7.1 <[email protected]>: Sender address rejected: not owned by user [email protected])

[email protected] is changed to [email protected] and I cannot find a solution. Any help?
You have inverted the meaning of FromLineOverride. Here with it set to yes you are declaring that the sender is allowed to override the settings defined in ssmtp.conf, and this is why the sender is ending up as root. Switch that setting off (no) and you should be good to go.
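Concretely, the one line to change in /etc/ssmtp/ssmtp.conf would be:

```shell
# Do not let the sender override the From address configured above;
# ssmtp then rewrites the envelope sender to the authorized one.
FromLineOverride=NO
```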
crontab error - SSMTP - 553 5.7.1 Sender address rejected: not owned by user
1,650,577,026,000
I've set anacron to run several tasks, but it seems to fail due to some sort of weird permissions error. This is my anacrontab:

    # /etc/anacrontab: configuration file for anacron
    # See anacron(8) and anacrontab(5) for details.
    SHELL=/bin/sh
    PATH=/sbin:/bin:/usr/sbin:/usr/bin
    MAILTO=root
    # the maximal random delay added to the base delay of the jobs
    RANDOM_DELAY=45
    # the jobs will be started during the following hours only
    START_HOURS_RANGE=3-22

    #period in days   delay in minutes   job-identifier   command
    1        5    cron.daily          nice run-parts /etc/cron.daily
    7        25   cron.weekly         nice run-parts /etc/cron.weekly
    @monthly 45   cron.monthly        nice run-parts /etc/cron.monthly
    @daily   1    bashrc.daily        rsync -aAX $HOME/.bashrc /run/media/MYUSER/samsung/home/MYUSER/.bashrc
    @daily   1    bash_aliases.daily  rsync -aAX $HOME/.bash_aliases /run/media/MYUSER/samsung/home/MYUSER/.bash_aliases
    @daily   5    variety.daily       rsync -aAX $HOME/.config/variety/ /run/media/MYUSER/samsung/home/MYUSER/.config/variety/
    @daily   3    testfile.daily      rsync -aAX $HOME/Documents/flag.hs /run/media/MYUSER/samsung/flag.hs
    @weekly  5    st_apps.daily       rsync -aAX $HOME/.local/share/Steam/steamapps/ /run/media/MYUSER/samsung/home/MYUSER/.local/share/Steam/steamapps/
    @weekly  15   st_ud.daily         rsync -aAX $HOME/.local/share/Steam/userdata/ /run/media/MYUSER/samsung/home/MYUSER/.local/share/Steam/userdata/
    @weekly  1    anacrontab.weekly   rsync -aAX /etc/anacrontab /run/media/MYUSER/samsung/home/anacrontab

This is the output of journalctl -b --no-pager --catalog | grep anacron:

    jan. 14 00:02:09 MYPC anacron[73073]: Anacron started on 2022-01-14
    jan. 14 00:02:09 MYPC anacron[73073]: Can't open timestamp file for job cron.daily: Permission denied
    jan. 14 00:02:09 MYPC anacron[73073]: Aborted
    jan. 14 00:02:42 MYPC sudo[73113]: MYUSER : TTY=pts/0 ; PWD=/home/MYUSER ; USER=root ; COMMAND=/usr/bin/dd bs=4k of=/etc/anacrontab
    jan. 14 00:02:50 MYPC sudo[73124]: MYUSER : TTY=pts/0 ; PWD=/home/MYUSER ; USER=root ; COMMAND=/usr/bin/dd bs=4k of=/etc/anacrontab

Could someone please prod me in the right direction? System: Fedora 35
My anacron worked earlier but doesn't work any more, so I decided to take a different approach. To be able to run it without escalated privileges, do as follows:

 1. Create an .anacron folder in your home dir with two subdirs (etc and spool) by using the command mkdir -p ~/.anacron/{etc,spool}.
 2. Create a new file in the etc dir by running touch $HOME/.anacron/etc/anacrontab, with contents similar to the original /etc/anacrontab. E.g.:

        # Personal anacrontab
        SHELL=/bin/bash
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        #period in days   delay in minutes   job-identifier   command
        @daily  1   bashrc.daily        rsync -aAXr $HOME/.bashrc /run/media/myuser/samsung/home/myuser/.bashrc
        @daily  1   bash_aliases.daily  rsync -aAXr $HOME/.bash_aliases /run/media/myuser/samsung/home/myuser/.bash_aliases

 3. In the terminal, enter crontab -e and add the following:

        @hourly /usr/sbin/anacron -s -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool

You can wait for cron to execute the hourly task, or you can force anacron to run immediately with anacron -fnd -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool/. Hope this helps someone else in the future!
Why doesn't anacron run the tasks scheduled?
1,650,577,026,000
I have a Bash script which should only execute in a specific time window (from midnight to 00:15 AM). But if I execute the function, I get [: too many arguments as an error message. How do I solve it? I still want to use Bash. I'm using Ubuntu Server 20.04 LTS. Script:

    currTime=`date +%H%M`

    check_time_to_run() {
        tempTime=$1

        if [ $tempTime -gt 0 -a $tempTime -lt 015 ]; then
            echo "Time is after 0 AM and before 0:10 AM. Restarting Server."
        else
            echo "Time is not between 0 AM and 0:15 AM. Aborting restart."
            exit 1
        fi
    }
You can try to break up your statements (keeping the bounds in the intended order: greater than 0 and less than 15):

    if [ "$tempTime" -gt 0 ] && [ "$tempTime" -lt 15 ]; then
        stuff...
    fi

or use the double bracket to test for the binary result of an expression:

    if [[ $tempTime -gt 0 && $tempTime -lt 15 ]]; then
        stuff...
    fi

Note that an empty $tempTime (e.g. the function being called without an argument) is exactly what produces "[: too many arguments" in the original; quoting the variable, or using [[ ]], gives you a clearer failure. Also be careful with leading zeros: inside bash arithmetic contexts a constant like 015 is treated as octal, and values from date +%H%M such as 0009 are not even valid octal.
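A self-contained sketch of the intended check (messages shortened; the window boundary of 15 minutes is taken from the question). Leading zeros from date +%H%M are stripped first so a value like 0007 can never be misread as an octal constant:

```shell
#!/bin/sh
# check_time_to_run HHMM: prints "restart" for times 0000-0014,
# "abort" otherwise.
check_time_to_run() {
    t=$(printf '%s\n' "$1" | sed 's/^0*//')   # drop leading zeros
    t=${t:-0}                                  # all-zero input means 0
    if [ "$t" -ge 0 ] && [ "$t" -lt 15 ]; then
        echo "restart"
    else
        echo "abort"
    fi
}

check_time_to_run 0007   # prints "restart"
check_time_to_run 0230   # prints "abort"
```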
How to do math in If-Statement
1,650,577,026,000
I have this entry in my crontab:

    $ crontab -l
    * * * * * $(cd /home/fedor/Documents/mecab_natto && bash run_mecab.sh >> /home/fedor/Documents/mecab_natto/log.txt)

When I run the script manually everything works perfectly fine from everywhere in the system. However, when I run it as a cron job, the job starts but terminates immediately, thereby terminating my sinatra application:

    $ journalctl -u cron.service | tail
    Aug 18 09:37:01 fedor-desktop CRON[12180]: pam_unix(cron:session): session closed for user fedor
    Aug 18 09:38:01 fedor-desktop CRON[12241]: pam_unix(cron:session): session opened for user fedor by (uid=0)
    Aug 18 09:38:01 fedor-desktop CRON[12242]: (fedor) CMD ($(cd /home/fedor/Documents/mecab_natto && bash run_mecab.sh >> /home/fedor/Documents/mecab_natto/log.txt))
    Aug 18 09:38:01 fedor-desktop CRON[12241]: (CRON) info (No MTA installed, discarding output)
    Aug 18 09:38:01 fedor-desktop CRON[12241]: pam_unix(cron:session): session closed for user fedor
    Aug 18 09:39:01 fedor-desktop cron[3820]: (fedor) RELOAD (crontabs/fedor)
    Aug 18 09:39:01 fedor-desktop CRON[12397]: pam_unix(cron:session): session opened for user fedor by (uid=0)
    Aug 18 09:39:01 fedor-desktop CRON[12398]: (fedor) CMD (cd /home/fedor/Documents/mecab_natto && bash run_mecab.sh >> /home/fedor/Documents/mecab_natto/log.txt)
    Aug 18 09:39:01 fedor-desktop CRON[12397]: (CRON) info (No MTA installed, discarding output)
    Aug 18 09:39:01 fedor-desktop CRON[12397]: pam_unix(cron:session): session closed for user fedor

run_mecab.sh:

    . app.config
    echo $PID
    if [ -n "$(ps -p $PID -o pid=)" ] && [ "$(ps -q $PID -o comm=)" == "ruby" ]; then
        echo "process is running"
        echo $PID
    else
        echo "process is not running, starting"
        bundle exec ruby main.rb -o 0.0.0.0 -p 4568 &
        ID="$!"
        echo $ID
        echo "PID=$ID" > app.config
    fi

Basically the script just checks whether the service with the saved id is running and, if not, starts one and saves the new id. I had this setup on rpi raspbian and it worked fine, but on the jetson nano default image (Ubuntu 18.something) the job starts but terminates immediately. I am not sure where to start debugging. I adjusted privileges for run_mecab.sh based on another cron question and did not find anything satisfying about pam_unix(cron:session): session opened/closed that helps me understand where the problem could be. Thanks for help.
Adding #!/bin/bash to the top of run_mecab.sh and calling it with:

    * * * * * $(cd /home/fedor/Documents/mecab_natto && ./run_mecab.sh >> /home/fedor/Documents/mecab_natto/log.txt)

instead of:

    * * * * * $(cd /home/fedor/Documents/mecab_natto && bash run_mecab.sh >> /home/fedor/Documents/mecab_natto/log.txt)

solved my issue. Thank you all for help.
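As an aside, the liveness check that run_mecab.sh relies on can be exercised in isolation; a minimal sketch against the shell's own PID, which is certainly alive:

```shell
#!/bin/sh
# "ps -p PID -o pid=" prints the PID iff such a process exists;
# an empty result means the process is gone.
PID=$$
if [ -n "$(ps -p "$PID" -o pid=)" ]; then
    status="process is running"
else
    status="process is not running"
fi
echo "$status"
```

Prints "process is running".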
Cron job terminates session immediately
1,650,577,026,000
I have read similar posts here, here, here and here that mentioned the environment variables DISPLAY and DBUS_SESSION_BUS_ADDRESS. Setting them at the top of my user crontab enabled notify-send to work in my user crontab. However, the exact same crontab does not work if set via sudo crontab -e. Why not? And how can I make it work?

test.sh:

    #! /bin/bash
    env > $1
    notify-send "I want to see this"

crontab -e:

    SHELL=/bin/bash
    PATH="/usr/bin:/home/ripytide/scripts/"
    DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
    DISPLAY=:0
    * * * * * test.sh /home/ripytide/crontab.env

sudo crontab -e:

    SHELL=/bin/bash
    PATH="/usr/bin:/home/ripytide/scripts/"
    DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
    DISPLAY=:0
    * * * * * test.sh /home/ripytide/sudocrontab.env

crontab.env:

    SHELL=/bin/bash
    PWD=/home/ripytide
    LOGNAME=ripytide
    _=/usr/bin/env
    HOME=/home/ripytide
    LANG=en_GB.UTF-8
    USER=ripytide
    DISPLAY=:0
    SHLVL=1
    PATH=/usr/bin:/home/ripytide/scripts/
    DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus

sudocrontab.env:

    SHELL=/bin/bash
    PWD=/root
    LOGNAME=root
    _=/usr/bin/env
    HOME=/root
    LANG=en_GB.UTF-8
    USER=root
    DISPLAY=:0
    SHLVL=1
    PATH=/usr/bin:/home/ripytide/scripts/
    DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
D-Bus checks whether the UIDs of the calling process and the session daemon are the same. Your script needs to run notify-send as the target user. If you insist on running the script as root, then in the script you need sudo -u user notify-send …. Keep in mind sudo sanitizes the environment, so DBUS_SESSION_BUS_ADDRESS from the environment of the script will not get to the environment of notify-send (/etc/sudoers and the security policy may allow sudo to keep the variable, but in your case this probably won't happen by default). You may try to change the settings; I won't elaborate. There is also sudo -E (see man 8 sudo). The least intrusive method is to request sudo to set the variable on demand:

    sudo -u user DBUS_SESSION_BUS_ADDRESS="$DBUS_SESSION_BUS_ADDRESS" \
        notify-send "I want to see this"

When the shell processing the script expands $DBUS_SESSION_BUS_ADDRESS, the command becomes:

    sudo -u user DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus \
        notify-send "I want to see this"

and this form can just as well be used directly, so it's up to you. In general things may be configured not to allow sudo to set the variable even this way. In the worst case the following should work:

    sudo -u user sh -c 'DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus notify-send "I want to see this"'
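The "set the variable on demand" mechanism can be seen without sudo or D-Bus at all: a leading VAR=value assignment travels with just that one command. A minimal sketch in plain shell (sudo -u user VAR=value cmd relies on the same idea when the policy permits it):

```shell
#!/bin/sh
# The assignment applies only to the single command that follows it.
out=$(DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus \
      sh -c 'echo "$DBUS_SESSION_BUS_ADDRESS"')
printf '%s\n' "$out"
```

Prints unix:path=/run/user/1000/bus, while the outer shell's environment is untouched.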
sudo crontab notify-send doesn't work
1,650,577,026,000
I have a small script (a very easy one, the first I ever wrote) and I'd like to know the elapsed time, i.e. how long it took once it has finished. Is it possible to compute the difference between the two variables ($SSS and $EEE)? I tried with $SECONDS as well, but it gives 0 as a result. Probably a different approach is needed, so I need help figuring out how to solve this. My script:

    echo "****************************************************************************"
    SECONDS=0
    #!/bin/bash
    SSS=$(date '+%Y.%m.%d. @ %H:%M:%S')
    echo "Start time: ${SSS}"
    echo ""
    echo "Starting update and upgrade"
    echo "==================================="
    sudo apt-get update && sudo apt-get upgrade -y
    echo ""
    echo "Starting autoremove"
    echo "==================================="
    sudo apt-get autoremove -y
    echo ""
    echo "Starting autoclean"
    echo "==================================="
    sudo apt-get autoclean -y
    echo ""
    echo "Check for Pi-Hole update"
    echo "==================================="
    sudo pihole -up
    echo ""
    echo "Starting gravity update for Pi-Hole"
    echo "==================================="
    sudo pihole -g
    echo ""
    EEE=$(date '+%Y.%m.%d. @ %H:%M:%S')
    echo "Start time: ${SSS}"
    echo "End time: ${EEE}"
    duration=$SECONDS
    echo "Elapsed time $(($duration / 60)) minutes and $(($duration % 60)) seconds."
    echo "****************************************************************************"
The issue with your script has nothing to do with the timing of the commands. The issue with your script is that it is running in an environment which is different to the environment you run it in from your command line. For example, the PATH variable may be different leading to some utilities not being found, or there are interactive elements to the script that obviously can't be carried out when the script is running in a non-interactive environment. You also mention in comments that you start the script using sh rather than bash, which additionally means that the SECONDS variable may not exist (depending on what shell sh happens to be implemented by). I can see that the script executes almost every single command using sudo. This leads me to suggest that you should be running the script in root's own crontab and not in the crontab of a non-privileged user account (and remove all the sudo invocations from the script). Modify the root user's crontab using sudo crontab -e. Furthermore, make sure that the PATH variable in the script is set to a list of directories that allows you to find all the utilities that you use. For example,

    #!/bin/bash
    PATH=/bin:/usr/bin:/usr/local/bin:$PATH
    SECONDS=0
    apt-get update && apt-get upgrade
    # etc.

You should in particular make sure that the PATH variable contains the directories where apt-get and pihole live (find these with command -v pihole and command -v apt-get on the command line). Also make sure that the #!-line is the first line of the file (as I've shown here in my example). The code in the question is not doing this. Note that the #!-line is completely ignored if you use an explicit interpreter on the command line when invoking the script.
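If the script does have to run under a shell other than bash (where SECONDS may not exist), a portable way to measure elapsed time is to diff epoch seconds. A small sketch (sleep stands in for the real maintenance commands):

```shell
#!/bin/sh
# Record epoch seconds before and after the work, then report the
# difference in minutes and seconds.
start=$(date +%s)
sleep 2
end=$(date +%s)
duration=$((end - start))
echo "Elapsed time $((duration / 60)) minutes and $((duration % 60)) seconds."
```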
Elapsed time counting
1,650,577,026,000
When I run sh /opt/script/cypress.sh, everything works: the script changes directory and executes a command to open another script. But when I have my crontab like this:

    1 * * * * /opt/script/cypress.sh

it doesn't work. I edited the crontab with crontab -e and verified that cron itself works with a touch command. cypress.sh looks like:

    #!/bin/sh
    cd "/opt/script" | ./cypress > /opt/script/log;

And the cypress file looks like:

    cd "/opt/Website Testing/"
    npx cypress run --record --key *

I replaced the record key with "*" for this post.
What worked for me is the following configuration.

crontab:

    0 4 * * * /opt/script/cypress.sh > /opt/log

cypress.sh:

    #!/bin/sh
    . $HOME/.bashrc
    cd "/opt/Website Testing/" && "/opt/Website Testing/node_modules/.bin/cypress" run --record --key *

Thanks for your help; the ". $HOME/.bashrc" was what was missing for me.
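Sourcing .bashrc works here mainly because it fixes PATH; cron jobs start with a minimal one. The effect is easy to reproduce: a hedged sketch that hides a command behind a restricted PATH and then restores it (directory and tool names are made up):

```shell
#!/bin/sh
# Create a throwaway "project-local" tool, like node_modules/.bin/cypress.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho found\n' > "$bindir/mytool"
chmod +x "$bindir/mytool"

# Under a cron-like minimal PATH the tool is invisible...
out_missing=$(PATH=/usr/bin:/bin sh -c 'command -v mytool || echo missing')

# ...but prepending its directory (what sourcing .bashrc effectively
# did) makes it callable again.
out_found=$(PATH="$bindir:/usr/bin:/bin" sh -c 'mytool')

printf '%s\n%s\n' "$out_missing" "$out_found"
rm -rf "$bindir"
```

Prints "missing" then "found".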
How do I run a shell script which runs another script with cron? [duplicate]
1,616,862,924,000
I have removed jobs from crontab, but somehow they are still running at the scheduled time. I am really not sure what to do. I have also removed the user's crontab with:

    sudo crontab -r -u USERNAME

    $ sudo ls -l /var/spool/cron/
    total 4
    -rw-------. 1 root root 121 Jan  7 02:28 root

This is the user's crontab, which is empty:

    $ crontab -l
    $

This is the cron log from /var/log (the job is not even showing here):

    Mar 18 21:01:01 u0101 run-parts(/etc/cron.hourly)[65988]: starting 0anacron
    Mar 18 21:01:01 u0101 run-parts(/etc/cron.hourly)[65997]: finished 0anacron
    Mar 18 21:10:01 u0101 CROND[66668]: (root) CMD (/usr/lib64/sa/sa1 1 1)
    Mar 18 21:20:01 u0101 CROND[67392]: (root) CMD (/usr/lib64/sa/sa1 1 1)
    Mar 18 21:30:02 u0101 CROND[68097]: (root) CMD (/usr/lib64/sa/sa1 1 1)
    Mar 18 21:30:02 u0101 CROND[68098]: (root) CMD (/root/linux-mem.sh 1>/dev/null 2>&1)
    Mar 18 21:40:01 u0101 CROND[68851]: (root) CMD (/usr/lib64/sa/sa1 1 1)
    Mar 18 21:50:01 u0101 CROND[69614]: (root) CMD (/usr/lib64/sa/sa1 1 1)

What else should I look out for? Should I change my script names? I really can't figure out how to stop the jobs from running.
Finally FIGURED OUT!!!!!! Our internal team had cloned our server to UAT environment which created a copy of all jobs/crontab etc. Thanks for all the help.
Removed jobs from user Crontab but still its running
1,616,862,924,000
I've built a program and I want to run it every 5 mins at round time (**:00, **:05, **:10, ..., not **:01, **:06, **:11, ... or **:03, **:08, **:13, ...). As I've read, it's better to use systemd than crontab, so I want to use systemd. And I understand how to launch program every 5 mins, but it doesn't launch at round time. I know I can try to launch the program at **:00 so next time it will launch at **:05 and etc, but it seems to be a too silly way. How do I launch program like this with systemd? Or I can't do it with systemd and I should use crontab?
You can do this by listing the possible times in OnCalendar=:

    OnCalendar=*-*-* *:0,5,10,15,20,25,30,35,40,45,50,55:00

in normalised form, or more compactly with the repetition syntax that calendar expressions support (see man systemd.time):

    OnCalendar=*:0/5

which means "every minute that is 0 plus a multiple of 5". systemd usually allows itself a minute of leeway on calendar events; you can tighten this by specifying AccuracySec=.
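Put together, a hedged sketch of the unit pair (names are placeholders; the .service runs the program, the .timer schedules it):

```ini
# myprog.service
[Unit]
Description=Run myprog

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myprog

# myprog.timer
[Unit]
Description=Run myprog every 5 minutes, on the clock

[Timer]
OnCalendar=*-*-* *:0,5,10,15,20,25,30,35,40,45,50,55:00
AccuracySec=1s

[Install]
WantedBy=timers.target
```

Enabled with systemctl enable --now myprog.timer, the timer fires at the round times rather than at fixed intervals from activation.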
systemd - run command every 5 mins and at round time (**:00, **:05, **:10, ...)
1,415,303,231,000
    [root@SERVER ~]# rmmod -f cifs
    ERROR: Removing 'cifs': Resource temporarily unavailable
    [root@SERVER ~]# modprobe -r cifs
    FATAL: Module cifs is in use.
    [root@SERVER ~]# lsb_release -a
    LSB Version:    :core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
    Distributor ID: Scientific
    Description:    Scientific Linux release 6.1 (Carbon)
    Release:        6.1
    Codename:       Carbon
    [root@SERVER ~]#

I tried rmmod -fw cifs but it just waited for ages... (and yes, all cifs shares are "umount -l"-ed before trying to remove the cifs module.)

QUESTION: how can I remove the cifs module?
Force-unmount (then lazily unmount) every CIFS share first, give the kernel a moment to release its references, and then remove the module:

    /usr/bin/sudo /bin/umount -f -a -t cifs
    /usr/bin/sudo /bin/umount -f -l -a -t cifs
    sleep 5
    /usr/bin/sudo /sbin/modprobe -r -f cifs
    pkill nautilus

(The pkill nautilus at the end kills a file manager that may still be holding a share open.)
How to remove kernel module if it's still in use?
1,415,303,231,000
Not so much asking what books (although if you know of any guides/tutorials that'd be helpful) but what is the best way to start doing kernel programming and is there a particular distribution that would be best to learn on? I'm mostly interested in the Device Drivers portion, but I want to learn how the Kernel is set up as well (Modules and such) I have around 4-5 years Experience with C/C++ but it's mostly knowledge from College (so it's not like 4-5 years work experience, if you know what I mean)
Firstly: For the baby stages, writing various variations on "hello world" modules, and virtual hardware drivers, are the best way to start (real hardware introduces real world problems best faced when you have more of an idea what you are doing). "Linux Device Drivers" is an excellent book and well worth starting with: http://lwn.net/Kernel/LDD3/ LDD (used to, at least) have exercises where you wrote virtual drivers, e.g. RAM disks, and virtual network devices. Secondly: subscribe to https://lkml.org/ or to the mailing list of a sub-system you will be hacking in. Lurk for a bit, scanning over threads, reading code review (replies to patches) to see what kind of things people stumble on or pick up on. See if you can obtain (cheap) hardware for a device that is not yet supported, or not yet supported well. Good candidates are cheap-ish USB NICs or similar, low-cost USB peripherals. Something with an out-of-date, or out-of-tree driver, perhaps vendor written, perhaps for 2.4.x, is ideal, since you can start with something that works (sort-of), and gradually adapt it/rewrite it, testing as you go. My first driver attempt was for a Davicom DM9601 USB NIC. There was a 2.4-series vendor-written kernel driver that I slowly adapted to 2.6. (Note: the driver in mainline is not my driver, in the end someone else wrote one from scratch). Another good way in is to look at the Kernel Newbies site, specifically the "kernel janitors" todo: http://kernelnewbies.org/KernelJanitors/Todo This is a list of tasks that a beginner should be able to tackle.
Best way to get into Kernel programming?
1,415,303,231,000
My kernel keeps panicking when connected to a certain wireless network. I'd like to send a bug report but my kernel is apparently tainted. From /var/log/messages:

    Apr 17 21:28:22 Eiger kernel: [13330.442453] Pid: 4095, comm: kworker/u:1 Tainted: G           O 3.8.4-102.fc17.x86_64 #1

and

    [root@Eiger ~]# cat /proc/sys/kernel/tainted
    4096

I've not been able to find documentation for what the 4096 bitmask means, but the G flag means that an external GPL module is loaded into the kernel. How do I find out which module is tainting the kernel? I've grepped for [Tt]aint in /var/log/messages and dmesg and don't find anything corresponding to when a module is loaded. My kernel is the latest kernel from Fedora 17: 3.8.4-102.fc17.x86_64.

UPDATE: It may be due to the rts5139 module. It shows up in lsmod but modinfo rts5139 produces ERROR: Module rts5139 not found. When booting the previous kernel, 3.8.3-103.fc17.x86_64, this module is not listed by lsmod and the kernel is not tainted (/proc/sys/kernel/tainted is 0). I've tried blacklisting this module:

    echo 'blacklist rts5139' >> /etc/modprobe.d/blacklist.conf

but rebooting still shows the kernel as tainted.
Well I don't believe a standard Fedora kernel package will include any modules which would trigger this taint so the question is, what other kernel modules have you installed? Common candidates would be graphics drivers (though I think those will mostly set the "proprietary" bit) and wireless drivers. If you can find anything in the lsmod output that you think may be a candidate then run modinfo <module-name> and see if the output includes intree: Y as any module without that will trigger the taint you are seeing. UPDATE: The rts5139 module that you're seeing in lsmod but which doesn't seem to be on your system is probably in the initrd and is being loaded from there early in the boot process before the main filesystem is mounted. That also explains why blacklisting won't work as you would have to rebuild the initrd with the updated blacklist. Rebuilding the initrd with dracut will cause the module to go away anyway though.
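Checking every loaded module by hand is tedious; a hedged sketch that automates the loop (it prints nothing, rather than failing, if lsmod or modinfo is unavailable, e.g. on a non-Linux host):

```shell
#!/bin/sh
# List loaded modules whose modinfo output lacks "intree: Y" --
# these are the candidates for the out-of-tree taint.
find_out_of_tree() {
    lsmod 2>/dev/null | awk 'NR > 1 {print $1}' | while read -r m; do
        modinfo "$m" 2>/dev/null | grep -q '^intree:[[:space:]]*Y' \
            || echo "$m"
    done
}
find_out_of_tree
```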
How to determine which module taints the kernel?
1,415,303,231,000
I have installed VirtualBox on Debian Jessie according to the instructions on the Debian wiki, by running:

    apt-get install linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,') virtualbox

During installation some errors were reported. Now I want to re-configure virtualbox-dkms, but I receive this error:

    Loading new virtualbox-4.3.18 DKMS files...
    Building only for 3.16-3-amd64
    Module build for the currently running kernel was skipped since the
    kernel source for this kernel does not seem to be installed.

Note: uname -r shows 3.16-3-amd64, but my source folder in /usr/src is named linux-headers-3.16.0-4-amd64. I don't know what to do!
I tried all of these solutions, but the problem turned out to be my kernel: linux-headers-$(uname -r) wanted to install the 3.16.0-3 headers to match my running kernel, but there is no such linux-headers package in the Debian repos; only 3.16.0-4 exists.

Solution: upgrade the kernel via apt-get, then everything works fine.
cannot reconfigure virtualbox-dkms