I want to set up crontab to run a script every minute on Manjaro, so I added a job with:

    $ crontab -e
    * * * * * /path/to/my/script.sh
    crontab: installing new crontab

Then I see it is installed with $ crontab -l, but it is not working, so I try to restart the service:

    $ sudo systemctl restart crontab
    Failed to restart crontab.service: Unit crontab.service not found.
    $ sudo systemctl restart cron.service
    Failed to restart cron.service: Unit cron.service not found.
    $ sudo systemctl list-unit-files | grep -i cron
    # No output

Then I noticed that no cron.service file exists on my computer at all, so I looked at cron.service on a different computer (running Mint):

    $ cat /lib/systemd/system/cron.service
    [Unit]
    Description=Regular background program processing daemon
    Documentation=man:cron(8)

    [Service]
    EnvironmentFile=-/etc/default/cron
    ExecStart=/usr/sbin/cron -f $EXTRA_OPTS
    IgnoreSIGPIPE=false
    KillMode=process

    [Install]
    WantedBy=multi-user.target

So maybe I could copy those files, with all dependencies, from that computer:

    scp mint:/lib/systemd/system/cron.service /lib/systemd/system/
    scp mint:/etc/default/cron /etc/default/
    scp mint:/usr/sbin/cron /usr/sbin/

but I'm not sure that is a good solution. I know there are alternatives, notably dedicated systemd timers on Arch-based distributions, but I prefer a solution that is portable between systems. Is it possible to use cron normally on Manjaro 19.02?
I've found a solution on Manjaro's Polish forum. Instead of cron, we should install cronie:

    sudo pacman -S cronie
    sudo systemctl enable cronie.service
    sudo systemctl start cronie.service

Then we can configure it like a normal crontab.
crontab.service file not found despite installed and configured crontab
When I send an email using mutt (the output of a script run from cron/cronie), I get the following lines at the beginning of the email body:

    To: [email protected]
    Subject: Cron <root@alarm> /home/alarm/bin/script-name.sh
    MIME-Version: 1.0
    Content-Type: text/plain; charset=ANSI_X3.4-1968
    Auto-Submitted: auto-generated
    Precedence: bulk
    X-Cron-Env: <LANG=C>
    X-Cron-Env: <SHELL=/bin/bash>
    X-Cron-Env: <PATH=/sbin:/bin:/usr/sbin:/usr/bin>
    X-Cron-Env: <[email protected]>
    X-Cron-Env: <HOME=/root>
    X-Cron-Env: <LOGNAME=root>
    X-Cron-Env: <USER=root>

    remainder of email from output of script...

EDIT: mutt gets called from the cronie.service file on this line:

    ExecStart=/usr/bin/crond -n -m mutt

My mutt configuration:

    $ cat ~/.muttrc
    set sendmail="/usr/bin/msmtp"
    set use_from=yes
    set realname="Ikwyl6"
    set [email protected]
    set envelope_from=yes

Does anyone know why I'm getting these headers in the content of the email?
In your cronie.service file put:

    Environment="EMAIL=[email protected]"

where EMAIL is the address you want your cron job output (output from cron scripts only) to be emailed to. Change the line in cronie.service that has:

    ExecStart=/usr/bin/crond -n -m 'msmtp -t'

to:

    ExecStart=/usr/bin/crond -n -m 'mutt -H - ${EMAIL}'

where -H - makes mutt take the start of the piped input as the headers of the message and the rest as the body. In your /etc/cron.d/ file or crontab file, add:

    MAILTO=[email protected]

Then restart cronie:

    sudo systemctl restart cronie
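The same unit edit can be done as a drop-in override, so a cronie package update does not overwrite it. A hedged sketch, created with `sudo systemctl edit cronie` (the address is a placeholder; the empty ExecStart= line is required by systemd to clear the packaged value before redefining it):

```ini
# Assumed drop-in at /etc/systemd/system/cronie.service.d/override.conf
[Service]
Environment="EMAIL=user@example.com"
ExecStart=
ExecStart=/usr/bin/crond -n -m 'mutt -H - ${EMAIL}'
```

After saving, run `sudo systemctl daemon-reload && sudo systemctl restart cronie` for the change to take effect.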
mutt showing headers in email content when emailed from crontab output
I have a cron job that runs a script twice a week (Monday/Thursday). This runs fine, but I need to stop it processing on the first Thursday of the month. I'd like to adapt this code:

    we=$(LC_TIME=C date +%A)
    dm=$(date +%d)
    if [ "$we" = "Thursday" ] && [ "$dm" -lt 8 ]
    then
      .....
    fi

I would assume I just change the = to !=, but wonder if there are any gotchas I need to be aware of. This question (where I got the code above from) is the opposite of what I want; I would actually have preferred to add a comment to the accepted answer to ask this question, but I'd need 50 rep.
Using a bash test:

    if [[ "$(LC_TIME=C date +%A)" == 'Thursday' && "$(date +%d)" -le 7 ]]; then
        exit 1
    fi

Note: functionally this is no different from the test in your question. Just add it to the top of your script. If it is the first Thursday of the month the script will exit right away; otherwise it will run. Alternatively you could put it right into your crontab entry, something like:

    0 1 * * 1,4 [[ "$(LC_TIME=C date +\%A)" == 'Thursday' && "$(date +\%d)" -le 7 ]] || /path/to/script.sh

(Note the escaped percent signs; in a crontab, an unescaped % starts standard input for the command.)
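The two-part test works because the first Thursday of any month is the only Thursday whose day-of-month is 7 or less. A quick sanity check of that logic with GNU date (the -d flag is a GNU extension; 2024-03-07 is a known first Thursday):

```shell
d=2024-03-07   # the first Thursday of March 2024
# same two conditions as the guard above, applied to a fixed date
if [ "$(LC_TIME=C date -d "$d" +%A)" = 'Thursday' ] && [ "$(date -d "$d" +%d)" -le 7 ]; then
  echo "first Thursday"
fi
```

This prints "first Thursday"; shifting d by a week to 2024-03-14 makes the day-of-month test fail, so nothing is printed.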
Bash condition that won't run on first Thursday of the month
I'm trying to set up a cron job via cPanel on a machine running CloudLinux/CentOS. The command should run a PHP script every minute to update stats:

    cd /home/account/public_html/phpredmin/public/ && php index.php cron/index

This runs but does not update the stats; instead, it outputs HTML. However, the same command run via the CLI as the account user works as expected, updating the stats and showing no output.
The problem was that the CLI was using the correct version of PHP to run the script, but cron was using another (wrong) version. I'm not sure why that happens, but the solution was to run the command slightly differently, calling the correct PHP binary explicitly:

    cd /home/account/public_html/phpredmin/public/ && /opt/alt/php56/usr/bin/php index.php cron/index
Command working in CLI but not in cron
I'm running PHP 5.6 on a Debian 8 machine, and hence there is a cron job running as root for cleaning up session data:

    09,39 * * * * root [ -x /usr/lib/php5/sessionclean ] && /usr/lib/php5/sessionclean

I did not even know I had this cron job until last week, when I started to get mails about it saying:

    /bin/sh: 1: root: not found

When I try to run the above command myself, the part starting with "-x" fails:

    -bash: -x: command not found

What does the -x in [ -x /usr/lib/php5/sessionclean ] mean? Any idea why I'm getting this error/mail?
Unless you're using the system-wide crontab /etc/crontab, there is no user field. Sample user crontab file:

    # Edit this file to introduce tasks to be run by cron.
    ...
    # For more information see the manual pages of crontab(5) and cron(8)
    #
    # m h  dom mon dow   command

As you can see, the user field is missing, so in a user crontab the word "root" is parsed as the start of the command; that is where "/bin/sh: 1: root: not found" comes from. The -x tests whether the file /usr/lib/php5/sessionclean exists and is executable. Without the username field you could also write:

    09,39 * * * * test -x /usr/lib/php5/sessionclean && /usr/lib/php5/sessionclean

But your version should work as well, once the username field is removed.
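The -x file test can be tried in isolation; a minimal sketch using a throwaway temp file (mktemp creates files without the execute bit):

```shell
f=$(mktemp)
# fresh temp file: mode 0600, so the -x test fails
[ -x "$f" ] && echo "executable" || echo "not executable"
chmod +x "$f"
# after chmod +x the same test succeeds
[ -x "$f" ] && echo "executable" || echo "not executable"
rm -f "$f"
```

This prints "not executable" followed by "executable".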
cronjob [ -x /usr/lib/php5/sessionclean ] returns command not found
A week ago I updated my wife's computer, and after a few days I noticed that crond wasn't running. Running crond -d wasn't very useful, so I ran strace crond -d. This error appears:

    openat(AT_FDCWD, "/dev/null", O_RDWR) = 0
    dup2(0, 0)                            = 0
    dup2(0, 1)                            = 1
    brk(NULL)                             = 0x1dab000
    brk(0x1dcc000)                        = 0x1dcc000
    getpid()                              = 1405
    mkdir("/run/cron/cron.I23Z7s", 0700)  = -1 ENOENT (No such file or directory)
    dup(2)                                = 3
    fcntl(3, F_GETFL)                     = 0x2 (flags O_RDWR)
    fstat(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
    write(3, "mkdtemp: No such file or directo"..., 35mkdtemp: No such file or directory

By the way, the OS is Slackware64-current. Extra info: I just found that I can start rc.crond manually (as root), and then it works correctly, executing all the cron tasks, but it doesn't start when rebooting.
After some discussion on the ##slackware IRC channel about how this problem could occur, I noticed that there was a difference between the /etc/rc.d/rc.M file on my computer and the one on a participant's machine. Older versions of Slackware manage the crond startup directly, while newer versions do this indirectly by calling rc.crond start; rc.crond does some extra work. Normally this should have been updated automatically but, for some reason, it wasn't: the rc.M.new hadn't replaced the original file.
crond won't start. Problem with temp directory /run/cron?
My home server runs a couple of shell scripts regularly for maintenance tasks, mostly backup, but also other stuff. I would like to be alerted if anything fails, but also keep a log of when it works. Currently my setup looks like this:

- Cron calls one shell script, which calls the other scripts (just so the one won't get too complex). I decided to use one script with many tasks instead of individual cron items, as I don't know how long each will take and I don't want them to interfere with one another.
- My cron setup contains a MAILTO line. I never get any errors.
- I don't have any logging. I just check from time to time whether the backup actually exists.

I know I could implement logging to a file (or syslog) in each script. Is there a way to define this from a central point, so that I do not have to code it into every script individually? I'm not sure how to achieve better monitoring; I think a log analyzer system would be too much for this. Someone suggested running the scripts through Jenkins instead of shell/cron, but that seems to be even more effort. What is a simple and good option?
I have implemented the following:

- Enabled output to stdout for various steps, or added custom output, e.g.:

      echo "Starting backup..."
      rsync whatever && echo "Backup successful" || echo "Backup failed"

- Checked the return code of each step of the script, either exiting the sub-script immediately or continuing, returning an error code at the end of the script.
- Wrote a wrapper for my maintenance script which redirects all the output to a log file; if there are any errors within the maintenance script, I get a mail.

Example of a maintenance script (does not exit if any individual step breaks, but returns an error at the end):

    #!/bin/bash
    RETURNCODE=0
    echo "Execution started $(date)"
    /root/do_something.sh || RETURNCODE=1
    # (...)
    exit $RETURNCODE

Example of the wrapper script that calls the other script; this one is now in my crontab:

    #!/bin/bash
    # exit on any error (there should not be any in this script)
    set -e
    LOGFILE="/var/log/my.log"
    # redirect STDOUT and STDERR to the logfile...
    if /root/maintenance.sh > "$LOGFILE" 2>&1; then
        # the colon ":" means: do nothing
        :
    else
        # on error, send me an email with the log attached as the body
        mail -s "maintenance script failed" [email protected] < "$LOGFILE"
    fi
How to monitor cron maintenance scripts?
I am trying to use shell_exec to run a script which is dynamically created and stored in a PHP variable. Here is the script so far:

    { curl -fsS --retry 3 https://hc-ping.com/same-unique-id-here ; \
    echo "name.of.php.file.here STARTED for id # $state_id" ; \
    php "/path/to/my.file.php" -i 2 -h prod 2>&1 | tee -a /path/to/log/files/my.file.log || \
    curl -fsS --retry 3 https://hc-ping.com/same-unique-id-here/fail ; \
    echo "name.of.php.file.here ENDED for id # $state_id with Exit Code $?" ; \
    curl -fsS --retry 3 https://hc-ping.com/same-unique-id-here ; } 2>/dev/null >/dev/null &

My first problem is that I only want to execute the last line,

    curl -fsS --retry 3 https://hc-ping.com/same-unique-id-here

if

    php "/path/to/my.file.php" -i 2 -h prod 2>&1 | tee -a /path/to/log/files/my.file.log

is successful. How do I modify the script to do that? My second question: is this how I print the exit code at the end of an echo?

    echo "name.of.php.file.here ENDED for id # $state_id with Exit Code $?"
I was hoping for more interaction on this question. I ended up using a variable to control what was executed. Here is the code:

    { unset -v failcode ; failcode="0" ; curl -fsS --retry 3 https://hc-ping.com/same-unique-value ; \
    echo "\n$(TZ="America/New_York" date +"%m-%d-%Y %H:%M:%S %Z") my.process.name.php STARTED for id # 2" ; \
    php "/path/to/my/php/script.php" -i 2 -h lab || failcode="1" ; \
    echo "\n$(TZ="America/New_York" date +"%m-%d-%Y %H:%M:%S %Z") my.process.name.php ENDED for id # 2 with Exit Code $failcode" ; \
    [ "$failcode" == "1" ] && curl -fsS --retry 3 https://hc-ping.com/same-unique-value/fail ; \
    [ "$failcode" == "1" ] && exit 1 ; \
    curl -fsS --retry 3 https://hc-ping.com/same-unique-value ; } \
    2>&1 | tee -a /path/to/my/log/file.log 2>/dev/null >/dev/null &
Shell_exec of dynamically created script with else
I want to play a video at a certain time, like an alarm; for instance, at 07:00 play video.mp4. I have tried this with crontab and with at, but with no success yet.
I wrote a little script for that:

    #!/bin/bash
    [ "$1" = "-q" ] && shift && quiet=true || quiet=false
    # split H:M:S into an array
    hms=(${1//:/ })
    # current epoch time and the local timezone offset (e.g. +0200)
    printf -v now '%(%s)T' -1
    printf -v tzoff '%(%z)T' $now
    # convert the ±HHMM offset to seconds (10# forces decimal, avoiding octal errors on 08/09)
    tzoff=$((0${tzoff:0:1}(3600*10#${tzoff:1:2}+60*10#${tzoff:3:2})))
    # seconds to sleep until the next occurrence of the given local time
    slp=$(((86400+(now-now%86400)+10#${hms[0]}*3600+10#${hms[1]:-0}*60+10#${hms[2]:-0}-tzoff-now)%86400))
    $quiet || printf 'Alarm goes off at %(%c)T.\n' $((now+slp))
    sleep $slp
    mplayer /path/to/video.mp4

Call it with the desired time, like alarm.bash 7, alarm.bash 7:1:3 or alarm.bash 07:01:03. You may use the -q option to disable the terminal output. As it is designed to serve as an alarm clock, it's not possible to set a time more than 23:59:59 in the future with this script; I suggest combining it with cron if necessary.
Start playing a video at a certain time
I have a cron job on machine 1 that starts/stops machine 2 for a few hours. I have now deleted all cron jobs on machine 1, so it should no longer start/stop machine 2. To delete all crons on machine 1 I used:

    sudo crontab -r

but for some reason machine 2 continues to be started/stopped. I also checked the cron log on machine 1:

    sudo cat /var/log/cron

I do NOT see the stop/start commands in the log, so it seems cron jobs are NOT starting/stopping machine 2. The cron job was root's. Executing the following commands:

    sudo crontab -l
    crontab -l

gives me "no crontab for root" and "no crontab for centos". Executing:

    sudo ls -l /var/spool/cron/

returns "total 0". /etc/crontab contains:

    SHELL=/bin/bash
    PATH=/sbin:/bin:/usr/sbin:/usr/bin
    MAILTO=root

    # For details see man 4 crontabs

    # Example of job definition:
    # .---------------- minute (0 - 59)
    # |  .------------- hour (0 - 23)
    # |  |  .---------- day of month (1 - 31)
    # |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
    # |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
    # |  |  |  |  |
    # *  *  *  *  * user-name  command to be executed

Doing ls cron* inside the /etc folder shows:

    cron.deny  crontab

    cron.d:
    0hourly

    cron.daily:
    logrotate  man-db.cron

    cron.hourly:
    0anacron

    cron.monthly:

    cron.weekly:

It was previously a root cron job. I do not know where to go from here.
Well, there was another (third-party) service to which I had given an access key for my EC2 instance; it starts/stops my instance, and I forgot about it. I had deleted the cron job back then and so assumed nothing would be active anymore. Anyhow, the case is solved. Thank you all.
Cron on Amazon EC2 centos still executes even after delete [closed]
There used to be a guide at https://linuxmain.blogspot.de/2011/12/gathering-performance-data-with-sysstat.html. For SLES you could install the cron settings for sysstat with:

    SLES10: /etc/init.d/sysstat start
    SLES11: /etc/init.d/boot.sysstat start
    SLES12: systemctl start sysstat

but on SLES 12, running the start does not install the cron job for sar. Q: How do I install the cron job for sar, or does it need to be done by hand?
I believe systemd takes care of that on its own. So if you run, as root:

    systemctl enable sysstat && systemctl start sysstat

you should be set.
How to put systat/sar in cron on SLES 12?
qBittorrent-nox was running perfectly until last week, but since then it always crashes on my Ubuntu 14.04. Theoretically it is logging, but the log file only contains these lines (translated from Hungarian):

    ******** Information ********
    To control qBittorrent, open this address: localhost:8080
    Web UI administrator username: admin
    The Web UI administrator password is still the default: adminadmin
    This is a security risk. Please change the password in the program settings.

    ******** Information ********
    To control qBittorrent, open this address: localhost:8080
    Web UI administrator username: weylyn1

    ******** Information ********
    To control qBittorrent, open this address: localhost:8080
    Web UI administrator username: weylyn1

    ******** Information ********
    To control qBittorrent, open this address: localhost:8080
    Web UI administrator username: weylyn1

So I would like to write a script that checks every 5 minutes whether qbittorrent-nox is running; if it is not running, it should start it with # service qbittorrent-nox start (as root), and if it is running, it should wait for 5 more minutes and check again. I would like to use this workaround until a solution is found for the crashing.
How to test if a daemon is running? It depends. Some daemons have a file with the process ID, say in /var/run/foo.pid. An example of that is /var/run/crond.pid:

    $ cat /var/run/crond.pid
    432

If the process is running, it has a directory in /proc:

    $ ls /proc/$(cat /var/run/crond.pid)

So if the directory in /proc does not exist, we can do a restart. If qBittorrent has such a pid file, you can do this (note the user field, which /etc/cron.d entries require, and the escaped $( so the pid is read at run time, not when the file is created):

    # cat <<EOF >/etc/cron.d/restart-qbittorrent-nox
    */5 * * * * root /bin/test -e /proc/\$(cat /var/run/qbittorrent-nox.pid)/cmdline || service qbittorrent-nox start
    EOF

If you don't have any file in /var/run, you have to use ps ax | grep qBittorrent to find the process. But the best solution would be to find out why the process crashes...
How to write a crontab script that will check a process's status and launch it if not running?
I use Gmail and every so often I have to hack it back in that text/plain email should be rendered in a monospace font. This makes it much easier to skim system-generated reports, &c. The problems with this are that I have to re-hack Gmail every few years when they change their semantics, and I have more "developer" type colleagues who aren't going to hack their gmail to improve the legibility of these emails. So, I'm wondering if anyone knows an easy command to take a text file and wrap it in HTML and enough MIME stuff to correctly encode the message as ... ideally multipart alternative, with the HTML being the text in a PRE tag. I mean, if I can even feed MIME output to cron? I'd be content to pipe to an html-mime-email type command ...
Just found your question; hope the answer can still be relevant. I have been successfully running cron jobs with HTML output for years. It needs the following variables set in the crontab before your commands:

    CONTENT_TYPE="text/html; charset=utf-8"
    CONTENT_TRANSFER_ENCODING="8bit"

Note it is text/html, not multipart. I haven't found any way of wrapping the output with pre tags other than just writing an obvious script:

    #!/bin/sh
    echo '<pre>'
    "$@"
    echo '</pre>'

then using it explicitly in the crontab in front of your command. This approach has a minor disadvantage: the name of this script will be visible in the subject of the email sent from cron. For example, if you'd call it pre:

    Cron <user@host> pre your_command
Easy Way to Wrap CRON Output in HTML PRE?
So I've noticed that I've dumped a lot of annoying comments into the top of quite a few crontab files by using:

    crontab -u user -l > /tmp/crontab.user
    #muck with the file
    crontab -u user /tmp/crontab.user

and now I'm stuck with "# DO NOT EDIT THIS FILE" and two other lines repeated over and over again at the top of my crontab files. I'd like to get rid of these by doing something like:

    crontab -u user -l > /tmp/crontab.user
    #muck with the file
    #clean up the file
    crontab -u user /tmp/crontab.user

but I'm not really sure what I should do to safely remove groups of three comment lines here. I'm guessing some sort of sed + tail combo is in order. Or just a perl one-liner.
The cleanup script can look something like:

    sed -i '/^# DO NOT EDIT.*\|^# (.*/d' /tmp/crontab.user

because your version of cron puts at least "# DO NOT EDIT" and "# (" in the header part, and you never use lines that start like that in any of your managed code (and if anyone else does, they'll be sorry).
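Before pointing the expression at real crontabs, it can be rehearsed against a sample header in a temp file; the header lines below mimic Vixie cron's, and the paths are placeholders:

```shell
f=$(mktemp)
cat > "$f" <<'EOF'
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.user installed on Mon Jan  1 00:00:00 2024)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
0 3 * * * /path/to/job.sh
EOF
# delete the header comment lines in place (GNU sed: -i and \| alternation)
sed -i '/^# DO NOT EDIT.*\|^# (.*/d' "$f"
cat "$f"
rm -f "$f"
```

Only the job line survives, so re-importing the file with crontab no longer accumulates headers.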
How to update crontab for a user with a script without duplicating comments
I have several Git repositories with LaTeX files that I want to have typeset automatically. The idea is to have a central bash script (run by a cronjob) that executes a bash script in every repository, which (1) pulls new commits and (2) executes make all, which should call latexmk on the changed LaTeX files. The central bash script simply contains lines like:

    bash ./repos/repo-xyz/cron.sh

Then in repos/repo-xyz/cron.sh is something like:

    cd "$(dirname "$0")"
    git pull
    make all
    cd -

And in the Makefile in the same directory:

    all: $(subst .tex,.pdf,$(wildcard *.tex))

    %.pdf: %.tex
    	latexmk -pdf -pdflatex="pdflatex -shell-escape" $< </dev/null

In my user's crontab, I have SHELL=/bin/bash and:

    * * * * * bash .../cron.sh 2>&1 > .../cron.log

When the cronjob is executed, I read the following in the log:

    Already up-to-date.
    latexmk -pdf -pdflatex="pdflatex -shell-escape" myfile.tex </dev/null
    .../ (this comes from the line "cd -")

As you can see, latexmk is invoked but doesn't do anything; myfile.pdf is not generated. When I run bash cron.sh (as the same user) from the top directory, it does work. What could cause the Makefile to not execute commands when run from a bash script that is run by a cron job (at least, I think it's make not executing this command)? This is GNU Make 3.81 on Linux (ubuntu 3.13.0-51-generic #84-Ubuntu SMP Wed Apr 15 12:08:34 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux).
The problem turned out to be that the path to pdflatex was being defined in my $HOME/.profile, which cron does not read. I thus changed the cronjob to:

    * * * * * . $HOME/.profile; bash .../cron.sh 2>&1 > .../cron.log

in accordance with https://unix.stackexchange.com/a/27291/37050.
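The underlying cause is easy to reproduce without cron: cron runs jobs with a stripped-down environment, so a binary reachable only via a PATH entry from ~/.profile disappears. env -i simulates a similarly bare environment; the command name below is a deliberately nonexistent stand-in for pdflatex:

```shell
# Run a lookup under a minimal, cron-like environment.
env -i PATH=/usr/bin:/bin sh -c \
  'command -v pdflatex-stand-in-xyz || echo "not found on cron-like PATH"'
```

This prints "not found on cron-like PATH"; sourcing the profile first (as in the fixed crontab line) restores the extended PATH before the lookup.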
Latexmk, from Makefile, from bash script, from Cron - Latexmk not being executed
I recently got a new DigitalOcean LEMP environment with Ubuntu. I made an SH script that uses CasperJS to scrape data from an external website into a JSON file, and a PHP script to parse the JSON and update a MySQL database. The SH script is executed by a cron job. My crontab is as follows, meant to run every 90 minutes:

    # m h dom mon dow command
    0 0,3,6,9,12,15,18,21 * * * /bin/sh /usr/share/nginx/private/mpd_calls_for_service/scrape_mpd_calls_cron.sh
    30 1,4,7,10,13,16,19,22 * * * /bin/sh /usr/share/nginx/private/mpd_calls_for_service/scrape_mpd_calls_cron.sh

And the SH file in question is:

    #!/bin/sh
    /usr/local/bin/casperjs /usr/share/nginx/private/mpd_calls_for_service/scrape_mpd_calls.js
    /usr/bin/php /usr/share/nginx/private/mpd_calls_for_service/parse_json_file.php

Executing these lines individually via the terminal over SSH works fine. However, when running the shell script manually via SSH, or via cron, this is the result:

    Unable to open file: /usr/share/nginx/private/mpd_calls_for_service/scrape_mpd_calls.js
    Unsafe JavaScript attempt to access frame with URL about:blank from frame with URL file:///usr/local/lib/node_modules/casperjs/bin/bootstrap.js. Domains, protocols and ports must match.
    Unsafe JavaScript attempt to access frame with URL about:blank from frame with URL file:///usr/local/lib/node_modules/casperjs/bin/bootstrap.js. Domains, protocols and ports must match.
    Could not open input file: /usr/share/nginx/private/mpd_calls_for_service/parse_json_file.php

The solutions I have tried over the past two days include chmod 777 on all affected files, editing the crontab with sudo, and reinstalling PhantomJS and CasperJS via npm -g. After multiple attempts I'm at a dead end and ready to start pulling my hair out. Any assistance from the community would be appreciated.
I ended up just running the commands in a single crontab line, chained with && and wrapped in a bash -l -c statement; it is working now!
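A sketch of the pattern, with echo standing in for the real casperjs and php invocations: -l makes bash a login shell (so /etc/profile and ~/.profile are read, restoring the PATH an interactive SSH session has), and && runs the second command only if the first succeeds.

```shell
# Stand-in for: bash -l -c 'casperjs scrape.js && php parse.php'
bash -l -c 'echo "scrape step" && echo "parse step"'
```

In the crontab, the whole bash -l -c '...' invocation sits in the command field of a single entry.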
CasperJS and PHP in cron job cannot open files, works fine when run manually
I am trying to put these two cron jobs:

    0 3 * * * ! sudo -u asterisk /var/lib/asterisk/bin/module_admin --repos extended,standard,unsupported upgradeall
    30 3 * * * ! sudo -u asterisk /var/lib/asterisk/bin/module_admin reload

into a repository so that I can fetch them with something like wget www.website.com cronjob.(zip or text). How would I save them, and how can I inject them into crontab? Sorry if this is very simple, but I am very new and other web resources haven't been any help.
This is a HUGE security risk, but:

    wget www.website.com/cronjob.txt
    crontab -u username cronjob.txt

However, your crontab looks wrong. Why do you need sudo? Cron jobs run as the user they are installed for, so you would be better off just putting the lines directly in asterisk's crontab:

    sudo crontab -u asterisk -e

then add:

    0 3 * * * /var/lib/asterisk/bin/module_admin --repos extended,standard,unsupported upgradeall
    30 3 * * * /var/lib/asterisk/bin/module_admin reload
Put two Cron jobs into crontab using wget
For some testing, I need to reboot my system every minute. I have a busybox-based system and installed cron using opkg. I set up a cron job using crontab; everything looks OK:

    root@SL1000-1103DC:~# crontab -l
    # DO NOT EDIT THIS FILE - edit the master and reinstall.
    # (/tmp/crontab.1962 installed on Tue Jun 16 14:57:01 2015)
    # (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
    * * * * * /sbin/reboot
    root@SL1000-1103DC:~#

But the command is never run after the system boots. However, if I restart cron, then everything works:

    root@SL1000-1103DC:~# /etc/init.d/cron restart
    Stopping Vixie-cron.
    Starting Vixie-cron.
    root@SL1000-1103DC:~# date
    Tue Jun 16 14:58:18 EDT 2015
    root@SL1000-1103DC:~#
    Broadcast message from root (Tue Jun 16 14:59:00 2015):
    The system is going down for reboot NOW!
    INIT: Switching to runlevel: 6

So is there something different about running cron at startup, versus running it from the command line? Maybe some subtle permissions issue? All of this is done as root. Hmmm....

Edit: More info. It looks like the unit is rebooting at odd times, as if cron were confused about the time. I left it alone and it rebooted several times. The last time, I had tail running on /var/log/messages, and I saw a message from cron issuing the command. So now the question is: why is cron confused about the time?
Sounds like cron was started before the time synchronization had settled, so the fix is to sync the time before cron starts.
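One hedged way to implement that ordering without systemd (this box uses init scripts) is to have the startup script block until the clock reads past an assumed lower bound before launching crond. The bound and the sketch below are illustrative, not from the original post:

```shell
# Assumed lower bound: any reading before this date means NTP hasn't synced yet.
min_epoch=1420070400   # 2015-01-01 UTC
until [ "$(date +%s)" -ge "$min_epoch" ]; do
  sleep 1              # wait for time synchronization to settle
done
echo "clock looks sane; starting crond"
# /etc/init.d/cron start   # the real start command would go here
```

On a box whose clock is already synced, the loop exits immediately, so the guard adds no delay in the normal case.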
Cron not working at startup, but works if restarted?
I have a crontab that creates a dump of my database every night:

    20 3 * * * /path/to/dailydump.sh

dailydump.sh contains:

    #!/bin/sh
    DATENAME=`date +%Y%m%d`
    BASENAME="/path/to/dumps/db_${DATENAME}.sql"
    /usr/bin/mysqldump -hhost -uusername -ppassword databasename > ${BASENAME}

Permissions are:

    -rwx---r-x 1 ... dailydump.sh
    drwxr-xrwx 2 ... dumps

Why does my cronjob not work? I'm on a shared server without root access. There are no logs in /var/log/cron or /var/log/syslog. There is no mail in /var/mail/<user_name> or /var/spool/mail/<user_name> (in fact there is nothing at all in /var/mail/ and /var/spool/), [email protected] does not mail any error messages, and

    1 2 * * * /path/to/your/command &>/path/to/mycommand.log

does not save any log file. ps -ef | grep cron | grep -v grep returns nothing. (See https://serverfault.com/a/449652) The whole setup worked fine until I moved all the files to a new domain and had to set up a new crontab. (Yes, I updated all paths and the database login information. I checked it multiple times, too.) I'm with the same hosting provider on the same machine, so the environment has not changed. Any help would be greatly appreciated.

"Solution"

Okay, this is most strange. The help center of my provider says that if the script to be executed by a cronjob rests inside a password-protected directory, I need to add -auth=user:password -source before the path to the script. So I added that (with the proper authentication):

    20 3 * * * -auth=user:password -source /path/to/dailydump.sh

The result was that an error message was emailed to me (so MAILTO= works), telling me "/bin/sh: -=: invalid option" and listing the available options. The example in the help center did not actually give a physical path (/path/to/file) but a URL (http://...), so I deleted auth and source again and saved the crontab, and now the f%§#ing cronjob runs!! The crontab looks exactly like it did before, char for char, but now it runs, with no apparent changes to the code.
I have no idea what the problem was, but inserting wrong code and deleting it again did the trick. o_O It appears as if the cronjob actually did run all the time (because when it doesn't, it obviously throws an error), only that it did nothing! Very mysterious. If anyone can explain that to me (in a reproducible manner), I'll offer a bounty of 200 and award it (after the necessary wait of two days). Also, @chaos gave me another solution, completely avoiding the shell script and letting cron dump the database directly (see the comments to his answer below):

    20 3 * * * /usr/bin/mysqldump -hhost -uusername -ppassword databasename > /path/to/dumps/db_$(date +\%Y\%m\%d).sql

Just don't forget to escape the percent signs, or the script will find an "unexpected EOF". Thank you to everyone for your help. I learned a lot again (although not what was wrong here).
To make sure the cron daemon is running and honouring the crontab, you could do a small test. Edit your crontab with an entry like this:

    * * * * * /bin/date >>/tmp/test

After a minute, check the file /tmp/test. If there is no file, the daemon is most probably not running. If that's the case, I would get in contact with the provider's support.

Edit: To determine the environment inside the cron instance, use this:

    * * * * * /usr/bin/id >>/tmp/test
    * * * * * /usr/bin/env >>/tmp/test

Then look at the contents of the file.
Why does my cronjob not execute my shell-script? [closed]
I am attempting to find the best way to determine when the second file (of a matching criterion) is created. The context is audit log rotation. Given a directory where audit logs are created every hour, I need to execute a parsing awk script upon the audit log that has been closed off. What I mean by that is that every hour a new audit log is created and the old audit log is closed, containing up to an hour's worth of information. The new log file is to be left alone until it too is closed and a new one created.

I could create a bash shell script using

    find /home/tomcat/openam/openam/log -name amAuthentication.* -mmin -60

and have it executed every 10 minutes via a crontab entry, but I'm not sure how to write the rest of it. I suppose the script could start off by saving the output of that find to a temp file and, upon every crontab execution, compare the new find output against it; when it changes, use the temp file as the input to the awk script, and when the awk script is complete, save the new find output to the file. A colleague has suggested I use the old-school sticky bit to flag processed files. Perhaps this is the way to proceed.
For closure, this is the start of the script I am going to use. It needs more work to make it robust and do logging, but you should get the general idea:

    #!/bin/sh
    # This script should be executed from a crontab that runs every 5 or 10 minutes.
    # The find below looks for all log files that do NOT have the sticky bit set.
    # You can see the sticky bit as a "T" in the output of "ls -l".
    for x in `find /home/tomcat/openam/openam/log/ -name "log-*" -type f ! -perm -1000 -print`
    do
        # Look for open files. For safety, log that we are skipping them.
        if lsof | grep $x > /dev/null; then
            # create a log entry on why I'm not processing this file...
            echo $x " is open"
        else
            # $x is closed and not sticky
            # run the awk scripts to process the file!
            echo $x " processing with awk..."
            # Set the sticky bit to indicate we have processed this file
            chmod +t $x
        fi
    done
How to determine the newly closed file within a continuous audit log rotation?
I'm helping a friend with a server and it appears the server reboots every week on Sunday night. How can I figure out what is causing the server to reboot? Aside from rebooting, the server appears to also be running maintenance scripts for their custom application. I am guessing there is something running both the maintenance and the reboot in one go.
This sounds like a crontab entry. I'd look through the /etc/cron.weekly directory and also through root's crontab entries crontab -l. Additionally look through your /var/log/cron log file (on Fedora/CentOS/RHEL) or /var/log/syslog (on Debian/Ubuntu). There will be lines that look like this: Aug 12 08:17:01 manny CRON[15582]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) or this: Aug 11 04:50:01 grinchy CROND[30262]: (root) CMD (/usr/lib64/sa/sa1 -S DISK 1 1) Find the lines that correspond to when your server is rebooting.
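To narrow down which entry is responsible, you can also filter the cron log for jobs whose command looks like a reboot. A small runnable sketch; the log line format matches the examples above, but the reboot/shutdown pattern is an assumption — adjust it to what is actually on the server:

```shell
#!/bin/sh
# Sketch: keep only cron log lines whose command looks like a reboot or
# shutdown. The log path (/var/log/cron vs /var/log/syslog) varies by
# distro; this helper just filters whatever log lines it is fed on stdin.
find_reboot_jobs() {
    grep -E 'CRON.*CMD.*(reboot|shutdown)'
}

# Demo with two sample lines in the format shown in the answer:
printf '%s\n' \
  'Aug 12 08:17:01 manny CRON[15582]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)' \
  'Aug 10 23:00:01 manny CRON[15990]: (root) CMD (/sbin/shutdown -r now)' \
  | find_reboot_jobs
```

On a real system you would feed it the actual log, e.g. `find_reboot_jobs < /var/log/syslog`.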
How to figure out what is performing scheduled server reboots?
1,418,224,861,000
I was given a task to create a cron which finds, kills and clears old puppet runs that were not successfully installed. I am more or less looking for a starting point.
**THIS IS NOT PORTABLE TO SOLARIS**

In your cron job:

if [[ "$(uname)" = "Linux" ]];then killall --older-than 30m,1h puppet;fi

Do not attempt to run this or write this on Solaris systems. If you have Solaris systems in your infrastructure as a mixed Linux/Solaris environment, just walk away from this answer entirely. The man pages have more documentation on the date format for killall, if you're running GNU userland.
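For reference, wired into a system crontab the guard might look like the line below. The half-hourly schedule, the single --older-than duration, and running as root are all assumptions for illustration, not part of the original answer:

```
# /etc/cron.d/puppet-cleanup (illustrative) -- Linux-only guard before killall.
*/30 * * * * root if [ "$(uname)" = "Linux" ]; then killall --older-than 1h puppet; fi
```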
Creating Cron Task to kill stalled/failed runs in a system
1,418,224,861,000
I'm trying to run a script using cron, using a crontab created by the user ashtanga. In the crontab I have:

*/5 * * * * /home/custom-django-projects/SiteMonitor/sender.py

At the top of the script I have:

#!/usr/local/bin/python

The user ashtanga does have execute permission on the file, but cron is not running the script; it's giving me the error:

/bin/sh: /home/custom-django-projects/SiteMonitor/sender.py: No such file or directory

So my question is: how can I get cron to run the script?
The user does have permission, as the permission is set to 755. The problem is that the cron environment doesn't have the environment variables the script needs. Try using bash instead and see if it picks them up then; otherwise, set them up manually. Start troubleshooting by running the script using the /bin/sh shell — you should get the same error then.
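If missing environment variables do turn out to be the cause, one common workaround is to declare them at the top of the crontab and log the job's output somewhere visible. A sketch; the PATH value and log location are just illustrations:

```
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
*/5 * * * * /home/custom-django-projects/SiteMonitor/sender.py >> /tmp/sender.log 2>&1
```

It is also worth verifying that /usr/local/bin/python actually exists, since a shebang pointing at a missing interpreter produces exactly this kind of "No such file or directory" error.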
using cron to run script
1,418,224,861,000
For some reason, my backup is not working when it is started by cron.

Crontab entry:

0 10 * * * /home/yzT/BackupDaily.sh

BackupDaily.sh:

#!/bin/bash
/home/yzT/Tools/FreeFileSync/FreeFileSync /home/yzT/Tools/FreeFileSync/BackupDaily.ffs_batch

I can see cron starting my backup script in syslog:

Oct 20 10:00:01 debian CRON[2589]: (yzT) CMD (/home/yzT/BackupDaily.sh)

When I run it manually, the backup system (FreeFileSync) creates a log file on my Desktop and I can see updated files in the backup directory. But via cron I get no logfile and see no updates. How can I find/fix the problem?

edit

I found the root of the problem. I switched to a TTY and ran the script, and I get the following message: Error: Unable to initialize GTK+, is DISPLAY set properly?. So, although no GUI is used when running the script, it seems the script wants to have access to the GUI environment. How can I fix this?
If you are the only user on the system, you can just edit your crontab (with crontab -e) and add DISPLAY=:0.0 at the top of the file. Alternatively, you can try running your backup job like so: /home/yzT/Tools/FreeFileSync/FreeFileSync /home/yzT/Tools/FreeFileSync/BackupDaily.ffs_batch --display=:0.0
My backup is not working when started by Cron
1,418,224,861,000
I use Ubuntu 16.04 with Nginx and a few WordPress sites. Sometimes I don't visit a site for a long time (>= 1 month) and it might be that the site is down. I'm looking for a small utility that will email my Gmail account if one of my Nginx-WordPress sites is down (without mentioning a reason).

Approaches considered so far

1. Creating a tool from scratch

Creating the whole non-default configuration for my SMTP server. Adding and configuring DNS records at the hosting provider's DNS management tool. Adding a weekly cron task with curl -l -L on each domain and saving its output into a file. Adding a weekly cron task, say one hour later, to check each file and email myself if the status code isn't 200. This might seem simple, but is actually quite complex (though not necessarily complicated), and it also might be a bit fragile. A dedicated, communal, maintained utility might be better for me.

2. Third-party tools

I don't want to use some grandiose, third-party network-monitoring service like Nagios, Icinga, Zabbix, Shinken, etc.; they all seem overkill for this particular cause.

3. Postfix add-on

I've already installed Postfix with the internet-site configuration, so the tool might utilize Postfix. I just use the Postfix defaults, some default conf I could add on top of internet-site, maybe without adding and configuring DNS records. A utility which is an interactive program to re-configure Postfix would ease my pain; I wouldn't have to fill my Ubuntu-Nginx-WordPress-environment installation script with much SMTP configuration data. Maybe I'll just have to set some DNS records after that, and that's it. Anything that would ease the process one way or another is also an option for me.

4. Handling the spam filter

Even if Gmail would mistakenly move my first email (or the first series of emails) to spam, I could put it into a whitelist.

My question

Is there a utility I could use to have this behavior?
Best bet is to use a service like Uptime Robot. The free tier will cover fewer than 50 sites; the pro plan is quite cheap. It'll do a simple ping check or even an HTTP status code check. The upshot of this is that you're not adding an additional point of failure (that you can control), and you no longer have to maintain and update a monitoring service.
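If you do decide to roll your own instead, the core decision logic is tiny. A minimal sketch: the helper below only classifies an HTTP status code; in a real cron job the code would come from curl, and the mail step (commented out) depends entirely on your MTA setup, so both of those parts are assumptions:

```shell
#!/bin/sh
# Given a site name and an HTTP status code, print a verdict.
# In a real job the code would come from:
#   code=$(curl -s -o /dev/null -w '%{http_code}' "https://$site/")
check_status() {
    site=$1
    code=$2
    if [ "$code" = "200" ]; then
        echo "$site up"
    else
        echo "$site down (HTTP $code)"
        # mail -s "$site down" [email protected] </dev/null   # depends on local MTA
    fi
}

check_status example.com 200
check_status example.com 503
```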
A utility to email myself if my site is down
1,418,224,861,000
I'm trying to write the message "Ran Cronjob XY" to a logfile when my cronjob runs.

Attempt:

/opt/cpanel/ea-php73/root/usr/bin/php /home/company/example.de/bin/magento list 2>&1 | grep -v "Ran Cronjob XY" >> /home/company/example.de/var/log/test.cron.log

But this fails and logs the output of the command /opt/cpanel/ea-php73/root/usr/bin/php /home/company/example.de/bin/magento list instead of just Ran Cronjob XY.

Bonus: show how to print "ran successfully" if the command was executed successfully and "failed" if not.
if some_command >/dev/null 2>&1; then
    echo ran successfully
else
    echo failed
fi >>logfile

The above code will run the command some_command, discard its output, and then append the text ran successfully to the file logfile if the command finished successfully. If the command fails, it would append the text failed to logfile.

In your case, for simplicity (since you have such long pathnames in your commands), I would put this in its own wrapper script and execute that script with a cron job. The script would look like

#!/bin/sh
PATH=/opt/cpanel/ea-php73/root/usr/bin:$PATH
logfile=/home/company/example.de/var/log/test.cron.log
if php /home/company/example.de/bin/magento list >/dev/null 2>&1
then
    echo ran successfully
else
    echo failed
fi >>"$logfile"

I modify PATH in the script to allow running php without an absolute path. That script would then be scheduled:

* * * * * /path/to/thescript.sh

... where the * * * * * should be replaced by the actual schedule. If you'd wanted to turn this into a "one-liner" for use directly in a crontab entry:

* * * * * if /opt/cpanel/ea-php73/root/usr/bin/php /home/company/example.de/bin/magento list >/dev/null 2>&1; then echo ran successfully; else echo failed; fi >>/home/company/example.de/var/log/test.cron.log

... where the * * * * * should be replaced by the actual schedule.
How to log certain message to a logfile when my cronjob ran?
1,418,224,861,000
I currently have 50 cron scripting jobs created with crontab -e. These scripts check that various services are running. Sometimes, when I'm testing the functionality of one service, this requires me to remove a lot of cron jobs; otherwise all the bells and whistles start shooting off constant outage alerts. My current workaround is to remove all of the cron entries manually by opening the crontab in nano via crontab -e and pressing Ctrl+K repeatedly from the top of the list (not very fun to do). I want to know: is it possible to disable cron quickly using a command, rather than deleting all the jobs and placing them back in later on? Or can I create an empty text file and run a command to have cron read in that file and replace all the current jobs with that empty text file? Once I'm ready to use all my cron jobs again, I would simply have it read in a text file that contains all my listed cron jobs.
Save your crontab to a file: crontab -l > my-crontab Delete your crontab: crontab -r Then load back the crontab from the file: crontab my-crontab
Any way to quickly disable/renable cron jobs?
1,418,224,861,000
Since I over-use my computer, I would like to block it for few hours, for example every day from 23 to 7, so that I cannot use it in that time frame. Currently, I am using crontab to suspend/shutdown the computer when it comes the time, and I do that every minute, so that - in general - if I try to log-in or turn on the computer, I have only few seconds before it is suspended again. The issue is that those few seconds are enough to change crontab and lock the mechanism. So, I thought about blocking the login process completely (even for root!) in those hours, so that nobody could access the computer in any way, from 23 to 7. If I wanted to disable this behavior, I had to do it beforehand. So, how can I configure my Linux box so that, for few hours, nobody can login? Should I use PAM? If yes, how? Note: I would like to prevent logging in both with GDM and shell.
Another approach is to disable the mouse and keyboard (assuming a system with USB input devices):

00 23 * * * rmmod usbhid
00 7 * * * modprobe usbhid

This won't prevent you from turning the system off and on again, which would re-enable the keyboard and mouse... You could play with blacklisting the module if you want to prevent that, but you'd probably need to rebuild your initramfs every time (usbhid needs to be loaded very early during the boot, since you want a working keyboard to fix things when the system can't boot).

If usbhid is built into the kernel on your system (e.g. Fedora), you can still achieve the same effect by unbinding all HID devices; the hard part then is re-binding them at 7am — you'll need to store somewhere the drivers from which they were unbound (unless there's a way of re-enumerating them). To unbind all devices:

for device in /sys/bus/hid/devices/*; do
    echo ${device##*/} > ${device}/driver/unbind
done

(with appropriate error checking of course). To re-bind, you need to remember what driver the ${device}/driver pointed at, and echo the device identifier to bind in the driver's directory.
How do I "lock" my Linux box for few hours?
1,418,224,861,000
I set up an anacron job:

1@hourly 0 name wget https://mydomain.com/actions/controller

So it runs hourly, but I did not choose the time: 7:33, then 8:33, then 9:33... Is it possible to define precisely the time to run, as: 7:00, then 8:00, then 9:00...?

Note: my hosting provider gives me no choice between cron and anacron; only anacron is available on my virtual server.
No, anacron may not be used to schedule jobs for running at exact times like that. It is best used for making sure that, for example, a maintenance script gets run at approximate frequencies, like daily, weekly, or monthly. It does not have a time resolution lower than one day.

Personally, I launch anacron from an @hourly and a @reboot cron job (on my OpenBSD machine that isn't running 24/7), and it takes care of the daily, weekly, and monthly tasks if those tasks need doing:

@hourly /usr/local/sbin/anacron -s
@reboot /usr/local/sbin/anacron -s

The anacrontab:

SHELL=/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin
HOME=/var/log

1  1 cron.daily   /bin/sh /etc/daily
7  3 cron.weekly  /bin/sh /etc/weekly
28 5 cron.monthly /bin/sh /etc/monthly

Some versions of anacron seem to understand @daily, @weekly, and @monthly (I'm using version 2.4.3, and its anacrontab manual does not mention those placeholders, but this one does). However, I haven't been able to find any implementation of anacron that supports using @hourly.

However, if you run anacron hourly, like I do, and if one of its jobs needs executing, then that job will be run on the hour, i.e. approximately at 08:00 rather than 08:33. But it won't be run hourly.
Is it possible to set one job to run at precise hour with anacron
1,418,224,861,000
I have a web CMS that allows so-called action IDs, which are URLs. When these action IDs are accessed, they perform some action within the CMS. I wanted to perform an action periodically and used cron for that purpose. Even though the cron always ran at the defined time, which I could track in the log, the action was never performed. To troubleshoot this, I tested the URL, http://172.16.0.47/index.php?ACT=47&id=6, in the shell environment. However, when I used wget, it cut off the &id=6 parameter. The same thing happened when I used curl. I'm looking for a solution to this issue.
Wrap the URL in single-quotes. What is happening is that the ampersand is getting interpreted by the shell. curl 'http://172.16.0.47/index.php?ACT=47&id=6' Without the quotes (assuming you are using a shell configured to not error out on unmatched globbing patterns), you would start the command curl http://172.16.0.47/index.php?ACT=47 ... as a background job, and then set the shell variable id to the string 6.
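The effect of the quoting is easy to see without touching the network; a small demonstration:

```shell
#!/bin/sh
# With quotes, the whole URL, '&' included, is a single shell word:
url='http://172.16.0.47/index.php?ACT=47&id=6'
echo "$url"

# Unquoted, the shell would instead split the line at '&': everything
# before it becomes a background command, and `id=6` becomes a plain
# variable assignment, so the server never sees the id parameter.
```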
URL gets truncated when accessing it using wget or curl
1,418,224,861,000
I want to change my root password every day based on the date. The password will be a combination of a string and the date. The code below works fine:

echo -e "pass"$(date +"%d%m%Y")"\n""pass"$(date +"%d%m%Y") | passwd root

But how do I call it each time the system starts, and at midnight when the date changes (if the system is on)?
I'm not sure why you would want to do that. If you're concerned about security, if someone discovers your password on 1 July, they'll know it on 31 July or 15 September... To answer your question, if you want to ensure that the password update is done either at a scheduled time or when the system restarts, you want to install anacron. It can do periodic scheduling without assuming the system is on all the time. I'm not sure what distribution you're using, but it should be in your package archives. Alternatively, you can use a mixture of traditional cron (changing the password at midnight) and an init script (to handle the case of rebooting) to ensure that the password is always up-to-date. In either case, put the commands to change the password into a script (say, /usr/local/sbin/rootpass.sh) and then call that script using cron or anacron and from your init script.
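For what it's worth, the date-based string itself can be built and inspected separately from the passwd step. A sketch; the passwd invocation is left commented out since it must run as root and actually changes the password:

```shell
#!/bin/sh
# Build the daily password: a fixed prefix plus today's date as DDMMYYYY.
newpass="pass$(date +%d%m%Y)"
echo "$newpass"

# Actually setting it (root only) would then be something like:
# printf '%s\n%s\n' "$newpass" "$newpass" | passwd root
```

This is the script you would point cron (for the midnight case) or anacron/an init script (for the reboot case) at, as described above.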
Dynamic change of Linux root password everyday
1,418,224,861,000
How do I instruct an Ubuntu 13.10 server to:

zip an svn repository
dump a mysql database into a .sql script
tar both files
copy the backup tar onto the sdb disk

Are there any premade tools for such kind of operations?
The short answer is yes, there are premade tools for each of those operations:

Use zip
Use mysqldump
Use tar
Use cp

zip is not so often used in my experience. You get better compression, and preservation of more of the Linux-specific file metadata, by making a .tar file and compressing that (e.g. with xz). You should just have to make a script that does all four tasks one after another and, once that works, call that script on a daily basis using cron.
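Chained together, the four steps might look like the sketch below. The svnadmin and mysqldump invocations are commented out (they need a live repository and database), with placeholder files standing in for their output so the tar and cp stages can run end to end; all paths and names here are assumptions:

```shell
#!/bin/sh
# Sketch of the four-step backup; every path and name below is illustrative.
set -e
work=$(mktemp -d)

# 1. Dump the svn repository. Real command (commented; needs a live repo):
#    svnadmin dump /var/svn/myrepo | gzip > "$work/svn.dump.gz"
echo 'svn dump placeholder' > "$work/svn.dump.gz"

# 2. Dump the mysql database. Real command (commented; needs a live db):
#    mysqldump -u backupuser -p"$PASS" mydb > "$work/mydb.sql"
echo 'sql dump placeholder' > "$work/mydb.sql"

# 3. Tar both files together, with the date in the name:
tar -C "$work" -cf "$work/backup-$(date +%F).tar" svn.dump.gz mydb.sql

# 4. Copy onto the second disk; a temp dir stands in for e.g. /mnt/sdb here:
dest="$work/mnt"
mkdir -p "$dest"
cp "$work/backup-$(date +%F).tar" "$dest/"
ls "$dest"
```

Once the real commands are swapped in, a single daily crontab entry pointing at this script completes the setup.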
Backup of svn repository and mysql database on daily basis
1,418,224,861,000
root's default PATH is $ sudo su # echo $PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games After creating /etc/cron.d/myjob 35 * * * * tim ( date && echo $PATH && date ) > /tmp/cron.log 2>&1 /tmp/cron.log shows the default value of PATH is: /usr/bin:/bin Is the default PATH value in a crontab file not the one for the root? Why? Whose PATH value is it? WIll the default PATH value be different if I add the job in /etc/crontab or a file under /etc/cronb.d/? Does it matter which user is specified in the cron job? (such as tim in the above example) Thanks.
This depends on the version of cron you’re using. I seem to remember you use Debian; cron there sets a number of variables up as follows: Several environment variables are set up automatically by the cron(8) daemon. SHELL is set to /bin/sh, and LOGNAME and HOME are set from the /etc/passwd line of the crontab’s owner. PATH is set to "/usr/bin:/bin". HOME, SHELL, and PATH may be overridden by settings in the crontab; LOGNAME is the user that the job is running from, and may not be changed. (See the crontab manpage for details.)
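To get a richer PATH inside cron jobs, you can override the default at the top of the cron file; a sketch based on the example from the question (the PATH value shown is root's default from the question, used here purely as an illustration):

```
# /etc/cron.d/myjob -- PATH set here overrides the /usr/bin:/bin default.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
35 * * * * tim ( date && echo $PATH && date ) > /tmp/cron.log 2>&1
```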
Whose PATH value is the default PATH value in a crontab file? [duplicate]
1,418,224,861,000
So we know that the fifth column in crontab means day of the week. But what day is considered as 1? What is considered the first day of the week? Is it Sunday or Monday?
Sunday is 0, Monday is 1, etc. http://man7.org/linux/man-pages/man5/crontab.5.html The time and date fields are: field allowed values ----- -------------- ... day of week 0-7 (0 or 7 is Sunday, or use names)
What is the first day of the week in crontab?
1,418,224,861,000
I haven't been able to get crontab to execute any of my scripts on start-up. I want to know why it doesn't work. Below is an example of me trying to use it, and I've tried to provide as much troubleshooting information as I can. $crontab -l no crontab for server $crontab -e #I scroll down to the bottom of the file and add the line below in @reboot /usr/bin/teamspeak3-server_linux-amd64/ts3server_minimal_runscript.sh #I make a carriage return at the bottom of the file I press ctrl+o to save the file (as it opened in nano), and ctrl+z to exit. I now issue "crontab -e" to check the contents is there. The file shows up, just without the changes I made to it. I even tried adding just a commented line in the crontab file & this also doesn't save. Anyway I checked the script does actually work normally. $cd /usr/bin/teamspeak3-server_linux-amd64/ $./ts3server_minimal_runscript.sh It then gives loads of output as it reads the script and loads the script perfectly. So I ctrl+c to quit the application, and check the permissions. $ls -l | grep ts3server_minimal -rwxr-xr-x 1 server server bla bla bla ts3server_minimal_runscript.sh So everyone can execute it. I reboot anyway, and find the application doesn't start. Why?
You have to use Ctrl+X to exit nano and install the new crontab. Ctrl+Z just stops nano (sends it to the background) without installing the new crontab. (The original answer included a screenshot illustrating this.)
Using nano to edit a crontab [closed]
1,418,224,861,000
Some colleague suggested that I add a cron job for doing some things, by executing crontab -e and adding the following line:

0 1 * * * . /path/to/some/file.bash

To make sure this thing was working, I changed it to

0 1 * * * . /path/to/some/file.bash && date >> /some/log

so that I could check that /some/log does indeed grow by one line per day. This did not happen, though. For the purpose of debugging, I've just added these three lines to crontab -e,

* * * * * echo "uffa" >> /home/me/without-user
* * * * * me echo "uffa" >> /home/me/with-user
* * * * * . echo "uffa" >> /home/me/with-any-user

where me is my user name, which resulted in all three files being created, but only the first one growing one line per minute, as you can see from this check I'm making after 8 minutes:

$ for f in ~/with*; do echo ''; echo $f; cat $f; done

/home/emdeange/with-any-user

/home/emdeange/without-user
uffa
uffa
uffa
uffa
uffa
uffa
uffa
uffa

/home/emdeange/with-user

What's happening? Are the second and third lines using a wrong syntax by having one more entry on the line? If so, then why does the file get created anyway? I've just verified that running a nonsense command like jflkdasjflaksd > someFile does create an empty someFile, which tells me that the two lines

* * * * * me echo "uffa" >> /home/me/with-user
* * * * * . echo "uffa" >> /home/me/with-any-user

are just wrong, and the files are created before the error even takes place, because of how shell command-line processing works. However, those are the lines that work for somebody else. What is happening?
Ok, first, there are two slightly different crontab formats: the one used for per-user crontabs, and another for the system crontabs (/etc/crontab and files in /etc/cron.d/). Personal crontabs have five fields for the time and date, and the rest of the line for the command. System crontabs have five fields for the time and date, a sixth one for the user to run the command, and the rest of the line for the command. For totals of 6 and 7, if you like, though the last "field" with the command is a bit differently defined than the others.

Personal crontabs don't have the username field, since it's implicit from the crontab who the owner is, and regular users aren't allowed to run programs under the identity of others anyway. (As noted in comments, the root user's personal crontab is also just a personal crontab like that of any other user. It does not have the username field, even though root is a bit special in other ways. So not only is /etc/crontab a different file from the one you get with crontab -e as root, it also has a different format.)

Then there's the .. It tells the shell to read the script named as an argument, and to run it in the current shell (some shells know source as an alias for .). Any function definitions and variable assignments would be visible after that, unlike when running a script as a separate program.

The line

0 1 * * * . /path/to/some/file.bash

tells the shell (that cron started) to run .../file.bash in the same shell. I'm not sure why they'd recommend doing that instead of just running the command directly without the dot. There's a possible slight optimization in not having to initialize a new shell, but the downside is that the script has to be runnable in the shell that cron starts. That wouldn't work if cron starts e.g. a plain sh, but the script is for zsh, or for Python for that matter.

If that line was in a global crontab, it'd mean to run /path/to/some/file.bash as the user .. That's likely not how it was meant.
I'd suggest just this for simplicity (after making the script executable and adding a proper hashbang line, if not already done):

0 1 * * * /path/to/some/file.bash

Then, if the . /some/script && date >> logfile didn't work, the first thing to look at is whether the script exits with an error. You used the && operator there, and it tells the shell to only run the right-hand command if the left-hand one exits successfully. You could do . /some/script; date >> logfile to run it unconditionally. Or heck, you could try . /some/script; printf "run at %s, exit status %d\n" "$(date)" "$?" >> logfile to save the exit status too.

As for these:

* * * * * echo "uffa" >> /home/me/without-user
* * * * * me echo "uffa" >> /home/me/with-user
* * * * * . echo "uffa" >> /home/me/with-any-user

In a personal crontab, the first tells the shell to run echo, the second to run a command called me, and the third to source a script called echo. All contain a redirection, and redirections are processed by the shell before the command starts, so your file is created in all cases. (They have to be, since the shell can't know if the command is runnable before it tries to, and if it succeeds, control passes to that command, so the shell can't do anything about the redirections any more.) The two later ones probably give error messages, which you should get in email if your cron is set up properly.

However those are the lines that work for somebody else. What is happening?

As mentioned above, . /path/to/some/script tries to run the given script in the shell; it'll fail for a binary command, so . echo ... is not likely to work. 0 1 * * * username echo ... would work in a global crontab, but likely not in a personal one. 0 1 * * * . whatever isn't likely to work in a global one, as . probably isn't a valid username.
What does it mean to have 7 items in a crontab entry?
1,418,224,861,000
I'm trying to use a cron job to see when the battery gets lower than a given threshold, and then to send a battery critical notification. However, when I make the cron job execute a script every minute, and make the script send me a notification, it doesn't work. To make sure that it's not a permissions issue with the script or something causing the cron job to not run, I made the script create a file instead, and it worked. This is the crontab entry: * * * * * /home/aravk/test.sh And, to simplify the problem, these are the contents of test.sh: #!/bin/sh /usr/bin/dunstify hi No notification shows up, however. The script does work when I execute it manually. I also tried setting the DISPLAY environment variable to :0 by changing the crontab entry to * * * * * export DISPLAY=:0 && /home/aravk/test.sh, but it still didn't work. How do I send notifications from a script executed by a cron job? I'm on Arch Linux, if it's relevant.
I added this to my crontab and all my notifications work (currently tested with zenity and notify-send):

DISPLAY=":0.0"
XAUTHORITY="/home/me/.Xauthority"
XDG_RUNTIME_DIR="/run/user/1000"
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
Unable to send notifications from cron job
1,418,224,861,000
I want my crontab job to run every 4 hours, but starting from a certain time, 1 pm. I chose this config:

0 1/4 * * *

but if I save, I get the error: bad hour errors in crontab file, can't install. The following works perfectly, but then I cannot choose the starting time:

0 */4 * * *
You can't have 1/4 as the hours. This means "run at 01:00 (1am), every 4 hours". What you need is "run from 01:00 (1am) until the end of the day, every 4 hours". 0 1-23/4 * * * You could also write this with the explicit hour numbers listed out, but I personally find this harder to process visually, and it's not so obvious that it means "every four hours from 1am": 0 1,5,9,13,17,21 * * *
bad hour errors in crontab file, can't install
1,515,732,416,000
Is it possible to print to crontab with an heredocument? I tried these but failed: cat <<-"CRONTAB" > crontab 0 0 * * * cat /dev/null > /var/mail/root 0 1 * * 0 certbot renew -q CRONTAB and: bash <<-"CRONTAB" > crontab 0 0 * * * cat /dev/null > /var/mail/root 0 1 * * 0 certbot renew -q CRONTAB This on the other hand is not an heredocument, but worked: # CRONTAB echo " 0 0 * * * cat /dev/null > /var/mail/root 0 1 * * 0 certbot renew -q " | crontab I thus wonder if it's even possible with heredocument.
As others have pointed out, crontab is a command, so all you need to do is feed it the heredoc: crontab <<-"CRONTAB" But as has been mentioned before, it’s an awful lot easier to manage cron jobs by manipulating files in /etc/cron.daily, /etc/cron.d etc.
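For anyone wanting to test the heredoc mechanics safely first, substituting cat for crontab shows exactly what would be installed without touching the real crontab:

```shell
#!/bin/sh
# `cat` stands in for `crontab` here: the output is exactly what crontab
# would receive on stdin, but nothing gets installed. Swap cat back to
# crontab once the output looks right.
cat <<"CRONTAB"
0 0 * * * cat /dev/null > /var/mail/root
0 1 * * 0 certbot renew -q
CRONTAB
```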
Redirect to crontab with an heredocument
1,515,732,416,000
When I am trying to open crontab I see the next output: ubuntu@macaroon:~$ crontab -l crontabs/ubuntu/: fopen: Permission denied When I add sudo it opens fine however, if the jobs don't work there: ubuntu@macaroon:~$ sudo crontab -l # Edit this file to introduce tasks to be run by cron. # omitted such info * * * * * /usr/bin/env python3 /home/ubuntu/main.py date >> ~/main_script_cronjob.log What is missing from this machine? How to fix this behaviour Other machines work fine with the regular crontab command without sudo. Here I have to do some workaround: sudo crontab -u ubuntu -e Then it opens correct crontab for ubuntu user. UPDATE: Additional information for crontab details: ubuntu@macaroon:~$ ls -l /usr/bin/crontab -rwxrwxrwx 1 root crontab 39568 Mar 23 2022 /usr/bin/crontab ubuntu@macaroon:~$ sudo namei -l /var/spool/cron/crontabs/ubuntu f: /var/spool/cron/crontabs/ubuntu drwxr-xr-x root root / drwxr-xr-x root root var drwxr-xr-x root root spool drwxr-xr-x root root cron drwx-wx--T root crontab crontabs -rw------- ubuntu root ubuntu It is not a Docker container. It is a physical machine.
The crontab command has lost its permissions. For example, on my (Raspbian) system the permissions include setgid crontab (the s permission bit for the group): -rwxr-sr-x 1 root crontab 30452 Feb 22 2021 /usr/bin/crontab You can confirm this by running ls -l /usr/bin/crontab on your problematic system and adding the result to your question. Here's how you'd fix the missing setgid bit: sudo chmod u=rwx,go=rx,g+s /usr/bin/crontab (I prefer symbolic values: User=rwx, Group,Others=rx, Group+setgid. You could equally use the octal 2755.) Remember also to tell us if you're trying to run this under Docker, as it often strips permissions by default.
Crontab doesn't open for user
1,515,732,416,000
I want the crontab for root to be agnostic, i.e. I don't want to literally specify in it that the admin user is jim. That is why in my crontab for root I introduced the variable au:

SHELL=/bin/bash
PATH=/usr/bin:/bin:/usr/sbin:/sbin
au=$(echo "$(head -n 1 /etc/doas.conf)"|sed 's/permit//;s/as//;s/root//;s/ //g')
5 */4 * * * /home/"${au}"/sync-w-oumaima.sh
* * * * * echo "$au">"/home/${au}/${au}.log"

Sadly, it does not work: /home/jim/jim.log does not get created by crontab. How do I declare and use a variable in crontab that will hold the name of the admin user account?
The crontab format allows you to define environment variables but does not perform any substitution. In your example, the au variable is defined as the literal string $(echo "$(head -n 1 /etc/doas.conf)"|sed 's/permit//;s/as//;s/root//;s/ //g').

The simplest solution is to explicitly define the user in your crontab:

SHELL=/bin/bash
PATH=/usr/bin:/bin:/usr/sbin:/sbin
au=jim
5 */4 * * * /home/"${au}"/sync-w-oumaima.sh
* * * * * echo "$au">"/home/${au}/${au}.log"

Another solution would be to write a simple script that sets the au variable and then runs the “real” command, and run that script from your crontab:

#!/bin/bash
export au=$(echo "$(head -n 1 /etc/doas.conf)"|sed 's/permit//;s/as//;s/root//;s/ //g')
/home/"${au}"/sync-w-oumaima.sh

A third solution would be to use the eval shell builtin to make your shell re-interpret the content of the au variable, with something like:

SHELL=/bin/bash
PATH=/usr/bin:/bin:/usr/sbin:/sbin
au=$(echo "$(head -n 1 /etc/doas.conf)"|sed 's/permit//;s/as//;s/root//;s/ //g')
5 */4 * * * eval /home/"${au}"/sync-w-oumaima.sh
* * * * * eval echo "$au" \> "/home/${au}/${au}.log"

But you’d have to be extra careful with the required escaping (I certainly was not careful).
How do I declare and use a variable in crontab that will hold the name of the admin user account?
1,515,732,416,000
I have a bash script that sources a file, which sources another: script.sh: cd /script/dir source funcs.sh funcs.sh: ... source mode.sh Now this script runs OK when I run it from command line, from within /script/dir. But when I run it from crontab, the funcs.sh file can't find mode.sh. Probably because for some reason the cd /script/dir doesn't get passed on to it? Is there anything I can do from within script, or do I need to cd in the cron or something similar?
Every process has a $PWD (working directory); cron does not run in your $HOME. Instead of cd-ing in your script, you can use this:

#!/bin/bash
dir="$(dirname "$(readlink -f "$0")")"
echo $dir

$dir is the directory containing the script. For example

$ cd /
$ bash /home/junaga/script.sh

outputs /home/junaga because that is the script's location.
source doesn't recognize cwd?
1,515,732,416,000
What would be the difference between the following two ways to run cron? Basically, I'm just trying to write to a file and seeing which one is preferred/correct: 2 * * * * python script.py >> /tmp/script.log vs: 2 * * * * python script.py >> /tmp/script.log 2>&1 It seems they both write to a log but perhaps the first one 'flushes' more frequently? Most output is done by normal print statements: print('hello') Which I think is the equivalent of writing to sys.stdout with an implicit \n after each print.
The first redirection appends the standard output (only) of script.py to the log file. If the script produces any messages on the standard error channel (in Pythonese: writes to sys.stderr), those messages will be processed as usual by cron, i.e. the cron daemon will collect them and after the job has ended, will attempt to send them as a local email message to the user that owns the cron job. Normally such local emails will be processed by the local Mail Transfer Agent (MTA for short) and they'll land into the local email inbox file of the user, e.g. /var/mail/<username>. But if the system has no local MTA installed, this process might fail and the cron job error messages might be lost. If the system has a properly configured MTA that can also send mail outside the local system, this email delivery could be redirected to any email address you want. The second redirection appends both standard output and standard error outputs to the log file. If the cron job exits with a non-zero result code, the cron daemon might still attempt to send a local email to the owner of the job, just to report the result code to the job owner. Which one is preferred/correct? It depends on what your script is doing and if you want a more immediate notification of script errors (by email) or not.
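The difference is easy to see with a command that writes to both streams. A minimal sketch (the stand-in job and log file name are illustrative):

```shell
#!/bin/sh
log=$(mktemp)
# A stand-in job that writes to both streams
job() { echo 'normal output'; echo 'an error' >&2; }
# First form: only stdout goes to the log (stderr is discarded here for
# the demo; under cron it would be mailed to the job owner instead)
job >> "$log" 2>/dev/null
# Second form: both streams go to the log
job >> "$log" 2>&1
grep -c 'an error' "$log"   # prints 1: only the second form captured stderr
```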
Difference between two cron redirections
1,515,732,416,000
I'm using a cpanel where I've hosted an application. In my application I have to run some queries on a minute/hourly/daily basis. For that I'm using a cron job from cpanel. Here is the cron command that I'm using: * * * * * wget -O /dev/null https://******/sms_cron The problem with this command is that it sends an email every time it triggers. After searching online for a solution to stop the emails, I found this command useful: * * * * * wget https://******/sms_cron > /dev/null 2>&1 But there is a problem with this command too. Though it has stopped sending email, it generates a new file every time it triggers. Now I have no idea how to stop that too. Can anyone help me out with that? Thank you.
To stop wget outputting anything, redirect its output to /dev/null and ask it to be quiet. * * * * * wget -O /dev/null --quiet 'URL' This is equivalent to * * * * * wget -O - --quiet 'URL' >/dev/null It will still produce output for actual errors (which I presume that you want to see). To avoid these as well, add 2>/dev/null to the end of the first command, or 2>&1 to the end of the second command.
Cron job creating new file
1,515,732,416,000
If I run my backup script manually from the command line it works fine. No issue; the backup is created. In the crontab log, I can see the entry for running the backup at 2:00 AM, /home/sgaddis/copy-production-db.sh: Dec 23 01:01:01 mapehr anacron[11921]: Anacron started on 2019-12-23 Dec 23 01:01:01 mapehr anacron[11921]: Normal exit (0 jobs run) Dec 23 01:01:01 mapehr run-parts(/etc/cron.hourly)[11923]: finished 0anacron Dec 23 02:00:01 mapehr CROND[14869]: (sgaddis) CMD (sgaddis /bin/bash /home/sgaddis/copy-production-db.sh) Dec 23 02:01:01 mapehr CROND[14934]: (root) CMD (run-parts /etc/cron.hourly) Dec 23 02:01:01 mapehr run-parts(/etc/cron.hourly)[14934]: starting 0anacron Dec 23 02:01:01 mapehr anacron[14943]: Anacron started on 2019-12-23 but when I look in /home/sgaddis/backup there is no backup. It should not be a permission issue since it is my home/backup folder. My account should have the right to dump the backup there, as it does when I run the script manually. Any place else I should be looking for clues? I went to https://linux4one.com/how-to-set-up-cron-job-on-centos-7/ to see if there was anything I left out in creating the cron job. I searched this site and was unable to find a question similar to this one. The cron job is not hanging or failing to execute. The closest I found was https://stackoverflow.com/questions/44723154/using-cronjob-to-automatically-backup-mysql-files, but as I said earlier in my post I have already run the script from the command line and it works fine, so the above post does not answer my question. Searching this site does not produce a similar question to the one that I have posted. If you know of a similar post please point it out and not just vote down my post. The exact line is this 0 2 * * * root /bin/bash /home/sgaddis/copy-production-db.sh ==UPDATE== This is the command that is in the log file today. 
mapehr CROND[21076]: (sgaddis) CMD (/bin/bash /home/sgaddis/copy-production-db.sh) Now it is using my user account to launch the cron job. Maybe my user account does not have permission to execute a cron job?
From comments it is clear that you want to run the cron job as root. You can schedule such a script in two ways: Put it in the system's crontab in /etc/crontab. This crontab is a crontab file with an extra user field, like what you show in your question. I would not recommend this though as some Unix systems may manage this crontab by automatic means. Instead... Put the job in the root user's crontab. You can edit the crontab of the root user with the command sudo crontab -e from your ordinary user account, if you have sudo rights. This crontab is the root user's "personal crontab", and it is handy for running your local scheduled jobs that need root privileges. Note that the environment that the script is being run in from cron may well be different from the environment that you run it from in an interactive shell. Differences may include what the current working directory is (write your script so that files are being copied to and from absolute paths), and what environment variables are set and to what values (variables set in your shell startup files will not be available; set these explicitly in the script instead).
Backup script is being executed in cron job but no backup is being produced
1,515,732,416,000
On Ubuntu, what is the calling relation between cron and anacron? I am confused by looking at /etc/anacrontab, /etc/cron.daily/0anacron and /etc/crontab. From https://askubuntu.com/a/848638/1471 cron.daily runs anacron everyhour. What does "cron.daily runs anacron everyhour" mean? Is it "cron runs anacron daily via cron.daily" instead? Thanks.
The anacron manpage describes what it’s supposed to do, and to some extent how it does it. You should trust that, and the contents of the files on your system, more than information you find on unrelated sites on the Internet (including this answer of course). anacron’s job is to ensure that daily, weekly and monthly jobs are run periodically, even if the system isn’t running at the appropriate times. It doesn’t need cron to operate but it does need to avoid duplicating cron’s work. /etc/anacrontab tells anacron what to run; by default, that’s all the cron jobs defined in files under /etc/cron.{daily,weekly,monthly}, at the appropriate intervals. When anacron is run (in a mode where it is asked to actually run the jobs it manages), it checks to see when the jobs were last run, and only runs them if a period of time consistent with the jobs’ intended periodicity has elapsed. /etc/cron.{daily,weekly,monthly}/0anacron take care of ensuring that anacron is aware of cron’s operation: every time cron processes its daily, weekly, and monthly jobs, the first job it runs updates anacron’s timestamps, resetting the counter for the elapsed time since the last execution of the corresponding set of jobs. /etc/cron.d/anacron and /lib/systemd/system/anacron.timer ensure that, on systems with either cron or systemd installed, anacron is run periodically: daily with cron, hourly with systemd. anacron is also run at system boot (via its init script) and when the power status changes (on resume, or when AC power is connected).
Is it true that "cron.daily runs anacron everyhour"?
1,515,732,416,000
I set the following line in a cron job under /etc/cron.d/find_old_file * * * * * root [[ -d /var/log/ambari-metrics-collector ]] && find /var/log/ambari-metrics-collector -type f -mtime +10 -regex '.*\.log.*[0-9]$' -printf '%TY %Tb %Td %TH:%TM %P\n' >> /var/log/find_old_file This cron job should find the logs older than 10 days (but only if the /var/log/ambari-metrics-collector folder exists). For some unclear reason we noticed that /var/log/find_old_file is not created, and when we test it, this line works fine in a bash shell but not from cron. We also added 2>&1 at the end of the line but this did not work. Please advise what is wrong in my cron job. more /etc/cron.d/find_old_file * * * * * root [[ -d /var/log/ambari-metrics-collector ]] && find /var/log/ambari-metrics-collector -type f -mtime +10 -regex '.*\.log.*[0-9]$' -printf '%TY %Tb %Td %TH:%TM %P\n' >> /var/log/find_old_file 2>&1 An example when we test it in a bash shell: [[ -d /var/log/ambari-metrics-collector ]] && find /var/log/ambari-metrics-collector -type f -mtime +10 -regex '.*\.log.*[0-9]$' -printf '%TY %Tb %Td %TH:%TM %P\n' 2018 Aug 13 12:54 collector-gc.log-201808130951 2018 Aug 13 04:22 collector-gc.log-201808130403 2018 Aug 01 12:40 gc.log-201808011229 2018 Aug 01 12:40 collector-gc.log-201808011229 2018 Aug 09 15:36 gc.log-201808091332 2018 Aug 09 10:50 gc.log-201808090825 2018 Aug 13 04:02 collector-gc.log-201808130346 2018 Aug 13 16:51 gc.log-201808131358 2018 Aug 01 13:35 gc.log-201808011241 2018 Aug 01 13:35 collector-gc.log-201808011241 2018 Aug 09 15:39 collector-gc.log-201808091332 2018 Aug 02 23:06 gc.log-201808022256 When I put this * * * * * root echo test >> /var/log/test then it works, but not my line as described. So what is happening here?
The % sign in crontab (see man 5 crontab) has a special meaning (newline), and to make your command work you need to escape each occurrence. A "%" character in the command, unless escaped with a backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input. So, your command should be: * * * * * root [[ -d /var/log/ambari-metrics-collector ]] && find /var/log/ambari-metrics-collector -type f -mtime +10 -regex '.*\.log.*[0-9]$' -printf '\%TY \%Tb \%Td \%TH:\%TM \%P\n' >> /var/log/find_old_file
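An alternative to escaping every % is to move the command into a script, since % has no special meaning there. A sketch (the wrapper file name and install path are assumptions):

```shell
#!/bin/sh
# find_old_file.sh -- inside a script, % needs no backslashes at all
stamp=$(date '+%Y %b %d %H:%M')
echo "$stamp"
# The real job would run the unescaped find command here, e.g.:
# find /var/log/ambari-metrics-collector -type f -mtime +10 \
#     -regex '.*\.log.*[0-9]$' \
#     -printf '%TY %Tb %Td %TH:%TM %P\n' >> /var/log/find_old_file
```

The crontab entry then shrinks to something like * * * * * root /usr/local/bin/find_old_file.sh (path hypothetical), with no escaping to get wrong.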
cron job not redirect the output to file [duplicate]
1,515,732,416,000
I have a Raspberry Pi 2, which I use as kiosk, for that I have installed Raspbian based FullPageOS distro. Everything works fine, except some commands fail silently when trying to run from crontab. I have 2 commands for switching kiosk on and off at certain times by the pi user: $crontab -l -u pi # m h dom mon dow command 05 9 mon-fri * * /bin/bash /home/pi/scripts/dispon.sh >> /tmp/cronjob.log 2>&1 15 18 mon-fri * * /usr/bin/xset -display :0 dpms force off >> /tmp/cronjob.log 2>&1 */3 * * * * /usr/bin/touch /tmp/1111 >> /tmp/cronjob.log 2>&1 As you can see, I have tried it different ways: to execute xset directly in the monitor off sequence, and as part of script, when executing the monitor on. Contents of the dispon.sh script (it is chmod a+x): #!/bin/bash xset -display :0 dpms force on xset -display :0 -dpms Neither of commands appear to work (display doesn't turn on/off) and neither leaves any error message in /tmp/cronjob.log The touch command does work and touch the file though. Both xset and dispon.sh work fine when run by pi user from SSH connection. Any ideas??
The man page for the crontab file format (man 5 crontab) writes, Names can also be used for the "month" and "day of week" fields. Use the first three letters of the particular day or month (case doesn't matter). Ranges or lists of names are not allowed. Notice the last sentence: you cannot use mon-fri (but you can use 1-5). You have also missed that the comment (first line) reminds you of the correct field order: minute, hour, day-of-month, month, day-of-week, command; but you had placed the day-of-week values too soon. This corrected crontab file should work better for you: SHELL=/bin/bash PATH=/usr/bin:/bin:/usr/local/bin:/home/pi/scripts # m h dom mon dow command 05 9 * * 1-5 dispon.sh >> /tmp/cronjob.log 2>&1 15 18 * * 1-5 xset -display :0 dpms force off >> /tmp/cronjob.log 2>&1 */3 * * * * touch /tmp/1111 >> /tmp/cronjob.log 2>&1 Finally, if you find that cron is apparently ignoring your entries, you can search for its recent log reports to see what (if anything) is going on: grep CRON /var/log/syslog
Crontab only runs some commands?
1,515,732,416,000
Having read through the Wikipedia page on cron, it is unclear to me when cron begins to execute the tasks I have defined in the crontab file. Is it during the boot process - or even at the end of it - or later? I am sure that they are executed when I log into the system (Linux Mint 17.3), but what happens if I do not?
The tasks defined in the various crontab files are executed by crond, which is started during boot by your init (whether that's sysvinit, systemd or Upstart). crond processes tasks as soon as it starts, so you'll see crontab-defined tasks potentially start executing before the system has finished booting. In any case crond will run tasks you've scheduled regardless of whether you're logged in or not. You can boot a system up without ever logging into it, and crond will still run the tasks that have been defined – this is typically the case on servers. The crond(8) manpage has all the details.
When do cron tasks begin to execute?
1,515,732,416,000
I am testing out a Raspberry Pi with the aim to use it in a production system for logging manufacturing data. All is working well and I have been recording test data over the last month. For redundancy and risk management I have a bunch of Python scripts that need to be scheduled. In the past I have been working on Windows Server environments and the Task Scheduler did everything I needed. Logging issues, retrying tasks on failure, notifications and much more. I have been playing with cron over the last few weeks and I have wasted too much time. The log files are not helpful and since I don't have an MTA installed the real errors get lost. Any alternative that could make scheduling easier rather than harder? Added detail on the issue: I have tried the ">>" in the past with little luck. If that is the normal way to do it I will give it another go. The Python script runs fine if I run it manually. But as soon as I put it in crontab it stops working, and without a log I can't resolve it or at least understand the problem. It seems to be an issue with writing files. The cron jobs that are working are Python scripts that connect to Bluetooth devices and then log the data in a MySQL db. The cron log is trying to report output from jobs, but because no MTA is set up it just reports the MTA issue and gives me zero extra info on the actual issue. (/var/log/cron.log this is the main log, right?) I will look into an MTA; internet connectivity is an issue and might not be available at all times. My data logging jobs are in "crontab -e" and I have one job that is creating image files that is sitting in "sudo crontab -e". (two different crontab files I take it) All of my cron jobs are similar to these : 00 6 * * * python /app/test.py
Actually, cron is production-ready. It's been battle-tested so many times it's hard to accuse it of malfunctioning. What you might be experiencing is issues resulting from simple errors. It would help a lot if you specified what your problems with cron are, exactly. As already pointed out by gaueth, you can append >> /tmp/somefile 2>&1 to you command in crontab. What this does is: >> /tmp/somefile means 'append the stdout stream of the command to /tmp/somefile'. This simply writes everything that would have otherwise been printed on your terminal into the file specified. Please note the use of >> (append) instead of > (overwrite). 2>&1 means 'send stderr to stdout'. In human terms: write the output of any errors in the same place where you write standard output (explained above) Using this you will have a full log of what happened after cron executed your script. Please note it may be a good idea to make the script print out current date (as well as some other things, perhaps) so that you have this data in the log file. Another thing to bear in mind is the PATH variable. Cron runs in a slightly different environment than your interactive terminal session so it's usually a good idea to include the output of echo $PATH in the script itself or in the crontab line, like so: */20 * * * * export PATH=<paste output of echo $PATH here>; <command>.
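Both suggestions (log everything, record the environment) can be combined into a small header at the top of the script, so every cron run is timestamped and self-documenting. A sketch (mktemp stands in for a fixed log path such as /tmp/myjob.log):

```shell
#!/bin/bash
logfile=$(mktemp)   # in a real job: a fixed path such as /tmp/myjob.log
{
    echo "=== run at $(date) ==="
    echo "PATH=$PATH"
    # ... the real work of the script goes here, e.g. python /app/test.py ...
} >> "$logfile" 2>&1
```

With this in place the crontab line needs no redirection at all, and the log records exactly which environment each run saw.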
cron - production-ready alternative
1,515,732,416,000
I am getting a Bad Hour error for the following crontab entry: */05 17-05 * * * wget -q -O /dev/null "http://abcd/cron/abcd" Is there any issue with this? I want the cron to run from evening 5 PM to morning 5 AM
As you are not specifying which system you are using, I am hoping your system uses a "Vixie" or "Vixie"-related crontab utility. Still: 17-05 is not considered a proper range (the lower limit being greater than the higher limit of the range). You could instead write: "17-23,00-05" From man 5 crontab: Ranges of numbers are allowed. Ranges are two numbers separated with a hyphen. The specified range is inclusive. For example, 8-11 for an ``hours'' entry specifies execution at hours 8, 9, 10 and 11. So, sure, nothing really forbidding you from writing an interval the way you did. The Extensions part of man 5 crontab is also interesting regarding how other crontab utilities would allow you to specify more than a simple range (your system might be one of these): Lists and ranges are allowed to co-exist in the same field. "1-3,7-9" would be rejected by ATT or BSD cron -- they want to see "1-3" or "7,8,9" ONLY. So, as you can see, it really depends on your system's crontab utility's ability to understand what you mean by "17-05". For more information: man 5 crontab (the "vixie" cron)
Getting a Bad Hour Error in crontab
1,515,732,416,000
Mentioned in a comment in this post How are files under /etc/cron.d used?: People might miss, that "the file names must conform to the filename requirements of run-parts" (see Debian's man cron). So filenames in /etc/cron.d/ which match the shell glob [!A-Za-z0-9_-] are ignored! Hence things like *.dpkg-dist or vi-backups *~ do no harm, but if you accidentally create /etc/cron.d/very_important.crontab, this will get ignored due to the . in it! Does the same restriction on file names containing the ‘.’ symbol also apply in the cron.daily, cron.hourly, cron.monthly etc. folders? If not, why not? Or is the naming restriction only in /etc/cron.d/?
As explained in man 8 cron on Debian (the source of the quote): Support for /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly is provided in Debian through the default setting of the /etc/crontab file (see the system-wide example in crontab(5)). The default system-wide crontab contains four tasks: run every hour, every day, every week and every month. Each of these tasks will execute run-parts providing each one of the directories as an argument. These tasks are disabled if anacron is installed (except for the hourly task) to prevent conflicts between both daemons. As described above, the files under these directories have to pass some sanity checks including the following: be executable, be owned by root, not be writable by group or other and, if symlinks, point to files owned by root. Additionally, the file names must conform to the filename requirements of run-parts: they must be entirely made up of letters, digits and can only contain the special signs underscores ('_') and hyphens ('-'). Any file that does not conform to these requirements will not be executed by run-parts. For example, any file containing dots will be ignored. This is done to prevent cron from running any of the files that are left by the Debian package management system when handling files in /etc/cron.d/ as configuration files (i.e. files ending in .dpkg-dist, .dpkg-orig, .dpkg-old, and .dpkg-new). (emphasis mine). So yes, the same rules apply to file names under /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly as under /etc/cron.d. All this is specific to Debian and derivatives, and doesn’t apply to your CentOS systems.
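The rule can be checked with a simple pattern match: a name qualifies only if it contains nothing but letters, digits, underscores and hyphens. A sketch (the helper function name is my own):

```shell
#!/bin/sh
# Returns 0 if run-parts would accept the file name, 1 otherwise
runparts_ok() {
    case $1 in
        ('') return 1 ;;                  # empty name never qualifies
        (*[!A-Za-z0-9_-]*) return 1 ;;    # any other character disqualifies it
        (*) return 0 ;;
    esac
}
runparts_ok 'very_important' && echo accepted          # prints: accepted
runparts_ok 'very_important.crontab' || echo ignored   # prints: ignored
```

This is why the .dpkg-dist, .dpkg-old etc. leftovers are harmless: the dot alone rules them out.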
Are directories such as /etc/cron.daily and similar restricted by naming rules?
1,515,732,416,000
I need to schedule a script every two minutes from 10 PM to 5 AM the next day, from Monday to Friday, but I'm not exactly sure if the below is correct and will work, or if there are other correct answers... */2 22-00 * * 1-5 /myscript.sh */2 00-05 * * 2-6 /myscript.sh Update: I expect to start it on Sunday night at 10 PM and from then I expect it to run 22-05 every day till Friday.
The first entry specifies a range that runs backwards. That should instead be */2 22-23 * * 1-5 /myscript.sh This covers the range from 22:00 to 23:58, Monday through to Friday. The second entry should probably not use zero-filled numbers: */2 0-4 * * 1-5 /myscript.sh This covers the range 00:00 to 04:58. The two schedules above would, together, run /myscript.sh every two minutes from 22:00 to 04:58 every Monday through to Friday (starting at 00:00 on Monday morning, ending at 23:58 on Friday evening). These two could be combined into */2 22-23,0-4 * * 1-5 /myscript.sh See also this schedule on the crontab guru site. Should you want a final run at exactly 5 AM, add an extra schedule: 0 5 * * 1-5 /myscript.sh Taking your updated question into account: */2 22-23 * * 7 /myscript.sh */2 22-23,0-4 * * 1-4 /myscript.sh */2 0-4 * * 5 /myscript.sh 0 5 * * 1-5 /myscript.sh This runs /myscript.sh every two minutes from 22:00 to 05:00, from Sunday at 22:00 through to Friday at 05:00. The 1st schedule runs the Sunday evening jobs. The 2nd schedule runs the evening and morning jobs (until 04:58), Mondays to Thursdays. The 3rd schedule runs the Friday morning jobs (until 04:58). The 4th schedule runs the 05:00 jobs Mondays to Fridays.
crontab entry for scheduling over night
1,515,732,416,000
Here is my crontab entry which I basically want to run on April 1st of every year, but only if April 1st lands on a Thursday. 0 13 1 4 4 /path/to/my/script.sh But this seems to be running every week. What am I missing?
The problem with the crontab entry has already been pointed out, but to execute your script on the expected date and time, change your job schedule to: 0 13 1 4 * [ "$(date +\%u)" -eq 4 ] && /path/to/my/script.sh
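The same guard can live at the top of the script itself, keeping the crontab entry plain. A sketch (the helper function is my own; date +%u prints the ISO weekday, 1 = Monday ... 7 = Sunday, so 4 = Thursday):

```shell
#!/bin/bash
# Run the real work only when it is April 1st AND a Thursday.
# $1 is the output of date +%m%d, $2 is the output of date +%u.
is_april_first_thursday() {
    [ "$1" = "0401" ] && [ "$2" -eq 4 ]
}
if is_april_first_thursday "$(date +%m%d)" "$(date +%u)"; then
    echo "running yearly job"
    # /path/to/my/script.sh
fi
```

The crontab entry then becomes 0 13 1 4 * /path/to/wrapper.sh, with no escaping or shell logic in the crontab line at all.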
Crontab not meeting all requirements
1,515,732,416,000
I have a specific script that gets the IP address of a particular MAC. For this it uses arp, and it works correctly. The problem comes when I program a crontab entry to run that script: it runs, but the line where it calls the arp command does not work, and therefore the script does not finish correctly, but only when it is run from crontab. The script is: #!/bin/bash subred=192.168.1.0/24 mac=aa:bb:cc:dd:ff:gg log() {...} log info "Init program" ip=$(nmap -sP $subred >/dev/null && arp -an | grep $mac | awk '{print $2}' | sed 's/[()]//g') if [ $ip ]; then log ok "IP found in $ip" else log error "IP not found" fi log info "Finished program" This script has been configured to run from crontab every hour with @hourly /root/machaunter.sh. The time scheduling in cron is well done and the script runs smoothly, ruling out permission or script issues. The proof is the log file it generates: 29/04/2020 14:00:01 Init program 29/04/2020 14:00:06 IP not found 29/04/2020 14:45:59 Init program 29/04/2020 14:46:08 IP found in 192.168.1.173 29/04/2020 14:46:09 Finished program 29/04/2020 15:00:01 Init program 29/04/2020 15:00:10 IP not found 29/04/2020 16:00:01 Init program 29/04/2020 16:00:13 IP not found 29/04/2020 17:00:01 Init program 29/04/2020 17:00:07 IP not found 29/04/2020 18:00:01 Init program 29/04/2020 18:00:05 IP not found 29/04/2020 18:25:43 Init program 29/04/2020 18:25:50 IP found in 192.168.1.173 29/04/2020 18:25:51 Finished program As you can see by the hours, the two times I ran the script manually it worked correctly, but the rest of the times it didn't. 
I've been debugging and added tests to the script until I found that the arp call does not output anything, although it does when I launch it manually (for the tests I added the log line log error "arp: $(arp -an)" and changed the crontab to run every minute: * * * * * /root/machaunter.sh) 30/04/2020 09:22:01 Init program 30/04/2020 09:22:01 arp: 30/04/2020 09:23:01 Init program 30/04/2020 09:23:01 arp: 30/04/2020 09:24:02 Init program 30/04/2020 09:24:02 arp: 30/04/2020 09:24:29 Init program 30/04/2020 09:24:29 arp: Address HWtype HWaddress Flags Mask Iface 192.168.1.46 ether 7e:2d:d1:ca:d9:c0 C br0 192.168.1.68 ether c894:66:dd:1c:c2:9d C br0 192.168.1.173 ether 48:48:59:e5:b8:5e C br0 192.168.1.1 ether bf:f1:54:4d:e3:25 C br0 30/04/2020 09:25:01 Init program 30/04/2020 09:25:01 arp: 30/04/2020 09:26:01 Init program 30/04/2020 09:26:01 arp: 30/04/2020 09:27:01 Init program 30/04/2020 09:27:01 arp: 30/04/2020 09:28:01 Init program 30/04/2020 09:28:01 arp: 30/04/2020 09:29:02 Init program 30/04/2020 09:29:02 arp: 30/04/2020 09:30:01 Init program 30/04/2020 09:30:01 arp: 30/04/2020 09:31:01 Init program 30/04/2020 09:31:01 arp: As can be seen from the log, the arp command never returns data except for the one time I launched it manually. Also, as you can see, when this command fails the script never reaches the end (there is no "Finished program" log entry). Why is this happening, and what solution would there be? 
UPDATE WITH CRON DAEMON LOG ● cron.service - Regular background program processing daemon Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2020-04-27 18:09:19 UTC; 2 days ago Docs: man:cron(8) Main PID: 486 (cron) CGroup: /system.slice/cron.service └─486 /usr/sbin/cron -f Apr 30 10:40:01 NanoPi-R1 CRON[14428]: (root) CMD (/root/machaunter.sh) Apr 30 10:40:01 NanoPi-R1 CRON[14429]: (root) CMD ( /bin/bash /usr/bin/sync_ntp_rtc.sh /dev/rtc0) Apr 30 10:40:01 NanoPi-R1 CRON[14421]: (CRON) info (No MTA installed, discarding output) Apr 30 10:40:01 NanoPi-R1 CRON[14421]: pam_unix(cron:session): session closed for user root Apr 30 10:40:04 NanoPi-R1 CRON[14420]: (CRON) info (No MTA installed, discarding output) Apr 30 10:40:04 NanoPi-R1 CRON[14420]: pam_unix(cron:session): session closed for user root Apr 30 10:41:01 NanoPi-R1 CRON[14459]: pam_unix(cron:session): session opened for user root by (uid=0) Apr 30 10:41:01 NanoPi-R1 CRON[14463]: (root) CMD (/root/machaunter.sh) Apr 30 10:41:01 NanoPi-R1 CRON[14459]: (CRON) info (No MTA installed, discarding output) Apr 30 10:41:01 NanoPi-R1 CRON[14459]: pam_unix(cron:session): session closed for user root UPDATE WITH STDOUT OF COMMANDS I've added the redirect >/tmp/logfile 2>&1 to the crontab command, leaving: * * * * * /root/machaunter.sh >/tmp/logfile 2>&1 In /tmp/logfile I get: 30/04/2020 13:52:01 [info] Init program /root/machaunter.sh: line 37: arp: command not found 30/04/2020 13:52:01 [info] arp:
The environment that cron is running your script in has a different value for the PATH variable compared to your ordinary interactive environment. This means that your script does not know where the arp command is to be found, for example (as mentioned in comments). I would suggest that you make a note in what directories the tools that you use in your script by running command -v for each of them in an interactive shell, e.g. command -v nmap command -v arp etc. This will give you a list of pathnames for those commands. Take the directory names of these and add them to PATH in the script itself (somewhere at the start of the script): PATH=$PATH:/some/directory/path:/another/directory/path Doing this in the script guarantees that the tool will be found by the script. In the end, it may be that all you need is to add /usr/sbin: PATH=$PATH:/usr/sbin The other alternative is to use the tools wih their absolute path, like using /usr/sbin/arp instead of just arp.
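Putting both steps together in one sketch: first discover the directories interactively, then extend PATH defensively at the top of the cron script (the /usr/sbin and /sbin guesses match where arp usually lives, but check the command -v output on your own system):

```shell
#!/bin/bash
# Step 1 (run once, interactively): find out where each tool lives
for tool in nmap arp awk grep sed; do
    command -v "$tool" || echo "$tool: not found"
done
# Step 2 (at the top of the cron script): make sure those directories
# are searched even under cron's minimal environment
PATH=$PATH:/usr/sbin:/sbin
export PATH
```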
arp doesn't run in script when run through crontab
1,515,732,416,000
I have made (with a Stack Exchange help) a script that checks if the web application is available and restarts the web app daemon in order to get it running again (usually helps until I'm at work). It's supposed to run every hour. I've added the following line to sudo crontab -e: 0 * * * * bash /usr/local/bin/test.sh The contents of test.sh: #!/bin/sh HTTP_STATUS=$(lynx -head -source https://example.com |head -1 |awk '{print $2}') calendar=$(date "+%d-%m-%Y - %H:%M") LOG="Log is placed in /usr/local/bin/test.log" if [ "$HTTP_STATUS" = "503" ] then systemctl restart mydaemon echo "$calendar - site unavailable ($HTTP_STATUS). Daemon restarted." >> test.log elif [ "$HTTP_STATUS" = "200" ] then echo "$calendar - site available ($HTTP_STATUS)" >> test.log else echo "$calendar - site status unknown ($HTTP_STATUS)" >> test.log fi I have two other scripts in scheduled in cron that work just fine, only that they run every midnight. All of my scripts have same permissions and same owner. My test.log file is not updated every hour so I suppose the script doesn't run. Running the script manually creates log entries.
crontab runs scripts in a different environment from a login shell; in particular, PATH could be different or undefined. Try to use absolute paths for the commands (lynx, head ...). You can get them with which <command> Moreover, you should specify the path for test.log, since the working directory under cron will not be the directory the script lives in.
My script doesn't run according to cron schedule
1,515,732,416,000
I have this cron in my crontab 00 01 * * * /srv/python/proj/acquisizione/acquisizioneAOK.sh >> /home/crontab_logs/acquisizioneAOK.out 2>&1; mailx -s "Cron output: acquisizioneAOK" [email protected] < /home/crontab_logs/acquisizioneAOK.out which writes the stdout to a .out file and then sends me an e-mail with that file as the body. How can I empty the file of previous output? I just want an e-mail with the stdout of the last cron execution. Thanks
To overwrite the log file on each execution of the script, replace the >> operator with the > operator. The >> operator appends the output to the file on each execution, and you would have a continuously growing file with that approach. The > operator 'clobbers' the file on each execution, which results in any existing data being removed before the redirected output is written. Going one step further, you could simply pipe the output and error streams from the script to the mailx command directly : /srv/python/proj/acquisizione/acquisizioneAOK.sh 2>&1 | mailx -s "Cron output: acquisizioneAOK" [email protected]
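The clobber-versus-append behaviour is easy to verify on an ordinary file. A minimal sketch:

```shell
#!/bin/sh
out=$(mktemp)
echo 'first run'  > "$out"    # > truncates the file before writing
echo 'second run' > "$out"    # the first line is now gone
echo 'third run' >> "$out"    # >> appends after the existing content
cat "$out"                    # prints "second run" then "third run"
```

So switching the cron entry from >> to > guarantees the mailed file only ever contains the latest run.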
Overwrite file with stdoutput from cron execution
1,515,732,416,000
production.log [2019-02-11 10:18:18] GET / [2019-02-11 11:18:19] POST blavadasdasdgd ... ... <--- A lot of in between logs and other data ... [2019-02-12 11:18:20] INFO ::HTTPServer#start: pid=21378 port=4567 [12/Mar/2019:11:18:25 +0200] "GET / HTTP/1.1" 404 458 0.0086 [12/Mar/2019:11:18:26 EET] "GET / HTTP/1.1" 404 458 - -> / [12/Mar/2019:11:18:27 +0200] "GET /" 200 18893 0.0020 [2019-03-12 11:18:28] GET / [2019-03-12 12:18:29] POST blablabla ... ... <--- A lot of in between logs and other data ... [13/Mar/2019:11:18:30 +0200] "GET / HTTP/1.1" 404 458 0.0086 [13/Mar/2019:11:18:31 EET] "GET / HTTP/1.1" 404 458 - -> / [13/Mar/2019:11:18:32 +0200] "GET /" 200 18893 0.0020 ... ... <--- A lot of in between logs and other data ... [2019-03-14 11:19:18] GET / The content of this file is fake (but the timestamps are in the correct order, older to newer). I have a webserver that is running through nohup and outputting everything to a file called production.log; it is writing around (+10GB) of data and info to it, and I want to truncate it in a way that keeps a good amount of recent logs and data inside it while getting rid of the old data. So I'm taking an approximate guess of tailing the last 30,000 lines, outputting them into a new file called production.log.1 and then moving it back to replace production.log. Example: tail -30000 production.log > production.log.1 && mv production.log.1 production.log Now when I try to tail -f production.log it never outputs anything new from the webserver; instead it only keeps showing the last timestamped log line from before I replaced the file. The webserver just stops writing into it. Is there a better way to do this without writing into a different file? I need to get rid of the old data from this file while keeping 2>&1 outputting to it from the webserver.
Make sure to append the output to the file, i.e. use >> and not >. Next you can use "copy and truncate" to preserve the log: cp production.log production.log.1 && cp /dev/null production.log There is a small time window where the copy command is completed and the truncation of the logfile is performed so that you may lose a bit of the log, but that cannot be avoided. Note that the utility logrotate has a special directive to do just this: copytruncate. The point of using append and not simply redirect is that otherwise the program writing to the logfile may remember its write offset, so that if the logfile was e.g. 2MB in size, and you truncate the file, and the program writes again, you may end up with a sparse logfile that has 2MB of null blocks and then the logs again.
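The copy-and-truncate sequence can be exercised on a scratch file; the key point is that the log file is truncated in place (same inode), so a writer that appends keeps working:

```shell
# Simulate the writer's log, then rotate it with copy-and-truncate.
printf 'old line 1\nold line 2\n' >> production.log
cp production.log production.log.1    # preserve the existing log contents
cp /dev/null production.log           # truncate in place; the writer's fd stays valid
printf 'new line\n' >> production.log # '>>' appends at the real end, avoiding a sparse gap
cat production.log.1
cat production.log
```

After this, production.log.1 holds the two old lines and production.log contains only "new line".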
Replace log file without affecting "Redirection of stdin and stdout" using nohup
1,515,732,416,000
I have a configuration file in which some of the lines look like this: # Mandatory: no # Range: 0-5 # Default: DebugLevel=3 I want to change the 3 in DebugLevel to 5 at 2 AM, and then, for example 2 hours later at 4 AM, change it back to 3 again. How can I do this, with crontab or a script?
You can use sed to change the value at given time with cron: To change DebugLevel=3 to DebugLevel=5 at 2am every day and then change back DebugLevel=5 to DebugLevel=3 at 4am every day, add following lines to your cron with crontab -e 0 2 * * * sed -i 's/DebugLevel=3/DebugLevel=5/g' file.conf 0 4 * * * sed -i 's/DebugLevel=5/DebugLevel=3/g' file.conf
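A quick dry run of those two sed commands on a scratch copy of the config (using GNU sed's -i in-place editing, as in the cron entries above):

```shell
printf '# Default:\nDebugLevel=3\n' > file.conf
sed -i 's/DebugLevel=3/DebugLevel=5/g' file.conf   # what the 2 AM job does
grep '^DebugLevel' file.conf                       # shows DebugLevel=5
sed -i 's/DebugLevel=5/DebugLevel=3/g' file.conf   # what the 4 AM job does
grep '^DebugLevel' file.conf                       # shows DebugLevel=3 again
```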
change a value in a file
1,515,732,416,000
I have this in my crontab: 10 6 * * * java -jar /.../myproject.jar >> /../myjob.log 2>&1 I want to run my app only on weekdays (Monday to Friday). Is this line OK? 10 6 * * 1-5 java -jar /.../myproject.jar >> /../myjob.log 2>&1
That is a correct way to run the job on days 1 through 5 (Sunday being 0 (or 7)). Alternatively, you could explicitly list the days: 10 6 * * 1,2,3,4,5 java -jar /.../myproject.jar >> /../myjob.log 2>&1 You'll want to be careful about adding a "day of month" (field 3) restriction, apart from *; if both values are specified, then cron will run the job when either field matches.
Schedule Cron only the week
1,515,732,416,000
I'm trying to create a basic script which will be run every minute from cron. The script script.sh is: #!/bin/bash DATE=`date +"%Y-%m-%d %H:%M:%S"` IP=`ifconfig | grep "inet addr" | awk --field-separator ':' '{print $2}' | awk '{print $1}' | head -1` echo "$DATE $IP" >> test.log When I run this script by typing "./script.sh" I get the IP address in test.log in this format (and it's OK): 2017-11-08 16:33:33 10.0.0.1 2017-11-08 16:34:33 10.0.0.1 2017-11-08 16:35:33 10.0.0.1 However, when I create such a cron job: * * * * * /path/to/my/script.sh in test.log I have only the date: 2017-11-08 16:36:13 2017-11-08 16:37:13 2017-11-08 16:38:13 But why? Why is there no IP address inside? Do you have any idea?
date is probably /bin/date and is in the default $PATH for cron jobs, which is not the same as the $PATH set on user login. ifconfig is likely /sbin/ifconfig which is not in the $PATH. Change ifconfig to an explicit full path (such as /sbin/ifconfig) to run ifconfig within a cron job.
problem with echo IP address
1,515,732,416,000
I have the following set of commands: docker exec -u www-data bin/console api:execute --object=Account; docker exec -u www-data bin/console api:execute --object=AgreementType; docker exec -u www-data bin/console api:execute --object=CFProgramLevel; docker exec -u www-data bin/console api:execute --object=Product; docker exec -u www-data bin/console api:execute --object=Customer; docker exec -u www-data bin/console api:execute --object=Distributor; Because the commands listed above form a sequence, they have to be executed in the order they appear. What do I need to achieve? Run one and only one job at a time. Respect the sequence, which means, for example: Account has to run before AgreementType, AgreementType has to run before CFProgramLevel, and so on. Run the whole sequence each hour. I was thinking of using cron jobs but I haven't a clue how to achieve this. Could anyone provide me with some answers?
Put the commands in a script and schedule the script with cron: The script runstuff.sh: #!/bin/sh docker exec -u www-data bin/console api:execute --object=Account docker exec -u www-data bin/console api:execute --object=AgreementType docker exec -u www-data bin/console api:execute --object=CFProgramLevel docker exec -u www-data bin/console api:execute --object=Product docker exec -u www-data bin/console api:execute --object=Customer docker exec -u www-data bin/console api:execute --object=Distributor The crontab: 0 * * * * /path/to/runstuff.sh or, @hourly /path/to/runstuff.sh if your cron understands @hourly (check man 5 crontab). This allows you to change the script (if you need to) without editing the existing cronjob. It also guarantees that the Docker invocations are executed in the correct order, and it collects all processing of the job to one single place (the script).
How to run one job at a time but all of them each hour?
1,515,732,416,000
Say I want to have a beep at 13:00. I would use speaker-test -t sine -f 1300 | at 13:00 I can see that something is going to be executed with atq, but how do I get the precise scheduled command (that is, speaker-test -t sine -f 1300)?
atq will list the jobs: 4 Mon Apr 24 15:00:00 2017 a skitt The first number is the job identifier, which you can then use with at -c to view the job’s contents: at -c 4 Note that at jobs start with a lengthy setup to reproduce the environment in which the job was defined; you’ll see the command you gave at the end.
How to show a at scheduled command? [duplicate]
1,515,732,416,000
For some reason, I cannot get a simple cron job working on my Mint 18 KDE system. This is the job, it tells a script to run every minute. See the crontab line that I get when I type crontab -l: # m h dom mon dow command 1 * * * * sh /home/martien/crontest.sh This is the script crontest.sh: #! /bin/bash cd /home/martien/archives/ DIRECTORY='webcam-'`date +%y-%m-%d-%H-%s` mkdir ~/archives/$DIRECTORY These are the properties of the script -rwxrwxr-x 1 martien martien 110 Apr 2 07:35 crontest.sh The file in /var/spool/cron/crontabs/ confirms the existence of the cron job. Cron runs: root 953 1 0 06:50 ? 00:00:00 /usr/sbin/cron -f The script runs when I enter this in the command line: sh /home/martien/crontest.sh I run Mint 18 (Ubuntu Xenial).
Your cron entry runs once an hour, at one minute past: 1 * * * * sh /home/martien/crontest.sh If you want every minute you should use this: * * * * * /home/martien/crontest.sh Since you've declared your script to be a bash script and you've set it to be executable, just call it directly. Don't write a bash script and use sh to execute it as on some systems they really are different shells. Your script can be amended slightly, too: #!/bin/bash cd /home/martien/archives DIRECTORY="webcam-$(date +'%y-%m-%d-%H-%s')" mkdir "$DIRECTORY" I've quoted your variables when they are used, and switched backticks to the more modern and consistent $(...) construct.
Cron job not working Linux Mint 18
1,515,732,416,000
Is there a log of crontab for me to know if crontab is working fine? I am on Linux Mint 17.3. What I have done: ran crontab -e as root; it did not exist, so I chose the nano editor and proceeded; entered this line: 00 12 * * * /home/vlastimil/Development/bash/full-upgrade.sh saved and exited. My goal is to automate updates every single day at 12:00 (midday, not midnight). This thread is not about arguing the best way to achieve that. I created the /home/vlastimil/Development/bash/full-upgrade.sh file with this content: #!/bin/bash apt-get update && apt-get dist-upgrade -y && apt-get --purge autoremove -y And finally I set 755 rights on the file. I don't know how to test it. Will it work?
Logs probably go to syslog, which varies depending on what syslog daemon is involved and how that is configured to log, start with grep -r cron /etc/*syslog* to find where and what is going on on the system, or per derobert under systemd the relevant command is journalctl -b 0 _SYSTEMD_UNIT=cron.service Adding a test cron job that touches a file (ideally not in /tmp, unless the vendor makes that per-user private, for security reasons) should also confirm whether cron is working or not, just be sure to eventually remove the test cron job before it fills up a partition or something silly. Other usability and security pointers: some cron daemons can run scripts directly, in which case you can just copy the script into /etc/cron.daily, though this may not suit something that you do not want to run (with everything else!) at exactly the top of the hour. root running a script under a user's home directory could be very bad, as a compromise of that user account could then be leveraged to root access, or the script could needlessly fail if the home directory is on NFS or encrypted; move the script elsewhere to avoid this (/root/bin, or somewhere under /usr/local or /opt depending on the local filesystem preferences). Even more pointers fall into the realm of shell script debugging, mostly to note that cron is not the shell; use env or set to see what is set under cron, check that PATH is correct, and etc. (One old and horrible linux kernel bug involved java daemons crashing but only if they were run from cron; that was fun to debug...)
How do I know if crontab is working fine?
1,515,732,416,000
I'm having some difficulties in understanding why the following cronjob does not work anymore: 30 3 * * * /path/to/backup_script.sh && tar -czvf /path/to/archived/backups/retain_last_3_backups/backup-community_$(date '+%Y%m%dT%H%M%S').tar.gz -C /path/to/source/backup/folder/ . If I run it manually using the same user who owns the crontab, it does work. It stopped working when I edited it a couple of days ago, adding && tar -czvf [...]. Should I call the date command in a different way, or escape the $ (I'm going to test this now, just noticed it)? Thanks to David Sánchez Martín, I found the specific log; it reports the following error: /bin/sh: 1: Syntax error: Unterminated quoted string
The % symbols are special in a crontab entry, so you can't use them directly in your date format string. man 5 crontab writes The sixth field (the rest of the line) specifies the command to be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the cronfile. Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input.
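Applying that to the entry in question, the fix is to backslash-escape every % in the date format string; an untested sketch with the same placeholder paths:

```
30 3 * * * /path/to/backup_script.sh && tar -czvf /path/to/archived/backups/retain_last_3_backups/backup-community_$(date '+\%Y\%m\%dT\%H\%M\%S').tar.gz -C /path/to/source/backup/folder/ .
```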
broken cron job after editing it
1,515,732,416,000
I have a cron job for every 2 min and somebody told me that some servers do not allow 2 min cron jobs; basically, nothing more frequent than every 30 min. Is it true?
No, this is not true. The minimal step is one minute. But it depends on the service you want to run. Please share more details.
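For reference, a standard cron expresses an every-two-minutes schedule with a step value in the minute field (the script path here is a placeholder):

```
*/2 * * * * /path/to/script.sh
```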
problem with cron job for every 2 minutes
1,515,732,416,000
I was looking into how to programmatically add a cronjob, and came across this SO question which advises using the following one liner: (crontab -l 2>/dev/null; echo "*/5 * * * * /path/to/job -with args") | crontab - What does the trailing - mean when provided to crontab here? There's nothing in the man pages of crontab or cron about it as far as I can tell, and my research thus far has only told me that a - character is used in syntax to move file descriptors, has a special meaning for here documents, and that $- expands to the currently enabled shell options. What is the trailing - doing here for crontab ?
From the manpage of crontab SYNOPSIS crontab [ -u user ] file ... The first form of this command is used to install a new crontab from some named file or standard input if the pseudo-filename - is given. So, the - instructs crontab to create the crontab entry from the output of the preceding command, which is piped to the stdin of the crontab command via the | operator.
What is the meaning of a trailing hyphen in this crontab command? [duplicate]
1,515,732,416,000
In Debian 10 (buster) I want to schedule a task with cron. This task is a Python script that creates a CSV file. The Python script starts with: import xmlrpc.client import csv When I execute it, without any cron task, I get this message: /usr/bin/python /home/debian/api_odoo_contact.py Traceback (most recent call last): File "/home/debian/api_odoo_contact.py", line 1, in <module> import xmlrpc.client ImportError: No module named xmlrpc.client How do I resolve these module dependencies? Do I have to install modules on my server before executing the script and define their paths in my Python script?
xmlrpc.client is a Python 3 library (it was xmlrpclib in Python 2), so you need to specify a Python 3 interpreter: /usr/bin/python3 /home/debian/api_odoo_contact.py In Debian 10, /usr/bin/python is a Python 2 interpreter.
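A minimal check of the version difference, runnable under a Python 3 interpreter, where xmlrpc.client is part of the standard library:

```python
import sys
import xmlrpc.client  # raises ImportError under Python 2, where it was xmlrpclib

# On Python 3 the import succeeds and the client class is available.
print(sys.version_info.major)                 # prints: 3
print(xmlrpc.client.ServerProxy.__name__)     # prints: ServerProxy
```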
Debian - Execute a python script with modules import
1,515,732,416,000
I want to schedule an SQL query in a PostgreSQL database with a crontab. In a terminal, as root, when I run: psql -U postgres -d my_database -h localhost -p 5432 -c "INSERT INTO schema.table (name) VALUES ('Insert with a command from terminal');" it asks me for a password. So I add it: PGPASSWORD="mypassword" psql -U postgres -d my_database -h localhost -p 5432 -c "INSERT INTO schema.table (name) VALUES ('Insert with a command from terminal and password');" And when I run the same in the crontab, it doesn't succeed: 0 12 * * * PGPASSWORD="mypassword" psql -U postgres -d my_database -h localhost -p 5432 -c "INSERT INTO schema.table (name) VALUES ('Insert with a command from crontab and password');" The crontab log (nano var/log/syslog) returns: (root) CMD (PGPASSWORD="mypassword" psql -U postgres -d my_database -h localhost -p 5432 -c "INSERT INTO schema.table (name) VALUES ('Insert with a command from crontab and password');") CRON[15504]: (CRON) info (No MTA installed, discarding output) CRON[15677]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1) crontab[15862]: (root) BEGIN EDIT (root) systemd[1]: session-60285.scope: Succeeded. crontab[15862]: (root) END EDIT (root) Why doesn't the crontab job execute the psql command? How can I schedule an SQL command with crontab?
Some implementations of cron (I think including the one that comes with Debian) let you set the environment variables on previous lines: PGPASSWORD="mypassword" 0 12 * * * psql -U postgres -d my_database -h localhost -p 5432 -c "INSERT INTO schema.table (name) VALUES ('Insert with a command from crontab and password');" Does this have to be a cron job? Since you're running Debian, you could make use of systemd's timers. To implement this in systemd you would write a timer unit that schedules the job and a service unit that runs the job. The service unit can have environment variables set. #/etc/systemd/system/regularquery.timer [Unit] Description="Timer to run SQL query" After=postgresql.service [Timer] OnCalendar=*-*-* 12:00:00 Unit=regularquery.service [Install] WantedBy=multi-user.target #/etc/systemd/system/regularquery.service [Unit] Description="SQL Query" [Service] Type=oneshot User=postgres Environment=PGPASSWORD="mypassword" ExecStart=/usr/bin/psql -U postgres -d my_database -h localhost -p 5432 -c "INSERT INTO schema.table (name) VALUES ('Insert with a command from terminal');" Try it out with sudo systemctl start regularquery.service and see if it connects fine. You can monitor the output with journalctl -u regularquery.service. When you're happy, get the thing scheduled with sudo systemctl enable --now regularquery.timer Alternatively, I found this in man psql: -w --no-password Never issue a password prompt. If the server requires password authentication and a password is not available from other sources such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password. This suggests that ~/.pgpass might be a good alternative to an environment variable. It also suggests --no-password is a good switch for a non-user session (i.e. crontab or systemd) as it will NEVER interactively pause for a prompt.
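For the ~/.pgpass route, each line of the file has the form hostname:port:database:username:password, and the file must be private (chmod 600 ~/.pgpass) or libpq ignores it. With the values from the question it would contain:

```
localhost:5432:my_database:postgres:mypassword
```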
Debian - Schedule a SQL query with crontab
1,515,732,416,000
I'm trying to name a file with its row count in a crontab: * * * * * ~/script > "~/targetfile-$(rows-count).csv" I can do: * * * * * ~/script > "~/targetfile-$(~/script | wc -l).csv" But I think I can do much better and not execute the script twice. Can you help me? Thx
Write the output of your script to a temporary file, count the number of lines in that file and move the file to a new name: t=$(mktemp) && len=$("$HOME/script" | tee -- "$t" | wc -l) && mv -- "$t" "$HOME/targetfile-$len.csv" If you are not using GNU wc, you may get whitespace characters at the start or end of the value in $len. You would then need to strip these out: t=$(mktemp) && len=$("$HOME/script" | tee -- "$t" | wc -l) && mv -- "$t" "$HOME/targetfile-$(( len + 0 )).csv" I run "$HOME/script" only once here and save the output to a temporary file ($t) and, at the same time (courtesy of tee for duplicating the data), count the number of lines in the output. The temporary file is then moved to the new name. I would probably put this in a separate script and schedule that, rather than scheduling that whole list in my crontab. The script could look like #!/bin/sh tmpfile=$(mktemp) && length=$("$HOME/script" | tee -- "$tmpfile" | wc -l) && mv -- "$tmpfile" "$HOME/targetfile-$(( length + 0 )).csv"
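To see the pattern in action, here is the same pipeline with printf standing in for ~/script:

```shell
t=$(mktemp)
len=$(printf 'a,1\nb,2\nc,3\n' | tee -- "$t" | wc -l)  # write and count in one pass
mv -- "$t" "targetfile-$(( len + 0 )).csv"             # arithmetic strips any whitespace
ls targetfile-*.csv                                    # prints: targetfile-3.csv
```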
Redirect and rows count in filename
1,607,029,596,000
I am using MX Linux and I need the following. I have a config for OpenVPN; it works perfectly both from a manual launch via the CLI and from the nm-openvpn application in my XFCE. I want to launch my OpenVPN every morning at 9 am via cron, but displayed visually in my XFCE as if I launched it from the GUI. Which command does NetworkManager launch when I click "connect to vpn"? I was trying to analyze the ps -aux | grep openvpn output and syslog, but without success.
There is no non-networkmanager command that is being launched when you activate the openvpn connection through NM. This is an internal procedure within NM that sets up the connection. To manipulate it through the command line you can use the nmcli command. Some kind of command like this should work: nmcli connect up "name of the openvpn connection" Instead of the name of the VPN connection you can use the ID, UUID or PATH of the connection.
launch nm-openvpn via cli
1,607,029,596,000
I need to execute the following command every 15 minutes: sudo chmod -R 777 /directory I am using Ubuntu Server. The instruction must execute with elevated privileges (root). I was thinking of using the /etc/cron.xxx directory. Can someone please direct me on how to achieve this? Thanks
If you want to execute a command as root every 15 minutes, add the command to root's own crontab: sudo crontab -e Then, in the crontab, add */15 * * * * chmod -R 777 /directory Save and exit the editor. Since cron jobs are running as the user who owns the job, root in this case, sudo will not be used in the crontab.
crontab repeated schedule with elevated privileges
1,607,029,596,000
Please find the script below: if [ "$#" -eq 0 ]; then dbs=('test_db_1' 'test_db_2') path='/var/backups/' else dbs=( "$@" ) path='/app/mybackups/' fi dbuser='test_user_name' dbpassword='test_pass_word' now="$(date +'%d-%m-%Y_%T')" currentDate="$(date)" dayToSubtract=365 cd ${path} echo "location: $path" for element in ${dbs[@]} do echo $element echo "Starting backup now for db: $element on $now" mongodump -u ${dbuser} -p ${dbpassword} --authenticationDatabase 'admin' -d $element --gzip --archive=${element}_$now.archive dateToBeRemoved=$(date --date="${currentDate} -${dayToSubtract} day" +%d-%m-%Y) echo $dateToBeRemoved fileToBeRemoved="${element}_${dateToBeRemoved}" echo Removing $fileToBeRemoved rm $fileToBeRemoved* done echo All Done! echo "location: $path" I have been using this script for more than a year for backing up my dbs on the server on a daily basis, and I also use it to take manual backups by passing command-line arguments to this script. Recently I got a new server running Ubuntu and it's giving the following error there: daily.sh: 2: daily.sh: Syntax error: "(" unexpected (expecting "fi") Please find the screenshot below for reference: After adding: #!/usr/bin/env bash Getting the same error: Ran which bash and got the following: Added the shebang according to the which bash result but am still facing the same issue. When running from the current folder: check for the BOM: Ubuntu Info: Distributor ID: Ubuntu Description: Ubuntu 18.04.3 LTS Release: 18.04 Codename: bionic
I am leaving the old answer below, as it adds context and details. But I think your misunderstanding stems from what you assume sh yourscript ought to do. What it does is to use sh (which differs by system, as you noticed ...) as the interpreter for the script name you pass. What you seem to intend, though, was sh -c "your command ... whatever it is" Now, why you insist on using sh as the interpreter when you've full well been told in comments and two answers that this is wrong, I don't know. If your script is executable, has a proper hashbang and is either located in the PATH or invoked with its relative or absolute path, it will work. But by insisting on using sh as the interpreter you are basically actively sabotaging all attempts to help you, because sh is not the same as sh -c COMMAND. And if your interpreter isn't Bash, using Bashisms will likely fail (as you witnessed). And if you want this failure to be "more reliable" you can carve it in stone by using #!/bin/sh as the hashbang (instead of the suggested one), which will yield the same outcome as handing your script to sh on the command line. You have several issues here. Your screenshot shows your script isn't executable (owned by root:root with file mode 0644). So when you are attempting to execute it with sh it will use whichever shell sh happens to be on your system. The reason this fails is threefold: your script is not executable and not in the PATH sh yourscript makes it - since it's not executable - use sh as the interpreter (which in the case of Ubuntu is dash and doesn't understand all Bashisms) your script is missing a hashbang, which provides the clue for the loader which interpreter to use for your "binary" If, however, you'd add a hashbang such as: #!/usr/bin/env bash ... if you made sure your script contains no BOM and if you made sure that it's executable: chmod +x daily_ori.sh ... and used a relative path (or added your current path to PATH, which - however - is a bad idea!): sh ./daily_ori.sh ... this should succeed. Unless there is more information that is lacking from your question. But that last step really calls into question why you insist on executing the script with sh in the first place if your intention is to invoke Bash. Simply use a hashbang (whether hardcoded or with env) to locate bash as interpreter, make sure the script is executable and call it as you normally would ... To check for the BOM use: xxd -g 1 daily_ori.sh|head -n 2 ... and edit that into your question.
Bash script running perfectly fine on Centos 7 and giving exception on Ubuntu Bionic
1,607,029,596,000
I come back with a very strange behavior. When we run this command on a Linux Red Hat machine: echo "$(tput bold)" start write to log "$(tput sgr0)" >> /tmp/log.txt we get bold text in /tmp/log.txt: more /tmp/log.txt start write to log <----- BOLD TEXT But when we run it from a cron job under /etc/cron.d: */1 * * * * root [[ -f /home/mng_log.sh ]] && echo "$(tput bold)" start write to log "$(tput sgr0)" >> /tmp/log.txt then the text in /tmp/log.txt isn't bold. Why does the command from the cron job not write the bold text?
tput bold writes the character sequence that is to be used to tell the current terminal it is running in to start writing in bold. It knows the type of the terminal based on the value of the $TERM environment variable. That variable is set by terminal emulators or by getty. tput queries the termcap or terminfo databases to know what sequence to use for a given attribute for a given terminal type. For instance, when running in an xterm, where $TERM will be something like xterm or xterm-256color, tput bold will write \e[1m which is the sequence recognised by xterm (and most modern terminal emulators) to start writing in bold. When running in an hpterm, it will send \e&dB instead. When a script is running from cron, it is not running in a terminal. If you want it to send a sequence to enable the bold attribute, you need to tell it for what terminal that should be, by setting the $TERM environment variable. Maybe something like: export TERM="${TERM-xterm}" # set $TERM to xterm if not set printf '%s\n' "$(tput bold)start write to log$(tput sgr0)" >> /tmp/log.txt Then your /tmp/log.txt will contain the xterm sequence to turn bold on. When the content of the file is sent to a xterm terminal emulator, it will be displayed in bold, YMMV for other terminals.
tput in a cron job does not output bolded text [duplicate]
1,607,029,596,000
I want to learn how to use crontab. I see there are two manual pages: crontab(1) crontab(5) Why are there 2 manual pages? What are the differences between them? Do I need to study both in order to use crontab, or is one authoritative?
crontab(1) is the man page of the crontab command. crontab(5) is the man page of the crontab file. The man pages are divided in sections as described in What do the numbers in a man page mean?. Each section groups similar man pages. For example, Section 1 holds user commands (commands runnable by all users in the system). Section 8 covers SysAdmin commands (i.e. the commands that demand root access to be run). Section 5 covers file formats. See the link above for more info. Another example would be passwd(1) which describes the passwd command, and passwd(5) which describes the /etc/passwd file format.
crontab(1), crontab(5) what's the difference? [duplicate]
1,607,029,596,000
I understand there is a lot of documentation on editing crontabs via script, and I can do this by adding an entry with the following command: line="* * * * * /my/folder/script.sh" ( crontab -l ; echo "$line" ) | crontab - However, myself and a few others each have our own "sections" in our crontab under a superuser. How can I insert this line under a keyword or string, such as underneath the line containing SPECIAL_JOB so as not to disturb others' sections and entries? I don't want to just keep appending new jobs at the bottom of the crontab. The cron entry would look like this: # SPECIAL_JOB * * * * * /my/folder/script.sh Ideally, I would delete the previous entry at this line to keep a single, fresh entry using this: #remove entry crontab -l | grep -ve '/my/folder/script.sh' | crontab -
This inserts after all # SPECIAL_JOB lines the $line string. crontab -l | sed '/# SPECIAL_JOB/a'"$line" | crontab -
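A dry run on a scratch file shows where the new line lands; note that the one-line a text form used here is a GNU sed extension:

```shell
line="* * * * * /my/folder/script.sh"
printf '# SPECIAL_JOB\n# OTHER_SECTION\n' > fake-crontab
sed '/# SPECIAL_JOB/a'"$line" fake-crontab
```

This prints the # SPECIAL_JOB marker, then the new job line, then # OTHER_SECTION, confirming the entry is inserted directly under the keyword rather than appended at the bottom.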
How to insert a line into a crontab AFTER a key word or string via script
1,607,029,596,000
I'm having a problem getting my crontab to work on my Ubuntu Server 18.04 running as an Amazon EC2 instance. I have the following line in my /etc/crontab file: */15 * * * * root /bin/bash /home/ubuntu/gzip/over_time_compile_ec2.sh But it does not seem to work, and when running sudo service cron status I get the following: Feb 14 22:20:01 ip-172-31-15-110 CRON[15880]: pam_unix(cron:session): session opened for user ubuntu by (uid=0) Feb 14 22:20:01 ip-172-31-15-110 CRON[15879]: pam_unix(cron:session): session opened for user root by (uid=0) Feb 14 22:20:01 ip-172-31-15-110 CRON[15881]: pam_unix(cron:session): session opened for user root by (uid=0) Feb 14 22:20:01 ip-172-31-15-110 CRON[15882]: (root) CMD (/home/ubuntu/gzip/compile_script.sh) Feb 14 22:20:01 ip-172-31-15-110 CRON[15883]: (ubuntu) CMD (/home/ubuntu/gzip/compile_script.sh) Feb 14 22:20:01 ip-172-31-15-110 CRON[15884]: (root) CMD (/bin/bash /home/ubuntu/gzip/over_time_compile_ec2.sh) Feb 14 22:20:01 ip-172-31-15-110 CRON[15881]: (CRON) info (No MTA installed, discarding output) Feb 14 22:20:01 ip-172-31-15-110 CRON[15881]: pam_unix(cron:session): session closed for user root Feb 14 22:20:01 ip-172-31-15-110 CRON[15880]: (CRON) info (No MTA installed, discarding output) Feb 14 22:20:01 ip-172-31-15-110 CRON[15880]: pam_unix(cron:session): session closed for user ubuntu I see the line (CRON) info (No MTA installed, discarding output), but it should not make my cron job fail, right?
Having no MTA (like postfix or similar) installed will not make your cron job fail, but it prevents cron from being able to send the owner of the job any output of the job via email. If your job failed (for any reason), or produced output, you would not get notified about it and that information would be discarded. If you don't want to install an MTA, use a redirection of the job to a log file: */15 * * * * root /bin/bash /home/ubuntu/gzip/over_time_compile_ec2.sh >/tmp/job.out 2>/tmp/job.err This would put standard output messages in /tmp/job.out and any diagnostic messages that the job generates into /tmp/job.err. You may also redirect both types of messages into the same file with >/tmp/job.out 2>&1. With >, these files would be emptied and rewritten each time the job runs. With >>, the files would be appended to (but leave 2>&1 as is, if you redirect to the same file; 2>>&1 would not work).
Crontab problem in ubuntu 1804
1,607,029,596,000
According to this site I set up cron to execute a script for me, first just trying to get it to work with cat before doing the actual work I need to do (actual work will need root privileges so I did everything as root to make my life easier later): me> sudo su root> crontab -e Edited the file as follows, leaving a blank line at the end: SHELL=/bin/bash #which cat outputs /bin/cat PATH="/bin" # execute this every minute, if it works, change cat to my script 1 * * * * cat /home/me/source.txt 1> /home/me/destination.txt According to this SO question, restarted the cron service to be sure it loads changes after saving the file and exiting the editor: root> service cron restart And then waited for a few minutes. Nothing happened. Then restarted the computer. Again, nothing. Where did I go wrong?
Your crontab entry runs at the 1st minute of every hour. To run it every minute, you have to configure it like this: * * * * * cat /home/me/source.txt 1> /home/me/destination.txt
cron doesn't execute scripts after setting it up
1,607,029,596,000
I'm using HISTIGNORE to ignore the most used commands and HISTCONTROL=ignoreboth:erasedups to remove duplicates. Is there a way to periodically sort the contents of the .bash_history in alphabetical order?
To see a sorted history without changing it, do: history | sort -k2 To sort the history file, do: sort -o ~/.bash_history ~/.bash_history Then log out of bash by typing exit, and log back in. The new terminal instance will have an alphabetically sorted history. For the most cautious possible way to sort the history file, first exit all running instances of bash (for the current user anyway), then do: sort -o ~/.bash_history ~/.bash_history Note: Users generally have no bad results from editing ~/.bash_history while still logged in. But it's fairly certain that exiting all running instances of bash is as safe or safer.
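The -k2 behaviour can be checked on a small file that mimics history output (a number, then the command), sorting by the command rather than by the history number:

```shell
printf '    1  ls -l\n    2  cd /tmp\n    3  echo hi\n' > hist.txt
sort -k2 hist.txt
```

The output lists the cd line first, then echo, then ls: the sort key starts at the second field, so the leading history numbers are ignored.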
Sort .bash_history content alphabetically
1,607,029,596,000
I'd like to be able to add a log file that has a timestamp of the last time the cronjob was run. This is the current code I'm using: crontab -l > mycron echo ""${var1}" "${var2}" "${var3}" "${var4}" "${var5}" tar -czf "$fsrc"/* > ./"$fdest"/"$fname"">> ~/cronlog.log 2>&1 >>mycron crontab mycron rm mycron The log file is created and the job runs as it should, but the log file has nothing in it. How can I make the log file update? Thanks
With your echo line. [zbrady@myserver ~]$ cat test1.sh var1=0 var2=1 var3=2 var4=3 var5=4 fsrc=abc fdest=def fname=ghi >mycron echo ""${var1}" "${var2}" "${var3}" "${var4}" "${var5}" tar -czf "$fsrc"/* > ./"$fdest"/"$fname"">> ~/cronlog.log 2>&1 >>mycron cat mycron [zbrady@myserver ~]$ ./test1.sh 0 1 2 3 4 tar -czf abc/* > ./def/ghi With my slightly modified version. [zbrady@myserver ~]$ cat test2.sh var1=0 var2=1 var3=2 var4=3 var5=4 fsrc=abc fdest=def fname=ghi >mycron if ! crontab -l |grep ^HOME= then echo HOME=$HOME > mycron fi crontab -l >> mycron echo "${var1} ${var2} ${var3} ${var4} ${var5} tar -czf ./$fdest/$fname $fsrc/* >> $HOME/cronlog.log 2>&1" >> mycron cat mycron [zbrady@myserver ~]$ ./test2.sh HOME=/home/myuser 0 1 2 3 4 tar -czf ./def/ghi abc/* >> /home/myuser/cronlog.log 2>&1 Your quoting was a bit weird and was causing that echo statement to mess up. I also replaced ~ with $HOME. Make sure you have HOME=/home/myuser at the top of your crontab. One other issue I noticed was that you were trying to redirect tar to the output filename, you should specify the archive name after the -f flag and just >> your tar stdout to your log file.
Cron log file isn't updating
1,607,029,596,000
Often times when there's an email address on a website, it is not just plain text, but a link (hyperlink?). However, instead of containing an address such as https://unix.stackexchange.com it contains mailto:[email protected], which - upon a click - opens a browser "page not found". This seems very useless; however, a quick search on Unix StackExchange suggested that the phrase MAILTO: has some function/meaning on Unix/Linux systems (and maybe on others). Many questions also refer to "cron". While I understand the primary function of "cron" jobs (automatic actions being done at certain times/periods), I have trouble connecting it with the mailto: hyperlinks. So what is the function of these hyperlinks, and is it in any way connected to "cron" jobs - if so, how is it supposed to work?
They are URIs for e-mail addresses that the browser can use to invoke an e-mail application in order to send e-mail to the given address. The browser should provide a configuration option to set the application used. cron uses the unrelated MAILTO variable (i.e. MAILTO=...) in its configuration to know where to send e-mail with the output from the commands it runs, rather than sending it all to root.
What is the function of "mailto:[email protected]"
1,607,029,596,000
I have read a number of posts on SE.com regarding cronjobs, crontab, and how to reschedule cron jobs. But it seems that, in my Debian environment, a session cleanup job schedule simply called "php", that is present in /etc/cron.d, specifying root as the user, and currently running every 30 minutes, cannot be rescheduled, unless a reboot occurs. cat /etc/cron.d/php returns: 9,39 8-20 * * * root [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean 2>>/dev/null; fi Whatever hour/minute changes I do will not be taken into account, despite /etc/init.d/cron restart . It is on a production server, therefore the host can't be restarted easily, or as an experiment. Is there a command I can issue to tell crond to take into account any rescheduling?
Welcome to Debian; you no longer use cron. M. Kitt is right that Stack Exchange prefers one question per question, but in this case both questions stem from a fundamental mistake that you are making: You are erroneously assuming that you are running cron jobs. As you can see in the very cron job description that is in front of you, your cron jobs are disabling themselves because they see systemd running on your system:… && if [ ! -d /run/systemd/system ]; then … ; fi /etc/cron.d/php is not unofficial, by the way. It is installed by the Debian php-common package. Also installed from that very same package are the things that run instead of that cron job: a systemd timer unit named phpsessionclean.timer a systemd service unit named phpsessionclean.service This is a pattern that one can find in an increasing number of such Debian packages nowadays: the cron jobs are supplanted by systemd units, and self-disable when (only) systemd is running. So adjusting the scheduling in the cron table will achieve nothing (apart from changing when a shell command that does nothing is scheduled). Similarly, setting environment variables in the cron table will achieve nothing. You need to change the scheduling in the timer unit, and change the environment variables in the service unit. The details of that are beyond the scope of this answer, but very briefly you need to learn about systemd units, unit override files, and the systemctl cat, systemctl status, and systemctl edit commands. Further reading Lennart Poettering et al. "Environment". systemd.exec. systemd manual pages. Freedesktop.org. Lennart Poettering et al. "Options". systemd.timer. systemd manual pages. Freedesktop.org. How can I make a modification to a .service and keep it persistent? Where to place user created systemd unit files https://unix.stackexchange.com/a/196252/5132
cron reschedule seems not to be taken into account unless reboot occurs?
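The self-disabling test inside that cron job can be reproduced on its own; this sketch points the check at a scratch directory (a stand-in for /run/systemd/system) so both branches are visible:

```shell
# Debian's php cron job only runs its payload when this directory is
# absent, i.e. when systemd is not the running init.
marker="$(mktemp -d)/systemd/system"

check() {
    if [ ! -d "$1" ]; then
        echo "no systemd: cron job would do the work"
    else
        echo "systemd present: cron job does nothing"
    fi
}

check "$marker"      # marker absent -> cron path active
mkdir -p "$marker"
check "$marker"      # marker present -> the timer unit is in charge
```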
1,607,029,596,000
I've installed Debian stable 9.3 server for an Nginx environment, and I am considering upgrading it daily via cron:

0 0 * * * apt-get update -y && apt-get upgrade -y

My system is minimal and uncustomized: it's a VPS running a webserver with only these ports open: 22, 80, 443, 9000. My system has no third-party software (that is, software originating outside the Debian repositories), besides CSF-LFD, Maldet and WP-CLI. Is it still problematic to upgrade daily as I plan to do? Given that I host at DigitalOcean, I've asked the DigitalOcean support about this and they said it should be okay, but added something like "there has been some webservers that stopped working due to such upgrades" (it isn't clear if the aforementioned webservers and their environments were of a similar, let alone identical, stack, or even whether they ran Debian stable). Do you also see a problem in the daily upgrade approach I take with my specific minimalist stack? I assume I don't want to use unattended-upgrades, which gives only security upgrades; I want to make sure everything is upgraded. This was my main reason to move from Ubuntu 16.04 Xenial to Debian stable. I guess this question can indeed have an answer like "generally it is a wrong approach considering your current stack" or "generally it isn't wrong considering your current stack".
I think your premise is flawed — unattended-upgrades can be configured to apply any update, not just security updates. It also takes care of quite a bit more than just apt-get update && apt-get upgrade: it will avoid starting upgrades which require interaction (e.g. for configuration changes), it can be given a list of packages not to upgrade, it can handle incomplete dpkg runs, it can send email in a variety of circumstances (cron will do that too), it can reboot automatically if necessary (which is useful following a kernel upgrade), etc. See its documentation for details.
Upgrading Debian stable daily [closed]
1,607,029,596,000
To edit the root crontab in Debian I do, for example, sudo crontab -e. To exit from the preferred text editor (Nano), I press CTRL+X. So far so good, but what if I want a text to be echoed into the console (into "stdout") each time I exit crontab? The purpose is to echo a reminder message like:

If you haven't already, change p to your password in password[p] to your password!

To make sure I'm clear here --- I want that each time the user finishes editing the crontab and quits back to the console, the message will appear. Is there any way to do so in the current release of Bash?
You can assign the $EDITOR variable a script which first calls an editor and then produces the output: #! /bin/bash vim "$1" echo "foo bar baz" and use this call EDITOR=/path/to/script.sh crontab -e
Echo something to console each time you quit crontab
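The wrapper can be exercised without touching a real crontab by substituting a harmless command for the editor; here true stands in for vim so the script runs non-interactively (the file name is made up for the demo):

```shell
# Write the wrapper the answer describes: edit, then print a reminder.
cat > remind-edit.sh <<'EOF'
#!/bin/sh
# REAL_EDITOR defaults to 'true' here so the demo needs no terminal;
# set REAL_EDITOR=vim (or nano) for actual use.
"${REAL_EDITOR:-true}" "$1"
echo "If you haven't already, change p to your password!"
EOF
chmod +x remind-edit.sh

# Real invocation would be:  EDITOR=./remind-edit.sh crontab -e
# Here we just run the wrapper on a scratch file:
./remind-edit.sh /dev/null
```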
1,607,029,596,000
I'm setting up a docker container which requires a cronjob to do a backup using awscli (authenticated via environment variables). Since cron doesn't see my docker variables, I'm printing them to a file and sourcing them before I run the aws command. I have confirmed that the variables are set from cron, yet awscli doesn't see them. Here is a minimal project demonstrating the issue.

Dockerfile:

FROM debian:jessie

# Install aws and cron
RUN apt-get -yqq update
RUN apt-get install -yqq awscli cron rsyslog

# Create cron job
ADD crontab /etc/cron.d/hello-cron
RUN chmod 0644 /etc/cron.d/hello-cron

# Output environment variables to file
# Then start cron and watch log
CMD printenv > /env && cron && service rsyslog start && tail -F /var/log/*

crontab:

# Demonstrates that cron can see variables
*/2 * * * * root /usr/bin/env bash -c '. /env && echo $AWS_DEFAULT_REGION' >> /test1 2>&1
# Attempt to list s3 buckets knowing environment variables are set
*/2 * * * * root /usr/bin/env bash -c '. /env && aws s3 ls' >> /test2 2>&1

I end up getting back

Unable to locate credentials. You can configure credentials by running "aws configure".

Yet if I run the same command inside the docker container, I get back a list of buckets. This is the .env file I'm passing to docker.

.env:

## AWS SETTINGS
AWS_ACCESS_KEY_ID=(key removed)
AWS_SECRET_ACCESS_KEY=(secret removed)
AWS_DEFAULT_REGION=us-west-2

Does anyone have an idea as to why awscli can't see the environment variables, but only inside cron?
Environment variables can be set directly in the crontab(5); this will avoid the cost and complication of the additional shell execution and source steps. That is, your hello-cron file would instead contain something like AWS_ACCESS_KEY_ID=(key removed) AWS_SECRET_ACCESS_KEY=(secret removed) AWS_DEFAULT_REGION=us-west-2 */2 * * * * root echo $AWS_DEFAULT_REGION >> /test1 2>&1 */2 * * * * root aws s3 ls >> /test2 2>&1
awscli can't see environment variables but only from cron?
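Cron's stripped-down environment can be imitated with env -i, which makes the interactive-versus-cron difference visible without waiting for a cron run (the variable is the one from the question, but any name works):

```shell
# In the interactive shell the exported variable is visible:
export AWS_DEFAULT_REGION=us-west-2
echo "interactive: ${AWS_DEFAULT_REGION:-unset}"

# env -i clears the environment much as cron does; the child sees
# nothing unless the value is passed explicitly -- which is exactly
# what VAR=value lines at the top of a crontab accomplish.
env -i sh -c 'echo "cron-like:   ${AWS_DEFAULT_REGION:-unset}"'
env -i AWS_DEFAULT_REGION=us-west-2 \
    sh -c 'echo "crontab-var: ${AWS_DEFAULT_REGION:-unset}"'
```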
1,607,029,596,000
This is my code in scriptrun (the name of my shell script):

php -f a1.php; php -f b2.php; sh -e c3.txt

This is my cronjob command, created as root:

/home/telia/www/robot/scriptrun

When I run the script I get the error message

Could not open input file: a1.php
Could not open input file: b2.php

The scriptrun file already has +x and I already tried

/usr/bin/php -f a1.php; /usr/bin/php b2.php ;sh -e c3.txt

I tried giving the php files chmod 777, but that doesn't change anything. The script runs perfectly if I run it manually; it just does not work with the cron job.
As was answered in a comment, the problem appears to be that your a1.php and b2.php scripts are not in your $HOME directory, which is where cron jobs will execute. Either add a cd /to/that/path command to your scriptrun script, or change the php commands to use the full path to those scripts.
script in cronjob doesn't work with: "Could not open input file"
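Besides a hard-coded cd /to/that/path, the script can change into its own directory so relative names like a1.php resolve no matter where cron starts it. A sketch with made-up file names:

```shell
# Scratch "project": a data file plus a script that uses it by
# relative name after cd'ing to its own location.
proj="$(mktemp -d)"
echo 'hello from data file' > "$proj/data.txt"

cat > "$proj/job.sh" <<'EOF'
#!/bin/sh
# cd to the directory this script lives in, so relative paths work
# the same under cron (which starts in $HOME) as interactively.
cd "$(dirname "$0")" || exit 1
cat data.txt
EOF
chmod +x "$proj/job.sh"

# Run from an unrelated directory, as cron effectively does:
(cd / && "$proj/job.sh")    # → hello from data file
```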
1,607,029,596,000
I am getting an issue while running a cron job for exporting all databases on my GoDaddy shared hosting cPanel account, but I have trouble finding the syntax error in the command below. The same command works on my AWS EC2 instance.

mysql -N -ubackup -pt -e 'show databases' | while read dbname; do mysqldump -ubackup -p123 --complete-insert -N "$dbname" > /home/test/sqlbackups/'$(date +"%Y-%m-%d")-$dbname'.sql;done

Error received by mail:

/bin/bash: -c: line 0: unexpected EOF while looking for matching `''
/bin/bash: -c: line 1: syntax error: unexpected end of file
Crontab commands will have all unescaped occurrences of % replaced by newlines. This is from the crontab(5) manual on my system: The command field (the rest of the line) is the command to be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab. Percent signs (%) in the command, unless escaped with a backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input. Your crontab command should look like mysql -N -ubackup -pt -e 'show databases' | while read dbname; do mysqldump -ubackup -p123 --complete-insert -N "$dbname" > /home/test/sqlbackups/"$(date +"\%Y-\%m-\%d")-$dbname".sql;done Here, I've also corrected the $(...) that was previously single-quoted (and thus not expanded by the shell). In general, it's better to put all non-trivial cron jobs in their own scripts and then schedule these instead. That way you have more control over things like this and you are also able to choose the correct interpreter for any particular job (e.g. ksh or bash instead of sh). It additionally makes any subject lines in emails sent to you from the cron daemon more readable.
What is causing my "Unexpected EOF Error while looking for ..." error? [duplicate]
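The rule is easy to verify outside cron: in the shell, % needs no escaping, while the same command pasted into a crontab must write \% (or the line is cut at the first %). A sketch of the shell half, using the file-name pattern from the answer:

```shell
# In a shell or standalone script, % in date formats is literal:
stamp="$(date +"%Y-%m-%d")"
echo "would dump to: backup-$stamp.sql"

# The crontab version of the same command must escape each %:
#   0 3 * * * ... > /backups/backup-$(date +"\%Y-\%m-\%d").sql
# Sanity-check the stamp shape (YYYY-MM-DD):
case "$stamp" in
    [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]) echo "stamp ok" ;;
    *) echo "unexpected stamp: $stamp" ;;
esac
```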
1,607,029,596,000
I'm currently on Linux Ubuntu 14.04 (I think). I previously had a cron.daily script which was working fine. I decided to use the same script but move it to cron.hourly, and now it won't work.

/etc/cron.hourly/dstealth-watch-tv

#!/bin/bash
times=$(date)
echo "${times}:" >> /var/log/dstealth/watch-tv.log
/usr/bin/curl --silent http://watch.dstealth.com/tv/refreshToken.php?k=secretRefreshKey >> /var/log/dstealth/watch-tv.log

I've given the file chmod 755. The log file was created manually with chmod 644 and is empty. Then I used service cron reload and waited several hours hoping for output in my log file, but it remains empty. I've tried run-parts --verbose /etc/cron.hourly and this is the output I get:

/etc/cron.hourly/dstealth-watch-tv:
run-parts: failed to exec /etc/cron.hourly/dstealth-watch-tv: No such file or directory
run-parts: /etc/cron.hourly/dstealth-watch-tv exited with return code 1

This also did not result in anything being written to the log file.
Run this command to convert the file from Windows format to UNIX/Linux format. dos2unix /etc/cron.hourly/dstealth-watch-tv
cron.hourly "exited with return code 1" no output to log file
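"No such file or directory" for a file that plainly exists is the classic symptom of a CRLF #! line: the kernel looks for /bin/bash followed by a carriage return. When dos2unix isn't available, tr performs the same repair; a sketch with a deliberately broken script:

```shell
# Fabricate a script saved with DOS (\r\n) line endings, as a
# Windows editor would have written it.
printf '#!/bin/sh\r\necho ok\r\n' > broken.sh
chmod +x broken.sh

# Strip every carriage return -- equivalent to: dos2unix broken.sh
tr -d '\r' < broken.sh > fixed.sh
chmod +x fixed.sh

./fixed.sh    # → ok
```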
1,607,029,596,000
I have created a bash script that runs automated EBS backups on AWS. It is kicked off via a cronjob:

0 2 * * * /bin/bash /root/backup_snapshots.sh > backup.log 2>&1

This works perfectly, but the next thing I want to do is add an exit code for whether or not the script runs successfully (so I can configure a Nagios check on it). There are a few things I am going to be doing:

Create the backup.log in the /var/log/backup/ directory.
Configure logrotate to rotate it daily, to make this check easier to detect.

But one question I have is: is it possible to have cron write an exit code to the backup.log file I created? Or should I go with this type of implementation: create a crontab entry with a script that kicks off the actual script AND records an exit code, like this:

#!/bin/bash
/root/backup_snapshots.sh 2> /dev/null
if [ $? -eq 0 ]
then
  echo "PASS"
else
  echo "FAIL" >&2
fi

I want to make this as simple as possible, so if cron can do it, awesome! If not, would the next best thing be to create a bash script that executes backup_snapshots.sh and run it via cron?
#!/bin/bash
exec 1>/var/log/backup/backup_snapshots.log 2>&1
if /root/backup_snapshots.sh
then
    echo "PASS"
else
    echo "FAIL"
fi

Make your cron script as shown, which will put the cronjob run status in the backup log file. Note that there's no mention anywhere of the $? variable, as the if statement handles it on its own.
Adding an exit code to log file
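The wrapper generalizes to any command if the run is wrapped in a small function; here true and false stand in for a succeeding and a failing backup so both branches are visible:

```shell
# report CMD... : run CMD and log PASS/FAIL based on its exit status.
# In the real crontab, CMD would be /root/backup_snapshots.sh.
report() {
    if "$@"; then
        echo "PASS"
    else
        echo "FAIL" >&2
    fi
}

report true         # a successful run logs PASS
report false 2>&1   # a failed run logs FAIL (normally on stderr)
```

Testing the command directly in if, as in the accepted answer, sidesteps the intermediate $? check entirely.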
1,607,029,596,000
I'm trying to make a Bash script which checks if I've downloaded anything, and if I have, it should scan that. I'm making it launch at startup using crontab; however, that part doesn't work. This is my code:

#!/bin/bash
inotifywait ~/Downloads -m -r -e modify -e moved_to --format '%w%f' | while read file
do
  $(clamscan --bell --recursive --max-filesize=99999 --log log/myLogs.txt $file)
  CLAMSCAN="$?"
  if [ $CLAMSCAN -eq 1 ]; then
    $(xmessage -buttons Ok:0,"Remove":1,"View Logs":2 -default Ok -center "Infected file: $file is found...")
    CHOICE="$?"
    if [ $CHOICE -eq 1 ]; then
      $(rm -r $file)
    elif [ $CHOICE -eq 2 ]; then
      $(xmessage -buttons Ok:0 -default Ok -center -file log/myLogs.txt)
    fi
  fi
done

My crontab:

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
SHELL=/bin/sh
@reboot sh $HOME/.custom_security/Downloads_sec.sh

When I change directory to e.g. '/' and then run sh $HOME/.custom_security/Downloads_sec.sh, I get this error:

Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
ERROR: Can't open log/myLogs.txt in append mode (check permissions!).
ERROR: Problem with internal logger.
The script works fine as a standalone script!
The problem is with this line: clamscan --bell --recursive --max-filesize=99999 --log log/myLogs.txt $file This tries to write to log/myLogs.txt. If you are in your home directory /home/oneill, it will try to write to /home/oneill/log/myLogs.txt, which is probably the correct place. If you are in the root / directory, it will try to write to /log/myLogs.txt, which it does not have the proper permissions for. Either use absolute paths, or put a cd /home/oneill somewhere in the beginning of the script.
Bash script not working in crontab
1,607,029,596,000
From the PERIODIC(8) man page:

The periodic utility is intended to be called by cron(8) to execute shell scripts located in the specified directory. One or more of the following arguments must be specified:

daily    Perform the standard daily periodic executable run. This usually occurs early in the morning (local time).

weekly   Perform the standard weekly periodic executable run. This usually occurs very early on Saturday mornings.

Why does it say for weekly, for example, that it "usually occurs very early on Saturday mornings?" Is it suggesting that the user set the cron job for periodic weekly to be "very early on Saturday mornings?"
Your /etc/crontab file contains lines like this, by default: # Perform daily/weekly/monthly maintenance. 1 3 * * * root periodic daily 15 4 * * 6 root periodic weekly 30 5 1 * * root periodic monthly Obviously, 4:15 AM Saturdays is only one of the many possible times which could be set for your weekly maintenance. The documentation isn't "suggesting" anything to me other than the fact that most people do not alter the default settings for these cron jobs, hence the jobs "usually" run at their default times.
If "the periodic utility is intended to be called by cron", why does the man page imply that periodic has its own timing?
1,607,029,596,000
I have a cronjob I want to run every 17 minutes and it does, but it also runs on the hour. How do I keep it from running on the hour (example 13:00)? CRON: */17 * * * * php script.php CRON LOG: Aug 10 16:17:01 CROND[1925]: CMD (php script.php) Aug 10 16:34:01 CROND[1126]: CMD (php script.php) Aug 10 16:51:02 CROND[1197]: CMD (php script.php) Aug 10 17:00:01 CROND[1130]: CMD (php script.php)
You are asking it to run every multiple of 17 minutes, every hour, which is what it is doing (zero is a multiple of seventeen). If you want it to run only at :17, :34 and :51, try 17,34,51 * * * * php script.php If you want it to run every seventeen minutes, you'll need to instead use * * * * * and add the time-checking logic to your command.
Cronjob Running on the Hour (not anticipated)
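Both halves of the answer can be checked with a little arithmetic: expanding what */17 actually matches shows the uneven gap at the hour boundary, and the "time-checking logic" for a true 17-minute cadence can be sketched as minutes-since-epoch modulo 17, run from a * * * * * entry:

```shell
# */17 in the minute field fires at every multiple of 17 in 0-59:
m=0
while [ "$m" -le 59 ]; do
    printf '%s ' "$m"
    m=$((m + 17))
done
echo "... then 0 again next hour (a 9-minute gap)"

# Genuine every-17-minutes: schedule the job each minute and bail
# out unless 17 whole minutes have passed since the epoch.
minutes=$(( $(date +%s) / 60 ))
if [ $((minutes % 17)) -eq 0 ]; then
    echo "would run: php script.php"
else
    echo "not a 17-minute boundary; skipping"
fi
```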
1,607,029,596,000
I am on using CentOS 6.7 server and every few days my cron service dies. I've tried checking: cat /var/log/messages | grep cron but there was nothing relevant. How can I check what's killing the service?
So the issue turned out to be my own bug - I had made a process that starts every 15 minutes. Unfortunately, the process, when closed, would leave its child processes running, and I had collected thousands of such processes every few days. Every now and then CentOS would run out of memory and kill some process to get more memory. It seems that cron was killed for that reason.
CentOS - cron service dies every few days
1,607,029,596,000
Currently I am stuck trying to execute a shell script via crontab. It does not work and I can't figure out what is wrong here. What I want to do is execute a JavaScript file (index.js) with Node.js periodically. The file run-logger.sh is executable (-rwxr-xr-x) and located under /home/pi/apps/fritz-client.

run-logger.sh:

#!/bin/bash
# execute index.js and save all output to log
/usr/bin/env node /home/pi/apps/fritz-client/index.js >> fritz.log

If I run this command standalone

/usr/bin/env node /home/pi/apps/fritz-client/index.js >> fritz.log

everything works fine! Even if I do

cd /home/pi/apps/fritz-client && ./run-logger.sh

crontab -e:

#
# lots of comments
#
*/1 * * * * /home/pi/apps/fritz-client/run-logger.sh

crontab -l shows it too. I tried:

*/1 * * * * /home/pi/apps/fritz-client/run-logger.sh
*/1 * * * * /bin/bash /home/pi/apps/fritz-client/run-logger.sh
*/1 * * * * /bin/sh /home/pi/apps/fritz-client/run-logger.sh
*/1 * * * * bash /home/pi/apps/fritz-client/run-logger.sh

The command more /proc/version results in:

Linux version 4.1.18-v7+ (dc4@dc4-XPS13-9333) (gcc version 4.9.3 (crosstool-NG crosstool-ng-1.22.0-88-g8460611) ) #846 SMP Thu Feb 25 14:22:53 GMT 2016

Update
This is my first week using Linux and a Raspberry Pi.
So please be patient :)

syslog output

Mar 14 21:08:01 raspberrypi CRON[3609]: (pi) CMD (/home/pi/apps/fritz-client/run-logger.sh)
Mar 14 21:08:01 raspberrypi CRON[3602]: (CRON) info (No MTA installed, discarding output)
Mar 14 21:09:01 raspberrypi CRON[3626]: (pi) CMD (/home/pi/apps/fritz-client/run-logger.sh)
Mar 14 21:09:01 raspberrypi CRON[3619]: (CRON) info (No MTA installed, discarding output)
Mar 14 21:10:01 raspberrypi rsyslogd-2007: action 'action 17' suspended, next retry is Mon Mar 14 21:11:31 2016 [try http://www.rsyslog.com/e/2007 ]
Mar 14 21:10:01 raspberrypi CRON[3642]: (pi) CMD (/home/pi/apps/fritz-client/run-logger.sh)
Mar 14 21:10:01 raspberrypi CRON[3635]: (CRON) info (No MTA installed, discarding output)
Mar 14 21:10:06 raspberrypi crontab[3651]: (pi) BEGIN EDIT (pi)
Mar 14 21:10:19 raspberrypi crontab[3651]: (pi) REPLACE (pi)
Mar 14 21:10:19 raspberrypi crontab[3651]: (pi) END EDIT (pi)

ps aux | grep cron

root       382  0.0  0.2   5548  2452 ?        Ss   19:01   0:00 /usr/sbin/cron -f
pi        3683  0.0  0.2   4772  1936 pts/0    S+   21:11   0:00 grep --color=auto cron
The cronjob is running. The run-logger.sh script is being found and executed; otherwise syslog would indicate that. But the output from it is being sent to your mail inbox, and mail delivery is broken ("No MTA installed"). Fix the MTA so it sends you mail (locally perhaps) so that you can see the output. Alternatively, modify your cronjob so that you collect the output in a file:

* * * * * /home/pi/apps/fritz-client/run-logger.sh &>/var/tmp/logger.out

You don't need the /1 -- it's redundant. Modify your node-js script so that it also logs stderr:

/usr/bin/env node /home/pi/apps/fritz-client/index.js >> fritz.log 2>&1

It's possible that you cannot write to fritz.log; it defaults to the user's home directory. Also, is pi the user of the cronjob?
Execute a shell script via crontab
1,607,029,596,000
Below is part of a script which gives the proper output when run manually but gives incorrect output when run using cron:

sort < file1.out | uniq -ic |sort -nr> file2.out

When run on the command line, this gives a count where lines are grouped ignoring case, such as:

73 /universal/webselfservice/pdf/r60.pdf

When running through cron, the counts are split when case varies, for example:

47 /universal/webselfservice/pdf/r60.pdf
26 /universal/webselfservice/pdf/R60.pdf

How can I get the cron output to match the command line behavior?
The locale used under cron is different from that in your interactive environment. One has case-insensitive collation, and the other does not. This means that interactively, the first sort puts /universal/webselfservice/pdf/r60.pdf and /universal/webselfservice/pdf/R60.pdf adjacent, so uniq -i can combine them. But in the locale used by cron, they are not adjacent, and get counted separately. There are two simple ways to get what you want: specify your case-insensitive locale as an environment variable in your crontab file, or add the -f (or --ignore-case) flag to the first sort.
sort and uniq commands not running as expected when run though cron
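The effect can be reproduced directly by forcing each collation on sample data; note the extra middle entry, which in byte order separates the upper- and lower-case paths (the paths are shortened stand-ins for the ones in the question):

```shell
printf '%s\n' '/pdf/R60.pdf' '/pdf/a.pdf' '/pdf/r60.pdf' '/pdf/r60.pdf' > hits.txt

# Byte-order (C) collation sorts R60 before a.pdf before r60, so the
# case variants are not adjacent and uniq -ic counts them apart:
echo '== C collation =='
LC_ALL=C sort < hits.txt | uniq -ic

# Folding case during the sort keeps the variants adjacent, letting
# uniq -ic merge them regardless of the ambient locale:
echo '== case-folded =='
LC_ALL=C sort -f < hits.txt | uniq -ic
```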
1,607,029,596,000
I have created an /etc/crontab file on Mac OS X that runs a few simple commands. I would like to use the crontab to echo a statement in my terminal and log out of the OS (as if I were to point my mouse at the Logout command), and I am trying to test it by running an echo statement every two minutes. Here is what I have so far:

55 12 * * * echo "Don't forget to log out!! Will be automatically logging out at 12PM Sharpe!!"
* 13 * * * logout
1 * * * * echo "Test"

At that point, I used crontab /etc/crontab to ensure that my user points to the crontab file (which it does) and was able to confirm it by typing crontab -l. What will I need to do in order to make it echo in my terminal and have it actually log out of the operating system at 12 noon every day?
If the actual intent is to force a logout of the active user of the system at 1300 every day, try this in the root user's crontab: 0 13 * * * osascript -e 'tell app "System Events" to log out' If you don't wish to display a confirmation dialog to the user, you can use: 0 13 * * * osascript -e 'tell app "System Events" to «event aevtrlgo»'
Crontab on Mac OSX
1,607,029,596,000
I'm trying to run rsync-books with a crontab. Typing crontab -e outputs:

55 12 * * * diegoaguilar /storage/bin/rsync-books

Where /storage/bin/rsync-books looks like this:

if [ -d "/media/Beagle/books" ]; then
rsync -rP --delete --verbose /storage/Copy/Books/ /media/Beagle/books >> ~/rsync-books.log
fi

Just to confirm, this script has executable permissions. I tried waiting for the command to run at that time, when /media/Beagle/books existed, but it neither ran rsync nor created the log file. Is there something I'm missing?
Cron runs commands with a minimal environment, so the PATH variable is not set to what your interactive shell uses. Because of this you need to specify the full path to rsync in your script.

if [ -d "/media/Beagle/books" ]; then
/usr/bin/rsync -rP --delete --verbose /storage/Copy/Books/ /media/Beagle/books >> ~/rsync-books.log
fi

Also, if you're running crontab -e, don't include the username in the crontab entry. Your crontab should look like

55 12 * * * /storage/bin/rsync-books

EDIT: cron runs in a non-interactive shell, so the environment (and PATH) may be different from what you expect. It's always best to specify full paths in any script that will be run from cron.
Cronjob not being executed
1,607,029,596,000
I have an entry in my crontab:

45 18 * * * root /bkp_db.sh

but it is not working; it is not being executed. What am I doing wrong? Inside my script:

NOW=$(date +"%Y-%m-%d")
mysqldump -u root apps_db > bkp_apps/dump_app1_$NOW.sql
mysqldump -u root app_db2 > bkp_apps/dump_app2_$NOW.sql
zip -r bkp_apps/bkp_apps_$NOW.zip /var/www/myapps/public_html
Your personal crontab file should look like this

45 18 * * * /bkp_db.sh

There are several crontab files, each with a slightly different layout. Personal crontab files, edited via crontab -e, do not contain the username. man crontab says,

There is one file for each user's crontab under the /var/spool/cron/crontabs directory. Users are not allowed to edit the files under that directory directly to ensure that only users allowed by the system to run periodic tasks can add them, and only syntactically correct crontabs will be written there. This is enforced by having the directory writable only by the crontab group and configuring the crontab command with the setgid bit set for that specific group.

But if you read man cron you will also read,

Additionally, in Debian, cron reads the files in the /etc/cron.d directory. cron treats the files in /etc/cron.d in the same way as the /etc/crontab file (they follow the special format of that file, i.e. they include the user field). However, they are independent of /etc/crontab: they do not, for example, inherit environment variable settings from it. This change is specific to Debian; see the note under DEBIAN SPECIFIC below.
cron not executing
1,607,029,596,000
I have a cron expression like this: 0 0 12 1/1 * ? *. How do I read it, and what does it mean? I understand the parts without a slash, but not this one.
Slashes mean step values: the schedule fires at each step within the given range (the step need not divide the range evenly; the sequence simply stops at the top of the range). The first part is the range, say 0-30, and the second part is the step, for example 5. If the value was 0-30/5 in the minutes column, it would execute every five minutes within the range of 0-30 minutes.

Question marks mean whenever the first execution takes place, it'll grab the corresponding value for the element using a question mark, and will put the value at that time into it. This means, say you start the execution via cron for the first time on a Monday and the day-of-week value is a ?, it'll change it to Monday so it runs on Monday permanently.

Quick run-down of the values:

0 - first column means on the 0 minute - this is what minute to execute.
0 - second column means on the 0 hour - this is the hour of execution.
12 - this is the 12th day of the month - this is the day of the month to execute.
1/1 - this means it wants it to be executed once a month (right-hand side 1), and the range is locked down to the first month (left-hand side 1). If my understanding is correct, this is the same as having 1 alone.
* - this is the value for the day of the week - having an asterisk means it'll be repeated every day of the week.

This looks like it'll run at 00:00 on the 12th of the first month in the year, regardless of the day of the week. I'm not sure why there are seven values, as standard cron files only have five or six values as far as I'm aware (the sixth being the year, as viewable in the documentation below - but it is not included in standard/default deployments of cron). Seven-field expressions typically come from Quartz-style schedulers, where an extra seconds field precedes the minutes and an optional year field comes last.

I'd also suggest having a read through the documentation, as it's great reference material for learning how they are structured: https://en.wikipedia.org/wiki/Cron
cron job with slash
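The range-and-step rule the answer describes can be sketched as a tiny expander for a single cron field. This is an illustrative helper, not part of any real cron implementation, and it handles only the forms discussed ('*', a, a-b, a-b/s):

```shell
# expand_field FIELD MAX -- print the values a cron field matches,
# treating '*' as the full 0-MAX range. Hypothetical demo helper.
expand_field() {
    field=$1 max=$2
    range=${field%%/*}
    step=1
    case $field in */*) step=${field##*/} ;; esac
    case $range in
        '*') lo=0 hi=$max ;;
        *-*) lo=${range%%-*} hi=${range##*-} ;;
        *)   lo=$range hi=$range ;;
    esac
    v=$lo
    while [ "$v" -le "$hi" ]; do
        printf '%s ' "$v"
        v=$((v + step))
    done
    echo
}

expand_field '0-30/5' 59   # → 0 5 10 15 20 25 30
expand_field '*/17'   59   # → 0 17 34 51
```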