1,324,046,017,000 |
I have created a --user service in systemd such that a non privileged user can manage a service. This works well.
I wanted to restart the service at a fixed time of day, so I created a cron job in the user's crontab.
Strangely this does not work. The user can restart the service if they run:
systemctl --user restart myservice.service
However running this from the crontab does not restart the service. Does anyone know why?
This is running on Ubuntu 16.04.
|
systemctl --user needs to talk to the D-Bus session, which involves setting at least DBUS_SESSION_BUS_ADDRESS and perhaps XDG_RUNTIME_DIR; typically:
XDG_RUNTIME_DIR=/run/user/$(id -u)
DBUS_SESSION_BUS_ADDRESS=unix:path=${XDG_RUNTIME_DIR}/bus
export DBUS_SESSION_BUS_ADDRESS XDG_RUNTIME_DIR
systemctl --user restart myservice.service
You might want to look at systemd timers instead of cron for this.
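As a sketch of the timer route (the unit names and the 03:00 run time below are assumptions, not from the question), you could drop two files into ~/.config/systemd/user/: a oneshot service that performs the restart, and a timer that fires it daily:

```ini
# ~/.config/systemd/user/restart-myservice.service
[Unit]
Description=Restart myservice

[Service]
Type=oneshot
ExecStart=/bin/systemctl --user restart myservice.service
```

```ini
# ~/.config/systemd/user/restart-myservice.timer
[Unit]
Description=Daily restart of myservice

[Timer]
OnCalendar=*-*-* 03:00:00

[Install]
WantedBy=timers.target
```

Enable it with systemctl --user enable --now restart-myservice.timer. Because the timer runs inside your own user manager, none of the D-Bus environment juggling above is needed.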
| Using Cron to restart a systemd user service |
1,324,046,017,000 |
I'm setting up a docker container which requires a cronjob to do a backup using awscli.
I'm having a problem with the cron job being able to access the environment variables of the Docker container. As a workaround, on startup I print all environment variables to a file: printenv > /env.
When I try to use source from the cron job (I have tried both directly in crontab and in a script called by crontab) it doesn't seem to work.
I made a simplified version of my project to demonstrate the issue (including rsyslog for logging):
Dockerfile:
FROM debian:jessie
# Install aws and cron
RUN apt-get -yqq update
RUN apt-get install -yqq awscli cron rsyslog
# Create cron job
ADD crontab /etc/cron.d/hello-cron
RUN chmod 0644 /etc/cron.d/hello-cron
# Output environment variables to file
# Then start cron and watch log
CMD printenv > /env && cron && service rsyslog start && tail -F /var/log/*
crontab:
# Every 3 minutes try to source /env and run `aws s3 ls`.
*/3 * * * * root /usr/bin/env bash & source /env & aws s3 ls >> /test 2>&1
When I start the container I can see /env was created with my variables but it never gets sourced.
|
First of all, the command's (well, shell builtin's) name is source. Unless you have written a script called source and put it in /, you want source and not /source.
The next issue is that cron usually runs commands with whatever you have as /bin/sh, and source is a bashism (shared by some other, more featureful shells). The portable, POSIX-compliant command for sourcing a file is `.` (a lone dot). So, try that instead of source:
*/3 * * * * root /usr/bin/env bash & . /env & aws s3 ls >> /test 2>&1
Also, I don't quite understand what that is supposed to be doing. What's the point of starting a bash session and sending it to the background? If you want to use bash to run the subsequent commands, you'd need:
*/3 * * * * root /usr/bin/env bash -c '. /env && aws s3 ls' >> /test 2>&1
I also changed the & to && since sourcing in the background is pointless as far as I can see.
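To see concretely why & breaks the sourcing, here is a small demonstration (the /tmp/env-demo path is just for illustration): & backgrounds each command in its own process, so a variable sourced in one never reaches the next, while && runs them sequentially in the same shell.

```shell
# Create a stand-in for the /env file
printf 'export GREETING=hello\n' > /tmp/env-demo

# Chained with &&: the sourced variable is visible to the next command
sh -c '. /tmp/env-demo && echo "chained: GREETING=$GREETING"'
# -> chained: GREETING=hello
```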
| Can't use `source` from cron? |
1,324,046,017,000 |
I would like to send an email when a file reach a certain size limit.
The only way I thought of doing this is by doing a cronjob which will check the file size and send the email if the file is bigger than the desired size.
However, adding a cron job that checks the file size every 15-30 minutes seems like a poor solution to me.
I was wondering if there is a better way to automatically detect when text is appended to the file (an event?), so I could then check the size and act accordingly.
|
I can conceive of two approaches. The first uses a while loop that runs a stat-style check at some set frequency, to see whether the file's size has exceeded your limit, and sends an email if it has. This method is OK but can be a bit inefficient, since it runs the check at every interval regardless of whether anything happened to the file.
The other method uses filesystem events, which you can subscribe watchers to with the inotifywait command (from inotify-tools).
Method #1 - Every X seconds example
If you put the following into a script, say notify.bash:
#!/bin/bash
file="afile"
maxsize=100 # 100 kilobytes
while true; do
actualsize=$(du -k "$file" | cut -f1)
if [ $actualsize -ge $maxsize ]; then
echo size is over $maxsize kilobytes
.... send email ....
exit
else
echo size is under $maxsize kilobytes
fi
sleep 1800 # in seconds = 30 minutes
done
Then run it. Every 30 minutes it checks the file; if the size exceeds your threshold, it triggers an email and exits. Otherwise, it reports the current size and continues watching the file.
Method #2 - Only check on accesses example
The more efficient method is to check the file only when it is actually accessed. The types of accesses can vary; this example watches for plain file accesses, but you could watch for other events instead, such as the file being closed. Again we'll name this file notify.bash:
#!/bin/bash
file=afile
maxsize=100 # 100 kilobytes
while inotifywait -e access "$file"; do
actualsize=$(du -k "$file" | cut -f1)
if [ $actualsize -ge $maxsize ]; then
echo size is over $maxsize kilobytes
.... send email ....
exit
else
echo size is under $maxsize kilobytes
fi
done
Running this script would result in the following output:
$ ./notify.bash
Setting up watches.
Watches established.
Generating some activity on the file, the file now reports its size as follows:
$ seq 100000 > afile
$ du -k afile
576 afile
The output of our notification script:
afile ACCESS
size is over 100 kilobytes
At which point it would exit.
Sending email
To perform this activity you can simply do something like this within the script:
subject="size exceeded on file $file"
emailAddr="[email protected]"
( echo ""; echo "DATE: $(date)"; ) | mail -s "$subject" "$emailAddr"
Considerations
The second method as it is will work in most situations. One where it will not is if the file already exceeds $maxsize when the script is invoked and there are no further access events on the file. This can be remedied either with an additional check performed when the script is invoked, or by expanding the set of events that inotifywait acts on.
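That additional startup check could look like the following sketch (file name and threshold are the example's values; the email step is left as a comment). It runs once before the inotifywait loop is entered:

```shell
#!/bin/bash
file=afile
maxsize=100   # kilobytes

# Helper: succeed if the file is at least $2 kilobytes large
size_exceeded() {
  actualsize=$(du -k "$1" | cut -f1)
  [ "$actualsize" -ge "$2" ]
}

# One check at startup closes the gap for already-oversized files
if [ -e "$file" ] && size_exceeded "$file" "$maxsize"; then
  echo "size is already over $maxsize kilobytes"
  # .... send email ....
  exit
fi
# ...then fall through to the inotifywait loop shown above.
```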
References
How to execute a command whenever a file changes?
How to check size of a file?
inotify-tools
| Automatically detect when a file has reached a size limit |
1,324,046,017,000 |
I have logs in the following format: YYYYMMDD
I want to compress old logs (older than the current day) and maybe move them to a different directory afterwards.
Can I do this in logrotate, or do I have to use a custom script in cron?
|
Here's a quickie script which will do what you need:
#!/bin/bash
LOGDIR=/var/log/somedir
OLDLOGS=/var/log/keep-old-logs-here
PATH=/bin:$PATH
TODAY=$(date +'%Y%m%d')
[ -d $OLDLOGS ] || mkdir -p $OLDLOGS
cd $LOGDIR
for LOG in $(ls | egrep '^[[:digit:]]{8}$'); do
    [ $LOG -lt $TODAY ] && gzip $LOG && mv $LOG.gz $OLDLOGS
done
Make the script executable:
$ chmod +x /where/you/put/this/script
The crontab entry will look like:
30 0 * * * /where/you/put/this/script
Just adjust LOGDIR and OLDLOGS. At 12:30am it will compress and move all logs named in the YYYYMMDD format for the previous (and earlier, if any) days.
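The numeric test works because YYYYMMDD dates compare the same numerically as they do chronologically. A quick sanity check of that comparison logic (with arbitrary example dates):

```shell
TODAY=20111221   # pretend "today" for the demonstration
for LOG in 20111219 20111220 20111221; do
  # Only strictly earlier dates match, so today's log is left alone
  if [ "$LOG" -lt "$TODAY" ]; then
    echo "$LOG would be compressed and moved"
  fi
done
# prints the two earlier dates only
```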
| Can I use logrotate to compress daily (date named) logs? |
1,324,046,017,000 |
I wrote a script and set it up as a cron job.
But due to differences in environment variables it doesn't work as it should.
In such cases I tweak the script, use crontab -e to schedule it for the next closest minute, and wait for that minute to arrive to see the result. This feels like a totally absurd approach, but I don't know a better way.
If there is a way to run a script as if it were called from a cron job, I'd like to use it.
Does anyone know how to do it?
|
Here is how to do it the other way around: forcing a cron job to use your login environment:
bash -lc "your_command"
From the bash manual:
-c string If the -c option is present, then commands are read from string.
If there are arguments after the string, they are assigned to the
positional parameters, starting with $0.
-l Make bash act as if it had been invoked as a login shell
(see INVOCATION below).
INVOCATION (a bit stripped):
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
To known more:
man bash
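For the direction you actually asked about, running a script as if cron had called it, a common trick is to capture cron's environment once with a temporary job such as * * * * * env > /tmp/cron-env, and then replay it with env -. The sketch below fakes the captured file for illustration; note that the simple $(cat ...) expansion breaks if any captured value contains spaces:

```shell
# Pretend this file was captured by a temporary "env > /tmp/cron-env" cron job
printf 'HOME=/root\nPATH=/usr/bin:/bin\nSHELL=/bin/sh\n' > /tmp/cron-env

# Run a command with exactly (and only) cron's environment
env - $(cat /tmp/cron-env) sh -c 'echo "PATH is $PATH"'
# -> PATH is /usr/bin:/bin
```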
| How to run a command as if it is called from cron |
1,324,046,017,000 |
On a server I inherited, there is a cron job running hourly on one of the Debian servers. It sends an email to a non-existent address, which bounces back to my account since I listed myself as the root email in /etc/aliases. The cron job (it was ntpdate, as listed in the email) has been deleted from /etc/cron.hourly. I reloaded the cron daemon, but I am still getting hourly reports that the file failed to run, even though the email address does not exist!
The output that is getting emailed:
/etc/cron.hourly/ntpdate:
run-parts: failed to exec /etc/cron.hourly/ntpdate: Exec format error
run-parts: /etc/cron.hourly/ntpdate exited with return code 1
Currently, there is just the .placeholder hidden file in /etc/cron.hourly. I also ran crontab -l; the only 3 jobs listed are ones I expect, and they run about 10 minutes after these emails arrive, so I know it is not one of those. Where can I look next to stop getting these emails?
EDIT #1
# ls -l /var/spool/cron
total 4
drwx-wx--T 2 root crontab 4096 Jan 25 2012 crontabs
EDIT #2
# ls -l /var/spool/cron/crontabs/
total 4
-rw------- 1 root crontab 311 Jan 25 2012 root
# more /var/spool/cron/crontabs/root
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.4nUf85/crontab installed on Wed Jan 25 10:11:10 2012)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
2 1 * * * /etc/webmin/cron/tempdelete.pl
0 22 * * * /etc/init.d/gnugk stop
0 23 * * * /etc/init.d/gnugk start
This is on Debian Squeeze, using just cron, as far as I can tell
|
system crons
Did you look through these files & directories to make sure there isn't a duplicate cronjob present?
/etc/crontab
/etc/cron.d/
/etc/cron.hourly/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/
Also, any file in these directories that is executable will be run; it doesn't matter whether it's named .placeholder or anything else. You can use chmod 644 ... to remove the execute bit and disable such a script.
user crontabs
Also check the following directory to see if any users have created their own crontabs:
For example:
$ sudo ls -l /var/spool/cron/
total 0
-rw------- 1 saml root 0 Jun 6 06:43 saml
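To track down where a stray job is defined, a recursive grep over all the cron locations can also help (adjust the pattern to the command named in the mail; this is just a sketch):

```shell
# List every cron-related file that mentions the offending command
grep -rl ntpdate /etc/crontab /etc/cron* /var/spool/cron 2>/dev/null || true
```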
| Cron job still running when deleted |
1,324,046,017,000 |
I read from an instruction to schedule a script on the last day of the month:
Note:
The astute reader might be wondering just how you would be able to set a command to execute on the last day of every month because you can’t set the dayofmonth value to cover every month. This problem has plagued Linux and Unix programmers, and has spawned quite a few different solutions. A common method is to add an if-then statement that uses the date command to check if tomorrow’s date is 01:
00 12 * * * if [`date +%d -d tomorrow` = 01 ] ; then ; command1
This checks every day at 12 noon to see if it's the last day of the month, and if so, cron runs the command.
How does [`date +%d -d tomorrow` = 01 ] work?
Is it correct to state then; command1?
|
Abstract
The correct code should be:
#!/bin/sh
[ "$#" -eq 0 ] && echo "Usage: $0 command [args]" && exit 1
[ "$(date -d tomorrow +'%d')" = 01 ] || exit 0
exec "$@"
Call this script end_of_month.sh and the call in cron is simply:
00 12 28-31 * * /path/to/script/end_of_month.sh command
That would run the script end_of_month (which internally will check that the day is the last day of the month) only on the days 28, 29, 30 and 31. There is no need to check for end of month on any other day.
Old post.
That is a quote from the book "Linux Command Line and Shell Scripting Bible" by Richard Blum, Christine Bresnahan pp 442, Third Edition, John Wiley & Sons ©2015.
Yes, that is what it says, but that is wrong/incomplete:
Missing a closing fi.
Needs space between [ and the following `.
It is strongly recommended to use $(…) instead of `…`.
It is important that you use quotes around expansions like "$(…)"
There is an additional ; after then
How do I know? (well, by experience ☺ ) but you can try ShellCheck. Paste the code from the book (after the asterisks) and it will show you the errors listed above plus a "missing shebang". A script with no errors in ShellCheck is this:
#!/bin/sh
if [ "$(date +%d -d tomorrow)" = 01 ] ; then script.sh; fi
That site works because what was written is "shell code". That is a syntax that works in many shells.
Some issues that shellcheck doesn't mention are:
It assumes that the date command is the GNU version: the one with a -d option that accepts tomorrow as a value (busybox date has a -d option but doesn't understand tomorrow, and BSD date has a -d option that is unrelated to displaying a date).
It is better to set the format after all the options date -d tomorrow +'%d'.
The cron start time is always in local time; that may make a job start one hour earlier or later than an exact day count if DST (daylight saving time) got set or unset.
What we got done is a shell script which could be called with cron. We can further modify the script to accept arguments of the program or command to execute, like this (finally, the correct code):
#!/bin/sh
[ "$#" -eq 0 ] && echo "Usage: $0 command [args]" && exit 1
[ "$(date -d tomorrow +'%d')" = 01 ] || exit 0
exec "$@"
Call this script end_of_month.sh and the call in cron is simply:
00 12 28-31 * * /path/to/script/end_of_month.sh command
That would run the script end_of_month (which internally will check that the day is the last day of the month) only on the days 28, 29, 30 and 31. There is no need to check for end of month on any other day.
Make sure the correct path is included. The PATH inside cron will not (not likely) be the same as the user PATH.
Note that there is one end of month script tested (as indicated below) that could call many other utilities or scripts.
This will also avoid the additional problem that cron generates with the full command line:
Cron splits the command line on any % even if quoted either with ' or " (only a \ works here). That is a common way in which cron jobs fail.
You can test if end_of_month.sh script works correctly on some date (without waiting to the end of the month to discover it doesn't work) by testing it with faketime:
$ faketime 2018/10/31 ./end_of_month echo "Command will be executed...."
Command will be executed....
| Schedule the last day of every month |
1,431,943,225,000 |
I think about doing something like:
sudo ln -s ~/myCustomCrontab /var/spool/cron/crontabs/username
because I'd like to have all customized files in my home directory. Possible risks I can think of are
security (How should the permissions be to still be secure?)
system failure
Or is there a better way to "keep track of the file"?
|
This does not work, at least on Debian-like systems, where symlinked or hardlinked crontab files for (non-system) users are ignored entirely.
It also fails if you use crontab to change your crontab file. And if a cron version did still accept symlinked crontab files, that would create possible security holes, as the crontab file would no longer be checked for consistency.
With your symlink solution, crontab -e crashes if you (or some install script) changes the crontab file:
crontab: crontabs/username: rename: Operation not permitted
as it moves a temporary file to /var/spool/cron/crontabs/username
to replace the old file instantaneously.
crontab has a lot of additional security checks built in so that the cron system cannot be used to compromise the system. For example, it checks the content of the cron file before installing or changing it. An invalid cron file may crash the cron daemon or could (at least theoretically) be misused to gain more privileges on the system.
With your solution, there are no such checks on the crontab file anymore.
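A safer way to get what you want, the crontab kept under your home directory, is to treat the home copy as the master and install it with crontab(1), which syntax-checks the file before replacing the active one. The file name below is the example's own:

```shell
# Install the master copy; crontab validates it before activation
master="$HOME/myCustomCrontab"
if [ -f "$master" ] && command -v crontab >/dev/null; then
  crontab "$master" && echo "installed $master"
fi
```

Run this (from a script, a Makefile, or by hand) whenever you edit the master file; crontab -l can export the active copy back out.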
| Is it evil to link to a crontab? |
1,431,943,225,000 |
I want to know how cron works internally. Does the process keep checking the current time in an infinite while loop (thus continually consuming CPU cycles)? Or does some function generate an interrupt and notifies the cron daemon ?
|
I once read the vixie-cron source code and had to be hospitalized. However if you're looking for "some function generate an interrupt" at a time in the future, you should investigate the alarm(2) syscall. It asks the kernel to send you the signal SIGALRM at a scheduled time, which you can then catch. In the mean time, your process can do something else, or sleep(), like I did in the hospital.
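To sketch the "sleep, don't spin" idea in shell (an illustration, not vixie-cron's actual algorithm): the daemon computes how long until the next minute boundary and sleeps exactly that long, so it burns no CPU between wakeups.

```shell
#!/bin/sh
# One iteration of a cron-style scheduler loop
now=$(date +%s)                       # current epoch second
next=$(( (now / 60 + 1) * 60 ))       # epoch second of the next :00
wait_secs=$(( next - now ))           # always between 1 and 60
echo "would sleep $wait_secs second(s), then scan crontabs for due jobs"
```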
| How do the internals of the cron daemon work? |
1,431,943,225,000 |
My cronjob looks like this:
5 3 * * * mysqldump -u root test > /srv/backup/mysql_daily.dump
How can I make the filename unique each time the cron job writes mysql_daily.dump?
|
For bash, maybe:
... > /srv/backup/mysql_daily-$(date -u +\%Y\%m\%dT\%H\%M\%S)Z.${RANDOM}.dump
Personally, I generally only put simple commands in my crontab. I'd put this in a little script and use the script in the crontab. This would have the benefit of not requiring the % characters to be escaped (a common crontab pitfall).
Update: made the timestamp ISO 8601 per @johan's comment.
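That "little script" could look like the following sketch (path and database name taken from the question; the mysqldump call is echoed rather than run here). Inside a script, % needs no escaping:

```shell
#!/bin/sh
# Build a unique, sortable ISO 8601 timestamped filename (UTC)
stamp=$(date -u +%Y%m%dT%H%M%SZ)
dest="/srv/backup/mysql_daily-${stamp}.dump"
echo "would run: mysqldump -u root test > $dest"
```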
| How to add unique id to file name in cron job? |
1,431,943,225,000 |
I'm looking for a way to schedule when an external hard drive connected to my Linux (Debian 9) box goes to sleep (stops spinning).
To put this into context: I have a Linux box that runs as a multimedia server. If a call is made to fetch content on the external hard drive, it often takes 15-30 seconds for the drive to wake up and start spinning, which a) is frustrating and b) sometimes causes timeouts with the multimedia server. I could keep the drive awake and spinning 24/7, but this seems wasteful when I mostly use the server only when I'm at home.
Is there any software tool or command I could use to set a weekly schedule for when the hard drive is spinning - e.g.
Monday-Friday: SPINNING between 5pm and 11pm
Saturday-Sunday: SPINNING between 3pm and 11pm
OTHERWISE SPINNING on demand and sleep as per system timer
|
A cronjob would allow this:
# At 11pm every day, enable sleep after 30s
0 23 * * * /sbin/hdparm -S6 /dev/disk/by-id/...
# At 5pm on weekdays, disable sleeping
0 17 * * 1-5 /sbin/hdparm -S0 /dev/disk/by-id/...
# At 3pm on the weekend, disable sleeping
0 15 * * 0,6 /sbin/hdparm -S0 /dev/disk/by-id/...
| Is it possible to (7 day) schedule sleep time of a hard drive? |
1,431,943,225,000 |
I can run a script at boot by adding the following line to my crontab:
@reboot perl /path/script
That works fine. But the problem arises when I try to run a gui application such as gmail notify. It simply doesn't run.
How do I run a gui application on startup?
|
Cron is not the program you're after. To run GUI programs there are different approaches. Which one to choose depends on your desktop environment.
The traditional way is to hook it into your .xinitrc file before starting the window manager. A simple example .xinitrc looks as follows:
#!/bin/sh
# Play a login sound
ogg123 -q "${HOME}/Music/login.ogg" &
# Start a terminal emulator
urxvt -T Terminal &
# Start the window manager
exec awesome
Depending on the desktop environment, you can also use ~/.config/autostart/ and create a program.desktop file there. Check whether that directory already contains entries. That's the easiest way, I guess.
autostart […] defines a method for automatically starting applications during the startup of a desktop environment […]
Source: freedesktop autostart specification
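A sketch of the autostart route: create a .desktop entry for the notifier. Here "gmail-notify" is a placeholder for whatever command actually launches your notifier.

```shell
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/gmail-notify.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Gmail Notify
Exec=gmail-notify
EOF
```

Desktop environments that implement the autostart specification will launch the Exec command on login.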
| Run gui application on startup |
1,431,943,225,000 |
I recently came across an easy fix for a crontab logging issue, and I am wondering about the pros and cons of running a script with a "login shell" flag, as in:
#!/bin/bash -l
|
[The following assumes that your unspecified "logging issue" was related to missing environment setup, normally inherited from your profile.]
The -l option tells bash to read all the various "profile" scripts, from /etc and from your home directory. Bash normally only does this for interactive sessions (in which bash is run without any command line parameters).
Normal scripts have no business reading the profile; they're supposed to run in the environment they were given. That said, you might want to do this for personal scripts, maybe, if they're tightly bound to your environment and you plan to run them outside of a normal session.
A crontab is one example of running a script outside your session, so yes, go do it!
If the script is purely for the use of the crontab then adding -l to the shebang is fine. If you might use the script other ways then consider fixing the environment problem in the crontab itself:
0 * * * * bash -l hourly.sh
| What are the pro's and con's in using the "-l" in a script shebang |
1,431,943,225,000 |
I'm using Amazon Linux and trying to run a cron job from my home directory (I don't have sudo permissions on the machine). I'm setting a cron job by executing crontab -e and adding this line
30 18 * * * /home/myuser/run_my_script.sh
within it. However, I'm observing that things don't seem to be running, so I wanted to figure out why. But running this
[myuser@mymachine ~]$ tail /var/log/cron
tail: cannot open ‘/var/log/cron’ for reading: Permission denied
doesn't help. How can I figure out why things aren't running, or rather, where things are breaking down in my script?
|
Add to the end of your cron table entry: >> /home/myuser/myscript.log 2>&1
This will capture the output to a log file. By default, the output is mailed using the local mailer daemon to the user who owns the job, but I am not certain this daemon is running by default on an AWS instance. If it is, try running mail as the user owning the job; you might have some messages waiting for you with the output for which you are looking.
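Putting that together with the entry from the question, the crontab line would become:

```
30 18 * * * /home/myuser/run_my_script.sh >> /home/myuser/myscript.log 2>&1
```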
| How do I get the output of a cron script run from my home directory? |
1,431,943,225,000 |
I want to add a cron entry that does something like:
00 00 * * * * /tmp/script.sh
Is there something I can add to the end of the line, so that when it is finished, it will remove the line out of my cron?
Also, if there is a better way to do it, I would definitely want that information instead.
|
Cron is used to schedule a job to run repeatedly. What you want is at, which schedules a job to run one-time. For your example you can write:
at midnight
This will bring up an interactive prompt where you can enter /tmp/script.sh followed by Ctrl+D. Non-interactively, you can pipe the command in instead:
echo /tmp/script.sh | at midnight
| Cronjob to Run and then Terminate |
1,431,943,225,000 |
I'd like to know if L is one of the allowed special characters on Debian's cron implementation? I'm trying to set up a cron to run on the last day of every month.
From the cron entry on wikipedia:
'L' stands for "last". When used in the day-of-week field, it allows
you to specify constructs such as "the last Friday" ("5L") of a given
month. In the day-of-month field, it specifies the last day of the
month.
Note: L is a non-standard character and exists only in some cron
implementations (Quartz java scheduler)
If not, how would you go about setting a cron to run on the last day of every month? Would you recommend 3 different entries like this solution on stackoverflow?
|
Cron entries on Debian are described in the crontab man page (man 5 crontab). Debian uses Vixie's cron, and his man page says:
The crontab syntax does not make it possible to define all possible periods one could imagine. For example, it is not straightforward to define the last weekday of a month. If a task needs to be run in a specific period of time that cannot be defined in the crontab syntax, the best approach would be to have the program itself check the date and time information and continue execution only if the period matches the desired one.
If the program itself cannot do the checks then a wrapper script would be required. Useful tools that could be used for date analysis are ncal or calendar. For example, to run a program the last Saturday of every month you could use the following wrapper code:
0 4 * * Sat [ "$(date +%e)" = "`ncal | grep $(date +%a | sed -e 's/.$//') | sed -e 's/^.*\s\([0-9]\+\)\s*$/\1/'`" ] && echo "Last Saturday" && program_to_run
So working along those lines:
0 0 * * * perl -MTime::Local -e 'exit 1 if (((localtime(time()+60*60*24))[3]) < 2);' || program_to_run
| Cron allowed special character "L" for "Last day of the month" on Debian |
1,431,943,225,000 |
I have a script containing:
#!/bin/bash
printenv
When I run it from the command line:
env testscript.sh
bash testscript.sh
sh testscript.sh
every time, it outputs SHELL=/bin/bash. However, when it is run from the cron, it always outputs SHELL=/bin/sh. Why is this? How can I make cron apply the shebang?
I already checked the cron PATH; it does include /bin.
|
The shebang is working and cron has nothing to do with that. When a file is executed, if that file's content begins with #!, the kernel executes the file specified on the #! line and passes it the original file as an argument.
Your problem is that you seem to believe that SHELL in a shell script reflects the shell that is executing the script. This is not the case. In fact, in most contexts, SHELL means the user's prefered interactive shell, it is meant for applications such as terminal emulator to decide which shell to execute. In cron, SHELL is the variable that tells cron what program to use to run the crontab entries (the part of the lines after the time indications).
Shells do not set the SHELL variable unless it is not set when they start.
The fact that SHELL is /bin/sh is very probably irrelevant. Your script has a #!/bin/bash line, so it's executed by bash. If you want to convince yourself, add ps $$ in the script to make ps show information about the shell executing the script.
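A small demonstration of that point, using readlink /proc/$$/exe (a Linux-specific alternative to the ps $$ suggestion): the SHELL variable and the interpreter actually running the script are independent.

```shell
#!/bin/bash
# $SHELL (if set) reflects an inherited preference, not this interpreter
echo "SHELL variable: ${SHELL:-unset}"
# The process's real executable, resolved via /proc (Linux only)
echo "actually running: $(readlink /proc/$$/exe)"
```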
| Shebang does not set SHELL in cron |
1,431,943,225,000 |
I've created a simple script to check the value of an environment variable when run from crontab:
#!bin/bash
echo $USER > cron.txt
I save this as script.sh in my home directory. If I run it manually, cron.txt contains my username, as I would expect.
I then add a line to my crontab:
*/1 * * * * ./script.sh
I again expect cron.txt to contain my username, but now it is just empty.
Why is $USER not defined when the script is run from crontab?
|
When cron executes your job, it does so in an environment that is not the same as your current shell environment.
This means, for example, that ./script.sh may not be found.
There are two solutions:
*/1 * * * * ( cd $HOME/mydir && ./script.sh )
or
*/1 * * * * $HOME/mydir/script.sh
I.e., specify exactly where the script may be found.
The first alternative may be preferable if you don't use absolute pathnames for the output file within the script.
If you go with the second option, modify your script so that you know where the output file goes:
#!/bin/bash
echo "$USER" >"$HOME/mydir/cron.txt"
or
#!/bin/bash
( cd "$HOME/mydir" && echo "$USER" >cron.txt )
Notice that the #! line has to contain the correct absolute path to bash (yours was a relative path).
It should also be noted that cron on some Unices (e.g. Linux) does not set USER to the username of the user running the cron job. On those systems, use $LOGNAME instead, or set USER to $LOGNAME when invoking the script:
*/1 * * * * env USER=$LOGNAME $HOME/mydir/script.sh
| $USER environment variable is undefined when running script from crontab |
1,431,943,225,000 |
I have got the following logging script:
#!/bin/bash
top -b -c -n 1 >> /var/log/toplog/top.log
And the following record in my crontab:
*/1 * * * * /home/clime/scripts/toplog.sh
The problem is that lines in top.log are being cut to 80 chars, e.g.:
1512 root 20 0 80756 1436 572 S 0.0 0.1 0:05.92 /usr/libexec/postfi
This does not happen if I run the command directly from console.
I have tried to use COLUMNS variable:
*/1 * * * * COLUMNS=999 /home/clime/scripts/toplog.sh
But that leads to every line being exactly 999 characters long; unused space is padded with spaces, which is not what I want.
How to fix this strange issue? My system is centos 6.3.
|
top always displays spaces until the last screen column. You just don't realize it when it's printing to the terminal because you can't visually distinguish a line with trailing spaces from a line without trailing space. You'll notice the spaces if you copy-paste with the mouse or in screen.
If you want to get rid of the spaces, just filter them away.
COLUMNS=9999 top -b -c -n 1 | sed 's/ *$//' >>/var/log/toplog/top.log
Whatever you're running top for, there are probably far better monitoring tools available.
| output of top gets truncated to 80 columns when run by cron |
1,431,943,225,000 |
My syslog is chock-full of the following:
Oct 28 23:35:01 myhost CRON[17705]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Oct 28 23:45:01 myhost CRON[18392]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
and also some
Oct 28 23:59:01 myhost CRON[19251]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 60 2)
Now, obviously, these come from cron jobs, in /etc/cron.d/sysstat:
# Activity reports every 10 minutes everyday
5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1
# Additional run at 23:59 to rotate the statistics file
59 23 * * * root command -v debian-sa1 > /dev/null && debian-sa1 60 2
Do I need to have this run so frequently? It doesn't seem to do much when I run it manually. Can I/should I just turn off the cron job, or uninstall sysstat?
|
These commands are, indeed, part of the sysstat package. It's intended for performance monitoring; and specifically, sar is the system activity report:
a Unix System V-derived system monitor command used to report on various system loads, including CPU activity, memory/paging, interrupts, device load, network and swap space utilization. Sar uses /proc filesystem for gathering information
So, running this command does not actually do anything which helps your system's health or stability, it's just statistics-gathering.
With this in mind, you have three options:
Uninstall sysstat as @wurtel suggests. You indicate you're not even able to see the gathered statistics, so obviously you're not really using this facility. That means you probably don't need such monitoring in the first place.
Move cron output into a separate file, e.g. into /var/log/cron.log. If you're using rsyslog for logging, which you likely are given that it's the default on Devuan, all you need to do is un-comment a line intended for just this purpose in /etc/rsyslog.conf:
#cron.* /var/log/cron.log
(just remove the initial #) and remove cron from what goes into /var/log/messages, i.e. replace this:
*.=info;*.=notice;*.=warn;\
auth,authpriv.none;\
cron,daemon.none;\
mail,news.none -/var/log/messages
with this:
*.=info;*.=notice;*.=warn;\
auth,authpriv.none;\
#cron,daemon.none;\
daemon.none;\
mail,news.none -/var/log/messages
If you don't care to see cron job logging when there were no errors, @binarym suggests limiting cron's logging to warning and error messages. With rsyslog, that means replacing this:
*.=info;*.=notice;*.=warn;\
auth,authpriv.none;\
cron,daemon.none;\
mail,news.none -/var/log/messages
with this:
*.=info;*.=notice;*.=warn;\
auth,authpriv.none;\
daemon.none;\
mail,news.none -/var/log/messages
cron.=warn;cron.=err -/var/log/messages
in the default /etc/rsyslog.conf. (Although, frankly, I don't understand why .=err isn't there in the first place.)
| Can I avoid debian-sa1 lines in my syslog? |
1,431,943,225,000 |
I cannot find anywhere the log level meaning of crond.
I know that 0 is pretty much "log everything" while 8 is "show only important info" thanks to the crond help:
/ # crond --help
BusyBox v1.26.2 (2017-11-23 08:40:54 GMT) multi-call binary.
Usage: crond -fbS -l N -d N -L LOGFILE -c DIR
-f Foreground
-b Background (default)
-S Log to syslog (default)
-l N Set log level. Most verbose:0, default:8
-d N Set log level, log to stderr
-L FILE Log to FILE
-c DIR Cron dir. Default:/var/spool/cron/crontabs
but where I can find exactly the documentation/meaning about the different levels?
I'm on Alpine 3.6.
|
The particular semantics of the log level values for crond are only defined in the code, it seems. All of the crond logging there goes through a crondlog() function in busybox/miscutils/crond.c:
static void crondlog(unsigned level, const char *msg, va_list va)
{
if (level >= G.log_level) {
/* Do logging... */
So only those messages with levels at or above the one you specify via the -l command-line option are logged.
Then, elsewhere in that crond.c file, we see that crondlog() is only called via the log5(), log7(), and log8() wrapper functions. Which means that those are the only levels at which that crond program logs messages.
These log levels are specific to crond, and are not related to any syslog(3) levels or other programs. In short, the meaning of these levels is only found in the source code for this program.
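As an illustration, the gate amounts to a single level >= threshold comparison, which can be mimicked in a few lines of shell (this is a sketch of the idea, not BusyBox code):

```shell
# Mimic crond's logging gate: drop messages below the configured level
log_level=8                        # crond's default threshold (-l 8)
crondlog() {                       # usage: crondlog LEVEL MESSAGE
    [ "$1" -ge "$log_level" ] && echo "crond: $2"
}
crondlog 5 "verbose detail"        # suppressed: 5 < 8
crondlog 8 "important notice"      # printed:   8 >= 8
```

So with the default threshold of 8, only the log8() messages appear, while running with -l 0 would let everything through.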
| crond log level meaning |
1,431,943,225,000 |
I have several questions related to non-interactive, non-login shells and cron jobs.
Q1. I have read that non-interactive, non-login shells only "load" $BASH_ENV.
What does this exactly mean? Does it mean that I can point $BASH_ENV to a file, and that this file will be sourced?
Q2: Assuming that I have an entry in cron pointing to a Bash script with a Bash shebang, what environment variables and definitions can I assume my Bash script is loaded with?
Q3: If I add SHELL=/bin/bash to the top of my crontab, what exactly does this do? Does it mean that:
cron itself runs in Bash?
The commands specified in crontab are interpreted in Bash?
The scripts that do not have shebangs in them, are run under $SHELL
something else?
Q4: How can I set BASH_ENV for cron jobs?
|
For Q1 & Q2, see here. Q3 is answered in the discussion of your other three questions below.
WRT $BASH_ENV, from man bash:
When bash is started non-interactively, to run a shell script, for
example, it looks for the variable
BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as
the name of a file to read and execute.
So this could be a .profile type script. As to where or by whom it is set, this depends on context -- generally it is not set, but it could have been by any ancestor of the current process. cron does not seem to make any special use of this, and AFAIK neither does init.
Does it mean that cron itself runs in Bash?
If by "in" you mean, is started from a shell, then this depends on the init system -- e.g., SysV executes shell scripts for services, so those services are always started via a shell.
But if by "in" you mean, "Is it thus a child of a shell process?", no. Like other daemons it runs as the leader of its own process group and its parent process is the init process (pid 1).
The consequences for cron's environment are probably immaterial, however. For more about the environment of start-up services, see the answer linked above regarding Q1 & Q2. WRT to cron specifically, according to man 5 crontab it also sets some environment variables that are inherited by any process it starts ($LOGNAME, $HOME, and $SHELL).
The commands specified in crontab are interpreted in Bash?
They're interpreted with sh or $SHELL; again from man 5 crontab:
The entire command portion
of the line, up to a newline or a "%" character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the cronfile. A "%" character in the command, unless escaped with a
backslash (\), will be changed into newline characters, and all data after the first % will be sent
to the command as standard input.
The emphasized part answers Q3.
The scripts that do not have shebangs in them, are run using $SHELL
Executable scripts with or without shebangs are opened by $SHELL (or sh). The difference is, those with a shebang will then be handed to the appropriate interpreter (#!/bin/sh being another instance of the shell), whereas executable scripts (ones with the executable bit set that are referenced as executables) without a shebang will simply fail, just as they would when run from the command line (unless you run them as sh script.sh).
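As for Q4, the BASH_ENV mechanism itself is easy to verify from any shell; this sketch uses a throwaway file created by mktemp:

```shell
# A non-interactive bash expands $BASH_ENV and sources the named file
envfile=$(mktemp)
echo 'GREETING=hello' > "$envfile"
BASH_ENV="$envfile" bash -c 'echo "$GREETING"'    # prints: hello
rm -f "$envfile"
```

In a crontab, the same effect should follow from putting SHELL=/bin/bash and BASH_ENV=/path/to/your/env/file (a placeholder path) at the top, since cron exports those crontab variable assignments to the shell it spawns.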
| BASH_ENV and cron jobs |
1,431,943,225,000 |
This command, when run alone, produces the expected result (the last line of the crontab):
tail -n 1 /etc/crontab
However, when I run it as part of an echo command to send the result to a file, it adds a summary of all the files in the working directory, plus the expected result:
sudo bash -c 'echo $(tail -n 1 /etc/crontab) > /path/to/file'
Why did this command produce the extra data?
|
Your crontab line has one or more asterisks * in it, indicating "any time". When that line is substituted in from the command substitution, the result is something like
echo * * * * * cmd > /path/to/file
While most further expansions are not applied to the output of command substitution, pathname expansion is (as is field splitting):
The results of command substitution shall not be processed for further tilde expansion, parameter expansion, command substitution, or arithmetic expansion. If a command substitution occurs inside double-quotes, field splitting and pathname expansion shall not be performed on the results of the substitution.
Pathname expansion is what turns *.txt into a list of matching filenames (globbing), where * matches everything. The end result is that you get every (non-hidden) filename in the working directory listed for every * in your crontab line.
You could fix this by quoting the expansion, if the code you posted is representative of a more complex command:
sudo bash -c 'echo "$(tail -n 1 /etc/crontab)" > /path/to/file'
but more straightforwardly just lose the echo entirely:
sudo bash -c 'tail -n 1 /etc/crontab > /path/to/file'
This should do what you want and it's simpler as well (the only other material difference is that this version will omit field splitting that would otherwise have occurred, so runs of spaces won't be collapsed).
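The expansion behaviour is easy to reproduce in a scratch directory (all paths below are throwaway):

```shell
# Unquoted expansion undergoes field splitting and then globbing
dir=$(mktemp -d)
cd "$dir"
touch a.txt b.txt
line='* * * * * cmd'
echo $line       # each * globs to "a.txt b.txt", five times over, then "cmd"
echo "$line"     # quoted: printed literally as "* * * * * cmd"
```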
| Echoing a tail command produces unexpected output? |
1,431,943,225,000 |
How to execute a shell script via a cronjob every 45 days?
|
If you don't need exactly 45 days, but "one and a half months" will do, then a straightforward method would be to run at the beginning of the month every three months, and at the middle of the next month after each of those:
0 12 1 1,4,7,10 * /path/to/script
0 12 16 2,5,8,11 * /path/to/script
For general arbitrary intervals, the other answers are obviously better, but 45 days sounds like it's based on the length of a month anyway. Human users might also be more used to something happening in the beginning or the middle of a month, instead of seeing the exact date drift a day or two each time.
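For the record, the longest gap this schedule produces is Jan 1 noon to Feb 16 noon; a quick check of that spacing (assuming GNU date, which supports -d):

```shell
# Days between two consecutive runs of the schedule above (GNU date assumed)
d1=$(date -d 2024-01-01 +%s)
d2=$(date -d 2024-02-16 +%s)
echo $(( (d2 - d1) / 86400 ))    # -> 46, close enough to the requested 45 days
```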
| How to schedule cronjob for every 45 days? |
1,431,943,225,000 |
I have a cron setup to execute a bash script daily at 10pm. I have another cron setup to run monthly on the 1st of the month. Both crons launch a bash script, and the only difference between the scripts is the argument they pass into the underlying java program (emulating a command-line launch of the java program).
The problem is, I need to somehow disable the daily cron on the 1st of the month so that both don't try to run on the same day. Is this possible to do automatically?
I suppose I can create another bash script to edit the cron before the 1st then again after to set things back up, but this seems... unclean.
|
In a similar vein to the solution proposed by @StephaneChazelas in the comments you could specify the range of days in the 3rd field as a range for the cron that you want to run on every day besides the 1st of the month.
The following two entries would accomplish what you're after:
0 22 1 * * /path/to/script/1st_of_the_month.bash
0 22 2-31 * * /path/to/script/every_day_except_1st.bash
| Cron to not run on specific day but all other days |
1,431,943,225,000 |
I would like to use git to track changes in crontab.
I have initialized a new git repository in /var/spool/cron/crontabs/
Now the problem is, when crontab is saved, the second line of the header changes because it contains timestamp.
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.ubNueW/crontab installed on Thu Aug 1 06:29:24 2019)
What would be the easiest way to ignore these irrelevant changes ?
The possible duplicate question does not address the main point of my question: How to ignore the first 2 irrelevant lines from crontab. Instead, it addresses some other questions which I have not asked, such as some hooks.
|
You could use a filter:
git config filter.dropSecondLine.clean "sed '2d'"
Edit/create .git/info/attributes and add:
* filter=dropSecondLine
If you don't want the filter acting on all the files in the repo, modify the * to match an appropriate pattern or filename.
The effect will be that the working directory will remain the same, but the repo blobs will not have the second line in the files. So if you pull it down elsewhere the second line would not appear (the result of the sed '2d'). And if you change the second line of the tracked file you will be able to add it, but not commit it, as the change to the blob happens on add, at which point it will be the same file as the one in the repo.
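Since the clean filter is nothing more than sed '2d', you can sanity-check it outside git with crontab-shaped input:

```shell
# The filter's clean step: the repo copy is the file minus line 2 (the timestamp)
printf '%s\n' \
  '# DO NOT EDIT THIS FILE - edit the master and reinstall.' \
  '# (/tmp/crontab.XXXXXX/crontab installed on Thu Aug  1 06:29:24 2019)' \
  '0 5 * * * /usr/local/bin/job' |
  sed '2d'
# -> line 1 and the job line survive; the timestamp line is gone
```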
| tracking crontab changes with git |
1,431,943,225,000 |
Ok, so I've just had a read through this page, looking for a way to improve my current backup solution on my Debian server. Tar seems to offer a quite nice multi-volume feature, although when I try it out, it asks me to Prepare volume #X for ‘mybackup.tar.gz’ and hit return.
How should I automate this, as I would like to make use of this feature in an automated cron script where no one is there to push return and enter whatever is required by the multi-volume prompt?
Is using split the only way?
|
Here is a solution:
printf 'n file-%02d.tar\n' {2..100} |
tar -ML 716800 -cf file-01.tar Documents/ 2>/dev/null
where 100 is a number greater or equal to the number of volumes.
Edit
Setting a big number should not be a problem, though I tend not to pick a ridiculously large one.
An alternative could be a "next volume" script, that you can set with the -F option,
tar -ML 716800 -F './myscript file' -cf file.tar Documents/ 2>/dev/null
then in ./myscript put
#!/bin/bash
prefix="$1"
n=1
while [[ -e "$prefix-$n.tar" ]]; do
((n++))
done
mv "$prefix.tar" "$prefix-$n.tar"
echo "$prefix-$n.tar"
It will be executed at each volume end, and will move file.tar to the appropriate file-N.tar. For the last volume the script will not be executed, so the last volume's name stays file.tar.
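The renaming logic — find the first free volume number — can be dry-run on its own (scratch directory, POSIX test instead of [[ ]]):

```shell
# Dry run of the volume-numbering loop used by the "next volume" script
dir=$(mktemp -d)
cd "$dir"
touch file-1.tar file-2.tar      # pretend two volumes already exist
prefix=file
n=1
while [ -e "$prefix-$n.tar" ]; do
    n=$((n+1))
done
echo "$n"                        # -> 3, so the next volume becomes file-3.tar
```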
Edit 2
I ended up with the following elaborated solution.
Here are two script, one for the creation and the other for the extraction:
#!/bin/bash
# CREATION SCRIPT
# save on file the initial volume number
echo 1 >number
# multi-volume archive creation
tar -ML 100000 -F './tar-multi-volume-script c file' -cf file.tar Documents2/ 2>&-
# execute the "change-volume" script a last time
./tar-multi-volume-script c file
and
#!/bin/bash
# EXTRACTION SCRIPT
# save on file the initial volume number
echo 1 >number
# execute the "change-volume" script a first time
./tar-multi-volume-script x file
# multi-volume archive extraction
tar -M -F './tar-multi-volume-script x file' -xf file.tar 2>&-
# remove a spurious file
rm file.tar
where ./tar-multi-volume-script is given by
#!/bin/bash
# TAR INVOKED SCRIPT
mode="$1"
prefix="$2"
n=$(<number)
case $mode in
c) mv "$prefix.tar" "$prefix-$n.tar" ;;
x) cp "$prefix-$n.tar" "$prefix.tar" ;;
esac
echo $((n+1)) >number
Obviously you have to change many bits here and there to adapt it to your situation, and to make sure it works under cron, which is always a little challenge.
| Can I automate tar's multi-volume-feature? |
1,431,943,225,000 |
I am trying to run a bash script I have via cron, and I am getting the following error at the beginning of the execution:
tput: No value for $TERM and no -T specified
Here is what is in my crontab:
0 8 * * 1-5 cd /var/www/inv/ && /var/www/inv/unitTest run all 2>&1| mail -r "[email protected]" -s "Daily Inventory Unit Test Results" [email protected]
|
Your unit test script probably calls tput in order to generate pretty output showing which tests pass and fail. Under cron there is no terminal and thus no terminal type ($TERM), so tput cannot control the nonexistent terminal.
Your unit test script needs to have 2 modes:
running on a terminal: it can call tput to generate pretty-looking output
not running on a terminal: it should not call tput and instead generate a generic text-only output format that is suitable for piping into an email as you are doing here.
The easiest way for the unit tests to know whether or not they are running on a terminal is to test whether the stdio file descriptors refer to a terminal. If it's a shell script, then:
if [ -t 1 ]; then
tput bold; echo pretty; tput sgr0
else
echo ugly
fi
Basically: do not call tput unless you are running on a terminal, and you will thus avoid the error you are getting, plus produce reasonable output in whichever mode you happen to be running under.
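You do not need cron to see [ -t 1 ] flip; redirecting stdout away from the terminal is enough:

```shell
# stdout here is a pipe, not a terminal, so the -t test is false
if [ -t 1 ]; then echo "on a terminal"; else echo "not a terminal"; fi | cat
# -> not a terminal
```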
| tput: No value for $TERM and no -T specified |
1,431,943,225,000 |
I have been reading on
OnCalendar=
Sadly, I found no info on how to schedule an event on a given day other than by defining a day of the week, which would make it run, at the rarest, once every week.
I need it to run every, say, 14 days. And at a specified hour (say, 4am). Is this possible with systemd?
|
I need it to run every, say, 14 days. And at a specified hour (say, 4am). Is this possible with systemd?
The easiest way to get approximately every 14 days is to make it twice a month.
[Install]
WantedBy=default.target
[Unit]
Description=Every fortnight.
[Timer]
OnCalendar=*-*-1,15 4:00:00
Unit=whatever.service
That syntax is explained in man systemd.timer; *-*-1,15 is the 1st and the 15th of every month of every year.
If you wanted to try for exactly every fourteen days from when the service started:
[Timer]
OnActiveSec=14d
But there's a catch here: I think you'd have to have the system up the whole time. There is a Persistent option to have "the time when the service unit was last triggered...stored on disk" but according to the man page "this setting only has an effect on timers configured with OnCalendar".
| systemd timer every X days at 04:00 |
1,431,943,225,000 |
I created a cronjob; it runs for a very long time, but now I don't know how to stop it.
|
You should stop the process that the crontab started running:
# kill -HUP PID    (PID: the process ID of the running process)
To find the PID, use the top command, which lists the running processes (and more info); you can change the sort column with the keys < and >.
Also try ps -ax | grep [your_process_file], which lists the running processes filtered by the name you choose.
-HUP = Hang UP
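A safe way to rehearse this is with a throwaway process; here sleep stands in for the long-running cron job:

```shell
# Start a stand-in job, signal it, and confirm it is gone
sleep 300 &
pid=$!
kill -0 "$pid" && echo "running as PID $pid"
kill -HUP "$pid"                  # SIGHUP terminates sleep (its default action)
wait "$pid" 2>/dev/null || true
kill -0 "$pid" 2>/dev/null || echo "stopped"
```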
| List currently running cron tab and stop it |
1,431,943,225,000 |
It seems Linux Mint 19.3 Tricia Cinnamon wants to clear out PHP session files every half-hour.
How do I:
Remove this task from the scheduler's awareness, and
Not have to reboot the computer to do so.
I found the crontab file at /etc/cron.d/php.
I edited the file by commenting out the relevant line. I expected that, now that there is no information in this crontab file establishing when the task should be triggered, not even the scheduler(?) would keep track of it.
# 09,39 * * * * root [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi
The cron process(?) noticed the new file timestamp and reloaded the file (as seen in the syslog).
But the scheduler is still logging in syslog.
Mar 6 01:09:07 BrownBunny systemd[1]: Starting Clean php session files...
Mar 6 01:09:08 BrownBunny systemd[1]: Started Clean php session files.
(and I do not know where to look for the source of those phrases).
I tried the command:
sudo service cron reload
The PHP session files were still cleaned up.
I can move the php crontab file out of cron.d. Considering the above, would this even work?
Notes: I am cross-posting this question from LinuxMint Forums:
Note: This question is copied over from Ask Ubuntu because they consider it not really applicable to Ubuntu.
|
The phpsessionclean scripts as delivered by upstream Debian will either use cron if there is no systemd present, or will use systemd for scheduling if it is available.
This is apparent in the cronjob test if [ ! -d /run/systemd/system ] which checks if systemd has been initialized on this system.
When systemd is available, phpsessionclean has a service unit (phpsessionclean.service) in addition to a timer unit (phpsessionclean.timer). Stopping and disabling both of these, should stop the scheduled task from running within systemd:
systemctl stop phpsessionclean.service
systemctl disable phpsessionclean.service
systemctl stop phpsessionclean.timer
systemctl disable phpsessionclean.timer
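The same check the packaged cron job uses can be run by hand to see which mechanism is in charge on a given machine:

```shell
# The cron job only acts when systemd is NOT managing the system
if [ -d /run/systemd/system ]; then
    echo "systemd is up: phpsessionclean.timer handles the scheduling"
else
    echo "no systemd: the /etc/cron.d job handles the scheduling"
fi
```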
| How to remove cron task that deletes PHP sessions |
1,431,943,225,000 |
I read about the following 2 ways to run multiple commands in a single cron job:
We can run several commands in the same cron job by separating them with a semi-colon (;).
* * * * * /path/to/command-1; /path/to/command-2
If the running commands depend on each other, we can use double ampersand (&&) between them. As a result, the second command will not be executed if the first one fails.
* * * * * /path/to/command-1 && /path/to/command-2
My requirements are:
the commands must be executed sequentially (wait for the current one to complete before executing the next one)
the commands must be executed in the given order
but every command should be executed, even if the previous one failed
What the link above therefore doesn't say is:
Does the semicolon ; approach still guarantees that the commands will be executed sequentially, and in the given order?
|
Yes, using ; between the commands will guarantee that all commands are run sequentially, one after the other. The execution of one command would not depend on the exit status of the previous one.
As pointed out by Paul_Pedant in comments, doing anything more complicated than launching a single command from crontab may be better done by collecting the job in a separate script and instead schedule that script. This way, you can test and debug your script independently from cron, although since cron gives you a slightly different environment than your ordinary login shell environment, there are still environmental factors (like what the current working directory is and what the values of $PATH and other variables etc. are) to keep in mind.
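The behavioural difference between the two separators is quick to verify in a shell:

```shell
# `;` ignores the previous exit status; `&&` short-circuits on failure
false; echo "runs regardless"        # -> runs regardless
false && echo "never reached"        # prints nothing
true && echo "runs on success"       # -> runs on success
```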
| Multiple, sequential commands in cron [duplicate] |
1,431,943,225,000 |
Yeah, I know it's classic. I've googled it all around, but still it doesn't work. I have the following script:
#First go to SVN repo folder
cd $svnrepos
# Just make sure we have write access to backup-folder
if [ -d "$bakdest" ] && [ -w "$bakdest" ] ; then
# Now $repo has folder names = project names
for repo in *; do
# do svn dump for each project
echo "Taking backup/svndump for: $repo"
echo "Executing : svnadmin dump $repo > $bakdest/$repo-$bakdate.svn.dump \n"
# Now finally execute the backup
svnadmin dump $repo > $bakdest/$repo-$bakdate.svn.dump
# You can go an extra mile by applying tar-gz compression to svn-dumps
# We also would like to save the dump to remote place/usb
# USB/other directory exists, copy the dump there
echo "Going to copy $repo dump to $baktousb directory...\n"
/usr/bin/scp -v $bakdest/$repo-$bakdate.svn.dump $baktousb/$repo-$bakdate.svn.dump
done
else
echo "Unable to continue SVN backup process."
echo "$bakdest is *NOT* a directory or you do not have write permission."
fi
# End of backup script
echo "\n\n================================="
echo " - Backup Complete, THANK YOU :-]"
everything works fine in a shell, but when it's executed as a cron job it simply doesn't scp (though it does create a backup). Yes, I have an empty passphrase. Can't get why it doesn't work.
|
The issue is that you are probably running ssh-agent in your interactive environment but not in cron, and that your ssh key filename is different from the default filenames.
To solve this you can either explicitly specify the ssh key in your scp commandline, i.e. scp -i $SSH_KEY_FILENAME or specify an appropriate ~/.ssh/config entry for your host, i.e:
Host backuphost
IdentityFile SSH_KEY_FILENAME
To test your script you can try to run it via env -i /path/to/your/script which should reset your environment and mimic the cron environment.
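To see one reason the interactive run differs from cron: env -i starts from an empty environment, a rough stand-in for cron's, and the agent variable disappears with it:

```shell
# Under `env -i` the ssh-agent variable is gone, just as it is under cron
env -i sh -c 'echo "SSH_AUTH_SOCK=[$SSH_AUTH_SOCK]"'
# -> SSH_AUTH_SOCK=[]
```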
| Scp doesn't work in cron |
1,431,943,225,000 |
I had a problem that gmail was blocking emails sent using mailx. I solved this by setting up an appropriate ~/.mailrc, which looks like:
set smtp-use-starttls
set nss-config-dir=/home/theuser/.certs
set ssl-verify=ignore
set smtp=smtp://smtp.gmail.com:587
set smtp-auth=login
set smtp-auth-user=xxx
set smtp-auth-password=yyy
set from="[email protected](Rabbit Server)"
So now when I run:
echo "hi" | mailx [email protected]
my emails send successfully as user and as root.
Now I want cron to work as well. I changed "/etc/sysconfig/crond" to force it to use mailx, with:
CRONDARGS="-m /usr/bin/mailx"
I have ~/.mailrc setup at:
/root/.mailrc
/home/theuser/.mailrc
/etc/.mailrc
But no matter what I do, echo output is not emailed successfully.
The crontab looks like (and I've checked, the scripts are running and doing their job, and echoing, just cron isn't sending emails):
MAILTO="[email protected]"
# Every minute check processes are running, restart if necessary and send an email.
* * * * * source /home/theuser/.bashrc; global audit_regular
# Every day, send an email describing the state of the host and its jobs.
0 5 * * * source /home/theuser/.bashrc; global audit_daily
# Every Monday at 7am, archive the logs.
0 7 * * 1 source /home/theuser/.bashrc; global archive_logs
Also, this crontab is setup on another host and sending emails fine.
|
mailx only sends mail if you pass it the destination address on the command line. When you run it with no arguments, it reads interactive commands from its standard input. Beware that your tests fed it garbage which has been interpreted as commands; some of these commands may have corrupted your mailboxes, sent out emails, etc.
Tell crond to run mailx -t, which expects a full email with headers on standard input.
From a cursory examination, it doesn't look like you can pass a command with parameters via the crond startup script. So write a shell wrapper /usr/local/sbin/mailx-t
#!/bin/sh
exec mailx -t
and put CRONDARGS="-m /usr/local/sbin/mailx-t" in /etc/sysconfig/crond.
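For reference, mailx -t expects a complete message — headers, a blank line, then the body — on stdin; the shape is easy to generate (the address below is a placeholder, and nothing is actually sent here):

```shell
# The kind of input the mailx -t wrapper will receive from crond
printf '%s\n' \
  'To: admin@example.com' \
  'Subject: output from cron job' \
  '' \
  'job finished OK'
```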
| Calling mailx from crond |
1,431,943,225,000 |
Is it possible to check for a file existence in a crontab oneliner, and only execute a script if that file existed?
Pseudocode:
* * * * * <if /tmp/signal.txt exists> run /opt/myscript.sh
|
You can't technically avoid running a cron job depending on the existence of a file, but you can let the job test whether the file exists and then take further actions based on the outcome of that test.
Use an ordinary test for existence, then run the script if the test succeeds.
* * * * * if [ -e /tmp/signal.txt ]; then /opt/myscript.sh; fi
or
* * * * * if test -e /tmp/signal.txt; then /opt/myscript.sh; fi
Or, using the short-circuit syntax. Doing it this way would cause the job to fail if the file does not exist (which may trigger an email from the cron daemon):
* * * * * [ -e /tmp/signal.txt ] && /opt/myscript.sh
or
* * * * * test -e /tmp/signal.txt && /opt/myscript.sh
You could use the -f test instead of the -e test if you want to additionally ensure that /tmp/signal.txt is a regular file and not a directory, named pipe, or some other type of file.
To keep the cron schedule clean, it may be worth moving the test into the script itself, which would allow you to have a schedule like this:
* * * * * /opt/myscript.sh
... while the start of the script would include the test and exit if the test fails:
if [ ! -e /tmp/signal.txt ]; then
exit
fi
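The difference between -e and -f mentioned above, demonstrated on throwaway paths:

```shell
# -e: the path exists (any type); -f: exists AND is a regular file
dir=$(mktemp -d)
mkdir "$dir/subdir"
touch "$dir/regular"
[ -e "$dir/subdir" ]  && echo "subdir passes -e"
[ -f "$dir/subdir" ]  || echo "subdir fails -f (it is a directory)"
[ -f "$dir/regular" ] && echo "regular passes -f"
```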
| How to run a crontab job only if a file exists? |
1,431,943,225,000 |
I would like to back up a folder using cron on a CentOS machine. The folder c2duo_mms is located in /usr/local/src/djcode/c2duo_mms. I would like it backed up at 1:00pm every Tuesday to my home folder /home/sh.
|
A good thing to do would be to create a new compressed archive in your home.
Create this script, named for example /home/sh/c2duo_mms_backup.sh:
#!/bin/bash
cd /usr/local/src/djcode/
tar zcf /home/sh/c2duo_mms-`date +%Y%m%d`.tar.gz c2duo_mms
Be sure to add the executable permission to the script:
chmod +x /home/sh/c2duo_mms_backup.sh
Then add the relevant crontab entry with the crontab -e command:
0 13 * * 2 /home/sh/c2duo_mms_backup.sh
The script will create a new compressed archive every Tuesday with the date in the filename, so that you can keep older backups if you want. File name will look like this:
c2duo_mms-20110719.tar.gz
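You can rehearse the whole backup step in a scratch area first (all paths below are throwaway; same tar invocation as the script):

```shell
# Dry run of the backup: scratch source and destination directories
src=$(mktemp -d); dest=$(mktemp -d)
mkdir "$src/c2duo_mms"
echo data > "$src/c2duo_mms/file"
( cd "$src" && tar zcf "$dest/c2duo_mms-$(date +%Y%m%d).tar.gz" c2duo_mms )
ls "$dest"                           # -> c2duo_mms-YYYYMMDD.tar.gz
tar tzf "$dest"/c2duo_mms-*.tar.gz   # lists c2duo_mms/ and c2duo_mms/file
```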
| linux cron: want to backup a folder |
1,431,943,225,000 |
I am on Arch Linux and I'm trying to make a cron job that fires every minute. So I use:
$ crontab -e
And add the script in:
* * * * * Rscript /srv/shiny-system/cron/CPU.R
~
~
"/tmp/crontab.8VZ7vq" 1 line, 47 characters
(I have no idea what that "/tmp/crontab.8VZ7vq" is!)
But it is not working - CPU.R is not running every minute. What should I do then in Arch Linux to run the cron job? I have looked into these wiki guides below but I am still lost:
https://wiki.archlinux.org/index.php/Cron
https://wiki.archlinux.org/index.php/Systemd/Timers
Edit
I found some hints from here regarding crond.
[xxx@localhost ~]$ systemctl status crond
● crond.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
[xxx@localhost ~]$ sudo systemctl start crond
[sudo] password for xxx:
Failed to start crond.service: Unit crond.service failed to load: No such file or directory.
What does this mean? Where should I put this crond.service and what script should I put in it?
|
There is no crond.service on Arch Linux. As the Arch Wiki makes perfectly clear:
There are many cron implementations, but none of them are installed by
default as the base system uses systemd/Timers instead.
Consequently, if you want to use cron, you have to choose which of the many implementations you will install, and then start that specific service.
You don't just randomly type systemctl enable nonexistent.service and then wonder why it isn't running...
If you want cronie, then you install cronie and start it with:
pacman -Syu cronie
systemctl enable --now cronie.service
The Arch documentation is generally very clear; if you read the pages you linked to more carefully, you should find out what you need.
| Arch Linux - How to run a cron job? |
1,431,943,225,000 |
For testing purposes ,I would like to run a command check every 15 minutes past the hour.
I'm a little bit confused about the correct timeframe syntax of the crontab :
Is this correct :
*/15 * * * *
or
this one :
15 * * * *
I think it's the first one, since the second one will only run once per hour, 15 minutes past the hour.
Any ideas?
|
The first one will (with most common cron implementations) run the command every 15 minutes and is equivalent to 0,15,30,45 * * * *.
The second one will run 15 minutes past the hour, every hour.
This is described in the crontab(5) manual on your system (man 5 crontab).
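Listing the minute values makes the equivalence concrete:

```shell
# Minutes matched by */15 in the minute field: every 15th value in 0..59
seq 0 15 59
# -> 0 15 30 45
```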
| Crontab past the hour |
1,431,943,225,000 |
I installed the python-certbot-apache package per the instructions on certbot.eff.org but can't find any entry for the cron job it's supposed to set up.
The Certbot packages on your system come with a cron job that will renew your certificates automatically before they expire. Since Let's Encrypt certificates last for 90 days, it's highly advisable to take advantage of this feature.
From: https://certbot.eff.org/#debianjessie-apache
Where do I find this cron job? I've tried 'crontab -l', with and without sudo both, with no luck.
I understand how to run the cron job to renew the cert; my question is: where is the cron job that this package installed? Did it install?
|
In any Debian derivate, to list the files installed for a package you usually do dpkg -L.
So in your case:
dpkg -L python-certbot-apache
This gives you the list of all files installed, and where.
You can also request the list of files from packages.debian.org
From https://packages.debian.org/stretch/all/python-certbot-apache/filelist
/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/PKG-INFO
/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/dependency_links.txt
/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/entry_points.txt
/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/requires.txt
/usr/lib/python2.7/dist-packages/certbot_apache-0.10.2.egg-info/top_level.txt
/usr/lib/python2.7/dist-packages/certbot_apache/__init__.py
/usr/lib/python2.7/dist-packages/certbot_apache/augeas_configurator.py
/usr/lib/python2.7/dist-packages/certbot_apache/augeas_lens/httpd.aug
/usr/lib/python2.7/dist-packages/certbot_apache/centos-options-ssl-apache.conf
/usr/lib/python2.7/dist-packages/certbot_apache/configurator.py
/usr/lib/python2.7/dist-packages/certbot_apache/constants.py
/usr/lib/python2.7/dist-packages/certbot_apache/display_ops.py
/usr/lib/python2.7/dist-packages/certbot_apache/obj.py
/usr/lib/python2.7/dist-packages/certbot_apache/options-ssl-apache.conf
/usr/lib/python2.7/dist-packages/certbot_apache/parser.py
/usr/lib/python2.7/dist-packages/certbot_apache/tls_sni_01.py
/usr/share/doc/python-certbot-apache/changelog.Debian.gz
/usr/share/doc/python-certbot-apache/copyright
It appears there is no cron job automatically added by this package.
You also need to install the package certbot
sudo apt-get install certbot
List of files:
/etc/cron.d/certbot
/lib/systemd/system/certbot.service
/lib/systemd/system/certbot.timer
/usr/bin/certbot
/usr/bin/letsencrypt
/usr/share/doc/certbot/README.rst.gz
/usr/share/doc/certbot/changelog.Debian.gz
/usr/share/doc/certbot/changelog.gz
/usr/share/doc/certbot/copyright
/usr/share/man/man1/certbot.1.gz
/usr/share/man/man1/letsencrypt.1.gz
So from this last package, the cron job installed is actually /etc/cron.d/certbot, and for systemd you have /lib/systemd/system/certbot.service + /lib/systemd/system/certbot.timer.
| Debian and Certbot: where does the package install the cron job? |
1,431,943,225,000 |
I want to set up yum to auto-update the system using yum-cron. But my internet connection is very limited and I don't want the update process to hog the tiny bit of internet that is available and make the computer usage for everybody on the network miserable.
How can I set up yum to check and download updates automatically, but only between 2am and 6am?
|
Well, if it were me, I would set up a cron job (for root) that starts at 2am every day. Like so:
0 2 * * * /bin/yum -y update
It's about as KISS as it can get!
| How to schedule yum auto update to run only during the night? |
1,450,463,937,000 |
What is needed to give cron commands access to the session bus (if it is running)?
It used to work for me on Debian Stretch (testing) since switching to systemd, until relatively recently (might have been a month or two ago). The strange thing is that while I strongly suspect this is controlled by the PAM configuration, the only change that happened to /etc/pam.d recently enough was adding some calls to pam_selinux to pam.d/systemd-user.
So what should I look for?
|
This is likely due to the fact that the DBUS_SESSION_BUS_ADDRESS environment variable isn't propagated to the cron environment.
At least under Gnome, the bus isn't made "discoverable" (as documented in the "AUTOMATIC LAUNCHING" section of the dbus-launch(1) man page) via files in $HOME/.dbus/session-bus. This leaves anything run in your crontab without a way to discover $DBUS_SESSION_BUS_ADDRESS and contact the session D-Bus.
I'll take your word for it that it worked in the past, possibly due to use of $HOME/.dbus or the existence of actual /tmp/dbus-$TMPNAM files referenced in $DBUS_SESSION_BUS_ADDRESS (which is normally set to something resembling unix:abstract=/tmp/dbus-GkJdpPD4sk,guid=0001e69e075e5e2). As the dbus-cleanup-sockets(1) man page explains:
On Linux, this program is essentially useless, because D-Bus defaults
to using "abstract sockets" that exist only in memory and don't have a
corresponding file in /tmp.
However, we can use a variation on the idea presented in an ubuntuforum post, Attach to existing DBUS session over SSH, to discover the session D-Bus of an existing user session on the local machine from within the cron environment and set $DBUS_SESSION_BUS_ADDRESS accordingly.
While the technique used there discovers the environment from commonly-running processes like nautilus, pulseaudio, and trackerd, and requires that one or more of them be running in the active session, I recommend a more basic approach.
All of the common desktop environment session managers (gnome-session, lxsession, and kded4) have $DBUS_SESSION_BUS_ADDRESS set in their environment, even though they're started before dbus-daemon and have lower PIDs. So, it makes the most sense to just use whatever session manager corresponds to your desktop environment.
I wrote the following script, placed in $HOME/bin/test-crontab-dbus.sh, to test access to the existing session bus:
#!/bin/sh
SESSION_MANAGER=lxsession
OUTFILE=/tmp/${USER}-cron-dbus.txt
export $(cat /proc/$(pgrep "$SESSION_MANAGER" -u "$USER")/environ \
|egrep -z '^DBUS_SESSION_BUS_ADDRESS=')
date >> $OUTFILE
dbus-send --session --dest=org.freedesktop.DBus \
/ org.freedesktop.DBus.GetId 2>> $OUTFILE
if test "$?" -eq 0; then
echo "Success contacting session bus!" >> $OUTFILE
fi
The SESSION_MANAGER=lxsession above is appropriate for a main desktop session running under LXDE, under Gnome you'd set SESSION_MANAGER=gnome-session, and under KDE you'd use SESSION_MANAGER=kded4.
If the job in the crontab has access to the session bus, you'll see something like the following in the output:
Fri Dec 18 15:27:02 EST 2015
Success contacting session bus!
Otherwise, you'll see the error message output by dbus-send.
Obviously, you can substitute any other method of testing connectivity to the session bus, including whatever operation you actually need to perform via a cron job.
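As a hedged aside: on newer systemd-based systems the user bus usually listens on a predictable socket path, so the address can often be constructed directly instead of sniffed from another process's environment (assumption: a systemd user instance is running dbus, so /run/user/$UID/bus exists):

```shell
#!/bin/sh
# Construct the session bus address from the predictable per-user
# runtime directory instead of reading another process's environment.
uid=$(id -u)
DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/${uid}/bus"
export DBUS_SESSION_BUS_ADDRESS
echo "$DBUS_SESSION_BUS_ADDRESS"
```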
| Access to user's session D-bus from their cron commands |
1,450,463,937,000 |
I'd like to know how I can check whether my cron job will run at the specified time I set. Is there any way I can test this without having to wait for that time?
Here are my crontab -l results:
root@work:~$ crontab -l
3 */23 * * * /opt/lampp/bin/php /opt/lampp/htdocs/site/cron/my_script.php > /dev/null
If I did the values right, will my cronjob run at exactly 11pm every night and log any output to /dev/null for cleanliness?
Thanks.
|
The only way to be sure is to let it run and inspect the results. You can modify your command to log the output somewhere and inspect that, or let it email you the output.
You could add another identical line which runs the command within 5 minutes or so, for debugging. eg. If it's 3:13 pm right now, I might add this line to test the command after 3 minutes from now:
# Run at 15:16
16 15 * * * /opt/lampp/bin/php /opt/lampp/htdocs/site/cron/my_script.php > /dev/null
BTW. To run at 11 PM every night, you probably want this instead; I have also redirected stderr to stdout (2>&1) to ensure that all output goes to /dev/null:
# At minute 0 of hour 23 on every day, every month, every day of the week:
0 23 * * * /opt/lampp/bin/php /opt/lampp/htdocs/site/cron/my_script.php > /dev/null 2>&1
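A hedged debugging variation: instead of discarding output, have the job append a timestamped line to a log so you can verify afterwards exactly when it fired (the log path below is illustrative):

```shell
#!/bin/sh
# Crontab sketch:  16 15 * * * /path/to/wrapper >> /tmp/job.log 2>&1
# Body of such a wrapper: record a timestamp, then show the last entry.
logfile=/tmp/cron-debug.log
printf '%s job started\n' "$(date '+%Y-%m-%d %H:%M:%S')" >> "$logfile"
tail -n 1 "$logfile"
```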
| How can I ensure my cronjob will run at specified time? |
1,450,463,937,000 |
This is the first time I'm using cron through its /etc/cron.hourly directory to run a script once every hour.
For some reason, the script doesn't seem to run.
Here's what I did:
Added a basic test script in /etc/cron.hourly:
/etc/cron.hourly# ll
-rwxr-xr-x 1 root root 152 Jun 20 23:26 test.bash*
Here's the script:
/etc/cron.hourly# cat test.bash
#!/bin/bash
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
echo "This line from test.bash" > /tmp/from.test.script
Here's /etc/crontab:
/etc/cron.hourly# cat /etc/crontab
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
The Cron package was installed by default, so next, I ran /etc/init.d/cron restart, and waited until :17. Still no trace of the output in /tmp.
Any idea what else I could try?
/etc/cron.hourly# run-parts --list /etc/cron.hourly
/etc/cron.hourly# run-parts --test /etc/cron.hourly
/etc/cron.hourly# strace run-parts /etc/cron.hourly
execve("/bin/run-parts", ["run-parts", "/etc/cron.hourly"], [/* 16 vars */]) = 0
brk(0) = 0x7f8000
uname({sys="Linux", node="sheevaplug", ...}) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb6efe000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=17282, ...}) = 0
mmap2(NULL, 17282, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb6ef9000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/arm-linux-gnueabi/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0XX\1\0004\0\0\0"..., 512) = 512
lseek(3, 1231644, SEEK_SET) = 1231644
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1400) = 1400
lseek(3, 1231204, SEEK_SET) = 1231204
read(3, "A'\0\0\0aeabi\0\1\35\0\0\0\0054T\0\6\2\10\1\t\1\22\4\24\1\25\1"..., 40) = 40
fstat64(3, {st_mode=S_IFREG|0755, st_size=1233044, ...}) = 0
mmap2(NULL, 1275160, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6da3000
mprotect(0xb6ecd000, 32768, PROT_NONE) = 0
mmap2(0xb6ed5000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x12a) = 0xb6ed5000
mmap2(0xb6ed8000, 9496, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb6ed8000
close(3) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb6ef8000
set_tls(0xb6ef86d0, 0xb6ef8da7, 0xb6ef8da8, 0xb6ef86d0, 0xb6f00000) = 0
mprotect(0xb6ed5000, 8192, PROT_READ) = 0
mprotect(0x12000, 4096, PROT_READ) = 0
mprotect(0xb6eff000, 4096, PROT_READ) = 0
munmap(0xb6ef9000, 17282) = 0
umask(022) = 022
brk(0) = 0x7f8000
brk(0x819000) = 0x819000
rt_sigaction(SIGCHLD, {0x931c, [], SA_NOCLDSTOP|0x4000000}, NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], NULL, 8) = 0
open("/etc/cron.hourly", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY|O_CLOEXEC) = 3
fcntl64(3, F_GETFD) = 0x1 (flags FD_CLOEXEC)
getdents(3, /* 6 entries */, 32768) = 128
getdents(3, /* 0 entries */, 32768) = 0
close(3) = 0
exit_group(0) = ?
|
Which distribution are you using? According to Ubuntu's run-parts manual:
If neither the --lsbsysinit option nor the --regex option is given then
the names must consist entirely of ASCII upper- and lower-case letters,
ASCII digits, ASCII underscores, and ASCII minus-hyphens.
Therefore run-parts won't run a script named test.bash. Thanks to this rule, you avoid accidentally running renamed scripts, for example *.old files, or the *.dpkg-dist and *.dpkg-old files left behind by dpkg upgrades.
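The rule is easy to check yourself; here is a small sketch of the default filename filter (the regex mirrors the man-page wording, assuming neither --lsbsysinit nor --regex is in effect):

```shell
#!/bin/sh
# run-parts' default rule: names may contain only ASCII letters, digits,
# underscores and hyphens -- so a dot disqualifies a script.
is_runparts_ok() {
    printf '%s\n' "$1" | grep -Eq '^[A-Za-z0-9_-]+$'
}
is_runparts_ok test.bash && echo "test.bash: accepted" || echo "test.bash: rejected"
is_runparts_ok test-bash && echo "test-bash: accepted" || echo "test-bash: rejected"
```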
| Why doesn't cron run a test script from /etc/cron.hourly? |
1,450,463,937,000 |
I am trying to create a directory having a timestamp in its name in the /home directory. I have created the following cron job as the root user, but it's not running. What am I doing wrong?
Following is my cron job which I have created with root user privilege.
[root@bvdv-emmws01 home]# crontab -l
* * * * * /home/test.sh
Following are the contents of /home/test.sh.
Update: added full path for directory.
[root@bvdv-emmws01 home]# cat /home/test.sh
#!/bin/bash
mkdir /home/test_$(date -d "today" +"%Y%m%d%H%M%S")
Permission of /home/test.sh:
[root@bvdv-emmws01 home]# ls -ltr /home/test.sh
-rwxrwxrwx 1 root root 58 Dec 2 12:58 /home/test.sh
I have updated the /etc/crontab file. That file now has the following contents:
[root@bvdv-emmws01 home]# cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
* * * * * root /home/test.sh
Status of crond daemon
[root@bvdv-emmws01 home]# service crond status
crond (pid 1910) is running...
|
A crontab created with crontab -e and listable with crontab -l should not have a user specified for the command. Your entry should read:
* * * * * /home/test.sh
Or alternatively put the line that you have in /etc/crontab instead.
From man 5 crontab (section EXAMPLE SYSTEM CRON FILE):
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
Note the last sentence.
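The script body itself is fine either way; here is a runnable sketch of it that targets /tmp instead of /home, so it can be tried without root:

```shell
#!/bin/bash
# Same idea as /home/test.sh, but against /tmp for a harmless dry run.
dir="/tmp/test_$(date +%Y%m%d%H%M%S)"
mkdir -p "$dir"
echo "created $dir"
```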
| How to run a cron job to execute every minute? |
1,450,463,937,000 |
How to set default umask for cron jobs, please? (On RHEL 6.)
Jobs are started under a non-interactive (obviously), non-login (?) shell. Not only do I prefer dash over bash, but consider also bash called as /bin/sh. It seems that both shells, in non-interactive non-login invocation, don't read any start-up file like /etc/profile.
Is the default umask hard-wired into the shell, or is it inherited from the cron daemon?
|
On RHEL, PAM is used, so you could try using pam_umask
Try putting this in /etc/pam.d/crond
session optional pam_umask.so umask=0022
Naturally, this is untested, and may very well break assumptions made by various applications.
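For completeness, a small demonstration of what a umask of 0022 does to newly created files; as a per-job alternative to PAM, you could simply set the umask at the top of each cron script. (GNU `stat` is assumed here, and the file path is illustrative.)

```shell
#!/bin/sh
# With umask 0022, the group/other write bits are masked off,
# so a freshly touched file comes out as 0644 (666 & ~022).
umask 0022
f=/tmp/umask-demo.$$
rm -f "$f"
touch "$f"
mode=$(stat -c '%a' "$f")   # GNU stat assumed
echo "mode: $mode"
rm -f "$f"
```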
| Default umask for cron jobs |
1,450,463,937,000 |
Is there any difference between creating a CRON tab using 0/5 and */5?
For example:
0/5 * * * *
vs
*/5 * * * *
|
This IBM support article explains how stepping works. In the case of */5, it would occur every 5 minutes (0, 5, 10, etc). That is the same as 0-59/5. In the case of 0/5, I just tested it and it will never run.
| Difference between CRON tab 0/5 and */5? |
1,450,463,937,000 |
I can run the command aws --version in a script and in the cli. But if I put this command into a crontab it does not work.
Crontab:
50 12 * * * aws --version > ~/yolo.swag
Error:
/bin/sh: 1: aws: not found
The aws command is in a bash script. And I get the same error message when I run the script in cron. How can I get the script to run the command fine ?
|
You need to specify the full path to the aws executable:
50 12 * * * /usr/local/bin/aws --version > ~/yolo.swag
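To find the path to put in the crontab, `command -v` (or `which`) in an interactive shell works; the demo below uses `ls` as a stand-in since aws may not be installed everywhere. Alternatively, many cron implementations let you set PATH at the top of the crontab, e.g. `PATH=/usr/local/bin:/usr/bin:/bin`.

```shell
#!/bin/sh
# Cron usually runs jobs with a minimal PATH, which is why commands that
# work interactively can be "not found" under cron. Resolve the absolute
# path first and paste it into the crontab.
p=$(command -v ls)    # stand-in for: command -v aws
echo "full path: $p"
```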
| AWS CLI - not working in a crontab |
1,450,463,937,000 |
I have a script in /etc/cron.hourly :
-rwxr-xr-x 1 root root 85 Dec 6 19:05 /etc/cron.hourly/nvidia_to_exclusive
containing (with an empty line at the end):
#!/bin/bash
/usr/bin/nvidia-smi -c 1 > /home/user/nvidia-smi_set_exclusive.log
The script isn't executed by cron at all, even if using run-parts /etc/cron.hourly successfully execute it.
What could be missing ?
|
The problem was that the cron service was inactive.
While I'm here, I'll summarize all the steps I've found to make a script in /etc/cron.hourly/ work :
Check that the name of your script is only using valid characters for run-parts, i.e. [a-zA-Z0-9_-].
So don't use extension like .sh.
Check that your script is executable.
If not : chmod +x /etc/cron.hourly/yourScript
Check that your script contains the shebang at the top (#!/bin/bash for example).
Check that your script runs with run-parts :
run-parts --test /etc/cron.hourly → your script should be printed.
run-parts /etc/cron.hourly→ your script should be executed.
You can check at the end of /var/log/cron if your script successfully finished.
Check that cron is running with service crond status.
If not : service crond stop then service crond start
Check if your /var/log/cron contains the error BAD FILE MODE (/etc/cron.d/0hourly).
If it's the case, you probably need to execute chmod 0644 /etc/cron.d/0hourly (cron does not like this file to be executable).
Check - at least by default on CentOS 7 - that /etc/cron.d/0hourly exists and contains the line
01 * * * * root run-parts /etc/cron.hourly
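The first three points of the checklist can be automated; a hedged sketch follows (the sample script is created under /tmp purely for demonstration):

```shell
#!/bin/sh
# Checklist automation: valid run-parts name, executable bit, shebang.
check() {
    base=$(basename "$1")
    printf '%s\n' "$base" | grep -Eq '^[A-Za-z0-9_-]+$' || { echo "bad name: $base"; return 1; }
    [ -x "$1" ] || { echo "not executable: $1"; return 1; }
    head -n 1 "$1" | grep -q '^#!' || { echo "missing shebang: $1"; return 1; }
    echo "ok: $base"
}
# Demo against a scratch copy of the script from the question:
f=/tmp/nvidia_to_exclusive
printf '#!/bin/bash\necho demo\n' > "$f"
chmod +x "$f"
check "$f"
```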
| Script in /etc/cron.hourly/ never running |
1,450,463,937,000 |
I'm a new Linux user.
I'm trying to run a crontab to back up my database as the vagrant user:
* * * * * /usr/bin/mysqldump -h localhost -u root -p root mydb | gzip > /var/backup/all/database_`date +%Y-%m-%d`.sql.gz >/dev/null 2>&1
when the crontab runs there is no backup file in the folder (my backup/all has the permission scheme 755).
This is error from /var/log/syslog
Aug 16 11:55:01 precise64 CRON[2213]: (vagrant) CMD (/usr/bin/mysqldump -h localhost -u root -p root mydb | gzip > /var/backup/all/database_`date +%Y-%m-%d`.sql.gz >/dev/null 2>&1)
Aug 16 11:55:01 precise64 CRON[2212]: (CRON) info (No MTA installed, discarding output)
So I think:
it's either that crontab can't create the backup file because of Permission denied,
or that I didn't install an MTA. But I use >/dev/null 2>&1 to stop crontab from sending output to email, so why the error?
|
Of course, the error is that you don't have a mailer (sendmail, postfix, etc.) installed and active.
That being said, your other problem is that the >/dev/null 2>&1 ONLY applies to the LAST command, in this case gzip. Thus there must be some output going to STDERR from your mysqldump.
The correct way to do what I think you want is:
* * * * * (command | command ) >/dev/null 2>&1
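The scoping rule is easy to demonstrate without cron: in `a | b > file 2>&1` the redirections belong to `b` alone, so stderr from `a` still escapes, while grouping the pipeline silences everything:

```shell
#!/bin/sh
# Without grouping: 2>&1 applies only to cat, so sh's stderr escapes
# and is captured by the outer { ... } 2>&1.
bad=$( { sh -c 'echo err >&2; echo out' | cat >/dev/null 2>&1; } 2>&1 )
# With grouping: the whole pipeline's stdout and stderr go to /dev/null.
good=$( { ( sh -c 'echo err >&2; echo out' | cat ) >/dev/null 2>&1; } 2>&1 )
echo "without grouping, escaped stderr: '$bad'"
echo "with grouping, escaped stderr: '$good'"
```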
| crontab error with (No MTA installed) but I use >/dev/null 2>&1 |
1,450,463,937,000 |
I'm trying to set up a cron job under my user. I run crontab -e, make my edits, and try to save and exit. I receive the following error message /var/spool/cron/: mkstemp: Permission denied.
Relevant output from ls -al /var/spool/cron/crontabs
drwxr-xr-x 2 root crontab 4096 Nov 4 10:09 .
drwxr-xr-x 5 root root 4096 Nov 19 2014 ..
-rw-rw-rw- 1 greg crontab 91 Nov 4 11:04 greg
-rw------- 1 root crontab 1231 Oct 29 16:18 root
I can directly edit the greg file and save that but I still can't seem to get the job to run, even if I restart cron after updating it. What do I need to do to fix this problem?
The output from ls -lha $(which crontab) is:
-rwxr-sr-x 1 root crontab 36K Feb 8 2013 /usr/bin/crontab
The output from groups greg is:
greg : greg adm sudo crontab lpadmin sambashare
|
This will fix your immediate problem:
chmod u=rwx,g=wx,o=t /var/spool/cron/crontabs
But, if you can download packages, a more robust way to fix this is to use apt-get to reinstall the appropriate package:
root@ubuntu# dpkg-query -S /var/spool/cron/crontabs
cron: /var/spool/cron/crontabs
root@ubuntu# apt-get install --reinstall cron
after first making sure any local changes you've made to /etc/init/cron.conf, /etc/default/cron, etc. are copied somewhere and then reapplied.
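If you want to verify the resulting permissions, the target mode of that chmod is 1730 (drwx-wx--T); a sketch against a scratch directory (GNU `stat` assumed, path illustrative; the real target is /var/spool/cron/crontabs):

```shell
#!/bin/sh
# Reproduce the fix on a throwaway directory and check the octal mode:
# u=rwx (7), g=wx (3), o=t (sticky bit only) => 1730.
d=/tmp/crontabs-demo
mkdir -p "$d"
chmod u=rwx,g=wx,o=t "$d"
mode=$(stat -c '%a' "$d")   # GNU stat assumed
echo "mode: $mode"
```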
| crontab -e yields: /var/spool/cron/: mkstemp: Permission denied |
1,450,463,937,000 |
I've been migrating my crontabs to systemd's timer units. They all look similar to this:
.timer file:
[Unit]
Description=timer that uses myjob.service
[Timer]
OnCalendar=*-*-* *:00:00
Unit=myjob.service
[Install]
WantedBy=timers.target
.service file:
[Unit]
Description=Script that runs myjob.sh
[Service]
ExecStart=/home/user/myjob.sh
My timers work but they also execute on system reboot. I would like my OnCalendar events to only run at the specified times, not whatever random time I reboot the PC. Any ideas?
UPDATE:
I resolved this problem by converting my 'user' timers into root/system timers.
I disabled all of my .service and .timer files, and moved them out of my home directory into /etc/systemd/system.
I added the 'User=' section to each service file, so that my scripts were run by the regular user and not as root.
Now my timers aren't being triggered on system startup, and I was also getting problems with sporadic triggering when I logged in via ssh. This has also been solved now that they are under the control of the root account, but my scripts are still run as the regular user, which preserves my files' ownership attributes. Problem solved.
|
I resolved this problem by converting my 'user' timers into root/system timers.
I disabled all of my .service and .timer files, and moved them out of my home directory into /etc/systemd/system.
I added the 'User=' section to each service file, so that my scripts were ran by the regular user and not as root.
Now my timers aren't being triggered on system startup, and I was also getting problems with sporadic triggering when I logged in via ssh. This has also been solved now that they are under the control of the root account, but my scripts are still run as the regular user, which preserves my files' ownership attributes. Problem solved.
The OP posted this as an edit to the question, so I reproduced it here.
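For reference, a hedged sketch of what the resulting system-level pair might look like (unit names, paths and the User= value are illustrative). Note that for OnCalendar timers it is Persistent= (default false) that controls whether a run missed while the machine was off fires immediately at boot:

```ini
# /etc/systemd/system/myjob.service
[Unit]
Description=Script that runs myjob.sh

[Service]
Type=oneshot
User=user
ExecStart=/home/user/myjob.sh

# /etc/systemd/system/myjob.timer
[Unit]
Description=Run myjob.service on the hour

[Timer]
OnCalendar=*-*-* *:00:00
Persistent=false

[Install]
WantedBy=timers.target
```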
| Prevent systemd timer from running on startup |
1,450,463,937,000 |
Say I'm running a C program via crontab -e:
15 11 * * * time /home/philip/school/a1_c_program.c > /home/philip/logs/time2execute.txt
But the C program requires the user to enter some text.
What happens in this situation? Does the C program just remain an open process on my machine until I restart? Or does it automatically close after some time?
If I could attach to the process and enter the required text would the time command finish running and print something to the newly created text file?
I noticed with my above cronjob the time2execute.txt file is created but is empty.
Edit: Once I fixed my crontab to point to the compiled c program instead of the source file. The text file included the prompt text that the user would seen.
|
Tools run by the cron daemon get their environment from cron, and not from your shell. Cron doesn't provide a standard input for these tools; more exactly, it provides a fake one from the /dev/null device.
Thus, if the tool requires standard input, it will get an end-of-file as soon as it tries to read input data. What it does with that depends on the tool. In most cases it will behave the same as if you ran the program with
programname </dev/null
If it tries to read the terminal directly, like ncurses apps do, then its terminal initialization sequence won't work: the ncurses initialization call (initscr()) will fail. Again, what the tool does with that error depends on the tool; most tools simply exit with an error message. What happens to that error message depends on the cron configuration (by default, cron logs it and sends it to you in an email).
Side note: your cron line is wrong, because you try to run a .c source file directly. You first have to compile it into a binary executable. Furthermore, the time tool writes its output to standard error, so it would be wise to redirect its stderr as well as stdout to your log file.
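The end-of-file behaviour is easy to simulate without cron by pointing stdin at /dev/null, just like the `programname </dev/null` invocation above:

```shell
#!/bin/sh
# Simulate a cron job's stdin: read sees immediate end-of-file
# instead of waiting for the user to type something.
if read -r line </dev/null; then
    result="got input: $line"
else
    result="end-of-file on stdin"
fi
echo "$result"
```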
| What happens to cron-job that requires user prompt? |
1,450,463,937,000 |
I want to set a cronjob entry that runs a script every 30 minutes from 9:00 to 18:00 but I do not want it to run at 18:30. The script should run for the first time at 9:00 and for the last time at 18:00. Is this possible?
|
0,30 9-18 * * * /path_to_script
However, the above will run at 18:30. So, your best bet is to have a separate job handle 18:00:
0,30 9-17 * * * /path_to_script
0 18 * * * /path_to_script
Also, Cron job generators are awesome.
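To convince yourself that the two-line schedule fires first at 09:00, last at 18:00 and never at 18:30, its firing times can be enumerated:

```shell
#!/bin/sh
# Enumerate firing times of:  0,30 9-17 * * *  plus  0 18 * * *
times=""
h=9
while [ "$h" -le 17 ]; do
    times="$times $(printf '%02d:00 %02d:30' "$h" "$h")"
    h=$((h + 1))
done
times="$times 18:00"
echo "$times"
```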
| Cronjob to run every 30 minutes |
1,450,463,937,000 |
I am using vagrant boxes for several tests and in this particular case it is the Fedora 31 box from Bento with VirtualBox. When trying to use the crontab command, I get an error that it is not found. A quick search with locate tells me that there is no cron system installed at all.
Is it a new default in Fedora 31 to not install cron at all by default, or is it the Bento project that might think this is a good idea?
On my Fedora Workstation the command is there, but it was there before as I upgraded from 28 I think to 31 over the years.
|
Per the Fedora Documentation,
Fedora comes with the following automated task utilities: cron, anacron, at, and batch.
However, it appears that these need to be installed as they do not come included:
Installing Cron and Anacron
To install Cron and Anacron, you need to install the cronie package with Cron and the cronie-anacron package with Anacron (cronie-anacron is a sub-package of cronie).
dnf install cronie cronie-anacron
See also: https://fedoraproject.org/wiki/Administration_Guide_Draft/Cron
If you look at the setup scripts for those Vagrant boxes, you will see very little deviation from the base image.
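A quick hedged check you can drop into a provisioning script to detect the missing package (the dnf line is only printed here, not executed):

```shell
#!/bin/sh
# Detect whether a cron implementation is installed at all.
if command -v crontab >/dev/null 2>&1; then
    msg="crontab found: $(command -v crontab)"
else
    msg="crontab missing; on Fedora install it with: sudo dnf install cronie cronie-anacron"
fi
echo "$msg"
```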
| Crontab command missing in Fedora 31 |
1,450,463,937,000 |
I have following crontab file -
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
* * * * * mango echo hi >> /home/mango/test
I restarted the service using sudo service cron restart but the file does not appear. So I checked the log file to see the issue.
Sep 18 09:24:41 dpiplserver rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="869" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Sep 18 09:24:51 dpiplserver anacron[1157]: Job `cron.daily' terminated
Sep 18 09:24:51 dpiplserver anacron[1157]: Normal exit (1 job run)
Sep 18 09:39:01 dpiplserver CRON[2349]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
Sep 18 10:09:01 dpiplserver CRON[2399]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
Sep 18 10:16:08 dpiplserver kernel: [ 4726.374034] usb 2-1.5: new high-speed USB device number 3 using ehci_hcd
Sep 18 10:39:01 dpiplserver CRON[2803]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
Sep 18 10:50:21 dpiplserver kernel: [ 6773.998690] init: cron main process (1192) killed by TERM signal
Sep 18 10:50:21 dpiplserver cron[2880]: (CRON) INFO (pidfile fd = 3)
Sep 18 10:50:21 dpiplserver cron[2881]: (CRON) STARTUP (fork ok)
Sep 18 10:50:21 dpiplserver cron[2881]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Sep 18 10:50:21 dpiplserver cron[2881]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
Sep 18 10:52:17 dpiplserver kernel: [ 6890.091067] init: cron main process (2881) killed by TERM signal
Sep 18 10:52:17 dpiplserver cron[2905]: (CRON) INFO (pidfile fd = 3)
Sep 18 10:52:17 dpiplserver cron[2906]: (CRON) STARTUP (fork ok)
Sep 18 10:52:17 dpiplserver cron[2906]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Sep 18 10:52:17 dpiplserver cron[2906]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
Sep 18 10:54:41 dpiplserver kernel: [ 7033.797057] init: cron main process (2906) killed by TERM signal
Sep 18 10:54:41 dpiplserver cron[2937]: (CRON) INFO (pidfile fd = 3)
Sep 18 10:54:41 dpiplserver cron[2938]: (CRON) STARTUP (fork ok)
Sep 18 10:54:41 dpiplserver cron[2938]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Sep 18 10:54:41 dpiplserver cron[2938]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
Sep 18 10:56:01 dpiplserver cron[2938]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Sep 18 10:56:57 dpiplserver kernel: [ 7168.922614] init: cron main process (2938) killed by TERM signal
Sep 18 10:56:57 dpiplserver cron[2953]: (CRON) INFO (pidfile fd = 3)
Sep 18 10:56:57 dpiplserver cron[2954]: (CRON) STARTUP (fork ok)
Sep 18 10:56:57 dpiplserver cron[2954]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Sep 18 10:56:57 dpiplserver cron[2954]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
Sep 18 10:59:34 dpiplserver kernel: [ 7325.393315] init: cron main process (2954) killed by TERM signal
Sep 18 10:59:34 dpiplserver cron[2967]: (CRON) INFO (pidfile fd = 3)
Sep 18 10:59:34 dpiplserver cron[2968]: (CRON) STARTUP (fork ok)
Sep 18 10:59:34 dpiplserver cron[2968]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Sep 18 10:59:34 dpiplserver cron[2968]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
Sep 18 11:00:21 dpiplserver kernel: [ 7372.324581] init: cron main process (2968) killed by TERM signal
Sep 18 11:00:21 dpiplserver cron[2977]: (CRON) INFO (pidfile fd = 3)
Sep 18 11:00:21 dpiplserver cron[2978]: (CRON) STARTUP (fork ok)
Sep 18 11:00:21 dpiplserver cron[2978]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Sep 18 11:00:21 dpiplserver cron[2978]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
Sep 18 11:09:01 dpiplserver CRON[3060]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
Sep 18 11:09:18 dpiplserver crontab[3068]: (mango) LIST (mango)
Sep 18 11:09:50 dpiplserver crontab[3089]: (root) LIST (root)
Sep 18 11:13:34 dpiplserver kernel: [ 8163.398965] init: cron main process (2978) killed by TERM signal
Sep 18 11:13:34 dpiplserver cron[3144]: (CRON) INFO (pidfile fd = 3)
Sep 18 11:13:34 dpiplserver cron[3145]: (CRON) STARTUP (fork ok)
Sep 18 11:13:34 dpiplserver cron[3145]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Sep 18 11:13:34 dpiplserver cron[3145]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
Sep 18 11:15:22 dpiplserver kernel: [ 8271.390434] init: cron main process (3145) killed by TERM signal
Sep 18 11:15:22 dpiplserver cron[3163]: (CRON) INFO (pidfile fd = 3)
Sep 18 11:15:22 dpiplserver cron[3164]: (CRON) STARTUP (fork ok)
Sep 18 11:15:22 dpiplserver cron[3164]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Sep 18 11:15:22 dpiplserver cron[3164]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
I was expecting that the log file would either show that it run the command or the error it encountered. Why does it print neither success nor error?
|
Please notice that the file /etc/crontab has wrong permissions, which makes cron ignore the file and not run the tasks inside it:
Sep 18 11:15:22 dpiplserver cron[3164]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Fix it using the command
sudo chmod 600 /etc/crontab
and restart the service
sudo service cron restart
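The check cron performs can be reproduced on a scratch file: it refuses any crontab whose group or other write bits (octal 022) are set. (GNU `stat` assumed; the real target is /etc/crontab.)

```shell
#!/bin/sh
# Reproduce cron's "INSECURE MODE" test on a throwaway file.
f=/tmp/crontab-demo
: > "$f"
chmod 666 "$f"
perms=$(stat -c '%a' "$f")          # GNU stat assumed
if [ $(( 0$perms & 022 )) -ne 0 ]; then
    echo "insecure: mode $perms is group/other writable"
fi
chmod 600 "$f"
perms=$(stat -c '%a' "$f")
echo "fixed: mode $perms"
```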
| Cron does not print to syslog |
1,450,463,937,000 |
In cron you can specify */n where n means every n times period, for instance in the first column is minute.
*/5 means every 5 minutes, but which minutes? 0, 5, 10, ...?
What happens if the number specified is not a divisor of 60?
*/7 what will happen, will it start to skew in the next hour?
|
It'll go on 0, 7, 14, ... 56, and then start over at 0, 7, 14, ... in the next hour. The count resets at the top of each hour, so it never skews into the next hour.
With that syntax, I like to think of it as going when t mod x === 0
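A quick sketch of that rule for */7, confirming the sequence restarts from 0 each hour:

```shell
#!/bin/sh
# Minutes matched by */7 within one hour: exactly those with m mod 7 == 0.
out=""
m=0
while [ "$m" -le 59 ]; do
    [ $(( m % 7 )) -eq 0 ] && out="$out $m"
    m=$(( m + 1 ))
done
echo "matches:$out"
```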
| cron every X exact meaning |
1,450,463,937,000 |
The acme-client(1) man page suggests the following cron entry:
~ * * * * acme-client example.com && \
rcctl reload httpd
When I add to crontab the edit is not saved when I use that syntax:
crontab: installing new crontab
"/tmp/crontab.nOryzjBTlv":22: bad minute
crontab: errors in crontab file, unable to install
Do you want to retry the same edit?
But it saves fine if I just make it a single line:
~ * * * * acme-client example.com && \ rcctl reload httpd
Why doesn't \ allow continuing a line to the next one?
|
There is no way to split a single command line onto multiple lines, like the shell's trailing "\"
You'll find the above statement in the following paragraph if you do man 5 crontab. Please note below is from Ubuntu 20.04LTS.
The ``sixth'' field (the rest of the line) specifies the command to be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab file. Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input. There is no way to split a single command line onto multiple lines, like the shell's trailing "\"
From https://man.openbsd.org/crontab.5
It says the rest of the line is the command field, it does not say how a command span multiple lines. You should not assume that the crontab has the same syntax as the shell script.
The command field (the rest of the line) is the command to be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab. Percent signs (‘%’) in the command, unless escaped with a backslash (‘\’), will be changed into newline characters, and all data after the first ‘%’ will be sent to the command as standard input.
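The usual workaround is to move the multi-command logic into a small script and call that single script from the crontab, e.g. `~ * * * * /usr/local/bin/renew-cert`. A hedged sketch (paths illustrative; the inner commands are taken from the man-page example and are not executed here):

```shell
#!/bin/sh
# Create a one-line wrapper script that the crontab can call,
# instead of trying to continue the crontab line with a backslash.
cat > /tmp/renew-cert <<'EOF'
#!/bin/sh
acme-client example.com && rcctl reload httpd
EOF
chmod +x /tmp/renew-cert
head -n 1 /tmp/renew-cert
```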
| How come I can't use `\` to continue a line in crontab? |
1,450,463,937,000 |
I have a query on cron jobs: if I execute a command using a cron job, is it possible to display the output in the terminal rather than saving it in an output file?
Say, for example:
*/2 * * * root /bin/ping xx.xx.xx.xx
The output should display in the terminal. I tried it, but it doesn't show in the terminal. Is there anything I need to change in my cron job?
Thanks in advance
Vinoth
|
You can't do this.
All cron jobs are run in non-interactive shells, there is no terminal attachment. Hence the concept of /dev/tty or similar is not available in cron.
| How to execute the command in cronjob to display the output in terminal |
1,450,463,937,000 |
crond[2113] : (*system*) RELOAD (/etc/cron.d/mycron)
My cron job is not working, and I got the message above in the log. What does it mean, and how can I find out what is going on?
My cron job code:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
* * * * * root php /var/www/html/app/checkUserAction.php
|
This is not an error. This is the cron daemon informing you that it has detected that /etc/cron.d/mycron was modified and loaded the new version of the file.
Errors from the cron daemon itself will be in the same logs (probably, unless you've reconfigured logging). Errors from the job itself are sent as an email to root; check your email.
| what does mean crond[2113] : (*system*) RELOAD (/etc/cron.d/mycron) |
1,450,463,937,000 |
I get this message when there is an error in my crontab:
cron: No MTA installed, discarding output
I don't want to install an MTA on my system, but I also don't want to miss these error messages.
Where is it configured that cron tries to send these by mail?
Can I change that, so that these messages are sent to a file instead (perhaps via syslog)?
I don't want to log all cron messages, just the errors.
I have this in my rsyslog.conf:
cron.=info stop
*.* |/dev/xconsole
Unfortunately, it seems that even error messages have the .info tag.
How can I log only cron errors?
Or, in other words: how can I send to a log file what would otherwise be sent to the MTA if it were installed?
My system is Debian 10, and I am using rsyslog for logging (no systemd).
UPDATE:
Using redirection for each line individually, as suggested by @basin, is the solution I was using up until now, and it has a few problems:
First of all, as I stated, I would like a solution that redirects what would normally be sent to the MTA by default to some other location, i.e. |/dev/xconsole, without having to specify it for each line individually.
Second, if there is a syntax error in my crontab line, the redirection does not work. Cron still tries to send the error via the MTA, and I get the No MTA installed error in the log.
Is there some way to redirect what would have been sent via the MTA, so that it is sent (either directly or via syslog) to /dev/xconsole?
ADDITIONAL QUESTION:
When using the solution suggested by @Binarus, to write my own custom sendmail script:
Instead of using the default /usr/sbin/sendmail, can I specify another location for my custom script, such as /usr/local/sbin/sendmail? Where does cron get the information that sendmail is in /usr/sbin/? Is this hard-coded, or can it be configured in one of cron's config files?
|
I believe I have a solution, but it is only tested halfway. Unfortunately, I couldn't test /dev/xconsole, because I don't have that device on my systems, and I admit that I even don't know what it is and that I don't have the time to research it.
However, there are two good things: First, the following method is generic; that is, you nearly sure can use /dev/xconsole instead of the file name I have used. Second, I have just tested it on Debian Buster, so it really should work out-of-the-box for you.
I'll first show the (surprisingly simple) solution, then explain how it works, and then show some possible problems and how to circumvent them.
Solution
As a precaution, first check whether you have /usr/sbin/sendmail. This should not be the case, because this program usually belongs to an MTA, but you said you didn't have installed an MTA. If it exists, uninstall the package where it is from.
Now, create a script /usr/sbin/sendmail with the following content:
#!/bin/bash
cat >>/root/result
Set the permissions accordingly:
chmod a+rx,u+w,og-w /usr/sbin/sendmail
Restart cron:
systemctl restart cron
That was it. All error messages cron would normally send via email are now going into /root/result.
Of course, you must perform the steps above as root user.
In your case, you probably want to replace /root/result by /dev/xconsole (but please remember that I haven't tested this, as outlined above :-)), and you eventually should replace >> by > (but since I know exactly nothing about /dev/xconsole, this might be wrong).
How does it work?
Definition: In the following text, I'll use SENDMAIL to denote the SENDMAIL software package, and I'll use sendmail to denote an application.
Most (or at least, many) programs which send email messages do not themselves implement an SMTP protocol stack which would be needed to talk to an MTA directly via a socket or a network connection, and also don't incorporate SMTP libraries; making mistakes there would be fatal security-wise, and it's just not necessary because in most cases specialized applications come to rescue.
One of those specialized applications is /usr/bin/sendmail, which is capable of SMTP (and much more), and can be used very easily. A common pattern is:
cat MyMailMessage | /usr/bin/sendmail [sendmail-options]
# OR, even shorter
/usr/bin/sendmail <MyMailMessage
That is, the application constructs a mail message (which is very easy since it only needs a few headers plus the actual body) and pipes it to sendmail, which in turn does the SMTP wizardry.
Historically, SENDMAIL was the predominant MTA for a while. SENDMAIL contains not only an MTA, but also an MSP (message submission program). The MSP part is implemented in /usr/sbin/sendmail. For a long time, nobody got away without SENDMAIL, and therefore many applications still literally rely on /usr/sbin/sendmail or at least sendmail to be usable as mail submission program. This is the reason that even SENDMAIL's competitors like POSTFIX still also provide that program as a compatibility wrapper.
cron behaves like other applications in this respect. It is not capable of SMTP. Instead, it relies on sendmail to actually send messages; it just constructs the raw messages, including headers and body, and pipes them to sendmail, which in turn actually sends them.
So I just had to replace /usr/sbin/sendmail by the script shown above (since I have an MTA installed, this was just for testing). cron calls /usr/sbin/sendmail when it tries to send mail, and this now is our script. The script just takes its standard input and redirects it to the file.
As a side note, I actually don't know whether cron pipes the raw email message directly to sendmail, or whether it puts it into a temporary file first and then calls sendmail with stdin redirected from that file, or whether it does something else.
But that's not important here: The key point is that the raw email message in every case is constructed by cron, and that cron puts it into sendmail's stdin (however this may be accomplished).
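You can demonstrate the capture mechanism without touching the real /usr/sbin/sendmail; the sketch below builds a throwaway fake sendmail under /tmp and pipes a hand-made message into it, mimicking what cron does:

```shell
# create a fake "sendmail" that appends whatever it receives on stdin to a file
mkdir -p /tmp/fakemta
cat > /tmp/fakemta/sendmail <<'EOF'
#!/bin/sh
cat >> /tmp/fakemta/result
EOF
chmod +x /tmp/fakemta/sendmail

# simulate cron piping a raw mail message (headers + body) into it
printf 'Subject: cron error\n\nsomething failed\n' | /tmp/fakemta/sendmail

# the captured message, including headers, is now in the result file
cat /tmp/fakemta/result
```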
Disadvantages, Problems, Improvements
Please note that using cat in the script might not be the best choice efficiency-wise, but this depends on your expected output. There are myriads of articles out there which explain how best to put a script's stdin into a file; I don't think that this is in your question's scope.
One disadvantage of my solution is obvious: Should you ever need the "real" /usr/sbin/sendmail, we have a problem. I could imagine something subtle: There might be applications which change their behavior (especially, error and status reporting) depending on whether they find /usr/sbin/sendmail.
For example, an application may decide to email errors if it finds that executable, but write errors to log files otherwise. This application will now change its behavior. What then happens heavily depends on circumstances. First, you might not find error messages at the usual place. Second, since sendmail now is our simple script and does not work as the application expects, weird things might happen.
There is a remedy, though: You mentioned that you would accept recompiling cron. I believe (but can't verify right now) that in its source code, in config.h, there is a #define which defines the MSP's name it uses. Then the remedy is clear: Rename our script to abcd1234, and use that as the value of the #define, or -probably better- put the script into another directory which is not in the system's search path, and use the complete path to the script as value of the #define.
And while you're at it, you eventually should correct the command line options as well, which are in a separate #define; they don't hurt our script because it just ignores them, though. Possibly you even could dump the script and direct cron's error output directly to the file or device you want. Telling whether that would be possible would require further analysis of the source code, which I haven't conducted. I would stick with the script anyway; see the next paragraph for an important reason.
[ Update per 2021-10-20:
As outlined in my comment to your original question from 2021-10-20, there is an alternative way around the problem which saves you from recompiling: Move the "real" sendmail elsewhere and install the scripts in its place. Then, whenever the script is executed, make it figure out who called it. If cron has called it, make it behave as shown above; if not, make it call the "real" sendmail with the same stdin and parameters, i.e. "relay" stdin and the parameters to the "real" sendmail.
]
Another problem is that you don't see only the error messages you're interested in, but each time see the whole raw email message cron would send, including the headers. The remedy would be to add some code to our script which filters out the lines you are not interested in, e.g. using grep, sed and their friends. But this also is outside the scope of this question.
| cron: send error messages to file, when no MTA is installed |
1,450,463,937,000 |
I have a script which dumps out database and uploads the SQL file to Swift. I've run into the issue where the script runs fine in terminal but fails in cron.
A bit of debugging and I found that the /usr/local/bin/swift command is not found in the script.
Here's my crontab entry:
*/2 * * * * . /etc/profile; bash /var/lib/postgresql/scripts/backup
Here's what I've tried:
Using full path to swift as /usr/local/bin/swift
Executing the /etc/profile script before executing the bash script.
How do I solve this?
|
Cron doesn't run with the same environment as a user. If you do the following you will see what I mean:
type env from your terminal prompt and make note of the output.
Then set a cron job like this and compare its output to the previous:
*/5 * * * * env > ~/output.txt
You will find that the issue is likely because crontab does not have the same PATH variable as your user. As a solution to this you can (from your postgres user) echo $PATH and then copy the results of that to the first line of your crontab (something like this)
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/jbutryn/.local/bin:/home/jbutryn/bin
Note, however, that cron does not perform variable substitution in crontab assignments, so a line like PATH=$PATH:/usr/local/bin would set PATH to that literal string; spell the directories out in full instead, for example:
PATH=/usr/local/bin:/usr/bin:/bin
However I normally always put my user's PATH in crontab because I haven't yet heard a good reason not to do so and you will likely run into this issue again if you don't.
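The effect is easy to reproduce outside of cron. The sketch below uses a made-up command, mytool, installed in a directory that cron's minimal PATH would not include:

```shell
# install a fake command outside the minimal PATH
mkdir -p /tmp/fakebin
printf '#!/bin/sh\necho ok\n' > /tmp/fakebin/mytool
chmod +x /tmp/fakebin/mytool

# cron-like minimal PATH: the command is not found
PATH=/usr/bin:/bin sh -c 'command -v mytool || echo missing'   # -> missing

# PATH extended as in the crontab fix: the command runs
PATH=/usr/bin:/bin:/tmp/fakebin sh -c 'mytool'                 # -> ok
```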
| Script in cron cannot find command |
1,450,463,937,000 |
We have a server that is used for Linux thin clients in a class. The server is started when required and I want to schedule it to power off at 22:00 every day to avoid power wasting.
I thought of using crontab but, after reading this answer, I am not sure if it is a good idea. Is it better to include shutdown -h 22:00 & in a start up script?
|
The shutdown command has already an embedded scheduler so you don't need a cron job for it to run at the specified time. In Linux as everywhere else, it's better to stick to the KISS principle (Keep It Short and Simple).
shutdown -h 22:00 will work fine, no need to run it in the background. Add the command at the end of /etc/rc.local (or /etc/rc.d/rc.local depending on your system) for execution in the last startup script.
The advantage of not using cron is that in this way the shutdown remains scheduled during the day, and you can cancel it at any time by typing
shutdown -c
| How to schedule shutdown every day? |
1,450,463,937,000 |
*/n * * * * in a crontab means run every n minutes, if n divides 60 evenly. What happens for other cases, like */13 * * * *?
|
'*' is equivalent to the full range of possible values, in this case '0-59'
'/13' represents the "step size" or increment used to determine the next run, starting with the initial possible value. (e.g. "Every 13 minutes starting at 0")
When the minutes get reset (59 to 0) it starts over. So for example */13 will always run on 0 13 26 39 52.
Another example from the comments:
"Setting minutes as 3-59/5 will first fire at 3 and last fire at 58." -- DZet
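You can enumerate the resulting firing minutes with seq, which steps through a range exactly the way cron steps through the minute field:

```shell
seq 0 13 59    # minutes for */13    -> 0 13 26 39 52
seq 3 5 59     # minutes for 3-59/5 -> 3 8 13 ... 53 58
```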
| What does */13 do in a crontab? |
1,450,463,937,000 |
I've been using kSar to look at my server's resource use. There is a definite spike in process creation at 4:04 AM daily. Cron seems to define the interval at which jobs should run, but not the specific time.
How can I find what cron job runs at that time?
|
If you look at any CentOS 5 or 6 system the file /etc/crontab is typically where all the action starts. There are 4 directories that will contain various scripts. These directories are named:
$ ls -1d /etc/cron*
/etc/cron.d
/etc/cron.daily
/etc/cron.deny
/etc/cron.hourly
/etc/cron.monthly
/etc/crontab
/etc/cron.weekly
The /etc/cron.d and /etc/cron.deny are special so I'm not going to discuss them. The remaining 4 directories: hourly, daily, weekly, & monthly are for exactly what their names imply. But when do they run? Take a look at the /etc/crontab to find that out.
######################################################################
## run-parts
##
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
######################################################################
Your issue with something running daily @ 4:04AM? It's the /etc/cron.daily directory that's causing this. You'll need to familiarize yourself with what's in that directory to know what the actual culprit is. But if I had to guess it's likely one of these 2 guys:
$ ls -l /etc/cron.daily
logrotate
mlocate.cron
What else is running?
If you have a deviant cron that's tanking your system always consult the log file. Here's everything running at 4AM on my CentOS 5 system:
$ grep " 04:" /var/log/cron | head -10
Feb 9 04:10:01 skinner crond[25640]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb 9 04:20:02 skinner crond[27086]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb 9 04:22:01 skinner crond[27432]: (root) CMD (run-parts /etc/cron.weekly)
Feb 9 04:22:01 skinner anacron[27436]: Updated timestamp for job `cron.weekly' to 2014-02-09
Feb 9 04:30:01 skinner crond[28561]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb 9 04:40:01 skinner crond[30022]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb 9 04:50:01 skinner crond[31482]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb 10 04:00:02 skinner crond[7578]: (root) CMD (/usr/lib/sa/sa1 1 1)
Feb 10 04:01:01 skinner crond[7700]: (root) CMD (run-parts /etc/cron.hourly)
Feb 10 04:02:01 skinner crond[7934]: (root) CMD (run-parts /etc/cron.daily)
Notice the 04:02 AM time slots?
| How to find a cron job if you know the specific time it runs? |
1,450,463,937,000 |
I want to have a script running in screen at startup.
This doesn't work:
@reboot pi screen -d -m /home/pi/db_update.py
however running this manually as user pi works:
screen -d -m /home/pi/db_update.py
Any idea on what I am missing?
|
Instead of adding @reboot pi ... to /etc/crontab you should run crontab -e as user pi and add:
@reboot /usr/bin/screen -d -m /home/pi/db_update.py
Make sure to use the full path to screen (just to be sure, it works without it), and that the /home/pi is not on an encrypted filesystem (been there, done that). The command cannot depend on anything that might only be accessible after either the cron daemon has started, or the user is logged in.
You might want to add something to db_update.py (write to a file in /var/tmp to see that it actually runs, or put a time.sleep(600) at the end of the python program to allow enough time to log in and connect).
Tested on Lubuntu 13.04, python 2.7.4 with the following entry:
@reboot screen -d -m /home/anthon/countdown.py
and the countdown.py:
#!/usr/bin/env python
import time
for x in range(600,0,-1):
print x
time.sleep(1)
(and chmod 755 countdown.py)
| Running `screen` through a @reboot cron job |
1,450,463,937,000 |
I am trying to get ssh to work in cron, and it seems like I have tried all the standard tricks with no luck at all. I can run a non-interactive ssh by using
>./some_script_with_ssh
in bash. It's only when I try to use it in cron that it fails. Any help I could get would be greatly appreciated.
Below are some of the data requested in similar questions:
My user's crontab
PATH = /home/zach/.ssh/:/usr/bin
52 * * * * ssh -vvv my_account@my_remote "touch temp.temp"
Printout from the email cron sent me
OpenSSH_7.3p1 Ubuntu-1, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "my_remote" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to my_remote [IP_HERE] port 22.
debug1: Connection established.
debug1: identity file /home/zach/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /home/zach/.ssh/id_rsa-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.3p1 Ubuntu-1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2
debug1: match: OpenSSH_6.2 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to my_remote:22 as 'my_account'
debug3: hostkeys_foreach: reading file "/home/zach/.ssh/known_hosts"
debug3: record_hostkey: found key type ECDSA in file /home/zach/.ssh/known_hosts:1
debug3: load_hostkeys: loaded 1 keys from my_remote
debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
debug3: receive packet: type 20
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c
debug2: host key algorithms: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc
debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,[email protected],zlib
debug2: compression stoc: none,[email protected],zlib
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: host key algorithms: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
debug2: ciphers ctos: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: ciphers stoc: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: compression ctos: none,[email protected]
debug2: compression stoc: none,[email protected]
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: ecdh-sha2-nistp256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: aes128-ctr MAC: [email protected] compression: none
debug1: kex: client->server cipher: aes128-ctr MAC: [email protected] compression: none
debug3: send packet: type 30
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug3: receive packet: type 31
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:K8vzLDbyV5JKlcnHsIj6BK/yR4OTJaY4fFuHpsg0FdE
debug3: hostkeys_foreach: reading file "/home/zach/.ssh/known_hosts"
debug3: record_hostkey: found key type ECDSA in file /home/zach/.ssh/known_hosts:1
debug3: load_hostkeys: loaded 1 keys from my_remote
debug3: hostkeys_foreach: reading file "/home/zach/.ssh/known_hosts"
debug3: record_hostkey: found key type ECDSA in file /home/zach/.ssh/known_hosts:2
debug3: load_hostkeys: loaded 1 keys from 128.97.70.146
debug1: Host 'my_remote' is known and matches the ECDSA host key.
debug1: Found key in /home/zach/.ssh/known_hosts:1
debug3: send packet: type 21
debug2: set_newkeys: mode 1
debug1: rekey after 4294967296 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: receive packet: type 21
debug2: set_newkeys: mode 0
debug1: rekey after 4294967296 blocks
debug1: SSH2_MSG_NEWKEYS received
debug2: key: /home/zach/.ssh/id_rsa (0x55f6f6440f50)
debug3: send packet: type 5
debug3: receive packet: type 6
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug3: send packet: type 50
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,gssapi-with-mic,password,keyboard-interactive
debug3: start over, passed a different list publickey,gssapi-with-mic,password,keyboard-interactive
debug3: preferred publickey,keyboard-interactive
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/zach/.ssh/id_rsa
debug3: send_pubkey_test
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 60
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug2: input_userauth_pk_ok: fp SHA256:jsePXa9FO8c9f0bVwdgvXMJQ2GyHVqz5spaO13EQ0/M
debug3: sign_and_send_pubkey: RSA SHA256:jsePXa9FO8c9f0bVwdgvXMJQ2GyHVqz5spaO13EQ0/M
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug2: no passphrase given, try next key
debug2: we did not send a packet, disable method
debug3: authmethod_lookup keyboard-interactive
debug3: remaining preferred:
debug3: authmethod_is_enabled keyboard-interactive
debug1: Next authentication method: keyboard-interactive
debug2: userauth_kbdint
debug3: send packet: type 50
debug2: we sent a keyboard-interactive packet, wait for reply
debug3: receive packet: type 60
debug2: input_userauth_info_req
debug2: input_userauth_info_req: num_prompts 1
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug3: send packet: type 61
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,gssapi-with-mic,password,keyboard-interactive
debug2: userauth_kbdint
debug3: send packet: type 50
debug2: we sent a keyboard-interactive packet, wait for reply
debug3: receive packet: type 60
debug2: input_userauth_info_req
debug2: input_userauth_info_req: num_prompts 1
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug3: send packet: type 61
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,gssapi-with-mic,password,keyboard-interactive
debug2: userauth_kbdint
debug3: send packet: type 50
debug2: we sent a keyboard-interactive packet, wait for reply
debug3: receive packet: type 60
debug2: input_userauth_info_req
debug2: input_userauth_info_req: num_prompts 1
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug3: send packet: type 61
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,gssapi-with-mic,password,keyboard-interactive
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
Permission denied (publickey,gssapi-with-mic,password,keyboard-interactive).
Permissions on local RSA data
>ls -l ~/.ssh/
total 12
-rw------- 1 zach zach 1766 Dec 22 13:47 id_rsa
-rw-r--r-- 1 zach zach 419 Dec 4 2015 id_rsa.pub
-rw-r--r-- 1 zach zach 1332 Dec 21 13:51 known_hosts
Permissions on local home
>ls -l ~/..
total 20
drwx------ 2 root root 16384 Jul 17 2015 lost+found
drwx------ 67 zach zach 4096 Dec 22 16:05 zach
Permissions on local ~/.ssh folder
drwx------ 2 zach zach 4096 Dec 22 15:11 .ssh
Permissions on remote home
drwx------ 31 my_account grad 4096 Dec 22 13:57 my_account
Permissions on remote RSA data
> ls -l ~/.ssh/
total 12
-rwx------ 1 my_account grad 419 Dec 4 2015 authorized_keys
-rw------- 1 my_account grad 36 Dec 20 22:45 config
-rw------- 1 my_account grad 223 Sep 10 14:51 known_hosts
Permissions on remote ~/.ssh folder
> ls -l ~
drwx------ 2 my_account grad 4096 Dec 20 22:45 .ssh
Local /etc/ssh/ssh_config
host *
passwordauthentication no
stricthostkeychecking no
identityfile ~/.ssh/id_rsa
sendenv lang lc_*
hashknownhosts yes
Remote /etc/ssh/ssh_config
> cat /etc/ssh/ssh_config
Host *
Protocol 2
ServerAliveInterval 120
TCPKeepAlive no
ConnectTimeout 5
NoHostAuthenticationForLocalhost yes
PreferredAuthentications gssapi-with-mic,publickey,keyboard-interactive,password
GSSAPIAuthentication yes
SendEnv "LOGNAME LANG LC_*"
ForwardX11Trusted yes
My ssh key is not password protected.
>env | grep SSH
SSH_AGENT_LAUNCHER=gnome-keyring
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh (I am user 1000)
I have also tried to use the -n, -T, -t, and -t -t options for ssh with no noticeable difference.
|
debug1: Offering RSA public key: /home/zach/.ssh/id_rsa
debug3: send_pubkey_test
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 60
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug2: input_userauth_pk_ok: fp SHA256:jsePXa9FO8c9f0bVwdgvXMJQ2GyHVqz5spaO13EQ0/M
debug3: sign_and_send_pubkey: RSA SHA256:jsePXa9FO8c9f0bVwdgvXMJQ2GyHVqz5spaO13EQ0/M
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug2: no passphrase given, try next key
Your key is passphrase protected, but you probably didn't notice, because you are using gnome-keyring which takes care of that. So what are the possibilities:
Use a separate key, which is not encrypted, for cron jobs, because you still don't have a reasonable and secure way to provide a passphrase in a cron job. This is preferred.
If you don't mind storing passphrase in plaintext, use sshpass:
sshpass -p your_passphrase ssh -vvv my_account@my_remote "touch temp.temp"
Another possibility is to try to "hijack" the connection to your gnome-keyring (using the SSH_AUTH_SOCK environment variable). But note that this might not always work (once you log out of your graphical session, the gnome-keyring will not be running anymore and you will see the failures again):
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh ssh -vvv my_account@my_remote "touch temp.temp"
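A sketch of the first (preferred) option, generating a dedicated passphrase-less key for cron; the key path is illustrative, and my_account@my_remote are the question's placeholders:

```shell
# generate a dedicated key with an empty passphrase (-N '')
rm -f /tmp/id_rsa_cron /tmp/id_rsa_cron.pub
ssh-keygen -t rsa -f /tmp/id_rsa_cron -N '' -q

# install the public key on the remote, then point the crontab entry at it:
#   ssh-copy-id -i /tmp/id_rsa_cron.pub my_account@my_remote
#   52 * * * * ssh -i /tmp/id_rsa_cron my_account@my_remote "touch temp.temp"

ls -l /tmp/id_rsa_cron /tmp/id_rsa_cron.pub
```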
| SSH with Cron: RSA key not accepted |
1,450,463,937,000 |
What is the difference between the purposes of
/etc/crontab
files under /etc/cron.d/
/var/spool/cron/crontabs/root
If you want to add a cron job, into which file would you add it?
manpage of cron(8) Says
/etc/cron.d/: directory that contains system cronjobs stored for
different users.
What does "stored for different users" mean?
It is confusingly similar to the files under /var/spool/cron/crontabs/.
https://unix.stackexchange.com/a/478581/674 quotes from cron manpage:
In general, the system administrator should not use /etc/cron.d/, but use the standard system crontab /etc/crontab.
Shall a sysadmin add a job to /etc/crontab, /etc/cron.d/ or /var/spool/cron/crontabs/root?
Thanks.
|
/etc/crontab is the historical location for "system" jobs. It's not necessarily used on all systems (eg RedHat 7 and derivatives has an "empty" entry), or it may have commands to call out to cron.daily and others.
/etc/cron.d/* is, essentially, the same as /etc/crontab but split out into separate files. This makes it easy for packages to add new cron entries; just put them in this directory.
So, for example, on CentOS 7:
% rpm -ql sysstat | grep cron
/etc/cron.d/sysstat
% sudo cat /etc/cron.d/sysstat
# Run system activity accounting tool every 10 minutes
*/10 * * * * root /usr/lib64/sa/sa1 1 1
# 0 * * * * root /usr/lib64/sa/sa1 600 6 &
# Generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A
You can see these entries match what would otherwise be in /etc/crontab.
Before cron.d was designed, a script would need to edit /etc/crontab to do the same work, which is harder and more likely to go wrong.
/var/spool/cron/crontabs is the stuff managed by the crontab command.
So...
If a cron job is to be deployed by a package then the package should put the file into /etc/cron.d/. Enterprise automation tools may also do that.
If a sysadmin (or any other user) wants to add a cron job with crontab -e then it will be put into /var/spool/cron/crontabs/
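For illustration, a package (or automation tool) could deploy a job by dropping a file like this into /etc/cron.d/ (the file name and script path are made up); note the extra user field, just as in /etc/crontab:

```
# /etc/cron.d/nightly-backup
0 3 * * * root /usr/local/sbin/nightly-backup.sh
```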
| Differences between `/etc/crontab`, files under `/etc/cron.d/` and `/var/spool/cron/crontabs/root`? |
1,450,463,937,000 |
I have a bash script that is run nightly in a cron job. It needs to do case insensitive file matching, so the script calls
shopt -s nocaseglob
I want to make sure this does not affect other cron scripts. Does this setting persist after this one script ends? Or is this setting enabled only for the duration of the script?
Thanks!
|
Setting options with shopt is a shell setting. It only affects the shell instance that you run it in: it is local to the shell process and to subshells invoked by $(…), (…) and similar constructs. It has no effect on other shell scripts executed concurrently or later, nor even on independent bash scripts that happen to be executed from commands executed by this script.
The same applies to the values and types of variables, as long as they aren't exported. It's also possible to have variables that are local to a function; options are always global, in the sense that if you set them in a function, they remain in place when the function returns.
Environment variables (i.e. exported variables), I/O redirections, resource limits, umask, current directory and a number of other settings apply to the current shell process as well as to all subprocesses (i.e. all commands invoked by that script). They too do not escape to unrelated processes that may be executed concurrently.
| Scope of shopt options in cron script |
1,450,463,937,000 |
If I schedule something with 'at` like,
$ at noon
warning: commands will be executed using /bin/sh
at> echo "Will I be created?" > /tmp/at_test
at> <EOT>
job 12 at Fri Jun 30 12:00:00 2017
And if I reboot the machine before execution time, will my command be executed?
Unlike regular cron which schedules tasks from file, does at store this 'info' somewhere?
|
Jobs are stored in /var/spool/cron/atjobs in Ubuntu for instance.
Each job is a file that records some environment variables (like $PATH) and the current working directory. The host can be rebooted; the jobs survive on disk and will start once the host is up.
man at should tell you more about it.
Note that some Unices have a special cron entry like @reboot.
| Will 'at' command be executed after the reboot? [duplicate] |
1,450,463,937,000 |
Is there a preferably CLI based program I can use to regularly (through cron for example) use to check a mailbox, and download the attachments into a folder?
I have a mailbox called [email protected], I would like to periodically poll the inbox for new emails via POP or IMAP, and grab the attachments to any new emails (they will be photos) and download them to a local folder.
What CLI email utils can do this?
|
Fetchmail is the de facto standard program to retrieve mail over POP or IMAP automatically. You can inject email in the local email system for delivery, or have fetchmail invoke a mail delivery agent such as procmail or maildrop directly.
To extract and possibly strip the attachments, you can use any of the several MIME manipulation tools, such as mpack, metamail.
Here's a simple example using procmail (mda procmail in ~/.fetchmailrc) which saves image attachments and still delivers the mail normally — put this in your ~/.procmailrc:
PHOTO_DROP_DIR=$HOME/photos/incoming
:0c
* ^To: [email protected]
| munpack -q -C "$PHOTO_DROP_DIR"
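For completeness, the fetchmail side could be configured along these lines and polled from cron (hostname, account name and password are placeholders, not from the answer):

```
# ~/.fetchmailrc -- placeholders throughout
poll mail.example.com protocol imap
    user "pics" password "secret"
    mda "/usr/bin/procmail -d %T"
```

with a crontab entry such as `*/15 * * * * fetchmail --silent` to poll every 15 minutes.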
| Periodically download attachments from mail box |
1,450,463,937,000 |
I have cloned a Git repository of some project. I want to automate git pull and compile the project once in a week. I am using a laptop which won't be on 24x7.
Now, I cannot use cron as I should keep the system running at that exact moment. I cannot use anacron either, as it might start before I connect to network.
Is there some option in anacron that will run this particular job only when I am connected to internet? Or should I be using some other different tool for this?
|
Run the job when you connect to the network. Most distributions have a scripting infrastructure that you can plug into, though you will need root permissions. If you connect with NetworkManager or Wicd, they have their own hook infrastructure as well. Add a cron job that only runs if the network is available (and, optionally, only if the job hasn't been performed in a long time), in case the network remains connected for a long time.
You don't specify your distribution, so I'll give an example for a Debian-based distribution. The scripts in /etc/network/if-up.d are executed after an interface is brought up; see the interfaces(5) man page for more information. Write a script /etc/network/if-up.d/zzzz_balki_git_pull such as this:
#!/bin/sh
su balki -c /home/balki/bin/pull_repository_if_old
with a pull_repository_if_old that does something like this (run git pull if the network is available unless there is a timestamp that is less than 7 days old):
#!/bin/sh
set -e
if [ -z "$(find /path/to/.repository.pull.timestamp -mtime -7)" ] &&
ping -c 1 -w 1 repository.example.com >/dev/null 2>/dev/null
then
cd /path/to/repository
git pull
touch /path/to/.repository.pull.timestamp
fi
And a crontab entry on your account:
28 4 * * * /home/balki/bin/pull_repository_if_old
| How to schedule a job that depends on network availability? |
1,450,463,937,000 |
I have a problem with a user's crontab.
Crontab refuses to run any job unless it's scheduled to run every minute (* * * * *).
As soon as you edit the task to run, say, on minute 15 of every hour, it fails to run.
This runs ok every minute:
* * * * * touch /tmp/test01
This fails to run on minute 15 of every hour. It just won't run.
15 * * * * touch /tmp/test02
What's causing this?
How can it be solved?
OS is RedHat 4.
I always edit the cron with crontab -e and EDITOR is set to vi. I've changed back and forth between 15 * * * * (minute can change) and * * * * * and the result is the same. It only likes five asterisks.
HUGE EDIT:
I followed @shane-h's suggestion and tested for */2 * * * * (every other minute) and it worked! Then I discovered something revealing:
I had made a test with this string 37 * * * * touch /tmp/prueba_777777
To my surprise the thing actually ran but look at the date of the file:
-rw-r--r-- 1 orashut dba 0 Aug 8 08:07 prueba_777777
Recently the server was set to the new Venezuela TimeZone, which is now -04:00, but used to be -04:30.
When you run the date command it shows the correct date. When you create a file, the file date in the FS is correct. But somehow cron jobs are running 30 minutes earlier. That's why when I scheduled a job for some minutes in the future it didn't work, because to the cron daemon that time had already passed. If I waited almost an hour it would have run at exactly minutes-30. That's why the touched file is 30 minutes earlier that the schedule's 37.
So the question now is:
It's evident that the cron daemon is working with the old timezone while the rest of the server is working with the new timezone.
How can I fix the cron daemon's understanding of the new timezone?
|
I can't give you a specific answer, but here is how I would go about troubleshooting this:
Try more combinations to find another working expression. How about: */2 * * * * (every other minute), or * */2 * * * (every minute, every other hour). It would be interesting to establish that any relative expression works while fixed expressions do not.
When you save, do you see the confirmation message "New crontab installed?" (or something similar)
Can you do a crontab -l to see your message?
Is this a user crontab or system crontab?
Do you see messages in syslog? If you add a second, * * * * * cron you should see that every minute, and if you add the every-other-minute variant, can you see in syslog just the more frequent job?
Can you configure a working at command?
When you have this figured out, I would suggest using a service I built, https://cronitor.io to keep an eye on your important jobs. It's free for a developer to monitor one job and there are paid plans for business use. I had become too much of an expert cron debugger over the years and knew that if I needed a solution to monitor my important jobs other people probably did too.
from comments:
Cron uses the same system timezone for kicking-off jobs, so I would try just resetting that:
Set your timezone again, just to be thorough: on Red Hat use redhat-config-date
Restart the cron daemon. I think on your OS version: /etc/init.d/crond restart
| My crontab only likes five asterisks (timezone related?) |
1,450,463,937,000 |
I need to run my application in screen when specific users are logged out and kill the screen when someone from my user list logged in. So I am thinking about bash script, which will be called periodically from cron and:
Checks if specific users are logged in.
If nobody is logged in - spawn screen and save pid to file or do nothing if pid file already exists
If someone is logged in - read pid from file and kill screen
I am looking for more sophisticated alternatives, which will eliminate periodically running script from cron.
|
Several obvious options:
modify /etc/profile to add a logout hook (or if your system already has a logout hook file, modify that)
modify the system PAM configuration to add an extra session controller (pam_script library specifically addresses this)
modify the login shell of the users under consideration to something which kills your other program on startup, spawns the real login shell, and then launches the screen program when the real login shell exits
(if you only care about console logins) replace getty
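If you do stay with the cron-polling approach described in the question, the script could look roughly like this (the user list, session name and application path are all assumptions; it also exits quietly on hosts where screen is unavailable):

```shell
#!/bin/sh
# Hypothetical poller: keep the app in a detached screen session while none
# of the watched users is logged in; kill the session when one appears.
USERS="alice bob"      # assumed watch list
SESSION=myapp          # assumed screen session name

command -v screen >/dev/null 2>&1 || exit 0  # nothing to do without screen

# Does any watched user appear in `who` output?
if who | awk -v users="$USERS" '
        BEGIN { n = split(users, u, " ") }
        { for (i = 1; i <= n; i++) if ($1 == u[i]) { found = 1; exit } }
        END { exit !found }'
then
    screen -S "$SESSION" -X quit 2>/dev/null || true   # user present: stop it
else
    screen -list 2>/dev/null | grep -q "[.]$SESSION" \
        || screen -dmS "$SESSION" /path/to/app         # absent: ensure it runs
fi
```

Run from cron every minute or two; using a screen session name instead of a pid file sidesteps the stale-pid-file problem.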
| Run script when specific users logout |
1,431,818,419,000 |
I've started using sudoedit <file> instead of sudo vim <file>. One of the advantages is that it uses my local ~/.vimrc. However, when using sudo crontab -e, it uses /root/.vimrc instead. Is there a way to make sudo crontab -e use my local ~/.vimrc?
Here is a related question, about using sudoedit with vimdiff. However, substituting crontab -e for vimdiff doesn't work.
|
Assuming that you want to be editing root's crontab, sudo must give you root authority. After it does so, crontab will invoke ${VISUAL:-${EDITOR:-vi}} (it'll use $VISUAL unless it doesn't exist; in that case it'll use $EDITOR unless it doesn't exist; in that case it'll use vi).
You have a few possible solutions. They all subvert the security provided by sudo, but you must already be aware of those issues (and be willing to protect your .vimrc) or you wouldn't be using sudoedit in the first place.
The best is probably to add an assignment to the HOME variable on the sudo command line so crontab thinks the HOME directory is different:
sudo HOME=$HOME crontab -e
(That command won't work if there's whitespace in your home directory path!)
| How can I make `sudo crontab -e` use my `sudoedit` environment? |
1,431,818,419,000 |
For a while now, I have had the problem that a gzip process randomly starts on my Kubuntu system, uses up quite a bit of resources and causes my notebook fan to go crazy. The process shows up as gzip -c --rsyncable --best in htop and runs for quite a long time. I have no clue what is causing this, the system is a Kubuntu 14.04 and has no backup plan setup or anything like that. Any idea how I can figure out what is causing this the next time the process appears? I have done a bit of googling already but could not figure it out. I saw some suggestions with the ps command but grepping all lines there did not really point to anything.
|
Process tree
While the process is running try to use ps with the f option to see the process hierarchy:
ps axuf
Then you should get a tree of processes, meaning you should see what the parent process of the gzip is.
If gzip is a direct descendant of init then probably its parent has exited already, as it's very unlikely that init would create the gzip process.
Crontabs
Additionally you should check your crontabs to see whether there's anything creating it. Do sudo crontab -l -u <user> where user is the user of the gzip process you're seeing (in your case that seems to be root).
If you have any other users on that system which might have done stuff like setting up background services, then check their crontabs too. The fact that gzip runs as root doesn't guarantee that the original process that triggered the gzip was running as root as well. You can see a list of all existing crontabs by doing sudo ls /var/spool/cron/crontabs.
Logs
Check all the systems logs you have, looking for suspicious entries at the time the process is created. I'm not sure whether Kubuntu names its log files differently, but in standard Ubuntu you should at least check /var/log/syslog.
Last choice: a gzip wrapper
If none of these lead to any result you could rename your gzip binary and put a little wrapper in place which launches gzip with the passed parameters but also captures the system's state at that moment.
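Such a wrapper might look like this (a sketch: you would first move the real binary aside, e.g. sudo mv /bin/gzip /bin/gzip.real, then install this script as /bin/gzip; the log path is arbitrary):

```shell
#!/bin/sh
# Hypothetical gzip wrapper: log who called us and how, then delegate
# to the real binary (previously renamed to /bin/gzip.real).
{
    date
    echo "args: $*"
    # The parent process usually identifies the culprit.
    ps -o pid,ppid,user,args -p "$$,$PPID" 2>/dev/null
    echo "---"
} >> /tmp/gzip-calls.log

if [ -x /bin/gzip.real ]; then
    exec /bin/gzip.real "$@"
fi
```

Once the mystery process appears again, /tmp/gzip-calls.log will show its arguments and parent process; remember to move the real binary back afterwards.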
| A gzip process regularly runs on my system, how do I figure out what is triggering it? |
1,431,818,419,000 |
It's simple enough to use cron to schedule a job to occur periodically. I'd like to have something occur less regularly -- say, run the job, then wait 2 to 12 hours before trying again. (Any reasonable type of randomness would work here.) Is there a good way to do this?
|
You could use the command 'at'
at now +4 hours -f commandfile
Or
at now +$((RANDOM % 11 + 2)) hours -f commandfile
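A common variation is to let the job reschedule itself, so each run picks a fresh random delay (a sketch; RANDOM is a bash feature, and the at invocation is guarded so the script degrades gracefully where at is absent):

```shell
#!/bin/bash
# Hypothetical self-rescheduling job: do the work, then queue the next
# run a random 2-12 hours from now.
delay=$(( RANDOM % 11 + 2 ))     # 2..12 inclusive

echo "doing the periodic work (placeholder)"

if command -v at >/dev/null 2>&1; then
    echo "\"$0\"" | at now + "$delay" hours \
        || echo "warning: could not queue the next run" >&2
fi
```

Start it once by hand (or from @reboot) and it keeps itself going at irregular intervals.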
| Schedule job at irregular intervals |
1,431,818,419,000 |
I'm trying execute a basic shutdown crontab to run M-F at 10PM. So I did the following:
sudo crontab -e
Once inside of the crontab I added the following line:
0 22 * * 1-5 shutdown now
The job doesn't seem to be running properly and I cannot find any errors in /var/log/syslog. Is there anything glaringly wrong here?
|
Your problem is probably that the PATH in your crontab file is limited and does not include /sbin where shutdown is most likely located.
You should therefore use the full path for shutdown (you can check that with sudo which shutdown ):
0 22 * * 1-5 /sbin/shutdown now
From man 5 crontab:
Note in particular that if you want a PATH other than "/usr/bin:/bin",
you will need to set it in the crontab file.
Instead of specifying /sbin/shutdown you could set PATH at the top of the crontab. Note that cron does not perform variable substitution in these assignments, so $PATH cannot be referenced there; spell the value out instead:
PATH=/sbin:/usr/sbin:/usr/bin:/bin
0 22 * * 1-5 shutdown now
| Crontab not working |
1,431,818,419,000 |
I am using servers (Debian 7) and I'm currently running cron-apt to e-mail me when there are new upgrades available.
Is the following command safe to run when new upgrades are shown?
sudo apt-get dist-upgrade
Are there any checks I should do before upgrading?
I'm a little concerned that simply upgrading everything every time I get an email might cause failures.
|
sudo apt-get dist-upgrade is very safe to run as it won't do anything to the system, instead stopping to ask for your confirmation ;) You would have to add a -y switch, which is intended for unattended upgrades and makes apt assume that you always answer 'yes' to questions: sudo apt-get -y dist-upgrade. The man page states that
If an undesirable situation, such as changing a held package, trying
to install a unauthenticated package or removing an essential package
occurs then apt-get will abort
but running dist-upgrade unattended is always risky so you may want to avoid that.
You can always check what apt would do by adding a -s switch, like so: sudo apt-get -s dist-upgrade. This switches apt into simulation mode, in which no changes are made and you can safely review all the changes apt would make to the system.
There is also a more conservative mode of running apt, namely apt-get upgrade. The man page for apt-get is very clear on what it does:
Packages currently installed with new versions available are retrieved
and upgraded; under no circumstances are currently installed packages
removed, or packages not already installed retrieved and installed.
New versions of currently installed packages that cannot be upgraded
without changing the install status of another package will be left at
their current version.
In my original answer I somehow assumed you're going to run dist-upgrade via cron, which, after reading more carefully, does not seem to be the case. However I'm leaving the relevant paragraph as a general comment:
It is not advisable to run sudo apt-get -y dist-upgrade via cron, especially if your apt sources happen to point to a testing branch (which generally should not happen on servers, especially in production) as you may end up with an unusable system. You're relatively safe if you're using Debian's stable branch, but I'd still recommend attending upgrades yourself.
Anyway, if you're doing a dist-upgrade that is going to perform serious changes you should always have a backup. Just in case.
| Upgrading packages automatically |
1,431,818,419,000 |
I want to schedule my script for last Saturday of every month.
I have come up with the below:
45 23 22-28 * * test $(date +\%u) -eq 6 && echo "4th Saturday" && sh /folder/script.sh
Is this correct or I need to change it?
I can't test it right now as it will be invoked only on last saturday.
Please advise.
I have the below for last sunday of every month but i can't understand much of it. The first part gives 24 which is sunday and after -eq gives 7 which i don't know what it means:
00 17 * * 0 [ $(cal -s | tail -2 | awk '{print $1}') -eq $(date | awk '{print $3}') ] && /folder/script.sh
Can I modify the above to get last saturday?
With Romeo's help I was able to come up with the answer below:
00 17 * * 6 [ $(cal -s | tail -2 | awk '{print $7}') -eq $(date | awk '{print $3}') ] && /folder/script.sh
|
Your logic will not work: depending on whether the month has 28, 29, 30 or 31 days, its last Saturday can fall anywhere from the 22nd to the 31st, so a fixed 22-28 range is not reliable. For this reason the best way to do the check is to run the script every Saturday and test inside it whether the date is in the last 7 days of the month:
45 23 * * 6 sh /folder/script.sh
and add in your script (here assuming the GNU implementation of date) something like:
if [ "$(date -d "+7 day" +%m)" -eq "$(date +%m)" ]
then echo "This is not one of last 7 days in month.";exit
fi
<rest of your script>
About your line in cron you should edit it to start like this:
00 17 * * 6
(6 means Saturday, 0 or 7 mean Sunday)
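The same end-of-month test can also be inlined in the crontab entry itself, avoiding the extra logic in the script (GNU date assumed, as above; note that % must be escaped as \% inside a crontab):

```
45 23 * * 6 [ "$(date -d "+7 day" +\%m)" != "$(date +\%m)" ] && sh /folder/script.sh
```

The condition is true only when a date 7 days ahead lands in a different month, i.e. on the month's last Saturday.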
| Cron entry for last Saturday of every month |
1,431,818,419,000 |
we want to delete all files of the following job with the same time on midnight
0 0 * * * root [[ -d /var/log/ambari-metrics-collector ]] && find /var/log/ambari-metrics-collector -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete
0 0 * * * root [[ -d /var/log/kO ]] && find /var/log/Ko -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete
0 0 * * * root [[ -d /var/log/POE ]] && find /var/log/POE -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete
0 0 * * * root [[ -d /var/log/REW ]] && find /var/log/REW -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete
is it ok to run all then on the same time?
Does cron run them one after another, or all at the same time?
|
Yes, it is perfectly acceptable to have cron schedule multiple jobs at the same time.
They will be started in the order they appear in the cron table, but cron does not wait for one job to finish before launching the next: all four are started one after the other within a few milliseconds of midnight -- simultaneously for all practical purposes.
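That said, the four entries can also be collapsed into a single job, which keeps the crontab shorter and guarantees the directories are processed strictly one after another (a sketch using the paths from the question; the script path in the comment is illustrative):

```shell
#!/bin/sh
# Run from a single cron entry, e.g.:
#   0 0 * * * root /usr/local/sbin/clean-old-logs.sh
for d in /var/log/ambari-metrics-collector /var/log/kO /var/log/POE /var/log/REW
do
    if [ -d "$d" ]; then
        find "$d" -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete
    fi
done
```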
| is it ok to run cron job with the same time? |
1,431,818,419,000 |
I was trying to automate some auto cleaning of my Ubuntu System, with Cron Jobs
I tried to simplify this:
sudo find /var/log -type f -name "*.1.gz" -delete
sudo find /var/log -type f -name "*.2.gz" -delete
sudo find /var/log -type f -name "*.3.gz" -delete
sudo find /var/log -type f -name "*.4.gz" -delete
,etc...
into one command like the one below, but it is not working, probably because I don't know the right syntax...
for i=[^0-9]; sudo find /var/log -type f -name "*.$i.gz"
I tried similar ones, but they didn't work either:
$i=[^0-9]; sudo find /var/log -type f -name "*.$i.gz"
i=[^0-9]; sudo find /var/log -type f -name "*.$i.gz"
i=[^0-9]+$; sudo find /var/log -type f -name "*.$i.gz"
I can't see any output from these last four... and some of them produce errors...
So what is the correct command / syntax?
And any other ideas to keep my "mini" server clean?
Other question:
If I run sudo find / -type f -name "*.7.gz"
this one will appear:
"/usr/share/doc/libruby1.9.1/NEWS-1.8.7.gz"
I can work around this by running: sudo find / -type f -name "*log.7.gz"
BUT I will probably skip those with *error.(0-9).gz extension and many others...
Any idea to clean old logs under / without find/remove like these ones:
"/usr/share/doc/libruby1.9.1/NEWS-1.8.7.gz"
EDIT
in my /etc/logrotate.d are:
/var/log/apache2/*.log {
weekly
missingok
rotate 52
compress
delaycompress
notifempty
create 640 root adm
sharedscripts
postrotate
if /etc/init.d/apache2 status > /dev/null ; then \
/etc/init.d/apache2 reload > /dev/null; \
fi;
endscript
prerotate
if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
run-parts /etc/logrotate.d/httpd-prerotate; \
fi; \
endscript
}
/var/log/apport.log {
daily
rotate 7
delaycompress
compress
notifempty
missingok
}
/var/log/apt/term.log {
rotate 12
monthly
compress
missingok
notifempty
}
/var/log/apt/history.log {
rotate 12
monthly
compress
missingok
notifempty
}
/var/log/aptitude {
rotate 6
monthly
compress
missingok
notifempty
}
/var/log/cups/*log {
daily
missingok
rotate 7
sharedscripts
prerotate
if [ -e /var/run/cups/cupsd.pid ]; then
invoke-rc.d --quiet cups stop > /dev/null
touch /var/run/cups/cupsd.stopped
fi
endscript
postrotate
if [ -e /var/run/cups/cupsd.stopped ]; then
rm /var/run/cups/cupsd.stopped
invoke-rc.d --quiet cups start > /dev/null
sleep 10
fi
endscript
compress
notifempty
create
}
/var/log/dpkg.log {
monthly
rotate 12
compress
delaycompress
missingok
notifempty
create 644 root root
}
/var/log/alternatives.log {
monthly
rotate 12
compress
delaycompress
missingok
notifempty
create 644 root root
}
# - I put everything in one block and added sharedscripts, so that mysql gets
# flush-logs'd only once.
# Else the binary logs would automatically increase by n times every day.
/var/log/mysql.log /var/log/mysql/mysql.log /var/log/mysql/mysql-slow.log /var/log/mysql/error.log {
daily
rotate 7
missingok
create 640 mysql adm
compress
sharedscripts
postrotate
test -x /usr/bin/mysqladmin || exit 0
# If this fails, check debian.conf!
MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
if [ -z "`$MYADMIN ping 2>/dev/null`" ]; then
# Really no mysqld or rather a missing debian-sys-maint user?
# If this occurs and is not a error please report a bug.
#if ps cax | grep -q mysqld; then
if killall -q -s0 -umysql mysqld; then
exit 1
fi
else
$MYADMIN flush-logs
fi
endscript
}
/var/log/pm-suspend.log /var/log/pm-powersave.log {
monthly
rotate 4
delaycompress
compress
notifempty
missingok
}
/var/log/ppp-connect-errors {
weekly
rotate 4
missingok
notifempty
compress
nocreate
}
/var/log/syslog
{
rotate 7
daily
missingok
notifempty
delaycompress
compress
postrotate
reload rsyslog >/dev/null 2>&1 || true
endscript
}
/var/log/mail.info
/var/log/mail.warn
/var/log/mail.err
/var/log/mail.log
/var/log/daemon.log
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
postrotate
reload rsyslog >/dev/null 2>&1 || true
endscript
}
/var/log/speech-dispatcher/speech-dispatcher.log /var/log/speech-dispatcher/speech-dispatcher-protocol.log {
daily
compress
missingok
sharedscripts
rotate 7
postrotate
/etc/init.d/speech-dispatcher reload >/dev/null
endscript
}
/var/log/speech-dispatcher/debug-epos-generic /var/log/speech-dispatcher/debug-festival /var/log/speech-dispatcher/debug-flite {
daily
compress
missingok
sharedscripts
rotate 2
postrotate
/etc/init.d/speech-dispatcher reload >/dev/null
endscript
}
/var/log/ufw.log
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
postrotate
invoke-rc.d rsyslog reload >/dev/null 2>&1 || true
endscript
}
/var/log/unattended-upgrades/unattended-upgrades.log
/var/log/unattended-upgrades/unattended-upgrades-shutdown.log
{
rotate 6
monthly
compress
missingok
notifempty
}
/var/log/upstart/*.log {
daily
missingok
rotate 7
compress
notifempty
nocreate
}
/var/log/vsftpd.log
{
create 640 root adm
# ftpd doesn't handle SIGHUP properly
missingok
notifempty
rotate 4
weekly
}
and in /etc/logrotate.conf:
# see "man logrotate" for details
# rotate log files weekly
weekly
# use the syslog group by default, since this is the owning group
# of /var/log/syslog.
su root syslog
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp, or btmp -- we'll rotate them here
/var/log/wtmp {
missingok
monthly
create 0664 root utmp
rotate 1
}
/var/log/btmp {
missingok
monthly
create 0660 root utmp
rotate 1
}
# system-specific logs may be configured here
Can I, for example, set all logs to rotate at 100k with rotate 3? How? And how long will logs be kept at minimum with that configuration?
|
No need to use a for-loop here, you can just use find:
sudo find /var/log/ -type f -regex '.*\.[0-9]+\.gz$' -delete
However, as it was suggested, check the manual page of logrotate for ways to reduce the number of files.
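Regarding the edit: yes, logrotate can rotate on size rather than on a schedule. A hedged sketch (path and numbers illustrative) that you could drop into /etc/logrotate.d/ would rotate a log once it exceeds 100k and keep 3 compressed rotations, so at most the live file plus three old copies survive:

```
/var/log/myapp/*.log {
    size 100k
    rotate 3
    compress
    missingok
    notifempty
}
```

How long logs last then depends entirely on how fast they grow: with size-based rotation there is no fixed minimum retention time.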
| command to clean up old log files |
1,431,818,419,000 |
We're setting up an SGE cluster with CentOS 6. My sysadmin's policy is that applications not installed via RPM (i.e. installed via other means, like make install) should go in a non-standard directory, in this case something like /share/apps/install/bin/. The path for this is currently added to most sessions (login, qlogin, etc) via /share/apps/etc/environment.sh which is called by /etc/bashrc. environment.sh also appends some stuff to the PERL5LIB.
The problem that I'm running into is that the /share/apps/install/bin is not added to some instances, e.g. things called out of a crontab.
I know I can manually and explicitly set PATH=/bin:/usr/bin:/blah/blah:... within my personal crontab or within any given script or crontab entry, but what I'm hoping is that there's a setting somewhere outside of /etc/profile or /etc/bashrc that would put the non-standard .../bin directory into all PATHs for all users.
|
What we ended up doing was a multi-pronged solution to avoid any path issues. Depending on the use case, we used one or more of the following:
Used absolute paths to the binaries installed in non-standard places instead of expecting the binary to be on the path. This was used for tools that have few, if any, non-standard external dependencies and/or work in isolation.
Created and used a wrapper script for a tool that set up the environment as needed; manually setting the PATH=... within that script and/or running source $HOME/.bashrc as appropriate. This was used for tools that needed other tools, but were otherwise able to run on our cluster.
Created a container (Docker in our case) including the binaries and a more complex setup. This was used for tools that require an environment significantly different from our standard cluster setup.
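For the second option, a wrapper can be as small as this (the tool name and paths are illustrative; the point is that the environment is built explicitly rather than inherited from whatever context cron or SGE provides):

```shell
#!/bin/sh
# Hypothetical wrapper for a tool installed under /share/apps/install/bin:
# set the environment from scratch so cron, SGE and login shells all agree.
PATH=/share/apps/install/bin:/usr/bin:/bin
export PATH
PERL5LIB=/share/apps/install/lib/perl5${PERL5LIB:+:$PERL5LIB}
export PERL5LIB

if command -v mytool >/dev/null 2>&1; then
    exec mytool "$@"
fi
echo "mytool is not on PATH ($PATH)" >&2
```

Cron entries then call the wrapper instead of the bare tool, and no global PATH change is needed.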
| Globally change path for all users, even in cron |
1,431,818,419,000 |
I'm trying to write a very basic cron job but it doesn't seem to be saving. Here's what I've done:
1) crontab -e
This opens up a file with vim.
2)
#!usr/bin/env python
0 23 * * 0 ~/Desktop/SquashScraper/helpfulFunctions.py
3) :wq
4) crontab -l
Nothing shows up and I get this message:
crontab: no crontab for ben
I've looked around and most people with similar issues had editor problems. My crontab opens correctly with vim, so that doesn't seem to be the issue.
Any idea why this might not be working / saving properly?
Thanks,
bclayman
Edit to include the error message shown on saving:
crontab: "/usr/bin/vi" exited with status 1
|
For some reason /usr/bin/vi is not working correctly on your machine as you can tell from the error message:
crontab: "/usr/bin/vi" exited with status 1
What happened there is that when you leave vi it is producing an error code. When crontab sees that vi exited with an error code, it will not trust the contents of the file vi was editing and simply doesn't make any changes to your crontab.
You can try to investigate further why vi is not working, or if you prefer to, you can use a completely different editor. For example if you prefer to use vim, you can type:
EDITOR=/usr/bin/vim crontab -e
Alternatively you can keep the "official" version of your crontab under your home directory. Then edit the version under your home directory and finally install it using:
crontab filename
| Cron job not saving |
1,431,818,419,000 |
If I have a crontab job that runs, for instance, every hour, but that job may take more than an hour to complete, will the next job fail to run?
|
No, cron jobs run in parallel if you do not implement some locking mechanism.
See Quick-and-dirty way to ensure only one instance of a shell script is running at a time and Correct locking in shell scripts? for possible solutions.
A simpler way is to use lockfile, like in this answer
or the run-one package (see this answer)
- thanks to gertvdijk for suggesting it.
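On Linux, the simplest of these mechanisms is util-linux flock: wrap the command in the crontab entry, and an invocation that finds the lock still held just exits instead of overlapping (a sketch; the lock path and job path are arbitrary):

```shell
#!/bin/sh
# In a crontab this would be, e.g.:
#   0 * * * * flock -n /tmp/hourly-job.lock /path/to/job.sh
# Demonstration: while a first holder sleeps, a second non-blocking (-n)
# attempt fails immediately instead of running concurrently.
LOCK=/tmp/hourly-job.lock

flock -n "$LOCK" sleep 1 &      # first instance: holds the lock for 1s
sleep 0.2                       # give it time to acquire the lock
if flock -n "$LOCK" true; then
    echo "lock was free"
else
    echo "another instance is still running"
fi
wait
```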
| Can a crontab job run concurrently with itself? |
1,431,818,419,000 |
Is it a good idea to create a cron job for apt-get update; apt-get upgrade for my webserver?
So not apt-get dist-upgrade.
|
Yes, to a limited extent. But you don't have to. There's a package called
unattended-upgrades that will do it for you.
Description-en: automatic installation of security upgrades
This package can download and install security upgrades automatically
and unattended, taking care to only install packages from the
configured APT source, and checking for dpkg prompts about
configuration file changes.
.
This script is the backend for the APT::Periodic::Unattended-Upgrade
option.
This package is intended for use with security upgrades, as the description suggests. Automated installation of security upgrades is a reasonably safe option. Doing other kinds of unattended upgrades is riskier.
| Is it safe to set a cron job for apt-get upgrade? [closed] |
1,431,818,419,000 |
Problem: When running the ssh-keyscan command in cron it emails me the output of ssh-keyscan every day. The email simply contains the following.
# <hostname> SSH-2.0-OpenSSH_5.3
My (simplified) cron job:
host=`uname -n`
SSHKey=`ssh-keyscan $host`
echo $SSHKey >> /root/.ssh/known_hosts
My question: How do I prevent ssh-keyscan from writing anything to the shell?
|
Redirect stderr into /dev/null:
host=`uname -n`
SSHKey=`ssh-keyscan $host 2> /dev/null`
echo $SSHKey >> /root/.ssh/known_hosts
| Prevent ssh-keyscan from generating output |
1,431,818,419,000 |
I'm relatively new to Linux and I'm working on a CronTab for scheduled backups.
I figured out how to do multiple sequential jobs with ; and &&, but I'm unsure how the dependency affects the sequence.
My cron job looks like this:
# Every day at 0:00 (reboot server and do the hourly backup procedure).
0 0 * * * sh shutdown.sh && sh backup.sh ; sh nas.sh ; sh ftp.sh ; sh startup.sh
What I want to happen is to run shutdown.sh and continue the sequence if it's successful, but cancel it if it fails. My fear is it will only skip sh backup.sh but then continue the sequence.
Will it work as intended? If not, would something like shutdown && (backup ; nas ; ftp ; startup) be possible?
|
Why don't you test it with some dummy commands that you know will work or fail?
$ ls foo && echo ok1 ; echo ok2
ls: cannot access foo: No such file or directory
ok2
$ ls foo && (echo ok1 ; echo ok2)
ls: cannot access foo: No such file or directory
So it seems like your intuition was correct, and you need the second structure.
As suggested by mikeserv, for testing in general, you can use true (or :) and false, instead of ls of a non-existent file. Hence,
$ false && echo ok1 ; echo ok2
ok2
$ false && (echo ok1 ; echo ok2)
| Execution order with multiple commands |
1,431,818,419,000 |
On our linux server we have a lot of entries.
Many entries look like this:
15 13 * * 3 /very/long/path/to/binary/run.sh ...
These entries would be easier to maintain if I could write:
15 13 * * 3 $FPATH/run.sh
Where could I write this mapping:
FPATH=/very/long/path/to/binary
|
This works perfectly, i.e.:
$ crontab -l
TESTDIR=/home/user/test
* * * * * "$TESTDIR"/script.sh
Have a look at
man 5 crontab
more info is found there.
| variables in crontab |
1,431,818,419,000 |
In macOS, cronjobs configured with crontab have the command output and/or error messages (if the cron doesn't succeed) written to the /var/mail/$USER file. Can this be prevented? I've tried:
* * * * * /sbin/ping -c1 website.com ... 2>&1 >/dev/null
Whether it succeeds or fails to reach the domain, the ping output is saved to the mail file...
|
The cron daemon does not write to /var/mail/$USER, it sends an email to the user whenever a job outputs anything or fails, which in turn is written to that file (the user's mail inbox) by the system's mail delivery service.
To turn off the sending of email from the cron daemon, set the MAILTO variable to an empty value in the crontab file:
MAILTO=""
# rest of file with job schedules goes here
From the crontab(5) manual on a macOS system:
In addition to LOGNAME, HOME, and SHELL, cron(8) will look at MAILTO if it has
any reason to send mail as a result of running commands in "this" crontab. If
MAILTO is defined (and non-empty), mail is sent to the user so named. If MAILTO
is defined but empty (MAILTO=""), no mail will be sent. [...]
If you turn off mailing of job output and error notifications in this way, you may want to log the job in some other way, for example,
* * * * * /sbin/ping -c1 website.com ... >>/tmp/ping.log 2>&1
0 0,12 * * * mv /tmp/ping.log /tmp/ping.log.old
This would add the output of ping to a specific file, and also move that file away at midnight and noon (note that your redirections to /dev/null were back-to-front).
You may also want to explicitly send an email if the ping fails:
* * * * * /sbin/ping -c1 website.com ... >>/tmp/ping.log 2>&1 || mail -s "ping failed, do something" "$LOGNAME"
This would send an empty email with the specified title whenever ping returned a non-zero exit status.
Or, you could just get your redirections right from the start and don't bother with MAILTO or logfiles or sending emails:
* * * * * /sbin/ping -c1 website.com ... >/dev/null 2>&1
This would send you an email whenever the ping failed, but would not send you the output of the command every minute.
| Prevent cronjob's from writing to /var/mail/$USER? |
1,431,818,419,000 |
First of all, I'm using elementary OS (based on Ubuntu 12.04).
I have a cron job set up to run a script every day at 23:30:
30 23 * * * /path_to_script/
Is there a way to add it to cron via a single terminal command? All the examples I've seen involve opening the crontab first via crontab -e and then adding the job there.
I'd like a single command for doing this, something like:
cron add-job '30 23 * * * /path_to_script/'
|
You can do it with this:
{ crontab -l; echo "30 23 * * * /path_to/script/"; } | crontab -
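If you do this often, the one-liner can be wrapped in a small helper that also skips duplicates. A sketch, exercised against a plain file standing in for `crontab -l` output so it can run without touching a real crontab (the job path is illustrative):

```shell
#!/bin/sh
# Sketch of append-if-missing logic: add a job line only if an
# identical line is not already present in the "crontab" file.
append_job() {
    tab_file=$1 job=$2
    grep -Fxq "$job" "$tab_file" 2>/dev/null && return 0
    printf '%s\n' "$job" >> "$tab_file"
}

tab=$(mktemp)
append_job "$tab" '30 23 * * * /path_to_script/backup.sh'
append_job "$tab" '30 23 * * * /path_to_script/backup.sh'   # duplicate: skipped
count=$(wc -l < "$tab")
echo "$count"
rm -f "$tab"
```

To operate on the real crontab, `append_job` would read from `crontab -l` and pipe the result back into `crontab -`, as in the one-liner above.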
| Add cron job via single command [duplicate] |
All of a sudden my cron jobs don't work any more.
When I type
crontab -e
I am presented an empty file in the editor
/tmp/crontab.3fMYGi/crontab
And by empty I mean that not even this standard cron job info is there any more.
Is there a way to recover my cron jobs?
|
As @Anthon said in comments, you most likely have lost your crontab entries. On the off chance you haven't, they would be in the directory /var/spool/cron/, in a file named after your username.
If they aren't there either then they're lost and you'll have to recreate them or get them from backups.
You might also get lucky and find the remnant of the tmp file used to edit them when you run the command crontab -e. These files would be in /tmp/crontab.*.
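A quick sweep over those locations can be scripted; this sketch assumes Debian/Ubuntu-style paths and should be run as root so the spool directory is readable:

```shell
#!/bin/sh
# Look in the usual spots for a surviving copy of the crontab:
# the per-user spool and leftover crontab editing temp files.
me=$(id -un)
for d in /var/spool/cron/crontabs /var/spool/cron /tmp; do
    [ -d "$d" ] || continue
    find "$d" -maxdepth 1 \( -name 'crontab.*' -o -name "$me" \) 2>/dev/null
done
searched=yes
```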
| Suddenly lost all cronjobs |
1,431,818,419,000 |
0 0 * * 1 root hostname >> /tmp/hostname.txt
The above cron entry should run at midnight on Monday and create a file in /tmp called hostname.txt with the output of the hostname command. But only a blank file is created. Why?
|
When dealing with crontab entries it's important to make the distinction between a system level cron, typically in one of these locations:
/etc/crontab
/etc/cron.d/*
/etc/cron.daily/*
/etc/cron.weekly/*
etc.
Or the user-level variety, which are stored in the directory /var/spool/cron:
$ sudo ls -l /var/spool/cron/
total 0
-rw------- 1 saml root 0 Jun 6 06:43 saml
You access these with the command crontab -e as a specific user. Even root can have these.
Your problem
The example you've included is the type of line you'd specify when making a system level type of cron. The only difference is the inclusion of the user with which the command should be run. This isn't necessary when creating a crontab -e type of entry since it's redundant, given the crontab entry is already designated to the user that created it.
So simply changing your line to this:
0 0 * * 1 hostname >> /tmp/hostname.txt
Fixes your issue.
How do the daily, weekly crons work?
You'll typically see an entry in your main /etc/crontab file like so:
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
These govern when the scripts in the respective directories will get run. So the dailies run every morning at 4:02 AM, local time.
So if you want to run the command hostname >> /tmp/hostname.txt daily, you'd put it in a script, make it executable, and put the script file in the cron.daily directory.
/etc/cron.daily/catchhostname.bash
#!/bin/bash
hostname >> /tmp/hostname.txt
cron.daily
# ls -l |grep catch
-rwxr-xr-x 1 root root 118 Feb 28 2013 catchhostname.bash
| why doesn't 0 0 * * 1 root hostname >> /tmp/hostname.txt work as a crontab? |
I have a cron job on Debian:
1 * * * * /home/paradroid/bin/myscript.sh >/dev/null
There is an installed and configured MTA, and I have received emails when the script has a syntax error, so I have always thought that I would be informed when anything goes wrong.
The script downloads a file using curl through a proxy. Recently the proxy has failed so curl could not download. It is the last command in the script, which has been exiting with the error code 7.
I thought I would be getting emails when this happens, but I have not.
How come I get email alerts from something like syntax errors in the script, but I have not been getting them when the script fails to do its job and exits with an error code?
Is something wrong, or do I have to get the script to email me directly when there has been an error with curl?
|
After talking to some people in #curl on Freenode, a dev mentioned that there was a bug in curl for around a decade (fixed in 7.23.0) where an option-parsing error caused -Ss (--show-error --silent) to be treated as just -s (--silent), so all output was suppressed, including the error messages.
The solution (before 7.23.0) is to use -sS instead, or long options.
That nasty bug caused me a lot of confusion and let a rather important cron job fail without me knowing about it for several days!
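For the record, the robust invocation pattern for cron jobs, sketched with a local file:// URL so it can run without a network: -s silences the progress meter, -S re-enables error messages (which cron will mail to you), and -f turns HTTP errors into a non-zero exit status.

```shell
#!/bin/sh
# Cron-friendly curl wrapper: quiet on success, loud on failure.
fetch() { curl -sS -f -o /dev/null "$1"; }

# Exercise it against a local file:// URL, no network needed.
tmp=$(mktemp)
echo 'payload' > "$tmp"
if fetch "file://$tmp"; then
    status=ok
else
    status=fail
fi
echo "$status"
rm -f "$tmp"
```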
| Crontab job not emailing on failure |
I'm using Devuan ASCII (which is more or less Debian 9, Stretch). Now, my /var/log/auth.log has a bunch of these entries:
Jan 6 09:45:01 mybox CRON[20951]: pam_env(cron:session): Unable to open env file: /etc/environment: No such file or directory
Jan 6 09:45:01 mybox CRON[20951]: pam_unix(cron:session): session opened for user root by (uid=0)
which apparently get generated when I su.
Why is cron/pam_env/pam_unix trying to open that file in the first place, rather than checking whether it exists?
If they legitimately expect it, why isn't it there?
What should I do about this?
|
Answering all of your questions
Why is cron/pam_env/pam_unix trying to open that file in the first place?
See bug #646015. In some cases (such as locale-related settings) this file is deprecated, but it is still used system-wide, and a log entry is made whenever it is missing.
If they legitimately expect it, why isn't it there?
Possibly because the bug isn't fixed after all. Steve Langasek (in bug #646015) said it is: new installations should create the file via the package's postinst script, just as systems upgraded from older releases should already have it.
What should I do about this?
Run dpkg-reconfigure libpam-modules and see if it will create the file through its postinst script.
If that does not work, create the file manually with touch /etc/environment
It's also worth reporting your issue to the Devuan project with details of the problem and your setup, since this issue was supposedly resolved before the Debian/Devuan fork happened.
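For reference, /etc/environment is read by pam_env as plain KEY=value lines — it is not a shell script, so no export statements and no variable expansion. A minimal example (the values are illustrative):

```
# /etc/environment — read by pam_env; simple KEY=value pairs
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
LANG="en_US.UTF-8"
```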
| Cron is trying (and failing) to open env file: /etc/environment |
My script (status.sh) is:
#!/bin/bash
SITE=http://www.example.org/
STATUS=$(/usr/bin/curl -s -o /dev/null -I -w "%{http_code}" $SITE)
if [ $STATUS -eq 200 ]
then
echo $STATUS >> /home/myuser/mysite-up.log
else
echo $STATUS >> /home/myuser/mysite-down.log
fi
I run:
$ chmod +x /home/myuser/status.sh
Then on my crontab i got:
* * * * * /home/myuser/status.sh
When I run:
$ /home/myuser/status.sh
The file /home/myuser/mysite-up.log contains:
200
But when cron run, the file /home/myuser/mysite-up.log contains:
000
What I am doing wrong?
EDIT:
I modified the script adding:
set -x
as @Sobrique suggested, and the output is:
SITE=http://www.example.org/
/usr/bin/curl -s -o /dev/null -I -w '%{http_code}' http://www.example.org/
STATUS=000
'[' 000 -eq 200 ']'
echo 000
|
The main difference between running a script on the command line and running it from cron is the environment. If you get different behavior, check if that behavior might be due to environment variables. Cron jobs run with only a few variables set, and those not necessarily to the same value as in a logged-in session (in particular, PATH is often different).
If you want your environment variables to be set, either declare them in ~/.pam_environment (if your system supports it) or add . ~/.profile && at the beginning of the cron job (if you declare them in .profile). See also What's the best distro/shell-agnostic way to set environment variables?
In this case, a 000 status from curl indicates that it could not connect to the server. Usually, the network connection is system-wide, so networking behaves the same in cron. However one thing that's indicated by environment variables is any proxy use. If you need a proxy to connect to the web and you've set the environment variable http_proxy in a session startup script, that setting isn't applied in your cron job, which would explain the failure.
Add the option -S to you curl invocation to display error messages (while retaining -s to hide other messages).
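A handy way to reproduce the problem without waiting for the next cron run: execute the command under an environment as sparse as cron's using env -i. Variables such as http_proxy that are set in your login shell will be missing, just as they are under cron:

```shell
#!/bin/sh
# Debugging aid: simulate cron's near-empty environment with env -i.
# Whatever http_proxy is set to in the parent shell, the child sees nothing.
proxy_under_cron=$(env -i HOME="$HOME" PATH=/usr/bin:/bin \
    sh -c 'echo "${http_proxy:-unset}"')
echo "http_proxy as cron would see it: $proxy_under_cron"
```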
| Script with curl works manually but not in cron job |
I am running Raspbian on a Pi and installed cron to schedule a job. I wrote a Python script and I set it to run every 5 minutes. The job is happening every 5 minutes, no problems, but when I run crontab -l as root and pi, it says there are no jobs. When I run crontab -e as root and as pi they are blank.
I honestly can't remember the exact details of when I set up the job. I know I wrote a line on a document that was formatted like a crontab and I am pretty sure it was done as root.
I have discovered this as I was going to add some more jobs, and would like to locate the other one I made before I get going on adding more.
|
There are two lists of scheduled tasks (crontabs).
Each user (including root) has a per-user crontab which they can list with crontab -l and edit with crontab -e. The usual Linux implementation of cron stores these files in /var/spool/cron/crontabs. You shouldn't modify these files directly (run crontab -e as the user instead), but it's safe to list them to see what's inside. You need to be root to list them.
There is a system crontab as well. This one is maintained by root, and the jobs can run as any user. The system crontab consists of /etc/crontab and, on many systems, files in /etc/cron.d. These files have an additional column: after the 5 date/time fields, they have a “user” field, which is the user that the job will run as. It's common to set up /etc/crontab to run scripts from directories /etc/cron.hourly, /etc/cron.daily, etc. and that's how it's done on Raspbian.
So look in all these places: /var/spool/cron/crontabs/* (you need to be root for this one), /etc/crontab, /etc/cron.*.
You can also get information in the system logs. They won't tell you where the job was listed, but they tell you exactly what command is being executed, so you can search for the command text. For example, this is the entry that runs commands in /etc/cron.hourly every hour:
May 11 07:17:01 darkstar CRON[2480]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
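Putting the list together, a sketch that enumerates every file that can hold a scheduled job on a Debian-style system such as Raspbian (the per-user spool needs root to read; paths may differ on other distributions):

```shell
#!/bin/sh
# List every file that can contain a cron job: the system crontab,
# drop-in directories, periodic-script directories, and per-user spools.
for f in /etc/crontab /etc/cron.d/* /etc/cron.hourly/* /etc/cron.daily/* \
         /etc/cron.weekly/* /etc/cron.monthly/* /var/spool/cron/crontabs/*; do
    [ -e "$f" ] && echo "$f"
done
checked=yes
```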
| Cron job working, but crontab -l says no jobs |
I have several cron jobs that are executed every minute, and I’m thinking of adding some @reboot jobs. They are installed and executed with root privileges.
So here’s what I want to know: will these cron jobs run once the system gets to the login screen after rebooting? Will @reboot entries run after a reboot without me logging in as root?
|
The cron daemon will start cron jobs scheduled with @reboot as soon as the daemon has started after system boot. It does not matter whether any user has had the time to log in on the newly rebooted system or not or whether the job belongs to the root user or any other user on the system. It is likely that such jobs will run before or as a graphical login screen appears if the system uses one. Basic daemons, like the cron daemon, are usually started before login display managers.
As an example, OpenBSD, like Ubuntu and macOS, is using the Vixie cron daemon. It executes @reboot jobs before even entering its main loop (code is here).
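One practical caveat: because @reboot jobs start this early, they can fire before networking or other services are fully up, so a short delay inside the entry is a common workaround. A sketch of such an entry (the script path is hypothetical):

```
# user crontab (crontab -e): wait a minute for the network, then start
@reboot sleep 60 && /home/pi/bin/start-server.sh
```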
| When is a @reboot cron job executed? |
I have a shell script which runs in cronjob. This shell script has to kill a process which is running and start the new process again.
When I run the script manually it works perfectly fine, but when it runs through cron it does not kill the old process but starts a new process along with the old one.
I am using the below line of code to kill the process:
kill -9 ps | grep "server1" | grep -v grep | awk '{ print $1 }'
|
You have to indicate what to kill:
kill -9 $(ps | grep "server1" | grep -v grep | awk '{ print $1 }')
You can also use the trick:
kill -9 $(ps | grep "server[1]" | awk '{ print $1 }')
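If pgrep/pkill are available (they are on most Linux systems), the whole pipeline can be replaced. One cron-specific gotcha is also worth knowing: a bare ps may list almost nothing under cron because the job has no controlling terminal, so use ps -e if you keep the pipeline approach. The sketch below demonstrates pkill on a throwaway background sleep rather than a real server1, and tries SIGTERM before resorting to SIGKILL:

```shell
#!/bin/sh
# pkill replaces the ps|grep|awk pipeline; demonstrated on a
# disposable background process instead of the real server.
command -v pkill >/dev/null 2>&1 || { echo "pkill not installed"; exit 0; }

sleep 300 </dev/null >/dev/null 2>&1 &
victim=$!
pkill -TERM -P $$ sleep            # children of this shell named "sleep"
wait "$victim" 2>/dev/null
if kill -0 "$victim" 2>/dev/null; then alive=yes; else alive=no; fi
kill -9 "$victim" 2>/dev/null      # cleanup in case TERM didn't work
echo "still alive: $alive"
```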
| How to kill a process in shell script which is running through cron? |
Is it possible to switch the current tty/VT from a bash script or a cron job? I only know the physical keyboard shortcut Ctrl-Alt-Fx.
I would like to show two virtual terminals at different times of day (controlled by cron).
|
Yes, that is possible. You're looking for the chvt command.
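chvt needs to run as root (or to own the console), so for the cron-controlled setup the entries would go in root's crontab. A sketch with illustrative times and terminal numbers:

```
# root's crontab: switch to VT 2 at 08:00 and back to VT 7 at 18:00
0 8  * * * /bin/chvt 2
0 18 * * * /bin/chvt 7
```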
| How to switch tty with a script / cronjob |
Problem statement
I have 5 Solaris boxes; some run Solaris 10 and some Solaris 9.
All of them have many cron jobs in their crontabs.
I would like to know the number of active cron jobs, which I currently count by hand.
Now I am looking for a command (I am using the bash shell) to count the number of active cron jobs.
I have tried crontab -l|wc -l, but my crontab contains many comments lines which are also counted with my command.
What I have tried
crontab -l|wc -l
What I am expecting
A bash shell command to count the number of active cron jobs (excluding comment lines).
|
You need to keep only the lines that start with a schedule field — a digit (the minute), * or @ — and delete everything else. To get that, strip any leading whitespace first. This gets rid of comments, blank lines, variable assignments, etc.
crontab -l 2>/dev/null | sed 's/^ *//;/^[*@0-9]/!d' | wc -l
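To see what the filter keeps and drops, here it is exercised on a small sample crontab: the comment, the blank line and the MAILTO assignment are discarded, while schedule lines starting with a digit, * or @ are counted even with leading whitespace:

```shell
#!/bin/sh
# Feed a sample crontab through the same sed filter and count survivors.
sample='# m h dom mon dow command
MAILTO=""

30 23 * * * /home/user/backup.sh
@reboot /home/user/startup.sh
  15 6 * * 1 /home/user/weekly.sh'
count=$(printf '%s\n' "$sample" | sed 's/^ *//;/^[*@0-9]/!d' | wc -l)
echo "$count"
```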
| command for counting number of active cron jobs in crontabs |