| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,459,409,985,000 |
I am trying to write a cron schedule to run a job the Thursday before the second Monday in a month. So far I have this
0 0 8 ? * MON#2
But that runs on the second Monday of the month. Is there a way I can go back four days from that time to the previous Thursday?
For example, for September 2016, the second Monday of the month is the 12th of the month. So I would like to schedule this to run the previous Thursday, which would be the 8th of the month.
|
I don't think you can do that directly in cron.
Instead, with 0 0 * * 4 you can run a script every Thursday, and in that script check whether it is the Thursday before the second Monday of the month.
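A minimal sketch of that wrapper (the script path and job command are assumptions): since the second Monday always falls on day 8-14 of the month, the Thursday four days before it always falls on day 4-10.

```shell
#!/bin/sh
# Scheduled every Thursday via: 0 0 * * 4 /path/to/thursday-check.sh
# The second Monday of a month is always day 8-14, so the Thursday
# four days before it is always day 4-10.
day=$(date +%d)
day=${day#0}   # strip a leading zero so the comparison is plain decimal
if [ "$day" -ge 4 ] && [ "$day" -le 10 ]; then
    echo "Thursday before the second Monday - running job"
    # /path/to/real-job   # hypothetical: put the actual command here
fi
```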
| cron job to run the Thursday before the second Monday in a month |
1,459,409,985,000 |
I'm trying to record the start time of a process kicked off by an @reboot cronjob. I'm using ps -p $$ -o lstart= presently, but I've run into a catch.
My machine (a Raspberry Pi) connects to the network and pulls down an NTP update after cron has started and adjusts the system clock. The time returned by lstart changes after the update (which does make sense, of course).
The problem is now I have two different start times, and it looks to my monitoring as if the process was restarted.
Since the reported start time changes when the NTP update comes down it seems like there's an underlying start-time notion that isn't affected by changes to the system clock (otherwise it would continue saying the process started at the old time). How can I get that underlying start-time from a process?
Excerpt from my system logs:
$ grep -e '@reboot' -e 'Time has been' -C 3 /var/log/syslog
Apr 6 13:17:04 archer triggerhappy[386]: Error opening '/dev/input/event*': No such file or directory
Apr 6 13:17:04 archer kernel: [ 6.721869] usbcore: registered new interface driver brcmfmac
Apr 6 13:17:04 archer kernel: [ 6.930684] brcmfmac: brcmf_c_preinit_dcmds: Firmware version = wl0: Dec 15 2015 18:10:45 version 7.45.41.23 (r606571) FWID 01-cc4eda9c
Apr 6 13:17:04 archer cron[381]: (CRON) INFO (Running @reboot jobs)
Apr 6 13:17:04 archer wpa_supplicant[376]: Successfully initialized wpa_supplicant
Apr 6 13:17:04 archer dphys-swapfile[385]: want /var/swap=100MByte, checking existing: keeping it
Apr 6 13:17:04 archer avahi-daemon[387]: Found user 'avahi' (UID 105) and group 'avahi' (GID 110).
--
Apr 6 13:17:15 archer ntpd_intres[587]: DNS 2.debian.pool.ntp.org -> 65.182.224.39
Apr 6 13:17:15 archer ntpd_intres[587]: DNS 3.debian.pool.ntp.org -> 174.123.154.242
Apr 6 13:17:17 archer dhcpcd[403]: wlan0: no IPv6 Routers available
Apr 6 13:53:40 archer systemd[1]: Time has been changed
Apr 6 13:54:00 archer systemd[1]: Starting user-1000.slice.
Apr 6 13:54:00 archer systemd[1]: Created slice user-1000.slice.
Apr 6 13:54:00 archer systemd[1]: Starting User Manager for UID 1000...
Notice the time-shift - before that point lstart would report 13:17:04 but after it reports 13:53:27.
|
You can ask for the elapsed time in seconds:
ps -p $$ -o etimes=
This will always be accurate and comparable, regardless of what the system thinks the current time is.
You can turn it into an unchanging start value by subtracting it from the current uptime (stored in seconds as the first value in /proc/uptime):
echo $(($(cut -d. -f1 < /proc/uptime) - $(ps -p $$ -o etimes=)))
| Get the process start time irrespective of NTP updates |
1,459,409,985,000 |
In /etc/cron.hourly, there is one file:
-rwxr-xr-x 1 root root 117 Mar 8 20:33 myjob
myjob:
3,18,33,48 * * * * /usr/bin/python /home/me/src/myproject/src/manage.py myjobs > /home/me/log
3,18,25,27,29,31,33,35,37,48 * * * * /bin/echo "testing....." > /home/me/log
/etc/crontab:
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
Why does the log file not appear? Is anything missing? myjob should run at minutes 3, 18, ... of each hour.
|
The scripts in /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly are meant to be run at fixed intervals and are not in the classic crontab format. Put simply, they are scripts, not files in crontab format.
In the case of /etc/cron.hourly, they are simply run every hour.
You have to install a line like yours in a crontab with crontab -e. To keep the job in /etc/cron.hourly instead, you would have to remove the five time fields (i.e. take out 3,18,33,48 * * * *), and it would then run only once an hour.
So in your case, either move the job to /etc/cron.d or add the contents of myjob to your crontab; either way, take it out of the /etc/cron.hourly directory.
For /etc/cron.d, the file additionally needs a user field, like this:
3,18,33,48 * * * * root /usr/bin/python ....
| why /etc/cron.hourly/myjob not working? |
1,459,409,985,000 |
I had programmed a cronjob to restart a service with an apparent memory leak weekly and got an email saying that the killproc command wasn't found.
That's in /sbin/killproc, and I don't want to modify the service script (even though I'd prefer it to use absolute paths), so I'm opting to weasel in my fix by way of cron.
I also don't want to set PATH at the top of the crontab file, as the man page apparently suggests (according to other posts I've seen on the internet, not my man page). How can I structure my crontab line to set the path just for this one command, without squashing root's PATH altogether?
tl;dr;
This is what I want to do
0 0 * * 0 /etc/init.d/tic_minus restart
this is what I want to avoid
To: Stupid
From: All your customers
Stopping tic_minus: /etc/init.d/tic_minus: line 43: killproc: command not found
Starting tic_minus:
|
Use a sub-shell to limit scope:
0 0 * * 0 (export PATH=$PATH:/sbin; /etc/init.d/tic_minus restart)
| How to set the path for one cron command |
1,459,409,985,000 |
Is it possible to set the cron job tool in my website cPanel ('VPS Hosting') to execute a PHP file every 30 seconds?
I am using it to execute the PHP file once a minute with *,*,*,*,*, but I need it to run twice a minute.
I have tried 1/2,*,*,*,* but it's not working.
|
Why don't you run a shell script which runs your command, like:
#!/bin/tcsh -f
start:
mycommand >> /tmp/output
sleep 30
goto start
As someone else said, cron has a granularity of 1 minute.
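Another common workaround (the paths are assumptions) is to schedule the command twice a minute from cron itself, offsetting the second run with sleep:

```
* * * * * /path/to/job
* * * * * sleep 30; /path/to/job
```

This keeps cron in charge of restarts, at the cost of assuming the job itself finishes well within 30 seconds.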
| Run A Cron Job Task Every 30 Seconds [duplicate] |
1,459,409,985,000 |
I have a task which I want to run at 56 minutes past every hour. I'm not sure which of the options below I should use, or what "*/number" means.
An example: 12:56PM, 01:56PM, 02:56PM ...
Use this ?
*/56 * * * * /usr/bin/python3 /home/asd/asd.py
Or
56 * * * * /usr/bin/python3 /home/asd/asd.py
|
Using just 56 in the first field tells cron to run the script at minute 56 of every hour. Setting */56 in the first field means "every 56th minute": within the 0-59 minute range that matches only minutes 0 and 56, so the job would run twice an hour rather than every 56 minutes.
If you want the script to run at 12:56PM, 01:56PM, 02:56PM, and so on, use 56:
56 * * * * /usr/bin/python3 /home/asd/asd.py
| Cronjob schedule |
1,459,409,985,000 |
I am running my runnable jar on a Linux machine (machineA) as shown below. In the runnable jar, I have a Java program which sends an email to me every 15 minutes.
/usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar abc.jar config.properties &
As soon as I start my abc.jar like above, it will be running in the background and there is a class which will keep on sending me email every 15 minutes. I am using Scheduler in my java program which is a thread which wakes up every 15 minutes and send me an email.
Now everything is working fine. Suppose machineA got restarted for some reason or abc.jar got killed for whatever reason, then I am looking for a way so that my abc.jar gets started in the background again automatically.
So I decided to use the Upstart feature in Ubuntu, as I am running Ubuntu 12.04. Here is my config file:
#/etc/init/testlnp.conf
#sudo start testlnp
#sudo stop testlnp
start on runlevel [2345]
stop on runlevel [016]
chdir /export/home/david/tester
respawn
post-stop script
sleep 30
end script
limit nofile 8092 8092
setuid david
exec /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar abc.jar config.properties &
I have my abc.jar file in this directory /export/home/david/tester. Now I started my java program like this one time -
sudo start testlnp
And it started fine, I can see through ps aux | grep java -
david@machineA:~$ ps aux | grep java
david 130691 38.5 0.0 33906208 58636 ? Sl 19:24 0:01 /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar abc.jar config.properties
david 131029 0.0 0.0 8100 936 pts/2 S+ 19:24 0:00 grep --color=auto java
Now after some time, I ran ps aux | grep java again and saw the following. Multiple instances of my abc.jar program? This is what I am not able to understand: why did it happen?
david@slc4b03c-8ixd:~$ ps aux | grep java
david 1746 4.5 0.0 33906208 57808 ? Sl 19:25 0:01 /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar abc.jar config.properties
david 2143 73.0 0.0 33906208 57992 ? Sl 19:25 0:01 /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar abc.jar config.properties
david 2180 0.0 0.0 8100 936 pts/2 S+ 19:25 0:00 grep --color=auto java
david 130691 2.5 0.0 33906208 57492 ? Sl 19:24 0:01 /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar abc.jar config.properties
My main aim is to restart abc.jar if my machine gets restarted or abc.jar gets killed for whatever reason. How do I achieve this? Is there anything wrong in what I am doing with Upstart?
UPDATE:-
This is what I got for PPID -
david@machineA:~$ ps aux | grep java
david 18454 4.5 0.0 33906208 57520 ? Sl 20:01 0:01 /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar abc.jar config.properties
david 18692 27.3 0.0 33906208 57788 ? Sl 20:01 0:01 /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar abc.jar config.properties
david 18779 0.0 0.0 8096 940 pts/2 S+ 20:02 0:00 grep --color=auto java
david@machineA:~$ ps xao pid,ppid,pgid,sid,comm | grep java
18454 1 18453 18453 java
18692 1 18691 18691 java
|
Get rid of the &. It makes the process fork into the background, so Upstart thinks the process died and spawns a new one. Just have the exec line without the ampersand.
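With the ampersand removed, the exec stanza in the Upstart config would read:

```
exec /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -jar abc.jar config.properties
```

Upstart then tracks the java process directly and respawn restarts it only when it actually dies.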
| How to restart my java program automatically if it gets killed? |
1,459,409,985,000 |
After editing /etc/cron.d/anacron and saving the file, are the changes immediately acknowledged by cron? Or do I need to run a command to tell cron to reload /etc/cron.d/anacron?
|
They take effect immediately after saving: cron checks the modification times of /etc/crontab and the files under /etc/cron.d every minute and reloads anything that has changed, so no extra command is needed.
| Edited /etc/cron.d/anacron, changes are immediately in effect? |
1,459,409,985,000 |
It's surely going to sound like a silly question, but I wanted to confirm what this crontab time means:
13 * * * *
Does it really mean launch at the 13th minute of every hour of every day of every week, and so on? Thanks in advance.
|
I can recommend CronSandbox to try it out, better to be safe than sorry.
The output there confirms you're right as well.
| cronjob time 13 * * * * |
1,459,409,985,000 |
I'm trying to learn a little about cron.
I edited crontab -e to have this line:
*/10 * * * * /usr/bin/touch /home/dkb/Desktop/test.txt
It works and I see test.txt changing its "Date Modified" value every ten min in Thunar and there are corresponding entries in /var/log/syslog:
Mar 13 10:50:01 dkb-lappy CRON[3978]: (dkb) CMD (/usr/bin/touch /home/dkb/Desktop/test.txt)
Mar 13 11:00:01 dkb-lappy CRON[4066]: (dkb) CMD (/usr/bin/touch /home/dkb/Desktop/test.txt)
Mar 13 11:10:01 dkb-lappy CRON[4099]: (dkb) CMD (/usr/bin/touch /home/dkb/Desktop/test.txt)
But, if I use
*/10 * * * * /usr/bin/touch /home/dkb/Desktop/$(date +%H:%M:%S).txt
no file is created at all in ~/Desktop and syslog has this:
Mar 13 11:30:01 dkb-lappy CRON[4241]: (dkb) CMD (/usr/bin/touch /home/dkb/Desktop/$(date +)
Mar 13 11:30:01 dkb-lappy CRON[4240]: (CRON) info (No MTA installed, discarding output)
Mar 13 11:40:01 dkb-lappy CRON[4251]: (dkb) CMD (/usr/bin/touch /home/dkb/Desktop/$(date +)
Mar 13 11:40:01 dkb-lappy CRON[4250]: (CRON) info (No MTA installed, discarding output)
I checked that /usr/bin/touch /home/dkb/Desktop/$(date +%H:%M:%S).txt works properly in the terminal.
So what am I doing wrong?
|
crontab treats % specially. You need to escape it with a backslash.
From man 5 crontab:
Percent-signs (%) in the command, unless escaped with backslash (\),
will be changed into newline characters, and all data after the
first % will be sent to the command as standard input.
Thus, to get date to work properly:
*/10 * * * * /usr/bin/touch /home/dkb/Desktop/$(date +\%H:\%M:\%S).txt
| Using "crontab -e" [duplicate] |
1,459,409,985,000 |
Cron:
* */6 * * * /path/to/command
I want this cron to run once in every 6 hour.
Whats wrong with the above cron definition, and why?
|
You have to specify a minute value in the first column. The star there makes it run on each minute value.
10 */6 * * * /path/to/command
will make it run 10 minutes past the hour, every six hours (on all days).
From man 5 crontab: A field may be an asterisk (*), which always stands for 'first-last'. This implies all possible values.
| When will this cron run |
1,459,409,985,000 |
My cron and scripting skills are very poor, but I need to run a job every 5 minutes by user 'cpc'. So I created a script and left it at /root.
My crontab -e entry about it is:
0-59/5 * * * * /root/bi-kettle.sh
And this script (bi-kettle.sh) is:
#!/bin/bash
su cpc
cd /home/cpc/data-integration
/bin/bash kitchen.sh -rep="01" -job="MainLoad" -user="admin" -pass="admin" -level="Basic"
But it is not called or run at any moment. What am I missing here?
Thanks in advance!
|
That su is why it fails: it launches an interactive shell, and the lines after it only run once that shell exits. Why not add the job to the crontab of the cpc user instead? crontab -e -u cpc
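For example, after running crontab -e -u cpc, the entry could look like this (one line, with the flags copied from the question's script):

```
0-59/5 * * * * cd /home/cpc/data-integration && /bin/bash kitchen.sh -rep="01" -job="MainLoad" -user="admin" -pass="admin" -level="Basic"
```

The job then runs as cpc directly, with no su needed.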
| how do I run a cron job with a specific user? |
1,459,409,985,000 |
I have some repos which are both in SVN and Git. My username is guyfawkes, and in my home directory I have folder www which contains all my repos. In this directory I also have file update.sh:
[guyfawkes@guyfawkes-desktop ~/www]$ cat update.sh
cd /home/guyfawkes/www
cd crm
echo "upd crm"
svn up
echo "update crm completed"
echo "-------"
cd ../crm_sql
echo "upd sql"
svn up
echo "update sql completed"
echo "-------"
cd ../crm_old
echo "upd old"
svn up
echo "update old completed"
echo "-------"
cd ../mysqldiff
echo "upd mysqldiff"
git pull sotmarket master
echo "update mysqlidff completed"
git push origin master
echo "push to github completed"
echo "-------"
cd ../mysql-migration-manager
echo "upd mmmm"
git pull
echo "mmm updated"
cd data
echo "upd data"
git pull
echo "data updated"
My crontab is:
[guyfawkes@guyfawkes-desktop ~/www]$ crontab -l
*/5 * * * * /home/guyfawkes/www/update.sh
So, it works perfectly with svn repos, but I have this mails in /var/spool/mail/guyfawkes (from cron):
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/guyfawkes>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=guyfawkes>
X-Cron-Env: <USER=guyfawkes>
upd crm
Fetching external item into 'public/old'
External at revision 32674.
At revision 483.
update crm completed
-------
upd sql
At revision 29.
update sql completed
-------
upd old
At revision 32674.
update old completed
-------
upd mysqldiff
Permission denied (publickey,keyboard-interactive).
fatal: The remote end hung up unexpectedly
update mysqlidff completed
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
push to github completed
-------
upd mmmm
Permission denied (publickey,keyboard-interactive).
fatal: The remote end hung up unexpectedly
mmm updated
upd data
Permission denied (publickey,keyboard-interactive).
fatal: The remote end hung up unexpectedly
data updated
How can I to fix it?
|
The problem is you are trying to update from GitHub, which requires an SSH
key. Either create a dedicated SSH key without a password on your server
and add it to your GitHub account, or use the read-only HTTPS URI to update
your repository:
git pull https://github.com/account/repository.git
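A sketch of the dedicated-key route (the key type, file name, and paths are assumptions, and a temporary directory stands in for ~/.ssh here):

```shell
# Create a dedicated, passwordless key for unattended pulls from cron.
# In real use you would write it into ~/.ssh instead of a temp dir.
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$keydir/id_github_cron"
ls "$keydir"
```

Add the .pub file to the GitHub account, then point git at the private key with an IdentityFile entry for github.com in ~/.ssh/config.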
| Crontab with SVN and Git |
1,459,409,985,000 |
I'm stuck at the first hurdle!
I want to run a couple of commands from a bash script. First something like this to rsync some directories:
rsync -e ssh -az [email protected]:/home /location/of/local/folder
Then something like this to tar and copy the files somewhere else:
cd /location/of/local/folder
tar zcf /var/backups/home-`date +%Y%m%d`.tar.gz home
I hope this is making sense.
The problem is that obviously I wish for the rsync to finish before the directory is tar'd. So is there a bit of code I can use to make sure that rsync has finished before running the tar command?
e.g. (pseudo code)
rsync
while(is syncing){
sleep 10
}
tar
Or will my .sh script only run the next line after the first line has finished and exited?
|
Commands in a shell script are executed sequentially. If your first command is rsync, the next command will not execute until rsync completes.
What you want to be sure of is that rsync finishes successfully before continuing to the next command.
This is not the most elegant solution, but the easiest to implement.
rsync -e ssh -az [email protected]:/home /location/of/local/folder &&\
tar zcf /var/backups/home-`date +%Y%m%d`.tar.gz /location/of/local/folder
Keep in mind this will only work if the exit status of rsync is 0. Any other exit status and command 2 will not run.
AND and OR lists are sequences of one of more pipelines separated by the &&
and || control operators, respectively. AND and OR lists are executed
with left associativity. An AND list has the form
command1 && command2
command2 is executed if, and only if, command1 returns an exit status of zero.
You could add more intelligence to your script if you performed different actions based on the rsync EXIT VALUES.
#!/bin/bash
PATH=/bin:/usr/bin:/sbin:/usr/sbin
rsync -e ssh -az [email protected]:/home /location/of/local/folder
if [ "$?" -ne 0 ]
then
echo "There was a problem"
else
tar zcf /var/backups/home-`date +%Y%m%d`.tar.gz /location/of/local/folder
fi
| Trying to create a cron to Rsync then tar the folder |
1,459,409,985,000 |
I tried to run the rtcwake as root and it's working:
root@ywt01-15Z90N-V-AR53C2:~# rtcwake -m mem -s 90
↑ This command is working.
But when I tried to run rtcwake from crontab, it's not working:
open crontab:
root@ywt01-15Z90N-V-AR53C2:~# crontab -e
edit crontab:
01 22 * * * /usr/sbin/rtcwake -m mem -s 90 > /root/rtc.log 2>$1
What should I change in the above steps? Or am I missing some other steps?
PS. My OS info:
Distributor ID: Ubuntu
Description: Ubuntu 20.04.5 LTS
Release: 20.04
Codename: focal
|
I don't know how to fix your rtcwake script – there are multiple things that feel a bit anachronistic about it :)
Instead, systemd (which your Ubuntu 20.04 is based on) makes it simple to have a timer task for which the system is woken up automatically. Something like https://joeyh.name/blog/entry/a_programmable_alarm_clock_using_systemd/ :
/etc/systemd/system/whatever_I_need_to_do_at_22_01.service
[Unit]
Description=Important thing that needs to happen 1 past 10
RefuseManualStop=true
RefuseManualStart=true
# Requires=multi-user.target or whatever you want to be sure runs!
[Service]
Type=oneshot
ExecStart=/usr/bin/true
/etc/systemd/system/whatever_I_need_to_do_at_22_01.timer
[Unit]
Description=Important timer
[Timer]
Unit=whatever_I_need_to_do_at_22_01.service
OnCalendar=*-*-* 22:01
WakeSystem=true
Persistent=false
[Install]
WantedBy=multi-user.target
You then just sudo systemctl enable --now whatever_I_need_to_do_at_22_01.timer to enable waking up at the specified time.
To go to sleep, use systemctl suspend. You can also have a timer that invokes the systemd-suspend.service if you want to suspend at any given time!
| How to run the `rtcwake` within `crontab`? |
1,459,409,985,000 |
So I was making a script that monitors my app usage. It does so by running a cronjob every minute, and cronjob in question checks which window is focused on and increments its counter by 1.
Other parts of code are insignificant, this is important part:
focused=$(xdotool getwindowfocus)
pid=$(xdotool getwindowpid $focused 2>/dev/null)
[ "$pid" ] &&
pname="$(cat /proc/$pid/comm)" ||
pname="idling"
I tested the script, and running it from dmenu, terminal or i3blocks, pname is what it is supposed to be every time. But when I run it from crontab, echo $pname ends up resulting in idling every single time.
I checked if I'm running my crontab as my user and not as root.
edit: Ideally I want to keep all my cronjobs in personal crontab, not in /etc/crontab for example.
|
It sounds like you are not setting the DISPLAY environment variable at all. With xdotool, you'd be using X11, so DISPLAY should match the value you see when you run this in a terminal in your graphical session:
echo $DISPLAY
edit: adding Quasimodo's comment here in the answer:
Try export DISPLAY=:0 in your script.
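One way to carry the variable in the crontab entry itself (the :0 value and the script path are assumptions; use whatever echo $DISPLAY prints in your session):

```
* * * * * DISPLAY=:0 /home/me/bin/app-usage-monitor.sh
```

Setting it per-entry like this avoids touching the script or the crontab-wide environment.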
| xdotool returning different output when executed from crontab [duplicate] |
1,459,409,985,000 |
I've got an installer/updater script that's meant for IRIX/Linux/macOS/FreeBSD and I would like to extend its compatibility to Solaris.
I've already fixed a few parts that weren't POSIX compliant, except for the crontab which is generated like this:
printf '%s\n' [email protected] '*/15 * * * * /path/cmd' | crontab -
# crontab -l # (on Linux/macOS/FreeBSD)
[email protected]
*/15 * * * * /path/cmd
note: /path/cmd is quiet unless it detects a problem
The code fails on Solaris for three reasons:
MAILTO= throws a syntax error
*/15 throws a syntax error
crontab - tries to open the file named -
I fixed #2 and #3 with:
printf '%s\n' '0,15,30,45 * * * * /path/cmd' | crontab
# crontab -l
0,15,30,45 * * * * /path/cmd
Now I don't know how to convert the MAILTO= part. What would be a POSIX way to forward emails from a crontab?
Selected workaround:
Thanks to @ilkkachu and @Gilles'SO-stopbeingevil' pointers, here's how I decided to emulate crontab's MAILTO behavior in a POSIX compliant way:
# crontab -l
0,15,30,45 * * * * out=$(/path/cmd 2>&1); [ -n "$out" ] && printf \%s\\n "$out" | mailx -s "Cron <$LOGNAME@$(uname -n)>" [email protected]
But, there's a potential problem with this solution: if printf is not a shell builtin and the output is too big then it will fail with an Argument list too long or the likes.
|
Note that MAILTO is no good for a software installer even where it's supported, because it's a global setting: it would apply to all the entries in the crontab, not just to the one added by your software.
If you want your software to send emails to a different address, you need to handle that in your own code. And that means you need to handle the logic of exit status and empty output yourself.
Here's some untested code that implements this logic. Replace newlines by spaces (or just remove them) to put them on a single line in the crontab file.
out=$(mktemp) &&
/path/cmd >"$out" 2>&1;
status=$?;
if [ "$status" -ne 0 ] || [ -s "$out" ]; then
mailx -s "/path/cmd exited with status $status" [email protected] <"$out";
fi;
rm -- "$out";
This code uses only POSIX features plus the widely available mktemp. Unfortunately, it's not available on IRIX. If IRIX has a POSIX-compliant m4, you can use it to implement mktemp. As a fallback, you can store a temporary file somewhere under the user's home directory, or in some other directory where only the owner may write. Do not create a temporary file with a predictable name in a shared directory like /tmp: that's insecure.
| POSIX (or portable) way to forward emails from crontab |
1,459,409,985,000 |
My backup script works fine when executed manually; but nothing runs from crontab.
I am running Fedora 35 Workstation.
The crontab editor works:
$ crontab -e
The cron daemon is not running. Each of these commands has no output:
$ pgrep cron
$ pgrep crond
$ pidof cron
$ pidof crond
My attempts to start cron:
$ whereis cron
cron: /usr/share/man/man8/cron.8.gz
$ whereis crond
crond: /usr/sbin/crond /usr/share/man/man8/crond.8.gz
$ sudo service cron start
Redirecting to /bin/systemctl start cron.service
Failed to start cron.service: Unit cron.service not found.
$ sudo systemctl start cron
Failed to start cron.service: Unit cron.service not found.
My attempts to use cronie instead of cron:
$ sudo dnf install cronie
Package cronie-1.5.7-3.fc35.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
$ sudo systemctl enable cronie.service
Failed to enable unit: Unit file cronie.service does not exist.
$ sudo systemctl start cronie.service
Failed to start cronie.service: Unit cronie.service not found.
Note also that cronie ("cronie.service", which provides crond and anacron) remains in place through a Fedora system upgrade if previously installed, but, very surprisingly, is NOT installed when doing a fresh installation of, at least, Fedora Server 35.
|
Cron executed my backup script after using the crond location posted by ajgringo619:
$ ls -l /usr/sbin/crond
-rwxr-xr-x. 1 root root 74448 Jul 21 15:05 /usr/sbin/crond
$ sudo /usr/sbin/crond start
$ pgrep cron
121692
| cron.service file not found |
1,459,409,985,000 |
I'm trying to create a cron job that checks the status of certain worker machines and triggers a webhook:
It works, but I'm not sure that this is the best approach:
for i in $(oc get nodes | awk 'FNR>1 {print $2}');do if [[ $i != 'Ready' ]];then <TRIGGER_WEBHOOK>;fi;done
Output of oc get nodes
# oc get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 27h v1.20.0+bafe72f-1054
....
worker4 Ready worker 10h v1.20.0+bafe72f-1054
Any advice to improve it. Thx
|
The one thing I can see that I might change is removing the if:
for i in $(oc get nodes | awk 'FNR > 1 && $2 != "Ready" { print $2 }'); do
<TRIGGER API>
done
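An equivalent sketch that avoids relying on word splitting by reading one node per line (the node names are made up, and echo stands in for the real webhook trigger):

```shell
# Simulated `oc get nodes` output, matching the format in the question.
nodes='NAME      STATUS     ROLES    AGE   VERSION
master1   Ready      master   27h   v1.20.0
worker4   NotReady   worker   10h   v1.20.0'

printf '%s\n' "$nodes" |
awk 'FNR > 1 && $2 != "Ready" { print $1 }' |
while IFS= read -r node; do
    echo "would trigger webhook for $node"   # replace echo with the trigger
done
```

Printing $1 (the node name) rather than $2 (the status) also tells the webhook which node is unhealthy.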
| Pipe for loop with awk and if |
1,459,409,985,000 |
I have a weather station that sends data to a Raspberry PI, in which runs a linux server that takes that data and stores it. Everythings works fine except for one little thing. Raspberry is connected to the weather station indoor display, via usb cable.
The display is set to reproduce sounds whenever it powers up. So basically, every night at 3 am I am woken up by this sound.
I then connected via ssh to the Raspberry and entered this command to access log, to see what was happening at that time:
nano /var/log/syslog
And i found this line:
Jul 21 02:53:01 weatherstation CRON[25991]: (root) CMD (sudo reboot)
This repeats every day at the same time.
So, apparently I have something in crontab that keeps rebooting my raspberry. Obviously it's ok if the device reboot to just chill for a moment, but it's not ok to do it at 3am lol.
I then opened:
nano /etc/crontab
And I found these 4 lines:
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
So there is no clear sudo reboot, but I suspect that something inside /etc/cron.daily is executing that instruction. So I opened the folder cron.daily and inside I found these files:
apache2 apt-compat aptitude bsdmainutils cracklib-runtime dpkg exim4-base logrotate man-db passwd
The only file that seemed interesting for me was apt-compat. In there I found this:
# run daily job
exec /usr/lib/apt/apt.systemd.daily
The problem is, this file is too complicated for me. I don't know the programming language used inside it, and I don't know what is technically doing. So I want just to shift the reboot time from 3am to like 10am, but I can't understand how.
Thanks you all for your attention.
|
From the log message, we see it's run via cron (and not e.g. a systemd timer), and the command is exactly sudo reboot.
Most system-provided cron jobs would be in files within /etc/cron.d/, so that's a place to look.
Also there can be stuff in the root user's personal crontab too. The per-user crontabs should be viewed and edited with crontab -l and crontab -e, but they usually reside in /var/spool/cron/crontabs/ or some such directory. (That's the path on Debian, it might differ on other systems.)
Usually, you wouldn't see system-provided stuff there, though.
Then there's /etc/cron.daily and similar directories, the files there are just executable files (usually shell scripts), which some tool runs daily/weekly/monthly. Though these might give a different sort of log entry, since they don't run directly from cron.
In a pinch, you could always fall back to grep -re "sudo reboot" /etc/ /var/ and so.
Related: Differences between `/etc/crontab`, files under `/etc/cron.d/` and `/var/spool/cron/crontabs/root`? and I think I'll just link this here too: How can I see all cron records in CentOs7
| Why is Linux rebooting everyday at the same time? |
1,459,409,985,000 |
I have the following cron job in /etc/cron.d/backup:
*/1 * * * * backupbot /home/backupbot/bin/backup-script.sh
Basically, I want the backup-script.sh to run every minute (and the user backupbot should be executing the job).
The /home/backupbot/bin/backup-script.sh file is owned by backupbot (and he has "x" permission on it). The file looks as follows:
#!/bin/bash
set -e
{
BACKUP_DIR=/var/app/backups
STORAGE_ACCOUNT_URL=https://myserver/backups
BACKUP_FILE=$(ls $BACKUP_DIR -t | head -1)
if [ -z "$BACKUP_FILE" ]; then
echo "There are no backups to synchronize"
exit 0
fi
azcopy login --identity
azcopy copy $BACKUP_DIR/$BACKUP_FILE $STORAGE_ACCOUNT_URL/$BACKUP_FILE
} >/tmp/cron.backup-script.$$ 2>&1
Normally, any output should be logged into /tmp/cron.backup-script.xxxx. Such a file is never created.
The only evidence that the job is being noticed by Cron is the following output of systemctl status cron.service:
● cron.service - Regular background program processing daemon
Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-04-13 09:59:08 UTC; 6h ago
Docs: man:cron(8)
Main PID: 1086 (cron)
Tasks: 1 (limit: 4915)
CGroup: /system.slice/cron.service
└─1086 /usr/sbin/cron -f
Apr 13 16:00:01 my-vm CRON[17201]: pam_unix(cron:session): session closed for user root
Apr 13 16:00:01 my-vm CRON[17198]: pam_unix(cron:session): session closed for user root
Apr 13 16:00:01 my-vm CRON[17199]: pam_unix(cron:session): session closed for user root
Apr 13 16:01:01 my-vm CRON[17402]: pam_unix(cron:session): session opened for user root by (uid=0)
Apr 13 16:01:01 my-vm CRON[17403]: (root) CMD ([ -f /etc/krb5.keytab ] && [ \( ! -f /etc/opt/omi/creds/omi.keytab \) -o \( /etc/krb5.keytab -nt /etc/opt/omi/creds/omi.keytab \) ] &&
Apr 13 16:01:01 my-vm CRON[17401]: pam_unix(cron:session): session opened for user backupbot by (uid=0)
Apr 13 16:01:01 my-vm CRON[17404]: (backupbot) CMD (/home/backupbot/bin/backup-script.sh)
Apr 13 16:01:01 my-vm CRON[17402]: pam_unix(cron:session): session closed for user root
Apr 13 16:01:01 my-vm CRON[17401]: (CRON) info (No MTA installed, discarding output)
Apr 13 16:01:01 my-vm CRON[17401]: pam_unix(cron:session): session closed for user backupbot
It mentions something about sessions for backupbot. How can I investigate further?
|
According to comments, the script is only executable by the owner. I.e., it is not readable. This stops the owner from executing the script.
Example:
$ chmod 500 script
$ ls -l script
-r-x------ 1 myself myself 24 Apr 14 09:21 script
$ ./script
hello
$ chmod 100 script
$ ls -l script
---x------ 1 myself myself 24 Apr 14 09:21 script
$ ./script
/bin/bash: ./script: Permission denied
If the script file is not readable, it can't be read by the bash shell interpreter (which is running as the current user).
Make the script both executable and readable.
Apart from the above, consider using proper double quoting of variable expansions, and don't parse the output from ls (it's only really useful to look at).
Your script, modified:
#!/bin/bash
backup_dir=/var/app/backups
storage_account_url=https://myserver/backups
{
# Find most recently modified file in "$backup_dir".
# Assumes that there are only files there, no subdirectories.
set -- "$backup_dir"/*
backup_file_path=$1
shift
for pathname do
if [ "$pathname" -nt "$backup_file_path" ]; then
backup_file_path=$pathname
fi
done
if [ ! -e "$backup_file_path" ]; then
echo 'There are no backups to synchronize'
exit 1
fi
# Perform backup.
azcopy login --identity || exit 1
azcopy copy "$backup_file_path" "$storage_account_url/$(basename "$backup_file_path")"
} >/tmp/cron.backup-script.$$ 2>&1
| Not sure if cron job is run |
The command below works nicely and puts "Hello from Docker." to out.txt
docker run -it --rm ubuntu echo "Hello from Docker." >> /home/ubuntu/out.txt
Then when I open "sudo crontab -e" and put their line below, I am getting empty out.txt
* * * * * docker run -it --rm ubuntu echo "Hello from Docker." >> /home/ubuntu/out.txt
Command below works and puts docker version into out.txt
* * * * * docker -v >> /home/ubuntu/out.txt
From what I see, "docker run" does not work for me from cron, and gives me no error/output. Do you have a clue why?
|
Try without the -it options; these are for an interactive terminal. But you are running the command from cron, where there is no terminal and no interaction.
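For example, the crontab entry from the question would then read (same paths as in the question; only the -it flags are dropped):

```
* * * * * docker run --rm ubuntu echo "Hello from Docker." >> /home/ubuntu/out.txt
```

If there is still no output, also check that docker is reachable from cron's minimal PATH, or use the absolute path to the binary (wherever "command -v docker" points).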
| Docker run is not working as cron command |
DEMONSTRATION (OR STRAIGHT TO THE REAL QUESTION SECTION)
I have a lynis script that I use to scan my server. This script is not important to demonstrate but here is the script anyway: https://gitlab.com/sofibox/maxicron/-/blob/master/usr/local/maxicron/lynis/maxinis
This script runs perfectly and doesn't output any errors when running it manually like this through terminal:
./maxinis manual --cronjob
I also will receive an email about the scan
But, when I run this script via cronjob at specific time like this:
06 21 * * * root /usr/local/maxicron/lynis/maxinis cron --cronjob > /dev/null
It also run perfectly, send an email but I got an extra email from Cron Daemon with 3 warning outputs like this:
THE REAL QUESTION IS HERE:
My question is: based on the 3 warning messages in the cron daemon email above, how do I suppress the second line of the warning while keeping the rest of the output sent by the cron daemon?
The second line of the output is:
# Warning: iptables-legacy tables present, use iptables-legacy to see them
I tried using this grep -v method in the crontab, but it doesn't seem to work:
06 21 * * * root /usr/local/maxicron/lynis/maxinis cron --cronjob > /dev/null | grep -v '# Warning: iptables-legacy tables present, use iptables-legacy to see them'
|
The reason grep isn't working is that the messages are going to stderr, not stdout, so grep is never seeing them. That's why you're getting an email even after sending stdout to /dev/null. You can filter stderr like this:
somecommand > /dev/null 2> >( grep -v 'unwanted error' >&2 )
Note that the >( … ) process-substitution syntax requires bash, so put the command in a bash script rather than directly in the crontab line, where the default shell is /bin/sh. In a plain-sh crontab line you can get the same effect with a pipe:
somecommand 2>&1 >/dev/null | grep -v 'unwanted error' >&2
| How to suppress a specific warning sent by cron daemon to email? |
I want to run a cron job to reboot the server once or twice a month.
I do:
crontab -e
than:
51 9 5 * * /usr/sbin/reboot
This should reboot on day 5 of the month at 9:51. It works, the server reboots, but after 30 seconds the reboot repeats, generating an infinite loop.
Why does this happen and how can I fix it?
The only way I have found to stop this is to remove the cron job, so I'm unable to set an automated reboot once a month.
Any help will be appreciated.
Update 1:
The issue is not caused by the cron interval being too short; even if I add +5 minutes or delay the reboot, it still happens in a loop.
|
The issue was caused by the hardware clock not being synchronized.
I resolved it by adding a cron job that, before the reboot, runs the following command:
/sbin/hwclock --systohc >/dev/null 2>&1
then
/usr/sbin/shutdown -r +5 >/dev/null 2>&1
This works.
Without the clock-sync command mentioned above, the issue persists.
| Why does a reboot crontab on CentOS 7.8 generate a reboot loop?
I have a simple backup script with this line to come up with a name for the backup:
backup=$(/bin/date +'%Y-%m-%d_%H:%M_%S')_$(hostname).gz
It works great when I run it under the root user. Unfortunately when I set it to run as a cronjob, the $(hostname) part is always empty and I don't get the hostname. Why isn't it working, and how can I get the hostname in a cron job?
I'm running ubuntu 18.04
|
hostname doesn't seem to be in the PATH in your script. Either put /bin/hostname there, like you did for date, or set PATH to include /bin (inside the script or in the crontab).
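Either fix works; here is a sketch of the script with an explicit PATH (the paths are typical for Ubuntu 18.04; verify with command -v hostname on your machine):

```shell
#!/bin/bash
# Set an explicit PATH so commands resolve under cron's minimal environment.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH

# Now $(hostname) resolves even when run from cron.
backup=$(date +'%Y-%m-%d_%H:%M_%S')_$(hostname).gz
echo "$backup"
```

Alternatively, keep the script unchanged and put the PATH= line near the top of the crontab itself.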
| $(hostname) doesn't work in cronjob [duplicate] |
I have setup a cron job from cpanel to email me daily so I know the site files have been backed up. However, it emails the entire file paths and the whole job. Snippet of the email output I receive;
tar: Removing leading `/' from member names
/home/user/public_html/
/home/user/public_html/test/
/home/user/public_html/test/index.html
I really just want it to email a simple message when done, like this;
Files successfully backed up at 02:00
Current Cron Job:
tar -cvpzf /home/user/backups/backup_files.tar.gz /home/user/public_html
NOTES:
This cron job works just fine and does the backup. I just don't want the whole job with all the file paths emailed to me (but do want a basic notification).
For my other sites I use a PHP backup script, but this doesn't work on this site as the backup file is simply too large I think for PHP to handle. The end tar.gz file is just under 4GB.
Any guidance appreciated.
|
You can remove the -v (verbose) option, and instead use the tar command's exit status to determine what message to send.
Ex. at its simplest,
tar -cpzf /home/user/backups/backup_files.tar.gz /home/user/public_html && echo "Files successfully backed up at $(date)"
or (slightly more nuanced)
tar -cpzf /home/user/backups/backup_files.tar.gz /home/user/public_html; case $? in 0) echo "Files successfully backed up at $(date)" ;; *) echo "Backup failed" ;; esac
See man tar for the meanings of various non-zero exit status values.
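The same idea wrapped in a small function (a sketch, not cPanel-specific; cron mails whatever single line the job prints):

```shell
#!/bin/sh
# Sketch: run tar quietly and print only a one-line status;
# cron then mails that single line instead of the full file list.
backup() {
    # $1 = source directory, $2 = destination archive
    if tar -cpzf "$2" "$1" 2>/dev/null; then
        echo "Files successfully backed up at $(date +%H:%M)"
    else
        echo "Backup failed at $(date +%H:%M)" >&2
        return 1
    fi
}

# In the cron job you would then call, e.g.:
# backup /home/user/public_html /home/user/backups/backup_files.tar.gz
```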
| How to limit the email output from a Cron Job |
I've been reading the recent blogpost "Winding down my Debian involvement" by Michael Stapelberg.
Sad details aside, it's been mentioned that within Debian infrastructure batch jobs run four times a day at XX:52 UTC:
When you want to make a package available in Debian, you upload GPG-signed files via anonymous FTP. There are several batch jobs (the queue daemon, unchecked, dinstall, possibly others) which run on fixed schedules (e.g. dinstall runs at 01:52 UTC, 07:52 UTC, 13:52 UTC and 19:52 UTC).
Is there a reason to choose XX:52 UTC exactly and not to use times rounded to the nearest hour, e.g. 02:00, 08:00, 14:00 and 20:00?
Should I also start my cron jobs slightly before the new hour starts, or this was a random choice by the Debian team?
|
It is not random, and it is something that a system administrator should think about.
Notice that your cron.hourly, your cron.daily, your cron.weekly, and your cron.monthly are all run at different times. These times have varied over the years, and have been moved back and forth, because these jobs interact with one another, sometimes badly. The same is true of other Debian infrastructure.
This is a thing to think about in general with scheduled jobs running in batch. (It's not just cron jobs, but this sort of job in general. And it's not just Debian.) The job that cleans up files might interact with the job that scans the filesystem for stuff, which might interact with the job that makes temporary files as it is working, …
Further reading
bam (1998-05-31). cron: race condition from run-parts --report /etc/cron.daily. Debian bug #23023.
Christoph Anton Mitterer (2012-01-22). cron: align /etc/cron.{daily, hourly, monthly, weekly} with @daily, @hourly, @monthly, @weekly. Debian bug #656835.
https://wiki.debian.org/ItsSixAmAndIveBeenCracked
| Starting batch jobs at exact time slightly before the new hour starts |
I have a crontab root file that looks like this:
lab-1:/var/www/cdd# crontab -l
# do daily/weekly/monthly maintenance
# min hour day month weekday command
* * * * * /etc/scripts/script1
*/15 * * * * /etc/scripts/script2
0 * * * * /etc/scripts/script3
I can see that all the jobs are triggered by running this command:
lab-1:/var/www/cdd# cat /var/log/messages | grep cron.info
Mar 15 13:00:00 lab-1 cron.info crond[7897]: USER root pid 26217 cmd /etc/scripts/script2
Mar 15 13:00:00 lab-1 cron.info crond[7897]: USER root pid 26219 cmd /etc/scripts/script3
Mar 15 13:01:00 lab-1 cron.info crond[7897]: USER root pid 26293 cmd /etc/scripts/script1
The problem is that script3 (I've proven that script2 and script1 work) is not actually producing the expected output. It's supposed to create files in another folder.
However, when I run it manually like so, it works just fine:
lab-1:/etc/scripts# bash script3
I'm not a real sys admin so not too sure what the best way is to go about troubleshooting this.
First thing that comes to mind is permissions.
lab-1:/etc/scripts# ls -lah
total 24
drwxr-xr-x 2 root root 4.0K Mar 15 12:20 .
drwxr-xr-x 34 root root 4.0K Mar 14 17:11 ..
-rwxr-xr-x 1 root root 5.0K Mar 15 12:19 script3
-rwxr-xr-x 1 root root 1.8K Mar 14 15:26 script1
-rwxr-xr-x 1 root root 1.9K Mar 14 15:26 script2
Although... having said that, if it were a permissions problem would it even show up as being triggered / started in my /var/log/messages file?
How should I proceed?
EDIT 1
lab-1:/etc/scripts# ./script3 | head -n 4
Working folder set to: /tmp/tmp.kOfhip
*****Grab SQL Data from Remote Server: spp.mydomain.net *****
COPY 344
Warning: Permanently added 'spp.mydomain.net,10.1.1.1' (ECDSA) to the list of known hosts.
Evaluate /tmp/tmp.kOfhip/spp.mydomain.net.db
lab-1:/etc/scripts#
EDIT 2
This is what my script looks like:
https://paste.pound-python.org/show/90vAlrOsAYP0CtYqNWfl/
As you can see, I'm creating a temporary folder and doing all my work in there.
EDIT 3
To prove to myself that it's not because of lines like line 9, I've commented out everything except lines 1 though 15. I added line 16 that does this:
echo "done" >> /tmp/results.txt
And then I changed the schedule of the job from once an hour to every two minutes. I can see that it's run 3 times already.
I guess I will continue with this approach until I find something that won't work / blows up.
I don't quite understand the comment made below about using a PATH variable... but I guess I will google it.
EDIT 4
I changed the crontabs root file so it outputs the results of script3 to a file and this is what i see:
Working folder set to: /tmp/tmp.GeNGDJ
*****Grab SQL Data from Remote Server: servername *****
COPY 344
Warning: Permanently added 'spp.mydomain.net,10.1.1.132' (ECDSA) to the list of known hosts.
Permission denied (publickey,keyboard-interactive).
Evaluate /tmp/tmp.GeNGDJ/spp.mydomain.net.db
cat: can't open '/tmp/tmp.GeNGDJ/spp.mydomain.net.db': No such file or directory
So it's dying while trying to scp the file. The remote SQL runs just fine and shows output. But as you can see, I'm getting a permission denied error.
But if I run the same command manually, it seems to work.
Will have to keep poking around. Will try dumping the ENV like is suggested in the answer below.
|
A common mistake when writing scripts that will be executed later by cron is to assume the script will have exactly the same environment that you have when you are logged in and developing it. It doesn't!
Write a script4 which contains the following line
OFILE=/tmp/crons.environment
( /usr/bin/whoami
  /usr/bin/env ) > "$OFILE" 2>&1
And get cron to run that
Now compare the output in /tmp/crons.environment to what you get when you
just type env
Your script assumes, for example, that $PATH is set up correctly to find all the programs you execute. You are also querying a database; there could be further environment variables required for those commands to run correctly.
To check the output from the cron job, temporarily modify the command run by cron and redirect stdout and stderr to a known file, like I did above.
0 * * * * /etc/scripts/script3 > /tmp/s3.out 2>&1
| How to troubleshoot failing cron job |
I'm trying to get a script to run according to a crontab entry. The script I have works fine in the terminal but will not run automatically as per the cron entry. The script is simply to create an empty file in the /testexport1 directory once an hour.
I used crontab -e to edit the crontab, which looks like this:
30 * * * * /bin/bash/ /testexport1/./createfilescript.sh
The script itself looks like this:
[root@centostest testexport1]# cat createfilescript.sh
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/testexport1
today="$( date +"%Y%m%d" )"
number=0
while test -e "$today$suffix.txt"; do
(( ++number ))
suffix="$( printf -- '-%02d' "$number" )"
done
fname="$today$suffix.txt"
printf 'Will use "%s" as filename\n' "$fname"
touch "$fname"
I added the PATH part to the top of the script to specify where the script was being run from (as per another article I have read).
Any ideas why this crontab entry does not seem to be running the script? Simple fix I'm sure but I'm going around in circles at the mo.
|
The crontab is not running the script because /bin/bash/ can most likely not be found. This should read /bin/bash instead (note the lack of / at the end), or whatever the correct path is to bash on your system.
Also make sure that all utilities that you are using in the script are actually found in the $PATH that you set. It's more common to modify the path rather than to overwrite it, as the system's path usually include directories where things like touch may be found.
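For example, a sketch of a less brittle script header that extends the system PATH instead of replacing it:

```shell
#!/bin/bash
# Extend the PATH cron provides rather than overwriting it,
# so system utilities such as touch and date remain reachable.
PATH="/usr/local/sbin:/usr/local/bin:$PATH:/testexport1"
export PATH
```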
| Getting cron to run a script |
I am really struggling to get to the bottom of this one.
For some reason cron cannot see a file system I mounted manually. This is a USB drive formatted to ext4 mounted to /backup. For completeness I mounted it while logged into SSH, not directly at a terminal.
If I compare typing mount | sort at the commandline (over ssh) with the same command over cron or atd the cron version will miss the lines:
tmpfs on /run/sshd type tmpfs (rw,nosuid,nodev,mode=755)
/dev/sdb1 on /backup type ext4 (rw,relatime,data=ordered)
I've confirmed that neither cron nor sshd are chrooted using the accepted answer to another question.
If neither are chrooted, then what else can cause two different processes to see different mounts?
This is really causing a problem because my backups keep writing to the 30GB SD card in my r-pi instead of the 2TB USB hard drive.
Edit for future reference. This bug in Systemd v236 looks like the cause. https://github.com/systemd/systemd/issues/7761
If I type
mount | sort
I get
/dev/mmcblk0p1 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
/dev/mmcblk0p2 on / type ext4 (rw,noatime,data=ordered)
/dev/sda1 on /home type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /mnt/mercury_data type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /root type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /var type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /backup type ext4 (rw,relatime,data=ordered)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devtmpfs on /dev type devtmpfs (rw,relatime,size=470180k,nr_inodes=117545,mode=755)
mqueue on /dev/mqueue type mqueue (rw,relatime)
proc on /proc type proc (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/sshd type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
But when I run this through cron or atd I get:
/dev/mmcblk0p1 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
/dev/mmcblk0p2 on / type ext4 (rw,noatime,data=ordered)
/dev/sda1 on /home type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /mnt/mercury_data type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /root type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /var type ext4 (rw,relatime,data=ordered)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devtmpfs on /dev type devtmpfs (rw,relatime,size=470180k,nr_inodes=117545,mode=755)
mqueue on /dev/mqueue type mqueue (rw,relatime)
proc on /proc type proc (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
|
This sounds like the daemons are running in different mount namespaces, so changes you make in your SSH session aren't visible in cron or at jobs. Look at mountinfo and ns/mnt inside the various daemons' /proc/${pid} directories to check which namespaces they're using and what they can inherit.
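A quick check, sketched below (assumes a Linux system where the cron daemon's process name is "cron"; adjust to crond where applicable). If the two links differ, the processes live in different mount namespaces and will not see each other's mounts:

```shell
# Compare the mount namespace of this shell with the cron daemon's.
my_ns=$(readlink /proc/$$/ns/mnt)
echo "shell mount namespace: $my_ns"

cron_pid=$(pidof cron 2>/dev/null | awk '{print $1}')
if [ -n "$cron_pid" ]; then
    echo "cron  mount namespace: $(readlink "/proc/$cron_pid/ns/mnt")"
fi
```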
| What can cause different processes to see different mount points? |
I have set up 3 crontab jobs to execute my simple Ruby scripts periodically: every minute, every 5 minutes, and every hour. They execute, however they do not do anything. I have only one user on the machine (root) and I have set up the crontab by executing the command crontab -e. crontab -l lists my current crontab jobs:
5 * * * * ruby /root/www/server-monitoring/current/tasks/cpu_check.rb
0 * * * * ruby /root/www/server-monitoring/current/tasks/free_disk_space.rb
1 * * * * ruby /root/www/server-monitoring/current/tasks/free_ram_check.rb
I can see that they execute, but not at the right intervals, and they do not do anything, whereas if I execute those Ruby files manually they work perfectly fine. I can confirm that the Ruby programs work fine 100%, they pass tests, etc. Here are the cron logs:
Dec 6 15:45:01 monitoring-jedrzej CRON[28281]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Dec 6 15:55:01 monitoring-jedrzej CRON[28478]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Dec 6 16:00:01 monitoring-jedrzej CRON[28584]: (root) CMD (ruby /root/www/server-monitoring/current/tasks/free_disk_space.rb)
Dec 6 16:01:01 monitoring-jedrzej CRON[28614]: (root) CMD (ruby /root/www/server-monitoring/current/tasks/free_ram_check.rb)
Dec 6 16:05:01 monitoring-jedrzej CRON[28702]: (root) CMD (ruby /root/www/server-monitoring/current/tasks/cpu_check.rb)
Dec 6 16:05:01 monitoring-jedrzej CRON[28703]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Dec 6 16:15:01 monitoring-jedrzej CRON[29214]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
What am I missing here?
|
You have not set these jobs to run every minute, every 5 minutes, and every hour. All three are set to run once per hour, at :01 past the hour, :05 past the hour, and :00 past the hour. Instead you might try something like
* * * * * echo 'run every minute'
*/5 * * * * echo 'run every 5 minutes'
0,5,10,15,20,25,30,35,40,45,50,55 * * * * echo 'alternate every-five-minute'
0 * * * * echo 'run every hour on the hour'
| Cron jobs not working as expected |
I have two bash scripts. One runs as root, and it calls another one as user "parallels"
/root/cronrun.sh
#! /bin/bash
PARR="thisparameter"
echo "Starting at `date`" >> /root/rlog.log
runuser -l parallels -c "/home/parallels/testscript/newscript.sh $PARR"
echo "Finishing at `date`" >> /root/rlog.log
/home/parallels/testscript/newscript.sh
#! /bin/bash
PARAMM=$1
echo "`date` - newscript.sh ran with $PARAMM" >> /home/parallels/somelog.log
Ran /root/cronrun.sh from command line as root at
18:17:28 CET
18:17:29 CET
Added to crontab
*/2 * * * * /root/cronrun.sh
So it ran at 18:20:00 CET via cron
After this:
/root/rlog.log
Starting at Thu Nov 16 18:17:28 CET 2017
Finishing at Thu Nov 16 18:17:28 CET 2017
Starting at Thu Nov 16 18:17:29 CET 2017
Finishing at Thu Nov 16 18:17:29 CET 2017
Starting at Thu Nov 16 18:20:01 CET 2017
Finishing at Thu Nov 16 18:20:01 CET 2017
/home/parallels/somelog.log
Thu Nov 16 18:17:28 CET 2017 - newscript.sh ran with thisparameter
Thu Nov 16 18:17:29 CET 2017 - newscript.sh ran with thisparameter
So the log entry from the echo in the runuser shell is missing. Why could this be? How does cron run differently in this case, such that the runuser command is ignored or fails?
(System reproduced on is Ubuntu 16.04.3 LTS)
(SHELL=/bin/bash in crontab is not solving it)
|
cron runs with a specific PATH, as seen in the upstream Debian source code:
#ifndef _PATH_DEFPATH
# define _PATH_DEFPATH "/usr/bin:/bin"
#endif
Referenced here:
#if defined(POSIX)
setenv("PATH", _PATH_DEFPATH, 1);
#endif
and since runuser lives in /sbin, you'll need to use the full path to it, or set PATH in your script to include /sbin.
| Cron shell ignores runuser command - why? |
I've got this strange behaviour (Empty string parsing ntpq command result), but let me summarize and refocus the problem:
I'm executing a java program launched using a shell script that goes like this:
#!/bin/bash
export PATH=.:$PATH
java -jar myJar.jar &
Inside my java code I execute this piped command
ntpq -c peers | awk ' $0 ~ /^\*/ {print $9}'
in order to obtain the offset of the NTP synchronized server.
To execute a piped command inside a java program, I've to use the above mentioned line as an argument of /bin/sh. I execute /bin/sh not the piped command directly.
This is the equivalent that you can launch in a console 1
/bin/sh -c 'ntpq -c peers | awk '"'"' $0 ~ /^\*/ {print $9}'"'"''
Example output from ntpq
remote refid st t when poll reach delay offset jitter
==============================================================================
*172.30.100.1 172.22.204.171 4 u 207 1024 377 1.490 53.388 49.372
Parsing it with awk, I obtain
53.388
Usually it goes well, but sometimes for reasons unknown [This is my question] my program stops working fine. It returns nothing when the execution of the piped command from the console returns a number.
Recovering the err from the executed process created by java, I've obtained this text
/bin/sh: ntpq: command not found
So, sometimes I can execute 1 from a java program and sometimes I can't. Something is happening in the SO behind the scene. Can someone enlighten me, please?
|
I originally posted a similar question in StackOverflow, thinking that the problem may be related with the java programming but it wasn't.
Finally we found what was happening.
My java program is launched with a shell script. When we execute the script manually, the ntpq command is found and invoked successfully. The problem arises when the software is fully deployed. In the final environment we've got a cron-scheduled daemon that keeps our program alive, but the PATH established by cron is different from the PATH that our profile assigns.
PATH used by cron:
.:/usr/bin:/bin
PATH that we got login for launching the script manually:
/usr/sbin:/usr/bin:/bin:/sbin:/usr/lib:/usr/lib64:/local/users/nor:
/usr/local/bin:/usr/local/lib:.
Usually ntpq is in
/usr/sbin/ntpq
Here you can find a better description of the problem and various solutions.
| /bin/sh: ntpq: command not found |
Running Ubuntu 16.04 I have added the following to /etc/crontab:
* * * * * root wget https://www.exmaple.org/bus/ >/dev/null 2>&1
The cron job runs properly, but the result is written as a file to /root/ rather than being discarded as expected.
|
>/dev/null discards the standard output of the command. There is none in your example. 2>&1 causes the standard error of the command to be discarded. In your example, this contains status and error information displayed by wget.
If the URL is valid (as in, if the server returns some content for the page), wget saves the content of the page in a file. This is neither displayed wget's standard output nor wget's standard error: it's some data that wget saves in a file. This is not affected by redirections, since wget is outputting to a file that it opens by name, not to a standard stream.
If you don't want to save the output anywhere, tell wget to save it to /dev/null:
wget -O /dev/null …
Note that errors from a cron job are useful to have, but Ubuntu discards them by default. The output of a cron job (standard output and standard error) is sent by local mail; see How are administrators supposed to read root's mail? for how to make that work. Annoyingly, wget produces a lot of output even when everything is working fine; it has no option to display only errors. You can work around this by saving the output and discarding it if the command is successful:
trace=$(wget … 2>&1) || echo "$trace" >&2
| Why might >/dev/null 2>&1 not work? |
How can I execute a different command/script (Task B) in parallel when the primary command/script (Task A) exceeds the time window/period defined in the crontab?
(This is a production environment, without gnome-terminal.)
|
As l0b0 mentioned in his answer, the crontab file only specifies the start time of jobs. It doesn't care if the job takes hours to run and will happily start it again when the next start time arrives, even if the previous incarnation of the job is still running.
From your description, it sounds like you want task B to start if task A takes too long to run.
You may achieve this by combining the two tasks in one and the same script:
#!/bin/sh
timeout=600 # time before task B is started
lockfile=$(mktemp)
trap 'rm -f "$lockfile"' EXIT INT TERM QUIT
# Start task A
# A "lock file" is created to signal that the task is still running.
# It is deleted once the task has finished.
( touch "$lockfile" && start_task_A; rm -f "$lockfile" ) &
task_A_pid="$!"
sleep 1 # allow task A to start
# If task A started, sleep and then check whether the "lock file" exists.
if [ -f "$lockfile" ]; then
sleep "$timeout"
if [ -f "$lockfile" ]; then
# This is task B.
# In this case, task B's task is to kill task A (because it's
# been running for too long).
kill "$task_A_pid"
fi
fi
| Execution of Parallel Script from a single Cron Job |
I have a bunch of munin nodes going and every five minutes they produce this message in /etc/syslog:
CRON[5779]: (root) CMD (if [ -x /etc/munin/plugins/apt_all ]; then /etc/munin/plugins/apt_all update 7200 12 >/dev/null; elif [ -x /etc/munin/plugins/apt ]; then /etc/munin/plugins/apt update 7200 12 >/dev/null; fi)
There is no job related to munin in crontab. I am guessing this is some script to check for munin plugin updates. Is there some way to stop this, and only update plugins (or whatever it's doing) manually?
|
There are two sets of cron jobs for a user: the user's crontab (edited with crontab -e), and the system crontab (/etc/crontab). The system crontab allows the system administrator to execute jobs as any user — typically root or a system account.
It's unusual to have a user crontab on a system account, because the system account shouldn't be the one modifying the crontab, the administrator should do it. And distributions will never ship user crontabs, because the distribution doesn't have full control over user names and user IDs, whereas shipping entries in /etc is routine. So anything that comes from a distribution is in the system crontab.
The location of the system crontab is /etc/crontab. On Debian derivatives at least, /etc/crontab contains no actual services, but only instructions to run scripts under subdirectories of /etc such as /etc/cron.daily for daily jobs and so on. In addition Debian's cron reads entries in /etc/cron.d. The reason for putting separate jobs in separate files is to make package management easier (a package can just drop a file in a directory to register a cron job). So if you're looking for a system cron job, check /etc/cron*, not just /etc/crontab.
| Where does this munin cron job come from? |
I want to be able to run my script, say, 2 or 3 times per day -- 24 hours -- but at a different time each time.
What would you recommend as a simple and reliable solution?
|
To run the script 2 times per day:
0 0,12 * * * sleep $(( $$ \% 21600 )); /path/to/script.sh
This will start the job at noon and midnight, then sleep for up to 6 hours (half of the 12-hour interval) before starting the script.
To run the script 3 times per day:
0 0,8,16 * * * sleep $(( $$ \% 14400 )); /path/to/script.sh
This will start the job at midnight, 8am, and 4pm, then sleep for up to 4 hours (half of the 8-hour interval) before starting the script.
Since you tagged this with Ubuntu, you likely have bash as the default /bin/sh, which seems to interpret $RANDOM (despite RANDOM not being POSIX-specified), so for extra unpredictability, adjust Zwans' answer to:
2 runs per day:
0 0,12 * * * sleep $(( RANDOM \% 21600 )); /path/to/script.sh
3 runs per day:
0 0,8,16 * * * sleep $(( RANDOM \% 14400 )); /path/to/script.sh
I changed the && to ;, which changes the meaning from "wait until the sleep command exits successfully" to "sleep for some amount of time, then regardless of whether sleep completed or was killed". Also note that the % needs to be escaped when it is part of a cron command, otherwise it is interpreted as a newline.
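You can see the bound the modulo imposes with a quick sketch (RANDOM is bash-specific; in plain sh it expands to 0):

```shell
# RANDOM yields 0..32767; the modulo keeps the delay within the half-interval,
# so the job always finishes sleeping before the next scheduled start.
delay=$(( RANDOM % 21600 ))
echo "would sleep for $delay seconds (somewhere in 0..21599)"
```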
| Running a script via cron at a random time, but a certain number of times per day [closed] |
1,596,117,343,000 |
I have some cron tasks unique to root. One of these has a reboot command in the end.
Say I SSH with my work user and while doing some task with it, the reboot-requiring root task will run in the background:
Will the root-initiated reboot log my other work user out of the SSH session?
If I had to bet, I would bet that it won't log my work user out, as it is another session; but as I'm new to Unix cron, it is important for me to ask here, where more people can find this easily.
Why would I even want to reboot:
The task is apt-get update && apt-get upgrade, and as I know, it might require reboot.
|
If you have a cron job that has a reboot command in it, the whole system will get rebooted (as @AlexP said), no user sessions (local or remote) or processes will stay active.
As an aside, you might want to consider why you have a reboot task in your cron job; is there a service or process you could restart rather than rebooting the whole machine that would accomplish the same goal? It has been my experience with Linux based systems that it is rare that you need to fully reboot the system. I can think of a couple of cases, but they are few and far between.
| root cron task that requires reboot will get me out of the system if I use another user? |
1,596,117,343,000 |
Per the attached screenshot of htop, about five minutes after I log in to Mint Linux 17.3 it automatically starts a background process for /usr/bin/find, USER nobody, at which point that process consumes 84-percent to 100-percent of the (virtual machine's) CPU. (At that point I can tell without using htop that the process has been started because the system barely responds to user commands...)
I tried using htop -> nice to set NI as high as 16 without apparent effect: The /usr/bin/find process continues to consume essentially all of the system's CPU cycles. The only way I've found to seize control back from this process is to KILL it.
I've searched for ways to manage this process so that it either (a) behaves nicely or (b) isn't started at all. The GUI tools I've tried don't list the runaway process. So I suspect there's a config file somewhere one must edit to make this process behave better but I don't know which file or what edits to apply.
|
This find process is running as part of the updatedb task, which updates the database for locate, a command to locate a file given (part of) its name. It is triggered by anacron, a service that runs scheduled tasks when the computer is turned on. Anacron complements cron, which runs tasks at a predefined time: the updatedb task would run every night if your computer was turned on, and anacron runs it if it didn't get a chance to run the last night.
Updatedb is the most demanding daily task. It runs with a lower I/O priority and a lower CPU priority (the lower CPU priority is what the 10 in the NI (nice) column means), but even so it can be disruptive. You can disable it altogether with:
sudo dpkg-divert --add --rename --local --divert /etc/cron.daily/locate.noauto /etc/cron.daily/locate
If you want to update the database, run sudo /etc/cron.daily/locate.noauto manually.
If you have /etc/cron.daily/mlocate, the same applies (it's a different implementation of locate; both can be installed on the same machine).
| Controlling automatically started /usr/bin/find process |
1,596,117,343,000 |
I have a cron job supposed to trigger a shell script daily at 2 AM.
0 2 * * * /root/bin/script.sh
However it does not work at all. What am I missing?
More details: The script runs fine without cron scheduling when run manually and does what it is supposed to do. Root user is running the cron job. The cron job was scheduled by crontab -e as root user. pgrep cron gives a service id which means that cron service is running. Following are the contents of /root/bin/script.sh file:
BACKUP_LOG=/var/log/backup.log
exec 1> >(while IFS= read -r line; do echo "$(date --rfc-3339 ns) $line"; done | tee -a ${BACKUP_LOG}) 2>&1
# Back up the etc directory
mkdir /home/directory1/backup/etc_backup
cp -Lrp /etc /home/data/backup/etc_backup
tar czf /home/data/backup/etc_backup.tgz /home/data/backup/etc_backup
rm -rf /home/data/backup/etc_backup
Actually I moved the script from /etc/cron.daily/ to /root/bin. Is the script supposed to be in /etc/cron.daily only for daily execution?
|
My guess is that your script isn't being understood properly by the shell, because it doesn't have any proper shebang. Try using crontab -e with this instead:
0 2 * * * bash /root/bin/script.sh > /tmp/crontest.log 2>&1
By invoking the script directly using bash, the script should run fine now. Any output should end up logged to /tmp/crontest.log as well, which could help a bit with debugging if it still doesn't work.
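For completeness, the missing shebang the answer alludes to is just an interpreter line at the top of the script. A sketch (the log line is only for illustration):

```shell
#!/bin/bash
# With this first line (and chmod +x), cron can execute the script
# directly, without naming bash explicitly in the crontab entry.
echo "script started at $(date)" >> /tmp/crontest.log
```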
| Daily cron job does not seem to work |
1,596,117,343,000 |
Is the following crontab possible?
0 4 * * * /sbin/sudo shutdown -r now
I want to run a single command, sudo shutdown -r now, from a crontab without having to put it in a bash script.
|
Normally, sudo will be useless in a crontab. The cron program runs the commands in a restricted environment (most notably a very limited PATH and no controlling tty). While you could probably get this to work by installing severe security holes, the right way to achieve what you probably want is to put the command shutdown -r now in root's crontab.
You obviously don't expect to ever be doing anything at 4 AM on the machine in question, but as a precaution for a rare case when you are, you might want to give the shutdown command some time and a real message. Then, if you are there, you can either cleanup or abort the shutdown in the interval. It's a real pity to have something like this kick in just after you've spent half an hour editing something and have no chance to save it.
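Putting those two suggestions together, a sketch of the entry in root's crontab (the delay and message are just examples; the path to shutdown may differ per distribution):

```
# root's crontab: warn logged-in users and give them 5 minutes to save work
0 4 * * * /sbin/shutdown -r +5 "Scheduled nightly reboot in 5 minutes"
```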
| Inline Crontab Commands? |
1,596,117,343,000 |
I tried posting this on Stack Exchange but I think I may have more chance of a correct answer here, as it's very Linux specific.
I have an sh script which updates data in a csv, then runs a perl script ($match) from within the sh script that matches data between two csv files and populates the file $matches with the matching results. I'm only going to show a snippet from the end of the script because only this bit is relevant to the question:
#!/bin/sh
export PATH="/opt/bin:/opt/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin"
match=/home/perl_experiments/newitems/match.pl
matches=/home/perl_experiments/newitems/matches.txt
/usr/bin/perl $match > $matches
if [[ -s $matches ]] ; then
date >> $log
echo "matches has data." >> $log
$sendmail
else
date >> $log
echo "matches is empty." >> $log
exit
fi
EDIT: Here is the $match script
#!/usr/bin/perl
my @csv2 = ();
open CSV2, "<csv2" or die;
@csv2=<CSV2>;
close CSV2;
my %csv2hash = ();
for (@csv2) {
chomp;
my ($title) = $_ =~ /^.+?,\s*([^,]+?),/; #/ match the title
$csv2hash{$_} = $title;
}
open CSV1, "<csv1" or die;
while (<CSV1>) {
chomp;
my ($title) = $_ =~ /^.+?,\s*([^,]+?),/; #/ match the title
my %words;
$words{$_}++ for split /\s+/, $title; #/ get words
## Collect unique words
my @titlewords = keys(%words);
my @new; #add exception words which shouldn't be matched
foreach my $t (@titlewords){
push(@new, $t) if $t !~ /^(and|the|to|uk)$/i;
}
@titlewords = @new;
my $desired = 5;
my $matched = 0;
foreach my $csv2 (keys %csv2hash) {
my $count = 0;
my $value = $csv2hash{$csv2};
foreach my $word (@titlewords) {
my @matches = ( $value=~/\b$word\b/ig );
my $numIncsv2 = scalar(@matches);
@matches = ( $title=~/\b$word\b/ig );
my $numIncsv1 = scalar(@matches);
++$count if $value =~ /\b$word\b/i;
if ($count >= $desired || ($numIncsv1 >= $desired && $numIncsv2 >= $desired)) {
$count = $desired+1;
last;
}
}
if ($count >= $desired) {
print "$csv2\n";
++$matched;
}
}
print "$_\n\n" if $matched;
}
close CSV1;
Now, I've deliberately left some data in the two CSVs that matches, to test the script. When I run the script manually it works as expected and I get the message "matches has data" in the $log file. However, when it's run from crontab (as root, just like when I run it manually) it doesn't produce any data in $matches, and the $log file entry states "matches is empty."
Here is my crontab entry, which definitely runs the script, it just doesn't produce the expected output:
*/10 09-21 * * 1,2,3,4,5 root /home/perl_experiments/newitems/newitems.sh
So my question is, why is this happening, and what can I amend to ensure crontab executions are the same as my manual executions? Is it something to do with crontab having issues with running a perl script inside an sh script?
As I say when run manually it does exactly what I expect it to, but when run my crontab, no matches are produced. Suggestions welcome.
|
Your perl script has
open CSV2, "<csv2" or die;
...
open CSV1, "<csv1" or die;
Where are those files located? cron's current directory is the home directory of the user. If the files are in the "newitems" directory, you have to cd there first.
Make sure you're not making any other assumptions about the environment in your programs.
I find this is a handy command to enable in your crontab one time:
#* * * * * { date; pwd; echo "env:"; env; echo "set:"; set; } > ~/cron.env
An excellent point by @Otheus:
if ! /usr/bin/perl "$match" > "$matches"; then
    echo "$match script returned unsuccessfully" >&2
    exit 1
fi
| sh script containing perl element does not produce same output via crontab as manual execution |
1,596,117,343,000 |
I need to a crontab execute each 12 hours, so I have the following:
0 */12 * * * . /X.sh
Then crontab reported installing the new crontab. After that I waited for two hours and the cron job still hadn't started?!
When is it supposed to start the job?
|
The crontab entry you've written is equivalent to
0 0-23/12 * * * . /X.sh
This requests execution on the hour between midnight and 11pm, using 12-hour steps — so cron will run the job every day at midnight and noon.
| when the cron job start after installing new crontab? |
1,596,117,343,000 |
I'm trying to add a line of text before the line that matches (updateKey.sh in this case), but I can't get it to work.
Here is my crontab file that the script adds the line to
0 06,18 * * * /home/server/scripts/CCgenerator.sh
0 05 * * * /home/server/scripts/updateKey.sh
The first line "CCgenerator.sh" is sometimes deleted but it has to look like that. And here is the script that adds that line.
#!/bin/bash
CCgenerator="0 06,18 * * * /home/server/scripts/CCgenerator.sh"
updateKey="0 05 * * * /home/server/scripts/updateKey.sh"
if ! sudo grep -q "$CCgenerator" /var/spool/cron/crontabs/root; then
echo "Adds CCgenerator.sh"
sudo sed -i '/\$updateKey/i $CCgenerator' /var/spool/cron/crontabs/root
else
echo "CCgenerator.sh found"
fi
exit
|
For editing cron, l0b0's answer is the best way; to fix your script you have to:
escape dots and asterisk in your search key (updateKey)
use alternative separator in sed (I choose %)
double quotes around sed expression (you want your bash variables resolved)
#!/bin/bash
CCgenerator="0 06,18 * * * /home/server/scripts/CCgenerator.sh"
updateKey="0 05 \* \* \* /home/server/scripts/updateKey\.sh"
if ! grep -q "$CCgenerator" cron; then
echo "Adds CCgenerator.sh"
sed -i "\%$updateKey%i $CCgenerator" cron
else
echo "CCgenerator.sh found"
fi
exit
| add text on line before match |
1,596,117,343,000 |
I typed the following into crontab -e
0 0 * * * bitcoind -datadir=/home/pi/bitcoinData -daemon
0 6 * * * bitcoin-cli -datadir=/home/pi/bitcoinData stop
I expect this to run bitcoind -datadir=/home/pi/bitcoinData -daemon at 12am every day
and then run bitcoin-cli -datadir=/home/pi/bitcoinData stop at 6am every day.
But the commands do not get executed.
How can I fix this?
Output from "cron status":
pi@raspberrypi:~ $ sudo service cron status
● cron.service - Regular background program processing daemon
Loaded: loaded (/lib/systemd/system/cron.service; enabled)
Active: active (running) since Tue 2016-05-03 20:57:33 BST; 58min ago
Docs: man:cron(8)
Main PID: 5932 (cron)
CGroup: /system.slice/cron.service
└─5932 /usr/sbin/cron -f
raspberrypi CRON[7608]: pam_unix(cron:session): session opened for user root by (uid=0)
raspberrypi CRON[7615]: (root) CMD (bitcoind -datadir=/home/pi/bitcoinData -daemon)
raspberrypi CRON[7608]: pam_unix(cron:session): session closed for user root
raspberrypi cron[5932]: (root) RELOAD (crontabs/root)
pi@raspberrypi:~ $
(As suggested)
I added >> /tmp/bitcoin-cron.log 2>&1 to the end of each line in cron.
The log file was showing /bin/sh: 1: bitcoind: not found
So I added the full paths to the programs into the PATH like this:
pi@raspberrypi:~/bin $ locate bitcoind
/usr/local/bin/bitcoind
pi@raspberrypi:~/bin $ locate bitcoin-cli
/usr/local/bin/bitcoin-cli
pi@raspberrypi:~/bin $ export PATH=$PATH:/usr/local/bin/bitcoind
pi@raspberrypi:~/bin $ export PATH=$PATH:/usr/local/bin/bitcoin-cli
pi@raspberrypi:~/bin $ echo $PATH
/home/pi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games:/usr/local/bin/bitcoind:/usr/local/bin/bitcoin-cli
Tried cron again, and it generated a second line in the log file, which also said /bin/sh: 1: bitcoind: not found
|
The solution to the problem is to modify the cron entries to use absolute path names. I also added logging of the commands' output, since the machine doesn't have an MTA to send failure notifications:
0 0 * * * /usr/local/bin/bitcoind -datadir=/home/pi/bitcoinData -daemon >> ~/bitcoinData/bitcoin-cron.log 2>&1
0 6 * * * /usr/local/bin/bitcoin-cli -datadir=/home/pi/bitcoinData stop >> ~/bitcoinData/bitcoin-cron.log 2>&1
| Two cronjobs not working (one to start a process, one to stop a process) |
1,596,117,343,000 |
I set cron to run a script every day using env EDITOR=nano crontab -e, where I wrote something like this: @daily path/to/script.script
Now I want to use anacron to be sure that script is going to be run when I log in if I was logged off at the time when it was scheduled. But I am not sure where to specify that.
I tried something like env EDITOR=nano anacron -e but that didn't work. Also, strangely, man anacron doesn't provide anything; I get "No manual entry for anacron".
|
I found that anacron is not an installed utility in OS X. Scheduled tasks are handled by launchd; cron has been deprecated in OS X.
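To schedule the same daily run under launchd, the rough equivalent is a property list in ~/Library/LaunchAgents, loaded with launchctl load. A sketch (the label, script path, and 09:00 time are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.daily-script</string>
    <key>ProgramArguments</key>
    <array>
        <string>/path/to/script.script</string>
    </array>
    <!-- run every day at 09:00 -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>9</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
```

Unlike cron, launchd runs a StartCalendarInterval job that was missed because the machine was asleep as soon as it wakes (though not one missed while powered off), which covers most of the anacron-style catch-up the question asks about.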
| Setting anacron on mac |
1,596,117,343,000 |
We have an app deployed in a server, which, under some scenarios, generates some logs in the below format.
process_name.hostname.common_text.log.{error/info/warning}.date.time
Now, due to this format, there isn't one log but several such logs, all with the same process_name.hostname.common_text.{error/info/warning} part but with the rest of the parts different, due to differences in date and time. But as far as logrotate is concerned, it treats all these as individual logs, and it will retain 1 such copy of each log if I say
rotate 1
in logrotate conf.
But as far as I am concerned, these are all the same log, and I don't want to retain anything other than the most recent log from each category (error/info/warning). How would I go about this?
I thought of writing a script which will run weekly from cron. This script would find the most recent file from each category (error/info/warning) by timestamp (ls -ltr) and then delete the rest of the logs. But it's getting too complicated when I try to put it in a script.
I am looking for something like this.
ls -ltr |grep process_name.hostname.common_text.log.error |head -n -1
ls -ltr |grep process_name.hostname.common_text.log.info |head -n -1
ls -ltr |grep process_name.hostname.common_text.log.warning |head -n -1
The above 3 commands will return the names of all the process_name.hostname.common_text.log.{error/info/warning}.date.time logs except the recent one.
1) Is it possible to pass the output of each of the above commands to exec or xargs on the same line, so that I can run rm -rf against it?
2) What's a better way of doing this? Because process_name can take 2 different names, I'll have to run 6 commands, or maybe more than that.
3) Instead of grepping three times, once each for error, info, and warning, is there any way I can grep for all 3 in a single line?
|
If you use -1 instead of -l for ls, you get only the filenames and can pass them directly to rm. I would use something like this:
rm $(ls -1tr process_name.hostname.common_text.log.error* | head -n -1) \
$(ls -1tr process_name.hostname.common_text.log.info* | head -n -1) \
$(ls -1tr process_name.hostname.common_text.log.warning* | head -n -1)
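If you'd rather not rely on word splitting of the $(...) output, a sketch of the same idea per category with xargs (GNU tools assumed; -r stops xargs from running rm when head leaves nothing, e.g. when only one file is present):

```shell
ls -1tr process_name.hostname.common_text.log.error* | head -n -1 | xargs -r rm -f
```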
| handle large number of same log using cron or logrotate |
1,596,117,343,000 |
My question comes from an observation that I made while seeing an every minute cronjob running like that:
12:00:15 (cron started)
12:01:20 (cron started)
12:02:02 (cron started)
.
.
.
As it seems the cron doesn't run every minute (every 60 seconds). That cron actually runs a php script that does a mysql query that selects and updates the database. It completes in less than 2 seconds.
A cronjob scheduled for every minute is supposed to run on the start of the minute (12:00:00) or it would run on any other particular second of that minute?
Has the load of the system anything to do with that? Is there any way to make sure that a cron starts at the very beginning of a minute?
|
What you observe doesn't surprise me at all, cron is not precise to the second. You can trust* it to run the job within the minute you programmed it to run, but you shouldn't really rely on more precision.
If you need your job to start at a precise moment, you should start it earlier and wait inside the script for that moment to arrive. And since 1 minute is the smallest interval supported by cron, I'd think about different options out there to start it (e.g. a Python script).
(*) see the comment below
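For example, a sketch of the "start earlier and wait" idea in shell: schedule the cron entry for the minute before, and have the script sleep until the seconds counter wraps around to :00.

```shell
#!/bin/sh
# Wait for the top of the next minute before doing the time-sensitive work.
now=$(date +%S)              # current second, possibly zero-padded
wait=$(( 60 - ${now#0} ))    # strip a leading zero so 08/09 aren't read as octal
sleep "$wait"
# ...time-sensitive work starts here, at (or very close to) hh:mm:00
```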
| When a cronjob really starts? |
1,596,117,343,000 |
Is it possible for a Cron Bash Script to echo to the current session terminal instead of /var/spool/mail/root
I have a script which writes errors to a log file but any supplemental/unimportant info I echo out to the terminal.
When I run the script in cron as root it redirects the messages to /var/spool/mail/root rather than the terminal.
I would like them to be shown in the terminal if root or another user is logged in, rather than stored. If these messages are lost when nobody is logged in, that's fine. Like on Cisco IOS.
|
You can use the write utility to send text to a specific logged-in user.
command that produces output | write root
The documentation further explains:
To write to a user who is logged in more than once, the terminal argument can be used to indicate which terminal to write to; otherwise, the recipient's terminal is selected in an implementation-defined manner and an informational message is written to the sender's standard output, indicating which terminal was chosen.
On Red Hat/CentOS, the implementation-defined manner is to pick the terminal with the shortest idle time.
If you want to be able to write to one of several users who may be logged in, you can do something like this:
for u in root alice bob charlie
do
if users|grep -w -q $u
then
user=$u
break
fi
done
if test -n "$user"
then
command that produces output | write $user
fi
| Cron Bash Script - echo to current terminal instead of /var/spool/mail/root |
1,596,117,343,000 |
The job works if set like this:
*/1 * * * * /usr/bin/php /home/test/cron/test.php
And if set to something like:
15 20 * * * /usr/bin/php /home/test/cron/test.php
it's not working.
[root@localhost mail]# uname -or
2.6.18-308.el5 GNU/Linux
[root@localhost mail]# cat /etc/*elease
#CentOS release 5.8 (Final)
redhat-4
I don't know if this is going to help, but when I do this:
[root@localhost mail]# date
Wed Aug 5 20:54:02 KST 2015
and when an email comes the date is displayed like so:
Wed Aug 5 06:51:01 2015
which is actually one hour behind the time.
The date was showing up as EDT instead of KST, so I changed /etc/profile, but the cron job still wouldn't work.
|
Updating /etc/localtime for the system wide time zone setting might fix your problem. I guess KST stands for Korea Standard Time, so you might want to choose /usr/share/zoneinfo/Asia/Seoul for it. You can also run tzselect to know which file in /usr/share/zoneinfo to choose.
$ sudo cp /etc/localtime /etc/localtime.orig # for backup
$ sudo cp /usr/share/zoneinfo/Asia/Seoul /etc/localtime
$ date
Wed Aug 5 21:50:23 KST 2015
Then restart cron, or reboot your server.
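After restarting cron, a throwaway entry like this (a sketch; remove it once verified) lets you confirm that cron itself is now stamping KST times:

```
* * * * * date >> /tmp/cron-tz-check.log 2>&1
```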
| Cronjob is running every minute but not at a specific time |
1,596,117,343,000 |
I need to set up a job to delete all of the regular files in the /home/admin directory on the second day of every month at 8:30 A.M.
This seems like the wrong command:
# crontab -e
30 08 02 * /bin/find /home/admin -type f -exec /bin/rm {} ";"
|
You need the fifth time field (day of week), making the schedule 30 08 02 * *, and the conventional way to terminate -exec is \;
# crontab -e
30 08 02 * * /bin/find /home/admin -type f -exec /bin/rm {} \;
Now it will work.
| How to set up cron job in Linux to delete all regular files? |
1,596,117,343,000 |
I've got a bunch of CronJobs, and they work fine, except for one. I've looked through a lot of forums and websites, and tried a combination of things but alas nothing has worked.
To rephrase the question:
Q: The bashscript works without any problems from the terminal. But with the CronJob it does not work at all.
The last thing I have done for debugging is the following:
1) Checked if the Cron Daemon is running (ps ax | grep) =is working
2) Made an extra cron job to (retest) send out an email to me every minute (* * * * * echo "hello" | mail -s "subject" [email protected]) =worked fine
3) I've ran my bash script through the terminal as a standalone =worked fine
4) I've checked grep CRON /var/log/syslog for any errors =looks good/no errors
5) Checked for permissions etc. = no problems with permissions
6) The file path to the bash script for the cron job looks fine
#!/bin/bash
#When adding any additional machines make sure there are two files
#within the directory. MACHINE_NAMEMACHINE_NUMBER_initial_time.txt
#and MACHINE_NAMEMACHINE_NUMBER_old_ignition_value.txt
#./engine_switch_check.txt MACHINE_NAME MACHINE_NUMBER
echo `date +%T` >> test.txt
./engine_switch_check.txt MXE 065
./engine_switch_check.txt TMX5BP 001
./engine_switch_check.txt MX3 122
./engine_switch_check.txt TMX 098
and the engine_switch_check.txt :
#!/bin/bash
mc_id="$1" #-->eg: TMX
mc_no="$2" #-->eg: 098
echo "$mc_id $mc_no"
#echo "1--$mc_id$mc_no-DATAFILE.txt"
mc_fname=$mc_id$mc_no'_old_ignition_value.txt'
echo $mc_fname
#old_ignition_value=$(sed -n '1p' $mc_fname)
#echo "2--$old_ignition_value"
#old_ignition_value=$(sed -n '1p' $mc_id$mc_no'DATAFILE.txt')
#echo "3--$old_ignition_value"
new_ignition_value=`get values from the terminal`
old_ignition_value=$(sed -n '1p' $mc_id$mc_no'_old_ignition_value.txt')
echo "Program name: $0"
echo "New Ignition Value: $new_ignition_value"
echo "Old Ignition Value: $old_ignition_value"
echo;echo;echo
#difference_btwn_new_old_ign_values=$(awk '{print $1-$2}' <<< "$new_ignition_value $old_ignition_value")
difference_btwn_new_old_ign_values=$(($new_ignition_value - $old_ignition_value))
#difference_btwn_new_old_ign_values= expr new_ignition_value - old_ignition_value
echo "$new_ignition_value"
echo "$old_ignition_value"
echo "$difference_btwn_new_old_ign_values"
if [ "$difference_btwn_new_old_ign_values" -lt "1" ]
then
> $mc_id$mc_no'_initial_time.txt'
initial_time=`date +"%s"`
echo $initial_time >> $mc_id$mc_no'_initial_time.txt'
fi
if [ "$difference_btwn_new_old_ign_values" -ge "5" ]
then
final_time=`date +"%s"`
initial_time=$(sed -n '1p' $mc_id$mc_no'_initial_time.txt')
echo;echo;echo "initial time: $initial_time"
echo "final time: $final_time"
#initial_time=0000
time_difference_in_sec=$(( $final_time - $initial_time ))
echo "time difference in sec: $time_difference_in_sec"
time_difference_in_min=$(( $time_difference_in_sec / 60 ))
if [ "$time_difference_in_sec" -le "3600" ]
then
email_subject="$mc_id $mc_no switched on $difference_btwn_new_old_ign_values times within $time_difference_in_min minutes"
`echo -e "Hi there,\n\n$mc_id $mc_no has been switched on $difference_btwn_new_old_ign_values times within the last $time_difference_in_min minutes\n\nCheers," | mail -s "$email_subject" $email_list`
echo "EMAIL SENT"
: <<'psuedo'
> $mc_id$mc_no'_old_ignition_value.txt'
echo $new_ignition_value >> $mc_id$mc_no'_old_ignition_value.txt'
psuedo
fi
if [ "$time_difference_in_sec" -gt "3600" ]
then
> $mc_id$mc_no'_initial_time.txt'
initial_time=`date +"%s"`
echo $initial_time >> $mc_id$mc_no'_initial_time.txt'
fi
fi
I've cut out the details regarding the email, but that line works fine.
I honestly don't know what else I can do. The only difference with this bash file is that it calls another 'executable txt' file from within it. And both of these files work great from the terminal by themselves.
Update (18/02/2015):
I have further tried the CronTab by writing another (simpler) script to email out a timestamp, also recorded the timestamp into a .txt file - which worked without any problems. I rewrote it because I was thinking maybe the CronTab was not working like it should.
For anyone who is having a similiar problem these are some options you should consider:
Other things I did during troubleshooting (not in order)
Created an echo out to a text file to see if the program was being run
Avoided using sudo crontab -e (everyone recommends staying away from sudo crontab -e)
Checked the directory path within the crontab
Read/reread various forums, read/reread my program over and over again (get someone else who understands programming to do it, as fresh eyes can see what you might miss)
Added PATH and SHELL in to crontab
Added different CronJobs (mentioned update 18/02/15)
Changed relative paths to full paths within all the programs. This made it work with the crontab.
|
The solution that worked for me: I changed all of the paths within my two programs from relative paths (e.g. ./name_of_file.txt) to full paths (e.g. /home/ed/directory/some_more_directory/name_of_file.txt).
Was it just this, or a combination of the other things I did, I don't know - but changing the paths from relative to full path did it.
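An alternative to hard-coding every path (a sketch): have each script change to its own directory first, so relative names like ./TMX098_initial_time.txt resolve the same way regardless of cron's working directory.

```shell
#!/bin/sh
# cd to the directory this script lives in, so the relative file names
# used throughout resolve correctly even when cron starts us from $HOME
cd "$(dirname "$0")" || exit 1
here=$(pwd)
echo "running from: $here"
```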
| BashScript works from Terminal but not from CronTab |
1,596,117,343,000 |
I'm trying to set up a cron job in order to backup my databases daily.
Here's what I wrote in my crontab file :
25 18 * * * root mysqldump -u root -p myPassWord --all-databases | gzip > /var/backup/database_`date '+%m-%d-%Y'`.sql.gz
As nothing happened at 18:25, I had a look in my /var/log/syslog file (the server is under Debian) here's what I found:
Jan 24 18:25:01 ns311475 /USR/SBIN/CRON[16252]: (root) CMD (/usr/local/ispconfig/server/server.sh 2>&1 > /dev/null | while read line; do echo `/bin/date` "$line" >> /var/log/ispconfig/cron.log; done)
Jan 24 18:25:01 ns311475 /USR/SBIN/CRON[16253]: (getmail) CMD (/usr/local/bin/run-getmail.sh > /dev/null 2>> /dev/null)
Jan 24 18:25:01 ns311475 dovecot: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=127.0.0.1, lip=127.0.0.1, secured, session=<3xFlL2kNAAB/AAAB>
Jan 24 18:25:01 ns311475 dovecot: pop3-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=127.0.0.1, lip=127.0.0.1, secured, session=<rhRlL2kNvwB/AAAB>
Jan 24 18:25:01 ns311475 postfix/smtpd[16279]: connect from localhost.localdomain[127.0.0.1]
Jan 24 18:25:01 ns311475 postfix/smtpd[16279]: lost connection after CONNECT from localhost.localdomain[127.0.0.1]
Jan 24 18:25:01 ns311475 postfix/smtpd[16279]: disconnect from localhost.localdomain[127.0.0.1]
I don't understand the first line; I assume the rest is trying to email the issue, but I got nothing.
|
Debian has a package for you to take care of dumping, compressing and rotating MySQL data. You can install it with the following command:
$ sudo apt-get install automysqlbackup
After this daily, weekly and monthly dumps will be placed in /var/lib/automysqlbackup
| Databases backup with cron |
1,596,117,343,000 |
I'm looking for a scheduler daemon like cron, but with at least seconds precision so I can use it for a radio automation app.
I've heard conflicting rumors about cron's ability to handle seconds. If it is already capable of doing that then I'd be happy to hear that information as well.
|
You can do :
* * * * * sleep 5; script.sh
* * * * * sleep 10; script.sh
...
* * * * * sleep 55; script.sh
to run the script every 5 seconds (add one more plain * * * * * script.sh entry to cover second 0).
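An equivalent sketch that avoids a dozen crontab lines: a single * * * * * entry runs a wrapper that fires the job every few seconds (JOB, INTERVAL, and COUNT here are placeholders; from cron you'd use INTERVAL=5 and COUNT=12, while the demo defaults keep a test run short).

```shell
#!/bin/sh
# Fire $JOB every $INTERVAL seconds, $COUNT times, then exit;
# cron restarts the wrapper at the top of each minute.
JOB="${JOB:-echo tick}"
INTERVAL="${INTERVAL:-1}"
COUNT="${COUNT:-3}"
i=0
while [ "$i" -lt "$COUNT" ]; do
    $JOB
    i=$((i + 1))
    if [ "$i" -lt "$COUNT" ]; then
        sleep "$INTERVAL"
    fi
done
```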
| High precision scheduler daemon |
1,596,117,343,000 |
I'm trying to automate the updates of a Debian system, without actually performing the upgrades. Then I'll send an e-mail to myself every time there is an upgrade available.
I've tried to do it with cron-apt, but I really don't like how its configuration is organized; that's why I would prefer a (cleaner) cron job that launches a script.
Looking around, I found this piece of code (not mine):
if [[ `apt-get update 2>&1 | grep Get` ]]; then
if [[ `apt-get --simulate dist-upgrade 2>&1 | grep Inst` ]]; then
apt-get --simulate dist-upgrade
fi
fi
From what I can understand, that script updates the local package list and simulates an eventual upgrade.
Now, if possible, I would like to e-mail myself the output of the update and of the upgrade simulation. To achieve that I could use the mail command:
sending first email:
apt-get update | mail -s "daily update report" [email protected]
second email:
apt-get --simulate dist-upgrade | mail -s "daily upgrade-simulation report" [email protected]
My main question is whether there is a better approach to do all this.
Secondly, I've tried without success to send everything in just one email; does anybody know how I could do it?
|
Don't reinvent the wheel.
apt-get install apticron
Apticron is a simple script which sends daily emails about pending package updates such as security updates, properly handling packages on hold both by dselect and aptitude.
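That said, the one-email part of the question is just a matter of grouping the two commands so their combined output feeds a single mail (a sketch; the subject is a placeholder):

```
{
    apt-get update
    apt-get --simulate dist-upgrade
} 2>&1 | mail -s "daily apt report" [email protected]
```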
| automate updates with a bash script and cron |
1,596,117,343,000 |
I need to open one (or more) browser tab periodically and I decided to use cron.
The command in the shell (bash) that correctly executes this task is chromium-browser http://mysite.com. If I type it, the browser opens the site in a tab.
But the same command inserted as a task in the crontab doesn't work.
If I redirect the output of other simple commands in the crontab to a file, they work correctly, say
echo "hello world" > /home/user/file
So, shall I redirect the output of the command chromium-browser http://mysite.com to my graphic interface? If yes, which would be the device?
|
I added the following to my crontab by typing crontab -e and it worked
* * * * * env DISPLAY=:0 google-chrome www.github.com
My chrome browser opened www.github.com every minute. So the following should work for you.
* * * * * env DISPLAY=:0 chromium-browser http://mysite.com
| Cron task in graphic interface |
1,596,117,343,000 |
I'm trying to chain some commands to periodically check/launch two processes via cron (I'm on a shared host, can't change things around). After a lot of googling around, all the things I've done don't work properly:
Trying to launch them separately in two cron jobs cascades and crushes the server (maybe because the grep command doesn't manage to match the search words because they are parameters; I don't understand):
ps -fu $USER | grep celeryd >/dev/null || $HOME/python27/bin/python $HOME/utilities/manage.py celeryd -E -B --concurrency=1
ps -fu $USER | grep celerycam >/dev/null || $HOME/python27/bin/python $HOME/utilities/manage.py celerycam
And these other variants where I try to launch the processes together on the same cron, launch only the first process:
ps -u $USER | grep python >/dev/null || $HOME/python27/bin/python $HOME/utilities/manage.py celeryd -E -B --concurrency=1 & $HOME/python27/bin/python $HOME/utilities/manage.py celerycam &
ps -u $USER | grep python >/dev/null || ( $HOME/python27/bin/python $HOME/utilities/manage.py celeryd -E -B --concurrency=1 & $HOME/python27/bin/python $HOME/utilities/manage.py celerycam; )
ps -u $USER | grep python >/dev/null || ( $HOME/python27/bin/python $HOME/utilities/manage.py celeryd -E -B --concurrency=1; ) sleep 30s; ( $HOME/python27/bin/python $HOME/utilities/manage.py celerycam; )
I gave up trying to use grep and took advantage of the --pidfile option of the celery worker; celerycam doesn't allow multiple instances so I don't need any grep there to check for it:
$HOME/python27/bin/python $HOME/utilities/manage.py celery worker -E -B --concurrency=1 --pidfile=worker.pid
$HOME/python27/bin/python $HOME/utilities/manage.py celerycam
|
$USER is not set in most crons. Luckily ps -u without a username defaults to the present user (the user whose crontab is running). However, your grep has quite a high chance of matching the grep itself as well as the celery processes. You can clean that issue up with a sneaky grep '[c]elery'. This will match processes whose names/arguments contain celery only (and not the grep, whose argument is [c]elery). The other issue I can see is that because you run the program with a giant path to manage.py, you may be missing the "celery" part in your ps output. That can be remedied by adding more verbosity to the output of ps with the -w flag (the more of them there are, the more verbose it becomes). So the suggested fix is:
ps fwwwu | grep '[c]eleryd' >/dev/null || $HOME/python27/bin/python $HOME/utilities/manage.py celeryd -E -B --concurrency=1
ps fwwwu | grep '[c]elerycam' >/dev/null || $HOME/python27/bin/python $HOME/utilities/manage.py celerycam
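The bracket trick can be demonstrated without any celery process at all (a small illustration, not part of the original answer): the grep process's own argv contains the literal string "[c]elery", while the pattern only matches the string "celery", which never appears in that argv.

```shell
# The pattern [c]elery matches "celery"; the string "[c]elery" in grep's
# own command line does not match it, so grep never sees itself.
echo 'grep [c]elery' | grep '[c]elery' || echo 'no match'
echo '/usr/bin/python manage.py celeryd -E -B' | grep '[c]elery'
```

The first pipeline prints "no match"; the second prints the celeryd line, exactly the asymmetry the answer relies on.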
EDIT: Removed the extraneous dash from the ps arguments above and added pgrep as a viable alternative.
Alternatively you can also make use of two other commands pidof and pgrep to get the process ids of running processes. I think in this instance (if your username is fred) you can replace the ps command with:
pgrep -fu fred celeryd
to achieve similar results.
EDIT 2: Add single quotes around '[c]elery' to stop shell globbing, since the directory the cron runs from (the user's $HOME) may contain a file called celery.
| Launch 2 celery processes via cron |
1,596,117,343,000 |
I have this txt file that contains IPs, one per line, that I want to block using ipset.
I have this bash script that essentially reads from the plain txt file and constructs an array. Then it iterates the array elements and adds each one to the ipset I have created for that purpose.
The problem is this: if I execute the script manually from the terminal, it works perfectly, but when I add the script to run periodically using crontab, the script runs but the IPs are not added to the ipset.
This is the relevant part of the script.
index=0
while true; do
    ipset -quiet -A myIpset "${arrayOfIPS[$index]}"
    index=$((index + 1))
    if [ "$index" -gt "$lastIndexOfArray" ]; then
        break
    fi
done
This works perfectly from the terminal but not when run from a crontab task.
Why?
|
Your shell knows where to find executables (like ipset) by looking in your PATH, which is set by your environment. cron does not share the same environment. Adding this at the top of the crontab (or your script) should tell it where to find commands as you expect:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
| ipset not executing from crontab |
1,596,117,343,000 |
I have a cron job:
#!/bin/bash
fn=db.backup.$(date +%m-%d-%y).sql
mysqldump -uMyUsr -pMyPass --add-drop-table dbName> $fn
find ./ -name '*.Z' -type f -mtime +7 -exec rm -f {} \;
I get an error:
/backup.sh: line 3: db.backup.10-24-12.sql: Permission denied
db.backup.10-24-12.sql: No such file or directory
find: ./conf: Permission denied
find: ./pd: Permission denied
It looks like whatever process is running cron doesn't have permissions to create a file or to run the find command.
|
you should use:
fn=/full/path/to/db.backup.$(date +%m-%d-%y).sql
or
cd /full/path/to/
before you export the database
unless your cronjob looks like:
* * * * * cd /full/path/to && backup.sh
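A related pattern worth knowing (my addition, not from the answer above): instead of hard-coding the directory, the script can cd to its own location, so its relative paths resolve no matter which working directory cron starts it from.

```shell
# cd into the directory containing this script; $0 is the script's path
# when invoked as a file.
cd "$(dirname "$0")" || exit 1
fn="db.backup.$(date +%m-%d-%y).sql"
echo "would write $PWD/$fn"
```

With this at the top, the mysqldump and find lines from the question would operate on the script's own directory rather than on cron's default working directory.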
| Cron job - permission denied creating a file |
1,333,433,627,000 |
I have a number of computers and each computer is starting cron jobs left and right.
Those are both server computers and desktop computers.
On desktop computers it would be inconvenient if a user shut down the computer while a cron job is running.
I am thinking of mv-ing the shutdown command and having users shut down via a dedicated function, which would in turn run the shutdown command, but only once all cron jobs have stopped running.
Am I misthinking this somehow?
Or does the shutdown command already take cron jobs into consideration?
|
The shutdown command doesn't know anything about cron jobs, but if you're on a modern system then shutdown is actually handled by systemd, and systemd knows about inhibitors, which allow you to prevent certain activities -- like system shutdown, suspend, etc -- while a command is running.
You can use the systemd-inhibit command to run a command with an inhibitor lock. So if you have a cron job that does something that shouldn't be interrupted -- like making a system backup, or installing package upgrades -- you can replace something like...
0 0 * * * /path/to/backup-script.sh
With:
0 0 * * * systemd-inhibit --what=shutdown /path/to/backup-script.sh
| Shutdown command vs cron jobs |
1,333,433,627,000 |
When cron runs
0 16 * * * journalctl --vacuum-time=10d
I get an email with the content like
Vacuuming done, freed 0B of archived journals from /var/log/journal.
Vacuuming done, freed 0B of archived journals from /var/log/journal/68eb3115209f4deb876284bab504772b.
Vacuuming done, freed 0B of archived journals from /run/log/journal.
Sometimes there are some bytes freed, but how do I suppress those emails when 0B were freed?
|
Simplest thing to do is to pipe through 2>&1 | grep -v 'freed 0B'
If cron runs a command that produces zero lines of output, then cron will not send an email.
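The effect is easy to check by simulating the output (the sample lines are copied from the question): grep -v passes only the non-0B lines, and when every line says 0B the pipeline produces no output at all, so cron sends no mail.

```shell
# Stand-in for the journalctl output shown in the question.
simulate() {
  printf '%s\n' \
    'Vacuuming done, freed 0B of archived journals from /var/log/journal.' \
    'Vacuuming done, freed 16.0M of archived journals from /run/log/journal.'
}
# Only the non-0B line survives the filter.
simulate | grep -v 'freed 0B'
# In the crontab this becomes:
#   0 16 * * * journalctl --vacuum-time=10d 2>&1 | grep -v 'freed 0B'
```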
| suppress cron emails from cleaning systemlog if 0B were cleaned |
1,333,433,627,000 |
I'm using linux:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
I noticed a few weeks ago that there is a file in my home directory: dead.letter.
The file is updated in the first second of every minute with the same number of log lines.
I have been searching for the process that causes this for over a week now and still can't find it.
I've tried the approach from https://www.baeldung.com/linux/find-process-file-is-busy, plus
auditctl, killing and uninstalling docker, and checking the multiple locations where cron jobs may live. I also tried disabling the cron process.
Also, the mail, sendmail and mailx commands are not recognized.
I can't find the process that causes this.
Please help.
|
Found the problem:
We use nfs mounts in our organization. We also use AWS.
We created a new instance from an ec2 instance I was working on.
The template instance had a cron job that kept running on the new instance and writing data to the shared directory.
| What process is generating dead.letter? |
1,333,433,627,000 |
I have two users: user1 and user2
user1 is added to /etc/cron.allow
But when I run crontab with the -u option, I get:
user1@hostname:~$ crontab -l -u user2
must be privileged to use -u
Is it possible to grant this crontab -u permissions for a user without giving him sudo rights?
|
You can grant sudo access to just that particular command. For example, the following rule...
testuser1 ALL=(ALL) NOPASSWD: /usr/bin/crontab -l -u *
Would let testuser1 run crontab -l -u <someuser>. E.g, these all work:
[testuser1@fedora ~]$ sudo crontab -l -u testuser1
no crontab for testuser1
[testuser1@fedora ~]$ sudo crontab -l -u testuser2
no crontab for testuser2
[testuser1@fedora ~]$ sudo crontab -l -u root
no crontab for root
But other commands fail:
[testuser1@fedora ~]$ sudo crontab -l
[sudo] password for testuser1:
Sorry, user testuser1 is not allowed to execute '/usr/bin/crontab -l' as root on fedora.
[testuser1@fedora ~]$ sudo date
[sudo] password for testuser1:
Sorry, user testuser1 is not allowed to execute '/usr/bin/date' as root on fedora.
Etc.
To grant these permissions to multiple users, consider creating a group (e.g., cronpeople), and then using that in your sudoers configuration:
%cronpeople ALL=(ALL) NOPASSWD: /usr/bin/crontab -l -u *
You can replace %cronpeople with ALL if you want all users to have these privileges.
| Allow 'crontab -l -u' for non-root users |
1,333,433,627,000 |
I want to run some scheduled commands on a timed basis within a manually started screen session, and only within that screen session, because it uses an ssh authentication tied to that session.
Basically, after I start the screen session, I add the ssh key for the session and then start the command, so all the subsequent commands run by the utility have access to the remote resources made available by the key.
When the session is terminated then it all ends.
at and cron seem to be configured by root but I want a user-level program running within my own account and directories.
|
Asking around led me to supercronic which is described by its developers as
a crontab-compatible job runner, designed specifically to run in
containers.
Although it was designed with containers in mind it works as a regular user program and can use crontab format files for scheduling and running tasks.
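A sketch of how this fits the screen workflow (the file name and the job itself are my examples): write an ordinary five-field crontab file, then run supercronic against it inside the already-authenticated session, where it inherits SSH_AUTH_SOCK from the ssh-agent.

```shell
# A user-level crontab file: standard five-field syntax, no user column.
cat > "$HOME/my.crontab" <<'EOF'
*/5 * * * * echo "tick at $(date)" >> "$HOME/tick.log"
EOF
cat "$HOME/my.crontab"
# Inside the screen session (runs in the foreground until you stop it):
#   supercronic "$HOME/my.crontab"
```

When the screen session (and its ssh-agent) ends, supercronic ends with it, which is exactly the lifetime the question asks for.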
| Is there a user-level utility which offers the features of "cron" or "at" and only within a particular session? |
1,333,433,627,000 |
I wanted to set up an anacron task to run a backintime backup once a day (Ubuntu 20.04.3 LTS). If you schedule this using the backintime GUI, the normal crontab is being used, but for my use case this is not suitable: I usually have my computer on standby when I'm not using it, so the cronjob would just be discarded if I don't have the computer running at that exact scheduled time. What I did instead was create a script in the cron.daily directory, so that anacron takes care of it, which also supports delayed execution in case of standby or the computer being shut down for some time. I added the following command:
sudo -i -u samuel /usr/bin/nice -n19 /usr/bin/ionice -c2 -n7 /usr/bin/backintime backup-job >/dev/null
This is exactly what backintime adds to the crontab, so I was sure it would work just fine when using anacron. But that's not the case: The job is started just fine, but the backup never finishes. The syslog output is the following:
backintime (samuel/1): INFO: Lock
backintime (samuel/1): WARNING: Inhibit Suspend failed.
backintime (samuel/1): INFO: mount ssh: [...]
backintime (samuel/1): INFO: Take a new snapshot. Profile: 1 Main profile
backintime (samuel/1): INFO: Call rsync to take the snapshot
[... around 10 seconds later ...]
anacron[1082]: Job `cron.daily' terminated (mailing output)
anacron[1082]: anacron: Can't find sendmail at /usr/sbin/sendmail, not mailing output
anacron[1082]: Can't find sendmail at /usr/sbin/sendmail, not mailing output
systemd[1]: anacron.service: Killing process 7920 (python3) with signal SIGKILL.
anacron[1082]: Normal exit (1 job run)
systemd[1]: anacron.service: Killing process 7958 (ssh-agent) with signal SIGKILL.
systemd[1]: anacron.service: Killing process 8107 (ssh) with signal SIGKILL.
systemd[1]: anacron.service: Killing process 8109 (sshfs) with signal SIGKILL.
systemd[1]: anacron.service: Killing process 8112 (python3) with signal SIGKILL.
systemd[1]: anacron.service: Killing process 8126 (rsync) with signal SIGKILL.
systemd[1]: anacron.service: Killing process 8127 (ssh) with signal SIGKILL.
systemd[1]: anacron.service: Killing process 8123 (QXcbEventQueue) with signal SIGKILL.
systemd[1]: anacron.service: Succeeded.
I find this situation rather weird, because why would anacron deliberately kill my processes? So what happens, as far as I interpret it, is that the command I'm executing in my backup script exits rather quickly, as its only task is to spin off some worker processes like python, ssh, rsync etc, and once they're running in the background, the launcher quits. So far so good, but anacron apparently thinks it's its duty to kill off all descendants of the original backup script once the script is done. But how can I prevent it from doing so? Do I really need to manually find out the descendant PIDs and wait for all of them to be done, before exiting the backup script?
I haven't found any info on this behavior online, so I would be glad if someone had any advice on how to proceed here.
|
Edit: The workaround below didn't work 100% of the time, so I went digging again and identified that the root cause is not anacron itself but rather its systemd configuration: Apparently systemd units can specify a kill mode that is used to clean up any spawned subprocesses once the main process is done. Anacron had this set to "mixed", and changing it to "none" successfully allows my backup task to run in the background. Be aware that for other kinds of anacron tasks disabling the killing of subprocesses might not be ideal, but for my use case it's exactly what I need.
Changing the kill mode can be done via sudo systemctl edit anacron.service and then entering the following configuration:
[Service]
KillMode=none
--
I'll leave the previous workaround for reference:
I found a more or less robust workaround for this situation: In my specific case, backintime does a remote backup via SSH. This is done by launching an sshfs process that takes care of mounting the backup location and exits once the backup is done. As such, the script can just wait for that process to start and finish again, in order to know when the backup is done:
sudo -i -u samuel /usr/bin/nice -n19 /usr/bin/ionice -c2 -n7 /usr/bin/backintime backup-job >/dev/null
echo "Waiting for SSHFS to start"
until pids=$(pidof sshfs)
do
sleep 1
done
echo "Waiting for backup to finish (SSHFS [$pids] to exit)"
while ps -p "$pids" >/dev/null 2>&1
do
sleep 5
done
echo "SSHFS process gone, exiting"
This might not be the most elegant solution but at least it works nicely.
| Anacron kills children of my task |
1,333,433,627,000 |
After researching enough on the times when cron.daily (and weekly and hourly) run, I found the following command:
grep run-parts /etc/crontab
But the output of this command isn't very intuitive. If I want to see the cron run times in a human-readable format, like the one given by the date command, what should I do?
|
You can use hcron. You can specify a crontab file and it will parse it the way you would expect. My tip would be to rename the binary after extracting the package, because the binary defaults to the name cron.
Example output:
./hcron --file /var/spool/cron/root
00 15 * * *: At 03:00 PM | command-foo
0 23 * * 5: At 11:00 PM, only on Friday | my-other-command-foo
| Print the cron time in human readable format [duplicate] |
1,333,433,627,000 |
I am new to linux and I am trying to automate my update/upgrade script for my raspberry pi.
Right now it is set to run every 1 minute, so I can see whether it logs what it does. Later on I will make it run once every X amount of time. However, it does not log anything, nor does it create the out.log file.
The steps I take:
crontab -e
#I used the following line
01 * * * * root apt-get update -y && apt-get upgrade -y > /home/cronbin/out.log
I then save it:
CTRL + X
Y
ENTER
I did read about unattended upgrades and this post (apt-get upgrade not installing upgrades through crontab job) was also an interesting read, however I am trying to learn/work with linux and therefore decided to still ask this question in the hopes of finding out what I am doing wrong and how to make it work.
|
apt generally requires root permissions, and so you should put this in the root crontab instead of a user crontab. On RPi, you can actually get away with using sudo in a user crontab as long as the user is pi, but this may not work on many other distributions - it's not a good practice.
Instead, use this:
sudo crontab -e
This will open the root crontab for editing, and you can enter something like this which will run apt-get update and apt-get upgrade once per day at 12:00 noon, and write all of the results to the file /home/pi/upgrade_results.log:
0 12 * * * (apt-get update && apt-get -y upgrade) >> /home/pi/upgrade_results.log 2>&1
Please review man apt-get to get the latest information for your system, and all of the important details re use of this command.
No -y option is needed for update as it does not generally prompt the user.
The >> redirect appends output to the log file; if you want only the results of the most recent run, replace the append redirect with the overwrite redirect: >
The 2>&1 redirect combines the normal stdout (1) output with any stderr (2) error messages; this is the same output you would get in your terminal if you ran these commands from the command line of your interactive shell.
see the crontab guru for help structuring your schedule.
When you test this cron job, you should not run it at 1 minute intervals because that may not be sufficient time to run both commands to completion. IOW, you're running these commands "on top of each other" with potentially harmful effect. If you feel the need to run this repeatedly for testing purposes, you should run the commands "manually" first, and then use the -s or --dry-run option in your cron job. Once you are satisfied the cron job is working, be sure to remove the -s or --dry-run option from the apt-get command.
upgrade can generate a fair amount of output, and if you use the append redirect >>, your log file will become large over time. You should review it regularly, and either trim or simply delete the file. If you want to constrain the size of your log automatically, consider the logrotate command, or perhaps another cron job that pipes the log file through tail with the -c option to a temporary location & overwrites the log file with the tmp file.
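One way to cap the log size, sketching the tail -c idea from the last bullet (the path and the 64 KiB limit are arbitrary examples):

```shell
# Keep only the last 64 KiB of the log.
log=/tmp/upgrade_results.log
printf 'line %s\n' $(seq 1 10000) > "$log"    # stand-in for a grown log
tail -c 65536 "$log" > "$log.tmp" && mv "$log.tmp" "$log"
wc -c < "$log"
```

Run from a second cron job, this keeps the newest output while bounding disk use; logrotate is the more featureful alternative (compression, multiple generations).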
| cronjob does not output message for automated apt |
1,333,433,627,000 |
What are the differences between editing the crontab file directly:
vi /etc/crontab
And using:
crontab -e
Because if you use one or the other, the commands inside the file are not the same.
|
The crontab command manages crontab files for USERS. These crontabs are stored in the /var/spool/cron/crontabs/ directory; they are not intended to be edited directly!
The /etc/crontab is the SYSTEM crontab file, which can be edited directly.
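The visible difference is the line format: the system file takes an extra user field that per-user crontabs must not have (the run-parts entry below is the stock Debian hourly line):

```
# /etc/crontab (system crontab): six fields, the sixth names the user
17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly

# crontab -e (per-user crontab): five fields, no user column; the job
# runs as the crontab's owner
17 *    * * *   cd / && run-parts --report /etc/cron.hourly
```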
| Different Crontab - Linux Debian |
1,333,433,627,000 |
I have a small check for whether today is the first Monday of the month, which is like this:
['$(date "+%u")' = "1"] && echo 'trąba'
but when crontab sends me an email, I get the error that something went wrong:
/bin/sh: -c: line 0: unexpected EOF while looking for matching `''
/bin/sh: -c: line 1: syntax error: unexpected end of file
I tried changing '$(date "+%u")' to "$(date '+%u')" but didn't help.
The subject of the email is ["$(date '+, so I think cron has a problem with the first quote mark,
but this code works just fine when executed inside the terminal.
Maybe someone has a better check for the 1st Monday of the month.
OS: CentOS 7
crontab -l
* * * * * [ "$(date +%u)" -eq 1 ] && echo trąba
|
Fixed POSIX code follows:
[ "$(date +%u)" -eq 1 ] && echo trąba
Errors / warnings / infos were:
missing spaces in [ .. ] block
apostrophes instead of double quotes
equal sign instead of POSIX -eq in test [ .. ]
you do not have to quote anything after echo
you do not have to quote numbers
you do not have to quote that date code
Cron
*/1 * * * * [ "$(/usr/bin/date +\%u)" -eq 1 ] && /usr/bin/echo trąba >> ~/cron-test
In order to test this, you may try the above code
You may have your date and echo binaries elsewhere on your system, to determine where they are, use which date etc.
After modifying your Cron, you can simply use this and sit tight
tail -f ~/cron-test
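To sanity-check the fixed test against known dates rather than waiting for a Monday (this uses GNU date's -d option, which CentOS 7's coreutils provides; the dates below are examples):

```shell
# 2020-07-27 was a Monday, 2020-07-28 a Tuesday.
[ "$(date -d 2020-07-27 +%u)" -eq 1 ] && echo trąba
[ "$(date -d 2020-07-28 +%u)" -eq 1 ] || echo 'not Monday'
```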
| Command not working in crontab [duplicate] |
1,333,433,627,000 |
I have 3 shell scripts which I want to run in order on separate days. How can I do that with crontab?
for example i have these 3 scripts: test1 test2 test3
today is Monday. script test1 is executed at 12 o'clock.
tomorrow is Tuesday. script test2 is executed at 12 o'clock.
Wednesday, test3.
Thursday, test1.
Friday, test2.
and so on.
(If any further information is needed, please let me know in the comments and I will add it to the question.)
|
The easy way is to run a script daily and let it keep track of which script to run. Something like:
#!/bin/bash
# find my name
me="${0##*/}"
# make sure the counter file exists.
counter="/var/run/$me"
if [[ ! -f "$counter" ]] ; then
echo "1" >"$counter"
fi
maxcount=3
pick="$(cat "$counter")"
nextpick=$(( pick + 1 ))
[[ $nextpick -gt $maxcount ]] && nextpick=1
echo "$nextpick" > "$counter"
case $pick in
  1) test1 ;;
  2) test2 ;;
  3) test3 ;;
  *) echo "Invalid pick: $pick" >&2; exit 1 ;;
esac
exit 0
| how to use crontab to run scripts so that they are executed one after the other on separate days? |
1,333,433,627,000 |
I am trying to schedule a cron job: if a month has 5 weeks and 5 Wednesdays, script A.sh should run on the 1st, 2nd, 3rd and 4th Wednesdays, while on the 5th Wednesday it should run b.sh.
Else
If a month has 4 weeks and 4 Wednesdays, script A.sh should run on the 1st, 2nd and 3rd Wednesdays, while on the 4th Wednesday it should run b.sh.
|
Original Answer to The Original Question.
You need something like this:
0 1 1-21 * * [ $(date +\%u) -eq 3 ] && "call your command/script here"
This schedule, 0 1 1-21 * *, means "run the job at 01:00 AM on every day-of-month from 1 through 21". Now, why did we limit it to 1-21? Because in the best case the month starts on a Wednesday (so we selected 1), and the job will run on the 1st, 8th and 15th day-of-month;
and in the worst case Wednesday is the last day of the first week (day 7), so the job will run on the 7th, 14th and 21st day-of-month; that's why we limited the running days to days 1-21 of the month.
But then man -s 5 crontab says:
Note: The day of a command's execution can be specified by two fields — day of month, and day of week. If both fields are restricted (i.e., don't start with *), the command will be run when either field matches the current time.
and that is why we only choose day-of-month and put * for day-of-week:
Instead of specifying day-of-week in the crontab, as the note above points out, we handle that with a simple shell test, [ $(date +\%u) -eq 3 ]; this checks whether the day-of-week is the third day of the week, i.e. whether it is Wednesday (date +%u prints 1-7: 1 = Monday, ..., 7 = Sunday), and only then executes the command/script.
Answer to The Now Revised Question.
0 1 * * 3 [ $(date +\%m) -eq $(date -d'+7days' +\%m) ] && scriptA.sh || scriptB.sh
If it's not the last Wednesday: with [ $(date +\%m) -eq $(date -d'+7days' +\%m) ] we check that this Wednesday's month ($(date +\%m)) and the month 7 days later, i.e. next Wednesday's month ($(date -d'+7days' +\%m)), are the same; if so, it is not the last Wednesday and scriptA.sh will be executed, otherwise it is the last Wednesday and scriptB.sh will be executed.
with above, scriptA.sh will be executed on every Wednesday of the month, except the last Wednesday which then scriptB.sh will be executed.
Of course one could move these tests inside the script[AB].sh instead of complicating things in crontab entries.
Note: The above solution can be applied to a single command/script too by removing the || scriptB.sh part, or one could replace it with || true in case you also do not want cron to react to failures by notifying an error via MAILTO.
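The month comparison is easy to verify against fixed dates instead of "today" (GNU date's -d option; September 2020 is a convenient example month with five Wednesdays: the 2nd, 9th, 16th, 23rd and 30th):

```shell
# True when the Wednesday given in $1 is the month's last one: seven days
# later falls in a different month.
is_last_wed() {
  [ "$(date -d "$1" +%m)" != "$(date -d "$1 +7days" +%m)" ]
}
is_last_wed 2020-09-23 && echo 'run b.sh' || echo 'run A.sh'   # run A.sh
is_last_wed 2020-09-30 && echo 'run b.sh' || echo 'run A.sh'   # run b.sh
```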
| How to schedule a cron for 1st 2nd and 3rd week and on specific day of week? |
1,333,433,627,000 |
I don't fully understand man cron, and the handling of the first % symbol.
Unlike most questions about % in a crontab file, I'd actually like to use % as newline. What do I need to do with the first % so it isn't interpreted as either a literal "%" symbol, or a change/redirect of stdin? I don't quite understand what it's doing and what to do if I dont need it.
Example crontab line:
0 0 * * * root newline="%" %echo "this${newline}that" %command2 %command3
(I know I could use semicolons or \n for some, but using %-newline for the sake of this example)
Update:
As comments seem to miss the point, this is the issue:
man crontab(5)
The "sixth" field (the rest of the line) specifies the command to be run. The entire command portion of the line, up to a newline or a "%" character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the cronfile.
... A "%" character in the command, unless escaped with a backslash (), will be changed into newline characters ...
... and all data after the first % will be sent to the command as standard input.
I well understand that as written, the crontab line I've given won't currently work. I understand that unescaped % after the first -> newline (first bold point). What I don't understand is the second bold point.
If all I want to do is use % to insert newlines, this implies I need to do something different with the first unescaped "%", to avoid it being treated as an unintended trigger to "send all subsequent data to the command as stdin". That might mean putting an extra "%" at the start of the command, or whatever, to ensure the first % doesn't disrupt the rest of the command's handling.
Some examples of a line with 2 or 3 unescaped %'s, showing how cron will interpret the first unescaped % and what that does in command handling, would be ideal.
Also, is the second bold point to be taken literally? If so, I can use a 2nd or subsequent unescaped % to act as a newline; essentially my command can contain a multiline script after the % -> newline substitution, before passing it to the shell. Is that correct?
|
In the crontab, the part after the time specifications and up to the first unescaped % makes up the code to be passed to the shell (as the argument after sh and -c). What comes after that first % makes up lines to be fed to that shell via its stdin.
In effect, with a crontab line such as:
* * * * * shell code%line 1%line 2
cron does every minute the equivalent of this shell code:
printf '%s\n' 'line 1' 'line 2' | sh -c 'shell code'
So you can't really use it to store a newline character in a variable of that shell code.
Here, if you want a newline stored in a variable in that code, you can do
* * * * * eval "$(printf 'nl="\n"')"; echo "blah${nl}blah"
You could however make the code be just sh, and then feed the code on stdin:
* * * * * sh%nl='%'%echo "blah${nl}blah"
In that one, cron will run sh -c sh, and the stdin of the process that executes that sh that executes another sh will be:
nl='
'
echo "blah${nl}blah"
Another option is to tell cron to use a shell that supports the ksh93-style $'...' form of quotes (like ksh93/zsh/bash) instead of sh and do:
SHELL=/bin/zsh
* * * * * nl=$'\n'; echo "blah${nl}blah"
| How to handle the first '%' in a cron command? |
1,333,433,627,000 |
I'm trying to set an ENV variable in a crontab every so often to change the session_secret. Using both ruby and bash, I can't seem to get out of the sandboxed environments that they use. I need to set this variable to a randomly generated hex value of so many characters.
Is this possible? I can't seem to change the current instance value.
MYVAR=$(openssl rand -hex 32)
This is in a bash script, but does not change when I go to check as my own user. Not sure what I'm doing wrong.
|
Wrong conclusion - that you can change the process environment from outside the process.
Let's say you run a ruby script.
This starts a bash shell instance with environment vars, in which the ruby interpreter starts, inheriting the current bash shell instance's environment and maybe adding some interpreter-specific new env vars.
Each running program gets a process id aka pid.
The process environment is stored in /proc/<pid>/environ, which is read-only and can't be changed from outside, as pointed out in "change environment of a running process".
While your ruby script is running, the parent bash instance is running too. This bash instance does not read changed env vars and does not inherit or propagate new vars to its child:
$ pstree -p | grep ruby
bash(1234)---ruby(5678)
you can grep for your env vars with
xargs -n 1 -0 < /proc/5678/environ | grep MYVAR
The only way to have the new changed environment vars while the script was running is to start a new bash/ruby instance and exit the old one.
Possible wrong conclusion - that the script constantly reads from the environment, gets notified when the environment has changed, and that the environment is not being cached by the interpreter.
Usually a script reads env vars only at startup and keeps using internal variables at runtime, not env vars. Even if changing the process env from outside were possible, and the script constantly re-read the environment variables, the env could be a cached copy and not the real deal.
(python example)
# start
key = os.environ.get('MYVAR')
# do something with key until the script ends
Answers:
Modify the ruby script to execute openssl rand -hex 32 and read the result into the variable from time to time.
Modify the ruby script to read a file /path/data into a variable, with the file generated by cron from the output of openssl rand -hex 32 from time to time.
A cronjob restarts the script with the new environment (i.e. with sudo; see the switches -i, -u and --) as mentioned in the comments. User-wide env could be set in profile.d (what-do-the-scripts-in-etc-profile-d-do).
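The second option above can be sketched in shell (the paths are examples, and openssl is assumed available, as in the question): cron rewrites a secret file, and the long-running process re-reads it whenever it needs the key.

```shell
# What the cron job would run: refresh the secret file, readable only by
# the owner.
secret_file=/tmp/session_secret
umask 077
openssl rand -hex 32 > "$secret_file"
# What the script would do at its convenience: re-read the current secret.
key=$(cat "$secret_file")
echo "${#key}"    # 64 hex characters
```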
| How Set ENV Variable Using crontab for All Users? |
1,333,433,627,000 |
I have a cron job which should run once a week to update, upgrade and autoclean apt, but it never seems to work, at least not as far as I can tell.
This is apparent because running sudo apt-get upgrade (weeks after the cron job was added) shows there are packages ready to be upgraded.
System info
Linux squire 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Cron job
$ sudo crontab -e
0 12 * * 1 apt-get update && apt-get -y upgrade && apt-get -y autoclean
Processes
$ ps -aux | grep cron
root 674 0.0 0.0 4640 768 ? Ss Oct14 0:01 /bin/sh /snap/nextcloud/23743/bin/nextcloud-cron
root 757 0.0 0.0 31320 1636 ? Ss Jul27 0:17 /usr/sbin/cron -f
squire 22697 0.0 0.0 14428 1000 pts/0 R+ 15:25 0:00 grep --color=auto cron
Systemd service
● cron.service - Regular background program processing daemon
Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-07-27 22:15:57 UTC; 2 months 27 days ago
Docs: man:cron(8)
Main PID: 757 (cron)
Tasks: 1 (limit: 2312)
CGroup: /system.slice/cron.service
└─757 /usr/sbin/cron -f
Oct 24 14:09:01 squire CRON[15986]: pam_unix(cron:session): session closed for user root
Oct 24 14:17:01 squire CRON[16575]: pam_unix(cron:session): session opened for user root by (uid=0)
Oct 24 14:17:01 squire CRON[16576]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Oct 24 14:17:01 squire CRON[16575]: pam_unix(cron:session): session closed for user root
Oct 24 15:03:01 squire cron[757]: (root) RELOAD (crontabs/root)
Oct 24 15:09:01 squire CRON[21350]: pam_unix(cron:session): session opened for user root by (uid=0)
Oct 24 15:09:01 squire CRON[21350]: pam_unix(cron:session): session closed for user root
Oct 24 15:17:01 squire CRON[22008]: pam_unix(cron:session): session opened for user root by (uid=0)
Oct 24 15:17:01 squire CRON[22009]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Oct 24 15:17:01 squire CRON[22008]: pam_unix(cron:session): session closed for user root
Manual upgrade
$ sudo apt-get update && sudo apt-get upgrade
Hit:1 http://ppa.launchpad.net/certbot/certbot/ubuntu bionic InRelease
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:3 https://deb.torproject.org/torproject.org bionic InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu bionic InRelease
Get:5 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Fetched 252 kB in 1s (178 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
base-files linux-generic linux-headers-generic linux-image-generic netplan.io python3-parsedatetime ubuntu-server
The following packages will be upgraded:
cryptsetup cryptsetup-bin libcryptsetup12 libfreetype6
4 upgraded, 0 newly installed, 0 to remove and 7 not upgraded.
Need to get 714 kB of archives.
After this operation, 6,144 B of additional disk space will be used.
Do you want to continue? [Y/n]
Updated cron job for output test
0 12 * * 1 ( apt-get update && apt-get -y upgrade && apt-get -y autoclean ) >/tmp/apt.cron.log 2>&1
Log from test
Hit:1 http://ppa.launchpad.net/certbot/certbot/ubuntu bionic InRelease
Hit:2 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:3 https://deb.torproject.org/torproject.org bionic InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu bionic InRelease
Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:6 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
The following packages have been kept back:
base-files linux-generic linux-headers-generic linux-image-generic
netplan.io python3-parsedatetime ubuntu-server
The following packages will be upgraded:
cryptsetup cryptsetup-bin libcryptsetup12 libfreetype6
4 upgraded, 0 newly installed, 0 to remove and 7 not upgraded.
Need to get 714 kB of archives.
After this operation, 6,144 B of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcryptsetup12 amd64 2:2.0.2-1ubuntu1.2 [134 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 cryptsetup-bin amd64 2:2.0.2-1ubuntu1.2 [93.0 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 cryptsetup amd64 2:2.0.2-1ubuntu1.2 [152 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libfreetype6 amd64 2.8.1-2ubuntu2.1 [335 kB]
Fetched 714 kB in 1s (997 kB/s)
dpkg: warning: 'ldconfig' not found in PATH or not executable
dpkg: warning: 'start-stop-daemon' not found in PATH or not executable
dpkg: error: 2 expected programs not found in PATH or not executable
Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin
|
I've just seen your error messages from the log run
dpkg: warning: 'ldconfig' not found in PATH or not executable
dpkg: warning: 'start-stop-daemon' not found in PATH or not executable
dpkg: error: 2 expected programs not found in PATH or not executable
and
Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin
Somewhere near the top - above your job definitions - should be one or two lines like these
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
You don't actually need the SHELL line but since most other things on a Linux-based system tend to run with bash I'd recommend it.
If you don't have anything of yours in /usr/local you can strip the PATH line I've given you right back to just the first four directories. But by default cron's PATH doesn't include the two sbin directories which is why it's erroring with commands not being found.
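Putting both pieces together, the top of root's crontab would look something like this (a sketch; the PATH is the one suggested above, so trim it to the directories you actually need):

```
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

0 12 * * 1 ( apt-get update && apt-get -y upgrade && apt-get -y autoclean ) >/tmp/apt.cron.log 2>&1
```

With the sbin directories on PATH, dpkg can find ldconfig and start-stop-daemon and the upgrade completes.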
| Cron job doesn't run |
1,333,433,627,000 |
I have a Python script which runs in multiple instances with different parameters, for instance:
python3 proc.py -s -v -l proc_one.log
python3 proc.py -c -v -l proc_two.log
I usually start these in tmux sessions with a script on boot and manually check in on these, to restart if they crashed.
tmux new-session -d -s proct 'cd /opt/proct/ && python3 proc.py -s -v -l proc_one.log; bash'
tmux split-window -t proct 'cd /opt/proct/ && python3 proc.py -c -v -l proc_two.log; bash'
This is rather tedious and should be automated. I recently came across this solution, which offers kind of what I look for. A cronjob running the following bash script every couple of minutes:
if [[ ! $(pgrep -f '^python3 proc.py -s -v -l proc_one.log') ]]; then
python3 proc.py -s -v -l proc_one.log
fi
While this would keep my script running, it would also prevent me from checking in on the process.
My question is, how can I have a script check if my process is running, like the one above, but if it is not running, start it in a tmux session. I wouldn't mind separate tmux sessions for the different instances.
|
Make a service.
Put
[Unit]
Description=ProcOfMine
After=network-online.target
[Service]
WorkingDirectory=/opt/proct
ExecStart=/usr/bin/python3 /opt/proct/proc.py -s -v -l proc_one.log
Restart=always
[Install]
WantedBy=multi-user.target
in
/etc/systemd/system/proc.service
Also implement error handling so that the script does not crash.
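Since the question runs two near-identical instances, a template unit is one hedged option (the name proc@.service and the per-instance files under /etc/proc/ are my invention for illustration, not something the project provides). Save as /etc/systemd/system/proc@.service:

```ini
[Unit]
Description=Proc instance %i
After=network-online.target

[Service]
WorkingDirectory=/opt/proct
# Each instance reads its own arguments from a file you create yourself,
# e.g. /etc/proc/one.conf containing: PROC_ARGS=-s -v -l proc_one.log
EnvironmentFile=/etc/proc/%i.conf
ExecStart=/usr/bin/python3 /opt/proct/proc.py $PROC_ARGS
Restart=always

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now proc@one proc@two` starts both instances, and systemd restarts either one whenever it exits.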
| Keep a Python script with parameters running, restart if crashed, in a tmux session |
I would like to run a Python script every day if my computer is on and it has been connected to the Internet. How can I do it? My effort is
00 14 * * * python3 /home/jaakko/.config/spyder-py3/temp.py
But the problem is that I don't know whether my computer will be on at that time and whether it will have access to the Internet.
|
You should use anacron instead of cron. In /etc/cron.daily, create a file (I'll call it script) with these contents:
#!/bin/sh
while true; do
for host in www.ieee.com www.stackexchange.com; do
if ping -c 1 -w 4 "$host" >/dev/null 2>&1; then
python3 /home/jaakko/.config/spyder-py3/temp.py
exit 0
fi
done
sleep 60
done
Make it executable by chmod +x script and you're done.
It will ping domains and run python if a response is received in 4 seconds. Choose the domains that best suit your needs. The ones I provide are just examples, but for general Internet access they will probably be enough.
If no packet is received, it will try again in 60 seconds.
Take note: The script will be run as root. If that is a problem for you, you can follow the steps presented in this answer in AskUbuntu to run it as your normal user.
| How can I run Python script once a day if computer is on and it has a connection to the Internet? |
I have a script that verifies the battery level using acpi and if it's below certain threshold it should lock the machine and hibernate. The script is executed every minute using crontab
The problem is that the machine gets locked but never hibernates.
The script:
#!/bin/sh
acpi -b | awk -F'[,:%]' '{print $2, $3}' | {
read -r status capacity
if [ "$status" = Discharging -a "$capacity" -lt 10 ]; then
echo 'Success' >> /tmp/low;
logger "Critical battery threshold";
DISPLAY=:0 i3lock -t -i $(ls -d ~/.wallpapers/* | shuf | head -n 1);
echo 'Locked' >> /tmp/low;
systemctl hibernate;
fi
}
The /tmp/low log file shows the following:
$ cat /tmp/low
Success
Locked
Success
Locked
Success
Locked
I tried to directly run a similar script (Without the ACPI check) and it worked perfectly
The testing script:
#!/bin/sh
acpi -b | awk -F'[,:%]' '{print $2, $3}' | {
read -r status capacity
echo 'Success' >> /tmp/low;
logger "Critical battery threshold";
DISPLAY=:0 i3lock -t -i $(ls -d ~/.wallpapers/* | shuf | head -n 1);
echo 'Locked' >> /tmp/low;
systemctl hibernate;
}
Same testing script was run using at but it didn't hibernate the machine. Any ideas why crontab can't execute systemctl hibernate?
|
I have found a solution. Apparently the problem was in the polkit package that defines the policies for users to shutdown, reboot, suspend, hibernate, etc
As I had no rule file in /etc/polkit-1/rules.d the default is not to allow users to hibernate or suspend the machine while a user is logged in (I believe the problem here is that I'm locking the machine before hibernating, and therefore there is an open session)
To solve it I had to create the file /etc/polkit-1/rules.d/99-allow-hibernate-on-low-battery.rules with the following content:
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.login1.suspend" ||
action.id == "org.freedesktop.login1.suspend-multiple-sessions" ||
action.id == "org.freedesktop.login1.hibernate" ||
action.id == "org.freedesktop.login1.hibernate-multiple-sessions") {
return polkit.Result.YES;
}
});
After that, cron and at can hibernate the machine correctly
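One caveat: the rule above returns YES for every user. If that is too broad, polkit rules can also inspect the requesting subject; the sketch below restricts the grant to a single account (replace "youruser" with the account whose cron job issues the hibernate call):

```
polkit.addRule(function(action, subject) {
    if ((action.id == "org.freedesktop.login1.hibernate" ||
         action.id == "org.freedesktop.login1.hibernate-multiple-sessions") &&
        subject.user == "youruser") {
        return polkit.Result.YES;
    }
});
```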
| `systemctl hibernate` not executed on crontab script |
My cron jobs have stopped working on my CentOS 7 server. The server is running WHM/cPanel.
It seems like it is an issue with PAM service because in /var/log/secure I can see the following errors when the cron jobs try to run:
Jun 24 10:45:01 server1 crond[22400]: pam_access(crond:account): auth could not identify password for [root]
Jun 24 10:45:01 server1 crond[22404]: pam_access(crond:account): auth could not identify password for [admin]
Jun 24 10:45:01 server1 crond[22400]: pam_localuser(crond:account): auth could not identify password for [root]
Jun 24 10:45:01 server1 crond[22402]: pam_access(crond:account): auth could not identify password for [root]
Jun 24 10:45:01 server1 crond[22405]: pam_access(crond:account): auth could not identify password for [admin]
Jun 24 10:45:01 server1 crond[22400]: pam_localuser(crond:account): auth could not identify password for [root]
Similarly /var/log/cron.log is showing that PAM is failing:
Jun 24 12:40:01 server1 crond[26129]: (admin) PAM ERROR (Authentication information cannot be recovered)
Jun 24 12:40:01 server1 crond[26129]: (admin) FAILED to authorize user with PAM (Authentication information cannot be recovered)
Jun 24 12:40:01 server1 crond[26130]: (admin) PAM ERROR (Authentication information cannot be recovered)
Jun 24 12:40:01 server1 crond[26130]: (admin) FAILED to authorize user with PAM (Authentication information cannot be recovered)
Jun 24 12:40:01 server1 crond[26125]: (root) PAM ERROR (Authentication information cannot be recovered)
Jun 24 12:40:01 server1 crond[26125]: (root) FAILED to authorize user with PAM (Authentication information cannot be recovered)
Jun 24 12:40:01 server1 crond[26127]: (root) PAM ERROR (Authentication information cannot be recovered)
Jun 24 12:40:01 server1 crond[26127]: (root) FAILED to authorize user with PAM (Authentication information cannot be recovered)
Jun 24 12:40:01 server1 crond[26131]: (admin) PAM ERROR (Authentication information cannot be recovered)
Jun 24 12:40:01 server1 crond[26131]: (admin) FAILED to authorize user with PAM (Authentication information cannot be recovered)
I've tried the following with no success:
Rebooting the server
Restarting the cron service
Editing /etc/security/access.conf to ensure that root is allowed access to the cron
cron.allow is non-existent and cron.deny is empty so that shouldn't be the
problem
Disabling SELinux and rebooting
Changing root password to ensure it's not an expiry issue
Checked /etc/passwd and /etc/shadow for passwords of both root and admin user
Removed all cron jobs except a simple one to write to a text file every minute. This cron job also did not work so it's not related to the jobs in the cron.
Please help as I'm not sure what else to do to fix this problem. It's worth noting that this issue started on June 21 and no changes were made to the server when it began occurring.
|
I was able to solve the issue. It is related to a file called /lib/libgrubd.so.
If you're experiencing this issue then check /etc/ld.so.preload. If this file contains /lib/libgrubd.so (it may be the only line in that file) then remove that line and PAM should start working again. I also removed the /lib/libgrubd.so file from the system as it may be associated with a virus as shown here.
Still not entirely sure what caused the issue, but this is what was making PAM function incorrectly. See more info here.
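For reference, the cleanup amounts to deleting that one line. The sketch below works on a scratch copy so it is safe to run anywhere; on an affected system you would point preload_file at the real /etc/ld.so.preload (as root) instead. The second path in the sample data is an invented placeholder standing in for any legitimate entries you want to keep.

```shell
#!/bin/sh
# Build a scratch stand-in for /etc/ld.so.preload containing the malicious
# entry plus one harmless placeholder line.
preload_file=$(mktemp)
printf '%s\n' '/lib/libgrubd.so' '/lib/libsomething-legit.so' > "$preload_file"

# Delete only the /lib/libgrubd.so line, keeping everything else intact.
sed -i '\#^/lib/libgrubd\.so$#d' "$preload_file"

cat "$preload_file"   # -> /lib/libsomething-legit.so
```

After cleaning the real file, cron's PAM checks should succeed again on the next run.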
| Cron jobs have stopped working due to PAM |
I'm working on a script for user Bob with the relevant parts shown below. The problem I'm having is that if I put this cronjob under user Bob, zenity will work but shutdown won't. And if I put it under root, shutdown will work but zenity won't be visible on the console.
#!/bin/bash
eval "export $(egrep -z DBUS_SESSION_BUS_ADDRESS /proc/$(pgrep -u Bob gnome-session)/environ)";
someValue=`DISPLAY=:0.0 zenity --text="tell me your value" --entry`
...
...
/sbin/shutdown -h "now"
I also tried running under root and using su to Bob for zenity and exiting back to root to shutdown, but that didn't work.
Is there a way to do this?
Other info
OS is Linux Mint and Bob is the only user of the machine
|
Non-privileged users cannot shut down a machine from the command line. If you absolutely need to send shutdown as Bob, you can add him to sudoers using visudo.
sudo visudo
Add the following line to it (no quotes around now; the shell strips them before sudo ever sees the arguments, so the sudoers entry must match the unquoted form):
bob ALL = (root) NOPASSWD: /sbin/shutdown -h now
Save the file. Then you can su to bob and test the command:
sudo /sbin/shutdown -h now
| Cron Job Console Connect and Root Permissions |
I have a python script that should be restarted automatically every time it fails.
I've trying to use for this purpose cron with the following setting:
*/2 * * * * pgrep -f handler.py || /usr/bin/nohup /usr/bin/python3.6 /root/projects/myproject1/handler.py &
If I run this command directly in the CLI it starts fine, but it doesn't work from cron.
Syslog shows Cron's attempts to run the command without any errors:
CRON[10810]: (root) CMD (pgrep -f handler.py || /usr/bin/nohup /usr/bin/python3.6 /root/projects/myproject1/handler.py &)
|
It's better to run the script as a systemd service or under supervisor or similar process control system.
EDIT:
Just to clarify the reason: you do not have to reinvent the wheel. Both systemd and supervisor do exactly what you need, including restarting your process automatically when it dies.
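As a sketch of the systemd route (interpreter and paths taken from the question): save the following as /etc/systemd/system/handler.service, then run systemctl daemon-reload && systemctl enable --now handler.

```ini
[Unit]
Description=handler.py service
After=network.target

[Service]
ExecStart=/usr/bin/python3.6 /root/projects/myproject1/handler.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Restart=always replaces both the pgrep check and nohup: systemd keeps the process alive and starts it again 5 seconds after any exit.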
| Cron watchdog for a python script |
Background: I want to backup some files on my laptop on a daily basis, and there are times when the laptop is shut down for several days in a row.
I tried scheduling the backup as a cron job and using anacron to execute missed jobs, but was not sure about the way anacron works.
If, for example, I start the laptop after a 3-day shutdown, will anacron execute all 3 missed backup jobs? If this is true, what is the best way to stop anacron from executing jobs delayed for a certain length of time (> 1 day in this case)?
|
If you have an anacron job that runs a daily backup, and your system is down for three days, anacron will run the job once when the system comes online on the fourth day.
To elaborate, anacron allows you to specify commands to be repeated periodically with a frequency specified in days. When anacron is invoked (which can happen at boot time, and also during predefined hours of the day), it will read a list of jobs from the /etc/anacrontab configuration file. For each job, anacron checks if the job has been executed in last n days. For a daily job, this will be the last 1 day, i.e. today. If the job has not been run in this period, anacron executes the job.
Once the job runs to completion, anacron records the date of execution in a file, under /var/spool/anacron. This file is used to check the job's status when anacron is invoked the next day.
Since anacron only looks at the days elapsed since last execution and the configured frequency of execution, there is no problem of a job being executed multiple times.
A daily anacron job can be set up in the /etc/anacrontab configuration file using the following syntax:
1 15 backup-job /path/to/backup/script.sh
1 is the frequency of executing the command specified in days, 15 is a delay in minutes added to the execution of the job, 'backup-job' is an identifier, and '/path/to/backup/script.sh' is the command to be executed. You can take a look at man 8 anacron and man 5 anacrontab for more details.
| How to cancel anacron jobs delayed for a certain length of time? |
I have a server (Ubuntu 18.04) that is supposed to execute a Django management command at a certain interval. (every day at 16:30) I have setup jobs like this before, using cron but for some reason the server fails to execute my cronjob.
The line that I am trying to run is as follows, its using the executable of a python virtualenvironment to run a Django management command.
30 16 * * * /home/username/project/venv/bin/python3 /home/username/project/DjangoProjectName/manage.py process_data >> /home/username/crontaak.log
When I run the command directly from the terminal it all works well (including the log file). Cron also seems to work as I added the following cronjob as a test and it worked as expected.
* * * * * date > /home/username/crontestrun
I also made sure that the script had a blank line at the end, as I found some posts suggesting that as a possible cause for problems.
I checked the crontabs of my other projects with a similar (working) setup and I could not find any mistakes / differences (except that those jobs run on servers running Ubuntu 16.04).
Does anyone here have an idea what is wrong with my setup here?
|
In the terminal use:
echo $PATH
When cron runs your job it uses a minimal environment, so the PATH entries that allow your Python script to work in the terminal are missing.
The solution is to create a bash script that calls the Python script. Before doing so, however, it first executes:
PATH="new-paths:$PATH"
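You can see the stripped-down environment for yourself. The snippet below simulates it with env -i; the exact default PATH varies between cron implementations, so treat this as an approximation:

```shell
#!/bin/sh
# Run a command with an environment reduced to roughly what cron provides:
# no user variables, and only a minimal PATH.
env -i PATH=/usr/bin:/bin sh -c '
    command -v python3 || echo "python3 not found on this minimal PATH"
'
```

Anything your job needs from /usr/local/bin, a virtualenv activation, or your shell profile simply is not there until the wrapper script puts it back on PATH.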
| Cronjob not executing, but command works |
As everyone knows, the content of /tmp should be deleted after some time.
In my case we have machines (Red Hat 7.2) that are configured as follows.
As we can see, the service that is triggered to clean up /tmp is activated every 24 hours (1d).
systemd-tmpfiles-clean.timer from my machine:
more /lib/systemd/system/systemd-tmpfiles-clean.timer
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
[Unit]
Description=Daily Cleanup of Temporary Directories
Documentation=man:tmpfiles.d(5) man:systemd-tmpfiles(8)
[Timer]
OnBootSec=15min
OnUnitActiveSec=1d
And this is the file that is responsible for the rules.
We can see that, according to these rules, files/folders will be deleted if they are older than 10 days (this is my understanding, please correct me if I am wrong).
the rules are:
more /usr/lib/tmpfiles.d/tmp.conf
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
# See tmpfiles.d(5) for details
# Clear tmp directories separately, to make them easier to override
v /tmp 1777 root root 10d
v /var/tmp 1777 root root 30d
# Exclude namespace mountpoints created with PrivateTmp=yes
x /tmp/systemd-private-%b-*
X /tmp/systemd-private-%b-*/tmp
x /var/tmp/systemd-private-%b-*
X /var/tmp/systemd-private-%b-*/tmp
But because we have a hadoop cluster, we noticed that /tmp contains thousands of empty folders and files, and also folders and files with content that are really huge.
Example:
drwx------ 2 hive hadoop 6 Dec 19 13:54 2d069b18-f07f-4c8b-a7c7-45cd8cfc9d42_resources
drwx------ 2 hive hadoop 6 Dec 19 13:59 ed46a2a0-f142-4bff-9a7b-f2d430aff26d_resources
drwx------ 2 hive hadoop 6 Dec 19 14:04 ce7dc2ca-7a12-4aca-a4ef-87803a33a353_resources
drwx------ 2 hive hadoop 6 Dec 19 14:09 43fd3ce0-01f0-423a-89e5-cfd9f82792e6_resources
drwx------ 2 hive hadoop 6 Dec 19 14:14 f808fe5b-2f27-403f-9704-5d53cba176d3_resources
drwx------ 2 hive hadoop 6 Dec 19 14:19 6ef04ca4-9ab1-43f3-979c-9ba5edb9ccee_resources
drwx------ 2 hive hadoop 6 Dec 19 14:24 387330de-c6f5-4055-9f43-f67d577bd0ed_resources
drwx------ 2 hive hadoop 6 Dec 19 14:29 9517d4d9-8964-41c1-abde-a85f226b38ea_resources
drwx------ 2 hive hadoop 6 Dec 19 14:34 a46a9083-f097-4460-916f-e431f5790bf8_resources
drwx------ 2 hive hadoop 6 Dec 19 14:39 81379a84-17c8-4b24-b69a-d91710868560_resources
drwx------ 2 hive hadoop 6 Dec 19 14:44 4b8ba746-12f5-4caf-b21e-52300b8712a5_resources
drwx------ 2 hive hadoop 6 Dec 19 14:49 b7a2f98b-ecf2-4e9c-a92f-0da31d12a81a_resources
drwx------ 2 hive hadoop 6 Dec 19 14:54 2a745ade-e1a7-421d-9829-c7eb915982ce_resources
drwx------ 2 hive hadoop 6 Dec 19 14:59 9dc1a021-9adf-448b-856d-b14e2cb9812b_resources
drwx------ 2 hive hadoop 6 Dec 19 15:04 5599580d-c664-4f2e-95d3-ebdf479a33b9_resources
drwx------ 2 hive hadoop 6 Dec 19 15:09 d97dfbb5-444a-4401-ba58-d338f1724e68_resources
drwx------ 2 hive hadoop 6 Dec 19 15:14 832cf420-f601-4549-b131-b08853339a39_resources
drwx------ 2 hive hadoop 6 Dec 19 15:19 cd1f10e2-ad4e-4b4e-a3cb-4926ccc5a9c5_resources
drwx------ 2 hive hadoop 6 Dec 19 15:24 19dff3c0-8024-4631-b8da-1d31fea7203f_resources
drwx------ 2 hive hadoop 6 Dec 19 15:29 23528426-b8fb-4d14-8ea9-2fb799fefe51_resources
drwx------ 2 hive hadoop 6 Dec 19 15:34 e3509760-9823-4e30-8d0b-77c5aee80efd_resources
drwx------ 2 hive hadoop 6 Dec 19 15:39 3c157b4d-917c-49ef-86da-b44e310ca30a_resources
drwx------ 2 hive hadoop 6 Dec 19 15:44 b370af30-5323-4ad5-b39e-f02a0dcdc6bb_resources
drwx------ 2 hive hadoop 6 Dec 19 15:49 18a5ea21-30f9-45a8-8774-6d8200ada7ff_resources
drwx------ 2 hive hadoop 6 Dec 19 15:54 ee776a04-f0e8-4295-9872-f8fc6482913e_resources
drwx------ 2 hive hadoop 6 Dec 19 15:59 f5935653-0bf6-4171-895a-558eef8b0773_resources
drwx------ 2 hive hadoop 6 Dec 19 16:04 e80ea30b-c729-48a2-897d-ae7c94a4fa04_resources
drwx------ 2 hive hadoop 6 Dec 19 16:09 fde6f7e4-89bd-41b4-99d3-17204bf66f05_resources
We are worried that /tmp will be full and services can't delete the content because of that.
So we want to delete the folders and files from /tmp according to this:
every folder/file will be deleted if it is older than 1 day
and service will be activated each 1 hour
So we intend to set the following:
OnUnitActiveSec=1h ( in file /lib/systemd/system/systemd-tmpfiles-clean.timer )
v /tmp 1777 root root 1d ( in file /usr/lib/tmpfiles.d/tmp.conf )
Am I right here with the new settings?
Secondly, after setting this, do we need to do something for it to take effect?
|
This combination will certainly work. However, instead of removing everything in /tmp every hour, you're probably better off deleting only the resource files and directories, e.g.
R /tmp/*_resources
Keep in mind that your changes on the systemd and tmpfiles configuration should not be done in /usr or /lib. Instead, place the according overrides in /etc, e.g.
echo 'R /tmp/*_resources' >> /etc/tmpfiles.d/hadoop.conf
cp /lib/systemd/system/systemd-tmpfiles-clean.timer \
/etc/systemd/system/systemd-tmpfiles-clean.timer
$EDITOR /etc/systemd/system/systemd-tmpfiles-clean.timer
If you change the files in /usr or /lib you might end up with conflicts during upgrades.
If you already changed your files, make sure to reload the unit files with systemctl daemon-reload. Otherwise systemd won't pickup the change of your timer.
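A drop-in override is a lighter alternative to copying the whole timer unit: create /etc/systemd/system/systemd-tmpfiles-clean.timer.d/override.conf with just the changed setting, then run systemctl daemon-reload. The empty assignment on the first line clears the vendor value so the two intervals don't accumulate:

```ini
[Timer]
OnUnitActiveSec=
OnUnitActiveSec=1h
```

This way the vendor file stays untouched and package upgrades cannot conflict with your change.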
| how to manage cleaning of /tmp better on hadoop machines |
I'm doing a cronjob task which creates a daily database backup. To distinguish the daily files, I name them as follows: dump-(the current date). The backup operation went well, but the date is not interpreted as it should be: the file ends up named dump-$(date '+%Y-%m-%d') literally instead of dump-14-12-2018.
#filename=dump-$(date '+%Y-%m-%d')
#*/3 * * * * cd /bdd-backups/ && mysqldump --all-databases > $filename.sql -u xxx -pxxx
|
It's hard to tell what your problem is, since you show us a file that's 80% commented out, but it looks like you are treating the crontab file as if it were a multi-line shell script. It's not; each line is a self-contained, independent entity, so you cannot assign a value to a variable on one line and use it on another line.
Either put everything on one line, or (and this is probably better in the long run) put the date, cd and mysqldump commands into a separate script file, and run the script from crontab.
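A sketch of the script approach (hypothetical script path; the mysqldump line is commented out here so the naming logic can be tried on its own, and the credentials are the question's placeholders):

```shell
#!/bin/sh
# Compute the dated file name and use it in the same script that runs the dump.
filename="dump-$(date '+%Y-%m-%d')"
echo "$filename"    # e.g. dump-2018-12-14
# cd /bdd-backups && mysqldump --all-databases -u xxx -pxxx > "$filename.sql"
```

If you keep everything on the crontab line instead, remember that cron treats a bare % as a newline, so each one must be escaped as \%.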
| Can't display the date correctly in a file generated by a cronjob [duplicate] |
Yesterday I configured rsnapshot on a Debian 9 machine, but the cron jobs I've set for the backups in /etc/cron.d/rsnapshot are not being executed.
Also, after I save & close the /etc/cron.d/rsnapshot it doesn't say it installed the new crontab.
How can I find out why it's not being executed? I've already done some searching on this website and have found the following things:
There are no entries in /var/mail
Right permissions + owner for /etc/crontab
Also the cron process is running:
$ ps aux | grep cron
root  364  0.0  0.0  29664  2880  ?  Ss  Oct12  0:05  /usr/sbin/cron -f
The cron is set as following:
0 */4* * * root /usr/bin/rsnapshot hourly
Any suggestions on how I can find out why the crons are not being executed? I've already tried executing the command manually (this works) and have also tried nano and vim while editing and closing the rsnapshot cron file.
|
You're missing one space in the cron record. It should be like this:
0 */4 * * * root /usr/bin/rsnapshot hourly
| Cron.d not running (rsnapshot) |
I have the following script:
#!/bin/bash
/usr/bin/echo q | /usr/bin/htop -C | /usr/bin/aha --line-fix | /usr/bin/html2text -width 999 | /usr/bin/grep -v "F1Help\|xml version=" > htop.txt
It just captures the htop output.
It works fine if I run the script via command line but then if I run it via crontab as root:
15 15 * * 1-5 /bin/bash /root/collect_system_stats.sh
htop.txt will have just 1 byte and hexdump shows:
0000000 000a
0000001
What I have done wrong?
|
You should use grep with the --line-buffered flag; otherwise grep buffers its output and the file may never receive it before the pipeline ends. I don't have a fully qualified explanation for this, but that's what made my script work in a similar case.
Found this answer to line-buffer for grep.
Running htop as root from a cron job can also result in an error message:
/usr/bin/htop -C
Error opening terminal: unknown
Setting TERM=xterm in the script resolves this issue.
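Putting the two fixes together, the script could look like this (a sketch; the pipeline is guarded so the file stays runnable on machines without the tools, and /tmp/htop.txt is an arbitrary output path):

```shell
#!/bin/bash
export TERM=xterm   # cron provides no terminal type; htop needs one to start

# Run the capture only if all three tools are installed.
if command -v htop >/dev/null && command -v aha >/dev/null && command -v html2text >/dev/null; then
    echo q | htop -C | aha --line-fix \
        | html2text -width 999 \
        | grep --line-buffered -v 'F1Help\|xml version=' > /tmp/htop.txt
else
    echo "pipeline skipped: htop/aha/html2text not installed"
fi
```

The two changes relative to the original script are exactly the ones discussed above: TERM=xterm for cron's terminal-less environment, and --line-buffered on grep.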
| Running Script via Crontab With Different Result? |
I need to start the icecast2 service on startup.
To to make it run on boot, I added to crontab (root) the following line:
@reboot service icecast2 start >/home/pi/logs/icecast2.log 2>&1
after restart, the service doesn't run and I get this error:
/bin/sh: 1: service: not found
So I followed this answer on a similar thread, and added the full path as suggested:
@reboot service /usr/bin/icecast2 start >/home/pi/logs/icecast2.log 2>&1
but now I got this error:
Failed to start usr-bin-icecast2.service.mount: Unit
usr-bin-icecast2.service.mount not found.
Notes:
When I type sudo service icecast2 start it works.
Using Debian Scratch on a Raspberry pi
|
You don't need to use cron to get a service to start at boot.
All you need is this:
systemctl enable icecast2
That will start it on boot every time.
| Service command not found cron |
1,536,240,873,000 |
I got stuck with this cronjob that just won't work. I left it for a day so I could troubleshoot it again with some fresh ideas, but still no luck.
I tried to find my answer on this great post, but not everything is clear to me and at the end it still refuses to work. And to make it all worse, there are no logs, no errors to find concerning cron in /var/log.
What do I try to achieve?
Automate the removal of directories that are populated by an ip cam. In the directories are the snapshots.
Here is a view of the list.
pi@raspberrypi:/media/pi/USB/Dahua/Dahua $ ll
total 24K
drwxr-xr-x 3 xxxxxxxx xxxxxxxxx 4.0K Sep 2 05:59 2018-09-02d
drwxr-xr-x 3 xxxxxxxx xxxxxxxxx 4.0K Sep 3 00:57 2018-09-03d
drwxr-xr-x 3 xxxxxxxx xxxxxxxxx 4.0K Sep 4 02:03 2018-09-04d
drwxr-xr-x 3 xxxxxxxx xxxxxxxxx 4.0K Sep 5 01:20 2018-09-05d
drwxr-xr-x 3 xxxxxxxx xxxxxxxxx 4.0K Sep 6 00:20 2018-09-06
-rw-r--r-- 1 xxxxxxxx xxxxxxxxx 4.0K Sep 6 22:28 DVRWorkDirectory
I would like to keep the x newest files and get rid of the rest, all with a cronjob that would run, let's say, every week or every day at a certain time.
Seems not to be difficult but I just can't make it work. It's true, my Linux knowledge is basic and will probably be the cause of my problem.
Step 1: My script, the permissions and file location.
In my research I found out that these three topics were important, so this should be valuable info.
-rwxr--r-- 1 root staff 183 Sep 6 15:22 dahuapurge.sh
/usr/local/bin
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
#
#
#Keep the Dahua pics for max x time.
cd /media/pi/USB/Dahua/Dahua/
sudo rm -rf `ls -tl | tail -n +8`
Step 2: My cronjob config.
To create the cronjob I used the command crontab -e. In the file every line is commented except my manually written rule. I know, I set it to run every hour for the moment.
# AUTHOR: - xxxx xxxx
# DATE: - 31/08/2018
# DESCR: - Purge Cam pics
# LINK: -
#
0 * * * * /usr/local/bin/dahuapurge.sh
UPDATE:
I might have found one of my errors or maybe THE error. But I was quite far with my post so I share it anyway.
I added the option -rf to my rm command because I try to delete directories and not files. I modified my script and it should remove 1 directory again in about 30 mins.
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
#
#
#Keep the Dahua pics for max x time.
cd /media/pi/USB/Dahua/Dahua/
sudo rm -rf `ls -tl | tail -n +7`
UPDATE:
Still not working but I'll try the suggestions that where posted and update this asap.
|
Things to check in this order:
Does your script run non-interactively from the command line, as the same user it will run as under cron?
sudo -H -u user -- command
Is the crontab entry syntactically correct? Note that the system-wide crontab has an additional column (the user to run as). Restart instead of reload the cron service; then you should get an appropriate error message in /var/log/syslog if the crontab has an error:
Sep 6 15:56:50 myhost cron[834]: (CRON) STARTUP (fork ok)
Sep 6 15:56:50 myhost cron[834]: Error: bad command; while reading /etc/crontab
Sep 6 15:56:50 myhost cron[834]: (*system*) ERROR (Syntax error, this crontab file will be ignored)
Give the crontab a MAILTO variable pointing to a mail address. By this means you can debug your script: whenever the script writes something to stdout or stderr, that output is mailed to the address. For this to work, an MTA (mail transport agent) is necessary. Take a look here.
On another note: passing the output of ls to a command is a design anti-pattern. Files named *.jpg that are older than 5 days can be removed (non-recursively) like this:
find /path -maxdepth 1 -type f -name '*.jpg' -ctime +5 -delete
For more details see man 1 find.
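Applied to the question, the same find pattern works for the day-directories too. The sketch below demonstrates it on a throwaway directory so it can be run safely; for the real thing, replace base with /media/pi/USB/Dahua/Dahua and drop the demo setup lines:

```shell
#!/bin/bash
# Demo setup: one "old" day-directory and one recent one.
base=$(mktemp -d)
mkdir "$base/2018-08-01d" "$base/2018-09-06"
touch -d '30 days ago' "$base/2018-08-01d"   # simulate an old directory

# Delete directories (one level deep) not modified within the last 7 days.
find "$base" -mindepth 1 -maxdepth 1 -type d -mtime +6 -exec rm -rf {} +

ls "$base"   # -> 2018-09-06
```

Unlike rm -rf `ls -tl | tail -n +8`, this does not depend on parsing ls output or break on unusual names; the trade-off is that it selects by age rather than keeping a fixed number of newest directories.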
| Crontab not working or script error? |
I have a shellscript which can be successfully executed in UNIX with command sh Shell_script.sh; but I want it to run automatically. So I just configured a cronjob to run the script using crontab -e.
The cronjob added is below:
0 7-23 * * * * /home/folder1/folder2/Shell_script.sh > /dev/null 2>&1
I want it to be executed hourly from 7AM to 11PM every day.
My shell script has a she-bang #!/bin/bash.
I am getting a mail every hour with the content:
Your "cron" job on servername
* /home/folder1/folder2/Shell_script.sh > /dev/null 2>&1
produced the following output:
sh: +61: execute permission denied
|
You have an extra * in your cron line. A user crontab has only five time fields before the command, so the sixth * is taken as the start of the command, and cron ends up trying to execute whatever that * expands to instead of your script. Once you delete it, the job will execute fine.
Also consider that if your Shell_script.sh is not executable, you need to run it with sh (if it's written for sh) or bash (if it's written for bash):
0 7-23 * * * bash /home/folder1/folder2/Shell_script.sh > /dev/null 2>&1
| What can be reason for getting "execute permission denied" in cronjob in UNIX? |
Ultimately I want to detect a new backup file being dropped into a directory and then move that new file to another location for other operations.
This needs to work when there is nobody logged into the server and the script I use to start the operation will be triggered by a crontab entry.
I tried using 'inotifywait' like this:
dir=/home/userid/drop/
target=/home/userid/current/
inotifywait -m "$dir" --format '%w%f' -e create |
while read file; do
mv "$file" "$target"
done
This only works if you have a terminal window session open. If you try to start this script unattended using a crontab entry, the inotifywait command is ignored.
So then I tried using 'entr' and found the same problem. It only works if you have a terminal window open. When I created a script using entr and triggered it unattended with a crontab entry, it was ignored just like the inotifywait example.
So I know that this can be done using 'stat' and I have looked at many examples and tried to figure them out for my purpose, but I am just not understanding the syntax for it.
I know stat can detect the existence of a new file in a directory, but I do not know how to process that result in order to trigger the execution of the mv (move) command to move the new file to a different location.
And once I have a stat syntax that can do this, I will need it to run perpetually. Maybe it only checks every 15 seconds or something, but I will need it to always be prepared to move the file.
If anyone has experience doing this and can kindly help me with the syntax to link stat to executing another command, I would be greatly appreciative. I really believe that others would like to know how to do this as well, because I cannot imagine that everyone is ok with keeping a ssh putty window open 24/7 for the other 2 solutions.
Thanks in advance.
BKM
|
All you need is incron. Install incron package first if you have Ubuntu/Debian:
sudo apt install incron
or use the command for Red Hat/Fedora:
sudo yum install incron
Open file /etc/incron.allow in your favorite text editor - let it be vim:
vim /etc/incron.allow
and add new line with your user name (assume it's bob) to allow him to use incron:
bob
Afterwards open incron rules editor:
incrontab -e
and add the following line:
/home/userid/drop/ IN_CREATE mv /home/userid/drop/$# /home/userid/current/
where $# is incron built-in wildcard which means name of newly dropped backup file detected by incron.
To test the created rule add a file to the /home/userid/drop/ directory and check if the dropped file has been moved to /home/userid/current/ directory. Additionally check syslog:
tail /var/log/syslog
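If incron is unavailable, the polling approach from the question also works without any stat bookkeeping: just sweep the drop directory on a timer and move whatever regular files are there. A minimal sketch (demo paths under /tmp; in real use they would be /home/userid/drop and /home/userid/current):

```shell
#!/bin/sh
# One "sweep" pass: move every regular file out of the drop directory.
dir=/tmp/drop
target=/tmp/current
mkdir -p "$dir" "$target"
touch "$dir/backup1.tar"        # simulate a freshly dropped backup file

sweep() {
    for f in "$dir"/*; do
        if [ -f "$f" ]; then
            mv "$f" "$target"/
        fi
    done
}

sweep
# A cron @reboot entry could keep this running perpetually, e.g. by
# ending the script with:  while true; do sweep; sleep 15; done
```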
| How can I use 'stat' to detect a new file and then move it to a different directory? |
1,536,240,873,000 |
I have below entries in crontab -e ,
0 16 * * * /opt/nginxstack/Dropbox-Uploader/DatabaseDumper.sh > /var/log/cron.log 2>&1
19 20 * * * /opt/nginxstack/Dropbox-Uploader/upload_dump_dropbox.sh > var/log/cron.log 2>&1
The first shell script works fine while the second doesn't get executed. I could not find anything in the log either. Can someone please help? Permissions are set correctly, and /opt/nginxstack/Dropbox-Uploader/upload_dump_dropbox.sh works when run from a terminal. I am running as root.
Edit: below is the code I have in upload_dump_dropbox.sh:
#!/bin/bash
/opt/nginxstack/Dropbox-Uploader/dropbox_uploader.sh upload /opt/nginxstack/Dropbox-Uploader/mysqldumpsvps/* /mysqlbckfromvps
|
When a redirection fails on the command line, the associated command is not executed.
Example:
$ echo 'hello' >nonexistent/path
/bin/sh: cannot create nonexistent/path: No such file or directory
(echo never gets to execute)
Your second cron job redirects to a relative pathname, var/log/cron.log. If that pathname is not available from the working directory of the job, this redirection will fail and the job will not execute at all.
The cron daemon should have sent the owner of the crontab an email with the error message for each execution attempt.
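For example, the second line could be repaired by making the log path absolute (assuming /var/log/cron.log was the intended target, as in the first job):

```
19 20 * * * /opt/nginxstack/Dropbox-Uploader/upload_dump_dropbox.sh > /var/log/cron.log 2>&1
```

Note that both jobs would then truncate the same log file; give each job its own log file if you want to keep both outputs.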
| shell script in cron not working |
1,536,240,873,000 |
I have 2 programs, both writing to the same file (/tmp/outfile). Started by cron at the same time.
Basically this is what is happening:
echo -n "1111111111" >> /tmp/outfile
And at the same time:
echo -n "2222222222" >> /tmp/outfile
The output file says "11111222222222211111". This is an example, I am talking about hundreds of lines, where one line is "cut" mid-sentence, but simply put, above thing is happening.
How to prevent this behavior?
|
There are two immediately obvious ways to solve this:
Serialize the tasks. Instead of scheduling the two tasks at the same time, schedule a script that runs the tasks one after the other.
Use a advisory locking scheme to lock the writing operation of the tasks in such a way that only one task can write at a time. See questions tagged with lock and flock.
These two may be combined into a single script that runs both tasks in the background while the tasks themselves uses some form of locking as to not produce garbled/intermingled output.
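A sketch of the second option: wrap each append in flock(1) (from util-linux) so only one writer holds the lock at a time. The lock-file name here is an assumption; any path both jobs agree on works:

```shell
#!/bin/sh
# Each writer takes an exclusive lock on a shared lock file before
# appending, so the two appends can never interleave mid-write.
outfile=/tmp/outfile
lock=/tmp/outfile.lock
: > "$outfile"

append() {
    # flock creates the lock file if needed and holds the lock
    # for the duration of the inner command
    flock "$lock" sh -c "printf '%s' '$1' >> '$outfile'"
}

append "1111111111" &
append "2222222222" &
wait
```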
| Multiple >> redirects to same file by 2 scripts, mid-sentence breaking |
1,536,240,873,000 |
I use Ubuntu 16.04 with Bash and I've created this extensionless, shebangless file /etc/cron.daily/cron_daily:
for dir in "$drt"/*/; do
if pushd "$dir"; then
wp plugin update --all --allow-root
wp core update --allow-root
wp language core update --allow-root
wp theme update --all --allow-root
popd
fi
done
"$rse"
The reason I do it is to be less dependent on crontab.
I'd like to ask if the naming of the file is safe, and if the overall syntax and variable expansions are okay.
The drt and rse variables were already exported after their file was sourced and are usable.
|
Looking at the other scripts in the same location on an Ubuntu machine I have access to, it is clear that these scripts should be proper shell scripts. They should be executable and have a #!-line pointing to the correct interpreter.
Since you expect that the variable drt is set to something, you should check that it is actually set and that it's set to something reasonable. For example, if $drt is supposed to be the pathname of an existing directory:
if [ ! -d "$drt" ]; then
echo 'drt is unset or does not contain path to directory' >&2
exit 1
fi
Likewise for rse:
if [ -z "$rse" ]; then
echo 'rse is unset' >&2
exit 1
fi
This would be done at the start of the script.
Dir checking
pushd and popd are primarily meant for interactive use (this may be argued about). It is additionally difficult to read and maintain a script that changes directories back and forth. Maybe not in this script, but in general.
Instead of changing working directory, doing something, and then changing back, you may use
( cd "some directory" && "some other command" )
The cd above is only affecting the ( ... ) subshell.
In this script, it may be enough with
if cd "$dir"; then
command1
command2
# etc.
fi
assuming that $drt is an absolute path and that the simple command $rse is able to run correctly regardless of where it's started from (this leaves the script in a modified working directory after the if-statement). See the other scripts in /etc/cron.daily/ to get a view of how they do things (the above suggestion is how the /etc/cron.daily/dpkg script does it, but it has no further commands after its if-statement).
The script would benefit from properly indenting the body of the for-loop and if-statement.
With the original example code, it can be done like this:
#!/bin/bash
for dir in "$drt"/*/; do
if pushd "$dir"; then
wp plugin update --all --allow-root
wp core update --allow-root
wp language core update --allow-root
wp theme update --all --allow-root
popd
fi
done
"$rse"
Indentation can be done either with spaces or tabs (it's a matter of taste).
Additionally, I mistyped your variable names several times while writing this. Having descriptive variable names is beneficial for both yourself (in a few weeks time) and for anyone else who is trying to figure out what your script is supposed to be doing. There is no benefit of using short variable names in scripts as it leads to obscure code. I'm furthermore uneasy about the fact that these variables are set elsewhere as it implies a dependency on something that is not known and not documented in the script.
| Basic usage of /etc/cron/ (d): correct pattern for /etc/cron (daily/weekly/monthly) |
1,536,240,873,000 |
I have an Intel NUC with Ubuntu installed. It runs a Minecraft server.
I wanted a simple backup system for the server, and through some Googling I found that I can do so using cron and tar.
However, I seem to be unable to make cron do anything at all.
I made a simple test script for cron to run.
#!/bin/bash
cd ~/minecraft/Backups
touch bla.txt
And I modified my crontab by adding this.
# m h dom mon dow command
10 16 * * * /home/ben/minecraft/Backups/Test.sh
I waited for 16:10 to come and... nothing happened. There was no bla.txt file created. I tried it numerous times by entering different times and still nothing.
The script works when I run it manually. Any idea what I'm missing?
|
I am not sure if this answers your question, but I would suggest you to replace this line:
cd ~/minecraft/Backups
With this:
cd /home/ben/minecraft/Backups
Make sure your script has execution permissions:
chmod +x /home/ben/minecraft/Backups/Test.sh
Check if cron is installed and running:
/etc/init.d/cron status
If not, install it / start it:
apt-get install cron
/etc/init.d/cron start
| Having trouble with cron |
1,536,240,873,000 |
I have crontabs set up to download data published on a webpage and save it locally every x seconds:
* * * * * sleep 0; wget -O /home/lab/Documents/watchdog.xml 'IP-address'
and this works!
I actually want to save the file somewhere else, so if I try:
* * * * * sleep 0; wget -O /var/cache/watchdog.xml 'IP-address'
it does not work.
Since changing the location of the target file solves the issue, I'm assuming it's a permission issue? How do I check/change permissions for crontabs?
|
User crontabs (that you edit by running crontab -e) run as your user — so they use your user's permissions.
So you need to make the file writable as your user; most likely sudo chown "$USER" /var/cache/watchdog.xml would do that, if the file already exists. (If not, sudo touch /var/cache/watchdog.xml followed by the same chown will create an empty, writable file there.) There are other ways (e.g., by changing the group and making it group-writable, or by using ACLs); which makes the most sense depends on the situation.
Alternatively, you can use a system crontab (configured by editing /etc/crontab with a text editor, or—preferably if your system has it—creating a file in /etc/cron.d) which can run as any user. Note that system crontabs add an extra field: the user to run the command as. It comes between the day-of-week field and the command field.
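For example, a system crontab fragment such as /etc/cron.d/watchdog (the filename is arbitrary) could run the job as root, with the extra user field in place:

```
* * * * * root wget -O /var/cache/watchdog.xml 'IP-address'
```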
| Crontabs does not write to a location (permissions?) |
1,536,240,873,000 |
For a couple of years now I've been "scraping," using lynx -dump, content from a web page containing non-latin characters. I save the page content to a file, which I then modify via the agency of sed, and send that in the body of an e-mail--all this happening in a script I created. But I'm finding, after switching distros (Ubuntu to Void) that my script is not working as expected. I've identified the point of failure, as follows.
When I run the very first part of my script (the part containing lynx -dump URL and the file name to which the content is to be saved) from the command line, all works as expected. The file shows up and contains the non-latin characters I'm expecting. However when I try to automate the process by stipulating that same command as a cron job, the results are different. The expected file does show up, but instead of containing the expected non-latin characters, what I get is the same text transliterated using latin characters--not what I want. What follows in my script is failing since it depends on the presence of the non-latin characters.
So, why these strange results depending on whether I issue the lynx command from the command line as opposed to in a cron job? Perhaps the site is doing some sort of detection and providing a transliterated page in one case but not in the other? Or is lynx itself doing the transliterating of non-latin characters into latin ones? Input will be appreciated.
|
lynx uses the current locale settings to determine the charset it can use for showing pages. This information is probably not available from cron, however, so you need to do something like this:
lynx -display_charset=UTF-8 -dump http://example.com/some/page.html
(of course, use the charset on your system if different from UTF-8).
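Alternatively, you can set the locale in the crontab itself so lynx picks a suitable charset on its own. The schedule and paths below are placeholders, and the named locale must actually be installed on the system:

```
LANG=en_US.UTF-8
0 6 * * * lynx -dump http://example.com/some/page.html > /home/user/page.txt
```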
| Different output from lynx -dump when run as cron job |
1,536,240,873,000 |
Normally, a cron job's stdout and stderr are e-mailed to me (as per the MAILTO setting in crontab) when the job finishes.
What if my job starts another job with fork/exec (or just plain exec) or with system("foo &")?
What happens with the stdout/stderr of the child job? Will I get it via email?
An ideal outcome would be a separate email for the parent and the child, but I am pretty sure I would have to arrange that myself.
|
A child process inherits its stdin, stdout, and stderr from its parent. Fork and exec are common in cron jobs (consider that every command you run in a shell script involves a fork/exec). The output will generally go to the same cron email.
Having it go to a separate mail is easy enough: just pipe its stdout/stderr to mail (or similar):
#!/bin/sh
command-1 2>&1 | mail -s 'output 1' [email protected]
command-2 2>&1 | mail -s 'output 2' [email protected]
If you want cron's only email if not empty behavior, moreutils includes a convenient ifne for that (… | ifne mail -s…).
| A cron job forks/exec()s before terminating. What happens to the stdout of the child? |
1,536,240,873,000 |
I have gone through many answers about adding a crontab entry from the terminal with a one-liner, and everywhere came across only one option, which is
{crontab -l; echo "1 * * * * /usr/bin/firefox" } | crontab -
When I run that, all I receive is
>
That's it: a prompt for me to type something.
The second option I found is
(crontab -l; echo "1 * * * * /usr/bin/firefox" ) | crontab -
This seems to add the entry to /var/spool/cron/crontabs/root, but it does not open firefox every minute; in fact, it does not open at all.
I read most on most answers that you should not edit the /var/spool/cron/crontabs/root or /etc/crontab files directly.
Is this not supported in my system or what?
An output of uname -a gave the following description of my system
Linux earth 4.9.0-kali4-amd64 #1 SMP Debian 4.9.30-2kali1 (2017-06-22) x86_64 GNU/Linux
EDIT: The following message is repeated often in my /var/spool/mail/mail log:
From [email protected] Sun Jul 09 16:01:12 2017
Return-path: <[email protected]>
Envelope-to: [email protected]
Delivery-date: Sun, 09 Jul 2017 16:01:12 +0530
Received: from root by earth with local (Exim 4.89)
        (envelope-from <[email protected]>)
        id 1dU9UY-0001Ry-3A
        for [email protected]; Sun, 09 Jul 2017 16:01:06 +0530
From: [email protected] (Cron Daemon)
To: [email protected]
Subject: Cron <root@earth> /usr/bin/firefox
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1dU9UY-0001Ry-3A@earth>
Date: Sun, 09 Jul 2017 16:01:06 +0530

Error: GDK_BACKEND does not match available displays
|
Most likely your second attempt is correct, but your expectation is wrong.
Let's look at it in parts:
crontab -l
lists all existing entries for the current user's crontab. The
echo "1 * * * * /usr/bin/firefox"
just outputs the new crontab entry. These two commands are grouped together in a subshell and their combined output is piped into
crontab -
So the crontab is overwritten by what comes in via the standard input, which in this case is the old crontab plus the new entry.
As you said it is added to the crontab file. And, assuming the cron daemon is running, the command will be executed each minute.
So why aren't you seeing a firefox window each minute? Because the cron job runs in a shell below the cron daemon, which doesn't have access to your X session, so firefox will fail and report something like
(firefox:22376): Gtk-WARNING **: Locale not supported by C library.
Using the fallback 'C' locale.
Error: GDK_BACKEND does not match available displays
And terminate. How do you see that error? Typically the cron daemon will try to send you a mail; see /var/spool/mail.
About the two forms:
{ crontab -l; echo "1 * * * * /usr/bin/firefox" } | crontab -
would have to be written as
{ crontab -l; echo "1 * * * * /usr/bin/firefox"; } | crontab -
(mind the extra semicolon)
The difference between () and {} is that the former creates a sub-shell, whereas the later executes the commands in the same shell context. Thus variable assignments survive in one form, not in the other.
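A quick way to see that difference:

```shell
#!/bin/sh
x=1
( x=2 )     # runs in a subshell: the assignment is lost
y=1
{ y=2; }    # runs in the current shell: the assignment persists
echo "x=$x y=$y"    # prints: x=1 y=2
```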
| Crontab addition not working |
1,536,240,873,000 |
I'm on FreeBSD 11. I have a shell script, run as a cron job, that checks the ZFS pool status and saves it in an SQLite database.
When I run it from the terminal, it works properly, but from crontab it doesn't work.
The crontab:
#
SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/etc/myjob/pool
#
#minute hour mday month wday who command
#
*/1 * * * * root /usr/local/etc/myjob/pool/pool.sh
my script is:
#!/bin/sh
pool=$(/sbin/zpool status | grep pool |awk '{print $2}')
for i in $pool
do
status=$(/sbin/zpool status ${i} |grep state|awk '{print $2}')
echo 'update mytbl set status = '\'''$status''\'';'|sqlite3 /usr/local/var/db/myproject/myDataBase.db
done
Can you help me figure out the mistake?
|
Specify the full path of sqlite3 in your script (on FreeBSD it is typically /usr/local/bin/sqlite3; check with which sqlite3), or add /usr/local/bin to the PATH line of the crontab.
| shell script cron job not working |
1,536,240,873,000 |
Okay so say I have a java program compiled as a jar file. I want to run four instances of this cron job to execute this jar file every Monday-Friday local time from 8am to 5pm, but at intervals of 30 mins, 1 hour, 4 hours, and 8 hours, respectively. How would I accomplish this?
*/30 8-17 * * 1-5 java -jar queryTickets.jar "critical" >/dev/null 2>&1
0 * * * 1-5 java -jar queryTickets.jar "high" >/dev/null 2>&1
0 */4 * * 1-5 java -jar sendNotifications.jar "medium" >/dev/null 2>&1
0 */8 * * 1-5 java -jar sendNotifications.jar "low" >/dev/null 2>&1
Are these correct? Which folder should I put my jar files in on my Ubuntu server?
|
# Every 30th minute of every hour from 0800 to 1700 on weekdays:
*/30 8-17 * * 1-5 <<command>>
# Hourly, weekdays
0 * * * 1-5 <<command>>
# Every four hours, weekdays
0 */4 * * 1-5 <<command>>
# Every eight hours, weekdays
0 */8 * * 1-5 <<command>>
As for the command to run, if your jar file doesn't care about or takes care of its own working directory, you can simply run /path/to/jre/bin/java -jar /path/to/my.jar "option" > /dev/null 2>&1.
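Putting that together for the first job, with absolute paths for both java and the jar (/opt/myapp is just an assumed, conventional location; any directory readable by the crontab's user is fine):

```
*/30 8-17 * * 1-5 /usr/bin/java -jar /opt/myapp/queryTickets.jar "critical" > /dev/null 2>&1
```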
| Cron job to execute jar file weekdays local time 8am-5pm, no weekends |