1,431,818,419,000 |
Updated (and snipped) with more details below.
I've set up a cron script and I'm trying to debug why it's not running. [Snipped context testing, which is all ok; see revision 2 for details] The command itself, in case it helps (arrows indicate line-wrapping for legibility), is:
/usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
[Snipped permissions testing, which is all ok; see below and revision 2 for details]
Checking crontab (again, wrapped for legibility), I get:
[blackero@XXXXXXXXXXX to]$ sudo crontab -u cronuser -l
MAIL="blackero@localhost"
30 9 * * * cronuser /usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
20 18 7 * * cronuser /usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
22 18 7 * * cronuser echo "Test" > /path/to/test.txt
↪ 2> /path/to/error.txt
Update #1 at 2012-02-08 12:32 Z
[Snip: Having tried derobert's suggestion (revision 3)], I know that the cronuser can run the script properly and can write to the two .log files. (One of the first things the process.php script does is download a file by FTP; it is successfully doing that too.) But, even after fixing the MAIL="" line (both by removing it and by changing it to MAILTO="blackero@localhost"), the cron task still doesn't run, nor does it send me any email.
A friend suggested that I retry the
9 12 8 * * cronuser /bin/echo "Test" > /var/www/eDialog/test.txt
↪ 2> /var/www/eDialog/error.txt
task, after passing the full path to /bin/echo. Having just tried that, it also didn't work and also generated no email, so I'm at a loss.
Update #2 at 2012-02-08 19:15 Z
After a very useful chat conversation with oHessling, it would seem that the problem is with PAM. For each time that cron has tried to run my job, I have these /var/log/cron entries:
crond[29522]: Authentication service cannot retrieve authentication info
crond[29522]: CRON (cronuser) ERROR: failed to open PAM security session: Success
crond[29522]: CRON (cronuser) ERROR: cannot set security context
I fixed that by adding the following line to /etc/shadow:
cronuser:*:15217:0:99999:7:::
As I found on a forum, if the user does not appear in /etc/shadow, then PAM won't continue processing the security request. Adding * as the second column means this user cannot log in with a password (as no hash is specified). Fixing that led to a different error in /var/log/cron, so, double-checking my crontab, I noticed I had specified the username each time.
Correcting that means my crontab now reads:
[blackero@XXXXXXXXXXX ~]$ sudo crontab -u cronuser -l
MAILTO="blackero@localhost"
30 9 * * * /usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
52 18 8 * * /usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
9 12 8 * * /bin/echo "Test" > /path/to/test.txt
↪ 2> /path/to/error.txt
but now /var/log/cron shows me:
Feb 8 18:52:01 XXXXXXXXXXX crond[16279]: (cronuser) CMD (/usr/bin/php -C /etc
↪ /path/to/process.php >>/path/to/stdout.log 2>>/path/to/stderr.log)
and nothing comes into stdout.log or stderr.log. No mail was sent to me, none of the other files in /var/log/ have any entry in the right timeframe, and I'm running out of ideas as to where to look to see what's going wrong.
|
I've found the problem: the -C command-line switch I was sending to php should have been -c. I've no idea why cron wasn't reporting that to me in any manner, let alone a useful one (or how I somehow managed to get it into the crontab with a capital C but test it on the CLI with a lowercase c), but running it yet again in the CLI with a colleague here acting as my monkey, suddenly it was obvious.
Now how stupid do I feel?
Well, at least it's resolved now and cron is happily running my damn script. Thank you everyone for all your help.
| Frustrating issue where neither cron nor su -c runs my job (permissions?) |
1,431,818,419,000 |
Let's assume that I want to run a shell script named test.sh at 1 AM every day. I could either use:
0 1 * * * /home/user/test.sh
Or I could use:
0 01 * * * /home/user/test.sh
For the above example, which is technically the correct answer - should a leading 0 be used in the schedule, or should just the number of the hour be entered?
|
If your cron accepts zero-filled numbers, you may use them.
Since the POSIX specification for crontab and the crontab(5) manual on all systems that I have access to only give examples without zero-filled numbers (without actually saying anything about the formatting of numbers), it may be prudent to stay with non-filled numbers if you at some point find yourself on a system where zero-filled numbers are not accepted.
There are examples of systems where 01 is the same as *, not 1:
cron job for hour=7-19 runs every hour instead
| When scheduling jobs to be run by crontab, should leading zeros be used for the hour? |
1,431,818,419,000 |
I'm looking for how to automatically back up a user's home directory in CentOS 7 to a remote host, NAS, or just to ~/.snapshot. In some Linux setups, I have seen a .snapshot folder in the user's home directory (~/.snapshot/) that holds hourly, nightly, and weekly backups of their home directory (e.g. ~/.snapshot/weekly1 for a copy of what was in the user's home directory 1 week ago).
The /home/username/.snapshot/ directory would be read-only by the user. It's not a backup for the purpose of guarding against hardware failure. It's just nice to have the ability to recover a file from yesterday or this morning if you don't like the changes that have been made.
I have seen several related posts on stack overflow, but so far, I haven't seen a guide that explains the complete workflow.
This is what I know so far:
Use rsync to copy the contents of a given folder to the remote host, NAS, or (~/.snapshot/hourly0)
Create a shell script to execute the rsync command
#!/bin/bash
sudo rsync -av --progress --delete --log-file=/home/username/$(date +%Y%m%d)_rsync.log --exclude "/.snapshot" /home/username/ /home/username/.snapshot/hourly0
Change the permissions on the script to make it executable
sudo chmod +x /home/username/myscript.sh
Use crontab to schedule the rsync command at the desired backup interval
Somehow move hourly0 to hourly1 before running the scheduled hourly rsync
Delete the oldest backup once rsync completes successfully
Are there any guides that cover how to do this?
I don't understand how to automatically rename the folders as time goes on (ie weekly1 to weekly2), or how to delete "week10" if I decide to only keep weeks up to 9. Is this another cron job?
Update: After some more Googling, I've discovered that NetApp creates the snapshot folders. I just don't currently have a NetApp NAS. https://library.netapp.com/ecmdocs/ECMP1635994/html/GUID-FB79BB68-B88D-4212-A401-9694296BECCA.html
|
How about this guide:
create your script: create new file and call it myrsync.sh, copy/paste the lines below:
#!/bin/bash
sudo rsync -av --progress --delete --log-file=/home/your-username/Desktop/$(date +%Y%m%d)rsync.log --exclude "/home/your-username/.folder" /home/data /media/dataBackup_$(date +%Y%m%d_%T)
Meaning of the flags:
-av: 'a' means archive, i.e. copy everything recursively, preserving things like permissions, ownership and time stamps.
'v' is verbose, so it tells you what it's doing, either in the terminal or, in this case, in the log file.
--progress gives you more specific info about progress.
--delete checks for changes between source and destination, and deletes any files at the destination that you've deleted at the source.
--log-file saves a copy of the rsync result to a date-stamped file on my desktop.
--exclude leaves out any files or directories you don't want copied. In the command above, the .folder directory is excluded.
/home/data is the directory I want copied. /home/data copies the directory and its contents; /home/data/ (with a trailing slash) would just copy the contents.
/media/dataBackup_$(date +%Y%m%d_%T) is the separate drive. Change this to whatever your backup location is. Note that `rsync` will name every sync differently based on the day/time of the sync.
Save myrsync.sh in your $HOME (the examples below use the Desktop) and make it executable by typing:
sudo chmod +x /home/your-username/Desktop/myrsync.sh
You can now double click that .sh file, choose Run in Terminal, it will ask you for your password and run, then leave a log file on your desktop. Or, you can make a cron job to do it for you!
The cron job
Copy your myrsync.sh file to /root by typing:
sudo cp /home/your-username/Desktop/myrsync.sh /root
Then type:
sudo crontab -e
You'll see a line which reads: minute hour day month day-of-week command
Under that, type:
0 22 * * * /root/myrsync.sh > $HOME/readme.log 2>&1
This means:
The minute
The hour in military time (24 hour) format (0 to 23)
The day of the month (1 to 31)
The month (1 to 12)
The day of the week (0 or 7 is Sun, or use names)
The command to run
So at 22:00 (10 pm) every day, root will run the shell script without prompting you for a sudo password (because it's running as root already).
Now press Control-X, then type "Y", then press Enter
In order to delete older backups, one way of doing this is to create a file with the timestamp of every sync in it. For example, add the following command after the rsync command in myrsync.sh:
date +%Y%m%d_%T >> time.txt
Use the find command to delete backups that match the timestamp, e.g. add this command after the date +%Y%m%d_%T >> time.txt line in myrsync.sh:
find . -type f ! -newer /tmp/timestamp -delete
Or
find . ! -newermt $date ! -type d -delete
This will delete backups made before a specific date/time.
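The rotation step that the question asks about (moving hourly0 to hourly1 and so on, dropping the oldest) can be sketched like this; the directory layout, function name and retention count are assumptions for illustration, not part of any of the commands above:

```shell
#!/bin/bash
# Sketch of snapshot rotation: the oldest snapshot (hourly<keep>) is
# discarded, then each hourly<i-1> is renamed to hourly<i>, which frees
# hourly0 for the next rsync run.
rotate_snapshots() {
    local snapdir=$1 keep=$2 i
    rm -rf "$snapdir/hourly$keep"
    for ((i = keep; i > 0; i--)); do
        if [ -d "$snapdir/hourly$((i - 1))" ]; then
            mv "$snapdir/hourly$((i - 1))" "$snapdir/hourly$i"
        fi
    done
}

# Example: keep four snapshots, hourly0 through hourly3.
rotate_snapshots /home/username/.snapshot 3
```

The same pattern extends to daily and weekly directories; calling it at the top of the rsync script keeps the rotation and the backup in one cron job.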
More details and sample codes for hourly/daily/monthly backups can be found here
| Scheduled folder backup |
1,431,818,419,000 |
I'm thinking about setting up a cronjob for fetching all my repositories every once in a while, to have the current status ready in case I'm offline. Something like the following (wrapped for better readability):
find $HOME -name .git -type d -printf "%h\0" |
parallel --gnu -0 --delay 0.2 --jobs 100 --progress \
'cd {}; git fetch --all --quiet'
I don't really care what happens if the fetch fails -- it might succeed the next time. Perhaps error output could be logged. My questions are:
What if the background process fetches into a Git repository while I'm committing to it?
Can you recommend other switches to parallel to make this really fail-safe?
|
I've been fetching my local Git repos in the background for two years now, without any sign of trouble. Currently, crontab contains something like
savelog -n -c 400 ~/log/git-fetch.log
find ~/git -type d -execdir [ -d '{}/.git' ] \; -print -prune |
parallel --gnu --keep-order \
"date; echo {}; cd {}; git fetch --all --verbose" \
>> ~/log/git-fetch.log 2>&1
(but in one line).
| Fetching all Git repositories in the background |
1,431,818,419,000 |
Both the shell and the environment variables that cron jobs run with are completely different from the ones presented to me in gnome-terminal. How can I run a cron job under the same circumstances as if I had run it in the terminal?
My current solution is having the cron job run env DISPLAY=:0.0 gnome-terminal -e my-command, but this pops up a gnome-terminal window, which isn't really acceptable.
|
You can either explicitly specify the environment variables you want at the top of your crontab, or you can source your environment from somewhere.
To add environment variables explicitly, you can use a line like this at the top of your crontab:
FOO=bar
To source them from a file, use a line like this:
. /foo/bar/baz
In response to your edit of your question to include gpg-agent, you should be able to source ~/.gpg-agent-info to get $GPG_AGENT_INFO. If it does not exist, try starting gpg-agent with --write-env-file "${HOME}/.gpg-agent-info".
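Putting both approaches together, a crontab might look like the following (the variable values are illustrative; my-command is the command from the question). Dumping env to a file is also a handy way to see exactly what environment your jobs really receive:

```
FOO=bar
DISPLAY=:0.0
* * * * * env > /tmp/cron-env.txt
* * * * * . "$HOME/.gpg-agent-info"; my-command
```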
| How to run a cronjob in my regular environment? |
1,431,818,419,000 |
I have a Python script that I need to run from crontab, routing all traffic via an HTTP proxy.
I have already set the proxy in /etc/profile using
http_proxy=http://192.168.0.1:3128 # (Address changed for privacy)
https_proxy=http://192.168.0.1:3128
export http_proxy
export https_proxy
Of course this works fine if I run myscript.py from a terminal, but when the job is added to crontab it reverts to using the standard server IP when it runs.
What is the safest way to assure the proxy is used by any python script run from cron? I did find one mention of adding
HTTP_PROXY=http://192.168.0.1:3128
HTTPS_PROXY=http://192.168.0.1:3128
to the top of the crontab. This seems to work when testing with a simple python script to ping an IP checker website, but is it the safest way?
There is no documentation on this other than one old post I found.
|
That second example looks like the right thing to do, but you should change the variable names to lower-case, as the HTTP libraries expect.
The manual page for the crontab file format (i.e. crontab(5) - not crontab(1)) says:
An active line in a crontab will be either an environment setting or a cron command. The crontab file is parsed from top to bottom, so any environment settings will affect only the cron commands
below them in the file. An environment setting is of the form,
name = value
So that's exactly the way to set environment variables for your cron jobs.
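As a quick sanity check (a sketch; the proxy address is the one from the question), Python's standard urllib shows which proxy settings it will pick up from those lower-case variables:

```python
import os
import urllib.request

# Simulate the environment that the crontab variable lines would provide.
os.environ["http_proxy"] = "http://192.168.0.1:3128"
os.environ["https_proxy"] = "http://192.168.0.1:3128"

# getproxies() reports the proxies urllib will actually use;
# the requests library consults the same environment variables.
print(urllib.request.getproxies())
```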
Note also, if this is intended to be a system-wide setting (I'm guessing it is, given your reference to /etc/profile), then
On the Debian GNU/Linux system, cron supports the pam_env module, and loads the environment specified by /etc/environment and /etc/security/pam_env.conf. It also reads locale information from
/etc/default/locale.
If your cron is similarly configured, it may make sense to move your defaults from /etc/profile to /etc/environment.
| How should I set HTTP proxy variable for cron jobs? |
1,431,818,419,000 |
I have a server on which there are about 100 cron jobs that run various PHP scripts. The task at hand is to generate an alert any time an error occurs in the execution of a PHP script.
The cron jobs are set as follows:
30 08 * * * /usr/bin/php /var/www/html/phpscript1.php > /var/www/html/phpscript1.log 2>&1
What I have tried is placing an && at the end, but this generates the alert email either way:
30 08 * * * /usr/bin/php /var/www/html/phpscript1.php > /var/www/html/phpscript1.log 2>&1 && <Generate Mail>
The ideal case that should work would be
/bin/bash /home/myUser/testfile.sh > /home/myUser/testfile.log 2>&1 ; [$? == 1] && /bin/bash script.sh
The following is just a sample for testing purposes. When
/bin/bash /home/myUser/testfile.sh > /home/myUser/testfile.log 2>&1
is executed, testfile.sh only creates a directory. The first time the command is executed, echo $? gives the output 0, but running the command again returns 1, because the script returns and logs an error:
mkdir: cannot create directory `/home/myUser/testdir': File exists
Basically, this is what is required: whenever the script in a cron job fails, it should generate an alert in the form of an email. In the above example, script.sh contains a mail -s command to send the email.
But when the complete command is executed, an error is returned, as follows:
/bin/bash /home/myUser/testfile.sh > /home/myUser/testfile.log 2>&1 ; [$? == 1] && /bin/bash script.sh
-bash: [1: command not found
I would greatly appreciate any guidance that could be provided in resolving this error. Thank you
|
Basically your solution is good. You have just made a simple bash syntax error:
you have to put spaces around the '[' and ']' characters:
[ $? == 1 ]
I've tested it on my box and it works. I would also suggest testing the error code as not equal to 0 ([ $? -ne 0 ]), unless you are sure that you want to react only to error code 1.
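With the spacing fixed, here is a self-contained illustration of the pattern, using true and false as stand-ins for the real scripts from the question:

```shell
#!/bin/bash
# Corrected pattern: spaces around [ and ], and -ne 0 so that any
# non-zero exit status (not just 1) triggers the alert branch.
true > /tmp/testfile.log 2>&1
if [ $? -ne 0 ]; then echo "alert: job failed"; fi   # not printed
false > /tmp/testfile.log 2>&1
if [ $? -ne 0 ]; then echo "alert: job failed"; fi   # printed
```

In the real crontab line, the echo would be replaced by the call to script.sh that sends the mail.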
| Using an If Statement in Crontab To Generate an Alert |
1,462,031,588,000 |
I am trying to build a Debian-based image to dockerize a cron process, but my cron jobs are never started.
Here is my Dockerfile:
FROM debian:jessie
RUN apt-get update && apt-get install -y --no-install-recommends cron
COPY jobs.txt /etc/crontab
RUN touch /log.txt
CMD ["cron", "-f"]
...and the jobs.txt file:
* * * * * root echo "job done" >> /log.txt
I realized that there is something wrong with the COPY command, because when I replace
COPY jobs.txt /etc/crontab
with
RUN echo '* * * * * root echo "job done" >> /log.txt' > /etc/crontab
it works perfectly.
So is there a problem with just the jobs.txt file that makes Docker unable to copy it the right way? Should I only fix that file, or use a completely different approach?
|
The only difference between using COPY and RUN is the permissions on the /etc/crontab file: with COPY they are 664 and with RUN 644.
I cannot find anything on what permissions /etc/crontab needs to have, but if you add
RUN chmod 644 /etc/crontab
after the COPY line in your Dockerfile the cronjobs run (at least for me).
So I think the permissions have to be 644 (likely because cron ignores crontab files that are writable by anyone other than their owner).
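Putting the fix together, the Dockerfile from the question with the added chmod line would read:

```dockerfile
FROM debian:jessie
RUN apt-get update && apt-get install -y --no-install-recommends cron
COPY jobs.txt /etc/crontab
# Tighten the permissions COPY gives the file (664) to what cron expects.
RUN chmod 644 /etc/crontab
RUN touch /log.txt
CMD ["cron", "-f"]
```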
| How to build a cron docker image properly? |
1,462,031,588,000 |
I'm using Linux. I know you can do an @reboot cron job. I want to do an equivalent thing, but instead of running at reboot, running after my computer wakes from suspend. Is that possible?
|
It depends on your distro and/or your desktop environment; without this info I can't tell much, only that each distro/desktop environment handles it in its own way.
As you can see below, the Ubuntu approach is nearly completely different from how Debian handles this.
Ubuntu solution: https://askubuntu.com/questions/226278/run-script-on-wakeup
Debian solution: http://forums.debian.net/viewtopic.php?f=5&t=53442
Depending on which distro or desktop environment you use, I recommend having a first look at your runlevels.
| cronjob at resume from suspend |
1,462,031,588,000 |
If it exists, what is the maximum number of cron jobs (entries?) that can be added to a server's crontab? How do I tell? I'm on RHEL 6.
I googled around but did not see a conclusive answer -- some said 256, some 65K, some said limitless.
|
To answer your question you should have a look at the cron implementation for your distribution. RHEL seems to use cronie - you can find the source code for cronie online.
cronie's limit seems to depend on the filesystem (maximum file size), as the cron daemon stores the crontab entries per user in a linked list - see user.c from cronie for details. This means the maximum number of job entries in a crontab is basically unlimited.
| Maximum Number of Entries in Crontab |
1,462,031,588,000 |
Is there some way to display a list of the upcoming task executions? That is, translate this crontab:
# m h dom mon dow command
* * * * * imapfilter
4 20 * * Sun fgit gc -- ~/*/ ~/.*/ ~/dev/*/
into this:
2012-04-12 14:40 imapfilter
2012-04-12 14:41 imapfilter
...
2012-04-15 20:03 imapfilter
2012-04-15 20:04 fgit gc -- ~/*/ ~/.*/ ~/dev/*/
2012-04-15 20:04 imapfilter
2012-04-15 20:05 imapfilter
...
It would be useful for verifying the saved schedules.
|
I liked the idea, so I did it. It can be downloaded from here.
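Since link-only answers can rot, here is a minimal, illustrative sketch of the idea in Python. It handles only plain numbers, comma lists and * (no ranges, steps or names), so it is not a full cron parser; the function names are my own:

```python
#!/usr/bin/env python3
# Sketch: list the upcoming run times for a simple crontab entry by
# stepping forward one minute at a time and checking each field.
from datetime import datetime, timedelta

def field_matches(field, value):
    # "*" matches anything; otherwise accept a comma list of numbers.
    return field == "*" or value in {int(x) for x in field.split(",")}

def next_runs(entry, start, count):
    minute, hour, dom, month, dow, command = entry.split(None, 5)
    t = start.replace(second=0, microsecond=0) + timedelta(minutes=1)
    runs = []
    while len(runs) < count:
        if (field_matches(minute, t.minute) and field_matches(hour, t.hour)
                and field_matches(dom, t.day) and field_matches(month, t.month)
                and field_matches(dow, t.isoweekday() % 7)):  # cron: Sunday=0
            runs.append((t, command))
        t += timedelta(minutes=1)
    return runs

for when, cmd in next_runs("30 9 * * * imapfilter", datetime.now(), 3):
    print(when.strftime("%Y-%m-%d %H:%M"), cmd)
```

Note that real cron has extra subtleties (for instance, day-of-month and day-of-week are OR-ed when both are restricted), which this sketch ignores.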
| How to print the next crontab tasks to be executed? |
1,462,031,588,000 |
I am unable to configure a cron job to run by placing it in /etc/cron.hourly folder.
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
The file under cron.hourly is :
lrwxrwxrwx 1 root root 40 2010-07-26 14:52 check -> /usr/local/xxxx/check-interface.bash
Permissions on the file :
-rwxr-xr-x 1 root root 1.6K 2010-09-13 11:22 /usr/local/xxxx/check-interface.bash
There seem to be no errors reported in the /var/log/cron logfile. No mention of the script is made. :(
|
In order to isolate the problem, move /usr/local/xxxx/check-interface.bash to /etc/cron.hourly/check, and then see if it runs.
If the script does run, then the problem is caused by an ownership/permissions or related issue which is preventing cron from executing scripts at /usr/local/xxxx/*.
If the script does not run, then the problem is most likely with your script itself.
As another sanity check, replace the contents of /usr/local/xxxx/check-interface.bash with something dead simple, like:
date > /tmp/check-interfaces.log 2>&1
And then see if /tmp/check-interfaces.log is actually being populated by your cronjob. If it does work, then the problem must be with your original script.
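run-parts executes each file in the directory directly, so a slightly fuller version of that sanity check as a standalone script (shebang included) might be:

```shell
#!/bin/bash
# Minimal cron.hourly sanity check: append a timestamp on every run,
# so the log shows exactly when cron last executed this script.
date >> /tmp/check-interfaces.log 2>&1
```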
| Configuring cron.hourly |
1,462,031,588,000 |
When I schedule a job/command with at to be executed in the future, the standard output and error of the command is "mailed" to the user that did the scheduling.
So after my job runs, I get a message in the command prompt
You have mail in /var/spool/mail/mattb
which I can then read with mail.
Is it possible to have the output instead sent to my corporate mailbox (i.e. [email protected]), rather than the local user's /var/spool/mail?
How does at know which address to email the output of the command to, or does it only know how to place a message in the user's /var/spool/mail?
|
at will typically use your installed mail transport agent (MTA) to deliver the mail. If you do not use local mail on the box at all, you can configure your MTA to forward all mail to another server.
Alternatively you can use a .forward file for a single user. If you put "[email protected]" in ~mattb/.forward then your MTA should forward your email there.
| When I schedule a command with 'at', can I change where the output is mailed to? |
1,462,031,588,000 |
I installed the yum-cron package and configured it to only check for updates and then mail me. But doing this daily seems a bit too much. I would like to do this monthly.
The file is located in /etc/cron.daily/0yum-cron, I want to move it to /etc/cron.monthly/0yum-cron. Can I do this, or is this a big no-no when using yum-cron?
Meaning by just doing this:
sudo mv /etc/cron.daily/0yum-cron /etc/cron.monthly/0yum-cron
Will this work?
|
If you want to move it from a daily to a monthly run, you need to use mv and not cp, as otherwise you would simply add a monthly run.
sudo mv /etc/cron.daily/0yum-cron /etc/cron.monthly/0yum-cron should do what you're asking for.
| Can I move 0yum-cron from cron.daily to monthly? |
1,462,031,588,000 |
I have written a simple shell script to back up the tables in my DB using the mysqldump command. The command simply dumps the DB into a dbName.sql file. However, if the dbName.sql file doesn't already exist and I crontab this shell script, it doesn't work. What could be the reason? Are crontab scripts not permitted to create and write to files?
|
Cron jobs can be run as root or another specified user. If that user does not have permission to write to the .sql file, the job will fail.
If the file exists and has write permission for the user running the job, the file will be written.
If the file does not exist and the user does not have permission to create files in the target directory, the job will fail.
I can typically say
sudo touch /var/log/foo.log
sudo chown my_username /var/log/foo.log
now I can say:
echo bar > /var/log/foo.log
but if the file does not exist it will fail, as I have no permission to create files in /var/log/.
So question becomes if the user running the job have the right to create files where ever the output from mysqldump is written.
Now if I set up a cron job to run a script with this:
date > /var/log/foo.log
every minute, I can add:
* * * * * my-user-name /path/to/script
to /etc/crontab. As I have no permission to create files in /var/log/ the cron daemon will send me an email when the job fails.
After 1 minute there is no file. I go to a terminal and type mail:
$ mail
>N 1 Cron Daemon Sat Jan 31 05:15 19/731 Cron <user@host> /path/to/script
? type
...
/path/to/script: line 3: /var/log/foo.log: Permission denied
OK. My bad. I create the file and chown it to me. Now all is OK.
Have a look at these as well:
Where are cron errors logged?
What are the runtime permissions of a cron job?
| Crontabbed shell script not able to create/write to a file |
1,462,031,588,000 |
Recently I started the process of gradual switching my shell to nu
and because of it I thought about assigning its path to SHELL in cron.
Having read a good part of the manual at man 5 crontab, I took a look at PATH and copied the convention of using : in between the values attempting to assign two shells to SHELL:
SHELL=/bin/bash:/home/jerzy/.cargo/bin/nu
It does not work; the scripts from my crontab are not doing their job, whereas either SHELL=/bin/bash or
SHELL=/home/jerzy/.cargo/bin/nu works fine.
Can I assign two shells to SHELL? Does it even make sense to do so?
|
No, having multiple shells in SHELL doesn't make sense, for the reasons @Stephen described in their answer. But, the SHELL variable only controls the shell cron uses to run the immediate command part in the crontab line; and at least in Vixie cron, which you often have on Linux systems, SHELL can be changed in the middle of crontab. The Debian man page for crontab(5) says:
The crontab file is parsed from top to bottom, so any environment settings will affect only the cron commands below them in the file.
That sentence looks like it might have been added by Debian, but it seems to work the same on the CentOS system I tried. But as noted in comments by Toby Speight, environment variable assignments in crontab aren't a POSIX feature at all, so YMMV.
So, regardless of the cron, you should be able to do something like this:
* * * * * /path/to/somescript.py maybe
* * * * * /path/to/otherscript.pl some
* * * * * /path/to/thirdscript.sh args
where the scripts have the proper hashbang lines, e.g. #!/usr/bin/python3, #!/usr/bin/perl, #!/bin/bash or whatever. SHELL just needs to be set to something that can take /path/to/somescript.py maybe etc. as a command and run that script with an argument. Most shells support the trivial stuff identically, so if you put the complicated stuff inside separate scripts and keep the crontab lines simple, you can use whatever shell or scripting language in the scripts themselves.
And, if you need to use different shells in the immediate crontab commands, you can do this, at least in Vixie cron:
SHELL=/bin/bash
* * 8-14 * * if test "$(date +\%w)" = 0; then echo $BASH_VERSION > /tmp/bashtest; fi
SHELL=/usr/bin/fish
* * 8-14 * * if test (date +\%w) = 0; echo $FISH_VERSION > /tmp/fishtest; end
Both check if the day is Sunday, and print the shell version if it is, one using Bash, one using fish. (That's only an example, of course, but because of the way the cron time settings work, running on the first/second/last particular weekday of a month is one of the common cases where you might want to use shell code on the crontab command.)
Also, as an aside, you mentioned in the comments that Nu doesn't support multiline commands with \, and was hoping you could use Bash there. Note that you can't have multiline commands in the crontab line: The command itself must be on just one line in the file, and while cron takes a % sign as a line break, the lines following the first are sent to the standard input of the command. (And that's why we need to escape the % used in the format string for date above.)
| Can it possibly work if I assign two shells to SHELL in cron? |
1,462,031,588,000 |
I would like to disable the ssh server for certain times of the day. I would like to do this because I recently experienced a brute force compromise via ssh. Can crontab be used to enable/disable SSH?
If not, is there another way to disable ssh at certain times of the day?
|
Sure, just run whatever init scripts there are to stop and start ssh daemon (e.g. /etc/init.d/ssh stop and /etc/init.d/ssh start) at appropriate times.
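For example, root's crontab could contain something like the following (a sketch; the times and the init-script path are assumptions that vary by distro):

```
# Stop sshd at 23:00 and start it again at 06:00
0 23 * * * /etc/init.d/ssh stop
0 6 * * * /etc/init.d/ssh start
```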
However, I'd suggest looking into fail2ban, port knocking, disabling password authentication and using only ssh keys, and, the most secure way, two-factor authentication with one-time passwords.
| Is it possible to enable/disable SSH using cron |
1,462,031,588,000 |
I am trying to figure out why I don't have syntax highlighting when editing my crontab.
I have both $EDITOR and $VISUAL set to /usr/bin/vim:
> echo $EDITOR
/usr/bin/vim
> echo $VISUAL
/usr/bin/vim
If I save the crontab to a file and edit it with vim syntax highlighting is enabled.
> crontab -l > saved_crontab
> /usr/bin/vim saved_crontab
And if I use :syntax on while editing the crontab, nothing changes.
How can I enable highlighting when editing crontab with crontab -e?
|
Did you export these variables (export EDITOR VISUAL)?
| No syntax highlighting when editing crontab |
1,462,031,588,000 |
On all my Red Hat Linux machines, version 7.2, we saw that systemd-tmpfiles-clean.service is inactive:
systemctl status systemd-tmpfiles-clean.service
● systemd-tmpfiles-clean.service - Cleanup of Temporary Directories
Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.service; static; vendor preset: disabled)
Active: inactive (dead) since Wed 2018-12-19 14:47:14 UTC; 12min ago
Docs: man:tmpfiles.d(5)
man:systemd-tmpfiles(8)
Process: 34231 ExecStart=/usr/bin/systemd-tmpfiles --clean (code=exited, status=0/SUCCESS)
Main PID: 34231 (code=exited, status=0/SUCCESS)
Dec 19 14:47:14 master02.uridns.com systemd[1]: Starting Cleanup of Temporary Directories...
Dec 19 14:47:14 master02.uridns.com systemd[1]: Started Cleanup of Temporary Directories.
It is strange because we see the files and folders under /tmp,
and it seems that cleanup is performed every so often.
I searched on crontab or cronjob, but I did not find other cleanup jobs.
Am I missing something here?
Is it possible that in spite of the service being inactive, the cleanup is performed every couple of weeks?
systemctl enable systemd-tmpfiles-clean.service
The unit files have no [Install] section. They are not meant to be enabled
using systemctl.
Possible reasons for having this kind of units are:
1) A unit may be statically enabled by being symlinked from another unit's
.wants/ or .requires/ directory.
2) A unit's purpose may be to act as a helper for some other unit which has
a requirement dependency on it.
3) A unit may be started when needed via activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, ...).
We also saw a few folders that were really old, e.g.:
ls -ltr
total 137452
drwxr-xr-x 3 root root 33 Jun 13 2017 Tools
drwx--x--x 3 root root 16 Oct 12 09:33 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-httpd.service-QZqGLA
drwx--x--x 3 root root 16 Oct 12 10:02 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-rtkit-daemon.service-BTcGY1
drwx--x--x 3 root root 16 Oct 12 10:02 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-vmtoolsd.service-mQ1SXc
drwxr-xr-x 2 ambari ambari 18 Oct 12 12:02 hsperfdata_ambari
drwx--x--x 3 root root 16 Oct 12 12:17 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-cups.service-PnKaq8
drwx--x--x 3 root root 16 Oct 12 12:17 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-colord.service-DNn470
-rwxr-xr-x 1 root root 83044 Nov 18 17:27 Spark_Thrift.log
drwxr-xr-x 2 zookeeper hadoop 18 Nov 18 17:28 hsperfdata_zookeeper
-rwxr-xr-x 1 root root 379 Nov 18 17:37 requests.txt
-rwxr-xr-x 1 root root 137348 Nov 22 14:50 pp
-rwxr-xr-x 1 root root 344 Nov 26 15:24 yy
prwx--x--x 1 root root 0 Nov 29 21:26 hogsuspend
-rwxr-xr-x 1 root root 1032 Dec 3 10:55 aa
From my machine:
more /lib/systemd/system/systemd-tmpfiles-clean.timer
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
[Unit]
Description=Daily Cleanup of Temporary Directories
Documentation=man:tmpfiles.d(5) man:systemd-tmpfiles(8)
[Timer]
OnBootSec=15min
OnUnitActiveSec=1d
The rules are:
more /usr/lib/tmpfiles.d/tmp.conf
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
# See tmpfiles.d(5) for details
# Clear tmp directories separately, to make them easier to override
v /tmp 1777 root root 10d
v /var/tmp 1777 root root 30d
# Exclude namespace mountpoints created with PrivateTmp=yes
x /tmp/systemd-private-%b-*
X /tmp/systemd-private-%b-*/tmp
x /var/tmp/systemd-private-%b-*
X /var/tmp/systemd-private-%b-*/tmp
|
You can ask systemd what a unit’s triggers are:
systemctl show -p TriggeredBy systemd-tmpfiles-clean
This will show that the systemd-tmpfiles-clean service is triggered by the systemd-tmpfiles-clean.timer timer. That is defined as
# SPDX-License-Identifier: LGPL-2.1+
#
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
[Unit]
Description=Daily Cleanup of Temporary Directories
Documentation=man:tmpfiles.d(5) man:systemd-tmpfiles(8)
[Timer]
OnBootSec=15min
OnUnitActiveSec=1d
Thus the service runs every day, and cleans directories up based on the tmpfiles.d configuration. See the associated man pages for details.
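The age-based selection those rules describe can be illustrated with a short Python sketch. This is a simplification of what systemd-tmpfiles actually does (the real tool also considers atime/ctime, walks subtrees, honors exclusions, and more); it only shows the core idea of an age field such as 10d:

```python
import os
import tempfile
import time

def entries_older_than(directory, max_age_days):
    """List entries whose mtime is older than the cutoff, roughly what an
    age field such as '10d' in a tmpfiles.d rule selects for cleanup."""
    cutoff = time.time() - max_age_days * 86400
    return sorted(name for name in os.listdir(directory)
                  if os.path.getmtime(os.path.join(directory, name)) < cutoff)

# Demo: one fresh entry and one backdated by 20 days.
with tempfile.TemporaryDirectory() as tmp:
    for name, age_days in (("fresh", 0), ("stale", 20)):
        path = os.path.join(tmp, name)
        open(path, "w").close()
        stamp = time.time() - age_days * 86400
        os.utime(path, (stamp, stamp))   # backdate both atime and mtime
    print(entries_older_than(tmp, 10))   # ['stale']
```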
| Linux + files & folders cleanup under /tmp |
1,462,031,588,000 |
I have a setup in which I sync files between two Linux servers using Unison over an SSH connection. This is implemented by running the following command via cron:
unison -auto -batch PROFILE
Changes to the fileset happen almost exclusively on one side (the other system is an off-site replica). They mostly take place through another Unison sync run (triggered manually) with a client machine, and can be from a few hours to a few weeks apart. Thus, conflicts between the two machines are not much of a concern in practice, and a delay of up to 24 hours for changes to get propagated to the other side is acceptable.
Reasons I am running Unison as a cron job rather than with -repeat (presumably as a systemd service) are:
Predictable sync times, as the cron job is scheduled at a time when I am not expecting any manual sync operations from the third machine (whereas, say, -repeat 86400 would drift by the duration of the sync operation).
Changes mostly happen on server A, while the server-to-server sync job is triggered by server B (as it is easier network-wise if server B initiates the connection). Thus, as I understand, -repeat watch would not pick up most of the changes and even with -repeat watch+TIME, I’d be relying on TIME almost exclusively (correct me if I missed something).
When changes do happen, they are usually low in volume. Occasionally, however, the data volume to be transferred is such that a single Unison run lasts several times as long as the interval between two Unison cron jobs (bandwidth between the systems is somewhat constrained). That would mean one Unison process is still running when cron launches the next one on the same set of files.
I take it that Unison has lock mechanisms in place, which presumably prevent the “new” process from messing with anything the “old” one is working on (but correct me if I’m wrong or have missed something). But I’m wondering what the second Unison process would do in that case – I have observed it does not exit but stays around. Does that mean the second process would wait for the first to finish and only then start synchronization (which would then only include files that changed while the first sync was in progress and therefore failed to sync on the first run)?
Is it safe to launch a second Unison process while another one is still running on the same profile? (If not, what is a recommended way to prevent two concurrent Unison instances if, and only if, they are at risk of interfering with each other?)
What about the resource overhead of unison -repeat watch+TIME vs. occasionally having multiple Unison instances queued up, one running and the others waiting for it to finish?
|
Reasons I am running Unison as a cron job rather than with -repeat (presumably as a systemd service)
I would actually suggest that systemd is a perfect fit for when you want "cron but single instance". You don't need to keep it as a long-running service - you can use systemd timers in a cron-like manner while taking advantage of systemd to manage instances.
Define a systemd service of Type=exec. This service will be considered active until the process exits. Use the same command line you currently use with cron.
Define a systemd timer to start the service previously defined. This can be set to happen either at a clock (calendar) time (OnCalendar=), or after a period (OnUnitActiveSec= would be relative to last service start time, OnUnitInactiveSec= would be relative to last service stop time). Personally, I like calendar since it's more predictable and won't drift.
Enable and start the timer. Don't enable the service, you don't want it running independently of the timer.
The timer will start the service on schedule. The service manager will ensure only one instance of the service is active. If the service is still running when the timer next fires, it will simply not do anything.
Sample unit files to run Unison as SOMEUSER with profile SOMEPROFILE (except for their .service and .timer suffixes, both unit files must have the same name):
unison.service:
[Unit]
Description=Unison SOMEUSER@SOMEPROFILE
After=network.target
[Service]
User=SOMEUSER
# Type=simple may be used on older versions
Type=exec
# may be relative to the service's root directory specified by RootDirectory=, or the special value "~"
WorkingDirectory=~
# systemd may insist on an absolute path for the executable
# Run unison with -terse to prevent cluttering the system journal
ExecStart=/usr/bin/unison -auto -batch -ui text -terse SOMEPROFILE
Restart=no
unison.timer (fires daily at 13:37; for OnCalendar syntax see man 5 systemd.time, section Calendar Events):
[Unit]
Description=Daily unison SOMEUSER@SOMEPROFILE
[Timer]
OnCalendar=*-*-* 13:37
[Install]
WantedBy=timers.target
Drop these two unit files into /etc/systemd/system/, then run:
sudo systemctl enable unison.timer
sudo systemctl start unison.timer
You can test the setup by running sudo systemctl start unison.service – this requires only the service unit, not the timer unit. This is also a way to start sync out of schedule. Unison output will be written to the system journal with unison as a tag.
| Unison via cron, how to deal with one job still running as the next one starts |
1,462,031,588,000 |
How can I send a CTRL+C to my tmux tab using crontab? I have the following and it works for sending commands, but I'm not sure how to send a CTRL+C to it.
0,30 * * * * /usr/bin/tmux send-keys -t 0 "CTRL+C"
|
Ctrl keys may be prefixed with C- or ^
(source)
0,30 * * * * /usr/bin/tmux send-keys -t 0 C-c
| How to send a CTRL+C to a tmux pane using crontab? |
1,462,031,588,000 |
I have a bunch of devices all running a similar cron job. Currently I'm setting the cron minute and hour to a random number (that way they don't all run at once).
$random_minute $random_hour * * * sudo /bin/script
I want to keep this pattern of making each device random but I also have a script which needs to be run every 6 hours. How can I combine something like above with */6?
|
There aren't that many hours in the day, so why not just
17 3,9,15,21 * * * sudo /bin/script
to run at 03:17 and every 6 hours hence?
The alternatives would involve adding a sleep to the program itself:
0 */6 * * * (sleep 11820; sudo /bin/script)
or running the script more often (say, hourly), and having the script just exit if the actual job was executed within the last < 6 hours.
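That last variant, run more often and exit early unless enough time has passed, can be sketched in Python. This is a sketch, not part of the original answer; the stamp path and interval are placeholders:

```python
import os
import time

def should_run(stamp_path, min_interval_s, now=None):
    """True when no previous run is recorded, or at least min_interval_s
    seconds have elapsed since the timestamp file was last touched."""
    now = time.time() if now is None else now
    try:
        last = os.path.getmtime(stamp_path)
    except OSError:
        return True          # no stamp yet: first ever run
    return now - last >= min_interval_s

def mark_ran(stamp_path):
    """Touch the timestamp file so its mtime records this run."""
    with open(stamp_path, "w"):
        pass

# Called hourly from cron; the real work only fires every 6 hours:
# STAMP = "/var/tmp/myjob.last-run"     # hypothetical location
# if should_run(STAMP, 6 * 3600):
#     mark_ran(STAMP)
#     ...  # do the actual work
```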
| Cron */6 hours but with an offset? |
1,462,031,588,000 |
I am using Ubuntu 14.04. I wrote a small script named trial. The contents of the script are as follows:
#!/bin/sh
SHELL=/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin
sh -c firefox
I copied the script to /etc/init.d, modified permissions using chmod +x trial and ran update-rc.d trial defaults. The link was created, but when I rebooted the machine it did not run Firefox. I tried cron @reboot, but with no success. I tried rc.local too, again without success.
|
The directory, /etc/init.d/ contains system scripts that essentially start, stop, restart daemons (system services). It's the "System V Initialization" method (SysVinit), containing the init program (the first process that is run when the kernel has finished loading). (EDIT 2 July 2015: Many Linux systems have recently switched to the systemd init system.)
But, Firefox is a graphical Web browser. As such, it needs the window server (X-Windows) and window manager to be started; and, you would need to be logged into the window manager to start Firefox. So, the task for you is to learn how to automatically start a program after you have logged into your window manager.
Find the name of your window manager. Then search for help about automatically starting a program.
| Run a GUI program at startup |
1,462,031,588,000 |
I need a bash script (not sure how to write the actual .sh file) which I could set to be run by cron every minute and that would delete files with the name index.html that are in a specific directory and its subdirectories.
I believe that the following command will do that. However, I need to write it as a script file which I could then have executed via cron.
find /path/to/directory -name "index.html" -print0 | xargs -0 rm -rf
The /path/to/directory would be relative to the server root.
My two questions are: do I need a trailing / at the end of the path, and how do I write the bash script file (for example, in a file called deleteindexes.sh)?
I am assuming that I would need to set the file as an executable using
chmod a+x deleteindexes.sh
As regards setting the cron command, that is not a problem for me.
|
I wouldn't even write the script -- you should be able to put the find command in directly. You can also call the delete command directly from find using the -delete action flag.
Step 1: edit crontab
crontab -e
Step 2: add in the following line (this will run it daily at 4:30am, change to your liking):
30 4 * * * find /path/to/directory -name "index.html" -delete
Step 3: Save and exit.
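If you do end up wanting a script after all, the same recursive deletion can be written in a few lines of Python. This is just an illustrative sketch; the find one-liner above is all cron needs:

```python
import pathlib

def delete_index_files(root):
    """Recursively delete every file named index.html below root,
    mirroring: find root -name "index.html" -delete"""
    removed = []
    for path in pathlib.Path(root).rglob("index.html"):
        if path.is_file():
            path.unlink()
            removed.append(str(path))
    return removed
```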
| Bash script to delete a file called index.html recursively |
1,462,031,588,000 |
Is it just for controlling the permissions of the script being called, or does it also affect when or if the script runs? I assume if I set up a cron job for a non-root user it should always run, even if I haven't logged in as that user since reboot, right?
|
Remember that Unix was originally designed as a multi-user system, where multiple people were using the same physical computer. (Unlike today, where most Unix systems are used by only one person, and multiple user accounts are just to limit vulnerabilities.)
So the original reason for per-user crontabs was so that each person could schedule jobs they wanted run periodically (without giving them permissions to edit the system crontab, and thus interfere with other people's jobs). But now, it's primarily just to control the permissions of the jobs being run.
So yes, once a crontab is saved, the jobs will run whether or not that user is logged in (or has ever logged in). The main cron daemon runs the per-user crontabs as well as the system crontab.
| What's the purpose of having different crontabs per user? |
1,462,031,588,000 |
In the ~ directory of the root user on my Debian Wheezy server, a file named dead.letter regularly appears with (currently) the following content:
orion : Jul 25 10:17:31 : root : unable to resolve host orion
orion : Jul 26 02:17:18 : root : unable to resolve host orion
orion : Jul 26 21:17:19 : root : unable to resolve host orion
orion is the hostname of the server (and can normally be resolved since I have various services/programs using this hostname without problems). After some searching I figured that there is a cron job running hourly, i.e.
17 * * * * root cd / && run-parts --report /etc/cron.hourly
which could explain why those errors only appear 17 minutes after the full hour. The only script in /etc/cron.hourly is fake-hwclock with the following content:
#!/bin/sh
#
# Simple cron script - save the current clock periodically in case of
# a power failure or other crash
if (command -v fake-hwclock >/dev/null 2>&1) ; then
    fake-hwclock save
fi
Can this produce those mysterious dead.letter files? And why does fake-hwclock save seem to try to resolve the hostname?
Edit: Some more information.
Content of /etc/hosts:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
|
Change the following line in /etc/hosts
127.0.0.1 localhost
to
127.0.0.1 localhost orion
Your MTA was unable to resolve the domain name of your machine.
| Mysterious "unable to resolve host" in dead.letter |
1,462,031,588,000 |
I am wondering if someone might point me in the right direction. I've got little experience working with the Linux command line and recently due to various factors in work I've been required to gain knowledge.
Basically I have two php scripts that reside in a directory on my server. For the purposes of the application these scripts must be running continuously. Currently I implement that this way:
nohup sh -c 'while true; do php get_tweets.php; done' >/dev/null &
and
nohup sh -c 'while true; do php parse_tweets.php; done' >/dev/null &
However, I've noticed that despite the infinite loop, the scripts stop periodically and I'm forced to restart them. I'm not sure why, but they do. That has made me look into the prospect of a cron job that checks if they are running and, if they are not, runs/restarts them.
Would anyone be able to provide me with some information on how to go about this?
|
I'd like to expand on Davidann's answer since you are new to the concept of a cron job. Every UNIX or Linux system has a crontab stored somewhere. The crontab is a plain text file. Consider the following: (From the Gentoo Wiki on Cron)
#Mins Hours Days Months Day of the week
10 3 1 1 * /bin/echo "I don't really like cron"
30 16 * 1,2 * /bin/echo "I like cron a little"
* * * 1-12/2 * /bin/echo "I really like cron"
This crontab should echo "I really like cron" every minute of every
hour of every day every other month. Obviously you would only do that
if you really liked cron. The crontab will also echo "I like cron a
little" at 16:30 every day in January and February. It will also echo
"I don't really like cron" at 3:10 on the January 1st.
Being new to cron, you probably want to comment the starred columns so that you know what each column is used for. Every Cron implementation that I know of has always been this order. Now merging Davidann's answer with my commented file:
#Mins Hours Days Months Day of week
* * * * * lockfile -r 0 /tmp/the.lock && php parse_tweets.php; rm -f /tmp/the.lock
Putting no value in each column defaults to:
Every Minute Of Every Hour Of Every Day Of Every Month All Week Long, --> Every Minute All Year Long.
As Davidann states, using a lockfile ensures that only one copy of the PHP interpreter runs; php parse_tweets.php is the command that runs the file, and the last part of the line deletes the lock file to get ready for the next run. I don't like deleting a file every minute, but if this is the behavior you need, it is perfectly acceptable; writing and rewriting to disk is just personal preference.
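The non-blocking lock that lockfile -r 0 provides can be sketched in Python with an atomic exclusive create. Note that, like the crontab line above, this sketch does not clean up a stale lock if the job crashes before removing it:

```python
import os

def try_lock(path):
    """Atomically create the lock file; return a file descriptor, or None
    if it already exists, i.e. another instance is still running."""
    try:
        return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None

def unlock(path, fd):
    """Release the lock so the next run can proceed."""
    os.close(fd)
    os.remove(path)
```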
| Cron job to check if PHP script is running, if not then run |
1,462,031,588,000 |
I've created a new system account for cron jobs on my CentOS 7 server and set up a cronjob for it.
However, my cronjob fails to run, with the error:
(CRON) ERROR chdir failed (/home/sysagent): No such file or directory
Currently crontab for sysagent account looks like:
15 16 * * * HOME="/tmp" && cd path/to/project_folder && ./src/mainroutine.sh | ts "[\%Y-\%m-\%d \%H:\%M:\%S]" >> /var/log/automation/sysagent.log
I've added the HOME variable and a cd to the project folder after some investigation (I used to have the full path to the shell script there), but it doesn't help. How do I make the cron job forget about the home directory? And why is it looking for it, by the way?
|
On the assumption the docs are bad or wrong, a source code dive:
% rpm -qa | grep cron
cronie-1.4.11-17.el7.x86_64
cronie-anacron-1.4.11-17.el7.x86_64
crontabs-1.11-6.20121102git.el7.noarch
... some altagoobingleduckgoing here as the URL in the RPM is broken ...
% git clone https://github.com/cronie-crond/cronie && cd cronie
% fgrep -rl 'chdir failed' .
./src/security.c
... so that error only appears in one place, within the cron_change_user_permanently call that is called from various other places in the code ...
% grep cron_change_user_permanently **/*.c
src/do_command.c: if (cron_change_user_permanently(e->pwd, env_get("HOME", jobenv)) < 0)
src/do_command.c: if (cron_change_user_permanently(e->pwd, env_get("HOME", jobenv)) < 0)
src/popen.c: if (cron_change_user_permanently(pw, env_get("HOME", jobenv)) != 0)
src/security.c:int cron_change_user_permanently(struct passwd *pw, char *homedir) {
... so in all cases the HOME environment variable appears to be used to determine where to chdir to for the user, and there is always a chdir to that directory. So you'll need to ensure that the HOME directory exists, or that HOME is properly set before cron_change_user_permanently is called (which likely happens before the shell code in your cron job is even looked at). (Or monkey patch cronie to do something else, but that's probably a really really bad idea.)
| Why cron trying to change to user's home directory and how to avoid this? |
1,462,031,588,000 |
I'm in the process of learning/understanding Linux (difficult, but I'm enjoying it). I have written a very short shell script that uses wget to pull an index.html file from a website.
#!/bin/bash
#Script to wget website mywebsite and put it in /home/pi/bin
index=$(wget www.mywebsite.com)
And this works when I enter the command wget_test on the command line. It outputs a .html file into /home/pi/bin.
I have started trying to do this via cron so I can do it at a specific time. I entered the following using crontab -e
23 13 * * * /home/pi/bin/wget_test
In this example I wanted the script to run at 13:23 and to output a .html file to /home/pi/bin, but nothing is happening.
|
This line index=$(wget www.mywebsite.com) will set the variable $index to nothing. This is because (by default) wget doesn't write anything to stdout so there's nothing to put into the variable.
What wget does do is to write a file to the current directory. Cron jobs run from your $HOME directory, so if you want to write a file to your $HOME/bin directory you need to do one of two things
Write wget -O bin/index.html www.mywebsite.com
Write cd bin; wget www.mywebsite.com
Incidentally, one's ~/bin directory is usually where personal scripts and programs would be stored, so it might be better to think of somewhere else to write a file regularly retrieved from a website.
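The working-directory pitfall is easy to demonstrate in Python: a relative output path always lands in whatever the current directory happens to be. In this sketch, a temporary directory stands in for $HOME:

```python
import os
import tempfile

def save_page(content, out_path="index.html"):
    """Write content to out_path; with a relative name the file lands in
    the current working directory, just as wget's default output does."""
    with open(out_path, "w") as f:
        f.write(content)
    return os.path.abspath(out_path)

original_cwd = os.getcwd()
with tempfile.TemporaryDirectory() as fake_home:
    os.chdir(fake_home)                # cron starts jobs in $HOME
    where = save_page("<html></html>")
    print(where)                       # .../index.html inside the temp dir
os.chdir(original_cwd)
```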
| Cron not working with shell script |
1,462,031,588,000 |
*/30 */30 * * * python /root/get_top.py
I'm trying to run a script every 30 hours and 30 minutes. Is the syntax above proper for that?
I've had multiple people tell me that */30 is not a valid value for the hour column, since it's greater than 24. If that's true, how do I make a cron for a job that should run every 30 hours?
|
The simplest solution would probably be to run a cronjob more frequently and use a wrapper script to quit without doing anything if not enough time has passed.
To figure out how often you need to run, take the greatest common factor of cron's limits and your desired interval.
So, for "every 30 hours, 30 minutes", that'd be "every 30 minutes" and, for "every 30 hours", that'd be "every 6 hours" (The greatest common factor of 30 and 24)
You can implement the wrapper one of two ways:
First, you could store a timestamp in a file and then check if the time difference between now and the stored timestamp is greater than or equal to 30 hours and 30 minutes.
This seems simple enough, but has two potential gotchas that complicate the code:
Failsafe parsing of the saved timestamp file
Allowing for some wiggle forward and back when comparing timestamps since other things happening on the system will cause the actual interval to wiggle around.
The second option is to not store a timestamp file at all and, instead, do some math. This is also theoretically faster since the kernel can return the system time without querying the hard drive.
I haven't tested this for typos, but here's Python code for it that's been expanded out for clarity.
import os, time
full_interval = 1830 # (30 hours * 60 minutes) + 30 minutes
cron_interval = 30 # 30 minutes
minutes_since_epoch = time.time() // 60
allowed_back_skew = (cron_interval * 0.1)
sorta_delta = (minutes_since_epoch + allowed_back_skew) % full_interval
if sorta_delta < cron_interval:
    os.execlp('python', 'python', '/root/get_top.py')
Here's the idea behind it:
Just as "a stopped clock is right twice a day", the value of minutes_since_epoch % full_interval will only be less than cron_interval once per full_interval.
We need to fuzzy-match to account for variations caused by sharing resources with other processes.
The easiest way to do this is to use [0, cron_interval) as a window within which a task must fall in order to be executed.
To account for jitter in both directions, we slide the starting edge of the window back by 10% of its duration since running too early will be rare while running too late can happen any time the system is so bogged down that the wrapper script is delayed in calling time.time().
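You can sanity-check this "stopped clock" logic offline by simulating the 30-minute cron wake-ups in Python and confirming that the window opens exactly once per full interval:

```python
full_interval = 1830   # 30 hours 30 minutes, in minutes
cron_interval = 30     # the wrapper is invoked every 30 minutes
skew = cron_interval * 0.1

# Simulate five full intervals of 30-minute wake-ups and record
# which wake-ups would fall inside the trigger window.
triggers = [m for m in range(0, full_interval * 5, cron_interval)
            if (m + skew) % full_interval < cron_interval]
gaps = [b - a for a, b in zip(triggers, triggers[1:])]
print(gaps)   # [1830, 1830, 1830, 1830]
```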
If, as I suspect, get_top.py is your own creation, just stick this at the top of it (adding import sys for the exit call) and change the check to

if sorta_delta > cron_interval:
    sys.exit(0)
| Setting up a cron for every 30 hours in Debian |
1,462,031,588,000 |
How do I invoke a crontab command so that I can schedule a script to run with a 20-minute delay based on some condition?
Edit: What if I wanted the script to be executed only as many times as the condition evaluates to true on the system? What are my options?
|
Put the logic code for testing your condition in the script itself, don't try and put it in (or associate it with) cron - complex logic is not what cron was designed for.
So, in your script you test the condition and, if it evaluates to true, your processing code runs. If it evaluates to false, exit the script cleanly.
Assuming that your 'conditions' change as a result of processing the script (e.g. watching a folder for incoming files that need processing and processing one file every 20m), then eventually your condition will evaluate to false all the time because all the work has been done.
From your comments it looks like you are monitoring the availability of some server.
I don't do a heck of a lot with bash but how about this:
#!/bin/bash
if [ `ps ax | grep $0 | wc -l` -le 3 ]; then #1
    if [ `arping ...` -ne 1 ]; then #2
        sleep 1200
        if [ `arping ...` -eq 1 ]; then #3
            # do your processing here
        fi
    fi
fi
The first if statement (#1) makes sure that this is the only instance of this particular script that is running. If another script is (still) running we exit and don't do anything.
The second (#2) is your initial 'is host pingable' test. If it is not, then the script waits 1200s (20min) before testing again (#3).
So, if two pings -- 20 minutes apart -- show that your host has become reachable then your processing code will run.
If you want to simplify things a little, try this:
#!/bin/bash
if [ `arping -w 59 ...` -ne 1 ]; then
    sleep 1079
    if [ `arping -w 59 ...` -eq 1 ]; then
        # do your processing here
    fi
fi
If you impose an arping deadline of a little under 1 minute (-w 59) for your checks, and tweak the sleep amount, then you can pretty much guarantee that the two tests and the sleep in between will be completed within your 20 minute period, so there should be no overlap with adjacent periods and no need to check to see if another script is still running.
Either of the above scripts would, of course, be invoked via a static cron entry which runs every 20 minutes:
*/20 * * * * /path/to/script.sh
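The core pattern of the simplified script, "down on the first probe, up on a re-probe after a delay", can be expressed as a small testable function in Python. The reading of arping's result is an assumption here; the probe and the wait are injected as callables so the logic can be exercised without a network:

```python
def became_reachable(check, wait):
    """Run the processing step only when check() is False now (host down)
    and True again after wait(): two probes, one delay apart."""
    if check():
        return False   # already up this period: nothing to do
    wait()
    return check()

# In real use, check would wrap arping and wait would be time.sleep(1079);
# with injected stand-ins the logic is easy to demonstrate:
probes = iter([False, True])
print(became_reachable(lambda: next(probes), lambda: None))   # True
```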
| Conditional crontab command in bash script |
1,462,031,588,000 |
According to Artur Meinild's answer here, the "day of month" and "day of week" fields of a crontab are mutually exclusive. However, according to man 5 crontab (for cronie, if it matters):
Commands are executed by cron(8) when the 'minute', 'hour', and 'month of the year' fields match the current time, and at least one of the two 'day' fields ('day of month', or 'day of week') match the current time
So, is the following line an error, or instructions to run /bin/true on every Wednesday plus the second day of each month? Is cronie's man page documenting standard behavior, or an application-specific quirk?
* * 2 * 3 /bin/true
|
The POSIX specification for crontab, being worded in the language of a standard—aiming to minimize ambiguity—has probably the clearest explanation (emphasis added, paragraph split for clarity):
The specification of days can be made by two fields (day of the month and day of the week).
If month, day of month, and day of week are all <asterisk> characters, every day shall be matched.
If either the month or day of month is specified as an element or list, but the day of week is an <asterisk>, the month and day of month fields shall specify the days that match.
If both month and day of month are specified as an <asterisk>, but day of week is an element or list, then only the specified days of the week match.
Finally, if either the month or day of month is specified as an element or list, and the day of week is also specified as an element or list, then any day matching either the month and day of month, or the day of week, shall be matched.
This confirms that:
The task in your example is executed if the current day is either a Wednesday or the second day of the month.
This is a documented standard behavior.
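The quoted rule is small enough to implement and test directly. Here is a hedged Python sketch of just the two day fields, where each field is either '*' or a set of allowed values, using cron's Sunday=0 convention for the day of week:

```python
import datetime

def day_fields_match(dom, dow, date):
    """POSIX matching for the two day fields: if both are restricted,
    a day matching EITHER field matches. dom/dow are '*' or sets."""
    cron_dow = (date.weekday() + 1) % 7   # Python: Monday=0; cron: Sunday=0
    dom_ok = dom == "*" or date.day in dom
    dow_ok = dow == "*" or cron_dow in dow
    if dom == "*":
        return dow_ok                     # also covers the */* case
    if dow == "*":
        return dom_ok
    return dom_ok or dow_ok               # both restricted: OR, not AND

# The example line `* * 2 * 3 ...` gives dom={2}, dow={3} (Wednesday):
print(day_fields_match({2}, {3}, datetime.date(2024, 1, 3)))  # Wednesday: True
print(day_fields_match({2}, {3}, datetime.date(2024, 1, 2)))  # day 2:     True
print(day_fields_match({2}, {3}, datetime.date(2024, 1, 4)))  # neither:   False
```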
| Are the "day of month" and "day of week" crontab fields mutually exclusive? |
1,462,031,588,000 |
There are two crontab entries:
45 * * * 1 script.sh
and
45 0-23 * * 1 script.sh
The desired effect is to run the script 45 minutes after every hour on Mondays.
Are they identical? If not, what is the difference?
|
Yes, they are identical.
I'd suggest the first syntax as it is more concise.
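You can convince yourself of the equivalence by expanding both hour fields. This minimal sketch handles only '*', ranges, and comma lists (not steps or names):

```python
def expand_hours(field):
    """Expand a cron hour field of the forms '*', 'a-b', or 'a,b,c'
    into the set of matching hours."""
    if field == "*":
        return set(range(24))
    hours = set()
    for part in field.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            hours.update(range(lo, hi + 1))
        else:
            hours.add(int(part))
    return hours

print(expand_hours("*") == expand_hours("0-23"))   # True
```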
| Correctly defining a Cron schedule for process to be run every hour |
1,462,031,588,000 |
I have a file on server A which I am able to transfer to server B using scp.
I need to do this through a cron entry. Server B requires a password.
How do I do this?
|
Don't use password authentication. Use ssh keypairs.
Karthik@A $: ssh-keygen #keep the passphrase empty
Karthik@A $: ssh-copy-id B #enter your B password
#^ this will copy your public key to Karthik@B:.ssh/authorized_keys
From then on, you should be able to ssh from A to B (and by extension, scp from A to B) without a password.
| How do I scp a file from server A to server B from cron? |
1,462,031,588,000 |
I'm reading an article in Master Linux Now 2013 called OpenSSH: Easy Logins and it uses ssh-agent to allow you to enter a passphrase for your key once, and then you're able to connect to a remote machine freely without typing it again while the ssh-agent is running.
The reason I was drawn to the article in the first place, aside from not having to retype my password a million times, was so that I could do unattended backups from/to remote machines by calling rsync over ssh from cron.
I saw another article where someone just skipped the passphrase so that cron could easily use the key to log in. It doesn't feel right, but is this okay to do in practice? I mean, if somebody got hold of that key file they'd be able to wreak havoc on the machine being backed up.
It seems to me it would be safer to make sure the user logs in after a reboot and enters the passphrase once to get the agent running, and then just wait for the cron job to run with the screen locked; but I'm likely missing something here, like what user (or type of user) cron jobs run as.
|
Restrict the commands that can be invoked by the key
If an SSH key is going to be used by any kind of automated or unattended task, you should restrict what commands it is able to execute on a remote machine, no matter what decision you make about how and where to store the key.
Use something like this in ~/.ssh/authorized_keys:
command="/usr/sbin/the-backup-daemon" ssh-rsa AAAAAAAAAAAABBBBBXXXXXX.....
That way, at least the key should not be able to, as you say, wreak havoc. It can only access what it's supposed to access, etc... It can most likely still do damage, but it should have less than full access to the remote system.
You can also restrict the IP addresses that are allowed to connect using that key and disable a bunch of other SSH features like port forwarding for connections where that key is used:
from="10.1.2.3",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command="/usr/sbin/the-backup-daemon" ssh-rsa AAAAAAAAAAAAXXXXXX.....
All that has to go on a single line in ~/.ssh/authorized_keys.
Protecting the key
It depends on what your threat model is.
If you're worried about the key being stolen while "cold", for example, having the computer where it is saved physically stolen, then you won't want to save it without a passphrase in that location.
You could start a kind of background SSH agent manually after the server boots, add the key to that agent, and record the agent's $SSH_AUTH_SOCK for future use by the cron job, but honestly that sounds like more trouble than it's worth. You might as well just store the unencrypted key in a tmpfs filesystem and have the cron job access it from there. Either way the key lives in memory only (if you have no swap or encrypted swap). Of course you should chown and chmod the file so only the target user can access it.
Then again, if you're worried about that, you've probably already set up this computer with an encrypted root filesystem and swap (e.g. luks) so you may not need to worry about it as such.
If you're worried about the key being stolen while "hot" (loaded in memory) then there's not much you can do about that. If the cron job can access it, then so can something else that has managed to gain the same access. It's that, or give up the convenience of unattended job execution.
In conclusion, you should treat a backup server as a very privileged system since it will, by necessity, be given read-only access to the complete filesystems of all the computers it backs up. Your backup server should not be accessible from the Internet, for example.
| Should the key for an automatically running cron job that runs over ssh not have a passphrase? |
1,462,031,588,000 |
I need two very specific cron statements:
A cron entry that would run on the 2nd Monday of each month every 4 hours beginning at 02:00 and execute the file /opt/bin/cleanup.sh
A cron entry that would run at 18:15 on the 3rd day of every month ending in an "r" that executes /opt/bin/verrrrrrrry.sh
I've already tried various cron testers:
cron checker, cron tester, and cron translator
however none of them seems to be able to handle advanced cron expressions (or I do not know how to format the correct expression) as described in
the cron trigger tutorial and
wikipedia
How can I check my cron statements? I obviously cannot wait for the actual event to pass so that the daemon may execute them.
Is there a good cron tester which supports advanced expressions? Or how to make the cron daemon parse the expression, or how to code these expressions?
What I have so far for these statements is:
0 2 * * 0#2 /opt/bin/cleanup.sh
15 18 3 * * /opt/bin/verrrrrrrry.sh
But of course these are not correct.
For #1, I do not know how to specify the '2nd Monday', nor 'every 4 hours', while still beginning at 02:00.
For #2, I have no idea how to only specify months ending in an 'r' except by manually coding them in. Nor do I know how to specify the 3rd day.
|
To have something execute only on the second Monday of a month, the day-of-month value has to be 8-14 and the day has to be a Monday; for your schedule the hour has to be 2,6,10,14,18,22 and the minute 0. However, as dhag correctly commented and provided a solution for, when you specify both the day of week and the day of month (i.e. not as *), the program is executed when either matches. Therefore you have to test for one of them explicitly in the command itself, and the day of week is easier:
0 2,6,10,14,18,22 8-14 * * test $(date +\%u) -eq 1 && /opt/bin/cleanup.sh
The test $(date +\%u) -eq 1 succeeds only on Mondays (date +%u prints the day of week, 1 for Monday), and the day-of-month range 8-14 ensures it is the second Monday of the month.
The third day of every month ending in an "r" at 18:15:
15 18 3 9-12 * /opt/bin/verrrrrrrry.sh
(Vixie cron also accepts month names, but only their first three letters, and its manual says ranges or lists of names are not allowed, so the numeric 9-12 is the portable choice.)
| How do I format these two complex cron statements? |
1,462,031,588,000 |
I have a backup.sh file that launches an rsync command. This rsync backs up my dedicated server to a Raspberry Pi running Raspbian.
(I use keychain so I don't need to type any password etc ...)
The problem is that when I launch the script manually everything works, but when it is run from crontab (as the same user) I get the following error:
2013/10/07 19:36:02 [6456] rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
2013/10/07 19:36:02 [6456] rsync error: error in rsync protocol data stream (code 12) at io.c(605) [Receiver=3.0.9]
Here is my backup.sh file
#!/bin/bash
echo "_ backup start "$(date +%H:%M:%S)
echo " "
/usr/bin/rsync -avh --rsync-path='/usr/bin/rsync' --delete --log-file='/home/user/rsync.'$(date +%d%m%Y-%H%M%S)'.log' --rsh='ssh -p 1234' [email protected]:/path/to/archives/ /media/backup/
echo " "
echo "_ backup end "$(date +%H:%M:%S)
And now the crontab line (crontab -e)
# m h dom mon dow command
30 5 * * * /home/user/backup.sh | mail -s "Backup RPi "$(date +\%d/\%m/\%Y-\%X) [email protected]
|
(I use keychain so I don't need to type any password etc ...)
Ok, so you need to tell the program running in your cron job how to find your keychain.
SSH looks for an SSH agent (which keychain emulates) via the environment variable SSH_AUTH_SOCK. So you need to set this environment variable in your crontab.
In a typical configuration, SSH_AUTH_SOCK is a path to a socket with a random name. Since you're using keychain, you can easily find the current socket: keychain writes files to ~/.keychain that contain environment variable declarations that set SSH_AUTH_SOCK and other similar variables (SSH_AGENT_PID, GPG_AGENT_INFO). So just source the appropriate file in your cron job.
30 5 * * * . ~/.keychain/$(hostname)-sh; /home/user/backup.sh
(Aside: cron has a built-in feature to send mail with the job's output, such that you receive a mail only if the job does produce some output. No need to fiddle with | mail.)
| Rsync problem with cron and no problem manually |
1,462,031,588,000 |
My first solution to this was to execute date +%Y%m%d%H%M and put that format of numbers into a file, and run the script every minute from cron. Then if the date from the file matches the date command, the script would do something. Then it would update that file for the next day plus 3 minutes. Is there a better way to accomplish this?
The result would be that the script would run the first day at (for example) 4:00am then the second day it would run at 4:03am and the third day it would run at 4:06am. It would execute every minute, but only run (if block) at the correct time.
Is the question and my solution clear?
|
You can use the at command from within your script (or a wrapper). Give it the time to run the next iteration.
echo '/dir/scriptname' | at 'now + 1443 minutes'
(1443 minutes is 24 hours plus 3 minutes.) Put that line as near as possible to the beginning of the script to reduce drift.
| In linux, would it be possible to run a script every day 3 minutes later than the previous day? |
1,462,031,588,000 |
I'm using vixie cron on Debian/Ubuntu. How do I set a custom directory to be read by cron? I want cron to run commands found in a series of files, e.g.
/home/cron/*.cron
or perfectly
/home/*/cron/*.*
In fact, instead of putting cron commands in /var/spool/cron/crontabs/root, I want to spread commands across these folders.
Is it possible (and recommended) to use some kind of include to pull in other files from the root crontab?
|
Cron in Debian reads commands from 3 locations: first the users' crontabs in /var/spool/cron/crontabs/$user, then the global /etc/crontab, and then all files in /etc/cron.d.
But you can't easily have multiple crontabs per user. Only the files in /var/spool/cron/crontabs are per-user. The other two locations are system-wide and each line contains a username, under which the command is run. You could integrate/link those individual files into /etc/cron.d, but then users would be able to run commands as root or any other user.
So if one crontab per user is OK, just make links from your location to /var/spool/cron/crontabs. If you need multiple files per user, then you need to make a script which takes your users' cron files, modifies them for the system-wide crontab format (that means adding a field with their username) and installs them under /etc/cron.d.
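A sketch of such a script, assuming per-user cron files live under /home/$user/cron/*.cron (the username alice and all paths are illustrative); it splices the username in as a sixth field after the five time fields, and assumes standard five-field entries (no @reboot lines):

```sh
#!/bin/sh
# Convert a user's cron files into system-wide /etc/cron.d format by
# inserting the username between the time fields and the command.
user=alice
for f in /home/$user/cron/*.cron; do
    [ -e "$f" ] || continue                     # glob matched nothing
    awk -v u="$user" '
        /^[[:space:]]*(#|$)/ { print; next }    # keep comments and blank lines
        { $5 = $5 " " u; print }                # username becomes field 6
    ' "$f" > "/etc/cron.d/${user}-$(basename "$f" .cron)"
done
```

(The .cron suffix is stripped from the target name because Debian's cron ignores files in /etc/cron.d whose names contain dots.)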
| Custom directory for cron commands |
1,462,031,588,000 |
Currently I need to have a program running all the time, but when the server is rebooted I have to start the program manually. And sometimes I'm not available when that happens.
I can't use the normal init configuration to restart my program when the server starts, because I don't have root access and the administrator doesn't want to install it.
|
I posted this on a similar question
If you have a cron daemon, one of the predefined cron time hooks is @reboot, which naturally runs when the system starts. Run crontab -e to edit your crontab file, and add a line:
@reboot /your/command/here
I'm told this isn't defined for all cron daemons, so you'll have to check to see if it works on your particular one
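If @reboot turns out not to be available, a common fallback (a sketch; the program name and paths are hypothetical) is a frequent cron job that only starts the program when it is not already running:

```
*/5 * * * * pgrep -x myprogram > /dev/null || /home/user/bin/myprogram >> "$HOME/myprogram.log" 2>&1
```

This assumes the program puts itself into the background; if it doesn't, wrap it in nohup ... &.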
| how to ensure a program is always running but without root access? |
1,462,031,588,000 |
I have a crontab script which reads an environment variable with sensitive information (like a decryption key). The crontab entry is defined like this:
30 3 * * 5 VAR="pw" /usr/bin/script.sh
Is it safe to store something so sensitive in this way? If not, how can I improve it?
|
Putting the variable in the crontab file and in the environment is secure in the sense that other users can't access it. However, it runs the risk of accidental disclosure, for example if you post the crontab line when asking for debugging help, or if you copy it and send it to someone as an example of how to automate running the script, or if some process logs its environment and the log is accessible to people who shouldn't have access to this password, or if you back it up and the backups are accessible to people who shouldn't have access to this password. I recommend keeping secrets in a separate file or directory tree and reading them when needed.
If you can't modify script.sh, read the secret in the crontab entry itself instead of hard-coding it there:
30 3 * * 5 VAR="$(cat ~/.secret/script/password)" /usr/bin/script.sh
If you can modify script.sh, make it read the variable when it needs it, rather than making it available throughout the script.
Note that while environment variables are not disclosed to other users, command line arguments can be visible to other users through ps. So don't pass $VAR as an argument to an external command. It's safe to use $VAR as an argument to a shell builtin, for example the following is safe:
gpg --passphrase-fd=3 3< <(printf %s "$PASSPHRASE")
although this specific example would be simpler expressed as
gpg --passphrase-file=/path/to/passphrase
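For the "modify script.sh" case, a minimal sketch of reading the secret on demand (the path and the SECRET_FILE override are illustrative, not part of the original script):

```sh
#!/bin/sh
# Read the secret from an owner-only file just before it is needed,
# keeping it out of the crontab and out of the process environment.
secret_file="${SECRET_FILE:-$HOME/.secret/script/password}"
if [ -r "$secret_file" ]; then
    VAR=$(cat "$secret_file")
    # Use "$VAR" only with shell builtins; arguments passed to
    # external commands can be visible to other users via ps.
    printf 'secret loaded (%d bytes)\n' "${#VAR}"
else
    echo "no secret file at $secret_file" >&2
fi
```

Create the file once with mkdir -p ~/.secret/script, chmod 700 ~/.secret, and chmod 600 on the password file, so only the owner can read it.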
| Is it safe to call an environment variable from a crontab script? |
1,462,031,588,000 |
I work in a team. We have a CentOS Linux machine. There's a user there called www. We run cron jobs as that user, i.e. I can type sudo -u www crontab -e to see/edit the crontab, and my teammates do the same. However, I like to use nano as my editor, but the crontab opens in vim, because that's the EDITOR for the www user.
Is there a way I can get the crontab to open in nano, without changing the EDITOR setting for the www user? (e.g. via the addition of some additional command line parameter?) My teammates will continue to expect vim to open when they themselves run sudo -u www crontab -e.
|
You can specify the EDITOR variable as an argument to sudo:
sudo -u www EDITOR=nano crontab -e
| How to open the crontab of another user in my editor of choice? |
1,462,031,588,000 |
I've searched through some answers but nothing seems to clarify my confusion.
I have a cron job I want to run every 5 minutes:
*/5 * * * * cd /mnt/internal-storage/coindata && shell/command coins update
Do I place this in the /etc/cron.daily folder, or do I create an /etc/cron.minutely?
Also, what kind of file do I create inside this folder?
|
The best solution for this is likely to add a line to your crontab. Accessing the crontab file may differ between implementations of cron, so I've provided commands for the two cron implementations in the official Arch repos. If you want a solution that does not require a specific cron implementation, I've written another answer that uses systemd/Timers instead.
crontab -e or variants of it use the EDITOR environment variable (defaults to vi). If you wish to use a different editor, export it to the EDITOR variable like so:
export EDITOR=vim
where vim is replaced with the editor of your choice.
Editing crontab with cronie:
crontab -e
Editing crontab with fcron:
fcrontab -e
Add your cron command to the file and save it:
*/5 * * * * cd /mnt/internal-storage/coindata && shell/command coins update
The format for lines in this file is
minute hour day_of_month month day_of_week command
If cron is not running, start its daemon.
For cronie: systemctl start cronie.service
For fcron: systemctl start fcron.service
If you want the commands in the crontab to continue to run after rebooting, make sure that the cron daemon is enabled:
systemctl enable cronie.service or systemctl enable fcron.service
| Running a cron job every 5 minute on arch |
1,462,031,588,000 |
I want to set up a job to run every 15 minutes, starting at the 5th minute.
This is my crontab line:
5/15 * * * * /root/job.sh >> /root/job.log
But it only runs once an hour, on the fifth minute. I confirmed by checking /var/log/syslog.
On Debian 7.8.
|
Have you looked at this?
I think what you are looking for is:
5-59/15 * * * * /root/job.sh >> /root/job.log
A step value only applies after a range (first-last/step) or *; 5-59/15 means minutes 5, 20, 35 and 50, while a bare 5/15 is not a range and ends up being read as just minute 5, which is why the job ran once an hour.
| Why is cron ignoring a / (forward slash) interval? |
1,462,031,588,000 |
I have a bash script that emails me whenever a web server is not responding, and the script is run by cron every 5 minutes. However, if the website goes down for a few hours, I'd receive too many messages instead of just one.
What's the best way to make it email only once? Should I use an environment variable and check it before sending the email/resetting it when web server goes up again? Are there better ways to do this (without polluting the environment)? Am I doing something silly right now? I'm not confident in my shell scripting skills.
#!/bin/sh
output=$(wget http://lon2315:8081 2>&1)
pattern="connected"
if [[ ! "$output" =~ "$pattern" ]]
then
echo "$output" | mail -s "Website is down" "[email protected]"
fi
|
I don't think you can use environment variables, as they won't persist between script "runs".
Alternatively, you could write to a temporary file in /tmp or somewhere in your home directory, then check it each time?
For example, something like
#!/bin/bash
output=$(wget http://lon2315:8081 2>&1)
pattern="connected"
tempfile='/tmp/my_website_is_down'
if [[ ! "$output" =~ "$pattern" ]]
then
if ! [[ -f "$tempfile" ]]; then
echo "$output" | mail -s "Website is down" "[email protected]"
touch "$tempfile"
fi
else
[[ -f "$tempfile" ]] && rm "$tempfile"
fi
| Run crontab only if condition |
1,462,031,588,000 |
I have a cron job that runs every minute on an Ubuntu 12.04 box
$sudo crontab -e
* * * * * mylogin /pathto/file.sh > /log/filelog
If I run file.sh, the bash script does some stuff and echoes runs:
$ ./file.sh
runs
If I check the log of cron tab it shows that the job is running:
Jul 10 12:41:01 localhost CRON[1811]: (root) CMD (mylogin /pathto/file.sh > /log/filelog)
Jul 10 12:41:01 localhost CRON[1810]: (CRON) info (No MTA installed, discarding output)
Jul 10 12:42:01 localhost CRON[1813]: (root) CMD (mylogin /pathto/file.sh > /log/filelog)
However, the script is not running. It is not doing its work and not echoing runs to /log/filelog.
$cat /log/filelog #shows nothing
What other steps can I take to debug this problem?
|
Specifying a username like mylogin is for the /etc/crontab file. With your command sudo crontab -e you are actually editing /var/spool/cron/crontabs/root and you should not specify a username in such a file, only in /etc/crontab.
If you have to run the command as user mylogin you have to put the line in /etc/crontab (and edit this with root privileges), or just put it in the mylogin user's crontab.
From man 5 crontab:
EXAMPLE SYSTEM CRON FILE
The following lists the content of a regular system-wide crontab file.
Unlike a user's crontab, this file has the username field, as used by
/etc/crontab.
| Why isn't this crontab job running |
1,390,561,389,000 |
Could someone explain what the apt script is doing in cron.daily please? I can't find any documentation on it.
NOTE: I'm trying to investigate a server-server communications issue which frequently occurs around midnight - and I want to rule out the cron dailies.
|
Depending on the configuration, it essentially does the following:
apt-get update (update the package lists)
apt-get -y -d dist-upgrade (download all upgradeable packages)
unattended-upgrade (auto upgrade all upgradeable packages, requires the package unattended-upgrades to be installed)
apt-get -y autoclean (autoclean package archive)
| What does /etc/cron.daily/apt do? |
1,390,561,389,000 |
A line in my cron.daily script does not work as expected. I don't have any special SMTP mail server on the system.
This line
rsync -avun --inplace /oneuser/file.xls /otheruser/file.xls | mail -s "$0 $?"
produces a Cannot open mail:25 message.
What do I need to set up a local mail subsystem? I prefer simple mailboxes to an email server setup. I'd like otheruser, when logged in, to be able to read cron (root) messages with the mail command. I found a similar question but not the answer here: How to set up local mail retrieval and delivery?
When I try to send a mail to a user with the mail command, I get, after the final dot:
EOT
[root@localhost etc]# send-mail: Cannot open mail:25
|
I recommend you just install postfix for the local mail delivery. On Ubuntu at least it will interactively ask about your setup, which includes a local delivery only option.
In addition you can make a local account mailboy for mail delivery and allow all people to read the mail delivered to that account.
In order to get the mail to root delivered to mailboy, edit /etc/aliases and add a line:
root: mailboy@localhost
after doing so run newaliases.
| set up local mail delivery to user from cron script |
1,390,561,389,000 |
I have a runaway ruby process - I know exactly how I trigger it.
Point is, it got me thinking about runaway processes (CPU usage or memory usage).
How would one monitor runaway processes with cron? grep / top / ulimit?
Can one notify the user via the command line if something like this happens?
What alternatives are there to Monit?
|
Instead of writing a script yourself you could use the verynice utility. Its main focus is on dynamic process renicing but it also has the option to kill runaway processes and is easily configured.
| cronjob to watch for runaway processes and kill them |
1,390,561,389,000 |
I've got a shell script myscript.sh which calls a command, let's say mycmd.
When I run that script from the terminal, e.g. ./myscript.sh, everything works fine.
But when I add that script to the crontab, the output of mycmd is empty. When I call /usr/local/bin/mycmd inside myscript.sh, everything works fine again.
So why do I have to use the absolute path of the executable? Is it because it is not in the $PATH of the "cron bash"?
|
You have it exactly right - the environment of cron may have a PATH that does not include /usr/local/bin/. You can fix this by, in your script, appending that directory to the PATH:
PATH="$PATH:/usr/local/bin/"
The best practice, though, is indeed to use explicit paths for all external binaries that a script calls just in case (for instance) a maliciously designed program conveniently called cp gets dropped into the PATH somewhere before /bin.
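As an alternative to editing every script, Vixie/ISC cron lets you set PATH at the top of the crontab itself, so every job sees it; note that cron does not expand variables there, so the full value has to be spelled out (the job line is illustrative):

```
PATH=/usr/local/bin:/usr/bin:/bin
*/10 * * * * myscript.sh
```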
| Absolute path in bash scripts |
1,390,561,389,000 |
I want to run this program as root. It activates a virtual environment and runs a python program in that virtual environment.
The program looks like this
$cat myfile.sh
source /pathto/venv/bin/activate
python /pathto/rsseater.py
If I just run it as root I get
$ sudo ./myfile.sh
./myfile.sh: 2: ./myfile.sh: source: not found
I found this but if I try to add sudo -s to the top of the file it seems to change the prompt to a # -- but the python does not run as expected. I'm not sure what the program is doing when I have sudo -s at the top of the file like that.
How can I get this program to run as root?
|
You generally want to add a call to an interpreter at the top of your scripts, like so:
$cat myfile.sh
#!/bin/bash
source /pathto/venv/bin/activate
python /pathto/rsseater.py
This may seem very similar to what you have, but it is in fact very different. In your scenario you're attempting to run the commands within the shell that gets invoked when you run sudo. With the example above, you get a separate Bash shell, and the commands after the #!/bin/bash line are run inside it: the #!/bin/bash line causes a new Bash shell to be forked.
Interpreters and shebang
The nomenclature of #! is called a shebang. You can read more about it through the various Q&A on this site and also on Wikipedia. But suffice it to say, it tells the system that when a given file is "executed" (aka run), it should first load a program from disk (an interpreter), which is then tasked with parsing the contents of the file you just executed: essentially everything after the shebang line.
Interpreters
An interpreter is any program that can process commands one at a time from a file. In this case we're using the Bash shell as the interpreter. But there are other shells that you could use here too, such as C shell or Korne shell among others.
You can also use higher level interpreters such as Perl, Ruby, or Python too in this same manner.
Why the error using 'sudo -s'
If you take a look at the man page for sudo it has this to say about -s:
-s [command]
The -s (shell) option runs the shell specified by the SHELL
environment variable if it is set or the shell as specified in
the password database. If a command is specified, it is passed
to the shell for execution via the shell's -c option. If no
command is specified, an interactive shell is executed.
So when you run this command:
$ sudo -s ./myfile.sh
The following happens:
$ /bin/sh
sh-4.2$ source somesource.bash
sh: source: somesource.bash: file not found
The shell /bin/sh does not support the source command (its POSIX spelling is the . builtin).
| sudo source: command not found |
1,390,561,389,000 |
I have this bash script:
while [[ 1 ]] ; do sleep 3600 ; ./notify.sh --text "ricordati di bere" && play /mnt/musica/login.wav && zenity --info --text="<span size=\"xx-large\">Time is $(date +%Hh%M).</span>\n\nricordati di <b>bere</b>." --title="drink time" ; done
I'd like to execute this script from 8:00 (less important) until 19:00 (most important); is that possible?
I have looked at the at command, but I didn't find how to set "until" or "before".
This question is different from this
|
You would do the scheduling with cron. The schedule would look like
0 8-19 * * * /path/to/script
or
0 8,9,10,11,12,13,14,15,16,17,18,19 * * * /path/to/script
and the script would look like
#!/bin/sh
./notify.sh --text "ricordati di bere" &&
play /mnt/musica/login.wav &&
zenity --info \
--text="<span size=\"xx-large\">Time is $(date +%Hh%M)</span>\n\nricordati di <b>bere</b>." \
--title="drink time"
See also "How to send a mail for every 10 minutes through shell script?"
| specify a time interval in which to execute a certain script |
1,390,561,389,000 |
I have a cron script that runs a few times every day throughout the year. However, around Christmas it should work differently. So my script looks basically like this:
# m h dom mon dow command
26 16 * JAN-NOV MON-THU (echo 14 `date`) >> /var/log/cron.www-data 2>&1
26 16 1-18 DEC MON-THU (echo 6 `date`) >> /var/log/cron.www-data 2>&1
where I replaced the actual command with a simple echo for debugging / demonstration purposes.
For the exceptional case, I added some jobs like
26 07 19-24 DEC ? (echo 1 `date` ) >> /var/log/cron.www-data 2>&1
that actually work fine.
The problem is, that the second line above (echo 6) just got run, today December 19 -- the log file shows
6 Tue Dec 19 16:00:01 CET 2017
I guess my question is simply: Why did this job run?
I'm running Linux 3.18.11-v7+ #781 SMP PREEMPT Tue Apr 21 18:07:59 BST 2015 armv7l GNU/Linux on a Raspberry Pi.
|
The reason can be found in crontab(5):
Commands are executed by cron(8) when the minute, hour, and month of year fields match the current time, and when at least one of the two day fields (day of month, or day of week) match the current time.
(Emphasis added)
I believe you want your script to run at the specified time from 1-18 December, but only from Monday to Thursday. As you can see from the manual page, cron doesn't do this when you specify both day of the month and day of the week. Your command will execute on every day from 1-18 December and on any day from Monday to Thursday in December. December 19, 2017 is a Tuesday, hence the script running.
Note: the above applies to ISC cron, the default on Debian systems.
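A common workaround for this either/or behaviour (a sketch reusing the question's logging; date +%u prints 1-7 for Monday-Sunday, and % must be escaped in a crontab) is to leave day of week as * and test it inside the command, so only the day-of-month field restricts the date:

```
26 16 1-18 DEC * test "$(date +\%u)" -le 4 && (echo 6 `date`) >> /var/log/cron.www-data 2>&1
```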
| Why does this cron job run? |
1,390,561,389,000 |
When attempting to establish an ssh tunnel, I noticed that even if the connection fails, the process stays alive. For example, if I try to run this command while hostname is down:
/usr/bin/ssh -f -i /home/user/.ssh/id_rsa -N -R 3000:localhost:22 user@hostname
Occasionally I get the response:
Warning: remote port forwarding failed for listen port 3000
I only get this error message when the original process (running on the local machine) dies but the remote server does not realize it yet. The process tries to restart, but the server thinks it still has a connection on 3000 and won't accept a new connection, resulting in the warning above.
But if I do a pgrep -x ssh I can see that the process is still alive. I would like to run this ssh command as part of a bash script in a cronjob which first checks whether the tunnel is established and, if not, reestablishes it. But the way I have the script set up, it either a) sees that the tunnel is down and attempts to create a new one (which silently fails), or b) sees that the failed process is alive and does nothing. The result is that the tunnel never gets reestablished as long as that failed process still exists.
Is there a way to just kill the process if the connection fails instead of getting a warning?
|
For anyone else who may find the answer, I found the option that I wanted in this answer: adding -o ExitOnForwardFailure=True to the command forces ssh to exit if the port forwarding failed, instead of creating a zombie process:
/usr/bin/ssh -f -i /home/user/.ssh/id_rsa -N -R 3000:localhost:22 user@hostname -o ExitOnForwardFailure=True
| SSH process staying alive even if connection fails? |
1,390,561,389,000 |
When I want to manipulate Unix cron, I do
crontab -e
then type (or paste) my directives.
How do I write directives to the crontab directly from a script?
In other words: instead of pasting content inside crontab -e, I want to write and save it there from outside, from a script, to automate things.
I ask this regarding a multi-purpose script I run each time I create a new VPS environment (for example, a new DigitalOcean droplet).
I can write to files via, for example:
sudo bash -c "touch /location/new_file && echo 'text...text...text...text' > /location/new_file"
Or:
sudo cat <<EOF >> /location/new_file
text...
text...
text...
text...
EOF
Yet I don't know whether it is even possible to write directly to the crontab from a script, and if so, how.
These are the tasks I would want to put into the crontab, from a script:
0 8 * * * tar -zcvf /home/USER/backups/files/www-html-$(date +\%F-\%T-).tar.gz /var/www/html
0 8 * * * find /home/USER/backups/files/* -mtime +30 -exec rm {} \;
0 8 * * * mysqldump -u benia -p121212 --all-databases > /home/USER/backups/mysql/alldb_backup.sql
1 8 * * * tar -zcvf /home/USER/backups/mysql/alldb_backup-$(date +\%F-\%T-).sql.tar.gz /home/USER/backups/mysql/alldb_backup.sql
2 8 * * * rm /home/USER/backups/mysql/alldb_backup.sql
2 8 * * * find /home/USER/backups/mysql/* -mtime +30 -exec rm {} \;
Note:
The above cron tasks do 2 things:
Daily back up all site dirs and all SQL databases, into 2 different dirs: one is ~/backups/files and one is ~/backups/mysql
Find and delete backup files created 30 days ago --- each day anew.
|
Per Ipor Sircer's answer about the usage of cron, i.e.
man crontab:
crontab [ -u user ] file
The first form of this command is used to install a new crontab from
some named file or standard input if the pseudo-filename ``-'' is
given.
this means that you can send the lines you want in your crontab file to the stdin of this command:
crontab -
crontab will then install a new cron file containing those commands.
The script first prints your existing crontab using crontab -u $user -l 2>/dev/null (you will need to assign your username to $user, or use $USER if it's in your environment). It then prints the new lines you want, and the aggregated result is piped into the stdin of crontab -.
Here's how it could look in your general-purpose script:
#!/bin/bash
user=YOU_NEED_TO_ENTER_YOUR_USER_HERE
# use a subshell to capture both commands' output into the pipe
(
# print the current value of crontab
crontab -u $user -l 2>/dev/null
# print your cron jobs to STDOUT
cat <<- 'EOF'
0 8 * * * tar -zcvf /home/USERNAME/backups/files/www-html-$(date +\%F-\%T-).tar.gz /var/www/html
0 8 * * * find /home/USERNAME/backups/files/* -mtime +30 -exec rm {} \;
0 8 * * * mysqldump -u benia -p121212 --all-databases > /home/USERNAME/backups/mysql/alldb_backup.sql
1 8 * * * tar -zcvf /home/USERNAME/backups/mysql/alldb_backup-$(date +\%F-\%T-).sql.tar.gz /home/USERNAME/backups/mysql/alldb_backup.sql
2 8 * * * rm /home/USERNAME/backups/mysql/alldb_backup.sql
2 8 * * * find /home/USERNAME/backups/mysql/* -mtime +30 -exec rm {} \;
EOF
# Everything printed to stdout inside the subshell will be connected
# to the pipe and fed to crontab on stdin - recreating a new crontab
) | crontab -
| Is it possible to write to the crontab from a multipurpose script? |
1,390,561,389,000 |
To my surprise, a trial of using crontab and rsync for backups of some test files, which I started last December (2015), is still running, although the only line in the only crontab file I have is
#55 20 * * * /home/Harry/testrsync/trial_bak.sh
which is, or should be, commented out by the # added when I thought I had ended the trial after a couple of weeks.
My question is: why is it still being executed? Or is there some other way this line (without the #) could be run?
The backups are made daily at 20-55 and only the last four are kept, this is still going on exactly as the crontab entry and trial_bak.sh script defines it.
I am using zshell and Fedora 20, this is part of my preparation to update to the latest Fedora.
Solved:
Thanks to all who responded. Following @Marki555's answer I found that I have an /etc/cron.daily directory containing the script that does the daily backup, so the entry in crontab is indeed commented out and is not being run.
|
The cron daemon takes crontabs from several locations.
The directory /etc/cron.d and the file /etc/crontab are special: they can be edited manually and the daemon will always see the new version automatically. These are also the only crontab files which have a username field.
The crontabs of individual users (usually in /var/spool/cron/crontabs) are not reliably re-read by the cron daemon when edited in place. You should either edit them with the command crontab -e, or restart the cron daemon after each change (on Fedora that would be systemctl restart crond.service).
So in your case I suggest you first restart the cron daemon. You can also add some debugging output to the trial_bak.sh script, such as running pstree -p.
| Why is this commented out crontab file line executed? |
1,390,561,389,000 |
I am using Linux Mint 17.
I want to be notified for a small break at 50 minutes past every hour.
Here is cron job:
nazar@desktop ~ $ crontab -l
DISPLAY=:0.0
XAUTHORITY=/home/matrix/.Xauthority
00 13 * * * /home/nazar/Documents/scripts/lunch_break_job.sh # JOB_ID_2
50 * * * * /home/nazar/Documents/scripts/pc_break.sh # JOB_ID_1
* * * * * /home/nazar/Documents/scripts/cron_job_test.sh # JOB_ID
Here is script for /home/nazar/Documents/scripts/cron_job_test.sh:
#!/bin/bash
export DISPLAY=0.0
export XAUTHORITY=/home/matrix/.Xauthority
if [ -r "$HOME/.dbus/Xdbus" ]; then
. "$HOME/.dbus/Xdbus"
fi
/usr/bin/notify-send -i "hello"
This snippet:
if [ -r "$HOME/.dbus/Xdbus" ]; then
. "$HOME/.dbus/Xdbus"
fi
checks for a saved DBUS_SESSION_BUS_ADDRESS and uses it.
According to this answer I executed the script, and now my D-Bus address is saved to $HOME/.dbus/Xdbus:
nazar@desktop ~ $ cat $HOME/.dbus/Xdbus
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-flm7sXd0I4,guid=df48c9c8d751d2785c5b31875661ebae
export DBUS_SESSION_BUS_ADDRESS
Everything should work, but the notification still doesn't appear, and I can't find what is missing.
From the terminal it works fine.
How can I solve this issue?
SOLUTION:
Now my crontab looks as follows:
DISPLAY=":0.0"
XAUTHORITY="/home/nazar/.Xauthority"
XDG_RUNTIME_DIR="/run/user/1000"
00 13 * * * /home/nazar/Documents/scripts/lunch_break_job.sh # JOB_ID_2
50 * * * * /home/nazar/Documents/scripts/pc_break.sh # JOB_ID_1
# * * * * * /home/nazar/Documents/scripts/cron_job_test.sh # JOB_ID
and cron_job_test.sh looks now:
#!/bin/bash
/usr/bin/notify-send -i /home/nazar/Pictures/icons/Mail.png "hello" "It is just cron test message"
pc_break.sh:
#!/bin/bash
/usr/bin/notify-send -i /home/nazar/Pictures/icons/download_manager.png "Break:" "Make a break for 10 min"
lunch_break_job.sh:
#!/bin/bash
/usr/bin/notify-send -i /home/nazar/Pictures/icons/Apple.png "Lunch: " "Please, take a lunch!"
|
You need to set XDG_RUNTIME_DIR as well. Change your crontab to this:
DISPLAY=":0.0"
XAUTHORITY="/home/nazar/.Xauthority"
XDG_RUNTIME_DIR="/run/user/1001"
00 13 * * * /home/nazar/Documents/scripts/lunch_break_job.sh # JOB_ID_2
50 * * * * /home/nazar/Documents/scripts/pc_break.sh # JOB_ID_1
* * * * * /home/nazar/Documents/scripts/cron_job_test.sh # JOB_ID
Make sure you change nazar to whatever your username is and 1001 to your actual UID. You can get your UID by running id -u.
And all you need in your script is:
#!/bin/bash
/usr/bin/notify-send "hello"
I just tested this on Arch running Cinnamon and it worked fine.
The variables are being set in the crontab, no need to export anything from the script. There's also no point in doing so, the script is being called by cron, it wouldn't export the values you need anyway.
| Notify-send doesn't work at Cinnamon |
1,390,561,389,000 |
System: Xubuntu 13.10
When I have this crontab entry
*/5 * * * * cat /home/dbk/.bash_aliases &> /home/dbk/Desktop/junk
junk has a byte size of 0.
Running
$ cat /home/dbk/.bash_aliases &> /home/dbk/Desktop/junk
gives a file with a proper size and content.
|
The problem is that cron runs tasks with sh. &> is a bash shortcut for redirecting both stderr and stdout to the same file; it does not exist in sh.
In sh, your command:
cat /home/dbk/.bash_aliases &> /home/dbk/Desktop/junk
means running two commands separately:
Run cat /home/dbk/.bash_aliases in background
cat /home/dbk/.bash_aliases &
Truncate the junk file.
> /home/dbk/Desktop/junk
So you should use bash to run your command in crontab:
*/5 * * * * bash -c "cat /home/dbk/.bash_aliases &> /home/dbk/Desktop/junk"
or, more portably:
*/5 * * * * cat /home/dbk/.bash_aliases > /home/dbk/Desktop/junk 2>&1
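The difference is easy to demonstrate outside cron; a minimal sketch (temporary files created with mktemp):

```shell
# Under bash, &> sends both streams to the file:
f=$(mktemp)
bash -c "echo hello &> '$f'"
cat "$f"    # hello
# The portable form behaves the same in any POSIX shell:
g=$(mktemp)
sh -c "echo world > '$g' 2>&1"
cat "$g"    # world
```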
| Why does redirection in crontab result in a zero byte file? |
1,390,561,389,000 |
I want to schedule a file to run using the at command, as described in this tutorial. I see that the at command supports the posix format -- but I don't see any mention of timestamps. Surely it is possible to schedule a job at a given timestamp.
$ man at | grep timestamp
|
From /usr/share/doc/at/timespec, it doesn't look like it. But you can always use date to convert your timestamp, e.g.:
at "$(date --date=@1393419435 +'%H:%M %D')"
date takes a timestamp in seconds, so don't forget to trim fractions of seconds if needed.
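As a concrete sketch (GNU date assumed; at itself is left out because it needs atd running):

```shell
ts=1393419435                            # example timestamp in seconds
TZ=UTC date --date=@"$ts" +'%H:%M %D'    # prints 12:57 02/26/14
# then hand the converted time to at:
#   at "$(date --date=@"$ts" +'%H:%M %D')" -f job.sh
```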
| Is it possible to use the at command to schedule a job to run at a given timestamp? |
1,390,561,389,000 |
If a cron job produces output it is e-mailed to the user's account.
I would like to redirect this e-mail to another e-mail account. Preferably on a user-by-user basis.
I've looked into some options that are often mentioned in other postings:
Using the cron MAILTO variable. Won't work. That one is not
supported on Solaris. It is a Linux thing, potentially also a BSD
thing, but certainly doesn't exist on Solaris.
Using the ~/.forward file. Can't make that work. I suspect it is
because this file is really related to sendmail universe and I'm
not sure Solaris cron actually uses sendmail to send its e-mails.
To get to the bottom of this I guess I need to understand exactly by which mechanism Solaris cron sends e-mails.
Anyone ?
|
If you want to forward all of a user's mail, not just the mail from Cron, Solaris does support ~/.forward. Solaris also supports global aliases in /etc/mail/aliases; if you modify this file, you need to run newaliases.
If you only want to forward mail from cron, you can set a filter in ~/.forward or /etc/mail/aliases. I don't think Solaris comes with any useful filtering tool preinstalled; the classic program for this is procmail. Use |/usr/local/bin/procmail as your filter, and something like this as your ~/.procmailrc (untested):
:0
* ^From: Cron Daemon <[email protected]>
* ^Subject: Cron .*
! [email protected]
Alternatively, you can mail the output of the job explicitly from the crontab. Install moreutils (I don't know how easy it is to compile under Solaris), which contains a command ifne that executes a program only if its standard input is not empty.
… 2>&1 | ifne mailx -s 'Cron output' [email protected]
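If moreutils can't be built on the box, the same only-mail-if-output behavior can be sketched in plain shell (the mailx address below is a placeholder):

```shell
# Sketch of a moreutils-free "ifne": run the given command and pass its
# output along only when there is some. In the crontab the printf would
# be piped to mailx instead of printed.
mail_if_output() {
    out=$("$@" 2>&1)
    if [ -n "$out" ]; then
        printf '%s\n' "$out"
        # e.g.: printf '%s\n' "$out" | mailx -s 'Cron output' user@example.com
    fi
}
```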
| Solaris: how to forward cron e-mails? |
1,390,561,389,000 |
I have an init.d script to start crond which specifies the following for start():
start-stop-daemon -S --quiet --make-pidfile --pidfile /var/run/crond.pid --background --exec /usr/sbin/crond
However, the PID is always one number higher than what's recorded in /var/run/crond.pid. Does anyone have any idea what's going on here? I have approximately ten other init.d scripts that also make the same calls, and only cron.d has this issue.
EDIT:
This is interesting:
# /usr/sbin/crond &
#
[1]+ Done /usr/sbin/crond
# echo $!
737
# ps -eaf | grep crond
738 root /usr/sbin/crond
740 root grep crond
#
|
The crond program is designed to be a daemon. When it starts, one of the first things it does is fork a child and exit the parent. This is designed for environments where the caller waits for the program to exit before proceeding, whereas the daemon needs to continue executing in the background.
caller ---- fork--> wait -------------------------+-> ...
| |
'----> exec crond ---- fork--> exit -'
|
'----> read crontab, wait for time of next job, ...
The PID recorded by start-stop-daemon is the PID of the parent. If no other process forks during the short interval between the two forks, the child's PID ends up being the PID of the parent plus one.
Since start-stop-daemon is designed to handle daemons and let them run in the background, tell crond to stay in the foreground, i.e. not to fork at the beginning.
caller ---- fork--> store pid; ...
| |
'----> exec crond -f ----> read crontab, wait for time of next job, ...
With BusyBox's crond, pass it the -f option.
start-stop-daemon -S --quiet --make-pidfile --pidfile /var/run/crond.pid --background --exec /usr/sbin/crond -- -f
| start-stop-daemon makes cron pidfile with wrong pid |
1,390,561,389,000 |
I'm writing a bash-script that will be run as a cron job everyday. Very basic, I was wanting to change the wallpaper daily. I have mint-14 with mate.
The thing I'm getting caught up on right now is, I want to have the user's home path detected automatically. If I don't do this I would have to change it for all other users that run the script.
So far I have tried:
homedir=${HOME}/Pictures/daily
mateconftool-2 -t string -s /desktop/mate/background/picture_filename $homedir;
This doesn't work, but
echo $homedir
prints out the correct path.
EDIT:
When I tried ~user as @vonbrand suggested, there was no difference.
mateconftool-2 -t string -s /desktop/mate/background/picture_filename ~user/Pictures/daily;
|
$HOME is not set in cron, so put this in a script, and let your cron job execute that instead.
(Remember to set the execute bit on the script with chmod +x.)
#!/bin/bash
mateconftool-2 -t string -s /desktop/mate/background/picture_filename ~/Pictures/daily
Or in your cronjob,
HOME="$(getent passwd $USER | awk -F ':' '{print $6}')"
homedir=${HOME}/Pictures/daily
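A variation on the same idea, shown as a sketch: resolving the home directory inside the script itself, with cut instead of awk, and without relying on $USER (which cron may not set either):

```shell
#!/bin/sh
# Look up the invoking user's home directory in the passwd database,
# independent of whatever environment cron provides.
homedir=$(getent passwd "$(id -un)" | cut -d: -f6)
echo "$homedir"
```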
| Users home path in a bash script |
1,390,561,389,000 |
EDIT: THE SOLUTION
With Tim's help I determined that larger writes of data were failing versus smaller ones. Why they didn't fail when I ran the script interactively, I don't know...But here was the fix (new mount option of wsize=4096):
if mount -t cifs -o guest,wsize=4096 //drobonas/public /mnt/drobonas-public; then
...
wsize=4096 is a pretty small write (the default is 14x that), so I might experiment to find the limit. But for now I'm just happy it works.
ORIGINAL QUESTION
I've got a shell script that backs up our svn repositories. I back them up by tar'ing them over to a NAS (a Drobo). The script takes care of mounting and unmounting the network share itself.
The script itself works fine when I run it directly, but when run via cron it appears to fail with a few CIFS-related errors appearing in the syslog. First, here's the script:
#!/bin/sh
# This script tars up a backup of the needed svn repo directories.
# It expects to be run as root so that it can mount the drobo's drive.
# There are probably ways to allow user mounting (via additions to /etc/fstab) but I'm trying to minimize setup steps.
# I personally placed it in a directory for root (/root/bin or /home/root/bin depending on your distro), then used crontab -e (again as root) to schedule it.
# My crontab looked like this (runs at 1:01 AM, on Mon-Fri, as root user):
# 01 01 * * 1-5 /root/bin/svn-backup.sh
# mount our backup drive
if mount -t cifs -o guest //drobonas/public /mnt/drobonas-public; then
# perform the actual backup - went with tar so we can preserve permissions, etc
if tar cvpf /mnt/drobonas-public/SvnBackup/svn-backup-temp.tar /home/svnserver/svnconf/ /home/svnserver/svncreaterepo.sh /home/svnserver/svnrepositories/; then
# if everything worked out, we can do some cleanup
# remove our oldest backup in the rotation
rm /mnt/drobonas-public/SvnBackup/svn-backup-3.tar
# rename the existing backups to reflect the new order
#mv /mnt/drobonas-public/SvnBackup/svn-backup-4.tar /mnt/drobonas-public/SvnBackup/svn-backup-5.tar
#mv /mnt/drobonas-public/SvnBackup/svn-backup-3.tar /mnt/drobonas-public/SvnBackup/svn-backup-4.tar
mv /mnt/drobonas-public/SvnBackup/svn-backup-2.tar /mnt/drobonas-public/SvnBackup/svn-backup-3.tar
mv /mnt/drobonas-public/SvnBackup/svn-backup-latest.tar /mnt/drobonas-public/SvnBackup/svn-backup-2.tar
mv /mnt/drobonas-public/SvnBackup/svn-backup-temp.tar /mnt/drobonas-public/SvnBackup/svn-backup-latest.tar
# do svnadmin dumps as well - helps future-proof things
/bin/bash /root/bin/svn-dump.sh
# we're done, so unmount our drive
umount /mnt/drobonas-public
else
# something went wrong, unmount the drive and then exit signalling failure
umount /mnt/drobonas-public
exit 1
fi
else
# mount wasn't successful, exit signalling failure
exit 1
fi
Now here are the log entries (one note: it appears to create the "svn-backup-temp.tar" file successfully, the errors start to happen after that):
Jan 5 07:52:01 giantpenguin CRON[2759]: (root) CMD (/root/bin/svn-backup.sh)
Jan 5 07:52:02 giantpenguin kernel: [21139655.823930] CIFS VFS: Error -4 sending data on socket to server
Jan 5 07:52:02 giantpenguin kernel: [21139655.823961] CIFS VFS: Write2 ret -4, wrote 0
Jan 5 07:52:02 giantpenguin kernel: [21139655.824007] CIFS VFS: Write2 ret -112, wrote 0
The last error line then appears multiple times before the script presumably finishes out. Any insight? Thanks!
|
Have you verified that the backups are being created with all the correct contents?
There are several reasons you could be seeing the errors.
You may be seeing purely informational errors related to file-attribute setting at creation time (during the tar and mv commands). The NTFS or FAT filesystem underlying the CIFS mount may not actually support some of the system calls, so these may not be real errors.
Have you tried creating the tar archive locally, and then just copying it to the NAS?
Also, you can enable some more verbose logging via (from the fs.cifs README):
echo 7 > /proc/fs/cifs/cifsFYI
cifsFYI functions as a bit mask. Setting it to 1 enables additional
kernel logging of various informational messages. 2 enables logging
of non-zero SMB return codes while 4 enables logging of requests that
take longer than one second to complete (except for byte range lock
requests). Setting it to 4 requires defining CONFIG_CIFS_STATS2
manually in the source code (typically by setting it in the beginning
of cifsglob.h), and setting it to seven enables all three. Finally,
tracing the start of smb requests and responses can be enabled via:
echo 1 > /proc/fs/cifs/traceSMB
Those two options may provide you with enough information to know what your next steps should be.
| cifs write errors in cron job |
1,390,561,389,000 |
I have a machine (A) behind a firewall with no access to the Internet, on this machine I can NFS mount directories on another machine (B) which can access internet, and is accessible from Internet, but I cannot install anything on this machine (B).
I want to keep a directory on (A) in sync with my Dropbox (that I use on all my other machines (not A or B), all of them connect to Internet regularly).
The solution that I came up with is to have a cron job on (A), to call two rsync commands to sync a directory on (A) with an NFS mounted directory which is actually on (B).
Then I can have a cron job on some other machine on the Internet that syncs my Dropbox with the directory on (B).
Anyone can see any problems with this plan, or has a better suggestion?
Anything other unix utility besides cron and rsync?
|
You may be able to use Unison to synchronize your files. Unison uses the rsync protocol and can run over ssh. You may need to copy the executable into your directory on the remote system.
Using rsync may cause problems as it is difficult to synchronize file deletions.
EDIT: To sync a folder on A from system C (with working Dropbox), a chosen directory on B becomes the hub, and A and C the two spokes. Schedule the steps so that only one is running at a time.
Schedule Unison on system C to sync to the directory on B.
Schedule Unison on system A to sync to the directory on B. (May require NFS mounting the directory.)
Periodically, check Unison for conflicts if automatic resolution wasn't configured.
There are other ways to handle this. If the directory on B is always mounted when you need it, then you can skip this step. A symlink to an autofs NFS mount would handle this.
p.s. I was working with WinSCP today, and found it has a synchronize function. It appears to be useful for periodic use. Unison still seems better for automated updates.
| Using rsync + cron to sync a machine behind a firewall with my dropbox |
1,390,561,389,000 |
I want to use cron and this script (http://askubuntu.com/questions/23593/use-webcam-to-sense-lighting-condition-and-adjust-screen-brightness):
import opencv
import opencv.highgui
import time
import commands
def get_image():
image = opencv.highgui.cvQueryFrame(camera)
return opencv.adaptors.Ipl2PIL(image)
camera = opencv.highgui.cvCreateCameraCapture(-1)
while 1:
image = get_image()
image.thumbnail((32, 24, ))
image = tuple(ord(i) for i in image.tostring())
x = int((int((max(image) / 256.0) * 10) + 1) ** 0.5 / 3 * 10)
cmd = ("sudo su -c 'echo " + str(x) +
" > /sys/devices/virtual/backlight/acpi_video0/brightness'")
status, output = commands.getstatusoutput(cmd)
assert status is 0
crontab: */30 * * * * sudo python /home/username/screen.py
However, there are two issues:
First, can the while-loop be terminated after 5 sec or so?
Second, could anyone please try to improve the script so that lower brightness-levels can be set using the script? Maybe the way the 256 brightness-levels are mapped to the ones that can be set for the screen would need to be changed.
Thanks
|
In older kernels there was a brightness control file somewhere in /proc, but I think it offered the same functionality as the /sys file in your code snippet. That /proc file listed the several brightness levels you could use, and the same is probably true of this mechanism. Try cat /sys/devices/virtual/backlight/acpi_video0/brightness and check whether it reports the brightness levels you may use.
If you remove the loop, the whole code is executed once, so the brightness can only be set once per half-hour period. There is a small disadvantage to this: if you cover the camera at the moment of the brightness check, the brightness will be changed, and the next reading will be half an hour later, so you could end up with a completely dark screen for half an hour.
To avoid this situation, you could replace the while loop with a for loop (0 to 5, for example) with a 5-second sleep inside it. In the loop you check the brightness five times, and after the loop you calculate the average brightness and set it.
EDIT: Code with average from 25 seconds:
import opencv
import opencv.highgui
import time
import commands
from time import sleep
def get_image():
image = opencv.highgui.cvQueryFrame(camera)
return opencv.adaptors.Ipl2PIL(image)
camera = opencv.highgui.cvCreateCameraCapture(-1)
x = []
for i in range(5):
image = get_image()
image.thumbnail((32, 24, ))
image = tuple(ord(i) for i in image.tostring())
x.append(int((int((max(image) / 256.0) * 10) + 1) ** 0.5 / 3 * 10))
sleep(5)
sum = 0
for i in x:
sum = sum + i
avg = sum / len(x)
cmd = ("sudo su -c 'echo " + str(avg) + " > /sys/devices/virtual/backlight/acpi_video0/brightness'")
status, output = commands.getstatusoutput(cmd)
assert status is 0
Unfortunately I don't have the option to change the backlight (old kernel, or something missing in the kernel) and I don't have a camera to check whether it works...
| How to use cron + python to regularly adjust screen brightness? |
1,390,561,389,000 |
I posted this in Raspberry Pi, but was told that it was better suited in a general linux or programming area. So I figured I'd ask here now...
I am putting together a kiosk that should play video. I'm using a NUC with Raspberry Pi Desktop. Everything works but automating the audio with Cron. I work in a school and blasting sound during the day would kind of suck, so I want it to change based on the time of the day.
When run from terminal the following code works:
/usr/bin/amixer set Master 16384
and
amixer sset 'Master' 16384
So, I put it into Cron:
15 09 * * * /usr/bin/amixer set Master 16384
and
43 09 * * * amixer sset 'Master' 16384
Nothing. Fine. So I make a really simple python script to run (yes, I realize I put it in a system folder, I was planning on moving it, it just kind of ended up there.)
#!/usr/bin/env python3
from subprocess import call
call(["/usr/bin/amixer", "set", "Master", "65536"])
I make it executable:
chmod +x /etc/python/sound100.py
Then I call it from the terminal with both:
/etc/python/sound100.py
and
/usr/bin/python3 /etc/python/sound100.py
Again, it works. Yay. Into Cron it goes. Neither:
11 10 * * * python3 /etc/python/sound100.py
nor
11 10 * * * /usr/bin/python3 /etc/python/sound100.py
Not even
11 10 * * * /etc/python/sound100.py
Nothing works.
So begins the actual troubleshooting. I check the syslog. Everything runs, but I learn that when the command is run with sudo, it reports a maximum volume of 83. That means I can't run it from sudo crontab -e, so I start running it from the user's cron instead.
Still nothing. So I try running it as a sudoer but with the user beforehand. Nothing again.
Please help me. I just want to adjust the volume automatically without logging into ssh for every NUC we're putting up three times a day until I quit or retire.
Thank you!
|
After all the help I received I was ultimately able to find a solution. It's not the solution I really wanted and it's not very clean, but it works so I'm posting it here.
The Issue as I Understand it
The heart of the problem seems to be that running amixer as a generic user elevates to sudo permissions no matter what I do, and making another user just to run the audio cron jobs didn't work.
This resulted in every amixer command sent manually via the terminal adjusting the volume against a maximum of 65536, and every command sent via cron maxing out at 88. This huge disparity effectively muted the audio no matter what I did.
Thanks to a lot of help from the two folks here I've come to the conclusion that it's likely because one is trying to run bash and the other is running sh. But getting the two to reconcile seems to be impossible.
I have ultimately given up and have completely changed how I'm running the physical components all together...
So, here's my solution:
After switching to an HDMI-to-VGA adapter I plugged in a simple 3.5mm male-to-male audio cable from the NUC to the TV. The TV I'm using accepts video over VGA and audio over 3.5mm, so now I have video on the TV, but the audio needed to be switched to stereo output on the NUC to get it to work.
After this I ran amixer scontents and sudo amixer scontents and went through the information. Master displayed the two separate values as I came to expect, but PCM now showed 255, which it did not previously.
With this in mind I changed my focus from Master to PCM, and updated the Crontab with the following:
40 06 * * 1-5 amixer sset 'PCM' 191 2> /tmp/cronVolumeLog
25 07 * * 1-5 amixer sset 'PCM' 64 2> /tmp/cronVolumeLog
40 14 * * 1-5 amixer sset 'PCM' 128 2> /tmp/cronVolumeLog
Everything is now working the way that I expected, though with some crackling and other minor issues that appear with using an audio jack, but I'll just have to deal with them.
I know that this isn't the most honest answer to the problem, but it's the only one I found that worked for me after (literally) days worth of searching and working with anyone willing to help me.
My hope is that someone, sometime, will see this post and get a little farther than I did and find a solution. Or give up quickly and move on with their lives and be happy with the stupid audio jack.
But I learned a lot, so that's something.
| Cannot Schedule Audio Volume Change with Cron (tried direct in cron, python, etc...) |
1,390,561,389,000 |
I am trying to organize my webcam picture files into folders otherwise I get thousands of pictures in one folder.
I have a script foscam-move.sh (from Create sub-directories and organize files by date) with the following:
#!/bin/bash
for x in *.jpg; do
d=$(date -r "$x" +%Y-%m-%d)
mkdir -p "$d"
mv -- "$x" "$d/"
done
I have the script located in the folder with all of the .jpg files.
When I run it in terminal all is fine and it organizes it beautifully.
When I add the following cron task it doesn't run.
* * * * * /home/pi/Desktop/FI9821W_************/snap/foscam-move.sh # JOB_ID_2
I have it set to run every minute because this camera takes a lot of pictures.
How do I get cron to run my script every minute?
|
cron doesn't run with the same environment as your user. It's likely having issues because it's not in the proper directory. Have your script cd to the directory containing the images before executing your for loop.
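One way to do that without hard-coding the path is to cd into the script's own directory, assuming the script keeps living next to the images as in the question; a sketch:

```shell
#!/bin/bash
# Change to the directory the script is stored in, so the *.jpg glob
# matches regardless of cron's working directory.
cd "$(dirname "$0")" || exit 1
shopt -s nullglob    # make the loop a no-op when there are no jpgs
for x in *.jpg; do
    d=$(date -r "$x" +%Y-%m-%d)
    mkdir -p "$d"
    mv -- "$x" "$d/"
done
```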
| Sort files in a folder into dated folders |
1,390,561,389,000 |
My at on Mac OS X 10.11 seems to be incapable of launching jobs. I tried:
echo "date > foo" | at now
Running atq afterwards shows the job queued and no file foo was created. To make sure the problem isn't a date mixup, I ran:
$ date && echo "date > foo" | at now && atq
And both date and atq showed the same time.
I also checked for at.allow and at.deny files. /usr/lib/cron/at.deny exists but is empty and /usr/lib/cron/at.allow doesn't exist. According to the man page, that means that I am allowed to run it:
If _PERM_PATH/at.allow does not exist, _PERM_PATH/at.deny is checked,
every username not mentioned in it is then allowed to use at.
[...]
FILES
_PERM_PATH/at.allow allow permission control (/usr/lib/cron/at.allow)
_PERM_PATH/at.deny deny permission control (/usr/lib/cron/at.deny)
What am I doing wrong/what can I do to schedule this for later?
|
Seems that the at command doesn't work out of the box.
In my tests, for example:
OS X, according to this SU answer, needs:
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.atrun.plist
raspbian needed:
sudo apt-get install at (which will install and run atd, the at daemon)
Arch Linux had at installed, but again needed the daemon to be started manually:
sudo systemctl start atd
sudo systemctl enable atd
| How can I get at to work on OSX? |
1,390,561,389,000 |
I have a temp directory set up where users can place whatever files they need to send to other users via HTTP. The owner of this directory is an SFTP user, and cannot run cron jobs.
I have one shell user that can run cron jobs, but does not have permission to edit files in the SFTP user's directory.
I have an admin user that can access the SFTP user's directory when using sudo, but can't (read: I'd really rather not) run cron jobs.
So, here's the conundrum. How do I get a nightly cron job to run as a shell user to delete files older than 1 week within the SFTP user's directory, with the admin user's privileges?
|
Edit the /etc/sudoers file (use visudo!) and add an entry that allows the shell user to have sufficient privileges to run a specific command, without having to enter a password. If you use a script, make sure the script cannot be edited by anyone but root.
In /etc/sudoers, where shelluser is the shell user name:
shelluser ALL=NOPASSWD: /usr/bin/clean-up-sftp-temp-directory
In a /usr/bin/clean-up-sftp-temp-directory script, you can put something like:
#!/bin/sh
rm -f /home/sftpuser/will-be-deleted/*
After making the script executable, you should be able to call sudo clean-up-sftp-temp-directory and add it to the shell user's crontab.
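Since the question asks for files older than one week, the cleanup script could use find rather than a blanket rm; a sketch, reusing the placeholder path from above:

```shell
#!/bin/sh
# Delete only regular files modified more than 7 days ago, leaving
# newer uploads in place (the path is the answer's placeholder).
DIR=${DIR:-/home/sftpuser/will-be-deleted}
if [ -d "$DIR" ]; then
    find "$DIR" -type f -mtime +7 -delete
fi
```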
| Emptying a directory owned by another user weekly |
1,390,561,389,000 |
I have a server monitoring script which, among other things, checks the state of an IPSec tunnel using
ipsec auto --status
It works like a charm when run from the console (as root) but as soon as I run it from a (root) cronjob, the command fails: no output at all.
I even tried to create this simple root cronjob:
*/1 * * * * ipsec auto --status > /tmp/ipsec.txt
All it does is create an empty /tmp/ipsec.txt file!
Note: All other tasks in the script including networking and DB access work fine.
Any lights welcome.
|
It sounds like cron is not seeing ipsec in the path. It's a pretty good habit to include absolute paths to binaries in crontab. There is probably some complaining in /var/log/messages or /var/log/cron.
*/1 * * * * /usr/sbin/ipsec auto --status
You could also add the PATH environment variable to the top of the crontab. The PATH will apply to all the jobs in the crontab.
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:
*/1 * * * * /usr/sbin/ipsec auto --status
| ipsec auto --status fails in cronjob |
1,390,561,389,000 |
I do a lot of work on in the cloud running statistical models that take up a lot of memory, usually with Ubuntu 18.04. One big headache for me is when I set up a model to run for several hours or overnight, and I check on it later to find that the processes was killed. After doing some research, it seems like this is due to something called the Out Of Memory (OOM) killer.
I would like to know when the OOM Killer kills one of my processes as soon as it happens, so I don't spend a whole night paying for a cloud VM that is not even running anything.
It looks like OOM events are logged in /var/log/, so I suppose I could write a cron job that periodically looks for new messages in /var/log/. But this seems like a kludge. Is there any way to set up the OOM killer so that after it kills a process, it then runs a shell script that I can configure to send me notifications?
|
You can ask the kernel to panic on oom:
sysctl vm.panic_on_oom=1
or for future reboots
echo "vm.panic_on_oom=1" >> /etc/sysctl.conf
You can adjust a process's likelihood of being killed, but presumably you have already removed most processes, so this may not be of use.
See man 5 proc for /proc/[pid]/oom_score_adj.
Of course, you can test the exit code of your program. If it is 137, it was killed by SIGKILL, which an OOM kill produces.
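That exit-code check can be wrapped around the job; a sketch (the notification step is a placeholder echo):

```shell
#!/bin/sh
# Run the given command and flag an exit status of 137
# (128 + 9, i.e. killed by SIGKILL, as the OOM killer does).
run_and_report() {
    "$@"
    status=$?
    if [ "$status" -eq 137 ]; then
        # replace this echo with your notification command of choice
        echo "job was SIGKILLed (possibly by the OOM killer)" >&2
    fi
    return "$status"
}
```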
If using rsyslogd you can match for the oom message (I don't know what shape that has) in the data stream and run a program:
:msg, contains, "oom killer..." ^/bin/myprogram
| Trigger a script when OOM Killer kills a process |
1,390,561,389,000 |
I am running CentOS 7 with XFCE as my DE. I made a bash script, originally stored in ~/bin (I have since deleted it), which I wanted to have run automatically at startup. I somehow succeeded, but I have tried to remove it from my autostart programs, to no avail.
when I run crontab -e, I am given an empty file to edit. It is therefore not started from there.
when I open Session and Startup -> Application autostart, the only programs are: spice vdagent, tracker application miner, tracker metadata extractor, tracker user guides miner, XFCE polkit, Xfsettingsd, redshift, power manager, network.
when I find its PID and look through /proc/PID/, the exe is a link to /usr/bin/xfce4-terminal (note: the script started an xfce4-terminal and ran commands on it, then stayed open after printing its output). I don't know where else I could find useful information about what ran this program. cwd is a link to ~, root is a link to /, the rest are empty files pretty much.
the script is no longer in ~/bin, yet is somehow still being run
I also, at one point, installed devilspie2 to manage that terminal window, and messed around with it. I have since uninstalled it. I wouldn't expect it to have anything to do with it, but I figured I'd specify this.
where else could it be started from? How would I know?
|
If you are running systemd, you can create a service that will start your software, and then use systemctl enable [your-service] to start it at boot. If you're using OpenRC (the old init style), you can use a similar method; just use rc-update add [service] default
| Other than crontab, what other ways can one add programs to run at boot time? |
1,390,561,389,000 |
I have a cronjob that runs php5 wp-cron.php every minute to update my website.
However something happened and I had 30+ instances of it (31 are marked on this one dump of ps aux). It ate up my RAM, caused additional instances to terminate due to lack of memory, and left me unable to SSH into the box.
I can't understand why instances were living for more than 30 minutes; one usually takes a few seconds. The day it happened I had no jobs planned (although maybe the WP cache used it? But I never had a problem before).
What can I do to prevent a cron job from spamming and destroying my memory? Is there a way I can say: do not start if an instance is already alive? And if an instance has been alive for more than 5 minutes, kill it?
Is there a way I can protect myself from something similar happening again?
|
To prevent multiple copies from running, use flock (Linux), lockf (FreeBSD), or shlock (provided with some systems, less reliable). This won't limit execution time, but it ensures only one process is running. Then, if it hangs, you can analyze its state on the fly.
You can limit CPU time of the spawned process using ulimit shell builtin.
To limit wall time, you can write a script which waits for the process to terminate and kills it after a timeout. This is easier in Python/Perl/etc., but the shell also allows it (using trap and/or background children).
Sometimes it's useful to enforce a fixed time between invocations (i.e. from the end of one run to the start of the next) rather than between invocation starts, as cron does. Usual cron implementations don't allow this; you would have to run a special script.
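On Linux both pieces exist as ready-made tools: flock (util-linux) for the single-instance guard and timeout (coreutils) for the wall-clock limit. A sketch; the lock path and crontab line are illustrative, not taken from the question:

```shell
# -n makes flock give up immediately (exit 1) if a previous run
# still holds the lock file, instead of queueing behind it.
flock -n /tmp/wp-cron.lock -c 'echo "got the lock"'
# Combined with a 5-minute wall-clock cap in a crontab entry:
#   * * * * * flock -n /tmp/wp-cron.lock timeout 300 /usr/bin/php5 wp-cron.php
```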
| crontab, instance, memory issues+spamming |
1,390,561,389,000 |
How can i dump files from my CentOS box to Dropbox account? As regular nightly backup task.
|
0 1 * * * cp -a /tmp/files ~username/Dropbox/tmp_backups_$(date +%Y%m%d)
Breakdown: Every day at one am make an archive copy of /tmp/files into a folder with the date as part of the name in 'username's dropbox.
| How can i dump /tmp/files in CentOS to Dropbox using crontab? |
1,390,561,389,000 |
We are creating a new cron job under /etc/cron.d.
This cron job has around 56 lines, and all commands should be executed at the start of each month.
I am looking for suggestions to validate the syntax of the cron job.
I mean, how can I verify if the cron job file is configured correctly without mistakes - is there some command which can verify this?
|
Here's a start to a validator, written in awk, that checks:
for non-commented lines
for lines with enough fields to look like a crontab entry
where the day-of-month value in $3 is not 1 or *
or where the month value is not *
... then print the (offending) line.
On a sample input of:
53 23 * * * root /usr/lib64/sa/sa2 -A
53 23 1 * * root /usr/lib64/sa/sa2 -A
53 23 2 * * root /usr/lib64/sa/sa2 -A
The output is:
53 23 2 * * root /usr/lib64/sa/sa2 -A
This would have to be enhanced to handle month names and ranges (or even @monthly), as the cron syntax allows.
awk '!/^#/ && NF >= 7 && (($3 != 1 && $3 != "*") || $4 != "*")'
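A quick sanity check of the sketch against the sample input above, with the two conditions OR'ed so that a restricted month is also flagged (matching the described output):

```shell
printf '%s\n' \
  '53 23 * * * root /usr/lib64/sa/sa2 -A' \
  '53 23 1 * * root /usr/lib64/sa/sa2 -A' \
  '53 23 2 * * root /usr/lib64/sa/sa2 -A' |
awk '!/^#/ && NF >= 7 && (($3 != 1 && $3 != "*") || $4 != "*")'
# prints only: 53 23 2 * * root /usr/lib64/sa/sa2 -A
```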
| How to validate the syntax in cron job [closed] |
1,390,561,389,000 |
I've just set-up one cron-job in cpanel, but although it seems to be executing the script it doesn't work as intended.
Here is the cron job command in cpanel:
/bin/sh /home/my-username/cronjobs/sedclearmalw.sh
and here is the content of the script :
#!/bin/bash
cd ../public_html/
grep -rl '_0xaae8' . | xargs sed -i 's/var\s_0xaae8.*//g'
I believe the cd command should be correct, as it needs to go back a directory and then enter public_html, however the second command seems to be the problem.
I have tried running it via SSH (bash sedclearmalw.sh); it seems to run for ~15 seconds but it doesn't do its job, as I checked with the following command:
grep -rl '_0xaae8'
and it returns 1 file containing _0xaae8.
Any help will be appreciated; it must be something simple, as I know the above command works fine when executed directly via SSH (not through the script).
|
The problem is the use of a relative path. When cron runs a scheduled job, it uses the home directory of the owner as its working directory; e.g., if I schedule a job as the root user, its working directory will be /root/ (on a CentOS system).
You should specify the absolute path in your cd command. If you’re not running any further commands in your script, you could just run it all in one line:
grep -rl '_0xaae8' /full/path/to/public_html/ | xargs sed -i 's/var\s_0xaae8.*//g'
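If you want to keep the cd, a corrected version of the script could look like the sketch below. The /tmp/public_html directory stands in for /home/my-username/public_html so the demo is self-contained; the malware line is a fabricated sample.

```shell
#!/bin/sh
# sketch of the corrected script with an absolute path;
# /tmp/public_html stands in for the real web root
webroot=/tmp/public_html
mkdir -p "$webroot"
printf 'var _0xaae8 = ["x"];\nkeep this line\n' > "$webroot/infected.js"

cd "$webroot" || exit 1   # absolute path: works no matter where cron starts us
grep -rl '_0xaae8' . | xargs -r sed -i 's/var\s_0xaae8.*//g'
```

The `|| exit 1` guard also makes sure sed is never run against the wrong directory if the cd fails.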
| shell script for cron job |
1,390,561,389,000 |
I'm looking for a method to start an application immediately after boot-up. This application times out after 1 hour. I'd like to then start another instance of this application 1 hour after the initial boot-up (following the timeout of the initial application). I had been thinking that Cron might be configured (cleverly) to accomplish this. Short of resetting the system clock to 00:00:00 at boot-up and then running Cron normally, is there a way to do this? Thanks in advance.
Update: Based on maulinglawn's advice I have gone with the systemd .service method. Specifically, I've put a copy of my python script in /usr/bin/startVideo/startVideo.py. Then created a service file in /lib/systemd/system/startVideo.service. Here's that file,
[Unit]
Description=starts video recorder
[Service]
Type=simple
ExecStart=/usr/local/bin/startVideo/startVideo.py
Restart=always
[Install]
WantedBy=multi-user.target
Finally I ran,
sudo systemctl enable startVideo.service
to register the service.
This will be running on a Raspberry Pi3 wired to a video camera with no monitor or keyboard attached. I'm just looking for the system to record video 24/7 and have the capability to restart itself in the event of a power failure. Other suggestions? Is "WantedBy" configured correctly for this type of application?
Massively grateful for this solution and steerage away from Cron-ville.
|
Given your description, I would start the application with a systemd (since that is what I have on all my machines) .service file.
In that service file, I would point to a script that wraps your application in a simple while loop. Something like this:
#!/bin/sh
while true; do
/path/to/your/application
done
This way, every time your application dies ("times out"), it will restart on its own since the condition for the loop is always true.
This is one approach, and the simplest I can think of, there may be others!
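For reference, a minimal unit file for this approach might look like the following sketch; the unit name, description, and wrapper path are placeholders, not taken from the question:

```ini
# /etc/systemd/system/myapp.service  (name and paths are placeholders)
[Unit]
Description=My application, restarted whenever it exits

[Service]
Type=simple
ExecStart=/usr/local/bin/app-wrapper.sh
Restart=always

[Install]
WantedBy=multi-user.target
```

Note that with Restart=always, systemd itself can do the restarting, so the while-loop wrapper becomes optional; either mechanism alone is enough.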
| Initiating an application after OS bootup, then restarting that application every hour thereafter |
1,390,561,389,000 |
I'd like to run a script (let's say myScript.sh) every time a yum update or yum install command is executed.
So if I run yum update someprogram then myScript.sh should be executed right after that. Is this possible?
Currently I could run a script if I put it in the ".spec" file when I build the RPM to install, but we have a lot of packages and I'd like to have this script run every time any package gets updated. I thought about maybe using a cron job to run it every hour, but that doesn't seem like a good idea.
|
You can create a bash function in .bashrc:
myyumfunction() {
yum update $1
myScript.sh
}
Bash functions that you define in your .bashrc are available within your shell. You can call your function like this:
$ myyumfunction someprogram
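Since the question also mentions yum install, a slightly more general variant (a sketch; the $HOME/myScript.sh location is an assumption) is to wrap yum itself so the follow-up script runs after any install, update, or upgrade:

```shell
# sketch: wrapper function for .bashrc; runs the follow-up script
# after install/update/upgrade subcommands ($HOME/myScript.sh is assumed)
yum() {
    command yum "$@"          # run the real yum with all arguments
    local status=$?
    case "$1" in
        install|update|upgrade) "$HOME/myScript.sh" ;;
    esac
    return $status
}
```

Note this only covers yum runs from your interactive shell; it will not catch updates triggered by other users or automated tools.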
| How do I run a script every time "yum update" is run? |
1,390,561,389,000 |
I want my computer to play some given .mp3 every 2 hours. I want it to run at startup, and be modifiable with a given .conf file. The choice of .mp3 will also be in the .conf file.
The reason is, I need this to remind me to eat regularly.
Now, I'm familiar with C, but I don't know anything about daemonising C programs. I'm also not very familiar with CRON, so if that's an option, I'd need further instruction.
What is the easiest way to do this?
|
Just source the conf file in your bash script and use mpg123 to play the mp3. For example in ~/.music-cron:
TARGET_MP3="$HOME/file.mp3"
And in the bash script:
. "$HOME/.music-cron"
mpg123 "$TARGET_MP3"
Then use cron to schedule the script every two hours and after the system boots up:
0 */2 * * * /path/to/script.sh
@reboot sleep 60 && /path/to/script.sh
The sleep 60 is just in there because you probably don't want it to play immediately at boot, in case other kinds of startup audio notifications are going on at the same time.
Out of morbid curiosity, why are you needing to eat every two hours? This is one of the more interesting questions I've ever seen on here.
EDIT:
As Doug pointed out, a better solution would be to skip the 2-hour cron scheduling and just put echo "$0" | at now + 2 hours as the last line in the script.
| What's the easiest way to set up a 2-hour alarm? |
1,390,561,389,000 |
I have a simple periodic cron task that must run as root. I want to use Zenity (or similar) to display a GUI informational dialog to user 1000 (or logged in user or all users) when the cron task finishes.
I'm looking for a simple, easy, quick solution. I'll adapt to the requirements of such a simple solution.
Here's where I am so far. My bash script works fine if run manually, but when Anacron runs it, nothing happens and I see Gtk-WARNING **: cannot open display in the logs. I hoped it would display my dialog to the user after being run by cron.
I realize (after reading related questions) that cron needs to be decoupled from the GUI. If user 1000 is not logged in, I could take one of several options:
do nothing (possibly acceptable because I want to keep it simple)
display the dialog with completion message to the user when they log in next time (best)
display some other type of notification (NOTE: the computer is a desktop system without a mail server installed)
I found these related questions:
x11 - Anacron job complains "Gtk-WARNING **: cannot open display" - Unix & Linux Stack Exchange
shell - How to pass data outside process for zenity progress? - Unix & Linux Stack Exchange
Example Code (from other question which is essentially the same as mine):
#!/bin/bash
# Backs up the local filesystem on the external HDD
sleep 60
DISPLAY=:0.0
zenity --question --text "Do you want to backup? Be sure to turn on the HDD."
if [ $? -ne 0 ]
then exit 1
fi
*Do backup stuff here*
Error:
(zenity:9917): Gtk-WARNING **: cannot open display:
run-parts: /etc/cron.daily/backup-on-external exited with return code 1
(I'm running Kubuntu, so a KDE solution would be even better than Zenity, but I already installed Zenity, so I can keep using it.)
|
Based on Mel Boyce's answer, here's what worked for me. This is KDE-based. But I also tested it with Zenity and the same approach works. It is basically the same thing Mel Boyce recommended, but with a few tweaks to get it to work for me. I don't delete the updateNotification.txt file, for example. And I don't use printf.
The updater script includes this:
DATE_STAMP=$(date)
echo -e "\t***The software has been updated to version ${LATEST} on ${DATE_STAMP}***"
echo "The software has been updated to version ${LATEST} on ${DATE_STAMP}. Please close and reopen the program if it is currently running. If you have any issues or questions, please write us at [email protected]. Thank you." > /home/$USERN/.updateNotification.txt
Then we have a script running in /home/$USERN/.kde/Autostart/updateNotificationChecker.sh
#!/bin/bash
while true; do
if [[ -s ~/.updateNotification.txt ]]; then
read MSGFE < ~/.updateNotification.txt
kdialog --title 'The software has been updated' --msgbox "$MSGFE"
cat /dev/null > ~/.updateNotification.txt
fi
sleep 300
done
exit 0
| How to display a (zenity/GUI) dialog to the user after a root cron task has completed |
1,356,718,345,000 |
There exists a Git repo on one server; we want to generate doxygen output for it on a different server. The following command works for me but has the downside of sending a mail every time the repo is updated, because Git uses stderr for progress reporting (a quick search via the almighty oracle suggests they consider this behaviour a feature).
59 * * * * cd FQNameOfRepo; git pull 1>/dev/null; make doc-all 1>/dev/null; cp doc/latex/refman.pdf doc/html/
While I could grep through the stderr output of Git or compare it to a known string, this seems wrong. Am I using the wrong Git command? How would this be done properly?
For clarification I still want this command to send a mail if a real error occurs, so simply redirecting stderr won't help.
|
Relying too much on the mailing capabilities of crond may yield various problems. Depending on your crond, it may simply not be flexible enough.
For example, often, as you described, one cannot configure that only an exit status != 0 should trigger the mailing of stdout/stderr. Another issue is that, for example, the Solaris crond has a (relatively) small size limit on the output it captures/mails.
Thus, for such situations, I suggest writing a small helper script that calls the commands and redirects the output to a temporary log-file. It can internally keep track of the exit status of all the programs and if one is != 0 it either:
cat the log-file to stdout
mail it via a command line mail tool
or just output short diagnostics that include the location of the log-file
Something like:
$ cat helper.sh
set -u
set -e
# setup log-file $LOG
# ...
cd FQNameOfRepo
set +e
git pull 1>/dev/null 2>> "$LOG"
r1=$?
make doc-all 1>/dev/null 2>> "$LOG"
r2=$?
cp doc/latex/refman.pdf doc/html/ 2>> "$LOG"
r3=$?
set -e
if [ $r1 -ne 0 -o $r2 -ne 0 -o $r3 -ne 0 ]; then
# do some stuff, print/mail $LOG or something like that, etc.
# ...
exit 23
fi
| Mail cron output only when Git throws a real error |
1,356,718,345,000 |
I am trying to synchronize:
/var/www/CI_MAIN/
according to the changes made in:
/home/coco/workspace/CI_MAIN/
(PDT's workspace).
To do this, I entered
sudo crontab -e in the command line,
and I added the following line to the end of the opened file :
@reboot lsyncd -direct /home/cockroach/workspace/CI_TEST/ /var/www/CI_TEST/
I have also given full privileges to both folders by using sudo chmod -R 777,
but no changes are made when I use my browser to view the pages. This method had been working previously, therefore I think there is something I have not done yet. Could you help me fix the problem? Thank you very much in advance.
|
When writing cron jobs you should use the absolute path to the binary, if you haven't already exported your PATH in the crontab.
Find out where lsyncd is located with:
$ command -v lsyncd
Example output: /bin/lsyncd. Copy the output and replace lsyncd with the absolute path.
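Using the example output /bin/lsyncd, the @reboot entry from the question would become (a sketch):

```
@reboot /bin/lsyncd -direct /home/cockroach/workspace/CI_TEST/ /var/www/CI_TEST/
```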
| How to one way sync two folders on startup |
1,356,718,345,000 |
I have created a script called connection.sh, it is used to automatically connect to my vpn :
#!/bin/bash
nmcli connection up MyVPN
I have already tested it, and it works if I launch it manually, but if I use crontab to launch it to a specific time it seems it doesn't work.
I stored the script in /home/MyUser/Scripts.
So if I type crontab -l I get :
@reboot /home/MyUser/Scripts/connection.sh
Can anybody please help me?
|
It's because your shell uses environment variables that have different values than the ones your cron job gets. Not all of the environment variables differ, but some do. I'm not familiar enough with nmcli, but you have to find out which environment variables it relies on and then set them in your script before you call nmcli. That should solve your problem :)
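A quick way to see exactly what differs (a sketch; the dump path is arbitrary) is to add a temporary crontab entry that writes out cron's environment, then compare that file against the output of env from an interactive shell:

```
* * * * * env > /tmp/cron-env.txt
```

Once you know which variables matter, export them at the top of connection.sh and remove the temporary entry.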
| Crontab and NMCLI |
1,356,718,345,000 |
I'm trying to figure out a good solution to the following problem:
crontab contains some default rules (A,B,C)
I have a setup.sh script that should append rules from a file my.cron to crontab (suppose rules X,Y,Z). The resulting rules should be A,B,C,X,Y,Z
I cannot use crontab my.cron as it overwrites the existing rules
I cannot simply append the rules as the setup.sh script might be invoked multiple times (which would result in the crontab rules looking like A,B,C,X,Y,Z,X,Y,Z,X,Y,Z
Some solutions I could think of are:
Run a grep command to check whether or not a rule in my.cron exists in the crontab or not. If not, append it.
Create a new user for whom no existing rules are present so that I can overwrite the crontab everytime.
I'm wondering if there's a more elegant way to solve this. From what I can tell, there's no way to tell crontab to refer to a set of files to collate the rules (without duplication or overwriting)
|
From what I can tell, there's no way to tell crontab to refer to a set of files to collate the rules (without duplication or overwriting)
There is, but not for user crontabs. Use individual files in /etc/cron.d for each cronjob. Then you can safely add, remove or update specific cronjobs without worrying about others.
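Unlike user crontabs, files in /etc/cron.d carry an extra username field between the schedule and the command. A sketch of one such drop-in file (all names are placeholders):

```
# /etc/cron.d/myapp-rules  -- one self-contained job, safe to add or remove
0 3 * * * someuser /usr/local/bin/nightly-task.sh
```

Your setup.sh can then simply install or overwrite this one file without ever touching the A,B,C rules living elsewhere.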
| Update crontab rules without overwriting or duplicating |
1,356,718,345,000 |
I don't know where I can find more information about crontab, so I ask here.
Is crontab multithreaded? How does it work?
|
Probably not. All cron has to do (to express it simplified) is wait until it is time to run one job or another, fork a process to run that job, and periodically check whether the job has finished in order to clean it up.
Multithreading could be used for this waiting, but I think that would be overkill. With the wait()/waitpid() family of functions, it is possible to check on all children at once (which would be good for kindergarten teachers :-D). And you can check without blocking, so you can also keep watching for the time to execute the next job. And SIGCHLD exists as well.
| Is crontab multithread? [closed] |
1,356,718,345,000 |
I'd like to execute a task at a given time, once.
I know I can just use crontab -e to add it there, but I'd like to do that through a shell script. crontab -e gives me some temporary file to edit, which is unpredictable.
Also, once the task executes I'd like to remove it from the crontab again, to make sure it's not leaving a mess behind.
So is there a standardized way to add/remove entries from my personal crontab through a script?
I know I could roll my own system: have a script, run every minute, that executes and deletes .sh files from a folder, and have my "addtask" script create .sh files in that folder. But before rolling my own, I wonder if there is something already.
If it matters, I use Mac OS X and zsh but I wanted to use something that works on other *nixes as well.
|
I think the at command is what you are after.
E.g.:
echo "mail -s Test mstumm < /etc/group" | at 16:30
This will e-mail you a copy of /etc/group at 4:30 PM.
You can read more about at here: http://www.softpanorama.org/Utilities/at.shtml
| Add a one-off scheduled task through a shell script? |
1,356,718,345,000 |
I have a bash script which is run every 10m by cron. The script performs an expensive calculation for some value (say variable x=value). I need to "cache" this value for 2-3 hours. What are possible solutions to this problem?
I tried memcached but it doesn't seem to play well with bash.
|
Write a second script that does the actual calculation and saves the result to a file:
# calculate $curval
printf '%s' "$curval" > /var/foo/value.txt
Schedule it with cron to run every 2-3 hours.
In the "every 10 minutes" script, simply read the current value from the file:
curval=$(< /var/foo/value.txt)
A nice refinement is to call the calculation script from the "every 10 minutes" script if the value.txt file doesn't exist yet. You could even make it add the crontab entry if it's missing.
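Putting the pieces together, the "every 10 minutes" script with that refinement could look like the sketch below. /tmp/cached-value.txt stands in for /var/foo/value.txt, and calc() stands in for the expensive calculation so the example is self-contained:

```shell
#!/bin/sh
# sketch: read the cached value, regenerating it only when the
# cache file is missing or empty
cachefile=/tmp/cached-value.txt

calc() {
    # stands in for the expensive calculation (normally its own script)
    printf '%s' 42 > "$cachefile"
}

[ -s "$cachefile" ] || calc       # refresh the cache if it is absent
curval=$(cat "$cachefile")
echo "cached value: $curval"
```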
| How can I "cache" a variable in Bash? |
1,356,718,345,000 |
I accidentally typed this command without any arguments and hit enter, and it seems the terminal is running something, so my question is: what does crontab do when I give it no arguments?
[root@localhost ~]# crontab
|
crontab with no arguments reads a new crontab from standard input, validates it, and then replaces the current user’s crontab with it.
To get out of your situation, without losing your existing crontab (if any), either kill crontab with Ctrl+C, or enter an invalid cron job definition (foo) and press Ctrl+D:
foo
# Now press Ctrl+D
"-":1: bad minute
errors in crontab file, can't install.
| What does crontab do when I give no argument? |
1,356,718,345,000 |
I am using RHEL 5.4
I killed the cron daemon accidentally. I wanted to stop a cron task, didn't know how to do it, and ended up killing the cron daemon itself. How do I start it again?
|
Depending on how your exact distribution is set up, /etc/init.d/cron start or variations thereupon might do the trick.
| How to start the cron daemon |
1,356,718,345,000 |
I am running an old version of Ubuntu (14.04 LTS) because my video card is no longer supported by the kernel in newer releases of Ubuntu.
I discovered in my cron.daily directory a script called popularity-contest which is "phoning home" every day. Can I remove it safely?
This link says it was reporting to the developers of Ubuntu which packages I am using. popularity-contest_1.57ubuntu1_all.deb
|
Yes, you can disable this feature safely:
sudo apt-get purge popularity-contest
This will delete the cron script and everything else related to the package, apart from its log files.
(Note that your system might not be configured to phone home — the package needs to be installed, and it needs to be set up to phone home. Look for PARTICIPATE in /etc/popularity-contest.conf; if it’s not yes, you’re not participating.)
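Before purging, you can check the participation setting with something like this sketch (the grep pattern is a loose match; on a box without the package the conf file simply won't exist, which counts as not participating):

```shell
#!/bin/sh
# sketch: report whether this machine is actually set to phone home
conf=/etc/popularity-contest.conf
if grep -q '^PARTICIPATE=.*yes' "$conf" 2>/dev/null; then
    participating=yes
else
    participating=no
fi
echo "participating: $participating"
```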
| Popularity Contest in Ubuntu |
1,356,718,345,000 |
Using the at command I can run a script once in the future:
at now + 1 minutes -f ~/script.sh
or run a string command now then return the result in the future:
echo "xyz" >> ~/testtest.txt | at now + 1 minute
How can I instead pass a string command (as per example 2) that runs in the future and not now (as per example 1)? E.g.
at now + 1 minutes -SOMEFLAG 'echo "xyz" >> ~/testtest.txt'
Thanks
|
You could single quote the command and pass it to at as is:
echo 'echo "xyz" >> ~/testtest.txt' | at now + 1 minutes
Alternatively, if you're using bash or another script that supports process substitution, you can do:
at now + 1 minutes < <(echo 'echo "xyz" >> ~/testtest.txt')
This basically passes a filename to at just like passing it a file with commands.
However, using a heredoc as suggested by Kusalananda should be more portable as it doesn't depend on your shell supporting process substitution.
| Pass the "at" command a string command instead of a path to a script whilst having it run immediately |
1,356,718,345,000 |
I'd like to run a pretty simple script (check whether a dir named x exists in dir y; if so, move it to dir z. x will only appear about once a day) every 15 seconds or so (one minute divided into 4*15 seconds). Does running cron jobs like that (where the scripts they run are not resource intensive) have a non-negligible negative impact on performance, stability, or anything else?
|
To answer the question in the title: no, most systems do not experience much of a burden from running cron jobs. Many of the automated tasks that occur on a modern Unix system are kicked off by cron jobs.
Things such as rotating logs and regenerating the index files used by man are all kicked off via cron jobs.
If you're curious take a look in any of the directories under /etc/cron*. There are bound to be examples there which will shed light on how these things are accomplished on your system.
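As an aside on the 15-second interval from the question: cron's finest granularity is one minute, so the usual workaround (a sketch, not from the answer above; the script path is a placeholder) is to stagger several runs of the same job with sleep:

```
* * * * * /path/to/move-x.sh
* * * * * sleep 15; /path/to/move-x.sh
* * * * * sleep 30; /path/to/move-x.sh
* * * * * sleep 45; /path/to/move-x.sh
```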
| Does cron jobs per se represent a significant burden on a modern system? |
1,356,718,345,000 |
Server is Debian 6.0. In /etc/aliases I have;
root: [email protected]
This is so that root emails get sent directly to me. Today, a monthly script I have placed into /etc/cron.monthly has emailed me its output. It's a fairly simple script which looks like this:
#!/bin/bash
cd /a/directory
rm -rf ./*
wget http://www.site.com/fresh-copy.zip
unzip fresh-copy.zip
rm fresh-copy.zip
The email I have received is below;
**Subject**: Cron <root@myserver> test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
**From**: [email protected]
**Body**:
/etc/cron.monthly/speedtest:
--2013-03-01 06:52:01-- http://www.site.com/fresh-copy.zip
Resolving www.site.com... 11.22.33.44
Connecting to www.site.com|11.22.33.44|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 99763924 (95M) [application/zip]
Saving to: `fresh-copy.zip'
0K .......... .......... .......... .......... .......... 0% 11.7M 8s
50K .......... .......... .......... .......... .......... 0% 11.3M 8s
100K .......... .......... .......... .......... .......... 0% 11.4M 8s
*CUT OUT FOR BREVITY*
97350K .......... .......... .......... .......... .......... 99% 16.9M 0s
97400K .......... .......... ..... 100% 11.6M=9.7s
2013-03-01 06:52:11 (9.78 MB/s) - `fresh-copy.zip' saved [99763924/99763924]
Archive: fresh-copy.zip
inflating: file1.ext
inflating: file2.ext
inflating: file3.ext
inflating: file4.ext
inflating: file5.ext
The only output in this email is from the wget and unzip commands. I can edit the script and place > /dev/null at the end of those two lines, but is that really the best way to do this? If I add more commands to the script that produce output, I will always have to append > /dev/null to each line. Is there a way I can disable email notification of output from this cron script?
|
An easier method: instead of adding the script to the cron.monthly directory, add it to an old-fashioned crontab, where you can specify on the crontab line itself that you want output to go to /dev/null. Like this:
crontab -e
to edit the crontab. Then add the following line:
@monthly /path/to/script > /dev/null
This will mean that STDOUT gets redirected to /dev/null, but STDERR will still end up in an email. If you don't want to get mails on error either, the line should look like this:
@monthly /path/to/script > /dev/null 2>&1
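If the script must stay in /etc/cron.monthly, another option (a sketch) is a single redirect at the top of the script instead of > /dev/null on every line. Stderr is left untouched, so real errors still produce mail:

```shell
#!/bin/sh
# one exec redirect silences stdout for every command that follows;
# stderr still reaches cron and ends up in the mail
exec >/dev/null
echo "wget/unzip progress would be discarded here"
work_done=yes    # the rest of the script runs normally
```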
| How to stop monthly cron output email |
1,356,718,345,000 |
In cron's manpage (cronie)
-p Allows Cron to accept any user set crontables.
I learned that the cron daemon will implicitly search for and run the cron jobs defined in /etc/crontab, /etc/cron.d/* and /var/spool/cron/crontabs/*.
What is -p used for?
Is it to explicitly tell cron to search for and run the cron jobs defined in a crontab file which is stored in some place other than those mentioned above?
Or is it to copy a crontab file stored in some place other than those mentioned above to one of the places mentioned above?
Does the cron on Debian or its derivatives have -p option? I don't find -p on the manpage of cron on Ubuntu.
Thanks.
|
The CAVEATS section of the cronie's cron(8) man page says (emphasis mine):
All crontab files have to be regular files or symlinks to regular
files, they must not be executable or writable for anyone else but
the owner. This requirement can be overridden by using the -p option
on the crond command line.
So it is in fact documented on the man page, although not in the most obvious location.
| What is `-p` used for `cron`? |
1,356,718,345,000 |
I'm going to make a bash script that is executed at boot and runs periodically.
I want it user-configurable, so that a user can add a cron job 0 * * * * my_script by running my_script add 0 * * * *, list jobs by my_script list, and remove by my_script remove job_number where the job number is listed in the output of my_script list command.
If I could manage crontab files separately, this would be easily achieved.
However, it seems crontab is only one file per user (if not, please let me know). Directly dealing with that crontab file is a bad solution, of course.
So what is the proper way to handle the cron jobs? Or, is there a better way to handle periodically running scripts?
Conditions:
Any user should be able to run it, whether privileged or not.
No dependencies.
Additional question:
Since I couldn't find any proper way to manage periodically running scripts, I thought what I might be doing wrong. In the sense of software design, is it not practical to implement the interface to manage the software's scheduled tasks? Should I leave all schedule managements to users?
|
Using cron is the correct way to schedule periodic running of tasks on most Unix systems. Using a personal crontab is the most convenient way for a user to schedule their own tasks. System tasks may be scheduled by root (not using the script below!) in the system crontab, which usually has an ever so slightly different format (an extra field with a username).
Here's a simple script for you. Any user may use this to manage their own personal crontab.
It doesn't do any type of validation of its input except that it will complain if you give it too few arguments. It is therefore completely possible to add improperly formatted crontab entries.
The remove sub-command takes a line number and will remove what's on that line in the crontab, regardless of what that is. The number is passed, unsanitized, directly to sed.
The crontab entry, when you add one, has to be quoted. This affects how you must handle quotes inside the crontab entry itself.
Most of those things should be relatively easy for you to fix.
#!/bin/sh
usage () {
cat <<USAGE_END
Usage:
$0 add "job-spec"
$0 list
$0 remove "job-spec-lineno"
USAGE_END
}
if [ -z "$1" ]; then
usage >&2
exit 1
fi
case "$1" in
add)
if [ -z "$2" ]; then
usage >&2
exit 1
fi
tmpfile=$(mktemp)
crontab -l >"$tmpfile"
printf '%s\n' "$2" >>"$tmpfile"
crontab "$tmpfile" && rm -f "$tmpfile"
;;
list)
crontab -l | cat -n
;;
remove)
if [ -z "$2" ]; then
usage >&2
exit 1
fi
tmpfile=$(mktemp)
crontab -l | sed -e "$2d" >"$tmpfile"
crontab "$tmpfile" && rm -f "$tmpfile"
;;
*)
usage >&2
exit 1
esac
Example of use:
$ ./script
Usage:
./script add "job-spec"
./script list
./script remove "job-spec-lineno"
$ ./script list
1 */15 * * * * /bin/date >>"$HOME"/.fetchmail.log
2 @hourly /usr/bin/newsyslog -r -f "$HOME/.newsyslog.conf"
3 @reboot /usr/local/bin/fetchmail
$ ./script add "0 15 * * * echo 'hello world!'"
$ ./script list
1 */15 * * * * /bin/date >>"$HOME"/.fetchmail.log
2 @hourly /usr/bin/newsyslog -r -f "$HOME/.newsyslog.conf"
3 @reboot /usr/local/bin/fetchmail
4 0 15 * * * echo 'hello world!'
$ ./script remove 4
$ ./script list
1 */15 * * * * /bin/date >>"$HOME"/.fetchmail.log
2 @hourly /usr/bin/newsyslog -r -f "$HOME/.newsyslog.conf"
3 @reboot /usr/local/bin/fetchmail
| How do I add/remove cron jobs by script? |
1,356,718,345,000 |
I need to create a cron job that runs a bash script in the background which does not end unless killed.
The bash script starts a process that should keep running for about 28 hours; then I need another cron job to kill it.
First cron runs every day at 0:00AM, starts the process.
Second cron runs at 4:00AM and has to kill the process started the day before, leaving the one for the current day to run.
From what I've searched, I should store the PID of the process in a file and then have the second cron job access it, but how and where? In the cron entry or in the bash script? Considering the process started by the bash script does not end until killed, will the commands after it ever execute?
EDIT: Ipor Sircer's solution solves the particular problem I have, but I'd still like to learn how to export the PID to a file which another cron job can access.
|
Use the timeout command; it is much easier:
0 0 * * * timeout 28h /home/script.sh
PS. Remember to use the full path in the crontab.
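For the PID-file part of the question (how one cron job can kill a process started by another), here is a sketch of the pattern. The paths are illustrative, and sleep 60 stands in for the real long-running process; in reality the two halves live in the midnight and 4:00 AM cron scripts respectively:

```shell
#!/bin/sh
pidfile=/tmp/nightly.pid

# midnight job: start the long-running process and record its PID
sleep 60 &
echo $! > "$pidfile"

# 4:00 AM job (shown in the same script for the demo):
# read the PID back, kill that process, and clean up
kill "$(cat "$pidfile")" && rm -f "$pidfile"
```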
| CRON job that kills a process started by previous CRON |
1,356,718,345,000 |
My system is centos 6.5,
I wrote a simple bash script to check if MySQL has crashed and, if so, restart the service. I put it in /home/myspace/mysql.sh, ran chown root:root /home/myspace/mysql.sh, and run it every minute from crontab.
#!/bin/bash
mysql --host="localhost" --user="root" --password="password" --database="test" --execute="select id from test limit 1"
if [ $? -eq 0 ]; then echo "";
else
/usr/bin/sudo service mysql stop
pkill mysql
/usr/bin/sudo service mysql start
echo "error $(date)" >> /home/myspace/restart_log.txt
fi
Now I have 2 questions.
Why is the if ... else ... in my code not working? I mean the MySQL server has no problem, it can execute "select id from test limit 1" and get a result, but the script still runs the code in the else case.
In /var/log/secure, it shows root : sorry, you must have a tty to run sudo ; TTY=unknown ; PWD=/root ; USER=root ; COMMAND=/sbin/service mysql stop. root : sorry, you must have a tty to run sudo ; TTY=unknown ; PWD=/root ; USER=root ; COMMAND=/sbin/service mysql start. And the mysql server is down.
UPDATE:
Now I tried the code @Anthon given:
#!/bin/bash
RESULT=`mysql --host="localhost" --user="root" --password="password" --database="test" --execute="select browser from test limit 1"`
if [ $? -eq 0 ]; then
echo "mysql select ok"
else
/usr/bin/echo "mysql select failed"
sudo service mysql stop
pkill mysql
/usr/bin/sudo service mysql start
echo "error $(date)" >> /home/myspace/restart_log.txt
fi
It works well when run over SSH (returns mysql select ok),
but not when run as a cron job (every minute it writes error to /home/myspace/restart_log.txt and the MySQL server never starts).
Is this problem caused by PATH? Or privileges? How do I solve it? Thanks.
Solved
Thanks to all; I finally solved it. @Sigi's method may be good, but it didn't work in my situation. @Anthon's answer is written better than mine, but still didn't work. @Emmanuel's answer was the closest. After a long time testing, I'm sharing the final working code with everyone.
#!/bin/bash
PATH=/bin:/usr/bin:/sbin:/usr/local/bin
# cron jobs need the PATH spelled out explicitly;
# mysql lives under /usr/local/bin here
mysql --host="localhost" --user="root" --password="password" --database="test" --execute="select id from test limit 1"
if [ $? -eq 0 ]; then echo "";
# mysql is running fine, nothing to do
else
sudo /etc/init.d/mysql stop
pkill /usr/local/bin/mysql
# It should be pkill /usr/local/bin/mysql, not pkill mysql; getting this wrong causes the code below not to run
sudo /etc/init.d/mysql start
sudo /etc/init.d/httpd restart
# Strange behavior: after the mysql restart, apache dies, so httpd must be restarted too
echo "error $(date)" >> /home/myspace/restart_log.txt
fi
|
Force the PATH in your script; cron may not provide the same path as the one you have in a terminal session.
EDIT:
In a terminal get your PATH:
env | grep "^PATH="
PATH=/sbin:/usr/sbin:/usr/local/sbin:... (your value will differ)
Cut and paste that line into your script just below the #!/bin/bash
#!/bin/bash
PATH=/sbin:/usr/sbin:/usr/local/sbin: (put your path here)
export PATH
(that's just an example)
There may be other variables to add to your script; look for variables whose names contain the string 'MYSQL' with env | grep MYSQL and set | grep MYSQL
| bash restart mysql when it gone run well in SSH part, but not in cronjob |
1,356,718,345,000 |
I have (from a tutorial) a cron job which makes a weekly backup of my website, and it is working fine.
#!/bin/bash
NOW=$(date +"%Y-%m-%d-%H%M")
FILE="mysite.com.$NOW.tar"
BACKUP_DIR="/home/user/backups/"
WWW_DIR="/home/user/public_html/"
DB_USER="my_site_db_username"
DB_PASS="password"
DB_NAME="mysite_dn_name"
DB_FILE="mysite.com.$NOW.sql"
WWW_TRANSFORM='s,^home/username/public_html,www,'
DB_TRANSFORM='s,^home/username/backups,database,'
tar -cvf $BACKUP_DIR/$FILE --transform $WWW_TRANSFORM $WWW_DIR
mysqldump -u$DB_USER -p$DB_PASS $DB_NAME > $BACKUP_DIR/$DB_FILE
tar --append --file=$BACKUP_DIR/$FILE --transform $DB_TRANSFORM $BACKUP_DIR/$DB_FILE
rm $BACKUP_DIR/$DB_FILE
gzip -9 $BACKUP_DIR/$FILE
My question is: can someone help me send myself an email when the backup is done?
I'm using bash for the first time and I'm not sure what I'm doing.
|
Edit your crontab file and add:
MAILTO=your.email@your_provider.com
and at the end of the script add:
echo "backup finished: $FILE"
cron normally sends any output from the command it runs by email. Your script seems to be running silently, hence no email.
If you don't add the MAILTO, the mail will go to the user running the crontab, IMHO it is better to make that explicit.
| Cron jobs and mail notification |
1,356,718,345,000 |
I'm calling cpulimit from cron:
00 16 * * * /usr/bin/cpulimit --limit 20 /bin/sh /media/storage/sqlbackup/backups.sh
When the job kicks off, the CPU spikes and alerts as it always has, with no identifiable limiting taking place. The job itself iterates over a directory of many subdirectories and performs an rsync each time, which I believe spawns rsync child processes (top shows a pid for the running rsync, which after a few minutes changes to a different pid).
I'm unsure how to use cpulimit to effectively limit the CPU this process consumes.
It might be important to keep in mind this is a VM with 2G RAM and 1vCPU.
|
By default cpulimit doesn't limit child processes, so the rsync is not being limited at all. If you are running a recent enough version of cpulimit you should be able to use the --include-children (or -i) option. (See also this answer.)
$ cpulimit -h
Usage: cpulimit [OPTIONS...] TARGET
OPTIONS
-l, --limit=N percentage of cpu allowed from 0 to 400 (required)
-v, --verbose show control statistics
-z, --lazy exit if there is no target process, or if it dies
-i, --include-children limit also the children processes
-h, --help display this help and exit
TARGET must be exactly one of these:
-p, --pid=N pid of the process (implies -z)
-e, --exe=FILE name of the executable program file or path name
COMMAND [ARGS] run this command and limit it (implies -z)
Report bugs to <[email protected]>.
That would change your cron entry to:
00 16 * * * /usr/bin/cpulimit --include-children --limit 20 /bin/sh /media/storage/sqlbackup/backups.sh
EDIT: As the OP answered themselves, it would work to apply cpulimit to the rsync commands within the script, but that would not ensure that the script is nice while doing other work. For example, if the script has to process a massive directory, it could bog down the system and cause a CPU spike and an alert.
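For comparison, the per-rsync variant mentioned above could be sketched like this inside backups.sh (the directory layout and destination are hypothetical; only the rsync processes are throttled, the surrounding shell logic still runs unlimited):

```
# hypothetical fragment of backups.sh: wrap each rsync in cpulimit
for dir in /media/storage/data/*/; do
    /usr/bin/cpulimit --limit 20 rsync -a "$dir" /backup/dest/
done
```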
| cpulimit is not actually limiting CPU usage |
1,356,718,345,000 |
A common way to log crontab errors looks like below:
1 */8 * * * sh /pseudo_path/test.sh 2>> my_err_log
It's concise, but it records neither the time the error occurred nor the path of the script file.
So I wrote an error record function:
PROGNAME=$(readlink -f "$0")
SCRIPT_ERR_LOG_PATH="/pseudo_path/script_err_log"
error_exit()
{
timestamp="$(date +%Y%m%d-%H:%M:%S)"
x_info="Error_${PROGNAME}:Line_${1:-null}_"
zeta="$x_info$timestamp"
echo "$zeta" >> "$SCRIPT_ERR_LOG_PATH"
exit 1
}
This function can log the time the error occurred, together with the absolute path of the script. But the downside is that I have to add || error_exit $LINENO at every line of my script to make it work. With Vim's bulk substitution it could be much easier, but it still looks like a clumsy solution.
So, is there a smarter or more efficient way to do the same task?
|
Depending on the quantity of the logging information you're expecting to produce, it might be worth using the standard logger tool to write it to the user syslog in /var/log:
1 */8 * * * /path/to/myprog 2>&1 | logger -p user.debug -t 'myprog'
Here is an example of the output written to /var/log/debug on my Debian-based system:
Jul 31 00:17:09 myserver myprog: test message with user.debug
There are various facility/level pairs available for use. You might want to consider user.notice or user.info or user.debug. Just be aware that some of these may also get written to /var/log/messages and /var/log/syslog.
If you want to differentiate stdout and stderr in your cron job, sending only stderr to the logger, you can use a construct like this, which I'm sure others will improve upon:
1 */8 * * * ( /path/to/myprog 2>&1 1>&3 | logger -p user.debug -t 'myprog' ) 3>&1
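The redirection dance in that last line can be sanity-checked without cron or logger; a minimal sketch using sed as a stand-in for logger:

```shell
# stderr is routed into the pipe (tagged by sed, standing in for logger),
# while stdout escapes the pipe via fd 3 and is captured untouched
result=$( ( sh -c 'echo OUT; echo ERR >&2' 2>&1 1>&3 | sed 's/^/LOGGED: /' ) 3>&1 )
echo "$result"
```

Running it shows OUT passing through unchanged while only ERR picks up the LOGGED: tag, confirming that just stderr went through the pipe.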
| Log crontab shell script errors with the time the error occurred? |