I have a little VPS I run Apache and a Minecraft server on. I don't ever turn it off, but should I restart it for some reason, IPTables blocks most of my ports, including port 80. I've tried so many different suggestions on fixing this, but with no luck. Also, since the provider is OVH, the support is... lacking.

So, I've created a workaround, which I'm happy with. I created a simple shell script file to open certain ports I need opened on restart (80 and 25565 for now). The important ones such as 21 and 22 are not affected on restart. The script looks like this:

iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p udp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 25565 -j ACCEPT
iptables -I INPUT -p udp --dport 25565 -j ACCEPT
/sbin/service iptables save

When I manually run it by typing /iptdef.sh, it runs fine: the ports become open and it's all good. Of course, it's not practical having to remember to run it every time I restart the server, so I added a crontab. The problem is, it doesn't work/run. This is my crontab file:

*/5 * * * * /backup2.sh
*/55 * * * * /backup3.sh
@reboot /iptdef.sh
* * * * * /iptdef.sh

The first two lines work; they are just simple scripts that make a backup of a folder for me. The last two lines are what's not working. Is there a chance that perhaps it's not possible to run iptables commands from a cron? It sounds silly, but I can't see any other reason for it not to work. The scripts have the correct permissions.
It's because cron forcibly sets PATH to /usr/bin:/bin. You need to invoke iptables as /sbin/iptables or add PATH=/usr/sbin:/sbin:/usr/bin:/bin in your script or crontab. See crontab(5) for details.
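For instance, keeping the paths from the question, a crontab along these lines lets both the @reboot job and the every-minute job find iptables without modifying the script:

```
PATH=/usr/sbin:/sbin:/usr/bin:/bin

@reboot /iptdef.sh
```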
Not all shell scripts work with crontab
My Ubuntu VM does not run cron jobs with a TTY (not even when I log in and run the command from the user's env). Because of this, cron can't run tmux or screen, preventing programs (specifically rtorrent in this case) from being run in the background in a simple and attachable way. What is the best approach to running a program like rtorrent on boot while making it easy to attach, detach, and kill the process from any terminal? I assume just running the equivalent of rtorrent & should do it, but this is not as convenient as screen/tmux.
If you use the -d option with tmux new, it won't attach to or require a tty. From man tmux:

new-session [-AdDEP] [-c start-directory] [-F format] [-n window-name] [-s session-name] [-t group-name] [-x width] [-y height] [shell-command]
(alias: new)
Create a new session with name session-name. The new session is attached to the current terminal unless -d is given. [...]

For example:

tmux new -d -s rtorrent rtorrent

That creates a new tmux session called "rtorrent" and runs rtorrent inside it. You will probably need to configure ~/.tmux.conf, and run a script to start rtorrent (rather than just the bare rtorrent command) in order to set up the run environment correctly. The user who owns the cron job can attach to the session at any time with:

tmux attach -t rtorrent

The equivalent for Screen is screen -d -m -S rtorrent rtorrent from the cron job, and screen -S rtorrent -rd to attach later.
What is the best way to attach to a cron program without TTY
I have multiple cron job entries configured under a single account:

0 0 * * * /foo/foo.sh
0 2 * * * /foo/foo2.sh
0 4 * * * /foo/foo3.sh

Right now, if any of these encounter an error, an email is sent to the user account. Can I configure cron to send a notification to a different email address depending on which entry encounters the error? For example, if an error occurs while running /foo/foo.sh, send to [email protected]; if an error occurs while running /foo/foo2.sh, send to [email protected]. I know I can set [email protected], but I think that's for ALL cron entries (for the account I'm logged in as)... I don't think that's a per-entry setting. Hopefully this makes sense. Thanks. :-)
It is perfectly OK to use MAILTO= per entry, i.e.:

MAILTO="address1"
0 0 * * * /foo/foo.sh

MAILTO="address2"
0 2 * * * /foo/foo2.sh

MAILTO="address3"
0 4 * * * /foo/foo3.sh

MAILTO="address4"
0 6 * * * /foo/foo4.sh
0 8 * * * /foo/foo5.sh
0 10 * * * /foo/foo6.sh

And so on. Cheers,
Can I configure error notifications on a specific cronjob entry to go to a different email address?
On my Raspberry I really don't need cron and PAM logging, and I want to have less I/O to make the SD card life a little longer. I have already removed cron with the cron.none in the second line; I'm currently testing the authpriv.none:

auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none,cron.none -/var/log/syslog
#cron.* /var/log/cron.log
daemon.* -/var/log/daemon.log
kern.* -/var/log/kern.log
lpr.* -/var/log/lpr.log
mail.* -/var/log/mail.log
user.* -/var/log/user.log

Basically, all I want to log is fatals, hardware stuff, kernel/dmesg, and failed logins. What else can I improve?
This is not the answer you are looking for, because I am going to try to dissuade you from this (which is actually the only rational answer).

"On my raspberry I really don't need crons and pam logging and I want to have less i/o to make the SD card life a little longer.."

If you think cron is truly doing excessive logging, then you should consider what cron is doing and how often, and tweak that. Point being, if you don't care much about what it is doing, then why is it doing it?

WRT SD cards, logging is not significant enough to worry about. As in: totally insignificant; you are wasting your time thinking about it. SD cards use wear leveling to help preserve themselves: they don't suffer the effects of fragmentation (i.e. fragmentation is irrelevant to performance), and when you write to disk, the data is written to the least used part of the card, wherever that is. This transcends partition boundaries, so if you have a 2 GB partition on a 16 GB card, the partition is not limited to a 2 GB wide block of physical addresses: it is a dynamic 2 GB whose physical addresses will be a non-contiguous, ever-changing list encompassing the entire card.

If your system writes a MB of logs a day (you can check this by sending a copy of everything to one file, which is often what /var/log/syslog is), and you have a 4 GB card, it will take 4000 days before such a cycle has written to the entire card just once. The actual lifespan of an SD card might be as much as 100,000 write cycles [but see comments]. So all that logging will wear the card out in 4000 * 100000 / 365 = ~1 million years.

Do you see now why reducing logging by 25%, or 50%, or even 99%, will be completely irrelevant? Even if the card has an incredibly bad lifespan in terms of write cycles -- say, 100 -- you will still get centuries of logging out of it. For a more in-depth test demonstration of this principle, see here.

"Basically, all I want to log is fatals, hardware stuff, kernel/dmesg, and failed logins"

Unless you enable "debug" level logging, by far the thing that will write the most to your logs is when something has gone really wrong, and generally those entries are going to go in as high priority unless you just disable logging entirely. For example, I doubt that under normal circumstances your pi, using the default Raspbian config, writes 1 MB a day of logs, even if it is on 24/7. Let's round it up to that. Now say a defective kernel module writes the same 100-byte "emergency" panic message 50 times per second to syslog on an unattended system for one week: 100 * 50 * 60 * 60 * 24 * 7 = ~30 MB. Consider that in relation to the aforementioned lifetime of the card, and the fact that you probably want to get the message. Logging that haywire is very unusual, BTW.

Logging is good. The logs are your friends. If you want to tinker with the rsyslog configuration, your time will be better spent adding more, not less.
Pimp rsyslogd to have less i/o (cron, pam,...) and less logging
When I do crontab -e, are the changes applied immediately when I save the file, or do I have to exit vim for it to be applied?
It waits until you exit the editor. From the manpage:

The -e option is used to edit the current crontab using the editor specified by the VISUAL or EDITOR environment variables. After you exit from the editor, the modified crontab will be installed automatically.

You can also tell by just watching stdout; it waits until you exit the editor and then outputs:

crontab: installing new crontab
Are changes in crontab applied when the file is saved, or when the editor is closed?
I am trying to use the following commands in a shell script... any suggestions on how to do it the correct way?

[root@testserver ~]# crontab -u oracle -e >> 0 0 * * * /usr/local/scrips/setup.sh
crontab: usage error: no arguments permitted after this option

Usage:
 crontab [options] file
 crontab [options]
 crontab -n [hostname]

Options:
 -u <user>  define user
 -e         edit user's crontab
 -l         list user's crontab
 -r         delete user's crontab
 -i         prompt before deleting
 -n <host>  set host in cluster to run users' crontabs
 -c         get host in cluster to run users' crontabs
 -s         selinux context
 -x <mask>  enable debugging

Default operation is replace, per 1003.2
The -e switch will make crontab interactive, which isn't the desired behaviour. I suggest you use the crontab -u user file syntax. Below is an example:

root@c:~# crontab -l -u user
no crontab for user
root@c:~# echo "10 10 * * * /bin/true" >> to_install
root@c:~# crontab -u user to_install
root@c:~# crontab -l -u user
10 10 * * * /bin/true
root@c:~# crontab -l -u user > temp
root@c:~# echo "12 12 * * * /bin/false" >> temp
root@c:~# crontab -u user temp
root@c:~# crontab -l -u user
10 10 * * * /bin/true
12 12 * * * /bin/false
As root, how can I create an entry in the crontab of other users
I am trying to set up a cron job by creating a file /etc/cron.d/myjob:

55 * * * * t echo hello > /tmp/cron.log && cd /tmp/test/ && pwd > /tmp/cron.log

I tried to verify if the job is scheduled successfully by using the redirections. I created the file at 21:50, and 5 minutes later, i.e. at 21:55, there is still no /tmp/cron.log created. I was wondering why?

I specify the user to be t for the job in /etc/cron.d/myjob. But whose cron job is it? I am not sure about it, so I tried the two commands below. Neither shows the job I created.

$ crontab -l
no crontab for t
$ sudo crontab -l
[sudo] password for t:
no crontab for root

My /etc/crontab doesn't explicitly read from the files under /etc/cron.d/. See below. Can that be the reason that my cron job isn't running? Thanks.

# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
#

Update: Thanks to Jeff, changing the permissions on /etc/cron.d/myjob to rw-r--r-- solves the problem.

Why does the original rw-rw-r-- permission of my job file not work, but rw-r--r-- is required? Must the files under /etc/cron.d/ and /etc/cron.daily/ have the same permissions rw-r--r--? Why does /etc/crontab not need to explicitly read the files under /etc/cron.d/?
Q1: Why does the original rw-rw-r-- permission of my job file not work?

From man cron:

/etc/crontab and the files in /etc/cron.d must be owned by root, and must not be group- or other-writable.

So, the files inside /etc/cron.d should be:

chown root:root /etc/cron.d/*
chmod go-wx /etc/cron.d/*
chmod -x /etc/cron.d/*

Q2: Why does /etc/crontab not need to explicitly read the files under /etc/cron.d/?

No, that is not what it says. What man 5 crontab says is:

/etc/crontab: system-wide crontab Unlike any other crontab you don't have to run the `crontab' command to install the new version when you edit this file and files in /etc/cron.d. These files also have username fields, that none of the other crontabs do.

What that means is that there is no need to run crontab -e to edit and install a new cron job. The files under /etc/cron.d/ are read and interpreted by cron as soon as they are edited, and the jobs defined within are executed as scheduled without calling the crontab executable.

Warning: you should also read from man cron:

In general, the system administrator should not use /etc/cron.d/, but use the standard system crontab /etc/crontab.

That means execute a crontab -e to create cron jobs. Editing /etc/crontab (or /etc/cron.d files) directly is a bad idea in general, and for new users especially. Get used to managing cron jobs with crontab -e and crontab -l; then, after some (long) time, you may try to understand what /etc/crontab does.

Related edit, since you are asking: Can crontab -e create cron jobs in /etc/cron.d/? No, that is not what I meant. Cron jobs (for users) live in /var/spool/cron/crontabs (in Debian-like distros). That is where the command crontab -e edits them. What usually gets edited in /etc/crontab are the system jobs, the jobs that installed packages (like anacron) use to schedule their own jobs. That is not a file that users should edit.

In Debian, there is a directory to add new jobs that avoids the problem of making mistakes in the edited file (/etc/crontab). That directory is /etc/cron.d/. But still, that is not a directory meant for users (even the administrator) to edit manually. That is similar to asking why a user should not edit the /etc/sudoers file. Of course a user (as root) could execute nano /etc/sudoers and change the file; the system will not block root. But that is a "bad idea". The manual for crontab is misleading in the sense that it was written to talk to developers (not users): it explains what a developer should do in those files.

Why: about "Why do you think that you need to add to /etc/crontab (or /etc/cron.d)?" Files in those two places contain "lines"; each line is a job, exactly the same as a line that is edited with crontab -e. Each line is a job to be executed by cron. The only difference is that in those two places, the job line contains an additional field for "user". But:

crontab -u user1 -e

will also add new lines (jobs) (if the user running that command has the right privileges) that will be executed as "user1". The only (practical) difference is that a user could edit those jobs but not the ones inside /etc/crontab or /etc/cron.d/. That seems like a good idea IMO, to keep users from editing jobs of other users.

In short: to add jobs, use crontab -e.
Why is my cron job not scheduled?
Is a cron job based on the system time, or on its own elapsed time? If it is the system time, then when I change the system time manually, will cron follow the newly set time accordingly?
cron uses the system clock, which it checks every minute. For changes to the system clock, the following extract from the man page might be useful:

Special considerations exist when the clock is changed by less than 3 hours, for example at the beginning and end of daylight savings time. If the time has moved forwards, those jobs which would have run in the time that was skipped will be run soon after the change. Conversely, if the time has moved backwards by less than 3 hours, those jobs that fall into the repeated time will not be re-run.
Cron job based on system time or its own elapsed time
crontab -l | { cat; echo "0 0 * * * /path/to/cron/job"; } | crontab -

I took this line from the net, to add a new cron job. It works well and good. I have a few doubts:

1) Why do we need the curly brackets (or can we use single quotes) here: { cat; echo "0 0 * * * /path/to/cron/job"; }

2) And why do we need the cat; command here?

3) crontab -: If my understanding is correct, does - get replaced by the output of the previous command in the pipeline?
1) Why do we need the curly brackets?

Because you need to pipe both the output of cat and the output of echo to crontab -. Without the curly braces you can't assemble the output of the two commands into a single pipe.

(or can we use single quotes) here

Nope. ... | 'cat; echo "0 0 * * * /path/to/cron/job"' | ... means "pipe to an executable named cat; echo "0 0 * * * /path/to/cron/job"". That doesn't make sense.

2) And why do we need the cat; command here?

You don't. You could have written the same thing more efficiently as:

{ crontab -l; echo '0 0 * * * /path/to/cron/job'; } | crontab -

3) crontab -: If my understanding is correct, does - get replaced by the output of the previous command in the pipeline?

Yes, but that's because crontab specifically supports that. Using - to mean stdin and/or stdout is a convention understood by many commands, but it's just that, a convention. It isn't mandated by the shell, like, say, >file or | command.
Adding a crontab - syntax
I have a bash script I wrote to back up a Moodle installation. It works fine; I've tested the backups. But there's a problem with it: since I have to use sudo, I have to physically type in the password every time instead of just running a cron job to do it automatically. Now I suspect this has something to do with either something I don't know about cron or using an SSH key; either way I'd like to automate the backup.

#!/bin/bash

# Turn on Maintance mode and log it...
logger "BEGIN Turning on maintance mode in moodle"
lynx -cmd_script=./backupScripts/turnOnMaintMode http://moodle.leeand00domain.local
logger "END Turning on maintance mode complete."

logger "BEGIN Creating Backup Directory"
export bkdir=$(date +"%Y-%m-%d")
mkdir $bkdir
cd $bkdir
logger "END Creating Backup Directory"

# Get a backup copy of the database
logger "BEGIN Backing up the Moodle Database"
mysqldump -u moodleuser --password=XXXXX -C -Q -e --create-options moodle > moodle-database.sql
logger "END Backing up the Moodle Database"

# Get a backup copy of moodle data
logger "BEGIN Backing up moodledata"
tar -cvzf moodledata.tar.gz --exclude='/var/moodledata/cache' --exclude='/var/moodledata/lang' --exclude='/var/moodledata/sessions' --exclude='/var/moodledata/temp' /var/moodledata && tar -cvzf moodleinstallation.tar.gz /var/www
logger "END Backing up moodledata"

cd ..
tar -cvzf $bkdir.tar.gz $bkdir

# Turn off Maintance mode and log it...
logger "BEGIN Turning off maintance mode in moodle"
lynx -cmd_script=./backupScripts/turnOffMaintMode http://moodle.leeand00domain.local
logger "END Turning off maintance mode in moodle complete."
I would do one of the following things.

Method #1 - system crons

Add the backup script to the system's crons rather than to an actual user's crontab entry. Most systems maintain a directory structure under /etc like this:

$ ls -1d /etc/cron.*
/etc/cron.d
/etc/cron.daily
/etc/cron.deny
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

You can simply place a script you want to run at whatever frequency in the appropriate directory.

Method #2 - passwordless sudo

The other approach would involve setting up an entry in your /etc/sudoers file, using the command visudo to edit it. This entry would grant passwordless access to the user's crontab entry for this particular script. The entry in their crontab would be something like this:

$ sudo ...your script...

And the entry in your /etc/sudoers file would be something like this:

user ALL=(root) NOPASSWD: /home/user/cronscript.sh

References: How to run a specific program as root without a password prompt?
Automating backups run from a bash script in Linux the right way
I would like to have a service watchdog script echo the status to the screen if called like ./watchdog.sh but if it is run by cron, there is no need to echo output. What is the proper method? Where does stdout go when a script is run under root's crontab?
For the many cron jobs that I run, I purposely write them so that appropriate output is generated when they are run on the command line, but when the same script is placed in crontab I always capture both stdout and stderr to a log file:

00 12 * * 1-5 /home/aws/bin/myscript.sh >> /home/aswartz/rje/cron.log 2>&1
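A sketch that addresses the question as asked: test whether stdout is a terminal with [ -t 1 ], so the same script prints to the screen interactively but stays silent under cron. The status text below is a placeholder, not part of any real watchdog.

```shell
#!/bin/sh
# [ -t 1 ] is true only when stdout is attached to a terminal; cron runs
# jobs without a tty, so the echo below is skipped there.
status="service is running"   # placeholder for the real check

if [ -t 1 ]; then
    echo "$status"
fi
```

Swap the empty cron branch for a logger or file append if you still want a record of non-interactive runs.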
Bash if script is called from terminal echo stdout to terminal, if from cron do not echo output
I want to automate a task which can only be done on a website (with prior login) on my debian server. There is no public API available, so I can't use one. Is there a way to do so? I thought about a text-based browser or something similar.
You can run Selenium on a headless installation on your server, e.g. by programming the actions in Python using pyvirtualdisplay. pyvirtualdisplay allows you to use an xvfb, xephyr or xvnc screen so you can take screenshots (or take a remote peek to see what is going on). On Ubuntu 12.04, install:

sudo apt-get install python-pip tightvncserver xtightvncviewer
sudo pip install selenium pyvirtualdisplay

and run the following (this is using the newer Selenium 2 API; the older API is still available as well):

import subprocess
from pyvirtualdisplay import Display
from selenium import webdriver

def browse_it(port=None):
    browser = webdriver.Firefox()
    page = browser.get('http://unix.stackexchange.com/questions')
    for question in browser.find_elements_by_class_name('question-hyperlink'):
        print question.text
    if port:
        print '--------\nconnect using:\n   vncviewer ' + \
              'localhost:{}\nand click the xmessage to quit'.format(port)
        subprocess.call(['xmessage', 'click to quit'])
    browser.quit()

def browse_it_hidden(rfbport=5904):
    with Display(backend='xvnc', rfbport=str(rfbport)) as disp:
        browse_it(rfbport)

if __name__ == '__main__':
    browse_it_hidden()

The xmessage prevents the browser from quitting; in testing environments you would not want this. You can also call browse_it() directly to test in the foreground.

The results of Selenium's find_element.....() do not provide things like selecting the parent element of an element you just found, something that you might expect from HTML parsing packages (I read somewhere this is on purpose). These limitations can be kind of a hassle if you do scraping of pages you have no control over. When testing your own site, just make sure you generate all of the elements that you want to test with an id or unique class so they can be selected without hassle.
Automating tasks on a website on a headless server
OK, I have requested code here before, but initially I didn't ask to make it busybox compatible. My bad. I'm new to Linux and coding. The code needs to do the following: delete 50 GB of the oldest data (directories with files) from a directory when the HD reaches a capacity of 95%. The code they gave me, which is not working with busybox, is:

DIRS="a/ b/"
MAXDELBYTES="53687091200" # 50GB
DELBYTES="0"

find $DIRS -type f -printf "%T@ %s %p\n" | sort -r -n |
while read time bytes filename
do
    rm -fv "$filename"
    DELBYTES=$((DELBYTES + bytes))
    if [ $DELBYTES -ge $MAXDELBYTES ]; then break; fi
done

What is not working: -printf (changed it to -print), and "%T@ %s %p\n" (I don't know what to change it to). I don't know what else isn't working; I'm new to coding and Linux. Now this needs to be translated to busybox so it will work on my embedded Linux system. Also, a cron command needs to be added so it runs every Friday.
Since the busybox implementation of find does not offer custom output formatting, you need to outsource the formatting task to a separate program :) Luckily, even busybox includes the handy stat command. Its output format fields differ from the ones that GNU find uses, so the symbols you need to use are different. The script below assumes that find and stat are those that come from busybox.

DIRS="a/ b/"
MAXDELBYTES="53687091200" # 50GB
DELBYTES="0"

find $DIRS -type f -exec stat -c "%Y %s %n" {} \; | sort -r -n |
while read time bytes filename
do
    rm -fv "$filename"
    DELBYTES=$((DELBYTES + bytes))
    if [ $DELBYTES -ge $MAXDELBYTES ]; then break; fi
done

As always, read each command's description before you use it. In the case of busybox, you won't find manpages for them, but you can use --help to display usage information.

Be warned that this solution can break things in an unlikely situation, when file names contain newline symbols! This should not occur on a healthy system, but might happen, for instance, if someone manages to either break into the system or exploit some vulnerability that allows arbitrary file creation. To prevent accidentally removing useful files in such cases, you should first find and remove all files that include newlines in their names. To list those, run:

find / -name "*
*"

(There is only a newline between the asterisks.) Then, when you're sure all those files are not needed, delete them using either

find / -name "*
*" -delete

or

find / -name "*
*" -print0 | xargs -0 rm -vf

Both should work with busybox.
Remove 50GB oldest files in busybox when used capacity reaches 95%
I want to allow non-root users to create periodic tasks, but I don't want to indirectly give them root access by giving them access to crontab. Is there any alternative for them to create cron tasks that have no root access? I could create a script myself in crontab that checks all the user home folders and runs a crontab-like file that they provide with their user permissions. But... this would basically be reinventing crontab, and I'm sure my implementation would be prone to security issues. Is there a way to do this without reinventing the wheel? It is very strange that Linux doesn't provide this mechanism already.
By default, every user can have a crontab created. All the user needs is to be able to log in to the system via ssh. After the user is logged in, all they have to do is:

crontab -e

And this will open a crontab file for them to populate. After the user is done with the crontab, the file is saved in /var/spool/cron/* for each user. Scheduled jobs will be run as the user, with the user's permissions, not as root. If you'd like to create crontabs for different users, you could create them yourself for every user and later inspect them. The users do not need root access for this. On a RHEL system, the crontab for user joe will have the path /var/spool/cron/joe.
Crontab for non-root users with non-root execution permissions
Just to be clear, I want to comment out crontab entries, not lines in a basic file. Usually, I do crontab -e:

30 * * * * /u01/app/abccompny/scripts/GenerateAWRReport.pl
01,31 * * * * /u01/app/abccompny/scripts/table_growth_monitor.sh
30 0,4,8,12 /u01/shivam/script/getMongoData.sh

and I add "#" in front of each line and just save it. Similarly, after the work is done I remove the "#":

#30 * * * * /u01/app/abccompny/scripts/GenerateAWRReport.pl
#01,31 * * * * /u01/app/abccompny/scripts/table_growth_monitor.sh
#30 0,4,8,12 /u01/shivam/script/getMongoData.sh

Is there an efficient way to do this using a script?
Export your current crontab into a file, delete the crontab, then use the previously created file:

$ crontab -l > cron_content
$ crontab -r
$ <this is where you do your stuff>
$ crontab cron_content
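The commenting step itself can also be scripted with sed, as a sketch. Two caveats: the disable pipeline also prefixes variable-assignment lines (e.g. MAILTO=...), which is harmless for a temporary disable, and the enable pipeline strips one leading # from every line, so it suits crontabs that contain only job entries.

```shell
#!/bin/sh
# Disable: put '#' in front of every line that is not already a comment.
crontab -l | sed 's/^\([^#]\)/#\1/' | crontab -

# Enable: remove a single leading '#' from every line.
crontab -l | sed 's/^#//' | crontab -
```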
How to comment all the crontab entries and then uncomment same using a script
I notice that on Debian-related systems, system-level crontab scripts in /etc/cron.hourly, /etc/cron.daily... are being gradually decommissioned in favour of systemd timers. E.g.:

$ cat logrotate
#!/bin/sh

# skip in favour of systemd timer
if [ -d /run/systemd/system ]; then
    exit 0
fi
...

I presume that one goal is to gradually decommission cron and anacron. (See note 1.)

For me a critical use case of cron is user-defined crontabs (crontab -e), which allow a user to schedule their own jobs to run as their own user without requiring sysadmin privileges. Are there any features in systemd, current or planned, which allow non-admin users to schedule repetitive tasks?

Note 1: Weakening the earlier statement somewhat: I've not found any particularly good discussions except those badmouthing cron and singing the praise of systemd timers. I've found no evidence that this direction of travel has been handed down by the gods of Linux distribution. However, I do notice it as a direction of travel. Therefore this statement is only based on the idea that, if this is a direction of travel, with time I would expect most or all packages to eventually go the same way and make one system (cron) redundant.
Users can set up systemd timers, basically by creating a service and timer in ~/.config/systemd/user and enabling the timer.

There are two main features which are lost by switching from user-defined cron jobs to systemd timers (whether that's good or bad depends on the situation):

- systemd services don't email their results;
- systemd user timers only run when the user session is active, unless the user is configured to linger (which is something the administrator needs to do).

Using systemd timers adds a number of possibilities compared to cron jobs; for example the time specification is more expressive than cron's, and timers can be configured to fire with additional requirements such as "only when a specific VPN is up". (Of course all these niceties can be written in cron jobs too...) I also find systemd timers nicer to manage than cron jobs: it's easy to see what a timer's status is and the next time it will fire.

In Debian, the pattern you've seen is used to avoid having repetitive tasks run twice, once by systemd and another time by cron or anacron (for whatever reason). It doesn't mean there's a general goal to decommission cron or anacron.
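A minimal sketch of such a user timer; the unit names and the backup.sh path are illustrative, not taken from the question. After creating the two files, run systemctl --user enable --now backup.timer; systemctl --user list-timers then shows the next firing time.

```
# ~/.config/systemd/user/backup.service
[Unit]
Description=Nightly backup

[Service]
Type=oneshot
ExecStart=%h/bin/backup.sh

# ~/.config/systemd/user/backup.timer
[Unit]
Description=Run the backup service daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```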
Is there a systemd alternative to user specified crontabs?
I want to run a script at a specific time without using cron, but it is not working.

#!/bin/sh
DATE=`date | cut -d' ' -f4`
# The date is getting printed if I run it manually, without any error.
# But the file is not created at the scheduled time.
echo $DATE
if [[ $DATE == "07:06:55" ]]
then
    echo "this is a test program" >> xyz.log
fi
You can use at:

at -f "$script" 'now + 24 hours' &>/dev/null

(The at command explained with an example.) I found this also:

watch -n <the time> <your command or program or executable>

(The watch command on the web.)
What should I do to run a script at a specific time without cron?
I have added a job (register-dns.cron) to /etc/cron.daily/, but it is not running. The result of some testing is shown below.

#↳ ls -l /etc/cron.daily/
total 28
-rwxr-xr-x 18 root root 1474 Sep 13  2017 apt-compat
-rwxr-xr-x 13 root root  355 Oct 25  2016 bsdmainutils
-rwxr-xr-x 18 root root 1597 Feb 22  2017 dpkg
-rwxr-xr-x  6 root root 4125 Feb 10 08:26 exim4-base
-rwxr-xr-x 18 root root  249 May 17  2017 passwd
-rwxr-xr-x  3 root root   66 Apr 17 11:57 register-dns.cron

#↳ (cd /; run-parts --report --verbose /etc/cron.daily)
run-parts: executing /etc/cron.daily/apt-compat
run-parts: executing /etc/cron.daily/bsdmainutils
run-parts: executing /etc/cron.daily/dpkg
run-parts: executing /etc/cron.daily/exim4-base
run-parts: executing /etc/cron.daily/passwd

#↳ (cd /; run-parts --report --verbose --reverse /etc/cron.daily)
run-parts: executing /etc/cron.daily/passwd
run-parts: executing /etc/cron.daily/exim4-base
run-parts: executing /etc/cron.daily/dpkg
run-parts: executing /etc/cron.daily/bsdmainutils
run-parts: executing /etc/cron.daily/apt-compat
I found the problem. It seems that by removing the .cron from the end of the filename, it will start to work (dots are not allowed in the filename; see below). From man run-parts:

If neither the --lsbsysinit option nor the --regex option is given then the names must consist entirely of ASCII upper- and lower-case letters, ASCII digits, ASCII underscores, and ASCII minus-hyphens.
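That default filter can be sketched as a shell case pattern, handy for checking a name before dropping a script into /etc/cron.daily:

```shell
#!/bin/sh
# Mimic run-parts' default filename rule: ASCII letters, digits,
# underscores and hyphens only; anything else (such as a dot) causes the
# script to be silently skipped.
check_name() {
    case "$1" in
        *[!A-Za-z0-9_-]*) echo "skipped: $1" ;;
        *)                echo "accepted: $1" ;;
    esac
}

check_name register-dns.cron   # skipped (contains a dot)
check_name register-dns        # accepted
```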
cron job not running from cron.daily
1,580,978,760,000
I want a cron job to run once every hour, at a random minute (i.e. if the first run is at 58 minutes past the hour, the second might run at 47 minutes past, the third at 52 minutes past, and so on). The minute should be random for every hour.
You can do this by defining a job which runs every hour on the hour, and sleeps for a random amount of time before running the command you're actually interested in. In your crontab:

SHELL=/bin/bash
0 * * * * sleep $((RANDOM*3600/32768)) && command

(You need to specify the shell, to ensure that $RANDOM is available. There are other ways of getting a random value for sleep if that's not appropriate.)
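A quick way to see what that arithmetic does (a sketch; run it in bash, since $RANDOM is a bash feature):

```shell
# bash's $RANDOM is 0..32767; scale it into a 0..3599 second delay so
# the job fires at a uniformly random point within the hour.
delay=$((RANDOM * 3600 / 32768))
echo "would sleep $delay seconds before running the real command"
```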
Running a cron job randomly for every one hour
1,580,978,760,000
I have recently lost my cron jobs. Now when I do crontab -e I am presented with an empty file. I would like to get the default file back with the nifty comments and explanations at the top!
crontab -r should remove the current user's crontab. The next time you run crontab -e you should get the default crontab.
How to restore default crontab file
1,580,978,760,000
I need to run a shell script containing xdotool commands, /home/z/Desktop/tempo/run.sh. I've tried many variations of DISPLAY=:0, but none of them work. I've tried each of the lines below, without success:

* * * * * export DISPLAY=:0 cd /home/z/Desktop/tempo; ./run.sh
* * * * * export DISPLAY=:0; cd /home/z/Desktop/tempo; ./run.sh
* * * * * export DISPLAY=:0 && cd /home/z/Desktop/tempo; ./run.sh
* * * * * DISPLAY=:0 cd /home/z/Desktop/tempo; ./run.sh
* * * * * DISPLAY=:0; cd /home/z/Desktop/tempo; ./run.sh
* * * * * DISPLAY=:0 && cd /home/z/Desktop/tempo; ./run.sh

Running xdotool directly also does not work:

* * * * * export DISPLAY=:0 xdotool mousemove 20 20
* * * * * export DISPLAY=:0; xdotool mousemove 20 20
* * * * * export DISPLAY=:0 && xdotool mousemove 20 20
* * * * * DISPLAY=:0 xdotool mousemove 20 20
* * * * * DISPLAY=:0; xdotool mousemove 20 20
* * * * * DISPLAY=:0 && xdotool mousemove 20 20

I never see my mouse moving with any of the lines above. I've also tested the xdotool commands by making an invalid website URI request and checking the logs. Sadly, the logs are still blank.
* * * * * DISPLAY=:0 xdotool mousemove 20 20

at least should work, as long as it's in the crontab of the same user as the one having the X session on the corresponding display. If another user is to do the mousemove, you need to grant them access to your display. This can be done by giving them the MIT-MAGIC-COOKIE for your display and letting them install it in their own X auth store (using xauth), or it can be done with:

xhost +si:localuser:the-user

Or it can be done by granting them access to your own X auth store, for instance by doing:

setfacl -m u:the-user:r ~/.Xauthority

and changing the crontab line to:

* * * * * DISPLAY=:0 XAUTHORITY=~me/.Xauthority xdotool...

If that other user is root you don't need the setfacl step, but I would not run xdotool as root; there is no reason for that. You can run it as your own user.
Xdotool using "DISPLAY=:0" not works in Crontab
1,580,978,760,000
This setting, for every six hours, works fine:

0 */6 * * *

But this runs every six hours at 0:00, 6:00, 12:00 and 18:00. Now I want to run every six hours, but starting at 2:00. I read on this simulator that it should be:

0 2/6 * * *

But crontab is returning a "bad hour" error.
You need to provide a range:

0 2-23/6 * * * ...

This will run the job daily at 2am, 8am, 2pm, 8pm (i.e. every six hours starting from 02:00). I've never been totally sure whether this is user's local time or system local time, though; I've tended to assume system local time.
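As a sanity check, seq performs the same arithmetic as the N-M/S field (start at N, step by S, stop at M), so it can enumerate the matching hours:

```shell
# Hours matched by the 2-23/6 field: start at 2, step by 6, stop at 23
seq 2 6 23    # prints 2, 8, 14, 20 on separate lines
```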
Crontab does not accept 2/6 on hour [duplicate]
1,580,978,760,000
I'm fairly new to cron and I'm trying to figure out a way to run a job every day of the week, except for the first day of each month and the first day of each week. This may sound a little odd, but the context is that this is for a backup task. I want to backup every day, but also keep some backups around a little longer so take one backup each week and also one each month. I want to setup a cron task for the daily job but I'm struggling to see how to effectively not run the daily task if it's on a day on which the weekly or monthly backup will also run. I think I can achieve this by adding a separate cron task for each day of the month, but that feels wrong because it's a lot of separate tasks. Is there a better way to achieve this in cron?
I would schedule the job with a day-of-the-week restriction, then prefix the actual job with a date test for the day of the month. Use your own values for the hour & minute fields. The "day-of-the-month" field is *, meaning run on any & all days of the month. The "month" field is also open. The "day of the week" field is restricted to Mondays through Saturdays, skipping the first day of the week (assuming you count Sunday as the first day of the week). If Monday is the first day of your week, use 0,2-6 as the "day of the week" value instead.

(minute) (hour) * * 1-6 [ $(date +\%e) -ne 1 ] && actual-job

The simplest approach (of restricting both the day-of-the-month and day-of-the-week fields) doesn't work, due to this note in man 5 crontab, with my bolded emphasis:

Note: The day of a command's execution can be specified in the following two fields — 'day of month', and 'day of week'. If both fields are restricted (i.e., do not contain the "*" character), the command will be run when either field matches the current time.
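The date test in that line can be tried on its own; a minimal sketch (the echoes stand in for the real job):

```shell
# Sketch of the guard that prefixes the real job: run only when the
# day of the month is not the 1st (%e is the space-padded day of month;
# leaving the substitution unquoted strips the padding).
if [ $(date +%e) -ne 1 ]; then
    echo "not the 1st: actual-job would run"
else
    echo "1st of the month: skipped"
fi
```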
Cron: Run every day except first day of the month and first day of each week
1,580,978,760,000
We have two servers, one of which runs Ubuntu and the other Amazon Linux 2. Both run a series of cron jobs for different clients. The Ubuntu server sends an email (to a group email address) with the output of every cron job. There is no output redirection: just a command to execute, for each entry:

0 2 * * * /apps/ourapp/sync_data -c variable1 -s all -i CLIENT1

... repeated with different times and different values for -i. The scheduled start time is always of the form hh:00, hh:15 or hh:45. It works satisfactorily: people get the email. This isn't the case with the Amazon Linux machine. Its mail log has entries like this (slightly redacted):

Feb 28 07:05:04 ip-XXXX sSMTP[32212]: Connection lost in middle of processing
Feb 28 08:05:04 ip-XXXX sSMTP[32382]: killed: timeout on stdin while reading body -- message saved to dead.letter.
Feb 28 08:05:04 ip-XXXX sSMTP[32382]: Timeout on stdin while reading body
Feb 28 21:50:04 ip-XXXX sSMTP[2261]: killed: timeout on stdin while reading body -- message saved to dead.letter.
Feb 28 21:50:04 ip-XXXX sSMTP[2261]: Timeout on stdin while reading body
Feb 28 22:05:04 ip-XXXX sSMTP[2505]: killed: timeout on stdin while reading body -- message saved to dead.letter.
Feb 28 22:05:04 ip-XXXX sSMTP[2505]: Timeout on stdin while reading body
Feb 28 22:20:05 ip-XXXX sSMTP[2845]: killed: timeout on stdin while reading body -- message saved to dead.letter.
Feb 28 22:20:05 ip-XXXX sSMTP[2845]: Timeout on stdin while reading body

You'll note that the timestamp minute components are 5 min past either the hour, the 15-minute, or the 45-minute mark. These correspond to cron jobs that aren't sending email. dead.letter is always empty, which makes sense if the log doesn't get sent until the job ends (as I infer happens). What I've read about sSMTP suggests that you can't override the stdin timeout with config options (though that information may be out of date). So what can I do?
Is this because we're using sSMTP rather than some other mail mechanism?
There's nothing that you can do if you stick with sSMTP. As you surmised, this is not configurable. The 5 minute timeout is hardwired into the code of the program. If your job takes more than 5 minutes, then you will simply need a different mail submission system.
Cron email not sent by sSMTP if job takes > 5 minutes
1,580,978,760,000
I have to create a cron job. I know the rules of cron, but when I am using a cron generator online, I can see that the resultant expressions have ? in them. For example, if I am creating a cron expression for a job to be called every day at 4 am, the resultant expression is:

0 0 4 ? * * *

What is the relevance of this ? in the cron expression?
The ? wildcard comes from the Quartz scheduler's extended cron syntax (it is not part of classic Unix cron) and is only used in the day-of-month and day-of-week fields. It means "no specific value" - useful when you need to specify something in one of the two fields in which the character is allowed, but not the other. For example, if I want my trigger to fire on a particular day of the month (say, the 10th), but don't care what day of the week that happens to be, I would put "10" in the day-of-month field, and "?" in the day-of-week field.
What does ? means in cron expression?
1,580,978,760,000
Following is my crontab entry:

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* * * * * /FinalSync.sh $(date --date="5 days ago" +%d_%m_%Y) || echo $? >> log

I got no error in the log file either. Shell script:

#! /bin/sh
source=/Source/$1
destination=/Destination
folderParam=$(basename $source)
if /usr/bin/rsync -avh -r $source $destination; then
    cp /FolderCopyStatus/Success /Status/Success_$folderParam
else
    cp /FolderCopyStatus/Failure /Status/Failure_$folderParam
fi

The script runs perfectly when I use it on the command line as:

sh /FinalSync.sh $(date --date="5 days ago" +%d_%m_%Y)
cron converts % to a newline in any crontab entry. You need to escape the %s with \:

* * * * * /FinalSync.sh "$(date --date="5 days ago" +\%d_\%m_\%Y)"
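To see what the escaped crontab line hands to the script once cron has processed it, you can run the date part on its own (GNU date assumed):

```shell
# What the escaped crontab entry expands to: a DD_MM_YYYY stamp
date --date="5 days ago" +%d_%m_%Y
```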
Shell script not running in crontab [duplicate]
1,580,978,760,000
Is there some way to call a script of my own instead of just sending an email when any cron job fails?
You cannot do that with Vixie cron, which is standard on most systems nowadays, but there is a very workable alternative. What you do is setup a special user towards which all emails from cron are redirected by setting MAILTO, in the crontab file, to that user. And for that user you make some .procmailrc entry or entries that execute the alternative command if a mail got received for a failed command. You might need to do some parsing of the mail to determine whether an error was encountered, or enforce that programs that have zero exit do not write to stdout.
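For illustration, a minimal sketch of the kind of filter that user's mail handling could pipe a cron message into; the failure heuristic and the sample message are assumptions you would adapt, not tested configuration:

```shell
#!/bin/sh
# Hypothetical filter for the mail-receiving user: decide from the
# message body whether the cron job failed. A sample message is fed
# in-line here; under procmail the message would arrive on stdin.
sample='Subject: Cron <root@host> backup
Error: tar exited with status 2'
if printf '%s\n' "$sample" | grep -qi 'error'; then
    echo "failure detected - would run the alternative command here"
fi
```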
Run a command when any cron job fails instead of just sending an email
1,580,978,760,000
I use this cron job to do a backup of /home/blah/ each day at 01:00 a.m.:

0 1 * * * tar -zcf /home/blah/backup.tgz /home/blah/

In fact I would prefer that an email be sent to me with the .tgz file as an attachment (yes, the file size will always be < 5 MB because my folder is very small). Can I do something like:

0 1 * * * mail -s "Backup blah" "[email protected]" --attachment=(tar -zcf /home/blah/backup.tgz /home/blah/)

(this is pseudo-code at the end) inside the cron job? What cron syntax should I use?
The following command worked for me when I tested on my machine:

echo "This is the message body" | mutt -a "/path/to/file.to.attach" -s "subject of message" -- [email protected]

So the approach to follow will probably be something like:

tar -zcf /home/blah/backup.tgz /home/blah/
echo "Please find attached the backup file" | mutt -a "/home/blah/backup.tgz" -s "File attached" -- [email protected]

I will save the above script as backup_email.sh and schedule the cron job as:

0 1 * * * /path/to/backup_email.sh

References: https://stackoverflow.com/a/9524359/1742825
Send backup by email with crontab
1,580,978,760,000
I have setup Cron Jobs to run inside a Chroot Environment, depending on the User/Group; I have noticed that these cron jobs, running inside the chroot environment, fail to send any mail. Log files report that it cannot find a program to send mail. Where does the Cron process look for the default mail binary? Can you set or configure this path? and on a side note.. if the MAILTO= variable is not set, how does Cron know where to send mail to? does it just send mail to the user running the job, on the local host? thanks!
Where does the cron process look for the default mail binary? Unless otherwise specified, I'm fairly sure it just uses the mail program it finds in the path (/usr/bin:/bin). You can, though, specify the -m command line argument for some versions of cron:

-m  This option allows you to specify a shell command string to use for sending cron mail output instead of sendmail(8). This command must accept a fully formatted mail message (with headers) on stdin and send it as a mail message to the recipients specified in the mail headers.

The above works on CentOS/RHEL; Ubuntu looks different.

Can you set or configure this path? See above.

If the MAILTO variable is not set, then as you suspect the mail is delivered to the local user who is running the job. On CentOS/RHEL you can specify extra command line arguments in /etc/sysconfig/crond so that you don't have to edit your init scripts. Other OSes/distros may provide similar functionality.
where does Cron look for the default mail binary?
1,580,978,760,000
I'm trying to put up a temporary band-aid while I work out a solution for an app's memory leak. What I wrote was a small bash script that I put in the root of the server. This is what the script is supposed to do:

Get the location it's run in
Check to see if the script is in the crontab
If not in the crontab, add it to the crontab to run every 5 mins
Test the memory and check to see if the % is above percent_allowed
If above that, restart the nginx & php-fpm services

memory_protect.sh:

#!/bin/sh
cronfile=memory_protect.sh #NOTE THIS SHOULD DETECT IT'S SELF
path=pwd #this is the path to the file
percent_allowed=80 #this should be max memory before action

has_cron(){
  #is the file in the cron?
  return [[ crontab -l | egrep -v '^$|^#' | grep -q $cronfile ]] && return 1 || return 0
}

test_memory(){
  memusage=`top -n 1 -b | grep "Mem"`
  MAXMEM=`echo $memusage | cut -d" " -f2 | awk '{print substr($0,1,length($0)-1)}'`
  USEDMEM=`echo $memusage | cut -d" " -f4 | awk '{print substr($0,1,length($0)-1)}'`
  USEDMEM1=`expr $USEDMEM \* 100`
  PERCENTAGE=`expr $USEDMEM1 / $MAXMEM`
  #if it's above 80% alert
  return [[ $PERCENTAG>$percent_allowed ]] && return 1 || return 0
}

if [[ has_cron -eq 0 ]]
then
  #was not here so add
  #run this script every 5 mins
  */5 * * * $path/$cronfile
fi

if [[ test_memory ]]
then
  #clear some memory
  /etc/init.d/nginx restart
  /etc/init.d/php-fpm restart
fi

The memory test seems to work when I run it by itself, but the script as a whole doesn't seem to be working.

Update: I needed to run dos2unix on the file, but I also realized I have a return on the conditions of each function at the end, so that was not going to work. Right now it seems to say that [[ on the if statement is not found.

Update 2: Seems close; it's running the restarting of the services, but it's not putting the cron job in,
so I don't see it running.

#!/bin/bash
cronfile=memory_protect.sh #NOTE THIS SHOULD DETECT IT'S SELF
path=pwd #this is the path to the file
percent_allowed=80 #this should be max memory before action

has_cron(){
  #is the file in the cron?
  #return 0 #returning this just to test should
  #be the next line but it's not working
  return 0
  [[ crontab -l | egrep -v '^$|^#' | grep -q $cronfile ]] && return 1 || return 0
}

test_memory(){
  memusage=`top -n 1 -b | grep "Mem"`
  MAXMEM=`echo $memusage | cut -d" " -f2 | awk '{print substr($0,1,length($0)-1)}'`
  USEDMEM=`echo $memusage | cut -d" " -f4 | awk '{print substr($0,1,length($0)-1)}'`
  USEDMEM1=`expr $USEDMEM \* 100`
  PERCENTAGE=`expr $USEDMEM1 / $MAXMEM`
  #if it's above 80% alert
  [[ $PERCENTAG -gt $percent_allowed ]] && return 1 || return 0
}

if [[ has_cron -eq 0 ]]
then
  #was not here so add
  #run this script every 5 mins
  #crontab -e */5 * * * $path/$cronfile
  cat <(crontab -l) <(echo "*/5 * * * $path/$cronfile") | crontab -
else
  echo "cron present"
fi

if [ test_memory ]
then
  #clear some memory
  sudo /etc/init.d/nginx restart
  sudo /etc/init.d/php-fpm restart
fi

It's close now, I think, to being corrected.
To create a crontab entry via a Bash script you'll need to change this line:

*/5 * * * * $path/$cronfile

To something like this:

# Write out current crontab
crontab -l > mycron
# Echo new cron into cron file
echo "*/5 * * * * $path/$cronfile" >> mycron
# Install new cron file
crontab mycron
rm mycron

You could also get fancy and do it all with this one-liner:

cat <(crontab -l) <(echo "*/5 * * * $path/$cronfile") | crontab -

Your script

Here's a modified version of your script that works for me.

#!/bin/sh
cronfile=memory_protect.sh #NOTE THIS SHOULD DETECT IT'S SELF
path=$(pwd) #this is the path to the file
percent_allowed=80 #this should be max memory before action

has_cron(){
  #is the file in the cron?
  #return 0 #returning this just to test should
  #be the next line but it's not working
  if crontab -l | egrep -v '^$|^#' | grep -q $cronfile; then
    return 1
  else
    return 0
  fi
}

test_memory(){
  memusage=$(top -n 1 -b | grep "Mem")
  MAXMEM=$(echo $memusage | cut -d" " -f2 | awk '{print substr($0,1,length($0)-1)}')
  USEDMEM=$(echo $memusage | cut -d" " -f4 | awk '{print substr($0,1,length($0)-1)}')
  USEDMEM1=$(expr $USEDMEM \* 100)
  PERCENTAGE=$(expr $USEDMEM1 / $MAXMEM)
  #if it's above 80% alert
  [[ $PERCENTAG>$percent_allowed ]] && return 1 || return 0
}

if has_cron; then
  #was not here so add
  #run this script every 5 mins
  #crontab -e */5 * * * $path/$cronfile
  #cat <(crontab -l) <(echo "*/5 * * * $path/$cronfile") | crontab -
  crontab -l > mycron
  # Echo new cron into cron file
  echo "*/5 * * * * $path/$cronfile" >> mycron
  # Install new cron file
  crontab mycron
  rm mycron
else
  echo "cron present"
fi

if test_memory; then
  #clear some memory
  echo "/etc/init.d/nginx restart"
  echo "/etc/init.d/php-fpm restart"
fi

Example

$ ./memory_protect.sh
/etc/init.d/nginx restart
/etc/init.d/php-fpm restart

$ crontab -l
*/5 * * * * /home/saml/tst/91789/memory_protect.sh

The script needs to have these two lines modified so that it will actually restart the nginx and php-fpm services.
Change these lines:

echo "/etc/init.d/nginx restart"
echo "/etc/init.d/php-fpm restart"

To these:

/etc/init.d/nginx restart
/etc/init.d/php-fpm restart

I did this just so I could see that the script was running correctly. NOTE: these restart lines should be prefixed with sudo if this script is running as anyone other than root!

sudo /etc/init.d/nginx restart
sudo /etc/init.d/php-fpm restart

This user will likely need to have NOPASSWD privileges, at least on these 2 scripts, otherwise it will be waiting for the user that owns the cron to supply a password.

Crontab entry doesn't exist?

You'll encounter this problem when your crontab hasn't been created yet in the directory /var/spool/cron. You'll see it when you run crontab -l like this:

$ crontab -l
no crontab for saml

Double check:

$ sudo ls -l /var/spool/cron/
total 0
-rw------- 1 root root 0 Sep 16 23:47 root

So just simply make like you're going to edit it, and save the empty file to create it:

$ crontab -e
# now you're in the vim editor, add an empty line
# type "i", hit return, hit Escape,
# and do a Shift + Z + Z to save!

Now you should see this:

$ crontab -l
$

And this:

$ sudo ls -l /var/spool/cron/
total 0
-rw------- 1 root root 0 Sep 16 23:47 root
-rw------- 1 saml root 0 Sep 21 16:20 saml

Restarting the services

Another issue you'll run into: if this crontab entry is running as user user1, then user1 will require sudo rights to restart the services.
Script to test for memory usage [closed]
1,580,978,760,000
I'm setting up some shell scripting to be executed every five minutes, then every minute, on our client's system, to poll a log and pull some timing information to be stored and accessed by another application through an external file. The current implementation we have in place works fine, as it writes a single line to two separate files. We're refining the process, so now I need to write two lines to one file every five minutes, and four lines to another every minute. However, I've noticed in testing that every few minutes the lines seem to execute out of order. My crontab entries are included below:

*/5 * * * * ~/myscript.pl ~/mylog | tail -3 | head -1 > ~/myreport1
*/5 * * * * ~/myscript.pl ~/mylog | tail -2 | head -1 >> ~/myreport1
* * * * * ~/myscript.pl ~/mylog | tail -8 | head -1 > ~/myreport2
* * * * * ~/myscript.pl ~/mylog | tail -7 | head -1 >> ~/myreport2
* * * * * ~/myscript.pl ~/mylog | tail -3 | head -1 >> ~/myreport2
* * * * * ~/myscript.pl ~/mylog | tail -2 | head -1 >> ~/myreport2

In some cases, it seems that only a handful of the lines execute properly, while in others only one line is written. I don't even see the full number of lines written to the file all the time, otherwise I'd just assume the values I was pulling weren't being collected properly. I'm not sure how to judge whether all the cron lines are being executed, and what could be causing them to happen out of order, or not at all.
There is no guarantee that cron will run tasks in the order in which they appear in the cronfile. In fact, it may well run two tasks simultaneously. So it's definitely not a good idea to have the tasks depend on each other. For example, in your cronfile, one task creates a file and another one (or three) appends to it. If the appender starts first, the creator will effectively delete the appender's work. Better would be to create a driver script with the four every-minute runs of myscript and another one with the two every-five-minute runs. Then you can cron the two driver scripts, resulting in only one cron task for each time interval.
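As a sketch of that suggestion, the four every-minute entries could collapse into one driver script; here the myscript.pl pipeline is reduced to a plain tail on a sample log for illustration, and the paths are placeholders:

```shell
#!/bin/sh
# Sketch of a single "driver" script replacing the four every-minute
# cron entries: run sequentially, the truncating ">" is guaranteed to
# happen before the ">>" appends, so no append can be lost.
LOG=/tmp/mylog
printf 'l1\nl2\nl3\nl4\nl5\nl6\nl7\nl8\n' > "$LOG"   # sample data
tail -n 8 "$LOG" | head -1 >  /tmp/myreport2
tail -n 7 "$LOG" | head -1 >> /tmp/myreport2
tail -n 3 "$LOG" | head -1 >> /tmp/myreport2
tail -n 2 "$LOG" | head -1 >> /tmp/myreport2
cat /tmp/myreport2    # prints l1, l2, l6, l7
```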
Cron job for every minute executing out of order?
1,580,978,760,000
A Python script to shut down the system works fine from the terminal but doesn't work when included in crontab. The script is called by cron but ends with an error, 'shutdown command not found' or 'init 0 command not found'. I am using Fedora 17 and the script is executed from root's crontab.

#!/usr/bin/python
import os
os.system('shutdown')
os.system('init 0')
I guess the first line of your snippet is supposed to be something like:

#!/usr/local/bin/python

or, if you also make sure you set an appropriate value for PATH in your crontab:

#!/bin/env python

What does your crontab entry look like? Don't forget that cron sets a very limited environment, so you will need to provide the full path to the script file so that cron can find it. Additionally, python probably can't find shutdown when called from cron, because it will inherit cron's limited environment. Try providing the full path to the shutdown command in your os.system() call.
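One way to see this for yourself is to replay the lookup under a cron-like minimal environment (a sketch; env -i clears everything except what you set explicitly):

```shell
# Sketch: replay the command lookup under cron's minimal environment.
# On many systems shutdown lives in /usr/sbin or /sbin, outside this
# PATH, which is why os.system('shutdown') fails under cron.
env -i PATH=/usr/bin:/bin /bin/sh -c 'command -v shutdown || echo "shutdown not in cron PATH"'
```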
Python script to shutdown system doesn't work in cron
1,580,978,760,000
I need to develop a script to test whether "now" (e.g. the output of date) is in a given range. For now, this range is expected to be a list of days of the month (e.g. 10,11,12,13,21,22,23), and I could code that myself. However, I would prefer to use existing tools instead of writing my own, and I expect the range requirement to evolve, most probably to restrict to certain hours of the day, and maybe also to days of the week. So I thought about the cron syntax that we all know. Question: is there a tool to check whether a given date-time is "inside" a cron-syntax range or the like (tools that don't use cron syntax could be OK too)?
One possible way is to use epoch time, which is seconds since 01.01.1970 00:00. First generate the current date and time:

CDATE=$(date +%s)

then generate two other dates:

F1DATE=$(date -d "11:02 Feb 14 1999" +%s)
F2DATE=$(date -d "12:20 Feb 14 2024" +%s)

and then check whether CDATE is between F1DATE and F2DATE:

if [ "$CDATE" -gt "$F1DATE" ] && [ "$CDATE" -lt "$F2DATE" ]
then
    : # do the job
fi
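Putting the pieces together into one runnable sketch (the boundary dates are just the examples above; GNU date assumed):

```shell
#!/bin/sh
# Assembled version of the epoch-based range check.
CDATE=$(date +%s)
F1DATE=$(date -d "11:02 Feb 14 1999" +%s)
F2DATE=$(date -d "12:20 Feb 14 2024" +%s)
if [ "$CDATE" -gt "$F1DATE" ] && [ "$CDATE" -lt "$F2DATE" ]; then
    echo "now is inside the range"
else
    echo "now is outside the range"
fi
```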
Is there a tool to test if a date-time is in a given range?
1,580,978,760,000
I want to schedule my crontab to execute a command EVERY 5 MINUTES between 00:05 and 23:55. I'm kinda new to cron, and I'm struggling to find the right way to do this... For now, I only found out how to do it between hours, not hours and minutes. Like, I know I can do it like this if I wanted to execute the command every 5 minutes between 00:00 and 23:00: */5 0-23 * * * But I want to execute it every 5 minutes between 00:05 and 23:55. How do I do that? Thanks for any help you can provide!
You seem to want to execute a job every five minutes, every hour of the day, except at exactly midnight. You would schedule two jobs:

5-55/5 0-23 * * *
0 1-23 * * *

The first job would trigger every five minutes from hh:05 through hh:55, every hour from 00 through 23. This job skips every full hour. The second job would trigger on the hour, every hour from 01:00 through 23:00, but not at midnight. This takes care of the on-the-hour runs that the first schedule skips. See also: https://crontab.guru/
How to do a cronjob every 5 minutes between 00:05 and 23:55
1,580,978,760,000
I'm honestly not sure if this is an issue with WSL or I'm just doing something wrong for Ubuntu in general, but I cannot get the cron service to run at start on my WSL system. It starts just fine with:

sudo service cron start

But it doesn't start at boot even after:

sudo update-rc.d cron defaults
sudo update-rc.d cron enable

Version:

$ uname -a
Linux PC-01 4.4.0-18362-Microsoft #476-Microsoft Fri Nov 01 16:53:00 PST 2019 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
Old question that I just found when searching for a potential "duplicate" for another question that was just asked. Putting the answer here since this one is the first search result for "wsl start services". There's a current answer and a future answer (based on the Windows 10/11 Preview versions of WSL currently available).

The current answer is that WSL doesn't have the concept of "startup services". Microsoft's init process is neither a SysVinit nor a systemd init like on "normal" Linux systems. If you'd like to start the services automatically, there are currently two ways to do it, with a third coming in the next WSL release.

Option 1: Set up a Windows "Scheduled Task" to run on login (not boot):

The task can be a "Basic Task" to "Start a Program"
The "Program" is wsl.exe
The "Arguments" should be -u root service cron start

That will run WSL at login as the root user (needed to start services) and run the service cron start command. Of course, this can be modified to run any service that has an init.d script. Note that this does not (at least currently) work if you schedule the task to run at Windows boot, since WSL seems to require that the user be logged in in order to keep the process running in the background.

Option 2: Modify your shell startup to check if the service is running, and start it if needed. In your startup file (typically ~/.bashrc) add the following line:

wsl.exe -u root sh -c "service cron status || service cron start"

Under "normal" Linux, you'd need to visudo and give yourself permission to run the command without a password (or type the password each time you log in). Using wsl.exe -u root from within WSL allows you to bypass this.

Option 3: A new feature in Windows 11 is the ability to specify startup tasks for WSL using the /etc/wsl.conf file.
If you have Windows 11, create that file with the following lines:

[boot]
command="service cron start"

According to the Microsoft doc, this will run the command as root when the WSL instance starts. If you need to run multiple commands at WSL startup, separate them with semicolons in the same command line:

[boot]
command="service ssh start; service cron start"
WSL Run Service at Start
1,580,978,760,000
Today (in 2020, with systemd as init), there seem to be many ways to schedule tasks (something I assume was previously handled by the crond daemon). My trouble is to understand why there are three similarly named cronie... packages available on my RHEL 7 setup. This question seeks to get to the bottom of what makes those packages different in their use case: when, for instance, would one prefer any of the cronie, cronie-anacron or cronie-noanacron packages? Are those packages interdependent? The information provided via yum is this:

[root@localhost ~]# yum search cronie
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
============================= N/S matched: cronie ==============================
cronie.x86_64 : Cron daemon for executing programs at set times
cronie-anacron.x86_64 : Utility for running regular jobs
cronie-noanacron.x86_64 : Utility for running simple regular jobs in old cron style

I have read this resource that compares cron with anacron, basically saying that anacron's use case is to schedule stuff that has to occur at intervals of days and on systems not running 24/7. What is most puzzling, then, is the cronie-noanacron thing.

** Update **

Looking into the matter I have stumbled upon this quote:

Now I get it. The cronie package by itself does not execute the cron.daily, weekly & monthly scripts... /etc/crontab is empty. Scripts are executed either by anacron or /etc/cron.d/dailyjobs (cronie-noanacron), so the cronie package depends on either cronie-anacron or cronie-noanacron to actually function as crond did. Installing cronie-noanacron will enable uninstalling cronie-anacron without breaking dependencies.

from https://forums.centos.org/viewtopic.php?f=13&t=1040&start=10#p6438

Can somebody confirm this? (since this would kind of give some insight/answer to the question)
cronie is the package that contains the actual cron daemon. It is a fork of vixie-cron. cronie-anacron provides the anacron tool that allows specifying things to run daily/weekly/monthly/etc. without necessarily specifying the exact time, so that systems that are shut down irregularly can have periodic maintenance jobs. If you don't want to use anacron, the cronie-noanacron package contains the necessary configuration to run the standard daily/weekly/monthly/etc. maintenance jobs in the traditional way, at fixed times.
What is the different use case of cronie, cronie-noanacron, cronie-anacron?
1,580,978,760,000
I would like to make crontab run this script as a regular user:

#!/usr/bin/env bash
PIDFILE=wp-content/uploads/sphinx/var/log/searchd.pid
if ! test -f ${PIDFILE} || ! kill -s 0 `cat ${PIDFILE}`; then
    `which searchd` --config /home/user/www/wordpress-page/wp-content/uploads/sphinx/sphinx.conf
fi

It simply reruns the Sphinx Search daemon, because my shared host kills all my daemons if anything exceeds 1 GB of RAM (it's WebFaction). When I call the script by hand via the CLI it works, but if I attach it in crontab (using crontab -e) I get an email with an error:

which: no searchd in (/usr/bin:/bin)
/home/user/www/wordpress-page/run-searchd.sh: line 8: --config: command not found

Simply put, which searches only the system-level paths, but I would like it to behave as when I call the script myself after logging in via SSH as a regular user. How can I make that happen?
It's probably because of the $PATH. Do this in your shell outside of crontab: command -v searchd | xargs dirname This command will return a directory where searchd is on your system or an error if you don't have searchd in your $PATH even in an interactive shell. Now do this at the top of your script you execute in crontab: PATH=<directory_from_above_command>:$PATH Alternatively just use a full path to searchd instead of which searchd. Also read this on which if you want to fully understand how it works: Why not use "which"? What to use then?.
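A minimal sketch of that idea, using sh (guaranteed present) in place of searchd so it can be tried anywhere:

```shell
# Resolve a command's directory as the answer suggests; in the real
# script you would run this in a login shell and substitute searchd.
dir=$(command -v sh | xargs dirname)
echo "sh lives in: $dir"
# The script run from cron would then do: PATH=$dir:$PATH
```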
How to use which command in Crontab?
1,580,978,760,000
I'm trying to set up a script for my school's computers that switches them off automatically at the end of the day (since most teachers forget to turn them off). I thought to show a zenity warning box informing the user that the PC is going to switch off and, if the user does not confirm within 1 minute, the PC shuts down. I know of the timeout command, but in this case I want to call sudo shutdown -h now if the zenity command does not return within 60 seconds. I was trying to do something like:

timeout 60 --onTimeout="sudo shutdown -h now" zenity --warning --text="This PC is going to shut down in a minute if you DO NOT press the OK button."

The command would be executed from a cron script. So, the question is: how can I execute a command when the zenity program times out?
You'll need to check the exit status and make a decision based on that. The manual for GNU timeout says that

If the command times out, and --preserve-status is not set, then exit with status 124.

So, in a shell script, you'd do something like this:

timeout 60 zenity ...
if [ $? = 124 ] ; then
    echo it timed out, do something...
else
    echo didn\'t time out
fi

Though, it appears that zenity supports a timeout in itself, so you don't need an external timeout:

zenity --timeout 60 --warning --text="Shutdown in 60"
if [ $? = 5 ] ; then
    sudo shutdown -h now
fi

Even when using Zenity's own timeout, it doesn't seem to show the time remaining. However, it does have a progress bar mode that could be used to convey a sense of impending doom. Something like this would display a timer counting down from 60 with a progress bar filling up at the same time:

#!/bin/bash
timeout=60
for (( i=0 ; i <= $timeout ; i++ )) ; do
    echo "# Shutdown in $[ $timeout - $i ]..."
    echo $[ 100 * $i / $timeout ]
    sleep 1
done | zenity --progress --title="Shutting down automatically..." \
    --window-icon=warning --width=400 --auto-close
if [ $? = 0 ] ; then
    echo timeout passed
else
    echo cancel was pressed or zenity failed somehow
fi
If command reach timeout execute other command
1,580,978,760,000
I am just starting to learn cron jobs. Basically I am going to use webmin to manage my cron jobs, and I am also reading some basic information about them. So far I've learned that /etc/crontab stores cron jobs, that /var/spool/cron/crontabs has cron jobs for different users, and that when I do crontab -e I can see and edit the cron jobs for the current user. Root is the only user on my Ubuntu 14.04, and there are only a few lines in the crontab files I found in the above locations. However, webmin shows a long list of jobs, a lot more than what I see in those files. So my question is: where do all these cron jobs I see in webmin come from?
There are several places where cron jobs are stored. The main place is /etc/crontab (and on some systems, this is the only one). This file is edited only by root, and often allows specifying which user each job should run as. On some systems, there is also a directory - /etc/cron.d - which complements /etc/crontab. The files here contain lines like those used in /etc/crontab (often with the user to run as specified), and are basically appended to /etc/crontab. This makes it easy for packages to add an entry to the main crontab file. Then there are the per-user crontabs - for root, normal users, and some system users (often added by packages) - which can be found under /var/spool/cron/crontabs in user-specific files... this is what you edit with crontab -e. On some systems, some or all users may be blocked, by using /etc/cron.allow or /etc/cron.deny. You can disable entries in the various crontab files by commenting them out with a #. Finally we have directories called /etc/cron.hourly, cron.daily, cron.weekly and cron.monthly. These contain shell scripts - not crontab entries like /etc/cron.d. However, the cron daemon will look at these directories and run the scripts at the appropriate times. You can find the entries that drive these directories - usually in /etc/crontab - which is what actually parses and runs the scripts. To disable scripts in these directories, removing the execute permission should do the trick. One other problem with these - and with cron in general - is if the computer is turned off during the periods when some cron entries should run. If that is the case, those jobs may (almost) never run at all. Some cron daemons have a mechanism to fix this, or an additional crond-like program is used (e.g. anacron) - but in general, a computer running cron should be on all the time. In addition, the at- and batch-mechanisms are run as cron jobs on some systems, rather than by a dedicated daemon of their own.
+++ Most of the strange entries you've seen are shell scripts added by packages into the /etc/cron.{hourly|daily|weekly|monthly} directories. These may do things like flushing buffers, compressing logs, applying updates, or doing general clean-up. As mentioned, these are not really crontab entries but shell scripts... it's just that cron will check these directories and run their contents from time to time. Obviously webmin checks all cron-related directories and lists all entries. Unless the entry/script - in /etc/crontab, /etc/cron.d, the time-specific /etc/cron.* directories or /var/spool/cron/crontabs/user - is for some service you know your system is absolutely not using (e.g. uucp or news, on a computer not using uucp or running a news server), it's probably best to leave these as they are. Most of these cron jobs will run at "off-hours" (like in the middle of the night) anyway, and shouldn't be very noticeable. After all, "If it isn't broken, don't fix it".
Why am I seeing much more cron jobs in webmin than in `crontab -e`?
1,580,978,760,000
As a user, I want to edit my crontab. crontab -e launches gvim, which prints

"/tmp/crontab.IUVYhK/crontab" [New DIRECTORY]

I can edit, but as soon as I try to save the temporary file, I get this error message:

"crontab.IUVYhK/crontab" E212: Can't open file for writing

However, I have no issue when using vi as the editor:

EDITOR=vi crontab -e

Is it wrong to set gvim as EDITOR? Should I use vi? I do very few admin tasks on this desktop machine, so I never ran into any issue before.
You must use a synchronous editor for crontab -e, i.e. one where the command doesn't return until the editing is complete. For example:

export EDITOR="gvim --nofork"
crontab -e

An alternative is this:

crontab -l > ~/.crontab
gvim ~/.crontab    # wait until editing is finished
crontab ~/.crontab
`crontab -e: E212: Can't open file for writing` when using gvim (works with vi)
1,580,978,760,000
I'm running LXDE on Fedora 21. My script's purpose is to extend the display across two monitors:

#!/bin/sh
xrandr --output VIRTUAL1 --off --output LVDS1 --mode 1440x900 --pos 1280x124 --rotate normal --output TV1 --off --output VGA1 --mode 1280x1024 --pos 0x0 --rotate normal

This runs without issue from a terminal window but chokes as a cron job. From my cron log:

Jul 9 20:14:01 localhost CROND[19494]: (user) CMD (/home/user/screens.sh)
Jul 9 20:14:01 localhost CROND[19492]: (user) CMDOUT (Can't open display )
xrandr needs the $DISPLAY variable set to tell it which X session it's manipulating, and that isn't being set in the cron environment. xrandr could be working on your default local X session, or a second one that you started by running startx from a TTY, or a session to a remote display being forwarded over SSH, or a nested X session running inside another one using Xnest, etc. Without the $DISPLAY environment variable (or the --display command line argument) it can't know in general which session to connect to, so it bails out. For example, the following command may resolve your issue:

DISPLAY=:0 /home/user/screens.sh
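When the job runs from cron rather than your desktop session, XAUTHORITY is often needed alongside DISPLAY so the script can authenticate to the X server. A sketch of the crontab entry, assuming the usual first local display :0 and a default ~/.Xauthority location (both are assumptions; check with echo $DISPLAY in a terminal on the target session):

```
*/5 * * * * env DISPLAY=:0 XAUTHORITY=/home/user/.Xauthority /home/user/screens.sh
```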
XRandR script runs correctly from command line, fails as cron job
1,580,978,760,000
Let us suppose I have a cron job like this:

*/15 * * * * /path/to/thedaemon

The daemon (it being a python daemon via from daemon import runner) will not allow multiple instances of itself, which in itself is quite nice. If one tries to initiate it while the daemon is already running, this is the result:

lockfile.LockTimeout: Timeout waiting to acquire lock for /tmp/thedaemon.pid

Of course the cron job doesn't care - it could keep dry-firing the command routinely so that in the case it is not running, it then starts running. But that is not very elegant. More elegantly, is there a way to set up the cron job to know whether the daemon is running before initiating it? Perhaps a short-hand if-condition? In short, how do I set up the cron job to ensure that the daemon is running? If running, do nothing. If not running, initiate.
You can wrap your python daemon in a shell script. When it starts, check whether the process is already running:

pid=$(cat pid.file)
ps -ef | grep "$pid" | grep '<command to start daemon>'
if [[ $? -eq 0 ]]; then
    echo "daemon already running"; exit 1
else
    <command to start daemon> &
    echo $! > pid.file
fi
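An alternative sketch that avoids ps-grepping entirely (grep can match its own process or unrelated command lines): let flock(1) arbitrate. The lock path and the DAEMON_CMD stand-in are assumptions; point them at your daemon.

```shell
#!/bin/sh
# Cron can fire this every minute; if an instance already holds the lock,
# the new invocation gives up immediately instead of double-starting.
DAEMON_CMD=${DAEMON_CMD:-"sleep 0"}     # stand-in; replace with /path/to/thedaemon
LOCK=${LOCK:-/tmp/thedaemon.lock}

if flock -n "$LOCK" -c "$DAEMON_CMD"; then
    echo "daemon ran under the lock"
else
    echo "another instance already holds the lock"
fi
```

The lock is released automatically when the daemon exits, so there is no stale pid file to clean up.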
Cron Job to Keep Daemon Running?
1,580,978,760,000
When I use the command crontab -e on my Debian server as a non-root user (in this case as postgres), I can't edit it because of

"/tmp/crontab.SJlY0Y/crontab" [Permission Denied]

crontab -l on the other hand works fine. How can I fix this problem? Here are the current permissions:

$ ls -l /tmp/crontab.SJlY0Y/crontab
-rw------- 1 root postgres 1.2K Aug 3 11:44 /tmp/crontab.SJlY0Y/crontab
$ ls -l /var/spool/cron
total 12K
drwxrwx--T 2 daemon daemon 4.0K Sep 12 2012 atjobs
drwxrwx--T 2 daemon daemon 4.0K Jun 9 2012 atspool
drwx-wx--T 2 root crontab 4.0K Aug 3 11:15 crontabs
$ ls -l /var/spool/cron/crontabs
total 12K
-rw------- 1 git crontab 1.3K Mar 2 16:48 git
-rw------- 1 postgres crontab 1.4K Aug 3 11:15 postgres
-rw------- 1 root root 2.3K Jul 20 20:32 root
$ ls -l /usr/bin/crontab
-rwsr-xr-x 1 root root 36K Jul 3 2012 /usr/bin/crontab
$ ls -ld /tmp/
drwxrwxrwt 6 root root 4.0K Aug 3 11:43 /tmp/
$ ls -l /usr/bin/crontab
-rwsr-xr-x 1 root root 36K Jul 3 2012 /usr/bin/crontab

The ownership and permissions should actually be

-rwxr-sr-x 1 root crontab 35880 Jul 3 2012 /usr/bin/crontab

Since Debian sarge, crontab is setgid crontab, not setuid root, as requested in bug #18333. This is the cause of your problem: the crontab program expects to run setgid, not setuid, so it creates the temporary file as the user and group it's running as, which are root and the caller's primary group instead of the calling user and the crontab group. Reinstall the cron package: apt-get --reinstall install cron (as root). Check that /var/spool/cron/crontabs has the correct permissions and ownership:

drwx-wx--T 2 root crontab 4096 Oct 8 2013 /var/spool/cron/crontabs

In the future, don't mess with the permissions of system files.
Cannot edit crontab as non root user
1,580,978,760,000
I tried using crontab -l from my terminal as root; it showed no crontab for root. So I tried crontab -e, and it returns the following:

no crontab for root - using an empty one
888

and then the cursor starts blinking. I am not able to quit or save the file.
When you run the command crontab -e it typically defaults to the vi or vim editor. If you type ZZ (Shift+Z twice) you can save any changes in this editor and exit. To add entries to your crontab using this method you'll need to learn to use this editor more extensively, which is beyond the scope of this question, and it should be easy to find many tutorials on the internet. If vi/vim is too much of a learning curve, you can instruct crontab to use a different editor. Another console-based editor that's easier for people new to Linux is nano; it's typically installed on most distros that I'm familiar with.

$ EDITOR=nano crontab -e

NOTE: To use nano's menu, all the caret commands at the bottom (aka ^X) require the use of the Ctrl key. So to exit, Ctrl+X, for example. You can of course use any editor here. An easy GUI-based editor, if you're using a GNOME based desktop, would be gedit:

$ EDITOR=gedit crontab -e

This last one might be a challenge to use, for a different set of reasons, if your primary desktop is being run by a user other than root, which it likely is, so I would go with nano for starters.
Crontab suspicious activity
1,580,978,760,000
I'm managing multiple machines running Debian, Raspbian, and Mint. On some of the machines I want updating and upgrading to happen automatically. I've drafted a script that should do this and log whether the update succeeded:

#!/bin/bash
captains_log=/home/jason/mission_control/captains.log
apt-get update;
if [ $? == 0 ]; then
    apt-get upgrade -y;
    if [ $? == 1 ]; then
        echo `date`": Daily update failed" >> $captains_log
    else
        echo `date`": Daily update successful" >> $captains_log
    fi
else
    echo `date`": Daily update failed" >> $captains_log
fi

I've set the script to run @daily in a root crontab. When I run the command manually it executes as desired. When cron runs the script, I get success in the log but my software is not getting updated. Can someone tell me where I'm going wrong?
The recommended way to do this is using the unattended-upgrades package. Setting it up is simple:

apt-get install unattended-upgrades
dpkg-reconfigure unattended-upgrades

This is all you need to get the results your cron script intends. There is no need to reinvent the wheel. As for your script and its report of success: any non-zero return code is considered a failure, but your script treats anything other than 1 as success. There is no need to check exit codes manually; that is what if does:

if apt-get upgrade -y; then
    echo "$(date): Daily update successful" >> $captains_log
else
    echo "$(date): Daily update failed" >> $captains_log
fi

When the shell has a "command not found", an exit code of 127 is returned.
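A quick, self-contained illustration of the two facts above: if branches directly on a command's exit status, and a missing command yields the reserved status 127.

```shell
#!/bin/sh
# `if` runs the command and branches on its status; no manual $? checks needed.
if false; then
    echo "unreachable"
else
    echo "false reported failure"
fi

# "command not found" maps to a specific, reserved status: 127.
sh -c 'no-such-command-xyz' 2>/dev/null
echo "missing command exit status: $?"
```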
Automatic Update Script using Cron, Exit Codes, and Logging
1,580,978,760,000
So I have a crontab that has this line:

30 16 * * * (time sysbench --test=cpu --cpu-max-prime=20000 run) 2>> ~/cpu.out

I use this because time sends its output to stderr by default and I want to redirect it to a file. When I run the command in the terminal it writes the output I need:

real X.XXXs
user X.XXXs
sys X.XXXs

But when I run it with cron it writes this to the file:

39.69 user 0.92 system 0:40.67elapsed 99%CPU (0avgtext+0avgdata 1400maxresident)k 0inputs+1outputs (0major+450minor) pagefaults 0swaps

Can someone help me? PS: The man page mentions the -o option, but it is not working; if I try it I get an error.
When you're running it in the shell, you're actually using the bash built-in, which looks like this:

anthony@Zia:~$ time perl -e 'sleep 1'

real 0m1.003s
user 0m0.000s
sys 0m0.004s

Cron isn't using the bash built-in; it's using /usr/bin/time:

anthony@Zia:~$ /usr/bin/time perl -e 'sleep 1'
0.00user 0.00system 0:01.00elapsed 0%CPU (0avgtext+0avgdata 1800maxresident)k
0inputs+0outputs (0major+514minor)pagefaults 0swaps

The second one actually has all the information from the bash built-in, plus more. It labels "real" as "elapsed". (This is also why the -o option is not working; that's an option for /usr/bin/time, not the bash built-in.) If you need to use the bash built-in, there are two things to try:

Put SHELL=/bin/bash at the top of your crontab.
Change your command to explicitly call bash -c "your command here".
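You can confirm which time you are getting straight from the shell; type -a lists every interpretation bash knows for the name (the external /usr/bin/time only appears if the package is installed):

```shell
# bash resolves `time` as a keyword first; cron's /bin/sh has no such
# keyword and falls through to the external binary, if present.
bash -c 'type -a time'
```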
Running time command with cron
1,580,978,760,000
Would someone suggest any handy command-line (NOT web) tool or script which adds, enables and disables cron jobs? I am looking, for example, for the following (or similar) behavior:

sh manageCron.sh -idJob 'job1' -addJob '* * * * * <do some job>'
sh manageCron.sh -dissableJob 'job1'

The crontab -e command is good for manual editing of the crontab file, but I need to automate.
crontab -u USER -l will list USER's crontab to stdout, and crontab -u USER FILE will install FILE as the crontab for USER. Now the only thing missing is a way to identify your "jobs": "AddJob" would append a line to the output of the current crontab and load the result as the new crontab; "DisableJob" would just put a comment in front of your job's line, and "EnableJob" would remove that comment again.
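A minimal sketch of that idea, tagging each job with a trailing #jobid comment so it can be found again. The function name, the tag convention, and the sed patterns are my own invention here, not a standard tool:

```shell
#!/bin/sh
# manage_cron add job1 '* * * * * /do/some/job'   -> append a tagged line
# manage_cron disable job1                        -> comment the tagged line
# manage_cron enable job1                         -> uncomment it again
manage_cron() {
    action=$1 id=$2 line=$3
    case $action in
        add)     { crontab -l 2>/dev/null; printf '%s #%s\n' "$line" "$id"; } | crontab - ;;
        disable) crontab -l | sed "s|^\([^#].*#$id\)\$|#\1|" | crontab - ;;
        enable)  crontab -l | sed "s|^#\(.*#$id\)\$|\1|" | crontab - ;;
        *)       echo "usage: manage_cron {add|disable|enable} id [line]" >&2; return 1 ;;
    esac
}
```

Because it round-trips through crontab -l and crontab -, it works for whichever user runs it and never touches the spool files directly.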
Is there any handy command line tool to manage Cron jobs?
1,580,978,760,000
In short: On a Slurm cluster, I need some computers to be available and responsive to their respective owners during work hours.

Problem: I manage a small (but growing) heterogeneous cluster with around 10 nodes, where some of the nodes are not dedicated. These are desktop computers used by colleagues on the same network during work hours, and they would prefer to work on responsive machines. During nights and weekends, however, we pool all of our computers and some dedicated nodes together for batch jobs. I recently switched from HTCondor to Slurm because it fits our needs better in all except one aspect: prioritizing the owner of the machine for regular work not related to the cluster. On Condor a node could be configured to suspend, preempt or kill jobs depending on criteria such as:

Time of day or weekday (machines are used during the day on weekdays)
Keyboard activity (some users may be working late)
CPU activity from processes other than those spawned by the cluster (users may leave some of their own processes running overnight, which should run without interference)

I would like to mimic any of these behaviors with Slurm, or find another way to not bother the owner using the computer.

Additional info: All the nodes run Ubuntu 18.04-19.04 with slurm from apt, i.e. version 18+. The cluster uses cgroups for limit enforcement and is configured to use cores as the consumable resource, as in

SelectType=select/cons_res
SelectTypeParameters=CR_Core

I do not have sudo rights on most desktop computers, so I need either a "set and forget" solution for the one time I configure each colleague's PC, or something I can do from the head node, where I do have sudo.

Attempts: I have considered these options but remain unsatisfied. For time of day/weekday, use either crontab or systemd with OnCalendar events in slurmd.service, to either: start/stop the daemon. This may be the easiest way, but it kills jobs in a non-clean way.
launch a script that sets the node state using scontrol to down/resume/drain/etc., possibly from the head node. I haven't tried this one, as I can't figure out how to do it outside of scontrol's interactive mode.

For responsiveness, I used systemctl edit slurmd.service to add resource control by setting CpuWeight=5 under [Service]. This should deprioritize slurmd relative to every other process, but it doesn't seem to work as I intended, because the jobs make the computers sluggish anyway. I thought jobs would be subprocesses of slurmd and be subject to the same CpuWeight. If this actually worked well, it could solve the whole problem. I feel there should be a better way to achieve what I want. Any help is appreciated.
After a couple of days I managed to answer my own question. In hindsight it was quite simple.

Responsiveness: The slurmd daemon can be started with command-line arguments; list them with slurmd -h. In particular, slurmd -n 19 sets the highest nice value (and thus lowest priority) for the daemon and all its subprocesses. On the desktop computers, I simply edited /etc/systemd/system/slurmd.service, appending -n 19 to ExecStart, i.e.

ExecStart=/usr/local/sbin/slurmd $SLURMD_OPTIONS -n 19

then reloaded the systemd daemons with systemctl daemon-reload and restarted the slurmd daemon with systemctl restart slurmd.service.

Memory reservation: Some memory can be reserved for the system. I leave 8GB to the owner by adding MemSpecLimit=8000 to the node specifications in slurmd.conf. To actually enforce memory limits there were some additional steps:

Select Core and Memory as consumable resources by setting SelectTypeParameters=CR_Core_Memory in slurmd.conf.
Add the cgroups task plugin by setting TaskPlugin=task/affinity,task/cgroup in slurmd.conf and then setting ConstrainRAMSpace=yes in cgroup.conf.
Because we are on Ubuntu, enable memory and swap cgroups by adding the line GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1" to /etc/default/grub.

Work hours on weekdays: Some of my colleagues would like zero distraction during work hours. This is easy to do with scontrol from the head node: set their node state to "down" during work hours, and to "resume" after work hours. I do this automatically with systemd timers. First, make an executable script that updates the node states of the desktops in question using scontrol:

#!/bin/bash
# slurm-update.sh - Updates the state on nodes belonging to the work-hour desktops partition.
systemctl start slurmd
for node in $(sinfo -h --partition=WHdesktops --format="%n"); do
    state=$(sinfo -h --node=$node --format="%T")
    echo "Setting node $node to state=$1 with reason=$2"
    scontrol update NodeName=$node state=$1 reason="$2" || echo "State on $node is already $(sinfo -h --node=$node --format=\"%T\")"
done

This takes two arguments, the new state and a reason for it. Create a service/timer pair of files in the directory /etc/systemd/system to run the script above at certain times. Make one pair per state you want to set (for instance I made 3 pairs, to set down, drain and resume). The pair for setting "down" looks like this:

# /etc/systemd/system/slurm-down.service:
[Unit]
Description=Shut down all SLURM desktop nodes

[Service]
Type=simple
ExecStart=/bin/bash /mnt/nfs/slurm_fs/systemd/slurm-update.sh down afterhours
StandardError=journal

and

# /etc/systemd/system/slurm-down.timer:
[Unit]
Description=Timer for shutting down slurm on desktop nodes on weekdays

[Timer]
Unit=slurm-down.service
OnBootSec=10min
# Run hourly on weekdays between 8:05 and 18:05
OnCalendar=Mon..Fri *-*-* 8..18:05:00

[Install]
WantedBy=timers.target

Reload the daemons with systemctl daemon-reload and then enable and start the timer only, not the service: systemctl enable --now slurm-down.timer. Repeat the steps for the resume state after work hours, and optionally a drain state an hour or so before the down state.
Slurm on desktop computers, how to prioritize the owner
1,580,978,760,000
I have just installed Cygwin on my Win Server 2008. I have a bash backup script (to back up some user files to an external hard drive) that I want to run on the machine, under cron, every night. I have just installed the base package, cron and cygrunsrv. Now I need to get cron running. In Cygwin there is /bin/crontab.exe and /usr/sbin/cron.exe. What is the difference between these two? Which one should I use to run my backup script? If I run crontab -e, the crontab file for /bin/crontab.exe opens; cron -e gives command not found. When searching Google I find that people usually set up and use /usr/sbin/cron.exe, but I do not really understand why.
Both cron and crontab are commands.

cron is the daemon, running in the background and executing the commands defined in crontab files:

    Cron searches /var/spool/cron for crontab files ... Cron examines all stored crontabs and checks each job to see if it needs to be run in the current minute. When executing commands, any output is mailed to the owner of the crontab.

crontab, the command, manages crontab files:

    Crontab is the program used to install a crontab table file, remove or list the existing tables used to serve the cron(8) daemon. Each user can have their own crontab.

This means that if you want to run a command periodically, you use crontab to install or change your personal crontab file. To run Cygwin's cron in the background, look at How do you run a crontab in Cygwin on Windows?
/bin/crontab and /usr/sbin/cron in cygwin - what is the difference?
1,580,978,760,000
How can I list all cron jobs which are scheduled to run? I need to check if my cron jobs are working. I guess they don't even run, because our Magento does not send out any order confirmation emails anymore.
From the comment of @Marcel: you should check /var/spool/cron/* and /etc/cron.*/*, and also /etc/crontab.
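A rough way to see everything in one pass (run as root; the paths follow @Marcel's comment, and the per-user loop assumes a crontab that supports -l -u, as Vixie cron and cronie do):

```shell
#!/bin/sh
# Dump every cron source: system crontab, drop-in directory, user tables.
echo '== /etc/crontab =='
cat /etc/crontab 2>/dev/null
echo '== /etc/cron.d =='
cat /etc/cron.d/* 2>/dev/null
echo '== per-user crontabs =='
for user in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$user" 2>/dev/null | sed "s|^|$user: |"
done
```

Remember this does not cover the /etc/cron.{hourly,daily,weekly,monthly} script directories, which are run via entries in /etc/crontab.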
Show all scheduled cron jobs
1,541,013,171,000
I heard that /etc/crontab and /etc/cron.d/* can only be edited manually, without the crontab command, from How would you edit `/etc/crontab` and files under `/etc/cron.d/`?. Do I need to run crontab -e to create and edit a user-specific crontab file under /var/spool/cron/crontabs/? Can I just manually create and edit a crontab file /var/spool/cron/crontabs/t? Does crontab -e do some work to make the cron daemon notice and load the user-specific crontab file, which manual creation and editing would fail to do? Thanks.
With Vixie Cron, they're just ordinary files — as long as you get the permissions right, you can edit them as you wish. Cron will notice the modified files and reload the crontab (it checks once per minute). This is all actually documented in the cron manpage "NOTES" section, at least on Debian. But you really shouldn't. First off, you really don't need to: you can just pass a file to install as a crontab to the crontab program: crontab -u bob FILE will install FILE as Bob's crontab. And FILE can be - to use stdin. If you want to script a crontab change, you can use crontab -l -u bob to list the crontab, edit that, and then load it back. You could, for example, do this (untested) to make sure your term as root is short-lived:

#!/bin/bash
while read -r -u 9 user; do
    {
        crontab -l -u "$user"
        printf '%s\n' '* * * * * fortune -o | mail -s "DegradedArray event on /dev/md0" root'
    } | crontab -u "$user" -
done 9< <(getent passwd | cut -d: -f1)

Second, a good reason not to was hinted at above: that is documented to apply to Debian's cron. But there are a lot of different Crons. RHEL, for example, uses a different one. Arch uses systemd timers by default (not sure if it uses the systemd crontab-to-timer bridge or not), but gives you the choice of 5 different implementations to pick from if you want an actual Cron. Using crontab to install the crontab will work regardless, or at least fail with an error message so you know it didn't work. It's far more portable.
Can I manually create and edit `/var/spool/cron/crontabs/t` without `crontab -e`?
1,541,013,171,000
Question How to ensure keyboard backlight is on (at max) at boot time on a Dell laptop? Rationale I sometimes hit the key combination that turns off the keyboard backlight during the day without noticing it, and then when I wake up, usually in the middle of the night, I don't want to wake up my wife by turning the lights on. Hence, I seek a solution which would read the maximum possible value of the backlight and set it, no matter what, when I boot my computer. I always turn the computer off at night, so the solution does not have to account for sleep or hibernate modes. Research The maximum value of the keyboard backlight is stored in:

/sys/class/leds/dell\:\:kbd_backlight/max_brightness

And the currently set value is stored in:

/sys/class/leds/dell\:\:kbd_backlight/brightness
Even though the maximum value should most probably be constant, I don't know why or how, but at the previous boot the maximum value was 3; now it is 2. I'm confused and baffled at the same time. I don't want to search for a reason, such as a BIOS setting; let's just read the maximum value at boot time and set it, regardless of whether I accidentally turned the backlight off during the day. I came up with a direct approach using

sudo crontab -e

and reading and setting the maximum value in one command:

@reboot /bin/cat /sys/class/leds/dell\:\:kbd_backlight/max_brightness > /sys/class/leds/dell\:\:kbd_backlight/brightness
How to ensure keyboard backlight is always on (at max) at boot time?
1,541,013,171,000
The whole day I have been fixing bugs, mainly in the TLS area, but this question is not specifically about TLS. Well, I have one web server with a few web sites, each with its own SSL certificate. But to the point: I managed to install Certbot version 0.19.0 on my Debian 9.2 like this. Adding backports to the sources:

deb http://ftp.debian.org/debian stretch-backports main

Installing the newer version of Certbot from backports:

apt-get install python-certbot-apache -t stretch-backports

Afterwards, I had to make some major adjustments to the renewal file, so it looks like this:

# renew_before_expiry = 30 days
version = 0.10.2
archive_dir = /etc/letsencrypt/archive/pavelstriz.cz-0001
cert = /etc/letsencrypt/live/pavelstriz.cz-0001/cert.pem
privkey = /etc/letsencrypt/live/pavelstriz.cz-0001/privkey.pem
chain = /etc/letsencrypt/live/pavelstriz.cz-0001/chain.pem
fullchain = /etc/letsencrypt/live/pavelstriz.cz-0001/fullchain.pem

# Options used in the renewal process
[renewalparams]
authenticator = webroot
installer = apache
rsa_key_size = 4096
account = c3f3d026995c1d7370e4d8201c3c11a2
must_staple = True
[[webroot_map]]
pavelstriz.cz = /home/pavelstriz/public_html
www.pavelstriz.cz = /home/pavelstriz/public_html

I have managed to renew the pavelstriz.cz domain after this with:

certbot renew --dry-run

But what worries me is Certbot's daily cron:

# /etc/cron.d/certbot: crontab entries for the certbot package
#
# Upstream recommends attempting renewal twice a day
#
# Eventually, this will be an opportunity to validate certificates
# haven't been revoked, etc. Renewal will only occur if expiration
# is within 30 days.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew

I can't figure out whether it works for real, or how to run it successfully. If I run:

/usr/bin/certbot -a \!
-d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew

in Bash, it says:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
The requested ! plugin does not appear to be installed

I may have misunderstood those commands.
The actual command run by cron is:

test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew

It starts by testing some files:

test -x /usr/bin/certbot -a \! -d /run/systemd/system

which translates to: does /usr/bin/certbot exist and is it executable (-x /usr/bin/certbot), and not (-a \!) does /run/systemd/system exist and is it a directory (-d /run/systemd/system). If the test succeeds, wait for a random number of seconds (perl -e 'sleep int(rand(3600))'), then try to renew the certificate (certbot -q renew). However, on Debian 9, systemd is installed by default, which means that /run/systemd/system exists and is a directory, so the initial test fails and the renew command is never run. The actual renewal is managed by a systemd timer defined in the file lib/systemd/system/certbot.timer. As of version 0.27.0-1, its content is:

[Unit]
Description=Run certbot twice daily

[Timer]
OnCalendar=*-*-* 00,12:00:00
RandomizedDelaySec=43200
Persistent=true

[Install]
WantedBy=timers.target
How to validate / fix an error in Certbot renewal cron
1,541,013,171,000
I terminated the gnome-pty-helper process with

$ kill -9 9753

After a while it was running again with another process number, and the system was not restarted in between. Why is it located under ~/.config/gnome-pty-helper?

$ file gnome-pty-helper
gnome-pty-helper: ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), statically linked, stripped

Why is it in the crontab?

*/10 * * * * sh -c " /home/xralf/.config/gnome-pty-helper"

Can I delete the file and the line from the crontab? If not, why?
gnome-pty-helper is automatically started by the VTE library when necessary. If you want to avoid its running (why?), you should avoid using anything built with the VTE library (libvte*). The elements you’ve added recently make this look more like a compromise of some sort: ~/.config shouldn’t contain binaries, and gnome-pty-helper certainly doesn’t belong in your crontab. It’s a nice name for a virus of some sort though because it wouldn’t draw attention in a process listing... You can delete the file and the crontab entry (but watch out for their coming back obviously). It might be worth keeping a copy of the file somewhere safe (on a noexec file system) to try to figure out what it was doing...
Prevent gnome-pty-helper from running again
1,541,013,171,000
I have set up ssh keys without a passphrase (Ubuntu) and copied them to my remote server (CentOS 6). I can log in with ssh without a password successfully under my username. When I execute the following script in a terminal under my username (not root) it works. When I execute it through cron under my username it fails with:

Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Here is the script:

#!/bin/bash
export PATH=/home/<username>/git/kodi-playercorefactory/bash-scripts:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
rsync -rvzO -e 'ssh -p 6135 -i /home/<username>/.ssh/id_rsa.pub' <username>@xx.xxx.xx.xx:<filename> <filename>

Any help will be greatly appreciated.
The -i option of ssh is supposed to input the file name that contains the private key, not public key. But you have presumably given the file name containing the public key, given by the name /home/<username>/.ssh/id_rsa.pub. Assuming the private key is in /home/<username>/.ssh/id_rsa, the following should work: rsync -rvzO -e 'ssh -p 6135 -i /home/<username>/.ssh/id_rsa' <username>@xx.xxx.xx.xx:<filename> <filename>
passphraseless access to rsync with ssh through cron fails
1,541,013,171,000
I get entries like these in /var/log/cron, in a host named SERV2 (RHEL 6.2): Apr 21 14:50:01 SERV1 CROND[14799]: (root) CMD (/usr/lib64/sa/sa1 -S DISK 1 1) Apr 21 15:20:01 serv2 CROND[24438]: (root) CMD (/usr/lib64/sa/sa1 -S DISK 1 1) Apr 21 15:00:01 SERV1 CROND[14838]: (root) CMD (/usr/lib64/sa/sa1 -S DISK 1 1) Entries with SERV1 seem to be coming from other host, but AFAIK cron doesn't work in a distributed way, just as a local service. How can those entries end up here? More info: # hostname SERV2 # cat /etc/hosts 10.22.1.70 serv2 10.22.1.27 serv1 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
The syslog format typically contains a timestamp, hostname, app name, and process ID along with whatever custom message was sent. All of these values are (substantially) under the control of the process that sends the syslog message. The cronie source (if configured to use syslog) uses the openlog and syslog functions to write to syslog. Seeing that the reported messages looked like syslog format, and that the hostnames were different between the messages, and that all of the (mentioned) logs were from the CROND "app name", it seemed plausible that SERV2's syslog was configured to write all "cron" facility logs that it receives to the /var/log/cron file. This would include "remote" logs from other systems that were configured to send their syslogs to SERV2 (and assuming that SERV2 is listening for those remote logs). This theory was confirmed in the comments when the OP found that serv1 had a wildcard syslog entry that pointed every syslog at (presumably the IP of) SERV2.
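If that's the setup, the relevant pieces look roughly like this (rsyslog legacy syntax as used on RHEL 6; the receiver IP comes from the question's /etc/hosts, but the rest is an assumed sketch, not SERV2's actual configuration):

```
# On SERV2 (receiver), /etc/rsyslog.conf -- listen for remote syslog:
$ModLoad imudp
$UDPServerRun 514

# Local *and* received cron-facility messages land in the same file:
cron.*                          /var/log/cron

# On serv1 (sender), a wildcard rule forwards everything to SERV2 (10.22.1.70):
*.*                             @10.22.1.70
```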
Separate server's hostname appearing in this server's cron logs
1,541,013,171,000
I'm trying to set up a Raspberry Pi to check a repo on startup and then fire up a node script with forever. I got the second part working, but I tried a dozen git commands with no success. Here is my crontab that I access like so: crontab -u pi -e @reboot /bin/sh /home/pi/code/script.sh Now my script has -rwxr-xr-x access rights and goes like so: #!/bin/sh cd /home/pi/code /usr/bin/sudo -u pi -H /usr/bin/git pull origin master /usr/bin/sudo -u pi -H /home/pi/.nvm/v0.11.11/bin/forever start /home/pi/code/server.js Forever starts the server.js on reboot, no problem, but the repo never gets updated. If I run the script using sh /home/pi/code/script.sh it triggers git pull correctly... I initially set up an alias for git pull to be git up like it is recommended, but figured it might be my problem and I went back to the simplest version I could. Still no success. Any input is welcome. EDIT: the output of the crontab indicates connectivity issue: Could not resolve host: bitbucket.org how can I wait for network to be setup before I run the script?
After getting help to debug and trying out Phlogi's solution without success, I decided to go back to the original crontab and just add code to wait for the network interface to be ready. Here is what the script looks like now: #!/bin/sh while ! ping -c 1 -W 1 bitbucket.org; do echo "Waiting for bitbucket - network interface might be down..." sleep 1 done cd /home/pi/code && /usr/bin/sudo -u pi -H git checkout master && /usr/bin/sudo -u pi -H git up /usr/bin/sudo -u pi -H /home/pi/.nvm/v0.11.11/bin/forever start /home/pi/code/server.js
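The ping loop can be generalised into a bounded retry helper so a dead network doesn't leave the boot script spinning forever; wait_for and the retry counts below are my own names and values, not part of the original script:

```shell
#!/bin/sh
# wait_for MAX_TRIES CMD [ARGS...] -- retry CMD once per second and give
# up with a non-zero status after MAX_TRIES failed attempts.
wait_for() {
  tries=$1; shift
  n=0
  until "$@" >/dev/null 2>&1; do
    n=$((n + 1))
    [ "$n" -ge "$tries" ] && return 1
    sleep 1
  done
  return 0
}

# In the boot script you would write:
#   wait_for 60 ping -c 1 -W 1 bitbucket.org || exit 1
wait_for 3 true && echo up   # -> up
```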
Crontab shell script git pull and forever start
1,541,013,171,000
I have the following output as a user running crontab -l: #Ansible: backup-external chaos */20 * * * * flock --nonblock /home/mu/.cache/backup-external.lock backup-external chaos */20 * * * * /home/mu/bin/ddc-save-brightness Neither job is executed. If I run them manually, they seem to work just fine. The Ansible snippet there comes from using Ansible to add this one job for my user. Looking at systemctl status crond.service -l makes it clear that the service itself is running. It seems to fail to load the crontab for my user mu due to SELinux it seems: ● crond.service - Command Scheduler Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled) Active: active (running) since Mi 2016-01-27 17:51:08 CET; 1h 43min ago Main PID: 1351 (crond) CGroup: /system.slice/crond.service └─1351 /usr/sbin/crond -n Jan 27 17:51:09 martin-friese.fritz.box crond[1351]: (mu) Unauthorized SELinux context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 file_context=unconfined_u:object_r:user_cron_spool_t:s0 (/var/spool/cron/mu) Jan 27 17:51:09 martin-friese.fritz.box crond[1351]: (mu) FAILED (loading cron table) Jan 27 17:51:09 martin-friese.fritz.box crond[1351]: (CRON) INFO (running with inotify support) Jan 27 18:01:01 martin-friese.fritz.box CROND[3726]: (root) CMD (run-parts /etc/cron.hourly) Jan 27 18:01:01 martin-friese.fritz.box anacron[3737]: Anacron started on 2016-01-27 Jan 27 18:01:01 martin-friese.fritz.box anacron[3737]: Will run job `cron.daily' in 13 min. Jan 27 18:01:01 martin-friese.fritz.box anacron[3737]: Jobs will be executed sequentially Jan 27 18:01:01 martin-friese.fritz.box run-parts[3741]: (/etc/cron.hourly) starting mcelog.cron Jan 27 18:14:01 martin-friese.fritz.box anacron[3737]: Job `cron.daily' started Jan 27 19:01:01 martin-friese.fritz.box CROND[2681]: (root) CMD (run-parts /etc/cron.hourly) This is on Fedora 23 and I did not change the SELinux policy, so it is probably enforcing strictly. 
What do I have to change in order to get the jobs running again?
This was resolved in this bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1298192 Please make sure you have the latest kernel: 4.3.3-301
User cron jobs are not being executed any more (perhaps SELinux)
1,541,013,171,000
I have a RHEL server that I am running a MySQL database on. I have a Bash script that executes mysqldump that creates a backup file. The backup file created when executing the script directly in Bash is 754259 bytes in size. If the same script is run via cron, it's only 20 bytes in size. As far as I know, cron is running with the same user context that I use when logged in to run the script manually. However, given the size differential, that does not appear to be true. Why are the file sizes different when running the same script? The shell script contents: backup_path=/var/custom/db_backups configFile=/var/custom/auth.cnf db_name=[db_name] date=$(date +"%d-%b-%Y") sudo /opt/rh/mysql55/root/usr/bin/mysqldump --defaults-extra-file=$configFile $db_name | gzip -9 > $backup_path/$db_name-$date.sql.gz To edit cron: sudo crontab -e cron file contents: 12 21 * * * /var/custom/maint_plan This executes the script daily at 9:12 PM.
The mysqldump command returns nothing, which is piped through gzip and ends in an empty gzip file. See: $ echo -n "" | gzip -9 > test.gz $ stat -c %s test.gz 20 This results in a file with size 20 bytes. So the problem is the mysqldump command. Since it's root's crontab the script runs with root privileges. sudo is not necessary. Use it without sudo. Just: /opt/rh/mysql55/root/usr/bin/mysqldump --defaults-extra-file=$configFile $db_name | gzip -9 > $backup_path/$db_name-$date.sql.gz
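Building on that, the backup script itself can refuse to keep a dump that compressed down to (almost) nothing. This is a sketch, not the original script; the 100-byte threshold is an arbitrary assumption:

```shell
#!/bin/sh
# check_dump FILE -- reject a gzip file small enough to be an empty dump.
# An empty input stream compresses to exactly 20 bytes; the 100-byte
# floor below is an arbitrary safety margin.
check_dump() {
  f=$1
  size=$(stat -c %s "$f")
  if [ "$size" -le 100 ]; then
    echo "backup $f looks empty ($size bytes)" >&2
    return 1
  fi
}

tmp=$(mktemp)
printf '' | gzip -9 > "$tmp"
stat -c %s "$tmp"                     # -> 20
check_dump "$tmp" || echo 'alert raised'
rm -f "$tmp"
```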
Shell script executed with cron results in different size file than executed manually
1,541,013,171,000
I use a mobile data connection service. I only have unrestricted bandwidth between 9am and 4pm. I wonder how I can "silence" applications such as the Dropbox applet outside that schedule. I thought about replacing the Dropbox binary with this script: #!/bin/bash H=`date +"%H"` if (($H >= 9 && $H < 16)) then echo "run dropbox here" fi I was wondering if anyone had a better idea. Namely, in my solution Dropbox will keep running after 4pm, and will fail to launch if the computer is powered before 9am. It would also be cool if no restriction was applied when on wifi. I was wondering if anyone has any solution for systemd or cron or some such. I know those tools are incredibly powerful, but don't know anything beyound that. (My system: XUbuntu 15.04, i.e. xfce4 and systemd)
Use cron to start the daemon and to kill it. Since dropbox runs as a user, edit your cronjob as your user: crontab -e and in the editor, place: 0 9 * * * $HOME/bin/dropbox-daemon-path 59 16 * * * pkill -u "$LOGNAME" dropbox-daemon-process-name At 9am it starts the dropbox daemon (you should provide the full path here) and at 1 minute till 5pm, it kills it (for this user). I'd love to hear from someone with a systemd answer. EDIT: As Gilles points out, this will not be of help if the system is powered on between 9 and 17. Again, this cronjob approach is sub-optimal, but I don't know how to use dropbox with systemd. Having said that, we try your original approach in a wrapper-script which exits if the hour is outside your boundaries: #!/bin/bash hour=$(date +%H) [ $hour -lt 09 -o $hour -gt 16 ] && exit #else exec path-to-dropbox-daemon Modify the crontab slightly * 9-16 * * * $HOME/bin/dropbox-wrapper-script 59 16 * * * sleep 50; pkill -u "$LOGNAME" dropbox-daemon-process-name To me, this isn't pretty. Every minute your script is executed by cron, leaving a few lines of logs behind. But it should be effective.
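The wrapper's hour test can be pulled out into a small function that is easy to check without waiting for the clock; in_window is my own name, and the 9-16 range mirrors the question's 9am-4pm window:

```shell
#!/bin/sh
# in_window HOUR -- succeed only inside the unrestricted 9:00-16:59 window.
in_window() {
  h=${1#0}                  # date +%H zero-pads, so strip a leading 0
  [ "$h" -ge 9 ] && [ "$h" -lt 17 ]
}

# Wrapper usage:
#   in_window "$(date +%H)" || exit 0
#   exec /path/to/dropbox-daemon
in_window 10 && echo 'would run'     # -> would run
in_window 17 || echo 'would exit'    # -> would exit
```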
Only allow process to run between hours x and y
1,541,013,171,000
Given a job specification in crontab, is there any way to get the list of dates and times the job in cron will run in the next week? For example, given a string 10 13 * * *, is there any way to get from cron or a similar builtin or utility the (ideally sorted) list of times when the job will run in the next week?
You can use this perl script : #!/usr/bin/perl # Based on http://search.cpan.org/~pkent/Schedule-Cron-Events-1.8/cron_event_predict.plx # initial release 20091001 use warnings; use strict; use Schedule::Cron::Events; use Getopt::Std; use Time::Local; use vars qw($opt_f $opt_h $opt_p); getopts('p:f:h'); if ($opt_h) { usage(); } my $filename = shift || usage(); my $future = 2; if (defined $opt_f) { $future = $opt_f; } my $past = 0; if (defined $opt_p) { $past = $opt_p; } open (IN, "<$filename") || die "Unable to open '$filename' for read: $!"; while(<IN>) { my $obj = new Schedule::Cron::Events($_) || next; chomp; print "# Original line: $_\n"; if ($future) { for (1..$future) { my $date = localtime( timelocal($obj->nextEvent) ); print "$date - predicted future event\n"; } } $obj->resetCounter; if ($past) { for (1..$past) { my $date = localtime( timelocal($obj->previousEvent) ); print "$date - predicted past event\n"; } } print "\n"; } close IN; sub usage { print qq{ SYNOPSIS $0 [ -h ] [ -f number ] [ -p number ] <crontab-filename> Reads the crontab specified and iterates over every line in it, predicting when each cron event in the crontab will run. Defaults to predicting the next 2 events. -h - show this help -f - how many events predicted in the future. Default is 2 -p - how many events predicted for the past. Default is 0. EXAMPLE $0 -f 2 -p 2 ~/my.crontab \$Revision\$ }; exit; } =pod =head1 NAME cron_event_predict - Reads a crontab file and predicts when events will/would have run =head1 SYNOPSIS cron_event_predict.plx [ -h ] [ -f number ] [ -p number ] <crontab-filename> =head1 DESCRIPTION A simple utility program mainly written to provide a worked example of how to use the module, but also of some use in understanding complex or unfamiliar crontab files. Reads the crontab specified and iterates over every line in it, predicting when each cron event in the crontab will run. Defaults to predicting the next 2 events.
These are the command line arguments: -h - show this help -f - how many events predicted in the future. Default is 2 -p - how many events predicted for the past. Default is 0. Here's an example, showing the default of the next 2 predicted occurrences of each cron job: dev~/src/cronevent > ./cron_event_predict.plx ~/bin/crontab # Original line: 1-56/5 * * * * /usr/local/mrtg-2/bin/mrtg /home/admin/mrtg/mrtg.cfg Thu Sep 26 00:41:00 2002 - predicted future event Thu Sep 26 00:46:00 2002 - predicted future event # Original line: 34 */2 * * * /home/analog/analogwrap.bash > /dev/null Thu Sep 26 02:34:00 2002 - predicted future event Thu Sep 26 04:34:00 2002 - predicted future event # Original line: 38 18 * * * /home/admin/bin/allpodscript.bash > /dev/null Thu Sep 26 18:38:00 2002 - predicted future event Fri Sep 27 18:38:00 2002 - predicted future event And here's an example showing past events too: dev~/src/cronevent > ./cron_event_predict.plx -f 1 -p 3 ~/bin/crontab # Original line: 1-56/5 * * * * /usr/local/mrtg-2/bin/mrtg /home/admin/mrtg/mrtg.cfg Thu Sep 26 00:41:00 2002 - predicted future event Thu Sep 26 00:36:00 2002 - predicted past event Thu Sep 26 00:31:00 2002 - predicted past event Thu Sep 26 00:26:00 2002 - predicted past event # Original line: 34 */2 * * * /home/analog/analogwrap.bash > /dev/null Thu Sep 26 02:34:00 2002 - predicted future event Thu Sep 26 00:34:00 2002 - predicted past event Wed Sep 25 22:34:00 2002 - predicted past event Wed Sep 25 20:34:00 2002 - predicted past event # Original line: 38 18 * * * /home/admin/bin/allpodscript.bash > /dev/null Thu Sep 26 18:38:00 2002 - predicted future event Wed Sep 25 18:38:00 2002 - predicted past event Tue Sep 24 18:38:00 2002 - predicted past event Mon Sep 23 18:38:00 2002 - predicted past event =cut SAMPLE OUTPUT: # Original line: 03 05 * * * /bin/bash command1 Thu Dec 25 05:03:00 2014 - predicted future event Fri Dec 26 05:03:00 2014 - predicted future event # Original line: 10 06 * * * size=$(du 
-h ~/.xsession-errors); [[ $size =~ ^[0-9]+G ]] && :> ~/.xsession-errors Thu Dec 25 06:10:00 2014 - predicted future event Fri Dec 26 06:10:00 2014 - predicted future event # Original line: 21 06 * * 1 perl WWW::Monitor.pl Mon Dec 29 06:21:00 2014 - predicted future event Mon Jan 5 06:21:00 2015 - predicted future event
Get chronological list of dates & times for scheduled tasks in cron
1,541,013,171,000
Debian seems to have directories for cron jobs such as /etc/cron.daily and /etc/cron.hourly, but is there a way for me to automate editing, removing, and updating cron jobs from an automated script? I'm working on a Docker container and I'd like to allow the user to specify the cron frequency of a specific task to be run in the background. Is there a way to do this?
Probably you want to use /etc/cron.d, where you can place full system crontab entries (as you'd put in /etc/crontab) in their own file. Then you can set the intervals, enable, and disable it by manipulating that file.
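For the Docker case in the question, a container entrypoint can render the user-supplied frequency into such a /etc/cron.d fragment. The variable and file names below are made-up examples; note that cron.d entries need a user field and must end with a newline:

```shell
#!/bin/sh
# Render a user-configurable schedule into a cron.d-style fragment.
# CRON_MINUTES and the task path are illustrative names only; in the
# container you would write /etc/cron.d/app-task as root.
minutes=${CRON_MINUTES:-15}
case $minutes in
  ''|*[!0-9]*) echo 'CRON_MINUTES must be a number' >&2; exit 1 ;;
esac

target=$(mktemp -d)/app-task     # stand-in for /etc/cron.d/app-task
printf '*/%s * * * * root /usr/local/bin/task.sh\n' "$minutes" > "$target"
cat "$target"
```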
Add a custom CRON job from the command line?
1,541,013,171,000
I have several shell scripts (bash) which are started by cron. Every time a script is executed I get an email saying "stdin: is not a tty". Can someone please tell me how to fix this? All scripts run successfully but these mails are killing my email folder. I use Debian Wheezy. "/root/.bashrc" is empty. Cron entries are like: /bin/bash --login /root/script.sh > /dev/null Even this script produces the error message: #!/bin/bash ls Content of .profile: # ~/.profile: executed by Bourne-compatible login shells. if [ "$BASH" ]; then if [ -f ~/.bashrc ]; then . ~/.bashrc fi fi mesg n
Something in your .bashrc is assuming that the shell is running on a terminal. That's perfectly fine: .bashrc is supposed to run only in interactive shells, and interactive shells are supposed to run only on terminals. The problem is that you're systematically including .bashrc from .profile. That's wrong: you should only include .bashrc in interactive shells. Change your .profile to # Bash doesn't load its interactive initialization file if it's invoked as # a login shell, so do it manually. case $- in *i*) if [ -n "$BASH" ]; then . ~/.bashrc; fi;; esac Move mesg n into .bashrc: it's a terminal-related command, not a session-related command. If you have environment variable definitions in your .bashrc, move them to .profile. The .profile file is for things that are executed when your session starts, typically mainly environment variable definitions, used by any application that you'll run during the session. The .bashrc file is the configuration file for bash when running interactively, it typically contains terminal setup, alias definitions, shell options and completion settings, and other things related to the interactive use of the shell. For background, see: Difference between Login Shell and Non-Login Shell? How to check if a shell is login/interactive/batch Is there a ".bashrc" equivalent file read by all shells? (and follow the links)
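You can see why the case $- guard works: $- holds the shell's option flags and contains i only in interactive shells, so the include is skipped for cron jobs and scripts:

```shell
#!/bin/sh
# $- lists the shell's option flags; 'i' appears only in interactive
# shells, so a non-interactive child shell reports "batch" here:
sh -c 'case $- in *i*) echo interactive ;; *) echo batch ;; esac'   # -> batch
```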
"stdin: is not a tty" mails for running scripts as cron jobs
1,541,013,171,000
I want to run a nodeJS web server on a couple of machines which I don't have sudo access on. What would be a good way to do this? The two requirements are: running the service without being logged in (obviously) automatically restarting if the machine is rebooted. For 1., I've typically used nohup but is this a reasonable approach for production instances? For 2., I can (hackily) add a crontab that starts the service, which will simply fail if it's already running. Is there a better way? These servers are RHEL, but I'd prefer solutions that would also work for Ubuntu, if possible.
You could use @reboot as the crontab schedule field: the job then runs exactly once per boot, so you can be relatively sure it hasn't already been started.
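If you also want to be sure a second copy never starts (for instance if the job is later added to another schedule line), flock from util-linux provides an unprivileged lock; the lock and server paths below are assumed examples:

```shell
#!/bin/sh
# Crontab entry (no root required); the lock and server paths are examples:
#   @reboot flock -n /tmp/myserver.lock node /home/me/server.js
# flock -n exits immediately rather than starting a second copy while
# the lock is held. Demonstration with a throwaway lock file:
lock=$(mktemp)
flock -n "$lock" sh -c "flock -n '$lock' true && echo free || echo held"
# -> held
rm -f "$lock"
```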
Running services without sudo
1,541,013,171,000
I'm running Ubuntu 12.04 and bash. I've written a pair of shell scripts that allow me to set an alarm which, after ringing, unsets itself. The first, alarmset, allows me to enter a time and modifies the alarm line in my user crontab. That line launches the second script, alarmring, which launches a radio player in a browser window and then comments out the alarm line in the crontab. alarmring is behaving strangely. If I run it myself directly, it performs both actions: it launches the browser window and edits the crontab. But if I run alarmset, when the crontab launches alarmring at the appointed time, alarmring edits the crontab, but does not launch the browser window. Finally, when crontab runs alarmring, it ignores the set -x command, whereas when I run it directly, set -x is executed. So it's as though the crontab is skipping the first ten lines. Any ideas on what's going on? I'll paste the two scripts and the crontab below. alarmset: #!/bin/bash # alarmset set -x usage() { echo "alarmset [ hour minute | -h ]" } editcrontab() { echo $'/alarmring/s/^\(.*\)\(\* \* \*\)/'$2$' '$1$' \\2/' > ~/Documents/crontab_script.txt crontab -l | sed --file=/home/username/Documents/crontab_script.txt > ~/Documents/new_crontab.txt crontab ~/Documents/new_crontab.txt } ### MAIN case $# in 2 ) editcrontab $1 $2 ;; * ) usage exit ;; esac set +x alarmring: #!/bin/bash # alarmring set -x env DISPLAY=:0 # Ring the alarm : launch BBC World Service in Firefox firefox --new-window http://www.bbc.co.uk/radio/player/bbc_world_service # Unset the alarm : comment out the alarm line in the crontab crontab -l | sed '/alarmring/s/^/#/1' > ~/Documents/new_crontab.txt crontab ~/Documents/new_crontab.txt set +x crontab: SHELL=/bin/bash PATH=~/bin:/usr/bin:/bin # # m h dom mon dow command 53 07 * * * /home/username/bin/alarmring
Entries in the system's crontab (/etc/crontab) or in the directories (/etc/cron.d -or- /etc/cron.hourly, etc.) run as root. It's probably the case that root doesn't have the ability to access a given user's display by default. I'd suggest making crontab entries using the user's ability to add crontabs. This can be accomplished by using the command crontab -e in a shell logged in as the specified user. The command crontab -e will open a text editor (usually vi or vim) where you can add entries using the same syntax that you'd use to add entries to the system's /etc/crontab file. This tutorial covers the basics of adding crontab entries. Also when adding a user's crontab via crontab -e and your script needs access to your display (say you're launching a GUI), you'll need to set the environment variable (export DISPLAY=:0.0) so that the GUI gets directed to the correct display. For example % crontab -e And add the following line: 53 07 * * * export DISPLAY=:0.0;/home/username/bin/alarmring
Strange crontab-script interaction (bash)
1,541,013,171,000
I need to add a crontab entry which runs every quarter, on the second Sunday, at 02 am. Which of the below is correct? The OS is AIX. 00 02 8-14 */3 0 && /myscript.sh or 00 02 8-14 */3 * [ "$(date '+\%a')" == "Sun" ] && /myscript.sh
The minutes, hours and month are correct. There is an interaction between day of week and day of month. Your first version will run on every day from 8th to 14th, and on every Sunday (but not twice if e.g. the 11th is a Sunday). From man -s 5 crontab: Note: The day of a command's execution can be specified by two fields — day of month, and day of week. If both fields are restricted (i.e., aren't *), the command will be run when either field matches the current time. Your second version may have two problems: (1) It may be run by /bin/sh, or by some shell specified in the crontab. I'm not sure how portable the == is. (2) The "Sun" is locale-specific. I would probably sidestep these with [ "$( date '+\%u')" -eq 7 ], but explained with a comment. This results in 00 02 8-14 1,4,7,10 * [ "$(date '+\%u')" -eq 7 ] && /myscript.sh
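The %u test can be verified without waiting for a Sunday by evaluating it for fixed dates with GNU date (on a Linux box; AIX's date lacks -d). For example, the second Sunday of January 2024 fell on the 14th, inside the 8-14 window:

```shell
#!/bin/sh
# date +%u prints the ISO weekday (1 = Monday ... 7 = Sunday), so the
# test is locale-independent, unlike comparing %a against "Sun".
date -d 2024-01-14 '+%u'   # -> 7 (a Sunday, and within days 8-14)
date -d 2024-01-13 '+%u'   # -> 6 (the Saturday before)
```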
Crontab scheduling for specific option
1,541,013,171,000
I have a bash script in crontab that runs @reboot: The script itself contains a wget command to pull and download a file from the internet. When I run my script after signing in and opening the terminal, it works and saves the files properly (html, png). But when I reboot my system, the script runs and saves them as plain text files with no content. Solved --> I used the sleep function in crontab and it works!!! New to linux and code, so thanks for all the feedback! I'm going to explore the /etc/network/if-up.d/ options as well.
The issue is almost certainly that your @reboot cron job has started before your network interfaces have come up. This is, in general, a well-documented shortcoming of cron. It doesn't mean the @reboot facility is useless, it just means you need to understand how it works, and how to work around it when it fails - as it has in your case (probably :). There are at least 2 ways to do this: use sleep in your @reboot job to give the network more time to get up. Your crontab entry will look something like this: @reboot sleep 10; /your/bash/script/as-it-is-now I suggested the value of 10 here to give the interface 10 seconds to come up; YMMV, so experiment with different values. following up on @confetti 's suggestion (and with thanks to @Celada), put your script in /etc/network/if-up.d. Following is a prototype that may be useful. Note that it only runs the first time your system comes up (like @reboot, and NOT each time the network interface is brought up): #!/bin/sh NWKSTATUS=/var/run/the-network-is-up # note that /var/run is a temp fs, and so a system shutdown # will effectively erase our flag file, 'the-network-is-up' case "$IFACE" in lo) # Exclude the loopback interface; we won't consider it # as it's not a true interface. We set the flag only # when a true network interface comes up exit 0 ;; *) ;; esac # if the flag file exists, we're done here # otherwise, we'll create it if [ -e $NWKSTATUS ]; then exit 0 else touch $NWKSTATUS fi # add your script here... So - put all of the above into a file (e.g. setnwkstatus.sh), then save it in the folder /etc/network/if-up.d/ and make it executable (i.e. sudo chmod +x /etc/network/if-up.d/setnwkstatus.sh )
wget saving files as empty plain text files on boot
1,541,013,171,000
I have users root and user1 All of my python scripts have been created by user1 I have created a bash file that needs to be automated. The bash file calls my python scripts I have added my bash call to the cron file However, my python environment for root is not the same as it is for user1 (different versions, library packages, etc, etc). So when the cron kicks off, it gives me python errors like "library not found" because the root environment is being used. How can I ensure that my cron commands run my python scripts under the user1 python environment and not root? Note that I've already tried using some variation of su in my cron file but it always asks for a password and I need this to be a fully automated process.
Have you tried using sudo su? sudo su -l "user1" -c "/path/to/bashscript.sh" Alternatively you could set the required environment at the top of your crontab: SHELL=/usr/local/bin/bash PATH=<user1 path> LOGNAME=user1 USER=user1 HOME=/home/user1 And if necessary source your user1 config file(s) prior to executing your bash file: 0 12 * * * . /home/user1/.bash_profile; /path/to/bashscript.sh
Run cron jobs in intended python environment?
1,541,013,171,000
etckeeper should log all config changes on a system. But there is one important setting that is outside of /etc/: crontab -e edits a file inside: /var/spool/cron/crontab/ so there are important config files on the server. How do I include those files to the git repository of etckeeper?
It's still a bit awkward, but if /var and /etc are in the same filesystem you could create a directory under /etc with hard links to files that aren't in /etc. To make it easy to restore, you could do something like /etc/extrafiles/var/spool/cron and then use something like cp -l to create the hard links to files. On recovery you could use cp -l again (or rsync with --link-dest) to copy the tree under extrafiles back out into all the right places.
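The reason this works is that a hard link is just a second name for the same inode, so etckeeper sees every change made through the original path. A scratch-directory demonstration:

```shell
#!/bin/sh
# Hard links are two names for one inode: writes through either name hit
# the same file, which is what lets etckeeper track content kept in /var.
d=$(mktemp -d)
echo '0 2 * * * /usr/local/bin/backup' > "$d/crontab-mu"
mkdir "$d/extrafiles"
cp -l "$d/crontab-mu" "$d/extrafiles/crontab-mu"    # link, not a copy

stat -c %i "$d/crontab-mu" "$d/extrafiles/crontab-mu"  # same inode twice
rm -rf "$d"
```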
Let etckeeper monitor /var/spool/cron too
1,541,013,171,000
I am trying to familiarize myself with crontab. I know that it is supposed to send an email containing the output of jobs to the user that scheduled them however, I can see in the syslog that the address crontab is sending emails to is not a "local" one (as if I were to type mail -s "email here" username) but an external email address (like [email protected]). Can someone tell me from where this email is derived and how I can set the users' email address to something else? This could be because I have a top-level domain associated with my instance (I can see in the log the emails are being sent there), however, I actually have two domain names pointed to this server so I don't know how crontab is choosing or what would happen if there were no domain on this machine. In short I am just trying to figure out where this default email address is set.
man 5 crontab If MAILTO is defined (and non-empty), mail is sent to the user so named. MAILTO may also be used to direct mail to multiple recipients by separating recipient users with a comma.
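Absent a MAILTO, cron hands mail for the job owner's bare username to the local MTA; with Postfix, the domain is then typically appended from its myorigin/mydomain settings, which is why a hosted domain name shows up. To pick the address yourself, set MAILTO at the top of the crontab (the address below is an example):

```
# Applies to the job lines that follow it.
MAILTO=ops@example.com
0 3 * * * /usr/local/bin/backup.sh

# An empty MAILTO discards job output instead of mailing it:
# MAILTO=""
```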
Where does crontab get the email address of the user it emails / where are user email addresses set?
1,541,013,171,000
I have a cron job which runs a long script producing lots of output. Some of the output is lines delimited by single carriage returns; when run from the command line, these make the successive lines overwrite each other, providing progress output without overly polluting the backscroll. However, when looking at the output from the cron job, I want to see all these lines without missing any. Until recently, when I used mailx to print the output reports from the cronjob, it would replace the control characters with reverse-video ^M to highlight them. This was the behavior I wanted, as it left all of the lines visible. Now, however, something unknown has changed (a version upgrade?), and mailx prints the control characters as-is, causing them to overwrite each other when mail is printed. How can I reverse this behavior and make mailx replace control characters again?
It turned out that the issue lay with mailx's pager setting. It began using more for some reason, when previously it had used less (which does the escaping). Linking more to less again restored the old behavior.
Control character handling in cron/mailx
1,541,013,171,000
At my job, we have a central Backup server, running Debian Wheezy, and on-site servers at each site, also running Debian Wheezy. A few weeks ago, the central office tech emailed me that the backup did not complete properly the night before. We have been troubleshooting ever since, but still cannot seem to solve the issue. The only thing spat back was the following in a cron email: rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(549) [generator=3.0.9] rsync error: received SIGUSR1 (code 19) at main.c(1316) [receiver=3.0.9] Googling that phrase leads to pretty much nothing. I found a post from 2002 about removing the -v switch, but that's not being used in the script. The script that runs every night is below: #!/bin/sh set -e x="delete --exclude-from=r_filter --delete-excluded" rsync -aq --$x site1.company.com:/etc /BACKUPS/site1 rsync -aq --$x site1.company.com:/home /BACKUPS/site1 It is set to run at 3:00am, Monday to Friday, from the central backup server. If they try running it manually during the day, it runs fine (cause most of the files were backed up previously?). It's using the -a switch, so I assume it can archive opened files? That's about all I can think of. What would be our next step for figuring this out?
If something happens when you run a job in a crontab at a certain time and it doesn't happen if you make a test where you run the job in one minute, there are two likely possibilities: There's another process on a crontab that somehow interferes with yours. There's a human process going on at that time, such as the cleaning staff unplugging a computer to plug the vacuum cleaner. Your rsync process is receiving signals at a certain time of the night. The first thing I'd look for is another process in a crontab sending signals that it shouldn't be sending. (If something runs fine from the command line but fails when run from cron, that's a whole different kettle of fish.)
RSync Error in script
1,541,013,171,000
I have cron and sometimes at jobs on an Ubuntu 14.04. The jobs' output should be mailed to me, but that fails. I made a .forward in my home dir. Mail explicitly sent to the local unix user is forwarded to my corp mail fine: echo [email protected] > .forward date | mail archy I receive the mail at [email protected]. However, cron and at send mail to the (non-existing) [email protected]. What am I missing here? edit: mailer is postfix
for cron you can to add a line to your crontab: [email protected] maybe there is already a wrong MAILTO line in your crontab? But it might be better to find a universal solution, such as this: Add line(s) to /etc/aliases (one for each user for which you wish to substitute a destination email address): root: [email protected] Rebuild the database with newaliases Reload postfix with postfix reload
cron and at send mail to wrong user
1,541,013,171,000
I have quite a strange problem using cron. I distilled it thus far: I created following simple bash script in /home/user1/cron_dir/cron.sh: #!/bin/bash echo "Success" As user1 I created following crontab: */1 * * * * sh /home/user1/cron_dir/cron.sh This gets installed and runs as expected (getting a "Success"-message from cron in my local mail). However, if I log out from my user1 account, wait a couple of minutes to perform the cron job, log back in and check my local mail, I get: sh: 0: Can't open /home/user1/cron_dir/cron.sh Edit: Thanks to garethTheRed I realized the problem: my home directory is encrypted. Of course the directory is only accessible when I'm logged in.
Answering this question because it'd be bad form to just edit your question to put the answer inline (really, just answer your own question, it's okay if done in good faith). I had a similar problem - jobs started via cron appearing to work the first few times, but then failing. The symptoms all traced back to an inability to access the user's home directory and files within it. The same script and setup had worked just fine on a previous Ubuntu box. The answer is that yes, if you chose to encrypt your $HOME directory during Ubuntu install, you'll find that cron jobs will not be able to access files under it, unless you happen to have manually logged into the machine to cause the file system to be decrypted and kept mounted. I said yes to that option because it sounded like a good idea, but I'm not firmly wedded to it. The solution I'm going with is to not encrypt my home directory; which means I have to remove encryption from it. It looks like that's a careful process of shifting all the relevant contents out of the folder, unmounting it, and shifting them all back in - not pleasant. The basic process I followed for this is below. NB: Be very careful and read through all the steps before following them, especially the final ones as I suspect that once you uninstall ecryptfs it will be very tricky to get your old encrypted home folder back. If you're not sure of what you're doing, don't try this as the risk of data loss is very real. I only plowed on ahead because I knew I had backups and could reinstall easily. Add a new user fixer using adduser (because you need to be logged in as somebody other than yourself to shift your home directory around), and give them sudo rights Using sudo, create a new folder sudo mkdir /home/chrisc.unencrypted to transfer the contents of your home directory to Copy the contents of my home directory to the new unencrypted folder using rsync -aP /home/chrisc/ /home/chrisc.unencrypted. Make sure that all the hidden files have moved too (e.g. 
.bash_profile, etc.) Remove the /home/chrisc.unencrypted/.ecryptfs folder Log out (and possibly reboot, as you need the encrypted /home/chrisc folders to be unmounted) Log in again as fixer Use sudo su to run as root Check that the contents of /home/chrisc.unencrypted match what they should be. This is quite important, because the next few steps will remove your ability to see the original home folder Rename the old (encrypted) home using mv /home/chrisc /home/chrisc.old. You may find you need to reboot first to ensure that nothing is using that folder (otherwise you'll get a device in use message preventing the rename). Rename the unencrypted home folder to be the user's default folder mv /home/chrisc.unencrypted /home/chrisc Uninstall the ecryptfs tools using apt-get remove ecryptfs-utils libecryptfs0. If I didn't do this, then logging in as chrisc, I saw an empty home directory (as if it was still mounting the encrypted home directory and hiding my actual unencrypted home directory). I had to reboot to get it to be unmounted and the real unencrypted /home/chrisc to be visible. Log in again as your original user and check It may be possible to remove the configuration folder for ecryptfs, or that there's a per-user configuration somewhere that says "when you log in as chrisc, mount the ecryptfs volume available at /home/chrisc/.Private" If you could sever that link, then you probably wouldn't need to uninstall ecryptfs. If your new home folder doesn't look like it contains the right things, you should be able to restore the encrypted home folder by reversing the moves - making chrisc.old be chrisc again, and the unencrypted home folder chrisc.unencrypted. But that will only work up until the point you uninstall ecryptfs.
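Before doing the rename steps, it is worth double-checking that the hidden files really were copied, because a plain ls hides them. A throwaway demonstration of the same idea (all paths under /tmp are invented for this example; nothing here touches a real home directory):

```shell
# Set up a fake "home" containing one hidden and one visible file
mkdir -p /tmp/src /tmp/dst
touch /tmp/src/.bash_profile /tmp/src/notes.txt

# Copying the directory with a trailing "/." includes dotfiles,
# just like rsync -a does
cp -a /tmp/src/. /tmp/dst/

ls /tmp/dst      # hidden file is NOT shown here
ls -A /tmp/dst   # -A reveals it, so use this to verify the copy
```

The same ls -A check against /home/chrisc.unencrypted would confirm step 3 worked before you proceed to the renames.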
Cron can't access my home directory when I'm logged out
1,541,013,171,000
How can I execute logrotate at a specific time (3h30) each day? Specific details on how to do this would be appreciated. I'm on Debian.
Step #1 - create script

You can create a file such as this:

$ sudo gedit /etc/cron.d/logrotate

And add these lines to this file:

#!/bin/bash
/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

Step #2 - add script to crontab file

Then create a crontab entry that runs this script at 3h30 each day. To do this 2nd step edit the file /etc/crontab:

$ sudo gedit /etc/crontab

And add this line:

# m h dom mon dow user  command
30 3 * * * root /etc/cron.d/logrotate

NOTE: You might need to omit the user in some situations, like this:

# m h dom mon dow command
30 3 * * * /etc/cron.d/logrotate

Step #3 - make script executable

Lastly make the logrotate shell script (/etc/cron.d/logrotate) executable:

$ sudo chmod +x /etc/cron.d/logrotate

References

HowTo: The Ultimate Logrotate Command Tutorial with 10 Examples
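You can see the error-reporting branch of that script in isolation, with false standing in for a failing logrotate run and echo standing in for logger (so the alert lands on stdout instead of syslog). This is only a sketch of the exit-status pattern, not a real rotation:

```shell
# `false` stands in for a failing /usr/sbin/logrotate invocation
false
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    # `echo` stands in for `logger -t logrotate` so the alert is visible here
    echo "ALERT exited abnormally with [$EXITVALUE]"
fi
```

Running it prints the alert with the captured exit status, exactly as the real script would hand it to logger.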
Daily logRotate for apache at specific time
1,541,013,171,000
I am trying to add a job to my crontab in FreeBSD but it is not working : I have used this to add the job: sudo crontab -e -u vaibhav @daily /home/vaibhav/applications/comparison/scrapy but it is not working. Is there any way to check whether crontab is able to run this script, like --run-parts in Ubuntu?
While setting up a cron job you have to keep a few things in mind:

1. The user for which you are trying to set the cron must have permission over the script, i.e. executable permission: chmod +x /path/to/scrapy
2. The other important thing is to make sure that, run manually, the script performs the action it is intended to.
3. Make sure the environment variables are the way your script requires; for example, set the PATH variable by appending the following lines: PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin export PATH
4. If the cron still doesn't execute, check the cron logs and see what kind of errors they show.
5. Try logging the output of your script using the following line in cron: @daily /bin/sh /home/vaibhav/applications/comparison/scrapy > /mylog.log
6. As far as I can see it must be a shell script, so you should define it in the cron job by giving the complete path to the sh binary, and scrapy must be in a .sh file: @daily /bin/sh /home/vaibhav/applications/comparison/scrapy.sh

More about cron job
How to add crontab in FreeBSD
1,541,013,171,000
I'm trying to get this simple bash script to work within a cronjob. It's used to generate static nginx webserver statistic pages using GoAccess. I tried everything I know to resolve this issue, it just won't work as a cronjob. It's executed perfectly in console. It's even working as cronjob when I remove the call to goaccess (e.g. put an echo there to see if the call is built correctly). Any help on this? The script runs for every file but the resulting files only contain the "how to use" instructions from goaccess which appear when you call it without arguments. System is a vServer running Debian 6.0. I use GoAccess 0.5 and nginx 1.2.1 Here's the script: #!/bin/bash PATH=/bin:/sbin:/usr/bin:/usr/sbin:/home/ _logdir="/srv/www/logs" _logfiles="${_logdir}/access_*.log" for logfile in $_logfiles; do _curfilename="${logfile##*/}" # All these commands work flawless #echo "$logfile" >> /home/debug.log #echo "$_curfilename" >> /home/debug.log #echo "goaccess -a -f $logfile > /srv/www/stats/${_curfilename}.html" >> /home/debug.log; # This one fails goaccess -a -f "${logfile}" > "/srv/www/stats/${_curfilename}.html"; done Here's the crontab line for once an hour (i used * * * * * for debugging): 0 * * * * /bin/bash /home/run_webstats_update.sh Error output in cron.log: /USR/SBIN/CRON[26099]: (root) CMD (/bin/bash /home/update_webstats.sh) /USR/SBIN/CRON[26098]: (CRON) error (grandchild #26099 failed with exit status 1)
This should do it. Just make sure you got the right paths. BTW, GoAccess is nifty.

#!/bin/bash
PATH=$PATH:/home/

_logdir="/srv/www/logs"
_logfiles="${_logdir}/access_*.log"

for logfile in $_logfiles; do
    _curfilename="${logfile##*/}"
    cat "${logfile}" | goaccess -a > "/srv/www/stats/${_curfilename}.html"
done

You could even parse compressed data files:

zcat -f access.log* | goaccess -a > "/srv/www/stats/${_curfilename}.html"
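The reason the compressed variant works on a mixed set of rotated logs is zcat's -f flag, which decompresses gzip data but passes plain files through untouched. A quick check with a made-up file name:

```shell
# Create an ordinary, uncompressed "log" file
printf 'line one\n' > /tmp/plain.log

# Not gzip data, so -f simply passes it through unchanged;
# without -f, zcat would refuse with "not in gzip format"
zcat -f /tmp/plain.log
```

So zcat -f access.log* happily handles access.log, access.log.1 and access.log.2.gz in one stream.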
Bash script doesn't work as cronjob
1,541,013,171,000
I would like to know how accurate and certain cron timing is. Lets say I got cronjob to run at 11:00:00 and then I pass time as a parameter for this cron job. Would it be possible, in any scenario, that the time in the parameter would be different, like 11:00:01 for example. Can I be 100% sure that the execution will happen at the exact time? I am concerned about things like heavy server load or any unforeseen circumstances. Can anything affect it? I am assuming it can. I would also like to know how to pass planned time of execution as a parameter, instead of actual time. Lets assume that actual execution was late, like 11:00:01. How can I pass planned time that is defined in cronjobs, so 11:00:00 and how to do it when cron job is set to run let's say every 10 min? Is it possible to get planned time in that case, so that each time script was executed it has planned time as a parameter? 10 * * * * /tmp/script.sh $(date)
Just by checking through my own syslog I can see that cron jobs usually start 1 second past the minute (Ubuntu 20.04). (Eg: ten past five triggers at 05:10:01.) On my system this is pretty stable, but I wouldn't guarantee it. Under heavy load this might be worse, because even if the cronjob triggered at the right time, extremely heavy load could delay execve for more than a second. But then such extreme load would slow your script down so much the start time would become largely meaningless. Cron can and does occasionally skip jobs; for example, if the system is switched off it will never catch up. I would be especially careful of this note in the manual:

Note that this means that non-existent times, such as "missing hours" during daylight savings conversion, will never match, causing jobs scheduled during the "missing times" not to be run. Similarly, times that occur more than once (again, during daylight savings conversion) will cause matching jobs to be run twice.

Make sure your code won't break if it runs at 05:10:02 instead of 05:10:01. There's no elegant way to get cron to tell you which job is running. You can set up many identical jobs to run at different times, and each job could pass in the time as an argument. I see two options:

Setup a crontab with many entries

Do check the notes below. Yes, there are 1440 minutes in a day, so if you need a crontab to run this minutely you might need a script to generate your crontab:

#!/bin/bash
for hour in {0..23} ; do
    for minute in {0..59} ; do
        echo "${minute} ${hour} * * * root /path/to/script.sh ${hour}:${minute}:01"
    done
done > /etc/cron.d/my-job

You'll end up with something like this:

01 00 * * * root my-job 00:01:01
02 00 * * * root my-job 00:02:01
03 00 * * * root my-job 00:03:01
...

Create a wrapper script to find the nearest expected run

It might be better to create a wrapper script which checks the current time and finds the nearest entry.
If your job is set to run every 15 minutes you could use the example provided here:

#!/bin/bash
curdate=`date "+%s"`
run_time=$(($curdate - ($curdate % (15 * 60))))
run_time_arg=$(date -d"@$run_time" "+%H:%M:%S")
/path/to/script.sh $run_time_arg

Note this does use behaviour specific to GNU date and may not work on all implementations
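You can check the rounding arithmetic from that wrapper without waiting for cron by feeding it a fixed epoch instead of the current time (the number below is an arbitrary example, not from the original answer):

```shell
epoch=1000000000                 # a fixed stand-in for `date "+%s"`
interval=$((15 * 60))            # 15 minutes expressed in seconds
run_time=$(( epoch - (epoch % interval) ))
echo "$run_time"                 # 999999900, the previous 15-minute boundary
```

Subtracting the remainder always snaps down to the most recent boundary, so even a run that starts a couple of seconds late still reports the planned slot.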
Cronjob timing accuracy - planned time as a parameter
1,541,013,171,000
When tailing /var/log/cron i noticed that the cron job is failing due to PAM permissions. In my access.conf i do have the following uncommented to make sure (or what i thought was making sure) that root did have permissions to run cron jobs. # User "root" should be allowed to get access via cron .. tty5 tty6. + : root : cron crond :0 tty1 tty2 tty3 tty4 tty5 tty6 I'm on Centos 7 Kernel 3.10.0-693.21.1.el7.x86_64, we have connected it to our Windows active directory instanc via realm, sssd, kerberos. My installations steps can be found here Best Auth Mech to Connect to Windows AD Im at a loss at the moment and cant figure out what may be causing this. I double checked that roots password didn't expire and it had not. Current root access is configured though windows security groups. Any help would be greatly appreciated! EDIT I added debug to end of my pam_access.so and got the following crond[17411]: pam_access(crond:account): login_access: user=root, from=cron, file=/etc/security/access.conf crond[17411]: pam_access(crond:account): line 60: - : ALL EXCEPT wheel shutdown sync : LOCAL root crond[17411]: pam_access(crond:account): list_match: list=ALL EXCEPT wheel shutdown sync, item=root crond[17411]: pam_access(crond:account): user_match: tok=ALL, item=root crond[17411]: pam_access(crond:account): string_match: tok=ALL, item=root crond[17411]: pam_access(crond:account): user_match: tok=wheel, item=root crond[17411]: pam_access(crond:account): string_match: tok=wheel, item=root crond[17411]: pam_access(crond:account): user_match: tok=shutdown, item=root crond[17411]: pam_access(crond:account): string_match: tok=shutdown, item=root crond[17411]: pam_access(crond:account): user_match: tok=sync, item=root crond[17411]: pam_access(crond:account): string_match: tok=sync, item=root crond[17411]: pam_access(crond:account): user_match=1, "root" crond[17411]: pam_access(crond:account): list_match: list=LOCAL root, item=root crond[17411]: pam_access(crond:account): from_match: 
tok=LOCAL, item=cron crond[17411]: pam_access(crond:account): string_match: tok=LOCAL, item=cron crond[17411]: pam_access(crond:account): from_match=1, "cron" crond[17411]: pam_access(crond:account): access denied for user `root' from `cron'
I ended up rearranging my access.conf to the below. In a sense I put the cron entry as the second entry in the config file, which seems to correctly set the permissions for root to access cron.

#
# Disallow non-root logins on tty1
#
#-:ALL EXCEPT root:tty1
#
# User "root" should be allowed to get access via cron .. tty5 tty6.
+ : root : cron crond :0 tty1 tty2 tty3 tty4 tty5 tty6
#
# Disallow console logins to all but a few accounts.
#
-:ALL EXCEPT wheel shutdown sync:LOCAL root

If someone knows for certain, please confirm, but I suspect that permissions depend on the order in which rules are entered in the config: even if a later entry grants you access, a line before it that denies access wins, because pam_access acts on the first rule that matches.
(root) FAILED to authorize user with PAM (Permission denied)
1,541,013,171,000
I have a command in my crontab to monitor a service (specifically, check whether the Tor version of my website can still be accessed): this monitoring command succeeds if it can access the site, and fails otherwise (and I get an email). However, because of intermittent failures of Tor, I get emails every now and then, ever for fairly short downtimes. I would like to be notified if this monitoring command in my crontab has been failing for multiple consecutive times (say, 10 times), so I would only be notified for longer outages. Of course, I could write a custom script to do this, storing the number of failures in a temporary file, etc., but as this looks like a pretty common need, I thought some standard solution for this may already exist (in the same way that moreutils' chronic already exists to serve a similar but different purpose.) Is there a wrapper script such that issuing wrapper COMMAND will run COMMAND and succeed unless the last 10 invocations of COMMAND have failed, in which case it should return the last error code and the output of the failed invocations?
The following script can be used as the wrapper that you describe. It saves the standard output and standard error streams of the given command to a state directory ($HOME/states) and also stores the number of failed runs. If the number of failed runs of the command exceeds 10 (or whatever number is given to the -t command line flag), it will provide some output (on its standard error stream). In all other cases, no output will be provided. The script exits with the same exit status as the given command.

Example use:

$ sh ./script.sh -t 2 sh -c 'echo "this will fail"; cd /nowhere'
$ sh ./script.sh -t 2 sh -c 'echo "this will fail"; cd /nowhere'
FAILED 2 times: sh -c echo "this will fail"; cd /nowhere
f88eff95bba49f6dd35a2e5ba744718d
stdout
--------------------
this will fail
stderr
--------------------
sh: cd: /nowhere - No such file or directory
END

The script itself (relies on md5sum from GNU coreutils):

#!/bin/sh

statedir="$HOME/states"

if ! mkdir -p "$statedir"; then
    printf 'Failed creating "%s"\n' "$statedir"
    exit 1
fi

max_tries=10

while getopts 't:' opt; do
    case "$opt" in
        t) max_tries=$OPTARG ;;
        *) echo 'error' >&2
           exit 1
    esac
done
shift "$(( OPTIND - 1 ))"

hash=$( printf '%s\n' "$@" | md5sum | cut -d ' ' -f 1 )

"$@" >"$statedir/$hash".out 2>"$statedir/$hash".err
code=$?

if [ -f "$statedir/$hash" ]; then
    read tries <"$statedir/$hash"
else
    tries=0
fi

if [ "$code" -eq 0 ]; then
    echo 0 >"$statedir/$hash"
    exit 0
fi

tries=$(( tries + 1 ))
printf '%d\n' "$tries" >"$statedir/$hash"

if [ "$tries" -ge "$max_tries" ]; then
    cat >&2 <<END_MESSAGE
FAILED $tries times: $@
stdout
--------------------
$(cat "$statedir/$hash".out)
stderr
--------------------
$(cat "$statedir/$hash".err)
END
END_MESSAGE
fi

exit "$code"
Warn in crontab if command has failed multiple consecutive times
1,541,013,171,000
In my crontab, I reference a shell script, print-size, that contains the following lines.

#!/bin/sh
tmux new-session -t check-size -d
tmux send-keys -t check-size 'echo $COLUMNS $LINES' C-m

When this script executes as a cron job, it prints 80 23, apparently because the default terminal size is 80x24. If I execute this shell script from a terminal window, it prints the size of that terminal window (minus one line for tmux's status line). Is there a way to influence the size of the tmux window so the above script will print something different, say 132 42, in a cron job? If it matters, this is for Ubuntu 14.04 but I suspect the same behavior in any *nix.
Checking the source code is the way to go: tmux only looks at the system's notion of the size in check-size, and before that, when attaching to or creating a session, it starts with 24x80. The latter is configurable with the command-line -x and -y options. The manual page lists this in new-session: The new session is attached to the current terminal unless -d is given. window-name and shell-command are the name of and shell command to execute in the initial window. If -d is used, -x and -y specify the size of the initial window (80 by 24 if not given).
How do I set tmux's window size in a session started by cron?
1,541,013,171,000
Debian has a cronjob /etc/cron.d/mdadm that starts a raid check (& resync?). It costs a lot of IO and can take up to 96 hours on 3TB disks. At this time the performance of the service will go really down. My question is: As far as I know, Linux will immediately restore the failed RAID. Is it really necessary to run this check? If so, why?
No, it's not really necessary. I used to disable it on most systems. It can, however, be useful. Linux mdadm RAID will only detect errors that occur while the RAID filesystem is being read or written to. This mdadm raid check cron job, just causes the entire raid array to be read so that read errors can be detected. In a similar fashion, both btrfs and zfs have a scrub command to cause all of the data on them to be read....and reading data on those filesystems causes checksums to be verified, thus detecting any errors even on files that don't get accessed very often. zfs scrub or btrfs scrub are usually run weekly or monthly from cron.
Soft raid1 schedule resync
1,405,071,060,000
I recently migrated from vixie-cron to fcron, on my laptop. How does fcron know whether a particular job was run in a given period of time? As a file can change while fcron is running, what constitutes a different job? If I change, for example, from running daily to weekly, will it be re-run?
During initialization (and at any time a user runs fcrontab -z), fcron loads and compiles the fcrontabs. Fcron then computes the time before the next job execution and sleeps for that time. The time remaining until next execution is saved each time the system is stopped.
How does fcron know whether the job was run?
1,405,071,060,000
I wrote the following crontab task which should switch my backgound image every 10 minutes on linux mint: */10 * * * * /home/me/Pictures/wallpapers/switcher.sh >> /home/me/Logs/wallpaper.log 2>&1 Which calls this shell script: #!/bin/bash gsettings set org.cinnamon.desktop.background picture-uri $(/usr/bin/python3 /home/me/Pictures/wallpapers/spaceBack.py) In the log file I get this error message: (process:18951): dconf-CRITICAL **: 14:00:02.264: unable to create file '/home/me/.cache/dconf/user': Permission denied. dconf will not work properly. (process:18951): dconf-CRITICAL **: 14:00:02.265: unable to create file '/home/me/.cache/dconf/user': Permission denied. dconf will not work properly. (process:18951): dconf-CRITICAL **: 14:00:02.265: unable to create file '/home/me/.cache/dconf/user': Permission denied. dconf will not work properly. (process:18951): dconf-WARNING **: 14:00:02.265: failed to commit changes to dconf: Cannot autolaunch D-Bus without X11 $DISPLAY I only get this error when running the shell script via cron (it works fine via terminal). calling ls -la /home/me/.cache/dconf/ Returns drwx------ 2 root root 4096 Jul 6 16:13 . drwx------ 48 me me 4096 Jun 29 15:33 .. -rw------- 1 root root 2 Jul 6 16:13 user
DBUS_SESSION_BUS_ADDRESS has to be set as an environment variable. Its proper value can be set via the following script, taken from this question asking How to change Gsettings via remote shell?

#!/bin/bash
# Remember to run this script using the command "source ./filename.sh"

# Search these processes for the session variable
# (they are run as the current user and have the DBUS session variable set)
compatiblePrograms=( nautilus kdeinit kded4 pulseaudio trackerd )

# Attempt to get a program pid
for index in ${compatiblePrograms[@]}; do
    PID=$(pidof -s ${index})
    if [[ "${PID}" != "" ]]; then
        break
    fi
done
if [[ "${PID}" == "" ]]; then
    echo "Could not detect active login session"
    return 1
fi

QUERY_ENVIRON="$(tr '\0' '\n' < /proc/${PID}/environ | grep "DBUS_SESSION_BUS_ADDRESS" | cut -d "=" -f 2-)"
if [[ "${QUERY_ENVIRON}" != "" ]]; then
    export DBUS_SESSION_BUS_ADDRESS="${QUERY_ENVIRON}"
    echo "Connected to session:"
    echo "DBUS_SESSION_BUS_ADDRESS=${DBUS_SESSION_BUS_ADDRESS}"
else
    echo "Could not find dbus session ID in user environment."
    return 1
fi

return 0

The cron job can then be modified to this:

*/10 * * * * DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus /home/me/Pictures/wallpapers/switcher.sh >> /home/me/Logs/wallpaper.log 2>&1
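The core trick in that script - reading another process's environment out of /proc - can be tried safely on your own shell first, using $$ (the current shell's PID) in place of a PID found via pidof. A small demonstration, here pulling out PATH instead of DBUS_SESSION_BUS_ADDRESS:

```shell
# /proc/PID/environ is a NUL-separated blob; tr turns it into
# one VARIABLE=value pair per line, then we pick one out
tr '\0' '\n' < /proc/$$/environ | grep '^PATH=' | cut -d '=' -f 2-
```

This works on Linux (procfs) only, and you can only read environ for processes running as your own user, which is exactly why the script scans session programs like nautilus or pulseaudio.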
Crontab fails at switching backgrounds linux mint
1,405,071,060,000
I have picked up a habit of including . from some blog post: 0 0 * * * . /usr/local/bin/somescript.sh ...instead of: 0 0 * * * /usr/local/bin/somescript.sh For instance a visual cron schedule expression editor cron.guru considers using the character as an error, but my scripts appear to have ran as specified at least until now.
cron passes the entire command, including a dot if present, to a shell for execution; so . is the corresponding shell command, which “sources” the script in the current shell instead of launching a new process to run it. For an .sh file, that would probably be a new shell. See What is the difference between sourcing ('.' or 'source') and executing a file in bash? for details. cron.guru only validates schedule expressions, i.e. the part of the crontab entry which defines when it should run; that’s why “0 8 * * Mon .” is marked invalid — that’s not a valid schedule expression.
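The practical difference between sourcing and executing is easy to see with a throwaway script that just sets a variable (the file name is invented for this demo):

```shell
printf 'MARKER=set_by_script\n' > /tmp/dot_demo.sh

MARKER=untouched
sh /tmp/dot_demo.sh       # executed in a child process; our MARKER is unaffected
echo "$MARKER"            # untouched

. /tmp/dot_demo.sh        # sourced: runs in the *current* shell
echo "$MARKER"            # set_by_script

rm /tmp/dot_demo.sh
```

For a crontab entry this distinction rarely matters, since the shell cron spawns exits right after the job anyway - which is why both forms appear to behave the same.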
What effect does a dot character . have in a crontab?
1,405,071,060,000
I have installed rescuetime on debian 9. It requires the command rescuetime to be run in a terminal, this just keeps running rather than running and closing (it adds an icon into the tray at the bottom left of the screen). I'm having some difficulty getting this to run on startup. I have tried crontab and added @reboot rescuetime Also I've tried adding an rc.local file #!/bin/sh -e sh 'rescuetime.sh' exit 0 rescuetime.sh #!/bin/sh<CR> rescuetime Neither of these options work. How do I get rescuetime to run on startup.
Add the commandline to 'startup applications'. This worked for me (at least on Ubuntu 18.04).
How to automatically start Rescuetime on startup (tried crontab and rc.local)
1,405,071,060,000
Goal: Have crontab running at startup, logging output from the arp command in a txt file.

Crontab:

# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
* * * * * arp -n > results.txt

Unfortunately, instead of writing the output of arp -n it overwrites results.txt with a blank file. The weird thing is if I use arp -n > results.txt in the terminal I get:

GNU nano 2.2.6    File: results.txt

Address HWtype HWaddress Flags Mask Iface
192.168.42.19 (incomplete) wlan0
192.168.42.14 ether (incomplete) C wlan0
192.168.42.13 (incomplete) wlan0
192.168.42.18 (incomplete) wlan0
192.168.1.1 ether (incomplete) C eth0
192.168.1.25 ether (incomplete) C eth0
192.168.42.12 ether (incomplete) C wlan0
192.168.1.240 ether (incomplete) C eth0
192.168.42.11 (incomplete) wlan0
192.168.42.16 M A wlan0

Does anyone know how to fix this so I can get it running and updating the file using crontab?
The problem seems to be that cron does not know the PATH where the arp command lives. I would use:

* * * * * /usr/sbin/arp -n >> results.txt

However, I would use arpwatch to monitor ARP changes. It works as a daemon and registers the MAC changes in a file over time, together with the epoch time of each change. It is also able to send messages to syslog and emails. From man arpwatch:

Arpwatch keeps track for ethernet/ip address pairings. It syslogs activity and reports certain changes via email. Arpwatch uses pcap(3) to listen for arp packets on a local ethernet interface.

Report Messages

Here's a quick list of the report messages generated by arpwatch(1) (and arpsnmp(1)):

new activity - This ethernet/ip address pair has been used for the first time six months or more.
new station - The ethernet address has not been seen before.
flip flop - The ethernet address has changed from the most recently seen address to the second most recently seen address. (If either the old or new ethernet address is a DECnet address and it is less than 24 hours, the email version of the report is suppressed.)
changed ethernet address - The host switched to a new ethernet address.

Syslog Messages

Here are some of the syslog messages; note that messages that are reported are also sysloged.

ethernet broadcast - The mac ethernet address of the host is a broadcast address.
ip broadcast - The ip address of the host is a broadcast address.
bogon - The source ip address is not local to the local subnet.
ethernet broadcast - The source mac or arp ethernet address was all ones or all zeros.
ethernet mismatch - The source mac ethernet address didn't match the address inside the arp packet.
reused old ethernet address - The ethernet address has changed from the most recently seen address to the third (or greater) least recently seen address. (This is similar to a flip flop.)
suppressed DECnet flip flop - A "flip flop" report was suppressed because one of the two addresses was a DECnet address.
Files /var/lib/arpwatch - default directory arp.dat - ethernet/ip address database ethercodes.dat - vendor ethernet block list
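Independent of the PATH fix, note the answer's switch from > to >>: with > every cron run truncates results.txt, so at best you would only ever see the last minute's table. The difference in miniature (file name made up for the demo):

```shell
echo first  > /tmp/arp_demo.txt   # ">" truncates the file on every redirect
echo second > /tmp/arp_demo.txt
cat /tmp/arp_demo.txt             # only "second" survives

echo third >> /tmp/arp_demo.txt   # ">>" appends instead of truncating
cat /tmp/arp_demo.txt             # second, then third
```

With >> the file grows each minute; if you truly want a history, consider also prefixing each entry with a timestamp, or use arpwatch as suggested.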
Save arp output in terminal to text file every minute using crontab
1,405,071,060,000
I have this sh script in a file test.sh:

ufw allow 27017 && iptables -F

And I want to run this with a cronjob as root each day at 07:00, like this:

0 7 * * * /root/cron/test.sh

I have also checked whether this script actually ran with grep CRON /var/log/syslog, and I can see that it did:

Aug 24 07:00:01 vps118774 CRON[1672]: (root) CMD (/root/cron/test.sh)

Now my problem is that the actual script from test.sh, run from that cronjob, didn't unblock my port. The point is that if I run the script manually from a terminal on the server with:

sh script.sh

all works fine and the script takes the desired action. So the script runs as expected, but what is wrong with my cron executing it? The logs of the execution show this:

root/cron/test.sh: 2: /root/cron/test.sh: ufw: not found
Cron jobs are run in a shell environment that may well be different from what your ordinary interactive shell environment is. For example, the PATH variable may have a list of different directories in it and may lack some directories that you are used to being able to execute utilities from. Figure out where the ufw utility is located (using command -v ufw on the command line), then either use the complete path to that utility in the script, or modify the PATH variable inside the script so that it includes the directory in which the ufw utility (and iptables) may be found.

The script, as shown in the question, lacks a proper #!-line. This is not an issue if you run it with sh explicitly, but you don't do that in the cron job specification. Instead, make the script executable and then write it as

#!/bin/sh

PATH="$PATH:/usr/sbin:/sbin"

ufw allow 27017 && iptables -F

Then call the script from your crontab exactly like you're currently doing. Here, I've also added the two directories /usr/sbin and /sbin to the PATH variable, just to show how one may do that in the script.
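You can reproduce the "ufw: not found" failure without waiting for cron by giving a shell the same kind of restricted PATH. Everything below is an illustration with a made-up tool name, not the real ufw:

```shell
# A throwaway "admin utility" living outside the default PATH,
# the way ufw lives in /usr/sbin on Debian-like systems
mkdir -p /tmp/fakesbin
printf '#!/bin/sh\necho firewall rule added\n' > /tmp/fakesbin/mytool
chmod +x /tmp/fakesbin/mytool

# With a cron-like PATH the lookup fails...
env PATH=/usr/bin:/bin sh -c 'command -v mytool || echo "mytool: not found"'

# ...and succeeds once the directory is added, as the answer's script does
env PATH=/usr/bin:/bin:/tmp/fakesbin sh -c 'mytool'
```

The first invocation mirrors what cron's minimal environment sees; the second mirrors the script after the PATH= line is added.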
Cronjob won't run properly
1,405,071,060,000
I have 2 commands that need to run every hour, so I put these in a /etc/cron.hourly/hrcron file, in the following format:

command1; command2

It should've worked in my opinion, but does anyone have any idea what's stopping it from running? I'm running CentOS 6.8.
Files placed in /etc/cron.hourly, cron.daily and cron.monthly need to be executables. If you place a text file with a single line as shown in your question into that directory, it cannot be run at all, for the same reason that you could not run such a file as a shell script from the command line, either. What you mean to say is this:

#!/bin/sh

command1
command2

You could concatenate the second and third lines with a semicolon, but it simply isn't necessary here. It's a full-on shell script, so you don't need to "stack" commands in that way. Also, be sure to mark the script executable, else it still won't run.

If all of this seems odd to you, based on your knowledge of crontab entries, realize that executables in these directories are typically run by either anacron or run-parts, not by cron. Thus, the information from man 5 crontab doesn't really apply here.
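The executable-bit requirement is easy to demonstrate outside of cron (the directory and file names below are invented for the demo):

```shell
mkdir -p /tmp/cron_hourly_demo
# A plain text file, exactly as in the question - no shebang, no exec bit
printf 'command1; command2\n' > /tmp/cron_hourly_demo/hrcron

# Without the execute bit the kernel refuses to run it at all (exit 126):
/tmp/cron_hourly_demo/hrcron 2>/dev/null || echo "cannot execute"

# Turn it into a real script and mark it executable:
printf '#!/bin/sh\necho "job ran"\n' > /tmp/cron_hourly_demo/hrcron
chmod +x /tmp/cron_hourly_demo/hrcron
/tmp/cron_hourly_demo/hrcron
```

run-parts applies the same test, which is why the original one-liner was silently skipped every hour.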
How can I run two commands in succession from cron?
1,405,071,060,000
We have a job that is set to run Monday thru Friday. Up until two weeks ago, it ran fine 5 days a week. For the last two weeks it has failed on Monday. I am unable to locate where to find the point of failure. ### Example Scripts 0 2 * * 1-5 /admin/scripts/example.exp 1>/dev/null 2>&1 We run AIX 7.1. I've looked in the /var/log/ and there is no cron file there. Looking for advice to add to this so we can troubleshoot. I found that the log is located in /var/adm/log. Further we are getting this error repeatedly ever since this date/time. How do I clear this max limit? c queue max run limit reached Fri Nov 25 21:52:00 2016 ! rescheduling a cron job Fri Nov 25 21:52:00 2016
The error message c queue max run limit reached means that you have reached the limit of concurrent cron jobs. I believe that the default setting for cron on AIX is 50 concurrent jobs, so you really need to investigate why you have 50 jobs running at the same time. (Perhaps they are multiple instances of the same job overlapping each other.) These two lines should give you the list of jobs running under cron, and from there you should be able to investigate the root cause of the issue:

p=$(ps -ef | awk '/[c]ron/{print $2}' | xargs | tr ' ' '|')
ps -ef | egrep "\<($p)\>"

If you really need to increase the number of concurrent jobs you can find the configuration setting in /var/adm/cron/queuedefs:

c.50j20n60w

where

c = The cron queue
Nj = The maximum number of jobs to be run simultaneously by cron
Nn = The nice value of the jobs to be run (default is 2)
Nw = The time a job has to wait until the next attempt to run it

See http://www-01.ibm.com/support/docview.wss?uid=isg3T1020382 for the AIX-specific source of this answer.
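The slightly cryptic first line of that snippet just turns a list of PIDs, one per line, into an egrep alternation pattern. You can watch the transformation with fixed input (the PIDs below are made up):

```shell
# xargs with no command passes its input to echo, joining lines with spaces;
# tr then turns the spaces into "|" to form an egrep pattern
printf '101\n202\n303\n' | xargs | tr ' ' '|'
```

The result, 101|202|303, is exactly what gets wrapped in \<(...)\> for the second ps | egrep pass.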
Cron job doesn't run once per week
1,405,071,060,000
On Debian 8.1, I'm using a Bash feature to detect whether the stackoverflow.com website is reachable: (echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable" This is Bash-specific and will not work in sh, the default shell of cron. If we, on purpose, try the script in sh, we get: $ /bin/sh: 1: cannot create /dev/tcp/stackoverflow.com/80: Directory nonexistent Hence, if I only put the following in my personal crontab (without setting SHELL to /bin/bash) via crontab -e, I expect that once per minute, the script will be executed, and I therefore expect to also get the above error sent per mail once per minute: * * * * * (echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable" And indeed, exactly as expected, we see from /var/log/syslog that the entry is executed once per minute: # sudo grep stackoverflow /var/log/syslog Aug 24 18:58:01 localhost CRON[13719]: (mat) CMD ((echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable") Aug 24 18:59:01 localhost CRON[13723]: (mat) CMD ((echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable") Aug 24 19:00:01 localhost CRON[13727]: (mat) CMD ((echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable") ... During the last ~2 hours, this was executed more than 120 times already, as I can verify with piping the output to wc -l. However, from these >120 times the shell command (to repeat: the shell command is invalid for /bin/sh) has been executed, I only got three e-mails: The first one at 19:10:01, the second at 20:15:01, and the third at 20:57:01. The content of all three mails reads exactly as expected and contains exactly the error message that is to be expected from running the script in an incompatible shell (on purpose). 
For example, the second mail I received reads (and the other two are virtually identical):

From [email protected] Mon Aug 24 20:15:01 2015
From: [email protected] (Cron Daemon)
To: [email protected]
Subject: Cron (echo >/dev/tcp/stackoverflow.com/80)&>/dev/null || echo "stackoverflow unreachable"
...
/bin/sh: 1: cannot create /dev/tcp/stackoverflow.com/80: Directory nonexistent

From /var/log/mail.log, I see that these three mails were the only mails sent and received in the last hours. Thus, where are the >100 additional mails we would expect to receive from cron due to the above output that is created by the erroneous script?

To summarize:

Mail is configured correctly on this system; I can send and receive mails without problem with /usr/bin/sendmail.
Cron is set up correctly, notices the task as expected and executes it precisely at the configured times. I have tried many other tasks and scheduling options, and cron executed them all exactly as expected.
The script always writes output (see below) and we thus expect cron to send the output to me via mail for each invocation.
The output is mailed to me only occasionally, and apparently ignored in most cases.

There are many ways to work around the obvious mistake that led to the above observations:

I can set SHELL=/bin/bash in my crontab.
I can create a heartbeat.sh with #!/bin/bash, and invoke that.
I can invoke the script with /bin/bash -c ... within crontab.
etc., all fixing the mistake of using a Bash-specific feature within sh.

However, all of this does not address the core issue of this question, which is that in this case, cron does not reliably send mails even though the script always creates output.
I have verified that the script always creates output by creating wrong.sh (which again on purpose uses the unsuitable /bin/sh shell, to produce the same error that cron should see):

#!/bin/sh
(echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable"

Now I can invoke the script in a loop and see if there ever is a case where it finishes without creating output. Using Bash:

$ while true; do [[ -n $(./wrong.sh 2>&1 ) ]]; echo $?; done | grep -v 0

Even in thousands of invocations, I could not reproduce a case where the script finishes without creating output. What may be the cause of this unpredictable behaviour? Can anyone reproduce this? To me, it looks like there may be a race condition where cron can miss a script's output, possibly primarily involving cases where the error stems from the shell itself. Thank you!
Upon further testing, I suspect the & is messing with your results. As you point out, &>/dev/null is bash syntax, not sh syntax. As a result, sh is creating a subshell and backgrounding it. Sure, the subshell's echo creates stderr, but my theory is that:

cron is not catching the subshell's stderr, and
the backgrounding of the subshell always completes successfully, thus bypassing your || echo ....

... causing the cron job to have no output and thus no mail. Based on my reading of the vixie-cron source, it would seem that the job's stderr and stdout would be captured by cron, but it must be getting lost by the subshell. Test it yourself in a /bin/sh environment (assuming you do not have a file named 'bar' here):

(grep foo bar) & echo $?
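The theory is straightforward to check outside cron, too: launching a background job always "succeeds" from the parent shell's point of view, no matter what the job itself exits with. A minimal sketch:

```shell
#!/bin/sh
# A failing command run in the foreground propagates its exit status
# (the || branch records it, so the demo also works under `set -e`):
direct=0; (exit 3) || direct=$?

# The same command backgrounded: $? reflects the launch, which succeeds.
(exit 3) &
backgrounded=$?
wait    # reap the background job

echo "direct=$direct backgrounded=$backgrounded"
```

Under /bin/sh this prints direct=3 backgrounded=0: starting the background job reports success regardless of the job's own exit status, so nothing after it ever sees the failure.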
Cron only occasionally sends e-mail on output and errors
1,405,071,060,000
I am using Solaris 10 and am adding many cron jobs to my server's crontab. Currently I have 30 cron jobs: one takes 8 hours to complete, another takes 5 hours, and the rest are ordinary jobs (30 minutes at most). As it is a production server, I am worried about its performance. How do cron jobs relate to the performance of my server? Is there any upper limit on the number of cron jobs?
Short answer: Yes. Longer answer: Depends on what the cronjobs do and when they do it, i.e. computation-heavy (performance-degrading) tasks run at night (or when no one is in the office, depends on what the server is used for) don't hurt (if they're finished before office hours, of course). If there is a limit (I'm not sure), it's probably way beyond 30. Anyways, it seems a little uncommon to have that many cronjobs (someone correct me if I'm wrong), but without more information, it's impossible to say if there's a better solution. (This is just a first approximation to an answer, I doubt it can be fully answered in this generality.) You could also read up on nice (not sure if it exists on Solaris). The Solaris Resource Management (1), (2) also sounds useful, depending on the type of your jobs.
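As a concrete illustration of the nice suggestion, the priority can be set directly in the crontab entry. The path and schedule below are made up for the example:

```shell
# Run the long, CPU-heavy job at the lowest scheduling priority,
# starting at 01:00 when the server is presumably idle.
0 1 * * * /usr/bin/nice -n 19 /opt/jobs/long_report.sh
```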
Is crontab related to server performance?
1,405,071,060,000
I am having trouble figuring out how to set up my first cron job. I simply want to run this command once every week:

dpkg -l > ~/Dropbox/installed_packages

My /etc/crontab file contains the line

7 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )

So I need to place my command somewhere in the directory /etc/cron.weekly - but in which file? /etc/cron.weekly currently contains the following files:

apt-xapian-index
man-db
0anacron
cvs
Create a file with the following content (e.g. list_packages.sh):

#!/bin/bash
dpkg -l > ~/Dropbox/installed_packages

Place this file in /etc/cron.weekly/ and it will run once a week.
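One caveat: on Debian and Ubuntu, run-parts (which executes the scripts in /etc/cron.weekly) skips any file whose name contains characters outside letters, digits, underscores and hyphens, so a name with a .sh extension would be silently ignored. It is worth dropping the extension and checking with run-parts --test; the throwaway-directory demo below shows the behaviour (file names are illustrative):

```shell
# Demonstrate run-parts' naming rule in a throwaway directory.
d=$(mktemp -d)
printf '#!/bin/sh\necho ran\n' > "$d/list_packages"      # compatible name
printf '#!/bin/sh\necho ran\n' > "$d/list_packages.sh"   # contains a dot: skipped
chmod +x "$d"/*

run-parts --test "$d"   # lists only .../list_packages
rm -rf "$d"
```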
Weekly cron job to save list of installed packages
1,405,071,060,000
Why does free -h print its numbers with a decimal point when I run it interactively, but with a decimal comma when it is run by crontab?

Sample:

free -h
             total       used       free     shared    buffers     cached
Mem:          3.7G       2.3G       1.4G       145M       675M       869M
-/+ buffers/cache:       839M       2.9G
Swap:         3.9G       385M       3.5G

But when run by crontab:

             total       used       free     shared    buffers     cached
Mem:          3,7G       2,3G       1,4G       145M       675M       869M
-/+ buffers/cache:       840M       2,9G
Swap:         3,9G       385M       3,5G

I would call this a bug, as it is very unexpected behaviour. It's a formula for mistakes.
Your locale settings are different in your shell and in your cronjob. You can check by running locale in both settings, and you can change your cronjob's locale settings by setting the appropriate variables (LC_ALL is the hammer if you don't need to be subtle; see locale(7) for details).
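For example, to pin the job's locale so that free's digit separators match what you see interactively, set the variable in the crontab (en_US.UTF-8 and the log path are just examples; use whatever locale prints in your interactive shell):

```shell
# At the top of the crontab, affecting every job below it:
LC_ALL=en_US.UTF-8

# Or per job, leaving the rest of the crontab untouched:
0 * * * * LC_ALL=en_US.UTF-8 free -h >> /var/log/meminfo.log
```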
Why does the output of free -h use different digit separators when run by cron?
1,405,071,060,000
Is there a way to schedule a task as the root user with cron such that it is not visible via the crontab command (i.e. crontab -l), either to root or to normal users?
If you want to schedule a task using cron, an alternative to crontab in many distributions is to add a file to /etc/cron.d, in the traditional system crontab format (the variant which specifies the user). Tasks defined in this way do not show up in crontab -l's output. For example, on Debian, amavisd-new's Spamassassin maintenance is scheduled by /etc/cron.d/amavisd-new, which contains

#
# SpamAssassin maintenance for amavisd-new
#
# m h dom mon dow user command
18 */3 * * * amavis test -e /usr/sbin/amavisd-new-cronjob && /usr/sbin/amavisd-new-cronjob sa-sync
24 1 * * * amavis test -e /usr/sbin/amavisd-new-cronjob && /usr/sbin/amavisd-new-cronjob sa-clean
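A minimal file of your own would follow the same pattern (the name, schedule and command here are invented). Note that on Debian-family systems cron silently ignores /etc/cron.d files whose names contain a dot and, with some cron implementations, files that are group- or other-writable:

```shell
# /etc/cron.d/nightly-backup  (no dot in the file name)
# m h dom mon dow user command
30 2 * * * root /usr/local/bin/backup.sh
```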
How to setup an invisible cron job?
1,405,071,060,000
cron strangely does not execute my script, although the script runs just fine from a terminal. I already made sure that the crontab entries are separated by newlines. However, I am suspicious about the contents of my script. cron executes run.sh, which in turn runs main.sh.

This is the /etc/crontab:

* * */3 * * root source /opt/db_maintain/run.sh

This is the contents of run.sh, which calls main.sh from inside:

#!/usr/bin/env bash
#********* Saman *********
TM=$(date --date='40 days ago' '+%F %T')
TARGET=/opt/db_maintain/main.sh
TIMESTAMP=$(echo ${TM} | tr --delete ': -')
export TIMESTAMP
source $TARGET "$TM"

This is the beginning of main.sh:

#!/bin/bash
##!/usr/bin/env bash
#
# main program entry point
#
source /opt/db_maintain/functions.sh
source /opt/db_maintain/constants.sh
source /opt/db_maintain/settings.sh
source /root/PASSWD_PGRS.sh
#read -s -t 0 -n 9999
#read -s -p "Enter password for ${USERNAME}: " PASSWORD
Use this syntax in the cron job:

* * */3 * * root /bin/bash /opt/db_maintain/run.sh

You used source, which is a bash builtin. The system crontab runs each command with /bin/sh, and in many distributions /bin/sh is not bash; in Debian's dash, for instance, the builtin is spelled . and source does not exist, so the job fails. Invoke the interpreter explicitly, preferably by its absolute path (/bin/bash).
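Alternatively, since run.sh already has a bash shebang, you can mark it executable once (chmod +x /opt/db_maintain/run.sh) and tell cron to use bash for everything in this crontab. A sketch of the same /etc/crontab:

```shell
SHELL=/bin/bash
* * */3 * * root /opt/db_maintain/run.sh
```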
Task not run by crontab
1,405,071,060,000
I have a program written in python; my program scrapes a value from some financial website every minute and pushes that value into my DB. My program takes like 1 or maximum 1.5 seconds to do this job. I have set a cron job to call my program every minute. I need to run my program in this way every day from 9AM to 4PM. Now sometimes I may have to stop my program to kill the program at any time between 9AM to 4PM. How can I do this? According to this link I tried ps -o pid,sess,cmd afx | grep -A20 "cron$" and I was unable to find my program in the list since it completes its work in seconds. Referring to this I tried /etc/init.d/cron stop and pkill cron which kills all cron jobs - which I don't want. I am running this cron in Ubuntu Linux.
Generally, stopping and starting the system cron daemon is a bad idea. Commenting out the line isn't always convenient, so here are a couple of related alternatives.

Use a semaphore

One solution to this requirement is to use a semaphore - or flag - to indicate whether or not the script is permitted to run. In this instance the semaphore can be represented by the presence or absence of a file: if the file exists then do not run the script, otherwise run it. (Of course, the test could be reversed so that the script only runs if the file exists.) Here is a sample cron entry with the semaphore check in place:

* * * * * test ! -f /tmp/stop && /path/to/script...

To prevent the script from running, simply execute touch /tmp/stop. To allow it to start running again, just rm /tmp/stop. To reverse the test, use this test for the file /tmp/run:

* * * * * test -f /tmp/run && /path/to/script...

You would probably want to use a different semaphore file in practice to ensure that only authorised people could flag the script to stop (or start). One option might be "$HOME"/.scriptname.stop for a script called scriptname.

Prevent the code running

If the script is executable, and called by /path/to/script... then simply removing executable permissions or moving it out of the way will stop the script running:

chmod a-x /path/to/script

or

mv /path/to/script{,.stop}    # i.e. mv /path/to/script /path/to/script.stop

This is a quick and ugly fix that will cause cron to generate an error each time it attempts to run your code. Normally, such errors are caught and emailed to the cron job owner, so a little test before executing the script will prevent this situation being flagged:

* * * * * test -x /path/to/script && /path/to/script...
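The flag logic can be rehearsed interactively before putting it in the crontab (mktemp -u just picks an unused temporary path to stand in for /tmp/stop):

```shell
flag=$(mktemp -u)    # stand-in for the /tmp/stop flag file; not created yet

if test ! -f "$flag"; then echo "flag absent: job runs"; fi
touch "$flag"        # an operator "stops" the job
if test ! -f "$flag"; then echo "flag present: this line never prints"; fi
rm -f "$flag"
```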
How to stop minute cron job?
1,405,071,060,000
I am trying to write a cron job (a Unix shell script) that will delete files after, say, 2 weeks or 1 month from a specific directory:

/somedir1/somedir2/

if somedir2 contains a file with extension .txt or .log
    then check its timestamp
        if two weeks old, delete it
        otherwise don't delete
Try the find command:

find /somedir1/somedir2 -type f \( -name '*.txt' -o -name '*.log' \) -mtime +14 -delete

A few details matter here: the patterns must be quoted so the shell does not expand them before find sees them; the two -name tests must be combined with -o, because find's implicit operator between tests is AND (so -name '*.txt' -name '*.log' would never match anything); and -mtime takes a number of days rather than a 2w suffix (+14 means "modified more than 14 days ago"). Replace -delete with -print for a dry run.
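With deletion involved, it is worth rehearsing the expression on a scratch directory first. Note that find's implicit operator between tests is AND, so the two -name tests must be OR-ed with -o and grouped with \( \); touch -d here is the GNU option for backdating files, and the file names are invented for the demo:

```shell
# Build a scratch tree with old and new files of several types...
d=$(mktemp -d)
touch "$d/fresh.txt"
touch -d '20 days ago' "$d/old.txt" "$d/old.log" "$d/old.dat"

# ...and check what the expression selects before swapping -print for -delete.
find "$d" -type f \( -name '*.txt' -o -name '*.log' \) -mtime +14 -print

rm -rf "$d"
```

Only old.txt and old.log should be listed: fresh.txt fails the age test, and old.dat matches neither pattern.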
Cron to delete particular file(s) from a specific directory
1,405,071,060,000
We want to monitor the used space of the /var/hadoop/hdfs partition: if used space exceeds 50%, we run the script do_action.bash. Finally, this command should go into the crontab and run every hour.

Example of the hdfs partition:

df -Ph | grep 'hdfs'
/dev/sdc  20G  1.7G  18G  9% /var/hadoop/hdfs

What we have so far is the syntax below, which prints "run the script do_action.bash" when more than 50% is used:

df -Ph | grep 'hdfs' | sed s/%//g | awk '{ if($5 > 50) print "run the script do_action.bash"}'

but how do we add the execution of the script do_action.bash? We tried

df -Ph | grep 'hdfs' | sed s/%//g | awk '{ if($5 > 50) print "run the script do_action.bash"}' && bash /opt/do_action.bash

but the above isn't right, because the script /opt/do_action.bash runs in every case.
You can run df /path/to/directory to get the df output of that directory. For example, on my system:

$ df -Ph /home/terdon
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p6  669G  186G  450G  30% /home

So you don't need to grep hdfs, you can get it directly and then simply look at the second line (NR==2 in awk) to skip the header. With that in mind, you can then set awk's exit status using exit() and use that with the regular shell && to execute your script. Something like this:

df -Ph /var/hadoop/hdfs | tr -d '%' | awk 'NR==2{ exit $5>50 ? 0 : 1}' && /opt/do_action.bash

Or even shorter:

df -Ph /var/hadoop/hdfs | awk 'NR==2{exit ((0+$5) <= 50)}' && /opt/do_action.bash

&& means "only run the next command if the previous command was successful". The exit $5>50 ? 0 : 1 will set the awk command's exit code to 0 (success) if $5 is greater than 50, so the script will only run if $5>50. Here's the first awk script written in a more verbose, but easier to understand form:

awk '{
    if(NR==2){
        if($5>50){
            exitStatus=0
        }
        else{
            exitStatus=1
        }
        exit(exitStatus)
    }
}'
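To satisfy the "every 1 hour" part of the question, the short form drops straight into the crontab. Conveniently, it contains no % characters, which cron would otherwise treat specially (unescaped % is turned into a newline in crontab command fields); paths are the ones from the question:

```shell
# crontab -e: check on the hour, every hour
0 * * * * df -P /var/hadoop/hdfs | awk 'NR==2 {exit ((0+$5) <= 50)}' && /opt/do_action.bash
```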
bash + monitor disk space usage and execute script if used space reached the threshold
1,405,071,060,000
I use this script to restart Firefox overnight (to apply package-manager and add-on updates):

#!/bin/bash
killall -s SIGTERM firefox; sleep 15
firefox -P "user" &
firefox -P "default settings" &

crontab entry (runs at 3 AM):

0 3 * * * /usr/local/bin/firefox.sh

When executed manually the script works as expected: it closes the Firefox processes and starts two profiles in their own windows. When cron runs the script, however, Firefox is consistently closed but never restarted.
cron jobs run in a completely separate environment, isolated from your usual GUI desktop or terminal environment. firefox expects to be run as a child process of your desktop environment or, at the very least to have a valid DISPLAY variable set. It is sometimes possible to get cron jobs to start or interact with GUI programs. Try adding export DISPLAY=:0.0 as the second line of your script. If :0.0 doesn't work, run a terminal in your desktop and run echo $DISPLAY to get the correct value. If this still doesn't work, you may also need to set XAUTHORITY=$HOME/.Xauthority or maybe use xauth to enable access. Note that any program started from cron (including firefox) will inherit cron's fairly minimalist environment. Variables like PATH, LOGNAME and/or USER may be different to what you expect, and many variables won't be set at all. e.g. the LC_* locale variables may not be set (depending on distro - e.g. cron in Debian reads /etc/environment and /etc/default/locale. I don't know if that's also the case on Fedora or not). If that program needs specific environment variables set to certain values, you'll need to set them in your crontab file, or export them in your script too. Or just source your usual shell startup files from the script. Firefox, Chromium, and other web browsers may need http_proxy, https_proxy and other proxy-related variables set. FYI, this is roughly how running GUI programs over ssh -X works. The -X option enables X11 forwarding. It sets up a tunnel to proxy X protocol over the ssh connection, and sets the DISPLAY variable to point to that tunnel. I use this to, for example, run xsane on my server (hostname "ganesh", which has a HP3030 printer/scanner attached) but have the windows display on my workstation's monitor - i.e. ssh -X ganesh xsane. 
If I were to run ssh -X ganesh 'echo $DISPLAY' (needs to be single-quoted or escaped so that my local shell doesn't interpolate the variable), I'd see something like:

$ ssh -X ganesh 'echo $DISPLAY'
ganesh:11.0
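Putting those pieces into the original script, a sketch might look like this. The :0.0 display and the .Xauthority path are assumptions; check echo $DISPLAY and echo $XAUTHORITY in your desktop session and substitute your own values:

```shell
#!/bin/bash
# Give the cron environment enough X context for firefox to find the display.
# (DISPLAY value and XAUTHORITY path are assumptions; verify in your session.)
export DISPLAY=:0.0
export XAUTHORITY="$HOME/.Xauthority"

killall -s SIGTERM firefox
sleep 15
firefox -P "user" &
firefox -P "default settings" &
```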
A basic Bash script (to start a GUI program) works partially in cron