I've got a Pi 2 (running Raspbian Jessie) nicely set up with a 2TB external USB drive (sda): I'm booting off /dev/sda1 (16GB), downloading torrents to /dev/sda2 (200GB), and saving all my important documents on OwnCloud at /dev/sda3 (1.7TB).

df -h:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/root        16G  2.0G   13G  14% /
    /dev/mmcblk0p1   63M   21M   43M  33% /boot
    /dev/sda3       2.5T  744G  1.7T  31% /media/owncloud
    /dev/sda2       193G  131G   52G  72% /media/torrent

As you can see from the above, I've got about 750GB stored on my OwnCloud. I'd really rather not lose any of that. And come to think of it, I'd really rather not lose any of the 130GB in torrents, nor the work I've put into getting the system running JUST how I like it in /dev/root. So I'm going to buy a second 2TB HDD.

Question 1: What is the best way to back up this data?

I've never set up a RAID array before, but from preliminary research I'd need to start with two blank drives and set it up from there. This isn't really a possibility (Question 2: or is it?), as I don't have anywhere to temporarily store the 870+GB currently on the drive.

Question 3: Can a RAID1 be set up with USB drives?

I could cron an rsync to back up the primary drive periodically, (Question 4:) but is that the best way to do this? And if it really is...

Bonus Question 5: How often should I run it (after the initial sync)? Once a day will surely not be enough and every minute may be a bit much.
What kinds of danger do you expect? Data loss, of course, but how do you expect that data loss to happen? The answer immediately rules out several strategies.

Regardless, RAID is not a backup. Some of the RAID levels (1, 5, 6, …) merely provide a way to keep your system running if a disk fails. If there's an error in your system, e.g. an accidental rm -rf /media/*, all your data will be deleted across all the drives in your RAID. Note that it's possible in theory to create a RAID1 with only one drive, copy data to it, and then start mirroring, but again, it's not a backup.

So instead, just partition and format your second disk with ext4 or another file system of your choice.

Now we come to the next question: do you want incremental backups, or do you want a mirror of your data? A mirror is rather easy:

    rsync -av --delete --progress /media/* /path/to/backupdrive/

But depending on your situation, you may want incremental backups. There are several applications available, e.g. borg, and they have different features, like de-duplication, speed, and so on:

    borg create /path/to/backup::repo-{now:%Y-%m-%d} /media/*

This has the nice side effect that the mentioned rm -rf /media/* won't delete your backups (unless you've used rsync --delete).

Regardless of the method you use, put that method in a shell script, e.g. ~/utils/backup.sh. But don't create a cron job for that file. Instead, add a second file, ~/utils/backupreminder.sh, that sends you an email, SMS, or notification, or prints a page on your printer, to remind you that you should take your drive, go to your Raspberry, connect it, execute ~/utils/backup.sh, disconnect it, and put it back. The physical distance is important: if your dog pulls your Raspberry from the shelf, any connected drive will likely die too. If that's too much of a hassle (and your Raspberry is in an infant-safe location), at least unmount the drive after each backup.

Bonus Question 5: How often should I run it (after the initial sync)? Once a day will surely not be enough and every minute may be a bit much.

That completely depends on you. If you file a very important document in your OwnCloud every day, you should back up every evening. If the contents of your OwnCloud and other folders only change every second day, and you can handle the loss of such a day, back up every fourth evening.

And if disk failure is your major concern, add a third drive for that RAID1, but don't forget the backups. However, if that's all too much (which is understandable), you can always rent some space online for ~$60/year and back up your files there.
How can I prevent data loss (after install)
I am installing a new crontab job which will execute a shell script and then send the errors printed on the console to email. The entry will be as follows:

    0 10,22 * * * . /X.sh 2>&1 >/dev/null | mail -s "subject" "email"

My question is: when will it send the email?

1. Will it send an email every 12 hours, even if the shell script finished before the next round (the next cron run)?
2. Or will it send an email when the shell script finishes executing, even while the current cron interval is still active?
The email will be sent every time X.sh finishes running, whenever that is. cron itself will, by default, email a job's output to the crontab's owner or to whatever MAILTO is set to; you could use that instead of calling mail manually.
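For the cron-native route, a crontab like the following (the address and script path are placeholders) would make cron itself mail whatever the job prints, once per completed run:

```
# Hypothetical crontab: cron collects everything the job writes to
# stdout/stderr and mails it to MAILTO when the job finishes.
MAILTO=admin@example.com
0 10,22 * * * /path/to/X.sh
```

One mail is sent per job run, and only when the job produced output, so a silent successful run generates no mail at all.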
crontab every 12 hours - when will it send an email?
So I created a script to back up the system; I tested it and it worked. Here's the script:

    #!/bin/bash
    backup_files="/home /var/spool/mail /etc /root /boot /opt"
    dest="/mnt/backup"
    day=$(date +%F)
    hostname=$(hostname -s)
    archive_file="$hostname-$day.tgz"

    echo "Backing up $backup_files to $dest/$archive_file"
    date
    echo
    tar czf $dest/$archive_file $backup_files
    echo
    echo "Backup finished"
    date
    ls -lh $dest

But when I put it in crontab, it didn't work. I used crontab -e and added the following line:

    * * * * * /root/backup_system.sh

I waited and nothing happened. I don't know where I went wrong. I'm running Ubuntu 14.04, and the script was made executable with chmod.
Unless your script is not executable (fix that with chmod +x /root/backup_system.sh) or crond isn't running, there's nothing obviously broken in your script that would prevent it from running. All of the commands you use (date, hostname, tar, and ls) should be in /bin, which should be in the default PATH... unless you've changed it in your crontab.

The most likely explanation is that you're trying to run this script from a user's crontab rather than root's crontab, and that user doesn't have read and execute access to the /root directory or to the /root/backup_system.sh script.

BTW, you should always double-quote your variables, and it's a good idea to use an array for variables that contain a list of things (like $backup_files). Here's an improved version of your script which fixes those problems:

    #!/bin/bash
    backup_files=(/home /var/spool/mail /etc /root /boot /opt)
    dest='/mnt/backup'
    day=$(date +%F)
    hostname=$(hostname -s)
    archive_file="$dest/$hostname-$day.tgz"

    echo "Backing up ${backup_files[@]} to $archive_file"
    date
    echo
    tar czf "$archive_file" "${backup_files[@]}"
    echo
    echo "Backup finished"
    date
    ls -lh "$dest"
My backup_system.sh file didn't run under crontab
I am looking for a script or terminal command to list all the scripts (preferably with their paths) which are run periodically by cron or cron.daily. I don't need any filtering by the scripts' schedules; I want all of them listed (though some administrators may want such a filter).

Purpose: to document all scripts running periodically, so that debugging, fault-checking, updating, and transferring to a newer system is easy and efficient.
To find the filenames and script types of all scripts run from cron by non-root users (does not identify the user):

    find /var/spool/cron/crontabs/ -type f ! -name 'root' \
      -exec awk '!/^[[:blank:]]*(#|$)/ {print $6}' {} + |
      xargs -d'\n' file | grep -i script

    /home/cas/scripts/fetch.sh:         Bourne-Again shell script, ASCII text executable
    /usr/local/sbin/backup-postgres.sh: Bourne-Again shell script, ASCII text executable

To find all executables (binaries and scripts) run from cron by non-root users (identifies the user):

    find /var/spool/cron/crontabs/ -type f ! -name 'root' \
      -execdir awk '!/^[[:blank:]]*(#|$)/ {print FILENAME"\t"$6}' {} + | sed -e 's:^./::'

    cas       /home/cas/scripts/fetch.sh
    postgres  /usr/local/sbin/backup-postgres.sh

Of course, both of these have to be run as root; only root can read the crontabs of all users.

Note: the crontabs may be in a different directory on your system. Check the documentation for your cron daemon.
Get a list of all scripts and their paths which are run as cron job
I am trying to port the code over to a new server hosted somewhere else, and I want to know what cron jobs are running on the old box. Where can I find these?

crontab -l:

    SHELL="/bin/bash"
    0 0,6 * * * php-cli /home/mycompany/public_html/index.php cron get_review_data
    0 0 * * * php-cli /home/mycompany/public_html/index.php cron save_stats
    0 0,6 * * * php-cli /home/mycompany/public_html/index.php cron check_for_new_reviews
    0 0,6 * * * php-cli /home/mycompany/public_html/index.php cron refresh_infusionsoft_token
    */3 * * * * php-cli /home/mycompany/public_html/index.php cron infusionsoft
    */5 * * * * php-cli /home/mycompany/public_html/index.php cron sequence
    0 0,6 * * * /usr/local/bin/python3.4 /home/scraper/scraper.py

I've checked /etc/cron.* with no luck. Where can I go?
The functions called (get_review_data, save_stats, check_for_new_reviews, etc.) should be defined within the PHP and Python code, i.e. /home/mycompany/public_html/index.php and /home/scraper/scraper.py. Examining those files should show what's actually being executed.
Where are my lost cron jobs?
So I've made a basic .sh file to clear my cache using the command sync; echo 3 > /proc/sys/vm/drop_caches. Then I saved it under my username in /home/marc.

Then I came across crontab, so I decided to run this .sh file with crontab every hour, as I noticed that my RAM is eaten up extremely quickly on my laptop. Using crontab -e, I added this line:

    0 * * * * /root/clearcache.sh

My question is: have I done everything right? Have I edited the crontab file correctly, and if so, how can I check that this job runs as it should?
The simplest way to check whether a cron job fires when you expect it to (short of reading the man page) is to add a simple cron job that writes the date to a particular file:

    0 * * * * date >> /tmp/cronjob.test

Then check the file the next day (or whenever) and ensure it triggered when you expected it to. I'd personally recommend reading the man page instead, it's faster, but the above is the longer, lazier way. ;)
How to check if a crontab job works when it should
I've got a shell script. It is supposed to be executed automatically again and again over time, maybe about three times a day. But I don't want to write a cron job for it, since it doesn't run at the same time every day. Rather, my shell script knows by itself, after each execution, when it wants to be executed next. Is there a way to tell the system to call it again at that time? I don't want to implement it with long sleeps, since I want the task to quit after it has done its computation.
Do you have the possibility to install the at command? (sudo apt-get install at) If so, you can add a call at the end of your script, timed with the parameters you want. For example, add this at the end of your script to execute it again two hours later:

    at now + 2 hours -f myScript.sh
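A self-rescheduling script along those lines might look like this minimal sketch. The job body and the fixed 120-minute delay are placeholders; a real script would compute the delay from its own results.

```shell
#!/bin/bash
# Sketch of a job that re-schedules itself with at(1) after each run.
set -eu

do_work() {
    echo "periodic work done"
}

do_work

# Placeholder: a real script would compute this from the work above.
delay_minutes=120
echo "$delay_minutes" > /tmp/next_run_delay   # record the decision

# Hand the next run over to atd and exit; no long-running sleep needed.
if command -v at >/dev/null 2>&1; then
    echo "bash $0" | at "now + ${delay_minutes} minutes" 2>/dev/null || true
fi
echo "rescheduled in ${delay_minutes} minutes"
```

Because at takes over the waiting, the script itself exits immediately after its computation, which is exactly the behaviour the question asks for.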
Shell script tells the system when to execute it the next time
There are multiple options with cron to start your script at a specific time, but is one more secure than the other? My question is simple: what's the difference between adding scripts to /etc/cron.daily/ and editing your crontab with crontab -e?

What I'm worried about is other users being able to see the content. I want to make sure that nobody but root can view the cron jobs, whether they live in /etc/cron.daily/ or in a user's crontab. Are other users able to see what's in /etc/cron.daily/, or in the crontab you can see with crontab -l?

I'm always logged in as root on this particular server. I just need to know the who/what/when of crons, so that I can choose wisely when implementing cron jobs. I'm using CentOS 6.6.
The system-wide scripts in /etc/cron* are world-readable by default. For example, on my Arch:

    $ ls -ld /etc/cron*
    drwxr-xr-x 2 root root 4096 May 31  2015 /etc/cron.d
    drwxr-xr-x 2 root root 4096 May 31  2015 /etc/cron.daily
    -rw-r--r-- 1 root root   74 May 31  2015 /etc/cron.deny
    drwxr-xr-x 2 root root 4096 May 31  2015 /etc/cron.hourly
    drwxr-xr-x 2 root root 4096 May 31  2015 /etc/cron.monthly
    drwxr-xr-x 2 root root 4096 May 31  2015 /etc/cron.weekly

And:

    $ ls -l /etc/cron.d/0hourly
    -rw-r--r-- 1 root root 128 May 31  2015 /etc/cron.d/0hourly

User-specific cron files are in /var/spool/cron by default and, at least on my system, they are not world-readable:

    $ ls -l /var/spool/cron/
    total 8
    -rw------- 1 root   root   20 Feb 23 16:34 root
    -rw------- 1 terdon terdon 22 Feb 23 16:32 terdon

So the "safest" way would be to use the user's crontab, the one you get with crontab -e. Normal users can't read that:

    $ cat /var/spool/cron/root
    cat: /var/spool/cron/root: Permission denied

I suggest you check and make sure this is also the case on your CentOS first, though; I don't have access to a CentOS machine at the moment.
What's the difference between adding scripts in the /etc/cron.daily/ or editing in your script in crontab(-e)?
Suppose the computer is booted irregularly, and I want to run a certain script at boot time once a month. But I can't set up a plain cron job, as I don't know whether the machine will be booted on that date.

I thought about this approach: I create a file holding the date and time of the last script execution, and on every boot I do the following:

    if(file_exists("last_boot.txt")) {
        if(<<more than a month has passed since last date time>>) {
            <<run script>>
        }
    } else {
        <<run script>>
    }
    <<write current date time into the file>>

Is it possible to do this using some standard tools, without the trick with the file?
Just* run it at either schedule using cron and check the other part of the schedule in the script.

One way:

    @reboot /path/to/my_script.sh

    if has_run_this_month() { exit }

The other way:

    0 0 1 * * /path/to/my_script.sh

    if has_run_since_reboot() { exit }

* There are several issues with this way of running things:

- Do you ever keep a machine running for more than a month? Two months? What happens in either case?
- Do you ever leave the machine off for more than a month? You can see where this is going…
- How do you make sure the path where you persist the state (because you have to store it somewhere persistent) doesn't get wiped out at boot because it's a RAM disk?
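The has_run_this_month() check can be sketched with a stamp file. The stamp path here is a placeholder in /tmp for demonstration only; as the answer warns, it must live on persistent storage (e.g. under /var/lib) for the scheme to survive reboots.

```shell
#!/bin/bash
# Run the real job at most once per calendar month, whenever this
# wrapper is invoked (e.g. from an @reboot cron entry).
set -eu

STAMP="${STAMP:-/tmp/myjob-last-run}"   # real use: a persistent path!

run_job() {
    echo "running monthly job"
}

this_month=$(date +%Y-%m)
if [ ! -e "$STAMP" ] || [ "$(cat "$STAMP")" != "$this_month" ]; then
    run_job
    mkdir -p "$(dirname "$STAMP")"
    echo "$this_month" > "$STAMP"
else
    echo "already ran this month"
fi
```

Comparing the stored year-month string against the current one sidesteps "more than a month has passed" arithmetic entirely: the job runs on the first boot of each new calendar month, however long the machine was off.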
Run certain script periodically at boot time
I'm looking for a way (bash script or other) to make a command run every minute using date. I've been looking around for similar solutions, but they all suggest either cron or watch. What I want is to execute the command according to the output of date, so that it runs when the second hand hits 30, for example:

    12:50:30
    12:51:30
    12:52:30
    12:53:30
    ...

Any help will be highly appreciated.
    #!/bin/bash
    # wait until the second hand hits 30
    # (10# forces base 10: a zero-padded 08 or 09 would otherwise be
    # rejected as an invalid octal number in bash arithmetic)
    until [[ 10#$(date +%S) -eq 30 ]]
    do
        sleep 0.75
    done
    while true
    do
        # command &   # your commands should be forked to the background
        #             # to avoid delaying the loop
        date +%S
        sleep 30
    done

Edit: as you wanted to see date +%S on stdout, check whether it equals 30, and execute something:

    #!/bin/bash
    foo(){
        echo yay   # add commands here
    }
    while true
    do
        date +%S | grep '30' && foo
        sleep 1
    done

or

    #!/bin/bash
    foo(){
        echo yay   # add commands here
    }
    while true
    do
        date +%S
        [[ 10#$(date +%S) -eq 30 ]] && foo
        sleep 1
    done
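Instead of polling once a second, the wait until the next :30 can also be computed directly. This is a sketch; the sleep itself is left commented out so the fragment returns immediately.

```shell
#!/bin/bash
# Compute how many seconds remain until the clock next reads :30.
now=$((10#$(date +%S)))           # 10# forces base 10 (handles 08, 09)
wait=$(( (30 - now + 60) % 60 ))  # 0..59 seconds until the next :30
echo "sleeping $wait seconds until the next :30"
# sleep "$wait"   # then run the command; repeat in a loop as needed
```

One computed sleep per cycle also drifts less than a fixed `sleep 30` after the command, since the command's own runtime no longer shifts the schedule.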
Execute a command every minute forever using 'date'
I have a bash script that uses a while loop along with sleep to constantly monitor the connection state and take measures in case something goes wrong. The only thing that is not guarded is the execution of the checking script itself. I'd like to have myscript.sh executed at startup via an init.d entry, and then have a cron task run every minute to check whether the script I ran at startup is still executing. How can I achieve this?
With a modern init system (like systemd or upstart), you could just have the init system take care of restarting the script if it fails. If for some reason you're stuck with a legacy system, you could have the script periodically update a flag file (touch /var/lib/myapp/flagfile), and then via cron check whether the flag file is older than a certain number of seconds, restarting the script if necessary. Something like:

    # get current time (in seconds since the epoch)
    now=$(date +%s)

    # get flag file mtime (in seconds since the epoch)
    last_update=$(stat --printf '%Y' /var/lib/myapp/flagfile)

    if [ $(( now - last_update )) -gt $interval ]; then
        restart_script_here
    fi

Using systemd

If you have systemd available, you can simply create a .service unit with the Restart=always key, which instructs systemd to restart the script whenever it fails. E.g., put something like this in /etc/systemd/system/myscript.service:

    [Unit]
    Description=This is my nifty service.
    # Put any dependencies here

    [Service]
    Type=simple
    ExecStart=/path/to/myscript
    Restart=always

    [Install]
    WantedBy=multi-user.target

And then activate it with:

    # systemctl enable myscript
    # systemctl start myscript

Using cron

Instead of running a persistent script, you could just run a regular cron job that performs any necessary checks and remediation. You would be limited to checking once per minute, but if that frequency is acceptable it's probably the simpler solution.
Have cron check if the bash script is executing
I am trying to deploy a script to agents and run it using cron every hour. When I run puppet agent -t I get this error:

    Error 400 on SERVER: Invalid parameter path on Cron[homebackup] at /etc/puppet/modules/homebackup/manifests/init.pp:16 on node

Here's what I wrote in /etc/puppet/modules/homebackup/manifests/init.pp (the script is located at /etc/puppet/modules/homebackup/script.sh on the puppet master):

    class homebackup {
      file { 'scriptfile':
        ensure => 'file',
        source => 'puppet:///modules/homebackup/script.sh',
        path   => '/usr/local/bin/script.sh',
        owner  => 'root',
        group  => 'root',
        mode   => 0755,
      }
      cron { 'homebackup':
        ensure  => 'present',
        command => "/usr/local/bin/script.sh",
        user    => root,
        minute  => 0,
        require => File['scriptfile'],
      }
    }

Could someone help me find the glitch?

Here is the fixed manifest:

    class homebackup {
      file { 'scriptfile':
        ensure => 'file',
        source => 'puppet:///modules/homebackup/script.sh',
        path   => '/usr/local/bin/script.sh',
        owner  => 'root',
        group  => 'root',
        mode   => 0755,
      }
      cron { 'homebackup':
        ensure  => 'present',
        command => "/usr/local/bin/script.sh",
        user    => root,
        hour    => "23",
        minute  => absent,
        require => File['scriptfile'],
      }
    }
The immediate issue is that your file (script.sh) needs to be in the files directory under the module, i.e. /etc/puppet/modules/homebackup/files/script.sh. Confusingly, the files part of the path is not part of the source URI.

Also make sure the File requirement refers to the title of the file resource, i.e. literally require => File['scriptfile'], and that the cron resource specifies a command property (the actual command to run).

Some other tips:

- Simplify file resources (and references to them) by using the path as the title (rather than scriptfile). This has the added benefit that you don't even have to specify the path property; it defaults to the title.
- Use variables for anything you refer to more than once, such as the title of your file resource.
- Use puppet-lint to spot some common issues.
Unable to run the script in cron via manifest
I am running an Apache server on Lubuntu Linux. Since there are plenty of applications running on the web server, with 10+ users accessing it regularly, the server logs (access and error logs) pile up to a huge size. I cannot turn them off, since we might need them for security monitoring as well as for debugging. Is there a way to auto-clear them every weekend or so? During the process it's vital that Apache keeps running. I am aware of the permission issues; that's why I seek help here from the Linux veterans.
As mentioned in a comment, logrotate does this for you already. Just install it (if it's not already installed) and enable it. Trying to do this with a custom cron job, as @Dave mentions, while perhaps functional, is really not a good idea for your long-term sanity.
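For reference, a logrotate policy for Apache typically looks something like the following; the log paths and the reload command vary by distribution, so treat them as assumptions:

```
# Hypothetical /etc/logrotate.d/apache2
/var/log/apache2/*.log {
    weekly
    rotate 8
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        /usr/sbin/apachectl graceful > /dev/null 2>&1 || true
    endscript
}
```

The graceful reload in postrotate makes Apache reopen its log files without dropping connections, which covers the requirement that Apache keeps running during rotation.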
What's the best way to clear / empty Apache log files periodically
My server always shuts down when it reaches a high load average. I have optimized my Apache and MySQL, but I cannot always prevent it under heavy traffic. So I tried to write a shell script to watch the load average:

    #!/bin/bash
    check=`uptime | sed 's/\./ /' | awk '{print $10}'`
    if[$check -gt 5]
    then
        /usr/bin/systemctl restart httpd.service
    fi

But it reports errors when I execute the script:

    /var/www/html/load_average.sh: line 3: if[0.98, -gt 5]: command not found
    /var/www/html/load_average.sh: line 4: syntax error near unexpected token `then'
    /var/www/html/load_average.sh: line 4: `then'

Another question: how do I run the script every 10 seconds with a cron job?

Here is working code, shared for everyone:

    #!/bin/bash
    check=$(uptime | tr -d ',' | awk '{print $10}')
    if [[ $check > 5 ]]; then
        /usr/bin/systemctl restart httpd.service
    fi

And the cron jobs:

    * * * * * /var/www/html/load_average.sh >/dev/null 2>&1
    * * * * * sleep 10; /var/www/html/load_average.sh >/dev/null 2>&1
    * * * * * sleep 20; /var/www/html/load_average.sh >/dev/null 2>&1
    * * * * * sleep 30; /var/www/html/load_average.sh >/dev/null 2>&1
    * * * * * sleep 40; /var/www/html/load_average.sh >/dev/null 2>&1
    * * * * * sleep 50; /var/www/html/load_average.sh >/dev/null 2>&1
You must separate the brackets from the data with spaces, like this:

    for I in 0 1 2 3 4 5; do
        check=$(uptime | tr -d ',.' | awk '{print $10}')
        if [ "$check" -gt 5 ]; then
            /usr/bin/systemctl restart httpd.service
        fi
        sleep 10
    done

In UNIX, [ is really a shell command. When the shell replaces the $check variable by its value, it tries to find the resulting command, hence the error. Additionally, I suggest quoting the $check variable; if it were replaced by an empty string or whitespace, you would get a syntax error.

Regarding your crontab question, execute crontab -e and add an entry like this one to your file:

    * * * * * /PATH/TO/YOUR/SCRIPT

Cron's minimal resolution is one minute, so you will have to use a loop to repeat the check six times, every ten seconds.
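As an aside, parsing uptime with awk '{print $10}' is fragile: the field position shifts with how long the machine has been up. /proc/loadavg has a fixed format, so the same check can be sketched against it instead (the threshold of 5 is kept from the question, and the restart is replaced by an echo here):

```shell
#!/bin/bash
# Read the 1-minute load average from /proc/loadavg, whose format is
# fixed ("0.98 0.75 0.60 1/234 5678"), instead of parsing uptime.
load=$(cut -d' ' -f1 /proc/loadavg)

# Integer comparison on the whole-number part of the load.
if [ "${load%.*}" -gt 5 ]; then
    echo "load high: $load"   # a real script would restart httpd here
else
    echo "load ok: $load"
fi
```

Dropping the fractional part with ${load%.*} keeps the test a plain integer comparison, which is what [ -gt ] requires.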
Auto-restart Apache when the load average is high
At work on our Linux servers, I have a standard non-root account. Unfortunately the account does not have permission to use crontab or at. Is there a good reason to block these commands? I can write a script that uses sleep to do the stuff I want to schedule, but I would rather do it with either cron or at.
It slightly simplifies system administration: users can be locked out by using /etc/nologin and kill, without having to worry about processes coming back through cron or at. (Access to these commands is usually controlled through /etc/cron.allow and /etc/cron.deny, and the corresponding at.allow/at.deny files.) It shouldn't be a big problem if you can run your own cron daemon.
Is there a good reason to prevent users from using cron/at?
I am studying Linux on my Raspberry Pi running Debian. There was no problem, and everything worked perfectly in crontab, but something has happened:

    @reboot sudo bash /home/pi/IP_check.sh
    */10 9-24 * * * sudo bash /home/pi/IP_check.sh

The shell script itself has no problem, but it doesn't run every 10 minutes, and there is no log entry for it in /var/log/syslog; it seems to be completely ignored. But @reboot is working, and its log entry does exist in /var/log/syslog, just for the reboot. I checked the permissions to execute (r-x) and tried re-installing the crontab, but that didn't fix anything. What can I try to solve this problem?
That crontab entry is invalid: the maximum value for the hours field is 23, not 24, so the whole line is rejected. Change 9-24 to 9-23 (and add 0 as well if you also want the hour after midnight). You should have gotten an error when installing it with crontab -e.
In crontab, @reboot works but a regular entry doesn't
CentOS 5.x. I noticed that someone (presumably another admin) added an entry directly to the bottom of /etc/crontab, so it reads like:

    SHELL=/bin/bash
    PATH=/sbin:/bin:/usr/sbin:/usr/bin
    MAILTO=root
    HOME=/

    # run-parts
    01 * * * * root run-parts /etc/cron.hourly
    02 4 * * * root run-parts /etc/cron.daily
    22 4 * * 0 root run-parts /etc/cron.weekly
    42 4 1 * * root run-parts /etc/cron.monthly
    00 3 * * 0 root foocommand

Assuming that I do indeed want the command to run as root, are there consequences to adding scheduled tasks this way? I'm more accustomed to using crontab -e to add/edit scheduled tasks.
For running the command itself there is no difference. At least Vixie cron (as on CentOS) checks every minute whether the modification time of the spool directory or of /etc/crontab has changed.

However, if you edit via crontab -e, the crontab will be checked for glaring errors when you write it. E.g. if the last line of your crontab is

    * * * *

you will get the message:

    crontab: installing new crontab
    "/tmp/crontab.HbT2Sa/crontab":27: bad day-of-week
    errors in crontab file, can't install.
    Do you want to retry the same edit? (y/n)

(at least if that last line is line 27). No such checking mechanism is in place if you just add lines to a crontab file directly (either /etc/crontab or those in /var/spool/cron/).
Are there disadvantages/consequences to adding scheduled tasks directly to /etc/crontab instead of using the crontab command? [duplicate]
I'm having issues making crontab run certain commands, despite PATH and SHELL being set correctly.

Here is the env of the machine:

    SHELL=/bin/bash
    USER=ubuntu
    MAIL=/var/mail/ubuntu
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/gopath/bin

Here is the env of cron (looks the same):

    SHELL=/bin/bash
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/gopath/bin
    PWD=/home/ubuntu

Then, in the crontab:

    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/gopath/bin
    SHELL=/bin/bash
    */1 * * * * "whoami"
    */1 * * * * "whoami && which whoami"

The first whoami task succeeds, but the second fails with:

    /bin/bash: whoami && which whoami: command not found

because which is not found. However, this is quite strange as:

    $ which whoami
    /usr/bin/whoami
    $ which which
    /usr/bin/which

And /usr/bin is on the PATH in cron. What gives?
You shouldn't quote the cron job. You have:

    */1 * * * * "whoami && which whoami"

which looks for a command literally named whoami && which whoami, spaces and all, as if there were a single executable with that name. Obviously, no such command exists. Remove the quotes so that the command line is properly interpreted:

    */1 * * * * whoami && which whoami
cron will find certain commands on the PATH but not others
I want to execute all cronjobs in the environment defined in my ~/.zshenv, and I want to redirect STDOUT and STDERR of every cronjob to a single log file. I am on OS X 10.9. What is the cleanest way to achieve this?
~/.zshenv is loaded by zsh when it starts (except when started with -f, or if the configuration directory is changed by setting ZDOTDIR). It is not loaded (and cannot be understood) by any other shell. So arranging to load ~/.zshenv is equivalent to arranging for your jobs to be executed by zsh: set the SHELL variable in the crontab, which applies to every job.

Beware of putting things in .zshenv, because it is read by every zsh instance. For example, if you set an environment variable in order to execute some programs in a different environment (e.g. you want different versions of some files or programs, so you set some …PATH environment variables), this won't work if your .zshenv overrides those variables. In particular, if you want to set environment variables for both your interactive sessions and your cron jobs, don't use .zshenv. Use a file which you source from both ~/.profile and your crontab: start every cron job with . ~/.my_environment.sh; (you can't do that globally).

Output from cron jobs is emailed to you using the local mail facility; I don't know how that's set up on OS X. I don't recommend using a log file instead, because cron takes care of sending one email per job, and of not sending email for successful jobs (no output, return code 0). If you really want to use a log file, start each job with something like:

    exec >~/cron-logs/nameofthisjob-$(date +\%Y\%m\%d-\%H\%M) 2>&1 &&
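Putting the two recommendations side by side, the crontab could look like one of these (the job path is a placeholder):

```
# Option 1: have zsh run the jobs, so ~/.zshenv is read:
SHELL=/bin/zsh
0 * * * * /path/to/job

# Option 2: keep the default shell and source a shared environment file
# per job, so interactive sessions and cron stay in sync:
# 0 * * * * . ~/.my_environment.sh; /path/to/job
```

Option 2 is the one the answer prefers when the same variables are also needed in interactive sessions, since it avoids putting overrides in .zshenv.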
Execute All Cronjobs in ~/.zshenv Environment
I have added a crontab entry via crontab -e for the current user:

    @reboot supervisord -c /etc/supervisord.conf

Yet after the reboot, the supervisord daemon is not running, and I need to run the command again manually in bash. Only a second attempt yields the error I would have expected right away if the cron job had worked:

    $ supervisord -c /etc/supervisord.conf
    $ supervisord -c /etc/supervisord.conf
    Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.

Why doesn't my cron job work, and how do I start the supervisor daemon on boot?
cron by default runs in a minimal environment. From man 5 crontab:

    Several environment variables are set up automatically by the cron(8) daemon. SHELL is set to /bin/sh, and LOGNAME and HOME are set from the /etc/passwd line of the crontab's owner. PATH is set to "/usr/bin:/bin". HOME, SHELL, and PATH may be overridden by settings in the crontab.

Your supervisord executable may not be in /usr/bin or /bin, so cron cannot find it and the job fails to run. Good practice, and the safest way, is to always use the full path to your executable in the cron entry if you're not sure it's in cron's default PATH. Alternatively, you can change PATH globally for cron by editing /etc/crontab:

    $ cat /etc/crontab
    # /etc/crontab: system-wide crontab
    # Unlike any other crontab you don't have to run the `crontab'
    # command to install the new version when you edit this file
    # and files in /etc/cron.d. These files also have username fields,
    # that none of the other crontabs do.

    SHELL=/bin/sh
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

    # m h dom mon dow user  command
    17 * * * *  root  cd / && run-parts --report /etc/cron.hourly
    25 6 * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
    47 6 * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
    52 6 1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
    #
Why doesn't my supervisor daemon start on reboot?
I'm using:

    # cat /etc/redhat-release
    CentOS release 6.4 (Final)

I have a script in a user's crontab:

    $ crontab -l
    0 7 * * * /home/teamcity/scripts/TC_backup.sh

If I run this script as this very user, it creates the needed archive:

    $ /home/teamcity/scripts/TC_backup.sh
    ...

And from the script's own log:

    $ tail -n 20 /var/log/teamcity_backup.log
    Exporting TeamCity data directory: Exporting build logs
    Exporting TeamCity data directory: Exporting personal changes
    Finalizing export...
    Export has completed successfully.
    Backup file created: /home/teamcity/backups/TeamCity_20140117_172034.zip
    Backup finished
    Done. Free disc space: 147G; Used disc space: 125M.
    Last archives available:
    55M Jan 17 17:21 TeamCity_20140117_172034.zip
    55M Jan 17 17:14 TeamCity_20140117_171318.zip
    16M Dec 17 18:43 TeamCity_20131217_184333.zip
    Backup finished at 2014-01-17--17-21

And I have the files:

    $ ls -l /home/teamcity/backups/
    total 127656
    -rw-rw-r-- 1 teamcity teamcity 16777213 Dec 17 18:43 TeamCity_20131217_184333.zip
    -rw-rw-r-- 1 teamcity teamcity 56914703 Jan 17 17:14 TeamCity_20140117_171318.zip
    -rw-rw-r-- 1 teamcity teamcity 57022673 Jan 17 17:21 TeamCity_20140117_172034.zip

But I don't have any *.zip from other days. In the cron log I see that the script started without errors:

    Jan 12 07:00:01 lms-teamcity CROND[11878]: (teamcity) CMD (/home/teamcity/scripts/TC_backup.sh)

Contents of the script:

    BDIR="/home/teamcity/backups/"
    LOGFILE="/var/log/teamcity_backup.log"
    ARHNAME="TeamCity"

    backup () {
        /home/teamcity/TeamCity/bin/maintainDB.sh backup --all -M -F $BDIR/$ARHNAME >> $LOGFILE
    }

    backup

What can be the cause?

UPD: redirecting stderr with 2> errorlog.file gives no result:

    $ crontab -l
    0 7 * * * /home/teamcity/scripts/TC_backup.sh 2> /var/log/teamcity_backup_error.log

    $ file /var/log/teamcity_backup_error.log
    /var/log/teamcity_backup_error.log: empty

In the cron log there are also no errors:

    Jan 20 09:00:01 lms-teamcity CROND[24816]: (setevoy) CMD (/home/setevoy/scripts/db_con_test-4.sh)
    Jan 20 09:00:01 lms-teamcity CROND[24818]: (teamcity) CMD (/home/teamcity/scripts/TC_backup.sh 2> /var/log/teamcity_backup_error.log)
    Jan 20 09:00:01 lms-teamcity CROND[24817]: (root) CMD (/usr/lib64/sa/sa1 1 1)
    ...
The cause was the JAVA_HOME variable, which is needed by the maintainDB.sh script. So, to get info from the script when run from cron: getenv () { env >> $BDIR/envlog.log } ... getenv; ... And then checking the file for the variable: $ cat envlog.log | grep JAVA showed nothing. After adding: ... export JAVA_HOME="/usr/java/jdk1.6.0_45/" export JAVA_OPTS="-Xmx1536m -XX:MaxPermSize=768m" export JRE_HOME="/usr/java/jdk1.6.0_45/jre" ... to this very backup script TC_backup.sh, everything works well.
Script from cron doesn't create archive
1,396,342,628,000
INPUT: * * * * * ( /path/to/foo -x ) 3 * * * * /full/path/tothing 3 3,2,4 * * * /full/path/tothing4 3 OUTPUT: ( /path/to/foo -x ) /full/path/tothing /full/path/tothing4 3 Q: How can I truncate the line until the fifth space?
Here's a way: $ crontab -l|sed -r 's/([^[:space:]]+[[:space:]]){5}//'
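A quick check of that expression on a sample line (using -E, the more portable spelling of GNU sed's -r):

```shell
# Strip the five whitespace-separated schedule fields, leaving only the command
printf '3 * * * * /full/path/tothing\n' |
    sed -E 's/([^[:space:]]+[[:space:]]){5}//'
# → /full/path/tothing
```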
How to only output commands from CRON?
1,396,342,628,000
Is there a way, in Linux or FreeBSD, to receive a notification from the system at a specified time? I'm thinking of something along the lines of what inotify in Linux does for filesystem events. There IS a way to do that using cron, but I'm asking if there is a lower-level interface that can be called programmatically. If cron is the 'official' Unix interface for this kind of task, I'd like to know that, too.
There are two low-level interfaces that I'm aware of: One is simply to do a sleep() until the moment when you want to receive the notification. The sleep call is provided by glibc. The other method would be the alarm() system call. It allows you to tell the kernel that after a defined amount of time has passed it should send the calling process a SIGALRM. It's very likely that you'll have to create an appropriate signal handler which then does what you want to do. For both of these approaches you can't set the absolute time when you want to get notified. Instead you will have to get the current time, and calculate from it how long your process has to wait until it should be woken or alarmed. References (on Linux): man 3 sleep; man 2 alarm;
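In a shell, the same "compute the delay yourself" approach looks like this — a sketch assuming GNU date and an arbitrary target of 23:59 local time:

```shell
# Seconds until the next occurrence of 23:59 local time
target=$(date -d '23:59' +%s)     # today's 23:59 as epoch seconds
now=$(date +%s)
wait_secs=$(( target - now ))
# If 23:59 has already passed today, aim for tomorrow
[ "$wait_secs" -lt 0 ] && wait_secs=$(( wait_secs + 86400 ))
echo "$wait_secs"                 # a process would now do: sleep "$wait_secs"
```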
Get system notification at a certain time?
1,396,342,628,000
I have a couple of cron jobs set up to maintain a local copy of a remote database. The first one downloads the latest version of the database from the remote machine, which runs every day and is working fine. The second one imports the downloaded data, however it is not working. The job is just a simple shell script: #!/bin/bash mysql -u root -mypass -h localhost my_db < my_db.sql The cron task is set up like this: 0 8 * * * bash /home/lampadmin/cron/my_db.sh If I run the shell script manually, it works OK. What can I do to find out why it is not working through cron?
Remove the bash; just have the following and it should work: 0 8 * * * /home/lampadmin/cron/my_db.sh Also check that my_db.sh is executable: chmod a+x /home/lampadmin/cron/my_db.sh
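Both requirements — a shebang line and execute permission — can be checked with a tiny demonstration (a temporary file stands in for my_db.sh):

```shell
# A script can be run directly only if it has a shebang and the execute bit
f=$(mktemp ./cron_demo_XXXXXX)
printf '#!/bin/bash\necho ran\n' > "$f"
chmod a+x "$f"
out=$("$f")
echo "$out"
# → ran
rm -f "$f"
```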
Cron job not running / not successful?
1,396,342,628,000
I disabled all my cron jobs by putting a # in front of them. Except for one: @reboot echo "Hi I rebooted" The rest are: #0 2,14 * * * /home/backup1/mysql-backup.sh #0 * * * * mono /root/apps/AlertAuth.exe However, my inbox gets this message PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/suhosin.so' - /usr/lib/php5/20090626+lfs/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0 with the subject: Cron [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm I don't have any cron jobs. Maybe the init.d script I have for PHP is causing this? Or maybe how do I fix this? I tried reinstalling PHP5, it seems to work yet that file .so doesn't exist.
Debian's cron, like many other modern variants, reads jobs in files under directories called /etc/cron.d, /etc/cron.hourly, etc., in addition to the traditional /etc/crontab. In particular, the job you see comes from /etc/cron.d/php5, which is installed by the php5-common package.
PHP phantom cron job
1,396,342,628,000
OS: Ubuntu 12.04 Now I want to use the Backup and Whenever gems to automatically back up my database. When I connect to the server by ssh as an added user to run backup perform -t my_backup, it works well. But the cron file: 0 22 * * * /bin/bash -l -c 'backup perform -t my_backup' can't run at 22:00. When I use cat /etc/crontab to check cron's config file, it is: SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # m h dom mon dow user command 17 * * * * root cd / && run-parts --report /etc/cron.hourly 25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ) 47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly ) 52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ) # The /bin/bash and /bin/sh are different. What's the reason? What should I do?
The special crontab file /etc/crontab has a slightly different format in that the sixth field must be the user the cron job will be run as. So if you want to put your job into that file, you must insert a user name between * and /bin/bash to match the format. Something like: 0 22 * * * root /bin/bash -l -c 'backup perform -t my_backup' replacing root with whatever user name you really want to use.
Why isn't cron running automatically?
1,396,342,628,000
I have a MAMP setup which runs PHP 5.3.5 on my Mac OS X 10.5 computer. I am trying to install a crontab that executes a PHP script, which is located on my MAMP server. I can only get the crontab to execute if I use the php installation from /usr/bin/php, which is version 5.2.15. In other words, this is the pre-installed MAC OS X installation of PHP. How can I use my MAMP's version of PHP when executing the crontab? I am not knowledgeable enough with unix to install a new version of PHP at /usr/bin/php, though would this work? I want crontab to execute my PHP script using MAMP's version of PHP; this is because I know that the script runs successfully, and I get the desired output. However, when I try executing the crontab using the system default PHP installation at /usr/bin/php, I get Fatal PHP errors. -- In case this might be useful, here is the outcome of /usr/bin/php vs. MAMP's version. The first code example executes my php script successfully: $ MAMP_PHP_PATH="/applications/mamp/bin/php5.3/bin/php" $ $MAMP_PHP_PATH -f /path/to/my/script.php Now, here's what happens when I simply run the following command: $ # whereis php in this case returns usr/bin/php $ php -f /path/to/my/script.php Results in the following error (formatting mine): Fatal error: require_once(): Failed opening required '/path/to/includes/initialize.php' (include_path='.:') in /path/to/my/script.php on line 3 Finally, this is what my crontab file looks like (with MAMP's php): 15 * * * * /Applications/MAMP/bin/php5.3/bin/php /path/to/my/script.php And without MAMP's php: 15 * * * * php /path/to/my/script.php Thanks
The SHORT answer: Ultimately, I just installed the newest version of PHP onto my system. The LONG answer (and all the pain I endured along the way): I kept getting an error when crontab would run, which stated that a certain class that I instantiated in my script – SoapClient – was not being found. My autoload function wasn't finding it either; hence, as shown in the OP, I was getting this error: Fatal error: require_once(): Failed opening required '/path/to/includes/initialize.php' (include_path='.:') in /path/to/my/script.php on line 3 There was another similar error that I kept getting like this, and I discovered that the problem was that the old version of php did not have the SOAP extension enabled, and when the autoload function went looking for it, it checked the php installation's php.ini file for the line: include_path and checked the directories therein to find the SOAP class that I was trying to include. When it couldn't find the class, a Fatal Error resulted. Note: (include_path in your php.ini file works similarly to the $PATH variable in your Unix environment). I used the locate php.ini command and a little bit of intuition and found that my system's php.ini file was at /private/etc/php.ini.default. This was the location of the old php.ini file – the one for the php 5.2 version. Point is, soap was simply not enabled, and therefore the include_path parameter of my php.ini file was not finding its location. So, I downloaded PHP 5.4.4 and ran the following commands: $ ./configure --enable-soap --with-mysql $ make $ make install The installation was made in /usr/local/bin. 
However, the root php installation was in /usr/bin, so I did the following command to move all the contents of /usr/local/bin into /usr/bin, to overwrite the old php version: $ sudo cp -Rvf /usr/local/bin/ /usr/bin I specify: -R to copy ALL the files within the /usr/local/bin/ hierarchy, -v to simply display an output message stating which files are copied as the process occurs, and -f, which allows me to overwrite the applicable files in /usr/bin as desired. Once I overwrote the old version of PHP with the new version, the location of the new php.ini file was somewhere else. But where? I ran this to find out: $ php -i | grep -i "configuration file" Configuration File (php.ini) Path => /usr/local/lib Loaded Configuration File => /usr/local/lib/php.ini After making the applicable changes, I overwrote the file at /private/etc/php.ini.default with the new php.ini file that came with my php 5.4.4 installation. Voilà. The cron job is working and I didn't need to specify a different php path at all. Cheers!
How can I get crontab to use a different PHP installation location?
1,396,342,628,000
I am on Debian 11 testing. It's a low-end computer intended for users with little computer experience, and I want to keep it up to date with a script that does not prompt users for a password. Following the advice of several topics (like here): So I wrote a simple script /home/user/Documents/update.sh like this: #!/bin/bash sudo apt update -y sudo apt upgrade -y sudo apt autoclean -y sudo apt autoremove -y I then made the script executable: chmod a+x /home/user/Documents/update.sh Then I gave the user user the rights with visudo so as not to be asked for the password: user ALL=(ALL:ALL) NOPASSWD:/home/user/Documents/update.sh, /usr/bin/apt update, /usr/bin/apt upgrade, /usr/bin/apt autoclean, /usr/bin/apt autoremove After testing by running the command in the terminal, sh '/home/user/Documents/update.sh', the script works without asking me for a password. 1st try, running the script at startup with crontab: To run the script at each startup, I modified crontab -u user -e (I also tested directly with crontab -e): @reboot sh '/home/user/Documents/update.sh' But on each reboot: no script starts. 2nd try, running the script at startup with Gnome Tweaks and a desktop file: As an alternative I tried to make an update.desktop file in /usr/share/applications in which a double-click starts the script: [Desktop Entry] Name=update Exec=sh '/home/user/Documents/update.sh' Terminal=true Type=Application Encoding=UTF-8 After testing, double-clicking on this file runs the script without asking for a password. Then, with Gnome Tweaks, I added the file updates.desktop as an application at startup. This is where my second problem comes in: the script starts at startup, but asks me for a password. I don't understand why this happens (is Gnome Tweaks running under another user, etc.). Questions: Why does the crontab method not work? (I have already started the crontab service, to be sure.) Why does the Gnome Tweaks method prompt me for a password, and how can I solve this issue? Thank you very much.
The most basic thing you can do is prefix /usr/bin to your script: #!/bin/bash sudo /usr/bin/apt update -y sudo /usr/bin/apt upgrade -y sudo /usr/bin/apt autoclean -y sudo /usr/bin/apt autoremove -y This will work because you allowed the user to run /usr/bin/apt but not apt. That's actually good because it stops someone from making their own malicious program called apt and putting it in a directory in $PATH. However, I'd recommend against this because scripts depending on a NOPASSWD option are not the best. The next easiest solution could be to replace sudo with pkexec to work with your script for use with the desktop entry. pkexec is part of polkit and is the GUI-version of sudo. It will dim your gnome session and prompt for a password instead of using a terminal. #!/bin/bash pkexec /usr/bin/apt update -y pkexec /usr/bin/apt upgrade -y pkexec /usr/bin/apt autoclean -y pkexec /usr/bin/apt autoremove -y There are also ways to allow a user to do something without the interactive prompt. You'll need to lookup polkit rules for that. For example, this could work in /usr/share/polkit-1/actions/org.update.policy: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE policyconfig PUBLIC "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN" "http://www.freedesktop.org/standards/PolicyKit/1/policyconfig.dtd"> <policyconfig> <vendor>Your name</vendor> <action id="org.update"> <description>Update the system</description> <message>This will run apt update/upgrade and then autoremove.</message> <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> <annotate key="org.freedesktop.policykit.exec.path">/usr/bin/apt</annotate> </action> </policyconfig> However, the best answer (especially if you prefer non-interactivity) is to run this as a service. 
A simple solution might be: # /etc/systemd/system/update.service [Unit] Description=Auto upgrade After=network-online.target [Service] Type=oneshot ExecStart=/usr/bin/apt update -y ExecStart=/usr/bin/apt upgrade -y ExecStart=/usr/bin/apt autoclean -y ExecStart=/usr/bin/apt autoremove -y [Install] WantedBy=multi-user.target Then use sudo systemctl enable --now update.service to run it and arm it for the next boots. apt (at least in Debian's distribution) already has something similar to the service solution: apt-daily.service and apt-daily-upgrade.service, which are triggered once per day and do some updating for you. Take a look at /etc/apt/apt.conf.d/50unattended-upgrades and /usr/lib/apt/apt.systemd.daily for instructions on how to configure it to do what you want. I think this solution is even better than making your own services because edge cases and stability issues which could break your system have already been addressed by the authors of apt itself. For example, if autoremove would uninstall libc6 or linux-*, these scripts will avoid that. Something like this might work (these values are all 0/disabled by default): # /etc/apt/apt.conf.d/10periodic # Run 'apt update' every day APT::Periodic::Update-Package-Lists "1"; # Run 'apt-get autoclean' every 1 day APT::Periodic::AutocleanInterval "1"; # Run apt-get clean every 1 day APT::Periodic::CleanInterval "1"; # Run unattended-upgrade every 1 day # This is similar to `apt upgrade` depending on your configuration # (requires package 'unattended-upgrades') # This effectively does the 'apt upgrade' APT::Periodic::Unattended-Upgrade "1"; # Run `apt autoremove` after upgrading # This may be set in /etc/apt/apt.conf.d/50unattended-upgrades # You might want to set this in that other file to avoid # getting confused about what goes where Unattended-Upgrade::Remove-Unused-Dependency "true";
Debian : how to use a sudo bash script without password at startup
1,396,342,628,000
I have multiple cron jobs in one crontab (as usual). One of the jobs interacts with a remote system in another timezone, which means I need to adjust the crontab every time either timezone enters or exits daylight savings. I know I can set CRON_TZ to control what timezone cron uses for all jobs, but is there a way to set a different timezone for one of many? Obvious ideas include: Have a cron job for updating your crontab. Have a separate user with a different default timezone, just for running those TZ-sensitive jobs. Both of which seem a bit hacky. Are there any nicer solutions out there?
Using systemd timers, you can define what would classically be cron jobs as timers, which fire according to an OnCalendar= specification that can incorporate time zones. Regarding the mentioned hurdles of switching away from cron jobs: at least the recurring time event specification allows for basically the same functionality, so that's nice for migration. I don't know whether there's a conversion tool that just takes crontab lines and converts them to systemd timer units, but I could very well imagine that one exists.
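For illustration, a timer unit with a time-zone-qualified calendar spec might look like this (the unit name, time, and zone are made up; a matching remote-sync.service would hold the actual command):

```ini
# /etc/systemd/system/remote-sync.timer  (illustrative name)
[Unit]
Description=Run remote-sync at 04:00 New York time

[Timer]
# systemd accepts a time zone suffix in OnCalendar specifications
OnCalendar=*-*-* 04:00:00 America/New_York
Persistent=true

[Install]
WantedBy=timers.target
```

systemd-analyze calendar '*-*-* 04:00:00 America/New_York' can be used to check when such a spec will next elapse.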
Use a different timezone for one cron job?
1,396,342,628,000
I currently have one perl script that is used to make a memory-intense call to another protocol. This takes approximately 20-25GB of RAM to complete, and anywhere from 8 to 15 minutes. Upon completion, I export that data to another perl script that processes it and submits it to a Discord bot for me. The problem I'm having: I need to run this code every 5 days. Not every 5th day of the week or month, but every 5 days. The protocol has new information available every 5 days, exactly. If I use perl1 to call perl2 and have perl2 sleep for 431000 seconds (subtracting some time for the time it takes to even get perl2 involved) and then call perl1 again, is that a bad plan? I started thinking I might have a lot of perl processes opening as p1 calls p2 and p2 calls p1 again. I'm not even sure if I can sleep for ~5 days or whether the process will end itself. I know there is a better way to do this. Investigating cron, nothing I could come up with or find really matches the scenario. Let's say the time is October 20th at 4:44pm. The next data becomes available on October 25th at 4:44pm, then the 30th, then November 4th and so on. The time does not change. It is always 4:44pm. I don't know the safest/most efficient way to achieve this. I'm already taxing my 16GB RAM system with an 8GB swap to read the protocol data as-is; I don't want endless processes running, holding up additional resources. Server is on Ubuntu. Thanks
cron isn't quite that flexible with this sort of thing and sleeping for five days at a time is not going to be entirely reliable in a couple of ways. One solution would be to bake the "every five days" logic into a wrapper script and set it up in cron to run every single day. So something like this is one approach: unix_day=$(($(date +%s) / (60*60*24))) modulus=$((unix_day % 5)) [ "${modulus}" -eq 0 ] && perl yourScript.pl This is deterministic, it will run every 5 days based on number of days since epoch. If you have a specific day you want to base the 5 days on, then this should do that. In this example, the magic start date is the 13th October 2022 (yesterday) - you should set this as appropriate to your use case my_epoch=$(($(date -d"2022-10-13 04:44" +%s) / (60*60*24))) unix_day=$(($(date -d 04:44 +%s) / (60*60*24))) modulus=$(((unix_day - my_epoch) % 5)) [ "${modulus}" -eq 0 ] && perl yourScript.pl
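The arithmetic above can be sanity-checked with fixed dates (using -u so the epoch-second division by 86400 lands exactly on day boundaries):

```shell
# Two dates exactly five days apart should give modulus 0
my_epoch=$(( $(date -u -d '2022-10-13 00:00' +%s) / 86400 ))
check_day=$(( $(date -u -d '2022-10-18 00:00' +%s) / 86400 ))
echo $(( (check_day - my_epoch) % 5 ))
# → 0
```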
Repeating perl script every 5 days, not nth day of month
1,396,342,628,000
At my job we have a Linux server with several cron jobs with the syntax wget -O /dev/null followed immediately by an http request to a PHP file or whatever. So something like wget -O /dev/null "http://foo.bar.com/file.php". I've been able to find out what the various pieces of it mean individually: wget: An app that can be used in the Linux terminal to download files from web resources without a GUI. If you have the URL to a particular download, like WordPress, PyCharm, whatever, you can use wget to download it without physically visiting the site. wget -O: The -O option allows you to rename the file you're downloading. So if I was downloading VS Code but wanted to rename it, I could run wget -O <alternate name> <url to vscode download file>. /dev/null: From what I understand, this is basically a dump for unwanted log files. If a program is running from the terminal, and you don't care to see/store the output, you can direct it here, where it will be immediately and automatically erased. Basically a way to save space for large programs. I just can't figure out what they do together - does using them all together rename the PHP file to /dev/null so Linux catches it and erases it? Is this some sort of shorthand syntax to rename the file to an empty string and direct the results to /dev/null? Or more likely something different that I'm missing? Thank you in advance!
/dev/null is (pseudo-)device file that just discards everything that's written to it. There's no renaming involved there, wget -O file just opens the named file (possibly creating it) and writes there. In this case with /dev/null, the operating system just discards the written data. The end result is that the URL in question is requested from the server, whatever scripts involved run on the server side, and the response (if any) is discarded. Actually renaming a file on top of a device node would replace the device node with that file. A regular user can't write to /dev, but running e.g. mv somefile.txt /dev/null as root could cause some "interesting" effects in the long run. Without the -O option, wget would create a file in the current directory named based on the URL, so file.php in this case.
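The discard behaviour is easy to see with an ordinary redirection: nothing is renamed, and the device node survives any amount of writing.

```shell
# Write something to /dev/null and confirm it is still a character device
printf 'response body\n' > /dev/null
test -c /dev/null && echo "still a character device"
# → still a character device
```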
Linux wget -O /dev/null <http....> syntax
1,396,342,628,000
With incrontab, I want to monitor a file and whenever it gets modified, I want to replace a string in it. But that will create an infinite loop, I guess. When I configure it with the following table: /etc/file.md IN_MODIFY sed -i 's/Hello/Hi/g' $@ It works once, but never again. I don't see any error messages and the status of incrond remains fine, but I think the service is stuck in an infinite loop. If I restart it, it will work again one single time. Is there a way to prevent such an infinite loop? Or is there another approach to my problem?
It turned out that I was not having an infinite loop, but I was experiencing this bug. The service that modifies the file I am monitoring does not just modify the file, but deletes and recreates it. Through the deletion, incrond stops watching it, which can be determined when event IN_IGNORED is logged. That's why it always worked only once after a restart of incrond. To not lose the watch on my file, I used the workaround also mentioned in the linked GitHub issue. Instead of monitoring the file directly, I monitor its parent directory. To not react to all other events in this directory, I had to put my sed command into a script file and add a filter for the filename of interest: /etc IN_CLOSE_WRITE /home/user/myscript.sh $@ $# With /home/user/myscript.sh: #!/bin/bash if [ "$2" == "file.md" ]; then sed -i 's/Hello/Hi/g' "$1/$2" fi I also changed IN_MODIFY to IN_CLOSE_WRITE, because IN_MODIFY seemed to trigger some ms too early for my needs. Fortunately, the above table won't create a loop, because sed -i doesn't modify or write to the file, but replaces it (IN_MOVED_TO), so no IN_MODIFY or IN_CLOSE_WRITE is triggered.
incrontab: modify a modified file
1,396,342,628,000
I've edited my root crontab to periodically execute a script: sudo crontab -e Which should execute the following script, located on a mounted USB, as root: * * * * * /mnt/usb0/fake-hwclock-cron.sh The script (1) fake-hwclock-cron.sh then runs the script (2) fake-hwclock to save the current time: #!/bin/sh echo "$(fake-hwclock save)" The script (2) fake-hwclock (located in "/sbin/fake-hwclock") then saves the time to the file (3) fake-hwclock.data on a mounted USB. The USB is automatically mounted with the following options: PARTUUID=1c921a37-01 /mnt/usb0 ext4 defaults,auto,users,rw,nofail 0 0 File Permissions: (1) -rwxrwxrwx 1 pi pi /mnt/usb0/fake-hwclock-cron.sh (2) -rwxr-xr-x 1 root root /sbin/fake-hwclock (3) -rwxrwxrwx 1 pi pi /mnt/usb0/fake-hwclock.data If I understood it correct, for the files (1) fake-hwclock-cron.sh and (3) fake-hwclock.data every user can read/write/execute. For file (2) fake-hwclock every user can read/execute. .. but ignoring these permissions for a second, isn't the cron job, which is executed as sudo, executing the script (1) fake-hwclock-cron.sh with su rights as well? And what's even more confusing: I can execute both scripts (1) / (2) without su rights in the shell, but I cant execute them with su rights as a cron job. So, can anyone explain to me why I get a "Permission denied" error? /bin/sh: 1: /mnt/usb0/fake-hwclock-cron.sh: Permission denied P.S. I hope i've managed to explain well enough; otherwise please tell me
Based on the output of mount | grep usb0, the mount point is mounted with the noexec option, which prevents you from executing the script directly, but you can still run it through the shell interpreter, like: * * * * * sh /path/to/script.sh For the latter error, fake-hwclock: not found, you would need to provide the full path of the fake-hwclock command. See more about noexec.
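The workaround relies on the interpreter reading the file rather than the kernel executing it; a small demonstration (a noexec mount itself can't easily be reproduced here, so this only shows that sh needs no execute bit on the file):

```shell
# Passing a script to sh needs only read permission on the file,
# which is why it also works on filesystems mounted noexec
f="./noexec_demo_$$.sh"
printf 'echo ok\n' > "$f"
chmod -x "$f"              # not even executable
out=$(sh "$f")
echo "$out"
# → ok
rm -f "$f"
```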
bash script in cronjob cannot execute /sbin/fake-hwclock
1,396,342,628,000
I have a crontab that works as expected when my mac is awake but seems not to run when the mac is asleep even though I've scheduled the it to wake up a few minutes before it's supposed to run. In the crontab I've forwarded the output and errors so I can see if anything is failing, but these are also not updated so I gather the script isn't running at all. I know the scheduled start up only works if the mac is charging so I've had it charging over night (which seems bad for the battery) and still nothing... Any ideas what might be happening or what I can change? Thanks so much! Edits: I had previously it scheduled to wake up 5 minutes before the job was to be run. I changed that to 1 minute before and it still didn't go through. As for how I'm waking it up, I just went to System Preferences -> Battery -> Schedule and clicked "Start or wake up" (every day)—see image below. I'm not sure exactly what state it returns to or whether there's a way to configure that (sorry, newbie here).
I was able to get this to work using pmset. Specifically, I just ran the line pmset repeat wakeorpoweron MTWRFSU 08:59:00!
Crontab not running when mac asleep despite scheduled wakeup
1,396,342,628,000
I'm trying to execute a bash script on Linux startup, but it doesn't work. I have tried all of these commands in the crontab: @reboot bash /home/user/mysqlamazon.sh @reboot sh /home/user/mysqlamazon.sh @reboot /home/user/mysqlamazon.sh @reboot sleep 60 && /home/user/mysqlamazon.sh I have another command in crontab which works perfectly: @reboot pwsh file.ps1 And I have also tried this command: @reboot pwsh file.ps1 && sh /home/user/mysqlamazon.sh None of these work! Any help would be appreciated! Here is the content of the bash script: while($true) do ./transfermysql.sh > file.txt bcp tablename in file.txt -S ***********.com,**** -U **** -P *********** -d ********* -c :> file.txt sleep 60 done
You don't tell us how this fails, but I am guessing you don't see it executed. First of all, your script will never work since while($true) isn't valid shell syntax. I assume you want something like this: true=1 while(($true)); do ... ; done The more common idiom for that is: while : ; do ... ; done Or (true is a command): while true; do ... ; done Next, the failure under cron is most likely because you are using relative paths in your script: ./transfermysql.sh > file.txt Replace that with the full path: /path/to/transfermysql.sh > /path/to/file.txt I also suspect that bcp is not in cron's PATH, so use the full path to that as well: /path/to/bcp tablename in file.txt -S ***********.com,**** -U **** -P *********** -d ********* -c Finally, I don't know why you would want :> file.txt since your first command overwrites its contents anyway, but if you do need it for some reason, you need to use the full path there too: > /path/to/file.txt.
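For reference, the while : idiom from the answer, in a self-terminating form:

```shell
# ':' is the shell's no-op builtin, which always succeeds,
# so 'while :' loops until something breaks out
count=0
while :; do
    count=$(( count + 1 ))
    [ "$count" -ge 3 ] && break
done
echo "$count"
# → 3
```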
Bash script working in root but not in crontab
1,396,342,628,000
I can't get this cron to run. Being new to Linux, I really don't know my way around. Pi 3B+ Debian 9 Stretch PHP 7.0.33 Nginx 1.10.3 The Pi has OpenMediaVault (OMV) running. Used OMV to create a sharedfolder 'www' which I can access and also map to my PC as a network folder. I have PHP scripts in the www folder and they execute correctly when accessed from the PC browser. I want to automate one of the PHP scripts, which I assume is done using crontab. Used PuTTY to login to the Pi as user root. Edit crontab with: crontab -e Scrolled down and added: */1 * * * * /usr/bin/php /mnt/fs/sharedfolders/www/testcode/push2.php I understand this will run every 1 minute - using that only as a test. I have tested the push2.php code from my browser and it executes as expected without errors. When cron runs, I get an error report email to my PC (I assume generated by OMV) to say: Could not open input file: /mnt/fs/sharedfolders/www/testcode/push2.php What am I missing?
The path to the file is /sharedfolders/www/testcode/push2.php, not /mnt/fs/sharedfolders/www/testcode/push2.php. From comments, it appears as if you're put into a chrooted environment under /mnt/fs when you log in using ssh. This is why the pathname of the file starts with /sharedfolders rather than with /mnt/fs. The /mnt/fs directory is the root directory of your ssh session.
Pi cron php not running
1,396,342,628,000
I have a shell script to download files from S3. I'm able to run the script manually and it works fine, but when I schedule it from cron it's triggered, yet apart from the echo commands nothing is executed. Script #!/usr/bin/env bash set -e echo "starting" y_day=$(date --date="-1 day" +%d) y_month=$(date --date="-1 day" +%m) y_year=$(date --date="-1 day" +%Y) files=$( aws s3 ls s3://bucket/log/$y_year/$y_month/$y_day/ | grep _mylog_ | awk -F ' ' '{print $4}') for file in $files do aws s3 cp s3://bucket/log/$y_year/$y_month/$y_day/$file /opt/local_log/ done ## Next files count=$( aws s3 ls s3://bucket/testlog/$y_year/$y_month/$y_day/ | grep _testlog_ | wc -l) if [ $count -gt 0 ] then files=$( aws s3 ls s3://bucket/testlog/$y_year/$y_month/$y_day/ | grep _testlog_ | awk -F ' ' '{print $4}') for file in $files do aws s3 cp s3://bucket/testlog/$y_year/$y_month/$y_day/$file /opt/localtestlog/ done else echo "No files for userlog" fi Cronjob 10 13 * * * /opt/script/copy.sh > /tmp/copy.log log file (/tmp/copy.log) starting No files for userlog There are no files in s3://bucket/testlog/$y_year/$y_month/$y_day/, so that part of the log is correct. But the first set of commands didn't download anything. Ubuntu syslog Jun 5 19:04:01 ip-10-23-53-161 CRON[6984]: (root) CMD (/opt/script/copy.sh > /tmp/copy.log ) Jun 5 19:04:01 ip-10-23-53-161 cron[6952]: sendmail: fatal: open /etc/postfix/main.cf: No such file or directory Jun 5 19:04:01 ip-10-23-53-161 postfix/sendmail[7012]: fatal: open /etc/postfix/main.cf: No such file or directory Jun 5 19:04:01 ip-10-23-53-161 CRON[6979]: (root) MAIL (mailed 1490 bytes of output but got status 0x004b from MTA#012) I don't have a mail server, so the last 3 lines are fine. But my question is: the complete script ran, yet except for the echo, none of the commands did their work. The count command did run and its value was 0, so it echoed No files for userlog.
The primary issue seems to be that the aws command is not found. You could solve this in one of two ways: Use the full path of the command each time you invoke it in the script. Modify PATH in the script to include the path of the directory where the command is found. See also: When is double-quoting necessary?
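A sketch of option 2 — extending PATH near the top of the script. The candidate directories are assumptions; pip-installed CLIs often land in ~/.local/bin or /usr/local/bin:

```shell
# Prepend likely install locations so cron's minimal PATH can resolve the CLI.
# 'ls' stands in for 'aws' here so the sketch runs anywhere.
PATH="/usr/local/bin:$HOME/.local/bin:$PATH"
export PATH
resolved=$(command -v ls)
echo "$resolved"
```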
Cron job triggered - But expect echo nothing is working
1,396,342,628,000
I have a machine configured (via cron) to start a screen session on reboot. The session opens up a few screens and starts a server in one of them. All of this works fine. However, when I login and resume the screen session, I get a (PS1) prompt like this: \u@\h [\j] \w\$ Terminal colors do not appear either. This is the PS1 string I explicitly set in my bashrc file, but the control sequences like \u are not being interpreted by the shell. I have ensured that my bashrc and profile get imported before the screen starts; the script called from cron: #! /bin/bash # This script initializes screen with a propert environment. It is intended to # be run from cron. # source the profile if [ -r "$HOME/.profile" ]; then source "$HOME/.profile"; fi if [ -r "$HOME/.bash_profile" ]; then source "$HOME/.bash_profile"; fi if [ -r "$HOME/.bashrc" ]; then source "$HOME/.bashrc"; fi exec screen -dmS initscreen I tried adding the line "export TERM=screen.xterm-256color" and variants (e.g., export TERM=xterm-256color), but these changed nothing. My assumption is that because there is not a real TTY when the screen gets started at reboot, somehow screen can't interpret my terminal correctly and ends up starting up without any terminal interpretation. When I quit screen and rerun the startup script from an ssh session (instead of from cron at reboot), everything work fine. How can I get screen to startup at reboot in a way that will let me attach it later with these terminal features working? Thanks in advance.
The fact that magic characters like \w in PS1 are not being interpreted seems to suggest that the shell started by screen is not bash, but something simple like /bin/sh. I looked at /etc/crontab in one of the systems I had to hand and it had the line SHELL=/bin/sh at the start, but another distribution had SHELL=/bin/bash, so you probably need to set this explicitly somewhere to ensure you get a consistent result.
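A sketch of both fixes (the script path is hypothetical): either force the shell in the crontab that launches screen, or tell screen itself which shell to spawn in ~/.screenrc:

```
# in the crontab (crontab -e) - SHELL applies to all jobs below it
SHELL=/bin/bash
@reboot /home/user/bin/initscreen.sh

# or in ~/.screenrc - make every new screen window spawn bash
shell /bin/bash
```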
How to start screen on reboot with an interactive TERM/TTY
1,396,342,628,000
I have written a script in python3 that gets me some magnet links. The script works perfectly, but I want it to run periodically, so I created a cron job to do it every other day. While testing it I get the error that xdg-open: no method available for opening 'magnet....' I have already checked that my default browser is Firefox and that the default app for magnet links is qBittorrent; I am out of ideas on how to fix this. /usr/bin/xdg-open: 851: /usr/bin/xdg-open: www-browser: not found /usr/bin/xdg-open: 851: /usr/bin/xdg-open: links2: not found /usr/bin/xdg-open: 851: /usr/bin/xdg-open: elinks: not found /usr/bin/xdg-open: 851: /usr/bin/xdg-open: links: not found /usr/bin/xdg-open: 851: /usr/bin/xdg-open: lynx: not found /usr/bin/xdg-open: 851: /usr/bin/xdg-open: w3m: not found xdg-open: no method available for opening 'magnet:?x Thanks
I just found the solution. I was using a bash file to start the python3 virtual environment and run the script. I added 2 environment variables at the beginning of the file: export BROWSER=/usr/bin/firefox export DISPLAY=:0 That fixed the issue.
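A fuller sketch of such a wrapper — the virtualenv and script paths are hypothetical placeholders:

```shell
#!/bin/bash
# Give the cron environment a browser for xdg-open and a display to open it on.
export BROWSER=/usr/bin/firefox
export DISPLAY=:0
echo "BROWSER=$BROWSER DISPLAY=$DISPLAY"   # sanity print

# hypothetical paths - adjust to your own setup
# source /home/user/venv/bin/activate
# python3 /home/user/scripts/get_magnets.py
```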
xdg-open: no method available for opening -- Crontab
1,396,342,628,000
I am making backups of all MySQL databases into separate gzip files using crontab. I also want to delete backup files older than 1 day. I am using the command below to delete the old files, but it only works through the terminal. If I set a cron job for the same command then it does not work, and I don't know what is wrong. My command is below: find path -type f -mtime +0 -delete I also can't find any fault in the cron job setup: 0 0 * * * /path/auto_delete_backup_database.sh >/path/auto_delete_backup_database.log Any help will be appreciated. UPDATE As @Kusalananda suggested, I ran ls -l; the result is displayed in the screenshot below. So is it because the .sh file does not have the required permission to be executed? If so, how can I grant that permission?
According to what you are showing in the question and saying in comments, your script is not executable and does not contain a #!-line. At a bare minimum, you should make the script executable with chmod +x /path/auto_delete_backup_database.sh and give it a proper #!-line: #!/bin/sh (this needs to be the very first line of the script, with no spaces before it). When the script is not executable, it will not be able to be used as a script at all. Trying to run such a file would return a "Permission denied" error (your cron daemon may well have tried to send you email messages telling you about this). Without the #!-line telling the calling shell what shell interpreter to use to run the script, it depends on what shell is calling it what would happen. You should always have the appropriate #!-line at the start of a script. In this case, since no special bash features are used, it's enough to use /bin/sh. You could obviously also schedule the find command directly with cron: 0 0 * * * find path -type f -mtime +0 -print -delete >/path/auto_delete_backup_database.log 2>&1 I think this is appropriate for single-command cron jobs. Anything more fancy is better scheduled as part of a wrapper script. I added -print to the find command above, which will make it output the names of the files that it tries to delete (it would be silent otherwise).
can't delete files older than x days through cronjob
1,396,342,628,000
I have this user who has a lot of cron jobs, and I expected it to log stderr in its /var/mail/user. The cron entry below works as expected on a different server. * 30 * * * * /usr/local/bin/scripts/test.sh > /dev/null 2>&1 I've compared postfix/main.cf on both servers and cannot find anything different. Is there something else I need to check?
This is an explicit redirection in the cron command: > /dev/null 2>&1. It means both stdout and stderr are thrown away. There's no basis to expect any mail then. To keep stderr, leave just this redirection at the end: >/dev/null.
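The difference can be demonstrated outside cron; here stdout is discarded while stderr survives, which is exactly what cron then captures and mails:

```shell
# 'cmd > /dev/null 2>&1' throws away both streams; 'cmd > /dev/null' discards
# only stdout, so stderr still reaches cron (and thus /var/mail/user).
sh -c 'echo normal-output; echo error-output >&2' > /dev/null
```

Run interactively, only error-output appears on the terminal.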
Cron is not logging stderr in /var/mail/user
1,396,342,628,000
Yes, I know C-shell is bad. No, I did not choose C-shell. Yes, I would prefer to use a proper shell. No, I cannot switch to a better shell. I have a very simplistic script: /tmp/env_test.csh #!/bin/csh -f env When I run this script from my user that is logged in with a tcsh shell, SHELL equals /bin/tcsh. When I run this script from cron, SHELL equals /bin/sh. Why is the SHELL not being updated appropriately? What do I need to do to resolve this issue?
Look into man 1 csh. The section Pre-defined and environment variables lists which variables csh defines or respects. There is a variable shell in lowercase: shell The file in which the shell resides. This variable is used in forking shells to interpret files that have execute bits set, but which are not executable by the system. (See the description of Non-builtin Command Execution below.) Initialized to the (system-dependent) home of the shell. So let's have a look: % echo $shell /bin/csh Note also that cron itself sets the environment variable SHELL to /bin/sh for every job it runs (see man 5 crontab), which is why SHELL under cron does not match your login shell.
Why is SHELL pointed to /bin/sh in a Csh script?
1,396,342,628,000
I have a computer periodically syncing folders of content with another computer using Resilio Sync. The receiving computer has a sorting process on an hourly cron, which analyses the folders and their contents before moving & cataloging them in a seperate filesystem. My issue is the hourly cron will run and process folders without their full contents if the sync is incomplete. The hourly cron process requires the entire contents of the folder to process the contents correctly. Is there a simple way of checking the contents of the receiving sync folder aren't open, i've looked into lsof, however perhaps there's an easier way? I could switch from the resilio sync process to rsync if that would help.
I suspect there are many ways to do this. The first that came to me is to include a checksum. On the sending server you can run: tar -cf - FILES | md5sum > my_sum.md5 Here tar creates (c) an archive file (f) on stdout (-) from FILES — which can be a glob, directory, or space-delimited list of files — which then gets piped to md5sum, and the hash is saved in my_sum.md5. On the receiving side you can add a check to your cron job to first try to find my_sum.md5 (if it's not there then obviously we haven't copied everything); if it is there, check that a similar generation of the checksum on the receiving side produces a matching checksum.
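A runnable sketch of both sides, using a throwaway directory in place of the real FILES:

```shell
cd "$(mktemp -d)"
mkdir -p demo && echo 'sample content' > demo/a.txt

# sending side: ship the archive checksum alongside the files
tar -cf - demo | md5sum > my_sum.md5

# receiving side: proceed only when the regenerated checksum matches
if [ -f my_sum.md5 ] && tar -cf - demo | md5sum | cmp -s - my_sum.md5; then
    echo "sync complete - safe to process"
else
    echo "sync still in progress - skipping this run"
fi
```

If the sync is mid-transfer, either my_sum.md5 is missing or the regenerated hash differs, and the cron job simply waits for its next run.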
How do I confirm a file sync has completed before executing a command?
1,396,342,628,000
I've got a server that is used as an SFTP endpoint and a script living somewhere else that periodically uploads to the server. At a particular time every day all the files in the upload folder get the string 'archive_' prepended to their names. How can I go about tracking down the culprit name changing script? I've searched through the logs in /var/cron, looked at the scripts in /etc/cron*, looked at crontab -e on all the users with a home directory, done a grep of 'archive' /home, /etc/, and /usr and nothing useful has turned up.
Solved. The script that was pulling the files from the server was adding the text string to the file names to flag them as files that don't need to be pulled the next time around. Thanks everyone!
Finding a rogue script changing filenames?
1,396,342,628,000
I'm trying to build a script that automatically takes a file on the remote server and replaces the remote crontab file with it, but I get permission denied. My idea is to create a shell function for it: update_crontab() { SSH_HOST=$1 FOLDER=$2 { if ssh -o "BatchMode yes" $SSH_SUDO_WHITOUT_PASS@$SSH_HOST "[ -f $FOLDER/crontab ]" then # Folder exists: replace crontab with new file ssh -o "BatchMode yes" $SSH_SUDO_WHITOUT_PASS@$SSH_HOST "sudo cat $FOLDER/crontab > /etc/crontab" echo "crontab overwritten from $FOLDER/crontab" fi } || { echo "Error - Folder does not exist" exit 1 } }
Since both files are remote, you can simply: ssh ... "sudo cp $FOLDER/crontab /etc/crontab" ... which avoids the "sudo redirection" problem where only the cat has elevated privileges, and your normal user shell does the > /etc/crontab redirection.
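An alternative with the same shape is sudo tee, which keeps the privileged process on the writing side; sketched below with a local, sudo-free demo of the mechanics (the commented ssh line reuses the question's variables):

```shell
# Over ssh this would be (sketch):
#   ssh "$SSH_SUDO_WHITOUT_PASS@$SSH_HOST" "sudo tee /etc/crontab < $FOLDER/crontab >/dev/null"
# tee does the writing itself, so the unprivileged shell never opens /etc/crontab.
printf '0 5 * * * /usr/local/bin/backup.sh\n' > /tmp/new_crontab
tee /tmp/etc_crontab < /tmp/new_crontab > /dev/null
cat /tmp/etc_crontab
```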
Remote overwrite crontab file with sudo
1,549,401,462,000
I have this script to change the background and screensaver from my gnome desktop. Works fine when executing manually, but when I put it in cron it doesn't execute it. The file is executable. I added the cron job with crontab -e. This is the script: #!/bin/bash # change_background - Change desktop background and lockscreen background randomly # Export DBUS_SESSION_BUS_ADDRESS environment variable euid=$(id --real --user) pid=$(pgrep --euid $euid gnome-session) export DBUS_SESSION_BUS_ADDRESS=$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/$pid/environ|cut -d= -f2-) # Wallpapers directory dir="/home/myuser/Pictures/Wallpapers" # Wallpaper and screensaver files background=$(ls $dir/* | shuf -n1) screensaver=$(ls $dir/* | shuf -n1) # Set the wallpaper and screensaver gsettings set org.gnome.desktop.background picture-uri file://$background gsettings set org.gnome.desktop.screensaver picture-uri file://$screensaver My script is in my bin directory /home/myuser/bin which is added to the PATH variable. crontab -l output: # ┌───────────── minute (0 - 59) # │ ┌───────────── hour (0 - 23) # │ │ ┌───────────── day of month (1 - 31) # │ │ │ ┌───────────── month (1 - 12) # │ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday; # │ │ │ │ │ 7 is also Sunday on some systems) # │ │ │ │ │ # │ │ │ │ │ # * * * * * command # # --- Change background every minute --- # # * * * * * change_background # # --- ------------------------------ --- # My question is: why cron is not executing my script? what I'm doing wrong?.
The issue seems to have been that the environment in the crontab was not set up with a correct PATH, so the script was never found. The user's shell initialisation files are not run by cron, so setting the PATH or other variables therein is useless for a cron job. This can be solved in a number of ways. One is to simply set PATH (and any other variables that needs specific values) in the crontab (this would also change the value of these variables for the script and all other jobs in the crontab): PATH=/home/myuser/bin:$PATH Another is to execute the script with an absolute path: * * * * * /home/myuser/bin/change_background This may be preferable if other jobs are executed that need an individually modified PATH variable for specific things that the scripts themselves are using (the scripts themselves would then set PATH early on, or be started with e.g. env PATH=... /some/path/program).
Crontab not executing script that change background
1,549,401,462,000
Background The PNG image files I want to use are stored in directories according to date, for example: /NAS-mein/data/201812/ with PNG files stored within it like /NAS-mein/data/201812/foo/bar/20181231_1500.png So I created a symbolic link PNG_path in my home directory ln -s /NAS-mein/data/201812/ PNG_path and I'm able to update it manually through: ln -sf /NAS-mein/data/201812/ PNG_path which works fine and returns `PNG_path' -> `/NAS-mein/data/201812' I'm in a CentOS 6.7 environment and I don't have superuser privileges. The destination directory is created by others but granted 777 permissions, i.e.: drwxrwxrwx /NAS-mein/ drwxrwxrwx /NAS-mein/data/ drwxrwxrwx /NAS-mein/data/201812/ With Crontab Then I tried to automatically update this symbolic link on the first day of the month, so it will always redirect me to the directory of the current date. I tried starting a job in crontab like: 0 0 1 * * ln -sf /NAS-mein/data/$(date "+%Y%m") /home/me/PNG_path >>/home/me/.pngln.log 2>>&1 but this does not work, without even writing any information to the log. So I tried: 0 0 1 * * cd /home/me/ && ln -sf /NAS-mein/data/$(date "+%Y%m") PNG_path >>.pngln.log 2>>&1 and wrapped it in a Bash script like: #!/bin/bash /bin/unlink "/home/me/PNG_path" /bin/ln -sf /NAS-mein/data/$(date "+%Y%m") PNG_path >>/home/me/.pngln.log 2>>&1 but none of the above works: the symbolic link does not change, and no information is logged (i.e. .pngln.log is never created). I'm not sure where I went wrong, or whether using ln in crontab is just not legitimate. Edit: I notice that I didn't mention the most suspicious part: using the date command inside the ln expression.
The percent sign is special in crontab and needs to be escaped if you put your date command there (see man 5 crontab). Your symbolic link points to a directory. When you run ln again, it will put the link inside that directory. Example: $ mkdir real $ ln -sf real link $ tree . |-- link -> real `-- real 1 directory, 1 file $ ln -sf real link $ tree . |-- link -> real `-- real `-- real -> real 1 directory, 2 files The solution is to use ln with -n (or --no-dereference) on Linux or on any system with GNU coreutils' ln, and with -h on BSD. This would cause ln to not descend into the directory that the link points to before creating the new link. A portable solution would be to first explicitly remove the link using rm: ln -s some_directory link Later: rm link && ln -s some_directory link Finally, 2>>&1 in your examples is not valid shell syntax (the correct form is 2>&1); the shell rejects the whole command line with a syntax error, which also explains why .pngln.log is never created.
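The transcript above as a runnable script, including the -n fix:

```shell
cd "$(mktemp -d)"
mkdir real
ln -sf real link        # first run: creates the link
ln -sf real link        # second run: descends into the target directory...
ls real                 # ...leaving a stray 'real' symlink inside it
rm real/real
ln -sfn real link       # -n (--no-dereference) replaces the symlink itself
readlink link           # -> real
```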
How to regularly update symbolic link (ln -sf) via crontab
1,549,401,462,000
Till now I organized my non-crontab cron jobs via cron.d in a Debian LAMP environment. I use these cron jobs to upgrade the CMSs containing my web applications. Here's how I do it from the beginning: #!/bin/bash cat <<-EOF > /etc/cron.daily/cron_daily #!/bin/bash for dir in ${drt}/*/; do if pushd "$dir"; then rws composer update drupal/* webflo/drupal-core-require-dev --with-dependencies drush updatedb drush cache:rebuild rws popd fi done 2> $HOME/myErrors EOF cat <<-EOF > /etc/cron.weekly/cron_weekly #!/bin/bash find "$drt" -path "*/cache/*" -type f -delete certbot renew -q EOF chmod +x /etc/cron.daily/cron_daily /etc/cron.weekly/cron_weekly My question I am considering using Arch instead of Debian. I checked the Arch cron documentation about using cron.d but it's not clear to me if cron.d is a native part of Arch and, if not, how to install it. Is cron.d a part of Arch and, if not, how do I install it?
The /etc/cron.d and /etc/cron.daily directories will be available after installing the cronie package; they are not pre-installed: pacman -S cronie The service then needs to be enabled and started: systemctl enable --now cronie.service Default system scheduled jobs in Arch Linux are otherwise managed through systemd timers. To list the timer units: systemctl list-timers
Using cron.d in Arch
1,549,401,462,000
I have a bash script named testScript.sh that looks like this: #!/bin/bash curl -X GET https://www.example.com -o ~/Desktop/testFile.json curl -X POST -d ~/Desktop/testFile.json http://www.example2.com I want to run this script with crontab, so I edited the crontab file with the crontab -e command like this: * * * * * ~/Desktop/testScript.sh The weird part is that when I run the script manually as ./testScript.sh with the pi user, both curl commands execute just fine. When the script runs from crontab I see the testFile created, so the first curl command executes, but the curl POST is not executed. I have already done my research and most people say it's environment variables, but I don't seem to understand any of the answers. EDIT I followed @roaima's suggestion to make a log file. Logs: Warning: Couldn't read data from file "testFile.json", this makes an empty POST. The curl POST then gives me a 400 Bad Request because there is no content to post. 1) testFile.json is created by the first curl command — I see it locally on my machine — so I don't know why it cannot be read. 2) the script still runs fine if I run it as ./testScript.sh
The answer to my problem was given out by @meuh If you want curl to send the contents of a file, and not just the filename, you would need -d @filename but tilde ~ will not be expanded so use -d @$HOME/Desktop/testFile.json
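Why the tilde stays literal can be shown with plain echo; tilde expansion only happens at the start of a shell word, while $HOME expands anywhere (the HOME value below is set only for the demonstration):

```shell
HOME=/home/demo                    # fixed value just for this demo
echo ~/Desktop/testFile.json       # word starts with ~  -> expanded
echo @~/Desktop/testFile.json      # ~ mid-word -> left literal; curl would
                                   # look for a file literally named '~/...'
echo @$HOME/Desktop/testFile.json  # -> @/home/demo/... - what curl needs
```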
Cron - crontab executes half of bash script
1,549,401,462,000
Content of service crond status -l: [root@test ~]# service crond status -l Redirecting to /bin/systemctl status -l crond.service ● crond.service - Command Scheduler Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2018-01-15 13:34:58 EST; 1 months 6 days ago Main PID: 831 (crond) CGroup: /system.slice/crond.service └─831 /usr/sbin/crond -n ORPHAN (no passwd entry) (root) BAD FILE MODE (/etc/cron.d/yum-cron) I'm getting the above error of cron status for yum-cron(BAD FILE MODE).
cronie (the cron in question), does a specific check for file permissions on each crontab file, at: https://github.com/cronie-crond/cronie/blob/master/src/database.c#L96 The mask it uses is 533 and the resulting masked permissions must be 400, which means that it will allow read (4) or read/write (4+2) bits for the owner of the file, and no more than read (4) for group and other. Some visual examples: user-readable ===== r w x - human-readable permissions 4 2 1 - permission bit values 1 0 0 - file permissions are: readable only 1 0 1 - a mask of 5 ===== 1 0 0 - OK -- resulting masked bits (4) user-readable and writable ===== r w x - human-readable permissions 4 2 1 - permission bit values 1 1 0 - file permissions are: readable and writable 1 0 1 - a mask of 5 ===== 1 0 0 - OK -- resulting masked bits (4) user-executable ===== r w x - human-readable permissions 4 2 1 - permission bit values 0 0 1 - file permissions are: executable only 1 0 1 - a mask of 5 ===== 0 0 1 - FAIL -- resulting masked bits (1) group (or other) - readable r w x - human-readable permissions 4 2 1 - permission bit values 1 0 0 - file permissions are: readable only 0 1 1 - a mask of 3 ===== 0 0 0 - OK -- resulting masked bits (0) group (or other) - readable and writable r w x - human-readable permissions 4 2 1 - permission bit values 1 1 0 - file permissions are: readable and writable 0 1 1 - a mask of 3 ===== 0 1 0 - FAIL -- resulting masked bits (2) group (or other) - no permissions r w x - human-readable permissions 4 2 1 - permission bit values 0 0 0 - file permissions are: no permissions 0 1 1 - a mask of 3 ===== 0 0 0 - OK -- resulting masked bits (0) You most likely have writable-bits on the file somewhere; some possible fixes are: chmod 400 /etc/cron.d/yum-cron chmod 600 /etc/cron.d/yum-cron chmod 644 /etc/cron.d/yum-cron Reference: http://man7.org/linux/man-pages/man7/inode.7.html
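The check can be mirrored in shell arithmetic (leading zeros make the numbers octal, as in the kernel's mode bits):

```shell
# cronie's test, restated: (mode & 0533) must equal 0400
mode_ok() {
    [ $(( $1 & 0533 )) -eq $(( 0400 )) ] && echo "$1 OK" || echo "$1 BAD FILE MODE"
}
mode_ok 0644   # owner rw-, group/other r-- -> OK
mode_ok 0600   # owner rw- only            -> OK
mode_ok 0664   # group-writable            -> BAD FILE MODE
mode_ok 0700   # owner-executable          -> BAD FILE MODE
```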
BAD FILE MODE yum-cron
1,549,401,462,000
I have a Ubuntu 16.04 Nginx environment with a few minimal WordPress sites (virtually all with up to 5 conventional plugins, 10 pages, 10 images, and a simple contact form to send only textual data). I daily execute the script cron_daily.sh with the following three loops, from crontab, to update all WordPress apps under document root. The script uses the WP-CLI shell extension. for dir in ${drt}/*/; do cd ${dir} && wp plugin update --all --allow-root; done for dir in ${drt}/*/; do cd ${dir} && wp core update --allow-root; done for dir in ${drt}/*/; do cd ${dir} && wp theme update --all --allow-root; done ${drt} is document root. It was already declared outside permanently, with its file sourced. I was looking for a way to unite the behavior of these three loops into one segment. This pattern seems promising, and is based on this example: for dir in ${drt}/*/; do if pushd ${dir}; then wp plugin update --all --allow-root wp core update --allow-root wp theme update --all --allow-root popd fi done Is this the shortest pattern one could use? How would you do that?
Why three times the same loop instead of one loop like in your example? At first glance I don't see how this could get any shorter — nor why it should. If anything, the script could get better (and thus longer) by improving the detection of WordPress (if needed). Also, I'd probably run wp language core update as well to make sure translations are up-to-date.
Shortest way to automatically upgrade all WordPress instances under document root (>=4.7.x)
1,549,401,462,000
I have a script which runs perfectly in terminal, but when I tried to run that in crontab every "5 minutes", I got following errors in /var/log/messages: crond: sendmail: fatal: parameter inet_interface: no local interface found for ::1 Crontab entry: */5 * * * * /bin/python /scripts/python/account.py >> /script/python/account.log Note: In my script I am running an aws command (Which might be the reason): aws cloudwatch put-metric-data <----options and parameters----> If anyone could shed some light as to why I am getting this error and to what I can do to overcome this, that would be a great help. Thanks. update 1 Only command that is trying to send information out of system is the aws one, I am using following code to run that command: os.system("aws cloudwatch put-metric-data <----options and parameters---->")
Here's what resolved my problem: I updated the /etc/postfix/main.cf file as follows — commented out inet_interfaces = all and added inet_protocols = ipv4 Now I am not getting any sendmail errors at all in /var/log/messages.
crond: sendmail error while running a python script in crontab
1,549,401,462,000
My Solaris 11 cron appears to have stopped working. Here is the last output of a cron job I run: -rw-r--r-- 1 root root 60 Jul 2 20:30 locked_passwords.txt I setup a test, like this: * * * * * touch /tmp/testing.txt It never touches the file I checked if the service is running: svcs cron STATE STIME FMRI online Mar_09 svc:/system/cron:default I truss the file and see this: root 532 1 0 Mar 09 ? 3:08 /usr/sbin/cron pfexec truss -f -p 532 532: pollsys(0xFC7FC1A8, 1, 0xFC7FC750, 0x00030414) (sleeping...) Is my cron process sleeping? why? UPDATE: I have restarted cron 2 times and it continues to stop. The majority of my log reads: ! c queue max run limit reached Mon Jul 24 12:53:00 2017 ! rescheduling a cron job Mon Jul 24 12:53:00 2017 ! c queue max run limit reached Mon Jul 24 12:53:00 2017 ! rescheduling a cron job Mon Jul 24 12:53:00 2017 ! c queue max run limit reached Mon Jul 24 12:53:00 2017 ! rescheduling a cron job Mon Jul 24 12:53:00 2017 How do I diagnose this situation?
Have you checked /var/cron/log? Perhaps the account is locked or added to cron.deny? Is there a corresponding .au file in /var/spool/cron/crontabs? Historically, that file was needed for jobs to run. Since it's created by crontab -e, it's usually only missing if a cron file was copied to a new server. Possible issue with the script/binary being called? The cron daemon is usually sleeping while it waits for when it does work. If you're using v11.3, you could also look into using a scheduled service in the event that it is more useful for your needs?
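If the "c queue max run limit reached" messages in the update are the culprit, the c (cron) queue's concurrency limit is configured in /etc/cron.d/queuedefs on Solaris. A hedged sketch — the 200-job value is an arbitrary example, and the exact syntax should be verified against queuedefs(4) on your release:

```
# /etc/cron.d/queuedefs - raise the cron (c) queue's max simultaneous jobs
# format: queue.[njob]j[nice]n[nwait]w
c.200j2n
```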
Why has cron stopped processing commands? [duplicate]
1,549,401,462,000
Cron: 1-59 * * * * orangepi /home/orangepi/message.sh > /dev/pts/4; message.sh: #!/bin/bash echo -e "\033[37;1;41m WARNING \033[0m" After it executes I need to press Enter to get back to the console prompt (root@orangepi:/home/orangepi#).
You have opened /dev/pts/4 for writing, and wrote the output of echo into it, nothing more than that. There is no execution/interpretation of the echo command by your shell, so your shell does not display a new prompt. If you want to execute a command from one terminal to another, you can try non-standard tools such as ttyecho: sudo ttyecho -n /dev/pts/4 'echo -e "\033[37;1;41m WARNING \033[0m"'
How to disable waiting for "press Enter" after executing a bash script over cron
1,549,401,462,000
My Oracle Linux 6 system date prints: $ date Sat Mar 18 08:05:10 PDT 2017 And the /var/log/cron timestamp prints: Mar 18 15:05:04 Why are they different, and where can I make the change (is there a conf file?) so that the cron log uses the same timezone as the system?
The issue was resolved by running the following: /etc/init.d/rsyslog restart /etc/init.d/crond restart
Why oracle linux system's /var/log/cron timezone different from system date? [closed]
1,549,401,462,000
I am backing up my notebook running Arch and my girlfriend's MacBook regularly using rsync and cron / launchctl via ssh. The target is a FreeNAS server. I would like to monitor whether the automatic jobs are running correctly, by receiving a notification if the content of the backup folders did not change for a certain time. How can I do that? Or is there some other approach usually used to verify that automatic jobs are running?
The content of the backup not changing as a symptom of the backup not running? In that case monitor the cronjob with a dedicated cron monitor such as WDT.io. This recipe's example is specifically about backups and shows you how to do it.
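A simple local alternative is a second cron job on the receiving side that alerts when nothing under the backup tree has changed recently; a sketch assuming a find with -newermt (both GNU and FreeBSD find have it), a one-day window, and an echo you would replace with mail(1) or another notifier:

```shell
# Warn when no file under the given tree changed within the last day.
check_stale() {
    if [ -z "$(find "$1" -type f -newermt '1 day ago' -print -quit)" ]; then
        echo "WARNING: no changes in $1 for over a day"
    else
        echo "OK: $1 has recent changes"
    fi
}

# demonstration with a throwaway directory
d=$(mktemp -d)
touch -d '2 days ago' "$d/old-backup"
check_stale "$d"          # -> WARNING
touch "$d/fresh-backup"
check_stale "$d"          # -> OK
```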
Monitoring last rsync backup
1,549,401,462,000
I made a script where the first step is to check if a Mac is even online (otherwise it would be unnecessary to even start the script). In the terminal it works perfectly fine: everything runs as it is supposed to. I want to run it via a cron job at 23:30 at night, so I created a cron job and logged the whole thing. The log however says that the ping failed for the Macs, but they are definitely online. Any ideas what could cause this? Here is the script itself: #!/bin/bash #Array of Mac hostnames separated by spaces my_macs=( Mac111 Mac121 Mac122 Mac123 Mac124 Mac126 Mac127 Mac128 Mac129 ) # Number of days the remote Mac is allowed to be up MAX_UPDAYS=1 CURR_TIME=$(date +%s) MAX_UPTIME=$(( MAX_UPDAYS * 86400 )) ADMINUSER="pcpatch" #Steps through each hostname and issues SSH command to that host #Loops through the elements of the Array echo "Remote shutdown check from $(date)" >> /Users/pcpatch/desktop/shutdown/Log/remoteshutdown.log for MAC in "${my_macs[@]}" do echo -n "Checking ${MAC}... " >> /Users/pcpatch/desktop/shutdown/Log/remoteshutdown.log # -q quiet # -c nb of pings to perform if ping -q -c3 "${MAC}" >/dev/null; then echo "${MAC} is switched on. Determining uptime... " >> /Users/pcpatch/desktop/shutdown/Log/remoteshutdown.log BOOT_TIME=0 # Get time of boot from remote Mac BOOT_TIME=$(ssh "${ADMINUSER}@${MAC}" sysctl -n kern.boottime | sed -e 's/.* sec = \([0-9]*\).*/\1/') if [ "$BOOT_TIME" -gt 0 ] && [ $(( CURR_TIME - BOOT_TIME )) -ge $MAX_UPTIME ]; then echo "${MAC} has been online for over 24 hours. Executing shutdown!" >> /Users/pcpatch/desktop/shutdown/Log/remoteshutdown.log ssh "${ADMINUSER}@${MAC}" 'sudo /sbin/shutdown -h now' else echo "${MAC} has not been online for 24 hours yet. Shutdown aborted!" >> /Users/pcpatch/desktop/shutdown/Log/remoteshutdown.log fi else echo "${MAC} is not reachable (ping failed)" >> /Users/pcpatch/desktop/shutdown/Log/remoteshutdown.log fi done In the cron job I wrote: 30 23 * * * /User/myuser/Shutdown/Shutdown.sh
You need to set an explicit PATH for a script to be run under cron. The default is PATH=/usr/bin:/bin and you need (at least) /sbin there. #!/bin/bash export PATH=/usr/local/bin/:/usr/bin:/bin:/usr/sbin:/sbin ... You could also consider tweaking the options on the ping test slightly. The -o allows ping to exit as soon as it receives one response (i.e. the host is awake). The -W1000 forces an upper bound on the time it will wait for a reply. In my tests this caused ping to wait for a maximum of four seconds; without it I had to wait 14 seconds for a failure response: ping -q -c3 -o -W1000 "${MAC}"
Crontab can't reach several Macs?
1,549,401,462,000
I've been looking around but haven't found a solution to my particular problem. I need to create a cron job to backup a file system on daily basis. But the application needs the current date/time to run. Example: bundle exec thor migrator:export /var/tmp/backups --after "2016-12-22 00:00:00 -0700" How can I easily run the above command whereby the date will change daily? Time will remain the same.
Assuming the following line is your example: bundle exec thor migrator:export /var/tmp/backups --after "2016-12-22 00:00:00 -0700" You could use a command substitution for this job: bundle exec thor migrator:export /var/tmp/backups --after "$(date --iso) 00:00:00 -0700" (If you ever use a date format string containing % directly in the crontab, the % signs must be escaped as \% — see man 5 crontab.) But I would rather recommend putting your command into a separate shell script and running that shell script from cron.
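A sketch of such a wrapper script — the bundle line is taken verbatim from the example above and left commented out so only the date logic runs here:

```shell
#!/bin/bash
# Build today's cutoff; the time and offset stay fixed, the date changes daily.
after="$(date --iso) 00:00:00 -0700"
echo "would run: ... --after \"$after\""
# bundle exec thor migrator:export /var/tmp/backups --after "$after"
```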
Cron job with a dynamic date
1,549,401,462,000
I am making a GFFS backup script for a school assignment but I've encountered some issues with it. It works like this: /etc/backup/backup.sh PERIOD NUMBER I have added the following lines in cron: # m h dom mon dow command # Backup for fileserver: #daily: 5 times/week 0 23 * * 1-5 /etc/backup/backup.sh daily $(date -d "-1 day" +%w) #weekly: 5 times/month 10 23 * * 7 /etc/backup/backup.sh weekly $((($(date +%-d)-1)/7+1)) #monthly: 12 times/year 20 23 1 * * /etc/backup/backup.sh monthly $(date -d "-1 day" +%m) #yearly: each year 0 3 1 1 * /etc/backup/backup.sh yearly $(date -d "-1 day" +%Y) The calculations at the end are there to determine which previous backup to overwrite. This works perfectly when triggered manually, but when triggered by cron it does something weird. I'm talking about the weekly backup entry: the calculation is supposed to give me the week number within the current month. I did grep CRON /var/log/syslog and found this line: Dec 19 14:33:01 BE-SV-04 CRON[5445]: (root) CMD (/etc/backup/backup.sh weekly $((($(date +) It appears as if cron is not executing the calculation correctly. Any help?
You have to escape the % signs, so this: 0 23 * * 1-5 /etc/backup/backup.sh daily $(date -d "-1 day" +\%w) should work. Only % is special in a crontab command line — everything after the first unescaped % is passed to the command as standard input (see man 5 crontab) — so + and the other characters need no escaping. When I did this in cron I used the uglier backtick syntax for command substitution and had to escape the % signs there too, like: 0 1 * * * something >> bla`date +\%Y_\%m_\%d`.log
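The arithmetic itself is sound — it only breaks inside the crontab because everything after the first unescaped % is cut off. Outside cron, no escaping is needed:

```shell
# week-of-month from a day-of-month, as in the question's weekly entry
week_of_month() {
    echo $(( ($1 - 1) / 7 + 1 ))
}
week_of_month 1    # -> 1
week_of_month 15   # -> 3
week_of_month 31   # -> 5
```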
Script working manually but not in cron - not calculating var? [duplicate]
1,549,401,462,000
I have the shell script which opens Firefox and launches macros in it (I use a Firefox add-on called Imacros to create macros). The content of my shell script named house.sh is like that: firefox imacros://run/?m=house.iim And I created a scheduled job via crontab -e to run that script hourly every day: 47 * * * * /home/meerim/bin/house.sh But nothing happened (Firefox didn't open). Then I tried this: 47 * * * * env DISPLAY=:0.0 /home/meerim/bin/house.sh But it didn't solve the problem. So how should I fix it? My house.sh script works properly when I run it from terminal.
You should be able to get this to run by putting in house.sh: export DISPLAY=:0.0 and running xhost + in your X session. Once that works you can restrict who is allowed to connect (using xhost again), but once things stop working you'll know how permissive you have to be. This will not work if you are not logged in. In that situation I run firefox from a python script started with crontab, and the actual interface opens in an Xvnc screen independent of whether I am logged in or not (and it doesn't clobber my UI once it starts running).
How to schedule run shell script that opens Firefox [duplicate]
1,549,401,462,000
This question is related to Debian 8.4. I applied the same updating mechanism to several desktop stations and one unused server. This problem appeared on the server, but I suspect it will happen on all those stations too. As solved in this thread, I managed to get the cron job logged; then I waited for an update. Here it came, along with a piece of error information which is total gibberish to me right now. Please first see how I set it up here: How do I know if crontab is working fine? The relevant part of the log starts when it wants to download archives. Apr 8 00:00:42 vb-srv-debian updates: Need to get 108 MB of archives. Apr 8 00:00:42 vb-srv-debian updates: After this operation, 20.5 kB of additional disk space will be used. Apr 8 00:00:42 vb-srv-debian updates: Get:1 http://dl.google.com/linux/chrome/deb/ stable/main google-chrome-stable amd64 49.0.2623.112-1 [48.5 MB] Apr 8 00:00:42 vb-srv-debian updates: Get:2 http://nightly.odoo.com/9.0/nightly/deb/ ./ odoo 9.0c.20160407 [59.6 MB] Apr 8 00:00:54 vb-srv-debian updates: Reading changelogs... Apr 8 00:01:01 vb-srv-debian updates: debconf: unable to initialize frontend: Dialog Apr 8 00:01:01 vb-srv-debian updates: debconf: (TERM is not set, so the dialog frontend is not usable.) Apr 8 00:01:01 vb-srv-debian updates: debconf: falling back to frontend: Readline Apr 8 00:01:01 vb-srv-debian updates: debconf: unable to initialize frontend: Readline Apr 8 00:01:01 vb-srv-debian updates: debconf: (This frontend requires a controlling tty.) Apr 8 00:01:01 vb-srv-debian updates: debconf: falling back to frontend: Teletype Apr 8 00:01:01 vb-srv-debian updates: dpkg-preconfigure: unable to re-open stdin: Apr 8 00:01:01 vb-srv-debian updates: Fetched 108 MB in 11s (9,111 kB/s) Apr 8 00:01:01 vb-srv-debian updates: dpkg: warning: 'ldconfig' not found in PATH or not executable Apr 8 00:01:01 vb-srv-debian updates: dpkg: warning: 'start-stop-daemon' not found in PATH or not executable Apr 8 00:01:01 vb-srv-debian updates: dpkg: error: 2 expected programs not found in PATH or not executable Apr 8 00:01:01 vb-srv-debian updates: Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin Apr 8 00:01:01 vb-srv-debian updates: E: Sub-process /usr/bin/dpkg returned an error code (2) I suspect the main, if not the only, problem is related to the $PATH variable, and I currently don't understand how it is used. While being root, the following variable values are returned: $ echo $PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin $ echo $TERM xterm
cron typically runs things in a fairly minimal environment (see man 5 crontab for what exactly), which probably doesn't have enough in its path for this. If you want to see what that environment contains, you can always run printenv > /tmp/cron_env from cron itself (scheduled at a time in the near future). Generally, you can just define an updated PATH in your crontab file; again see man 5 crontab for details.
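A quick way to see how little a cron-like environment contains, without waiting for the job to fire, is to strip the environment down by hand with env -i (the values below are only illustrative minimal settings, not cron's exact defaults):

```shell
# Run a command with an almost-empty environment, similar to cron's default;
# printenv shows exactly what survives.
env -i PATH=/usr/bin:/bin SHELL=/bin/sh sh -c 'printenv PATH; printenv SHELL'
# → /usr/bin:/bin
# → /bin/sh
```

Running your script this way by hand often reproduces "works in my terminal, fails in cron" problems immediately.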
CRON problem - an apt-get dist-upgrade job
1,549,401,462,000
I'm having troubles with my crontable on the scientific cluster I'm using. It's a bit peculiar, since the jobs seem to run (I get regular mails from some updates) but when I type crontab -l the response is: > no crontab for USER Same story for crontab -e where I end up with an unedited file, somewhere in the tmp folder. So my question is: Where could this crontab possibly be? I checked some "standard" places, but it seems that there is nothing. Any ideas, anybody? Best, L.
First of all, there is a good possibility that the mails are coming from somewhere else, so you need to make sure where they come from. Inspect the mail headers, particularly the Received: headers. They will show you that information.
Cronjob running but no crontab for user?
1,549,401,462,000
I'm getting ready to launch a crontab, but since I'm new at it, wanted to check with someone if they could verify that what I'm doing is correct. * 3 * * 5 /usr/local/bin/backup.bash The above crontab line will run my backup bash file EVERY Friday at 3 a.m., is that correct? I need it to run every Friday, not just one Friday and then stop.
You need to add 00 (or 0) to the minutes field, otherwise the job will run every minute from 03:00 to 03:59 every Friday. So, to run the job at 03:00 AM every Friday: 00 03 * * 5 /usr/local/bin/backup.bash
Crontab Line Date
1,549,401,462,000
With the root user I configured the JAVA_HOME variable for crontab like this: [root@localhost ~]# vim /etc/crontab _______ SHELL=/bin/bash PATH=/sbin:/bin:/usr/sbin:/usr/bin:/opt/jdk1.8.0_71/bin MAILTO=root JAVA_HOME=/opt/jdk1.8.0_71 _______ I have defined a cronjob run by a different user named tomcat like this: [tomcat@localhost ~]$ crontab -e _______ 30 10 * * * /opt/tomcat/bin/shutdown.sh >> /opt/tomcat/logs/cron_restart.log 2>&1 32 10 * * * /opt/tomcat/bin/startup.sh >> /opt/tomcat/logs/cron_restart.log 2>&1 _______ The job runs, but my log says the following: [tomcat@localhost ~]$ vim /opt/tomcat/logs/cron_restart.log ______ Neither the JAVA_HOME nor the JRE_HOME environment variable is defined At least one of these environment variable is needed to run this program Neither the JAVA_HOME nor the JRE_HOME environment variable is defined At least one of these environment variable is needed to run this program ______ 1.) Why is the crontab not picking up the JAVA_HOME? 2.) Which possibilities are there to tell crontab where the JAVA_HOME is? My approach is based on the CentOs-Docs from this page: https://www.centos.org/docs/5/html/5.2/Deployment_Guide/s2-autotasks-cron-configuring.html 3.) Is it possible that i misread the docs?
I got it working this way: Insert the following line in the file /opt/tomcat/bin/setenv.sh: export JAVA_HOME="/opt/jdk1.8.0_71"
CentOs 7 CronTab set JAVA_HOME
1,549,401,462,000
I have a PHP script I'm running from Cron. I want to save its output to file, but also want to save the shell output to a different file. Ideally, I'd like to have this in one line. As such, I tried the following: script "/folder/log/file.errors."`date +"%Y-%m-%d.%H-%M-%S"`".txt" && /usr/bin/php /folder/file.php > "/folder/log/file.php."`date +"%Y-%m-%d.%H-%M-%S"`".txt" But it only runs the first command (before &&). Likewise, if I use ; instead of &&. When I run this as two separate commands, it works just fine: root:~# script "/folder/log/file.errors."`date +"%Y-%m-%d.%H-%M-%S"`".txt" root:~# /usr/bin/php /folder/file.php > "/folder/log/file.php."`date +"%Y-%m-%d.%H-%M-%S"`".txt" How can I join these two commands into one command/line? Also, when run via Cron, will it be necessary for me to run an exit command after the above code in order for script to properly save to file?
Per terdon's request, I am posting this comment as an answer, so that the question can be marked as "answered". Instead of relying on script logging, especially if this will eventually be a cron job, consider sending output and error messages to one or more designated file(s) in your php code. When you run it in cron, it will create a session log unless you divert it with something like a >/dev/null 2>&1 directive. So, as a debug tool, you can check that cron log.
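The two log files can also be produced without script at all, by redirecting stdout and stderr separately. This sketch substitutes a plain shell command for the PHP script and a temporary directory for /folder/log, since the original paths are placeholders:

```shell
# Temporary directory stands in for /folder/log; a plain shell command
# stands in for the PHP script. Computing the timestamp once keeps both
# filenames in sync even across a minute boundary.
logdir=$(mktemp -d)
ts=$(date +"%Y-%m-%d.%H-%M-%S")
sh -c 'echo normal output; echo some error >&2' \
    > "$logdir/file.php.$ts.txt" 2> "$logdir/file.errors.$ts.txt"
cat "$logdir/file.php.$ts.txt"      # → normal output
cat "$logdir/file.errors.$ts.txt"   # → some error
```

Because both redirections belong to one command, this is a single line that works fine in a crontab, and no exit command is needed afterwards.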
script Multiple Commands
1,549,401,462,000
I'm trying to write a script that will identify corrupt jpg images using imagemagicks identify command. The script will run the command, grep the output for the word "Corrupt", move it to a corrupt folder if corrupt or move it to an input folder if the file appears good. My script works as expected if I run it manually from a terminal window. Ideally i want to run this script as a cron job however when I schedule it as a cronjob, its moving everything to the input folder...so it seems like the IF statement isn't getting evaluated correctly. Is there something different i need to do with my script to make it work as a cronjob? #!/bin/bash LANG=en_US.UTF-8 #variables for our folders DROP_FOLDER=~/Desktop/Drop CORRUPT_FOLDER=corrupt_images/ INPUT_FOLDER=Input/ cd $DROP_FOLDER for f in *.jpg do if /opt/ImageMagick/bin/identify -verbose $f | /usr/bin/grep -iq "Corrupt"; then mv "$f" $CORRUPT_FOLDER else mv "$f" $INPUT_FOLDER fi done I am scheduling in crontab with */2 * * * * cd ~ && ./myscript.sh
I suggest to replace your if block by: /opt/ImageMagick/bin/identify -verbose "$f" | /usr/bin/grep -iq "Corrupt" if [[ ${PIPESTATUS[0]} -eq 0 && ${PIPESTATUS[1]} -eq 1 ]]; then mv "$f" "$INPUT_FOLDER" else mv "$f" "$CORRUPT_FOLDER" fi From man bash: PIPESTATUS: An array variable (see Arrays below) containing a list of exit status values from the processes in the most-recently-executed foreground pipeline (which may contain only a single command).
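A minimal demonstration of PIPESTATUS (bash is invoked explicitly, since it is a bash feature that cron's /bin/sh may not provide):

```shell
# The pipeline's left side fails (1) and its right side succeeds (0);
# $? alone only reports the last command in the pipeline.
bash -c 'false | true; echo "last:$? pipe:${PIPESTATUS[0]},${PIPESTATUS[1]}"'
# → last:0 pipe:1,0
```

This also suggests why the script may behave differently under cron: if it is run as sh myscript.sh instead of via its #!/bin/bash shebang, bash-only constructs silently change meaning.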
script not running as expected when scheduled as a cronjob
1,549,401,462,000
I run a script file through CRON daily which involves running selenium testcases and sending the report as mail. Here is my script: check.sh #!/bin/sh set -x ./.bashrc export CLASSPATH=/home/test/TestAutomation/lib/*:. cd /home/test/TestAutomation/lib/ /usr/bin/java -jar selenium-server.jar & cd javac Api.java java Api cd /home/test/TestAutomation/selenium/reports/ cp result.html /home/test/TestReports sh /home/test/repgen.sh sleep 30 sh /home/test/masRepgen.sh This script works fine in cron. In this, sh /home/test/masRepgen.sh this script compiles and executes java file and Send Mail. I made a small change to the above script as follows. #!/bin/sh set -x ./.bashrc . /home/test/blog/build.txt cd /home/test/VT/CT/ if [ -e /home/test/VT/CT/CT__$BuildLabel ]; then echo "Testcases has been run already" else export CLASSPATH=/home/test/TestAutomation/lib/*:. cd /home/test/TestAutomation/lib/ /usr/bin/java -jar selenium-server.jar & cd javac Api.java java Api cd /home/test/TestAutomation/selenium/reports/ cp result.html /home/test/TestReports sh /home/test/repgen.sh sleep 30 fi sh /home/test/masRepgen.sh After this change, I'm not receiving mail. ie., sh /home/test/masRepgen.sh doesn't compiling java class. I couldn't identify where the error is. masRepgen.sh contains this. cd /home/test/ /home/test/jdk1.7.0_12/bin/javac SendMail.java /home/test/jdk1.7.0_12/bin/java SendMail "http://172.20.8.50/Regression/CR__$BuildLabel/compareresults_index.html" "http://172.20.8.50/Summary__$BuildLabel/complete_summary.html" I added this in crontab: 45 02 * * * /bin/sh check.sh >> UI.txt
Placing export CLASSPATH=/home/test/TestAutomation/lib/*:. above the if condition solves the problem. Thanks everyone for the comments.
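The underlying issue can be reproduced in isolation: a shell variable assigned without export is invisible to child processes such as javac and java, which is presumably why moving the export earlier fixed it. (The path below is just the one from the question; unset first so an inherited CLASSPATH can't skew the demonstration.)

```shell
# Without export, the child process does not see the variable...
bash -c 'unset CLASSPATH; CLASSPATH=/home/test/lib; printenv CLASSPATH || echo "child sees nothing"'
# → child sees nothing

# ...with export, it does.
bash -c 'unset CLASSPATH; export CLASSPATH=/home/test/lib; printenv CLASSPATH'
# → /home/test/lib
```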
Java file is not getting compiled in cron
1,549,401,462,000
I want to automatically delete old files in ~/Downloads, with the exception of ones that have '!' in their name. My current version is as follows: find /home/user/Downloads/ -mtime +90 ! -path '*!*' -delete When I test it as follows*: find /home/user/Downloads/ -depth -mtime +90 ! -path '*!*' -print The result is: /home/user/Downloads/old.file /home/user/Downloads I don't want it to delete my /home/user/Downloads directory that will still contain !protected.old.file I want this to be run from anacron so I'm not sure if using globbing there is a good idea. *Notice that -depth is implied by -delete.
If all you want to do is protect the path given to find from being deleted, use -mindepth 1. I'd split the action, however: run once, deleting only files, and run again, this time deleting directories using rmdir, which will only remove empty directories. Note that deleting a file should change the access and modify times of the containing directory, so the test becomes invalid for an old directory as soon as any file in it is deleted. Hence, deleting empty directories in a second run is more likely to succeed.
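A self-contained illustration of the protection (the -mtime test is dropped so that freshly created files match, and a temporary directory stands in for ~/Downloads):

```shell
dir=$(mktemp -d)
touch "$dir/old.file" "$dir/!protected.old.file"
# Without -mindepth 1, the top directory itself would match (and be deleted);
# with it, only the contents are considered, and '!' names stay excluded.
find "$dir" -mindepth 1 ! -path '*!*' -print
# → prints only $dir/old.file
```

Swapping -print for -delete gives the real cleanup command, with the containing directory now safe.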
Cleaning script with find and anacron
1,549,401,462,000
I've done a little java program that ends showing a dialog after doing some other tasks like reading from a web and writting on a file. My goal is to make it run everytime my system starts with 90 seconds delay. (@reboot sleep 90; ...). It allready does all the job well (it creates the file that I want correctly), but the problem is that it doesn't show the dialog. If I run the script manually it works as I want. This is the script: #!/bin/bash javac /home/eneko/workspace/Comprobación\ página/src/Main.java java -classpath /home/eneko/workspace/Comprobación\ página/src/ Main exit 0 And this is what I've written on crontab (I've set it to run every minute just to try if it works without rebooting): * * * * * export DISPLAY=:0 && /home/eneko/Documentos/scriptComprobacionPagina.sh I thought the problem was with export DISPLAY=:0 as it's explained here. But after trying it I'm afraid I'm missing something. I can't even run amarok as in the example. Thanks in advance!
I assumed that cron was the only way to achieve my goal, but I was wrong, because cron is for starting background jobs. Then I created a .desktop file, added it to Startup Applications, and it worked. The file is in ~/.config/autostart and this is what it contains: [Desktop Entry] Type=Application Name=Comprobacion Exec=/home/eneko/Documentos/scriptComprobacionPagina.sh Icon= Comment=Sin comentario X-GNOME-Autostart-enabled=true
How to show Java dialog when cron runs the Java program?
1,549,401,462,000
I want to start a process in a terminal using cron. I want the process to start in a terminal, so that I can continuously see the output of the process on the terminal, and kill it / restart it etc. I know I can do this via screen, by using "screen -p -X stuff ", but I've lately run into weird issues with screen freezing (Screen session freezes, output stops, but process continues running. Reclaiming screen possible?), and was wondering if there was a way to start a process via the cron in a terminal without using screen? I can create the terminal beforehand and rename it etc. by hand, in case that helps.
Here is a solution which will both unlockpt() a new pty descriptor and write its ptsname() to stdout for you. <<\C cc -xc - -o pts #include <stdio.h> int main(int argc, char *argv[]) { if(unlockpt(0)) return 2; char *ptsname(int fd); printf("%s\n",ptsname(0)); return argc - 1; } C Which just compiles a tiny little C program that attempts to call unlockpt() on its stdin and, if successful, prints the name of the newly created and unlocked pty to stdout, or else silently returns 2. On a Linux system - given appropriate permissions (in general, these are most easily achieved by adding oneself to the tty group) - a new pty descriptor might be obtained as easily as... exec 3<>/dev/ptmx ...to get the master-side fd in the current shell then, provided you have compiled the program above in ./... slave=$(./pts <&3) ...will both make your new descriptor practically usable and put its device name in the value of the shell variable $slave. Next, provided the util-linux package has been installed on your system, you can install a new setsid() process there like: setsid -c sh 3>&- <>"$slave" >&0 2>&1 Which will launch an interactive sh process as the session leader of your pty and quit, essentially backgrounding the interactive shell. The backgrounded shell will interpret anything written to >&3 as user input. A particularly useful thing to write might look like: echo "exec >$(tty) 2>&1" | cat - /dev/tty >&3 To redirect all output from your background shell and its children to your current terminal temporarily, while simultaneously redirecting all of your keyboard input back to your background shell for as long as cat survives. The backgrounded shell's /dev/tty is the pty in $slave, though, and so the status quo is restored as easily as: echo 'exec >/dev/tty 2>&1' >&3 The same kind of thing can work from cron, but there you would want to just script the i/o instead, of course, or set up some pipe relay to do the juggling for you automatically.
See How do I come by this pty and what can I do with it? for more.
Using cron to start a process on a terminal without using screen?
1,549,401,462,000
Let's suppose we have a crontab scheduling some commands on an RPi project, and these commands toggle a status of a flag. The box could go out of power, and then reboot. I want to have the flag status at the correct state if the box was continuously working. My Idea: Having a task on the reboot that: recovers the last date of a known state ( this should be easy, I could save a log entry in a file, and the date of that file is the "last well know state date" ) asks crontab to have the list of event that would happen in that lapse, and with these compute the new status. done, work as usual :) Let me know if there is a smarter way in doing so, and if not, how can I achieve point 2?
Apparently it is not an out-of-the-box task. I found a Python library, https://pypi.python.org/pypi/python-crontab, that allows finding a "schedule" for a single job: schedule = job.schedule(date_from=datetime.now()) ... datetime = schedule.get_next() That is exactly what is needed: loop from date_from = the last reboot date while datetime is less than now.
Getting crontab events that would happen given a start date/time
1,425,987,254,000
I have issue with my crontab - cron is not launching one of my scripts this is top part of my crontab (root) SHELL=/bin/bash #--------------------------------------- # Items availability #--------------------------------------- # Daily offer import + daily sync everything 30 7 * * * /var/www/import/download_offers.sh > /var/logs/download_offers_cron.log # Updates availability # 7:45 mass update happens */5 0-6 * * * /var/www/import/check_availability.sh 0,5,10,15,20,25 7 * * * /var/www/import/check_availability.sh */5 8-23 * * * /var/www/import/check_availability.sh #--------------------------------------- # Sales import + sync #--------------------------------------- # Import EBAY + WWW sales. Run sales cleaner */15 * * * * /var/www/import/import_all.sh */5 * * * * /var/www/import/import_www.sh 55 13 * * * /var/www/import/import_all.sh 58 13 * * * /var/www/import/import_www.sh ... more stuff 7KB total ... it used to work perfect for long time but after last week reboot this line 30 7 * * * /var/www/import/download_offers.sh > /var/log/download_offers_cron.log stopped working. contents of download_offers.sh: #!/bin/sh echo "Working..." echo "Download offers started" > /var/log/import/OFFER_START.log ... some private stuff ; just bunch of wget's and echo's ... file /var/log/import/OFFER_START.log is not created what I have tried so far I changed time to 7:30, 8:00 (it used to be 7:45 originaly) I added > /var/logs/download_offers_cron.log but file hasn't been created I browsed root emails, all scripts are launched except that one file is +x executable is there some known bug? is there any other option to debug crontab except mails? why script doesn't launch at all? it used to work fine after last week reboot.... 
Please help I am losing my mind Edit: I am using CentOS release 6.6 (Final) this is /var/log/cron Mar 10 07:30:01 serverpro1 CROND[11291]: (root) CMD (/var/www/import/download_offers.sh > /var/logs/download_offers_cron.log) looks like cron entry is fine but there is no /var/log/download_offers_cron.log file anyway?
It magically works again. The only thing I changed was 30 7 * * * /var/www/import/download_offers.sh > /var/logs/download_offers_cron.log ^^ here to 30 7 * * * /var/www/import/download_offers.sh > /var/log/download_offers_cron.log ^^ here Why that caused an error and the script didn't work AT ALL, I don't know; and why the script stopped working in the first place remains an unsolved mystery.
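One plausible explanation (an assumption, not something the logs above prove): /var/logs does not exist, and when the shell cannot open a redirection target it aborts the whole command line before the script is even started, which would also explain why the script's own internal log was never created.

```shell
# The echo never runs: sh gives up when it cannot open the redirection target.
sh -c 'echo hello > /no-such-dir-$$/out.log' 2>/dev/null \
    && echo "command ran" || echo "command never ran"
# → command never ran
```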
crontab not launching one script
1,425,987,254,000
I've got a cronjob that looks like this 0 0 * * 7 [ $(date +\%d) -le 07 ] && /home/archiver/archiver.sh &> /home/archiver/output And it works from the cron+bash perspective in that it'll run on the first Sunday of every month (or so I assume. It ran today, but we'll see about next Sunday, haha). But the &> /home/archiver/output didn't seem to take. The script is pretty talkative and /home/archiver/output has a modified/changed timestamp of 00:00:01, but is completely empty. What am I missing to capture the script's output?
Your cron's shell seems not to know the &> shorthand from bash. When you write the redirection like this /home/archiver/archiver.sh >/home/archiver/output 2>&1 it should work. I would prefer >>/home/archiver/output 2>&1 to always append to the logfile, too.
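The symptom fits: in plain POSIX sh, cmd &> file parses as cmd & followed by > file, so the command is backgrounded and an empty file is created, matching the empty logfile with a fresh timestamp. The portable form behaves as intended:

```shell
out=$(mktemp)
# POSIX-portable way to send both streams to one file
sh -c 'echo to stdout; echo to stderr >&2' > "$out" 2>&1
cat "$out"
# → to stdout
# → to stderr
```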
cron: output of a script?
1,425,987,254,000
Is there a good reason why apt-get remove leaves installed cron files in place where an apt-get purge or apt-get remove --purge is actually required to completely remove them? Example files may be: /etc/cron.d/<packagename>, or /etc/cron.hourly/<packagename> The man pages and everything else I've seen seems to indicate that only configuration files should remain after a remove command, and purge will only remove those configuration files in addition to the package. If these are considered configuration files, then why? Is it possible to have customised (/configured) versions of these files based on the installation?
Yes, those files are considered configuration files. Generally, (at least) everything in /etc is considered a configuration file in Debian. That's why it takes a purge to remove them. The reason they are considered configuration files is that anything the system administrator can reasonably be expected to customize or edit should be treated as one, and that generally includes anything in /etc, and especially a crontab file.
Why does apt-get remove leave package-installed cron files lying around?
1,425,987,254,000
I have a crontab job to sync a folder: 50 5 * * * /home/user/bin/sync-folder This will execute an script: #!/bin/bash sudo rsync -rav --delete --log-file=/tmp/rsync-output /origin /destination grep folder /tmp/rsync-output if [ $? == 0 ]; then cat /tmp/rsync-output fi The issue is that when there is nothing to sync, I get an email like this: sending incremental file list sent 343 bytes received 17 bytes 720.00 bytes/sec total size is 91,056 speedup is 252.93 What I wanted is to receive an email only where there are new changes. How can I prevent this kind of emails?
Replace this: 50 5 * * * /home/user/bin/sync-folder With this: 50 5 * * * /home/user/bin/sync-folder > /dev/null 2>&1 Add email inside script: #!/bin/bash sudo rsync -rav --delete --log-file=/tmp/rsync-output /origin /destination grep folder /tmp/rsync-output if [ $? == 0 ]; then mailx -s "Rsync Complete at `date +"%F %T"`" [email protected] < /tmp/rsync-output fi
Prevent empty rsync email on cron
1,425,987,254,000
I have a server with some OpenVPN instances (one server, several clients) running on Debian with enforcing SELinux. The connections to some of the VPN servers my machine is connecting to are somewhat unstable, and the OpenVPN instances on my machine crash now and then, so I had set up a cronjob to restart them in case of a crash. Now the problem is, that this cronjob fails due to issues with SELinux, which I don't really understand. Restarting any OpenVPN instance by hand from commandline, using the same command, works fine. This ist what audit says: type=AVC msg=audit(1422960005.730:3567927): avc: denied { sys_module } for pid=14309 comm="ifconfig" capability=16 scontext=system_u:system_r:openvpn_t:s0 tcontext=system_u:system_r:openvpn_t:s0 tclass=capability Was caused by: Missing type enforcement (TE) allow rule. You can use audit2allow to generate a loadable module to allow this access. type=AVC msg=audit(1422960005.722:3567921): avc: denied { relabelfrom } for pid=14295 comm="openvpn" scontext=system_u:system_r:openvpn_t:s0 tcontext=unconfined_u:system_r:openvpn_t:s0-s0:c0.c1023 tclass=tun_socket Was caused by: #Constraint rule: constrain tun_socket { create relabelfrom relabelto } ((u1 == u2 -Fail-) or (t1 == { logrotate_t ldconfig_t initrc_t sysadm_t dpkg_t lvm_t mdadm_t unconfined_mount_t dpkg_script_t newrole_t local_login_t sysadm_passwd_t system_cronjob_t tmpreaper_t unconfined_execmem_t httpd_unconfined_script_t groupadd_t depmod_t insmod_t kernel_t passwd_t updpwd_t apmd_t apt_t chfn_t init_t sshd_t udev_t remote_login_t inetd_child_t restorecond_t setfiles_t unconfined_t systemd_tmpfiles_t sulogin_t useradd_t } -Fail-) ); Constraint DENIED # Possible cause is the source user (system_u) and target user (unconfined_u) are different. # Possible cause is the source level (s0) and target level (s0-s0:c0.c1023) are different. I had already set up a local openvpn configuration for SELinux, in order to get it running at all. 
It looks like this: module openvpn_local 1.0; require { type openvpn_t; type kernel_t; type udev_t; type var_run_t; class system module_request; class file { read append }; class capability sys_module; class tun_socket { relabelfrom relabelto }; } #============= openvpn_t ============== allow openvpn_t kernel_t:system module_request; # allow openvpn_t self:capability sys_module; allow openvpn_t self:tun_socket { relabelfrom relabelto }; allow udev_t var_run_t:file { read append }; This setup had worked, before I had to make some changes on my setup, to run the OpenVPN instances on static devices nodes. Since then, the granted rights don't seem to be sufficient anymore. Any help, to set up a fine grained solution for this, or how to improve the local SELinux module for OpenVPN would be greatly appreciated!
The first error is clear; the line you've commented out gives the permission that audit says is missing. The second part is more interesting, but what I suspect is the problem is the target context of the socket you're modifying (owned by unconfined_u). Because you've moved to static device nodes, your interfaces are no longer created by the openvpn process that's going to modify them, so I don't think self:tun_socket is going to be enough anymore. You may also be able to fix this by modifying the contexts of your nodes to be owned by system_u instead. If all your network setup is handled in an initscript and everything works at boot, you can definitely use run_init /etc/init.d/openvpn start in root's crontab to make this work. run_init makes sure the script runs in the same context it would have been at boot. EDIT: If you want it to be sensitive to crashes and your daemons have pidfiles, you could use some inotify magic on /proc/pid to notice when they disappear and then call the restart script. waitpid would be nicer, but only works on child processes.
Restarting OpenVPN on SELinux by a cronjob
1,425,987,254,000
I want to stop dropbox from running as soon as I log into my account. Now I know I could do this simply by going to Startup Applications and disabling dropbox. But I was wondering if there is a cron-ish way of doing it or is cron not supposed to be used for stuff like these? Currently I have this in my crontab @reboot pkill dropbox But this doesn't work, because I guess dropbox isn't running when this cron-job runs.
The @reboot entry is there for doing things after each reboot. That is not what you want. Cron is not the right tool for this, as it starts something at a specific time and has no clue about you being logged in or being in the process of logging in. At most, a cron job run every minute could check whether you are logged in and then take action. It is better to try to kill dropbox (using pkill e.g.) with a Startup Application.
Killing a process with cron as soon as it starts
1,425,987,254,000
I have been trying to create a cron job to execute a script located in ~/Documents/script.sh at midnight every night and every 8 hours after that. I have found a lot of material on this but for some reason it doesn't work. This is what I have: * 0 * * * ~/Documents/script.sh 0 */8 * * * ~/Documents/script.sh
There is no tilde expansion in your cron job; you have to put in something like: 0 */8 * * * /home/yourloginname/Documents/script.sh This will run script.sh (assuming it is executable (chmod +x script.sh)) at midnight, 8 AM and 4 PM. Leave out the first line you indicated; that would run the script every minute between midnight and 1 AM.
Cron Job Not working [closed]
1,425,987,254,000
I have been using cron for years now. In order to load all my environment variables at once, I define them in a dedicated file (often but not always .bashrc) that I source in the crontab: * * * * * (. /home/me/my_environment_variables.sh; my_script.sh) Unfortunately, this trick fails to work on the Debian wheezy server I've recently been asked to move to. More specifically, everything happens as if the source command were ignored: my_script.sh is executed but with empty environment variables, so that the results of the script are not the expected ones. I really don't understand the issue. Everything worked perfectly on my Ubuntu. The installed cron package is "3.0pl1-124". Do you have any idea how to solve or bypass this issue? PS: Defining environment variables in this way seems to work: * * * * * (export OPTOS_HOME=/home/me/src/optos; my_script.sh) However, defining more than one environment variable in this way is not convenient.
. is an internal shell command. Check what shell is used by cron (by default it is /bin/sh). An alternative solution is to create a wrapper script and put these commands inside it. That will work for sure.
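A minimal, self-contained version of the wrapper approach (a temporary directory stands in for the real paths, echo stands in for my_script.sh, and OPTOS_HOME is the variable from the question):

```shell
# Build a tiny wrapper: it sources the environment file and then runs the
# "real" command, so the crontab line only has to name the wrapper.
tmp=$(mktemp -d)
echo 'export OPTOS_HOME=/home/me/src/optos' > "$tmp/my_environment_variables.sh"
cat > "$tmp/wrapper.sh" <<EOF
#!/bin/sh
. "$tmp/my_environment_variables.sh"
echo "OPTOS_HOME=\$OPTOS_HOME"
EOF
chmod +x "$tmp/wrapper.sh"
"$tmp/wrapper.sh"   # → OPTOS_HOME=/home/me/src/optos
```

The crontab entry then reduces to something like * * * * * /path/to/wrapper.sh, with no sourcing on the cron line at all.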
Why can I not source a file with the default cron installed on a debian wheezy server? [closed]
1,425,987,254,000
I have a very simple bash script that runs flawlessly when I do ./removeOldBackup.sh or sh /home/myusername/backup/removeOldBackup.sh but when I add it to crontab like * * * * * sh /home/myusername/backup/removeOldBackup.sh or * * * * * /bin/sh /home/myusername/backup/removeOldBackup.sh it never works... This is my script: #!/bin/sh find . -name 'files_20[1-3][0-9]-[0-9][0-9]-[1-2][1-9]--*' -delete find . -name 'files_20[1-3][0-9]-[0-9][0-9]-0[2-9]--*' -delete find . -name 'database_20[1-3][0-9]-[0-9][0-9]-[1-2][1-9]--*' -delete find . -name 'database_20[1-3][0-9]-[0-9][0-9]-0[2-9]--*' -delete This is my script permissions: -rwxr-xr-x 1 root root 295 Jul 25 10:07 /home/myusername/backup/removeOldBackup.sh Crontab is added for user root. This is what I find in /var/log/syslog: Jul 25 10:11:01 myservername /USR/SBIN/CRON[7583]: (root) CMD (sh /home/myusername/backup/removeOldBackup.sh) So again, when I run the script manually, my backup files get removed correctly. When it is run by cron, they never get removed. I'm using debian-6.0-x86_64.
To formalise and expand on what someone said in a comment, when you put something in root's crontab it will run inside /root, not in the directory the script is in, because cron doesn't even know where that is. Because your backup files aren't in that directory tree, the find command never reaches them. So the job is running, it just never finds any files to delete. Providing an absolute path to find or adding cd /home/myusername/backup first will fix your problem. Nonetheless, there doesn't seem to be a need to run this cronjob as root at all: all the files are inside myusername's home directory and presumably owned by them too. Why not put your cronjob inside that user's crontab instead? Run crontab -e as myusername and add the exact same line you used for root's version. That way you aren't needlessly running a task as a privileged user (which automatically deletes files, no less), and you'll also be in a working location for the script to start with.
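The effect is easy to reproduce: the same relative find finds nothing when started from the wrong directory. Two temporary directories stand in for /home/myusername/backup and for cron's starting directory:

```shell
workdir=$(mktemp -d)    # stands in for /home/myusername/backup
elsewhere=$(mktemp -d)  # stands in for the directory cron starts in
touch "$workdir/files_2015-03-01--backup"
( cd "$elsewhere" && find . -name 'files_20*' )  # wrong cwd: prints nothing
( cd "$workdir" && find . -name 'files_20*' )    # → ./files_2015-03-01--backup
```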
bash script not being run by cron [duplicate]
1,425,987,254,000
I'm on VMWare which, no matter what anyone says, responds to most unix commands. Some things are unique though, for example the fact that almost no files are persistent through reboot. Due to this I'm getting a problem with a scripted backup which is using a variable to name log files: If I manually insert backup-$(date +%Y-%m-%d).log into cron, log files will of course be named after the date they were started. However, if I insert this into local.sh to re-enter the cron job at reboot; /bin/echo "0 0 * * 1-5 ...backup-$(date +%Y-%m-%d).log it will of course end up like this: backup-2014-07-01.log So the question is how do I write this to cron, so that it ends up as a variable instead of a date?
The dollar sign in your string is being expanded at the time the echo is run and causing command substitution to happen then. What you want is to pass that string AS A STRING on to the file. There are two ways you can keep the substitution from happening. You can escape the dollar sign: /bin/echo "0 0 * * 1-5 ../..backup-\$(date +%Y-%m-%d).log" You can use singe quotes instead of double to wrap the string: /bin/echo '0 0 * * 1-5 ../..backup-$(date +%Y-%m-%d).log' Either way you can test this statement from any shell to see whether it gives you the literal string or runs the substitution.
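A quick demonstration of the three forms:

```shell
# Command substitution runs immediately inside double quotes:
echo "backup-$(date +%Y-%m-%d).log"     # e.g. backup-2014-07-01.log
# An escaped dollar sign, or single quotes, keep the text literal,
# so the substitution is deferred until cron runs the job:
echo "backup-\$(date +%Y-%m-%d).log"    # → backup-$(date +%Y-%m-%d).log
echo 'backup-$(date +%Y-%m-%d).log'     # → backup-$(date +%Y-%m-%d).log
```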
Insert variable and keep the variable
1,425,987,254,000
As nonroot user, how can I schedule a reboot every day at 4:30am? 30 04 * * * /home/user/scripts/reboot.sh As a nonroot normal user, I'm sure the system won't allow me to use the reboot command. What settings do I need to change in Debian order to make this scheduled reboot work? (without asking for sudo password)
You need to put your cron entry in /etc/cron.d or /etc/crontab for it to be run as root. If you put it in a new file under /etc/cron.d, that format should work (/etc/crontab uses a slightly different format).
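As a sketch (the filename is assumed; the schedule is the 4:30 AM from the question), a file under /etc/cron.d differs from a user crontab only in the extra user field between the time spec and the command:

```
# /etc/cron.d/nightly-reboot -- the sixth field names the user to run as
30 04 * * * root /sbin/reboot
```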
Debian, nonroot, cron, bash script and rebooting at specific time
1,425,987,254,000
I have set up a cron job on my local Ubuntu 12.04 server to log on a remote server through a passwordless ssh connection and run mysqldump on a database on that server once a day. My problem is that, in addition to running mysqldump at 00:00 every day, it is for some reason also run at HH:17 at every hour, thereby filling up the disk fairly rapidly. The job in my crontab is set up as: @daily /bin/bash /home/backup/scripts/db_backup The most important parts of the script db_backup looks like this: #!/bin/bash # Sets the properties and folders to be backed up host_name=admin@the_host.com db_name=the_db_name db_backup_folder_at_host="~/db_backup" # Dumps the mysql database { # Try ssh ${host_name} "mysqldump ${db_name} > ${db_backup_folder_at_host}/backup$(date +%F_%R).sql" && echo "$(date) SUCCESS! mysqldump of database" } || { # Catch echo "$(date) FAILURE! mysqldump of database" } At the remote server I have specified a .my.cnf file for the database (in the home folder) like this: [mysqldump] user=USERNAME password=PASSWORD host=MYSQLSERVER and this works fine. The crontab is successfully installed for the super user of my local Ubuntu 12.04 server. I have tried rebooting the server, but that does not fix the problem. Running sudo ps -A | grep cron at the Ubuntu server produces 1166 ? 00:00:00 cron as output, so only one process is running. Running sudo crontab -l shows the daily cron job above, while running crontab -l shows that no jobs are installed for the regular user. There are no cron jobs running on the remote server. Can anyone give me a hint on how this can be happening? Where can I search for clues? Note that I have also tried the following for the crontab, but mysqldump is still run every HH:17: 0 0 * * * /bin/bash /home/backup/scripts/db_backup
Although I'm running a different version of Ubuntu, my /etc/crontab runs the hourly script 17 mins past the hour:

    SHELL=/bin/sh
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

    # m h dom mon dow user  command
    17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
    25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
    47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
    52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
    #

Have a look in /etc/cron.hourly.
cron runs job at unexpected times
1,425,987,254,000
I'm running Arch ARM on a PogoPlug and want to execute a file every hour. The file runs fine when called directly (it is executable). For testing, the file /etc/cron.hourly/crontest contains:

    #!/bin/bash
    date >> /root/log

First I copied it to /etc/cron.daily but it wouldn't run; run-parts --test lists it as valid, but nothing shows in the log file. Then I created a crontab:

    */5 * * * * /etc/cron.hourly/crontest

to run it every 5 minutes while monitoring the logfile, but it doesn't fire. This is /etc/cron.d/0hourly:

    # Run the hourly jobs
    SHELL=/bin/bash
    PATH=/sbin:/bin:/usr/sbin:/usr/bin
    MAILTO=root
    01 * * * * root run-parts /etc/cron.hourly

and journalctl -u cronie just returns:

    -- Logs begin at Wed 1969-12-31 17:00:03 MST, end at Tue 2014-01-28 10:14:12 MST. --

So even though the PogoPlug doesn't have an RTC, it has the correct time via NTP. What else can I do to debug cron / get it to run? I'm tempted to just write a bash script that loops and sleeps x amount of seconds, but I'd rather figure this one out :-)
You need to make sure cronie is started. You can do so with the following command:

    systemctl start cronie

This command will enable cronie to start on boot:

    systemctl enable cronie
Neither crontab nor anacron is running, how to debug?
1,425,987,254,000
I am creating this cron job as the apache user and want to run it as root. Below is my code snippet:

    # Create a cron job to restart apache
    cron_file="/etc/cron.d/restart_apache_crontab"
    cron_job="*/1 * * * * root condrestart-apache.sh"
    echo "$cron_job" > $cron_file
    chmod 777 $cron_file
    exit 0

I provided sufficient privileges to the cron.d directory (changed the directory group to the apache user), and the script generated a cron file restart_apache_crontab at /etc/cron.d/ using the code above.

condrestart-apache.sh contains the code below and has sufficient access permissions for the apache user:

    rm -f /etc/cron.d/restart_apache_crontab
    /sbin/service httpd restart > restart-apache.log

This will first remove the cron file and then restart apache (meaning the cron job will run only once). I am able to create the cron file as the apache user but not able to execute it as root. Please help me with this; a code snippet would be really helpful.
In the cron job

    */1 * * * * root condrestart-apache.sh

I would expect an absolute path to your restart script, e.g. /usr/local/sbin/condrestart-apache.sh. And condrestart-apache.sh must be an executable (e.g. chmod 755) shell script and of course include something like #!/bin/sh, which is missing from your snippet.

The primary reason to restart Apache would be to effect a configuration change, so checking that the syntax is correct before trying to restart your webserver and killing it may be a good idea as well:

    #!/bin/sh
    # /usr/local/sbin/condrestart-apache.sh
    rm -f /etc/cron.d/restart_apache_crontab
    RETVAL=0
    /usr/sbin/apachectl configtest
    RETVAL=$?
    if [ $RETVAL -ne 0 ]; then
        echo "Configuration file invalid"
        exit 1
    fi
    /sbin/service httpd restart
    RETVAL=$?
    if [ $RETVAL -ne 0 ]; then
        echo "Restart Failed"
        exit 1
    fi
Create cron as a apache user and run as root user
1,425,987,254,000
I'm having a strange case of deadlock, where the two processes launched by cron are defunct, but cron does not pick up their return codes and exit. I don't have access to the root user.

    myuser@myserver:~) ps -ef | grep 30163
    11:29AM
    3701  28964 29950  0 11:30 pts/13  00:00:00 grep 30163
    root  30163  6622  0 11:00 ?       00:00:00 /usr/sbin/cron
    3701  30199 30163  0 11:00 ?       00:00:00 [monitor_daemon] <defunct>
    3701  30598 30163  0 11:00 ?       00:00:00 [sendmail] <defunct>
    myuser@myserver:~)

Is there a known reason why we would end up in such a situation? How, without having access to the root user, can I get rid of those three processes that consume memory? I'm using the following kernel/distribution:

    Linux myserver 2.6.32.23-0.3-default #1 SMP 2010-10-07 14:57:45 +0200 x86_64 x86_64 x86_64 GNU/Linux
    LSB_VERSION="core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64"
    SUSE Linux Enterprise Server 11 (x86_64)
    VERSION = 11
    PATCHLEVEL = 1
The last SLES11 SP1 kernel when EoL came (2012-11-08) was 2.6.32.59-0.7. Kernel 2.6.32.23-0.3.1 is from 2010-10-08. So you are most probably hitting an unfixed OS bug. Wake up your root admin and tell him to get his system in shape. The current supported SLES11 is SP2, with kernel 3.0.80...

As for the second part of your question: you can only get rid of these processes as their owner (root).
Deadlock in a crontab between cron and its child defunct processes
1,425,987,254,000
I want to create a cron job on my Ubuntu server by adding/modifying a file in the /etc/cron.d folder, rather than using crontab -e, as that's not easy to automate. I've managed to make a file looking like this:

    * * * * * username command >> logfile

I can tell that the file is there, and I've chmodded it to 777, but it still isn't executed, nor does it appear when I do crontab -l as root. What am I doing wrong?
The file needs to be owned and writable by root. Also make sure that your time specification is correct: is * * * * * the real one?
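A quick way to check the mode, sketched on a scratch file (on a real system the path would be your /etc/cron.d/ file and the chown/chmod would need root): cron ignores crontab fragments that are group- or world-writable, so the expected mode is 644 with owner root.

```shell
f=$(mktemp)               # scratch file standing in for /etc/cron.d/mycronjob
chmod 644 "$f"            # owner read/write, group/other read-only
mode=$(stat -c '%a' "$f") # numeric mode as cron's check would see it
echo "$mode"              # prints 644
```

A 777 file like the one in the question fails that check, which is consistent with cron silently skipping it.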
Setting up a cronjob as a file
1,425,987,254,000
Basically I have a bash script that fetches data from my server to perform a backup. As it is now I have to start that script manually, enter the password, and then wait for it to finish. I would like to set up a cronjob that handles the backup. But I really don't know how to handle the password in a cronjob. Also I can't use keys for this, because my provider does not provide the mechanisms I need to configure them. I have SSH access to my home folder, but in my home folder I don't have write access except for the http(s)docs directory. So I can't create the necessary ~/.ssh/ directory and its contents for login via keys.
This is the command I use to back up to another machine:

    rsync -av -e "ssh -i /root/ssh-rsync-valhalla-key" \
        --exclude lost+found \
        --delete-before \
        /mnt/backup/ \
        [email protected]:/cygdrive/r/\!Backups/Niflheim &

So you can use -i to pass a keyfile to ssh. Of course, in your example, that means the keyfile itself will be sharable via HTTP if anybody ever figures out the filename.
Using rsync in a cronjob when a password is needed
1,425,987,254,000
I want to remove unnecessary logs automatically, like Xorg.log.0 and other packed *.gz files; I really don't need them on a laptop system. However, I checked man logrotate and found only an entry called shred, which is irrelevant here.
Removing old logs is the main job of logrotate. The number of old log versions kept on your disk is set by the rotate config option. Also, take a look at the configuration files (/etc/logrotate.conf and /etc/logrotate.d/*) as well as man logrotate to see various ways how the rotation can be triggered (like monthly or by size limit).
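As a sketch (the path, schedule, and counts are examples, not defaults), a drop-in file in /etc/logrotate.d/ that keeps only two rotated copies and deletes anything older could look like:

```
# /etc/logrotate.d/myapp  -- hypothetical example
/var/log/myapp/*.log {
    weekly
    rotate 2      # keep at most 2 old versions; older rotations are removed
    compress
    missingok
    notifempty
}
```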
How to remove obsoleted logs with logrotate?
1,425,987,254,000
I have installed cron on CentOS 5.6 x64 via yum install vixie-cron.x86_64. Here is my crontab -e:

    SHELL=/bin/sh
    0 0 * * * cat /root/mysql-backup.txt | xargs /root/mysql-cron.sh

Whenever I run this command manually it executes without problems:

    cat /root/mysql-backup.txt | xargs /root/mysql-cron.sh

I am just wondering why my cron job is not running. It's supposed to run every midnight to back up my MySQL database. Are there any other particular steps to configure cron with crontab?
As an answer to this question, Stéphane Gimenez told me to run crond. Here is how to do it:

    /etc/init.d/crond start
vixie-cron.x86_64 doesn't seem to execute jobs on crontab
1,425,987,254,000
If I (e.g. under Fedora) edit the crontab file with vim and then restart crond:

    /etc/init.d/crond restart

then the actual crontab file /etc/crontab matches the cron jobs actually in effect. OK! But: how can I list the cron jobs actually in effect if /etc/crontab gets modified but crond is not reloaded? In that case the jobs in effect aren't the same as the /etc/crontab file. Thank you!
All the implementations of cron that I know of re-read /etc/crontab every minute, i.e. as often as they execute jobs. So the “real-actual” cron jobs are the ones in the crontab file. There are (or were) a few buggy implementations of cron that check for jobs to execute first and only then check the crontabs. This causes one minute's delay between setting the jobs and running them. I don't think any of the current cron implementations on Linux (there are several) are affected.
How to list the actual cronjobs?
1,425,987,254,000
I have this /etc/cron.d/reboot file:

    PROJECT_ROOT=/usr/local/share/applications/ana
    NODE_PATH=/usr/bin/node
    REBOOT_SCRIPT=/usr/sbin/reboot
    SCRIPT=scripts/server-reload-messages.js

    0 5 * * * root cd $PROJECT_ROOT && $NODE_PATH $SCRIPT create
    1 5 * * * root $REBOOT_SCRIPT

I need the script to run once a day at 5am, but it runs at 5 and 17.

Edit based on comments: @roaima, thanks for the advice to look at the cron logs. Now I see that it does in fact run only once a day, so the question title is misleading. The thing is, while I was using the application I got a message that the server would now restart; I checked the clock, it was 05:00 PM, so I had no doubt that something was going wrong.

    May 15 05:00:01 mail CROND[23278]: (root) CMD (cd $PROJECT_ROOT && $NODE_PATH $SCRIPT create )
    May 16 05:00:01 mail CROND[52008]: (root) CMD (cd $PROJECT_ROOT && $NODE_PATH $SCRIPT create )
    May 17 05:00:01 mail CROND[5363]: (root) CMD (cd $PROJECT_ROOT && $NODE_PATH $SCRIPT create )
    May 18 05:00:01 mail CROND[19420]: (root) CMD (cd $PROJECT_ROOT && $NODE_PATH $SCRIPT create )

Here's what I managed to find out: the system date is strange; at 19:20 it looks like this:

    date
    Thu May 18 07:20:02 EDT 2023
The correct answer was in the comments from user bxm: the server's time zone differs by 12 hours from the time zone I expected.
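The mismatch is easy to reproduce, since date honours the TZ environment variable. Rendering one fixed instant in two zones 12 hours apart shows how a job scheduled for 5 can fire at what looks like 17:00 (the zone names here are POSIX offset strings, not real locations):

```shell
# POSIX TZ strings avoid any dependency on the tz database:
# "UTC0" is UTC itself, "ABC12" is a zone 12 hours west of (behind) UTC.
TZ=UTC0 date -d @0 '+%H:%M'    # prints 00:00 (epoch 0 is 1970-01-01 00:00 UTC)
TZ=ABC12 date -d @0 '+%H:%M'   # prints 12:00 (still 1969-12-31 in that zone)
```

Comparing date run interactively with date run from a cron job (or checking timedatectl on systemd machines) quickly reveals whether the daemon and the user disagree about the zone.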
Cron works at 5pm instead of 5am
1,425,987,254,000
I am trying to copy a file on reboot, as the Raspberry Pi deletes my .asoundrc file every time. I have a copy of the file saved and a shell script I wrote. The shell script works, but I cannot get it to run from crontab.

Code in the script named copyASoundRC.sh:

    #!/bin/bash
    cp '/home/sox/asound data/.asoundrc' '/home/sox'

Attempted code in crontab:

    @reboot bash "/home/sox/asound\ data/copyASoundRC.sh"

Any help is greatly appreciated. PS: this is a repost from the Raspberry Pi exchange; they said it did not belong there. Please don't get angry about that.

Edit 1, based on @Seamus' answer:

    #!/bin/bash
    cp /home/sox/asoundData/.asoundrc /home/sox

    @reboot /home/sox/asoundData/copyASoundRC.sh >> /home/sox/mylogfile.txt 2>&1

There are no errors in mylogfile.txt, but it still does not work.
It appears you may have mangled your script and your crontab entry:

Why do you have a space between asound and data in cp '/home/sox/asound data/.asoundrc' '/home/sox'?
Why do you have a backslash in the crontab entry?
Where exactly is the folder you refer to as data?

Assuming the folder data is actually located at /home/sox/asound/data, try this for your script & crontab entries:

    #!/bin/bash
    cp /home/sox/asound/data/.asoundrc /home/sox

    @reboot sleep 60; /home/sox/asound/data/copyASoundRC.sh >> /home/sox/mylogfile.txt 2>&1

This (assuming that's the correct location for your copyASoundRC.sh script) will redirect (>>) stderr and stdout to a logfile to help you with troubleshooting.
How to copy a file from one directory to another through crontab's reboot
1,674,226,105,000
sudo crontab -e:

    15 0 * * 1-5 /usr/bin/screen -S wake_up -d -m /home/pi/auto/wake_up.py

But at 00:15 there is no screen started... This command worked in a terminal:

    screen -S wake_up -d -m /home/pi/auto/wake_up.py

Python file:

    #!/usr/bin/env python3
    import time

    x = 1
    while x < 10:
        print(x)
        x += 1
        time.sleep(1)

/var/log/syslog:

    Nov 17 00:15:01 pi cron[352]: (root) RELOAD (crontabs/root)
    Nov 17 00:15:01 pi CRON[32392]: (root) CMD (/usr/bin/screen -S wake_up -d -m /home/pi/auto/wake_up.py)

The log even shows that it started correctly.
    screen -S wake_up -d -m /home/pi/auto/wake_up.py

This will not keep a screen session open once the command exits. Instead, create this file:

    % cat /home/pi/auto/.boot-screenrc
    screen -t cpu 1
    stuff /home/pi/auto/wake_up.py\015

This starts a screen window, inserts (stuffs) the python command into its stdin, and adds a return. Then add:

    screen -d -m -S wake_up -c /home/pi/auto/.boot-screenrc

Also keep in mind that cron's default PATH is very abbreviated. If you have 3rd-party software loaded, it is likely not in the path. Make sure to use absolute paths instead of the env command. Alternatively, you can add a line at the top of your crontab file like:

    PATH=/bin:/usr/bin:/some/other/path
How to start a screen with crontab
1,674,226,105,000
I have CentOS 7 and am trying to access a configuration file that looks a little something like this:

    # Cron configuration options
    # For quick reference, the currently available log levels are:
    #   0   no logging (errors are logged regardless)
    #   1   log start of jobs
    #   2   log end of jobs
    #   4   log jobs with exit status != 0
    #   8   log the process identifier of child process (in all logs)
    #

My goal is to disable cron logging. I've tried googling this, and apparently some people have files like /etc/default/cron and /etc/rsyslog.d/50-Default.conf, but none of those files exist when I search for them. I do have /etc/rsyslog.conf and tried to use

    cron.none.* /var/log/messages

(I don't have /var/log/syslog either) to disable cron logging, but I'm still getting cron logs on my remote server. How can I stop the cron logs (but not cron itself)? Nothing I do seems to stop them.
CentOS/RHEL 7 use the cronie implementation of cron. The file you show there relates to vixie cron, as used on distributions such as Debian.

    [root@centos7 ~]# head -2 /usr/share/doc/cron*/README
    17. January 2008 mmaslano (at) redhat (dot) com
    Rename the fork on cronie. The source code could be found here:
    [root@centos7 ~]#

    root@debian:/# head -20 /usr/share/doc/cron*/README
    #/* Copyright 1988,1990,1993 by Paul Vixie
    # * All rights reserved
    # *
    # * Distribute freely, except: don't remove my name from the source or
    # * documentation (don't take credit for my work), mark your changes (don't
    # * get me blamed for your possible bugs), don't alter or remove this
    # * notice.  May be sold if buildable source is provided to buyer.  No
    # * warrantee of any kind, express or implied, is included with this
    # * software; use at your own risk, responsibility for damages (if any) to
    # * anyone resulting from the use of this software rests entirely with the
    # * user.
    # *
    # * Send bug reports, bug fixes, enhancements, requests, flames, etc., and
    # * I'll try to keep a version up to date.  I can be reached as follows:
    # * Paul Vixie <[email protected]>          uunet!decwrl!vixie!paul
    # */

    Vixie Cron           V3.0           December 27, 1993
    [V2.2 was some time in 1992]
    root@debian:/#

By default, cronie will send all cron messages to /var/log/cron using the /etc/rsyslog.conf entry below:

    [root@centos7 ~]# grep ^cron /etc/rsyslog.conf
    cron.*    /var/log/cron
    [root@centos7 ~]#

To change where they're written, revise the file and restart rsyslog. No need to restart cron.

    [root@centos7 ~]# sed -i 's!/var/log/cron!/var/log/bob!' /etc/rsyslog.conf
    [root@centos7 ~]# grep ^cron /etc/rsyslog.conf
    cron.*    /var/log/bob
    [root@centos7 ~]# systemctl restart rsyslog
    [root@centos7 ~]# tail /var/log/bob
    Jul  7 19:16:41 centos7 crontab[8581]: (root) LIST (root)
    Jul  7 19:17:01 centos7 CROND[8587]: (root) CMD (/bin/touch /tmp/foo)
    [root@centos7 ~]#

And as pointed out by @doneal24, to completely discard the entries, per your original question, you'd use an /etc/rsyslog.conf line such as the one below:

    cron.* ~
Can't find cron config file
1,674,226,105,000
I have a cron job that executes an R script hourly. The script checks an online data source that gets updated at an unknown time each day. If the data source is not updated, the script exits with an error code. If the source is updated, the script runs normally without any error codes. After the script completes, I need to begin a manual workflow, so I would like to receive a notification when the cron job completes, to know when to begin.

Things I have considered doing, but find to be hacky/incorrect:

- send the email from within the R script
- generate an error when the script succeeds

What I want to do:

- send a customized cron notification email after successful execution
- something better that I haven't considered yet
Add something like the following to the end of your cron job's command line:

    && date | sendmail -s "R job completed successfully" [email protected]

That will email you an alert on success, and non-zero exit codes will be handled normally by cron. The date is a placeholder for echo, printf, cat, or whatever you want to use to generate the body of the message. BTW, I used sendmail rather than mail because the various versions of sendmail have more consistent handling of command-line options than the various implementations of mail do.

Alternatively, just have your R job produce output on stdout. Unless configured not to do so, cron will email both stdout and stderr from a cron job to the job's owner. Using sendmail explicitly isn't necessary; it just lets you set the subject, recipient address, sender address, add headers, and so on. cron will only send to the job's owner (you) or to whatever email address is set in the MAILTO variable in the crontab. cron will also set the Subject: to contain Cron <user@host> followed by the command executed by cron.
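Putting the pieces together, a crontab sketch (address, path, and schedule hypothetical) that relies on cron's own mailer for the success notice:

```
[email protected]
# The R script's own output is discarded; the success line only reaches
# cron's mailer when the script exits 0, so mail arrives only on success.
0 * * * * /usr/bin/Rscript /home/me/check_source.R >/dev/null 2>&1 && echo "data source updated, start your workflow"
```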
Crontab notify after successful execution?
1,674,226,105,000
I'm working on RHEL 7.9. In my crontab, I have the following variables and command:

    MAILTO=""
    SHELL="/usr/bion/ksh"
    30 * * * * find /home/me/data/input -name "*.completed" -size +10M -print >> /home/me/jobs/completed.big 2>/home/me/jobs/completed.big.errors

Since everyone has spotted the extra o in the SHELL variable, you (and I) can tell that all jobs were failing silently. The /var/log/cron file was filled with all the expected command lines to run. The journalctl -xe -t crond command only mentioned reloading my personal crontab and skipping jobs:

    Jan 30 01:21:05 servername crond[26104]: (root) INFO (Job execution of per-minute job scheduled for 01:20 delayed into subsequent minute 01:21. Skipping job run.)
    [...]
    Feb 05 16:02:01 servername crond[3997]: (me) RELOAD (/var/spool/cron/me)

Seeing no logs from my jobs, I removed the MAILTO variable, and then the local mailbox of my account got messages like:

    [... stripping mail headers]
    execl: couldn't exec `/usr/bion/ksh'
    execl: No such file or directory

Since this error occurs before the command actually launches, I didn't get it in my job logs. Is there any way to get these errors, or at least a trace that cron failed to launch the command, into a log? We have enough mail here, and logs are already monitored.

I have read the Where are cron errors logged? question, but I would like a definitive answer to "Can crond errors be found anywhere other than the mails?" This might be a hidden configuration for crond, or something else, I don't know. As a last resort, I might even consider tricking the mailbox into redirecting to a log and cleaning it up afterwards.
According to this reference, the cron daemon has options for this:

    -s
    -m off

The man pages do describe those arguments:

    -s       sends output to the system log (usually something like /var/log/cron)
    -m off   disables sending mails

It is done system-wide, but it fits my needs:

- corporate root jobs are quiet (inventory, patches)
- mine aren't verbose either
- the machine is dedicated
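On RHEL-family systems these flags are normally passed through /etc/sysconfig/crond, which the crond unit reads at start-up; a sketch (verify the variable name shipped on your release):

```
# /etc/sysconfig/crond
CRONDARGS="-s -m off"
```

followed by systemctl restart crond for the change to take effect.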
Trapping all crond errors in logs rather than email
1,674,226,105,000
I have a cron job that locks the computer using i3lock at a certain time (12:15), but sometimes I put the computer to sleep before 12:15. If I then wake the computer up after 12:15 (usually around 13:30), the computer gets locked immediately after wake-up. Why is that? My OS is Debian buster x86-64.
I know about anacron, but having checked its configuration, anacron is clearly not responsible for the lock that happens immediately after wake-up. I read the cron manual and found this:

    Special considerations exist when the clock is changed by less than 3 hours, for example at the beginning and end of daylight savings time. If the time has moved forwards, those jobs which would have run in the time that was skipped will be run soon after the change. Conversely, if the time has moved backwards by less than 3 hours, those jobs that fall into the repeated time will not be re-run. Only jobs that run at a particular time (not specified as @hourly, nor with '*' in the hour or minute specifier) are affected. Jobs which are specified with wildcards are run based on the new time immediately. Clock changes of more than 3 hours are considered to be corrections to the clock, and the new time is used immediately.

And I checked the cron log with sudo journalctl -xu cron; it did have an entry telling me that cron ran the missed cron job after waking up.
why debian run cron scheduled task immediately after wake up from sleep?
1,674,226,105,000
I'm trying to schedule a cron job and I'm failing soundly. I'm even starting to think that this can't be done with cron. I'm trying to set a job that will run at a certain hour every day for six months. Then it should stop for two months and start running again for six months, after which it will stop for two months, run again for six months and so on. In a nutshell, I want it to run daily for six months, stop for two months, and start running again for six months, in an endless 6-months-on/2-months-off loop. I could figure out a way to do it if a year had 14 months, but sadly it has only 12. Is it even possible to do this with cron? TIA
There’s no way to fully express your conditions using cron, however it is possible to supplement cron by adding a condition which is evaluated by the shell. The idea here is to tell cron to run your job every day at the appropriate time, then before starting the job, use the shell to check the month. For example, assuming that your month cycles start on the first day of the month, here’s one way to check the cycle (starting in September 2020): [[ $((($(date "+%Y*12+10#%m")-(2020*12+9)) % 8)) -lt 6 ]] && ... This asks date to format the current date as <year>*12+10#<month> (the 10# forces the month to be taken as a base-10 value), then uses that as an arithmetic expression to calculate the number of months elapsed since the start month, and calculate the remainder of the division by 8. Thus in the start month, the result is 0; the following month, 1; six months later, 6, and the cycle restarts after eight months. So comparing the result with 6 determines whether the script should run... Assuming the shell used in your cron supports such arithmetic operations, you can include this in the cron tab directly; otherwise, include it at the start of your script.
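The arithmetic can be pulled into a small function and exercised outside of cron; a sketch assuming the same September 2020 start (should_run is a hypothetical name):

```shell
# Succeed (exit 0) when the job should run in year $1, month $2,
# for a repeating 6-months-on / 2-months-off cycle starting 2020-09.
should_run() {
    offset=$(( ($1 * 12 + $2) - (2020 * 12 + 9) ))
    [ $(( offset % 8 )) -lt 6 ]
}

should_run 2021 2 && echo run || echo skip    # offset 5: prints run
should_run 2021 3 && echo run || echo skip    # offset 6: prints skip
```

Note that the 10# prefix in the date-based version above matters: a month formatted as 08 or 09 would otherwise be rejected as an invalid octal constant by shell arithmetic.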
Daily cron jobs active or inactive for periods of months
1,674,226,105,000
I'm working on a Python script. I created a file in the /var/log/ folder with 664 permissions, but the Python script is not able to write logs to the file, and I don't know why, since the file's owner is ubuntu (AWS's default user). I very carefully gave read and write permission to the file, yet the scheduled crontab job failed to run the app because of a permission-denied issue. Any ideas?

i) the command I used to set up the crontab: crontab -e
ii) the crontab line that is running the python script: */30 * * * * python3 /home/ubuntu/message_initiator.py
iii) the exact error message: Permission denied: '/var/log/ice-message-initiator.log'
If you run

    ll -d /var/log

then you'll see that it is owned by root (possibly with syslog as the group) and has 755 or 775 permissions, which means that while others can read and traverse the directory, only root (and possibly syslog) can modify it.

ice-message-initiator.log is in /var/log, and while ubuntu is its owner, ubuntu doesn't have modify permissions on /var/log, which means that it can't modify its contents, whether they are files or directories. The error isn't occurring because the ubuntu user can't read the file, but because it can't write to the file due to not having modify permissions on its parent directory, /var/log.

To have your cron job work, you need to either run it as root or have it write the log file into a subdirectory such as /var/log/ice and give ubuntu modify permissions to that directory.
Owner not able to write log file