I have built a cron job for a single user (www-data) using sudo -u www-data crontab -e. Now I'm trying to find where this file is saved. I have looked in /etc/crontab but it's not there, so I can't find it right now. The crontab is saved and everything runs as it should, but I need the path so I can make backups of the crontabs for every single user. Where can I find my users' crontabs?
Depending on your OS, try /var/spool/cron/. On Debian it's one level deeper: /var/spool/cron/crontabs/. This is usually mentioned in man crontab.
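A small sketch of a backup loop over that spool directory. The Debian path is assumed; adjust it for your OS, and run it as root, since the spool is not world-readable:

```shell
# Copy every per-user crontab out of the spool into a backup directory.
# Written as a function so both paths stay adjustable.
backup_crontabs() {
    spool=$1    # e.g. /var/spool/cron/crontabs on Debian
    dest=$2     # where the backups go
    mkdir -p "$dest"
    for f in "$spool"/*; do
        [ -f "$f" ] || continue
        cp "$f" "$dest/$(basename "$f").crontab"
    done
}

# usage (as root): backup_crontabs /var/spool/cron/crontabs /root/crontab-backups
```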
Where are the users' cron jobs saved?
I have a CentOS server with 200 WordPress sites, and now I want to insert this line into every wp-config.php file: define('DISABLE_WP_CRON', true); I know there is a "for ... do ... done" construct, but I have never used it before and I don't want to break all the sites. Please help me!
It would be something like this: for i in /path/to/*/wp-config.php; do echo "define('DISABLE_WP_CRON', true);" >> "$i"; done Be sure to make a backup before doing this!
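A slightly more defensive sketch of the same loop: it keeps a .bak copy of each file and skips files that already contain the define, so it can be re-run safely. The /path/to glob is a placeholder, as in the answer:

```shell
# Append the define to one wp-config.php, with a backup and an
# idempotence check so running it twice does not duplicate the line.
add_define() {
    file=$1
    line="define('DISABLE_WP_CRON', true);"
    cp "$file" "$file.bak"                    # keep a backup first
    grep -qF 'DISABLE_WP_CRON' "$file" ||     # skip if already present
        printf '%s\n' "$line" >> "$file"
}

# usage: for f in /path/to/*/wp-config.php; do add_define "$f"; done
```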
Insert a line into multiple files
I set up notify-send to run every minute: $ crontab -l 1 * * * * /usr/bin/notify-send -t 0 "hello" Why does it not work? Do I need to restart the OS after editing the crontab file? Does the following mean that cron is running? $ ps aux | grep -i cron root 1038 0.0 0.0 23660 2420 ? Ss Apr20 0:00 cron Can I specify a more frequent schedule, such as every 30 seconds? Can the time be specified as a decimal? 0.5 * * * * /usr/bin/notify-send -t 0 "hello"
Your first problem is that you have the wrong syntax for running a job every minute: 1 * * * * /usr/bin/notify-send -t 0 "hello" The 1 in the first field means that the job runs only at 1 minute after each hour. Change it from 1 to *: * * * * * /usr/bin/notify-send -t 0 "hello" The second problem is that cron jobs run in a very limited environment. On my system (Linux Mint), the only environment variables that are set are $HOME, $LOGNAME, $PATH, $LANG, $SHELL, and $PWD -- and $PATH is normally set to "/usr/bin/:/bin". At the very least, the lack of a setting for $DISPLAY means that notify-send can't display anything. A quick experiment with: * * * * * DISPLAY=:0.0 notify-send "hello from crontab" resulted in this error: (notify-send:18831): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed (I'm running the Gnome desktop.) In another experiment, I copied my entire interactive environment into a script, then edited the script so it sets all the environment variables explicitly and invokes notify-send. That actually works; I'm now getting a pop-up "hello from crontab" message every minute. I'm certain that I don't need all my interactive environment for this to work, but I don't know exactly which environment variables are needed or what their values need to be. It's very likely that some of the needed variables are set when the current login session is started, and that they'll change if I logout and login again. It's also very likely that the details will vary depending on which desktop environment you're using. This is not a complete solution, but it should give you a starting point -- and perhaps someone else can add the relevant details.
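Building on the experiment above, a commonly suggested crontab entry exports the display-related variables explicitly. This is a sketch with typical defaults (:0 and the systemd-style per-user bus path), both of which are assumptions that may differ on your system:

```shell
# crontab entry: give notify-send a display and a session bus.
# DISPLAY=:0 and the /run/user/<uid>/bus path are assumed defaults.
* * * * * DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$(id -u)/bus /usr/bin/notify-send "hello from crontab"
```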
My cron job doesn't run
I am using Fedora 20 and want, eventually, to set up automatic backups. I managed a trial run on my previous Fedora 12 installation, but can't get started again. I am using zsh. I thought I would get going by scheduling a shell script to show a zenity window with a "Hello World" message every minute. The zenity call is: zenity --info --text='Something very nice has happened!' --title="Zenity" which I put in a file /testrsync/zenitytest.sh; it works nicely when I call it from the command line. I have edited crontab to contain: * * * * * /testrsync/zenitytest.sh And nothing happens. I am obviously overlooking something, but I cannot yet see what. (I have tried all sorts of things, too numerous to describe here.) Can anyone please help? After further research I tried the following: [Harry@localhost]~/testrsync% /sbin/service crond status -l Redirecting to /bin/systemctl status -l crond.service crond.service - Command Scheduler Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled) Active: active (running) since Mon 2014-09-22 10:37:42 BST; 3h 24min ago Main PID: 709 (crond) CGroup: /system.slice/crond.service └─709 /usr/sbin/crond -n Sep 22 13:58:01 localhost.localdomain crond[709]: sendmail: Cannot open mail:25 [the same message repeated twice per minute through 14:02:01] [Harry@localhost]~/testrsync% I then amended my crontab to read: * * * * * /testrsync/zenitytest.sh >/dev/null 2>&1 But the only effect was to stop the error messages, same as above with different timings, and the last line: Sep 22 14:04:01 localhost.localdomain crond[709]: (Harry) RELOAD (/var/spool/cron/Harry)
An application started via cron has no attached terminal, and no X display available, so there is nowhere for your window to be displayed. To test such things, use a file and append something to it. Then you can watch the file (e.g. with tail -f) and see that the cron job is running.
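The append-to-a-file test can be as small as this crontab entry (the log path is arbitrary):

```shell
# Minimal cron "hello world": append a timestamp every minute,
# then watch the file with: tail -f /tmp/cron-hello.log
* * * * * date >> /tmp/cron-hello.log 2>&1
```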
How do I run a "Hello World" test with cron?
What would be the crontab entry for a command that I want to run every week? I don't want it to run between 8 PM and 5 AM, and any day of the week is fine. Similarly, what would be the crontab entry for a command that I want to run every three months, again not between 8 PM and 5 AM, on any day? I've just started with crontab, so I'm having some difficulties.
A crontab entry for every week (Monday at 3:10 pm): 10 15 * * 1 test -x /path/to/your/weekly/command && /path/to/your/weekly/command and every 3 months, on the 2nd of January, April, July, and October at 1:12 pm: 12 13 2 1,4,7,10 * test -x /path/to/your/quarterly/cmd && /path/to/your/quarterly/cmd This is for a normal user's crontab entries; for /etc/crontab, add the username before the command. For commands to run weekly, on Vixie cron you can also use @weekly instead of the first 5 fields. In that case it would run at the beginning of each Sunday, at midnight (0 0 * * 0), which falls inside the 8 PM to 5 AM window you want to avoid.
Crontab entries for a command to run every week and every three months?
I'm really new to Linux and have no previous experience with it; I opened this account to ask this question. I need to install a cron job, and I followed the short guide here. I think it has been installed successfully, because when I type crontab -l I see it in the result that PuTTY displays (I'm using PuTTY, by the way). Problem (I'm not sure if this is a problem, but it seems to be): I asked another developer colleague to log in to the server, using PuTTY, with his own credentials, i.e. as another user, and then type crontab -l. The result was: no crontab for username. So I guess the cron job is only installed for me. Question(s): How can I install it system-wide, that is, for everyone? If I install it for everyone, does it run once per user when the time comes? For example, if there are 3 users in the system and I install the cron job system-wide, does it run 3 times (once for every user) when it's time to run the scheduled job? Or is having the cron job only for a specific username enough? If it does need to be system-wide, at what step of the guide do I need to do something different? A little information about what the cron job does: there is a database on the server, and the cron job is supposed to update it from another remote database on the first day of every month, at 8 pm. I used a .sh script to install the cron job, by the way. Thanks everyone for trying to help me.
Each user has their own scheduled tasks. If you go and pick your kid from school at 4pm every day, your neighbor doesn't also go and pick your kid at 4pm. Cron jobs can be registered by each user (in which case they run with that user's privileges) or at the system level (in which case they run as a user chosen by the system administrator). Each scheduled job runs once at each scheduled time. There's no feature to set up the jobs running the same command for multiple users, other than replicating the job entry. It's not a good idea to schedule many jobs at the same time, because they'll use the same resources. I'm puzzled as to why you want multiple users to run the same job. Do you have one database, or do you and your colleague each have your own database? If you have a single database, then it only needs to be processed once, you only need one job. If you want both you and your colleague to have access to the output of the job, then generate the output once, and make it readable by both of you. If there are two databases, then each of you should run similar jobs, with the address of the database and the location of the output file adjusted for each job.
How to set up a system-wide cron job
I have a script running every 5 minutes with two exits in a condition clause: #!/bin/bash date=$(date +%Y) if [ $date -eq '2014' ] then echo "Current year is $date" exit 0 else echo "Current year is not $date" exit 2 fi How could I write a log only on exit 2? Could this be possible in crontab? 5 * * * * /home/user/script.sh >> script.log 2>> script.err I have understood that ">> script.log" writes all the output of the script and "2>> script.err" writes only if the script didn't run correctly. So I don't know if there's a way to write to the log only when the script exits with status 2.
To send your error message to stderr, use the 1>&2 redirect: echo "Current year is not $date" 1>&2 exit [number] specifies a return code of [number]. See also: File Descriptors, Standard Streams, All About Redirection
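The question's script can be restructured along these lines, so that the success message goes to stdout and the failure message to stderr; cron's >> script.log and 2>> script.err redirections then separate the two automatically. This is a sketch; check_year and its argument are illustrative names, not part of the original script:

```shell
# Print to stdout on success, to stderr (with status 2) on failure.
check_year() {
    want=$1
    now=$(date +%Y)
    if [ "$now" -eq "$want" ]; then
        echo "Current year is $now"
        return 0
    else
        echo "Current year is not $want" 1>&2
        return 2
    fi
}
```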
Writing a log only for the if branch that exits with status 2
I'm trying to use mergecap to merge the 15 oldest pcap files in a folder. I tried to use FILES=($(find /mnt/md0/capture/DCN/ -maxdepth 1 -type f -name "*.pcap" -print0 | xargs -0 ls -lt | tail -15 | awk '{print $8}')) and then ran mergecap as mergecap -w Merge.pcap ${FILES[@]}, but mergecap doesn't run when I put it in crontab. Is there a method to combine these two commands so they work properly? As the answer from @l0b0 suggested, I tried find /mnt/md0/capture/DCN/ -maxdepth 1 -type f -name "*.pcap" -print0 | tr '\0\n' '\n\0' | tail -15 | tr '\0\n' '\n\0' | xargs -r0 mergecap -w /mnt/md0/capture/DCN/Merge_"${TAG1}".pcap in order to use the name of the oldest merged file in the output file's name, but it gives gibberish as the file name. Am I doing anything wrong? The following gets me the file names: FILES=($(find /mnt/md0/capture/DCN/ -maxdepth 1 -type f -name "*.pcap" -print0 | xargs -0 ls -lt | tail -15 | awk '{print $8}')) TAG1=$(basename "${FILES[0]}" | sed 's/.pcap//')
With zsh (again): mergecap -w Merge.pcap /mnt/md0/capture/DCN/(D.om[-15,-1]) Or with GNU tools: cd /mnt/md0/capture/DCN/ && find . -maxdepth 1 -name '*.pcap' ! -name Merge.pcap -type f -printf '%T@@%p\0' | tr '\0\n' '\n\0' | sort -n | head -n 15 | cut -d@ -f2- | tr '\0\n' '\n\0' | xargs -r0 mergecap -w Merge.pcap
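The oldest-N selection can also be factored out and tested on its own. This sketch assumes GNU find and sort, and file names without embedded newlines:

```shell
# Print the N oldest *.pcap files in a directory, one per line,
# sorted by modification time (oldest first). GNU find's -printf assumed.
oldest_pcaps() {
    dir=$1
    n=$2
    find "$dir" -maxdepth 1 -type f -name '*.pcap' -printf '%T@ %p\n' |
        sort -n | head -n "$n" | cut -d' ' -f2-
}

# usage idea: mergecap -w Merge.pcap $(oldest_pcaps /mnt/md0/capture/DCN 15)
```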
Using mergecap on a set of files
Yes, I have read many, many documents, but I can't get it to work. I have a single PHP file that I want to run once or twice a minute. The file is called cronjob_refresh.php and I'm using Ubuntu 12. After typing crontab -e I get in the terminal: root@cron:~# crontab -e no crontab for root - using an empty one 888 I don't know what '888' means... I then write (after the 888) my cron job string: */1 * * * * php /var/www/cronjob/cronjob_refresh.php After hitting enter I get a '?' symbol after the entered cron job string: root@cron:~# crontab -e no crontab for root - using an empty one 888 */1 * * * * php /var/www/cronjob/cronjob_refresh.php ? Then I hit CTRL + Z to exit. After all this I execute crontab -l to check whether the cron job was added to my list, but it says the list is empty. I don't know what I'm doing wrong. Also, I want to execute it twice a minute, every 30 seconds. How do I set that up? And I want to log everything to a file named logs.log in the same directory. Is this possible?
888 is the number of characters in the default Ubuntu crontab file (22 lines of information, all commented out). Typing ^Z doesn't save anything; it suspends the crontab -e job (read up on job control): 888 */1 * * * * php /var/www/cronjob/cronjob_refresh.php ? ^Z [1]+ Stopped crontab -e You should set your editor preference before running crontab -e: export EDITOR=vi or export EDITOR=nano Cron only works at 1-minute resolution, so running every 30 seconds would require your script to do something itself. To forestall other questions about your PHP not working (because there are errors in your crontab specification), please read our canonical question & answer on the subject.
Ubuntu crontab php not working
My cron job below is not working. What am I doing wrong here? */5 * * * * /usr/bin/top -n1 | head -10 >>/tmp/load.txt
The top from procps on Linux, at least, needs the $TERM environment variable to know how to display things like reverse video and cursor positioning when not in batch mode. So either run: top -bn1 | head Or: TERM=dumb top -n1 | head Or, if you need the output suitable for a given terminal, specify it as: TERM=my-terminal top -n1 | head
top not working
I have a script called teiid.sh set to run daily by a cron job. The script's purpose is to initialize the startup of teiid. How would I test whether teiid.sh is working properly with cron, and not just performing endless actions or no action at all? Example: @daily * * * * /etc/init.d/teiid.sh jeff@****.edu Would chkconfig --add /etc/init.d/teiid.sh help with anything at all?
You can do two things: check /var/log/cron to see whether the job is being executed, and add > /tmp/log 2>&1 to the end of the cron entry, then cat /tmp/log to check whether the output is correct.
Testing Scripts
For the past couple of days I have been observing weird processes on one of our servers. Most of the time I see multiple instances of an executable named 10, and sometimes 4, and they take a lot of CPU resources. When I examined this, I saw that the process is started by cron right after it starts a process with an executable named cpu_hu, which is apparently foreign to my system; a simple search did not turn up anything. I then examined the cpu_hu process and its exe location, and removed the binary accordingly (the location in the image points to a venv for a small project our team is working on). Even though I removed the binary, after a reboot it appeared in a different location, and the executables 10 and 4 started from memory (no physical executable location). I deleted the cpu_hu binary from all locations on the system, stopped the process, and rebooted, but after some time the cpu_hu binary appears elsewhere. For now I have stopped crond and killed the respective processes, which seems to have stopped it from starting again. At this point I am pretty sure it's malicious. How can I get rid of this, or rather find the starting point of this malware to prevent it from starting?
(Rewritten based on OP's comments) The cpu_hu seems to execute as user postgres, which suggests it might have been started by using a weakness in your PostgreSQL database engine or its configuration. Maybe your PostgreSQL database is accessible from the internet and your database admin password is nonexistent or weak? Or are you running your project as the postgres user? That's a bad idea: a vulnerability in any component used by your project (e.g. in Django) that would allow them to inject SQL commands would then give them a "local admin" access to the database engine, and allow them to steal or corrupt all the data managed by that database engine, not limited to the database(s) of your project only. Database admin access also includes the privilege to run arbitrary OS commands as the database engine user. Otherwise, make a list of every software component used in your project and their versions, and start googling for known vulnerabilities in those versions. Do the malware processes appear if you reboot the system without an internet connection? If they do, you have a local persistent malware hiding somewhere. If not, the infection might be non-persistent, and you might be getting reinfected by another computer as soon as your system boots up.
Possible Malware: Unable to track starting point
I'm on Debian. I am calling an ffmpeg process to generate an mp3; it gets called from a PHP script using shell_exec. This works fine 99% of the time. Sometimes the ffmpeg process doesn't exit and I'm left with ffmpeg running for hours. I'm slowly tweaking the params and it's happening less, but it still crops up on occasion. When I look at the top processes I sometimes find it sitting there eating the CPU and disk; the process hasn't terminated: 993 www-data 20 0 252012 38904 27384 R 99.7 1.9 390:09.84 ffmpeg I normally look for the process (and confirm it's the right one by ensuring the params it executed with match my PHP script): ps -eo args | grep ffmpeg then get its process id, kill it, and go hunt down the file it was working on and trash that too: -rw-r--r-- 1 www-data www-data 14G Feb 9 21:20 cfcd208495d565ef66e7dff9f98764da.mp3 - uh oh I'm not sure what words I should be googling for. I'm looking for ideas for a supervisory process or script that I can run through a supervisord/cron job that can output all processes that run for longer than X seconds, and pipe their process details into a script that can match the process with arguments matching a pattern (using some kind of nasty regex, I imagine), kill the process, and go trash any files matching its arguments.
You can use timeout, which runs a command with a time limit. For example: timeout 2 yes will echo 'y' for two seconds (it would go on forever otherwise).
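For processes that are already running, the elapsed time per process is available from ps (the etimes field in procps ps). A hedged sketch of a supervisory function; the pattern and threshold are parameters you would adapt, and the ffmpeg usage line is just an idea, not a tested deployment:

```shell
# Kill any process whose command line contains $pattern and whose
# elapsed runtime exceeds $max_secs (etimes is seconds since start).
kill_long_runners() {
    pattern=$1
    max_secs=$2
    ps -eo pid=,etimes=,args= | while read -r pid secs args; do
        case $args in
            *"$pattern"*)
                if [ "$secs" -gt "$max_secs" ]; then
                    kill "$pid" || true   # may already be gone
                fi
                ;;
        esac
    done
}

# cron idea (hypothetical): kill_long_runners "ffmpeg" 3600
```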
How to detect if a process runs longer than X seconds, then do something with it?
I am having trouble getting my Java program to run from cron. I am able to recreate the problem using a simple example, as explained below. In the file /path/to/javaenv.txt I define my CLASSPATH variable as follows: export CLASSPATH=\ "/path/to/dir1":\ "/path/to/dir2":\ "/path/to/dirn":\ "/path/to/jar1":\ "/path/to/jar2":\ "/path/to/jarn" From the command line I am able to execute the Java program quite easily by doing something like this: source "/path/to/javaenv.txt" && java pkgName.ClassName cmd-line-params > /tmp/test-$(date +%s).txt 2>&1 However, the job does not get executed from cron, even though my crontab has the following entry: * * * * * source "/path/to/javaenv.txt" && java pkgName.ClassName cmd-line-params > /tmp/test-$(date +%s).txt 2>&1
There are a few issues in your cron schedule: The % character has a special meaning in crontabs, and must be escaped as \% if you want to use it as you would ordinary use it on the command line. See How can I execute `date` inside of a crontab job? The source command may not be supported by the shell that interprets the command in the schedule. This depends on what shell /bin/sh is on your system (dash does not support the non-standard source command). Make sure to use . (a dot) in place of source to make your command portable. See e.g. Can't use `source` from cron? A possible third issue is whether or not the java executable is found or not, and this depends on the value of the PATH variable in the cron environment. If you want to ensure that java is found, set PATH to include the correct directory in the crontab, the environment file you source, or invoke the executable with its absolute path.
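Putting those three fixes together, the crontab entry would look roughly like this. The paths are the ones from the question; /usr/bin/java is an assumed location, so check where java actually lives on your system:

```shell
# dot instead of source, \% instead of %, absolute path to java
* * * * * . /path/to/javaenv.txt && /usr/bin/java pkgName.ClassName cmd-line-params > /tmp/test-$(date +\%s).txt 2>&1
```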
CLASSPATH in crontab
I'm running a headless networkless raspberry pi zero to control some hardware**, and occasionally the time/date gets corrupted, maybe other things too. While it's not a perfect fix, rebooting the machine automatically would be an improvement over the status quo. Normally cron would be the extremely obvious choice here, but I'm fairly sure that cron is not robust to the date getting corrupted. And by corrupted I mean it will randomly go back in time 4 and a half years. Something defective in the RTC chip I've added, I suspect. So what can I do that's relatively robust to intermittent clock errors? Ideally the computer would be rebooted every 30 days. Having a buggy clock is obviously going to make that target harder, but precision is not that important, as long as it's not rebooting every week, or never. I could roll my own solution by having a cron script that adds a ! to a file somewhere at midnight every night, and then rebooting when wc reports more than 30 characters, or something like that, but it seems like there must be a standard solution. ** in an extreme case of overkill, this fully functional unix computer that probably equals a mid-90s SGI O2 is operating a "smart" cat door that locks or unlocks depending on light levels outside.
How can you detect large clock jumps? Linux maintains at least two internal system clocks: the "real-time clock", commonly kept in sync with an NTP server, and the "monotonic clock", which never goes backwards and just ticks forwards. A jump back of approximately 4 years is very likely to be caused by something resetting the real-time clock. The monotonic clock should never jump backwards; that's the whole point of it. So if you compare the monotonic clock against the system clock, you can discover times when the system clock has suddenly jumped back a very long way (few clocks should jump back more than a month). If you have Python, you can get the difference in seconds trivially: python3 << EOF import time print(time.time() - time.monotonic()) EOF This gives a number (in seconds) that's meaningless in its own right, but any large change indicates a system clock jump. Any change of -2592000 or worse shows a jump larger than 30 days. Commands like sleep should be unaffected by clock jumps like this, so you should be able to run this periodically. Diagnosing the real problem: I went down a very deep rabbit hole with a near-identical-sounding issue. It's a month of my professional life I won't be getting back, so I'll highlight this answer; it may help you fix the root cause. It's quite unlikely your system clock is spontaneously corrupting itself. The system clock is maintained by software, not hardware. The RTC, if your device has one, is normally just read on startup to recover after a reboot. Unless you have a quirky ntpd configuration, you can safely rule out the RTC. Typically the only thing spontaneously changing your system clock is an NTP or SNTP daemon. My money would be on an SNTP daemon causing it, because NTP daemons check multiple servers and so are pretty fault-tolerant. Some home routers do something very dirty: they act as an NTP or SNTP server. But when the router reboots, without an RTC, the router software just uses a fixed date/time (e.g. a software patch build date?). Despite the date/time clearly being wrong, their NTP server carries on issuing the wrong date, and only issues the right date once the router itself has updated via NTP or SNTP. In the case of the BT Home Hub I dealt with a few years back, the router reconfigured every device it could using DHCP. It even declared itself a "Stratum 1" NTP server, meaning it claimed to be directly connected to a time source (atomic clock). Try rebooting your router a few times to see if this triggers the time jump on your IoT device.
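As a complement to the clock-jump detection, the reboot-every-30-days decision itself can be based on /proc/uptime, which tracks the monotonic clock and is therefore immune to system-clock corruption. A sketch; the function name is made up:

```shell
# Return success (0) when the machine has been up at least $1 days,
# judged by /proc/uptime rather than the corruptible wall clock.
should_reboot() {
    max_days=$1
    up_secs=$(cut -d. -f1 /proc/uptime)
    [ "$up_secs" -ge $(( max_days * 86400 )) ]
}

# cron idea (hourly, as root): should_reboot 30 && /sbin/reboot
```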
How to make Linux reboot every 30 days even if the clock gets corrupted
I want to set up a cron task to automate a Python script. For example: Suppose we have the script in the desktop directory. #set to run at 7 am each day mkdir url1 cp newspaper.py url1 python newspaper.py URL How would I do this?
Modify your script so it won't rely on relative paths. You can use cd in your script, but mkdir newdir will attempt to create newdir in a folder you might not expect. Do cd /proper/path first, or do mkdir /proper/path/newdir, and don't forget that all other file-manipulation commands can also be victims of relative paths. Then add your script to cron: use the crontab -e command and a line like 0 7 * * * /full/path/to/script/script
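A sketch of the same advice as a function; the desktop path and script name are placeholders taken from the question:

```shell
# Work from an absolute directory so the behavior is the same
# from cron as from an interactive shell.
run_job() {
    workdir=$1      # e.g. /home/user/Desktop
    script=$2       # e.g. /home/user/Desktop/newspaper.py
    cd "$workdir" || return 1
    mkdir -p url1
    cp "$script" url1/
}

# crontab line idea: 0 7 * * * /full/path/to/wrapper.sh
```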
Cron task to automate Python script
I'm running this as a cron job: ( export PATH='/usr/bin:/bin' && echo "$PATH" && wget "https://www.mahmansystems.com.au/WSDataFeed.asmx/DownLoad?CustomerCode=%2F&WithHeading=true&WithLongDescription=true&DataType=0" -O mahman_direct.zip ) && echo 'I reached the end' &>> /home/myparadise/public_html/wp-content/uploads/import/files/output.txt When I run it in the CLI as the cron user, it works fine. Hitherto, I have had only two causes of quietly failing cron jobs, explained by the difference in user and PATH when running commands in the CLI versus in cron, but this time it doesn't seem to be either, since I tested the command in the CLI as the cron user and no permission errors were observed. wget does display some text describing the process of retrieving the file, but no interaction or input is required from the user. No errors were directed to output.txt. What else could be wrong? I have also tried a curl -sLo alternative, just in case; it makes no difference.
I had no idea about the % sign's behaviour in the cron environment. Escape it as \% and it works. Ref 1: https://stackoverflow.com/a/1921266 Ref 2: Cron gotchas.
Command runs manually successfully, but fails quietly in cronjob
In one of my cron jobs, I have a long &&-chained command, at the end of which I put 2>>/home/myparadise/public_html/wp-content/uploads/import/files/raytheon/raytheon_log_error.txt, indicating that I want any errors to go to the file raytheon_log_error.txt: cd /home/myparadise/public_html/wp-content/uploads/import/files/raytheon && wget "https://www.raytheonsystems.com/test" -O raytheon_direct.zip && unzip -q raytheon_direct.zip && rm -f raytheon_direct.zip && csvjoin --left -c "MANUFACTURER SKU,MPN" $(ls -Art ASH* | tail -n 1) /home/myparadise/public_html/wp-content/uploads/import/files/raytheon/raytheon2_multi_images_only.csv > raytheon_new_joined.csv && wget -q -O - "https://www.myparadise.com.au/wp-load.php?import_key=XXXX&import_id=111&action=trigger" 2>>/home/myparadise/public_html/wp-content/uploads/import/files/raytheon/raytheon_log_error.txt However, I never receive errors in that file. I'm not 100% sure what errors to expect, but since the last command is never executed, it stands to reason that something failed along the way. (Edit: I have subsequently discovered the error was permissions-related: raytheon_direct.zip: Permission denied. However, the question still stands, because I want any such errors, and all errors otherwise, to be logged in future.) How do I fix this, so that if one of the && command blocks fails, it will be logged as an error to raytheon_log_error.txt, with the specific cause included?
You could wrap the chain in a subshell or command group, and redirect the error stream of that: (cmd1 && cmd2 && cmd3) 2>>/path/to/errorfile or { cmd1 && cmd2 && cmd3; } 2>>/path/to/errorfile For example $ crontab -l | tail -1 * * * * * { date && touch /root/file && date; } 2>> ~/cron.log $ tail cron.log touch: cannot touch '/root/file': Permission denied touch: cannot touch '/root/file': Permission denied touch: cannot touch '/root/file': Permission denied Alternatively (and perhaps for better readability) you could move the commands to a script.
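The command-group form is easy to verify in a shell. Here ls is pointed at a path that should not exist, so the group's stderr redirection has something to capture; the path and log file are throwaway examples:

```shell
# Every command's stderr inside the group goes to one log file,
# and the && chain stops at the first failure.
log=$(mktemp)
{ echo step1 && ls /no/such/dir-12345 && echo step3; } 2>>"$log" || true
```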
Log the error for a whole && chain of commands (in Cron jobs)
I have the below crontab entries: $ crontab -l #Cron to auto restart app1 #Ansible: test1 #*/15 * * * * ansible-playbook /web/playbooks/automation/va_action.yml #Cron to auto restart app7 #Ansible: test7 */15 * * * * ansible-playbook /web/playbooks/automation7/va_action.yml | tee -a /web/playbooks/automation7/cron.out #Cron to restart Apache services on automation server #Ansible: test3 0 2 * * * /web/apps/prod/apache/http-automation/bin/apachectl -k start Below are the enabled cron entries: crontab -l | grep -v '#' | tr -d '\n' */15 * * * * ansible-playbook /web/playbooks/automation7/va_action.yml | tee -a /web/playbooks/automation7/cron.out 0 2 * * * /web/apps/prod/apache/http-automation/bin/apachectl -k start I understand grep -B1 will give me one line above the matched string. As you can see, the comments above the enabled cron entries are #Ansible: test1 #Ansible: test7 Thus I wish to list test1 & test7 as my desired output, using | awk '{print $2}'. Desired output: test1 test7
You can use awk to do all the work of processing the output of crontab -l. The idea is to ignore blank lines, capture the text associated with comment lines, and print out the captured text when you get other lines. You mention that you want only the second field, so we will only save that. crontab -l | awk '/^$/ { next ; } /^#/ { text=$2 ; } /^[^#]/ { print text; }' There are multiple ways to write this, but I think this is about as clear as it gets. There are 3 patterns to select blank lines, lines starting with # and lines which don't. Note however that this does depend on you putting a comment for every entry in the crontab that is active
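The awk logic can be exercised on sample text without touching a real crontab; here it is wrapped in a function purely for testing:

```shell
# Reads crontab text on stdin; prints the second field of the most
# recent comment line before each active (non-comment) entry.
crontab_labels() {
    awk '/^$/ { next }
         /^#/ { text = $2 }
         /^[^#]/ { print text }'
}

# usage: crontab -l | crontab_labels
```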
How to get one line above the enabled crontab entry
I have a cron job that auto-starts a service inside tmux if it detects that it's not running. The rest of my bash script works, but if the tmux session doesn't exist it throws an error, which is why I added the "tmux new ENTER" below. But it still doesn't start the tmux session. If I manually start the tmux session, the code works and it executes the send-keys command. I'm trying to see why the new tmux session doesn't start from cron. Any ideas? /usr/bin/pkill -9 java /usr/bin/tmux new ENTER sleep 3 /usr/bin/tmux send-keys -t 0 "cd /home/xxx/bbb;./run.sh" ENTER echo "$(date) ${1} RESTARTED NODE"
Use /usr/bin/tmux new-session -d so the session is created detached; cron has no terminal, so a plain tmux new (which tries to attach) fails. The ENTER argument belongs only to send-keys, not to new-session. So your script would look like: /usr/bin/pkill -9 java /usr/bin/tmux new-session -d sleep 3 /usr/bin/tmux send-keys -t 0 "cd /home/xxx/bbb;./run.sh" ENTER echo "$(date) ${1} RESTARTED NODE"
Run a tmux new session with cron, then run a command
CentOS 9. Keeping in mind this question: https://serverfault.com/questions/1099284/centos-how-to-keep-previous-service-state-on-reboot it seems that it is not possible to achieve what I need; is that correct? What I need is to run a script which stops a service, collects and zips logs, then updates the OS and reboots (in case there were kernel updates). My problem is that after the reboot the service starts again, which is undesirable. One way to solve it would be to somehow make CentOS run a script on this particular reboot, not on every reboot. So this script should only run after a reboot triggered by the script described above, and it would stop the service again. Is there a way to do so?
You can use a "flag" file and an @reboot cron job:
#!/bin/bash
# flag file must exist across reboots
flag="/tmp/myflagfile"
#
# if there is no $flag, this is the 1st time
if [[ ! -f "$flag" ]] ; then
    touch "$flag"
    do_firsttime
else
    rm "$flag"
    do_secondtime
fi
You can control behavior by creating (or not) the $flag file.
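The two-phase logic can be exercised safely outside cron; in this sketch each call to boot_phase stands in for one @reboot run (the file name and messages are illustrative only, not from the original):

```shell
#!/bin/sh
# Throwaway flag file so the demo doesn't touch anything real.
flag="${TMPDIR:-/tmp}/flagdemo.$$"

boot_phase() {
    if [ ! -f "$flag" ]; then
        touch "$flag"       # leave a marker for the next boot
        echo "first boot after update: keep service stopped"
    else
        rm "$flag"          # marker found: consume it
        echo "later boot: normal startup"
    fi
}

boot_phase   # simulates the reboot triggered by the update script
boot_phase   # simulates the following, ordinary reboot
```

In real use, put the flag somewhere guaranteed to survive a reboot; /tmp is wiped at boot on some setups, so a location such as /var/tmp or /var/lib is safer.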
Can I run a script on reboot once?
1,396,027,959,000
I have a script to set mouse button 3 to scroll:
#!/bin/bash
xinput set-prop "PixArt USB Optical Mouse" "libinput Scroll Method Enabled" 0, 0, 1
xinput set-prop "PixArt USB Optical Mouse" "libinput Button Scrolling Button" 2
Which is working when I execute it manually:
./mouse3.sh
But it does not set mouse button 3 to scroll on reboot using crontab.
crontab -e
@reboot /home/bera/script/mouse3.sh
sudo grep CRON /var/log/syslog
Dec 18 14:42:45 corsair cron[547]: (CRON) INFO (Running @reboot jobs)
Dec 18 14:42:45 corsair CRON[574]: (bera) CMD (/home/bera/script/mouse3.sh)
Dec 18 14:42:45 corsair CRON[549]: (CRON) info (No MTA installed, discarding output)
Dec 18 14:45:01 corsair CRON[2203]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
What am I missing?
In "Session and Startup" (Debian 11, xfce) I added an entry with a command that is just the path to the script.
Get script to execute at startup
1,396,027,959,000
I'm on Ubuntu Server 20.04 LTS and what I'm trying to do is to set the cron job for automysqlbackup utility. When I installed automysqlbackup I noticed that the cron.daily/automysqlbackup was created inside the etc folder. In the automysqlbackup file there are the following lines: #!/bin/sh test -x /usr/sbin/automysqlbackup || exit 0 /usr/sbin/automysqlbackup Why do we need these and what do they do? Can I delete that automysqlbackup file and set everything up through sudo crontab -u username -e for my user?
Why do we need these and what do they do?
#!/bin/sh specifies that the cron “script” runs with /bin/sh (it’s a normal shell script). This is a shebang.
test -x /usr/sbin/automysqlbackup || exit 0 checks whether /usr/sbin/automysqlbackup exists and is executable; if it isn’t, it exits the script silently. This line is useful to avoid an error if automysqlbackup is removed but its cron job isn’t — which happens if the package is removed, not purged. Technically, it works as follows:
test -x /usr/sbin/automysqlbackup exits with code 0 if the file is executable, 1 otherwise.
|| causes the shell to look at the exit code of the previous command; if it’s 0, the rest of the line is ignored, otherwise it’s executed;
exit 0 causes the shell to exit with exit code 0 (no error).
/usr/sbin/automysqlbackup runs automysqlbackup.
Can I delete that automysqlbackup file and set everything up through sudo crontab -u username -e for my user?
Yes, you can; the package is set up correctly so that your removal will be remembered, and the file won’t be restored on package upgrades. Note that you don’t need sudo here, crontab -e as the user works just as well. You will however need to ensure that the user has all the necessary permissions for the backup (in MySQL and in whatever storage the backup goes to). I would suggest considering instead whether there is any particular reason the default daily cron job isn’t appropriate for you; if there isn’t, keep it instead of setting up your own.
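The guard's behaviour is easy to demonstrate with a throwaway file instead of /usr/sbin/automysqlbackup (the path and the guard function here are invented for the demo):

```shell
# Throwaway file standing in for /usr/sbin/automysqlbackup.
f="${TMPDIR:-/tmp}/guard-demo.$$"

# Stand-in for: test -x /usr/sbin/automysqlbackup || exit 0
# (uses return instead of exit so the demo shell keeps running).
guard() { test -x "$f" || return 1; }

guard || echo "not executable: the real script would exit 0 silently"
touch "$f" && chmod +x "$f"
guard && echo "executable: the real script would go on to run the backup"
rm -f "$f"
```

The silent exit 0 is deliberate: cron would otherwise mail an error to root every day for a program that was simply uninstalled.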
What do the following lines mean in /etc/cron.daily/automysqlbackup?
1,396,027,959,000
Just wondering the above. So far I've tried: export EDITOR=$(which nano) export SUDO_EDITOR=$(which nano) export VISUAL=$(which nano) in both ~/.bashrc and /root/.bashrc, as well as Defaults editor="/usr/bin/nano" in /etc/sudoers and then logging out and back in again. None of the above has worked. Any ideas?
If you have export EDITOR=$(which nano) in your ~/.bashrc, try sudo -E crontab -e. You become root through sudo, so there is no need to specify the root user for crontab. The -E preserves your user's environment.
openSUSE Tumbleweed: How do I get sudo crontab -u root -e to open the root crontab in nano instead of vi?
1,396,027,959,000
I have a problem with logs on my web server using Apache (CentOS Web Panel). Because it is an API server, once every few days the domlogs directory of the server gets big enough to crash the server (more than 100 GB in logs), so I need to empty the file with the commands
root@local: cd /var/domlogs
root@local: > logfile.log
This will empty the file without breaking the logs link or having to recreate them by restoring logs. How can I create a cronjob to do the same every X days?
You could set up a crontab for the user ID that runs the apache process, or under the root ID, where the crontab entry would be like:
@daily cd /var/domlogs; > logfile.log
You don't need to create a shell script for a simple task like this; just separate the commands with the ; character. The @daily is a GNU extension to run daily at midnight; you can look at https://crontab.guru/ to play with the different parms. For more specific times, say on Monday, Wednesday, and Friday at 2:30 AM, try:
30 2 * * 1,3,5 cd /var/domlogs; > logfile.log
And here is something I put in the crontab (as comments; for crontab the # starts a comment, like shell scripts):
#Minute Hour Day Month Wkday UnixCmd
#0-59   0-23 1-31 1-12  0-7   UnixCmd
To set up a cron job for the current user, just use crontab -e, which will invoke the editor to put in the above commands. Now, for more advanced tasks, just be aware of the various restrictions on cron jobs. The job starts with a very limited environment as far as PATH, etc. And when the cron job runs, unless the output is redirected to an output file, the standard output and standard error from the task will get emailed to the user that set up the cron.
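For the question's literal "each X days", a day-of-month step works too, e.g. every 3 days at 02:30 (note the step restarts at each month boundary, so the last interval of a month can be shorter than 3 days):

```shell
30 2 */3 * * cd /var/domlogs; > logfile.log
```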
cronjob to empty a file in a directory
1,396,027,959,000
I have a strange issue with a cron-scheduled task using find which does two things: The first line should delete, every night, files older than 3 days starting with "read_" and ending with ".gz". The second line should delete, every minute, files ending with ".gz.md5" older than 1 minute.
1 1 * * * find <path that definitely does not match> -type f -iname "read_*.gz" -ctime +3 -delete
*/1 * * * * find <path that definitely does not match> -type f -iname "*.gz.md5" -cmin +1 -delete
Typically this works well, but every now and then at 01:01 I get the following error message from cron:
Cron <root@hostname> find <path that definitely does not match> -type f -iname "read_*.gz" -mtime +3 -delete
find: ‘<path that definitely does not match>/<hostname>/captured-syslog-1617836127-1617836351.gz.md5’: No such file or directory
But from my point of view, this makes no sense.
Fact: at 01:01 the two jobs run concurrently.
Hypothesis: the filesystem in question does not store file types in directory entries.
Possible explanation: At some point find needs to know the type of the file under investigation. An explicit -type test can be a reason, but even without it (and in some cases before it) find needs to know if the file is a directory. If it is a directory then find will descend into it. I think your filesystem does not store file type information in directory entries. In such a case, to know the file type find needs to call lstat (although not always, some optimizations are possible, see this answer); this requires the file to exist. If something else removes the file after find learns the file is in some directory, but before lstat, then there will be No such file or directory. In your case "something else" is the other job.
If the filesystem stored file type information in directory entries, this would be different. Only -ctime or -cmin could yield No such file or directory if the other job removed the file; but this cannot happen, because -iname is tested earlier and filters all possible interference out.
I actually tested GNU find on ext4 filesystems created with and without the filetype flag. Without the flag the filesystem does not store file types in directory entries. I arranged a situation where files were created and deleted by another process. These files deliberately did not match the -iname used with find. In my tests, on the filesystem without the filetype flag, find often complained about No such file or directory for files that didn't even match the -iname I used. In tests with filetype, find never complained.
find finds files not matching iname filter
1,396,027,959,000
I have an Ubuntu 16.04 server and one of the crons has a major vulnerability which has been listed as a "Won't Fix" for 16.04, but was fixed in Ubuntu 20.04. I want to pull the patch down, but when I use sudo apt upgrade cron it tells me that it is the latest version (which probably makes sense since it is the latest version for that OS). This is a production server and there isn't currently scope to upgrade the OS completely. Is there any way to upgrade this without breaking the whole server?
If your release doesn’t provide an update you’re interested in, your “best” option is typically to download the source package (specifically, the .dsc and related files), and rebuild the package locally. (See this answer for a brief example detailing how to rebuild a package.) However this does bring a number of risks, the two main ones being: other changes made to the package since the version you’re currently using may affect its functionality, for example current versions may assume a working environment similar to their target release rather than the one you’re actually using; you will no longer receive automatic updates for the package, should the need arise — effectively you’re shouldering the burden of providing support for the package. In this particular instance, CVE-2017-9525, the scope of the vulnerability is limited, as detailed in the notes: I believe that actually exploiting the bug requires updating the cron package. So long as there’s no updates for cron, the vulnerable code doesn’t run. So if we find a second bug in cron then we really should fix the race condition at the same time, but so long as we don’t push a cron update, the vulnerable code just plain doesn’t run. the patch just narrows the time window for the race condition. The vulnerability is only an issue if the package post installation script is run, which in most cases only happens if the package is upgraded. (It is possible to force it manually, with dpkg-reconfigure, but that won’t happen automatically.) Thus it seems the safest bet for you is to leave the package alone.
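For concreteness, the rebuild flow might look roughly like this (the package version and URL are placeholders, and extra or newer build dependencies may be needed for the newer sources):

```shell
# Enable the corresponding deb-src line in sources.list first, then:
sudo apt-get build-dep cron                # build dependencies (for the currently packaged version)
dget https://deb.example.org/pool/main/c/cron/cron_3.0pl1-136.dsc   # fetch .dsc + tarballs (dget is in devscripts)
dpkg-source -x cron_3.0pl1-136.dsc         # unpack and apply the Debian patches
cd cron-3.0pl1/
dpkg-buildpackage -us -uc -b               # unsigned, binary-only build
sudo apt install ../cron_*.deb
```

This is a sketch of the general procedure, not a tested recipe for this specific package on 16.04; the caveats above about behavioural drift and lost automatic updates apply in full.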
How to upgrade a single broken package without upgrading the OS (Ubuntu)
1,396,027,959,000
Right now, I have a cron job that looks like this:
*/15 * * * * cp /home/server/server_log.txt /var/www/html/logs/`date "+\%d-\%b.txt"` >/dev/null 2>&1
It works perfectly. However, I want to modify it to only get data from /home/server/server_log.txt for today's date, whilst still copying it to the same location with the date preserved like it is above. The date in that file is formatted like so:
01/11/2020 14:54:26 text
02/11/2020 03:22:05 text
03/11/2020 09:18:48 test
I figured this might be possible with grep, but am not sure what the syntax would be.
The command would be something like grep "$(date +'%d/%m/%Y')" server_log.txt where date +'%d/%m/%Y' generates todays date in the given format, i.e. 06/11/2020.
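Folding that back into the existing cron job could look like this (remember that % is special in crontab entries and must be escaped, as in the original job):

```shell
*/15 * * * * grep "$(date +'\%d/\%m/\%Y')" /home/server/server_log.txt > /var/www/html/logs/`date "+\%d-\%b.txt"`
```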
Cutting date from a file
1,396,027,959,000
I am using Iris Mini to filter blue light at night. It works pretty well, but having to execute it manually is annoying, so I am trying to use cron to start it each night at 8 PM. This is what I have written, after executing crontab -e. The command works if I execute it in the terminal.
Crontab
0 20 * * * sh /home/jogarcia/Software/open-iris-mini.sh
open-iris-mini.sh
#!/bin/bash
export DISPLAY=0:.
/home/jogarcia/Software/iris-mini
I also executed xhost +localhost for testing (before the time of the cron job). Searching the logs with grep CRON /var/log/syslog I found these lines, which seem to suggest that it is actually being executed:
cron log
Nov 2 20:00:01 my-computer-is-name CRON[8391]: (user) CMD (sh /home/jogarcia/Software/open-iris-mini.sh)
But it isn't working, because I can't see the results on my screen (it should display a kind of orangish color). I am really lost; I don't know what I am doing wrong. To see the errors I installed a mail service.
Local mail error
QXcbConnection: Could not connect to display 0:.
The secret (as pointed out by @waltinator) is to set DISPLAY correctly. That should probably be DISPLAY=:0 The format there is typically hostname:displaynumber, where hostname is optional. The second issue is display security. If you put the cronjob in your own crontab instead of root's, then xhost should not be needed as xauth will work. OP has (probably) done this correctly by using crontab -e as himself. (Note that $HOME must be correct for xauth security to work. This is part of what putting it in your own crontab does.) Another trick here is to install an email system to get the error messages. When OP reported (after installing an email system) "that the cron job can't connect to the display", it probably means that either DISPLAY isn't right or something went wrong with display security (xauth and xhost). For that, I advise running the script in a command window. Try running it like env -uDISPLAY /home/jose/Software/open-iris-mini.sh (and if it says that file isn't executable, use chmod +x) (answer added per request of OP.)
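Putting that together, a corrected wrapper script might look like this, assuming the desktop session runs on display :0 (check with echo $DISPLAY in a terminal on the desktop):

```shell
#!/bin/bash
export DISPLAY=:0
/home/jogarcia/Software/iris-mini
```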
How to create a night mode with cron
1,396,027,959,000
In /var/log/cron.log I noticed that CRON runs "@reboot jobs" right after I start my computer. Is there a way for me to see the list of the jobs CRON triggers then?
I don't think there's an easy way to see all configured cron jobs. First, check the global crontab file under /etc/crontab for anything containing @reboot. grep '^\s*@reboot' /etc/crontab As root, you can check the crontab of your users like this: crontab -u $user -l | grep '^\s*@reboot'
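As root you can sweep every local account in one go; this sketch takes the user list from /etc/passwd (an assumption: LDAP/NIS accounts won't appear there):

```shell
# Print "<user>: <job>" for every @reboot entry in any local user's crontab.
list_reboot_jobs() {
    cut -d: -f1 /etc/passwd | while read -r user; do
        crontab -u "$user" -l 2>/dev/null \
            | grep '^[[:space:]]*@reboot' \
            | sed "s/^/$user: /"
    done
}
list_reboot_jobs
```

System-wide jobs can also live in the files under /etc/cron.d/, so it is worth grepping those for @reboot as well.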
Can I see a list of jobs that cron runs @reboot?
1,396,027,959,000
I create a new user, john. Now I want to programmatically write a cronjob to the user john's crontab.
#!/bin/bash
#script name: cronbuild.sh
cronjob1="0 */3 * * * /home/john/ad_dev/modem_dog.sh"
{ crontab -l -u john; echo "$cronjob1"; } | crontab -u john -
Now this does indeed write to john's crontab. However, the script returns:
tony@rpi:~ $: sudo ./cronbuild.sh
no crontab for user
tony@rpi:~ $
Confirming that it does write this to the user john's crontab...
tony@rpi:~ $: sudo crontab -u john -l
0 */3 * * * /home/john/ad_dev/modem_dog.sh
tony@rpi:~ $:
I guess the terminal returns no crontab for user because it in fact does not have a crontab...? Seems obvious and not helpful. So my question: why is it stating no crontab for user?
The message is from crontab -l -u john you used in the script. On one hand you did use the command. The only(?) reason to use it is to preserve the old crontab and append to it (not just overwrite it). This may be a good decision if there is an old crontab with arbitrary content you want to keep. This may be a bad decision if you run the script multiple times (lines you're adding will accumulate). On the other hand you're saying the message is "obvious and not helpful", I guess because you know the user does not have a crontab (yet). But if you know for sure there is no crontab, then there is no need for crontab -l … in the first place. So maybe you don't need it. The fact you used the command can be interpreted as if you expected (or at least allowed) the crontab to exist. So it's reasonable to warn you if it doesn't exist. This is what the command does. You can suppress the message by sending the stderr to /dev/null: { crontab -l -u john 2>/dev/null; echo …
Terminal returning: no crontab for user
1,396,027,959,000
My $EDITOR is vim, and when launching vim the direct way, you can jump directly to what you are interested in with the +/ option. For example:
vi /var/spool/cron/crontabs/root +/rsync
But using the actual recommended command, crontab -e, I don't see an obvious way to do that. Does anything exist?
I don't know what distribution you are using, but in Debian at least the source code just gets VISUAL or EDITOR, appends the filename and forks. So in that case no, you cannot pass additional parameters from the command line to your editor when using crontab -e. You could rebuild the VISUAL or EDITOR variables each time though, as another answer suggests.
crontab -e with jump to line option?
1,396,027,959,000
sid="GR1"
if sapcontrol -prot PIPE -nr $sid -prot PIPE -function GetSystemInstanceList | grep 'GRAY' >> $LOGFILE
then
    echo "STATE: SAP system is offline. (sapcontrol)" >> $LOGFILE
    echo "STATE: SAP system is offline. (sapcontrol)"
else
    echo "ERROR: SAP system is still online. (SAP system has to be online)" >> $LOGFILE
    echo "ERROR: SAP system is still online. Check the logs. (sapcontrol)"
    exit 1
fi
This state script works fine when run manually, but if I execute it with crontab it doesn't. It doesn't find "GRAY", but the state is gray. The SAP log gets created, so the script gets executed, but it stops at this part of the script, at the SAP check.
We face this kind of issue when using packages that are not from the default RHEL repositories. Cron may not run with the same environment we have in the foreground, so we may need to give the full path of commands. Locate the path of sapcontrol using the command below.
whereis sapcontrol
It may be /bin/sapcontrol or /usr/local/bin/sapcontrol. Update the script with the full path obtained and give it a try.
Crontab doesn't find a word
1,396,027,959,000
On Ubuntu I want to set up a cron task to send email from my application. I think that I should run this crontab as a user, not as root. Am I right? But I cannot go to the crontabs directory. I have permission denied.
user1@srv:/var/spool/cron$ cd crontabs
-bash: cd: crontabs: Permission denied
user1@srv:/var/spool/cron$ ls -al
total 12
drwxr-xr-x 3 root root 4096 Nov 26 2016 .
drwxr-xr-x 6 root root 4096 Nov 26 2016 ..
drwx-wx--T 2 root crontab 4096 Feb 17 19:01 crontabs
When I called crontab -e as user1, the edit happened in the tmp folder. So should I run this task as user1? And if I should, how do I do that?
You didn't tell us the exact command you want to put in the crontab, so it's hard to tell definitely if it should be in the user's crontab. Permission denied is expected. Nothing to worry about. In general running crontab -e as unprivileged user is OK. The tool lets you edit a temporary copy and (after you save it without changing its temporary name or path) installs it safely in the right directory. The tool holds the setgid flag and belongs to the group crontab. This way it can access the directory you cannot access directly.
Cron create crontab as user, not as root
1,396,027,959,000
I wrote a script that loops through every file in a folder and performs a simple operation on each file. That folder will almost always be empty and only occasionally contain a file, but I'd like the script to run automatically (and relatively promptly) when a file does appear. What's the best practice to do that? Right now, I just have cron running the script every minute. Is there a problem doing it that way? If I just leave that going long-term, will that make a difference in longevity of the drive? Thanks!
incrond can run a command when a file appears. It uses inotify underneath. As has been pointed out in a comment, systemd can also monitor a directory and trigger actions, using a .path unit along these lines (the triggered service goes in the Unit= line):
[Path]
DirectoryNotEmpty=/path/to/monitored/directory
Unit=my.service
Cron loop best practices
1,396,027,959,000
====================================================
====================================================
CHECK_BEGIN: DO_CRON
====================================================
====================================================
[FILE]: CRON.ALLOW
-rw-------. 1 root root 0 Sep 1 2017 /etc/cron.allow
====================================================
====================================================
====================================================
[FILE]: CRON.DENY
-rw------- 1 root root 0 May 5 2018 /etc/cron.deny
====================================================
====================================================
Checking permissions on /var/spool/cron
drwx------. 2 root root 4096 May 5 2018 /var/spool/cron
====================================================
How do I interpret the above output? The root user can always use cron, regardless of the usernames listed in the access control files? If the file cron.allow exists, only users listed in it are allowed to use cron, and the cron.deny file is ignored? Therefore in this case, only the users listed within /etc/cron.allow are allowed access to the cron daemon.
If the cron.allow file exists, it lists the users that may use cron. If it does not exist, the cron.deny file is checked. If the cron.deny file exists, it lists the users that may not use cron. This file is not consulted if the cron.allow file exists. If all users are denied the use of cron (as in your case, since the cron.allow file exists, and is empty), only root is able to use cron. This is the same that would happen if neither file existed. The most common configuration is to have an empty cron.deny file and no cron.allow file. This would allow everyone the use of cron. This also applies to at.deny and at.allow for using at to schedule commands.
Interpreting cron output
1,396,027,959,000
I am writing a sed command that should uncomment an entry in crontab on Sun Solaris 10. I have tried two ways; they work on Ubuntu but not on Sun Solaris 10, where it returns:
sed: illegal option -- E
crontab: can't open your crontab file.
crontab -l | sed -E '/#* *([^ ]+ *){5}[^ ]*run_all.sh/s/^#* *//' | crontab -
also:
crontab -l | sed '/#* *\([^ ][^ ]* *\)\{5\}[^ ]*run_all.sh\.sh/s/^#* *//' | crontab -
The crontab entry looks like:
###15 00 * * * /bill/u01/WORK/ALARMS/run_all.sh > /bill/u01/WORK/ALARMS/`date +\%Y\%m\%d\%H\%M\%S`_RUN_ALL_PROCEDURE.log
You probably shouldn't overcomplicate the regex. To remove any possible hashtags at the beginning of lines containing the string run_all.sh, you could do: crontab -l | sed 's/^#*\(.*run_all\.sh\)/\1/' | crontab - Unfortunately, I don't have a Solaris system at hand to test it.
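Since the expression only uses basic (BRE) syntax, it can at least be smoke-tested with any sed; the sample line below mimics the crontab entry from the question:

```shell
# A commented-out entry, as it appears in the crontab.
line='###15 00 * * * /bill/u01/WORK/ALARMS/run_all.sh > run.log'

# Same BRE expression as above: strip leading #s from matching lines.
printf '%s\n' "$line" | sed 's/^#*\(.*run_all\.sh\)/\1/'
```

This prints the entry with the leading ### stripped and everything else intact; with no -E involved, it should behave the same on Solaris /usr/bin/sed, though that remains untested here.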
sed to remove leading comment on crontab with Sun Solaris 10
1,562,010,413,000
So this is a super weird issue. I have a Python script that calls many bash commands via subprocess.call based on certain criteria. Now, the script runs just fine manually, but when thrown into a cronjob it fails, BUT only when it gets to a certain part of the code. This part of the code runs a bstat and a bkill command on a user. I've tried using subprocess.call, subprocess.Popen, subprocess.check_output for these two commands, and every time it reaches them, it hangs and does nothing. I then get this message in /var/spool/mail/root:
File "/root/Desktop/script.py", line 75, in <module>
    print subprocess.check_output(['bstat' '-q' 'viz' '-u' ,user,])
File "/usr/lib64/python2.7/subprocess.py", line 568, in check_output
    process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
I've tried: using absolute paths for EVERY command possible, changing directories to the directory of the script before running, calling /bin/python before running. I'm at a total loss. What's even weirder is that there are other subprocess.call commands that work just fine when calling a bash script, but when it comes to those two commands it doesn't know what to do. Below is the first subprocess command that it hangs on:
print subprocess.check_output(['bstat' '-q' 'viz' '-u' ,user,])
The subprocess call is looking for the bstat or bkill programs in the $PATH... except the $PATH in the cron environment is minimal and doesn't include them. Specify the full path to those programs, even inside the Python script, and it should work.
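A defensive pattern (a Python 3 sketch, not from the original) is to resolve the absolute path once, so the call works even under cron's minimal PATH. Note also that each argument must be its own list element; adjacent string literals like 'bstat' '-q' silently concatenate in Python:

```python
import shutil
import subprocess

def run_tool(name, *args, fallback_dir="/usr/local/bin"):
    """Run `name` with args via an absolute path.

    fallback_dir is a guess for where site tools like bstat/bkill
    might live; adjust it to the output of `which bstat` in an
    interactive shell.
    """
    path = shutil.which(name) or "%s/%s" % (fallback_dir, name)
    # One list element per argument -- the commas matter.
    return subprocess.check_output([path] + list(args))
```

For example, run_tool("bstat", "-q", "viz", "-u", user) would then work the same in a terminal and under cron.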
Python script runs manually but not in cronjob
1,562,010,413,000
I want to use cron on Linux Red Hat 7 in order to run some jobs, but I do not want to use crontab -e, since some users could change my configuration. So I did the following, as an example:
cd /etc/cron.d
vi test
* * * * echo test >/tmp/test
more test
* * * * echo test >/tmp/test
Then I waited one minute to look for the log /tmp/test, but the log /tmp/test was not created. Why? What is wrong with my cron?
ls -ltr
-rw-r--r-- 1 root root 29 Aug 1 18:50 test
Your crontab has only four time and date fields. You need five to be valid. If you are placing your script in /etc/cron.d you need to add the username that will execute the script as the first field following the standard time and date ones; like: * * * * * yael echo test > /tmp/test See man 5 crontab
create cron job in linux without crontab -e [closed]
1,562,010,413,000
I'm trying to work with crontab. In crontab -e:
*/10 * * * * rm home/user/Desktop/myFile
trying to delete myFile every 10th minute. I enabled crontab using:
/etc/init.d/cron start
and then:
sudo rcconf
to ensure that the service remains after rebooting, but it doesn't work!
user@ubuntu:~$ sudo update-rc.d cron defaults
user@ubuntu:~$ /etc/init.d/cron start
[ ok ] Starting cron (via systemctl): cron.service.
*/10 * * * * rm home/user/Desktop/myFile You forgot to add a slash at the beginning to make it an absolute path. As it is, it probably won't work. It should be: */10 * * * * rm /home/user/Desktop/myFile
Crontab configuration not working in Ubuntu [closed]
1,562,010,413,000
I try to clean the logs once a week with this cron command: @weekly find /var/log/ \( -iregex ".*\.[2-20]+" -o -iname "*.gz" \) -exec rm {} \; 2>&1 Is it good?
[2-20]+ is not the correct way to test if a number is in the range from 2 to 20. Square brackets in a regular expression just match a single character that matches any of the characters inside it. And - in the character set is used to specify a range of characters (e.g. 2-9 or a-z); the range 2-2 is the same as just 2. So [2-20]+ is equivalent to [20]+, which matches any sequence of the characters 2 and 0, such as 2, 20, 02, 2200, etc. It should be ([2-9]|1[0-9]|20). This matches a single digit from 2 to 9, 1 followed by 0 to 9, or 20. Note that GNU find's default -iregex dialect treats ( | ) as literal characters, so add -regextype posix-extended for the grouping and alternation to work. If you're using GNU find, you can use the -delete operator instead of -exec rm {} \;. And there's no need to use 2>&1 if you're not redirecting standard output. By default, both standard output and standard error are sent as mail to the user.
@weekly find /var/log/ -regextype posix-extended \( -iregex '.*\.([2-9]|1[0-9]|20)' -o -iname "*.gz" \) -delete
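The difference is easy to see by filtering some rotated-log names (the names are made up; grep -E uses the same extended syntax as find's posix-extended dialect):

```shell
# Only suffixes 2 through 20 should survive the filter.
printf '%s\n' log.1 log.2 log.15 log.20 log.21 |
    grep -E '\.([2-9]|1[0-9]|20)$'
```

This keeps log.2, log.15 and log.20, while log.1 and log.21 fall through; the broken class [2-20] would instead have matched on single characters only.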
Is my cron command good? [closed]
1,562,010,413,000
OS: CentOS 7 | Shell: Bash | Virtual Machine: VirtualBox
I have created a short script to check a few servers on startup of my application, and I need to run this on every reboot. I have configured the crontab to run it on every reboot; however, I am not seeing the output on the shell.
@reboot /home/admin/scripts/connection.sh
What am I missing in setting this up?
A cronjob's output does not go to the shell. Most often, the output written to its standard output and standard error streams are collected and mailed to the owner of the cronjob. To log the output to a file: @reboot /home/admin/scripts/connection.sh >>/home/admin/connection.log 2>&1 This will make all output from the script be appended to the file /home/admin/connection.log.
Output of cron on the SSH shell?
1,562,010,413,000
I'm running this script in a terminal and getting the desired results, but when I set a cron to run it from /root/somefolder/ every 5 min, it does not do what it's supposed to do. My root user crontab entry looks like this:
*/5 * * * * ~root/somedirectory/script.sh
The script is:
#!/bin/bash
## Variables ##
host="`/bin/hostname`";
## Limits ##
OneMin="1";
FiveMin="6";
FifteenMin="6";
## Mail IDs ##
To="someone@somedomain";
Fr="root@"$host;
## Load Averages ##
LA=(`uptime | grep -Eo '[0-9]+\.[0-9]+' | cut -d"." -f1`)
## Top Process List ##
tp=(`ps -ef | sort -nrk 3,3 | grep -E "(php|httpd)" | grep -v root | head -n30 | awk '{print $2}'`)
## Actions ##
if [ ${LA[0]} -ge $OneMin ]; then
## Send Mail ##
echo -e "From: $Fr
To: $To
Subject: *ALERT* - Current Load on '$host' Is High

Load Averages Are: \n\n 1:Min\t5:Min\t15:Min \n ${LA[0]}\t${LA[1]}\t${LA[2]} \n\n List Of Processes That Were Killed \n" | sendmail -t
## Kill Top Processes ##
for i in $tp ; do
kill -9 $i
done
fi
Issues: All the recipients in the $To variable don't get any alert when the script is run via cron, even when the if statement is true; but when it is run in a terminal, everyone gets an email. I tried to paste all email IDs directly in the To: field like this, because I thought it wasn't reading the $To variable:
To: someone@somedomain instead of $To
But still none of the recipients gets any alert, and no actions seem to be performed.
Try the following syntax for sendmail in your script:
#!/bin/bash
# some code
/usr/sbin/sendmail -t <<EOF
To: admin@example.com "$address1" "$address2"
Cc: one@example.com two@example.com three@example.com
Subject: [Monitoring] $foo $bar at $host
From: monitor@example.com

Monitoring for example.com server loss of connectivity - hourly update:
---------------------------------------------------------------------------
$some $more $variables
EOF
You can embed variables in the "heredoc" block. The script name was monitor.sh. The entry I used in crontab, as root:
@hourly /root/monitor.sh
Issues related to sendmail or (failed) mail delivery can be checked in /var/log/maillog.
Script Runs Fine In Terminal, But Not Under Cron [duplicate]
1,562,010,413,000
I have configured crontab to have the following entry: */2 * * * * source /home/ubuntu/cronenv/python2.7/bin/activate && python /home/ubuntu/trial.py >> /var/log/mycron/trial.log 2>&1 && deactivate By tailing the /var/log/syslog file, I can verify that the cron runs every two minutes. This is the entry in /var/log/syslog every two minutes Nov 8 10:52:01 ip-172-31-0-41 CRON[2023]: (ubuntu) CMD (source /home/ubuntu/cronenv/python2.7/bin/activate && python /home/ubuntu/trial.py >> /var/log/mycron/trial.log 2>&1 && deactivate) By running the command in terminal, it runs as intended and creates trial.log file in /var/log/mycron/ The folder has all the required permissions as shown below: drwxrwxrwx 2 root root 4096 Nov 8 11:02 mycron Please help me in figuring out the issue.
It seems likely that source is failing, so && short circuits. Try redirecting stderr/out to a file from the first source and you should see the error. If trial.py ran at all, you should at least see trial.log created by the shell. The fact that it's not created suggests that this code was never run.
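One frequent cause of exactly this failure: cron runs job lines with /bin/sh, which on Ubuntu is dash and has no source builtin (the POSIX spelling is .). A possible fix, assuming bash is acceptable, is to set the shell at the top of the crontab:

```shell
SHELL=/bin/bash
*/2 * * * * source /home/ubuntu/cronenv/python2.7/bin/activate && python /home/ubuntu/trial.py >> /var/log/mycron/trial.log 2>&1 && deactivate
```

Alternatively, replace source with . in the job line and keep the default shell.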
Crontab cron job not creating the redirected output file
1,562,010,413,000
I work on a Linux server which doesn't allow me to use cron. So, to bypass this, I write scripts that need to run at a definite time, like this:
while true
do
    ...
    ...
    sleep 1d  #changes upon requirement of my script
done
I always start the scripts in the background with nohup ./script.sh &. My question is: say I have six or seven scripts like this running on the server; most of the time they'd be sleeping. Would this sleep consume some kind of memory? Will it impact the performance of the server? Is there any efficient way to handle this?
Yes — you're consuming memory with those scripts. You've actually got two processes running, using memory: a shell (e.g., bash), and sleep itself. sleep is going to be extremely lightweight, but the shell may consume a few megabytes of memory. On my system an idle non-interactive bash consumes ~1MiB and a sleep 0.7MiB. You can check ps or top (look at the RSS column) — though a lot of that total is actually things like libraries which are shared between all processes using them. All in all, it's likely <1MiB each. On Linux, you can get more details from /proc/pid/status (and /proc/pid/smaps); the Vm* ones are of interest here. For example:
bash -c 'grep Vm /proc/$$/status'
VmPeak: 13380 kB
VmSize: 13380 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 972 kB
VmRSS: 972 kB
VmData: 220 kB
VmStk: 132 kB
VmExe: 208 kB
VmLib: 2320 kB
VmPTE: 48 kB
VmPMD: 12 kB
VmSwap: 0 kB
You can see a total RSS (amount of used RAM) of 972 kB, of which 220 kB is "data" (typically not shared) and 132 kB is stack (also not shared). So each extra bash left running is pretty small. Some advice: if you're having to do a bunch of workarounds like this... why can't you use cron? That's a simpler, cleaner approach, far less likely to have unexpected bugs (quick! How does sleep 1d handle a daylight saving time change? What happens if your sleep returns early because it was SIGTERM'd as part of a reboot/shutdown?). If your sysadmin is concerned about unauthorized people scheduling cron jobs, point him/her to /etc/cron.allow and /etc/cron.deny; those are documented in crontab(1).
Unix Memory Question while sleep in background
1,562,010,413,000
I researched some examples and came up with two below that seem like they should work but only the first one executes: */5 * * * * /data/db/test1.py > /data/db/text.txt && hadoop fs -put -f /data/db/text.txt /tmp/ >/dev/null 2>&1 I have also tried */5 * * * * bash -c '/data/db/test1.py > /data/db/text.txt && hadoop fs -put -f /data/db/text.txt /tmp/' >/dev/null 2>&1 If I run both commands separately in shell, they work just fine.
After investigating the error in my mail, I found I did not have a Kerberos ticket. The command line worked once Kerberos was resolved. I wrote a separate script that obtains the Kerberos ticket and runs the two commands mentioned in this issue. When I run that script from crontab, everything works fine.
multiple cron jobs on one line (running consecutively)
1,562,010,413,000
When working in an embedded environment (current one: Raspberry Pi with Debian Stretch) with an external RTC module (i.e. DS1307) I have to manually keep both the system and RTC clocks in sync. This can be achieved by periodically calling hwclock -w - putting it in a cron job, for example. I'm curious how desktop systems handle this situation. Inspecting the cron jobs on my Debian desktop machine, it seems there is nothing related to the RTC (although it could be "hidden" in some other tasks). So, how is the RTC kept in sync with the system clock?
On embedded platforms like the Pi that do not come with an RTC, the package adjtimex is not normally installed by default. This is the tool that manages the kernel's RTC configuration, and it defaults to configuring the kernel to keep the hardware clock in sync with the system clock. Note that if you use hwclock or similar it will disable the kernel sync and you will need to use adjtimex to re-enable it again. adjtimex is a solution for machines with intermittent or no connectivity for time sync. The other option is to use ntpd, which can be configured to keep the system clock correct; on shutdown a script calls hwclock --systohc to write the last known correct time to the hardware clock.
RTC management in desktop and embedded environments
1,562,010,413,000
In the root user's crontab on a CentOS 7 server I have the following: 30 4 1-7 * * test $(date +\%u) -eq 7 && /usr/bin/needs-restarting -r || /usr/sbin/shutdown -r It should run every day at 4:30 between the 1st and 7th day of the month, then test whether the day of the week is Sunday and only then execute the next command to check if a reboot is required, and then reboot if it is. However my server rebooted today (1st Aug 2017), which is a Tuesday. Can anyone explain why?
In a && b || c, command c is executed when either a or b exit with a value other than 0. Consequently, when test $(date +\%u) -eq 7 is false, your server reboots. According to its name /usr/bin/needs-restarting probably returns 0 when the server needs a reboot. Are you sure that this should not be a && b && c instead? Else, try a && { b || c; }
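A quick shell experiment makes the difference visible. In the sketch below, `false` stands in for the Sunday test failing on a Tuesday, and the echoes stand in for needs-restarting and shutdown:

```shell
#!/bin/sh
# In "a && b || c", c runs whenever a OR b fails.
# Here `false` stands in for the Sunday test failing on a Tuesday:
false && echo "needs-restarting" || echo "shutdown -r"   # prints: shutdown -r

# With "a && { b || c; }", a failing first test skips b AND c:
false && { echo "needs-restarting" || echo "shutdown -r"; }

echo "done"
```

So with the braced form, the server only considers rebooting on a day when the first test actually succeeded.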
Cron Job executing on the wrong day
1,562,010,413,000
Say I have 4 VPSs. In each one of these, the following cron commands run weekly, for local backups: 0 0 * * 6 zip -r /root/backups/dirs/html-$(date +\%F-\%T-).zip /var/www/html 1 0 * * 6 find /root/backups/dirs/* -mtime +30 -exec rm {} \; 0 0 * * 6 mysqldump -u root -pPASSWORD --all-databases > /root/backups/db/db-$(date +\%F-\%T-).sql 1 8 * * 6 find /root/backups/db/* -mtime +30 -exec rm {} \; What I wish is to send a copy of each local backup (from each of the 4 VPSs) to a fifth VPS that I'll use as a central backup environment. The transfer should be as secure as possible (within the plausible range). How can this be done automatically (and on a schedule) with rsync? (or maybe SCP?)
Assuming that each backup is a single file (by archiving both files above), SCP is going to be more efficient than rsync because it does less overall work other than transferring the file. As far as automation, you will need to set things up so that either: The fifth VPS can connect to the other 4 without needing a password as a user who can read the backups (slightly easier to manage, but harder to code). The four other VPSs can connect to the fifth without needing a password, preferably each with a separate account (slightly harder to manage, but easier to code). The preferred method for either is usually unencrypted SSH keys. Once you have that, you can set up a cron job (on the fifth VPS in case 1, or each of the 4 others in case 2) that will transfer the latest backup to the desired location. Here's a quick and dirty shell script for the second option that will copy the most recent file out of a directory to a remote system: #!/bin/bash file=$(ls -t "${1}" | head -n 1) scp -pCB "${1}"/"${file}" "${2}" Running that with the path to the directory where the backups are stored as the first argument, and the user@host:/path string pointing to the location on the fifth VPS as the second argument, will copy the most recent backup from the local system to the fifth VPS. The -p option for SCP will preserve mtime (so you can still use the same find command to thin out old backups), -C enables compression (this may or may not improve performance), and -B prevents it from prompting for anything.
Using rsync to back up 4 different VPSs to a fifth VPS that is used to store backups
1,562,010,413,000
On my BeagleBone Black device, I want to run at startup the Python code shared at https://github.com/acseckin/hmrid. The Python code requires super-user privileges. The Debian version on the device was installed with the image "Debian 8.7 2017-03-19 4GB SD IOT". The code works fine from the terminal: sudo python /home/debian/hmrid/runhmrid.py Neither the debian user's nor root's crontab works when I append the following line: @reboot sudo python /home/debian/hmrid/runhmrid.py But other code that does not require super-user privileges works perfectly when I add it to the debian user's crontab, like: @reboot python /home/debian/hmrid/runNotSuperUser.py
Place the job in root's crontab with sudo crontab -e as @reboot /full/path/to/python /home/debian/hmrid/runhmrid.py Be aware that the job will be executed without your usual environment. This means that environment variables that may affect the way Python behaves may have to be set elsewhere for the script to work, if it relies on them somehow. If you want to log the output from this command to a separate file, you may use @reboot /full/path/to/python /home/debian/hmrid/runhmrid.py >/tmp/runhmrid.log 2>&1 This will log any output from the cron job to the file /tmp/runhmrid.log, including error messages. You may also create a shell script wrapper that sets up the environment (using a series of export statements) and starts your Python script. Then you may call that script from cron.
Crontab with sudo not working on Debian BeagleBone Black
1,562,010,413,000
In a minimal, non-customized Ubuntu 16.04 server system, my /etc/crontab contains the following code, which runs fine when executed manually: 0 8 * * * zip -r /UsualUser/backups/dirs/html-$(date +\%F-\%T-).zip /var/www/html 0 8 * * * find /UsualUser/backups/dirs/* -mtime +30 -exec rm {} \; # m h dom m dow # 0-59 | 0-23 | 1-31 | 1-12 | 0-6 No errors in journalctl -u cron. I believe the problem is that the file isn't executable, as its permissions are 644. Yet I'm not sure what the safest set of permissions is to make it executable so the code in it will start to work.
The /etc/crontab file on Ubuntu is the system crontab file. It should not be executable and its format is different from a user's crontab file. The difference in format is that its 6th field is the name of the user that the entry should be executed as. Also, % has a special meaning in crontab files. All % will be changed to a newline unless escaped as \% (but I see that you have taken precautions against that already). These things are described in the crontab manual (man 5 crontab): The format of a cron command is very much the V7 standard, with a number of upward-compatible extensions. Each line has five time and date fields, followed by a command, followed by a newline character (\n). The system crontab (/etc/crontab) uses the same format, except that the username for the command is specified after the time and date fields and before the command.
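Applied to the entries in the question, system-crontab versions would carry the extra user field. Using root here is an assumption — substitute whichever account should own the backups:

```
# /etc/crontab format: m h dom mon dow USER command
0 8 * * * root zip -r /UsualUser/backups/dirs/html-$(date +\%F-\%T-).zip /var/www/html
0 8 * * * root find /UsualUser/backups/dirs/* -mtime +30 -exec rm {} \;
```

No execute bit is needed on /etc/crontab itself; cron reads it as a configuration file.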
valid code in /etc/crontab won't be executed
1,562,010,413,000
I'm experiencing a small problem with a cronjob that runs a script of mine. The script looks something like this, it is named create_report.sh: #!/bin/bash cd /work/directory /some/command.sh > reports/report_$(date -d "1 day ago" +%Y%m%d).txt This is obviously a little simplified, but after writing the output to some file that file gets attached to an email and sent to me. When I run this "manually" it works fine and the report file has contents. Then I created a cronjob using crontab-generator.org which runs that script each morning at 7 am and mutes the output of the script. 0 7 * * * /path/to/script/create_report.sh >/dev/null 2>&1 When the cronjob runs the script, the report files get created and sent to me, but they're empty. I could probably figure out a way around this behaviour myself, but I didn't expect this to happen, can someone explain to me how this happened and maybe point out the correct way to make something like this happen? To clarify: I'm not interested in the output of create_report.sh but the report-File it creates. Thanks in advance!
When cron executes your cron job, it will (eventually) execute bash to run your script, but that bash instance will be non-interactive and not a login shell, so it will not source any of your profile files.1 If your /some/command.sh relies on any of those profiles (to set a variable or perform activity), then you need to: explicitly source those files, or set BASH_ENV (to the right file) before executing the script, or set the -i option in the she-bang line (to load ~/.bashrc), or set the -i and --login options in the she-bang line (to load the first of ~/.bash_profile, ~/.bash_login, or ~/.profile), or set those variables or perform those activities within /some/command.sh Reference: Bash manual - Startup Files Footnote: unless you've already set BASH_ENV
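A quick way to see how little of your interactive environment such a shell inherits is to launch one with env -i, which approximates cron's stripped-down environment:

```shell
# Export a variable in the current shell...
MYVAR="visible here"
export MYVAR

# ...then start a shell with an emptied environment, roughly what a
# cron-spawned, non-interactive, non-login shell looks like:
env -i /bin/sh -c 'echo "MYVAR inside: [$MYVAR]"'
# prints: MYVAR inside: []
```

Anything your /some/command.sh needs (variables, PATH entries) therefore has to be set by one of the mechanisms listed above, not assumed from your login session.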
Cronjob overwrites Script Output redirect
1,562,010,413,000
I'm using for nicstat this command: while true; do nicstat -eth1 1 60 > log-$(date +%F-%T).txt; done This command creates a log file of my ethernet interface every 60 seconds. The problem is that the files are not aligned to clock minutes: when running this command, the first log file created can be something like log-12:00:04.txt "containing values from second number 4 in this minute, ending with second number 3 in the next minute". Also, since it was not started precisely at the beginning of a minute, after a while the seconds start to step up, like log-12:14:05.txt, then after some minutes log-12:32:06 ...etc. I need a file for each clock minute containing values for each second from 00 till 59, like: log-12:00:00.txt, log-12:01:00.txt, log-12:02:00.txt, etc.
With zsh: #! /bin/zsh - zmodload zsh/datetime # for $EPOCHREALTIME/strftime... zmodload zsh/zselect # for sub-second sleeps zmodload zsh/mathfunc # for int() # wait till start of the next minute at first for ((next = (EPOCHSECONDS / 60 + 1) * 60;; next += 60)) { (( sleep = int((next - $EPOCHREALTIME) * 100) )) (( sleep <= 0 )) || zselect -t $sleep strftime -s now %T $next nicstat -i eth1 1 60 > log-$now.txt & } I did add a & to run nicstat asynchronously on the assumption that nicstat 1 60 would take slightly over one minute to run. Then if we ran it synchronously (that is without the &), it would start to drift. Here we make sure nicstat is started exactly every minute. However nicstat 1 60 takes slightly over 59 seconds to run, not 60, as the first line it displays is not the statistics for 12:00:00 to 12:00:01, but the overall statistics since boot (or since the statistics were last reset). And the second line (labelled 12:00:01) is for the statistics for 12:00:00 to 12:00:01 (and the 60th line after 59 seconds labelled 12:00:59 is for the statistics from 12:00:58 to 12:00:59). The statistics for 12:00:59 to 12:01:00 will be missing. So you may want to change it to nicstat 1 61, so that the output contains 61 lines, the first one for the statistics since boot and the next 60 ones for the statistics of each second in that minute. As another approach to your problem, you could run just one nicstat and have awk split its output into log files: nicstat -i eth1 1 | awk ' NR == 1 {header = $0; next} !/^[012]/ {next} # skip other headers { log = "log-" substr($0, 1, 6) "00.log" if (log != last_log) { if (last_log) close(last_log) print header > log last_log = log } print > log }' This time, except for the first non-header line of the first file, each entry will be the statistics for the past second.
make a loop run at the starting second of every minute
1,562,010,413,000
I have a security camera which sends pictures it captures to an ftp server on my network, and I developed a script to trim the old files from the receiving directory. The script works well when started from the command line, so I added a line to crontab to execute the script twice a day. # Remove old security images / videos from directory 1 7,19 * * * /home/ftp/bin/secpurg The script wasn't working though. The directory was getting full, so I decided to execute with #!/bin/bash -x to see what was going on. The following messages began to appear in my mail: + fileAge=10 + SecDir=/home/ftp/_Security/ + maxDirSize=3000000 ++ du -s /home/ftp/_Security/ ++ cut -f1 /home/ftp/bin/secpurg: line 11: cut: command not found /home/ftp/bin/secpurg: line 11: du: command not found + secDirSize= + '[' -ge 3000000 ']' /home/ftp/bin/secpurg: line 14: [: -ge: unary operator expected Huh? 'cut' and 'du' are not found when executing the script via CRON? Can anyone provide me with some insight as to why these commands work just fine when I execute the script from a terminal, but not when it's executed from CRON? In case it's useful, I've provided the script for reference: #!/bin/bash -x # secpurg - find and remove older security image files. # Variable declaration fileAge=10 SecDir="/home/ftp/_Security/" maxDirSize=3000000 # Determine the size of $SecDir secDirSize=`du -s $SecDir | cut -f1` # If the size of $SecDir is greater than $maxDirSize ... while [ $secDirSize -ge $maxDirSize ] do # remove files of $fileAge days old or older ... find $SecDir* -mtime +$fileAge -exec rm {} \; # Generate some output to email a report when files are deleted. # set -x # Expanding $SecDir* makes for big emails, so we don't do that, but echo the command for reference ... echo -e "\t\t[ $secDirSize -ge $maxDirSize ] fileAge=$fileAge SecDir=$SecDir maxDirSize$maxDirSize find $SecDir* -mtime +$fileAge -exec rm {} \;" # decrement $fileAge ... 
fileAge=$(( $fileAge - 1 )) # and re-determine the size of $SecDir. secDirSize=`du -s $SecDir | cut -f1` # Just in case things are crazy, don't delete todays files. if [ $fileAge -le 1 ] then secDirSize=0 fi # Stop generating output for email. # set +x done -- EDIT: Adding echo "PATH -$PATH-" to the top of the script results in the first line of the email being: + echo 'PATH -~/bin:$PATH-'. So now my questions is, what happened to my PATH, and what's the recommended way to add useful directories to it? I assume that this is effecting all of my CRON jobs. --
It works from an interactive session, so I would surmise it's a setting in cron. Ensure that you either don't have a PATH= line in your crontab, or, if you do, that it contains explicit directories (cron's PATH assignment isn't additive like the shell's): PATH=/home/myhomedir/bin:/usr/local/bin:/bin:/usr/bin
"du: command not found" - by CRON job
1,562,010,413,000
Can you please help me create a script considering the following points? Step 1: First, stop all existing cron jobs (at 5 PM) that are running, by commenting them out with # Step 2: After that, delete some files Step 3: Then start all existing cron jobs again (by removing the #) Thank you.
Julie's simple solution, stopping and restarting the service as a script (Debian Linux): #!/bin/sh service cron stop ... dowhatyouwant ... service cron start Or delete the crontab temporarily: crontab -l > cronsafe crontab -r dowhatyouwant crontab cronsafe An approach for short interruptions: send a STOP signal to cron and resume with CONT, like killall -STOP cron dowhatyouwant killall -CONT cron
Script for modifying cronjob [closed]
1,562,010,413,000
Normally I don't post questions which ask for advice on how to directly answer a question in a textbook, but in this case I feel it is necessary. The last question said to write a C program and then set it up as a cron job. I've done that. The next question asks (so I assume it's possible): Explain how you can find the time it took the computer to execute your program. Half the time I'm not sure if the program even executed at all, so I'm glad to hear there may be a method to not only check that but also how long it took to execute. How can I find the time taken to execute the program?
It sounds like it's asking for the time it took to execute your C program, not how long cron took to execute it. However, as others have commented, you can use the time program to time a command's execution, as below: $ time myprogram will give you the total time the program took to execute to completion.
Find time it took to run a cron job?
1,562,010,413,000
I am using a cron to execute a python script every 15 minutes during the day. At nighttime it should only run every hour. I made 2 entries for this: 0 23-5 * * * python /var/www/script.py > /dev/null 2>&1 */15 6-22 * * * python /var/www/script.py >/dev/null 2>&1 The one running during the day works fine. This is the last entry from that script in /var/log/syslog: Jan 26 22:45:01 web CRON[20278]: (sysadmin) CMD (python /var/www/script.py > /dev/null 2>&1) But there are no entries for that script after 23:00. The next entry from that script is: Jan 27 06:00:01 web CRON[26367]: (sysadmin) CMD (python /var/www/script.py > /dev/null 2>&1) And that is the "day-cron"-entry starting again. Both entries are made in the same crontab of the user "sysadmin". Any ideas what the problem here is? Or where else I could look for clues? I am running Ubuntu 16.04.1 LTS - 4.4.0-42-generic.
Change 23-5 to 23,0,1,2,3,4,5 or you can add two lines like so: 0 23 * * * python /var/www/script.py > /dev/null 2>&1 0 0-5 * * * python /var/www/script.py > /dev/null 2>&1 Or even as others have said (I forgot you can mix and match): 0 23,0-5 * * * python /var/www/script.py > /dev/null 2>&1 The reason why? Because 23-5 is not a valid range. The range must be from low value to high value only.
Cron stops working after a specific time
1,562,010,413,000
Admin note: This question is different from "why is sudo path different than su" because the environment variables in a bash script run from cron do not appear to carry over from the environment variables set for users under either sudo or su. (See everything after the BUT.) When running sudo su and showing paths, I have /usr/local/bin in my path. I have several custom apps I put in that folder with the intent of making them available system-wide. In /etc/sudoers, /usr/local/bin is in the secure_path. BUT When running a bash script executed as root via a cron job, /usr/local/bin is apparently not preserved in the path, as I get command not found when attempting to run apps that are installed there, despite the fact the directory is in /etc/sudoers. How do I get these apps to be available to root? Ubuntu 16.10
The environment in a cron job is, as you are seeing, different from that in a shell invoked by su - or sudo -s or sudo /path/to/executable. You can, however, set variables within the cron table. Note that cron does not expand variables in these assignments, so you cannot append to $PATH; list the directories explicitly: PATH=/usr/local/bin:/usr/bin:/bin 0 0 * * * /path/to/run-me-at-midnight-with-path-changes.sh
Why are sudo su and bash root script paths different? [duplicate]
1,562,010,413,000
I have the following case, which may be simple, but I don't know which way is logically correct and how to do it exactly. I have multiple sites in /www/; each site is in its own directory and has its own user: /www/site1/ // user site1 /www/site2/ // user site2 /www/site3/ // user site3 Now I want to make a cron job which will run a PHP script and update one table in the database of each site. The script and the actual job aren't the problem. The problem is how to do it properly. How do I create a job for each user? The cron job will run every half an hour, if it matters.
Add a new user - let's call them allsites. Add the allsites user to the /etc/group for site1, site2 and site3 users. Run the script as the allsites user. Then run a single script with the different details per site included in the script. For (a very basic) example. A copy of script-name.sh is located in each of the $LIST directories: #!/bin/sh # The base location of each site LIST="/path/to/site1 /path/to/site2 /path/to/site3" # Place script-name.sh in each of the above paths SCRIPT_NAME="script-name.sh" for i in $LIST do sh "${i}/${SCRIPT_NAME}" done Another basic example would be something like the following. A single script will pull in a custom config. #!/bin/sh # The base location of each site LIST="/path/to/site1 /path/to/site2 /path/to/site3" # Place details for each site in config.sh in each of the above paths CONFIG="config.sh" for i in $LIST do # Pull in the config for the current site . "${i}/${CONFIG}" # Add your commands here that use the details from $CONFIG echo "EXAMPLE: user name: $username" done The config.sh which contains the unique details per site would be something like this: #!/bin/sh # User name for DB connection?? username="site1user"
Cronjob(s) for multiple users
1,562,010,413,000
This should hopefully be a simple question, but I couldn't find the right search term in Google to show me what I was after... This morning I went to look at my crontab to remind myself of the location of a running script; crontab -e I found the location, and then exited Shift + Z + Z I was then shown a message [1]+ Stopped What does this mean? I've never seen this message before. Normally I'd expect the message "no changes made to crontab" or if I'd made edits "installing new crontab"
That output is from the bash job control system. It is telling you that the first job ([1]) and the last process to be backgrounded (+) has been Stopped (aka paused/suspended). If you run the command jobs it will print currently backgrounded/suspended processes you will likely see your command listed. One way to stop (suspend) processes like this is to press Ctrl+Z, which is likely what you did by accident when trying to press Shift+Z. The process has not ended, and is not likely to have saved your changes. To resume the command in the foreground so you can actually save and quit run fg.
Info messages after editing crontab
1,562,010,413,000
I have postfix running in a Docker container. A cron job tries to send email, but the error "(CRON) info (No MTA installed, discarding output)" appears in the syslog. According to this link, the solution is to install postfix so CRON can send email. I have postfix running on this host; is there a way to use it? I have some other hosts and postfix isn't running on them; can I also tell them to use the remote dockerized mail server?
The problem is not that postfix is not running, but that it should be listening on a mapped port 25 or 587 on the host. Even if that is the case, without installing postfix on the host there is probably no client installed that sends the mail (taken from stdin or command-line parameters) to the port. You can install the client programs sSMTP or msmtp for that, but I have solved this within a small wrapper program for my crontab entries that I was using anyway. The wrapper only sends mail if the program (the "real" crontab entry it called) exits with a non-zero exit value or its output contains the string "error:". That reduces the spam from my own systems, i.e. no email if the program ran fine. The program uses the Python standard library smtplib module to send the caught output. The wrapper runs the program with subprocess.check_output(cmd, stderr=subprocess.STDOUT), and sends the result on error using: smtp = smtplib.SMTP(host="", port=0) # by default 'localhost' and 25 smtp.connect() smtp.sendmail( from_email, to_email_list, email_header_and_body ) By setting the host and port I also use this to run jobs from other docker containers.
Tell guest OS to use my dockerized mail server to send email
1,562,010,413,000
I'm busy setting up a back-up script to run on my Pi using rsync. I see that a number of people use the -v option in their cron jobs. Why? It's going to be run as root, and not in a terminal where someone will see it. I understand that maybe if something happens you can tail /var/log/syslog, but the chance of that happening is negligible. As I'm running the backup between 2 external hard drives on the same system, I can see the benefit of using -za: the -z for compression, because why not, the CPU is barely taxed at the best of times; the -a to preserve permissions, time-stamps, symlinks, owners and groups, and to make it recursive. I might remove the -z and replace it with -W to write whole files instead of blocks, but I don't want it to run for too many hours. Is there a way to output any encountered errors to an error log file? In that case, the -v option might make sense - unless I'm missing something here.
Usually, cron sends the output of the jobs it runs to the relevant user; so -v is useful there because you get an email with the full output of the rsync command. On a correctly-configured system, even mail to root goes to the appropriate user. For this to work you need mail to be setup appropriately on the system running cron; that used to be common on Unix-type systems, not so much nowadays... cron uses sendmail by default to send email; this can be overridden with the -m option to crond. Alternatively, you can configure crond to log job output using syslog, with the -s option. You can also redirect individual cron jobs' output using shell redirection, so > somelog.log 2> errorlog.log would log standard output to somelog.log, and standard error to errorlog.log (you can of course add paths).
Use of Verbose in a cron job
1,562,010,413,000
I have a script located at /home/user/backup.sh. I'm trying to execute this script via my root crontab, but it isn't running. The script works if I create the crontab under my user account. Is there any way to run this script via the root crontab? Here is the line I am using: 25 20 * * * sh /home/user/backup.sh Contents of the script: tar -czvf /home/user/backup/backup.tar.gz /var/www/ mysqldump -u root -p password --all-databases > /home/user/backup/backup.sql rsync -avz /home/user/backup user@myserver:/home/user/
root has full access on your system, but doesn't necessarily have all the keys to other systems that your normal user account has. So the trouble is: rsync -avz /home/user/backup user@myserver:/home/user/ ^^^^ If you cause root to execute this command as you, your keys will be used and the command will be successful: sudo -u user rsync -avz /home/user/backup user@myserver:/home/user/ Alternatively, you could install root's public key as an accepted SSH key for user on the myserver system.
How to run a script outside of home directory via cron
1,562,010,413,000
I have some Java executable (jar) that is run my some shell script from a cron job once every night. That executable does not print log statements "as usual" just by printing them out in a sequential manner like line after line (print after print), but while it processes its data its printing a single line with status data and then "overwrite" or "update" just that single line over and over again, until its done with this part of processing. Examples: A common program would output something line this: Program XYZ started..<cr+lf> processing 1<cr+lf> processing 2<cr+lf> processing 3<cr+lf> im done<cr+lf> Simple! I can easily redirect its output to a file and I'm done. But the program I will have to deal with does it more like this: Program XYZ started..<cr+lf> processing 1<cr> processing 1 - part a<cr> processing 1 - part b<cr> processing 1 - part c<cr> processing 1 - part d<cr> <cr+lf> processing 2<cr> processing 2 - part a<cr> processing 2 - part b<cr> processing 2 - part c<cr> processing 2 - part d<cr> <cr+lf> processing 3<cr+lf> im done<cr+lf> but much more intensive than I showed here. So it just not always overwrite status lines, but does it a lot while processing. So just redirecting or appending its output to a log file does not seem to make a lot of sense here, since it will just clutter the log file with millions of screen-updates that can not be viewed and understood that easily by humans. So my question is: is there a way or tool for programs that may behave like that to "log" or maybe "screenshot every 30 seconds" their output to a log file?
Trivially, assuming there are no vt100 escapes to handle, you might try pushing the output through sed 's/\r$//; s/.*\r//' or the awk equivalent awk '{ sub("\r$",""); sub(".*\r",""); print}' but this assumes your sed or awk can handle very long lines, as the carriage-returns obviously aren't newlines. Also, carriage-return only moves the cursor to the start of the line, but doesn't erase what is on that line, so strictly you should keep the long line before \r\r\n in your example. To cope with the effectively long lines you could use tr '\r\n' '\n\001' to translate the \r to newline, and simultaneously the newlines to some other character not in your data, like control-a. Or you can change awk's input record separator with RS='\r', if your awk allows it. This gives you a script like: awk -v RS='\r' ' { if(sub("\n","")){print last;last=""} if($0!="")last = $0 }'
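To sanity-check the sed expression above, here is a fabricated progress log (the \r-overwritten updates from the question) pushed through it. GNU sed is assumed for the \r escape; note that, per the caveat above, only the final state of each overwritten line survives:

```shell
# Simulate a log whose lines are repeatedly overwritten with \r,
# then keep only the last update of each line.
printf 'processing 1 - part a\rprocessing 1 - part d\r\nim done\r\n' |
    sed 's/\r$//; s/.*\r//'
# prints:
#   processing 1 - part d
#   im done
```

Piping the program's output through this filter before appending to the log file keeps the file readable without the millions of intermediate screen updates.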
how to properly log the output of a console program that frequently updates "parts" of the screen, resulting in a messy log file?
1,562,010,413,000
I have a cron job; it's just a script clearing directories which accumulate files over time. Unfortunately, it is not firing. Any idea why? I created the cron job using cronmaker.com. How do I know it was not fired? Because it is set to fire every night, yet there are old files in the directories. Output of crontab -e: 0 0 3 1/1 * ? * ./home/deploy/scripts/clearzip.sh Thank you for your help. Script contents: #!/bin/bash rm -rf /media/attachment/zip/*.* rm -rf /home/deploy/excel/*.* rm -rf /home/deploy/pdf/*.*
CronMaker uses Quartz cron triggers, which add a couple of non-standard fields (for seconds and years). You should drop the first and last fields for standard cron, use * instead of ?, and remove the leading . from your command: 0 3 1/1 * * /home/deploy/scripts/clearzip.sh will run every day at 3am. More idiomatically, 0 3 * * * /home/deploy/scripts/clearzip.sh
Debian : Cron job not firing
1,455,896,903,000
I made a script called hello.sh and it contains the following: #!/bin/bash printf "$( arp-scan --interface=eth0 --localnet )\n" printf "test\n" after making it executable (chmod o+x) and running it (./hello.sh >> file.txt) I get the correct output in file.txt (which is the arp-scan result and the string "test"). But after adding the following line to crontab -e: */1 * * * * /path/to/hello.sh >> /path/to/file.txt I get the following output: test test test As you can see there is a empty string returned by the arp-scan part. How can I get arpscan working with cron? Additional info: Everything is done after logging in as sudo (sudo -i). arp-scan needs sudo. I am running this on Fedora 20.
cron runs with a very minimal environment and a reduced path. It's always safest in scripts designed to be executed by cron to ensure all commands have the full path provided, or the script sets its own PATH variable. Try adding the full path to the arp-scan command.
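As a sketch of that advice, a cron-destined script can pin its own PATH at the top so command lookup no longer depends on cron's reduced environment. The directory list below is an assumption — adjust it to wherever arp-scan actually lives on your system (often /usr/sbin or /usr/local/sbin); date stands in for arp-scan so the check runs anywhere:

```shell
#!/bin/sh
# Set an explicit PATH instead of inheriting cron's minimal one
# (cron typically provides only something like /usr/bin:/bin).
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH

# Fail fast if a required tool is still missing under this PATH;
# "date" is a stand-in here for arp-scan.
for tool in date; do
    command -v "$tool" >/dev/null 2>&1 || {
        echo "missing required tool: $tool" >&2
        exit 1
    }
done
echo "all tools found"
```

The alternative, calling /usr/sbin/arp-scan (or wherever `which arp-scan` points) with its full path, works equally well and avoids touching PATH at all.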
Sudo cron job using arp-scan gives empty output
1,455,896,903,000
When I use time and date at the end of a crontab line like backup`date +%F_%T`.sql or like backup`date%d%m%y`.sql, my crontab command doesn't work. But when I remove it, it works perfectly. Why doesn't it work when I use time and date like date%d%m%y?
First of all you need to escape every %, and you should also use a slightly different syntax with date. So e.g. this one will work just fine: `date "+\%d\%m\%y"`.sql
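For reference, this is what the escaped crontab expression expands to at run time (outside a crontab the backslashes are not needed):

```shell
# Inside a crontab, each % must be written as \% because cron treats a
# bare % as "newline + start of stdin". The shell then sees a plain %.
fname="backup$(date "+%d%m%y").sql"
echo "$fname"    # e.g. backup170224.sql
```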
Why doesn't `date +%F_%T` work in crontab? [duplicate]
1,455,896,903,000
I have a cron job that runs a runnable .jar, and I really want to check the output logs in the console. I also wondered: the command below will start to run at 3:15am, but this jar takes a lot of time to complete. Is there a possibility that the currently running instance will be overridden by the next schedule? I need some clarification. 15 3 * * * java -jar -Xmx4G -Xms256M /home/desktop/Documents/Run/New_Version/wine.jar batch >> /var/log/wine.log
The cron entry you specified will run at 3.15am (if the host is turned on), but there are a few things to check: The user context of the job: can it find the java binary to start? Consider using the absolute path to java. Does the user have access to write to /var/log/wine.log? Output of stderr: consider logging the error output as well, using 2>&1 in your command so you can troubleshoot the cron run: 15 3 * * * java ... >> /var/log/wine.log 2>&1 (the 2>&1 must come after the file redirection to take effect). And have a look at the cron log (often part of the syslog or messages log).
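As for the overlap question: cron does not wait for or kill the previous run; at 3:15 the next day it simply starts a second instance alongside a still-running one. A common guard (my addition, not from the answer, using util-linux flock; the absolute paths are assumptions to be checked with command -v) looks like:

```
15 3 * * * /usr/bin/flock -n /tmp/wine.lock /usr/bin/java -jar -Xmx4G -Xms256M /home/desktop/Documents/Run/New_Version/wine.jar batch >> /var/log/wine.log 2>&1
```

With -n, a run that finds the lock still held exits immediately instead of piling up; note also the redirection order, file first and 2>&1 after it.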
output logs in crontab does not work [closed]
1,455,896,903,000
I created a small bash script that takes a file using wget and then processes it using a PHP script. The code goes like this: wget -U mozilla -P /home/logfetcher/ http://fakesite.com/log.`date -d 'yesterday' +%Y-%m-%d`.csv wait /usr/bin/php csv-editor.php /home/logfetcher/log.`date -d 'yesterday' +%Y-%m-%d`.csv /home/logfetcher/sorted/log.`date -d 'yesterday' +%Y-%m-%d`.csv 3 9 7 0 2 1 5 11 12 13 && rm /home/logfetcher/*.csv I tested it and it works without any problems, though when I added it to cron like this: 0 6 * * * /home/logfetcher/fetchlogs.sh It downloads the file, but the PHP step doesn't seem to run at all (nor the rm, which I guess indicates an error trying to run the php file). I've been trying to think about why this could be failing and tried a few things, but I can't seem to find a proper answer to fix it. Any help would be appreciated!
What does the cron log say? Or syslog? If there is an error with the first part of the command, then the second command (after &&) won't be executed. Also, is the script csv-editor.php given with a path? It is resolved relative to cron's working directory, so you should run it with an absolute path, /x/y/my.script. I'm pretty sure that's the problem, so "/usr/bin/php /x/y/script.php ...." should resolve it. But first read the logs...
Simple bash not running properly by cron
1,455,896,903,000
I was trying to create a cron job which runs some Ruby code on Digital Ocean, however it seems I'm making a mistake. It doesn't give any error, but it also doesn't do anything. I ran this cron job on my Raspberry Pi, however on Digital Ocean it doesn't work. Here is my cron job: 59 17 * * * ruby /home/workspace/delta/analytics/analyze.rb 7 >> /home/testruby It creates the testruby file but analyze.rb 7 doesn't work. I tested running ruby /home/ .... and it is working. What might be the problem? UPDATE error file: bin/sh: 1: /usr/local/bin/ruby: not found This is what I wrote in my crontab * * * * * /usr/local/bin/ruby /home/workspace/deriva/analytics/analyze.rb 7 >> /home/testruby 2>&1
Different environment variables, working directory, ... You need to debug where exactly analyze.rb is bailing out. First, you're only redirecting stdout, not stderr. Errors probably go to the latter, so adding a 2>&1 to the end may help a lot. Or set MAILTO= at the top of your crontab to have them mailed to you. You can confirm that Ruby is starting up by adding print "starting!\n" or similar to the beginning of your Ruby script, and seeing if that shows up in the log file.
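A crontab sketch combining both suggestions; cron's mail variable is MAILTO, and the address plus the ruby path (check with command -v ruby on the droplet) are placeholders:

```
MAILTO=you@example.com
59 17 * * * /usr/bin/ruby /home/workspace/delta/analytics/analyze.rb 7 >> /home/testruby 2>&1
```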
Running a ruby cron job
1,455,896,903,000
I have set up a cronjob on my server which is supposed to run every minute and store the output in the given file. I have been trying a lot and saw a lot of links but nothing seems to work. Following is the line which I wrote in crontab -e * * * * * /root/snmp_codes/snmp/.\/snmpstats.py -f file -g > logfile.log Can anyone please tell me what mistake I have made?
Fix the path so it's correct. Based on your comment it's likely to be /root/snmp_codes/snmp/snmpstats.py. You can also modify the command so that it captures stderr as well as stdout like this (the 2>&1 attaches stderr to stdout so you get both written to the logfile.log): * * * * * /root/snmp_codes/snmp/snmpstats.py -f file -g > logfile.log 2>&1
Crontab giving no result
1,455,896,903,000
So I have this in my crontab -e 0 0 * * * /opt/www/backup.sh And in the file, I have the following: #!/bin/bash FILENAME=$(date +%Y%m%d).tar.gz tar zcf backups/$FILENAME f The file is located at /opt/www But the backup doesn't get created at all. If I run backup.sh manually, then it'll run and create the backup of the directory f as it should do. I'm running Debian 7.8.
The current (working) directory is probably not set to /opt/www when the cronjob is started. You can set it in your script backup.sh before the tar... line by: cd /opt/www or you can use the full path in the tar line by: tar zcf backups/$FILENAME /opt/www/f I would also advise using a full path for backups/$FILENAME
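A sketch of the fixed script, here pointed at a temporary tree instead of /opt/www so it can be run and checked anywhere:

```shell
# Same logic as backup.sh, but with an explicit cd so it no longer
# depends on cron's working directory. The temp dir stands in for /opt/www.
www=$(mktemp -d)
mkdir -p "$www/f" "$www/backups"
echo 'index' > "$www/f/index.html"
cd "$www" || exit 1
FILENAME=$(date +%Y%m%d).tar.gz
tar zcf "backups/$FILENAME" f
ls backups    # shows e.g. 20240217.tar.gz
```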
Crontab to back up directory
1,455,896,903,000
I need to keep track of succeeded jobs on a daily basis, with jobs ranging from every 10/20 minutes to twice or thrice a day. At least for the every-10/20-minute runs, I'd be interested to know if there are any ways to keep/maintain the counts for those, maybe in the command part after the script's entry, but then how do we reset it after 24 hours? Any workable solution is appreciated.
A few things: add some kind of timestamp to the job's output; don't redirect anything to /dev/null; set a MAILTO notification to send output to the required team. We are using Nagios for this, and also cronwatch.
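One workable low-tech sketch (my addition, paths are placeholders): have each successful run append a timestamped line to a per-day log. Because the date is part of the file name, the count resets every 24 hours for free:

```shell
# Append one line per successful run; count them with grep -c.
logdir=$(mktemp -d)    # stand-in for e.g. /var/log/jobcounts
log="$logdir/myjob.$(date +%F).log"
echo "$(date '+%F %T') OK" >> "$log"   # in a crontab this goes after the
echo "$(date '+%F %T') OK" >> "$log"   # job, e.g.: job.sh && echo ... >> log
count=$(grep -c ' OK$' "$log")
echo "runs today: $count"    # prints: runs today: 2
```

In a real crontab entry the % signs in the date formats must be escaped as \%.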
Is there any way to maintain counts in crontab?
1,455,896,903,000
I am scheduling the following shell script #!/bin/bash (echo open mailserver.nowhere.local 25; sleep 1; echo EHLO; echo quit)\ | telnet | grep "?Invalid command" if [ $? -eq 1 ]; then if [ -r /tmp/sendmail_stopped ]; then rm /tmp/sendmail_stopped /etc/init.d/sendmail start mail -s "sendmail has started back up." [email protected] < /dev/null else echo "sendmail OK" fi else if [ ! -r /tmp/sendmail_stopped ]; then touch /tmp/sendmail_stopped /etc/init.d/sendmail stop else echo "sendmail still not OK" fi fi like this */5 * * * * root /home/amr/bin/sendmail_alive.sh 2>&1 > /tmp/sendmail_alive.log but am still getting email from the telnet command. I've tried a bunch of ways to suppress getting the email that contains the output of the telnet command, which is Connection closed by foreign host. I cannot figure out what I am doing wrong. Any suggestions would be appreciated.
Common mistake, wrong order of redirection, try this: … sendmail_alive.sh >/tmp/sendmail_alive.log 2>&1 It works like this: file descriptor stdout goes to /tmp/sendmail_alive.log, then file descriptor stderr goes to the current target of stdout (/tmp/sendmail_alive.log). With your order, you first point stderr at what stdout originally pointed to (under cron, the mail pipe) and only then redirect stdout to the log file, so the stderr message "Connection closed by foreign host." still gets mailed.
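A quick demonstration of the difference (runnable anywhere; the temp file is mine):

```shell
tmp=$(mktemp)
# Right order: stdout -> file first, then stderr -> same target as stdout.
{ echo out; echo err >&2; } > "$tmp" 2>&1
echo "right order: file holds $(wc -l < "$tmp") line(s)"
# Wrong order: stderr is duplicated from the OLD stdout before stdout is
# moved to the file. Under cron the old stdout is the mail pipe; here it
# is the command substitution, which catches the escaping "err".
escaped=$( { echo out; echo err >&2; } 2>&1 > "$tmp" )
echo "wrong order: file holds $(wc -l < "$tmp") line(s), escaped: $escaped"
```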
How do I suppress all email notification from cron for particular job?
1,455,896,903,000
I added this rule to the bottom of my crontab * * * * * /root/test.sh It basically resets iptables so I can fool around. If I mess up I won't be locked out of my box. When I run the script in bash writing /root/test.sh it clears everything as expected. However if I wait a minute it doesn't seem to execute. I ran the command below and can see every minute it appears to run my script but my script isn't doing anything. grep CRON /var/log/syslog IIRC the way to force a script to run as the owner is chmod a+s file. So I did that. stat shows this line Access: (6755/-rwsr-sr-x) Uid: ( 0/ root) Gid: ( 0/ root) Shouldn't cron be running my script as root? Why doesn't it seem to be executing? I am running debian 7 (wheezy)
You cannot run a script (as opposed to a binary) with SUID permission. Your script is executing, but as your user, not as root, so its iptables calls aren't working. Error messages from cron jobs go to local email. Make sure that local email is configured properly (some distributions don't do it by default). The easy solution (since you have root access) is to install that script in the root user's crontab, or alternatively in /etc/crontab using the line: * * * * * root /root/test.sh
crontab not executing my script?
1,455,896,903,000
I have a Python script that runs from the command line beautifully, but when I try and run it from CRON does strange things. The script generates, then runs an apk script file. The apk script file is saved in /usr/src/scripts/plots/core_temp_data/weeklyplots when run via the command line; but it is saved in /home/pi when run from CRON. I've also tried writing the CRON errors to a logfile, but that is placed in the /home/pi directory too! The script is working, but saving the resultant files in the wrong place, so how do I specify the correct path? I've tried searching for how to specify the path, but got myself horribly confused. The part of the script that generates the apk script is fout = open("live_gnu_command.gpl", "w") following D_byes help this worked: fout = open("/usr/src/scripts/plots/core_temp_data/weeklyplotslive_gnu_command.gpl", "w")
By default, cron runs all jobs in the home directory of the user who owns the job. Make sure that your python script uses absolute paths when writing the output files, or it'll put them in the current user's home directory.
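If editing the script is awkward, the other common fix is a cd in the crontab entry itself, so relative paths resolve where you expect (the schedule and script name here are placeholders, since the question doesn't give them):

```
*/5 * * * * cd /usr/src/scripts/plots/core_temp_data/weeklyplots && /usr/bin/python plot_script.py
```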
CRON path problems
1,455,896,903,000
Via my host I have SSH access and a control panel. I installed Rails via SSH on a lightweight server. I want to reboot the server when it goes down. I think I need to start it manually since I want to start two servers which run on different versions of Ruby. I was wondering if I can use the "Cronjob" area on my control panel for this, and if so, what command I need. Via SSH I would start the server as follows: cd [app1path] bundle exec thin -C /etc/thin/app2.yml start cd [app2path] bundle exec thin -C /etc/thin/app2.yml start It is important that the steps are executed in sequence, since they seem to conflict with each other (running them as a service from init.d only starts one, strangely enough placing a sleep 60 in one of the two does not make a difference, but anyway). Placing the code below in my control panel does not seem to do anything: @reboot [app1path; bundle exec thin -C /etc/thin/app1.yml start; cd [app2path]; bundle exec thin -C /etc/thin/app2.yml start What would work? I would be grateful for some tips and guidance.
The @reboot entry is started when cron is started, but that doesn't mean everything necessary to run your bundle application is up and running. Depending on the setup, e.g. your network might not be up at that time. I would do the following in your case: have the reboot job write a unique file in some known place have a normal cronjob that executes on a regular basis, e.g. every X minutes. This job checks if the unique file exists, and has existed for at least Y minutes. If the file exists and is old enough, the file gets deleted and the commands to start the bundle are executed. With the checks you can be sure that your command does get started only once, roughly between Y and X+Y minutes after reboot. You can probably reduce X to 1 and Y to 1 or 2, giving a delay of 1 to 2-3 minutes after reboot (you should take larger values if your machine takes longer to get fully up and running). An alternative is to create an init.d job yourself and insert that with appropriate links so that all necessary services are started before that. (How to do that depends on what kind of system your Debian system uses: systemd, sysvinit or some other)
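A runnable sketch of the flag-file check (my own, with assumed paths and ages; needs GNU touch/find). The @reboot job would simply touch the flag; here it is created pre-aged so the branch can be seen to fire:

```shell
# The periodic job: if the flag exists and is older than 2 minutes,
# remove it and start the apps (echo stands in for the thin commands).
FLAG=$(mktemp)                        # stand-in for e.g. /var/run/boot.flag
touch -d '10 minutes ago' "$FLAG"     # simulate a flag left by @reboot
if [ -e "$FLAG" ] && [ -n "$(find "$FLAG" -mmin +2)" ]; then
    rm "$FLAG"
    echo "flag old enough: starting app1 and app2 here"
fi
```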
How to create CronJob for running commands on reboot?
1,455,896,903,000
I run a screen command through cron, where the commands are in a lorem.sh file. This is the cron entry: * * * * * cd /home/z; ./lorem.sh Inside lorem.sh: screen -S screenname -X stuff $'\033[B' sleep 1 && screen -S screenname -X stuff 2 sleep 1 && screen -S screenname -X stuff "lorem ipsum dolor" sleep 1 && screen -S screenname -X stuff $'\n' In lorem.sh above, the first line is "Arrow down" and the fourth line is "Enter". When running lorem.sh through cron, only the second and third lines work. lorem.sh works fine if run from a terminal by typing ./lorem.sh, but it does not work from cron.
Most versions of cron run commands using /bin/sh by default, and if the commands run any shell scripts (that don't have a #! line to force use of a specific shell), /bin/sh will be used to run them, too. On some systems, /bin/sh is dash, a shell that doesn't understand the ANSI-C quoting convention used by bash and other shells. So your $'\n' string is probably getting interpreted as the 3-character string $\n. Most versions of cron will let you specify a shell to run your commands. You can have it use bash by editing your crontab to add a line SHELL=/bin/bash that comes before any lines that schedule jobs. Alternatively, you can make lorem.sh always use bash by adding #!/bin/bash as its first line.
Run screen command with variable & tick through cron (cron run .sh)
1,455,896,903,000
I've written a short script to copy Apache's server status to a log: #Save date and time to a variable dt=$(date) #Echo date and time to the log file as it's not included in server-status echo "Time :" $dt >> /var/logs/server-status.log #Grab machine readable server-status and add it to the log curl localhost/server-status?auto >> /var/logs/server-status.log I'm running it every five minutes from a cron job in /var/spool/cron/root but taking a look at the sysstat logs it seems to be using progressively more memory: So my question is: Could I be causing a memory leak by continually writing to this variable? Do I need to kill it after the script is run?
The answer is: yes, it can, provided there is a bug in the script interpreter. But in your code you are not doing anything funny, so if you are using a stable version of the shell, it's almost 100% certain your problem is somewhere else.
Can continually assigning a variable in a shell script cause a memory leak?
1,455,896,903,000
I installed an RPM on a CentOS machine that was supposed to merge its own files contained in a /etc/cron.d structure with the current /etc/cron.d. However it replaced the current contents. I believe cron.d is a symlink. How can I retrieve the original /etc/cron.d?
/etc/cron.d is not a symlink on my CentOS 5.x box: drwx------ 2 root root 4096 Feb 5 2013 /etc/cron.d So, if it's missing entirely, you can restore it with: # install -d -m 700 -o root -g root /etc/cron.d If something else is in its place, you could move it out of the way, recreate the directory, and then selectively move things back in place. To get a list of all files that are supposed to be installed there, say: # rpm -qla | grep /etc/cron.d Saying rpm -qf filename will tell you which package owns that file, hence which package you can reinstall to restore that file.
How to rebuild /etc/cron.d on CentOS?
1,455,896,903,000
I am setting up my first cron on a Ubuntu Desktop system. I was curious to know if there is a way to lock a folder before a cron is ran and how to set it up? I am following this article. I am just trying to make sure I don't damage any files when moving them from and to the cron folder because it is on a shared network.
A simple script with locking and a delay before accessing files: #!/bin/sh if mkdir /tmp/myscript-running; then cd /mnt/share/whereever find . -type f -mmin +1 -print0 | xargs -i -n 1 -0 myscript.sh "{}" rmdir /tmp/myscript-running else : # previous instance still running, do nothing fi GNU find or equivalent required for the -mmin option. You can run this as often as required via cron. Replace myscript.sh with your processing script. The main features are: use mkdir to create a lock directory to prevent overlapping instances use find with -mmin +1 which limits the output to files modified more than 1 minute ago, this is to try to ensure new files are fully copied use xargs to process the files one at a time, "{}" is replace by the filename use \0 terminated filenames with find | xargs so that troublesome filenames don't cause problems You should be able to modify the find parameters to match what you need. You can also use the above logic to move completed files from an "uploading/" to a "ready/" directory, which might simplify things. Ubuntu also has lockfile and shlock which can be useful, see the man page of the latter for more ideas.
Lock folder when running cron
1,455,896,903,000
I have a Raspberry Pi which I want to use to automate downloads to a network drive. I would like to restart the machine on a regular basis using cron (I've heard they're not the most stable of things) but obviously, I would prefer not to restart it half way through an incomplete download. I'll probably be downloading using several different methods but one of the ways I'll be using soon is get_iplayer and probably basic wget stuff too. Is there a way I can check if one is in progress? Any advice appreciated.
Some dirty ideas: poll running software using ps: if a wget instance is running, then do not reboot. Or create a lock file when triggering a download, and poll the lock file. Either way, wget -c allows resuming an interrupted download.
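A sketch of the polling idea (my code; on a box with no download running it reports idle, so treat the output as machine-dependent):

```shell
# Check for a running wget or get_iplayer before allowing a reboot.
# Note: get_iplayer is a Perl script and may show up under the process
# name "perl"; adjust the pattern for your setup.
if pgrep -x wget >/dev/null 2>&1 || pgrep -x get_iplayer >/dev/null 2>&1; then
    state=busy     # a download is in progress: skip the reboot
else
    state=idle     # safe to reboot
fi
echo "download state: $state"
```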
How can I check if a file is being downloaded?
1,455,896,903,000
I am trying to issue the gnome-session-save --kill command via crontab. I used the command sudo crontab -e. In the file is this: PATH=/usr/bin 00 00 * * * gnome-session-save --kill The command does not run as it is supposed to. /var/log/syslog shows it running successfully however. In the command I've also tried the full path to the command (/usr/bin/gnome-session-save --kill) without any luck either. Ubuntu 10.04LTS
Caleb was right about passing the correct display variable. I also used crontab -e instead of SUDOing it. In Ubuntu, all you would have to do is specify which display to pass in Crontab. So my command looks like this: 00 18 * * * env DISPLAY=:0 gnome-session-save --kill the env DISPLAY=:0 is what tells to pass the cronjob to the current display (desktop). Alternatively, if you have multiple displays, you can specify which display to pass to using a decimal (0.0 = display 1, 0.1=display 2 etc.) http://webcache.googleusercontent.com/search?q=cache:jdM1kg3ituMJ:https://help.ubuntu.com/community/CronHowto+https://help.ubuntu.com/community/CronHowto%23GUI%2520Applications&cd=1&hl=en&ct=clnk&gl=us&source=www.google.com Yes I used the google web cache cause the page wasn't loading correctly for me =D.
How to use gnome-session-save from cron?
1,455,896,903,000
I have a script that I want executed at the first minute of each hour. Thus, I made a second file named crontab.sh: $ cat crontab.sh #!/bin/bash # script path script_path="/mnt/lap_c/home/ditsiaou/2024/climpact/stavros_keppas/weatherxm/beginning/basic_script.sh" # Add the cron job to run the script at the first minute of each hour (crontab -l ; echo "1 * * * * $script_path") | crontab - echo "Cron job added to run the script at the first minute of each hour." I executed crontab.sh. Finally, the desired script runs successfully at the first minute of each hour. Suppose in the near future I want to delete this rule. Should I just delete the crontab.sh file? Also, suppose I want the script to be executed at the ninth minute of each hour. Will editing the crontab.sh file delete the first rule and create a second one?
No, don't delete your crontab.sh file but if you want to remove the execution just remove the according line while using crontab -e it will edit the crontab using your preferred EDITOR. You can backup before your existing crontab using crontab -l > /tmp/file
Crontab run after deleting a file
1,455,896,903,000
I'm trying to create a single-line /etc/cron.d/ cron entry, which runs the same command serveral times, essentially to create multiple threads, for long-running jobs: * * * * * usertorunas "for i in {1..6} ; do curl -s 'https://www.example.com/path' &>/dev/null & ; done" What am I doing wrong here? It does not execute, as nothing is shown in ps aux. Is this perhaps an issue with whatever shell is being used having a different syntax for the for loop? Prefixing SHELL=/bin/bash doesn't have any effect. I realise I could just list the same thing 6 times but that seems a pretty cruddy way of doing things. This is on Ubuntu 22.04.
The outer double quotes should be removed, and it's an error to use both & and ; to terminate a statement - see for example Using bash "&" operator with ";" delineator? As well, cron runs jobs using /bin/sh by default which doesn't support brace expansion or the &> combined redirection - so either replace those with POSIX equivalents: * * * * * usertorunas for i in $(seq 1 6) ; do curl -s 'https://www.example.com/path' >/dev/null 2>&1 & done or set SHELL=/bin/bash before the job SHELL=/bin/bash * * * * * usertorunas for i in {1..6} ; do curl -s 'https://www.example.com/path' &>/dev/null & done
Cron job with for loop, executing same command several times to create multiple threads
1,455,896,903,000
From the log it seems like it tries to run at scheduled time Apr 29 10:00:01 momspi CRON[13324]: pam_unix(cron:session): session opened for user admin(uid=1000) by (uid=0) Apr 29 10:00:01 momspi CRON[13325]: (admin) CMD (./media/backup.sh) Apr 29 10:00:01 momspi CRON[13324]: pam_unix(cron:session): session closed for user admin Apr 29 10:00:01 momspi postfix/pickup[13242]: B432C89D70: uid=1000 from=<admin> Apr 29 10:00:01 momspi postfix/cleanup[13329]: B432C89D70: message-id=<[email protected]> Apr 29 10:00:01 momspi postfix/qmgr[13126]: B432C89D70: from=<[email protected]>, size=661, nrcpt=1 (queue active) Apr 29 10:00:04 momspi postfix/smtp[13331]: B432C89D70: to=<[email protected]>, relay=hotmail-com.olc.protection.outlook.com[52.101.194.18]:25, delay=2.3, d> Apr 29 10:00:04 momspi postfix/cleanup[13329]: 049AF89D72: message-id=<[email protected]> Apr 29 10:00:04 momspi postfix/qmgr[13126]: 049AF89D72: from=<>, size=3588, nrcpt=1 (queue active) Apr 29 10:00:04 momspi postfix/bounce[13333]: B432C89D70: sender non-delivery notification: 049AF89D72 Apr 29 10:00:04 momspi postfix/qmgr[13126]: B432C89D70: removed Apr 29 10:00:04 momspi postfix/local[13334]: 049AF89D72: to=<[email protected]>, relay=local, delay=0.04, delays=0.01/0.02/0/0.01, dsn=2.0.0, status> Apr 29 10:00:04 momspi postfix/qmgr[13126]: 049AF89D72: removed Apr 29 10:01:02 momspi crontab[13319]: (admin) END EDIT (admin) but my log file is NOT updated and new files that should sync are NOT present my cron job is setup as such (once a day at 10am) [email protected] 0 10 * * * ./media/backup.sh and the script is fairly basic as well #!/bin/bash now=$(date) LOG_FILE="/media/backuplog.txt" { echo "backing up $now" rsync --exclude-from='/media/backupexclude.txt' -avhzz --delete /mount/skittlesshare/ /media/usb1/sharedmedia/ }> ${LOG_FILE} we can see too that its using the admin user. Which is what i would expect and when manually running it (as admin), there are no issues. 
Any ideas why it's not working as expected? Also, the email doesn't get sent.
You are using ./media when I suspect you meant to use /media. If you use ./ that means you are giving a path relative to your current directory. So, for example, if you are in /some/dir/, then ./media would mean /some/dir/media. So give the full path to the script, presumably /media/backup.sh, instead and it should work.
troubleshooting crontab not running
1,455,896,903,000
I've got a veeery simple script written in python that tests my internet connection and saves parsed data to a json file: #!/usr/bin/python3 import subprocess, json, os from datetime import datetime # run the bash script and parse results def speedtest(): result = subprocess.run(["/bin/speedtest-cli", "--simple"], stdout=subprocess.PIPE, encoding="UTF-8").stdout # result = os.system("speedtest-cli --simple") download_str = result.find("Download: ") + 10 download_speed = result[download_str:result.find(" Mbit/s", download_str)] upload_str = result.find("Upload: ") + 8 upload_speed = result[upload_str:result.find(" Mbit/s", upload_str)] ping_str = result.find("Ping: ") + 6 ping = result[ping_str:result.find(" ms", ping_str)] return download_speed, upload_speed, ping def save_to_json(data): # load existing data to a dict existing_data = [] with open("/home/trendkiller/development/python/py_speedtest/data.json") as fp: existing_data = json.load(fp) payload = { "date": datetime.now().strftime("%d/%m/%Y"), "time": datetime.now().strftime("%H:%M"), "download": data[0] + " Mbps", "upload": data[1] + " Mbps", "ping": data[2] + " ms" } existing_data.append(payload) # save to a json file with open("/home/trendkiller/development/python/py_speedtest/data.json", "w") as f: json.dump(existing_data, f, indent=4) def main(): test_result = speedtest() # save to a json_file save_to_json(test_result) return 0 if __name__ == "__main__": main() It runs smoothly when I call it from the terminal. I would like to automate it so it can test my connection hourly. For this I'm trying to use CRON and it works when I schedule it to run every minute (data is saved to data.json). But when I try to run it hourly, data isn't passed to a json file.
Here's my crontab file: @hourly /bin/python3 /home/trendkiller/python/py_speedtest/main.py While looking at the grep "CRON" /var/log/syslog there's nothing strange (to my knowledge): Feb 5 08:00:01 home-lab CRON[119326]: (trendkiller) CMD (/bin/python3 /home/trendkiller/development/python/py_speedtest/main.py > /home/trendkiller/development/python/py_speedtest/log.txt) Feb 5 08:00:04 home-lab CRON[119325]: (CRON) info (No MTA installed, discarding output) Feb 5 08:17:01 home-lab CRON[119719]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Feb 5 09:00:01 home-lab CRON[120590]: (trendkiller) CMD (/bin/python3 /home/trendkiller/development/python/py_speedtest/main.py > /home/trendkiller/development/python/py_speedtest/log.txt) Feb 5 09:00:02 home-lab CRON[120589]: (CRON) info (No MTA installed, discarding output) Feb 5 09:17:01 home-lab CRON[120798]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Any help would be very useful, thanks in advance (and don't mind the clutterred code :P).
A quick answer: write a script in /etc/cron.hourly, e.g. /etc/cron.hourly/runpy: #!/bin/bash /bin/python3 /home/trendkiller/python/py_speedtest/main.py Then make it executable: sudo chmod +x /etc/cron.hourly/runpy Test it with run-parts: run-parts --report /etc/cron.hourly (--test instead lists what would run without executing it). See the run-parts manpage.
Cron, python and saving to a file
1,455,896,903,000
So I'm seeing this weird thing (or at least, weird to me, with my tiny amount of experience). cronnext tells me the nextstring for an every-minute cronjob is the beginning of the current minute, ie in the past. eg, if I run cronnext at 02:11:27, it says: nextstring: Tue Jul 25 02:11:00 2023 ( rather than (as I'd naively expect): nextstring: Tue Jul 25 02:12:00 2023 ) Is this normal, or...? Cuz I couldn't find anything about it in any of the manpages, and I haven't been able to google anything up with the terms that I've been able to think of... I added the anacron tag too, cuz I do have it installed, although I never used it directly, so I don't know if it's relevant. ( $sudo cronnext -c #=> - user: "username" crontab: /var/spool/cron/tabs/username system: 0 entries: - user: username cmd: "/home/username/my_script.sh" flags: 0x0F flagnames: MIN_STAR|HR_STAR|DOM_STAR|DOW_STAR delay: 0 next: 1690276260 nextstring: Tue Jul 25 02:11:00 2023 next: 1690276260 ) ( I've already tracked down and fixed the bug I was originally trying to fix when I came across this, so it turned out to just be a red-herring on that particular issue, but I decided I still want to ask about it, cuz it still seems weird to me. )
The tool doesn't exist on Debian so I downloaded the latest version, cronie-1.6.1 from its source repository on Github. The manpage writes in the DESCRIPTION, Determine the time cron will execute the next job. Without arguments, it prints that time considering all crontabs, in number of seconds since the Epoch, rounded to the minute. (My emphasis.) It appears that the code always rounds down, which generates incorrect answers such as that which you've found. Here's how I built it: ./configure --prefix= make And to run it: src/cronnext A fix appears to be to change the loop starting condition on line 250 of src/cronnext.c that calls nextmatch() to use start+60 rather than start. However, I'm not confident enough of this to submit a patch, and I suspect that fixing start by adding 60 (seconds) where it's defined on line 344 rather than when it's used would be better. Since you use the program, it would be best if you were to submit a bug report on the issue tracker.
`cronnext` reports `nextstring` for every-minute cronjob as beginning of the *current* minute, ie in the *past*. Normal?
1,455,896,903,000
In Debian 12, the following command is run weekly: start-stop-daemon --start --pidfile /dev/null --startas /usr/bin/mandb --oknodo --chuid man -- --quiet which generates man caches in /var/cache/man But looking in that directory, I see all possible languages are being generated: ... ./zh_CN ./zh_CN/cat1 ./zh_CN/cat5 ./zh_CN/cat8 ./zh_CN/index.db ./zh_TW ./zh_TW/cat1 ./zh_TW/cat5 ./zh_TW/cat8 ./zh_TW/index.db I have language set to English, How do I prevent nonsense languages being generated?
mandb doesn’t generate all possible languages, it generates database caches for all installed manpages. Compare the contents of /usr/share/man and /var/cache/man: you’ll see that the languages in the latter correspond to the languages in the former. If you don’t need certain languages, you can remove the corresponding manpages entirely. Create a configuration file for dpkg, e.g. /etc/dpkg/dpkg.cfg.d/locales, containing path-exclude=/usr/share/man/* path-include=/usr/share/man/man[1-9]/* path-include=/usr/share/man/en*/* (for English only; add further path-include entries if you want other languages). This will prevent dpkg from installing other manpages in future. Once that’s done, remove the existing directories you don’t need, e.g. sudo rm -rf /usr/share/man/zh* /var/cache/man/zh*
mandb generates all possible languages in /var/cache/man
1,455,896,903,000
What are the key differences between a Cron job and a scheduler, and in what use cases is using one of these tools more appropriate than using the other?
You have here a list of schedulers: https://en.m.wikipedia.org/wiki/List_of_job_scheduler_software Cron is one of them, a simple one. Others may be more scalable, or support dynamic scheduling of tasks, dependencies between jobs, or the presentation of a dashboard (which jobs executed successfully). There are plenty of characteristics that can differ from one scheduler to another. Note: it is hard to find information about cron-type schedulers on Linux, since "scheduler" also means the kernel component that decides which process gets the CPU at any given moment. I guess this is off topic here.
Cron Job vs. Scheduler: Understanding the Differences and Use Cases
1,455,896,903,000
Below is my existing cron which I wish to enable crontab -l ####Cron to auto restart MYAPP ###*/15 * * * * ansible-playbook /web/playbooks/detectMYAPP/va_action.yml | tee -a /web/playbooks/detectMYAPP/cron.out I wish to enable the cron by matching the app name MYAPP I use the below sed command for the same: crontab -l> /web/playbooks/cronenabledisable/wladmin.cron sed -i '/^#.*MYAPP/Is/^[#]*//' /web/playbooks/cronenabledisable/wladmin.cron crontab /web/playbooks/cronenabledisable/wladmin.cron Unfortunately, it also uncomments the comment section i.e. ####Cron to auto restart MYAPP, which makes the crontab installation fail. Problematic Current Output: Cron to auto restart MYAPP */15 * * * * ansible-playbook /web/playbooks/detectMYAPP/va_action.yml | tee -a /web/playbooks/detectMYAPP/cron.out Expected Output: ####Cron to auto restart MYAPP */15 * * * * ansible-playbook /web/playbooks/detectMYAPP/va_action.yml | tee -a /web/playbooks/detectMYAPP/cron.out Note: I wish to keep MYAPP in the comment section i.e ####Cron to auto restart MYAPP and I cannot simply remove it for the sake of naming conventions
Using sed

$ sed -Ei.bak '/#+(\*.*myapp)/Is//\1/' input_file

####Cron to auto restart MYAPP
*/15 * * * * ansible-playbook /web/playbooks/detectMYAPP/va_action.yml | tee -a /web/playbooks/detectMYAPP/cron.out

-i.bak will create a backup file in case you need to roll back.
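A quick way to check the effect before touching the real crontab is to pipe stand-in lines through the substitution (shown here without -i and without the /I flag, matching MYAPP literally):

```shell
# Stand-in lines standing in for the output of `crontab -l`
printf '%s\n' \
  '####Cron to auto restart MYAPP' \
  '#*/15 * * * * ansible-playbook /web/playbooks/detectMYAPP/va_action.yml' |
  sed -E 's/#+(\*.*MYAPP)/\1/'
```

The comment keeps its leading hashes because the pattern requires a literal * right after them, which only the schedule line has.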
Unable to uncomment just the cron by matching the string MYAPP
1,455,896,903,000
We have a crontab entry that ends up sending an ambiguous redirect error. Pretty sure it's the command to read the date, but don't know how to fix. Are there other solutions? /bin/sh redirects to /bin/bash /opt/startup-shutdown/startup.instances Other > /tmp/`date +%Y%m%d%H%M%S`-cron.log 2>&1 gives: /bin/sh: 1 : ambiguous redirect
crontab uses % for a special purpose: The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the crontab file. Percent-signs (%) in the command, unless escaped with a backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input. If you put a date command into the crontab, every % must be escaped with a backslash.
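As a sketch, the log-name pattern works unescaped in an ordinary shell; it is only the crontab copy that needs the backslashes:

```shell
# In a plain shell no escaping is needed; cron is what treats %
# specially, so the crontab line must write \% instead
log=/tmp/$(date +%Y%m%d%H%M%S)-cron.log
echo "$log"
```

The corresponding crontab entry would read ... > /tmp/`date +\%Y\%m\%d\%H\%M\%S`-cron.log 2>&1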
RHEL7 Cron entry gives ambiguous redirect error? [duplicate]
1,455,896,903,000
I'm trying to run a script that archives a dir every day at 10 PM but It is not executed nor logs send. Please advise .. [root@linux]# crontab -e 0 5 * * 1 /usr/sbin/aide --check 0 22 * * * /root/backup/script.sh >> /var/log/backup_crontab.log Script : [root@linux]# ls -lrt total 2 -rw-r-----. 1 root root 1002 Sep 30 09:28 script.sh [root@linux]#
Your script isn't executable (-rw-r-----), where it should be at least -rwxr-----. Just do a chmod u+x script.sh to add execute rights for root. Unless you invoke it as sh /root/backup/script.sh, running /root/backup/script.sh requires the script to be executable, even from crontab.
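A minimal illustration of the fix, using a throwaway file instead of the real /root/backup/script.sh:

```shell
# A file containing shell code is only runnable by path once it has
# the execute bit; cron imposes the same requirement
script=$(mktemp)
printf '#!/bin/sh\necho backup done\n' > "$script"
chmod u+x "$script"
"$script"   # prints: backup done
```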
crontab does not execute nor output send
1,664,887,245,000
I've got a python script that takes data from some XML files and displays them on a website. The script overwrites the index.html file each time it's run and replaces some text in a HTML template with data stored in some variables. The script works fine. I put the command in crontab to run every hour, and it works. But it saves the new updated index.html file to the home directory but I want it saved to /var/www/html/. The python script is running from the same location. Is there a way I can have crontab run the script and save the index.html file the script generates to the correct location? (/var/www/html) This is the command I've put in crontab: 0 * * * * python /var/www/html/boinc.py Thanks
The location of the script is irrelevant to its execution. When cron runs a job it sets the current directory to the home directory of the user. You need to set the working directory yourself, or else modify your script to write to the correct place.
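The effect is easy to see outside cron as well: relative paths follow the working directory, illustrated here with a scratch directory:

```shell
# Relative output paths resolve against the current working directory,
# which for a cron job defaults to the user's home
workdir=$(mktemp -d)
cd "$workdir"
echo '<html></html>' > index.html   # relative name, lands in $workdir
ls "$workdir"                       # prints: index.html
```

For the cron job itself, one option (paths taken from the question) is: 0 * * * * cd /var/www/html && python /var/www/html/boinc.py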
crontab -e is running my python script but its saving the results of the script in my home directory
1,664,887,245,000
This rtcwake job launched manually works fine : $ sudo rtcwake -m no -u -t $(date +%s -d 'tomorrow 07:30') but doesn't seem to run as a root cron job : $ sudo crontab -l 32 7 * * * rtcwake -m no -u -t $(date +%s -d 'tomorrow 07:30') If I log the command with $ sudo crontab -l 32 7 * * * rtcwake -m no -u -t $(date +%s -d 'tomorrow 07:30') &> /home/me/path/to/dir/crond.log I get an empty log file. And the owner of crond.log is me, not root. (EDIT 1: this not the case, my mistake with several tests). What exactly is the issue here? EDIT 2 : thanks to @αғsнιη answer, the correct cronjob is the following (created with sudo crontab -e): $ sudo crontab -l 32 7 * * * /usr/sbin/rtcwake -m no -u -t $(date +\%s -d 'tomorrow 07:30') &> /home/me/path/to/dir/crond.log
Two things about your cron job: % should be escaped as \%, because that's a special character for crontab and means a newline. Always write commands with their full/absolute path, since cron doesn't read your shell's PATH variable.
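The escaping rule is easy to confirm from an interactive shell, where the % needs no backslash (GNU date assumed for -d):

```shell
# Outside cron, % is an ordinary character; inside a crontab line
# the same format specifier must be written \%
date -d 'tomorrow 07:30' +%s   # prints an epoch timestamp
```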
rtcwake not executed as root cron job
1,664,887,245,000
The file: /etc/cron.daily/logrotate has the following statement: /usr/sbin/logrotate /etc/logrotate.conf The logrotate.conf file has the weekly directive defined inside. Now I have two rotational configuration files abc and def inside /etc/logrotate.d. abc has the daily directive inside it which to my knowledge will override the weekly directive in /etc/logrotate.conf. And def has no such directive so it will inherit the weekly directive from /etc/logrotate.conf. Now my doubt is that /etc/cron.daily/logrotate will run everyday and will check inside the /etc/logrotate.conf file which in turn has the include /etc/logrotate.d directive inside. So in conclusion /etc/cron.daily/logrotate will go through both abc and def. My question is what will be the actual frequency of logrotation for both abc and def? Will both of them be rotated daily or one daily and one weekly?
As I tried to explain in What should be the preferred approach while rotating logs - using the daily directive or putting the file path in cron.daily?, /etc/cron.daily/logrotate is irrelevant as far as rotation frequencies are concerned, unless you’re trying to rotate more often than daily. To determine what a given configuration is going to do, you can ask logrotate itself: /usr/sbin/logrotate -d /etc/logrotate.conf won’t actually rotate anything, but it will tell you everything it considers and everything it would do. In particular, it will tell you what the rotation frequency for each pattern and file is. Any daily, weekly, monthly etc. directive inside braces only applies to that pattern. So if abc has /path/to/file.log { daily ... } it will cause the matching file(s) to be rotated daily, regardless of the global directive in /etc/logrotate.conf (if any). It won’t affect any other file. If def doesn’t have a frequency directive, it will inherit the global setting.
Logrotate scenarios - Mix and match cron.daily and daily directive
1,664,887,245,000
I want to run a script on the 1st of every month. If the computer was powered off, I'd like to execute it the next time it turns on. Anacron fits in regard to the "powered off" use case but it only provides daily, weekly, monthly intervals. Monthly is too late and weekly way too early. I checked fcron but that package clashes with Timeshift so that's not an option. I was thinking if cron can run the task one time, at any time between the 1st & the 4th of each month, that would be ok too. I had a look at the cron syntax and think that's actually not possible. Does anybody know how to solve this? I'm on Arch Linux (Manjaro).
Something like this (untested)

#!/bin/bash
# run this via crontab on days 1-4 and @reboot
#
# Store the run_month here, or somewhere writable on disk, not /tmp
runfile="$HOME/run_month"
# make sure $runfile exists, initialize to a non-month if 1st run ever
[[ ! -f "$runfile" ]] && echo "init" >"$runfile"
#
# get the last month we ran
rf="$(cat "$runfile")"
# get the current month
cm="$(date "+%b")"
# if $rf is the same as $cm, quit
if [[ "$cm" = "$rf" ]] ; then
    exit
fi
# Remember we ran this month
echo "$cm" >"$runfile"
#
# Left as an exercise for the student
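For completeness, a hypothetical pair of crontab entries driving the script (script path and time of day invented for illustration):

```
# run during the 1st-4th window, plus a catch-up pass after each boot
0 9 1-4 * * /home/me/bin/monthly-task.sh
@reboot     /home/me/bin/monthly-task.sh
```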
Schedule a cron job on a computer that's not powered on continuously?
1,664,887,245,000
Suppose that we want to run a task every 20 minutes: 0/20 * * * * It would run at X:00, X:20, X:40 and so on. Every 15? 0/15 * * * * So it would run at X:00, X:15, X:30 and X:45. But what happens if we wanted to run every 45 minutes? 0/45 * * * * I am inclined to think that it would run at X:00, then at X:45, then at X+1:00 (which is not what I need, by the way)? Or it would run at X+1:30 (exactly what I need)? Cronjob schedule explains that it would always separate runs by 45 minutes but the question was related to running at X minutes every hour so their correct answer doesn't really apply to my case and I want to be sure of the answer.
The value after the slash is the step value. (See the man page.) So 0/45 in the minutes field means it'll run at 1:00, 1:45, 2:00, 2:45, 3:00, etc. It doesn't mean every 45 minutes. /15, /20, /30 all work as expected because 60 divides evenly by those values. If you want to have it run every 45 minutes, you'll likely have to create multiple lines with the various hours and minutes.
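For example, an every-45-minutes schedule can be spelled out in four lines, since the pattern repeats every 3 hours (the LCM of 45 and 60 is 180 minutes); the command path is a placeholder:

```
# fires at 0:00, 0:45, 1:30, 2:15, 3:00, ... -- 45 minutes apart
0  0-23/3 * * * /path/to/job
45 0-23/3 * * * /path/to/job
30 1-23/3 * * * /path/to/job
15 2-23/3 * * * /path/to/job
```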
how does cron react to ranges where using a non-even separator?
1,664,887,245,000
I have a question. I have a project implemented on the brownie framework, the launch is only from the project folder, the script is called by the command brownie run /script/create.py -- network rinkyb. I want to create a task that will call brownie run /script/create.py -- network rinkyb in the crontab of the clloe at a certain time interval. I can't create a task in crontab like * * * * * * brownie run /home/denis/project/scripts/create.py because I have an env file in the project folder. I also don't have a variant like SHELL=/home/.local/brownie HOME=/home/denis/project/ * * * * * * * brownie run /scripts/create.py --network rinkyb I get an error that the brownie command is not found
Did you try HOME=/home/denis/project/ * * * * * /home/.local/brownie run /scripts/create.py --network rinkyb ? As it stands, you're trying to run brownie using brownie itself. Furthermore, brownie isn't a shell and cron won't work with arbitrary executables in the SHELL variable.
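A sketch of a working entry, assuming brownie was installed under ~/.local/bin (the path is a guess; verify the real one with command -v brownie from a login shell):

```
* * * * * cd /home/denis/project && /home/denis/.local/bin/brownie run scripts/create.py --network rinkyb
```

The cd puts the job in the project folder so the env file is found, and the absolute path avoids relying on cron's minimal PATH.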
How to create a crontab file for a program?
1,664,887,245,000
I set JAVA_HOME in .zshrc: export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/ which is fine for interactive programs. But I have JVM programs running via cron, which uses Bourne shell. The bourne shell programs keep giving me this: groovy: JAVA_HOME is not defined correctly, can not execute: /usr/lib/jvm/default-java/bin/java What's the neatest way to solve this? I don't remember having to worry about this before. Currently I'm setting JAVA_HOME on every crontab entry which is burdensome and redundant.
Assuming you are referring to your own user's crontab, to avoid duplicating the definition of JAVA_HOME you can export the variable in ~/.zshenv (instead of ~/.zshrc), which is read even in non-interactive, non-login shells, and run zsh -c 'sh /path/to/script' in your cron job (replacing sh, based on what the program called "Bourne shell" in your question actually is, if appropriate). Alternatively, if you are fine with defining JAVA_HOME in multiple places and if your sh implementation supports this1, you may export it in ~/.profile and invoke sh as a login shell by either appending -l to the script's shebang or changing the cron job's command into sh -l /path/to/script. Though, in the end, the most convenient solution is probably to simply add JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/ as a line at the top of your crontab (unless you have distinct cron jobs that need distinct values of JAVA_HOME, of course). 1 Your sh, which is unlikely to be a "true" Bourne shell, may have a -l option if it is actually a link to (for instance) bash or dash. As Stéphane Chazelas pointed out in a comment, 1) it does not have it if it is the Bourne shell or an implementation of POSIX sh (e.g., sh has no -l option on {Free,Net,Open}BSD); and 2) not all the implementations that support -l will read ~/.profile when given that option.
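For the last option, the top of the crontab would look something like this (the groovy path and the schedule are illustrative, not taken from the question):

```
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/
0 3 * * * /usr/bin/groovy /path/to/job.groovy
```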
Sharing environment variables between zsh and bourne shell (for crontab)