I'm running unattended upgrade at random times on a computer. I have a cron to reboot at a set time daily. If my reboot cron runs in the middle of an update, will it wait to reboot or will it force a reboot in the middle of an install?
|
When you run reboot your init system kindly asks running processes to shut down by sending a SIGTERM signal. If they do not shut down within a reasonable amount of time (if you're on a machine using systemd this time defaults to 90 s) the init system will send a SIGKILL signal.
We certainly don't want to kill a busy unattended-upgrades process, as this might result in half-installed packages. Knowing that a full run (e.g. installing many updates published on the same day) might take more than 90 s to complete, the unattended-upgrades developers bumped up the timeout. On my Ubuntu 20.04 machine I get:
$ grep TimeoutStopSec /usr/lib/systemd/system/unattended-upgrades.service
TimeoutStopSec=1800
30 minutes should be sufficient even on older machines. If you don't want to wait that long or if you're still concerned your unattended-upgrades run gets interrupted, consider enabling the following parameter in /etc/apt/apt.conf.d/50unattended-upgrades:
// Split the upgrade into the smallest possible chunks so that
// they can be interrupted with SIGTERM. This makes the upgrade
// a bit slower but it has the benefit that shutdown while a upgrade
// is running is possible (with a small delay)
//Unattended-Upgrade::MinimalSteps "true";
| Will reboot command wait for unattended upgrade to complete? |
I'm on Kali Linux with VERSION_ID="2019.3"
uname -a
Linux kali 4.19.0-kali5-amd64 #1 SMP Debian 4.19.37-6kali1 (2019-07-22) x86_64 GNU/LINUX
Trying to execute adjust_timezone.sh placed in /usr/local/startup_scripts/
#!/bin/sh
echo "Adjusting timezone...";
ntpdate in.pool.ntp.org;
The output of which ntpdate
/usr/sbin/ntpdate
I tried using the full path in the script too, no success.
The content of /etc/crontab
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
@reboot /usr/local/startup_scripts/adjust_timezone.sh
Added the same using crontab -e too
@reboot /usr/local/startup_scripts/adjust_timezone.sh
I tried using @reboot : /usr/local/startup_scripts/adjust_timezone.sh too with no success.
I modified the script, adding 2>&1 >> log.txt, but the log is empty; I think the script is never executing.
Where am I going wrong? Any advice?
EDIT
As suggested, the redirection order was wrong. I changed it to >> /log.txt 2>&1 and here's the result:
/usr/local/startup_scripts/adjust_timezone.sh: 3: ntpdate: not found
Error resolving in.pool.ntp.org: Name or service not known (-2)
20 Aug 15:14:37 ntpdate[612]: Can't find host in.pool.ntp.org: Name or service not known (-2)
20 Aug 15:14:37 ntpdate[612]: no servers can be used, exiting
|
Your ntpdate is running before the network is up and functional.
A better solution might be to use the systemd time synchronisation module rather than creating your own. Or install ntpd and let it manage your system's time.
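If you want to keep your own script, one common workaround is to wait for name resolution before calling ntpdate. The sketch below is only an illustration (the hostname and retry count are taken from this question; the helper name is mine):

```shell
#!/bin/sh
# wait_for_host: poll until a hostname resolves (via getent),
# trying up to $2 times, 5 seconds apart. Returns 0 on success.
wait_for_host() {
    host=$1
    tries=${2:-12}
    i=0
    while [ "$i" -lt "$tries" ]; do
        if getent hosts "$host" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 5
    done
    return 1
}

# In adjust_timezone.sh this would become:
#   wait_for_host in.pool.ntp.org 12 && /usr/sbin/ntpdate in.pool.ntp.org
```

That said, enabling systemd-timesyncd (timedatectl set-ntp true) or installing ntpd, as suggested above, avoids the race entirely.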
| Kali linux - Crontab @reboot is not executing |
I have this directory structure at /var/sync:
# tree /var/sync/
/var/sync/
├── sync_bi
│   ├── replicator
│   │   └── Replicator.php
│   └── sync.php
├── sync_pfizer
│   ├── replicator
│   │   └── Replicator.php
│   └── sync.php
├── replicator.sh
└── sync.sh
As you can see, in each sync_* directory there is a script called sync.php, and in each replicator directory there is a script called Replicator.php. I need to run those scripts each night: Replicator.php should run at 00:00 and sync.php at 00:30. The right tool here is obviously a cron job. I have made two scripts, replicator.sh and sync.sh, sharing almost the same code; the main difference is the script name. See the code below:
replicator.sh
#! /bin/bash
cd $1
/usr/bin/php Replicator.php
sync.sh
#! /bin/bash
cd $1
/usr/bin/php sync.php
Then I have added each to cronjob as follow:
0 0 * * * /var/sync/replicator.sh /var/sync/sync_bi/replicator/
30 0 * * * /var/sync/sync.sh /var/sync/sync_bi/
0 2 * * * /var/sync/replicator.sh /var/sync/sync_pfizer/replicator/
30 2 * * * /var/sync/sync.sh /var/sync/sync_pfizer/
My idea behind all this explanation is to optimize my environment by replacing the scripts with just one that performs all the actions in a single cron job call, leaving fewer files to maintain. So I did a little research and found some helpful topics:
Using find some/dir -type f -execdir somecommand {} \; is a proper way to achieve the same thing as I am doing with cd
And another approach moves into each directory under the root path (in my case /var/sync):
for D in ./*; do
if [ -d "$D" ]; then
cd "$D"
run_something
cd ..
fi
done
This is a pseudo code of what I am talking about (in case is not clear enough):
read tree structure at /var/sync
foreach (directory in root+path)
if (directory_name is replicator)
cd replicator/
/usr/bin/php replicator.php
else if (directory_name is sync_*)
cd sync_*/
/usr/bin/php sync.php
endforeach
I am not much of a bash expert, so I am asking for help translating that pseudo code into something functional that achieves what I am looking for. Any advice and/or help?
UPDATE
Perhaps I am wrong, but I am not entirely following what you are trying to do, so here is the order of events:
# Run Replicator at mid day and mid night
# this two can run at same time since they are pushing data to different DBs
/var/sync/sync_bi/replicator/Replicator.php
/var/sync/sync_pfizer/replicator/Replicator.php
# Run sync at mid day and mid night
# this two should run one first and the second after 30 min since they are
# pushing data to the same DBs and second depends on first one
/var/sync/sync_bi/sync.php
/var/sync/sync_pfizer/sync.php
With that info (and sorry for not clarifying this at first), how do your solutions work? Are they fine as they are, or does anything need to change to cover this?
|
The simplest approach would be two find commands:
find /var/sync/ -name 'sync.php' -execdir php {} \;
find /var/sync/ -name 'Replicator.php' -execdir php {} \;
That will look for files called either sync.php or Replicator.php, then cd into their parent directory and execute them with php. You could add the commands directly to your crontab:
30 0 * * * find /var/sync/ -name 'sync.php' -execdir php {} \;
0 0 * * * find /var/sync/ -name 'Replicator.php' -execdir php {} \;
If you need the Replicator scripts to be run with a 30 minute pause between them, you could do something like:
find /var/sync/ -name 'Replicator.php' -execdir php {} \; -exec sh -c "sleep 30m" \;
That will first run the script, then wait for 30 minutes before moving to the next one.
Finally, if you need to make sure that i) you never have >1 sync.php running at the same time and ii) each sync script is run after the corresponding replicator script has finished, you can use this as the command you give to cron:
find /var/sync/ -name 'Replicator.php' -execdir php {} \; ; find /var/sync/ -name 'sync.php' -execdir php {} \;
The above will run each of the Replicator.php scripts first (one after the other, not in parallel) and, once they've finished, will run each of the sync.php scripts.
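For completeness, the asker's pseudo-code can also be translated into a single shell function. This is only a sketch: the PHP_BIN override is an addition for testability, not part of the original setup, and the layout assumed is the one from the question.

```shell
#!/bin/bash
# run_all MODE [ROOT]
#   MODE is "replicator" or "sync"; ROOT defaults to /var/sync.
#   PHP_BIN can override the php binary (defaults to /usr/bin/php).
run_all() {
    local mode=$1 root=${2:-/var/sync} php=${PHP_BIN:-/usr/bin/php} d
    for d in "$root"/sync_*/; do
        [ -d "$d" ] || continue
        case $mode in
            replicator) ( cd "$d/replicator" && "$php" Replicator.php ) ;;
            sync)       ( cd "$d" && "$php" sync.php ) ;;
        esac
    done
}
```

Wrapped in a small script, the crontab then needs only two lines, e.g. 0 0 * * * /var/sync/run_all.sh replicator and 30 0 * * * /var/sync/run_all.sh sync.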
| Read each directory and perform actions in Bash script |
I know the standard way to disable a task in cron is to comment the line with the task using the # sign in front of it, as with most Unix config files or shell scripts. e.g.:
53 23 * * * /home/dolan/y-u-du-dis.sh 2>&1
That's fine for just one task, but it's really annoying having to comment 100 lines or so... so the question is: Is there a way to comment more than one entry at a time in cron? Something like multi-line comments, or a shortcut to comment everything in the crontab...
I found this question and answer in ServerFault, which basically says that you can't.
|
The answer from SF is accurate as far as it goes, though if all the lines you wish to comment are in one block there is a way "around" this problem. It's not standard practice, and the end result is individual comment markers on every line. My editor of choice for crontab files is vi, so other editors may or may not have similar functionality, but if you wish to comment lines 5 through 80, you could issue the following command sequence in vi:
:5,80s/^/# /
Which has the effect of putting a '# ' at the beginning of lines 5 through 80. Hackish? Absolutely. Effective? Under the constraints I've given, yes.
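If you'd rather not open an editor at all, the same transformation can be done non-interactively with sed. The pattern below prefixes '#' to every line that isn't already a comment; here it is demonstrated on two sample lines:

```shell
# sed adds '#' to each line whose first character isn't already '#':
printf '%s\n' '53 23 * * * /home/dolan/y-u-du-dis.sh' '# already commented' |
    sed 's/^[^#]/#&/'
```

Applied to the live crontab (keep a backup first, e.g. crontab -l > crontab.bak), that becomes: crontab -l | sed 's/^[^#]/#&/' | crontab -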
| Is there a way to comment more than one entry at a time in cron? |
I have a situation where I may need to execute an SAP command line tool hdbsql when memory gets too low (to help clean out the HANA table cache).
How I can best approach this? I did have an idea to extract the free memory value from the top command (or get free/max*100 to get a %, which is better) in a shell script scheduled using crontab, but I cannot find anything on a possible approach on this anywhere, so I cannot even start anything.
|
You could use awk to calculate the percentage and the test utility to determine if the value exceeds 90%, for example. The cron job could look like this:
/usr/bin/test 90 -le $(/usr/bin/awk '$1=="MemTotal:"{t=$2} $1=="MemFree:"{f=$2} END{printf "%d", (t-f)/(t/100)}' /proc/meminfo) && command-to-cleanup
The awk part extracts the needed values from /proc/meminfo and then calculates the percentage of used memory. The test utility checks if 90 is less than or equal (-le) to the calculated value. The part after the && would then be your tool that cleans up the memory (.. && command-to-cleanup). That cron job can run every minute, for example:
* * * * * root /usr/bin/test 90 -le $(/usr/bin/awk '$1=="MemTotal:"{t=$2} $1=="MemFree:"{f=$2} END{printf "%d", (t-f)/(t/100)}' /proc/meminfo) && command-to-cleanup
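To see what the awk pipeline computes, you can run it on its own against a fabricated /proc/meminfo (values in kB). With MemTotal 1000 and MemFree 250 it reports 75, i.e. 75% used:

```shell
# Fabricated meminfo with known values:
printf 'MemTotal: 1000 kB\nMemFree: 250 kB\n' > meminfo.test
awk '$1=="MemTotal:"{t=$2} $1=="MemFree:"{f=$2} END{printf "%d", (t-f)/(t/100)}' meminfo.test
# prints: 75
```

Note that MemFree ignores reclaimable page cache; depending on your kernel, basing the decision on MemAvailable may match "memory pressure" more closely.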
| CRON Job To Execute Command On Low Memory |
I have several scripts set up in my crontab, and all of them must run every 5 minutes.
The problem comes when those scripts need too much CPU and IO at the same time and the machine becomes unavailable.
To mitigate this, I'd like to know if there's a way to put 10 seconds between each script's start. That should smooth out the dramatic load spike (above 40 on a single-core machine) we see when all those scripts have too much data to process.
Is it possible to schedule scripts every 5 minutes in crontab without all of them starting at the beginning of the fifth minute?
|
Create one cron entry that is:
*/5 * * * * processA ; sleep 10 ; processB ; sleep 10 ; processC
However, I recommend against this.
I wouldn't use cron at all. Cron is not that smart. If you tell it to run a job every 5 minutes, and the job takes 6 minutes to run, you will get 2 processes running. By the end of the day you'll have dozens or hundreds of these processes running at the same time.
A safer way is to not use cron. Instead, run a script like this. Use systemd or /etc/init.d scripts to turn it into a "service" that is always running:
#!/bin/sh
while true ; do
processA
sleep 10
processB
sleep 10
processC
sleep 600
done
A longer explanation can be found here:
How not to use Cron
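If you do stay with cron, the overlap problem described above can also be contained with util-linux's flock(1), which takes an exclusive lock on a file and, with -n, skips the command instead of queueing while a previous run still holds the lock. (This is an alternative not used in the answer above; the lock path is arbitrary.)

```shell
# -n = non-blocking: exit immediately instead of waiting
# if another process already holds /tmp/myjob.lock.
flock -n /tmp/myjob.lock -c 'echo "got the lock"'
```

In the crontab that would look like: */5 * * * * flock -n /tmp/processA.lock /path/to/processA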
| Cron several script every 5 minutes with 10 second between each script |
I have a crontab that many users update. The problem is that, because of this, it is not easy to know who made which modification to a cron job.
I was thinking of creating a script that diffs the crontab against a previously saved version, to at least be able to see what has changed, but perhaps there is a standard solution for this. What is the best approach to managing my crontab?
|
Subversion
I would put the contents of the crontab under subversion control and only grant access to this user through sudo. Specifically, I would only allow people access via sudo to a command that takes the head from subversion and installs it as the latest crontab for this particular user. This will provide you with the following:
An audit trail of who did what
The ability to roll back to a previous crontab file if a problem arises
Insulate the operators from having too much permission for this special account
It might seem overly complicated but there is nothing too complicated with what I described if you break it up into small chunks.
MultiCron
Another approach would be to use a tool/script such as MultiCron. This tool would allow you to manage the crontab data external to the crontab entry so that you could better control who/when has access to these changes.
Example Using Subversion
Assuming you'd setup a SVN repository you could create a sudo entry which would allow users to do something like this:
$ sudo deploy_app_cron.bash
The innards of this script could do among other things this:
svn cat file:///home/saml/svnrepo/app_cron_data.txt | crontab -u saml -
The contents of the file app_cron_data.txt:
$ svn cat file:///home/saml/svnrepo/app_cron_data.txt
*/5 * * * * /path/to/job -with args
Example Usage Loop
So userA wants to update the crontab entry. They would do the following to start:
$ cd $HOME/somedir
$ svn co file:///home/saml/svnrepo/ mywksp
A mywksp/app_cron_data.txt
$ cd mywksp
Now they do some edits to the crontab file, app_cron_data.txt, and commit them to the repo when they're done.
$ svn commit -m "some msg.." app_cron_data.txt
To deploy these changes they'd run this sudo command:
$ sudo deploy_app_cron.bash
References
Tracking, auditing and managing your server configuration with Subversion in 10 minutes
| How can I manage my crontab effectively to avoid issue by multiple updates by multiple users? |
How do I make cron daemon check cron entries from more than one files.
In my project I need to frequently update cron file, so instead of manipulating the existing file I was planning to write my project cron entries in a dedicated cron file.
|
Users have 3 options:
Access their own crontab entry using the command crontab -e.
If they have sudo privileges on the system they can add crontab files to the /etc/cron.d directory.
Add scripts to one of these directories:
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly
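For the second option, a file in /etc/cron.d looks like a system crontab line, i.e. it carries an extra user field that per-user crontabs don't have. A sketch (file name, user, and command are placeholders):

```
# /etc/cron.d/myproject
SHELL=/bin/sh
PATH=/usr/sbin:/usr/bin:/sbin:/bin
# m h dom mon dow user  command
*/10 * * * *  someuser  /usr/local/bin/myproject-task
```

This fits the project use case well: the dedicated file can be regenerated or replaced without touching any user's crontab.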
| cron from multiple files |
I currently use an application called MxEasy on my Linux servers to display video from a couple of IP cameras. The software is rather buggy and crashes occasionally. I've written a script that checks whether the application is running and, if it's not, launches it.
I've tried adding this line to my crontab to have it run the script. It's running the script but not launching MxEasy. Anything I'm overlooking?
0,15,30,45,50 * * * * root export DISPLAY=:0 && /etc/cron.hourly/MxEasyCheck.sh
BTW Ubuntu Server 12.04 is the OS
Here is MxEasyCheck.sh
MXEASY=$(ps -A | grep -w MxEasy)
if ! [ -n "$MXEASY" ] ; then
/home/emuser/bin/MxEasy/startMxEasy.sh &
exit
fi
Here is my crontab
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
0 * * * * root /etc/cron.hourly/rsynccheck.sh
0,15,30,45,50 * * * * root export DISPLAY=:0 && /etc/cron.hourly/MxEasyCheck.sh
#
|
Rather than check every few minutes, write a loop that relaunches the program when it terminates abnormally. But don't roll your own, there are plenty of existing programs to do that. See Ensure a process is always running
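The relaunch loop can be sketched as a small POSIX shell function. This is only an illustration: the restart limit parameter is an addition for testability (0, the default, means restart forever), and the launcher path is the one from the question.

```shell
#!/bin/sh
# keep_running CMD [LIMIT]
#   Relaunch CMD each time it exits; stop after LIMIT launches
#   (LIMIT=0 or omitted means restart forever).
keep_running() {
    cmd=$1
    limit=${2:-0}
    n=0
    while :; do
        $cmd
        n=$((n + 1))
        if [ "$limit" -gt 0 ] && [ "$n" -ge "$limit" ]; then
            break
        fi
        sleep 5   # avoid a tight restart loop if the program dies instantly
    done
}

# For the question's setup (run under systemd or an init script):
#   keep_running /home/emuser/bin/MxEasy/startMxEasy.sh
```

Run under systemd (Restart=always in a unit file does the same job natively) or an /etc/init.d script, this replaces the cron-based polling entirely.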
| Watchdog script to keep an application running |
I need some help identifying the top of the hour in a shell script. I have a cron job that runs a shell script every 10 minutes, but I don't want the script to run at the top of the hour, so I am trying to skip that execution from within the script by checking for the top of the hour.
|
In general, it's better to leave scheduling decisions to cron or other processes outside of the thing being scheduled.
Use a cron schedule that runs your script or code every 10 minutes, in such a way that it avoids running on the hour:
10,20,30,40,50 * * * * my-command-here
This is much more convenient than trying to make your script detect when it's being run. It also would not affect manual use of your script.
Depending on your cron implementation, you could possibly use
10-50/10 * * * * my-command-here
which would run the job every ten minutes from 10 past the hour until 50 past the hour (i.e., it would skip the full hour).
Or even just
10/10 * * * * my-command-here
i.e., every ten minutes from 10 past the hour to 59 past the hour.
You would need to test whether the syntax in these last two examples is valid on your system, and I would suggest that you read your crontab(5) manual (man 5 crontab).
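If you really do need the check inside the script (contrary to the advice above), a guard at the top is enough. This sketch simply exits when the current minute is 00:

```shell
#!/bin/sh
# Skip the run at the top of the hour.
if [ "$(date +%M)" = "00" ]; then
    exit 0
fi
echo "not the top of the hour; continuing"
```

The trade-off remains as described: the guard also fires on manual runs that happen to start at hh:00.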
| how to identify top of the hour in a shell script |
In my crontab, I set the following bash function and applied it to my job. It is intended to add a timestamp to the log.
adddate() {
while IFS= read -r line; do
printf '%s %s\n' "$(date)" "$line";
done
}
30 06 * * * root $binPath/zsh/test.zsh | adddate 1>>$logPath/log.csv 2>>$errorLogPath/error.txt
But when I look at error.txt, the bash function didn't work:
/bin/bash: adddate: command not found
What is the root cause of this?
If someone has an opinion, please let me know.
Thanks
|
Cron doesn't know about shell functions defined in the crontab; create a script like
#!/bin/bash
adddate() {
while IFS= read -r line; do
printf '%s %s\n' "$(date)" "$line";
done
}
$binPath/zsh/test.zsh | adddate 1>>$logPath/log.csv 2>>$errorLogPath/error.txt
and put that in cron.
(I'm assuming here that you used $binPath and $logPath for the purposes of this question. If that isn't the case, you have to set them in the script.)
Setting SHELL=/bin/bash in your crontab might be a way to use shell functions (I didn't try it, and it would surprise me if it worked). But even if it works, I would certainly not advise it.
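Another workaround, if you'd rather avoid a separate file, is to define the function inline with bash -c, all on one (long) crontab line. Here it is demonstrated with echo standing in for the real job:

```shell
# The whole pipeline, function definition included, runs inside one bash -c:
bash -c 'adddate() { while IFS= read -r line; do printf "%s %s\n" "$(date)" "$line"; done; }; echo "hello" | adddate'
```

In the crontab that would look like 30 06 * * * root bash -c '...; $binPath/zsh/test.zsh | adddate 1>>$logPath/log.csv 2>>$errorLogPath/error.txt', but the separate-script approach above is much easier to read and maintain.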
| bash function command not found in cronjob |
I don't have MTA installed on my desktop.
Whenever there is a problem with a cronjob script, I see this in my logs:
CRON: (CRON) info (No MTA installed, discarding output)
A script that was supposed to be run by cron generated an error, and cron wanted to send me the error per email.
But I would like to see the error in my log instead, ie being logged normally to syslog, same as the above info message.
Is it possible to tell cron to forget about the MTA and log everything, including errors, to the local syslog?
UPDATE
the solution from @roaima works well for my original problem as stated.
But I have realized I need a more sophisticated syntax for my cron job, where stdout from command1 is piped to command2 and stderr (from both?) is piped to command3.
Here is a concrete example (simplified):
0 * * * * mysqldump mydb | ifne xz > "/tmp/$(date +\%F).sql.xz" | logger -t mysqldump -p cron.err
In the above example, I need to send stdout from mysqldump to ifne xz, and only if either mysqldump or ifne xz generates an error do I need to pipe it to logger.
This syntax needs to work in dash (/bin/sh).
|
You can use the logger subsystem. There are two variants, depending on whether you have systemd installed or not.
With systemd - using systemd-cat
echo This is a test with systemd-cat | systemd-cat -t mytest -p info
This writes a message to the journal logger (see journalctl) and also through to the legacy system logging subsystem where you'll find the output in /var/log/syslog, etc.
journalctl -t mytest
-- Logs begin at Thu 2020-05-21 07:41:00 UTC, end at Mon 2020-06-01 13:34:56 UTC. --
Jun 01 13:34:56 pi mytest[24236]: This is a test with systemd-cat
With syslog - using logger
echo This is a test with logger | logger -t mytest -p local0.info
This writes a message through the syslog subsystem (see rsyslog.conf or similar) where you'll find the output in /var/log/syslog, etc.
tail /var/log/syslog
[...]
Jun 1 13:34:56 pi mytest[24236]: This is a test with systemd-cat
Jun 1 13:38:28 pi mytest: This is a test with logger
To use one of these logging options just append to your cron job, adjusting the tag name (myscript) and priority (info) in the example
0 * * * * /path/to/script 2>&1 | systemd-cat -t myscript -p info
Now that you have provided a concrete example, where you want stdout to get written to a target data file but you want to log stderr, you can use this
0 * * * * ( mysqldump mydb | ifne xz > "/tmp/$(date +\%F).sql.xz" ) 2>&1 | logger -t mysqldump -p cron.err
| cron: send errors to syslog, instead of MTA |
I'm writing a bash script which, among other things, will edit crontab on another server. The way I figured out how to do this is with:
crontab -l | sed <stuff> | crontab -
It does what I need it to do, but I'm still not sure how. What exactly does "crontab -" do? When I run it by itself from the shell, it takes over the shell until I hit Ctrl-C, but doesn't seem to do anything. Is its only purpose to overwrite the cron contents with whatever's passed on stdin? I can't seem to find any documentation on it.
|
One syntax for crontab is
crontab <file>
Per Usage of dash (-) in place of a filename, the - tells crontab to read from stdin (in this case, the stdout coming from sed) in place of the <file> argument. In other words, crontab replaces the installed crontab with whatever arrives on stdin instead of reading it from a named file.
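The same convention appears in many other tools; cat -, for example, reads its "file" from stdin. This is also why crontab - appears to hang when run interactively: it is waiting for you to type a new crontab and finish with Ctrl-D (Ctrl-C aborts without installing anything).

```shell
# '-' as a filename argument means "read standard input":
printf 'hello from stdin\n' | cat -
# prints: hello from stdin
```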
| function of "crontab -" |
I'm running FreeBSD 11.0-RELEASE.
On default cron is using /usr/lib/sendmail to send user emails. How can I tell/set cron to use /bin/mail instead?
FreeBSD is using the cron version from Paul Vixie, so the -m option sets the email receiver not what mailer to use.
I downloaded the FreeBSD source code and tried running # make config in /usr/src/usr.sbin/cron/; that of course does not work, since a config target is not defined. But I think that's a bad idea anyway, because future updates could easily overwrite the change.
Thanks for your help!
|
cron by default uses the value of the system-wide _PATH_SENDMAIL macro as the expansion of MAILCMD, the command used to send messages generated by jobs. In order to use a different mail program, you need to modify the Makefile to define appropriate values for the MAILCMD and MAILARGS macros. The Makefile in the source tree includes commented definitions illustrating possible values, but there appears to be a slight bug in the MAILARGS macro that applies when MAILCMD is defined to be /bin/mail: it has two string expansions but only receives one string when called, so in the patch in the gist I removed the first of the expansions.
If you have the patch utility installed (it's in ports, if not), apply this patch (relative to /usr/src/usr.sbin/cron) and build/install cron:
% cd /usr/src/usr.sbin/cron
% make
% make install
Restart cron, and you should now be using /bin/mail. N.B. this patched version builds cleanly on my system (11-STABLE), but I have not tried using it in place of the default version. Remember that you'll probably need to do this again when you upgrade, since the FreeBSD default is to use /usr/lib/sendmail.
| Change cron default sendmail to mail |
I have a cron job that is running twice an hour. It runs once at HH:00 and once at HH:45. This is strange because I tried to specify that it should run every 45 minutes as follows:
*/45 * * * * python my_job.py
This works fine for other jobs I run every 5 minutes as well as jobs that run every 20 minutes. However, I wonder if the fact that an hour isn't evenly divisible by 45 minutes is causing strange behavior. Why would my cron job be running twice an hour with this setup?
|
Your job runs on every minute that is a multiple of 45, i.e. whenever minute % 45 == 0. So it will run at hh:00 and hh:45.
If it were an exact factor of 60, it would run at even-sized intervals.
To make it run at 45-minute intervals, I think you will need three rules, one for each hour (mod 3).
Although I haven't tried it, I believe the following will work:
0,45 0-23/3 * * * /usr/local/bin/myjob
30 1-23/3 * * * /usr/local/bin/myjob
15 2-23/3 * * * /usr/local/bin/myjob
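The three rules above can be sanity-checked mechanically: expand each rule into minutes-of-day and confirm that every gap between consecutive runs is 45 minutes. (This check is an addition, not part of the original answer.)

```shell
# Expand the three crontab rules into minutes-of-day, sorted:
runs=$(
    { for h in 0 3 6 9 12 15 18 21; do echo $((h * 60)); echo $((h * 60 + 45)); done   # 0,45 0-23/3
      for h in 1 4 7 10 13 16 19 22; do echo $((h * 60 + 30)); done                    # 30   1-23/3
      for h in 2 5 8 11 14 17 20 23; do echo $((h * 60 + 15)); done                    # 15   2-23/3
    } | sort -n
)

# Print the set of distinct gaps between consecutive runs:
prev=''
for t in $runs; do
    [ -n "$prev" ] && echo $((t - prev))
    prev=$t
done | sort -u
# prints: 45
```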
| Cron job runs more often than I thought it should |
I have a cron job that, among other things, does a recursive ls of a directory into a file. This gets compared to another file that I've created that, supposedly, contains an identical listing of the same directory. My problem is that, when I generate the version for comparison, I get the files listed in case-insensitive order. When the cron job runs, its list comes out in case-sensitive order.
How can I get both of these to come out the same way (I don't care which)? My call to ls is
/bin/ls -lR --time-style=long-iso *; as far as I can tell, LC_COLLATE is not set in either setting.
|
Sorting problems can be avoided by explicitly forcing applications to use a certain sort order. You can check the current locale by running locale in place of the program in question and comparing the output in the different calling contexts (interactive shell vs. cron).
The sort order can be forced by setting LC_COLLATE / LC_ALL within the command line:
LC_COLLATE=C ls ...
LC_ALL=C ls ...
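The effect is easy to demonstrate: the same four lines sort differently under the C locale (plain byte order) than under a dictionary-style locale such as en_US.UTF-8, if one is installed:

```shell
printf 'b\nA\nc\nB\n' | LC_ALL=C sort
# prints: A, B, b, c (all uppercase before lowercase, byte order)
printf 'b\nA\nc\nB\n' | LC_ALL=en_US.UTF-8 sort 2>/dev/null
# typically: A, b, B, c (case-insensitive dictionary order), locale permitting
```

So prefixing the ls in the cron job (or in the comparison run) with LC_ALL=C makes both listings use the same order.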
| ls gives me different sort orders during a cron job |
There is this thing in QuartzScheduler where we can say execute this in CST/BST/ or whatever timezone. This is also good when you want to execute some jobs in certain timezones and certain jobs in some other timezones.
Is there something like that in crontab where we can specify the timezone? I ask because I have seen people go crazy numerous times when there is a timezone change and they have to go through the list of jobs in crontab changing their timings.
I am looking for something where we can execute certain jobs in certain timezones and certain other jobs in some other timezones.
|
This depends on your distribution: some versions of Cron support this, others don't. For example, on Debian:
LIMITATIONS
The cron daemon runs with a defined timezone. It currently does not support per-user timezones. All the tasks: system's and user's will be run based on the configured timezone. Even if a user specifies the TZ environment variable in his crontab this will affect only the commands executed in the crontab, not the execution of the crontab tasks themselves.
Whereas on Fedora:
The CRON_TZ specifies the time zone specific for the cron table. User type into the chosen table times in the time of the specified time zone. The time into log is taken from local time zone, where is the daemon running.
So check the crontab man page (man 5 crontab) on your system. (Both passages above are from that man page on the respective systems.)
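On cronie-based systems (Fedora, RHEL), that looks like the sketch below; the zone names and job paths are placeholders, and whether multiple CRON_TZ lines per table are honored sequentially should be verified against your man page:

```
CRON_TZ=America/New_York
0 9 * * * /usr/local/bin/us-east-report

CRON_TZ=Europe/London
0 9 * * * /usr/local/bin/uk-report
```

On crons without this feature, the usual fallback is to keep one table per timezone on hosts configured in those zones, or to convert the times to the daemon's zone by hand.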
| Is there anything in unix crontab to determine this should be executed in PST/CST standard time? |
I want to keep a personal crontab in another folder and execute it from there.
For example, I want it in /home/project/tasks/crontab.
That way it's easier to add/delete tasks.
Thank you for your answers.
|
The cron daemon determines where your active crontab is stored. On my system (Ubuntu), and probably on yours, it's under /var/spool/cron/crontabs/.
But you can maintain your crontab entries anywhere you like. Just remember to run
crontab /home/project/tasks/crontab
every time you update it.
(I suppose you could set up a cron job to do that for you. Disclaimer: I haven't actually tried invoking crontab from a cron job; I'm not 100% sure it would work.)
I personally find that a lot easier to manage than using, say, crontab -e; I can maintain my crontab file under a source control system, so I don't lose anything if I accidentally do crontab -r, for example.
DIGRESSION:
I suppose you could set up a cron job to do that for you. Just as an experiment, I tried setting up a crontab with the following command:
* * * * * crontab .crontab
After manually running crontab .crontab once, changes in $HOME/.crontab were automatically applied after I saved the file, taking effect one to two minutes later.
But personally I'd much rather just run crontab FILENAME manually, so I don't have to worry about what will happen if I save an intermediate version of the file.
| Crontab Change location |
Given a crontab expression like "0 0/15 11-15 ? * MON-FRI", how is that parsed?
Am I correct in assuming that 11-15 does not mean "between 11 and 15" but "when the hour is 11-15, inclusive", i.e. the expression will trigger every 15 minutes starting at 11:00 and ending at 15:45? Or will it end at 14:45? Or maybe 15:00?
|
You have too many fields in your example.
The available fields in a cron job are:
`min hour mday month wday command+args`
The command in your example line would run on:
The zero minute
every 15 hours, starting at midnight (so midnight and 3pm)
on the 11th/12th/13th/14th/15th of the month,
invalid month field of ?
every day of the week
Run the command MON-FRI
Unless your specific version of cron allows ? as a non-greedy wildcard for the month field, in which case it might match single-digit month numbers, or January through September.
| crontab "0/15" minutes + "11-15" hour field: when does that end? |
I have many computers and I ssh between them.
I have a single cron file, which I manually type into each computer.
And when I change this cron file, I have to manually ssh into each computer to update its cron accordingly.
Is there any way to make each computer's cron a function of this file?
That way I could just have a single file, which gets synced between the computers and thus, updating the cron in each.
|
This answer is assuming that you are concerned with the crontab of a particular user, myuser.
A cron job re-reading the cron schedules from a file (crontab crontab.txt) would be fairly easy to set up, but you could do that from ssh directly with
ssh myuser@remote crontab - <crontab.txt
where crontab.txt is stored locally.
This could be put into a loop:
#!/bin/bash
remotes=( remote1 remote2 remote3 )
for remote in "${remotes[@]}"
do
ssh "myuser@$remote" crontab - <crontab.txt
done
The alternative way that I initially skipped over above, with a job that re-reads the crontab:
This would require a crontab with at least a job like this:
0 * * * * crontab "$HOME/crontab.txt"
This job re-reads the schedules from crontab.txt in your home directory on the hour every hour. That file should always contain at least this job. You may then add other jobs to the file (by syncing the file from elsewhere), which would be added to the user's active crontab on the hour.
| Multi computer cron |
I have a server that I am hosting at home using a residential ISP that issues public IP addresses via DHCP; that is to say, I don't always get the same IP back when my DHCP lease renews. I created a bash script to check whether my public IP has changed, and to e-mail me the new IP address if it has. The "current" IP is stored in a text file in my home directory that is globally readable and writeable. I'd like to run this hourly, but I can't seem to get cron to run any jobs for me. The script itself runs without any issues.
I've reviewed previous questions and I don't think there are issues with file permissions or output handling problems. The file/job just never shows up in /var/log/syslog.
Thoughts?
Versioning
System Details (uname -a): Linux jasonbourne 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux
cron details (apt info cron): Version: 3.0pl1-137
bash details (apt info bash): Version: 5.1-2+deb11u1
The Bash Script:
#!/bin/bash
#known (current) IP
current_ip=`cat /home/jasonbourne/.current_ip`
#check for actual IP
check_ip=`dig -4 +short myip.opendns.com @resolver1.opendns.com`
if [ "$check_ip" != "$current_ip" ]
then
message="`hostname`'s IP address changed from $current_ip to $check_ip on `date +%D` at `date +%T`"
email="[email protected]"
subject="`hostname` IP address change!"
echo "$message" | mail -r "$email" -s "$subject" "$email"
echo $check_ip > /home/jasonbourne/.current_ip
fi
The script and my current IP have appropriate permissions:
jasonbourne@debian:~$ ls -la /usr/bin/public-ip-monitor.sh
-rwxr-xr-x 1 root root 873 Dec 28 20:14 /usr/bin/public-ip-monitor.sh
jasonbourne@debian:~$ ls -la /home/jasonbourne/.current_ip
-rw-rw-rw- 1 jasonbourne jasonbourne 15 Dec 28 21:28 /home/jasonbourne/.current_ip
The crontab and cron.time Attempts:
The cron service is running just fine:
jasonbourne@debian:~$ sudo systemctl status cron
● cron.service - Regular background program processing daemon
Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-12-28 00:32:37 EST; 21h ago
Docs: man:cron(8)
Main PID: 14242 (cron)
Tasks: 1 (limit: 4583)
Memory: 1.0M
CPU: 3.949s
CGroup: /system.slice/cron.service
└─14242 /usr/sbin/cron -f
I've set my user crontab with crontab -e and I've tried 0 * * * * /usr/bin/public-ip-monitor.sh and @hourly /usr/bin/public-ip-monitor.sh. I've also placed the script in /etc/cron.hourly.
|
The reason that /etc/cron.hourly/public-ip-monitor.sh didn't work is because the hourly cron entries are initiated via run-parts in the /etc/crontab file:
01 * * * * root cd / && run-parts --report /etc/cron.hourly
run-parts has certain rules about what it runs:
run-parts runs all the executable files named within constraints described below, found in directory directory.
If neither the --lsbsysinit option nor the --regex option is given then the names must consist entirely of ASCII upper- and lower-case letters, ASCII digits, ASCII underscores, and ASCII minus-hyphens.
As a result, a file that's named with a .sh extension will not be run.
Hat tip to kenorb's AU answer for pointing me in this direction.
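The naming rule can be seen in action with a small helper that mirrors run-parts' default filename check (a sketch; Debian's run-parts also has a --test option that lists what it would actually run without running it):

```shell
# Sketch of run-parts' default naming rule: names may contain only
# ASCII letters, digits, underscores and hyphens (no dots, no spaces).
is_runparts_name() {
    case $1 in
        ('' | *[!A-Za-z0-9_-]*) return 1 ;;  # empty, or has a disallowed char
        (*) return 0 ;;
    esac
}

is_runparts_name public-ip-monitor    && echo "would run"
is_runparts_name public-ip-monitor.sh || echo "skipped (the '.' is disallowed)"
```

So the fix is simply renaming the file, e.g. mv /etc/cron.hourly/public-ip-monitor.sh /etc/cron.hourly/public-ip-monitor.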
| Debian 11 (bullseye): user cron jobs not running |
1,593,723,601,000 |
I created update.sh as following:
#!/bin/bash
sudo apt-get update
sudo apt-get upgrade
echo "Updated Successfully!"
and created the following crontab job to run this script every hour:
0 * * * * bash /home/ubuntu/update.sh >> /home/ubuntu/cron_test.txt
but only the log file entries of echo command are being created (as I tried running it every minute too) and no confirmation of whether my bash is updating or not. I tried this in both my WSL2 and VMWare Ubuntu terminals. In WSL, it doesn't even log the echo execution entries.
What could be the issue(s)?
|
You're facing a couple of issues, both rooted in the fact that WSL doesn't run Systemd by default (or easily). I go into detail a bit on this in an Ask Ubuntu answer and another on Super User, so I'm not going to deep into that again here.
But without Systemd:
unattended-upgrades, which was suggested in the comments and another answer, won't work.
You'll need to start cron some other way, since it's normally started by Systemd as well on Ubuntu. Given your comment:
it doesn't even log the echo execution entries
... it sounds like cron isn't running at all, which is to be expected under WSL.
But before I present any possible solutions, let me suggest that you probably shouldn't do this on a WSL instance. WSL instances aren't designed to be long-running, always-on environments. They are more like Docker containers running a distribution. As such, I'd always be worried about an interrupted upgrade happening in the background. The host OS, a wsl --shutdown (or --terminate) or any of a number of things could cause the WSL instance to stop while in the middle of a background/unattended upgrade.
On a physical/virtual machine, the unattended-upgrade-shutdown can inhibit shutdown until the unattended upgrade is complete. However, on WSL this isn't possible, which means that Apt data can be corrupted more easily.
This would also be the case with running apt upgrade in a cron process.
I would recommend just running your sudo apt update && sudo apt upgrade -y manually. The default MOTD will remind you when it's needed, to some degree.
As for running cron on WSL, that can be automated a few different ways. The recommended way is on Windows 11, where there is a boot.command config setting available.
If you are on Windows 10, you can add the following to your ~/.bashrc:
wsl.exe -u root service cron status > /dev/null || wsl.exe -u root service cron start > /dev/null
That uses the wsl.exe command to run as root without requiring a password.
Please see this Super User answer where I cover both techniques in more detail.
| How do I execute a simple "apt-get update and upgrade" script using a crontab job? |
1,593,723,601,000 |
I have a cron job that can fail periodically when resources are not available. Waiting awhile and trying again is the best way to handle such failures. What is the best way to do this? Have the failing script reschedule itself using at? Is there a better method? Perhaps something that already has such retry infrastructure in place.
|
Had a need to keep retrying until a service was available, and so built a dedicated tool to do just this.
https://github.com/minfrin/retry
~$ retry --until=success -- false
retry: 'false' returned 1, backing off for 10 seconds and trying again...
retry: 'false' returned 1, backing off for 10 seconds and trying again...
retry: 'false' returned 1, backing off for 10 seconds and trying again...
^C
Available out of the box in recent Debian, Ubuntu, and Nix.
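If pulling in an extra package isn't an option, the same idea can be sketched in plain POSIX shell (the attempt count and delay below are arbitrary placeholders):

```shell
# Retry a command up to $1 times, sleeping $2 seconds between attempts.
retry_cmd() {
    attempts=$1; delay=$2; shift 2
    i=1
    while ! "$@"; do
        if [ "$i" -ge "$attempts" ]; then
            echo "giving up after $i attempts" >&2
            return 1
        fi
        i=$((i + 1))
        sleep "$delay"
    done
}

# e.g. in a cron job:  retry_cmd 5 60 /home/user/myjob.sh
```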
| Retry cron job on failure |
1,593,723,601,000 |
Here is the job, intended to run every 15 minutes between 7AM and 7PM:
*/15 07-19 * * * /home/max/bashScripts/rsyncMe >/dev/null 2>&1
The job is running every 15 minutes every hour instead, i.e., it runs from midnight to 23:45.
The job itself completes in under 5 minutes each time it is started.
The OS is Debian-Buster. Cron is up to date.
What might be the cause of the problem?
|
The valid hours range is 0-23, so you should use
*/15 7-18 * * *
to run every 15 minutes from 07:00 (first run) to 18:45 (last run) every day.
The leading zero in the hour range (07) caused the hour field to be treated as *. I tested (cronie-1.5.1-lp151.4.6.1.x86_64 on openSUSE) and the behaviour matches your description for a range, like * 01-02 * * *, but it unexpectedly worked correctly for a single value: * 01 * * *. So I wouldn't dig into it further; we just don't use leading zeros there.
| cron job for hour=7-19 runs every hour instead |
1,593,723,601,000 |
The crontab(5) manual says:
Commands are executed by cron(8) when the minute, hour, and month of year fields match the current time, and when at least one of the two day fields (day of month, or day of week) match the current time (see ``Note'' below). cron(8) examines cron entries once every minute. The time and date fields are:
field allowed values
----- --------------
minute 0-59
hour 0-23
day of month 1-31
month 1-12 (or names, see below)
day of week 0-7 (0 or 7 is Sun, or use names)
It also says:
A field may be an asterisk (*), which always stands for ``first-last''.
And in examples there is:
@weekly Run once a week, "0 0 * * 0".
From the asterisk description, "0 0 * * 0" is the same as "0 0 1-31 1-12 0". My question is why doesn't every day match this expression? The documentation says
when the minute, hour, and month of year fields match the current time, and when at least one of the two day fields (day of month, or day of week) match the current time
So why aren't both 2019.12.25 00:00 and 2019.12.26 00:00 valid moments for this expression?
Both of them satisfy the "minute, hour, and month of year fields match the current time", as month is 1-12 and minute and hour are 0.
And also both of them will satisfy the "at least one of the two day fields (day of month, or day of week) match the current time" - as the day of the month is 1-31 and one satisfaction is enough.
Where am I wrong?
|
As drewbenn commented, you missed ‘see “Note” below’.
Note: The day of a command's execution can be specified in the following two fields — 'day of month', and 'day of week'. If both fields are restricted (i.e., do not contain the "*" character), the command will be run when either field matches the current time.
0 0 * * 0 has * for the day of month, so the either-field rule does not apply. If one of the day fields is * and the other is not *, the command runs only when the non-* field matches. This is an exception to the rule that * is equivalent to a numeric range.
The mistake in the manual is that the sentence
A field may contain an asterisk (*), which always stands for “first-last”.
fails to mention this exception.
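The either-field rule can be made concrete with a small sketch (simplified so each day field is either a single number or *; real cron also accepts ranges and lists):

```shell
# day_matches DOM_FIELD DOW_FIELD today_dom today_dow
# Implements crontab(5)'s rule: if both day fields are restricted,
# either one matching is enough; a '*' field defers to the other.
day_matches() {
    dom_f=$1 dow_f=$2 dom=$3 dow=$4
    if [ "$dom_f" = '*' ] && [ "$dow_f" = '*' ]; then
        return 0                       # both unrestricted: every day
    elif [ "$dom_f" = '*' ]; then
        [ "$dow_f" = "$dow" ]          # only day-of-week decides
    elif [ "$dow_f" = '*' ]; then
        [ "$dom_f" = "$dom" ]          # only day-of-month decides
    else
        [ "$dom_f" = "$dom" ] || [ "$dow_f" = "$dow" ]  # either matches
    fi
}

day_matches '*' 0 26 4 || echo "Thu the 26th: @weekly does not fire"
day_matches '*' 0 29 0 && echo "Sun the 29th: @weekly fires"
```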
| Why does @weekly work as it does? Mistake in manual? |
1,593,723,601,000 |
I have a script that scans a folder for all .mp3 files and creates an index.
Next it waits 5s otherwise it doesn't work correctly.
Lastly, it removes bad characters from the song names.
I have this running automatically with sudo crontab:
#!/bin/bash
#Creates index file.
find /var/www/html/uploads/Music/ -name '*.mp3' > /var/www/html/uploads/Music/songs.index | sleep 5s | sudo sed -i -e 's/\/var\/www\/html/\.\./g' /var/www/html/uploads/Music/songs.index
For some reason when it's run through crontab it creates a file that has to be converted to text and back to an index in order to read it. When you run the same script manually it works fine. What am I missing?
crontab file:
1 * * * * /home/aaeadmin/bin/midnightRun.bash
|
It is likely because you run the commands in a pipeline.
Each part of a pipeline is started at the same time and run concurrently with the other parts of the same pipeline. This means that the find command starts at exactly the same time as the sed command. It is only data passed through the pipeline, from the standard output of one command into the standard input of the next, that synchronises the different commands.
Also note that you don't actually use the pipeline aspect of the pipeline. There is no data being passed between the commands.
Your script would be better written as
#!/bin/bash
#Creates index file.
find /var/www/html/uploads/Music/ -name '*.mp3' > /var/www/html/uploads/Music/songs.index
sed -i -e 's/\/var\/www\/html/\.\./g' /var/www/html/uploads/Music/songs.index
Here, the find command would finish executing before the sed command started. I have also removed the sudo command as it is clearly not needed (if find can write into the file, then sed can read and modify it without sudo).
If you find that you do need sudo to write songs.index, I would suggest that you instead run this cron job in the crontab belonging to a user with permissions to write to the target directory /var/www/html/uploads/Music.
A pipelined solution would be
#!/bin/sh
#Creates index file.
find /var/www/html/uploads/Music/ -name '*.mp3' |
sed -e 's,^/var/www/html,..,' >/var/www/html/uploads/Music/songs.index
Here, find writes directly to the sed command, and the result of the sed command is written to the index file. The communication of data (pathnames) from find into sed keeps the two processes synchronised and sed would wait for find to produce the next line of input before continuing (and vice versa; find would wait for sed to process the data before trying to output more data).
I've also made the sed command easier to read and I've anchored the regular expression to the start of the line and removed the unnecessary g flag at the end. Since it now reads from the find command, I've also removed the -i option (strictly speaking, the -e option could also be removed).
Another thing I've changed is the #! line. You're not using any bash-specific features, so we may as well run the script with a (possibly) more light-weight shell.
If you want to write the filenames of the found files to the index, with the initial /var/www/html replaced by .. you could also do that directly from find:
#!/bin/sh
#Creates index file.
find /var/www/html/uploads/Music/ -type f -name '*.mp3' -exec sh -c '
for pathname do
printf "../%s\n" "${pathname#/var/www/html/}"
done' sh {} + >/var/www/html/uploads/Music/songs.index
This single find command would find the pathnames all regular files (i.e. not directories etc.) whose names matched the given pattern. For batches of these pathnames, a short inline shell script is called. The script simply iterates over the current batch of pathnames and prints them out slightly modified.
The modification to the pathname is done through the parameter substitution ${pathname#/var/www/html/}. This removes the string /var/www/html/ from the start of the value in $pathname. The printf format string used would then ensure that this part is replaced by ../.
See also
Understanding the -exec option of `find`
| Why does this script output corrupt files when run automatically with crontab? |
1,593,723,601,000 |
Some days ago I wrote a script and put it somewhere so it starts automatically at boot on my Raspberry Pi running Wheezy.
ps -ax gives me:
2041 ? S 0:00 /usr/sbin/apache2 -k start
2064 ? Ss 0:00 /usr/sbin/cron
2067 ? S 0:00 /USR/SBIN/CRON
2068 ? S 0:00 /USR/SBIN/CRON
2072 ? Ss 0:00 /bin/sh -c eibd -t 1023 -S -D -R -T -i --no-tunnel-cl...
2073 ? Ss 0:00 /bin/sh -c python2.7 /opt/scripts/nibe_uplink/main.py
2074 ? S 0:00 eibd -t 1023 -S -D -R -T -i --no-tunnel-client-queuin...
2075 ? Rl 1:25 python2.7 /opt/scripts/nibe_uplink/main.py
pid 2074 is started from /etc/crontab.
pid 2075 is started from crontab -e
How can I find where pid 2073 is started from?
|
What started this process?
You can use ps to find the parent of each process, either by adding -l (ps -axl) to give "long" output, or by specifically requesting the ppid:
ps -o ppid 2074
PPID
2072
Repeat for 2072 to see what started that (probably CRON).
Why two processes?
cron passes each command to a shell. From crontab(5):
The entire command portion of the line, up to a newline or a
"%" character, will be executed by /bin/sh or by the shell specified
in the SHELL variable of the cronfile.
If you have the following line in crontab:
0 * * * * python2.7 /opt/some/script.py
...then when the entry needs to run (every hour, on the hour), cron executes the shell (/bin/sh) with the two arguments -c and python2.7 /opt/some/script.py.
The shell then interprets everything after '-c' as a command to run. It finds python2.7 from PATH, and executes it with the single argument /opt/some/script.py. So, depending on your shell (including what /bin/sh points to), there may now be two processes running:
/bin/sh -c python2.7 /opt/some/script.py
/usr/bin/python2.7 /opt/some/script.py
That's why ps is showing you 2 eibd processes, and 2 python2.7 ones, despite there being only one entry for each in your crontab.
Some shells may avoid forking a second process like this. See Why is there no apparent clone or fork in simple bash command and how it's done?
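To walk the whole parent chain in one go, a small loop over ps works (a sketch — ancestry is a made-up helper name, and it stops before init):

```shell
# Print a process and each of its ancestors, up to (but not including) init.
ancestry() {
    pid=$1
    while [ "${pid:-0}" -gt 1 ]; do
        ps -o pid=,comm= -p "$pid"                  # this process's pid and name
        pid=$(ps -o ppid= -p "$pid" | tr -d ' ')    # hop to the parent
    done
}

# e.g.:  ancestry 2073   # shows the sh -c process, then cron, and so on up
```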
| From where is my script started on reboot |
1,593,723,601,000 |
I have a shell script which, when run as root, performs various tasks to prepare a Debian (9/stretch) server for running a web application. Amongst the tasks that the script does is append cronjob lines to the crontab files for root and www-data (in /var/spool/cron/crontabs/), using cat and heredoc text.
Each cronjob that is added to the file is enclosed by marker comments, so that when using the uninstall function of the script, these cronjobs can be found and removed from the crontab files using sed.
This seems to be working OK, although I have now noticed that when reviewing each of the crontabs via crontab -l the first 3 lines of the crontab do not appear, although they do still exist when checking the actual crontab file directly. Some research has revealed that this is a side-effect of an intentional feature in Debian's crontab implementation, which hides the first 3 lines of a crontab as it expects those lines to be a 3 line "DO NOT EDIT THIS FILE" header.
However, if I am appending to a previously non-existent crontab file, this header does not exist and so does not get created, which is why the real first 3 lines of the crontab are hidden instead.
I am probably not doing the right thing by writing to a crontab file directly in any case. How can I update my script so that it can automatically add to and remove from crontabs in a way which keeps the system happy?
(I see from the man page that there is a CRONTAB_NOHEADER which can be set to N in order to not hide the 3 lines.)
|
Rather than manipulate an individual crontab, I'd opt to drop snippets of crontab functionality into the /etc/cron* directories.
This seems easier to manage in the sense that all that's required is the creation/deletion of files from whatever /etc/cron* directory you need/want the snippet to run under:
$ ls -ld /etc/cron*
-rw-------. 1 root root 0 May 2 10:54 /etc/cron.allow
drwxr-xr-x. 2 root root 4096 Jul 28 14:56 /etc/cron.d
drw-------. 2 root root 4096 Jul 28 14:56 /etc/cron.daily
-rw------- 1 root root 0 Apr 10 21:48 /etc/cron.deny
drw-------. 2 root root 4096 Jul 28 14:55 /etc/cron.hourly
drw-------. 2 root root 4096 Jun 9 2014 /etc/cron.monthly
-rw-------. 1 root root 451 Jun 9 2014 /etc/crontab
drw-------. 2 root root 4096 Jun 9 2014 /etc/cron.weekly
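For example, an install script could drop a snippet into /etc/cron.d. Note that unlike a user crontab, /etc/cron.d entries need a user field, and the filename must not contain dots (the names and paths below are hypothetical):

```
# /etc/cron.d/myapp-backup   (owned by root, mode 0644)
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user     command
17 3 * * *        www-data /opt/myapp/bin/backup.sh >> /var/log/myapp-backup.log 2>&1
```

Uninstalling is then just rm /etc/cron.d/myapp-backup — no sed markers needed.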
| Adding/removing jobs in Debian crontab files? |
1,593,723,601,000 |
cron can be used to schedule running a program once in a while. But it does not seem to be specific to an existing shell process.
If I have a script which accesses the state of a specific bash process (e.g. to access the output of running jobs and dirs in that shell, by sourcing the script), how can I schedule its running in the specific bash process once in a while?
Thanks.
update: neither reply actually can directly access the state of an existing bash process. They can indirectly for some state information copied from the parent shell process to the child shell process. I don't remember why I accepted one.
|
Not entirely sure what you're after, but you can kick off a background job that does something and then waits X seconds, before repeating.
Example
( while : ; do echo hello ; sleep 10 ; done ) &
| Can I schedule running a script inside a particular shell process once in a while? |
1,593,723,601,000 |
I want to run a command every two minutes from 7:00 until midnight, and every 10 minutes from midnight until 7:00 the next day.
I write
*/2 7-24 * * * command
*/10 24-7 * * * command
But crontab tells me there is a problem with it. How can I fix it?
Thank you!
|
Midnight is 0, not 24. Furthermore, each column works independently, so 7-23 means “whenever the hour part of the time is between 7 and 23 inclusive”, not “between 7:00 and 23:00”. So use 7-23 for “7:00 till midnight” and 0-6 for “midnight till 7:00”.
*/2 7-23 * * * command
*/10 0-6 * * * command
| How to write this schedule in crontab? |
1,593,723,601,000 |
I added the following line to the Red Hat crontab so that the script /var/scripts/PLW.pl runs every Saturday at 05:00 in the morning. I want to check the exit status from the script.
My question: is it possible to add something like this to the crontab, after the script line,
0 0 * * 5 /var/scripts/PLW.pl
[[ $? -eq 0 ]] && run_once.bash
so if exit status is 0 the run_once.bash script will be activated.
|
You can do something like,
0 0 * * 5 /usr/bin/perl /var/scripts/PLW.pl && /bin/bash /path/to/run_once.bash
Note: && /bin/bash /path/to/run_once.bash will only run if the previous command ran successfully. So instead of checking the exit code yourself, you can use &&'s built-in behaviour.
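If you'd rather make the exit-status handling explicit (for instance to also act on failure), it can live in a small wrapper instead. A sketch, with the two commands passed in as strings:

```shell
# run_then MAIN FOLLOWUP: run MAIN; run FOLLOWUP only if MAIN exits 0.
run_then() {
    if sh -c "$1"; then
        sh -c "$2"
    else
        echo "main job failed" >&2
        return 1
    fi
}

# the crontab entry would then call a wrapper script built around this,
# e.g.:  run_then '/var/scripts/PLW.pl' '/path/to/run_once.bash'
```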
| How to use exit status in crontab |
1,593,723,601,000 |
I have a script that looks like this:
#!/bin/bash
D=~/brew\ update
num=$(ls "$D" | cut -d ' ' -f 2 | sort -nr | head -1)
num=$(( num + 1 ))
script -q "$D/brew_update $num" brew update
This script always works, but when I start it with cron 0 */5 * * * ~/bin/brewupdate2, it says this in a file in /var/mail
^Dscript: brew: No such file or directory
So I thought it was that it uses sh, but I tried sh ~/bin/brewupdate2 and it ran without error.
|
D=~/brew\ update
# The above will not work reliably when cron starts the script.
# Assuming you are user fred, this full path will always work:
D=/home/fred/brew\ update
Cron works with different permissions and a different PATH than your user; in other words, cron is basically its own user (root), with its own PATH etc. Don't use ~/ — when cron starts the job, ~/ is relative to the home directory of the user that starts the job (root in this case), not yours. Use the full path for 'brew update' (and, if you can, get rid of that directory name with the space in it, if it's under your control). I have no idea what cron even does with ~/ in terms of generating a path, because it will never work predictably, so I never thought about it.
With cron, ALWAYS use full system paths, or you will get these kinds of errors.
From serverfault:
what user will [cron] run as?
They all run as root. If you need otherwise, use su in the script or
add a crontab entry to the user's crontab (man crontab) or the
system-wide crontab (whose location I couldn't tell you on CentOS).
So in theory, you could use ~/ which would translate to the home of root, /root/, but that's a very bad idea re readability and testing etc.
[update] As noted, the issue here was 'brew' being in /usr/local/bin, which is not in cron's $PATH, so the command 'brew' failed with file not found. I missed that last line. But it's all the more reason to always use full paths: full paths for all programs and files will resolve all of these issues.
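An alternative to spelling out every binary's location is setting PATH at the top of the crontab itself; the directories and schedule below are examples (brew typically lives in /usr/local/bin):

```
PATH=/usr/local/bin:/usr/bin:/bin
0 */5 * * * /home/fred/bin/brewupdate2
```

With that line in place, bare command names like brew resolve the same way they do in an interactive shell.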
| Script usually works, but not with cron |
1,593,723,601,000 |
I want to run sudo airodump-ng -w myfile every ten minutes or so, for m minutes. It does not matter if the running time shifts (that it, if it runs m minutes later each time).
Notice that this is a monitoring program, which won't just output and exit. I suppose the solution for this one question is also valid for similar monitoring programs.
I was thinking about putting something like:
*/10 * * * * airodump-ng mon0 -w myfile
into crontab. There is no need to change the myfile name, airodump can correctly check whether myfile exists and create a myfile-02 and so on.
However, how should I stop it running after s seconds? pkill airodump is the only thing I can think of. Is this the best way to run it for 1 minute twice an hour?
20,40 * * * * airodump-ng mon0 -w myfile
21,41 * * * * pkill airodump-ng
|
Don't use pkill. Instead, run your app under the timeout command from the coreutils package:
*/10 * * * * timeout 5m airodump-ng mon0 -w myfile
(Where here 5m means to run for 5 minutes.) Use --signal if you need something other than TERM.
| Run and stop a monitoring command as sudo for s seconds every m minutes |
1,593,723,601,000 |
I have several running processes started by a shell, but I don't want them to run between 08:00 and 20:00 each day because they are really bandwidth-consuming. I have to suspend them during that period instead of killing them directly, because killing them would cause some problems. So my task is to suspend them at 08:00 and wake them up at 20:00 every day. Can anyone tell me how to do that?
I tried to do this in crontab: I use kill -19 pid to suspend them, but I don't know how to wake them up again by PID.
A shell script would be much appreciated. Thank you.
|
You can use signal names for better readability:
kill -STOP <PID> # pause
kill -CONT <PID> # continue working
Check
man 7 signal
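Combined with cron, the whole thing can be two entries (a sketch — it assumes the processes can be matched by name with pkill -f, which matches against the full command line; adjust the pattern to your script):

```
# suspend at 08:00, resume at 20:00, every day
0 8  * * * pkill -STOP -f 'bandwidth_hungry_script'
0 20 * * * pkill -CONT -f 'bandwidth_hungry_script'
```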
| How to suspend a process for a certain period of time? |
1,593,723,601,000 |
How would I set a script to execute on every Tuesday and Thursday at 11:50am?
I've been looking at the at command, but I can't conceive how to use it the way I need to from its man page.
|
at is an excellent tool for one-off commands. To run a program repeatedly at the same times, however, the right tool is cron. Run crontab -e. It will open an editor. Add this line and save the file:
50 11 * * 2,4 /path/to/script
This will run /path/to/script every Tuesday and Thursday at 11:50am. crontab runs programs in a limited environment. So, script may need to set its own PATH, etc.
If the machine has a properly set up mail server, any output from script will be e-mailed to the user who owns the crontab file. Alternatively, output will be mailed to the address specified by the MAILTO variable in the crontab file. See man 5 crontab for details.
The first five columns of the line above define the time that the program is run. Their meaning is documented in man 5 crontab to be:
field allowed values
----- --------------
minute 0-59
hour 0-23
day of month 1-31
month 1-12 (or names, see below)
day of week 0-7 (0 or 7 is Sun, or use names)
| How to execute recurring Bash script at specific times? |
1,593,723,601,000 |
I checked the crontab set by the hosting company and they have this:
0 * * * * /usr/bin/php /www/sites/[domain.com]/files/html/shell/indexer.php reindexall
Does this cron run constantly or once at midnight?
|
midnight is
0 0 * * * /usr/bin/php /www/sites/[domain.com]/files/html/shell/indexer.php reindexall
Your current crontab entry runs at the start of every hour. For more info see e.g. this
| Incorrect cron for midnight? |
1,398,796,996,000 |
I have a cron entry that runs every 30 minutes -
*/30 * * * * /home/myuser/myscripts.sh
How to set this up such that it runs exactly at 30 minute intervals but also exactly at (for example)
3:00 PM, 3:30 PM, 4:00 PM, 4:30 PM and so on.
So I'm not only interested in the 30 minute interval but also the time being close to "round figures" on the clock.
So 3 PM is just an example. I want the job to run at times that are "round figures", like "3:00 PM, 3:30 PM, 4:00 PM" and not "3:15 PM, 3:45 PM, 4:15 PM"
|
You can specify
0,30 * * * * /home/myuser/myscript.sh
This is in fact the same as */30 * ...., since step values count from the start of the range, so */30 in the minute field also fires at minutes 0 and 30. But I have never used anything that had to be on the minute like that, just at a regular interval (*/5 * ....)
| How to set up a cron entry that runs at 00 and 30 after the hour? |
1,398,796,996,000 |
I have a root cron task that runs every day. But I want to allow normal users to request that it run immediately, if they so wish. It is a harmless process and it can run as often as needed and these normal users are actually trusted users. But I don't want to give these normal users any special permissions.
Can I allow a normal user to trigger the cron task to run immediately? If so, how? The user will be doing this via a Java app, so I'll probably be using Java's ProcessBuilder.
ProcessBuilder (Java 2 Platform SE 5.0)
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/ProcessBuilder.html
It would be great if you could provide a solution with Java ProcessBuilder example code.
|
As a variant: create a script (added to crontab) and allow it to be executed via sudo without a password.
https://askubuntu.com/questions/155791/how-do-i-sudo-a-command-in-a-script-without-being-asked-for-a-password
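One way to wire that up (all names below are hypothetical): wrap the cron task in a script, allow a trusted group to run exactly that script via sudo without a password, and have the Java app invoke it.

```
# /etc/sudoers.d/dailytask   (edit with: visudo -f /etc/sudoers.d/dailytask)
%trusted ALL = (root) NOPASSWD: /usr/local/bin/dailytask.sh
```

The Java side then only needs new ProcessBuilder("sudo", "/usr/local/bin/dailytask.sh").start(). Because the sudoers entry names one exact command, the users gain no other root privileges.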
| How can a normal user trigger a root cron task to execute immediately, without delay? |
1,398,796,996,000 |
How can I use Crontab in Linux specifically for a Java program? I want to run a MIS Script. How can I crontab it and what should the path be?
|
Assuming this Java application is a console based app there is nothing inherently special you need to do just because it's a Java application.
If you have a Java .class file, run the application like so:
$ java HelloWorld
If you have a .jar file, run the application like so:
$ java -jar myapp.jar
Cron job
To make either of the above methods a cron job, simply add these to a Bash script and put that script into one of the designated crontab directories, or simply add the above command to a crontab entry.
Examples
Making a script
Here's a script, myjavawrapper.bash.
#!/bin/bash
# Do any CLASSPATH stuff here
java -jar myapp.jar
Then put myjavawrapper.bash in one of the cron job directories or system crontab:
$ ls -d1l /etc/cron*
drwxr-xr-x. 2 root root 4096 Nov 1 23:58 /etc/cron.d
drwxr-xr-x. 2 root root 4096 Nov 3 23:46 /etc/cron.daily
-rw-r--r-- 1 root root 0 Jun 29 2011 /etc/cron.deny
drwxr-xr-x. 2 root root 4096 Oct 8 2011 /etc/cron.hourly
drwxr-xr-x. 2 root root 4096 Dec 18 2010 /etc/cron.monthly
-rw-r--r-- 1 root root 451 Jun 2 12:10 /etc/crontab
drwxr-xr-x. 2 root root 4096 Aug 12 2011 /etc/cron.weekly
Add an entry to /etc/crontab
Add a line such as this to the crontab file:
*/30 * * * * root (cd /path/to/class/file; java HelloWorld)
The above will run java HelloWorld every 30 minutes.
The above are just 2 methods, they aren't the only methods. This is just to give you some ideas and approaches on how to accomplish the task. There are several other ways.
| How to use Crontab for a java file in linux |
1,398,796,996,000 |
The issue I currently have is someone has created a crontab process to run on a RHEL 5 box, yet has not left me any privileged user information. How can I execute this cron job that was setup to run as user foo when I do not have root access? Also of note this is a non-standard cron job insofar as it is run at arbitrary times.
Also where does this job get placed? I have read that I should not attempt to modify anything that resides within /var/spool , which is fine since as I have stated I have no root access.
|
Invoking crontab -e when logged in as the specific user will open up that user's crontab file. Edits can be made from that point forward.
| Location of crontab job created by non-privileged user |
1,398,796,996,000 |
I think I'm missing something. I've set up some crontabs to handle rsyncs over ssh in my office to handle some backups, but they don't seem to be running OR logging, for that matter. Am I missing something?
Here are my crontabs:
0 2 * * * idealm /usr/bin/local/rsync -avz --rsync-path=/usr/bin/rsync -e ssh /Ideal\ Machinery\ Database/* [email protected]:~/db_backup >> /home/idealm/Work/cron_log.txt
^ from the Mac Mini Server to the SME Server
0 2 * * * i /usr/bin/rsync -avz --rsync-path=/usr/local/bin/rsync -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/Converted\ Warehouse\ Pictures/* [email protected]:/ideal_img >> /home/i/cron_log.txt
^ from the SME Server to the Mac Mini Server
0 2 * * * i /usr/bin/rsync -rvz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/* fm-backup@[remote server]:/home/fm-backup/img_all/ >> /home/i/cron_log.txt
0 2 * * * i /usr/bin/rsync -rvz -e ssh ~/db_backup/* fm-backup@[remote server]:~/db_backup/ >> /home/i/cron_log.txt
^ From the SME Server to a remote server running Ubuntu 10.10 (where [remote server] is the censored address)
The thing is, when I run these jobs from bash, everything works perfectly - cron just doesn't seem to be working. Any suggestions?
|
First try some simple cronjobs and build from here:
0 12 * * * user echo 'Hello, World!' >> /tmp/test.log 2>&1
1 12 * * * user ssh anotheruser@anotherhost ls >> /tmp/test.log 2>&1
2 12 * * * user rsync -e ssh anotheruser@anotherhost:/path/to/small/dir /tmp/test-dir >> /tmp/test.log 2>&1
Avoid using too many options at first, but using 2>&1 is important for getting any error messages. You could also use 2>/tmp/test-error.log to send them to a separate file. Normally, cron generates an email if, and only if, there is any output. This mail is sent to the email address mentioned in the MAILTO environment variable set in crontab or to the local unix user the job is run as if MAILTO is not set. If you don't have a mail server installed or are unsure how to access it, then you can just redirect all output to a file instead. Another thing to look at is setting SHELL. By default, cron uses /bin/sh which is normally just fine, and may even be a symlink to /bin/bash on some systems, but if not and you expect to use bash-isms in your command, you can add SHELL=/bin/bash or whatever is appropriate before your job listings.
Another issue I can see with your above crontabs is spaces. Since the commands work at a normal shell, it's probably not the issue, but do be wary of using spaces when using certain remote ssh commands like: ssh server ls "Music Videos", spaces in such commands need to be double escaped due to the fact that both the local shell and remote shell may interpret them. The correct command should be something like this: ssh server ls "Music\\ Videos" or ssh server ls '"Music Videos"'
The last thing I can think of is that ssh is failing authentication. I assume you are using public key authentication. Where is your public key being stored? Is it in ~/.ssh/id_rsa as an unencrypted private key, or do you have it loaded in a SSH Public Key Agent? If it's loaded in an agent because the on-disk copy is encrypted, you will need to make sure SSH_AUTH_SOCK is manually set in crontab to point to an agent that will be running when rsync does. I normally keep my private keys encrypted on-disk and keep ssh-agent running in the background with a fixed socket path. For example, I run ssh-agent -a /tmp/user.agent followed by SSH_AUTH_SOCK=/tmp/user.agent ssh-add .ssh/id_rsa to load in the private key to the agent. I have these two commands in a script that I run manually after each reboot since I need to enter in a passphrase to unlock the private key. Then I have SSH_AUTH_SOCK=/tmp/user.agent in my crontab file for my automatic cronjobs to use.
| Cron: My crontabs don't seem to be doing anything at all |
1,398,796,996,000 |
I have the following cron script that runs daily. As you can see from the code, it outputs the results from reflector to /etc/pacman.d/mirrorlist.
$ cat /etc/cron.daily/update-mirrorlist
#!/bin/bash
reflector -l 5 -r -o /etc/pacman.d/mirrorlist
Sometimes, reflector outputs an empty file and thus an invalid mirrorlist is created.
How can I modify the script above to only write to /etc/pacman.d/mirrorlist if there is valid output from reflector?
|
It's a good idea to first accumulate the data, then move it into place. That way the target file will always be valid, even while the data accumulator program is running.
set -e
target=/etc/pacman.d/mirrorlist
reflector -l 5 -r -o "$target.tmp"
mv -f -- "$target.tmp" "$target"
If reflector does not properly report errors by returning a nonzero status, add your own validation test before the mv command, for example test -s "$target.tmp" to test that the file is not empty.
If you want to keep a backup of the old version, add ln -f -- "$target" "$target.old" || true before the mv command.
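Putting those pieces together, the validate-then-swap step can be wrapped in a small helper; this is a sketch demonstrated on a scratch file rather than the real mirrorlist (in the real script, the echo line would be the reflector call):

```shell
#!/bin/sh
# Promote "$1.tmp" to "$1" only if the temporary file is non-empty;
# otherwise discard it and report failure.
promote() {
    if test -s "$1.tmp"; then
        mv -f -- "$1.tmp" "$1"
    else
        rm -f -- "$1.tmp"
        return 1
    fi
}

# Demo: a non-empty candidate file gets promoted into place.
target=$(mktemp)
echo 'Server = https://example.mirror/archlinux/$repo/os/$arch' > "$target.tmp"
promote "$target" && echo promoted
```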
| Stop cron script from destroying my mirrorlist with invalid data |
1,398,796,996,000 |
It seems like a no-brainer question, but I did not manage to find any real information. On my Ubuntu server I have created a custom /etc/cron.d config file, e.g. /etc/cron.d/MyCronTab; the reason I put everything here is ease of finding, and the files are easy to modify.
Now I'm not going to put anything sensitive at all in these crontabs, but I see that by default Ubuntu uses 644 (root can read/write, everyone can read) on these files. I guess that could make sense so that people know what tasks will be running in the background even if they can't alter them.
But in my case, it seems rather not a good idea to even expose this information about my specific crontab files since they are root and admin tasks anyway.
So I changed the permissions on my own file (/etc/cron.d/MyCronTab) so that only root can read and write; even if that crontab has a task run by, say, another user, it still runs without issue, which seems perfect.
Something I'm worried about is: will Ubuntu or the cron daemon reset my config file permissions back to 644 on updates, so everyone can read them again, or do they persist in this directory?
|
It’s your file, you control its permissions — neither package updates nor the cron daemon itself will change them.
As a general rule, while many files under /etc are provided by the system, /etc is the system administrator’s domain, and the system will preserve changes made there. Even changes made to system-provided configuration files are preserved by default (in case of conflict during upgrade, the administrator is asked how to handle it).
On Debian and well-behaved derivatives (including Ubuntu), this requirement is described in the Policy section on configuration files; packages can either delegate their configuration file handling to dpkg, or handle it themselves in their maintainer scripts, which
must be idempotent (i.e., must work correctly if dpkg needs to re-run them due to errors during installation or removal), must cope with all the variety of ways dpkg can call maintainer scripts, must not overwrite or otherwise mangle the user’s configuration without asking, must not ask unnecessary questions (particularly during upgrades), and must otherwise be good citizens.
Even on first installation, existing configuration files are preserved; this means that if at some future point you end up installing a package which conflicts with one of your own configuration files, dpkg will ask you what to do about it. However, purging a package will remove all its configuration, including your own files which are considered as “belonging” to the package; it’s best to ensure /etc is covered by your backup strategy, and it’s also a good idea to track changes to /etc with etckeeper.
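As an aside, when creating such files by hand, install(1) sets the restrictive mode in the same step as the copy. A sketch in a scratch directory (the real destination would be /etc/cron.d/MyCronTab, and the crontab line is just an example):

```shell
# Copy a crontab file into place with mode 600 in a single step.
tmp=$(mktemp -d)
printf '%s\n' '0 2 * * * root /usr/local/bin/backup.sh' > "$tmp/MyCronTab"
install -m 600 "$tmp/MyCronTab" "$tmp/cron.d-MyCronTab"
stat -c '%a' "$tmp/cron.d-MyCronTab"   # prints: 600
```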
| When you alter permissions of files in /etc/cron.d in Ubuntu, do they persist across updates? |
1,398,796,996,000 |
This script works when executed with doas ./backup_cron_root.sh
#!/usr/bin/bash
/usr/bin/crontab -l> "/tmp/cron.$(whoami).$(hostname)" && /bin/date>>"/tmp/cron.$(whoami).$(hostname)" &&
/usr/bin/doas -u joanna /usr/bin/cp -f "/tmp/cron.$(whoami).$(hostname)" "/home/joanna/pCloudDrive/backups" &&
/usr/bin/rm "/tmp/cron.$(whoami).$(hostname)"
where ./backup_cron_root.sh is the name of the script.
When the same script is scheduled as a cronjob with
doas crontab -e and * * * * * /home/joanna/backup_cron_root.sh >/tmp/cronjob.log 2>&1
it creates /tmp/cron.root.joanna-ONE-AMD-M4 which is owned by root
but for some reason it does not succeed in copying it to /home/joanna/pCloudDrive/backups. Why so?
Why doesn't this script succeed from crontab as it does when manually run?
The content of my /etc/doas.conf is
permit joanna as root
permit root as joanna
The following is my tail of grep CRON /var/log/syslog:
Feb 26 17:17:01 joanna-ONE-AMD-M4 CRON[747796]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Feb 26 17:17:01 joanna-ONE-AMD-M4 CRON[747797]: (root) CMD (/home/joanna/backup_cron_root.sh)
Feb 26 17:17:01 joanna-ONE-AMD-M4 CRON[747792]: (CRON) info (No MTA installed, discarding output)
Feb 26 17:17:01 joanna-ONE-AMD-M4 CRON[747791]: (CRON) info (No MTA installed, discarding output)
Feb 26 17:17:01 joanna-ONE-AMD-M4 CRON[747794]: (CRON) info (No MTA installed, discarding output)
Feb 26 17:17:01 joanna-ONE-AMD-M4 CRON[747793]: (CRON) info (No MTA installed, discarding output)
Feb 26 17:17:22 joanna-ONE-AMD-M4 CRON[747795]: (CRON) info (No MTA installed, discarding output)
Feb 26 17:18:01 joanna-ONE-AMD-M4 CRON[751555]: (root) CMD (/home/joanna/backup_cron_root.sh)
Feb 26 17:18:01 joanna-ONE-AMD-M4 CRON[751551]: (CRON) info (No MTA installed, discarding output)
Feb 26 17:18:01 joanna-ONE-AMD-M4 CRON[751550]: (CRON) info (No MTA installed, discarding output)
Feb 26 17:18:01 joanna-ONE-AMD-M4 CRON[751553]: (CRON) info (No MTA installed, discarding output)
Feb 26 17:18:01 joanna-ONE-AMD-M4 CRON[751552]: (CRON) info (No MTA installed, discarding output)
Feb 26 17:18:22 joanna-ONE-AMD-M4 CRON[751554]: (CRON) info (No MTA installed, discarding output)
The content of /tmp/cronjob.log is
doas: Authentication failed
|
The issue seems to be that you do not allow the root user to switch to the joanna user without a password in your doas configuration.
You do this with the nopass option in the doas.conf file:
permit nopass root
(It makes little sense to stop the root user from using doas to change to other users, so I removed the as joanna bit.)
You also have the option of using su in place of doas:
su joanna -c 'cp "$1" ~/pCloudDrive/backups/' sh "$tmpfile"
Alternatively, copy the file as root and change ownership of it with chown afterwards.
Your script may also be simplified somewhat:
#!/bin/sh
tmpfile=/tmp/crontab.$(whoami).$(hostname)
{ crontab -l; date; } >"$tmpfile"
doas -u joanna cp "$tmpfile" ~joanna/pCloudDrive/backups/
rm -f "$tmpfile"
I've deleted the excessive use of absolute pathnames. If /usr/bin and /bin are not in $PATH when this script is executed from cron, then something's broken in your setup.
I've also assigned the output filename to a variable, which means we don't have to execute whoami and hostname each and every time we need to refer to it.
I have removed the conditional execution of every command, opting to let the script continue with cleaning up the temporary directory instead.
| Why doesn't this script succeed from crontab as it does when manually run? |
1,398,796,996,000 |
My local time zone is not UTC. How do I make cron use UTC for its schedule without changing the time zone on the computer in other respects?
|
You can set the timezone with the TZ environment variable. If your system is based on systemd, you may thus alter the cron.service file and set the variable for the service only.
E.g. for Debian in /usr/lib/systemd/system/cron.service add
Environment="TZ=UTC"
in the [Service] section, after EnvironmentFile= has been read (in order to ensure said file does not reset the value). Reload the daemon and restart cron.service.
Tested on Debian 11.
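Since files under /usr/lib/systemd/system can be overwritten by package upgrades, the same change is safer as a drop-in, e.g. created with systemctl edit cron.service (the unit name may differ on your distribution):

```
# /etc/systemd/system/cron.service.d/override.conf
[Service]
Environment="TZ=UTC"
```

Then run systemctl daemon-reload and systemctl restart cron.service as before.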
| How do I make cron use UTC? |
1,398,796,996,000 |
I have a Mac mini running macOS Monterey, and it has InfluxDB running in a Docker container. I'm trying to set up a nightly cron job, via a shell script, to back the data up, zip it, and keep only the seven most recent backups. The backing-up part works perfectly whether I run it manually or via cron, but the bit that's giving me grief is getting shell expansion to work under cron so I can count the current backup files and keep only the most recent ones.
I've stripped the whole thing down to just trying to figure out how to get shell expansion working at all under cron, with zero luck.
The simplest thing I've come up with is this, which behaves (correctly) as the following when running it manually:
$ /bin/bash -c 'BACKUPS=(/Users/virtualwolf/Documents/InfluxDB_Backups/*) && echo "Number of backups: ${#BACKUPS[*]}" && echo "Oldest backup: ${BACKUPS[0]}"'
Number of backups: 10
Oldest backup: /Users/virtualwolf/Documents/InfluxDB_Backups/2022-04-12_00-30.zip
If I put that exact same thing into cron, I get the following:
Number of backups: 1
Oldest backup: /Users/virtualwolf/Documents/InfluxDB_Backups/*
I know by default cron uses /bin/sh and I've tried setting SHELL=/bin/bash at the top of my crontab as well, to no avail (though I wouldn't have thought that would have any effect when I'm calling a script directly with a shebang of #!/bin/bash?).
Am I missing something obvious here?
[EDIT]
To clarify things a bit, I'm currently testing solely with cron and have excluded running an actual script file just to get down to the minimum possible scenario, the relevant crontab entry looks like this:
* * * * * /bin/bash -c 'BACKUPS=(/Users/virtualwolf/Documents/InfluxDB_Backups/*) && echo Number of backups: ${#BACKUPS[*]} && echo Oldest backup: ${BACKUPS[0]}'
If I copy that exact same line (/bin/bash -c 'BACKUPS=(/Users/virtualwolf/Documents/InfluxDB_Backups/*) && echo Number of backups: ${#BACKUPS[*]} && echo Oldest backup: ${BACKUPS[0]}') into my terminal, it works as expected and it lists ten files and shows the oldest backup. When it's executed by cron, the shell expansion hasn't occurred and it's taken the BACKUPS variable as the literal string /Users/virtualwolf/Documents/InfluxDB_Backups/*.
|
So thanks to @muru's excellent questioning, it turns out this had nothing to do with cron and shell expansion at all, and was instead a case of the /usr/sbin/cron executable not having full disk access in System Preferences > Security & Privacy > Privacy.
I unlocked the preference pane, went into "Full Disk Access", clicked the + and pressed Cmd+Shift+G to bring up the Go To Folder dialog, popped in /usr/sbin and found cron and added it, and now everything works a treat.
| Wildcard expansion doesn't happen when Bash script invoked from cron under macOS |
1,398,796,996,000 |
I'm a newbie at making cron jobs on Linux. My goal is to execute a Python script in its own virtual environment.
To do this I have made first a shell script called twitter.sh where its content is:
source /home/josecarlos/Workspace/python/robot2-rss/venv/bin/activate
python /home/josecarlos/Workspace/python/robot2-rss/main.py R,1
And its route is:
/home/josecarlos/Workspace/python/robot2-rss
We have access to the source and python commands because their location, /usr/bin, is included in the PATH variable, as you can see below:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin
The config of my cronjob is:
# m h dom mon dow command
*/1 * * * * /home/josecarlos/Workspace/python/robot2-rss/twitter.sh
However, this configuration of my cron job doesn't work and I don't know what I am doing wrong :(
Edit I:
I have modified my twitter.sh script to this new code:
#!/bin/sh
/home/josecarlos/Workspace/python/robot2-rss/venv/bin/python /home/josecarlos/Workspace/python/robot2-rss/main.py R,1
If I run the script directly it works fine, but with the cron job it doesn't work!!! :(
Edit II:
Since in the last modification of twitter.sh I was only calling my Python script, I have modified my cron job to call the Python script directly, like this:
# m h dom mon dow command
* * * * * /home/josecarlos/Workspace/python/robot2-rss/venv/bin/python /home/josecarlos/Workspace/python/robot2-rss/main.py R,1
And it doesn't work :( I don't know what happen :(
|
set a proper shell
edit twitter.sh
#!/bin/bash
PATH=....
source /home/josecarlos/Workspace/python/robot2-rss/venv/bin/activate
python /home/josecarlos/Workspace/python/robot2-rss/main.py R,1
be sure to set PATH.
log result of command
in your crontab, add a logging part
*/1 * * * * /home/josecarlos/Workspace/python/robot2-rss/twitter.sh >> /var/log/twitter.log 2>&1
if something goes wrong, you can have a look at /var/log/twitter.log (make sure the cron user can write to that location; /var/log usually requires root, so a file under your home directory may be easier)
| Python: How to config crontab to run a script in a virtual environment |
1,398,796,996,000 |
I'm trying to use crontab to check and re-run a long running script.
I've created a script that checks the status and runs the long-running script if needed.
I'm running the long running script with nohup to keep it running when I log out.
The keep-alive script:
#!/bin/bash
BASE_PATH=~/scripts
LOG_PATH=${BASE_PATH}/keep_alive.log
write_to_log () {
date >> ${LOG_PATH}
echo ${1} >> ${LOG_PATH}
echo "--------------" >> ${LOG_PATH}
}
pid=$(pgrep long_script)
if [ -z "${pid}" ]; then
write_to_log "long_script is not running, starting..."
nohup ${BASE_PATH}/long_script.py &
fi
crontab entry:
*/30 * * * * /home/user/scripts/keep_alive.sh
When I run the script manually (in bash, ./keep-alive.sh) all works well and the long script starts.
From crontab, the script starts and exits immediately (no issue with the paths/expression), and I see a log written, but the long script stops. So my conclusion is that nohup doesn't work the same for crontab as in bash.
I tried using setsid or disown, but got the same results.
How can I solve it?
Thanks.
|
There is no need for nohup when using crontab. Unless your systemd is configured to kill all your processes when you log out there is no interaction between your shell exiting and crontab running processes (or otherwise), and nohup will have no useful effect on that.
Look at your local email (mail or mailx) and read the error messages reported there from cron - or less /var/mail/$USER if you want a shortcut. Alternatively change your crontab line to capture stdout and stderr to a file you can review later
*/30 * * * * /home/user/scripts/keep_alive.sh >ka.log 2>&1
Typically scripts don't run under cron because you've forgotten to set up the environment ($PATH, etc.).
| Use crontab to invoke nohup in script |
1,398,796,996,000 |
I've just made a perfectly working Bash script that runs xprop (as a regular user and as root):
#!/bin/bash
# time tracking BASH script
# current time and date
current_date=$(date --rfc-3339='seconds')
# active window id
window_id=$(xprop -root 32x '\t$0' _NET_ACTIVE_WINDOW | cut --fields 2)
# active window class
wm_class=$(xprop '\t$0\n' -id $window_id WM_CLASS | cut --fields 2)
# active window name
wm_name=$(xprop '\t$0\n' -id $window_id _NET_WM_NAME | cut --fields 2)
echo '"'$current_date'", '$wm_class', '$wm_name
with the following output (run as a regular user and root):
nelson@triplecero:~$ bash-scripts/time-tracking.sh
"2019-10-16 23:28:41-04:00", "konsole", "nelson@triplecero: ~ — Konsole"
the script is called from a regular user's crontab, but then it doesn't work as expected, writing error messages to the log: "xprop: unable to open display ''". This is the typical error message displayed when xprop (or any GUI program) is not run by the current session user, which is not the case here, because I can run xprop (and any other GUI program) as the root user.
The crontab (for the regular user) is like this:
# m h dom mon dow command
* * * * * /home/nelson/bash-scripts/time-tracking.sh >> /home/nelson/log/time-tracking.log 2>&1
* * * * * window_id=$(xprop -root 32x '\t$0' _NET_ACTIVE_WINDOW | cut --fields 2) 2>&1; echo $window_id >> /home/nelson/log/test.log 2>&1
the first line is executed with the following error messages in time-tracking.log:
xprop: unable to open display ''
xprop: unable to open display ''
xprop: unable to open display ''
"2019-10-16 23:21:01-04:00", ,
while the second one just create blank lines in test.log
What am I doing wrong in cron to get those error messages instead of the right output?
|
You're missing the $DISPLAY environment variable. It is set by the first process that initializes your GUI session and then inherited by all its child processes. For a local X11 session, the value is normally :0.
The $DISPLAY variable tells X11 applications how to contact the X server; the value :0 tells them to use a local Unix socket at /tmp/.X11-unix/X0. After the initial connection, higher performance access methods like Direct Rendering Infrastructure (DRI) can be enabled.
Cron jobs don't get the $DISPLAY variable automatically, because they're supposed to run independent of the GUI session: what if the owner of the job is not logged in at the moment? If a cron job could just gain access to anyone's X11 session, it would enable the users to spy on each other, and that is not acceptable at all.
Depending on the distribution you're using, you may also need to set the $XAUTHORITY environment variable. Without it, all the X11 tools and applications will assume that the X11 session cookie is located at $HOME/.Xauthority, but as an example, Debian 10 creates private $TMP directories for every user, so $TMP will be set to /tmp/user/<user's UID number> and $XAUTHORITY will be set to $TMP/xauth-<user's UID number>-_0 for DISPLAY :0.
Without access to the correct X11 session cookie, the X11 server won't respond to requests, not even for root. Running GUI programs after logging in as a regular user and then using su or sudo to become root is possible if and only if the $DISPLAY (and $XAUTHORITY, if necessary) are passed on to the su/sudo session - which is often set up to happen by default.
| Running xprop in a crontab: "unable to open display" |
1,398,796,996,000 |
I do not have much experience with Linux and would like some insight into whether it is possible to change a variable within a script before the script is automatically started by a cron job.
For example, that the variable VORDATE="04%2F15%2F2019" would be changed (automatically) to one week prior to today, present in the variable CURRENTDATE="09%2F09%2F2019".
Thank you!
|
I assume your cron job is going to run a bash script. Your script could use the date command to specify a relative date.
A week in the past
VORDATE=$(date -d "7 days ago")
A week in the future
VORDATE=$(date -d "7 days")
man date for more information.
Edit: how to format the output
You can format the output by adding a format string with the + option. The %<codes> are described in the manual page; to output a literal %, use %%. Here's an example with spaces to aid reading:
$ date -d "7 days ago" +"%m %%2f %d %%2f %Y"
09 %2f 19 %2f 2019
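To get the exact MM%2FDD%2FYYYY shape from the question, the codes can be run together without spaces. Pinning the reference date (a GNU date feature) makes the output deterministic; drop the fixed date to anchor on "now":

```shell
# One week before a fixed reference date, URL-encoded with a literal %2F:
date -d "2019-09-16 7 days ago" +"%m%%2F%d%%2F%Y"   # prints: 09%2F09%2F2019
```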
| Edit sh script before a cron job starts running |
1,398,796,996,000 |
I have a situation where I need to delete from mailbox (eg. /var/mail/root) messages with specific Message-Id.
The following code works only from a console, but I have to do it without user interaction, running from cron (/etc/crontab).
File: /tmp/clear_spam_test
mutt -f /var/mail/root -e "set alias_file=/var/mail/root" -e "set crypt_use_gpgme=no" -e "push <delete-pattern>[email protected]\n<sync-mailbox>qy"
I tried many variations
ssh -tt localhost 'bash -s' < /tmp/clear_spam_test
Output:
mutt -f /var/mail/root -e "set alias_file=/var/mail/root" -e "set crypt_use_gpgme=no" -e "push <delete-pattern\>[email protected]\n\<sync-mailbox\>qy"
echo -e "\nTEST $( whoami ) $0"
exit 0
<n>[email protected]\n<sync-mailbox>qy"
Error opening terminal: unknown.
TEST root bash
exit
Connection to localhost closed.
ssh -t localhost 'bash -s' < /tmp/clear_spam_test
Output:
Pseudo-terminal will not be allocated because stdin is not a terminal.
No recipients were specified.
ssh -T localhost 'bash -s' < /tmp/clear_spam_test
Output:
No recipients were specified.
ssh -tt $server <<'ENDSSH'
echo $(/tmp/clear_spam_test)
exit 0
ENDSSH
Output:
Error opening terminal: unknown.
TEST root /tmp/clear_spam_test
logout
Connection to localhost closed.
ssh -t $server <<'ENDSSH'
echo $(/tmp/clear_spam_test)
exit 0
ENDSSH
Output:
Pseudo-terminal will not be allocated because stdin is not a terminal.
mesg: ttyname failed: Inappropriate ioctl for device
No recipients were specified.
TEST root /tmp/clear_spam_test
ssh -T $server <<'ENDSSH'
echo $(/tmp/clear_spam_test)
exit 0
ENDSSH
Output:
mesg: ttyname failed: Inappropriate ioctl for device
No recipients were specified.
TEST root /tmp/clear_spam_test
None of it works. I also tried IFS.
|
Same problem here. This mutt command seems to depend on a working terminal that cron cannot provide.
At least for me it helped to start a virtual terminal using screen:
screen -d -m mutt -f /var/mail/root -e "set alias_file=/var/mail/root" -e "set crypt_use_gpgme=no" -e "push <delete-pattern>[email protected]\n<sync-mailbox>qy"
| CRON (no tty): Delete message with specific "Message-Id" |
1,398,796,996,000 |
I wrote a small application, which runs etherwake. From bash it works fine and wakes up another PC. But if it is launched from crontab, then nothing happens.
Has anyone encountered a similar problem and how to solve it?
Note: Maybe it matters, that the app is written with Qt/C++, etherwake runs via QProcess and OS is Raspbian on Raspberry Pi Zero.
|
I am replying to this message because I was struggling with the same issue.
The problem seems to lie in the etherwake path. Cron runs commands with a minimal default PATH (typically /usr/bin:/bin), but etherwake is located in a sbin directory.
/usr/sbin/etherwake
So instead of doing:
00 06 * * * etherwake -i wlan0 00:11:22:33:44:55
The proper way is:
00 06 * * * /usr/sbin/etherwake -i wlan0 00:11:22:33:44:55
This seemed to do the trick for me. Some other people struggling with the same issue have reported that wakeonlan:
sudo apt-get install wakeonlan
solves the problem as well.
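A quick way to find the absolute path that cron needs for any tool is command -v in a normal shell (demonstrated here with ls, since etherwake may not be installed everywhere):

```shell
# Prints the absolute path the shell resolves for a command name.
command -v ls
```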
| Cron and etherwake on Raspbian |
1,398,796,996,000 |
I'm working on a script that is supposed to execute on startup, but the problem is that the script requires some files that are on a shared drive that is automatically mounted via fstab, and at the time of its execution the drive isn't mounted yet.
I've tried using cron @reboot and the init.d route, but they both execute too early. I also considered adding mount -a to the script, but I would rather avoid having to sudo it. For now I've just added a delay to make it work, but that feels a bit hacky.
Is there a way to ensure that a startup script runs after fstab has been processed? Or force the mounts to be processed without using sudo?
|
For that you have to run your script as a systemd unit (assuming you have systemd), where you can define the dependency...
If you want to stick with cron @reboot (which sounds like the simpler choice) you have to make your script a bit smarter (or start cron after the filesystem mounts, a change I wouldn't suggest). Instead of a simple delay, you can check whether the required filesystem is mounted (in bash):
while ! mount | awk '{print $3}' | grep -qx /the/mountpoint; do
sleep 1
done
Or you can check whether the file you need is there:
while ! [ -f /that/file ] ; do
sleep 1
done
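If util-linux is available, mountpoint(1) avoids parsing mount output altogether. A sketch, using / as the default so the demo returns immediately; substitute your real mountpoint:

```shell
#!/bin/sh
# Block until the given directory is an active mountpoint.
mp=${1:-/}
until mountpoint -q "$mp"; do
    sleep 1
done
echo "mounted: $mp"
```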
| fstab mounting time |
1,398,796,996,000 |
I have an entry in my crontab file:
14 17 * * */2 python /home/pi/scripts/irrigate_5mins.py >/dev/null 2>&1
The intention is to run the command every other day, which is what the manpage (man 5 crontab) says is what */2 does. The actual quote from the manpage is:
Steps are also permitted after an asterisk, so if you want to say
``every two hours'', just use ``*/2''
The actual behaviour is that the command runs with a recurrence pattern of 2, 2, 2, 1, 2, 2, 2, 1 and so on. So, for example, in March / April the command ran on 15th, 17th, 18th, 20th, 22nd, 24th, 25th, 27th, 29th, 31st, 1st April, 3rd, 5th, 7th, 8th, 10th, 12th, where the dates in bold are those where the command was run the previous day.
So my question is: why is it behaving like this, and is there an (easy) way I can make it do what is expected?
System info:
root@pi:~# uname -a
Linux pi 4.9.28+ #998 Mon May 15 16:50:35 BST 2017 armv6l GNU/Linux
root@pi:~# lsb_release -a
No LSB modules are available.
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 8.0 (jessie)
Release: 8.0
Codename: jessie
It may or may not be of relevance that the system is connected to a timer which causes a hard reboot every 24 hours.
|
By specifying */2 in the day-of-week field, you run on days 0, 2, 4 and 6, i.e. Sundays, Tuesdays, Thursdays and Saturdays. Because Saturday (6) and Sunday (0) are adjacent, this produces exactly the 2, 2, 2, 1 interval pattern you observed.
If you want to run the job on slightly more regular intervals, use the day-of-month field instead (the third field). There, */2 matches the odd days of the month (1, 3, ..., 31), so in months with 31 days the job will run on two consecutive days, the 31st and then the 1st of the next month, before settling back into the two-day rhythm.
You could work around this by adding a schedule for months with even days and a separate schedule for months with an odd number of days (although I haven't really thought that through properly to know whether that would make it match up properly).
Another possibility would be to have the job schedule itself using at instead of using cron. This would definitely be a more "hackish" solution and would possibly fail if the job terminated abnormally between starting to run and successfully rescheduling itself in two days time, or if the system happened to be down at the next scheduled run.
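For an every-2-days cadence that survives month boundaries, another option (a sketch, not what the original answer proposed) is to schedule the job daily and let the script itself check the parity of the days-since-epoch count:

```shell
#!/bin/sh
# Scheduled daily from cron (e.g. "14 17 * * *"); only acts every other day.
# days-since-epoch parity is unaffected by month lengths.
day=$(( $(date +%s) / 86400 ))
if [ $(( day % 2 )) -eq 0 ]; then
    echo "even day: run the real job here"
else
    echo "odd day: nothing to do"
fi
```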
| Why does my vixie cron entry to run every other day, actually run on consecutive days every 4th time? |
1,398,796,996,000 |
I'm not in front of a testing environment right now but I desire to download a Bash script with curl and then load its content into crontab.
The content as appears in Github raw document is for example:
0 0 * * * ...
0 0 * * 0 ...
Does this code template look okay to you?
curl -O https://raw.githubusercontent.com/user/repo/master/file.sh | crontab
|
curl outputs to stdout by default (note that the -O in your example would instead save the response to a local file, leaving nothing for the pipe), so
curl URL
is enough. And at least on macOS crontab needs a - to read from stdin so we end up with
curl URL | crontab -
Whether it is wise to load unverified data from an URL directly into Cron is another question though...
| Piping content from Github into crontab |
1,398,796,996,000 |
As an example, take the phpsessionclean schedule. The cron.d file for this looks like this:
09,39 * * * * root [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi
It's saying: if systemd doesn't exist on the system, run the script /usr/lib/php/sessionclean.
If systemd does exist it doesn't run and the systemd timer runs instead. The phpsessionclean.timer file looks like this:
[Unit]
Description=Clean PHP session files every 30 mins
[Timer]
OnCalendar=*-*-* *:09,39:00
Persistent=true
[Install]
WantedBy=timers.target
I read about creating your own .timer files and creating an associated .service file containing the details of the script you're running, but in this case, and in the case of other .timer files installed by packages (such as certbot, apt etc.) there are no associated .service files. So, how do I infer what command is going to be executed when this timer runs?
|
You could be looking in the wrong place. Units can be in several places.
$ systemctl cat systemd-tmpfiles-clean.service
# /lib/systemd/system/systemd-tmpfiles-clean.service
...
(you can also see a command here:
$ systemctl status systemd-tmpfiles-clean.service
● systemd-tmpfiles-clean.service - Cleanup of Temporary Directories
Loaded: loaded (/lib/systemd/system/systemd-tmpfiles-clean.service; static)
Active: inactive (dead) since Sun 2017-07-16 17:34:00 BST; 16h ago
Docs: man:tmpfiles.d(5)
man:systemd-tmpfiles(8)
Process: 28580 ExecStart=/bin/systemd-tmpfiles --clean (code=exited, status=0/SUCCESS)
Main PID: 28580 (code=exited, status=0/SUCCESS)
To doublecheck the associated service:
$ systemctl show -p Unit systemd-tmpfiles-clean.timer
Unit=systemd-tmpfiles-clean.service
| How to see what command is being run by a systemd .timer file? |
1,398,796,996,000 |
I'm not able to redirect the output of a command into a file when it is run as a cron job
[root@mail /]# crontab -l
*/1 * * * * /sbin/ausearch -i > /rummy
[root@mail /]# cat /rummy
It's weird that when I don't give the -i option, I'm able to redirect it very well.
[root@mail /]# crontab -l
*/1 * * * * /sbin/ausearch > /rummy
[root@mail /]# cat /rummy
usage: ausearch [options]
-a,--event <Audit event id> search based on audit event id
--arch <CPU> search based on the CPU architecture
-c,--comm <Comm name> search based on command line name
-
-
-
Is there any syntax error, or am I missing something here?
Note: "ausearch -i" gives me the output below on a terminal, and when redirecting the output to a file it redirects it as-is.
[root@server ~]# ausearch -i
type=DAEMON_START msg=audit(05/22/2017 11:14:10.391:6858) : auditd start, ver=2.4.5 format=raw kernel=2.6.32-696.el6.x86_64 auid=unset pid=1319 subj=system_u:system_r:auditd_t:s0 res=success
----
type=CONFIG_CHANGE msg=audit(05/22/2017 11:14:10.519:5) : audit_backlog_limit=320 old=64 auid=unset ses=unset subj=system_u:system_r:auditctl_t:s0 res=yes
----
type=USER_ACCT msg=audit(05/22/2017 11:20:01.108:6) : user pid=2073 uid=root auid=unset ses=unset subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:accounting acct=root exe=/usr/sbin/crond hostname=? addr=? terminal=cron res=success'
----
type=CRED_ACQ msg=audit(05/22/2017 11:20:01.108:7) : user pid=2073 uid=root auid=unset ses=unset subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred acct=root exe=/usr/sbin/crond hostname=? addr=? terminal=cron res=success'
----
type=LOGIN msg=audit(05/22/2017 11:20:01.119:8) : pid=2073 uid=root subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old auid=unset new auid=root old ses=unset new ses=1
----
|
The command does not produce output, but runs ok.
You can see this because the file rummy got created.
The ausearch utility seems to expect a "search criteria", and the empty output could be due to you not providing one.
See the ausearch manual on your system for further information.
After a bit of reading of the ausearch manual, I found the following:
--input-logs
Use the log file location from auditd.conf as input for searching. This is needed if you are using ausearch from a cron job.
Doing some Googling confirms that this indeed may be the issue. One email describes the problem:
You need to use the --input-logs option. If ausearch sees stdin as a pipe, it
assumes that is where it gets its data from. The input logs option tells it
to ignore the fact that stdin is a pipe and process the logs. Aureport has
the same problem and option to fix it.
This was fixed in the 1.6.7 general release and backported to the 1.6.5 RHEL5
release.
There also seem to be users for whom --input-logs does not solve the problem, but it's not clear what else may be wrong, as there are never any follow-ups from them.
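Putting it together, the cron entry from the question should work once that option is added — a sketch based on the manual excerpt above:

```shell
*/1 * * * * /sbin/ausearch -i --input-logs > /rummy
```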
| cronjob not redirecting output of command when used with option |
1,398,796,996,000 |
I am trying to make something which records my public ip address in a database, every few minutes.
I have already developed a web page which records the viewer's ip whenever loaded.
I would like to set up a cron job which will periodically load the page in the background. I know how to set up the cron job, but I don't know how to load a web page in the background from the command line.
How would you go about doing this?
|
Presumably, the page was developed so that the user agent doesn’t actually have to be a browser. If this is the case, you can simply use the curl command to fetch the page.
If you’re running it as a cron job, you don’t want the command to print any output. To do this, use the --silent option for curl and discard the HTTP response by redirecting the output to /dev/null. E.g., add the following cron job to fetch the page every 10 minutes:
*/10 * * * * curl --silent http://example.com/path/to/page >/dev/null
If you want to be more efficient, you could develop your web page so that it responds to a HTTP HEAD request as well as GET requests. That way, you could use curl -I so only HTTP headers are sent between the server and the client.
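If you implement the HEAD variant, only the curl flags in the cron entry change — a sketch, with the URL as a placeholder:

```shell
*/10 * * * * curl --silent --head http://example.com/path/to/page >/dev/null
```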
| How to run a web page in the background? |
1,398,796,996,000 |
A vision system program runs on a PC and I need it to be always running and on top. But sometimes a problem happens in the program and terminates it, so I need a script that checks whether the program is running and starts it if not.
I used a cron job for the task; I wrote this cron entry to run the script:
*/1 * * * * /home/masoud/Desktop/vision3/cron.sh
and the cron.sh is :
cd "${0%/*}"
if pgrep -x "video" > /dev/null
then
echo "running"
else
/home/masoud/Desktop/vision3/video &
fi
The cron.sh does the job correctly and the cron job runs the script, but it terminates almost immediately: I can see my webcam LED turn on for just a second.
What am I doing wrong?
|
Cron jobs aren't really suited to managing desktop applications. You'd be better off starting the application from a looping shell script; at its simplest
#!/bin/sh
cd /home/masoud/Desktop/vision3
while :; do ./video; done
That way whenever video stops, it will be started again.
You may want to plan an "exit strategy" for when you really want to stop the program. This would do:
#!/bin/sh
cd /home/masoud/Desktop/vision3
while [ ! -f no_vision ]; do ./video; done
Then when you want to stop the program,
touch /home/masoud/Desktop/vision3/no_vision
and close it — the shell script will stop too.
| Start a program using cron job |
1,398,796,996,000 |
On my system, notify-send requires 3 environment variables to run, which are kept in a file that is generated automatically on logon:
/home/anmol/.env_vars:
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-PwezoBTpF3
export DBUS_SESSION_BUS_ADDRESS
XAUTHORITY=/home/anmol/.Xauthority
export XAUTHORITY
DISPLAY=:0
export DISPLAY
And, in the crontab buffer, I have entered this:
PATH=/home/anmol/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
* * * * * /home/anmol/display-notif.sh
where display-notif.sh contains:
#!/usr/bin/env bash
. /home/anmol/.env_vars
notify-send 'hello'
Although I am able to run notify-send from non-sudo cron (crontab -e) through this setup, I am unable to do so from sudo cron (sudo crontab -e).
I also tried checking if there are any errors being generated:
* * * * * /home/anmol/display-notif.sh 2>/home/anmol/log
But that log file is empty.
How do I make it work from sudo cron ?
I am using Ubuntu 16.04.
|
It is working after replacing
* * * * * /home/anmol/display-notif.sh
with
* * * * * sudo -u anmol /home/anmol/display-notif.sh
Most likely this is because the session D-Bus daemon only accepts connections from the user who owns the session, so notify-send has to run as that user (here anmol) even when the crontab belongs to root.
| notify-send from root cron |
1,398,796,996,000 |
Please advise how to set up crontab in Linux
to run a task each day at 23:00 at night,
and whether there is a shell script we can run on Linux
that can help us set up the crontab syntax.
|
If you read the cron documentation you will see that you need to use the command crontab -e and add a line like:
0 23 * * * /path/to/executable
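As for scripting the setup: an entry can also be installed non-interactively, which is handy for setup scripts — a sketch, assuming a cron whose crontab command reads from stdin (most implementations do; the path is a placeholder):

```shell
(crontab -l 2>/dev/null; echo "0 23 * * * /path/to/executable") | crontab -
```

The 2>/dev/null hides the "no crontab for user" complaint when no crontab exists yet.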
| linux crontab + how to set crontab at 23:00 |
1,398,796,996,000 |
I want my cron job to run every hour, every day except Friday when I don't want it to run at all between the hours of 2am-9am (but hourly outside of this timeframe). Ideally, I'd like to have this in one line/one cron job. What I have so far is 2 lines (and I'm not 100% sure it is correct):
0 * * * 0,1,2,3,4,6 script.sh
0 0-1,10-23 * * 5 script.sh
|
The format seems to be correct (after the correction posted in the comment above is applied). Are there special restrictions that require everything to be in a single line? If you do need everything in one line, I would suggest changing the shell script itself to skip Fridays 2-9am, e.g.
#!/bin/bash
# THIS CODE IS NOT TESTED
# skip on fridays 2-9am
# what are the non-running times?
STARTTIME=2
ENDTIME=9
# get the current day of the week
DAY=$(date +"%u") # 1-Monday, therefore 5-Friday
# and the hour
HOUR=$(date +"%H")
if [ "$DAY" -eq 5 -a "$HOUR" -ge "$STARTTIME" -a "$HOUR" -le "$ENDTIME" ]
then
# end the execution
exit 0
fi
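With this guard at the top of script.sh, the crontab itself collapses to a single hourly line (path is a placeholder):

```shell
0 * * * * /path/to/script.sh
```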
Hope that helps to solve the issue!
Frank
| Cron Job Hourly Except Certain Time Frame on Friday |
1,398,796,996,000 |
I have set up a cron job that should execute
hdparm -y /dev/sda >> /var/log/diskspindown.log 2>&1
When I test it via "Run selected task" it executes fine and logs
/dev/sda:
issuing standby command
but when executed on schedule it doesn't work and logs
/bin/sh: 1: hdparm: not found
|
Try using the full path to the binary (find it with which hdparm). Cron doesn't necessarily have the same PATH set up as your interactive shell.
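For example, assuming hdparm lives in /sbin (check with which hdparm on your system) and a nightly schedule picked purely for illustration, the entry becomes:

```shell
0 22 * * * /sbin/hdparm -y /dev/sda >> /var/log/diskspindown.log 2>&1
```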
| Gnome Schedule cannot run hdparm |
1,398,796,996,000 |
I have a Python script. When I launch it manually everything is OK, but when I launch it under crontab I get this error:
2015-04-24 14:36:02,163 ERROR Problème dans le module importData [Errno 2] No such file or directory: '/opt/scripts/stockvo.json'
My script (importData.py):
#!/usr/bin/env python
# -*- coding: latin-1 -*-
def moveFTPFiles(serverName,userName,passWord,remotePath,localPath,deleteRemoteFiles=False,onlyDiff=False):
"""Connect to an FTP server and bring down files to a local directory"""
import os
import sys
import glob
from sets import Set
import ftplib
logger.info(' Suppressions des anciennes photos du repertoire: '+localDirectoryPath)
os.chdir(localDirectoryPath)
files=glob.glob('*.*')
for filename in files:
os.unlink(filename)
try:
ftp = ftplib.FTP(serverName)
ftp.login(userName,passWord)
ftp.cwd(remotePath)
logger.info(' Connexion au serveur '+serverName)
logger.info(' Téléchargement des photos depuis '+serverName+' vers le repertoire '+localDirectoryPath)
if onlyDiff:
lFileSet = Set(os.listdir(localPath))
rFileSet = Set(ftp.nlst())
transferList = list(rFileSet - lFileSet)
logger.info(' Nombres de photos à récuperer ' + str(len(transferList)))
else:
transferList = ftp.nlst()
delMsg = ""
filesMoved = 0
for fl in transferList:
# create a full local filepath
localFile = localPath + fl
# print "Create a full local filepath: " + localFile
grabFile = True
if grabFile:
#open a the local file
fileObj = open(localFile, 'wb')
# Download the file a chunk at a time using RETR
ftp.retrbinary('RETR ' + fl, fileObj.write)
# Close the file
# print "Close the file "
fileObj.close()
filesMoved += 1
#print "Uploaded: " + str(filesMoved)
#sys.stdout.write(str(filesMoved)+' ')
#sys.stdout.flush()
# Delete the remote file if requested
if deleteRemoteFiles:
ftp.delete(fl)
delMsg = " and Deleted"
logger.info(' Nombre de photos récupérées' + delMsg + ': ' + str(filesMoved) + ' le ' + timeStamp())
except ftplib.all_errors as e:
logger.error(' Problème dans le module moveFTPFiles' + '%s' % e)
ftp.close() # Close FTP connection
ftp = None
def timeStamp():
"""returns a formatted current time/date"""
import time
return str(time.strftime("%a %d %b %Y %I:%M:%S %p"))
def importData(serverName,userName,passWord,directory,filematch,source,destination):
import socket
import ftplib
import os
import subprocess
import json
try:
ftp = ftplib.FTP(serverName)
ftp.login(userName,passWord)
ftp.cwd(directory)
logger.info(' Connexion au serveur '+serverName)
# Loop through matching files and download each one individually
for filename in ftp.nlst(filematch):
local_filename = os.path.join('/opt/scripts/', filename)
fhandle = open(local_filename, 'wb')
logger.info(' Téléchargement du fichier de données ' + filename)
ftp.retrbinary('RETR ' + filename, fhandle.write)
fhandle.close()
#convert xml to json
logger.info(' Conversion du fichier ' + filename + ' au format .json ')
subprocess.call('xml2json -t xml2json -o /opt/scripts/stockvo.json /opt/scripts/stockvo.xml --strip_text', shell=True)
#modify json file
logger.info(' Modification du fichier .json')
data = json.loads(open("/opt/scripts/stockvo.json").read())
with open("/opt/scripts/stockvo.json", "w") as outfile:
json.dump(data["Stock"]["Vehicule"], outfile)
#move json file
logger.info(' Déplacement du fichier de données .json vers le répertoire /opt/scripts/')
os.system('mv %s %s' % ('/opt/scripts/stockvo.json', '/opt/data/stockvo.json'))
#import json file to MongoDB
logger.info(' Import du fichier .json vers la base MongoDB')
#subprocess.call('mongoimport --db AutoPrivilege -c cars stockvo.json --jsonArray --upsert --drop',shell=True)
p = subprocess.Popen(['mongoimport', '--db', 'AutoPrivilege', '-c', 'cars', '/opt/data/stockvo.json', '--jsonArray', '--upsert', '--drop'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
if stdout:
logger.info(stdout)
if stderr:
logger.error(stderr)
#remove xml file
logger.info(' Suppression du fichier ' + filename)
## if file exists, delete it ##
myfile="/opt/scripts/stockvo.xml"
if os.path.isfile(myfile):
os.remove(myfile)
except ftplib.all_errors as e:
logger.error(' Problème dans le module importData' + '%s' % e)
ftp.close() # Close FTP connection
ftp = None
import time
import datetime
import re
import os
import stat
import logging
import logging.handlers as handlers
import subprocess
class SizedTimedRotatingFileHandler(handlers.TimedRotatingFileHandler):
"""
Handler for logging to a set of files, which switches from one file
to the next when the current file reaches a certain size, or at certain
timed intervals
"""
def __init__(self, filename, mode='a', maxBytes=0, backupCount=0, encoding=None,
delay=0, when='h', interval=1, utc=False):
# If rotation/rollover is wanted, it doesn't make sense to use another
# mode. If for example 'w' were specified, then if there were multiple
# runs of the calling application, the logs from previous runs would be
# lost if the 'w' is respected, because the log file would be truncated
# on each run.
if maxBytes > 0:
mode = 'a'
handlers.TimedRotatingFileHandler.__init__(
self, filename, when, interval, backupCount, encoding, delay, utc)
self.maxBytes = maxBytes
def shouldRollover(self, record):
"""
Determine if rollover should occur.
Basically, see if the supplied record would cause the file to exceed
the size limit we have.
"""
if self.stream is None: # delay was set...
self.stream = self._open()
if self.maxBytes > 0: # are we rolling over?
msg = "%s\n" % self.format(record)
self.stream.seek(0, 2) #due to non-posix-compliant Windows feature
if self.stream.tell() + len(msg) >= self.maxBytes:
return 1
t = int(time.time())
if t >= self.rolloverAt:
return 1
return 0
if __name__ == '__main__':
#log to a file
log_filename='/opt/log/importData.log'
logger=logging.getLogger()
logger.setLevel(logging.DEBUG)
handler=SizedTimedRotatingFileHandler(
log_filename, maxBytes=10000, backupCount=5,
when='s',interval=60,
# encoding='bz2', # uncomment for bz2 compression
)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
#--- constant connection values
    ftpServerName = "xxx.xx"
ftpU = "xxxx"
ftpP = "xxxx"
remoteDirectoryPath = "/xx/xxx/xxx/"
localDirectoryPath = "/xx/xx/xx/xxxx/"
directory = '/datas/'
filematch = '*.xml'
source='/opt/scripts/'
destination='/opt/data/'
start = time.time()
logger.info('================================================')
logger.info('================ DEBUT SCRIPT =================')
logger.info('================================================')
deleteAfterCopy = False #set to true if you want to clean out the remote directory
onlyNewFiles = True #set to true to grab & overwrite all files locally
importData(ftpServerName,ftpU,ftpP,directory,filematch,source,destination)
# moveFTPFiles(ftpServerName,ftpU,ftpP,remoteDirectoryPath,localDirectoryPath,deleteAfterCopy,onlyNewFiles)
end = time.time()
elapsed_time = end - start
now = time.strftime("%H:%M", time.localtime(elapsed_time))
logger.info('================================================')
logger.info('================== FIN SCRIPT ==================')
    logger.info("======== Tps d'execution: " + now + ' minutes =========')
logger.info('================================================')
Update:
I added my folder to my $PATH:
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/opt/scripts
Have I forgotten something (privileges, ...)?
|
To resolve my problem I added these lines to crontab
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/opt/scripts
0 4 * * * /opt/scripts/importData.py
Thank you to val0x00ff who gave me the solution.
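One more requirement worth noting when the crontab entry calls the .py file directly, as above: the script must keep its #!/usr/bin/env python shebang and be executable, e.g.:

```shell
chmod +x /opt/scripts/importData.py
```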
| Script not run with cron [closed] |
1,398,796,996,000 |
I tried to schedule my first cron job as follows:
crontab -e
The file had some comments at the top, and on the first line after those, I put
* * * * * date
I expected the date and time to be printed out every minute, but nothing happens on the terminal. Is the output getting sent elsewhere, or is the cron job not running? Any tips to make this work?
|
From the cron man page:
When executing commands, any output is mailed to the owner of the
crontab (or to the user named in the MAILTO environment variable in
the crontab, if such exists). The children copies of cron running
these processes have their name coerced to uppercase, as will be seen
in the syslog and ps output.
So I would check your email, if you have it set up on the system, or the syslog (e.g. /var/log/syslog).
EDIT From serverfault(edited to match your command)
The following will send any Cron output to /usr/bin/logger (including stderr, which is converted to stdout using 2>&1), which will send to syslog, with a 'tag' of date_logging. Syslog handles it from there. Since most systems already have built-in log rotation mechanisms, I don't need to worry about a log like /var/log/mycustom.log filling up a disk.
* * * * * /bin/date 2>&1 | /usr/bin/logger -t date_logging
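If you'd rather keep the output in a file of your own than in syslog, plain redirection in the crontab works too — a sketch, with the log path as a placeholder:

```shell
* * * * * date >> /tmp/date.log 2>&1
```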
| Cron job gives no output |
1,398,796,996,000 |
Say I have the following in my crontab:
* * * * * command1 -option A; command2; command3; etc.
I would like cron to run the commands I have in that line with a specific shell. How can I do that?
I know that I could technically put these commands in a file, add the corresponding shebang, and then just ask cron to run that shell script, but I would like to avoid that. Is there any way I can have cron run a set of commands in a specific shell?
|
You can change your cron line to:
* * * * * /bin/sh -c 'command1 ...'; /bin/tcsh -c 'command2 ...'; /bin/zsh -c 'command3 ...'
This is the more verbose option: you prefix each command with the specific shell, using its -c flag to pass the command string.
Another option is to echo all the commands into the specific shell:
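Note that cron itself also honors a SHELL variable at the top of the crontab (see crontab(5)); it applies to every job in that file, so it only fits if all your jobs should use the same shell:

```shell
SHELL=/bin/zsh
* * * * * command1 -option A; command2; command3
```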
| Switching shells for one cron job |
1,398,796,996,000 |
I have two servers that are synchronized with rsync, one is the failover server of the other, so they both have the same name.
Now root mails that are sent to my other account are all named from root <[email protected]> so I have no easy way of distinguishing from which server they are coming.
Is there a way of changing root mails of the one server to root failover <[email protected]>?
My first Idea would be simply to change the first line in /etc/passwd to
root:x:0:0:root failover,,,:/root:/bin/bash
But I am afraid to just to try this. Would this work?
|
Change the 'From' text by editing root's GECOS (full name) field, so you receive mail from 'root at failover' instead of just 'root':
chfn -f 'root at failover' root
This edits the same comment field of /etc/passwd that you were considering changing by hand, so your idea would work too — chfn is just the safer tool for it.
source: https://wiki.archlinux.org/index.php/SSMTP
| change the name for root mails from cron |
1,398,796,996,000 |
So I have a python script that pulls down git/svn/p4 repositories and does some writing to a database.
I'm simply trying to automate the running of this script, and based on what I see in syslog it is being run; even the file I tried piping the output to is being created. However, the file is empty. Here is the cron job:
10 04 * * * user /usr/bin/python2.7 /home/user/script.py -f someFlag > ~/cronout.log 2&>1
I'm kind of stumped and don't know where to start. I'm thinking it's maybe the requirement of passwords for the keychain and whatnot. Any ideas would be helpful!
|
So it turns out the problem was with environment variables that the Python script needed, and it failed so early on that the script broke before it even produced any output.
Cron does not provide a regular login environment.
Furthermore, SSH passwords were required for pulling the git repos, which I was able to solve by using Keychain.
Using the help of this blog post and some bash wrapper scripts, I was able to get everything working and automated.
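Since cron's environment is minimal, any simple variables the script needs can also be provided directly in the crontab — one NAME=value assignment per line above the entries (the variable names and values below are placeholders; note also the corrected 2>&1 redirection):

```shell
PATH=/usr/local/bin:/usr/bin:/bin
GIT_SSH=/usr/bin/ssh
10 04 * * * user /usr/bin/python2.7 /home/user/script.py -f someFlag > /home/user/cronout.log 2>&1
```

Crontab variable assignments are plain strings: no variable expansion or command substitution happens on those lines.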
| Cron job not behaving as expected |
1,398,796,996,000 |
I have a simple Qt program; when run, it shows a simple window with a countdown timer. If you might be interested in the code, please see here.
I have this crontab line:
* * * * * /home/my-user-name/Documents/bin/program
When executing the command /home/my-user-name/Documents/bin/program manually, the program runs correctly. But it's not invoked by cron. I have multiple cron jobs; all run smoothly except this one.
My question is:
Do you have any idea what might cause this? Qt problem? PATH problem?
I have searched around for cron, and tried almost all the tips.
|
The problem is that cron runs in a text environment. There are a few different approaches for that, depending on what your machine is running.
set a display variable:
* * * * * DISPLAY=:0.0 /home/my-user-name/Documents/bin/program
set up a password-less ssh key-pair with X11 forwarding and do
* * * * * /usr/bin/ssh -X user@localhost /home/my-user-name/Documents/bin/program
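If setting DISPLAY alone is not enough (the X server may reject connections without the right credentials), it can help to also point XAUTHORITY at the logged-in user's cookie file — a sketch, with the path assumed to be the default:

```shell
* * * * * DISPLAY=:0.0 XAUTHORITY=/home/my-user-name/.Xauthority /home/my-user-name/Documents/bin/program
```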
| Qt program not invoked by cron |
1,398,796,996,000 |
For almost 3 weeks now, in my downtime, I have been trying to find out where the files cron.allow & cron.deny are located in the Debian 7 distro. No luck — it seems that by default they are not on the system.
'Just' for hardening purposes, I would like to have those files available on my system. My question is whether I can simply touch them and use them without making other configurations:
root@asw-deb:~# touch /etc/cron.allow
root@asw-deb:~# touch /etc/cron.deny
Or whether I have to 'map' those files, maybe by editing some cron configuration file to tell cron where to find the two files I created.
Sorry if I sound a little nooby.
|
From the manual man 1 crontab:
If the /etc/cron.allow file exists, then you must be listed (one user per line)
therein in order to be allowed to use this command. If the /etc/cron.allow file
does not exist but the /etc/cron.deny file does exist, then you must not be listed
in the /etc/cron.deny file in order to use this command.
If neither of these files exists, then depending on site-dependent configuration
parameters, only the super user will be allowed to use this command, or all users
will be able to use this command.
If both files exist then /etc/cron.allow takes precedence. Which means that
/etc/cron.deny is not considered and your user must be listed in /etc/cron.allow in
order to be able to use the crontab.
Regardless of the existence of any of these files, the root administrative user is
always allowed to setup a crontab. For standard Debian systems, all users may use
this command.
I gave it a try on Debian 7, and it works exactly this way.
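So for the hardening you describe, creating the files with touch is indeed enough: cron looks them up at those fixed paths with no extra configuration. For example (the user name is a placeholder):

```shell
touch /etc/cron.allow          # from now on, only listed users may use crontab
echo 'alice' >> /etc/cron.allow
```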
| debian7 cron.allow & cron.deny files |
1,398,796,996,000 |
So I've spent around 2-3 hours now, plus some time researching; I found several similar answers online, but none seem to work!
I'm trying to execute a PHP script every minute (as a test), but it doesn't work.
I honestly don't see what's wrong with that script. So I've went to check the logs and I get this;
May 1 19:59:01 namehere crond[1112]: (system) RELOAD (/etc/crontab)
May 1 19:59:01 namehere crond[1112]: (CRON) bad username (/etc/crontab)
I am quite confused, any help would be greatly appreciated!
I have LAMP installed and php-cli if that matters.
EDIT:
I finally made it execute! Thanks to the poster below! However, I now have another problem, I get emailed an error
My script includes other scripts. When I was on cPanel (shared hosting) it worked perfectly, but now it doesn't. What could be the problem?
|
You seem to have some version of cron which expects a user-name parameter before the command. It is even in the header, just a bit concealed:
* * * * * <user-name> <command to be executed>
Try this (replace root with whatever user php/apache runs at):
* * * * * root /usr/bin/php /var/www/html/directory/file.php
Also, note that some distributions have separate php.ini configurations depending if it is used via command line (cli) or as apache module etc. So if you run into more problems, make sure your php.ini files match (check /etc/php).
Update
For absolute paths to work, base your includes on the directory of the running script:
include dirname(__FILE__) . '/../inc/databases.php';
Note the dirname(__FILE__) (or __DIR__ on PHP 5.3+), which returns the absolute path to the directory of the currently running script — unlike __FILE__ alone, which is the script's full file name and would produce a broken path if concatenated directly. You will have to update all include and require statements this way.
| Executing PHP with CronJobs in CentOS 6.4 not working? |
1,398,796,996,000 |
Being able to put a script in /etc/cron.daily is really nice because I can do it easily from a configuration management system or a package. However, my understanding is that all the entries in /etc/cron.daily will run sequentially. How can I make a script in /etc/cron.daily not hold up the other tasks? Would something like the following work?
#!/bin/bash
#do something long:
nohup sleep 1000000000 & # instead of sleep, this could point to another script that takes a while to execute
|
Yes, if you background the process in the script, the next one will be started. Scripts in /etc/cron.daily are run by run-parts (from man cron):
Support for /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly is provided in Debian through
the default setting of the /etc/crontab file (see the system-wide example in crontab(5)). The default system-wide
crontab contains four tasks: run every hour, every day, every week and every month. Each of these tasks will execute
run-parts providing each one of the directories as an argument. These tasks are disabled if anacron is installed
(except for the hourly task) to prevent conflicts between both daemons.
So, you can simulate by running it manually. For example:
$ ls /etc/cron.daily/
test1 test2
$ cat test1
#!/bin/bash
echo starting 1 >> /tmp/haha
sleep 1000000000 &
$ cat test2
#!/bin/bash
echo starting 2 >> /tmp/haha
sleep 1000000000 &
$ sudo run-parts /etc/cron.daily
$ cat /tmp/haha
starting 1
starting 2
In the example above, I created two scripts that simply run sleep 1000000000 &. Because of the &, the process is sent to the background and run-parts moves on to the next script. So, nohup is not needed, all you need is the & at the end of the line that will take a while.
| Running cron.daily in parallel |
1,398,796,996,000 |
As root, I've set up a crontab rule that launches vpnc every day early in the morning (before I arrive at my workplace). But it often happens that the VPN has stopped by midday. As a result, I have to run sudo vpnc ... to relaunch the background process.
How to make the vpnc respawn automatically?
Maybe initab respawn rules or something like that? How would you do? What is the preferable way to do that please?
|
You could put a simple script together, run from cron, that checks whether the vpnc process is still up and relaunches it if not.
#!/bin/bash
if [ "$(pidof vpnc)" ]; then
    echo "running"
    # ..do nothing..
else
    echo "restarting"
    # ..run vpnc here..
fi
Once you've created this script, save it as e.g. /usr/local/sbin/vpnc_checker.bash (not under /etc/cron.d — files there are parsed as crontab fragments, not executed as scripts) and create an entry for it in the file /etc/crontab. This will run it every 5 minutes:
*/5 * * * * root /usr/local/sbin/vpnc_checker.bash
Make sure the script is executable:
$ chmod +x /usr/local/sbin/vpnc_checker.bash
| How to respawn vpnc when it stops? |
1,398,796,996,000 |
I would like to keep my download folder sorted. My goal is to have the files of the current week in the download folder, and the older files in a folder named after week number and year (eg 2013.09, lexicographically sortable) of the creation date of these files.
I only want this to apply to the files and folders found at the root of download folder; tar archives and others archives are sometimes automatically expanded by the browser when the download is complete.
Files and folders currently at the root of the download folder are assumed to be the ones of the current week. However, mtime and ctime don't tell when the file actually finished downloading.
If the machine was always on, I could set a cron task to run as soon as the week number changes, but this is a laptop, and I put it in stand by mode when I don't use it.
|
Set the crontab as:
@reboot the-script
0 0 * * 1 the-script
0 0 1 1 * the-script
To have it done on Mondays and at each boot. And in the-script, check whether it's already been done. (If you use %W for the week number, you also need to run it on the first of January (thanks Gilles); not if you use the ISO 8601 week number (%V).)
If your cron doesn't support @reboot, you'd have to add it to a startup script.
Or just run it daily and do something like this (assuming GNU find, so not OS/X, though you could use OS/X stat in combination with -exec ... {} +):
cd ~/Download || exit
find . -path './20[0-9][0-9].[0-9]*' -prune -o -type f -mtime +7 -printf '%p\0%TY.%TW\0' |
xargs -r0n2 sh -c 'mkdir -p "$2" && exec mv -i "$@"' sh
(untested)
Or with zsh:
zmodload zsh/stat
cd ~/Download || exit
for f (**/*~20[0-9][0-9].[0-9]*(.DNm+7)) {
zstat -A d -F %Y.%W -- $f &&
mkdir -p $d &&
mv -i -- $f $d
}
If you don't want to do it recursively, it's simpler:
zmodload zsh/stat
cd ~/Download || exit
for f (*(.DNm+7)) {
zstat -A d -F %Y.%W -- $f &&
mkdir -p $d &&
mv -i -- $f $d
}
If the last modification time doesn't reflect the time of download, then you may be able to use the birth time. This time using OS/X find, stat and xargs (and using -maxdepth 1 to not recurse):
find . -type f -maxdepth 1 -Btime +7 \
-exec stat -nf%SB -t%Y.%W {} \; \
-exec printf '\0' \; \
-print0 | xargs -0n2 sh -c '
mkdir -p "$1" && mv -i "$2" "$1"' sh
| How to run a script as soon as the week number changes? |
1,398,796,996,000 |
I have a fairly simple shell script (foo.sh) that looks like this:
#!/bin/bash
echo -n "$USER,$(date)," `binary -flags | sed -e '1,30d'`;
exit 0;
This script is supposed to prepare some output which will then be appended to a text file, like so:
foo.sh >> data.csv
When I run the above on a root prompt, it works fine. However, when I type the exact same command into my (root) crontab which looks like this:
05 * * * * /root/foo.sh >> /path/to/data.csv
The crontab output differs! I wasn't expecting this and I can't understand why. See examples below:
Expected, normal output from running my .sh (example):
Fri Jun 14 16:32:34 CEST 2013,20130614163304,268828672,71682561
The output written to the file by cron:
Fri Jun 14 16:32:34 CEST 2013,
The rights on the binary run in the .sh looks like this:
-rwxr-xr-x 1 root root
Why does the output differ? How can I get the correct output into my CSV file?
|
You can't count on having the same environment in a program run via cron as when you run it interactively. There are two differences most likely to matter in this instance:
The current working directory
The PATH
As jordanm commented above, one or both of these in combination is causing the script to not find your binary program.
If you look at your local system mail (e.g. by running mailx) you will probably find messages complaining about this failure.
It is standard practice when writing crontab entries and scripts intended to be run by cron to hard-code paths to known locations of programs.
| Why do I get different outputs when running my shellscript manually from when I run it with cron? [duplicate] |
1,370,511,561,000 |
I would like to issue a warning if cron is not running, or if a certain cron job is not set in the crontab on my server.
Is it possible to check this with PHP?
|
You can parse the output of crontab -l to see if a particular crontab entry is present or not. As for if cron is running or not, you can parse the output of a ps -eaf command to see if crond is running or not.
$ ps -eaf|grep [c]rond
root 1705 1 0 May27 ? 00:00:03 crond
The output from crontab -l would be something like this:
$ crontab -l
0 12 * * * ls
NOTE: You can use PHP's system() or exec() functions to call command-line tools.
EDIT #1
Based on your comment you could do the following from PHP. My script, cronstatus.php:
#!/usr/bin/php
<?php
exec("PATH=/usr/sbin:/usr/bin:/sbin:/bin; service crond status", $out, $ret);
print $ret . "\n";
?>
example run
$ sudo service crond stop
Stopping crond: [ OK ]
$ ./cronstatus.php
3
$ sudo service crond start
Starting crond: [ OK ]
$ ./cronstatus.php
0
The function exec can return the output of the command to a variable, $out and the results of the status returned by the command it executed in $ret.
| How can I check if my cronjob is runnning on my server via PHP? |
1,370,511,561,000 |
I'm running a Python program on my Linux server, and depending on some external data it has to run again at xx minutes or hours from now.
So let's say it runs at 6 AM, and then it has to run again at 7 AM.
Then, at 7AM, it checks some things and it has to run again at 15:45, and then the next day at 2.05AM, and then the next day at 4.05AM etc.
As you can see there is no predefined logic in the times it has to run, it has to be defined at the time it runs.
The only task scheduling mechanism I know is crontab, but I'm not sure how to add tasks to it without running crontab -e. Besides, crontab seems meant for recurring tasks, and in my case I would add a crontab job, run it once, remove it again, and then add a new one.
The only thing I could come up with was to set the next rundate in a textfile, and let a crontab job check it every minute to see if it is time to run the program.
|
crontab should be used for jobs that you want to have repeated regularly. An alternative is at. With this utility you can schedule jobs that you want to execute only once, but in the future.
From within the Python script, you should be able to add a command to the queue of at. The linked page together with the man page should give you enough information to get going.
As per @Michel's comment, this will be
newruntime = (datetime.datetime.now() + datetime.timedelta(minutes=5)).strftime("%H:%M %d.%m.%Y")
command = 'echo " python mainprog.py" | at ' + newruntime
os.system(command)
| How can I schedule a python program to run from another python program? |
1,370,511,561,000 |
I have made a mistake, I have edited /etc/crontab via copy and paste. And now those crontab entries are not working.
[root@process ~]# cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
*/5 * * * * root php /var/www/vhosts/mydomain.com/subdomains/apn/httpdocs/sdkstatistics/callEveryFiveMinute.php 2>&1 /tmp/testlog.log
Later I studied and found that I should not edit the crontab via copy and paste; rather, I have to use crontab -e to add entries.
I have run the crontab -l command to show the current listing
[root@process ~]# crontab -l
7,17,27,37,47,57 * * * * /usr/lib/plesk-9.0/postfix-poplockdb-clean
1,16,31,46 * * * * /usr/local/psa/admin/sbin/backupmng >/dev/null 2>&1
Here, my last entry */5 * * * * root php /var/www/vhosts/mydomain.com/subdomains/apn/httpdocs/sdkstatistics/callEveryFiveMinute.php 2>&1 /tmp/testlog.log is not listed.
Now can i revert back to my previous state?
Or, better what should I do?
|
You don't say which distro or version of the cron daemon you are running, but it is likely that your cron maintains system crontab files (/etc/crontab) and also per-user crontab files.
The file that you are seeing when you run crontab -l (as root) is probably /var/spool/cron/crontabs/root which is where you want to make your changes, by using crontab -e
Without further information on your distro version or cron daemon, I won't go on, but it is possible you have broken the /etc/crontab as it has a slightly different syntax to the per-user crontab files. See
man cron
man crontab
and
man 5 crontab
for more details on your cron daemon configuration, then remove the erroneous line from /etc/crontab and either add it one of to the /etc/cron.* directories or to a users crontab file. You may find information on the error in /etc/crontab from the system log files in /var/log
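A quick side-by-side of the two formats may help (the job shown is a hypothetical placeholder):

```
# /etc/crontab (system crontab): a sixth field names the user to run as
5 * * * * root /path/to/job.sh

# per-user crontab (edited with crontab -e): no user field
5 * * * * /path/to/job.sh
```

A line carrying the extra user field pasted into a per-user crontab (or vice versa) will fail or run the wrong command.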
| /etc/crontab edited via copy and paste, how to revert back? |
1,370,511,561,000 |
From a server hosted in Poland (UTC +01:00), is there a way I can consistently have a crontab entry run at 9am New York time (UTC -05:00)?
For me this wasn't trivial since the daylight saving time ended last Sunday in Poland, so jobs that I have scheduled to run at 15:00 their local time most of the year are late this week by an hour from the point of view of US stock exchanges.
I remember Congress' decision a few years ago to extend DST, which put things out of sync for a few weeks per year.
One workaround I am not even sure would work (which I prefer to avoid anyhow) is that when I have a task:
0 14 * * * something
I fetch New York's time with TZ=":US/Eastern" date +%s and use bash arithmetic to decide whether I should sleep 3600 or not before running something
|
Okay, since altering the TZ env for date does seem to call up the correct current time in Ubuntu, this is at least a workaround (the one I was trying not to rely on):
SHELL=/bin/bash
0 14 * * 1-5 [ $[10#$(date +\%H) - 10#$(TZ=":US/Eastern" date +\%H)] == 5 ] || sleep 3600; w;df
So this aims to run w;df at 9am ET Mon-Fri. Instead of hour 15 I put in 14 with the possibility of sleeping one hour (3600 seconds) before running further commands.
| crontab and DST disagreement with different timezone |
1,370,511,561,000 |
I need to run a command once, but only once, per day, until it succeeds.
Continuous uptime cannot be expected, and program success cannot be guaranteed.
This program requires network access, but my computer does not have network access every time I start it.
My program exits with a non-zero status (e.g. -1) unless it succeeds, in which case it returns 0.
|
Use a shell to provide this. For example, create a script with something like the following:
#!/bin/sh
# Check to see if this is already running from some other day
mkdir /tmp/lock || exit 1
while ! command-to-execute-until-succeed; do
# Wait 30 seconds between successive runs of the command
sleep 30
done
rmdir /tmp/lock
After that, point cron to the script.
| How do I run a program only once per day, while accounting for variable uptime and possible failure of program? |
1,370,511,561,000 |
I have a problem, some user ,and I'm not sure who, on one of my servers wrote a cronjob that executes every night at midnight. The cronjob creates a sql dump of a database which is then grabbed by another server and gzipped.
The problem that I'm experiencing is that once that has occurred the file on the local system is no longer needed and I'd like to remove it in an hour or so. Which in, and of itself is not an issue but I don't know where to go to find the crontab for the person in question. Can anyone think of a way to find the cronjob from all the potential places that one could store a cronjob?
|
On Solaris, at least, look in /var/spool/cron/crontabs.
| Need to find a cronjob |
1,370,511,561,000 |
I wrote a combination .sh and .exp scripts that:
activate vpn connection
connect to remote server
download some files from server
deactivate vpn connection
This scripts should run on schedule.
I use nmcli for activate and deactivate connections.
If I run the scripts manually they work correctly, but if I run them via cron I receive this message when activating the VPN connection: Error: Connection activation failed: Not authorized to control networking.
In logs I see that the script is run from under me:
Dec 6 18:48:01 maskalev-Aspire-A514-54 CRON[10975]: (maskalev) CMD (./dev/promomed/__DRAFTS__/utils/sftp_monitor/main.sh)
my groups:
maskalev@maskalev-Aspire-A514-54:~$ groups
maskalev root adm cdrom sudo dip plugdev netdev lpadmin lxd sambashare docker
nmcli permissions
maskalev@maskalev-Aspire-A514-54:~$ nmcli general permissions
PERMISSION VALUE
org.freedesktop.NetworkManager.checkpoint-rollback auth
org.freedesktop.NetworkManager.enable-disable-connectivity-check yes
org.freedesktop.NetworkManager.enable-disable-network yes
org.freedesktop.NetworkManager.enable-disable-statistics yes
org.freedesktop.NetworkManager.enable-disable-wifi yes
org.freedesktop.NetworkManager.enable-disable-wimax yes
org.freedesktop.NetworkManager.enable-disable-wwan yes
org.freedesktop.NetworkManager.network-control yes
org.freedesktop.NetworkManager.reload auth
org.freedesktop.NetworkManager.settings.modify.global-dns auth
org.freedesktop.NetworkManager.settings.modify.hostname auth
org.freedesktop.NetworkManager.settings.modify.own yes
org.freedesktop.NetworkManager.settings.modify.system yes
org.freedesktop.NetworkManager.sleep-wake no
org.freedesktop.NetworkManager.wifi.scan yes
org.freedesktop.NetworkManager.wifi.share.open yes
org.freedesktop.NetworkManager.wifi.share.protected yes
I think I'm interested in enable-disable-network, aren't I?
Do you have any ideas?
Maybe I can solve this problem (mainly activating the VPN) some other way?
OS -- Ubuntu 22.04
|
Thanks @woodin for advice (after 8 months I returned to this question)!
Here is what I did.
First, I compared the output of nmcli general permissions launched from the terminal and from cron.
From terminal (permissions for me)
PERMISSION VALUE
org.freedesktop.NetworkManager.checkpoint-rollback auth
org.freedesktop.NetworkManager.enable-disable-connectivity-check yes
org.freedesktop.NetworkManager.enable-disable-network yes
org.freedesktop.NetworkManager.enable-disable-statistics yes
org.freedesktop.NetworkManager.enable-disable-wifi yes
org.freedesktop.NetworkManager.enable-disable-wimax yes
org.freedesktop.NetworkManager.enable-disable-wwan yes
org.freedesktop.NetworkManager.network-control yes
org.freedesktop.NetworkManager.reload auth
org.freedesktop.NetworkManager.settings.modify.global-dns auth
org.freedesktop.NetworkManager.settings.modify.hostname auth
org.freedesktop.NetworkManager.settings.modify.own yes
org.freedesktop.NetworkManager.settings.modify.system yes
org.freedesktop.NetworkManager.sleep-wake no
org.freedesktop.NetworkManager.wifi.scan yes
org.freedesktop.NetworkManager.wifi.share.open yes
org.freedesktop.NetworkManager.wifi.share.protected yes
From cron (permissions for cron or rather for adm group users)
PERMISSION VALUE
org.freedesktop.NetworkManager.checkpoint-rollback auth
org.freedesktop.NetworkManager.enable-disable-connectivity-check no
org.freedesktop.NetworkManager.enable-disable-network no
org.freedesktop.NetworkManager.enable-disable-statistics no
org.freedesktop.NetworkManager.enable-disable-wifi no
org.freedesktop.NetworkManager.enable-disable-wimax no
org.freedesktop.NetworkManager.enable-disable-wwan no
org.freedesktop.NetworkManager.network-control auth
org.freedesktop.NetworkManager.reload auth
org.freedesktop.NetworkManager.settings.modify.global-dns auth
org.freedesktop.NetworkManager.settings.modify.hostname auth
org.freedesktop.NetworkManager.settings.modify.own auth
org.freedesktop.NetworkManager.settings.modify.system no
org.freedesktop.NetworkManager.sleep-wake no
org.freedesktop.NetworkManager.wifi.scan auth
org.freedesktop.NetworkManager.wifi.share.open no
org.freedesktop.NetworkManager.wifi.share.protected no
All I need in my case is to grant the network-control permission.
I added an x.pkla file to /etc/polkit-1/localauthority/50-local.d/ (Docs here):
[Let adm group modify system settings for network]
Identity=unix-group:adm
Action=org.freedesktop.NetworkManager.network-control
ResultAny=yes
You may need to reload polkit after this.
Check the output:
PERMISSION VALUE
org.freedesktop.NetworkManager.checkpoint-rollback auth
org.freedesktop.NetworkManager.enable-disable-connectivity-check no
org.freedesktop.NetworkManager.enable-disable-network no
org.freedesktop.NetworkManager.enable-disable-statistics no
org.freedesktop.NetworkManager.enable-disable-wifi no
org.freedesktop.NetworkManager.enable-disable-wimax no
org.freedesktop.NetworkManager.enable-disable-wwan no
org.freedesktop.NetworkManager.network-control yes
org.freedesktop.NetworkManager.reload auth
org.freedesktop.NetworkManager.settings.modify.global-dns auth
org.freedesktop.NetworkManager.settings.modify.hostname auth
org.freedesktop.NetworkManager.settings.modify.own auth
org.freedesktop.NetworkManager.settings.modify.system no
org.freedesktop.NetworkManager.sleep-wake no
org.freedesktop.NetworkManager.wifi.scan auth
org.freedesktop.NetworkManager.wifi.share.open no
org.freedesktop.NetworkManager.wifi.share.protected no
Now I can activate (and deactivate) networks via cron!
| nmcli Error: Connection activation failed: Not authorized to control networking |
1,370,511,561,000 |
I'm testing to set up some backup.
I set up my crontab to run every minute.
I have a disk that is mounted on machine2 that I will upload the backup to.
I'm compressing the content of folder /home/user/important to important.tar.gz and moving the tar.gz file to machine2's /mnt/backup2 folder.
cron tab entry:
* * * * * tar -czvf /home/user/important/important.tar.gz /home/user/important &&
rsync -vzhe ssh /home/user/important/*.tar* machine2:/mnt/backup2
This will not run with crontab.
When I run the same command in a terminal, it works:
/home/user/important/
/home/user/important/a.txt
/home/user/important/test.txt
/home/user/important/this.txt
/home/user/important/is.txt
/home/user/important/important.tar.gz
/home/user/important/important.txt
/home/user/important/test1/
important.tar.gz
sent 9.85K bytes received 35 bytes 19.76K bytes/sec
total size is 9.75K speedup is 0.99
And file is received on machine2 in the /mnt/backup2 folder.
Any suggestions why it doesn't work with crontab?
I'm running Ubuntu.
|
It's hard to know what the real problem is here, but here are a couple of things you could try:
Put this one-liner in a script, chmod +x it, and use that in the crontab
This will give you more control over whatever is happening (e.g.: set -x, etc).
Enable debugging output to see what is causing your crontab entry to not work as expected:
#!/bin/bash -x
or if you're using sh
#!/bin/sh -x
or just use set -x instead.
I'm guessing it might be a problem of permissions or shell globbing, but if it isn't solved by being in a script, you should do this:
script.sh &> script.log
So you can clearly see what and how something is failing when being run as a cronjob...
Check that the entry is actually in the crontab
This one is obvious, but worth checking. Some add an entry by piping it in directly (note this replaces the whole crontab):
echo "job entry" | crontab -
or the "normal" way, which opens a text editor so you can type or paste the job entry manually:
crontab -e
or if you prefer a method that prevent duplicated entry:
cronadd() {
if crontab -l | grep -wq -- "$@"; then
:
else
(crontab -l 2>/dev/null; echo "$@") | crontab -
fi
}
And you could use it like so:
cronadd "job entry here"
It should prevent adding duplicate jobs, and you can make sure the entry is actually added.
Always check with crontab -l, though.
| Command works in terminal, but not in crontab |
1,370,511,561,000 |
I'm making a small script in which I make several calls to Zenity. Executing the script manually or executing the commands from the terminal works properly. However, when I run them from Cron they give me problems. To test it, I have put in crontab two commands:
export DISPLAY=:0 && zenity --info --text "Window test"
export DISPLAY=:0 && zenity --notification --text "Notification test"
The first command shows an independent notification window with no problem, but the second one, which should show a floating system-tray notification, shows nothing at all when run from crontab.
What can I do to make zenity --notification work from crontab if it works without any problem from another non-graphical TTY?
My system is KDE-Neon 5.19 with Ubuntu 20.04 and Plasma desktop 5.19.4. The version of Zenity is 3.32.0.
|
I added this to my user's crontab; this has been tested with python (notify2 library) and zenity:
DISPLAY=":0.0"
XAUTHORITY="/home/my_username/.Xauthority"
XDG_RUNTIME_DIR="/run/user/1000"
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
| Do Zenity system tray notifications not work with Cron? |
1,370,511,561,000 |
Using Mac OSX version 10.14.x
Want to run a Python script every 5 minutes.
Created a shell script, check.sh and marked it executable so that it runs the Python script as shown below.
/usr/bin/python resolve.py
Created a crontab entry using crontab -e command as shown below.
5 * * * * ./check.sh
It is listed successfully using the crontab -l command:
$ crontab -l
5 * * * * ./check.sh
The Python script is supposed to create a log file which is updated each time it executes.
So, I would expect the cron job to execute every 5 minutes and the log file to be created. But this does not happen.
I'm not sure if the cron job is not executing or some other issue.
Please note that when I run the Python script directly, it executes properly and the log file is created.
|
Your cron job is scheduled to execute every hour 5 minutes after the hour. If you want it to run every 5 minutes change it to:
*/5 * * * * ./check.sh
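The two field forms are easy to mix up; here is a tiny, hedged sketch (a toy matcher, not cron's actual code) of how a minute field is interpreted: a bare `5` matches only minute 5, while `*/5` matches every minute divisible by 5:

```shell
# Toy matcher for a single cron minute field; supports N, *, and */N only.
minute_matches() {
  m=$1 field=$2
  case $field in
    '*')  return 0 ;;                                  # * matches everything
    \*/*) step=${field#*/}; [ $((m % step)) -eq 0 ] ;; # */N: every Nth minute
    *)    [ "$m" -eq "$field" ] ;;                     # N: only that minute
  esac
}

minute_matches 5 5 && echo "minute 5 matches field 5"
minute_matches 10 '*/5' && echo "minute 10 matches field */5"
```

So `5 * * * *` fires once per hour at :05, and `*/5 * * * *` fires every five minutes.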
| Cron tasks not executing on Mac OSX |
1,370,511,561,000 |
I have this incrontab entry, which monitors the master directory so that, when a new file is placed there, the PHP file is run:
/var/www/html/docs/int/master IN_MOVE php /var/www/html/shscript/work.php
I have a crontab job that runs every minute and invokes a .sh file; the script copies the txt files into the master directory watched by the incrontab entry above (that part works well):
cd /mnt/test1/int/master
cp *.txt /var/www/html/docs/int/master
The problem:
When the cron job finishes (the copy is successful), incron does not trigger; no event such as IN_MOVE or IN_MOVED_TO is detected. If I change the event to IN_MODIFY, it works and executes the PHP file, but I don't want to run the PHP file on every modification: I need to run it only once the file has been fully copied.
I don't know what I'm doing wrong.
|
The resulting incrontab entry uses IN_CLOSE_WRITE, which fires when cp closes the file after writing it, i.e. once the copy is complete:
/var/www/html/docs/int/master IN_CLOSE_WRITE php /var/www/html/shscript/work.php
Script file to copy from the master repository to the Work directory and after that to move it to processed
#!/bin/sh
cd /mnt/test1/int/master
find . -maxdepth 1 -type f -exec bash -c '
for item do
    cp "$item" /var/www/html/docs/int/master && mv -f "$item" /mnt/test1/int/master/procesado
done
' bash {} +
| incrontab is not triggering with end of cron sh invokes |
1,370,511,561,000 |
I'm running a Raspberry Pi Model 2B with Raspbian Stretch (Debian).
I have a UI-application (chromium-browser, started via /home/pi/.config/lxsession/LXDE-pi/autostart) and a script started via cron, both of which depend on the actual current time being set when they start (or I'd have to put some ugly hacks in the code to deal with it).
The clock lags several hours behind at startup, until at some point the time gets synced over WLAN.
I was thinking that maybe I could somehow delay the execution of autostart and cron, until NTP had the chance to sync up.
How would I do that? Or can I start both in some other way?
|
So what I ended up doing is implementing some rudimentary time jump recognition in the script and the application.
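The answer doesn't show the actual check; one minimal way to sketch it in the shell (my assumption of the approach, not the author's code) is to block until the clock reports a time later than a reference timestamp fixed at deploy time, since a Pi without an RTC boots with a stale clock:

```shell
# Block until the clock looks real. REF is an assumed deploy-time timestamp
# (here: 2021-01-01 UTC); adjust it when deploying. Any reading past REF
# means NTP has already stepped the clock forward.
REF=1609459200

wait_for_real_time() {
  while [ "$(date +%s)" -lt "$REF" ]; do
    sleep 5
  done
}

wait_for_real_time
echo "clock looks sane: $(date)"
```

A wrapper like this could run before the cron job's real command, or at the top of the autostarted script.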
| Delay further startup until time is synced |
1,370,511,561,000 |
I have a shell script that works fine when run manually, but fails when run through the crontab.
The script essentially does as follows:
Python script to get audio data and pipe to stdout | ffmpeg the data from stdin and pipe to stdout | stream the data from stdin
When run through crontab the streaming fails, complaining that there is no data at stdin (...No (more) data available on standard input).
I found this answer which seems to allude to the issue of file descriptors in crontab, but I would appreciate some more details on the problem and the best way to get around it.
EDIT:
Troubleshooting the issue by trying each individual command separately shows that the issue starts in the python script which complains:
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
instead of outputting audio data. Following the advice here and here (added sys.stdout.flush() to the end of the file) I can see the actual error message is:
Traceback (most recent call last):
File "/home/*username*/testing.py", line 109, in <module>
sys.stdout.flush()
IOError: [Errno 9] Bad file descriptor
So perhaps it's more of a python issue..though from the error it stil seems to have to do with stdin/stdout
|
As it turns out, the issue really was with the Python file and cron, but not a file-descriptor (stdin/stdout) issue in the way I had expected.
Rather, as per this answer, a line asking for user input while running under cron was causing the problem. I solved it by removing the request for user input, as for me it was unnecessary.
| Shell script with pipes not working in crontab |
1,370,511,561,000 |
I use the following /etc/crontab code to create daily backups of my database in limit of the last 30 days:
0 8 * * * mysqldump -u root -PASSWORD --all-databases > /root/backups/mysql/db.sql
1 8 * * * zip /root/backups/mysql/db-$(date +\%F-\%T-).sql.zip /root/backups/mysql/db.sql
2 8 * * * rm /root/backups/mysql/db.sql
2 8 * * * find /root/backups/mysql/* -mtime +30 -exec rm {} \;
My problem:
I must put my password where -PASSWORD is, so my password is exposed: if I open this file to change something, anyone near me could see it.
Is it possible to run the script without putting the password there? Alternatively, do you know a similar syntax that won't force me to write the password into the crontab?
|
Create a script to do the complete dump, backup and cleanup.
Schedule the script.
Additionally, the password to mysql may also be stored in a protected file and does not need to be given on the command line.
MySQL has a "End-User Guidelines for Password Security" document that you may want to consult.
To summarize that document:
Create .my.cnf in your home directory and add the password to it like this:
[client]
password=your_pass
Then remove read-permissions on the file for other users:
$ chmod 600 .my.cnf
or, equivalently,
$ chmod u=rw,go-rwx .my.cnf
This file, if named .my.cnf and placed in you home directory, will automatically be used by the mysql client program (as well as by mysqldump).
Still, do put the backup etc. into its own script and schedule that instead. That will be a whole lot easier to maintain than a number of cron jobs.
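A minimal sketch of such a standalone script, assuming the paths from the question and that the password now lives in ~/.my.cnf (the mysqldump and zip calls are defined but not exercised here):

```shell
#!/bin/sh
# Sketch of a standalone backup script to schedule as a single cron job.
# The paths and the 30-day retention are assumptions taken from the question.

backup_name() {            # timestamped archive name, e.g. db-2024-01-31.sql.zip
  printf 'db-%s.sql.zip' "$(date +%F)"
}

run_backup() {
  dir=${1:-/root/backups/mysql}
  mysqldump --all-databases > "$dir/db.sql"   # credentials read from ~/.my.cnf
  zip -q "$dir/$(backup_name)" "$dir/db.sql"
  rm "$dir/db.sql"
  find "$dir" -type f -mtime +30 -delete      # prune archives older than 30 days
}

# The crontab then shrinks to one line, e.g.:
# 0 8 * * * /root/bin/db-backup.sh
```

Keeping the whole sequence in one script also avoids the original fragile assumption that the zip and cleanup jobs start exactly one and two minutes after the dump finishes.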
| Making a cron-scheduled database backup (dump) without exposing the password in /etc/crontab |
1,370,511,561,000 |
For a project I want to use wget in a cron to download a data file. In the wget statement, a start- and enddate have to be defined in the following format:
wget --post-data="stns=235&vars=TEMP&start=YYYYMMDDHH&end=YYYYMMDDHH"
Since I want it to be done by a cron job, I would like the start- and enddate to be set automatically. More specific, I would like the startdate to be set to '1 hour ago' and the enddate to 'now'.
There has been a similar question in the post Using date -1day with wget. Here the suggested solution was to put the variables between single quotes, but this did not work. E.g.:
"[...]start='`date -d yesterday +%Y%m%d%H'&end=`date +%Y%m%d%H`"
I simply got the error "Error 400: Bad request" when trying to execute the wget-statement in the terminal.
Thank you.
|
Within a cron job, % is special and must be escaped. Also, backquote syntax is best avoided; prefer $(...). I would suggest something like the following:
wget --post-data="start=$(date ... +\%Y\%m\%d\%H)&end=$(date ... +\%Y\%m\%d\%H)&..."
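Outside of a crontab no escaping is needed; a quick sanity check of the two timestamps (GNU date assumed for the -d option):

```shell
# start = one hour ago, end = now, both as YYYYMMDDHH
start=$(date -d '1 hour ago' +%Y%m%d%H)
end=$(date +%Y%m%d%H)
printf 'start=%s end=%s\n' "$start" "$end"
```

Inside the crontab entry, every % in those format strings must be written as \%, otherwise cron treats it as a newline.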
| Using date variable with wget --post-data [duplicate] |
1,370,511,561,000 |
I set up two cron tasks to mute my desktop's audio at night and then unmute it in the morning (so that emails and other notifications don't wake me up):
lumpy@cheetoserver:~$ crontab -e
# At 10:15 PM every night, mute the volume
15 22 * * * /usr/bin/amixer -q set Master mute
# At 7AM every morning, unmute the volume
0 7 * * * /usr/bin/amixer -q set Master unmute
The audio is muted at night, but never gets unmuted in the morning. Yet if I simply execute the 7AM unmute command in the shell:
/usr/bin/amixer -q set Master unmute
...the audio is unmuted immediately.
I tested both commands before entering them into crontab, and they mute and unmute immediately (i.e, it doesn't take two unmutes to counteract a single mute, or anything like that).
Can anyone shed light on why this isn't working?
|
I found other posts by people having exactly the same issue. The problem seems to be that the cron job runs without the necessary context, and adding export DISPLAY=:0 to each task is the solution:
lumpy@cheetoserver:~$ crontab -e
# At 10:15 PM every night, mute the volume
15 22 * * * export DISPLAY=:0 && /usr/bin/amixer -q set Master mute
# At 7AM every morning, unmute the volume
0 7 * * * export DISPLAY=:0 && /usr/bin/amixer -q set Master unmute
For the sake of completeness, I'll add that several solutions mentioned changing set Master in the two tasks above to set Master playback. This made no difference in my case, but if the first solution doesn't resolve your problem, you may want to try this.
| cron task to unmute my audio isn't working |
1,370,511,561,000 |
Basically I want to create a directory at some fixed time and after exactly five minutes, I want to create a text file in that directory.
I tried this code but it didn't work
6 13 * * * /usr/bin/mkdir /qwerty /usr/bin/touch file1
|
Here is the command you asked cron to run:
/usr/bin/mkdir /qwerty /usr/bin/touch file1
This calls mkdir with three parameters: /qwerty, /usr/bin/touch,
and file1. So, mkdir will attempt to create those as directories.
You probably meant to run those as two separate commands:
6 13 * * * /usr/bin/mkdir /qwerty
11 13 * * * /usr/bin/touch /qwerty/file1
Another style would do this as a one-liner:
6 13 * * * /usr/bin/mkdir /qwerty && sleep 5m && /usr/bin/touch /qwerty/file1
Note that using cron for one-off jobs is strange; as mentioned in a
comment to your question, an at job would make more sense.
Also, this will still fail unless the user is allowed to create
directories under /.
| How can I create a directory using crontab and after five minutes create a txt file inside that directory? |
1,370,511,561,000 |
I want to run a bash script say every 5 minutes, with a predefined argument according to a cycle.
For example, I want to use as the argument 1, 2, 5, 10, 15, 50, 15, 10, 5, 2, and then start the cycle again.
Ideally, the arguments are stored in a file or in the script where I can easily edit them, add or remove some, etc.
How do I do that ?
I could do it with a single script, an array and a while [[ true ]] but I'd like to know if I can do that with cron.
|
Probably the most effective, and one of the simpler, ways to accomplish this would be to have the script itself keep track of cycling the magic number, rather than using arguments. Something like this:
#!/bin/bash
sequence=(1 2 5 10 15 50 15 10 5 2)
if [[ -r /var/tmp/myjob.seq ]]; then
seq="$(cat /var/tmp/myjob.seq)"
if [[ $seq -lt $((${#sequence[@]}-1)) ]]; then
nextseq=$(($seq+1))
else
nextseq=0
fi
echo $nextseq > /var/tmp/myjob.seq
else
seq=0
echo 1 > /var/tmp/myjob.seq
fi
magicnumber=${sequence[$seq]}
You can then refer to $magicnumber later on in the script, and use whatever cron schedule you like.
| How do I do a cron job with cyclical argument? |
1,370,511,561,000 |
The reason I ask this is to save resources. I try to save automated cron tasks wherever I can.
Say one installs ClamAV on an Ubuntu-based Apache server via the commands:
apt-get update
apt-get install clamav
How do you then set the program, through the terminal, to work in an all-manual mode, without any task being performed automatically from cron?
Edit: My VPS is 2 Core processor, 40GB SSD, 2GB RAM, 3TB transfer, Ubuntu 16.0.
|
For this specific task, you'd do dpkg-reconfigure -plow clamav-freshclam and select manual.
Note however that freshclam which updates databases uses minuscule amount of resources (runs one per hour for less than a second if there are no updates), and that clamav with outdated databases is a big problem.
So running update manually every day or so to save 0.01% of system resources is not only big waste of your time (and can lead to less protected server if you're not fast enough), but may actually even use more resources as just the act of your logging in to the server need some resources (which are still so minuscule as to be totally unimportant)
So you're better off leaving it at defaults in this case.
| How to set ClamAV for manual-only work in Linux (no automatic cron tasks) |
1,370,511,561,000 |
I'm stuck with the following (simple) problem :
I want a script to be executed every 10 minutes. This script calls executable files. I use crontab and ksh on a AIX 5.3 system.
The script makes use of relative paths, but changing the executable path to absolute didn't make any difference. So, after a few tries and this answer, I came up with the following crontab entry (*/10 doesn't work)
rs14:/home/viloin# crontab -l
0,10,20,30,40,50 * * * * cd /home/viloin/cardme/bin && /bin/ksh myScript.ksh
here is the script :
#!/bin/ksh
Main(){
printf "executed in : %s\n" $(pwd);
executableFile 2>/dev/null 1>&2;
exeResult=$?; # expected return value : 90
printf "%s\n" $exeResult;
}
Main;
Here is the output when I run the command manually :
rs14:/home/viloin/cardme/bin# cd /home/viloin/cardme/bin && /bin/ksh myScript.ksh
executed in : /home/viloin/cardme/bin
90
And finally the output when crontab runs it for me (from mail) :
Subject: Output from cron job cd /home/viloin/cardme/bin && /bin/ksh myScript.ksh, [email protected], exit status 0
Cron Environment:
SHELL = /usr/bin/sh
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14/jre/bin:/usr/java14/bin
CRONDIR=/var/spool/cron/crontabs
ATDIR=/var/spool/cron/atjobs
LOGNAME=viloin
HOME=/home/viloin
Your "cron" job executed on rs14.saprr.local on Wed Aug 24 11:50:00 CEST 2016
cd /home/viloin/cardme/bin && /bin/ksh myScript.ksh
produced the following output:
executed in : /home/viloin/cardme/bin
127
*************************************************
Cron: The previous message is the standard output
and standard error of one of your cron commands.
My file myScript.ksh has all rights :
rs14:/home/viloin/cardme/bin# ll -al myScript.ksh
-rwxrwxrwx 1 viloin cardme 174 Aug 24 10:54 myScript.ksh
To make sure that my executableFile is not really exiting with code 127, I replaced it with a renamed copy of the echo binary and got the same behavior (except that it returns 0 instead of 90 when I run the command manually).
What is causing this difference between manually typing the command and asking crontab to do it for me ?
|
Change your shell script to provide a full or relative path to the executable:
./executableFile ...
In interactive use, you must have either . or the cardme/bin directory in your PATH; that will not be true in cron's environment.
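The 127 can be reproduced without cron: under a stripped-down PATH, a bare command name that isn't found on it fails with exactly that status (executableFile is the question's placeholder name):

```shell
# Simulate cron's restricted environment: the command is not on the reduced
# PATH, so the shell reports "not found" and exits with status 127.
status=0
sh -c 'PATH=/usr/bin:/bin; executableFile' 2>/dev/null || status=$?
echo "exit status: $status"
```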
| Execution of a program called by a shell called by crontab returns code 127 |
1,370,511,561,000 |
In Linux (Mint / Ubuntu), I can create cron jobs for each user individually. Is there any way I can find out the name of the user under whom a cron job is running? I want to get the username of the cron job's owner in a shell script which is going to be run from cron...
|
Get the script's owner
On any system with as stat that is compatible with modern GNU stat, the user ID of the owner of the script is:
stat -c %u "$0"
The user name of the owner of the script is:
stat -c %U "$0"
In general on linux, stat -c %U file returns the owner of file. We substitute in $0 because that variable typically contains the name of current script file.
Getting effective user ID of the user running the script
To get the effective user ID number of the user running a script, use id -u:
$ id -u
1001
To save it in a variable
$ uid=$(id -u)
$ echo "$uid"
1001
If the script is running under bash, then the effective user ID is stored in the shell variable EUID:
$ echo $EUID
1001
On many systems, the default is for cronjobs to run under dash which does not support EUID. Thus, it is safer and more reliable to use id -u as shown above.
Use the -n option in addition to the -u option to get the user name:
$ id -un
john1024
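Putting the two together, a small helper sketch (GNU stat assumed; on BSD/macOS the equivalent flag is stat -f %Su):

```shell
# Report who owns a file versus who is running the script.
owner_of() { stat -c %U "$1"; }      # GNU coreutils stat

f=$(mktemp)                          # demo file: created by us, so owned by us
printf 'owner=%s runner=%s\n' "$(owner_of "$f")" "$(id -un)"
rm -f "$f"
```

In a real cron'd script you would call owner_of "$0" to get the script's owner.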
| Get owner / user of a cronjob |
1,370,511,561,000 |
I've been having an issue where I'll see many instances of a particular process:
/usr/sbin/sendmail -FCronDaemon -i -odi -oem -oi -t
I've done a bit of reading and it seems like the processes are starting up to send out the stdout output of a cron job, but for some reason never terminate.
There's one process per day, so I believe it's related to the daily cron jobs. The start time of the process in ps aux (04:01 daily) seems to coincide with the kick-off time for the daily cron jobs (04:02 daily). The contents of /etc/cron.daily are:
0anacron 0logwatch cups logrotate makewhatis.cron mlocate.cron rpm tmpwatch
The contents of /etc/crontab are:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
So far, I've manually killed these processes when they accumulate; if I don't, the server runs out of resources and the service running on it stops. Worst case, I'll simply set up another cron job that kills these processes, but I'd rather stop the problem at the source. Does anyone know the cause of this issue? Can anyone provide steps to debug it?
|
The issue wasn't rooted in sendmail at all. Using pstree, I was able to determine that there were many more processes that were also hanging, not terminating, and parented by crond. I looked through each of these processes and discovered that one process was doing something along the lines of
cat /var/log/some_log_file
When I did ls /var/log/some_log_file, I saw
/var/log/some_log_file|
some_log_file was actually a named pipe! It seems that the cron job was trying to read from this pipe, but never terminated because nothing was ever sent to the pipe.
As a fix, I deleted it and made it a regular file.
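The blocking behaviour is easy to reproduce. Here is a minimal sketch using a temporary directory as a stand-in for the real log path:

```shell
#!/bin/sh
# Sketch: a reader on a FIFO with no writer blocks forever -- which is how
# a stray named pipe in /var/log can hang a cron job. The path here is a
# temporary stand-in for the real log file.
dir=$(mktemp -d)
mkfifo "$dir/some_log_file"
# cat would block indefinitely, so bound it with timeout(1):
timeout 1 cat "$dir/some_log_file"
status=$?                      # 124 means timeout killed the blocked reader
# Stray FIFOs can be located with find(1) by file type 'p':
pipes=$(find "$dir" -type p)
rm -r "$dir"
```

The same `find … -type p` search is a quick way to audit a directory tree for named pipes before blaming the processes that read from them.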
| Accumulating sendmail processes |
1,370,511,561,000 |
I'm trying to make a backup .sh script to zip my files and to protect the zip archive with a password.
For that, I'm using the zip package (apt-get install zip), which has an encryption option accessible via the -e parameter. How can I specify the password directly? After typing the command I'm prompted for a password, but the script will be run unattended by cron.
Here's my .sh file:
zip -r -e -q ~/var/backup/backup_`date +%Y_%m_%d-%H_%M` /var/www/
Here's the result with the -e parameter:
How can I automatically set a password and then retype it in a .sh file (that will be run as a cron job)?
|
You can use the -P parameter to specify the password on the command line:
zip -r -e -q -P myPasswordHere ~/var/backup/backup_`date +%Y_%m_%d-%H_%M` /var/www/
You can find this sort of thing out by looking at the manual page for the program in question:
man zip
| Using zip package for debian with password |
1,370,511,561,000 |
I am new to Unix here.
We have some mailboxes that are taking up an incredible amount of space and I'm trying to figure out a way to delete all mail that has been in the box for 30 days. Most of what I look up deals with just one mailbox.
I haven't done much in this area yet and any help would be greatly appreciated.
|
If you want to clear out all of the mailbox contents except maybe root and some other protected user, you can use something like this:
for mbox in /var/spool/mail/*; do case "${mbox##*/}" in root|protecteduser) ;; *) : > "$mbox";; esac; done
and schedule it in cron to run on the 1st day of each month with
crontab -e
insert the following line at the end of the crontab:
0 2 1 * * /path/to/mailbox/cleaner/script
This will make the script run on the first day of each month at 2 AM.
On the other hand if you need to clean mail which is older than 30 days in each mailbox, it will need a totally different approach. If this is your intention, please update your original post.
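For reference, if the mailboxes were stored in Maildir format (one file per message) rather than mbox, per-message pruning by age becomes straightforward with find(1). A sketch demonstrated on a temporary directory; on a real system the path would be something like /home/*/Maildir/cur (hypothetical layout, GNU touch/find assumed):

```shell
#!/bin/sh
# Sketch: with Maildir, each message is its own file, so find(1) can
# delete messages older than 30 days. Demonstrated on a temp directory;
# real paths would be e.g. /home/*/Maildir/cur (hypothetical).
maildir=$(mktemp -d)
touch "$maildir/new_msg"
touch -d '40 days ago' "$maildir/old_msg"   # GNU touch
find "$maildir" -type f -mtime +30 -delete
ls "$maildir"                               # only new_msg survives
```

With mbox files, by contrast, all messages live in one file, so age-based pruning requires a mail-aware tool (e.g. archivemail) rather than filesystem timestamps.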
| How would you build a cron that empties the mail in all mailboxes? |
1,370,511,561,000 |
So, I could use the crontab command, with:
23 0 * ...
but at 23:00 my laptop may be turned off or hibernated. In that case I want the command to be executed as soon as possible. How can I do that?
|
Use @reboot in addition to your timing (if your crond supports it):
@reboot command
23 0 * * * command
The obvious caveat is that if you boot your computer at 22:59 the command will run twice in very short order. Make sure the command can be run twice at the same time without one process stomping on the other.
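One way to make the double-run harmless is to serialize the two entries on a lock file with flock(1) from util-linux. A sketch, with the lock path and the wrapped command purely illustrative:

```shell
#!/bin/sh
# Sketch: wrap the command in flock(1) so the @reboot entry and the timed
# entry can never run concurrently; the second invocation simply skips.
# Lock path and command are illustrative.
LOCK=${TMPDIR:-/tmp}/mytask.lock
out=$(flock -n "$LOCK" echo "task ran") || out="another instance holds the lock"
echo "$out"
```

Also worth knowing: anacron is designed for exactly this situation (machines that are not always on) and will run missed daily/weekly jobs shortly after the next boot, which may be simpler than the @reboot trick.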
| How to schedule task to run everyday, if I don't know when the pc will be turned on? |