✓ Solved

crontab stopped working around 10/2/21; how to get running again

Crontab has been running for many months, but suddenly stopped working around 10/1/2021. I use it to do simple mysql table updates for various tables at prescribed times. No changes have been made, not to cron settings nor to any of my related code. Rebooting the server does not solve the problem.

Running CentOS.

To be clear, there are cron jobs scheduled throughout the week and repeating weekly. Each cronjob invokes a php script that updates a table specific to that cronjob.

I see in a cron log that the commands do get issued on time. I manually run the script that the cron lines specify. The command is a script accessible by a URL, and I use that URL with my browsers to test it. The results are successful; the tables are updated.

And again, this has all been unchanged for many many months.

How might crontab be stopped or "dead" if I took no action with it?

If stopped, shouldn't rebooting restart it? I've rebooted the server many times over the period of time all this was running correctly.

18 Replies

✓ Best Answer

stevewi…

First, I thank you very much for helping me with this. Much appreciated.

(1) I said crontab crashed, shoulda been assertion that cron daemon stopped (or something like that) (and maybe that's not true). When I get typing while thinking ahead, sometimes I say crontab when I should say cron, and vice versa. :(

(2) I said I restarted crontab last night -- made no change, but given my education today, why should it.

(3) When I make changes to the crontab table, I sudo vim and manually edit it. Very carefully. I've made no changes for months, with things running well until recently as I mentioned.

(4) No changes made?

Are your crontab files intact?
Yes.

Changed permissions?
No

Changed ownership?
No

Deleted?
No

Fire any consultants (with sudo privileges) on/about 10/2/2021
Can't afford consultants.:)

Years ago, the server was set up for me, so I have no history regarding crontab/cron. I'm seeing root as the user, if I read these answers to your questions correctly:

[ken@alpha spool]$ ls -l /etc/cron.deny
-rw------- 1 root root 0 Aug 8 2019 /etc/cron.deny

[ken@alpha /]$ sudo ls -l /var/spool/cron
total 0
-rw------- 1 root root 0 Mar 18 2015 root
[ken@alpha /]$

[ken@alpha /]$ sudo crontab -l -u root [ken@alpha /]$

I can't explain this being set up as root. It's been that way for over 10 years, and life using cron has been good over the years … until now.

I will set up a trivial cronjob that runs every minute and report back, probably not until tomorrow.

By the way, you refer to cron(8), vs just cron. I gotta feeling there's something important for me to know about that.

If sudo ls -l /var/spool/cron comes up empty, where is the table? It's somewhere, and has been getting found over the years.

This might help you out:

https://www.cyberciti.biz/faq/howto-linux-unix-start-restart-cron/

I have no idea how to make sure it starts again on a reboot.

-- sw

What do you get when you run:

ps -aux | grep cron

service crond status

Maybe try:

sudo systemctl enable crond.service
sudo systemctl start crond.service

The OP writes:

I manually run the script that the cron lines specify.

This is not a way to test the efficacy of a script as a cron(8) job. cron(8) is not the shell! The environment of your login session is very different from the environment that exists while the script is running under cron(8).

It might be that cron(8) is running fine and your scripts are messed up because of this issue. The only way to test your script as a cron(8) job is to run it under cron(8).

That being said, you can put a set -x at the beginning of your script and cron(8) will email you the output.
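One quick way to see that difference firsthand (the output path here is a hypothetical placeholder) is a throwaway crontab entry that dumps cron(8)'s environment to a file you can compare against your login shell's:

```shell
# Throwaway crontab entry -- install with `crontab -e`, remove it after one run.
# /tmp/cron-env.txt is just an example path.
* * * * * env > /tmp/cron-env.txt 2>&1
```

Comparing that file against the output of `env` in your login shell usually makes the gap (PATH, HOME, SHELL, and so on) obvious.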

The OP also writes:

The command is a script accessible by a URL, and I use that URL with my browsers to test it.

Are you sure your web server isn't blocking your cron(8) jobs because they look like bad 'bots…or your browsers are doing the wrong thing because of something missing in the environment for cron(8) that's there during a login session? Just a thought…

I see in a cron log that the commands do get issued on time.

If you're getting these, it's extremely unlikely that cron(8) is not operating correctly. I would attack this problem from the application side… Just because you think nothing changed does not mean that something did not change.

-- sw

acanton77:

$ ps -aux | grep cron
root     22659  0.0  0.1 126408  3184 ?        Ss   06:17   0:00 /usr/sbin/crond -n
ken      28297  0.0  0.1 112832  2240 pts/4    S+   18:56   0:00 grep --color=auto cron

stevewi:

(1) I ran the restart version of that command early today. Tomorrow morning I will be able to verify whether the dB table got updated -- or not. Thank you.

(2) I've been struggling to figure out what's changed, because I have the same reaction as you. My reply to your first post above will be interesting.

I have not changed any system software. I have not made any change to my cron commands, nothing. I've not made any changes to the code that is executed for each cron job. I have not made any changes to the dB tables other than, of course, entries appearing there when customers for websites I developed sign up for events; the table is a basic listing of those who have signed up. Every week, the table gets initialized (because of the associated cronjob) during the time when the current event is taking place to prepare it for the following week. The system has been "free-running" for many weeks. Then, suddenly one day, my tables are not getting initialized.

Here's the snippet of the cron log that seems to show it aborts/stops after starting logrotate:

deleted

Normally, starting and finished are consecutive, like this:

Oct  8 03:16:01 alpha anacron[9722]: Job `cron.daily' started
Oct  8 03:16:01 alpha run-parts(/etc/cron.daily)[9788]: starting logrotate
Oct  8 03:16:01 alpha run-parts(/etc/cron.daily)[9796]: finished logrotate

I think this argues that crontab crashed.

@k4c4gorman writes:

I think this argues that crontab crashed.

This is nonsense. Here's why:

From man crontab:

crontab is the program used to install, remove or list the tables used to drive the cron(8) daemon. Each user can have their own crontab [file], and though these are files in /var/spool/, they are not intended to be edited directly.

crontab is basically a file-management utility. It doesn't run your scripts. cron(8) does that. Maybe cron(8) crashed and sent all your crontab files into the ether. That's a distinct possibility…

I have not changed any system software. I have not made any change to my cron commands, nothing. I've not made any changes to the code that is executed for each cron job.

You don't need to do any of that to effect a change… Are your crontab files intact? Changed permissions? Changed ownership? Deleted? Fire any consultants (with sudo privileges) on/about 10/2/2021?

What does sudo crontab -l -u <some_user> say? What are the permissions/ownership of /var/spool/cron and all its subdirectories/files? Is <some_user> listed in /etc/cron.deny?

These are but a few changes that can be made by errant scripts and users without doing anything you describe.

Can you run any cron(8) job…even one that just runs every minute and writes "Hello World" to a file? If you can do that, then cron(8) is not the culprit here (IMHO, you've already demonstrated that).
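For example, a minimal once-a-minute test entry might look like this (the log path is just a placeholder):

```shell
# Append a timestamped "Hello World" every minute; install with `crontab -e`.
* * * * * echo "Hello World: $(date)" >> /tmp/cron-test.log
```

If /tmp/cron-test.log grows by one line a minute, cron(8) itself is fine; remember to remove the entry afterward.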

-- sw

cron(8) refers to the name of the program and the section of the manual that the program's manpage is in:

man 8 cron

It's a bad habit from my Unix days. ;-)

Am I to interpret:

[ken@alpha /]$ sudo crontab -l -u root [ken@alpha /]$

as

[ken@alpha /]$ sudo crontab -l -u root 
[ken@alpha /]$

If so, root has no crontab files.

If sudo ls -l /var/spool/cron comes up empty, where is the table? It's somewhere, and has been getting found over the years.

From your original post, it appears that it disappeared on 10/2/2021. To get your jobs running again, create a file with your jobs listed in it (in crontab(5) format). You can use vim(1) to create it and it can be owned by you. Here's one of mine as an example (yours will be different):

# delete old backups every day
#
@daily                  nice -n 19 /home/stevewi/bin/cron/trimbackup

# update the MaxMind geolocation info (every Tuesday at 7am)
#
0  7         *  *  2    nice -n 19 /srv/netinfo/bin/cron/mminfoupd         

# update the netinfo database (every day at 1:15am, 1:15pm)
#
15 1,13      *  *  *    nice -n 19 /srv/netinfo/bin/cron/netinfoupd

@daily is a synonym for 0 0 * * *. See crontab(5), i.e., man 5 crontab.

Anyway, if/once you have that file, you can reinstall the crontab file for root with

sudo crontab -u root /the/path/to/your/file

Then, you'll be back in business.

-- sw

You say If so, root has no crontab files. But I do have a crontab file.

I access my crontab file using sudo vim /etc/crontab so it exists and always been there. And I see it running cronjobs in the cron log, like this snippet below shows -- with passwords hidden and lines separated for readability -- using cat cron-20211010. I show three lines leading up to my stuff that shows mysql backups. Not shown are cronjobs that run my scripts that simply UPDATE dB tables to clear user entries on a weekly basis. The point is, cron is running -- until it ceases to run, as shown at the bottom of this post.
.
.
.
deleted

+++++++++++++++++
This shows logrotate starting and not finishing. When it croaks (technical term :), it occurs consistently around 03:xx:xx. Others have suggested I may be running out of storage, but that seems inconsistent with what logrotate is all about.

deleted

I had forgotten about /etc/crontab. I don't think I've ever used it… I usually schedule cron(8) jobs with crontab(1):

sudo crontab -u root /the/path/to/my/crontab/file

I prefer to leave system-wide things to the system and put my stuff into per-user crontabs (i.e., /var/spool/cron/root in your case). The two scenarios are nearly equivalent, though note that each entry in /etc/crontab carries an extra field (the user to run the job as) that per-user crontab entries do not.

Can you post your entire /etc/crontab? Obfuscate passwords, etc. Also, could you post the output of:

sudo logrotate -dv /etc/logrotate.conf

This may give a clue about why logrotate(8) is starting but not finishing (and possibly causing cron(8) to terminate the execution of /etc/crontab in error). This command will tell you what logrotate(8) is trying to do without doing it. If you end up rotating your logs prematurely by mistake, that's not going to mess anything up. You can read more at:

https://linux.die.net/man/8/logrotate

In each case, paste the text between two rows of ``` (3 backticks). It'll be much easier to read when you're done and you won't have to backquote every line of the file (like you did above with the cron(8) log entries). If you don't want to do that, put them in some publicly-accessible place and post links (the output of logrotate(8) will be quite long).

We know cron(8) is working. We know you have a crontab file to drive it. This has got to be something silly…the most vexing problems usually are.

-- sw

P.S. The command

sudo crontab -e -u <user>

will run the program specified in $EDITOR on /var/spool/cron/<user>. Usually, the default value of $EDITOR (on Linux) is vim(1).

(I use vim regularly)

I first want to bring to your attention something I just discovered, as an outcome of having chosen those four items in the log listing. I chose them because they did dB dumps using mysqldump, something "everyone" is familiar with. It came to mind I had not checked those, to see if the backups were taking place, as they have been for over a decade.

So, I checked them, and all four backups occurred precisely on time: Oct 14, 10:02pm. Owner=root, Group=root, permissions=rw-r--r--.

deleted

So many of those failed that I assumed they all had. It will be a somewhat tedious piece of work to check out all of them. But I'm thinking the fact that the mysqldump cronjobs all worked while the others that I did check did not work is a clue.

I have a set of seven cronjobs that do the same thing (backup of a table from each of three databases) at the same time (~10pm) but occur on different days of the week. In other words, a weekly rotating history of tables … from three different databases. The cronjob for each night performs these backups, like I say, for a table of each of three different databases.

The php code is presented below with some stuff obfuscated, with the three tables indicated [tb1], [tb2], and [tb3]. Notice the addition of the day of the week to form a unique table for each day of the week. The three blocks of code are identical except for the table being backed up. Rather than do mysqldump, I just made backups by copying the then-current one and writing into a new table for each week.

This code below is for Sunday backups. There are obviously seven blocks of code like this, each invoked by a day-unique cronjob. I have a note towards the end of this post about what if the table being DROPped doesn't exist.

deleted

Having made you suffer through this, I investigated each nightly backup for one of the tables, say tb1.

I found that the backups over the previous week were successful for the Tuesday and Wednesday backups, but unsuccessful for the others (contained data from some previous week). I'm writing this on Saturday morning, so the Friday backup attempt was the most recent.

I verified this by examining time-stamps for each entry into the table when a website user signed up for an activity, and noticing the radical inconsistencies between the prior week's entries and the old (earlier weeks') time-stamps.

What if the table being DROPped doesn't exist? I read this:
The DROP TABLE statement deletes the specified table, and any data associated with it, from the database. The IF EXISTS clause allows the statement to succeed even if the specified table does not exist. If the table does not exist and you do not include the IF EXISTS clause, the statement will return an error.

When I set up this system of backups, I created the tables for each day of the week to have something to drop, being concerned about that. But I did not consider what would happen if somehow a table not yet dropped was wiped out. It should never (never say never) occur that the target of the DROP doesn't exist, because there's at least an old one that didn't get overwritten. Nevertheless, I will change the code to use DROP TABLE IF EXISTS…. And I will populate, just for the hell of it, the tables that did not get backed up with the current values.
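As a sketch of that change (the table names, the day-of-week suffix scheme, and the mysql credentials below are hypothetical placeholders, not the actual schema), the per-day rotation could be driven from a small shell script:

```shell
#!/bin/sh
# Sketch: build the per-day backup SQL using DROP TABLE IF EXISTS,
# so the job no longer fails if a day's backup table was wiped out.

backup_sql() {
    # $1 = base table name, $2 = day-of-week suffix (e.g. "Sun")
    printf 'DROP TABLE IF EXISTS %s_%s; CREATE TABLE %s_%s LIKE %s; INSERT INTO %s_%s SELECT * FROM %s;\n' \
        "$1" "$2" "$1" "$2" "$1" "$1" "$2" "$1"
}

DOW=$(date +%a)                      # e.g. "Sun"
for TBL in tb1 tb2 tb3; do
    backup_sql "$TBL" "$DOW"         # print the statement for inspection
    # To actually run it (credentials/db name are placeholders):
    # mysql -u backup_user -p'...' mydb -e "$(backup_sql "$TBL" "$DOW")"
done
```

CREATE TABLE … LIKE plus INSERT … SELECT reproduces the copy-into-a-fresh-table approach described above, but with the IF EXISTS guard so a missing target can never abort the job.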

I need to wait until next Saturday to see if the results mirror what I see for the prior week or not. Place your bets.

Observations:

-- Cron is operating correctly.

-- My code performed successfully in some but not all test cases, the cases being identical except for the day of the week on which they were invoked.

-- The overall execution sequence from cronjob through expected action worked in some but not all test cases.

-- It's possible/likely this apparent randomness of success occurs for other cronjobs. I think this testing obviates the need to determine which others worked and which did not. Anecdotally interesting maybe, but seems unnecessary (and difficult in some cases).

I don't use mysql(1). I haven't used PHP seriously in a long time (since I discovered Ruby). I can still puzzle my way through PHP code but it takes longer now than it used to (unless the error is glaringly apparent).

I need to wait until next Saturday to see if the results mirror what I see for the prior week or not. Place your bets.

It might behoove you to set up a test bed for this…so you can compress a week into a couple of hours. I can guarantee you that any time you spend on this will be repaid many times over the next time something like this happens.

Also, you might want to modify your cron(8) jobs so that they leave some tracks in the cron(8) log or elsewhere so that you can be apprised of failures as soon as they occur. This article:

https://stackoverflow.com/questions/4811738/how-to-log-cron-jobs

offers a number of different techniques for doing this.

-- sw

Good thoughts re: test bed and logging cron jobs. I'll review that stackoverflow article, being curious how this will be different than the cron log (which is, in fact, a bit unwieldy).

About the php code -- the fact that it runs successfully in some cases when the only difference is the table being backed up, and the fact that it runs "manually", in my mind dismisses it as the place where the error resides.

I will be waiting with bated breath to a week from now, no matter what I learn in the meantime.

Again, thank you for your help and patience with me.

Good thoughts re: test bed and logging cron jobs. I'll review that stackoverflow article, being curious how this will be different than the cron log (which is, in fact, a bit unwieldy).

You can create your own log files that are time-/date-stamped. When I create a project, there's typically a directory called …/var/log for this purpose. Logging from shell scripts is really easy…

If you use shell scripts, you can write to the system logs (via syslogd(8)) with the utility logger(1). PHP has extensive facilities for doing logging. See:

https://www.loggly.com/ultimate-guide/php-logging-basics/

All of this can be apart from the cron(8) log…
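A minimal sketch of that idea (the log path and job name are hypothetical placeholders):

```shell
#!/bin/sh
# Sketch: have each cron(8) job leave tracks in its own time-stamped log.
# /tmp/myjob.log is a placeholder; a real setup might use .../var/log/myjob.log
LOG=/tmp/myjob.log

log() {
    # prepend a date/time stamp to each message
    echo "$(date '+%Y-%m-%d %H:%M:%S') $*" >> "$LOG"
}

log "backup started"
# ... the real work (mysqldump, table rotation, etc.) would go here ...
log "backup finished"
```

With logger(1), `logger -t myjob "backup started"` would route the same message to the system log via syslogd(8) instead.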

As far as the test bed goes, you don't want to do this on a production machine. This is why God invented nanodes ;-) $5/mo for 3 months is a small price to pay to avoid multiple late nights of crisis-mode work with the SVP of IT breathing down your neck asking "Is it fixed yet?" every thirty seconds.

About the php code -- the fact that it runs successfully in some cases and the only difference is the table being backed up, and the fact that it runs "manually", in mind dismisses it as where the error resides.

I wouldn't be so quick to dismiss this as a source of your problem… I've been at this Unix/Linux stuff a very long time and:

  • the state of affairs is never what it appears to be;
  • you should always expect the unexpected; and
  • formulating conclusions without exhaustive empirical evidence is a very dangerous thing.

Lastly, you might want to consider revamping your infrastructure entirely as CentOS has recently been declared end-of-life.

See: https://www.linode.com/community/questions/20843/centos-linux-end-of-life

Fortunately, you have lots of choices for the distro you pick to replace it (although, for personal reasons, I would not pick Ubuntu). You'll have to do your own homework though.

-- sw

Just an FYI: I followed these instructions to see if cron is running (from http://www.dba-oracle.com/t_linux_cron.htm)

To check to see if the cron daemon is running, search the running processes with the ps command. The cron daemon's command will show up in the output as crond.

$ ps -ef | grep crond

root      2560     1  0 07:37 ?        00:00:00 crond
oracle    2953  2714  0 07:53 pts/0    00:00:00 grep crond

The entry in this output for grep crond can be ignored but the other entry for crond can be seen running as root. This shows that the cron daemon is running.

Here's what I got:

deleted

I presume the color=auto has nothing to do with reporting running status.

I presume the color=auto has nothing to do with reporting running status.

Yep.

       --color[=WHEN], --colour[=WHEN]
              Surround the matched (non-empty) strings, matching lines,
              context lines, file names, line numbers, byte offsets, and
              separators (for fields and groups of context lines) with
              escape sequences to display them in color on the terminal.
              The colors are defined by the environment variable
              GREP_COLORS.  The deprecated environment variable
              GREP_COLOR is still supported, but its setting does not
              have priority.  WHEN is never, always, or auto.

Actually, grep(1) has nothing to do with reporting status. grep(1) filters text. In your case, it's filtering out all the output from ps(1) that does not contain the text "crond". The --color=auto is probably being added by a command alias or an environment variable.

-- sw

BIG DEAL UPDATE

I believe the problem is solved with the help of another person. No need to reply, but once I implement and test the changes, I will report back here. May take a few days.

In previous posts, I deleted some material that probably was risky to post here.

The problem I'm having has nothing to do with cron. What was overlooked was the need for "php" in front of the cron command, like this:

php /[path]/test-script.php

According to the manual "https://www.php.net/manual/en/features.commandline.usage.php"
this should work on the command line. In fact, it's generally good practice to test the script outside of cron.

Here's the problem: IT DOESN'T WORK. Obviously, there's something wrong in my specific situation, otherwise there would be great chaos out there.

I'm gonna start a new chat about this to get this in front of others who don't give a hoot about cron.

@k4c4gorman --

You write:

The problem I'm having has nothing to do with cron. What was overlooked was the need for "php" in front of the cron command, like this:

php /[path]/test-script.php

and

Here's the problem: IT DOESN'T WORK. Obviously, there's something wrong in my specific situation, otherwise there would be great chaos out there.

Remember at the beginning of all this I said cron(8) is not the shell? Well, cron(8) doesn't know anything about your PATH (the PATH that you set in .bashrc or whatever when you log in). cron(8) has its own PATH…and it doesn't include the place where php(1) lives.

Change the entry in your crontab to

/the/path/to/php /[path]/test-script.php

or, more properly,

/the/path/to/php -f /[path]/test-script.php

and you should be good.

-- sw

P.S. You can alter cron(8)'s idea of what PATH is but this is much easier (and faster).
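Both fixes, sketched as crontab entries (the schedule shown and the /usr/bin/php location are assumptions -- find your actual interpreter with `command -v php`):

```shell
# Option 1: absolute path to the interpreter in each entry.
0 22 * * 0   /usr/bin/php -f /[path]/test-script.php

# Option 2: widen cron's PATH at the top of the crontab so bare "php" resolves.
PATH=/usr/local/bin:/usr/bin:/bin
0 22 * * 0   php -f /[path]/test-script.php
```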
