How to use locks in PHP cron jobs to avoid cron overlaps

Cron jobs are hidden building blocks for most websites. They are generally used to process or aggregate data in the background. However, as a website grows and every cron job has gigabytes of data to process, chances are that cron jobs will start to overlap and possibly corrupt data. In this blog post, I will demonstrate how to avoid such overlaps using a simple locking technique. I will also discuss a few edge cases we need to consider while using locks to avoid overlaps.

Cron job helper class
Here is a helper class (cron.helper.php) which will help us avoid cron job overlaps (see usage example below).


	define('LOCK_DIR', '/Users/sabhinav/Workspace/cronHelper/');
	define('LOCK_SUFFIX', '.lock');

	class cronHelper {

		private static $pid;

		private function __construct() {}

		private function __clone() {}

		private static function isrunning() {
			$pids = explode(PHP_EOL, `ps -e | awk '{print $1}'`);
			if(in_array(self::$pid, $pids))
				return TRUE;
			return FALSE;
		}

		public static function lock() {
			global $argv;

			$lock_file = LOCK_DIR.$argv[0].LOCK_SUFFIX;

			if(file_exists($lock_file)) {
				// Lock file exists; is the job that created it still running?
				self::$pid = file_get_contents($lock_file);
				if(self::isrunning()) {
					error_log("==".self::$pid."== Already in progress...");
					return FALSE;
				}
				else {
					error_log("==".self::$pid."== Previous job died abruptly...");
				}
			}

			self::$pid = getmypid();
			file_put_contents($lock_file, self::$pid);
			error_log("==".self::$pid."== Lock acquired, processing the job...");
			return self::$pid;
		}

		public static function unlock() {
			global $argv;

			$lock_file = LOCK_DIR.$argv[0].LOCK_SUFFIX;

			if(file_exists($lock_file))
				unlink($lock_file);

			error_log("==".self::$pid."== Releasing lock...");
			return TRUE;
		}

	}


Using cron.helper.php
Here is how the helper class can be integrated in your current cron job code:

  • Save cron.helper.php in a folder called cronHelper
  • Update LOCK_DIR as per your needs
  • You might have to set proper permissions on the cronHelper folder, so that the running cron job has write permission
  • Wrap your cron job code as shown below:
    	require 'cronHelper/cron.helper.php';
    	if(($pid = cronHelper::lock()) !== FALSE) {
    		// Cron job code goes here
    		sleep(10); // Cron job code for demonstration
    		cronHelper::unlock();
    	}
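
Note that if the job dies midway (a fatal error, an uncaught exception), cronHelper::unlock() is never reached and the lock file lingers until the next run detects the dead pid. A sketch of one way to harden this, not part of the original helper: register the cleanup as a shutdown callback, which PHP runs both on normal exit and after fatal errors (the lock path below is a hypothetical stand-in for LOCK_DIR.$argv[0].LOCK_SUFFIX):

```php
<?php
// Sketch: release the lock via a shutdown callback so it is removed even
// when the job exits early or hits a fatal error.
$lockFile = sys_get_temp_dir() . '/demo_job.lock';

file_put_contents($lockFile, getmypid()); // acquire (simplified)

// PHP runs shutdown callbacks on normal exit AND after fatal errors.
register_shutdown_function(function () use ($lockFile) {
    if (file_exists($lockFile)) {
        unlink($lockFile);
    }
});

// ... cron job code goes here ...
echo "job done, lock will be cleaned up on shutdown\n";
```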

Is it working? Verify
Let's verify that the helper class really takes care of all the edge cases.

  • sleep(10) is our cron job code for this test
  • Run from command line:
    sabhinav$ php job.php
    ==40818== Lock acquired, processing the job...
    ==40818== Releasing lock...

    where 40818 is the process id of current running cron job

  • Run from command line and terminate the cron job midway by pressing Ctrl+C:
    sabhinav$ php job.php
    ==40830== Lock acquired, processing the job...

    By pressing Ctrl+C, we simulate the cases where a cron job dies midway due to a fatal error or system shutdown. In such cases, the helper class never gets a chance to release the lock on this cron job.

  • With the lock in place (ls -l cronHelper | grep lock), run from command line:
    sabhinav$ php job.php
    ==40830== Previous job died abruptly...
    ==40835== Lock acquired, processing the job...
    ==40835== Releasing lock...

    As seen, the helper class detects that the previous cron job died abruptly and then allows the current job to run successfully.

  • Run the cron job from two command line windows; one of them will not proceed, as shown below:
    centurydaily-lm:cronHelper sabhinav$ php job.php
    ==40856== Already in progress...

    One of the cron jobs will abort, since a cron job with $pid=40856 is already in progress.

Working of cron.helper.php
The helper class creates a lock file inside LOCK_DIR. For our test cron job above, the lock file name will be job.php.lock. The lock file name suffix can be configured using LOCK_SUFFIX.

cronHelper::lock() writes the current cron job's process id into the lock file. Upon job completion, cronHelper::unlock() deletes the lock file.

If cronHelper::lock() finds that the lock file already exists, it extracts the previous cron job's process id from the lock file and checks whether that job is still running. If the previous job is still in progress, the current job aborts. If the previous job is not in progress, i.e. it died abruptly, the current cron job acquires the lock.

This is the classic method of avoiding cron overlaps. However, there are various other ways of achieving the same thing. If you know any, do let me know through your comments.


  7. Nate

    I use a database to hold cron locks since my cron jobs are balanced across a number of web nodes. I have a problem where a script or system crash can occur and the lock will never be removed. Does anyone know of a good solution to automatically handle locks across multiple web nodes, in cases where the script crashes and the lock is still present? The processes are run over HTTP through a load balancer on a local network.

    1. Hi Nate,

      As ndlinh suggested, you can use a cache like memcached to keep your locks in place.

      Since memcached is a distributed solution, all your nodes will be able to see the lock. In case the process dies abruptly, the lock will expire automatically after $ttl (though $ttl will act as a tuning parameter here).

      1. Kishore Kumar

        Dear Abhinav,

        Will this work when two or more cron jobs (scheduled jobs) are running on the same server? I am using PHP on an IIS server on Windows 2008. Please help.

    2. Nate

      The cache solution sounds promising, at least better than my current solution. I still have the problem of what to do if the script is actually still running: in some cases running the script twice could bring the database down or corrupt data. I have seen situations where a script takes an average of 30 minutes to run, but sometimes takes 90+ minutes due to heavy system load. It can be very unpredictable. In a situation where you could never risk the possibility of the script running twice, would the only solution be to check if the script is running through Apache? I think I can do that by parsing server-status.

    3. Yeah, the cache solution can serve you better, but remember it's only a cache; if the cache is refreshed you might end up running the script twice. Check this link in case you are using memcached.

      But still, a 90 min or even a 30 min cron job seems like a bad design to me. In such cases it's better to break the job down into several components. If I knew what exactly you are trying to achieve through these cron jobs, I could probably think of a better solution.

    4. Nate

      The scripts in question create "preferred lists" of user information based on fairly intensive database aggregations. The lists are then stored in memcache and accessed by the application from there. I have each "preferred list" job in a separate script. There is probably some redesign that could be done to optimize things, but I am hoping to find a solid PHP cron solution that is as robust as a simple bash lock file implementation. This always worked so well on a single node.


      if [ ! -e $lock ]; then
        trap "rm -f $lock; exit" INT TERM EXIT ERR
        touch $lock
        php $dir/cache_primer-bfs.php
        rm $lock
        trap - INT TERM EXIT ERR
      else
        echo "cache_primer-bfs is already running (check lock file)"
      fi

    5. Solution 1: Alright based on the job description I guess memcache based solution can serve you, though don’t rely on caches for such jobs.

      Solution 2: Since your cron jobs are interacting with databases, you can very well use the db itself for cron synchronization. Have a pid column per row, populated with the process id (and probably the hostname) of the cron job processing it.

      Solution 3: Divide the rows in the database among the different cron jobs based on the primary key (just like consistent hashing in memcached decides which key goes to which server), so that the cron jobs on each machine know which rows they should process. Then have a localized locking mechanism per box, just like the code you posted, and everything should work out well.

      Hope it helps and let me know how it goes 🙂
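
      Solution 2 can be made concrete with very little code. A sketch, not from the original discussion: use an atomic INSERT into a table with a unique key as the lock, so exactly one node wins. SQLite is used below only to keep the example self-contained; the table and column names (cron_locks, job_name) are hypothetical, and with MySQL the same idea works via a UNIQUE key (or GET_LOCK()):

```php
<?php
// The PRIMARY KEY on job_name makes the INSERT atomic: of several
// concurrent callers, exactly one succeeds; the rest hit a constraint
// violation and back off.
function acquireDbLock(PDO $db, string $jobName): bool {
    try {
        $stmt = $db->prepare(
            'INSERT INTO cron_locks (job_name, pid, host) VALUES (?, ?, ?)'
        );
        $stmt->execute([$jobName, getmypid(), gethostname()]);
        return true;
    } catch (PDOException $e) {
        return false; // another job already holds the lock
    }
}

function releaseDbLock(PDO $db, string $jobName): void {
    $db->prepare('DELETE FROM cron_locks WHERE job_name = ?')
       ->execute([$jobName]);
}

$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE cron_locks (job_name TEXT PRIMARY KEY, pid INTEGER, host TEXT)');

var_dump(acquireDbLock($db, 'job.php')); // bool(true)  - lock acquired
var_dump(acquireDbLock($db, 'job.php')); // bool(false) - already held
releaseDbLock($db, 'job.php');
```

      Since all web nodes talk to the same database, the lock is visible cluster-wide; a "died abruptly" check can be added by also storing a timestamp and treating rows older than the job's maximum runtime as stale.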

  16. I've used the ps command and looked for the php module file name (with path) in the process list, like:

    exec('COLUMNS=255 ps xa', $Tasks);

    In this case I can control not only the presence of the module in memory, but also the number of instances, and can limit it to 1 or any number…

  17. All solutions that do the following:

    if ( lock file doesn't exist ) {
        create the lock file
        store data in the lock file
    }

    suffer from a race condition, since the checking and the creating are not atomic:

    job 1 doesn't see the lock file
    job 2 doesn't see the lock file
    job 1 creates the lock file
    job 2 creates the lock file
    job 1 writes data to the lock file
    job 2 overwrites data in the lock file

  20. Stefan


    Thank you for your post. Interesting topic. Might an alternative solution be to use a file locked with flock()?

    1. Attempt to obtain an exclusive lock on a blank file.
    2. If the lock fails, the task is running, so exit.
    3. If the lock succeeds, run the task and finally unlock the file.

    In the event of a fatal script error or script completion, PHP will automatically unlock the file.

    Pros:
    – Fatal script errors handled automatically.
    – Simple.

    Cons:
    – Differences in lock handling between platforms (my Windows box will only recognise non-blocking locks when run as CLI).
    – No way of knowing if the previous task crashed – you only know whether it finished (though in my case I'm writing script start and end times to a DB, so runs with no end time could be considered crashed).

    Example Code:

    $fp = fopen('task.lock', 'a');

    // try and get a lock
    if(flock($fp, LOCK_EX | LOCK_NB)) {
        // success! look busy.
        echo 'Obtained lock';
        ftruncate($fp, 0);
        fputs($fp, 'Locked by PID '.getmypid().' @ '.date('U'));
        // unlock file on completion
        flock($fp, LOCK_UN);
    } else {
        // File is already locked by a previous cron task - exit.
        echo 'Could not lock file';
    }

  21. Paul

    Thanks for sharing this! I had been trying to understand for days why I was getting a "fork: service temporary unavailable" error. My computer was almost out of memory when I stumbled upon your post and your code. I implemented it and it did the job! Now my process list is clean, my crons work fine, and I also got back some memory that was previously consumed by overlapping crons.

  22. Sunil

    Hi Abhinav,

    This is good, but when I test it from the command line a lock is created, whereas when I run it through cron, the lock file is not created in the cronHelper directory, while all the code continues to execute (since it gets the pid from lock()).

    Please suggest, what should I do?

  23. Pawan

    Hi Abhinav,
    I am really interested and excited to see your solution for the problem I am facing. I am running an ISPConfig server on Ubuntu 12.04. I have many cron job PHP files created in Joomla articles, which are run through ISPConfig cron jobs, and they do overlap.

    I am very new to Linux systems and have no idea where I should put your helper file, or what path I should give in the PHP file for including the helper file.

  25. ganesh m

    Hi this is ganesh i am having 3 years of experience as a java developer and i am certified. i have knowledge on OOPS concepts in java but dont know indepth. After learning php will be enough to get a good career in IT with good package? and i crossed php training in chennai website where someone please help me to identity the syllabus covers everything or not??

  27. Ian Winstanley

    In Windows you can get the running PIDs using this conditional:

    // Unix and Windows use different methods to determine the running process ids
    if (strstr(strtolower(php_uname("s")), 'windows')) {
        $pids = array_column(array_map('str_getcsv', explode("\n", trim(`tasklist /FO csv /NH`))), 1);
    } else {
        $pids = explode(PHP_EOL, `ps -e | awk '{print $1}'`);
    }