I'm using a QueueWorker plugin to update/create nodes in the background. Locally there is no issue: the whole queue is processed. On the AWS server, however, it usually stops partway through.
I'm assuming this is due to resource consumption on the server. What's the ideal way to optimize my QueueWorker?
Here is my code:
$limit = 100; // example batch size; not shown in the original snippet
$offset = 0;
while (TRUE) {
  $nodes = \Drupal::entityQuery('node')
    ->condition('type', 'article')
    ->range($offset, $limit)
    ->execute();
  $offset = $offset + $limit;
  if (empty($nodes)) {
    break;
  }
  $queue_manager = \Drupal::service('plugin.manager.queue_worker');
  $queue_worker = $queue_manager->createInstance('ex_queue');
  $queue = $this->queueFactory->get('ex_queue');
  foreach ($nodes as $node) {
    $item = new \stdClass();
    $item->content = $node;
    $queue->createItem($item);
  }
  while ($item = $queue->claimItem()) {
    try {
      $queue_worker->processItem($item->data);
      $queue->deleteItem($item);
    }
    catch (RequeueException $e) {
      $queue->releaseItem($item);
      \Drupal::logger('system')->warning('RequeueException');
    }
    catch (SuspendQueueException $e) {
      $queue->releaseItem($item);
      \Drupal::logger('system')->error('SuspendQueueException');
    }
    catch (\Exception $e) {
      $queue->releaseItem($item);
      \Drupal::logger('system')->error('Exception');
    }
  }
}
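One variant I'm considering (a sketch only, untested against this setup) is to enqueue just the node IDs rather than full node objects, so each queue item stays small when it is serialized into the queue table. The queue name 'ex_queue' comes from the code above; the batch size and the accessCheck() call are assumptions (accessCheck() is required on newer Drupal cores):

```php
$limit = 200; // hypothetical batch size
$queue = \Drupal::service('queue')->get('ex_queue');
$offset = 0;
while (TRUE) {
  $nids = \Drupal::entityQuery('node')
    ->condition('type', 'article')
    ->accessCheck(FALSE)
    ->range($offset, $limit)
    ->execute();
  if (empty($nids)) {
    break;
  }
  foreach ($nids as $nid) {
    // Queue only the ID; the worker loads the node itself.
    $queue->createItem(['nid' => $nid]);
  }
  $offset += $limit;
}
```

Processing the queue afterwards via cron or `drush queue:run ex_queue` instead of in the same request would presumably also sidestep web-server time and memory limits, if that fits the use case.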
And my QueueWorker:
class ExQueueProcessor extends QueueWorkerBase implements ContainerFactoryPluginInterface {

  protected $configuration;

  public function __construct(array $configuration) {
    $this->configuration = $configuration;
  }

  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static(
      $configuration
    );
  }

  public function processItem($item) {
  }

}
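If only the ID were queued, processItem() could look roughly like this (a hypothetical sketch, assuming an item shape of ['nid' => ...]; the node is loaded on demand and dropped from the entity static cache so memory stays flat over a long run):

```php
public function processItem($data) {
  $storage = \Drupal::entityTypeManager()->getStorage('node');
  $node = $storage->load($data['nid']);
  if ($node !== NULL) {
    // ... apply the actual update/create logic here ...
    $node->save();
  }
  // Remove the loaded entity from the static cache so thousands
  // of items do not accumulate in memory within one process.
  $storage->resetCache([$data['nid']]);
}
```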
Let's say the total count of $nodes is 17k items, and it stops at around 15k. Is there any way to optimize this further so it can handle large data sets?