I suspect you saved a new file every time that script ran and hit some resource limit other than total file content size.
Try df -i to look for inode usage. Many filesystems impose a limit not only on the total volume of file contents, but also on the number of metadata entries (inodes) needed to store file names, owners, timestamps, permissions, and so on. If you see on the order of a million inodes in use, that is well above what a typical operating system installation needs.
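On GNU/Linux the output looks roughly like the following (the numbers here are purely illustrative); the IUse% column shows how close each filesystem is to its inode limit:

Filesystem      Inodes   IUsed   IFree IUse% Mounted on
/dev/sda1      1310720 1279900   30820   98% /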
Such a command might take a while to return, but you can also check likely directories directly to determine whether you really do have 962181 empty files sitting in a single directory.
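For a quick count (assuming, as in the cleanup command below, that the files ended up directly under /root), listing without sorting stays fast even for huge directories:

# ls -f /root | wc -l
# find /root -xdev -maxdepth 1 -type f -size 0 | wc -l

The first command counts all entries (including . and ..), the second counts only the empty regular files.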
Some filesystems additionally limit the number of entries below a single directory level. Hitting that limit is unlikely to be something you want; any application that genuinely benefits from many files is better served by a multi-level directory layout with fast indexed access.
How to deal with the excessive number of files found?
If you want to keep any potential output, but do not need to record the times at which your cron job previously ran, you might want to selectively delete the empty files and only keep investigating what the non-empty ones contain, e.g.:
# find /root -xdev -maxdepth 1 -type f -size 0 -name "dosomething.php.*" -delete
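If you prefer to see what would be removed first, the same expression without -delete only prints the matches, and inverting the size test lists the non-empty leftovers worth inspecting:

# find /root -xdev -maxdepth 1 -type f -size 0 -name "dosomething.php.*" | wc -l
# find /root -xdev -maxdepth 1 -type f ! -size 0 -name "dosomething.php.*" -ls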
How to stop the problem from recurring in the future?
If you do not actually need to preserve the result of every previous download, consider specifying an output name (possibly even /dev/null to not store even the latest download) to be overwritten on each call, instead of the default behaviour where wget simply appends a number to the output filename when it already exists in the directory:
* * * * * wget -O /root/latest-something.html https://example.com/something.php >> /var/log/myjob.log 2>&1
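If you do not need the downloaded content at all, a variant along these lines (a sketch; adjust the URL to your actual job) discards both the file and wget's progress output:

* * * * * wget -q -O /dev/null https://example.com/something.php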