I think your Azure monitoring doesn't have an agent reporting the df disk usage as seen from inside your VM; instead it reports the storage consumption as seen by the storage layer, hence the discrepancy.
Most cloud providers use some form of thin provisioning when assigning storage.
So when you assign a 512 GB virtual disk to a VM, the VM sees 512 GB available, but the actual storage consumed will initially be much closer to 0 GB than to the allocated 512 GB. The empty disk space is not (yet) allocated in the backend; only once you start writing data to that disk will the actual disk consumption, as measured in the backend, increase.
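A sparse file on Linux behaves the same way and makes for a quick local demonstration of this (the filename disk.img is just an example):

```shell
# Create a 512 MB sparse file: the apparent size is 512 MB,
# but no blocks are actually allocated on disk yet.
truncate -s 512M disk.img
ls -lh disk.img   # apparent size: 512M
du -h disk.img    # blocks actually allocated: 0

# Write 10 MB of real data; only now does the real allocation grow.
dd if=/dev/zero of=disk.img bs=1M count=10 conv=notrunc
du -h disk.img    # blocks actually allocated: ~10M
```

This is exactly the guest-view vs. backend-view split: ls -lh is what the VM "sees", du -h is what the storage layer actually pays for.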
In other words: after writing 100 GB to that disk, running df will show 100 GB used inside the VM, and the storage backend will also report 100 GB in use.
When you delete 80 GB of those files, something interesting can happen: running df will show only 20 GB still used inside the VM, but the storage backend will report that 100 GB is still in use. That is because many storage backends can't reclaim thin-provisioned storage once it has been allocated. The allocated storage is only returned when the entire virtual disk is deleted; it is not reclaimed when the VM merely deletes files/data on the virtual disk.
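Some stacks can hand freed space back: if the virtual disk advertises discard/TRIM support (whether it does depends on the provider and disk type, so treat that as an assumption to verify), running fstrim in a Linux guest tells the backend which blocks are free. The local sparse-file analogue of that unmap is punching a hole:

```shell
# Fill a file with 100 MB of data, then "free" 80 MB of it.
dd if=/dev/zero of=disk.img bs=1M count=100
du -h disk.img    # ~100M allocated

# Punching a hole is the local analogue of a TRIM/unmap:
# without it, the allocation stays at 100M even though the data is gone.
fallocate --punch-hole --offset 0 --length $((80 * 1024 * 1024)) disk.img
du -h disk.img    # ~20M allocated

# Inside a real VM the equivalent (where the disk supports discard) is:
#   sudo fstrim -v /mountpoint
```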