The first step is to design the database tables and columns, decide how the metadata will be stored, and draft the query and update statements you will need.
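For illustration only, here is a minimal schema sketch in Python against SQLite; the table and column names are assumptions, not a prescribed design, and on Google Cloud you would more likely target Cloud SQL, Spanner, or Firestore:

```python
import sqlite3

# Illustrative schema; column names are assumptions, not a prescribed design.
DDL = """
CREATE TABLE IF NOT EXISTS gcs_objects (
    bucket        TEXT NOT NULL,
    name          TEXT NOT NULL,      -- full object name (flat namespace)
    generation    INTEGER NOT NULL,   -- Cloud Storage object generation
    size_bytes    INTEGER,
    content_type  TEXT,
    md5_hash      TEXT,
    updated       TEXT,               -- RFC 3339 timestamp
    PRIMARY KEY (bucket, name)
);
CREATE INDEX IF NOT EXISTS idx_gcs_objects_updated ON gcs_objects (updated);
"""

conn = sqlite3.connect("metadata.db")  # placeholder database file
conn.executescript(DDL)
conn.commit()
```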
Then implement Cloud Storage triggers that notify a service you write to process those events. Cloud Functions and Cloud Run are commonly used for this. As part of processing each event, your code updates the database.
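As a minimal sketch, a 2nd-gen Cloud Function in Python (deployed with an event trigger such as `google.cloud.storage.object.v1.finalized`) might look like the following; `upsert_object_row` is a hypothetical helper standing in for your database write:

```python
import functions_framework


def upsert_object_row(**fields):
    """Hypothetical placeholder: replace with an UPSERT against your database."""
    print("would upsert:", fields)


@functions_framework.cloud_event
def on_storage_event(cloud_event):
    """Entry point for a Cloud Storage trigger (e.g. object finalized)."""
    data = cloud_event.data  # the Cloud Storage object resource as a dict
    upsert_object_row(
        bucket=data["bucket"],
        name=data["name"],
        generation=int(data["generation"]),
        size_bytes=int(data.get("size", 0)),
        content_type=data.get("contentType"),
        md5_hash=data.get("md5Hash"),
        updated=data.get("updated"),
    )
```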
The final step, once the triggers are working correctly, is to scan the entire bucket and backfill the database with metadata for every existing Cloud Storage object.
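A backfill sketch using the `google-cloud-storage` client library (the bucket name is a placeholder, and this reuses the hypothetical `upsert_object_row` helper from the trigger sketch):

```python
from google.cloud import storage


def backfill(bucket_name: str) -> None:
    """One-time scan: list every object and record its metadata."""
    client = storage.Client()
    # list_blobs pages through the entire flat namespace; no recursion needed.
    for blob in client.list_blobs(bucket_name):
        upsert_object_row(  # hypothetical helper from the trigger sketch
            bucket=bucket_name,
            name=blob.name,
            generation=blob.generation,
            size_bytes=blob.size,
            content_type=blob.content_type,
            md5_hash=blob.md5_hash,
            updated=blob.updated.isoformat() if blob.updated else None,
        )


backfill("your-bucket-name")  # placeholder bucket name
```

Running the backfill only after the triggers are live means changes that happen during the scan are still captured by the event path.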
Your question does not include details. It is better to state actual numbers than to write "I have a large number of objects stored in a GCP Cloud Storage Bucket." To me, that phrase means tens of millions of objects at a minimum. Your question also omits how fast changes occur in Cloud Storage and the actual queries you need to perform.
Keep in mind that Cloud Storage is a flat namespace; the concept of hierarchy (folders/directories) is emulated in software. If you store object names in the database exactly as they are stored in Cloud Storage, performance might not be any better.
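If you do need directory-style queries, one workaround (an assumption about your query patterns, not something from the question) is to derive and index the prefix yourself, since "folders" are just substrings of object names:

```python
# "Folders" are only prefixes inside object names; deriving them is string work.
name = "logs/2024/05/app.log"  # hypothetical object name
parts = name.split("/")
prefix = "/".join(parts[:-1])  # "logs/2024/05" -> store in an indexed column
basename = parts[-1]           # "app.log"
print(prefix, basename)
```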
I have implemented your type of design several times for AWS, Google Cloud, and Azure. Unless you really want the complexity of an event-driven system, I recommend listing the bucket once in a while and writing a simple text file (e.g. CSV) that can be processed with grep, awk, etc.
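As a sketch of that simpler approach (bucket name and output path are placeholders), a periodic job can dump one CSV row per object for grep/awk to chew on:

```python
import csv

from google.cloud import storage


def dump_inventory(bucket_name: str, out_path: str = "inventory.csv") -> None:
    """Write one row per object so ad-hoc questions become grep/awk one-liners."""
    client = storage.Client()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "size_bytes", "updated"])
        for blob in client.list_blobs(bucket_name):
            writer.writerow([blob.name, blob.size, blob.updated])


dump_inventory("your-bucket-name")  # placeholder bucket name
```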