I just want to ask about a known pattern for storing validations/Data Docs generated by multiple pipelines.
My first guess is to use a different prefix per pipeline in the same bucket, e.g. s3://my-bucket/pipeline_a/ and s3://my-bucket/pipeline_b/.
If each pipeline has its own Great Expectations Data Context (with its own great_expectations.yml config file), then the approach described in the question is the way to go: configure each Data Docs site to write to a different prefix in the same S3 bucket.
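For illustration, a minimal sketch of what that might look like in one pipeline's great_expectations.yml; the bucket name, prefixes, and store/site names are placeholders, and the same prefix pattern applies to the validations store as well as the Data Docs site:

```yaml
# great_expectations.yml for pipeline A. Pipeline B's config would be
# identical except for the prefixes (e.g. pipeline_b/...).
validations_store_name: validations_store

stores:
  validations_store:
    class_name: ValidationsStore
    store_backend:
      class_name: TupleS3StoreBackend
      bucket: my-shared-ge-bucket        # placeholder bucket name
      prefix: pipeline_a/validations/    # per-pipeline prefix

data_docs_sites:
  s3_site:
    class_name: SiteBuilder
    store_backend:
      class_name: TupleS3StoreBackend
      bucket: my-shared-ge-bucket        # same bucket for both pipelines
      prefix: pipeline_a/data_docs/      # per-pipeline prefix
    site_index_builder:
      class_name: DefaultSiteIndexBuilder
```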
The second approach is for the two pipelines to share a Data Context (one great_expectations.yml config file). When you validate data by running a Validation Operator (or a Checkpoint), you pass a data_asset_name. As long as the data_asset_name values used by the two pipelines do not collide, the validation results from both can coexist in the same Data Docs site.
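A hedged sketch of the second approach using the Validation Operator (V2) API; the datasource name, file path, suite name, and operator name below are assumptions, not prescribed values:

```python
import great_expectations as ge

# One shared Data Context (one great_expectations.yml) used by both pipelines.
context = ge.data_context.DataContext()

# Pipeline A tags its batch with a pipeline-specific data_asset_name;
# pipeline B would use something like "pipeline_b/users" instead.
batch_kwargs = {
    "datasource": "my_files_datasource",   # assumed datasource name
    "path": "data/incoming/users.csv",     # assumed path
    "data_asset_name": "pipeline_a/users", # per-pipeline, non-colliding name
}
batch = context.get_batch(batch_kwargs, expectation_suite_name="users.warning")

# Run the Validation Operator ("action_list_operator" is the conventional
# default name; check your config). Results from both pipelines land in the
# same stores and Data Docs site, kept distinct by their data_asset_name.
results = context.run_validation_operator(
    "action_list_operator",
    assets_to_validate=[batch],
)
```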