Data Docs site HTML being downloaded instead of rendering in browser

Environment: Databricks, with S3 hosting the Data Docs site
I'm facing an issue with a Data Docs site hosted on an S3 bucket from Databricks. I've configured the YAML file in Databricks to reference the volume path backed by the bucket, since I cannot save files to S3 directly from my Databricks environment. However, when I open the website link, the HTML file fails to display; the browser downloads it instead of rendering it.
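
From what I can tell, a browser only renders the page inline when the S3 object's Content-Type is text/html; otherwise it downloads it. Here is a minimal diagnostic sketch, assuming boto3 and read credentials for the bucket (the bucket and key names are hypothetical placeholders for the object behind my site link):

```python
import boto3

s3 = boto3.client("s3")  # assumes AWS credentials with read access to the bucket

# Hypothetical bucket/key; substitute the actual object behind the site link
resp = s3.head_object(Bucket="my-data-docs-bucket", Key="operations/index.html")

# If this prints application/octet-stream (a common default) instead of
# text/html, the browser will download the page instead of rendering it.
print(resp["ContentType"])
```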

great_expectations.yml:

```yaml
# Welcome to Great Expectations! Always know what to expect from your data.
#
# Here you can define datasources, batch kwargs generators, integrations and
# more. This file is intended to be committed to your repo. For help with
# configuration please:
#   - Read our docs: https://docs.greatexpectations.io/docs/guides/connecting_to_your_data/connect_to_data_overview/#2-configure-your-datasource
#   - Join our slack channel: http://greatexpectations.io/slack

# config_version refers to the syntactic version of this config file, and is used in maintaining backwards compatibility
# It is auto-generated and usually does not need to be changed.
config_version: 3

# Datasources tell Great Expectations where your data lives and how to get it.
# Read more at https://docs.greatexpectations.io/docs/guides/connecting_to_your_data/connect_to_data_overview
datasources: {}

# This config file supports variable substitution which enables: 1) keeping
# secrets out of source control & 2) environment-based configuration changes
# such as staging vs prod.
#
# When GX encounters substitution syntax (like `my_key: ${my_value}` or
# `my_key: $my_value`) in the great_expectations.yml file, it will attempt
# to replace the value of `my_key` with the value from an environment
# variable `my_value` or a corresponding key read from this config file,
# which is defined through the `config_variables_file_path`.
# Environment variables take precedence over variables defined here.
#
# Substitution values defined here can be a simple (non-nested) value,
# nested value such as a dictionary, or an environment variable (i.e. ${ENV_VAR})
#
#
# https://docs.greatexpectations.io/docs/guides/setup/configuring_data_contexts/how_to_configure_credentials


config_variables_file_path: uncommitted/config_variables.yml

# The plugins_directory will be added to your python path for custom modules
# used to override and extend Great Expectations.
plugins_directory: plugins/

stores:
# Stores are configurable places to store things like Expectations, Validations
# Data Docs, and more. These are for advanced users only - most users can simply
# leave this section alone.
#
# Three stores are required: expectations, validations, and
# evaluation_parameters, and must exist with a valid store entry. Additional
# stores can be configured for uses such as data_docs, etc.
  expectations_store:
    class_name: ExpectationsStore
    store_backend:
      class_name: TupleFilesystemStoreBackend
      base_directory: expectations/

  validations_store:
    class_name: ValidationsStore
    store_backend:
      class_name: TupleFilesystemStoreBackend
      base_directory: uncommitted/validations/

  evaluation_parameter_store:
    # Evaluation Parameters enable dynamic expectations. Read more here:
    # https://docs.greatexpectations.io/docs/reference/evaluation_parameters/
    class_name: EvaluationParameterStore

  checkpoint_store:
    class_name: CheckpointStore
    store_backend:
      class_name: TupleFilesystemStoreBackend
      suppress_store_backend_id: true
      base_directory: checkpoints/

  profiler_store:
    class_name: ProfilerStore
    store_backend:
      class_name: TupleFilesystemStoreBackend
      suppress_store_backend_id: true
      base_directory: profilers/

expectations_store_name: expectations_store
validations_store_name: validations_store
evaluation_parameter_store_name: evaluation_parameter_store
checkpoint_store_name: checkpoint_store

data_docs_sites:
  # Data Docs make it simple to visualize data quality in your project. These
  # include Expectations, Validations & Profiles. They are built for all
  # Datasources from JSON artifacts in the local repo including validations &
  # profiles from the uncommitted directory. Read more at https://docs.greatexpectations.io/docs/terms/data_docs
  local_site:
    class_name: SiteBuilder
    # set to false to hide how-to buttons in Data Docs
    show_how_to_buttons: true
    store_backend:
        class_name: TupleFilesystemStoreBackend
        base_directory: /Volumes/operations/
    site_index_builder:
        class_name: DefaultSiteIndexBuilder

anonymous_usage_statistics:
  enabled: true
```
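
From the docs it looks like GX can also write Data Docs straight to S3 via TupleS3StoreBackend, which uploads each .html file with a text/html Content-Type itself. I haven't been able to use it because my cluster has no S3 credentials, but for reference, a minimal sketch of that setup (the bucket name and project root are hypothetical):

```python
import great_expectations as gx
from great_expectations.data_context.types.base import (
    DataContextConfig,
    FilesystemStoreBackendDefaults,
)

# Sketch only: point the Data Docs site at S3 so GX performs the upload
# (and sets the Content-Type) rather than writing through a Volume path.
project_config = DataContextConfig(
    store_backend_defaults=FilesystemStoreBackendDefaults(
        root_directory="/dbfs/great_expectations/"  # hypothetical project root
    ),
    data_docs_sites={
        "s3_site": {
            "class_name": "SiteBuilder",
            "store_backend": {
                "class_name": "TupleS3StoreBackend",
                "bucket": "my-data-docs-bucket",  # hypothetical bucket name
                "prefix": "data_docs",
            },
            "site_index_builder": {"class_name": "DefaultSiteIndexBuilder"},
        }
    },
)

context = gx.get_context(project_config=project_config)
```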

Notebook code:

```python
%pip install great_expectations==0.18.15
dbutils.library.restartPython()

#import all the relevant packages
from great_expectations.data_context.types.base import DataContextConfig, FilesystemStoreBackendDefaults
from great_expectations.data_context import BaseDataContext
from great_expectations.profile.user_configurable_profiler import UserConfigurableProfiler
from great_expectations.render.renderer import ProfilingResultsPageRenderer, ExpectationSuitePageRenderer
from great_expectations.render.view import DefaultJinjaPageView
import great_expectations as gx
from great_expectations.dataset import SparkDFDataset  # legacy dataset API, used by the profiling step below

# schema_name and table_name are assumed to be defined earlier (e.g. via widgets)
context = gx.get_context()  # loads the great_expectations.yml shown above

datasource_name = f"{schema_name}_datasource"
data_asset_name = f"{schema_name}_{table_name}"
expectation_suite_name = f"{schema_name}_{table_name}_expectation_suite"  ## this will be the expectation suite for the selected data assets
checkpoint_name = f"{schema_name}_{table_name}_checkpoint"  ## this will be the checkpoint for executing validations against it
datasource_config = {
    "name": datasource_name,
    "class_name": "Datasource",
    "execution_engine": {
        "class_name": "SparkDFExecutionEngine",
    },
    "data_connectors": {
        "default_runtime_data_connector_name": {
            "class_name": "RuntimeDataConnector",
            "batch_identifiers": ["default_identifier_name"],
        },
    },
}
context.add_datasource(**datasource_config)

context.add_or_update_expectation_suite(expectation_suite_name=expectation_suite_name)

df = spark.read.format("delta").table(f"{schema_name}.{table_name}")

data_asset = SparkDFDataset(df)

profiler = UserConfigurableProfiler(profile_dataset=data_asset)
generated_suite = profiler.build_suite()

context.save_expectation_suite(generated_suite, expectation_suite_name)

loaded_suite = context.get_expectation_suite(expectation_suite_name=expectation_suite_name)
import json
print(json.dumps(loaded_suite.to_json_dict(), indent=2))

dataframe_datasource = context.sources.add_or_update_spark(
    name=table_name,
)
dataframe_asset = dataframe_datasource.add_dataframe_asset(
    name=data_asset_name,
    dataframe=df,
)

batch_request = dataframe_asset.build_batch_request()
print(batch_request)

validator = context.get_validator(
    batch_request=batch_request,
    expectation_suite_name=expectation_suite_name,
    asset_name=data_asset_name,
)

results = validator.validate()

checkpoint = context.add_or_update_checkpoint(
    name=checkpoint_name,
    validator=validator,
)

checkpoint_result = checkpoint.run()

context.build_data_docs()
```

In the S3 bucket, I can see that the HTML file's metadata (Content-Type) is not HTML:
![image|690x129](upload://61EqsNJyerAOv8gjeoCEG1k9FDt.png)
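
If the files have to keep landing through the Volume, one workaround I'm considering is rewriting the objects' metadata after context.build_data_docs() completes. A minimal sketch, assuming boto3, credentials that can write to the bucket, and hypothetical bucket/prefix names:

```python
import boto3

s3 = boto3.client("s3")  # assumes AWS credentials that can write to the bucket
bucket = "my-data-docs-bucket"  # hypothetical
prefix = "operations/"          # hypothetical prefix the Volume maps to

# Copy each HTML object onto itself with the Content-Type replaced, so
# browsers render the pages instead of downloading them.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith(".html"):
            s3.copy_object(
                Bucket=bucket,
                Key=key,
                CopySource={"Bucket": bucket, "Key": key},
                MetadataDirective="REPLACE",
                ContentType="text/html",
            )
```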

Can someone please help me out with this?