How do I use SqlAlchemyExecutionEngine when passing a URL (including both the connection string and the HTTP path)?

I followed the Engine Configuration — SQLAlchemy 2.0 Documentation:

from sqlalchemy import create_engine

my_engine = create_engine(
    "databricks+connector://token:*****06ca0cd575@****48527138139.9.gcp.databricks.com:443/checkout",
    connect_args={
        "http_path": "sql/protocolv1/o/****748527138139/0915-092710-***i3xlf",
    },
)

Datasource configuration:

example_yaml = f"""
name: {datasource_name}
class_name: Datasource
execution_engine: {my_engine}
data_connectors:
  default_runtime_data_connector_name:
    class_name: RuntimeDataConnector
    batch_identifiers:
      - default_identifier_name
  default_inferred_data_connector_name:
    class_name: InferredAssetSqlDataConnector
    # include_schema_name: True
    # introspection_directives:
    #   schema_name: {schema_name}
  default_configured_data_connector_name:
    class_name: ConfiguredAssetSqlDataConnector
    assets:
      {table_name}:
        class_name: Asset
        #schema_name: {schema_name}
"""
print(example_yaml)
context.test_yaml_config(yaml_config=example_yaml)

This throws the following error:

ValidationError: {'execution_engine': {'_schema': ['Invalid input type.']}}
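I suspect the cause is that the f-string interpolates `{my_engine}` as the Engine object's repr, so the `execution_engine` field in the YAML ends up as a plain string like `Engine(databricks+connector://...)` instead of the nested mapping the schema expects, hence "Invalid input type". A sketch of the nested form, with placeholder values (I'm not certain `connect_args` is forwarded to `create_engine` in every GX version, so verify against your release):

```python
# Placeholder values -- substitute your own workspace details.
datasource_name = "my_databricks_datasource"
connection_string = "databricks+connector://token:<token>@<host>:443/<database>"
http_path = "<your http_path>"

# execution_engine must be a nested config mapping, not an Engine object.
example_yaml = f"""
name: {datasource_name}
class_name: Datasource
execution_engine:
  class_name: SqlAlchemyExecutionEngine
  connection_string: {connection_string}
  connect_args:
    http_path: {http_path}
data_connectors:
  default_runtime_data_connector_name:
    class_name: RuntimeDataConnector
    batch_identifiers:
      - default_identifier_name
"""
```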

Does running the code below on Databricks work for you?

import great_expectations as gx

context = gx.get_context()
my_connection_string = "<your connection string here>"

datasource_name = "my_databricks_sql_datasource"
datasource = context.sources.add_databricks_sql(
    name=datasource_name, 
    connection_string=my_connection_string,
)

asset_name = "my_asset"
asset_table_name = "<your table name>"
table_asset = datasource.add_table_asset(name=asset_name, table_name=asset_table_name)
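For what it's worth, with the fluent `add_databricks_sql` API the `http_path` typically travels inside the connection string as a query parameter rather than in `connect_args`. A sketch of assembling it, with placeholder values (check the exact format against the GX Databricks guide for your version):

```python
# Placeholder values -- substitute your own workspace details.
token = "<your personal access token>"
host = "<workspace>.gcp.databricks.com"
http_path = "sql/protocolv1/o/<org-id>/<cluster-id>"
catalog = "<catalog>"
schema = "<schema>"

# Assemble the connection string; http_path rides along as a
# query parameter instead of a separate connect_args entry.
my_connection_string = (
    f"databricks://token:{token}@{host}:443"
    f"?http_path={http_path}&catalog={catalog}&schema={schema}"
)
```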