I followed the Engine Configuration page from the SQLAlchemy 2.0 documentation:
from sqlalchemy import create_engine

my_engine = create_engine(
    "databricks+connector://token:*****06ca0cd575@****48527138139.9.gcp.databricks.com:443/checkout",
    connect_args={
        "http_path": "sql/protocolv1/o/****748527138139/0915-092710-***i3xlf",
    },
)
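The engine object can be checked on its own before wiring it into Great Expectations; a minimal sketch of a connection test, assuming the masked credentials above are valid (the SELECT 1 query is only illustrative):

from sqlalchemy import text

# Sanity check outside Great Expectations: open a connection and run a
# trivial query against the Databricks SQL endpoint.
with my_engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())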
Then I set up the datasource configuration:
example_yaml = f"""
name: {datasource_name}
class_name: Datasource
execution_engine: {my_engine}
data_connectors:
  default_runtime_data_connector_name:
    class_name: RuntimeDataConnector
    batch_identifiers:
      - default_identifier_name
  default_inferred_data_connector_name:
    class_name: InferredAssetSqlDataConnector
    # include_schema_name: True
    # introspection_directives:
    #   schema_name: {schema_name}
  default_configured_data_connector_name:
    class_name: ConfiguredAssetSqlDataConnector
    assets:
      {table_name}:
        class_name: Asset
        # schema_name: {schema_name}
"""
print(example_yaml)
context.test_yaml_config(yaml_config=example_yaml)
This throws the following error:
ValidationError: {'execution_engine': {'_schema': ['Invalid input type.']}}
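For comparison, the YAML schema appears to expect execution_engine as a nested config block (a class_name plus a connection string) rather than an interpolated Engine object; a minimal sketch, with placeholders standing in for the masked databricks+connector URL used above:

# Sketch only (assumption): execution_engine expressed as a config block
# instead of a Python Engine object; <token> and <host> are placeholders
# for the masked values used when creating my_engine.
engine_block = """
execution_engine:
  class_name: SqlAlchemyExecutionEngine
  connection_string: databricks+connector://token:<token>@<host>:443/checkout
"""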