The answer is “not directly.” If you can load your MongoDB/ArangoDB data into a Pandas or Spark dataframe, however, you can work with it that way. See, for example, https://docs.greatexpectations.io/en/latest/tutorials/create_expectations.html?highlight=batch_kwargs#load-a-batch-of-data-to-create-expectations: instead of providing a ‘path’ key in the batch_kwargs, you would provide a ‘dataset’ key whose value is the Pandas or Spark dataframe.
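
To make that concrete, here is a minimal sketch using pymongo and the batch_kwargs-era (v2) Great Expectations API that the linked tutorial describes. The connection string, database/collection names, datasource name, and suite name are all hypothetical placeholders, and the ArangoDB case would be analogous (query the collection, build a dataframe from the results):

```python
# A minimal sketch, assuming a local MongoDB instance with a
# hypothetical database "mydb" and collection "users", and the
# batch_kwargs-era (v2) Great Expectations API.
import pandas as pd
import pymongo
import great_expectations as ge

# Pull the collection's documents into a Pandas dataframe,
# excluding Mongo's internal _id field.
client = pymongo.MongoClient("mongodb://localhost:27017/")
records = list(client["mydb"]["users"].find({}, {"_id": 0}))
df = pd.DataFrame(records)

# Option 1: wrap the dataframe directly and run expectations on it.
ge_df = ge.from_pandas(df)
result = ge_df.expect_column_values_to_not_be_null("email")
print(result.success)

# Option 2: hand the dataframe to a DataContext via batch_kwargs,
# using a 'dataset' key instead of a 'path' key. The datasource and
# suite names here are hypothetical placeholders.
context = ge.DataContext()
batch_kwargs = {"datasource": "my_pandas_datasource", "dataset": df}
batch = context.get_batch(batch_kwargs, "my_suite")
batch.expect_column_values_to_not_be_null("email")
```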