We are using the V3 API configuration and have read multiple files into GE based on a regex filter.
How do we process these files in GE? We are getting the error below while creating a Batch in a Jupyter notebook:
1238 if len(batch_list) != 1:
→ 1239 raise ValueError(
1240 f"Got {len(batch_list)} batches instead of a single batch."
1241 )
ValueError: Got 7 batches instead of a single batch.
Is there a way to select one file from the list of files that were read?
@Garima93 Could you please create a GitHub issue for this?
Hello Eugene,
I looked again at the V3 API configuration in the GE documentation and found that it works by passing the index number, as below:
batch_request = BatchRequest(
    datasource_name="datasource_s3_json",
    data_connector_name="my_data_connector",
    data_asset_name="DEFAULT_ASSET_NAME",
    data_connector_query={
        "index": 1
    },
)
When connecting GE to S3 using InferredAssetS3DataConnector, is selecting by index the only way to work with one file when multiple files are fetched from S3?
If we want to run the batch on all the files, do we need to write code that loops over each file, or is there another way in GE to process the whole list of files against one expectation suite in one go?
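One pattern worth trying is simply looping over the matched files by index: build one `data_connector_query` per batch and validate each resulting batch against the same suite (for example via a validator obtained from the data context). The sketch below shows only the loop scaffolding; the `validate_batch` callback stands in for the actual GE call, which is an assumption to check against your GE version's docs rather than a confirmed API:

```python
# Sketch only: run the same expectation suite against every matched file.
# validate_batch is a placeholder for the real GE validation step, e.g.
# building a BatchRequest with data_connector_query={"index": i} and
# validating the resulting batch -- the exact GE calls depend on your version.
def validate_all_batches(n_batches, validate_batch):
    """Build one data_connector_query per batch index and validate each."""
    results = []
    for i in range(n_batches):
        query = {"index": i}  # selects exactly one of the matched files
        results.append(validate_batch(query))
    return results

# Usage with a stub in place of the real GE validation call
# (7 batches, matching the error message above):
results = validate_all_batches(7, lambda q: {"query": q, "success": True})
```

This keeps one suite shared across all files while producing a separate validation result per file.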