Hi team / @kyle / @abegong
Great work on Great Expectations!
We'd like to know whether there is a feature available (or already on the roadmap) for tagging Great Expectations test suites with data quality dimensions such as completeness, accuracy, timeliness, and consistency, so that once we run the suites, we could score the results on a data quality dashboard.
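One lightweight convention (an assumption on our side, not a built-in feature) would be to tag each expectation with its dimension via the free-form `meta` field that expectation configurations already support; the tag key `dq_dimension` below is a hypothetical name, not part of the library:

```python
# Sketch of tagging expectations with a data-quality dimension using the
# `meta` field. The expectation configs are shown as plain dicts; the
# "dq_dimension" key is an assumed convention, not a GE-defined field.
completeness_check = {
    "expectation_type": "expect_column_values_to_not_be_null",
    "kwargs": {"column": "customer_id"},
    "meta": {"dq_dimension": "completeness"},  # assumed tag name
}

timeliness_check = {
    "expectation_type": "expect_column_max_to_be_between",
    "kwargs": {"column": "ingested_at", "min_value": "2023-01-01"},
    "meta": {"dq_dimension": "timeliness"},  # assumed tag name
}

suite = {
    "expectation_suite_name": "customer_ingestion_suite",  # hypothetical name
    "expectations": [completeness_check, timeliness_check],
}
```

Because `meta` is passed through untouched into validation results, a dashboard could later group pass/fail counts by this tag.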
Short answer: yes, we’re interested in building in this direction. We actually put together a basic prototype at our last internal hackathon.
The real question is how you would score data quality along those dimensions. In interviews, we’ve found that different teams have different preferences for how to score.
I’d love to use this thread as a way to gather ideas.
Thanks for the prompt response.
It would be great if we could have a look at this prototype.
Our main goal is to crowdsource data quality checks to the various business analysts and data SMEs in the organization: they submit their expectation suites, and the data quality framework leverages those suites to run checks against each data ingestion into the Data Lake.
However, we would like users to be able to categorize these tests by dimension, so that we can build a dashboard of these metrics and report it to our clients.
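If expectations carry a dimension tag in their `meta` field (an assumed convention, not a built-in feature), scoring for the dashboard could be a simple aggregation over the validation results, which already expose a per-expectation `success` flag alongside the original expectation config. A minimal sketch, with toy results standing in for a real suite run:

```python
from collections import defaultdict

def score_by_dimension(validation_results):
    """Compute pass rate per data-quality dimension from a list of
    per-expectation results shaped like GE's validation output: each
    entry has a `success` flag and the original expectation config.
    The "dq_dimension" meta key is an assumed tagging convention."""
    totals = defaultdict(lambda: [0, 0])  # dimension -> [passed, total]
    for r in validation_results:
        dim = (r["expectation_config"]
               .get("meta", {})
               .get("dq_dimension", "untagged"))
        totals[dim][1] += 1
        if r["success"]:
            totals[dim][0] += 1
    return {d: passed / total for d, (passed, total) in totals.items()}

# Toy results illustrating the shape; real results come from a suite run.
results = [
    {"success": True,
     "expectation_config": {"meta": {"dq_dimension": "completeness"}}},
    {"success": False,
     "expectation_config": {"meta": {"dq_dimension": "completeness"}}},
    {"success": True,
     "expectation_config": {"meta": {"dq_dimension": "timeliness"}}},
]
print(score_by_dimension(results))
# → {'completeness': 0.5, 'timeliness': 1.0}
```

A plain pass rate is only one scoring choice; per the earlier point that teams score differently, the aggregation function is exactly the part you would swap out (weighted scores, severity tiers, etc.).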