Error: Alias "column_values.nonnull.unexpected_count" is not unique

I am testing Great Expectations through the CLI, running against Sybase via SQLAlchemy.


  1. If a single suite JSON contains more than one expectation, the generated queries appear to overlap across expectations.
    e.g., I only defined an expect_column_values_to_be_in_set expectation, but a NOT NULL verification query runs at the same time.
  2. The overlapping alias names in the resulting query then cause the error above.

I would appreciate any thoughts on how to solve this.

gx version : 0.15.50
sqlalchemy version : 1.4.48


{
  "expectation_suite_name": "test_suite.json",
  "expectations": [
    {
      "expectation_type": "expect_column_values_to_be_in_set",
      "kwargs": {
        "column": "col1",
        "value_set": [0, 1]
      }
    },
    {
      "expectation_type": "expect_column_values_to_be_between",
      "kwargs": {
        "column": "col2",
        "min_value": 1,
        "max_value": 99
      }
    }
  ]
}

SELECT sum(CASE WHEN ("col1" IS NOT NULL AND "col1" NOT IN (0, 1)) THEN 1 ELSE 0 END) AS "column_values.in_set.unexpected_count",
       sum(CASE WHEN ("col2" IS NOT NULL AND NOT ("col2" >= 1 AND "col2" <= 99)) THEN 1 ELSE 0 END) AS "column_values.between.unexpected_count",
       sum(CASE WHEN ("col2" IS NULL) THEN 1 ELSE 0 END) AS "column_values.nonnull.unexpected_count",
       sum(CASE WHEN ("col1" IS NULL) THEN 1 ELSE 0 END) AS "column_values.nonnull.unexpected_count"
FROM (SELECT * FROM TABLENAME) AS anon_1
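The collision is visible in the query above: the SELECT alias seems to be derived from the metric name alone, so when the nonnull count is computed for both col1 and col2, the two columns get the same alias. A minimal pure-Python sketch of the problem (the metric list below is hypothetical, just mirroring the query):

```python
# Hypothetical (column, metric) pairs mirroring the generated query above.
metrics = [
    ("col1", "column_values.in_set.unexpected_count"),
    ("col2", "column_values.between.unexpected_count"),
    ("col2", "column_values.nonnull.unexpected_count"),
    ("col1", "column_values.nonnull.unexpected_count"),
]

# If the SELECT alias is just the metric name (ignoring the column),
# any metric computed for more than one column collides.
aliases = [name for _column, name in metrics]
duplicates = sorted({a for a in aliases if aliases.count(a) > 1})
print(duplicates)  # ['column_values.nonnull.unexpected_count']
```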

As far as I can tell, validation builds this query through SQLAlchemy functions (in_, or_, etc.), which execute as Python code. I would like to know where this query is generated, and whether there would be any problem with changing the aliases.
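As for changing the aliases: conceptually, making each alias unique (for example by appending the column it belongs to) would avoid the collision. Below is only a sketch of that idea in plain Python, not the actual GX/SQLAlchemy code path; `disambiguate` is a hypothetical helper:

```python
from collections import Counter

def disambiguate(metrics):
    """Append the column name to any alias that would otherwise repeat.

    metrics: list of (column, metric_name) pairs, roughly as an execution
    engine might collect them before labeling the SELECT columns.
    """
    counts = Counter(name for _column, name in metrics)
    return [
        f"{name}__{column}" if counts[name] > 1 else name
        for column, name in metrics
    ]

metrics = [
    ("col1", "column_values.in_set.unexpected_count"),
    ("col2", "column_values.between.unexpected_count"),
    ("col2", "column_values.nonnull.unexpected_count"),
    ("col1", "column_values.nonnull.unexpected_count"),
]
print(disambiguate(metrics))
# Only the two colliding nonnull aliases get a "__colN" suffix.
```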

Hi @kang, thanks for reaching out! Welcome to the GX community.

As a first step, I’d recommend upgrading to the latest version of great-expectations. The CLI is deprecated, and version 0.15.x is no longer supported - as of this post, we support GX versions 0.17.x and 0.18.x. It’s possible this is a legacy issue that has been fixed in subsequent versions.

If you still encounter this issue after upgrading, please let us know and we’ll take a look.