fix: Issues with validate column for time zoned timestamps #930
Conversation
@@ -356,8 +355,8 @@ def test_column_validation_core_types_to_bigquery():
    "-tbls=pso_data_validator.dvt_core_types",
    "--filter-status=fail",
    "--sum=col_int8,col_int16,col_int32,col_int64,col_float64,col_date,col_datetime,col_dec_10_2,col_dec_20,col_dec_38,col_varchar_30,col_char_2,col_string",
Should we add col_tstz to the --sum flag as well?
Hmm, this is a tricky one.
I added the column and the test fails, but it's not clear what the best course of action is.
If I compare SQL Server to BigQuery then we get an epoch seconds mismatch:
- SQL Server: 259206
- BigQuery: 280806
However if I compare SQL Server to itself then the value is 280806, which matches BigQuery.
The problem stems from these lines in config_manager.py:
elif column_type == "timestamp" or column_type == "!timestamp":
    if (
        self.source_client.name == "bigquery"
        or self.target_client.name == "bigquery"
    ):
        calc_func = "cast"
        cast_type = "timestamp"
These lines say that if either the source or the target is BigQuery, then both source and target are cast to timestamp before applying the epoch seconds expression. That cast to timestamp crops the time zone on SQL Server.
At first I thought I needed to change the SQL Server cast to first convert to UTC, but it also seems wrong that a BigQuery requirement changes what we do on the other engine. If BigQuery needs a pre-cast to TIMESTAMP then it probably shouldn't be catered for here; it should be handled on the BigQuery side. Although I haven't looked into how hard it is to do that instead.
This needs more investigation.
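The cropping described above can be reproduced in a few lines of Python. This is purely an illustration, not the actual test data: it assumes a stored value of 1970-01-04 00:00:06 at a hypothetical UTC-6 offset, which reproduces the 21600-second gap between the two epoch values quoted earlier.

```python
from datetime import datetime, timezone, timedelta

# Hypothetical source value: 1970-01-04 00:00:06 at UTC-6
# (offset chosen only to reproduce the 21600 s mismatch above).
tz_minus_6 = timezone(timedelta(hours=-6))
aware = datetime(1970, 1, 4, 0, 0, 6, tzinfo=tz_minus_6)

# Epoch seconds honouring the offset (the 280806 result):
print(int(aware.timestamp()))  # 280806

# Cropping the time zone first, as the cast to timestamp does,
# reinterprets the local wall-clock time as UTC (the 259206 result):
naive_as_utc = aware.replace(tzinfo=timezone.utc)
print(int(naive_as_utc.timestamp()))  # 259206
```

The 21600-second difference is exactly the six-hour offset that the cast discards.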
Yeah, agreed. I'm seeing something very similar in Teradata: since we cast to timestamp, we lose the time zone, which causes a mismatch when validating col_tstz from TD to BQ.
TD to TD also produces the correct 280806 value.
I've created #938 for this problem.
/gcbrun
LGTM!
The change fixes Oracle TIMESTAMP WITH TIME ZONE validation and adds column validation tests for: