The current scripts expect the Hive shell on a Cloudera gateway machine, which currently launches MapReduce jobs on YARN.
That won't be available in the K8s environment.
I foresee we could do one of these (there may be more options):

- Explore the state of Hive on Spark (we gather that may have been removed)
- Explore using Trino as the execution engine
- Port it to Spark SQL
My intuition is that a Trino solution is the least invasive (perhaps we can replace `hive -f ...` with `trino -f ...`), and is likely the best option. It will likely require some UDF changes.
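If the Trino route pans out, the change to the driver scripts might be as small as swapping the CLI invocation. A rough sketch of what that swap could look like; the coordinator address, catalog, and schema here are placeholders, not real values:

```shell
# Before: Hive CLI on the Cloudera gateway, running a script file
# (compiles to MapReduce jobs on YARN):
#
#   hive -f etl_step.sql

# After: Trino CLI pointed at a Trino coordinator running in the
# K8s cluster. Using --catalog hive, Trino reads the same
# Hive-metastore tables, so the SQL itself may need little change.
trino --server https://trino.example.com:8443 \
      --catalog hive \
      --schema default \
      -f etl_step.sql
```

On the UDF point: some Hive built-ins have direct Trino equivalents (e.g. Hive's `get_json_object` roughly maps to Trino's `json_extract_scalar`), but custom Java UDFs loaded via `CREATE TEMPORARY FUNCTION ... USING JAR` won't carry over; those would need to be rewritten as Trino plugins or replaced with built-in functions.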