{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":523007292,"defaultBranch":"main","name":"xla","ownerLogin":"openxla","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2022-08-09T15:30:01.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/107584881?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1721744129.0","currentOid":""},"activityList":{"items":[{"before":"aec1282d9491b7a2b4b42ce6553294de0d254fe4","after":"36e5fd00f62a50537ef3f8e44a811da4183282fe","ref":"refs/heads/test_654697195","pushedAt":"2024-07-23T14:18:05.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"},"commit":{"message":"[XLA:CPU] Add runtime check if `stochastic-convert` was decomposed.\n\nIn the current runtime, `stochastic-convert` op emitting is not supported by design, normally it is rewritten by `StochasticConvertDecomposer` HLO pass and never reaches the emit phase. This CL adds a runtime check if this op was actually rewritten and returns an explicit message if it wasn't (instead of cryptic message that `kStochasticConvert` opcode was not handled in elemental IR emitter).\n\nAlso added fundamental tests for stochastic convert op for both runtimes - current and thunks.\n\nPiperOrigin-RevId: 654697195","shortMessageHtmlLink":"[XLA:CPU] Add runtime check if stochastic-convert was decomposed."}},{"before":"a02433bdc5e6dbe68446d6c95c4959b5e1715cf9","after":null,"ref":"refs/heads/test_654795065","pushedAt":"2024-07-23T14:15:29.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"}},{"before":"5ef10e53fdfc0a7069fb4c5532051df0086e0fd3","after":"a02433bdc5e6dbe68446d6c95c4959b5e1715cf9","ref":"refs/heads/main","pushedAt":"2024-07-23T14:15:28.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"},"commit":{"message":"Integrate LLVM at llvm/llvm-project@acc159aea1e6\n\nUpdates LLVM usage to match\n[acc159aea1e6](https://github.com/llvm/llvm-project/commit/acc159aea1e6)\n\nPiperOrigin-RevId: 655146366","shortMessageHtmlLink":"Integrate LLVM at llvm/llvm-project@acc159aea1e6"}},{"before":"b85e935c2fb6fe9aae688ab868f16b4c140cdb6d","after":"a02433bdc5e6dbe68446d6c95c4959b5e1715cf9","ref":"refs/heads/test_654795065","pushedAt":"2024-07-23T14:15:23.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"},"commit":{"message":"Integrate LLVM at llvm/llvm-project@acc159aea1e6\n\nUpdates LLVM usage to match\n[acc159aea1e6](https://github.com/llvm/llvm-project/commit/acc159aea1e6)\n\nPiperOrigin-RevId: 655146366","shortMessageHtmlLink":"Integrate LLVM at 
2024-07-23 14:09:05 UTC · force push to `test_654865806`
[xla:cpu] Support for up to 16 sorted inputs
Also enables more jax/lax tests for XLA CPU thunks.
PiperOrigin-RevId: 654865806

2024-07-23 13:58:40 UTC · push to `main`
PR #15222: [GPU] Fix cuDNN workspace test condition.
Imported from GitHub PR https://github.com/openxla/xla/pull/15222 (e57258f605e6df6bf61ac55632e939b5badd6c22 by Ilia Sergachev). Merging this change closes #15222.
PiperOrigin-RevId: 655145324

2024-07-23 13:45:20 UTC · force push to `test_655128242`
[XLA:CPU] Add runtime check for whether `batch-norm-training` is rewritten
In the current runtime, emitting the `batch-norm-training` op is not supported by design; the op is expected to be rewritten by another HLO pass before ever reaching the emit phase. This CL adds a runtime check for whether the op was actually rewritten and returns an explicit message if it wasn't. Also includes a new unit test covering existing and new functionality: batch_norm_training_test.cc.
PiperOrigin-RevId: 655128242

2024-07-23 13:42:55 UTC · branch `test_650980466` deleted

2024-07-23 13:42:53 UTC · push to `main`
[IFRT] Introduce pass to merge reshards with the same src and dst.
This optimizes the reshards to use a single RPC call instead of multiple RPC calls. It currently merges only when the src is a func BlockArg or an ifrt op with `outputs` of type IfrtArrayType; any destination is supported.
PiperOrigin-RevId: 655145015
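The merge in the IFRT entry above is keyed on the (source, destination) pair: reshards sharing that key are batched so the pair costs one RPC instead of one per array. A self-contained sketch of the grouping idea follows; the `Reshard` struct is a toy stand-in (the real pass operates on IFRT IR), and only the batching logic is meant to mirror the described behavior.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for a reshard request in IFRT IR.
struct Reshard {
  std::string src;  // source devices (e.g. a func BlockArg's placement)
  std::string dst;  // destination devices
  int array_id;     // the array being resharded
};

int main() {
  std::vector<Reshard> reshards = {
      {"devices:0-3", "devices:4-7", 1},
      {"devices:0-3", "devices:4-7", 2},   // same src+dst: mergeable
      {"devices:0-3", "devices:8-11", 3},  // different dst: separate RPC
  };

  // Group by (src, dst); each group becomes a single merged RPC.
  std::map<std::pair<std::string, std::string>, std::vector<int>> groups;
  for (const auto& r : reshards) groups[{r.src, r.dst}].push_back(r.array_id);

  for (const auto& [key, ids] : groups) {
    std::cout << "one RPC " << key.first << " -> " << key.second
              << " carrying " << ids.size() << " array(s)\n";
  }
}
```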
2024-07-23 13:42:52 UTC · force push to `test_650980466` (same IFRT merge-reshards change, PiperOrigin-RevId: 655145015)

2024-07-23 13:35:23 UTC · branch `test_655128242` created (the `batch-norm-training` runtime check above, PiperOrigin-RevId: 655128242)

2024-07-23 13:30:52 UTC · force push to `test_650980466` (IFRT merge-reshards change, PiperOrigin-RevId: 650980466)

2024-07-23 13:22:42 UTC · force push to `test_654795065` (Integrate LLVM at llvm/llvm-project@acc159aea1e6, PiperOrigin-RevId: 654795065)
2024-07-23 12:59:13 UTC · force push to `test_650980466` (IFRT merge-reshards change, PiperOrigin-RevId: 650980466)

2024-07-23 12:48:56 UTC · branch `test_648711460` deleted

2024-07-23 12:48:55 UTC · push to `main`
#sdy Initial set of changes to allow for lowering to the Shardy dialect.
The OpenXLA project is working on an open source, MLIR, named-axis based propagation (and in the future SPMD partitioning) system. […]
3. StableHLO with Shardy propagation
4. StableHLO with Shardy partitioning
5. StableHLO -> HLO
6. XLA optimizations

The following test:

```py
def test_sdy_lowering(self):
  mesh = jtu.create_global_mesh((4, 2), ('x', 'y'))
  np_inp = np.arange(16).reshape(8, 2)
  s = jax.sharding.NamedSharding(mesh, P('x', 'y'))
  arr = jax.device_put(np_inp, s)

  @partial(jax.jit, out_shardings=s)
  def f(x):
    return x * 2

  print(f.lower(arr).as_text())
```

outputs:

```
module @jit_f attributes {mhlo.num_partitions = 8 : i32, mhlo.num_replicas = 1 : i32} {
  sdy.mesh @mesh = <"x"=4, "y"=2>
  func.func public @main(%arg0: tensor<8x2xi64> {mhlo.layout_mode = "{1,0}", sdy.sharding = #sdy.sharding<@mesh, [{"x"}, {"y"}]>}) -> (tensor<8x2xi64> {jax.result_info = "", mhlo.layout_mode = "default", sdy.sharding = #sdy.sharding<@mesh, [{"x"}, {"y"}]>}) {
    %c = stablehlo.constant dense<2> : tensor<i64>
    %0 = stablehlo.broadcast_in_dim %c, dims = [] : (tensor<i64>) -> tensor<8x2xi64>
    %1 = stablehlo.multiply %arg0, %0 : tensor<8x2xi64>
    return %1 : tensor<8x2xi64>
  }
}
```

Shardy will be hidden behind the `jax_use_shardy_partitioner` flag initially before becoming enabled by default in the future.
PiperOrigin-RevId: 655127611

2024-07-23 12:48:52 UTC · force push to `test_648711460` (same Shardy change, PiperOrigin-RevId: 655127611)
2024-07-23 12:40:04 UTC · branch `test_651748318` created
[XLA:GPU] Don't upcast supported fp8 dot operands when they are inside a Triton fusion.
Keep normalizing fp8 outside of Triton; within Triton fused computations, certain operand type combinations are fine.
PiperOrigin-RevId: 651748318

2024-07-23 12:33:17 UTC · push to `main`
PR #15210: Require Matching Types in Layer Norm Fusion
Imported from GitHub PR https://github.com/openxla/xla/pull/15210. Disables the fusion of layer norm patterns when the input and output types are not the same (4872f4413abbbc5e330a36f1c2ce5a5264a61fd7 by Philipp Hack). Merging this change closes #15210.
PiperOrigin-RevId: 655126293

2024-07-23 12:10:31 UTC · branch `fix_cudnn_test` created by sergachev (Ilia Sergachev)
[GPU] Fix cuDNN workspace test condition.
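The PR #15210 entry above reduces to an extra equality check in the fusion matcher: when input and output element types differ, the layer-norm pattern is not fused. A minimal sketch, assuming a stand-in `PrimitiveType` enum and matcher entry point rather than XLA's actual ones:

```cpp
#include <iostream>

// Hypothetical stand-in for XLA's element-type enum.
enum class PrimitiveType { F16, BF16, F32 };

// Fusion guard: mismatched input/output types disable the layer-norm fusion.
bool ShouldFuseLayerNorm(PrimitiveType input, PrimitiveType output) {
  return input == output;
}

int main() {
  std::cout << ShouldFuseLayerNorm(PrimitiveType::F32, PrimitiveType::F32)
            << "\n";  // 1: types match, fuse
  std::cout << ShouldFuseLayerNorm(PrimitiveType::F16, PrimitiveType::F32)
            << "\n";  // 0: types differ, skip fusion
}
```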
condition."}},{"before":"d1afe17c5b9843e9695888b0722b7537f5c0f34b","after":"e74d98a0a19179b1024cff3de79b1e123b5af601","ref":"refs/heads/test_648711460","pushedAt":"2024-07-23T12:09:36.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"},"commit":{"message":"#sdy Initial set of changes to allow for lowering to the Shardy dialect.\n\nThe OpenXLA project is working on an open source, MLIR, named-axis based propagation (and in the future SP StableHLO\n3. StableHLO with Shardy propagation\n4. StableHLO with Shardy partitioning\n5. StableHLO -> HLO\n6. XLA optimizations\n\nThe following test:\n\n```py\ndef test_sdy_lowering(self):\n mesh = jtu.create_global_mesh((4, 2), ('x', 'y'))\n np_inp = np.arange(16).reshape(8, 2)\n s = jax.sharding.NamedSharding(mesh, P('x', 'y'))\n arr = jax.device_put(np_inp, s)\n\n @partial(jax.jit, out_shardings=s)\n def f(x):\n return x * 2\n\n print(f.lower(arr).as_text())\n```\n\noutputs:\n\n```\nmodule @jit_f attributes {mhlo.num_partitions = 8 : i32, mhlo.num_replicas = 1 : i32} {\n sdy.mesh @mesh = <\"x\"=4, \"y\"=2>\n func.func public @main(%arg0: tensor<8x2xi64> {mhlo.layout_mode = \"{1,0}\", sdy.sharding = #sdy.sharding<@mesh, [{\"x\"}, {\"y\"}]>}) -> (tensor<8x2xi64> {jax.result_info = \"\", mhlo.layout_mode = \"default\", sdy.sharding = #sdy.sharding<@mesh, [{\"x\"}, {\"y\"}]>}) {\n %c = stablehlo.constant dense<2> : tensor\n %0 = stablehlo.broadcast_in_dim %c, dims = [] : (tensor) -> tensor<8x2xi64>\n %1 = stablehlo.multiply %arg0, %0 : tensor<8x2xi64>\n return %1 : tensor<8x2xi64>\n }\n}\n```\n\nShardy will be hidden behind the `jax_use_shardy_partitioner` flag initially before becoming enabled by default in the future.\n\nPiperOrigin-RevId: 648711460","shortMessageHtmlLink":"#sdy Initial set of changes to allow for lowering to the Shardy dialect."}},{"before":"0dc1d2c636d505975f44de04d83a3a1f913d1adc","after":"1b3cca09d6e9773c99018211ace517b5f5b7ebc6","ref":"refs/heads/test_654795065","pushedAt":"2024-07-23T11:57:13.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"},"commit":{"message":"Integrate LLVM at llvm/llvm-project@acc159aea1e6\n\nUpdates LLVM usage to match\n[acc159aea1e6](https://github.com/llvm/llvm-project/commit/acc159aea1e6)\n\nPiperOrigin-RevId: 654795065","shortMessageHtmlLink":"Integrate LLVM at llvm/llvm-project@acc159aea1e6"}},{"before":"46f6c4764bd774a14d5cfe7f36e40d44f9147c88","after":"af9882015f8668d370ff230e86897709cf9d2aee","ref":"refs/heads/main","pushedAt":"2024-07-23T11:35:00.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"},"commit":{"message":"PR #15149: [XLA:GPU] print cudnn frontend check_support error message\n\nImported from GitHub PR https://github.com/openxla/xla/pull/15149\n\nCopybara import of the project:\n\n--\n0e5c2fa7e539fe97f7c5b8aadfc3fedd38ad4232 by cjkkkk :\n\ninit\n\n--\n8565f356e82549e05b9d46e8a92f3e49a4370f3b by cjkkkk :\n\nupdate function signature\n\nMerging this change closes #15149\n\nCOPYBARA_INTEGRATE_REVIEW=https://github.com/openxla/xla/pull/15149 from Cjkkkk:print_cudnn_check_support_message 
2024-07-23 11:32:48 UTC · push to `main`
PR #15163: [NVIDIA GPU] Skip processing for trip count 1 in loop double buffer unrolling
Imported from GitHub PR https://github.com/openxla/xla/pull/15163. A minor improvement to the loop double buffer unrolling pass: skip processing for loops with trip count = 1 (f6bccd4612dde10cd020141f804523a75d9c84a2 and 19514326f83ee91f712ffbd74cec90b19db77df7 by TJ Xu; the second commit adds a test case). Merging this change closes #15163.
PiperOrigin-RevId: 655111494

2024-07-23 11:21:20 UTC · force push to `test_655104496`
[XLA:CPU] Add runtime check for whether `batch-norm-grad` is rewritten
In the current runtime, emitting the `batch-norm-grad` op is not supported by design; the op is expected to be rewritten by another HLO pass before ever reaching the emit phase. This CL adds a runtime check for whether the op was actually rewritten and returns an explicit message if it wasn't. Also includes a new unit test covering existing and new functionality: batch_norm_grad_test.cc.
PiperOrigin-RevId: 655104496

2024-07-23 11:13:21 UTC · branch `test_655104496` created (same `batch-norm-grad` runtime check, PiperOrigin-RevId: 655104496)
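PR #15163 above adds an early exit for loops that run exactly once, where duplicating the loop body buys no compute/communication overlap. A sketch under that assumption; the `WhileLoop` type and trip-count extraction are toy stand-ins for XLA's loop analysis.

```cpp
#include <iostream>
#include <optional>

// Toy stand-in for a while loop with an optionally known trip count.
struct WhileLoop {
  std::optional<long> trip_count;
};

bool ShouldDoubleBuffer(const WhileLoop& loop) {
  // Unknown trip count: leave the decision to the rest of the pass.
  if (!loop.trip_count.has_value()) return true;
  // Trip count 1: unrolling duplicates the body for no overlap; skip it.
  return *loop.trip_count > 1;
}

int main() {
  std::cout << ShouldDoubleBuffer({1}) << "\n";    // 0: skipped
  std::cout << ShouldDoubleBuffer({128}) << "\n";  // 1: processed
}
```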
rewritten"}},{"before":"e918698506ecd1f967d50cdf8ca69a2ee7248394","after":"0dc1d2c636d505975f44de04d83a3a1f913d1adc","ref":"refs/heads/test_654795065","pushedAt":"2024-07-23T11:04:10.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"},"commit":{"message":"Integrate LLVM at llvm/llvm-project@acc159aea1e6\n\nUpdates LLVM usage to match\n[acc159aea1e6](https://github.com/llvm/llvm-project/commit/acc159aea1e6)\n\nPiperOrigin-RevId: 654795065","shortMessageHtmlLink":"Integrate LLVM at llvm/llvm-project@acc159aea1e6"}},{"before":"4089aadec1b6186c3d537c5c3d4eba4ba89dd8d5","after":null,"ref":"refs/heads/test_653988963","pushedAt":"2024-07-23T11:03:29.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"}},{"before":"bab3a715d1232505ae9b1e5d63468ef0c4c63ac6","after":"4089aadec1b6186c3d537c5c3d4eba4ba89dd8d5","ref":"refs/heads/main","pushedAt":"2024-07-23T11:03:27.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"},"commit":{"message":"[XLA:GPU] Only compute tile offset indexing maps for instructions when it is needed.\n\nComputing tile offset indexing maps is very expensive and not always necessary. When we're at Cost Model/Tiling stage, we only use the indexing map to deduplicate instruction. We can predict if we'll need an indexing map for a particular instruction by computation and comparing a parts of the hash.\n\nPiperOrigin-RevId: 655103934","shortMessageHtmlLink":"[XLA:GPU] Only compute tile offset indexing maps for instructions whe…"}},{"before":"8bd7d27c2d44897aa05e978b75f4b5324e672219","after":"4089aadec1b6186c3d537c5c3d4eba4ba89dd8d5","ref":"refs/heads/test_653988963","pushedAt":"2024-07-23T11:03:26.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"copybara-service[bot]","name":null,"path":"/apps/copybara-service","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/44061?s=80&v=4"},"commit":{"message":"[XLA:GPU] Only compute tile offset indexing maps for instructions when it is needed.\n\nComputing tile offset indexing maps is very expensive and not always necessary. When we're at Cost Model/Tiling stage, we only use the indexing map to deduplicate instruction. We can predict if we'll need an indexing map for a particular instruction by computation and comparing a parts of the hash.\n\nPiperOrigin-RevId: 655103934","shortMessageHtmlLink":"[XLA:GPU] Only compute tile offset indexing maps for instructions whe…"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEhvIkkgA","startCursor":null,"endCursor":null}},"title":"Activity · openxla/xla"}