Changes from all commits (1820 commits)
0c6acaf
compat: Update forward compatibility horizon to 2025-11-20
tensorflower-gardener Nov 20, 2025
bb5b943
PR #34107: [XLA:GPU] Fix cublas fallback test on Thor GPU (sm_110)
and-ivanov Nov 20, 2025
3c3f13c
fix typo in tuple_simplifier.
tensorflower-gardener Nov 20, 2025
460c2d8
PR #34103: [GPU] Fix layout assignment of bitcast-converts.
sergachev Nov 20, 2025
71be9c8
[XLA:CPU][XTile] Expand single element vector ops before lowering to …
WillFroom Nov 20, 2025
cdf4489
Update layout only when `allow_spmd_sharding_propagation_to_output/…`
tensorflower-gardener Nov 20, 2025
059fde8
Automated Code Change
tensorflower-gardener Nov 20, 2025
4bf20e1
PR #32053: [ROCM] Added command buffers support for convolutions
pemeliya Nov 20, 2025
4343c95
[XLA:SPMD] Fix on entry input/output layout changing check.
Tongfei-Guo Nov 20, 2025
8cbf93f
[XLA] Another small refactor in call splitter.
mkuperst Nov 20, 2025
e9b7438
[XLA:GPU] Refactor: Update composite_rewriter to parse dot dimension …
loislo Nov 20, 2025
f08d3a4
[CI] Stop optional services in Windows CI to reduce file permission e…
belitskiy Nov 20, 2025
e9c6172
Populate input and output names of placeholder Signature
terryheo Nov 20, 2025
2ec7037
Minor check for code integrity.
felixwqp Nov 20, 2025
e528edf
[xla:cpu] Remove unused runtime_lightweight_check
ezhulenev Nov 20, 2025
200ed27
Add CollectivePermute support to CollectiveInterpolator. The interpo…
felixwqp Nov 20, 2025
64d1888
Port ConditionalThunk/WhileThunk to provide shape for BufferUse
ermilovmaxim Nov 20, 2025
fe3a149
Integrate LLVM at llvm/llvm-project@355e0f94af5a
tensorflower-gardener Nov 20, 2025
ecbfaa8
Remove unused deprecated absl::testing usages in XLA GPU backend / se…
apivovarov Nov 20, 2025
cbe4456
[XLA] Only split the body of a call once.
mkuperst Nov 20, 2025
9a31dea
Support sink VarHandleOp in tf.While.
SiqiaoWu1993 Nov 20, 2025
9fdfc47
[XLA:Python] Release the GIL while starting or stopping the profiler.
hawkinsp Nov 21, 2025
0093172
Define StructuredTensor.FieldName outside the class.
hawkinsp Nov 21, 2025
4e54b82
Always assign new ID to AsyncHandle
deqiangc Nov 21, 2025
8a18b1f
Fix shared TableConfig processing in compute_sparse_core_stats.
tensorflower-gardener Nov 21, 2025
d91e832
[PJRT C API] Add on_done callback to CopyToRemoteDevice in the cross-…
emilyfertig Nov 21, 2025
47c9d2e
Automated Code Change
tensorflower-gardener Nov 21, 2025
8051a5c
Automated Code Change
tensorflower-gardener Nov 21, 2025
b9cae6b
Automated Code Change
tensorflower-gardener Nov 21, 2025
ab439c9
compat: Update forward compatibility horizon to 2025-11-21
tensorflower-gardener Nov 21, 2025
5d8b4f8
Update GraphDef version to 2418.
tensorflower-gardener Nov 21, 2025
4c4c531
Update XNNPACK version.
alexander-shaposhnikov Nov 21, 2025
75ca61c
Add missing ynn_params.
alexander-shaposhnikov Nov 21, 2025
10fe76f
[XLA:GPU][XTile] Use xtile::MaskOp in reduce.
WillFroom Nov 21, 2025
813bf1f
CVE-2023-37920 @ requirements_mac.txt
MedoX71T Nov 21, 2025
de857f9
Merge branch 'tensorflow:master' into master
MedoX71T Nov 21, 2025
cbe97bd
speed up xla_device_test 5x by reusing session
ermilovmaxim Nov 21, 2025
f0886a9
[IFRT Proxy] Support `UserContext` propagation
hyeontaek Nov 21, 2025
ddfba4b
[XLA:GPU] Add a flag to detect unstable reduction post-optimizations.
bchetioui Nov 21, 2025
c9d14b4
Implement memory_space_by_kind for PjRtCApiDevice.
hhb Nov 21, 2025
7107850
Reverts 9a31dea9ee6fa86752dfb8adc4290f783dc95e6b
tensorflower-gardener Nov 21, 2025
60292bc
[xla:cpu] Move JitMemoryMaper to xla/backends/cpu/codegen
ezhulenev Nov 21, 2025
0a9e96a
Remove unused deprecated absl::testing usages in service and stream_e…
apivovarov Nov 21, 2025
a5a5c10
Serialize matrix_unit_operand_precision with the rest of xla::Compile…
matthiaskramm Nov 21, 2025
05cd7f2
Allow setting sub-allocator visitors from `xla::GpuAllocatorConfig`
junwhanahn Nov 21, 2025
54ee8e7
Fix typo in `reduce_scatter_decomposer_test`.
SlaterLatiao Nov 21, 2025
fd4a2f9
Fix some c++ readability issues in latency hiding scheduler
apivovarov Nov 21, 2025
cebf9c0
Enable nvtx ext payload parsing in CUPTI. Special for NVTX payload sc…
tensorflower-gardener Nov 21, 2025
4ccc0c9
Don't translate StableHLO to MHLO when serving saved models.
mrguenther Nov 22, 2025
fb6cccc
Internal change only
SiqiaoWu1993 Nov 22, 2025
40fbabf
Allow printing more than 1000 literal values in literal_comparison
Nov 22, 2025
7ebeb5a
Add a PJRT C API extension for TPU topology.
hhb Nov 22, 2025
4fe5d49
Refactor CreateOutputLeafTpuBuffer to call DefineBuffer instead. Beca…
pschuh Nov 22, 2025
b5ddfa4
Update workspace0.bzl
johnnynunez Nov 22, 2025
2ce6311
[NanoRt] Change the (default) memory kind to "device"
hyeontaek Nov 22, 2025
6228260
Update GraphDef version to 2419.
tensorflower-gardener Nov 22, 2025
08b51a9
compat: Update forward compatibility horizon to 2025-11-22
tensorflower-gardener Nov 22, 2025
b72af58
[XLA:GPU] Remove SymbolicExprContext.
pifon2a Nov 22, 2025
ffb80ba
Remove unused deprecated absl::testing usages in pjrt/ifrt
apivovarov Nov 22, 2025
2927021
[PJRT C API] Fix ASAN bug in cross-host transfer extension tests.
emilyfertig Nov 22, 2025
83a04f4
Integrate LLVM at llvm/llvm-project@d65be16ab6ad
tensorflower-gardener Nov 22, 2025
2118844
Update GraphDef version to 2420.
tensorflower-gardener Nov 23, 2025
0f18c1c
compat: Update forward compatibility horizon to 2025-11-23
tensorflower-gardener Nov 23, 2025
66dc287
Fix missing LC_UUID
372046933 Nov 23, 2025
b84678d
Integrate LLVM at llvm/llvm-project@f2cb5d7a05ba
tensorflower-gardener Nov 23, 2025
d9603af
Update GraphDef version to 2421.
tensorflower-gardener Nov 24, 2025
5fb6b0c
compat: Update forward compatibility horizon to 2025-11-24
tensorflower-gardener Nov 24, 2025
464b2a0
Add hostname override to customize remote profiling filenames
subhamsoni-google Nov 24, 2025
079c10e
[XLA:GPU] Roll forward the deletion of the legacy Triton GEMM emitter.
bchetioui Nov 24, 2025
fcd0666
[XLA:CPU/GPU][XTile] Make clamp return max in the case the limits are…
WillFroom Nov 24, 2025
63efa21
[XLA:GPU] Make setting `--xla_gpu_unsupported_generic_triton_emitter_…
bchetioui Nov 24, 2025
5c048a2
PR #34296: [ROCm] Fix buildbreak due to missing __builtin_amdgcn_lerp
alekstheod Nov 24, 2025
94230b7
[XTile] Add compatible_with_portable rules to enable CPU linking.
WillFroom Nov 24, 2025
120b117
[XTile] Enable passing fusions without gpu backend config.
WillFroom Nov 24, 2025
5e6f8c0
PR #34228: Refactor the heuristic for max unroll factor.
dimvar Nov 24, 2025
484c657
[XLA:CPU] Enable creating XLA:CPU client from a given topology
basioli-k Nov 24, 2025
297973b
Fix a layout bug in HloEvaluator.
akuegel Nov 24, 2025
247b0cf
[XLA][codegen] Migrate FpToFpOp to arith trunc/ext ops.
basioli-k Nov 24, 2025
8e66401
Fix OSS build failure.
allanrenucci Nov 24, 2025
1b4c8a9
[XLA:GPU] Pass the llvm module explicitly to the BuildKernelPrototype.
pifon2a Nov 24, 2025
5fd2600
PR #34309: [XLA:GPU] Bump cuDNN version for block scaled dot support …
sergey-kozub Nov 24, 2025
0977938
[XLA:GPU] Move CollectiveOpsTestE2E base into a separate build target.
olegshyshkov Nov 24, 2025
1656bc2
Merge pull request #104892 from MedoX71T:master
tensorflower-gardener Nov 24, 2025
782b3a3
PR #34163: [GPU] Make cuDNN GEMM backend aware of dot algorithms.
sergachev Nov 24, 2025
1d1f97e
Document the effort level options available in XLA.
tensorflower-gardener Nov 24, 2025
175f4eb
Merge pull request #103912 from ILCSFNO:patch-4
tensorflower-gardener Nov 24, 2025
021e57f
Merge pull request #103916 from ILCSFNO:patch-5
tensorflower-gardener Nov 24, 2025
4dbe47b
Merge pull request #104548 from AshiteshSingh:patch-1
tensorflower-gardener Nov 24, 2025
3a12588
[XLA:GPU] Emit SliceToDynamic and PadToStatic in a separate llvm module.
pifon2a Nov 24, 2025
75f7de7
PR #34316: [XLA:GPU] Add debug code to profile command buffer's memor…
shawnwang18 Nov 24, 2025
2e00293
PR #33909: [ROCm] Make multi gpu tests exclusive if executed locally
alekstheod Nov 24, 2025
59e2d85
PR #34049: [ROCm] Add support for rocm tar/wheels in hermetic builds
alekstheod Nov 24, 2025
256c4fa
[XLA:LHS] Fix minor issues:
seherellis Nov 24, 2025
4b34e48
[XLA:GPU] Add IrEmitterUnnested::EmitHloEntryComputation method.
pifon2a Nov 24, 2025
7b7fb7f
PR #34333: Add DSV3-1N4G HLO
mingxu1067 Nov 24, 2025
9662455
PR #34335: Add HLO benchmark for llama3-8b with activation offloading
sfvaroglu Nov 24, 2025
2627e63
[XLA] Increase test size to avoid timeouts.
dimitar-asenov Nov 24, 2025
db29d6b
Add missing symbols to CUPTI stub (follow-up for https://github.com/o…
ybaturina Nov 24, 2025
d12d026
Internal change only
SiqiaoWu1993 Nov 24, 2025
335be54
[XLA:CPU][XTile] Add first experimental integration of tiled emitter.
WillFroom Nov 24, 2025
f13649f
Reverts 4bf20e19f440a47a276cf1988b48683778f673a6
dimitar-asenov Nov 24, 2025
9073026
unify unused triton argument removal
ermilovmaxim Nov 24, 2025
077856b
Allow HloDCE to remove dead parameters from the entry computation.
bhatuzdaname Nov 24, 2025
3ed1e7a
[XLA:GPU] Move linking logic for emitters to ir_emitter_unnested.
pifon2a Nov 24, 2025
3a4bd44
Reverts 335be54cf16896a093589c755dd9ee7d012216b6
tensorflower-gardener Nov 25, 2025
6043d14
Add HBM utilization percent metric to XPlane schema.
tensorflower-gardener Nov 25, 2025
a76b674
PR #34227: [GPU] Remove no longer necessary workaround for cuDNN conv…
sergachev Nov 25, 2025
345f4f7
Use unbounded parallelism for IFRT IR program compilation
junwhanahn Nov 25, 2025
5d46b65
[xla:gpu] Extract collective clique requests and acquire into separat…
ezhulenev Nov 25, 2025
5e73b75
Integrate LLVM at llvm/llvm-project@dea330b38d9c
tensorflower-gardener Nov 25, 2025
77e0179
Remove unused deprecated tsl::testing usages
apivovarov Nov 25, 2025
9302340
[xla:ffi] Group internal CPU and GPU APIs for readability
ezhulenev Nov 25, 2025
e433a5a
[xla:gpu] Remove unused op kind from CollectiveConfig
ezhulenev Nov 25, 2025
ee64040
[XLA:GPU] Move constant emission logic from IrEmitterContext to IrEmi…
pifon2a Nov 25, 2025
dbab3fd
[XLA:GPU] add an overview for thunks
metaflow Nov 25, 2025
08df01b
Automated Code Change
tensorflower-gardener Nov 25, 2025
2cc868d
Automated Code Change
tensorflower-gardener Nov 25, 2025
24549b2
Automated Code Change
tensorflower-gardener Nov 25, 2025
ba4dbe7
[xla:gpu] Remove unused op_id from CollectiveConfig
ezhulenev Nov 25, 2025
8dcfef8
[XLA:GPU][XTile] Fold / squeeze xtile.mask when lowering to triton.
WillFroom Nov 25, 2025
2e016d0
Automated Code Change
tensorflower-gardener Nov 25, 2025
a9c57f3
Automated Code Change
tensorflower-gardener Nov 25, 2025
1c3cd2d
[XLA:GPU] use reserved "field name"
metaflow Nov 25, 2025
15ffcef
compat: Update forward compatibility horizon to 2025-11-25
tensorflower-gardener Nov 25, 2025
988b38f
Update GraphDef version to 2422.
tensorflower-gardener Nov 25, 2025
6d07098
[xla:gpu] Remove redundant operand_count from CollectiveConfig
ezhulenev Nov 25, 2025
2ffe1ca
[XLA:CPU][XTile] Use shlo optimization passes.
WillFroom Nov 25, 2025
84f74d8
PR #33897: [ROCm] Make rbe amdgpu pools configurable
alekstheod Nov 25, 2025
e891556
Re-generate XLA's warnings.bazelrc.
thomasjoerg Nov 25, 2025
48deb13
PR #34303: Bump github/codeql-action from 4.31.2 to 4.31.5
dependabot[bot] Nov 25, 2025
c82ee8e
Refactor: Use OpType::create instead of rewriter.create<OpType>
chsigg Nov 25, 2025
44292ab
Reverts fa3c810dc0c94f61760d4480955036659ad9bf51
thomasjoerg Nov 25, 2025
df427e4
PR #34156: [ROCm] Add missing dependencies to header file
mfrancepillois Nov 25, 2025
1dd823a
[XLA:GPU] Pass HloInstruction to MustWrapInstruction.
olegshyshkov Nov 25, 2025
90d9f71
Reverts changelist 723154962
thomasjoerg Nov 25, 2025
6422a9f
Remove kTritonScaledDotFusionKind.
chsigg Nov 25, 2025
70f6172
Remove legacy Triton pointer ops from int4 passes.
chsigg Nov 25, 2025
88819fc
PR #34307: Bump ml-dtypes from 0.5.3 to 0.5.4
dependabot[bot] Nov 25, 2025
79553ab
[XLA:GPU/TMA] Add heuristic for Triton TMA autotuning. This prunes do…
Moerafaat Nov 25, 2025
756970e
[XLA] Introduce --xla_dump_emitter_re flag to control which emitter d…
mooskagh Nov 25, 2025
0159a0a
Cleanup: Remove special handling for `kTritonGemmFusionKind`.
chsigg Nov 25, 2025
2ef952c
[XLA:GPU] Increase test timeout for `combined_ops_test_a`.
thomasjoerg Nov 25, 2025
66bbe24
[XLA:GPU] Introduce a flag --xla_gpu_gemm_autotuner_override_file
mooskagh Nov 25, 2025
a14f729
Move stablehlo input processing logic to hlo_module_loader.cc.
akuegel Nov 25, 2025
5aa10c9
[XLA:GPU] Return MLIR pipeline status error instead of silently ignor…
olegshyshkov Nov 25, 2025
489c777
[XLA] Fix UndefinedBehaviorSanitizer `null-pointer-use` issue in `xla…
thomasjoerg Nov 25, 2025
1914453
[XLA:GPU] Fix usages of deprecated tsl::errors::NotFound in xla
loislo Nov 25, 2025
6b91a2a
Introduce new AOT compilation support in `GpuCompiler`
EusebioDM Nov 25, 2025
7ddae45
[XLA:GPU] Bring hlo_op_profiler_run in sync with hlo_op_profiler_test.
akuegel Nov 25, 2025
a0db284
Move `GpuExecutable` dumping logic from the compiler to the executable
EusebioDM Nov 25, 2025
e9e9372
[XLA:GPU/TMA] Enable TMA by default. This brings more performance to …
Moerafaat Nov 25, 2025
8f30a4d
Integrate LLVM at llvm/llvm-project@26362c68579d
tensorflower-gardener Nov 25, 2025
669b8af
avoid arithmetics with nullptr(UB)
ermilovmaxim Nov 25, 2025
3c2e950
switch from deprecated TF_CHECK_OK
ermilovmaxim Nov 25, 2025
6310355
Remove unused deprecated tsl::testing usages in tsl
apivovarov Nov 25, 2025
e27e420
[XLA:GPU] Add missing dependency on absl/strings:cord.
loislo Nov 25, 2025
36ec9c8
PR #34306: Bump numpy from 1.24.3 to 2.3.5
dependabot[bot] Nov 25, 2025
ce7fc22
PR #34304: Bump actions/checkout from 5.0.0 to 6.0.0
dependabot[bot] Nov 25, 2025
36cde4b
[xla:ffi] Add correct error handling to internal FFI APIs
ezhulenev Nov 25, 2025
bf7be96
Convert a slice taking a single element from the minor dim of a resha…
jbspooner Nov 25, 2025
cc620f3
[XLA] UnflattenCallGraph: Filter computations before hashing
zvikinoza Nov 25, 2025
e507bf5
Add FP32-->FP16 folding support to tfl.cast.
arfaian Nov 26, 2025
c851c88
[xla:ffi] Split internal XLA FFI API implementation into separate target
ezhulenev Nov 26, 2025
5cd81be
[XLA][codegen] Migrate triton specific operations from collective emi…
basioli-k Nov 26, 2025
dcf61a6
Implement `CreateErrorBuffer` in pjrt c api
hhb Nov 26, 2025
2768b3a
Add `StatType` enumerations for SparseCore
charlesalaras Nov 26, 2025
8ef1949
switch from deprecated TF_CHECK_OK
ermilovmaxim Nov 26, 2025
663a0e9
Merge pull request #104910 from johnnynunez:master
tensorflower-gardener Nov 26, 2025
7c91a21
[XLA:GPU] Rename IrEmitterUnnested to ThunkEmitter like in XLA:CPU.
pifon2a Nov 26, 2025
7b4006c
Automated Code Change
tensorflower-gardener Nov 26, 2025
53d6f2a
Merge pull request #104948 from 372046933:fix_macos_26_compile
tensorflower-gardener Nov 26, 2025
d39b052
[XLA:GPU] add xla_enable_scoped_logging_timers debug option
metaflow Nov 26, 2025
0dafe05
[XLA:GPU] Disable `sol_latency_estimator_test` for fastbuild, since i…
thomasjoerg Nov 26, 2025
7e8de0b
Update GraphDef version to 2423.
tensorflower-gardener Nov 26, 2025
eb66b39
compat: Update forward compatibility horizon to 2025-11-26
tensorflower-gardener Nov 26, 2025
99b31d1
[XLA:GPU]: Fix function arguments for metadata construction
sohaibiftikhar Nov 26, 2025
b92cfe4
[PjRt] Avoid race conditions in LocalDeviceState destructor.
thomasjoerg Nov 26, 2025
f6b6d4d
[XLA:GPU] Fix usages of deprecated tsl::errors::InvalidArgument in xla
loislo Nov 26, 2025
997aad1
[XLA:GPU] updates to thunks diagram
metaflow Nov 26, 2025
90ec4bc
[XLA:GPU]: Emit Sort in a separate llvm module.
sohaibiftikhar Nov 26, 2025
f3cd043
fix heap overflow for dynamic dimension buffers
ermilovmaxim Nov 26, 2025
3ecc976
[XLA:GPU] Fix collective_ops_e2e_test timeout and enable it in OSS pr…
tensorflower-gardener Nov 26, 2025
a35f4c1
PR #34372: [ROCm] Fix missing sys deps lib in hermetic builds
alekstheod Nov 26, 2025
8d25372
[Autotuner] Use CustomKernel fission backend in legacy autotuner cache.
tensorflower-gardener Nov 26, 2025
3a4a157
[XLA:CPU/GPU][XTile] Split out lowering functionality from emitter_he…
WillFroom Nov 26, 2025
c2d4e93
[XLA:GPU] Fix lowering of triton atomic passes.
sohaibiftikhar Nov 26, 2025
e468274
Replace the tsl_grpc_cc_dependencies macro by explicit dependencies
beckerhe Nov 26, 2025
8a96928
Use GetInPlaceInputOutputPairs from AliasInfo instead of HloDataflowA…
akuegel Nov 26, 2025
de066ed
PR #34362: [GPU] Link to optimization level doc in GPU flag guidance
terryysun Nov 26, 2025
a0aee15
PR #34112: [ROCm] Include multigpu tests
alekstheod Nov 26, 2025
75ae4d5
[XLA:CPU/GPU][XTile] Split tiled emitting and lowering into two separ…
WillFroom Nov 26, 2025
5fc61c5
[XLA:GPU] Add xla.get_dynamic_dim_size op and its lowering.
olegshyshkov Nov 26, 2025
9b0e9e9
Integrate LLVM at llvm/llvm-project@4f39a4ff0ada
boomanaiden154 Nov 26, 2025
9e662ca
[xla:ffi] Add execution stage to all error messages when checking FFI…
ezhulenev Nov 26, 2025
689b02e
Fix protobuf dependencies in OSS
beckerhe Nov 26, 2025
6185a6b
[XLA:GPU] Fix usages of deprecated tsl::errors::AlreadyExists in xla
loislo Nov 26, 2025
d1d2d98
[XLA:GPU/TMA] Disable TMA on B200 due to a timeout failure.
Moerafaat Nov 26, 2025
9358e89
[XLA] Fix zstd.patch
WillFroom Nov 26, 2025
2fc3b48
Reverts 6185a6b9ab9f84b95a9a94e55d6d16afa433b91a
beckerhe Nov 26, 2025
239363a
[xla:ffi] Relax the check for the number of attributes
ezhulenev Nov 26, 2025
f15c0c9
[XLA:GPU] Add documentation to Priority Fusion pass.
derdrdirk Nov 26, 2025
1ab53c2
hlo_runner_pjrt should keep hlo module config's seed() in optimized h…
tensorflower-gardener Nov 26, 2025
1922cff
Return `std::nullopt` from `xla::ifrt::LoadedExecutable::devices()` f…
junwhanahn Nov 26, 2025
f093641
[XLA:CPU] Use distinct element values in tiled kernel tests
basioli-k Nov 26, 2025
b660560
[XLA:GPU] Prevent all-reduce codegen when replica groups are empty
sohaibiftikhar Nov 26, 2025
d39e0ed
[XLA:CPU][AOT] Make sure stablehlo test doesn't fail if executable lo…
basioli-k Nov 26, 2025
2570e65
[XLA] Update build_bazel_apple_support to 1.24.5
hawkinsp Nov 26, 2025
4524539
[XLA:GPU] Move ragged all to all e2e tests into a separate target.
olegshyshkov Nov 26, 2025
32d5577
Refactor HloDCE to use a setter for removing dead entry parameters.
bhatuzdaname Nov 26, 2025
2586544
Refactor client_library_test_base.cc
apivovarov Nov 26, 2025
c8c5496
Implement PjRtCApiExecutable::GetOutputShapes.
hhb Nov 26, 2025
34e899a
[XLA] Don't eagerly delete side effecting custom calls
vsytch Nov 26, 2025
6914164
switch from deprecated TF_CHECK_OK
ermilovmaxim Nov 26, 2025
2b46af2
Fix types and comments in replica groups V2.
ZixuanJiang Nov 26, 2025
1502061
[XLA:GPU/TMA] Pruning TMA configurations in the new autotuner infrast…
Moerafaat Nov 26, 2025
3049865
Add fp16 data type to TFLite for use within the runtime.
arfaian Nov 26, 2025
6fcae8d
[XLA:GPU] Remove unused num_devices_ member from CollectiveOpsWithFla…
olegshyshkov Nov 26, 2025
ad91213
[xla:ffi] Move CPU FFI implementation to backends/cpu
ezhulenev Nov 26, 2025
27a94f6
[XLA] Add gutil dependency
Nov 26, 2025
ff669ae
Integrate LLVM at llvm/llvm-project@0c2701fe7fa0
tensorflower-gardener Nov 27, 2025
3e6e96c
remove large automatically added dependency
ermilovmaxim Nov 27, 2025
50f5b5c
This change replaces uses of std::next_permutation with absl::c_next_…
apivovarov Nov 27, 2025
2bebe29
Internal build rule change
Nov 27, 2025
c97e474
[XLA:Python] Add nanobind binding for absl::Status
jcai19 Nov 27, 2025
2944961
Don't check tensorflow.logging.log_if signature because of different …
ezhulenev Nov 27, 2025
43267a7
[XLA:LHS] Adjust VLOG(2) printing for node comparison. `MaybeUpdate` …
seherellis Nov 27, 2025
fc6e11f
[xla:ffi] Move GPU FFI implementation to backends/gpu
ezhulenev Nov 27, 2025
5ca3f48
Migrate std::multimap::find() to equal_range().
tensorflower-gardener Nov 27, 2025
4ccf38a
[xla:gpu] Switch to type safe LocalDeviceId in local to global device…
ezhulenev Nov 27, 2025
88cedf1
Automated Code Change
tensorflower-gardener Nov 27, 2025
c6daf98
[XLA:GPU] Allow to extract settings from hlo config dump.
tensorflower-gardener Nov 27, 2025
77ba53c
[XLA:GPU/TMA] Centralize TMA enablement control in the autotuner. Thi…
Moerafaat Nov 27, 2025
5f62e5a
Automated Code Change
tensorflower-gardener Nov 27, 2025
a3ce88c
[XLA:GPU] respect only fail_ptx_compilation_on_register_spilling in p…
metaflow Nov 27, 2025
589e8d4
Use GetInPlaceInputOutputPairs from AliasInfo instead of HloDataflowA…
akuegel Nov 27, 2025
193a253
Fold no-op reshape when converting to linalg.
WillFroom Nov 27, 2025
f150d49
[XLA:CPU][XTile] Add first experimental integration of tiled emitter.
WillFroom Nov 27, 2025
df423f8
[XLA:GPU] Move LLVMIR emitters out of ThunkEmitter and remove IREmitt…
pifon2a Nov 27, 2025
3cb3083
Update GraphDef version to 2424.
tensorflower-gardener Nov 27, 2025
46ea2d7
compat: Update forward compatibility horizon to 2025-11-27
tensorflower-gardener Nov 27, 2025
06de5a6
[XLA:CPU/GPU][XTile] Add missing RemSIOp & IsFiniteOp elemental instr…
WillFroom Nov 27, 2025
f7446a1
[XLA:GPU] Add a tool to optimize llvm::Module and compile to PTX.
pifon2a Nov 27, 2025
afd8055
[XLA:GPU] remove xla_gpu_unsupported_generic_triton_emitter_features
metaflow Nov 27, 2025
f1b2045
[XLA:CPU][XTile] Add pass to make integer division / remainder safe.
WillFroom Nov 27, 2025
5457acb
[XLA] Add missing semicolons after TF_ASSERT_OK_AND_ASSIGN
Nov 27, 2025
6861f26
[XLA:GPU] Handle non-slice arguments for emitted kernels.
sohaibiftikhar Nov 27, 2025
b903afe
[XLA:GPU] Fix usages of deprecated tsl::errors::AlreadyExists in xla
loislo Nov 27, 2025
5f2a2c6
Add FastImageProcessor: allocation-free preprocess/postprocess for fu…
CodersAcademy006 Nov 27, 2025
0dfea2d
Replace tf.constant(...) in __init__ examples/tests with tf.convert_t…
CodersAcademy006 Nov 27, 2025
db5ae79
Fix indentation: keep self.c43 inside __init__
CodersAcademy006 Nov 27, 2025
The diff you're trying to view is too large. We only load the first 3000 changed files.
12 changes: 5 additions & 7 deletions .bazelrc
@@ -302,9 +302,11 @@ common:cuda --@local_config_cuda//:enable_cuda
common:cuda --config=cuda_version
# This flag is needed to include CUDA libraries.
common:cuda --@local_config_cuda//cuda:include_cuda_libs=true
common:cuda --@cuda_driver//:include_cuda_umd_libs=true

# This configuration is used for building the wheels.
common:cuda_wheel --@local_config_cuda//cuda:include_cuda_libs=false
common:cuda_wheel --@cuda_driver//:include_cuda_umd_libs=false

# CUDA: This config refers to building CUDA op kernels with clang.
common:cuda_clang --config=cuda
@@ -596,7 +598,6 @@ common:use_tar_archive_files --repo_env=USE_LLVM_TAR_ARCHIVE_FILES=1
common:use_tar_archive_files --repo_env=USE_MIRRORED_TAR_ARCHIVE_FILES=1

# Make Bazel not try to probe the host system for a C++ toolchain.
common:rbe_base --config=use_tar_archive_files
common:rbe_base --config=resultstore
common:rbe_base --repo_env=BAZEL_DO_NOT_DETECT_CPP_TOOLCHAIN=1
common:rbe_base --define=EXECUTOR=remote
@@ -639,8 +640,8 @@ common:rbe_linux_cpu --remote_instance_name=projects/tensorflow-testing/instance
# Download CUDA/CUDNN redistributions to preserve the repositories cache between
# CPU and GPU builds.
# TODO(ybaturina): Uncomment when RBE is ready to support this.
commonld:rbe_linux_cpu --repo_env USE_CUDA_REDISTRIBUTIONS=1
commonld:rbe_linux_cpu --config=cuda_version
common:rbe_linux_cpu --repo_env USE_CUDA_REDISTRIBUTIONS=1
common:rbe_linux_cpu --config=cuda_version

# Deprecated RBE config with non-hermetic toolchains.
common:rbe_linux_cpu_clang_local --config=rbe_linux_cpu
@@ -666,9 +667,6 @@ common:rbe_linux_cuda --config=cuda_clang_official
common:rbe_linux_cuda --config=rbe_linux_cpu
# For Remote build execution -- GPU configuration
common:rbe_linux_cuda --repo_env=REMOTE_GPU_TESTING=1
# Enable forward compatibility for CUDA builds because RBE docker image doesn't
# have latest CUDA drivers installed.
common:rbe_linux_cuda --@cuda_driver//:enable_forward_compatibility=true

common:rbe_linux_cuda_nvcc --config=rbe_linux_cuda
common:rbe_linux_cuda_nvcc --config=cuda_nvcc
@@ -861,7 +859,7 @@ test:linux_cpu_wheel_test --@local_xla//third_party/py:wheel_dependency=true --c
test:linux_cuda_wheel_test_filters --test_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
test:linux_cuda_wheel_test_filters --build_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
test:linux_cuda_wheel_test_filters --test_lang_filters=py --test_size_filters=small,medium
test:linux_cuda_wheel_test --@local_xla//third_party/py:wheel_dependency=true --config=linux_cuda_wheel_test_filters -- //tensorflow/... //tensorflow/tools/pip_package:prebuilt_wheel_import_api_packages_test_gpu -//tensorflow/compiler/tf2tensorrt/... -//tensorflow/core/tpu/... -//tensorflow/lite/... -//tensorflow/tools/toolchains/...
test:linux_cuda_wheel_test --repo_env=HERMETIC_CUDA_UMD_VERSION=12.8.1 --@local_xla//third_party/py:wheel_dependency=true --config=linux_cuda_wheel_test_filters -- //tensorflow/... //tensorflow/tools/pip_package:prebuilt_wheel_import_api_packages_test_gpu -//tensorflow/compiler/tf2tensorrt/... -//tensorflow/core/tpu/... -//tensorflow/lite/... -//tensorflow/tools/toolchains/...
# ARM64 WHEEL
test:linux_arm64_wheel_test_filters --test_tag_filters=-no_oss,-tf_tosa,-no_aarch64,-oss_excluded,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
test:linux_arm64_wheel_test_filters --build_tag_filters=-no_oss,-tf_tosa,-no_aarch64,-oss_excluded,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
2 changes: 1 addition & 1 deletion .bazelversion
@@ -1,2 +1,2 @@
7.4.1
7.7.0
# NOTE: Update Bazel version in tensorflow/tools/ci_build/release/common.sh.oss
2 changes: 1 addition & 1 deletion .github/workflows/osv-scanner-scheduled.yml
@@ -28,7 +28,7 @@ permissions:
jobs:
scan-scheduled:
if: github.repository == 'tensorflow/tensorflow'
uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable.yml@v2.2.3"
uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable.yml@v2.2.4"
with:
scan-args: |-
--lockfile=requirements.txt:./requirements_lock_3_9.txt
4 changes: 2 additions & 2 deletions .github/workflows/scorecards-analysis.yml
@@ -55,7 +55,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: SARIF file
path: results.sarif
@@ -64,6 +64,6 @@
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.29.5
uses: github/codeql-action/upload-sarif@0499de31b99561a6d14a36a5f662c2a54f91beee # v3.29.5
with:
sarif_file: results.sarif
4 changes: 2 additions & 2 deletions .github/workflows/stale-issues.yml
@@ -31,7 +31,7 @@ jobs:
pull-requests: write
steps:
- name: Awaiting response issues
uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
with:
#Comma separated list of labels that can be assigned to issues to exclude them from being marked as stale
exempt-issue-labels: 'override-stale'
@@ -59,7 +59,7 @@ jobs:
close-pr-message: "This PR was closed because it has been inactive for 14 days since being marked as stale. Please reopen if you'd like to work on this further."
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Contribution issues
uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
with:
#Comma separated list of labels that can be assigned to issues to exclude them from being marked as stale
exempt-issue-labels: 'override-stale'
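The stale-issues workflow above closes pull requests "because it has been inactive for 14 days since being marked as stale." A minimal pure-Python sketch of that inactivity check — the function and threshold names are illustrative, not part of actions/stale:

```python
from datetime import datetime, timedelta

# 14-day window, matching the close-pr-message in the workflow above.
STALE_AFTER = timedelta(days=14)

def is_stale(last_activity: datetime, now: datetime) -> bool:
    # A PR/issue is considered stale once it has been inactive
    # for at least the configured window.
    return now - last_activity >= STALE_AFTER
```

For example, a PR last touched on Nov 1 is stale by Nov 20, while one touched on Nov 10 is not.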
4 changes: 4 additions & 0 deletions RELEASE.md
@@ -22,6 +22,10 @@
* `tf.lite`
* Adds int8 and int16x8 support for SQRT operator.
* Adds int16x8 support for EQUAL and NOT_EQUAL operators.
* Adds support for int2 type.
* Adds support for int2/int4 in tfl.cast .
* Adds support for SRQ int2 in tfl.fully_connected.
* Adds support for int4 in tfl.slice.

### Bug Fixes and Other Changes

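The RELEASE.md entries above add an int2 type and int2/int4 support in `tfl.cast`. An int2 value occupies two bits (range [-2, 1]), so four values fit in one byte. A hedged pure-Python sketch of such packing — TFLite's actual storage layout may differ, and these helper names are illustrative:

```python
def pack_int2(values):
    """Pack signed 2-bit ints (range -2..1), four per byte, low bits first."""
    out = bytearray()
    for i in range(0, len(values), 4):
        byte = 0
        for j, v in enumerate(values[i:i + 4]):
            assert -2 <= v <= 1, "int2 range is [-2, 1]"
            byte |= (v & 0b11) << (2 * j)
        out.append(byte)
    return bytes(out)

def unpack_int2(packed, count):
    """Inverse of pack_int2: recover `count` signed 2-bit values."""
    values = []
    for i in range(count):
        two_bits = (packed[i // 4] >> (2 * (i % 4))) & 0b11
        # Sign-extend: bit pattern 0b10/0b11 maps to -2/-1.
        values.append(two_bits - 4 if two_bits >= 2 else two_bits)
    return values
```

A round trip such as `unpack_int2(pack_int2([-2, -1, 0, 1]), 4)` recovers the original values from a single byte.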
11 changes: 0 additions & 11 deletions ci/official/containers/ml_build/Dockerfile
@@ -12,14 +12,6 @@ COPY builder.packages.txt /builder.packages.txt

RUN /setup.sources.sh && /setup.packages.sh /builder.packages.txt

# Install devtoolset-9 in /dt9 with glibc 2.17 and libstdc++ 4.8, for building
# manylinux2014-compatible packages.
COPY builder.devtoolset/fixlinks.sh /fixlinks.sh
COPY builder.devtoolset/rpm-patch.sh /rpm-patch.sh
COPY builder.devtoolset/build_devtoolset.sh /build_devtoolset.sh
COPY builder.devtoolset/glibc2.17-inline.patch /glibc2.17-inline.patch
RUN /build_devtoolset.sh devtoolset-9 /dt9

# Setup Python
COPY setup.python.sh /setup.python.sh
COPY builder.requirements.txt /builder.requirements.txt
@@ -56,9 +48,6 @@ RUN ln -sf /usr/bin/python3.12 /usr/bin/python3
RUN ln -sf /usr/bin/python3.12 /usr/bin/python
RUN ln -sf /usr/lib/python3.12 /usr/lib/tf_python

# Make sure clang is on the path
RUN ln -s /usr/lib/llvm-18/bin/clang /usr/bin/clang

# Link the compat driver to the location if available.
RUN if [ -e "/usr/local/cuda/compat/libcuda.so.1" ]; then ln -s /usr/local/cuda/compat/libcuda.so.1 /usr/lib/x86_64-linux-gnu/libcuda.so.1; fi

21 changes: 2 additions & 19 deletions ci/official/containers/ml_build/builder.packages.txt
@@ -1,28 +1,9 @@
# Packages to be installed for the new Docker image.

# Packages needed to build devtoolset
file
flex
g++
make
patch
rpm2cpio
unar
wget
xz-utils
cpio

# Other build-related tools
apt-transport-https
autoconf
automake
build-essential
ca-certificates
llvm-18
clang-18
clang-tidy-18
lld-18
clang-format-12
curl
git
parallel
@@ -32,4 +13,6 @@ unzip
zip
openjdk-21-jdk
vim
wget
jq
file
3 changes: 3 additions & 0 deletions ci/official/containers/ml_build/builder.requirements.txt
@@ -5,6 +5,9 @@ id
urllib3
requests

# For XLA
pyyaml

# For JAX
build ~= 1.2.2
# uv is faster than pip for installing Python packages.
23 changes: 23 additions & 0 deletions ci/official/containers/ml_build/cuda13.0_cudnn9.15.packages.txt
@@ -0,0 +1,23 @@
# All required CUDA packages
cuda-compat-13-0
cuda-command-line-tools-13-0
cuda-cudart-dev-13-0
cuda-nvcc-13-0
cuda-cupti-13-0
cuda-nvprune-13-0
cuda-libraries-13-0
cuda-libraries-dev-13-0
cuda-nvml-dev-13-0
libcufft-13-0
libcurand-13-0
libcusolver-dev-13-0
libcusparse-dev-13-0
libcublas-13-0
libcublas-dev-13-0
libnccl-dev=2.27.7-1+cuda13.0
libnccl2=2.27.7-1+cuda13.0
# CuDNN: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#ubuntu-network-installation
libcudnn9-headers-cuda-13=9.15.1.9-1
libcudnn9-static-cuda-13=9.15.1.9-1
libcudnn9-dev-cuda-13=9.15.1.9-1
libcudnn9-cuda-13=9.15.1.9-1
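The pinned entries in this list follow apt's `name=version` form; a small sketch of how such a pin splits into package name and exact version, using one of the NCCL pins from the list above:

```shell
# Split an apt-style pin into package name and exact version
# using POSIX parameter expansion (no external tools needed).
pin='libnccl2=2.27.7-1+cuda13.0'
name="${pin%%=*}"     # text before the first '='
version="${pin#*=}"   # text after the first '='
echo "$name $version" # prints: libnccl2 2.27.7-1+cuda13.0
```

Pinning the exact version (rather than bare `libnccl2`) is what keeps container rebuilds reproducible when NVIDIA's apt repository publishes newer packages.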
10 changes: 0 additions & 10 deletions ci/official/containers/ml_build/setup.python.sh
@@ -45,16 +45,6 @@ fi

/setup.packages.sh pythons.txt

# Re-link pyconfig.h from x86_64-linux-gnu into the devtoolset directory
# for any Python version present
pushd /usr/include/x86_64-linux-gnu
for f in $(ls | grep python); do
# set up symlink for devtoolset-9
rm -f /dt9/usr/include/x86_64-linux-gnu/$f
ln -s /usr/include/x86_64-linux-gnu/$f /dt9/usr/include/x86_64-linux-gnu/$f
done
popd

# Python 3.10 include headers fix:
# sysconfig.get_path('include') incorrectly points to /usr/local/include/python
# map /usr/include/python3.10 to /usr/local/include/python3.10
2 changes: 1 addition & 1 deletion ci/official/envs/linux_arm64
@@ -28,5 +28,5 @@ TFCI_OUTPUT_DIR=build_output
TFCI_WHL_AUDIT_ENABLE=1
TFCI_WHL_AUDIT_PLAT=manylinux2014_aarch64
TFCI_WHL_BAZEL_TEST_ENABLE=1
TFCI_WHL_SIZE_LIMIT=265M
TFCI_WHL_SIZE_LIMIT=270M
TFCI_WHL_SIZE_LIMIT_ENABLE=1
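The bumped `TFCI_WHL_SIZE_LIMIT=270M` feeds the wheel size audit enabled just below it; a minimal sketch of such a check, with a hypothetical 1 MiB stand-in wheel and GNU `stat` for the byte count:

```shell
# Fail a build when the wheel exceeds the cap (270M for linux_arm64 above).
# The wheel path and its 1 MiB contents are hypothetical test data.
TFCI_WHL_SIZE_LIMIT_MB=270
whl=/tmp/demo_wheel.whl
dd if=/dev/zero of="$whl" bs=1M count=1 2>/dev/null  # fake 1 MiB wheel
size_mb=$(( $(stat -c%s "$whl") / 1024 / 1024 ))
if [ "$size_mb" -le "$TFCI_WHL_SIZE_LIMIT_MB" ]; then
  echo "within limit"
else
  echo "too large"
fi
```

A hard cap like this catches accidental bloat (debug symbols, duplicated shared objects) before the wheel ships.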
2 changes: 1 addition & 1 deletion ci/official/envs/windows_x86_2022
@@ -15,7 +15,7 @@
TFCI_DOCKER_ENABLE=1
TFCI_DOCKER_PULL_ENABLE=1
TFCI_DOCKER_IMAGE="gcr.io/tensorflow-testing/tf-win2022@sha256:915cb093630432c38b028f56bd31116a5559ebbc688d427b6092d86828ae03bc"
TFCI_BAZEL_BAZELRC_ARGS="--output_user_root=C:/t"
TFCI_BAZEL_BAZELRC_ARGS="--output_user_root=C:/x"
TFCI_BAZEL_COMMON_ARGS="--repo_env=HERMETIC_PYTHON_VERSION=$TFCI_PYTHON_VERSION --repo_env=USE_PYWRAP_RULES=True --config=windows_x86_cpu_2022"
TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=windows_x86_cpu_2022
TFCI_BUILD_PIP_PACKAGE_WHEEL_NAME_ARG="--repo_env=WHEEL_NAME=tensorflow"
@@ -1,7 +1,7 @@
# Requirements for NumPy 1.x
numpy ~= 1.26.0
wheel ~= 0.41.2
h5py >= 3.11.0
h5py >= 3.11.0, < 3.15.0
lit ~= 17.0.2
opt_einsum == 3.3.0
astunparse == 1.6.3
2 changes: 1 addition & 1 deletion ci/official/requirements_updater/requirements.in
@@ -1,7 +1,7 @@
# Note that numpy 2.1.0 does not support python 3.9
numpy >= 2.0.0, < 2.2.0
wheel ~= 0.41.2
h5py >= 3.11.0
h5py >= 3.11.0, < 3.15.0
lit ~= 17.0.2
opt_einsum == 3.3.0
astunparse == 1.6.3
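The new `< 3.15.0` cap on h5py is an exclusive upper bound on top of the inclusive `>= 3.11.0` floor; a `sort -V` sketch of which versions the range admits (the version strings tested here are illustrative, not known h5py releases):

```shell
# Succeeds when $1 lies in [3.11.0, 3.15.0), matching the pin above.
# sort -V orders version strings numerically, segment by segment.
in_range() {
  v="$1"; lo="3.11.0"; hi="3.15.0"
  [ "$(printf '%s\n%s\n' "$lo" "$v" | sort -V | head -n1)" = "$lo" ] &&
  [ "$(printf '%s\n%s\n' "$v" "$hi" | sort -V | head -n1)" = "$v" ] &&
  [ "$v" != "$hi" ]   # upper bound is exclusive
}
in_range 3.14.2 && echo "3.14.2 ok"
in_range 3.15.0 || echo "3.15.0 rejected"
```

Capping the upper bound like this shields CI from a future h5py release breaking the build before it has been tested.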
6 changes: 6 additions & 0 deletions ci/official/utilities/setup_docker.sh
@@ -62,6 +62,12 @@ if ! docker container inspect tf >/dev/null 2>&1 ; then
# Additional setup is contained in ci/official/envs/rbe.
CONTAINER_IP_ADDR=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tf)
netsh advfirewall firewall add rule name="Allow Metadata Proxy" dir=in action=allow protocol=TCP localport=80 remoteip="$CONTAINER_IP_ADDR"

# Stop non-essential indexing and link tracking services that
# may lock new files or symlinks.
# They may be causing sporadic "Permission denied" errors during Bazel builds.
# b/461500885
docker exec tf powershell -NoProfile -Command 'Stop-Service -Name SysMain,DiagTrack -Force -ErrorAction SilentlyContinue'
fi

fi
58 changes: 26 additions & 32 deletions tensorflow/BUILD
@@ -1033,6 +1033,7 @@ package_group(
"//tensorflow_models/google/recml/...",
"//third_party/cloud_tpu/convergence_tools/sdc_monitoring/...",
"//third_party/cloud_tpu/inference_converter/...",
"//third_party/pathways/...",
"//third_party/py/cloud_ml_autoflow/...",
"//third_party/py/envlogger/...",
"//third_party/py/gldm/...",
@@ -1180,38 +1181,31 @@ tf_cc_shared_library(
linkstatic = 1,
per_os_targets = True,
roots = [
"//tensorflow/c/experimental/filesystem:filesystem_interface",
"//tensorflow/c/experimental/stream_executor:stream_executor",
"//tensorflow/c:env",
"//tensorflow/c:kernels",
"//tensorflow/c:kernels_experimental",
"//tensorflow/c:logging",
"//tensorflow/c:ops",
"//tensorflow/cc/saved_model:fingerprinting_impl",
"//tensorflow/cc/saved_model:loader_lite_impl",
"//tensorflow/cc/saved_model:metrics_impl",
"//tensorflow/compiler/tf2tensorrt:op_converter_registry_impl",
"//tensorflow/core/common_runtime:core_cpu_impl",
"//tensorflow/core/common_runtime/gpu:gpu_runtime_impl",
"//tensorflow/core/common_runtime/pluggable_device:pluggable_device_runtime_impl",
"//tensorflow/core:framework_internal_impl",
"//tensorflow/core/framework:tensor",
"//tensorflow/core/grappler/optimizers:custom_graph_optimizer_registry_impl",
"//tensorflow/core:lib_internal_impl",
"//tensorflow/core/profiler:profiler_impl",
"//tensorflow/core/util:determinism", # Must be linked and exported to libtensorflow_framework.so.
"//tensorflow/lite/kernels/shim:tf_kernel_shim",
"@local_xla//xla/stream_executor:stream_executor_impl",
"@local_xla//xla/tsl/framework:bfc_allocator",
"@local_xla//xla/tsl/framework:metrics",
] + tf_additional_binary_deps() +
# TODO(b/259305727): Remove this select and include captured_function in macos builds.
select({
"//tensorflow:macos": [],
"//conditions:default": [
"//tensorflow/core/data:captured_function",
],
}),
"//tensorflow/c/experimental/filesystem:filesystem_interface",
"//tensorflow/c/experimental/stream_executor:stream_executor",
"//tensorflow/c:env",
"//tensorflow/c:kernels",
"//tensorflow/c:kernels_experimental",
"//tensorflow/c:ops",
"//tensorflow/cc/saved_model:fingerprinting_impl",
"//tensorflow/cc/saved_model:loader_lite_impl",
"//tensorflow/cc/saved_model:metrics_impl",
"//tensorflow/compiler/tf2tensorrt:op_converter_registry_impl",
"//tensorflow/core/common_runtime:core_cpu_impl",
"//tensorflow/core/common_runtime/gpu:gpu_runtime_impl",
"//tensorflow/core/common_runtime/pluggable_device:pluggable_device_runtime_impl",
"//tensorflow/core:framework_internal_impl",
"//tensorflow/core/framework:tensor",
"//tensorflow/core/grappler/optimizers:custom_graph_optimizer_registry_impl",
"//tensorflow/core:lib_internal_impl",
"//tensorflow/core/profiler:profiler_impl",
"//tensorflow/core/util:determinism", # Must be linked and exported to libtensorflow_framework.so.
"//tensorflow/lite/kernels/shim:tf_kernel_shim",
"@local_xla//xla/stream_executor:stream_executor_impl",
"@local_xla//xla/tsl/framework:bfc_allocator",
"@local_xla//xla/tsl/framework:metrics",
"//tensorflow/core/data:captured_function",
] + tf_additional_binary_deps(),
soversion = VERSION,
static_deps = PACKAGE_STATIC_DEPS,
visibility = ["//visibility:public"],
13 changes: 0 additions & 13 deletions tensorflow/c/BUILD
@@ -298,7 +298,6 @@ tf_cuda_library(
],
"//conditions:default": [
":env",
":logging",
":tf_status",
":tf_tensor",
"//tensorflow/c/experimental/filesystem:modular_filesystem",
@@ -325,18 +324,6 @@ tf_cuda_library(
alwayslink = 1,
)

cc_library(
name = "logging",
srcs = ["logging.cc"],
hdrs = ["logging.h"],
visibility = ["//visibility:public"],
deps = [
":c_api_macros",
"//tensorflow/core/platform:logging",
"//tensorflow/core/platform:stringprintf",
],
)

tf_cuda_library(
name = "tf_status_internal",
hdrs = [
2 changes: 1 addition & 1 deletion tensorflow/c/c_api_function_test.cc
@@ -1171,7 +1171,7 @@ TEST_F(CApiFunctionTest, InvalidOutputTensor_BadNodePtr) {
EXPECT_EQ(TF_INVALID_ARGUMENT, TF_GetCode(s_));
EXPECT_EQ(string("Node is null\n\tEncountered while processing output 0 "
"from function 'MyFunc'"),
string(TF_Message(s_)));
std::string(TF_Message(s_)));
}

TEST_F(CApiFunctionTest, NodeMissingInput) {
2 changes: 1 addition & 1 deletion tensorflow/c/c_api_test.cc
@@ -2478,7 +2478,7 @@ TEST_F(CApiAttributesTest, Names) {

TF_OperationGetAttrName(oper, 0, value.get(), s_);
EXPECT_EQ(TF_OK, TF_GetCode(s_)) << TF_Message(s_);
EXPECT_EQ("v", string(static_cast<const char*>(value.get()), 1));
EXPECT_EQ("v", std::string(static_cast<const char*>(value.get()), 1));
}

TEST_F(CApiAttributesTest, Errors) {