Update dependency keras to v3.13.2 [SECURITY]#162
Open
renovate-bot wants to merge 1 commit into GoogleCloudPlatform:main
This PR contains the following updates:
keras `==3.9.0` → `==3.13.2`

Keras vulnerable to CVE-2025-1550 bypass via reuse of internal functionality
CVE-2025-8747 / GHSA-c9rc-mg46-23w3
More information
Details
Summary
It is possible to bypass the mitigation introduced in response to CVE-2025-1550, when an untrusted Keras v3 model is loaded, even when “safe_mode” is enabled, by crafting malicious arguments to built-in Keras modules.
The vulnerability is exploitable on the default configuration and does not depend on user input (just requires an untrusted model to be loaded).
Impact
Details
Keras’ safe_mode flag is designed to disallow unsafe lambda deserialization - specifically by rejecting any arbitrary embedded Python code, marked by the “lambda” class name.
https://github.com/keras-team/keras/blob/v3.8.0/keras/src/saving/serialization_lib.py#L641 -
A fix to the vulnerability, allowing deserialization of the object only from internal Keras modules, was introduced in the commit bb340d6780fdd6e115f2f4f78d8dbe374971c930.
However, it is still possible to exploit model loading, for example by reusing the internal Keras function `keras.utils.get_file`, and download remote files to an attacker-controlled location. This allows for arbitrary file overwrite, which in many cases could also lead to remote code execution. For example, an attacker would be able to download a malicious `authorized_keys` file into the user's SSH folder, giving the attacker full SSH access to the victim's machine.

Since the model does not contain arbitrary Python code, this scenario will not be blocked by "safe_mode". It will bypass the latest fix since it uses a function from one of the approved modules (`keras`).

Example
The following truncated `config.json` will cause a remote file download from https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js to the local `/tmp` folder, by sending arbitrary arguments to Keras' built-in function `keras.utils.get_file()`.

PoC
- Download malicious_model_download.keras to a local directory
- Load the model
- `index.js` was created in the `/tmp` directory

Fix suggestions
- Add a `block_all_lambda` option that allows users to completely disallow loading models with a Lambda layer.
- Audit the `keras`, `keras_hub`, `keras_cv`, `keras_nlp` modules and remove/block all "gadget functions" which could be used by malicious ML models.
- Add a `lambda_whitelist_functions` option that allows users to specify a list of functions that are allowed to be invoked by a Lambda layer.

Credit
The vulnerability was discovered by Andrey Polkovnichenko of the JFrog Vulnerability Research
Severity
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
Keras is vulnerable to Deserialization of Untrusted Data
CVE-2025-9906 / GHSA-36fq-jgmw-4r9c
More information
Details
Arbitrary Code Execution in Keras
Keras versions prior to 3.11.0 allow for arbitrary code execution when loading a crafted `.keras` model archive, even when `safe_mode=True`.

The issue arises because the archive's `config.json` is parsed before layer deserialization. This can invoke `keras.config.enable_unsafe_deserialization()`, effectively disabling safe mode from within the loading process itself. An attacker can place this call first in the archive and then include a `Lambda` layer whose function is deserialized from a pickle, leading to the execution of attacker-controlled Python code as soon as a victim loads the model file.

Exploitation requires a user to open an untrusted model; no additional privileges are needed. The fix in version 3.11.0 enforces safe-mode semantics before reading any user-controlled configuration and prevents the toggling of unsafe deserialization via the config file.
Affected versions: < 3.11.0
Patched version: 3.11.0
It is recommended to upgrade to version 3.11.0 or later and to avoid opening untrusted model files.
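Where upgrading is delayed, a defence-in-depth pre-check can refuse archives whose configuration references the unsafe-deserialization toggle before anything is parsed into objects. The sketch below is ours (the helper name is illustrative); it relies only on the fact that a `.keras` file is a zip archive containing a `config.json` entry:

```python
import zipfile

def archive_requests_unsafe_mode(path: str) -> bool:
    """Return True if a .keras archive's config.json mentions
    enable_unsafe_deserialization, without deserializing any layer.

    A .keras file is a plain zip archive; reading config.json as raw
    text is safe because nothing is instantiated."""
    with zipfile.ZipFile(path) as zf:
        config_text = zf.read("config.json").decode("utf-8")
    return "enable_unsafe_deserialization" in config_text
```

On Keras 3.11.0 and later this check is redundant, since safe-mode semantics are enforced before any user-controlled configuration is read.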
Severity
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
The Keras `Model.load_model` method silently ignores `safe_mode=True` and allows arbitrary code execution when a `.h5`/`.hdf5` file is loaded.

CVE-2025-9905 / GHSA-36rr-ww3j-vrjv
More information
Details
Note: This report has already been discussed with the Google OSS VRP team, who recommended that I reach out directly to the Keras team. I've chosen to do so privately rather than opening a public issue, due to the potential security implications. I also attempted to use the email address listed in your `SECURITY.md`, but received no response.

Summary
When a model in the `.h5` (or `.hdf5`) format is loaded using the Keras `Model.load_model` method, the `safe_mode=True` setting is silently ignored without any warning or error. This allows an attacker to execute arbitrary code on the victim's machine with the same privileges as the Keras application. This report is specific to the `.h5`/`.hdf5` file format. The attack works regardless of the other parameters passed to `load_model` and does not require any sophisticated technique: `.h5` and `.hdf5` files are simply not checked for unsafe code execution.

From this point on, I will refer only to the `.h5` file format, though everything equally applies to `.hdf5`.

Details
Intended behaviour
According to the official Keras documentation, `safe_mode` is defined as:

I understand that the behavior described in this report is somehow intentional, as `safe_mode` is only applicable to `.keras` models. However, in practice, this behavior is misleading for users who are unaware of the internal Keras implementation. `.h5` files can still be loaded seamlessly using `load_model` with `safe_mode=True`, and the absence of any warning or error creates a false sense of security. Whether intended or not, I believe silently ignoring a security-related parameter is not the best possible design decision. At a minimum, if `safe_mode` cannot be applied to a given file format, an explicit error should be raised to alert the user.

This issue is particularly critical given the widespread use of the `.h5` format, despite the introduction of newer formats.

As a small anecdotal test, I asked several of my colleagues what they would expect when loading a `.h5` file with `safe_mode=True`. None of them expected the setting to be silently ignored, even after reading the documentation. While this is a small sample, all of these colleagues are cybersecurity researchers, experts in binary or ML security, and regular participants in DEF CON finals. I was careful not to give any hints about the vulnerability in our discussion.

Technical Details
Examining the implementation of `load_model` in `keras/src/saving/saving_api.py`, we can see that the `safe_mode` parameter is completely ignored when loading `.h5` files. Here's the relevant snippet:

As shown, when the file format is `.h5` or `.hdf5`, the method delegates to `legacy_h5_format.load_model_from_hdf5`, which does not use or check the `safe_mode` parameter at all.

Solution
Since the release of the new `.keras` format, I believe the simplest and most effective way to address this misleading behavior, and to improve security in Keras, is to have the `safe_mode` parameter raise an explicit error when `safe_mode=True` is used with `.h5`/`.hdf5` files. This error should be clear and informative, explaining that the legacy format does not support `safe_mode` and outlining the associated risks of loading such files.

I recognize this fix may have minor backward compatibility considerations.

If you confirm that you're open to this approach, I'd be happy to open a PR that includes the missing check.
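The proposed guard could be sketched as follows. `load_model_checked` is a hypothetical wrapper name of ours, and the fallthrough assumes Keras 3's `keras.saving.load_model`:

```python
def load_model_checked(filepath, safe_mode=True, **kwargs):
    """Illustrative guard: refuse to route a legacy HDF5 file through a
    code path that cannot honour safe_mode, instead of silently
    ignoring the flag."""
    if safe_mode and str(filepath).endswith((".h5", ".hdf5")):
        raise ValueError(
            "safe_mode=True is not supported for legacy HDF5 models: "
            "the legacy loader can execute arbitrary code. Convert the "
            "model to the .keras format, or pass safe_mode=False only "
            "for files you fully trust."
        )
    import keras  # deferred so the guard itself has no dependency
    return keras.saving.load_model(filepath, safe_mode=safe_mode, **kwargs)
```

The point is purely the error path: a security flag that cannot be honoured should fail loudly rather than be dropped.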
PoC
From the attacker's perspective, creating a malicious `.h5` model is as simple as the following:

From the victim's side, triggering code execution is just as simple:
That's all. The exploit occurs during model loading, with no further interaction required. The parameters passed to the method do not mitigate or influence the attack in any way.
As expected, the attacker can substitute the `exec(...)` call with any payload. Whatever command is used will execute with the same permissions as the Keras application.

Attack scenario
The attacker may distribute a malicious `.h5`/`.hdf5` model on platforms such as Hugging Face, or act as a malicious node in a federated learning environment. The victim only needs to load the model, even with `safe_mode=True`, which would give the illusion of security. No inference or further action is required, making the threat particularly stealthy and dangerous.

Once the model is loaded, the attacker gains the ability to execute arbitrary code on the victim's machine with the same privileges as the Keras process. The provided proof-of-concept demonstrates a simple shell spawn, but any payload could be delivered this way.
Severity
CVSS:4.0/AV:L/AC:L/AT:P/PR:N/UI:A/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
Keras is vulnerable to arbitrary local file loading and Server-Side Request Forgery
CVE-2025-12058 / GHSA-mq84-hjqx-cwf2
More information
Details
The Keras `Model.load_model` method, including when executed with the intended security mitigation `safe_mode=True`, is vulnerable to arbitrary local file loading and Server-Side Request Forgery (SSRF).
This vulnerability stems from the way the StringLookup layer is handled during model loading from a specially crafted .keras archive. The constructor for the StringLookup layer accepts a vocabulary argument that can specify a local file path or a remote file path.
Arbitrary Local File Read: An attacker can create a malicious .keras file that embeds a local path in the StringLookup layer's configuration. When the model is loaded, Keras will attempt to read the content of the specified local file and incorporate it into the model state (e.g., retrievable via get_vocabulary()), allowing an attacker to read arbitrary local files on the hosting system.
Server-Side Request Forgery (SSRF): Keras utilizes tf.io.gfile for file operations. Since tf.io.gfile supports remote filesystem handlers (such as GCS and HDFS) and HTTP/HTTPS protocols, the same mechanism can be leveraged to fetch content from arbitrary network endpoints on the server's behalf, resulting in an SSRF condition.
The security issue is that the feature allowing external path loading was not properly restricted by the safe_mode=True flag, which was intended to prevent such unintended data access.
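Until a patched release restricts this, a pre-load scan of the archive's configuration can flag lookup layers whose `vocabulary` is a path rather than an inline token list. The function name and the flagged-scheme list below are our assumptions for illustration, not Keras API:

```python
import json

# Remote filesystem schemes reachable through tf.io.gfile (illustrative subset).
REMOTE_SCHEMES = ("http://", "https://", "gs://", "hdfs://", "s3://")

def find_external_vocab_paths(config_json: str) -> list:
    """Return vocabulary values in a Keras config that point at a remote
    endpoint or an absolute local path instead of an inline list."""
    flagged = []

    def walk(node):
        if isinstance(node, dict):
            if (node.get("class_name") or "").endswith("Lookup"):
                vocab = node.get("config", {}).get("vocabulary")
                if isinstance(vocab, str) and (
                    vocab.startswith(REMOTE_SCHEMES) or vocab.startswith("/")
                ):
                    flagged.append(vocab)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(json.loads(config_json))
    return flagged
```

Matching on the `Lookup` suffix covers `StringLookup` and its siblings that accept a `vocabulary` argument; legitimate models normally serialize the vocabulary inline, so any string path here deserves manual review.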
Severity
CVSS:4.0/AV:A/AC:H/AT:P/PR:L/UI:P/VC:H/VI:L/VA:L/SC:H/SI:L/SA:L/E:X/CR:X/IR:X/AR:X/MAV:X/MAC:X/MAT:X/MPR:X/MUI:X/MVC:X/MVI:X/MVA:X/MSC:X/MSI:X/MSA:X/S:X/AU:X/R:X/V:X/RE:X/U:X

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
Keras Directory Traversal Vulnerability
CVE-2025-12060 / GHSA-hjqc-jx6g-rwp9
More information
Details
Summary
Keras's `keras.utils.get_file()` function is vulnerable to directory traversal attacks despite implementing `filter_safe_paths()`. The vulnerability exists because `extract_archive()` uses Python's `tarfile.extractall()` method without the security-critical `filter="data"` parameter. A PATH_MAX symlink resolution bug occurs before path filtering, allowing malicious tar archives to bypass security checks and write files outside the intended extraction directory.

Details
Root Cause Analysis
Current Keras Implementation
The Critical Flaw
While Keras attempts to filter unsafe paths using `filter_safe_paths()`, this filtering happens after the tar archive members are parsed and before actual extraction. However, the PATH_MAX symlink resolution bug occurs during extraction, not during member enumeration.

Exploitation Flow:

- `filter_safe_paths()` sees symlink paths that appear safe
- `extractall()` processes the filtered members

Technical Details
The vulnerability exploits a known issue in Python's `tarfile` module where excessively long symlink paths can cause resolution failures, leading to the symlink being treated as a literal path. This bypasses Keras's path filtering because:

- `filter_safe_paths()` operates on the parsed tar member information
- the symlink bug is triggered later, inside `extractall()`, allowing a path such as `../../../../etc/passwd` to be written

Affected Code Location
File: `keras/src/utils/file_utils.py`

Function: `extract_archive()` around line 121

Issue: Missing `filter="data"` parameter in `tarfile.extractall()`

Proof of Concept
Environment Setup
Exploitation Steps
- Call `keras.utils.get_file()` with `extract=True`

Key Exploit Components
- Traversal path (`../../../target/file`)

Demonstration Results
Vulnerable behavior: files are written outside the `cache_dir/datasets/` location.

Expected secure behavior:
Impact
Vulnerability Classification
Who Is Impacted
Direct Impact:
- Applications calling `keras.utils.get_file()` with `extract=True`

Attack Scenarios:
Affected Environments:
Risk Assessment
High Risk Factors:
Potential Consequences:
Recommended Fix
Immediate Mitigation
Replace the vulnerable extraction code with:
Long-term Solution
- Add the `filter="data"` parameter to all `tarfile.extractall()` calls

Backward Compatibility
The fix maintains full backward compatibility, as `filter="data"` is the recommended secure default for Python 3.12+.

References
Note: Reported in Huntr as well, but didn't get a response.
https://huntr.com/bounties/f94f5beb-54d8-4e6a-8bac-86d9aee103f4
Severity
CVSS:4.0/AV:N/AC:L/AT:P/PR:L/UI:P/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
Keras has a Local File Disclosure via HDF5 External Storage During Keras Weight Loading
CVE-2026-1669 / GHSA-3m4q-jmj6-r34q
More information
Details
Summary
TensorFlow / Keras continues to honor HDF5 "external storage" and `ExternalLink` features when loading weights. A malicious `.weights.h5` (or a `.keras` archive embedding such weights) can direct `load_weights()` to read from an arbitrary readable filesystem path. The bytes pulled from that path populate model tensors and become observable through inference or subsequent re-save operations. Keras "safe mode" only guards object deserialization and does not cover weight I/O, so this behaviour persists even with safe mode enabled. The issue is confirmed on the latest publicly released stack (`tensorflow 2.20.0`, `keras 3.11.3`, `h5py 3.15.1`, `numpy 2.3.4`).

Impact
- Arbitrary readable host files can be disclosed (e.g. `/etc/hosts`, `/etc/passwd`, `/etc/hostname`).
- Triggered by calling `model.load_weights()` or `tf.keras.models.load_model()` on an attacker-supplied HDF5 weights file or `.keras` archive.

Attacker Scenario
- The attacker chooses a target path on the victim's machine (`/home/<user>/.ssh/id_rsa`, `/etc/shadow` if readable, configuration files containing API keys, etc.).
- The victim calls `model.load_weights()` (or `tf.keras.models.load_model()` for `.keras` archives). HDF5 follows the external references, opens the targeted host file, and streams its bytes into the model tensors.
- Re-saving the model (or the `.keras` archive) persists the secret into a new artifact, which may later be shared publicly or uploaded to a model registry.
- Safe mode (`load_model(..., safe_mode=True)`) does not mitigate the issue, because the attack path is weight loading rather than object/lambda deserialization.
- Filesystem sandboxing (e.g. restricting access to files such as `/etc/hostname`) can reduce impact, but common defaults expose a broad set of host files.
- Installed via `python -m pip install -U ...`: `tensorflow==2.20.0`, `keras==3.11.3`, `h5py==3.15.1`, `numpy==2.3.4`
- `strace` (for syscall tracing); `pip` upgraded to latest before installs.
- `PYTHONFAULTHANDLER=1`, `TF_CPP_MIN_LOG_LEVEL=0` during instrumentation to capture verbose logs if needed.
- Save the script as `weights_external_demo.py` and run `python weights_external_demo.py`.
- `secret_text_source` prints the chosen host file path.
- `recovered_ascii` / `recovered_hex64` display the file contents recovered via model inference.

Expanded Validation (Multiple Attack Scenarios)
The following test harness generalises the attack for multiple HDF5 constructs:
- An external-storage dataset backed by `/etc/hosts`.
- An `ExternalLink` pointing at `/etc/passwd`.
- A variant targeting `/etc/hostname`.
- Syscalls traced with `strace -f -e trace=open,openat,read` while calling `model.load_weights(...)`.
The corresponding model weight bytes (converted to ASCII) mirrored these file contents, confirming successful exfiltration in every case.
Recommended Product Fix
- Detect external storage (e.g. via `get_external_count`) before materialising tensors.
- Inspect `SoftLink`/`ExternalLink` targets and block loading if they leave the HDF5 file.
- Gate the behaviour behind an `allow_external_data=True` flag or environment variable for advanced users who truly rely on HDF5 external storage.

Workarounds
- Scan files with `h5py` to detect external datasets or links before invoking Keras loaders.
- Prefer formats (e.g. `.npz`) that lack external reference capabilities when exchanging weights.
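The first workaround can be sketched with a short `h5py` scan. The helper name is illustrative, and the scan deliberately inspects links with `getlink=True` so external targets are never opened:

```python
import h5py

def find_external_references(path: str) -> list:
    """Return names of external links and externally-stored datasets
    in an HDF5 file, without following any of them."""
    hits = []

    def scan(group, prefix=""):
        for name in group:
            full = f"{prefix}/{name}"
            link = group.get(name, getlink=True)
            if isinstance(link, h5py.ExternalLink):
                hits.append(full)   # do not follow: target is another file
                continue
            obj = group.get(name)
            if isinstance(obj, h5py.Dataset):
                if obj.external:    # list of (filename, offset, size), or None
                    hits.append(full)
            elif isinstance(obj, h5py.Group):
                scan(obj, full)

    with h5py.File(path, "r") as f:
        scan(f)
    return hits
```

A loader can then refuse any weights file for which this list is non-empty, which matches the product-fix direction of blocking external references by default.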
- `safe_keras_hdf5.py` prototype guard.

Severity
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
Keras has an untrusted deserialization vulnerability
CVE-2026-1462 / GHSA-4f3f-g24h-fr8m
More information
Details
A vulnerability in the `TFSMLayer` class of the `keras` package, version 3.13.0, allows attacker-controlled TensorFlow SavedModels to be loaded during deserialization of `.keras` models, even when `safe_mode=True`. This bypasses the security guarantees of `safe_mode` and enables arbitrary attacker-controlled code execution during model inference under the victim's privileges. The issue arises due to the unconditional loading of external SavedModels, serialization of attacker-controlled file paths, and the lack of validation in the `from_config()` method.

Severity
CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
Keras vulnerable to DoS via Malicious .keras Model (HDF5 Shape Bomb Causes Petabyte Allocation in KerasFileEditor)
CVE-2026-0897 / GHSA-mgx6-5cf9-rr43
More information
Details
Summary
Keras's model loader (KerasFileEditor) unsafely loads user-supplied `.keras` model files containing HDF5-based weight files without performing any validation on HDF5 dataset metadata. An attacker can craft a `.keras` archive containing a valid `model.weights.h5` file whose dataset declares an extremely large shape (e.g. `(50_000_000, 50_000_000)`), but stores only a few bytes. The `.keras` file remains small (100–400 KB) because HDF5 with gzip compression stores minimal data. During model loading, Keras executes:

```python
result[key] = value[()]  # loads entire dataset into memory
```

`value[()]` instructs h5py to allocate RAM proportional to the dataset's declared shape, in this case 8.88 PiB of memory. This results in:

- Immediate memory exhaustion
- Python / TensorFlow crashes
- Jupyter kernel kill
- System instability
- Full Denial of Service on any workload that processes untrusted `.keras` models

This allows an attacker to crash any environment or pipeline that loads `.keras` models, including MLOps backends, training services, model upload endpoints, or automated pipelines.
Proof of Concept
Expected Result
This crash occurs before any actual model processing, confirming the Denial-of-Service impact.
Impact
This vulnerability allows an attacker to crash any system that loads a malicious `.keras` model file.

The attacker can:
If a platform allows user-uploaded Keras models (training services, inference endpoints, AutoML tools, Kaggle-style platforms), this becomes a Remote Denial of Service vector.
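A pre-load guard along these lines can reject shape bombs before any `value[()]` read materialises a tensor; the byte budget and helper name below are illustrative choices of ours:

```python
import h5py
import numpy as np

MAX_TENSOR_BYTES = 1 << 30  # 1 GiB per tensor; tune to your workload

def check_declared_shapes(path: str, limit: int = MAX_TENSOR_BYTES) -> None:
    """Walk every dataset in an HDF5 weights file and raise before a
    declared shape could translate into a huge allocation."""
    def visit(name, obj):
        if isinstance(obj, h5py.Dataset):
            # Declared size in bytes, computed from metadata only.
            nbytes = int(np.prod(obj.shape, dtype=np.int64)) * obj.dtype.itemsize
            if nbytes > limit:
                raise ValueError(
                    f"{name}: declared shape {obj.shape} implies "
                    f"{nbytes} bytes, exceeding the {limit}-byte budget"
                )

    with h5py.File(path, "r") as f:
        f.visititems(visit)
```

The check reads only HDF5 metadata, so it costs milliseconds even for a crafted archive whose declared shapes imply petabytes.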
Additional PoC Evidence (Video Demonstration)
Attached is a real-world proof-of-concept video demonstrating the crash and memory exhaustion when loading the malicious .keras model.
PoC Video (Google Drive):
PoC Video
Finding: Critical memory-exhaustion flaw triggered by crafted .keras model files
Vector: Malicious metadata causing extreme tensor shape inflation
Impact: A 31 KB model forces an 8.88 PiB allocation attempt, immediately killing the process
Attack Scenario: Remote DoS on ML model processing pipelines and cloud inference services
Demonstration:
The PoC video shows the crash occurring on Google Colab.
Loading the malicious model consumed all system RAM and repeatedly terminated the runtime.
Severity is high enough that the compute quota dropped from 83 hours → 4 hours after only a few tests.
With larger payloads, this would instantly exhaust resources in real production pipelines.
Severity
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
Release Notes
keras-team/keras (keras)
v3.13.2

Compare Source
Security Fixes & Hardening
This release introduces critical security hardening for model loading and saving, alongside improvements to the JAX backend metadata handling.
Disallow `TFSMLayer` deserialization in `safe_mode` (#22035)

- `TFSMLayer` could load external TensorFlow SavedModels during deserialization without respecting Keras `safe_mode`. This could allow the execution of attacker-controlled graphs during model invocation.
- `TFSMLayer` now enforces `safe_mode` by default. Deserialization via `from_config()` will raise a `ValueError` unless `safe_mode=False` is explicitly passed or `keras.config.enable_unsafe_deserialization()` is called.

Fix Denial of Service (DoS) in `KerasFileEditor` (#21880)

- Hardened the `.keras` file editor against malicious metadata that could cause dimension overflows or unbounded memory allocation (unbounded numpy allocation of multi-gigabyte tensors).

Block External Links in HDF5 files (#22057)
Backend-specific Improvements (JAX)
- `mutable=True` by default in `nnx_metadata` (#22074).

Saving & Serialization
- Refactored `H5IOStore` and `ShardedH5IOStore` to remove unused, unverified methods.

Contributors
We would like to thank the following contributors for their security reports and code improvements.
Configuration
📅 Schedule: (UTC)
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.