Welcome to Prompt2Sign!
This repository stores the preprocessed data for the paper:
<br>[SignLLM: Sign Languages Production Large Language Models.](https://arxiv.org/abs/2405.10718)
**Note: Please prioritize using the DWPose extraction and preprocessing data on the homepage, as this is compatible with almost all Pose2Vid models currently available. I believe this will contribute to the development of the field.**
## News
[2025.07.10] Our paper has been accepted by the ICCV Workshop! In addition, we provide the <a href='https://huggingface.co/datasets/FangSen9000/How2Sign-dwpose-original-npz/tree/main'>original DWPose keypoint npz</a> files for your use!
[2025.05.24] We have recently developed a lightweight tool named <a href='https://github.com/FangSen9000/fast_dwpose'>fast_dwpose</a> for DWPose extraction and visualization, and we hope it will be helpful to everyone.
[2025.04.18] Surprise: We have released How2Sign <a href='https://huggingface.co/datasets/FangSen9000/How2Sign-dwpose/tree/main'>new compressed data</a> based on <a href='https://github.com/IDEA-Research/DWPose'>DWPose</a>, and an upgraded version of the SignLLM-based application will be released in the future.<br>
[2025.04.01] **IMPORTANT:** We will try to provide a new compression solution (possibly based on DWPose) at some point. Therefore, for unreleased preprocessed data and for existing data processing, the best approach is to download the original dataset and then process it with our processing tools.<br>
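The keypoint releases mentioned above are distributed as NumPy `.npz` archives. As a minimal sketch of how to inspect such an archive with NumPy (the file name, array names, and shapes below are illustrative placeholders, not the dataset's actual keys):

```python
import numpy as np

# Create a small stand-in archive so this sketch is self-contained;
# in practice you would download a real file from the dataset instead.
# The array names and shapes are illustrative, not the dataset's actual keys.
np.savez(
    "sample_keypoints.npz",
    keypoints=np.zeros((16, 133, 2)),  # frames x joints x (x, y)
    scores=np.ones((16, 133)),         # per-joint confidence per frame
)

# Inspect any .npz archive: list its arrays with their shapes and dtypes.
with np.load("sample_keypoints.npz") as data:
    for name in data.files:
        print(name, data[name].shape, data[name].dtype)
```

Listing `data.files` first is a safe way to discover what a given archive contains before assuming any particular key layout.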