import gradio as gr
from model import r_holistic

title = 'ASL Sign Classification'
description = (
    "This classification model recognizes 250 [ASL](https://www.lifeprint.com/) signs "
    "and maps each to a specific label; the label list is in "
    "[sign_to_prediction_index_map.json](sign_to_prediction_index_map.json). "
    "You can test it with the example videos, or download or record sign videos "
    "matching the list and check the output.\n"
    "Workflow:\n"
    " 1. Landmark extraction, using the [MediaPipe Holistic Solution](https://ai.google.dev/edge/mediapipe/solutions/vision/holistic_landmarker).\n"
    " 2. Sign recognition from the landmarks, using a model I built and trained myself. "
    "Its backbone is a CNN plus a Transformer, and it reaches over 90% accuracy on the test set."
)

output_video_file = gr.Video(label="Landmark output")
output_text = gr.Textbox(label="Predicted sign")
# Note: these sliders are defined but not passed to `inputs`, so they do not
# appear in the UI; add them to `inputs` (and to r_holistic's signature) to expose them.
slider_1 = gr.Slider(0, 1, label='detection_confidence')
slider_2 = gr.Slider(0, 1, label='tracking_confidence')

iface = gr.Interface(
    fn=r_holistic,
    inputs=[gr.Video(sources=None, label="Sign video clip")],
    outputs=[output_video_file, output_text],
    title=title,
    description=description,
    examples=['book.mp4', 'book2.mp4', 'chair1.mp4', 'chair2.mp4'],
    # cache_examples=True,
)
iface.launch(share=True)
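
# The workflow above feeds MediaPipe Holistic landmarks into the classifier.
# Below is a hedged sketch (not the actual code in model.py, which is not shown
# here) of how per-frame Holistic landmarks are commonly flattened into one
# fixed-size array before a CNN/Transformer model. The landmark counts are
# MediaPipe Holistic's documented outputs: 33 pose, 468 face, 21 per hand;
# `frame_to_array` itself is a hypothetical helper for illustration.
import numpy as np

POSE, FACE, HAND = 33, 468, 21
N_LANDMARKS = POSE + FACE + 2 * HAND  # 543 landmarks per frame

def frame_to_array(pose, face, left_hand, right_hand):
    """Stack the (x, y, z) coordinates of all landmark groups into a
    (543, 3) array. Groups MediaPipe did not detect (e.g. a hand that is
    off-screen) arrive as None and are filled with NaN."""
    parts = []
    for group, n in ((pose, POSE), (face, FACE),
                     (left_hand, HAND), (right_hand, HAND)):
        if group is None:
            parts.append(np.full((n, 3), np.nan))
        else:
            parts.append(np.asarray(group, dtype=np.float32).reshape(n, 3))
    return np.concatenate(parts, axis=0)

# Example frame where the right hand was not detected:
frame = frame_to_array(
    pose=np.zeros((POSE, 3)),
    face=np.zeros((FACE, 3)),
    left_hand=np.zeros((HAND, 3)),
    right_hand=None,
)
print(frame.shape)  # (543, 3)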