HuggingFace-SK committed on
Commit a371090 · 1 Parent(s): b4edd77

update README.md

Files changed (1): README.md +11 -25
README.md CHANGED
@@ -1,35 +1,23 @@
----
-title: Sign Language Interpreter
-emoji: 👋
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 4.36.0
-python_version: 3.10.4
-app_file: main.py
-license: gpl-3.0
-pinned: false
----
-
 # Sign Language Interpreter
-Many people around the world use sign language to communicate. Communication occurs only when a message is sent and also received. Sign-language users are able to effeciently converse when the observer understands sign-language. This is usually not the case in reality. Hence, this tool would be extremely helpful to interpret and pronounce sign language to the listener. The tool would allow sign-language users to communicate with more people and enable them to more easily take part in society.
-#
 
-## Link To Project
-[Sign Language Interpreter](https://huggingface.co/spaces/HuggingFace-SK/Sign-Language-Interpreter)
+Many people around the world use sign language to communicate. Communication occurs only when a message is both sent and received, and sign-language users can converse efficiently only when the observer understands sign language, which is usually not the case. Hence, this tool interprets and pronounces sign language for the listener, allowing sign-language users to communicate with more people and to take part in society more easily.
 
 ## Table of Contents
+
 - [About](#about)
 - [Built With](#built-with)
 - [Usage](#usage)
 - [License](#license)
+
 ## About
 
-Sign-Language-Interpreter aims to allow a fluent sign-language user to sign into a camera and have the user's message be spoken aloud. The program finds hand location data using [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/guide).
-#
+Sign-Language-Interpreter aims to allow a fluent sign-language user to sign into a camera and have the user's message spoken aloud.
+*Currently this project is implemented as a demo website running on [Hugging Face](https://huggingface.co/spaces/HuggingFace-SK/Sign-Language-Interpreter). To reach a wider audience and eliminate the dependency on internet availability, this [AndroidJS build](https://github.com/Shantanu-Khedkar/silangint) is being developed.*
+
 ## Built With
+
 Sign-Language-Interpreter was built with these technologies and libraries:
+
 - [Javascript](https://www.w3schools.com/js/DEFAULT.asp)
 - [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/guide)
 - [Tensorflow](https://www.tensorflow.org/)
@@ -37,16 +25,14 @@ Sign-Language-Interpreter was built with these technologies and libraries:
 - [WebSpeechAPI](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API)
 - [Flask](https://flask.palletsprojects.com/)
 - [HuggingFace Spaces](https://huggingface.co/docs/hub/en/index#spaces)
-#
+
 ## Usage
 
 The user may sign into a camera and have the signed letters detected by the program.
 Fingerspelling is supported, as including many full signs in the model would require more resources. Fingerspelling means using a standard set of stationary signs as letters and building words from the ground up.
 
 A complete word-based implementation is planned. Sign language omits many auxiliary words of English and mostly consists of nouns and verbs, so the detected words may not make complete sense if pronounced directly. Hence, an LLM will help fill in the missing words or restructure the sentence by inferring its intended meaning. The restructured sentence will then be spoken.
 
-
-#
 ## License
 
 This project is licensed under the [GPL v3](https://www.gnu.org/licenses/gpl-3.0.en.html) or greater.
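The Usage section above describes building words from individually detected fingerspelled letters. One detail the prose implies is debouncing: a model predicts a letter on every camera frame, so repeated predictions must be collapsed into single letters. The sketch below is a hypothetical illustration of that step, not code from this repository; the function name and the `holdFrames` threshold are assumptions.

```javascript
// Hypothetical sketch (not from this repository): collapse noisy
// per-frame letter predictions into a fingerspelled word. A letter is
// accepted once it has been the model's top prediction for `holdFrames`
// consecutive frames, so a sign held for many frames emits one letter.
function collapseLetters(framePredictions, holdFrames = 3) {
  let word = "";
  let current = null; // letter currently being held
  let run = 0;        // consecutive frames it has been held
  for (const letter of framePredictions) {
    run = letter === current ? run + 1 : 1;
    current = letter;
    if (run === holdFrames) word += letter; // fires once per stable run
  }
  return word;
}
```

Under this scheme, signing a doubled letter requires briefly relaxing the hand, which resets the run and lets the same letter fire again.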
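The planned word-level mode hands the detected nouns and verbs to an LLM so it can reinsert the auxiliary words English expects. Whichever model is used, that step reduces to building a prompt from the detected words; the sketch below shows one hypothetical way to phrase such a prompt (the wording and function name are assumptions, not taken from this repository).

```javascript
// Hypothetical sketch: turn a sequence of detected content words into a
// prompt asking an LLM to reconstruct a full English sentence.
function buildRestructurePrompt(detectedWords) {
  return (
    "The following words were detected from sign language, which omits " +
    "most auxiliary English words. Rewrite them as one grammatical " +
    "English sentence that preserves the intended meaning:\n" +
    detectedWords.join(" ")
  );
}
```

For example, `buildRestructurePrompt(["store", "go", "tomorrow"])` yields a prompt the LLM might answer with "I will go to the store tomorrow", which is then spoken aloud.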