Bingsu committed
Commit 57be1b1 · 1 parent: e9a9cb9

Update README.md

Files changed (1):
  1. README.md +36 -4

README.md CHANGED
@@ -29,16 +29,17 @@ sentence-similarityλ₯Ό κ΅¬ν•˜λŠ” μš©λ„λ‘œ λ°”λ‘œ μ‚¬μš©ν•  μˆ˜λ„ 있고, λͺ©

  ## Usage (Sentence-Transformers)

- Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

  ```
  pip install -U sentence-transformers
  ```

- Then you can use the model like this:

  ```python
  from sentence_transformers import SentenceTransformer
  sentences = ["This is an example sentence", "Each sentence is converted"]

  model = SentenceTransformer('smartmind/roberta-ko-small-tsdae')
@@ -46,10 +47,41 @@ embeddings = model.encode(sentences)
  print(embeddings)
  ```



  ## Usage (HuggingFace Transformers)
- Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.

  ```python
  from transformers import AutoTokenizer, AutoModel
@@ -103,4 +135,4 @@ SentenceTransformer(

  ## Citing & Authors

- <!--- Describe where people can find more information -->


  ## Usage (Sentence-Transformers)

+ After installing [sentence-transformers](https://www.SBERT.net), you can load the model directly:

  ```
  pip install -U sentence-transformers
  ```

+ You can then use the model like this:

  ```python
  from sentence_transformers import SentenceTransformer
+
  sentences = ["This is an example sentence", "Each sentence is converted"]

  model = SentenceTransformer('smartmind/roberta-ko-small-tsdae')
  embeddings = model.encode(sentences)
  print(embeddings)
  ```
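The `encode` call above returns a NumPy array with one embedding row per sentence, and similarity between rows is conventionally measured with cosine similarity. A minimal sketch of that measure on toy vectors (NumPy only; real embeddings would come from `model.encode`, which requires downloading the model):

```python
import numpy as np

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for two rows of `model.encode(sentences)`.
emb_a = np.array([0.1, 0.3, -0.2])
emb_b = np.array([0.2, 0.25, -0.1])

print(f"{cos_sim(emb_a, emb_b):.4f}")
```

sentence-transformers ships this operation as `util.cos_sim`; the sketch only shows what that call computes.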

+ The following example uses sentence-transformers utilities to compute the similarity between several sentences:
+
+ ```python
+ from sentence_transformers import util
+
+ sentences = [
+     "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€.",
+     "미ꡭ의 μˆ˜λ„λŠ” λ‰΄μš•μ΄ μ•„λ‹™λ‹ˆλ‹€.",
+     "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„ μš”κΈˆμ€ μ €λ ΄ν•œ νŽΈμž…λ‹ˆλ‹€.",
+     "μ„œμšΈμ€ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„μž…λ‹ˆλ‹€.",
+     "였늘 μ„œμšΈμ€ ν•˜λ£¨μ’…μΌ λ§‘μŒ",
+ ]
+
+ paraphrase = util.paraphrase_mining(model, sentences)
+ for score, i, j in paraphrase:
+     print(f"{sentences[i]}\t\t{sentences[j]}\t\t{score:.4f}")
+ ```
+
+ ```
+ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€.		μ„œμšΈμ€ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„μž…λ‹ˆλ‹€.		0.7616
+ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€.		미ꡭ의 μˆ˜λ„λŠ” λ‰΄μš•μ΄ μ•„λ‹™λ‹ˆλ‹€.		0.7031
+ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€.		λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„ μš”κΈˆμ€ μ €λ ΄ν•œ νŽΈμž…λ‹ˆλ‹€.		0.6594
+ 미ꡭ의 μˆ˜λ„λŠ” λ‰΄μš•μ΄ μ•„λ‹™λ‹ˆλ‹€.		μ„œμšΈμ€ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„μž…λ‹ˆλ‹€.		0.6445
+ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„ μš”κΈˆμ€ μ €λ ΄ν•œ νŽΈμž…λ‹ˆλ‹€.		μ„œμšΈμ€ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„μž…λ‹ˆλ‹€.		0.4915
+ 미ꡭ의 μˆ˜λ„λŠ” λ‰΄μš•μ΄ μ•„λ‹™λ‹ˆλ‹€.		λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„ μš”κΈˆμ€ μ €λ ΄ν•œ νŽΈμž…λ‹ˆλ‹€.		0.4785
+ μ„œμšΈμ€ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„μž…λ‹ˆλ‹€.		였늘 μ„œμšΈμ€ ν•˜λ£¨μ’…μΌ λ§‘μŒ		0.4119
+ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€.		였늘 μ„œμšΈμ€ ν•˜λ£¨μ’…μΌ λ§‘μŒ		0.3520
+ 미ꡭ의 μˆ˜λ„λŠ” λ‰΄μš•μ΄ μ•„λ‹™λ‹ˆλ‹€.		였늘 μ„œμšΈμ€ ν•˜λ£¨μ’…μΌ λ§‘μŒ		0.2550
+ λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„ μš”κΈˆμ€ μ €λ ΄ν•œ νŽΈμž…λ‹ˆλ‹€.		였늘 μ„œμšΈμ€ ν•˜λ£¨μ’…μΌ λ§‘μŒ		0.1896
+ ```
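`util.paraphrase_mining` scores every sentence pair and returns `(score, i, j)` triples sorted from most to least similar, which is what produces the ranking above. A rough pure-NumPy sketch of that behavior over precomputed embeddings (toy vectors here; the real function takes the model itself and batches the work efficiently):

```python
import itertools

import numpy as np

def paraphrase_mining_sketch(embeddings: np.ndarray) -> list[tuple[float, int, int]]:
    """Score every (i, j) pair by cosine similarity, most similar first."""
    # Normalize rows so a plain dot product equals cosine similarity.
    norms = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    pairs = []
    for i, j in itertools.combinations(range(len(embeddings)), 2):
        pairs.append((float(norms[i] @ norms[j]), i, j))
    pairs.sort(reverse=True)  # highest score first, as paraphrase_mining returns
    return pairs

# Toy embeddings: rows 0 and 1 are near-duplicates, row 2 points elsewhere.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
for score, i, j in paraphrase_mining_sketch(emb):
    print(f"{i} vs {j}: {score:.4f}")
```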


  ## Usage (HuggingFace Transformers)
+
+ Without [sentence-transformers](https://www.SBERT.net) installed, you can use the model as follows:

  ```python
  from transformers import AutoTokenizer, AutoModel
 

  ## Citing & Authors

+ <!--- Describe where people can find more information -->
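The HuggingFace Transformers section of this diff ends mid-snippet. The step its intro describes, applying the right pooling operation on top of the contextualized word embeddings, is usually mask-aware mean pooling. A NumPy sketch of that step (names and shapes are illustrative, not this README's exact torch code):

```python
import numpy as np

def mean_pooling(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, counting only non-padding positions.

    token_embeddings: (seq_len, hidden_dim); attention_mask: (seq_len,) of 0/1.
    """
    mask = attention_mask[:, None].astype(float)         # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)       # sum over real tokens only
    count = np.clip(mask.sum(), a_min=1e-9, a_max=None)  # avoid division by zero
    return summed / count

# Two real tokens plus one padding token that must not affect the mean.
tokens = np.array([[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]])
mask = np.array([1, 1, 0])
print(mean_pooling(tokens, mask))  # -> [2. 3.]
```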