Rodrigo Ferreira Rodrigues committed on
Commit a2f0878 · 1 Parent(s): d023c7f

Updating Readme.md

Files changed (1):
  1. README.md +27 -13
README.md CHANGED
@@ -17,34 +17,48 @@ pinned: false
  ***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.*

  ## Metric Description
- *Give a brief overview of this metric, including what task(s) it is usually used for, if any.*
+ Coordinates Accuracy evaluates model performance on coordinate-prediction tasks, where the model must predict the coordinates of a geographic entity in the form **(__lat__, __long__)**. It checks whether the generated coordinates fall inside a circle of radius `d_range` centered at the gold coordinates.

  ## How to Use
- *Give general statement of how to use the metric*
+ This metric takes two mandatory arguments: `generations` (a list of strings of generated coordinates) and `golds` (a list of lists of floats of gold coordinates).

- *Provide simplest possible example for using the metric*
+ ```python
+ import evaluate
+ coord_acc = evaluate.load("rfr2003/coord_eval")
+ results = coord_acc.compute(generations=["(12.7, 67.8)", "(16.7, 89.6)"], golds=[[12.7, 67.8], [10.9, 80.6]], d_range=20)
+ print(results)
+ {'coord_accuracy': 0.5}
+ ```
+
+ This metric also accepts an optional argument:
+
+ `d_range` (int): Radius of the circle. The default value is `20`.

- ### Inputs
- *List all input arguments in the format below*
- - **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*

  ### Output Values

- *Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu" : 6.02}*
+ This metric outputs a dictionary with the following value:

- *State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*
+ `coord_accuracy`: The coordinate accuracy between `generations` and `golds`, which ranges from 0.0 to 1.0.

  #### Values from Popular Papers
- *Give examples, preferrably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.*
+

  ### Examples
- *Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*
+
+ ```python
+ import evaluate
+ coord_acc = evaluate.load("rfr2003/coord_eval")
+ results = coord_acc.compute(generations=["(12.7, 67.8)", "(16.7, 89.6)"], golds=[[12.7, 67.8], [10.9, 80.6]], d_range=20)
+ print(results)
+ {'coord_accuracy': 0.5}
+ ```

  ## Limitations and Bias
- *Note any known limitations or biases that the metric has, with links and references if possible.*
+

  ## Citation
- *Cite the source where this metric was introduced.*
+

  ## Further References
- *Add any useful further references.*
+
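The check the updated README describes (a generation counts as correct when its parsed coordinates land within `d_range` of the gold coordinates) can be sketched as below. This is an illustration only, not the module's actual implementation: it assumes `d_range` is a great-circle distance in kilometres and that generations are plain `"(lat, long)"` strings; the real module's distance formula, units, and parsing are not specified in the diff.

```python
import math
import re

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometres.
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def coord_accuracy(generations, golds, d_range=20):
    # Fraction of generations whose parsed "(lat, long)" lies within
    # d_range km of the corresponding gold coordinates.
    hits = 0
    for gen, (glat, glon) in zip(generations, golds):
        m = re.match(r"\(\s*(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)\s*\)", gen.strip())
        if m and haversine_km(float(m.group(1)), float(m.group(2)), glat, glon) <= d_range:
            hits += 1
    return {"coord_accuracy": hits / len(generations)}

print(coord_accuracy(["(12.7, 67.8)", "(16.7, 89.6)"],
                     [[12.7, 67.8], [10.9, 80.6]], d_range=20))
# → {'coord_accuracy': 0.5}
```

On these inputs the first generation matches its gold exactly (distance 0) and the second is roughly a thousand kilometres off, so only one of two passes, reproducing the `0.5` shown in the README's example.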