YimuWang committed on
Commit 1a20776 · verified · 1 Parent(s): cda09eb

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full set.

Files changed (50)
  1. Charades/Charades_v1_classes.txt +157 -0
  2. Charades/Charades_v1_classify.m +128 -0
  3. Charades/Charades_v1_localize.m +155 -0
  4. Charades/Charades_v1_mapping.txt +157 -0
  5. Charades/Charades_v1_objectclasses.txt +38 -0
  6. Charades/Charades_v1_test.csv +0 -0
  7. Charades/Charades_v1_train.csv +0 -0
  8. Charades/Charades_v1_verbclasses.txt +33 -0
  9. Charades/README.txt +229 -0
  10. Charades/get_test_submission_localize.sh +4 -0
  11. Charades/license.txt +15 -0
  12. Charades/test_submission_caption.txt +0 -0
  13. Charades/test_submission_classify.txt +0 -0
  14. README.md +61 -0
  15. activity_net.v1-3.min.json +0 -0
  16. missing_files.zip +3 -0
  17. missing_files_v1-2_test.zip +3 -0
  18. missing_files_v1-3_test.zip +3 -0
  19. train.json +0 -0
  20. train_ids.json +0 -0
  21. v1-2_test.tar.gz.00 +3 -0
  22. v1-2_test.tar.gz.01 +3 -0
  23. v1-2_test.tar.gz.02 +3 -0
  24. v1-2_test.tar.gz.03 +3 -0
  25. v1-2_test.tar.gz.04 +3 -0
  26. v1-2_test.tar.gz.05 +3 -0
  27. v1-2_test.tar.gz.06 +3 -0
  28. v1-2_test.tar.gz.07 +3 -0
  29. v1-2_test.tar.gz.08 +3 -0
  30. v1-2_test.tar.gz.09 +3 -0
  31. v1-2_test.tar.gz.10 +3 -0
  32. v1-2_test.tar.gz.11 +3 -0
  33. v1-2_test.tar.gz.12 +3 -0
  34. v1-2_test.tar.gz.13 +3 -0
  35. v1-2_train.tar.gz.00 +3 -0
  36. v1-2_train.tar.gz.01 +3 -0
  37. v1-2_train.tar.gz.02 +3 -0
  38. v1-2_train.tar.gz.03 +3 -0
  39. v1-2_train.tar.gz.04 +3 -0
  40. v1-2_train.tar.gz.05 +3 -0
  41. v1-2_train.tar.gz.06 +3 -0
  42. v1-2_train.tar.gz.07 +3 -0
  43. v1-2_train.tar.gz.08 +3 -0
  44. v1-2_train.tar.gz.09 +3 -0
  45. v1-2_train.tar.gz.10 +3 -0
  46. v1-2_train.tar.gz.11 +3 -0
  47. v1-2_train.tar.gz.12 +3 -0
  48. v1-2_train.tar.gz.13 +3 -0
  49. v1-2_train.tar.gz.14 +3 -0
  50. v1-2_train.tar.gz.15 +3 -0
Charades/Charades_v1_classes.txt ADDED
@@ -0,0 +1,157 @@
+ c000 Holding some clothes
+ c001 Putting clothes somewhere
+ c002 Taking some clothes from somewhere
+ c003 Throwing clothes somewhere
+ c004 Tidying some clothes
+ c005 Washing some clothes
+ c006 Closing a door
+ c007 Fixing a door
+ c008 Opening a door
+ c009 Putting something on a table
+ c010 Sitting on a table
+ c011 Sitting at a table
+ c012 Tidying up a table
+ c013 Washing a table
+ c014 Working at a table
+ c015 Holding a phone/camera
+ c016 Playing with a phone/camera
+ c017 Putting a phone/camera somewhere
+ c018 Taking a phone/camera from somewhere
+ c019 Talking on a phone/camera
+ c020 Holding a bag
+ c021 Opening a bag
+ c022 Putting a bag somewhere
+ c023 Taking a bag from somewhere
+ c024 Throwing a bag somewhere
+ c025 Closing a book
+ c026 Holding a book
+ c027 Opening a book
+ c028 Putting a book somewhere
+ c029 Smiling at a book
+ c030 Taking a book from somewhere
+ c031 Throwing a book somewhere
+ c032 Watching/Reading/Looking at a book
+ c033 Holding a towel/s
+ c034 Putting a towel/s somewhere
+ c035 Taking a towel/s from somewhere
+ c036 Throwing a towel/s somewhere
+ c037 Tidying up a towel/s
+ c038 Washing something with a towel
+ c039 Closing a box
+ c040 Holding a box
+ c041 Opening a box
+ c042 Putting a box somewhere
+ c043 Taking a box from somewhere
+ c044 Taking something from a box
+ c045 Throwing a box somewhere
+ c046 Closing a laptop
+ c047 Holding a laptop
+ c048 Opening a laptop
+ c049 Putting a laptop somewhere
+ c050 Taking a laptop from somewhere
+ c051 Watching a laptop or something on a laptop
+ c052 Working/Playing on a laptop
+ c053 Holding a shoe/shoes
+ c054 Putting shoes somewhere
+ c055 Putting on shoe/shoes
+ c056 Taking shoes from somewhere
+ c057 Taking off some shoes
+ c058 Throwing shoes somewhere
+ c059 Sitting in a chair
+ c060 Standing on a chair
+ c061 Holding some food
+ c062 Putting some food somewhere
+ c063 Taking food from somewhere
+ c064 Throwing food somewhere
+ c065 Eating a sandwich
+ c066 Making a sandwich
+ c067 Holding a sandwich
+ c068 Putting a sandwich somewhere
+ c069 Taking a sandwich from somewhere
+ c070 Holding a blanket
+ c071 Putting a blanket somewhere
+ c072 Snuggling with a blanket
+ c073 Taking a blanket from somewhere
+ c074 Throwing a blanket somewhere
+ c075 Tidying up a blanket/s
+ c076 Holding a pillow
+ c077 Putting a pillow somewhere
+ c078 Snuggling with a pillow
+ c079 Taking a pillow from somewhere
+ c080 Throwing a pillow somewhere
+ c081 Putting something on a shelf
+ c082 Tidying a shelf or something on a shelf
+ c083 Reaching for and grabbing a picture
+ c084 Holding a picture
+ c085 Laughing at a picture
+ c086 Putting a picture somewhere
+ c087 Taking a picture of something
+ c088 Watching/looking at a picture
+ c089 Closing a window
+ c090 Opening a window
+ c091 Washing a window
+ c092 Watching/Looking outside of a window
+ c093 Holding a mirror
+ c094 Smiling in a mirror
+ c095 Washing a mirror
+ c096 Watching something/someone/themselves in a mirror
+ c097 Walking through a doorway
+ c098 Holding a broom
+ c099 Putting a broom somewhere
+ c100 Taking a broom from somewhere
+ c101 Throwing a broom somewhere
+ c102 Tidying up with a broom
+ c103 Fixing a light
+ c104 Turning on a light
+ c105 Turning off a light
+ c106 Drinking from a cup/glass/bottle
+ c107 Holding a cup/glass/bottle of something
+ c108 Pouring something into a cup/glass/bottle
+ c109 Putting a cup/glass/bottle somewhere
+ c110 Taking a cup/glass/bottle from somewhere
+ c111 Washing a cup/glass/bottle
+ c112 Closing a closet/cabinet
+ c113 Opening a closet/cabinet
+ c114 Tidying up a closet/cabinet
+ c115 Someone is holding a paper/notebook
+ c116 Putting their paper/notebook somewhere
+ c117 Taking paper/notebook from somewhere
+ c118 Holding a dish
+ c119 Putting a dish/es somewhere
+ c120 Taking a dish/es from somewhere
+ c121 Wash a dish/dishes
+ c122 Lying on a sofa/couch
+ c123 Sitting on sofa/couch
+ c124 Lying on the floor
+ c125 Sitting on the floor
+ c126 Throwing something on the floor
+ c127 Tidying something on the floor
+ c128 Holding some medicine
+ c129 Taking/consuming some medicine
+ c130 Putting groceries somewhere
+ c131 Laughing at television
+ c132 Watching television
+ c133 Someone is awakening in bed
+ c134 Lying on a bed
+ c135 Sitting in a bed
+ c136 Fixing a vacuum
+ c137 Holding a vacuum
+ c138 Taking a vacuum from somewhere
+ c139 Washing their hands
+ c140 Fixing a doorknob
+ c141 Grasping onto a doorknob
+ c142 Closing a refrigerator
+ c143 Opening a refrigerator
+ c144 Fixing their hair
+ c145 Working on paper/notebook
+ c146 Someone is awakening somewhere
+ c147 Someone is cooking something
+ c148 Someone is dressing
+ c149 Someone is laughing
+ c150 Someone is running somewhere
+ c151 Someone is going from standing to sitting
+ c152 Someone is smiling
+ c153 Someone is sneezing
+ c154 Someone is standing up from somewhere
+ c155 Someone is undressing
+ c156 Someone is eating something
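For convenience, a file in this format can be read into a python dictionary with a short sketch like the following (the `load_classes` helper name is ours, not part of the release):

```python
def load_classes(path):
    """Read a Charades class file (e.g. Charades_v1_classes.txt).

    Each non-empty line is a class id followed by a description,
    e.g. "c008 Opening a door"; returns {class_id: description}.
    """
    classes = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                cid, desc = line.split(' ', 1)
                classes[cid] = desc
    return classes
```

The same sketch should also read Charades_v1_objectclasses.txt and Charades_v1_verbclasses.txt below, which share the "id description" layout.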
Charades/Charades_v1_classify.m ADDED
@@ -0,0 +1,128 @@
+ function [rec_all,prec_all,ap_all,map]=Charades_v1_classify(clsfilename,gtpath)
+ %
+ % Input:  clsfilename: path of the input file
+ %         gtpath: the path of the groundtruth file
+ %
+ % Output: rec_all: recall
+ %         prec_all: precision
+ %         ap_all: AP for each class
+ %         map: MAP
+ %
+ % Example:
+ %
+ % [rec_all,prec_all,ap_all,map]=Charades_v1_classify('test_submission_classify.txt','Charades_v1_test.csv');
+ %
+ % Code adapted from THUMOS15
+ %
+
+ [gtids,gtclasses] = load_charades(gtpath);
+ nclasses = 157;
+ ntest = length(gtids);
+
+ % load test scores
+ [testids,testscores]=textread(clsfilename,'%s%[^\n]');
+ nInputNum=size(testscores,1);
+ if nInputNum<ntest
+     fprintf('Warning: %d Videos missing\n',ntest-nInputNum);
+ end
+ for i=1:nInputNum
+     id = testids{i};
+     z=regexp(testscores{i},'\t','split');
+     eleNum=size(z,2);
+     if eleNum~=nclasses&&eleNum~=nclasses+1
+         z=regexp(testscores{i},' ','split');
+     end
+     eleNum=size(z,2);
+     if eleNum~=nclasses&&eleNum~=nclasses+1
+         fprintf('Error: Incompatible number of classes\n');
+     end
+     for j=1:eleNum
+         z{j}=regexprep(z{j},'\t','');
+         z{j}=regexprep(z{j},' ','');
+     end
+     x = zeros(nclasses,1);
+     for j=1:nclasses
+         x(j) = str2double(z{j});
+     end
+     testscores{i} = x;
+ end
+ predictions = containers.Map(testids,testscores);
+
+ % compare test scores to ground truth
+ gtlabel = zeros(ntest,nclasses);
+ test = -inf(ntest,nclasses);
+ for i=1:ntest
+     id = gtids{i};
+     gtlabel(i,gtclasses{i}+1) = 1;
+     if predictions.isKey(id)
+         test(i,:) = predictions(id);
+     end
+ end
+
+ for i=1:nclasses
+     [rec_all(:,i),prec_all(:,i),ap_all(:,i)]=THUMOSeventclspr(test(:,i),gtlabel(:,i));
+ end
+ map=mean(ap_all);
+ wap=sum(ap_all.*sum(gtlabel,1))/sum(gtlabel(:));
+ fprintf('\n\n')
+ fprintf('MAP: %f\n',map);
+ fprintf('WAP: %f (weighted by size of each class)',wap);
+ fprintf('\n\n')
+
+
+ function [rec,prec,ap]=THUMOSeventclspr(conf,labels)
+ [so,sortind]=sort(-conf);
+ tp=labels(sortind)==1;
+ fp=labels(sortind)~=1;
+ npos=length(find(labels==1));
+
+ % compute precision/recall
+ fp=cumsum(fp);
+ tp=cumsum(tp);
+ rec=tp/npos;
+ prec=tp./(fp+tp);
+
+ % compute average precision
+ ap=0;
+ tmp=labels(sortind)==1;
+ for i=1:length(conf)
+     if tmp(i)==1
+         ap=ap+prec(i);
+     end
+ end
+ ap=ap/npos;
+
+
+ function [gtids,gtclasses] = load_charades(gtpath)
+ f = fopen(gtpath);
+
+ % read column headers
+ headerline = textscan(f,'%s',1);
+ headerline = regexp(headerline{1}{1},',','split');
+ ncols = length(headerline);
+ headers = struct();
+ for i=1:ncols
+     headers = setfield(headers,headerline{i},i);
+ end
+
+ % read data
+ gtcsv = textscan(f,repmat('%q ',[1 ncols]),'Delimiter',',');
+ fclose(f);
+ ntest = size(gtcsv{1},1);
+ gtids = cell(ntest,1);
+ gtclasses = cell(ntest,1);
+ for i=1:ntest
+     id = gtcsv{headers.id}{i};
+     classes = gtcsv{headers.actions}{i};
+     if isempty(classes); gtclasses{i} = []; continue; end
+     classes = regexp(classes,';','split');
+     for j=1:length(classes)
+         tmp = regexp(classes{j},' ','split');
+         [class,s,e] = tmp{:};
+         classes{j} = str2double(class(2:end));
+     end
+     gtids{i} = id;
+     gtclasses{i} = cell2mat(classes);
+ end
+
+
Charades/Charades_v1_localize.m ADDED
@@ -0,0 +1,155 @@
+ function [rec_all,prec_all,ap_all,map]=Charades_v1_localize(clsfilename,gtpath)
+ %
+ % Input:  clsfilename: path of the input file
+ %         gtpath: the path of the groundtruth file
+ %
+ % Output: rec_all: recall
+ %         prec_all: precision
+ %         ap_all: AP for each class
+ %         map: MAP
+ %
+ % Please refer to the README.txt file for an overview of how localization performance is evaluated
+ %
+ % Example:
+ %
+ % [rec_all,prec_all,ap_all,map]=Charades_v1_localize('test_submission.txt','Charades_v1_test.csv');
+ %
+ % Code adapted from THUMOS15
+ %
+
+ tic;
+ fprintf('Loading Charades Annotations:\n');
+ frames_per_video = 25;
+ [gtids,gtclasses] = load_charades_localized(gtpath,frames_per_video);
+ nclasses = 157;
+ ntest = length(gtids);
+ toc; tic;
+
+ % load test scores
+ fprintf('Reading Submission File:\n');
+ [testids,framenr,testscores]=textread(clsfilename,'%s%d%[^\n]');
+ if min(framenr)==0
+     fprintf('Warning: Frames should be 1 indexed\n');
+     fprintf('Warning: Adding 1 to all frame numbers\n');
+     framenr = framenr+1;
+ end
+ toc; tic;
+ fprintf('Parsing Submission Scores:\n');
+ nInputNum=size(testscores,1);
+ if nInputNum<ntest
+     fprintf('Warning: %d Total frames missing\n',ntest-nInputNum);
+ end
+ testscoresparsed = cellfun(@str2num,testscores,'UniformOutput',false);
+ eleNum=length(testscoresparsed{1});
+ if eleNum~=nclasses&&eleNum~=nclasses+1
+     fprintf('Error: Incompatible number of classes\n');
+ end
+ make_frameid = @(x,y) [x,'-',sprintf('%03d',y)];
+ frameids = cellfun(make_frameid,testids,num2cell(framenr),'UniformOutput',false);
+ predictions = containers.Map(frameids,testscoresparsed);
+ toc; tic;
+
+ % compare test scores to ground truth
+ fprintf('Constructing Ground Truth Matrix:\n')
+ gtlabel = zeros(ntest,nclasses);
+ test = -inf(ntest,nclasses);
+ for i=1:ntest
+     id = gtids{i};
+     gtlabel(i,gtclasses{i}+1) = 1;
+     if predictions.isKey(id)
+         test(i,:) = predictions(id);
+     end
+ end
+ toc; tic;
+
+ for i=1:nclasses
+     [rec_all(:,i),prec_all(:,i),ap_all(:,i)]=THUMOSeventclspr(test(:,i),gtlabel(:,i));
+ end
+ map=mean(ap_all);
+ wap=sum(ap_all.*sum(gtlabel,1))/sum(gtlabel(:));
+ fprintf('\n\n')
+ fprintf('Per-Frame MAP: %f\n',map);
+ fprintf('Per-Frame WAP: %f (weighted by size of each class)',wap);
+ fprintf('\n\n')
+
+
+ function [rec,prec,ap]=THUMOSeventclspr(conf,labels)
+ [so,sortind]=sort(-conf);
+ tp=labels(sortind)==1;
+ fp=labels(sortind)~=1;
+ npos=length(find(labels==1));
+
+ % compute precision/recall
+ fp=cumsum(fp);
+ tp=cumsum(tp);
+ rec=tp/npos;
+ prec=tp./(fp+tp);
+
+ % compute average precision
+ ap=0;
+ tmp=labels(sortind)==1;
+ for i=1:length(conf)
+     if tmp(i)==1
+         ap=ap+prec(i);
+     end
+ end
+ ap=ap/npos;
+
+
+ function [gtids,gtclasses] = load_charades_localized(gtpath,frames_per_video)
+ % Loads the ground truth annotations from the csv file
+ f = fopen(gtpath);
+
+ % read column headers
+ headerline = textscan(f,'%s',1);
+ headerline = regexp(headerline{1}{1},',','split');
+ ncols = length(headerline);
+ headers = struct();
+ for i=1:ncols
+     headers = setfield(headers,headerline{i},i);
+ end
+
+ % read data
+ gtcsv = textscan(f,repmat('%q ',[1 ncols]),'Delimiter',',');
+ fclose(f);
+ ntest = size(gtcsv{1},1);
+ framechar = char(cellfun(@(x) sprintf('%03d',x),num2cell((1:50)'),'UniformOutput',false)); %for speed
+ gtids = cell(frames_per_video*ntest,1);
+ gtclasses = cell(frames_per_video*ntest,1);
+ uncell = @(x) x{1};
+ c = 1;
+ for i=1:ntest
+     id = gtcsv{headers.id}{i};
+     classes = gtcsv{headers.actions}{i};
+     time = str2double(gtcsv{headers.length}{i});
+     if strcmp(classes,'')
+         missing = true;
+     else
+         missing = false;
+         classes = regexp(classes,';','split')';
+         classes = cellfun(@(x) uncell(textscan(x,'c%f %f %f','CollectOutput',true)), classes,'UniformOutput',false); %for speed
+         classes = cell2mat(classes);
+         % classes is C by 3 matrix where each row is [class start end] for an action
+     end
+     for j=1:frames_per_video
+         frameclasses = zeros(50,1); %for speed
+         fc = 1;
+         timepoint = (j-1)/frames_per_video*time;
+         for k=1:size(classes,1)
+             if missing; continue; end
+             if (classes(k,2) <= timepoint) && (timepoint <= classes(k,3))
+                 frameclasses(fc) = classes(k,1);
+                 fc = fc+1;
+             end
+         end
+         frameid = [id,'-',framechar(j,:)]; %for speed
+         gtids{c} = frameid;
+         gtclasses{c} = frameclasses(1:(fc-1));
+         c = c+1;
+     end
+ end
+ gtids = gtids(1:c-1);
+ gtclasses = gtclasses(1:c-1);
+
+
Charades/Charades_v1_mapping.txt ADDED
@@ -0,0 +1,157 @@
+ c000 o009 v008
+ c001 o009 v016
+ c002 o009 v023
+ c003 o009 v025
+ c004 o009 v026
+ c005 o009 v030
+ c006 o012 v001
+ c007 o012 v006
+ c008 o012 v012
+ c009 o033 v016
+ c010 o033 v018
+ c011 o033 v018
+ c012 o033 v026
+ c013 o033 v030
+ c014 o033 v032
+ c015 o025 v008
+ c016 o025 v014
+ c017 o025 v016
+ c018 o025 v023
+ c019 o025 v024
+ c020 o001 v008
+ c021 o001 v012
+ c022 o001 v016
+ c023 o001 v023
+ c024 o001 v025
+ c025 o004 v001
+ c026 o004 v008
+ c027 o004 v012
+ c028 o004 v016
+ c029 o004 v019
+ c030 o004 v023
+ c031 o004 v025
+ c032 o004 v031
+ c033 o035 v008
+ c034 o035 v016
+ c035 o035 v023
+ c036 o035 v025
+ c037 o035 v026
+ c038 o035 v030
+ c039 o005 v001
+ c040 o005 v008
+ c041 o005 v012
+ c042 o005 v016
+ c043 o005 v023
+ c044 o005 v023
+ c045 o005 v025
+ c046 o020 v001
+ c047 o020 v008
+ c048 o020 v012
+ c049 o020 v016
+ c050 o020 v023
+ c051 o020 v031
+ c052 o020 v014
+ c053 o031 v008
+ c054 o031 v016
+ c055 o031 v003
+ c056 o031 v023
+ c057 o031 v028
+ c058 o031 v025
+ c059 o007 v018
+ c060 o007 v022
+ c061 o016 v008
+ c062 o016 v016
+ c063 o016 v023
+ c064 o016 v025
+ c065 o029 v005
+ c066 o029 v011
+ c067 o029 v008
+ c068 o029 v016
+ c069 o029 v023
+ c070 o003 v008
+ c071 o003 v016
+ c072 o003 v021
+ c073 o003 v023
+ c074 o003 v025
+ c075 o003 v026
+ c076 o027 v008
+ c077 o027 v016
+ c078 o027 v021
+ c079 o027 v023
+ c080 o027 v025
+ c081 o030 v016
+ c082 o030 v026
+ c083 o026 v023
+ c084 o026 v008
+ c085 o026 v009
+ c086 o026 v016
+ c087 o025 v013
+ c088 o026 v031
+ c089 o037 v001
+ c090 o037 v012
+ c091 o037 v030
+ c092 o037 v031
+ c093 o023 v008
+ c094 o023 v019
+ c095 o023 v030
+ c096 o023 v031
+ c097 o014 v029
+ c098 o006 v008
+ c099 o006 v016
+ c100 o006 v023
+ c101 o006 v025
+ c102 o006 v026
+ c103 o021 v006
+ c104 o021 v027
+ c105 o021 v027
+ c106 o010 v004
+ c107 o010 v008
+ c108 o010 v015
+ c109 o010 v016
+ c110 o010 v023
+ c111 o010 v030
+ c112 o008 v001
+ c113 o008 v012
+ c114 o008 v026
+ c115 o024 v008
+ c116 o024 v016
+ c117 o024 v023
+ c118 o011 v008
+ c119 o011 v016
+ c120 o011 v023
+ c121 o011 v030
+ c122 o032 v010
+ c123 o032 v018
+ c124 o015 v010
+ c125 o015 v018
+ c126 o015 v025
+ c127 o015 v026
+ c128 o022 v008
+ c129 o022 v005
+ c130 o017 v016
+ c131 o034 v009
+ c132 o034 v031
+ c133 o002 v000
+ c134 o002 v010
+ c135 o002 v018
+ c136 o036 v006
+ c137 o036 v008
+ c138 o036 v023
+ c139 o019 v030
+ c140 o013 v006
+ c141 o013 v007
+ c142 o028 v001
+ c143 o028 v012
+ c144 o018 v006
+ c145 o024 v032
+ c146 o000 v000
+ c147 o016 v002
+ c148 o009 v003
+ c149 o000 v009
+ c150 o000 v017
+ c151 o000 v018
+ c152 o000 v019
+ c153 o000 v020
+ c154 o000 v022
+ c155 o009 v028
+ c156 o016 v005
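Each row above ties an activity class to its primary object and verb class; a minimal python sketch for loading the mapping (the `load_mapping` helper name is ours, not part of the release):

```python
def load_mapping(path):
    """Parse Charades_v1_mapping.txt rows like "c000 o009 v008"
    into {activity_id: (object_id, verb_id)}."""
    mapping = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3:
                act, obj, verb = parts
                mapping[act] = (obj, verb)
    return mapping
```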
Charades/Charades_v1_objectclasses.txt ADDED
@@ -0,0 +1,38 @@
+ o000 None
+ o001 bag
+ o002 bed
+ o003 blanket
+ o004 book
+ o005 box
+ o006 broom
+ o007 chair
+ o008 closet/cabinet
+ o009 clothes
+ o010 cup/glass/bottle
+ o011 dish
+ o012 door
+ o013 doorknob
+ o014 doorway
+ o015 floor
+ o016 food
+ o017 groceries
+ o018 hair
+ o019 hands
+ o020 laptop
+ o021 light
+ o022 medicine
+ o023 mirror
+ o024 paper/notebook
+ o025 phone/camera
+ o026 picture
+ o027 pillow
+ o028 refrigerator
+ o029 sandwich
+ o030 shelf
+ o031 shoe
+ o032 sofa/couch
+ o033 table
+ o034 television
+ o035 towel
+ o036 vacuum
+ o037 window
Charades/Charades_v1_test.csv ADDED
The diff for this file is too large to render. See raw diff
 
Charades/Charades_v1_train.csv ADDED
The diff for this file is too large to render. See raw diff
 
Charades/Charades_v1_verbclasses.txt ADDED
@@ -0,0 +1,33 @@
+ v000 awaken
+ v001 close
+ v002 cook
+ v003 dress
+ v004 drink
+ v005 eat
+ v006 fix
+ v007 grasp
+ v008 hold
+ v009 laugh
+ v010 lie
+ v011 make
+ v012 open
+ v013 photograph
+ v014 play
+ v015 pour
+ v016 put
+ v017 run
+ v018 sit
+ v019 smile
+ v020 sneeze
+ v021 snuggle
+ v022 stand
+ v023 take
+ v024 talk
+ v025 throw
+ v026 tidy
+ v027 turn
+ v028 undress
+ v029 walk
+ v030 wash
+ v031 watch
+ v032 work
Charades/README.txt ADDED
@@ -0,0 +1,229 @@
+ ###########################################################
+     ________  _____    ____  ___    ____  ___________
+    / ____/ / / /   |  / __ \/   |  / __ \/ ____/ ___/
+   / /   / /_/ / /| | / /_/ / /| | / / / / __/  \__ \
+  / /___/ __  / ___ |/ _, _/ ___ |/ /_/ / /___ ___/ /
+  \____/_/ /_/_/  |_/_/ |_/_/  |_/_____/_____//____/
+
+ ###########################################################
+
+ The Charades Dataset
+ allenai.org/plato/charades/
+ Initial Release, June 2016
+
+ Gunnar A. Sigurdsson
+ Gul Varol
+ Xiaolong Wang
+ Ivan Laptev
+ Ali Farhadi
+ Abhinav Gupta
+
+ If this work helps your research, please cite:
+ @article{sigurdsson2016hollywood,
+     author = {Gunnar A. Sigurdsson and G{\"u}l Varol and Xiaolong Wang and Ivan Laptev and Ali Farhadi and Abhinav Gupta},
+     title = {Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding},
+     journal = {ArXiv e-prints},
+     eprint = {1604.01753},
+     year = {2016},
+     url = {http://arxiv.org/abs/1604.01753},
+ }
+
+ Relevant files:
+ README.txt (this file)
+ license.txt (the license file, this must be included)
+ Charades.zip:
+     Charades_v1_train.csv (the training annotations)
+     Charades_v1_test.csv (the testing annotations)
+     Charades_v1_classes.txt (the classes)
+     Charades_v1_objectclasses.txt (the primary object classes)
+     Charades_v1_verbclasses.txt (the primary verb classes)
+     Charades_v1_mapping.txt (mapping from activity to object and verb)
+     Charades_v1_classify.m (evaluation code for video-level classification)
+     Charades_v1_localize.m (evaluation code for temporal action detection)
+     test_submission_classify.txt (example test output to evaluate an algorithm)
+     test_submission_localize.txt (example test output to evaluate an algorithm)
+ Charades_v1.zip (the videos)
+ Charades_caption.zip (contains evaluation code for caption generation)
+ Charades_v1_rgb.tar (the videos stored as jpg frames at 24 fps)
+ Charades_v1_flow.tar (the flow stored as jpg frames at 24 fps)
+ Charades_v1_features_rgb.tar.gz (fc7 features from the RGB stream of a Two-Stream network)
+ Charades_v1_features_flow.tar.gz (fc7 features from the Flow stream of a Two-Stream network)
+ Please refer to the website to download any missing files.
+
+
+ ###########################################################
+ Charades_v1.zip
+ ###########################################################
+ The zipfile contains videos encoded in H.264/MPEG-4 AVC (mp4) using ffmpeg:
+     ffmpeg -i input.ext -vcodec libx264 -crf 23 -c:a aac -strict -2 -pix_fmt yuv420p output.mp4
+ The videos, originally in various formats, maintain their original resolutions and framerates.
+
+
+ ###########################################################
+ Charades_v1_classes.txt
+ ###########################################################
+ Contains each class label (starting at c000) followed by a human-readable description of the action, such as "c008 Opening a door"
+
+
+ ###########################################################
+ Charades_v1_train.csv and Charades_v1_test.csv
+ ###########################################################
+ A comma-separated csv, where a field may be enclosed by double quotation marks (") in case it contains a comma. If a field has multiple values, such as multiple actions, those are separated by a semicolon (;). The file contains the following fields:
+
+ - id:
+     Unique identifier for each video.
+ - subject:
+     Unique identifier for each subject in the dataset
+ - scene:
+     One of 15 indoor scenes in the dataset, such as Kitchen
+ - quality:
+     The quality of the video judged by an annotator (7-point scale, 7=high quality)
+ - relevance:
+     The relevance of the video to the script judged by an annotator (7-point scale, 7=very relevant)
+ - verified:
+     'Yes' if an annotator successfully verified that the video matches the script, else 'No'
+ - script:
+     The human-generated script used to generate the video
+ - descriptions:
+     Semicolon-separated list of descriptions by annotators watching the video
+ - actions:
+     Semicolon-separated list of "class start end" triplets for each action in the video, such as c092 11.90 21.20;c147 0.00 12.60
+ - length:
+     The length of the video in seconds
+
+ This can be loaded into MATLAB as follows:
+
+     f = fopen('Charades_v1_train.csv');
+     header = textscan(f,repmat('%s ',[1 10]),1,'Delimiter',',');
+     csv = textscan(f,repmat('%q ',[1 10]),'Delimiter',',');
+     actions = csv{10};
+     actions_in_first_video = regexp(actions{1},';','split');
+
+
+ This can be loaded into python as:
+
+     import csv
+     with open('Charades_v1_train.csv') as f:
+         reader = csv.DictReader(f)
+         for row in reader:
+             actions = row['actions'].split(';')
+
+ Please refer to the evaluation code for usage examples.
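Building on the python snippet above, each entry in `actions` is a "class start end" triplet; a small sketch for turning one into typed values (the `parse_action` helper name is ours, not part of the release):

```python
def parse_action(triplet):
    """Parse one action triplet, e.g. 'c092 11.90 21.20',
    into (class_index, start_seconds, end_seconds)."""
    cls, start, end = triplet.split(' ')
    return int(cls[1:]), float(start), float(end)
```

For example, `parse_action('c092 11.90 21.20')` returns `(92, 11.9, 21.2)`.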
+
+
+ ###########################################################
+ Charades_v1_classify.m
+ ###########################################################
+ Evaluation code for video-level classification. Each video has zero or more actions. This script takes in a "submission file" which is a csv file of the form:
+
+     id vector
+
+ where 'id' is a video id for a given video, and 'vector' is a whitespace delimited list of 157 floating point numbers representing the scores for each action in a video. An example submission file is provided in test_submission_classify.txt
+
+ The evaluation script calculates the mean average precision (mAP) for the videos. That is, the average of the average precision (AP) for a single activity in all the videos.
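A submission file in this format can be produced with a sketch along these lines (the helper name and dict layout are ours; the scores are placeholders for real model outputs):

```python
def write_classify_submission(path, predictions):
    """Write one line per video: the video id followed by
    157 whitespace-separated scores, one per action class."""
    with open(path, 'w') as f:
        for vid, scores in predictions.items():
            assert len(scores) == 157, 'one score per class expected'
            f.write(vid + ' ' + ' '.join('%g' % s for s in scores) + '\n')
```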
+
+
+ ###########################################################
+ Charades_v1_localize.m
+ ###########################################################
+ Evaluation code for frame-level classification (localization). Each frame in a video has zero or more actions. This script takes in a "submission file" which is a csv file of the form:
+
+     id framenumber vector
+
+ where 'id' is a video id for a given video, 'framenumber' is the frame number described below, and 'vector' is a whitespace delimited list of 157 floating point numbers representing the scores of each action in a frame. An example submission file is provided in test_submission_localize.txt (download this file with get_test_submission_localize.sh).
+
+ To avoid extremely large submission files, the evaluation script evaluates mAP on 25 equally spaced frames throughout each video. The frames are chosen as follows:
+
+     for j=1:frames_per_video
+         timepoint(j) = (j-1)*time/frames_per_video;
+
+ That is: 0, time/25, 2*time/25, ..., 24*time/25.
+
+ The baseline performance was generated by calculating the action scores at 75 equally spaced frames in the video (our batchsize) and picking every third prediction.
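The same frame selection can be written out in python; frame numbers in the submission file are 1-indexed, so frame j corresponds to timepoint (j-1)*time/25 (the `frame_timepoints` helper name is ours):

```python
def frame_timepoints(time, frames_per_video=25):
    """Timepoints (in seconds) of the evaluated frames,
    matching timepoint(j) = (j-1)*time/frames_per_video."""
    return [(j - 1) * time / frames_per_video
            for j in range(1, frames_per_video + 1)]
```

For a 50-second video this gives 0.0, 2.0, ..., 48.0.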
+
+ For more information about localization, please refer to the following publication:
+ @article{sigurdsson2016asynchronous,
+     author = {Gunnar A. Sigurdsson and Santosh Divvala and Ali Farhadi and Abhinav Gupta},
+     title = {Asynchronous Temporal Fields for Action Recognition},
+     journal = {arXiv preprint arXiv:1612.06371},
+     year = {2016},
+     pdf = {http://arxiv.org/pdf/1612.06371.pdf},
+     code = {https://github.com/gsig/temporal-fields},
+ }
+
+
+ ###########################################################
+ Charades_v1_rgb.tar
+ ###########################################################
+ These frames were extracted at 24fps using the following ffmpeg call for each video in the dataset:
+
+     line=pathToVideo
+     MAXW=320
+     MAXH=320
+     filename=$(basename $line)
+     ffmpeg -i "$line" -qscale:v 3 -filter:v "scale='if(gt(a,$MAXW/$MAXH),$MAXW,-1)':'if(gt(a,$MAXW/$MAXH),-1,$MAXH)',fps=fps=24" "/somepath/${filename%.*}/${filename%.*}_%0d.jpg";
+
+ The files are stored as Charades_v1_rgb/id/id-000000.jpg where id is the video id and 000000 is the number of the frame at 24fps.
+
+
+ ###########################################################
+ Charades_v1_flow.tar
+ ###########################################################
+ The flow was calculated similarly at 24fps using the OpenCV "Dual TV L1" Optical Flow Algorithm (OpticalFlowDual_TVL1_GPU)
+ The flow for each frame is stored as id-000000x.jpg and id-000000y.jpg for the x and y components of the flow respectively.
+ The flow is mapped to the range {0,1,...,255} with the following formula:
+     y = 255*(x-L)/(H-L)
+     y = max(0,y)
+     y = min(255,y)
+ where L=-20 and H=20, the lower and upper bounds of the optical flow.
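The same mapping, as a small python sketch (the `flow_to_byte` name is ours, not part of the release):

```python
def flow_to_byte(x, L=-20.0, H=20.0):
    """Map a flow value to {0,...,255}: y = 255*(x-L)/(H-L),
    clipped to the valid byte range."""
    y = 255.0 * (x - L) / (H - L)
    return int(min(255.0, max(0.0, y)))
```

Flow values outside [-20, 20] saturate at 0 or 255.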
179
+ The files are stored as Charades_v1_flow/id/id-000000x.jpg where id is the video id, 000000 is the number of the frame at 24fps, and x is either x or y depending on the optical flow direction (optical flow is stored as two seperate grayscale images for the two channels)
+
+
+ ###########################################################
+ Charades_v1_features_{rgb,flow}.tar.gz
+ ###########################################################
+ Using the two-stream code available at github.com/gsig/charades-algorithms we extracted fc7 features (after ReLU) from the VGG-16 rgb and optical flow streams using the provided models (twostream_rgb.t7 and twostream_flow.t7). Logistic regression (a linear layer, softmax, and cross-entropy loss) on top of these features gives 18.9% accuracy on Charades classification. Simplified code for extracting the features is as follows:
+
+ fc7 = model.modules[37].output
+ for i=1,fc7:size(1) do
+   out:write(string.format('%.6g ', fc7[i]))
+ end
+
+ There are two folders, Charades_v1_features_rgb/ and Charades_v1_features_flow/, one for each stream.
+ The features are stored as 4096 whitespace-delimited numbers in a plain text file.
+
+ The files are stored as Charades_v1_features_rgb/id/id-000000.txt, where id is the video id and 000000 is the frame number at 24fps, which matches the provided rgb and flow data. To limit the file size, we include every 4th frame, but the frame numbers correspond to 24fps, so the numbers are 1, 5, 9, 13, etc.
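The every-4th-frame numbering can be sketched as follows (feature_frames is a hypothetical helper; total_frames would be the number of extracted RGB frames for the video):

```python
def feature_frames(total_frames):
    # Frame numbers that have provided fc7 features: every 4th frame
    # at 24fps, starting from 1, i.e. 1, 5, 9, 13, ...
    return list(range(1, total_frames + 1, 4))
```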
+ The features can be loaded as follows:
+
+ Python:
+ fc7 = numpy.loadtxt('id/id-000000.txt')
+
+ Torch:
+ file = io.open('id/id-000000.txt')
+ xx = torch.Tensor(file:lines()():split(' '));
+ file:close()
+
+
+ ###########################################################
+ Baseline algorithms on Charades
+ ###########################################################
+ Code for multiple activity recognition algorithms is provided at:
+ https://github.com/gsig/charades-algorithms
+
+
+ ###########################################################
+ CHANGELOG
+ ###########################################################
+ 6/1/16
+ Initial release
+
+ 2/27/17
+ Adding support for evaluating localization: new evaluation script for localization.
+ A 'length' column was added to Charades_v1_train.csv and Charades_v1_test.csv to provide an official length for each video.
+ Adding details about the provided RGB and Flow data.
+
+ 5/14/17
+ Adding details about the provided fc7 features.
+ Improving scene annotations ('scene' column in _train and _test).
+
+
+ ###########################################################
Charades/get_test_submission_localize.sh ADDED
@@ -0,0 +1,4 @@
+ #!/bin/bash
+ # Omitted from this zip file due to size. 49M compressed.
+ wget http://ai2-website.s3.amazonaws.com/data/test_submission_localize.zip
+ unzip test_submission_localize.zip
Charades/license.txt ADDED
@@ -0,0 +1,15 @@
+ License for Non-Commercial Use
+
+ If this software is redistributed, this license must be included. The term software includes any source files, documentation, executables, models, and data.
+
+ This software and data is available for general use by academic or non-profit, or government-sponsored researchers. It may also be used for evaluation purposes elsewhere. This license does not grant the right to use this software or any derivation of it in a for-profit enterprise. For commercial use, please contact The Allen Institute for Artificial Intelligence.
+
+ This license does not grant the right to modify and publicly release the data in any form.
+
+ This license does not grant the right to distribute the data to a third party in any form.
+
+ The subjects in this data should be treated with respect and dignity. This license only grants the right to publish short segments or still images in an academic publication where necessary to present examples, experimental results, or observations.
+
+ This software comes with no warranty or guarantee of any kind. By using this software, the user accepts full liability.
+
+ The Allen Institute for Artificial Intelligence (C) 2016.
Charades/test_submission_caption.txt ADDED
The diff for this file is too large to render. See raw diff
Charades/test_submission_classify.txt ADDED
The diff for this file is too large to render. See raw diff
README.md ADDED
@@ -0,0 +1,61 @@
+
+ # Description
+
+ ## Dataset V1-2
+
+ 1. v1-2\_train.tar.gz and v1-2\_val.tar.gz
+
+ Data (train and val sets) associated with ActivityNet release 1.2
+
+ 2. v1-2\_test.tar.gz
+
+ Data (test set only) associated with ActivityNet release 1.2
+
+ ## Dataset V1-3
+
+ 1. v1-3\_train\_val.tar.gz
+
+ - *Additional* videos (train/val set) collected for ActivityNet release 1.3
+
+ - v1-3 is an extension of v1-2, so you also need to download the v1-2 data and merge it in to obtain v1-3
+
+ 2. v1-3\_test.tar.gz
+
+ - *Additional* videos (test set only) collected for ActivityNet release 1.3
+
+ - v1-3 is an extension of v1-2, so you also need to download the v1-2 data and merge it in to obtain v1-3
+
+ ## Missing files
+
+ For people who downloaded the dataset with youtube-dl, we provide the missing videos.
+
+ 1. missing_files_trainval.zip
+
+ - Missing videos in the trainval set
+
+ 2. missing_files_v1-2_test.zip
+
+ - Missing videos in the v1-2 test set
+
+ 3. missing_files_v1-3_test.zip
+
+ - Missing videos in the v1-3 test set
+
+ ## MD5 sums
+ b06ca628443da1c1389c69b2feac6ca2 v1-2_test.tar.gz
+ 8510ed0d8ae399b1bda399fd117e7406 v1-2_train.tar.gz
+ 34f0d2b867f188883259b607e68c830f v1-2_val.tar.gz
+ ec218c66991d776ad0ff826c0b43836d v1-3_test.tar.gz
+ eb0829a2eae2e3f254a0fcb64c6583c4 v1-3_train_val.tar.gz
+
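The archives in this repository are stored as split parts (e.g. v1-2_test.tar.gz.00 through .13). A minimal Python sketch, assuming the parts simply concatenate back to the original tar.gz, to reassemble an archive and check it against the list above (reassemble and md5_hex are hypothetical helpers):

```python
import glob
import hashlib

def reassemble(prefix, out_path):
    # Concatenate split parts (prefix.00, prefix.01, ...) in lexical
    # order to rebuild the original archive.
    with open(out_path, "wb") as out:
        for part in sorted(glob.glob(prefix + ".*")):
            with open(part, "rb") as f:
                out.write(f.read())

def md5_hex(path, chunk_size=1 << 20):
    # Stream the file through MD5 so multi-GB archives fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (assumed filenames; expected digest taken from the list above):
# reassemble("v1-2_test.tar.gz", "v1-2_test.tar.gz")
# assert md5_hex("v1-2_test.tar.gz") == "b06ca628443da1c1389c69b2feac6ca2"
```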
+
+ ## External Link
+
+ For people who are not able to download large files from Google Drive, we provide a link to another drive.
+
+ 1. Baidu Pan (please send me an email if it has expired)
+
+ - link: https://pan.baidu.com/s/1Ic6wrvOUJARgnqRSk-BzjA
+
+ - code: f766
+
activity_net.v1-3.min.json ADDED
The diff for this file is too large to render. See raw diff
missing_files.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0dc600595a22645b37e203cb2bf6d65f733c91614c67b2846a4b414b5709785
+ size 37798739657
missing_files_v1-2_test.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35982454ddbc4a08f1ef65c20e25c0d4afbd6ce8d3a6ad041668bec0a1f57432
+ size 5687952619
missing_files_v1-3_test.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f82e05f37ca28562d4c5acb8d5e7cf8c25c5696efc1d978616ca07ab4328f6e
+ size 6888567987
train.json ADDED
The diff for this file is too large to render. See raw diff
train_ids.json ADDED
The diff for this file is too large to render. See raw diff
v1-2_test.tar.gz.00 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f361c3f201745f1951b7f6dd22069085db04736c3d19eeb4c72ac35ebd9cf482
+ size 4194304000
v1-2_test.tar.gz.01 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6577c76853b0e25a8cc0144ee867c62f4c9f229be76458b15e7f242a1e6f78f2
+ size 4194304000
v1-2_test.tar.gz.02 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf7fef61c7c8e4d202cfa5702b0ade27fa203685598501337da00bba62725723
+ size 4194304000
v1-2_test.tar.gz.03 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c839e73be8fe894b8e21356a62cb3fcba8fbf0766bd88b3ce953d0c851ec269
+ size 4194304000
v1-2_test.tar.gz.04 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4e76629544143b504593b993ea67cb5a014fa391e0ad3d5e455342c6fc1c767
+ size 4194304000
v1-2_test.tar.gz.05 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45840394be695f15478c47e682428598a540a33e800ad5903bd84c3a5dda8692
+ size 4194304000
v1-2_test.tar.gz.06 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1170e038959c78333ba85b309ae8b8bd0881296f51949d61342b880991d880bc
+ size 4194304000
v1-2_test.tar.gz.07 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ffb1751ec8bd443faebabedf17a7e5312d59e58200fd8b53b18bc81072126fd
+ size 4194304000
v1-2_test.tar.gz.08 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f04b66acab5f8451659c9db4bd322601ba54cbda5012b3eaabd74f6ee2807208
+ size 4194304000
v1-2_test.tar.gz.09 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a52e93082da25b6bec812e26e99085e326004e7278bc2d880313806c86d72ed
+ size 4194304000
v1-2_test.tar.gz.10 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59c396b070db0df6af456088f0b83081b1d60c72cc71d49e86cc2100850852d8
+ size 4194304000
v1-2_test.tar.gz.11 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9a829bea0e3cf26e83e9e80c4d1d7a5cf07576af91c695188f88aa7fadffbea
+ size 4194304000
v1-2_test.tar.gz.12 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c49eeec20519d95a0956cbe0f85d15c96e4ce3898e0576ff04bb276d334e0b7e
+ size 4194304000
v1-2_test.tar.gz.13 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4155fcf720960e4f35769d1a117382e3b9f05df0d9ad583637a46f3c140a082e
+ size 3543497960
v1-2_train.tar.gz.00 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14c7d31d3d34e6d2c27240a69d6734528295bd497f7d6aed67f6f7f3a4ac83ab
+ size 4194304000
v1-2_train.tar.gz.01 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6a3a6bc2369b450b08e911306e0780f6dad11d8090c3143f26dd4453ee50ee9
+ size 4194304000
v1-2_train.tar.gz.02 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4cf214623928c4dae110887bcc978a210de96777f7c14a8a661e5e4c9ae417f
+ size 4194304000
v1-2_train.tar.gz.03 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89d3b7b6ad10bda6894b001768763de1e08ee69e28b08de0795d93b7a6107643
+ size 4194304000
v1-2_train.tar.gz.04 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:876e9c01fe9dc6d5bf868f2e3760af2eed9e08a2dd08eb90230fab79657460ae
+ size 4194304000
v1-2_train.tar.gz.05 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:edee3fff13ea5ecba52c7ed9783ceff179dc8598de07cd23b18c6fabd1d3fb66
+ size 4194304000
v1-2_train.tar.gz.06 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8602441bd94107267cb97ffaa2be62be474edda59c2ec1e638a0867fd4dbe796
+ size 4194304000
v1-2_train.tar.gz.07 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a89792fb4aea0fbeade48de320c6e36bb8c352b87766f4996730605823725be2
+ size 4194304000
v1-2_train.tar.gz.08 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a52ac47c177ca2668a1f05ef06393ec2445ad743e5200945df1d9bbe6bc13ed9
+ size 4194304000
v1-2_train.tar.gz.09 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2a91408eed8a206b5660aaec9ede6fbbecd1fc94a434edb5537403f03def26d
+ size 4194304000
v1-2_train.tar.gz.10 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7935b6b56fbcfcc7fb74cbabb6e859c1c11d01c2581d2f2ad52734fd039953c5
+ size 4194304000
v1-2_train.tar.gz.11 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9bb85c04f2e7ad06605ce6c7e1d56fb61f4860386994870c91c99851c6c62f83
+ size 4194304000
v1-2_train.tar.gz.12 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9dc08886befd6e09d800b9ca0eac973d79c03ec87908425a0af9d03a62faa9ae
+ size 4194304000
v1-2_train.tar.gz.13 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7bcec5818adca6022245a89ba55218ec9ee23d2c0fd77718b38d7258bc30e43
+ size 4194304000
v1-2_train.tar.gz.14 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09cf2292e6e4899da1ebcec28155f196364b501cd503dfd22ada285369994d44
+ size 4194304000
v1-2_train.tar.gz.15 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0cadf8631e09ec1c6bd1f5629534d926420a504151b1760dee46767ddc9d1479
+ size 4194304000