diff --git a/.gitattributes b/.gitattributes
index bed0738c7eeb449bca98b5d2f33c89a1ee56349a..8224360b28fdb432a29bd55045eb80b3981d8ca1 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -58,3 +58,169 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
+Test/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Test/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies.pdf filter=lfs diff=lfs merge=lfs -text
+Test/Camera[[:space:]]Motion[[:space:]]and[[:space:]]Surrounding[[:space:]]Scene[[:space:]]Appearance[[:space:]]as[[:space:]]Context[[:space:]]for[[:space:]]Action[[:space:]]Recognition/Camera[[:space:]]Motion[[:space:]]and[[:space:]]Surrounding[[:space:]]Scene[[:space:]]Appearance[[:space:]]as[[:space:]]Context[[:space:]]for[[:space:]]Action[[:space:]]Recognition-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Test/Camera[[:space:]]Motion[[:space:]]and[[:space:]]Surrounding[[:space:]]Scene[[:space:]]Appearance[[:space:]]as[[:space:]]Context[[:space:]]for[[:space:]]Action[[:space:]]Recognition/Camera[[:space:]]Motion[[:space:]]and[[:space:]]Surrounding[[:space:]]Scene[[:space:]]Appearance[[:space:]]as[[:space:]]Context[[:space:]]for[[:space:]]Action[[:space:]]Recognition.pdf filter=lfs diff=lfs merge=lfs -text
+Test/Can[[:space:]]a[[:space:]]driving[[:space:]]simulator[[:space:]]assess[[:space:]]the[[:space:]]effectiveness[[:space:]]of[[:space:]]Hazard[[:space:]]Perception[[:space:]]training[[:space:]]in[[:space:]]young[[:space:]]novice[[:space:]]drivers/Can[[:space:]]a[[:space:]]driving[[:space:]]simulator[[:space:]]assess[[:space:]]the[[:space:]]effectiveness[[:space:]]of[[:space:]]Hazard[[:space:]]Perception[[:space:]]training[[:space:]]in[[:space:]]young[[:space:]]novice[[:space:]]drivers-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Test/Can[[:space:]]a[[:space:]]driving[[:space:]]simulator[[:space:]]assess[[:space:]]the[[:space:]]effectiveness[[:space:]]of[[:space:]]Hazard[[:space:]]Perception[[:space:]]training[[:space:]]in[[:space:]]young[[:space:]]novice[[:space:]]drivers/Can[[:space:]]a[[:space:]]driving[[:space:]]simulator[[:space:]]assess[[:space:]]the[[:space:]]effectiveness[[:space:]]of[[:space:]]Hazard[[:space:]]Perception[[:space:]]training[[:space:]]in[[:space:]]young[[:space:]]novice[[:space:]]drivers.pdf filter=lfs diff=lfs merge=lfs -text
+Test/Leveraging[[:space:]]Multi-Domain[[:space:]]Prior[[:space:]]Knowledge[[:space:]]in[[:space:]]Topic[[:space:]]Models/Leveraging[[:space:]]Multi-Domain[[:space:]]Prior[[:space:]]Knowledge[[:space:]]in[[:space:]]Topic[[:space:]]Models-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Test/Leveraging[[:space:]]Multi-Domain[[:space:]]Prior[[:space:]]Knowledge[[:space:]]in[[:space:]]Topic[[:space:]]Models/Leveraging[[:space:]]Multi-Domain[[:space:]]Prior[[:space:]]Knowledge[[:space:]]in[[:space:]]Topic[[:space:]]Models.pdf filter=lfs diff=lfs merge=lfs -text
+Test/nips-2011-001/nips-2011-001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Test/nips-2011-001/nips-2011-001.pdf filter=lfs diff=lfs merge=lfs -text
+Train/3D[[:space:]]Proximal[[:space:]]Femur[[:space:]]Estimation[[:space:]]through[[:space:]]Bi-planar[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Hierarchical/3D[[:space:]]Proximal[[:space:]]Femur[[:space:]]Estimation[[:space:]]through[[:space:]]Bi-planar[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Hierarchical-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/3D[[:space:]]Proximal[[:space:]]Femur[[:space:]]Estimation[[:space:]]through[[:space:]]Bi-planar[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Hierarchical/3D[[:space:]]Proximal[[:space:]]Femur[[:space:]]Estimation[[:space:]]through[[:space:]]Bi-planar[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Hierarchical.pdf filter=lfs diff=lfs merge=lfs -text
+Train/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Active[[:space:]]Boundary[[:space:]]Annotation[[:space:]]using[[:space:]]Random[[:space:]]MAP[[:space:]]Perturbations/Active[[:space:]]Boundary[[:space:]]Annotation[[:space:]]using[[:space:]]Random[[:space:]]MAP[[:space:]]Perturbations-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Active[[:space:]]Boundary[[:space:]]Annotation[[:space:]]using[[:space:]]Random[[:space:]]MAP[[:space:]]Perturbations/Active[[:space:]]Boundary[[:space:]]Annotation[[:space:]]using[[:space:]]Random[[:space:]]MAP[[:space:]]Perturbations.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Adaptive[[:space:]]Structure[[:space:]]from[[:space:]]Motion[[:space:]]with[[:space:]]a[[:space:]]contrario[[:space:]]model[[:space:]]estimation/Adaptive[[:space:]]Structure[[:space:]]from[[:space:]]Motion[[:space:]]with[[:space:]]a[[:space:]]contrario[[:space:]]model[[:space:]]estimation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Adaptive[[:space:]]Structure[[:space:]]from[[:space:]]Motion[[:space:]]with[[:space:]]a[[:space:]]contrario[[:space:]]model[[:space:]]estimation/Adaptive[[:space:]]Structure[[:space:]]from[[:space:]]Motion[[:space:]]with[[:space:]]a[[:space:]]contrario[[:space:]]model[[:space:]]estimation.pdf filter=lfs diff=lfs merge=lfs -text
+Train/An[[:space:]]automated[[:space:]]measure[[:space:]]of[[:space:]]MDP[[:space:]]similarity[[:space:]]for[[:space:]]transfer[[:space:]]in[[:space:]]reinforcement[[:space:]]learning/An[[:space:]]automated[[:space:]]measure[[:space:]]of[[:space:]]MDP[[:space:]]similarity[[:space:]]for[[:space:]]transfer[[:space:]]in[[:space:]]reinforcement[[:space:]]learning-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/An[[:space:]]automated[[:space:]]measure[[:space:]]of[[:space:]]MDP[[:space:]]similarity[[:space:]]for[[:space:]]transfer[[:space:]]in[[:space:]]reinforcement[[:space:]]learning/An[[:space:]]automated[[:space:]]measure[[:space:]]of[[:space:]]MDP[[:space:]]similarity[[:space:]]for[[:space:]]transfer[[:space:]]in[[:space:]]reinforcement[[:space:]]learning.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Automated[[:space:]]Embryo[[:space:]]Stage[[:space:]]Classification[[:space:]]in[[:space:]]Time-lapse[[:space:]]Microscopy[[:space:]]Video[[:space:]]of[[:space:]]Early[[:space:]]Human[[:space:]]Embryo[[:space:]]Development/Automated[[:space:]]Embryo[[:space:]]Stage[[:space:]]Classification[[:space:]]in[[:space:]]Time-lapse[[:space:]]Microscopy[[:space:]]Video[[:space:]]of[[:space:]]Early[[:space:]]Human[[:space:]]Embryo[[:space:]]Development-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Automated[[:space:]]Embryo[[:space:]]Stage[[:space:]]Classification[[:space:]]in[[:space:]]Time-lapse[[:space:]]Microscopy[[:space:]]Video[[:space:]]of[[:space:]]Early[[:space:]]Human[[:space:]]Embryo[[:space:]]Development/Automated[[:space:]]Embryo[[:space:]]Stage[[:space:]]Classification[[:space:]]in[[:space:]]Time-lapse[[:space:]]Microscopy[[:space:]]Video[[:space:]]of[[:space:]]Early[[:space:]]Human[[:space:]]Embryo[[:space:]]Development.pdf filter=lfs diff=lfs merge=lfs -text
+Train/BMVC-2011-001/BMVC-2011-001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/BMVC-2011-001/BMVC-2011-001.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Bayesian[[:space:]]Joint[[:space:]]Topic[[:space:]]Modelling[[:space:]]for[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Object[[:space:]]Localisation/Bayesian[[:space:]]Joint[[:space:]]Topic[[:space:]]Modelling[[:space:]]for[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Object[[:space:]]Localisation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Bayesian[[:space:]]Joint[[:space:]]Topic[[:space:]]Modelling[[:space:]]for[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Object[[:space:]]Localisation/Bayesian[[:space:]]Joint[[:space:]]Topic[[:space:]]Modelling[[:space:]]for[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Object[[:space:]]Localisation.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Being[[:space:]]John[[:space:]]Malkovich/Being[[:space:]]John[[:space:]]Malkovich-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Being[[:space:]]John[[:space:]]Malkovich/Being[[:space:]]John[[:space:]]Malkovich.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Beyond[[:space:]]Spatial[[:space:]]Pyramids-[[:space:]]Receptive[[:space:]]Field[[:space:]]Learning[[:space:]]for[[:space:]]Pooled[[:space:]]Image[[:space:]]Features/Beyond[[:space:]]Spatial[[:space:]]Pyramids-[[:space:]]Receptive[[:space:]]Field[[:space:]]Learning[[:space:]]for[[:space:]]Pooled[[:space:]]Image[[:space:]]Features-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Beyond[[:space:]]Spatial[[:space:]]Pyramids-[[:space:]]Receptive[[:space:]]Field[[:space:]]Learning[[:space:]]for[[:space:]]Pooled[[:space:]]Image[[:space:]]Features/Beyond[[:space:]]Spatial[[:space:]]Pyramids-[[:space:]]Receptive[[:space:]]Field[[:space:]]Learning[[:space:]]for[[:space:]]Pooled[[:space:]]Image[[:space:]]Features.pdf filter=lfs diff=lfs merge=lfs -text
+Train/CATALYST[[:space:]]ENHANCED[[:space:]]MICRO[[:space:]]SCALE[[:space:]]BATCH[[:space:]]ASSEMBLY/CATALYST[[:space:]]ENHANCED[[:space:]]MICRO[[:space:]]SCALE[[:space:]]BATCH[[:space:]]ASSEMBLY-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/CATALYST[[:space:]]ENHANCED[[:space:]]MICRO[[:space:]]SCALE[[:space:]]BATCH[[:space:]]ASSEMBLY/CATALYST[[:space:]]ENHANCED[[:space:]]MICRO[[:space:]]SCALE[[:space:]]BATCH[[:space:]]ASSEMBLY.pdf filter=lfs diff=lfs merge=lfs -text
+Train/CRYOGENIC[[:space:]]CATHODOLUMINESCENCE[[:space:]]FROM[[:space:]]CuxAg1-xInSe2[[:space:]]THIN[[:space:]]FILMS/CRYOGENIC[[:space:]]CATHODOLUMINESCENCE[[:space:]]FROM[[:space:]]CuxAg1-xInSe2[[:space:]]THIN[[:space:]]FILMS-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/CRYOGENIC[[:space:]]CATHODOLUMINESCENCE[[:space:]]FROM[[:space:]]CuxAg1-xInSe2[[:space:]]THIN[[:space:]]FILMS/CRYOGENIC[[:space:]]CATHODOLUMINESCENCE[[:space:]]FROM[[:space:]]CuxAg1-xInSe2[[:space:]]THIN[[:space:]]FILMS.pdf filter=lfs diff=lfs merge=lfs -text
+Train/CVPR-2014-011/CVPR-2014-011-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/CVPR-2014-011/CVPR-2014-011.pdf filter=lfs diff=lfs merge=lfs -text
+Train/CVPR-2014-013/CVPR-2014-013-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/CVPR-2014-013/CVPR-2014-013.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Calibrating[[:space:]]Photometric[[:space:]]Stereo[[:space:]]by[[:space:]]Holistic[[:space:]]Reflectance[[:space:]]Symmetry[[:space:]]Analysis/Calibrating[[:space:]]Photometric[[:space:]]Stereo[[:space:]]by[[:space:]]Holistic[[:space:]]Reflectance[[:space:]]Symmetry[[:space:]]Analysis-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Calibrating[[:space:]]Photometric[[:space:]]Stereo[[:space:]]by[[:space:]]Holistic[[:space:]]Reflectance[[:space:]]Symmetry[[:space:]]Analysis/Calibrating[[:space:]]Photometric[[:space:]]Stereo[[:space:]]by[[:space:]]Holistic[[:space:]]Reflectance[[:space:]]Symmetry[[:space:]]Analysis.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Cambridge[[:space:]]Danehy[[:space:]]Park[[:space:]]Wind[[:space:]]Turbine[[:space:]]Preliminary[[:space:]]Project[[:space:]]Assessment/Cambridge[[:space:]]Danehy[[:space:]]Park[[:space:]]Wind[[:space:]]Turbine[[:space:]]Preliminary[[:space:]]Project[[:space:]]Assessment-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Cambridge[[:space:]]Danehy[[:space:]]Park[[:space:]]Wind[[:space:]]Turbine[[:space:]]Preliminary[[:space:]]Project[[:space:]]Assessment/Cambridge[[:space:]]Danehy[[:space:]]Park[[:space:]]Wind[[:space:]]Turbine[[:space:]]Preliminary[[:space:]]Project[[:space:]]Assessment.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Cascaded[[:space:]]Shape[[:space:]]Space[[:space:]]Pruning[[:space:]]for[[:space:]]Robust[[:space:]]Facial[[:space:]]Landmark[[:space:]]Detection/Cascaded[[:space:]]Shape[[:space:]]Space[[:space:]]Pruning[[:space:]]for[[:space:]]Robust[[:space:]]Facial[[:space:]]Landmark[[:space:]]Detection-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Cascaded[[:space:]]Shape[[:space:]]Space[[:space:]]Pruning[[:space:]]for[[:space:]]Robust[[:space:]]Facial[[:space:]]Landmark[[:space:]]Detection/Cascaded[[:space:]]Shape[[:space:]]Space[[:space:]]Pruning[[:space:]]for[[:space:]]Robust[[:space:]]Facial[[:space:]]Landmark[[:space:]]Detection.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Class[[:space:]]Specific[[:space:]]3D[[:space:]]Object[[:space:]]Shape[[:space:]]Priors[[:space:]]Using[[:space:]]Surface[[:space:]]Normals/Class[[:space:]]Specific[[:space:]]3D[[:space:]]Object[[:space:]]Shape[[:space:]]Priors[[:space:]]Using[[:space:]]Surface[[:space:]]Normals-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Class[[:space:]]Specific[[:space:]]3D[[:space:]]Object[[:space:]]Shape[[:space:]]Priors[[:space:]]Using[[:space:]]Surface[[:space:]]Normals/Class[[:space:]]Specific[[:space:]]3D[[:space:]]Object[[:space:]]Shape[[:space:]]Priors[[:space:]]Using[[:space:]]Surface[[:space:]]Normals.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Classification-Error[[:space:]]Cost[[:space:]]Minimization[[:space:]]Strategy-[[:space:]]dCMS/Classification-Error[[:space:]]Cost[[:space:]]Minimization[[:space:]]Strategy-[[:space:]]dCMS.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Combining[[:space:]]Motion[[:space:]]Planning[[:space:]]and[[:space:]]Optimization[[:space:]]for[[:space:]]Flexible[[:space:]]Robot[[:space:]]Manipulation/Combining[[:space:]]Motion[[:space:]]Planning[[:space:]]and[[:space:]]Optimization[[:space:]]for[[:space:]]Flexible[[:space:]]Robot[[:space:]]Manipulation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Combining[[:space:]]Motion[[:space:]]Planning[[:space:]]and[[:space:]]Optimization[[:space:]]for[[:space:]]Flexible[[:space:]]Robot[[:space:]]Manipulation/Combining[[:space:]]Motion[[:space:]]Planning[[:space:]]and[[:space:]]Optimization[[:space:]]for[[:space:]]Flexible[[:space:]]Robot[[:space:]]Manipulation.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Comparing[[:space:]]Visual[[:space:]]Feature[[:space:]]Coding[[:space:]]for[[:space:]]Learning[[:space:]]Disjoint[[:space:]]Camera[[:space:]]Dependencies/Comparing[[:space:]]Visual[[:space:]]Feature[[:space:]]Coding[[:space:]]for[[:space:]]Learning[[:space:]]Disjoint[[:space:]]Camera[[:space:]]Dependencies-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Comparing[[:space:]]Visual[[:space:]]Feature[[:space:]]Coding[[:space:]]for[[:space:]]Learning[[:space:]]Disjoint[[:space:]]Camera[[:space:]]Dependencies/Comparing[[:space:]]Visual[[:space:]]Feature[[:space:]]Coding[[:space:]]for[[:space:]]Learning[[:space:]]Disjoint[[:space:]]Camera[[:space:]]Dependencies.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Contextual[[:space:]]Gaussian[[:space:]]Process[[:space:]]Bandit[[:space:]]Optimization/Contextual[[:space:]]Gaussian[[:space:]]Process[[:space:]]Bandit[[:space:]]Optimization-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Contextual[[:space:]]Gaussian[[:space:]]Process[[:space:]]Bandit[[:space:]]Optimization/Contextual[[:space:]]Gaussian[[:space:]]Process[[:space:]]Bandit[[:space:]]Optimization.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Cross-lingual[[:space:]]Knowledge[[:space:]]Validation[[:space:]]Based[[:space:]]Taxonomy[[:space:]]Derivation[[:space:]]from[[:space:]]Heterogeneous[[:space:]]Online[[:space:]]Wikis/Cross-lingual[[:space:]]Knowledge[[:space:]]Validation[[:space:]]Based[[:space:]]Taxonomy[[:space:]]Derivation[[:space:]]from[[:space:]]Heterogeneous[[:space:]]Online[[:space:]]Wikis-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Cross-lingual[[:space:]]Knowledge[[:space:]]Validation[[:space:]]Based[[:space:]]Taxonomy[[:space:]]Derivation[[:space:]]from[[:space:]]Heterogeneous[[:space:]]Online[[:space:]]Wikis/Cross-lingual[[:space:]]Knowledge[[:space:]]Validation[[:space:]]Based[[:space:]]Taxonomy[[:space:]]Derivation[[:space:]]from[[:space:]]Heterogeneous[[:space:]]Online[[:space:]]Wikis.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Cultivation[[:space:]]and[[:space:]]Characterization[[:space:]]of[[:space:]]Microorganisms[[:space:]]in[[:space:]]Antarctic[[:space:]]Lakes/Cultivation[[:space:]]and[[:space:]]Characterization[[:space:]]of[[:space:]]Microorganisms[[:space:]]in[[:space:]]Antarctic[[:space:]]Lakes.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Curvature[[:space:]]and[[:space:]]Optimal[[:space:]]Algorithms[[:space:]]for[[:space:]]Learning[[:space:]]and[[:space:]]Minimizing[[:space:]]Submodular[[:space:]]Functions/Curvature[[:space:]]and[[:space:]]Optimal[[:space:]]Algorithms[[:space:]]for[[:space:]]Learning[[:space:]]and[[:space:]]Minimizing[[:space:]]Submodular[[:space:]]Functions-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Curvature[[:space:]]and[[:space:]]Optimal[[:space:]]Algorithms[[:space:]]for[[:space:]]Learning[[:space:]]and[[:space:]]Minimizing[[:space:]]Submodular[[:space:]]Functions/Curvature[[:space:]]and[[:space:]]Optimal[[:space:]]Algorithms[[:space:]]for[[:space:]]Learning[[:space:]]and[[:space:]]Minimizing[[:space:]]Submodular[[:space:]]Functions.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Decision[[:space:]]Tree[[:space:]]Fields/Decision[[:space:]]Tree[[:space:]]Fields-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Decision[[:space:]]Tree[[:space:]]Fields/Decision[[:space:]]Tree[[:space:]]Fields.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Deformable[[:space:]]Part[[:space:]]Descriptors[[:space:]]for[[:space:]]Fine-grained[[:space:]]Recognition[[:space:]]and[[:space:]]Attribute[[:space:]]Prediction/Deformable[[:space:]]Part[[:space:]]Descriptors[[:space:]]for[[:space:]]Fine-grained[[:space:]]Recognition[[:space:]]and[[:space:]]Attribute[[:space:]]Prediction-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Deformable[[:space:]]Part[[:space:]]Descriptors[[:space:]]for[[:space:]]Fine-grained[[:space:]]Recognition[[:space:]]and[[:space:]]Attribute[[:space:]]Prediction/Deformable[[:space:]]Part[[:space:]]Descriptors[[:space:]]for[[:space:]]Fine-grained[[:space:]]Recognition[[:space:]]and[[:space:]]Attribute[[:space:]]Prediction.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Dense[[:space:]]Semantic[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Objects[[:space:]]and[[:space:]]Attributes/Dense[[:space:]]Semantic[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Objects[[:space:]]and[[:space:]]Attributes-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Dense[[:space:]]Semantic[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Objects[[:space:]]and[[:space:]]Attributes/Dense[[:space:]]Semantic[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Objects[[:space:]]and[[:space:]]Attributes.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Detection[[:space:]]Bank-[[:space:]]An[[:space:]]Object[[:space:]]Detection[[:space:]]Based[[:space:]]Video[[:space:]]Representation[[:space:]]for[[:space:]]Multimedia[[:space:]]Event[[:space:]]Recognition/Detection[[:space:]]Bank-[[:space:]]An[[:space:]]Object[[:space:]]Detection[[:space:]]Based[[:space:]]Video[[:space:]]Representation[[:space:]]for[[:space:]]Multimedia[[:space:]]Event[[:space:]]Recognition-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Detection[[:space:]]Bank-[[:space:]]An[[:space:]]Object[[:space:]]Detection[[:space:]]Based[[:space:]]Video[[:space:]]Representation[[:space:]]for[[:space:]]Multimedia[[:space:]]Event[[:space:]]Recognition/Detection[[:space:]]Bank-[[:space:]]An[[:space:]]Object[[:space:]]Detection[[:space:]]Based[[:space:]]Video[[:space:]]Representation[[:space:]]for[[:space:]]Multimedia[[:space:]]Event[[:space:]]Recognition.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Difference[[:space:]]of[[:space:]]Boxes[[:space:]]Filters[[:space:]]Revisited-[[:space:]]Shadow[[:space:]]Suppression[[:space:]]and[[:space:]]Efficient[[:space:]]Character/Difference[[:space:]]of[[:space:]]Boxes[[:space:]]Filters[[:space:]]Revisited-[[:space:]]Shadow[[:space:]]Suppression[[:space:]]and[[:space:]]Efficient[[:space:]]Character-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Difference[[:space:]]of[[:space:]]Boxes[[:space:]]Filters[[:space:]]Revisited-[[:space:]]Shadow[[:space:]]Suppression[[:space:]]and[[:space:]]Efficient[[:space:]]Character/Difference[[:space:]]of[[:space:]]Boxes[[:space:]]Filters[[:space:]]Revisited-[[:space:]]Shadow[[:space:]]Suppression[[:space:]]and[[:space:]]Efficient[[:space:]]Character.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Dimension[[:space:]]Reduction[[:space:]]of[[:space:]]Network[[:space:]]Bottleneck[[:space:]]Bandwidth[[:space:]]Data[[:space:]]Space/Dimension[[:space:]]Reduction[[:space:]]of[[:space:]]Network[[:space:]]Bottleneck[[:space:]]Bandwidth[[:space:]]Data[[:space:]]Space-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Discriminative[[:space:]]Bayesian[[:space:]]Active[[:space:]]Shape[[:space:]]Models/Discriminative[[:space:]]Bayesian[[:space:]]Active[[:space:]]Shape[[:space:]]Models-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Discriminative[[:space:]]Bayesian[[:space:]]Active[[:space:]]Shape[[:space:]]Models/Discriminative[[:space:]]Bayesian[[:space:]]Active[[:space:]]Shape[[:space:]]Models.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Discriminative[[:space:]]Segment[[:space:]]Annotation[[:space:]]in[[:space:]]Weakly[[:space:]]Labeled[[:space:]]Video/Discriminative[[:space:]]Segment[[:space:]]Annotation[[:space:]]in[[:space:]]Weakly[[:space:]]Labeled[[:space:]]Video-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Discriminative[[:space:]]Segment[[:space:]]Annotation[[:space:]]in[[:space:]]Weakly[[:space:]]Labeled[[:space:]]Video/Discriminative[[:space:]]Segment[[:space:]]Annotation[[:space:]]in[[:space:]]Weakly[[:space:]]Labeled[[:space:]]Video.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Display[[:space:]]type[[:space:]]effects[[:space:]]in[[:space:]]military[[:space:]]operational[[:space:]]tasks[[:space:]]using[[:space:]]Unmanned[[:space:]]Vehicle[[:space:]]UV[[:space:]]video[[:space:]]images[[:space:]]Comparison[[:space:]]between[[:space:]]color[[:space:]]and[[:space:]]BW[[:space:]]video[[:space:]]feeds/Display[[:space:]]type[[:space:]]effects[[:space:]]in[[:space:]]military[[:space:]]operational[[:space:]]tasks[[:space:]]using[[:space:]]Unmanned[[:space:]]Vehicle[[:space:]]UV[[:space:]]video[[:space:]]images[[:space:]]Comparison[[:space:]]between[[:space:]]color[[:space:]]and[[:space:]]BW[[:space:]]video[[:space:]]feeds-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Display[[:space:]]type[[:space:]]effects[[:space:]]in[[:space:]]military[[:space:]]operational[[:space:]]tasks[[:space:]]using[[:space:]]Unmanned[[:space:]]Vehicle[[:space:]]UV[[:space:]]video[[:space:]]images[[:space:]]Comparison[[:space:]]between[[:space:]]color[[:space:]]and[[:space:]]BW[[:space:]]video[[:space:]]feeds/Display[[:space:]]type[[:space:]]effects[[:space:]]in[[:space:]]military[[:space:]]operational[[:space:]]tasks[[:space:]]using[[:space:]]Unmanned[[:space:]]Vehicle[[:space:]]UV[[:space:]]video[[:space:]]images[[:space:]]Comparison[[:space:]]between[[:space:]]color[[:space:]]and[[:space:]]BW[[:space:]]video[[:space:]]feeds.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Diverse[[:space:]]Sequential[[:space:]]Subset[[:space:]]Selection[[:space:]]for[[:space:]]Supervised[[:space:]]Video[[:space:]]Summarization/Diverse[[:space:]]Sequential[[:space:]]Subset[[:space:]]Selection[[:space:]]for[[:space:]]Supervised[[:space:]]Video[[:space:]]Summarization-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Diverse[[:space:]]Sequential[[:space:]]Subset[[:space:]]Selection[[:space:]]for[[:space:]]Supervised[[:space:]]Video[[:space:]]Summarization/Diverse[[:space:]]Sequential[[:space:]]Subset[[:space:]]Selection[[:space:]]for[[:space:]]Supervised[[:space:]]Video[[:space:]]Summarization.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Domain[[:space:]]Generalization[[:space:]]via[[:space:]]Invariant[[:space:]]Feature[[:space:]]Representation/Domain[[:space:]]Generalization[[:space:]]via[[:space:]]Invariant[[:space:]]Feature[[:space:]]Representation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Domain[[:space:]]Generalization[[:space:]]via[[:space:]]Invariant[[:space:]]Feature[[:space:]]Representation/Domain[[:space:]]Generalization[[:space:]]via[[:space:]]Invariant[[:space:]]Feature[[:space:]]Representation.pdf filter=lfs diff=lfs merge=lfs -text
+Train/ExScal[[:space:]]Backbone[[:space:]]Network[[:space:]]Architecture/ExScal[[:space:]]Backbone[[:space:]]Network[[:space:]]Architecture-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/ExScal[[:space:]]Backbone[[:space:]]Network[[:space:]]Architecture/ExScal[[:space:]]Backbone[[:space:]]Network[[:space:]]Architecture.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Exploiting[[:space:]]Database[[:space:]]Similarity[[:space:]]Joins[[:space:]]for[[:space:]]Metric[[:space:]]Spaces/Exploiting[[:space:]]Database[[:space:]]Similarity[[:space:]]Joins[[:space:]]for[[:space:]]Metric[[:space:]]Spaces-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Exploiting[[:space:]]Database[[:space:]]Similarity[[:space:]]Joins[[:space:]]for[[:space:]]Metric[[:space:]]Spaces/Exploiting[[:space:]]Database[[:space:]]Similarity[[:space:]]Joins[[:space:]]for[[:space:]]Metric[[:space:]]Spaces.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Expression-Invariant[[:space:]]Face[[:space:]]Recognition[[:space:]]with[[:space:]]Expression[[:space:]]Classification/Expression-Invariant[[:space:]]Face[[:space:]]Recognition[[:space:]]with[[:space:]]Expression[[:space:]]Classification-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Expression-Invariant[[:space:]]Face[[:space:]]Recognition[[:space:]]with[[:space:]]Expression[[:space:]]Classification/Expression-Invariant[[:space:]]Face[[:space:]]Recognition[[:space:]]with[[:space:]]Expression[[:space:]]Classification.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Extracting[[:space:]]Logical[[:space:]]Structure[[:space:]]and[[:space:]]Identifying[[:space:]]Stragglers[[:space:]]in[[:space:]]Parallel[[:space:]]Execution[[:space:]]Traces/Extracting[[:space:]]Logical[[:space:]]Structure[[:space:]]and[[:space:]]Identifying[[:space:]]Stragglers[[:space:]]in[[:space:]]Parallel[[:space:]]Execution[[:space:]]Traces-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Extracting[[:space:]]Logical[[:space:]]Structure[[:space:]]and[[:space:]]Identifying[[:space:]]Stragglers[[:space:]]in[[:space:]]Parallel[[:space:]]Execution[[:space:]]Traces/Extracting[[:space:]]Logical[[:space:]]Structure[[:space:]]and[[:space:]]Identifying[[:space:]]Stragglers[[:space:]]in[[:space:]]Parallel[[:space:]]Execution[[:space:]]Traces.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Face[[:space:]]Spoofing[[:space:]]Detection[[:space:]]through/Face[[:space:]]Spoofing[[:space:]]Detection[[:space:]]through-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Face[[:space:]]Spoofing[[:space:]]Detection[[:space:]]through/Face[[:space:]]Spoofing[[:space:]]Detection[[:space:]]through.pdf filter=lfs diff=lfs merge=lfs -text
+Train/FaceTracer-[[:space:]]A[[:space:]]Search[[:space:]]Engine[[:space:]]for[[:space:]]Large[[:space:]]Collections[[:space:]]of[[:space:]]Images[[:space:]]with[[:space:]]Faces/FaceTracer-[[:space:]]A[[:space:]]Search[[:space:]]Engine[[:space:]]for[[:space:]]Large[[:space:]]Collections[[:space:]]of[[:space:]]Images[[:space:]]with[[:space:]]Faces-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/FaceTracer-[[:space:]]A[[:space:]]Search[[:space:]]Engine[[:space:]]for[[:space:]]Large[[:space:]]Collections[[:space:]]of[[:space:]]Images[[:space:]]with[[:space:]]Faces/FaceTracer-[[:space:]]A[[:space:]]Search[[:space:]]Engine[[:space:]]for[[:space:]]Large[[:space:]]Collections[[:space:]]of[[:space:]]Images[[:space:]]with[[:space:]]Faces.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Facebully[[:space:]]Towards[[:space:]]the[[:space:]]Identification[[:space:]]of[[:space:]]Cyberbullying[[:space:]]in[[:space:]]Facebook/Facebully[[:space:]]Towards[[:space:]]the[[:space:]]Identification[[:space:]]of[[:space:]]Cyberbullying[[:space:]]in[[:space:]]Facebook-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Facebully[[:space:]]Towards[[:space:]]the[[:space:]]Identification[[:space:]]of[[:space:]]Cyberbullying[[:space:]]in[[:space:]]Facebook/Facebully[[:space:]]Towards[[:space:]]the[[:space:]]Identification[[:space:]]of[[:space:]]Cyberbullying[[:space:]]in[[:space:]]Facebook.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Feature[[:space:]]Construction[[:space:]]for[[:space:]]Inverse[[:space:]]Reinforcement[[:space:]]Learning/Feature[[:space:]]Construction[[:space:]]for[[:space:]]Inverse[[:space:]]Reinforcement[[:space:]]Learning-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Feature[[:space:]]Construction[[:space:]]for[[:space:]]Inverse[[:space:]]Reinforcement[[:space:]]Learning/Feature[[:space:]]Construction[[:space:]]for[[:space:]]Inverse[[:space:]]Reinforcement[[:space:]]Learning.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Feature-based[[:space:]]Part[[:space:]]Retrieval[[:space:]]for[[:space:]]Interactive[[:space:]]3D[[:space:]]Reassembly/Feature-based[[:space:]]Part[[:space:]]Retrieval[[:space:]]for[[:space:]]Interactive[[:space:]]3D[[:space:]]Reassembly.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Finding[[:space:]]Things-[[:space:]]Image[[:space:]]Parsing[[:space:]]with[[:space:]]Regions[[:space:]]and[[:space:]]Per-Exemplar[[:space:]]Detectors/Finding[[:space:]]Things-[[:space:]]Image[[:space:]]Parsing[[:space:]]with[[:space:]]Regions[[:space:]]and[[:space:]]Per-Exemplar[[:space:]]Detectors-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Finding[[:space:]]Things-[[:space:]]Image[[:space:]]Parsing[[:space:]]with[[:space:]]Regions[[:space:]]and[[:space:]]Per-Exemplar[[:space:]]Detectors/Finding[[:space:]]Things-[[:space:]]Image[[:space:]]Parsing[[:space:]]with[[:space:]]Regions[[:space:]]and[[:space:]]Per-Exemplar[[:space:]]Detectors.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Fine-Grained[[:space:]]Visual[[:space:]]Comparisons[[:space:]]with[[:space:]]Local[[:space:]]Learning/Fine-Grained[[:space:]]Visual[[:space:]]Comparisons[[:space:]]with[[:space:]]Local[[:space:]]Learning-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Fine-Grained[[:space:]]Visual[[:space:]]Comparisons[[:space:]]with[[:space:]]Local[[:space:]]Learning/Fine-Grained[[:space:]]Visual[[:space:]]Comparisons[[:space:]]with[[:space:]]Local[[:space:]]Learning.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Free[[:space:]]your[[:space:]]Camera-[[:space:]]3D[[:space:]]Indoor[[:space:]]Scene[[:space:]]Understanding[[:space:]]from[[:space:]]Arbitrary[[:space:]]Camera[[:space:]]Motion/Free[[:space:]]your[[:space:]]Camera-[[:space:]]3D[[:space:]]Indoor[[:space:]]Scene[[:space:]]Understanding[[:space:]]from[[:space:]]Arbitrary[[:space:]]Camera[[:space:]]Motion-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Free[[:space:]]your[[:space:]]Camera-[[:space:]]3D[[:space:]]Indoor[[:space:]]Scene[[:space:]]Understanding[[:space:]]from[[:space:]]Arbitrary[[:space:]]Camera[[:space:]]Motion/Free[[:space:]]your[[:space:]]Camera-[[:space:]]3D[[:space:]]Indoor[[:space:]]Scene[[:space:]]Understanding[[:space:]]from[[:space:]]Arbitrary[[:space:]]Camera[[:space:]]Motion.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Graph-Based[[:space:]]Discriminative[[:space:]]Learning[[:space:]]for[[:space:]]Location[[:space:]]Recognition/Graph-Based[[:space:]]Discriminative[[:space:]]Learning[[:space:]]for[[:space:]]Location[[:space:]]Recognition-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Graph-Based[[:space:]]Discriminative[[:space:]]Learning[[:space:]]for[[:space:]]Location[[:space:]]Recognition/Graph-Based[[:space:]]Discriminative[[:space:]]Learning[[:space:]]for[[:space:]]Location[[:space:]]Recognition.pdf filter=lfs diff=lfs merge=lfs -text
+Train/GraphTrack[[:space:]]Fast[[:space:]]and[[:space:]]Globally[[:space:]]Optimal[[:space:]]Tracking[[:space:]]in[[:space:]]Videos/GraphTrack[[:space:]]Fast[[:space:]]and[[:space:]]Globally[[:space:]]Optimal[[:space:]]Tracking[[:space:]]in[[:space:]]Videos-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/GraphTrack[[:space:]]Fast[[:space:]]and[[:space:]]Globally[[:space:]]Optimal[[:space:]]Tracking[[:space:]]in[[:space:]]Videos/GraphTrack[[:space:]]Fast[[:space:]]and[[:space:]]Globally[[:space:]]Optimal[[:space:]]Tracking[[:space:]]in[[:space:]]Videos.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Hierarchical[[:space:]]Qualitative[[:space:]]Color[[:space:]]Palettes/Hierarchical[[:space:]]Qualitative[[:space:]]Color[[:space:]]Palettes-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Hierarchical[[:space:]]Qualitative[[:space:]]Color[[:space:]]Palettes/Hierarchical[[:space:]]Qualitative[[:space:]]Color[[:space:]]Palettes.pdf filter=lfs diff=lfs merge=lfs -text
+Train/History[[:space:]]Dependent[[:space:]]Domain[[:space:]]Adaptation/History[[:space:]]Dependent[[:space:]]Domain[[:space:]]Adaptation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/History[[:space:]]Dependent[[:space:]]Domain[[:space:]]Adaptation/History[[:space:]]Dependent[[:space:]]Domain[[:space:]]Adaptation.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Hyperspectral[[:space:]]Imaging[[:space:]]for[[:space:]]Ink[[:space:]]Mismatch[[:space:]]Detection/Hyperspectral[[:space:]]Imaging[[:space:]]for[[:space:]]Ink[[:space:]]Mismatch[[:space:]]Detection-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Hyperspectral[[:space:]]Imaging[[:space:]]for[[:space:]]Ink[[:space:]]Mismatch[[:space:]]Detection/Hyperspectral[[:space:]]Imaging[[:space:]]for[[:space:]]Ink[[:space:]]Mismatch[[:space:]]Detection.pdf filter=lfs diff=lfs merge=lfs -text
+Train/ICCV_2013_001/ICCV_2013_001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/ICCV_2013_001/ICCV_2013_001.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Learning[[:space:]]People[[:space:]]Detection[[:space:]]Models[[:space:]]from[[:space:]]Few[[:space:]]Training[[:space:]]Samples/Learning[[:space:]]People[[:space:]]Detection[[:space:]]Models[[:space:]]from[[:space:]]Few[[:space:]]Training[[:space:]]Samples-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Learning[[:space:]]People[[:space:]]Detection[[:space:]]Models[[:space:]]from[[:space:]]Few[[:space:]]Training[[:space:]]Samples/Learning[[:space:]]People[[:space:]]Detection[[:space:]]Models[[:space:]]from[[:space:]]Few[[:space:]]Training[[:space:]]Samples.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Leveraging[[:space:]]High[[:space:]]Performance[[:space:]]Computation[[:space:]]for[[:space:]]Statistical[[:space:]]Wind[[:space:]]Prediction/Leveraging[[:space:]]High[[:space:]]Performance[[:space:]]Computation[[:space:]]for[[:space:]]Statistical[[:space:]]Wind[[:space:]]Prediction-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Leveraging[[:space:]]High[[:space:]]Performance[[:space:]]Computation[[:space:]]for[[:space:]]Statistical[[:space:]]Wind[[:space:]]Prediction/Leveraging[[:space:]]High[[:space:]]Performance[[:space:]]Computation[[:space:]]for[[:space:]]Statistical[[:space:]]Wind[[:space:]]Prediction.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Low [[:space:]]Overhead [[:space:]]Concurrency[[:space:]] Control [[:space:]]for[[:space:]]Partitioned[[:space:]] Main [[:space:]]Memory[[:space:]] Databases/Low [[:space:]]Overhead [[:space:]]Concurrency[[:space:]] Control [[:space:]]for[[:space:]]Partitioned[[:space:]] Main [[:space:]]Memory[[:space:]] Databases-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Low [[:space:]]Overhead [[:space:]]Concurrency[[:space:]] Control [[:space:]]for[[:space:]]Partitioned[[:space:]] Main [[:space:]]Memory[[:space:]] Databases/Low [[:space:]]Overhead [[:space:]]Concurrency[[:space:]] Control [[:space:]]for[[:space:]]Partitioned[[:space:]] Main [[:space:]]Memory[[:space:]] Databases.pdf filter=lfs diff=lfs merge=lfs -text
+Train/MatchMiner-[[:space:]]Efficient[[:space:]]Spanning[[:space:]]Structure[[:space:]]Mining[[:space:]]in[[:space:]]Large[[:space:]]Image[[:space:]]Collections/MatchMiner-[[:space:]]Efficient[[:space:]]Spanning[[:space:]]Structure[[:space:]]Mining[[:space:]]in[[:space:]]Large[[:space:]]Image[[:space:]]Collections-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/MatchMiner-[[:space:]]Efficient[[:space:]]Spanning[[:space:]]Structure[[:space:]]Mining[[:space:]]in[[:space:]]Large[[:space:]]Image[[:space:]]Collections/MatchMiner-[[:space:]]Efficient[[:space:]]Spanning[[:space:]]Structure[[:space:]]Mining[[:space:]]in[[:space:]]Large[[:space:]]Image[[:space:]]Collections.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Memorability[[:space:]]of[[:space:]]natural[[:space:]]scene-[[:space:]]the[[:space:]]role[[:space:]]of[[:space:]]attention/Memorability[[:space:]]of[[:space:]]natural[[:space:]]scene-[[:space:]]the[[:space:]]role[[:space:]]of[[:space:]]attention-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Memorability[[:space:]]of[[:space:]]natural[[:space:]]scene-[[:space:]]the[[:space:]]role[[:space:]]of[[:space:]]attention/Memorability[[:space:]]of[[:space:]]natural[[:space:]]scene-[[:space:]]the[[:space:]]role[[:space:]]of[[:space:]]attention.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Modeling[[:space:]]skin[[:space:]]and[[:space:]]ageing[[:space:]]phenotypes[[:space:]]using[[:space:]]latent[[:space:]]variable[[:space:]]models[[:space:]]in[[:space:]]Infer.NET/Modeling[[:space:]]skin[[:space:]]and[[:space:]]ageing[[:space:]]phenotypes[[:space:]]using[[:space:]]latent[[:space:]]variable[[:space:]]models[[:space:]]in[[:space:]]Infer.NET-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Modeling[[:space:]]skin[[:space:]]and[[:space:]]ageing[[:space:]]phenotypes[[:space:]]using[[:space:]]latent[[:space:]]variable[[:space:]]models[[:space:]]in[[:space:]]Infer.NET/Modeling[[:space:]]skin[[:space:]]and[[:space:]]ageing[[:space:]]phenotypes[[:space:]]using[[:space:]]latent[[:space:]]variable[[:space:]]models[[:space:]]in[[:space:]]Infer.NET.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Mortal[[:space:]]Multi-Armed[[:space:]]Bandits/Mortal[[:space:]]Multi-Armed[[:space:]]Bandits-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Mortal[[:space:]]Multi-Armed[[:space:]]Bandits/Mortal[[:space:]]Multi-Armed[[:space:]]Bandits.pdf filter=lfs diff=lfs merge=lfs -text
+Train/NMF-KNN-[[:space:]]Image[[:space:]]Annotation[[:space:]]using[[:space:]]Weighted[[:space:]]Multi-view[[:space:]]Non-negative[[:space:]]Matrix[[:space:]]Factorization/NMF-KNN-[[:space:]]Image[[:space:]]Annotation[[:space:]]using[[:space:]]Weighted[[:space:]]Multi-view[[:space:]]Non-negative[[:space:]]Matrix[[:space:]]Factorization-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/NMF-KNN-[[:space:]]Image[[:space:]]Annotation[[:space:]]using[[:space:]]Weighted[[:space:]]Multi-view[[:space:]]Non-negative[[:space:]]Matrix[[:space:]]Factorization/NMF-KNN-[[:space:]]Image[[:space:]]Annotation[[:space:]]using[[:space:]]Weighted[[:space:]]Multi-view[[:space:]]Non-negative[[:space:]]Matrix[[:space:]]Factorization.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Play[[:space:]]Type[[:space:]]Recognition[[:space:]]in[[:space:]]Real-World[[:space:]]Football[[:space:]]Video/Play[[:space:]]Type[[:space:]]Recognition[[:space:]]in[[:space:]]Real-World[[:space:]]Football[[:space:]]Video-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/Play[[:space:]]Type[[:space:]]Recognition[[:space:]]in[[:space:]]Real-World[[:space:]]Football[[:space:]]Video/Play[[:space:]]Type[[:space:]]Recognition[[:space:]]in[[:space:]]Real-World[[:space:]]Football[[:space:]]Video.pdf filter=lfs diff=lfs merge=lfs -text
+Train/bmvc-2013-031/bmvc-2013-031-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/bmvc-2013-031/bmvc-2013-031.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2012-002/cvpr-2012-002-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2012-002/cvpr-2012-002.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2012-004/cvpr-2012-004-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2012-004/cvpr-2012-004.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-005/cvpr-2013-005-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-005/cvpr-2013-005.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-007/cvpr-2013-007-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-007/cvpr-2013-007.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-008/cvpr-2013-008-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-008/cvpr-2013-008.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-010/cvpr-2013-010-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-010/cvpr-2013-010.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-012/cvpr-2013-012-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-012/cvpr-2013-012.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-014/cvpr-2013-014-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-014/cvpr-2013-014.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-016/cvpr-2013-016-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-016/cvpr-2013-016.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-028/cvpr-2013-028-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-028/cvpr-2013-028.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-029/cvpr-2013-029-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2013-029/cvpr-2013-029.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2014-002/cvpr-2014-002-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2014-002/cvpr-2014-002.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2014-003/cvpr-2014-003-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/cvpr-2014-003/cvpr-2014-003.pdf filter=lfs diff=lfs merge=lfs -text
+Train/eccv-2012-001/eccv-2012-001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/eccv-2012-001/eccv-2012-001.pdf filter=lfs diff=lfs merge=lfs -text
+Train/iccv-2013-002/iccv-2013-002-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/iccv-2013-002/iccv-2013-002.pdf filter=lfs diff=lfs merge=lfs -text
+Train/ijcb-2011-001/ijcb-2011-001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
+Train/ijcb-2011-001/ijcb-2011-001.pdf filter=lfs diff=lfs merge=lfs -text
diff --git a/Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf b/Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b1261d8cc1ebe3eda3265f922b65a59b535d91c7
--- /dev/null
+++ b/Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff558fc0fa7c8d3dba331efa4da97ab02e63fa95c6c64149401a6ebacaa85425
+size 1308425
diff --git a/Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf b/Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..08f0f2b0040a4db9ce41a813a224a96a2ab8a099
--- /dev/null
+++ b/Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98f2b5053f2dd14ec2136beab9ba3becfe775584426c3801f8639c7801c48494
+size 1263646
diff --git a/Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.xml b/Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.xml
new file mode 100644
index 0000000000000000000000000000000000000000..989aa96b8284ee268b492fb83acb9aa09c208ec6
--- /dev/null
+++ b/Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.xml
@@ -0,0 +1,61 @@
+
+
+ Introduction
+      Scientific methodology in the database field can
+      provide a deep understanding of DBMS query
+      optimizers, enabling better-engineered designs.
+ Few DBMS-centric labs are available for scientific
+ investigation; prior labs have focused on networks
+ and smartphones.
+
+
+
+ AZDBLAB (AriZona DataBase Laboratory)
+ Has been in development for seven years.
+ Assists database researchers to conduct large-
+      Assists database researchers in conducting large-
+      scale empirical studies across multiple DBMSes.
+      Runs massive experiments with thousands or
+      millions of queries on multiple DBMSes.
+      Supports seven relational DBMSes, each providing
+      SQL and JDBC, as experiment subjects.
+      Has robustly collected data over 8,277 hours,
+      running about 2.4 million query executions.
+ execution runs.
+
+
+
+ Contributions
+      Novel research infrastructure dedicated to large-
+      scale empirical DBMS studies
+ Seamless data provenance support
+ Several decentralized monitoring schemes: phone
+ apps, web apps, and watcher
+ Reusable GUI
+ Extensibility through a variety of plugins: labshelf,
+ analysis, experiment subject, and scenario
+
+
+
+ AZDBLAB Architecture
+
+
+
+
+ Demonstration
+ Step 1: Choose a labshelf, add a user, and create a notebook,
+ a paper, and a study in the paper on the Observer GUI.
+ Step 2: Load an experiment specification into the notebook.
+ Step 3: Schedule an experiment run on a particular DBMS.
+ Step 4: Monitor the run status via Observer, a web app, and a
+ mobile app, and wait for the experiment to be done.
+ Step 5: Add the completed experiment run to the study and
+ conduct a timing protocol analysis for the study.
+ Step 6: Produce LaTeX/PDF documents containing the analysis
+ results.
+
+
+
+
diff --git a/Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition-Poster.pdf b/Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c89290940829c317ed179be7ff91cf97e26516ca
--- /dev/null
+++ b/Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c62adf9843902a57c6aa13f0b467cdb3ce3d999df1a488d0d3ca4d5c2e80922
+size 11979849
diff --git a/Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition.pdf b/Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b01d896b33343179e2a696fbf8ee9cec0a846789
--- /dev/null
+++ b/Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:57b5d3fcdc8006277b1033f6d55b379dee4253f49c1ebf133e740f13edd51fd3
+size 4671926
diff --git a/Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/info.txt b/Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d0aec10df57b97b243f088da5ee7219d374da2da
--- /dev/null
+++ b/Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/info.txt
@@ -0,0 +1,119 @@
+
+
+    SUMMARY
+    This work introduces a framework for recognizing human actions by
+    incorporating a new set of visual cues that represent the context of the
+    action:
+
+
+
+
+    CONTRIBUTIONS
+    • Weak foreground-background segmentation approach.
+    • Study of the global camera motion as a cue for action recognition.
+    • Incorporating appearance from static background.
+
+
+
+    METHODOLOGY
+    This work follows the conventional action recognition pipeline. Given a set of
+    labeled videos, a set of features is extracted from each video, represented
+    using visual descriptors, and combined into a single video descriptor used to
+    train a multi-class classifier for action recognition.
+
+    • Foreground-background separation: Assuming that a background
+    trajectory produces a small frame-to-frame displacement, we associate a
+    trajectory with the background if its overall displacement is at most three
+    pixels.
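+
+    As a minimal sketch of this separation rule (the trajectory arrays are
+    hypothetical; only the three-pixel threshold comes from the text):
+
+      import numpy as np
+
+      def split_trajectories(trajectories, thresh=3.0):
+          # Each trajectory is an (L, 2) array of (x, y) point positions.
+          # Small overall displacement (first to last point) -> background.
+          background, foreground = [], []
+          for traj in trajectories:
+              displacement = np.linalg.norm(traj[-1] - traj[0])
+              (background if displacement <= thresh else foreground).append(traj)
+          return background, foreground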
+
+    • Global camera motion: We argue and show that the relationship between
+    an estimated camera motion and the underlying action can be a useful cue for
+    discriminating certain action classes. As illustrated in the figure below, there
+    is a correlation between how the camera moves and what the actor does.
+
+
+
+
+    • Background-context appearance: Beyond local motion and appearance
+    properties, the surroundings in which an action is performed are a critical
+    component of recognizing actions. As the figure below illustrates, the background
+    appearance plays an important role in discriminating the action Drumming, in
+    the sense that the drummer needs a drum set to perform the action.
+
+    • Implementation details: We follow two different Bag-of-Features
+    implementations, as described in the Table below.
+
+
+
+
+    EXPERIMENTAL RESULTS
+    • Datasets: We use state-of-the-art human action datasets and their
+    corresponding protocols.
+    • Impact of contextual features: We note that using Fisher vectors
+    consistently boosts the performance of our contextual features. Also, our
+    experiments provide evidence that action recognition performance can be
+    improved when static background appearance and global camera motion are
+    incorporated with foreground features.
+    • Comparison with the state-of-the-art: We set our method side by side
+    with recent methods that address the same application using similar
+    representations, i.e., methods that use dense trajectory points to
+    represent video sequences [2,3,4], in the Table below.
+
+
+
+
+
+
+    DISCUSSIONS
+    • Contextual features: When combined with foreground trajectories, we show
+    that these features can improve state-of-the-art recognition on challenging
+    action datasets.
+    • Project page: http://www.cabaf.net/actioncue
+    References:
+    [1] Fabian Caba Heilbron, Ali Thabet, Juan Carlos Niebles, Bernard Ghanem. Camera Motion
+    and Surrounding Scene Appearance as Context for Action Recognition. ACCV, Singapore,
+    2014.
+    [2] Wang, H., Schmid, C. Action recognition with improved trajectories. ICCV, Sydney, 2013.
+    [3] Jiang, Y.G., Dai, Q., Xue, X., Liu, W., Ngo, C.W. Trajectory-based modeling of human
+    actions with motion reference points. ECCV, 2012.
+    [4] Jain, M., Jégou, H., Bouthemy, P. Better exploiting motion for better action recognition.
+    CVPR, 2013.
+
+
+
diff --git a/Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers-Poster.pdf b/Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..92693539084641a571064cfd90bea6cc3c768845
--- /dev/null
+++ b/Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d84e106c6fd3f73f988c3065955edb42c90e4b50d6c3985a36fad46583400e61
+size 585667
diff --git a/Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers.pdf b/Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..26ffbd203f294eabae63d76a83357c670b0f572f
--- /dev/null
+++ b/Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be99f1fd0383c4f8e09a6164745f9040104d56e367f82127572c54b473bb8239
+size 785859
diff --git a/Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/info.txt b/Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..63bcfde9e7c9f6f401ba3dd2e39fae9245aa2bd9
--- /dev/null
+++ b/Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/info.txt
@@ -0,0 +1,73 @@
+
+
+ Introduction
+    Hazard perception (HP) is amenable to training. Yet there is no
+    consensus on an optimal training program or on accepted
+    measures for assessing effectiveness. We aimed to evaluate a
+    simulator-based hazard perception test (SBHPT) for
+    assessing improvements in the HP skills of trained young-novice
+    drivers, relative to a control group and relative to a group of
+    experienced drivers who served as a gold standard.
+
+
+
+ Method
+    Participants. Thirty-nine young-novice drivers, 17-18 years old
+    with less than three months of driving experience, underwent
+    one of four HP training conditions (AAHPT active, hybrid, RAPT,
+    and control) prior to the testing phase. Six experienced drivers
+    (mean age 26, with more than 8 years of driving experience)
+    completed the test phase.
+ Driving Scenarios. Use of a variety of traffic environments is
+ important as the driving environment dictates the type and
+ frequency of hazardous situations. The simulated drive
+ consisted of 8 urban and 6 residential scenarios merged into a
+ single 18 km drive. Two pairs of urban and residential scenarios
+ are detailed in Table 1. Sample snapshots are shown in Figures
+ 1 and 2.
+
+
+    Figure 1. Sample snapshots of events in urban scenarios. Left: a curve in the road
+    (U1-U2). Right: a bus parked at the station and a pedestrian (marked by an ellipse)
+    crossing the road to catch it (U4).
+
+
+
+ Results and analysis
+ Driver velocity was sampled every 2 m. The average velocity among
+ individuals of the same group (AAHPT active, hybrid, RAPT,
+ control, experienced) was calculated for each point, generating
+ 600 sampling points per group per scenario. Using a cubic
+ smoothing spline, a smooth curve was fitted to each set of
+ observations for each group (solid line in Figure 3). A statistical
+ test was then conducted to examine whether the five separate
+ curves, one fitted per group, could be replaced by a single
+ curve (i.e., that all groups chose their speed in the same way).
+ For all 8 scenarios, the group curves could not be combined into
+ one. Since the groups differed, additional descriptive
+ examinations were made.
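+
+ A minimal sketch of this curve-fitting step (the distances, placeholder velocities, and smoothing factor below are illustrative assumptions, not study data):
+
+ # Sketch: fit a cubic smoothing spline to per-group mean velocity profiles.
+ import numpy as np
+ from scipy.interpolate import UnivariateSpline
+
+ dist = np.arange(0, 1200, 2.0)                    # sampling points, every 2 m
+ groups = ["active", "hybrid", "RAPT", "control", "experienced"]
+ mean_vel = {g: 50 + np.random.randn(dist.size) for g in groups}  # placeholder data
+
+ curves = {}
+ for g in groups:
+     # k=3 gives a cubic smoothing spline; s controls the smoothness
+     curves[g] = UnivariateSpline(dist, mean_vel[g], k=3, s=float(len(dist)))
+
+ v_active = curves["active"](dist)   # evaluate one fitted group curve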
+
+ Figure 3. The distribution of longitudinal velocity sampling points per each group, per
+ points along scenarios U1-U4. Solid lines are the fitted longitudinal velocity curves.
+
+
+
+ Conclusions
+ Group-related metrics can discriminate among driver
+ groups.
+ Patterns of driving behaviour can be evaluated via
+ driving speed.
+ Comparisons to the control group and to experienced drivers
+ were complementary: where trainees resembled the
+ control group more, they tended to resemble the
+ experienced group less.
+ Events that require a complete stop are less
+ diagnostic than events that require slowing down but
+ not a complete halt.
+
+
+
diff --git a/Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models-Poster.pdf b/Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..97af8489c1139e0d5cd831218aff05fd437f8b72
--- /dev/null
+++ b/Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fefe872f3a6beb4b716f1b8d6404184ab502c03f9c3247c8c69d8b2546cbc67d
+size 953033
diff --git a/Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models.pdf b/Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f9b14d4ef8aa0e1156264ba151da7f3a2cbfd030
--- /dev/null
+++ b/Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c496c6ea72105912ca4d93332e8dc2f285a57f82b9180f4eeec0217428addfdc
+size 576726
diff --git a/Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/info.xml b/Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/info.xml
new file mode 100644
index 0000000000000000000000000000000000000000..2e37d33c870603cf792a8bf7ebe932eb433a4037
--- /dev/null
+++ b/Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/info.xml
@@ -0,0 +1,46 @@
+
+
+ Problem Definition: Given prior knowledge from multiple domains, improve topic modeling in a new domain.
+ Knowledge is in the form of s-sets, each containing words sharing the same semantic meaning, e.g., {Light, Heavy, Weight}.
+ A novel technique to transfer knowledge to improve topic models.
+ Existing knowledge-based models:
+ DF-LDA [Andrzejewski et al., 2009], Seeded Model (e.g., [Mukherjee and Liu, 2012]).
+ Two shortcomings: 1) incapable of handling multiple senses, and 2) adverse effects of knowledge.
+
+
+
+ MDK-LDA
+ Generative Process: For each topic $t \in \{1,...,T\}$ \\ i. Draw a per-topic distribution over s-sets, $\varphi_t \sim \text{Dir}(\beta)$ \\ ii. For each s-set $s \in \{1,...,S\}$ \\ a) Draw a per-topic, per-s-set distribution over words, $\eta_{t,s} \sim \text{Dir}(\gamma)$ \\ For each document $m \in \{1,...,M\}$ \\ i. Draw $\theta_m \sim \text{Dir}(\alpha)$ \\ ii. For each word $w_{m,n}$, where $n \in \{1,...,N_m\}$ \\ a) Draw a topic $z_{m,n} \sim \text{Mult}(\theta_m)$ \\ b) Draw an s-set $s_{m,n} \sim \text{Mult}(\varphi_{z_{m,n}})$ \\ c) Emit $w_{m,n} \sim \text{Mult}(\eta_{z_{m,n},s_{m,n}})$
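+
+ A minimal numpy sketch of the generative process above (the dimensions and hyperparameter values are illustrative assumptions):
+
+ import numpy as np
+
+ T, S, V, M, N = 5, 10, 50, 3, 20     # topics, s-sets, vocab, docs, words/doc
+ alpha, beta, gamma = 1.0, 0.1, 0.01
+ rng = np.random.default_rng(0)
+
+ phi = rng.dirichlet(np.full(S, beta), size=T)        # per-topic dist. over s-sets
+ eta = rng.dirichlet(np.full(V, gamma), size=(T, S))  # per-topic, per-s-set dist. over words
+
+ docs = []
+ for m in range(M):
+     theta = rng.dirichlet(np.full(T, alpha))         # per-document topic proportions
+     words = []
+     for n in range(N):
+         z = rng.choice(T, p=theta)                   # topic
+         s = rng.choice(S, p=phi[z])                  # s-set
+         w = rng.choice(V, p=eta[z, s])               # word
+         words.append(w)
+     docs.append(words)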
+ Plate Notation
+
+ Collapsed Gibbs Sampling
+ Blocked Gibbs Sampler: sample topic $z$ and s-set $s$ for word $w$ \begin{equation} \begin{split} P(z_i=t,s_i=s \mid \mathbf{z}^{-i},\mathbf{s}^{-i},\alpha,\beta,\gamma) \propto\; & \frac{n_{m,t}^{-i}+\alpha}{\sum_{t'=1}^{T}(n_{m,t'}^{-i}+\alpha)} \times \frac{n_{t,s}^{-i}+\beta}{\sum_{s'=1}^{S}(n_{t,s'}^{-i}+\beta)} \\ & \times \frac{n_{t,s,w_i}^{-i}+\gamma_s}{\sum_{v'=1}^{V}(n_{t,s,v'}^{-i}+\gamma_s)} \end{split} \end{equation}
+
+
+
+ Generalized Pólya Urn Model
+ Generalized Pólya urn model [Mahmoud, 2008]
+ When a ball is drawn, that ball is put back along with a certain number of balls of similar colors.
+ Promoting s-set as a whole
+ If a ball of color $w$ is drawn, we put back $A_{s,w',w}$ balls of each color $w' \in \{1,...,V\}$, where $w$ and $w'$ share s-set $s$. \begin{equation} A_{s,w',w}=\left\{ \begin{array}{ll} 1 & w=w'\\ \sigma & w \in s,\; w' \in s,\; w \neq w'\\ 0 & \text{otherwise} \end{array} \right. \end{equation}
+ Collapsed Gibbs Sampling \begin{equation} \begin{split} P(z_i=t,s_i=s \mid \mathbf{z}^{-i},\mathbf{s}^{-i},\alpha,\beta,\gamma,A) \propto\; & \frac{n_{m,t}^{-i}+\alpha}{\sum_{t'=1}^{T}(n_{m,t'}^{-i}+\alpha)} \times \frac{\sum_{w'=1}^{V}\sum_{v'=1}^{V} A_{s,v',w'}\, n_{t,s,v'}^{-i}+\beta}{\sum_{s'=1}^{S}(n_{t,s'}^{-i}+\beta)} \\ & \times \frac{n_{t,s,w_i}^{-i}+\gamma_s}{\sum_{v'=1}^{V}(n_{t,s,v'}^{-i}+\gamma_s)} \end{split} \end{equation}
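+
+ A small sketch of building the promotion matrix A_s for one s-set and applying it to word counts (the vocabulary size, s-set membership, sigma value, and counts are illustrative assumptions):
+
+ import numpy as np
+
+ V = 6
+ s_set = {1, 2}            # word ids sharing s-set s, e.g. {Light, Heavy}
+ sigma = 0.3               # promotion weight for same-s-set words
+
+ A = np.eye(V)             # A[w', w] = 1 when w == w'
+ for w in s_set:
+     for wp in s_set:
+         if w != wp:
+             A[wp, w] = sigma      # promote the other words of the s-set
+
+ counts = np.array([4, 2, 1, 0, 0, 3], dtype=float)  # n_{t,s,v} for one (t, s)
+ promoted = A @ counts     # promoted counts used in the Gibbs update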
+
+
+
+ Experiments
+ Datasets: reviews from six domains from Amazon.com.
+ Baseline Models
+ LDA [Blei et al., 2003], LDA\_GPU [Mimno et al., 2011], and DF-LDA [Andrzejewski et al., 2009].
+ Topic Discovery Results
+ Evaluation measure: Precision @ n (p @ n).
+ Quantitative results in Table 1, Qualitative results in Table
+ Objective Evaluation
+ Topic Coherence [Mimno et al., 2011].
+
+
+
+
+
diff --git a/Test/nips-2011-001/info.xml b/Test/nips-2011-001/info.xml
new file mode 100644
index 0000000000000000000000000000000000000000..9439abd2a0d36a282bf70fda60f84a4f4ae61720
--- /dev/null
+++ b/Test/nips-2011-001/info.xml
@@ -0,0 +1,51 @@
+
+
+ Introduction
+ This paper proposes a parsing algorithm for indoor scene understanding which includes four aspects: computing the 3D scene layout, detecting 3D objects (e.g., furniture), detecting 2D faces (windows, doors, etc.), and segmenting the background. The algorithm parses an image into a hierarchical structure, namely a parse tree. With the parse tree, we reconstruct the original image by the appearance of line segments, and we further recover the 3D scene by the geometry of the 3D background and foreground objects.
+
+
+
+
+
+ Results
+
+
+
+
+
+ Stochastic Scene Grammar
+ The grammar represents compositional structures of visual entities, which includes three types of production rules and two types of contextual relations:
+ Production rules: (i) AND rules represent the decomposition of an entity into sub-parts; (ii) SET rules represent an ensemble of visual entities; (iii) OR rules represent the switching among sub-types of an entity.
+ Contextual relations: (a) Cooperative “+” relations represent positive links between binding entities, such as hinged faces of an object or aligned boxes; (b) Competitive “-” relations represent negative links between competing entities, such as mutually exclusive boxes.
+
+
+
+
+ Bayesian Formulation
+ We define a posterior distribution for a solution (a parse tree) pt conditioned on an image I. This distribution is specified in terms of the statistics defined over the derivation of production rules. \begin{equation} P(pt|I)\propto P(pt)P(I|pt)=P(S)\prod_{v \in V^n}P(Ch_v|v)\prod_{v \in V^T}P(I|v) \end{equation} The probability is defined as a Gibbs distribution, and the energy term decomposes into three potentials: \begin{equation} E(pt|I)=\sum_{v \in V^{OR}}E^{OR}(Ar(Ch_v))+\sum_{v \in V^{AND}}E^{AND}(A_G(Ch_v))+\sum_{\Lambda_v \in \Lambda_I,\, v \in V^T}E^{T}(I(\Lambda_v)) \end{equation}
+
+
+
+ Inference by Hierarchical Cluster Sampling We design an efficient MCMC inference algorithm, namely hierarchical cluster sampling, to search the large solution space of scene configurations. The algorithm has two stages:
+ Clustering: It forms all possible higher-level structures (clusters) from lower-level entities by production rules and contextual relations. \begin{equation} P_+(Cl|I)=\prod_{v \in Cl^{OR}}P^{OR}(Ar(v))\prod_{u,v \in Cl^{AND}}P_+^{AND}(A_G(u), A_G(v))\prod_{v \in Cl^T}P^T(I(A_v)) \end{equation}
+ Sampling: It jumps between alternative structures (clusters) in each layer of the hierarchy to find the most probable configuration (represented by a parse tree). \begin{equation} Q(pt^*|pt,I)=P_+(Cl^*|I)\prod_{u \in Cl^{AND}, v \in pt^{AND}} P_-^{AND}(A_G(u)|A_G(v)). \end{equation}
+
+
+
+ Experiment and Conclusion
+ Segmentation precision compared with Hoiem et al. 2007 [1], Hedau et al. 2009 [2], Wang et al. 2010 [3] and Lee et al. 2010 [4] in the UIUC dataset [2].
+
+ Compared with other algorithms, our contributions are
+ A Stochastic Scene Grammar (SSG) to represent the hierarchical structure of visual entities;
+ A Hierarchical Cluster Sampling algorithm to perform fast inference in the SSG model;
+ Richer structures obtained by exploring richer contextual relations.
+ Website: http://www.stat.ucla.edu/~ybzhao/research/sceneparsing
+
+
+
diff --git a/Test/nips-2011-001/nips-2011-001-Poster.pdf b/Test/nips-2011-001/nips-2011-001-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8bfb78255d6742986d527b2b33fc865354ac1a92
--- /dev/null
+++ b/Test/nips-2011-001/nips-2011-001-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34d35322e82fbfb11d5bc5297b85c6e5708585284b658deaa3c2404681021b29
+size 17399398
diff --git a/Test/nips-2011-001/nips-2011-001.pdf b/Test/nips-2011-001/nips-2011-001.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..61407273257120070b436d21cd4e85945e59f172
--- /dev/null
+++ b/Test/nips-2011-001/nips-2011-001.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78bc15dafd5d3deb2296cc1a8c390598e5f7068607bee14bc78ad05074977ba1
+size 5414980
diff --git a/Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical-Poster.pdf b/Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a3630d958fcef6190af1c4c306ee568b7313988a
--- /dev/null
+++ b/Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4496acbe6e598c101de179176848f50072dd52e1dae7aacb20b7eaa417bd39f
+size 907983
diff --git a/Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical.pdf b/Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..515aa1798840cabb6797b97a76b04411e9815bf7
--- /dev/null
+++ b/Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ab73d1d193fdbc5da9a37623e94fbf91680f3dfd13f5bd9897c63ab753b4529
+size 504684
diff --git a/Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/info.txt b/Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..defadae7a46c61afa8d1b768c1e0312715452f65
--- /dev/null
+++ b/Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/info.txt
@@ -0,0 +1,92 @@
+
+
+ Introduction
+ Problem statement:
+ 3D proximal femur modeling from low-dose bi-planar X-ray images, of important diagnostic interest in Total Hip Replacement.
+ Contributions:
+ Non-uniform hierarchical decomposition of the shape prior, of increasing clinically relevant precision.
+ Graphical-model representation of the femur involving third-order and fourth-order priors:
+ Similarity and mirror-symmetry invariant.
+ Provides means of measuring regional and boundary supports in the bi-planar views.
+ Can be learned from a small number of training examples.
+ A dual-decomposition optimization approach for efficient inference of the 3D femur configuration from bi-planar views.
+
+
+
+
+ Hierarchical Multi-Resolution Probabilistic Modeling
+ Mesh sub-sampling formulated as clustering, achieved through curvature-driven unsupervised clustering acting on the geodesic distances between vertices:
+ d(v, v̂): the geodesic distance between v and v̂ on M0,
+ curv(v̂): the curvature at v̂ on M0.
+ Level of detail selection:
+ Vertices are organized in a tree structure.
+ Starting from the coarsest resolution, regions are selected to be refined iteratively until reaching the required accuracy for every part.
+ Connectivity computation:
+ Edges E_MR based on Delaunay triangulation of V_MR associated to the geodesic distance.
+ Faces F_MR computed by searching for minimal cycles in the edge list.
+
+ Probabilistic shape modeling
+ Pose-invariant prior:
+ Based on the relative Euclidean distance d̂_ij = d_ij / Σ_{(i,j)∈P_c} d_ij for each pair of points (i, j) ∈ P_c in a triplet c of vertices.
+ The distribution ψ_c(d̂_c) of d̂_c is learned from the training data, using Gaussian Mixture Models (GMMs); see the sketch below.
+ Smoothness potential function:
+ Encoding constraints on the change of the normal directions, for each quadruplet q of vertices corresponding to a pair of adjacent facets.
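+
+ A sketch of learning the pose-invariant prior ψ_c with GMMs, assuming scikit-learn; the triplet distances and component count below are placeholder assumptions:
+
+ import numpy as np
+ from sklearn.mixture import GaussianMixture
+
+ # d_hat: relative Euclidean distances for one triplet c across training
+ # meshes, shape (n_train, n_pairs); random placeholder data here.
+ d_hat = np.abs(np.random.randn(40, 3))
+ d_hat /= d_hat.sum(axis=1, keepdims=True)            # normalize per triplet
+
+ psi_c = GaussianMixture(n_components=2).fit(d_hat)   # the learned prior psi_c
+ log_prior = psi_c.score_samples(d_hat[:1])           # log psi_c(d_hat) for a sample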
+
+
+ Probabilistic 3D Surface Estimation Framework
+ Posterior probability maximization:
+ Higher-order MRF formulation, with:
+ H^R_f(u_f): regional-term potentials,
+ H^B_q(u_q): boundary-term potentials,
+ H^P_c(u_c) and H^P_q(u_q): model prior potentials.
+ MRF inference through dual-decomposition:
+ Decompose the original graph into a series of factor trees.
+ Solve factor trees using max-product belief propagation.
+ Maximize lower bound using a projected subgradient method.
+
+
+
+ I = (I_k)_{k∈K} (K = {1, ..., K}, K = 2 for the case of bi-planar views): K observed images captured from different viewpoints, with the corresponding projection matrices Π = (Π_k)_{k∈K}.
+ Regional term:
+ u_f: 3D coordinates of the vertices of a facet f,
+ δ_f(u_f, Π_k): front-facing facet indicator function,
+ Ω_f(u_f, Π_k): 2D region corresponding to the projection of f,
+ p_fg and p_bg: distributions of the intensity for the regions of the femur and the background.
+ Boundary term:
+ Γ(u_q, Π_k): projection of the edge shared by the two adjacent facets,
+ n(x, y): outward-pointing unit normal of Γ(u_q, Π_k),
+ ∇I_k(x, y) = (∂I_k(x,y)/∂x, ∂I_k(x,y)/∂y): gradient of the intensity at (x, y).
+
+
+
+ Experimental Validation
+ Validation using both dry femurs and real clinical data.
+ Comparison with the gold-standard CT method, through point-to-surface distance and the DICE coefficient.
+
+ Figure: (a) Four 3D surface reconstruction results with point-to-surface errors on the femoral head. (b) Boxplots of the DICE coefficient and of the mean and STD of the point-to-surface errors (mm). (c) and (d) Projection results on in vivo data.
+
+
+
+ Future Work
+ Introducing a joint model that couples femur with the hipbone socket.
+ Combining anatomical landmarks with the existing formulation.
+ Application to other clinical settings.
+
+
+
diff --git a/Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf b/Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b1261d8cc1ebe3eda3265f922b65a59b535d91c7
--- /dev/null
+++ b/Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff558fc0fa7c8d3dba331efa4da97ab02e63fa95c6c64149401a6ebacaa85425
+size 1308425
diff --git a/Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf b/Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..08f0f2b0040a4db9ce41a813a224a96a2ab8a099
--- /dev/null
+++ b/Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98f2b5053f2dd14ec2136beab9ba3becfe775584426c3801f8639c7801c48494
+size 1263646
diff --git a/Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.txt b/Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..989aa96b8284ee268b492fb83acb9aa09c208ec6
--- /dev/null
+++ b/Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.txt
@@ -0,0 +1,61 @@
+
+
+ Introduction
+ Scientific methodology in the database field can
+ provide a deep understanding of DBMS query
+ optimizers, for better engineered designs.
+ Few DBMS-centric labs are available for scientific
+ investigation; prior labs have focused on networks
+ and smartphones.
+
+
+
+ AZDBLAB (AriZona DataBase Laboratory)
+ Has been in development for seven years.
+ Assists database researchers in conducting large-
+ scale empirical studies across multiple DBMSes.
+ Runs massive experiments with thousands or
+ millions of queries on multiple DBMSes.
+ Supports seven relational DBMSes (with SQL and
+ JDBC) as experiment subjects.
+ Has robustly collected data over 8,277 hours,
+ running about 2.4 million query executions.
+ Conducts automated analyses on multiple query
+ execution runs.
+
+
+
+ Contributions
+ Novel research infrastructure, dedicated for large-
+ scale empirical DBMS studies
+ Seamless data provenance support
+ Several decentralized monitoring schemes: phone
+ apps, web apps, and watcher
+ Reusable GUI
+ Extensibility through a variety of plugins: labshelf,
+ analysis, experiment subject, and scenario
+
+
+
+ AZDBLAB Architecture
+
+
+
+
+ Demonstration
+ Step 1: Choose a labshelf, add a user, and create a notebook,
+ a paper, and a study in the paper on the Observer GUI.
+ Step 2: Load an experiment specification into the notebook.
+ Step 3: Schedule an experiment run on a particular DBMS.
+ Step 4: Monitor the run status via Observer, a web app, and a
+ mobile app, and wait for the experiment to be done.
+ Step 5: Add the completed experiment run to the study and
+ conduct a timing protocol analysis for the study.
+ Step 6: Produce LaTeX/PDF documents containing the analysis
+ results.
+
+
+
+
diff --git a/Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations-Poster.pdf b/Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8c6a9f0e69d50e89f294d922afb64fd67caf59e7
--- /dev/null
+++ b/Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c8aea0bb486ae0dba5560531eab5ef84a8ece6bf05506eb38b54883fa791488
+size 1947514
diff --git a/Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations.pdf b/Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..471d682534938cb9e6b52a7a89846daa39310df5
--- /dev/null
+++ b/Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a43cfa42c896945a9a5412a06fe36f7bfe2ec660d6b6fe5815573f9bcf268c75
+size 1332880
diff --git a/Train/Active Boundary Annotation using Random MAP Perturbations/info.txt b/Train/Active Boundary Annotation using Random MAP Perturbations/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..71509e3d29f4300ddacb1c3f962cc98d2a6b96cb
--- /dev/null
+++ b/Train/Active Boundary Annotation using Random MAP Perturbations/info.txt
@@ -0,0 +1,120 @@
+
+
+ 1. Overview
+
+ Goal: Obtain high-quality image annotation at low cost (annotation effort).
+
+ Approach: Bayesian active learning.
+ Minimize uncertainty in the boundary of the MAP prediction.
+ Trade off uncertainty reduction against the cost of annotation.
+ Contributions:
+ Entropy bounds that measure the expected perturbation that changes the MAP prediction.
+ A coarse-to-fine approach for pixel-accurate annotation that saves 33% in cost.
+
+
+
+ 2. Active learning in structured spaces
+
+ Traditional active learning:
+ The active learner picks which data points to label; data is typically assumed i.i.d.
+ Bayesian active learning in structured spaces:
+ Deals with correlated labels, e.g. the labels of a single image (non-i.i.d. setting).
+ Basic idea: construct a probability distribution over the label space and reduce its
+ uncertainty with minimal annotation cost (clicks).
+
+
+
+ 3. Active annotation framework
+
+ Approach:
+ Let y be the set of labels for image x over n pixels.
+ Let A_t be the set of annotations obtained up to time t.
+ Let p(y) be the joint probability of the labels given the data x and the annotations up to time t.
+ Bayesian experimental design. Given:
+ a function that measures the uncertainty of the labels given the annotation, U(A),
+ a function that measures the cost of annotation, C(a),
+ pick the annotation task that provides the highest uncertainty reduction per unit cost.
+ Uncertainty, U(A) = H(p), is defined as the entropy.
+ Computing the entropy is exponential in the size of the patch; however, MAP estimation
+ is tractable for many useful cases (e.g., via Graph-cuts, MPLP).
+
+
+
+ 4. Markov Random Fields (MRFs) for image labeling
+
+ Popular for image segmentation (e.g. the Grabcut model, Blake et al., 2004).
+ Let an annotation of an n-pixel image be described as an n-tuple.
+ The overall score of the pixel labels is given by the MRF energy:
+ The MAP estimate can be obtained via Graph cuts (Boykov et al., 2001).
+
+
+
+ 5. MAP perturbations
+
+ The perturb-max model (Papandreou and Yuille, 2011; Tarlow, 2012; Gane, 2014):
+ random perturbation functions are added to the model scores, and MAP inference is applied to the perturbed model.
+
+
+
+ MAP perturbations upper bound the partition function (Hazan & Jaakkola, 2012).
+ Let {γ_i(y_i)} be i.i.d. Gumbel random variables with zero mean.
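+
+ A toy sketch of the perturb-max construction on a three-configuration label space (the scores and sample count are illustrative): adding zero-mean Gumbel noise to the scores and taking the argmax samples from the Gibbs distribution.
+
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ scores = np.array([2.0, 1.0, 0.5])       # theta(y) for 3 configurations
+
+ def perturb_max():
+     # standard Gumbel has mean equal to the Euler-Mascheroni constant,
+     # so subtract it to get zero-mean perturbations
+     g = rng.gumbel(size=scores.size) - np.euler_gamma
+     return np.argmax(scores + g)          # MAP of the perturbed model
+
+ samples = [perturb_max() for _ in range(10000)]
+ freq = np.bincount(samples, minlength=3) / len(samples)
+ gibbs = np.exp(scores) / np.exp(scores).sum()
+ # freq approximately equals gibbs (the Gumbel-max trick)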
+
+
+
+ 6. Measuring uncertainty in the boundary of MAP prediction
+ For perturb-max models with Gumbel random variables, the entropy is upper-bounded by the expected perturbation of the MAP prediction.
+ Proof idea:
+ Conjugate duality: use the MAP-perturbation upper bounds.
+ The optimal θ attains the perturb-max model p(y).
+ The linear term cancels out.
+ Uncertainty measure properties:
+ Nonnegative (upper bounds the entropy).
+ Attains its minimal value for the zero-one distribution (zero-mean perturbations).
+ Attains its maximal value for the uniform distribution (by symmetry).
+
+
+
+ 7. Active boundary annotation
+
+
+ Coarse-to-fine boundary refinement:
+ We start from a coarse boundary; regions are repeatedly picked by the algorithm, and refinement is done by the user.
+ Cost of refinement = number of points in the polygons (boundary complexity).
+ We do not know the ground truth, so we compute expectations of cost and uncertainty.
+
+
+
+ 8. Experimental evaluation
+
+ An example coarse-to-fine refinement (sampled regions for various strategies).
+
+ Active annotation results.
+
+
+
+
+ 9. Conclusions and future work
+
+ We proposed a new uncertainty measure:
+ Avoids expensive MCMC sampling by randomly perturbing the model and using a MAP solver as a black-box tool.
+ Applications to parameter estimation and active learning in a number of areas such as matchings, parse trees, and other combinatorial structures.
+ Active learning in structured spaces:
+ The sampling-based approach allows us to consider non-decomposable cost functions. For the boundary annotation task we used boundary complexity, which is not possible to compute with marginal estimates.
+ This led to 33% savings in annotation time for pixel-accurate boundary annotations.
+ Challenges:
+ MAP-perturbation-based entropy bounds for higher-dimensional perturbations.
+ Going beyond super-modular functions in the context of active learning.
+
+
+
diff --git a/Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation-Poster.pdf b/Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0b849e357b67e4cb4ba0d9b5dcfecc2752ec12f3
--- /dev/null
+++ b/Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90e98b579b5cdc94f48aed56c8c93350b7856d1434a305427a35b672713f2e96
+size 9210484
diff --git a/Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation.pdf b/Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2cfe05b467041a5c8b4b385ff24c908c543f9dfe
--- /dev/null
+++ b/Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dfbb070e74637f0bf0f4e8164b2f302374cf7ef4eb585929d08e9ebc636ce89a
+size 3562498
diff --git a/Train/Adaptive Structure from Motion with a contrario model estimation/info.txt b/Train/Adaptive Structure from Motion with a contrario model estimation/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..67b95e30b3343f11bef5025684584dde46850720
--- /dev/null
+++ b/Train/Adaptive Structure from Motion with a contrario model estimation/info.txt
@@ -0,0 +1,72 @@
+
+
+ STRUCTURE FROM MOTION: SFM
+ Structure from Motion depends on robust estimation; RANSAC is used to exclude outliers.
+
+
+
+
+ ROBUST ESTIMATION THRESHOLD DILEMMA
+ RANSAC requires the choice of a threshold T, which must be balanced:
+ Too small: too few inliers, leading to model imprecision,
+ Too large: models are contaminated by outliers (false data).
+
+ Goal: making T adaptive to data and noise.
+ Find a model that best fits the data with a confidence threshold T that adapts automatically to noise, by using AC-RANSAC.
+
+
+
+ A CONTRARIO STRUCTURE FROM MOTION
+ AC-RANSAC: a threshold-less rigid model estimation framework.
+ The method answers the question: “Could the rigid set of data have occurred by chance?”
+ The threshold T adapts for inlier/outlier discrimination.
+ It provides a confidence score for each model.
+ A contrario criterion [3]:
+ Use a background model H0: uniform distribution.
+ Strong deviation from H0 is deemed meaningful.
+ AC-RANSAC relies on the following definitions:
+ The Number of False Alarms (NFA) measures model fitness to the data.
+ Given a model M, assuming k inliers among n correspondences, T_k denotes the k-th smallest residual.
+ Meaningfulness (expectation of false alarms): NFA(M) = min_{k=Nsample+1,...,n} NFA(M, k) ≤ 1.
+ RANSAC maximizes the inlier count; AC-RANSAC minimizes the NFA (a sketch follows below).
+ Application to Structure from Motion: estimation of
+ Homography,
+ Pose/Resection,
+ Fundamental matrix,
+ Essential matrix.
+
+ Only assumption: the returned model is fitted by at least 2 × Nsample data.
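+
+ A sketch of the NFA minimization, in one common a contrario form, NFA(M, k) = (n − Nsample) · C(n, k) · C(k, Nsample) · α(T_k)^(k − Nsample); the α(·) mapping and residuals below are illustrative assumptions, not the openMVG implementation:
+
+ import math
+
+ def log_comb(n, k):
+     # log of the binomial coefficient, via lgamma for numerical stability
+     return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
+
+ def best_nfa(residuals, n_sample, alpha):
+     """residuals: sorted residuals of all n correspondences under model M."""
+     n = len(residuals)
+     best = (float("inf"), None)
+     for k in range(n_sample + 1, n + 1):
+         t_k = residuals[k - 1]                    # k-th smallest residual
+         log_nfa = (math.log(n - n_sample)
+                    + log_comb(n, k) + log_comb(k, n_sample)
+                    + (k - n_sample) * math.log(max(alpha(t_k), 1e-12)))
+         if log_nfa < best[0]:
+             best = (log_nfa, k)
+     return best    # model is meaningful if log_nfa <= 0, i.e. NFA <= 1
+
+ residuals = sorted([0.3, 0.5, 0.8, 1.2, 2.0, 5.0, 9.0, 15.0])
+ print(best_nfa(residuals, n_sample=2, alpha=lambda r: min(1.0, r / 20.0)))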
+
+
+
+ EXPERIMENTAL RESULTS
+
+
+
+
+
+ CONTRIBUTIONS
+ An SfM pipeline built on AC-RANSAC:
+ AC-RANSAC estimation of E, F, H, and Pose,
+ Experimental validation showing the benefit of an adaptive automatic threshold.
+ openMVG open-source library:
+ A multiple-view geometry library,
+ A collection of 2-view solvers,
+ Generic robust estimators: RANSAC, AC-RANSAC,
+ Synthetic datasets with GT calibration.
+
+
+
+
+ REFERENCES
+ [1] N. Snavely et al. Photo tourism: exploring photo collections in 3D. In SIGGRAPH 2006.
+ [2] C. Strecha et al. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In CVPR 2008.
+ [3] L. Moisan et al. Automatic homographic registration of a pair of images, with a contrario elimination of outliers. In IPOL 2012, http://dx.doi.org/10.5201/ipol.2012.mmm-oh.
+
+
+
diff --git a/Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning-Poster.pdf b/Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d44a961ecd3a272364f7cf040042cbfb78c77cbe
--- /dev/null
+++ b/Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5486b3a5608815906bec57004363684666fceeb570577729aac3b9bd35e5c796
+size 1205637
diff --git a/Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning.pdf b/Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8ebe076bd577c113b07844826aa41d6bfc26de27
--- /dev/null
+++ b/Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65ea25c72a8f8f632ebcfe1363b9bb59769af355ecf0b919e9df875dbe994904
+size 806408
diff --git a/Train/An automated measure of MDP similarity for transfer in reinforcement learning/info.txt b/Train/An automated measure of MDP similarity for transfer in reinforcement learning/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..47cc5a93ef0c7e7a63871ec1cfea44365e97f62e
--- /dev/null
+++ b/Train/An automated measure of MDP similarity for transfer in reinforcement learning/info.txt
@@ -0,0 +1,73 @@
+
+
+ Motivation
+ Transfer Learning aims to improve learning times on a new target task by reusing knowledge from previously learned source task(s). In transfer, the performance of any algorithm depends on the choice of the source and target tasks. Here, we present a data-driven similarity measure used to choose source task(s).
+
+
+
+ Markov Decision Processes
+ Tasks are modelled as Markov Decision Processes (MDPs). An
+ MDP is a tuple ⟨S, A, P, R, γ⟩ with:
+ S: state space,
+ A: action space,
+ P: transition probability,
+ R: reward function,
+ γ: discount factor.
+
+
+
+ RBDist Similarity Measure
+ Intuition: If two tasks are similar, then a restricted Boltzmann
+ Machine (RBM) trained on samples from the first task should
+ reconstruct samples from the other task. The distance is measured
+ using two phases:
+ Training Phase: Using source samples, train an RBM by
+ contrastive divergence.
+ Reconstruction Phase: Reconstruct target samples by
+ sampling the visible layer (having conditionally independent
+ visible units)
+ Measure similarity: using the Euclidean measure between
+ real samples and reconstructed ones
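+
+ A sketch of the two RBDist phases using scikit-learn's BernoulliRBM; the sample matrices, binarisation, and hyperparameters are illustrative assumptions:
+
+ import numpy as np
+ from sklearn.neural_network import BernoulliRBM
+
+ rng = np.random.default_rng(0)
+ Xs = rng.random((500, 6))      # source-task samples, scaled to [0, 1] (placeholder)
+ Xt = rng.random((200, 6))      # target-task samples (placeholder)
+
+ # Training phase: contrastive-divergence training on source samples.
+ rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=30, random_state=0)
+ rbm.fit(Xs)
+
+ # Reconstruction phase: one Gibbs step resamples the visible layer
+ # (visible units are conditionally independent given the hidden layer).
+ Xt_rec = rbm.gibbs(Xt > 0.5)
+
+ # Similarity: mean Euclidean distance between real and reconstructed samples.
+ rbdist = np.linalg.norm(Xt - Xt_rec, axis=1).mean()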
+
+
+
+
+ Experimental Domains & Benchmarks
+
+
+
+
+
+
+ Dynamical Phase Discovery
+
+
+ RBDist can automatically discover tasks’ dynamical phases
+
+
+
+ Transfer Correlation
+
+
+ Jump-Start correlation as a
+ function of RBDist on Cart
+ Pole systems
+ Jump-Start correlation as a
+ function of RBDist on
+ Mountain Car systems
+ RBDist correlates with initial performance on target tasks
+
+
+
+ Future Work
+ Extend RBDist to support transfer between tasks from different domains.
+ Assess the effect of RBDist on other transfer criteria (e.g., asymptotic performance, time to threshold, etc.).
+
+
+
diff --git a/Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development-Poster.pdf b/Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f35332512a0da2ac4d4b12ee3e7b2f2abd41f913
--- /dev/null
+++ b/Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a907f6fcd279612043e92c251c8b85b8f0257a47c3dfebc58b88aa83fde23f23
+size 2495222
diff --git a/Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development.pdf b/Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4fe84efb0793225bc86aa23d77fb008852d22231
--- /dev/null
+++ b/Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:efb00f47a8683387bc02c97748318a5771d920074f8c55df16ce65ca702dd7c8
+size 1512626
diff --git a/Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/info.txt b/Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..cf32381109d5bb08f65cd4aa16af730ccdbd6417
--- /dev/null
+++ b/Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/info.txt
@@ -0,0 +1,50 @@
+
+
+ Background and significance
+ The Eeva™ (Early Embryo Viability Assessment) Test was developed to automatically measure cell division timings and provide quantitative information regarding embryo development.
+
+ We developed a multi-level classification method to identify the embryo stage (i.e. 1-cell, 2-cell, 3-cell, 4-or-more-cell) at every time point of a time-lapse microscopy video of early human embryo development.
+
+
+
+
+ The Method
+
+
+
+
+
+ Temporal Image Similarity
+ Based on the Bhattacharyya distance between the BoF histograms of consecutive frames
+ Registration free, rotation and translation invariant
+ “Dips” in the plot are good indications of stage transitions
+ Used by the Viterbi algorithm to define the state transition probabilities; a sketch follows below
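+
+ A sketch of this temporal similarity (the histograms below are placeholders; the mapping to Viterbi transition probabilities is only indicated):
+
+ import numpy as np
+
+ def bhattacharyya_distance(h1, h2):
+     # h1, h2: L1-normalized BoF histograms of two consecutive frames
+     bc = np.sum(np.sqrt(h1 * h2))       # Bhattacharyya coefficient
+     return -np.log(max(bc, 1e-12))
+
+ # hists: (n_frames, n_words) normalized histograms (placeholder data)
+ hists = np.random.dirichlet(np.ones(200), size=500)
+ d = np.array([bhattacharyya_distance(hists[i], hists[i + 1])
+               for i in range(len(hists) - 1)])
+ # large d ("dips" in similarity) would be mapped to higher
+ # stage-transition probabilities in the Viterbi pass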
+
+
+
+
+
+ Experiments and Results
+ 327 human embryo videos (500 frames, each with 151 x 151 pixels) for training, 389 embryo videos for testing.
+ All the embryo videos were captured using the Eeva™ system.
+ Two human experts annotated the embryo stages of each frame.
+
+ Importance of different sets of features in the trained level-1 (left) and level-2 (right) classification models
+
+ Classification performance at different levels
+
+
+ Precision (left) and Recall (right) of cell division detection as functions of the offset tolerance
+
+
+
diff --git a/Train/BMVC-2011-001/BMVC-2011-001-Poster.pdf b/Train/BMVC-2011-001/BMVC-2011-001-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9497e6ea07a7f1bd55b9564f92dea743606c28a3
--- /dev/null
+++ b/Train/BMVC-2011-001/BMVC-2011-001-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c880abef110f10cf57c1c951461a6ed5498affe0d1ce120de5dc841755a205c9
+size 1566524
diff --git a/Train/BMVC-2011-001/BMVC-2011-001.pdf b/Train/BMVC-2011-001/BMVC-2011-001.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..98d12c1a785fae9ee7f8d37046da1c7fc0e6af03
--- /dev/null
+++ b/Train/BMVC-2011-001/BMVC-2011-001.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:136b11aac9f56b5239baadb344c5ca2091a3a8f34731fd8aa97d6a763b9f3112
+size 566510
diff --git a/Train/BMVC-2011-001/info.txt b/Train/BMVC-2011-001/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c4f354402b36f1269f2841988e626130af98e06f
--- /dev/null
+++ b/Train/BMVC-2011-001/info.txt
@@ -0,0 +1,134 @@
+
+
+ PROBLEM STATEMENT
+ Given partial 2D or 3D trajectories of the
+ motion of a uniformly colored bouncing
+ ball, that is viewed by a single or multi-
+ ple cameras, estimate its full 3D state,
+ over time, i.e. location, orientation, an-
+ gular and linear velocities.
+
+
+
+ MOTIVATION
+ Scene understanding can benefit from
+ exploiting the fact that a dynamic scene
+ and its visual observations are invariably
+ determined by the laws of physics.
+
+
+
+ MAIN IDEA
+ • Model the physics of the scene using
+ physics-based simulation
+ • Acquire visual observations
+ • Define an objective function that con-
+ nects the model to the observations
+ • Produce physically plausible interpre-
+ tations of the scene by performing
+ black-box optimization
+
+
+
+ PHYSICS BASED SIMULATION
+ (A) Dynamics of a bouncing ball
+ The bouncing ball is affected by gravity
+ and air resistance while in flight and fric-
+ tion while in bounce with a surface.
+
+ (B) Equations of motion
+ We assume standard equations of mo-
+ tion for the flight phase and add air re-
+ sistance. We derive equations for the
+ bounce phase by extending [1].
+ (C) Simulation of a bouncing ball
+ We define a parameterized ball throwing simulation process S that:
+ • receives a 21-D vector of scene properties and initial conditions
+ • at each point in time, produces a 12-D vector of location, orientation, linear and
+ angular velocities
+ • is implemented by augmenting the Newton Game Dynamics simulator with our
+ physics modeling
+ • performs at 500fps, but is sub-sampled to real acquisition rate (30fps), in order to
+ account for aliasing effects
+
+
+
+ PHYSICALLY PLAUSIBLE SCENE INTERPRETATION
+ We estimate the physically plausible explanation e of the observed scene by formu-
+ lating an optimization problem, where:
+ • the hypothesis space of x is defined over the domain of simulation process S
+ • the observation data o are trajectories of a bouncing ball
+ (potentially partial, 3D or 2D, from single or multiple cameras)
+ • the objective function quantifies the discrepancy between the result of an invocation
+ to S and the observations
+ • the objective function is optimized by means of Differential Evolution [5]
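+
+ A sketch of this optimization stage; simulate() below is a toy ballistic stand-in for the full simulation process S, and the observed trajectory and bounds are illustrative assumptions:
+
+ import numpy as np
+ from scipy.optimize import differential_evolution
+
+ observed = np.zeros((60, 3))        # placeholder observed 3D trajectory (2 s at 30 fps)
+
+ def simulate(params):
+     # Toy stand-in for S: ballistic flight from initial position params[:3]
+     # and velocity params[3:6]; the real S also models bounces, air
+     # resistance and friction, and uses all 21 parameters.
+     t = np.arange(len(observed)) / 30.0      # sub-sampled to the 30 fps rate
+     g = np.array([0.0, 0.0, -9.81])
+     return params[:3] + np.outer(t, params[3:6]) + 0.5 * np.outer(t**2, g)
+
+ def objective(params):
+     # discrepancy between an invocation of S and the observations
+     return np.linalg.norm(simulate(params) - observed)
+
+ bounds = [(-10.0, 10.0)] * 21                # one interval per scene parameter
+ result = differential_evolution(objective, bounds, maxiter=50, seed=0)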
+
+
+
+ CONTRIBUTIONS
+ • First method to consider attributes of state that can only be estimated through
+ physics-based simulation
+ • Extension to existing work [2–4] in exploiting physics based simulation in vision
+ • Proposal of an effective method that is clear, generic, top-down, simulation based
+ • Incorporation of realistic physics
+ • Selected generic and modular components allow for extension to other broader or
+ different contexts
+
+
+
+ EXPERIMENTAL RESULTS
+ (A) Multiview estimation of 3D trajectories
+ (synthetic/real)
+
+ (B) Single view estimation of 3D trajectories
+ Finding ball throwing simulations that optimally repro-
+ duce 2D observations.
+
+
+ (C) Seeing the “invisible”
+ Implicit information, like the state of the ball while
+ occluded (left) and the angular components of its 3D
+ state (right), is computed based on a single camera.
+
+
+
+
+
+ KEY REFERENCES
+ [1] P.J. Aston and R. Shail. The Dynamics of a Bouncing
+ Superball with Spin. Dynamical Systems, 22(3):291–
+ 322, 2007.
+ [2] K. Bhat, S. Seitz, J. Popović, and P. Khosla. Com-
+ puting the Physical Parameters of Rigid-body Motion
+ from Video. In ECCV 2002, pages 551–565. Springer,
+ 2002.
+ [3] D.J. Duff, J. Wyatt, and R. Stolkin. Motion Estimation
+ using Physical Simulation. In IEEE International Con-
+ ference on Robotics and Automation (ICRA), pages
+ 1511–1517. IEEE, 2010.
+ [4] D. Metaxas and D. Terzopoulos. Shape and Nonrigid
+ Motion Estimation through Physics-based Synthesis.
+ IEEE Transactions on Pattern Analysis and Machine
+ Intelligence, 15(6):580–591, 1993.
+ [5] R. Storn and K. Price. Differential Evolution–A Sim-
+ ple and Efficient Heuristic for Global Optimization over
+ Continuous Spaces. Journal of Global Optimization,
+ 11(4):341–359, 1997.
+
+
+
+ MORE INFORMATION
+ For more information, visit http://www.ics.forth.gr/~kyriazis/?e=1 or contact {kyriazis,oikonom,argyros}@ics.forth.gr
+ This work was partially supported by the
+ IST-FP7-IP-215821 project GRASP
+
+
+
+
diff --git a/Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation-Poster.pdf b/Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..017910c7779ee3396033eb13b4fee0c583bd558f
--- /dev/null
+++ b/Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aefa184e5f2da280813415965de61b8a4eac13ec2701f1db0e655da7f09ceeec
+size 1759871
diff --git a/Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation.pdf b/Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c97815fc17a3b56527bdce9382b2faa4d43df020
--- /dev/null
+++ b/Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd1aa4b53750df87e91f17a8258c6560a31cc5222147ebbd74d6907c9fdf6ff6
+size 2051164
diff --git a/Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/info.txt b/Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4644eda34dadf47998079fbdd56a9fe68f642de0
--- /dev/null
+++ b/Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/info.txt
@@ -0,0 +1,158 @@
+
+
+ Task
+
+
+ Fully annotated vs. weakly annotated data.
+ Many computer vision tasks require fully annotated data, but
+ annotation is time-consuming and laborious.
+ More and more online media-sharing websites (e.g. Flickr) provide
+ weakly annotated data; however, this means
+ weaker supervision and ambiguity (background clutter, occlusion…).
+ Challenge: Weakly Supervised Object Localisation (WSOL).
+
+
+
+ Existing Approaches vs. Ours
+ Three types of cues are exploited in existing WSOL:
+ Object-saliency: A region containing the object should look different
+ from background in general.
+ Intra-class: The region should look similar to other regions containing
+ the object of interest in other training images.
+ Inter-class: The region should look dissimilar to any regions that are
+ known to not contain the object of interest.
+ However, they are independently trained:
+
+ Independent learning ignores the facts that:
+ multiple objects co-exist within each image, and this knowledge is
+ left unexploited;
+ the background is shared across different foreground object classes.
+ Our contributions:
+ We propose the novel concept of jointly modelling all object
+ classes and backgrounds for weakly supervised object localisation.
+ We formulate a novel Bayesian topic model suitable for object
+ localisation that utilises the various types of prior knowledge available.
+ We provide a solution for exploiting unlabelled data for semi+weakly
+ supervised learning of object localisation.
+
+
+
+ Methodology
+ Preprocessing and Representation:
+ Regular-grid SIFT descriptors, sampled every 5 pixels.
+ Quantised using an N_v = 2000 word codebook.
+ Words and corresponding locations.
+ Our Model:
+
+ Observed variables:
+ O = {x_j, l_j}_{j=1}^{J}: low-level feature words and their corresponding locations.
+ Latent variables (for each topic k and image j):
+ H = {{π_k}_{k=1}^{K}, {y_j, μ_{kj}, Λ_{kj}, θ_j}_{k=1,...,K, j=1,...,J}}.
+ Given parameters (label information and priors):
+ Π = {π_{k0}, μ_{k0}, Λ_{k0}, β_{k0}, ν_{k0}}_{k=1}^{K} and {α_j}_{j=1}^{J}.
+ Joint distribution:
+ p(O, H | Π) = ∏_k p(π_k | π_{k0}) ∏_j [ p(θ_j | α_j) ∏_k p(μ_{kj}, Λ_{kj} | μ_{k0}, Λ_{k0}, β_{k0}, ν_{k0}) ∏_i p(x_{ij} | y_{ij}, θ_j) p(y_{ij} | θ_j) ].
+ Prior Knowledge:
+ Human knowledge about objects and their relationships with backgrounds:
+ Objects are compact whilst background spreads across the image.
+ Objects stand out against the background.
+ Transferred knowledge:
+ Appearance and geometry information from existing datasets.
+ Object Localisation:
+ Our-Gaussian: aligning a window to the ellipse obtained from q(μ, Λ).
+ Our-Sampling: non-maximum suppression sampling over the heat map.
+
+
+
+ Results
+ Dataset: PASCAL VOC 2007. Three variants are used:
+ VOC07-6×2: 6 classes with Left and Right poses, 12 classes in total.
+ VOC07-14: 14 classes; the other 6 were used as annotated auxiliary data.
+ VOC07-20: all 20 classes; each class contains all pose data.
+ PASCAL criterion:
+ intersection-over-union > 0.5 between the ground-truth and predicted boxes.
+ Comparison with the state of the art:
+ Initialisation: localising the object of interest in weakly labelled images.
+ Refined by detector: a conventional object detector can be trained
+ using the initial annotations and then used to refine the object locations.
+
+
+
+ Example: Foreground Topics
+
+
+ Figs. (c) and (d) illustrate that the object of interest “explains away”
+ other objects of no interest.
+ A car is successfully located in Fig. (c) using the heat map of the car topic.
+ Fig. (d) shows that the motorbike heat map is quite accurately
+ selective, with minimal response on the other vehicular clutter.
+ Fig. (e) indicates how the Gaussian can sometimes give a better location.
+ Fig. (f) shows that the single-Gaussian assumption is not ideal when the
+ foreground topic has a less compact response.
+ A failure case is shown in Fig. (g), where a bridge structure resembles
+ the boat in Fig. (a), resulting in a strong response from the foreground topic,
+ whilst the actual boat topic is small and overwhelmed.
+
+
+
+ Example: Background Topics
+
+
+ Background non-annotated data has been modelled in our framework.
+ Irrelevant pixels are explained away to reduce confusion with objects.
+ Automatically learned background topics have clear semantic meanings,
+ corresponding to common components as shown in the Figure.
+ Some background components are mixed, e.g. the water topic gives a
+ strong response to both water and sky. But this is understandable
+ since water and sky are almost visually indistinguishable in the image.
+
+
+
+ Example: Semi-supervised Learning
+
+ An unknown image can be set as α_j^{fg} = 0.1 (a soft constraint).
+ 10% labelled data + 90% unlabelled (relevant) data or unrelated data.
+ Evaluating on (1) the initially annotated 10% of the data (standard WSOL),
+ and (2) the test part of the dataset (localising objects in new images).
+ The figure clearly shows that unlabelled data helps to learn a better object
+ model.
+
+
+
+ References
+ [1] T. Deselaers, B. Alexe, and V. Ferrari. Weakly supervised localization
+ and learning with generic knowledge. IJCV. 2012.
+ [2] M. Pandey and S. Lazebnik. Scene recognition and weakly supervised
+ object localization with deformable part-based models. In ICCV, 2011
+ [3] P. Siva and T. Xiang. Weakly supervised object detector learning with
+ model drift detection. In ICCV, 2011.
+ [4] P. Siva, C. Russell, and T. Xiang. In defence of negative mining for
+ annotating weakly labelled data. In ECCV, 2012.
+
+
+
diff --git a/Train/Being John Malkovich/Being John Malkovich-Poster.pdf b/Train/Being John Malkovich/Being John Malkovich-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..286b32056ce3d0b5547b336de6f10d8a0273cb58
--- /dev/null
+++ b/Train/Being John Malkovich/Being John Malkovich-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3248e67a89985126282d33a6d6e9453b42aa8c759fd99eb27e60c07234dab834
+size 9692817
diff --git a/Train/Being John Malkovich/Being John Malkovich.pdf b/Train/Being John Malkovich/Being John Malkovich.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c47b2a163bdf74db9e5bfa6c71a36fbd323da842
--- /dev/null
+++ b/Train/Being John Malkovich/Being John Malkovich.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9456efa1ab6496e6b56fa5046005bafe1b0b615e7c2291e2c5d07fda29ce3db3
+size 6540596
diff --git a/Train/Being John Malkovich/info.txt b/Train/Being John Malkovich/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..7498036308b991660dc5b31c355e742550983faf
--- /dev/null
+++ b/Train/Being John Malkovich/info.txt
@@ -0,0 +1,365 @@
+
+
+ Our contribution: Use your face to drive someone else.
+
+ • A fully automatic real-time framework that combines a number of face processing components in a novel way
+ • Works with any unstructured photo collection and/or video sequence
+ • No training or labeling
+
+
+
+
+
+
+ Results:
+
+ Puppeteering evaluation (full measure):
+
+ Without mouth similarity:
+
+ Without eyes similarity:
+
+ Cameron Diaz drives John Malkovich:
+
+ User drives George W. Bush:
+ (870 photos in Bush’s dataset)
+
+
+
+
+
+
+ The method:
+
+ Image alignment to canonical pose:
+
+ Photo collections: face and fiducial points detection (Everingham et al. 06)
+
+ Webcam/video sequences: real-time tracking (Saragih et al. 09)
+
+ 2D aligned:
+
+ Warped to frontal pose:
+ Appearance representation:
+
+ • LBP (Local Binary Pattern) histograms (Ahonen et al. 06)
+ • Applied on warped images
+ • Only for mouth & eyes regions
+ • Mouth region divided into 3x5 blocks
+ • Eye region divided into 3x2 blocks
+
+
+ Distance measure:
+
+ The distance between input frame i and target frame j is:
+
+ Appearance:
+ d_appear(i, j) = α_m d_m(i, j) + α_e d_e(i, j)
+ d_{m,e}: LBP-histogram χ² distances restricted to the mouth and eyes regions
+ α_{m,e}: the corresponding weights
+
+ Pose:
+ d_pose(i, j) = L(|Y_i − Y_j|) + L(|P_i − P_j|) + L(|R_i − R_j|)
+ Y: yaw, P: pitch, R: roll; L(d): a robust logistic normalization function
+
+ Temporal continuity:
+ the appearance distance between frame i − 1 and target frame j
+
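+ A sketch of these distance components; the weights, the logistic scale, and the way appearance and pose are combined here are illustrative assumptions:
+
+ import numpy as np
+
+ def chi2(h1, h2, eps=1e-10):
+     # chi-squared distance between two LBP histograms
+     return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
+
+ def logistic(d, scale=1.0):
+     # robust logistic normalization L(d), mapping a distance into [0, 1)
+     return 2.0 / (1.0 + np.exp(-d / scale)) - 1.0
+
+ def frame_distance(mouth_i, mouth_j, eyes_i, eyes_j, pose_i, pose_j,
+                    a_m=0.6, a_e=0.4):
+     d_appear = a_m * chi2(mouth_i, mouth_j) + a_e * chi2(eyes_i, eyes_j)
+     yaw, pitch, roll = np.abs(np.array(pose_i) - np.array(pose_j))
+     d_pose = logistic(yaw) + logistic(pitch) + logistic(roll)
+     return d_appear + d_pose    # illustrative combination of the two terms
+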
+
+ Acknowledgments: This work was supported in part by Adobe and the University of Washington Animation Research Labs. We gratefully acknowledge Jason Saragih for providing the face tracking software. Also, in our experiments we used:
+
+ - videos of Cameron Diaz, George Clooney and John Malkovich downloaded from YouTube and mefeedia.com
+ - a collection of photos of George W. Bush from the LFW face database.
+
+
+
+
+
diff --git a/Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features-Poster.pdf b/Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d13e1ffe1665e812c4bcad136cb8172a93e69192
--- /dev/null
+++ b/Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7d0362a8d677694eceb26ef63d0410b926374cb81646045f54ca1fde934b994
+size 390301
diff --git a/Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features.pdf b/Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d2e697a3b0a25e5b8a4b1ed172434b88b6897de7
--- /dev/null
+++ b/Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9a3b1a27b376a5493b42226c48d3b9e836ba9c6d9ab80ea51722566956c986f
+size 1174942
diff --git a/Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/info.txt b/Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..2905ee5d36e1b0d924df425834fa7c7a0b58d179
--- /dev/null
+++ b/Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/info.txt
@@ -0,0 +1,134 @@
+
+
+ 1. CONTRIBUTIONS
+ The key contributions of our work are:
+ • Analysis of the spatial receptive field (RF) designs for
+ pooled features.
+ • Evidence that spatial pyramids may be suboptimal in
+ feature generation.
+ • An algorithm that jointly learns adaptive RF and
+ the classifiers, with an efficient implementation using
+ over-completeness and structured sparsity.
+
+
+
+ 2. THE PIPELINE
+
+ State-of-the-art classification algorithms take a two-layer pipeline: the coding layer learns activations from local
+ image patches, and the pooling layer aggregates activations in multiple spatial regions. Linear classifiers are learned
+ from the pooled features.
+
+
+
+ 3. NEUROSCIENCE INSPIRATION
+
+
+
+
+ 4. SPATIAL POOLING REVISITED
+ • Much work has been done on the coding part, while
+ the spatial pooling methods are often hand-crafted.
+ • Sample performances on CIFAR-10 with different re-
+ ceptive field designs:
+
+ Note the suboptimality of SPM - random selection
+ from an overcomplete set of spatially pooled features
+ consistently outperforms SPM.
+ • We propose to learn the spatial receptive fields as well
+ as the codes and the classifier.
+
+
+
+ 5. NOTATIONS
+ • I: image input.
+ • A_1, ..., A_K: code activations as matrices, with A^k_{ij} the
+ activation of code k at position (i, j).
+ • R_i: RF of the i-th pooled feature.
+ • op(·): pooling operator, such as max(·).
+ • f(x, θ): the classifier based on pooled features x.
+ • A pooled feature x_i is defined by choosing a code indexed
+ by c_i and a spatial RF R_i (written out below).
+ The vector of pooled features x is then determined
+ by the set of parameters C = {c_1, ..., c_M} and R =
+ {R_1, ..., R_M}.
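+
+ Written out, the pooled-feature definition (a reconstruction from the
+ notation above, assuming op = max) is:
+
+ x_i = op(A^{c_i}_{R_i}), e.g. x_i = max_{(u,v) ∈ R_i} A^{c_i}_{uv}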
+
+
+
+ 6. THE LEARNING PROBLEM
+ • Given a set of training data {(I_n, y_n)}_{n=1}^N, we jointly
+ learn the classifier and the pooled features as follows (assum-
+ ing that coding is done in an unsupervised way):
+ • Advantage: pooled features are tailored towards the
+ classification task (also reduces redundancy).
+ • Disadvantage: may be intractable - an exponential
+ number of possible receptive fields.
+ • Solution: reasonably overcomplete receptive field
+ candidates + sparsity constraints to control the num-
+ ber of final features.
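+
+ In symbols, a hedged reconstruction of this joint objective (L is the
+ classification loss; Reg stands for the structured-sparsity penalty of
+ Sec. 7, and λ controls the number of selected features):
+
+ min_{θ, C, R}  Σ_{n=1}^{N} L( f(x_n(C, R), θ), y_n ) + λ Reg(θ)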
+
+
+
+ 7. OVERCOMPLETE RF
+ • We propose to use overcomplete receptive field can-
+ didates based on regular grids:
+
+ (a) Base
+
+ (b) SPM
+
+ (c) Ours
+ • The structured sparsity regularization is adopted to
+ select only a subset of features for classification:
+
+
+
+ 8. GREEDY FEATURE SELECTION
+ • Directly performing the optimization is still time- and
+ memory-consuming.
+ • Following [Perkins JMLR03], we adopt an incremental,
+ greedy approach to select features based on their scores:
+ • After each increment, the model is retrained only with
+ respect to an active subset of selected features to en-
+ sure fast re-training:
+
+ • Benefit of overcompleteness in spatial pooling + fea-
+ ture selection: higher performance with smaller code-
+ books and lower feature dimensions.
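+
+ A sketch of such an incremental, grafting-style selection loop (ridge
+ regression stands in for the actual classifier; the scores are loss-
+ gradient magnitudes, as in [Perkins JMLR03]):
+
+ import numpy as np
+
+ def grafting_select(X, y, n_select=50, lam=1e-2):
+     # Greedily add the feature with the largest gradient magnitude,
+     # then retrain only on the active subset of selected features.
+     n, d = X.shape
+     active, w = [], np.zeros(0)
+     for _ in range(n_select):
+         resid = y - X[:, active] @ w
+         score = np.abs(X.T @ resid) / n      # per-feature gradient magnitude
+         score[active] = -np.inf              # skip already-selected features
+         active.append(int(np.argmax(score)))
+         A = X[:, active]                     # fast retrain on active set only
+         w = np.linalg.solve(A.T @ A + lam * np.eye(len(active)), A.T @ y)
+     return active, w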
+
+
+
+
+ 9. RESULTS
+ • Performance comparison on CIFAR-10 with state-of-
+ the-art approaches:
+
+ • Result on MNIST and the 1-vs-1 saliency map ob-
+ tained from our algorithm:
+
+
+
+
+ 10. REFERENCES
+ • A Coates and AY Ng. The importance of encoding
+ versus training with sparse coding and vector quanti-
+ zation. ICML 2011.
+ • S Perkins, K Lacker, and J Theiler. Grafting: fast, incre-
+ mental feature selection by gradient descent in func-
+ tion space. JMLR, 3:1333–1356, 2003.
+ • DH Hubel and TN Wiesel. Receptive fields, binocu-
+ lar interaction and functional architecture in the cat’s
+ visual cortex. J. of Physiology, 160(1):106–154, 1962.
+
+
+
diff --git a/Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY-Poster.pdf b/Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..35d6cecb7d71a3ebc6fc6b25692940af72395e54
--- /dev/null
+++ b/Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa62652716affff520bfd90b12931ad4cad3054eda70b387741d83ceb1d29faf
+size 1154088
diff --git a/Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY.pdf b/Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..80777d94f99e38d5d752e3cc63c64502278dae09
--- /dev/null
+++ b/Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad6e35ed92035755ebe95d9e3cffd0327a5b137eb4fc7b1d2ea00d2a960802a3
+size 430436
diff --git a/Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/info.txt b/Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..af9f07f0e7b5e7794c9cd7fb71e19b0581747a0c
--- /dev/null
+++ b/Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/info.txt
@@ -0,0 +1,69 @@
+
+
+ Abstract
+ We enhance the efficiency of assembly of microparts in batch dry
+ assembly methods studied previously by our group. Here we study the
+ system dynamics with the addition of a few non-participating millimeter
+ scale parts that act as ‘catalysts’. We present experimental results that
+ show 25-50% reduction in acceleration needed to trigger part motion
+ and up to 4 times increase in concentration of parts in motion due to
+ addition of catalysts. We adapt a model from chemical kinetic theory to
+ understand our system behavior.
+
+
+
+ Analogy with Chemical Kinetics
+
+
+
+
+
+
+ Experimental Setup and Data
+ Collection/Analysis Capabilities
+ •Parts (800×800×50 µm) and catalysts (2×2×0.5 mm) are made
+ respectively from SOI/silicon wafers using standard lithography and
+ DRIE etching
+ •A high-speed camera is used to capture part motion
+ •Dedicated Matlab routines were developed for image processing and
+ subsequent data reduction
+
+
+
+
+
+
+
+ Results
+
+
+
+
+
+ Conclusion
+ •‘Catalyst’ is a promising new concept in dry self-assembly
+ •Infrastructure for automated assembly analysis is developed
+ •Chemical Kinetics analogous models and empirical data are available
+ •Future developments include automated accounting of assembly in
+ assembly sites
+
+
+
+
+ Acknowledgements
+ •This work was supported by research grants from Intel Corporation.
+ •The authors thank members of the UW-MEMS Lab for their feedback
+ and assistance.
+
+
+
diff --git a/Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS-Poster.pdf b/Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..61721dce50c9e9d48825fb7dafc8aa82f46e878b
--- /dev/null
+++ b/Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:215c54c99653da4c9f584d88e27c3546e2180ba77ae00881ecca0f36805c2d74
+size 399612
diff --git a/Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS.pdf b/Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f2a1c2f02899ab37cf438c1382eee311653b1495
--- /dev/null
+++ b/Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e73f4004cd0047cbc581b4621e7430219d7e6dd4402e9bedf2c7a87b5559881
+size 2561422
diff --git a/Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/info.txt b/Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f289c90e4aafb76edb93f51e4ea2c8cce7d03fc5
--- /dev/null
+++ b/Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/info.txt
@@ -0,0 +1,157 @@
+
+
+ Motivation
+ Cu(In,Ga)Se2 is the leading choice for high-performance thin-film solar cells (20.1% efficiency)
+ • Bandgaps of the I-III-VI2 system (I-Cu,Ag; III-Ga,In; VI-S,Se) cover almost the entire solar
+ spectrum, which makes it an ideal system for multijunction solar cells
+ • However, these make poor solar cells because most of their properties, including
+ recombination mechanisms, are poorly understood
+ • Cathodoluminescence (CL) offers simultaneous scanning of
+ electron and spectroscopic images
+ • Our goals are to:
+ • Study luminescence behavior close to grain boundaries
+ • Identify emission differences between samples (Cu vs Ag)
+ • Characterize emissions and possible defects responsible
+ for transitions
+
+
+
+
+ Experimental Setup
+ Thin film deposition system
+ Hybrid sputtering/evaporation
+ • Sputtering of metals (Cu, Ag, In)
+ • Substrate held at 550°C
+ • Three films analyzed
+ • AgInSe2 – 300 nm
+ • Cu0.6Ag0.4InSe2 – 700 nm
+ • CuInSe2 – 700 nm
+
+ JEOL 7000F analytical SEM
+
+ • Gatan MonoCL3 Spectrometer
+ • Liquid N2-cooled Ge detector
+ • Liquid He-cooled stage module
+ • Accelerating Voltage = 15 kV
+ • Current varied from 22 pA to 160,000 pA
+ • Some samples were carbon coated to avoid charging during imaging
+
+
+
+
+
+ • Very spatially uniform emissions
+ • Most dark areas in PanCL image
+ correspond to dark areas in the SEM
+ image
+ • Very few grains show reduced emission
+ intensity from facets
+ • No reduction in emission intensity
+ from protuberant surface features
+
+
+ • Much more spatial variation of emissions
+ • Dark areas in PanCL image correspond to
+ bright areas in SEM image
+ • Severe reduction in emission intensity seen
+ from protuberant surface features
+ • Local variations in luminescence are
+ indicative of local variation in defect states
+ or composition
+
+
+ • More spatial variation of emissions than
+ in pure AgInSe2, but less than in
+ Cu0.6Ag0.4InSe2
+ • Reduction in emission intensity seen as
+ we approach surface facets or grain
+ edges
+ • Center of grains luminesce very well
+ • Intense emissions seen around grains
+
+
+
+ Power-series and Spectral Imaging
+
+ • Single Gaussian + exponential tail at low energy at low
+ excitation power
+ • At excitation powers above 2560 pA we see new exponential
+ tails at both the high and low energy ends
+ • Blue shift of main Gaussian peak
+ • Aligned and overlaid spectral images show spatial and spectral
+ uniformity
+ • No enhanced emission from grain boundaries or inter-grain areas
+ • Uniformity in emission indicates compositional uniformity
+
+ • Two Gaussians at low excitation power
+ • Low and high energy end exponential tails appear at higher
+ excitation powers together with additional Gaussian
+ • Blue shift of Gaussian peaks
+ • Aligned and overlaid spectral images show spatial and spectral variation
+ • Enhanced emission from grain boundaries and inter-grain areas
+ • CAIS has the most variation of emissions from grain to grain,
+ indicating compositional fluctuations between grains
+
+ • Very broad emission requires 4+ peaks to fit + exponential tail
+ at low energy
+ • High energy (1200 nm) Gaussian starts emitting above 640 pA
+ • Blue shift of main Gaussian peaks
+ • Aligned and overlaid spectral images show spatial and spectral variation
+ • Red emission, although very uniform, is most intense inside grains even
+ close to facets and surface features
+ • Green and blue emissions strongest from grain boundaries and inter-grain
+ areas
+ • CIS has most emission variation from grain to boundary indicating
+ compositional fluctuation between grain and grain boundaries
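+
+ A sketch of the line-shape fit described above (single Gaussian plus a
+ low-energy exponential tail; the functional form and parameter names
+ are our assumptions, using scipy's curve_fit):
+
+ import numpy as np
+ from scipy.optimize import curve_fit
+
+ def gauss_low_tail(E, A, E0, sigma, B, k):
+     # Gaussian peak at E0 plus an exponential tail on the low-energy side.
+     gauss = A * np.exp(-0.5 * ((E - E0) / sigma) ** 2)
+     tail = B * np.exp(k * (E - E0)) * (E < E0)   # nonzero only below E0
+     return gauss + tail
+
+ # usage (illustrative initial guesses):
+ # popt, _ = curve_fit(gauss_low_tail, energy_eV, counts,
+ #                     p0=[1.0, 1.2, 0.05, 0.1, 20.0])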
+
+
+
+ Conclusions
+ •Emissions from AgInSe2 are more uniform both spatially and spectrally than Cu-containing samples
+ •AgInSe2 less affected by reduced emission from surface features – partly due to less surface faceting
+ •Cu-containing samples exhibit enhanced luminescence from grain boundary or inter-grain areas
+ •Both Cu-containing samples exhibit localized luminescent variations indicative of compositional
+ fluctuations or electrically active defect fluctuations
+ •As Cu increases, emission gets broader indicating larger number of specific defect states (more local
+ chemical variation)
+ •Device implications: AIS may produce more uniform cell performance and may exhibit less air-
+ sensitivity during manufacture
+
+
+
+ Acknowledgments
+ This work was carried out in part in the Frederick Seitz Materials Research Laboratory Central Facilities, University of Illinois, which are partially
+ supported by the U.S. Department of Energy under grants DE-FG02-07ER46453 and DE-FG02-07ER46471
+ Toledo acknowledgements: Ohio Department of Development (ODOD), Wright Center for Photovoltaics Innovation and Commercialization (PVIC)
+ This work was supported by Air Force Research Laboratory, Space Vehicles Directorate, Kirtland AFB (Contract No. FA9453-08-C-0172)
+ Special thanks to: Dr. James Mabon (SEM/CL setup) and co-workers Allen Hall, Damon Hebert, Pamela Martin, and Yiming Liu
+
+
+
diff --git a/Train/CVPR-2014-011/CVPR-2014-011-Poster.pdf b/Train/CVPR-2014-011/CVPR-2014-011-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7fa369989238516f7361a0fc47de00b2972edf4c
--- /dev/null
+++ b/Train/CVPR-2014-011/CVPR-2014-011-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07a03de6c71a5677e4412778e4387fd7459517bcaa6f7f330fef175f6aedc72e
+size 3689911
diff --git a/Train/CVPR-2014-011/CVPR-2014-011.pdf b/Train/CVPR-2014-011/CVPR-2014-011.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..30afc03b78706a29b3e5fa1e099e2bd8ab264b66
--- /dev/null
+++ b/Train/CVPR-2014-011/CVPR-2014-011.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:edd46808643f068dcebf16aeb0638110551b7811720ddc2a7b3e3954bbf4f972
+size 8050387
diff --git a/Train/CVPR-2014-011/info.txt b/Train/CVPR-2014-011/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d5cde9fb91adebb812bcfa6fc314c792620a9fce
--- /dev/null
+++ b/Train/CVPR-2014-011/info.txt
@@ -0,0 +1,204 @@
+
+
+ Introduction & Motivation
+ - Here, to help focus research efforts onto the hardest
+ unsolved problems, and bridge computer and human
+ vision, we define a battery of 5 tests that measure the
+ gap between human and machine performances in
+ several dimensions.
+ - Cases where machines are superior motivate us to
+ design new experiments to understand mechanisms
+ of human vision, and to reason about its failure.
+ Cases where humans are better inspire computational
+ researchers to learn from humans.
+ - In some applications (e.g., human-machine interaction
+ or personal robots), perfect accuracy is not necessarily
+ the goal; rather, having the same type of behavior (e.g.,
+ failing in cases where humans fail too) is favorable.
+
+
+
+
+ Test 1: Scene Recognition
+
+
+
+ A & B) Scene classification accuracy over 6-, 8- and 15-CAT datasets. Error bars represent the standard error of the mean over 10 runs. Naive
+ chance is simply set by the size of the largest class. All models work well above chance level. C) Top: animal vs. non-animal (distractor images)
+ classification. Bottom: classification of target images. 4-way classification is only over target scenes (and not distractors).
+ - A & B: We find that HOG, SSIM, texton, denseSIFT, LBP,
+ and LBHPF outperform other models (accuracy above 70%).
+ We note that spatial feature integration (i.e., x_pyr for the
+ model x) enhances accuracies.
+ - C: Animal vs. Non-Animal: All models perform above 70%,
+ except tiny image. Human accuracy here is about 80%. Inter-
+ estingly, some models exceed human performance here.
+ SUN dataset: Models that performed well on small datasets
+ (although they degrade heavily) still rank on top. GIST model
+ works well here (16.3%) but below top contenders: HOG, tex-
+ ton, SSIM, denseSIFT, and LBP (or their variants). Models
+ ranking at the bottom, in order, are tiny image, line hist, geo
+ color, HMAX, and geo map8x8.
+
+ Performances and correlations on SUN dataset. We randomly
+ chose n = { 1, 5, 10, 20, 50} images per class for training and 50 for test.
+
+
+
+ Learned Lessons
+ 1) Models outperform humans in rapid categorization tasks, indicating that discriminative informa-
+ tion is in place but humans do not have enough time to extract it. Models outperform humans on
+ jumbled images and score relatively high in absence of (less) global information.
+ 2) We find that some models and edge detection methods are more efficient on line drawings and
+ edge maps. Our analysis helps objectively assess the power of edge detection algorithms to ex-
+ tract meaningful structural features for classification, which hints toward new directions.
+ 3) While models are far from human performance over object and scene recognition on natural
+ scenes, even classic models show high performance and correlation with humans on sketches.
+ 4) Consistent with the literature, we find that some models (e.g., HOG, SSIM, geo/texton, and
+ GIST) perform well. We find that they also resemble humans better.
+ 5) Invariance analysis shows that only sparseSIFT and geo_color are invariant to in-plane rotation
+ with the former having higher accuracy (our 3rd test). GIST, a model of scene recognition works
+ better than many models over both Caltech-256 and Sketch datasets.
+
+
+
+ Test 2: Recognition of Line Drawings and Edge Maps
+ Line Drawings
+ - Scenes were presented
+ to subjects for 17-87 ms
+ in a 6-alternative forced-choice
+ task (human acc = 77.3%).
+ - On color images, geo_color,
+ sparseSIFT, GIST, and SSIM
+ showed the highest correla-
+ tion (all with classification ac-
+ curacy ≥ 75%), while tiny
+ images, texton, LBHF, and
+ LBP showed the least. Over
+ the SUN dataset, HOG,
+ denseSIFT, and texton
+ showed high correlation with
+ human CM.
+ - It seems that those models
+ that take advantage of re-
+ gional histogram of features
+ (e.g., denseSIFT, GIST, geo_
+ x; x=map or color) or heavily
+ rely on edge histograms
+ (texton and HOG) show
+ higher correlation with
+ humans on color images
+ (although low in magnitude).
+ - Over line drawings: as with
+ images, geo_color, SSIM,
+ and sparseSIFT correlate
+ with humans. To our surprise,
+ geo_color worked well.
+
+ Human-model agreement on the 6-CAT dataset. See our paper
+ and its supplement for confusion matrices of models.
+
+ Geometric map: ground, pourous, sky, and vertical regions.
+
+ Edge maps for a sample image .
+ Edge Maps
+
+ Scene classification results using edge-detected images over the 6-CAT dataset. The Canny edge detector
+ leads to the best accuracies, followed by the LoG and gPb methods.
+ - A majority of models perform > 70% on line
+ drawings which is higher than human perfor-
+ mance (similar pattern on images with
+ human=77.3% and models > 80%).
+ - SVM trained on images and tested on line
+ drawings: Some models (e.g., line hists, GIST,
+ geo map, sparseSIFT) better generalize to
+ drawings.
+ SVM trained on line drawings and tested on
+ edge maps: Surprisingly, averaged over all
+ models, Sobel and Canny perform better than
+ gPb. GIST, line hists, and HMAX were the most
+ successful models using all edge detection
+ methods. sparseSIFT, LBP, geo_color, and
+ geo_texton were the most affected ones.
+ - Models using Canny technique achieved the
+ best scene classification accuracy.
+
+ Top: training a SVM from color photographs and testing on
+ line drawings, gPb edge maps, and inverted (FL) images. Bottom: SVM
+ trained on line drawings and applied to edge maps.
+
+
+
+ Test 3: Invariance Analysis
+
+ d' values over original, 90°, and 180° rotated animal images.
+ - A majority of models are invariant to scaling while few are drasti-
+ cally affected with a large amount of scaling (e.g., siagianItti07,
+ SSIM, line hists, and sparseSIFT).
+ - Interestingly, LBP here shows a similar pattern as humans across the
+ four stimulus categories (i.e., max for head, min for close body).
+ - Some models show higher similarity to human disruption over the
+ four categories of the animal dataset: sparseSIFT, SSIM, and HOG.
+
+
+
+ Test 4: Local vs. Global Information
+
+ Correlation and classification accuracy over jumbled images.
+ As expected, models based on histograms are less influ-
+ enced (e.g., geo_color, line hist, HOG, texton, and LBP).
+ - Models correlate higher with humans over scenes (OSR and
+ ISR) than objects, and better on outdoor scenes than indoors.
+ - Some models, which use global feature statistics, show high
+ correlation only on scenes but very low on objects (e.g.,
+ GIST, texton, geo map, and LBP), since they do not capture
+ object shape or structure.
+
+
+
+ Test 5: Object Recognition
+ - On Caltech-256, HOG achieves the highest accuracy,
+ about 33.28%, followed by SSIM, texton, and denseSIFT.
+ - GIST, which is specifically designed for scene
+ categorization, achieves 27.4% accuracy, better than some
+ models specialized for object recognition (e.g., HMAX).
+
+
+ Left: Object recognition performance on Caltech-256 dataset. Right: Recognition rate and correlations on Sketch dataset.
+ On sketch images, the shogSmooth model, specially designed for recognizing sketch images, outperforms others
+ (acc=57.2%). Texton histogram and SSIM ranked second and fourth, respectively. HMAX did very well (in contrast to
+ Caltech-256), perhaps due to its success in capturing edges, corners, etc.
+ - Overall, models did much better on sketches than on natural objects (results are almost 2 times higher than the Caltech-
+ 256). Here, similar to the Caltech-256, features relying on geometry (e.g., geo_map) did not perform well.
+
+
+
+ Summary
+
+ Classification results corresponding to 50 training and 50 testing images per class (50 testing over SUN; the remaining images over Caltech-256 and Sketch).
+ Animal vs. non-Animal corresponds to classification of 600 target vs. 600 distractor images. Top three models on each dataset are highlighted in red.
+
+
+
diff --git a/Train/CVPR-2014-013/CVPR-2014-013-Poster.pdf b/Train/CVPR-2014-013/CVPR-2014-013-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..51a8192a80598c8f553a239bbccf9f61e3e4804d
--- /dev/null
+++ b/Train/CVPR-2014-013/CVPR-2014-013-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9e4cc238e5c13638462e4a92ab3bd98d7b84af6b71aa613df0532dd132b2a15
+size 3096631
diff --git a/Train/CVPR-2014-013/CVPR-2014-013.pdf b/Train/CVPR-2014-013/CVPR-2014-013.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..305762bf052c9cb7ebc3e002c32f1e5cabe13f2d
--- /dev/null
+++ b/Train/CVPR-2014-013/CVPR-2014-013.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0afdbf2170552560cfc1142bb517a3b967a78300fbd324ab3fe8b49876c03dbd
+size 8293179
diff --git a/Train/CVPR-2014-013/info.txt b/Train/CVPR-2014-013/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..37d7d36ee4823988a5535746690de23e2003d05b
--- /dev/null
+++ b/Train/CVPR-2014-013/info.txt
@@ -0,0 +1,75 @@
+
+
+ Overview:
+ • Goal: Face alignment in unseen images.
+ • Constrained Local Models (CLM): combine an ensemble of local detectors with a global optimization strategy that constrains the
+ feature points to lie in the subspace spanned by a linear shape model (Point Distribution Model - PDM).
+ • CLM two-step fitting approach:
+ (1) Local search using the detectors (likelihood map for each landmark).
+ (2) Global optimization strategy that estimates the PDM parameters that jointly maximize all the detections.
+
+
+
+ CLM: Shape Model (PDM) and Local Detectors
+
+
+
+
+
+ Correlation in Fourier Domain
+
+ MOSSE Filter
+ Prob. of landmark (j) being aligned
+
+
+
+ Given a shape observation (y), find the optimal set of shape (b)
+ and pose parameters that maximize the posterior probability
+
+
+
+
+
+ Posterior Distribution (KDE)
+ Kernel Density Estimator (KDE)
+ Inference by a Regularized Particle Filter (RPF)
+
+
+
+
+ Non-Parametric Global Optimization
+
+
+ Alignment Quality: 'Robust' Product of All Landmarks
+
+
+
+
+ Fitting Performance - Labeled Faces in the Wild (LFW)
+
+
+
+
+
+ Tracking Performance - FGNET Talking Face Sequence
+
+
+
+
+
diff --git a/Train/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis-Poster.pdf b/Train/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ff64210bb5cec5936eed9c3383ca3ad9915059f5
--- /dev/null
+++ b/Train/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10a219627e42af3e2e5af0ca1cf7dc54edab539c6c97e0715335c56479e6e6a1
+size 1445870
diff --git a/Train/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis.pdf b/Train/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7270d4ae84ffafc868fe38f4b8ee52cf8c5d0d0f
--- /dev/null
+++ b/Train/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7a0033078bb49f27ab5198d1c23d4dc6a6419bf7186f646012f5296a13da216
+size 835970
diff --git a/Train/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis/info.txt b/Train/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e891ca4a778a80b4e9acf8fdd6b04c6509f29c1a
--- /dev/null
+++ b/Train/Calibrating Photometric Stereo by Holistic Reflectance Symmetry Analysis/info.txt
@@ -0,0 +1,85 @@
+
+
+ Motivation
+ The generalized bas-relief (GBR) ambiguity [1]
+
+
+ Resolve GBR by identifying special normals.
+
+ specular spike [2]
+
+ isotropic & reciprocal pairs [3]
+
+ diffuse maxima [4]
+ Disadvantages of these methods:
+ • Rely on the identification of special points
+ • Do not use all available information
+ Solve GBR in a global approach?
+
+
+
+ References
+ [1] P. Belhumeur, D. Kriegman, and A. Yuille. The
+ bas-relief ambiguity. IJCV, 1999.
+ [2] O. Drbohlav and M. Chantler. Can two specular
+ pixels calibrate photometric stereo? ICCV, 2005.
+ [3] P. Tan, L. Quan, and T. Zickler. The geometry of
+ reflectance symmetries. TPAMI, 2011.
+ [4] P. Favaro and T. Papadhimitri. A closed-form
+ solution to uncalibrated photometric stereo via
+ diffuse maxima. CVPR, 2012.
+ [5] N. Alldrin, S. Mallick, and D. Kriegman. Resolving
+ the generalized bas-relief ambiguity by entropy
+ minimization. CVPR, 2007.
+ [6] B. Shi, Y. Matsushita, Y. Wei, C. Xu, and P. Tan.
+ Self-calibrating photometric stereo. CVPR, 2010.
+
+
+
+ Theory
+
+ parameterization of BRDF
+ Key Assumption (bivariate BRDF)
+
+ The GBR ambiguity is uniquely determined by restoring the ‘low-rank’ structure of BRDF slices
+ estimated from at least two images! (see the paper for proofs)
+
+
+
+ Auto-calibration Method
+
+ Optimization Problem:
+ Penalize the variation along each row
+ of BRDF slices.
+ Solve by coarse-to-fine search in a
+ bounded interval of GBR parameters.
+
+
+
+ Experimental Results
+ Synthetic data results on the MERL BRDF database
+
+ Real datasets (more in the paper)
+
+ Evaluation with known shape & BRDF
+
+ Comparison on mean normal error (deg)
+
+
+
+
diff --git a/Train/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment-Poster.pdf b/Train/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f7a5634ff1bc01c869bb3ad862e140b732774eba
--- /dev/null
+++ b/Train/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4161df5a1b623158f6071f46a620c7ac950ed3aff6244c432e98cf117a78fa8
+size 1306313
diff --git a/Train/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment.pdf b/Train/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..110386eea40ef621797b0615fe3370fb191322d7
--- /dev/null
+++ b/Train/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:135d6aa1ea5d5474ea12f7194fd2e2efc7d691a4f25f841be2d826a96a25f1f6
+size 6660492
diff --git a/Train/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment/info.txt b/Train/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5f2f538477a680aaf72191ad68f889eef916416d
--- /dev/null
+++ b/Train/Cambridge Danehy Park Wind Turbine Preliminary Project Assessment/info.txt
@@ -0,0 +1,103 @@
+
+
+ Overview
+ Our team investigated the potential wind resource available at
+ Danehy Park in the City of Cambridge, to provide estimated power
+ generation figures, environmental and community impact analysis,
+ and rough financial estimates.
+
+ For the full report, please visit: http://web.mit.edu/wepa/reports/
+
+
+
+ Wind Resource Assessment
+ We collected wind speed data at the site over seven months using
+ sensors mounted to a light pole at two different heights. We then
+ correlated the data with historical data collected at Logan airport
+ from 1997 to 2010 to estimate the long‐term wind resource
+ available at Danehy Park. The method used to perform the
+ estimation is the binned linear regression Measure‐Correlate‐
+ Predict (MCP). Please see our written report for additional details.
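+
+ A sketch of binned linear-regression MCP (binning here is by reference
+ wind speed; direction-sector binning works the same way, and the array
+ names are placeholders):
+
+ import numpy as np
+
+ def mcp_binned(site, ref, ref_longterm, n_bins=12):
+     # Fit site = a*ref + b in each reference-speed bin over the
+     # concurrent period, then apply the per-bin fits to the
+     # long-term reference record.
+     edges = np.linspace(0.0, ref.max(), n_bins + 1)
+     pred = np.zeros(len(ref_longterm))
+     for k in range(n_bins):
+         m = (ref >= edges[k]) & (ref < edges[k + 1])
+         if m.sum() < 2:
+             continue
+         a, b = np.polyfit(ref[m], site[m], 1)
+         ml = (ref_longterm >= edges[k]) & (ref_longterm < edges[k + 1])
+         pred[ml] = a * ref_longterm[ml] + b
+     return pred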
+
+ From these estimated wind speeds, we used an approximate wind
+ shear scaling formula to compute a synthetic hourly wind speed
+ time series at hub height for each of the turbines in our evaluation
+ set. From these time series and the turbines’ power curves, we
+ computed estimated annual energy production for each turbine.
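+
+ A sketch of these two steps (the power-law shear profile and the shear
+ exponent alpha=0.2 are assumptions; the power-curve arrays would come
+ from the manufacturer's data sheet):
+
+ import numpy as np
+
+ def hub_height_speeds(u_meas, h_meas, h_hub, alpha=0.2):
+     # Approximate wind-shear scaling from measurement height to hub height.
+     return np.asarray(u_meas) * (h_hub / h_meas) ** alpha
+
+ def annual_energy_kwh(u_hub, curve_speeds, curve_kw):
+     # Interpolate the turbine power curve at each hourly speed and sum;
+     # speeds outside the curve are treated as producing no power.
+     p = np.interp(u_hub, curve_speeds, curve_kw, left=0.0, right=0.0)
+     return p.sum() * (8760.0 / len(u_hub))   # normalize to one year of hours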
+
+
+
+
+ Financial Analysis
+ Using approximate values for purchase, installation, electricity
+ generation, insurance, and maintenance, we computed net present
+ values (NPVs) for each of the turbines in our evaluation set. These
+ figures are very rough, as there were many details left out, notably
+ the effects of clean energy incentives and potential complications
+ arising from the site being built on capped landfill.
+
+
+
+
+ Turbine Evaluation Set
+
+ The above turbines were chosen to provide broad representative
+ coverage of current small to medium scale turbines that would likely
+ be considered for installation at this site. Larger turbines would
+ have to contend with increasingly burdensome noise and shadow
+ flicker issues, greater financial risk, as well as the potential for
+ greater community resistance.
+
+
+
+ Environmental Impact
+ Danehy Park is located within a mile of Fresh Pond, a large body of
+ water located in West Cambridge. Mass Audubon has indicated that
+ birds concentrate in the area in significant numbers in breeding
+ season, winter, and during migration.
+ Of the many bird species
+ observed living near or migrating
+ through the Fresh Pond area, the
+ few that are listed as endangered
+ or of special concern by the state
+ are infrequently observed in the
+ area. Also, several recent studies
+ examining birds and wind
+ turbines have observed that
+ most birds usually avoid turbine
+ blades. Please see our written
+ report for additional details.
+
+
+
+
+ Community Impact
+ The size of the area in which shadow flicker from spinning blades is
+ detectable by the human eye varies with turbine height and rotor
+ diameter. The map below shows the potential areas of effect for
+ the Polaris 20 (blue) and Northern Power 100 (yellow) in various
+ locations. The area of shadow flicker effect for the Aeronautica 29‐
+ 225 and Polaris 500 overlapped with neighboring residential areas.
+
+
+ Another community impact issue associated with wind turbine
+ development is noise level. Each turbine has a noise (sound
+ pressure) level that decreases as a function of distance from the
+ turbine, as shown in the above graph.
+ The noise level from a turbine decreases by approximately half (6
+ dB) for every doubling in distance from the turbine. The markers in
+ the graph above indicate the manufacturer‐published noise levels.
+ For reference, 55 dBA corresponds to the noise level of a busy
+ office. The diameter of the yellow shaded area in the map is
+ roughly 400 meters.
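+
+ As a worked form of this rule: a 6 dB drop per doubling of distance is
+ the spherical-spreading law
+
+ L(d) = L(d_0) − 20 log10(d / d_0),   since 20 log10(2) ≈ 6 dB.
+
+ For example (illustrative numbers, not the report's measurements), a
+ turbine producing 60 dBA at 50 m falls to about 54 dBA at 100 m and
+ about 48 dBA at 200 m.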
+
+
+
diff --git a/Train/Cascaded Shape Space Pruning for Robust Facial Landmark Detection/Cascaded Shape Space Pruning for Robust Facial Landmark Detection-Poster.pdf b/Train/Cascaded Shape Space Pruning for Robust Facial Landmark Detection/Cascaded Shape Space Pruning for Robust Facial Landmark Detection-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3a435edea66eb7ebeab208d9bc9916f2bc51abfc
--- /dev/null
+++ b/Train/Cascaded Shape Space Pruning for Robust Facial Landmark Detection/Cascaded Shape Space Pruning for Robust Facial Landmark Detection-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb9f116e21001ab5fba2c34a3de5443e09a9b71841c25f509964e72c828431dd
+size 759066
diff --git a/Train/Cascaded Shape Space Pruning for Robust Facial Landmark Detection/Cascaded Shape Space Pruning for Robust Facial Landmark Detection.pdf b/Train/Cascaded Shape Space Pruning for Robust Facial Landmark Detection/Cascaded Shape Space Pruning for Robust Facial Landmark Detection.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f00b2d8040e8d47d68325f51ef1050266dbafae1
--- /dev/null
+++ b/Train/Cascaded Shape Space Pruning for Robust Facial Landmark Detection/Cascaded Shape Space Pruning for Robust Facial Landmark Detection.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b98e4344949c21c81fdfc9a4e5cf7abe01602cf10b654c0acc89dac80c3ddc32
+size 1250663
diff --git a/Train/Cascaded Shape Space Pruning for Robust Facial Landmark Detection/info.txt b/Train/Cascaded Shape Space Pruning for Robust Facial Landmark Detection/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4c65ed20a7426bcfbbfe9be47bea15ff7e13f5cf
--- /dev/null
+++ b/Train/Cascaded Shape Space Pruning for Robust Facial Landmark Detection/info.txt
@@ -0,0 +1,103 @@
+
+
+ 1. Motivation & Contribution
+ Motivation
+ Limitations of existing methods proposed for obtaining the globally
+ optimal landmarks configuration from all possible shapes:
+ For Viola et al.’s cascaded AdaBoost framework, each landmark is
+ individually detected, which causes ambiguous detections.
+ The ASMs and AAMs are often prone to locally optimal solutions;
+ a good initialization is needed.
+ For the promising regression-based method, it is still challenging to
+ directly predict an accurate shape from the complex image
+ appearances [Saragih et al. CVPR2011].
+ Contribution
+ A discriminative structure classifier is presented to jointly
+ assess the configurations of landmarks.
+ A novel coarse-to-fine shape space pruning algorithm is
+ proposed to progressively filter out the incorrect candidate
+ shapes.
+
+
+
+ 2. Method Overview
+
+ Fig.1 Overview of the proposed cascaded shape space pruning algorithm
+
+
+
+ 3. Cascaded Shape Space Pruning
+ Step (a): Shape space pruning by individually removing
+ landmark candidates
+ The candidate shape space is first pruned by individually
+ removing impossible positions of each landmark with a separate
+ landmark detector [Viola et al. CVPR2001].
+ Step (b): Shape space pruning by jointly removing
+ candidate landmark positions
+ Face shape assessment via discriminative structure classifier
+
+
+ Learning of discriminative structure classifier
+ The Structured Output SVM is exploited to learn the model
+ parameters w, where the learned model should satisfy that:
+
+ Efficient cascaded shape space pruning
+ The landmarks configuration with the highest score can be
+ computed quickly by dynamic programming.
+ We quickly remove the least confident candidate shapes by their
+ distance from this highest-scoring configuration.
+ We do not use the time-consuming one-by-one assessment
+ scheme.
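+
+ A chain-structured sketch of this dynamic program (Viterbi-style
+ max-sum; the actual shape model may be tree-structured, but the same
+ recursion applies, and the score arrays are placeholders):
+
+ import numpy as np
+
+ def best_configuration(unary, pairwise):
+     # unary: (L, K) candidate scores per landmark;
+     # pairwise: (L-1, K, K) compatibility scores between neighbors.
+     L, K = unary.shape
+     score = unary[0].copy()
+     back = np.zeros((L - 1, K), dtype=int)
+     for l in range(1, L):
+         tot = score[:, None] + pairwise[l - 1]   # (prev K, cur K)
+         back[l - 1] = tot.argmax(axis=0)
+         score = tot.max(axis=0) + unary[l]
+     path = [int(score.argmax())]                 # backtrack the best path
+     for l in range(L - 2, -1, -1):
+         path.append(int(back[l, path[-1]]))
+     return path[::-1], float(score.max())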
+
+ Fig.2 Score distribution of a real F(I, s) on a face image.
+ More and more accurate shapes are predicted in the pruned
+ shape space with finer and finer appearance features.
+
+
+
+ 4. Experiments
+ Setup
+ Datasets
+ The BioID and LFW face databases
+ Evaluation metric
+ NRMSE (Normalized Root-Mean-Squared Error) is adopted as the error measure
+ CDF (Cumulative Distribution Function) of NRMSE:
+ CDF(x) = num(NRMSE ≤ x) / N,
+ where x is the specified error and N is the number of test images.
+ Results
+ Analysis of cascaded shape space pruning
+
+
+ Fig.3 Localization performances in
+ different stages on the LFW face database
+
+ Fig.4 Localization results in different stages (S1&S3)
+
+ Fig.5 Results on the LFW database
+ Fig.6 Results on the BioID database
+
+ Fig.7 Localization results on some challenging images
+
+
+
+ 5. Summary
+ The positions of landmarks are not only individually evaluated by the local detectors but also jointly
+ evaluated by the discriminative structure classifier.
+ The globally optimal configuration is progressively approximated by gradually filtering out the incorrect
+ candidate configurations.
+
+
+
diff --git a/Train/Class Specific 3D Object Shape Priors Using Surface Normals/Class Specific 3D Object Shape Priors Using Surface Normals-Poster.pdf b/Train/Class Specific 3D Object Shape Priors Using Surface Normals/Class Specific 3D Object Shape Priors Using Surface Normals-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..010a55388e9d7049c4e638ee8f3b0a0ab7649115
--- /dev/null
+++ b/Train/Class Specific 3D Object Shape Priors Using Surface Normals/Class Specific 3D Object Shape Priors Using Surface Normals-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee9ccf2e9f7ad2c1dcad21a083b9a4528b59f298ea936066811718d11352f475
+size 10773050
diff --git a/Train/Class Specific 3D Object Shape Priors Using Surface Normals/Class Specific 3D Object Shape Priors Using Surface Normals.pdf b/Train/Class Specific 3D Object Shape Priors Using Surface Normals/Class Specific 3D Object Shape Priors Using Surface Normals.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d65bf4ffd339dd8046b59b52b5dd201cb1bdb96f
--- /dev/null
+++ b/Train/Class Specific 3D Object Shape Priors Using Surface Normals/Class Specific 3D Object Shape Priors Using Surface Normals.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:47094703b3a41fb2bd1f5169cb5999a137a5fef2aa4379e59ba4e4e161fc51a8
+size 10229039
diff --git a/Train/Class Specific 3D Object Shape Priors Using Surface Normals/info.txt b/Train/Class Specific 3D Object Shape Priors Using Surface Normals/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..bac0f736931d50980f297420ccb4ff34e2fb0948
--- /dev/null
+++ b/Train/Class Specific 3D Object Shape Priors Using Surface Normals/info.txt
@@ -0,0 +1,135 @@
+
+
+ Introduction
+ • Some object classes are hard to reconstruct
+ – Lack of texture
+ – Transparency
+ – Reflection
+ • Solution: shape prior
+ – Shapes within object class similar
+ – Local distribution of surface normals
+
+
+
+
+
+
+ Formulation
+ • Baseline Method: Volumetric depth map fusion
+ – Segmentation of a voxel space into free and occupied space: us ∈ [0, 1]
+ • Shape prior formulation
+ – Voxel space aligned with object of known class
+ – Labeling of a voxel space into 3 labels (free space, ground,
+ object): x_s^i ∈ [0, 1] and Σ_i x_s^i = 1
+ • Convex Energy
+ – Unary term
+ ∗ Computed from depth maps, local preference for solid class
+ – Smoothness term
+ ∗ Dependent on surface orientation, position and involved labels
+
+
+
+ Overview
+ • Locally, surface normals similar between different examples
+ – Roof at the top of the car close to horizontal
+
+ • Local distribution of normals captured from training data
+ • Input data regularized using trained local normal distributions
+ • Trained anisotropic smoothness used for
+ – free space ↔ object
+ – ground ↔ object
+ • ground ↔ free space generic smoothness
+ • Label determined by smoothness
+
+
+
+ Convex Energy
+ • ρ_s^i ≥ 0: joint unary term at voxel s for label i
+ • φ_s^ij: convex smoothness term at voxel s for labels i and j
+ • x_s^i ∈ [0, 1]: indicating whether label i is chosen at voxel s
+ • x_s^ij − x_s^ji ∈ [−1, 1]^3: represents the local surface orientation
+ • e_k ∈ R^3: k-th canonical basis vector
+ • Optimized using primal-dual algorithm [Chambolle and Pock 2011]
+
+
+
+ Unary Term
+ • Only indicates free or occupied space
+
+
+
+
+ Shape Prior Training
+
+ • Training data, mesh models
+ • Transformed into volumetric models
+ • Per voxel s
+ – Acquire normal directions of all training samples
+ – Generate histogram over normal directions
+ – Probability of normal n at s, Ps (n) given by histogram
+
+
+
+ Discrete Wulff Shape
+ • φ_s(·): support function of a Wulff shape W_φs
+ [Esedoglu and Osher 2004]
+ – Wulff shape: convex shape
+ • Intersection of half spaces as parameterization of W_φs
+ – n: half-space normal
+ – d_s^n: distance of half-space boundary to origin
+ • We have φ_s(n) = d_s^n [Esedoglu and Osher 2004]
+ • d_s^n = −log(P_s(n)), determined by training data
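+
+ A per-voxel sketch of this training step (direction quantization over
+ roughly uniform bin centers is our assumption; only the histogram and
+ the −log conversion come from the text above):
+
+ import numpy as np
+
+ def fibonacci_sphere(n):
+     # Roughly uniform unit directions on the sphere, used as bin centers.
+     i = np.arange(n) + 0.5
+     phi = np.arccos(1 - 2 * i / n)
+     theta = np.pi * (1 + 5 ** 0.5) * i
+     return np.stack([np.sin(phi) * np.cos(theta),
+                      np.sin(phi) * np.sin(theta),
+                      np.cos(phi)], axis=1)
+
+ def wulff_weights(normals, n_bins=64, eps=1e-6):
+     # Histogram the training normals of one voxel over quantized
+     # directions, then convert to weights d_s^n = -log P_s(n).
+     centers = fibonacci_sphere(n_bins)
+     idx = np.argmax(normals @ centers.T, axis=1)   # nearest bin center
+     counts = np.bincount(idx, minlength=n_bins).astype(float)
+     probs = (counts + eps) / (counts.sum() + eps * n_bins)
+     return -np.log(probs)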
+
+
+
+
+ Trained Shape Prior
+
+
+ Slices through the bottle shape prior: vertical, horizontal
+
+
+
+ Results
+
+
+ Panels: input image, depth map, volumetric fusion result, shape prior result
+
+
+
+ Acknowledgements
+ We gratefully acknowledge the support of the 4DVideo
+ ERC starting grant #210806 and V-Charge grant
+ #269916 both under the EC’s FP7/2007-2013.
+
+
+
+
diff --git a/Train/Classification-Error Cost Minimization Strategy- dCMS/Classification-Error Cost Minimization Strategy- dCMS-Poster.pdf b/Train/Classification-Error Cost Minimization Strategy- dCMS/Classification-Error Cost Minimization Strategy- dCMS-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e098cb812781ec13494ab3f273781ff2a04ebb88
Binary files /dev/null and b/Train/Classification-Error Cost Minimization Strategy- dCMS/Classification-Error Cost Minimization Strategy- dCMS-Poster.pdf differ
diff --git a/Train/Classification-Error Cost Minimization Strategy- dCMS/Classification-Error Cost Minimization Strategy- dCMS.pdf b/Train/Classification-Error Cost Minimization Strategy- dCMS/Classification-Error Cost Minimization Strategy- dCMS.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d0c4d6c543d2eeb23bd26747785262412be180b8
--- /dev/null
+++ b/Train/Classification-Error Cost Minimization Strategy- dCMS/Classification-Error Cost Minimization Strategy- dCMS.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d7ea58ebc052b4f64c5d9b56a81c9e43bd79693ac4ca945a6525a953666c2ec
+size 180924
diff --git a/Train/Classification-Error Cost Minimization Strategy- dCMS/info.txt b/Train/Classification-Error Cost Minimization Strategy- dCMS/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..defb01f8542c9bfa74f8ee48a6fe289f0ae503d5
--- /dev/null
+++ b/Train/Classification-Error Cost Minimization Strategy- dCMS/info.txt
@@ -0,0 +1,106 @@
+
+
+ Motivation
+ Several applications have different costs associated with different classification-errors
+ Example: intrusion detection, biometric recognition, etc.
+ Most classification systems are geared towards minimizing the error rate and not cost
+ True objective function to be minimized is the cost of classification-error and not error-rate itself
+ Existing approaches cannot handle multi-class problems or dynamically changing costs
+ ROC curves (multi-class? [1]); cost-sensitive AdaBoost [2] (dynamically changing costs?)
+
+
+
+ Goal
+ Develop a classification-error cost minimization strategy that
+ Can deal with multiple classes in a principled manner
+ Is a simple post-training step
+ Does not require re-training of classifiers for changing costs
+ Is classifier type independent
+ Exploits statistical properties of the trained classifier
+
+
+
+ Contributions
+ Statistically significant reduction in costs incurred
+ Effective on
+ a variety of applications
+ data sets of varying dimensionalities
+ a variety of classifier types
+
+
+
+ Approach
+
+
+ Solution for a two-class, one-feature problem, known distributions
+ If unknown distribution
+ Estimate with a histogram
+ If multiple-features
+ Classification system: maps multiple-features to a single score/feature
+ If multiple-classes
+ High dimensional histogram is not feasible … so then?
+ Intuition: Convert C-class problem to C 2-class problems
+ We have a trained classification system
+ Probability of a misclassified instance classified as
+ class c actually belonging to class i:
+ Expected cost of false positives:
+
+ (Iterate to get a new confusion matrix with new thresholds)
+ Final classification decision:
+ Pick the class corresponding
+ to the score furthest away
+ from its corresponding
+ optimum threshold
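+
+ A sketch of the per-class threshold search this implies (one binary
+ sub-problem; a brute-force sweep over observed scores, with the cost
+ arguments standing in for the application's cost-matrix entries):
+
+ import numpy as np
+
+ def optimal_threshold(scores, labels, cost_fp, cost_fn):
+     # Post-training step: pick the threshold minimizing the expected
+     # cost of false positives plus false negatives on held-out scores.
+     order = np.argsort(scores)
+     s, y = scores[order], labels[order].astype(bool)
+     best_t, best_cost = s[0], np.inf
+     for t in s:
+         pred = s >= t
+         cost = cost_fp * np.sum(pred & ~y) + cost_fn * np.sum(~pred & y)
+         if cost < best_cost:
+             best_t, best_cost = t, cost
+     return best_t
+
+ Because this is a post-training sweep, changing costs only requires
+ re-running the sweep, not re-training the classifier.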
+
+
+
+ Results
+ Synthetic data: MLP neural network
+
+
+ MIT-DARPA intrusion detection [3]
+ 0.3 million data points
+ 5 classes: DenialOfService,
+ Probe, UserToRoot, RootToLocal,
+ Normal
+ 41 features
+ 3 feature sets: traffic, content,
+ intrinsic features
+ Ensemble-of-classifiers based
+ classification system: Learn++ [4]
+ (can perform data fusion)
+
+
+ PCA reduced intrusion detection
+
+ Other applications [5]
+
+
+
+
+ References:
+ [1] N. Lachiche and P. Flach. Improving accuracy and cost
+ of two-class and multi-class probabilistic classifiers using
+ ROC curves. ICML, 2003.
+ [2] Y. Ma and X. Ding. Robust real-time face detection
+ based on cost-sensitive AdaBoost method. ICME, 2003
+ [3] The UCI KDD Archive, Information and Computer
+ Science, University of California, Irvine,
+ http://kdd.ics.uci.edu/ databases/kddcup99/kddcup99.html
+ [4] D. Parikh and R. Polikar. An Ensemble-Based
+ Incremental Learning Approach to Data Fusion. In IEEE
+ Transactions on Systems, Man and Cybernetics, 2007.
+ [5] C. Blake and C. Merz. UCI Repository of Machine
+ Learning Database at Irvine CA, 2005.
+ http://mlearn.ics.uci.edu/MLRepository.html
+
+
+
diff --git a/Train/Combining Motion Planning and Optimization for Flexible Robot Manipulation/Combining Motion Planning and Optimization for Flexible Robot Manipulation-Poster.pdf b/Train/Combining Motion Planning and Optimization for Flexible Robot Manipulation/Combining Motion Planning and Optimization for Flexible Robot Manipulation-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2d7599799d02e2a7f59765282af146f4564e06c4
--- /dev/null
+++ b/Train/Combining Motion Planning and Optimization for Flexible Robot Manipulation/Combining Motion Planning and Optimization for Flexible Robot Manipulation-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea9e70357e5daba1e2b99b63cff4e3495d3f515396cef036cbb2e29154ba62a2
+size 3605443
diff --git a/Train/Combining Motion Planning and Optimization for Flexible Robot Manipulation/Combining Motion Planning and Optimization for Flexible Robot Manipulation.pdf b/Train/Combining Motion Planning and Optimization for Flexible Robot Manipulation/Combining Motion Planning and Optimization for Flexible Robot Manipulation.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e845c2c32cf59370da29a2881aeeaf75cfaaf643
--- /dev/null
+++ b/Train/Combining Motion Planning and Optimization for Flexible Robot Manipulation/Combining Motion Planning and Optimization for Flexible Robot Manipulation.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d09da878eb496027fe679e4e05ebc286b50d3d4f2a2e76d6e77c2f58ed83edf3
+size 973996
diff --git a/Train/Combining Motion Planning and Optimization for Flexible Robot Manipulation/info.txt b/Train/Combining Motion Planning and Optimization for Flexible Robot Manipulation/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0aa7ae5e2b34fa42c9ac8867b584bb7a57d8f827
--- /dev/null
+++ b/Train/Combining Motion Planning and Optimization for Flexible Robot Manipulation/info.txt
@@ -0,0 +1,99 @@
+
+
+ Motivation:
+ Robots face two challenges in natural environments:
+ ● Underspecified goals: no human to specify exact
+ goal configuration
+ ● Uncertain dynamics: Effects of robot's actions on
+ novel objects is uncertain
+ Approach:
+ ● For underspecified goals:
+ • Pose task as a constrained optimization problem
+ over a set of reward or cost terms.
+ • Can be defined manually or modeled from human
+ ● For uncertain dynamics:
+ • Quickly approximate dynamics for a set of actions
+ • Plan efficiently using sampling-based techniques
+ Our algorithm:
+ ● Searches in object configuration space using
+ Rapidly-exploring Random Trees (RRT)
+ ● Adds leaves to search tree by forward-simulating
+ the learned dynamics for each object-action pair
+ ● Uses the directGD heuristic to quickly search the
+ optimization landscape
+ ● Returns a plan from the starting state to the
+ best reachable state, given the cost function
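+
+ A minimal sketch of this planner (node selection and the dynamics
+ callback are simplified stand-ins: simulate(state, action) plays the
+ role of the learned dynamics model, cost(state) the optimization
+ objective):
+
+ import random
+
+ def rrt_plan(start, actions, simulate, cost, n_iters=2000):
+     nodes = [start]
+     parents = {0: (None, None)}              # node index -> (parent, action)
+     for _ in range(n_iters):
+         i = random.randrange(len(nodes))     # stand-in for nearest-neighbor
+         a = random.choice(actions)
+         nodes.append(simulate(nodes[i], a))  # forward-simulate learned dynamics
+         parents[len(nodes) - 1] = (i, a)
+     best = min(range(len(nodes)), key=lambda j: cost(nodes[j]))
+     plan = []                                # walk back to the start state
+     while parents[best][0] is not None:
+         i, a = parents[best]
+         plan.append(a)
+         best = i
+     return plan[::-1]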
+
+
+
+
+ Manipulation under uncertainty
+ Initial state
+ ● Robot begins with a workspace containing
+ three unfamiliar objects
+ ● Robot provided a cost function expressing
+ the following desiderata:
+ • Orthogonality
+ • Circumscribed area
+ • Distance from edge of workspace
+
+ Solution
+ ● All objects pushed to orthonormal
+ orientations in the center of the workspace
+ ● All paths free of collisions and redundant
+ actions
+ ● Robot monitored error and replanned as
+ necessary
+
+
+
+
+ Generalization to other manipulation tasks
+ Appropriate for tasks naturally expressed as
+ optimization of a cost function:
+ ● Arranging clutter on a surface
+ ● Multiple object placement
+ ● Table setting
+
+
+
+
+
+ Model Learning
+ Goal: discover the dynamics of each object
+ class over a set of action primitives
+
+
+
+
+
+
+ Advantages
+ Appropriate for tasks naturally expressed as
+ optimization of a cost function:
+ • Unlike conventional single-shot methods,
+ doesn't require user specified goals
+ • Always guaranteed to return reachable solution
+ • Favorable anytime characteristics
+ • Feasible for real-time planning in high DOF problems
+ Similar to Reinforcement Learning formalism, but
+ trades path optimality for realtime feasibility
+ • RL can require many full-passes through
+ configuration space to converge to optimal policy
+ • Handling continuous features requires discretization,
+ tiling, or other approaches
+ • RL better suited for problems with sparse reward
+ landscape, but optimizations offer a gradient (like
+ shaping reward) which allows fast heuristic search
+ with RRT
+
+
+
diff --git a/Train/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies-Poster.pdf b/Train/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e86dfdcff394fb7008b089b8886f9f7237e1931b
--- /dev/null
+++ b/Train/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09f9ef0b3e88823d0ae40007752c8c2c27dace4928ff25b764691819aec14de7
+size 10922752
diff --git a/Train/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies.pdf b/Train/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8be6003418ad52f5a8ff917f6e682983f7c3a784
--- /dev/null
+++ b/Train/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14826daaa3eb70c70d7ee70e6380ba67685ea861463bf694f3f5882fdfc9a8be
+size 7196666
diff --git a/Train/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies/info.txt b/Train/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ec3c0fc1de86f0791af0d17c619f5f246a1197df
--- /dev/null
+++ b/Train/Comparing Visual Feature Coding for Learning Disjoint Camera Dependencies/info.txt
@@ -0,0 +1,110 @@
+
+
+ Problem
+
+
+ Given two disjoint camera views, we wish to estimate:
+ (1) their inter-camera correlation,
+ (2) and their spatial-temporal dependencies.
+ Moreover, we aim to answer the question:
+ What visual representations are more effective?
+
+
+
+ Motivation
+ Overcome the unreliability of manually selecting visual features
+ from specific datasets;
+ Explore high-level structural constraints in coding low-level
+ features for associating object entities (supervised);
+ Employ co-occurrence statistics for constructing more reliable
+ representations (unsupervised).
+
+
+
+ Contributions
+ (1) A systematic investigation into the effectiveness of supervised
+ versus unsupervised feature coding methods for learning
+ inter-camera dependencies;
+ (2) Evaluation of the sensitivity of learning inter-camera time
+ correlation to the size of training data and the quality of scene
+ region decomposition.
+
+
+
+ Methodology
+ (i) Supervised method: Random Forest (RF) [1] for supervised
+ feature coding;
+ (ii) Unsupervised method: Latent Dirichlet Allocation (LDA) [2]
+ for mapping low-level features to code-words that capture topic
+ distributions;
+
+ Figure 1: An overview of feature coding comparison for learning inter-camera dependencies.
+ (iii) Time Delayed Dependency Inference: Time Delayed Mutual
+ Information (TDMI) [3] for learning inter-camera dependencies
+ with the aforementioned feature codes (a sketch of the TDMI
+ estimate follows the list);
+ (iv) A new metric called Mutual Information Margin (MIM) is
+ proposed for evaluating different feature coding methods,
+ defined from the TDMI functions yielded by the connected and
+ unconnected pairs of regions.
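+
+ A histogram-based sketch of the TDMI estimate used in step (iii)
+ (the bin count is an assumption; x and y are the activity series of
+ two regions):
+
+ import numpy as np
+
+ def tdmi(x, y, delay, n_bins=16):
+     # Mutual information I(x_t ; y_{t+delay}) from a joint histogram.
+     a = np.asarray(x)[:-delay or None]
+     b = np.asarray(y)[delay:]
+     joint, _, _ = np.histogram2d(a, b, bins=n_bins)
+     p = joint / joint.sum()
+     px = p.sum(axis=1, keepdims=True)
+     py = p.sum(axis=0, keepdims=True)
+     mask = p > 0
+     return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())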
+
+
+
+ Experiments
+
+
+ Figure 2: The layout and example views of an Underground Station (US) dataset (left)
+ and the i-LIDS (right) dataset.
+
+
+
+
+ Figure 3: Motion Saliency Maps obtained on the US and the i-LIDS datasets. The selected
+ regions are labelled by black digits.
+
+ Table 1: Sensitivity to the length of the training sequence: the average improvement
+ in MIM of different feature coding methods over the k-means vector quantisation based
+ representation. Mean improved MIM (MI-MIM) was computed by averaging individual
+ percentage of improvement over the testing range.
+
+ Table 2: Sensitivity to region decomposition: Mean Improved MIM was computed
+ following the same steps as explained in Table 1.
+ Experiment 1: sensitivity to the size of training data
+ (1) Topic code gave the most favourable results (see Table 1);
+ (2) Suggests that feature coding methods can suppress noisy
+ dependencies between unconnected region pairs.
+ Experiment 2: sensitivity to the quality of region decomposition
+ (1) Topic code shows the best performance for the US dataset while
+ RF pred does for the i-LIDS dataset (see Table 2);
+ (2) Suggests that person count and topic clusters can be useful cues
+ for inter-camera dependency learning.
+ Conclusion:
+ (1) We investigate the effectiveness of supervised (RF) and
+ unsupervised (LDA) feature coding methods for learning
+ inter-camera correlations;
+ (2) RF and LDA coding schemes outperform k-means vector
+ quantisation in robustness to small training data sizes;
+ (3) The coded features are more robust to poor scene region
+ decomposition;
+ (4) Feature coding can suppress noisy dependencies while capturing
+ inherent correlations between camera views.
+
+
+
+ References
+ [1] L. Breiman. Machine Learning, 45(1):5–32, 2001.
+ [2] D. Blei, A. Ng, and M. Jordan. J. Machine Learning Research, 3:993–1022, 2003.
+ [3] C. C. Loy, T. Xiang, and S. Gong. IEEE Trans. PAMI, 34(9):1799–1813, 2012.
+
+
+
diff --git a/Train/Contextual Gaussian Process Bandit Optimization/Contextual Gaussian Process Bandit Optimization-Poster.pdf b/Train/Contextual Gaussian Process Bandit Optimization/Contextual Gaussian Process Bandit Optimization-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2171f12d3ddd4b8e3bdc6c1c87ae8577335a595e
--- /dev/null
+++ b/Train/Contextual Gaussian Process Bandit Optimization/Contextual Gaussian Process Bandit Optimization-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:440d820e4a9d8789bb66e183a80f3d89a28c9c841562b101f68fb7ee2c71ea04
+size 345623
diff --git a/Train/Contextual Gaussian Process Bandit Optimization/Contextual Gaussian Process Bandit Optimization.pdf b/Train/Contextual Gaussian Process Bandit Optimization/Contextual Gaussian Process Bandit Optimization.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d3a7ad494125573ab0c19a9cc42051478093d0ee
--- /dev/null
+++ b/Train/Contextual Gaussian Process Bandit Optimization/Contextual Gaussian Process Bandit Optimization.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1008e852b24c0eee85541a31def2e2e6a777ed01609f5f90b2000e4e3d77fe15
+size 485769
diff --git a/Train/Contextual Gaussian Process Bandit Optimization/info.txt b/Train/Contextual Gaussian Process Bandit Optimization/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..60c8f3a80c92f58fb4bea59034b4bd4777ee8571
--- /dev/null
+++ b/Train/Contextual Gaussian Process Bandit Optimization/info.txt
@@ -0,0 +1,144 @@
+
+
+ Contributions
+ An efficient algorithm, CGP-UCB, for the contextual GP bandit problem
+ Flexibly combining kernels over contexts and actions
+ Generic approach for deriving regret bounds for composite kernel functions
+ Evaluate CGP-UCB on automated vaccine design and sensor management
+
+
+
+ Contextual Bandits [cf., Auer ’02; Langford & Zhang ’08]
+ Play a game for T rounds:
+ Receive context zt ∈ Z
+ Choose an action st ∈ S
+ Receive a payoff y_t = f(s_t, z_t) + ε_t (f unknown).
+ Cumulative regret for context-specific actions
+ Incur contextual regret r_t = sup_{s'∈S} f(s', z_t) − f(s_t, z_t).
+ After T rounds, the cumulative contextual regret is R_T = Σ_{t=1}^T r_t.
+ The context-specific best action is a demanding benchmark.
+
+
+
+ Gaussian Processes (GP)
+ Model payoff function using GPs: f ∼ GP(µ, k)
+ • observations y_T = [y_1 ... y_T]^T at inputs A_T = {x_1, ..., x_T}
+ y_t = f(x_t) + ε_t with i.i.d. Gaussian noise ε_t ∼ N(0, σ²)
+ Posterior distribution over f is a GP with
+ mean µ_T(x) = k_T(x)^T (K_T + σ²I)^{−1} y_T
+ covariance k_T(x, x') = k(x, x') − k_T(x)^T (K_T + σ²I)^{−1} k_T(x')
+ variance σ_T²(x) = k_T(x, x)
+ where k_T(x) = [k(x_1, x) ... k(x_T, x)]^T and K_T is the kernel matrix.
+
+
+
+ GP-UCB [Srinivas, Krause, Kakade, Seeger ICML 2010]
+
+
+ Context-free upper confidence bound algorithm (GP-UCB)
+ At round t, GP-UCB picks the action x_t = argmax_{x∈X} µ_{t−1}(x) + √β_t σ_{t−1}(x)
+ with appropriate β_t. This trades off exploration (high σ) and exploitation (high µ).
+ Maximum information gain bounds regret
+ The (context-free) regret R_T of GP-UCB is bounded by O*(√(T β_T γ_T)), where
+ γ_T is defined as the maximum information gain: γ_T = max_{A⊆X, |A|≤T} I(y_A; f), which
+ quantifies the reduction in uncertainty about f achieved by revealing y_A.
+ Bounds for Kernels
+ Bounds on γ_T exist for linear, squared exponential and Matérn kernels.
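+
+ A minimal sketch of the GP posterior and the UCB rule above (not the authors'
+ code; an RBF kernel with unit prior variance is assumed). CGP-UCB applies the
+ same rule over S × Z with the context coordinate fixed to z_t:
+
+ import numpy as np
+
+ def rbf(A, B, ls=1.0):
+     # Squared-exponential kernel matrix between rows of A and B.
+     d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
+     return np.exp(-0.5 * d2 / ls ** 2)
+
+ def gp_posterior(X, y, Xs, noise=0.1, ls=1.0):
+     # Posterior mean/std at candidates Xs given observations (X, y).
+     K = rbf(X, X, ls) + noise ** 2 * np.eye(len(X))
+     ks = rbf(X, Xs, ls)
+     sol = np.linalg.solve(K, ks)
+     mu = sol.T @ y
+     var = 1.0 - (ks * sol).sum(axis=0)   # k(x, x) = 1 for this kernel
+     return mu, np.sqrt(np.clip(var, 0.0, None))
+
+ def ucb_pick(X, y, candidates, beta):
+     # x_t = argmax mu(x) + sqrt(beta) * sigma(x)
+     mu, sigma = gp_posterior(X, y, candidates)
+     return candidates[np.argmax(mu + np.sqrt(beta) * sigma)]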
+
+
+
+ Contextual Upper Confidence Bound Algorithm (CGP-UCB)
+ At round t, CGP-UCB picks the action s_t = argmax_{s∈S} µ_{t−1}(s, z_t) + √β_t σ_{t−1}(s, z_t),
+ where µ_{t−1}(·) and σ_{t−1}(·) are the posterior mean and standard deviation of the GP
+ over the joint set X = S × Z, conditioned on the observations
+ (s_1, z_1, y_1), ..., (s_{t−1}, z_{t−1}, y_{t−1}).
+
+
+
+ Bounds on Contextual Regret
+ Let δ ∈ (0, 1). Suppose one of the following assumptions holds:
+ (1) X is finite, and f is sampled from a known GP prior with known noise variance σ²;
+ (2) X ⊆ [0, r]^d is compact and convex, d ∈ N, r > 0; f is sampled from a
+ known GP prior with known noise variance σ², and k(x, x') has smooth
+ derivatives;
+ (3) X is arbitrary and ||f||_k ≤ B; the noise variables ε_t form an arbitrary martingale
+ difference sequence (meaning that E[ε_t | ε_1, ..., ε_{t−1}] = 0 for all t ∈ N),
+ uniformly bounded by σ.
+ Then for appropriate choices of β_t, the contextual regret of CGP-UCB is bounded by
+ O*(√(T γ_T β_T)) w.h.p. Precisely, with probability at least 1 − δ,
+ R_T ≤ √(C_1 T β_T γ_T) + 2, where C_1 = 8 / log(1 + σ^{−2}).
+
+
+
+ Composite Kernels
+
+
+ Figure: product of a squared exponential kernel and a linear kernel;
+ additive combination of a payoff that smoothly depends on context
+ and one that exhibits clusters of actions.
+ Product kernel
+ • k = k_S ⊗ k_Z, where (k_S ⊗ k_Z)((s, z), (s', z')) = k_Z(z, z') k_S(s, s')
+ • Two context-action pairs are similar (large correlation) if the contexts are
+ similar and the actions are similar
+ Additive kernel
+ • (k_S ⊕ k_Z)((s, z), (s', z')) = k_Z(z, z') + k_S(s, s')
+ • Generative model: first sample a function f_S(s, z) that is constant along z and
+ varies along s with regularity as expressed by k_S; then sample a function f_Z(s, z),
+ which varies along z and is constant along s.
+
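+ A minimal sketch of the two combinations (illustrative kernels, not the
+ paper's code):
+
+ import numpy as np
+
+ def product_kernel(kS, kZ):
+     # (kS ⊗ kZ)((s, z), (s', z')) = kS(s, s') * kZ(z, z')
+     return lambda sz, sz2: kS(sz[0], sz2[0]) * kZ(sz[1], sz2[1])
+
+ def additive_kernel(kS, kZ):
+     # (kS ⊕ kZ)((s, z), (s', z')) = kS(s, s') + kZ(z, z')
+     return lambda sz, sz2: kS(sz[0], sz2[0]) + kZ(sz[1], sz2[1])
+
+ # Example: linear structure over actions, smooth dependence on contexts.
+ k_action  = lambda s, s2: float(np.dot(s, s2))
+ k_context = lambda z, z2: float(np.exp(-0.5 * np.sum((z - z2) ** 2)))
+ k_joint   = product_kernel(k_action, k_context)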
+
+
+ Bounds for Composite Kernels
+ Maximum information gain for a GP with kernel k on set V: γ(T; k, V) = max_{A⊆V, |A|≤T} I(y_A; f)
+ Product kernel
+ Let k_Z be a kernel function on Z with rank at most d. Then
+ γ(T; k_S ⊗ k_Z) ≤ d γ(T; k_S) + d log T.
+ Additive kernel
+ Let k_S and k_Z be kernel functions on S and Z respectively. Then
+ γ(T; k_S ⊕ k_Z) ≤ γ(T; k_S) + γ(T; k_Z) + 2 log T.
+
+
+
+
+
+
+ Automated Vaccine Design
+ Task Discover peptide sequences binding to MHC molecules
+ Context Features encoding the MHC alleles
+ Action Choose a stimulus (the vaccine) s ∈ S that maximizes an observed response
+ (binding affinity).
+ Kernels Use a finite inter-task covariance kernel K_Z with rank m_Z to model the
+ similarity of different experiments, and a Gaussian kernel k_S(s, s') to model the
+ experimental parameters.
+
+
+
+ Learning to Monitor Sensor Networks
+
+
+
+ Figure: temperature data from a network of 46 sensors at Intel Research;
+ CGP-UCB using the average temperature vs. CGP-UCB using the minimum
+ temperature, over time (h).
+ Task Given a sensor network, monitor maximum temperatures in building
+ Context Time of day
+ Action Pick 5 sensors to activate
+ Kernels Joint spatio-temporal covariance function using the Matérn kernel
+
+
+
diff --git a/Train/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis-Poster.pdf b/Train/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6434424e8dd53a59157cec2f86dd58c7a8a6d146
--- /dev/null
+++ b/Train/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0942c5887fd9af5ef5ab84b65b456c7b7aeeebdeaccd0d97811a3b15b4c66d8a
+size 929240
diff --git a/Train/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis.pdf b/Train/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d16c46ae9052515c9a6d9250b3eadc3cc1a1bd48
--- /dev/null
+++ b/Train/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd235bfea383f1d6bc5c0bb162c76975caedff270a5f230d6641c1bb05f6e06c
+size 2220112
diff --git a/Train/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis/info.txt b/Train/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1e84affc3ab5408401e6dfb3bdf6f8c1c776b974
--- /dev/null
+++ b/Train/Cross-lingual Knowledge Validation Based Taxonomy Derivation from Heterogeneous Online Wikis/info.txt
@@ -0,0 +1,115 @@
+
+
+ Introduction
+ Creating KBs from crowd-sourced wikis has attracted significant
+ research interest in the field of the intelligent Web.
+ However, the user-generated subsumption relations in the wikis and the
+ semantic taxonomic relations in the KBs are not exactly the same.
+ Current taxonomy derivation approaches include:
+ heuristic-based methods, and
+ corpus-based methods.
+ validation based taxonomy derivation from heterogeneous online wikis.
+ The problem of cross-lingual taxonomic relation prediction is at the heart
+ of our work.
+
+ Example of Mistakenly Derived Facts
+
+
+
+ Approach
+ Given two wikis W_1, W_2 in different languages
+ (English and Chinese here) and the set of cross-
+ lingual links CL, Cross-lingual Taxonomy
+ Derivation is a cross-lingual knowledge
+ validation based boosting process that simultane-
+ ously learns four taxonomic prediction
+ functions f^en, f^zh, g^en and g^zh in T iterations.
+
+ Framework
+ where f^en, f^zh, g^en and g^zh denote the English
+ subClassOf, the Chinese subClassOf, the English
+ instanceOf, and the Chinese instanceOf prediction
+ functions, respectively.
+ The Dynamic Adaptive Boosting (DAB) model
+ maintains a dynamically changing training set to
+ achieve better generalization ability via
+ knowledge validation with cross-lingual links.
+ 1. Weak Classifier
+ We use a binary classifier as the basic
+ learner, implemented as a decision tree.
+ Linguistic Heuristic Features
+ Feature 1: English Features.
+ Whether the head word of the label is plural or
+ singular.
+ Feature 2: Chinese Features.
+ Whether the super-category’s label is the
+ prefix/suffix of the sub-category’s label, or
+ whether the category’s label is the
+ prefix/suffix of the article’s label.
+ Feature 3: Common Features for instanceOf.
+ Whether the comment contains the label or
+ not.
+ Structural Features
+ Six Normalized Google Distance based
+ structural features are defined on articles,
+ properties and categories.
+ 2. Boosting Model
+ Active Set A: the set of training data.
+ Pool P: the set of all labeled data.
+ Unknown Data Set U: the set of unlabeled
+ data.
+
+ Learning Process (sketched in code below).
+ Train a hypothesis on the current active set.
+ Re-weight the example weight vector.
+ Predict U using the current classifier and
+ validate the results using CL.
+ Expand P and update U.
+ Resample A with constant size.
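+
+ A schematic sketch of one DAB round, assuming a hypothetical validate
+ hook that checks predictions against cross-lingual links (the boosting
+ re-weighting step is elided):
+
+ import numpy as np
+ from sklearn.tree import DecisionTreeClassifier
+
+ def dab_round(A_X, A_y, A_w, U_X, validate, pool, size):
+     # Train the weak learner (a decision tree) on the active set A.
+     clf = DecisionTreeClassifier(max_depth=5)
+     clf.fit(A_X, A_y, sample_weight=A_w)
+     # Predict the unknown set U and keep only CL-validated predictions.
+     pred = clf.predict(U_X)
+     ok = validate(U_X, pred)                 # hypothetical validation hook
+     pool.extend(zip(U_X[ok].tolist(), pred[ok].tolist()))
+     # Resample a constant-size active set from the expanded pool P.
+     idx = np.random.choice(len(pool), size=size, replace=False)
+     return clf, [pool[i] for i in idx]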
+
+
+
+ Experiments
+ Comparison Methods
+ Heuristic Linking (HL): only uses the
+ linguistic heuristic features, and trains the
+ taxonomic relation prediction functions
+ separately using the decision tree model.
+ Decision Tree (DT): uses both the linguistic
+ heuristic features and the structural features,
+ and trains the taxonomic relation prediction
+ functions separately using the decision tree
+ model.
+ Adaptive Boosting (AdaBoost): uses the
+ same basic learner, and iteratively trains the
+ taxonomic relation prediction functions using
+ the Real AdaBoost model.
+ Performance of Cross-lingual Taxonomy Derivation with Different Methods (%)
+
+
+
+ Boosting Contribution Comparison
+
+
+
+ Conclusion and Future Work
+ DAB provides a new way to approach language processing tasks using cross-language resources.
+ Future work includes automatically learning more cross-lingual validation rules and conducting more experiments in other languages.
+
+
+
+
+
+
diff --git a/Train/Cultivation and Characterization of Microorganisms in Antarctic Lakes/Cultivation and Characterization of Microorganisms in Antarctic Lakes-Poster.pdf b/Train/Cultivation and Characterization of Microorganisms in Antarctic Lakes/Cultivation and Characterization of Microorganisms in Antarctic Lakes-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..802f64ef9e3fd699e34af0c3f95cd6beab3b8166
Binary files /dev/null and b/Train/Cultivation and Characterization of Microorganisms in Antarctic Lakes/Cultivation and Characterization of Microorganisms in Antarctic Lakes-Poster.pdf differ
diff --git a/Train/Cultivation and Characterization of Microorganisms in Antarctic Lakes/Cultivation and Characterization of Microorganisms in Antarctic Lakes.pdf b/Train/Cultivation and Characterization of Microorganisms in Antarctic Lakes/Cultivation and Characterization of Microorganisms in Antarctic Lakes.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..466c5d0cb845d1a0a6131d87b2c70b52c3d6af1e
--- /dev/null
+++ b/Train/Cultivation and Characterization of Microorganisms in Antarctic Lakes/Cultivation and Characterization of Microorganisms in Antarctic Lakes.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68339bc06675ad173349d99d3f61d86df3b157892c5af8009a002f548b5c099d
+size 307222
diff --git a/Train/Cultivation and Characterization of Microorganisms in Antarctic Lakes/info.txt b/Train/Cultivation and Characterization of Microorganisms in Antarctic Lakes/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..a4d96eaaacec6d1a2b1b66aff2837a68e292146e
--- /dev/null
+++ b/Train/Cultivation and Characterization of Microorganisms in Antarctic Lakes/info.txt
@@ -0,0 +1,71 @@
+
+
+ INTRODUCTION
+ Antarctic Lakes harbor pristine biotopes and include freshwater and saline systems that are subject to long
+ periods of ice and snow-cover, low temperatures and low levels of photosynthetically active radiation.
+ Unfortunately, the recovery of cultivable organisms from Antarctica is very difficult, and the development of new
+ methods for the resuscitation and cultivation of Antarctic microorganisms is very important. In order to understand
+ the diversity, survival, and activity of microorganisms in the Antarctic zone, we cultivated and characterized
+ bacterial isolates from Antarctic lakes.
+
+
+
+ Sampling site
+ Water samples were collected from Antarctic Lakes in Skavrvsnes near Syowa Station area (A-6 Ike 0m,
+ Jan.19, 2005; A-7 Ike 0m, Jan.29, 2005; B-1 Ike 0m, Jan.21, 2005; B-3 Ike 0m, Jan.21, 2005; Hunazoko Ike 4m,
+ Jan.22, 2005; Tokkuri Ike 4m, Jan.22, 2005; Suribati Ike 10m, Jan.24, 2005 in Fig.1).
+
+ Fig.1. a. The location of Skavrvsnes; b. Lakes in Skavrvsnes. 1. Hunazoko, 4. Tokkuri, 9.B1,11.B3, 20.
+ A6, 21. A7, 30. Suribati.
+
+
+
+ RESULT AND DISCUSSION
+ The results of the homology analysis of 16S rRNA gene sequences showed that the isolates represented a
+ wide diversity of both gram-positive and gram-negative heterotrophic bacteria belonging to five major classes:
+ Gamma-proteobacteria, Actinobacteria, Alpha-proteobacteria, Bacilli, and Flavobacteria. Isolates related to
+ Flavobacteria formed the largest cluster in terms of diversity, with 9 phylogenetically distinct organisms.
+
+
+
+
+ METHOD
+
+
+
+
+ Microorganism description
+
+ Fig.2. Photo of cultivated bacteria from Antarctica lake sample.
+ a. Isolate 41; b. Isolate 15; c. Isolate 32; d. Isolate 23.
+
+
+
+ CONCLUSION
+ 1. While only two strains were isolated from
+ incubation at 20ºC by spreading the Antarctic lake samples
+ directly onto agar-media plates, twenty strains were isolated
+ at 4ºC by filtering the Antarctic lake samples. Microorganisms
+ in Antarctic lakes have likely been subjected to low
+ temperatures and limited nutrients for a long time, so
+ high incubation temperatures and rich nutrient media might
+ be stressors for Antarctic microorganisms. Therefore,
+ incubation at low temperature with media not containing
+ rich nutrients is more suitable for Antarctic lake
+ microorganisms, because the isolated microorganisms are
+ likely adapted to the oligotrophic conditions of many
+ cold habitats.
+ 2. 16S rDNA sequencing results suggested that three of the
+ isolates might represent new species. Strains No. 15, 41, and
+ 27 (28, 32, and 44) showed 93%, 97%, and 97% similarity
+ with the GenBank database, respectively. They are likely
+ novel species in the genera Gillisia, Psychroflexus, and
+ Flavobacterium, which belong to the Flavobacteria group.
+
+
+
diff --git a/Train/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions-Poster.pdf b/Train/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8cc6f34f59f8a7a0e4c4dfdd716ecae7bd2949b9
--- /dev/null
+++ b/Train/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05dd53d63f3bf4268bd239116c49d897a0244a76e6cd2f67cbfdf0744ecc9578
+size 254116
diff --git a/Train/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions.pdf b/Train/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fd5ef6358d28ed1877deaaa8a6e7ae76a713fea6
--- /dev/null
+++ b/Train/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a7dfd473fb314c00b88eda1cfaac0e0a9883c0e47830d93a6d22ad2639581da
+size 304343
diff --git a/Train/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions/info.txt b/Train/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ab1bdaa48216eaff9c438143be20634ca058e50a
--- /dev/null
+++ b/Train/Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions/info.txt
@@ -0,0 +1,136 @@
+
+
+ Overview
+ Introduce the notion of curvature, to provide better connections between theory
+ and practice.
+ Study the role of curvature in:
+ approximating submodular functions everywhere,
+ learning submodular functions, and
+ constrained minimization of submodular functions.
+ Provide improved curvature-dependent worst-case approximation guarantees
+ and matching hardness results.
+
+
+
+ Curvature of a Submodular function
+ Define three variants of curvature of a monotone submodular function f; the total
+ curvature is κ_f = 1 − min_{j∈V} f(j | V \ {j}) / f(j), with set-restricted variants κ_f(S) and κ̂_f(S).
+ Proposition: κ̂_f(S) ≤ κ_f(S) ≤ κ_f.
+ Curvature captures the linearity of a submodular function and gives
+ a more gradual characterization of the hardness of various problems.
+ It was previously investigated for submodular maximization
+ (Conforti & Cornuejols, 1984).
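+
+ A minimal sketch (our illustration) of computing the total curvature from a
+ value oracle, using the standard definition above:
+
+ def total_curvature(f, V):
+     # kappa_f = 1 - min_{j in V} f(j | V \ {j}) / f({j});
+     # kappa_f = 0 for a modular f, and grows toward 1 with curvature.
+     fV = f(V)
+     return 1.0 - min((fV - f(V - {j})) / f({j}) for j in V)
+
+ # Example: f(S) = sqrt(|S|) on a ground set of 5 elements.
+ f = lambda S: len(S) ** 0.5
+ print(total_curvature(f, frozenset(range(5))))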
+
+
+
+
+ Approximating Submodular functions Everywhere
+ Problem: Given a submodular function f in the form of a value oracle,
+ find an approximation f̂ (within polynomial time and space) such that
+ f̂(X) ≤ f(X) ≤ α_1(n) f̂(X), ∀X ⊆ V, for a polynomial α_1(n).
+ We provide a blackbox technique to transform such bounds into curvature-dependent
+ ones.
+ Main technique: approximate the curve-normalized version f^κ as f̂^κ, such that
+ f̂^κ(X) ≤ f^κ(X) ≤ α(n) f̂^κ(X).
+
+ Ellipsoidal Approximation:
+ The Ellipsoidal Approximation algorithm of Goemans et al. provides a function
+ of the form √(w(X)) with an approximation factor of α_1(n) = O(√n log n).
+ Corollary: there exists a function of the form √(w(X)) with a
+ curvature-dependent approximation factor.
+ Lower bound: Given a submodular function f with curvature κ_f, there does not
+ exist any polynomial-time algorithm that approximates f within a factor of
+ n^{1/2−ε} / (1 + (n^{1/2−ε} − 1)(1 − κ_f)), for any ε > 0.
+ Modular Upper Bound:
+ The simplest approximation (and upper bound) is f̂_m(X) = Σ_{j∈X} f(j).
+ Lemma: Given a monotone submodular function f, it holds that
+ f(X) ≤ Σ_{j∈X} f(j) ≤ |X| / (1 + (|X| − 1)(1 − κ_f(X))) · f(X).
+ This bound is tight for the class of modular approximations.
+ Corollary: The class of functions f(X) = Σ_{i=1}^k λ_i [w_i(X)]^a, λ_i ≥ 0, satisfies
+ f(X) ≤ Σ_{j∈X} f(j) ≤ |X|^{1−a} f(X).
+
+
+
+ Learning Submodular Functions
+ Problem: Given i.i.d. training samples {(X_i, f(X_i))}_{i=1}^m from a distribution D, learn
+ an approximation f̂(X) that is, with probability 1 − δ, within a multiplicative factor of
+ α_2(n) from f.
+ Balcan & Harvey propose an algorithm which PMAC-learns any submodular
+ function up to a factor of √(n + 1).
+ We improve this bound to a curvature-dependent one.
+ Lemma: Let f be a monotone submodular function for which we know an upper
+ bound on its curvature κ_f and the singleton weights f(j) for all j ∈ V. There is
+ a poly-time algorithm which PMAC-learns f within a factor of
+ √(n + 1) / (1 + (√(n + 1) − 1)(1 − κ_f)).
+ We also provide an algorithm which does not need the singleton weights.
+ Lemma: If f is a monotone submodular function with known curvature (or a
+ known upper bound) κ̂_f(X), ∀X ⊆ V, then for every ε, δ > 0 there is an algorithm
+ which PMAC-learns f(X) within a factor of |X| / (1 + (|X| − 1)(1 − κ̂_f(X))).
+ Corollary: The class of functions f(X) = Σ_{i=1}^k λ_i [w_i(X)]^a, λ_i ≥ 0, can be learnt to
+ a factor of |X|^{1−a}.
+ Lower bound: Given a class of submodular functions with curvature κ_f, there
+ does not exist a polynomial-time algorithm that is guaranteed to PMAC-learn f
+ within a factor of n^{1/3−ε'} / (1 + (n^{1/3−ε'} − 1)(1 − κ_f)), for any ε' > 0.
+
+
+
+ Constrained Submodular Minimization
+ Problem: Minimize a submodular function f over a family C of feasible sets, i.e.,
+ min_{X∈C} f(X). C could encode cardinality (knapsack) constraints,
+ cuts, paths, matchings, trees, etc.
+ The main framework is to choose a surrogate function f̂ and optimize it instead of f.
+ Ellipsoidal Approximation based (EA):
+ Use the curvature-based Ellipsoidal Approximation as the surrogate function.
+ Lemma: For a submodular function with curvature κ_f < 1, algorithm EA will
+ return a solution X̂ that satisfies a curvature-dependent guarantee.
+ Modular Upper Bound based:
+ Use the simple modular upper bound as a surrogate.
+ Lemma: Let X̂ ∈ C be the solution for minimizing Σ_{j∈X} f(j) over C; then X̂
+ satisfies a curvature-dependent guarantee relative to the optimum X*.
+ Corollary: The class of functions f(X) = Σ_{i=1}^k λ_i [w_i(X)]^a, λ_i ≥ 0, can be
+ minimized up to a factor of |X*|^{1−a}.
+
+ Effect of curvature: a polynomial change in the bounds!
+ Experiments:
+ Define a function f̄_R(X) = κ min{|X ∩ R| + β, |X|, α} + (1 − κ)|X|.
+ Choose α = n^{1/2+ε} and β = n^{2ε}, and C = {X : |X| ≥ α}.
+
+
+
+
+
+
+ Acknowledgements
+ Based upon work supported by National Science Foundation Grant No. IIS-1162606,
+ and by a Google, a Microsoft, and an Intel research award. This was also funded in part
+ by Office of Naval Research under grant no. N00014-11-1-0688, NSF CISE Expeditions award
+ CCF-1139158, DARPA XData Award FA8750-12-2-0331, and gifts from Amazon Web Services,
+ Google, SAP, Blue Goji, Cisco, Clearstory Data, Cloudera, Ericsson, Facebook, General
+ Electric, Hortonworks, Intel, Microsoft, NetApp, Oracle, Samsung, Splunk, VMware and Yahoo!
+
+
+
diff --git a/Train/Decision Tree Fields/Decision Tree Fields-Poster.pdf b/Train/Decision Tree Fields/Decision Tree Fields-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1f943b4629cc3d88989cc9c3121284f15c35e8f7
--- /dev/null
+++ b/Train/Decision Tree Fields/Decision Tree Fields-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:407b9b8a2e8215fedf2d25e2582cf66b3b2a5a15a01512a3f139fceff2dc251f
+size 1269694
diff --git a/Train/Decision Tree Fields/Decision Tree Fields.pdf b/Train/Decision Tree Fields/Decision Tree Fields.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b7d8c47d9e3bf14a547b150a78af2c4edf98a908
--- /dev/null
+++ b/Train/Decision Tree Fields/Decision Tree Fields.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7e2bfcef4dd2b20911f0d34c8ea9f999d65c2993b20a32d5d3ac9a4f7528e88
+size 733451
diff --git a/Train/Decision Tree Fields/info.txt b/Train/Decision Tree Fields/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f1e9493579191377842ffa6fb0477147dc8fbb82
--- /dev/null
+++ b/Train/Decision Tree Fields/info.txt
@@ -0,0 +1,60 @@
+
+
+ Overview
+ DTF = Efficiently learnable non-parametric CRFs for discrete image labelling tasks
+ • All factors (unary, pairwise, higher-order) are represented by decision trees
+ • Decision trees are non-parametric
+ • Efficient training of millions of parameters using pseudo-likelihood
+
+
+
+ Formally
+
+ Graphical Model:
+ Factor types
+
+ Factor Graph
+ Energy
+ Energy linear in w
+ Example pairwise factor
+
+
+
+
+ Special Cases
+ • Unary factors only = Decision Forest, with learned leaf node distributions
+ • Zero-depth trees (pairwise factors) = MRF
+ • Conditional (pairwise factors) = CRF
+
+
+
+
+
+ Algorithm - Overview
+ Training
+ 1. Define connective structure (factor types)
+ 2. Train all decision trees (split functions) separately
+ 3. Jointly optimize all weights
+ Testing (2 options)
+ • “Unroll” the factor graph:
+ run BP, TRW, QPBO, etc.
+ • Don’t “unroll” the factor graph:
+ run Gibbs sampling or simulated annealing
+
+
+
+ Training of weights “w”
+ • Maximum pseudo-likelihood training, a convex optimization problem
+ (a minimal sketch follows below)
+ • Converges in practice after 150-200 L-BFGS iterations
+ • Efficient even for large graphs (e.g. 12-connected, 1.47M weights, 22 mins)
+ • Parallel at the variable level
+ • Variable sub-sampling possible
+ Code will be made available next month!
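+
+ Until then, a minimal sketch of the pseudo-likelihood objective for the
+ zero-depth-tree (MRF) special case, assuming a single shared (L × L)
+ pairwise weight table; each conditional is a log-sum-exp of energies
+ linear in the weights, hence convex (our illustration, not the DTF release):
+
+ import numpy as np
+
+ def neg_log_pseudolikelihood(theta, y, neighbors, L):
+     # theta: (L, L) pairwise weight table; y: current labelling;
+     # neighbors[i]: indices of variable i's neighbours in the factor graph.
+     nll = 0.0
+     for i, yi in enumerate(y):
+         cond = np.zeros(L)                  # conditional energy per label
+         for j in neighbors[i]:
+             cond += theta[:, y[j]]
+         nll += np.logaddexp.reduce(-cond) + cond[yi]
+     return nll   # minimise over theta with L-BFGS (scipy.optimize.minimize)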
+
+
+
diff --git a/Train/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction-Poster.pdf b/Train/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9ad267cd84be1157951632048d7552d61b1cb106
--- /dev/null
+++ b/Train/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b777fc266c98e4f540147cecd54268ca418a8eae6572f5a7b796a9071e27365
+size 7885936
diff --git a/Train/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction.pdf b/Train/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..79f12185678d86868128f7b09e36939ba5b41209
--- /dev/null
+++ b/Train/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ea207b9a86dc1d80e32e472b207492a3ff27449121525f61e6a891c91b96852
+size 14779858
diff --git a/Train/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction/info.txt b/Train/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4397336aa0fc073f4fad31ec516c7841fa378807
--- /dev/null
+++ b/Train/Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction/info.txt
@@ -0,0 +1,127 @@
+
+
+ Introduction
+ Fine-grained Recognition
+ e.g. Anna's hummingbird vs. ruby-throated hummingbird
+
+ Human Attribute Prediction
+
+ Pose-normalized representations [1]
+
+
+
+
+ Deformable Part Model (DPM)
+ Weakly supervised DPM
+ Fixed-size part filters initialized by heuristics.
+ Components initialized by clustering aspect ratio.
+ Strongly supervised DPM [2]
+ Semantic part filters initialized by part annotations.
+ Pose information is clustered to initialize the components.
+ Computationally efficient DPM detection [3].
+ The strong DPM provides semantic part localizations for
+ pose-normalized representations.
+ What about the simpler weak DPM without pose annotations?
+
+
+
+
+ Method
+ Deformable part descriptors (DPD)
+ Test Image
+ Part Localization
+ Pose-normalization
+ Classification
+
+ The first descriptor (top row) applies a strong DPM for part localization and then pools features from these inherently
+ semantic parts.
+ The second descriptor employs a weakly supervised DPM for part localization and then uses learned semantic
+ correspondence weights to pool features from the latent parts into semantic regions.
+
+
+
+ How Weights Get Computed
+ w_il^(j) ∈ W, of size |P| × |R| × |C|.
+ p_i^(j): i-th part of component c^(j).
+ r_l: semantic region.
+ a_k ∈ A: keypoints or other semantic labels.
+ ρ_kl ∈ [0, 1]: relevance of a_k to region r_l.
+ I_jk: training images with a_k and component c^(j).
+
+
+
+
+ Pooling/Classification
+ Pose-normalized representation: the pooled image feature for
+ semantic region r_l is Ψ(l, r_l).
+ A one-vs-all linear SVM on Ψ_pn performs the
+ final classification (see the sketch below).
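+
+ A minimal sketch of the weighted pooling step (our notation; shapes assumed):
+
+ import numpy as np
+
+ def pool_semantic_regions(part_feats, W_j):
+     # part_feats: (P, D) features from the P latent parts of detected
+     # component j; W_j: (P, R) learned correspondence weights.
+     # Returns an (R, D) pose-normalised feature, one row per semantic region.
+     W = W_j / (W_j.sum(axis=0, keepdims=True) + 1e-8)   # normalise per region
+     return W.T @ part_feats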
+
+
+
+ Example Results and Failure Cases
+ Top-scored people with long hair.
+
+ Top-scored people wearing long sleeves.
+
+ Most confused failure case for males.
+
+
+
+
+ Experimental Results
+ Fine-grained Recognition
+
+ Results on the CUB200-2010 dataset.
+
+ Results on the CUB200-2011 dataset.
+ Human Attribute Prediction
+
+ Results on the Human Attributes dataset.
+
+
+
+ Localization Results of strong DPM
+ Samples of correct part localizations.
+
+ Failure cases of part localizations.
+
+
+
+
+ References
+ [1] Ning Zhang, Ryan Farrell and Trevor Darrell. Pose Pooling Kernels for Sub-Category Recognition. In CVPR 2012.
+ [2] Hossein Azizpour and Ivan Laptev. Object Detection Using Strongly-Supervised Deformable Part Models. In ECCV 2012.
+ [3] Charles Dubout and François Fleuret. Exact Acceleration of Linear Object Detectors. In ECCV 2012.
+ [4] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng and Trevor Darrell. DeCAF: A Deep Convolutional
+ Activation Feature for Generic Visual Recognition. arXiv preprint.
+
+
+
diff --git a/Train/Dense Semantic Image Segmentation with Objects and Attributes/Dense Semantic Image Segmentation with Objects and Attributes-Poster.pdf b/Train/Dense Semantic Image Segmentation with Objects and Attributes/Dense Semantic Image Segmentation with Objects and Attributes-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..426684d7202d96fe3760c14d6d702e6693772c56
--- /dev/null
+++ b/Train/Dense Semantic Image Segmentation with Objects and Attributes/Dense Semantic Image Segmentation with Objects and Attributes-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a47340f313185677dd780116cb32451e63d9219f5829c4aea96ef93ad76c256a
+size 612805
diff --git a/Train/Dense Semantic Image Segmentation with Objects and Attributes/Dense Semantic Image Segmentation with Objects and Attributes.pdf b/Train/Dense Semantic Image Segmentation with Objects and Attributes/Dense Semantic Image Segmentation with Objects and Attributes.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9ad9fd61825ddf780052d8328b0d1d4b9355289a
--- /dev/null
+++ b/Train/Dense Semantic Image Segmentation with Objects and Attributes/Dense Semantic Image Segmentation with Objects and Attributes.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:253b1525a091a7be8ca40eaed6ebccb23805e65f8c67968f86d0fce080493b5f
+size 907470
diff --git a/Train/Dense Semantic Image Segmentation with Objects and Attributes/info.txt b/Train/Dense Semantic Image Segmentation with Objects and Attributes/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d45828598ec1c36df3c25f837e73a366ad5c7f51
--- /dev/null
+++ b/Train/Dense Semantic Image Segmentation with Objects and Attributes/info.txt
@@ -0,0 +1,76 @@
+
+
+ 1. Abstract
+ • We formulate the problem of joint visual
+ attribute and object class image segmentation
+ as a dense multi-labelling problem, where each
+ pixel in an image can be associated with both
+ an object class and a set of visual attributes
+ labels.
+ • In order to learn the label correlations, we
+ adopt a boosting-based piecewise training
+ approach with respect to the visual appearance
+ and co-occurrence cues.
+ • We use a filtering-based mean-field
+ approximation approach for efficient joint
+ inference (a minimal sketch follows below). Further, we develop a hierarchical
+ model to incorporate region-level object and
+ attribute information.
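+
+ A minimal sketch of filtering-based mean-field with a Potts model and a
+ spatial Gaussian kernel only (the full model also uses appearance kernels;
+ our illustration, not the released code):
+
+ import numpy as np
+ from scipy.ndimage import gaussian_filter
+
+ def mean_field_potts(unary, iters=5, sigma=3.0, w=1.0):
+     # unary: (L, H, W) array of negative log-probabilities per label.
+     Q = np.exp(-unary)
+     Q /= Q.sum(axis=0, keepdims=True)
+     for _ in range(iters):
+         # Gaussian filtering implements the fast message passing.
+         msg = np.stack([gaussian_filter(Q[l], sigma) for l in range(Q.shape[0])])
+         # Potts compatibility: penalise mass assigned to *other* labels.
+         Q = np.exp(-unary - w * (msg.sum(axis=0, keepdims=True) - msg))
+         Q /= Q.sum(axis=0, keepdims=True)
+     return Q.argmax(axis=0)
+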
+ Object class segmentation
+ • Assigning an object class label to each pixel
+
+ Image Segmentation with Objects and Attributes
+ • Assigning an object class label and a set of
+ visual attribute labels to each pixel
+
+
+
+
+
+
+
+
+
+
+ Figure: qualitative comparison of the fully-connected CRF, the joint
+ pixel-level CRF, and the hierarchical CRF against the ground truth.
+
+
+
+ 4. Attribute-augmented NYU dataset
+
+ Attribute annotation
+ Image
+ Object annotation
+ Following the CORE dataset, we augment the
+ NYU V2 dataset with attribute annotations. The figure
+ above shows the annotations for the aNYU dataset, and
+ the figure below demonstrates the annotations of the
+ CORE and aPASCAL datasets.
+
+
+
+ 5. Acknowledgement
+ This project is supported by EPSRC EP/I001107/2 and an
+ ERC HELIOS 2013-2018 Advanced Investigator Award.
+ [1] Dense semantic image segmentation
+ with objects and attributes. CVPR, 2014.
+ [2] ImageSpirit: Verbal Guided Image
+ Parsing, ACM TOG 2014.
+ [3] Efficient Inference in Fully Connected
+ CRFs with Gaussian Edge Potentials.
+ NIPS 2011.
+ http://kylezheng.org/densesegattobj/
+
+
+
+
diff --git a/Train/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition-Poster.pdf b/Train/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..999825c342f34a4aebca47a442f95eae33d78584
--- /dev/null
+++ b/Train/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77a74cdb31492296dee7c641c5ecee7cdec2a375325d4b553d193a90fd08a6b2
+size 2408056
diff --git a/Train/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition.pdf b/Train/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..667cf51754651310937da9c001dced0483cad151
--- /dev/null
+++ b/Train/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad38e00329cf228666b75f637c4275d7b4e17190d95e3afe5248657b6a2113f9
+size 743738
diff --git a/Train/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition/info.txt b/Train/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..336c9eb3ec8907f323a5f60452917fa23b6b37e9
--- /dev/null
+++ b/Train/Detection Bank- An Object Detection Based Video Representation for Multimedia Event Recognition/info.txt
@@ -0,0 +1,91 @@
+
+
+ Multimedia Event Detection
+
+
+
+ Birthday Party vs. Wedding Ceremony
+
+
+
+ Look for: Balloon, Candle, Birthday Cake vs.
+ Bride, Groom, Wedding Gown, Wedding Cake
+
+
+
+ Previous Work
+ Spatial Pyramid Match (SPM)
+
+ Object Bank (OB)
+
+ Problem
+ Scene-level descriptors cannot capture the fine-
+ grained phenomena that discriminate between events.
+ Object Bank lacks an immediate sense of whether
+ objects are present in the image and, if so,
+ how many.
+
+
+
+ Idea
+ Object Bank omits the following steps that are
+ standard in a detection pipeline:
+ ● Thresholding of score maps
+ ● Non-maximum suppression
+ ● Pooling across all scales
+ We compute different detection count statistics to
+ capture e.g. the max number of detections, the sum of
+ detection scores, and the probability of detection, based on
+ the detection images from a large number of
+ windowed object detectors (see the sketch below).
+ Detection Count Statistics
+ Illustration
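+
+ A minimal sketch of the count statistics for one detector's pooled score
+ map (the threshold and NMS radius are illustrative assumptions):
+
+ import numpy as np
+
+ def detection_count_statistics(score_map, thresh, radius=8):
+     # Threshold + greedy non-maximum suppression on a pooled score map,
+     # then summarise the surviving detections.
+     s = score_map.astype(float).copy()
+     scores = []
+     while s.max() > thresh:
+         i, j = np.unravel_index(np.argmax(s), s.shape)
+         scores.append(s[i, j])
+         s[max(0, i - radius):i + radius, max(0, j - radius):j + radius] = -np.inf
+     scores = np.array(scores)
+     return {"count": scores.size,
+             "sum_scores": float(scores.sum()),
+             "max_score": float(scores.max()) if scores.size else 0.0}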
+
+
+
+
+
+ Experiments
+ Classification accuracy on TRECVID MED;
+ DET curves for all 15 events
+
+ Conclusion
+ ●Significant performance increase in Multimedia
+ Event Classification Task
+ ●Provides complementary discriminative information
+ to current state-of-the-art image representations
+ such as Spatial Pyramid Matching and Object Bank
+
+
+
+ References
+ S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid
+ matching for recognizing natural scene categories. CVPR, 2006.
+ L.-J. Li, H. Su, E. P. Xing, and L. Fei-Fei. Object bank: A high-level image
+ representation for scene classification & semantic feature sparsification. NIPS,
+ 2010.
+ Acknowledgments
+ Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior
+ National Business Center contract number D11PC20066. The U.S. Government is authorized to
+ reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation
+ thereon.
+ Disclaimer: The views and conclusions contained herein are those of the authors and should not be
+ interpreted as necessarily representing the official policies or endorsement, either expressed or
+ implied, of IARPA, DOI/NBC, or the U.S. Government.
+ H. Song was supported by Samsung Scholarship Foundation.
+
+
+
diff --git a/Train/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character-Poster.pdf b/Train/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..068666bcdbb65f27d809f60bc1f34ad399273049
--- /dev/null
+++ b/Train/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae58f9c3a43e3b8617fb55cd1b32313103a26125c38449a83714ca8453edd6ba
+size 429473
diff --git a/Train/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character.pdf b/Train/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..760673150ed36cec33eba0334bb829914a5efd4b
--- /dev/null
+++ b/Train/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed69ea37023ba783114c4b98758ac77e374b320e64cc0733b3fe72a742d5e8f0
+size 712833
diff --git a/Train/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character/info.txt b/Train/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f9f027091c9974a70915326580591bc7ce2c9f8e
--- /dev/null
+++ b/Train/Difference of Boxes Filters Revisited- Shadow Suppression and Efficient Character/info.txt
@@ -0,0 +1,119 @@
+
+
+ Introduction
+ Robust character recognition often relies on a good segmentation.
+ Difficulties: dirt, non-uniform illumination, shadow, . . .
+ Our method of character segmentation is simple, efficient and easy to implement.
+ Algorithm overview:
+ 1. Shadow suppression using multiple difference of boxes filters
+ 2. Ternary segmentation using locally estimated thresholds
+ Applications:
+ license plate recognition
+ ID card recognition
+ arbitrary document analysis systems
+
+
+
+ Multiple Difference of Boxes
+ Base filter: Difference of Boxes Filter [2]
+ Simple interpretation of the idea of Vonikakis et al. [3] (hidden in their formulas)
+ Definition for a one-dimensional signal g (m < M): the box mean of width m minus the box mean of width M
+ Approximation of a Difference of Gaussians or Mexican-Hat filter
+ Runtime independent of filter sizes
+ Maximum of the result of several DoB filters with different sizes (mi , Mi ) leads to the final filter output
+ of a Multiple Difference of Boxes (MDoB) Filter
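+
+ A minimal 1-D sketch (our illustration) using cumulative sums, so the
+ runtime is independent of the box sizes:
+
+ import numpy as np
+
+ def box_mean(g, w):
+     # Mean over a window of width 2*w + 1 via cumulative sums:
+     # runtime is independent of the window size.
+     gp = np.pad(g.astype(float), w, mode='edge')
+     c = np.concatenate(([0.0], np.cumsum(gp)))
+     k = 2 * w + 1
+     return (c[k:] - c[:-k]) / k
+
+ def mdob(g, sizes):
+     # MDoB: pointwise maximum over several DoB responses (m_i < M_i).
+     return np.max([box_mean(g, m) - box_mean(g, M) for m, M in sizes], axis=0)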
+
+
+
+ Examples
+
+ Our character segmentation framework applied to ID cards and license plates
+
+
+
+ Local Segmentation
+ Ternary segmentation instead of binary segmentation
+ Object
+ Background
+ Unknown
+ A local binary decision between object and background is not possible for all pixels (e.g. within homogeneous regions)
+ Solution: definition of a third label “unknown”
+ Local decision depends on maximum and minimum in a neighborhood around each pixel: gmax (x), gmin (x)
+
+
+ Original image and ternary local segmentation (white: background, red: object, blue: unknown)
+
+
+
+ Local Segmentation (Algorithm)
+ 1. Calculate gmax and gmin
+ 2. For each pixel x:
+ 2.1 If gmax(x) − gmin(x) < γ then
+ 2.2 label the point as unknown
+ 2.3 else
+ 2.4 T = (gmax(x) + gmin(x)) / 2
+ 2.5 If g(x) > T then label the point as object
+ 2.6 else label the point as background
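+
+ A minimal sketch of the algorithm above, assuming bright objects (flip the
+ comparison for dark characters); the neighbourhood size is an assumption:
+
+ import numpy as np
+ from scipy.ndimage import maximum_filter, minimum_filter
+
+ def ternary_segment(g, gamma, size=15):
+     # Labels: 0 = background, 1 = object, 2 = unknown.
+     gmax = maximum_filter(g, size=size)
+     gmin = minimum_filter(g, size=size)
+     labels = np.where(g > 0.5 * (gmax + gmin), 1, 0)
+     labels[(gmax - gmin) < gamma] = 2      # no reliable local contrast
+     return labels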
+
+
+
+ Measuring the Quality of Character Segmentations
+ Simple measure of segmentation quality as the distance to a given ground truth segmentation
+ Base for parameter optimization and method evaluation
+ Distance between two components A and B of segmentations:
+ Distance between two segmentations S˜ (p components) and S (q components)
+ Optimization over all injective maps π : {1, . . . , q} → {1, . . . , p} can be carried out using the Hungarian method
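+
+ A minimal sketch of the matching step, assuming the pairwise component
+ distances are already collected in a matrix D (padding with penalties for
+ unmatched components is elided):
+
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+
+ def segmentation_distance(D):
+     # D[i, j]: distance between component i of the candidate segmentation
+     # and component j of the ground truth.
+     rows, cols = linear_sum_assignment(D)  # optimal injective map pi
+     return float(D[rows, cols].sum())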
+
+
+
+ Optimal Parameters of the Segmentation Method
+ Given several ground truth segmentations S_G^i, one can search for optimal parameters maximizing segmentation quality
+ MDoB parameters θ = {m_1, M_1, ...}:
+ number of DoB filters used
+ sizes of inner boxes m_j
+ sizes of outer boxes M_j
+ Parameters of our local segmentation method: η = {γ, size(U(x))}
+ Optimization criteria use our segmentation quality measure:
+ optimization is performed by cyclic coordinate search [1]
+ By iteratively adding new components to the MDoB filter, an optimal number of different DoB filters can be estimated.
+
+
+
+ Experiments
+ Evaluation within a license plate recognition system
+ 6205 test images, fixed set of single letter training images
+ Segmentation framework used to segment an aligned license plate into character regions
+ Recognition performance measured for whole license plates using the complete license plate recognition system
+
+ Evaluation using synthetic input images
+ random noise simulating shadow influence parameterized with β
+ left image: analysis of segmentation error with respect to β
+ right image: example of a single synthetic image after applying noise operation
+
+ Conclusions
+ Simple but robust and efficient method for character segmentation
+ Fast computation: combination of basic filter operations
+ Proposed measure for segmentation quality can be used for evaluation and optimization
+ Optimal parameters of our method can be found with an optimization framework
+
+
+
+ References
+ [1] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, August 1999.
+ [2] A. Rosenfeld and M. Thurston. Edge and curve detection for visual scene analysis. IEEE Transaction on Computers, 20:562–569,1971.
+ [3] Vassilios Vonikakis, Ioannis Andreadis, Nikos Papamarkos, and Antonios Gasteratos. Adaptive document binarization -
+ a human vision approach. In Proceedings of the Second International Conference on Computer Vision Theory and Applications (VISAPP), Barcelona, Spain, March 8-11, 2007 - Volume 2, pages 104–109, 2007.
+ We would like to thank ROBOT Visual Systems GmbH for financial support
+ and for providing experimental data for large scale evaluation.
+
+
+
diff --git a/Train/Dimension Reduction of Network Bottleneck Bandwidth Data Space/Dimension Reduction of Network Bottleneck Bandwidth Data Space-Poster.pdf b/Train/Dimension Reduction of Network Bottleneck Bandwidth Data Space/Dimension Reduction of Network Bottleneck Bandwidth Data Space-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0eba8f83346557fcf4b1f746d794241133edda76
--- /dev/null
+++ b/Train/Dimension Reduction of Network Bottleneck Bandwidth Data Space/Dimension Reduction of Network Bottleneck Bandwidth Data Space-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0c5e739b83019178894d74507e35eda11e9088531ec665f6ddb68caee290633
+size 255605
diff --git a/Train/Dimension Reduction of Network Bottleneck Bandwidth Data Space/Dimension Reduction of Network Bottleneck Bandwidth Data Space.pdf b/Train/Dimension Reduction of Network Bottleneck Bandwidth Data Space/Dimension Reduction of Network Bottleneck Bandwidth Data Space.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..262512d98e89cb3e55fa73573dd088d635496585
Binary files /dev/null and b/Train/Dimension Reduction of Network Bottleneck Bandwidth Data Space/Dimension Reduction of Network Bottleneck Bandwidth Data Space.pdf differ
diff --git a/Train/Dimension Reduction of Network Bottleneck Bandwidth Data Space/info.txt b/Train/Dimension Reduction of Network Bottleneck Bandwidth Data Space/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..30f6f64cc7c24b10c1a56438cca6855a262365d6
--- /dev/null
+++ b/Train/Dimension Reduction of Network Bottleneck Bandwidth Data Space/info.txt
@@ -0,0 +1,93 @@
+
+
+ Motivation
+ For latency-aware applications, round-trip-time estimation has
+ been studied extensively. But there are also many bandwidth-
+ aware applications, and prediction of bottleneck bandwidth has
+ received much less attention.
+ Therefore, we attempt to design a new system to predict
+ bottleneck bandwidth, based on matrix factorization.
+ As a first step, we need to prove:
+ 1) the low-rank nature of bottleneck bandwidth matrices
+ 2) the feasibility of reducing the dimension of the
+ bottleneck bandwidth data space
+
+
+
+ Matrix Factorization
+ The network bottleneck bandwidth data space can be modelled
+ as a square matrix B. We apply Principal Component Analysis to B
+ (a minimal sketch follows below):
+
+
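+ A minimal sketch of the rank-k (k-D) approximation via truncated SVD, the
+ factorization behind PCA; masking of missing (−1) entries is elided:
+
+ import numpy as np
+
+ def low_rank_approx(B, k):
+     # Rank-k approximation of the bandwidth matrix via truncated SVD.
+     U, s, Vt = np.linalg.svd(B, full_matrices=False)
+     return (U[:, :k] * s[:k]) @ Vt[:k, :]
+
+ # e.g. a 10-D approximation of the 250 x 250 matrix: B10 = low_rank_approx(B, 10)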
+
+
+ Principal Component Analysis
+ We attempt to analyze the magnitude of singular values of B.
+
+ It shows that singular values decrease very fast. Considering the
+ ’Oct 26’ line, the 4th singular value (0.156) is the first one that is
+ smaller than 0.2.
+
+
+
+ Acknowledgement
+ This work is supported by National Science Foundation of China
+ (No.60850003).
+
+
+
+ Methodology
+ Based on HP Scalable Sensing Service (S3), 250 interconnected hosts are extracted for our evaluation.
+ From September 23 to December 23, 2009, we collected bottleneck bandwidth data every four hours, yielding
+ 491 datasets across 3 months for evaluation.
+ We compare the approximated matrix with the original one for evaluation. Relative error is defined as follows:
+ if (i, j) ∈ {(m, n) | b_mn ≠ −1}, relative_error_ij = |b'_ij − b_ij| / b_ij
+
+
+
+ Evaluation of Dimension Reduction
+ The figure on the right shows the median
+ relative error when the dimension of
+ all 491 datasets is reduced to
+ 2D, 5D, 10D and 20D.
+ The average median relative error
+ for the 10D approximation is only 8.65%
+ across all 491 datasets.
+
+
+ Considering the tradeoff between
+ computational complexity and the target
+ dimension of the reduction, a 10D
+ approximation is carried out; the
+ cumulative distribution function of the
+ relative error is shown in the figure on the left.
+ The 90th-percentile relative error is only
+ 0.281, meaning that 90% of the data
+ have a relative error lower than 0.281.
+
+
+
+ Conclusion
+ 1. The dimension of the bottleneck bandwidth data space can be reduced from 250D to 10D.
+ 2. The average median relative error of the approximation is only 8.65% among the 491 datasets.
+ 3. The 90th-percentile relative error of the 10D approximation is only 0.281.
+
+
+
+ Future work
+ We plan to design a scalable bottleneck bandwidth prediction system based on matrix factorization, utilizing
+ the low-rank nature and the low relative error of dimension reduction.
+
+
+
diff --git a/Train/Discriminative Bayesian Active Shape Models/Discriminative Bayesian Active Shape Models-Poster.pdf b/Train/Discriminative Bayesian Active Shape Models/Discriminative Bayesian Active Shape Models-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b4f8872c7ad3deb77a84214f0bff0384ed54c5c9
--- /dev/null
+++ b/Train/Discriminative Bayesian Active Shape Models/Discriminative Bayesian Active Shape Models-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9bb66dde12ca4e32755d7f9f407660474351d2058b73e5ba43368f0ee2637bfc
+size 17781896
diff --git a/Train/Discriminative Bayesian Active Shape Models/Discriminative Bayesian Active Shape Models.pdf b/Train/Discriminative Bayesian Active Shape Models/Discriminative Bayesian Active Shape Models.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9ef5eec60dc8f19b0d8c1ccd26fcf3b2999ce1e4
--- /dev/null
+++ b/Train/Discriminative Bayesian Active Shape Models/Discriminative Bayesian Active Shape Models.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6fb5da1cb613f5fd57869e95aae16ed59cd119d0cd4391ba4482359a3b8e3b89
+size 7269249
diff --git a/Train/Discriminative Bayesian Active Shape Models/info.txt b/Train/Discriminative Bayesian Active Shape Models/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..9d0af48843b00336d638795b92e45d9c0f37ba32
--- /dev/null
+++ b/Train/Discriminative Bayesian Active Shape Models/info.txt
@@ -0,0 +1,282 @@
+
+
+
+ Overview:
+
+ Goal: Face alignment in unseen images.
+ Closely related to Constrained Local Models (CLM) and Active Shape
+ Models (ASM), where a set of local detectors is constrained to lie in the
+ subspace spanned by a Point Distribution Model (PDM).
+
+ Two-step fitting approach:
+ (1) Local search using the local detectors (response maps for each landmark).
+ (2) Global optimization strategy that finds the PDM parameters that jointly
+ maximize all the detections at once.
+
+ • New Bayesian global optimization strategy using second-order statistics
+ of the shape and pose parameters.
+
+
+
+
+ The Shape (PDM) and Appearance Models
+
+
+
+
+
+ Local Detectors (MOSSE Filters)
+
+ Correlation in the Fourier domain, with one MOSSE filter per landmark
+ (a minimal sketch follows below).
+
+
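+ A minimal sketch of MOSSE training and response computation (our
+ illustration; the Gaussian target and patch preprocessing are elided):
+
+ import numpy as np
+
+ def train_mosse(patches, target, eps=1e-2):
+     # MOSSE filter: H* = sum_i G conj(F_i) / (sum_i F_i conj(F_i) + eps),
+     # where G is the FFT of the desired (e.g. Gaussian) response.
+     G = np.fft.fft2(target)
+     num, den = 0.0, eps
+     for p in patches:
+         F = np.fft.fft2(p)
+         num = num + G * np.conj(F)
+         den = den + F * np.conj(F)
+     return num / den
+
+ def response_map(patch, H):
+     # Correlation in the Fourier domain: one response map per landmark.
+     return np.real(np.fft.ifft2(np.fft.fft2(patch) * H))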
+
+ The Alignment Goal
+
+ Given a shape observation (y), find the optimal set of shape (b)
+ and pose parameters that maximize the posterior probability.
+
+ Assuming:
+ (1) Conditional independence between landmarks
+ (2) Being close to a solution
+
+
+ The Likelihood Term
+
+ The Prior Term
+
+
+
+
+ Local Optimization Strategies (Finding the Likelihood Parameters)
+
+ Weighted Peak Response (WPR) (current mesh estimate)
+ Gaussian Response (GR)
+ Kernel Density Estimator (KDE)
+
+
+
+
+ 2nd-Order MAP Global Alignment (DBASM)
+
+
+
+
+
+
+
+
+ Qualitative Results - Labeled Faces in the Wild
+
+ Evaluating Global Optimization Strategies
+
+
+
+
+
+
+ Tracking Performance - FGNET Talking Face Sequence
+
+
+
+
+
+
diff --git a/Train/Discriminative Segment Annotation in Weakly Labeled Video/Discriminative Segment Annotation in Weakly Labeled Video-Poster.pdf b/Train/Discriminative Segment Annotation in Weakly Labeled Video/Discriminative Segment Annotation in Weakly Labeled Video-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9473df58a8df078a490a06576644980e57600aa2
--- /dev/null
+++ b/Train/Discriminative Segment Annotation in Weakly Labeled Video/Discriminative Segment Annotation in Weakly Labeled Video-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39dcadd4265b395b5a715d8099a36b8cbc06e6cf24cdac130e642fc086953df5
+size 3292142
diff --git a/Train/Discriminative Segment Annotation in Weakly Labeled Video/Discriminative Segment Annotation in Weakly Labeled Video.pdf b/Train/Discriminative Segment Annotation in Weakly Labeled Video/Discriminative Segment Annotation in Weakly Labeled Video.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ffbd49d86a069553b8c402496e9cd72954be1c73
--- /dev/null
+++ b/Train/Discriminative Segment Annotation in Weakly Labeled Video/Discriminative Segment Annotation in Weakly Labeled Video.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ed6ff0eebfc07ecbbd0acbb82f5d6fe905963c27fe249d2de2428eebe014465
+size 2189418
diff --git a/Train/Discriminative Segment Annotation in Weakly Labeled Video/info.txt b/Train/Discriminative Segment Annotation in Weakly Labeled Video/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d805bae997c2e96e95cbeabd38c4718f3ef3d06a
--- /dev/null
+++ b/Train/Discriminative Segment Annotation in Weakly Labeled Video/info.txt
@@ -0,0 +1,57 @@
+
+
+ Introduction
+ • Goal
+ - Automatically annotate segments in weakly
+ labeled video taken from YouTube
+
+ • Challenges
+ - Learning from weakly labeled data
+ - Handling label noise in YouTube tags
+ - Parallelizing to deploy over large amounts of
+ YouTube data
+
+
+
+ Our Problem Setup
+
+
+
+
+ Our Algorithm: CRANE
+ • Input: uncertain positive segments, large set of negative segments
+ • Output: ranked positive segments by probability of belonging to our concept
+
+ Intuition: positive segments are less likely to belong to our concept if they are
+ near many negative segments (one simple instantiation is sketched below).
+
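+ A minimal sketch of that intuition (our illustration, not necessarily the
+ paper's exact scoring function), scoring positives by distance to the
+ nearest negative in feature space:
+
+ import numpy as np
+
+ def crane_scores(pos_feats, neg_feats):
+     # Score each weakly-labelled positive segment by its squared distance
+     # to the nearest negative segment; far-from-negatives ranks highest.
+     d2 = ((pos_feats[:, None, :] - neg_feats[None, :, :]) ** 2).sum(-1)
+     return d2.min(axis=1)
+
+ # ranked = np.argsort(-crane_scores(P, N))  # most concept-like segments first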
+
+
+ Sample Object Segmentations
+
+ Inductive Segment Annotation [top two rows]
+ Transductive Segment Annotation [bottom two rows]
+
+ Common Failure Cases
+
+
+
+ Quantitative Results
+ • Transductive Segment Annotation (annotating a dataset)
+
+ • Inductive Segment Annotation (novel object segmentation)
+
+
+
+
diff --git a/Train/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds-Poster.pdf b/Train/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c4c4217baa1af1f60bfc81f6218cbfee84bf45c1
--- /dev/null
+++ b/Train/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3f9ff994b709fca7dee0a71f902981a201ffb80fdfbb7c61dacbe9d0b1f6959
+size 804770
diff --git a/Train/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds.pdf b/Train/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c9d0e60097eed7080c50935375a5ef5f5f5001c3
--- /dev/null
+++ b/Train/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53a17b6cb73030b9d6e0e0cbb908bff73b8b6f22231e7bd97f660e0d436d8e29
+size 906537
diff --git a/Train/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds/info.txt b/Train/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..18fcd45127219290cbfe832d89202688d1eff0d0
--- /dev/null
+++ b/Train/Display type effects in military operational tasks using Unmanned Vehicle UV video images Comparison between color and BW video feeds/info.txt
@@ -0,0 +1,111 @@
+
+
+ Background: The use of UV imagery in combat
+ has become essential for mission success. Factors that
+ may affect the utilization of UV video feeds are the color and
+ quality of the feed. Specifically, the use of black-and-white
+ feeds versus colored ones is considered from both technical
+ and cognitive aspects. Operationally, B/W is often
+ used due to bandwidth limitations or payload selection.
+ Previously (Oron-Gilad, Redden & Minkov, 2011) we
+ showed task-dependent differences among display types
+ with regard to soldiers' performance on intelligence-
+ gathering tasks.
+
+ A typical scenario for the passive dismounted soldier
+
+
+
+ Method: 16 former infantry soldiers with MOUT
+ experience but no prior experience using UV video feeds
+ participated. Three displays were examined in two feed
+ variations (color and B/W) across three intelligence-gathering
+ task types (orientation, identification, and response to
+ movement), all using simulated UAV and UGV video
+ images. Performance, workload, and subjective data were
+ collected.
+
+ Example of scenario elements and tasks. Participants were
+ asked to (left) navigate between waypoints using a
+ predefined route, (center) identify static parked vehicles or
+ (right) detect movements of soldiers from the UAV (top) or
+ UGV (bottom) feeds.
+
+
+
+ Results: Accuracy – only the main effects of UV type and
+ task type were statistically significant (Wald χ²(1)=14.5,
+ p<0.001, and Wald χ²(2)=325.5, p<0.001). Accuracy with
+ the UGV feed was higher than with the UAV feed
+ (Mean=3.6, SE=.07 and Mean=3.3, SE=.07, respectively).
+ Movement detection scores were highest (Mean=3.8,
+ SE=.08), followed by vehicle identification (Mean=3.5,
+ SE=.09) and orientation (Mean=3.0, SE=.08).
+ Response time – the three-way interaction of display type by
+ UV type by feed color was significant (F(2,307)=11.5, p<.0001).
+ The combination of the B/W feed with the HHMD generated
+ longer response times than the colored feed. An inverse
+ trend was seen with the 7" HHD.
+
+ Three-way interaction for response time among display type, UV
+ type, and feed color.
+ Vertical bars denote 0.95 confidence intervals.
+ Workload estimates were moderate. Workload was higher
+ when the colored feed was presented on the 12" display and the
+ HHMD than the B/W feed. The 7" display generated the opposite pattern.
+
+
+
+ Discussion: Results showed superiority for the
+ UGV feed. Although this is consistent with our previous study,
+ in operational contexts aerial vertical views are generally
+ considered more informative and easier to understand than
+ ground horizontal ones, due to their holistic view and
+ similarity to aerial maps.
+ The experimental scenarios may have contributed to the
+ superiority of the UGV. If the task can be conducted from
+ both ground and aerial perspective feeds then it is
+ reasonable to assume that the ground view (which is more
+ compatible with the soldier's point of view) will be preferred.
+ • Nevertheless, one should keep in mind that oftentimes in
+ operational settings the UGV is not capable of viewing the
+ same information as the UAV and the two sources provide
+ complementary feed (see Ophir-Arbelle, Oron-Gilad,
+ Borowsky and Parmet, in press).
+
+
+
+
+ The experimental system interface. (Right) video feed. (Left)
+ topographic image of the area, from a (top) UAV B/W
+ perspective vs. a UGV color one.
+
+
+
+ Conclusions and future work:
+ Evidence has shown that for video- and map-based
+ missions a larger display is superior, and that B/W
+ feeds may be good enough for performing certain tasks on
+ larger displays. Experimental design considerations may
+ have influenced the results, e.g., the simulation conditions,
+ the (high) quality of the simulated feeds, the lack of a role for
+ color in the operational tasks and scenarios, and the
+ partial balance of the experimental design. Amongst other
+ factors, future studies should focus on how the cognitive
+ state of the operator, e.g., mental or physical fatigue and
+ extreme stress, affects performance.
+ Acknowledgments. This work was supported by the US Army Research Laboratory through the Micro-
+ Analysis and Design CTA Grant DAAD19-01C0065, Michael Barnes, Technical Monitor. The views
+ expressed in this work are those of the authors and do not necessarily reflect official Army policy.
+
+ Operator station with (from left to right) 7" display, tablet,
+ and HHMD. Participants were free to hold the display in
+ whatever way was comfortable.
+
+
+
diff --git a/Train/Diverse Sequential Subset Selection for Supervised Video Summarization/Diverse Sequential Subset Selection for Supervised Video Summarization-Poster.pdf b/Train/Diverse Sequential Subset Selection for Supervised Video Summarization/Diverse Sequential Subset Selection for Supervised Video Summarization-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e81b57c1b21e574405188464a1fcc8ece5e3bc23
--- /dev/null
+++ b/Train/Diverse Sequential Subset Selection for Supervised Video Summarization/Diverse Sequential Subset Selection for Supervised Video Summarization-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7e1ce0e8ff66b0f1aa7f0afc1d8d03fbe87e4014f98feee454a712ce93acf59
+size 812671
diff --git a/Train/Diverse Sequential Subset Selection for Supervised Video Summarization/Diverse Sequential Subset Selection for Supervised Video Summarization.pdf b/Train/Diverse Sequential Subset Selection for Supervised Video Summarization/Diverse Sequential Subset Selection for Supervised Video Summarization.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d96c08f778598f7bcf26435365c1cdb7162216a9
--- /dev/null
+++ b/Train/Diverse Sequential Subset Selection for Supervised Video Summarization/Diverse Sequential Subset Selection for Supervised Video Summarization.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27734449d5178bf8b97f68a43614ce31772c123542bc23eb8dc74e0e5b840b46
+size 780725
diff --git a/Train/Diverse Sequential Subset Selection for Supervised Video Summarization/info.txt b/Train/Diverse Sequential Subset Selection for Supervised Video Summarization/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..fd9d4b90f2a3449490c2c46c389ec422e820e80b
--- /dev/null
+++ b/Train/Diverse Sequential Subset Selection for Supervised Video Summarization/info.txt
@@ -0,0 +1,95 @@
+
+
+ Highlight
+ - Pose video summarization
+ as a supervised learning problem
+ for subset selection
+ - Propose sequential determinantal
+ point process (seqDPP) as the
+ underlying probabilistic model
+ - Evaluate on three video
+ summarization tasks and obtain
+ state-of-the-art performance
+
+
+
+ Introduction
+ Video summarization: a pressing need
+ - 100 hours of new YouTube video uploaded per minute
+ - 422,000 CCTV cameras in London, running 24/7
+ Summaries by three users
+
+ Challenges
+ - Heterogeneous subjects/categories
+ - Various temporal changing rates
+ - Subjective, disparate, and noisy labels
+ Previous work
+ - Criteria: representativeness vs. diversity
+ - Largely unsupervised, frame clustering
+ - Require sophisticated handcrafting
+ Our main idea
+ - Supervised learning from human
+ supplied annotations
+ - Summarization as subset selection
+ - Modeling temporal cue & diversity
+
+
+
+ Approach
+ Sequential DPP (seqDPP)
+ 1. Partition video into T disjoint segments
+ 2. Introduce subset selection (of frames)
+ variable Yt for each segment
+ 3. Condition Yt on Yt-1 = yt-1 by DPP
+
+ Parameterization of the DPP kernel
+ - Linear embedding (L)
+ - Neural networks (NN)
+ Inference
+ Learning via MLE
+ - through gradient descent
+ In contrast, bag DPPs:
+ - Model permutable items (no temporal info)
+ - Often use a quality-diversity kernel (limited)
+ - Inference is NP-hard
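+
+ As a rough illustration of the subset-selection machinery (a sketch only;
+ the kernel parameterization and the sequential conditioning on Y_{t-1} are
+ simplified away), greedy MAP inference for a DPP with L-kernel L picks, at
+ each step, the item that most increases log det(L_Y):
+
+   import numpy as np
+
+   def greedy_dpp_map(L, k):
+       # Greedily grow Y to (approximately) maximize det(L[Y, Y]).
+       n, Y = L.shape[0], []
+       for _ in range(k):
+           best, best_val = None, -np.inf
+           for i in set(range(n)) - set(Y):
+               idx = np.ix_(Y + [i], Y + [i])
+               sign, logdet = np.linalg.slogdet(L[idx])
+               if sign > 0 and logdet > best_val:
+                   best, best_val = i, logdet
+           if best is None:
+               break
+           Y.append(best)
+       return Y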
+
+
+
+ Generating target summaries
+ User study on inter-annotator agreement
+ - Data: 100 videos from Open Video Project and YouTube
+ - Annotation: 5 user summaries per video
+ - Observation: high inter-annotator agreement
+ Generate target summaries by greedy search
+
+
+
+
+ Experiments
+ Setup
+ - Data: OVP (50), YouTube (39), Kodak (18)
+ - Feature: Fisher vector, saliency, context
+ - Evaluation: Precision, Recall, F-score
+ - Comparison: bag DPP and previous
+ (unsupervised) DT, STIMO, VSUMM
+ Results on Youtube and Kodak
+
+ Results on OVP
+
+
+
+
+
+ [1] S. Avila, A. Lopes, A. Luz Jr, A. Araujo. “VSUMM: A mechanism designed to produce static video summaries and
+ a novel evaluation method”. Pattern Recognition Letters, 32(1):56–68, 2011.
+ [2] A. Kulesza and B. Taskar. “Determinantal point processes for machine learning”. Foundations and Trends® in
+ Machine Learning, 5(2-3):123–286, 2012.
+
+
+
diff --git a/Train/Domain Generalization via Invariant Feature Representation/Domain Generalization via Invariant Feature Representation-Poster.pdf b/Train/Domain Generalization via Invariant Feature Representation/Domain Generalization via Invariant Feature Representation-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f5543906078c29244e4d2ef4bbb868715d5d61c0
--- /dev/null
+++ b/Train/Domain Generalization via Invariant Feature Representation/Domain Generalization via Invariant Feature Representation-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea01e9364c9172be956c0b826389a5f177e72206bba835186bd0f74aa3e8eac3
+size 1437026
diff --git a/Train/Domain Generalization via Invariant Feature Representation/Domain Generalization via Invariant Feature Representation.pdf b/Train/Domain Generalization via Invariant Feature Representation/Domain Generalization via Invariant Feature Representation.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4794a1dcaab40365ed26990de570eaeea8d15b9b
--- /dev/null
+++ b/Train/Domain Generalization via Invariant Feature Representation/Domain Generalization via Invariant Feature Representation.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1949abed8ca5f7ae0fff780c4254e0cb14c1db24b0d6d7ccf2f276778956abbf
+size 1551655
diff --git a/Train/Domain Generalization via Invariant Feature Representation/info.txt b/Train/Domain Generalization via Invariant Feature Representation/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..864581490c1e77e71628611f578fbab6be901c31
--- /dev/null
+++ b/Train/Domain Generalization via Invariant Feature Representation/info.txt
@@ -0,0 +1,203 @@
+
+
+ Abstract
+ This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to
+ previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an
+ invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output
+ variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new
+ domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully
+ learns invariant features and improves classifier performance in practice.
+
+
+
+ Domain Generalization
+ Standard Setting: Assume that the training data and test data
+ come from the same distribution; learn a classifier/regressor that
+ generalizes well to the test data.
+ Domain Adaptation: The training data and test data may come
+ from different distributions. The common assumption is that we
+ observe the test data at training time. Adapt the classifier/regressor
+ trained on the training data to the specific set of test data.
+ Covariate Shift: The marginal P(X) changes, but the conditional
+ P(Y|X) stays the same.
+ Target Shift/Concept Drift: The marginal P(Y) or the conditional
+ P(Y|X) may also change.
+ Domain Generalization: The training data comes from different
+ distributions. Learn a classifier/regressor that generalizes well to
+ unseen test data, which also comes from a different distribution.
+ Applications: medical diagnosis: aggregating the diagnoses of
+ previous patients for new patients who have similar demographic
+ and medical profiles.
+
+ Figure 1: A simplified schematic diagram of the domain
+ generalization framework. A major difference between
+ our framework and most previous work in domain adap-
+ tation is that we do not observe the test domains during
+ training time.
+
+
+
+ Objective
+
+ Given the training sample S, our goal is to produce an estimate
+ f : P_X × X → R that generalizes well to a test sample
+ S^t = {x_k^t}_{k=1}^{n_t}. To actively reduce the dissimilarity between
+ domains, we find a transformation B in the RKHS H that
+ 1. minimizes the distance between the empirical distributions of the
+    transformed samples B(S^i), and
+ 2. preserves the functional relationship between X and Y, i.e., Y ⊥ X | B(X).
+
+
+
+ ① Minimizing Distributional Variance
+ The distributional variance V_H(P) estimates the variance of the
+ distribution P_X that generates P_X^1, P_X^2, ..., P_X^N.
+ Definition 1: Introduce a probability distribution P on H with
+ P(µ_{P^i}) = 1/N and center G to obtain the covariance operator of P,
+ denoted Σ := G − 1_N G − G 1_N + 1_N G 1_N. The distributional
+ variance is
+   V_H(P) := (1/N) tr(Σ) = (1/N) tr(G) − (1/N²) Σ_{i,j=1}^N G_ij.
+ The empirical distributional variance can be computed as tr(KQ),
+ where Q is the coefficient matrix that also appears in the DICA
+ eigenproblem below.
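+
+ A minimal numpy sketch of this quantity, assuming the N×N Gram matrix G
+ between the domain mean embeddings is given (names are illustrative):
+
+   import numpy as np
+
+   def distributional_variance(G):
+       # V_H(P) = tr(G)/N - sum_ij G_ij / N^2
+       N = G.shape[0]
+       return np.trace(G) / N - G.sum() / N**2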
+
+
+
+ ② Preserving Functional Relationship
+ The central subspace C is the minimal subspace that captures the
+ functional relationship between X and Y, i.e., Y ⊥ X | C^T X.
+ Theorem 1: If there exists a central subspace C = [c_1, ..., c_m]
+ satisfying Y ⊥ X | C^T X, and for any a ∈ R^d, E[a^T X | C^T X] is
+ linear in {c_i^T X}_{i=1}^m, then E[X|Y] ⊂ span{Σ_xx c_i}_{i=1}^m.
+ It follows that the bases C of the central subspace coincide with the
+ m largest eigenvectors of V(E[X|Y]) premultiplied by Σ_xx^{-1}. Thus, the
+ basis c is the solution to the eigenvalue problem
+   V(E[X|Y]) Σ_xx c = γ Σ_xx c.
+
+
+
+ Domain-Invariant Component Analysis
+ Combining ① and ②, DICA finds B = [β_1, β_2, ..., β_m] that solves
+ the combined objective, which leads to the following algorithm:
+
+ DICA Algorithm
+ Input: parameters λ, ε, and m << n; sample S = {S^i = {(x_k^(i), y_k^(i))}_{k=1}^{n_i}}_{i=1}^N
+ Output: projection B (n×m) and kernel K̃ (n×n)
+ 1: Calculate the Gram matrices [K_ij]_kl = k(x_k^(i), x_l^(j)) and
+    [L_ij]_kl = l(y_k^(i), y_l^(j)).
+ 2: Supervised: C = L(L + nεI)^{-1} K². Unsupervised: C = K².
+ 3: Solve (1/n) C B = (KQK + K + λI) B Γ for B.
+ 4: Output B and K̃ ← K B B^T K.
+ 5: The test kernel is K̃^t ← K^t B B^T K, where K^t (n_t × n) is the joint
+    kernel between the test and training data.
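+
+ Step 3 is a generalized eigenvalue problem; a minimal scipy sketch under
+ the assumption that C is symmetric (e.g., the unsupervised case C = K²):
+
+   import numpy as np
+   from scipy.linalg import eigh
+
+   def dica_projection(C, K, Q, lam, m):
+       n = K.shape[0]
+       A = C / n                                # left-hand side (1/n) C
+       R = K @ Q @ K + K + lam * np.eye(n)      # right-hand side KQK + K + lam*I
+       w, V = eigh(A, R)                        # generalized eigenproblem A b = w R b
+       return V[:, np.argsort(w)[::-1][:m]]     # m leading eigenvectors form B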
+
+
+
+ A Learning-Theoretic Bound
+ Theorem 2: Under reasonable technical assumptions, a generalization
+ bound holds with probability at least 1 − δ.
+ The bound reveals a tradeoff between reducing the distributional
+ variance and the complexity or size of the transform used to do so. The
+ denominator of (1) is a sum of these terms, so DICA tightens the
+ bound in Theorem 2.
+ Preserving the functional relationship (i.e., the central subspace) by
+ maximizing the numerator in (1) should reduce the empirical risk
+ Ê ℓ(f(X̃_ij B), Y_i), but a rigorous demonstration has yet to be found.
+
+
+
+ Relations to Existing Methods
+ The DICA and UDICA algorithms generalize many well-known dimension
+ reduction techniques. In the supervised setting, if the dataset S contains
+ samples drawn from a single distribution P_XY, then KQK = 0. Substituting
+ α := KB gives the eigenvalue problem (1/n) L(L + nεI)^{-1} K α = K α Γ,
+ which corresponds to covariance operator inverse regression (COIR) [KP11].
+ If there is only a single distribution, then unsupervised DICA reduces
+ to KPCA, since KQK = 0 and finding B requires solving the eigensystem
+ KB = BΓ, which recovers KPCA [SSM98]. If there are two domains, source
+ P^S and target P^T, then UDICA is closely related, though not identical,
+ to Transfer Component Analysis [Pan+11]. This follows from the
+ observation that V_H({P^S, P^T}) = ||µ_{P^S} − µ_{P^T}||².
+
+
+
+ Experimental Results
+
+ Figure 2: Projections of a synthetic dataset
+ onto the first two eigenvectors obtained from
+ KPCA, UDICA, COIR, and DICA. The colors
+ of the data points correspond to the output
+ values. The shaded boxes depict the projection
+ of the training data, whereas the unshaded boxes
+ show projections of unseen test datasets.
+
+ Table 1: Average accuracies over 30 random subsamples of GvHD datasets. Pooling SVM
+ applies standard kernel function on the pooled data from multiple domains, whereas dis-
+ tributional SVM also considers similarity between domains using kernel on distributions.
+ With sufficiently many samples, DICA outperforms other methods in both pooling and dis-
+ tributional settings.
+
+ Table 2: The average leave-one-
+ out accuracies over 30 subjects
+ on GvHD data. The distribu-
+ tional SVM outperforms the pool-
+ ing SVM. DICA improves classifier
+ accuracy.
+
+
+
+
+
+
+ Conclusions
+ Domain-Invariant Component Analysis (DICA) is a new algorithm for
+ domain generalization based on learning an invariant transformation
+ of the data. The algorithm is theoretically justified and performs well
+ in practice.
+
+
+
+ References
+ [KP11]M. Kim and V. Pavlovic. “Central subspace dimensionality reduction using covariance operators”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence
+ 33.4 (2011), pp. 657–670.
+ [Pan+11]Sinno Jialin Pan et al. “Domain adaptation via transfer component analysis”. In: IEEE Transactions on Neural Networks 22.2 (2011), pp. 199–210.
+ [SSM98]B. Schölkopf, A. Smola, and K-R. Müller. “Nonlinear component analysis as a kernel eigenvalue problem”. In: Neural Computation 10.5 (July 1998), pp. 1299–1319.
+
+
+
diff --git a/Train/ExScal Backbone Network Architecture/ExScal Backbone Network Architecture-Poster.pdf b/Train/ExScal Backbone Network Architecture/ExScal Backbone Network Architecture-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..67d518a57590d231580b732be55ca01ac364e773
--- /dev/null
+++ b/Train/ExScal Backbone Network Architecture/ExScal Backbone Network Architecture-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d3c22b87b837793b6c11a561dc2177f8860c1556386f4776d6b7491fec6aab4
+size 1608940
diff --git a/Train/ExScal Backbone Network Architecture/ExScal Backbone Network Architecture.pdf b/Train/ExScal Backbone Network Architecture/ExScal Backbone Network Architecture.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c0bfcfca16766d25b7fff414ca047cc563f3bc95
--- /dev/null
+++ b/Train/ExScal Backbone Network Architecture/ExScal Backbone Network Architecture.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e93aa343efa485dd19247e9c78df12ace17f6172fedb6089b83ba8b71746eaca
+size 100539
diff --git a/Train/ExScal Backbone Network Architecture/info.txt b/Train/ExScal Backbone Network Architecture/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..72520957b5addfeeb4ed694760e8bbdffab0225d
--- /dev/null
+++ b/Train/ExScal Backbone Network Architecture/info.txt
@@ -0,0 +1,95 @@
+
+
+ Introduction: Extreme Scaling of A Line in the Sand
+ ExScal Specifications Imply a Backbone Network
+ (Tier 2)
+ •System
+ A distributed system of ~1000 sensor nodes spread across 1.26 km × 300 m
+ •Real Time Behavior
+ Detection, classification, and tracking at the base station in real time
+ •Low Overhead
+ Low cost, power efficient, robust, accurate, easily deployable, and self
+ configurable system
+ Network Hierarchy
+
+
+
+
+ Tier 2 Anatomy: Hardware and Layout of XSS Network Deployment
+ XSS (Extreme Scale Stargate)
+
+ Network Topology
+
+
+
+
+ Problem Description: Fault Tolerant Services for the Tier 2
+ Specifications of Middleware Services:
+ •Initialization of XSSs
+ – Initialize processes on all XSSs and collect the geographic locations of all
+ XSSs at the base station
+ – Reliably and energy-efficiently communicate packets, each of size up to 1
+ Kbyte, to all XSSs, and collect a packet of size up to 32 bytes from each
+ XSS
+ •Convergecast
+ – Collect data and status from all XSSs e.g. intruder event detection, tier-1
+ reprogramming feedback, tier-1 and tier-2 management feedback
+ – Reliable and energy efficient delivery of an event detection message from
+ any XSS to the base station within 6 seconds
+ •Broadcast
+ – Disseminate bulk of data to all XSSs e.g. reprogramming of the XSMs,
+ tier-1 and tier-2 management queries
+ – Reliable and energy efficient transmission of a file of size up to 200
+ Kbytes to all XSSs
+ •Management
+ – Monitor processes on XSSs e.g. CPU usage, disk usage
+ – Configure services running on XSSs e.g. change transmission power level
+ – Invoke Deluge, SNMS queries and collect the result of the queries
+ Fault Model:
+ –Crash of one or more user-level processes on an XSS
+ –Fail-stop of an XSS
+ –Change of location of an XSS
+ Challenges:
+ •Initialization of XSSs
+ – No assumption about the topology of the network
+ •Convergecast
+ – Estimate the qualities of the links using only data traffic
+ •Broadcast
+ – Avoid collisions among messages while broadcasting without time synchronization
+
+
+
+ Solution: Tier 2 Network Protocol Suite and Management
+ Protocols:
+ •Init
+ – Uses controlled flooding to construct a distributed tree over the network
+ •uniComm
+ – Chooses route based on beacon-free in-situ link estimation
+ •Sprinkler
+ – Constructs a CDS (connected dominating set) and a corresponding packet
+ forwarding schedule for the nodes in CDS to minimize the number of
+ transmissions
+ – Streaming Phase: Uses explicit acknowledgements, piggybacked on the
+ data packets, to reliably communicate packets to all the nodes in CDS
+ – Recovery Phase: Reliably communicates packets to all the
+ non-dominating nodes using a pull model and unicast transmissions
+ •Management
+ – Uses Sprinkler to broadcast the queries to all XSSs; the responses from
+ all XSSs are collected at the base station using UniComm
+ – Uses timers to monitor the spawned processes
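+
+ A hedged sketch of the greedy connected-dominating-set construction that
+ Sprinkler relies on (a generic greedy heuristic over the XSS connectivity
+ graph, not necessarily the protocol's exact rule; assumes a connected graph):
+
+   import networkx as nx
+
+   def greedy_cds(G):
+       # Grow a CDS by repeatedly adding the frontier node that
+       # dominates the most not-yet-dominated nodes.
+       start = max(G.nodes, key=G.degree)
+       cds, dominated = {start}, {start} | set(G[start])
+       while dominated != set(G.nodes):
+           frontier = {v for u in cds for v in G[u]} - cds
+           best = max(frontier, key=lambda v: len(set(G[v]) - dominated))
+           cds.add(best)
+           dominated |= {best} | set(G[best])
+       return cds
+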
+ Architecture Diagram:
+
+ Performance:
+ •Init
+ – Average latency of 6.5 seconds with 100% reliability
+ •uniComm
+ – Average end-to-end latency is 0.25 seconds
+ •Sprinkler
+ – Minimum latency to transmit a 100-Kbyte file to all XSSs is 6 seconds
+
+
+
diff --git a/Train/Exploiting Database Similarity Joins for Metric Spaces/Exploiting Database Similarity Joins for Metric Spaces-Poster.pdf b/Train/Exploiting Database Similarity Joins for Metric Spaces/Exploiting Database Similarity Joins for Metric Spaces-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6e8877b7ea1695e9dc87e2c92a261aea2634a389
--- /dev/null
+++ b/Train/Exploiting Database Similarity Joins for Metric Spaces/Exploiting Database Similarity Joins for Metric Spaces-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1cce1babd75eb7244b8f8a1e8da1d39286509a0d953a20f0a678220eeb599756
+size 1120663
diff --git a/Train/Exploiting Database Similarity Joins for Metric Spaces/Exploiting Database Similarity Joins for Metric Spaces.pdf b/Train/Exploiting Database Similarity Joins for Metric Spaces/Exploiting Database Similarity Joins for Metric Spaces.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9050c24cc1642b3ade4a6f05f5f705cb240b3e8e
--- /dev/null
+++ b/Train/Exploiting Database Similarity Joins for Metric Spaces/Exploiting Database Similarity Joins for Metric Spaces.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9676792c86a34a40c7464c79931d7bbbb25c55e7abf97bed378522a7f395e31b
+size 475724
diff --git a/Train/Exploiting Database Similarity Joins for Metric Spaces/info.txt b/Train/Exploiting Database Similarity Joins for Metric Spaces/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c35c0dd1d79b1b58ba7fb221bf76e64319cca564
--- /dev/null
+++ b/Train/Exploiting Database Similarity Joins for Metric Spaces/info.txt
@@ -0,0 +1,95 @@
+
+
+ The Problem
+ • Similarity joins are a key tool in analyzing and
+ processing data.
+ • Some standalone Similarity Join algorithms
+ have been proposed.
+ • Little work has been done on implementing
+ Similarity Joins as physical database operators.
+ Our Contribution
+ • DBSimJoin, a general Similarity Join database
+ operator for metric spaces implemented inside
+ PostgreSQL.
+ • Non-blocking behavior
+ • Prioritizes early generation of results
+ • Fully supports the iterator interface
+ • We show how this operator can be used in
+ real-world data analysis scenarios:
+ • Identify similar images (vectors)
+ • Identify similar publications (strings)
+
+
+
+ DBSimJoin Algorithm
+ • Partitions the data in a series of successive rounds until the
+   partitions are small enough to be joined with a nested loop.
+ • The algorithm is structured as a finite-state machine in order
+   to support the database iterator interface.
+
+
+
+
+ Partitioning in DBSimJoin
+ • The data is partitioned by generalized hyperplane partitioning
+   using a set of K pivots (see the sketch below).
+ • Two types of partitions exist: base partitions and window-pair
+   partitions.
+ • Each data record is placed into the base partition of its
+   closest pivot.
+ • Window partitions hold data that is within ε of the boundary
+   between partitions.
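+
+ A minimal sketch of that partitioning step (Euclidean metric assumed; the
+ window rule uses the generalized hyperplane lower bound):
+
+   import numpy as np
+
+   def partition(records, pivots, eps):
+       base = {i: [] for i in range(len(pivots))}
+       window = {}
+       for r in records:
+           d = np.linalg.norm(pivots - r, axis=1)
+           order = np.argsort(d)
+           base[order[0]].append(r)
+           for j in order[1:]:
+               # Distance from r to the boundary between pivots order[0]
+               # and j is at least (d[j] - d[order[0]]) / 2.
+               if (d[j] - d[order[0]]) / 2.0 <= eps:
+                   window.setdefault(tuple(sorted((order[0], j))), []).append(r)
+       return base, window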
+
+ Partitioning a Base Partition
+
+ Partitioning a Window Partition
+
+
+
+ DBSimJoin Rounds
+ • The first round partitions the input data. All partitions too
+   large to be processed immediately in memory are stored on disk.
+ • Additional rounds re-partition partitions that have been stored
+   on disk.
+
+
+
+
+ Performance
+ Increasing Scale Factor
+
+ Increasing Epsilon
+
+
+
diff --git a/Train/Expression-Invariant Face Recognition with Expression Classification/Expression-Invariant Face Recognition with Expression Classification-Poster.pdf b/Train/Expression-Invariant Face Recognition with Expression Classification/Expression-Invariant Face Recognition with Expression Classification-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e91a170c8f5702efe7a6671354ef3f3a7c6cc460
--- /dev/null
+++ b/Train/Expression-Invariant Face Recognition with Expression Classification/Expression-Invariant Face Recognition with Expression Classification-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4c6411a64a6590580150131241d968fba7c5996dbd5c9d09944fe2855331d14
+size 468899
diff --git a/Train/Expression-Invariant Face Recognition with Expression Classification/Expression-Invariant Face Recognition with Expression Classification.pdf b/Train/Expression-Invariant Face Recognition with Expression Classification/Expression-Invariant Face Recognition with Expression Classification.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..768f2b01908fbf0cdab61896893c882a2ae8ab67
--- /dev/null
+++ b/Train/Expression-Invariant Face Recognition with Expression Classification/Expression-Invariant Face Recognition with Expression Classification.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c88d9819e62404c3bc3bb8a67424691d8648a4f419065df557c1e0c52d5b8a8f
+size 308158
diff --git a/Train/Expression-Invariant Face Recognition with Expression Classification/info.txt b/Train/Expression-Invariant Face Recognition with Expression Classification/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..746e83f0d6ad9ce7a23e15410dd6c0aa0ef0cc7c
--- /dev/null
+++ b/Train/Expression-Invariant Face Recognition with Expression Classification/info.txt
@@ -0,0 +1,90 @@
+
+
+ Expression-invariant face recognition
+ 1. Facial expression affects the performance of a face recognition system.
+ 2. Expression changes face geometry, but texture is stable.
+ 3. We conduct recognition based on texture and geometry separately.
+ Related works
+ Separately modeling texture and geometry information in
+ different eigenspaces has been applied in ASM and AAM (by
+ Cootes et al.); however, our eigenspaces aim at distinguishing
+ individuals and expressions.
+
+
+
+ Feature spaces and recognition methods
+ Space 1: Texture space V_tex: eigenspace of warped face textures.
+ Space 2: Angle space V_ang: eigenspace of the weighted inner angles
+ of masks. The angle weight is the inverse of the angle variance in
+ natural warpings of training faces.
+ Face recognition: the projections of a testing face into V_tex and V_ang
+ are used as two attributes in face recognition by distinguishing
+ natural warpings from artificial warpings. A testing face is
+ recognized as the same individual as a reference face if both
+ attributes are similar.
+ Space 3: Angle residual space V_res: eigenspace of the angle
+ changes during natural warping.
+
+ Warping from "surprise" to "normal"
+ Expression classification: compare the geometry of a testing
+ face with that of the recognized reference face in V_res.
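+
+ A minimal PCA-style sketch of building such an eigenspace and projecting a
+ new sample into it (illustrative; variable names are assumptions):
+
+   import numpy as np
+
+   def build_eigenspace(X, m):
+       # Rows of X are training vectors (warped textures or angle vectors).
+       mean = X.mean(axis=0)
+       U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
+       return mean, Vt[:m]                 # top-m eigenvectors
+
+   def project(x, mean, basis):
+       return basis @ (x - mean)           # coordinates of x in the eigenspace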
+
+
+
+ Mask fitting and warping
+
+ 34 vertices and 51 triangles
+ Normal faces vs. expressioned faces
+
+ Natural warpings vs. artificial warpings
+ Natural warping: warping from test face to the reference face of the same individual.
+ Artificial warping: warping from test face to the reference face of a different individual.
+
+
+
+ Results of face recognition
+
+ Results of expression classification: 84%
+
+
+
+
+ Extension: face recognition on noisy 3D face scans:
+ 1. Expression affects 3D face recognition
+ more severely, since most geometric
+ features will be misaligned.
+ 2. The 3D face mask carries face shape
+ information. The displacement from the
+ mask to its original face carries face
+ surface features.
+ 3. In a natural warping, an expressioned
+ face will approach its reference face as
+ the expression is removed. In an artificial
+ warping, however, the face surface features
+ of the testing face will be distorted.
+
+
+
+
+
diff --git a/Train/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces-Poster.pdf b/Train/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..59134a4377e6c5aff8e5f1c6109eded41589f9d9
--- /dev/null
+++ b/Train/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd39d186bd0aa87dfcef842aec24cb7e58a1fcbf68ddc1d78f2f0698135d2187
+size 3475141
diff --git a/Train/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces.pdf b/Train/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0c03b5391815716ee55b87b89b122ee4498b05f6
--- /dev/null
+++ b/Train/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:510f589c3dff0d2bfd87d460d872b67ea949d9340587b1cff7c46690e3ca9949
+size 710392
diff --git a/Train/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces/info.txt b/Train/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0652939afdea1e96c1faf78cc041fd3a64175921
--- /dev/null
+++ b/Train/Extracting Logical Structure and Identifying Stragglers in Parallel Execution Traces/info.txt
@@ -0,0 +1,118 @@
+
+
+ Abstract
+ We introduce a new approach to automatically extract an idealized logical
+ structure from a parallel execution trace. We use this structure to define
+ intuitive metrics such as the lateness of a process involved in a parallel
+ execution. By analyzing and illustrating traces in terms of logical steps, we
+ leverage a developer’s understanding of the happened-before relations in
+ a parallel program. This technique can uncover dependency chains,
+ elucidate communication patterns, and highlight sources and propagation
+ of delays, all of which may be obscured in a traditional trace visualization.
+
+
+
+ Extracting Logical Structure
+ The logical structure of a program is the ordering of events implied by that
+ program. We describe the logical structure by assigning a logical step to
+ each event.
+ Structure extraction occurs in two phases:
+ 1. Partitioning related communication
+ 2. Step assignment
+ Partitioning
+ Partitions represent non-overlapping application phases. If not predefined,
+ we derive them from the trace:
+ Matching sends and receives and communication handled by the same
+ MPI call must be related and thus in the same partition. When merged,
+ this can create cycles in ordering:
+
+
+ Communication partitions forming a cycle do not permit a partial order, so
+ we infer these partitions are related and merge them.
+ In addition to merging due to ordering
+ constraints, we can optionally merge due
+ to behavioral assumptions. For example,
+ in bulk synchronous codes we expect
+ each process to be active at some
+ distance in the partition graph.
+
+
+
+
+ Step Assignment
+ Each partition is independently assigned steps based on two principles:
+ 1. Happened-before relationships must be maintained
+ 2. Send events have greater impact on structure
+
+ Consider this trace segment from an 8-
+ process run of the pF3D stencil
+ communication benchmark [1].
+ First we determine groups of
+ simultaneous sends (gray) using
+ receives only for ordering.
+
+
+ Then we assign the least step
+ possible to each event.
+ Finally we insert aggregated non-communication events between the
+ sends and receives and determine global steps using partition ordering.
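+
+ As a rough sketch, step assignment under happened-before constraints is a
+ longest-path computation over the event DAG (simplified; the real assignment
+ first groups simultaneous sends):
+
+   from collections import defaultdict
+
+   def assign_steps(events, preds):
+       # events: event ids in a topological order of the happened-before DAG;
+       # preds[e]: events that must precede e.
+       step = defaultdict(int)
+       for e in events:
+           step[e] = max((step[p] + 1 for p in preds.get(e, [])), default=0)
+       return dict(step)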
+
+
+
+ Temporal Metrics
+ Having determined a logical structure, we can calculate how late an event
+ was relative to its peers. We define lateness as excess completion time
+ over the earliest related event at a step.
+ We visualize a portion of an MG [2] trace using traditional methods as
+ represented by Vampir [3] (left) and logical structure and lateness (right).
+ In the latter the communication pattern and delay propagation is clear.
+
+
+ We classify four situations contributing to event lateness:
+
+ Using this classification, we can narrow our focus to events where
+ lateness originates by subtracting out propagated lateness. This
+ differential lateness allows us to pinpoint sources of delays automatically.
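+
+ A hedged sketch of both metrics under these definitions (assuming completion
+ times grouped by logical step; the propagation rule is a simplification):
+
+   def lateness(times_at_step):
+       # times_at_step: {step: {process: completion_time}}
+       late = {}
+       for s, times in times_at_step.items():
+           earliest = min(times.values())
+           late[s] = {p: t - earliest for p, t in times.items()}
+       return late
+
+   def differential_lateness(late, s, prev_s, p):
+       # Lateness at step s minus lateness already carried from prev_s.
+       return max(late[s][p] - late.get(prev_s, {}).get(p, 0), 0)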
+
+
+
+ Case Study
+ We analyze a massively parallel algorithm to compute merge trees. The
+ algorithm relies on a global gather-scatter approach where each level
+ requires messages sent both up and down a k-ary gather tree:
+
+ Below are the Vampir (left) and logical structure (right) visualizations of a
+ 16 process, 4-ary merge tree calculation. In the logical structure view,
+ lateness reflects data-dependent load imbalance. Logical steps highlight
+ the gather tree structure, revealing that the gather processes send back to
+ the leaves before sending up to the root, missing an opportunity for more
+ aggressive pipelining.
+
+
+ The 1024-process, 8-ary tree below shows similar issues. The recurring
+ “panhandle” shape highlights waiting due to sending down before up.
+
+
+
+
+ References
+ 1. C. H. Still et al. Filamentation and forward Brillouin scatter of entire smoothed and aberrated laser beams.
+ Physics of Plasmas, 7(5):2023, 2000.
+ 2. D. H. Bailey et al. The NAS parallel benchmarks. Int. J. Supercomput. Appl., 5(3):63–73, 1991.
+ 3. W. E. Nagel et al. VAMPIR: Visualization and analysis of MPI resources. Supercomputer, 12(1):69-80, 1996.
+
+
+
diff --git a/Train/Face Spoofing Detection through/Face Spoofing Detection through-Poster.pdf b/Train/Face Spoofing Detection through/Face Spoofing Detection through-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3439e18e4f3f259daed970dfc08556f89fd56ea0
--- /dev/null
+++ b/Train/Face Spoofing Detection through/Face Spoofing Detection through-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4084c461d91a92b977e09cfecf5e927c870384b2cd110e9d131d056cca94a53a
+size 719416
diff --git a/Train/Face Spoofing Detection through/Face Spoofing Detection through.pdf b/Train/Face Spoofing Detection through/Face Spoofing Detection through.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..650d542675daa4112ba6f53e62bc32509fd7e8c1
--- /dev/null
+++ b/Train/Face Spoofing Detection through/Face Spoofing Detection through.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ddb5c15e55752e0343defd508d2305bd5cd4185667bae20fcdda390cfd488baa
+size 7269364
diff --git a/Train/Face Spoofing Detection through/info.txt b/Train/Face Spoofing Detection through/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..dadd4ba08fde5c3ec1b3b4a74aa3c073036d39ab
--- /dev/null
+++ b/Train/Face Spoofing Detection through/info.txt
@@ -0,0 +1,82 @@
+
+
+ Introduction
+ Problem: a 2-D image-based facial verification or recognition system
+ can be spoofed with no difficulty (a person displays a photo of an
+ authorized subject printed on a piece of paper)
+ Idea: an anti-spoofing solution based on a holistic representation of the
+ face region through a robust set of low-level feature descriptors,
+ exploiting spatial and temporal information
+ Advantages: PLS allows the use of multiple features and avoids the
+ need to choose beforehand a smaller set of features that may not
+ be suitable for the problem
+ Partial Least Squares
+ PLS deals with a large number of variables and a small number of
+ examples
+ Data matrix X and response matrix Y
+
+ Practical Solution: NIPALS algorithm
+ Iterative approach to calculate PLS factors
+ PLS weights the feature descriptors and estimates the location of the
+ most discriminative regions
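+
+ A minimal NIPALS sketch for centered X (n×d) and Y (n×c), one factor per
+ outer iteration (a textbook variant, not necessarily the exact version used):
+
+   import numpy as np
+
+   def nipals_pls(X, Y, n_factors, tol=1e-8):
+       X, Y = X.copy(), Y.copy()
+       W, T = [], []
+       for _ in range(n_factors):
+           u = Y[:, 0]
+           for _ in range(100):
+               w = X.T @ u; w /= np.linalg.norm(w)
+               t = X @ w
+               q = Y.T @ t; q /= np.linalg.norm(q)
+               u_new = Y @ q
+               if np.linalg.norm(u_new - u) < tol:
+                   break
+               u = u_new
+           p = X.T @ t / (t @ t)
+           X -= np.outer(t, p)                      # deflate X
+           Y -= np.outer(t, (Y.T @ t) / (t @ t))    # deflate Y
+           W.append(w); T.append(t)
+       return np.array(W).T, np.array(T).T          # weights and scores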
+
+
+
+
+ Anti-Spoofing Proposed Solution
+ A video sample is divided into m parts, and feature extraction is
+ applied to every k-th frame. The resulting descriptors are
+ concatenated to compose the feature vector
+
+ PLS is employed to obtain the latent feature space, in which higher
+ weights are attributed to feature descriptors extracted from regions
+ containing discriminatory characteristics between the two classes
+ The test procedure evaluates if a novel sample belongs either to the
+ live or non-live class. When a sample video is presented to the system,
+ the face is detected and the frames are cropped and rescaled
+
+
+
+
+ Experimental Results
+ Print-Attack Dataset
+
+ Dataset: 200 real-access and 200 printed-photo attack videos [1]
+ Setup: face detection, rescale to 110 x 40 pixels, 10 frames are sampled
+ for feature extraction (HOG, intensity, color frequency (CF) [2],
+ histogram of shearlet coefficients (HSC) [3], GLCM)
+ Classifier evaluation: a C-SVM with a linear kernel achieved an EER of
+ 10%; the PLS method achieved an EER of 1.67%
+
+ Feature combination
+
+ Comparisons
+ NUAA Dataset
+
+ Dataset: 1743 live images and 1748 non-live images for training. 3362
+ live and 5761 non-live images for testing [4]
+ Setup: faces are detected and images are scaled to 64 x 64 pixels
+ Comparison: Tan et al. [4] achieved AUC of 0.95
+
+ Feature combination
+
+ [1] https://www.idiap.ch/dataset/printattack
+ [2] W. R. Schwartz, A. Kembhavi, D. Harwood, and L. S. Davis. Human Detection Using Partial Least Squares Analysis.
+ In IEEE ICCV, pages 24–31, 2009.
+ [3] W. R. Schwartz, R. D. da Silva, and H. Pedrini. A Novel Feature Descriptor Based on the Shearlet Transform. In
+ IEEE ICIP, 2011.
+ [4] X. Tan, Y. Li, J. Liu, and L. Jiang. Face liveness detection from a single image with sparse low rank bilinear
+ discriminative model. In ECCV, pages 504–517, 2010.
+
+
+
diff --git a/Train/FaceTracer- A Search Engine for Large Collections of Images with Faces/FaceTracer- A Search Engine for Large Collections of Images with Faces-Poster.pdf b/Train/FaceTracer- A Search Engine for Large Collections of Images with Faces/FaceTracer- A Search Engine for Large Collections of Images with Faces-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..62b5fe6dfb43ccf20959414086c807a574f25187
--- /dev/null
+++ b/Train/FaceTracer- A Search Engine for Large Collections of Images with Faces/FaceTracer- A Search Engine for Large Collections of Images with Faces-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34d48bfbc9149040ed02ff839448cd259bfbca337b440b1503fb938f556a64b2
+size 684819
diff --git a/Train/FaceTracer- A Search Engine for Large Collections of Images with Faces/FaceTracer- A Search Engine for Large Collections of Images with Faces.pdf b/Train/FaceTracer- A Search Engine for Large Collections of Images with Faces/FaceTracer- A Search Engine for Large Collections of Images with Faces.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ba532cd184c70771f3ea3c1269b8331200ba39a6
--- /dev/null
+++ b/Train/FaceTracer- A Search Engine for Large Collections of Images with Faces/FaceTracer- A Search Engine for Large Collections of Images with Faces.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb144495bd1a764ee0b08e6ef3a15b9752d4a2adaf16655c685312543d85ece6
+size 8610094
diff --git a/Train/FaceTracer- A Search Engine for Large Collections of Images with Faces/info.txt b/Train/FaceTracer- A Search Engine for Large Collections of Images with Faces/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..98d6557b35938057d5c7210b1781b4a65416aff7
--- /dev/null
+++ b/Train/FaceTracer- A Search Engine for Large Collections of Images with Faces/info.txt
@@ -0,0 +1,58 @@
+
+
+ Overview
+ FaceTracer is the first face search engine
+ • Built using over 3 million face images
+ • “Real-world” images collected from the internet
+ • Simple attribute-based text queries
+ • Automatic selection of best features for each attribute
+
+
+
+ Database Creation
+
+
+ Our Database: 3,131,075 Faces
+ 17,454 Labeled Faces
+
+
+
+
+ Training
+ Face Regions
+
+ Feature Types
+
+ Training Process
+ 1. Train local SVMs for each region/feature type combination
+ 2. Use Adaboost to select best features
+ 3. Train global SVM on union of selected features
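+
+ A hedged sklearn-style sketch of that pipeline (region/feature extraction is
+ stubbed out, and the AdaBoost-based selection is approximated by ranking
+ local accuracy; a reconstruction, not the authors' code):
+
+   import numpy as np
+   from sklearn.svm import SVC
+
+   def train_attribute_classifier(region_feats, labels, n_keep=5):
+       # region_feats: {(region, feature_type): (n_samples, d) array}
+       local = {rf: SVC(kernel='linear').fit(X, labels)
+                for rf, X in region_feats.items()}
+       ranked = sorted(region_feats,
+                       key=lambda rf: local[rf].score(region_feats[rf], labels),
+                       reverse=True)
+       best = ranked[:n_keep]
+       X = np.hstack([region_feats[rf] for rf in best])
+       return best, SVC(kernel='rbf').fit(X, labels)   # global SVM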
+
+
+
+
+ Results
+ Classification Performance
+
+
+ Automatically Selected Features
+
+ Smiling
+
+ Hair Color
+ Comparison to State-of-the-Art
+
+
+
+
diff --git a/Train/Facebully Towards the Identification of Cyberbullying in Facebook/Facebully Towards the Identification of Cyberbullying in Facebook-Poster.pdf b/Train/Facebully Towards the Identification of Cyberbullying in Facebook/Facebully Towards the Identification of Cyberbullying in Facebook-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d9c6d017d03519e445bb4bdbff395a88f4eb2309
--- /dev/null
+++ b/Train/Facebully Towards the Identification of Cyberbullying in Facebook/Facebully Towards the Identification of Cyberbullying in Facebook-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b66bcfcebfd1bcd9b0ce7f34dea7850200bdef2a5d4b9dcf1d3e9e34ac622aa8
+size 470680
diff --git a/Train/Facebully Towards the Identification of Cyberbullying in Facebook/Facebully Towards the Identification of Cyberbullying in Facebook.pdf b/Train/Facebully Towards the Identification of Cyberbullying in Facebook/Facebully Towards the Identification of Cyberbullying in Facebook.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9df02ee7ac36b07a0fb86ef2bf8d963ea5bf8bbf
--- /dev/null
+++ b/Train/Facebully Towards the Identification of Cyberbullying in Facebook/Facebully Towards the Identification of Cyberbullying in Facebook.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e0ae248d8965db36d5d163b63f7f898a81eef19e3782b5e929811acf092d200
+size 153893
diff --git a/Train/Facebully Towards the Identification of Cyberbullying in Facebook/info.txt b/Train/Facebully Towards the Identification of Cyberbullying in Facebook/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c19aaf1afe58659b23da9f6d37d18a98b27efae7
--- /dev/null
+++ b/Train/Facebully Towards the Identification of Cyberbullying in Facebook/info.txt
@@ -0,0 +1,89 @@
+
+
+ ❖ Motivation ❖
+ • In the past year, 1 million children were victims
+ of cyberbullying on Facebook.
+ • There has not been sufficient research into
+ identifying cyberbullying behavior in social
+ networks and media.
+
+
+
+ ❖ Our Contribution ❖
+ • Facebully is an application designed to identify
+ a case of cyberbullying by exploiting the social
+ media data available.
+ • The application is based on a model
+ designed for cyberbully identification that was
+ built on previous research findings of both
+ traditional and cyberbullying in adolescents.
+
+
+
+ ❖ Benefits ❖
+ • Once Facebully is ready for deployment, it can
+ be used, e.g., for parents to monitor their
+ children via their social network and forewarn
+ them if their child is a victim of online
+ aggression.
+ • The model used to design the application can
+ be modified to identify other behaviors as well,
+ such as depression or self-destructive
+ tendencies.
+
+
+
+ ❖ Future Work ❖
+ • Finish the implementation of Facebully 1.0.
+ • Study mechanisms to dynamically adjust the
+ Bullying Rank by using machine learning
+ techniques and to incorporate new
+ cyberbullying factors that cannot be directly
+ extracted from Facebook, e.g., ethnicity,
+ physical and mental disabilities, etc.
+
+
+
+ ❖ Architecture ❖
+
+
+
+
+ ❖ Rank Factors ❖
+
+
+
+
+ ❖ Design ❖
+ • Facebully measures the intensity of online
+ aggression a user may be experiencing by first
+ identifying two major factors:
+ • Warning signs
+ • Vulnerability
+ • Each factor consists of sub-factors whose
+ values can be computed from the data
+ available in the user’s profile.
+ • The Bullying Rank (B) is computed by an
+ equation that normalizes the intensity of
+ cyberbullying.
+ Warning Signs (S): [0, 100]
+ Vulnerability (V): [0.22, 0.59]
+ Bullying Rank (B): [0, 59]
+ • The possible range of values of the Bullying
+ Rank (B) is divided into three levels of risk
+ intensity.
+ Risk Levels
+ 1. Low Risk: [0, 20]
+ 2. Medium Risk: [21, 40]
+ 3. High Risk: [41, 59]
+ • The parent/guardian of the minor is then
+ notified of the Bullying Rank (B) and its level of
+ intensity.
+ • Any extracted data or prior computations that
+ need to be used for later updating the Bullying
+ Rank (B) are stored in the permanent storage.
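+
+ The published ranges are consistent with a multiplicative combination
+ (S in [0, 100] times V in [0.22, 0.59] yields B in [0, 59]); a sketch under
+ that assumption, which is ours rather than the poster's:
+
+   def bullying_rank(warning_signs, vulnerability):
+       # Assumed rule: B = S * V, then map B to a risk level.
+       b = warning_signs * vulnerability
+       if b <= 20:
+           level = "Low Risk"
+       elif b <= 40:
+           level = "Medium Risk"
+       else:
+           level = "High Risk"
+       return b, level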
+
+
+
diff --git a/Train/Feature Construction for Inverse Reinforcement Learning/Feature Construction for Inverse Reinforcement Learning-Poster.pdf b/Train/Feature Construction for Inverse Reinforcement Learning/Feature Construction for Inverse Reinforcement Learning-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..97e0a2a433fc0bb5139ab6f8fe395094ed71f527
--- /dev/null
+++ b/Train/Feature Construction for Inverse Reinforcement Learning/Feature Construction for Inverse Reinforcement Learning-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55240c991b91602ac1637f0268bdcfe002bb71f06de56427bca13c174c4bbfbf
+size 1002537
diff --git a/Train/Feature Construction for Inverse Reinforcement Learning/Feature Construction for Inverse Reinforcement Learning.pdf b/Train/Feature Construction for Inverse Reinforcement Learning/Feature Construction for Inverse Reinforcement Learning.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..98721c3c607a06fb71656bfdeab0b547aaf5c9c5
--- /dev/null
+++ b/Train/Feature Construction for Inverse Reinforcement Learning/Feature Construction for Inverse Reinforcement Learning.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef5b85e2910c33117a6f56ce49883b9c0d5572e06768033a5e89eea5b7cb8463
+size 950935
diff --git a/Train/Feature Construction for Inverse Reinforcement Learning/info.txt b/Train/Feature Construction for Inverse Reinforcement Learning/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f19b984c77fdc470ba33705ceff68ae54032770c
--- /dev/null
+++ b/Train/Feature Construction for Inverse Reinforcement Learning/info.txt
@@ -0,0 +1,97 @@
+
+
+ 1. Introduction
+ Goal: given a Markov Decision Process (MDP) M without its
+ reward function R, as well as example traces D from its optimal
+ policy, find R.
+ Motivations: learning policies from examples, inferring goals,
+ specifying tasks by demonstration.
+ Challenge: many functions R fit the examples, but many will not
+ generalize to unobserved states. Selecting a compact set of features
+ to represent R is difficult.
+ Solution: construct features to represent R from an exhaustive list
+ of component features, using logical conjunctions of component
+ features represented as a regression tree.
+
+
+
+ 2. Background
+ Markov Decision Process: M = {S, A, θ, γ, R}
+   S – set of states; A – set of actions
+   γ – discount factor; R – reward function
+   θ – state transition probabilities: θ_{sas'} = P(s'|s, a)
+ Optimal Policy: denoted π*, maximizes E[ Σ_{t=0}^∞ γ^t R(s_t, a_t) | π, θ ]
+ Example Traces: D = {(s_{1,1}, a_{1,1}), ..., (s_{n,T}, a_{n,T})}, where s_{i,t} is
+ the t-th state in the i-th trace, and a_{i,t} is the optimal action in s_{i,t}.
+ Previous Work: most existing algorithms require a set of features Φ
+ to be provided, and find a reward function that is a linear
+ combination of the features [1, 2, 3, 4]. Finding features that are
+ relevant and sufficient is difficult. Furthermore, a linear
+ combination is not always a good estimate for the reward.
+ Component Features: instead of a complete set of relevant fea-
+ tures, our method accepts an exhaustive list of component features
+ δ : S → Z. The algorithm finds a regression tree, with relevant
+ component features acting as tests, to represent the reward.
+
+
+
+ 3. Algorithm
+ Overview: Iteratively construct feature set Φ and reward R, al-
+ ternating between an optimization phase that determines a re-
+ ward, and a fitting phase that determines the features.
+ Optimization Phase: Find a reward R "close" to the current features
+ Φ, under which the examples D are part of the optimal policy. Letting
+ Proj_Φ R denote the closest reward to R that is a linear combination
+ of the features Φ, we find R by an optimization over rewards (equation
+ omitted on the poster).
+ Note that R can "step outside" of the current features to satisfy
+ the examples, if the current features Φ are insufficient.
+ Fitting Phase: Fit a regression tree to R, with component
+ features δ acting as tests at tree nodes. Indicators for leaves of
+ the tree are the new features Φ. Only component features that are
+ relevant to the structure of R are selected, and leaves correspond
+ to their logical conjunctions.
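+
+ A hedged sketch of the fitting phase with a generic regression tree (sklearn
+ stands in for the authors' tree construction; leaf indicators become the new
+ features Φ):
+
+   import numpy as np
+   from sklearn.tree import DecisionTreeRegressor
+
+   def fitting_phase(component_feats, reward, max_leaves=16):
+       # component_feats: (num_states, num_components); reward: (num_states,)
+       tree = DecisionTreeRegressor(max_leaf_nodes=max_leaves)
+       tree.fit(component_feats, reward)
+       leaf_ids = tree.apply(component_feats)       # leaf index per state
+       leaves = np.unique(leaf_ids)
+       # One indicator feature per leaf: a logical conjunction of the
+       # component-feature tests along the path to that leaf.
+       Phi = (leaf_ids[:, None] == leaves[None, :]).astype(float)
+       return tree, Phi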
+
+
+
+ 4. Illustrated Example
+
+
+
+
+ 5. Experimental Results
+ Gridworld transfer comparison: 64×64 gridworld with colored objects placed at
+ random. Component features give distance to object of specific color. Many
+ colors are irrelevant. Transfer performance corresponds to learning reward
+ on one random gridworld, and evaluating on 10 others (with random object
+ placement). Comparing FIRL (proposed algorithm), Abbeel & Ng [1], MMP
+ [3], LPAL [4]. FIRL outperforms prior methods, which cannot distinguish
+ relevant objects from irrelevant ones.
+
+ Highway driving: “lawful” policy avoids going fast in right lane, “outlaw”
+ policy drives fast, but slows down near police. Features indicate presence
+ of police, current lane, speed, distance to cars, etc. Logical connection be-
+ tween speed and lanes/police cars cannot be captured by linear combina-
+ tions, and prior methods cannot match the expert’s speed while also match-
+ ing feature expectations. Videos of the learned policies can be found at:
+ http://graphics.stanford.edu/projects/firl/index.htm.
+
+
+
+ 6. References
+ [1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learn-
+ ing. In ICML ’04: Proceedings of the 21st International Conference on Machine
+ Learning. ACM, 2004.
+ [2] A. Y. Ng and S. J. Russell. Algorithms for inverse reinforcement learning. In
+ ICML ’00: Proceedings of the 17th International Conference on Machine Learn-
+ ing, pages 663–670. Morgan Kaufmann Publishers Inc., 2000.
+ [3] N. D. Ratliff, J. A. Bagnell, and M. A. Zinkevich. Maximum margin planning. In
+ ICML ’06: Proceedings of the 23rd International Conference on Machine Learn-
+ ing, pages 729–736. ACM, 2006.
+ [4] U. Syed, M. Bowling, and R. E. Schapire. Apprenticeship learning using linear
+ programming. In ICML ’08: Proceedings of the 25th International Conference
+ on Machine Learning, pages 1032–1039. ACM, 2008.
+
+
+
diff --git a/Train/Feature-based Part Retrieval for Interactive 3D Reassembly/Feature-based Part Retrieval for Interactive 3D Reassembly-Poster.pdf b/Train/Feature-based Part Retrieval for Interactive 3D Reassembly/Feature-based Part Retrieval for Interactive 3D Reassembly-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f03fb5964a64814fb1fe406c24d63f334b6af7ad
Binary files /dev/null and b/Train/Feature-based Part Retrieval for Interactive 3D Reassembly/Feature-based Part Retrieval for Interactive 3D Reassembly-Poster.pdf differ
diff --git a/Train/Feature-based Part Retrieval for Interactive 3D Reassembly/Feature-based Part Retrieval for Interactive 3D Reassembly.pdf b/Train/Feature-based Part Retrieval for Interactive 3D Reassembly/Feature-based Part Retrieval for Interactive 3D Reassembly.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f95f11972c1e14fdcffbb392667a5b256c170c46
--- /dev/null
+++ b/Train/Feature-based Part Retrieval for Interactive 3D Reassembly/Feature-based Part Retrieval for Interactive 3D Reassembly.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c34efdab54be2997692161180c70e4eaa2ea4fa5d05eb823ed2a5018d1a607c8
+size 367312
diff --git a/Train/Feature-based Part Retrieval for Interactive 3D Reassembly/info.txt b/Train/Feature-based Part Retrieval for Interactive 3D Reassembly/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5ce0a5b48ca20d176a8fd760196da7f1d97d0a0a
--- /dev/null
+++ b/Train/Feature-based Part Retrieval for Interactive 3D Reassembly/info.txt
@@ -0,0 +1,64 @@
+
+
+ Motivation
+ Archeology: reconstruct broken artifacts
+ Molecular Biology: identify compatible proteins
+ Forensics: understand disaster scenes
+
+
+
+ Goal
+ Help the user reconstruct a 3D object from a large
+ collection of pieces
+
+
+
+
+ Approach
+ 1. Detect interest regions and compute local
+ descriptors
+
+ 2. Identify candidate descriptor correspondences
+
+ 3. Quantify geometric compatibility between parts
+
+ 4. Compute final match score using spectral methods
+
+
+ Motivated by [Leordeanu & Hebert, 2005]
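+ As a loose illustration of step 4 (our own sketch of the general spectral
+ matching idea of [Leordeanu & Hebert, 2005], assuming a precomputed
+ affinity matrix M over candidate correspondences):
+
+ # Illustrative spectral match scoring: M[i, j] measures the geometric
+ # consistency of candidate correspondences i and j; the leading
+ # eigenvector of M concentrates on a consistent cluster.
+ import numpy as np
+
+ def spectral_match_score(M, iters=100):
+     x = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
+     for _ in range(iters):  # power iteration for the leading eigenvector
+         x = M @ x
+         x /= np.linalg.norm(x)
+     return float(x @ M @ x)  # larger score => more consistent match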
+
+
+
+ Results
+ Database of solid objects, each broken into four pieces
+
+ System correctly identifies the pieces for each object
+
+
+ 100 piece database
+ High retrieval accuracy at low ranks
+ Robust to noise
+
+ Accuracy at low ranks is a crucial metric to evaluate retrieval
+ Significantly outperforms baseline at varying noise levels
+
+
+
diff --git a/Train/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors-Poster.pdf b/Train/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5c9eeb6f5c28846a5389cf88dd37c1f8fd458cd1
--- /dev/null
+++ b/Train/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2b1dfdf086e19555d25beb8ae91896bc2e2f68dece308a67bc341cbda6f15d3
+size 3826029
diff --git a/Train/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors.pdf b/Train/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..014d6eda928fb63fd8a27cd2419074b73d15df35
--- /dev/null
+++ b/Train/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da90e690e8a0dcb8c0f01f216dcb6c16d9aba9e35228601c5b6f19fc61daef92
+size 3828676
diff --git a/Train/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors/info.txt b/Train/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..aa2983b77194a330a4646116c3e9e21ac31f522c
--- /dev/null
+++ b/Train/Finding Things- Image Parsing with Regions and Per-Exemplar Detectors/info.txt
@@ -0,0 +1,70 @@
+
+
+ Overview
+ We present a system for image parsing aimed at achieving broad coverage across hundreds of object categories, many of them sparsely
+ sampled. The system combines region-level features with per-exemplar sliding window detectors. Region-level features are highly
+ effective in identifying “stuff” categories (sky, road, building, etc.) but are quite bad at localizing “things” (car, person, sign, etc.).
+ Conversely, sliding window detectors can reliably localize “things” but have a hard time with “stuff.”
+
+
+
+
+ System Description
+ At training time, we train HOG-based per-exemplar detectors
+ (Malisiewicz et al. 2011), and compute the necessary features for our
+ Superparsing system (Tighe and Lazebnik 2010).
+ Parsing pipeline:
+ • Obtain a retrieval set of globally similar training images
+ • Region-based data term (E_R) is computed using our Superparsing system
+ • Detector-based data term (E_D):
+ – Run per-exemplar detectors for exemplars in the retrieval set
+ – Transfer masks from all detections above a set detection threshold to the test image
+ – Detector data term is computed as the sum of these masks scaled by their detection score
+ • Combine these two data terms by training an SVM on the concatenation of E_D and E_R
+ • Smooth the SVM output (E_SVM) using an MRF:
+
+ An illustration of the generation of our detector based data term.
+
+
+
+
+
+
+
+ SIFT Flow Dataset
+
+
+ Per-class breakdown of the classification rate on the SIFT Flow dataset.
+
+
+
+
+
+
+
+
+ LM+Sun Dataset
+
+ Breakdown of the classification rate on the most common classes in the LM+SUN dataset.
+
+ Examples of “thing” classes on LM+SUN. The caption for each class shows: (# of training
+ instances of that class) / (# of test instances) (per-pixel rate on the test set)%
+
+
+
+
diff --git a/Train/Fine-Grained Visual Comparisons with Local Learning/Fine-Grained Visual Comparisons with Local Learning-Poster.pdf b/Train/Fine-Grained Visual Comparisons with Local Learning/Fine-Grained Visual Comparisons with Local Learning-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0204410c75c3ea04513dc5e0e2add2f2060c26ed
--- /dev/null
+++ b/Train/Fine-Grained Visual Comparisons with Local Learning/Fine-Grained Visual Comparisons with Local Learning-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f44900566a42b64d2adc029c965a5920d4986f5157f5a24e0eb37934e03bc9a0
+size 857242
diff --git a/Train/Fine-Grained Visual Comparisons with Local Learning/Fine-Grained Visual Comparisons with Local Learning.pdf b/Train/Fine-Grained Visual Comparisons with Local Learning/Fine-Grained Visual Comparisons with Local Learning.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..17a747d3915a9bf424d9687b0da1ba36ff8d0d27
--- /dev/null
+++ b/Train/Fine-Grained Visual Comparisons with Local Learning/Fine-Grained Visual Comparisons with Local Learning.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec2c8cc6a9728ad9de9e7bb6e656a593ff794c259c3c592611b39f5cfe10fc73
+size 1487647
diff --git a/Train/Fine-Grained Visual Comparisons with Local Learning/info.txt b/Train/Fine-Grained Visual Comparisons with Local Learning/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..284ac02b3935267d280d794e4f3cb24892b95bfa
--- /dev/null
+++ b/Train/Fine-Grained Visual Comparisons with Local Learning/info.txt
@@ -0,0 +1,110 @@
+
+
+ Visual Comparisons
+ Which shoe is more sporty?
+
+ Problem:
+ Fine-grained visual
+ comparisons require
+ accounting for subtle
+ visual differences specific
+ to each comparison pair.
+ Status Quo: Learning a Global Ranking Function
+ [Parikh & Grauman 11, Datta et al. 11, Li et al. 12, Kovashka et al. 12, ...]
+
+ o fails to account for subtle differences
+ among closely related images
+ o each comparison pair exhibits unique
+ visual cues/rationales
+ o visual comparisons need not be transitive
+
+
+
+
+ Our Approach
+ We propose a local learning approach for fine-grained comparisons.
+
+ o learn attribute-specific distance metrics
+ o identify top K analogous neighboring pairs w.r.t. each novel pair
+ o train local function that tailors to the neighborhood statistics
+ Key Idea: having the right data > having more data
+
+
+
+ Analogous Neighboring Pairs
+ Detect analogous pairs based on individual similarity & paired contrast.
+ o select neighboring pairs that accentuate fine-grained differences
+ o take product of pairwise distances of individual members
+ o i.e. highly analogous if both query-training couplings are similar
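+ Concretely (our own sketch of the stated rule, with hypothetical names;
+ d is any learned distance between two images):
+
+ # Illustrative analogous-pair score: product of member-wise distances;
+ # a smaller product means a more analogous neighboring pair.
+ def pair_distance(query_pair, train_pair, d):
+     (q1, q2), (t1, t2) = query_pair, train_pair
+     return d(q1, t1) * d(q2, t2)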
+
+
+
+
+ Learned Attribute Distance
+ Learn a Mahalanobis metric per attribute (similarity computation).
+ o attribute similarity doesn’t rely equally on each dim of feature space
+ o constrains similar images to be close, dissimilar images to be far
+
+ Observation: Nearest analogous pairs most suited for local
+ learning need not be those closest in raw feature space.
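+ For reference, a learned Mahalanobis metric scores similarity as a
+ quadratic form (a minimal sketch of ours, assuming a PSD matrix M
+ produced by some metric-learning solver):
+
+ # Illustrative Mahalanobis distance between feature vectors x and y.
+ import numpy as np
+
+ def mahalanobis_distance(x, y, M):
+     diff = x - y
+     return float(np.sqrt(diff @ M @ diff))
+
+ # With M = identity this reduces to Euclidean distance; learning M
+ # lets some feature dimensions count more than others per attribute.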
+
+
+
+ UT Zappos50K Dataset
+ We introduce a new large shoe dataset, UT-Zap50K, consisting of
+ 50,025 catalog images from Zappos.com.
+ o 4 relative attributes (open, pointy, sporty, comfort)
+ o high-confidence pairwise labels from mTurk workers
+ o 6,751 ordered labels + 4,612 “equal” labels
+ o 4,334 twice-labeled fine-grained labels (no “equal” option)
+
+
+
+
+ Results: UT-Zap50K
+ o FG-LocalPair: our proposed fine-grained approach
+ o Global [Parikh & Grauman 11]: status quo of learning a single
+ global ranking function per attribute
+ o RandPair: local approach with random neighbors
+ o RelTree [Li et al. 12]: non-linear relative attribute approach
+ o LocalPair: our approach w/o the learned metric
+ (10 iterations @ K=100)
+ Accuracy Comparison
+ o coarser comparisons
+
+ o fine-grained comparisons
+
+
+
+
+ o accuracy for the 30 hardest test pairs (according to learned metrics)
+
+ Observation:
+ We outperform all baselines,
+ demonstrating strong advantage for
+ detecting subtle differences on the
+ harder comparisons (~20% more).
+
+
+
+ Results: PubFig & Scenes
+ We form supervision pairs using the category-wise comparisons, averaging 20,000 ordered labels per attribute.
+ o Public Figures Face (PubFig): 772 images w/ 11 attributes
+ o Outdoor Scene Recognition (OSR): 2,688 images w/ 6 attributes
+
+ Observation: We outperform the current state of the art on 2 popular relative attribute
+ datasets. Our gains are especially dominant on localizable attributes due to the learned metrics.
+
+
+
diff --git a/Train/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion-Poster.pdf b/Train/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..86d9f19e4c9414fb847d1277037617d5cc7b3d83
--- /dev/null
+++ b/Train/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b24eec3ccd32c41b0433eb93b9fa8ff73e467a6512acde53e0cd5f35ed38658
+size 809562
diff --git a/Train/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion.pdf b/Train/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cc994d9492024f5d4c545f4b2c6aae838ad650c4
--- /dev/null
+++ b/Train/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48721f9496164aae2e8b1ebabcbc4cbc7942750e3b6a6e418fcb7f2fe5e973b7
+size 1435428
diff --git a/Train/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion/info.txt b/Train/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4627a4aff67bb15890ddf6972705b36251b518f9
--- /dev/null
+++ b/Train/Free your Camera- 3D Indoor Scene Understanding from Arbitrary Camera Motion/info.txt
@@ -0,0 +1,95 @@
+
+
+ Overview
+ Problem statement
+ 3D Indoor semantic layout estimation
+ Full 6DoF freely moving observer
+ No hard Manhattan assumptions
+ Near real-time performance
+ Experiments
+ Tested on the Michigan Indoor Corridor dataset [1]
+ Introduction of a new challenging dataset
+
+
+
+
+ Proposed approach
+ Sparse 3D reconstruction
+ Estimate camera pose and a sparse map with:
+ Fast Monocular V-SLAM – All frames in real-time
+ Slow VisualSfM – Few frames to preserve real-time
+ Layout definition
+ Made of layout components (walls, ground, floor)
+ Walls are orthogonal to the ground plane
+ Arbitrary number of walls, not mutually orthogonal
+ Layout estimation
+ Iterative RanSaC plane fitting
+ Large number of inaccurate layout components
+ Initialize layout hypotheses as random
+ combinations of layout components
+ Local perturbation and optimization of hypotheses
+ Each hypothesis is a particle in a particle filter
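+ A bare-bones sketch of one RANSAC plane-fitting round on the sparse map
+ (illustrative code of ours; the point-cloud variable is a placeholder):
+
+ # Illustrative single-plane RANSAC on an (n, 3) point cloud; repeating
+ # while removing inliers yields candidate layout components.
+ import numpy as np
+
+ def ransac_plane(points, iters=200, tol=0.02, rng=None):
+     rng = rng or np.random.default_rng()
+     best_plane, best_inliers = None, None
+     for _ in range(iters):
+         p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
+         n = np.cross(p1 - p0, p2 - p0)
+         if np.linalg.norm(n) < 1e-9:
+             continue  # degenerate sample
+         n = n / np.linalg.norm(n)
+         inliers = np.abs((points - p0) @ n) < tol  # point-to-plane distance
+         if best_inliers is None or inliers.sum() > best_inliers.sum():
+             best_plane, best_inliers = (n, p0), inliers
+     return best_plane, best_inliers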
+ Scoring hypotheses
+ Terms in the score function enforce fitness (P_f),
+ orthogonality to ground (P_o), reprojection error (P_r),
+ wall-to-wall orientation (P_m), simplicity (P_s), and
+ wall-to-wall intersection (P_w).
+ Advantages:
+ No hard Manhattan assumptions
+ No a priori knowledge of the observer motions w.r.t. the scene
+ Near real-time performance (~20 fps)
+ Particle filter implementation allows recovering from noisy and
+ wrong initializations by exploiting the multimodal posterior,
+ resampling, and particle clustering
+
+
+
+
+ Experiments
+ Michigan Indoor Corridor dataset [1]
+ Indoor video sequences from a mobile robot
+ Object-free corridor scenes
+ Proposed dataset
+ Indoor video sequences from hand-held smartphone
+ Various cluttered scenes
+ Offices, corridors, large rooms
+ Complex layouts (not box-room, not Manhattan)
+ Results
+ Our method significantly outperforms [1], [2] and [3] in both
+ classification accuracy and execution time
+ Table below:
+ Left – Results on the Michigan Indoor Corridor dataset [1] (excluding and including ceiling)
+ Right – Results on the proposed dataset (classification accuracy and computation time)
+
+
+
+ [1] Grace Tsai, Changhai Xu, Jingen Liu, and Benjamin Kuipers. Real-time indoor scene understanding using bayesian
+ filtering with motion cues. In ICCV, 2011.
+ [2] Varsha Hedau, Derek Hoiem, and David Forsyth. Recovering the spatial layout of cluttered rooms. In ICCV, 2009.
+ [3] Derek Hoiem, Alexei A. Efros, and Martial Hebert. Recovering surface layout from an image. IJCV, 75(1), 2007.
+
+
+
+ Conclusions
+ Real-time oriented approach for indoor scene understanding
+ Probabilistic framework to generate, evaluate and optimize layout
+ hypotheses
+ Extensive experimental evaluation that demonstrates that our
+ formulation outperforms state-of-the-art methods in both classification
+ accuracy and computation time
+ Dataset available: http://www.ira.disco.unimib.it/free_your_camera
+ http://vision.stanford.edu/3Dlayout/
+
+
+
diff --git a/Train/Graph-Based Discriminative Learning for Location Recognition/Graph-Based Discriminative Learning for Location Recognition-Poster.pdf b/Train/Graph-Based Discriminative Learning for Location Recognition/Graph-Based Discriminative Learning for Location Recognition-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..dd63e69115e70e15bcc0dbac7e11358271f12d69
--- /dev/null
+++ b/Train/Graph-Based Discriminative Learning for Location Recognition/Graph-Based Discriminative Learning for Location Recognition-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33ae50bd1c39052dec33e0a83fc144f4a35dabc4a80955fc9b8ce2018b1fa06f
+size 9723262
diff --git a/Train/Graph-Based Discriminative Learning for Location Recognition/Graph-Based Discriminative Learning for Location Recognition.pdf b/Train/Graph-Based Discriminative Learning for Location Recognition/Graph-Based Discriminative Learning for Location Recognition.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0a50d9d24e3a0262a06a0eb0a5dbc940e7c48fea
--- /dev/null
+++ b/Train/Graph-Based Discriminative Learning for Location Recognition/Graph-Based Discriminative Learning for Location Recognition.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:deb3317a3bc681493f42dd869984408b7411136d257f459a2e73f42ef39e1623
+size 6662254
diff --git a/Train/Graph-Based Discriminative Learning for Location Recognition/info.txt b/Train/Graph-Based Discriminative Learning for Location Recognition/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..211383c56b877b703638dced5cfbec4101c5644e
--- /dev/null
+++ b/Train/Graph-Based Discriminative Learning for Location Recognition/info.txt
@@ -0,0 +1,76 @@
+
+
+ Introduction
+ • Goal: Recognize an image’s location by matching to a database
+ • Challenges: matching is time consuming; image retrieval is noisy
+ • Previous Approaches: image retrieval based & direct matching
+ • Our Approach:
+ – Use an image graph to learn local similarity functions
+ – Encourage diversity in top ranked results
+
+
+
+
+ Image Graphs
+ • Nodes are images
+ • Only geometrically consistent images are connected
+ • Edge weights defined by the Jaccard Index
+ J(a,b) = N(a,b) / (N(a) + N(b) − N(a,b)),
+ thresholded to improve robustness (see the snippet below)
+ • On the right: an example image graph on the Dubrovnik dataset
+ (red nodes are selected center images)
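+ For concreteness (an illustrative snippet of ours, not the authors’ code),
+ a thresholded Jaccard edge weight between two images, given the sets of
+ images each one is geometrically consistent with:
+
+ # Illustrative Jaccard edge weight; weak edges are dropped entirely.
+ def jaccard_weight(neighbors_a, neighbors_b, threshold=0.1):
+     inter = len(neighbors_a & neighbors_b)
+     union = len(neighbors_a) + len(neighbors_b) - inter
+     w = inter / union if union else 0.0
+     return w if w >= threshold else 0.0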
+
+
+
+
+ Overview of Approach
+ Training:
+ 1. Compute a covering of the graph with a set of subgraphs (select center images
+ or neighborhoods in the image graph).
+ 2. Learn and calibrate an SVM-based distance metric for each subgraph.
+ Testing:
+ 3. Use the models in step 2 to compute the distance from a query image to each
+ database image, and generate a ranked shortlist of possible image matches.
+ 4. Perform geometric verification sequentially with the top database images in the
+ shortlist.
+
+
+
+ Generating Ranking Results
+ • Ranked neighborhoods are concatenated to form a ranking list of all DB images
+ • Order within each neighborhood is determined by BoW similarity
+ • Goal: have the first true match appear in the ranked shortlist as early as possible
+ • Comparison of BoW image retrieval ranking and our learned ranking:
+
+ • Ranking can be further improved by enforcing diversity in top results: pick the next
+ image conditioned on the previous one failing to match
+
+
+
+
+ Experiments
+
+
+
+
+ References
+ [1] Y. Li, N. Snavely, and D. Huttenlocher. Location recognition using prioritized
+ feature matching. In ECCV, 2010.
+ [2] T. Sattler, T. Weyand, B. Leibe, and L. Kobbelt. Image retrieval for image-based
+ localization revisited. In BMVC, 2012.
+
+
+
diff --git a/Train/GraphTrack Fast and Globally Optimal Tracking in Videos/GraphTrack Fast and Globally Optimal Tracking in Videos-Poster.pdf b/Train/GraphTrack Fast and Globally Optimal Tracking in Videos/GraphTrack Fast and Globally Optimal Tracking in Videos-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..516c365860e406d8d97ecaee345053b38badfecb
--- /dev/null
+++ b/Train/GraphTrack Fast and Globally Optimal Tracking in Videos/GraphTrack Fast and Globally Optimal Tracking in Videos-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86a5c5e7b20737886fb96609f02e30db0e482cec3187c25915500ef28e029686
+size 7947263
diff --git a/Train/GraphTrack Fast and Globally Optimal Tracking in Videos/GraphTrack Fast and Globally Optimal Tracking in Videos.pdf b/Train/GraphTrack Fast and Globally Optimal Tracking in Videos/GraphTrack Fast and Globally Optimal Tracking in Videos.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4f40f16fcbedf33b3f12bcd027dbff92fcc4b082
--- /dev/null
+++ b/Train/GraphTrack Fast and Globally Optimal Tracking in Videos/GraphTrack Fast and Globally Optimal Tracking in Videos.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7bbc6f1289f0b76edca76cdf7b8587109eea6d10623fed7570d19d4860d10ff
+size 942992
diff --git a/Train/GraphTrack Fast and Globally Optimal Tracking in Videos/info.txt b/Train/GraphTrack Fast and Globally Optimal Tracking in Videos/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..8b6a70538cb105c86c37c2a77113cc3a304c5a8a
--- /dev/null
+++ b/Train/GraphTrack Fast and Globally Optimal Tracking in Videos/info.txt
@@ -0,0 +1,87 @@
+
+
+ Problem
+ Special effects in movies require tracks
+ of features through scenes. Tracks are
+ found in an interactive process: the
+ artist marks a position, and the computer
+ proposes a track, which is then further
+ refined by the artist.
+ This is a difficult problem due to three
+ aspects:
+ 1. Sudden appearance changes due to lighting and pose
+ 2. Occlusions
+ 3. Speed: interactive editing requires faster-than-framerate processing
+
+
+
+ Contributions
+ We formulate tracking as a path search
+ in a large graph, and solve it efficiently
+ with a modification of Dijkstra’s algorithm.
+ The method is based on [2]. Our main
+ contributions are:
+ 1. Efficient incorporation of a background appearance model
+ 2. Formulation as a shortest path problem
+ 3. Correct handling of occlusions
+ 4. High-efficiency implementation with up to 200 fps on high resolution video
+
+
+
+ Method
+
+ The cost is interpreted as a directed
+ acyclic graph with weights on the
+ nodes and edges. The shortest path
+ corresponds to the optimal track. The
+ dashed edges are occlusions.
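+ To make the shortest-path view concrete, here is a generic dynamic-
+ programming sketch of ours (not the paper’s Dijkstra variant) over a
+ per-frame candidate graph with node and edge costs:
+
+ # cost[t][i]: node cost of candidate i in frame t (e.g. appearance);
+ # trans(t, i, j): edge cost from candidate i in frame t to j in t+1.
+ def best_track(cost, trans):
+     T = len(cost)
+     d = [list(cost[0])] + [[float("inf")] * len(c) for c in cost[1:]]
+     back = [[0] * len(c) for c in cost]
+     for t in range(1, T):
+         for j in range(len(cost[t])):
+             for i in range(len(cost[t - 1])):
+                 v = d[t - 1][i] + trans(t - 1, i, j) + cost[t][j]
+                 if v < d[t][j]:
+                     d[t][j], back[t][j] = v, i
+     j = min(range(len(cost[-1])), key=lambda k: d[-1][k])
+     track = [j]  # backtrack from the cheapest final candidate
+     for t in range(T - 1, 0, -1):
+         j = back[t][j]
+         track.append(j)
+     return track[::-1]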
+
+
+
+ Results
+
+ Between one and three user clicks were
+ needed to achieve accurate tracking for
+ the head sequence. Note the correct
+ handling of the occluded ear, which
+ required only a single click.
+ The eye of the running giraffe required
+ eight user interactions, of which three
+ marked occlusions.
+
+
+
+ References
+ [1] B. Amberg and T. Vetter. GraphTrack: Fast and Globally Optimal
+ Tracking in Videos. In CVPR ’11.
+ [2] A. Buchanan and A. Fitzgibbon. Interactive Feature Tracking using
+ K-D Trees and Dynamic Programming. In CVPR ’06.
+
+
+
+ A Future Direction
+ We incorporated a background model,
+ where a click informs us not only that
+ “this is what the patch looks like”, but
+ also, for the rest of the frame, “this is
+ what the patch does not look like”.
+ Can we also efficiently use a background
+ tracks model, allowing us to reason,
+ “this would be a good track, but part of
+ it can be better explained by another
+ track”?
+
+
+
diff --git a/Train/Hierarchical Qualitative Color Palettes/Hierarchical Qualitative Color Palettes-Poster.pdf b/Train/Hierarchical Qualitative Color Palettes/Hierarchical Qualitative Color Palettes-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b43915860dc7ff1b93677024700560a7ba0a274b
--- /dev/null
+++ b/Train/Hierarchical Qualitative Color Palettes/Hierarchical Qualitative Color Palettes-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4f7f18142909a283ac10d7104f48bcf1cfc8a8efa8b4d43cbf3abfe56481273
+size 1119747
diff --git a/Train/Hierarchical Qualitative Color Palettes/Hierarchical Qualitative Color Palettes.pdf b/Train/Hierarchical Qualitative Color Palettes/Hierarchical Qualitative Color Palettes.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5f792fba295b4dcfad037ecab0b5d5ca758ccc0d
--- /dev/null
+++ b/Train/Hierarchical Qualitative Color Palettes/Hierarchical Qualitative Color Palettes.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ce8d40771761fac125a4115916dbcf87100f8022c59098edce6dcaed756b20c
+size 1758456
diff --git a/Train/Hierarchical Qualitative Color Palettes/info.txt b/Train/Hierarchical Qualitative Color Palettes/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0023f2efd77c05e66dfb0ae7d61915bdec1ef428
--- /dev/null
+++ b/Train/Hierarchical Qualitative Color Palettes/info.txt
@@ -0,0 +1,61 @@
+
+
+ Motivation
+ Aim: Visualize tree-structured statistical data
+ Question: What color palettes to use?
+
+
+
+ Method
+ Color space: Hue-Chroma-Luminance (HCL)
+ Designed to control human perception [1, 2]
+ Branch in tree: controlled by Hue values
+ Hue range recursively assigned, starting with the root node:
+
+ Figure 1. Assignment of Hue values
+ - Assigned hue ranges of siblings are permuted to prevent a perceptual order;
+ the permutation used is based on [1, 3, 5, 2, 4].
+ - Middle fractions f are kept to discriminate different branches.
+ The choice of f is a trade-off between:
+ 1) discrimination of main branches (low f) or
+ 2) discrimination of leaf nodes (high f).
+ Tree depth: controlled by Chroma and Luminance values
+ - Luminance decreases with tree depth
+ - Chroma increases with tree depth
+ (More intense colors help in
+ discriminating leaf nodes)
+
+ Figure 2. Analogous to ocean water
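+ As a rough sketch of the recursive hue assignment (our illustration, not
+ the authors’ code; the sibling permutation step is omitted for brevity):
+
+ # Illustrative recursive hue-range assignment (hues in degrees): each
+ # child gets an equal slice of the parent range, and only the middle
+ # fraction f of the slice is kept, separating sibling branches.
+ def assign_hues(node, lo=0.0, hi=360.0, f=0.75, out=None):
+     out = {} if out is None else out
+     out[node["name"]] = (lo + hi) / 2.0  # representative hue
+     kids = node.get("children", [])
+     width = (hi - lo) / max(len(kids), 1)
+     for i, child in enumerate(kids):
+         c_lo, c_hi = lo + i * width, lo + (i + 1) * width
+         pad = (1.0 - f) * (c_hi - c_lo) / 2.0  # trim edges, keep middle f
+         assign_hues(child, c_lo + pad, c_hi - pad, f, out)
+     return out
+
+ tree = {"name": "F", "children": [{"name": "F41"}, {"name": "F42"}]}
+ print(assign_hues(tree))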
+
+
+
+ Example tree structure
+ European classification system of economic activity (NACE).
+ Section F (Construction)
+
+ Figure 3. Tree structure of economic sector F of NACE
+
+
+
+ Applications
+
+ Figure 4. Treemap of fictitious turnover values per economic sector
+
+ Figure 5. Stacked area chart and bar chart of fictitious turnover values
+
+
+
+ References
+ [1] R. Ihaka. Colour for presentation graphics. In Proceedings of the 3rd
+ International Workshop on Distributed Statistical Computing, Vienna
+ Austria, 2003.
+ [2] A. Zeileis, K. Hornik, and P. Murrell. Escaping RGBland: Selecting colors for
+ statistical graphics. Comput. Stat. Data Anal., 53(9):3259–3270, July 2009.
+
+
+
diff --git a/Train/History Dependent Domain Adaptation/History Dependent Domain Adaptation-Poster.pdf b/Train/History Dependent Domain Adaptation/History Dependent Domain Adaptation-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b1098136495291c747dbed7bdb5ecd333d25c63d
--- /dev/null
+++ b/Train/History Dependent Domain Adaptation/History Dependent Domain Adaptation-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42de48be5cb2cb59090717f578d028aeb15b23c057bab59686be7b0799792ae9
+size 457340
diff --git a/Train/History Dependent Domain Adaptation/History Dependent Domain Adaptation.pdf b/Train/History Dependent Domain Adaptation/History Dependent Domain Adaptation.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..78d6a7abb633d2066f434fac2808d8dbbbea9255
--- /dev/null
+++ b/Train/History Dependent Domain Adaptation/History Dependent Domain Adaptation.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ffecfb60cb5f71ab966e243e91d11a9e02e9c4dd464029d2743c162fc1dc917
+size 100275
diff --git a/Train/History Dependent Domain Adaptation/info.txt b/Train/History Dependent Domain Adaptation/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..eff6e7af402ed6b796a09f0166d169b878adc617
--- /dev/null
+++ b/Train/History Dependent Domain Adaptation/info.txt
@@ -0,0 +1,84 @@
+
+
+ Problem
+ How do we learn when our loss function depends on previous
+ classifications, or on the correctness of previous classifications?
+ When humans manually correct misclassifications, there is a low cost
+ associated with repeating errors; we can simply remember the human's
+ label. However, human corrections take time, and reviewing every
+ classification can be too expensive. Our aim is to learn while minimizing
+ new errors, even if we don't know which classifications are errors.
+ In many large scale machine learning deployments, classification or
+ regression is a service. These systems have an expectation of
+ consistency and adaptability. Current machine learning research focuses
+ on the latter: how well can we classify now? This work makes the trade-off
+ explicit, and shows that major gains in consistency are possible without
+ sacrificing adaptability.
+
+
+
+
+ Solutions
+ Averaging
+ Average weights or model
+ outputs (equivalent in the linear
+ case). A linear combination of
+ previous hypotheses gives us a
+ simple baseline for comparison.
+ Exponential averaging is
+ extremely easy to implement.
+ Warm start
+ Reduce divergence from
+ previous hypotheses by
+ using a small step size,
+ or by taking fewer steps.
+ In general, we might use
+ an online learning
+ algorithm.
+ Weight nearness constraint
+ Full optimization, with a hard constraint.
+
+ Prediction regularization
+ Add a regularization term which
+ penalizes the model for differing
+ from the previous model. The
+ hinge loss term is equivalent to
+ adding extra weighted examples
+ to the data set.
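+ Of the four solutions, exponential averaging is the simplest to sketch
+ (illustrative code of ours): the deployed weights move only a fraction
+ alpha toward each newly trained hypothesis, damping hypothesis churn.
+
+ # Illustrative exponential averaging of linear-model weights; small
+ # alpha favors consistency, large alpha favors adaptability.
+ import numpy as np
+
+ def update_served_weights(served_w, new_w, alpha=0.1):
+     return (1.0 - alpha) * np.asarray(served_w) + alpha * np.asarray(new_w)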
+
+
+
+ Evaluation
+ Metrics
+ ● Area under the ROC curve (AUC)
+ – Instantaneous performance
+ – We want to avoid decreasing this too much
+ ● Cumulative Unique False Positives (CUFP)
+ – Overall performance
+ – Number of examples misclassified at least once
+ Data
+ ● Adversarial advertisements (Sculley 2011)
+ – Adversarial (positive) or non-adversarial (negative)
+ – Sparse, high-dimensional
+ ● Malicious URL Identification (Ma 2009)
+ – Malicious (positive) or non-malicious (negative)
+ – Qualitatively similar, public
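+ A small sketch of the CUFP metric as described above (our own reading:
+ count the distinct examples ever misclassified as positive):
+
+ # Illustrative CUFP over a stream of (example_id, y_true, y_pred).
+ def cufp(stream):
+     seen_fp = set()
+     for example_id, y_true, y_pred in stream:
+         if y_true == 0 and y_pred == 1:  # a (possibly repeated) false positive
+             seen_fp.add(example_id)
+     return len(seen_fp)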
+ Results
+ We see up to a 50% reduction in CUFP, with
+ only a very minor reduction in AUC (0.04%)!
+ Warm start, the weight nearness constraint,
+ and averaging all performed quite well.
+ Relatively simple methods can drastically
+ improve consistency. Can we do better? Why
+ do some methods work better than others?
+ Can we make use of unlabeled data?
+
+
+
+
diff --git a/Train/Hyperspectral Imaging for Ink Mismatch Detection/Hyperspectral Imaging for Ink Mismatch Detection-Poster.pdf b/Train/Hyperspectral Imaging for Ink Mismatch Detection/Hyperspectral Imaging for Ink Mismatch Detection-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8bd381e03527ad17a821de7eae1cd2d1880d08e4
--- /dev/null
+++ b/Train/Hyperspectral Imaging for Ink Mismatch Detection/Hyperspectral Imaging for Ink Mismatch Detection-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fd3a747cd2d0d751955a4ded0f9bf9049a492f0a9ee8ef22b78baf81bd2fac8
+size 8770841
diff --git a/Train/Hyperspectral Imaging for Ink Mismatch Detection/Hyperspectral Imaging for Ink Mismatch Detection.pdf b/Train/Hyperspectral Imaging for Ink Mismatch Detection/Hyperspectral Imaging for Ink Mismatch Detection.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a41c2fcf619f18a6aef67fa5f167d77a4706d757
--- /dev/null
+++ b/Train/Hyperspectral Imaging for Ink Mismatch Detection/Hyperspectral Imaging for Ink Mismatch Detection.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbd68638c0e2c2c84fc3d9e0b208642ec2d5e66f11b6ef27f6b0938eb75e516a
+size 6136401
diff --git a/Train/Hyperspectral Imaging for Ink Mismatch Detection/info.txt b/Train/Hyperspectral Imaging for Ink Mismatch Detection/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d2eaf53d69e98bc6e6426e7f38524d92454e320a
--- /dev/null
+++ b/Train/Hyperspectral Imaging for Ink Mismatch Detection/info.txt
@@ -0,0 +1,84 @@
+
+
+ Overview
+ Ink mismatch detection provides important clues to
+ forensic document examiners [1]
+ • identifying whether a particular handwritten note is written with a
+ specific ink
+ • showing that some part (e.g. a signature) is written with a different
+ ink from the rest of the note [2].
+ Hyperspectral images capture fine spectral detail
+ • non-destructive and efficient capture
+ • automated, accurate identification
+
+
+
+
+ Ink Mismatch Detection
+ Inks are distinguishable in the visible and near-infrared range
+ • CCD camera with tunable filter captures hyperspectral images
+ • spectral responses vary in different portions of the EM spectrum
+ • non-uniform illumination modulates true spectral reflectances
+
+
+ Ink Segmentation and Clustering Algorithm
+ • segment ink pixels using Sauvola’s algorithm [3]
+ • normalize spectral responses to unit magnitude
+ • k-means clustering of ink spectral responses (k=2)
+
+ • forward feature (band) selection with leave-one-ink-out cross-validation
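+ An illustrative pipeline for the first three steps (our sketch using
+ scikit-image and scikit-learn; hsi_cube is an assumed H x W x 33 array):
+
+ # Illustrative ink segmentation + clustering on a hyperspectral cube:
+ # Sauvola-threshold a grayscale view, then normalize and cluster the
+ # spectra of the ink pixels with k-means.
+ import numpy as np
+ from skimage.filters import threshold_sauvola
+ from sklearn.cluster import KMeans
+
+ def cluster_inks(hsi_cube, k=2):
+     gray = hsi_cube.mean(axis=2)
+     ink_mask = gray < threshold_sauvola(gray, window_size=25)
+     spectra = hsi_cube[ink_mask]  # (n_ink_pixels, n_bands)
+     spectra = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)
+     labels = KMeans(n_clusters=k, n_init=10).fit_predict(spectra)
+     return ink_mask, labels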
+ Database
+ • Handwritten note: ‘A quick brown fox jumps over the lazy dog’
+ • 5 blue and 5 black inks by 7 subjects
+ • 33 band hyperspectral image in 400-720nm range (10nm steps)
+ • 3 channel RGB scan for comparison
+
+
+
+ Experiments
+ Evaluation Setup
+ • mix 2 different ink images in equal proportions
+ • 5 inks, taken 2 at a time, results in 10 different mixed ink images
+ Segmentation Results
+ • RGB versus HSI segmentation comparison
+ • Sub-visual range comparative analysis
+ • Segmentation with and without feature selection
+
+
+
+
+
+
+ Example: Discriminating Black Inks
+ • mix black ink 4 and black ink 5
+
+
+
+
+ Conclusion
+ Hyperspectral imaging is of critical value in supporting
+ ink examination.
+ • first database of its kind collected and made publicly available
+ • future work: overcome current hardware limitations
+ References
+ [1] G. Edelman, E. Gaston, T. van Leeuwen, P. Cullen, and M. Aalders, “Hyperspectral imaging for
+ non-contact analysis of forensic traces,” Forensic Science International, vol. 223, pp. 28–39, 2012
+ [2] E. B. Brauns and R. B. Dyer, “Fourier transform hyperspectral visible imaging and the
+ nondestructive analysis of potentially fraudulent documents,” Applied spectroscopy, vol. 60, no. 8, pp.
+ 833–840, 2006.
+ [3] F. Shafait, D. Keysers, and T. M. Breuel, “Efficient implementation of local adaptive thresholding
+ techniques using integral images,” Document Recog. and Retrieval XV, pp. 681510–681510–6, 2008
+
+
+
diff --git a/Train/ICCV_2013_001/ICCV_2013_001-Poster.pdf b/Train/ICCV_2013_001/ICCV_2013_001-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6a40270c1fccf4f38dec5a1d3c82ec84afa8ddc6
--- /dev/null
+++ b/Train/ICCV_2013_001/ICCV_2013_001-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f526018110f89006559ffadb099a5b9041b67002fd274b222954998193e44b8
+size 2039055
diff --git a/Train/ICCV_2013_001/ICCV_2013_001.pdf b/Train/ICCV_2013_001/ICCV_2013_001.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..bd0c5cd97f468f9a84bb64440a7a2addf1723075
--- /dev/null
+++ b/Train/ICCV_2013_001/ICCV_2013_001.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:107874755064eae9e0a0ed34f3a55a7cae79b26aa4ff901266f15aa86748f1f5
+size 2522154
diff --git a/Train/ICCV_2013_001/info.txt b/Train/ICCV_2013_001/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..6059b57fcad2d476906c436d4a142999c9d3c56b
--- /dev/null
+++ b/Train/ICCV_2013_001/info.txt
@@ -0,0 +1,73 @@
+
+
+ Introduction
+ Goal
+ • Classify high level events from unconstrained web
+ videos
+ Challenges
+ • Complex human-object interactions
+ • Diverse video quality (e.g. YouTube)
+ • Large scale dataset
+ Motivation
+ • Encoding activity concept transitions
+
+
+
+
+
+ Hidden Markov Model Fisher Vector (HMMFV)
+
+ Video Representation: A sequence of activity concept responses
+ Fisher Kernel: Partial derivatives of log-likelihood function
+ HMMFV: Partial derivatives about HMM transition parameters
+
+
+
+ Experimental Setup
+ Dataset
+ • TRECVID MED 2011 Event Kit
+ • 70% for training, 30% for testing
+ Setup
+ • Gaussian kernel SVM, 5-fold cross validation
+ Activity Concepts
+ • Same domain (Event Kit annotations [1])
+ • Cross domain (UCF 101)
+
+
+
+
+ Top Activity Concept Transitions Visualization
+
+
+
+
+ Quantitative results
+ Comparison with baseline
+
+ Training with 10 positive samples
+
+ Comparison with state-of-the-art
+
+ Conclusion
+ • Coding temporal transitions of activities by HMMFV improves performance
+ • Activity concepts coded by HMMFV are desirable with limited training samples
+ • May be useful for video event recounting (description)
+ [1] H. Izadinia and M. Shah. Recognizing complex events using large
+ margin joint low-level event model. In ECCV, 2012.
+ [2] C. Sun and R. Nevatia. Large-scale Web Video Event Classification
+ by use of Fisher Vectors. In WACV, 2013
+
+
+
diff --git a/Train/Learning People Detection Models from Few Training Samples/Learning People Detection Models from Few Training Samples-Poster.pdf b/Train/Learning People Detection Models from Few Training Samples/Learning People Detection Models from Few Training Samples-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0922e0423c278094e11ae1499f6f54e8478a4101
--- /dev/null
+++ b/Train/Learning People Detection Models from Few Training Samples/Learning People Detection Models from Few Training Samples-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:12d7274bdb73ee73089ad0ba939419969dfd2640093d09e3c808362591342008
+size 12897050
diff --git a/Train/Learning People Detection Models from Few Training Samples/Learning People Detection Models from Few Training Samples.pdf b/Train/Learning People Detection Models from Few Training Samples/Learning People Detection Models from Few Training Samples.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c4a98fc740c0ae86dda4dc8086cf632b301d2dc2
--- /dev/null
+++ b/Train/Learning People Detection Models from Few Training Samples/Learning People Detection Models from Few Training Samples.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04a5615d7625cf70c3ddb8dc1db803af44fb256c5ab092a4dc2bba665b5f9463
+size 7693784
diff --git a/Train/Learning People Detection Models from Few Training Samples/info.txt b/Train/Learning People Detection Models from Few Training Samples/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d6bd7d0f92b27956b6b24cbf5055a3d97a6aa9e5
--- /dev/null
+++ b/Train/Learning People Detection Models from Few Training Samples/info.txt
@@ -0,0 +1,112 @@
+
+
+ Goal
+ • Propose a novel technique to train people detectors
+ from only a few observed training subjects
+ • Approach the lack-of-training-data problem by
+ automatically generating realistic training samples
+ • Push the performance of current detection systems
+ trained on hundreds of manually annotated pedestrians
+
+
+
+ Contributions
+ • Compare the results to prior work (e.g. [2, 7])
+ • Explore the applicability of a state-of-the-art 3D human
+ model to learn people detectors
+
+ • Analyze various combinations of synthetic and real
+ training data
+ ⇒ outperform current methods which use real training
+ data only
+
+
+
+
+ Proposed Approach
+ 1. Generate realistic synthetic data by MovieReshape [6]
+ 2. Combine reshaped humans with backgrounds
+ 3. Automatically obtain 2D part annotations from known
+ 3D joint positions
+
+ Statistical 3D human shape model [5]
+ • Learn shape from 3D laser scans of humans
+ • Represent shape variations via PCA
+ • Embed kinematic skeleton with linear blend skinning
+ ⇒ Realistic distributions of human appearance and shape
+
+ Automatic model fitting
+ • Fit the parameters of the 3D body model to silhouettes
+ ⇒ particle filter-based estimator
+
+ Image deformation
+ • Sample 3D shape parameters ±3σ from the mean shape
+ • Use 3D offset vectors to drive 2D image warping
+
+ Composition with background
+ • Adjust color distribution of pedestrian w.r.t. background
+ Sample output images with gradual height changes
+
+
+
+
+ People Detection Models
+ Pictorial structures (PS) [1, 4]
+ • Flexible configuration of body parts with pose prior
+ • AdaBoost part detectors learned from dense shape
+ context descriptor
+ • Inference by sum-product belief propagation
+
+ Histograms of oriented gradients (HOG) [3]
+ • Sliding window detection
+ • Monolithic template based on HOG features
+ • Histogram intersection kernel SVM
+
+
+
+ Datasets
+ • Reshape data (our method): 11 persons, ∼ 2000
+ reshaped images per person
+ • CVC (virtual pedestrians) [7]: 3432 images total
+ • Multi-viewpoint real data [2]: 2972 train images, 248
+ test and 248 validation images
+
+
+
+
+ Results
+ Using Reshape data (PS model)
+
+
+ Combining detectors (PS model)
+
+
+ Combining detectors (HOG model)
+
+
+
+
+ References
+ [1] M. Andriluka, S. Roth, and B. Schiele. Pictorial structures revisited: People detection and
+ articulated pose estimation. In CVPR, pages 1014–1021, 2009.
+ [2] M. Andriluka, S. Roth, and B. Schiele. Monocular 3d pose estimation and tracking by detection.
+ In CVPR, pages 623–630, 2010.
+ [3] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
+ [4] P. F. Felzenszwalb and D. P. Huttenlocher. Pictorial structures for object recognition. IJCV,
+ 61:55–79, 2005.
+ [5] N. Hasler, C. Stoll, M. Sunkel, B. Rosenhahn, and H.-P. Seidel. A statistical model of human pose
+ and body shape. CGF (Proc. Eurographics 2008), 2(28), 2009.
+ [6] A. Jain, T. Thormählen, H.-P. Seidel, and C. Theobalt. Moviereshape: Tracking and reshaping of
+ humans in videos. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 29(5), 2010.
+ [7] J. Marin, D. Vazquez, D. Geronimo, and A. Lopez. Learning appearance in virtual scenarios for
+ pedestrian detection. In CVPR, pages 137–144, 2010.
+
+
+
diff --git a/Train/Leveraging High Performance Computation for Statistical Wind Prediction/Leveraging High Performance Computation for Statistical Wind Prediction-Poster.pdf b/Train/Leveraging High Performance Computation for Statistical Wind Prediction/Leveraging High Performance Computation for Statistical Wind Prediction-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..31074ebbd0d9332d9d2c5b5d133188530f29ecfd
--- /dev/null
+++ b/Train/Leveraging High Performance Computation for Statistical Wind Prediction/Leveraging High Performance Computation for Statistical Wind Prediction-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:229b27ce3e2d1dd323988bbc97d991b0c61ba36ca559f2ebcb1030afa1c4f37e
+size 678922
diff --git a/Train/Leveraging High Performance Computation for Statistical Wind Prediction/Leveraging High Performance Computation for Statistical Wind Prediction.pdf b/Train/Leveraging High Performance Computation for Statistical Wind Prediction/Leveraging High Performance Computation for Statistical Wind Prediction.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..71657a88755acec7d9aa4fec98717cad92298658
--- /dev/null
+++ b/Train/Leveraging High Performance Computation for Statistical Wind Prediction/Leveraging High Performance Computation for Statistical Wind Prediction.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9734c94215e3880731e3751175e46e1ca541d376010fcc90c2a4de3bb1030594
+size 3240475
diff --git a/Train/Leveraging High Performance Computation for Statistical Wind Prediction/info.txt b/Train/Leveraging High Performance Computation for Statistical Wind Prediction/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b14344c4cd12b3d4636dd0751b84a42bc4ed6ef5
--- /dev/null
+++ b/Train/Leveraging High Performance Computation for Statistical Wind Prediction/info.txt
@@ -0,0 +1,158 @@
+
+
+ Overview
+ This poster presents a new application of a particular machine learning technique for improving wind
+ forecasting. The technique, known as kernel regression, is somewhat similar to fuzzy logic in that both
+ make predictions based on the similarity of the current state to historical training states. Unlike fuzzy logic
+ systems, kernel regression relaxes the requirement for explicit event classifications and instead leverages
+ the training set to implicitly form a multi-dimensional joint density and compute a conditional expectation
+ given the available data.
+ The need for faster, highly accurate, and cost-effective predictive techniques for wind power forecasting is
+ becoming imperative as wind energy becomes a larger contributor to the energy mix in places throughout
+ the world. In wind forecasting, as in many other scientific domains, it is often important to be able to tune
+ the trade-off between accuracy and computational efficiency. The work presented here represents the
+ first steps toward building a portable, parallel, auto-tunable forecasting program where the user can select
+ a desired level of accuracy, and the program will respond with the fastest machine-specific parallel
+ algorithm that achieves that accuracy target.
+ Even though tremendous progress has been made in wind forecasting in the recent years, there remains
+ significant work to refine and automate the synthesis of meteorological data for use by wind farm and grid
+ operators, for both planning and operational purposes. This presentation will demonstrate the
+ effectiveness of computationally tunable machine learning techniques for improving wind power
+ prediction, with the goal of finding better ways to deliver accurate forecasts and estimates in a timely
+ fashion.
+
+
+
+ Kernel Density Estimation
+ KDE is a non-parametric model that does not assume any particular structure for the target distribution
+ (linear, quadratic, etc.). It uses a historical data set to construct a conditional probability density function
+ to make estimates. The density estimate is similar to a histogram. On each data point, we put a probability
+ mass and then sum all the point masses to get the joint density estimate, in its standard form
+ f̂(x) = (1/n) Σ_j K(x − x_j) for a kernel K and n training points x_j.
+
+
+
+
+ Why Kernel Density Estimation?
+ Kernel Density Estimation is our algorithm of choice because it has lots of “knobs” to adjust the power of
+ the algorithm. In one situation, we could turn the knobs to utilize a server farm’s worth of computation for
+ multiple hours to yield highly accurate results. In another scenario, we may need to make an estimate in a
+ more timely fashion, so we can adjust the algorithm to sacrifice some accuracy in favor of expediency. As
+ computational hardware becomes more complex, there will no longer be a one-size-fits-all solution. We
+ will need tunable algorithms such as this to make the best use of the hardware at hand.
+
+
+
+ The Nadaraya-Watson KDE Model
+ Let x represent the vector of predictor variables and y the quantity to be estimated (in our case, wind
+ speed). Given a historical training set (x_j, y_j) and kernel function K, the kernel density estimate at (x, y) is
+ the standard product form f̂(x, y) = (1/n) Σ_j K(x − x_j) K(y − y_j).
+ We can then calculate the conditional expectation of y given x:
+ ŷ(x) = E[y | x] = Σ_j K(x − x_j) y_j / Σ_j K(x − x_j).
+ There are several nice things about the form of this expression. First, we can use any sort of variable for
+ the predictors. Wind speed, wind direction, temperature, time of day, day of year can all be predictor
+ variables. These variables can come from multiple sites, including the site where the estimate is being
+ made, neighboring measurement sites, and grid points from a numerical weather forecast such as the
+ NAM 12km model. It is because of this flexibility that we can use this approach on different application
+ types, such as forecasting and site assessment. Also note that during parallelization the algorithm can be
+ broken down into relatively independent pieces to minimize the communications burden on a distributed
+ memory machine.
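+ A compact sketch of the Nadaraya-Watson estimator above (illustrative
+ code of ours with a Gaussian kernel; the arrays are placeholders):
+
+ # Illustrative Nadaraya-Watson kernel regression.
+ # X: (n, d) training predictors, y: (n,) targets, x: (d,) query point.
+ import numpy as np
+
+ def nw_estimate(X, y, x, bandwidth=1.0):
+     sq_dist = np.sum((X - x) ** 2, axis=1)
+     w = np.exp(-0.5 * sq_dist / bandwidth ** 2)  # kernel weights
+     return float(np.sum(w * y) / np.sum(w))      # estimate of E[y | x]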
+
+
+
+ Forecast Analysis on Wind Speeds at MIT
+ To evaluate the effectiveness of our
+ methodology, we analyzed a test site on MIT
+ campus. Data was taken from sensors on
+ the top of the Green Building (Building 44)
+ on the east side of campus. There are plans
+ to install a small-scale turbine on campus by
+ the end of 2010. The turbine installation is
+ being planned by MIT Facilities and the MIT
+ Full Breeze student group.
+
+
+
+
+ Computational Approach
+ The recent trend in computing has been towards increasingly parallel machines. This is not just in the
+ space of high performance computing (i.e. supercomputing), but also for everyday machines such as
+ desktops, notebooks, and even embedded devices! Because power consumption increases with the cube
+ of the clock frequency, chip designers are now favoring massive parallelism over faster single core
+ performance. As the number of cores increases, everything around them becomes more complex,
+ especially the memory subsystem.
+ The hardware problem has thus become a software problem. Designing portable, maintainable software
+ that can harness the power of parallel computers is of utmost importance. In order to manage the search
+ for an efficient algorithm, we plan to leverage a new programming language and compiler, called
+ PetaBricks, to search the space of forecast estimation algorithms for the one that will work best given our
+ accuracy requirements and hardware and time constraints.
+ The figure to the right illustrates a potential set of algorithms
+ where the user can trade off computation time for accuracy of
+ the result. For example, if we need an answer quickly and are
+ willing to sacrifice some accuracy, we would pick an algorithm
+ on the left. If we wanted the highest accuracy and are flexible in
+ the amount of time required, then we would pick an algorithm
+ on the right.
+
+
+
+
+ Experiment and Results
+ The kernel regression estimate performed
+ better on average than both of the other
+ techniques. Kernel regression had a MSE
+ 40% lower than persistence and 12.5%
+ percent lower than linear regression.
+ The second graph shows how tuning the
+ “knobs” of the algorithm allows the user to
+ trade accuracy for faster computation. The
+ more predictor variables used, the higher
+ the accuracy achieved, but at a higher
+ computational cost.
+ These results were obtained using a
+ relatively small set of predictor variables.
+ We hope to achieve better results, with
+ more room for improvement, using a better
+ estimation algorithm (discussed below) and
+ more diverse predictor variables.
+ For this experiment, we used outputs from
+ the NAM 12km Model to make forecasts at
+ a location on MIT campus (see lower left).
+ We used hourly data for the year 2009 to
+ make one hour ahead predictions and
+ compared the performance of our kernel
+ density estimates against persistence and
+ linear regression.
+
+
+ We also ran a measure-correlate-predict (MCP) analysis on NOAA ocean buoy data. The kernel density
+ estimation achieved an improvement of greater than 25% (in terms of mean squared error versus
+ observed) compared to using the variance-ratio MCP method in estimating missing historical data. These
+ MCP computations were performed on a SGI Altix 350 machine with 12 Intel Itanium 2 processors running
+ Interactive Supercomputing's STAR-P software. Overall, using all 12 processors on the machine, an 8.9x
+ speedup compared to serial performance was achieved.
+ In the future, we plan to use a modification of the algorithm presented here to minimize the mean
+ squared error plus a regularization term, which helps with generalization. This estimation algorithm is
+ more complex as it involves solving a large symmetric linear system to attain the objective minimization.
+ Moving to this more complex algorithm will provide greater accuracy as well as fertile ground for
+ exploring performance vs. accuracy trade-offs.
+
+
+
+ Conclusions
+ We have shown that the use of tunable kernel density estimation and regression techniques can be
+ applied effectively when leveraging high performance parallel computing resources. Not only are the
+ results achieved better than those produced when using methods such as persistence and linear
+ regression, but also the algorithms are tunable to allow the user to trade accuracy for computational
+ performance and vice versa to suit the user’s needs.
+ These types of techniques will become ever more important as parallel computing becomes ubiquitous
+ across all types of computing platforms. As software developers struggle to update their programming
+ practices to utilize these types of resources, techniques such as the automatic tuning of performance
+ parameters to achieve the user’s desired results will become extremely valuable. In the future, we plan to
+ implement these techniques with the PetaBricks programming language, which will do automatic
+ algorithm selection and parameter tuning to achieve high performance, portable, parallel, variable
+ accuracy software for wind prediction applications.
+
+
+
diff --git "a/Train/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases-Poster.pdf" "b/Train/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases-Poster.pdf"
new file mode 100644
index 0000000000000000000000000000000000000000..89cc621375305edbc4466584459bd900e7ed2195
--- /dev/null
+++ "b/Train/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases-Poster.pdf"
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17a1b0ecc74c212c726dd3ad2efbe89b6d379166650ec6ec0534ffcaaaa1d862
+size 277674
diff --git "a/Train/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases.pdf" "b/Train/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases.pdf"
new file mode 100644
index 0000000000000000000000000000000000000000..454d07dd30c9cbeca37581fda02c015bf69c9a07
--- /dev/null
+++ "b/Train/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases.pdf"
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0776972de9e80a6365068e534cf778b3837d17c722fa335ee9f72ffe18e7165d
+size 557666
diff --git "a/Train/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases/info.txt" "b/Train/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases/info.txt"
new file mode 100644
index 0000000000000000000000000000000000000000..eb02b7f6217f1d855d240977254a95f4ec1482b4
--- /dev/null
+++ "b/Train/Low\342\200\251 Overhead\342\200\251 Concurrency \342\200\251Control\342\200\251 for Partitioned \342\200\251Main\342\200\251 Memory \342\200\251Databases/info.txt"
@@ -0,0 +1,56 @@
+
+
+ Traditional Concurrency
+ Idle Resources:
+ • Wait for disk
+ • Wait for user
+ Physical Concurrency:
+ • Multiple CPUs, disks
+
+
+
+ Our Approach: H‐Store
+ • Main memory
+ • Stored procedures
+ • Multiple partitions
+
+
+
+ % CPU Cycles (Shore)
+
+
+
+
+ Single Partition Transactions
+ No locks, no undo logging: no overhead
+
+
+
+
+ Multi‐Partition Transactions
+ Two‐phase commit; network stall (bad)
+
+
+
+
+ Low Overhead Concurrency Control: do useful work during the network stall
+ Speculation: speculatively execute the next transactions during the stall, once the current txn is prepared (see the sketch below)
+ • Best for simple multi‐partition transactions: one round of work per partition
+ Locking: acquire locks only when multi‐partition transactions are in flight; skip them while executing only single‐partition transactions
+ • Best for workloads with complex transactions and inter‐partition communication
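+
+ A minimal sketch of the speculation idea (illustrative Python with hypothetical names, not H-Store's actual code): while a multi-partition transaction sits prepared in two-phase commit, a partition executes queued single-partition transactions speculatively and exposes their results only once the commit/abort decision arrives.
+
+ from collections import deque
+
+ class Partition:
+     def __init__(self):
+         self.queue = deque()        # pending single-partition transactions
+         self.speculative = []       # results hidden until the 2PC outcome
+
+     def stall_on_prepared(self):
+         # Instead of idling during the coordinator's network stall,
+         # run queued transactions speculatively on the prepared state.
+         while self.queue:
+             txn = self.queue.popleft()
+             self.speculative.append(txn())
+
+     def decide(self, commit):
+         results = self.speculative if commit else []   # abort discards the
+         self.speculative = []                          # speculative work (it
+         return results                                 # would be re-executed)
+
+ p = Partition()
+ p.queue.extend([lambda: "t1-ok", lambda: "t2-ok"])
+ p.stall_on_prepared()
+ print(p.decide(commit=True))    # ['t1-ok', 't2-ok']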
+
+
+
+ Experimental Results
+ Microbenchmark: two partitions; vary the fraction of multi‐partition transactions
+ TPC‐C-like: two partitions, varying the number of warehouses
+
+
+
+
+
diff --git a/Train/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections-Poster.pdf b/Train/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2043641a587e9d0c54287cb8c9c6dd9fc7dfac1c
--- /dev/null
+++ b/Train/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5dac940dcc8ec6b19908276ed5ceb537724c5bf7bac6ed92a549147d81a4695b
+size 1643493
diff --git a/Train/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections.pdf b/Train/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1001c2b57bf30d94fbb2ac27e53ebd4158ec8c4a
--- /dev/null
+++ b/Train/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc006c6e0543b58956406a84d143ac250706d597abfa4c009d7110c5df745e9a
+size 4095385
diff --git a/Train/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections/info.txt b/Train/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4f1c96a3fc18de2739064f27b312de921a6cf4e1
--- /dev/null
+++ b/Train/MatchMiner- Efficient Spanning Structure Mining in Large Image Collections/info.txt
@@ -0,0 +1,85 @@
+
+
+ Motivation
+ • Internet photos cover large parts of the world
+ • Novel applications are using image graphs
+ • We want to connect images as efficiently as possible
+ • We focus on finding connected components
+
+ Challenges with Unstructured Collections
+ • Image matching is expensive
+ • It is hard to know promising image pairs beforehand
+ • Visual similarity is a noisy predictor
+ • Large image collections have many “singleton” images
+ Contributions: a large-scale image matcher that
+ • incorporates relevance feedback
+ • uses rank distance to prune singleton images
+ • merges connected components with an information-theoretic approach
+ Image Representation and Matching Procedure
+ • Each image is represented using the BoW model
+ • One million visual words are trained offline
+ • Standard tf-idf weights are applied to the image vectors
+ • We use a standard geometric verification procedure:
+ • SIFT matching
+ • RANSAC-based F-matrix estimation
+
+
+
+ MatchMiner
+ Two-stage approach: (1) find an initial set of CCs by matching similar
+ images, incorporating relevance feedback; (2) merge CCs using an
+ information-theoretic approach and discard singleton images.
+ Step 1
+ • Each image vector Q_t retrieves a short list of images {I}
+ • Geometric verification partitions {I} into two sets, P and N
+ • Relevance feedback: Q_{t+1} = Q_t + (α/|P|) Σ_{I∈P} I − (β/|N|) Σ_{I∈N} I
+
+ Step 2
+ • Minimize the entropy H(C); prefer to merge large CCs
+ • Rank distance: R(I, J) = 2·Rank_I(J)·Rank_J(I) / (Rank_I(J) + Rank_J(I))
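+
+ A small numpy sketch of the two quantities above (variable names are ours): the Rocchio-style relevance-feedback update of the query vector, and the rank distance, a harmonic mean of the two retrieval ranks used to flag probable singletons.
+
+ import numpy as np
+
+ def relevance_feedback(q, P, N, alpha=0.5, beta=0.5):
+     # Move the query toward verified matches P and away from rejected
+     # candidates N (rows of P and N are image vectors).
+     q_new = q.copy()
+     if len(P):
+         q_new += alpha / len(P) * P.sum(axis=0)
+     if len(N):
+         q_new -= beta / len(N) * N.sum(axis=0)
+     return q_new
+
+ def rank_distance(rank_i_of_j, rank_j_of_i):
+     # Large when either image ranks the other poorly.
+     return 2.0 * rank_i_of_j * rank_j_of_i / (rank_i_of_j + rank_j_of_i)
+
+ q = np.array([0.2, 0.0, 0.8])
+ P = np.array([[0.1, 0.0, 0.9]])    # geometrically verified matches
+ N = np.array([[0.9, 0.1, 0.0]])    # failed verification
+ print(relevance_feedback(q, P, N), rank_distance(3, 40))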
+
+ Entropy-descent Strategy
+
+ Motivation of Rank Distance
+
+
+
+ Experiments
+ • Five medium-sized datasets and two large datasets
+ • We compare MatchMiner with Image Webs [Heath et al. 10]
+ Relevance Feedback
+
+ Rank Distance
+
+ False Edges Pruned by RD
+ Rate of pruning true edges < 0.1%
+ Mining Results
+
+ Mining Large-scale Datasets
+
+ • Largest CC of Forum
+ • 1 hr 39 min
+ • 53 nodes
+
+
+
+
diff --git a/Train/Memorability of natural scene- the role of attention/Memorability of natural scene- the role of attention-Poster.pdf b/Train/Memorability of natural scene- the role of attention/Memorability of natural scene- the role of attention-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cdc989201772cec9781183da4bbec955daa341a8
--- /dev/null
+++ b/Train/Memorability of natural scene- the role of attention/Memorability of natural scene- the role of attention-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:315f9055f4d675a567da39e415ab159d6b452c5b3078c1330535c5cccd1d34e0
+size 1155340
diff --git a/Train/Memorability of natural scene- the role of attention/Memorability of natural scene- the role of attention.pdf b/Train/Memorability of natural scene- the role of attention/Memorability of natural scene- the role of attention.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2cdef68e2150a0f5c2b3d7d9f4c04e0aa65a5559
--- /dev/null
+++ b/Train/Memorability of natural scene- the role of attention/Memorability of natural scene- the role of attention.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:546e3d8caeed0b8f532b558e4df1d720149935e9e8243c4bfe04174fe3a99877
+size 828821
diff --git a/Train/Memorability of natural scene- the role of attention/info.txt b/Train/Memorability of natural scene- the role of attention/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4d34380613e5e6a9e358be032877ba265febb642
--- /dev/null
+++ b/Train/Memorability of natural scene- the role of attention/info.txt
@@ -0,0 +1,105 @@
+
+
+ Motivation: investigate the influence of saliency-related features on image memorability
+ Image memorability: what?
+ The probability of correctly detecting a repeat after
+ a single view of an image in a long stream.
+ Image memorability: how?
+
+ A memory game with 665 participants
+ on Amazon’s Mechanical Turk
+ (Isola et al. 2011).
+ Image memorability: the Isola database
+ 2222 images from the SUN database (Xiao et al. 2010),
+ each with a memorability score from close to 0 (low
+ memorability) to close to 1 (high memorability).
+ Image memorability prediction
+ Isola et al. proposed several features and a classifier
+ to predict image memorability. The results are shown below:
+
+
+
+
+ Eye-tracking experiment
+ Proposed dataset:
+ We extracted 3 classes of 45 images each from the Isola et al.
+ database. The first 45 (C1) are highly memorable images,
+ the last 45 (C3) are the least memorable, and the remaining
+ 45 (C2) have average memorability. The characteristics
+ of C1, C2 and C3 are listed below:
+
+
+ The fixation durations for the three image classes are
+ shown for several viewing times: the first 2 fixations,
+ the first 4, the first 6, and all fixations. The difference
+ between C1 and C3 is statistically significant at every viewing time.
+ Example of eye-tracking results:
+
+ First row: high memorability image; Second row: low memorability
+ image. First column: original pictures; Second column: fixation
+ map (a green circle represents the first fixation of observers);
+ Third column: Saliency map and Fourth column: heat map.
+
+ The congruency (agreement between viewers) is a second
+ feature which is statistically different between C1 and C3.
+
+
+
+ Saliency-related memorability prediction
+ Two new saliency-related features for memorability prediction:
+ 1/ Saliency maps coverage:
+ Several saliency models were tested (above) and RARE 2012
+ was selected because the average coverage difference
+ on several sets of images with different memorability was visible
+ (image (a): between left (high), middle (average) and right (low)).
+ A coverage factor was computed (graph (b), on the left, over the
+ 2222 images of the Isola database; the right graph shows the feature
+ after median filtering). The graph runs from low-memorability
+ to high-memorability images.
+ 2/ Visibility features:
+
+ Low-pass filtering from I1 to I9 (pyramid-like): a forgetting process.
+ Feature V1: the correlation between I1 and each of the other levels
+ Feature V2: the correlation between two successive levels
+
+ Visibility feature vectors V1 and V2 were computed for the
+ whole 2222-image database. As for the coverage feature,
+ the raw data for both V1 and V2 (left column) do
+ not exhibit obvious differences. After median filtering (right
+ column), differences between memorable (on the right)
+ and less memorable images (on the left) are noticeable.
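+
+ A numpy/scipy sketch of the visibility features described above (the doubling filter widths are our assumption): build a low-pass pyramid I1..I9 and correlate the levels.
+
+ import numpy as np
+ from scipy.ndimage import gaussian_filter
+
+ def visibility_features(image, levels=9):
+     # I1 is the original; each further level is more strongly low-passed,
+     # mimicking a forgetting process.
+     pyramid = [image] + [gaussian_filter(image, sigma=2.0 ** k)
+                          for k in range(1, levels)]
+     flat = [p.ravel() for p in pyramid]
+     v1 = [np.corrcoef(flat[0], f)[0, 1] for f in flat[1:]]    # I1 vs others
+     v2 = [np.corrcoef(flat[k], flat[k + 1])[0, 1]
+           for k in range(levels - 1)]                          # successive
+     return np.array(v1), np.array(v2)
+
+ rng = np.random.default_rng(0)
+ v1, v2 = visibility_features(rng.random((64, 64)))
+ print(v1.shape, v2.shape)    # (8,) (8,)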
+
+
+
+ Conclusion: attention can play a role in memorability analysis!
+ Conclusion 1:
+ The fixation duration is longer for the most memorable images
+ (especially for the very first fixations) which shows a higher
+ cognitive activity for memorable images.
+ Conclusion 2:
+ The observers’ congruency (agreement) is significantly higher for
+ the most memorable images: when certain areas strongly attract
+ all viewers, memorability is higher.
+ Conclusion 3:
+ The use of coverage and visibility features (without any GIST)
+ provides a slight improvement over Isola et al. 2011.
+
+ Conclusion 4:
+ The use of coverage and visibility features lets us eliminate
+ several other features while keeping the same efficiency.
+
+
+
+
diff --git a/Train/Modeling skin and ageing phenotypes using latent variable models in Infer.NET/Modeling skin and ageing phenotypes using latent variable models in Infer.NET-Poster.pdf b/Train/Modeling skin and ageing phenotypes using latent variable models in Infer.NET/Modeling skin and ageing phenotypes using latent variable models in Infer.NET-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c184a46555007442d973fcac7764de2b4909f4ce
--- /dev/null
+++ b/Train/Modeling skin and ageing phenotypes using latent variable models in Infer.NET/Modeling skin and ageing phenotypes using latent variable models in Infer.NET-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad0e8b618fc5301c6b887e84c65fe8d1fef6bb34409f59ca6050ce2658a2aa59
+size 658386
diff --git a/Train/Modeling skin and ageing phenotypes using latent variable models in Infer.NET/Modeling skin and ageing phenotypes using latent variable models in Infer.NET.pdf b/Train/Modeling skin and ageing phenotypes using latent variable models in Infer.NET/Modeling skin and ageing phenotypes using latent variable models in Infer.NET.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..559c8fec38a0bacabb90379b0623d3b608bcadd9
--- /dev/null
+++ b/Train/Modeling skin and ageing phenotypes using latent variable models in Infer.NET/Modeling skin and ageing phenotypes using latent variable models in Infer.NET.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c77497c61d6577a23270ebdcb5cc0c93825ab31b9565b00b4060bae72f92e902
+size 604051
diff --git a/Train/Modeling skin and ageing phenotypes using latent variable models in Infer.NET/info.txt b/Train/Modeling skin and ageing phenotypes using latent variable models in Infer.NET/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..82907b6b0a4b9836b37034357e6267fe1badb0f2
--- /dev/null
+++ b/Train/Modeling skin and ageing phenotypes using latent variable models in Infer.NET/info.txt
@@ -0,0 +1,141 @@
+
+
+ Abstract
+ We demonstrate and compare three unsupervised
+ Bayesian latent variable models implemented in
+ Infer.NET [1] for biomedical data modeling of
+ 42 skin and ageing phenotypes measured on the
+ 12,000 female twins in the Twins UK study [2].
+
+
+
+ Data characteristics
+ Like many biomedical applications:
+ 1. High missingness. Many variables have up to
+ 80% missing values: Bayesian methods
+ naturally deal with missingness.
+ 2. Heterogeneous data. Continuous, categorical
+ (including binary), ordinal and count data:
+ using appropriate likelihood functions for
+ each of these data types improves statistical
+ power.
+ 3. Multiple observations. Combine into a single
+ phenotype: aids interpretability, improves
+ statistical power and helps with missingness.
+ 4. High dimensionality. 6000 phenotype and
+ exposure variables, measured at multiple time
+ points: use dimensionality reduction.
+
+
+
+ Medical expertise: prior knowledge
+
+ Key processes involved in skin and ageing, devised
+ in collaboration with an experienced dermatologist.
+ We use this prior knowledge in a very crude way at
+ the moment (separating explanatory variables and
+ symptoms), but we intend to use such knowledge to
+ incorporate more structure into our models.
+
+
+
+ Models
+ Factor graphs for the three proposed models.
+
+
+ 1. Generalised mixture model.
+ Clusters individuals. Suitable
+ conjugate prior for each data
+ type.
+
+ 2. Generalised factor analysis
+ model. Allows different
+ observed data types using
+ various likelihood functions.
+
+ 3. Combined regression and
+ factor analysis model.
+ Provides the expressive power
+ of FA and the interpretability
+ of regression.
+
+
+
+ Results
+
+ Synthetic data test. Ordinal regression with
+ 5 output values, P = 20 observed explanatory
+ variables and varying sample size.
+
+ Correlation under the model. The fitted FA model
+ implies a particular covariance structure for the
+ variables of interest.
+
+ Imputation performance (real data). For a
+ random 10% of individuals, treat the symptoms
+ (e.g. skin cancer, wrinkles) as missing but
+ leave the explanatory variables (e.g. age,
+ smoking, sun exposure), and infer the predictive
+ posterior over the held-out values (an
+ illustrative sketch follows below).
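+
+ An illustrative stand-in for this imputation test (a plain, non-Bayesian factor analysis via scikit-learn, since the actual models run in Infer.NET; the fill-fit-reconstruct loop is our simplification):
+
+ import numpy as np
+ from sklearn.decomposition import FactorAnalysis
+
+ def fa_impute(X, n_factors=5, n_iter=20):
+     # Impute NaN entries by iterating: fill -> fit FA -> reconstruct.
+     X = X.copy()
+     missing = np.isnan(X)
+     col_means = np.nanmean(X, axis=0)
+     X[missing] = col_means[np.where(missing)[1]]       # crude initial fill
+     for _ in range(n_iter):
+         fa = FactorAnalysis(n_components=n_factors).fit(X)
+         X_hat = fa.transform(X) @ fa.components_ + fa.mean_
+         X[missing] = X_hat[missing]                    # refresh held-out cells
+     return X
+
+ rng = np.random.default_rng(1)
+ X = rng.normal(size=(200, 10))
+ X[rng.random(X.shape) < 0.1] = np.nan                  # hide ~10% of entries
+ print(np.isnan(fa_impute(X)).any())                    # False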
+
+
+
+ Methods
+ We use Variational Message Passing under the
+ Infer.NET framework. To support these models,
+ various factors were added to the framework, e.g.
+ logistic regression, ordinal regression, and “sum where”.
+
+
+
+ Conclusions
+ 1. Using appropriate likelihood models allows
+ optimal integration of different data types.
+ 2. FA models have superior predictive performance
+ to mixture models in this setting.
+ 3. Combining regression and FA components
+ eases interpretability, but at some cost to
+ predictive performance (this may be due to
+ scheduling problems or local minima).
+ 4. Infer.NET allows us to use complex models.
+
+
+
+ Future work
+ 1. Time series. Multiple asynchronous visits, with
+ different phenotypes recorded each time.
+ 2. Scalability. Although our message passing
+ algorithms are efficient, scaling to modern
+ healthcare-sized datasets remains a challenge.
+ Parallelization is a potential solution.
+ 3. Online learning. This would allow new data
+ to be incorporated as it is recorded.
+ 4. Nonlinearities. We are currently experimenting
+ with Gaussian Process and Mixture of Experts
+ models to accommodate nonlinearity.
+
+
+
+ References
+ [1] T. Minka, J.M. Winn, J.P. Guiver, and D.A. Knowles.
+ Infer.NET 2.4, 2010. Microsoft Research Cambridge.
+ http://research.microsoft.com/infernet.
+ [2] T.D. Spector and A.J. MacGregor. The St. Thomas’ UK
+ Adult Twin Registry. Twin Research, 5:440–443, October
+ 2002.
+
+
+
+ Funding
+ DK was supported by Microsoft Research through the Roger
+ Needham Scholarship at Wolfson College, Cambridge.
+
+
+
diff --git a/Train/Mortal Multi-Armed Bandits/Mortal Multi-Armed Bandits-Poster.pdf b/Train/Mortal Multi-Armed Bandits/Mortal Multi-Armed Bandits-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..90d808970524ce8bdd7a8623d054bb1f7fc723bd
--- /dev/null
+++ b/Train/Mortal Multi-Armed Bandits/Mortal Multi-Armed Bandits-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0359fa9358f67bdb9a0e0a3c21d9e90a8216f05ef607ea27e1f20eb78a36ded0
+size 472199
diff --git a/Train/Mortal Multi-Armed Bandits/Mortal Multi-Armed Bandits.pdf b/Train/Mortal Multi-Armed Bandits/Mortal Multi-Armed Bandits.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..16e69b825463cdf163cbaba14486785312629f8c
--- /dev/null
+++ b/Train/Mortal Multi-Armed Bandits/Mortal Multi-Armed Bandits.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1004b36e9480d9c0aa50d9e0c0e92dc1cb41125f47ff8aa68b2558f463ce4319
+size 175956
diff --git a/Train/Mortal Multi-Armed Bandits/info.txt b/Train/Mortal Multi-Armed Bandits/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..909889fc18e8e714db6aed86521e08ea382ffa9a
--- /dev/null
+++ b/Train/Mortal Multi-Armed Bandits/info.txt
@@ -0,0 +1,139 @@
+
+
+ Abstract
+ We study a new variant of the k-armed bandit problem, motivated by
+ e-commerce applications. In our model, arms have a lifetime, after
+ which they expire.
+ • The algorithm needs to explore new arms continuously, in contrast
+ with standard k-armed bandit settings, where exploration is reduced
+ once the search narrows to good arms.
+ • The algorithm needs to choose among a large collection of arms,
+ more than can be fully explored within the typical arm lifetime.
+ We present:
+ • an optimal algorithm for the deterministic reward case,
+ • a number of algorithms for the stochastic reward case, and
+ • experiments showing that the proposed algorithms significantly outperform
+ standard multi-armed bandit approaches under various reward distributions.
+
+
+
+ Introduction
+ • In online advertising, ad brokers select ads to display from a large
+ corpus, with the goal of generating the most ad clicks and revenue.
+ • Previous work has suggested treating this as a multi-armed bandit
+ problem [Pandey et al. 2007].
+ Multi-Armed Bandits
+ • Model a casino with k slot machines (one-armed bandits).
+ • Each machine has an unknown expected payoff.
+ • The goal is to select the optimal sequence of slot machines to play to
+ maximize the expected total reward, or to minimize regret: how much
+ we could have made but didn’t.
+ How is this like advertising?
+ • Showing an ad is like pulling an arm: it has a cost and a possible reward.
+ • We want an algorithm to select the best sequence of ads to show to
+ maximize the (expected) financial reward.
+ How is advertising harder?
+ • A standard assumption is that arms exist perpetually.
+ • The expected payoff is allowed to change, but only slowly.
+ • Ads, on the other hand, are constantly being created and removed
+ from circulation: budgets run out, seasons change, etc.
+ • There are too many ads to explore in a typical ad lifetime.
+ An arm with expected payoff μi provides a reward when pulled:
+ Deterministic setting: reward(μi) = μi
+ Stochastic setting: reward(μi) = 1 with prob. μi, 0 otherwise.
+ Two forms of death are studied:
+ Budgeted: the lifetime Li of an arm is known to the algorithm; only pulls count.
+ Timed: each arm has probability p of dying at each time step.
+ Related approaches
+ • Restless bandits [e.g. Whittle; Bertsimas; Nino-Mora; Slivkins & Upfal]:
+ arm rewards change over time.
+ • Sleeping bandits / experts [e.g. Freund et al.; Blum & Mansour;
+ Kleinberg et al.]: a subset of arms is available at each time step.
+ • New arms appearing [e.g. Whittle]: there is an optimal index policy.
+ • Infinite arm supply [e.g. Berry et al.; Teytaud et al.; Kleinberg; Krause
+ & Guestrin]: too many arms to explore completely.
+
+
+
+ Upper Bound on Mortal Reward
+ Consider the deterministic reward, budgeted death case. Assume fresh arms
+ are always available.
+ Let Γ(t) denote the maximum mean reward that any algorithm for this case
+ can obtain in t steps. Then lim_{t→∞} Γ(t) ≤ max_μ Γ(μ), where Γ(μ) is
+ determined by the expected arm lifetime L and the cumulative distribution
+ F(·) of arm payoffs.
+ In the stochastic reward and timed death cases, we can do no better.
+ Example cases:
+ 1. Say the arm payoff is 1 with probability p < 0.5, and 1−δ otherwise, and arms
+ have probability p of dying at each time step. The mean reward per step is at
+ most 1−δ+δp, while the maximum reward is 1. Hence the regret per step is Ω(1).
+ 2. Suppose F(x) = x with x ∈ [0,1], and arms have probability p of dying at each
+ time step. The mean reward per step is at most 1 − Ω(√p), so the expected
+ regret of any algorithm is Ω(√p) per step.
+
+
+
+ Bandit Algorithms for Mortal Arms
+ DetOpt: optimal for the deterministic reward case
+ In the deterministic case, we can try each new arm once until we find a good one:
+
+ Let DetOpt(t) denote the mean reward per turn obtained by DetOpt after
+ running for t steps with threshold μ* = argmax_μ Γ(μ). Then
+ lim_{t→∞} DetOpt(t) = max_μ Γ(μ).
+ DetOpt for the stochastic reward case, with early stopping:
+ In the stochastic case, we instead try each new arm up to n times before deciding
+ whether to move on (see the sketch below):
+
+ For n = O(log L / ε²), Stochastic (without early stopping) gets an expected
+ reward per step within O(ε) of the deterministic optimum.
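+
+ A sketch of the stochastic variant with early stopping (the threshold test is our paraphrase): pull a fresh arm up to n times, abandon it as soon as the target mean μ* is out of reach, and exploit it until death if it clears the bar.
+
+ import numpy as np
+ rng = np.random.default_rng(0)
+
+ def stochastic_early_stop(n, mu_star, p_death, steps):
+     # Fresh arms: payoff mu ~ U[0,1], geometric lifetime (death prob p_death).
+     total = t = 0
+     while t < steps:
+         mu = rng.random()                  # this arm's unknown expected payoff
+         life = rng.geometric(p_death)      # its remaining lifetime
+         wins = pulls = 0
+         while pulls < n and life > 0 and t < steps:
+             wins += rng.random() < mu      # one Bernoulli pull
+             pulls += 1; life -= 1; t += 1
+             if wins + (n - pulls) < mu_star * n:
+                 break                      # early stop: the bar is unreachable
+         total += wins
+         if wins >= mu_star * n:            # promising arm: play it until death
+             keep = min(life, steps - t)
+             total += rng.binomial(keep, mu)
+             t += keep
+     return total / steps
+
+ print(stochastic_early_stop(n=30, mu_star=0.8, p_death=0.01, steps=100_000))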
+
+
+
+ Subset Heuristics & Greedy
+ Standard Multi-Armed Bandit algorithms trade off exploration and
+ exploitation well. The problem with mortal arms is that there are
+ too many options. Can we avoid that?
+
+ Picking the theoretically best subset size and epoch length is still
+ an open problem.
+ In many empirical studies, greedy algorithms also perform well on
+ average, despite lacking the exploration needed for worst-case
+ performance guarantees. AdaptiveGreedy is one such algorithm.
+
+
+
+ Empirical Evaluation
+
+ Simulated with k = 1000 arms,
+ for a duration 10 times
+ the expected lifetime of
+ each arm. Simulating
+ k = 100,000 arms gives similar
+ results.
+ With F(x) = x (top):
+ • UCB1 performs poorly
+ • The subset heuristic helps
+ • Stochastic with early
+ stopping performs as well as
+ AdaptiveGreedy.
+
+ We see a similar picture
+ with F(x) matching real
+ advertisements (bottom).
+ Similar performance is seen
+ when F(x) is distributed as
+ Beta(1,3).
+ Mortal multi-armed bandits model the realistic case
+ where strategies are sometimes permanently removed.
+ • Sublinear regret is impossible.
+ • We presented algorithms and analysis for this setting.
+
+
+
diff --git a/Train/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization-Poster.pdf b/Train/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..56871a1a54bd94f29e9e39d965340a7c439cd13d
--- /dev/null
+++ b/Train/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18eca63c3e3883e0c8e7bfe091aca9eeafd79aa99a9becf920562cff534a8bfe
+size 17597911
diff --git a/Train/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization.pdf b/Train/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cf3139e081a1fe1074f7205ed18e058a1c03370b
--- /dev/null
+++ b/Train/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9da1f893cbf1d246a972d2a493a7184d10d0272e07af83ff705434782b7fc1a
+size 15506028
diff --git a/Train/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization/info.txt b/Train/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..80f52c9e78b3160308733a132ba5fe6f4220586c
--- /dev/null
+++ b/Train/NMF-KNN- Image Annotation using Weighted Multi-view Non-negative Matrix Factorization/info.txt
@@ -0,0 +1,108 @@
+
+
+ Problem
+
+ • Assigning relevant tags to query images based on their visual
+ content
+
+
+
+
+ Challenges
+
+ • Finding the most relevant tags among many possible ones.
+
+ • There are tags that do not occur frequently in the dataset.
+
+ • Images that share many tags may conceptually be very different.
+
+ Drawbacks of Existing Methods
+
+ • Adding images and tags requires retraining the models.
+
+ • Ad-hoc feature-fusion approaches are usually taken.
+
+ Our Contributions
+
+ • A query-specific model (no global training!)
+
+ • A natural solution to feature fusion
+
+ • Handling dataset imbalance through a weighted NMF formulation
+
+ • O(n) test-time complexity
+
+ • A straightforward extension to sub-linear test-time complexity
+
+
+
+
+ Proposed Approach
+
+
+
+
+ Query-specific Training
+
+ • Minimize L via an iterative alternating approach (U and V are unknown).
+
+ • Training finds the optimal U that minimizes L.
+
+ • T penalizes an inaccurate matrix factorization severely for rare tags.
+
+ • W biases the learning towards a more accurate factorization of images with
+ rare tags.
+
+ Recovering the Tags of a Query (Testing)
+
+ 1. Project the query’s feature vectors onto the corresponding basis matrices U.
+
+ 2. Approximate the query’s V(tag) by averaging over the F different V(visual feature) matrices.
+
+ 3. Predict the score of each tag by computing U(tag) × (V(tag))’.
+
+ 4. Select the relevant tags with the highest scores (see the sketch below).
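+
+ A numpy/scipy sketch of this test-time procedure (shapes and helper names are ours): project the query’s per-view features onto the learned bases with non-negative least squares, average the coefficient vectors, and score the tags.
+
+ import numpy as np
+ from scipy.optimize import nnls
+
+ def predict_tags(query_feats, U_views, U_tag, top_k=5):
+     # query_feats: F per-view feature vectors; U_views: F basis matrices
+     # (d_f x r) from the query-specific factorization; U_tag: (n_tags x r).
+     # Steps 1-2: project each view onto its basis, then average.
+     V = np.mean([nnls(U_f, x)[0] for U_f, x in zip(U_views, query_feats)],
+                 axis=0)
+     scores = U_tag @ V                         # step 3: U(tag) x V(tag)'
+     return np.argsort(scores)[::-1][:top_k]    # step 4: highest-scoring tags
+
+ rng = np.random.default_rng(0)
+ r, d, n_tags = 4, 16, 10
+ U_views = [np.abs(rng.random((d, r))) for _ in range(3)]
+ U_tag = np.abs(rng.random((n_tags, r)))
+ query = [U @ np.abs(rng.random(r)) for U in U_views]   # synthetic features
+ print(predict_tags(query, U_views, U_tag))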
+
+
+
+
+ Experimental Results
+
+ • Datasets: Corel5K and ESP Game
+
+ • Evaluation metrics: Precision, Recall and N+
+
+ Qualitative Results
+
+ Predicted tags in green appear in the ground truth
+ while red ones do not.
+
+
+
+
+
+
+
+
+ Effect of Weight Matrices (W and T)
+
+
+
+
+
+
+
diff --git a/Train/Play Type Recognition in Real-World Football Video/Play Type Recognition in Real-World Football Video-Poster.pdf b/Train/Play Type Recognition in Real-World Football Video/Play Type Recognition in Real-World Football Video-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..06e1ee13ebc18317b9792669b8f8ed3b8d18417c
--- /dev/null
+++ b/Train/Play Type Recognition in Real-World Football Video/Play Type Recognition in Real-World Football Video-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0dd167cab08d157b595edd58c334902fa12b17f457bdca72b345aa8d7490085d
+size 1396368
diff --git a/Train/Play Type Recognition in Real-World Football Video/Play Type Recognition in Real-World Football Video.pdf b/Train/Play Type Recognition in Real-World Football Video/Play Type Recognition in Real-World Football Video.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..528c5c6fa57bd568953c6d2ebaad69d9c2cf3c9f
--- /dev/null
+++ b/Train/Play Type Recognition in Real-World Football Video/Play Type Recognition in Real-World Football Video.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0eb795aa325f4ec86267f5dd14aa380dff0ff520afa434ea33b5cd95c251ea2b
+size 1286370
diff --git a/Train/Play Type Recognition in Real-World Football Video/info.txt b/Train/Play Type Recognition in Real-World Football Video/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d1e37ca68b87e6e6e268b0e4efd94e78b30fb58a
--- /dev/null
+++ b/Train/Play Type Recognition in Real-World Football Video/info.txt
@@ -0,0 +1,61 @@
+
+
+ Problem Statement:
+ Input: A sequence of temporally ordered videos comprising all plays from a
+ football game.
+ Output: A labeling of each play by one of the five play types (O, D, K, P, F).
+
+ Offense/Defense(O/D): White team is trying to move the ball forward (O).
+ Black team is trying to prevent the other team from moving the ball forward (D).
+
+ Kickoff(K): White team lines up and kicks the ball down the field to the
+ receiving team.
+
+ Punt(P): White team drop-kicks/punts the ball down the field to the opponent.
+
+ Field Goal(F): the ball is kicked at the goal posts in order to score points.
+
+
+
+ Challenge:
+ A big dataset with lots of variation.
+ Big: there are 1463 test videos from 10 full games, spanning 5.44 hours.
+ Variation: field, viewpoint, uniform color, camera-work quality.
+
+
+
+
+ System Overview:
+ Partial Rectification: Field lines are extracted, providing a partial frame of
+ reference for the football field.
+ Play-level recognition: Noisy play-type detectors are run for a subset of
+ the play types.
+ Game-level reasoning: A temporal model of football games is used to
+ reason about the noisy detections across the full sequence (see the sketch below).
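+
+ A sketch of what such game-level reasoning could look like (the transition table below is a made-up illustration, not the paper’s model): Viterbi decoding of the play-type sequence from noisy per-play detector scores.
+
+ import numpy as np
+
+ def viterbi(log_emit, log_trans, log_prior):
+     # log_emit: (T, S) per-play detector log-likelihoods over S play types.
+     T, S = log_emit.shape
+     score = log_prior + log_emit[0]
+     back = np.zeros((T, S), dtype=int)
+     for t in range(1, T):
+         cand = score[:, None] + log_trans      # (prev state, next state)
+         back[t] = cand.argmax(axis=0)
+         score = cand.max(axis=0) + log_emit[t]
+     path = [int(score.argmax())]
+     for t in range(T - 1, 0, -1):              # trace the best path back
+         path.append(int(back[t][path[-1]]))
+     return path[::-1]
+
+ # States O, D, K, P, F; e.g., a kickoff rarely follows another kickoff.
+ trans = np.array([[.50, .30, .05, .10, .05],
+                   [.30, .50, .05, .10, .05],
+                   [.45, .45, .02, .05, .03],
+                   [.40, .40, .10, .05, .05],
+                   [.30, .30, .30, .05, .05]])
+ emit = np.random.default_rng(0).random((8, 5))   # stand-in detector scores
+ print(viterbi(np.log(emit), np.log(trans), np.log(np.full(5, .2))))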
+
+
+
+
+ Results:
+
+ Accuracy of the O/D detector; * indicates
+ using ground-truth MOS
+
+ Results of the kickoff (ko) and non-
+ punt (np) detectors
+
+ Accuracy for each game. The subscript indicates the ground-truth information used;
+ the second column shows the accuracy of the fully automatic system.
+ Running time: 2x game length
+
+
+
diff --git a/Train/bmvc-2013-031/bmvc-2013-031-Poster.pdf b/Train/bmvc-2013-031/bmvc-2013-031-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4bef943a4dfbdb26e8257f774b5951373af3c6cd
--- /dev/null
+++ b/Train/bmvc-2013-031/bmvc-2013-031-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1d648fb08d71eb258378521bf3e6869335a4fdc70a322672b7fde9a5a948cf3
+size 993298
diff --git a/Train/bmvc-2013-031/bmvc-2013-031.pdf b/Train/bmvc-2013-031/bmvc-2013-031.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1866b59410b76cffc9f47c302cc5277dd948ecc5
--- /dev/null
+++ b/Train/bmvc-2013-031/bmvc-2013-031.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c32e31e847621ef3610c89255c91c2ab98003004312c8a23fe40ee13117cb937
+size 1348051
diff --git a/Train/bmvc-2013-031/info.txt b/Train/bmvc-2013-031/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ef79a49162b7c652bc3a2540338773d0d65cfc6c
--- /dev/null
+++ b/Train/bmvc-2013-031/info.txt
@@ -0,0 +1,85 @@
+
+
+ Goal
+ Recognize activities from first person point of view
+
+
+
+
+ Problem
+ Histograms of space-time features are a useful video representation
+ [Choi et al. 08, Laptev et al. 08, Pirsiavash & Ramanan 12] …
+
+ …but hand-crafted (e.g., uniformly split) bin structures need not
+ be the most discriminative for the target recognition task.
+
+
+
+ Main idea
+ • Bag-of-objects histogram pyramids to summarize ego-activity
+ • Boosting to learn discriminative spatio-temporal partitions
+ • An “object-centric” cutting scheme to focus the pool of randomized
+ partitions near the active objects with which the camera wearer interacts
+ • State-of-the-art results recognizing Activities of Daily Living
+
+
+
+
+ Approach
+ Bag-of-objects
+ Histograms count detected object
+ occurrences in series of space-time bins
+
+ Following Pirsiavash & Ramanan, we
+ use separate detectors for active and
+ passive versions of an object.
+ Boosting
+ Select discriminative combination of
+ bin structures from randomized pool
+
+ Object-centric cuts (OCC)
+ Focus sampling of bins where “active”
+ objects are concentrated
+
+ Emphasize video regions likely to
+ characterize key interactions
+ Control pool size for boosting
+
+
+
+ Results
+
+ Activities of Daily Living (ADL)
+ [Pirsiavash & Ramanan, 2012]
+ 18 actions ~ food, hygiene, entertainment
+ (wash hands, make tea, brush teeth, etc.)
+ 20 people, 10 hours of video
+ We improve the state-of-the-art accuracy on this challenging dataset.
+
+ Methods compared:
+ •Bag-of-words (BoW): space-time interest points and HoG/HoF visual words
+ •Bag-of-objects: global histogram of detected objects
+ •Temporal Pyramid: hand-crafted, one cut in time [Pirsiavash & Ramanan, CVPR12]
+ •Boost-RSTP: randomized spatio-temporal pyramids without object-centric cuts
+
+ Object-centric cuts achieve lower error with a smaller pool of
+ candidates, making training for boosting more efficient.
+
+ Best accuracy: actions with regular space-time
+ structure (e.g., comb hair, dry hands)
+ Most confusions: same active objects involved
+ (e.g., making tea vs. making coffee)
+
+
+
diff --git a/Train/cvpr-2012-002/cvpr-2012-002-Poster.pdf b/Train/cvpr-2012-002/cvpr-2012-002-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..aced8f8c6862b54c3b03e84752918ea3fc790a7e
--- /dev/null
+++ b/Train/cvpr-2012-002/cvpr-2012-002-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f9196e467d0697a9687b3aa2282481b2c7fff1d7845d06839546b690c279f80
+size 9243028
diff --git a/Train/cvpr-2012-002/cvpr-2012-002.pdf b/Train/cvpr-2012-002/cvpr-2012-002.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f518a330aa8557728b1fda58d40066bfd4a6f072
--- /dev/null
+++ b/Train/cvpr-2012-002/cvpr-2012-002.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2a72c149a8b3c1c593344281d66af1dc39b379c707e6718485bb7d0dcf00dff
+size 5628169
diff --git a/Train/cvpr-2012-002/info.txt b/Train/cvpr-2012-002/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..389b1eb998e0aee7fb238e4dd29c0cf09fa7c7d2
--- /dev/null
+++ b/Train/cvpr-2012-002/info.txt
@@ -0,0 +1,119 @@
+
+
+ Objective: The basic position of this paper is that supervoxels
+ have great potential in advancing video analysis methods, as
+ superpixels have for image analysis. To that end, we perform a
+ thorough comparative evaluation of five supervoxel methods. We
+ have also released the underlying methods’ code and the benchmark.
+
+
+
+ What Makes a Good Supervoxel Method?
+ Why supervoxels?
+ Images have many pixels; videos have more. Supervoxels hold strong promise
+ for early video processing: voxels are an artifact of the digital sampling
+ process and not a natural representation, and there are so many voxels in a
+ video that sophisticated computational methods become intractable.
+ Traits of a good supervoxel method.
+ - Spatiotemporal uniformity, or conservatism, prefers compact and uniformly
+ shaped supervoxels in space and time.
+ - Spatiotemporal boundary preservation: supervoxels should follow object
+ and scene boundaries when they are present and should be stable when they
+ are not present.
+ - Computation and performance: computing supervoxels should reduce the
+ overall amount of computation required and not decrease task performance.
+ - Parsimony: the above properties should be maintained with as few
+ supervoxels as possible.
+
+
+
+ Supervoxel Methods Evaluated:
+ We broadly sample the methodology space, intentionally selecting methods
+ with differing qualities for supervoxel segmentation in our analysis.
+
+ - Segmentation by Weighted Aggregation (SWA) approximately solves the
+ well-known normalized cut criterion by sequentially computing a hierarchy of
+ coarser segmentations. It applies algebraic multigrid techniques and
+ recomputes affinity between regions at multiple scales in the hierarchy. The
+ method was originally proposed by Sharon et al. CVPR 2000.
+ - Graph-based (GB) is a spatiotemporal extension of the Felzenszwalb and
+ Huttenlocher (IJCV 2004) segmentation method, which iteratively computes a
+ minimum spanning forest over the pixel lattice by merging similar regions.
+ - Graph-based Hierarchical (GBH) extends the GB method to sequentially
+ compute a hierarchy of minimum spanning forests: the input graph at a level
+ is the minimum spanning forest at the next finer level down. The method was
+ proposed by Grundmann et al. CVPR 2010.
+ - Meanshift is a nonparametric mode-seeking method; we use Paris and
+ Durand’s (CVPR 2007) implementation, which takes a Morse-theory
+ interpretation of the mean shift as a topological decomposition of the
+ feature space.
+ - Nyström approximately solves the normalized cut eigenproblem; each voxel
+ is embedded into a low-dimensional eigenspace and then k-means clustering
+ computes the final partitioning (Fowlkes et al. PAMI 2004).
+
+
+
+ The Supervoxel Benchmark and Quantitative Results:
+ We propose a novel supervoxel benchmark that is not tied to any particular
+ application but rather evaluates the desiderata described earlier. We evaluate
+ the benchmark on three data sets:
+ - GaTech: unlabeled videos.
+ - SegTrack: labeled with a single foreground object.
+ - Chen Xiph.org: fully labeled with region segmentations.
+ The metrics (a sketch of the first follows below):
+ - 3D Undersegmentation Error measures what fraction of voxels exceed the
+ volume boundary of the ground-truth segment when mapping the supervoxels
+ onto it.
+
+
+ - 3D Boundary Recall measures spatiotemporal boundary detection: for
+ each segment in the ground-truth and supervoxel segmentations, we extract
+ the within-frame and between-frame boundaries and measure recall.
+
+
+ - 3D Segmentation Accuracy measures what fraction of a ground-truth
+ segment is correctly classified by the supervoxels: each supervoxel should
+ overlap with only one object/segment.
+
+
+ - Explained Variation, a human-independent metric, measures the difference
+ between the image intensities and the mean statistics of each supervoxel
+ region, i.e., how well the original video is “compressed” by the supervoxel
+ regions.
+
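+ A numpy sketch of the first metric for one ground-truth segment (our formulation of the description above): sum the volume of every supervoxel touching the segment and measure the overflow.
+
+ import numpy as np
+
+ def undersegmentation_error(supervoxels, gt_mask):
+     # supervoxels: int label volume (T, H, W); gt_mask: bool volume of one
+     # ground-truth segment.
+     touching = np.unique(supervoxels[gt_mask])    # supervoxels overlapping GT
+     cover = np.isin(supervoxels, touching).sum()  # their total volume
+     return (cover - gt_mask.sum()) / gt_mask.sum()
+
+ sv = np.zeros((2, 4, 4), dtype=int); sv[:, :, 2:] = 1   # two supervoxels
+ gt = np.zeros((2, 4, 4), dtype=bool); gt[:, :, 1:3] = True
+ print(undersegmentation_error(sv, gt))                  # 3.0: heavy overflow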
+
+
+
+
+
+ Take Away Message and Visual Examples:
+ Overall, the two best-performing methods are GBH and SWA. The common
+ distinction setting these two methods apart is that they reevaluate region
+ similarity at varying levels during hierarchical segmentation.
+
+
+
+
+
+
+
+
+
+
+
diff --git a/Train/cvpr-2012-004/cvpr-2012-004-Poster.pdf b/Train/cvpr-2012-004/cvpr-2012-004-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b4954fe95b622836279749bd6ecc6961a605bfca
--- /dev/null
+++ b/Train/cvpr-2012-004/cvpr-2012-004-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b8f3568bf162b26b89e47aa2d78eda6206ad8625239d4c6a1f0d845ea1043c4
+size 6863387
diff --git a/Train/cvpr-2012-004/cvpr-2012-004.pdf b/Train/cvpr-2012-004/cvpr-2012-004.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c16b01518a1baa5da2381aceaed63600b7a00773
--- /dev/null
+++ b/Train/cvpr-2012-004/cvpr-2012-004.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c35e49e2cac876eb036ffe948e2e47fb5ecfc5053f94985ef777bf05c4ee2f2f
+size 5309218
diff --git a/Train/cvpr-2012-004/info.txt b/Train/cvpr-2012-004/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..fc3fcea96b3ea28341c9e7c7d56a2a5c1739b20e
--- /dev/null
+++ b/Train/cvpr-2012-004/info.txt
@@ -0,0 +1,98 @@
+
+
+ Motivation
+ How would we understand an image?
+ Scene?
+
+ Outdoor
+ Restaurant
+ Objects?
+
+ 3 picnic-umbrellas,
+ 3 tables, 6 chairs
+ Group of objects!
+
+ 3 sets of picnic-umbrella,
+ table & chairs
+ Groups of objects: composites of two or more objects which have
+ mutually consistent spatial, scale, and viewpoint relationships.
+ Problem: it is NOT feasible to manually compile a list of all
+ possible groups with an arbitrary number of participating objects!
+
+
+
+ Contributions
+ • Modeling a full spectrum of arbitrary high-order object
+ interactions for deeper scene understanding:
+ Pair-wise
+
+ Third-order
+
+ Higher-order
+
+ • Automatically discovering groups from images
+ annotated only with object labels.
+ • Improving object detection and scene recognition
+ performance on a variety of datasets: UIUC phrasal,
+ PASCAL VOC07, SUN09, MIT Indoor.
+
+
+
+ Results: Discovered Groups
+ Manual labeling [Sadeghi & Farhadi CVPR 2011]:
+ 12 pair-wise phrases
+
+ High-order groups are discovered on multiple datasets!
datasets!
+
+
+
+
+ Approach: Group Discovery
+ Step 1: Find common object patterns between every
+ image pair through a 4-dimensional transform space
+ (a toy voting sketch follows below).
+
+ • Soft voting: alleviate the effect of hard quantization.
+
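+ A toy sketch of such transform-space voting (our simplification to 3 of the 4 dimensions, with made-up bin sizes): same-label objects across an image pair vote for their relative offset and scale, and heavy bins indicate common patterns.
+
+ import numpy as np
+ from collections import Counter
+
+ def pattern_votes(objs_a, objs_b, xy_bin=50.0, s_bin=0.5):
+     # objs_*: lists of (label, x, y, scale) detections in two images.
+     votes = Counter()
+     for la, xa, ya, sa in objs_a:
+         for lb, xb, yb, sb in objs_b:
+             if la != lb:
+                 continue
+             key = (round((xb - xa) / xy_bin), round((yb - ya) / xy_bin),
+                    round(np.log2(sb / sa) / s_bin))
+             votes[key] += 1
+     return votes   # heavy bins = consistent object patterns across the pair
+
+ a = [("umbrella", 100, 50, 1.0), ("table", 120, 90, 1.0)]
+ b = [("umbrella", 300, 60, 2.0), ("table", 340, 70, 2.0)]
+ print(pattern_votes(a, b).most_common(1))   # both pairs fall in one bin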
+ Step 2: Cluster patterns into groups.
+ • Assume transitivity between patterns.
+ • Allow missing participating objects: lower-order
+ group instantiations are merged with the
+ corresponding higher-order group instantiations.
+
+ Step 3: Train group detectors.
+ • Generate a bounding box for each instantiation of the group: the smallest box that
+ encompasses all participating objects, including the hallucinated missing object.
+ • Utilize any off-the-shelf object detection method to train group detectors. We used
+ the deformable part-based model.
+
+
+
+ Results:
Improved
Scene
Understanding
+ Contextual
+ reasoning
+
+
+ Higher-‐order
groups
provide
useful
+ contextual
informa'on!
+
+ For
more
details,
please
visit:
+ hTp://chenlab.ece.cornell.edu/projects/objectgroup
+
+
+
diff --git a/Train/cvpr-2013-005/cvpr-2013-005-Poster.pdf b/Train/cvpr-2013-005/cvpr-2013-005-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..dd63e69115e70e15bcc0dbac7e11358271f12d69
--- /dev/null
+++ b/Train/cvpr-2013-005/cvpr-2013-005-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33ae50bd1c39052dec33e0a83fc144f4a35dabc4a80955fc9b8ce2018b1fa06f
+size 9723262
diff --git a/Train/cvpr-2013-005/cvpr-2013-005.pdf b/Train/cvpr-2013-005/cvpr-2013-005.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0a50d9d24e3a0262a06a0eb0a5dbc940e7c48fea
--- /dev/null
+++ b/Train/cvpr-2013-005/cvpr-2013-005.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:deb3317a3bc681493f42dd869984408b7411136d257f459a2e73f42ef39e1623
+size 6662254
diff --git a/Train/cvpr-2013-005/info.txt b/Train/cvpr-2013-005/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ac6c7d9fbfbb8950e568d1d041a1528d87796f2b
--- /dev/null
+++ b/Train/cvpr-2013-005/info.txt
@@ -0,0 +1,91 @@
+
+
+ Introduction
+ • Goal: recognize an image’s location by matching to a database
+ • Challenges: matching is time consuming; image retrieval is noisy
+ • Previous approaches: image-retrieval based & direct matching
+ • Our approach:
+ use an image graph to learn local similarity functions;
+ encourage diversity in the top-ranked results
+
+
+
+
+ Image Graphs
+ • Nodes are images.
+ Only geometrically
+ consistent images
+ are connected.
+ • Edge weights are defined
+ by the Jaccard index
+ J(a,b) = N(a,b) / (N(a) + N(b) − N(a,b)),
+ thresholded to
+ improve robustness
+ (see the sketch below).
+ • On the right: an
+ example image graph
+ on the Dubrovnik dataset
+ (red nodes are the
+ selected center images).
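+
+ A small sketch of this edge weighting (the threshold value is our placeholder), where N(a) is the set of images geometrically verified against a:
+
+ def jaccard_edge(neighbors_a, neighbors_b, tau=0.1):
+     # Thresholded Jaccard index; a weight of 0 means the edge is dropped.
+     inter = len(neighbors_a & neighbors_b)
+     union = len(neighbors_a) + len(neighbors_b) - inter
+     j = inter / union if union else 0.0
+     return j if j >= tau else 0.0
+
+ print(jaccard_edge({1, 2, 3, 4}, {3, 4, 5}))   # 2/5 = 0.4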
+
+
+
+
+ Overview of Approach
+ • Training:
+ 1. Compute a covering of the graph with a set of subgraphs (select center images
+ or neighborhoods in the image graph).
+ 2. Learn and calibrate an SVM-based distance metric for each subgraph.
+ • Testing:
+ 3. Use the models from step 2 to compute the distance from a query image to each
+ database image, and generate a ranked shortlist of possible image matches.
+ 4. Perform geometric verification sequentially with the top database images in the
+ shortlist.
+
+
+
+ Generating Ranking Results
+ • Ranked neighborhoods are concatenated to form a ranking list of all DB images.
+ • The order within each neighborhood is determined by BoW similarity.
+ • Goal: have the first true match appear in the ranked shortlist as early as possible.
+ • Comparison of BoW image-retrieval ranking and our learned ranking:
+
+
+ • Ranking can be further improved by enforcing diversity in the top results: pick the
+ next image conditioned on the previous one failing to match.
+
+
+
+
+ Experiments
+
+ Table: Top-K accuracies
+ Dubrovnik (Specific Vocab.)
+
+ Dubrovnik (Generic Vocab.)
+
+ Rome
+
+ Aachen
+
+
+
+
+
+
+
diff --git a/Train/cvpr-2013-007/cvpr-2013-007-Poster.pdf b/Train/cvpr-2013-007/cvpr-2013-007-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..11edb358f9f19a8cb2adbecdb230958efe48ffbe
--- /dev/null
+++ b/Train/cvpr-2013-007/cvpr-2013-007-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cee71ed5c59360ad14a5456340529f46c5540dea7daf8eb3931a161dc62ea778
+size 2933984
diff --git a/Train/cvpr-2013-007/cvpr-2013-007.pdf b/Train/cvpr-2013-007/cvpr-2013-007.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fdeaf8356eb78a2067a6be1e6959d55454dd9b9c
--- /dev/null
+++ b/Train/cvpr-2013-007/cvpr-2013-007.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:281aafbc1df62de1d1f6a2507deee4acc1f382899d372de76d27d98b03e097cb
+size 779718
diff --git a/Train/cvpr-2013-007/info.txt b/Train/cvpr-2013-007/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d047aa50b1d0ae77ea1bf39c6f2e7834a50c03cd
--- /dev/null
+++ b/Train/cvpr-2013-007/info.txt
@@ -0,0 +1,84 @@
+
+
+
+
+
+ Motivation:
+ We can recognize persons across camera views from their local
+ distinctive regions.
+ Human salience:
+ can identify important local features;
+ is robust to changes in viewpoint;
+ is itself a useful descriptor for pedestrian matching.
+ Distinct patches are considered salient only when they are
+ matched and distinct in both camera views.
+ These regions are discarded as outliers by existing methods, or
+ have little effect on person matching because of their small sizes.
+ Contribution:
+ An unsupervised framework to extract distinctive features for person
+ re-identification.
+ Patch matching is utilized with an adjacency constraint to handle the
+ misalignment problem caused by viewpoint change and pose variation.
+ Human salience is learned in an unsupervised way.
+ Code is available at
+ http://mmlab.ie.cuhk.edu.hk/projects/project_salience_reid/index.html
+
+
+
+
+
+ Dense Correspondence:
+ Features: dense color histogram + dense SIFT
+ Adjacency constrained search: simple patch matching
+ Unsupervised Salience Learning:
+ Definition: salient regions are discriminative in making a person stand
+ out from their companions, and reliable in finding the same person across
+ camera views.
+ Assumption: if a region is salient, fewer than half of the persons in a
+ reference set share a similar appearance. Hence we set k = Nr/2, where Nr
+ is the number of images in the reference set.
+ • K-Nearest Neighbor Salience (see the sketch below):
+ • One-Class SVM Salience:
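+
+ A numpy sketch of the K-nearest-neighbor salience score under the assumption above: a patch is salient when even its k-th nearest neighbor in the reference set is far away, with k = Nr/2.
+
+ import numpy as np
+
+ def knn_salience(patch_feat, reference_feats):
+     # reference_feats: best-matched patch features from the Nr reference
+     # images; salience = distance to the (Nr/2)-th nearest neighbor.
+     d = np.linalg.norm(reference_feats - patch_feat, axis=1)
+     k = len(reference_feats) // 2
+     return np.sort(d)[k - 1]    # large value -> few lookalikes -> salient
+
+ rng = np.random.default_rng(0)
+ refs = rng.random((100, 32))    # matched patches from Nr = 100 images
+ print(knn_salience(rng.random(32), refs))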
+
+
+
+
+ Matching for Re-identification
+ Bi-directional Weighted Matching
+ Complementary Combination
+
+ Experimental Results:
+
+
+ • VIPeR Dataset
+
+
+
+ ETHZ Dataset
+
+
+
+
+
+
+
+
+
diff --git a/Train/cvpr-2013-008/cvpr-2013-008-Poster.pdf b/Train/cvpr-2013-008/cvpr-2013-008-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a890f2e379c6640cd4d75586f6d2cbba4d27fa1f
--- /dev/null
+++ b/Train/cvpr-2013-008/cvpr-2013-008-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05e8bc4e98d13c68738bcb1a654f2c18271d17e08f67051f919b954caa94b7af
+size 1571327
diff --git a/Train/cvpr-2013-008/cvpr-2013-008.pdf b/Train/cvpr-2013-008/cvpr-2013-008.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0d70adcd8095957c44dc1780e2954435f2f28672
--- /dev/null
+++ b/Train/cvpr-2013-008/cvpr-2013-008.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fcf4e21b8597eb71cae07f8b6a28cb1c199fcb073e1f1b7553d1501344ab051e
+size 10081957
diff --git a/Train/cvpr-2013-008/info.txt b/Train/cvpr-2013-008/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..24418eca600a2175ff6dc95dbde7e185a51171a1
--- /dev/null
+++ b/Train/cvpr-2013-008/info.txt
@@ -0,0 +1,100 @@
+
+
+ Overview of our approach
+ In this paper, we present an approach to scene understanding by
+ reasoning about the physical stability of objects, taking point
+ clouds as input. We utilize a simple observation:
+ By human design, objects in static scenes should be
+ stable with respect to gravity.
+ Our method consists of two major steps:
+ Geometric reasoning: recovering solid 3D volumetric
+ primitives from a defective point cloud.
+ Physical reasoning: grouping the unstable primitives into
+ physically stable objects by optimizing the stability and a
+ scene prior.
+ Our main contributions include:
+ We define the physical stability function explicitly by studying the
+ minimum energy needed to change the pose and position of a
+ primitive (or object) from one equilibrium to another.
+ We introduce the disconnectivity graph (DG) from physics (spin
+ glasses) to represent the energy landscapes.
+ We solve the complex optimization problem of stability
+ maximization with the Swendsen-Wang cut sampling method.
+
+
+
+ Geometric reasoning
+ Given a point cloud of a scene, the goal of geometric reasoning is to
+ recover a volumetric representation of objects with physical properties
+ such as volume, mass, and support relations.
+ We first segment the point cloud with Implicit Algebraic Models (IAMs):
+ region-growing segmentation by iterative IAM fitting,
+ then merging “convexly” connected regions.
+ We then convert the defective point cloud segments into solid
+ volumetric primitives:
+ estimating the gravity direction and generating voxels,
+ estimating invisible (occluded) space,
+ filling in missing voxels.
+
+
+
+ An illustration of our approach
+
+
+
+
+ Physical reasoning: modeling object stability
+
+
+ Definition of stability:
+ The stability S(a, x0, W) of an object a at state x0 in the presence of a disturbance
+ work W is the maximum energy that it can release when it moves out of the energy
+ barrier under the work W.
+ Physical reasoning is then posed as a well-known graph partition problem, through which
+ the unstable primitives can be grouped together to achieve the maximum global stability.
+ Inference of maximum stability:
+ where L is the labeling of the graph partition, x(O_i) is the current state of the grouped
+ object O_i, and F(O_i) represents a penalty function for the object geometric prior, e.g.
+ the size and shape complexity.
+
+
+
+ Problem Inference
+ The Swendsen-Wang cut is applied to solve the complex optimization problem
+ by sampling the partitioning of unstable primitives.
+
+
+
+
+ Experiment
+ Qualitative results: (a) RGB images, (b) results after geometric reasoning,
+ (c) results after physical reasoning.
+
+ Other qualitative results on large scale point clouds captured by KinectFusion
+
+ Comparison of segmentation error
+
+ Comparison of missing voxel recovery
+
+ Comparison of physical relation inference
+
+
+
+
+ Project Page
+ http://www.stat.ucla.edu/~ybzhao/research/physics/
+
+
+
diff --git a/Train/cvpr-2013-010/cvpr-2013-010-Poster.pdf b/Train/cvpr-2013-010/cvpr-2013-010-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7add0d957c87ce51d42e5cecadf6e53fc59a146c
--- /dev/null
+++ b/Train/cvpr-2013-010/cvpr-2013-010-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c715f1b5d3b5ac52757fb4c61265abcd52929f521bf096d949f38cfaab72658
+size 545914
diff --git a/Train/cvpr-2013-010/cvpr-2013-010.pdf b/Train/cvpr-2013-010/cvpr-2013-010.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9f246a692fd3d11f2cbf1f883974943d7f2cb7a8
--- /dev/null
+++ b/Train/cvpr-2013-010/cvpr-2013-010.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ddd2b22f95f382eeaabec91c82d17b105144b50c3185fdb30b25e79ad8939b3e
+size 526583
diff --git a/Train/cvpr-2013-010/info.txt b/Train/cvpr-2013-010/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c7364824e55564cd8e0c5d3b5b0d5c27f58e2463
--- /dev/null
+++ b/Train/cvpr-2013-010/info.txt
@@ -0,0 +1,143 @@
+
+
+ Problem
+ Most hashing methods are designed to generate binary codes that
+ preserve the Euclidean distance in the original space. Manifold
+ learning techniques, in contrast, are better able to preserve the
+ intrinsic geodesic distance. However, the following problems hinder
+ the use of manifold learning for hashing:
+ 1. Prohibitive computational cost.
+ 2. The out-of-sample extension problem: most manifold learning
+ methods are non-parametric.
+ Existing methods (all based on Laplacian eigenmaps):
+ • Spectral Hashing: uniform data assumption
+ • Anchor Graph Hashing: Nyström extension
+ • Self-Taught Hashing: out-of-sample extension by SVM
+
+
+
+ Contributions
+ We show how to learn compact binary embeddings on the data’s
+ intrinsic manifolds. The proposed approach is inspired by Delalleau
+ et al. [2], who focused on semi-supervised classification. Our
+ contributions include:
+ 1. Making semantic hashing on data manifolds practical via an
+ inductive hashing framework
+ • Efficient: linear indexing time O(n)
+ and constant query time O(1)
+ • Effective: better than L2 scan with t-SNE, etc.
+ 2. Connecting manifold learning and hashing
+ • Any manifold learning method can be
+ applied in the hashing framework.
+ • Evaluation of 9 manifold learning
+ methods for hashing.
+
+
+
+ References
+ [1] F. Shen, C. Shen, Q. Shi, A. van den Hengel, and Z. Tang. Inductive
+ Hashing on Manifolds. In Proc. IEEE Conf. Computer Vision and
+ Pattern Recognition, 2013.
+ [2] O. Delalleau, Y. Bengio, and N. Le Roux. Efficient non-parametric
+ function induction in semi-supervised learning. In Proc. Int. Workshop
+ on Artificial Intelligence and Statistics, 2005.
+
+
+
+ Formulation
+ Denote the training data by X := {x1, x2, ..., xn} and their manifold
+ embedding by Y := {y1, y2, ..., yn}. Given a new data point xq, we aim
+ to generate an embedding yq which preserves the local neighborhood
+ relationships:
+ yq = argmin_y Σ_i w(xq, xi) ||y − yi||²,
+ where w(xq, xi) is the similarity, which is only non-zero for the k
+ nearest neighbors of xq. This results in
+ yq = Σ_i w(xq, xi) yi / Σ_i w(xq, xi).
+ This provides a simple inductive formulation for the embedding of a
+ new data point as a linear combination of the base embeddings.
+ We developed a prototype algorithm which approximates yq using only
+ a small base set, with a good bound: m clusters are used to cover Y.
+ Observing that the cluster centers have the largest overall weight
+ w.r.t. the points from their own cluster, i.e., Σ_{i∈Ij} w(cj, xi), we
+ approximately select all cluster centers to express ŷ for efficiency.
+ We obtain our general inductive hash function by binarizing the
+ low-dimensional embedding,
+ h(x) = sgn(W̄_xB · YB),
+ where YB := {y1, y2, ..., ym} is the embedding of the base set
+ B := {c1, c2, ..., cm}, the cluster centers obtained by K-means, and
+ W̄ is defined such that W̄ij = w(xi, cj) / Σ_{j=1..m} w(xi, cj), for
+ xi ∈ X, cj ∈ B. With this, the embedding for the training data becomes
+ Y = W̄ YB. We term our hashing method Inductive Manifold-Hashing
+ (IMH). For IMH, any manifold learning method can be applied to
+ generate the low-dimensional embedding YB as a base (a pipeline
+ sketch follows below).
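+
+ A numpy/scikit-learn sketch of the inductive pipeline (a dense Gaussian similarity and MDS as the base embedding are our demo choices; the paper keeps only k-NN weights and allows any manifold method):
+
+ import numpy as np
+ from sklearn.cluster import KMeans
+ from sklearn.manifold import MDS
+
+ def train_imh(X, m=50, bits=16):
+     centers = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
+     YB = MDS(n_components=bits, random_state=0).fit_transform(centers)
+     return centers, YB           # base set B and its embedding YB
+
+ def hash_points(X, centers, YB, sigma=1.0):
+     d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
+     W = np.exp(-d2 / (2 * sigma ** 2))
+     W /= W.sum(axis=1, keepdims=True)       # row-normalized W-bar
+     return (W @ YB > 0).astype(np.uint8)    # sgn(W-bar YB) as 0/1 bits
+
+ rng = np.random.default_rng(0)
+ X = rng.normal(size=(500, 8))
+ centers, YB = train_imh(X)
+ print(hash_points(X[:3], centers, YB))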
+
+
+
+ Algorithm
+
+
+
+
+ Source Code
+ available at: http://goo.gl/A9IFL
+
+
+
+ Evaluation
+ Evaluation of manifold learning methods
+
+
+
+
+
+
+ Results
+ Retrieval results on CIFAR-10 (60K)
+
+ (by IMH-tSNE)
+
+
+ Retrieval results on MNIST (70K)
+
+
+ Retrieval results on SIFT1M and GIST1M
+
+
+ Computational times (seconds) on MNIST
+
+ Classification accuracy with linear SVM
+
+
+
+
diff --git a/Train/cvpr-2013-012/cvpr-2013-012-Poster.pdf b/Train/cvpr-2013-012/cvpr-2013-012-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..24266617128a040f53875dca60598a88606b892d
--- /dev/null
+++ b/Train/cvpr-2013-012/cvpr-2013-012-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44dacbbc1b93f533a13e3c510a8cfc29d9a16d64661fcf264afc3c38d96b2d40
+size 7612466
diff --git a/Train/cvpr-2013-012/cvpr-2013-012.pdf b/Train/cvpr-2013-012/cvpr-2013-012.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4fad002620e1572003be06dde45353c07aab2b4c
--- /dev/null
+++ b/Train/cvpr-2013-012/cvpr-2013-012.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59de1162c91815ffb323c4c460cb3347f96d349bbdefce22a4740ed5e37524cb
+size 2262467
diff --git a/Train/cvpr-2013-012/info.txt b/Train/cvpr-2013-012/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f38d3176f04068353f07cf1865ecaa586f3b9ca4
--- /dev/null
+++ b/Train/cvpr-2013-012/info.txt
@@ -0,0 +1,130 @@
+
+
+ Objective
+ Input: Single low-resolution, noisy, and perhaps heavily quantized depth map
+ Objective: Jointly increase spatial resolution and apparent measurement accuracy (e.g., depth resolution)
+
+ Left: 3x nearest neighbor upscaling. Right: 3x SR output of our algorithm.
+
+
+
+ Contributions
+ ‘Single image’ depth SR—using information only from the
+ input depth map—by:
+ • Reasoning in terms of 3D point patches
+ • 3D variant of PatchMatch
+ • Patch upscaling and merging technique
+
+
+
+ Why is it Hard?
+ Most techniques rely on ancillary data that is often
+ unavailable or difficult to obtain (e.g., aligned guiding
+ image at target resolution).
+ Proceeding ‘by example’—by assembling SR out-
+ put from matched 2D pixel patches—poses its own
+ challenges:
+ • Different patch depths (depth normalization?)
+ • Projective distortions (calls for small patches)
+ • Object boundaries (discontinuity handling?)
+
+ Left: three dissimilar pairs of 2D pixel patches.
+ Right: analogous 3D point patch pairs similar.
+
+
+
+ 3D Point Patches
+
+ A 6 DoF 3D rigid body motion g ∈ SE(3) relates 3D point
+ patches S_x, S'_x ⊂ R^3. Point P_x is the point encoded at pixel x of
+ the input depth map and is the center point of the ‘further’ patch S_x.
+ Point P'_x = g(P_x) is the center point of the ‘closer’ patch S'_x.
+ The radius r is kept the same for all patches.
+
+
+
+ Matching Cost c(x; g)
+ ‘Backward’ cost c^b(x; g) computes patch similarity by
+ SSD over nearest neighbors of S_x in g^{-1}(S'_x), which
+ does not penalize addition of new detail. To be more
+ confident that such new detail is reasonable, we also
+ compute the analogous ‘forward’ cost c^f(x; g).
+
+ ‘Backward’ cost c^b(x; g).
+
+ ‘Forward’ cost c^f(x; g).
+ The matching cost c(x; g), over which we minimize, is given by a
+ convex combination of the ‘backward’ and ‘forward’ costs
+ (see the sketch below).
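+
+ A rough sketch of such a nearest-neighbor SSD cost (the KD-tree
+ implementation and the mixing weight alpha are our assumptions; the poster
+ does not give the exact convex combination):
+
+ import numpy as np
+ from scipy.spatial import cKDTree
+
+ def nn_ssd(src, dst):
+     # SSD from each point of src to its nearest neighbor in dst.
+     d, _ = cKDTree(dst).query(src)
+     return np.sum(d ** 2)
+
+ def matching_cost(S_x, S_x_prime, g_inv, alpha=0.5):
+     # g_inv maps the 'closer' patch back into the frame of S_x.
+     back = g_inv(S_x_prime)
+     c_b = nn_ssd(S_x, back)   # 'backward': S_x explained by transformed patch
+     c_f = nn_ssd(back, S_x)   # 'forward': transformed patch explained by S_x
+     return alpha * c_b + (1 - alpha) * c_f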
+
+
+
+ Algorithm
+ Our proposed ‘single image’ depth SR algorithm reduces to two steps:
+ 1. Obtain dense 6 DoF correspondence field over input pixels x
+ using new 3D variant of PatchMatch algorithm of Barnes et al.
+ 2. Populate SR pixels x̂ of output depth map at target resolution
+ using novel patch upscaling and merging technique
+
+
+
+ Dense 6 DoF Correspondence Search
+
+
+ Visualization of projected 3D displacements of output dense
+ 6 DoF rigid body assignment of our 3D PatchMatch variant.
+
+
+
+ Patch Upscaling and Merging
+ SR output is generated by a weighted sum over interpolated depth values
+ of ‘backward’-transformed points g_x^{-1}(S'_x). Patch weight is computed
+ as a function of c^b(x; g_x) in order to promote addition of new detail.
+
+
+
+ At input resolution, ‘backward’-transformed 3D points g_x^{-1}(S'_x) are
+ allowed to influence only the 2D pixels corresponding to S_x, since it is
+ over these pixels that the matching cost c(x; g_x) was computed. At target
+ resolution, we carry out a polygon approximation of the pixel mask. It is
+ over the SR pixels x̂ of the polygonalized mask that depth values from
+ g_x^{-1}(S'_x) are interpolated.
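+
+ A simplified sketch of the weighted merge; the exponential weight and sigma
+ are our assumptions, since the poster only says the weight is a function of
+ c^b(x; g_x):
+
+ import numpy as np
+
+ def merge_patches(samples, costs, out_shape, sigma=1.0):
+     # samples: per patch, (rows, cols, depths) of interpolated depth values
+     # landing on SR pixels; costs: 'backward' cost c^b(x; g_x) per patch.
+     acc = np.zeros(out_shape)
+     wsum = np.zeros(out_shape)
+     for (rows, cols, depths), c in zip(samples, costs):
+         w = np.exp(-c / sigma ** 2)        # low cost -> high weight (assumed form)
+         np.add.at(acc, (rows, cols), w * depths)
+         np.add.at(wsum, (rows, cols), w)
+     return acc / np.maximum(wsum, 1e-12)   # weighted average per SR pixel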
+
+
+
+ Qualitative Results
+
+ 2x NN and SR (stereo).
+
+ 2x NN and SR (structured light).
+
+ 4x NN and SR (ToF).
+
+
+
+ Quantitative Results (Middlebury)
+
+ Root mean square error (RMSE).
+
+ Percent error.
+
+
+
diff --git a/Train/cvpr-2013-014/cvpr-2013-014-Poster.pdf b/Train/cvpr-2013-014/cvpr-2013-014-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..649eed003836ab0bdffe803a03e6b23af58e7956
--- /dev/null
+++ b/Train/cvpr-2013-014/cvpr-2013-014-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b624a639e4b4675b06bcec6cdf9dca4d071ca1be57ecc8f61ddb6a44387b0d8d
+size 12123186
diff --git a/Train/cvpr-2013-014/cvpr-2013-014.pdf b/Train/cvpr-2013-014/cvpr-2013-014.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c31186971e8f4adaec1229c223432b996d2be9ce
--- /dev/null
+++ b/Train/cvpr-2013-014/cvpr-2013-014.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a65534556fb0333904c8d020c140577611fc319a13129d7a6371e471f9276c72
+size 3033365
diff --git a/Train/cvpr-2013-014/info.txt b/Train/cvpr-2013-014/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f28a0338d5afd75cfba9f62367856d08805c83e2
--- /dev/null
+++ b/Train/cvpr-2013-014/info.txt
@@ -0,0 +1,180 @@
+
+
+ Motivation
+ How do you describe this object? What class is this object?
+ Car ✓ Boat ✓ Sofa ✗ Airplane ✗
+ Independent of its object category, it looks Abnormal!
+ We argue that abnormalities are among the most important
+ components of what is worth mentioning.
+ We propose a method that recognizes abnormalities
+ in images and reports weird visual attributes.
+
+ Research Questions
+ I) What is Abnormality? A: There is NO clear answer for it!
+ II) What can make an image look weird? A: A large variety of sources!
+ III) Is it a simple two-way classification (Normal vs. Abnormal)?
+ A: It is not a straightforward classification problem.
+ We cannot use abnormal images for training the classifier, mainly because:
+ 1) The number of abnormal objects is imbalanced compared to normal objects.
+ 2) People are able to judge normality even though they have not seen any abnormal instance.
+ We only use normal objects to learn a “Typicality Model” and represent
+ abnormality as a meaningful deviation from normality.
+
+
+
+ Dataset
+ Cause of abnormality: I) abnormalities rooted in the object itself (our focus);
+ II) context-related abnormality.
+ We present the first dataset of abnormal objects, along with a novel
+ human subject experiment to acquire ground-truth annotation.
+ Data acquisition
+
+ 1) Does this image look abnormal?
+ 2) What is the reason? Object vs. Context
+ 3) Categorize the object
+ 4) Assuming its category membership, rate the abnormality cues:
+    Shape / Color / Texture / Pose
+
+
+
+
+ Human Subject Experiment
+ • Ten human subjects for each image
+ • Outlier responders are removed
+ • Responses are averaged across users
+ Here are some insights:
+ § It is hard even for humans to categorize
+ abnormal objects.
+ § Across the different cues, shape is the
+ most important reason for abnormality.
+ § Except for aeroplane, people are sure
+ about the cause of abnormality.
+
+
+
+
+
+ Proposed Abnormality Model
+ The normality assumption (N) imposes a peaked distribution over object
+ classes (C). As a result, multimodal and uniform distributions over
+ categories suggest abnormality.
+ Considering this point, the normality model
+ can be illustrated by this graphical model:
+ Conditioned on observed visual attributes (A)
+ and without any assumption on the object
+ class, abnormality is the complement event
+ of normality: P(abnormal | A) = 1 − P(N | A).
+
+ We estimate the joint attribute likelihood by marginalizing over object
+ classes, P(A) = Σ_C P(A | C) P(C), and, given an object class, the
+ attributes are assumed independent: P(A | C) = Π_i P(a_i | C).
+ For a given object class, each visual attribute has a different importance
+ factor; we measure it by the conditional entropy H(a_i | C).
+ As the learned attribute classifiers are not perfect, we consider their
+ accuracy during learning as a measure of their reliability.
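+
+ A toy sketch of the scoring above, assuming a uniform class prior, binary
+ attributes, and naive-Bayes per-class attribute models (all three are our
+ assumptions; the poster does not fix these choices):
+
+ import numpy as np
+
+ def abnormality_score(a, p_attr_given_class, class_prior=None):
+     # a: binary attribute vector (K,); p_attr_given_class: (C, K) matrix
+     # of P(attribute_k = 1 | class c). Low joint attribute likelihood
+     # under every class suggests abnormality.
+     C, K = p_attr_given_class.shape
+     prior = np.full(C, 1.0 / C) if class_prior is None else class_prior
+     lik = np.prod(np.where(a, p_attr_given_class, 1 - p_attr_given_class), axis=1)
+     p_A = np.sum(prior * lik)      # marginalize over object classes
+     return -np.log(p_A + 1e-12)    # high score = surprising = abnormal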
+
+
+
+ Experimental Results
+ I) Abnormality Prediction
+ * We use normal images from the PASCAL dataset
+ and abnormal images from our dataset.
+ * Parameters are learned ONLY on normal objects,
+ which are represented by 64 visual attributes.
+ * Our model has better accuracy for abnormality
+ detection (we are not using any abnormal images
+ during training). The second row shows the result for
+ the case of using abnormal images during training a classifier.
+ * Using the surprise score, we are able to rank images by how abnormal they look:
+
+
+ II) Abnormal Attribute Reporting
+
+ * Given an abnormal instance from an object class, we
+ can determine Missing (M) and Unexpected (U)
+ attributes.
+ * For quantitative results, we built a baseline based on
+ Farhadi et al. (CVPR 2009), which assumes a Gaussian
+ distribution over attributes for normal objects.
+ * To be comparable with human responses, we
+ grouped visual attributes into disjoint lists representing
+ Shape, Pose, Color, and Texture.
+
+ {Numbers are KL-divergence}
+
+ III) Abnormality Detection Helps
+ Object Categorization
+ * By knowing an object is an abnormal instance of a
+ given class, we have a list of attributes which
+ make this object look weird.
+ * Adjusting these abnormal attributes toward what is
+ expected from a normal instance improves
+ the object categorization task.
+ * We measure the KL-divergence between distributions
+ over object categories using TURK vs. our model.
+
+
+
+
diff --git a/Train/cvpr-2013-016/cvpr-2013-016-Poster.pdf b/Train/cvpr-2013-016/cvpr-2013-016-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a54028c01cedb058e830d64259e8cdf39f68bc0e
--- /dev/null
+++ b/Train/cvpr-2013-016/cvpr-2013-016-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:480734c3c65d30b1f57c542ed9745da991e555c160e8cbf71c7e4ee7ea3c0f0b
+size 1171552
diff --git a/Train/cvpr-2013-016/cvpr-2013-016.pdf b/Train/cvpr-2013-016/cvpr-2013-016.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e9d911d08899003c50140e8422caccc8dc59a275
--- /dev/null
+++ b/Train/cvpr-2013-016/cvpr-2013-016.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39d145292b55f390e141062fe5a32095d6e721e69855dc91112ca81eb5242df8
+size 4979024
diff --git a/Train/cvpr-2013-016/info.txt b/Train/cvpr-2013-016/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..dc51524e13c740ad752b08200d5ddbee5744fc8b
--- /dev/null
+++ b/Train/cvpr-2013-016/info.txt
@@ -0,0 +1,116 @@
+
+
+ Motivation
+ Photometric stereo with mixture of Ward’s BRDFs [1]
+
+
+ Multi-view photometric stereo
+ [2] with Lambert’s BRDF
+
+ Iso-depth contours for
+ arbitrary isotropic BRDFs [3].
+
+
+
+ Features of Our Method
+ Arbitrary isotropic materials;
+ Full 360-degree reconstruction of shape and BRDF;
+ Simple & inexpensive capture setup;
+ Simple optimization;
+ High accuracy: shape error ~0.3 mm,
+ BRDF error ~9% relative RMSE
+ (re-rendered image error ~6–8 intensity levels).
+
+
+
+ References
+ [1] D. B. Goldman, B. Curless, A. Hertzmann, and S. M. Seitz.
+ Shape and spatially-varying BRDFs from photometric stereo.
+ ICCV 2005
+ [2] C. Hernandez, G. Vogiatzis, and R. Cipolla. Multiview
+ photometric stereo. TPAMI 2008
+ [3] N. Alldrin and D. Kriegman. Toward reconstructing surfaces
+ with arbitrary isotropic reflectance: a stratified photometric
+ stereo approach. ICCV 2007
+ [4] J. Lawrence, A. Ben-Artzi, C. DeCoro, W. Matusik, H. Pfister,
+ R. Ramamoorthi, and S. Rusinkiewicz. Inverse shade trees for
+ non-parametric material representation and editing.
+ SIGGRAPH 2006
+ [5] D. Nehab, S. Rusinkiewicz, J. Davis, and R. Ramamoorthi.
+ Efficiently combining positions and normals for precise 3D
+ geometry. SIGGRAPH 2005
+
+
+
+ Overview
+
+
+
+
+ Technical Details
+ Iso-depth contour estimation
+ o The symmetry axis of an intensity profile provides
+ the azimuth angle [3];
+ o An intensity profile can be fit to a truncated Fourier
+ series, to be robust to shadow, inter-reflection, etc.
+ (see the sketch below);
+ o Recover iso-depth contours from azimuth angles.
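+
+ A minimal sketch of such a fit (the series order K and the plain
+ least-squares solve are our assumptions; the poster does not give the
+ exact form):
+
+ import numpy as np
+
+ def fit_truncated_fourier(theta, intensity, K=3):
+     # Least-squares fit I(theta) ~ a0 + sum_k [a_k cos k*theta + b_k sin k*theta].
+     cols = [np.ones_like(theta)]
+     for k in range(1, K + 1):
+         cols += [np.cos(k * theta), np.sin(k * theta)]
+     A = np.stack(cols, axis=1)
+     coef, *_ = np.linalg.lstsq(A, intensity, rcond=None)
+     return coef   # smooth profile; its symmetry axis gives the azimuth angle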
+
+
+ Shape capture
+ Depth propagation
+ 1. Begin with sparse SfM points.
+ 2. Project a 3D point to one view;
+ 3. Assign its depth to the iso-depth contour passing
+ through its projection to generate new 3D points;
+ 4. Iterate 1-3 with a different 3D point and view.
+ Shape refinement
+ Jointly optimize normals and 3D positions [5].
+
+
+ Reflectance capture
+ Fix the shape, and apply the ACLS algorithm [4] to estimate BRDFs.
+
+
+
+ Experiments (more in the paper)
+
+ Two Capture systems
+
+ Ringlight system
+
+ Handheld system
+ Shape Errors
+
+
+ An input image
+ Final shape
+ Shape error (mm)
+ Basis BRDFs & weights
+ A reference photograph
+ A rendered image
+
+
+
diff --git a/Train/cvpr-2013-028/cvpr-2013-028-Poster.pdf b/Train/cvpr-2013-028/cvpr-2013-028-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f0e780fa3293bfa4deb17589847a4dd739b185bf
--- /dev/null
+++ b/Train/cvpr-2013-028/cvpr-2013-028-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ad261f9e02e58a92a15089f0f5a3fd13df05bccd0f4cb31cc6607aafa6ba7c8
+size 1999889
diff --git a/Train/cvpr-2013-028/cvpr-2013-028.pdf b/Train/cvpr-2013-028/cvpr-2013-028.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..82c6ba9ef2f5a92033313fabfef9ef410d1da634
--- /dev/null
+++ b/Train/cvpr-2013-028/cvpr-2013-028.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:47eb4c735e23cc59298fac8dc84ddd4768a36c4cfbcd2d1f9f5a6a4aaf9e3a27
+size 2227814
diff --git a/Train/cvpr-2013-028/info.txt b/Train/cvpr-2013-028/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..61bfbbcd74336b59fdc6e87139a46d515a62548c
--- /dev/null
+++ b/Train/cvpr-2013-028/info.txt
@@ -0,0 +1,105 @@
+
+
+ Problem
+ o Existing methods assume monolithic attributes are sufficient
+ [Lampert et al. CVPR 2009, Farhadi et al. CVPR 2009, Branson et al. ECCV 2010,
+ Kumar et al. PAMI 2011, Scheirer et al. CVPR 2012, Parikh & Grauman ICCV 2011, …]
+ o However, there are real perceptual differences between annotators
+
+
+ o Further, attribute terms can be imprecise
+
+
+
+
+
+
+ Our Idea
+ 1) Treat learning of perceived attributes as an adaptation problem.
+
+ We adapt a generic attribute predictor trained with a large amount of
+ majority-voted data with a small amount of user-labeled data.
+ 2) Obtain labels implicitly from user’s search history.
+ Impact: Capture user’s perception with minimal annotation effort.
+ Personalization makes attribute-based image search more accurate.
+
+
+
+ Learning Adapted Attributes
+ Training data
+ Learning
+ Prediction
+ B. Geng, L. Yang, C. Xu, and X.-S. Hua. “Ranking Model Adaptation
+ for Domain-Specific Search.” IEEE TKDE, March 2010.
+ o Similar formulation for binary classifiers (Yang et al. 2007)
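+
+ A schematic sketch of this style of adaptation: keep the user-specific
+ weights close to the generic model while satisfying the user's ranking
+ pairs. The squared hinge loss and plain gradient descent are our
+ simplification of Geng et al.'s formulation:
+
+ import numpy as np
+
+ def adapt_attribute(w_generic, X_a, X_b, lam=1.0, lr=0.01, iters=200):
+     # User-labeled pairs: X_a[i] should rank above X_b[i].
+     # Objective: lam * ||w - w_generic||^2 + sum_i max(0, 1 - w.(a_i - b_i))^2
+     w = w_generic.copy()
+     D = X_a - X_b                        # pairwise difference vectors
+     for _ in range(iters):
+         margin = D @ w
+         viol = margin < 1.0              # active hinge terms
+         grad = 2 * lam * (w - w_generic) \
+             - 2 * (D[viol] * (1 - margin[viol])[:, None]).sum(axis=0)
+         w -= lr * grad
+     return w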
+
+
+
+ Inferring Implicit User-Specific Labels
+ o Transitivity (see the sketch below)
+ o Contradictions
+
+ Feedback can imply that no images satisfy all constraints.
+ Such a contradiction implies the attribute models are inaccurate.
+
+ We relax the conditions for a contradiction, and adjust the
+ models using the new ordering on some image pairs.
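+
+ One plausible reading of the transitivity cue, as a sketch (the pair
+ representation and the closure procedure are our assumptions):
+
+ def transitive_closure(pairs):
+     # pairs: set of (a, b) meaning the user perceives image a > image b
+     # for an attribute; a > b and b > c imply the new label a > c.
+     pairs = set(pairs)
+     changed = True
+     while changed:
+         changed = False
+         for (a, b) in list(pairs):
+             for (c, d) in list(pairs):
+                 if b == c and (a, d) not in pairs:
+                     pairs.add((a, d))
+                     changed = True
+     return pairs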
+
+
+
+ Datasets
+ Shoes [Berg10, Kovashka12] attributes: pointy, open, bright, shiny,
+ ornamented, high-heeled, long, formal, sporty, feminine
+ SUN [Patterson12] attributes: sailing, vacationing, hiking,
+ camping, socializing, shopping, vegetation, clouds,
+ natural light, cold, open area, far-away horizon
+ Size: 14k images each; Features: GIST, color, HOG, SSIM
+
+
+
+ Visualization of Learned Attribute Spectra
+
+
+
+
+
+ Adapted Attribute Accuracy
+ o Generic: status quo of learning from majority-voted data
+ o Generic+: like above, but uses more generic data
+ o User-exclusive: learns a user-specific model from scratch
+
+
+
+
+
+ Impact of Adapted Attributes for Personalized Search
+
+
+
+ The personalized attribute models allow the user to more quickly find his/her search target.
+ Implicitly gathering labels for personalization saves the user time, while producing similar results.
+
+
+
diff --git a/Train/cvpr-2013-029/cvpr-2013-029-Poster.pdf b/Train/cvpr-2013-029/cvpr-2013-029-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..84051caeb6913aeeba5d49308ceea321d41e6598
--- /dev/null
+++ b/Train/cvpr-2013-029/cvpr-2013-029-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:007ce31d8618fc13c5877db4bd1ca382802ce9a21bf9638eb6546f5202f9c9ef
+size 786815
diff --git a/Train/cvpr-2013-029/cvpr-2013-029.pdf b/Train/cvpr-2013-029/cvpr-2013-029.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0908eb76ab13c0b2deec5d411dad0c00f0779525
--- /dev/null
+++ b/Train/cvpr-2013-029/cvpr-2013-029.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a44985b259edc2173762a8f5330a986f2d32438228cbbb9d07edd170c546b14d
+size 2775100
diff --git a/Train/cvpr-2013-029/info.txt b/Train/cvpr-2013-029/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..faa7f432c96ccc86c3603ab1d09a443a9d4fcd22
--- /dev/null
+++ b/Train/cvpr-2013-029/info.txt
@@ -0,0 +1,101 @@
+
+
+ Motivation
+ Realistic unlabeled videos are “untrimmed”, i.e., not cropped to the
+ temporal regions of interest, and each video contains multiple actions.
+ This yields an unlabeled feature distribution in which useful and
+ redundant candidates are hard to distinguish for active learning.
+
+
+
+
+ Main Idea
+ • We introduce a detection-based active learning approach to select
+ videos for annotation, while accounting for their untrimmed nature.
+ • A voting-based detector is robust to partial evidence and supports fast
+ incremental updates during active learning.
+ • Learn accurate action recognition models with fewer annotations.
+
+
+
+ Hough-based Action Detector
+ Building the Detector
+ • Extract HoG/HoF features at STIPs detected in training videos and
+ build Hough tables, and sort words by discriminative power.
+
+ Applying the Detector to a Novel Video
+ • Use the Hough table entries to vote on the probable action centers
+ • Reduces number of candidate intervals per video for active selection
+
+ [Similar to detectors of Willems et al., BMVC 2009; Yao et al., CVPR 2010]
+
+
+
+ Active Selection of Untrimmed Videos
+ We seek the unlabeled video that, if used to augment the action detector, will most
+ confidently localize actions in all unlabeled videos,
+ where T is the training set, Y = {+1, −1} is the set of possible labels, and
+ (v, y) denotes that the unlabeled video v has been given label y.
+ • Treating the unlabeled video as positive, we score the value of probable action intervals
+ in the video to the current detector;
+ • treating it as negative, the analogous score is computed;
+
+ where VALUE is our novel entropy-based detector confidence defined below.
+
+
+
+
+ Estimating Detector Confidence with Space-Time Entropy
+ • Quantize the unlabeled video’s 3D vote space and compute its normalized entropy
+ • A vote space with good cluster(s) indicates consensus on the location(s) of the action
+
+
+ • Using this entropy-based uncertainty metric, we define the confidence of a detector
+ in localizing actions on the entire unlabeled set.
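+
+ A small sketch of the normalized vote-space entropy (the bin counts are an
+ assumption):
+
+ import numpy as np
+
+ def vote_space_entropy(votes, bins=(8, 8, 8)):
+     # votes: (N, 3) array of (x, y, t) action-center votes. Low normalized
+     # entropy = peaked vote space = confident localization.
+     hist, _ = np.histogramdd(votes, bins=bins)
+     p = hist.ravel() / hist.sum()
+     p = p[p > 0]
+     H = -(p * np.log(p)).sum()
+     return H / np.log(np.prod(bins))   # normalize to [0, 1]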
+
+
+
+ Annotations
+ Our interface that
+ annotators use to
+ label action intervals
+ in the actively
+ requested videos.
+ Available on the
+ project webpage.
+
+
+
+
+ Results
+ Hollywood (8 classes)
+
+ UT Interaction (6 classes)
+
+ MSR Actions 1 (3 classes)
+
+ • Passive < Active: Annotation effort saved by intelligent label requests.
+ • Active Classifier < Ours: Accounting for the untrimmed nature of video is critical.
+ • Active Entropy < Ours: Simply estimating individual video uncertainty is insufficient.
+ • Active GT-Ints > Active Pred-Ints: Room for improvement in interval estimates.
+
+ UT Interaction
+
+ MSR Actions 1
+ Our active method achieves good accuracy using far fewer annotations.
+
+
+
diff --git a/Train/cvpr-2014-002/cvpr-2014-002-Poster.pdf b/Train/cvpr-2014-002/cvpr-2014-002-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d15442a3ae2d61880d2996d0834bdf933545b574
--- /dev/null
+++ b/Train/cvpr-2014-002/cvpr-2014-002-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85de8aba125053fbee57b7d01767c41d3657b01e8d4a55a0a7bce2b592729698
+size 1280519
diff --git a/Train/cvpr-2014-002/cvpr-2014-002.pdf b/Train/cvpr-2014-002/cvpr-2014-002.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..098d1aa9c79fb61e4ba79dbf8a01e5fa18a07593
--- /dev/null
+++ b/Train/cvpr-2014-002/cvpr-2014-002.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c671661b9ca7d6b17ebb576fa716b34acb47437d8d2511c2a3965c8943526072
+size 3800256
diff --git a/Train/cvpr-2014-002/info.txt b/Train/cvpr-2014-002/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..039488df18197fcf1506731ae3a39f610fcdf1dc
--- /dev/null
+++ b/Train/cvpr-2014-002/info.txt
@@ -0,0 +1,107 @@
+
+
+ Motivation
+ This paper proposes a novel Multiple Instance Learning paradigm,
+ namely MILCut, to segment the foreground object inside a user-
+ provided bounding box.
+
+
+
+
+ Multiple Instance Learning
+ Multiple Instance Learning (MIL) is a general formulation for
+ dealing with hidden class labels in noisy input. In MIL, instances
+ (data samples) appear in the form of positive and negative bags:
+ Positive bag: contains at least one positive instance;
+ Negative bag: all instances are negative.
+
+
+
+ Overview of the Sweeping-Line Strategy
+ For the interactive segmentation problem, an unknown object of
+ interest is supposed to appear within the bounding box at an
+ unknown location; we also know that image pixels outside the
+ bounding box are background. Therefore, we view the image within
+ the bounding box as “noisy input”, and our task is to discover the
+ object under weak supervision with the data in company of outliers.
+ We provide a sweeping-line strategy to convert interactive
+ image segmentation into a multiple instance learning problem:
+ Each horizontal or vertical sweeping line inside the bounding
+ box (a positive bag) must contain at least one pixel from the
+ foreground object;
+ Each horizontal or vertical sweeping line outside the bounding
+ box (a negative bag) does not contain any pixel from the
+ foreground object.
+ In this way, the sweeping-line strategy naturally corresponds to the
+ multiple instance constraints described above (see the sketch below).
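+
+ A literal sketch of the bag construction (whether a positive bag spans the
+ whole line or only its segment inside the box is our reading of the poster):
+
+ def sweeping_line_bags(image_shape, box):
+     # box = (x0, y0, x1, y1), inclusive pixel coordinates.
+     H, W = image_shape
+     x0, y0, x1, y1 = box
+     pos, neg = [], []
+     for y in range(H):                     # horizontal sweeping lines
+         if y0 <= y <= y1:
+             pos.append([(y, x) for x in range(x0, x1 + 1)])
+         else:
+             neg.append([(y, x) for x in range(W)])
+     for x in range(W):                     # vertical sweeping lines
+         if x0 <= x <= x1:
+             pos.append([(y, x) for y in range(y0, y1 + 1)])
+         else:
+             neg.append([(y, x) for y in range(H)])
+     return pos, neg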
+
+
+
+ Validity and Tightness of a Bounding Box
+ Definition 1. Validity:
+ For an image I, a bounding box B is valid if the foreground object O
+ completely lies inside the box.
+ Definition 2. Tightness:
+ For an image I, a bounding box B is tight if the foreground object O
+ intersects the left, right, top, and bottom border of the bounding box.
+ Assuming validity and tightness of the bounding box, we then convert
+ the image segmentation task into a MIL problem by considering the
+ horizontal and vertical slices in the bounding box as positive bags and
+ other slices outside the box as negative bags. Either pixels or
+ superpixels can be used as instances. We also proved the following lemma:
+ Lemma 1. If a bounding box B is valid and tight and the object O inside
+ the bounding box is connected, then the constructed positive and
+ negative bags satisfy multiple instance constraints.
+
+
+
+ Multiple Instance Learning Formulation
+ We formulate the interactive image segmentation problem by a
+ structured prediction model named MILCut-Struct.
+ The log-likelihood function is defined as:
+ The appearance likelihood model distinguishes the foreground pixels
+ or superpixels from the clutter background:
+ The structural constraints model enforces the piecewise smoothness
+ in resulting segments:
+ An alternative way of incorporating structural information is applying
+ GraphCut as a post-processing step, which we named MILCut-Graph.
+
+
+
+ Experiment Results on GrabCut dataset
+
+
+
+
+ Experiment Results on Berkeley dataset
+
+
+
+
+ Experiments with Noisy Inputs on Weizmann dataset
+ In general, MILCut explicitly embeds the bounding box prior in the
+ model, and is able to stretch the foreground segment towards all sides
+ of the bounding box. Shown here are F-scores on the Weizmann dataset:
+
+ In real cases, the assumptions we made for MILCut may not always
+ be satisfied. In this experiment, we consider two distinct situations
+ where the multiple instance constraints are not met:
+ • Case 1: The bounding box is not tight. Shown here are F-scores on
+ the Weizmann single object dataset with noisy inputs:
+
+ • Case 2: The object is not connected. Shown here are F-scores on a
+ subset of the Weizmann dataset, where each bounding box contains
+ two objects:
+
+ Experiments show that MILCut still obtains better performance than
+ other approaches in these cases.
+ The first two authors contributed equally to this work.
+
+
+
diff --git a/Train/cvpr-2014-003/cvpr-2014-003-Poster.pdf b/Train/cvpr-2014-003/cvpr-2014-003-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b7c6076b0ef659c46f29a6aa5f64925d57d94513
--- /dev/null
+++ b/Train/cvpr-2014-003/cvpr-2014-003-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8010c9158438acdf69c70bcc3451ea727897bfde3742a92a1ef4b523a970a3d
+size 1081870
diff --git a/Train/cvpr-2014-003/cvpr-2014-003.pdf b/Train/cvpr-2014-003/cvpr-2014-003.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cc0f2eb4f0a36b742c4f501cb47c2f5e3f2bd2b6
--- /dev/null
+++ b/Train/cvpr-2014-003/cvpr-2014-003.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:751286c729f8e5fc6129607a08a3104731c4af757bfd9bbe83f04cd5fcc13baa
+size 4990324
diff --git a/Train/cvpr-2014-003/info.txt b/Train/cvpr-2014-003/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ee5a5a699c7bbe6dec156442ac81db2d1a88d1d8
--- /dev/null
+++ b/Train/cvpr-2014-003/info.txt
@@ -0,0 +1,90 @@
+
+
+ Motivation
+
+
+
+ Most previous methods on
+ material classification use a
+ head-on camera, since isotropic
+ BRDFs can be approximated by
+ a 2D BRDF (θd, θh).
+
+
+ A conventional camera captures
+ only a 1D slice of the 2D BRDF.
+
+
+
+
+
+ A slanted camera captures a larger
+ portion of the 2D BRDF space.
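+
+ For reference, a sketch of the (θh, θd) coordinates, following the
+ half/difference-angle parameterization of Rusinkiewicz [3]; the concrete
+ formulas below are the standard ones, not taken from the poster:
+
+ import numpy as np
+
+ def half_diff_angles(l, v, n):
+     # l, v, n: unit light, view, and normal vectors.
+     h = (l + v) / np.linalg.norm(l + v)                     # half vector
+     theta_h = np.arccos(np.clip(np.dot(n, h), -1.0, 1.0))  # normal vs. half
+     theta_d = np.arccos(np.clip(np.dot(l, h), -1.0, 1.0))  # 'difference' angle
+     return theta_h, theta_d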
+
+
+
+
+ Experiments on Ink Database
+
+ We observe improved classification accuracy !
+
+
+
+ Application: Ink Identification
+ To simplify the setup, we bring the camera
+ and light together.
+ Although accuracy is a bit compromised,
+ we capture important discriminative
+ information (retro-reflectance, specular
+ highlights).
+
+
+
+
+
+ Results
+ Sample Image
+
+ Our Result
+
+ Ground Truth
+
+
+
+
+ Conclusions
+ 1. A slanted camera increases the sampling region of the 2D BRDF space.
+ 2. This enhances the performance of BRDF-based material classification.
+ 3. The first work to analyse BRDF for ink identification, an important
+ problem in forensics.
+ 4. A simple handheld camera-flashlight device for data capture.
+
+
+
+ References
+ 1. J. Gu and C. Liu, “Discriminative illumination: Per-pixel classification of raw
+ materials based on optimal projections of spectral BRDF,” in Proc. CVPR, 2012.
+ 2. O. Wang, P. Gunawardane, S. Scher, and J. Davis, “Material classification using
+ BRDF slices,” in Proc. CVPR, pp. 2805–2811, 2009.
+ 3. S. Rusinkiewicz, “A New Change of Variables for Efficient BRDF Representation,” in
+ Eurographics Rendering Workshop, pp. 11–22, 1998.
+ This material is based upon work supported by the National Science Foundation under Grant No.
+ IIS-1008285. Ping Tan is partially supported by the A*STAR PSF project R-263-000-698-305.
+
+
+
diff --git a/Train/eccv-2012-001/eccv-2012-001-Poster.pdf b/Train/eccv-2012-001/eccv-2012-001-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..16796e264e76c9c683fe728d52aec1e81171be25
--- /dev/null
+++ b/Train/eccv-2012-001/eccv-2012-001-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b4824d4c07e3178b830f9d449103e1824525539732cda15000bd60d48fe9565
+size 8597386
diff --git a/Train/eccv-2012-001/eccv-2012-001.pdf b/Train/eccv-2012-001/eccv-2012-001.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3bee0c08d5b9f839463a8f53e9e86f04bf40577d
--- /dev/null
+++ b/Train/eccv-2012-001/eccv-2012-001.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:148660bc884a61ca866bf02451d6802f0ebcb466247781a9905190730db1e93d
+size 6798188
diff --git a/Train/eccv-2012-001/info.txt b/Train/eccv-2012-001/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..3e054b50aa073407379e2fa61179496131226d66
--- /dev/null
+++ b/Train/eccv-2012-001/info.txt
@@ -0,0 +1,71 @@
+
+
+ Motivation
+
+ How to detect objects above the ground,
+ without class specific knowledge?
+ How to estimate their velocities?
+ How to keep a low computational cost?
+
+
+
+ System Overview
+
+
+ We use the stixel world model (Badino et al. 2009);
+ the dominant objects in the scene
+ are represented as vertical sticks.
+
+ The ground plane and stixel distances
+ are estimated without a depth map,
+ at 100 fps (Benenson et al. 2011 & 2012).
+ Motion of stixels is estimated without
+ computing the pixelwise optical flow.
+
+
+
+ Key Idea
+ In the stixel world, motion estimation becomes
+ a simple 2D dynamic programming problem.
+ Simpler problem ⇒ faster solution
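+
+ An illustrative sketch of matching stixels across two frames with dynamic
+ programming (the cost terms and the smoothness penalty are ours; the
+ poster does not spell out the objective):
+
+ import numpy as np
+
+ def match_stixels(cost, smooth=1.0):
+     # cost[i, j]: dissimilarity of stixel i in frame t to stixel j in t+1.
+     # Each stixel picks a match; jumps between neighbors are penalized.
+     n, m = cost.shape
+     dp = cost.copy()
+     back = np.zeros((n, m), dtype=int)
+     jump = np.abs(np.arange(m)[:, None] - np.arange(m)[None, :])
+     for i in range(1, n):
+         trans = dp[i - 1][None, :] + smooth * jump
+         back[i] = trans.argmin(axis=1)
+         dp[i] = cost[i] + trans.min(axis=1)
+     match = np.empty(n, dtype=int)       # backtrack the optimal assignment
+     match[-1] = dp[-1].argmin()
+     for i in range(n - 1, 0, -1):
+         match[i - 1] = back[i, match[i]]
+     return match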
+
+
+
+
+ Evaluation Methodology
+ This is the first work to provide a quantitative evaluation of
+ stixel motion estimation.
+ Annotated pedestrian bounding boxes
+ are used as a proxy for evaluation.
+ For each frame, bounding box positions up
+ to ∆ frames in the future are predicted.
+ "Recall vs. ∆ frames" curves are used for
+ comparison.
+
+
+
+
+
+
+
+ Results
+
+ Fair quality,
+ at high speed.
+ Lower than high-
+ quality optical flow,
+ but comparable to
+ the ICP tracker.
+
+
+
diff --git a/Train/iccv-2013-002/iccv-2013-002-Poster.pdf b/Train/iccv-2013-002/iccv-2013-002-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e44bff93171d9b3e2b731cc724217cdb32a48d4c
--- /dev/null
+++ b/Train/iccv-2013-002/iccv-2013-002-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af048599f071a1ff551b0db4c5885c3ad1060bd5623c729749e97fc076a1b080
+size 10271005
diff --git a/Train/iccv-2013-002/iccv-2013-002.pdf b/Train/iccv-2013-002/iccv-2013-002.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..32697a41eb80b9eefbd661fbb3463364ac410b52
--- /dev/null
+++ b/Train/iccv-2013-002/iccv-2013-002.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25b06301c700425133dd6b9ba19a6e8b1d1bddc594403169af84bd583e88afa2
+size 1281863
diff --git a/Train/iccv-2013-002/info.txt b/Train/iccv-2013-002/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d7b5080846215098efe50c6b64b43884db8a1dcb
--- /dev/null
+++ b/Train/iccv-2013-002/info.txt
@@ -0,0 +1,139 @@
+
+
+ Motivation
+ • One of the main challenges for scaling up object recognition
+ systems is the lack of annotated images for real-world categories.
+ Typically there are few images available for training classifiers for
+ most of these categories; the problem is severe for fine-grained categorization.
+
+ • There are abundant textual descriptions of these categories,
+ which come in the form of dictionary entries, encyclopedia
+ articles, and various online resources. For example, it is possible
+ to find several good descriptions of a “bobolink” in encyclopedias
+ of birds, while there are only a few images available for that bird
+ online.
+
+ • Attribute-based approaches for zero-shot learning deal with
+ the dilemma of finding the best set of visual attributes for object
+ description.
+
+
+
+ Problem Definition
+ • The main question we address in this paper is how to use purely
+ textual description of categories, with no training images, to learn
+ visual classifiers for these categories.
+ • We propose an approach for zero-shot learning of object
+ categories where the description of unseen categories comes in
+ the form of typical text such as an encyclopedia entry, without the
+ need for explicitly defined attributes.
+
+
+
+
+ Contributions
+ • First approach that predicts explicit visual classifier parameters of
+ unseen classes from typical text, such as an encyclopedia entry.
+ • Two baselines were designed, based on Regression and Domain
+ Adaptation (DA) functions.
+ • We propose a quadratic program that involves both the Regression
+ and DA functions.
+ Project Website: https://sites.google.com/site/mhelhoseiny/
+ projects/computer-vision-projects/Write_a_Classifier
+ Includes the data. The code will be available shortly on it.
+
+
+
+ Formulations
+ We denote textual features and visual features as t ∈ T (textual
+ domain) and x ∈ V (visual domain), respectively. In all of our
+ experiments, t is tf-idf features and x is classeme features.
+ 1) Regression (Reg) Baseline
+ • The predicted classifier for textual feature vector t* is obtained as
+   c_reg(t*) = arg max_c [p_reg(c | t*)].
+ • A set of one-vs-all classifiers {c_k} are learned, one for each seen
+ class. Given {(t_k, c_k)}, a regressor is learned that can be used to give
+ a prior estimate for p_reg(c | t). In our experiments, we used Gaussian
+ Process Regression (GPR) and Twin Gaussian Processes (TGP).
+ 2) Domain Adaptation (DA) Baseline
+ • A linear (or nonlinear kernelized) transfer function W between T
+ and V.
+ • W can be learned by optimizing, with a suitable regularizer, over
+ constraints of the form t^T W x > l if t and x belong to the same class,
+ and t^T W x < u otherwise, where x is a visual feature vector amended
+ by 1, and l and u are model parameters.
+ • It is not hard to see that this transfer function can act as a classifier.
+ Given a textual feature t and a test image, represented by x, a
+ classification decision can be obtained by t^T W x > b, where we set b to
+ (l + u)/2.
+ • The predicted classifier is obtained as c_DA(t*) = t*^T W.
+ 3) DA-Reg Quadratic Program (Better than 1 and 2)
+
+ Predicted Classifier
+ • {x_i, i = 1 : N} are the visual features
+ of the images of the seen classes.
+
+ • Given ln p_reg(c | t) from the TGP
+ and W, this equation reduces
+ to a quadratic program on c
+ with linear constraints.
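+
+ An illustrative sketch of the regression baseline above, with ridge
+ regression standing in for GPR/TGP (the stand-in regressor, array shapes,
+ and names are our assumptions):
+
+ import numpy as np
+ from sklearn.linear_model import Ridge
+
+ def learn_classifier_predictor(T_seen, C_seen, alpha=1.0):
+     # T_seen: (K, d_t) tf-idf vectors of the seen classes;
+     # C_seen: (K, d_x) one-vs-all classifier weights learned on seen classes.
+     return Ridge(alpha=alpha).fit(T_seen, C_seen)
+
+ def predict_unseen_classifier(reg, t_star):
+     # Predict classifier weights for an unseen class from its text t_star;
+     # a test image x is then scored by the dot product c . x.
+     return reg.predict(t_star[None, :])[0]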
+
+
+
+ Experimental Results
+ • We provide text augmentation of the CUB-Birds and Oxford-Flower
+ datasets.
+ • We computed the ROC
+ curve and report the area
+ under that curve (AUC) as
+ a comparative measure of
+ different approaches.
+
+ 1) Flower Dataset
+
+
+
+ Fig 1: ROC curves of the best 10 predicted classes
+ (best seen in color) for the Flower dataset
+
+ Fig 2: AUC improvement over the three
+ baselines (GPR, TGP, DA) on the Flower dataset.
+
+ Fig 3: AUC of the predicted classifiers for all classes of the Flower dataset
+ 2) Birds Dataset
+
+ Fig 4: ROC curves of the best 10 predicted
+ classes (best seen in color) for the Birds dataset
+
+ Fig 5: AUC improvement over the three
+ baselines (GPR, TGP, DA) on the Birds dataset.
+
+ Fig 6: AUC of the predicted classifiers for all classes of the Birds dataset
+
+
+
diff --git a/Train/ijcb-2011-001/ijcb-2011-001-Poster.pdf b/Train/ijcb-2011-001/ijcb-2011-001-Poster.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1cea70e902d5262f0d2935bde0b393bd0f0cf2f7
--- /dev/null
+++ b/Train/ijcb-2011-001/ijcb-2011-001-Poster.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6314daa2b4c3586ca8dfd75eb207f3aaaf12182f76a6839dc13e37c10c5fb700
+size 2664519
diff --git a/Train/ijcb-2011-001/ijcb-2011-001.pdf b/Train/ijcb-2011-001/ijcb-2011-001.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0cbe5eaaee21c62969c94bb8197f2632655441fb
--- /dev/null
+++ b/Train/ijcb-2011-001/ijcb-2011-001.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c484018daaa0089da9148af196629bc182779942e2a4988c7689f29884f80b9d
+size 2057428
diff --git a/Train/ijcb-2011-001/info.txt b/Train/ijcb-2011-001/info.txt
new file mode 100644
index 0000000000000000000000000000000000000000..92502ad83768f0f1013571128697aea2a880db47
--- /dev/null
+++ b/Train/ijcb-2011-001/info.txt
@@ -0,0 +1,89 @@
+
+
+ Goals & Motivations
+ • Create a classification framework robust to uncontrolled,
+ real-world images while avoiding overfitting
+ • Explore localized classification on discriminative facial
+ regions to increase the accuracy rate, overcoming:
+ • Pose
+ • Alignment
+ • Facial expressions
+ • Occlusions
+
+
+
+ Dataset
+ • Database comprises 26,766 (13383 male/female) unique
+ faces collected from Flickr by the CMU Biometrics Center
+ • Workers on Amazon Mechanical Turk landmarked key
+ points on the face and provided gender ground truth
+ • High variation in pose, illumination, expression, resolution
+ • Preprocessing performed to normalize rotation and scale
+ variation
+
+ Fig: Seven facial landmark points
+
+ Fig: Point cloud distribution
+
+ Fig: Sample images from database
+
+
+
+ Region Selection & Localization
+ • Local regions improve alignment within each region
+ • These regions are non-overlapping, so they can be
+ manipulated individually and treated as orthogonal
+ • Regions selected based on:
+ • Discriminative ability of the SVM weight vector
+ • Fisher Discriminant Analysis (LDA)
+ • Perceptual and psychological studies by Brown et al.
+ • Bounding boxes are heuristically determined facial ratios
+
+ Fig: Fisher and SVM weight vectors
+ (top to bottom)
+
+ Fig: Regions of Interest
+
+
+
+ Implementation
+ • Gender features are well represented through texture
+ • MR8 filter bank robust to rotation and scaling
+ • A separate linear SVM is trained on each of the regions
+ • The SVM classifications are fused with one of the following
+ (see the sketch below):
+ • Majority voting scheme
+ • Confidence of SVM margin fit to a logistic function
+ • Naïve Bayes classifier trained on margin distance
+ • Logistic regression trained on margin distance
+ • Taking the raw sum of distances to the margin
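+
+ A sketch of one of the fusion rules above, logistic regression over
+ per-region margin distances (the array shapes and function names are ours):
+
+ from sklearn.linear_model import LogisticRegression
+
+ def fuse_margins(margins_train, y_train, margins_test):
+     # margins_*: (n_samples, n_regions) signed distances to each
+     # region-SVM margin; the fuser learns how much to trust each region.
+     fuser = LogisticRegression().fit(margins_train, y_train)
+     return fuser.predict(margins_test)   # fused gender prediction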
+
+ Fig: Algorithm implementation
+
+
+
+ Results & Conclusion
+ • Results reported on 5-fold cross validation for each dataset
+ • Despite the significant loss of information in each ROI,
+ each still has significant discriminative power
+ • The SVM distance margins in each region provide
+ an independent prediction of gender
+ • This results in a 5-dimensional feature space
+ • Likely to separate better due to region localization
+
+ Fig: Visualization of FERET score distribution
+
+ Fig: Visualization of Flickr score distribution
+
+
+
+
+