weipang142857 committed on
Commit f6cc031 · verified · 1 Parent(s): c481902

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. .gitattributes +166 -0
  2. Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf +3 -0
  3. Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf +3 -0
  4. Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.xml +61 -0
  5. Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition-Poster.pdf +3 -0
  6. Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition.pdf +3 -0
  7. Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/info.txt +119 -0
  8. Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers-Poster.pdf +3 -0
  9. Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers.pdf +3 -0
  10. Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/info.txt +73 -0
  11. Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models-Poster.pdf +3 -0
  12. Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models.pdf +3 -0
  13. Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/info.xml +46 -0
  14. Test/nips-2011-001/info.xml +51 -0
  15. Test/nips-2011-001/nips-2011-001-Poster.pdf +3 -0
  16. Test/nips-2011-001/nips-2011-001.pdf +3 -0
  17. Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical-Poster.pdf +3 -0
  18. Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical.pdf +3 -0
  19. Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/info.txt +92 -0
  20. Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf +3 -0
  21. Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf +3 -0
  22. Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.txt +61 -0
  23. Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations-Poster.pdf +3 -0
  24. Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations.pdf +3 -0
  25. Train/Active Boundary Annotation using Random MAP Perturbations/info.txt +120 -0
  26. Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation-Poster.pdf +3 -0
  27. Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation.pdf +3 -0
  28. Train/Adaptive Structure from Motion with a contrario model estimation/info.txt +72 -0
  29. Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning-Poster.pdf +3 -0
  30. Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning.pdf +3 -0
  31. Train/An automated measure of MDP similarity for transfer in reinforcement learning/info.txt +73 -0
  32. Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development-Poster.pdf +3 -0
  33. Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development.pdf +3 -0
  34. Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/info.txt +50 -0
  35. Train/BMVC-2011-001/BMVC-2011-001-Poster.pdf +3 -0
  36. Train/BMVC-2011-001/BMVC-2011-001.pdf +3 -0
  37. Train/BMVC-2011-001/info.txt +134 -0
  38. Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation-Poster.pdf +3 -0
  39. Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation.pdf +3 -0
  40. Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/info.txt +158 -0
  41. Train/Being John Malkovich/Being John Malkovich-Poster.pdf +3 -0
  42. Train/Being John Malkovich/Being John Malkovich.pdf +3 -0
  43. Train/Being John Malkovich/info.txt +365 -0
  44. Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features-Poster.pdf +3 -0
  45. Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features.pdf +3 -0
  46. Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/info.txt +134 -0
  47. Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY-Poster.pdf +3 -0
  48. Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY.pdf +3 -0
  49. Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/info.txt +69 -0
  50. Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS-Poster.pdf +3 -0
.gitattributes CHANGED
@@ -58,3 +58,169 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
58
  # Video files - compressed
59
  *.mp4 filter=lfs diff=lfs merge=lfs -text
60
  *.webm filter=lfs diff=lfs merge=lfs -text
61
+ Test/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies-Poster.pdf filter=lfs diff=lfs merge=lfs -text
62
+ Test/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies.pdf filter=lfs diff=lfs merge=lfs -text
63
+ Test/Camera[[:space:]]Motion[[:space:]]and[[:space:]]Surrounding[[:space:]]Scene[[:space:]]Appearance[[:space:]]as[[:space:]]Context[[:space:]]for[[:space:]]Action[[:space:]]Recognition/Camera[[:space:]]Motion[[:space:]]and[[:space:]]Surrounding[[:space:]]Scene[[:space:]]Appearance[[:space:]]as[[:space:]]Context[[:space:]]for[[:space:]]Action[[:space:]]Recognition-Poster.pdf filter=lfs diff=lfs merge=lfs -text
64
+ Test/Camera[[:space:]]Motion[[:space:]]and[[:space:]]Surrounding[[:space:]]Scene[[:space:]]Appearance[[:space:]]as[[:space:]]Context[[:space:]]for[[:space:]]Action[[:space:]]Recognition/Camera[[:space:]]Motion[[:space:]]and[[:space:]]Surrounding[[:space:]]Scene[[:space:]]Appearance[[:space:]]as[[:space:]]Context[[:space:]]for[[:space:]]Action[[:space:]]Recognition.pdf filter=lfs diff=lfs merge=lfs -text
65
+ Test/Can[[:space:]]a[[:space:]]driving[[:space:]]simulator[[:space:]]assess[[:space:]]the[[:space:]]effectiveness[[:space:]]of[[:space:]]Hazard[[:space:]]Perception[[:space:]]training[[:space:]]in[[:space:]]young[[:space:]]novice[[:space:]]drivers/Can[[:space:]]a[[:space:]]driving[[:space:]]simulator[[:space:]]assess[[:space:]]the[[:space:]]effectiveness[[:space:]]of[[:space:]]Hazard[[:space:]]Perception[[:space:]]training[[:space:]]in[[:space:]]young[[:space:]]novice[[:space:]]drivers-Poster.pdf filter=lfs diff=lfs merge=lfs -text
66
+ Test/Can[[:space:]]a[[:space:]]driving[[:space:]]simulator[[:space:]]assess[[:space:]]the[[:space:]]effectiveness[[:space:]]of[[:space:]]Hazard[[:space:]]Perception[[:space:]]training[[:space:]]in[[:space:]]young[[:space:]]novice[[:space:]]drivers/Can[[:space:]]a[[:space:]]driving[[:space:]]simulator[[:space:]]assess[[:space:]]the[[:space:]]effectiveness[[:space:]]of[[:space:]]Hazard[[:space:]]Perception[[:space:]]training[[:space:]]in[[:space:]]young[[:space:]]novice[[:space:]]drivers.pdf filter=lfs diff=lfs merge=lfs -text
67
+ Test/Leveraging[[:space:]]Multi-Domain[[:space:]]Prior[[:space:]]Knowledge[[:space:]]in[[:space:]]Topic[[:space:]]Models/Leveraging[[:space:]]Multi-Domain[[:space:]]Prior[[:space:]]Knowledge[[:space:]]in[[:space:]]Topic[[:space:]]Models-Poster.pdf filter=lfs diff=lfs merge=lfs -text
68
+ Test/Leveraging[[:space:]]Multi-Domain[[:space:]]Prior[[:space:]]Knowledge[[:space:]]in[[:space:]]Topic[[:space:]]Models/Leveraging[[:space:]]Multi-Domain[[:space:]]Prior[[:space:]]Knowledge[[:space:]]in[[:space:]]Topic[[:space:]]Models.pdf filter=lfs diff=lfs merge=lfs -text
69
+ Test/nips-2011-001/nips-2011-001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
70
+ Test/nips-2011-001/nips-2011-001.pdf filter=lfs diff=lfs merge=lfs -text
71
+ Train/3D[[:space:]]Proximal[[:space:]]Femur[[:space:]]Estimation[[:space:]]through[[:space:]]Bi-planar[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Hierarchical/3D[[:space:]]Proximal[[:space:]]Femur[[:space:]]Estimation[[:space:]]through[[:space:]]Bi-planar[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Hierarchical-Poster.pdf filter=lfs diff=lfs merge=lfs -text
72
+ Train/3D[[:space:]]Proximal[[:space:]]Femur[[:space:]]Estimation[[:space:]]through[[:space:]]Bi-planar[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Hierarchical/3D[[:space:]]Proximal[[:space:]]Femur[[:space:]]Estimation[[:space:]]through[[:space:]]Bi-planar[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Hierarchical.pdf filter=lfs diff=lfs merge=lfs -text
73
+ Train/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies-Poster.pdf filter=lfs diff=lfs merge=lfs -text
74
+ Train/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies/AZDBLab[[:space:]]A[[:space:]]Laboratory[[:space:]]Information[[:space:]]System[[:space:]]for[[:space:]]Large-Scale[[:space:]]Empirical[[:space:]]DBMS[[:space:]]Studies.pdf filter=lfs diff=lfs merge=lfs -text
75
+ Train/Active[[:space:]]Boundary[[:space:]]Annotation[[:space:]]using[[:space:]]Random[[:space:]]MAP[[:space:]]Perturbations/Active[[:space:]]Boundary[[:space:]]Annotation[[:space:]]using[[:space:]]Random[[:space:]]MAP[[:space:]]Perturbations-Poster.pdf filter=lfs diff=lfs merge=lfs -text
76
+ Train/Active[[:space:]]Boundary[[:space:]]Annotation[[:space:]]using[[:space:]]Random[[:space:]]MAP[[:space:]]Perturbations/Active[[:space:]]Boundary[[:space:]]Annotation[[:space:]]using[[:space:]]Random[[:space:]]MAP[[:space:]]Perturbations.pdf filter=lfs diff=lfs merge=lfs -text
77
+ Train/Adaptive[[:space:]]Structure[[:space:]]from[[:space:]]Motion[[:space:]]with[[:space:]]a[[:space:]]contrario[[:space:]]model[[:space:]]estimation/Adaptive[[:space:]]Structure[[:space:]]from[[:space:]]Motion[[:space:]]with[[:space:]]a[[:space:]]contrario[[:space:]]model[[:space:]]estimation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
78
+ Train/Adaptive[[:space:]]Structure[[:space:]]from[[:space:]]Motion[[:space:]]with[[:space:]]a[[:space:]]contrario[[:space:]]model[[:space:]]estimation/Adaptive[[:space:]]Structure[[:space:]]from[[:space:]]Motion[[:space:]]with[[:space:]]a[[:space:]]contrario[[:space:]]model[[:space:]]estimation.pdf filter=lfs diff=lfs merge=lfs -text
79
+ Train/An[[:space:]]automated[[:space:]]measure[[:space:]]of[[:space:]]MDP[[:space:]]similarity[[:space:]]for[[:space:]]transfer[[:space:]]in[[:space:]]reinforcement[[:space:]]learning/An[[:space:]]automated[[:space:]]measure[[:space:]]of[[:space:]]MDP[[:space:]]similarity[[:space:]]for[[:space:]]transfer[[:space:]]in[[:space:]]reinforcement[[:space:]]learning-Poster.pdf filter=lfs diff=lfs merge=lfs -text
80
+ Train/An[[:space:]]automated[[:space:]]measure[[:space:]]of[[:space:]]MDP[[:space:]]similarity[[:space:]]for[[:space:]]transfer[[:space:]]in[[:space:]]reinforcement[[:space:]]learning/An[[:space:]]automated[[:space:]]measure[[:space:]]of[[:space:]]MDP[[:space:]]similarity[[:space:]]for[[:space:]]transfer[[:space:]]in[[:space:]]reinforcement[[:space:]]learning.pdf filter=lfs diff=lfs merge=lfs -text
81
+ Train/Automated[[:space:]]Embryo[[:space:]]Stage[[:space:]]Classification[[:space:]]in[[:space:]]Time-lapse[[:space:]]Microscopy[[:space:]]Video[[:space:]]of[[:space:]]Early[[:space:]]Human[[:space:]]Embryo[[:space:]]Development/Automated[[:space:]]Embryo[[:space:]]Stage[[:space:]]Classification[[:space:]]in[[:space:]]Time-lapse[[:space:]]Microscopy[[:space:]]Video[[:space:]]of[[:space:]]Early[[:space:]]Human[[:space:]]Embryo[[:space:]]Development-Poster.pdf filter=lfs diff=lfs merge=lfs -text
82
+ Train/Automated[[:space:]]Embryo[[:space:]]Stage[[:space:]]Classification[[:space:]]in[[:space:]]Time-lapse[[:space:]]Microscopy[[:space:]]Video[[:space:]]of[[:space:]]Early[[:space:]]Human[[:space:]]Embryo[[:space:]]Development/Automated[[:space:]]Embryo[[:space:]]Stage[[:space:]]Classification[[:space:]]in[[:space:]]Time-lapse[[:space:]]Microscopy[[:space:]]Video[[:space:]]of[[:space:]]Early[[:space:]]Human[[:space:]]Embryo[[:space:]]Development.pdf filter=lfs diff=lfs merge=lfs -text
83
+ Train/BMVC-2011-001/BMVC-2011-001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
84
+ Train/BMVC-2011-001/BMVC-2011-001.pdf filter=lfs diff=lfs merge=lfs -text
85
+ Train/Bayesian[[:space:]]Joint[[:space:]]Topic[[:space:]]Modelling[[:space:]]for[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Object[[:space:]]Localisation/Bayesian[[:space:]]Joint[[:space:]]Topic[[:space:]]Modelling[[:space:]]for[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Object[[:space:]]Localisation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
86
+ Train/Bayesian[[:space:]]Joint[[:space:]]Topic[[:space:]]Modelling[[:space:]]for[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Object[[:space:]]Localisation/Bayesian[[:space:]]Joint[[:space:]]Topic[[:space:]]Modelling[[:space:]]for[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Object[[:space:]]Localisation.pdf filter=lfs diff=lfs merge=lfs -text
87
+ Train/Being[[:space:]]John[[:space:]]Malkovich/Being[[:space:]]John[[:space:]]Malkovich-Poster.pdf filter=lfs diff=lfs merge=lfs -text
88
+ Train/Being[[:space:]]John[[:space:]]Malkovich/Being[[:space:]]John[[:space:]]Malkovich.pdf filter=lfs diff=lfs merge=lfs -text
89
+ Train/Beyond[[:space:]]Spatial[[:space:]]Pyramids-[[:space:]]Receptive[[:space:]]Field[[:space:]]Learning[[:space:]]for[[:space:]]Pooled[[:space:]]Image[[:space:]]Features/Beyond[[:space:]]Spatial[[:space:]]Pyramids-[[:space:]]Receptive[[:space:]]Field[[:space:]]Learning[[:space:]]for[[:space:]]Pooled[[:space:]]Image[[:space:]]Features-Poster.pdf filter=lfs diff=lfs merge=lfs -text
90
+ Train/Beyond[[:space:]]Spatial[[:space:]]Pyramids-[[:space:]]Receptive[[:space:]]Field[[:space:]]Learning[[:space:]]for[[:space:]]Pooled[[:space:]]Image[[:space:]]Features/Beyond[[:space:]]Spatial[[:space:]]Pyramids-[[:space:]]Receptive[[:space:]]Field[[:space:]]Learning[[:space:]]for[[:space:]]Pooled[[:space:]]Image[[:space:]]Features.pdf filter=lfs diff=lfs merge=lfs -text
91
+ Train/CATALYST[[:space:]]ENHANCED[[:space:]]MICRO[[:space:]]SCALE[[:space:]]BATCH[[:space:]]ASSEMBLY/CATALYST[[:space:]]ENHANCED[[:space:]]MICRO[[:space:]]SCALE[[:space:]]BATCH[[:space:]]ASSEMBLY-Poster.pdf filter=lfs diff=lfs merge=lfs -text
92
+ Train/CATALYST[[:space:]]ENHANCED[[:space:]]MICRO[[:space:]]SCALE[[:space:]]BATCH[[:space:]]ASSEMBLY/CATALYST[[:space:]]ENHANCED[[:space:]]MICRO[[:space:]]SCALE[[:space:]]BATCH[[:space:]]ASSEMBLY.pdf filter=lfs diff=lfs merge=lfs -text
93
+ Train/CRYOGENIC[[:space:]]CATHODOLUMINESCENCE[[:space:]]FROM[[:space:]]CuxAg1-xInSe2[[:space:]]THIN[[:space:]]FILMS/CRYOGENIC[[:space:]]CATHODOLUMINESCENCE[[:space:]]FROM[[:space:]]CuxAg1-xInSe2[[:space:]]THIN[[:space:]]FILMS-Poster.pdf filter=lfs diff=lfs merge=lfs -text
94
+ Train/CRYOGENIC[[:space:]]CATHODOLUMINESCENCE[[:space:]]FROM[[:space:]]CuxAg1-xInSe2[[:space:]]THIN[[:space:]]FILMS/CRYOGENIC[[:space:]]CATHODOLUMINESCENCE[[:space:]]FROM[[:space:]]CuxAg1-xInSe2[[:space:]]THIN[[:space:]]FILMS.pdf filter=lfs diff=lfs merge=lfs -text
95
+ Train/CVPR-2014-011/CVPR-2014-011-Poster.pdf filter=lfs diff=lfs merge=lfs -text
96
+ Train/CVPR-2014-011/CVPR-2014-011.pdf filter=lfs diff=lfs merge=lfs -text
97
+ Train/CVPR-2014-013/CVPR-2014-013-Poster.pdf filter=lfs diff=lfs merge=lfs -text
98
+ Train/CVPR-2014-013/CVPR-2014-013.pdf filter=lfs diff=lfs merge=lfs -text
99
+ Train/Calibrating[[:space:]]Photometric[[:space:]]Stereo[[:space:]]by[[:space:]]Holistic[[:space:]]Reflectance[[:space:]]Symmetry[[:space:]]Analysis/Calibrating[[:space:]]Photometric[[:space:]]Stereo[[:space:]]by[[:space:]]Holistic[[:space:]]Reflectance[[:space:]]Symmetry[[:space:]]Analysis-Poster.pdf filter=lfs diff=lfs merge=lfs -text
100
+ Train/Calibrating[[:space:]]Photometric[[:space:]]Stereo[[:space:]]by[[:space:]]Holistic[[:space:]]Reflectance[[:space:]]Symmetry[[:space:]]Analysis/Calibrating[[:space:]]Photometric[[:space:]]Stereo[[:space:]]by[[:space:]]Holistic[[:space:]]Reflectance[[:space:]]Symmetry[[:space:]]Analysis.pdf filter=lfs diff=lfs merge=lfs -text
101
+ Train/Cambridge[[:space:]]Danehy[[:space:]]Park[[:space:]]Wind[[:space:]]Turbine[[:space:]]Preliminary[[:space:]]Project[[:space:]]Assessment/Cambridge[[:space:]]Danehy[[:space:]]Park[[:space:]]Wind[[:space:]]Turbine[[:space:]]Preliminary[[:space:]]Project[[:space:]]Assessment-Poster.pdf filter=lfs diff=lfs merge=lfs -text
102
+ Train/Cambridge[[:space:]]Danehy[[:space:]]Park[[:space:]]Wind[[:space:]]Turbine[[:space:]]Preliminary[[:space:]]Project[[:space:]]Assessment/Cambridge[[:space:]]Danehy[[:space:]]Park[[:space:]]Wind[[:space:]]Turbine[[:space:]]Preliminary[[:space:]]Project[[:space:]]Assessment.pdf filter=lfs diff=lfs merge=lfs -text
103
+ Train/Cascaded[[:space:]]Shape[[:space:]]Space[[:space:]]Pruning[[:space:]]for[[:space:]]Robust[[:space:]]Facial[[:space:]]Landmark[[:space:]]Detection/Cascaded[[:space:]]Shape[[:space:]]Space[[:space:]]Pruning[[:space:]]for[[:space:]]Robust[[:space:]]Facial[[:space:]]Landmark[[:space:]]Detection-Poster.pdf filter=lfs diff=lfs merge=lfs -text
104
+ Train/Cascaded[[:space:]]Shape[[:space:]]Space[[:space:]]Pruning[[:space:]]for[[:space:]]Robust[[:space:]]Facial[[:space:]]Landmark[[:space:]]Detection/Cascaded[[:space:]]Shape[[:space:]]Space[[:space:]]Pruning[[:space:]]for[[:space:]]Robust[[:space:]]Facial[[:space:]]Landmark[[:space:]]Detection.pdf filter=lfs diff=lfs merge=lfs -text
105
+ Train/Class[[:space:]]Specific[[:space:]]3D[[:space:]]Object[[:space:]]Shape[[:space:]]Priors[[:space:]]Using[[:space:]]Surface[[:space:]]Normals/Class[[:space:]]Specific[[:space:]]3D[[:space:]]Object[[:space:]]Shape[[:space:]]Priors[[:space:]]Using[[:space:]]Surface[[:space:]]Normals-Poster.pdf filter=lfs diff=lfs merge=lfs -text
106
+ Train/Class[[:space:]]Specific[[:space:]]3D[[:space:]]Object[[:space:]]Shape[[:space:]]Priors[[:space:]]Using[[:space:]]Surface[[:space:]]Normals/Class[[:space:]]Specific[[:space:]]3D[[:space:]]Object[[:space:]]Shape[[:space:]]Priors[[:space:]]Using[[:space:]]Surface[[:space:]]Normals.pdf filter=lfs diff=lfs merge=lfs -text
107
+ Train/Classification-Error[[:space:]]Cost[[:space:]]Minimization[[:space:]]Strategy-[[:space:]]dCMS/Classification-Error[[:space:]]Cost[[:space:]]Minimization[[:space:]]Strategy-[[:space:]]dCMS.pdf filter=lfs diff=lfs merge=lfs -text
108
+ Train/Combining[[:space:]]Motion[[:space:]]Planning[[:space:]]and[[:space:]]Optimization[[:space:]]for[[:space:]]Flexible[[:space:]]Robot[[:space:]]Manipulation/Combining[[:space:]]Motion[[:space:]]Planning[[:space:]]and[[:space:]]Optimization[[:space:]]for[[:space:]]Flexible[[:space:]]Robot[[:space:]]Manipulation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
109
+ Train/Combining[[:space:]]Motion[[:space:]]Planning[[:space:]]and[[:space:]]Optimization[[:space:]]for[[:space:]]Flexible[[:space:]]Robot[[:space:]]Manipulation/Combining[[:space:]]Motion[[:space:]]Planning[[:space:]]and[[:space:]]Optimization[[:space:]]for[[:space:]]Flexible[[:space:]]Robot[[:space:]]Manipulation.pdf filter=lfs diff=lfs merge=lfs -text
110
+ Train/Comparing[[:space:]]Visual[[:space:]]Feature[[:space:]]Coding[[:space:]]for[[:space:]]Learning[[:space:]]Disjoint[[:space:]]Camera[[:space:]]Dependencies/Comparing[[:space:]]Visual[[:space:]]Feature[[:space:]]Coding[[:space:]]for[[:space:]]Learning[[:space:]]Disjoint[[:space:]]Camera[[:space:]]Dependencies-Poster.pdf filter=lfs diff=lfs merge=lfs -text
111
+ Train/Comparing[[:space:]]Visual[[:space:]]Feature[[:space:]]Coding[[:space:]]for[[:space:]]Learning[[:space:]]Disjoint[[:space:]]Camera[[:space:]]Dependencies/Comparing[[:space:]]Visual[[:space:]]Feature[[:space:]]Coding[[:space:]]for[[:space:]]Learning[[:space:]]Disjoint[[:space:]]Camera[[:space:]]Dependencies.pdf filter=lfs diff=lfs merge=lfs -text
112
+ Train/Contextual[[:space:]]Gaussian[[:space:]]Process[[:space:]]Bandit[[:space:]]Optimization/Contextual[[:space:]]Gaussian[[:space:]]Process[[:space:]]Bandit[[:space:]]Optimization-Poster.pdf filter=lfs diff=lfs merge=lfs -text
113
+ Train/Contextual[[:space:]]Gaussian[[:space:]]Process[[:space:]]Bandit[[:space:]]Optimization/Contextual[[:space:]]Gaussian[[:space:]]Process[[:space:]]Bandit[[:space:]]Optimization.pdf filter=lfs diff=lfs merge=lfs -text
114
+ Train/Cross-lingual[[:space:]]Knowledge[[:space:]]Validation[[:space:]]Based[[:space:]]Taxonomy[[:space:]]Derivation[[:space:]]from[[:space:]]Heterogeneous[[:space:]]Online[[:space:]]Wikis/Cross-lingual[[:space:]]Knowledge[[:space:]]Validation[[:space:]]Based[[:space:]]Taxonomy[[:space:]]Derivation[[:space:]]from[[:space:]]Heterogeneous[[:space:]]Online[[:space:]]Wikis-Poster.pdf filter=lfs diff=lfs merge=lfs -text
115
+ Train/Cross-lingual[[:space:]]Knowledge[[:space:]]Validation[[:space:]]Based[[:space:]]Taxonomy[[:space:]]Derivation[[:space:]]from[[:space:]]Heterogeneous[[:space:]]Online[[:space:]]Wikis/Cross-lingual[[:space:]]Knowledge[[:space:]]Validation[[:space:]]Based[[:space:]]Taxonomy[[:space:]]Derivation[[:space:]]from[[:space:]]Heterogeneous[[:space:]]Online[[:space:]]Wikis.pdf filter=lfs diff=lfs merge=lfs -text
116
+ Train/Cultivation[[:space:]]and[[:space:]]Characterization[[:space:]]of[[:space:]]Microorganisms[[:space:]]in[[:space:]]Antarctic[[:space:]]Lakes/Cultivation[[:space:]]and[[:space:]]Characterization[[:space:]]of[[:space:]]Microorganisms[[:space:]]in[[:space:]]Antarctic[[:space:]]Lakes.pdf filter=lfs diff=lfs merge=lfs -text
117
+ Train/Curvature[[:space:]]and[[:space:]]Optimal[[:space:]]Algorithms[[:space:]]for[[:space:]]Learning[[:space:]]and[[:space:]]Minimizing[[:space:]]Submodular[[:space:]]Functions/Curvature[[:space:]]and[[:space:]]Optimal[[:space:]]Algorithms[[:space:]]for[[:space:]]Learning[[:space:]]and[[:space:]]Minimizing[[:space:]]Submodular[[:space:]]Functions-Poster.pdf filter=lfs diff=lfs merge=lfs -text
118
+ Train/Curvature[[:space:]]and[[:space:]]Optimal[[:space:]]Algorithms[[:space:]]for[[:space:]]Learning[[:space:]]and[[:space:]]Minimizing[[:space:]]Submodular[[:space:]]Functions/Curvature[[:space:]]and[[:space:]]Optimal[[:space:]]Algorithms[[:space:]]for[[:space:]]Learning[[:space:]]and[[:space:]]Minimizing[[:space:]]Submodular[[:space:]]Functions.pdf filter=lfs diff=lfs merge=lfs -text
119
+ Train/Decision[[:space:]]Tree[[:space:]]Fields/Decision[[:space:]]Tree[[:space:]]Fields-Poster.pdf filter=lfs diff=lfs merge=lfs -text
120
+ Train/Decision[[:space:]]Tree[[:space:]]Fields/Decision[[:space:]]Tree[[:space:]]Fields.pdf filter=lfs diff=lfs merge=lfs -text
121
+ Train/Deformable[[:space:]]Part[[:space:]]Descriptors[[:space:]]for[[:space:]]Fine-grained[[:space:]]Recognition[[:space:]]and[[:space:]]Attribute[[:space:]]Prediction/Deformable[[:space:]]Part[[:space:]]Descriptors[[:space:]]for[[:space:]]Fine-grained[[:space:]]Recognition[[:space:]]and[[:space:]]Attribute[[:space:]]Prediction-Poster.pdf filter=lfs diff=lfs merge=lfs -text
122
+ Train/Deformable[[:space:]]Part[[:space:]]Descriptors[[:space:]]for[[:space:]]Fine-grained[[:space:]]Recognition[[:space:]]and[[:space:]]Attribute[[:space:]]Prediction/Deformable[[:space:]]Part[[:space:]]Descriptors[[:space:]]for[[:space:]]Fine-grained[[:space:]]Recognition[[:space:]]and[[:space:]]Attribute[[:space:]]Prediction.pdf filter=lfs diff=lfs merge=lfs -text
123
+ Train/Dense[[:space:]]Semantic[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Objects[[:space:]]and[[:space:]]Attributes/Dense[[:space:]]Semantic[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Objects[[:space:]]and[[:space:]]Attributes-Poster.pdf filter=lfs diff=lfs merge=lfs -text
124
+ Train/Dense[[:space:]]Semantic[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Objects[[:space:]]and[[:space:]]Attributes/Dense[[:space:]]Semantic[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Objects[[:space:]]and[[:space:]]Attributes.pdf filter=lfs diff=lfs merge=lfs -text
125
+ Train/Detection[[:space:]]Bank-[[:space:]]An[[:space:]]Object[[:space:]]Detection[[:space:]]Based[[:space:]]Video[[:space:]]Representation[[:space:]]for[[:space:]]Multimedia[[:space:]]Event[[:space:]]Recognition/Detection[[:space:]]Bank-[[:space:]]An[[:space:]]Object[[:space:]]Detection[[:space:]]Based[[:space:]]Video[[:space:]]Representation[[:space:]]for[[:space:]]Multimedia[[:space:]]Event[[:space:]]Recognition-Poster.pdf filter=lfs diff=lfs merge=lfs -text
126
+ Train/Detection[[:space:]]Bank-[[:space:]]An[[:space:]]Object[[:space:]]Detection[[:space:]]Based[[:space:]]Video[[:space:]]Representation[[:space:]]for[[:space:]]Multimedia[[:space:]]Event[[:space:]]Recognition/Detection[[:space:]]Bank-[[:space:]]An[[:space:]]Object[[:space:]]Detection[[:space:]]Based[[:space:]]Video[[:space:]]Representation[[:space:]]for[[:space:]]Multimedia[[:space:]]Event[[:space:]]Recognition.pdf filter=lfs diff=lfs merge=lfs -text
127
+ Train/Difference[[:space:]]of[[:space:]]Boxes[[:space:]]Filters[[:space:]]Revisited-[[:space:]]Shadow[[:space:]]Suppression[[:space:]]and[[:space:]]Efficient[[:space:]]Character/Difference[[:space:]]of[[:space:]]Boxes[[:space:]]Filters[[:space:]]Revisited-[[:space:]]Shadow[[:space:]]Suppression[[:space:]]and[[:space:]]Efficient[[:space:]]Character-Poster.pdf filter=lfs diff=lfs merge=lfs -text
128
+ Train/Difference[[:space:]]of[[:space:]]Boxes[[:space:]]Filters[[:space:]]Revisited-[[:space:]]Shadow[[:space:]]Suppression[[:space:]]and[[:space:]]Efficient[[:space:]]Character/Difference[[:space:]]of[[:space:]]Boxes[[:space:]]Filters[[:space:]]Revisited-[[:space:]]Shadow[[:space:]]Suppression[[:space:]]and[[:space:]]Efficient[[:space:]]Character.pdf filter=lfs diff=lfs merge=lfs -text
129
+ Train/Dimension[[:space:]]Reduction[[:space:]]of[[:space:]]Network[[:space:]]Bottleneck[[:space:]]Bandwidth[[:space:]]Data[[:space:]]Space/Dimension[[:space:]]Reduction[[:space:]]of[[:space:]]Network[[:space:]]Bottleneck[[:space:]]Bandwidth[[:space:]]Data[[:space:]]Space-Poster.pdf filter=lfs diff=lfs merge=lfs -text
130
+ Train/Discriminative[[:space:]]Bayesian[[:space:]]Active[[:space:]]Shape[[:space:]]Models/Discriminative[[:space:]]Bayesian[[:space:]]Active[[:space:]]Shape[[:space:]]Models-Poster.pdf filter=lfs diff=lfs merge=lfs -text
131
+ Train/Discriminative[[:space:]]Bayesian[[:space:]]Active[[:space:]]Shape[[:space:]]Models/Discriminative[[:space:]]Bayesian[[:space:]]Active[[:space:]]Shape[[:space:]]Models.pdf filter=lfs diff=lfs merge=lfs -text
132
+ Train/Discriminative[[:space:]]Segment[[:space:]]Annotation[[:space:]]in[[:space:]]Weakly[[:space:]]Labeled[[:space:]]Video/Discriminative[[:space:]]Segment[[:space:]]Annotation[[:space:]]in[[:space:]]Weakly[[:space:]]Labeled[[:space:]]Video-Poster.pdf filter=lfs diff=lfs merge=lfs -text
133
+ Train/Discriminative[[:space:]]Segment[[:space:]]Annotation[[:space:]]in[[:space:]]Weakly[[:space:]]Labeled[[:space:]]Video/Discriminative[[:space:]]Segment[[:space:]]Annotation[[:space:]]in[[:space:]]Weakly[[:space:]]Labeled[[:space:]]Video.pdf filter=lfs diff=lfs merge=lfs -text
134
+ Train/Display[[:space:]]type[[:space:]]effects[[:space:]]in[[:space:]]military[[:space:]]operational[[:space:]]tasks[[:space:]]using[[:space:]]Unmanned[[:space:]]Vehicle[[:space:]]UV[[:space:]]video[[:space:]]images[[:space:]]Comparison[[:space:]]between[[:space:]]color[[:space:]]and[[:space:]]BW[[:space:]]video[[:space:]]feeds/Display[[:space:]]type[[:space:]]effects[[:space:]]in[[:space:]]military[[:space:]]operational[[:space:]]tasks[[:space:]]using[[:space:]]Unmanned[[:space:]]Vehicle[[:space:]]UV[[:space:]]video[[:space:]]images[[:space:]]Comparison[[:space:]]between[[:space:]]color[[:space:]]and[[:space:]]BW[[:space:]]video[[:space:]]feeds-Poster.pdf filter=lfs diff=lfs merge=lfs -text
135
+ Train/Display[[:space:]]type[[:space:]]effects[[:space:]]in[[:space:]]military[[:space:]]operational[[:space:]]tasks[[:space:]]using[[:space:]]Unmanned[[:space:]]Vehicle[[:space:]]UV[[:space:]]video[[:space:]]images[[:space:]]Comparison[[:space:]]between[[:space:]]color[[:space:]]and[[:space:]]BW[[:space:]]video[[:space:]]feeds/Display[[:space:]]type[[:space:]]effects[[:space:]]in[[:space:]]military[[:space:]]operational[[:space:]]tasks[[:space:]]using[[:space:]]Unmanned[[:space:]]Vehicle[[:space:]]UV[[:space:]]video[[:space:]]images[[:space:]]Comparison[[:space:]]between[[:space:]]color[[:space:]]and[[:space:]]BW[[:space:]]video[[:space:]]feeds.pdf filter=lfs diff=lfs merge=lfs -text
136
+ Train/Diverse[[:space:]]Sequential[[:space:]]Subset[[:space:]]Selection[[:space:]]for[[:space:]]Supervised[[:space:]]Video[[:space:]]Summarization/Diverse[[:space:]]Sequential[[:space:]]Subset[[:space:]]Selection[[:space:]]for[[:space:]]Supervised[[:space:]]Video[[:space:]]Summarization-Poster.pdf filter=lfs diff=lfs merge=lfs -text
137
+ Train/Diverse[[:space:]]Sequential[[:space:]]Subset[[:space:]]Selection[[:space:]]for[[:space:]]Supervised[[:space:]]Video[[:space:]]Summarization/Diverse[[:space:]]Sequential[[:space:]]Subset[[:space:]]Selection[[:space:]]for[[:space:]]Supervised[[:space:]]Video[[:space:]]Summarization.pdf filter=lfs diff=lfs merge=lfs -text
138
+ Train/Domain[[:space:]]Generalization[[:space:]]via[[:space:]]Invariant[[:space:]]Feature[[:space:]]Representation/Domain[[:space:]]Generalization[[:space:]]via[[:space:]]Invariant[[:space:]]Feature[[:space:]]Representation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
139
+ Train/Domain[[:space:]]Generalization[[:space:]]via[[:space:]]Invariant[[:space:]]Feature[[:space:]]Representation/Domain[[:space:]]Generalization[[:space:]]via[[:space:]]Invariant[[:space:]]Feature[[:space:]]Representation.pdf filter=lfs diff=lfs merge=lfs -text
140
+ Train/ExScal[[:space:]]Backbone[[:space:]]Network[[:space:]]Architecture/ExScal[[:space:]]Backbone[[:space:]]Network[[:space:]]Architecture-Poster.pdf filter=lfs diff=lfs merge=lfs -text
141
+ Train/ExScal[[:space:]]Backbone[[:space:]]Network[[:space:]]Architecture/ExScal[[:space:]]Backbone[[:space:]]Network[[:space:]]Architecture.pdf filter=lfs diff=lfs merge=lfs -text
142
+ Train/Exploiting[[:space:]]Database[[:space:]]Similarity[[:space:]]Joins[[:space:]]for[[:space:]]Metric[[:space:]]Spaces/Exploiting[[:space:]]Database[[:space:]]Similarity[[:space:]]Joins[[:space:]]for[[:space:]]Metric[[:space:]]Spaces-Poster.pdf filter=lfs diff=lfs merge=lfs -text
143
+ Train/Exploiting[[:space:]]Database[[:space:]]Similarity[[:space:]]Joins[[:space:]]for[[:space:]]Metric[[:space:]]Spaces/Exploiting[[:space:]]Database[[:space:]]Similarity[[:space:]]Joins[[:space:]]for[[:space:]]Metric[[:space:]]Spaces.pdf filter=lfs diff=lfs merge=lfs -text
144
+ Train/Expression-Invariant[[:space:]]Face[[:space:]]Recognition[[:space:]]with[[:space:]]Expression[[:space:]]Classification/Expression-Invariant[[:space:]]Face[[:space:]]Recognition[[:space:]]with[[:space:]]Expression[[:space:]]Classification-Poster.pdf filter=lfs diff=lfs merge=lfs -text
145
+ Train/Expression-Invariant[[:space:]]Face[[:space:]]Recognition[[:space:]]with[[:space:]]Expression[[:space:]]Classification/Expression-Invariant[[:space:]]Face[[:space:]]Recognition[[:space:]]with[[:space:]]Expression[[:space:]]Classification.pdf filter=lfs diff=lfs merge=lfs -text
146
+ Train/Extracting[[:space:]]Logical[[:space:]]Structure[[:space:]]and[[:space:]]Identifying[[:space:]]Stragglers[[:space:]]in[[:space:]]Parallel[[:space:]]Execution[[:space:]]Traces/Extracting[[:space:]]Logical[[:space:]]Structure[[:space:]]and[[:space:]]Identifying[[:space:]]Stragglers[[:space:]]in[[:space:]]Parallel[[:space:]]Execution[[:space:]]Traces-Poster.pdf filter=lfs diff=lfs merge=lfs -text
147
+ Train/Extracting[[:space:]]Logical[[:space:]]Structure[[:space:]]and[[:space:]]Identifying[[:space:]]Stragglers[[:space:]]in[[:space:]]Parallel[[:space:]]Execution[[:space:]]Traces/Extracting[[:space:]]Logical[[:space:]]Structure[[:space:]]and[[:space:]]Identifying[[:space:]]Stragglers[[:space:]]in[[:space:]]Parallel[[:space:]]Execution[[:space:]]Traces.pdf filter=lfs diff=lfs merge=lfs -text
148
+ Train/Face[[:space:]]Spoofing[[:space:]]Detection[[:space:]]through/Face[[:space:]]Spoofing[[:space:]]Detection[[:space:]]through-Poster.pdf filter=lfs diff=lfs merge=lfs -text
149
+ Train/Face[[:space:]]Spoofing[[:space:]]Detection[[:space:]]through/Face[[:space:]]Spoofing[[:space:]]Detection[[:space:]]through.pdf filter=lfs diff=lfs merge=lfs -text
150
+ Train/FaceTracer-[[:space:]]A[[:space:]]Search[[:space:]]Engine[[:space:]]for[[:space:]]Large[[:space:]]Collections[[:space:]]of[[:space:]]Images[[:space:]]with[[:space:]]Faces/FaceTracer-[[:space:]]A[[:space:]]Search[[:space:]]Engine[[:space:]]for[[:space:]]Large[[:space:]]Collections[[:space:]]of[[:space:]]Images[[:space:]]with[[:space:]]Faces-Poster.pdf filter=lfs diff=lfs merge=lfs -text
151
+ Train/FaceTracer-[[:space:]]A[[:space:]]Search[[:space:]]Engine[[:space:]]for[[:space:]]Large[[:space:]]Collections[[:space:]]of[[:space:]]Images[[:space:]]with[[:space:]]Faces/FaceTracer-[[:space:]]A[[:space:]]Search[[:space:]]Engine[[:space:]]for[[:space:]]Large[[:space:]]Collections[[:space:]]of[[:space:]]Images[[:space:]]with[[:space:]]Faces.pdf filter=lfs diff=lfs merge=lfs -text
152
+ Train/Facebully[[:space:]]Towards[[:space:]]the[[:space:]]Identification[[:space:]]of[[:space:]]Cyberbullying[[:space:]]in[[:space:]]Facebook/Facebully[[:space:]]Towards[[:space:]]the[[:space:]]Identification[[:space:]]of[[:space:]]Cyberbullying[[:space:]]in[[:space:]]Facebook-Poster.pdf filter=lfs diff=lfs merge=lfs -text
153
+ Train/Facebully[[:space:]]Towards[[:space:]]the[[:space:]]Identification[[:space:]]of[[:space:]]Cyberbullying[[:space:]]in[[:space:]]Facebook/Facebully[[:space:]]Towards[[:space:]]the[[:space:]]Identification[[:space:]]of[[:space:]]Cyberbullying[[:space:]]in[[:space:]]Facebook.pdf filter=lfs diff=lfs merge=lfs -text
154
+ Train/Feature[[:space:]]Construction[[:space:]]for[[:space:]]Inverse[[:space:]]Reinforcement[[:space:]]Learning/Feature[[:space:]]Construction[[:space:]]for[[:space:]]Inverse[[:space:]]Reinforcement[[:space:]]Learning-Poster.pdf filter=lfs diff=lfs merge=lfs -text
155
+ Train/Feature[[:space:]]Construction[[:space:]]for[[:space:]]Inverse[[:space:]]Reinforcement[[:space:]]Learning/Feature[[:space:]]Construction[[:space:]]for[[:space:]]Inverse[[:space:]]Reinforcement[[:space:]]Learning.pdf filter=lfs diff=lfs merge=lfs -text
156
+ Train/Feature-based[[:space:]]Part[[:space:]]Retrieval[[:space:]]for[[:space:]]Interactive[[:space:]]3D[[:space:]]Reassembly/Feature-based[[:space:]]Part[[:space:]]Retrieval[[:space:]]for[[:space:]]Interactive[[:space:]]3D[[:space:]]Reassembly.pdf filter=lfs diff=lfs merge=lfs -text
157
+ Train/Finding[[:space:]]Things-[[:space:]]Image[[:space:]]Parsing[[:space:]]with[[:space:]]Regions[[:space:]]and[[:space:]]Per-Exemplar[[:space:]]Detectors/Finding[[:space:]]Things-[[:space:]]Image[[:space:]]Parsing[[:space:]]with[[:space:]]Regions[[:space:]]and[[:space:]]Per-Exemplar[[:space:]]Detectors-Poster.pdf filter=lfs diff=lfs merge=lfs -text
158
+ Train/Finding[[:space:]]Things-[[:space:]]Image[[:space:]]Parsing[[:space:]]with[[:space:]]Regions[[:space:]]and[[:space:]]Per-Exemplar[[:space:]]Detectors/Finding[[:space:]]Things-[[:space:]]Image[[:space:]]Parsing[[:space:]]with[[:space:]]Regions[[:space:]]and[[:space:]]Per-Exemplar[[:space:]]Detectors.pdf filter=lfs diff=lfs merge=lfs -text
159
+ Train/Fine-Grained[[:space:]]Visual[[:space:]]Comparisons[[:space:]]with[[:space:]]Local[[:space:]]Learning/Fine-Grained[[:space:]]Visual[[:space:]]Comparisons[[:space:]]with[[:space:]]Local[[:space:]]Learning-Poster.pdf filter=lfs diff=lfs merge=lfs -text
160
+ Train/Fine-Grained[[:space:]]Visual[[:space:]]Comparisons[[:space:]]with[[:space:]]Local[[:space:]]Learning/Fine-Grained[[:space:]]Visual[[:space:]]Comparisons[[:space:]]with[[:space:]]Local[[:space:]]Learning.pdf filter=lfs diff=lfs merge=lfs -text
161
+ Train/Free[[:space:]]your[[:space:]]Camera-[[:space:]]3D[[:space:]]Indoor[[:space:]]Scene[[:space:]]Understanding[[:space:]]from[[:space:]]Arbitrary[[:space:]]Camera[[:space:]]Motion/Free[[:space:]]your[[:space:]]Camera-[[:space:]]3D[[:space:]]Indoor[[:space:]]Scene[[:space:]]Understanding[[:space:]]from[[:space:]]Arbitrary[[:space:]]Camera[[:space:]]Motion-Poster.pdf filter=lfs diff=lfs merge=lfs -text
162
+ Train/Free[[:space:]]your[[:space:]]Camera-[[:space:]]3D[[:space:]]Indoor[[:space:]]Scene[[:space:]]Understanding[[:space:]]from[[:space:]]Arbitrary[[:space:]]Camera[[:space:]]Motion/Free[[:space:]]your[[:space:]]Camera-[[:space:]]3D[[:space:]]Indoor[[:space:]]Scene[[:space:]]Understanding[[:space:]]from[[:space:]]Arbitrary[[:space:]]Camera[[:space:]]Motion.pdf filter=lfs diff=lfs merge=lfs -text
163
+ Train/Graph-Based[[:space:]]Discriminative[[:space:]]Learning[[:space:]]for[[:space:]]Location[[:space:]]Recognition/Graph-Based[[:space:]]Discriminative[[:space:]]Learning[[:space:]]for[[:space:]]Location[[:space:]]Recognition-Poster.pdf filter=lfs diff=lfs merge=lfs -text
164
+ Train/Graph-Based[[:space:]]Discriminative[[:space:]]Learning[[:space:]]for[[:space:]]Location[[:space:]]Recognition/Graph-Based[[:space:]]Discriminative[[:space:]]Learning[[:space:]]for[[:space:]]Location[[:space:]]Recognition.pdf filter=lfs diff=lfs merge=lfs -text
165
+ Train/GraphTrack[[:space:]]Fast[[:space:]]and[[:space:]]Globally[[:space:]]Optimal[[:space:]]Tracking[[:space:]]in[[:space:]]Videos/GraphTrack[[:space:]]Fast[[:space:]]and[[:space:]]Globally[[:space:]]Optimal[[:space:]]Tracking[[:space:]]in[[:space:]]Videos-Poster.pdf filter=lfs diff=lfs merge=lfs -text
166
+ Train/GraphTrack[[:space:]]Fast[[:space:]]and[[:space:]]Globally[[:space:]]Optimal[[:space:]]Tracking[[:space:]]in[[:space:]]Videos/GraphTrack[[:space:]]Fast[[:space:]]and[[:space:]]Globally[[:space:]]Optimal[[:space:]]Tracking[[:space:]]in[[:space:]]Videos.pdf filter=lfs diff=lfs merge=lfs -text
167
+ Train/Hierarchical[[:space:]]Qualitative[[:space:]]Color[[:space:]]Palettes/Hierarchical[[:space:]]Qualitative[[:space:]]Color[[:space:]]Palettes-Poster.pdf filter=lfs diff=lfs merge=lfs -text
168
+ Train/Hierarchical[[:space:]]Qualitative[[:space:]]Color[[:space:]]Palettes/Hierarchical[[:space:]]Qualitative[[:space:]]Color[[:space:]]Palettes.pdf filter=lfs diff=lfs merge=lfs -text
169
+ Train/History[[:space:]]Dependent[[:space:]]Domain[[:space:]]Adaptation/History[[:space:]]Dependent[[:space:]]Domain[[:space:]]Adaptation-Poster.pdf filter=lfs diff=lfs merge=lfs -text
170
+ Train/History[[:space:]]Dependent[[:space:]]Domain[[:space:]]Adaptation/History[[:space:]]Dependent[[:space:]]Domain[[:space:]]Adaptation.pdf filter=lfs diff=lfs merge=lfs -text
171
+ Train/Hyperspectral[[:space:]]Imaging[[:space:]]for[[:space:]]Ink[[:space:]]Mismatch[[:space:]]Detection/Hyperspectral[[:space:]]Imaging[[:space:]]for[[:space:]]Ink[[:space:]]Mismatch[[:space:]]Detection-Poster.pdf filter=lfs diff=lfs merge=lfs -text
172
+ Train/Hyperspectral[[:space:]]Imaging[[:space:]]for[[:space:]]Ink[[:space:]]Mismatch[[:space:]]Detection/Hyperspectral[[:space:]]Imaging[[:space:]]for[[:space:]]Ink[[:space:]]Mismatch[[:space:]]Detection.pdf filter=lfs diff=lfs merge=lfs -text
173
+ Train/ICCV_2013_001/ICCV_2013_001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
174
+ Train/ICCV_2013_001/ICCV_2013_001.pdf filter=lfs diff=lfs merge=lfs -text
175
+ Train/Learning[[:space:]]People[[:space:]]Detection[[:space:]]Models[[:space:]]from[[:space:]]Few[[:space:]]Training[[:space:]]Samples/Learning[[:space:]]People[[:space:]]Detection[[:space:]]Models[[:space:]]from[[:space:]]Few[[:space:]]Training[[:space:]]Samples-Poster.pdf filter=lfs diff=lfs merge=lfs -text
176
+ Train/Learning[[:space:]]People[[:space:]]Detection[[:space:]]Models[[:space:]]from[[:space:]]Few[[:space:]]Training[[:space:]]Samples/Learning[[:space:]]People[[:space:]]Detection[[:space:]]Models[[:space:]]from[[:space:]]Few[[:space:]]Training[[:space:]]Samples.pdf filter=lfs diff=lfs merge=lfs -text
177
+ Train/Leveraging[[:space:]]High[[:space:]]Performance[[:space:]]Computation[[:space:]]for[[:space:]]Statistical[[:space:]]Wind[[:space:]]Prediction/Leveraging[[:space:]]High[[:space:]]Performance[[:space:]]Computation[[:space:]]for[[:space:]]Statistical[[:space:]]Wind[[:space:]]Prediction-Poster.pdf filter=lfs diff=lfs merge=lfs -text
178
+ Train/Leveraging[[:space:]]High[[:space:]]Performance[[:space:]]Computation[[:space:]]for[[:space:]]Statistical[[:space:]]Wind[[:space:]]Prediction/Leveraging[[:space:]]High[[:space:]]Performance[[:space:]]Computation[[:space:]]for[[:space:]]Statistical[[:space:]]Wind[[:space:]]Prediction.pdf filter=lfs diff=lfs merge=lfs -text
179
+ Train/Low[[:space:]]Overhead[[:space:]]Concurrency[[:space:]]Control[[:space:]]for[[:space:]]Partitioned[[:space:]]Main[[:space:]]Memory[[:space:]]Databases/Low[[:space:]]Overhead[[:space:]]Concurrency[[:space:]]Control[[:space:]]for[[:space:]]Partitioned[[:space:]]Main[[:space:]]Memory[[:space:]]Databases-Poster.pdf filter=lfs diff=lfs merge=lfs -text
180
+ Train/Low[[:space:]]Overhead[[:space:]]Concurrency[[:space:]]Control[[:space:]]for[[:space:]]Partitioned[[:space:]]Main[[:space:]]Memory[[:space:]]Databases/Low[[:space:]]Overhead[[:space:]]Concurrency[[:space:]]Control[[:space:]]for[[:space:]]Partitioned[[:space:]]Main[[:space:]]Memory[[:space:]]Databases.pdf filter=lfs diff=lfs merge=lfs -text
181
+ Train/MatchMiner-[[:space:]]Efficient[[:space:]]Spanning[[:space:]]Structure[[:space:]]Mining[[:space:]]in[[:space:]]Large[[:space:]]Image[[:space:]]Collections/MatchMiner-[[:space:]]Efficient[[:space:]]Spanning[[:space:]]Structure[[:space:]]Mining[[:space:]]in[[:space:]]Large[[:space:]]Image[[:space:]]Collections-Poster.pdf filter=lfs diff=lfs merge=lfs -text
182
+ Train/MatchMiner-[[:space:]]Efficient[[:space:]]Spanning[[:space:]]Structure[[:space:]]Mining[[:space:]]in[[:space:]]Large[[:space:]]Image[[:space:]]Collections/MatchMiner-[[:space:]]Efficient[[:space:]]Spanning[[:space:]]Structure[[:space:]]Mining[[:space:]]in[[:space:]]Large[[:space:]]Image[[:space:]]Collections.pdf filter=lfs diff=lfs merge=lfs -text
183
+ Train/Memorability[[:space:]]of[[:space:]]natural[[:space:]]scene-[[:space:]]the[[:space:]]role[[:space:]]of[[:space:]]attention/Memorability[[:space:]]of[[:space:]]natural[[:space:]]scene-[[:space:]]the[[:space:]]role[[:space:]]of[[:space:]]attention-Poster.pdf filter=lfs diff=lfs merge=lfs -text
184
+ Train/Memorability[[:space:]]of[[:space:]]natural[[:space:]]scene-[[:space:]]the[[:space:]]role[[:space:]]of[[:space:]]attention/Memorability[[:space:]]of[[:space:]]natural[[:space:]]scene-[[:space:]]the[[:space:]]role[[:space:]]of[[:space:]]attention.pdf filter=lfs diff=lfs merge=lfs -text
185
+ Train/Modeling[[:space:]]skin[[:space:]]and[[:space:]]ageing[[:space:]]phenotypes[[:space:]]using[[:space:]]latent[[:space:]]variable[[:space:]]models[[:space:]]in[[:space:]]Infer.NET/Modeling[[:space:]]skin[[:space:]]and[[:space:]]ageing[[:space:]]phenotypes[[:space:]]using[[:space:]]latent[[:space:]]variable[[:space:]]models[[:space:]]in[[:space:]]Infer.NET-Poster.pdf filter=lfs diff=lfs merge=lfs -text
186
+ Train/Modeling[[:space:]]skin[[:space:]]and[[:space:]]ageing[[:space:]]phenotypes[[:space:]]using[[:space:]]latent[[:space:]]variable[[:space:]]models[[:space:]]in[[:space:]]Infer.NET/Modeling[[:space:]]skin[[:space:]]and[[:space:]]ageing[[:space:]]phenotypes[[:space:]]using[[:space:]]latent[[:space:]]variable[[:space:]]models[[:space:]]in[[:space:]]Infer.NET.pdf filter=lfs diff=lfs merge=lfs -text
187
+ Train/Mortal[[:space:]]Multi-Armed[[:space:]]Bandits/Mortal[[:space:]]Multi-Armed[[:space:]]Bandits-Poster.pdf filter=lfs diff=lfs merge=lfs -text
188
+ Train/Mortal[[:space:]]Multi-Armed[[:space:]]Bandits/Mortal[[:space:]]Multi-Armed[[:space:]]Bandits.pdf filter=lfs diff=lfs merge=lfs -text
189
+ Train/NMF-KNN-[[:space:]]Image[[:space:]]Annotation[[:space:]]using[[:space:]]Weighted[[:space:]]Multi-view[[:space:]]Non-negative[[:space:]]Matrix[[:space:]]Factorization/NMF-KNN-[[:space:]]Image[[:space:]]Annotation[[:space:]]using[[:space:]]Weighted[[:space:]]Multi-view[[:space:]]Non-negative[[:space:]]Matrix[[:space:]]Factorization-Poster.pdf filter=lfs diff=lfs merge=lfs -text
190
+ Train/NMF-KNN-[[:space:]]Image[[:space:]]Annotation[[:space:]]using[[:space:]]Weighted[[:space:]]Multi-view[[:space:]]Non-negative[[:space:]]Matrix[[:space:]]Factorization/NMF-KNN-[[:space:]]Image[[:space:]]Annotation[[:space:]]using[[:space:]]Weighted[[:space:]]Multi-view[[:space:]]Non-negative[[:space:]]Matrix[[:space:]]Factorization.pdf filter=lfs diff=lfs merge=lfs -text
191
+ Train/Play[[:space:]]Type[[:space:]]Recognition[[:space:]]in[[:space:]]Real-World[[:space:]]Football[[:space:]]Video/Play[[:space:]]Type[[:space:]]Recognition[[:space:]]in[[:space:]]Real-World[[:space:]]Football[[:space:]]Video-Poster.pdf filter=lfs diff=lfs merge=lfs -text
192
+ Train/Play[[:space:]]Type[[:space:]]Recognition[[:space:]]in[[:space:]]Real-World[[:space:]]Football[[:space:]]Video/Play[[:space:]]Type[[:space:]]Recognition[[:space:]]in[[:space:]]Real-World[[:space:]]Football[[:space:]]Video.pdf filter=lfs diff=lfs merge=lfs -text
193
+ Train/bmvc-2013-031/bmvc-2013-031-Poster.pdf filter=lfs diff=lfs merge=lfs -text
194
+ Train/bmvc-2013-031/bmvc-2013-031.pdf filter=lfs diff=lfs merge=lfs -text
195
+ Train/cvpr-2012-002/cvpr-2012-002-Poster.pdf filter=lfs diff=lfs merge=lfs -text
196
+ Train/cvpr-2012-002/cvpr-2012-002.pdf filter=lfs diff=lfs merge=lfs -text
197
+ Train/cvpr-2012-004/cvpr-2012-004-Poster.pdf filter=lfs diff=lfs merge=lfs -text
198
+ Train/cvpr-2012-004/cvpr-2012-004.pdf filter=lfs diff=lfs merge=lfs -text
199
+ Train/cvpr-2013-005/cvpr-2013-005-Poster.pdf filter=lfs diff=lfs merge=lfs -text
200
+ Train/cvpr-2013-005/cvpr-2013-005.pdf filter=lfs diff=lfs merge=lfs -text
201
+ Train/cvpr-2013-007/cvpr-2013-007-Poster.pdf filter=lfs diff=lfs merge=lfs -text
202
+ Train/cvpr-2013-007/cvpr-2013-007.pdf filter=lfs diff=lfs merge=lfs -text
203
+ Train/cvpr-2013-008/cvpr-2013-008-Poster.pdf filter=lfs diff=lfs merge=lfs -text
204
+ Train/cvpr-2013-008/cvpr-2013-008.pdf filter=lfs diff=lfs merge=lfs -text
205
+ Train/cvpr-2013-010/cvpr-2013-010-Poster.pdf filter=lfs diff=lfs merge=lfs -text
206
+ Train/cvpr-2013-010/cvpr-2013-010.pdf filter=lfs diff=lfs merge=lfs -text
207
+ Train/cvpr-2013-012/cvpr-2013-012-Poster.pdf filter=lfs diff=lfs merge=lfs -text
208
+ Train/cvpr-2013-012/cvpr-2013-012.pdf filter=lfs diff=lfs merge=lfs -text
209
+ Train/cvpr-2013-014/cvpr-2013-014-Poster.pdf filter=lfs diff=lfs merge=lfs -text
210
+ Train/cvpr-2013-014/cvpr-2013-014.pdf filter=lfs diff=lfs merge=lfs -text
211
+ Train/cvpr-2013-016/cvpr-2013-016-Poster.pdf filter=lfs diff=lfs merge=lfs -text
212
+ Train/cvpr-2013-016/cvpr-2013-016.pdf filter=lfs diff=lfs merge=lfs -text
213
+ Train/cvpr-2013-028/cvpr-2013-028-Poster.pdf filter=lfs diff=lfs merge=lfs -text
214
+ Train/cvpr-2013-028/cvpr-2013-028.pdf filter=lfs diff=lfs merge=lfs -text
215
+ Train/cvpr-2013-029/cvpr-2013-029-Poster.pdf filter=lfs diff=lfs merge=lfs -text
216
+ Train/cvpr-2013-029/cvpr-2013-029.pdf filter=lfs diff=lfs merge=lfs -text
217
+ Train/cvpr-2014-002/cvpr-2014-002-Poster.pdf filter=lfs diff=lfs merge=lfs -text
218
+ Train/cvpr-2014-002/cvpr-2014-002.pdf filter=lfs diff=lfs merge=lfs -text
219
+ Train/cvpr-2014-003/cvpr-2014-003-Poster.pdf filter=lfs diff=lfs merge=lfs -text
220
+ Train/cvpr-2014-003/cvpr-2014-003.pdf filter=lfs diff=lfs merge=lfs -text
221
+ Train/eccv-2012-001/eccv-2012-001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
222
+ Train/eccv-2012-001/eccv-2012-001.pdf filter=lfs diff=lfs merge=lfs -text
223
+ Train/iccv-2013-002/iccv-2013-002-Poster.pdf filter=lfs diff=lfs merge=lfs -text
224
+ Train/iccv-2013-002/iccv-2013-002.pdf filter=lfs diff=lfs merge=lfs -text
225
+ Train/ijcb-2011-001/ijcb-2011-001-Poster.pdf filter=lfs diff=lfs merge=lfs -text
226
+ Train/ijcb-2011-001/ijcb-2011-001.pdf filter=lfs diff=lfs merge=lfs -text
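A note on the patterns above: .gitattributes fields are separated by whitespace, so a tracked path that itself contains spaces must have each space escaped, which is why the folder names appear with the POSIX character class [[:space:]]. A minimal sketch of that escaping (a hypothetical helper for illustration, not huggingface_hub's actual code):

```python
# Minimal sketch (hypothetical helper, not huggingface_hub's actual code):
# .gitattributes fields are whitespace-separated, so literal spaces in a
# tracked path are escaped as the POSIX character class [[:space:]].
def gitattributes_pattern(path: str) -> str:
    return path.replace(" ", "[[:space:]]")

# Reproduces the rule for one of the files added above.
print(gitattributes_pattern("Train/Decision Tree Fields/Decision Tree Fields.pdf")
      + " filter=lfs diff=lfs merge=lfs -text")
```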
Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff558fc0fa7c8d3dba331efa4da97ab02e63fa95c6c64149401a6ebacaa85425
3
+ size 1308425
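What got committed for each PDF is not the PDF itself but a three-line Git LFS pointer like the one above: the pointer spec version, a sha256 oid of the real content, and its size in bytes. A minimal sketch of reading those fields, assuming the pointer text has already been loaded as a string:

```python
# Minimal sketch: parse the three-line Git LFS pointer format shown above.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # size of the real file in bytes
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:ff558fc0fa7c8d3dba331efa4da97ab02e63fa95c6c64149401a6ebacaa85425\n"
    "size 1308425\n"
)
print(parse_lfs_pointer(pointer))  # version, oid, and size fields as a dict
```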
Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:98f2b5053f2dd14ec2136beab9ba3becfe775584426c3801f8639c7801c48494
3
+ size 1263646
Test/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.xml ADDED
@@ -0,0 +1,61 @@
1
+ <Poster Width="685" Height="913">
2
+ <Panel left="4" right="143" width="297" height="212">
3
+ <Text>Introduction</Text>
4
+ <Text>Scientific methodology in the database field can</Text>
5
+ <Text>provide a deep understanding of DBMS query</Text>
6
+ <Text>optimizers, for better engineered designs.</Text>
7
+ <Text>Few DBMS-centric labs are available for scientific</Text>
8
+ <Text>investigation; prior labs have focused on networks</Text>
9
+ <Text>and smartphones.</Text>
10
+ </Panel>
11
+
12
+ <Panel left="5" right="368" width="297" height="273">
13
+ <Text>AZDBLAB (AriZona DataBase Laboratory)</Text>
14
+ <Text>Has been in development for seven years.</Text>
15
+ <Text>Assists database researchers to conduct large-</Text>
16
+ <Text>scale empirical studies across multiple DBMSes.</Text>
17
+ <Text>Runs massive experiments with thousands or</Text>
18
+ <Text>millions of queries on multiple DBMSes.</Text>
19
+ <Text>Supports as experiment subjects seven relational</Text>
20
+ <Text>DBMSes supporting SQL and JDBC.</Text>
21
+ <Text>Provides robustness to collect data over 8,277</Text>
22
+ <Text>hours running about 2.4 million query executions.</Text>
23
+ <Text>Conducts automated analyses on multiple query</Text>
24
+ <Text>execution runs.</Text>
25
+ </Panel>
26
+
27
+ <Panel left="6" right="653" width="296" height="204">
28
+ <Text>Contributions</Text>
29
+ <Text>Novel research infrastructure, dedicated for large-</Text>
30
+ <Text>scale empirical DBMS studies</Text>
31
+ <Text>Seamless data provenance support</Text>
32
+ <Text>Several decentralized monitoring schemes: phone</Text>
33
+ <Text>apps, web apps, and watcher</Text>
34
+ <Text>Reusable GUI</Text>
35
+ <Text>Extensibility through a variety of plugins: labshelf,</Text>
36
+ <Text>analysis, experiment subject, and scenario</Text>
37
+ </Panel>
38
+
39
+ <Panel left="316" right="145" width="354" height="285">
40
+ <Text>AZDBLAB Architecture</Text>
41
+ <Figure left="320" right="182" width="346" height="243" no="1" OriWidth="0.383237" OriHeight="0.208222
42
+ " />
43
+ </Panel>
44
+
45
+ <Panel left="315" right="439" width="355" height="419">
46
+ <Text>Demonstration</Text>
47
+ <Text>Step 1: Choose a labshelf, add a user, and create a notebook,</Text>
48
+ <Text>a paper, and a study in the paper on the Observer GUI.</Text>
49
+ <Text>Step 2: Load an experiment specification into the notebook.</Text>
50
+ <Text>Step 3: Schedule an experiment run on a particular DBMS.</Text>
51
+ <Text>Step 4: Monitor the run status via Observer, a web app, and a</Text>
52
+ <Text>mobile app, and wait for the experiment to be done.</Text>
53
+ <Text>Step 5: Add the completed experiment run to the study and</Text>
54
+ <Text>conduct a timing protocol analysis for the study.</Text>
55
+ <Text>Step 6: Produce LaTeX/PDF documents containing the analysis</Text>
56
+ <Text>results.</Text>
57
+ <Figure left="321" right="639" width="345" height="212" no="2" OriWidth="0.791329" OriHeight="0.365952
58
+ " />
59
+ </Panel>
60
+
61
+ </Poster>
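The info.xml added above encodes the poster layout: a Poster root with pixel dimensions, Panel elements with bounding-box attributes (left, right, width, height), and Text / Figure children. A minimal sketch of walking one of these annotation files with the standard library (the file path is an example):

```python
# Minimal sketch: walk a poster annotation shaped like the info.xml above.
import xml.etree.ElementTree as ET

root = ET.parse("info.xml").getroot()  # <Poster Width="685" Height="913">
print("poster:", root.get("Width"), "x", root.get("Height"))
for panel in root.iter("Panel"):
    box = {k: panel.get(k) for k in ("left", "right", "width", "height")}
    lines = [t.text or "" for t in panel.findall("Text")]
    figures = panel.findall("Figure")
    print(box, f"{len(figures)} figure(s)", "|", " ".join(lines)[:60])
```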
Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c62adf9843902a57c6aa13f0b467cdb3ce3d999df1a488d0d3ca4d5c2e80922
3
+ size 11979849
Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:57b5d3fcdc8006277b1033f6d55b379dee4253f49c1ebf133e740f13edd51fd3
3
+ size 4671926
Test/Camera Motion and Surrounding Scene Appearance as Context for Action Recognition/info.txt ADDED
@@ -0,0 +1,119 @@
1
+ <Poster Width="686" Height="1033">
2
+ <Panel left="24" right="158" width="314" height="204">
3
+ <Text>SUMMARY!</Text>
4
+ <Text>This work introduces a framework for recognizing human actions by</Text>
5
+ <Text>incorporating a new set of visual cues that represent the context of the</Text>
6
+ <Text>action:!</Text>
7
+ <Figure left="45" right="224" width="269" height="135" no="1" OriWidth="0.534104" OriHeight="0.204647
8
+ " />
9
+ </Panel>
10
+
11
+ <Panel left="24" right="362" width="315" height="58">
12
+ <Text>CONTRIBUTIONS!</Text>
13
+ <Text>Weak foreground-background segmentation approach.</Text>
14
+ <Text>• Study of the global camera motion as a cue for action recognition.</Text>
15
+ <Text>Incorporating appearance from static background.</Text>
16
+ </Panel>
17
+
18
+ <Panel left="23" right="422" width="317" height="553">
19
+ <Text>METHODOLOGY!</Text>
20
+ <Text>This work follows the conventional action recognition pipeline. Given a set of</Text>
21
+ <Text>labeled videos, a set of features is extracted from each video, represented</Text>
22
+ <Text>using visual descriptors, and combined into a single video descriptor used to</Text>
23
+ <Text>train a multi-class classifier for action recognition.!</Text>
24
+ <Figure left="26" right="495" width="307" height="116" no="2" OriWidth="0.53815" OriHeight="0.15773
25
+ " />
26
+ <Text>• Foreground-background separation: Assuming that a background</Text>
27
+ <Text>trajectory produces a small frame-to-frame displacement, we associate a</Text>
28
+ <Text>trajectory with the background if the overall displacement is no more than three</Text>
29
+ <Text>pixels (see the sketch after this panel).</Text>
30
+ <Figure left="36" right="667" width="292" height="106" no="3" OriWidth="0.53526" OriHeight="0.147453
31
+ " />
32
+ <Text>• Global camera motion: We argue and show that the relationship between</Text>
33
+ <Text>an estimated camera motion and underlying action can be a useful cue for</Text>
34
+ <Text>discriminating certain action classes. As illustrated in the figure below, there</Text>
35
+ <Text>is a correlation between how the camera moves and the actor.!</Text>
36
+ <Figure left="53" right="836" width="252" height="135" no="4" OriWidth="0.534104" OriHeight="0.220286
37
+ " />
38
+ </Panel>
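A minimal sketch of the displacement-based trajectory split described in this panel; the three-pixel threshold is the value quoted above, while the trajectory format and function name are illustrative assumptions:

    import numpy as np

    def split_trajectories(trajectories, threshold=3.0):
        """Split dense trajectories by overall displacement.

        trajectories: list of (T, 2) arrays of per-frame (x, y) point positions.
        A trajectory whose overall displacement is no more than `threshold`
        pixels is associated with the static background; the rest with the
        moving foreground (the actor).
        """
        foreground, background = [], []
        for traj in trajectories:
            displacement = np.linalg.norm(traj[-1] - traj[0])
            (background if displacement <= threshold else foreground).append(traj)
        return foreground, background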
39
+
40
+ <Panel left="346" right="159" width="316" height="299">
41
+ <Text>• Background-context appearance: Beyond local motion and appearance</Text>
42
+ <Text>properties, the surrounding in which an action is performed is a critical</Text>
43
+ <Text>component to recognize actions. As Figure below illustrates, the background</Text>
44
+ <Text>appearance plays an important role to discriminate the action Drumming in</Text>
45
+ <Text>the sense that the drummer needs a drum set to perform the action.!</Text>
46
+ <Figure left="375" right="232" width="256" height="117" no="5" OriWidth="0.531792" OriHeight="0.18588
47
+ " />
48
+ <Text>• Implementation details: We follow two different Bag Of Feature</Text>
49
+ <Text>implementations as described in the Table below.!</Text>
50
+ <Figure left="363" right="388" width="275" height="75" no="6" OriWidth="0.42948" OriHeight="0.0437891
51
+ " />
52
+ </Panel>
53
+
54
+ <Panel left="350" right="459" width="309" height="325">
55
+ <Text>EXPERIMENTAL RESULTS!</Text>
56
+ <Text>• Datasets: We use state-of-the-art human action datasets and their</Text>
57
+ <Text>corresponding protocols.!</Text>
58
+ <Text>• Impact of contextual features: We note that using Fisher vectors</Text>
59
+ <Text>consistently boosts the performance of our contextual features. Also, our</Text>
60
+ <Text>experiments provide evidence that action recognition performance can be</Text>
61
+ <Text>improved when static background appearance and global camera motion are</Text>
62
+ <Text>incorporated with foreground features.!</Text>
63
+ <Text>Comparison with the state-of-the-art: We set our method side by side with recent methods that address the same application using similar representations, i.e. methods that use dense trajectory points to represent video sequences [2,3,4], in the Table below.</Text>
99
+ <Figure left="351" right="647" width="302" height="135" no="7" OriWidth="0.479769" OriHeight="0.122431
100
+ " />
101
+ </Panel>
102
+
103
+ <Panel left="347" right="789" width="314" height="187">
104
+ <Text>DISCUSSIONS!</Text>
105
+ <Text>• Contextual features: When combined with foreground trajectories, we show</Text>
106
+ <Text>that these features can improve state-of-the-art recognition on challenging</Text>
107
+ <Text>action datasets.!</Text>
108
+ <Text>• Project page: http://www.cabaf.net/actioncue!</Text>
109
+ <Text>References:!</Text>
110
+ <Text>[1] Fabian Caba Heilbron, Ali Thabet, Juan Carlos Niebles, Bernard Ghanem. Camera Motion</Text>
111
+ <Text>and Surrounding Scene Appearance as Context for Action Recognition. ACCV, Singapore</Text>
112
+ <Text>2014.!</Text>
113
+ <Text>[2] Wang, H., Schmid, C. Action recognition with improved trajectories. ICCV, Sydney 2013.!</Text>
114
+ <Text>[3] Jiang, Y.G., Dai, Q., Xue, X., Liu, W., Ngo, C.W. Trajectory-based modeling of human</Text>
115
+ <Text>actions with motion reference points. ECCV, 2012.!</Text>
116
+ <Text>[4] Jain, M., Jégou, H., Bouthemy, P. Better exploiting motion for better action recognition. CVPR, 2013.</Text>
117
+ </Panel>
118
+
119
+ </Poster>
Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d84e106c6fd3f73f988c3065955edb42c90e4b50d6c3985a36fad46583400e61
3
+ size 585667
Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:be99f1fd0383c4f8e09a6164745f9040104d56e367f82127572c54b473bb8239
3
+ size 785859
Test/Can a driving simulator assess the effectiveness of Hazard Perception training in young novice drivers/info.txt ADDED
@@ -0,0 +1,73 @@
1
+ <Poster Width="625" Height="893">
2
+ <Panel left="11" right="208" width="295" height="129">
3
+ <Text>Introduction</Text>
4
+ <Text>Hazard perception (HP) is receptive to training. Yet, there is no</Text>
5
+ <Text>consensus on an optimal training program or acceptable</Text>
6
+ <Text>measures to assess effectiveness. We aimed to evaluate a</Text>
7
+ <Text>simulator-based hazard perception test (SBHPT) for</Text>
8
+ <Text>assessing improvements in HP skills of trained young-novice</Text>
9
+ <Text>drivers, relative to a control group, and relative to a group of</Text>
10
+ <Text>experienced drivers who served as gold standard.</Text>
11
+ </Panel>
12
+
13
+ <Panel left="11" right="342" width="295" height="548">
14
+ <Text>Method</Text>
15
+ <Text>Participants. Thirty-nine young novice drivers, 17-18 year-olds</Text>
16
+ <Text>with less than three months of driving experience, underwent</Text>
17
+ <Text>one of four HP training conditions (AAHPT active, hybrid, RAPT</Text>
18
+ <Text>and control) prior to the testing phase. Six experienced drivers</Text>
19
+ <Text>(mean age 26, with more than 8 years of driving experience)</Text>
20
+ <Text>completed the test phase.</Text>
21
+ <Text>Driving Scenarios. Use of a variety of traffic environments is</Text>
22
+ <Text>important as the driving environment dictates the type and</Text>
23
+ <Text>frequency of hazardous situations. The simulated drive</Text>
24
+ <Text>consisted of 8 urban and 6 residential scenarios merged into a</Text>
25
+ <Text>single 18 km drive. Two pairs of urban and residential scenarios</Text>
26
+ <Text>are detailed in Table 1. Sample snapshots are shown in Figures</Text>
27
+ <Text>1 and 2.</Text>
28
+ <Figure left="13" right="557" width="290" height="206" no="1" OriWidth="0.562313" OriHeight="0.181113
29
+ " />
30
+ <Figure left="15" right="772" width="281" height="82" no="2" OriWidth="0.509472" OriHeight="0.0916138
31
+ " />
32
+ <Text> Figure 1. Sample snapshots of events in urban scenarios. Left: a curve in the road</Text>
33
+ <Text>(U1-U2). Right: a bus parked in the station and a pedestrian (marked by an ellipse)</Text>
34
+ <Text>crossing the road to catch it (U4).</Text>
35
+ </Panel>
36
+
37
+ <Panel left="320" right="304" width="293" height="402">
38
+ <Text>Results and analysis</Text>
39
+ <Text>Driver velocity was sampled every 2m. Average velocity among</Text>
40
+ <Text>individuals of the same group (AAHPT active, hybrid, RAPT,</Text>
41
+ <Text>control, experienced) was calculated for each point, generating</Text>
42
+ <Text>600 sampling points per group per scenario. Using cubic</Text>
43
+ <Text>smoothing spline, a smooth curve was fitted to each set of</Text>
44
+ <Text>observations for each group (solid line in Figure 3). A statistical</Text>
45
+ <Text>test was then conducted to examine whether the five separate</Text>
46
+ <Text>curves, fitted for each group, could be replaced by a single</Text>
47
+ <Text>curve (i.e., that all groups chose their speed in the same way).</Text>
48
+ <Text>For all 8 scenarios, the group curves could not be combined into</Text>
49
+ <Text>one. Since groups were different, additional descriptive</Text>
50
+ <Text>examinations were made.</Text>
51
+ <Figure left="322" right="515" width="288" height="167" no="3" OriWidth="0.601196" OriHeight="0.239605
52
+ " />
53
+ <Text> Figure 3. The distribution of longitudinal velocity sampling points per each group, per</Text>
54
+ <Text>points along scenarios U1-U4. Solid lines are the fitted longitudinal velocity curves.</Text>
55
+ </Panel>
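A minimal sketch of the per-group velocity-curve fit described above, using SciPy's cubic smoothing spline; the smoothing factor is an assumed free parameter, not a value from the poster:

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def fit_group_curve(positions_m, mean_velocity, smoothing=None):
        """Fit a cubic smoothing spline to a group's average-velocity profile.

        positions_m: sampling positions along the drive (velocity sampled
                     every 2 m; 600 points per group per scenario).
        mean_velocity: group-averaged velocity at each sampling position.
        """
        # k=3 gives a cubic spline; s trades smoothness against fidelity
        return UnivariateSpline(positions_m, mean_velocity, k=3, s=smoothing)

    # usage: evaluate the fitted curve on a dense grid for plotting
    # spline = fit_group_curve(x, v)
    # v_smooth = spline(np.linspace(x.min(), x.max(), 600))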
56
+
57
+ <Panel left="320" right="709" width="293" height="178">
58
+ <Text>Conclusions</Text>
59
+ <Text>Group-related metrics can discriminate among driver</Text>
60
+ <Text>groups.</Text>
61
+ <Text>Patterns of driving behaviour can be evaluated via</Text>
62
+ <Text>driving speed.</Text>
64
+ <Text>Comparisons to control and to experienced drivers were</Text>
65
+ <Text>complementary; where the resemblance of trainees</Text>
66
+ <Text>to control was higher, they tended to resemble the</Text>
67
+ <Text>experienced group less.</Text>
68
+ <Text>Events that require a complete stop are less</Text>
69
+ <Text>diagnostic than events that require slowing down but</Text>
70
+ <Text>not a complete halt.</Text>
71
+ </Panel>
72
+
73
+ </Poster>
Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fefe872f3a6beb4b716f1b8d6404184ab502c03f9c3247c8c69d8b2546cbc67d
3
+ size 953033
Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/Leveraging Multi-Domain Prior Knowledge in Topic Models.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c496c6ea72105912ca4d93332e8dc2f285a57f82b9180f4eeec0217428addfdc
3
+ size 576726
Test/Leveraging Multi-Domain Prior Knowledge in Topic Models/info.xml ADDED
@@ -0,0 +1,46 @@
1
+ <Poster Width="886" Height="1253">
2
+ <Panel left="3" right="171" width="430" height="317">
3
+ <Text>Problem Definition: Given prior knowledge from multiple domains, improve topic modeling in the new domain.</Text>
4
+ <Text>Knowledge in the form of an s-set containing words sharing the same semantic meaning, e.g., \{Light, Heavy, Weight\}.</Text>
5
+ <Text>A novel technique to transfer knowledge to improve topic models.</Text>
6
+ <Text>Existing Knowledge-based models.</Text>
7
+ <Text>DF-LDA [Andrzejewski et al., 2009], Seeded Model (e.g., [Mukherjee and Liu, 2012]).</Text>
8
+ <Text>Two shortcomings: 1) Incapable of handling multiple senses, and 2) Adverse effects of knowledge.</Text>
9
+ </Panel>
10
+
11
+ <Panel left="2" right="509" width="431" height="727">
12
+ <Text>MDK-LDA</Text>
13
+ <Text>Generative Process: For each topic $t \in \{1,...,T\}$ \\ i. Draw a per-topic distribution over s-sets, $\varphi_t \sim \text{Dir}(\beta)$ \\ ii. For each s-set $s \in \{1,...,S\}$ \\ a) Draw a per-topic, per-s-set distribution over words, $\eta_{t,s} \sim \text{Dir}(\gamma)$ \\ For each document $m \in \{1,...,M\}$ \\ i. Draw $\theta_m \sim \text{Dir}(\alpha)$ \\ ii. For each word $w_{m,n}$, where $n \in \{1,...,N_m\}$ \\ a) Draw a topic $z_{m,n} \sim \text{Mult}(\theta_m)$ \\ b) Draw an s-set $s_{m,n} \sim \text{Mult}(\varphi_{z_{m,n}})$ \\ c) Emit $w_{m,n} \sim \text{Mult}(\eta_{z_{m,n},s_{m,n}})$</Text>
14
+ <Text>Plate Notation</Text>
15
+ <Figure left="34" right="819" width="367" height="238" no="1" OriWidth="0.304396" OriHeight="0.165161
16
+ " />
17
+ <Text>Collapsed Gibbs Sampling</Text>
18
+ <Text>Blocked Gibbs Sampler: Sample topic $z$ and s-set $s$ for word $w$ jointly \begin{equation} \begin{split} P(z_i=t,s_i=s \mid \mathbf{z}^{-i},\mathbf{s}^{-i},\alpha,\beta,\gamma) \propto & \\ \frac{n_{m,t}^{-i}+\alpha}{\sum_{t'=1}^{T}(n_{m,t'}^{-i}+\alpha)} \times \frac{n_{t,s}^{-i}+\beta}{\sum_{s'=1}^{S}(n_{t,s'}^{-i}+\beta)} \times & \frac{n_{t,s,w_i}^{-i}+\gamma_s}{\sum_{v'=1}^{V}(n_{t,s,v'}^{-i}+\gamma_s)} \end{split} \end{equation}</Text>
19
+ </Panel>
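A schematic numpy sketch of one step of the blocked sampler above, jointly drawing (topic, s-set) for a word; count bookkeeping is assumed to happen elsewhere, and gamma is taken as a scalar symmetric prior for brevity:

    import numpy as np

    def sample_topic_sset(n_mt, n_ts, n_tsv, w, alpha, beta, gamma, rng):
        """Jointly sample (t, s) for word w from the blocked conditional.

        n_mt:  (T,)      topic counts in the current document (word excluded).
        n_ts:  (T, S)    s-set counts per topic (word excluded).
        n_tsv: (T, S, V) word counts per (topic, s-set) pair (word excluded).
        """
        T, S, V = n_tsv.shape
        doc_term = (n_mt + alpha) / (n_mt.sum() + T * alpha)
        sset_term = (n_ts + beta) / (n_ts.sum(axis=1, keepdims=True) + S * beta)
        word_term = (n_tsv[:, :, w] + gamma) / (n_tsv.sum(axis=2) + V * gamma)
        p = doc_term[:, None] * sset_term * word_term      # (T, S) joint weights
        idx = rng.choice(T * S, p=(p / p.sum()).ravel())
        return divmod(idx, S)                              # (topic t, s-set s)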
20
+
21
+ <Panel left="450" right="171" width="431" height="387">
22
+ <Text>Generalized Pólya Urn Model</Text>
23
+ <Text>Generalized Pólya urn model [Mahmoud, 2008]</Text>
24
+ <Text>When a ball is drawn, that ball is put back along with a certain number of balls of similar colors.</Text>
25
+ <Text>Promoting s-set as a whole</Text>
26
+ <Text>If a ball of color w is drawn, we put back $A_{s,w^{'},w}$ balls of each color $w^{'} \in {1,...,V}$ where w and $w^{'}$ share s-set $s$. \begin{equation} A_{s,w^{'},w}=\left\{ \begin{array}{ll} 1 & w=w^{'}\\ \sigma & w \in s, w^{'} \in s, w \neq w^{'}\\ 0 & \text{otherwise} \end{array} \right. \end{equation}</Text>
27
+ <Text>Collapsed Gibbs Sampling \begin{equation} \begin{split} P(z_i=t,s_i=s \mid \mathbf{z}^{-i},\mathbf{s}^{-i},\alpha,\beta,\gamma,A) \propto \frac{n_{m,t}^{-i}+\alpha}{\sum_{t'=1}^{T}(n_{m,t'}^{-i}+\alpha)} & \\ \times \frac{\sum_{w'=1}^{V}\sum_{v'=1}^{V} A_{s,v',w'}\, n_{t,s,v'}^{-i}+\beta}{\sum_{s'=1}^{S}(n_{t,s'}^{-i}+\beta)} \times \frac{n_{t,s,w_i}^{-i}+\gamma_s}{\sum_{v'=1}^{V}(n_{t,s,v'}^{-i}+\gamma_s)} & \end{split} \end{equation}</Text>
28
+ </Panel>
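A minimal sketch of the urn update above: drawing word w under s-set s puts back one ball of color w plus sigma balls for every other word sharing s, as in the promotion matrix A; the data structures and the sigma value are illustrative:

    def gpu_update(n_tsv, t, s, w, sset_members, sigma=0.2):
        """Generalized Polya urn count update for topic t, s-set s, drawn word w."""
        n_tsv[t][s][w] += 1.0                   # put the drawn ball back
        for w_other in sset_members[s]:         # promote the whole s-set
            if w_other != w:
                n_tsv[t][s][w_other] += sigma   # sigma is an assumed setting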
29
+
30
+ <Panel left="450" right="573" width="429" height="661">
31
+ <Text>Experiments</Text>
32
+ <Text>Datasets: reviews from six domains from Amazon.com.</Text>
33
+ <Text>Baseline Models</Text>
34
+ <Text>LDA [Blei et al., 2003], LDA\_GPU [Mimno et al., 2011], and DF-LDA [Andrzejewski et al., 2009].</Text>
35
+ <Text>Topic Discovery Results</Text>
36
+ <Text>Evaluation measure: Precision @ n (p @ n).</Text>
37
+ <Text>Quantitative results in Table 1, Qualitative results in Table </Text>
38
+ <Text>Objective Evaluation</Text>
39
+ <Text>Topic Coherence [Mimno et al., 2011].</Text>
40
+ <Figure left="454" right="866" width="425" height="94" no="2" OriWidth="0.2532" OriHeight="0.0782796
41
+ " />
42
+ <Figure left="451" right="989" width="430" height="203" no="3" OriWidth="0.375626" OriHeight="0.132043
43
+ " />
44
+ </Panel>
45
+
46
+ </Poster>
Test/nips-2011-001/info.xml ADDED
@@ -0,0 +1,51 @@
1
+ <Poster Width="886" Height="1146">
2
+ <Panel left="68" right="187" width="366" height="455">
3
+ <Text>Introduction</Text>
4
+ <Text>This paper proposes a parsing algorithm for indoor scene understanding which includes four aspects: computing 3D scene layout, detecting 3D objects (e.g. furniture), detecting 2D faces (windows, doors etc.), and segmenting the background. The algorithm parses an image into a hierarchical structure, namely a parse tree. With the parse tree, we reconstruct the original image by the appearance of line segments, and we further recover the 3D scene by the geometry of 3D background and foreground objects.</Text>
5
+ <Figure left="77" right="301" width="348" height="232" no="1" OriWidth="0.659432" OriHeight="0.334624
6
+ " />
7
+ <Figure left="78" right="535" width="346" height="94" no="2" OriWidth="0.585977" OriHeight="0.105376
8
+ " />
9
+ </Panel>
10
+
11
+ <Panel left="66" right="658" width="369" height="412">
12
+ <Text>Results</Text>
13
+ <Figure left="73" right="666" width="358" height="130" no="3" OriWidth="0.632165" OriHeight="0.153548
14
+ " />
15
+ <Figure left="75" right="800" width="357" height="258" no="4" OriWidth="0.632721" OriHeight="0.176774
16
+ " />
17
+ </Panel>
18
+
19
+ <Panel left="447" right="188" width="367" height="310">
20
+ <Text>Stochastic Scene Grammar</Text>
21
+ <Text>The grammar represents compositional structures of visual entities, which includes three types of production rules and two types of contextual relations:</Text>
22
+ <Text>Production rules: (i) AND rules represent the decomposition of an entity into sub-parts; (ii) SET rules represent an ensemble of visual entities; (iii) OR rules represent the switching among sub-types of an entity.</Text>
23
+ <Text>Contextual relations: (a) Cooperative “+” relations represent positive links between binding entities, such as hinged faces of an object or aligned boxes; (b) Competitive “-” relations represent negative links between competing entities, such as mutually exclusive boxes.</Text>
24
+ <Figure left="457" right="319" width="351" height="174" no="5" OriWidth="0.632165" OriHeight="0.350538
25
+ " />
26
+ </Panel>
27
+
28
+ <Panel left="447" right="497" width="368" height="164">
29
+ <Text>Bayesian Formulation</Text>
30
+ <Text>We define a posterior distribution for a solution (a parse tree) $pt$ conditioned on an image $I$. This distribution is specified in terms of the statistics defined over the derivation of production rules. \begin{equation} P(pt|I) \propto P(pt)P(I|pt) = P(S)\prod_{v \in V^{N}}P(Ch_v|v)\prod_{v \in V^{T}}P(I|v) \end{equation} The probability is defined as a Gibbs distribution, and the energy term is decomposed into three potentials: \begin{equation} E(pt|I) = \sum_{v \in V^{OR}}E^{OR}(Ar(Ch_v)) + \sum_{v \in V^{AND}}E^{AND}(A_G(Ch_v)) + \sum_{\Lambda_v \in \Lambda_I, v \in V^{T}}E^{T}(I(\Lambda_v)) \end{equation}</Text>
31
+ </Panel>
32
+
33
+ <Panel left="447" right="662" width="369" height="194">
34
+ <Text>Inference by Hierarchical Cluster Sampling We design an efficient MCMC inference algorithm, namely Hierarchical cluster sampling, to search in the large solution space of scene configurations. The algorithm has two stages:</Text>
35
+ <Text>Clustering: It forms all possible higher-level structures (clusters) from lower-level entities by production rules and contextual relations. \begin{equation} P_+(Cl|I)=\prod_{v \in Cl^{OR}}P^{OR}(Ar(v))\prod_{u,v \in Cl^{AND}}P_+^{AND}(A_G(u), A_G(v))\prod_{v \in Cl^T}P^T(I(A_v)) \end{equation}</Text>
36
+ <Text>Sampling: It jumps between alternative structures (clusters) in each layer of the hierarchy to find the most probable configuration (represented by a parse tree). \begin{equation} Q(pt^*|pt,I)=P_+(Cl^*|I)\prod_{u \in Cl^{AND}, v \in pt^{AND}} P_-^{AND}(A_G(u)|A_G(v)). \end{equation}</Text>
37
+ </Panel>
38
+
39
+ <Panel left="447" right="854" width="367" height="214">
40
+ <Text>Experiment and Conclusion</Text>
41
+ <Text>Segmentation precision compared with Hoiem et al. 2007 [1], Hedau et al. 2009 [2], Wang et al. 2010 [3] and Lee et al. 2010 [4] in the UIUC dataset [2].</Text>
42
+ <Figure left="480" right="906" width="305" height="56" no="6" OriWidth="0.537563" OriHeight="0.0726882
43
+ " />
44
+ <Text>Compared with other algorithms, our contributions are</Text>
45
+ <Text>A Stochastic Scene Grammar (SSG) to represent the hierarchical structure of visual entities;</Text>
46
+ <Text>A Hierarchical Cluster Sampling algorithm to perform fast inference in the SSG model;</Text>
47
+ <Text>Richer structures obtained by exploring richer contextual relations.</Text>
48
+ <Text>Website: http://www.stat.ucla.edu/~ybzhao/research/sceneparsing</Text>
49
+ </Panel>
50
+
51
+ </Poster>
Test/nips-2011-001/nips-2011-001-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:34d35322e82fbfb11d5bc5297b85c6e5708585284b658deaa3c2404681021b29
3
+ size 17399398
Test/nips-2011-001/nips-2011-001.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:78bc15dafd5d3deb2296cc1a8c390598e5f7068607bee14bc78ad05074977ba1
3
+ size 5414980
Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d4496acbe6e598c101de179176848f50072dd52e1dae7aacb20b7eaa417bd39f
3
+ size 907983
Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5ab73d1d193fdbc5da9a37623e94fbf91680f3dfd13f5bd9897c63ab753b4529
3
+ size 504684
Train/3D Proximal Femur Estimation through Bi-planar Image Segmentation with Hierarchical/info.txt ADDED
@@ -0,0 +1,92 @@
1
+ <Poster Width="685" Height="968">
2
+ <Panel left="5" right="160" width="674" height="123">
3
+ <Text>Introduction</Text>
4
+ <Text>Problem statement:</Text>
5
+ <Text>3D Proximal Femur modeling from low-dose bi-planar X-Ray images ⇐ important diagnostic interest in Total Hip Replacement.</Text>
6
+ <Text>Contributions:</Text>
7
+ <Text>Non-uniform hierarchical decomposition of the shape prior with increasing clinically relevant precision.</Text>
8
+ <Text>Graphical-model representation of the femur involving third-order and fourth-order priors.</Text>
9
+ <Text>Similarity and mirror-symmetry invariant.</Text>
10
+ <Text>Providing means of measuring regional and boundary supports in the bi-planar views.</Text>
11
+ <Text>Can be learned from a small number of training examples.</Text>
12
+ <Text>A dual-decomposition optimization approach for efficient inference of the 3D femur configuration from bi-planar views.</Text>
13
+ <Figure left="567" right="178" width="81" height="99" no="1" OriWidth="0.1363" OriHeight="0.0992141
14
+ " />
15
+ </Panel>
16
+
17
+ <Panel left="8" right="289" width="335" height="440">
18
+ <Text>Hierarchical Multi-Resolution Probabilistic Modeling</Text>
19
+ <Text>Mesh sub-sampling formulated as clustering, achieved through curvature-driven unsupervised clustering acting on the geodesic distances between vertices.</Text>
20
+ <Text>$d(v, \hat{v})$: the geodesic distance between $v$ and $\hat{v}$ on $M_0$,</Text>
21
+ <Text>$curv(\hat{v})$: the curvature at $\hat{v}$ on $M_0$.</Text>
22
+ <Text>Level of detail selection:</Text>
23
+ <Text>Vertices are organized in a tree structure.</Text>
24
+ <Text>Starting from the coarsest resolution, regions are selected to be refined iteratively until reaching the required accuracy for every part.</Text>
25
+ <Text>Connectivity computation</Text>
26
+ <Text>Edges $E_{MR}$ based on Delaunay triangulation of $V_{MR}$ associated to the geodesic distance.</Text>
27
+ <Text>Faces $F_{MR}$ computed by searching for minimal cycles in the edge list.</Text>
28
+ <Figure left="137" right="488" width="91" height="83" no="2" OriWidth="0.115438" OriHeight="0.101179
29
+ " />
30
+ <Text>Probabilistic shape modeling</Text>
31
+ <Text>Pose-invariant prior:</Text>
32
+ <Text>Based on the relative Euclidean distance $\hat{d}_{ij} = d_{ij} / \sum_{(i,j) \in P_c} d_{ij}$ for each pair of points $(i,j) \in P_c$ in a triplet $c$ of vertices.</Text>
33
+ <Text>The distribution $\psi_c(\hat{d}_c)$ of $\hat{d}_c$ is learned from the training data, using Gaussian Mixture Models (GMMs).</Text>
38
+ <Text>Smoothness potential function:</Text>
39
+ <Text>Encoding constraints on the change of the normal directions, for each quadruplet $q$ of vertices corresponding to a pair of adjacent facets:</Text>
40
+ </Panel>
41
+
42
+ <Panel left="7" right="730" width="335" height="210">
43
+ <Text>Probabilistic 3D Surface Estimation Framework</Text>
44
+ <Text>Posterior probability maximization:</Text>
45
+ <Text>Higher-order MRF formulation:</Text>
46
+ <Text>$H^R_f(u_f)$: regional-term potentials.</Text>
48
+ <Text>$H^B_q(u_q)$: boundary-term potentials.</Text>
49
+ <Text>$H^P_c(u_c)$ and $H^P_q(u_q)$: model prior potentials.</Text>
54
+ <Text>MRF inference through dual-decomposition:</Text>
55
+ <Text>Decompose the original graph into a series of factor trees.</Text>
56
+ <Text>Solve factor trees using max-product belief propagation.</Text>
57
+ <Text>Maximize lower bound using a projected subgradient method.</Text>
58
+ </Panel>
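A schematic sketch of the three inference steps listed above: tree subproblems are treated as black-box solvers (max-product belief propagation in the paper) and the dual lower bound is maximized by a projected subgradient method; the names and step rule are illustrative:

    import numpy as np

    def dual_decomposition(tree_solvers, n_vars, n_labels, steps=100, step0=1.0):
        """tree_solvers[t](delta) returns the (n_vars,) minimizing labeling of
        tree t's energy plus the linear reparameterization delta."""
        deltas = [np.zeros((n_vars, n_labels)) for _ in tree_solvers]
        for it in range(1, steps + 1):
            labelings = [solve(d) for solve, d in zip(tree_solvers, deltas)]
            inds = [np.eye(n_labels)[y] for y in labelings]   # one-hot labelings
            mean_ind = np.mean(inds, axis=0)
            for d, ind in zip(deltas, inds):
                # ascent step on the dual lower bound; subtracting the mean
                # keeps the reparameterizations summing to zero (projection)
                d += (step0 / it) * (ind - mean_ind)
        return labelings   # the trees agree at convergence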
59
+
60
+ <Panel left="344" right="289" width="335" height="318">
61
+ <Text>$I = (I_k)_{k \in \mathcal{K}}$ ($\mathcal{K} = \{1, \ldots, K\}$, $K = 2$ for the case of bi-planar views): $K$ observed images captured from different viewpoints with the corresponding projection matrices $\Pi = (\Pi_k)_{k \in \mathcal{K}}$</Text>
62
+ <Text>Regional term</Text>
63
+ <Text>$u_f$: 3D coordinates of the vertices of a facet $f$,</Text>
64
+ <Text>$\delta_f(u_f, \Pi_k)$: front-facing facet indicator function,</Text>
65
+ <Text>$\Omega_f(u_f, \Pi_k)$: 2D region corresponding to the projection of $f$,</Text>
66
+ <Text>$p_{fg}$ and $p_{bg}$: distributions of the intensity for the regions of the femur and the background.</Text>
67
+ <Text>Boundary term</Text>
68
+ <Text>$\Gamma(u_q, \Pi_k)$: projection of the edge shared by the two adjacent facets,</Text>
69
+ <Text>$\vec{n}(x,y)$: outward-pointing unit normal of $\Gamma(u_q, \Pi_k)$,</Text>
70
+ <Text>$\nabla I_k(x,y) = (\frac{\partial I_k(x,y)}{\partial x}, \frac{\partial I_k(x,y)}{\partial y})$: gradient of the intensity at $(x,y)$.</Text>
74
+ </Panel>
75
+
76
+ <Panel left="345" right="610" width="334" height="260">
77
+ <Text>Experimental Validation</Text>
78
+ <Text>Validation using both dry femurs and real clinical data.</Text>
79
+ <Text>Comparison with the gold standard CT method, through point-to-surface distance and DICE coefficient.</Text>
80
+ <Figure left="353" right="671" width="320" height="154" no="3" OriWidth="0.557719" OriHeight="0.180747
81
+ " />
82
+ <Text>Figure: (a) Four 3D surface reconstruction results with point-to-surface errors on femoral head. (b) Boxplots on the DICE, the mean and STD of the point-to-surface errors (mm). (c) and (d) Projection results on in vivo data.</Text>
83
+ </Panel>
84
+
85
+ <Panel left="345" right="872" width="334" height="68">
86
+ <Text>Future Work</Text>
87
+ <Text>Introducing a joint model that couples femur with the hipbone socket.</Text>
88
+ <Text>Combining anatomical landmarks with the existing formulation.</Text>
89
+ <Text>Application to other clinical settings.</Text>
90
+ </Panel>
91
+
92
+ </Poster>
Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff558fc0fa7c8d3dba331efa4da97ab02e63fa95c6c64149401a6ebacaa85425
3
+ size 1308425
Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:98f2b5053f2dd14ec2136beab9ba3becfe775584426c3801f8639c7801c48494
3
+ size 1263646
Train/AZDBLab A Laboratory Information System for Large-Scale Empirical DBMS Studies/info.txt ADDED
@@ -0,0 +1,61 @@
1
+ <Poster Width="685" Height="913">
2
+ <Panel left="4" right="143" width="297" height="212">
3
+ <Text>Introduction</Text>
4
+ <Text>Scientific methodology in the database field can</Text>
5
+ <Text>provide a deep understanding of DBMS query</Text>
6
+ <Text>optimizers, for better engineered designs.</Text>
7
+ <Text>Few DBMS-centric labs are available for scientific</Text>
8
+ <Text>investigation; prior labs have focused on networks</Text>
9
+ <Text>and smartphones.</Text>
10
+ </Panel>
11
+
12
+ <Panel left="5" right="368" width="297" height="273">
13
+ <Text>AZDBLAB (AriZona DataBase Laboratory)</Text>
14
+ <Text>Has been in development for seven years.</Text>
15
+ <Text>Assists database researchers to conduct large-</Text>
16
+ <Text>scale empirical studies across multiple DBMSes.</Text>
17
+ <Text>Runs massive experiments with thousands or</Text>
18
+ <Text>millions of queries on multiple DBMSes.</Text>
19
+ <Text>Supports seven relational DBMSes with SQL and</Text>
20
+ <Text>JDBC interfaces as experiment subjects.</Text>
21
+ <Text>Provides the robustness to collect data over 8,277</Text>
22
+ <Text>hours of runs totaling about 2.4 million query executions.</Text>
23
+ <Text>Conducts automated analyses on multiple query</Text>
24
+ <Text>execution runs.</Text>
25
+ </Panel>
26
+
27
+ <Panel left="6" right="653" width="296" height="204">
28
+ <Text>Contributions</Text>
29
+ <Text>Novel research infrastructure, dedicated for large-</Text>
30
+ <Text>scale empirical DBMS studies</Text>
31
+ <Text>Seamless data provenance support</Text>
32
+ <Text>Several decentralized monitoring schemes: phone</Text>
33
+ <Text>apps, web apps, and watcher</Text>
34
+ <Text>Reusable GUI</Text>
35
+ <Text>Extensibility through a variety of plugins: labshelf,</Text>
36
+ <Text>analysis, experiment subject, and scenario</Text>
37
+ </Panel>
38
+
39
+ <Panel left="316" right="145" width="354" height="285">
40
+ <Text>AZDBLAB Architecture</Text>
41
+ <Figure left="320" right="182" width="346" height="243" no="1" OriWidth="0.383237" OriHeight="0.208222
42
+ " />
43
+ </Panel>
44
+
45
+ <Panel left="315" right="439" width="355" height="419">
46
+ <Text>Demonstration</Text>
47
+ <Text>Step 1: Choose a labshelf, add a user, and create a notebook,</Text>
48
+ <Text>a paper, and a study in the paper on the Observer GUI.</Text>
49
+ <Text>Step 2: Load an experiment specification into the notebook.</Text>
50
+ <Text>Step 3: Schedule an experiment run on a particular DBMS.</Text>
51
+ <Text>Step 4: Monitor the run status via Observer, a web app, and a</Text>
52
+ <Text>mobile app, and wait for the experiment to be done.</Text>
53
+ <Text>Step 5: Add the completed experiment run to the study and</Text>
54
+ <Text>conduct a timing protocol analysis for the study.</Text>
55
+ <Text>Step 6: Produce LaTeX/PDF documents containing the analysis</Text>
56
+ <Text>results.</Text>
57
+ <Figure left="321" right="639" width="345" height="212" no="2" OriWidth="0.791329" OriHeight="0.365952
58
+ " />
59
+ </Panel>
60
+
61
+ </Poster>
Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5c8aea0bb486ae0dba5560531eab5ef84a8ece6bf05506eb38b54883fa791488
3
+ size 1947514
Train/Active Boundary Annotation using Random MAP Perturbations/Active Boundary Annotation using Random MAP Perturbations.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a43cfa42c896945a9a5412a06fe36f7bfe2ec660d6b6fe5815573f9bcf268c75
3
+ size 1332880
Train/Active Boundary Annotation using Random MAP Perturbations/info.txt ADDED
@@ -0,0 +1,120 @@
1
+ <Poster Width="671" Height="948">
2
+ <Panel left="6" right="73" width="330" height="173">
3
+ <Text>1. Overview</Text>
4
+ <Text></Text>
5
+ <Text>Goal: Obtain high quality image annotation with low cost (annotation effort)!</Text>
6
+ <Figure left="60" right="102" width="214" height="56" no="1" OriWidth="0.175034" OriHeight="0.0492662
7
+ " />
8
+ <Text> low quality annotation</Text>
9
+ <Text> high quality annotation</Text>
10
+ <Text>Approach: Bayesian active learning!</Text>
11
+ <Text>Minimize uncertainty in the boundary of MAP prediction !</Text>
12
+ <Text>Tradeoff uncertainty reduction and cost of annotation !</Text>
13
+ <Text>Contributions!</Text>
14
+ <Text>Entropy bounds that measure the expected perturbation that changes the MAP prediction.!</Text>
15
+ <Text>Coarse to fine approach for pixel-accurate annotation that saves 33% in cost.!</Text>
16
+ </Panel>
17
+
18
+ <Panel left="7" right="249" width="330" height="95">
19
+ <Text>2. Active learning in structured spaces</Text>
20
+ <Text></Text>
21
+ <Text>Traditional Active learning!</Text>
22
+ <Text>Active learner picks which data points to label. Typically assume data is i.i.d.!</Text>
23
+ <Text>Bayesian active learning in structured spaces!</Text>
24
+ <Text>Deals with correlated labels, e.g. labels of a single image (non i.i.d. setting)!</Text>
25
+ <Text>Basic idea: Construct a probability function over the label space and reduce its</Text>
26
+ <Text>uncertainty with minimal annotation cost (clicks)!</Text>
27
+ </Panel>
28
+
29
+ <Panel left="7" right="348" width="328" height="279">
30
+ <Text>3. Active annotation framework</Text>
31
+ <Text></Text>
32
+ <Text>Approach!</Text>
33
+ <Text>Let $y$ be the set of labels for image $x$ for $n$ pixels!</Text>
34
+ <Text>Let $A_t$ be the set of annotations obtained till time $t$!</Text>
35
+ <Text>Let $p(y)$ be the joint probability of the labels given the data $x$ and annotations till time $t$!</Text>
36
+ <Text>Bayesian experimental design!</Text>
37
+ <Text>Given: !</Text>
38
+ <Text>a function that measures the uncertainty of the labels given the annotation, U(A) !</Text>
39
+ <Text>a function that measures the cost of annotation, C(a)!</Text>
40
+ <Text>Pick the annotation task that provides the highest uncertainty reduction per unit cost, i.e.:!</Text>
41
+ <Text>Uncertainty, U(A) = H (p), is defined as the entropy!</Text>
42
+ <Text>Computing the entropy is exponential in the size of the patch; for many useful cases,</Text>
43
+ <Text>however, MAP estimation is tractable (e.g., via Graph-cuts, MPLP)!</Text>
44
+ </Panel>
45
+
46
+ <Panel left="8" right="631" width="323" height="133">
47
+ <Text>4. Markov Random Fields (MRFs) for image labeling</Text>
48
+ <Text></Text>
49
+ <Text>Popular for image segmentation (e.g. Grabcut model, Blake et al., 2004) !</Text>
50
+ <Text>Let an annotation of an n pixel image be described as a n-tuple!</Text>
51
+ <Text>The overall score of the pixel label is given by:!</Text>
52
+ <Text>The MAP estimate can be obtained via Graph cuts (Boykov et al., 2001)!</Text>
53
+ </Panel>
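The score function itself did not survive extraction; the sketch below assumes the standard unary-plus-pairwise form used in Grabcut-style segmentation models:

    import numpy as np

    def mrf_score(labels, unary, pairwise, edges):
        """Score of a pixel labeling under a pairwise MRF.

        labels:   (n,) label index per pixel.
        unary:    (n, L) per-pixel label potentials.
        pairwise: (L, L) edge potential (e.g. a Potts/contrast term).
        edges:    iterable of (i, j) pixel-index pairs, e.g. a 4-connected grid.
        """
        score = unary[np.arange(len(labels)), labels].sum()
        score += sum(pairwise[labels[i], labels[j]] for i, j in edges)
        return score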
54
+
55
+ <Panel left="6" right="767" width="329" height="161">
56
+ <Text>5. MAP perturbations</Text>
57
+ <Text></Text>
58
+ <Text>The Perturb MAX model (Papandreou and Yuille, 2011, Tarlow 2012, Gane 2014)!</Text>
59
+ <Text>Random functions</Text>
60
+ <Figure left="196" right="802" width="137" height="44" no="2" OriWidth="0.367707" OriHeight="0.0870021
61
+ " />
62
+ <Text> </Text>
63
+ <Text> </Text>
64
+ <Text>MAP perturbations upper bound the partition function (Hazan & Jaakkola 2012)</Text>
65
+ <Text>Let $\{\gamma_i(y_i)\}$ be i.i.d. Gumbel random variables with zero mean</Text>
66
+ </Panel>
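A schematic perturb-max sketch matching the panel above: i.i.d. zero-mean Gumbel noise is added to the potentials and a MAP solver is called as a black box; the per-variable argmax below stands in for graph cuts on a structured model:

    import numpy as np

    def perturb_max_sample(theta, rng):
        """One perturb-max draw for (decomposable) potentials theta of shape (n, L)."""
        # a standard Gumbel has mean euler_gamma; shift the location to zero mean
        noise = rng.gumbel(loc=-np.euler_gamma, scale=1.0, size=theta.shape)
        return np.argmax(theta + noise, axis=1)

    # usage: rng = np.random.default_rng(0); y = perturb_max_sample(theta, rng)
    # averaging max(theta + noise) over draws yields the partition-function bound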
67
+
68
+ <Panel left="335" right="73" width="333" height="243">
69
+ <Text>6. Measuring uncertainty in the boundary of MAP prediction</Text>
70
+ <Text>For Perturb MAX models with Gumbel random variables!</Text>
71
+ <Text>Where,!</Text>
72
+ <Text>Proof idea: !</Text>
73
+ <Text>Conjugate duality:</Text>
74
+ <Text>Use MAP perturb. upper bounds.!</Text>
75
+ <Text>The optimal theta attains the perturb-max model p(y).</Text>
76
+ <Text>The linear term cancels out.</Text>
77
+ <Text>!Uncertainty measure!</Text>
78
+ <Text>Nonnegative (upper bounds the entropy).!</Text>
79
+ <Text>Attains its minimal value for the zero-one distribution (zero mean perturbations).!</Text>
80
+ <Text>Attains its maximal value for the uniform distribution (symmetry).</Text>
81
+ </Panel>
82
+
83
+ <Panel left="336" right="320" width="336" height="185">
84
+ <Text>7. Active boundary annotation</Text>
85
+ <Text></Text>
86
+ <Figure left="362" right="338" width="278" height="102" no="3" OriWidth="0.378562" OriHeight="0.0995807
87
+ " />
88
+ <Text>Coarse-to-fine boundary refinement!</Text>
89
+ <Text>We start from a coarse boundary and repeatedly refine it: the!</Text>
90
+ <Text>regions are picked by the algorithm, refinement is done by the user!</Text>
91
+ <Text>Cost of refinement = number of points in the polygons (boundary complexity)!</Text>
92
+ <Text>We don’t know the truth, so we can compute expectations of cost and uncertainty!</Text>
93
+ </Panel>
94
+
95
+ <Panel left="336" right="506" width="335" height="239">
96
+ <Text>8. Experimental evaluation</Text>
97
+ <Text></Text>
98
+ <Text>An example coarse-to-fine refinement (sampled regions for various strategies)!</Text>
99
+ <Figure left="352" right="544" width="300" height="50" no="4" OriWidth="0.738128" OriHeight="0.091195
100
+ " />
101
+ <Text>Active annotation results!</Text>
102
+ <Figure left="345" right="624" width="314" height="115" no="5" OriWidth="0.791045" OriHeight="0.138365
103
+ " />
104
+ </Panel>
105
+
106
+ <Panel left="336" right="749" width="329" height="183">
107
+ <Text>9. Conclusions and future work</Text>
108
+ <Text></Text>
109
+ <Text>We proposed a new uncertainty measure!</Text>
110
+ <Text>Avoids expensive MCMC sampling by randomly perturbing the model and using a MAP solver as a black box tool.</Text>
111
+ <Text>Applications for parameter estimation and active learning in a number of areas such as matchings, parse trees, and other combinatorial structures.!</Text>
112
+ <Text>Active learning in structured spaces!</Text>
113
+ <Text>Sampling based approach allows us to consider non-decomposable cost functions. For the boundary annotation task we used boundary complexity, which is not possible to compute with marginal estimates.!</Text>
114
+ <Text>This led to 33% savings in annotation time for pixel-accurate boundary annotations.!</Text>
115
+ <Text>Challenges!</Text>
116
+ <Text>MAP perturbation based entropy bounds for higher dimensional perturbations.!</Text>
117
+ <Text>Beyond super-modular functions in the context of active learning.!</Text>
118
+ </Panel>
119
+
120
+ </Poster>
Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:90e98b579b5cdc94f48aed56c8c93350b7856d1434a305427a35b672713f2e96
3
+ size 9210484
Train/Adaptive Structure from Motion with a contrario model estimation/Adaptive Structure from Motion with a contrario model estimation.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dfbb070e74637f0bf0f4e8164b2f302374cf7ef4eb585929d08e9ebc636ce89a
3
+ size 3562498
Train/Adaptive Structure from Motion with a contrario model estimation/info.txt ADDED
@@ -0,0 +1,72 @@
1
+ <Poster Width="658" Height="930">
2
+ <Panel left="12" right="102" width="313" height="188">
3
+ <Text>STRUCTURE FROM MOTION: SFM</Text>
4
+ <Text>Structure from Motion depends on robust estimation; RANSAC is used to exclude outliers.</Text>
5
+ <Figure left="15" right="151" width="307" height="135" no="1" OriWidth="0" OriHeight="0
6
+ " />
7
+ </Panel>
8
+
9
+ <Panel left="10" right="298" width="315" height="215">
10
+ <Text>ROBUST ESTIMATION THRESHOLD DILEMMA</Text>
11
+ <Text>RANSAC requires the choice of a threshold $T$, which must be balanced:</Text>
12
+ <Text>Too small: Too few inliers, leading to model imprecision,</Text>
13
+ <Text>Too large: Models are contaminated by outliers (false data).</Text>
14
+ <Figure left="20" right="380" width="291" height="77" no="2" OriWidth="0" OriHeight="0
15
+ " />
16
+ <Text>Goal: making T adaptive to data and noise.</Text>
17
+ <Text>Find a model that best fits the data with a confidence threshold $T$ that adapts automatically to noise by using AC-RANSAC.</Text>
18
+ </Panel>
19
+
20
+ <Panel left="332" right="102" width="315" height="409">
21
+ <Text>A CONTRARIO STRUCTURE FROM MOTION</Text>
22
+ <Text>AC-RANSAC. A threshold-less rigid model estimation framework.</Text>
23
+ <Text>The method answers the question: “Could the rigid set of data have occurred by chance?”</Text>
24
+ <Text>The threshold T adapts for inlier/outlier discrimination.</Text>
25
+ <Text>It provides a confidence score for each model.</Text>
26
+ <Text>A Contrario criterion [3]:</Text>
27
+ <Text>Use a background model H0 : uniform distribution.</Text>
28
+ <Text>Strong deviation from H0 is deemed meaningful.</Text>
29
+ <Text>AC-RANSAC relies on the following definitions:</Text>
30
+ <Text>Number of False Alarms (NFA) measures model fitness to data</Text>
31
+ <Text>Given model $M$, assuming $k$ inliers among $n$ correspondences, $T_k$ denotes the $k$-th smallest residual</Text>
32
+ <Text>Expectation: $NFA(M) = \min_{k=N+1 \ldots n} NFA(M, k) \leq 1$. RANSAC maximizes inlier count; AC-RANSAC minimizes NFA.</Text>
33
+ <Text>Application to Structure from Motion: estimation of</Text>
34
+ <Text>Homography</Text>
35
+ <Text>Pose/Resection</Text>
36
+ <Text>Fundamental matrix</Text>
37
+ <Text>Essential matrix</Text>
38
+ <Figure left="489" right="383" width="153" height="97" no="3" OriWidth="0" OriHeight="0
39
+ " />
40
+ <Text>Only assumption: the returned model is fitted by at least $2 N_{sample}$ data points.</Text>
41
+ </Panel>
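A sketch of the NFA scan implied above, assuming the generic a contrario form NFA(M, k) = (n - N_s) C(n, k) C(k, N_s) alpha(T_k)^{d(k - N_s)}; the exact constants and the background probability alpha are model dependent and are assumptions here:

    import math

    def best_nfa(sorted_residuals, n_sample, alpha0, d=1.0):
        """Return (log NFA, k) minimized over k; a model is meaningful if log NFA <= 0."""
        n = len(sorted_residuals)
        log_binom = lambda a, b: (math.lgamma(a + 1) - math.lgamma(b + 1)
                                  - math.lgamma(a - b + 1))
        best = (math.inf, None)
        for k in range(n_sample + 1, n + 1):
            t_k = sorted_residuals[k - 1]     # adaptive inlier threshold T_k
            log_nfa = (math.log(n - n_sample)
                       + log_binom(n, k) + log_binom(k, n_sample)
                       + d * (k - n_sample) * math.log(max(alpha0 * t_k, 1e-300)))
            if log_nfa < best[0]:
                best = (log_nfa, k)
        return best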
42
+
43
+ <Panel left="8" right="519" width="638" height="258">
44
+ <Text>EXPERIMENTAL RESULTS</Text>
45
+ <Figure left="16" right="569" width="308" height="203" no="4" OriWidth="0.385549" OriHeight="0.183081
46
+ " />
47
+ <Figure left="326" right="545" width="315" height="228" no="5" OriWidth="0.557803" OriHeight="0.355537
48
+ " />
49
+ </Panel>
50
+
51
+ <Panel left="10" right="785" width="476" height="142">
52
+ <Text>CONTRIBUTIONS</Text>
53
+ <Text>An SfM pipeline built on AC-RANSAC:</Text>
54
+ <Text>AC-RANSAC estimation of E, F, H, Pose,</Text>
55
+ <Text>Experimental validation showing the benefit of an adaptive automatic threshold.</Text>
56
+ <Text>openMVG open source library</Text>
57
+ <Text>A multiple-view geometry library,</Text>
58
+ <Text>A collection of 2-view solvers,</Text>
59
+ <Text>Generic robust estimators: RANSAC, AC-RANSAC.</Text>
60
+ <Text> Synthetic datasets with GT calibration:</Text>
61
+ <Figure left="295" right="835" width="186" height="76" no="6" OriWidth="0" OriHeight="0
62
+ " />
63
+ </Panel>
64
+
65
+ <Panel left="495" right="785" width="152" height="142">
66
+ <Text>REFERENCES</Text>
67
+ <Text>[1] N. Snavely et al. Photo tourism: exploring photo collections in 3D. In SIGGRAPH 2006.</Text>
68
+ <Text>[2] C. Strecha et al. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In CVPR 2008.</Text>
69
+ <Text>[3] L. Moisan et al. Automatic homographic registration of a pair of images, with a contrario elimination of outliers. In IPOL 2012, http://dx.doi.org/10.5201/ipol.2012.mmm-oh.</Text>
70
+ </Panel>
71
+
72
+ </Poster>
Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5486b3a5608815906bec57004363684666fceeb570577729aac3b9bd35e5c796
3
+ size 1205637
Train/An automated measure of MDP similarity for transfer in reinforcement learning/An automated measure of MDP similarity for transfer in reinforcement learning.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:65ea25c72a8f8f632ebcfe1363b9bb59769af355ecf0b919e9df875dbe994904
3
+ size 806408
Train/An automated measure of MDP similarity for transfer in reinforcement learning/info.txt ADDED
@@ -0,0 +1,73 @@
1
+ <Poster Width="672" Height="950">
2
+ <Panel left="12" right="180" width="314" height="109">
3
+ <Text>Motivation</Text>
4
+ <Text>Transfer Learning aims to improve learning times on a new target task by reusing knowledge from previously learned source task(s). In transfer, the performance of any algorithm depends on the choice of the source and target tasks. Here, we present a data-driven similarity measure used to choose source task(s).</Text>
5
+ </Panel>
6
+
7
+ <Panel left="10" right="295" width="315" height="95">
8
+ <Text>Markov Decision Processes</Text>
9
+ <Text>Tasks are modelled as Markov Decision Processes (MDPs). An</Text>
10
+ <Text>MDP is a tuplewith:</Text>
11
+ <Text>: state space: transition probability</Text>
12
+ <Text>: action space</Text>
13
+ <Text>: discount factor: reward function</Text>
14
+ </Panel>
15
+
16
+ <Panel left="9" right="395" width="314" height="449">
17
+ <Text>RBDist Similarity Measure</Text>
18
+ <Text>Intuition: If two tasks are similar, then a restricted Boltzmann</Text>
19
+ <Text>Machine (RBM) trained on samples from the first task should</Text>
20
+ <Text>reconstruct samples from the other task. The distance is measured</Text>
21
+ <Text>using two phases:</Text>
22
+ <Text>Training Phase: Using source samples, train an RBM by</Text>
23
+ <Text>contrastive divergence.</Text>
24
+ <Text>Reconstruction Phase: Reconstruct target samples by</Text>
25
+ <Text>sampling the visible layer (having conditionally independent</Text>
26
+ <Text>visible units)</Text>
27
+ <Text>Measure similarity: using the Euclidean measure between</Text>
28
+ <Text>real samples and reconstructed ones</Text>
29
+ <Figure left="24" right="647" width="287" height="193" no="1" OriWidth="0.382834" OriHeight="0.213684
30
+ " />
31
+ </Panel>
32
+
33
+ <Panel left="342" right="180" width="314" height="142">
34
+ <Text>Experimental Domains & Benchmarks</Text>
35
+ <Figure left="346" right="199" width="61" height="86" no="2" OriWidth="0.0762943" OriHeight="0.0842105
36
+ " />
37
+ <Figure left="409" right="202" width="125" height="82" no="3" OriWidth="0.220708" OriHeight="0.111579
38
+ " />
39
+ <Figure left="536" right="201" width="111" height="83" no="4" OriWidth="0.246594" OriHeight="0.0863158
40
+ " />
41
+ </Panel>
42
+
43
+ <Panel left="339" right="328" width="316" height="219">
44
+ <Text>Dynamical Phase Discovery</Text>
45
+ <Figure left="346" right="357" width="143" height="118" no="5" OriWidth="0.339237" OriHeight="0.217895
46
+ " />
47
+ <Figure left="506" right="354" width="143" height="121" no="6" OriWidth="0.337875" OriHeight="0.211579
48
+ " />
49
+ <Text>RBDist can automatically discover tasks’ dynamical phases</Text>
50
+ </Panel>
51
+
52
+ <Panel left="338" right="553" width="316" height="218">
53
+ <Text>Transfer Correlation</Text>
54
+ <Figure left="344" right="580" width="144" height="112" no="7" OriWidth="0.347411" OriHeight="0.209474
55
+ " />
56
+ <Figure left="500" right="579" width="149" height="111" no="8" OriWidth="0.340599" OriHeight="0.210526
57
+ " />
58
+ <Text>Jump-Start correlation as a</Text>
59
+ <Text>function of RBDist on Cart</Text>
60
+ <Text>Pole systems</Text>
61
+ <Text>Jump-Start correlation as a</Text>
62
+ <Text>function of RBDist on</Text>
63
+ <Text>Mountain Car systems</Text>
64
+ <Text>RBDist correlates with initial performance on target tasks</Text>
65
+ </Panel>
66
+
67
+ <Panel left="339" right="778" width="315" height="68">
68
+ <Text>Future Work</Text>
69
+ <Text>Extend RBDist to support transfer between different domain tasks</Text>
70
+ <Text>Assess the effect of RBDist on other transfer criteria (e.g., asymptotic performance, time to threshold, etc.)</Text>
71
+ </Panel>
72
+
73
+ </Poster>
Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a907f6fcd279612043e92c251c8b85b8f0257a47c3dfebc58b88aa83fde23f23
3
+ size 2495222
Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:efb00f47a8683387bc02c97748318a5771d920074f8c55df16ce65ca702dd7c8
3
+ size 1512626
Train/Automated Embryo Stage Classification in Time-lapse Microscopy Video of Early Human Embryo Development/info.txt ADDED
@@ -0,0 +1,50 @@
1
+ <Poster Width="687" Height="972">
2
+ <Panel left="7" right="183" width="295" height="338">
3
+ <Text>Background and significance</Text>
4
+ <Text>The EevaTM (Early Embryo Viability Assessment) Test was developed to automatically measure cell division timings and provide quantitative information regarding embryo development.</Text>
5
+ <Figure left="32" right="253" width="253" height="139" no="1" OriWidth="0" OriHeight="0
6
+ " />
7
+ <Text>We developed a multi-level classification method to identify theembryo stage (i.e. 1-cell, 2-cell, 3-cell, 4-or-more-cell) at every timepoint of a time-lapse microscopy video of early human embryodevelopment.</Text>
8
+ <Figure left="15" right="441" width="279" height="73" no="2" OriWidth="0.324277" OriHeight="0.0600736
9
+ " />
10
+ </Panel>
11
+
12
+ <Panel left="307" right="183" width="374" height="171">
13
+ <Text>The Method</Text>
14
+ <Figure left="315" right="210" width="219" height="142" no="3" OriWidth="0.30578" OriHeight="0.117695
15
+ " />
16
+ <Figure left="536" right="211" width="138" height="132" no="4" OriWidth="0" OriHeight="0
17
+ " />
18
+ </Panel>
19
+
20
+ <Panel left="307" right="357" width="372" height="158">
21
+ <Text>Embryo Features</Text>
22
+ <Text>Based on Bhattacharyya distance of the BoF histograms of consecutive frames</Text>
23
+ <Text>Registration free, rotation and translation invariant</Text>
24
+ <Text>“Dips” in the plot are good indications of stage transitions</Text>
25
+ <Text>Used by the Viterbi algorithm to define state transitional probability</Text>
26
+ <Figure left="317" right="431" width="355" height="31" no="5" OriWidth="0.53237" OriHeight="0.0322844
27
+ " />
28
+ <Figure left="308" right="466" width="370" height="42" no="6" OriWidth="0.557225" OriHeight="0.0465877
29
+ " />
30
+ </Panel>
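A minimal sketch of the similarity cue above: the Bhattacharyya coefficient of consecutive frames' BoF histograms (assumed L1-normalized); dips in the resulting signal mark candidate stage transitions:

    import numpy as np

    def temporal_similarity(p, q):
        """Bhattacharyya coefficient of two normalized histograms (1 = identical)."""
        return np.sum(np.sqrt(p * q))

    # signal = [temporal_similarity(h[t], h[t + 1]) for t in range(len(h) - 1)]
    # dips in `signal` feed the Viterbi state-transition probabilities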
31
+
32
+ <Panel left="311" right="520" width="370" height="409">
33
+ <Text>Temporal Image Similarity</Text>
34
+ <Text>327 human embryo videos (500 frames, each with 151 x 151 pixels) for training, 389 embryo videos for testing.</Text>
35
+ <Text>All the embryo videos were captured using the EevaTM system.</Text>
36
+ <Text>Two human experts annotated the embryo stages of each frame.</Text>
37
+ <Figure left="331" right="591" width="303" height="81" no="7" OriWidth="0.520809" OriHeight="0.0980793
38
+ " />
39
+ <Text> Importance of different sets of features in trained level-1 (left) and level-2 (right) classification models</Text>
40
+ <Figure left="331" right="695" width="319" height="51" no="8" OriWidth="0.553757" OriHeight="0.0527176
41
+ " />
42
+ <Text> Classification performance at different levels</Text>
43
+ <Figure left="324" right="761" width="169" height="135" no="9" OriWidth="0.266474" OriHeight="0.147119
44
+ " />
45
+ <Figure left="498" right="764" width="171" height="133" no="10" OriWidth="0.267052" OriHeight="0.14671
46
+ " />
47
+ <Text> Precision (left) and Recall (right) of cell division detection as functions of the offset tolerance</Text>
48
+ </Panel>
49
+
50
+ </Poster>
Train/BMVC-2011-001/BMVC-2011-001-Poster.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c880abef110f10cf57c1c951461a6ed5498affe0d1ce120de5dc841755a205c9
3
+ size 1566524
Train/BMVC-2011-001/BMVC-2011-001.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:136b11aac9f56b5239baadb344c5ca2091a3a8f34731fd8aa97d6a763b9f3112
3
+ size 566510
Train/BMVC-2011-001/info.txt ADDED
@@ -0,0 +1,134 @@
1
+ <Poster Width="1735" Height="1227">
2
+ <Panel left="20" right="186" width="333" height="181">
3
+ <Text>PROBLEM STATEMENT</Text>
4
+ <Text>Given partial 2D or 3D trajectories of the</Text>
5
+ <Text>motion of a uniformly colored bouncing</Text>
6
+ <Text>ball, that is viewed by a single or multi-</Text>
7
+ <Text>ple cameras, estimate its full 3D state,</Text>
8
+ <Text>over time, i.e. location, orientation, an-</Text>
9
+ <Text>gular and linear velocities.</Text>
10
+ </Panel>
11
+
12
+ <Panel left="22" right="373" width="330" height="138">
13
+ <Text>MOTIVATION</Text>
14
+ <Text>Scene understanding can benefit from</Text>
15
+ <Text>exploiting the fact that a dynamic scene</Text>
16
+ <Text>and its visual observations are invariably</Text>
17
+ <Text>determined by the laws of physics.</Text>
18
+ </Panel>
19
+
20
+ <Panel left="23" right="512" width="328" height="233">
21
+ <Text>MAIN IDEA</Text>
22
+ <Text>• Model the physics of the scene using</Text>
23
+ <Text>physics-based simulation</Text>
24
+ <Text>• Acquire visual observations</Text>
25
+ <Text>• Define an objective function that con-</Text>
26
+ <Text>nects the model to the observations</Text>
27
+ <Text>• Produce physically plausible interpre-</Text>
28
+ <Text>tations of the scene by performing</Text>
29
+ <Text>black-box optimization</Text>
30
+ </Panel>
31
+
32
+ <Panel left="359" right="188" width="673" height="552">
33
+ <Text>PHYSICS BASED SIMULATION</Text>
34
+ <Text>(A) Dynamics of a bouncing ball</Text>
35
+ <Text>The bouncing ball is affected by gravity</Text>
36
+ <Text>and air resistance while in flight and fric-</Text>
37
+ <Text>tion while in bounce with a surface.</Text>
38
+ <Figure left="374" right="344" width="293" height="133" no="1" OriWidth="0.516744" OriHeight="0.144003
39
+ " />
40
+ <Text>(B) Equations of motion</Text>
41
+ <Text>We assume standard equations of mo-</Text>
42
+ <Text>tion for the flight phase and add air re-</Text>
43
+ <Text>sistance. We derive equations for the</Text>
44
+ <Text>bounce phase by extending [1].</Text>
45
+ <Text>(C) Simulation of a bouncing ball</Text>
46
+ <Text>We define a parameterized ball throwing simulation process S that:</Text>
47
+ <Text>• receives a 21-D vector of scene properties and initial conditions</Text>
48
+ <Text>• at each point in time, produces a 12-D vector of location, orientation, linear and</Text>
49
+ <Text>angular velocities</Text>
50
+ <Text>• is implemented by augmenting the Newton Game Dynamics simulator with our</Text>
51
+ <Text>physics modeling</Text>
52
+ <Text>• performs at 500fps, but is sub-sampled to real acquisition rate (30fps), in order to</Text>
53
+ <Text>account for aliasing effects</Text>
54
+ </Panel>
55
+
56
+ <Panel left="1040" right="188" width="669" height="314">
57
+ <Text>PHYSICALLY PLAUSIBLE SCENE INTERPRETATION</Text>
58
+ <Text>We estimate the physically plausible explanation e of the observed scene by formu-</Text>
59
+ <Text>lating an optimization problem, where:</Text>
60
+ <Text>• the hypothesis space of x is defined over the domain of simulation process S</Text>
61
+ <Text>• the observation data o are trajectories of a bouncing ball</Text>
62
+ <Text>(potentially partial, 3D or 2D, from single or multiple cameras)</Text>
63
+ <Text>• the objective function quantifies the discrepancy between the result of an invocation</Text>
64
+ <Text>to S and the observations</Text>
65
+ <Text>• the objective function is optimized by means of Differential Evolution [5]</Text>
66
+ </Panel>
67
+
68
+ <Panel left="1040" right="507" width="668" height="229">
69
+ <Text>CONTRIBUTIONS</Text>
70
+ <Text>• First method to consider attributes of state that can only be estimated through</Text>
71
+ <Text>physics-based simulation</Text>
72
+ <Text>• Extension to existing work [2–4] in exploiting physics based simulation in vision</Text>
73
+ <Text>• Proposal of an effective method that is clear, generic, top-down, simulation based</Text>
74
+ <Text>• Incorporation of realistic physics</Text>
75
+ <Text>• Selected generic and modular components allow for extension to other broader or</Text>
76
+ <Text>different contexts</Text>
77
+ </Panel>
78
+
79
+ <Panel left="20" right="752" width="1348" height="376">
80
+ <Text>EXPERIMENTAL RESULTS</Text>
81
+ <Text>(A) Multiview estimation of 3D trajectories</Text>
82
+ <Text>(synthetic/real)</Text>
83
+ <Figure left="29" right="859" width="385" height="255" no="2" OriWidth="0.605658" OriHeight="0.156226
84
+ " />
85
+ <Text>(B) Single view estimation of 3D trajectories</Text>
86
+ <Text>Finding ball throwing simulations that optimally repro-</Text>
87
+ <Text>duce 2D observations.</Text>
88
+ <Figure left="492" right="895" width="204" height="154" no="3" OriWidth="0.251155" OriHeight="0.124905
89
+ " />
90
+ <Figure left="701" right="876" width="199" height="173" no="4" OriWidth="0.277136" OriHeight="0.166921
91
+ " />
92
+ <Text>(C) Seeing the “invisible”</Text>
93
+ <Text>Implicit information, like the state of the ball while</Text>
94
+ <Text>occluded (left) and the angular components of its 3D</Text>
95
+ <Text>state (right), are computer based on a single camera.</Text>
96
+ <Figure left="955" right="921" width="199" height="151" no="5" OriWidth="0.256351" OriHeight="0.127578
97
+ " />
98
+ <Figure left="1161" right="929" width="202" height="142" no="6" OriWidth="0.301963" OriHeight="0.139801
99
+ " />
100
+ </Panel>
101
+
102
+ <Panel left="1377" right="753" width="333" height="369">
103
+ <Text>KEY REFERENCES</Text>
104
+ <Text>[1] P.J. Aston and R. Shail. The Dynamics of a Bouncing</Text>
105
+ <Text>Superball with Spin. Dynamical Systems, 22(3):291–</Text>
106
+ <Text>322, 2007.</Text>
107
+ <Text>[2] K. Bhat, S. Seitz, J. Popovi´c, and P. Khosla. Com-</Text>
108
+ <Text>puting the Physical Parameters of Rigid-body Motion</Text>
109
+ <Text>from Video. In ECCV 2002, pages 551–565. Springer,</Text>
110
+ <Text>2002.</Text>
111
+ <Text>[3] D.J. Duff, J. Wyatt, and R. Stolkin. Motion Estimation</Text>
112
+ <Text>using Physical Simulation. In IEEE International Con-</Text>
113
+ <Text>ference on Robotics and Automation (ICRA), pages</Text>
114
+ <Text>1511–1517. IEEE, 2010.</Text>
115
+ <Text>[4] D. Metaxas and D. Terzopoulos. Shape and Nonrigid</Text>
116
+ <Text>Motion Estimation through Physics-based Synthesis.</Text>
117
+ <Text>IEEE Transactions on Pattern Analysis and Machine</Text>
118
+ <Text>Intelligence, 15(6):580–591, 1993.</Text>
119
+ <Text>[5] R. Storn and K. Price. Differential Evolution–A Sim-</Text>
120
+ <Text>ple and Efficient Heuristic for Global Optimization over</Text>
121
+ <Text>Continuous Spaces. Journal of Global Optimization,</Text>
122
+ <Text>11(4):341–359, 1997.</Text>
123
+ </Panel>
124
+
125
+ <Panel left="22" right="1129" width="1686" height="86">
126
+ <Text>MORE INFORMATION</Text>
127
+ <Text>For more information, visit http://www.ics.forth.gr/ kyriazis/?e=1 or contact {kyriazis,oikonom,argyros}@ics.forth.gr</Text>
128
+ <Text>This work was partially supported by the</Text>
129
+ <Text>IST-FP7-IP-215821 project GRASP</Text>
130
+ <Figure left="1602" right="1172" width="78" height="44" no="7" OriWidth="0" OriHeight="0
131
+ " />
132
+ </Panel>
133
+
134
+ </Poster>
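
Each `info.txt` in this dataset describes a poster as XML: a `<Poster>` element with pixel dimensions containing `<Panel>` regions, which in turn hold `<Text>` lines and `<Figure>` bounding boxes. A minimal Python sketch of loading one such annotation with the standard library, assuming the well-formed layout shown above; the function name `load_poster` is hypothetical.

```python
import xml.etree.ElementTree as ET

def load_poster(path: str) -> dict:
    """Read a <Poster> annotation file into plain dicts and lists."""
    root = ET.parse(path).getroot()  # <Poster Width="..." Height="...">
    poster = {
        "width": int(root.get("Width")),
        "height": int(root.get("Height")),
        "panels": [],
    }
    for panel in root.findall("Panel"):
        poster["panels"].append({
            # The files use the attribute names left/right/width/height
            # verbatim; we keep them as-is rather than reinterpret them.
            "bbox": {k: int(panel.get(k)) for k in ("left", "right", "width", "height")},
            "texts": [t.text or "" for t in panel.findall("Text")],
            "n_figures": len(panel.findall("Figure")),
        })
    return poster

# e.g. poster = load_poster("Train/BMVC-2011-001/info.txt")
```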
Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation-Poster.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aefa184e5f2da280813415965de61b8a4eac13ec2701f1db0e655da7f09ceeec
+ size 1759871
Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd1aa4b53750df87e91f17a8258c6560a31cc5222147ebbd74d6907c9fdf6ff6
+ size 2051164
Train/Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation/info.txt ADDED
@@ -0,0 +1,158 @@
+ <Poster Width="1734" Height="1214">
+ <Panel left="36" right="192" width="398" height="289">
+ <Text>Task</Text>
+ <Figure left="100" right="244" width="91" height="89" no="1" OriWidth="0" OriHeight="0" />
+ <Text>Fully annotated</Text>
+ <Figure left="264" right="243" width="94" height="91" no="2" OriWidth="0" OriHeight="0" />
+ <Text>Weakly annotated</Text>
+ <Text>• Many computer vision tasks require fully annotated data, but annotation is time-consuming, laborious, and varies across human annotators.</Text>
+ <Text>• More and more online media sharing websites (e.g. Flickr) provide weakly annotated data. However: weaker supervision, ambiguity (background clutter, occlusion…).</Text>
+ <Text>• Challenge: Weakly Supervised Object Localisation (WSOL).</Text>
+ </Panel>
+
+ <Panel left="36" right="482" width="397" height="696">
+ <Text>Existing Approaches vs. Ours</Text>
+ <Text>Three types of cues are exploited in existing WSOL:</Text>
+ <Text>• Object-saliency: a region containing the object should look different from background in general.</Text>
+ <Text>• Intra-class: the region should look similar to other regions containing the object of interest in other training images.</Text>
+ <Text>• Inter-class: the region should look dissimilar to any regions that are known not to contain the object of interest.</Text>
+ <Text>However, they are independently trained:</Text>
+ <Figure left="39" right="671" width="396" height="273" no="3" OriWidth="0.390751" OriHeight="0.189455" />
+ <Text>Independent learning ignores the fact that:</Text>
+ <Text>• The knowledge that multiple objects co-exist within each image is not exploited.</Text>
+ <Text>• The background is relevant to different foreground object classes.</Text>
+ <Text>Our contributions:</Text>
+ <Text>• We propose the novel concept of joint modelling of all object classes and backgrounds for weakly supervised object localisation.</Text>
+ <Text>• We formulate a novel Bayesian topic model suitable for localisation of objects, utilising various types of available prior knowledge.</Text>
+ <Text>• We provide a solution for exploiting unlabelled data for semi+weakly supervised learning of object localisation.</Text>
+ </Panel>
+
+ <Panel left="455" right="194" width="820" height="407">
+ <Text>Methodology</Text>
+ <Text>Preprocessing and Representation:</Text>
+ <Text>• Regular-grid SIFT descriptors, sampled every 5 pixels.</Text>
+ <Text>• Quantised using an N_v = 2000 word codebook.</Text>
+ <Text>• Words and corresponding locations.</Text>
+ <Text>Our Model:</Text>
+ <Figure left="460" right="354" width="401" height="239" no="4" OriWidth="0.363584" OriHeight="0.146113" />
+ <Text>Observed variables: O = {x_j, l_j}_{j=1}^{J} (low-level feature words and corresponding locations).</Text>
+ <Text>Latent variables: H = {{π_k}_{k=1}^{K}, {y_j, μ_kj, Λ_kj, θ_j}_{k=1,j=1}^{K,J}} (for each topic k and image j).</Text>
+ <Text>Given parameters: Π = {π_k^0, μ_k^0, Λ_k^0, β_k^0, ν_k^0}_{k=1}^{K}, {α_j}_{j=1}^{J} (label information and priors).</Text>
+ <Text>Joint distribution: p(O, H | Π) = ∏_{k=1}^{K} p(π_k | π_k^0) ∏_{j=1}^{J} [ p(θ_j | α_j) ∏_{k=1}^{K} p(μ_jk, Λ_jk | μ_k^0, Λ_k^0, β_k^0, ν_k^0) ∏_{i=1}^{N_j} p(x_ij | y_ij, θ_j) p(y_ij | θ_j) ]</Text>
+ <Text>Prior Knowledge:</Text>
+ <Text>• Human knowledge about objects and their relationships with backgrounds:</Text>
+ <Text>• Objects are compact whilst background is spread across the image.</Text>
+ <Text>• Objects stand out against background.</Text>
+ <Text>• Transferred knowledge: appearance and geometry information from an existing dataset.</Text>
+ <Text>Object Localisation:</Text>
+ <Text>• Our-Gaussian: aligning a window to the ellipse obtained from q(μ, Λ).</Text>
+ <Text>• Our-Sampling: non-maximum suppression sampling over the heat-map.</Text>
+ </Panel>
+
+ <Panel left="455" right="602" width="818" height="203">
+ <Text>Results</Text>
+ <Text>Dataset: PASCAL VOC 2007. Three variants are used:</Text>
+ <Text>• VOC07-6×2: 6 classes with Left and Right poses, 12 classes in total.</Text>
+ <Text>• VOC07-14: 14 classes; the other 6 were used as annotated auxiliary data.</Text>
+ <Text>• VOC07-20: all 20 classes; each class contains all pose data.</Text>
+ <Text>PASCAL criterion: intersection-over-union > 0.5 between ground-truth and predicted box.</Text>
+ <Text>Comparison with state-of-the-art:</Text>
+ <Text>• Initialisation: localising the object of interest in weakly labelled images.</Text>
+ <Text>• Refined by detector: a conventional object detector can be trained using the initial annotation, then used to refine the object location.</Text>
+ <Figure left="874" right="634" width="403" height="175" no="5" OriWidth="0.375145" OriHeight="0.152368" />
+ </Panel>
+
+ <Panel left="455" right="805" width="819" height="372">
+ <Text>Example: Foreground Topics</Text>
+ <Figure left="458" right="840" width="818" height="212" no="6" OriWidth="0.775145" OriHeight="0.16622" />
+ <Text>Figs. (c) and (d) illustrate that the object of interest “explains away” other objects of no interest.</Text>
+ <Text>• A car is successfully located in Fig. (c) using the heat map of the car topic.</Text>
+ <Text>• Fig. (d) shows that the motorbike heat map is quite accurately selective, with minimal response obtained on the other vehicular clutter.</Text>
+ <Text>Fig. (e) indicates how the Gaussian can sometimes give a better location. Fig. (f) shows that the single-Gaussian assumption is not ideal when the foreground topic has a less compact response.</Text>
+ <Text>A failure case is shown in Fig. (g), where a bridge structure resembles the boat in Fig. (a), resulting in a strong response from the foreground topic, whilst the actual boat topic is small and overwhelmed.</Text>
+ </Panel>
+
+ <Panel left="1294" right="193" width="400" height="512">
+ <Text>Example: Background Topics</Text>
+ <Figure left="1308" right="229" width="378" height="336" no="7" OriWidth="0.327168" OriHeight="0.257373" />
+ <Text>Background non-annotated data has been modelled in our framework. Irrelevant pixels will be explained away to reduce confusion with the object.</Text>
+ <Text>Automatically learned background topics have clear semantic meanings, corresponding to common components as shown in the Figure.</Text>
+ <Text>• Some background components are mixed, e.g. the water topic gives a strong response to both water and sky. But this is understandable since water and sky are almost visually indistinguishable in the image.</Text>
+ </Panel>
+
+ <Panel left="1295" right="705" width="399" height="292">
+ <Text>Example: Semi-supervised Learning</Text>
+ <Figure left="1303" right="745" width="396" height="139" no="8" OriWidth="0.37341" OriHeight="0.101877" />
+ <Text>• Unknown images can be set as α_j^fg = 0.1 (soft constraint).</Text>
+ <Text>• 10% labelled data + 90% unlabelled (relevant) or unrelated data.</Text>
+ <Text>• Evaluating on (1) the initially annotated 10% of the data (standard WSOL) and (2) the test part of the dataset (localising objects in new images).</Text>
+ <Text>• The figure clearly shows that unlabelled data helps to learn a better object model.</Text>
+ </Panel>
+
+ <Panel left="1296" right="998" width="397" height="180">
+ <Text>References</Text>
+ <Text>[1] T. Deselaers, B. Alexe, and V. Ferrari. Weakly supervised localization and learning with generic knowledge. IJCV, 2012.</Text>
+ <Text>[2] M. Pandey and S. Lazebnik. Scene recognition and weakly supervised object localization with deformable part-based models. In ICCV, 2011.</Text>
+ <Text>[3] P. Siva and T. Xiang. Weakly supervised object detector learning with model drift detection. In ICCV, 2011.</Text>
+ <Text>[4] P. Siva, C. Russell, and T. Xiang. In defence of negative mining for annotating weakly labelled data. In ECCV, 2012.</Text>
+ </Panel>
+
+ </Poster>
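
The poster above scores localisation with the PASCAL criterion: a predicted box counts as correct when its intersection-over-union with the ground-truth box exceeds 0.5. A minimal Python sketch of that criterion, assuming axis-aligned boxes given as (x1, y1, x2, y2) tuples; the function names are ours, not from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; empty (zero area) if the boxes are disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / float(area_a + area_b - inter)

def is_correct(pred, gt, threshold=0.5):
    """PASCAL criterion: IoU with the ground-truth box must exceed 0.5."""
    return iou(pred, gt) > threshold
```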
Train/Being John Malkovich/Being John Malkovich-Poster.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3248e67a89985126282d33a6d6e9453b42aa8c759fd99eb27e60c07234dab834
+ size 9692817
Train/Being John Malkovich/Being John Malkovich.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9456efa1ab6496e6b56fa5046005bafe1b0b615e7c2291e2c5d07fda29ce3db3
+ size 6540596
Train/Being John Malkovich/info.txt ADDED
@@ -0,0 +1,365 @@
+ <Poster Width="1734" Height="1340">
+ <Panel left="85" right="348" width="782" height="370">
+ <Text>Our contribution: use your face to drive someone else.</Text>
+ <Text>• A fully automatic real-time framework that combines a number of face processing components in a novel way</Text>
+ <Text>• Works with any unstructured photo collection and/or video sequence</Text>
+ <Text>• No training or labeling</Text>
+ <Figure left="417" right="359" width="445" height="169" no="1" OriWidth="0.575406" OriHeight="0.125458" />
+ <Figure left="200" right="570" width="621" height="138" no="2" OriWidth="0" OriHeight="0" />
+ </Panel>
+
+ <Panel left="87" right="719" width="779" height="396">
+ <Text>Results:</Text>
+ <Text>Puppeteering evaluation (full measure):</Text>
+ <Figure left="123" right="779" width="322" height="115" no="3" OriWidth="0.589861" OriHeight="0.149490" />
+ <Text>Without mouth similarity:</Text>
+ <Figure left="124" right="916" width="323" height="57" no="4" OriWidth="0.586981" OriHeight="0.073727" />
+ <Text>Without eyes similarity:</Text>
+ <Figure left="122" right="997" width="322" height="113" no="5" OriWidth="0.586981" OriHeight="0.149898" />
+ <Text>Cameron Diaz drives John Malkovich:</Text>
+ <Figure left="472" right="783" width="351" height="126" no="6" OriWidth="0.586981" OriHeight="0.151527" />
+ <Text>User drives George W. Bush (870 photos in Bush’s dataset):</Text>
+ <Figure left="473" right="984" width="350" height="127" no="7" OriWidth="0.586405" OriHeight="0.149898" />
+ </Panel>
+
+ <Panel left="868" right="350" width="777" height="764">
+ <Text>The method:</Text>
+ <Text>Image alignment to canonical pose:</Text>
+ <Text>Photo collections: face and fiducial points detection (Everingham et al. 06)</Text>
+ <Figure left="1030" right="428" width="78" height="81" no="8" OriWidth="0" OriHeight="0" />
+ <Text>Webcam/video seq.: real-time tracking (Saragih et al. 09)</Text>
+ <Figure left="1027" right="522" width="80" height="71" no="9" OriWidth="0.195852" OriHeight="0.116089" />
+ <Figure left="1137" right="448" width="109" height="98" no="10" OriWidth="0" OriHeight="0" />
+ <Text>2D aligned:</Text>
+ <Figure left="1277" right="440" width="323" height="64" no="11" OriWidth="0.591013" OriHeight="0.07454" />
+ <Text>Warped to frontal pose:</Text>
+ <Figure left="1280" right="528" width="319" height="65" no="12" OriWidth="0.446428" OriHeight="0.063951" />
+ <Text>Appearance representation:</Text>
+ <Text>• LBP (Local Binary Pattern) histograms (Ahonen et al. 06)</Text>
+ <Text>• Applied on warped images</Text>
+ <Text>• Only for mouth and eyes regions</Text>
+ <Text>• Mouth region divided into 3x5 blocks</Text>
+ <Text>• Eye region divided into 3x2 blocks</Text>
+ <Figure left="1278" right="646" width="140" height="98" no="13" OriWidth="0.210253" OriHeight="0.103462" />
+ <Text>Distance measure: the distance between input frame i and target frame j is:</Text>
+ <Text>Appearance: d_appear(i, j) = α_m d_m(i, j) + α_e d_e(i, j)</Text>
+ <Text>d_{m,e}: LBP histogram χ² distances restricted to the mouth and eyes regions; α_{m,e}: corresponding weights</Text>
+ <Text>Pose: d_pose(i, j) = L(|Y_i − Y_j|) + L(|P_i − P_j|) + L(|R_i − R_j|)</Text>
+ <Text>Y: yaw, P: pitch, R: roll; L(d): robust logistic normalization function</Text>
+ <Text>Temporal continuity: appearance dist. between frame i−1 and …</Text>
+ <Text>Acknowledgments: This work was supported in part by Adobe and the University of Washington Animation Research Labs. We gratefully acknowledge Jason Saragih for providing the face tracking software. Also, in our experiments we used:</Text>
+ <Text>- videos of Cameron Diaz, George Clooney and John Malkovich downloaded from YouTube and mefeedia.com</Text>
+ <Text>- a collection of photos of George W. Bush from the LFW face database.</Text>
+ </Panel>
+
+ </Poster>
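
The appearance distance reconstructed above, d_appear(i, j) = α_m d_m(i, j) + α_e d_e(i, j), combines χ² distances between LBP histograms of the mouth and eye regions. A minimal Python sketch of that computation, assuming the per-region histograms are already extracted; the equal default weights are placeholders, not values from the paper, and the 0.5 factor in the χ² distance is one common convention.

```python
import numpy as np

def chi2(h1, h2, eps=1e-10):
    """Chi-squared distance between two histograms (eps avoids 0/0)."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def d_appear(mouth_i, mouth_j, eyes_i, eyes_j, alpha_m=0.5, alpha_e=0.5):
    """Weighted sum of region-wise chi-squared distances, as in the formula above."""
    return alpha_m * chi2(mouth_i, mouth_j) + alpha_e * chi2(eyes_i, eyes_j)
```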
Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features-Poster.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7d0362a8d677694eceb26ef63d0410b926374cb81646045f54ca1fde934b994
+ size 390301
Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9a3b1a27b376a5493b42226c48d3b9e836ba9c6d9ab80ea51722566956c986f
+ size 1174942
Train/Beyond Spatial Pyramids- Receptive Field Learning for Pooled Image Features/info.txt ADDED
@@ -0,0 +1,134 @@
+ <Poster Width="1734" Height="1227">
+ <Panel left="31" right="156" width="408" height="237">
+ <Text>1. CONTRIBUTIONS</Text>
+ <Text>The key contributions of our work are:</Text>
+ <Text>• Analysis of the spatial receptive field (RF) designs for pooled features.</Text>
+ <Text>• Evidence that spatial pyramids may be suboptimal in feature generation.</Text>
+ <Text>• An algorithm that jointly learns adaptive RFs and the classifiers, with an efficient implementation using over-completeness and structured sparsity.</Text>
+ </Panel>
+
+ <Panel left="451" right="157" width="828" height="238">
+ <Text>2. THE PIPELINE</Text>
+ <Figure left="532" right="194" width="667" height="137" no="1" OriWidth="0.555491" OriHeight="0.0848972" />
+ <Text>State-of-the-art classification algorithms take a two-layer pipeline: the coding layer learns activations from local image patches, and the pooling layer aggregates activations in multiple spatial regions. Linear classifiers are learned from the pooled features.</Text>
+ </Panel>
+
+ <Panel left="1293" right="156" width="407" height="239">
+ <Text>3. NEUROSCIENCE INSPIRATION</Text>
+ <Figure left="1347" right="199" width="314" height="178" no="2" OriWidth="0" OriHeight="0" />
+ </Panel>
+
+ <Panel left="30" right="408" width="409" height="408">
+ <Text>4. SPATIAL POOLING REVISITED</Text>
+ <Text>• Much work has been done on the coding part, while the spatial pooling methods are often hand-crafted.</Text>
+ <Text>• Sample performances on CIFAR-10 with different receptive field designs:</Text>
+ <Figure left="102" right="553" width="280" height="132" no="3" OriWidth="0" OriHeight="0" />
+ <Text>Note the suboptimality of SPM: random selection from an overcomplete set of spatially pooled features consistently outperforms SPM.</Text>
+ <Text>• We propose to learn the spatial receptive fields as well as the codes and the classifier.</Text>
+ </Panel>
+
+ <Panel left="31" right="830" width="406" height="364">
+ <Text>5. NOTATIONS</Text>
+ <Text>• I: image input.</Text>
+ <Text>• A^1, …, A^K: code activations as matrices, with A^k_ij the activation of code k at position (i, j).</Text>
+ <Text>• R_i: RF of the i-th pooled feature.</Text>
+ <Text>• op(·): pooling operator, such as max(·).</Text>
+ <Text>• f(x, θ): the classifier based on pooled features x.</Text>
+ <Text>• A pooled feature x_i is defined by choosing a code indexed by c_i and a spatial RF R_i: x_i = op(A^{c_i}_{R_i}).</Text>
+ <Text>The vector of pooled features x is then determined by the set of parameters C = {c_1, …, c_M} and R = {R_1, …, R_M}.</Text>
+ </Panel>
+
+ <Panel left="452" right="410" width="406" height="361">
+ <Text>6. THE LEARNING PROBLEM</Text>
+ <Text>• Given a set of training data {(I_n, y_n)}_{n=1}^N, we jointly learn the classifier and the pooled features as (assuming that coding is done in an unsupervised way):</Text>
+ <Text>• Advantage: pooled features are tailored towards the classification task (also reduces redundancy).</Text>
+ <Text>• Disadvantage: may be intractable, with an exponential number of possible receptive fields.</Text>
+ <Text>• Solution: reasonably overcomplete receptive field candidates + sparsity constraints to control the number of final features.</Text>
+ </Panel>
+
+ <Panel left="452" right="785" width="406" height="411">
+ <Text>7. OVERCOMPLETE RF</Text>
+ <Text>• We propose to use overcomplete receptive field candidates based on regular grids:</Text>
+ <Figure left="487" right="879" width="102" height="100" no="4" OriWidth="0.104624" OriHeight="0.0817694" />
+ <Text>(a) Base</Text>
+ <Figure left="606" right="878" width="102" height="101" no="5" OriWidth="0.104624" OriHeight="0.080429" />
+ <Text>(b) SPM</Text>
+ <Figure left="725" right="878" width="102" height="101" no="6" OriWidth="0.103468" OriHeight="0.0808758" />
+ <Text>(c) Ours</Text>
+ <Text>• Structured sparsity regularization is adopted to select only a subset of features for classification:</Text>
+ </Panel>
+
+ <Panel left="871" right="409" width="407" height="787">
+ <Text>8. GREEDY FEATURE SELECTION</Text>
+ <Text>• Directly performing the optimization is still time and memory consuming.</Text>
+ <Text>• Following [Perkins JMLR03], we adopted an incremental, greedy approach to select features based on their scores:</Text>
+ <Text>• After each increment, the model is retrained only with respect to an active subset of selected features to ensure fast re-training:</Text>
+ <Figure left="983" right="753" width="193" height="160" no="7" OriWidth="0.212717" OriHeight="0.139857" />
+ <Text>• Benefit of overcompleteness in spatial pooling + feature selection: higher performance with smaller codebooks and lower feature dimensions.</Text>
+ <Figure left="981" right="1008" width="222" height="169" no="8" OriWidth="0.244509" OriHeight="0.150134" />
+ </Panel>
+
+ <Panel left="1293" right="410" width="405" height="540">
+ <Text>9. RESULTS</Text>
+ <Text>• Performance comparison on CIFAR-10 with state-of-the-art approaches:</Text>
+ <Figure left="1332" right="506" width="354" height="213" no="9" OriWidth="0.343353" OriHeight="0.152368" />
+ <Text>• Result on MNIST and the 1-vs-1 saliency map obtained from our algorithm:</Text>
+ <Figure left="1327" right="790" width="359" height="155" no="10" OriWidth="0.358382" OriHeight="0.120197" />
+ </Panel>
+
+ <Panel left="1294" right="964" width="404" height="230">
+ <Text>10. REFERENCES</Text>
+ <Text>• A. Coates and A.Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. ICML 2011.</Text>
+ <Text>• S. Perkins, K. Lacker, and J. Theiler. Grafting: fast, incremental feature selection by gradient descent in function space. JMLR, 3:1333–1356, 2003.</Text>
+ <Text>• D.H. Hubel and T.N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. of Physiology, 160(1):106–154, 1962.</Text>
+ </Panel>
+
+ </Poster>
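
Panel 5 above defines a pooled feature by choosing a code c_i and a spatial receptive field R_i, x_i = op(A^{c_i}_{R_i}), with op(·) such as max(·). A minimal Python sketch with synthetic activation maps and a rectangular receptive field; the helper name `pooled_feature` is ours, not from the paper.

```python
import numpy as np

def pooled_feature(activations, code_idx, rf, op=np.max):
    """x_i = op of activation map `code_idx` restricted to rectangle `rf`.

    activations: array of shape (K, H, W), one activation map per code.
    rf: (r0, r1, c0, c1) row/column bounds of the receptive field.
    """
    r0, r1, c0, c1 = rf
    return op(activations[code_idx, r0:r1, c0:c1])

A = np.random.rand(4, 8, 8)                           # K=4 codes on an 8x8 grid
x_i = pooled_feature(A, code_idx=2, rf=(0, 4, 0, 4))  # one candidate RF (top-left quadrant)
```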
Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY-Poster.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa62652716affff520bfd90b12931ad4cad3054eda70b387741d83ceb1d29faf
+ size 1154088
Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad6e35ed92035755ebe95d9e3cffd0327a5b137eb4fc7b1d2ea00d2a960802a3
+ size 430436
Train/CATALYST ENHANCED MICRO SCALE BATCH ASSEMBLY/info.txt ADDED
@@ -0,0 +1,69 @@
+ <Poster Width="1734" Height="1900">
+ <Panel left="54" right="437" width="523" height="181">
+ <Text>Abstract</Text>
+ <Text>We enhance the efficiency of assembly of microparts in the batch dry assembly methods studied previously by our group. Here we study the system dynamics with the addition of a few non-participating millimeter-scale parts that act as ‘catalysts’. We present experimental results that show a 25-50% reduction in the acceleration needed to trigger part motion and up to a 4x increase in the concentration of parts in motion due to the addition of catalysts. We adapt a model from chemical kinetic theory to understand our system behavior.</Text>
+ </Panel>
+
+ <Panel left="56" right="650" width="525" height="1077">
+ <Text>Analogy with Chemical Kinetics</Text>
+ <Figure left="59" right="701" width="518" height="483" no="1" OriWidth="0.375723" OriHeight="0.206434" />
+ <Figure left="119" right="1233" width="384" height="237" no="2" OriWidth="0" OriHeight="0" />
+ <Figure left="118" right="1493" width="384" height="235" no="3" OriWidth="0" OriHeight="0" />
+ </Panel>
+
+ <Panel left="604" right="436" width="515" height="1376">
+ <Text>Experimental Setup and Data Collection/Analysis Capabilities</Text>
+ <Text>• Parts (800x800x50 µm³) and catalysts (2x2x0.5 mm³) are made respectively from SOI/silicon wafers using standard lithography and DRIE etching.</Text>
+ <Text>• A high-speed camera is used to capture part motion.</Text>
+ <Text>• Dedicated Matlab routines were developed for image processing and subsequent data reduction.</Text>
+ <Figure left="627" right="658" width="491" height="378" no="4" OriWidth="0.410983" OriHeight="0.288204" />
+ <Figure left="627" right="1065" width="492" height="238" no="5" OriWidth="0.34104" OriHeight="0.218499" />
+ <Figure left="627" right="1335" width="491" height="187" no="6" OriWidth="0" OriHeight="0" />
+ <Figure left="628" right="1556" width="490" height="255" no="7" OriWidth="0.357225" OriHeight="0.147006" />
+ </Panel>
+
+ <Panel left="1163" right="435" width="526" height="878">
+ <Text>Results</Text>
+ <Figure left="1171" right="482" width="523" height="314" no="8" OriWidth="0.350289" OriHeight="0.193476" />
+ <Figure left="1172" right="821" width="519" height="494" no="9" OriWidth="0.371098" OriHeight="0.145219" />
+ </Panel>
+
+ <Panel left="1162" right="1330" width="526" height="473">
+ <Text>Conclusion</Text>
+ <Text>• The ‘catalyst’ is a promising new concept in dry self-assembly.</Text>
+ <Text>• Infrastructure for automated assembly analysis has been developed.</Text>
+ <Text>• Chemical-kinetics-analogous models and empirical data are available.</Text>
+ <Text>• Future developments include automated accounting of assembly in assembly sites.</Text>
+ <Figure left="1187" right="1499" width="490" height="187" no="10" OriWidth="0" OriHeight="0" />
+ </Panel>
+
+ <Panel left="1163" right="1820" width="487" height="107">
+ <Text>Acknowledgements</Text>
+ <Text>• This work was supported by research grants from Intel Corporation.</Text>
+ <Text>• The authors thank members of the UW-MEMS Lab for their feedback and assistance.</Text>
+ </Panel>
+
+ </Poster>
Train/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS/CRYOGENIC CATHODOLUMINESCENCE FROM CuxAg1-xInSe2 THIN FILMS-Poster.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:215c54c99653da4c9f584d88e27c3546e2180ba77ae00881ecca0f36805c2d74
+ size 399612