robinwitch committed on
Commit 9ad5b1d · 1 Parent(s): 078d14b
This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. tools/visualization_0416/utils/__pycache__/__init__.cpython-310.pyc +0 -0
  2. tools/visualization_0416/utils/__pycache__/__init__.cpython-311.pyc +0 -0
  3. tools/visualization_0416/utils/__pycache__/face_detector.cpython-310.pyc +0 -0
  4. tools/visualization_0416/utils/__pycache__/face_detector.cpython-311.pyc +0 -0
  5. tools/visualization_0416/utils/model_0506/__init__.py +0 -0
  6. tools/visualization_0416/utils/model_0506/__pycache__/utils.cpython-310.pyc +0 -0
  7. tools/visualization_0416/utils/model_0506/__pycache__/utils.cpython-311.pyc +0 -0
  8. tools/visualization_0416/utils/model_0506/__pycache__/utils.cpython-313.pyc +0 -0
  9. tools/visualization_0416/utils/model_0506/model/__init__.py +0 -0
  10. tools/visualization_0416/utils/model_0506/model/__pycache__/__init__.cpython-310.pyc +0 -0
  11. tools/visualization_0416/utils/model_0506/model/__pycache__/__init__.cpython-311.pyc +0 -0
  12. tools/visualization_0416/utils/model_0506/model/__pycache__/__init__.cpython-312.pyc +0 -0
  13. tools/visualization_0416/utils/model_0506/model/__pycache__/__init__.cpython-313.pyc +0 -0
  14. tools/visualization_0416/utils/model_0506/model/basic_model/__init__.py +0 -0
  15. tools/visualization_0416/utils/model_0506/model/basic_model/__pycache__/basic_block.cpython-310.pyc +0 -0
  16. tools/visualization_0416/utils/model_0506/model/basic_model/__pycache__/resnet.cpython-310.pyc +0 -0
  17. tools/visualization_0416/utils/model_0506/model/basic_model/__pycache__/resnet.cpython-312.pyc +0 -0
  18. tools/visualization_0416/utils/model_0506/model/basic_model/basic_block.py +163 -0
  19. tools/visualization_0416/utils/model_0506/model/basic_model/frnet.py +177 -0
  20. tools/visualization_0416/utils/model_0506/model/basic_model/resnet.py +182 -0
  21. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/appearance_encoder.cpython-310.pyc +0 -0
  22. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/appearance_encoder.cpython-312.pyc +0 -0
  23. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/args.cpython-310.pyc +0 -0
  24. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/args.cpython-312.pyc +0 -0
  25. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/expression_embedder.cpython-310.pyc +0 -0
  26. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/expression_embedder.cpython-312.pyc +0 -0
  27. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_decoder.cpython-310.pyc +0 -0
  28. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_decoder.cpython-312.pyc +0 -0
  29. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_decoder_spade.cpython-310.pyc +0 -0
  30. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_decoder_spade.cpython-312.pyc +0 -0
  31. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_encoder.cpython-310.pyc +0 -0
  32. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_encoder.cpython-312.pyc +0 -0
  33. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_generator.cpython-310.pyc +0 -0
  34. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_generator.cpython-312.pyc +0 -0
  35. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/flow_estimator.cpython-310.pyc +0 -0
  36. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/flow_estimator.cpython-312.pyc +0 -0
  37. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/head_pose_regressor.cpython-310.pyc +0 -0
  38. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/head_pose_regressor.cpython-312.pyc +0 -0
  39. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/identity_embedder.cpython-310.pyc +0 -0
  40. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/identity_embedder.cpython-312.pyc +0 -0
  41. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/motion_encoder.cpython-310.pyc +0 -0
  42. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/motion_encoder.cpython-312.pyc +0 -0
  43. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/point_transforms.cpython-310.pyc +0 -0
  44. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/point_transforms.cpython-312.pyc +0 -0
  45. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/resblocks_3d.cpython-310.pyc +0 -0
  46. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/resblocks_3d.cpython-312.pyc +0 -0
  47. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/spectral_norm.cpython-310.pyc +0 -0
  48. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/spectral_norm.cpython-312.pyc +0 -0
  49. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/unet_3d.cpython-310.pyc +0 -0
  50. tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/unet_3d.cpython-312.pyc +0 -0
tools/visualization_0416/utils/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (220 Bytes).
 
tools/visualization_0416/utils/__pycache__/__init__.cpython-311.pyc ADDED
Binary file (178 Bytes).
 
tools/visualization_0416/utils/__pycache__/face_detector.cpython-310.pyc ADDED
Binary file (11.4 kB).
 
tools/visualization_0416/utils/__pycache__/face_detector.cpython-311.pyc ADDED
Binary file (25.4 kB).
 
tools/visualization_0416/utils/model_0506/__init__.py ADDED
File without changes
tools/visualization_0416/utils/model_0506/__pycache__/utils.cpython-310.pyc ADDED
Binary file (2.73 kB).
 
tools/visualization_0416/utils/model_0506/__pycache__/utils.cpython-311.pyc ADDED
Binary file (4.98 kB).
 
tools/visualization_0416/utils/model_0506/__pycache__/utils.cpython-313.pyc ADDED
Binary file (4.28 kB).
 
tools/visualization_0416/utils/model_0506/model/__init__.py ADDED
File without changes
tools/visualization_0416/utils/model_0506/model/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (237 Bytes).
 
tools/visualization_0416/utils/model_0506/model/__pycache__/__init__.cpython-311.pyc ADDED
Binary file (197 Bytes).
 
tools/visualization_0416/utils/model_0506/model/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (162 Bytes).
 
tools/visualization_0416/utils/model_0506/model/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (239 Bytes).
 
tools/visualization_0416/utils/model_0506/model/basic_model/__init__.py ADDED
File without changes
tools/visualization_0416/utils/model_0506/model/basic_model/__pycache__/basic_block.cpython-310.pyc ADDED
Binary file (6.71 kB).
 
tools/visualization_0416/utils/model_0506/model/basic_model/__pycache__/resnet.cpython-310.pyc ADDED
Binary file (5.52 kB).
 
tools/visualization_0416/utils/model_0506/model/basic_model/__pycache__/resnet.cpython-312.pyc ADDED
Binary file (10.7 kB).
 
tools/visualization_0416/utils/model_0506/model/basic_model/basic_block.py ADDED
@@ -0,0 +1,163 @@
"""
Reference: https://github.com/hedra-labs/one-shot-face/blob/mega-portraits/models/building_blocks.py
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.parametrizations import spectral_norm

USE_BIAS = False


# https://github.com/joe-siyuan-qiao/WeightStandardization?tab=readme-ov-file#pytorch
class WSConv2d(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        super(WSConv2d, self).__init__(*args, **kwargs)

    def forward(self, inp):
        # Standardize each output filter's weights (zero mean, unit std) before convolving.
        weight = self.weight
        weight_mean = weight.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        weight = weight - weight_mean
        std = weight.view(weight.size(0), -1).std(dim=1).view(-1, 1, 1, 1) + 1e-5
        weight = weight / std.expand_as(weight)
        return F.conv2d(inp, weight, self.bias, self.stride, self.padding, self.dilation, self.groups)


class WSConv3d(nn.Conv3d):
    def __init__(self, *args, **kwargs):
        super(WSConv3d, self).__init__(*args, **kwargs)

    def forward(self, inp):
        weight = self.weight
        weight_mean = weight.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True).mean(dim=4, keepdim=True)
        weight = weight - weight_mean
        std = weight.view(weight.size(0), -1).std(dim=1).view(-1, 1, 1, 1, 1) + 1e-5
        weight = weight / std.expand_as(weight)
        return F.conv3d(inp, weight, self.bias, self.stride, self.padding, self.dilation, self.groups)


class ResBlock2d(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, num_channels_per_group: int, use_spectral_norm: bool = False):
        super().__init__()

        norm_func = lambda x: x
        if use_spectral_norm:
            norm_func = spectral_norm

        if in_channels != out_channels:
            self.skip_layer = norm_func(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=USE_BIAS))
        else:
            self.skip_layer = lambda x: x

        self.layers = nn.Sequential(
            nn.GroupNorm(in_channels // num_channels_per_group, in_channels, affine=not USE_BIAS),
            nn.ReLU(inplace=True),
            norm_func(WSConv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=USE_BIAS)),
            nn.GroupNorm(out_channels // num_channels_per_group, out_channels, affine=not USE_BIAS),
            nn.ReLU(inplace=True),
            norm_func(nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=USE_BIAS)),
        )

    def forward(self, inp: torch.Tensor):
        return self.skip_layer(inp) + self.layers(inp)


class ResBlock3d(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, num_channels_per_group: int):
        super().__init__()

        if in_channels != out_channels:
            self.skip_layer = nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1, bias=USE_BIAS)
        else:
            self.skip_layer = lambda x: x

        self.layers = nn.Sequential(
            nn.GroupNorm(in_channels // num_channels_per_group, in_channels, affine=not USE_BIAS),
            nn.ReLU(inplace=True),
            WSConv3d(in_channels, out_channels, kernel_size=3, padding=1, bias=USE_BIAS),
            nn.GroupNorm(out_channels // num_channels_per_group, out_channels, affine=not USE_BIAS),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1, bias=USE_BIAS),
        )

    def forward(self, inp: torch.Tensor):
        return self.skip_layer(inp) + self.layers(inp)


class ResBasic(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, stride: int, num_channels_per_group: int):
        super().__init__()

        if stride != 1 and stride != 2:
            raise NotImplementedError(f"Stride can only be 1 or 2, but '{stride}' was passed.")

        if in_channels != out_channels or stride != 1:
            self.skip_layer = nn.Sequential(
                WSConv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=USE_BIAS),
                nn.GroupNorm(out_channels // num_channels_per_group, out_channels, affine=not USE_BIAS),
            )
        else:
            self.skip_layer = lambda x: x

        self.layers = nn.Sequential(
            WSConv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=USE_BIAS),
            nn.GroupNorm(out_channels // num_channels_per_group, out_channels, affine=not USE_BIAS),
            nn.ReLU(inplace=True),
            WSConv2d(out_channels, out_channels, kernel_size=1, bias=USE_BIAS),
            nn.GroupNorm(out_channels // num_channels_per_group, out_channels, affine=not USE_BIAS),
        )

    def forward(self, inp: torch.Tensor):
        return F.relu(self.skip_layer(inp) + self.layers(inp))


class ResBottleneck(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, stride: int, num_channels_per_group: int):
        super().__init__()

        if stride != 1 and stride != 2:
            raise NotImplementedError(f"Stride can only be 1 or 2, but '{stride}' was passed.")

        if in_channels != out_channels or stride != 1:
            self.skip_layer = nn.Sequential(
                WSConv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=USE_BIAS),
                nn.GroupNorm(out_channels // num_channels_per_group, out_channels, affine=not USE_BIAS),
            )
        else:
            self.skip_layer = lambda x: x

        temp_out_channels = out_channels // 4
        self.layers = nn.Sequential(
            WSConv2d(in_channels, temp_out_channels, kernel_size=1, bias=USE_BIAS),
            nn.GroupNorm(temp_out_channels // num_channels_per_group, temp_out_channels, affine=not USE_BIAS),
            nn.ReLU(inplace=True),
            WSConv2d(temp_out_channels, temp_out_channels, kernel_size=3, stride=stride, padding=1, bias=USE_BIAS),
            nn.GroupNorm(temp_out_channels // num_channels_per_group, temp_out_channels, affine=not USE_BIAS),
            nn.ReLU(inplace=True),
            WSConv2d(temp_out_channels, out_channels, kernel_size=1, bias=USE_BIAS),
            nn.GroupNorm(out_channels // num_channels_per_group, out_channels, affine=not USE_BIAS),
        )

    def forward(self, inp: torch.Tensor):
        return F.relu(self.skip_layer(inp) + self.layers(inp))


class ReshapeTo3DLayer(nn.Module):
    def __init__(self, out_depth: int):
        super().__init__()

        self.out_depth = out_depth

    def forward(self, inp: torch.Tensor):
        batch_size, channels, height, width = inp.shape
        return inp.view(batch_size, channels // self.out_depth, self.out_depth, height, width)


class ReshapeTo2DLayer(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, inp: torch.Tensor):
        batch_size, channels, depth, height, width = inp.shape
        return inp.view(batch_size, channels * depth, height, width)
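The WSConv layers apply weight standardization: before each convolution, every output filter's weights are shifted to zero mean and scaled by their standard deviation. A minimal pure-Python sketch of that per-filter step (the helper name `standardize_filter` is ours, not part of the file):

```python
def standardize_filter(weights, eps=1e-5):
    """Shift a filter's weights to zero mean and divide by their std,
    mirroring what WSConv2d does to each output channel before F.conv2d."""
    mean = sum(weights) / len(weights)
    centered = [w - mean for w in weights]
    # torch.Tensor.std() is unbiased by default, i.e. divides by n - 1
    var = sum(c * c for c in centered) / (len(centered) - 1)
    return [c / (var ** 0.5 + eps) for c in centered]

print(standardize_filter([1.0, 2.0, 3.0, 4.0]))
```

The standardized weights sum to (numerically) zero and have std ≈ 1 regardless of the filter's original scale, which is what lets these blocks drop bias terms (`USE_BIAS = False`) and rely on GroupNorm.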
tools/visualization_0416/utils/model_0506/model/basic_model/frnet.py ADDED
@@ -0,0 +1,177 @@
"""
Reference: https://github.com/yfeng95/DECA/blob/a11554ae2a2b0f3998cf1fa94dd4db03babb34a2/decalib/models/frnet.py
"""
import math
import pickle

import torch
import torch.nn as nn
import torch.nn.functional as F


def conv3x3(in_planes, out_planes, stride=1):
    """3x3 convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


class ResNet(nn.Module):

    def __init__(self, block, layers, num_classes=1000, include_top=True):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.include_top = include_top

        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=0, ceil_mode=True)

        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512 * block.expansion, num_classes)

        # He-style initialization: conv weights ~ N(0, sqrt(2 / n)), n = kernel area * out_channels
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)

        if not self.include_top:
            return x

        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x


def resnet50(**kwargs):
    """Constructs a ResNet-50 model."""
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    return model


def load_state_dict(model, fname):
    """
    Set parameters converted from the Caffe models the VGGFace2 authors provide.
    See https://www.robots.ox.ac.uk/~vgg/data/vgg_face2/.
    Arguments:
        model: model
        fname: file name of parameters converted from a Caffe model; the file format is assumed to be Pickle.
    """
    with open(fname, 'rb') as f:
        weights = pickle.load(f, encoding='latin1')

    own_state = model.state_dict()
    for name, param in weights.items():
        if name in own_state:
            try:
                own_state[name].copy_(torch.from_numpy(param))
            except Exception:
                # numpy arrays expose .shape; .size is an int attribute, not a method
                raise RuntimeError('While copying the parameter named {}, the dimensions in the model were {} and the '
                                   'dimensions in the checkpoint were {}.'.format(name, own_state[name].size(), param.shape))
        else:
            raise KeyError('unexpected key "{}" in state_dict'.format(name))
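The init loop in `ResNet.__init__` draws each conv weight from N(0, sqrt(2/n)) with n = k_h * k_w * out_channels, i.e. He-style fan-out initialization. A standalone sketch of just that standard-deviation computation (the helper name `he_fanout_std` is ours, not part of the file):

```python
import math

def he_fanout_std(kernel_h, kernel_w, out_channels):
    """Std used by frnet's init loop: sqrt(2 / n), n = kernel area * out_channels."""
    n = kernel_h * kernel_w * out_channels
    return math.sqrt(2.0 / n)

# e.g. the 7x7 stem conv with 64 output channels:
print(he_fanout_std(7, 7, 64))
```

Larger fan-out means a smaller std, which keeps the variance of ReLU activations roughly constant across layers.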
tools/visualization_0416/utils/model_0506/model/basic_model/resnet.py ADDED
@@ -0,0 +1,182 @@
'''ResNet in PyTorch.

For Pre-activation ResNet, see 'preact_resnet.py'.

Reference:
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
    Deep Residual Learning for Image Recognition. arXiv:1512.03385
'''
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_planes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(
            in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)

        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion * planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion * planes,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion * planes)
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_planes, planes, stride=1):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, self.expansion *
                               planes, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(self.expansion * planes)

        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion * planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion * planes,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion * planes)
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=7):
        super(ResNet, self).__init__()
        self.in_planes = 64

        # self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.conv1 = nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU()
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512 * block.expansion, num_classes)
        self.fc = nn.Linear(512 * block.expansion, num_classes)  # unused by forward(); kept for checkpoint compatibility

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out


class ResNet_AE(nn.Module):
    def __init__(self, block, num_blocks, num_classes=7):
        super(ResNet_AE, self).__init__()
        self.in_planes = 64

        self.conv1 = nn.Conv2d(1, 64, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512 * block.expansion, num_classes)

        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 1, kernel_size=3, stride=1, padding=1, bias=False),
            nn.Sigmoid()
        )

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        out1 = self.conv1(x)

        decoded = self.decoder(out1)

        out = F.relu(self.bn1(out1))
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out, x, decoded


def ResNet18_AE():
    return ResNet_AE(BasicBlock, [2, 2, 2, 2])


def ResNet18():
    return ResNet(BasicBlock, [2, 2, 2, 2])


def ResNet34():
    return ResNet(BasicBlock, [3, 4, 6, 3])


def ResNet50():
    return ResNet(Bottleneck, [3, 4, 6, 3])


def ResNet101():
    return ResNet(Bottleneck, [3, 4, 23, 3])


def ResNet152():
    return ResNet(Bottleneck, [3, 8, 36, 3])


def test():
    net = ResNet18()
    # conv1 expects a single input channel, so the smoke-test input must be 1-channel
    y = net(torch.randn(1, 1, 32, 32))
    print(y.size())

# test()
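`_make_layer` downsamples only in a stage's first block: `strides = [stride] + [1]*(num_blocks-1)` gives the first block the requested stride and every later block stride 1. That schedule in isolation (the helper name `stage_strides` is ours, not part of the file):

```python
def stage_strides(first_stride, num_blocks):
    """Per-block strides for one ResNet stage, as computed in _make_layer."""
    return [first_stride] + [1] * (num_blocks - 1)

# ResNet18 uses num_blocks = [2, 2, 2, 2] with stage strides 1, 2, 2, 2:
print([stage_strides(s, 2) for s in (1, 2, 2, 2)])  # [[1, 1], [2, 1], [2, 1], [2, 1]]
```

So a 32x32 input is halved only by layer2 through layer4, leaving the 4x4 feature map that `F.avg_pool2d(out, 4)` collapses in `forward`.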
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/appearance_encoder.cpython-310.pyc ADDED
Binary file (3.23 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/appearance_encoder.cpython-312.pyc ADDED
Binary file (5.33 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/args.cpython-310.pyc ADDED
Binary file (2.22 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/args.cpython-312.pyc ADDED
Binary file (2.9 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/expression_embedder.cpython-310.pyc ADDED
Binary file (11.5 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/expression_embedder.cpython-312.pyc ADDED
Binary file (24.5 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_decoder.cpython-310.pyc ADDED
Binary file (7.26 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_decoder.cpython-312.pyc ADDED
Binary file (14.1 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_decoder_spade.cpython-310.pyc ADDED
Binary file (2.89 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_decoder_spade.cpython-312.pyc ADDED
Binary file (6.25 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_encoder.cpython-310.pyc ADDED
Binary file (2.1 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_encoder.cpython-312.pyc ADDED
Binary file (2.78 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_generator.cpython-310.pyc ADDED
Binary file (1.76 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/face_generator.cpython-312.pyc ADDED
Binary file (2.29 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/flow_estimator.cpython-310.pyc ADDED
Binary file (4.02 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/flow_estimator.cpython-312.pyc ADDED
Binary file (7.49 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/head_pose_regressor.cpython-310.pyc ADDED
Binary file (1.38 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/head_pose_regressor.cpython-312.pyc ADDED
Binary file (2.01 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/identity_embedder.cpython-310.pyc ADDED
Binary file (2.9 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/identity_embedder.cpython-312.pyc ADDED
Binary file (5.35 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/motion_encoder.cpython-310.pyc ADDED
Binary file (3.46 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/motion_encoder.cpython-312.pyc ADDED
Binary file (5.73 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/point_transforms.cpython-310.pyc ADDED
Binary file (6.99 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/point_transforms.cpython-312.pyc ADDED
Binary file (13.5 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/resblocks_3d.cpython-310.pyc ADDED
Binary file (1.38 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/resblocks_3d.cpython-312.pyc ADDED
Binary file (1.86 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/spectral_norm.cpython-310.pyc ADDED
Binary file (10.4 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/spectral_norm.cpython-312.pyc ADDED
Binary file (16.8 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/unet_3d.cpython-310.pyc ADDED
Binary file (5.86 kB).
 
tools/visualization_0416/utils/model_0506/model/head_animation/EMOP/__pycache__/unet_3d.cpython-312.pyc ADDED
Binary file (12.8 kB).