path | name | repository_name | repository_stars | lang | body (with docstring)
bitsharesbase/operations.py | getOperationNameForId | gdfbacchus/python-bitshares | 0 | python

def getOperationNameForId(i):
    """Convert an operation id into the corresponding string"""
    for key in operations:
        if int(operations[key]) == int(i):  # '==', not 'is': identity comparison of ints is a bug
            return key
    return 'Unknown Operation ID %d' % i
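The lookup above assumes a module-level `operations` mapping from operation names to integer ids, which is not part of the record. A minimal runnable sketch with a hypothetical stand-in table (the real one lives in `bitsharesbase.operations`):

```python
# Hypothetical stand-in for the module-level `operations` mapping; values
# chosen for illustration only.
operations = {'transfer': 0, 'limit_order_create': 1, 'limit_order_cancel': 2}

def getOperationNameForId(i):
    """Convert an operation id into the corresponding string."""
    for key in operations:
        if int(operations[key]) == int(i):  # equality, not identity
            return key
    return 'Unknown Operation ID %d' % i

print(getOperationNameForId(1))   # limit_order_create
print(getOperationNameForId(99))  # Unknown Operation ID 99
```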
segmentation/utils/efficientdet.py | _upsample_add | WangChen0902/FRSKD-Paddle | 0 | python

def _upsample_add(self, x, y):
    """Upsample and add two feature maps.

    Args:
        x: (Variable) top feature map to be upsampled.
        y: (Variable) lateral feature map.
    Returns:
        (Variable) added feature map.

    Note in PyTorch, when the input size is odd, the feature map upsampled
    with `F.upsample(..., scale_factor=2, mode='nearest')` may not equal
    the lateral feature map size, e.g.
        original input size: [N,_,15,15] ->
        conv2d feature map size: [N,_,8,8] ->
        upsampled feature map size: [N,_,16,16]
    So we choose bilinear upsample, which supports arbitrary output sizes.
    """
    _, _, H, W = y.size()
    return F.upsample(x, size=(H, W), mode='bilinear') + y
src/loss_plot.py | init_seed | eambutu/prototypical-pytorch | 4 | python

def init_seed(opt):
    """Disable cudnn to maximize reproducibility"""
    # The source set torch.cuda.cudnn_enabled, which is not a real attribute
    # and silently does nothing; the cuDNN switch lives in torch.backends.
    torch.backends.cudnn.enabled = False
    np.random.seed(opt.manual_seed)
    torch.manual_seed(opt.manual_seed)
    torch.cuda.manual_seed(opt.manual_seed)
src/loss_plot.py | init_dataset | eambutu/prototypical-pytorch | 4 | python

def init_dataset(opt):
    """Initialize the datasets, samplers and dataloaders"""
    if opt.dataset == 'omniglot':
        test_dataset = OmniglotDataset(mode='test')
    elif opt.dataset == 'mini_imagenet':
        test_dataset = MiniImagenetDataset(mode='val')
    else:
        print('Dataset is not valid')
    test_sampler = PrototypicalBatchSampler(labels=test_dataset.y,
                                            classes_per_it=opt.classes_per_it_val,
                                            num_samples=opt.num_support_val + opt.num_query_val,
                                            iterations=opt.iterations)
    test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_sampler=test_sampler)
    return test_dataloader
src/loss_plot.py | init_protonet | eambutu/prototypical-pytorch | 4 | python

def init_protonet(opt):
    """Initialize the ProtoNet"""
    if opt.dataset == 'omniglot':
        model = ProtoNet()
    elif opt.dataset == 'mini_imagenet':
        model = ProtoNet(x_dim=3)
    model = model.cuda() if opt.cuda else model
    return model
src/loss_plot.py | test | eambutu/prototypical-pytorch | 4 | python

def test(opt, test_dataloader, model):
    """Test the model trained with the prototypical learning algorithm"""
    rand_vec = 0.01 * np.random.randn(100, 1600)
    accs = []
    test_iter = iter(test_dataloader)
    batch = next(test_iter)
    for idx in range(101):
        x, y = batch
        x, y = Variable(x), Variable(y)
        if opt.cuda:
            x, y = x.cuda(), y.cuda()
        model_output = model(x)
        means = obtain_mean(model_output, target=y, n_support=opt.num_support_tr)
        if idx < 100:
            means = means.data.numpy()
            means[4] = means[4] + rand_vec[idx]
            means = Variable(torch.FloatTensor(means))
        _, acc = loss(model_output, means, target=y, n_support=opt.num_support_tr)
        print('Test Acc: {}'.format(acc.data[0]))
        accs.append(acc.data[0])
    for idx in range(100):
        if accs[idx] > accs[-1]:
            print('Higher index: {}'.format(idx))
    import pdb
    pdb.set_trace()
    return accs
src/loss_plot.py | eval | eambutu/prototypical-pytorch | 4 | python

def eval(opt):
    """Initialize everything and train"""
    # Note: the source mixes the `opt` argument and the freshly parsed
    # `options`; kept as written.
    options = get_parser().parse_args()
    if torch.cuda.is_available() and not options.cuda:
        print('WARNING: You have a CUDA device, so you should probably run with --cuda')
    init_seed(options)
    test_dataloader = init_dataset(options)
    model = init_protonet(options)
    model_path = os.path.join(opt.experiment_root, 'best_model.pth')
    model.load_state_dict(torch.load(model_path))
    test(opt=options, test_dataloader=test_dataloader, model=model)
amplify/ext/phpfpm/collectors/master/meta.py | bin_path | jeckel/nginx-amplify-agent | 308 | python

def bin_path(self):
    """Compute the bin_path as part of meta collection to be more tolerant of
    users that utilize `pm.ondemand`.  bin_path is also not required for
    our regular running logic so it can safely be moved down a level (to
    this collector that runs on a regular async basis).

    This used to live in manager._find_all() but it is impossible to cache
    the value there.
    """
    if self._bin_path is None:
        all_pids = [self.object.pid] + self.object.workers
        last_exception = None
        for pid in all_pids:
            ls_cmd_template = LS_CMD_FREEBSD if host.linux_name() == 'freebsd' else LS_CMD
            ls_cmd = ls_cmd_template % pid
            try:
                ls, _ = subp.call(ls_cmd)
                context.log.debug('ls "%s" output: %s' % (ls_cmd, ls))
            except Exception as e:
                last_exception = e
            else:
                try:
                    self._bin_path = LS_PARSER(ls[0])
                except Exception as e:
                    exc_name = e.__class__.__name__
                    context.log.debug('failed to parse ls result "%s" due to %s' % (ls[0], exc_name))
                    context.log.debug('additional info:', exc_info=True)
                last_exception = None
                break
        if last_exception:
            exc_name = last_exception.__class__.__name__
            context.log.debug('failed to find php-fpm bin path, last attempt: "%s" failed due to %s' % (ls_cmd, exc_name))
            context.log.debug('additional info:', exc_info=True)
            if context.objects.root_object:
                context.objects.root_object.eventd.event(level=INFO, message='php-fpm bin not found')
    self.meta['bin_path'] = self._bin_path
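`LS_CMD` and `LS_PARSER` are defined elsewhere in the agent and are not part of this record. A plausible sketch of the parsing step — extracting the symlink target from one line of `ls -l /proc/<pid>/exe` output — assuming the standard ` -> ` separator; the helper name and sample line are hypothetical:

```python
def parse_ls_symlink(line):
    """Return the symlink target from one line of `ls -l` output,
    e.g. '... /proc/123/exe -> /usr/sbin/php-fpm' -> '/usr/sbin/php-fpm'."""
    # rsplit so an ' -> ' inside the link name itself would not break parsing
    return line.rsplit(' -> ', 1)[-1].strip()

sample = 'lrwxrwxrwx 1 root root 0 Jan  1 00:00 /proc/123/exe -> /usr/sbin/php-fpm'
print(parse_ls_symlink(sample))  # /usr/sbin/php-fpm
```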
database_builder.py | randomString | Goodkorning/Skyline_operator | 1 | python

def randomString(stringLength=10):
    """Generate a random string of fixed length"""
    letters = string.ascii_lowercase
    return ''.join(random.choice(letters) for i in range(stringLength))
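A self-contained usage sketch of the record above, with the imports the function relies on; the seed call is only to make the demo reproducible:

```python
import random
import string

def randomString(stringLength=10):
    """Generate a random string of fixed length."""
    letters = string.ascii_lowercase
    return ''.join(random.choice(letters) for i in range(stringLength))

random.seed(0)  # reproducible demo only; omit in real use
s = randomString(8)
print(s, len(s))  # 8 random lowercase letters
```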
homeassistant/components/stream/core.py | __init__ | Socalix/core | 4 | python

def __init__(self, hass: HomeAssistant, timeout: int, idle_callback: Callable[[], None]):
    """Initialize IdleTimer."""
    self._hass = hass
    self._timeout = timeout
    self._callback = idle_callback
    self._unsub = None
    self.idle = False
homeassistant/components/stream/core.py | start | Socalix/core | 4 | python

def start(self):
    """Start the idle timer if not already started."""
    self.idle = False
    if self._unsub is None:
        self._unsub = async_call_later(self._hass, self._timeout, self.fire)
homeassistant/components/stream/core.py | awake | Socalix/core | 4 | python

def awake(self):
    """Keep the idle time alive by resetting the timeout."""
    self.idle = False
    self.clear()
    self._unsub = async_call_later(self._hass, self._timeout, self.fire)
homeassistant/components/stream/core.py | clear | Socalix/core | 4 | python

def clear(self):
    """Clear and disable the timer if it has not already fired."""
    if self._unsub is not None:
        self._unsub()
homeassistant/components/stream/core.py | fire | Socalix/core | 4 | python

def fire(self, _now=None):
    """Invoke the idle timeout callback, called when the alarm fires."""
    self.idle = True
    self._unsub = None
    self._callback()
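The IdleTimer records above form one class. A minimal standalone stand-in that uses `threading.Timer` in place of the Home Assistant event loop (`async_call_later`) — an illustrative assumption so the lifecycle can run outside hass, not the real scheduling:

```python
import threading

class IdleTimer:
    """Stand-in for the IdleTimer above; threading.Timer replaces
    async_call_later for this sketch."""

    def __init__(self, timeout, idle_callback):
        self._timeout = timeout
        self._callback = idle_callback
        self._timer = None
        self.idle = False

    def start(self):
        """Start the idle timer if not already started."""
        self.idle = False
        if self._timer is None:
            self._timer = threading.Timer(self._timeout, self.fire)
            self._timer.start()

    def awake(self):
        """Keep the timer alive by resetting the countdown."""
        self.idle = False
        self.clear()
        self._timer = threading.Timer(self._timeout, self.fire)
        self._timer.start()

    def clear(self):
        """Cancel the timer if it has not already fired."""
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def fire(self):
        """Mark idle and invoke the callback."""
        self.idle = True
        self._timer = None
        self._callback()

fired = threading.Event()
t = IdleTimer(0.05, fired.set)
t.start()
fired.wait(1.0)
print(t.idle)  # True
```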
homeassistant/components/stream/core.py | __init__ | Socalix/core | 4 | python

def __init__(self, hass: HomeAssistant):
    """Initialize a stream output."""
    self._hass = hass
homeassistant/components/stream/core.py | container_options | Socalix/core | 4 | python

@property
def container_options(self) -> Callable[[int], dict]:
    """Return Callable which takes a sequence number and returns container options."""
    return None
homeassistant/components/stream/core.py | put | Socalix/core | 4 | python

def put(self, segment: Segment) -> None:
    """Store output."""
    self._hass.loop.call_soon_threadsafe(self._async_put, segment)
homeassistant/components/stream/core.py | _async_put | Socalix/core | 4 | python

@callback
def _async_put(self, segment: Segment) -> None:
    """Store output from event loop."""
homeassistant/components/stream/core.py | get | Socalix/core | 4 | python

async def get(self, request, token, sequence=None):
    """Start a GET request."""
    hass = request.app['hass']
    stream = next((s for s in hass.data[DOMAIN][ATTR_STREAMS] if s.access_token == token), None)
    if not stream:
        raise web.HTTPNotFound()
    stream.start()
    return await self.handle(request, stream, sequence)
homeassistant/components/stream/core.py | handle | Socalix/core | 4 | python

async def handle(self, request, stream, sequence):
    """Handle the stream request."""
    raise NotImplementedError()
gui/BullPutScreenerApp.py | setFilterValues | SergioEspinoza/PyAlgoTrading_TradingWithIB | 16 | python

def setFilterValues(self, securityFilters: SecurityFilters, strategyFilters: StrategyFilters):
    """update entry fields with new filter values"""
    # The source assigned securityFilters twice, never stored strategyFilters,
    # and referenced setSecurityFilters without calling it; corrected here.
    self.securityFilters = securityFilters
    self.strategyFilters = strategyFilters
    self.setSecurityFilters()
shadho/shadho.py | add_input_file | jeffkinnison/shadho | 16 | python

def add_input_file(self, localpath, remotepath=None, cache=True):
    """Add an input file to the global file list.

    Parameters
    ----------
    localpath : str
        Path to the file on the local filesystem.
    remotepath : str, optional
        Path to write the file to on the remote worker. If omitted, the
        basename of ``localpath`` (e.g. "foo/bar.baz" => "bar.baz").
    cache : bool, optional
        Whether to cache the file on the remote worker. If True (default),
        will be cached on the worker between tasks, reducing network
        transfer overhead. If False, will be re-transferred to the worker
        on each task.
    """
    self.files.append((localpath, remotepath, 'input', cache))
82de143b627f7e387f470864499180fea3971076b779259dd64db701f968d64b | def add_output_file(self, localpath, remotepath=None, cache=False):
'Add an input file to the global file list.\n\n Output files are expected to be discovered on the remote worker after a\n task has completed. They are returned to the `shadho.Shadho` instance\n and will be stored for further review without additional processing.\n\n Parameters\n ----------\n localpath : str\n Path to the file on the local filesystem.\n remotepath : str, optional\n Path to write the file to on the remote worker. If omitted, the\n basename of ``localpath`` (e.g. "foo/bar.baz" => "bar.baz").\n cache : bool, optional\n Whether to cache the file on the remote worker. It is recommended\n that this be set to False for output files.\n\n Notes\n -----\n `shadho.Shadho` automatically parses the output file specified in\n ``.shadhorc``, so and output file added through this method will not be\n processed, but rather stored for later review.\n '
self.files.append((localpath, remotepath, 'output', cache)) | Add an input file to the global file list.
Output files are expected to be discovered on the remote worker after a
task has completed. They are returned to the `shadho.Shadho` instance
and will be stored for further review without additional processing.
Parameters
----------
localpath : str
Path to the file on the local filesystem.
remotepath : str, optional
Path to write the file to on the remote worker. If omitted, the
basename of ``localpath`` (e.g. "foo/bar.baz" => "bar.baz").
cache : bool, optional
Whether to cache the file on the remote worker. It is recommended
that this be set to False for output files.
Notes
-----
`shadho.Shadho` automatically parses the output file specified in
``.shadhorc``, so and output file added through this method will not be
processed, but rather stored for later review. | shadho/shadho.py | add_output_file | jeffkinnison/shadho | 16 | python | def add_output_file(self, localpath, remotepath=None, cache=False):
'Add an input file to the global file list.\n\n Output files are expected to be discovered on the remote worker after a\n task has completed. They are returned to the `shadho.Shadho` instance\n and will be stored for further review without additional processing.\n\n Parameters\n ----------\n localpath : str\n Path to the file on the local filesystem.\n remotepath : str, optional\n Path to write the file to on the remote worker. If omitted, the\n basename of ``localpath`` (e.g. "foo/bar.baz" => "bar.baz").\n cache : bool, optional\n Whether to cache the file on the remote worker. It is recommended\n that this be set to False for output files.\n\n Notes\n -----\n `shadho.Shadho` automatically parses the output file specified in\n ``.shadhorc``, so and output file added through this method will not be\n processed, but rather stored for later review.\n '
self.files.append((localpath, remotepath, 'output', cache)) | def add_output_file(self, localpath, remotepath=None, cache=False):
'Add an output file to the global file list.\n\n Output files are expected to be discovered on the remote worker after a\n task has completed. They are returned to the `shadho.Shadho` instance\n and will be stored for further review without additional processing.\n\n Parameters\n ----------\n localpath : str\n Path to the file on the local filesystem.\n remotepath : str, optional\n Path to write the file to on the remote worker. If omitted, the\n basename of ``localpath`` (e.g. "foo/bar.baz" => "bar.baz").\n cache : bool, optional\n Whether to cache the file on the remote worker. It is recommended\n that this be set to False for output files.\n\n Notes\n -----\n `shadho.Shadho` automatically parses the output file specified in\n ``.shadhorc``, so an output file added through this method will not be\n processed, but rather stored for later review.\n '
self.files.append((localpath, remotepath, 'output', cache))<|docstring|>Add an output file to the global file list.
Output files are expected to be discovered on the remote worker after a
task has completed. They are returned to the `shadho.Shadho` instance
and will be stored for further review without additional processing.
Parameters
----------
localpath : str
Path to the file on the local filesystem.
remotepath : str, optional
Path to write the file to on the remote worker. If omitted, the
basename of ``localpath`` (e.g. "foo/bar.baz" => "bar.baz").
cache : bool, optional
Whether to cache the file on the remote worker. It is recommended
that this be set to False for output files.
Notes
-----
`shadho.Shadho` automatically parses the output file specified in
``.shadhorc``, so an output file added through this method will not be
processed, but rather stored for later review.<|endoftext|> |
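The `add_output_file` row above stores a `(localpath, remotepath, 'output', cache)` tuple and documents a basename default for the remote name. A minimal, self-contained sketch of that bookkeeping — `FileRegistry` is illustrative, not shadho's actual class, and the basename default is applied eagerly here for clarity:

```python
import os

class FileRegistry:
    """Illustrative stand-in for the file bookkeeping in add_output_file."""

    def __init__(self):
        self.files = []

    def add_output_file(self, localpath, remotepath=None, cache=False):
        # Apply the documented default: the remote name falls back to the
        # basename, e.g. "foo/bar.baz" -> "bar.baz".
        if remotepath is None:
            remotepath = os.path.basename(localpath)
        self.files.append((localpath, remotepath, 'output', cache))

reg = FileRegistry()
reg.add_output_file('foo/bar.baz')
reg.add_output_file('out.log', remotepath='logs/out.log', cache=True)
```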
4663cf3d41aaa2daae93303e4da9c7e5589a5d839a913f31e136323e4404a048 | def add_compute_class(self, name, resource, value, max_queued_tasks=100):
'Add a compute class representing a set of consistent resources.\n\n Parameters\n ----------\n name : str\n The name of this set of compute resources.\n resource : str\n The resource to match, e.g. gpu_name, cores, etc.\n value\n The value of the resource that should be matched, e.g. "TITAN X\n (Pascal)", 8, etc.\n max_queued_tasks : int, optional\n The maximum number of tasks to queue for this compute class,\n default 100.\n '
cc = ComputeClass(name, resource, value, min(self.max_tasks, max_queued_tasks))
self.ccs[cc.id] = cc | Add a compute class representing a set of consistent resources.
Parameters
----------
name : str
The name of this set of compute resources.
resource : str
The resource to match, e.g. gpu_name, cores, etc.
value
The value of the resource that should be matched, e.g. "TITAN X
(Pascal)", 8, etc.
max_queued_tasks : int, optional
The maximum number of tasks to queue for this compute class,
default 100. | shadho/shadho.py | add_compute_class | jeffkinnison/shadho | 16 | python | def add_compute_class(self, name, resource, value, max_queued_tasks=100):
'Add a compute class representing a set of consistent resources.\n\n Parameters\n ----------\n name : str\n The name of this set of compute resources.\n resource : str\n The resource to match, e.g. gpu_name, cores, etc.\n value\n The value of the resource that should be matched, e.g. "TITAN X\n (Pascal)", 8, etc.\n max_queued_tasks : int, optional\n The maximum number of tasks to queue for this compute class,\n default 100.\n '
cc = ComputeClass(name, resource, value, min(self.max_tasks, max_queued_tasks))
self.ccs[cc.id] = cc | def add_compute_class(self, name, resource, value, max_queued_tasks=100):
'Add a compute class representing a set of consistent resources.\n\n Parameters\n ----------\n name : str\n The name of this set of compute resources.\n resource : str\n The resource to match, e.g. gpu_name, cores, etc.\n value\n The value of the resource that should be matched, e.g. "TITAN X\n (Pascal)", 8, etc.\n max_queued_tasks : int, optional\n The maximum number of tasks to queue for this compute class,\n default 100.\n '
cc = ComputeClass(name, resource, value, min(self.max_tasks, max_queued_tasks))
self.ccs[cc.id] = cc<|docstring|>Add a compute class representing a set of consistent resources.
Parameters
----------
name : str
The name of this set of compute resources.
resource : str
The resource to match, e.g. gpu_name, cores, etc.
value
The value of the resource that should be matched, e.g. "TITAN X
(Pascal)", 8, etc.
max_queued_tasks : int, optional
The maximum number of tasks to queue for this compute class,
default 100.<|endoftext|> |
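The `add_compute_class` row above clamps each compute class's queue depth by the global task limit via `min(self.max_tasks, max_queued_tasks)`. A hedged sketch of just that behavior — `ComputeClassStub` is a stand-in, not shadho's `ComputeClass`:

```python
class ComputeClassStub:
    # Hypothetical stand-in for shadho's ComputeClass; it only records settings.
    def __init__(self, name, resource, value, max_queued_tasks):
        self.name = name
        self.resource = resource
        self.value = value
        self.max_queued_tasks = max_queued_tasks

def add_compute_class(ccs, max_tasks, name, resource, value, max_queued_tasks=100):
    # As in the method above, the per-class queue depth is clamped by the
    # global task limit.
    cc = ComputeClassStub(name, resource, value, min(max_tasks, max_queued_tasks))
    ccs[cc.name] = cc
    return cc

ccs = {}
gpu = add_compute_class(ccs, 50, 'titan-x', 'gpu_name', 'TITAN X (Pascal)')
cpu = add_compute_class(ccs, 50, 'octa', 'cores', 8, max_queued_tasks=20)
```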
9455ad24301842c7381ed562db5eeffa59a2f187c6ecb44465bdb8f230ca3966 | def run(self):
'Search hyperparameter values on remote workers.\n\n Generate and evaluate hyperparameters using the selected task manager\n and search strategy. Hyperparameters will be evaluated until timeout,\n and the optimal set will be printed to screen.\n\n Notes\n -----\n If `self.await_pending` is True, Shadho will continue to evaluate\n hyperparameters in the queue without generating new hyperparameter\n values. This will continue until the queue is empty and all tasks have\n returned.\n '
if (not hasattr(self, 'manager')):
self.manager = create_manager(manager_type=self.config.manager, config=self.config, tmpdir=self.__tmpdir)
if (len(self.ccs) == 0):
cc = ComputeClass('all', None, None, min(self.max_tasks, self.max_queued_tasks))
self.ccs[cc.id] = cc
else:
for cc in self.ccs.values():
cc.optimizer = self.copy()
cc.max_queued_tasks = max((cc.max_queued_tasks / len(self.ccs)), 1)
self.assign_to_ccs()
self.start = time.time()
completed_tasks = 0
try:
while (not self.done()):
stop = self.generate()
if (not stop):
result = self.manager.run_task()
if (result is not None):
if (len(result) == 3):
self.success(*result)
completed_tasks += 1
else:
self.failure(*result)
if ((self.trial_count % self.save_frequency) == 0):
self.save()
else:
break
self.save()
if self.await_pending:
while (not self.manager.empty()):
result = self.manager.run_task()
if (result is not None):
if (len(result) == 3):
self.success(*result)
else:
self.failure(*result)
self.save()
except KeyboardInterrupt:
if (hasattr(self, '_Shadho__tmpdir') and (self.__tmpdir is not None)):
os.rmdir(self.__tmpdir)
self.end = time.time()
self.save()
self.summary()
return self.to_dataframes() | Search hyperparameter values on remote workers.
Generate and evaluate hyperparameters using the selected task manager
and search strategy. Hyperparameters will be evaluated until timeout,
and the optimal set will be printed to screen.
Notes
-----
If `self.await_pending` is True, Shadho will continue to evaluate
hyperparameters in the queue without generating new hyperparameter
values. This will continue until the queue is empty and all tasks have
returned. | shadho/shadho.py | run | jeffkinnison/shadho | 16 | python | def run(self):
'Search hyperparameter values on remote workers.\n\n Generate and evaluate hyperparameters using the selected task manager\n and search strategy. Hyperparameters will be evaluated until timeout,\n and the optimal set will be printed to screen.\n\n Notes\n -----\n If `self.await_pending` is True, Shadho will continue to evaluate\n hyperparameters in the queue without generating new hyperparameter\n values. This will continue until the queue is empty and all tasks have\n returned.\n '
if (not hasattr(self, 'manager')):
self.manager = create_manager(manager_type=self.config.manager, config=self.config, tmpdir=self.__tmpdir)
if (len(self.ccs) == 0):
cc = ComputeClass('all', None, None, min(self.max_tasks, self.max_queued_tasks))
self.ccs[cc.id] = cc
else:
for cc in self.ccs.values():
cc.optimizer = self.copy()
cc.max_queued_tasks = max((cc.max_queued_tasks / len(self.ccs)), 1)
self.assign_to_ccs()
self.start = time.time()
completed_tasks = 0
try:
while (not self.done()):
stop = self.generate()
if (not stop):
result = self.manager.run_task()
if (result is not None):
if (len(result) == 3):
self.success(*result)
completed_tasks += 1
else:
self.failure(*result)
if ((self.trial_count % self.save_frequency) == 0):
self.save()
else:
break
self.save()
if self.await_pending:
while (not self.manager.empty()):
result = self.manager.run_task()
if (result is not None):
if (len(result) == 3):
self.success(*result)
else:
self.failure(*result)
self.save()
except KeyboardInterrupt:
if (hasattr(self, '_Shadho__tmpdir') and (self.__tmpdir is not None)):
os.rmdir(self.__tmpdir)
self.end = time.time()
self.save()
self.summary()
return self.to_dataframes() | def run(self):
'Search hyperparameter values on remote workers.\n\n Generate and evaluate hyperparameters using the selected task manager\n and search strategy. Hyperparameters will be evaluated until timeout,\n and the optimal set will be printed to screen.\n\n Notes\n -----\n If `self.await_pending` is True, Shadho will continue to evaluate\n hyperparameters in the queue without generating new hyperparameter\n values. This will continue until the queue is empty and all tasks have\n returned.\n '
if (not hasattr(self, 'manager')):
self.manager = create_manager(manager_type=self.config.manager, config=self.config, tmpdir=self.__tmpdir)
if (len(self.ccs) == 0):
cc = ComputeClass('all', None, None, min(self.max_tasks, self.max_queued_tasks))
self.ccs[cc.id] = cc
else:
for cc in self.ccs.values():
cc.optimizer = self.copy()
cc.max_queued_tasks = max((cc.max_queued_tasks / len(self.ccs)), 1)
self.assign_to_ccs()
self.start = time.time()
completed_tasks = 0
try:
while (not self.done()):
stop = self.generate()
if (not stop):
result = self.manager.run_task()
if (result is not None):
if (len(result) == 3):
self.success(*result)
completed_tasks += 1
else:
self.failure(*result)
if ((self.trial_count % self.save_frequency) == 0):
self.save()
else:
break
self.save()
if self.await_pending:
while (not self.manager.empty()):
result = self.manager.run_task()
if (result is not None):
if (len(result) == 3):
self.success(*result)
else:
self.failure(*result)
self.save()
except KeyboardInterrupt:
if (hasattr(self, '_Shadho__tmpdir') and (self.__tmpdir is not None)):
os.rmdir(self.__tmpdir)
self.end = time.time()
self.save()
self.summary()
return self.to_dataframes()<|docstring|>Search hyperparameter values on remote workers.
Generate and evaluate hyperparameters using the selected task manager
and search strategy. Hyperparameters will be evaluated until timeout,
and the optimal set will be printed to screen.
Notes
-----
If `self.await_pending` is True, Shadho will continue to evaluate
hyperparameters in the queue without generating new hyperparameter
values. This will continue until the queue is empty and all tasks have
returned.<|endoftext|> |
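The main loop in `run` above distinguishes outcomes by tuple length: a 3-tuple from the manager is a success `(tag, loss, results)`, anything else is a failure `(tag, resub)`, and `None` means no task finished this cycle. That dispatch, isolated as a sketch with hypothetical callbacks:

```python
def dispatch(result, on_success, on_failure):
    # Mirrors run() above: a 3-tuple is (tag, loss, results); a shorter
    # tuple is (tag, resubmit); None means no task finished this cycle.
    if result is None:
        return None
    if len(result) == 3:
        on_success(*result)
        return 'success'
    on_failure(*result)
    return 'failure'

log = []
first = dispatch(('t1.s1.c1', 0.25, {'acc': 0.9}),
                 lambda *a: log.append(('ok', a)),
                 lambda *a: log.append(('fail', a)))
second = dispatch(('t2.s1.c1', True),
                  lambda *a: log.append(('ok', a)),
                  lambda *a: log.append(('fail', a)))
```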
8d76b5a765c6a7e11d2b340ec7fc065cb3d2e15194068157d44d125317ba2509 | def generate(self):
'Generate hyperparameter values to test.\n\n Hyperparameter values are generated from the search space specification\n supplied at instantiation using the requested generation method (i.e.,\n random search, TPE, Gaussian process Bayesian optimization, etc.).\n\n Returns\n -------\n stop : bool\n If True, no values were generated and the search should stop. This\n facilitates grid-search-like behavior, for example stopping on\n completion of an exhaustive search.\n\n Notes\n -----\n This method will automatically add a new task to the queue after\n generating hyperparameter values.\n '
stop = True
for cc_id in self.ccs:
cc = self.ccs[cc_id]
n = (cc.max_queued_tasks - cc.current_tasks)
print(cc.max_queued_tasks, cc.current_tasks, n)
for i in range(n):
if (self.hyperparameters_per_task == 1):
trial = super().generate(searchspaces=cc.searchspaces)
if isinstance(trial, Trial):
self.trials[trial.id] = trial
tag = '.'.join([str(trial.id), str(trial.searchspace.id), cc_id])
self.manager.add_task(self.cmd, tag, trial.parameter_dict, files=self.files, resource=cc.resource, value=cc.value)
elif (isinstance(trial, list) and (len(trial) > 0)):
for t in trial:
self.trials[t.id] = t
tag = '.'.join([str(t.id), str(t.searchspace.id), cc_id])
self.manager.add_task(self.cmd, tag, t.parameter_dict, files=self.files, resource=cc.resource, value=cc.value)
stop = False
cc.current_tasks = cc.max_queued_tasks
return stop | Generate hyperparameter values to test.
Hyperparameter values are generated from the search space specification
supplied at instantiation using the requested generation method (i.e.,
random search, TPE, Gaussian process Bayesian optimization, etc.).
Returns
-------
stop : bool
If True, no values were generated and the search should stop. This
facilitates grid-search-like behavior, for example stopping on
completion of an exhaustive search.
Notes
-----
This method will automatically add a new task to the queue after
generating hyperparameter values. | shadho/shadho.py | generate | jeffkinnison/shadho | 16 | python | def generate(self):
'Generate hyperparameter values to test.\n\n Hyperparameter values are generated from the search space specification\n supplied at instantiation using the requested generation method (i.e.,\n random search, TPE, Gaussian process Bayesian optimization, etc.).\n\n Returns\n -------\n stop : bool\n If True, no values were generated and the search should stop. This\n facilitates grid-search-like behavior, for example stopping on\n completion of an exhaustive search.\n\n Notes\n -----\n This method will automatically add a new task to the queue after\n generating hyperparameter values.\n '
stop = True
for cc_id in self.ccs:
cc = self.ccs[cc_id]
n = (cc.max_queued_tasks - cc.current_tasks)
print(cc.max_queued_tasks, cc.current_tasks, n)
for i in range(n):
if (self.hyperparameters_per_task == 1):
trial = super().generate(searchspaces=cc.searchspaces)
if isinstance(trial, Trial):
self.trials[trial.id] = trial
tag = '.'.join([str(trial.id), str(trial.searchspace.id), cc_id])
self.manager.add_task(self.cmd, tag, trial.parameter_dict, files=self.files, resource=cc.resource, value=cc.value)
elif (isinstance(trial, list) and (len(trial) > 0)):
for t in trial:
self.trials[t.id] = t
tag = '.'.join([str(t.id), str(t.searchspace.id), cc_id])
self.manager.add_task(self.cmd, tag, t.parameter_dict, files=self.files, resource=cc.resource, value=cc.value)
stop = False
cc.current_tasks = cc.max_queued_tasks
return stop | def generate(self):
'Generate hyperparameter values to test.\n\n Hyperparameter values are generated from the search space specification\n supplied at instantiation using the requested generation method (i.e.,\n random search, TPE, Gaussian process Bayesian optimization, etc.).\n\n Returns\n -------\n stop : bool\n If True, no values were generated and the search should stop. This\n facilitates grid-search-like behavior, for example stopping on\n completion of an exhaustive search.\n\n Notes\n -----\n This method will automatically add a new task to the queue after\n generating hyperparameter values.\n '
stop = True
for cc_id in self.ccs:
cc = self.ccs[cc_id]
n = (cc.max_queued_tasks - cc.current_tasks)
print(cc.max_queued_tasks, cc.current_tasks, n)
for i in range(n):
if (self.hyperparameters_per_task == 1):
trial = super().generate(searchspaces=cc.searchspaces)
if isinstance(trial, Trial):
self.trials[trial.id] = trial
tag = '.'.join([str(trial.id), str(trial.searchspace.id), cc_id])
self.manager.add_task(self.cmd, tag, trial.parameter_dict, files=self.files, resource=cc.resource, value=cc.value)
elif (isinstance(trial, list) and (len(trial) > 0)):
for t in trial:
self.trials[t.id] = t
tag = '.'.join([str(t.id), str(t.searchspace.id), cc_id])
self.manager.add_task(self.cmd, tag, t.parameter_dict, files=self.files, resource=cc.resource, value=cc.value)
stop = False
cc.current_tasks = cc.max_queued_tasks
return stop<|docstring|>Generate hyperparameter values to test.
Hyperparameter values are generated from the search space specification
supplied at instantiation using the requested generation method (i.e.,
random search, TPE, Gaussian process Bayesian optimization, etc.).
Returns
-------
stop : bool
If True, no values were generated and the search should stop. This
facilitates grid-search-like behavior, for example stopping on
completion of an exhaustive search.
Notes
-----
This method will automatically add a new task to the queue after
generating hyperparameter values.<|endoftext|> |
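`generate` above tops each compute class back up to its queue limit and packs three ids into each task tag with `'.'`. Both conventions, reduced to small pure functions:

```python
def tasks_to_generate(max_queued_tasks, current_tasks):
    # generate() queues enough new trials to refill the class's queue.
    return max_queued_tasks - current_tasks

def make_tag(trial_id, searchspace_id, cc_id):
    # Each queued task carries '<trial_id>.<searchspace_id>.<cc_id>'.
    return '.'.join([str(trial_id), str(searchspace_id), cc_id])

n = tasks_to_generate(100, 37)
tag = make_tag(17, 3, 'cc0')
```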
7d8a85e17ccffbd01bf7814081578e1663f246ba31910cb2aee8c5fb9b5974a2 | def assign_to_ccs(self):
'Assign trees to compute classes.\n\n Each independent model in the search (model being one of a disjoint set\n of search domains) is assigned to at least two compute classes based on\n its rank relative to other models. In this way, only a subset of models\n are evaluated on each set of hardware.\n\n Notes\n -----\n This method accounts for differing counts of models and compute\n classes, adjusting for a greater number of models, a greater number of\n compute classes, or equal counts of models and compute classes.\n\n See Also\n --------\n `shadho.ComputeClass`\n `pyrameter.ModelGroup`\n '
if (len(self.ccs) > 1):
self.sort_spaces(use_complexity=self.use_complexity, use_uncertainty=self.use_uncertainty)
for cc in self.ccs.values():
cc.clear()
ccids = list(self.ccs.keys())
larger = (self.searchspaces if (len(self.searchspaces) >= len(ccids)) else ccids)
smaller = (ccids if (larger is self.searchspaces) else self.searchspaces)
x = (float(len(larger)) / float(len(smaller)))
y = (x - 1)
j = 0
m = (len(smaller) / 2)
n = (len(larger) / 2)
for i in range(len(larger)):
if (i > np.ceil(y)):
j += 1
y += x
if (smaller[j] in self.ccs):
self.ccs[smaller[j]].add_searchspace(larger[i])
if (j < m):
self.ccs[smaller[(j + 1)]].add_searchspace(self.searchspaces[larger[i]])
else:
self.ccs[smaller[(j - 1)]].add_searchspace(self.searchspaces[larger[i]])
else:
self.ccs[larger[i]].add_searchspace(smaller[j])
if (i < n):
self.ccs[larger[(i + 1)]].add_searchspace(self.searchspaces[smaller[j]])
else:
self.ccs[larger[(i - 1)]].add_searchspace(self.searchspaces[smaller[j]])
elif (len(self.ccs) == 0):
cc = ComputeClass('all', None, None, min(self.max_tasks, self.max_queued_tasks))
self.ccs[cc.id] = cc
cc.add_searchspace(self.searchspaces)
else:
cc = list(self.ccs.values())[0]
cc.clear()
cc.add_searchspace(self.searchspaces) | Assign trees to compute classes.
Each independent model in the search (model being one of a disjoint set
of search domains) is assigned to at least two compute classes based on
its rank relative to other models. In this way, only a subset of models
are evaluated on each set of hardware.
Notes
-----
This method accounts for differing counts of models and compute
classes, adjusting for a greater number of models, a greater number of
compute classes, or equal counts of models and compute classes.
See Also
--------
`shadho.ComputeClass`
`pyrameter.ModelGroup` | shadho/shadho.py | assign_to_ccs | jeffkinnison/shadho | 16 | python | def assign_to_ccs(self):
'Assign trees to compute classes.\n\n Each independent model in the search (model being one of a disjoint set\n of search domains) is assigned to at least two compute classes based on\n its rank relative to other models. In this way, only a subset of models\n are evaluated on each set of hardware.\n\n Notes\n -----\n This method accounts for differing counts of models and compute\n classes, adjusting for a greater number of models, a greater number of\n compute classes, or equal counts of models and compute classes.\n\n See Also\n --------\n `shadho.ComputeClass`\n `pyrameter.ModelGroup`\n '
if (len(self.ccs) > 1):
self.sort_spaces(use_complexity=self.use_complexity, use_uncertainty=self.use_uncertainty)
for cc in self.ccs.values():
cc.clear()
ccids = list(self.ccs.keys())
larger = (self.searchspaces if (len(self.searchspaces) >= len(ccids)) else ccids)
smaller = (ccids if (larger is self.searchspaces) else self.searchspaces)
x = (float(len(larger)) / float(len(smaller)))
y = (x - 1)
j = 0
m = (len(smaller) / 2)
n = (len(larger) / 2)
for i in range(len(larger)):
if (i > np.ceil(y)):
j += 1
y += x
if (smaller[j] in self.ccs):
self.ccs[smaller[j]].add_searchspace(larger[i])
if (j < m):
self.ccs[smaller[(j + 1)]].add_searchspace(self.searchspaces[larger[i]])
else:
self.ccs[smaller[(j - 1)]].add_searchspace(self.searchspaces[larger[i]])
else:
self.ccs[larger[i]].add_searchspace(smaller[j])
if (i < n):
self.ccs[larger[(i + 1)]].add_searchspace(self.searchspaces[smaller[j]])
else:
self.ccs[larger[(i - 1)]].add_searchspace(self.searchspaces[smaller[j]])
elif (len(self.ccs) == 0):
cc = ComputeClass('all', None, None, min(self.max_tasks, self.max_queued_tasks))
self.ccs[cc.id] = cc
cc.add_searchspace(self.searchspaces)
else:
cc = list(self.ccs.values())[0]
cc.clear()
cc.add_searchspace(self.searchspaces) | def assign_to_ccs(self):
'Assign trees to compute classes.\n\n Each independent model in the search (model being one of a disjoint set\n of search domains) is assigned to at least two compute classes based on\n its rank relative to other models. In this way, only a subset of models\n are evaluated on each set of hardware.\n\n Notes\n -----\n This method accounts for differing counts of models and compute\n classes, adjusting for a greater number of models, a greater number of\n compute classes, or equal counts of models and compute classes.\n\n See Also\n --------\n `shadho.ComputeClass`\n `pyrameter.ModelGroup`\n '
if (len(self.ccs) > 1):
self.sort_spaces(use_complexity=self.use_complexity, use_uncertainty=self.use_uncertainty)
for cc in self.ccs.values():
cc.clear()
ccids = list(self.ccs.keys())
larger = (self.searchspaces if (len(self.searchspaces) >= len(ccids)) else ccids)
smaller = (ccids if (larger is self.searchspaces) else self.searchspaces)
x = (float(len(larger)) / float(len(smaller)))
y = (x - 1)
j = 0
m = (len(smaller) / 2)
n = (len(larger) / 2)
for i in range(len(larger)):
if (i > np.ceil(y)):
j += 1
y += x
if (smaller[j] in self.ccs):
self.ccs[smaller[j]].add_searchspace(larger[i])
if (j < m):
self.ccs[smaller[(j + 1)]].add_searchspace(self.searchspaces[larger[i]])
else:
self.ccs[smaller[(j - 1)]].add_searchspace(self.searchspaces[larger[i]])
else:
self.ccs[larger[i]].add_searchspace(smaller[j])
if (i < n):
self.ccs[larger[(i + 1)]].add_searchspace(self.searchspaces[smaller[j]])
else:
self.ccs[larger[(i - 1)]].add_searchspace(self.searchspaces[smaller[j]])
elif (len(self.ccs) == 0):
cc = ComputeClass('all', None, None, min(self.max_tasks, self.max_queued_tasks))
self.ccs[cc.id] = cc
cc.add_searchspace(self.searchspaces)
else:
cc = list(self.ccs.values())[0]
cc.clear()
cc.add_searchspace(self.searchspaces)<|docstring|>Assign trees to compute classes.
Each independent model in the search (model being one of a disjoint set
of search domains) is assigned to at least two compute classes based on
its rank relative to other models. In this way, only a subset of models
are evaluated on each set of hardware.
Notes
-----
This method accounts for differing counts of models and compute
classes, adjusting for a greater number of models, a greater number of
compute classes, or equal counts of models and compute classes.
See Also
--------
`shadho.ComputeClass`
`pyrameter.ModelGroup`<|endoftext|> |
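The index arithmetic in `assign_to_ccs` above deals the larger collection out to the smaller one in contiguous runs whose length is governed by the ratio `len(larger) / len(smaller)`. A pure-function sketch of just that walk (it omits the neighbor duplication that also assigns each item to an adjacent bin):

```python
import math

def stride_assign(larger, smaller):
    # Reproduces the index walk in assign_to_ccs(): items of the larger
    # collection are dealt to the smaller one in contiguous runs, advancing
    # the bin index j whenever i passes the running threshold y.
    x = float(len(larger)) / float(len(smaller))
    y = x - 1
    j = 0
    assignment = {s: [] for s in smaller}
    for i in range(len(larger)):
        if i > math.ceil(y):
            j += 1
            y += x
        assignment[smaller[j]].append(larger[i])
    return assignment

mapping = stride_assign(['m0', 'm1', 'm2', 'm3'], ['cc0', 'cc1'])
```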
a9226622532382cedb5ba5db7a0e3d8bdec32642bf0e7d6aefb83e7628ddfc5f | def success(self, tag, loss, results):
"Handle successful task completion.\n\n Parameters\n ----------\n tag : str\n The task tag, encoding the result id, model id, and compute class\n id as ``<result_id>.<model_id>.<cc_id>``.\n loss : float\n The loss value associated with this result.\n results : dict\n Additional metrics to be included with this result.\n\n Notes\n -----\n This method will trigger a model/compute class reassignment in the\n event that storing the result caused the model's priority to be\n updated.\n "
(trial_id, ss_id, ccid) = tag.split('.')
if (not isinstance(results, list)):
results['compute_class'] = {'id': ccid, 'name': self.ccs[ccid].name, 'value': self.ccs[ccid].value}
else:
trial_id = trial_id.split('@')
ccdata = {'id': ccid, 'name': self.ccs[ccid].name, 'value': self.ccs[ccid].value}
for r in results:
r['compute_class'] = ccdata
self.register_result(ss_id, trial_id, loss, results)
n_completed = sum([1 for trial in self.trials.values() if (trial.status.value == 3)])
if ((n_completed % 10) == 0):
self.assign_to_ccs()
self.ccs[ccid].current_tasks -= 1 | Handle successful task completion.
Parameters
----------
tag : str
The task tag, encoding the result id, model id, and compute class
id as ``<result_id>.<model_id>.<cc_id>``.
loss : float
The loss value associated with this result.
results : dict
Additional metrics to be included with this result.
Notes
-----
This method will trigger a model/compute class reassignment in the
event that storing the result caused the model's priority to be
updated. | shadho/shadho.py | success | jeffkinnison/shadho | 16 | python | def success(self, tag, loss, results):
"Handle successful task completion.\n\n Parameters\n ----------\n tag : str\n The task tag, encoding the result id, model id, and compute class\n id as ``<result_id>.<model_id>.<cc_id>``.\n loss : float\n The loss value associated with this result.\n results : dict\n Additional metrics to be included with this result.\n\n Notes\n -----\n This method will trigger a model/compute class reassignment in the\n event that storing the result caused the model's priority to be\n updated.\n "
(trial_id, ss_id, ccid) = tag.split('.')
if (not isinstance(results, list)):
results['compute_class'] = {'id': ccid, 'name': self.ccs[ccid].name, 'value': self.ccs[ccid].value}
else:
trial_id = trial_id.split('@')
ccdata = {'id': ccid, 'name': self.ccs[ccid].name, 'value': self.ccs[ccid].value}
for r in results:
r['compute_class'] = ccdata
self.register_result(ss_id, trial_id, loss, results)
n_completed = sum([1 for trial in self.trials.values() if (trial.status.value == 3)])
if ((n_completed % 10) == 0):
self.assign_to_ccs()
self.ccs[ccid].current_tasks -= 1 | def success(self, tag, loss, results):
"Handle successful task completion.\n\n Parameters\n ----------\n tag : str\n The task tag, encoding the result id, model id, and compute class\n id as ``<result_id>.<model_id>.<cc_id>``.\n loss : float\n The loss value associated with this result.\n results : dict\n Additional metrics to be included with this result.\n\n Notes\n -----\n This method will trigger a model/compute class reassignment in the\n event that storing the result caused the model's priority to be\n updated.\n "
(trial_id, ss_id, ccid) = tag.split('.')
if (not isinstance(results, list)):
results['compute_class'] = {'id': ccid, 'name': self.ccs[ccid].name, 'value': self.ccs[ccid].value}
else:
trial_id = trial_id.split('@')
ccdata = {'id': ccid, 'name': self.ccs[ccid].name, 'value': self.ccs[ccid].value}
for r in results:
r['compute_class'] = ccdata
self.register_result(ss_id, trial_id, loss, results)
n_completed = sum([1 for trial in self.trials.values() if (trial.status.value == 3)])
if ((n_completed % 10) == 0):
self.assign_to_ccs()
self.ccs[ccid].current_tasks -= 1<|docstring|>Handle successful task completion.
Parameters
----------
tag : str
The task tag, encoding the result id, model id, and compute class
id as ``<result_id>.<model_id>.<cc_id>``.
loss : float
The loss value associated with this result.
results : dict
Additional metrics to be included with this result.
Notes
-----
This method will trigger a model/compute class reassignment in the
event that storing the result caused the model's priority to be
updated.<|endoftext|> |
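`success` above unpacks the task tag built in `generate`, and batched tasks join several trial ids with `'@'`. A sketch of that parsing (it assumes the ids themselves contain no `'.'`):

```python
def parse_tag(tag):
    # Inverse of the tag built in generate(): '<trial_id>.<searchspace_id>.<cc_id>'.
    trial_id, ss_id, cc_id = tag.split('.')
    # Batched tasks join several trial ids with '@', as success() assumes.
    return trial_id.split('@'), ss_id, cc_id

trials, ss, cc = parse_tag('t1@t2.s3.cc0')
single, ss2, cc2 = parse_tag('t9.s1.cc4')
```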
5421cff6501833980f21266e330edf83638344fb0e2b7b697500a96e218662be | def failure(self, tag, resub):
'Handle task failure.\n\n Parameters\n ----------\n tag : str\n The tag of the failed task, encoded as ``<trial_id>.<searchspace_id>.<cc_id>``.\n resub : bool\n If True, resubmit the failed task (up to ``max_resubmissions`` times).\n\n Notes\n -----\n This method will resubmit failed tasks on request to account for\n potential worker dropout, etc.\n '
(trial_id, ss_id, ccid) = tag.split('.')
trials = trial_id.split('@')
(submissions, params) = self.register_result(ss_id, trial_id, objective=None, results=None, errmsg='yes')
if (resub and (submissions < self.max_resubmissions)):
cc = self.ccs[ccid]
self.manager.add_task(self.cmd, tag, params, files=self.files, resource=cc.resource, value=cc.value)
else:
self.ccs[ccid].current_tasks -= 1 | Handle task failure.
Parameters
----------
tag : str
The tag of the failed task, encoded as ``<trial_id>.<searchspace_id>.<cc_id>``.
resub : bool
If True, resubmit the failed task (up to ``max_resubmissions`` times).
Notes
-----
This method will resubmit failed tasks on request to account for
potential worker dropout, etc. | shadho/shadho.py | failure | jeffkinnison/shadho | 16 | python | def failure(self, tag, resub):
'Handle task failure.\n\n Parameters\n ----------\n tag : str\n The tag of the failed task, encoded as ``<trial_id>.<searchspace_id>.<cc_id>``.\n resub : bool\n If True, resubmit the failed task (up to ``max_resubmissions`` times).\n\n Notes\n -----\n This method will resubmit failed tasks on request to account for\n potential worker dropout, etc.\n '
(trial_id, ss_id, ccid) = tag.split('.')
trials = trial_id.split('@')
(submissions, params) = self.register_result(ss_id, trial_id, objective=None, results=None, errmsg='yes')
if (resub and (submissions < self.max_resubmissions)):
cc = self.ccs[ccid]
self.manager.add_task(self.cmd, tag, params, files=self.files, resource=cc.resource, value=cc.value)
else:
self.ccs[ccid].current_tasks -= 1 | def failure(self, tag, resub):
'Handle task failure.\n\n Parameters\n ----------\n tag : str\n The tag of the failed task, encoded as ``<trial_id>.<searchspace_id>.<cc_id>``.\n resub : bool\n If True, resubmit the failed task (up to ``max_resubmissions`` times).\n\n Notes\n -----\n This method will resubmit failed tasks on request to account for\n potential worker dropout, etc.\n '
(trial_id, ss_id, ccid) = tag.split('.')
trials = trial_id.split('@')
(submissions, params) = self.register_result(ss_id, trial_id, objective=None, results=None, errmsg='yes')
if (resub and (submissions < self.max_resubmissions)):
cc = self.ccs[ccid]
self.manager.add_task(self.cmd, tag, params, files=self.files, resource=cc.resource, value=cc.value)
else:
self.ccs[ccid].current_tasks -= 1<|docstring|>Handle task failure.
Parameters
----------
tag : str
The tag of the failed task, encoded as ``<trial_id>.<searchspace_id>.<cc_id>``.
resub : bool
If True, resubmit the failed task (up to ``max_resubmissions`` times).
Notes
-----
This method will resubmit failed tasks on request to account for
potential worker dropout, etc.<|endoftext|> |
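The resubmission policy in `failure` above reduces to a single condition: requeue only when the caller requests it and the task still has resubmission budget. A sketch (`max_resubmissions` is a plain parameter here, standing in for the instance attribute):

```python
def should_resubmit(resub_requested, submissions, max_resubmissions):
    # failure() only requeues a task when the worker asked for a resubmit
    # AND the task has not yet used up its resubmission budget.
    return resub_requested and submissions < max_resubmissions

decisions = [should_resubmit(True, n, 3) for n in range(5)]
```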
61ca67107b5bf6cfb12c021ea65ef7021e27bceae81a67d2aed4be138524719f | @classmethod
def __prepare__(metacls, name, bases, **kargs):
' Prepare the new class, here for completeness\n '
logging.debug(('Preparing Class %s' % name))
return super().__prepare__(name, bases, **kargs) | Prepare the new class, here for completeness | nc5ng/types/datapoint.py | __prepare__ | nc5ng/nc5ng-gmt | 0 | python | @classmethod
def __prepare__(metacls, name, bases, **kargs):
' \n '
logging.debug(('Preparing Class %s' % name))
return super().__prepare__(name, bases, **kargs) | @classmethod
def __prepare__(metacls, name, bases, **kargs):
' \n '
logging.debug(('Preparing Class %s' % name))
return super().__prepare__(name, bases, **kargs)<|docstring|>Prepare the new class, here for completeness<|endoftext|> |
f97a93f5c22735d9f6a081cd7e78f06a4ea551fdf4b12ad37caacff4605a19f9 | @property
def type_shorthand(cls):
' Get the class shorthand name'
return cls._type_shorthand | Get the class shorthand name | nc5ng/types/datapoint.py | type_shorthand | nc5ng/nc5ng-gmt | 0 | python | @property
def type_shorthand(cls):
' '
return cls._type_shorthand | @property
def type_shorthand(cls):
' '
return cls._type_shorthand<|docstring|>Get the class shorthand name<|endoftext|> |
940114a7002607742e8fedd3b5f585d9578e62245470b14b518401bdac4deb3d | @property
def point_store(cls):
' Return the type-specific Point Buffer'
return cls._point_store | Return the type-specific Point Buffer | nc5ng/types/datapoint.py | point_store | nc5ng/nc5ng-gmt | 0 | python | @property
def point_store(cls):
' '
return cls._point_store | @property
def point_store(cls):
' '
return cls._point_store<|docstring|>Return the type-specific Point Buffer<|endoftext|> |
d8d8f4f0f6a57f313a351acc2d3ffc784727181ed5396689f193cd4eb5fadde3 | @property
def point_database(cls):
' Return the Root Database of all DataPoints in this Hierarchy'
return cls._cbdb | Return the Root Database of all DataPoints in this Hierarchy | nc5ng/types/datapoint.py | point_database | nc5ng/nc5ng-gmt | 0 | python | @property
def point_database(cls):
' '
return cls._cbdb | @property
def point_database(cls):
' '
return cls._cbdb<|docstring|>Return the Root Database of all DataPoints in this Hierarchy<|endoftext|> |
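The `type_shorthand`, `point_store`, and `point_database` properties above are declared on the metaclass, so they resolve on the class object itself rather than on instances. A minimal sketch of that metaclass-property pattern; the names below are illustrative, not taken from the nc5ng codebase:

```python
class Meta(type):
    """Metaclass exposing a read-only, class-level property."""

    @property
    def registry(cls):
        # Looked up on the class object itself: MyClass.registry
        return cls._registry


class MyClass(metaclass=Meta):
    _registry = {"a": 1}


print(MyClass.registry)  # class-level access works
```

Because the property lives on the metaclass, instances do not inherit it: `MyClass().registry` raises `AttributeError`.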
0085341cd0053943ee6ed53ffbb126119215644799f8547e00f6d07a4f752f79 | def __new__(metacls, name, bases, namespace, **kargs):
' Create a new data point type, called on class load\n\n Creates class attributes level point set for storage\n \n metaclass __new__ is executed on load time for every\n class that uses it, it is executed after __prepare__ which\n constructs the class object\n '
logging.debug(('Creating Class %s' % name))
cls = super().__new__(metacls, name, bases, namespace)
if (not hasattr(cls, '_cbdb')):
logging.debug('Creating Data Point Database')
cls._cbdb = dict()
cls._type_shorthand = name.lower()
while (cls._type_shorthand in cls._cbdb):
cls._type_shorthand = (cls._type_shorthand + '_')
cls._point_store = namespace.get('_point_store', set())
logging.debug(('Registering new Data Point Type %s with shorthand %s' % (name, cls._type_shorthand)))
cls._cbdb[cls.type_shorthand] = {'type': cls, 'points': cls._point_store}
return cls | Create a new data point type, called on class load
Creates class attributes level point set for storage
metaclass __new__ is executed on load time for every
class that uses it, it is executed after __prepare__ which
constructs the class object | nc5ng/types/datapoint.py | __new__ | nc5ng/nc5ng-gmt | 0 | python | def __new__(metacls, name, bases, namespace, **kargs):
' Create a new data point type, called on class load\n\n Creates class attributes level point set for storage\n \n metaclass __new__ is executed on load time for every\n class that uses it, it is executed after __prepare__ which\n constructs the class object\n '
logging.debug(('Creating Class %s' % name))
cls = super().__new__(metacls, name, bases, namespace)
if (not hasattr(cls, '_cbdb')):
logging.debug('Creating Data Point Database')
cls._cbdb = dict()
cls._type_shorthand = name.lower()
while (cls._type_shorthand in cls._cbdb):
cls._type_shorthand = (cls._type_shorthand + '_')
cls._point_store = namespace.get('_point_store', set())
logging.debug(('Registering new Data Point Type %s with shorthand %s' % (name, cls._type_shorthand)))
cls._cbdb[cls.type_shorthand] = {'type': cls, 'points': cls._point_store}
return cls | def __new__(metacls, name, bases, namespace, **kargs):
' Create a new data point type, called on class load\n\n Creates class attributes level point set for storage\n \n metaclass __new__ is executed on load time for every\n class that uses it, it is executed after __prepare__ which\n constructs the class object\n '
logging.debug(('Creating Class %s' % name))
cls = super().__new__(metacls, name, bases, namespace)
if (not hasattr(cls, '_cbdb')):
logging.debug('Creating Data Point Database')
cls._cbdb = dict()
cls._type_shorthand = name.lower()
while (cls._type_shorthand in cls._cbdb):
cls._type_shorthand = (cls._type_shorthand + '_')
cls._point_store = namespace.get('_point_store', set())
logging.debug(('Registering new Data Point Type %s with shorthand %s' % (name, cls._type_shorthand)))
cls._cbdb[cls.type_shorthand] = {'type': cls, 'points': cls._point_store}
return cls<|docstring|>Create a new data point type, called on class load
Creates class attributes level point set for storage
metaclass __new__ is executed on load time for every
class that uses it, it is executed after __prepare__ which
constructs the class object<|endoftext|> |
781e93334695d664fd25a547964a281f34e6f64d1e883b975bd43e2b68635039 | def __init__(cls, name, bases, namespace):
' Initialize a new FileBacked Class\n\n This is a slot method for class creation, __init__ is called when class is defined (load time)\n\n \\param cls - reference to new type, similar to @classmethod\n \\param name - new class name\n \\param bases - base classes\n \\param namespace - new class attributes\n \\param Parser - BaseFileParser underlying this type\n \\param **kwargs - keywords passed to Parser initialization\n \n '
logging.debug(('Creating Data Point Class %s' % name))
super().__init__(name, bases, namespace) | Initialize a new FileBacked Class
This is a slot method for class creation, __init__ is called when class is defined (load time)
\param cls - reference to new type, similar to @classmethod
\param name - new class name
\param bases - base classes
\param namespace - new class attributes
\param Parser - BaseFileParser underlying this type
\param **kwargs - keywords passed to Parser initialization | nc5ng/types/datapoint.py | __init__ | nc5ng/nc5ng-gmt | 0 | python | def __init__(cls, name, bases, namespace):
' Initialize a new FileBacked Class\n\n This is a slot method for class creation, __init__ is called when class is defined (load time)\n\n \\param cls - reference to new type, similar to @classmethod\n \\param name - new class name\n \\param bases - base classes\n \\param namespace - new class attributes\n \\param Parser - BaseFileParser underlying this type\n \\param **kwargs - keywords passed to Parser initialization\n \n '
logging.debug(('Creating Data Point Class %s' % name))
super().__init__(name, bases, namespace) | def __init__(cls, name, bases, namespace):
' Initialize a new FileBacked Class\n\n This is a slot method for class creation, __init__ is called when class is defined (load time)\n\n \\param cls - reference to new type, similar to @classmethod\n \\param name - new class name\n \\param bases - base classes\n \\param namespace - new class attributes\n \\param Parser - BaseFileParser underlying this type\n \\param **kwargs - keywords passed to Parser initialization\n \n '
logging.debug(('Creating Data Point Class %s' % name))
super().__init__(name, bases, namespace)<|docstring|>Initialize a new FileBacked Class
This is a slot method for class creation, __init__ is called when class is defined (load time)
\param cls - reference to new type, similar to @classmethod
\param name - new class name
\param bases - base classes
\param namespace - new class attributes
\param Parser - BaseFileParser underlying this type
\param **kwargs - keywords passed to Parser initialization<|endoftext|> |
2ae5d578499e4212db1e4f7daba9b62f9b5e81c840e0290c26cb63b05dccca0a | def __register__(cls, point):
' Register a point with the class buffer '
cls._point_store.add(point) | Register a point with the class buffer | nc5ng/types/datapoint.py | __register__ | nc5ng/nc5ng-gmt | 0 | python | def __register__(cls, point):
' '
cls._point_store.add(point) | def __register__(cls, point):
' '
cls._point_store.add(point)<|docstring|>Register a point with the class buffer<|endoftext|> |
3336678bfd4aa170b90c8e7c698623957c2da745de700cd002cccbb1f693e966 | def __call__(cls, *args, **kw):
' Create a new Point in this hierarchy'
'\n if typename is not None:\n if typename not in cls._cbdb:\n raise TypeError("Invalid Data Point Type with Shorthand %s"%typename)\n cls = cls._cbdb[typename][\'type\']\n '
if (args and (args[0] in cls.point_database.keys())):
typename = args[0]
elif (kw and ('type' in kw)):
typename = kw['type']
else:
typename = cls.type_shorthand
if (typename == cls.type_shorthand):
point = super().__call__(*args, **kw)
if (not getattr(point, 'ephemeral', False)):
cls.__register__(point)
elif (typename in cls.point_database.keys()):
point = cls.point_database[typename]['type'].__call__(*args, **kw)
else:
return None
return point | Create a new Point in this hierarchy | nc5ng/types/datapoint.py | __call__ | nc5ng/nc5ng-gmt | 0 | python | def __call__(cls, *args, **kw):
' '
'\n if typename is not None:\n if typename not in cls._cbdb:\n raise TypeError("Invalid Data Point Type with Shorthand %s"%typename)\n cls = cls._cbdb[typename][\'type\']\n '
if (args and (args[0] in cls.point_database.keys())):
typename = args[0]
elif (kw and ('type' in kw)):
typename = kw['type']
else:
typename = cls.type_shorthand
if (typename == cls.type_shorthand):
point = super().__call__(*args, **kw)
if (not getattr(point, 'ephemeral', False)):
cls.__register__(point)
elif (typename in cls.point_database.keys()):
point = cls.point_database[typename]['type'].__call__(*args, **kw)
else:
return None
return point | def __call__(cls, *args, **kw):
' '
'\n if typename is not None:\n if typename not in cls._cbdb:\n raise TypeError("Invalid Data Point Type with Shorthand %s"%typename)\n cls = cls._cbdb[typename][\'type\']\n '
if (args and (args[0] in cls.point_database.keys())):
typename = args[0]
elif (kw and ('type' in kw)):
typename = kw['type']
else:
typename = cls.type_shorthand
if (typename == cls.type_shorthand):
point = super().__call__(*args, **kw)
if (not getattr(point, 'ephemeral', False)):
cls.__register__(point)
elif (typename in cls.point_database.keys()):
point = cls.point_database[typename]['type'].__call__(*args, **kw)
else:
return None
return point<|docstring|>Create a new Point in this hierarchy<|endoftext|> |
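Taken together, the `__new__`, `__register__`, and `__call__` hooks above implement a register-on-construction metaclass: every non-ephemeral object built through the class is recorded in a per-class store. A stripped-down sketch of the same idiom, with invented names and a plain list in place of the set-based point store:

```python
class RegistryMeta(type):
    """Metaclass that records every constructed instance in a per-class store."""

    def __new__(metacls, name, bases, namespace):
        cls = super().__new__(metacls, name, bases, namespace)
        cls._store = []  # one store per class, created at class-definition time
        return cls

    def __call__(cls, *args, **kw):
        obj = super().__call__(*args, **kw)
        if not getattr(obj, "ephemeral", False):
            cls._store.append(obj)  # register unless flagged ephemeral
        return obj


class Point(metaclass=RegistryMeta):
    def __init__(self, x, ephemeral=False):
        self.x = x
        self.ephemeral = ephemeral


Point(1)
Point(2, ephemeral=True)  # skipped by the registry
print(len(Point._store))  # prints 1
```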
40a39df93a115239daa29a77005a7b80b4ac4e835990aad473c8909b38abc9c8 | def upload(import_path, verbose=False, skip_subfolders=False, number_threads=None, max_attempts=None, video_import_path=None, dry_run=False, api_version=1.0):
'\n Upload local images to Mapillary\n Args:\n import_path: Directory path to where the images are stored.\n verbose: Print extra warnings and errors.\n skip_subfolders: Skip images stored in subdirectories.\n\n Returns:\n Images are uploaded to Mapillary and flagged locally as uploaded.\n '
if (video_import_path and ((not os.path.isdir(video_import_path)) and (not os.path.isfile(video_import_path)))):
print((('Error, video path ' + video_import_path) + ' does not exist, exiting...'))
sys.exit(1)
if video_import_path:
video_sampling_path = 'mapillary_sampled_video_frames'
video_dirname = (video_import_path if os.path.isdir(video_import_path) else os.path.dirname(video_import_path))
import_path = (os.path.join(os.path.abspath(import_path), video_sampling_path) if import_path else os.path.join(os.path.abspath(video_dirname), video_sampling_path))
if ((not import_path) or (not os.path.isdir(import_path))):
print((('Error, import directory ' + import_path) + ' does not exist, exiting...'))
sys.exit(1)
total_file_list = uploader.get_total_file_list(import_path, skip_subfolders)
upload_file_list = uploader.get_upload_file_list(import_path, skip_subfolders)
failed_file_list = uploader.get_failed_upload_file_list(import_path, skip_subfolders)
success_file_list = uploader.get_success_upload_file_list(import_path, skip_subfolders)
to_finalize_file_list = uploader.get_finalize_file_list(import_path, skip_subfolders)
if (len(success_file_list) == len(total_file_list)):
print('All images have already been uploaded')
else:
if len(failed_file_list):
upload_failed = (raw_input('Retry uploading previously failed image uploads? [y/n]: ') if (not ipc.is_enabled()) else 'y')
if (upload_failed in ['y', 'Y', 'yes', 'Yes']):
upload_file_list.extend(failed_file_list)
upload_file_list = [f for f in upload_file_list if verify_mapillary_tag(f)]
if ((not len(upload_file_list)) and (not len(to_finalize_file_list))):
print('No images to upload.')
print('Please check if all images contain the required Mapillary metadata. If not, you can use "mapillary_tools process" to add them')
sys.exit(1)
if len(upload_file_list):
params = {}
list_per_sequence_mapping = {}
direct_upload_file_list = []
for image in upload_file_list:
log_root = uploader.log_rootpath(image)
upload_params_path = os.path.join(log_root, 'upload_params_process.json')
if os.path.isfile(upload_params_path):
with open(upload_params_path, 'rb') as jf:
params[image] = json.load(jf, object_hook=uploader.ascii_encode_dict)
sequence = params[image]['key']
if (sequence in list_per_sequence_mapping):
list_per_sequence_mapping[sequence].append(image)
else:
list_per_sequence_mapping[sequence] = [image]
else:
direct_upload_file_list.append(image)
print('Uploading {} images with valid mapillary tags (Skipping {})'.format(len(upload_file_list), (len(total_file_list) - len(upload_file_list))))
if (api_version == 2.0):
uploder.uploadfile_list
if len(direct_upload_file_list):
uploader.upload_file_list_direct(direct_upload_file_list, number_threads, max_attempts)
for (idx, sequence) in enumerate(list_per_sequence_mapping):
uploader.upload_file_list_manual(list_per_sequence_mapping[sequence], params, idx, number_threads, max_attempts)
if len(to_finalize_file_list):
params = {}
sequences = []
for image in to_finalize_file_list:
log_root = uploader.log_rootpath(image)
upload_params_path = os.path.join(log_root, 'upload_params_process.json')
if os.path.isfile(upload_params_path):
with open(upload_params_path, 'rb') as jf:
image_params = json.load(jf, object_hook=uploader.ascii_encode_dict)
sequence = image_params['key']
if (sequence not in sequences):
params[image] = image_params
sequences.append(sequence)
for image in params:
uploader.upload_done_file(**params[image])
uploader.flag_finalization(to_finalize_file_list)
uploader.print_summary(upload_file_list) | Upload local images to Mapillary
Args:
import_path: Directory path to where the images are stored.
verbose: Print extra warnings and errors.
skip_subfolders: Skip images stored in subdirectories.
Returns:
Images are uploaded to Mapillary and flagged locally as uploaded. | mapillary_tools/upload.py | upload | testmapitools/mapillary_tools | 1 | python | def upload(import_path, verbose=False, skip_subfolders=False, number_threads=None, max_attempts=None, video_import_path=None, dry_run=False, api_version=1.0):
'\n Upload local images to Mapillary\n Args:\n import_path: Directory path to where the images are stored.\n verbose: Print extra warnings and errors.\n skip_subfolders: Skip images stored in subdirectories.\n\n Returns:\n Images are uploaded to Mapillary and flagged locally as uploaded.\n '
if (video_import_path and ((not os.path.isdir(video_import_path)) and (not os.path.isfile(video_import_path)))):
print((('Error, video path ' + video_import_path) + ' does not exist, exiting...'))
sys.exit(1)
if video_import_path:
video_sampling_path = 'mapillary_sampled_video_frames'
video_dirname = (video_import_path if os.path.isdir(video_import_path) else os.path.dirname(video_import_path))
import_path = (os.path.join(os.path.abspath(import_path), video_sampling_path) if import_path else os.path.join(os.path.abspath(video_dirname), video_sampling_path))
if ((not import_path) or (not os.path.isdir(import_path))):
print((('Error, import directory ' + import_path) + ' does not exist, exiting...'))
sys.exit(1)
total_file_list = uploader.get_total_file_list(import_path, skip_subfolders)
upload_file_list = uploader.get_upload_file_list(import_path, skip_subfolders)
failed_file_list = uploader.get_failed_upload_file_list(import_path, skip_subfolders)
success_file_list = uploader.get_success_upload_file_list(import_path, skip_subfolders)
to_finalize_file_list = uploader.get_finalize_file_list(import_path, skip_subfolders)
if (len(success_file_list) == len(total_file_list)):
print('All images have already been uploaded')
else:
if len(failed_file_list):
upload_failed = (raw_input('Retry uploading previously failed image uploads? [y/n]: ') if (not ipc.is_enabled()) else 'y')
if (upload_failed in ['y', 'Y', 'yes', 'Yes']):
upload_file_list.extend(failed_file_list)
upload_file_list = [f for f in upload_file_list if verify_mapillary_tag(f)]
if ((not len(upload_file_list)) and (not len(to_finalize_file_list))):
print('No images to upload.')
print('Please check if all images contain the required Mapillary metadata. If not, you can use "mapillary_tools process" to add them')
sys.exit(1)
if len(upload_file_list):
params = {}
list_per_sequence_mapping = {}
direct_upload_file_list = []
for image in upload_file_list:
log_root = uploader.log_rootpath(image)
upload_params_path = os.path.join(log_root, 'upload_params_process.json')
if os.path.isfile(upload_params_path):
with open(upload_params_path, 'rb') as jf:
params[image] = json.load(jf, object_hook=uploader.ascii_encode_dict)
sequence = params[image]['key']
if (sequence in list_per_sequence_mapping):
list_per_sequence_mapping[sequence].append(image)
else:
list_per_sequence_mapping[sequence] = [image]
else:
direct_upload_file_list.append(image)
print('Uploading {} images with valid mapillary tags (Skipping {})'.format(len(upload_file_list), (len(total_file_list) - len(upload_file_list))))
if (api_version == 2.0):
uploder.uploadfile_list
if len(direct_upload_file_list):
uploader.upload_file_list_direct(direct_upload_file_list, number_threads, max_attempts)
for (idx, sequence) in enumerate(list_per_sequence_mapping):
uploader.upload_file_list_manual(list_per_sequence_mapping[sequence], params, idx, number_threads, max_attempts)
if len(to_finalize_file_list):
params = {}
sequences = []
for image in to_finalize_file_list:
log_root = uploader.log_rootpath(image)
upload_params_path = os.path.join(log_root, 'upload_params_process.json')
if os.path.isfile(upload_params_path):
with open(upload_params_path, 'rb') as jf:
image_params = json.load(jf, object_hook=uploader.ascii_encode_dict)
sequence = image_params['key']
if (sequence not in sequences):
params[image] = image_params
sequences.append(sequence)
for image in params:
uploader.upload_done_file(**params[image])
uploader.flag_finalization(to_finalize_file_list)
uploader.print_summary(upload_file_list) | def upload(import_path, verbose=False, skip_subfolders=False, number_threads=None, max_attempts=None, video_import_path=None, dry_run=False, api_version=1.0):
'\n Upload local images to Mapillary\n Args:\n import_path: Directory path to where the images are stored.\n verbose: Print extra warnings and errors.\n skip_subfolders: Skip images stored in subdirectories.\n\n Returns:\n Images are uploaded to Mapillary and flagged locally as uploaded.\n '
if (video_import_path and ((not os.path.isdir(video_import_path)) and (not os.path.isfile(video_import_path)))):
print((('Error, video path ' + video_import_path) + ' does not exist, exiting...'))
sys.exit(1)
if video_import_path:
video_sampling_path = 'mapillary_sampled_video_frames'
video_dirname = (video_import_path if os.path.isdir(video_import_path) else os.path.dirname(video_import_path))
import_path = (os.path.join(os.path.abspath(import_path), video_sampling_path) if import_path else os.path.join(os.path.abspath(video_dirname), video_sampling_path))
if ((not import_path) or (not os.path.isdir(import_path))):
print((('Error, import directory ' + import_path) + ' does not exist, exiting...'))
sys.exit(1)
total_file_list = uploader.get_total_file_list(import_path, skip_subfolders)
upload_file_list = uploader.get_upload_file_list(import_path, skip_subfolders)
failed_file_list = uploader.get_failed_upload_file_list(import_path, skip_subfolders)
success_file_list = uploader.get_success_upload_file_list(import_path, skip_subfolders)
to_finalize_file_list = uploader.get_finalize_file_list(import_path, skip_subfolders)
if (len(success_file_list) == len(total_file_list)):
print('All images have already been uploaded')
else:
if len(failed_file_list):
upload_failed = (raw_input('Retry uploading previously failed image uploads? [y/n]: ') if (not ipc.is_enabled()) else 'y')
if (upload_failed in ['y', 'Y', 'yes', 'Yes']):
upload_file_list.extend(failed_file_list)
upload_file_list = [f for f in upload_file_list if verify_mapillary_tag(f)]
if ((not len(upload_file_list)) and (not len(to_finalize_file_list))):
print('No images to upload.')
print('Please check if all images contain the required Mapillary metadata. If not, you can use "mapillary_tools process" to add them')
sys.exit(1)
if len(upload_file_list):
params = {}
list_per_sequence_mapping = {}
direct_upload_file_list = []
for image in upload_file_list:
log_root = uploader.log_rootpath(image)
upload_params_path = os.path.join(log_root, 'upload_params_process.json')
if os.path.isfile(upload_params_path):
with open(upload_params_path, 'rb') as jf:
params[image] = json.load(jf, object_hook=uploader.ascii_encode_dict)
sequence = params[image]['key']
if (sequence in list_per_sequence_mapping):
list_per_sequence_mapping[sequence].append(image)
else:
list_per_sequence_mapping[sequence] = [image]
else:
direct_upload_file_list.append(image)
print('Uploading {} images with valid mapillary tags (Skipping {})'.format(len(upload_file_list), (len(total_file_list) - len(upload_file_list))))
if (api_version == 2.0):
uploder.uploadfile_list
if len(direct_upload_file_list):
uploader.upload_file_list_direct(direct_upload_file_list, number_threads, max_attempts)
for (idx, sequence) in enumerate(list_per_sequence_mapping):
uploader.upload_file_list_manual(list_per_sequence_mapping[sequence], params, idx, number_threads, max_attempts)
if len(to_finalize_file_list):
params = {}
sequences = []
for image in to_finalize_file_list:
log_root = uploader.log_rootpath(image)
upload_params_path = os.path.join(log_root, 'upload_params_process.json')
if os.path.isfile(upload_params_path):
with open(upload_params_path, 'rb') as jf:
image_params = json.load(jf, object_hook=uploader.ascii_encode_dict)
sequence = image_params['key']
if (sequence not in sequences):
params[image] = image_params
sequences.append(sequence)
for image in params:
uploader.upload_done_file(**params[image])
uploader.flag_finalization(to_finalize_file_list)
uploader.print_summary(upload_file_list)<|docstring|>Upload local images to Mapillary
Args:
import_path: Directory path to where the images are stored.
verbose: Print extra warnings and errors.
skip_subfolders: Skip images stored in subdirectories.
Returns:
Images are uploaded to Mapillary and flagged locally as uploaded.<|endoftext|> |
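Before any network work, `upload()` partitions the images into total/success/failed/pending lists via the `uploader.*` helpers. Ignoring those helpers, the bookkeeping reduces to membership tests and an optional retry of failures; the sketch below is a loose illustration with made-up file names, not the mapillary_tools API:

```python
def partition_uploads(total, succeeded, failed, retry_failed=True):
    """Return files still to upload, mirroring upload()'s list bookkeeping."""
    done = set(succeeded)
    skip = set(failed)
    pending = [f for f in total if f not in done and f not in skip]
    if retry_failed:
        pending.extend(failed)  # mirrors the retry-previously-failed prompt
    return pending


total = ["a.jpg", "b.jpg", "c.jpg", "d.jpg"]
pending = partition_uploads(total, succeeded=["a.jpg"], failed=["c.jpg"])
print(pending)  # ['b.jpg', 'd.jpg', 'c.jpg']
```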
a3d25a48ea9c913c7a6712a342e3dd9a514605c4f8c5e58a228526a8f17fc472 | def top1_correct(pred, label, axis=(- 1)):
'Calculates top 1 correctness.'
assert (pred.shape[0] == label.shape[0]), '{} != {}'.format(pred.shape[0], label.shape[0])
pred_idx = np.argmax(pred, axis=axis)
return (pred_idx == label.astype(pred_idx.dtype)) | Calculates top 1 correctness. | fewshot/experiments/metrics.py | top1_correct | sebamenabar/oc-fewshot-public | 18 | python | def top1_correct(pred, label, axis=(- 1)):
assert (pred.shape[0] == label.shape[0]), '{} != {}'.format(pred.shape[0], label.shape[0])
pred_idx = np.argmax(pred, axis=axis)
return (pred_idx == label.astype(pred_idx.dtype)) | def top1_correct(pred, label, axis=(- 1)):
assert (pred.shape[0] == label.shape[0]), '{} != {}'.format(pred.shape[0], label.shape[0])
pred_idx = np.argmax(pred, axis=axis)
return (pred_idx == label.astype(pred_idx.dtype))<|docstring|>Calculates top 1 correctness.<|endoftext|> |
868d0e5e442d904d38df7929128cb80ad8b11473bae14922128f44cb90d72d72 | def top1_acc(pred, label, axis=(- 1)):
'Calculates top 1 accuracy.'
return top1_correct(pred, label, axis=axis).mean() | Calculates top 1 accuracy. | fewshot/experiments/metrics.py | top1_acc | sebamenabar/oc-fewshot-public | 18 | python | def top1_acc(pred, label, axis=(- 1)):
return top1_correct(pred, label, axis=axis).mean() | def top1_acc(pred, label, axis=(- 1)):
return top1_correct(pred, label, axis=axis).mean()<|docstring|>Calculates top 1 accuracy.<|endoftext|> |
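`top1_correct` and `top1_acc` above reduce to an argmax comparison against the labels. A self-contained check with toy data:

```python
import numpy as np

# Toy predictions: three examples, three classes.
pred = np.array([[0.1, 0.7, 0.2],   # argmax -> 1
                 [0.8, 0.1, 0.1],   # argmax -> 0
                 [0.3, 0.3, 0.4]])  # argmax -> 2
label = np.array([1, 0, 1])

correct = np.argmax(pred, axis=-1) == label
acc = correct.mean()  # 2 of 3 predictions match -> 0.666...
print(acc)
```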
9a680959af750097f775ff0b3d617b20ecbcb65e3079ecf0d156ba0f0a8c7535 | def topk_acc(pred, label, k, axis=(- 1)):
'Calculates top k accuracy.'
assert (pred.shape[0] == label.shape[0]), '{} != {}'.format(pred.shape[0], label.shape[0])
topk_choices = np.argsort(pred, axis=axis)
if (len(topk_choices.shape) == 2):
topk_choices = topk_choices[:, ::-1][:, :k]
else:
raise NotImplementedError()
return np.sum((topk_choices == np.expand_dims(label, axis)), axis=axis).mean() | Calculates top k accuracy. | fewshot/experiments/metrics.py | topk_acc | sebamenabar/oc-fewshot-public | 18 | python | def topk_acc(pred, label, k, axis=(- 1)):
assert (pred.shape[0] == label.shape[0]), '{} != {}'.format(pred.shape[0], label.shape[0])
topk_choices = np.argsort(pred, axis=axis)
if (len(topk_choices.shape) == 2):
topk_choices = topk_choices[:, ::-1][:, :k]
else:
raise NotImplementedError()
return np.sum((topk_choices == np.expand_dims(label, axis)), axis=axis).mean() | def topk_acc(pred, label, k, axis=(- 1)):
assert (pred.shape[0] == label.shape[0]), '{} != {}'.format(pred.shape[0], label.shape[0])
topk_choices = np.argsort(pred, axis=axis)
if (len(topk_choices.shape) == 2):
topk_choices = topk_choices[:, ::-1][:, :k]
else:
raise NotImplementedError()
return np.sum((topk_choices == np.expand_dims(label, axis)), axis=axis).mean()<|docstring|>Calculates top k accuracy.<|endoftext|>
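The core of `topk_acc` is sorting scores ascending, flipping to descending order, and keeping the first k class indices per row. A toy check of that idiom:

```python
import numpy as np

pred = np.array([[0.1, 0.5, 0.4],
                 [0.6, 0.3, 0.1]])
label = np.array([2, 1])
k = 2

# Sort ascending, flip to descending, keep the first k class indices per row.
topk_choices = np.argsort(pred, axis=-1)[:, ::-1][:, :k]
hit = (topk_choices == label[:, None]).any(axis=-1)
topk_accuracy = hit.mean()  # both labels fall in the top 2 -> 1.0
print(topk_accuracy)
```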
af8e6571c46adcb954c21b2bc6d786394b469562bc2be4660b995adbafbf59b0 | def stderr(array, axis=0):
'Calculates standard error.'
if (len(array) > 0):
return (array.std(axis=axis) / np.sqrt(float(array.shape[0])))
else:
return 0.0 | Calculates standard error. | fewshot/experiments/metrics.py | stderr | sebamenabar/oc-fewshot-public | 18 | python | def stderr(array, axis=0):
if (len(array) > 0):
return (array.std(axis=axis) / np.sqrt(float(array.shape[0])))
else:
return 0.0 | def stderr(array, axis=0):
if (len(array) > 0):
return (array.std(axis=axis) / np.sqrt(float(array.shape[0])))
else:
return 0.0<|docstring|>Calculates standard error.<|endoftext|> |
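`stderr` divides the population standard deviation (numpy's default, ddof=0) by the square root of the sample count. For [1, 2, 3, 4] that gives sqrt(1.25)/2:

```python
import numpy as np

arr = np.array([1.0, 2.0, 3.0, 4.0])
# numpy's default std uses ddof=0 (population), matching arr.std() in stderr above.
se = arr.std(axis=0) / np.sqrt(float(arr.shape[0]))
print(se)  # sqrt(1.25) / 2 ~= 0.559
```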
69a19b8ac22ceac8bfcf4e34b1f79f50a7b14f8f3011c217b746da1a874f8953 | def mean(array, axis=0):
'Calculates mean.'
return (array.mean(axis=axis) if (len(array) > 0) else 0.0) | Calculates mean. | fewshot/experiments/metrics.py | mean | sebamenabar/oc-fewshot-public | 18 | python | def mean(array, axis=0):
return (array.mean(axis=axis) if (len(array) > 0) else 0.0) | def mean(array, axis=0):
return (array.mean(axis=axis) if (len(array) > 0) else 0.0)<|docstring|>Calculates mean.<|endoftext|>
b20307e4cf1ef0be170a3b113216f3b879d2468d395ee6db5eb2e15e9b44c1d0 | def calc_nshot_acc_2d(results_list, nappear_max, nshot_max):
'Combining labeled and unlabeled. X-axis number of appearances, Y-axis\n number of labels.'
N = nappear_max
M = nshot_max
unk_id = (results_list[0]['pred'].shape[(- 1)] - 1)
acc_list = np.zeros([N, M])
stderr_list = np.zeros([N, M])
nappear_list = [calc_nshot(r['y_full']) for r in results_list]
nshot_list = [calc_nshot(r['y_s'], y=r['y_full']) for r in results_list]
for n in range(1, (N + 1)):
for m in range(1, (M + 1)):
sel_list = [np.logical_and((nappear_ == n), (nshot_ == m)) for (nappear_, nshot_) in zip(nappear_list, nshot_list)]
if (m > n):
assert all([np.logical_not(s).all() for s in sel_list])
known_list = [(r['y_gt'] < unk_id) for r in results_list]
sel_list = [np.logical_and(s, k) for (s, k) in zip(sel_list, known_list)]
y_gt_list = [r['y_gt'][s][(None, :)] for (s, r) in zip(sel_list, results_list)]
pred_list = [r['pred'][s][(None, :, :)] for (s, r) in zip(sel_list, results_list)]
flag_list = [r['flag'][s][(None, :)] for (s, r) in zip(sel_list, results_list)]
subresults = [{'y_gt': y, 'pred': p, 'flag': f} for (y, p, f) in zip(y_gt_list, pred_list, flag_list)]
(acc_list[((n - 1), (m - 1))], stderr_list[((n - 1), (m - 1))]) = calc_acc(subresults)
return (acc_list, stderr_list) | Combining labeled and unlabeled. X-axis number of appearances, Y-axis
number of labels. | fewshot/experiments/metrics.py | calc_nshot_acc_2d | sebamenabar/oc-fewshot-public | 18 | python | def calc_nshot_acc_2d(results_list, nappear_max, nshot_max):
'Combining labeled and unlabeled. X-axis number of appearances, Y-axis\n number of labels.'
N = nappear_max
M = nshot_max
unk_id = (results_list[0]['pred'].shape[(- 1)] - 1)
acc_list = np.zeros([N, M])
stderr_list = np.zeros([N, M])
nappear_list = [calc_nshot(r['y_full']) for r in results_list]
nshot_list = [calc_nshot(r['y_s'], y=r['y_full']) for r in results_list]
for n in range(1, (N + 1)):
for m in range(1, (M + 1)):
sel_list = [np.logical_and((nappear_ == n), (nshot_ == m)) for (nappear_, nshot_) in zip(nappear_list, nshot_list)]
if (m > n):
assert all([np.logical_not(s).all() for s in sel_list])
known_list = [(r['y_gt'] < unk_id) for r in results_list]
sel_list = [np.logical_and(s, k) for (s, k) in zip(sel_list, known_list)]
y_gt_list = [r['y_gt'][s][(None, :)] for (s, r) in zip(sel_list, results_list)]
pred_list = [r['pred'][s][(None, :, :)] for (s, r) in zip(sel_list, results_list)]
flag_list = [r['flag'][s][(None, :)] for (s, r) in zip(sel_list, results_list)]
subresults = [{'y_gt': y, 'pred': p, 'flag': f} for (y, p, f) in zip(y_gt_list, pred_list, flag_list)]
(acc_list[((n - 1), (m - 1))], stderr_list[((n - 1), (m - 1))]) = calc_acc(subresults)
return (acc_list, stderr_list) | def calc_nshot_acc_2d(results_list, nappear_max, nshot_max):
'Combining labeled and unlabeled. X-axis number of appearances, Y-axis\n number of labels.'
N = nappear_max
M = nshot_max
unk_id = (results_list[0]['pred'].shape[(- 1)] - 1)
acc_list = np.zeros([N, M])
stderr_list = np.zeros([N, M])
nappear_list = [calc_nshot(r['y_full']) for r in results_list]
nshot_list = [calc_nshot(r['y_s'], y=r['y_full']) for r in results_list]
for n in range(1, (N + 1)):
for m in range(1, (M + 1)):
sel_list = [np.logical_and((nappear_ == n), (nshot_ == m)) for (nappear_, nshot_) in zip(nappear_list, nshot_list)]
if (m > n):
assert all([np.logical_not(s).all() for s in sel_list])
known_list = [(r['y_gt'] < unk_id) for r in results_list]
sel_list = [np.logical_and(s, k) for (s, k) in zip(sel_list, known_list)]
y_gt_list = [r['y_gt'][s][(None, :)] for (s, r) in zip(sel_list, results_list)]
pred_list = [r['pred'][s][(None, :, :)] for (s, r) in zip(sel_list, results_list)]
flag_list = [r['flag'][s][(None, :)] for (s, r) in zip(sel_list, results_list)]
subresults = [{'y_gt': y, 'pred': p, 'flag': f} for (y, p, f) in zip(y_gt_list, pred_list, flag_list)]
(acc_list[((n - 1), (m - 1))], stderr_list[((n - 1), (m - 1))]) = calc_acc(subresults)
return (acc_list, stderr_list)<|docstring|>Combining labeled and unlabeled. X-axis number of appearances, Y-axis
number of labels.<|endoftext|> |
9102c22b34936b3ebb811f4cb8d71f2093beebc069218a07498f7fa3b41aad32 | def __new__(cls, version: str, _: Optional[Any]=None) -> '_AwesomeVersionBase':
'Create a new AwesomeVersion object.'
return super().__new__(cls, version) | Create a new AwesomeVersion object. | awesomeversion/awesomeversion.py | __new__ | patrikcoch123/awesomeversion | 0 | python | def __new__(cls, version: str, _: Optional[Any]=None) -> '_AwesomeVersionBase':
return super().__new__(cls, version) | def __new__(cls, version: str, _: Optional[Any]=None) -> '_AwesomeVersionBase':
return super().__new__(cls, version)<|docstring|>Create a new AwesomeVersion object.<|endoftext|> |
1530acb1531f54cf94dcf1c807c527705ac49ee9b373cea027043e6bcabacbba | def __init__(self, version: Union[(str, float, int, 'AwesomeVersion')], ensure_strategy: Optional[Union[(AwesomeVersionStrategy, List[AwesomeVersionStrategy])]]=None) -> None:
'Initialize AwesomeVersion.'
if isinstance(version, AwesomeVersion):
self._version = version._version
else:
self._version = str(version)
if isinstance(self._version, str):
self._version = self._version.strip()
if (ensure_strategy is not None):
ensure_strategy = (ensure_strategy if isinstance(ensure_strategy, list) else [ensure_strategy])
if (self.strategy not in ensure_strategy):
raise AwesomeVersionStrategyException(f'Strategy {self.strategy} does not match {ensure_strategy} for {version}')
super().__init__(self._version) | Initialize AwesomeVersion. | awesomeversion/awesomeversion.py | __init__ | patrikcoch123/awesomeversion | 0 | python | def __init__(self, version: Union[(str, float, int, 'AwesomeVersion')], ensure_strategy: Optional[Union[(AwesomeVersionStrategy, List[AwesomeVersionStrategy])]]=None) -> None:
if isinstance(version, AwesomeVersion):
self._version = version._version
else:
self._version = str(version)
if isinstance(self._version, str):
self._version = self._version.strip()
if (ensure_strategy is not None):
ensure_strategy = (ensure_strategy if isinstance(ensure_strategy, list) else [ensure_strategy])
if (self.strategy not in ensure_strategy):
raise AwesomeVersionStrategyException(f'Strategy {self.strategy} does not match {ensure_strategy} for {version}')
super().__init__(self._version) | def __init__(self, version: Union[(str, float, int, 'AwesomeVersion')], ensure_strategy: Optional[Union[(AwesomeVersionStrategy, List[AwesomeVersionStrategy])]]=None) -> None:
if isinstance(version, AwesomeVersion):
self._version = version._version
else:
self._version = str(version)
if isinstance(self._version, str):
self._version = self._version.strip()
if (ensure_strategy is not None):
ensure_strategy = (ensure_strategy if isinstance(ensure_strategy, list) else [ensure_strategy])
if (self.strategy not in ensure_strategy):
raise AwesomeVersionStrategyException(f'Strategy {self.strategy} does not match {ensure_strategy} for {version}')
super().__init__(self._version)<|docstring|>Initialize AwesomeVersion.<|endoftext|> |
a5ed30fe25497de373ab6b120546630c270b7b4477d58e67d0923fe1fdd37598 | def __eq__(self, compareto: Union[(str, float, int, object)]) -> bool:
'Check if equals to.'
if isinstance(compareto, (str, float, int)):
compareto = AwesomeVersion(compareto)
if (not isinstance(compareto, AwesomeVersion)):
raise AwesomeVersionCompare('Not a valid AwesomeVersion object')
return (self.string == compareto.string) | Check if equals to. | awesomeversion/awesomeversion.py | __eq__ | patrikcoch123/awesomeversion | 0 | python | def __eq__(self, compareto: Union[(str, float, int, object)]) -> bool:
if isinstance(compareto, (str, float, int)):
compareto = AwesomeVersion(compareto)
if (not isinstance(compareto, AwesomeVersion)):
raise AwesomeVersionCompare('Not a valid AwesomeVersion object')
return (self.string == compareto.string) | def __eq__(self, compareto: Union[(str, float, int, object)]) -> bool:
if isinstance(compareto, (str, float, int)):
compareto = AwesomeVersion(compareto)
if (not isinstance(compareto, AwesomeVersion)):
raise AwesomeVersionCompare('Not a valid AwesomeVersion object')
return (self.string == compareto.string)<|docstring|>Check if equals to.<|endoftext|> |
33914894b511160a1be8ee14c360b46a17b1e74c2882a42faf23154e8ad431ac | def __lt__(self, compareto: Union[(str, float, int, object)]) -> bool:
'Check if less than.'
if isinstance(compareto, (str, float, int)):
compareto = AwesomeVersion(compareto)
if (not isinstance(compareto, AwesomeVersion)):
raise AwesomeVersionCompare('Not a valid AwesomeVersion object')
if (AwesomeVersionStrategy.UNKNOWN in (self.strategy, compareto.strategy)):
raise AwesomeVersionCompare(f"Can't compare {AwesomeVersionStrategy.UNKNOWN}")
return CompareHandlers(compareto, self).check() | Check if less than. | awesomeversion/awesomeversion.py | __lt__ | patrikcoch123/awesomeversion | 0 | python | def __lt__(self, compareto: Union[(str, float, int, object)]) -> bool:
if isinstance(compareto, (str, float, int)):
compareto = AwesomeVersion(compareto)
if (not isinstance(compareto, AwesomeVersion)):
raise AwesomeVersionCompare('Not a valid AwesomeVersion object')
if (AwesomeVersionStrategy.UNKNOWN in (self.strategy, compareto.strategy)):
raise AwesomeVersionCompare(f"Can't compare {AwesomeVersionStrategy.UNKNOWN}")
return CompareHandlers(compareto, self).check() | def __lt__(self, compareto: Union[(str, float, int, object)]) -> bool:
if isinstance(compareto, (str, float, int)):
compareto = AwesomeVersion(compareto)
if (not isinstance(compareto, AwesomeVersion)):
raise AwesomeVersionCompare('Not a valid AwesomeVersion object')
if (AwesomeVersionStrategy.UNKNOWN in (self.strategy, compareto.strategy)):
raise AwesomeVersionCompare(f"Can't compare {AwesomeVersionStrategy.UNKNOWN}")
return CompareHandlers(compareto, self).check()<|docstring|>Check if less than.<|endoftext|> |
aa1a8ef57ba3aed2609268c2c2c1f19df4ec2c20bb4dde23f23fc9848044af0a | def __gt__(self, compareto: Union[(str, float, int, object)]) -> bool:
'Check if greater than.'
if isinstance(compareto, (str, float, int)):
compareto = AwesomeVersion(compareto)
if (not isinstance(compareto, AwesomeVersion)):
raise AwesomeVersionCompare('Not a valid AwesomeVersion object')
if (AwesomeVersionStrategy.UNKNOWN in (self.strategy, compareto.strategy)):
raise AwesomeVersionCompare(f"Can't compare {AwesomeVersionStrategy.UNKNOWN}")
return CompareHandlers(self, compareto).check() | Check if greater than. | awesomeversion/awesomeversion.py | __gt__ | patrikcoch123/awesomeversion | 0 | python | def __gt__(self, compareto: Union[(str, float, int, object)]) -> bool:
if isinstance(compareto, (str, float, int)):
compareto = AwesomeVersion(compareto)
if (not isinstance(compareto, AwesomeVersion)):
raise AwesomeVersionCompare('Not a valid AwesomeVersion object')
if (AwesomeVersionStrategy.UNKNOWN in (self.strategy, compareto.strategy)):
raise AwesomeVersionCompare(f"Can't compare {AwesomeVersionStrategy.UNKNOWN}")
return CompareHandlers(self, compareto).check() | def __gt__(self, compareto: Union[(str, float, int, object)]) -> bool:
if isinstance(compareto, (str, float, int)):
compareto = AwesomeVersion(compareto)
if (not isinstance(compareto, AwesomeVersion)):
raise AwesomeVersionCompare('Not a valid AwesomeVersion object')
if (AwesomeVersionStrategy.UNKNOWN in (self.strategy, compareto.strategy)):
raise AwesomeVersionCompare(f"Can't compare {AwesomeVersionStrategy.UNKNOWN}")
return CompareHandlers(self, compareto).check()<|docstring|>Check if greater than.<|endoftext|> |
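The `__eq__`/`__lt__`/`__gt__` bodies above all share one pattern: coerce a `str`/`float`/`int` operand into a version object before comparing. A minimal sketch of that coercion pattern — `MiniVersion` is a hypothetical illustration, not the real AwesomeVersion API, and its ordering key is a simplified stand-in for the library's `CompareHandlers` logic:

```python
from functools import total_ordering

@total_ordering
class MiniVersion:
    """Toy dotted-version class showing the operand-coercion pattern."""

    def __init__(self, version):
        self.string = str(version).strip()
        # Numeric tuple used for ordering, so "1.10" sorts after "1.2".
        self._key = tuple(int(p) for p in self.string.split(".") if p.isdigit())

    @staticmethod
    def _coerce(other):
        # Mirrors the isinstance checks in __eq__/__lt__/__gt__ above.
        if isinstance(other, (str, float, int)):
            return MiniVersion(other)
        if not isinstance(other, MiniVersion):
            raise TypeError("Not a valid MiniVersion object")
        return other

    def __eq__(self, other):
        return self.string == MiniVersion._coerce(other).string

    def __lt__(self, other):
        return self._key < MiniVersion._coerce(other)._key

assert MiniVersion("1.2.3") > "1.2.2"   # str operand is coerced first
assert MiniVersion("1.2") < "1.10"      # tuple key avoids lexicographic traps
```

`functools.total_ordering` derives `__gt__`, `__le__`, and `__ge__` from `__eq__` and `__lt__`, so the coercion only has to be written twice.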
3041f08ca16331058110f7c4825f943b46a073a9a839ec255cc327d74d9c5b41 | def section(self, idx: int) -> int:
'Return the value of the specified section of the version.'
if (self.sections >= (idx + 1)):
match = get_regex_match_group(RE_DIGIT, self.string.split('.')[idx], 1)
if match:
return int(match)
return 0 | Return the value of the specified section of the version. | awesomeversion/awesomeversion.py | section | patrikcoch123/awesomeversion | 0 | python | def section(self, idx: int) -> int:
if (self.sections >= (idx + 1)):
match = get_regex_match_group(RE_DIGIT, self.string.split('.')[idx], 1)
if match:
return int(match)
return 0 | def section(self, idx: int) -> int:
if (self.sections >= (idx + 1)):
match = get_regex_match_group(RE_DIGIT, self.string.split('.')[idx], 1)
if match:
return int(match)
return 0<|docstring|>Return the value of the specified section of the version.<|endoftext|> |
4d0894ab01f03bd49a75570c866998952e558ae9196f73386808563fcb997c2c | @staticmethod
def ensure_strategy(version: Union[(str, float, int, 'AwesomeVersion')], strategy: Union[(AwesomeVersionStrategy, List[AwesomeVersionStrategy])]) -> 'AwesomeVersion':
'Return a AwesomeVersion object, or raise on creation.'
LOGGER.warning('Using AwesomeVersion.ensure_strategy(version, strategy) is deprecated, use AwesomeVersion(version, strategy) instead')
return AwesomeVersion(version, strategy) | Return a AwesomeVersion object, or raise on creation. | awesomeversion/awesomeversion.py | ensure_strategy | patrikcoch123/awesomeversion | 0 | python | @staticmethod
def ensure_strategy(version: Union[(str, float, int, 'AwesomeVersion')], strategy: Union[(AwesomeVersionStrategy, List[AwesomeVersionStrategy])]) -> 'AwesomeVersion':
LOGGER.warning('Using AwesomeVersion.ensure_strategy(version, strategy) is deprecated, use AwesomeVersion(version, strategy) instead')
return AwesomeVersion(version, strategy) | @staticmethod
def ensure_strategy(version: Union[(str, float, int, 'AwesomeVersion')], strategy: Union[(AwesomeVersionStrategy, List[AwesomeVersionStrategy])]) -> 'AwesomeVersion':
LOGGER.warning('Using AwesomeVersion.ensure_strategy(version, strategy) is deprecated, use AwesomeVersion(version, strategy) instead')
return AwesomeVersion(version, strategy)<|docstring|>Return a AwesomeVersion object, or raise on creation.<|endoftext|> |
e07975ce21d213a9801745a9afc2b669ee485f36a71e5d3721cc9e3cca11e4bc | @property
def string(self) -> str:
'Return a string representation of the version.'
if self._version.endswith('.'):
self._version = self._version[:(- 1)]
version = get_regex_match_group(RE_VERSION, str(self._version), 2)
return (version or self._version) | Return a string representation of the version. | awesomeversion/awesomeversion.py | string | patrikcoch123/awesomeversion | 0 | python | @property
def string(self) -> str:
if self._version.endswith('.'):
self._version = self._version[:(- 1)]
version = get_regex_match_group(RE_VERSION, str(self._version), 2)
return (version or self._version) | @property
def string(self) -> str:
if self._version.endswith('.'):
self._version = self._version[:(- 1)]
version = get_regex_match_group(RE_VERSION, str(self._version), 2)
return (version or self._version)<|docstring|>Return a string representation of the version.<|endoftext|>
6cd0b3cd3002fe663d2a0af6933d95b6aad53eaba7240a8806c1d1a3ac1ef29e | @property
def prefix(self) -> Optional[str]:
'Return the version prefix if any'
return get_regex_match_group(RE_VERSION, str(self._version), 1) | Return the version prefix if any | awesomeversion/awesomeversion.py | prefix | patrikcoch123/awesomeversion | 0 | python | @property
def prefix(self) -> Optional[str]:
return get_regex_match_group(RE_VERSION, str(self._version), 1) | @property
def prefix(self) -> Optional[str]:
return get_regex_match_group(RE_VERSION, str(self._version), 1)<|docstring|>Return the version prefix if any<|endoftext|> |
31766627bf9b64b8dc678230ca641b573666ccca87652f669ce34daa1a929fef | @property
def alpha(self) -> bool:
'Return a bool to indicate alpha version.'
return (('a' in self.modifier) if self.modifier else False) | Return a bool to indicate alpha version. | awesomeversion/awesomeversion.py | alpha | patrikcoch123/awesomeversion | 0 | python | @property
def alpha(self) -> bool:
return (('a' in self.modifier) if self.modifier else False) | @property
def alpha(self) -> bool:
return (('a' in self.modifier) if self.modifier else False)<|docstring|>Return a bool to indicate alpha version.<|endoftext|> |
b8ea72db27d2086fa86a13934a8e6e1c9365cf1f96bffffe43fb6a5ebc8e124d | @property
def beta(self) -> bool:
'Return a bool to indicate beta version.'
return (('b' in self.modifier) if self.modifier else ('beta' in self.string)) | Return a bool to indicate beta version. | awesomeversion/awesomeversion.py | beta | patrikcoch123/awesomeversion | 0 | python | @property
def beta(self) -> bool:
return (('b' in self.modifier) if self.modifier else ('beta' in self.string)) | @property
def beta(self) -> bool:
return (('b' in self.modifier) if self.modifier else ('beta' in self.string))<|docstring|>Return a bool to indicate beta version.<|endoftext|> |
a526585b2bfd8bb412eb9367d3a8d48449765d6c54b201b43a682352a4a262d4 | @property
def dev(self) -> bool:
'Return a bool to indicate dev version.'
return (('d' in self.modifier) if self.modifier else ('dev' in self.string)) | Return a bool to indicate dev version. | awesomeversion/awesomeversion.py | dev | patrikcoch123/awesomeversion | 0 | python | @property
def dev(self) -> bool:
return (('d' in self.modifier) if self.modifier else ('dev' in self.string)) | @property
def dev(self) -> bool:
return (('d' in self.modifier) if self.modifier else ('dev' in self.string))<|docstring|>Return a bool to indicate dev version.<|endoftext|> |
03172b249984fdbcba6c8805f926df1c94b5d1650dfeadadb6e75a98f8a658f6 | @property
def release_candidate(self) -> bool:
'Return a bool to indicate release candidate version.'
return (('rc' in self.modifier) if self.modifier else ('rc' in self.string)) | Return a bool to indicate release candidate version. | awesomeversion/awesomeversion.py | release_candidate | patrikcoch123/awesomeversion | 0 | python | @property
def release_candidate(self) -> bool:
return (('rc' in self.modifier) if self.modifier else ('rc' in self.string)) | @property
def release_candidate(self) -> bool:
return (('rc' in self.modifier) if self.modifier else ('rc' in self.string))<|docstring|>Return a bool to indicate release candidate version.<|endoftext|> |
247a46bcbbc2fc8dfbf83e443f4aa21e2334894d55d3205ef8c4f5f5b48f5606 | @property
def sections(self) -> int:
'Return an int representation of the number of sections in the version.'
if (self.strategy == AwesomeVersionStrategy.SEMVER):
return 3
return len(self.string.split('.')) | Return an int representation of the number of sections in the version. | awesomeversion/awesomeversion.py | sections | patrikcoch123/awesomeversion | 0 | python | @property
def sections(self) -> int:
if (self.strategy == AwesomeVersionStrategy.SEMVER):
return 3
return len(self.string.split('.')) | @property
def sections(self) -> int:
if (self.strategy == AwesomeVersionStrategy.SEMVER):
return 3
return len(self.string.split('.'))<|docstring|>Return an int representation of the number of sections in the version.<|endoftext|>
26a1e2b0c36b29e11f390caea971fa0b0461594e21f2b2a9cb9d84ed58b4d407 | @property
def major(self) -> Optional['AwesomeVersion']:
'Return a AwesomeVersion representation of the major version.'
if (self.strategy != AwesomeVersionStrategy.SEMVER):
return None
return AwesomeVersion(self.section(0)) | Return a AwesomeVersion representation of the major version. | awesomeversion/awesomeversion.py | major | patrikcoch123/awesomeversion | 0 | python | @property
def major(self) -> Optional['AwesomeVersion']:
if (self.strategy != AwesomeVersionStrategy.SEMVER):
return None
return AwesomeVersion(self.section(0)) | @property
def major(self) -> Optional['AwesomeVersion']:
if (self.strategy != AwesomeVersionStrategy.SEMVER):
return None
return AwesomeVersion(self.section(0))<|docstring|>Return a AwesomeVersion representation of the major version.<|endoftext|> |
95c64fdfeff7bf3d002a0e089a11961f88239cf75326c6cc3be6184bb6c352ef | @property
def minor(self) -> Optional['AwesomeVersion']:
'Return a AwesomeVersion representation of the minor version.'
if (self.strategy != AwesomeVersionStrategy.SEMVER):
return None
return AwesomeVersion(self.section(1)) | Return a AwesomeVersion representation of the minor version. | awesomeversion/awesomeversion.py | minor | patrikcoch123/awesomeversion | 0 | python | @property
def minor(self) -> Optional['AwesomeVersion']:
if (self.strategy != AwesomeVersionStrategy.SEMVER):
return None
return AwesomeVersion(self.section(1)) | @property
def minor(self) -> Optional['AwesomeVersion']:
if (self.strategy != AwesomeVersionStrategy.SEMVER):
return None
return AwesomeVersion(self.section(1))<|docstring|>Return a AwesomeVersion representation of the minor version.<|endoftext|> |
9a7fa28a05db5eec8bb7574d6e6c4087845383022fec20532b352305feac3f45 | @property
def patch(self) -> Optional['AwesomeVersion']:
'Return a AwesomeVersion representation of the patch version.'
if (self.strategy != AwesomeVersionStrategy.SEMVER):
return None
return AwesomeVersion(self.section(2)) | Return a AwesomeVersion representation of the patch version. | awesomeversion/awesomeversion.py | patch | patrikcoch123/awesomeversion | 0 | python | @property
def patch(self) -> Optional['AwesomeVersion']:
if (self.strategy != AwesomeVersionStrategy.SEMVER):
return None
return AwesomeVersion(self.section(2)) | @property
def patch(self) -> Optional['AwesomeVersion']:
if (self.strategy != AwesomeVersionStrategy.SEMVER):
return None
return AwesomeVersion(self.section(2))<|docstring|>Return a AwesomeVersion representation of the patch version.<|endoftext|> |
3f4c4db95c7996af4dee98af66425f1aefc11c1f99f6c918e94bffcb9eff4aa3 | @property
def modifier(self) -> Optional[str]:
'Return the modifier of the version if any.'
if (self.strategy == AwesomeVersionStrategy.SPECIALCONTAINER):
return None
if (self.strategy == AwesomeVersionStrategy.SEMVER):
modifier_string = get_regex_match_group(RE_SEMVER, str(self.string), 4)
else:
modifier_string = self.string.split('.')[(- 1)]
return get_regex_match_group(RE_MODIFIER, modifier_string, 2) | Return the modifier of the version if any. | awesomeversion/awesomeversion.py | modifier | patrikcoch123/awesomeversion | 0 | python | @property
def modifier(self) -> Optional[str]:
if (self.strategy == AwesomeVersionStrategy.SPECIALCONTAINER):
return None
if (self.strategy == AwesomeVersionStrategy.SEMVER):
modifier_string = get_regex_match_group(RE_SEMVER, str(self.string), 4)
else:
modifier_string = self.string.split('.')[(- 1)]
return get_regex_match_group(RE_MODIFIER, modifier_string, 2) | @property
def modifier(self) -> Optional[str]:
if (self.strategy == AwesomeVersionStrategy.SPECIALCONTAINER):
return None
if (self.strategy == AwesomeVersionStrategy.SEMVER):
modifier_string = get_regex_match_group(RE_SEMVER, str(self.string), 4)
else:
modifier_string = self.string.split('.')[(- 1)]
return get_regex_match_group(RE_MODIFIER, modifier_string, 2)<|docstring|>Return the modifier of the version if any.<|endoftext|> |
a33ee5afc12d8282ba36a1e19495ae7b203fa38956f42da76201370f0b5449b1 | @property
def modifier_type(self) -> Optional[str]:
'Return the modifier type of the version if any.'
if (self.strategy == AwesomeVersionStrategy.SPECIALCONTAINER):
return None
if (self.strategy == AwesomeVersionStrategy.SEMVER):
modifier_string = get_regex_match_group(RE_SEMVER, str(self.string), 4)
else:
modifier_string = self.string.split('.')[(- 1)]
return get_regex_match_group(RE_MODIFIER, modifier_string, 3) | Return the modifier type of the version if any. | awesomeversion/awesomeversion.py | modifier_type | patrikcoch123/awesomeversion | 0 | python | @property
def modifier_type(self) -> Optional[str]:
if (self.strategy == AwesomeVersionStrategy.SPECIALCONTAINER):
return None
if (self.strategy == AwesomeVersionStrategy.SEMVER):
modifier_string = get_regex_match_group(RE_SEMVER, str(self.string), 4)
else:
modifier_string = self.string.split('.')[(- 1)]
return get_regex_match_group(RE_MODIFIER, modifier_string, 3) | @property
def modifier_type(self) -> Optional[str]:
if (self.strategy == AwesomeVersionStrategy.SPECIALCONTAINER):
return None
if (self.strategy == AwesomeVersionStrategy.SEMVER):
modifier_string = get_regex_match_group(RE_SEMVER, str(self.string), 4)
else:
modifier_string = self.string.split('.')[(- 1)]
return get_regex_match_group(RE_MODIFIER, modifier_string, 3)<|docstring|>Return the modifier type of the version if any.<|endoftext|> |
0acf468d855fbd0347606409496702b94a701d1a0892eff365c18841126e43e0 | @property
def strategy(self) -> AwesomeVersionStrategy:
'Return the version strategy.'
for (pattern, strategy) in VERSION_STRATEGIES:
if is_regex_matching(pattern, self.string):
return strategy
return AwesomeVersionStrategy.UNKNOWN | Return the version strategy. | awesomeversion/awesomeversion.py | strategy | patrikcoch123/awesomeversion | 0 | python | @property
def strategy(self) -> AwesomeVersionStrategy:
for (pattern, strategy) in VERSION_STRATEGIES:
if is_regex_matching(pattern, self.string):
return strategy
return AwesomeVersionStrategy.UNKNOWN | @property
def strategy(self) -> AwesomeVersionStrategy:
for (pattern, strategy) in VERSION_STRATEGIES:
if is_regex_matching(pattern, self.string):
return strategy
return AwesomeVersionStrategy.UNKNOWN<|docstring|>Return the version strategy.<|endoftext|> |
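The `strategy` property above walks an ordered table of `(pattern, strategy)` pairs and returns the first match, defaulting to `UNKNOWN`. A sketch of that dispatch — the two regexes below are deliberately simplified guesses for illustration, not the library's real `VERSION_STRATEGIES` table:

```python
import re
from enum import Enum

class Strategy(Enum):
    CALVER = "CalVer"
    SEMVER = "SemVer"
    UNKNOWN = "unknown"

# First full pattern match wins, so more specific patterns (CalVer's
# four-digit year) must come before broader ones (generic x.y.z).
VERSION_STRATEGIES = (
    (re.compile(r"^\d{4}\.\d{1,2}(\.\d{1,2})?$"), Strategy.CALVER),
    (re.compile(r"^\d+\.\d+\.\d+$"), Strategy.SEMVER),
)

def strategy(version: str) -> Strategy:
    for pattern, strat in VERSION_STRATEGIES:
        if pattern.match(version):
            return strat
    return Strategy.UNKNOWN

assert strategy("1.2.3") is Strategy.SEMVER
assert strategy("2021.12.1") is Strategy.CALVER  # order matters: CalVer first
assert strategy("latest") is Strategy.UNKNOWN
```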
6539a2f414340ac8bc25d1cd5372770e34dcff18fad33e06da5ba26b7c33598d | @property
def simple(self) -> bool:
'Return True if the version string is simple.'
return is_regex_matching(RE_SIMPLE, self.string) | Return True if the version string is simple. | awesomeversion/awesomeversion.py | simple | patrikcoch123/awesomeversion | 0 | python | @property
def simple(self) -> bool:
return is_regex_matching(RE_SIMPLE, self.string) | @property
def simple(self) -> bool:
return is_regex_matching(RE_SIMPLE, self.string)<|docstring|>Return True if the version string is simple.<|endoftext|> |
f38b631361741a8a7c2b0b7a677966ece4da2fa637c7de90e9d5c17affa6c138 | def moving_window(array, nrows):
'\n Simple moving window generator over a 2D numpy array.\n '
count = num_windows_of_length_M_on_buffers_of_length_N(nrows, len(array))
for i in range(count):
(yield array[i:(i + nrows)]) | Simple moving window generator over a 2D numpy array. | tests/pipeline/test_adjusted_array.py | moving_window | 1quant/zipline | 412 | python | def moving_window(array, nrows):
'\n \n '
count = num_windows_of_length_M_on_buffers_of_length_N(nrows, len(array))
for i in range(count):
(yield array[i:(i + nrows)]) | def moving_window(array, nrows):
'\n \n '
count = num_windows_of_length_M_on_buffers_of_length_N(nrows, len(array))
for i in range(count):
(yield array[i:(i + nrows)])<|docstring|>Simple moving window generator over a 2D numpy array.<|endoftext|> |
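The generator above relies only on first-axis slicing, so its behaviour can be seen on a plain 2-D list without the numpy dependency; this stand-alone sketch inlines the window count rather than calling `num_windows_of_length_M_on_buffers_of_length_N`:

```python
def moving_window(array, nrows):
    """Yield every contiguous slice of `nrows` rows, oldest first."""
    count = (len(array) - nrows) + 1  # (N - M) + 1 legal windows
    for i in range(count):
        yield array[i:i + nrows]

data = [[1], [2], [3], [4]]            # 4 "rows"
windows = list(moving_window(data, 2))
assert windows == [[[1], [2]], [[2], [3]], [[3], [4]]]  # 3 legal windows
```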
27799e88adf19c031e8f277c166e051336c9dc810eb3d331830eeca44942e6c2 | def num_windows_of_length_M_on_buffers_of_length_N(M, N):
'\n For a window of length M rolling over a buffer of length N,\n there are (N - M) + 1 legal windows.\n\n Example:\n If my array has N=4 rows, and I want windows of length M=2, there are\n 3 legal windows: data[0:2], data[1:3], and data[2:4].\n '
return ((N - M) + 1) | For a window of length M rolling over a buffer of length N,
there are (N - M) + 1 legal windows.
Example:
If my array has N=4 rows, and I want windows of length M=2, there are
3 legal windows: data[0:2], data[1:3], and data[2:4]. | tests/pipeline/test_adjusted_array.py | num_windows_of_length_M_on_buffers_of_length_N | 1quant/zipline | 412 | python | def num_windows_of_length_M_on_buffers_of_length_N(M, N):
'\n For a window of length M rolling over a buffer of length N,\n there are (N - M) + 1 legal windows.\n\n Example:\n If my array has N=4 rows, and I want windows of length M=2, there are\n 3 legal windows: data[0:2], data[1:3], and data[2:4].\n '
return ((N - M) + 1) | def num_windows_of_length_M_on_buffers_of_length_N(M, N):
'\n For a window of length M rolling over a buffer of length N,\n there are (N - M) + 1 legal windows.\n\n Example:\n If my array has N=4 rows, and I want windows of length M=2, there are\n 3 legal windows: data[0:2], data[1:3], and data[2:4].\n '
return ((N - M) + 1)<|docstring|>For a window of length M rolling over a buffer of length N,
there are (N - M) + 1 legal windows.
Example:
If my array has N=4 rows, and I want windows of length M=2, there are
3 legal windows: data[0:2], data[1:3], and data[2:4].<|endoftext|> |
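The `(N - M) + 1` count in the docstring above can be sanity-checked by brute force: enumerate every start index `i` with `i + M <= N` and compare the tally.

```python
def num_windows(M: int, N: int) -> int:
    # (N - M) + 1 legal windows of length M on a buffer of length N.
    return (N - M) + 1

# Brute-force check against explicit enumeration of legal start indices.
for N in range(1, 10):
    for M in range(1, N + 1):
        starts = [i for i in range(N) if i + M <= N]
        assert len(starts) == num_windows(M, N)

assert num_windows(2, 4) == 3  # the docstring's example: data[0:2], [1:3], [2:4]
```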
a36c8043b0ca0cd6466f72a4819727e96881bc55ffa7cf3f5004bb8233583f85 | def valid_window_lengths(underlying_buffer_length):
'\n An iterator of all legal window lengths on a buffer of a given length.\n\n Returns values from 1 to underlying_buffer_length.\n '
return iter(range(1, (underlying_buffer_length + 1))) | An iterator of all legal window lengths on a buffer of a given length.
Returns values from 1 to underlying_buffer_length. | tests/pipeline/test_adjusted_array.py | valid_window_lengths | 1quant/zipline | 412 | python | def valid_window_lengths(underlying_buffer_length):
'\n An iterator of all legal window lengths on a buffer of a given length.\n\n Returns values from 1 to underlying_buffer_length.\n '
return iter(range(1, (underlying_buffer_length + 1))) | def valid_window_lengths(underlying_buffer_length):
'\n An iterator of all legal window lengths on a buffer of a given length.\n\n Returns values from 1 to underlying_buffer_length.\n '
return iter(range(1, (underlying_buffer_length + 1)))<|docstring|>An iterator of all legal window lengths on a buffer of a given length.
Returns values from 1 to underlying_buffer_length.<|endoftext|> |
e8353f474df2ddea0e59b46ec6eadc21b0d566f0f166a3141018f4a75865c1d9 | @curry
def as_dtype(dtype, data):
'\n Curried wrapper around array.astype for when you have the dtype before you\n have the data.\n '
return asarray(data).astype(dtype) | Curried wrapper around array.astype for when you have the dtype before you
have the data. | tests/pipeline/test_adjusted_array.py | as_dtype | 1quant/zipline | 412 | python | @curry
def as_dtype(dtype, data):
'\n Curried wrapper around array.astype for when you have the dtype before you\n have the data.\n '
return asarray(data).astype(dtype) | @curry
def as_dtype(dtype, data):
'\n Curried wrapper around array.astype for when you have the dtype before you\n have the data.\n '
return asarray(data).astype(dtype)<|docstring|>Curried wrapper around array.astype for when you have the dtype before you
have the data.<|endoftext|> |
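The `@curry` decorator above comes from toolz; the same "dtype first, data later" call shape can be sketched with `functools.partial` from the standard library, here using `array.array` typecodes as a stand-in for numpy dtypes:

```python
from functools import partial
import array

def as_typecode(typecode, data):
    """Build an array.array once both the typecode and the data are known."""
    return array.array(typecode, data)

as_float = partial(as_typecode, "d")  # "dtype" bound before any data exists
result = as_float([1, 2, 3])          # data supplied later

assert result.typecode == "d"
assert list(result) == [1.0, 2.0, 3.0]
```

The difference from toolz's `curry` is that `partial` binds eagerly at one call site, while a curried function also accepts all arguments at once (`as_dtype(dtype, data)`) without wrapping.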
1e9325dda94fd69401f02f7a0d543f7ad12a31d3d474f0b7cf6ab4288beec29e | @curry
def as_labelarray(initial_dtype, missing_value, array):
'\n Curried wrapper around LabelArray, that round-trips the input data through\n `initial_dtype` first.\n '
return LabelArray(array.astype(initial_dtype), missing_value=initial_dtype.type(missing_value)) | Curried wrapper around LabelArray, that round-trips the input data through
`initial_dtype` first. | tests/pipeline/test_adjusted_array.py | as_labelarray | 1quant/zipline | 412 | python | @curry
def as_labelarray(initial_dtype, missing_value, array):
'\n Curried wrapper around LabelArray, that round-trips the input data through\n `initial_dtype` first.\n '
return LabelArray(array.astype(initial_dtype), missing_value=initial_dtype.type(missing_value)) | @curry
def as_labelarray(initial_dtype, missing_value, array):
'\n Curried wrapper around LabelArray, that round-trips the input data through\n `initial_dtype` first.\n '
return LabelArray(array.astype(initial_dtype), missing_value=initial_dtype.type(missing_value))<|docstring|>Curried wrapper around LabelArray, that round-trips the input data through
`initial_dtype` first.<|endoftext|> |
0fc65bb8198f272f0b03649068e98f7f4d7789c0242cc1d752d1cbb1f566e878 | def _gen_multiplicative_adjustment_cases(dtype):
'\n    Generate expected moving windows on a buffer with adjustments.\n\n    We proceed by constructing, at each row, the view of the array we expect\n    in all windows anchored on that row.\n\n    In general, if we have an adjustment to be applied once we process the row\n    at index N, we should see that adjustment applied to the underlying buffer for\n    any window containing the row at index N.\n\n    We then build all legal windows over these buffers.\n    '
adjustment_type = {float64_dtype: Float64Multiply}[dtype]
(nrows, ncols) = (6, 3)
adjustments = {}
buffer_as_of = ([None] * 6)
baseline = buffer_as_of[0] = full((nrows, ncols), 1, dtype=dtype)
adjustments[1] = [adjustment_type(0, 0, 0, 0, coerce_to_dtype(dtype, 2))]
buffer_as_of[1] = array([[2, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]], dtype=dtype)
buffer_as_of[2] = buffer_as_of[1]
adjustments[3] = [adjustment_type(1, 2, 1, 1, coerce_to_dtype(dtype, 3)), adjustment_type(0, 1, 0, 0, coerce_to_dtype(dtype, 4))]
buffer_as_of[3] = array([[8, 1, 1], [4, 3, 1], [1, 3, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]], dtype=dtype)
adjustments[4] = [adjustment_type(0, 3, 2, 2, coerce_to_dtype(dtype, 5))]
buffer_as_of[4] = array([[8, 1, 5], [4, 3, 5], [1, 3, 5], [1, 1, 5], [1, 1, 1], [1, 1, 1]], dtype=dtype)
adjustments[5] = [adjustment_type(0, 4, 1, 1, coerce_to_dtype(dtype, 6)), adjustment_type(2, 2, 2, 2, coerce_to_dtype(dtype, 7))]
buffer_as_of[5] = array([[8, 6, 5], [4, 18, 5], [1, 18, 35], [1, 6, 5], [1, 6, 1], [1, 1, 1]], dtype=dtype)
return _gen_expectations(baseline, default_missing_value_for_dtype(dtype), adjustments, buffer_as_of, nrows, perspective_offsets=(0, 1)) | Generate expected moving windows on a buffer with adjustments.
We proceed by constructing, at each row, the view of the array we expect
in all windows anchored on that row.
In general, if we have an adjustment to be applied once we process the row
at index N, we should see that adjustment applied to the underlying buffer for
any window containing the row at index N.
We then build all legal windows over these buffers. | tests/pipeline/test_adjusted_array.py | _gen_multiplicative_adjustment_cases | 1quant/zipline | 412 | python | def _gen_multiplicative_adjustment_cases(dtype):
'\n    Generate expected moving windows on a buffer with adjustments.\n\n    We proceed by constructing, at each row, the view of the array we expect\n    in all windows anchored on that row.\n\n    In general, if we have an adjustment to be applied once we process the row\n    at index N, we should see that adjustment applied to the underlying buffer for\n    any window containing the row at index N.\n\n    We then build all legal windows over these buffers.\n    '
adjustment_type = {float64_dtype: Float64Multiply}[dtype]
(nrows, ncols) = (6, 3)
adjustments = {}
buffer_as_of = ([None] * 6)
baseline = buffer_as_of[0] = full((nrows, ncols), 1, dtype=dtype)
adjustments[1] = [adjustment_type(0, 0, 0, 0, coerce_to_dtype(dtype, 2))]
buffer_as_of[1] = array([[2, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]], dtype=dtype)
buffer_as_of[2] = buffer_as_of[1]
adjustments[3] = [adjustment_type(1, 2, 1, 1, coerce_to_dtype(dtype, 3)), adjustment_type(0, 1, 0, 0, coerce_to_dtype(dtype, 4))]
buffer_as_of[3] = array([[8, 1, 1], [4, 3, 1], [1, 3, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]], dtype=dtype)
adjustments[4] = [adjustment_type(0, 3, 2, 2, coerce_to_dtype(dtype, 5))]
buffer_as_of[4] = array([[8, 1, 5], [4, 3, 5], [1, 3, 5], [1, 1, 5], [1, 1, 1], [1, 1, 1]], dtype=dtype)
adjustments[5] = [adjustment_type(0, 4, 1, 1, coerce_to_dtype(dtype, 6)), adjustment_type(2, 2, 2, 2, coerce_to_dtype(dtype, 7))]
buffer_as_of[5] = array([[8, 6, 5], [4, 18, 5], [1, 18, 35], [1, 6, 5], [1, 6, 1], [1, 1, 1]], dtype=dtype)
return _gen_expectations(baseline, default_missing_value_for_dtype(dtype), adjustments, buffer_as_of, nrows, perspective_offsets=(0, 1)) | def _gen_multiplicative_adjustment_cases(dtype):
'\n    Generate expected moving windows on a buffer with adjustments.\n\n    We proceed by constructing, at each row, the view of the array we expect\n    in all windows anchored on that row.\n\n    In general, if we have an adjustment to be applied once we process the row\n    at index N, we should see that adjustment applied to the underlying buffer for\n    any window containing the row at index N.\n\n    We then build all legal windows over these buffers.\n    '
adjustment_type = {float64_dtype: Float64Multiply}[dtype]
(nrows, ncols) = (6, 3)
adjustments = {}
buffer_as_of = ([None] * 6)
baseline = buffer_as_of[0] = full((nrows, ncols), 1, dtype=dtype)
adjustments[1] = [adjustment_type(0, 0, 0, 0, coerce_to_dtype(dtype, 2))]
buffer_as_of[1] = array([[2, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]], dtype=dtype)
buffer_as_of[2] = buffer_as_of[1]
adjustments[3] = [adjustment_type(1, 2, 1, 1, coerce_to_dtype(dtype, 3)), adjustment_type(0, 1, 0, 0, coerce_to_dtype(dtype, 4))]
buffer_as_of[3] = array([[8, 1, 1], [4, 3, 1], [1, 3, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]], dtype=dtype)
adjustments[4] = [adjustment_type(0, 3, 2, 2, coerce_to_dtype(dtype, 5))]
buffer_as_of[4] = array([[8, 1, 5], [4, 3, 5], [1, 3, 5], [1, 1, 5], [1, 1, 1], [1, 1, 1]], dtype=dtype)
adjustments[5] = [adjustment_type(0, 4, 1, 1, coerce_to_dtype(dtype, 6)), adjustment_type(2, 2, 2, 2, coerce_to_dtype(dtype, 7))]
buffer_as_of[5] = array([[8, 6, 5], [4, 18, 5], [1, 18, 35], [1, 6, 5], [1, 6, 1], [1, 1, 1]], dtype=dtype)
return _gen_expectations(baseline, default_missing_value_for_dtype(dtype), adjustments, buffer_as_of, nrows, perspective_offsets=(0, 1))<|docstring|>Generate expected moving windows on a buffer with adjustments.
We proceed by constructing, at each row, the view of the array we expect
in all windows anchored on that row.
In general, if we have an adjustment to be applied once we process the row
at index N, we should see that adjustment applied to the underlying buffer for
any window containing the row at index N.
We then build all legal windows over these buffers.<|endoftext|> |
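What a `Float64Multiply(first_row, last_row, first_col, last_col, value)` adjustment does to the buffer can be sketched with plain lists standing in for numpy arrays; replaying `adjustments[1]` and `adjustments[3]` from the generator above reproduces the top-left corner of `buffer_as_of[3]`:

```python
def apply_multiply(buf, first_row, last_row, first_col, last_col, value):
    """Scale every cell in the inclusive row/column rectangle by `value`."""
    for r in range(first_row, last_row + 1):
        for c in range(first_col, last_col + 1):
            buf[r][c] *= value

buf = [[1.0] * 3 for _ in range(3)]
apply_multiply(buf, 0, 0, 0, 0, 2.0)  # adjustments[1]: x2 at (0, 0)
apply_multiply(buf, 1, 2, 1, 1, 3.0)  # adjustments[3]: x3 on rows 1-2, col 1
apply_multiply(buf, 0, 1, 0, 0, 4.0)  # adjustments[3]: x4 on rows 0-1, col 0

# Matches the first three rows of buffer_as_of[3] in the test case above.
assert buf == [[8.0, 1.0, 1.0], [4.0, 3.0, 1.0], [1.0, 3.0, 1.0]]
```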
b94826b287f3bfb97a4b7c422346daa3216a90e830303686c02cc94d1ef7637a | def _gen_overwrite_adjustment_cases(dtype):
'\n Generate test cases for overwrite adjustments.\n\n The algorithm used here is the same as the one used above for\n multiplicative adjustments. The only difference is the semantics of how\n the adjustments are expected to modify the arrays.\n\n This is parameterized on `make_input` and `make_expected_output` functions,\n which take 2-D lists of values and transform them into desired input/output\n arrays. We do this so that we can easily test both vanilla numpy ndarrays\n and our own LabelArray class for strings.\n '
adjustment_type = {float64_dtype: Float64Overwrite, datetime64ns_dtype: Datetime64Overwrite, int64_dtype: Int64Overwrite, bytes_dtype: ObjectOverwrite, unicode_dtype: ObjectOverwrite, object_dtype: ObjectOverwrite}[dtype]
make_expected_dtype = as_dtype(dtype)
missing_value = default_missing_value_for_dtype(datetime64ns_dtype)
if (dtype == object_dtype):
def make_overwrite_value(dtype, value):
return str(value)
else:
make_overwrite_value = coerce_to_dtype
adjustments = {}
buffer_as_of = ([None] * 6)
baseline = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[0] = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
adjustments[1] = [adjustment_type(0, 0, 0, 0, make_overwrite_value(dtype, 1))]
buffer_as_of[1] = make_expected_dtype([[1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[2] = buffer_as_of[1]
adjustments[3] = [adjustment_type(1, 2, 1, 1, make_overwrite_value(dtype, 3)), adjustment_type(0, 1, 0, 0, make_overwrite_value(dtype, 4))]
buffer_as_of[3] = make_expected_dtype([[4, 2, 2], [4, 3, 2], [2, 3, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
adjustments[4] = [adjustment_type(0, 3, 2, 2, make_overwrite_value(dtype, 5))]
buffer_as_of[4] = make_expected_dtype([[4, 2, 5], [4, 3, 5], [2, 3, 5], [2, 2, 5], [2, 2, 2], [2, 2, 2]])
adjustments[5] = [adjustment_type(0, 4, 1, 1, make_overwrite_value(dtype, 6)), adjustment_type(2, 2, 2, 2, make_overwrite_value(dtype, 7))]
buffer_as_of[5] = make_expected_dtype([[4, 6, 5], [4, 6, 5], [2, 6, 7], [2, 6, 5], [2, 6, 2], [2, 2, 2]])
return _gen_expectations(baseline, missing_value, adjustments, buffer_as_of, nrows=6, perspective_offsets=(0, 1)) | Generate test cases for overwrite adjustments.
The algorithm used here is the same as the one used above for
multiplicative adjustments. The only difference is the semantics of how
the adjustments are expected to modify the arrays.
This is parameterized on `make_input` and `make_expected_output` functions,
which take 2-D lists of values and transform them into desired input/output
arrays. We do this so that we can easily test both vanilla numpy ndarrays
and our own LabelArray class for strings. | tests/pipeline/test_adjusted_array.py | _gen_overwrite_adjustment_cases | 1quant/zipline | 412 | python | def _gen_overwrite_adjustment_cases(dtype):
'\n Generate test cases for overwrite adjustments.\n\n The algorithm used here is the same as the one used above for\n multiplicative adjustments. The only difference is the semantics of how\n the adjustments are expected to modify the arrays.\n\n This is parameterized on `make_input` and `make_expected_output` functions,\n which take 2-D lists of values and transform them into desired input/output\n arrays. We do this so that we can easily test both vanilla numpy ndarrays\n and our own LabelArray class for strings.\n '
adjustment_type = {float64_dtype: Float64Overwrite, datetime64ns_dtype: Datetime64Overwrite, int64_dtype: Int64Overwrite, bytes_dtype: ObjectOverwrite, unicode_dtype: ObjectOverwrite, object_dtype: ObjectOverwrite}[dtype]
make_expected_dtype = as_dtype(dtype)
missing_value = default_missing_value_for_dtype(datetime64ns_dtype)
if (dtype == object_dtype):
def make_overwrite_value(dtype, value):
return str(value)
else:
make_overwrite_value = coerce_to_dtype
adjustments = {}
buffer_as_of = ([None] * 6)
baseline = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[0] = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
adjustments[1] = [adjustment_type(0, 0, 0, 0, make_overwrite_value(dtype, 1))]
buffer_as_of[1] = make_expected_dtype([[1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[2] = buffer_as_of[1]
adjustments[3] = [adjustment_type(1, 2, 1, 1, make_overwrite_value(dtype, 3)), adjustment_type(0, 1, 0, 0, make_overwrite_value(dtype, 4))]
buffer_as_of[3] = make_expected_dtype([[4, 2, 2], [4, 3, 2], [2, 3, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
adjustments[4] = [adjustment_type(0, 3, 2, 2, make_overwrite_value(dtype, 5))]
buffer_as_of[4] = make_expected_dtype([[4, 2, 5], [4, 3, 5], [2, 3, 5], [2, 2, 5], [2, 2, 2], [2, 2, 2]])
adjustments[5] = [adjustment_type(0, 4, 1, 1, make_overwrite_value(dtype, 6)), adjustment_type(2, 2, 2, 2, make_overwrite_value(dtype, 7))]
buffer_as_of[5] = make_expected_dtype([[4, 6, 5], [4, 6, 5], [2, 6, 7], [2, 6, 5], [2, 6, 2], [2, 2, 2]])
return _gen_expectations(baseline, missing_value, adjustments, buffer_as_of, nrows=6, perspective_offsets=(0, 1)) | def _gen_overwrite_adjustment_cases(dtype):
'\n Generate test cases for overwrite adjustments.\n\n The algorithm used here is the same as the one used above for\n multiplicative adjustments. The only difference is the semantics of how\n the adjustments are expected to modify the arrays.\n\n This is parameterized on `make_input` and `make_expected_output` functions,\n which take 2-D lists of values and transform them into desired input/output\n arrays. We do this so that we can easily test both vanilla numpy ndarrays\n and our own LabelArray class for strings.\n '
adjustment_type = {float64_dtype: Float64Overwrite, datetime64ns_dtype: Datetime64Overwrite, int64_dtype: Int64Overwrite, bytes_dtype: ObjectOverwrite, unicode_dtype: ObjectOverwrite, object_dtype: ObjectOverwrite}[dtype]
make_expected_dtype = as_dtype(dtype)
missing_value = default_missing_value_for_dtype(datetime64ns_dtype)
if (dtype == object_dtype):
def make_overwrite_value(dtype, value):
return str(value)
else:
make_overwrite_value = coerce_to_dtype
adjustments = {}
buffer_as_of = ([None] * 6)
baseline = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[0] = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
adjustments[1] = [adjustment_type(0, 0, 0, 0, make_overwrite_value(dtype, 1))]
buffer_as_of[1] = make_expected_dtype([[1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[2] = buffer_as_of[1]
adjustments[3] = [adjustment_type(1, 2, 1, 1, make_overwrite_value(dtype, 3)), adjustment_type(0, 1, 0, 0, make_overwrite_value(dtype, 4))]
buffer_as_of[3] = make_expected_dtype([[4, 2, 2], [4, 3, 2], [2, 3, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
adjustments[4] = [adjustment_type(0, 3, 2, 2, make_overwrite_value(dtype, 5))]
buffer_as_of[4] = make_expected_dtype([[4, 2, 5], [4, 3, 5], [2, 3, 5], [2, 2, 5], [2, 2, 2], [2, 2, 2]])
adjustments[5] = [adjustment_type(0, 4, 1, 1, make_overwrite_value(dtype, 6)), adjustment_type(2, 2, 2, 2, make_overwrite_value(dtype, 7))]
buffer_as_of[5] = make_expected_dtype([[4, 6, 5], [4, 6, 5], [2, 6, 7], [2, 6, 5], [2, 6, 2], [2, 2, 2]])
return _gen_expectations(baseline, missing_value, adjustments, buffer_as_of, nrows=6, perspective_offsets=(0, 1))<|docstring|>Generate test cases for overwrite adjustments.
The algorithm used here is the same as the one used above for
multiplicative adjustments. The only difference is the semantics of how
the adjustments are expected to modify the arrays.
This is parameterized on `make_input` and `make_expected_output` functions,
which take 2-D lists of values and transform them into desired input/output
arrays. We do this so that we can easily test both vanilla numpy ndarrays
and our own LabelArray class for strings.<|endoftext|> |
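The overwrite semantics exercised above can be sketched outside the test harness (a minimal illustration, not the library's `Float64Overwrite` implementation): the adjustment simply assigns its value over an inclusive row/column region.

```python
def apply_overwrite(buf, first_row, last_row, first_col, last_col, value):
    # Overwrite (rather than scale) every cell in the inclusive region.
    for r in range(first_row, last_row + 1):
        for c in range(first_col, last_col + 1):
            buf[r][c] = value

buf = [[2, 2, 2] for _ in range(3)]
apply_overwrite(buf, 1, 2, 1, 1, 3)  # rows 1-2, column 1 become 3
```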
9e1684caffc1048f982746181c302b92cad4900e0257d39bfd1730482ee0de31 | def _gen_overwrite_1d_array_adjustment_case(dtype):
'\n Generate test cases for overwrite adjustments.\n\n The algorithm used here is the same as the one used above for\n multiplicative adjustments. The only difference is the semantics of how\n the adjustments are expected to modify the arrays.\n\n This is parameterized on `make_input` and `make_expected_output` functions,\n which take 1-D lists of values and transform them into desired input/output\n arrays. We do this so that we can easily test both vanilla numpy ndarrays\n and our own LabelArray class for strings.\n '
adjustment_type = {float64_dtype: Float641DArrayOverwrite, datetime64ns_dtype: Datetime641DArrayOverwrite}[dtype]
make_expected_dtype = as_dtype(dtype)
missing_value = default_missing_value_for_dtype(datetime64ns_dtype)
adjustments = {}
buffer_as_of = ([None] * 6)
baseline = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[0] = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
vals1 = [1]
adjustments[1] = [adjustment_type(0, 0, 0, 0, array([coerce_to_dtype(dtype, val) for val in vals1]))]
buffer_as_of[1] = make_expected_dtype([[1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[2] = buffer_as_of[1]
vals3 = [4, 4, 1]
adjustments[3] = [adjustment_type(0, 2, 0, 0, array([coerce_to_dtype(dtype, val) for val in vals3]))]
buffer_as_of[3] = make_expected_dtype([[4, 2, 2], [4, 2, 2], [1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
vals4 = ([5] * 4)
adjustments[4] = [adjustment_type(0, 3, 2, 2, array([coerce_to_dtype(dtype, val) for val in vals4]))]
buffer_as_of[4] = make_expected_dtype([[4, 2, 5], [4, 2, 5], [1, 2, 5], [2, 2, 5], [2, 2, 2], [2, 2, 2]])
vals5 = range(1, 6)
adjustments[5] = [adjustment_type(0, 4, 1, 1, array([coerce_to_dtype(dtype, val) for val in vals5]))]
buffer_as_of[5] = make_expected_dtype([[4, 1, 5], [4, 2, 5], [1, 3, 5], [2, 4, 5], [2, 5, 2], [2, 2, 2]])
return _gen_expectations(baseline, missing_value, adjustments, buffer_as_of, nrows=6, perspective_offsets=(0, 1)) | Generate test cases for overwrite adjustments.
The algorithm used here is the same as the one used above for
multiplicative adjustments. The only difference is the semantics of how
the adjustments are expected to modify the arrays.
This is parameterized on `make_input` and `make_expected_output` functions,
which take 1-D lists of values and transform them into desired input/output
arrays. We do this so that we can easily test both vanilla numpy ndarrays
and our own LabelArray class for strings. | tests/pipeline/test_adjusted_array.py | _gen_overwrite_1d_array_adjustment_case | 1quant/zipline | 412 | python | def _gen_overwrite_1d_array_adjustment_case(dtype):
'\n Generate test cases for overwrite adjustments.\n\n The algorithm used here is the same as the one used above for\n multiplicative adjustments. The only difference is the semantics of how\n the adjustments are expected to modify the arrays.\n\n This is parameterized on `make_input` and `make_expected_output` functions,\n which take 1-D lists of values and transform them into desired input/output\n arrays. We do this so that we can easily test both vanilla numpy ndarrays\n and our own LabelArray class for strings.\n '
adjustment_type = {float64_dtype: Float641DArrayOverwrite, datetime64ns_dtype: Datetime641DArrayOverwrite}[dtype]
make_expected_dtype = as_dtype(dtype)
missing_value = default_missing_value_for_dtype(datetime64ns_dtype)
adjustments = {}
buffer_as_of = ([None] * 6)
baseline = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[0] = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
vals1 = [1]
adjustments[1] = [adjustment_type(0, 0, 0, 0, array([coerce_to_dtype(dtype, val) for val in vals1]))]
buffer_as_of[1] = make_expected_dtype([[1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[2] = buffer_as_of[1]
vals3 = [4, 4, 1]
adjustments[3] = [adjustment_type(0, 2, 0, 0, array([coerce_to_dtype(dtype, val) for val in vals3]))]
buffer_as_of[3] = make_expected_dtype([[4, 2, 2], [4, 2, 2], [1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
vals4 = ([5] * 4)
adjustments[4] = [adjustment_type(0, 3, 2, 2, array([coerce_to_dtype(dtype, val) for val in vals4]))]
buffer_as_of[4] = make_expected_dtype([[4, 2, 5], [4, 2, 5], [1, 2, 5], [2, 2, 5], [2, 2, 2], [2, 2, 2]])
vals5 = range(1, 6)
adjustments[5] = [adjustment_type(0, 4, 1, 1, array([coerce_to_dtype(dtype, val) for val in vals5]))]
buffer_as_of[5] = make_expected_dtype([[4, 1, 5], [4, 2, 5], [1, 3, 5], [2, 4, 5], [2, 5, 2], [2, 2, 2]])
return _gen_expectations(baseline, missing_value, adjustments, buffer_as_of, nrows=6, perspective_offsets=(0, 1)) | def _gen_overwrite_1d_array_adjustment_case(dtype):
'\n Generate test cases for overwrite adjustments.\n\n The algorithm used here is the same as the one used above for\n multiplicative adjustments. The only difference is the semantics of how\n the adjustments are expected to modify the arrays.\n\n This is parameterized on `make_input` and `make_expected_output` functions,\n which take 1-D lists of values and transform them into desired input/output\n arrays. We do this so that we can easily test both vanilla numpy ndarrays\n and our own LabelArray class for strings.\n '
adjustment_type = {float64_dtype: Float641DArrayOverwrite, datetime64ns_dtype: Datetime641DArrayOverwrite}[dtype]
make_expected_dtype = as_dtype(dtype)
missing_value = default_missing_value_for_dtype(datetime64ns_dtype)
adjustments = {}
buffer_as_of = ([None] * 6)
baseline = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[0] = make_expected_dtype([[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
vals1 = [1]
adjustments[1] = [adjustment_type(0, 0, 0, 0, array([coerce_to_dtype(dtype, val) for val in vals1]))]
buffer_as_of[1] = make_expected_dtype([[1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
buffer_as_of[2] = buffer_as_of[1]
vals3 = [4, 4, 1]
adjustments[3] = [adjustment_type(0, 2, 0, 0, array([coerce_to_dtype(dtype, val) for val in vals3]))]
buffer_as_of[3] = make_expected_dtype([[4, 2, 2], [4, 2, 2], [1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]])
vals4 = ([5] * 4)
adjustments[4] = [adjustment_type(0, 3, 2, 2, array([coerce_to_dtype(dtype, val) for val in vals4]))]
buffer_as_of[4] = make_expected_dtype([[4, 2, 5], [4, 2, 5], [1, 2, 5], [2, 2, 5], [2, 2, 2], [2, 2, 2]])
vals5 = range(1, 6)
adjustments[5] = [adjustment_type(0, 4, 1, 1, array([coerce_to_dtype(dtype, val) for val in vals5]))]
buffer_as_of[5] = make_expected_dtype([[4, 1, 5], [4, 2, 5], [1, 3, 5], [2, 4, 5], [2, 5, 2], [2, 2, 2]])
return _gen_expectations(baseline, missing_value, adjustments, buffer_as_of, nrows=6, perspective_offsets=(0, 1))<|docstring|>Generate test cases for overwrite adjustments.
The algorithm used here is the same as the one used above for
multiplicative adjustments. The only difference is the semantics of how
the adjustments are expected to modify the arrays.
This is parameterized on `make_input` and `make_expected_output` functions,
which take 1-D lists of values and transform them into desired input/output
arrays. We do this so that we can easily test both vanilla numpy ndarrays
and our own LabelArray class for strings.<|endoftext|> |
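The 1-D array variant can be sketched the same way (again illustrative, not the real `Datetime641DArrayOverwrite`): instead of one scalar, the adjustment carries one value per row of the inclusive row range, written down a single column.

```python
def apply_1d_overwrite(buf, first_row, last_row, col, values):
    # values[i] lands in row first_row + i of the single target column.
    assert len(values) == last_row - first_row + 1
    for i, v in enumerate(values):
        buf[first_row + i][col] = v

buf = [[2, 2, 2] for _ in range(5)]
apply_1d_overwrite(buf, 0, 4, 1, [1, 2, 3, 4, 5])
```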
59f088322f3b32d1d007d83fd54d8ac17ea7c0e6b18d44070439b8fd6ebaf0d8 | def verify_tools(self):
'\n Verify that the Android APK tools in `briefcase` will operate on\n this system, downloading tools as needed.\n '
super().verify_tools()
self.android_sdk = AndroidSDK.verify(self) | Verify that the Android APK tools in `briefcase` will operate on
this system, downloading tools as needed. | src/briefcase/platforms/android/gradle.py | verify_tools | pombredanne/briefcase | 917 | python | def verify_tools(self):
'\n Verify that the Android APK tools in `briefcase` will operate on\n this system, downloading tools as needed.\n '
super().verify_tools()
self.android_sdk = AndroidSDK.verify(self) | def verify_tools(self):
'\n Verify that the Android APK tools in `briefcase` will operate on\n this system, downloading tools as needed.\n '
super().verify_tools()
self.android_sdk = AndroidSDK.verify(self)<|docstring|>Verify that the Android APK tools in `briefcase` will operate on
this system, downloading tools as needed.<|endoftext|> |
8edb0ee2d2b03b3c7fe0b728f801d91897c9dc43029fe51fde781f5cc53945e7 | def output_format_template_context(self, app: BaseConfig):
'\n Additional template context required by the output format.\n\n :param app: The config object for the app\n '
try:
version_code = app.version_code
except AttributeError:
parsed = parsed_version(app.version)
version_triple = (list(parsed.release) + [0, 0])[:3]
version_code = '{v[0]:d}{v[1]:02d}{v[2]:02d}{build:02d}'.format(v=version_triple, build=int(getattr(app, 'build', '0'))).lstrip('0')
return {'version_code': version_code} | Additional template context required by the output format.
:param app: The config object for the app | src/briefcase/platforms/android/gradle.py | output_format_template_context | pombredanne/briefcase | 917 | python | def output_format_template_context(self, app: BaseConfig):
'\n Additional template context required by the output format.\n\n :param app: The config object for the app\n '
try:
version_code = app.version_code
except AttributeError:
parsed = parsed_version(app.version)
version_triple = (list(parsed.release) + [0, 0])[:3]
version_code = '{v[0]:d}{v[1]:02d}{v[2]:02d}{build:02d}'.format(v=version_triple, build=int(getattr(app, 'build', '0'))).lstrip('0')
return {'version_code': version_code} | def output_format_template_context(self, app: BaseConfig):
'\n Additional template context required by the output format.\n\n :param app: The config object for the app\n '
try:
version_code = app.version_code
except AttributeError:
parsed = parsed_version(app.version)
version_triple = (list(parsed.release) + [0, 0])[:3]
version_code = '{v[0]:d}{v[1]:02d}{v[2]:02d}{build:02d}'.format(v=version_triple, build=int(getattr(app, 'build', '0'))).lstrip('0')
return {'version_code': version_code}<|docstring|>Additional template context required by the output format.
:param app: The config object for the app<|endoftext|> |
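The `version_code` derivation above can be reproduced standalone. This sketch assumes plain numeric dotted version strings and skips `parsed_version`'s full parsing: major digits, then zero-padded minor/micro/build pairs, with leading zeros stripped.

```python
def version_code(version, build=0):
    # Simplified stand-in for the template-context logic; assumes the
    # version is purely numeric (no pre-release suffixes).
    parts = [int(p) for p in version.split(".")]
    major, minor, micro = (parts + [0, 0])[:3]
    return "{:d}{:02d}{:02d}{:02d}".format(major, minor, micro, build).lstrip("0")
```

For example, "1.2.3" with build 4 packs into the digits 1 / 02 / 03 / 04.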
323ecb005adc83da57ff01955b25fe2b92eb4f11a4e545f5a4ec0035739fc37a | def build_app(self, app: BaseConfig, **kwargs):
'\n Build an application.\n\n :param app: The application to build\n '
print('[{app.app_name}] Building Android APK...'.format(app=app))
try:
self.subprocess.run([self.gradlew_path(app), 'assembleDebug'], env=self.android_sdk.env, cwd=self.bundle_path(app), check=True)
except subprocess.CalledProcessError:
print()
raise BriefcaseCommandError('Error while building project.') | Build an application.
:param app: The application to build | src/briefcase/platforms/android/gradle.py | build_app | pombredanne/briefcase | 917 | python | def build_app(self, app: BaseConfig, **kwargs):
'\n Build an application.\n\n :param app: The application to build\n '
print('[{app.app_name}] Building Android APK...'.format(app=app))
try:
self.subprocess.run([self.gradlew_path(app), 'assembleDebug'], env=self.android_sdk.env, cwd=self.bundle_path(app), check=True)
except subprocess.CalledProcessError:
print()
raise BriefcaseCommandError('Error while building project.') | def build_app(self, app: BaseConfig, **kwargs):
'\n Build an application.\n\n :param app: The application to build\n '
print('[{app.app_name}] Building Android APK...'.format(app=app))
try:
self.subprocess.run([self.gradlew_path(app), 'assembleDebug'], env=self.android_sdk.env, cwd=self.bundle_path(app), check=True)
except subprocess.CalledProcessError:
print()
raise BriefcaseCommandError('Error while building project.')<|docstring|>Build an application.
:param app: The application to build<|endoftext|> |
3a770287e13c80212e92fcb98cb8a8f7d903b16ecb248ad87481f223c9df6b85 | def run_app(self, app: BaseConfig, device_or_avd=None, **kwargs):
'\n Start the application.\n\n :param app: The config object for the app\n :param device: The device to target. If ``None``, the user will\n be asked to re-run the command selecting a specific device.\n '
(device, name, avd) = self.android_sdk.select_target_device(device_or_avd=device_or_avd)
if (device is None):
if (avd is None):
avd = self.android_sdk.create_emulator()
(device, name) = self.android_sdk.start_emulator(avd)
print()
print('[{app.app_name}] Starting app on {name} (device ID {device})'.format(app=app, name=name, device=device))
adb = self.android_sdk.adb(device=device)
package = '{app.package_name}.{app.module_name}'.format(app=app)
print()
print('[{app.app_name}] Stopping old versions of the app...'.format(app=app))
adb.force_stop_app(package)
print()
print('[{app.app_name}] Installing app...'.format(app=app))
adb.install_apk(self.binary_path(app))
print()
print('[{app.app_name}] Clearing device log...'.format(app=app))
adb.clear_log()
print()
print('[{app.app_name}] Launching app...'.format(app=app))
adb.start_app(package, 'org.beeware.android.MainActivity')
print()
print('[{app.app_name}] Following device log output (type CTRL-C to stop log)...'.format(app=app))
print(('=' * 75))
adb.logcat() | Start the application.
:param app: The config object for the app
:param device: The device to target. If ``None``, the user will
be asked to re-run the command selecting a specific device. | src/briefcase/platforms/android/gradle.py | run_app | pombredanne/briefcase | 917 | python | def run_app(self, app: BaseConfig, device_or_avd=None, **kwargs):
'\n Start the application.\n\n :param app: The config object for the app\n :param device: The device to target. If ``None``, the user will\n be asked to re-run the command selecting a specific device.\n '
(device, name, avd) = self.android_sdk.select_target_device(device_or_avd=device_or_avd)
if (device is None):
if (avd is None):
avd = self.android_sdk.create_emulator()
(device, name) = self.android_sdk.start_emulator(avd)
print()
print('[{app.app_name}] Starting app on {name} (device ID {device})'.format(app=app, name=name, device=device))
adb = self.android_sdk.adb(device=device)
package = '{app.package_name}.{app.module_name}'.format(app=app)
print()
print('[{app.app_name}] Stopping old versions of the app...'.format(app=app))
adb.force_stop_app(package)
print()
print('[{app.app_name}] Installing app...'.format(app=app))
adb.install_apk(self.binary_path(app))
print()
print('[{app.app_name}] Clearing device log...'.format(app=app))
adb.clear_log()
print()
print('[{app.app_name}] Launching app...'.format(app=app))
adb.start_app(package, 'org.beeware.android.MainActivity')
print()
print('[{app.app_name}] Following device log output (type CTRL-C to stop log)...'.format(app=app))
print(('=' * 75))
adb.logcat() | def run_app(self, app: BaseConfig, device_or_avd=None, **kwargs):
'\n Start the application.\n\n :param app: The config object for the app\n :param device: The device to target. If ``None``, the user will\n be asked to re-run the command selecting a specific device.\n '
(device, name, avd) = self.android_sdk.select_target_device(device_or_avd=device_or_avd)
if (device is None):
if (avd is None):
avd = self.android_sdk.create_emulator()
(device, name) = self.android_sdk.start_emulator(avd)
print()
print('[{app.app_name}] Starting app on {name} (device ID {device})'.format(app=app, name=name, device=device))
adb = self.android_sdk.adb(device=device)
package = '{app.package_name}.{app.module_name}'.format(app=app)
print()
print('[{app.app_name}] Stopping old versions of the app...'.format(app=app))
adb.force_stop_app(package)
print()
print('[{app.app_name}] Installing app...'.format(app=app))
adb.install_apk(self.binary_path(app))
print()
print('[{app.app_name}] Clearing device log...'.format(app=app))
adb.clear_log()
print()
print('[{app.app_name}] Launching app...'.format(app=app))
adb.start_app(package, 'org.beeware.android.MainActivity')
print()
print('[{app.app_name}] Following device log output (type CTRL-C to stop log)...'.format(app=app))
print(('=' * 75))
adb.logcat()<|docstring|>Start the application.
:param app: The config object for the app
:param device: The device to target. If ``None``, the user will
be asked to re-run the command selecting a specific device.<|endoftext|> |
b57317affc68a8994cb447f3a533a664cf84513636998e6bd8a5121f525fa151 | def package_app(self, app: BaseConfig, **kwargs):
'\n Package the app for distribution.\n\n This involves building the release app bundle.\n\n :param app: The application to build\n '
print('[{app.app_name}] Building Android App Bundle and APK in release mode...'.format(app=app))
try:
self.subprocess.run([self.gradlew_path(app), 'bundleRelease'], env=self.android_sdk.env, cwd=self.bundle_path(app), check=True)
except subprocess.CalledProcessError:
print()
raise BriefcaseCommandError('Error while building project.') | Package the app for distribution.
This involves building the release app bundle.
:param app: The application to build | src/briefcase/platforms/android/gradle.py | package_app | pombredanne/briefcase | 917 | python | def package_app(self, app: BaseConfig, **kwargs):
'\n Package the app for distribution.\n\n This involves building the release app bundle.\n\n :param app: The application to build\n '
print('[{app.app_name}] Building Android App Bundle and APK in release mode...'.format(app=app))
try:
self.subprocess.run([self.gradlew_path(app), 'bundleRelease'], env=self.android_sdk.env, cwd=self.bundle_path(app), check=True)
except subprocess.CalledProcessError:
print()
raise BriefcaseCommandError('Error while building project.') | def package_app(self, app: BaseConfig, **kwargs):
'\n Package the app for distribution.\n\n This involves building the release app bundle.\n\n :param app: The application to build\n '
print('[{app.app_name}] Building Android App Bundle and APK in release mode...'.format(app=app))
try:
self.subprocess.run([self.gradlew_path(app), 'bundleRelease'], env=self.android_sdk.env, cwd=self.bundle_path(app), check=True)
except subprocess.CalledProcessError:
print()
raise BriefcaseCommandError('Error while building project.')<|docstring|>Package the app for distribution.
This involves building the release app bundle.
:param app: The application to build<|endoftext|> |
acad61d7599ef4023241538a7f4bf17a9254b8af95f22f372ce10baf7b0b22d9 | def f(x, y):
'Doc string' | Doc string | python/ql/test/query-tests/Expressions/general/_private.py | f | AlexTereshenkov/ql | 4,036 | python | def f(x, y):
| def f(x, y):
<|docstring|>Doc string<|endoftext|> |
e745ede56361298352f7b71ce4b7c46a5f4ca4722ba7bad86b0701056043f066 | def do_modular_double(gate, target_reg, controls):
'\n Args:\n gate (ModularBimultiplicationGate):\n The gate being decomposed.\n target_reg (projectq.types.Qureg):\n The register to mod-multiply by the inverse factor.\n controls (list[Qubit]):\n Control qubits.\n '
assert (0 < gate.modulus <= (1 << len(target_reg)))
h = ((gate.modulus + 1) // 2)
((OffsetGate((- h)) & controls) | target_reg)
(((OffsetGate(h) & target_reg[(- 1)]) & controls) | target_reg[:(- 1)])
((X & controls) | target_reg[(- 1)])
((LeftRotateBits & controls) | target_reg) | Args:
gate (ModularBimultiplicationGate):
The gate being decomposed.
target_reg (projectq.types.Qureg):
The register to mod-multiply by the inverse factor.
controls (list[Qubit]):
Control qubits. | src/dirty_period_finding/decompositions/modular_double_rules.py | do_modular_double | Strilanc/PaperImpl-PeriodFindCutQubits | 5 | python | def do_modular_double(gate, target_reg, controls):
'\n Args:\n gate (ModularBimultiplicationGate):\n The gate being decomposed.\n target_reg (projectq.types.Qureg):\n The register to mod-multiply by the inverse factor.\n controls (list[Qubit]):\n Control qubits.\n '
assert (0 < gate.modulus <= (1 << len(target_reg)))
h = ((gate.modulus + 1) // 2)
((OffsetGate((- h)) & controls) | target_reg)
(((OffsetGate(h) & target_reg[(- 1)]) & controls) | target_reg[:(- 1)])
((X & controls) | target_reg[(- 1)])
((LeftRotateBits & controls) | target_reg) | def do_modular_double(gate, target_reg, controls):
'\n Args:\n gate (ModularBimultiplicationGate):\n The gate being decomposed.\n target_reg (projectq.types.Qureg):\n The register to mod-multiply by the inverse factor.\n controls (list[Qubit]):\n Control qubits.\n '
assert (0 < gate.modulus <= (1 << len(target_reg)))
h = ((gate.modulus + 1) // 2)
((OffsetGate((- h)) & controls) | target_reg)
(((OffsetGate(h) & target_reg[(- 1)]) & controls) | target_reg[:(- 1)])
((X & controls) | target_reg[(- 1)])
((LeftRotateBits & controls) | target_reg)<|docstring|>Args:
gate (ModularBimultiplicationGate):
The gate being decomposed.
target_reg (projectq.types.Qureg):
The register to mod-multiply by the inverse factor.
controls (list[Qubit]):
Control qubits.<|endoftext|> |
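Classically, the decomposition above computes doubling modulo an odd `N`; the constant `h = (N + 1) // 2` used by the offsets marks exactly the inputs whose doubled value needs one subtraction of `N`. A reference model (illustrative only, not the quantum circuit itself):

```python
def modular_double(x, modulus):
    # 2x mod N without a general reduction: for odd N and 0 <= x < N,
    # 2x exceeds N exactly when x >= (N + 1) // 2 -- the same constant h
    # the circuit offsets by before rotating.
    h = (modulus + 1) // 2
    return 2 * x - modulus if x >= h else 2 * x

N = 7
doubled = [modular_double(x, N) for x in range(N)]
```

Because `N` is odd, doubling is a permutation of `{0, ..., N-1}`, which is what makes the gate reversible.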
b22931d3d83e4135f869452b9133248065a2701c94147f522c568fd681057c73 | def numUniqueEmails(emails):
'\n \t:type emails: List[str]\n :rtype: int\n '
result = []
result = set(result)
for i in emails:
data = i.split('@')
local = data[0]
domain = data[1]
local = local.replace('.', '')
indexPlus = local.index('+')
result.add(((local[:indexPlus] + '@') + domain))
return len(result) | :type emails: List[str]
:rtype: int | String/numUniqueEmails.py | numUniqueEmails | konantian/LeetCode | 2 | python | def numUniqueEmails(emails):
'\n \t:type emails: List[str]\n :rtype: int\n '
result = []
result = set(result)
for i in emails:
data = i.split('@')
local = data[0]
domain = data[1]
local = local.replace('.', '')
indexPlus = local.index('+')
result.add(((local[:indexPlus] + '@') + domain))
return len(result) | def numUniqueEmails(emails):
'\n \t:type emails: List[str]\n :rtype: int\n '
result = []
result = set(result)
for i in emails:
data = i.split('@')
local = data[0]
domain = data[1]
local = local.replace('.', )
indexPlus = local.index('+')
result.add(((local[:indexPlus] + '@') + domain))
return len(result)<|docstring|>:type emails: List[str]
:rtype: int<|endoftext|> |
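A hedged variant of the recorded solution: the original's `local.index('+')` raises `ValueError` when a local part has no `'+'`. The illustrative rewrite below (not the reference implementation) tolerates that case by splitting instead of indexing, and drops dots only before the `'+'`.

```python
def num_unique_emails(emails):
    seen = set()
    for email in emails:
        local, _, domain = email.partition("@")
        # Ignore everything after '+', then strip dots from what remains.
        local = local.split("+", 1)[0].replace(".", "")
        seen.add(local + "@" + domain)
    return len(seen)
```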
f813949258a24cbca7884d64ff7639762b970b721d0716b7983fb9167b7a0f95 | @nottest
def build_tests(app=None, testing_shed_tools=False, master_api_key=None, user_api_key=None, name_prefix='TestForTool_', baseclass=ToolTestCase, create_admin=False, user_email=None, G=None, contains=None):
'\n If the module level variable `toolbox` is set, generate `ToolTestCase`\n classes for all of its tests and put them into this modules globals() so\n they can be discovered by nose.\n '
(host, port, url) = target_url_parts()
keep_outputs_dir = setup_keep_outdir()
galaxy_interactor_kwds = {'galaxy_url': url, 'master_api_key': master_api_key, 'api_key': user_api_key, 'keep_outputs_dir': keep_outputs_dir, 'user_api_key_is_admin_key': True}
if (create_admin and (not user_api_key)):
galaxy_interactor_kwds['test_user'] = user_email
galaxy_interactor = GalaxyInteractorApi(**galaxy_interactor_kwds)
if (not G):
G = globals()
for key in G.copy().keys():
if key.startswith('TestForTool_'):
del G[key]
tests_summary = galaxy_interactor.get_tests_summary()
for (tool_id, tool_summary) in tests_summary.items():
if (contains and (contains not in tool_id)):
continue
name = (name_prefix + tool_id.replace(' ', '_'))
baseclasses = (baseclass,)
namespace = dict()
all_versions_test_count = 0
for (tool_version, version_summary) in tool_summary.items():
count = version_summary['count']
for i in range(count):
test_function_name = ('test_tool_%06d' % all_versions_test_count)
def make_test_method(tool_version, test_index):
def test_tool(self):
self.do_it(tool_version=tool_version, test_index=test_index)
test_tool.__name__ = test_function_name
return test_tool
test_method = make_test_method(tool_version, i)
test_method.__doc__ = ('( %s ) > Test-%d' % (tool_id, (all_versions_test_count + 1)))
namespace[test_function_name] = test_method
namespace['tool_id'] = tool_id
namespace['galaxy_interactor'] = galaxy_interactor
namespace['master_api_key'] = master_api_key
namespace['user_api_key'] = (user_api_key or galaxy_interactor.api_key)
namespace['test_count'] = count
all_versions_test_count += 1
new_class_obj = type(str(name), baseclasses, namespace)
G[name] = new_class_obj
return G | If the module level variable `toolbox` is set, generate `ToolTestCase`
classes for all of its tests and put them into this modules globals() so
they can be discovered by nose. | test/functional/test_toolbox.py | build_tests | knutwa-ext/galaxy | 1,085 | python | @nottest
def build_tests(app=None, testing_shed_tools=False, master_api_key=None, user_api_key=None, name_prefix='TestForTool_', baseclass=ToolTestCase, create_admin=False, user_email=None, G=None, contains=None):
'\n If the module level variable `toolbox` is set, generate `ToolTestCase`\n classes for all of its tests and put them into this modules globals() so\n they can be discovered by nose.\n '
(host, port, url) = target_url_parts()
keep_outputs_dir = setup_keep_outdir()
galaxy_interactor_kwds = {'galaxy_url': url, 'master_api_key': master_api_key, 'api_key': user_api_key, 'keep_outputs_dir': keep_outputs_dir, 'user_api_key_is_admin_key': True}
if (create_admin and (not user_api_key)):
galaxy_interactor_kwds['test_user'] = user_email
galaxy_interactor = GalaxyInteractorApi(**galaxy_interactor_kwds)
if (not G):
G = globals()
for key in G.copy().keys():
if key.startswith('TestForTool_'):
del G[key]
tests_summary = galaxy_interactor.get_tests_summary()
for (tool_id, tool_summary) in tests_summary.items():
if (contains and (contains not in tool_id)):
continue
name = (name_prefix + tool_id.replace(' ', '_'))
baseclasses = (baseclass,)
namespace = dict()
all_versions_test_count = 0
for (tool_version, version_summary) in tool_summary.items():
count = version_summary['count']
for i in range(count):
test_function_name = ('test_tool_%06d' % all_versions_test_count)
def make_test_method(tool_version, test_index):
def test_tool(self):
self.do_it(tool_version=tool_version, test_index=test_index)
test_tool.__name__ = test_function_name
return test_tool
test_method = make_test_method(tool_version, i)
test_method.__doc__ = ('( %s ) > Test-%d' % (tool_id, (all_versions_test_count + 1)))
namespace[test_function_name] = test_method
namespace['tool_id'] = tool_id
namespace['galaxy_interactor'] = galaxy_interactor
namespace['master_api_key'] = master_api_key
namespace['user_api_key'] = (user_api_key or galaxy_interactor.api_key)
namespace['test_count'] = count
all_versions_test_count += 1
new_class_obj = type(str(name), baseclasses, namespace)
G[name] = new_class_obj
return G | @nottest
def build_tests(app=None, testing_shed_tools=False, master_api_key=None, user_api_key=None, name_prefix='TestForTool_', baseclass=ToolTestCase, create_admin=False, user_email=None, G=None, contains=None):
'\n If the module level variable `toolbox` is set, generate `ToolTestCase`\n classes for all of its tests and put them into this modules globals() so\n they can be discovered by nose.\n '
(host, port, url) = target_url_parts()
keep_outputs_dir = setup_keep_outdir()
galaxy_interactor_kwds = {'galaxy_url': url, 'master_api_key': master_api_key, 'api_key': user_api_key, 'keep_outputs_dir': keep_outputs_dir, 'user_api_key_is_admin_key': True}
if (create_admin and (not user_api_key)):
galaxy_interactor_kwds['test_user'] = user_email
galaxy_interactor = GalaxyInteractorApi(**galaxy_interactor_kwds)
if (not G):
G = globals()
for key in G.copy().keys():
if key.startswith('TestForTool_'):
del G[key]
tests_summary = galaxy_interactor.get_tests_summary()
for (tool_id, tool_summary) in tests_summary.items():
if (contains and (contains not in tool_id)):
continue
name = (name_prefix + tool_id.replace(' ', '_'))
baseclasses = (baseclass,)
namespace = dict()
all_versions_test_count = 0
for (tool_version, version_summary) in tool_summary.items():
count = version_summary['count']
for i in range(count):
test_function_name = ('test_tool_%06d' % all_versions_test_count)
def make_test_method(tool_version, test_index):
def test_tool(self):
self.do_it(tool_version=tool_version, test_index=test_index)
test_tool.__name__ = test_function_name
return test_tool
test_method = make_test_method(tool_version, i)
test_method.__doc__ = ('( %s ) > Test-%d' % (tool_id, (all_versions_test_count + 1)))
namespace[test_function_name] = test_method
namespace['tool_id'] = tool_id
namespace['galaxy_interactor'] = galaxy_interactor
namespace['master_api_key'] = master_api_key
namespace['user_api_key'] = (user_api_key or galaxy_interactor.api_key)
namespace['test_count'] = count
all_versions_test_count += 1
new_class_obj = type(str(name), baseclasses, namespace)
G[name] = new_class_obj
return G<|docstring|>If the module level variable `toolbox` is set, generate `ToolTestCase`
classes for all of its tests and put them into this modules globals() so
they can be discovered by nose.<|endoftext|> |
8c82ea2316130ee8025fe40bac56ed7ebb5875e36aea0339e1f036edc2f9b700 | def do_it(self, tool_id=None, tool_version=None, test_index=0, resource_parameters=None):
'\n Run through a tool test case.\n '
resource_parameters = (resource_parameters or {})
if (tool_id is None):
tool_id = self.tool_id
assert tool_id
verify_tool(tool_id, self.galaxy_interactor, resource_parameters=resource_parameters, test_index=test_index, tool_version=tool_version, register_job_data=register_job_data) | Run through a tool test case. | test/functional/test_toolbox.py | do_it | knutwa-ext/galaxy | 1,085 | python | def do_it(self, tool_id=None, tool_version=None, test_index=0, resource_parameters=None):
'\n \n '
resource_parameters = (resource_parameters or {})
if (tool_id is None):
tool_id = self.tool_id
assert tool_id
verify_tool(tool_id, self.galaxy_interactor, resource_parameters=resource_parameters, test_index=test_index, tool_version=tool_version, register_job_data=register_job_data) | def do_it(self, tool_id=None, tool_version=None, test_index=0, resource_parameters=None):
'\n \n '
resource_parameters = (resource_parameters or {})
if (tool_id is None):
tool_id = self.tool_id
assert tool_id
verify_tool(tool_id, self.galaxy_interactor, resource_parameters=resource_parameters, test_index=test_index, tool_version=tool_version, register_job_data=register_job_data)<|docstring|>Run through a tool test case.<|endoftext|> |
554f105daaa15a30ea0b3d241465487ce09a575c5accf2f6792e2e898557da5b | def get_current_timezone(location='Europe/Bucharest'):
' @location: Continent/City'
if (location not in pytz.all_timezones):
raise TimezoneNotFoundError
current_datetime = datetime.now(pytz.timezone(location)).strftime('%d.%m.%Y-%H:%M:%S')
_location = '-'.join(location.split('/'))
return f'{_location}-{current_datetime}' | @location: Continent/City | core_dev/timezone/timezone.py | get_current_timezone | alexzanderr/_core-dev | 0 | python | def get_current_timezone(location='Europe/Bucharest'):
' '
if (location not in pytz.all_timezones):
raise TimezoneNotFoundError
current_datetime = datetime.now(pytz.timezone(location)).strftime('%d.%m.%Y-%H:%M:%S')
_location = '-'.join(location.split('/'))
return f'{_location}-{current_datetime}' | def get_current_timezone(location='Europe/Bucharest'):
' '
if (location not in pytz.all_timezones):
raise TimezoneNotFoundError
current_datetime = datetime.now(pytz.timezone(location)).strftime('%d.%m.%Y-%H:%M:%S')
_location = '-'.join(location.split('/'))
return f'{_location}-{current_datetime}'<|docstring|>@location: Continent/City<|endoftext|> |
2558d0590a8bc8efe0b4eb6339cc8f07ef4b03384995defae1bd3bd2c93ebef6 | def get_current_timezone_time(location='Europe/Bucharest', time_format='%H:%M:%S'):
' @location: Continent/City'
if (location not in pytz.all_timezones):
raise TimezoneNotFoundError
return datetime.now(pytz.timezone(location)).strftime(time_format) | @location: Continent/City | core_dev/timezone/timezone.py | get_current_timezone_time | alexzanderr/_core-dev | 0 | python | def get_current_timezone_time(location='Europe/Bucharest', time_format='%H:%M:%S'):
' '
if (location not in pytz.all_timezones):
raise TimezoneNotFoundError
return datetime.now(pytz.timezone(location)).strftime(time_format) | def get_current_timezone_time(location='Europe/Bucharest', time_format='%H:%M:%S'):
' '
if (location not in pytz.all_timezones):
raise TimezoneNotFoundError
return datetime.now(pytz.timezone(location)).strftime(time_format)<|docstring|>@location: Continent/City<|endoftext|> |
8c2e269c784f759e909d6dc4a6200d51b012e9c4111e9b60909e0d44f16a2010 | def get_current_timezone_date(location='Europe/Bucharest', date_format='%d.%m.%Y'):
' @location: Continent/City'
if (location not in pytz.all_timezones):
raise TimezoneNotFoundError
return datetime.now(pytz.timezone(location)).strftime(date_format) | @location: Continent/City | core_dev/timezone/timezone.py | get_current_timezone_date | alexzanderr/_core-dev | 0 | python | def get_current_timezone_date(location='Europe/Bucharest', date_format='%d.%m.%Y'):
' '
if (location not in pytz.all_timezones):
raise TimezoneNotFoundError
return datetime.now(pytz.timezone(location)).strftime(date_format) | def get_current_timezone_date(location='Europe/Bucharest', date_format='%d.%m.%Y'):
' '
if (location not in pytz.all_timezones):
raise TimezoneNotFoundError
return datetime.now(pytz.timezone(location)).strftime(date_format)<|docstring|>@location: Continent/City<|endoftext|> |
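The three timezone helpers above share one pattern: validate the `Continent/City` key against the tz database, then format `datetime.now()` in that zone. A sketch of the same pattern using the standard-library `zoneinfo` module (Python 3.9+) in place of `pytz`; the function name `current_time_in` is hypothetical:

```python
from datetime import datetime
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def current_time_in(location="Europe/Bucharest", fmt="%H:%M:%S"):
    # ZoneInfo raises ZoneInfoNotFoundError for unknown keys, which plays
    # the role of the `location not in pytz.all_timezones` check above.
    try:
        tz = ZoneInfo(location)
    except ZoneInfoNotFoundError:
        raise KeyError(f"unknown timezone: {location}")
    return datetime.now(tz).strftime(fmt)
```

Unlike `pytz`, `zoneinfo` objects can be passed directly to `datetime` constructors without a `localize` step, so this style is the usual choice on modern Python.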
cdd54415942174ab5d9b41e1160a31d0e29fa8a97ca4374bc82e0bc16e12afc0 | def processed_actions_to_wrapper_actions_FindCave(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
    camera_actions = dataset_actions[:, 10:].astype(np.float32)
    attack_actions = dataset_actions[:, 0].astype(np.float32)
    forward_actions = dataset_actions[:, 3].astype(np.float32)
    jump_actions = dataset_actions[:, 4].astype(np.float32)
    back_actions = dataset_actions[:, 1].astype(np.float32)
    left_actions = dataset_actions[:, 5].astype(np.float32)
    right_actions = dataset_actions[:, 6].astype(np.float32)
    equip_actions = dataset_actions[:, 2]
    use_actions = dataset_actions[:, 9].astype(np.float32)
    sneak_actions = dataset_actions[:, 7].astype(np.float32)
    sprint_actions = dataset_actions[:, 8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
for i in range(len(camera_actions)):
if (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 12
return actions | Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match. | kairos_minerl/src/kairos_minerl/gail_wrapper.py | processed_actions_to_wrapper_actions_FindCave | viniciusguigo/kairos_minerl_basalt | 26 | python | def processed_actions_to_wrapper_actions_FindCave(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
    camera_actions = dataset_actions[:, 10:].astype(np.float32)
    attack_actions = dataset_actions[:, 0].astype(np.float32)
    forward_actions = dataset_actions[:, 3].astype(np.float32)
    jump_actions = dataset_actions[:, 4].astype(np.float32)
    back_actions = dataset_actions[:, 1].astype(np.float32)
    left_actions = dataset_actions[:, 5].astype(np.float32)
    right_actions = dataset_actions[:, 6].astype(np.float32)
    equip_actions = dataset_actions[:, 2]
    use_actions = dataset_actions[:, 9].astype(np.float32)
    sneak_actions = dataset_actions[:, 7].astype(np.float32)
    sprint_actions = dataset_actions[:, 8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
for i in range(len(camera_actions)):
if (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 12
return actions | def processed_actions_to_wrapper_actions_FindCave(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
    camera_actions = dataset_actions[:, 10:].astype(np.float32)
    attack_actions = dataset_actions[:, 0].astype(np.float32)
    forward_actions = dataset_actions[:, 3].astype(np.float32)
    jump_actions = dataset_actions[:, 4].astype(np.float32)
    back_actions = dataset_actions[:, 1].astype(np.float32)
    left_actions = dataset_actions[:, 5].astype(np.float32)
    right_actions = dataset_actions[:, 6].astype(np.float32)
    equip_actions = dataset_actions[:, 2]
    use_actions = dataset_actions[:, 9].astype(np.float32)
    sneak_actions = dataset_actions[:, 7].astype(np.float32)
    sprint_actions = dataset_actions[:, 8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
for i in range(len(camera_actions)):
if (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 12
return actions<|docstring|>Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match.<|endoftext|> |
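The FindCave mapping above discretizes continuous camera deltas with a symmetric threshold before falling back to the button checks. A small stand-alone sketch of just the camera branch — the action ids 3–6 and the check order follow the function above, while the sample rows are made up:

```python
def discretize_camera(camera, margin=5):
    # camera is a (pitch_delta, yaw_delta) pair, matching columns 10:12 of
    # the processed action rows; returned ids correspond to actions 3-6.
    pitch, yaw = float(camera[0]), float(camera[1])
    if pitch < -margin:
        return 3
    if pitch > margin:
        return 4
    if yaw > margin:
        return 5
    if yaw < -margin:
        return 6
    return None  # below threshold: fall through to the button checks

rows = [(-10.0, 0.0), (0.0, 12.0), (1.0, -2.0)]
ids = [discretize_camera(r) for r in rows]  # [3, 5, None]
```

Because the branches are checked in a fixed order, a frame that both pitches and yaws beyond the margin is labeled by the pitch axis only — the same prioritization the dataset conversion above applies.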
ab1109d1370a40fbf229fe1c030694b1ec12c52baa4ada5e20eac73c48d441ec | def processed_actions_to_wrapper_actions_Waterfall(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
    camera_actions = dataset_actions[:, 10:].astype(np.float32)
    attack_actions = dataset_actions[:, 0].astype(np.float32)
    forward_actions = dataset_actions[:, 3].astype(np.float32)
    jump_actions = dataset_actions[:, 4].astype(np.float32)
    back_actions = dataset_actions[:, 1].astype(np.float32)
    left_actions = dataset_actions[:, 5].astype(np.float32)
    right_actions = dataset_actions[:, 6].astype(np.float32)
    equip_actions = dataset_actions[:, 2]
    use_actions = dataset_actions[:, 9].astype(np.float32)
    sneak_actions = dataset_actions[:, 7].astype(np.float32)
    sprint_actions = dataset_actions[:, 8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
equip_actions_dict = dict()
equip_actions_dict['water_bucket'] = 12
equip_actions_dict['stone_pickaxe'] = 13
equip_actions_dict['stone_shovel'] = 14
equip_actions_dict['cobblestone'] = 15
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
if ((equip_actions[i] != 'none') and (equip_actions[i] in equip_actions_dict)):
currently_equipped_item = equip_actions[i]
if (use_actions[i] == 1):
actions[i] = equip_actions_dict[currently_equipped_item]
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 16
return actions | Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match. | kairos_minerl/src/kairos_minerl/gail_wrapper.py | processed_actions_to_wrapper_actions_Waterfall | viniciusguigo/kairos_minerl_basalt | 26 | python | def processed_actions_to_wrapper_actions_Waterfall(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
    camera_actions = dataset_actions[:, 10:].astype(np.float32)
    attack_actions = dataset_actions[:, 0].astype(np.float32)
    forward_actions = dataset_actions[:, 3].astype(np.float32)
    jump_actions = dataset_actions[:, 4].astype(np.float32)
    back_actions = dataset_actions[:, 1].astype(np.float32)
    left_actions = dataset_actions[:, 5].astype(np.float32)
    right_actions = dataset_actions[:, 6].astype(np.float32)
    equip_actions = dataset_actions[:, 2]
    use_actions = dataset_actions[:, 9].astype(np.float32)
    sneak_actions = dataset_actions[:, 7].astype(np.float32)
    sprint_actions = dataset_actions[:, 8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
equip_actions_dict = dict()
equip_actions_dict['water_bucket'] = 12
equip_actions_dict['stone_pickaxe'] = 13
equip_actions_dict['stone_shovel'] = 14
equip_actions_dict['cobblestone'] = 15
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
if ((equip_actions[i] != 'none') and (equip_actions[i] in equip_actions_dict)):
currently_equipped_item = equip_actions[i]
if (use_actions[i] == 1):
actions[i] = equip_actions_dict[currently_equipped_item]
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 16
return actions | def processed_actions_to_wrapper_actions_Waterfall(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
    camera_actions = dataset_actions[:, 10:].astype(np.float32)
    attack_actions = dataset_actions[:, 0].astype(np.float32)
    forward_actions = dataset_actions[:, 3].astype(np.float32)
    jump_actions = dataset_actions[:, 4].astype(np.float32)
    back_actions = dataset_actions[:, 1].astype(np.float32)
    left_actions = dataset_actions[:, 5].astype(np.float32)
    right_actions = dataset_actions[:, 6].astype(np.float32)
    equip_actions = dataset_actions[:, 2]
    use_actions = dataset_actions[:, 9].astype(np.float32)
    sneak_actions = dataset_actions[:, 7].astype(np.float32)
    sprint_actions = dataset_actions[:, 8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
equip_actions_dict = dict()
equip_actions_dict['water_bucket'] = 12
equip_actions_dict['stone_pickaxe'] = 13
equip_actions_dict['stone_shovel'] = 14
equip_actions_dict['cobblestone'] = 15
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
if ((equip_actions[i] != 'none') and (equip_actions[i] in equip_actions_dict)):
currently_equipped_item = equip_actions[i]
if (use_actions[i] == 1):
actions[i] = equip_actions_dict[currently_equipped_item]
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 16
return actions<|docstring|>Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match.<|endoftext|> |
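The Waterfall and Animalpen variants above add stateful "use" handling: equip events update a tracked item, and a later `use` press is mapped to a discrete action id that depends on which item is currently equipped. A compact sketch of that bookkeeping (the function name and event encoding are illustrative; the id values mirror `equip_actions_dict` in the Waterfall variant):

```python
def map_use_actions(events, use_ids, initial="stone_pickaxe"):
    # events is a list of (equip, use) pairs per timestep; use_ids maps an
    # equipped item to the discrete action id emitted when "use" is pressed.
    current = initial
    out = []
    for equip, use in events:
        if equip != "none" and equip in use_ids:
            current = equip  # equip events update the tracked item
        out.append(use_ids[current] if use else None)
    return out
```

In the full conversion functions above, the `None` timesteps fall through to the camera and button checks instead; only the item-dependent "use" branch is shown here.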
ab7e0bf20099cb375b7d75479a24763b51b1c0e1368e3d1d5378bd3cd8a4964b | def processed_actions_to_wrapper_actions_Animalpen(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
    camera_actions = dataset_actions[:, 10:].astype(np.float32)
    attack_actions = dataset_actions[:, 0].astype(np.float32)
    forward_actions = dataset_actions[:, 3].astype(np.float32)
    jump_actions = dataset_actions[:, 4].astype(np.float32)
    back_actions = dataset_actions[:, 1].astype(np.float32)
    left_actions = dataset_actions[:, 5].astype(np.float32)
    right_actions = dataset_actions[:, 6].astype(np.float32)
    equip_actions = dataset_actions[:, 2]
    use_actions = dataset_actions[:, 9].astype(np.float32)
    sneak_actions = dataset_actions[:, 7].astype(np.float32)
    sprint_actions = dataset_actions[:, 8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
equip_actions_dict = dict()
equip_actions_dict['carrot'] = 12
equip_actions_dict['fence'] = 13
equip_actions_dict['fence_gate'] = 14
equip_actions_dict['wheat'] = 15
equip_actions_dict['wheat_seeds'] = 16
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
if ((equip_actions[i] != 'none') and (equip_actions[i] in equip_actions_dict)):
currently_equipped_item = equip_actions[i]
if (equip_actions[i] == 'carrot'):
actions[i] = equip_actions_dict['carrot']
elif (equip_actions[i] == 'wheat'):
actions[i] = equip_actions_dict['wheat']
elif (equip_actions[i] == 'wheat_seeds'):
actions[i] = equip_actions_dict['wheat_seeds']
elif (use_actions[i] == 1):
actions[i] = equip_actions_dict[currently_equipped_item]
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 17
return actions | Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match. | kairos_minerl/src/kairos_minerl/gail_wrapper.py | processed_actions_to_wrapper_actions_Animalpen | viniciusguigo/kairos_minerl_basalt | 26 | python | def processed_actions_to_wrapper_actions_Animalpen(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
    camera_actions = dataset_actions[:, 10:].astype(np.float32)
    attack_actions = dataset_actions[:, 0].astype(np.float32)
    forward_actions = dataset_actions[:, 3].astype(np.float32)
    jump_actions = dataset_actions[:, 4].astype(np.float32)
    back_actions = dataset_actions[:, 1].astype(np.float32)
    left_actions = dataset_actions[:, 5].astype(np.float32)
    right_actions = dataset_actions[:, 6].astype(np.float32)
    equip_actions = dataset_actions[:, 2]
    use_actions = dataset_actions[:, 9].astype(np.float32)
    sneak_actions = dataset_actions[:, 7].astype(np.float32)
    sprint_actions = dataset_actions[:, 8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
equip_actions_dict = dict()
equip_actions_dict['carrot'] = 12
equip_actions_dict['fence'] = 13
equip_actions_dict['fence_gate'] = 14
equip_actions_dict['wheat'] = 15
equip_actions_dict['wheat_seeds'] = 16
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
if ((equip_actions[i] != 'none') and (equip_actions[i] in equip_actions_dict)):
currently_equipped_item = equip_actions[i]
if (equip_actions[i] == 'carrot'):
actions[i] = equip_actions_dict['carrot']
elif (equip_actions[i] == 'wheat'):
actions[i] = equip_actions_dict['wheat']
elif (equip_actions[i] == 'wheat_seeds'):
actions[i] = equip_actions_dict['wheat_seeds']
elif (use_actions[i] == 1):
actions[i] = equip_actions_dict[currently_equipped_item]
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 17
return actions | def processed_actions_to_wrapper_actions_Animalpen(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
camera_actions = dataset_actions[(:, 10:)].astype(np.float32)
attack_actions = dataset_actions[(:, 0)].astype(np.float32)
forward_actions = dataset_actions[(:, 3)].astype(np.float32)
jump_actions = dataset_actions[(:, 4)].astype(np.float32)
back_actions = dataset_actions[(:, 1)].astype(np.float32)
left_actions = dataset_actions[(:, 5)].astype(np.float32)
right_actions = dataset_actions[(:, 6)].astype(np.float32)
equip_actions = dataset_actions[(:, 2)]
use_actions = dataset_actions[(:, 9)].astype(np.float32)
sneak_actions = dataset_actions[(:, 7)].astype(np.float32)
sprint_actions = dataset_actions[(:, 8)].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
equip_actions_dict = dict()
equip_actions_dict['carrot'] = 12
equip_actions_dict['fence'] = 13
equip_actions_dict['fence_gate'] = 14
equip_actions_dict['wheat'] = 15
equip_actions_dict['wheat_seeds'] = 16
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
if ((equip_actions[i] != 'none') and (equip_actions[i] in equip_actions_dict)):
currently_equipped_item = equip_actions[i]
if (equip_actions[i] == 'carrot'):
actions[i] = equip_actions_dict['carrot']
elif (equip_actions[i] == 'wheat'):
actions[i] = equip_actions_dict['wheat']
elif (equip_actions[i] == 'wheat_seeds'):
actions[i] = equip_actions_dict['wheat_seeds']
elif (use_actions[i] == 1):
actions[i] = equip_actions_dict[currently_equipped_item]
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 17
return actions<|docstring|>Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match.<|endoftext|> |
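The Animalpen mapping above can be condensed into a per-row helper. The sketch below is a hypothetical reimplementation for illustration only (the name `discretize_row` and the plain `row[10:12]` slicing are mine, not from the source); it covers only the keyboard/camera branches and omits the equip/use handling, with the column layout read off the slices in the source (col 0 attack, col 1 back, col 3 forward, col 4 jump, col 5 left, col 6 right, cols 10-11 camera deltas).

```python
import numpy as np

# Hedged sketch of the Animalpen discretization above, one row at a time.
# Equip/use branches are omitted; column layout is assumed from the source.
def discretize_row(row, camera_margin=5):
    camera = row[10:12].astype(np.float32)
    if camera[0] < -camera_margin:
        return 3
    if camera[0] > camera_margin:
        return 4
    if camera[1] > camera_margin:
        return 5
    if camera[1] < -camera_margin:
        return 6
    if row[3] == 1:            # forward, possibly combined
        if row[4] == 1:
            return 2           # forward + jump
        if row[0] == 1:
            return 11          # forward + attack
        return 1
    if row[0] == 1:
        return 0               # attack only
    if row[5] == 1:
        return 8               # left
    if row[6] == 1:
        return 9               # right
    if row[1] == 1:
        return 7               # back
    if row[4] == 1:
        return 10              # jump
    return 17                  # no-op fallback

row = np.zeros(12)
row[10] = -10.0                # large negative first camera axis
print(discretize_row(row))     # -> 3
```

Note that, as in the source, camera thresholds take priority over button presses, so a frame that both moves the camera and presses forward is labeled as a camera action.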
559c8432c16058fc8041a399c6c14d8d20e488c4259070a09d62c50dc02dd277 | def processed_actions_to_wrapper_actions_Villagehouse(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
camera_actions = dataset_actions[(:, 10:)].astype(np.float32)
attack_actions = dataset_actions[(:, 0)].astype(np.float32)
forward_actions = dataset_actions[(:, 3)].astype(np.float32)
jump_actions = dataset_actions[(:, 4)].astype(np.float32)
back_actions = dataset_actions[(:, 1)].astype(np.float32)
left_actions = dataset_actions[(:, 5)].astype(np.float32)
right_actions = dataset_actions[(:, 6)].astype(np.float32)
equip_actions = dataset_actions[(:, 2)]
use_actions = dataset_actions[(:, 9)].astype(np.float32)
sneak_actions = dataset_actions[(:, 7)].astype(np.float32)
sprint_actions = dataset_actions[(:, 8)].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
equip_actions_dict = dict()
equip_actions_dict['carrot'] = 12
equip_actions_dict['fence'] = 13
equip_actions_dict['fence_gate'] = 14
equip_actions_dict['wheat'] = 15
equip_actions_dict['wheat_seeds'] = 16
equip_actions_dict['acacia_door'] = 12
equip_actions_dict['acacia_fence'] = 13
equip_actions_dict['cactus'] = 14
equip_actions_dict['cobblestone'] = 15
equip_actions_dict['dirt'] = 16
equip_actions_dict['fence'] = 17
equip_actions_dict['flower_pot'] = 18
equip_actions_dict['glass'] = 19
equip_actions_dict['ladder'] = 20
equip_actions_dict['log#0'] = 21
equip_actions_dict['log#1'] = 22
equip_actions_dict['log2#0'] = 23
equip_actions_dict['planks#0'] = 24
equip_actions_dict['planks#1'] = 25
equip_actions_dict['planks#4'] = 26
equip_actions_dict['red_flower'] = 27
equip_actions_dict['sand,sandstone#0'] = 28
equip_actions_dict['sandstone#2'] = 29
equip_actions_dict['sandstone_stairs'] = 30
equip_actions_dict['spruce_door'] = 31
equip_actions_dict['spruce_fence'] = 32
equip_actions_dict['stone_axe'] = 33
equip_actions_dict['stone_pickaxe'] = 34
equip_actions_dict['stone_stairs'] = 35
equip_actions_dict['torch'] = 36
equip_actions_dict['wooden_door'] = 37
equip_actions_dict['wooden_pressure_plate'] = 38
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
if ((equip_actions[i] != 'none') and (equip_actions[i] in equip_actions_dict)):
currently_equipped_item = equip_actions[i]
if (use_actions[i] == 1):
actions[i] = equip_actions_dict[currently_equipped_item]
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 39
return actions | Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match. | kairos_minerl/src/kairos_minerl/gail_wrapper.py | processed_actions_to_wrapper_actions_Villagehouse | viniciusguigo/kairos_minerl_basalt | 26 | python | def processed_actions_to_wrapper_actions_Villagehouse(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
camera_actions = dataset_actions[(:, 10:)].astype(np.float32)
attack_actions = dataset_actions[(:, 0)].astype(np.float32)
forward_actions = dataset_actions[(:, 3)].astype(np.float32)
jump_actions = dataset_actions[(:, 4)].astype(np.float32)
back_actions = dataset_actions[(:, 1)].astype(np.float32)
left_actions = dataset_actions[(:, 5)].astype(np.float32)
right_actions = dataset_actions[(:, 6)].astype(np.float32)
equip_actions = dataset_actions[(:, 2)]
use_actions = dataset_actions[(:, 9)].astype(np.float32)
sneak_actions = dataset_actions[(:, 7)].astype(np.float32)
sprint_actions = dataset_actions[(:, 8)].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
equip_actions_dict = dict()
equip_actions_dict['carrot'] = 12
equip_actions_dict['fence'] = 13
equip_actions_dict['fence_gate'] = 14
equip_actions_dict['wheat'] = 15
equip_actions_dict['wheat_seeds'] = 16
equip_actions_dict['acacia_door'] = 12
equip_actions_dict['acacia_fence'] = 13
equip_actions_dict['cactus'] = 14
equip_actions_dict['cobblestone'] = 15
equip_actions_dict['dirt'] = 16
equip_actions_dict['fence'] = 17
equip_actions_dict['flower_pot'] = 18
equip_actions_dict['glass'] = 19
equip_actions_dict['ladder'] = 20
equip_actions_dict['log#0'] = 21
equip_actions_dict['log#1'] = 22
equip_actions_dict['log2#0'] = 23
equip_actions_dict['planks#0'] = 24
equip_actions_dict['planks#1'] = 25
equip_actions_dict['planks#4'] = 26
equip_actions_dict['red_flower'] = 27
equip_actions_dict['sand,sandstone#0'] = 28
equip_actions_dict['sandstone#2'] = 29
equip_actions_dict['sandstone_stairs'] = 30
equip_actions_dict['spruce_door'] = 31
equip_actions_dict['spruce_fence'] = 32
equip_actions_dict['stone_axe'] = 33
equip_actions_dict['stone_pickaxe'] = 34
equip_actions_dict['stone_stairs'] = 35
equip_actions_dict['torch'] = 36
equip_actions_dict['wooden_door'] = 37
equip_actions_dict['wooden_pressure_plate'] = 38
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
if ((equip_actions[i] != 'none') and (equip_actions[i] in equip_actions_dict)):
currently_equipped_item = equip_actions[i]
if (use_actions[i] == 1):
actions[i] = equip_actions_dict[currently_equipped_item]
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 39
return actions | def processed_actions_to_wrapper_actions_Villagehouse(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
camera_actions = dataset_actions[(:, 10:)].astype(np.float32)
attack_actions = dataset_actions[(:, 0)].astype(np.float32)
forward_actions = dataset_actions[(:, 3)].astype(np.float32)
jump_actions = dataset_actions[(:, 4)].astype(np.float32)
back_actions = dataset_actions[(:, 1)].astype(np.float32)
left_actions = dataset_actions[(:, 5)].astype(np.float32)
right_actions = dataset_actions[(:, 6)].astype(np.float32)
equip_actions = dataset_actions[(:, 2)]
use_actions = dataset_actions[(:, 9)].astype(np.float32)
sneak_actions = dataset_actions[(:, 7)].astype(np.float32)
sprint_actions = dataset_actions[(:, 8)].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
equip_actions_dict = dict()
equip_actions_dict['carrot'] = 12
equip_actions_dict['fence'] = 13
equip_actions_dict['fence_gate'] = 14
equip_actions_dict['wheat'] = 15
equip_actions_dict['wheat_seeds'] = 16
equip_actions_dict['acacia_door'] = 12
equip_actions_dict['acacia_fence'] = 13
equip_actions_dict['cactus'] = 14
equip_actions_dict['cobblestone'] = 15
equip_actions_dict['dirt'] = 16
equip_actions_dict['fence'] = 17
equip_actions_dict['flower_pot'] = 18
equip_actions_dict['glass'] = 19
equip_actions_dict['ladder'] = 20
equip_actions_dict['log#0'] = 21
equip_actions_dict['log#1'] = 22
equip_actions_dict['log2#0'] = 23
equip_actions_dict['planks#0'] = 24
equip_actions_dict['planks#1'] = 25
equip_actions_dict['planks#4'] = 26
equip_actions_dict['red_flower'] = 27
equip_actions_dict['sand,sandstone#0'] = 28
equip_actions_dict['sandstone#2'] = 29
equip_actions_dict['sandstone_stairs'] = 30
equip_actions_dict['spruce_door'] = 31
equip_actions_dict['spruce_fence'] = 32
equip_actions_dict['stone_axe'] = 33
equip_actions_dict['stone_pickaxe'] = 34
equip_actions_dict['stone_stairs'] = 35
equip_actions_dict['torch'] = 36
equip_actions_dict['wooden_door'] = 37
equip_actions_dict['wooden_pressure_plate'] = 38
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
if ((equip_actions[i] != 'none') and (equip_actions[i] in equip_actions_dict)):
currently_equipped_item = equip_actions[i]
if (use_actions[i] == 1):
actions[i] = equip_actions_dict[currently_equipped_item]
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][0] > camera_margin):
actions[i] = 4
elif (camera_actions[i][1] > camera_margin):
actions[i] = 5
elif (camera_actions[i][1] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (back_actions[i] == 1):
actions[i] = 7
elif (jump_actions[i] == 1):
actions[i] = 10
else:
actions[i] = 39
return actions<|docstring|>Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match.<|endoftext|> |
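In the Villagehouse mapping above, `equip_actions_dict['fence']` is assigned twice (first 13, then 17), so only the second binding survives. A sketch of an equivalent table built from one ordered list makes such duplicates easier to catch; the list `VILLAGEHOUSE_ITEMS` and start index 12 are read off the assignments above (the leftover carrot/wheat keys from the Animalpen variant, which the later assignments partly shadow, are dropped here).

```python
# Hypothetical alternative construction of the Villagehouse equip table.
# One ordered list, one comprehension: duplicate keys would shrink the dict
# and trip the length check below.
VILLAGEHOUSE_ITEMS = [
    'acacia_door', 'acacia_fence', 'cactus', 'cobblestone', 'dirt',
    'fence', 'flower_pot', 'glass', 'ladder', 'log#0', 'log#1',
    'log2#0', 'planks#0', 'planks#1', 'planks#4', 'red_flower',
    'sand,sandstone#0', 'sandstone#2', 'sandstone_stairs',
    'spruce_door', 'spruce_fence', 'stone_axe', 'stone_pickaxe',
    'stone_stairs', 'torch', 'wooden_door', 'wooden_pressure_plate',
]
equip_actions_dict = {item: 12 + i for i, item in enumerate(VILLAGEHOUSE_ITEMS)}
assert len(equip_actions_dict) == len(VILLAGEHOUSE_ITEMS)  # no duplicate keys
print(equip_actions_dict['fence'])  # -> 17, matching the surviving assignment
```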
5157959f78aa73f9a5cd3bfa5c4accbbfd0d564f72e9e3a929e378b266b54055 | def processed_actions_to_wrapper_actions_Navigation(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
camera_actions = dataset_actions[(:, 10:)].astype(np.float32)
attack_actions = dataset_actions[(:, 0)].astype(np.float32)
forward_actions = dataset_actions[(:, 3)].astype(np.float32)
jump_actions = dataset_actions[(:, 4)].astype(np.float32)
back_actions = dataset_actions[(:, 1)].astype(np.float32)
left_actions = dataset_actions[(:, 5)].astype(np.float32)
right_actions = dataset_actions[(:, 6)].astype(np.float32)
equip_actions = dataset_actions[(:, 2)]
use_actions = dataset_actions[(:, 9)].astype(np.float32)
sneak_actions = dataset_actions[(:, 7)].astype(np.float32)
sprint_actions = dataset_actions[(:, 8)].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
for i in range(len(camera_actions)):
if (camera_actions[i][1] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][1] > camera_margin):
actions[i] = 4
elif (camera_actions[i][0] > camera_margin):
actions[i] = 5
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (jump_actions[i] == 1):
actions[i] = 10
elif (back_actions[i] == 1):
actions[i] = 7
elif sum(dataset_actions[(i, (0, 1, 3, 4, 5, 6, 7, 8, 9))].astype(np.float32)):
actions[i] = 12
else:
actions[i] = 99
return actions | Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match. | kairos_minerl/src/kairos_minerl/gail_wrapper.py | processed_actions_to_wrapper_actions_Navigation | viniciusguigo/kairos_minerl_basalt | 26 | python | def processed_actions_to_wrapper_actions_Navigation(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
camera_actions = dataset_actions[(:, 10:)].astype(np.float32)
attack_actions = dataset_actions[(:, 0)].astype(np.float32)
forward_actions = dataset_actions[(:, 3)].astype(np.float32)
jump_actions = dataset_actions[(:, 4)].astype(np.float32)
back_actions = dataset_actions[(:, 1)].astype(np.float32)
left_actions = dataset_actions[(:, 5)].astype(np.float32)
right_actions = dataset_actions[(:, 6)].astype(np.float32)
equip_actions = dataset_actions[(:, 2)]
use_actions = dataset_actions[(:, 9)].astype(np.float32)
sneak_actions = dataset_actions[(:, 7)].astype(np.float32)
sprint_actions = dataset_actions[(:, 8)].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
for i in range(len(camera_actions)):
if (camera_actions[i][1] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][1] > camera_margin):
actions[i] = 4
elif (camera_actions[i][0] > camera_margin):
actions[i] = 5
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (jump_actions[i] == 1):
actions[i] = 10
elif (back_actions[i] == 1):
actions[i] = 7
elif sum(dataset_actions[(i, (0, 1, 3, 4, 5, 6, 7, 8, 9))].astype(np.float32)):
actions[i] = 12
else:
actions[i] = 99
return actions | def processed_actions_to_wrapper_actions_Navigation(dataset_actions, camera_margin=5):
'\n Turn a batch of actions from dataset (`batch_iter`) to a numpy\n array that corresponds to batch of actions of ActionShaping wrapper (_actions).\n\n Camera margin sets the threshold what is considered "moving camera".\n\n Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"\n ordering of actions.\n If you change ActionShaping._actions, remember to change this!\n\n Array elements are integers corresponding to actions, or "-1"\n for actions that did not have any corresponding discrete match.\n '
camera_actions = dataset_actions[(:, 10:)].astype(np.float32)
attack_actions = dataset_actions[(:, 0)].astype(np.float32)
forward_actions = dataset_actions[(:, 3)].astype(np.float32)
jump_actions = dataset_actions[(:, 4)].astype(np.float32)
back_actions = dataset_actions[(:, 1)].astype(np.float32)
left_actions = dataset_actions[(:, 5)].astype(np.float32)
right_actions = dataset_actions[(:, 6)].astype(np.float32)
equip_actions = dataset_actions[(:, 2)]
use_actions = dataset_actions[(:, 9)].astype(np.float32)
sneak_actions = dataset_actions[(:, 7)].astype(np.float32)
sprint_actions = dataset_actions[(:, 8)].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
for i in range(len(camera_actions)):
if (camera_actions[i][1] < (- camera_margin)):
actions[i] = 3
elif (camera_actions[i][1] > camera_margin):
actions[i] = 4
elif (camera_actions[i][0] > camera_margin):
actions[i] = 5
elif (camera_actions[i][0] < (- camera_margin)):
actions[i] = 6
elif (forward_actions[i] == 1):
if (jump_actions[i] == 1):
actions[i] = 2
elif (attack_actions[i] == 1):
actions[i] = 11
else:
actions[i] = 1
elif (attack_actions[i] == 1):
actions[i] = 0
elif (left_actions[i] == 1):
actions[i] = 8
elif (right_actions[i] == 1):
actions[i] = 9
elif (jump_actions[i] == 1):
actions[i] = 10
elif (back_actions[i] == 1):
actions[i] = 7
elif sum(dataset_actions[(i, (0, 1, 3, 4, 5, 6, 7, 8, 9))].astype(np.float32)):
actions[i] = 12
else:
actions[i] = 99
return actions<|docstring|>Turn a batch of actions from dataset (`batch_iter`) to a numpy
array that corresponds to batch of actions of ActionShaping wrapper (_actions).
Camera margin sets the threshold what is considered "moving camera".
Note: Hardcoded to work for actions in ActionShaping._actions, with "intuitive"
ordering of actions.
If you change ActionShaping._actions, remember to change this!
Array elements are integers corresponding to actions, or "-1"
for actions that did not have any corresponding discrete match.<|endoftext|> |
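Unlike the Animalpen and Villagehouse variants, the Navigation mapping above tests `camera_actions[i][1]` before `camera_actions[i][0]`, and it ends with a two-level fallback: a frame with at least one pressed key but no discrete match becomes 12, while a fully idle frame becomes 99. A minimal sketch of just that fallback branch (the helper name `navigation_fallback` is mine; the column indices are the ones summed in the source):

```python
import numpy as np

# Hedged sketch of the Navigation fallback: columns 0, 1, 3-9 are the
# button/equip flag columns summed in the source; cols 10-11 (camera)
# and col 2 (equip string) are deliberately excluded.
def navigation_fallback(row):
    pressed = row[[0, 1, 3, 4, 5, 6, 7, 8, 9]].astype(np.float32)
    return 12 if pressed.sum() else 99

idle = np.zeros(12)
print(navigation_fallback(idle))   # -> 99 (truly idle frame)
idle[8] = 1                        # e.g. sprint pressed, no discrete match
print(navigation_fallback(idle))   # -> 12
```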
658d735a2e5644b4499e8a10f9a64641c32472be1e3cd2c15bc348f69ddc6ceb | def StreptococcusSpAcc21(directed: bool=False, verbose: int=2, cache_path: str='graphs/string', **additional_graph_kwargs: Dict) -> EnsmallenGraph:
'Return new instance of the Streptococcus sp. ACC21 graph.\n\n The graph is automatically retrieved from the STRING repository. \n\n\t\n\n Parameters\n -------------------\n directed: bool = False,\n Wether to load the graph as directed or undirected.\n By default false.\n verbose: int = 2,\n Wether to show loading bars during the retrieval and building\n of the graph.\n cache_path: str = "graphs",\n Where to store the downloaded graphs.\n additional_graph_kwargs: Dict,\n Additional graph kwargs.\n\n Returns\n -----------------------\n Instace of Streptococcus sp. ACC21 graph.\n\n\tReport\n\t---------------------\n\tAt the time of rendering these methods (please see datetime below), the graph\n\thad the following characteristics:\n\t\n\tDatetime: 2021-02-02 23:04:35.261807\n\t\n\tThe undirected graph Streptococcus sp. ACC21 has 1966 nodes and 149155\n\tweighted edges, of which none are self-loops. The graph is dense as it\n\thas a density of 0.07722 and has 5 connected components, where the component\n\twith most nodes has 1950 nodes and the component with the least nodes has\n\t3 nodes. The graph median node degree is 117, the mean node degree is 151.73,\n\tand the node degree mode is 11. The top 5 most central nodes are 1161413.HMPREF1510_0375\n\t(degree 920), 1161413.HMPREF1510_0932 (degree 829), 1161413.HMPREF1510_0971\n\t(degree 769), 1161413.HMPREF1510_0258 (degree 761) and 1161413.HMPREF1510_0651\n\t(degree 725).\n\t\n\n\tReferences\n\t---------------------\n\tPlease cite the following if you use the data:\n\t\n\t@article{szklarczyk2019string,\n\t title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},\n\t author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},\n\t journal={Nucleic acids research},\n\t volume={47},\n\t number={D1},\n\t pages={D607--D613},\n\t year={2019},\n\t publisher={Oxford University Press}\n\t}\n\t\n\n\tUsage example\n\t----------------------\n\tThe usage of this graph is relatively straightforward:\n\t\n\t.. code:: python\n\t\n\t # First import the function to retrieve the graph from the datasets\n\t from ensmallen_graph.datasets.string import StreptococcusSpAcc21\n\t\n\t # Then load the graph\n\t graph = StreptococcusSpAcc21()\n\t\n\t # Finally, you can do anything with it, for instance, compute its report:\n\t print(graph)\n\t\n\t # If you need to run a link prediction task with validation,\n\t # you can split the graph using a connected holdout as follows:\n\t train_graph, validation_graph = graph.connected_holdout(\n\t # You can use an 80/20 split the holdout, for example.\n\t train_size=0.8,\n\t # The random state is used to reproduce the holdout.\n\t random_state=42,\n\t # Wether to show a loading bar.\n\t verbose=True\n\t )\n\t\n\t # Remember that, if you need, you can enable the memory-time trade-offs:\n\t train_graph.enable(\n\t vector_sources=True,\n\t vector_destinations=True,\n\t vector_outbounds=True\n\t )\n\t\n\t # Consider using the methods made available in the Embiggen package\n\t # to run graph embedding or link prediction tasks.\n '
return AutomaticallyRetrievedGraph(graph_name='StreptococcusSpAcc21', dataset='string', directed=directed, verbose=verbose, cache_path=cache_path, additional_graph_kwargs=additional_graph_kwargs)() | Return new instance of the Streptococcus sp. ACC21 graph.
The graph is automatically retrieved from the STRING repository.
Parameters
-------------------
directed: bool = False,
Wether to load the graph as directed or undirected.
By default false.
verbose: int = 2,
Wether to show loading bars during the retrieval and building
of the graph.
cache_path: str = "graphs",
Where to store the downloaded graphs.
additional_graph_kwargs: Dict,
Additional graph kwargs.
Returns
-----------------------
Instace of Streptococcus sp. ACC21 graph.
Report
---------------------
At the time of rendering these methods (please see datetime below), the graph
had the following characteristics:
Datetime: 2021-02-02 23:04:35.261807
The undirected graph Streptococcus sp. ACC21 has 1966 nodes and 149155
weighted edges, of which none are self-loops. The graph is dense as it
has a density of 0.07722 and has 5 connected components, where the component
with most nodes has 1950 nodes and the component with the least nodes has
3 nodes. The graph median node degree is 117, the mean node degree is 151.73,
and the node degree mode is 11. The top 5 most central nodes are 1161413.HMPREF1510_0375
(degree 920), 1161413.HMPREF1510_0932 (degree 829), 1161413.HMPREF1510_0971
(degree 769), 1161413.HMPREF1510_0258 (degree 761) and 1161413.HMPREF1510_0651
(degree 725).
References
---------------------
Please cite the following if you use the data:
@article{szklarczyk2019string,
title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},
author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},
journal={Nucleic acids research},
volume={47},
number={D1},
pages={D607--D613},
year={2019},
publisher={Oxford University Press}
}
Usage example
----------------------
The usage of this graph is relatively straightforward:
.. code:: python
# First import the function to retrieve the graph from the datasets
from ensmallen_graph.datasets.string import StreptococcusSpAcc21
# Then load the graph
graph = StreptococcusSpAcc21()
# Finally, you can do anything with it, for instance, compute its report:
print(graph)
# If you need to run a link prediction task with validation,
# you can split the graph using a connected holdout as follows:
train_graph, validation_graph = graph.connected_holdout(
# You can use an 80/20 split the holdout, for example.
train_size=0.8,
# The random state is used to reproduce the holdout.
random_state=42,
# Wether to show a loading bar.
verbose=True
)
# Remember that, if you need, you can enable the memory-time trade-offs:
train_graph.enable(
vector_sources=True,
vector_destinations=True,
vector_outbounds=True
)
# Consider using the methods made available in the Embiggen package
# to run graph embedding or link prediction tasks. | bindings/python/ensmallen_graph/datasets/string/streptococcusspacc21.py | StreptococcusSpAcc21 | caufieldjh/ensmallen_graph | 0 | python | def StreptococcusSpAcc21(directed: bool=False, verbose: int=2, cache_path: str='graphs/string', **additional_graph_kwargs: Dict) -> EnsmallenGraph:
'Return new instance of the Streptococcus sp. ACC21 graph.\n\n The graph is automatically retrieved from the STRING repository. \n\n\t\n\n Parameters\n -------------------\n directed: bool = False,\n Wether to load the graph as directed or undirected.\n By default false.\n verbose: int = 2,\n Wether to show loading bars during the retrieval and building\n of the graph.\n cache_path: str = "graphs",\n Where to store the downloaded graphs.\n additional_graph_kwargs: Dict,\n Additional graph kwargs.\n\n Returns\n -----------------------\n Instace of Streptococcus sp. ACC21 graph.\n\n\tReport\n\t---------------------\n\tAt the time of rendering these methods (please see datetime below), the graph\n\thad the following characteristics:\n\t\n\tDatetime: 2021-02-02 23:04:35.261807\n\t\n\tThe undirected graph Streptococcus sp. ACC21 has 1966 nodes and 149155\n\tweighted edges, of which none are self-loops. The graph is dense as it\n\thas a density of 0.07722 and has 5 connected components, where the component\n\twith most nodes has 1950 nodes and the component with the least nodes has\n\t3 nodes. The graph median node degree is 117, the mean node degree is 151.73,\n\tand the node degree mode is 11. The top 5 most central nodes are 1161413.HMPREF1510_0375\n\t(degree 920), 1161413.HMPREF1510_0932 (degree 829), 1161413.HMPREF1510_0971\n\t(degree 769), 1161413.HMPREF1510_0258 (degree 761) and 1161413.HMPREF1510_0651\n\t(degree 725).\n\t\n\n\tReferences\n\t---------------------\n\tPlease cite the following if you use the data:\n\t\n\t@article{szklarczyk2019string,\n\t title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},\n\t author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},\n\t journal={Nucleic acids research},\n\t volume={47},\n\t number={D1},\n\t pages={D607--D613},\n\t year={2019},\n\t publisher={Oxford University Press}\n\t}\n\t\n\n\tUsage example\n\t----------------------\n\tThe usage of this graph is relatively straightforward:\n\t\n\t.. code:: python\n\t\n\t # First import the function to retrieve the graph from the datasets\n\t from ensmallen_graph.datasets.string import StreptococcusSpAcc21\n\t\n\t # Then load the graph\n\t graph = StreptococcusSpAcc21()\n\t\n\t # Finally, you can do anything with it, for instance, compute its report:\n\t print(graph)\n\t\n\t # If you need to run a link prediction task with validation,\n\t # you can split the graph using a connected holdout as follows:\n\t train_graph, validation_graph = graph.connected_holdout(\n\t # You can use an 80/20 split the holdout, for example.\n\t train_size=0.8,\n\t # The random state is used to reproduce the holdout.\n\t random_state=42,\n\t # Wether to show a loading bar.\n\t verbose=True\n\t )\n\t\n\t # Remember that, if you need, you can enable the memory-time trade-offs:\n\t train_graph.enable(\n\t vector_sources=True,\n\t vector_destinations=True,\n\t vector_outbounds=True\n\t )\n\t\n\t # Consider using the methods made available in the Embiggen package\n\t # to run graph embedding or link prediction tasks.\n '
return AutomaticallyRetrievedGraph(graph_name='StreptococcusSpAcc21', dataset='string', directed=directed, verbose=verbose, cache_path=cache_path, additional_graph_kwargs=additional_graph_kwargs)() | def StreptococcusSpAcc21(directed: bool=False, verbose: int=2, cache_path: str='graphs/string', **additional_graph_kwargs: Dict) -> EnsmallenGraph:
'Return new instance of the Streptococcus sp. ACC21 graph.\n\n The graph is automatically retrieved from the STRING repository. \n\n\t\n\n Parameters\n -------------------\n directed: bool = False,\n Wether to load the graph as directed or undirected.\n By default false.\n verbose: int = 2,\n Wether to show loading bars during the retrieval and building\n of the graph.\n cache_path: str = "graphs",\n Where to store the downloaded graphs.\n additional_graph_kwargs: Dict,\n Additional graph kwargs.\n\n Returns\n -----------------------\n Instace of Streptococcus sp. ACC21 graph.\n\n\tReport\n\t---------------------\n\tAt the time of rendering these methods (please see datetime below), the graph\n\thad the following characteristics:\n\t\n\tDatetime: 2021-02-02 23:04:35.261807\n\t\n\tThe undirected graph Streptococcus sp. ACC21 has 1966 nodes and 149155\n\tweighted edges, of which none are self-loops. The graph is dense as it\n\thas a density of 0.07722 and has 5 connected components, where the component\n\twith most nodes has 1950 nodes and the component with the least nodes has\n\t3 nodes. The graph median node degree is 117, the mean node degree is 151.73,\n\tand the node degree mode is 11. The top 5 most central nodes are 1161413.HMPREF1510_0375\n\t(degree 920), 1161413.HMPREF1510_0932 (degree 829), 1161413.HMPREF1510_0971\n\t(degree 769), 1161413.HMPREF1510_0258 (degree 761) and 1161413.HMPREF1510_0651\n\t(degree 725).\n\t\n\n\tReferences\n\t---------------------\n\tPlease cite the following if you use the data:\n\t\n\t@article{szklarczyk2019string,\n\t title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},\n\t author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},\n\t journal={Nucleic acids research},\n\t volume={47},\n\t number={D1},\n\t pages={D607--D613},\n\t year={2019},\n\t publisher={Oxford University Press}\n\t}\n\t\n\n\tUsage example\n\t----------------------\n\tThe usage of this graph is relatively straightforward:\n\t\n\t.. code:: python\n\t\n\t # First import the function to retrieve the graph from the datasets\n\t from ensmallen_graph.datasets.string import StreptococcusSpAcc21\n\t\n\t # Then load the graph\n\t graph = StreptococcusSpAcc21()\n\t\n\t # Finally, you can do anything with it, for instance, compute its report:\n\t print(graph)\n\t\n\t # If you need to run a link prediction task with validation,\n\t # you can split the graph using a connected holdout as follows:\n\t train_graph, validation_graph = graph.connected_holdout(\n\t # You can use an 80/20 split the holdout, for example.\n\t train_size=0.8,\n\t # The random state is used to reproduce the holdout.\n\t random_state=42,\n\t # Wether to show a loading bar.\n\t verbose=True\n\t )\n\t\n\t # Remember that, if you need, you can enable the memory-time trade-offs:\n\t train_graph.enable(\n\t vector_sources=True,\n\t vector_destinations=True,\n\t vector_outbounds=True\n\t )\n\t\n\t # Consider using the methods made available in the Embiggen package\n\t # to run graph embedding or link prediction tasks.\n '
return AutomaticallyRetrievedGraph(graph_name='StreptococcusSpAcc21', dataset='string', directed=directed, verbose=verbose, cache_path=cache_path, additional_graph_kwargs=additional_graph_kwargs)()<|docstring|>Return new instance of the Streptococcus sp. ACC21 graph.
The graph is automatically retrieved from the STRING repository.
Parameters
-------------------
directed: bool = False,
Whether to load the graph as directed or undirected.
By default false.
verbose: int = 2,
Whether to show loading bars during the retrieval and building
of the graph.
cache_path: str = "graphs",
Where to store the downloaded graphs.
additional_graph_kwargs: Dict,
Additional graph kwargs.
Returns
-----------------------
Instance of Streptococcus sp. ACC21 graph.
Report
---------------------
At the time of rendering these methods (please see datetime below), the graph
had the following characteristics:
Datetime: 2021-02-02 23:04:35.261807
The undirected graph Streptococcus sp. ACC21 has 1966 nodes and 149155
weighted edges, of which none are self-loops. The graph is dense as it
has a density of 0.07722 and has 5 connected components, where the component
with most nodes has 1950 nodes and the component with the least nodes has
3 nodes. The graph median node degree is 117, the mean node degree is 151.73,
and the node degree mode is 11. The top 5 most central nodes are 1161413.HMPREF1510_0375
(degree 920), 1161413.HMPREF1510_0932 (degree 829), 1161413.HMPREF1510_0971
(degree 769), 1161413.HMPREF1510_0258 (degree 761) and 1161413.HMPREF1510_0651
(degree 725).
References
---------------------
Please cite the following if you use the data:
@article{szklarczyk2019string,
title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},
author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},
journal={Nucleic acids research},
volume={47},
number={D1},
pages={D607--D613},
year={2019},
publisher={Oxford University Press}
}
Usage example
----------------------
The usage of this graph is relatively straightforward:
.. code:: python
# First import the function to retrieve the graph from the datasets
from ensmallen_graph.datasets.string import StreptococcusSpAcc21
# Then load the graph
graph = StreptococcusSpAcc21()
# Finally, you can do anything with it, for instance, compute its report:
print(graph)
# If you need to run a link prediction task with validation,
# you can split the graph using a connected holdout as follows:
train_graph, validation_graph = graph.connected_holdout(
# You can use an 80/20 split the holdout, for example.
train_size=0.8,
# The random state is used to reproduce the holdout.
random_state=42,
# Whether to show a loading bar.
verbose=True
)
# Remember that, if you need, you can enable the memory-time trade-offs:
train_graph.enable(
vector_sources=True,
vector_destinations=True,
vector_outbounds=True
)
# Consider using the methods made available in the Embiggen package
# to run graph embedding or link prediction tasks.<|endoftext|> |
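The `connected_holdout` call in the usage example above is reproducible because the split is driven by a fixed `random_state`. As a minimal stdlib sketch of that idea (assuming nothing from `ensmallen_graph` — the edge list and the `holdout_split` helper below are illustrative, not library code), seeding the RNG makes an 80/20 split identical across runs:

```python
import random

def holdout_split(edges, train_size=0.8, random_state=42):
    # Shuffle a copy of the edge list with a seeded RNG so the same
    # random_state always yields the same partition.
    rng = random.Random(random_state)
    shuffled = list(edges)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_size)
    return shuffled[:cut], shuffled[cut:]

edges = [(i, i + 1) for i in range(10)]
train, valid = holdout_split(edges, train_size=0.8, random_state=42)
assert len(train) == 8 and len(valid) == 2
# Reproducibility: the same seed gives the same split.
assert holdout_split(edges, train_size=0.8, random_state=42) == (train, valid)
```

The real method additionally guarantees the training part stays connected; this sketch only shows the seeded-randomness half of the contract.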
bd3c6ccfda3a43c98570f2a555eef1366bc0848b82082f9ee43e3e986daa1a0e | def __init__(self, workdir=None, mode='update', alwaysUseLatest=False, timeout=(20 * 60), retry=None, env=None, logEnviron=True, description=None, descriptionDone=None, descriptionSuffix=None, codebase='', **kwargs):
"\n @type workdir: string\n @param workdir: local directory (relative to the Builder's root)\n where the tree should be placed\n\n @type alwaysUseLatest: boolean\n @param alwaysUseLatest: whether to always update to the most\n recent available sources for this build.\n\n Normally the Source step asks its Build for a list of all\n Changes that are supposed to go into the build, then computes a\n 'source stamp' (revision number or timestamp) that will cause\n exactly that set of changes to be present in the checked out\n tree. This is turned into, e.g., 'cvs update -D timestamp', or\n 'svn update -r revnum'. If alwaysUseLatest=True, bypass this\n computation and always update to the latest available sources\n for each build.\n\n The source stamp helps avoid a race condition in which someone\n commits a change after the master has decided to start a build\n but before the worker finishes checking out the sources. At best\n this results in a build which contains more changes than the\n buildmaster thinks it has (possibly resulting in the wrong\n person taking the blame for any problems that result), at worst\n is can result in an incoherent set of sources (splitting a\n non-atomic commit) which may not build at all.\n\n @type logEnviron: boolean\n @param logEnviron: If this option is true (the default), then the\n step's logfile will describe the environment\n variables on the worker. In situations where the\n environment is not relevant and is long, it may\n be easier to set logEnviron=False.\n\n @type codebase: string\n @param codebase: Specifies which changes in a build are processed by\n the step. The default codebase value is ''. The codebase must correspond\n to a codebase assigned by the codebaseGenerator. If no codebaseGenerator\n is defined in the master then codebase doesn't need to be set, the\n default value will then match all changes.\n "
descriptions_for_mode = {'clobber': 'checkout', 'export': 'exporting'}
descriptionDones_for_mode = {'clobber': 'checkout', 'export': 'export'}
if (not description):
description = [descriptions_for_mode.get(mode, 'updating')]
if (not descriptionDone):
descriptionDone = [descriptionDones_for_mode.get(mode, 'update')]
if ((not descriptionSuffix) and codebase):
descriptionSuffix = [codebase]
LoggingBuildStep.__init__(self, description=description, descriptionDone=descriptionDone, descriptionSuffix=descriptionSuffix, **kwargs)
self.workdir = workdir
self.sourcestamp = None
self.codebase = codebase
if self.codebase:
self.name = properties.Interpolate('%(kw:name)s-%(kw:codebase)s', name=self.name, codebase=self.codebase)
self.alwaysUseLatest = alwaysUseLatest
self.logEnviron = logEnviron
self.env = env
self.timeout = timeout
self.retry = retry | @type workdir: string
@param workdir: local directory (relative to the Builder's root)
where the tree should be placed
@type alwaysUseLatest: boolean
@param alwaysUseLatest: whether to always update to the most
recent available sources for this build.
Normally the Source step asks its Build for a list of all
Changes that are supposed to go into the build, then computes a
'source stamp' (revision number or timestamp) that will cause
exactly that set of changes to be present in the checked out
tree. This is turned into, e.g., 'cvs update -D timestamp', or
'svn update -r revnum'. If alwaysUseLatest=True, bypass this
computation and always update to the latest available sources
for each build.
The source stamp helps avoid a race condition in which someone
commits a change after the master has decided to start a build
but before the worker finishes checking out the sources. At best
this results in a build which contains more changes than the
buildmaster thinks it has (possibly resulting in the wrong
person taking the blame for any problems that result), at worst
it can result in an incoherent set of sources (splitting a
non-atomic commit) which may not build at all.
@type logEnviron: boolean
@param logEnviron: If this option is true (the default), then the
step's logfile will describe the environment
variables on the worker. In situations where the
environment is not relevant and is long, it may
be easier to set logEnviron=False.
@type codebase: string
@param codebase: Specifies which changes in a build are processed by
the step. The default codebase value is ''. The codebase must correspond
to a codebase assigned by the codebaseGenerator. If no codebaseGenerator
is defined in the master then codebase doesn't need to be set, the
default value will then match all changes. | bb-master/sandbox/lib/python3.5/site-packages/buildbot/steps/source/base.py | __init__ | Alecto3-D/testable-greeter | 2 | python | def __init__(self, workdir=None, mode='update', alwaysUseLatest=False, timeout=(20 * 60), retry=None, env=None, logEnviron=True, description=None, descriptionDone=None, descriptionSuffix=None, codebase='', **kwargs):
"\n @type workdir: string\n @param workdir: local directory (relative to the Builder's root)\n where the tree should be placed\n\n @type alwaysUseLatest: boolean\n @param alwaysUseLatest: whether to always update to the most\n recent available sources for this build.\n\n Normally the Source step asks its Build for a list of all\n Changes that are supposed to go into the build, then computes a\n 'source stamp' (revision number or timestamp) that will cause\n exactly that set of changes to be present in the checked out\n tree. This is turned into, e.g., 'cvs update -D timestamp', or\n 'svn update -r revnum'. If alwaysUseLatest=True, bypass this\n computation and always update to the latest available sources\n for each build.\n\n The source stamp helps avoid a race condition in which someone\n commits a change after the master has decided to start a build\n but before the worker finishes checking out the sources. At best\n this results in a build which contains more changes than the\n buildmaster thinks it has (possibly resulting in the wrong\n person taking the blame for any problems that result), at worst\n is can result in an incoherent set of sources (splitting a\n non-atomic commit) which may not build at all.\n\n @type logEnviron: boolean\n @param logEnviron: If this option is true (the default), then the\n step's logfile will describe the environment\n variables on the worker. In situations where the\n environment is not relevant and is long, it may\n be easier to set logEnviron=False.\n\n @type codebase: string\n @param codebase: Specifies which changes in a build are processed by\n the step. The default codebase value is . The codebase must correspond\n to a codebase assigned by the codebaseGenerator. If no codebaseGenerator\n is defined in the master then codebase doesn't need to be set, the\n default value will then match all changes.\n "
descriptions_for_mode = {'clobber': 'checkout', 'export': 'exporting'}
descriptionDones_for_mode = {'clobber': 'checkout', 'export': 'export'}
if (not description):
description = [descriptions_for_mode.get(mode, 'updating')]
if (not descriptionDone):
descriptionDone = [descriptionDones_for_mode.get(mode, 'update')]
if ((not descriptionSuffix) and codebase):
descriptionSuffix = [codebase]
LoggingBuildStep.__init__(self, description=description, descriptionDone=descriptionDone, descriptionSuffix=descriptionSuffix, **kwargs)
self.workdir = workdir
self.sourcestamp = None
self.codebase = codebase
if self.codebase:
self.name = properties.Interpolate('%(kw:name)s-%(kw:codebase)s', name=self.name, codebase=self.codebase)
self.alwaysUseLatest = alwaysUseLatest
self.logEnviron = logEnviron
self.env = env
self.timeout = timeout
self.retry = retry | def __init__(self, workdir=None, mode='update', alwaysUseLatest=False, timeout=(20 * 60), retry=None, env=None, logEnviron=True, description=None, descriptionDone=None, descriptionSuffix=None, codebase='', **kwargs):
"\n @type workdir: string\n @param workdir: local directory (relative to the Builder's root)\n where the tree should be placed\n\n @type alwaysUseLatest: boolean\n @param alwaysUseLatest: whether to always update to the most\n recent available sources for this build.\n\n Normally the Source step asks its Build for a list of all\n Changes that are supposed to go into the build, then computes a\n 'source stamp' (revision number or timestamp) that will cause\n exactly that set of changes to be present in the checked out\n tree. This is turned into, e.g., 'cvs update -D timestamp', or\n 'svn update -r revnum'. If alwaysUseLatest=True, bypass this\n computation and always update to the latest available sources\n for each build.\n\n The source stamp helps avoid a race condition in which someone\n commits a change after the master has decided to start a build\n but before the worker finishes checking out the sources. At best\n this results in a build which contains more changes than the\n buildmaster thinks it has (possibly resulting in the wrong\n person taking the blame for any problems that result), at worst\n is can result in an incoherent set of sources (splitting a\n non-atomic commit) which may not build at all.\n\n @type logEnviron: boolean\n @param logEnviron: If this option is true (the default), then the\n step's logfile will describe the environment\n variables on the worker. In situations where the\n environment is not relevant and is long, it may\n be easier to set logEnviron=False.\n\n @type codebase: string\n @param codebase: Specifies which changes in a build are processed by\n the step. The default codebase value is . The codebase must correspond\n to a codebase assigned by the codebaseGenerator. If no codebaseGenerator\n is defined in the master then codebase doesn't need to be set, the\n default value will then match all changes.\n "
descriptions_for_mode = {'clobber': 'checkout', 'export': 'exporting'}
descriptionDones_for_mode = {'clobber': 'checkout', 'export': 'export'}
if (not description):
description = [descriptions_for_mode.get(mode, 'updating')]
if (not descriptionDone):
descriptionDone = [descriptionDones_for_mode.get(mode, 'update')]
if ((not descriptionSuffix) and codebase):
descriptionSuffix = [codebase]
LoggingBuildStep.__init__(self, description=description, descriptionDone=descriptionDone, descriptionSuffix=descriptionSuffix, **kwargs)
self.workdir = workdir
self.sourcestamp = None
self.codebase = codebase
if self.codebase:
self.name = properties.Interpolate('%(kw:name)s-%(kw:codebase)s', name=self.name, codebase=self.codebase)
self.alwaysUseLatest = alwaysUseLatest
self.logEnviron = logEnviron
self.env = env
self.timeout = timeout
self.retry = retry<|docstring|>@type workdir: string
@param workdir: local directory (relative to the Builder's root)
where the tree should be placed
@type alwaysUseLatest: boolean
@param alwaysUseLatest: whether to always update to the most
recent available sources for this build.
Normally the Source step asks its Build for a list of all
Changes that are supposed to go into the build, then computes a
'source stamp' (revision number or timestamp) that will cause
exactly that set of changes to be present in the checked out
tree. This is turned into, e.g., 'cvs update -D timestamp', or
'svn update -r revnum'. If alwaysUseLatest=True, bypass this
computation and always update to the latest available sources
for each build.
The source stamp helps avoid a race condition in which someone
commits a change after the master has decided to start a build
but before the worker finishes checking out the sources. At best
this results in a build which contains more changes than the
buildmaster thinks it has (possibly resulting in the wrong
person taking the blame for any problems that result), at worst
is can result in an incoherent set of sources (splitting a
non-atomic commit) which may not build at all.
@type logEnviron: boolean
@param logEnviron: If this option is true (the default), then the
step's logfile will describe the environment
variables on the worker. In situations where the
environment is not relevant and is long, it may
be easier to set logEnviron=False.
@type codebase: string
@param codebase: Specifies which changes in a build are processed by
the step. The default codebase value is ''. The codebase must correspond
to a codebase assigned by the codebaseGenerator. If no codebaseGenerator
is defined in the master then codebase doesn't need to be set, the
default value will then match all changes.<|endoftext|> |
2238c98d1475973c3b6c0c9b6eb1264cea0aed2392a6187c43ef53ebdbea9e40 | def _hasAttrGroupMember(self, attrGroup, attr):
'\n The hasattr equivalent for attribute groups: returns whether the given\n member is in the attribute group.\n '
method_name = ('%s_%s' % (attrGroup, attr))
return hasattr(self, method_name) | The hasattr equivalent for attribute groups: returns whether the given
member is in the attribute group. | bb-master/sandbox/lib/python3.5/site-packages/buildbot/steps/source/base.py | _hasAttrGroupMember | Alecto3-D/testable-greeter | 2 | python | def _hasAttrGroupMember(self, attrGroup, attr):
'\n The hasattr equivalent for attribute groups: returns whether the given\n member is in the attribute group.\n '
method_name = ('%s_%s' % (attrGroup, attr))
return hasattr(self, method_name) | def _hasAttrGroupMember(self, attrGroup, attr):
'\n The hasattr equivalent for attribute groups: returns whether the given\n member is in the attribute group.\n '
method_name = ('%s_%s' % (attrGroup, attr))
return hasattr(self, method_name)<|docstring|>The hasattr equivalent for attribute groups: returns whether the given
member is in the attribute group.<|endoftext|> |
3bc827bb8eee1b94fc161097640870a1f804a3e86ef4d4aef8bc07e828343c83 | def _getAttrGroupMember(self, attrGroup, attr):
'\n The getattr equivalent for attribute groups: gets and returns the\n attribute group member.\n '
method_name = ('%s_%s' % (attrGroup, attr))
return getattr(self, method_name) | The getattr equivalent for attribute groups: gets and returns the
attribute group member. | bb-master/sandbox/lib/python3.5/site-packages/buildbot/steps/source/base.py | _getAttrGroupMember | Alecto3-D/testable-greeter | 2 | python | def _getAttrGroupMember(self, attrGroup, attr):
'\n The getattr equivalent for attribute groups: gets and returns the\n attribute group member.\n '
method_name = ('%s_%s' % (attrGroup, attr))
return getattr(self, method_name) | def _getAttrGroupMember(self, attrGroup, attr):
'\n The getattr equivalent for attribute groups: gets and returns the\n attribute group member.\n '
method_name = ('%s_%s' % (attrGroup, attr))
return getattr(self, method_name)<|docstring|>The getattr equivalent for attribute groups: gets and returns the
attribute group member.<|endoftext|> |
fd9835c72d7447e68ad455863d7f425fd6b956a9761a682731a8758e8407dd0a | def _listAttrGroupMembers(self, attrGroup):
'\n Returns a list of all members in the attribute group.\n '
from inspect import getmembers, ismethod
methods = getmembers(self, ismethod)
group_prefix = (attrGroup + '_')
group_len = len(group_prefix)
group_members = [method[0][group_len:] for method in methods if method[0].startswith(group_prefix)]
return group_members | Returns a list of all members in the attribute group. | bb-master/sandbox/lib/python3.5/site-packages/buildbot/steps/source/base.py | _listAttrGroupMembers | Alecto3-D/testable-greeter | 2 | python | def _listAttrGroupMembers(self, attrGroup):
'\n \n '
from inspect import getmembers, ismethod
methods = getmembers(self, ismethod)
group_prefix = (attrGroup + '_')
group_len = len(group_prefix)
group_members = [method[0][group_len:] for method in methods if method[0].startswith(group_prefix)]
return group_members | def _listAttrGroupMembers(self, attrGroup):
'\n \n '
from inspect import getmembers, ismethod
methods = getmembers(self, ismethod)
group_prefix = (attrGroup + '_')
group_len = len(group_prefix)
group_members = [method[0][group_len:] for method in methods if method[0].startswith(group_prefix)]
return group_members<|docstring|>Returns a list of all members in the attribute group.<|endoftext|> |
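The three `_*AttrGroupMember*` helpers above implement a naming convention: members of attribute group `g` are simply methods named `g_<member>`, discovered via `hasattr`, `getattr`, and `inspect.getmembers`. A self-contained sketch of the same pattern (the `VCS` class and its `mode_*` methods are hypothetical examples, not buildbot code):

```python
from inspect import getmembers, ismethod

class VCS:
    # Members of attribute group "mode" are methods named "mode_<member>".
    def mode_full(self):
        return 'full checkout'

    def mode_incremental(self):
        return 'incremental update'

    def _hasAttrGroupMember(self, attrGroup, attr):
        # hasattr equivalent for the group.
        return hasattr(self, '%s_%s' % (attrGroup, attr))

    def _getAttrGroupMember(self, attrGroup, attr):
        # getattr equivalent for the group.
        return getattr(self, '%s_%s' % (attrGroup, attr))

    def _listAttrGroupMembers(self, attrGroup):
        # Enumerate all members by scanning method names for the prefix.
        prefix = attrGroup + '_'
        return sorted(name[len(prefix):]
                      for name, _ in getmembers(self, ismethod)
                      if name.startswith(prefix))

vcs = VCS()
assert vcs._hasAttrGroupMember('mode', 'full')
assert not vcs._hasAttrGroupMember('mode', 'clobber')
assert vcs._getAttrGroupMember('mode', 'incremental')() == 'incremental update'
assert vcs._listAttrGroupMembers('mode') == ['full', 'incremental']
```

This lets subclasses extend the group just by defining a new `mode_*` method, with no registration step.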
a7ecc524e2ce6ca8494f9cc4e301d6fb91861f1a7e06a43c2a33fb02ddc4b963 | def updateSourceProperty(self, name, value, source=''):
"\n Update a property, indexing the property by codebase if codebase is not\n ''. Source steps should generally use this instead of setProperty.\n "
if (source == ''):
source = self.__class__.__name__
if (self.codebase != ''):
assert (not isinstance(self.getProperty(name, None), str)), ("Sourcestep %s has a codebase, other sourcesteps don't" % self.name)
property_dict = self.getProperty(name, {})
property_dict[self.codebase] = value
LoggingBuildStep.setProperty(self, name, property_dict, source)
else:
assert (not isinstance(self.getProperty(name, None), dict)), ('Sourcestep %s does not have a codebase, other sourcesteps do' % self.name)
LoggingBuildStep.setProperty(self, name, value, source) | Update a property, indexing the property by codebase if codebase is not
''. Source steps should generally use this instead of setProperty. | bb-master/sandbox/lib/python3.5/site-packages/buildbot/steps/source/base.py | updateSourceProperty | Alecto3-D/testable-greeter | 2 | python | def updateSourceProperty(self, name, value, source=):
"\n Update a property, indexing the property by codebase if codebase is not\n . Source steps should generally use this instead of setProperty.\n "
if (source == ):
source = self.__class__.__name__
if (self.codebase != ):
assert (not isinstance(self.getProperty(name, None), str)), ("Sourcestep %s has a codebase, other sourcesteps don't" % self.name)
property_dict = self.getProperty(name, {})
property_dict[self.codebase] = value
LoggingBuildStep.setProperty(self, name, property_dict, source)
else:
assert (not isinstance(self.getProperty(name, None), dict)), ('Sourcestep %s does not have a codebase, other sourcesteps do' % self.name)
LoggingBuildStep.setProperty(self, name, value, source) | def updateSourceProperty(self, name, value, source=''):
"\n Update a property, indexing the property by codebase if codebase is not\n ''. Source steps should generally use this instead of setProperty.\n "
if (source == ''):
source = self.__class__.__name__
if (self.codebase != ''):
assert (not isinstance(self.getProperty(name, None), str)), ("Sourcestep %s has a codebase, other sourcesteps don't" % self.name)
property_dict = self.getProperty(name, {})
property_dict[self.codebase] = value
LoggingBuildStep.setProperty(self, name, property_dict, source)
else:
assert (not isinstance(self.getProperty(name, None), dict)), ('Sourcestep %s does not have a codebase, other sourcesteps do' % self.name)
LoggingBuildStep.setProperty(self, name, value, source)<|docstring|>Update a property, indexing the property by codebase if codebase is not
''. Source steps should generally use this instead of setProperty.<|endoftext|> |
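The behavior described above — indexing a property by codebase when one is set, storing a plain value otherwise — can be sketched with a minimal stand-in for the step's property machinery (the `PropertyStore` class is illustrative; the real storage lives in buildbot's `LoggingBuildStep`):

```python
class PropertyStore:
    # Minimal stand-in for a source step's property handling.
    def __init__(self, codebase=''):
        self.codebase = codebase
        self._props = {}

    def getProperty(self, name, default=None):
        return self._props.get(name, default)

    def setProperty(self, name, value, source):
        self._props[name] = value

    def updateSourceProperty(self, name, value, source='Source'):
        if self.codebase != '':
            # Index by codebase so several source steps can record,
            # e.g., their own got_revision side by side in one dict.
            d = self.getProperty(name, {})
            d[self.codebase] = value
            self.setProperty(name, d, source)
        else:
            self.setProperty(name, value, source)

lib = PropertyStore(codebase='lib')
lib.updateSourceProperty('got_revision', 'abc123')
app = PropertyStore(codebase='')
app.updateSourceProperty('got_revision', 'def456')
assert lib.getProperty('got_revision') == {'lib': 'abc123'}
assert app.getProperty('got_revision') == 'def456'
```

The asserts in the original method enforce that a build never mixes the two shapes: either every source step uses a codebase (dict-valued property) or none does (plain value).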
305258d2f1ddaa67c160e23e03193664559afb9c08b13a3de67d97a9350c905b | def computeSourceRevision(self, changes):
"Each subclass must implement this method to do something more\n precise than -rHEAD every time. For version control systems that use\n repository-wide change numbers (SVN, P4), this can simply take the\n maximum such number from all the changes involved in this build. For\n systems that do not (CVS), it needs to create a timestamp based upon\n the latest Change, the Build's treeStableTimer, and an optional\n self.checkoutDelay value."
return None | Each subclass must implement this method to do something more
precise than -rHEAD every time. For version control systems that use
repository-wide change numbers (SVN, P4), this can simply take the
maximum such number from all the changes involved in this build. For
systems that do not (CVS), it needs to create a timestamp based upon
the latest Change, the Build's treeStableTimer, and an optional
self.checkoutDelay value. | bb-master/sandbox/lib/python3.5/site-packages/buildbot/steps/source/base.py | computeSourceRevision | Alecto3-D/testable-greeter | 2 | python | def computeSourceRevision(self, changes):
"Each subclass must implement this method to do something more\n precise than -rHEAD every time. For version control systems that use\n repository-wide change numbers (SVN, P4), this can simply take the\n maximum such number from all the changes involved in this build. For\n systems that do not (CVS), it needs to create a timestamp based upon\n the latest Change, the Build's treeStableTimer, and an optional\n self.checkoutDelay value."
return None | def computeSourceRevision(self, changes):
"Each subclass must implement this method to do something more\n precise than -rHEAD every time. For version control systems that use\n repository-wide change numbers (SVN, P4), this can simply take the\n maximum such number from all the changes involved in this build. For\n systems that do not (CVS), it needs to create a timestamp based upon\n the latest Change, the Build's treeStableTimer, and an optional\n self.checkoutDelay value."
return None<|docstring|>Each subclass must implement this method to do something more
precise than -rHEAD every time. For version control systems that use
repository-wide change numbers (SVN, P4), this can simply take the
maximum such number from all the changes involved in this build. For
systems that do not (CVS), it needs to create a timestamp based upon
the latest Change, the Build's treeStableTimer, and an optional
self.checkoutDelay value.<|endoftext|> |
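The docstring above names two strategies for subclasses: take the maximum repository-wide revision (SVN, P4) or derive a timestamp past the latest change (CVS). A hedged sketch of both (the dict-shaped change records and the 30-second delay are illustrative assumptions, not buildbot's actual data model):

```python
def compute_source_revision_svn(changes):
    # Repository-wide change numbers: the build's revision is simply
    # the maximum revision among its changes.
    if not changes:
        return None
    return max(int(c['revision']) for c in changes)

def compute_source_revision_cvs(changes, checkout_delay=30):
    # No global numbers: use a timestamp shortly after the latest
    # change, padded by a small delay to let the commit settle.
    if not changes:
        return None
    return max(c['when'] for c in changes) + checkout_delay

changes = [{'revision': '1204', 'when': 1000},
           {'revision': '1207', 'when': 1030}]
assert compute_source_revision_svn(changes) == 1207
assert compute_source_revision_cvs(changes) == 1060
```

Returning `None` for an empty change list matches the base class's default, which means "check out HEAD".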
85402c1fe30f6443e51e432cb59fb6c2f8133594c5b53ee42b12196f292055db | def write(self, string):
' Adds a string to the console queue '
if (string != '\n'):
self.queue.put(string)
return | Adds a string to the console queue | Troop/src/interface/console.py | write | mathigatti/EP | 1 | python | def write(self, string):
' '
if (string != '\n'):
self.queue.put(string)
return | def write(self, string):
' '
if (string != '\n'):
self.queue.put(string)
return<|docstring|>Adds a string to the console queue<|endoftext|> |
aa464da4c89961ad034da04cd8ef99ed4a398c202030f5d9159fc2d41875e516 | def flush(self, *args, **kwargs):
' Override '
return | Override | Troop/src/interface/console.py | flush | mathigatti/EP | 1 | python | def flush(self, *args, **kwargs):
' '
return | def flush(self, *args, **kwargs):
' '
return<|docstring|>Override<|endoftext|> |
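The `write`/`flush` pair above makes the console object file-like, so it can stand in for `sys.stdout` and route printed output into a thread-safe queue (e.g. for a GUI thread to drain), silently dropping bare newlines. A minimal self-contained sketch of that redirection (the `Console` name is illustrative):

```python
import sys
import queue

class Console:
    # A file-like object that forwards writes to a queue instead of
    # the terminal; flush is a no-op required by the file protocol.
    def __init__(self):
        self.queue = queue.Queue()

    def write(self, string):
        if string != '\n':
            self.queue.put(string)

    def flush(self, *args, **kwargs):
        return

console = Console()
old_stdout, sys.stdout = sys.stdout, console
try:
    print('hello')  # goes into the queue, not the terminal
finally:
    sys.stdout = old_stdout  # always restore the real stdout

assert console.queue.get_nowait() == 'hello'
assert console.queue.empty()  # the trailing '\n' from print was dropped
```

Using a `queue.Queue` here is what makes the pattern thread-safe: worker threads can `print` freely while the UI thread polls the queue.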