Schema: code (string, 66 to 870k chars) | docstring (string, 19 to 26.7k chars) | func_name (string, 1 to 138 chars) | language (string, 1 value) | repo (string, 7 to 68 chars) | path (string, 5 to 324 chars) | url (string, 46 to 389 chars) | license (string, 7 values)

def linux_find_processes(self, names):
"""But what if a blacklisted process spawns after we call
this? We'd have to call this every time we do anything.
"""
pids = []
proc_pid_dirs = glob.glob('/proc/[0-9]*/')
comm_file = ''
for proc_pid_dir in proc_pid_dirs:
...
docstring: But what if a blacklisted process spawns after we call this? We'd have to call this every time we do anything.
func_name: linux_find_processes | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/linutil.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linutil.py | license: Apache-2.0

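The docstring above flags a time-of-check caveat: a /proc scan is only a snapshot, so a blacklisted process that spawns afterward is missed. A minimal standalone sketch of the same glob-over-/proc scan (the function name and return shape are assumptions for illustration, not FakeNet-NG's API):

```python
import glob
import os

def find_processes_by_name(names):
    """Return {pid: comm} for processes whose comm matches one of `names`.

    Hypothetical sketch of a /proc/[0-9]*/ scan; a process can exit
    between the glob and the open, hence the IOError/OSError guard.
    """
    matches = {}
    for proc_pid_dir in glob.glob('/proc/[0-9]*/'):
        comm_file = os.path.join(proc_pid_dir, 'comm')
        try:
            with open(comm_file) as f:
                comm = f.read().strip()
        except (IOError, OSError):
            continue  # process vanished mid-scan: the race the docstring describes
        if comm in names:
            pid = int(proc_pid_dir.rstrip('/').rsplit('/', 1)[-1])
            matches[pid] = comm
    return matches
```

Note that the snapshot is stale the moment it returns, which is exactly why the original docstring asks "what if a blacklisted process spawns after we call this?".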
def _linux_find_sock_by_endpoint_unsafe(self, ipver, proto_name, ip, port,
local=True):
"""Search /proc/net/tcp for a socket whose local (field 1, zero-based)
or remote (field 2) address matches ip:port and return the
corresponding inode (field 9).
...
docstring: Search /proc/net/tcp for a socket whose local (field 1, zero-based) or remote (field 2) address matches ip:port and return the corresponding inode (field 9). Fields referenced above are zero-based. Example contents of /proc/net/tcp (wrapped and double-spaced): sl local_addre...
func_name: _linux_find_sock_by_endpoint_unsafe | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/linutil.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linutil.py | license: Apache-2.0

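The field layout described above can be exercised with a small standalone parser. This is a hypothetical sketch, assuming IPv4 and the little-endian hex address encoding seen on x86 kernels (local endpoint in field 1, remote in field 2, inode in field 9, all zero-based after the `sl` column):

```python
import socket
import struct

def find_sock_inode(procnet_text, ip, port, local=True):
    """Return the inode of the socket bound to ip:port, or None.

    `procnet_text` is the content of a /proc/net/tcp-style file; this
    sketch assumes the little-endian address encoding.
    """
    # /proc/net/tcp prints the IPv4 address as %08X of the native word,
    # so on little-endian hosts 127.0.0.1 appears as 0100007F.
    packed = struct.unpack('<I', socket.inet_aton(ip))[0]
    wanted = '%08X:%04X' % (packed, port)
    field = 1 if local else 2
    for line in procnet_text.splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) > 9 and parts[field] == wanted:
            return int(parts[9])
    return None
```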
def linux_get_pid_comm_by_endpoint(self, ipver, proto_name, ip, port):
"""Obtain a pid and executable name associated with an endpoint.
NOTE: procfs does not allow us to answer questions like "who just
called send()?"; only questions like "who owns a socket associated with
this local po...
docstring: Obtain a pid and executable name associated with an endpoint. NOTE: procfs does not allow us to answer questions like "who just called send()?"; only questions like "who owns a socket associated with this local port?" Since fork() etc. can result in multiple ownership, the real answer m...
func_name: linux_get_pid_comm_by_endpoint | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/linutil.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linutil.py | license: Apache-2.0

def handle_nonlocal(self, nfqpkt):
"""Handle comms sent to IP addresses that are not bound to any adapter.
This allows analysts to observe when malware is communicating with
hard-coded IP addresses in MultiHost mode.
"""
try:
pkt = LinuxPacketCtx('handle_nonlocal', n...
docstring: Handle comms sent to IP addresses that are not bound to any adapter. This allows analysts to observe when malware is communicating with hard-coded IP addresses in MultiHost mode.
func_name: handle_nonlocal | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/linux.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linux.py | license: Apache-2.0

def handle_incoming(self, nfqpkt):
"""Incoming packet hook.
Specific to incoming packets:
5.) If SingleHost mode:
a.) Conditionally fix up source IPs to support IP forwarding for
otherwise foreign-destined packets
4.) Conditionally mangle destination ports to...
docstring: Incoming packet hook. Specific to incoming packets: 5.) If SingleHost mode: a.) Conditionally fix up source IPs to support IP forwarding for otherwise foreign-destined packets; 4.) Conditionally mangle destination ports to implement port forwarding for unb...
func_name: handle_incoming | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/linux.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linux.py | license: Apache-2.0

def handle_outgoing(self, nfqpkt):
"""Outgoing packet hook.
Specific to outgoing packets:
4.) If SingleHost mode:
a.) Conditionally log packets destined for foreign IP addresses
(the corresponding check for MultiHost mode is called by
handle_nonlocal(...
docstring: Outgoing packet hook. Specific to outgoing packets: 4.) If SingleHost mode: a.) Conditionally log packets destined for foreign IP addresses (the corresponding check for MultiHost mode is called by handle_nonlocal()); b.) Conditionally mangle destin...
func_name: handle_outgoing | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/linux.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linux.py | license: Apache-2.0

def check_log_nonlocal(self, crit, pkt):
"""Conditionally log packets having a foreign destination.
Each foreign destination will be logged only once if the Linux
Diverter's internal log_nonlocal_only_once flag is set. Otherwise, any
foreign destination IP address will be logged each ti...
docstring: Conditionally log packets having a foreign destination. Each foreign destination will be logged only once if the Linux Diverter's internal log_nonlocal_only_once flag is set. Otherwise, any foreign destination IP address will be logged each time it is observed.
func_name: check_log_nonlocal | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/linux.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linux.py | license: Apache-2.0

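The once-per-destination behavior described above amounts to deduplicating on destination IP. A minimal sketch (class and method names are assumptions, not the LinuxDiverter API):

```python
class NonlocalLogger:
    """Decide whether a foreign destination should be logged.

    If only_once is set, each destination IP is reported a single time;
    otherwise every observation is reported.
    """
    def __init__(self, only_once=True):
        self.only_once = only_once
        self._seen = set()

    def should_log(self, dst_ip):
        if self.only_once and dst_ip in self._seen:
            return False
        self._seen.add(dst_ip)
        return True
```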
def redirIcmpIpUnconditionally(self, crit, pkt):
"""Redirect ICMP to loopback or external IP if necessary.
On Windows, we can't conveniently use an iptables REDIRECT rule to get
ICMP packets sent back home for free, so here is some code.
"""
if (pkt.is_icmp and
p...
docstring: Redirect ICMP to loopback or external IP if necessary. On Windows, we can't conveniently use an iptables REDIRECT rule to get ICMP packets sent back home for free, so here is some code.
func_name: redirIcmpIpUnconditionally | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/windows.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/windows.py | license: Apache-2.0

def fix_gateway(self):
"""Check if there is a gateway configured on any of the Ethernet
interfaces. If that's not the case, then locate configured IP address
and set a gateway automatically. This is necessary for VMWare Host-Only
DHCP server which leaves default gateway empty.
""...
docstring: Check if there is a gateway configured on any of the Ethernet interfaces. If that's not the case, then locate configured IP address and set a gateway automatically. This is necessary for VMWare Host-Only DHCP server which leaves default gateway empty.
func_name: fix_gateway | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/winutil.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/winutil.py | license: Apache-2.0

def fix_dns(self):
"""Check if there is a DNS server on any of the Ethernet interfaces. If
that's not the case, then locate configured IP address and set a DNS
server automatically.
"""
fixed = False
for adapter in self.get_adapters_info():
if self.check_ipa...
docstring: Check if there is a DNS server on any of the Ethernet interfaces. If that's not the case, then locate configured IP address and set a DNS server automatically.
func_name: fix_dns | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/diverters/winutil.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/winutil.py | license: Apache-2.0

def failEarly(self):
"""Raise exceptions upon construction rather than later."""
# Test generating banner
banner_generated = str(self)
# Test generating and getting length of banner
banner_generated_len = len(self)
return banner_generated, banner_generated_len
docstring: Raise exceptions upon construction rather than later.
func_name: failEarly | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/BannerFactory.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/BannerFactory.py | license: Apache-2.0

def __len__(self):
"""Needed for pyftpdlib.
If the length changes between the time when the caller obtains the
length and the time when the caller obtains the latest generated
string, then there is not much that could reasonably be done. It would
be possible to cache the...
docstring: Needed for pyftpdlib. If the length changes between the time when the caller obtains the length and the time when the caller obtains the latest generated string, then there is not much that could reasonably be done. It would be possible to cache the formatted banner with a short...
func_name: __len__ | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/BannerFactory.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/BannerFactory.py | license: Apache-2.0

def genBanner(self, config, bannerdict, defaultbannerkey='!generic'):
"""Select and initialize a banner.
Supported banner escapes:
!<key> - Use the banner whose key in bannerdict is <key>
!random - Use a random banner from bannerdict
!generic - Every listener...
docstring: Select and initialize a banner. Supported banner escapes: !<key> - Use the banner whose key in bannerdict is <key>; !random - Use a random banner from bannerdict; !generic - Every listener supporting banners must have a generic. Banners can include literal '\n' ...
func_name: genBanner | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/BannerFactory.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/BannerFactory.py | license: Apache-2.0

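The escape scheme above (!<key>, !random, and a generic default) can be sketched as a small resolver; the names and exact semantics here are assumptions rather than FakeNet-NG's implementation:

```python
import random

def select_banner(config_value, bannerdict, defaultkey='!generic'):
    """Resolve a banner config value against bannerdict.

    '!<key>' picks bannerdict[<key>], '!random' picks any entry, and a
    value without the '!' prefix is treated as a literal banner.
    """
    value = config_value or defaultkey
    if value.startswith('!'):
        key = value[1:]
        if key == 'random':
            key = random.choice(sorted(bannerdict))
        return bannerdict[key]
    return value
```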
def log_message(self, log_level, is_process_blacklisted, message, *args):
"""The primary objective of this method is to control the log messages
generated for requests from blacklisted processes.
In a case where the DNS server is same as the local machine, the DNS
requests from a blackl...
docstring: The primary objective of this method is to control the log messages generated for requests from blacklisted processes. In a case where the DNS server is same as the local machine, the DNS requests from a blacklisted process will reach the DNS listener (which listens on port 53 locally) ...
func_name: log_message | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/DNSListener.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/DNSListener.py | license: Apache-2.0

def main():
"""
Run from the flare-fakenet-ng root dir with the following command:
python2 -m fakenet.listeners.HTTPListener
"""
logging.basicConfig(format='%(asctime)s [%(name)15s] %(message)s', datefmt='%m/%d/%y %I:%M:%S %p', level=logging.DEBUG)
config = {'port': '8443', 'usessl': 'Yes'...
docstring: Run from the flare-fakenet-ng root dir with the following command: python2 -m fakenet.listeners.HTTPListener
func_name: main | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/HTTPListener.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/HTTPListener.py | license: Apache-2.0

def safe_join(root, path):
"""
Joins a path to a root path, even if path starts with '/', using os.sep
"""
# prepending a '/' ensures '..' does not traverse past the root
# of the path
if not path.startswith('/'):
path = '/' + path
normpath = os.path.normpath(path)
return roo...
docstring: Joins a path to a root path, even if path starts with '/', using os.sep
func_name: safe_join | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/ListenerBase.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/ListenerBase.py | license: Apache-2.0

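The trick in safe_join is that prepending '/' before normpath means any run of '..' collapses against the virtual root instead of escaping it. A runnable sketch of that idea (POSIX paths assumed; the name contained_join is chosen here to avoid shadowing the original):

```python
import os

def contained_join(root, path):
    """Join `path` under `root` so '..' cannot traverse above root."""
    if not path.startswith('/'):
        path = '/' + path
    # '/..' at the front of an absolute path normalizes away, so the
    # result always stays inside root once grafted on.
    normpath = os.path.normpath(path)
    return root + os.sep.join(normpath.split('/'))
```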
def abs_config_path(path):
"""
Attempts to return the absolute path of a path from a configuration
setting.
First tries to just take the abspath() of the parameter to see
if it exists relative to the current working directory. If that does
not exist, attempts to find it relative to the 'f...
docstring: Attempts to return the absolute path of a path from a configuration setting. First tries to just take the abspath() of the parameter to see if it exists relative to the current working directory. If that does not exist, attempts to find it relative to the 'fakenet' package directory. Ret...
func_name: abs_config_path | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/ListenerBase.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/ListenerBase.py | license: Apache-2.0

def main():
"""
Run from the flare-fakenet-ng root dir with the following command:
python2 -m fakenet.listeners.TFTPListener
"""
logging.basicConfig(format='%(asctime)s [%(name)15s] %(message)s', datefmt='%m/%d/%y %I:%M:%S %p', level=logging.DEBUG)
config = {'port': '69', 'protocol': 'udp'...
docstring: Run from the flare-fakenet-ng root dir with the following command: python2 -m fakenet.listeners.TFTPListener
func_name: main | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/TFTPListener.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/TFTPListener.py | license: Apache-2.0

def create_cert(self, cn, ca_cert=None, ca_key=None, cert_dir=None):
"""
Create a cert given the common name, a signing CA, CA private key and
the directory output.
return: tuple(None, None) on error
tuple(cert_file_path, key_file_path) on success
"""
f_...
docstring: Create a cert given the common name, a signing CA, CA private key and the directory output. return: tuple(None, None) on error; tuple(cert_file_path, key_file_path) on success
func_name: create_cert | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/ssl_utils/__init__.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/ssl_utils/__init__.py | license: Apache-2.0

def abs_config_path(self, path):
"""
Attempts to return the absolute path of a path from a configuration
setting.
"""
# Try absolute path first
abspath = os.path.abspath(path)
if os.path.exists(abspath):
return abspath
if getattr(sys, 'frozen...
docstring: Attempts to return the absolute path of a path from a configuration setting.
func_name: abs_config_path | language: python | repo: mandiant/flare-fakenet-ng | path: fakenet/listeners/ssl_utils/__init__.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/listeners/ssl_utils/__init__.py | license: Apache-2.0

def HandleRequest(req, method, post_data=None):
"""Sample dynamic HTTP response handler.
Parameters
----------
req : BaseHTTPServer.BaseHTTPRequestHandler
The BaseHTTPRequestHandler that received the request
method: str
The HTTP method, either 'HEAD', 'GET', 'POST' as of this writin...
docstring: Sample dynamic HTTP response handler. Parameters: req (BaseHTTPServer.BaseHTTPRequestHandler): the BaseHTTPRequestHandler that received the request; method (str): the HTTP method, either 'HEAD', 'GET', 'POST' as of this writing; post_data (str): the HTTP post data receive...
func_name: HandleRequest | language: python | repo: mandiant/flare-fakenet-ng | path: test/CustomProviderExample.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/test/CustomProviderExample.py | license: Apache-2.0

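The handler contract above (req, method, optional post_data) can be illustrated with a toy provider. Everything below is a hypothetical example: it relies only on the send_response/send_header/end_headers/wfile surface of BaseHTTPRequestHandler, and the response text is an assumption:

```python
def handle_request(req, method, post_data=None):
    """Echo the method (and POST body) back to the client."""
    body = b'Method: ' + method.encode()
    if method == 'POST' and post_data:
        body += b'\nBody: ' + post_data
    req.send_response(200)
    req.send_header('Content-Length', str(len(body)))
    req.end_headers()
    if method != 'HEAD':  # HEAD gets headers only
        req.wfile.write(body)
```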
def HandleTcp(sock):
"""Handle a TCP buffer.
Parameters
----------
sock : socket
The connected socket with which to recv and send data
"""
while True:
try:
data = None
data = sock.recv(1024)
except socket.timeout:
pass
if not ...
docstring: Handle a TCP buffer. Parameters: sock (socket): the connected socket with which to recv and send data
func_name: HandleTcp | language: python | repo: mandiant/flare-fakenet-ng | path: test/CustomProviderExample.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/test/CustomProviderExample.py | license: Apache-2.0

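The recv loop implied by the truncated code above (tolerate timeouts, stop on an empty read, answer each buffer) can be sketched as an echo handler; the `echo: ` prefix is an assumption for illustration:

```python
import socket

def handle_tcp_echo(sock):
    """Echo each received buffer back until the peer closes.

    socket.timeout is swallowed so a quiet peer does not kill the loop;
    an empty recv() result means the connection is done.
    """
    while True:
        try:
            data = sock.recv(1024)
        except socket.timeout:
            continue
        if not data:
            break
        sock.sendall(b'echo: ' + data)
```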
def HandleUdp(sock, data, addr):
"""Handle a UDP buffer.
Parameters
----------
sock : socket
The connected socket with which to recv and send data
data : str
The data received
addr : tuple
The host and port of the remote peer
"""
if data:
resp = b''.join(...
docstring: Handle a UDP buffer. Parameters: sock (socket): the connected socket with which to recv and send data; data (str): the data received; addr (tuple): the host and port of the remote peer
func_name: HandleUdp | language: python | repo: mandiant/flare-fakenet-ng | path: test/CustomProviderExample.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/test/CustomProviderExample.py | license: Apache-2.0

def get_ips(ipvers):
"""Return IP addresses bound to local interfaces including loopbacks.
Parameters
----------
ipvers : list
IP versions desired (4, 6, or both); ensures the netifaces semantics
(e.g. netifaces.AF_INET) are localized to this function.
"""
specs = []
resu...
docstring: Return IP addresses bound to local interfaces including loopbacks. Parameters: ipvers (list): IP versions desired (4, 6, or both); ensures the netifaces semantics (e.g. netifaces.AF_INET) are localized to this function.
func_name: get_ips | language: python | repo: mandiant/flare-fakenet-ng | path: test/test.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/test/test.py | license: Apache-2.0

def _irc_evt_handler(self, srv, evt):
"""Check for each case and set the corresponding success flag."""
if evt.type == 'join':
if evt.target.startswith(self.join_chan):
self.join_ok = True
elif evt.type == 'welcome':
if evt.arguments[0].startswith('Welcome...
docstring: Check for each case and set the corresponding success flag.
func_name: _irc_evt_handler | language: python | repo: mandiant/flare-fakenet-ng | path: test/test.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/test/test.py | license: Apache-2.0

def _irc_script(self, srv):
"""Callback manages individual test cases for IRC."""
# Clear success flags
self.welcome_ok = False
self.join_ok = False
self.privmsg_ok = False
self.pubmsg_ok = False
# This handler should set the success flags in success cases
...
docstring: Callback manages individual test cases for IRC.
func_name: _irc_script | language: python | repo: mandiant/flare-fakenet-ng | path: test/test.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/test/test.py | license: Apache-2.0

def _run_irc_script(self, nm, callback):
"""Connect to server and give control to callback."""
r = irc.client.Reactor()
srv = r.server()
srv.connect(self.hostname, self.port, self.nick)
retval = callback(srv)
srv.close()
return retval
docstring: Connect to server and give control to callback.
func_name: _run_irc_script | language: python | repo: mandiant/flare-fakenet-ng | path: test/test.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/test/test.py | license: Apache-2.0

def _filterMatchingTests(self, tests, matchspec):
"""Remove tests that match negative specifications (regexes preceded by
a minus sign) or do not match positive specifications (regexes not
preceded by a minus sign).
Modifies the contents of the tests dictionary.
"""
nega...
docstring: Remove tests that match negative specifications (regexes preceded by a minus sign) or do not match positive specifications (regexes not preceded by a minus sign). Modifies the contents of the tests dictionary.
func_name: _filterMatchingTests | language: python | repo: mandiant/flare-fakenet-ng | path: test/test.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/test/test.py | license: Apache-2.0

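The positive/negative regex filtering described above is easy to reproduce. A sketch under the stated semantics (a leading minus negates; exact FakeNet-NG behavior may differ in details):

```python
import re

def filter_matching_tests(tests, matchspec):
    """Delete tests that hit a negative regex or miss every positive one.

    Mutates `tests` in place, mirroring the description above.
    """
    negatives = [s[1:] for s in matchspec if s.startswith('-')]
    positives = [s for s in matchspec if not s.startswith('-')]
    for name in list(tests):
        if any(re.search(n, name) for n in negatives):
            del tests[name]
        elif positives and not any(re.search(p, name) for p in positives):
            del tests[name]
```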
def _test_ftp(self, hostname, port=0):
"""Note that the FakeNet-NG Proxy listener won't know what to do with
this client if you point it at some random port, because the client
listens silently for the server 220 welcome message which doesn't give
the Proxy listener anything to work with...
docstring: Note that the FakeNet-NG Proxy listener won't know what to do with this client if you point it at some random port, because the client listens silently for the server 220 welcome message which doesn't give the Proxy listener anything to work with to decide where to forward it.
func_name: _test_ftp | language: python | repo: mandiant/flare-fakenet-ng | path: test/test.py | url: https://github.com/mandiant/flare-fakenet-ng/blob/master/test/test.py | license: Apache-2.0

def preprocess_input(audio_path, dim_ordering='default'):
'''Reads an audio file and outputs a Mel-spectrogram.
'''
if dim_ordering == 'default':
dim_ordering = K.image_dim_ordering()
assert dim_ordering in {'tf', 'th'}
if librosa_exists():
import librosa
else:
raise Run...
docstring: Reads an audio file and outputs a Mel-spectrogram.
func_name: preprocess_input | language: python | repo: fchollet/deep-learning-models | path: audio_conv_utils.py | url: https://github.com/fchollet/deep-learning-models/blob/master/audio_conv_utils.py | license: MIT

def decode_predictions(preds, top_n=5):
'''Decode the output of a music tagger model.
# Arguments
preds: 2-dimensional numpy array
top_n: integer in [0, 50], number of items to show
'''
assert len(preds.shape) == 2 and preds.shape[1] == 50
results = []
for pred in preds:
...
docstring: Decode the output of a music tagger model. # Arguments: preds: 2-dimensional numpy array; top_n: integer in [0, 50], number of items to show
func_name: decode_predictions | language: python | repo: fchollet/deep-learning-models | path: audio_conv_utils.py | url: https://github.com/fchollet/deep-learning-models/blob/master/audio_conv_utils.py | license: MIT

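The decode step reduces to taking the top_n highest-scoring tags per row. A pure-Python sketch of that reduction (the real function operates on a (batch, 50) numpy array; `tags` here is a hypothetical tag list):

```python
def decode_top_n(pred, tags, top_n=5):
    """Return the top_n (tag, score) pairs for one prediction row."""
    order = sorted(range(len(pred)), key=lambda i: pred[i], reverse=True)
    return [(tags[i], pred[i]) for i in order[:top_n]]
```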
def preprocess_input(x):
"""Preprocesses a numpy array encoding a batch of images.
This function applies the "Inception" preprocessing which converts
the RGB values from [0, 255] to [-1, 1]. Note that this preprocessing
function is different from `imagenet_utils.preprocess_input()`.
# Arguments
...
docstring: Preprocesses a numpy array encoding a batch of images. This function applies the "Inception" preprocessing which converts the RGB values from [0, 255] to [-1, 1]. Note that this preprocessing function is different from `imagenet_utils.preprocess_input()`. # Arguments: x: a 4D numpy array consis...
func_name: preprocess_input | language: python | repo: fchollet/deep-learning-models | path: inception_resnet_v2.py | url: https://github.com/fchollet/deep-learning-models/blob/master/inception_resnet_v2.py | license: MIT

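The "Inception" preprocessing above is just an affine rescale of pixel values. A scalar sketch of the arithmetic (the real function applies it in place over a 4D numpy array):

```python
def inception_scale(values):
    """Map values from [0, 255] to [-1, 1] via x / 127.5 - 1."""
    return [v / 127.5 - 1.0 for v in values]
```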
def conv2d_bn(x,
filters,
kernel_size,
strides=1,
padding='same',
activation='relu',
use_bias=False,
name=None):
"""Utility function to apply conv + BN.
# Arguments
x: input tensor.
filters: filter...
docstring: Utility function to apply conv + BN. # Arguments: x: input tensor; filters: filters in `Conv2D`; kernel_size: kernel size as in `Conv2D`; padding: padding mode in `Conv2D`; activation: activation in `Conv2D`; strides: strides in `Conv2D`; name: name of the ops...
func_name: conv2d_bn | language: python | repo: fchollet/deep-learning-models | path: inception_resnet_v2.py | url: https://github.com/fchollet/deep-learning-models/blob/master/inception_resnet_v2.py | license: MIT

def inception_resnet_block(x, scale, block_type, block_idx, activation='relu'):
"""Adds an Inception-ResNet block.
This function builds 3 types of Inception-ResNet blocks mentioned
in the paper, controlled by the `block_type` argument (which is the
block name used in the official TF-slim implementation)...
docstring: Adds an Inception-ResNet block. This function builds 3 types of Inception-ResNet blocks mentioned in the paper, controlled by the `block_type` argument (which is the block name used in the official TF-slim implementation): Inception-ResNet-A: `block_type='block35'`; Inception-ResNet-B: `b...
func_name: inception_resnet_block | language: python | repo: fchollet/deep-learning-models | path: inception_resnet_v2.py | url: https://github.com/fchollet/deep-learning-models/blob/master/inception_resnet_v2.py | license: MIT

def InceptionResNetV2(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000):
"""Instantiates the Inception-ResNet v2 architecture.
Optionally loads weig...
docstring: Instantiates the Inception-ResNet v2 architecture. Optionally loads weights pre-trained on ImageNet. Note that when using TensorFlow, for best performance you should set `"image_data_format": "channels_last"` in your Keras config at `~/.keras/keras.json`. The model and the weights are compatible w...
func_name: InceptionResNetV2 | language: python | repo: fchollet/deep-learning-models | path: inception_resnet_v2.py | url: https://github.com/fchollet/deep-learning-models/blob/master/inception_resnet_v2.py | license: MIT

def conv2d_bn(x,
filters,
num_row,
num_col,
padding='same',
strides=(1, 1),
name=None):
"""Utility function to apply conv + BN.
Arguments:
x: input tensor.
filters: filters in `Conv2D`.
num_row: height o...
docstring: Utility function to apply conv + BN. Arguments: x: input tensor; filters: filters in `Conv2D`; num_row: height of the convolution kernel; num_col: width of the convolution kernel; padding: padding mode in `Conv2D`; strides: strides in `Conv2D`; name: name of ...
func_name: conv2d_bn | language: python | repo: fchollet/deep-learning-models | path: inception_v3.py | url: https://github.com/fchollet/deep-learning-models/blob/master/inception_v3.py | license: MIT

def InceptionV3(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000):
"""Instantiates the Inception v3 architecture.
Optionally loads weights pre-trained
on ImageNet. Note that ...
docstring: Instantiates the Inception v3 architecture. Optionally loads weights pre-trained on ImageNet. Note that when using TensorFlow, for best performance you should set `image_data_format="channels_last"` in your Keras config at ~/.keras/keras.json. The model and the weights are compatible with both ...
func_name: InceptionV3 | language: python | repo: fchollet/deep-learning-models | path: inception_v3.py | url: https://github.com/fchollet/deep-learning-models/blob/master/inception_v3.py | license: MIT

def MobileNet(input_shape=None,
alpha=1.0,
depth_multiplier=1,
dropout=1e-3,
include_top=True,
weights='imagenet',
input_tensor=None,
pooling=None,
classes=1000):
"""Instantiates the MobileNet architectur...
docstring: Instantiates the MobileNet architecture. Note that only TensorFlow is supported for now, therefore it only works with the data format `image_data_format='channels_last'` in your Keras config at `~/.keras/keras.json`. To load a MobileNet model via `load_model`, import the custom objects `relu6`...
func_name: MobileNet | language: python | repo: fchollet/deep-learning-models | path: mobilenet.py | url: https://github.com/fchollet/deep-learning-models/blob/master/mobilenet.py | license: MIT

def _conv_block(inputs, filters, alpha, kernel=(3, 3), strides=(1, 1)):
"""Adds an initial convolution layer (with batch normalization and relu6).
# Arguments
inputs: Input tensor of shape `(rows, cols, 3)`
(with `channels_last` data format) or
(3, rows, cols) (with `channels_fi...
docstring: Adds an initial convolution layer (with batch normalization and relu6). # Arguments: inputs: Input tensor of shape `(rows, cols, 3)` (with `channels_last` data format) or (3, rows, cols) (with `channels_first` data format). It should have exactly 3 inputs channels, ...
func_name: _conv_block | language: python | repo: fchollet/deep-learning-models | path: mobilenet.py | url: https://github.com/fchollet/deep-learning-models/blob/master/mobilenet.py | license: MIT

def _depthwise_conv_block(inputs, pointwise_conv_filters, alpha,
depth_multiplier=1, strides=(1, 1), block_id=1):
"""Adds a depthwise convolution block.
A depthwise convolution block consists of a depthwise conv,
batch normalization, relu6, pointwise convolution,
batch normali...
docstring: Adds a depthwise convolution block. A depthwise convolution block consists of a depthwise conv, batch normalization, relu6, pointwise convolution, batch normalization and relu6 activation. # Arguments: inputs: Input tensor of shape `(rows, cols, channels)` (with `channels_last` data...
func_name: _depthwise_conv_block | language: python | repo: fchollet/deep-learning-models | path: mobilenet.py | url: https://github.com/fchollet/deep-learning-models/blob/master/mobilenet.py | license: MIT

def MusicTaggerCRNN(weights='msd', input_tensor=None,
include_top=True):
'''Instantiate the MusicTaggerCRNN architecture,
optionally loading weights pre-trained
on Million Song Dataset. Note that when using TensorFlow,
for best performance you should set
`image_dim_ordering="tf"`...
docstring: Instantiate the MusicTaggerCRNN architecture, optionally loading weights pre-trained on Million Song Dataset. Note that when using TensorFlow, for best performance you should set `image_dim_ordering="tf"` in your Keras config at ~/.keras/keras.json. The model and the weights are compatible with...
func_name: MusicTaggerCRNN | language: python | repo: fchollet/deep-learning-models | path: music_tagger_crnn.py | url: https://github.com/fchollet/deep-learning-models/blob/master/music_tagger_crnn.py | license: MIT

def identity_block(input_tensor, kernel_size, filters, stage, block):
"""The identity block is the block that has no conv layer at shortcut.
# Arguments
input_tensor: input tensor
kernel_size: default 3, the kernel size of middle conv layer at main path
filters: list of integers, the fi...
docstring: The identity block is the block that has no conv layer at shortcut. # Arguments: input_tensor: input tensor; kernel_size: default 3, the kernel size of middle conv layer at main path; filters: list of integers, the filters of 3 conv layer at main path; stage: integer, current stage lab...
func_name: identity_block | language: python | repo: fchollet/deep-learning-models | path: resnet50.py | url: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py | license: MIT

def conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):
"""conv_block is the block that has a conv layer at shortcut
# Arguments
input_tensor: input tensor
kernel_size: default 3, the kernel size of middle conv layer at main path
filters: list of integers, the ...
docstring: conv_block is the block that has a conv layer at shortcut. # Arguments: input_tensor: input tensor; kernel_size: default 3, the kernel size of middle conv layer at main path; filters: list of integers, the filters of 3 conv layer at main path; stage: integer, current stage label, used f...
func_name: conv_block | language: python | repo: fchollet/deep-learning-models | path: resnet50.py | url: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py | license: MIT

def ResNet50(include_top=True, weights='imagenet',
input_tensor=None, input_shape=None,
pooling=None,
classes=1000):
"""Instantiates the ResNet50 architecture.
Optionally loads weights pre-trained
on ImageNet. Note that when using TensorFlow,
for best performance ...
docstring: Instantiates the ResNet50 architecture. Optionally loads weights pre-trained on ImageNet. Note that when using TensorFlow, for best performance you should set `image_data_format="channels_last"` in your Keras config at ~/.keras/keras.json. The model and the weights are compatible with both ...
func_name: ResNet50 | language: python | repo: fchollet/deep-learning-models | path: resnet50.py | url: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py | license: MIT

def VGG16(include_top=True, weights='imagenet',
input_tensor=None, input_shape=None,
pooling=None,
classes=1000):
"""Instantiates the VGG16 architecture.
Optionally loads weights pre-trained
on ImageNet. Note that when using TensorFlow,
for best performance you should set
...
docstring: Instantiates the VGG16 architecture. Optionally loads weights pre-trained on ImageNet. Note that when using TensorFlow, for best performance you should set `image_data_format="channels_last"` in your Keras config at ~/.keras/keras.json. The model and the weights are compatible with both Te...
func_name: VGG16 | language: python | repo: fchollet/deep-learning-models | path: vgg16.py | url: https://github.com/fchollet/deep-learning-models/blob/master/vgg16.py | license: MIT

def VGG19(include_top=True, weights='imagenet',
input_tensor=None, input_shape=None,
pooling=None,
classes=1000):
"""Instantiates the VGG19 architecture.
Optionally loads weights pre-trained
on ImageNet. Note that when using TensorFlow,
for best performance you should set
...
docstring: Instantiates the VGG19 architecture. Optionally loads weights pre-trained on ImageNet. Note that when using TensorFlow, for best performance you should set `image_data_format="channels_last"` in your Keras config at ~/.keras/keras.json. The model and the weights are compatible with both Te...
func_name: VGG19 | language: python | repo: fchollet/deep-learning-models | path: vgg19.py | url: https://github.com/fchollet/deep-learning-models/blob/master/vgg19.py | license: MIT

def Xception(include_top=True, weights='imagenet',
input_tensor=None, input_shape=None,
pooling=None,
classes=1000):
"""Instantiates the Xception architecture.
Optionally loads weights pre-trained
on ImageNet. This model is available for TensorFlow only,
and can o...
docstring: Instantiates the Xception architecture. Optionally loads weights pre-trained on ImageNet. This model is available for TensorFlow only, and can only be used with inputs following the TensorFlow data format `(width, height, channels)`. You should set `image_data_format="channels_last"` in your Keras ...
func_name: Xception | language: python | repo: fchollet/deep-learning-models | path: xception.py | url: https://github.com/fchollet/deep-learning-models/blob/master/xception.py | license: MIT

def beam_search_generator(sess, net, initial_state, initial_sample,
early_term_token, beam_width, forward_model_fn, forward_args):
'''Run beam search! Yield consensus tokens sequentially, as a generator;
return when reaching early_term_token (newline).
Args:
sess: tensorflow session reference
... | Run beam search! Yield consensus tokens sequentially, as a generator;
return when reaching early_term_token (newline).
Args:
sess: tensorflow session reference
net: tensorflow net graph (must be compatible with the forward_net function)
initial_state: initial hidden state of the net
... | beam_search_generator | python | pender/chatbot-rnn | chatbot.py | https://github.com/pender/chatbot-rnn/blob/master/chatbot.py | MIT |
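The docstring above describes the beam-search loop in prose. Here is a minimal, generic sketch of that idea — not the repo's session/`forward_model_fn`-based implementation; `step_fn`, the token values, and the log-probability scoring are illustrative assumptions:

```python
import heapq
import math

def beam_search(step_fn, start_token, end_token, beam_width, max_len):
    """Keep the beam_width highest-scoring partial sequences, extending each
    by every candidate token per step, until sequences reach end_token or
    max_len. step_fn(seq) must return a dict mapping token -> probability."""
    beam = [(0.0, [start_token])]  # (cumulative log-prob, sequence)
    completed = []
    for _ in range(max_len):
        candidates = []
        for score, seq in beam:
            if seq[-1] == end_token:
                # Finished sequences leave the beam and wait for the final pick.
                completed.append((score, seq))
                continue
            for token, prob in step_fn(seq).items():
                candidates.append((score + math.log(prob), seq + [token]))
        if not candidates:
            break
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    # Sequences still in the beam that happen to be complete also count.
    completed.extend(c for c in beam if c[1][-1] == end_token)
    best = max(completed or beam, key=lambda c: c[0])
    return best[1]
```

Using log-probabilities keeps the cumulative score numerically stable; the real generator additionally threads RNN hidden state through each beam entry.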
def __init__(self, cell_fn, partition_size=128, partitions=1, layers=2):
"""Create a RNN cell composed sequentially of a number of RNNCells.
Args:
cell_fn: reference to RNNCell function to create each partition in each layer.
partition_size: how many horizontal cells to include i... | Create a RNN cell composed sequentially of a number of RNNCells.
Args:
cell_fn: reference to RNNCell function to create each partition in each layer.
partition_size: how many horizontal cells to include in each partition.
partitions: how many horizontal partitions to include ... | __init__ | python | pender/chatbot-rnn | model.py | https://github.com/pender/chatbot-rnn/blob/master/model.py | MIT |
def _rnn_state_placeholders(state):
"""Convert RNN state tensors to placeholders, reflecting the same nested tuple structure."""
# Adapted from @carlthome's comment:
# https://github.com/tensorflow/tensorflow/issues/2838#issuecomment-302019188
if isinstance(state, tf.contrib.rnn.LSTMStateTuple):
... | Convert RNN state tensors to placeholders, reflecting the same nested tuple structure. | _rnn_state_placeholders | python | pender/chatbot-rnn | model.py | https://github.com/pender/chatbot-rnn/blob/master/model.py | MIT |
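The conversion above walks the nested state tuple and rebuilds the same shape with placeholders at the leaves. A framework-free sketch of that shape-preserving traversal — `map_nested` is a hypothetical helper, not part of the repo:

```python
def map_nested(fn, state):
    """Apply fn to every leaf of a nested tuple structure, reproducing the
    same nesting in the result (the pattern the placeholder conversion uses)."""
    if isinstance(state, tuple):
        return tuple(map_nested(fn, s) for s in state)
    return fn(state)
```

In the real function, `fn` would create a `tf.placeholder` with the leaf tensor's shape and dtype, and `LSTMStateTuple` would be rebuilt as its own tuple type rather than a plain one.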
def forward_model(self, sess, state, input_sample):
'''Run a forward pass. Return the updated hidden state and the output probabilities.'''
shaped_input = np.array([[input_sample]], np.float32)
inputs = {self.input_data: shaped_input}
self.add_state_to_feed_dict(inputs, state)
[p... | Run a forward pass. Return the updated hidden state and the output probabilities. | forward_model | python | pender/chatbot-rnn | model.py | https://github.com/pender/chatbot-rnn/blob/master/model.py | MIT |
def check_container_exec_instances(context, num):
"""Modern docker versions remove ExecIDs after they finished, but older
docker versions leave ExecIDs behind. This test is for asserting that
the ExecIDs are cleaned up one way or another"""
container_info = context.docker_client.inspect_container(
... | Modern docker versions remove ExecIDs after they finished, but older
docker versions leave ExecIDs behind. This test is for asserting that
the ExecIDs are cleaned up one way or another | check_container_exec_instances | python | Yelp/paasta | general_itests/steps/paasta_execute_docker_command.py | https://github.com/Yelp/paasta/blob/master/general_itests/steps/paasta_execute_docker_command.py | Apache-2.0 |
def tail_paasta_logs_let_threads_be_threads(context):
"""This test lets tail_paasta_logs() fire off processes to do work. We
verify that the work was done, basically irrespective of how it was done.
"""
service = "fake_service"
context.levels = ["fake_level1", "fake_level2"]
context.components =... | This test lets tail_paasta_logs() fire off processes to do work. We
verify that the work was done, basically irrespective of how it was done.
| tail_paasta_logs_let_threads_be_threads | python | Yelp/paasta | general_itests/steps/tail_paasta_logs.py | https://github.com/Yelp/paasta/blob/master/general_itests/steps/tail_paasta_logs.py | Apache-2.0 |
def register_bounce_method(name: str) -> Callable[[BounceMethod], BounceMethod]:
"""Returns a decorator that registers that bounce function at a given name
so get_bounce_method_func can find it."""
def outer(bounce_func: BounceMethod):
_bounce_method_funcs[name] = bounce_func
return bounce_... | Returns a decorator that registers that bounce function at a given name
so get_bounce_method_func can find it. | register_bounce_method | python | Yelp/paasta | paasta_tools/bounce_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/bounce_lib.py | Apache-2.0 |
def brutal_bounce(
new_config: BounceMethodConfigDict,
new_app_running: bool,
happy_new_tasks: Collection,
old_non_draining_tasks: Sequence,
margin_factor=1.0,
) -> BounceMethodResult:
"""Pays no regard to safety. Starts the new app if necessary, and kills any
old ones. Mostly meant as an ex... | Pays no regard to safety. Starts the new app if necessary, and kills any
old ones. Mostly meant as an example of the simplest working bounce method,
but might be tolerable for some services.
:param new_config: The configuration dictionary representing the desired new app.
:param new_app_running: Whethe... | brutal_bounce | python | Yelp/paasta | paasta_tools/bounce_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/bounce_lib.py | Apache-2.0 |
def upthendown_bounce(
new_config: BounceMethodConfigDict,
new_app_running: bool,
happy_new_tasks: Collection,
old_non_draining_tasks: Sequence,
margin_factor=1.0,
) -> BounceMethodResult:
"""Starts a new app if necessary; only kills old apps once all the requested tasks for the new version are ... | Starts a new app if necessary; only kills old apps once all the requested tasks for the new version are running.
See the docstring for brutal_bounce() for parameters and return value.
| upthendown_bounce | python | Yelp/paasta | paasta_tools/bounce_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/bounce_lib.py | Apache-2.0 |
def crossover_bounce(
new_config: BounceMethodConfigDict,
new_app_running: bool,
happy_new_tasks: Collection,
old_non_draining_tasks: Sequence,
margin_factor=1.0,
) -> BounceMethodResult:
"""Starts a new app if necessary; slowly kills old apps as instances of the new app become happy.
See t... | Starts a new app if necessary; slowly kills old apps as instances of the new app become happy.
See the docstring for brutal_bounce() for parameters and return value.
| crossover_bounce | python | Yelp/paasta | paasta_tools/bounce_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/bounce_lib.py | Apache-2.0 |
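The crossover strategy amounts to simple arithmetic: as happy new tasks appear, kill just enough old tasks to keep total capacity at the requested instance count. An illustrative sketch of that arithmetic — not paasta's actual implementation; the function and parameter names are invented:

```python
def crossover_tasks_to_kill(happy_new_tasks, old_tasks, desired_instances):
    """Return the old tasks to kill so that happy-new plus remaining-old
    capacity stays at desired_instances (never killing below capacity)."""
    excess = len(happy_new_tasks) + len(old_tasks) - desired_instances
    return old_tasks[:max(0, excess)]
```

With zero happy new tasks nothing is killed, which is what makes the bounce safe: old capacity only drains as new capacity proves itself healthy.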
def downthenup_bounce(
new_config: BounceMethodConfigDict,
new_app_running: bool,
happy_new_tasks: Collection,
old_non_draining_tasks: Sequence,
margin_factor=1.0,
) -> BounceMethodResult:
"""Stops any old apps and waits for them to die before starting a new one.
See the docstring for bruta... | Stops any old apps and waits for them to die before starting a new one.
See the docstring for brutal_bounce() for parameters and return value.
| downthenup_bounce | python | Yelp/paasta | paasta_tools/bounce_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/bounce_lib.py | Apache-2.0 |
def down_bounce(
new_config: BounceMethodConfigDict,
new_app_running: bool,
happy_new_tasks: Collection,
old_non_draining_tasks: Sequence,
margin_factor=1.0,
) -> BounceMethodResult:
"""
Stops old apps, doesn't start any new apps.
Used for the graceful_app_drain script.
"""
retur... |
Stops old apps, doesn't start any new apps.
Used for the graceful_app_drain script.
| down_bounce | python | Yelp/paasta | paasta_tools/bounce_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/bounce_lib.py | Apache-2.0 |
def broadcast_log_all_services_running_here(line: str, soa_dir=DEFAULT_SOA_DIR) -> None:
"""Log a line of text to paasta logs of all services running on this host.
:param line: text to log
"""
system_paasta_config = load_system_paasta_config()
cluster = system_paasta_config.get_cluster()
servic... | Log a line of text to paasta logs of all services running on this host.
:param line: text to log
| broadcast_log_all_services_running_here | python | Yelp/paasta | paasta_tools/broadcast_log_to_services.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/broadcast_log_to_services.py | Apache-2.0 |
def get_registrations(self) -> List[str]:
"""
To support apollo we always register in
cassandra_<cluster>.main
"""
registrations = self.config_dict.get("registrations", [])
for registration in registrations:
try:
decompose_job_id(registration)
... |
To support apollo we always register in
cassandra_<cluster>.main
| get_registrations | python | Yelp/paasta | paasta_tools/cassandracluster_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/cassandracluster_tools.py | Apache-2.0 |
def load_cassandracluster_instance_config(
service: str,
instance: str,
cluster: str,
load_deployments: bool = True,
soa_dir: str = DEFAULT_SOA_DIR,
) -> CassandraClusterDeploymentConfig:
"""Read a service instance's configuration for CassandraCluster.
If a branch isn't specified for a conf... | Read a service instance's configuration for CassandraCluster.
If a branch isn't specified for a config, the 'branch' key defaults to
paasta-${cluster}.${instance}.
:param service: The service name
:param instance: The instance of the service to retrieve
:param cluster: The cluster to read the conf... | load_cassandracluster_instance_config | python | Yelp/paasta | paasta_tools/cassandracluster_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/cassandracluster_tools.py | Apache-2.0 |
def container_lifetime(
pod: V1Pod,
) -> datetime.timedelta:
"""Return a time duration for how long the pod is alive"""
st = pod.status.start_time
return datetime.datetime.now(st.tzinfo) - st | Return a time duration for how long the pod is alive | container_lifetime | python | Yelp/paasta | paasta_tools/check_flink_services_health.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_flink_services_health.py | Apache-2.0 |
def healthy_flink_containers_cnt(si_pods: Sequence[V1Pod], container_type: str) -> int:
"""Return count of healthy Flink containers with given type"""
return len(
[
pod
for pod in si_pods
if pod.metadata.labels["flink.yelp.com/container-type"] == container_type
... | Return count of healthy Flink containers with given type | healthy_flink_containers_cnt | python | Yelp/paasta | paasta_tools/check_flink_services_health.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_flink_services_health.py | Apache-2.0 |
def check_under_registered_taskmanagers(
instance_config: FlinkDeploymentConfig,
expected_count: int,
cr_name: str,
is_eks: bool,
) -> Tuple[bool, str, str]:
"""Check if not enough taskmanagers have been registered to the jobmanager and
returns both the result of the check in the form of a boole... | Check if not enough taskmanagers have been registered to the jobmanager and
returns both the result of the check in the form of a boolean and a human-readable
text to be used in logging or monitoring events.
| check_under_registered_taskmanagers | python | Yelp/paasta | paasta_tools/check_flink_services_health.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_flink_services_health.py | Apache-2.0 |
def get_cr_name(si_pods: Sequence[V1Pod]) -> str:
"""Returns the flink custom resource name based on the pod name. We are randomly choosing jobmanager pod here.
This change is related to FLINK-3129
"""
jobmanager_pod = [
pod
for pod in si_pods
if pod.metadata.labels["flink.yelp.... | Returns the flink custom resource name based on the pod name. We are randomly choosing jobmanager pod here.
This change is related to FLINK-3129
| get_cr_name | python | Yelp/paasta | paasta_tools/check_flink_services_health.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_flink_services_health.py | Apache-2.0 |
def check_kubernetes_pod_replication(
instance_config: Union[KubernetesDeploymentConfig, EksDeploymentConfig],
pods_by_service_instance: Dict[str, Dict[str, List[V1Pod]]],
replication_checker: KubeSmartstackEnvoyReplicationChecker,
dry_run: bool = False,
) -> Optional[bool]:
"""Checks a service's re... | Checks a service's replication levels based on how the service's replication
should be monitored. (smartstack/envoy or k8s)
:param instance_config: an instance of KubernetesDeploymentConfig or EksDeploymentConfig
:param replication_checker: an instance of KubeSmartstackEnvoyReplicationChecker
| check_kubernetes_pod_replication | python | Yelp/paasta | paasta_tools/check_kubernetes_services_replication.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_kubernetes_services_replication.py | Apache-2.0 |
def read_oom_events_from_scribe(cluster, superregion, num_lines=1000):
"""Read the latest 'num_lines' lines from OOM_EVENTS_STREAM and iterate over them."""
    # paasta configs include a map for cluster -> env that is expected by scribe
log_reader_config = load_system_paasta_config().get_log_reader()
cluster... | Read the latest 'num_lines' lines from OOM_EVENTS_STREAM and iterate over them. | read_oom_events_from_scribe | python | Yelp/paasta | paasta_tools/check_oom_events.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_oom_events.py | Apache-2.0 |
def latest_oom_events(cluster, superregion, interval=60):
"""
:returns: {(service, instance): [OOMEvent, OOMEvent,...] }
if the number of events > 0
"""
start_timestamp = int(time.time()) - interval
res = {}
for e in read_oom_events_from_scribe(cluster, superregion):
if e["... |
:returns: {(service, instance): [OOMEvent, OOMEvent,...] }
if the number of events > 0
| latest_oom_events | python | Yelp/paasta | paasta_tools/check_oom_events.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_oom_events.py | Apache-2.0 |
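The grouping `latest_oom_events` performs — keep only events newer than the window, bucketed by `(service, instance)` — can be sketched without the scribe reader. The plain-dict event shape used here is an assumption for illustration:

```python
import time
from collections import defaultdict

def latest_oom_events(events, interval=60):
    """Group OOM events from the last `interval` seconds by
    (service, instance); keys with no recent events are simply absent."""
    cutoff = int(time.time()) - interval
    res = defaultdict(list)
    for e in events:
        if e["timestamp"] > cutoff:
            res[(e["service"], e["instance"])].append(e)
    return dict(res)
```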
def compose_sensu_status(
instance, oom_events, is_check_enabled, alert_threshold, check_interval
):
"""
:param instance: InstanceConfig
:param oom_events: a list of OOMEvents
    :param is_check_enabled: boolean to indicate whether the check is enabled for the instance
"""
interval_string = f"{che... |
:param instance: InstanceConfig
:param oom_events: a list of OOMEvents
    :param is_check_enabled: boolean to indicate whether the check is enabled for the instance
| compose_sensu_status | python | Yelp/paasta | paasta_tools/check_oom_events.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_oom_events.py | Apache-2.0 |
def send_sensu_event(instance, oom_events, args):
"""
:param instance: InstanceConfig
:param oom_events: a list of OOMEvents
"""
check_name = compose_check_name_for_service_instance(
"oom-killer", instance.service, instance.instance
)
monitoring_overrides = instance.get_monitoring()
... |
:param instance: InstanceConfig
:param oom_events: a list of OOMEvents
| send_sensu_event | python | Yelp/paasta | paasta_tools/check_oom_events.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_oom_events.py | Apache-2.0 |
def set_local_vars_configuration_to_none(obj: Any, visited: Set[int] = None) -> None:
"""
Recursive function to ensure that k8s clientlib objects are pickleable.
Without this, k8s clientlib objects can't be used by multiprocessing functions
as those pickle data to shuttle between processes.
"""
... |
Recursive function to ensure that k8s clientlib objects are pickleable.
Without this, k8s clientlib objects can't be used by multiprocessing functions
as those pickle data to shuttle between processes.
| set_local_vars_configuration_to_none | python | Yelp/paasta | paasta_tools/check_services_replication_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/check_services_replication_tools.py | Apache-2.0 |
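A recursive traversal with a `visited` set of object ids is the standard way to null an attribute across a possibly cyclic object graph. A self-contained sketch of that pattern, generalized to any attribute name rather than the k8s-clientlib-specific one:

```python
def null_attr_recursively(obj, attr_name, visited=None):
    """Set attr_name to None on obj and on every object reachable through
    its attributes, lists, and dicts, guarding against reference cycles."""
    if visited is None:
        visited = set()
    if id(obj) in visited:
        return
    visited.add(id(obj))
    if isinstance(obj, (list, tuple, set)):
        for item in obj:
            null_attr_recursively(item, attr_name, visited)
        return
    if isinstance(obj, dict):
        for item in obj.values():
            null_attr_recursively(item, attr_name, visited)
        return
    if not hasattr(obj, "__dict__"):
        return  # primitives (int, str, None, ...) carry no attributes
    if attr_name in vars(obj):
        setattr(obj, attr_name, None)
    for value in vars(obj).values():
        null_attr_recursively(value, attr_name, visited)
```

Tracking `id(obj)` rather than the objects themselves avoids requiring hashability and is what makes cyclic client objects safe to walk.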
def instance_is_not_bouncing(
instance_config: Union[KubernetesDeploymentConfig, EksDeploymentConfig],
applications: List[Application],
) -> bool:
"""
:param instance_config: a KubernetesDeploymentConfig or an EksDeploymentConfig with the configuration of the instance
:param applications: a list of... |
:param instance_config: a KubernetesDeploymentConfig or an EksDeploymentConfig with the configuration of the instance
:param applications: a list of all deployments or stateful sets on the cluster that match the service
and instance of provided instance_config
| instance_is_not_bouncing | python | Yelp/paasta | paasta_tools/cleanup_kubernetes_jobs.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/cleanup_kubernetes_jobs.py | Apache-2.0 |
def get_applications_to_kill(
applications_dict: Dict[Tuple[str, str], List[Application]],
cluster: str,
valid_services: Set[Tuple[str, str]],
soa_dir: str,
eks: bool = False,
) -> List[Application]:
"""
:param applications_dict: A dictionary with (service, instance) as keys and a list of a... |
:param applications_dict: A dictionary with (service, instance) as keys and a list of applications for each tuple
:param cluster: paasta cluster
:param valid_services: a set with the valid (service, instance) tuples for this cluster
:param soa_dir: The SOA config directory to read from
:return: li... | get_applications_to_kill | python | Yelp/paasta | paasta_tools/cleanup_kubernetes_jobs.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/cleanup_kubernetes_jobs.py | Apache-2.0 |
def cleanup_unused_apps(
soa_dir: str,
cluster: str,
kill_threshold: float = 0.5,
force: bool = False,
eks: bool = False,
) -> None:
"""Clean up old or invalid jobs/apps from kubernetes. Retrieves
both a list of apps currently in kubernetes and a list of valid
app ids in order to determi... | Clean up old or invalid jobs/apps from kubernetes. Retrieves
both a list of apps currently in kubernetes and a list of valid
app ids in order to determine what to kill.
:param soa_dir: The SOA config directory to read from
:param cluster: paasta cluster to clean
:param kill_threshold: The decimal f... | cleanup_unused_apps | python | Yelp/paasta | paasta_tools/cleanup_kubernetes_jobs.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/cleanup_kubernetes_jobs.py | Apache-2.0 |
def write_auto_config_data(
service: str,
extra_info: str,
data: Dict[str, Any],
soa_dir: str = DEFAULT_SOA_DIR,
sub_dir: Optional[str] = None,
comment: Optional[str] = None,
) -> Optional[str]:
"""
Replaces the contents of an automated config file for a service, or creates the file if i... |
Replaces the contents of an automated config file for a service, or creates the file if it does not exist.
Returns the filename of the modified file, or None if no file was written.
| write_auto_config_data | python | Yelp/paasta | paasta_tools/config_utils.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/config_utils.py | Apache-2.0 |
def get_currently_deployed_sha(service, deploy_group, soa_dir=DEFAULT_SOA_DIR):
"""Tries to determine the currently deployed sha for a service and deploy_group,
returns None if there isn't one ready yet"""
try:
deployments = load_v2_deployments_json(service=service, soa_dir=soa_dir)
return d... | Tries to determine the currently deployed sha for a service and deploy_group,
returns None if there isn't one ready yet | get_currently_deployed_sha | python | Yelp/paasta | paasta_tools/deployment_utils.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/deployment_utils.py | Apache-2.0 |
def get_currently_deployed_version(
service, deploy_group, soa_dir=DEFAULT_SOA_DIR
) -> Optional[DeploymentVersion]:
"""Tries to determine the currently deployed version for a service and deploy_group,
returns None if there isn't one ready yet"""
try:
deployments = load_v2_deployments_json(servi... | Tries to determine the currently deployed version for a service and deploy_group,
returns None if there isn't one ready yet | get_currently_deployed_version | python | Yelp/paasta | paasta_tools/deployment_utils.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/deployment_utils.py | Apache-2.0 |
def register_drain_method(
name: str,
) -> Callable[[_RegisterDrainMethod_T], _RegisterDrainMethod_T]:
"""Returns a decorator that registers a DrainMethod subclass at a given name
so get_drain_method/list_drain_methods can find it."""
def outer(drain_method: _RegisterDrainMethod_T) -> _RegisterDrainMet... | Returns a decorator that registers a DrainMethod subclass at a given name
so get_drain_method/list_drain_methods can find it. | register_drain_method | python | Yelp/paasta | paasta_tools/drain_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/drain_lib.py | Apache-2.0 |
async def drain(self, task: DrainTask) -> None:
"""Make a task stop receiving new traffic."""
raise NotImplementedError() | Make a task stop receiving new traffic. | drain | python | Yelp/paasta | paasta_tools/drain_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/drain_lib.py | Apache-2.0 |
async def stop_draining(self, task: DrainTask) -> None:
"""Make a task that has previously been downed start receiving traffic again."""
raise NotImplementedError() | Make a task that has previously been downed start receiving traffic again. | stop_draining | python | Yelp/paasta | paasta_tools/drain_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/drain_lib.py | Apache-2.0 |
async def is_draining(self, task: DrainTask) -> bool:
"""Return whether a task is being drained."""
raise NotImplementedError() | Return whether a task is being drained. | is_draining | python | Yelp/paasta | paasta_tools/drain_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/drain_lib.py | Apache-2.0 |
async def is_safe_to_kill(self, task: DrainTask) -> bool:
"""Return True if a task is drained and ready to be killed, or False if we should wait."""
raise NotImplementedError() | Return True if a task is drained and ready to be killed, or False if we should wait. | is_safe_to_kill | python | Yelp/paasta | paasta_tools/drain_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/drain_lib.py | Apache-2.0 |
def parse_success_codes(self, success_codes_str: str) -> Set[int]:
"""Expand a string like 200-399,407-409,500 to a set containing all the integers in between."""
acceptable_response_codes: Set[int] = set()
for series_str in str(success_codes_str).split(","):
if "-" in series_str:
... | Expand a string like 200-399,407-409,500 to a set containing all the integers in between. | parse_success_codes | python | Yelp/paasta | paasta_tools/drain_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/drain_lib.py | Apache-2.0 |
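The range expansion described here — split on commas, then expand each `a-b` range inclusively — is small enough to show complete. This is a reconstruction in the same spirit as the truncated snippet, not necessarily the exact upstream code:

```python
def parse_success_codes(success_codes_str):
    """Expand a string like "200-399,407-409,500" into the set of all
    integers covered by the listed codes and inclusive ranges."""
    acceptable = set()
    for part in str(success_codes_str).split(","):
        if "-" in part:
            start, end = part.split("-")
            acceptable.update(range(int(start), int(end) + 1))
        else:
            acceptable.add(int(part))
    return acceptable
```

Coercing the input with `str()` first lets a bare integer config value (e.g. `500`) take the same path as a string.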
async def issue_request(self, url_spec: UrlSpec, task: DrainTask) -> None:
"""Issue a request to the URL specified by url_spec regarding the task given."""
format_params = self.get_format_params(task)
urls = [
self.format_url(url_spec["url_format"], param) for param in format_params
... | Issue a request to the URL specified by url_spec regarding the task given. | issue_request | python | Yelp/paasta | paasta_tools/drain_lib.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/drain_lib.py | Apache-2.0 |
def load_eks_service_config_no_cache(
service: str,
instance: str,
cluster: str,
load_deployments: bool = True,
soa_dir: str = DEFAULT_SOA_DIR,
) -> "EksDeploymentConfig":
"""Read a service instance's configuration for EKS.
If a branch isn't specified for a config, the 'branch' key defaults... | Read a service instance's configuration for EKS.
If a branch isn't specified for a config, the 'branch' key defaults to
paasta-${cluster}.${instance}.
:param name: The service name
:param instance: The instance of the service to retrieve
:param cluster: The cluster to read the configuration for
... | load_eks_service_config_no_cache | python | Yelp/paasta | paasta_tools/eks_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/eks_tools.py | Apache-2.0 |
def load_eks_service_config(
service: str,
instance: str,
cluster: str,
load_deployments: bool = True,
soa_dir: str = DEFAULT_SOA_DIR,
) -> "EksDeploymentConfig":
"""Read a service instance's configuration for EKS.
If a branch isn't specified for a config, the 'branch' key defaults to
p... | Read a service instance's configuration for EKS.
If a branch isn't specified for a config, the 'branch' key defaults to
paasta-${cluster}.${instance}.
:param name: The service name
:param instance: The instance of the service to retrieve
:param cluster: The cluster to read the configuration for
... | load_eks_service_config | python | Yelp/paasta | paasta_tools/eks_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/eks_tools.py | Apache-2.0 |
def are_services_up_in_pod(
envoy_host: str,
envoy_admin_port: int,
envoy_admin_endpoint_format: str,
registrations: Collection[str],
pod_ip: str,
pod_port: int,
) -> bool:
"""Returns whether a service in a k8s pod is reachable via envoy
:param envoy_host: The host that this check should... | Returns whether a service in a k8s pod is reachable via envoy
:param envoy_host: The host that this check should contact for replication information.
:param envoy_admin_port: The port that Envoy's admin interface is listening on
:param registrations: The service_name.instance_name of the services
:param... | are_services_up_in_pod | python | Yelp/paasta | paasta_tools/envoy_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/envoy_tools.py | Apache-2.0 |
def are_namespaces_up_in_eds(
envoy_eds_path: str,
namespaces: Collection[str],
pod_ip: str,
pod_port: int,
) -> bool:
"""Returns whether a Pod is registered on Envoy through the EDS
:param envoy_eds_path: path where EDS yaml files are stored
:param namespaces: list of namespaces to check
... | Returns whether a Pod is registered on Envoy through the EDS
:param envoy_eds_path: path where EDS yaml files are stored
:param namespaces: list of namespaces to check
:param pod_ip: IP of the pod
:param pod_port: The port to reach the service in the pod
| are_namespaces_up_in_eds | python | Yelp/paasta | paasta_tools/envoy_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/envoy_tools.py | Apache-2.0 |
def get_casper_endpoints(
clusters_info: Mapping[str, Any]
) -> FrozenSet[Tuple[str, int]]:
"""Filters out and returns casper endpoints from Envoy clusters."""
casper_endpoints: Set[Tuple[str, int]] = set()
for cluster_status in clusters_info["cluster_statuses"]:
if "host_statuses" in cluster_st... | Filters out and returns casper endpoints from Envoy clusters. | get_casper_endpoints | python | Yelp/paasta | paasta_tools/envoy_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/envoy_tools.py | Apache-2.0 |
def get_backends_from_eds(namespace: str, envoy_eds_path: str) -> List[Tuple[str, int]]:
"""Returns a list of backends for a given namespace. Casper backends are also returned (if present).
:param namespace: return backends for this namespace
:param envoy_eds_path: path where EDS yaml files are stored
... | Returns a list of backends for a given namespace. Casper backends are also returned (if present).
:param namespace: return backends for this namespace
:param envoy_eds_path: path where EDS yaml files are stored
    :returns backends: a list of tuples representing the backends for
the re... | get_backends_from_eds | python | Yelp/paasta | paasta_tools/envoy_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/envoy_tools.py | Apache-2.0 |
def get_backends(
service: str,
envoy_host: str,
envoy_admin_port: int,
envoy_admin_endpoint_format: str,
) -> Dict[str, List[Tuple[EnvoyBackend, bool]]]:
"""Fetches JSON from Envoy admin's /clusters endpoint and returns a list of backends.
:param service: If None, return backends for all servi... | Fetches JSON from Envoy admin's /clusters endpoint and returns a list of backends.
:param service: If None, return backends for all services, otherwise only return backends for this particular
service.
:param envoy_host: The host that this check should contact for replication information.
... | get_backends | python | Yelp/paasta | paasta_tools/envoy_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/envoy_tools.py | Apache-2.0 |
def get_multiple_backends(
services: Optional[Sequence[str]],
envoy_host: str,
envoy_admin_port: int,
envoy_admin_endpoint_format: str,
resolve_hostnames: bool = True,
) -> Dict[str, List[Tuple[EnvoyBackend, bool]]]:
"""Fetches JSON from Envoy admin's /clusters endpoint and returns a list of bac... | Fetches JSON from Envoy admin's /clusters endpoint and returns a list of backends.
:param services: If None, return backends for all services, otherwise only return backends for these particular
services.
:param envoy_host: The host that this check should contact for replication informatio... | get_multiple_backends | python | Yelp/paasta | paasta_tools/envoy_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/envoy_tools.py | Apache-2.0 |
def match_backends_and_pods(
backends: Iterable[EnvoyBackend],
pods: Iterable[V1Pod],
) -> List[Tuple[Optional[EnvoyBackend], Optional[V1Pod]]]:
"""Returns tuples of matching (backend, pod) pairs, as matched by IP. Each backend will be listed exactly
once. If a backend does not match with a pod, (backen... | Returns tuples of matching (backend, pod) pairs, as matched by IP. Each backend will be listed exactly
once. If a backend does not match with a pod, (backend, None) will be included.
If a pod's IP does not match with any backends, (None, pod) will be included.
:param backends: An iterable of Envoy backend ... | match_backends_and_pods | python | Yelp/paasta | paasta_tools/envoy_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/envoy_tools.py | Apache-2.0 |
def get_replication_for_all_services(
envoy_host: str,
envoy_admin_port: int,
envoy_admin_endpoint_format: str,
) -> Dict[str, int]:
"""Returns the replication level for all services known to this Envoy
:param envoy_host: The host that this check should contact for replication information.
:par... | Returns the replication level for all services known to this Envoy
:param envoy_host: The host that this check should contact for replication information.
:param envoy_admin_port: The port number that this check should contact for replication information.
:param envoy_admin_endpoint_format: The format of E... | get_replication_for_all_services | python | Yelp/paasta | paasta_tools/envoy_tools.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/envoy_tools.py | Apache-2.0 |
def _yocalhost_rule(port, comment, protocol="tcp"):
"""Return an iptables rule allowing access to a yocalhost port."""
return iptables.Rule(
protocol=protocol,
src="0.0.0.0/0.0.0.0",
dst="169.254.255.254/255.255.255.255",
target="ACCEPT",
matches=(
("comment",... | Return an iptables rule allowing access to a yocalhost port. | _yocalhost_rule | python | Yelp/paasta | paasta_tools/firewall.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/firewall.py | Apache-2.0 |
def services_running_here():
"""Generator helper that yields (service, instance, mac address) of both
mesos tasks.
"""
for container in get_running_mesos_docker_containers():
if container["HostConfig"]["NetworkMode"] != "bridge":
continue
service = container["Labels"].get("p... | Generator helper that yields (service, instance, mac address) of both
mesos tasks.
| services_running_here | python | Yelp/paasta | paasta_tools/firewall.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/firewall.py | Apache-2.0 |
def _ensure_common_chain():
"""The common chain allows access for all services to certain resources."""
iptables.ensure_chain(
"PAASTA-COMMON",
(
# Allow return traffic for incoming connections
iptables.Rule(
protocol="ip",
src="0.0.0.0/0.0... | The common chain allows access for all services to certain resources. | _ensure_common_chain | python | Yelp/paasta | paasta_tools/firewall.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/firewall.py | Apache-2.0 |
def ensure_service_chains(service_groups, soa_dir, synapse_service_dir):
"""Ensure service chains exist and have the right rules.
service_groups is a dict {ServiceGroup: set([mac_address..])}
Returns dictionary {[service chain] => [list of mac addresses]}.
"""
chains = {}
for service, macs in ... | Ensure service chains exist and have the right rules.
service_groups is a dict {ServiceGroup: set([mac_address..])}
Returns dictionary {[service chain] => [list of mac addresses]}.
| ensure_service_chains | python | Yelp/paasta | paasta_tools/firewall.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/firewall.py | Apache-2.0 |
def general_update(soa_dir, synapse_service_dir):
"""Update iptables to match the current PaaSTA state."""
ensure_shared_chains()
service_chains = ensure_service_chains(
active_service_groups(), soa_dir, synapse_service_dir
)
ensure_dispatch_chains(service_chains)
garbage_collect_old_ser... | Update iptables to match the current PaaSTA state. | general_update | python | Yelp/paasta | paasta_tools/firewall.py | https://github.com/Yelp/paasta/blob/master/paasta_tools/firewall.py | Apache-2.0 |